Brief Announcement: Speedups for Parallel Graph Triconnectivity


James Edwards and Uzi Vishkin
University of Maryland

Introduction

Motivation:
- Begin with a theory of parallel algorithms (PRAM).
- Develop an architecture (XMT) based on the theory.
- Validate the theory using the architecture.
- Validate the architecture using the theory.

To validate XMT, we need to move beyond simple benchmark kernels; this is in line with the history of performance benchmarking (e.g., SPEC). Triconnectivity is the most complex algorithm that has been tested on XMT. Only one serial implementation is publicly available, and there is no prior parallel implementation. Prior work of similar complexity on XMT includes biconnectivity [EV12-PMAM/PPoPP] and maximum flow [CV11-SPAA].

List, Tree, and Graph Algorithms

[Diagram: a hierarchy of list, tree, and graph algorithms, building from primitives (prefix-sums, list ranking, 2-ruling set, deterministic coin tossing) through tree Euler tour, tree contraction, Euler tours, lowest common ancestors, ear decomposition search, graph connectivity, strong orientation, biconnectivity, minimum spanning forest, centroid decomposition, st-numbering, and k-edge/vertex connectivity, up to triconnectivity, planarity testing, and their advanced variants.]

Triconnected Components

[Figure: an input graph G on vertices 1-6 and its triconnected components.]

Triconnectivity Algorithm: High-level structure

Key insight, shared by the serial and parallel algorithms: separation pairs lie on cycles in the input graph.
- Serial [HT73]: use depth-first search.
- Parallel [RV88, MR92]: use an ear decomposition.

[Figure: an ear decomposition of G into ears E1, E2, E3.]
A sketch of the ear-decomposition idea appears below.
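For concreteness, here is a hedged serial sketch of the ear-decomposition idea; the parallel algorithm of [RV88, MR92] computes it very differently, and all identifiers and the 3-prism example graph are illustrative. Given a spanning tree of a 2-edge-connected graph, each non-tree edge closes one cycle; processing non-tree edges by increasing depth of lca(u, v) and letting each claim the still-unclaimed tree edges on its cycle yields ears E1 (a cycle), E2, E3, ... (paths whose endpoints lie on earlier ears).

```c
/* Hedged serial sketch of an ear decomposition (illustrative, not the
 * [RV88, MR92] parallel algorithm). Assumes a 2-edge-connected input. */
#include <stdio.h>
#include <string.h>

#define MAXN 64
#define MAXM 128

static int n, m, eu[MAXM], ev[MAXM];          /* edge list                */
static int head[MAXN], nxt[2*MAXM], adj_to[2*MAXM], adj_eid[2*MAXM], deg;
static int parent[MAXN], depth[MAXN], par_edge[MAXN];  /* BFS tree        */
static int ear_of[MAXM];                      /* ear index of each edge   */

static void add_arc(int u, int v, int eid) {
    adj_to[deg] = v; adj_eid[deg] = eid; nxt[deg] = head[u]; head[u] = deg++;
}

static void bfs(int root) {
    int queue[MAXN], qh = 0, qt = 0, u, a;
    memset(parent, -1, sizeof parent);
    parent[root] = root; depth[root] = 0; queue[qt++] = root;
    while (qh < qt) {
        u = queue[qh++];
        for (a = head[u]; a != -1; a = nxt[a]) {
            int v = adj_to[a];
            if (parent[v] == -1) {
                parent[v] = u; depth[v] = depth[u] + 1;
                par_edge[v] = adj_eid[a]; queue[qt++] = v;
            }
        }
    }
}

static int lca_depth(int u, int v) {          /* depth of lca in BFS tree */
    while (u != v) {
        if (depth[u] < depth[v]) { int t = u; u = v; v = t; }
        u = parent[u];
    }
    return depth[u];
}

int main(void) {
    /* example: the 3-prism (two triangles joined by three edges) */
    int prism[9][2] = {{0,1},{1,2},{2,0},{3,4},{4,5},{5,3},{0,3},{1,4},{2,5}};
    int order[MAXM], i, j, k = 0, nears = 0;
    n = 6; m = 9;
    memset(head, -1, sizeof head);
    for (i = 0; i < m; i++) {
        eu[i] = prism[i][0]; ev[i] = prism[i][1];
        add_arc(eu[i], ev[i], i); add_arc(ev[i], eu[i], i);
    }
    bfs(0);
    memset(ear_of, -1, sizeof ear_of);

    /* collect the non-tree edges, then sort them by lca depth */
    for (i = 0; i < m; i++) {
        int is_tree = 0, v;
        for (v = 0; v < n; v++)
            if (parent[v] != v && par_edge[v] == i) is_tree = 1;
        if (!is_tree) order[k++] = i;
    }
    for (i = 0; i < k; i++)                   /* selection sort, ascending */
        for (j = i + 1; j < k; j++)
            if (lca_depth(eu[order[j]], ev[order[j]]) <
                lca_depth(eu[order[i]], ev[order[i]])) {
                int t = order[i]; order[i] = order[j]; order[j] = t;
            }

    /* each non-tree edge opens an ear and claims the still-unclaimed
       tree edges on its fundamental cycle */
    for (i = 0; i < k; i++) {
        int e = order[i], d = lca_depth(eu[e], ev[e]), side;
        ear_of[e] = nears;
        for (side = 0; side < 2; side++) {
            int x = side ? ev[e] : eu[e];
            while (depth[x] > d) {
                if (ear_of[par_edge[x]] == -1) ear_of[par_edge[x]] = nears;
                x = parent[x];
            }
        }
        nears++;
    }
    for (i = 0; i < m; i++)
        printf("edge (%d,%d) is in ear E%d\n", eu[i], ev[i], ear_of[i] + 1);
    return 0;
}
```

On the prism this produces E1 as a triangle-plus-rung cycle and three further ears whose endpoints land on earlier ears, which is exactly the structure the parallel algorithm exploits.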

Triconnectivity Algorithm: Low-level structure

The bulk of the algorithm lies in general subroutines such as graph connectivity (a sketch of the classic pattern follows). Implementation of the triconnectivity algorithm was greatly assisted by reuse of a library developed during earlier work on biconnectivity [EV12-PMAM]. Using this library, a majority of students in a graduate course on parallel algorithms successfully completed a programming assignment on biconnectivity in 2-3 weeks.
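As an illustration of the kind of connectivity subroutine such a library provides, the sketch below serially simulates the classic PRAM hook-and-jump pattern (in the spirit of Shiloach and Vishkin): on a PRAM or on XMT, each of the two inner loops is a flat parallel step, and O(log n) rounds suffice. The code and names are a minimal sketch, not the library's actual interface.

```c
/* Hedged sketch: the hook-and-jump pattern behind PRAM connectivity,
 * simulated serially. Each round, every edge tries to hook the larger
 * root onto the smaller one, then every vertex shortcuts its pointer
 * toward the root. At the fixpoint, p[] is flat and constant per
 * connected component. */
#include <stdio.h>

#define N 7
#define M 6

int main(void) {
    /* two components: {0,1,2,3} and {4,5,6} */
    int eu[M] = {0, 1, 2, 4, 5, 0};
    int ev[M] = {1, 2, 3, 5, 6, 2};
    int p[N], i, changed = 1;

    for (i = 0; i < N; i++) p[i] = i;   /* every vertex is its own root */

    while (changed) {
        changed = 0;
        /* hooking: a root adopts the smaller root across each edge */
        for (i = 0; i < M; i++) {
            int ru = p[eu[i]], rv = p[ev[i]];
            if (ru < rv && rv == p[rv]) { p[rv] = ru; changed = 1; }
            else if (rv < ru && ru == p[ru]) { p[ru] = rv; changed = 1; }
        }
        /* pointer jumping: shortcut paths toward the root */
        for (i = 0; i < N; i++)
            if (p[i] != p[p[i]]) { p[i] = p[p[i]]; changed = 1; }
    }
    for (i = 0; i < N; i++)
        printf("vertex %d -> component %d\n", i, p[i]);
    return 0;
}
```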

The XMT Platform

The Explicit Multi-Threading (XMT) architecture was developed at the University of Maryland with the following goals in mind:
- Good performance on parallel algorithms of any granularity
- Support for regular or irregular memory access
- Efficient execution of code derived from PRAM algorithms

A 64-processor FPGA hardware prototype and a software toolchain (compiler and simulator) exist; the latter is freely available for download.

Graph Families

Data set        Vertices (n)   Edges (m)   Sep. pairs (s)
Random-10K      10K            3000K       0
Random-20K      20K            5000K       0
Planar3-1000K   1000K          3000K       0
Ladder-20K      20K            30K         10K
Ladder-100K     100K           150K        50K
Ladder-1000K    1000K          1500K       500K

- Random graph: edges are added at random between unique pairs of vertices.
- Planar3 graph: vertices are added in layers of three; each vertex in a layer is connected to the other vertices in the layer and to two vertices of the preceding layer.
- Ladder: similar to Planar3, but with two vertices per layer. A generator sketch for this family follows.
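Below is a minimal sketch of how the Ladder family can be generated, assuming the natural reading of the description above: one "rung" edge inside each layer plus "rails" to the preceding layer. With L layers this yields n = 2L vertices and m = 3L - 2 edges, consistent with the table (e.g., L = 10K gives Ladder-20K). The emit() sink is a placeholder for whatever output format is used.

```c
/* Hedged sketch of a Ladder-family generator: L layers of two vertices,
 * one rung per layer, and two rails back to the preceding layer.
 * n = 2L, m = 3L - 2. */
#include <stdio.h>

static void emit(int u, int v) { printf("%d %d\n", u, v); }

static void ladder(int L) {
    int i;
    for (i = 0; i < L; i++) {
        emit(2 * i, 2 * i + 1);               /* rung inside layer i      */
        if (i > 0) {
            emit(2 * (i - 1), 2 * i);         /* left rail from layer i-1 */
            emit(2 * (i - 1) + 1, 2 * i + 1); /* right rail               */
        }
    }
}

int main(void) { ladder(5); return 0; }   /* tiny instance: n = 10, m = 13 */
```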

Speedup

[Figure: bar chart of normalized runtime for Serial (Core i7), 64 TCUs (FPGA), and 1024 TCUs (simulated) across the six data sets; lower is better.]

Analytic vs. Experimental Runtime

T(n, m, s) = (2.38n + 0.238m + 4.75s) log2(n)

[Figure: simulated vs. predicted runtime, in billions of cycles, across the six data sets.]
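A small helper for evaluating the fitted model on the six data sets. The constants 2.38, 0.238, and 4.75 are taken from the slide; the absolute units follow that fit, so the printed values are illustrative rather than a re-derivation of the chart.

```c
/* Hedged sketch: evaluating the slide's fitted runtime model
 * T(n, m, s) = (2.38n + 0.238m + 4.75s) * log2(n)
 * on the six data sets from the Graph Families table. Compile with -lm. */
#include <stdio.h>
#include <math.h>

static double T(double n, double m, double s) {
    return (2.38 * n + 0.238 * m + 4.75 * s) * log2(n);
}

int main(void) {
    const char *name[6] = {"Random-10K", "Random-20K", "Planar3-1000K",
                           "Ladder-20K", "Ladder-100K", "Ladder-1000K"};
    double n[6] = {1e4, 2e4, 1e6, 2e4, 1e5, 1e6};
    double m[6] = {3e6, 5e6, 3e6, 3e4, 1.5e5, 1.5e6};
    double s[6] = {0,   0,   0,   1e4, 5e4,   5e5};
    int i;
    for (i = 0; i < 6; i++)
        printf("%-14s T = %.3g\n", name[i], T(n[i], m[i], s[i]));
    return 0;
}
```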

Conclusion

The speedups presented here (up to 129x), in conjunction with prior results for biconnectivity (up to 33x) and max-flow (up to 108x), demonstrate that the advantage of XMT is not limited to small kernels. Biconnectivity was an exceptional challenge due to the compactness of the serial algorithm. This work completes the capstone of the proof-of-concept of PRAM algorithms on XMT. With this work, we now have the foundation in place to advance to work on applications.

References

[CV11-SPAA] G. Caragea and U. Vishkin. Better Speedups for Parallel Max-Flow. Brief Announcement, SPAA 2011.

[EV12-PMAM] J. Edwards and U. Vishkin. Better Speedups Using Simpler Parallel Programming for Graph Connectivity and Biconnectivity. PMAM, 2012.
[EV12-SPAA] J. Edwards and U. Vishkin. Brief Announcement: Speedups for Parallel Graph Triconnectivity. SPAA, 2012.
[HT73] J. E. Hopcroft and R. E. Tarjan. Dividing a graph into triconnected components. SIAM J. Computing, 2(3):135-158, 1973.
[MR92] G. L. Miller and V. Ramachandran. A new graph triconnectivity algorithm and its parallelization. Combinatorica, 12(1):53-76, 1992.

[KTCBV11] F. Keceli, A. Tzannes, G. Caragea, R. Barua, and U. Vishkin. Toolchain for programming, simulating and studying the XMT many-core architecture. In Proc. 16th Int. Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS), in conjunction with IPDPS, Anchorage, Alaska, May 2011.
[RV88] V. Ramachandran and U. Vishkin. Efficient parallel triconnectivity in logarithmic time. In Proc. AWOC, pages 33-42, 1988.
[TV85] R. E. Tarjan and U. Vishkin. An Efficient Parallel Biconnectivity Algorithm. SIAM J. Computing, 14(4):862-874, 1985.
[WV08] X. Wen and U. Vishkin. FPGA-Based Prototype of a PRAM-on-Chip Processor. In Proc. 5th Conference on Computing Frontiers (CF '08), pages 55-66, New York, NY, USA, 2008. ACM.

Backup Slides

The Problem with the PRAM

PRAM algorithms are not a good match for current hardware:
- Fine-grained parallelism means overheads: many threads must be managed, and synchronization and communication are expensive. Clustering reduces granularity, but at the cost of load balancing.
- Irregular memory accesses mean poor locality: the cache is not used efficiently, and performance becomes sensitive to memory latency.

The XMT Platform

Main feature of XMT: using similar hardware resources (e.g., silicon area, power consumption) to existing CPUs and GPUs, provide a platform that looks to the programmer as close to a PRAM as possible.
- Instead of ~8 heavy processor cores, provide ~1,024 light cores for parallel code and one heavy core for serial code.
- Devote on-chip bandwidth to a high-speed interconnection network rather than to maintaining coherence between private caches.

For PRAM algorithms such as the one presented, the number of hardware threads matters more than the processing power per thread, because these algorithms happen to perform more work than their serial counterparts. This cost is overridden by sufficient parallelism in hardware. Balance between the tight synchrony of the PRAM and hardware constraints (such as locality) is obtained through support for fine-grained multithreaded code, where a thread can advance at its own speed between (a form of) synchronization barriers, as in the sketch below.
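The canonical XMT programming example is array compaction in XMTC, the platform's C extension. The snippet below is an illustrative, toolchain-unverified sketch of that style: spawn(low, high) launches one virtual thread per index, $ names the current thread's index, and the prefix-sum statement ps(e, x) atomically adds e to x while returning the old value of x in e.

```c
/* Illustrative XMTC-style sketch (not toolchain-verified): compact the
 * nonzero elements of A[0..n-1] into B using the prefix-sum primitive.
 * Each virtual thread advances at its own pace; (a form of) barrier is
 * implied at the end of the spawn block. */
int A[N], B[N];      /* N, n, and the contents of A are assumed given */
int x = 0;           /* shared counter for the prefix sum             */

spawn(0, n - 1) {
    int e = 1;                /* thread-local increment                */
    if (A[$] != 0) {
        ps(e, x);             /* e receives a unique slot index in B   */
        B[e] = A[$];
    }
}
```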

Evaluation: Graph Families

Maximal planar graph (the Planar3 family above), built layer by layer: the first layer has three vertices and three edges; each additional layer adds three vertices and nine edges. A generator sketch follows.
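A hedged generator sketch for this family. The slide does not say exactly how each new vertex attaches to the preceding layer, so the antiprism-like wiring below (vertex j of a layer joins vertices j and (j+1) mod 3 of the layer before) is an assumption chosen to match the stated counts: n = 3L and m = 3 + 9(L-1) = 3n - 6, the edge count of a maximal planar graph.

```c
/* Hedged sketch of the Planar3 family: a triangle per layer, plus six
 * edges from each new layer into the previous one (assumed wiring).
 * n = 3L, m = 9L - 6 = 3n - 6. */
#include <stdio.h>

static void emit(int u, int v) { printf("%d %d\n", u, v); }

static void planar3(int L) {
    int i, j;
    for (i = 0; i < L; i++) {
        int base = 3 * i;
        for (j = 0; j < 3; j++) {
            emit(base + j, base + (j + 1) % 3);   /* triangle in layer i */
            if (i > 0) {                          /* two edges back      */
                emit(base + j, base - 3 + j);
                emit(base + j, base - 3 + (j + 1) % 3);
            }
        }
    }
}

int main(void) { planar3(4); return 0; }  /* tiny instance: n = 12, m = 30 */
```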
