Grappa

Mass-market computer systems are designed to exploit spatial locality via caches and local memory to achieve high efficiency. Unfortunately, when processing graphs, spatial locality is often difficult, if not impossible, to express.

As system size grows, edges in a graph distributed across its nodes’ memories become increasingly likely to join vertices that are far apart. The rate of traversal slows. Consequently, even though parallelism and hardware resources increase, performance degrades.
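To see why distance grows with scale, consider the simplifying assumption (not from the text above, just an illustration) that vertices are placed uniformly at random across P node memories. Then the fraction of edges whose endpoints land on different nodes is (P - 1)/P, which approaches 1 as the cluster grows:

```python
# Assumes uniform-random vertex placement across P nodes; illustrative only.
def remote_edge_fraction(P):
    # An edge's second endpoint lands on a different node than the first
    # with probability (P - 1) / P.
    return (P - 1) / P

for P in (2, 16, 128):
    print(f"{P} nodes: {remote_edge_fraction(P):.1%} of edges are remote")
```

At 128 nodes, more than 99% of edges cross the network under this model, so traversal speed becomes bounded by network latency rather than local memory bandwidth.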

Grappa is a latency-tolerant runtime for mass-market clusters that mitigates this degradation, allowing graph processing to scale up even in the presence of diminishing locality and increasing latency. Grappa works by keeping many operations in flight at once: rather than stalling on each high-latency remote access, it overlaps communication with computation so that aggregate throughput remains high.

The application developer need only express parallelism, not decide when and how to exploit it.
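The latency-tolerance idea, that many outstanding remote operations hide individual latencies, can be sketched with a small asyncio simulation (this is not Grappa code; the latency value and the vertex workload are invented for illustration):

```python
import asyncio
import time

LATENCY = 0.05  # pretend remote-access latency in seconds (illustrative)

async def remote_read(vertex):
    # Model a high-latency access to a vertex stored on another node.
    await asyncio.sleep(LATENCY)
    return vertex

async def traverse(vertices):
    # Issue every remote access concurrently; with many requests in
    # flight, the total wait is close to one latency, not one per vertex.
    return await asyncio.gather(*(remote_read(v) for v in vertices))

start = time.perf_counter()
results = asyncio.run(traverse(range(100)))
elapsed = time.perf_counter() - start
# Issued serially this would take ~5 s (100 x 0.05 s); issued
# concurrently it finishes in roughly one latency.
```

Grappa applies the same principle on a cluster with lightweight threads and aggregated messages, but the scheduling decisions stay inside the runtime rather than in application code.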

More info on Grappa

Current Contributors

Jacob Nelson

Brandon Myers

Brandon Holt

Luis Ceze

Mark Oskin

Preston Briggs

Simon Kahan (PNNL)

Publications

Grappa: A Latency-Tolerant Runtime for Large-Scale Irregular Applications. Jacob Nelson, Brandon Holt, Brandon Myers, Preston Briggs, Luis Ceze, Simon Kahan, and Mark Oskin. Tech report, February 2014

Flat Combining Synchronized Global Data Structures. Brandon Holt, Jacob Nelson, Brandon Myers, Preston Briggs, Luis Ceze, Simon Kahan, and Mark Oskin. International Conference on PGAS Programming Models (PGAS), October 2013

Compiled Plans for In-Memory Path-Counting Queries. Brandon Myers, Jeremy Hyrkas, Daniel Halperin, and Bill Howe. International Workshop on In-Memory Data Management and Analytics (IMDM w/ VLDB), August 2013

Do We Need a Crystal Ball for Task Migration? Brandon Myers and Brandon Holt. USENIX Workshop on Hot Topics in Parallelism (HOTPAR), June 2012

Crunching Large Graphs with Commodity Processors. Jacob Nelson, Brandon Myers, A. H. Hunter, Preston Briggs, Luis Ceze, Carl Ebeling, Dan Grossman, Simon Kahan, and Mark Oskin. USENIX Workshop on Hot Topics in Parallelism (HOTPAR), June 2011