SC13 Research Highlight: Large Graph Processing Without the Overhead

By Dr. Ling Liu and Kisung Lee

November 16, 2013

Many real-world information networks consist of millions or billions of vertices representing heterogeneous entities, and billions or trillions of edges representing heterogeneous relationships among those entities.

Image Source: Max Delbrück Center for Molecular Medicine

For example, the crawled Web graph is estimated to have more than 20 billion pages (vertices) and 160 billion hyperlinks (edges). As of 2012, Facebook's user community exceeded 1 billion users (vertices) with more than 140 billion friendship relationships (edges). The Billion Triples Challenge from the Semantic Web community has put forward large RDF datasets with hundreds of millions of vertices and billions of edges.

As information networks continue to grow in size and variety across many science and engineering domains, graph computations often exceed the processing capacity of conventional hardware, software systems and tools, for several reasons. First, graph data often exhibits high correlation through both direct and indirect edges, and such correlation tends to generate large intermediate results during graph computations. When the intermediate results exceed the available memory, out-of-memory failures are unavoidable. Second, graph datasets are growing in volume, variety and velocity, and the bigger a graph gets, the worse most graph computations perform. One open challenge in this space is how to effectively partition a large graph to enable efficient parallel processing of complex graph operations.

One of the papers to be presented at the ACM/IEEE SC13 conference, titled “Efficient Data Partitioning Model for Heterogeneous Graphs in the Cloud,” proposes a flexible graph partitioning framework called VB-Partitioner. The work is co-authored by doctoral student Kisung Lee and Prof. Ling Liu of the School of Computer Science at the Georgia Institute of Technology. To make parallel graph computations highly efficient, an important design goal of VB-Partitioner is to devise graph partitioning techniques that minimize inter-partition communication overhead and maximize intra-partition computation (local processing).

Concretely, the first prototype of VB-Partitioner focuses on efficient processing of graph queries, namely finding all subgraphs that match a given subgraph pattern. VB-Partitioner partitions a large graph in three steps (a sketch of the first two steps follows the list below).

  • First, it constructs three types of Vertex Blocks (in-VBs, out-VBs and bi-VBs) to capture general graph-processing locality. Each vertex block has an anchor vertex.
  • Second, it constructs three types of k-hop Extended Vertex Blocks (in-EVBs, out-EVBs and bi-EVBs) to distribute vertex blocks with better query locality. Each EVB has one anchor vertex and achieves query locality through controlled edge replication. The setting of k is determined by the radius of frequent query graphs, so that the most frequently requested queries can be processed in parallel at all partitions with minimal inter-partition communication overhead.
  • Third, it partitions a graph by grouping its vertex blocks and EVBs to maximize parallelism in graph processing while ensuring load balance, controlled edge replication and fast grouping.
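
To make the first two steps concrete, here is a minimal Python sketch of vertex-block construction and k-hop extension over a directed edge list. The edge-list representation and the function names are illustrative assumptions for exposition, not the paper's implementation:

    from collections import defaultdict

    def build_vertex_blocks(edges):
        # Step 1: one vertex block per anchor vertex.
        out_vb = defaultdict(set)   # out-VB: edges leaving the anchor
        in_vb = defaultdict(set)    # in-VB: edges entering the anchor
        for src, dst in edges:
            out_vb[src].add((src, dst))
            in_vb[dst].add((src, dst))
        # bi-VB: all edges touching the anchor, in either direction
        bi_vb = {v: out_vb[v] | in_vb[v] for v in set(out_vb) | set(in_vb)}
        return out_vb, in_vb, bi_vb

    def extend_out_vb(out_vb, anchor, k):
        # Step 2: k-hop out-EVB via controlled edge replication, so any
        # query of radius <= k anchored at this vertex can run locally.
        evb = set(out_vb[anchor])
        frontier = {dst for _, dst in out_vb[anchor]}
        for _ in range(k - 1):
            next_frontier = set()
            for v in frontier:
                evb |= out_vb[v]
                next_frontier |= {dst for _, dst in out_vb[v]}
            frontier = next_frontier
        return evb

For example, with edges [(1, 2), (2, 3), (3, 1)], the 2-hop out-EVB anchored at vertex 1 contains (1, 2) and (2, 3); replicating (2, 3) into vertex 1's partition is what lets a radius-2 query at vertex 1 avoid inter-partition communication.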

Four techniques are considered and compared for grouping and placing VBs and EVBs into partitions: random grouping, hash-based grouping, min-cut-based grouping and high-degree vertex-based grouping. As an integral part of VB-Partitioner, a data-partition-aware query partitioning model is also developed to handle queries whose radius is larger than k. The experimental results reported in the paper demonstrate the effectiveness of VB-Partitioner in terms of query processing efficiency, data loading time and partition distribution balance.
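
To illustrate the simplest of the four strategies, the sketch below assigns each anchor's EVB to a partition by hashing the anchor ID. The function is an invented stand-in, not the paper's code; hashing is cheap and balances load well, but unlike min-cut grouping it ignores the edge structure between blocks:

    def hash_group(evbs, num_partitions):
        # evbs maps each anchor vertex to its extended vertex block.
        partitions = [set() for _ in range(num_partitions)]
        for anchor, evb in evbs.items():
            partitions[hash(anchor) % num_partitions] |= evb
        return partitions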

Graph computations can be broadly classified into two categories: graph queries, which find matching subgraphs of a given pattern, and iterative graph algorithms, which find clusters, orderings, paths or correlation patterns. The former targets subgraph matching problems over large static graphs; the latter targets graph inference kernels that traverse the graph by iteratively updating the weights of vertices or edges, such as PageRank, shortest-path algorithms, spanning-tree algorithms, topological sort, and so forth.
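
As a reference point for the second category, here is a plain PageRank sketch (a textbook kernel, not code from the paper): every iteration recomputes all vertex weights from the weights of their neighbors, which is exactly the traversal pattern that stresses inter-partition communication:

    def pagerank(adj, num_iters=20, d=0.85):
        # adj maps each vertex to its list of out-neighbors.
        vertices = set(adj) | {v for nbrs in adj.values() for v in nbrs}
        rank = {v: 1.0 / len(vertices) for v in vertices}
        for _ in range(num_iters):
            nxt = {v: (1.0 - d) / len(vertices) for v in vertices}
            for src, nbrs in adj.items():
                if nbrs:
                    share = d * rank[src] / len(nbrs)
                    for dst in nbrs:
                        nxt[dst] += share   # push weight along each edge
            rank = nxt   # dangling-vertex mass is dropped for brevity
        return rank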

Although the first generation of VB-Partitioner is tailored primarily for efficient parallel processing of graph queries, ongoing work explores its feasibility and effectiveness for iterative graph algorithms. For example, to minimize inter-partition communication and maximize parallelism in graph computation, it is crucial to optimize shared memory by minimizing the parallel overhead of synchronization barriers. It is equally important to optimize distributed memory by bounding message buffer sizes, bundling messages, and overlapping communication with computation to amortize the overhead of barriers.
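
A minimal sketch of the message-bundling idea, assuming a send(partition, batch) transport supplied by the runtime; both the class and the transport are illustrative assumptions, not part of VB-Partitioner:

    from collections import defaultdict

    class MessageBundler:
        def __init__(self, send, max_batch=1024):
            self.send = send            # transport: send(partition, batch)
            self.max_batch = max_batch  # bound on per-partition buffering
            self.buffers = defaultdict(list)

        def post(self, partition, msg):
            buf = self.buffers[partition]
            buf.append(msg)
            if len(buf) >= self.max_batch:
                self.send(partition, buf)   # one bundled transfer
                self.buffers[partition] = []

        def flush(self):
            # Drain remaining messages, e.g. at a synchronization barrier.
            for partition, buf in self.buffers.items():
                if buf:
                    self.send(partition, buf)
            self.buffers.clear()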

Image Source: Giot et al., “A Protein Interaction Map of Drosophila melanogaster,” Science 302, 1722-1736, 2003.

In addition to exploiting parallel computation opportunities through graph partitioning with multithreading, multiple cores and multiple workers, one can also combine other performance optimization techniques to scale large graph analytics (see the storage sketch after the list). Example techniques include:

  • Compression to provide compact storage and in-memory processing,
  • Data placements on disk and in memory to balance computation with storage, and to maximize sequential access and minimize random access,
  • Indexing at vertex and/or edge level to utilize sequential access and minimize unnecessary random access,
  • Caching at vertex, edge or query level to gain performance for repeated access.
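
As one example of compact storage with sequential access, the sketch below packs an edge list into compressed sparse row (CSR) form. CSR is a standard representation, not something specific to the paper; each vertex's neighbors end up contiguous in memory, so scanning them is a sequential read:

    def to_csr(edges, num_vertices):
        counts = [0] * num_vertices
        for src, _ in edges:
            counts[src] += 1
        offsets = [0] * (num_vertices + 1)   # prefix sums of out-degrees
        for v in range(num_vertices):
            offsets[v + 1] = offsets[v] + counts[v]
        targets = [0] * len(edges)
        fill = offsets[:-1]                  # next free slot per source vertex
        for src, dst in edges:
            targets[fill[src]] = dst
            fill[src] += 1
        return offsets, targets

    # The neighbors of vertex v are targets[offsets[v]:offsets[v+1]].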

Please come hear more on Tuesday, November 19, 2013, 10:30AM – 11:00AM (Location: Room 205/207):

http://sc13.supercomputing.org/schedule/event_detail.php?evid=pap708

About the Authors

Ling Liu is a Professor in the School of Computer Science at the Georgia Institute of Technology. She directs the research programs of the Distributed Data-intensive Systems Lab (DiSL), examining various aspects of large-scale data-intensive systems. Prof. Liu is an internationally recognized expert in the areas of cloud computing, distributed computing, big data technologies, database systems and service-oriented computing. She is a recipient of the 2012 IEEE Computer Society Technical Achievement Award. Prof. Liu is currently the Editor-in-Chief of IEEE Transactions on Services Computing and serves on the editorial boards of a half-dozen international journals, including the Journal of Parallel and Distributed Computing (JPDC), ACM Transactions on Internet Technology (TOIT) and ACM Transactions on the Web (TWEB). Dr. Liu’s current research is primarily sponsored by NSF, IBM and Intel.

 

Kisung Lee is a Ph.D. student in the School of Computer Science at Georgia Tech. He received his BS and MS degrees in computer science from KAIST in 2005 and 2007, respectively, and worked as a researcher at ETRI from 2007 to 2010. His research focuses on distributed and parallel processing of big data in the cloud, mobile computing and social network analysis.
