
February 18, 2014

A Data Locality Cure for Irregular Applications

Carlo del Mundo

Data locality plays a critical role in the energy efficiency and performance of parallel programs. For data-parallel algorithms where locality is abundant, it is relatively straightforward to map and optimize for architectures with user-programmable local caches. However, for irregular algorithms such as breadth-first search (BFS), exploiting locality is a non-trivial task.

Guang Gao, a professor in the Department of Electrical and Computer Engineering at the University of Delaware, works on mapping and exploiting locality in irregular algorithms such as BFS. Gao notes, “there are only a few studies of the energy efficiency issue of the BFS problem, … and more work is needed to analyze energy efficiency of BFS on architectures with local storage.”

In BFS, data locality can be exploited in two forms: intra-loop and inter-loop locality. Intra-loop locality refers to reuse within a loop body across adjacent iterations, while inter-loop locality refers to reuse between iterations of different loops. Exploiting both forms is relatively simple provided the programmer uses a model that supports fine-grain parallelism.
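To make the two forms concrete, consider a minimal level-synchronous BFS over a graph in CSR form (a generic sketch; the variable names are ours, not taken from the study). Consecutive iterations of the inner neighbor loop walk adjacent entries of the edge array (intra-loop locality), while the frontier produced at one level is re-read by the loops of the next level (inter-loop locality).

```cpp
#include <vector>

// Level-synchronous BFS over a CSR graph (row_ptr / col_idx).
// Intra-loop locality: adjacent iterations of the neighbor loop touch
// consecutive entries of col_idx.
// Inter-loop locality: vertices pushed into `next` here are re-read by the
// neighbor loops of the following level.
void bfs_levels(const std::vector<int>& row_ptr,
                const std::vector<int>& col_idx,
                int source,
                std::vector<int>& level) {
    std::vector<int> frontier{source}, next;
    level.assign(row_ptr.size() - 1, -1);
    level[source] = 0;
    for (int depth = 1; !frontier.empty(); ++depth) {
        next.clear();
        for (int u : frontier) {                                  // one iteration per frontier vertex
            for (int e = row_ptr[u]; e < row_ptr[u + 1]; ++e) {   // intra-loop: adjacent edges
                int v = col_idx[e];
                if (level[v] == -1) {
                    level[v] = depth;
                    next.push_back(v);                            // produced here, consumed next level
                }
            }
        }
        frontier.swap(next);                                      // inter-loop: reuse across levels
    }
}
```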

Typical approaches to irregular algorithms do not perform well under traditional coarse-grain execution models like OpenMP.  Using BFS as their motivating example, Gao’s team exploits data locality using Codelet, a fine-grain data-flow execution model.
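For reference, a coarse-grain BFS level under OpenMP is typically written as a single bulk-synchronous parallel loop with an implicit barrier separating levels. The sketch below illustrates that style under our own assumptions; it is not the exact baseline code used in the study.

```cpp
#include <atomic>
#include <cstddef>
#include <omp.h>
#include <vector>

// Coarse-grain baseline: one bulk-synchronous parallel loop per BFS level.
// Threads claim unvisited vertices with a compare-and-swap; the implicit
// barrier at the end of the parallel for separates levels.
void bfs_level_omp(const std::vector<int>& row_ptr,
                   const std::vector<int>& col_idx,
                   const std::vector<int>& frontier,
                   std::vector<std::atomic<int>>& level,
                   std::vector<int>& next,
                   int depth) {
    std::vector<std::vector<int>> local_next(omp_get_max_threads());
    #pragma omp parallel for schedule(dynamic, 64)
    for (std::size_t i = 0; i < frontier.size(); ++i) {
        int u = frontier[i];
        for (int e = row_ptr[u]; e < row_ptr[u + 1]; ++e) {
            int v = col_idx[e];
            int expected = -1;
            if (level[v].compare_exchange_strong(expected, depth))
                local_next[omp_get_thread_num()].push_back(v);
        }
    }                                        // implicit barrier: all threads wait here
    next.clear();
    for (auto& buf : local_next)             // merge per-thread frontier buffers
        next.insert(next.end(), buf.begin(), buf.end());
}
```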

In the Codelet model, units of computation are called codelets. Each codelet is a sequential piece of code that executes without interruption (i.e., no internal synchronization is required). Data dependences between codelets are specified through a directed graph called the codelet graph. At execution time, the runtime schedules codelets based on these dependencies.
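The firing rule can be pictured with a toy scheduler: a codelet becomes ready once all of its data dependences are satisfied, runs to completion, and then signals its successors in the graph. The sketch below is only a schematic of that behavior, not the actual Codelet runtime interface; all names are ours.

```cpp
#include <functional>
#include <queue>
#include <vector>

// Toy codelet graph: each codelet fires once all of its inputs have arrived.
// This mimics the dataflow firing rule; it is not the real Codelet runtime API.
struct Codelet {
    int pending;                       // unsatisfied data dependences
    std::function<void()> body;        // runs to completion, no internal synchronization
    std::vector<int> successors;       // codelets that depend on this one
};

void run_graph(std::vector<Codelet>& graph) {
    std::queue<int> ready;
    for (int i = 0; i < static_cast<int>(graph.size()); ++i)
        if (graph[i].pending == 0) ready.push(i);   // initially enabled codelets
    while (!ready.empty()) {
        int c = ready.front(); ready.pop();
        graph[c].body();                            // execute without interruption
        for (int s : graph[c].successors)           // signal dependents
            if (--graph[s].pending == 0) ready.push(s);
    }
}
```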

The Codelet model executes in the context of an abstract parallel machine model. The machine consists of many computing nodes stitched together via an interconnection network. Each node contains a many-core chip organized into two types of cores: compute units (CUs) and synchronization units (SUs). This heterogeneity provides differing performance and energy profiles: codelets that can get by on a weaker core can be scheduled there to save energy, while codelets that require heavy-duty computation can be scheduled onto a stronger core.
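One way to picture such a placement policy (purely an illustrative assumption on our part, not the scheduler described by Gao's team) is a simple rule that keys off a codelet's estimated arithmetic intensity:

```cpp
#include <cstddef>

// Hypothetical placement rule for a heterogeneous node: memory-bound
// codelets go to the simpler, lower-power cores; compute-bound codelets go
// to the stronger cores. Names and threshold are illustrative only.
enum class CoreType { WeakCore, StrongCore };

struct CodeletProfile {
    std::size_t flops;   // estimated arithmetic work
    std::size_t bytes;   // estimated memory traffic
};

CoreType place(const CodeletProfile& p, double intensity_threshold = 1.0) {
    if (p.bytes == 0)                       // pure compute: use the strong core
        return CoreType::StrongCore;
    double arithmetic_intensity = static_cast<double>(p.flops) / p.bytes;
    return arithmetic_intensity < intensity_threshold
               ? CoreType::WeakCore         // save energy on memory-bound work
               : CoreType::StrongCore;      // keep compute-bound work fast
}
```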

By leveraging a fine-grain data-flow execution model such as Codelet, Gao and his team are able to reduce the dynamic energy of memory accesses by up to 7% compared to the traditional coarse-grain OpenMP model.
