Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

June 3, 2014

LANL Demos Extreme Scale Indexing

Tiffany Trader

An HPC middleware project currently underway at Los Alamos National Laboratory has reached a significant milestone. The new supercomputing tool, developed as part of the Multi-dimensional Hashed Indexed Middleware (MDHIM) project, made 1,782,105,749 key/value inserts per second into a globally ordered key space on the laboratory’s Moonlight supercomputer. The demonstration showcases the potential of MDHIM to enable data exploration at enormous scale.

Fundamental to the progress of science in the 21st century is the need for computer simulations to harness ever-larger numbers of computing cores in unison. As we head toward exascale, the additional computing power results in more complex simulations and more data being pumped into the analysis workflow.

With the size of today’s datasets, it is no longer feasible to move, search, or analyze all the data at once. Instead, tools are needed to identify, retrieve, and analyze smaller subsets of the data. The MDHIM framework evolved to address these data management challenges while leveraging the capabilities of extreme-scale computing systems.

The framework was intended as a halfway point between fully relational databases and distributed but completely local constructs like “map/reduce.” With MDHIM, applications can take advantage of the mechanisms provided by a parallel key-value store: storing data in global multi-dimensional order and subsetting massive data in multiple dimensions. It also provides the functions of a distributed hash table with simple but massively parallel lookups.
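To make the idea concrete: in a range-partitioned key-value store, each server rank owns a contiguous slice of the key space, so keys remain globally ordered across servers and a range (subset) query only needs to visit the ranks whose slices overlap the query. The sketch below is a minimal single-process Python model of that scheme; the class and method names are purely illustrative and are not MDHIM's actual API.

```python
class RangePartitionedKV:
    """Toy model of a globally ordered, range-partitioned key-value store.

    Integer keys; each of `num_servers` ranks owns a contiguous key range,
    so a range query only touches servers whose ranges overlap it.
    (Illustrative only -- not the MDHIM API.)
    """

    def __init__(self, num_servers, key_space=2**20):
        self.num_servers = num_servers
        self.range_size = key_space // num_servers
        # One local store per simulated "server" rank.
        self.servers = [dict() for _ in range(num_servers)]

    def server_for(self, key):
        # Range partitioning preserves global key order across servers.
        return min(key // self.range_size, self.num_servers - 1)

    def put(self, key, value):
        self.servers[self.server_for(key)][key] = value

    def range_query(self, lo, hi):
        """Return (key, value) pairs with lo <= key <= hi, in key order."""
        first, last = self.server_for(lo), self.server_for(hi)
        hits = []
        for rank in range(first, last + 1):   # only overlapping ranks
            for k in sorted(self.servers[rank]):
                if lo <= k <= hi:
                    hits.append((k, self.servers[rank][k]))
        return hits


kv = RangePartitionedKV(num_servers=4)
for k in [5, 300000, 600000, 900000, 10]:
    kv.put(k, f"val{k}")

# Only ranks 0 and 1 hold keys in [0, 400000], so only they are visited.
print(kv.range_query(0, 400000))
# → [(5, 'val5'), (10, 'val10'), (300000, 'val300000')]
```

A hash-partitioned table would balance inserts more evenly but destroy global order, forcing a range query to contact every server; range partitioning is what makes ordered subsetting cheap.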

“In the current highly parallel computing world, the need for scalability has forced the world away from fully transactional databases and back to the loosened semantics of key-value stores,” explains Gary Grider, High Performance Computing division leader at Los Alamos.

MDHIM is designed to represent petabytes of scientific data with mega- to gigabytes of representation data. It does this by utilizing the natural advantages of HPC interconnects – low latency, high bandwidth, and collective-friendliness – to scale key/value service to millions of cores. For the system to be scalable and productive, it must be capable of executing billions of inserts per second.

In a recent test run, MDHIM ran as an MPI library on 3,360 processors within 280 nodes of the 308-node Moonlight system – achieving almost 1.8 billion inserts per second.
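To put the aggregate figure in perspective, it works out to just over 530,000 inserts per second for each of the 3,360 MPI processes. The back-of-the-envelope arithmetic:

```python
# Per-process insert rate implied by the Moonlight demonstration.
total_inserts_per_sec = 1_782_105_749   # aggregate rate reported
processes = 3_360                       # MPI processes across 280 nodes

per_process = total_inserts_per_sec / processes
print(f"{per_process:,.0f} inserts/sec per process")  # → 530,389 inserts/sec per process
```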

“This milestone was achieved by a combination of good software design and refined algorithms. Our code is available on GitHub and we encourage others to build upon it,” says Hugh Greenberg, project leader and lead developer of the MDHIM project.

MDHIM is an important part of the Storage and I/O portion of the DOE FastForward project, a collaborative effort to accelerate the R&D needed for extreme-scale computing.
