Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

November 29, 2012

Sequoia Supercomputer Runs Cosmology Code at 14 Petaflops

Michael Feldman

Although Lawrence Livermore Lab’s Sequoia supercomputer got knocked off its TOP500 perch a few weeks ago, the DOE machine, managed by the National Nuclear Security Administration (NNSA), is proving its worth in the world of real applications.

According to the NNSA, Sequoia, the world’s largest IBM Blue Gene/Q system, delivered nearly 14 petaflops on the recently developed Hardware/Hybrid Accelerated Cosmology Code (HACC), a software framework that simulates the behavior of galaxies on a cosmological scale. Its purpose is to help scientists reveal the nature of dark matter and dark energy. While that might seem a little tangential to NNSA’s primary mission of managing the nation’s nuclear arsenal, it does demonstrate the power of the Blue Gene platform.

Sequoia supercomputer; Photo credit: Bob Hirschfeld/LLNL  

In fact, 14 petaflops is just a couple of petaflops shy of Sequoia’s Linpack mark, and just four petaflops off its peak performance number. According to the DOE press release: “The HACC framework is designed for extreme performance in the weak scaling limit (high levels of memory utilization) by integrating innovative algorithms, as well as programming paradigms, in a way that easily adapts to different computer architectures.”

Applications that exhibit weak scaling (the ability to handle a larger problem as more processors are applied, keeping the work per processor roughly constant) are good candidates to use the full capability of these petascale supers, since they rely on high levels of compute parallelism. This is especially true of the Blue Gene architecture, which uses large numbers of relatively slow CPUs (1.6 GHz, in this case) to achieve high aggregate performance. Sequoia, with more than 1.5 million PowerPC A2 cores, is perhaps the most extreme example of this.
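To make the weak-versus-strong scaling distinction concrete, here is a minimal sketch of the two efficiency calculations. All timings and core counts below are hypothetical illustrations, not measurements from Sequoia, HACC, or any real system.

```python
# Weak scaling: problem size grows in proportion to processor count, so the
# ideal outcome is a *constant* time per step. Strong scaling: fixed problem,
# more processors, so the ideal outcome is a proportional speedup.

def weak_scaling_efficiency(t_base, t_scaled):
    """Ideal is 1.0: runtime stays flat as problem size and
    processor count grow together."""
    return t_base / t_scaled

def strong_scaling_efficiency(t_base, t_scaled, p_base, p_scaled):
    """Ideal is 1.0: speedup matches the increase in processor count."""
    speedup = t_base / t_scaled
    return speedup / (p_scaled / p_base)

# Hypothetical weak-scaled run: 1,024 cores on a 1x problem vs.
# 1,048,576 cores on a 1024x problem, seconds per simulation step.
print(f"weak:   {weak_scaling_efficiency(100.0, 105.0):.2f}")    # 0.95

# Hypothetical strong-scaled run: same fixed problem on 1,024 vs.
# 16,384 cores (16x the processors, but only 12.5x the speedup).
print(f"strong: {strong_scaling_efficiency(100.0, 8.0, 1024, 16384):.2f}")  # 0.78
```

A code like HACC that weak-scales well can keep its efficiency near 1.0 even at a million-plus cores, which is what makes a near-peak petaflops run on a machine of Sequoia’s size feasible.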

Although these results were obtained in the NNSA’s shop at LLNL, the team conducting the work came from Argonne National Laboratory (ANL), a DOE facility devoted to open science and engineering. They will be running this same application on the 10-petaflop Mira supercomputer, another Blue Gene/Q system, installed at ANL.

Blue Gene systems haven’t cornered the market on petascale apps though. Titan, the new Cray XK7 supercomputer at Oak Ridge, recently debuted with a 10-petaflop run of WL-LSMS, a material science code that performs thermodynamic calculations. Titan relies on NVIDIA GPUs of the Kepler persuasion for 24 of its 27 peak petaflops, so this represents a much different architecture than that of the CPU-only Sequoia.

As multi-petaflops supercomputers start to fill in the TOP500 list, applications that can sustain this level of computing will start to proliferate as well. In three years, all of the top 500 supercomputers are expected to be a petaflop or better, offering a much wider array of machines for such computing. The real era of petascale supercomputing has just begun.