March 25, 2013

Hopper Lights Up the Cosmos

Nicole Hemsoth

The European Space Agency’s massive Planck telescope has been hard at work digging through ancient light signals to find the original spark of the Big Bang.

The clue-yielding light has traveled 13.8 billion years to reach research equipment and is so faint that Planck has to scan every point on the sky an average of 1,000 times to spot illuminations. This has resulted in an incredibly massive map of the cosmos, not to mention some interesting new spin-outs of the original research mission.
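Why scan each point a thousand times? Averaging repeated measurements beats down random sensor noise roughly as 1/√N. The following sketch is purely illustrative (the signal and noise values are made up, not Planck's), but it shows how a signal far fainter than the per-scan noise emerges after 1,000 averaged scans:

```python
import random
import statistics

random.seed(0)

TRUE_SIGNAL = 0.001   # hypothetical faint sky signal (arbitrary units)
NOISE_SIGMA = 1.0     # per-scan sensor noise, dwarfing the signal
N_SCANS = 1000        # Planck scans each point ~1,000 times on average

# Each scan is the true signal buried in Gaussian sensor noise.
scans = [TRUE_SIGNAL + random.gauss(0, NOISE_SIGMA) for _ in range(N_SCANS)]

# Averaging the scans shrinks the noise to ~NOISE_SIGMA / sqrt(N_SCANS),
# i.e. about 0.03 here -- small enough for the faint signal to surface.
estimate = statistics.mean(scans)
print(f"per-scan noise: {NOISE_SIGMA}, averaged estimate: {estimate:.4f}")
```

The same principle is why the mission's data volume balloons: pushing the noise floor down by another factor of ten means a hundred times more scans.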

As one might imagine, this sky-mapping and light-combing process requires some serious HPC resources. “So far, Planck has made about a trillion observations of a billion points on the sky,” said Julian Borrill of the Lawrence Berkeley National Laboratory, Berkeley, Calif. “Understanding this sheer volume of data requires a state-of-the-art supercomputer.”

But scientists behind the project point to another particularly difficult angle to their research that necessitates a high performance system.

To isolate the light sources and build accurate models, scientists must plow through a great deal of noise from the Planck sensors, teasing the critical signals apart from the static in which they are wrapped. Project scientists point to this noise as one of the fundamental challenges of the mission, and they have turned to a top 20 system to solve the problem.

At the heart of this signal search-and-filter process is the Opteron-powered “Hopper,” a Cray XE6 system at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab.

According to NASA, the computations needed for Planck’s current data release required “more than 10 million processor-hours on the Hopper computer. Fortunately, the Planck analysis codes run on tens of thousands of processors in the supercomputer at once, so this only took a few weeks.”
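NASA's figures can be sanity-checked with back-of-the-envelope arithmetic. The core count below is an assumption (the article says only “tens of thousands”), but with any value in that range, 10 million processor-hours does indeed work out to a few weeks of wall-clock time:

```python
# Back-of-the-envelope check of the quoted figures. CORES_IN_USE is an
# assumed midpoint of "tens of thousands of processors"; it is not a
# documented Planck job size.
PROCESSOR_HOURS = 10_000_000
CORES_IN_USE = 30_000

wall_clock_hours = PROCESSOR_HOURS / CORES_IN_USE
wall_clock_weeks = wall_clock_hours / (24 * 7)

# 10M / 30k = ~333 hours, i.e. roughly two weeks of continuous running.
print(f"{wall_clock_hours:.0f} hours = {wall_clock_weeks:.1f} weeks")
```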

Hopper is NERSC’s first petascale system; it came in at number 19 on the most recent Top500 list, with 217 TB of memory across 153,216 cores. The center is looking to continue its Cray tradition by tapping the Cascade, as announced around ISC last year.

Related Articles

Berkeley Lab Contemplates Stepping Stone to Exascale Supercomputer

NERSC Signs Up for Multi-Petaflop “Cascade” Supercomputer