March 25, 2013

Hopper Lights Up the Cosmos

Nicole Hemsoth

The European Space Agency’s massive Planck telescope has been hard at work digging through ancient light signals to find the original spark of the Big Bang.

The clue-yielding light has traveled for 13.8 billion years to reach Planck's detectors and is so faint that the telescope has to scan every point on the sky an average of 1,000 times to pick it out. The result is an incredibly detailed map of the cosmos, not to mention some interesting spin-offs of the original research mission.

As one might imagine, this sky-mapping and light-combing process requires some serious HPC resources. “So far, Planck has made about a trillion observations of a billion points on the sky,” said Julian Borrill of the Lawrence Berkeley National Laboratory, Berkeley, Calif. “Understanding this sheer volume of data requires a state-of-the-art supercomputer.”
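As a rough consistency check, a trillion observations spread over a billion sky points works out to about 1,000 scans per point, in line with the figure above. A minimal sketch of that arithmetic in Python:

    # Illustrative arithmetic only, using the figures quoted above.
    observations = 1e12      # "about a trillion observations"
    sky_points = 1e9         # "a billion points on the sky"
    scans_per_point = observations / sky_points
    print(f"~{scans_per_point:.0f} observations per point")   # ~1000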

But the scientists behind the project point to another particularly difficult aspect of their research that necessitates a high-performance system.

To get at the light sources and build accurate models, the team must plow through a great deal of noise from the Planck sensors, teasing the critical signals apart from the static they are wrapped in. Project scientists point to this noise as one of the fundamental challenges of the mission and have looked to a top-20 system to solve the problem.
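The following is not the actual Planck analysis pipeline, only a minimal Python sketch with invented signal and noise levels. It illustrates the basic idea behind repeated scanning: averaging many noisy observations of the same sky point beats random sensor noise down by roughly the square root of the number of scans, while the underlying signal survives.

    # Minimal sketch, not the real pipeline: signal and noise levels are
    # made up for illustration. Averaging N noisy scans of one sky point
    # shrinks the random noise by roughly sqrt(N).
    import numpy as np

    rng = np.random.default_rng(0)
    true_signal = 2.5e-5       # hypothetical sky fluctuation
    noise_sigma = 1.0e-3       # hypothetical per-scan sensor noise
    n_scans = 1000             # ~1,000 scans per point, as noted above

    scans = true_signal + noise_sigma * rng.standard_normal(n_scans)
    estimate = scans.mean()
    print(f"per-scan noise: {noise_sigma:.1e}")
    print(f"error after averaging: {abs(estimate - true_signal):.1e}")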

At the heart of this signal search-and-filter process is the Opteron-powered “Hopper” Cray XE6 system at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab.

According to NASA, the computations needed for Planck’s current data release required “more than 10 million processor-hours on the Hopper computer. Fortunately, the Planck analysis codes run on tens of thousands of processors in the supercomputer at once, so this only took a few weeks.”
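A back-of-envelope check of those numbers, assuming roughly 30,000 cores in use (the exact count is not given), confirms that 10 million processor-hours does come out to a few weeks of wall-clock time:

    # Rough consistency check; the 30,000-core figure is an assumption.
    processor_hours = 10e6      # "more than 10 million processor-hours"
    cores_in_use = 30_000       # assumed "tens of thousands of processors"
    wall_clock_hours = processor_hours / cores_in_use
    print(f"~{wall_clock_hours / (24 * 7):.1f} weeks")   # roughly 2 weeks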

Hopper is NERSC’s first petascale system; it landed at number 19 on the most recent Top500 list, with 217 TB of memory spread across 153,216 cores. The center is looking to continue its Cray tradition by tapping the forthcoming Cascade system, as announced around ISC last year.

Related Articles

Berkeley Lab Contemplates Stepping Stone to Exascale Supercomputer

NERSC Signs Up for Multi-Petaflop “Cascade” Supercomputer
