Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

June 19, 2014

Stampede Foreshadows Heterogeneous Supercomputing

Tiffany Trader

Supercomputers, and the people who run them, are the rock stars of the science and engineering world, enabling discoveries and facilitating crucial insights on some of the most challenging problems facing humanity. One of the world’s most powerful computing systems is Stampede, a key resource of the Texas Advanced Computing Center (TACC) that was funded by the National Science Foundation (NSF). This open science research tool is also a cornerstone of the NSF’s strategy to provide American scientists with a first-class cyberinfrastructure.

Currently, both the NSF and TACC are spotlighting this petascale supercomputer and the expert staff and user community that work together to accelerate science for the nation and the world.

“Sometimes, the laboratory just won’t cut it,” notes NSF science writer Aaron Dubrow.

“After all,” he continues, “you can’t recreate an exploding star, manipulate quarks or forecast the climate in the lab. In cases like these, scientists rely on supercomputing simulations to capture the physical reality of these phenomena – minus the extraordinary cost, dangerous temperatures or millennium-long wait times.

“When faced with an unsolvable problem, researchers at universities and labs across the United States set up virtual models, determine the initial conditions for their simulations – the weather in advance of an impending storm, the configurations of a drug molecule binding to an HIV virus, the dynamics of a distant dying star – and press compute.”

The Stampede supercomputer and others like it are an indispensable part of the scientific process. These beasts of computational burden rely on thousands of multicore processors to execute workloads in minutes or hours instead of weeks and months, and in doing so, help solve our biggest challenges and toughest scientific questions.

Stampede went into operation in January 2013. The 8.5 petaflop (peak) system currently ranks number 7 on the TOP500 list of the fastest supercomputers, a position it earned with a measured LINPACK score of 5.2 petaflops. At any given moment, Stampede is crunching hundreds of separate workloads at once. In its first year, Stampede completed nearly 2.2 million jobs submitted by 3,400 researchers and supported more than 1,700 distinct science projects. The range of research it has enabled includes more accurate DNA sequencing, cutting-edge astrophysics, novel biofuel development and colloidal gel simulations.

Built by Intel, Dell and Mellanox, Stampede is one of the first supercomputers to employ standard Intel Xeon E5 CPUs alongside Intel Xeon Phi coprocessors. The advantage of the Phi is that it performs many calculations in parallel while consuming less energy than a conventional CPU.

Says TACC’s Dan Stanzione: “The Xeon Phi is Intel’s approach to changing these power and performance curves by giving us simpler cores with a simpler architecture but a lot more of them in the same size package.”

While high-performance computing has primarily been concerned with exponential performance increases, that is no longer a tenable strategy going forward. Getting to the next big goalpost, exascale, requires a shift in design that emphasizes performance-per-watt and efficient data movement over pure performance. One way to conserve energy is to employ manycore chips built from many slower but more power-efficient cores. The community's embrace of accelerators, including GPUs from AMD and NVIDIA and the Intel MIC coprocessor, speaks to this new reality.

“The exciting part is that MIC and GPU foreshadow what will be on the CPU in the future,” Stanzione said. “The work that scientists are putting in now to optimize codes for these processors will pay off. It’s not whether you should adopt them; it’s whether you want to get a jump on the future.”

Phi integration on Stampede is evolving in stages. Currently, the Phi coprocessors account for about 10-20 percent of the system's usage. Researchers are using the Phi chips for flu vaccine development, atomic-scale simulations in particle physics, and weather forecasting.
