October 27, 2009
With all the recent hoopla about GPGPU acceleration in high performance computing, it's easy to forget that Roadrunner, the most powerful supercomputer in the world, is based on a different brand of accelerator. The machine at Los Alamos National Laboratory uses 12,960 IBM PowerXCell 8i CPUs hooked up to 6,480 AMD Opteron dual-core processors to deliver 1.1 petaflop performance on Linpack.
Because of the wide disparity in floating point performance between the PowerXCell 8i processor and the Opteron, the vast majority of Roadrunner's floating point capability resides with the Cell processors. Each PowerXCell 8i delivers over 100 double precision gigaflops per chip, which means the Opteron only contributes about 3 percent of the FLOPS of the hybrid supercomputer.
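As a rough back-of-the-envelope check on that figure (the per-chip peak numbers below are assumptions typical of the era, not values quoted by Los Alamos), the split works out roughly like this:

    #include <stdio.h>

    int main(void) {
        /* Assumed peaks, for illustration only: ~102.4 DP gigaflops per
           PowerXCell 8i and ~3.6 DP gigaflops per Opteron core
           (1.8 GHz, 2 flops/cycle). */
        double cell_gf    = 12960.0 * 102.4;      /* ~1.33 petaflops */
        double opteron_gf = 6480.0 * 2 * 3.6;     /* ~46.7 teraflops */
        printf("Opteron share of peak: %.1f%%\n",
               100.0 * opteron_gf / (cell_gf + opteron_gf));   /* ~3.4 percent */
        return 0;
    }

Under those assumptions the Opterons contribute on the order of 47 teraflops against more than 1.3 petaflops from the Cell chips, which is where the roughly 3 percent figure comes from.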
Some of those FLOPS are already being put to good use, though. This week, Los Alamos announced that the lab had completed its "shakedown" phase for Roadrunner. Since the machine was installed in May 2008, researchers have had over a year to experiment with some big science applications.
These unclassified science codes included a simulation of the expanding universe, a phylogenetic exploration of the evolution of the Human Immunodeficiency Virus (HIV), a simulation of laser plasma interactions for nuclear fusion, an atomic-level model of nanowires, a model of "magnetic reconnection," and a molecular dynamics simulation of how materials behave under extreme stress. All of these codes were able to make good use of the petascale performance of Roadrunner.
Now that the shakedown period has concluded, the National Nuclear Security Administration (NNSA) will move in to claim those FLOPS for nuclear weapons simulations. Since these applications are obviously of a classified nature, we're not likely to hear much about their specific outcomes. Open science codes will still get a crack at the machine, but since Roadrunner's primary mission is to support US nuclear deterrence, the unclassified workloads will presumably get pushed to the back of the line.
The bigger question is what the longer-term prospects are for a hybrid x86-Cell system architecture, and for the Cell processor in general, in the high performance computing realm. Unlike GPUs or FPGAs, Cell processors contain their own CPU core (a PowerPC) along with eight SIMD coprocessing units, called Synergistic Processing Elements (SPEs), so the chip represents a more fully functional architecture than its competition. Despite that advantage, the Cell's penetration into general-purpose computing has remained somewhat limited. Although the original Cell processor was the basis for the PlayStation 3 gaming console and the double-precision-enhanced PowerXCell variant has found a home in HPC blades, neither version is a commodity chip in the same sense as the x86 CPU or general-purpose GPUs. The result is that Cell-based solutions are strewn rather haphazardly across the HPC landscape.
Besides the high-profile Roadrunner system, IBM also offers a standalone QS22 Cell blade, which is deployed at a handful of sites, including the Interdisciplinary Centre for Mathematical and Computational Modeling at the University of Warsaw and Repsol YPF, a Spanish oil and gas company. As it turns out, these systems are among the most energy efficient, with the Warsaw system currently sitting atop the Green500 list. Other Cell accelerator boards are available from Mercury Computer Systems, Fixstars, and Sony, but I've yet to hear of any notable HPC deployments resulting from these products.
Cell processor developer tools certainly exist, but no standard environment has come to the fore. This is rather important since the heterogeneous nature of the Cell architecture means programming is inherently more difficult. IBM, of course, provides its own software development kit for the architecture. Outside of Big Blue, Mercury Computer Systems has a Cell-friendly Multicore Plus SDK, and software vendor Gedae sells a compiler. RapidMind offers Cell support in its multicore development platform, but since the company was acquired by Intel, its Cell-loving days are likely coming to a close. French software maker CAPS was planning to offer Cell support in its HMPP manycore development suite sometime this year, but that hasn't come to pass.
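To give a sense of where that difficulty comes from, here is a minimal host-side sketch using IBM's libspe2 interface from the Cell SDK. It only shows the PPE side of the workflow; the SPE kernel itself (and its DMA transfers into local store) would be a separate program, and the kernel name spu_kernel is an illustrative assumption, not a real artifact.

    #include <libspe2.h>
    #include <pthread.h>
    #include <stdio.h>

    /* SPE executable embedded into the PPE binary (e.g., via ppu-embedspu);
       the name "spu_kernel" is illustrative. */
    extern spe_program_handle_t spu_kernel;

    /* Each SPE context runs in its own PPE thread. */
    static void *run_spe(void *arg) {
        spe_context_ptr_t ctx = (spe_context_ptr_t)arg;
        unsigned int entry = SPE_DEFAULT_ENTRY;
        if (spe_context_run(ctx, &entry, 0, NULL, NULL, NULL) < 0)
            perror("spe_context_run");
        return NULL;
    }

    int main(void) {
        enum { NUM_SPES = 8 };            /* PowerXCell 8i exposes eight SPEs */
        spe_context_ptr_t ctx[NUM_SPES];
        pthread_t tid[NUM_SPES];

        for (int i = 0; i < NUM_SPES; i++) {
            ctx[i] = spe_context_create(0, NULL);   /* one context per SPE  */
            spe_program_load(ctx[i], &spu_kernel);  /* load the SPE ELF image */
            pthread_create(&tid[i], NULL, run_spe, ctx[i]);
        }
        for (int i = 0; i < NUM_SPES; i++) {
            pthread_join(tid[i], NULL);
            spe_context_destroy(ctx[i]);
        }
        return 0;
    }

Even in this stripped-down form, the programmer is explicitly creating contexts, loading a separate SPE binary, and managing host threads per accelerator core, which is exactly the kind of orchestration a homogeneous x86 code never has to think about.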
With NVIDIA's Fermi GPU architecture poised to make a big entrance into high performance computing in 2010, IBM will have to make a decision about adding GPU acceleration to its existing HPC server lineup. Server rival HP has apparently already committed to including Fermi hardware in its offerings. Last week Georgia Tech announced HP and NVIDIA would be delivering a sub-petaflop supercomputer to the institute in early 2010. That system will be based on Intel Xeon servers accelerated by Fermi processors. Other HPC vendors, including Cray, have announced plans to bring Fermi into their product lines. If GPUs become the mainstream accelerator for HPC servers, IBM will be forced to follow suit.
That's not to say IBM will give up on its home-grown Cell chip. Big Blue has a tradition of offering a smorgasbord of architectures to its customers, especially in the HPC market. Today the company has high-end server products based on x86 CPUs, Blue Gene (PowerPC-based) SoCs, Power CPUs, and the Cell processor. Adding GPU-accelerated hardware wouldn't necessarily mean ditching the Cell.
On the other hand, IBM has to consider whether it wants to reinvest in the architecture to keep up with the latest GPU performance numbers from NVIDIA and AMD, which would mean getting a single Cell processor to deliver hundreds of gigaflops of double-precision performance. IBM is certainly capable of building such a chip, but there's little motivation to do so. With no established base of customers clamoring for Cell-equipped supercomputers, and with a relatively small volume of Cell chips from which to leverage high-end parts, it's hard to imagine that Big Blue will be doubling down on its Cell bet.