Requiem for Roadrunner
Right around this time of year in 2008, the supercomputing world was abuzz with news that the stubborn petaflop barrier had finally been broken.
At the heart of this breakthrough was the IBM Roadrunner system installed at Los Alamos National Laboratory. It's hard to imagine, but just five years after its record-setting run, Los Alamos announced this weekend that Roadrunner would be retired. Its replacement, the (slightly) swifter, smaller, and more efficient sibling, Cielo, gets a chance to dash.
The system was shuttered Sunday, with a month-long autopsy of sorts planned before it is dismantled, focused especially on the operating system, memory, and data routing fronts. "Even in death, we're trying to learn from Roadrunner," noted Gary Grider of the lab's high performance computing division.
While it once enjoyed a top ten ranking on the Green 500, some have suggested that the main reason for the shutdown is energy efficiency. Its petaflop performance, which still lands it within the top 30 systems in the world, might still be useful, but it comes at quite a cost.
To put this in perspective, according to the Top 500 list, Roadrunner gobbles about 2,345 kilowatts to attain 1.042 petaflops, whereas the system just below it performance-wise on the list eats just 1,177 kilowatts to reach 1.035 petaflops. While Cielo's performance advantage over Roadrunner isn't stellar, if the decision really is all in the name of efficiency, the decommissioning isn't difficult to rationalize.
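A quick back-of-the-envelope calculation makes the gap concrete. This sketch uses only the Top 500 figures quoted above (not independently verified) and compares the two machines in gigaflops per kilowatt:

```python
# Efficiency comparison using the figures cited in the article.
roadrunner_pflops, roadrunner_kw = 1.042, 2345
neighbor_pflops, neighbor_kw = 1.035, 1177

# Convert petaflops to gigaflops, then divide by kilowatts.
# Gigaflops per kilowatt is numerically equal to megaflops per watt.
rr_eff = roadrunner_pflops * 1_000_000 / roadrunner_kw   # ~444 GF/kW
nb_eff = neighbor_pflops * 1_000_000 / neighbor_kw       # ~879 GF/kW

print(f"Roadrunner: {rr_eff:.0f} gigaflops/kW")
print(f"Neighbor:   {nb_eff:.0f} gigaflops/kW")
print(f"Ratio:      {nb_eff / rr_eff:.1f}x")
```

In other words, the system just below Roadrunner on the list delivers essentially the same performance at roughly twice the efficiency, which goes a long way toward explaining the retirement.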
But what a run it had. The system was initially designed to reach a peak of 1.7 petaflops, although when it finally broke the petascale barrier it benchmarked at 1.026 petaflops. By the time it burst into petaflop territory, Roadrunner's closest Top 500 competition, another IBM machine (BlueGene/L), performed at only half the speed of the Los Alamos system. Roadrunner knocked that machine, based at Lawrence Livermore National Laboratory, out of the top position it had held since 2004.
The historical performance significance of Roadrunner is certainly what sets it apart, but at the time, Roadrunner was recognized for a few other innovations that struck a new path for later systems at other institutions. But before we detail what made this machine a standout, take a look at this video from Roadrunner’s heyday.
While the video above doesn't discuss this overtly, the system's architecture was distinctive. The newsmaking super leveraged a chip that had been developed for the PlayStation 3 to boost performance and efficiency. This hybrid architecture accounted for a great many of the machine's 116,640 cores and helped it climb high on the Green 500 list as well.
Specifically, the Cell chip technology plugged in 12,960 IBM PowerXCell 8i processors that were paired with 6,480 AMD dual-core Opteron processors to wring out performance and efficiency gains. As HPCwire noted back in 2009, "Because of the wide disparity in floating point performance between the PowerXCell 8i processor and the Opteron, the vast majority of Roadrunner's floating point capability resides with the Cell processors. Each PowerXCell 8i delivers over 100 double precision gigaflops per chip, which means the Opteron only contributes about 3 percent of the FLOPS of the hybrid supercomputer."
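That "about 3 percent" figure can be sanity-checked from the chip counts above. The per-chip Opteron number below is an assumption (a dual-core part issuing two double-precision flops per cycle per core at roughly 1.8 GHz), so treat this as a rough sketch rather than an official breakdown:

```python
# Rough check of the Opteron's share of Roadrunner's peak flops.
cell_chips = 12_960
cell_gflops_each = 100             # "over 100 double precision gigaflops per chip"
opteron_chips = 6_480
opteron_gflops_each = 2 * 2 * 1.8  # assumed: 2 cores x 2 DP flops/cycle x ~1.8 GHz

cell_total = cell_chips * cell_gflops_each           # ~1,296,000 gigaflops
opteron_total = opteron_chips * opteron_gflops_each  # ~46,656 gigaflops

share = opteron_total / (cell_total + opteron_total)
print(f"Opteron share of peak: {share:.1%}")
```

Under those assumptions the Opterons land at a few percent of the total, consistent with the HPCwire estimate; the rest of the machine's floating point muscle lives on the Cell side.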
Despite the Cell's proven performance, it never really caught on in other noteworthy systems. Again, as HPCwire noted, "Although the original Cell processor was the basis for the PlayStation 3 gaming console and the double-precision-enhanced PowerXCell variant has found a home in HPC blades, neither version is a commodity chip in the same sense as the x86 CPU or general-purpose GPUs. The result is that Cell-based solutions are strewn rather haphazardly across the HPC landscape."
IBM wasn't the only vendor star of the Los Alamos super. High performance computing storage company Panasas also touted its role in helping the system climb into the petascale arena. The company pointed to how the system required top-of-the-line (for the time, of course) reliability and scalability, and LANL settled on Panasas for the massively parallel I/O it needed. The lab was looking for a shared storage architecture in which all the compute nodes could hit the divvied-up storage (versus the then more common mode of binding a storage cluster tightly to the compute side). The storage also had to scale to hold up to 10 petabytes and grow with new nodes, which is part of the reason the lab looked beyond an NFS-based system.
The major I/O and performance requirements went beyond the system's initial mission of powering mission-critical work for the DOE's National Nuclear Security Administration; the range of applications was quite broad, including HIV modeling.
And sure enough, the machine went on to tackle energy, nuclear, and medical modeling applications, yielding new dividends for science, industry, and security, and a new way of thinking about hyper-efficient, high performance systems.