IBM Stays Atop TOP500 By Sitting Tight
A rare event occurred in the TOP500 saga today at the International Supercomputing Conference. The top two systems from six months ago, IBM’s Roadrunner at Los Alamos National Lab and Cray’s Jaguar at Oak Ridge National Lab, didn’t budge at all. The number 1 and 2 supers have the same performance rating they had six months ago and remain the only petaflop supercomputers in the world — Linpack-wise.
Most HPC watchers, including me, thought Oak Ridge would have finally run the Linpack benchmark on both Jaguar subsystems and turned in the expected 1.2 petaflop result to knock IBM off its TOP500 perch. The official story is that the Oak Ridge boys didn’t want to take up days of valuable computer time running a vanity benchmark, instead of running real science codes with the machines. I’ve got to believe that this is essentially true. These days most of the big labs and established supercomputing centers — the only organizations that can afford such systems — don’t need any extra prestige from a TOP500 ranking and are more interested in running useful applications. If it had been any vendor but Cray though, I would have expected some arm-twisting and/or incentives to get Oak Ridge to run the benchmark.
There was some shuffling in the top 10 systems, although only four are new to the top spots. And of these, only two are brand new machines: Dawn, a 416 teraflop Blue Gene/P machine for Lawrence Livermore National Laboratory (and the precursor to the future 20 petaflop Sequoia system), debuts on the list at number nine; and JuRoPA, a 275 teraflop Bull/Sun hybrid cluster installed at Jülich Supercomputing Center, takes the number 10 slot. JUGENE, a Blue Gene/P super also at the Jülich Supercomputing Center, was boosted to 825 teraflops to move into the number three slot behind Roadrunner and Jaguar, while Kraken, the upgraded Cray XT5 for the National Institute for Computational Sciences/University of Tennessee, comes in at number six with 463 teraflops.
As usual, the top of the list was dominated by IBM and Cray. IBM claimed five of the top 10 systems and 17 of the top 50. Trailing Big Blue in eliteness, Cray placed two in the top 10 and 10 in the top 50. Overall though, IBM dominated the list, representing close to 40 percent of the aggregate processing performance on the TOP500, although HP has more total systems (212) than IBM (188). HP managed that system count without having any machines in the top 10 and just four in the top 50.
As usual, most of the turnover took place in the bottom half of the list. The 500th system of the current list would have come in at number 274 in November 2008. That’s not quite the turnover the list displayed last year, but it’s about average historically. More importantly, for a year in which the global economy took a huge dive, these results make the supercomputing business look pretty resilient.
From a geographical point of view, only systems installed in the US and Germany made the top 10, with the US claiming 8 of those spots. As we look at the top 50 though, we get much more of an international flavor, with China, India, Saudi Arabia, Canada, the United Kingdom, Switzerland, Japan, France, Finland, Italy, Russia, and Sweden each claiming at least one system. As in the past, the US continues to dominate the list, with 291 systems. Europe owns 145 systems and Asia has 49.
Other interesting data points (with values for one year ago in parentheses):
- Aggregate for 500 systems: 22.6 petaflops (11.7 petaflops).
- Aggregate for top 10 systems: 6.0 petaflops (3.2 petaflops).
- Number of InfiniBand-based systems: 151 (121).
- Number of Gigabit Ethernet-based systems: 282 (283).
- Intel processor-based systems: 79.8 percent (74.8 percent).
- AMD processor-based systems: 8.6 percent (11.2 percent).
- Number of vector processor-based systems: 1 (2).
Overall, the general trend of the list continues to point to the first exaflop system by 2020. Although plenty of the HPC digerati have doubts about this timeline, ISC keynoter Andy Bechtolsheim focused his Tuesday morning talk on just this subject. Ever the optimist, Bechtolsheim pointed out that Moore’s Law, with a lot of help from multi-chip module (MCM) design, optical on-chip interconnects, and in-socket water cooling, should provide the technology required for an exaflop machine in 2020. According to Bechtolsheim, an 8nm process technology can be used to construct 10 teraflop processors, with 100,000 of them yielding an exaflop. The power required to run such a system? A mere 50 MW, he says. Hopefully, Bechtolsheim is right and gets invited back to ISC’20 to map out the path to zettaflop.
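Bechtolsheim’s back-of-the-envelope numbers are easy to sanity-check. A minimal sketch, using only the figures from his talk (10 teraflop chips, 100,000 sockets, 50 MW), confirms they multiply out to an exaflop and shows what they imply per socket:

```python
# Sanity check of Bechtolsheim's exaflop arithmetic, using the figures
# quoted from his ISC keynote (no other assumptions).
per_chip_flops = 10e12   # 10 teraflops per processor on an 8nm process
num_chips = 100_000      # sockets in the hypothetical 2020 system
system_power_w = 50e6    # 50 MW total system power

peak_flops = per_chip_flops * num_chips
watts_per_socket = system_power_w / num_chips
gigaflops_per_watt = peak_flops / system_power_w / 1e9

print(f"Peak: {peak_flops:.0e} flops")                 # 1e+18, i.e., one exaflop
print(f"Power per socket: {watts_per_socket:.0f} W")   # 500 W
print(f"Efficiency: {gigaflops_per_watt:.0f} GF/W")    # 20 GF/W
```

The implied 20 gigaflops per watt is the striking figure: the 2009 list’s most efficient machines deliver well under one gigaflop per watt, which is why the in-socket water cooling and optical interconnects Bechtolsheim cites are load-bearing parts of his argument.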