June 18, 2008
There wasn't much suspense about which machine would nab the top spot on the June TOP500 list, which was released earlier today. Last week, IBM and LANL had already let everyone know that Roadrunner had crossed the petaflop finish line first. Nonetheless, the new list portends some big changes ahead for supercomputing.
IBM continues to dominate the top systems, finishing 1, 2 and 3. LANL's Roadrunner system was No. 1 at 1.026 petaflops. LLNL's Blue Gene/L, which had held the top spot since 2004, drops down to No. 2 at 478.2 teraflops. Argonne National Lab's new Blue Gene/P follows close behind at 450.3 teraflops in the No. 3 spot. The new Sun-built Ranger supercluster at the Texas Advanced Computing Center (TACC) slides into the No. 4 spot at 326 teraflops, and ORNL's recent upgrade of the Cray-built Jaguar machine moves it from No. 7 on the November 2007 list to No. 5.
But because of everyone's fascination with petaflops, Roadrunner was the star of the show. Besides pure performance, the machine also broke another important barrier: it is the first hybrid supercomputer -- in this case, a mix of Opteron and Cell blades -- to grab the top spot. Because commodity x86 processors offer much lower performance per watt than the Cell, it wouldn't have been feasible to field a petaflop machine built entirely from the current crop of x86 chips. Such a system would require at least 5 megawatts, not including cooling.
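To see where a figure like 5 megawatts comes from, here is a minimal back-of-the-envelope sketch in Python. The x86 efficiency number (roughly 200 megaflops per watt) is an illustrative assumption in line with commodity clusters of this era, not a reported spec:

```python
# Back-of-the-envelope power estimate for an all-x86 petaflop system.
# ASSUMPTION: ~200 Mflops/watt for 2008-era commodity x86 clusters,
# chosen for illustration; actual efficiency varies by system.

PETAFLOP = 1.0e15            # target sustained flops
X86_MFLOPS_PER_WATT = 200.0  # assumed cluster efficiency

watts = PETAFLOP / (X86_MFLOPS_PER_WATT * 1.0e6)
print(f"Estimated draw: {watts / 1.0e6:.1f} MW")  # -> ~5.0 MW, before cooling
```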
Besides Roadrunner, there is only one other hybrid machine on the TOP500 -- the TSUBAME machine at Tokyo Tech. It's a Sun Fire Opteron-based cluster sped up by ClearSpeed Advance boards and, because of recent upgrades to the system, holds the No. 24 spot at 67.7 teraflops. At some point, TSUBAME might add some NVIDIA GPUs into the mix. According to Satoshi Matsuoka, the tech lead on the project, they've been looking at accelerating some nodes with GeForce 8800 GTS boards as they build toward a petaflop machine in the 2010 timeframe.
NVIDIA expects to have its GPUs on a top system or systems by November's TOP500 list. At ISC this week, Bull was talking about a system in development that pairs 200 teraflops of GPU acceleration with 100 teraflops of x86 servers, although no deployment date was offered.
In Roadrunner, the Cell accelerators represent 97 percent of the raw compute power of the machine. Undoubtedly x86 chips will continue to shrink and grow extra cores, but accelerators will continue to have the edge in energy efficiency unless and until they are integrated with the CPU. Even if we don't see a wealth of petaflop machines in the next few years, accelerated hybrid systems, TOP500 or otherwise, should become much more common.
Somewhat surprisingly, Roadrunner is the first No. 1 system ever to employ InfiniBand as the interconnect; every previous top machine relied on a proprietary network. Overall, InfiniBand is the growth interconnect in supercomputing, and is used in 49 of the top 100 systems. While Gigabit Ethernet still claims more total systems (285) than InfiniBand (120), GbE's days are numbered in the TOP500, and 10GbE has yet to make an appearance.
Other fun facts about the top supers:
This is the first list that includes a power consumption metric for many of the systems. The number represents how much power the computer draws while running Linpack, which supposedly is fairly representative of a system under a typical HPC application workload. It doesn't take into account external cooling, disks or other environment-related power draws. The idea is to offer a metric that should be reproducible if the machine were relocated. A nice addition.
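Since the metric is simply Linpack performance over measured power draw, a performance-per-watt figure falls straight out of it. A quick sketch, using Roadrunner's widely reported draw of roughly 2.35 megawatts (an approximate figure, assumed here for illustration):

```python
# Performance per watt from the new TOP500 power metric:
# Rmax (Linpack flops) divided by power drawn while running Linpack.
# ASSUMPTION: Roadrunner's ~2.35 MW draw is an approximate reported figure.

rmax_flops = 1.026e15   # Roadrunner's Linpack result (1.026 petaflops)
power_watts = 2.35e6    # reported draw during the Linpack run

mflops_per_watt = rmax_flops / power_watts / 1.0e6
print(f"{mflops_per_watt:.0f} Mflops/watt")  # -> roughly 437
```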
Using current projections, the first exaflop system is expected in 2019, and a zettaflop system in 2030. But by that time (if you believe Ray Kurzweil), mind uploading will be all the rage, so programming the zettaflop supers should be a snap.
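Those dates are consistent with the list's long-running trend of roughly a thousandfold gain at the top about every eleven years: a petaflop in 2008 puts an exaflop around 2019 and a zettaflop around 2030. A minimal extrapolation sketch, treating that growth rate as an assumption rather than a law:

```python
# Extrapolate TOP500 #1 performance from Roadrunner's 2008 petaflop,
# ASSUMING the historical ~1000x-per-11-years growth rate holds.

import math

BASE_YEAR, BASE_FLOPS = 2008, 1.0e15   # Roadrunner's petaflop
GROWTH_PER_YEAR = 1000.0 ** (1 / 11)   # ~1.87x per year

def year_reaching(target_flops):
    """Year the trend line crosses the given sustained performance."""
    years = math.log(target_flops / BASE_FLOPS, GROWTH_PER_YEAR)
    return BASE_YEAR + years

print(f"Exaflop:   ~{year_reaching(1.0e18):.0f}")  # -> ~2019
print(f"Zettaflop: ~{year_reaching(1.0e21):.0f}")  # -> ~2030
```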
Posted by Michael Feldman - June 17, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.