November 17, 2008
ARMONK, N.Y., Nov. 17 -- For a record-setting ninth consecutive time, an IBM system took the No.1 spot in the ranking of the world's most powerful supercomputers. The IBM computer built for the "Roadrunner" project at Los Alamos National Laboratory -- the first in the world to operate at speeds faster than one quadrillion calculations per second (one petaflop), a milestone it reached in June 2008 -- remains the world speed champion.
The latest twice-yearly ranking of the World's TOP500 Supercomputer Sites was released today during the SC08 supercomputing conference in Austin, Texas. Results show the IBM Los Alamos system, which clocked in at 1.105 petaflops, to be roughly twice as energy-efficient as the No.2 computer, using about half the electricity (2.5 megawatts) to sustain the same level of petascale computing power.
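Using the figures quoted above, a back-of-the-envelope efficiency estimate (the Green500 methodology mentioned later differs in detail, so this is only a rough cross-check) comes out near 440 megaflops per watt:

\frac{1.105 \times 10^{15}\ \text{flop/s}}{2.5 \times 10^{6}\ \text{W}} \approx 4.4 \times 10^{8}\ \text{flop/s per watt} \approx 442\ \text{Mflop/s per watt}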
Indeed, IBM swept the energy-efficiency category: the 20 most energy-efficient systems on the list are all IBM machines.
Overall, 20 of the TOP50 systems are built by IBM.
Over the 15-year history of the list, IBM has held the No.1 spot 11 times, a feat unmatched by any other systems vendor.
IBM Systems Set the Performance Pace for the TOP500
Since November 1999, IBM systems have been the most powerful on the list, contributing more overall horsepower than any other systems vendor. The trend continues in November 2008 -- IBM's 188 systems account for about 38 percent (6.5 petaflops) of the new TOP500's combined compute power of 16.9 petaflops.
The No.4 fastest computer in the world is an IBM Blue Gene/L system at the NNSA's Lawrence Livermore National Lab in California, which clocked in at 478.2 teraflops (trillion calculations per second). Team Blue Gene also holds the No.5 spot with a 450.3-teraflop performance from the Blue Gene/P system housed at the Department of Energy's Argonne National Lab near Chicago.
IBM also has the fastest machine in Europe -- the No.11 Blue Gene/P at the Juelich Research Center in Germany, running at 180 teraflops. IBM is also the brand behind the fastest computers in Canada, the United Kingdom, Spain, the Netherlands, Taiwan, South Africa, Israel, Bulgaria and Slovenia.
IBM supercomputers tackle the world's grand challenges, from genetic medicine to the hunt for new energy
IBM provides a wider variety of systems and software technology to the supercomputing market than any other vendor. The company's innovative HPC solutions have become a new scientific force for tackling the world's grand challenges -- climate science, the hunt for new sources of energy, the creation of gene-based medicines -- and have made significant contributions to basic scientific inquiry in physics and biology.
IBM is also leading the move to design all-new hybrid systems -- such as Roadrunner -- that combine different types of processors for better performance and energy efficiency. IBM is currently building a 360-teraflop hybrid cluster for the University of Toronto, for example, a deal that will pair one of the world's largest POWER6 clusters with IBM's new iDataPlex platform (x86) to create an extremely flexible 4,000-node supercomputer capable of running a diverse range of software at high levels of performance. Starting in 2009, Canadian scientists plan to use the system to create new methods of medical imaging, among other applications. Partway through installation, the computer has already debuted at No.53 on the new TOP500.
"It's an honor to hold the record for the world's most powerful computer, but what is critical is building supercomputers that help advance the global economy and society at large," said David Turek, VP of Deep Computing at IBM. "We pioneered energy-smart supercomputer designs with Blue Gene in 2000 and build substantially on that heritage each year to the benefit of science and industry. We apply our lessons learned and the innovation that comes from these efforts to IBM's commercial systems business."
The Era of Blue Gene
IBM's chart-topping results have been driven in large part by the company's innovative Blue Gene systems, which harness large numbers of low-power processors to deliver ever-faster performance. Blue Gene machines have dominated the upper portions of the TOP500 rankings since their debut in 2004.
Since 2004, more than 300 racks' worth of Blue Gene systems -- 2,182 teraflops of compute power -- have been in near-constant use by nearly 40 of the world's leading research agencies. Blue Gene is changing the way research is done, elevating the computational component of scientific inquiry to new importance. The Blue Gene system at Lawrence Livermore National Laboratory alone has been responsible for breakthroughs in physics, materials science and other fields -- work featured on the covers of six prominent professional journals in the past four years, twice in Nature.
About Roadrunner
Built by IBM for the NNSA and housed at Los Alamos National Laboratory, the petaflop-smashing Roadrunner system unveiled in June 2008 gets its world-leading power from a hybrid blend of 12,960 IBM PowerXCell 8i processors -- derived from the Cell Broadband Engine chips that power today's most popular videogame consoles -- and 6,948 dual-core AMD Opteron processors. The Opteron chips perform basic compute functions, freeing the IBM PowerXCell 8i chips for the math-intensive calculations that are their specialty. Press release at http://www-03.ibm.com/press/us/en/pressrelease/24405.wss
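This release does not describe Roadrunner's actual software stack, but the division of labor it sketches -- a general-purpose side partitioning and dispatching work while an accelerator side grinds through math-heavy kernels -- is a common offload pattern. The short C program below is a minimal, purely illustrative sketch of that pattern; plain POSIX threads stand in for the Cell-side workers, and every name in it is hypothetical rather than taken from Roadrunner's code.

/* Illustrative host/accelerator split: the "host" partitions the
 * problem and dispatches it; "workers" run the math-intensive kernel.
 * POSIX threads stand in for Roadrunner's PowerXCell 8i side. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N        1000000   /* problem size */
#define WORKERS  4         /* stand-ins for accelerator cores */

typedef struct { const double *x; double *y; size_t lo, hi; } chunk_t;

/* Math-intensive kernel (an axpy-style loop): the kind of work the
 * release says is offloaded to the PowerXCell 8i chips. */
static void *kernel(void *arg) {
    chunk_t *c = arg;
    for (size_t i = c->lo; i < c->hi; i++)
        c->y[i] = 2.0 * c->x[i] + c->y[i];
    return NULL;
}

int main(void) {
    double *x = malloc(N * sizeof *x);
    double *y = malloc(N * sizeof *y);
    for (size_t i = 0; i < N; i++) { x[i] = (double)i; y[i] = 1.0; }

    /* "Opteron" side: basic bookkeeping -- partition and dispatch. */
    pthread_t tid[WORKERS];
    chunk_t chunk[WORKERS];
    for (int w = 0; w < WORKERS; w++) {
        chunk[w] = (chunk_t){ x, y, (size_t)w * N / WORKERS,
                              (size_t)(w + 1) * N / WORKERS };
        pthread_create(&tid[w], NULL, kernel, &chunk[w]);
    }
    for (int w = 0; w < WORKERS; w++)
        pthread_join(tid[w], NULL);

    printf("y[42] = %g\n", y[42]);   /* expect 2*42 + 1 = 85 */
    free(x);
    free(y);
    return 0;
}

Compiled with a C99 compiler and -pthread, this runs as written; in the real machine the kernel half would execute on the Cell processors and the dispatch would cross a node-internal link, but the shape of the split is the same.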
The "TOP500 Supercomputer Sites" list is compiled and published by supercomputing experts Jack Dongarra of the University of Tennessee; Erich Strohmaier and Horst Simon of the Department of Energy's NERSC/Lawrence Berkeley National Laboratory; and Hans Meuer of the University of Mannheim (Germany). The entire list can be viewed at www.top500.org.
IBM's expertise in building large-scale, energy-smart hardware is also reflected on the "Green500" list of the world's most energy-efficient supercomputers. IBM has all of the top 10, 24 of the top 25, and 76 of the top 100 systems on the current Green500 list. (www.green500.org)
For more information about IBM supercomputing, visit http://www-03.ibm.com/servers/deepcomputing/