June 18, 2008
ARMONK, N.Y., June 18 -- IBM's history-making hybrid supercomputer, built for the National Nuclear Security Administration's (NNSA) Los Alamos National Lab, burned its way into the TOP500 Supercomputer record book today as the most powerful system in the world -- by a wide margin. Its sustained performance of 1.02 petaflops (1.02 quadrillion calculations per second) puts the system in a class of its own -- more than three times faster than the nearest non-IBM system.
The official results were reported today at the International Supercomputing Conference in Dresden, Germany, where the twice-yearly listing of the World's TOP500 Supercomputer Sites was released.
Built by IBM for the NNSA and housed at its Los Alamos National Laboratory, the petaflop-smashing system gets its world-leading power from 12,240 IBM PowerXCell 8i Cell Broadband Engine processors -- derived from chips that power today's most popular videogame consoles. A further 6,562 AMD Opteron dual-core processors perform basic compute functions, freeing the IBM PowerXCell 8i chips for the math-intensive calculations that are their specialty.
This "hybrid" architecture, which combines the strengths of multiple processor types, is an IBM hallmark. The design is analogous to that of a hybrid car, with similar benefits: had the NNSA supercomputer been built with standard x86 chips alone, the system would have been significantly larger and would have required much more power.
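The division of labor described above -- general-purpose cores handling control flow and data movement while specialized cores crunch the math-heavy kernels -- can be sketched as a simple host/accelerator offload pattern. This is a hypothetical illustration only, not Roadrunner's actual software; all function names here are invented:

```python
# Sketch of a hybrid offload pattern: "host" cores (Opteron-style) manage
# partitioning and bookkeeping, while a math-intensive kernel is dispatched
# to "accelerator" cores (PowerXCell-style).

def accelerator_kernel(chunk):
    """Stand-in for a math-intensive kernel run on accelerator cores."""
    return sum(x * x for x in chunk)

def host_compute(data, chunk_size=4):
    """Stand-in for host-side work: partition the data, dispatch each
    chunk to the accelerator, then combine the partial results."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    partials = [accelerator_kernel(c) for c in chunks]  # the "offload" step
    return sum(partials)

print(host_compute(list(range(8))))  # sum of squares 0..7 = 140
```

The point of the pattern is that the host never executes the hot loop itself; it only orchestrates, which is why a hybrid machine can match a homogeneous one with far fewer general-purpose processors.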
While the NNSA supercomputer will be used for ensuring the reliability and safety of the nation's nuclear weapons stockpile, it also sets the pace for future research in a variety of scientific and commercial fields including biotech, alternative energy, climate change and physics. IBM expects its hybrid design to lead the way to a commercial supercomputer platform that will support new scientific research and engineering workloads unthinkable just a decade ago.
IBM Sets the Pace for TOP500
IBM continued its pace-setting leadership of the TOP500 with a trifecta showing in the top three spots and a total of 210 systems on the list -- the most of any supercomputer vendor. IBM also had the most aggregate performance on the list with 5.6 petaflops (48% of total); and the most systems in the top 10, top 50, and top 100.
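The share quoted above implies the size of the full list's combined performance: 5.6 petaflops at 48% of the total works out to roughly 11.7 petaflops across all 500 systems. A quick check:

```python
# Back out the total TOP500 aggregate performance from IBM's stated share.
ibm_aggregate_pf = 5.6   # petaflops attributed to IBM systems
ibm_share = 0.48         # IBM's stated fraction of total list performance

total_pf = ibm_aggregate_pf / ibm_share
print(round(total_pf, 1))  # ~11.7 petaflops for the entire list
```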
The No. 2 fastest computer in the world is an IBM Blue Gene/L system at NNSA's Lawrence Livermore National Lab in California, which clocked in at 478 teraflops (478 trillion calculations per second). Team Blue Gene also held the No. 3 spot with a 450-teraflop performance from the Blue Gene/P system housed at the Department of Energy's Argonne National Lab outside Chicago.
IBM also fielded the most power-efficient systems on the list -- QS22 PowerXCell 8i processor-based supercomputers at IBM Germany and Fraunhofer, along with the NNSA system -- as well as the fastest machine in Europe, the Blue Gene/P at the Juelich Research Centre in Germany.
The "TOP500 Supercomputer Sites" is compiled and published by supercomputing experts Jack Dongarra from the University of Tennessee; Erich Strohmaier and Horst Simon of the Department of Energy's NERSC/Lawrence Berkeley National Laboratory; and Hans Meuer of the University of Mannheim (Germany). The entire list can be viewed at http://www.top500.org/.
For more information about IBM supercomputing, visit http://www-03.ibm.com/servers/deepcomputing/.
Watch a video about Roadrunner on YouTube: http://www.youtube.com/watch?v=bpA129SHSuI.
More news about Roadrunner: http://www-03.ibm.com/press/us/en/pressrelease/24405.wss.