June 23, 2009
ARMONK, NY, June 23 -- For a record-setting tenth consecutive time, an IBM system holds the number one position in the ranking of the world's most powerful supercomputers. The IBM computer built for the "Roadrunner" project at Los Alamos National Laboratory -- the first in the world to operate at speeds faster than one quadrillion calculations per second (a petaflop) -- remains the world speed champion.
IBM also declared its intent to break the exaflop barrier, and announced that it had created a research 'collaboratory' in Dublin, in partnership with the Industrial Development Agency (IDA) of Ireland, which is focused on both achieving exascale computing and making it useful to business. An exaflop is a million trillion calculations per second, which is 1000 times faster than today's petaflop-class systems.
The latest semi-annual ranking of the world's TOP500 Supercomputer Sites was released today during the International Supercomputing Conference in Hamburg, Germany. Results show the IBM system at Los Alamos National Laboratory, which clocked in at 1.105 petaflops, is nearly three times as energy-efficient as the number two computer while delivering similar petascale performance. IBM's number one system performs 444.9 megaflops per watt of energy, compared with only 154.2 megaflops per watt for the number two system.
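The "nearly three times" claim follows directly from the two efficiency figures quoted above; a quick check (variable names are ours, the values come from the article):

```python
# Efficiency figures as reported in this article (megaflops per watt).
roadrunner_mflops_per_watt = 444.9  # IBM system at Los Alamos (number one)
runner_up_mflops_per_watt = 154.2   # number two system on the list

# Ratio of the two: how many times more work per watt the top system does.
ratio = roadrunner_mflops_per_watt / runner_up_mflops_per_watt
print(f"Roadrunner is {ratio:.2f}x as energy-efficient as the number two system")
```

The ratio works out to roughly 2.9, which is the "nearly three times" figure in the text.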
IBM sets sights on Exascale Systems for a Smarter Planet
Having ushered in the petaflop era a year ago, IBM has established a research collaboratory in Dublin, Ireland, in partnership with the IDA, focused on achieving exascale computing and making it beneficial for businesses through technologies like stream computing, which analyzes massive amounts of real-time data. This is the first collaboratory IBM has announced, and the company intends to create more around the world.
"It's an honor to hold the record for the world's most powerful computer, but what is critical is building supercomputers that help advance the global economy and society at large," said David Turek, vice president, IBM Deep Computing. "IBM was the first to break the petaflop barrier and we will continue to apply lessons learned as we march toward the exaflop barrier."
An IBM collaboratory is a laboratory where IBM Researchers co-locate with a university, government, or commercial partner to share skills, assets, and resources to achieve a common research goal.
IBM Researchers are already at work with government and academic leaders to develop exascale systems that will help solve the complex business and scientific problems of the future. This research collaboratory will enable IBM supercomputing and multidisciplinary experts to work directly with university researchers from Trinity College Dublin, Tyndall National Institute in Cork, National University of Ireland Galway, University College Cork and IRCSET, the Irish Research Council for Science, Engineering and Technology, to develop computing architectures and technologies that can overcome current limitations -- such as space and energy consumption -- on processing and analyzing massive volumes of real-time data.
The technical research will explore innovative ways of using new memory architectures, interconnecting technologies and fabric structures, and will evaluate business applications that would benefit from an exascale streaming platform.
While high performance computing today primarily focuses on scientific applications in areas such as physics or medicine, the exascale research in Dublin will also explore how these powerful new systems can be applied to complex business problems. For example, the applications research for exascale computing will study financial services using real-time, intelligent analysis of a company's valuation, built from business models drawing on investor profiles, live market trading and RSS news feeds.
"IBM led the industry in breaking the petaflop barrier last year," continued Turek. "Exascale systems will challenge space and energy limitations, and will require extremely sophisticated systems management and application software to take advantage of this computational capability. This new collaboratory is already at work solving some of these issues."
Because future computing platforms are expected to dissipate orders of magnitude more power, researchers believe that efficiently cooling these large systems will be one of the most important factors in next-generation development. Making computing systems and datacenters energy-efficient is a staggering undertaking.
In fact, up to 50 percent of an average air-cooled datacenter's carbon footprint or energy consumption today is not caused by computing but by powering the necessary cooling systems to keep the processors from overheating -- a situation that is far from optimal when looking at energy efficiency from a holistic perspective. IBM has numerous leading edge research projects underway that are addressing these "energy aware" hurdles.
Just today, IBM and the Swiss Federal Institute of Technology Zurich unveiled plans to build a first-of-a-kind water-cooled supercomputer that will directly repurpose excess heat for university buildings. The innovative system is expected to cut its carbon footprint by up to 85 percent, saving an estimated 30 tons of CO2 per year compared with a similar system using today's cooling technologies.
IBM provides a broad portfolio of systems, storage and software technology to the supercomputing market, more than any other vendor. The company's innovative HPC solutions have created a new scientific force for tackling the world's grand challenges around climate science, the hunt for new sources of energy, creating new gene-based medicines, and have made significant contributions to basic scientific inquiry in physics and biology.
The "TOP500 Supercomputer Sites" is compiled by Hans Meuer of the University of Mannheim, Germany; Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory; and Jack Dongarra of the University of Tennessee, Knoxville.
For more information about IBM supercomputing, visit www.IBM.com/deepcomputing.