June 23, 2009
ARMONK, NY, June 23 -- For a record-setting tenth consecutive time, an IBM system holds the number one position in the ranking of the world's most powerful supercomputers. The IBM computer built for the "Roadrunner" project at Los Alamos National Lab -- the first in the world to operate at speeds faster than one quadrillion calculations per second (one petaflop) -- remains the world speed champion.
IBM also declared its intent to break the exaflop barrier, and announced that it had created a research 'collaboratory' in Dublin, in partnership with the Industrial Development Agency (IDA) of Ireland, which is focused on both achieving exascale computing and making it useful to business. An exaflop is a million trillion calculations per second, which is 1000 times faster than today's petaflop-class systems.
The latest semi-annual ranking of the World's TOP500 Supercomputer Sites was released today during the International Supercomputing Conference in Hamburg, Germany. Results show the IBM system at Los Alamos National Lab, which clocked in at 1.105 petaflops, is nearly three times as energy-efficient as the number two computer while delivering similar petascale computing power. IBM's number one system performs 444.9 megaflops per watt of energy, compared with only 154.2 megaflops per watt for the number two system.
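As a rough back-of-the-envelope check on those figures, the sketch below derives the efficiency ratio and the implied power draw from the numbers quoted in the release (444.9 and 154.2 megaflops per watt, 1.105 petaflops); the derived wattages are illustrative estimates, not measured values.

```python
# Back-of-the-envelope check of the energy-efficiency figures cited above.
# The flops-per-watt values come from the release; everything derived from
# them here is an illustrative estimate, not a measured number.

PETAFLOP = 1.0e15          # calculations per second
EXAFLOP = 1.0e18           # 1000x a petaflop

roadrunner_perf = 1.105 * PETAFLOP   # sustained performance of the number one system
roadrunner_eff = 444.9e6             # flops per watt (number one system)
runner_up_eff = 154.2e6              # flops per watt (number two system)

# Efficiency ratio quoted as "nearly three times"
print(f"efficiency ratio: {roadrunner_eff / runner_up_eff:.2f}x")          # ~2.89x

# Estimated power needed to sustain 1.105 petaflops at each efficiency
print(f"at 444.9 MF/W: {roadrunner_perf / roadrunner_eff / 1e6:.2f} MW")   # ~2.48 MW
print(f"at 154.2 MF/W: {roadrunner_perf / runner_up_eff / 1e6:.2f} MW")    # ~7.17 MW

# Scale factor between petaflop- and exaflop-class systems
print(f"exaflop / petaflop = {EXAFLOP / PETAFLOP:.0f}x")
```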
Additional highlights from the list include:
IBM sets sights on Exascale Systems for a Smarter Planet
Having ushered in the petaflop era a year ago, IBM has established a research collaboratory in Dublin, Ireland, in partnership with the IDA, focused on achieving exascale computing and on making it beneficial to business through technologies such as stream computing, which analyzes massive amounts of real-time data. This is the first collaboratory that IBM has announced, and the company intends to create more around the world.
"It's an honor to hold the record for the world's most powerful computer, but what is critical is building supercomputers that help advance the global economy and society at large," said David Turek, vice president, IBM Deep Computing. "IBM was the first to break the petaflop barrier and we will continue to apply lessons learned as we march toward the exaflop barrier."
An IBM collaboratory is a laboratory where IBM Researchers co-locate with a university, government, or commercial partner to share skills, assets, and resources to achieve a common research goal.
IBM Researchers are already at work with government and academic leaders to develop exascale systems that will help solve the complex business and scientific problems of the future. This research collaboratory will enable IBM supercomputing and multidisciplinary experts to work directly with university researchers from Trinity College Dublin, Tyndall National Institute in Cork, National University of Ireland Galway, University College Cork and IRCSET, the Irish Research Council for Science, Engineering and Technology, to develop computing architectures and technologies that can overcome current limitations -- such as space and energy consumption -- in handling and analyzing massive volumes of real-time data.
The technical research will explore innovative ways of using new memory architectures, interconnecting technologies and fabric structures, and will evaluate business applications that would benefit from an exascale streaming platform.
While high performance computing today primarily focuses on scientific applications in areas such as physics or medicine, the exascale research in Dublin will also examine how these powerful new computing systems can be applied to solving complex business problems, combining technical research with applications research. For example, the applications research for exascale computing will study financial services use cases, such as real-time, intelligent analysis of a company's valuation built from business models fed by investor profiles, live market trading and RSS news feeds.
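As a purely illustrative sketch of what such a streaming valuation pipeline might look like -- the event sources, weights and estimate_valuation function below are hypothetical assumptions, not part of IBM's stream computing platform -- consider:

```python
# Hypothetical sketch: merge market trades, investor-profile updates and news
# items into a rolling valuation estimate. All names, weights and data here
# are illustrative assumptions.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Event:
    source: str    # "trade", "investor", or "news"
    signal: float  # normalized signal in [-1, 1]

def estimate_valuation(base: float, events: Iterable[Event]) -> Iterator[float]:
    """Yield an updated valuation estimate after each incoming event."""
    weights = {"trade": 0.6, "investor": 0.3, "news": 0.1}  # assumed weights
    valuation = base
    for ev in events:
        valuation *= 1.0 + weights[ev.source] * ev.signal * 0.01
        yield valuation

# Toy usage: three events nudge a $10B valuation up or down.
stream = [Event("trade", 0.5), Event("news", -0.8), Event("investor", 0.2)]
for v in estimate_valuation(10e9, stream):
    print(f"valuation estimate: ${v:,.0f}")
```

A real exascale streaming platform would, of course, ingest these feeds continuously and at vastly larger scale; the sketch only shows the shape of the computation.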
"IBM led the industry in breaking the petaflop barrier last year," continued Turek. "Developing exascale systems challenge space and energy limitations, requiring extremely sophisticated systems management and application software that can take advantage of this computational capability. This new collaboratory is already at work solving some of these issues."
As future computing platforms are expected to dissipate orders of magnitude more power, researchers believe that efficiently cooling these large systems will be one of the most important factors in next-generation development. Making computing systems and datacenters energy-efficient is a staggering undertaking.
In fact, up to 50 percent of an average air-cooled datacenter's carbon footprint or energy consumption today is not caused by computing but by powering the necessary cooling systems to keep the processors from overheating -- a situation that is far from optimal when looking at energy efficiency from a holistic perspective. IBM has numerous leading edge research projects underway that are addressing these "energy aware" hurdles.
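To put that 50 percent figure in perspective, the small calculation below is purely illustrative (the IT load is an assumed number used only to show the arithmetic): if cooling consumes as much energy as the computing equipment itself, the facility's power usage effectiveness (PUE) works out to roughly 2.0.

```python
# Illustrative arithmetic only: if cooling accounts for up to 50% of a
# datacenter's energy use, the cooling overhead roughly matches the IT load.
it_load_kw = 1000.0       # assumed IT equipment power draw (hypothetical)
cooling_fraction = 0.50   # "up to 50 percent" from the text

# If cooling is 50% of total facility energy, total = IT / (1 - fraction)
total_facility_kw = it_load_kw / (1.0 - cooling_fraction)
cooling_kw = total_facility_kw - it_load_kw

pue = total_facility_kw / it_load_kw  # power usage effectiveness
print(f"cooling load: {cooling_kw:.0f} kW, PUE ~ {pue:.1f}")  # 1000 kW, PUE ~ 2.0
```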
Just today, IBM and the Swiss Federal Institute of Technology Zurich unveiled plans to build a first-of-a-kind water-cooled supercomputer that will directly reuse excess heat to warm university buildings. The innovative system is expected to decrease its carbon footprint by up to 85 percent and is estimated to save up to 30 tons of CO2 per year, compared with a similar system using today's cooling technologies.
IBM provides a broader portfolio of systems, storage and software technology to the supercomputing market than any other vendor. The company's innovative HPC solutions have created a new scientific force for tackling the world's grand challenges -- climate science, the hunt for new sources of energy, and the creation of new gene-based medicines -- and have made significant contributions to basic scientific inquiry in physics and biology.
The "TOP500 Supercomputer Sites" is compiled by Hans Meuer of the University of Mannheim, Germany; Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory; and Jack Dongarra of the University of Tennessee, Knoxville.
For more information about IBM supercomputing, visit www.IBM.com/deepcomputing.