June 18, 2008
ARGONNE, Ill., June 18 -- The IBM Blue Gene/P high-performance computing system at the U.S. Department of Energy's (DOE) Argonne National Laboratory is now the fastest supercomputer in the world for open science, according to the semiannual Top500 List of the world's fastest computers.
The Top500 List was announced today during the International Supercomputing Conference in Dresden, Germany.
The Blue Gene/P -- known as Intrepid and located at the Argonne Leadership Computing Facility (ALCF) -- also ranked third fastest overall. Both rankings represent the first time an Argonne-based supercomputing system has ranked in the top five of the industry's definitive list of supercomputers.
The Blue Gene/P has a peak performance of 557 teraflops, or 557 trillion calculations per second. Intrepid achieved a sustained speed of 450.3 teraflops on the Linpack benchmark used to determine the Top500 rankings.
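For readers who want to relate the two figures, the ratio of the measured Linpack result to the theoretical peak gives the system's efficiency on the benchmark. A minimal Python sketch of that arithmetic, using only the numbers quoted above, follows.

```python
# Illustrative arithmetic only: relate Intrepid's quoted peak performance
# to its measured Linpack (Top500) result. Figures are taken from the article.
rpeak_tflops = 557.0    # theoretical peak, in teraflops
rmax_tflops = 450.3     # measured Linpack result, in teraflops

# One teraflop/s is 10**12 floating-point operations per second.
rpeak_flops = rpeak_tflops * 1e12

# Linpack efficiency: fraction of peak actually sustained on the benchmark.
efficiency = rmax_tflops / rpeak_tflops

print(f"Peak: {rpeak_flops:.3e} flop/s")
print(f"Linpack efficiency: {efficiency:.1%}")   # roughly 81%
```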
"Intrepid's speed and power reflect the DOE Office of Science's determined effort to provide the research and development community with powerful tools that enable them to make innovative and high-impact science and engineering breakthroughs," said Rick Stevens, associate laboratory director for computing, environmental and life sciences at Argonne.
"The ALCF and Intrepid have only just begun to have a meaningful impact on scientific research," Stevens said. "In addition, continued expansion of ALCF computing resources will not only be instrumental in addressing critical scientific research challenges related to climate change, energy, health and our basic understanding of the world, but in the future will transform and advance how science research and engineering experiments are conducted and attract social sciences research projects, as well."
"Scientists and society are already benefiting from ALCF resources," said Peter Beckman, ALCF acting director. "For example, ALCF's Blue Gene resources have allowed researchers to make major strides in evaluating the molecular and environmental features that may lead to the clinical diagnosis of Parkinson's disease and Lewy body dementia, as well as to simulate materials and designs that are important to the safe and reliable use of nuclear energy plants."
Eighty percent of Intrepid's computing time has been set aside for open science research through the DOE Office of Science's (SC) highly selective Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. There are currently 20 INCITE projects at the ALCF that will use 111 million hours of computing time this year. SC's Office of Advanced Scientific Computing Research provides high-end computing capability through large-scale installations used by scientists and engineers in many disciplines.
The Top500 List is compiled by Hans Meuer of the University of Mannheim in Germany; Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory; and Erich Strohmaier and Horst Simon of DOE's National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. The list made its debut in June 1993, when the No. 1 system was a Thinking Machines Corporation CM-5 at DOE's Los Alamos National Laboratory, with 1,024 processors and a peak performance of 131 gigaflops.
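As a rough illustration of how much top-end peak performance grew between those two lists, the following Python sketch (my own arithmetic, using only the 131-gigaflop and 557-teraflop figures and the 1993 and 2008 dates quoted here) works out the overall factor and the implied doubling time.

```python
import math

# Illustrative arithmetic only: compare the peak performance of the first
# Top500 No. 1 system (CM-5, June 1993) with Intrepid (June 2008).
# Both figures are quoted in the article.
cm5_peak_flops = 131e9        # 131 gigaflops
intrepid_peak_flops = 557e12  # 557 teraflops
years = 2008 - 1993

growth = intrepid_peak_flops / cm5_peak_flops          # ~4,250x overall
annual_rate = growth ** (1 / years) - 1                # implied yearly growth
doubling_time_years = years * math.log(2) / math.log(growth)

print(f"Overall growth: {growth:,.0f}x over {years} years")
print(f"Implied annual growth: {annual_rate:.0%}")
print(f"Implied doubling time: {doubling_time_years * 12:.0f} months")
```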
About Argonne National Laboratory
The U.S. Department of Energy's Argonne National Laboratory brings the world's brightest scientists and engineers together to find exciting and creative new solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory