November 16, 2009
Jaguar is now the world's fastest supercomputer
OAK RIDGE, Tenn., Nov. 16 -- An upgrade to a Cray XT5 high-performance computing system deployed by the Department of Energy has made the "Jaguar" supercomputer the world's fastest. Located at Oak Ridge National Laboratory, Jaguar is the scientific research community's most powerful computational tool for exploring solutions to some of today's most difficult problems. The upgrade, funded with $19.9 million under the Recovery Act, will enable scientific simulations for exploring solutions to climate change and the development of new energy technologies.
"Supercomputer modeling and simulation is changing the face of science and sharpening America's competitive edge," said Secretary of Energy Steven Chu. "Oak Ridge and other DOE national laboratories are helping address major energy and climate challenges and lead America toward a clean energy future."
To net the number-one spot on the TOP500 list of the world's fastest supercomputers, Jaguar's Cray XT5 component was upgraded this fall from four-core to six-core processors and ran a benchmark program called High-Performance Linpack (HPL) at a speed of 1.759 petaflop/s (quadrillion floating point operations, or calculations, per second). The rankings were announced today in Portland, Ore., at SC09, an international supercomputing conference.
In 2004, DOE's Office of Science set out to create a user facility that would provide scientists with world-leading computational research tools. One result was the Oak Ridge Leadership Computing Facility, which supports national science priorities through the deployment and operation of the most advanced supercomputers available to the scientific community.
"Our computational center works closely with the science teams to effectively use a computer system of this size and capability," said James Hack, director of the National Center for Computational Sciences that houses Jaguar in the Oak Ridge Leadership Computing Facility.
Jaguar began service in 2005 with a peak speed of 26 teraflop/s (trillion calculations per second) and, through a series of upgrades in the ensuing years, grew to roughly 100 times that computational performance. The 2009 upgrade of the Jaguar XT5 to 37,376 six-core AMD Istanbul processors increased performance 70 percent over that of its quad-core predecessor.
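Those figures can be sanity-checked with a bit of arithmetic. The sketch below assumes a 2.6 GHz clock and four double-precision floating-point operations per core per cycle, values typical of the Istanbul parts but not stated in the article, and works out the theoretical peak, the HPL efficiency implied by the 1.759 petaflop/s benchmark result, and the growth over the 2005 system.

```python
# Back-of-the-envelope check on Jaguar's upgrade figures.
# Assumed machine parameters (NOT stated in the article):
#   - 2.6 GHz clock, typical of six-core AMD Istanbul parts
#   - 4 double-precision flops per core per cycle (SSE: 2 adds + 2 multiplies)

processors = 37_376          # six-core Istanbul sockets (from the article)
cores_per_processor = 6
clock_hz = 2.6e9             # assumed clock speed
flops_per_cycle = 4          # assumed DP flops per core per cycle

cores = processors * cores_per_processor            # 224,256 cores
peak_flops = cores * clock_hz * flops_per_cycle     # ~2.33e15, i.e. ~2.33 petaflop/s

hpl_flops = 1.759e15         # measured High-Performance Linpack speed (from the article)
efficiency = hpl_flops / peak_flops                 # ~0.75

growth = peak_flops / 26e12  # vs. the 26 teraflop/s 2005 system: ~90x, "100 times" in round numbers

print(f"cores: {cores:,}")
print(f"theoretical peak: {peak_flops / 1e15:.2f} petaflop/s")
print(f"HPL efficiency: {efficiency:.0%}")
print(f"growth over 2005 peak: ~{growth:.0f}x")
```

Under those assumptions the HPL run achieved about 75 percent of theoretical peak, a figure in the range typically reported for systems of this class.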
Researchers anticipate that this unprecedented growth in computing capacity may help facilitate improved climate predictions, fuel-efficient engine designs, a better understanding of the origin of the universe and the underpinnings of health and disease, and the creation of advanced materials for energy production, transmission, and storage.
The Oak Ridge computing complex is home to two petascale machines. In addition to DOE's Jaguar system, the National Institute for Computational Sciences, a partnership between the University of Tennessee and ORNL, operates another petascale Cray XT5 system known as Kraken, which was ranked third on the November TOP500 list at a speed of 831.7 teraflop/s.
"The purpose of these machines is to enable the scientific community to tackle problems of such complexity that they demand a well tuned combination of the best hardware, optimized software, and a community of researchers dedicated to revealing new phenomena through modeling and simulations" said ORNL Director Thom Mason. "Oak Ridge is proud to help the Department of Energy address some of the world's most daunting scientific challenges."
Simulations on Jaguar have primarily focused on energy technologies and climate change resulting from global energy use. Scientists have explored the causes and impacts of climate change, the enzymatic breakdown of cellulose to improve biofuels production, coal gasification processes to help industry design near-zero-emission plants, fuel combustion to aid development of engines that are clean and efficient, and radio waves that heat and control fuel in a fusion reactor.
"The early petascale results indicate that Jaguar will continue to accelerate the Department of Energy's mission of breakthrough science," said Jeff Nichols, ORNL's associate laboratory director for computing and computational sciences. "With increased computational capability, the scientific research community is able to obtain results faster, understand better the complexities involved, and provide critical information to policy-makers."
Hack, a leading climate modeler, concurs. "The speed and power of petascale computing enables researchers to explore increased complexity in dynamic systems," he said. As an example he cited the world's first continuous simulation showing abrupt climate change, led by scientists at the University of Wisconsin and the National Center for Atmospheric Research. The simulation ran on Jaguar earlier this year, and the computer's speed made it possible to publish the results in Science by July.
Through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, DOE's leadership computing facilities at Oak Ridge and Argonne national laboratories will employ a competitive peer review process to allocate researchers 1.6 billion processor hours in 2010. In 2009 the Oak Ridge Leadership Computing Facility allocated 470 million processor hours on Jaguar through the INCITE program.
Scientists in industry, academia, and government have requested more than 2 billion processor hours on Jaguar for 2010. The six-core upgrade on Jaguar will enable Oak Ridge to allocate 1 billion processor hours. Equipped with unprecedented computer power, materials scientists can simulate superconducting materials and magnetic nanoparticles with greater realism. Climate scientists can improve accuracy, resolution, and complexity of Earth system models, and physicists can simulate quarks and explore masses, decays, and other properties of the fundamental constituents of matter.
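For a rough sense of scale, the sketch below relates those processor-hour figures to the upgraded machine's 224,256 cores (37,376 processors times six cores each). It is simple arithmetic on the numbers quoted above, not official allocation accounting.

```python
# Rough scale of the 2010 INCITE allocation on Jaguar (figures from the article).

cores = 37_376 * 6                  # 224,256 cores after the six-core upgrade
hours_per_year = 365 * 24           # 8,760 wall-clock hours in a year

allocated = 1.0e9                   # processor hours Oak Ridge can allocate for 2010
requested = 2.0e9                   # processor hours requested for 2010

machine_capacity = cores * hours_per_year          # ~1.96e9 processor hours per year
fraction_allocated = allocated / machine_capacity  # ~51% of the machine, before any downtime
oversubscription = requested / allocated           # ~2x more demand than supply

print(f"yearly capacity: {machine_capacity / 1e9:.2f} billion processor hours")
print(f"INCITE share of the machine: {fraction_allocated:.0%}")
print(f"oversubscription: {oversubscription:.1f}x")
```

In other words, even devoting roughly half the machine's annual capacity to INCITE leaves demand about twice as large as supply.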
Oak Ridge National Laboratory is managed by UT-Battelle for the Department of Energy.
Source: Oak Ridge National Laboratory