November 04, 2005
The National Nuclear Security Administration has officially dedicated a pair of next-generation supercomputers that aim to ensure the U.S. nuclear weapons stockpile remains safe and reliable without nuclear testing. The IBM machines are housed at Lawrence Livermore National Laboratory.
NNSA administrator Linton F. Brooks said the dedication marks the culmination of a 10-year campaign to use supercomputers to run three-dimensional codes at lightning-fast speeds, performing much of the nuclear weapons analysis that was formerly accomplished by underground nuclear testing.
At an event in the LLNL Terascale Simulation Facility (TSF), Brooks also announced that the Blue Gene/L supercomputer performed a record 280.6 trillion operations per second on the industry-standard Linpack benchmark.
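For context, Linpack scores a machine by timing the solution of a large dense system of linear equations. A minimal sketch of the conventional scoring follows, assuming the standard HPL operation count of 2/3·n³ + 2·n²; the problem size and runtime below are hypothetical values chosen only to illustrate the scale of such a result, not figures reported for Blue Gene/L.

```python
# Illustrative sketch of how a Linpack (HPL) result is conventionally scored.
# The operation count 2/3*n**3 + 2*n**2 is the standard HPL convention; the
# sample n and runtime below are hypothetical, for illustration only.

def linpack_tflops(n: int, seconds: float) -> float:
    """Return performance in teraflops for solving an n x n dense system."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # standard HPL flop count
    return flops / seconds / 1e12

# Hypothetical run: a million-unknown system solved in about 40 minutes
print(f"{linpack_tflops(n=1_000_000, seconds=2376):.1f} Tflop/s")
```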
Purple, the other half of the most powerful supercomputing pair on earth, is a 100-teraflop machine built to conduct simulations of complete nuclear weapon performance. The IBM Power5 system is undergoing final acceptance tests at the TSF.
"The unprecedented computing power of these two supercomputers is more critical than ever to meet the time-urgent issues related to maintaining our nation's aging nuclear stockpile without testing," Brooks said. "Purple represents the culmination of a successful decade-long effort to create a powerful new class of supercomputers. Blue Gene/L points the way to the future and the computing power we will need to improve our ability to predict the behavior of the stockpile as it continues to age. These extraordinary efforts were made possible by a partnership with American industry that has reestablished American computing preeminence."
In a recent demonstration of its capability, Blue Gene/L ran a record-setting materials science application at 101.5 teraflops sustained over seven hours on the machine's 131,072 processors. The application is of direct importance to NNSA's effort to ensure the safety and reliability of the nation's nuclear deterrent. A teraflop is 1 trillion computer operations per second.
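To put those figures in perspective, a back-of-the-envelope calculation using only the numbers quoted above gives the sustained rate per processor and the total number of operations performed over the seven-hour run:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
sustained_tflops = 101.5          # trillion operations per second, sustained
processors = 131_072
hours = 7

per_processor_gflops = sustained_tflops * 1e12 / processors / 1e9
total_ops = sustained_tflops * 1e12 * hours * 3600

print(f"~{per_processor_gflops:.2f} Gflop/s per processor")
print(f"~{total_ops:.2e} operations over the full run")
```

That works out to roughly 0.77 Gflop/s per processor and on the order of 10^18 operations in total.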
Both machines were developed through NNSA's Advanced Simulation and Computing program and join a series of other supercomputers at Sandia and Los Alamos national laboratories dedicated to NNSA's Stockpile Stewardship effort to maintain the nation's nuclear deterrent through science-based computation, theory and experiment.
Together, the Purple and Blue Gene/L systems will put an astounding half a petaflop of peak performance at the disposal of scientists and engineers at Sandia, Los Alamos and Lawrence Livermore national laboratories. That is more supercomputing power than is available at any other scientific computing facility in the world.
"Today marks another important milestone in the DOE Office of Science and NNSA partnership to revitalize the U.S effort in high-end computing," said Raymond L. Orbach, director of the Department of Energy's Office of Science. "NNSA and the Office of Science have leveraged resources in the areas of operating systems, systems software and on advanced computer evaluations to the benefit of both organizations. The ASC Purple and Blue Gene/L machines at Livermore are the latest in an increasingly sophisticated suite of supercomputers across the DOE complex. Together the NNSA and Office of Science high performance computing programs serve to advance U.S. energy, economic and national security by accelerating the development of new energy technologies, aiding in the discovery of new scientific knowledge, and simulating and predicting the behavior of nuclear weapons."
"The partnership between the National Nuclear Security Administration, Lawrence Livermore National Laboratories and IBM demonstrates the type of innovation that is possible when advanced science and computing power are applied to some of the most difficult challenges facing society," said Nick Donofrio, IBM executive vice president for innovation and technology. "Blue Gene/L and ASC Purple are prime examples of collaborative innovation at its best -- together, we are pushing the boundaries of insight and invention to advance national security interests in ways never before possible."
"The early success of the recent code runs on Blue Gene/L represents important scientific achievements and a big step toward achieving the capabilities we need to succeed in our stockpile stewardship mission," said Michael Anastasio, LLNL's director. "Blue Gene/L allows us to address computationally taxing stockpile science issues. And these code runs provide a glimpse at the exciting and important stockpile science data to come."
The record-setting 101.5-teraflop materials science calculation simulated the cooling process in a molten uranium system, a material and process of importance to stockpile stewardship. It was the largest simulation of its kind ever attempted and demonstrates that Blue Gene/L's architecture can handle real-world applications. The figure is also significant because it was sustained over a long period of time and achieved with a scientific code that will be one of the workhorse codes running on the machine.
Blue Gene/L will move into classified production in February to address problems of materials aging. The machine is primarily intended for stockpile science molecular dynamics and turbulence calculations. High peak speed, superb scalability for molecular dynamics codes, low cost and low power consumption make it an ideal platform for this area of science.
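To give a flavor of what a molecular dynamics calculation does, here is a minimal textbook-style sketch: a velocity-Verlet integrator with a Lennard-Jones pair potential. This is a generic illustration only; the names, parameters and potential are hypothetical, and the actual stockpile-science codes running on Blue Gene/L are vastly more sophisticated.

```python
import numpy as np

# Generic molecular dynamics sketch: velocity-Verlet time integration with
# a Lennard-Jones pair potential. Illustrative only; not the production code.

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small set of particles."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            inv6 = (sigma**2 / d2) ** 3
            # -dU/dr written so that f = coef * r acts along the separation
            coef = 24.0 * eps * (2.0 * inv6**2 - inv6) / d2
            f[i] += coef * r
            f[j] -= coef * r
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=100, mass=1.0):
    """Advance positions and velocities with the velocity-Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Tiny demo: eight particles on a slightly jittered cubic lattice
rng = np.random.default_rng(0)
pos = np.indices((2, 2, 2)).reshape(3, -1).T * 1.5 + rng.normal(0, 0.01, (8, 3))
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, steps=10)
print("kinetic energy:", 0.5 * np.sum(vel**2))
```

Production codes apply the same basic time-stepping idea to billions of atoms with far more realistic interatomic potentials, which is why per-step scalability across all 131,072 processors matters so much.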
Purple consists of a 94-teraflop classified environment and a 6-teraflop unclassified environment. It represents the culmination of 10 years of work by the ASC program to develop a computer that could effectively run the newly developed 3D weapons codes needed to simulate complete nuclear weapon performance. The machine's design, or "architecture," with large-memory, powerful processors and massive network bandwidth, is ideal for this purpose. The insights and data gained from materials aging calculations run on Blue Gene/L will be vital for creating the improved models to be used in future full weapons performance simulations on Purple.
The systems are part of an approximately $200 million contract with IBM and were delivered on schedule and within budget. The machines were designed to meet distinct requirements in weapons simulations and materials science, and dividing those requirements across two machines, rather than building a single machine to meet them all, proved an efficient and cost-effective way to meet program objectives.