Blue Gene/L Hailed as Fastest Supercomputer

By Nicole Hemsoth

November 4, 2005

The National Nuclear Security Administration has officially dedicated a pair of next-generation supercomputers that aim to ensure the U.S. nuclear weapons stockpile remains safe and reliable without nuclear testing. The IBM machines are housed at Lawrence Livermore National Laboratory.

NNSA administrator Linton F. Brooks said the dedication marks the culmination of a 10-year campaign to use supercomputers to run three-dimensional codes at lightning-fast speeds to achieve much of the nuclear weapons analysis that was formerly accomplished by underground nuclear testing.

At an event in the LLNL Terascale Simulation Facility, Brooks also announced that the Blue Gene/L supercomputer performed a record 280.6 trillion operations per second (280.6 teraflops) on the industry-standard Linpack benchmark.
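
For context, Linpack measures the rate at which a machine solves a large dense system of linear equations: the reported figure is the nominal operation count of that solve, roughly (2/3)n^3 + 2n^2 floating-point operations, divided by the wall-clock time. The sketch below illustrates that bookkeeping on a single node with NumPy; it is only an illustration of the metric, with an arbitrarily chosen problem size, and is not the parallel HPL code that produced the 280.6-teraflop result.

```python
# Toy illustration of how a Linpack-style rate is derived: solve a dense
# system Ax = b and divide the nominal operation count (~2/3*n^3 + 2*n^2)
# by the wall-clock time. This is a single-node sketch, not the HPL code
# that produced the 280.6-teraflop figure.
import time
import numpy as np

n = 2000                                   # toy problem size, chosen arbitrarily
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # nominal Linpack operation count
print(f"residual      : {np.linalg.norm(A @ x - b):.2e}")
print(f"achieved rate : {flops / elapsed / 1e9:.1f} gigaflops")
```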

Purple, the other half of the most powerful supercomputing twosome on Earth, is a 100-teraflop machine built to conduct simulations of complete nuclear weapon performance. The IBM Power5 system is undergoing final acceptance tests at the TSF.

“The unprecedented computing power of these two supercomputers is more critical than ever to meet the time-urgent issues related to maintaining our nation's aging nuclear stockpile without testing,” Brooks said. “Purple represents the culmination of a successful decade-long effort to create a powerful new class of supercomputers. Blue Gene/L points the way to the future and the computing power we will need to improve our ability to predict the behavior of the stockpile as it continues to age. These extraordinary efforts were made possible by a partnership with American industry that has reestablished American computing preeminence.”

In a recent demonstration of its capability, Blue Gene/L sustained a record 101.5 teraflops over seven hours on the machine's 131,072 processors while running a materials science application of importance to NNSA's effort to ensure the safety and reliability of the nation's nuclear deterrent. A teraflop is 1 trillion floating-point operations per second.
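
For scale, a rate of 101.5 teraflops held for seven hours works out to roughly 101.5 × 10^12 × 7 × 3,600 ≈ 2.6 × 10^18 floating-point operations performed over the course of the run.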

Both machines were developed through NNSA's Advanced Simulation and Computing program and join a series of other supercomputers at Sandia and Los Alamos national laboratories dedicated to NNSA's Stockpile Stewardship effort to maintain the nation's nuclear deterrent through science-based computation, theory and experiment.

Together, the Purple and Blue Gene/L systems will put an astounding half a petaflop of peak performance at the disposal of scientists and engineers at Sandia, Los Alamos and Lawrence Livermore national laboratories, more supercomputing power than is available at any other scientific computing facility in the world.

“Today marks another important milestone in the DOE Office of Science and NNSA partnership to revitalize the U.S. effort in high-end computing,” said Raymond L. Orbach, director of the Department of Energy's Office of Science. “NNSA and the Office of Science have leveraged resources in the areas of operating systems, systems software and advanced computer evaluations to the benefit of both organizations. The ASC Purple and Blue Gene/L machines at Livermore are the latest in an increasingly sophisticated suite of supercomputers across the DOE complex. Together, the NNSA and Office of Science high performance computing programs serve to advance U.S. energy, economic and national security by accelerating the development of new energy technologies, aiding in the discovery of new scientific knowledge, and simulating and predicting the behavior of nuclear weapons.”

“The partnership between the National Nuclear Security Administration, Lawrence Livermore National Laboratory and IBM demonstrates the type of innovation that is possible when advanced science and computing power are applied to some of the most difficult challenges facing society,” said Nick Donofrio, IBM executive vice president for innovation and technology. “Blue Gene/L and ASC Purple are prime examples of collaborative innovation at its best — together, we are pushing the boundaries of insight and invention to advance national security interests in ways never before possible.”

“The early success of the recent code runs on Blue Gene/L represents important scientific achievements and a big step toward achieving the capabilities we need to succeed in our stockpile stewardship mission,” said Michael Anastasio, LLNL's director. “Blue Gene/L allows us to address computationally taxing stockpile science issues. And these code runs provide a glimpse at the exciting and important stockpile science data to come.”

The record-setting 101.5-teraflop materials science calculation simulated the cooling process in a molten actinide (uranium) system, a material and process of importance to stockpile stewardship. This was the largest simulation of its kind ever attempted and demonstrates that Blue Gene/L's architecture can handle real-world applications. The figure is also significant because it was sustained over a long period and was achieved with a scientific code that will be one of the workhorse codes running on the machine.

Blue Gene/L will move into classified production in February to address problems of materials aging. The machine is primarily intended for stockpile-science molecular dynamics and turbulence calculations; its high peak speed, superb scalability for molecular dynamics codes, low cost and low power consumption make it an ideal fit for this area of science.
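
To give a sense of what this workload class looks like, the toy sketch below shows the core loop of a classical molecular dynamics code: compute pairwise forces (here from a Lennard-Jones potential) and advance positions and velocities with a velocity-Verlet integrator. It is a serial, few-dozen-atom illustration of the computational pattern, not the massively parallel stockpile-science codes that run on Blue Gene/L.

```python
# Toy serial molecular dynamics sketch: Lennard-Jones forces plus a
# velocity-Verlet integrator. Illustrates the pattern of computation only;
# production codes on Blue Gene/L are massively parallel and far richer.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and total potential energy (O(N^2), no cutoff)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r2 = float(np.dot(r_vec, r_vec))
            sr6 = (sigma * sigma / r2) ** 3          # (sigma/r)^6
            energy += 4.0 * eps * (sr6 * sr6 - sr6)
            f = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2 * r_vec
            forces[i] += f
            forces[j] -= f
    return forces, energy

def velocity_verlet(pos, vel, dt=1e-3, steps=50, mass=1.0):
    """Advance positions and velocities with the velocity-Verlet integrator."""
    forces, energy = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * forces / mass
        pos += dt * vel
        forces, energy = lj_forces(pos)
        vel += 0.5 * dt * forces / mass
    return pos, vel, energy

# 64 atoms on a simple cubic lattice, spaced slightly beyond the LJ minimum
spacing = 1.2
grid = np.arange(4) * spacing
positions = np.array([[x, y, z] for x in grid for y in grid for z in grid], dtype=float)
velocities = np.zeros_like(positions)

positions, velocities, energy = velocity_verlet(positions, velocities)
print(f"final potential energy: {energy:.3f}")
```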

Purple consists of a 94-teraflop classified environment and a six-teraflop unclassified environment. It represents the culmination of 10 years of work by the ASC program to develop a computer that could effectively run the newly developed 3D weapons codes needed to simulate complete nuclear weapon performance. The machine's design, or “architecture,” with large memory, powerful processors and massive network bandwidth, is ideal for this purpose. The insights and data gained from materials aging calculations run on Blue Gene/L will be vital for creating improved models for future full weapons performance simulations on Purple.

The systems are part of an approximately $200 million contract with IBM and were delivered on schedule and within budget. The machines were designed to meet requirements in weapons simulations and materials science, and dividing those requirements across two machines, rather than building a single machine to meet them all, proved to be an efficient and cost-effective way to meet program objectives.
