Twenty years ago President Bill Clinton announced that the United States would maintain its nuclear arsenal without nuclear explosive testing. The challenge, of course, was how to actually carry out such a daunting task. The instruments were the tremendously successful Stockpile Stewardship Program (SSP) and the Accelerated Strategic Computing Initiative (ASCI), which together drove much of supercomputing in the U.S. for some time.
Today, the White House and DOE marked the anniversary with an event including comments from Secretary of State John Kerry, Secretary of Energy Ernest Moniz, and two panels featuring the directors of the three national labs involved at ASCI’s start – Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories. Strictly speaking, ASCI transitioned into the Advanced Simulation and Computing (ASC) program in 2005, which broadly carries on the mission.
It’s worth noting how far supercomputing has come. Writing in 1999 on the modeling and simulation challenges ASCI faced in monitoring aging stockpiles and assessing new designs, Paul Messina, then with the California Institute of Technology and DOE and now Director of Science at the Argonne Leadership Computing Facility, noted:
“The goal of ASCI, however, is not a pipe dream. With funding from ASCI, the computer industry has already installed three computer systems, one at Sandia National Laboratories (built by Intel), one at Los Alamos National Laboratory (LANL) (an SGI-Cray computer), and another at Lawrence Livermore National Laboratory (LLNL) (an IBM computer), that can sustain more than 1 teraflops on real applications. At the time they were installed, each of these computers was as much as 20 times more powerful than those at the National Science Foundation (NSF) Supercomputer Centers (the Partnerships for Advanced Computational Infrastructure), the National Energy Research Supercomputing Center, and other laboratories. And this is only the beginning. By 2002, the computer industry will deliver a system 10 times more powerful than these two systems and, in between, another computer will be delivered that has three times the power of the LANL/LLNL computers. By the year 2004—only 5 years from now—computers capable of 100 trillion operations per second will be available.”[i]
Today, of course, top supercomputers are petaflops machines, and the new National Strategic Computing Initiative plots a course toward exascale computing.
Today’s event featured remarks by Moniz; Kerry; Deputy Secretary of Energy Dr. Elizabeth Sherwood-Randall; NNSA Administrator Lt. Gen. (Retired) Frank G. Klotz; and NNSA Principal Deputy Administrator Madelyn Creedon. There were also two panel discussions:
- Panel I (“From Cold War to No-Testing Regime – Challenges and Opportunities”). Panel members: Charles Curtis, Senior Advisor, Center for Strategic and International Studies; Brian McKeon, Principal Deputy Undersecretary of Defense for Policy, DoD; and Franklin Miller, Principal, Scowcroft Group. Moderator: Madelyn Creedon
- Panel II (“Assessing the Current Stockpile and Looking Forward 20 Years”). Panel members: Bill Goldstein, Director, Lawrence Livermore National Laboratory; Jill Hruby, Director, Sandia National Laboratories; and Charles McMillan, Director, Los Alamos National Laboratory. Moderator: General C. Robert Kehler (ret.), Former Commander, U.S. Strategic Command
It seems likely that the timing of this event was intended, at least in part, to showcase U.S. strength in rigorous nuclear program assessment as implementation of the international Iran nuclear agreement unfolds. Indeed, Kerry’s comments focused largely on the recent Iran deal.
That said, David Turek, vice president of exascale computing at IBM, posted a more personal retrospective blog on ASCI, its galvanizing effect on supercomputing, and Big Blue’s role in the program. Below is the text from Turek’s blog.
What it Takes to Reinvent Supercomputing–Over and Over Again
I’m not usually a big fan of anniversaries (except my wedding day, of course), but I make an exception when it comes to IBM’s collaboration with the US Government on supercomputing.
Today is the 20th anniversary of the Accelerated Strategic Computing Initiative – a Department of Energy program that has safeguarded America’s nuclear weapon arsenal and, at the same time, helped IBM assert ongoing leadership in this most demanding of computing domains.
With help from National Laboratories scientists, teams of IBMers have produced five generations of supercomputers–repeatedly ranking among the fastest machines in the world. The journey led us to where we are today: developing a sixth generation of computers, data-centric systems designed from the ground up for the era of big data and cognitive computing.
The program was also instrumental in IBM’s rebound after the company’s near-collapse in the early 1990s.
I remember the day the original ASCI contract was signed. IBM and DOE people had gathered in a conference room at the IBM headquarters north of New York City. Unexpectedly, Lou Gerstner, IBM’s then-new CEO, popped in and gave off-the-cuff remarks. I remember him saying, “IBM is all about solving hard problems. This is the hardest problem there is. We’re all in.”
I was sitting in a chair and he was standing behind me. He put his hands on my shoulders and said, “Here’s the guy who will do it.”
Gulp.
The task of creating computers that are capable of simulating nuclear explosions so countries don’t have to test with actual bombs turned out to be difficult indeed.
The first years were the toughest.
I had been with IBM for nearly 20 years by then and had experience in both hardware and software development. Most relevantly, I had been involved in an effort to transform IBM mainframes into supercomputers. That didn’t pan out, but in the process we learned a lot about what it would take to build high-performance computers. We had relaunched our supercomputing effort with a new technology strategy just before we engaged with the Department of Energy.
To ramp up the ASCI project development team quickly, I cherry-picked people from IBM’s offices and labs all over the Hudson Valley. Some of them were green, in their 20s, but they had the nerve to rethink computing.
We made a series of radical choices. We adapted processors and systems technologies that IBM had developed for its scientific workstation business. UNIX would be the operating system. We had to invent new networking to hook all the processors together. And we were one of the first groups at IBM to use open source software. We had to move too quickly to code everything ourselves.
We also had to create a new process for developing and manufacturing such complex systems – with thousands, and later millions, of processors.
With each new generation, the requirements increased dramatically. The first machines produced 3 teraflops of computing performance, or 3 trillion floating point operations per second. The current generation produces 20 petaflops, or 20 quadrillion operations per second. That meant we had to invent not just individual technologies but whole new approaches to computing.
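To put those figures on a common scale, here is a minimal back-of-the-envelope sketch in Python; it uses only the numbers quoted above, and the variable names are illustrative rather than anything taken from the program itself:

```python
# Back-of-the-envelope comparison of the first ASCI machines (~3 teraflops)
# with the current generation cited above (~20 petaflops). The figures come
# straight from the text; everything else is illustrative.

TERAFLOP = 10**12   # 1 teraflop = 10^12 floating point operations per second
PETAFLOP = 10**15   # 1 petaflop = 10^15 floating point operations per second

first_generation = 3 * TERAFLOP      # ~3 trillion operations per second
current_generation = 20 * PETAFLOP   # ~20 quadrillion operations per second

growth_factor = current_generation / first_generation
print(f"Roughly {growth_factor:,.0f}x the performance of the first ASCI machines")
# Roughly 6,667x the performance of the first ASCI machines
```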
For instance, in the early 2000s, IBM Research and scientists at Lawrence Livermore National Laboratory teamed up to create a new supercomputing architecture that harnessed millions of simple, low-powered processors. The first systems based on this architecture, called Blue Gene/L, were incredibly energy efficient and exceeded the performance of Japan’s Earth Simulator by more than a factor of 10, helping the US recapture leadership in supercomputing.
Today, we’re developing yet another generation of supercomputers for the National Laboratories. They’re based on the principle that the only way to efficiently handle today’s enormous quantities of data is to rethink computing once again. We have to bring the processing to the data rather than follow the conventional approach of transmitting all of the data to central processing units.
When we first proposed this solution, we were practically laughed out of the room. But, today, data-centric computing is becoming accepted across the tech industry as the way to go forward.
Through the ASCI project, I learned lessons that I think are critical for any large-scale development project in the computer industry. First, you must assemble an integrated team of specialists in all of the hardware and software technologies. Second, you must see the big picture. Don’t think of a server computer in isolation. Plan so you can integrate servers and other components in large systems capable of taking on the most demanding computing tasks.
I guess there’s one more critical lesson I learned from this tremendous experience: recruit bright and fearless people and ask them to do nearly impossible things. Chances are, they’ll rise to the challenge.
[i] Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology: Report of a Workshop. http://www.ncbi.nlm.nih.gov/books/NBK44974/