The 1993 Supercomputing Conference in Portland, Ore., provided a snapshot of the uncertainty facing U.S. supercomputing in the early 1990s. Many exhibitors would soon be gone, either bankrupt or acquired: companies such as DEC, Thinking Machines, Kendall Square Research, nCube, MasPar and even the venerable Cray Research. The reason was that the direction of supercomputing technology itself was uncertain. Existing vector symmetric multiprocessor systems were clearly approaching their limits, but what would come next? Massively Parallel Processing (MPP) offered a potential path forward, but with many competing architectural ideas, it was not clear which would dominate.
This was also a time of uncertainty for the United States as it entered the post-Cold War era. The Berlin Wall had fallen and the Soviet Union had collapsed. This had a huge impact on the Department of Energy (DOE) and its national laboratories, which were responsible for the development and stewardship of U.S. nuclear weapons. Compounding that uncertainty, in late 1992 President George H. W. Bush declared a moratorium on U.S. underground nuclear testing. It was later extended by President Bill Clinton and eventually led to the Comprehensive Test Ban Treaty. Not only were the national labs concerned about their ongoing mission, but they had also lost their primary means of understanding the behavior of nuclear weapons.
Starting in 1994, however, things began to change. In a small set of offices in the basement of the DOE Forrestal Building, a small group of federal employees guided by Vic Reis and Gil Weigand, together with their counterparts at the Los Alamos, Lawrence Livermore and Sandia national laboratories, began to put together what would become the Accelerated Strategic Computing Initiative (ASCI), later renamed the Advanced Simulation and Computing (ASC) program. By the end of the decade, the program would grow to more than $700 million per year and would revolutionize the world of supercomputing and advanced modeling and simulation. Many other programs and research activities made significant contributions, but in the late 1990s and early 2000s, ASCI provided critical leadership.
ASCI was part of a DOE strategy called Science Based Stockpile Stewardship. This program developed the means to provide confidence in the performance, safety, and reliability of nuclear weapons in the absence of underground testing. This strategy involved conducting enhanced surveillance of the weapons, building large experimental facilities, and creating unprecedented levels of high resolution, high fidelity, multi-physics modeling and simulation capabilities that could confidently predict weapon behaviors. This last element was the domain of ASCI.
To achieve its goal, ASCI had to tackle the full range of supercomputing technology challenges. One of the program's most visible elements was the procurement of computational hardware, which led to ASCI Red, the world's first teraflops computer. Subsequent platforms included the ASCI Blues, White, Q, and Purple; Roadrunner, the first petaflops computer; and most recently Sierra, a 94.6-petaflops Linpack system. An important result of ASCI was the revitalization of the U.S. supercomputing industry.
However, ASCI's stockpile stewardship mission required more than just buying powerful platforms. The program needed to produce the programming models, tools, and libraries that would enable the development of modeling and simulation applications. It built user environments, including storage systems, networks, and visualization tools, that allowed scientists to apply these capabilities to nuclear weapons. It funded fundamental research in hardware, mathematics, and computer science, and it developed verification, validation, and uncertainty quantification methodologies. ASCI also funded university-based projects to develop non-classified advanced computing applications.
The list of ASCI's contributions is too long for this short history. Perhaps the most important lesson from ASCI is its comprehensive approach. ASCI did not develop everything that was needed, but the program made sure it was available when it was needed. Driven by an urgent mission of national importance, ASCI assumed responsibility for the end-to-end integration and balance of the entire computational system. In this sense, ASCI provides a great example of true co-design.
Editor’s note: For a deeply researched, highly readable accounting of the ASCI program, including a rich collection of historic photographs, see Alex Larzelere’s 200-page report: Delivering Insight–The History of the Accelerated Strategic Computing Initiative (ASCI) [PDF]
About the Author
Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire's policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies, including advanced modeling and simulation, high performance computing, artificial intelligence, the Internet of Things, and additive manufacturing. Alex's career has included time in federal service (working closely with DOE national labs), in private industry, and as the founder of a small business. Throughout that time, he led programs that applied cutting-edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of "Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI)."
Feature image caption: The ASCI Red system at Sandia, the world’s first teraflops computer.