A report published a decade ago conveyed the results of a study aimed at determining whether it was possible to achieve 1,000 times the computational power of the then-emerging petascale systems within a system power budget of no more than 20 MW. On November 14 at the SC18 supercomputing conference in Dallas, some of the original contributors to the report participated in a Birds of a Feather (BoF) session in which they reflected on the document, sharing what they deemed to be its hits and misses and making predictions for 2028.
Session leader Jeffrey Vetter of Oak Ridge National Laboratory said the 2008 report, titled “Exascale Computing Study: Technology Challenges in Achieving Exascale Systems,” has been cited more than 1,000 times and that many people look to it to understand which research agendas they should undertake and to consider the most salient challenges facing high-performance computing.
The study was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Processing Techniques Office (IPTO), with Bill Harrod as program manager. The report represents the ideas of people from universities, industry, and research labs, collected during periodic meetings held over the course of more than a year.
Harrod, who is now a program manager for the Intelligence Advanced Research Projects Activity (IARPA), told the BoF audience that consideration of petascale system specifications as they existed at the time informed the study group members’ assumptions about exascale. Petascale systems operated at about 13 MW with several hundred cabinets. Thus, the anticipated parameters for exascale were 10^18 operations per second at 20 MW and with fewer than 500 cabinets. The pivotal big-picture questions, Harrod said, were whether an exascale system was needed and whether it could be used for scientific discovery and other practical purposes.
Two other studies, on software and resiliency, respectively, followed the study on which the 2008 report was based. The overarching conclusion across the three studies, Harrod said, was that co-design would be essential. He added that although the co-design concept was not revolutionary, it was deemed critical for ensuring that hardware design would correspond properly with the intended uses of the system, and it became an integral part of the US Department of Energy’s Exascale Computing Initiative (ECI) and Exascale Computing Project (ECP).
Peter Kogge of the University of Notre Dame led the Exascale Computing study and served as editor of the 2008 report. In his presentation for the BoF, he outlined four key challenges that surfaced from the study: energy and power, memory, concurrency, and resiliency. He also summarized the 2008 computing environment and what it was anticipated to look like by 2015, noting that the study team did not focus on application needs or on the Roofline model. For a matrix-multiply workload such as the High-Performance Linpack (HPL) benchmark, he said, having a large enough cache would outweigh concerns about memory speed; and to reach a peak of 1 exaflops, the goal was to hit 20 pJ/flop.
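The 20 pJ/flop target follows directly from the power and performance goals quoted above; as a quick back-of-the-envelope check (an illustration based only on the numbers cited in this article, not a calculation reproduced from the report itself):

```latex
% Energy budget per operation implied by 1 exaflops within 20 MW
\[
  \frac{P_{\text{target}}}{R_{\text{peak}}}
  = \frac{20\ \text{MW}}{10^{18}\ \text{flop/s}}
  = \frac{2\times10^{7}\ \text{J/s}}{10^{18}\ \text{flop/s}}
  = 2\times10^{-11}\ \text{J/flop}
  = 20\ \text{pJ/flop}
\]
```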
The team assembled what Kogge referred to as an aggressive strawman, with an architecture largely influenced by study contributor Bill Dally (then with Stanford University, now with Nvidia), who participated in the BoF. The architecture was characterized by multicore processors, no cache coherency, and a shared global address space. Reaching the 1 exaflops peak meant 68 MW of power drawn by 583 racks. On the programming side, about 1 billion threads would need to be maintained. A wire interconnect was assumed.
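For a sense of scale, the strawman figures quoted above imply roughly the following (simple arithmetic on the numbers in this article, offered only as an illustration):

```latex
% Rough per-rack power, per-thread rate, and energy per flop implied by the strawman numbers
\[
  \frac{68\ \text{MW}}{583\ \text{racks}} \approx 117\ \text{kW per rack},
  \qquad
  \frac{10^{18}\ \text{flop/s}}{10^{9}\ \text{threads}} = 1\ \text{Gflop/s per thread},
  \qquad
  \frac{68\ \text{MW}}{10^{18}\ \text{flop/s}} = 68\ \text{pJ/flop}
\]
```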
Kogge provided details from the report on the aggressive strawman system, which he said he considered “remarkably prescient” with respect to what ultimately materialized in the evolution toward exascale.
A 2015 paper by Kogge for the International Supercomputing Conference (ISC), titled “Updating Energy Model for Future Exascale Systems,” examined an update of the models that the Exascale Computing study team had built to project performance, restricted to the heavyweight (Xeon-chip) sockets. The paper received the Gauss Award.
The study group’s final analysis showed that an exaflops could be reached by 2020, but at a peak power of 180 MW to 430 MW.
The Study Contributors’ Assessments of Hits and Misses
Bill Harrod (IARPA)
At the inception of the DARPA studies, the target year for reaching exascale was 2015, but based on the results of the software study it was adjusted to 2018. Today, projections are focused on the 2021–2023 time frame. Harrod said that although the projections have evolved, the studies paved the way for DARPA’s Ubiquitous High-Performance Computing (UHPC) Exascale Projects and laid the foundation for DOE’s ECI and ECP. They have, he added, greatly enhanced the environment for exascale development.
In terms of hits and misses, the importance of co-design has played out at DOE and many other places, including the FastForward and PathForward programs, Harrod said. As a key miss, he noted that the study did not foresee the impact of artificial intelligence (AI).
Peter Kogge (University of Notre Dame)
The study group’s focus on heavyweight systems was dead-on through 2015, and the aggressive strawman it developed greatly resembles today’s GPUs, Kogge said. In addition, he said the study group was right to point out that some form of memory stacking would be necessary and that interconnects, at least locally within racks, would still largely be copper. Among the misses, he highlighted heterogeneous systems and the single-instruction, multiple-thread (SIMT) model, which is what GPUs use today.
Keren Bergman (Columbia University)
Bergman said that, as someone whose background is in optical networks, she found the study’s close examination of interconnect energy consumption enlightening. Among the study’s hits, she said, its deep discussions captured the growing challenge of data movement. In her view, however, one of its sizable misses was the cost associated with manufacturability. She said substantial innovations would be required to integrate photonics into chips and remedy one of the last real bottlenecks.
Dean Klein (Micron/now retired)
Klein, who was vice president of memory system development at Micron at the time of the study and who now, in retirement, mentors and motivates engineering students, highlighted as a hit the study group’s recognition that the energy of the memory subsystem would drive compromises in system memory, and as a miss the idea that NAND flash would play a role in supercomputing.
Bill Dally (Nvidia)
The prescience of the study’s aggressive silicon strawman made it a hit, Dally said. Conversely, he viewed as shortcomings the paucity of capable networks due to funding constraints, the failure to anticipate AI, and an overly conservative approach to software.
Exascale Study Contributors’ Predictions for 2028
As the BoF contributors offered diverse predictions for 2028 from the perspectives of their areas of expertise, a recurring theme was the belief that complementary metal-oxide-semiconductor (CMOS) technology would remain the predominant basis for integrated circuits.
The contributors also responded to comments and questions from the audience.
Scott Gibson is a science writer and communications specialist with Oak Ridge National Laboratory.