Back in 2008, the U.S. Defense Advanced Research Projects Agency (DARPA) set an ambitious target: an exascale supercomputer in a 20-megawatt envelope. That target, once viewed by many with skepticism—research at the time predicted that exascale systems would require hundreds of megawatts—has now officially been met by the exascale Frontier supercomputer at Oak Ridge National Laboratory (ORNL). At ISC 2022, the organizers of the Green500 list—which ranks supercomputers based on their flops-per-watt efficiency—discussed this development and more.
A new Frontier for efficiency
The “June” (late May, actually) Green500 list was led by ORNL. In first place: Frontier’s test and development system, Frontier TDS—though we prefer its less official name, “Borg.” Borg (which is effectively just a single cabinet of the same design as Frontier’s 74 main cabinets) delivered 62.68 gigaflops per watt at a total of 19.20 Linpack petaflops. “If you were to naively extrapolate this to an exaflop, it comes in at about 16 megawatts,” said Wu Feng, custodian of the Green500 list and a professor at Virginia Tech, during his virtual appearance at ISC 2022. This is a staggering feat of computing efficiency, eclipsing the previous Green500 champion—Preferred Networks’ MN-3—by nearly 60 percent.
Perhaps more impressive, however, is that Frontier itself placed second with 52.23 gigaflops per watt. “Frontier on the Green500 is the highest-placed number-one Top500 supercomputer on the Green500 list in its existence,” Feng said. According to the Green500 list, Frontier delivered 1.102 Linpack exaflops in a 21.1-megawatt envelope, which interpolates to one exaflop at 19.15 megawatts. However, Al Geist—CTO of the Oak Ridge Leadership Computing Facility (OLCF)—revealed during the session that this was a “very conservative number” and that the average power use that Oak Ridge submitted to the Green500 was actually 20.2 megawatts. That works out to 54.5 gigaflops per watt and interpolates to an exaflop in 18.33 megawatts. By this measurement, Frontier is more than 3.5× more efficient than the previous Top500 topper, Riken’s Fugaku system (15.42 gigaflops per watt).
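The arithmetic behind these figures is straightforward, and a short sketch makes it easy to check. Assuming the conversions implied by the article (not an official Green500 tool): gigaflops per watt equals Linpack petaflops divided by megawatts, and extrapolating to an exaflop divides 1,000 petaflops by that efficiency.

```python
# Sketch of the efficiency arithmetic in this article (assumed formulas).
# PF / MW cancels to GF/W, since 10^15 / 10^6 = 10^9.

def efficiency_gflops_per_watt(linpack_petaflops: float, power_megawatts: float) -> float:
    """Efficiency in gigaflops per watt from Linpack petaflops and megawatts."""
    return linpack_petaflops / power_megawatts

def megawatts_at_exaflop(gflops_per_watt: float) -> float:
    """Power required for one exaflop (1,000 petaflops) at a given efficiency."""
    return 1000.0 / gflops_per_watt

# Frontier's officially submitted figures: 1.102 exaflops at 21.1 MW.
official = efficiency_gflops_per_watt(1102.0, 21.1)
print(round(official, 2), round(megawatts_at_exaflop(official), 2))  # 52.23 19.15

# Using the 20.2 MW average power figure Geist cited instead.
revised = efficiency_gflops_per_watt(1102.0, 20.2)
print(round(revised, 2), round(megawatts_at_exaflop(revised), 2))    # 54.55 18.33
```

Both calculations reproduce the numbers quoted above, confirming that the 20.2-megawatt figure comfortably clears DARPA’s original 20-megawatt target.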
This efficiency, Geist explained, speaks to a long legacy at ORNL. “Oak Ridge has really been working on energy efficient computing for about a decade,” he said, charting out how this ten-year effort had paid off from the use of GPUs in the Titan system back in 2012 through Frontier today. “Exascale has really been made possible by this sort of 200× improvement in energy-efficient computing.” Geist further credited AMD’s work into making its CPUs and GPUs more efficient, such as by allowing the chips to turn off unused resources at a very granular level. He also credited the list itself: “I think the Green500 has done a remarkable job of making the entire community much more aware of power efficiency and the importance of it.”
Frontier’s shadow even extends beyond the top two systems on the Green500. Frontier—and, by extension, Borg—are HPE Cray EX systems with AMD Milan “Trento” Epyc CPUs, AMD Instinct MI250X GPUs and HPE Slingshot-11 networking. That exact same architecture also appears in the third-place system, the 151.90 Linpack petaflops LUMI supercomputer in Finland (51.63 gigaflops per watt, third place on the Top500). It also appears in the fourth-place system, the 46.10 Linpack petaflops Adastra system in France (50.03 gigaflops per watt, tenth place on the Top500). “All four of these systems all use the same technology that was actually developed for Frontier,” Geist said. Both LUMI and Adastra also extrapolate to an exaflop under 20 megawatts.
Green500 trends
All of the top ten systems are accelerated: four with the aforementioned AMD MI250X GPUs, five with Nvidia’s A100 GPUs and one between them in fifth place using the Preferred Networks MN-Core accelerator. Further, Feng said, it was the first time that all of the top ten machines from the previous list stayed on the list—and not just on the list, but in the top 20. However, those four Frontier-type systems shot past the rest of the pack: the average power efficiency of the top ten extrapolates to exascale at around 40 megawatts, showcasing the gap between the Frontier architecture and the competition. As shown in the box-and-whisker plot below, the remaining systems on the Green500 list showed modest improvements in efficiency compared to the November list.
There was another encouraging trend on the new list. The Green500 uses three tiers of efficiency reporting, with a level three measurement representing the whole system across a full run, a level one measurement representing a smaller fraction of the system across the core phase of a run, and a level two measurement somewhere in-between. “The total number of level 2 and level 3 entries continues to grow relative to level 1, so that’s really great,” said Natalie Bates, chair of the Energy Efficient HPC Working Group (EEHPCWG), during the Green500 session. This Green500 list included 102 measured submissions: 57 at level one, 31 at level two and 14 at level three.
Higher stakes, new strategies
Founded 16 years ago, the Green500 list aims to “raise awareness (and encourage reporting) of the energy efficiency of supercomputers” and to “drive energy efficiency as a first-order design constraint (on par with performance).” But when the Green500 list was being conceived, supercomputers rated in single-digit megawatts; now, systems like Frontier are pulling down double-digit megawatts. ORNL Director Thomas Zacharia said in a press briefing that “when you start the [Linpack] run [on Frontier], the machine, in less than ten seconds, begins to draw an additional 15 megawatts of power … that’s a small city in the U.S., that’s roughly about how much power the city of Oak Ridge consumes.”
The sheer scale of systems like Frontier has put increased urgency on not only how much power the systems themselves consume, but also the efficiency of their supporting infrastructure and the sourcing of the power itself. Indeed, DARPA’s 20-megawatt target for exascale was predicated on costs, as Geist recounted during ORNL’s Advanced Technologies Section webinar last year: “The number that came back from the head of [the] Office of Science at the time was that they weren’t willing to pay over $100 million over the five years, so it’s simple math [based on an average cost of $1 million per megawatt per year]. The 20 megawatts had nothing to do with what might be possible, it was just that stake that we drove in the ground.”
In the Green500 session last week, Geist elaborated that Oak Ridge was dedicated to “not only reducing the amount of energy it takes to run the computer, but reducing the amount of energy it takes to cool the datacenter back down.” As a result, the Frontier datacenter achieves a power usage effectiveness (PUE) of just 1.03. “A lot of work has gone into trying to make this machine as well as the datacenter itself just as efficiently as possible,” Geist said.
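Power usage effectiveness is simply total facility power divided by the power delivered to the computing equipment itself, so a PUE of 1.03 means only about three percent overhead for cooling and other infrastructure. A minimal sketch, using a hypothetical 22 MW IT load for illustration (not a reported figure):

```python
# Minimal sketch of power usage effectiveness (PUE): total facility power
# divided by IT equipment power. A PUE of 1.0 would mean zero overhead.

def pue(total_facility_mw: float, it_equipment_mw: float) -> float:
    return total_facility_mw / it_equipment_mw

# Hypothetical 22 MW IT load (illustrative assumption): at a PUE of 1.03,
# cooling and infrastructure overhead is only about 0.66 MW.
it_load = 22.0
overhead = it_load * (1.03 - 1.0)
print(round(overhead, 2))                      # 0.66
print(round(pue(it_load + overhead, it_load), 2))  # 1.03
```

For comparison, a conventional air-cooled datacenter with a PUE near 1.5 would spend roughly half as much power again on infrastructure as on computing.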
EuroHPC’s aforementioned LUMI system, meanwhile, is housed in a new datacenter designed with power efficiency and sustainability in mind (pictured above). Sited in an old paper mill in Kajaani, Finland, LUMI—which currently requires less than 10 megawatts to operate—is powered by 100 percent renewable energy (local hydropower) and is designed to sell its waste heat back to the town of Kajaani, further reducing energy costs and resulting in a net-negative carbon footprint. The location in northern Finland also, of course, reduces the need for artificial cooling. During a session on EuroHPC at ISC 2022, Anders Jensen—executive director of the EuroHPC JU—stressed the importance of these holistic energy “stories” for European supercomputers. “[The] Green500 is great,” he said, “but it doesn’t take into account where the energy came from.”