After more than a decade of planning, the United States’ first exascale computer, Frontier, is set to arrive at Oak Ridge National Laboratory (ORNL) later this year. Crossing this “1,000x” horizon required overcoming four major challenges: power demand, reliability, extreme parallelism and data movement.
Al Geist kicked off ORNL’s Advanced Technologies Section (ATS) webinar series last month by recapping the march toward exascale. As Geist described how the Frontier supercomputer addresses the four primary exascale challenges, he shared key details about the anticipated first U.S. exascale system.
Most notably, Frontier is poised to hit the 20 MW-per-exaflop power goal set by DARPA in 2008, delivering more than 1.5 peak exaflops of performance inside a 29 MW power envelope. Although the once-aspirational target was originally set for 2015, until fairly recently it was not clear that the first crop of exascale supercomputers – set to arrive in the 2021-2023 timeframe – would make the cut. Indeed, it is unclear whether they all will, but it is looking like Frontier, built on HPE and AMD technologies, will.
Geist is a corporate fellow at ORNL, CTO of the Oak Ridge Leadership Computing Facility (OLCF) and CTO of the Exascale Computing Project. He’s also one of the original developers of PVM (Parallel Virtual Machine) software, a de facto standard for heterogeneous distributed computing.
Geist began his talk with a review of the four major challenges that were set out in the 2008-2009 timeframe, when exascale planning was ramping up within the Department of Energy and its affiliated organizations.
“The four challenges also existed during the petascale regime, but in 2009, we felt there was a serious problem where we might not even be able to build an exascale system,” said Geist. “It wasn’t just that it would be costly, or that it would be hard to program – it may just be impossible.”
Energy consumption loomed large.
“Research papers that came out in 2008 predicted that an exaflop system would consume between 150 and 500 megawatts of power. And the vendors were given this ambitious goal of trying to get that down to 20, which seems like an awful lot,” said Geist.
Then there was reliability: “The fear with the calculations we were doing at the time is that failures would happen faster than you could checkpoint a job,” said Geist.
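Geist’s fear can be made concrete with Young’s classic approximation for the optimal checkpoint interval – a technique not named in the talk, used here purely as an illustration with hypothetical numbers:

```python
import math

# Young's approximation: checkpoint roughly every
# tau = sqrt(2 * checkpoint_cost * MTBF).
# If the system-wide mean time between failures (MTBF) falls near or below
# the cost of writing a checkpoint, the machine spends nearly all its time
# checkpointing and restarting. All numbers below are illustrative.
checkpoint_minutes = 30  # hypothetical time to write one checkpoint

for mtbf_minutes in (24 * 60, 60, 30):  # shrinking system-wide MTBF
    tau = math.sqrt(2 * checkpoint_minutes * mtbf_minutes)
    print(f"MTBF {mtbf_minutes:5d} min -> checkpoint every {tau:6.1f} min")
```

Note that in the last case the suggested interval (~42 minutes) exceeds the MTBF itself: failures would arrive faster than checkpoints could complete, which is exactly the scenario Geist describes.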
It was further thought that billion-way concurrency would be required.
“The question was, could there be more than even just a handful of applications, if even one, that could utilize that much parallelism?” Geist recalled. “In 2009, large scale parallelism was typically less than 10,000 nodes. And the largest application we had on record was only about 100,000 nodes used.”
The last issue was a thorny one: data movement.
“We were seeing the whole problem with the memory wall: basically that the time for moving data from memory into the processors and from the processors back out to storage was actually the main bottleneck for doing the computing; the computing time was insignificant,” said Geist. “The time to move a byte is orders of magnitude longer than a floating point operation.”
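The memory-wall point can be made concrete with a bytes-per-flop calculation. The figures below are illustrative for a hypothetical node, not Frontier’s actual specifications:

```python
# Illustrative memory-wall arithmetic (hypothetical node, not Frontier specs):
# a processor that can perform far more flops than its memory can feed.
peak_flops = 10e12        # 10 Tflop/s of double-precision compute
mem_bandwidth = 200e9     # 200 GB/s of DRAM bandwidth
bytes_per_double = 8

# How many doubles can memory deliver per second?
doubles_per_sec = mem_bandwidth / bytes_per_double  # 25 billion/s

# Flops available for every operand that arrives from memory:
flops_per_double_delivered = peak_flops / doubles_per_sec
print(flops_per_double_delivered)  # 400.0
```

At these (hypothetical) ratios, the processor must perform hundreds of operations per operand fetched to stay busy – otherwise it idles waiting on memory, which is the bottleneck Geist describes.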
Geist recalled the DARPA exascale computing report that came out in 2008 (led by Peter Kogge). It included a deep analysis of what it would take to field a 1 exaflops peak system.
With the technologies of the time, a system built from off-the-shelf components would have needed 1,000 MW, but scaling then-current flops-per-watt trends suggested exascale could be crossed at roughly 155 MW with a very optimized architecture, Geist relayed. A barebones configuration – stripping the strawman system’s memory down to just 16 gigabytes per node – brought the footprint to 69-70 MW.
But even the aggressive 70 MW figure was out of range. A machine that power-hungry was unlikely to secure the necessary funding approvals.
“You might wonder, where did that [20 MW number] come from?” Geist posed. “Actually, it came from a totally non-technical evaluation of what was possible. What was possible said: it’s gonna take 150 MW. What we said is: we need it to be 20 [MW]. And why we said that is that [we asked] the DOE, ‘How much are they willing to pay for power over the life of a system?’ and the number that came back from the head of Office of Science at the time was that they weren’t willing to pay over $100 million over the five years, so it’s simple math [based on an average cost of $1 million per megawatt per year]. The 20 megawatts had nothing to do with what might be possible, it was just that stake that we drove in the ground.”
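Geist’s “simple math” can be reproduced in a few lines; the $1 million per megawatt-year figure is the average power cost he cites:

```python
# Back-of-envelope reproduction of the 20 MW target Geist describes:
# DOE would pay at most $100 million for power over a five-year system
# lifetime, at an average cost of ~$1 million per megawatt per year.
power_budget_dollars = 100e6   # $100 million over the machine's life
lifetime_years = 5
cost_per_mw_year = 1e6         # ~$1 million per megawatt-year

max_megawatts = power_budget_dollars / (lifetime_years * cost_per_mw_year)
print(max_megawatts)  # 20.0
```

As Geist emphasizes, the 20 MW figure falls directly out of the budget constraint, not out of any technology projection.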
Jumping ahead in the presentation (which is available to watch and linked at the end of this article), Geist traces the evolution of machines at Oak Ridge: Titan to Summit to Frontier. The extreme concurrency challenge is addressed by Frontier’s fat node approach, where the GPUs hide the parallelism inside their pipelines.
Where Titan used a one-to-one GPU-to-CPU ratio, Summit implemented a three-to-one ratio. Frontier’s design kicks that up a notch with a four-to-one GPU-to-CPU ratio.
“In the end, what we found out was that exascale didn’t require this exotic technology that came out in the 2008 report,” said Geist. “We didn’t need special architectures, we didn’t even need new programming paradigms. It turned out to be very incremental steps, not a giant leap like we thought it was going to take to get to Frontier.”
As for power, the expectation is that Frontier will exceed one-and-a-half exaflops peak performance while consuming no more than 29 megawatts. “That’s actually a little bit better than the 20 megawatts per exaflop that we just drove a stake in the ground as a rule of thumb as opposed to what technology could do,” said Geist. “But in fact, the vendors that worked on and designed Frontier did an amazing job of being able to meet that.”
“It was [largely] due to those 10 years of DOE investment that [participating] vendors were actually able to decrease the amount of energy their chips and memories needed to be able to do an exaflop of computations for only 20 megawatts of power,” said Geist.
Geist’s energy efficiency math is based on peak (double-precision) flops, not Linpack. A conservatively estimated computing efficiency of 70 percent (Rmax/Rpeak) provides 1050 Linpack petaflops at 29 megawatts, or 36.2 gigaflops-per-watt. At 80 percent computing efficiency, energy efficiency increases to 41.4 gigaflops-per-watt. (Current greenest supercomputers are nearing 30 gigaflops-per-watt.) Perlmutter, the new #5 system installed at Berkeley Lab – combining HPE, AMD and Nvidia technology and also using a four-to-one GPU-to-CPU ratio – achieves 25.50 gigaflops-per-watt. Also note that ORNL has said Frontier will be “more than” 1.5 exaflops peak.
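The efficiency arithmetic above can be checked directly (the 70 and 80 percent Rmax/Rpeak ratios are the article’s assumed values, not measured results):

```python
# Reproducing the article's energy-efficiency estimates for Frontier:
# >1.5 exaflops peak inside a 29 MW power envelope.
peak_gigaflops = 1.5e9   # 1.5 exaflops expressed in gigaflops
power_watts = 29e6       # 29 MW

for efficiency in (0.70, 0.80):  # assumed Rmax/Rpeak computing efficiencies
    linpack_gigaflops = peak_gigaflops * efficiency
    gf_per_watt = linpack_gigaflops / power_watts
    print(f"{efficiency:.0%} efficiency -> {gf_per_watt:.1f} gigaflops-per-watt")
```

This reproduces the 36.2 and 41.4 gigaflops-per-watt figures; since ORNL has said Frontier will be “more than” 1.5 exaflops peak, these are conservative.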
Geist also highlighted reliability improvements owed to on-node flash memory, further enabled by the vendors making their networks and their system software much more adaptive. (Failing and restarting gracefully is key.)
With Frontier, the memory wall issue has been mitigated through the use of HBM on the GPUs. “Frontier has got high bandwidth memory (HBM) soldered directly onto the GPU,” Geist said. “So it increases the bandwidth by an order of magnitude. So it kind of kicks the can down the road for this problem. And one of the things caused by the high bandwidth is that the latency can be pretty high in those cases, but the GPUs are actually very well-suited, given their pipelines, to latency hiding.”
There’s a lot more interesting material in Geist’s presentation, like the cosmic ray problem, lessons learned from Summit and Sierra and a question and answer session. Watch the full talk here: https://vimeo.com/562917879