No exascale for you* — at least, not within the High-Performance Linpack (HPL) territory of the latest Top500 list, issued today from the 33rd annual Supercomputing Conference (SC21), held in-person in St. Louis, Mo., and virtually, from Nov. 14–19. “We were hoping to have the first exascale system on this list but that didn’t happen,” said Top500 co-author Jack Dongarra in a press briefing this morning.
In an alternate timeline, the United States might have stood up two exascale systems by now: Aurora at Argonne National Laboratory and Frontier at Oak Ridge National Laboratory. Installation continues on the latter, and when we talked to Intel last week, they said that Argonne was preparing for the arrival of Aurora, now slated to be a two-exaflops (peak) machine, double its most recent performance target.
The 58th edition of the Top500 offers a familiar lineup at the top. Japan’s Fugaku system is still in the number one spot providing 442 petaflops, with the U.S. systems Perlmutter – which improved its performance by nearly 10 percent to 70.9 petaflops – and Selene in fifth and sixth place, respectively. (DOE’s Summit and Sierra and China’s Sunway TaihuLight are still keeping their seats warm as well, holding second, third and fourth place, respectively.)
The first newcomer joins at number 10: a system built by Microsoft Azure called Voyager-EUS2 that combines AMD Epyc Rome CPUs (48-core parts, running at 2.45GHz), Nvidia A100 80GB GPUs, and HDR InfiniBand, delivering 30 Linpack petaflops out of 39.5 peak petaflops. This translates into a respectable 76 percent Linpack efficiency, helped by Microsoft’s use of HDR InfiniBand. Voyager-EUS2 was spun up in the Azure East US 2 region. (This system is not affiliated with another Voyager system that was detailed earlier this year.)
HPE picks up the next two spots with two new systems making their debut on the list. At number 11 is “SSC-21”, made for Samsung Electronics. The Apollo 6500 Gen10 Plus system features AMD Epyc Milan 7543 CPUs, each with 32 cores running at 2.8GHz, and InfiniBand HDR200 networking. SSC-21 delivers 25.2 Linpack petaflops out of a potential 31.8 peak petaflops, which comes out to a Linpack efficiency of 79 percent. A smaller 2.27 petaflops HPE system that uses very similar architecture – SSC-21 Scalable Module – achieved 33.98 gigaflops-per-watt energy efficiency, securing it a second-place spot on the Green500.
Argonne National Laboratory’s Polaris supercomputer claims the 12th spot. Based on HPE’s Apollo 6500, Polaris contains ~560 second-generation AMD Epyc Rome 7532 CPUs (32 cores, 2.4GHz) plus ~2,240 Nvidia A100 40GB SXM4 GPUs with Slingshot 10 networking. It achieved an HPL score of 23.8 petaflops out of a possible 34.6 peak petaflops (68.8 percent efficiency). Polaris serves as a bridge to Argonne’s forthcoming Aurora exascale system, giving the lab extra compute power and assisting with software readiness. Argonne said the system would reach 44 peak petaflops in its final configuration, which involves swapping out the second-gen Rome Epyc CPUs for the newer, more performant Milan Epyc CPUs.
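For reference, the Linpack efficiencies cited throughout are simply the measured HPL score (Rmax) divided by theoretical peak performance (Rpeak). A minimal Python sketch of the arithmetic, using the rounded petaflops figures quoted above (so the results are approximate):

```python
# Linpack efficiency = measured HPL result (Rmax) / theoretical peak (Rpeak).
# Values are the rounded petaflops figures quoted in this article.
systems = {
    "Voyager-EUS2": (30.0, 39.5),  # (Rmax, Rpeak) in petaflops
    "SSC-21":       (25.2, 31.8),
    "Polaris":      (23.8, 34.6),
}

for name, (rmax, rpeak) in systems.items():
    print(f"{name}: {rmax / rpeak:.1%} Linpack efficiency")

# Voyager-EUS2: 75.9% ... SSC-21: 79.2% ... Polaris: 68.8%
```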
New in fourteenth place is CEA-HF, a 23.24 Linpack petaflops Atos BullSequana XH2000 system provided to the Commissariat à l’Énergie Atomique (CEA) in France. CEA-HF comprises AMD third-generation Milan Epyc CPUs (64-core SKUs running at 2.45GHz), networked by Atos’ own BXI V2 interconnect.
New to the list – in spots 19, 36, 40 and 43 – are four Russian systems. Chervonenkis (#19 with 21.5 petaflops) and Galushkin (#36 with 16.0 petaflops) were made by IPE, Nvidia and Tyan for Russian internet company Yandex. The systems employ 64-core AMD Rome Epyc processors running at 2GHz paired with Nvidia A100 80GB GPUs, using InfiniBand networking.
A third Russian supercomputer, Lyapunov, was also built for Yandex, grabbing the 40th spot with 12.8 Linpack petaflops. Lyapunov is based on Inspur’s NF5488A5 servers, outfitted with 64-core AMD Rome Epyc processors (running at 2GHz) paired with Nvidia A100 40GB GPUs, networked with InfiniBand. The system was manufactured by two Chinese organizations: NUDT and Inspur.
The fourth new entrant from Russia is Christofari Neo, built for SberCloud (a cloud platform backed by the Russia-based Sberbank Group). The 11.95 petaflops system comes in at number 43.
AMD continues to improve its Top500 positioning, now powering four of the top 10 machines. Across the list, there are 54 Epyc Rome systems, 17 Epyc Milan systems, and two Epyc Naples systems. With 73 systems in total, AMD’s share of the list has grown to 14.6 percent, up from a 9.4 percent share on the June list, and it has three times as many systems on the list as a year ago. Intel claims an 81.4 percent share of Top500 systems, down from 86.4 percent six months ago.
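Converting those shares back into system counts is straightforward arithmetic; a quick back-of-the-envelope sketch in Python (the percentages are rounded, so the implied counts are approximate):

```python
# Back-of-the-envelope conversion of the quoted market shares into system counts.
TOTAL_SYSTEMS = 500

amd_systems = 54 + 17 + 2  # Rome + Milan + Naples
print(f"AMD: {amd_systems} systems = {amd_systems / TOTAL_SYSTEMS:.1%}")  # 14.6%

# Implied Intel counts from the quoted (rounded) shares.
for label, share in [("Intel, Nov 2021", 0.814), ("Intel, Jun 2021", 0.864)]:
    print(f"{label}: ~{round(share * TOTAL_SYSTEMS)} systems")  # ~407 and ~432
```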
Nvidia is the manufacturer of 14 systems on this list, and it was collaboratively involved in building four others: Sierra (#3), Chervonenkis (#19), Lassen (#26) and Galushkin (#36).
No IBM systems joined or left the list; there are still seven: Summit (#2), Sierra (#3), Marconi-100 (#18), Lassen (#26), PANGEA III (#29), AiMOS (#57) and Longhorn (#279).
A total of 70 new systems made it onto the new Top500 list. One of the more noteworthy – in spot #435 – is “NA-IT1.” The 1.68 petaflops Linpack system marks the return of Japanese supercomputing company PEZY, which had gone quiet after the CEO and another employee were indicted for fraud in 2017. NA-IT1 is a ZettaScaler3.0 machine with both AMD 64-core Epyc Rome processors and PEZY-SC3 proprietary manycore chips, connected at the node level by InfiniBand EDR. NA-IT1 ranks twelfth on the Green500, delivering 24.58 gigaflops-per-watt.
The new list also welcomes ARCHER2 in spot 22 with 19.5 Linpack petaflops. Installed at the University of Edinburgh (UK) for EPSRC, ARCHER2 is an HPE Cray EX system, powered by AMD Epyc Rome processors and connected by Slingshot-10 networking.
The aggregate Linpack performance provided by all 500 systems is 3.06 exaflops, up from 2.79 exaflops six months ago and 2.43 exaflops 12 months ago. While the Linpack efficiency of the entire list is essentially unchanged at 63.5 percent compared with 63.1 percent six months ago, the Linpack efficiency of the top 100 segment climbed to 76.3 percent compared with 70.7 percent six months ago. The top system, Fugaku, delivers a healthy computing efficiency of 82.28 percent.
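Those aggregate figures work out to roughly ten percent growth over six months and about a quarter over the year; a quick sketch of the arithmetic in Python:

```python
# Growth of aggregate Linpack performance, from the exaflops figures above.
now, six_months_ago, a_year_ago = 3.06, 2.79, 2.43  # aggregate exaflops

print(f"6-month growth:  {now / six_months_ago - 1:.1%}")  # ~9.7%
print(f"12-month growth: {now / a_year_ago - 1:.1%}")      # ~25.9%
```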
The minimum Linpack score required for inclusion on the 58th Top500 list is now 1.65 petaflops compared with 1.51 petaflops six months ago. The entry point for the top 100 segment increased to 4.85 petaflops versus 4.13 petaflops for the previous list. The current number 500 system (NA1, Lenovo, 1.65 petaflops) was ranked at number 433 on the last edition.
Green500 — Preferred Networks got its third Green500 win with its MN-3 system. PFN’s deep-learning-optimized MN-3 system improved its energy efficiency, achieving a remarkable 39.38 gigaflops-per-watt, up from 29.7 gigaflops-per-watt on the last list. MN-3 is powered by the MN-Core chip, a proprietary accelerator that targets matrix arithmetic. The system placed 301st on the Top500 list with an improved score of 2.18 petaflops, several notches up from its number 337 position six months ago. As mentioned above, HPE’s new “SSC-21 Scalable Module” took second place on the Green500 with 33.98 gigaflops-per-watt energy efficiency. The HPE Apollo 6500 Gen10 Plus system with 32-core AMD Epyc Milan CPUs and Nvidia A100 80GB GPUs is ranked 291st on the Top500, delivering 2.27 HPL petaflops. In third place is Tethys, an Nvidia DGX A100 “Liquid Cooled Prototype,” powered by AMD Epyc Rome CPUs and Nvidia A100 80GB GPUs, interconnected with HDR InfiniBand. Operated by Nvidia in the UK, Tethys delivers 31.54 gigaflops-per-watt and ranks 295th on the Top500 (with 2.26 petaflops).
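The Green500 metric is simply measured HPL performance divided by the power consumed during the run, so the figures above also imply each system’s approximate power envelope. A rough Python sketch (rounded inputs, so these are back-of-the-envelope estimates rather than official measurements):

```python
# Green500 efficiency = HPL performance / power. Back out the implied power
# draw from the petaflops and gigaflops-per-watt figures quoted above.
GFLOPS_PER_PFLOP = 1_000_000  # 1 petaflop = 1,000,000 gigaflops

green500_top3 = {
    # name: (HPL petaflops, gigaflops per watt)
    "MN-3":                   (2.18, 39.38),
    "SSC-21 Scalable Module": (2.27, 33.98),
    "Tethys":                 (2.26, 31.54),
}

for name, (pflops, gflops_per_watt) in green500_top3.items():
    power_kw = pflops * GFLOPS_PER_PFLOP / gflops_per_watt / 1000
    print(f"{name}: ~{power_kw:.0f} kW")  # roughly 55, 67 and 72 kW
```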
What’s next — Will there be a chart-topping exaflopper on the next iteration of the Top500? We’ve heard that additional details for both Aurora (Argonne) and Frontier (Oak Ridge) may be disclosed here at SC21 this week, and we have it on good authority that there are two significant pre-exascale systems and two exascale systems in China that have been held off the list. The next edition of the twice-yearly Top500 list will be published in tandem with the ISC High Performance conference, taking place May 29 through June 2 in Hamburg after a multi-year run in Frankfurt and two back-to-back digital-only events in 2020 and 2021.
* In 2013, Top500 co-author Horst Simon made his case for why there wouldn’t be an exascale machine before 2020.