With the publication of the 56th Top500 list today from SC20’s virtual proceedings, Japan’s Fugaku supercomputer – now fully deployed – notches another win, while Nvidia’s in-house HPC-AI machine Selene doubles in size, moving up two spots to secure a fifth-place finish. New to the top-10 cohort are “JUWELS Booster Module” (Forschungszentrum Jülich, #7) and Dammam-7 (Saudi Aramco, #10). Nvidia captured the Green500 crown with an A100-driven DGX SuperPOD (#172 on the Top500) that delivered 26.2 gigaflops-per-watt.
RIKEN’s Fugaku supercomputer boosted its Linpack score to 442 petaflops, up from its debut listing of 415 petaflops six months ago, thanks to the addition of 6,912 nodes, bringing it to its full implementation of 158,976 single-CPU A64FX nodes. The Fujitsu Arm-based system also extended its performance on the new mixed-precision HPL-AI benchmark to 2.0 exaflops, up from 1.4 exaflops six months ago. It extended its #1 leads on the HPCG and Graph500 rankings, and maintained its standing in the upper echelon of the Green500 energy-efficiency rankings, holding onto tenth position with 14.78 gigaflops-per-watt.
Simply put, it’s another multi-category sweep for the first number-one Arm supercomputer, which takes its name from the highest mountain peak in Japan (Fugaku is another name for Mount Fuji, formed by combining the first character of 富士, Fuji, with 岳, mountain peak).
Summit and Sierra (IBM/Mellanox/Nvidia, United States) remain at number two and three, respectively, and Sunway TaihuLight (China) holds steady in fourth position.
Climbing two spots to number five, an upgraded Selene supercomputer delivers 63.4 Linpack petaflops, more than doubling its previous score of 27.6 petaflops. Selene implements Nvidia’s modular DGX SuperPOD architecture with AMD Epyc CPUs and the new A100 80GB GPUs, which provide twice as much HBM2 memory as the original A100 40GB GPUs. In-house AI workloads, system development and testing, and chip design work are all key use cases for Selene. (Side note: Selene was previously designated as an industry system, but the Nvidia site has been brought under the vendor segment, aligning with Nvidia’s status as a system supplier.)
The Chinese-built Tianhe-2A slides one spot to number six with 61.4 petaflops. Equipped with Intel Xeon chips and custom Matrix-2000 accelerators, Tianhe-2A (aka MilkyWay-2A) entered the list in 2018 at number four. It is installed at the National Supercomputer Center in Guangzhou.
New at number seven is the Atos-built JUWELS Booster Module — the most powerful supercomputer in Europe with 44.1 Linpack petaflops. Powered by AMD Epyc CPUs and Nvidia GPUs and installed at Forschungszentrum Jülich (FZJ) in Germany, the system leverages a modular system architecture. It is a companion system to the Intel Xeon-powered JUWELS Module, which sits at position 44 on the list; both were integrated using the ParTec Modulo Cluster Software Suite.
Dell is still the top commercial and top academic supplier, with eighth- and ninth-place wins (HPC5/Eni and Frontera/TACC, respectively).
Rounding out the top-ten pack at number 10 is the second newcomer system: Dammam-7. Installed at Saudi Aramco in Saudi Arabia, it’s also the second industry supercomputer in the current top 10, joining HPC5 (Eni/Dell), which is at number eight. The HPE Cray CS-Storm system uses Intel Xeon Gold CPUs and Nvidia Tesla V100 GPUs, and achieved 22.4 petaflops on the HPL benchmark.
Extending our purview to the top 50 segment, the list welcomes six additional systems: Hawk at HLRS with HPE (#16), TOKI-SORA at Japan Aerospace with Fujitsu (#19), Taranis at Meteo France with Atos (#30), Plasma Simulator at Japan’s National Institute for Fusion Science with NEC (#33), an unnamed system at the Japan Atomic Energy Agency with HPE (#45), and Emmy+ at HLRN with Atos (#47).
The addition of #19 TOKI-SORA, a Fujitsu A64FX system that is similar in design to Fugaku, brings the total number of Arm-based machines on the list to five. Four of these were built by Fujitsu using their A64FX chips, while Astra at Sandia Labs (the world’s first petascale Arm system) was built by HPE using Marvell’s ThunderX2 processors.
The flattening trend we saw in June, driven by COVID-19’s dampening effects, continues, with the list setting another record-low refresh rate: only 44 new entrants (38 systems fell off the latest list and another six were removed after reaching end of life). Of this group of new entrants, the 11 highest ranked are not based in the U.S. The U.S. added eight new systems, including Sandia Labs’ “SNL/NNSA CTS-1 Manzano” system (supplied by Penguin Computing with Intel CPUs and Intel Omni-Path) at #69 with 4.3 Linpack petaflops. China claimed the highest number of new systems – 13 – although ten of these use 10G or 25G interconnects, indicative of Web-scale, rather than true HPC, deployments. Japan put an impressive six new systems on the list, showcasing a diverse set of architectures: Fujitsu with A64FX Arm (and Tofu interconnect); Fujitsu with Intel and Nvidia chips; NEC SX-Aurora TSUBASA vector engines; Dell PowerEdge with AMD Epyc CPUs; and HPE SGI with straight Intel CPUs.
Diving into the networking makeup of the list, 157 systems use InfiniBand, inclusive of the Sunway TaihuLight system, which uses a semi-custom version of HDR InfiniBand. There are six systems with Tofu, 31 with Aries, and a handful with custom or other proprietary interconnects. Omni-Path is the interconnect technology on 47 machines, including one new system, Livermore’s Ruby supercomputer, that uses the Cornelis Networks version. Launched in 2015, Intel’s Omni-Path Architecture (OPA) failed to find sufficient market footing, and Intel pulled the plug on it in 2019. The IP was spun out as Cornelis Networks in September of this year with the encouragement of U.S. labs. In addition to Ruby (Supermicro/Intel), Livermore’s recently announced Mammoth Cluster (Supermicro/AMD) also uses Cornelis Omni-Path networking.
The aggregate Linpack performance provided by all 500 systems is 2.43 exaflops, up from 2.22 exaflops six months ago and 1.65 exaflops 12 months ago. The Linpack efficiency of the entire list is holding steady at 63.3 percent compared with 63.6 percent six months ago, and the Linpack efficiency of the top 100 segment is also essentially unchanged: 71.2 percent compared with 71.3 percent six months ago. The number one system, Fugaku, delivers a healthy computing efficiency of 82.28 percent, up a smidge from June’s 80.87 percent.
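For readers tracking these figures, Linpack efficiency is simply the achieved Rmax divided by the theoretical peak Rpeak. A minimal sketch (Fugaku’s ~537.2-petaflops Rpeak is an assumption drawn from its public Top500 entry, not stated above):

```python
def linpack_efficiency(rmax_pflops: float, rpeak_pflops: float) -> float:
    """HPL efficiency: achieved Rmax as a percentage of theoretical peak Rpeak."""
    return 100.0 * rmax_pflops / rpeak_pflops

# Fugaku: 442 Linpack petaflops against an assumed ~537.2-petaflops peak
print(round(linpack_efficiency(442.0, 537.2), 2))  # ~82.28
```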
The minimum Linpack score required for the 56th Top500 list is 1.32 petaflops, versus 1.22 petaflops six months ago. The entry point for the top 100 segment is 3.16 petaflops versus 2.80 petaflops for the previous list. The current #500 system was ranked at #462 on the last edition.
As was the case six months ago, only two machines have crossed the 100-Linpack-petaflops horizon (Fugaku and Summit). Four if you count the two Chinese (Sugon) systems that were reportedly benchmarked over the last couple of years but never officially placed on the list (sources reported one system measured ~200 petaflops and a second reached over 300 petaflops). China has curtailed its supercomputing PR push in response to tech-war tensions with the U.S. that came to a head 18 months ago.
Energy efficiency gains
Nvidia tops Green500 energy-efficiency rankings with its DGX Superpod (#172 on the Top500). Equipped with A100 GPUs, AMD Epyc Rome CPUs and HDR InfiniBand technology, it achieved 26.2 gigaflops-per-watt power-efficiency during its 2.4 Linpack petaflops performance run.
The previous Green500 leader, MN-3 from Preferred Networks, slips to second place despite improving its rating from 21.1 to 26.0 gigaflops-per-watt. Ranked 332nd on the Top500, MN-3 is powered by the MN-Core chip, a proprietary accelerator that targets matrix arithmetic.
In third place on the Green500 is the Atos-built JUWELS Booster Module installed at Forschungszentrum Jülich. The new entrant — powered by AMD Epyc Rome CPUs and Nvidia A100 GPUs with HDR InfiniBand — delivered 25.0 gigaflops-per-watt and is ranked at number seven on the Top500.
Other list highlights
This list includes 148 systems that make use of accelerator/co-processor technology, up from 146 in June. 110 have Nvidia Volta chips, 15 use Nvidia Pascal, and eight systems leverage Nvidia Kepler. There is only one entry on the list that uses AMD GPUs: a Sugon-made, Chinese system at Pukou Advanced Computing Center, powered by AMD Epyc “Naples” CPUs and AMD Vega 20 GPUs. That system, now at #291, first made its appearance one year ago.
The Top500 reports that Intel continues to provide the processors for the largest share (91.8 percent) of Top500 systems, down from 94.0 percent six months ago. AMD supplies CPUs for 21 systems (4.2 percent), up from 2 percent on the previous list. AMD enjoys a higher top ranking with Selene (#5) than Intel with Tianhe-2A (#6). The top four systems on the list are not x86-based.