Top500: Fugaku Keeps Crown, Nvidia’s Selene Climbs to #5

By Tiffany Trader

November 16, 2020

With the publication of the 56th Top500 list today from SC20’s virtual proceedings, Japan’s Fugaku supercomputer – now fully deployed – notches another win, while Nvidia’s in-house HPC-AI machine, Selene, doubles in size, moving up two spots to secure a fifth-place finish. New to the top-10 cohort are “JUWELS Booster Module” (Forschungszentrum Jülich, #7) and Dammam-7 (Saudi Aramco, #10). Nvidia also captured the Green500 crown with an A100-driven DGX SuperPOD (#172 on the Top500) that delivered 26.2 gigaflops-per-watt.

Fugaku extends leads on HPC benchmarking (source: Satoshi Matsuoka, Top500 BoF)

RIKEN’s Fugaku supercomputer boosted its Linpack score to 442 petaflops, up from its debut listing of 415 petaflops six months ago, thanks to the addition of 6,912 nodes that brought it to its full complement of 158,976 single-CPU A64FX nodes. The Fujitsu Arm-based system also extended its performance on the new mixed-precision HPL-AI benchmark to 2.0 exaflops, up from 1.4 exaflops six months ago. It widened its #1 leads on the HPCG and Graph500 rankings and maintained its standing in the upper echelon of the Green500 energy-efficiency rankings, holding onto tenth position with 14.78 gigaflops-per-watt.
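For readers who want to sanity-check the node math, here is a minimal Python sketch using only the figures quoted above; the per-node and linear-scaling numbers are back-of-the-envelope derivations, not Top500-published values.

```python
# Back-of-the-envelope check of Fugaku's scaling, using only figures quoted above.
NODES_TOTAL = 158_976      # full deployment
NODES_ADDED = 6_912        # nodes added since the June 2020 list
RMAX_NOW_PF = 442.0        # Linpack petaflops, November 2020
RMAX_PREV_PF = 415.0       # Linpack petaflops, June 2020

# Average Linpack contribution per A64FX node (in teraflops).
per_node_tf = RMAX_NOW_PF * 1000 / NODES_TOTAL
print(f"Average Linpack contribution: {per_node_tf:.2f} teraflops per node")

# What perfectly linear scaling from the June configuration would have predicted.
linear_estimate_pf = RMAX_PREV_PF * NODES_TOTAL / (NODES_TOTAL - NODES_ADDED)
print(f"Linear-scaling estimate: {linear_estimate_pf:.0f} PF vs. measured {RMAX_NOW_PF:.0f} PF")
```

The measured 442 petaflops slightly exceeds the naive linear-scaling estimate of roughly 434 petaflops, consistent with additional HPL tuning accompanying the node expansion.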

Simply put, it’s another multi-category sweep for the first Arm supercomputer to claim the number-one spot, a machine named for the highest mountain peak in Japan (Fugaku is another name for Mount Fuji, formed by combining the first character of 富士, Fuji, with 岳, mountain).

Summit and Sierra (IBM/Mellanox/Nvidia, United States) remain at number two and three, respectively, and Sunway TaihuLight (China) holds steady in fourth position.

Climbing two spots to number five, an upgraded Selene supercomputer delivers 63.4 Linpack petaflops, more than doubling its previous score of 27.6 petaflops. Selene implements Nvidia’s modular DGX SuperPOD architecture with AMD Epyc CPUs and the new A100 80GB GPUs, which provide twice as much high-bandwidth memory as the original A100 40GB GPUs. In-house AI workloads, system development and testing, and chip design work are all key use cases for Selene. (Side note: Selene was previously designated as an industry system, but the Nvidia site has been brought under the vendor segment, aligning with Nvidia’s status as a system supplier.)

The Chinese-built Tianhe-2A slides one spot to number six with 61.4 petaflops. Equipped with Intel Xeon chips and custom Matrix-2000 accelerators, Tianhe-2A (aka MilkyWay-2A) entered the list in 2018 at number four. It is installed at the National Supercomputer Center in Guangzhou.

New at number seven is the Atos-built JUWELS Booster Module, the most powerful system in Europe with 44.1 Linpack petaflops. Powered by AMD Epyc CPUs and Nvidia GPUs and installed at Forschungszentrum Jülich (FZJ) in Germany, the system leverages a modular system architecture. It is a companion to the Intel Xeon-powered JUWELS Module, which sits at position 44 on the list; both were integrated using the ParTec Modulo Cluster Software Suite.

Dell remains the top commercial and top academic supplier with eighth- and ninth-place wins (HPC5 at Eni and Frontera at TACC, respectively).

Rounding out the top 10 is the second newcomer system: Dammam-7. Installed at Saudi Aramco in Saudi Arabia, it is also the second industry supercomputer in the current top 10, joining HPC5 (Eni/Dell) at number eight. The HPE Cray CS-Storm system uses Intel Xeon Gold CPUs and Nvidia Tesla V100 GPUs, and it achieved 22.4 petaflops on the HPL benchmark.

Extending our purview to the top 50 segment, the list welcomes six additional new systems: Hawk at HLRS with HPE (#16), TOKI-SORA at the Japan Aerospace Exploration Agency with Fujitsu (#19), Taranis at Meteo France with Atos (#30), Plasma Simulator at Japan’s National Institute for Fusion Science with NEC (#33), an unnamed system at the Japan Atomic Energy Agency with HPE (#45), and Emmy+ at HLRN with Atos (#47).

The addition of #19 TOKI-SORA, a Fujitsu A64FX system similar in design to Fugaku, brings the total number of Arm-based machines on the list to five. Four of these were built by Fujitsu using its A64FX chips, while Astra at Sandia Labs (the world’s first petascale Arm system) was built by HPE using Marvell’s ThunderX2 processors.

The flattening trend we saw in June, driven by COVID-19’s disruptions, continues: the list sets another record-low refresh rate with only 44 new entrants (38 systems fell off the latest list and another six were removed after reaching end of life). Of this group of new entrants, the 11 highest-ranked are not based in the U.S. The U.S. added eight new systems, including Sandia Labs’ “SNL/NNSA CTS-1 Manzano” system (supplied by Penguin Computing with Intel CPUs and Intel Omni-Path) at #69 with 4.3 Linpack petaflops. China claimed the highest number of new systems – 13 – although ten of these use 10G or 25G interconnects, indicative of Web-scale, rather than true HPC, deployments. Japan put an impressive six new systems on the list, showcasing a diverse set of architectures: Fujitsu with A64FX Arm chips (and Tofu interconnect); Fujitsu with Intel and Nvidia chips; NEC’s SX-Aurora Tsubasa vector engine; Dell PowerEdge with AMD Epyc CPUs; and HPE SGI with straight Intel CPUs.

The Selene SuperPOD system.

Diving into the networking makeup of the list, 157 systems use InfiniBand, inclusive of the Sunway TaihuLight system, which uses a semi-custom version of HDR InfiniBand. There are six systems with Tofu, 31 with Aries, and a handful with custom or other proprietary interconnects. Omni-Path is the interconnect technology on 47 machines, including one new system (Livermore’s Ruby supercomputer) that uses the Cornelis Networks version. Launched in 2015, Intel’s Omni-Path Architecture (OPA) failed to find sufficient market footing and Intel pulled the plug on it in 2019. The IP was spun out as Cornelis Networks in September of this year with the encouragement of U.S. labs. In addition to Ruby (Supermicro/Intel), Livermore’s recently announced Mammoth Cluster (Supermicro/AMD) also uses Cornelis Omni-Path networking.

The aggregate Linpack performance provided by all 500 systems is 2.43 exaflops, up from 2.22 exaflops six months ago and 1.65 exaflops 12 months ago. The Linpack efficiency of the entire list is holding steady at 63.3 percent, compared with 63.6 percent six months ago, and the Linpack efficiency of the top 100 segment is also essentially unchanged: 71.2 percent versus 71.3 percent six months ago. The number one system, Fugaku, delivers a healthy computing efficiency of 82.28 percent, up a smidge from June’s 80.87 percent.
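For reference, Linpack (HPL) efficiency is simply the measured Rmax expressed as a fraction of theoretical peak Rpeak. A minimal Python sketch, assuming Fugaku’s Rpeak of roughly 537 petaflops (a figure not quoted in this article), reproduces the 82.28 percent result:

```python
def linpack_efficiency(rmax_pflops: float, rpeak_pflops: float) -> float:
    """HPL efficiency: measured Rmax as a percentage of theoretical peak Rpeak."""
    return 100.0 * rmax_pflops / rpeak_pflops

# Fugaku: Rmax = 442 PF (from the list); Rpeak ~537.2 PF is an assumed figure.
print(f"{linpack_efficiency(442.0, 537.2):.2f} percent")  # prints ~82.28 percent
```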

The minimum Linpack score required for the 56th Top500 list is 1.32 petaflops, versus 1.22 petaflops six months ago. The entry point for the top 100 segment is 3.16 petaflops versus 2.80 petaflops for the previous list. The current #500 system was ranked at #462 on the last edition.

As was the case six months ago, only two machines have crossed the 100-petaflop Linpack horizon (Fugaku and Summit). Four, if you count the two Chinese (Sugon) systems that were reportedly benchmarked over the last couple of years but never officially placed on the list (sources reported one system measured ~200 petaflops and a second reached over 300 petaflops). China has curtailed its supercomputing PR push in response to tech-war tensions with the U.S. that came to a head 18 months ago.

Energy efficiency gains

Nvidia tops the Green500 energy-efficiency rankings with its DGX SuperPOD (#172 on the Top500). Equipped with A100 GPUs, AMD Epyc Rome CPUs and HDR InfiniBand, the system achieved 26.2 gigaflops-per-watt during its 2.4-petaflop Linpack run.
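Green500 efficiency is the sustained HPL performance divided by average power during the run, so the SuperPOD’s power draw can be back-calculated from the two figures above. A small sketch (the resulting wattage is an estimate, not an officially reported number):

```python
rmax_gflops = 2.4e6                 # 2.4 Linpack petaflops expressed in gigaflops
efficiency_gflops_per_watt = 26.2   # Green500 result

# Power = performance / efficiency; print the implied draw in kilowatts.
implied_power_watts = rmax_gflops / efficiency_gflops_per_watt
print(f"Implied average power: ~{implied_power_watts / 1000:.0f} kW")  # roughly 92 kW
```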

The previous Green500 leader, MN-3 from Preferred Networks, slips to second place despite improving its rating from 21.1 to 26.0 gigaflops-per-watt. Ranked 332nd on the Top500, MN-3 is powered by the MN-Core chip, a proprietary accelerator that targets matrix arithmetic.

In third place on the Green500 is the Atos-built JUWELS Booster Module installed at Forschungszentrum Jülich. The new entrant, powered by AMD Epyc Rome CPUs and Nvidia A100 GPUs with HDR InfiniBand, delivered 25.0 gigaflops-per-watt and is ranked at number seven on the Top500.

Other list highlights

This list includes 148 systems that make use of accelerator/co-processor technology, up from 146 in June. Of these, 110 use Nvidia Volta chips, 15 use Nvidia Pascal, and eight leverage Nvidia Kepler. There is only one entry on the list that uses AMD GPUs: a Sugon-built Chinese system at the Pukou Advanced Computing Center, powered by AMD Epyc “Naples” CPUs and AMD Vega 20 GPUs. That system, now at #291, first appeared on the list one year ago.

The Top500 reports that Intel continues to provide the processors for the largest share (91.80 percent) of Top500 systems, down from 94.00 percent six months ago. AMD supplies CPUs for 21 systems (4.2 percent), up from 2 percent on the previous list. AMD also enjoys a higher top ranking with Selene (#5) than Intel does with Tianhe-2A (#6). The top four systems on the list are not x86-based.

Performance development over time – 1993-2020 (Source: Top500)