Japan’s Fugaku Tops Global Supercomputing Rankings

By Tiffany Trader

June 22, 2020

A new Top500 champ was unveiled today. Supercomputer Fugaku, the pride of Japan and the namesake of Mount Fuji, vaulted to the top of the 55th edition of the Top500 list with 415.5 Linpack petaflops, marking a win for system builder Fujitsu, for Arm-based supercomputing and for the fight against the COVID-19 pandemic in which Fugaku is already engaged. In reduced precision, measured via the new HPL-AI benchmark, Fugaku achieved a record 1.4 exaflops. The Fujitsu Arm system is installed at RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.

A decade in the making, Fugaku was developed by RIKEN in close collaboration with Fujitsu and the application community, with funding from MEXT. Its centerpiece is a new processor, Fujitsu’s 48-core Arm A64FX SoC. RIKEN’s Top500 run was performed with 396 racks, comprising 152,064 A64FX nodes, or approximately 95.6 percent of the entire 158,976-node system. With nearly 7.3 million Arm cores running at 2.2 GHz, Fugaku achieved a double-precision Linpack performance of 415.53 petaflops against a theoretical peak of 513.85 petaflops, a computing efficiency of 80.87 percent.
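
Those headline figures are internally consistent. Here is a back-of-the-envelope sketch, assuming the A64FX’s two 512-bit SVE FMA pipelines per core (32 FP64 flops per cycle per core) and the 2.2 GHz clock used for the run:

```python
# Back-of-the-envelope check of the Top500 figures quoted above. Assumes the
# A64FX's two 512-bit SVE FMA pipelines per core (32 FP64 flops per cycle per
# core) and the 2.2 GHz clock used for the Linpack run.
nodes = 152_064            # A64FX nodes used for the run (396 racks)
cores_per_node = 48        # compute cores per A64FX
clock_ghz = 2.2
flops_per_cycle = 32       # 2 FMA pipes x 8 FP64 lanes x 2 flops per FMA

cores = nodes * cores_per_node
rpeak_pf = cores * clock_ghz * flops_per_cycle / 1e6   # gigaflops -> petaflops
rmax_pf = 415.53                                       # measured Linpack result

print(f"cores:      {cores:,}")                # 7,299,072 (~7.3 million)
print(f"Rpeak:      {rpeak_pf:.2f} petaflops") # ~513.85
print(f"efficiency: {rmax_pf / rpeak_pf:.2%}") # ~80.87%
```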

An in-depth report from Top500 co-author Jack Dongarra provides these technical details:

The Fugaku system is built on the A64FX, an Arm v8.2-A processor that implements the Scalable Vector Extension (SVE) with 512-bit vectors. The Fugaku system adds the following Fujitsu extensions: hardware barrier, sector cache, prefetch, and a 48/52-core CPU. It is optimized for high-performance computing (HPC) with extremely high-bandwidth 3D-stacked memory (4x 8 GB HBM delivering 1,024 GB/s), an on-die Tofu-D network interface (~400 Gbps), high SVE floating-point throughput (3.072 TFLOP/s), and support for AI-oriented data types (FP16, INT8, etc.). The A64FX processor supports general-purpose Linux, Windows, and other cloud system software.

Fugaku provides 4.85 petabytes of total memory with an aggregate 163 petabytes-per-second of memory bandwidth. The Tofu-D 6D torus network delivers 6.49 petabytes-per-second of injection bandwidth. The storage system consists of three layers: 15.9 petabytes of NVMe storage, a Lustre-based global file system, and cloud storage services still in preparation. The installation occupies 1,920 square meters of floor space (equivalent to four basketball courts) and operates within a 30MW power envelope.
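
As a quick sanity check, the aggregate figures follow from the per-node numbers in Dongarra’s report (the 4.85-petabyte total corresponds to binary units, hence PiB below). The 40.8 GB/s per-node Tofu-D injection rate used in the sketch is an assumption based on Fujitsu’s published Tofu-D specifications, not a figure from this article:

```python
# Sanity check of the aggregate figures above from per-node numbers. HBM
# capacity and bandwidth per node come from the spec quoted earlier (4 x 8 GB
# HBM at 1,024 GB/s); the 40.8 GB/s per-node Tofu-D injection rate is an
# assumption based on Fujitsu's published Tofu-D figures.
full_system_nodes = 158_976

mem_gib_per_node = 4 * 8        # GiB of HBM per node
mem_bw_gbs_per_node = 1024      # GB/s of HBM bandwidth per node
inject_gbs_per_node = 40.8      # GB/s Tofu-D injection per node (assumed)

print(f"memory:           {full_system_nodes * mem_gib_per_node / 1024**2:.2f} PiB")  # ~4.85
print(f"memory bandwidth: {full_system_nodes * mem_bw_gbs_per_node / 1e6:.0f} PB/s")  # ~163
print(f"injection bw:     {full_system_nodes * inject_gbs_per_node / 1e6:.2f} PB/s")  # ~6.49
```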

“We have a brand new processor,” said Fugaku project lead Satoshi Matsuoka, director of R-CCS, in today’s live-streamed Top500 briefing, hosted as part of the ISC 2020 Digital proceedings. “It’s an Arm instruction set, but is a brand new design by Fujitsu and RIKEN. [As a general-purpose CPU], it runs the same Arm code as a smartphone, it will run Red Hat Linux out of the box, [and] it will run Windows. It will run PowerPoint, even, but it’s also built to accommodate very large bandwidth, which is very important to sustain the speed up of the applications.”

Fugaku versus second-place finishers on the Top500, HPCG, HPL-AI and Graph500 benchmarks (right-hand column shows speedup)

“You can think of Fugaku as putting 20 million smartphones in a single room, or equivalently 300,000 standard servers in a single room,” said Matsuoka, highlighting the scale of the system. “And these, by coincidence, are about the same numbers as the annual shipments of the respective units in Japan. So if you have two Fugakus, basically, you can pretty much fill the so-called edge-to-cloud compute requirements for the entire country of Japan.”

The cost to build Fugaku was about one billion dollars, on par with what is projected for the U.S. exascale machines. The total includes “significant R&D cost & the DC upgrade cost,” Matsuoka noted in a tweet, adding, “it would have cost 3 times as much if we had used off-the-shelf CPUs.”

Fugaku demonstrated more than 2.8 times the performance of the previous list leader, Summit (ORNL), benchmarked at 148.6 petaflops and now in second place. The last time Japan held the top spot was in November 2011 with the K computer, which was supplanted six months later by Sequoia, an IBM BlueGene/Q system installed at Lawrence Livermore National Laboratory for the National Nuclear Security Administration.

June 2020 top 100 research systems by chip architecture – aggregate performance share (source: Top500)

Fugaku contributes 18.7 percent of the list’s aggregate flops, setting a new record. The machine’s magnitude shakes up the list dynamics, boosting Fujitsu into first place among vendors by performance share and lifting Japan into third place among countries (behind the still-leading U.S. and China). Segmenting the list by the top 100 research systems, Japan zooms into first place (with 36 percent), and the Arm architecture, which only entered the list a year and a half ago, now dominates with a 31 percent performance share.

There are just three other Arm systems on the Top500: the A64FX Fugaku prototype at Fujitsu (#205); the new Fujitsu PRIMEHPC FX1000 A64FX system, Flow, at Japan’s Nagoya University (#37); and Astra, the Marvell/Cavium ThunderX2 installation at Sandia (#245), recognized as the world’s first petascale Arm system in November 2018.

Performance fraction of Top500 systems (source: Top500)

Fugaku also broke records on the HPCG (13.4 petaflops), Graph500 (70,980 gigaTEPS) and HPL-AI (1.42 exaflops) benchmarks, coming in first in all three. Remarking on the system’s placement on the new AI-geared HPL-AI benchmark, Top500 co-author Erich Strohmaier observed, “That’s nontrivial, because to satisfy the requirements of the benchmark, you cannot just compute only in 16-bit; you actually have to make up for the lost precision at the end of the benchmark to get back to the full 64-bit precision in the results. But that penalty was easily overcome by the more than two exaflops of peak performance Fugaku has in 16-bit operations.”
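
Strohmaier is describing mixed-precision iterative refinement: the expensive factorization runs in low precision, and the solution is then corrected with higher-precision residuals until it meets the benchmark’s 64-bit accuracy requirement. The sketch below illustrates the idea only; it is not the HPL-AI code, and float32 stands in for FP16 because NumPy/SciPy’s LAPACK routines do not factorize in half precision.

```python
# Illustrative sketch of mixed-precision iterative refinement, the idea behind
# HPL-AI: do the expensive factorization in low precision, then recover full
# FP64 accuracy with residual corrections. Not the HPL-AI benchmark code;
# float32 stands in for FP16.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

# Low-precision LU factorization -- this is where the bulk of the flops go.
lu, piv = lu_factor(A.astype(np.float32))
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

# Refinement loop: residuals are computed and accumulated in FP64, and each
# correction reuses the cheap low-precision factorization.
for _ in range(10):
    r = b - A @ x
    if np.linalg.norm(r) <= 1e-12 * np.linalg.norm(b):
        break
    x += lu_solve((lu, piv), r.astype(np.float32))

print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```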

Fugaku is also one of the most energy-efficient machines on the Top500, joining its “mini-me” A64FX prototype in the top ten of the Green500. With its 28.33 MW Linpack run, Fugaku delivered 14.7 gigaflops-per-watt, earning it a ninth-place finish on the Green500 list. The smaller A64FX prototype (#205 on the Top500), installed at Fujitsu’s Numazu plant, holds the fourth spot on the Green500 with 16.87 gigaflops-per-watt. Green500 glory goes to newcomer Preferred Networks, which achieved 21.1 gigaflops-per-watt with its MN-3 system (#394 on the Top500), which combines Intel Xeon processors with specialized AI processors.
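
The Green500 figure follows directly from the Linpack result and the reported power draw of the run:

```python
# Fugaku's Green500 efficiency from the numbers above.
rmax_gflops = 415.53e6   # 415.53 petaflops, expressed in gigaflops
power_watts = 28.33e6    # 28.33 MW Linpack power draw
print(f"{rmax_gflops / power_watts:.2f} gigaflops-per-watt")   # ~14.67
```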

The two Arm systems, Fugaku and its prototype, are notable as the only systems in the top 20 of the Green500 that do not use GPUs or specialized accelerators. “Our power efficiency is pretty much in the range of GPUs or the latest specialized accelerators while being a general-purpose CPU,” said Matsuoka, adding that the Fugaku processor is three times more powerful and three times more power-efficient [for RIKEN’s target workloads] than traditional CPUs, thanks to extensive tuning.

June 2020 top 100 research systems by country – aggregate performance share (source: Top500)

Matsuoka reported that Fugaku was put into production almost a year ahead of schedule to combat COVID-19 (see additional HPCwire coverage here). For pharmaceutical applications that assess the effectiveness of drug targets, Fugaku is showing 100x speedups over the K computer, according to Matsuoka. Efforts are also being directed to societal and epidemiological applications that simulate how infections spread and the effectiveness of contact tracing. “The latter has tremendous potential and is already helping to mitigate the virus infections at macroscale,” Matsuoka added.

Asked about potential plans to grow Fugaku across the 64-bit precision exascale threshold, Matsuoka responded wryly, “If we have the money, obviously, anything is possible.” But he emphasized the goal of the project was never about peak performance.

“Our design metric was basically to accelerate existing applications by two orders of magnitude,” he said. “In some sense, the excellence is in the variety of the benchmarks, not just the Top500, but across the board, HPCG, HPL-AI, Top500, and so forth — showing basically the result of our efforts to accelerate the applications. So the outcome is applications describing the benchmarks and not the other way around. So, we’re very satisfied with the result. If we make progress it’ll only be because we will have made progress in the application speedup by which we could be achieving exaflop.”

Matsuoka added that the software ecosystem was the priority in the development of Fugaku. “That’s why we went to the Arm ecosystem from Sparc, which was the K’s ecosystem and was not very, let’s say, proliferating,” he said. “The decision to go with Arm has led to a variety of collaborations with various institutions worldwide, with the DOE, with the European institutions, and so forth. Software is the key. That’s the heart of the computing system, and we’re making every effort to enrich the Arm ecosystem so that it’ll be one of the dominant systems in the HPC community.”

Feature image courtesy of RIKEN.
