US Leads Supercomputing with #1, #2 Systems & Petascale Arm

By Tiffany Trader

November 12, 2018

The 31st Supercomputing Conference (SC) – commemorating 30 years since the first Supercomputing in 1988 – kicked off in Dallas yesterday, taking over the Kay Bailey Hutchison Convention Center and much of the surrounding area. That means there’s another Top500 list to dive into and discuss. If you follow the space closely there were no major surprises, yet a close inspection of the list yields interesting findings and a few firsts. The United States, despite continuing to lose ground in system share, had a particularly good showing, nabbing the top two spots and standing up the world’s first petaflops Arm-powered supercomputer.

Starting from the top, DOE CORAL siblings Summit and Sierra have both upped their Linpack scores and now hold the number one and two spots. Built by IBM, Nvidia and Mellanox, the supercomputers entered the list six months ago with Summit taking highest honors and Sierra in third. Big sister Summit, installed at Oak Ridge, got a performance upgrade as we’d previously reported it would, climbing from 122.3 to 143.4 petaflops. Sierra, which debuted in third place six months ago, received a similar upgrade, stepping from 71.6 to 94.6 petaflops.

Nov 2018 Top 10 (Source: Top500)

Summit has also had its power efficiency optimized for the latest Linpack run, bumping it from 13.89 gigaflops/watt to 14.67 gigaflops/watt. Sierra didn’t include power metrics when it debuted six months ago, but Livermore is now reporting an energy efficiency of 12.72 gigaflops/watt. (We’ll look at what that means for their Green500 rankings in a moment.)
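Since the efficiency number is simply Linpack Rmax divided by average power draw during the run, the reported gigaflops/watt figures imply a rough power budget for each machine. A back-of-envelope sketch (the megawatt values below are derived from the list figures, not reported on the list itself):

```python
def implied_power_mw(rmax_petaflops: float, gflops_per_watt: float) -> float:
    """Estimate average power draw (MW) from Linpack Rmax and reported efficiency."""
    rmax_gflops = rmax_petaflops * 1e6        # 1 petaflops = 1e6 gigaflops
    watts = rmax_gflops / gflops_per_watt     # efficiency = Rmax / power
    return watts / 1e6                        # watts -> megawatts

# Figures reported on this list:
print(round(implied_power_mw(143.4, 14.67), 1))  # Summit: ~9.8 MW
print(round(implied_power_mw(94.6, 12.72), 1))   # Sierra: ~7.4 MW
```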

Sierra’s flops fortification was sufficient to knock China’s Sunway TaihuLight supercomputer from second to third place. Installed at the National Supercomputing Center in Wuxi, TaihuLight debuted at the top of the June 2016 list. It is built almost entirely from indigenous Chinese computing technology.

Following in fourth place is China’s other mega-system, the Tianhe-2A (Milky Way-2A), which achieved 61.4 petaflops thanks to an upgrade earlier this year that swapped out its 2012-era Intel Xeon Phi coprocessors for proprietary Matrix-2000 accelerators. Before the U.S. debuted Summit and Sierra in June 2018, China had enjoyed a long-running lead atop the list, claiming both the first and second spots for four consecutive list iterations (June 2016 through November 2017).

Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, moves up one spot into fifth place thanks to an upgrade that increased its Linpack performance from 19.6 to 21.2 petaflops. The boost secures Piz Daint’s place as fastest European HPC system, although it would have maintained that status even without the additional cores (but just barely).

Moving up three spots into sixth position is Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories. Trinity upped its performance from 14.1 to 20.2 petaflops. It is the only system in the top 10 to employ Intel Xeon Phi processors.

The AI Bridging Cloud Infrastructure (ABCI) deployed at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan has moved down two spots into seventh position with a Linpack mark of 19.9 petaflops. Made by Fujitsu, the system combines Xeon Gold processors and Nvidia Tesla V100 GPUs.

SuperMUC-NG at LRZ

Welcomed into the top 10 pack as the lone new entrant is SuperMUC-NG, in eighth position with 19.5 petaflops, provided by more than 305,000 Intel Xeon 8174 cores. This is the new fastest system in Germany, built by Lenovo and installed at the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum) in Garching, near Munich. It is the only system in the top 10 to use Intel’s Omni-Path interconnect.

Boasting 26.9 peak petaflops at launch (compared to Piz Daint’s 25.3), SuperMUC-NG had a shot at overtaking Piz Daint for the title of fastest supercomputer on the European block. However, even if Piz Daint hadn’t added additional cores and flops, it still would have kept its lead (with 19.59 petaflops versus SuperMUC-NG’s 19.48 petaflops).

Titan, the Cray XK7 supercomputer at Oak Ridge National Laboratory, moves down three spots into ninth place. The long-running U.S. record-holder debuted on the list at number one six years ago. 18,688 AMD Opterons and 18,688 Nvidia K20X GPUs provide Titan with 17.5 petaflops of Linpack goodness.

In tenth place is Sequoia, delivering 17.2 petaflops. An IBM BlueGene/Q supercomputer, Sequoia has been a critical asset of DOE’s Lawrence Livermore National Laboratory since 2011.

There are 153 new systems on the list. Lassen, in 11th place, is one of them. Lassen is an IBM Power9 System (S922LC), installed at Lawrence Livermore National Laboratory. Powered by Nvidia V100s, and networked with dual-rail Mellanox EDR Infiniband, Lassen achieves 15.4 petaflops.

New additions SuperMUC-NG and Lassen mean that NERSC’s Cori supercomputer slips from tenth to twelfth position. Cori is a Cray XC40, Intel Xeon Phi-based system; it is the primary HPC resource for DOE’s Lawrence Berkeley National Laboratory. Cori first entered the list at number five two years ago and has maintained its 14.01 Linpack petaflops.

Other notable new entrants are Taiwania 2, Electra and Eagle, ranked at 20 (9 petaflops), 33 (5.4 petaflops) and 35 (4.85 petaflops), respectively. Installed at the Taiwan National Center for High-performance Computing, Taiwania 2 was manufactured by Quanta Computer in collaboration with Taiwan Fixed Network and ASUS Cloud, and consists of Xeon Gold 6154 processors and Nvidia Tesla V100 GPUs. Electra and Eagle are both built by HPE using Xeon Gold processors; the former is located at NASA/Ames Research Center and the latter at the National Renewable Energy Laboratory.

Last but not least is notable first-timer Astra, the new Arm-based HPE-built supercomputer, deployed at Sandia National Laboratories. Astra gets the claim to fame of being the first Arm-powered supercomputer to make it onto the Top500. Seeing multiple nations betting on Arm for their exascale targets well before Arm had reached petascale has struck me as risky. As large production systems like Astra in the US, Isambard in the UK and a CEA-run system in France are stood up, Arm server chips will have their proving ground. Astra leveraged 125,328 Marvell Cavium ThunderX2 cores to deliver 1.5 High Performance Linpack petaflops. It enters the list at number 203.

The entry point for the Top100 has reached 1.97 petaflops and there are now 427 systems with performance greater than a petaflops on the list (up from 272 six months ago).

China-U.S. Standing

China continues to lead in system share, while the U.S. maintains the aggregate performance edge it regained six months ago with the entry of its first two CORAL systems. China now claims 229 systems (45.8 percent of the total), while the U.S. share has dropped to its lowest level ever: 108 systems (21.6 percent). That wide delta in system count is offset by the U.S. having the top two systems and generally operating more powerful systems (and more real HPC systems, as opposed to Web/cloud systems), allowing the U.S. to enjoy a 38 percent performance share, compared to China’s 31 percent. Related to the rise in these non-HPC systems, Gigabit Ethernet ropes together 254 systems. 275 systems on the list are tagged as industry.

Aggregate List Performance, Green500 & HPCG

The 52nd Top500 list holds a combined performance (Rmax) of 1.41 exaflops. That is a 15.6 percent increase from six months ago, when the total performance of all 500 systems first crossed the exaflops barrier at 1.22 exaflops. The total theoretical peak carried by the newly published list is 2.21 exaflops, up from 1.92 exaflops six months ago.
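The growth rates fall straight out of the rounded aggregate figures; a quick sanity check:

```python
def pct_growth(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Aggregate figures from the June 2018 and November 2018 lists:
print(round(pct_growth(1.22, 1.41), 1))  # aggregate Rmax growth, ~15.6 percent
print(round(pct_growth(1.92, 2.21), 1))  # aggregate Rpeak growth, ~15.1 percent
```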

The Green500 has been integrated into the Top500 reporting process and HPCG is also included in the list now. Summit and Sierra hold the top positions on the HPCG ranking ahead of Japan’s K computer at number three. Newcomer Astra also achieved a notable HPCG result, coming in 36th on that list.

On the Green500, Summit and Sierra placed third and seventh, respectively (with the 14.67 gigaflops/watt and 12.72 gigaflops/watt marks reported above).

The top two Green500 systems are Shoubu system B and DGX SaturnV Volta, ranked 374 and 373 on the Top500. Shoubu system B, made by PEZY Computing/ExaScaler and located at RIKEN, achieves 17.6 gigaflops/watt, while DGX SaturnV Volta, made by Nvidia for Nvidia, delivers 15.1 gigaflops/watt.
