US Leads Supercomputing with #1, #2 Systems & Petascale Arm

By Tiffany Trader

November 12, 2018

The 31st Supercomputing Conference (SC) – commemorating 30 years since the first Supercomputing in 1988 – kicked off in Dallas yesterday, taking over the Kay Bailey Hutchison Convention Center and much of the surrounding area. That means there’s another Top500 list to dive into and discuss. If you follow the space closely there were no major surprises, yet a close inspection of the list yields interesting findings and a few firsts. The United States, despite continuing to lose ground in system share, had a particularly good showing, nabbing the top two spots and standing up the world’s first petaflops Arm-powered supercomputer.

Starting from the top, DOE CORAL siblings Summit and Sierra have both upped their Linpack scores and retain their number one and two spots. Built by IBM, Nvidia and Mellanox, the supercomputers entered the list six months ago with Summit taking highest honors and Sierra in third. Big sister Summit, installed at Oak Ridge, got a performance upgrade as we’d previously reported it would, climbing from 122.3 to 143.4 petaflops. It followed that Sierra, which debuted at number three six months ago, would likely get one as well (and it did), stepping from 71.6 to 94.6 petaflops.

Nov 2018 Top 10 – Click to Expand (Source: Top500)

Summit has also had its power efficiency optimized for the latest Linpack lineup, bumping it from 13.89 gigaflops/watt to 14.67 gigaflops/watt. Sierra didn’t include power metrics when it debuted six months ago, but now Livermore is reporting an energy efficiency of 12.72 gigaflops/watt. (We’ll look at what that means for their Green500 rankings in a moment.)
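Energy efficiency here is simply Linpack performance divided by power draw, so the reported gigaflops/watt figures imply each machine’s power consumption during the run. A back-of-the-envelope sketch in Python (the Rmax and efficiency inputs are the figures reported above; the megawatt outputs are our own rough arithmetic, not reported values):

```python
# Back-of-the-envelope: power draw (watts) = Rmax (gigaflops) / efficiency (gigaflops/watt).
# Rmax and efficiency figures are the ones reported above.
systems = {
    "Summit": (143.4e6, 14.67),  # (Rmax in gigaflops, gigaflops per watt)
    "Sierra": (94.6e6, 12.72),
}

for name, (rmax_gflops, gflops_per_watt) in systems.items():
    megawatts = rmax_gflops / gflops_per_watt / 1e6
    print(f"{name}: ~{megawatts:.1f} MW during Linpack")
```

That works out to roughly 9.8 MW for Summit and 7.4 MW for Sierra during the benchmark run.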

Sierra’s flops fortification was sufficient to knock China’s Sunway TaihuLight supercomputer from second to third place. Installed at the National Supercomputing Center in Wuxi, TaihuLight debuted at the top of the June 2016 listing. It is built almost entirely from Chinese-made indigenous computing technologies.

Following in fourth place is China’s other mega-system, the Tianhe-2A (Milky Way-2A), which achieved 61.4 petaflops thanks to an upgrade earlier this year that swapped out its 2012-era Intel Xeon Phi coprocessors for proprietary Matrix-2000 accelerators. Before the U.S. debuted Summit and Sierra in June 2018, China had enjoyed a long-running lead atop the list, claiming both the first and second spots for four consecutive list iterations (June 2016 through November 2017).

Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, moves up one spot into fifth place thanks to an upgrade that increased its Linpack performance from 19.6 to 21.2 petaflops. The boost secures Piz Daint’s place as fastest European HPC system, although it would have maintained that status even without the additional cores (but just barely).

Moving up three spots into sixth position is Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories. Trinity upped its performance from 14.1 to 20.2 petaflops. It is the only system in the top 10 to employ Intel Xeon Phi processors.

The AI Bridging Cloud Infrastructure (ABCI) deployed at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan has moved down two spots into seventh position with a Linpack mark of 19.9 petaflops. Made by Fujitsu, the system includes Xeon Gold processors and Nvidia Tesla V100 GPUs.

SuperMUC-NG at LRZ

Welcomed into the top 10 pack as the lone new entrant is SuperMUC-NG, in eighth position with 19.5 petaflops, provided by more than 305,000 Intel Xeon 8174 cores. This is the new fastest system in Germany, built by Lenovo and installed at the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum) in Garching, near Munich. It is the only system in the top 10 to use Intel’s Omni-Path interconnect.

Boasting 26.9 peak petaflops when it launched (compared to Piz Daint’s 25.3), SuperMUC-NG had a shot at overtaking Piz Daint for the title of fastest supercomputer on the European block. However, even if Piz Daint hadn’t added those extra cores and flops, it still would have kept its lead (with 19.59 petaflops versus SuperMUC-NG’s 19.48 petaflops).

Titan, the Cray XK7 supercomputer at Oak Ridge National Laboratory, moves down three spots into ninth place. The long-running U.S. record-holder debuted on the list at number one six years ago. 18,688 AMD Opterons and 18,688 Nvidia K20X GPUs provide Titan with 17.5 petaflops of Linpack goodness.

In tenth place is Sequoia, delivering 17.2 petaflops. An IBM BlueGene/Q supercomputer, Sequoia has been a critical asset of DOE’s Lawrence Livermore National Laboratory since 2011.

There are 153 new systems on the list. Lassen, in 11th place, is one of them. Lassen is an IBM Power9 system (S922LC) installed at Lawrence Livermore National Laboratory. Powered by Nvidia V100s and networked with dual-rail Mellanox EDR InfiniBand, Lassen achieves 15.4 petaflops.

New additions SuperMUC-NG and Lassen mean that NERSC’s Cori supercomputer slips from tenth to twelfth position. Cori is a Cray XC40, Intel Xeon Phi-based system; it is the primary HPC resource for DOE’s Lawrence Berkeley National Lab. Cori first entered the list at number five two years ago and has maintained its 14.01 Linpack petaflops.

Other notable new entrants are Taiwania 2, Electra and Eagle, ranked at 20 (9 petaflops), 33 (5.4 petaflops) and 35 (4.85 petaflops), respectively. Installed at the Taiwan National Center for High-performance Computing, Taiwania 2 was manufactured by Quanta Computer in collaboration with Taiwan Fixed Network and ASUS Cloud, and consists of Xeon Gold 6154 processors and Nvidia Tesla V100 GPUs. Electra and Eagle were both built by HPE using Xeon Gold processors; the former is located at NASA/Ames Research Center and the latter at the National Renewable Energy Laboratory.

Last but not least is notable first-timer Astra, the new Arm-based HPE-built supercomputer, deployed at Sandia National Laboratories. Astra gets the claim to fame of being the first Arm-powered supercomputer to make it onto the Top500. Seeing multiple nations bet on Arm for their exascale targets well before Arm had even reached petascale has struck me as risky. As large production systems like Astra in the US, Isambard in the UK and a CEA-run system in France are stood up, Arm server chips will have their proving ground. Astra leveraged 125,328 Marvell Cavium ThunderX2 cores to deliver 1.5 High Performance Linpack petaflops. It enters the list at number 203.

The entry point for the Top100 has reached 1.97 petaflops and there are now 427 systems with performance greater than a petaflops on the list (up from 272 six months ago).

China-U.S. Standing

China continues to lead in system share, while the U.S. maintains the aggregate performance edge it regained six months ago with the entry of its first two CORAL systems. China now claims 229 systems (45.8 percent of the total), while U.S. share has dropped to its lowest level ever: 108 systems (21.6 percent). That wide delta in system count is offset by the U.S. having the top two systems and generally operating more powerful systems (and more real HPC systems, as opposed to Web/cloud systems), allowing the U.S. to enjoy a 38 percent performance share, compared to China’s 31 percent. Related to the rise in these non-HPC systems, Gigabit Ethernet ropes together 254 systems, and 275 systems on the list are tagged as industry.
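System share is a simple count ratio over the 500 list entries; the percentages quoted above can be checked directly (performance share would require the full per-system Rmax data, which isn’t reproduced here):

```python
# System share = country's system count / 500 list entries.
# Counts are the ones quoted above.
TOTAL_SYSTEMS = 500
counts = {"China": 229, "United States": 108}

for country, n in counts.items():
    share = 100 * n / TOTAL_SYSTEMS
    print(f"{country}: {n} systems = {share:.1f}% of the list")
```

Running this reproduces the 45.8 percent and 21.6 percent figures above.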

Aggregate List Performance, Green500 & HPCG

The 52nd Top500 list holds a combined performance (Rmax) of 1.41 exaflops, an increase of roughly 16 percent over six months ago, when the total performance of all 500 systems first crossed the exaflops barrier at 1.22 exaflops. The total theoretical peak carried by the newly published list is 2.21 exaflops, up from 1.92 exaflops six months ago.

The Green500 has been integrated into the Top500 reporting process and HPCG is also included in the list now. Summit and Sierra hold the top positions on the HPCG ranking ahead of Japan’s K computer at number three. Newcomer Astra also achieved a notable HPCG result, coming in 36th on that list.

On the Green500, Summit and Sierra took third and seventh place, respectively (with 14.67 gigaflops/watt and 12.72 gigaflops/watt, as noted above).

The top two Green500 systems are Shoubu system B and DGX SaturnV Volta, ranked 374 and 373 on the Top500. Shoubu system B, made by PEZY/ExaScaler and located at RIKEN, achieves 17.6 gigaflops/watt, while DGX SaturnV Volta, made by Nvidia for Nvidia, delivers 15.1 gigaflops/watt.
