Nvidia Nabs #7 Spot on Top500 with Selene, Launches A100 PCIe Cards

By Tiffany Trader

June 22, 2020

Nvidia unveiled its Selene AI supercomputer today in tandem with the updated listing of the world’s fastest computers. Nvidia also introduced the PCIe form factor of the Ampere-based A100 GPU.

Nvidia’s new internal AI supercomputer, Selene, joins the upper echelon of the 55th Top500’s ranks and breaks an energy-efficiency barrier. With 27.5 double-precision Linpack petaflops, Selene landed the number seven spot on the latest Top500 list, released today as part of the ISC 2020 Digital proceedings. Selene is the second most-performant industry system on the list, coming in one spot below Eni’s HPC5 machine, which was sixth with 35.5 HPL petaflops (and also uses Nvidia GPUs).

A100 PCIe

This Top500 list marks the entrance of two industry systems into the top ten, with Selene being the first internal IT vendor system to do so. Nvidia uses supercomputers internally to support chip design and model development, as well as for its work in robotics, self-driving cars, healthcare and other research projects.

Located in Santa Clara, Calif., Selene is a DGX SuperPOD, powered by Nvidia’s A100 GPUs and AMD’s Epyc Rome CPUs within the DGX A100 form factor, clustered over Mellanox HDR InfiniBand. Altogether, Selene comprises 280 DGX A100s, housing a total of 2,240 A100 GPUs, and 494 Mellanox Quantum 200G InfiniBand switches, providing a 56 TB/s network fabric. The system includes 7 petabytes of all-flash network storage.
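The component counts above are internally consistent; here is a quick arithmetic check (assuming the standard eight-A100 DGX A100 node, which is Nvidia’s published configuration):

```python
# Sanity check on Selene's reported configuration.
nodes = 280          # DGX A100 systems in Selene, as reported
gpus_per_node = 8    # A100 GPUs per DGX A100 (standard configuration)

total_gpus = nodes * gpus_per_node
print(total_gpus)    # 2240, matching the reported 2,240 A100 total
```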

A100 SXM

Selene was built with vertical integration of the network and the GPUs, using SHARP, said Gilad Shainer, senior vice president of marketing, who came to Nvidia via the Mellanox acquisition. “SHARP is the engine on the network that does the data reduction, which is a critical part in both traditional HPC simulations and deep learning,” he said in a pre-briefing held for media.

On the heels of Nvidia’s Ampere launch, Selene was built and brought online in less than a month, the company said.

Nvidia also runs internal workloads on three other machines in the Top500 ranking: the V100-based DGX SuperPOD machine, which came in 24th on the latest list with 9.4 Linpack petaflops; the P100-based DGX Saturn-V, deployed in 2016 and currently in 78th place with 3.3 petaflops; and Circe, another V100-based SuperPOD, which grabbed the 91st rung with 3.1 Linpack petaflops.

Reached for comment, Karl Freund, senior analyst for HPC and deep learning with Moor Insights and Strategy, underscored just how integral this in-house supercomputing power is to Nvidia’s competitive position. “First with Saturn-V and now with Selene, Nvidia’s using their own technology to create better products, hardware and software, and that’s going to create a tough bar for somebody to clear competitively,” he told HPCwire. “You can’t imagine a startup spending tens of millions of dollars to develop a supercomputer that their engineers can use to develop their next chip. The use of AI, especially deep learning and reinforcement learning networks, to do back-end physical design is shown to create massive innovation.”

Nvidia’s newest AI supercomputer, Selene, also notched a second-place finish on the Green500 list, delivering 20.52 gigaflops-per-watt and becoming one of only two machines to break the 20 gigaflops-per-watt barrier. The top-ranked green machine is MN-3, made by Top500 newcomer Preferred Networks. MN-3 turned in a record 21.1 gigaflops-per-watt run, a 1.62-petaflops Linpack score, and a 394th-place finish on the Top500.
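Gigaflops-per-watt is simply sustained Linpack throughput divided by measured power draw, so MN-3’s reported numbers imply a power envelope of roughly 77 kW (a back-of-envelope estimate; the Green500’s actual power-measurement run rules are more involved):

```python
# Derive approximate system power from the reported Linpack score
# and efficiency figure for MN-3.
linpack_pflops = 1.62       # MN-3 Linpack score, petaflops
gflops_per_watt = 21.1      # reported Green500 efficiency

power_watts = (linpack_pflops * 1e6) / gflops_per_watt  # PF -> GF, then divide
print(round(power_watts / 1000, 1))  # ~76.8 kW implied power draw
```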

Nvidia GPUs power six out of the ten most energy-efficient machines on the Top500 and fifteen out of the top 20.

Nvidia is also expanding its Ampere portfolio with a new PCIe A100 GPU card. When Nvidia launched its Ampere architecture, the only way to obtain A100 GPUs was to purchase Nvidia’s DGX A100 systems (available in four- and eight-GPU configurations) or the HGX A100 building blocks, leveraged by partnering cloud service providers and server makers. Now the datacenter company is announcing that PCIe-based systems will be forthcoming from server partners, in configurations spanning from one to ten or more GPUs.

The SXM variant with NVLink is still only available as part of the HGX platform, which, owing to its NVLink connectivity, provides 10 times the bandwidth of PCIe Gen4, according to Nvidia.

Nvidia sold its prior-generation V100 GPUs in both the SXM and PCIe form factors. SXMs were not restricted to an HGX board sale, which enabled system makers to essentially build their own DGX clones that potentially undercut Nvidia’s sales. Now Nvidia is tightening its sales strategy: OEM partners that want to offer servers based on the more performant NVLink-equipped SXM parts must build their A100-based solutions on Nvidia’s four- or eight-way HGX boards.

“It’s kind of a bifurcated model by channel; direct channel customers can and will buy the DGX, and everybody else buys through OEMs,” said Freund. “It’s a pretty clean model. The OEMs are on notice that they gotta move fast or Nvidia will take up all of this as a system vendor, right? But Nvidia doesn’t really want to have a sales channel broad enough to do that exclusively. So they still need the OEMs.”

The PCIe form factor matches SXM on peak performance: 9.7 teraflops FP64 performance (up to 19.5 teraflops FP64 tensor core performance), and 19.5 teraflops FP32 performance (up to 312 teraflops tensor float 32 [with structural sparsity enabled]). At 250 watts, however, versus the SXM’s 400 watts, the PCIe A100 is designed to run at a lower TDP. This means that while peak performance is the same, sustained performance is affected. On real applications, the A100 PCIe GPU delivers about 90 percent of the performance of the A100 SXM when running on a single GPU, Nvidia said. But when scaling up, where applications run on four, eight or more GPUs, the SXM configuration inside the HGX provides up to 50 percent higher performance on account of the NVLink connections, according to Nvidia.
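Two relationships are implicit in those figures: the 312-teraflops sparse TF32 number is the dense TF32 rate doubled by 2:4 structured sparsity (the 156-teraflops dense rate is Nvidia’s published A100 spec, not quoted in this article), and 90 percent of the performance at 62.5 percent of the power implies a single-GPU perf-per-watt edge for the PCIe card:

```python
# Relationships among the quoted A100 PCIe/SXM figures.
dense_tf32 = 156.0                      # teraflops, Nvidia's published dense TF32 rate
assert dense_tf32 * 2 == 312.0          # 2:4 structured sparsity doubles peak throughput

pcie_tdp, sxm_tdp = 250, 400            # watts, as quoted
delivered = 0.90                        # single-GPU PCIe vs. SXM delivered performance
perf_per_watt_ratio = delivered / (pcie_tdp / sxm_tdp)
print(round(perf_per_watt_ratio, 2))    # 1.44x single-GPU perf/watt for the PCIe card
```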

Nvidia says the PCIe configuration is well suited for mainstream accelerated servers that go into the standard racks that offer lower power per server. “While the PCIe are intended for AI inference and some HPC applications that scale across one or two GPUs, the A100 SXM configuration is ideal for customers with applications scaling to multiple GPUs in a server, as well as across servers,” said Paresh Kharya, director of product management, accelerated computing at Nvidia.

Nvidia benchmarking results*

As Nvidia ramps its go-to-market for the A100, the company anticipates an expanded ecosystem of A100-powered servers. It expects 30 systems this summer, with over 20 more coming by the end of the year. Systems are expected from a wide range of manufacturers, including ASUS, Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, One Stop Systems, Quanta/QCT and Supermicro. Nvidia also reported that it is building out its portfolio of NGC-Ready certified systems.

* 1 BERT pre-training throughput using Pytorch, including (2/3) Phase 1 and (1/3) Phase 2 | Phase 1 Seq Len = 128, Phase 2 Seq Len = 512 | V100: NVIDIA DGX-1™ server with 8x NVIDIA V100 Tensor Core GPU using FP32 precision | A100: NVIDIA DGX A100 server with 8x A100 using TF32 precision.
2 BERT large inference | NVIDIA T4 Tensor Core GPU: NVIDIA TensorRT™ (TRT) 7.1, precision = INT8, batch size 256 | V100: TRT 7.1, precision FP16, batch size 256 | A100 with 7 MIG instances of 1g.5gb; pre-production TRT, batch size 94, precision INT8 with sparsity.
3 V100 used is single V100 SXM2. A100 used is single A100 SXM4. AMBER based on PME-Cellulose, LAMMPS with Atomic Fluid LJ-2.5, FUN3D with dpw, Chroma with szscl21_24_128.
