AMD Courts HPC with 11.5 Teraflops Instinct MI100 GPU

By Tiffany Trader

November 16, 2020

AMD today announced the new MI100 Instinct accelerator, billing it as “the world’s fastest HPC GPU” with 11.5 teraflops of peak double-precision floating point performance. A follow-on to the MI50 and MI60 Instinct accelerators launched two years ago (the “world’s first 7nm datacenter GPUs”), the MI100 is also manufactured on TSMC’s 7nm process, but boasts twice as many compute units as the previous generation within the same 300-watt power envelope.

Block diagram of the AMD Instinct MI100 accelerator, powered by the AMD CDNA architecture

The MI100 GPU is the first to incorporate AMD’s Compute DNA (CDNA) architecture with 120 CUs organized into four arrays. An evolution of AMD’s earlier GCN architecture, CDNA includes new matrix core engines that boost computational throughput for different numerical formats.

Going down the spec sheet, the MI100 offers 46.1 teraflops of peak single-precision matrix (FP32) performance, 23.1 teraflops of peak single-precision (FP32) performance, 184.6 teraflops of peak half-precision (FP16) performance, and 92.3 teraflops of peak bfloat16 (BF16) performance.

The new AMD matrix core technology provides the MI100 with 7x greater peak half-precision floating point performance compared to the MI50, according to AMD. Brad McCredie (corporate vice president of datacenter GPU and accelerated processing at AMD) told HPCwire the company is exploring other emerging numerical formats that target AI and ML workloads, but doesn’t want to get too far out in front of the industry.

AMD’s MI100 GPU presents a competitive alternative to Nvidia’s A100 GPU, rated at 9.7 teraflops of peak theoretical double-precision performance. However, the A100 is returning even higher performance than that on its FP64 Linpack runs. (Yes, you read that right.) The A100 GPU is achieving ~12 double-precision Linpack teraflops (see Selene, for example), and Nvidia confirmed to me that they use a different double-precision peak for their marketing material than for their Top500 Rpeak (9.7 versus 15.1 teraflops, respectively).

As new numerical formats optimized for AI/ML gain traction, performance comparisons – already a challenging, if not dark, art – are becoming more confounding. As always, the only sound comparisons rest on cost-performance and real-world evaluations of real-world applications. While prices for the MI100 have not been publicly disclosed and Nvidia does not advertise a list price for its A100s, AMD is claiming a 1.8x to 2.1x flops-per-dollar advantage over its competitor.

Fully connected 4-GPU Infinity Fabric technology hives with the AMD Instinct MI100 GPUs

Implementing the second-generation AMD Infinity Fabric technology, AMD says the MI100 provides ~2x the peer-to-peer peak I/O bandwidth of PCIe 4.0, with up to 340 GB/s of aggregate bandwidth per card. AMD’s bridging device (see graphic) joins four MI100 PCIe cards into a single coherent scale-up solution. In a server, the MI100 GPUs can be configured with up to two integrated quad GPU hives, each providing up to 552 GB/s of peer-to-peer I/O bandwidth, according to AMD.
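A back-of-the-envelope check, assuming AMD’s published per-link figure of 92 GB/s for the MI100’s three Infinity Fabric links: a fully connected four-GPU hive has six GPU-to-GPU links (each GPU paired with every other), and 6 x 92 GB/s = 552 GB/s of aggregate peer-to-peer bandwidth per hive. Per card, 3 x 92 GB/s = 276 GB/s of Infinity Fabric bandwidth plus 64 GB/s of PCIe 4.0 host bandwidth lines up with the quoted 340 GB/s aggregate figure.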

“We did four cards [fully-linked] because we think that is the sweet spot for HPC deployments, this four-to-one GPU to CPU ratio,” said McCredie.

Four stacks of 8GB HBM2 memory provide 32GB of HBM2 memory on each MI100 GPU. At a memory clock of 1.2 GHz, that works out to 1.23 TB/s of memory bandwidth. As with the MI50, the MI100’s support for PCIe Gen 4.0 enables 64 GB/s of peak theoretical transport bandwidth between CPU and GPU.
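For readers checking the arithmetic, and assuming the standard 1,024-bit interface per HBM2 stack: four stacks give a 4,096-bit bus, and double-data-rate transfers at the 1.2 GHz memory clock yield 2 x 1.2 GT/s x 4,096 bits / 8 bits per byte = 1,228.8 GB/s, matching the quoted 1.23 TB/s.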

AMD said it has no plans for custom mezzanine form factors with this generation – but AMD does see a role for those form factors going forward, as you might expect given its exascale wins (Frontier and El Capitan). While detailed node structures have not been publicly disclosed, both of those designs employ a four-to-one GPU-to-CPU ratio.

Source: AMD Financial Analyst Day slide (March 2020)

HPC market watcher Addison Snell, CEO of Intersect360 Research, remarked on AMD’s HPC focus and the implementation of its datacenter-centric CDNA architecture, distinct from the gaming-oriented RDNA (Radeon DNA) architecture.

“With the MI100 GPU, AMD is staying pure to its corporate focus on HPC,” said Snell. “While Nvidia’s messaging and benchmarking have been AI-heavy, AMD is hitting HPC hard, with 11.5 teraflops of double-precision performance as the marquee stat.”

“AMD is also emphasizing its new CDNA architecture as the focus for computing versus graphics; that’s where we find the GPU-to-GPU communication on the second-generation Infinity architecture.”

Prominent HPC sites Oak Ridge National Laboratory, the University of Pittsburgh and Pawsey Supercomputing Center evaluated the new GPUs along with AMD’s software frameworks. Their reports are positive.

“We’ve received early access to the MI100 accelerator, and the preliminary results are very encouraging. We’ve typically seen significant performance boosts, up to 2-3x compared to other GPUs,” said Bronson Messer, director of science, Oak Ridge Leadership Computing Facility. “What’s also important to recognize is the impact software has on performance. The fact that the ROCm open software platform and HIP developer tool are open source and work on a variety of platforms, it is something that we have been absolutely almost obsessed with since we fielded the very first hybrid CPU/GPU system.”

Oak Ridge National Laboratory benchmark comparisons (source: Oak Ridge and AMD): NAMD 2.14 (STMV 1.06M-atom benchmark), 2x EPYC 7742 + MI100 vs. 2x Power9 + V100 SXM; Cholla, PIConGPU and GESTS (total run measured), each 2x EPYC 7742 + MI100 vs. 2x EPYC 7742 + V100.

AMD is preparing ROCm – its open source toolset consisting of compilers, programming APIs and libraries – to be foundational for exascale computing. The recently released ROCm 4.0 has upgraded the compiler to be open source and unified to support both OpenMP 5.0 and HIP, said AMD. HIP (AMD’s heterogeneous-compute interface for portability) is a C++ runtime API that allows developers to write single-source code that can run on AMD and Nvidia GPUs (and possibly future Intel ones as well).
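HIP’s single-source model is easiest to see with a small example. Below is a minimal, illustrative sketch of a HIP vector-add in C++ – the kernel, names and sizes are hypothetical, not drawn from AMD’s materials – showing the kind of code that compiles unchanged for AMD or Nvidia GPUs.

// vector_add_hip.cpp – minimal single-source HIP sketch (illustrative only; error checking omitted).
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                           // 1M elements (arbitrary size)
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    // Allocate device buffers and copy inputs to the GPU.
    float *da, *db, *dc;
    hipMalloc(reinterpret_cast<void**>(&da), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&db), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&dc), n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    // The same launch runs on AMD or Nvidia targets; hipcc maps it to the right backend.
    hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

Compiled with hipcc, a source file like this targets CDNA GPUs through ROCm or Nvidia GPUs through the CUDA toolchain – which is the portability argument AMD is making for HIP.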

AMD reported that MI100-based systems will start shipping this month from a number of partners, among them Dell, Gigabyte, Hewlett Packard Enterprise and Supermicro.

 
