AMD Launches Milan-X CPU with 3D V-Cache and Multichip Instinct MI200 GPU

By Tiffany Trader

November 8, 2021

At a virtual event this morning, AMD CEO Lisa Su unveiled the company’s latest and much-anticipated server products: the new Milan-X CPU, which leverages AMD’s new 3D V-Cache technology; and its new Instinct MI200 GPU, which provides up to 220 compute units across two Infinity Fabric-connected dies, delivering an astounding 47.9 peak double-precision teraflops.

“We’re in a high-performance computing megacycle, driven by the growing need to deploy additional compute performance delivered more efficiently and at ever-larger scale to power the services and devices that define modern life,” said Su.

AMD’s new third-generation Epyc CPU with AMD 3D V-Cache, codenamed Milan-X, is the company’s first server CPU with 3D chiplet technology. The processors have three times the L3 cache of standard Milan processors. In Milan, each core complex die (CCD) had 32 megabytes of cache; Milan-X adds 64 megabytes of 3D stacked cache on top for a total of 96 megabytes per CCD. With eight CCDs, that adds up to 768 megabytes of L3 cache. Adding in L2 and L1 cache, there is a total of 804 megabytes of cache per socket.
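The cache totals can be sanity-checked with a quick sketch. The per-core L2 (512 KB) and L1 (32 KB instruction plus 32 KB data) sizes are standard Zen 3 figures assumed here, not stated in the article:

```python
# Reproduce the article's Milan-X cache arithmetic for a 64-core socket.
CCDS = 8
L3_PER_CCD_MB = 32 + 64      # base SRAM plus the 3D V-Cache stack
CORES = 64
L2_PER_CORE_MB = 0.5         # 512 KB per core (assumed Zen 3 figure)
L1_PER_CORE_MB = 0.0625      # 32 KB instruction + 32 KB data (assumed)

l3_total = CCDS * L3_PER_CCD_MB                                  # 768 MB
socket_total = l3_total + CORES * (L2_PER_CORE_MB + L1_PER_CORE_MB)
print(l3_total, socket_total)  # 768 804.0 -> matches the quoted totals
```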

Milan-X is built on the same 7nm Zen 3 cores as Milan and retains the same maximum core count of 64. The enhanced processors are compatible with existing platforms after a BIOS upgrade.

Milan-X with 3D V-Cache employs a hybrid bonding plus through-silicon-via (TSV) approach, providing more than 200 times the interconnect density of 2D chiplets and more than 15 times the density of existing 3D stacking solutions, according to AMD. The die-to-die interface uses a direct copper-to-copper bond with no solder bumps to improve thermals, transistor density and interconnect pitch.

AMD is reporting a 50 percent performance improvement for Milan-X on targeted technical computing workloads compared to Milan processors. The chipmaker demonstrated Milan-X’s performance speedup on an EDA workload, running Synopsys’ verification solution VCS. A 16-core Milan-X with AMD’s 3D V-Cache delivered 66 percent faster RTL verification compared to the standard Milan without V-Cache. VCS is used by many of the world’s top semiconductor companies to catch defects early in the development process before a chip is committed to silicon.

Microsoft Azure is the first announced customer for Milan-X, with upgraded HBv3 instances in preview today, and a planned refresh on the way for its entire HBv3 deployment. Traditional OEM and ODM server partners Dell Technologies, HPE, Lenovo, and Supermicro are preparing Milan-X products for the first quarter of 2022. Named ISV ecosystem partners include Altair, Ansys, Cadence, Siemens and Synopsys.

Manufactured on TSMC’s 6nm process, the MI200 is the world’s first multichip GPU, designed to maximize compute and data throughput in a single package. The MI200 series contains two CDNA 2 GPU dies harnessing 58 billion transistors. It features up to 220 compute units and 880 second-generation matrix cores. Eight stacks of HBM2e memory provide a total of 128 gigabytes of memory at 3.2 TB/s, four times the capacity and 2.7 times the bandwidth of the MI100. Connecting the two CDNA 2 dies are Infinity Fabric links running at 25 Gbps for a total of 400 GB/s of bidirectional bandwidth.
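The generational memory claims check out against AMD's published MI100 specs (32 GB of HBM2 at 1.2 TB/s), which are assumed here from AMD's product page rather than taken from the article:

```python
# Check the MI200-vs-MI100 memory capacity and bandwidth ratios.
mi200_gb, mi200_tbs = 128, 3.2
mi100_gb, mi100_tbs = 32, 1.2   # assumed MI100 published specs

print(mi200_gb / mi100_gb)              # 4.0 -> "four times the capacity"
print(round(mi200_tbs / mi100_tbs, 1))  # 2.7 -> "2.7 times the bandwidth"
print(mi200_tbs * 1000 / 8)             # 400.0 GB/s per HBM2e stack
```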

The MI200 accelerator, with up to 47.9 peak double-precision teraflops, ostensibly answers the question: what if a chip designer dramatically optimized the GPU architecture for double-precision (FP64) performance? The MI250X ramps up peak double-precision 4.2 times over the MI100 in one year (47.9 teraflops versus 11.5 teraflops). By comparison, AMD pointed out that Nvidia grew the traditional double-precision FP64 peak performance of its server GPUs 3.7 times from 2014 to 2020. In a side-by-side comparison, the MI200 OAM is nearly five times faster than Nvidia’s A100 GPU in peak FP64 performance, and 2.5 times faster in peak FP32 performance.
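The generational and competitive ratios quoted here follow from the peak numbers. The A100's 9.7-teraflop peak FP64 (non-tensor) is assumed from Nvidia's published datasheet, not from the article itself:

```python
# Verify the FP64 speedup claims behind the MI250X comparisons.
mi250x_fp64 = 47.9   # peak vector FP64, teraflops
mi100_fp64 = 11.5
a100_fp64 = 9.7      # assumed A100 non-tensor FP64 peak

print(round(mi250x_fp64 / mi100_fp64, 1))  # 4.2 -> "4.2 times over the MI100"
print(round(mi250x_fp64 / a100_fp64, 1))   # 4.9 -> "nearly five times faster"
```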

Further, the Instinct MI250X delivers 47.9 teraflops of peak single-precision (FP32) performance and 383 teraflops of peak theoretical half-precision (FP16) for AI workloads. That dense computational capability doesn’t come without a power cost. The top-of-stack part, the OAM MI250X, consumes up to 560 watts, while air-cooled and other configurations will require somewhat less power. However, remember you’re essentially getting two GPUs in one package with that 500-560 watt TDP, and based on some of the disclosed system specs (like Frontier), the flops-per-watt targets are impressive.

During this morning’s launch event, Forrest Norrod, senior vice president and general manager of the datacenter and embedded solutions business group at AMD, showed head-to-head comparisons for the MI200 OAM versus Nvidia’s A100 (80GB) GPU on a range of HPC applications. In AMD testing, a single-socket 3rd gen AMD Epyc server with one AMD Instinct MI250X OAM 560 watt GPU achieved a median score of 42.26 teraflops on the High Performance Linpack benchmark.

Norrod also showed a competitive comparison of the MI200 OAM versus the Nvidia A100 (80GB) on the molecular simulation code LAMMPS, running a combustion simulation of a hydrocarbon molecule. In the time-lapse of the simulation, four MI250X 560 watt GPUs can be seen completing the job in less than half the time of four A100 SXM 80GB 400 watt GPUs.

The MI200 accelerators introduce the third-generation AMD Infinity Fabric architecture. Up to eight Infinity Fabric links connect the AMD Instinct MI200 with 3rd generation Epyc Milan CPUs and other GPUs in the node to deliver up to 800 GB/s of aggregate bandwidth and enable unified CPU/GPU memory coherency. 

AMD is also introducing its Elevated Fanout Bridge (EFB) technology. “Unlike substrate embedded silicon bridge architectures, EFB enables use of standard substrates and assembly techniques, providing better precision, scalability and yields while maintaining high performance,” said Norrod.


Three form factors were announced for the new MI200 series: the MI250X and MI250, available in an open-hardware compute accelerator module or OCP Accelerator Module (OAM) form factor; and a PCIe card form factor, the AMD Instinct MI210, that will be forthcoming in OEM servers.

The AMD MI250X accelerator is currently available from HPE in the HPE Cray EX Supercomputer. Other MI200 series accelerators, including the PCIe form factor, are expected in Q1 2022 from server partners, including ASUS, ATOS, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

The MI250X accelerator will be the primary computational engine of the upcoming exascale supercomputer Frontier, currently being installed at the DOE’s Oak Ridge National Laboratory in partnership with HPE. Each of Frontier’s 9,000+ nodes will include one “optimized 3rd Gen AMD Epyc CPU” (not Milan-X) linked to four AMD MI250X accelerators over AMD’s coherent Infinity Fabric.

During this morning’s proceedings, ORNL Director Thomas Zacharia noted that a single MI250X GPU is more powerful than an entire node of ORNL’s Summit supercomputer, which is currently the fastest system in the United States. With a promised performance target of >1.5 peak double-precision exaflops, Frontier could achieve greater than 1.72 exaflops peak just owing to its GPUs (9,000 x 4 x 47.9 teraflops).
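The GPU-only peak estimate is straightforward arithmetic from the MI250X's 47.9-teraflop peak vector FP64 and the disclosed node count:

```python
# Frontier GPU-only peak: nodes x GPUs per node x peak FP64 per GPU.
nodes = 9000             # "9,000+ nodes", so this is a lower bound
gpus_per_node = 4
fp64_tf_per_gpu = 47.9   # MI250X peak vector double precision

peak_ef = nodes * gpus_per_node * fp64_tf_per_gpu / 1e6  # teraflops -> exaflops
print(round(peak_ef, 2))  # 1.72 -> ">1.72 exaflops peak" from GPUs alone
```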

As we detailed recently, the MI200 will be powering three giant systems on three continents. In addition to Frontier, expected to be the United States’ first exascale computer coming online next year, the MI200 was selected for the European Union’s pre-exascale LUMI system and Australia’s petascale Setonix system.

AMD Instinct MI200 OAM accelerator

“The adoption of Milan has significantly outpaced Rome as our momentum grows,” said Su. Looking ahead on the roadmap, the next-gen “Genoa” Epyc platform will have up to 96 high-performance 5nm “Zen 4” cores and will support next-generation memory and I/O capabilities: DDR5, PCIe Gen 5 and CXL. Genoa is now sampling to customers with production and launch anticipated next year, AMD said.

“We’ve worked with TSMC to optimize 5nm for high performance computing,” said Su. “[The new process node] offers twice the density, twice the power efficiency and 1.25x the performance of the 7nm process we’re using in today’s products.”

Su also unveiled a new version of Zen 4 for cloud-native computing, called “Bergamo.” Bergamo features up to 128 high-performance “Zen 4c” cores and will come with the other features of Genoa: DDR5, PCIe Gen 5, CXL 1.1, and the full suite of Infinity Guard security features. Further, it is socket-compatible with Genoa and shares the same Zen 4 instruction set. Bergamo is on track to start shipping in the first half of 2023, Su said.

“Our investment in multi-generational CPU core roadmaps combined with advanced process and packaging technology enables us to deliver leadership across general purpose technical computing and cloud workloads,” said Su. “You can count on us to continue to push the envelope in high-performance computing.”

AMD also announced version 5.0 of ROCm, its open software platform that supports environments across multiple accelerator vendors and architectures. “With ROCm 5.0, we’re adding support and optimization for the MI200, expanding ROCm support to include the Radeon Pro W6800 workstation GPUs, and improving developer tools that increase end user productivity,” said AMD’s Corporate Vice President, GPU Platforms, Brad McCredie in a media briefing last week.

The company also introduced AMD Infinity Hub, an online portal where developers can access documentation, tools and education materials for HIP and OpenMP, and system administrators and scientists can download containerized HPC apps and ML frameworks that are optimized and supported on AMD platforms.

Commenting on today’s news raft, market watcher Addison Snell, CEO of Intersect360 Research, said, “AMD has set the new bar for performance in HPC – in CPU, in GPU, and in packaging both together. Either Milan-X or MI200 makes a statement on its own – multiple statements, based on the benchmarks. Having coherent memory over Infinity Fabric is a game-changer that neither Intel nor Nvidia is going to be able to match soon.”
