AMD Launches Epyc ‘Milan’ with 19 SKUs for HPC, Enterprise and Hyperscale

By Tiffany Trader

March 15, 2021

At a virtual launch event held today (Monday) led by CEO Lisa Su, AMD revealed its third-generation Epyc “Milan” CPU lineup: a set of 19 SKUs — including the flagship 64-core 7763 part — aimed at HPC, enterprise and cloud workloads. Notably, the third-gen Epyc Milan chips achieve 19 percent more instructions per clock (IPC) than their second-gen Rome predecessors, based on a geomean of 28 workloads.
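A 19 percent geomean uplift is the geometric mean of the per-workload IPC ratios, not a simple average. As a rough sketch, where the four speedup values are illustrative stand-ins (AMD's 28-workload data is not public):

```python
import math

# Hypothetical Zen 3 vs. Zen 2 per-workload IPC ratios -- illustrative only.
speedups = [1.15, 1.22, 1.18, 1.20]

# Geometric mean: the n-th root of the product of the ratios.
geomean = math.prod(speedups) ** (1 / len(speedups))
print(f"geomean uplift: {(geomean - 1) * 100:.1f}%")
```

The geometric mean keeps one outlier workload from dominating the headline number, which is why it is the standard way to summarize benchmark suites.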

Like Rome (released in August 2019), the Milan series processors employ up to 64 7nm cores per processor and integrate PCIe Gen 4 connectivity, but the latest versions improve per-socket and per-core performance with new “Zen 3” cores and add enhanced security features.

“With Zen 3, we redesigned the core to deliver our largest generation over generation performance increase since we first launched Zen in 2017, and as a result the third-generation Epyc CPUs are the fastest server processors in the world,” said Su.

Although Milan started shipping in the fourth quarter of 2020, today marks the official product launch. Q1 is ramping “very, very aggressively,” noted AMD’s Dan McNamara, senior vice president and general manager of the server business unit, in a pre-briefing held for media last week. He added that Rome has had a very nice run and is “still building” too.

The 7003-series SKU Stack

The Epyc 7003-series family includes a total of 19 SKUs: 15 dual-socket-capable parts and four single-socket-capable parts, with TDPs ranging from 155 to 280 watts. Four of the dual-socket parts are per-core performance optimized parts, including two 64-core SKUs: the HPC-targeted 7763 and the enterprise-focused 7713.

The top-of-bin Epyc 7763 part has a configurable TDP of 225-280 watts and provides 3.58 teraflops of peak double-precision performance at its max boost frequency of 3.5 GHz — over 7 teraflops in a dual-socket server. At its base frequency of 2.45 GHz, the 7763 tops out at a theoretical 2.51 double-precision teraflops. The list price for 1,000-unit lots is $7,890 per processor.
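Those peak figures follow from the standard back-of-envelope formula (cores times clock times FLOPs per cycle per core), assuming Zen 3's 16 double-precision FLOPs per core per cycle from its two 256-bit FMA pipes:

```python
# Peak double-precision teraflops: cores * GHz * FLOPs per cycle / 1000.
def peak_dp_tflops(cores, ghz, flops_per_cycle=16):
    return cores * ghz * flops_per_cycle / 1000

print(peak_dp_tflops(64, 3.5))    # 7763 at max boost: ~3.58 TF
print(peak_dp_tflops(64, 2.45))   # 7763 at base clock: ~2.51 TF
```

Doubling either number gives the two-socket figure quoted above; real sustained performance will of course land below these theoretical peaks.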

Next in line are a new 56-core offering and a 48-core part. And then at each of the 32, 24 and 16 core levels, there are three different SKUs: one for per-core performance, one optimized for socket performance, and the other a value part, McNamara explained.

There is also a 28-core processor matching the core count of the top-of-stack Intel Cascade Lake Refresh CPU. AMD said it provides similar performance while delivering stronger performance-per-dollar value.

“All of these processors, regardless of their core count, include the complete set of Epyc features, 8 channels of high performance memory supporting up to four terabytes of DRAM per CPU, 128 or more lanes of PCIe generation 4 I/O to connect to networking, storage and accelerators, and crucially…an unmatched set of security features,” said Forrest Norrod, senior vice president and general manager of the datacenter and embedded solutions business group at AMD.

AMD touts its chiplet hybrid multi-die architecture as key to enabling configurability and flexibility in designing the third-generation Epyc product stack. “The first thing AMD set out to do was to keep it simple by maintaining commonality across the stack in terms of features and speeds and feeds,” said Ram Peddibhotla, corporate vice president of Epyc product management. “The second was to enable true customer choice, by not artificially corralling them into buying higher up in the stack just to get either higher memory speed, or more I/O lanes,” he said.

New in Zen 3 

Getting to that 19 percent IPC uplift required significant changes to the Zen architecture, noted Mike Clark, corporate fellow and silicon design engineer, during last week’s pre-briefing. “Everywhere from the front-end, increases in the Micro-Op cache, the execution engine, the load store unit — everything had to come together to create a balanced pipeline, and be able to produce that great 19 percent IPC uplift.”

Optimizations were made to the integer unit, the floating point unit and the load store unit.

Where Zen 2 could do four-wide dispatch to the floating point unit, Zen 3 can do a full six-wide dispatch. Zen 3 retains the traditional four pipes (two multiplies, two adds), but AMD split out two additional pipes to handle store data and floating point-to-integer moves, freeing the multiply and add pipes to focus on arithmetic and driving more throughput through the machine. On the load store unit, prior Zen cores could do three memory ops per cycle; now all three can be loads, and two of the three can be stores. That provides a lot more flexibility and throughput in the load store unit, said Clark.

For the floating point multiply accumulate (FMAC), AMD reduced the cycle of latency by one. And it doubled the INT8 pipelines from one to two, providing more throughput for inference workloads. 

SoC Changes

As with Rome, the Milan SoCs are built as nine-die packages with eight 7nm core complex die (CCD) chiplets — each with up to eight cores — surrounding a 14nm I/O die, connected via AMD’s second-gen Infinity fabric. The hybrid die architecture, with four chiplets on top, four on the bottom and the I/O die through the middle, allowed AMD engineers to decouple the compute cores from the socket I/O, preserving compatibility.

For Milan, AMD redesigned the CCD, so there are now 32 megabytes of L3 across eight cores. Where Zen 2 was organized with two 4-core cache complexes on each CCD, Zen 3 transitions to having a single unified 8-core cache complex on each CCD. In Zen 3, all the cores on the same CCD have direct access to 32 megabytes of L3 cache, whereas on Zen 2, each core could only access 16 megabytes of L3 directly.
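The practical effect of the unified complex can be expressed as the amount of L3 each core can reach directly: the CCD's 32 megabytes divided by the number of cache complexes on the die. A minimal sketch:

```python
# L3 directly reachable per core: the CCD's cache divided across its complexes.
def l3_reach_mb(l3_per_ccd_mb, complexes_per_ccd):
    return l3_per_ccd_mb / complexes_per_ccd

print(l3_reach_mb(32, 2))  # Zen 2: two 4-core complexes, 16 MB reachable
print(l3_reach_mb(32, 1))  # Zen 3: one 8-core complex, all 32 MB reachable
```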

“Having every core on the CCD communicating directly with the entire CCD’s cache reduces latency, and that especially helps applications that make heavy use of the memory subsystem,” said Noah Beck, AMD fellow and silicon design engineer for server SOC, during last week’s pre-briefing.

“More interestingly,” he added, “for many server throughput workloads, the single L3 per CCD also expands the effective cache capacity that’s available per core, versus having a cache at the same total capacity shared by a smaller number of cores.”

Sidenote: the 28-core and 56-core SKUs are both built from multiples of seven cores: the 56-core part has eight CCDs with seven active cores per CCD, and the 28-core part has four CCDs with seven active cores per CCD.
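The arithmetic behind those SKUs is straightforward: total cores equal the number of CCDs times the active cores per CCD. A quick check:

```python
# Total cores = CCDs x active cores per CCD.
def total_cores(ccds, active_per_ccd):
    return ccds * active_per_ccd

print(total_cores(8, 7))  # 56-core SKU: eight CCDs, seven cores active each
print(total_cores(4, 7))  # 28-core SKU: four CCDs, seven cores active each
print(total_cores(8, 8))  # 64-core 7763: eight fully enabled CCDs
```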

Zen 3 SoC changes were made while maintaining full I/O compatibility with platforms that are designed for the second generation. The third generation Epyc is drop-in compatible with Rome-based platforms, requiring only a BIOS update. This compatibility was emphasized by AMD and partners and was cited as one of the reasons for strong readiness at launch from some partners. The length of the shipping ramp ahead of today’s official launch was another.

Today’s launch also emphasized Milan’s new security features, including Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP), which expands the existing SEV features on Epyc processors and helps prevent malicious hypervisor-based attacks. Further, Shadow Stack provides hardware mitigation against control flow attacks that leverage return oriented programming techniques.

Benchmarking: Milan vs. Rome vs. the Competition

Benchmarks presented by AMD showed Milan outperforming Rome and besting the publicly available scores of competitive hardware. Using SPECfp as an HPC proxy benchmark, the top-of-the-stack, 64-core Epyc Milan 7763 delivered 106 percent faster results than the top socketed Intel part, the 28-core 6258R Cascade Lake Xeon CPU. AMD also claimed performance leadership in the middle of the stack with its 32-core 75F3 Milan part, which it said performed 70 percent faster than Intel’s 28-core 6258R on SPECfp. 

On the same SPECfp benchmark, the top-bin 64-core 7763 (Milan) delivered 17 percent better performance than top-bin 64-core 7H12 (Rome); while the 32-core 75F3 (Milan) achieved a 21 percent higher rating than the equivalent-class 32-core 7532 (Rome).

AMD also touted competitive TCO claims, stating that its top-of-stack third-generation Epyc processors provide customers with equivalent performance in half the number of servers, based on hitting 25,000 units on the SPECint rate benchmark. Combining capex and opex gains, AMD claimed a TCO savings of 35 percent, while noting the potential for lower software licensing costs as well.
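The claim amounts to a server-count comparison at a fixed throughput target. A rough sketch of that math, where the per-server SPECint rate scores are hypothetical placeholders (only the 25,000-unit aggregate target comes from AMD):

```python
import math

# Servers needed to hit a fixed aggregate SPECint rate target.
target_score = 25_000
milan_per_server = 850        # hypothetical two-socket Milan score
competitor_per_server = 425   # hypothetical competitor score, half of Milan's

milan_servers = math.ceil(target_score / milan_per_server)
competitor_servers = math.ceil(target_score / competitor_per_server)
print(milan_servers, competitor_servers)  # Milan needs roughly half the servers
```

Halving the server count is what feeds the capex, opex and per-server software-licensing components of the 35 percent TCO figure.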

AMD explained to media representatives that its competitive comparisons were made against Intel’s Cascade Lake microarchitecture because numbers were not yet available for Intel’s third-generation Xeon Ice Lake processors. Ice Lake CPUs are currently shipping, but have not yet made their official launch debut.

A Growing Ecosystem

The 7003-series products are available now through a number of OEMs, ODMs, cloud providers and channel partners. According to AMD, by the end of 2021, there will be 400 cloud instances powered by Epyc processors of all generations and 100 new server platforms using third-generation Epyc processors. 

Microsoft Azure and Oracle Cloud today announced general availability of new instances based on custom Milan 64-core SKUs (similar to the 7763 part). Other cloud partners — AWS, Google Cloud, IBM Cloud and Tencent — are planning to offer Milan-backed machines this year.

Launch partners HPE, Dell, Cisco, Lenovo, Atos and Supermicro are introducing new or refreshed servers based on the new AMD CPUs; and VMware is leveraging Milan’s enhanced security features in VMware vSphere 7. More announcements will be made in the days and weeks to come.

Takeaways

Intersect360 Research’s Chief Research Officer Dan Olds views the launch as another positive step from AMD, continuing the momentum of the last two launches. “With Zen3, AMD is once again pushing the bar up in terms of performance, which will be of particular interest to potential HPC customers. According to our latest research, AMD processors are already deployed in more than 70 percent of HPC datacenters,” said Olds. Interestingly, this growth was strongest in the commercial sector.

“AMD isn’t only increasing core counts, they’ve also significantly increased the single thread performance over their previous generation. This will improve performance for customers who need high per thread throughput and who are not helped as much by higher core counts,” Olds also shared.

Steve Conway, senior advisor with Hyperion Research, told us “AMD is in an Opteron 2.0 resurgence with a combination of strong performance fueled by memory bandwidth, combined with attractive pricing. We expect AMD to gain market share in HPC now through at least 2022.”

AMD is claiming both TCO and performance leadership with Milan. “We not only double the performance over the competition in HPC, cloud and enterprise workloads with our newest server CPUs, but together with the AMD Instinct GPUs, we are breaking the exascale barrier in supercomputing and helping to tackle problems that have previously been beyond humanity’s reach,” said Forrest Norrod, senior vice president and general manager, data center and embedded solutions business group, in prepared remarks.

AMD is working with HPE to deliver the United States’ first exascale system, Frontier, to Oak Ridge National Laboratory later this year. Frontier will employ a custom AMD Epyc CPU plus four future-gen Radeon Instinct GPUs per node, connected by an enhanced Infinity fabric.

The next-generation 5nm Zen 4 Epyc, codenamed Genoa, is “well underway and on track to come to market in 2022,” according to AMD CTO Mark Papermaster, and Zen 5 is in the design phase. “The AMD product development team is hitting on all cylinders,” said Papermaster during today’s launch event.
