AMD Launches Epyc ‘Milan’ with 19 SKUs for HPC, Enterprise and Hyperscale

By Tiffany Trader

March 15, 2021

At a virtual launch event held today (Monday) led by CEO Lisa Su, AMD revealed its third-generation Epyc “Milan” CPU lineup: a set of 19 SKUs — including the flagship 64-core 7763 part — aimed at HPC, enterprise and cloud workloads. Notably, the third-gen Epyc Milan chips achieve 19 percent more instructions per clock (IPC) than their second-gen predecessor, Rome, based on a geomean of 28 workloads.

Like Rome (released in August 2019), the Milan series processors employ up to 64 7nm cores per processor and integrate PCIe Gen 4 connectivity, but the latest parts improve per-socket and per-core performance with new “Zen 3” cores and add enhanced security features.

“With Zen 3, we redesigned the core to deliver our largest generation over generation performance increase since we first launched Zen in 2017, and as a result the third-generation Epyc CPUs are the fastest server processors in the world,” said Su.

Although Milan started shipping in the fourth quarter of 2020, today marks the official product launch. Q1 is ramping “very, very aggressively,” noted AMD’s Dan McNamara, senior vice president and general manager, server business unit, in a pre-briefing held for media last week. He added that Rome has had a very nice run and is “still building” too.

The 7003-series SKU Stack

The Epyc 7003-series family includes a total of 19 SKUs: 15 dual-socket-capable parts and four single-socket-capable parts, with TDPs ranging from 155 to 280 watts. Four of the dual-socket parts are optimized for per-core performance, including two 64-core SKUs: the HPC-targeted 7763 and the enterprise-focused 7713.

The top-of-bin Epyc 7763 part has a configurable TDP of 225-280 watts and provides 3.58 teraflops of peak double-precision performance at its max boost frequency of 3.5 GHz — over 7 teraflops in a dual-socket server. At its base frequency of 2.45 GHz, the 7763 tops out at a theoretical 2.5 double-precision teraflops. The list price for 1,000-unit lots is $7,890 per processor.
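The peak figures above follow from cores times clock times FLOPs per core per cycle. A minimal sketch, assuming Zen 3's two 256-bit FMA pipes per core (16 double-precision FLOPs per core per cycle); the function name is our own:

```python
# Peak DP throughput = cores x clock (GHz) x DP FLOPs per core per cycle.
# Assumption: Zen 3 sustains two 256-bit FMAs per cycle, i.e. 16 DP FLOPs
# per core per cycle (4 doubles per FMA x 2 ops per FMA x 2 pipes).
DP_FLOPS_PER_CORE_PER_CYCLE = 16

def peak_dp_teraflops(cores: int, ghz: float) -> float:
    """Theoretical peak double-precision teraflops for one socket."""
    return cores * ghz * DP_FLOPS_PER_CORE_PER_CYCLE / 1000.0

print(peak_dp_teraflops(64, 3.5))      # 7763 at max boost -> 3.584
print(peak_dp_teraflops(64, 2.45))     # 7763 at base clock -> ~2.51
print(2 * peak_dp_teraflops(64, 3.5))  # dual-socket server -> ~7.17
```

Achieved throughput on real workloads will, of course, sit below these theoretical ceilings.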

Next in line are a new 56-core offering and a 48-core part. At each of the 32-, 24- and 16-core levels there are three different SKUs: one optimized for per-core performance, one optimized for socket performance, and one positioned as a value part, McNamara explained.

There is also a 28-core processor that matches the core count of the top-of-stack Intel Cascade Lake Refresh CPU. AMD said it provides similar performance while offering stronger performance-per-dollar value.

“All of these processors, regardless of their core count, include the complete set of Epyc features, 8 channels of high performance memory supporting up to four terabytes of DRAM per CPU, 128 or more lanes of PCIe generation 4 I/O to connect to networking, storage and accelerators, and crucially…an unmatched set of security features,” said Forrest Norrod, senior vice president and general manager of the datacenter and embedded solutions business group at AMD.

AMD touts its chiplet hybrid multi-die architecture as key to enabling configurability and flexibility in designing the third-generation Epyc product stack. “The first thing AMD set out to do was to keep it simple by maintaining commonality across the stack in terms of features and speeds and feeds,” said Ram Peddibhotla, corporate vice president of Epyc product management. “The second was to enable true customer choice, by not artificially corralling them into buying higher up in the stack just to get either higher memory speed, or more I/O lanes,” he said.

New in Zen 3 

Getting to that 19 percent IPC uplift required significant changes to the Zen architecture, noted Mike Clark, corporate fellow and silicon design engineer, during last week’s pre-briefing. “Everywhere from the front-end, increases in the micro-op cache, the execution engine, the load store unit, everything had to come together to create a balanced pipeline, and be able to produce that great 19 percent IPC uplift.”

Optimizations were made to the integer unit, the floating point unit and the load store unit.

Where Zen 2 could do four-wide dispatch to the floating point unit, Zen 3 can do a full six-wide dispatch. Zen 3 still has the traditional four pipes (two multiplies, two adds), but AMD split out two more pipes. This means the unit can handle store data as well as floating point to integer moves, freeing the multiply and add pipes to focus on that work and pushing more throughput through the machine. On the load store unit, prior Zen cores could do three memory ops per cycle; now all three can be loads, and two of the three can be stores in a given cycle. That provides a lot more flexibility and throughput in the load store unit, said Clark.

For the floating point multiply accumulate (FMAC), AMD reduced the cycle of latency by one. And it doubled the INT8 pipelines from one to two, providing more throughput for inference workloads. 

SoC Changes

As with Rome, the Milan SoCs are built as nine-die packages with eight 7nm core complex die (CCD) chiplets — with up to eight cores each — surrounding a 14nm I/O die, connected via AMD’s second-gen Infinity fabric. The hybrid die architecture, with four chiplets on the top, four on the bottom and the I/O die through the middle, allowed AMD engineers to decouple the compute cores from the socket I/O.

For Milan, AMD redesigned the CCD, so there are now 32 megabytes of L3 across eight cores. Where Zen 2 was organized with two 4-core cache complexes on each CCD, Zen 3 transitions to having a single unified 8-core cache complex on each CCD. In Zen 3, all the cores on the same CCD have direct access to 32 megabytes of L3 cache, whereas on Zen 2, each core could only access 16 megabytes of L3 directly.

“Having every core on the CCD communicating directly with the entire CCD’s cache reduces latency, and that especially helps applications that make heavy use of the memory subsystem,” said Noah Beck, AMD fellow and silicon design engineer for server SoC, during last week’s pre-briefing.

“More interestingly,” he added, “for many server throughput workloads, the single L3 per CCD also expands the effective cache capacity that’s available per core, versus having a cache at the same total capacity shared by a smaller number of cores.”
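The capacity change described above can be sketched in a few lines; the cache figures come from the article, while the helper function is purely illustrative:

```python
# L3 directly reachable by one core = (L3 per CCD) / (cache complexes per CCD).
# Both generations put 32 MB of L3 on each CCD; Zen 3 merges Zen 2's two
# 4-core complexes into a single unified 8-core complex.
def direct_l3_per_core_mb(l3_per_ccd_mb: int, complexes_per_ccd: int) -> int:
    """Megabytes of L3 a core can access without leaving its cache complex."""
    return l3_per_ccd_mb // complexes_per_ccd

print(direct_l3_per_core_mb(32, complexes_per_ccd=2))  # Zen 2 -> 16
print(direct_l3_per_core_mb(32, complexes_per_ccd=1))  # Zen 3 -> 32
```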

Sidenote: the 28-core and 56-core SKUs are both built from multiples of seven cores. The 56-core part has eight CCDs with seven active cores per CCD, and the 28-core part has four CCDs with seven active cores per CCD.
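The arithmetic behind these core counts is straightforward CCD math; a hypothetical helper, with SKU mappings taken from the article:

```python
# Total cores = (number of CCDs) x (active cores per CCD); CCDs shipped
# with seven of eight cores enabled yield the odd-multiple SKUs.
def total_cores(ccds: int, active_per_ccd: int) -> int:
    return ccds * active_per_ccd

print(total_cores(8, 8))  # fully enabled: the 64-core 7763
print(total_cores(8, 7))  # 56-core SKU
print(total_cores(4, 7))  # 28-core SKU
```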

Zen 3 SoC changes were made while maintaining full I/O compatibility with platforms designed for the second generation: third-generation Epyc is drop-in compatible with Rome-based platforms, requiring only a BIOS update. AMD and its partners emphasized this compatibility, citing it, along with the length of the shipping ramp ahead of today’s official launch, as a reason for strong partner readiness at launch.

Today’s launch also emphasized Milan’s new security features, including Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP), which expands the existing SEV features on Epyc processors and helps prevent malicious hypervisor-based attacks. Further, Shadow Stack provides hardware mitigation against control flow attacks that leverage return oriented programming techniques.

Benchmarking: Milan vs. Rome vs. the Competition

Benchmarks presented by AMD showed Milan outperforming Rome and besting the publicly available scores of competitive hardware. Using SPECfp as an HPC proxy benchmark, the top-of-the-stack, 64-core Epyc Milan 7763 delivered results 106 percent faster than the top socketed Intel part, the 28-core 6258R Cascade Lake Xeon CPU. AMD also claimed performance leadership in the middle of the stack with its 32-core 75F3 Milan part, which it said performed 70 percent faster than Intel’s 28-core 6258R on SPECfp.

On the same SPECfp benchmark, the top-bin 64-core 7763 (Milan) delivered 17 percent better performance than the top-bin 64-core 7H12 (Rome), while the 32-core 75F3 (Milan) achieved a 21 percent higher rating than the equivalent-class 32-core 7532 (Rome).

AMD also touted competitive TCO claims, stating that its top-of-stack third-generation Epyc processors provide customers with equivalent performance in half the number of servers, based on reaching an aggregate score of 25,000 on the SPECint rate benchmark. Combining capex and opex gains, AMD claimed a TCO savings of 35 percent, while noting the potential for lower software licensing costs as well.
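The server-count half of that claim reduces to a ceiling division against the 25,000-score target. A sketch with hypothetical per-server scores (the article does not give per-server numbers, so these are illustrative only):

```python
import math

TARGET_SPECINT_RATE = 25_000  # aggregate score cited in AMD's TCO claim

def servers_needed(score_per_server: float) -> int:
    """Servers required to reach the aggregate SPECint rate target."""
    return math.ceil(TARGET_SPECINT_RATE / score_per_server)

# Hypothetical per-server scores, not measured results:
print(servers_needed(800))   # baseline-class server  -> 32
print(servers_needed(1600))  # ~2x per-server score   -> 16, half the fleet
```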

AMD explained to media representatives that its competitive comparisons were made against Intel’s Cascade Lake microarchitecture because numbers were not yet available for Intel’s third-generation Xeon Ice Lake processors. Ice Lake CPUs are currently shipping, but have not yet made their official launch debut.

A Growing Ecosystem

The 7003-series products are available now through a number of OEMs, ODMs, cloud providers and channel partners. According to AMD, by the end of 2021, there will be 400 cloud instances powered by Epyc processors of all generations and 100 new server platforms using third-generation Epyc processors. 

Microsoft Azure and Oracle Cloud today announced general availability of new instances based on custom Milan 64-core SKUs (similar to the 7763 part). Other cloud partners — AWS, Google Cloud, IBM Cloud and Tencent — are planning to offer Milan-backed machines this year.

Launch partners HPE, Dell, Cisco, Lenovo, Atos and Supermicro are introducing new or refreshed servers based on the new AMD CPUs; and VMware is leveraging Milan’s enhanced security features in VMware vSphere 7. More announcements will be made in the days and weeks to come.

Takeaways

Intersect360 Research’s Chief Research Officer Dan Olds views the launch as another positive step from AMD, continuing the momentum of the last two launches. “With Zen 3, AMD is once again pushing the bar up in terms of performance, which will be of particular interest to potential HPC customers. According to our latest research, AMD processors are already deployed in more than 70 percent of HPC datacenters,” said Olds. Interestingly, this growth was strongest in the commercial sector.

“AMD isn’t only increasing core counts, they’ve also significantly increased the single thread performance over their previous generation. This will improve performance for customers who need high per thread throughput and who are not helped as much by higher core counts,” Olds also shared.

Steve Conway, senior advisor with Hyperion Research, told us “AMD is in an Opteron 2.0 resurgence with a combination of strong performance fueled by memory bandwidth, combined with attractive pricing. We expect AMD to gain market share in HPC now through at least 2022.”

AMD is claiming both TCO and performance leadership with Milan. “We not only double the performance over the competition in HPC, cloud and enterprise workloads with our newest server CPUs, but together with the AMD Instinct GPUs, we are breaking the exascale barrier in supercomputing and helping to tackle problems that have previously been beyond humanity’s reach,” said Forrest Norrod, senior vice president and general manager, data center and embedded solutions business group, in prepared remarks.

AMD is working with HPE to deliver the United States’ first exascale system, Frontier, to Oak Ridge National Laboratory later this year. Frontier will employ a custom AMD Epyc CPU plus four future-gen Radeon Instinct GPUs connected by an enhanced Infinity fabric.

The next-generation 5nm Zen 4 Epyc, codenamed Genoa, is “well underway and on track to come to market in 2022,” according to AMD CTO Mark Papermaster, and Zen 5 is in design phase. “The AMD product development team is hitting on all cylinders,” said Papermaster during today’s launch event.
