At a virtual launch event held today (Monday) led by CEO Lisa Su, AMD revealed its third-generation Epyc “Milan” CPU lineup: a set of 19 SKUs — including the flagship 64-core 7763 part — aimed at HPC, enterprise and cloud workloads. Notably, the third-gen Epyc Milan chips achieve 19 percent more instructions per clock (IPC) than the second-gen “Rome” parts they succeed, based on a geomean of 28 workloads.
Like Rome (released in August 2019), the Milan series processors employ up to 64 7nm cores per processor and integrate PCIe Gen 4 connectivity, but the latest versions improve both per-socket and per-core performance with new “Zen 3” cores and add enhanced security features.
“With Zen 3, we redesigned the core to deliver our largest generation over generation performance increase since we first launched Zen in 2017, and as a result the third-generation Epyc CPUs are the fastest server processors in the world,” said Su.
Although Milan started shipping in the fourth quarter of 2020, today marks the official product launch. Q1 is ramping “very, very aggressively,” noted AMD’s Dan McNamara, senior vice president and general manager, server business unit, in a pre-briefing held for media last week. He added that Rome has had a very nice run and is “still building” too.
The 7003-series SKU Stack
The Epyc 7003-series family includes a total of 19 SKUs: 15 dual-socket-capable parts and four single-socket-capable parts, with TDPs ranging from 155 to 280 watts. Four of the dual-socket parts are optimized for per-core performance. The stack includes two 64-core SKUs: the HPC-targeted 7763 and the enterprise-focused 7713.
The top-of-bin Epyc 7763 part carries a 225-280 watt TDP range and provides 3.58 teraflops of peak double-precision performance running at its max boost frequency of 3.5 GHz — over 7 teraflops in a dual-socket server. At its base frequency of 2.45 GHz, the 7763 tops out at a theoretical 2.5 double-precision teraflops. The list price for 1,000-unit lots is $7,890 per processor.
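The teraflops figures above follow from a standard peak-rate calculation. A minimal sketch, assuming 16 double-precision FLOPs per core per cycle (two 256-bit FMA pipes, each doing 4 doubles x 2 ops per fused multiply-add — the usual basis for Zen-era peak figures); the core count and frequencies are from the article:

```python
# Back-of-envelope peak double-precision FLOPS for the Epyc 7763.
# The 16 FLOPs/cycle/core figure is an assumption based on how
# Zen-generation peak numbers are typically derived.

def peak_dp_tflops(cores: int, ghz: float, flops_per_cycle: int = 16) -> float:
    """Peak DP teraflops = cores * GHz * FLOPs/cycle / 1000."""
    return cores * ghz * flops_per_cycle / 1000.0

boost = peak_dp_tflops(64, 3.5)    # ~3.58 TF at max boost
base = peak_dp_tflops(64, 2.45)    # ~2.51 TF at base frequency

print(f"7763 @ boost: {boost:.2f} TF, @ base: {base:.2f} TF")
print(f"dual-socket @ boost: {2 * boost:.2f} TF")  # "over 7 teraflops"
```

The numbers line up with AMD's stated 3.58 teraflops single-socket and 7-plus teraflops dual-socket figures.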
Next in line is a new 56-core offering and a 48-core part. And then at each of the 32, 24 and 16 core levels, there are three different SKUs, one for per-core performance, one that’s optimized for socket performance, and the other a value part, McNamara explained.
There is also a 28-core processor that is the same core count as the top-of-stack Intel Cascade Lake Refresh CPU. AMD said it provides similar performance, while driving performance-per-dollar value.
“All of these processors, regardless of their core count, include the complete set of Epyc features, 8 channels of high performance memory supporting up to four terabytes of DRAM per CPU, 128 or more lanes of PCIe generation 4 I/O to connect to networking, storage and accelerators, and crucially…an unmatched set of security features,” said Forrest Norrod, senior vice president and general manager of the datacenter and embedded solutions business group at AMD.
AMD touts its chiplet hybrid multi-die architecture as key to enabling configurability and flexibility in designing the third-generation Epyc product stack. “The first thing AMD set out to do was to keep it simple by maintaining commonality across the stack in terms of features and speeds and feeds,” said Ram Peddibhotla, corporate vice president of Epyc product management. “The second was to enable true customer choice, by not artificially corralling them into buying higher up in the stack just to get either higher memory speed, or more I/O lanes,” he said.
New in Zen 3
Getting to that 19 percent IPC uplift required significant changes to the Zen architecture, noted Mike Clark, corporate fellow and silicon design engineer, during last week’s pre-briefing. “Everywhere from the front-end, increases in the micro-op cache, the execution engine, load store unit, everything had to come together to create a balanced pipeline, and be able to produce that great 19 percent IPC uplift.”
Optimizations were made to the integer unit, the floating point unit and the load store unit.
Where Zen 2 could dispatch four-wide to the floating point unit, Zen 3 can do a full six-wide dispatch. Zen 3 still has the traditional four pipes (two multiplies, two adds), but AMD split out two more pipes to handle store data and floating point-to-integer moves, freeing the multiply and add pipes to focus on that work and get more throughput out of the machine. On the load store unit, prior Zen cores could do three memory ops per cycle; now all three can be loads, and two of the three can be stores. That provides a lot more flexibility and throughput in the load store unit, said Clark.
For the floating point multiply-accumulate (FMAC), AMD reduced latency by one cycle. And it doubled the INT8 pipelines from one to two, providing more throughput for inference workloads.
As with Rome, the Milan SoCs are built as nine-die packages: eight 7nm core complex die (CCD) chiplets — each with up to eight cores — surrounding a 14nm I/O die, connected via AMD’s second-gen Infinity fabric. The hybrid die architecture, with four chiplets on the top, four on the bottom and the I/O die through the middle, allowed AMD engineers to decouple the compute cores from socket I/O compatibility.
For Milan, AMD redesigned the CCD, so there are now 32 megabytes of L3 across eight cores. Where Zen 2 was organized with two 4-core cache complexes on each CCD, Zen 3 transitions to having a single unified 8-core cache complex on each CCD. In Zen 3, all the cores on the same CCD have direct access to 32 megabytes of L3 cache, whereas on Zen 2, each core could only access 16 megabytes of L3 directly.
“Having every core on the CCD communicating directly with the entire CCD’s cache reduces latency, and that especially helps applications that make heavy use of the memory subsystem,” said Noah Beck, AMD fellow and silicon design engineer for server SOC, during last week’s pre-briefing.
“More interestingly,” he added, “for many server throughput workloads, the single L3 per CCD also expands the effective cache capacity that’s available per core, versus having a cache at the same total capacity shared by a smaller number of cores.”
Sidenote: the 28-core and 56-core SKUs are both based on multiples of seven cores: the 56-core part has eight CCDs with seven active cores per CCD, and the 28-core part has four CCDs with seven active cores per CCD.
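The CCD arithmetic behind those off-multiple core counts, and the Zen 2 vs. Zen 3 difference in L3 directly reachable per core, can be sketched as follows; the CCD configurations and cache figures are from the article, and the sketch is illustrative only:

```python
# Milan SKU core counts fall out of (CCD count) x (active cores per CCD).

def cores(ccds: int, per_ccd: int) -> int:
    """Total active cores for a given CCD configuration."""
    return ccds * per_ccd

print(cores(8, 8))   # 64-core parts such as the 7763: 8 CCDs, 8 cores each
print(cores(8, 7))   # 56-core part: 8 CCDs, 7 active cores each
print(cores(4, 7))   # 28-core part: 4 CCDs, 7 active cores each

# Both generations put 32 MB of L3 on a CCD, but organize it differently:
ZEN2_DIRECT_L3_MB = 16   # two 4-core complexes, 16 MB directly reachable each
ZEN3_DIRECT_L3_MB = 32   # one unified 8-core complex, all 32 MB reachable
print(f"L3 directly reachable per core: Zen 2 {ZEN2_DIRECT_L3_MB} MB, "
      f"Zen 3 {ZEN3_DIRECT_L3_MB} MB")
```

This is the effect Beck describes: same total L3 per CCD, but each Zen 3 core now reaches twice as much of it directly.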
Zen 3 SoC changes were made while maintaining full I/O compatibility with platforms designed for the second generation: third-generation Epyc is drop-in compatible with Rome-based platforms, requiring only a BIOS update. AMD and its partners emphasized this compatibility, citing it, along with the length of the shipping ramp ahead of today’s official launch, as a reason for strong partner readiness at launch.
Today’s launch also emphasized Milan’s new security features, including Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP), which expands the existing SEV features on Epyc processors and helps prevent malicious hypervisor-based attacks. Further, Shadow Stack provides hardware mitigation against control flow attacks that leverage return oriented programming techniques.
Benchmarking: Milan vs. Rome vs. the Competition
Benchmarks presented by AMD showed Milan outperforming Rome and besting the publicly available scores of competitive hardware. Using SPECfp as an HPC proxy benchmark, the top-of-the-stack, 64-core Epyc Milan 7763 delivered 106 percent faster results than the top socketed Intel part, the 28-core 6258R Cascade Lake Xeon CPU. AMD also claimed performance leadership in the middle of the stack with its 32-core 75F3 Milan part, which it said performed 70 percent faster than Intel’s 28-core 6258R on SPECfp.
On the same SPECfp benchmark, the top-bin 64-core 7763 (Milan) delivered 17 percent better performance than top-bin 64-core 7H12 (Rome); while the 32-core 75F3 (Milan) achieved a 21 percent higher rating than the equivalent-class 32-core 7532 (Rome).
AMD also touted competitive TCO claims, stating that its top-of-stack third-generation Epyc processors provide customers with equivalent performance in half the number of servers, based on hitting 25,000 units on the SPECint rate benchmark. Combining capex and opex gains, AMD claimed a TCO savings of 35 percent, while noting the potential for lower software licensing costs as well.
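The consolidation logic behind a claim like that can be sketched as a simple fleet-cost model. The numbers below are hypothetical placeholders, not AMD's inputs; only the halved server count at a fixed SPECint-rate target (25,000 units) comes from the article:

```python
# Sketch of a server-consolidation TCO comparison. All per-server
# capex/opex figures and the time horizon are illustrative assumptions.

def fleet_cost(servers: int, capex_per_server: float,
               opex_per_server_per_year: float, years: int) -> float:
    """Total cost of ownership for a fleet over the given horizon."""
    return servers * (capex_per_server + opex_per_server_per_year * years)

# Same performance target (e.g. 25,000 SPECint-rate units) hit two ways:
old = fleet_cost(servers=20, capex_per_server=25_000,
                 opex_per_server_per_year=5_000, years=3)
new = fleet_cost(servers=10, capex_per_server=30_000,
                 opex_per_server_per_year=5_000, years=3)

savings = 1 - new / old
print(f"TCO savings: {savings:.0%}")  # result depends entirely on the inputs
```

With pricier but half as many servers, the savings come mostly from halved opex; the actual percentage is driven entirely by the assumed inputs, which is why AMD's 35 percent figure is tied to its own cost model.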
AMD explained to media representatives that its competitive comparisons were made against Intel’s Cascade Lake microarchitecture because numbers were not yet available for Intel’s third-generation Xeon Ice Lake processors. Ice Lake CPUs are currently shipping, but have not yet made their official launch debut.
A Growing Ecosystem
The 7003-series products are available now through a number of OEMs, ODMs, cloud providers and channel partners. According to AMD, by the end of 2021, there will be 400 cloud instances powered by Epyc processors of all generations and 100 new server platforms using third-generation Epyc processors.
Microsoft Azure and Oracle Cloud today announced general availability of new instances based on custom Milan 64-core SKUs (similar to the 7763 part). Other cloud partners — AWS, Google Cloud, IBM Cloud and Tencent — are planning to offer Milan-backed machines this year.
Launch partners HPE, Dell, Cisco, Lenovo, Atos and Supermicro are introducing new or refreshed servers based on the new AMD CPUs; and VMware is leveraging Milan’s enhanced security features in VMware vSphere 7. More announcements will be made in the days and weeks to come.
Intersect360 Research’s Chief Research Officer Dan Olds views the launch as another positive step from AMD, continuing the momentum of the last two launches. “With Zen3, AMD is once again pushing the bar up in terms of performance, which will be of particular interest to potential HPC customers. According to our latest research, AMD processors are already deployed in more than 70 percent of HPC datacenters,” said Olds. Interestingly, this growth was strongest in the commercial sector.
“AMD isn’t only increasing core counts, they’ve also significantly increased the single thread performance over their previous generation. This will improve performance for customers who need high per thread throughput and who are not helped as much by higher core counts,” Olds also shared.
Steve Conway, senior advisor with Hyperion Research, told us “AMD is in an Opteron 2.0 resurgence with a combination of strong performance fueled by memory bandwidth, combined with attractive pricing. We expect AMD to gain market share in HPC now through at least 2022.”
AMD is claiming both TCO and performance leadership with Milan. “We not only double the performance over the competition in HPC, cloud and enterprise workloads with our newest server CPUs, but together with the AMD Instinct GPUs, we are breaking the exascale barrier in supercomputing and helping to tackle problems that have previously been beyond humanity’s reach,” said Forrest Norrod, senior vice president and general manager, data center and embedded solutions business group, in prepared remarks.
AMD is working with HPE to deliver the United States’ first exascale system, Frontier, to Oak Ridge National Laboratory later this year. Frontier will employ a custom AMD Epyc CPU plus four future-gen Radeon Instinct GPUs connected by an enhanced Infinity fabric.
The next-generation 5nm Zen 4 Epyc, codenamed Genoa, is “well underway and on track to come to market in 2022,” according to AMD CTO Mark Papermaster, and Zen 5 is in the design phase. “The AMD product development team is hitting on all cylinders,” said Papermaster during today’s launch event.