Aug. 19, 2021 — At Intel’s Architecture Day 2021, Raja Koduri and Intel architects provided details on: two new x86 core architectures; Intel’s first performance hybrid architecture, code-named “Alder Lake,” with the intelligent Intel Thread Director workload scheduler; “Sapphire Rapids,” the next-generation Intel Xeon Scalable processor for the data center; new infrastructure processing units; and upcoming graphics architectures, including the Xe HPG and Xe HPC microarchitectures and the Alchemist and Ponte Vecchio SoCs.
Raja Koduri addressed the importance of architectural advancement in meeting growing compute demand, saying: “Architecture is alchemy of hardware and software. It blends the best transistors for a given engine, connects them through advanced packaging, integrates high-bandwidth, low-power caches, and equips them with high-capacity, high-bandwidth memories and low-latency scalable interconnects for hybrid computing clusters in a package, while also ensuring that all software accelerates seamlessly. … The breakthroughs we disclosed today demonstrate how architecture will satisfy the crushing demand for more compute performance as workloads from the desktop to the data center become larger, more complex and more diverse than ever.”
Next-Generation Intel Xeon Scalable Processor (code-named “Sapphire Rapids”)
Sapphire Rapids represents Intel’s biggest data center platform advancement. The processor delivers substantial compute performance across dynamic and increasingly demanding data center usages and is workload-optimized to deliver high performance on elastic compute models like cloud, microservices and AI.
At the heart of Sapphire Rapids is a tiled, modular SoC architecture that leverages Intel’s embedded multi-die interconnect bridge (EMIB) packaging technology to deliver significant scalability while maintaining the benefits of a monolithic CPU interface. Sapphire Rapids provides a single balanced unified memory access architecture, with every thread having full access to all resources on all tiles, including caches, memory and I/O. The result offers consistent low-latency and high cross-section bandwidth across the entire SoC.
Sapphire Rapids is built on Intel 7 process technology and features Intel’s new Performance-core microarchitecture, which is designed for speed and pushes the limits of low-latency and single-threaded application performance.
Sapphire Rapids delivers the industry’s broadest range of data center-relevant accelerators, including new instruction set architecture and integrated IP to increase performance across the broadest range of customer workloads and usages. The new built-in acceleration engines include:
• Intel Accelerator Interfacing Architecture (AIA) – Supports efficient dispatch, synchronization and signaling to accelerators and devices
• Intel Advanced Matrix Extensions (AMX) – A new workload acceleration engine introduced in Sapphire Rapids that delivers a massive speed-up to the tensor processing at the heart of deep learning algorithms, providing up to 2K INT8 and 1K BF16 operations per cycle. On early Sapphire Rapids silicon, optimized internal matrix-multiply microbenchmarks ran over 7x faster using the new Intel AMX instruction set extensions than a version of the same microbenchmark using Intel AVX-512 VNNI instructions, delivering substantial performance gains across AI workloads for both training and inference.
• Intel Data Streaming Accelerator (DSA) – Designed to offload the most common data movement tasks that cause overhead in data center-scale deployments. Intel DSA improves processing of these overhead tasks to deliver increased overall workload performance and can move data among CPU, memory and caches, as well as all attached memory, storage and network devices.

These architectural advancements enable Sapphire Rapids to deliver strong out-of-the-box performance for the broadest range of workloads and deployment models in the cloud, data center, network and intelligent edge. The processor is built to drive industry technology transitions with advanced memory and next-generation I/O, including PCIe 5.0, CXL 1.1, DDR5 and HBM technologies.
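The operation Intel AMX accelerates, a tiled INT8 matrix multiply with 32-bit accumulation, can be sketched in plain NumPy. This is only an illustration of the arithmetic, not AMX itself; the 16-row by 64-byte tile shape is taken from Intel’s public AMX documentation, and the hardware would execute this on tile registers rather than arrays.

```python
# Illustration of the INT8 tile multiply that Intel AMX performs in hardware.
# Plain NumPy stand-in; tile shapes (16 rows x 64 bytes) mirror AMX tile
# registers, but this code does not use AMX instructions.
import numpy as np

rng = np.random.default_rng(0)

# A and B hold signed 8-bit values, the input type of AMX's INT8 tile ops.
A = rng.integers(-128, 128, size=(16, 64), dtype=np.int8)
B = rng.integers(-128, 128, size=(64, 16), dtype=np.int8)

# AMX accumulates INT8 products into 32-bit integers to avoid overflow;
# we mimic that by widening before the multiply.
C = A.astype(np.int32) @ B.astype(np.int32)

# Back-of-envelope: one 16x16x64 tile multiply is 16*16*64 multiplies plus
# as many adds, i.e. 32,768 operations -- about 16 cycles at the quoted
# 2K INT8 operations per cycle.
ops = 2 * 16 * 16 * 64
print(C.shape, ops)  # -> (16, 16) 32768
```

The widening to int32 before the matmul is the important detail: accumulating many int8 products in an 8-bit type would overflow immediately, which is why both AMX and AVX-512 VNNI accumulate into 32-bit registers.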
Xe HPC and Ponte Vecchio
Ponte Vecchio, based on the Xe HPC microarchitecture, delivers industry-leading FLOPS and compute density to accelerate AI, high performance computing (HPC) and advanced analytics workloads. Intel disclosed IP block information for the Xe HPC microarchitecture, including eight vector and matrix engines (referred to as XMX, Xe Matrix eXtensions) per Xe-core; slice and stack information; and tile information, including process nodes for the Compute, Base and Xe Link tiles. At Architecture Day, Intel showed that early Ponte Vecchio silicon is demonstrating leadership performance, setting an industry record in both inference and training throughput on a popular AI benchmark.1
Intel’s A0 silicon is delivering greater than 45 TFLOPS of FP32 throughput, greater than 5 TBps of memory fabric bandwidth and greater than 2 TBps of connectivity bandwidth. Intel also shared a demo showing ResNet inference performance of over 43,000 images per second and greater than 3,400 images per second in ResNet training, both of which are on track to deliver performance leadership.1
Ponte Vecchio comprises several complex designs that manifest as tiles, which are assembled through an EMIB tile that enables a low-power, high-speed connection between them. These are combined with Foveros packaging, which provides 3D stacking of active silicon for power and interconnect density. A high-speed MDFI interconnect allows scaling from one stack to two.
Compute Tile is a dense package of Xe-cores and is the heart of Ponte Vecchio.
• One tile has eight Xe-cores with a total of 4MB of L1 cache, which is key to delivering power-efficient compute
• Built on TSMC’s most advanced process technology, N5
• Intel has paved the way with the design infrastructure setup, tool flows and methodology needed to test and verify tiles on this node
• The tile has an extremely tight 36-micron bump pitch for 3D stacking with Foveros
Base Tile is the connective tissue of Ponte Vecchio. It is a large die built on Intel 7 optimized for Foveros technology.
• The Base Tile is where all the complex I/O and high bandwidth components come together with the SoC infrastructure – PCIe Gen5, HBM2e memory, MDFI links to connect tile-to-tile and EMIB bridges
• Super-high-bandwidth 3D connections, combined with dense 2D interconnect and low latency, make this an infinite connectivity machine
• The Intel technology development team worked to match the requirements on bandwidth, bump pitch and signal integrity
Xe Link Tile provides the connectivity between GPUs, supporting eight links per tile.
• Critical for scale-up for HPC and AI
• Targeting the fastest SerDes supported at Intel – up to 90G
• This tile was added to enable the scale-up solution for the Aurora exascale supercomputer
Ponte Vecchio is powered on, is in validation and has begun limited sampling to customers. Ponte Vecchio will be released in 2022 for HPC and AI markets.
Infrastructure Processing Unit (IPU)
The IPU is a programmable networking device designed to enable cloud and communication service providers to reduce overhead and free up performance for CPUs.
Intel’s IPU-based architecture has several major advantages:
• The strong separation of infrastructure functions and tenant workload allows tenants to take full control of the CPU
• The cloud operator can offload infrastructure tasks to the IPU, maximizing CPU utilization and revenue
• IPUs can manage storage traffic, which reduces latency while efficiently using storage capacity via a diskless server architecture. With an IPU, customers can better utilize resources with a secure, programmable and stable solution that enables them to balance processing and storage
Recognizing that one size does not fit all, Intel offered a deeper look at its IPU architecture and introduced new members of the IPU family, all designed to address the complexity of diverse and dispersed data centers.
1 For workloads and configurations visit www.intel.com/ArchDay21claims. Results may vary.
Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, go to newsroom.intel.com and intel.com.
Source: Intel Corp.