At SC20 this week, Intel teased its forthcoming third-generation Xeon “Ice Lake-SP” server processor, claiming competitive benchmarking results against AMD’s second-generation Epyc “Rome” processor.
Ice Lake-SP, Intel’s first server processor built on 10nm technology, and its companion two-socket Whitley server platform increase the number of DDR4 memory channels per CPU from six to eight and introduce PCI Express Gen4. The platform supports up to 6TB per socket using Intel Optane persistent memory technology.
Intel says that Ice Lake will deliver increased performance for HPC workloads through “higher memory bandwidth, a new core architecture, increased processor cores and faster input/output.”
In her keynote talk, Trish Damkroger, general manager of Intel’s HPC group, said the Ice Lake server platform offers 18 percent higher instructions per clock (IPC) versus the previous-generation Cascade Lake platform, positioning it to be competitive against higher-core-count CPUs. She said Ice Lake is already demonstrating this competitive advantage on key life sciences and financial services applications, including the LAMMPS and NAMD molecular dynamics workloads and Monte Carlo simulations.
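To see what an 18 percent IPC gain means in practice, a first-order model treats per-core throughput as clock frequency times IPC. The sketch below is illustrative only: the clock value and normalized IPC baseline are hypothetical assumptions, not Intel specifications; only the 18 percent delta comes from the article.

```python
# First-order CPU performance model: per-core throughput scales roughly
# with clock frequency x instructions per clock (IPC). The clock value
# and the normalized baseline IPC below are hypothetical; only the 18%
# IPC delta is taken from Intel's stated figure.

def per_core_perf(clock_ghz: float, ipc: float) -> float:
    """Abstract per-core throughput score (scaled instructions per second)."""
    return clock_ghz * ipc

clock = 2.5  # GHz, hypothetical; assumes equal clocks across generations
cascade_lake = per_core_perf(clock, ipc=1.00)  # normalized baseline IPC
ice_lake = per_core_perf(clock, ipc=1.18)      # 18% higher IPC

print(f"per-core speedup at equal clock: {ice_lake / cascade_lake:.2f}x")
# -> 1.18x at matched clocks; real-world speedups also depend on
#    frequency, memory bandwidth and vector width.
```

This is why a higher-IPC core can narrow the gap against parts with more, lower-IPC cores on per-core-sensitive workloads, though throughput-bound codes still favor core count and memory bandwidth.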
Internal benchmarking conducted by Intel shows pre-production 32-core Ice Lake parts delivering 20-30 percent performance improvements over AMD’s 64-core Epyc 7742 processor on a number of HPC applications. (Details here.)
Of course, AMD is not standing still; its third-generation Epyc Milan CPUs are on track to launch in the first quarter of 2021 and, as announced this week, are already shipping this quarter to select cloud and HPC customers.
Ice Lake-SP servers are slated to begin shipping in early 2021 and a number of high-profile customers are preparing to take delivery, including:
Korea Meteorological Administration, which selected Ice Lake server processors to power its Supercomputer No. 5. The system will deliver 50 petaflops performance to help study weather and climate change and enable more reliable and actionable forecasting relative to its current system.
The Max Planck Computing and Data Facility, which will adopt Ice Lake for use in its new Raven system. The Raven system will deliver 9 petaflops performance and enable groundbreaking research in physics, bioscience, theoretical chemistry and more.
The National Institute of Advanced Industrial Science and Technology (AIST), which will use Ice Lake to power its AI Bridging Green Cloud Infrastructure system being added to its AI Data Center Building. The system is expected to deliver a theoretical peak half-precision floating-point performance of 850 petaflops.
The University of Tokyo and Osaka University, which are the first Japanese universities to leverage Ice Lake. The University of Tokyo’s 2.0 petaflops system and Osaka University’s 2.8 petaflops system will be used for general research and data analytics.
Oracle, which will deploy Ice Lake within its Oracle Cloud Infrastructure to power its X9 Generation HPC cloud instance, targeted at computationally intensive workloads such as crash simulation, seismic analysis and electronic design automation.
Offering a technical discussion of the architecture at SC20, Irma Esmer Papazian, senior principal engineer at Intel Corporation, detailed a number of enhancements to Ice Lake’s Sunny Cove core, including an improved front end with larger structures and a better branch predictor. The architecture features a wider and deeper execution engine, while server-specific enhancements include a larger mid-level (L2) cache and a second FMA unit (compared to the client version).
Papazian presented benchmarking that showed performance improvements for Ice Lake over previous-generation Cascade Lake.
“Ice Lake targets high throughput performance and high per-core performance across a full range of applications, and we think it will be an excellent foundation for HPC,” said Papazian.
Her full presentation, though only 12-and-a-half minutes long, offers a lot more detail. Watch it here.
Feature image: Ice Lake-SP diagram