Intel’s sprawling, optimistic vision for the future was on full display yesterday in CEO Pat Gelsinger’s opening keynote at the Intel Innovation 2023 conference being held in San Jose. While technical details were scant, Gelsinger ticked through Intel’s chip roadmap, including the December launch date for its 5th Gen Xeon and Core Ultra processors, touted the growing traction of its AI chip lineup, including a big win with Stability AI to build an AI supercomputer using 4,000 Gaudi2 accelerators, and showed off “the world’s first multi-chiplet package using Universal Chiplet Interconnect Express (UCIe).”
There was, of course, a good deal more as Gelsinger explored Intel’s AI Everywhere strategy, reviewed progress on its ambitious five-nodes-in-four-years manufacturing process plan – they’re on target, he said – and gushed about the emergence of the AI PC, powered by an array of Intel products. He even briefly touched on Intel’s neuromorphic chip progress and its quantum efforts. Yes, Intel has a lot going on.
Perhaps the biggest news was the disclosure that the forthcoming Sierra Forest processor will feature a version with two dies in a single package and 288 cores. It is the first of Intel’s next-generation E-core Xeons. Gelsinger said it will deliver 2.5x better rack density and 2.4x higher performance per watt than 4th Gen Xeon, and will be manufacturing-ready by the end of this year. Sierra Forest and Granite Rapids are the first products manufactured using the Intel 3 process.
Encompassing this wide range of activities, Gelsinger introduced a term of his own coinage, the Siliconomy. “AI represents a generational shift in computing that is giving rise to the Siliconomy,” said Gelsinger.
Gelsinger has written a guest editorial (The Stern Stewart Institute) describing the idea:
“Everything digital is based on silicon. Today, the digital economy alone contributes to more than 15% of global gross domestic product (GDP), and in the past decade it has been growing two and a half times faster than physical world GDP.
“Welcome to what I’ve dubbed, “the siliconomy!” I used to broadly proclaim that every company is a tech company. But in this new era, technology is a standard baseline for success. As we look ahead to the decades yet to come, we will continue to see a move toward digital for everything – the way we work, learn, connect, worship, care for and evolve. As advanced semiconductors enable new levels of human achievement, the world’s need for compute exponentially increases at an inverse ratio of size, cost and power.”
Here are a few bullets from Gelsinger’s keynote:
- Intel confirmed its five-nodes-in-four-years process technology plan remains on track, and it demonstrated the world’s first multi-chiplet package using Universal Chiplet Interconnect Express (UCIe) interconnects.
- The company revealed new details on next-generation Intel Xeon processors, including major advances in power efficiency and performance, and an E-core processor with 288 cores. 5th Gen Intel Xeon processors will launch Dec. 14.
- The AI PC arrives with the launch of Intel Core Ultra processors on Dec. 14. With Intel’s first integrated neural processing unit, Core Ultra will deliver power-efficient AI acceleration and local inference on the PC.
- A large AI supercomputer will be built on Intel Xeon processors and Intel Gaudi2 AI hardware accelerators, with Stability AI as the anchor customer. Alibaba plans wide use of 4th gen Xeon CPUs for its inference engine.
- General availability was announced for the Intel Developer Cloud, previously in beta, for building and testing high-performance applications such as AI; it is already in use by customers.
- New and forthcoming Intel software solutions, including the 2023.1 release of the Intel Distribution of OpenVINO toolkit, will help developers unlock new AI capabilities.
Intel has archived Gelsinger’s keynote (link here), which is best watched directly. CTO Greg Lavender will deliver today’s keynote. While much of the material being discussed is familiar, the fact that Intel is hitting its stated milestones is significant. After years of uneven performance in ramping up new manufacturing processes, Intel is hoping its current progress will instill wider confidence in its ability to deliver.
The new Core Ultra processor, formerly code-named Meteor Lake, underpins Intel’s client-side bet on the AI-empowered PC.
“AI will fundamentally transform, reshape and restructure the PC experience – unleashing personal productivity and creativity through the power of the cloud and PC working together,” Gelsinger said. “We are ushering in a new age of the AI PC.” Core Ultra features Intel’s first integrated neural processing unit (NPU) for power-efficient AI acceleration. Intel says it’s the first client chiplet design enabled by Foveros 3D packaging technology. In addition to the NPU and major advances in power-efficient performance thanks to Intel 4 process technology, the new processor brings discrete-level graphics performance with onboard Intel Arc graphics.
PCs based on the new chip, including from Acer, will launch on Dec. 14.
About Intel’s various process nodes, Gelsinger reported Intel 7 is done, Intel 4 is done and now heading over to Ireland for high-volume ramp of the Intel Core Ultra chip; Intel 3 will be manufacturing-ready by the end of this year — Sierra Forest and Granite Rapids, the first products on Intel 3, are sampling to customers and on track; and Intel 20A is on track to be manufacturing ready next year.
Perhaps predictably, the conference, which is aimed at the broad developer community, has a heavier emphasis on client-side and enterprise server activities than on HPC. The emerging AI PC held center stage yesterday. Gelsinger said, “We’re going to bring millions of AI-enabled PCs ramping [quickly] to hundreds of millions, and we’re working with the industry and our OEM partners to make these sustainable, energy efficient platforms. And the journey begins with our upcoming new Intel Core Ultra processor launch.”
On the developer side, Intel announced wider availability of its Developer Cloud. “At GA, developers and select commercial customers will be invited to use the service platform to test and deploy AI, HPC and security applications and solutions across a breadth of Intel CPUs, GPUs, and AI accelerators, as well as take advantage of cutting-edge developer tools to enable advanced AI and performance,” according to Intel.
Since its beta period last year, Intel has made the following additions to the Intel Developer Cloud:
- New and additional Intel architectures were added across CPUs, GPUs and AI accelerators including Habana Gaudi2 processors for Deep Learning.
- Public access to all CPU and GPU hardware types is now available and select pre-qualified customers can access Habana Gaudi2 processors.
- LLM/MLOps training and oneAPI onboarding are available to help developers get started.
- New Intel Certified Developer program within the Intel Developer Cloud allows users to complete modules and training to advance their AI design skills.
Of course, what would an Intel presentation be without a few comments on Moore’s Law, which Intel insists is far from out of gas.
“As Gordon [Moore] said, nothing can go on forever, no physical quantity can change exponentially forever, but it can be delayed. And he was often amazed by just the creativity of how we just continue to find workarounds to barriers. And we at Intel, we see ourself as the stewards of Moore’s law, and this relentless pursuit of computing and efficiency scale, and we will not rest, we are committed to continuing this pursuit. And as I like to say, until every element to the periodic table is exhausted, we ain’t done.”