In an era of prosperity for chip makers, Intel is a titanic disappointment. There’s no other way to put it. The company reported a second-quarter loss of $1.6 billion, including all charges and provisions, compared to a $1.47 billion profit in the year-ago quarter.
Intel should have been back on track after restructuring over the last few years. The company laid off thousands, axed products, and spun off units into independent entities.
But it is now back to square one, with more layoffs and product cuts announced on Thursday (August 1st).
“We plan to deliver $10 billion in cost savings in 2025, including reducing our headcount by roughly 15,000 roles, or 15% of our workforce. The majority of these actions will be completed by the end of this year,” Intel CEO Pat Gelsinger said in a statement.
GPU Companies Are Rich
Intel desperately needs a viable GPU on its product roadmap. Its rivals are generating billions from GPUs, yet on its recent earnings call, Intel didn’t discuss upcoming GPUs or share details about future AI accelerators.
AMD and Nvidia share roadmaps that promise a new GPU every year. It’s safe to say that no one knows what Intel is doing with AI chips, or whether Intel itself knows.
Intel has cancelled GPUs, shifted release dates, and shown little confidence in Falcon Shores, the only GPU currently on the company’s roadmap, due late next year. Intel will ship its flagship data-center AI accelerator, Gaudi, this quarter but has not shared any details about its successor.
Two days ago, AMD’s data center group reported revenue of $2.8 billion, up 115% compared to the year-ago quarter. The growth was attributed to the ramp of AMD Instinct GPU shipments.
Nvidia, which is swimming in cash, is expected to have another blowout quarter. Analysts expect revenue of $28.5 billion, more than double the $13.51 billion the company reported in the year-ago quarter. TSMC, which manufactures GPUs for AMD and Nvidia, recorded a revenue increase of 40.1% over the year-ago quarter.
Google, Meta, Microsoft, and Amazon are spending billions to create or upgrade data centers to run AI. Nvidia and AMD are cashing in on the upgrades, while Intel has missed out on the opportunity.
“We’re making the early inroads on the AI side of the data center, and that’s only going to grow as we go into next year,” Gelsinger said on an earnings call.
Intel’s poor GPU strategy has hurt the company. Its GPU Max product, also called Ponte Vecchio, was cancelled once the Aurora supercomputer crossed the finish line. The system has more than 60,000 GPUs, making it the largest GPU installation in the world. However, technical and manufacturing issues with the GPU delayed Aurora’s deployment.
Intel is now flying blind, with a giant question mark over the data-center GPU it needs to compete with Nvidia and AMD. The original successor to Ponte Vecchio, an integrated CPU-GPU chip called Falcon Shores, was cancelled. Intel redesigned Falcon Shores into a discrete GPU, now rescheduled for release in late 2025.
Intel’s primary AI offering, an ASIC called Gaudi 3, will ship in the third quarter, Gelsinger said.
Intel plans to fold Gaudi 3 elements into Falcon Shores. But GPUs from AMD and Nvidia are taking off because they support general-purpose compute: customers can use the chips across multiple AI models and applications.
Gelsinger made no mention of a GPU on the earnings call, instead pushing CPUs as a catalyst for growth. Intel’s version of the CPU-plus-GPU story runs through its Xeon 6 and Gaudi chips, an alternative to AMD’s Epyc CPU and Instinct GPU, and Nvidia’s Grace CPU and Hopper GPU.
“That will also help us for positioning on both sides of the cloud and the enterprise market for both CPU and GPU,” Gelsinger said.
Gelsinger hopes that Xeon CPUs will provide the foundation for the GPUs installed in AI-specific data centers. Intel’s goal is to win sockets for Sierra Forest, its dense server CPU, and Granite Rapids, the upcoming flagship server CPU in the tradition of older Xeons.
“As we’ve reestablished Xeon’s competitive position, we are strongly positioned as the head node of choice in AI servers. We’re also focused on improving our accelerator roadmap,” Gelsinger said.
He acknowledged that ARM was taking more server market share but said x86’s share would stabilize in the second half.
“One of the good things that we’ve seen for our server market is the AI head nodes, where we’re quite advantaged. We’re seeing a lot of interest in Xeon being the head node of choice for anybody’s accelerator, including ours,” Gelsinger said.
The current Xeon product, called Emerald Rapids, hasn’t been as successful as its predecessor, Sapphire Rapids, which remains the mainstream offering from cloud providers after its red-carpet rollout. AWS isn’t offering Emerald Rapids VMs, and Microsoft launched Emerald Rapids instances only recently.
Spending on Factories
Intel is transitioning into a contract chip manufacturer that makes chips for other companies rather than only for its own products. Its goal, to advance five nodes in five years, culminates with the 18A node in 2025. The process brings new technologies, including PowerVia and RibbonFET, which Intel hopes will put it back in the lead over TSMC.
But Intel’s transition through the intermediate nodes hasn’t been smooth. Intel paid TSMC a premium to manufacture its upcoming PC CPUs, called Lunar Lake.
“The good news for 2026 … is that that really begins the shift back to the internal manufacturing footprint for a lot of our tiles. Bringing back more wafers to the internal network will meaningfully improve the cost structure,” said Dave Zinsner, chief financial officer, during the earnings call.
Some Light at the End of the Tunnel
Intel’s manufacturing processes and product releases are tightly intertwined. The upcoming server chip, Clearwater Forest, is one of the lead products for its upcoming manufacturing process, 18A.
The chip is designed for low-power servers that compete with ARM-based servers. It uses E-cores, or energy-efficient cores, which let server makers cram in more CPU cores for web serving or AI inferencing.
“Clearwater Forest has achieved power-on and is on track to launch in 2025,” Gelsinger said.
Intel has fairly stable software operations and is up to speed on its AI software ecosystem. The company is focusing on open-source models, including Llama 3.1, Mistral, Phi, and Gemma. That means Intel isn’t looking to sell AI accelerators for use with proprietary models from OpenAI, Microsoft, and Google.
Intel’s development framework, oneAPI, includes tools to port AI models to its chips.
“Our focus on open models, open developer frameworks, and reference designs combining Xeon with accelerators through OPEA, or Open Platform for Enterprise AI, are gaining considerable market traction,” Gelsinger said.