Intel Is GPU Poor – Dissecting the Company’s Disastrous Q2 Earnings

By Agam Shah

August 5, 2024

In an era of prosperity for chip makers, Intel is a titanic disappointment. There’s no other way to put it. The company reported a second-quarter loss of $1.6 billion, including all charges and provisions, compared to a $1.47 billion profit in the year-ago quarter.

Intel should have been back on track after restructuring over the last few years. The company laid off thousands, axed products, and spun off units into independent entities.

But it is now back to square one, with more layoffs and product cuts announced on Thursday, August 1.

“We plan to deliver $10 billion in cost savings in 2025, including reducing our headcount by roughly 15,000 roles, or 15% of our workforce. The majority of these actions will be completed by the end of this year,” Intel CEO Pat Gelsinger said in a statement.

GPU Companies Are Rich

Intel desperately needs a viable GPU on its product roadmap. Its rivals are generating billions from GPUs, yet on the recent earnings call, Intel didn’t discuss upcoming GPUs or share any details about future AI accelerators.

AMD and Nvidia share roadmaps that promise a new GPU every year. It’s safe to say that no one clearly knows what Intel is doing with AI chips, or whether Intel itself does.

Intel has canceled GPUs, changed release dates, and shown little confidence in Falcon Shores, the only GPU currently on the company’s roadmap and due late next year. Intel will ship its flagship data-center AI accelerator, Gaudi 3, this quarter but has not shared any details about its successor.

Falcon Shores 2 GPU

Two days ago, AMD’s data center group reported revenue of $2.8 billion, up 115% compared to the year-ago quarter. The growth was attributed to the ramp of AMD Instinct GPU shipments.

Nvidia, which is swimming in cash, is expected to have another blowout quarter. Analysts expect revenue of $28.5 billion, more than double the $13.51 billion the company reported in the year-ago quarter. TSMC, which manufactures GPUs for AMD and Nvidia, recorded a revenue increase of 40.1% compared to the year-ago quarter.

Google, Meta, Microsoft, and Amazon are spending billions to create or upgrade data centers to run AI. Nvidia and AMD are cashing in on the upgrades, while Intel has missed out on the opportunity.

“We’re making the early inroads on the AI side of the data center, and that’s only going to grow as we go into next year,” Gelsinger said on an earnings call.

Intel’s GPU-poor strategy has hurt the company. Its GPU Max product, also called Ponte Vecchio, was canceled once the Aurora supercomputer crossed the finish line. The system has more than 60,000 GPUs, making it the largest GPU installation in the world, but technical and manufacturing issues with the GPU delayed Aurora’s deployment.

Intel is now flying blind, with a giant question mark hanging over the data-center GPU it needs to compete with Nvidia and AMD. The original successor to Ponte Vecchio, an integrated CPU-GPU chip called Falcon Shores, was canceled. Intel redesigned Falcon Shores into a discrete GPU, now rescheduled for release in late 2025.

Intel’s primary AI offering, an ASIC called Gaudi 3, will ship in the third quarter, Gelsinger said.

Intel tackles the generative AI gap by introducing the Intel Gaudi 3 AI accelerator at the Intel Vision event on April 9, 2024, in Phoenix, Arizona. (Credit: Intel Corporation)

Intel plans to fold Gaudi 3 elements into Falcon Shores. But GPUs from AMD and Nvidia are taking off because they support general-purpose compute, letting customers use the chips for multiple AI models and applications.

Gelsinger made no mention of a GPU on the earnings call but pushed CPUs as a catalyst for growth. Intel’s version of the CPU-plus-GPU story goes through its Xeon 6 and Gaudi chips, an alternative to AMD’s Epyc CPU and Instinct GPU pairing and Nvidia’s Grace CPU and Hopper GPU pairing.

“That will also help us for positioning on both sides of the cloud and the enterprise market for both CPU and GPU,” Gelsinger said.

Gelsinger hopes that Xeon CPUs will provide a foundation for the GPUs installed in AI-specific data centers. Intel’s goal is to go out and win sockets for Sierra Forest, its dense server CPU, and Granite Rapids, the upcoming flagship server CPU in the tradition of older Xeon chips.

“As we’ve reestablished Xeon’s competitive position, we are strongly positioned as the head node of choice in AI servers. We’re also focused on improving our accelerator roadmap,” Gelsinger said.

He acknowledged that ARM was taking more server market share but said x86’s share would stabilize in the second half.

“One of the good things that we’ve seen for our server market is the AI head nodes, where we’re quite advantaged. We’re seeing a lot of interest in Xeon being the head node of choice for anybody’s accelerator, including ours,” Gelsinger said.

The current Xeon product, called Emerald Rapids, hasn’t been as successful as Sapphire Rapids. Cloud providers have largely passed on Emerald Rapids after a red-carpet rollout for Sapphire Rapids, which remains their mainstream Xeon offering. AWS isn’t offering Emerald Rapids VMs, and Microsoft launched Emerald Rapids instances only recently.

Spending on Factories

Intel is transitioning to becoming a global chip manufacturing firm that makes chips for other companies, not just for its own products. Its goal of advancing five nodes in four years ends with the 18A node in 2025. The process brings new technologies, including PowerVia and RibbonFET, which Intel hopes will put it back in process leadership over TSMC.

But Intel’s transition through the intermediate nodes hasn’t been smooth. Intel paid a premium to have its upcoming PC CPUs, called Lunar Lake, made by TSMC.

“The good news for 2026 … is that that really begins the shift back to the internal manufacturing footprint for a lot of our tiles. Bringing back more wafers to the internal network will meaningfully improve the cost structure,” said Dave Zinsner, chief financial officer, during the earnings call.

Some Light at the End of the Tunnel

Intel’s manufacturing processes and product releases are tightly intertwined. The upcoming server chip, Clearwater Forest, is one of the lead products for the 18A manufacturing process.

The chip is designed for low-power servers that compete with ARM-based servers. It uses E-cores, Intel’s term for energy-efficient cores, which let server makers cram more CPU cores into a socket for web serving or AI inferencing.

“Clearwater Forest has achieved power-on and is on track to launch in 2025,” Gelsinger said.

Intel has fairly stable software operations and is up to speed on its AI software ecosystem. The company is focusing on open-source models, including Llama 3.1, Mistral, Phi, and Gemma. That means Intel isn’t banking on selling AI accelerators to run proprietary models from OpenAI, Microsoft, and Google.

Intel’s development framework, oneAPI, includes tools to port AI models to its chips.
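
For a concrete sense of what porting an open model to Intel silicon can look like in practice, the short Python sketch below is illustrative only and is not drawn from Intel’s materials. It loads one of the open models the company name-checks (Phi, via the Hugging Face id "microsoft/phi-2", an assumed example) and hands it to Intel Extension for PyTorch, part of Intel’s AI software stack alongside oneAPI; the model choice and device selection are assumptions made for the example.

    # Illustrative sketch: run an open model on Intel hardware with Intel Extension for PyTorch.
    # The model id and device selection are assumptions for the example, not Intel-provided code.
    import torch
    import intel_extension_for_pytorch as ipex
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/phi-2"  # assumed example; any open causal LM follows the same pattern
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

    # "xpu" targets Intel GPUs when an XPU build of IPEX is installed; otherwise fall back to CPU.
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
    model = model.to(device)
    model = ipex.optimize(model, dtype=torch.bfloat16)  # apply Intel-specific kernel and graph optimizations

    inputs = tokenizer("High-performance computing is", return_tensors="pt").to(device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

In this framing, moving between Intel CPUs and GPUs amounts to changing the device string, with the optimized kernels underneath doing the heavy lifting.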

“Our focus on open models, open developer frameworks, and reference designs combining Xeon with accelerators through OPEA, or Open Platform for Enterprise AI, are gaining considerable market traction,” Gelsinger said.
