Nvidia had an explosive 2023 in data-center GPU shipments, which totaled roughly 3.76 million units, according to a study by semiconductor analyst firm TechInsights. That is a jump of more than 1 million units over 2022, when the company shipped 2.64 million data-center GPUs, according to the study.
Nvidia held a dominant 98% market share in data-center GPU shipments in 2023, roughly matching its share in 2022.
Including AMD and Intel, data-center GPU shipments reached 3.85 million units in 2023, up from about 2.67 million units in 2022, according to TechInsights.
Nvidia also captured 98% of data-center GPU revenue, at $36.2 billion, more than tripling its $10.9 billion in 2022.
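Both headline figures follow directly from the numbers TechInsights cites; a quick back-of-envelope check in Python, using only the figures above:

```python
# Back-of-envelope check of the TechInsights figures cited above.
nvidia_units_2023 = 3.76e6   # Nvidia data-center GPU shipments, 2023
total_units_2023  = 3.85e6   # Nvidia + AMD + Intel shipments, 2023

nvidia_rev_2023 = 36.2e9     # Nvidia data-center GPU revenue, 2023 (USD)
nvidia_rev_2022 = 10.9e9     # Nvidia data-center GPU revenue, 2022 (USD)

unit_share = nvidia_units_2023 / total_units_2023
rev_growth = nvidia_rev_2023 / nvidia_rev_2022

print(f"Unit share: {unit_share:.1%}")       # ~97.7%, rounding to the 98% cited
print(f"Revenue growth: {rev_growth:.1f}x")  # ~3.3x, i.e. more than tripling
```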
AI alternatives to Nvidia GPUs are emerging in the form of Google TPUs, AMD’s GPUs, Intel’s AI chips, and even CPUs, said James Sanders, an analyst at TechInsights.
Sanders said there isn’t enough AI hardware to match the rapid progress in AI software.
“I suspect that because of the growth of AI, it is a little bit inevitable that it will have to diversify from Nvidia,” Sanders said.
The shortage and cost of Nvidia GPUs helped AMD and Intel, which showed signs of life with their own AI chips in 2023.
AMD shipped about 50,000 units in 2023, with Intel filling in the rest at 40,000 units, according to TechInsights.
AMD’s data-center GPU shipments are poised to go up this year.
AMD’s MI300-series GPUs are doing well, with purchases from Microsoft, Meta, and Oracle locked in. On an April earnings call, AMD CEO Lisa Su said MI300 sales totaled $1 billion in less than two quarters.
“We now expect data center GPU revenue to exceed $4 billion in 2024, up from the $3.5 billion we guided in January,” Su said, according to the earnings call transcript on The Motley Fool.
At Computex this month, AMD also said it would release new GPUs on a yearly cycle, with the MI325X planned for this year, the MI350 in 2025, and MI400 in 2026.
AMD is following Nvidia’s a-GPU-a-year blueprint. Nvidia has already announced its Blackwell GPU for this year, an incremental Blackwell Ultra upgrade in 2025, and GPUs from the new Rubin family in 2026 and 2027.
Intel’s GPU future remains a question mark. The company recently discontinued its Ponte Vecchio GPU and is redesigning its Falcon Shores GPU for release in 2025. It also offers the Flex series of data-center GPUs for inferencing and media serving.
Intel is now focusing on its Gaudi AI chips, which aren’t as flexible as GPUs: generative AI models must be specially adapted to run on Gaudi, which takes significant engineering effort, while Nvidia’s GPUs are better suited to running a wide range of models.
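To make that porting effort concrete, here is a minimal sketch assuming Intel’s SynapseAI PyTorch bridge (the habana_frameworks package and its “hpu” device type). Getting a model to this point, and then verifying kernel coverage and performance model by model, is where much of the adaptation work goes; on Nvidia hardware, the CUDA path typically works unmodified:

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)
batch = torch.randn(8, 1024)

# CUDA path: most models run on Nvidia GPUs without changes.
if torch.cuda.is_available():
    out = model.to("cuda")(batch.to("cuda"))

# Gaudi path: requires Intel's habana_frameworks bridge and the "hpu"
# device type; in lazy mode, execution must be triggered explicitly.
# Kernel coverage and performance still need per-model validation.
try:
    import habana_frameworks.torch.core as htcore
    out = model.to("hpu")(batch.to("hpu"))
    htcore.mark_step()  # flushes the accumulated lazy-mode graph for execution
except ImportError:
    pass  # SynapseAI stack not installed
```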
Falcon Shores will “combine the great systolic performance of Gaudi 3 with a fully programmable architecture… and then we have a very aggressive cadence of Falcon Shores products following that,” said Intel CEO Pat Gelsinger during an April earnings call, according to a transcript on The Motley Fool.
Gaudi 3 is giving Intel a foothold in the AI chip market, and Intel now expects “over $500 million in accelerator revenue in the second half of 2024,” Gelsinger said.
“There’s also a lot of action outside of GPUs, especially with Google’s TPUs, given the capacity and price issues attached to Nvidia’s GPUs,” TechInsights’ Sanders said.
“Google’s custom silicon efforts generate more revenue than custom silicon efforts from AWS and merchant silicon vendors like AMD and Ampere,” Sanders said.
Google has equipped its Google Cloud data centers with homegrown chips, including the recently announced Axion CPU and its sixth-generation TPU, an AI chip branded Trillium. The new chips weren’t included in the TechInsights study.
“That’s how Google wound up in a position where they’re technically the third largest data center silicon provider [by revenue] because of just kind of a weird confluence of market forces,” Sanders said.
Google introduced its TPU in 2015 and has gradually captured market share, with a captive audience of internal applications and Google Cloud users.
“[Take] Argos, the video encoder that they made for YouTube, think about all of the video that YouTube has to ingest on an hourly basis. For every Argos video encoder ASIC they’ve been able to deploy, they displaced 10 Xeon CPUs in the process. From a power consumption standpoint, that is a massive change,” Sanders said.
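Sanders doesn’t cite wattages, but the shape of the saving is easy to sketch with purely hypothetical numbers:

```python
# Hypothetical illustration only: no wattages are given in the article.
# Assume a data-center Xeon draws ~200 W and an Argos-class ASIC ~50 W.
xeon_watts = 200        # assumed per-CPU power draw (hypothetical)
asic_watts = 50         # assumed per-ASIC power draw (hypothetical)
cpus_displaced = 10     # from Sanders' 10-Xeons-per-Argos figure

cpu_power  = cpus_displaced * xeon_watts  # 2,000 W of CPU load removed
net_saving = cpu_power - asic_watts       # ~1,950 W saved per encoder deployed
print(f"~{net_saving} W saved per ASIC, a ~{cpu_power / asic_watts:.0f}x power ratio")
```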
Amazon, which has its own Graviton CPUs and AI chips called Trainium and Inferentia, has kept its AI chips as cheap as possible for the customer.
In 2023, TechInsights said in a research note that AWS rented the equivalent of 2.3 million processors to its customers, with its homegrown Graviton CPUs accounting for 17%, or roughly 390,000 processor equivalents, exceeding the usage of AMD chips on the platform.
“Their total revenue for that isn’t going to be super high even with high volumes. They want to keep … a pretty consistent 10% to 20% discount compared to an Intel or AMD-powered instance,” Sanders said.
All major cloud providers and hyperscalers are developing homegrown chips, which are replacing chips made by Intel and AMD.
Nvidia’s sheer dominance is also forcing cloud providers to set aside dedicated space for Nvidia, which fills those spaces with its DGX servers and CUDA software stack.
“The cloud platforms won’t get completely away from Intel, AMD, or Nvidia because there’s always going to be customer demand for chips from those companies in these clouds,” Sanders said.
Microsoft also introduced its own chips, the Cobalt CPU and the Maia AI accelerator, close to a decade after Google began its homegrown chip efforts in 2013 to accelerate internal workloads.
How quickly cloud companies can ramp their internal chips depends on software infrastructure. Google’s LLMs were developed to run on its TPUs, which should help those chips ramp quickly.
Microsoft relies on Nvidia GPUs for its AI infrastructure and is now adapting its software stack to homegrown chips. AWS mostly rents out its chips to companies deploying their own software stacks.