Nvidia was nearly invisible at Supercomputing 2023 (SC23), with a small booth and limited floor presence, but thanks to its sheer AI dominance, it was a clear winner at the show.
Nvidia’s logos were plastered across the show floor at the booths of its hardware partners. Nvidia’s dominance as an AI player may continue, but companies are also hungry for alternatives.
The good news kept coming for Nvidia, which announced that more than 40 scientific research organizations are adopting its GH200 chip. The package pairs the new H200 GPU with the company’s Arm-based Grace CPU.
Supercomputing centers said Nvidia’s GPUs were the only accelerated computing option able to keep up with their growing compute requirements.
However, customers are increasingly evaluating alternative GPUs and AI chips.
The National Center for Supercomputing Applications (NCSA) announced DeltaAI, an AI supercomputing installation built on the GH200 chipset. The center is also evaluating chips from SambaNova and other AI chip makers.
Companies like Groq and Cerebras also showed off their hardware on the floor. Other buyers want to move away from Nvidia so they can run more AI workloads while drawing less power. The show floor was full of companies selling liquid cooling products to cool Nvidia GPUs.
The other challenge to Nvidia came from far outside the SC23 show floor.
At its Ignite conference, Microsoft announced two homegrown chips: the Maia 100 AI Accelerator, aimed at generative AI applications, and the Cobalt 100 CPU, an Arm-based CPU for deployment in its Azure cloud service. Microsoft’s AI infrastructure is currently built on Nvidia GPUs, and the new AI accelerator could reduce its GPU deployments.

Microsoft’s Eagle supercomputer runs on Nvidia H100 GPUs and took the third spot on the November 2023 Top500 list.
The chips will arrive in Microsoft’s datacenters in the coming months and will power services such as Microsoft Copilot and the Azure OpenAI Service, which currently run on Nvidia GPUs.
“The chips represent a last puzzle piece for Microsoft to deliver infrastructure systems – which include everything from silicon choices, software, and servers to racks and cooling systems – that have been designed from top to bottom and can be optimized with internal and customer workloads in mind,” Microsoft said in a blog entry.
The Maia accelerators are geared more toward inferencing and were designed specifically for the company’s AI infrastructure. Microsoft said the chip gives it more flexibility to optimize for power, performance, sustainability, or cost.
“Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers,” Microsoft said.
Microsoft also said it would add Nvidia’s H200 GPUs for inferencing, and it introduced ND MI300X virtual machines built on AMD’s MI300X GPUs.
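For customers weighing these options, the practical question is which accelerator-backed VM families a given Azure region actually exposes. As a minimal sketch (assuming the azure-identity and azure-mgmt-compute Python packages, a valid subscription ID, and working credentials; the "Standard_ND" prefix filter is illustrative), one could enumerate the GPU-class "ND"-series sizes available in a region:

```python
# Minimal sketch: list Azure "ND"-series (GPU-accelerated) VM sizes in a region.
# Assumes azure-identity and azure-mgmt-compute are installed and that
# DefaultAzureCredential can authenticate; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
client = ComputeManagementClient(credential, "<subscription-id>")

# virtual_machine_sizes.list enumerates every VM size offered in a region;
# filtering on the "Standard_ND" prefix narrows it to accelerator families.
for size in client.virtual_machine_sizes.list(location="eastus"):
    if size.name.startswith("Standard_ND"):
        print(size.name, size.number_of_cores, "cores,", size.memory_in_mb, "MB")
```

The same listing would surface whichever Nvidia- or AMD-backed sizes a region offers, which is one way customers can compare the new options as they roll out.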