AMD and Nvidia are locked in an AI performance battle – much like the gaming GPU performance clash the companies have waged for decades.
AMD has claimed its new Instinct MI300X GPU is the fastest AI chip in the world, beating Nvidia’s red-hot H100 and upcoming H200 GPUs.
“It’s the highest performance accelerator in the world for generative AI,” said Lisa Su, AMD’s CEO, during an on-stage speech at the company’s AI event this week.
The event marked the official launch of the MI300X, a beefier version of the MI300A that is going into the two-exaflop supercomputer code-named El Capitan, which is being built at the Lawrence Livermore National Laboratory.
The MI300X is built on the CDNA3 architecture, which delivers more than three times the performance of the previous generation for key AI data types like FP16 and BF16. The chip packs 153 billion transistors and uses 3D packaging to combine chiplets made on 5-nanometer and 6-nanometer processes.
The chip has 304 GPU compute units, 192GB of HBM3 memory, and 5.3 TB/s of memory bandwidth.
MI300X delivers 163.4 teraflops of peak FP32 performance and 81.7 teraflops of peak FP64 performance.
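Those headline numbers can be sanity-checked from the compute unit count. Below is a minimal sketch, assuming the commonly cited 2,100 MHz peak engine clock and 128 FP32 lanes per CDNA3 compute unit; neither figure appears in the article itself.

```python
# Back-of-envelope check of the MI300X peak FP32 figure.
# Assumptions (not from the article): 2,100 MHz peak engine clock,
# 128 FP32 lanes per CDNA3 compute unit, 2 FLOPs per lane per clock (FMA).
compute_units = 304
fp32_lanes_per_cu = 128       # assumed CDNA3 lane count
flops_per_lane_per_clock = 2  # a fused multiply-add counts as 2 FLOPs
peak_clock_hz = 2.1e9         # assumed 2,100 MHz peak clock

peak_fp32 = compute_units * fp32_lanes_per_cu * flops_per_lane_per_clock * peak_clock_hz
print(f"Peak FP32: {peak_fp32 / 1e12:.1f} TFLOPS")  # ~163.4 TFLOPS, matching AMD's figure
```

The FP64 figure follows the same arithmetic at half rate, landing on 81.7 teraflops.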
The previous-generation MI250X delivers 47.9 teraflops of peak FP32 vector and FP64 vector performance. AMD compared the MI300X against the SXM version of Nvidia’s H100, which delivers 68 teraflops of peak FP32 and 34 teraflops of FP64 performance. The dual-GPU H100 NVL model, which pairs two H100s over NVLink, narrows that gap with 134 teraflops of FP32 and 68 teraflops of FP64 performance.
Nvidia’s upcoming H200 is a memory upgrade to the H100, but it still offers less capacity and lower bandwidth than the MI300X, with 141GB of GPU memory and 4.8TB/s of bandwidth.
“If you look at MI300X, we made a very conscious decision to add more flexibility, more memory capacity, and more bandwidth. What that translates to is 2.4 times more memory capacity and 1.6 times more memory bandwidth than the competition,” said Su.
Su was comparing the MI300X to Nvidia’s H100 SXM model, which has 80GB of HBM memory and 3.35TB/s of memory bandwidth. The dual-GPU H100 NVL model has 188GB of HBM3 memory but beats the MI300X with 7.8TB/s of memory bandwidth.
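Su’s multipliers check out against the spec numbers quoted above, as a quick illustrative calculation shows:

```python
# Su's "2.4x capacity, 1.6x bandwidth" claim, checked against the
# H100 SXM figures cited in the article.
mi300x = {"capacity_gb": 192, "bandwidth_tbs": 5.3}
h100_sxm = {"capacity_gb": 80, "bandwidth_tbs": 3.35}

print(f"capacity ratio:  {mi300x['capacity_gb'] / h100_sxm['capacity_gb']:.1f}x")     # 2.4x
print(f"bandwidth ratio: {mi300x['bandwidth_tbs'] / h100_sxm['bandwidth_tbs']:.1f}x")  # 1.6x
```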
How long AMD will hold the title remains to be seen. Nvidia is planning yearly upgrades for its chips, with the new B100 GPU coming next year and the X100 GPU in 2025.
AMD has come a long way in just a year. Twelve months ago, the company was caught off guard when ChatGPT was introduced. The chatbot propelled Nvidia to a trillion-dollar valuation, and its A100 and H100 GPUs became the hottest property in tech.
Nvidia’s hardware, which powers GPT-4, single-handedly fueled AI adoption, and the company remains the undisputed AI champion. But the shortage of Nvidia hardware has customers looking elsewhere, opening an opportunity for AMD to position its latest GPUs and systems as a viable alternative.
Beyond Nvidia, there’s plenty of opportunity for AMD in the market.
“We’re now expecting that the data center accelerator TAM will grow more than 70% annually over the next four years to over $400 billion in 2027,” Su said.
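For scale, that projection implies a sizable baseline today. A quick reverse calculation, where the 2023 figure is derived from Su’s numbers rather than stated by her:

```python
# Implied 2023 baseline for Su's projection: 70% annual growth
# reaching $400B in 2027 means four compounding years from 2023.
target_2027_b = 400   # $400 billion
growth = 1.70         # 70% annual growth
years = 4             # 2023 -> 2027

implied_2023_b = target_2027_b / growth ** years
print(f"Implied 2023 TAM: ~${implied_2023_b:.0f}B")  # ~$48B
```

That result is in line with the roughly $45 billion 2023 TAM figure AMD has cited elsewhere.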
The MI300X’s 153 billion transistors are spread across a dozen chiplets built on 5-nanometer and 6-nanometer processes.
“It uses the most advanced packaging in the world. If you look at how we put it together, it’s actually pretty amazing,” Su said. The MI300X has four IO dies in the base layer. Each IO die has 256 megabytes of Infinity Cache and next-generation IO such as 128-channel HBM3 interfaces, PCIe Gen5 support, and the company’s fourth-generation Infinity Fabric, which connects multiple MI300X GPUs.
The chip stacks eight CDNA3 accelerator chiplets on top of the IO dies. The 304 compute units connect through dense through-silicon vias (TSVs) that support up to 17 terabytes per second of bandwidth. Eight stacks of HBM3 provide a total of 192 gigabytes of memory and 5.3TB/s of bandwidth.
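Split evenly across the eight stacks, those totals imply per-stack figures that are easy to verify from the numbers above:

```python
# Per-stack HBM3 figures implied by the totals in the article.
total_capacity_gb = 192
total_bandwidth_tbs = 5.3
stacks = 8

print(f"per-stack capacity:  {total_capacity_gb / stacks:.0f} GB")             # 24 GB
print(f"per-stack bandwidth: {total_bandwidth_tbs / stacks * 1000:.0f} GB/s")  # ~663 GB/s
```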
Cloud providers Microsoft, Oracle, and Meta have put MI300X GPUs in their cloud infrastructure, though those companies still primarily generate their AI horsepower from Nvidia chips.
Cloud providers offering AI alternatives aren’t new: Amazon provides various options, including its newly released Trainium2 chips and Intel’s Gaudi processors. But the intent is clear: customers have more choices and do not have to succumb to Nvidia’s sky-high prices for its H100 chips.
“It’s… exciting right now seeing the bring up of GPT-4 on MI300X, seeing the performance of Llama, getting it rolled into production,” said Kevin Scott, Microsoft’s chief technology officer, during an on-stage appearance at the AMD event.
Oracle Cloud is also putting the MI300X in its cloud service and is working with early adopters such as Naveen Rao, whose MosaicML AI services company was recently acquired by Databricks for $1.3 billion.
As reported on HPCwire, a new cloud service company, TensorWave, will introduce a scalable and adaptable GPU architecture in 2024. Based on GigaIO’s FabreX composable PCIe technology, the TensorNODE system will support up to 5,760 Instinct MI300X GPUs and present a single FabreX memory fabric domain to all of them.
AMD followed in Nvidia’s footsteps by announcing its own server architecture: an Open Compute Project-compliant design with eight MI300X GPUs interconnected by Infinity Fabric. The board drops into any OCP-compliant open blueprint on which customers can build servers.
“We did this for a very deliberate reason. We wanted to make this as easy as possible for customers to adopt, so you can take out your motherboard and put in the MI300X Instinct platform,” Su said.
Such systems will be cheaper to build, giving customers the flexibility to acquire hardware at the best prices. That is a very different approach from Nvidia, whose HGX systems are based on a proprietary architecture and command a premium.
AMD’s plans to make MI300X OCP-compliant are already paying dividends, with Meta deploying servers with the GPU in record time.
“[MI300X] leverages the OCP module, standard, and platform, which has helped us adopt it in record time. In fact, MI300X is one of the fastest deployment solutions in Meta’s history,” said Ajit Mathews, senior director of engineering at Meta, in an on-stage appearance.
AMD’s hardware focus has come at the expense of its AI software strategy, which lags behind Nvidia and its CUDA developer framework. CUDA support has helped boost Nvidia’s GPU adoption among companies using AI.
The company is releasing the next-generation ROCm 6 soon and claims new features and performance benefits. Developer George Hotz famously criticized AMD’s GPUs for weak software support, poor documentation, and slow responses to developers.
ROCm 6 delivers eight times better performance on the MI300X compared to the previous-generation release, said Victor Peng, AMD’s president.
“We have 62,000 models running on Instinct today, and more models will be running on the MI300 very soon,” Peng said.
Specifically, ROCm 6 on the MI300X is eight times faster than ROCm 5 on the MI250 when running a large language model with 70 billion parameters. The ROCm 6 framework will support new data types, including FP16, which boost performance and free up memory capacity and bandwidth. The framework will also include many low-level optimizations for better AI performance.
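In practice, much of that stack is reached through mainstream frameworks. The sketch below assumes a PyTorch build with ROCm support, where AMD GPUs surface through the familiar torch.cuda interface via HIP; it is illustrative, not AMD’s official example.

```python
# Minimal sketch: running a bf16 matmul on an Instinct GPU through
# PyTorch's ROCm build, where HIP devices appear under torch.cuda.
import torch

# On a ROCm build, is_available() reports AMD GPUs.
assert torch.cuda.is_available(), "No ROCm-visible GPU found"
print(torch.cuda.get_device_name(0))  # e.g. an Instinct accelerator

a = torch.randn(4096, 4096, dtype=torch.bfloat16, device="cuda")
b = torch.randn(4096, 4096, dtype=torch.bfloat16, device="cuda")
c = a @ b  # dispatched to ROCm's BLAS libraries under the hood
print(c.shape)
```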