AMD’s Horsepower-packed MI300X GPU Beats Nvidia’s Upcoming H200

By Agam Shah

December 7, 2023

AMD and Nvidia are locked in an AI performance battle – much like the gaming GPU performance clash the companies have waged for decades.

AMD has claimed its new Instinct MI300X GPU is the fastest AI chip in the world, beating Nvidia’s red-hot H100 and upcoming H200 GPUs.

“It’s the highest performance accelerator in the world for generative AI,” said Lisa Su, AMD’s CEO, during her keynote at the company’s AI event this week.

The event marked the official launch of the MI300X, a beefier version of the MI300A that is going into the two-exaflop supercomputer code-named El Capitan, which is being built at the Lawrence Livermore National Laboratory.

The MI300X is built on the CDNA3 architecture, which AMD says delivers more than three times higher performance than the previous generation for key AI data types like FP16 and BFloat16. The chip has 153 billion transistors and uses 3D packaging to combine chiplets made on 5- and 6-nanometer processes.

The chip has 304 GPU compute units, 192GB of HBM3 memory, and 5.3 TB/s of memory bandwidth.

The MI300X delivers 163.4 teraflops of peak FP32 performance and 81.7 teraflops of peak FP64 performance.
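
Those peak numbers follow from the compute-unit count and clock speed. Below is a back-of-envelope check in Python, assuming a roughly 2.1 GHz peak engine clock and 128 FP64 FLOPs per compute unit per clock; both are assumptions taken from AMD's published CDNA3 specs, not from this article.

```python
# Back-of-envelope check of the peak vector FLOPs figures.
# Assumptions (not stated in the article): ~2.1 GHz peak engine clock,
# and 128 FP64 FLOPs per compute unit per clock on CDNA3
# (64 lanes x 2 ops for a fused multiply-add).

CUS = 304                        # GPU compute units, from the article
CLOCK_HZ = 2.1e9                 # assumed peak engine clock
FP64_FLOPS_PER_CU_CLOCK = 128    # assumed CDNA3 per-CU rate

fp64_peak = CUS * FP64_FLOPS_PER_CU_CLOCK * CLOCK_HZ
fp32_peak = 2 * fp64_peak        # FP32 vector rate is double the FP64 rate

print(f"peak FP64: {fp64_peak / 1e12:.1f} TFLOPS")   # ~81.7
print(f"peak FP32: {fp32_peak / 1e12:.1f} TFLOPS")   # ~163.4
```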

The previous-generation MI250X delivers 47.9 teraflops of both peak single-precision (FP32) and double-precision (FP64) vector performance.

AMD compared its chip to the SXM version of the H100, which delivers 68 teraflops of peak FP32 and 34 teraflops of peak FP64 performance. But Nvidia’s H100 NVL model, which links two H100s with NVLink, closes that gap, delivering 134 teraflops of FP32 and 68 teraflops of FP64 performance.

Nvidia’s upcoming H200 is a memory upgrade to the H100 but still offers less memory and lower bandwidth than the MI300X. The H200 has 141GB of GPU memory with a bandwidth of 4.8TB/second.

“If you look at MI300X, we made a very conscious decision to add more flexibility, more memory capacity, and more bandwidth. What that translates to is 2.4 times more memory capacity and 1.6 times more memory bandwidth than the competition,” said Su.

Su was comparing the MI300X to Nvidia’s H100 SXM model, which has 80GB of HBM memory and 3.35TB/s of memory bandwidth. The two-GPU H100 NVL model has 188GB of HBM3 memory and actually beats the MI300X on bandwidth, at 7.8TB/s.
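
The ratios Su quoted can be checked directly against the published numbers; a quick sketch, using only figures from this article (the H200 is included for reference):

```python
# Ratios behind Su's "2.4x capacity / 1.6x bandwidth" claim,
# using only figures quoted in the article.

specs = {
    "MI300X":   {"mem_gb": 192, "bw_tbs": 5.3},
    "H100 SXM": {"mem_gb": 80,  "bw_tbs": 3.35},
    "H100 NVL": {"mem_gb": 188, "bw_tbs": 7.8},  # two-GPU board
    "H200":     {"mem_gb": 141, "bw_tbs": 4.8},
}

mi300x = specs["MI300X"]
for name in ("H100 SXM", "H100 NVL", "H200"):
    other = specs[name]
    print(f"vs {name}: "
          f"{mi300x['mem_gb'] / other['mem_gb']:.1f}x capacity, "
          f"{mi300x['bw_tbs'] / other['bw_tbs']:.1f}x bandwidth")
# vs H100 SXM: 2.4x capacity, 1.6x bandwidth  <- Su's comparison
# vs H100 NVL: 1.0x capacity, 0.7x bandwidth  <- NVL wins on bandwidth
```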

How long AMD will hold the title remains to be seen. Nvidia is planning yearly upgrades for its chips, with the new B100 GPU coming next year and the X100 GPU in 2025.

AMD has come a long way in just a year. The company was caught off guard when ChatGPT was introduced a year ago. The chatbot propelled Nvidia into a trillion-dollar company, and the A100 and H100 GPUs became the hottest property in tech.

Nvidia’s hardware, which powers GPT-4, fueled AI adoption, and the company remains the undisputed AI champion. But the shortage of Nvidia GPUs has customers looking for alternatives, opening an opportunity for AMD to pitch its latest GPUs and systems as a viable option.

Beyond Nvidia, there’s plenty of opportunity for AMD in the market.

“We’re now expecting that the data center accelerator TAM will grow more than 70% annually over the next four years to over $400 billion in 2027,” Su said.
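
Compounding makes that forecast concrete. The sketch below assumes a 2023 base of roughly $45 billion, a starting point inferred by working backward from the 2027 endpoint rather than a figure stated in the article:

```python
# Sanity check of the TAM forecast: ~70% annual growth over four years.
# The ~$45B 2023 base is an assumption inferred from the 2027 endpoint,
# not a figure stated in the article.

tam_usd_b = 45.0   # assumed 2023 base, in billions of dollars
cagr = 0.70

for year in range(2024, 2028):
    tam_usd_b *= 1 + cagr
    print(f"{year}: ${tam_usd_b:,.0f}B")
# 2027 lands near $376B; growth of "more than 70%" covers the rest
# of the gap to the $400B-plus figure Su cited.
```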

“It uses the most advanced packaging in the world. If you look at how we put it together, it’s actually pretty amazing,” Su said. The MI300X has four IO dies in the base layer. Each IO die has 256 megabytes of Infinity Cache and next-generation IO, including 128-channel HBM3 interfaces, PCIe Gen5 support, and the company’s fourth-generation Infinity Fabric, which connects multiple MI300Xs.

The chip stacks eight CDNA3 accelerator chiplets on top of the IO dies. The 304 compute units are connected via dense through-silicon vias (TSVs), which support up to 17 terabytes per second of bandwidth. Eight stacks of HBM3 are attached for a total of 192 gigabytes of memory and 5.3 TB/s of bandwidth.
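
A short sketch aggregates those building blocks into the chip-level totals; the per-stack figures are inferred by simple division, not quoted by AMD here:

```python
# Aggregate the MI300X package from the building blocks described above:
# 4 IO dies (256MB Infinity Cache each), 8 CDNA3 accelerator chiplets,
# and 8 HBM3 stacks. Per-stack figures are inferred by dividing the
# chip-level totals; AMD does not quote them directly here.

IO_DIES = 4
INFINITY_CACHE_MB_PER_IO_DIE = 256
HBM3_STACKS = 8

TOTAL_HBM_GB = 192
TOTAL_BW_TBS = 5.3

print(f"Infinity Cache: {IO_DIES * INFINITY_CACHE_MB_PER_IO_DIE} MB total")
print(f"per HBM3 stack: {TOTAL_HBM_GB // HBM3_STACKS} GB, "
      f"{TOTAL_BW_TBS / HBM3_STACKS:.2f} TB/s")
# -> 1024 MB of Infinity Cache; 24 GB and ~0.66 TB/s per stack
```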

Cloud providers Microsoft, Oracle, and Meta have put MI300X GPUs in their cloud infrastructure, though those companies still primarily generate their AI horsepower from Nvidia chips.

Cloud providers offering AI alternatives aren’t new: Amazon provides various options, including its newly released Trainium2 chips and Intel’s Gaudi processors. But the intent is clear: customers have more choices and do not have to succumb to Nvidia’s sky-high prices for its H100 chips.

“It’s… exciting right now seeing the bring up of GPT-4 on MI300X, seeing the performance of Llama, getting it rolled into production,” said Kevin Scott, Microsoft’s chief technology officer, during an on-stage appearance at the AMD event.

Oracle Cloud is also putting the MI300X in its cloud service. AMD is working with early adopters such as Naveen Rao, whose MosaicML AI services company was recently acquired by Databricks for $1.3 billion.

As reported on HPCwire, a new cloud service company, TensorWave, will introduce a new scalable and adaptable GPU architecture in 2024. Based on GigaIO’s FabreX composable PCIe technology, the TensorNODE system will support up to 5,760 Instinct MI300X GPUs and present a single FabreX memory fabric domain to all GPUs.

AMD followed in Nvidia’s footsteps by announcing its own server architecture, showing an Open Compute Project (OCP)-compliant server design with eight MI300X GPUs interconnected by Infinity Fabric. The board drops into any OCP-compliant open blueprint on which customers can build servers.
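
Simple arithmetic shows what such an eight-GPU board offers in aggregate; a sketch from the per-GPU figures above (the totals are computed, not quoted in the article):

```python
# Aggregate capacity of an eight-MI300X OCP board, computed from the
# per-GPU figures earlier in the article; the totals themselves are
# not quoted in the text.

GPUS_PER_BOARD = 8
HBM_GB_PER_GPU = 192
BW_TBS_PER_GPU = 5.3

print(f"HBM3 per board: {GPUS_PER_BOARD * HBM_GB_PER_GPU / 1024:.1f} TB")
print(f"aggregate bandwidth: {GPUS_PER_BOARD * BW_TBS_PER_GPU:.1f} TB/s")
# -> 1.5 TB of HBM3 and 42.4 TB/s across the board
```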

“We did this for a very deliberate reason. We wanted to make this as easy as possible for customers to adopt, so you can take out your motherboard and put in the MI300X Instinct platform,” Su said.

Such systems will be cheaper to build, giving customers flexibility to acquire hardware at the best prices. That’s a very different approach from Nvidia, whose HGX systems are based on a proprietary architecture and command a premium.

AMD’s plans to make MI300X OCP-compliant are already paying dividends, with Meta deploying servers with the GPU in record time.

“[MI300X] leverages the OCP module, standard, and platform, which has helped us adopt it in record time. In fact, MI300X is one of the fastest deployment solutions in Meta’s history,” said Ajit Mathews, senior director of engineering at Meta, in an on-stage appearance.

AMD’s hardware focus has come at the expense of the company’s AI software strategy, which has lagged behind Nvidia and its CUDA developer framework. CUDA support has helped boost the adoption of Nvidia GPUs among companies using AI.

The company will soon release the next-generation ROCm 6, claiming new features and performance benefits. Developer George Hotz famously criticized AMD’s GPUs for poor software support, sparse documentation, and slow responses to developers.

ROCm 6 delivers eight times better performance with the MI300X compared to the previous-generation release, said Victor Peng, AMD’s president.

“We have 62,000 models running on Instinct today, and more models will be running on the MI300 very soon,” Peng said.

That figure comes from running a large language model with 70 billion parameters: ROCm 6 on the MI300X is eight times faster than ROCm 5 on the MI250. The ROCm 6 framework will support new data types, including FP8, which will boost performance and free up memory resources and bandwidth. The framework will also include many low-level optimizations for better AI performance.
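
For developers, the practical promise is that mainstream frameworks run on ROCm without code changes: PyTorch’s ROCm builds, for example, expose AMD GPUs through the familiar torch.cuda namespace via HIP. A minimal sketch, assuming a machine with a ROCm-enabled PyTorch install (these are standard PyTorch calls, not ROCm 6-specific features):

```python
# Minimal check that a ROCm-enabled PyTorch build sees an AMD GPU.
# PyTorch's ROCm builds route HIP devices through the torch.cuda API,
# so code written for Nvidia GPUs typically runs without changes.
import torch

print("HIP runtime:", torch.version.hip)      # None on CUDA-only builds
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    device = torch.device("cuda")             # maps to the AMD GPU under ROCm
    x = torch.randn(4096, 4096, device=device, dtype=torch.float16)
    y = x @ x                                 # matmul runs on the accelerator
    print(torch.cuda.get_device_name(0), "->", tuple(y.shape))
```

Routing HIP through the CUDA-facing API is much of what lets existing GPU code move to Instinct hardware with few or no changes.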
