Supercomputing enthusiasts are speed demons, so it made sense for Nvidia to discuss its 2024 computing products at Supercomputing 2023.
Nvidia’s next-generation plans center on the H200 GPU, which will be available through cloud providers and system vendors in the second quarter of next year.
The H200 is an incremental improvement over the H100, with more memory capacity and bandwidth necessary to run heavy-duty AI and high-performance computing applications.
Nvidia seems done talking about the Hopper-based H100 GPU, which pushed the company past the $1 trillion market cap this year.
In a way, the H200 focus at SC23 is a redux of Supercomputing 2022, when Nvidia announced the widespread availability of H100.
The new HBM3e memory brings more bandwidth and capacity to applications; the H100 used HBM3. The H200 carries 141GB of memory, up from the 80GB of HBM3 in the SXM and PCIe versions of the H100.
The H200 has a memory bandwidth of 4.8 terabytes per second, while Nvidia’s H100 boasts 3.35 terabytes per second.
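Those figures work out to roughly a 1.4x bandwidth and 1.75x capacity uplift. A quick back-of-the-envelope check in Python, illustrative only and using the numbers quoted above:

```python
# H100 vs. H200 memory specs quoted in the article (SXM versions)
h100_bandwidth_tbs, h200_bandwidth_tbs = 3.35, 4.8  # terabytes per second
h100_capacity_gb, h200_capacity_gb = 80, 141        # gigabytes

bandwidth_uplift = h200_bandwidth_tbs / h100_bandwidth_tbs
capacity_uplift = h200_capacity_gb / h100_capacity_gb

print(f"bandwidth uplift: {bandwidth_uplift:.2f}x")  # 1.43x
print(f"capacity uplift:  {capacity_uplift:.2f}x")   # 1.76x
```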
As the product name indicates, the H200 is based on the Hopper microarchitecture. Outside of the memory improvements, the H100 and H200 are equivalent on most floating-point and integer measures, including BFLOAT16, the FP formats, and TF32. Like the H100, the H200 has a thermal design power of 700 watts.
Nvidia provided some comparative inferencing benchmarks, saying it was 1.6 times faster than the H100 on GPT-3 and 1.9 times faster on Llama.
The H200 is twice as fast in scientific computing as the A100, which shipped in 2020 and is still widely used in the cloud. Automaker Tesla still uses the A100 for AI, and all major cloud providers offer A100 VMs that are significantly cheaper than H100 instances.
“For memory-intensive HPC applications like simulations, scientific research, and artificial intelligence, the H200’s higher memory bandwidth ensures that data can be accessed and manipulated efficiently,” Nvidia says in a spec sheet.
The H200 can be paired with multiple GPUs in a server system with Nvidia’s NVLink interconnect, which transfers data at 900GB/s. It also supports PCIe Gen5, which can achieve transfer speeds of 128GB/s.
All major cloud providers, minus IBM Cloud, will offer H200 instances starting next year.
In an interesting twist, CoreWeave, which has leveraged its H100 GPUs as collateral for $2.3 billion in debt, will also offer the H200 in its cloud.
Customers can also test the H200 on the company’s LaunchPad service. The company’s AI Enterprise software package allows companies to develop applications for the GPU, and its CUDA software stack was recently updated to version 12.3.
The H200 also features in Nvidia’s GH200 superchip, which pairs a Grace CPU and a Hopper GPU in one package. Nvidia has connected GH200 chips in a dual configuration with 282GB of HBM3e memory, up from the 192GB of HBM in the previous GH200 configuration. The combination provides eight petaflops of AI computing capability and 10TB/sec of HBM3e bandwidth.
Nvidia at SC23 also announced the Quad GH200, which is essentially four GH200 superchips on one board. The package will have 288 Arm cores (based on the Neoverse V2 design), deliver six petaflops of AI performance, and carry 2.3TB of high-speed memory. The Quad GH200 will also appear in supercomputing nodes from Eviden’s BullSequana line and HPE.
Europe’s first exascale supercomputer, called Jupiter, will use GH200 chips in a booster module based on Nvidia technology for AI and HPC applications. The Jupiter core compute module is built around SiPearl’s Rhea1 chip, an Arm-based processor considered Europe’s first homegrown CPU.
Europe is trying to shift its computing systems to open technology and away from proprietary technology like Nvidia’s all-inclusive GH200. But Europe doesn’t have a native GPU and had little option outside Nvidia GPUs to provide the boost needed to reach exascale.
“Since the beginning, we knew that we had to work with external GPUs,” said SiPearl CEO Philippe Notton, adding that a lot of work is still happening on the Rhea1 CPU and the core compute module. The company is designing the Rhea1 to run simulation models and other AI workloads.
The Nvidia GH200 chip will also power the University of Bristol’s Isambard-AI and the National Center for Supercomputing Applications’ DeltaAI supercomputers.
Nvidia claimed that the major scientific computing datacenters deploying its Grace Hopper chips will deliver a combined 350 exaflops of AI performance by 2025.
Nvidia also announced a server platform called HGX H200 built around H200 GPUs. The HGX H200 will be available in four- and eight-way configurations and will also package the latest NVLink and NVSwitch interconnects to link up the GPUs.
An eight-way HGX H200 delivers 32 petaflops of FP8 AI performance. Intel, Nvidia, and Arm have settled on FP8 as an important measure of performance.
Nvidia previously announced it was creating a Grace Hopper supercomputer with 256 GH200 superchips — now called DGX GH200 — that will deliver one exaflop of AI performance. Nvidia CEO Jensen Huang has called the system the “world’s largest single GPU” when all 256 GH200s work in tandem.
Next year, Nvidia will release the flagship B100 GPU, built on a brand-new architecture, and in 2025 it will release the X100 GPU. The company is also planning lower-performing versions of the flagship chips, having recently announced that it would release new AI GPUs every year through 2025.
Nvidia is speeding up its roadmap because it wants to put more distance between itself and the competition, said Kevin Krewell, a principal analyst at Tirias Research.
“The company had such a clear advantage for so long it could take more time between generations. The semiconductor market is also entering a new era with chiplets, which could be changing how Nvidia designs its chips,” Krewell said.
Nvidia rounded out its Supercomputing 2023 announcements with CUDA Quantum, which researchers use to simulate quantum applications. Quantum computers are not easily available, so Nvidia’s GPUs simulate quantum circuits at speed, letting researchers run applications as if they were executing on quantum hardware.
Companies are also experimenting with hybrid computing environments in which quantum processors are paired with GPUs.
BASF researchers recently ran a 24-qubit simulation on GPUs, and the team plans to scale up to a 50-qubit simulation on Nvidia’s Eos supercomputer, which comprises 576 DGX H100 systems with 4,608 H100 GPUs.
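Statevector simulation of the kind BASF ran scales steeply: an n-qubit state needs 2^n complex amplitudes, so 24 qubits already require about 256MB at double precision, and a full 50-qubit statevector would run to petabytes unless distributed or approximated. A minimal NumPy sketch (illustrative only, not CUDA Quantum’s actual API) of simulating a two-qubit Bell circuit on classical hardware:

```python
import numpy as np

# Standard gate matrices: single-qubit Hadamard and two-qubit CNOT
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0                          # start in |00>
state = np.kron(H, np.eye(2)) @ state   # Hadamard on the first qubit
state = CNOT @ state                    # entangle into (|00> + |11>)/sqrt(2)

probabilities = state ** 2              # measurement probabilities
print(probabilities)                    # 0.5 each for |00> and |11>
```

Every added qubit doubles the statevector and the matrix-vector work, which is exactly the kind of dense linear algebra GPUs accelerate — and why these simulations land on supercomputer-class machines.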