AI Trifecta – MLPerf Issues Latest HPC, Training, and Tiny Benchmarks

By John Russell

November 10, 2022

MLCommons yesterday issued its latest round of MLPerf benchmarks – for Training, HPC, and Tiny. Releasing three sets of benchmarks at the same time makes parsing the results, never an easy chore, even harder. Top line: Nvidia accelerators again dominated Training and HPC (Nvidia did not submit in the Tiny category), but Habana’s Gaudi2 performed well in Training, and Intel’s forthcoming Sapphire Rapids CPU demonstrated that CPU-based training, though not blindingly fast, can also work well.

More broadly, while the number of submissions across the three benchmark suites was impressive, the number of non-Nvidia accelerators used in the Training and HPC exercises remained low. In this round (Training and HPC) there was only one pure-play competitive accelerator, down from three in the last round. It’s not clear exactly what this means.

Here’s a snapshot of today’s announcement with links to each set of results:

  • MLPerf Training v2.1 benchmark, targeting machine learning models that are used in commercial applications, included nearly 200 results from 18 different submitters spanning small workstations to large-scale data center systems with thousands of processors. (Link to Training results)
  • MLPerf HPC v2.0 benchmark, targeting supercomputers, included “over 20 results from 5 organizations” including submissions from some of the world’s largest supercomputers. (Link to HPC results)
  • The MLPerf Tiny v1.0 benchmark is intended for the lowest-power devices and smallest form factors. It measures inference performance – how quickly a trained neural network can process new data – and includes an optional energy measurement component. It drew 59 performance results, including 39 energy measurements, from 8 different organizations – a new record for Tiny. (Link to Tiny results)

During an Nvidia pre-briefing, David Salvator, director of AI, benchmarking and cloud, was asked by analyst Karl Freund of Cambrian AI Research, “Given the dearth of competitive submissions, are you concerned about the long-term viability of MLPerf?”

Salvator responded, “That’s a fair question. We are doing everything we can to encourage participation. It is our hope that as some of the new solutions continue to come to market from others, that they will want to show off the benefits of those solutions on an industry standard benchmark, as opposed to doing their own.”

At the broader MLPerf pre-briefing, MLCommons executive director David Kanter answered essentially the same question, “The first thing I would say, from an MLPerf standpoint, is I think that all submissions are interesting, regardless of what type of hardware or software is used. We have new submissions using new processors (Intel/Habana).” He also emphasized it’s not just hardware that’s being tested; there are many different software elements being tested which have a dramatic effect on performance. The latter is certainly true.

How important the lack of alternative accelerators is remains an open question and perhaps a sensitive point for MLCommons. There is a definite sense that systems buyers are asking systems vendors for MLPerf performance results, official or otherwise, to help guide procurement decisions. For distinguishing among systems vendors and the various configurations they pitch for AI-related tasks, MLPerf is proving genuinely useful.

That said, Habana’s Gaudi2 was the only other “accelerator” used in submissions this round (Training and HPC). Intel’s forthcoming Sapphire Rapids CPU served as its own accelerator, leveraging its internal matrix multiply engine. Intel was quick to point out it wasn’t aiming to outrace dedicated AI servers here, but to demonstrate viability for buyers with many workloads and only intermittent AI needs.

Intel’s Sapphire Rapids

Jordan Plawner, senior director, AI products, Intel, said simply, “We have proven that on a standard two-socket [Intel Xeon Scalable processor] server, you can train – in this case, three different deep learning workloads. And we are training much more now. Our cluster was only up a few weeks before the MLPerf submission, but we’re training a dozen right now, and we’ll submit even more next time. This just says it can be done.”

For the moment, MLPerf remains largely a showcase for Nvidia GPUs and systems vendors’ ingenuity in using them for AI tasks. MLCommons has made digging into the results posted on its website relatively easy. Given three very different sets of benchmark results were issued today, it’s best to dig directly into the spreadsheet of interest. MLCommons also invites participating organizations to submit statements describing AI-relevant features of their systems – they range from mostly promotional to instructive. (Link to statements at end of article)

Presented below are brief highlights from MLPerf Training; we’ll cover the HPC results in a subsequent article. To get the most out of the MLPerf exercise, it’s best to dig into the spreadsheet and carefully compare systems and their configurations (CPUs, number of accelerators, interconnects) across workloads of interest.

It’s (Mostly) the Nvidia Showcase Again

Training is generally considered the most compute-intensive of AI tasks, and Nvidia’s A100-based systems have dominated MLPerf Training since the A100’s introduction in 2020. For the latest round, Nvidia submitted systems using its new H100 GPU, introduced in the spring, but in the preview category (parts commercially available within six months) rather than the available-now category. A100-based systems from a diverse array of vendors were in the available division (link to MLPerf Training rules).

In a blog today, Nvidia claimed a sweep of top performances by the A100 in the closed division (apples-to-apples systems) and also touted the H100’s performance. In a press/analyst briefing, Salvator worked to shine a spotlight on both GPUs without diminishing either.

“H100 is delivering up to 6.7x more performance than A100, and the way we’re baselining is comparing it to the A100 submission that was made a couple of years ago. We’re expressing it this way because the A100 has seen a 2.5x gain from software optimizations alone over the last couple of years. This is the first time we’re submitting H100. It will not be the last. Over the next couple of years, I can’t tell you absolutely that we’re going to get 2.5x more performance on top of what we’ve gotten with Hopper now, but I can tell you, we will get more performance,” said Salvator.

The H100’s ability to perform mixed-precision calculations using what Nvidia calls its transformer engine is a key driver of the H100’s advantage over the A100.

“Transformer engine technology [works] layer by layer. We analyze each layer and ask, can we accurately run this layer using FP8? If the answer is yes, we use FP8. If the answer is no, we use higher precision – mixed precision, where we do our calculations using FP16 and accumulate those results using FP32 to preserve accuracy. This is key with MLPerf performance, [whose] tests are about time-to-solution. That means you not only have to run fast, but you have to converge [on an accurate model]. The point is the transformer engine technology works well for transformer-based networks like BERT, and there are a lot of other transformer-based networks out there.”
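The layer-by-layer decision Salvator describes can be pictured with a short sketch. This is purely illustrative – the `Layer` type, the `fp8_safe` flag, and the precision plan are hypothetical stand-ins, not Nvidia's actual transformer engine implementation:

```python
# Illustrative sketch of per-layer precision selection as described above.
# NOT Nvidia's implementation; names and the accuracy check are assumptions.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    fp8_safe: bool  # would this layer still converge if run in FP8?

def pick_precision(layer: Layer) -> dict:
    """Choose compute/accumulate precision for one layer."""
    if layer.fp8_safe:
        # FP8 math is accurate enough here: take the speed win.
        return {"compute": "FP8", "accumulate": "FP32"}
    # Otherwise fall back to mixed precision: FP16 math with FP32
    # accumulation to preserve accuracy (and hence time-to-solution).
    return {"compute": "FP16", "accumulate": "FP32"}

model = [Layer("attention", True), Layer("layernorm", False), Layer("mlp", True)]
plan = {layer.name: pick_precision(layer) for layer in model}
```

The point of the scheme is that accuracy is checked per layer, so the fast path is used everywhere it does not threaten convergence – which matters because MLPerf scores time-to-solution, not raw throughput.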

He cited NLP and Stable Diffusion (image generation from text) as examples that could benefit from Nvidia’s transformer engine technology.

Gaudi2 and Sapphire Rapids Join the Chase

The only other pure-play accelerator was the Gaudi2 from Habana Labs, owned by Intel, which like the H100 was introduced in the spring. Intel/Habana was able to run Gaudi2-based systems shortly after launch and in time for the MLPerf Training results released in June. It was a strong show of out-of-the-box readiness, if not full optimization, and the Gaudi2-based systems performed well, posting the best time against the A100 on ResNet training.

At the time Habana blogged, “Compared to our first-generation Gaudi, Gaudi2 achieves 3x speed-up in Training throughput for ResNet-50 and 4.7x for BERT. These advances can be attributed to the transition to the 7nm process from 16nm, tripling the number of Tensor Processor Cores, increasing our GEMM engine compute capacity, tripling the in-package high bandwidth memory capacity and increasing its bandwidth, and doubling the SRAM size. For vision models, Gaudi2 has another new feature, an integrated media engine, which operates independently and can handle the entire pre-processing pipe for compressed imaging, including data augmentation required for AI training.”

Since then, Habana has been optimizing the software and Gaudi2’s performance in the latest MLPerf training round demonstrated that improvement as shown in the slide below.

Habana reported that when comparing systems with the same number of accelerators – in this case 8 GPUs or 8 Gaudi2s – Gaudi2 again outperformed the A100, but not the H100. Habana COO Eitan Medina was not impressed with the H100 and took a jab at Nvidia in a pre-briefing with HPCwire.

“When we looked at them (H100), they actually took advantage of FP8 in the submission. We still submitted using FP16, even though we do have FP8; the team is working to enable that in the next several months. So there’s significant potential for improvement for us,” said Medina.

“What’s jumping out, at least for me, is the fact that H100 has not really improved as much as we expected, at least based on what was provided. If you review Nvidia documentation on what they expect the acceleration versus A100 [to be], it was actually a much larger factor than what was actually shown here. We don’t know why. We can only speculate, but if they said that GPT-3 would be accelerated by a factor of 6x over A100, how come it’s only a factor of 2.6x?” he said.
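Medina’s 2.6x figure and Nvidia’s 6.7x claim are not necessarily in conflict; they use different baselines. A quick back-of-the-envelope using only the numbers quoted in this article:

```python
# Reconciling the two H100 speedup figures quoted above. Inputs come from
# the quotes in this article; this is arithmetic, not new benchmark data.
h100_vs_original_a100 = 6.7  # Nvidia: H100 vs the original 2020-era A100 submission
a100_software_gain = 2.5     # Nvidia: A100 gain from software optimizations since then

# Implied H100 advantage over a current, fully software-optimized A100:
h100_vs_current_a100 = h100_vs_original_a100 / a100_software_gain
print(round(h100_vs_current_a100, 2))  # → 2.68
```

That implied ~2.7x against a current A100 lands right where Medina’s 2.6x observation does, which suggests the gap he flags is largely a matter of which A100 baseline is used.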

One has the sense that this is just an early round in a marketing battle. Nvidia today dominates the market.

Lastly, there’s Intel’s Sapphire Rapids-based submission in the preview category. Leaving aside Sapphire Rapids’ much-watched trip to market, Intel was able to demonstrate, as Plawner said, that a two-socket Sapphire Rapids system can effectively perform training.

Asked what makes Sapphire Rapids a good AI platform, Plawner said, “For any training accelerator/CPU, you need a balanced platform. One of the areas where we’ve been imbalanced is we haven’t had the compute power. Advanced matrix extensions, AMX, is kind of an ISA. You can almost think of it like a coprocessor that sits on every CPU core, that’s invoked through an instruction set, and the functions are offloaded to that block within the CPU core. That’s giving us – we’ve already talked about this at architecture day – an 8x speedup at the operator level, sort of the lowest level of the model.

“But models are not all GEMMs and operators, right? In practice, it’s a 3-to-6x speed-up, because some models are memory-I/O bound and some are more compute bound. You get closer to 8x with the compute-bound ones and closer to 3-or-4x with the memory-bound ones. The other parts of the balanced platform are PCIe Gen 5 and DDR memory, because that’s what feeds the models and the compute engine. When we talk about Sapphire Rapids being a great platform for AI – it’s really a balanced platform. It’s advanced matrix extensions, PCIe Gen 5, DDR, and our cache hierarchy that all come together for this solution,” he said.
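Plawner’s 3-to-6x range follows from simple Amdahl’s-law reasoning: only the fraction of a model’s runtime spent in matrix math benefits from AMX’s roughly 8x operator-level speedup. A hedged sketch – the GEMM time fractions below are illustrative assumptions, not Intel measurements:

```python
# Amdahl's-law sketch of why an ~8x AMX kernel speedup yields ~3-6x
# end-to-end. The GEMM time fractions are illustrative, not Intel data.
def effective_speedup(gemm_fraction: float, kernel_speedup: float = 8.0) -> float:
    """Overall speedup when only the GEMM share of runtime is accelerated."""
    return 1.0 / ((1.0 - gemm_fraction) + gemm_fraction / kernel_speedup)

# Sweep from memory-bound toward compute-bound models:
for frac in (0.7, 0.85, 0.95):
    print(f"{frac:.0%} GEMM time -> {effective_speedup(frac):.1f}x overall")
```

With these assumed fractions the sweep lands at roughly 2.6x, 3.9x, and 5.9x – consistent with the 3-to-6x practical range Plawner cites, with compute-bound models approaching the 8x kernel ceiling.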

Intel’s ambition here is not to build a dedicated Xeon-alone AI platform. (We’ll have to wait and see whether Intel builds Sapphire Rapids-based systems with multiple Intel GPUs for dedicated AI workloads.)

Said Plawner, “While we can speed up training by just continually adding more nodes, we don’t think it’s practical to tell users to use 50 or 100 nodes. The point is to show people that on a reasonable number of nodes – reasonable being what you can get out of a cluster in your datacenter, what you can get from a CSP, or what you might already have as part of an HPC job-sharing cluster – if you don’t need dedicated all-day, year-round deep learning training, then you can simply do your training on Xeon. The TCO here is TCO for all the workloads across the whole datacenter.”

He noted that Intel is looking at many training technology improvements. “[We’re] looking at fine-tuning; that’s not part of the MLPerf results here. But that’s where you can take a large model and just [re]train the last few layers. We have some experiments in house now. We’ll release them when we do the tech press briefings in December to show that we can do fine-tuning in less than 10 minutes on some of these models, including something as large as a BERT-large model. [Intel] will share the specific data once it’s approved.”

Feature art: Nvidia’s H100 GPU

Link to MLCommons press release.

Link to Nvidia blog.

Link to Intel blog.

Links to statements by MLPerf submitters:



