Nvidia Dominates (Again) Latest MLPerf Inference Results

By John Russell

October 22, 2020

The two-year-old AI benchmarking group MLPerf.org released its second set of inferencing results yesterday and again, as in the most recent MLPerf training results (July 2020), it was almost entirely The Nvidia Show, a point made clearest by the fact that 85 percent of the submissions used Nvidia accelerators. One wonders where the rest of the AI accelerator crowd is: Cerebras (CS-1), AMD (Radeon), Groq (Tensor Streaming Processor), SambaNova (Reconfigurable Dataflow Unit), Google (TPU), et al.

For the moment, Nvidia rules the MLPerf roost. It posted the top performances in the categories in which it participated, dominating the ‘closed’ datacenter and closed edge categories. MLPerf’s closed categories impose system/network restrictions intended to ensure apples-to-apples comparisons among participating systems. The ‘open’ versions of the categories permit customization. Practically speaking, few of the non-Nvidia submissions were expected to outperform Nvidia’s phalanx of A100s, T4s, and Quadro RTXs.

Nvidia touted the results in a media briefing and subsequent blog by Paresh Kharya, senior director, product management, accelerated systems. The A100 GPU is up to 237X faster than CPU-based systems in the open datacenter category, reported Nvidia, and its Jetson AGX video analytics system and T4 chips also performed well in the power-sensitive edge category.

Broadly, CPU-only systems did less well, though interestingly, Intel had a submission in the notebook division using a GPU from its long-awaited Xe line; it was the only submission in that category.

Kharya declared, “The Nvidia A100 is 237 times faster than the Cooper Lake CPU. To put that into perspective, look at the chart on the right: a single DGX-A100 provides the same performance on recommendation systems as 1,000 CPU servers.” The competitive juices are always flowing, and given the lack of alternative accelerators represented, Nvidia can perhaps be forgiven for crowing in the moment.

Leaving aside Nvidia’s dominance, MLPerf continues improving its benchmark suite and process. It added several models, added new categories based on form factor, instituted randomized third-party audits of rules compliance, and attracted roughly double the number of submissions (23 versus 12) from its first inferencing run of a year ago.

Moreover, the head-to-head comparisons among participating systems makers – Dell EMC, Inspur, Fujitsu, Netrix, Supermicro, QCT, Cisco, Atos – will make interesting reading (more below). It was also good to see that inferencing can be run effectively at the edge on other platforms such as the Raspberry Pi 4 and Firefly RK-3399, both using Arm technology (Cortex-A72).

“We’re pleased with the results and progress,” said David Kanter, executive director of MLCommons (organizer of MLPerf.org). “We have more benchmarks to cover more use case areas. I think we did a much better job of having good divisions between the different classes of systems. If you look at the first round of inference, we had smartphone chips, and then we had 300-watt monster chips, right, and it doesn’t really make sense to compare those things for the most part.” The latest inference suite – v0.7 – has the following divisions: datacenter (closed and open); edge (closed and open); mobile phones (closed and open); and mobile notebooks (closed and open).

On balance, observers were mildly disappointed but not surprised by how few young accelerator chip/system companies participated. Overall, the AI community still seems largely supportive of MLPerf and says it remains on track to become an important forum:

  • Karl Freund of Moor Insights and Strategy said, “NVIDIA did great against a shallow field of competitors. Their A100 results were amazing, compared to the V100, demonstrating the value of their enhanced tensor core architecture. That being said, the competition is either too busy with early customer projects or their chips are just not yet ready. For example, SambaNova announced a new partnership with LLNL, and Intel Habana is still in the oven. If I were still at a chip startup, I would wait to run MLPerf (an expensive project) until I already had secured a few lighthouse customers. MLPerf is the right answer, but will remain largely irrelevant until players are farther along their life cycle.”
  • Rick Stevens, associate director of Argonne National Laboratory, said, “I think the other companies are still quite early in optimizing their software and hardware stacks. At some point I would expect Intel and AMD GPUs to start showing up when they have gear in the field and software is tuned up. It takes a mature stack, mature hardware and an experienced team to do well on these benchmarks. Also the benchmarks need to track the research front of AI models and that takes effort as well. For the accelerator “startups” this is a huge amount of work and most of their teams are still small and focused on getting product up and out.”

Stevens also noted, “I should point out that many of the startups are trying to go for particular model types and scenarios somewhat orthogonal to competing with existing players, and MLPerf is more focused on mainstream models and may not represent these new directions very well. One idea might be to create an ‘unlimited’ division where new companies could demonstrate any results they want on any models.”

MLPerf has deliberately worked to stress real-world models, said Kanter, who added that the organization is actively looking at ways to attract more entrants from the burgeoning ranks of AI chip and systems makers.

Each MLPerf Inference benchmark is defined by a model, a dataset, a quality target, and a latency constraint. There are three benchmark suites in MLPerf Inference v0.7: one for datacenter systems, one for edge systems, and one for mobile systems. The datacenter suite targets systems designed for datacenter deployments; the edge suite targets systems deployed outside of datacenters. The suites share multiple benchmarks, but with different requirements.
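To make that structure concrete, here is a minimal, purely illustrative Python sketch of how such a benchmark definition could be represented. The class, field names, and numeric values below are assumptions for illustration only; they are not MLPerf's official specification or reference code.

```python
from dataclasses import dataclass

@dataclass
class InferenceBenchmark:
    # Illustrative stand-in for an MLPerf Inference benchmark definition:
    # a model, a dataset, a quality target, and a latency constraint.
    model: str
    dataset: str
    quality_target: str       # e.g. accuracy relative to an FP32 reference
    latency_bound_ms: float   # placeholder per-query latency bound

# Hypothetical entries loosely mirroring two of the v0.7 datacenter benchmarks;
# the quality targets and latency numbers are placeholders, not official values.
benchmarks = [
    InferenceBenchmark("BERT", "SQuAD 1.1", "~99% of FP32 accuracy", 130.0),
    InferenceBenchmark("DLRM", "click-through-rate logs", "~99% of FP32 accuracy", 30.0),
]

for b in benchmarks:
    print(f"{b.model}: dataset={b.dataset}, target={b.quality_target}, "
          f"latency <= {b.latency_bound_ms} ms")
```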

The MLPerf Inference v0.7 suite includes four new benchmarks for datacenter and edge systems:

  • BERT: Bidirectional Encoder Representations from Transformers (BERT), fine-tuned for question answering using the SQuAD 1.1 data set. Given a question input, the BERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads (see the sketch following this list).
  • DLRM: Deep Learning Recommendation Model (DLRM) is a personalization and recommendation model that is trained to optimize click-through rates (CTR). Common examples include recommendation for online shopping, search results, and social media content ranking.
  • 3D U-Net: The 3D U-Net architecture is trained on the BraTS 2019 dataset for brain tumor segmentation. The network identifies whether each voxel within a 3D MRI scan belongs to healthy tissue or a particular brain abnormality (i.e., GD-enhancing tumor, peritumoral edema, necrotic and non-enhancing tumor core), and is representative of many medical imaging tasks.
  • RNN-T: Recurrent Neural Network Transducer is an automatic speech recognition (ASR) model that is trained on a subset of LibriSpeech. Given a sequence of speech input, it predicts the corresponding text. RNN-T is representative of widely used speech-to-text systems.
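For readers who want a feel for what the BERT question-answering task looks like in practice, here is a minimal sketch using the Hugging Face transformers pipeline with a publicly available SQuAD-fine-tuned checkpoint. This is not the MLPerf reference implementation; the model name is simply one illustrative choice.

```python
# Minimal BERT question-answering sketch (not MLPerf reference code).
from transformers import pipeline

# A public BERT-Large checkpoint fine-tuned on SQuAD 1.1, chosen for illustration.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "MLPerf Inference v0.7 added four new benchmarks for datacenter and "
    "edge systems: BERT, DLRM, 3D U-Net, and RNN-T."
)
question = "Which benchmarks were added in MLPerf Inference v0.7?"

result = qa(question=question, context=context)
print(result["answer"], round(result["score"], 3))
```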

The latest inference round introduces MLPerf Mobile, “the first open and transparent set of benchmarks for mobile machine learning. MLPerf Mobile targets client systems with well-defined and relatively homogeneous form factors and characteristics such as smartphones, tablets, and notebooks. The MLPerf Mobile working group, led by Arm, Google, Intel, MediaTek, Qualcomm, and Samsung Electronics, selected four new neural networks for benchmarking and developed a smartphone application.”

The four new mobile benchmarks are available in the TensorFlow, TensorFlow Lite, and ONNX formats, and include:

  • MobileNetEdgeTPU: This is an image classification benchmark; image classification is considered the most ubiquitous task in computer vision. The model deploys the MobileNetEdgeTPU feature extractor, which is optimized with neural architecture search for low latency and high accuracy when deployed on mobile AI accelerators. It classifies input images with 224 x 224 resolution into 1,000 different categories (see the TensorFlow Lite sketch following this list).
  • SSD-MobileNetV2: Single Shot multibox Detection (SSD) with MobileNetv2 feature extractor is an object detection model trained to detect 80 different object categories in input frames with 300×300 resolution. This network is commonly used to identify and track people/objects for photography and live videos.
  • DeepLabv3+ MobileNetV2: This is an image semantic segmentation benchmark. This model is a convolutional neural network that deploys MobileNetV2 as the feature extractor, and uses the Deeplabv3+ decoder for pixel-level labeling of 31 different classes in input frames with 512 x 512 resolution. This task can be deployed for scene understanding and many computational photography applications.
  • MobileBERT: The MobileBERT model is a mobile-optimized variant of the larger BERT model that is fine-tuned for question answering using the SQuAD 1.1 data set. Given a question input, the MobileBERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads.
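As a rough idea of how a classifier like MobileNetEdgeTPU would be exercised on a device, here is a hedged TensorFlow Lite sketch. The model file name is a placeholder rather than an official MLPerf Mobile artifact, and a real run would feed preprocessed camera or dataset images instead of random data.

```python
# Hedged sketch: running a 224 x 224 image classifier with the TFLite interpreter.
import numpy as np
import tensorflow as tf

# "mobilenet_edgetpu.tflite" is a placeholder path, for illustration only.
interpreter = tf.lite.Interpreter(model_path="mobilenet_edgetpu.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in input; the benchmark classifies 224 x 224 images into 1,000 categories.
image = np.random.rand(1, 224, 224, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class index:", int(np.argmax(scores)))
```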

“The MLPerf Mobile app is extremely flexible and can work on a wide variety of smartphone platforms, using different computational resources such as CPU, GPUs, DSPs, and dedicated accelerators,” said Vijay Janapa Reddi from Harvard University and chair of the MLPerf Mobile working group in the official results press release. The app comes with built-in support for TensorFlow Lite, providing CPU, GPU, and NNAPI (on Android) inference backends, and also supports alternative inference engines through vendor-specific SDKs.

MLPerf says its mobile application will be available for download on multiple operating systems in the near future, so that consumers across the world can measure the performance of their own smartphones. “We got all three of the major independent (mobile) SOC vendors. We built something we think is strong and hope to see it more widely used, drawing some of the OEMs and additional SOC vendors,” said Kanter.

The datacenter and edge closed categories drew the lion’s share of submissions and are, perhaps, of most interest to the HPC and broader enterprise AI communities. It’s best to go directly to the results tables, which MLPerf has made available and which are easily searched.

Dell EMC, for example, had 16 different systems (PowerEdge and DSS) in various configurations using different accelerators and processors in the closed datacenter grouping. Its top performer on image classification (ImageNet) was a DSS 8440 system with 2 Intel 6230 Xeon Gold processors and 10 Quadro RTX 8000s. The three top performers on that particular test were: an Inspur system (NF5488A5) with 2 AMD Epyc 7742 CPUs and 8 Nvidia A100-SXM4 (NVLink) GPUs; an Nvidia DGX-A100, also with 2 AMD Epyc 7742 CPUs and 8 Nvidia A100-SXM4s; and a QCT system (D526) with 2 Intel Xeon Gold 6248 CPUs and 10 Nvidia A100-PCIe GPUs.

This is just one of the tests in the datacenter suite, and performance varies across the various tests (image classification, NLP, medical image analysis, etc.). Here’s a snapshot of a few results excerpted from MLPerf’s tables for the closed datacenter category (some data has been omitted).

As noted earlier, the results are best examined directly; they include information about software stacks, networks, etc., that permits a more thorough assessment. MLPerf skipped inference v0.6, which would otherwise have been due around now, in order to more closely align the release of training and inferencing results.

A 2019 paper (MLPerf Benchmark), published roughly a year ago, details the thinking that went into forming the effort.

One interesting note is the growth of GPU use for AI activities generally. The hyperscalers have played a strong role in developing the technology (frameworks and hardware) and have been ramping up accelerator-based instance offerings to accommodate growing demand and to handle increasingly large and complex models.

In his pre-briefing Kharya said, “Since AWS launched our GPUs in 2010, 10 years ago to now, we have exceeded the aggregate amount of GPU compute in the cloud compared to all of the cloud CPUs.”

That’s a big claim. When pressed in Q&A he confirmed this estimate of AI compute inference capacity (ops) is based on all the CPUs shipped, not just those shipped for inference. “Yes, that is correct. That’s correct. All CPUs shipped, and all GPUs shipped. For precision, we’ve taken the best precision, meaning for Cascade Lake, Intel introduced the integer eight, and so we’ve taken integer eight for CPUs, and similarly, we’ve taken the best precision, integer eight or FP16, depending upon the generation of our GPU architecture,” he said.
