Nvidia Dominates (Again) Latest MLPerf Inference Results

By John Russell

October 22, 2020

The two-year-old AI benchmarking group MLPerf.org released its second set of inferencing results yesterday and, as in the most recent MLPerf training results (July 2020), it was again almost entirely The Nvidia Show, a point made clearest by the fact that 85 percent of the submissions used Nvidia accelerators. One wonders where the rest of the AI accelerator crowd is: Cerebras (CS-1), AMD (Radeon), Groq (Tensor Streaming Processor), SambaNova (Reconfigurable Dataflow Unit), Google (TPU), et al.

For the moment, Nvidia rules the MLPerf roost. It posted the top performances in the categories in which it participated, dominating the ‘closed’ datacenter and closed edge categories. MLPerf’s closed categories impose system/network restrictions intended to ensure apples-to-apples comparisons among participating systems, while the ‘open’ versions permit customization. Practically speaking, few of the non-Nvidia submissions were expected to outperform Nvidia’s phalanx of A100s, T4s, and Quadro RTXs.

Nvidia touted the results in a media briefing and a subsequent blog by Paresh Kharya, senior director of product management, accelerated systems. The A100 GPU is up to 237x faster than CPU-based systems in the open datacenter category, Nvidia reported, and its Jetson AGX video analytics platform and T4 chips also performed well in the power-sensitive edge category.

Broadly, CPU-only systems did less well, though interestingly, Intel had a submission in the notebook division using a part from its long-awaited Xe GPU line; it was the only submission in that category.

Kharya declared, “The Nvidia A100 is 237 times faster than the Cooper Lake CPU. To put that into perspective, look at the chart on the right: a single DGX-A100 provides the same performance on recommendation systems as 1,000 CPU servers.” The competitive juices are always flowing, and given the lack of alternative accelerators represented, Nvidia can perhaps be forgiven for crowing in the moment.

Leaving aside Nvidia’s dominance, MLPerf continues improving its benchmark suite and process. It added several models, created new categories based on form factor, instituted randomized third-party audits of rules compliance, and attracted roughly double the number of submitters (23 versus 12) compared with its first inferencing run a year ago.

Moreover, the head-to-head comparisons among participating systems makers – Dell EMC, Inspur, Fujitsu, Netrix, Supermicro, QCT, Cisco, Atos – will make interesting reading (more below). It was also good to see that inferencing can be run effectively at the edge on other platforms such as the Raspberry Pi 4 and Firefly RK-3399, both using Arm technology (Cortex-A72).

“We’re pleased with the results and progress,” said David Kanter, executive director of MLCommons (organizer of MLPerf.org). “We have more benchmarks to cover more use case areas. I think we did a much better job of having good divisions between the different classes of systems. If you look at the first round of inference, we had smartphone chips, and then we had 300-watt monster chips, right, and it doesn’t really make sense to compare those things for the most part.” The latest inference suite – v0.7 – has the following divisions: datacenter (closed and open); edge (closed and open); mobile phones (closed and open); and mobile notebooks (closed and open).

On balance, observers were mildly disappointed but not surprised by how few young accelerator chip/system companies participated. Overall, the AI community still seems largely supportive of MLPerf and says it remains on track to become an important forum:

  • Karl Freund of Moor Insights & Strategy said, “NVIDIA did great against a shallow field of competitors. Their A100 results were amazing, compared to the V100, demonstrating the value of their enhanced tensor core architecture. That being said, the competition is either too busy with early customer projects or their chips are just not yet ready. For example, SambaNova announced a new partnership with LLNL, and Intel Habana is still in the oven. If I were still at a chip startup, I would wait to run MLPerf (an expensive project) until I already had secured a few lighthouse customers. MLPerf is the right answer, but will remain largely irrelevant until players are farther along their life cycle.”
  • Rick Stevens, associate director of Argonne National Laboratory, said, “I think the other companies are still quite early in optimizing their software and hardware stacks. At some point I would expect Intel and AMD GPUs to start showing up when they have gear in the field and software is tuned up. It takes a mature stack, mature hardware and an experienced team to do well on these benchmarks. Also the benchmarks need to track the research front of AI models and that takes effort as well. For the accelerator ‘startups’ this is a huge amount of work and most of their teams are still small and focused on getting product up and out.”

Stevens also noted, “I should point out that many of the startups are trying to go for particular model types and scenarios somewhat orthogonal to existing players, and MLPerf is more focused on mainstream models and may not represent these new directions very well. One idea might be to create an ‘unlimited’ division where new companies could demonstrate any results they want on any models.”

MLPerf has deliberately worked to stress real-world models, said Kanter, who added that the organization is actively looking at ways to attract more entrants from the burgeoning ranks of AI chip and systems makers.

Each MLPerf Inference benchmark is defined by a model, a dataset, a quality target, and a latency constraint. There are three benchmark suites in MLPerf Inference v0.7: one for datacenter systems, one for edge systems, and one for mobile systems. The datacenter suite targets systems designed for datacenter deployments, while the edge suite targets systems deployed outside of datacenters. The suites share multiple benchmarks with different requirements.
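As a concrete illustration of that four-part definition, here is a minimal Python sketch; the class and the example values are hypothetical stand-ins, not MLPerf code:

```python
# Hypothetical sketch: the four parameters that define an MLPerf Inference
# benchmark. Values below are illustrative, not official MLPerf constraints.
from dataclasses import dataclass

@dataclass
class InferenceBenchmark:
    model: str                     # network under test
    dataset: str                   # evaluation dataset
    quality_target: str            # accuracy floor relative to the reference model
    latency_constraint_ms: float   # per-query latency bound (server scenario)

# Patterned on the v0.7 BERT benchmark described below; the latency figure
# is a placeholder, not the official bound.
bert_qa = InferenceBenchmark(
    model="BERT",
    dataset="SQuAD 1.1",
    quality_target="99% of FP32 reference accuracy",
    latency_constraint_ms=130.0,
)
```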

The MLPerf Inference v0.7 suite includes four new benchmarks for datacenter and edge systems:

  • BERT: Bidirectional Encoder Representations from Transformers (BERT) fine-tuned for question answering using the SQuAD 1.1 dataset. Given a question input, the BERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads (a minimal sketch follows this list).
  • DLRM: Deep Learning Recommendation Model (DLRM) is a personalization and recommendation model trained to optimize click-through rates (CTR). Common examples include recommendations for online shopping, search results, and social media content ranking.
  • 3D U-Net: The 3D U-Net architecture is trained on the BraTS 2019 dataset for brain tumor segmentation. The network identifies whether each voxel within a 3D MRI scan belongs to healthy tissue or to a particular brain abnormality (i.e., GD-enhancing tumor, peritumoral edema, or necrotic and non-enhancing tumor core), and is representative of many medical imaging tasks.
  • RNN-T: Recurrent Neural Network Transducer is an automatic speech recognition (ASR) model that is trained on a subset of LibriSpeech. Given a sequence of speech input, it predicts the corresponding text. RNN-T is representative of widely used speech-to-text systems.
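To make the BERT task concrete, below is a minimal sketch of SQuAD-style question answering using the Hugging Face transformers library. It illustrates the workload only, not MLPerf’s reference implementation or harness; the checkpoint name is simply a public BERT model fine-tuned on SQuAD.

```python
# Illustrative only: SQuAD-style question answering of the kind the MLPerf
# BERT benchmark measures. Not the MLPerf reference implementation.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="What does MLPerf measure?",
    context=(
        "MLPerf is a benchmark suite that measures the training and "
        "inference performance of machine learning systems."
    ),
)
print(result["answer"], result["score"])  # extracted answer span and confidence
```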

The latest inference round introduces MLPerf Mobile, “the first open and transparent set of benchmarks for mobile machine learning. MLPerf Mobile targets client systems with well-defined and relatively homogeneous form factors and characteristics such as smartphones, tablets, and notebooks. The MLPerf Mobile working group, led by Arm, Google, Intel, MediaTek, Qualcomm, and Samsung Electronics, selected four new neural networks for benchmarking and developed a smartphone application.”

The four new mobile benchmarks are available in the TensorFlow, TensorFlow Lite, and ONNX formats, and include (a minimal TensorFlow Lite sketch follows the list):

  • MobileNetEdgeTPU: This is an image classification benchmark; image classification is considered the most ubiquitous task in computer vision. The model uses the MobileNetEdgeTPU feature extractor, which is optimized with neural architecture search for low latency and high accuracy on mobile AI accelerators, and classifies input images at 224 x 224 resolution into 1,000 different categories.
  • SSD-MobileNetV2: Single Shot multibox Detection (SSD) with a MobileNetV2 feature extractor is an object detection model trained to detect 80 different object categories in input frames at 300 x 300 resolution. This network is commonly used to identify and track people and objects in photography and live video.
  • DeepLabv3+ MobileNetV2: This is an image semantic segmentation benchmark. The model is a convolutional neural network that uses MobileNetV2 as the feature extractor and the DeepLabv3+ decoder for pixel-level labeling of 31 different classes in input frames at 512 x 512 resolution. This task can be deployed for scene understanding and many computational photography applications.
  • MobileBERT: The MobileBERT model is a mobile-optimized variant of the larger BERT model, fine-tuned for question answering using the SQuAD 1.1 dataset. Given a question input, the MobileBERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads.
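For a sense of what running one of these mobile benchmarks involves, here is a minimal TensorFlow Lite sketch of 224 x 224 image classification, the pattern MobileNetEdgeTPU follows. The model path is a placeholder, and real quantized models may expect uint8 input rather than float32:

```python
# Minimal TensorFlow Lite inference sketch for a 224x224 image classifier,
# illustrating the MobileNetEdgeTPU-style workload. The .tflite path is a
# placeholder; substitute a real exported classifier.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="classifier.tflite")  # placeholder
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A random tensor stands in for a real image; quantized models may need uint8.
image = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()

scores = interpreter.get_tensor(out["index"])[0]
print("top-1 class index:", int(np.argmax(scores)))
```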

“The MLPerf Mobile app is extremely flexible and can work on a wide variety of smartphone platforms, using different computational resources such as CPUs, GPUs, DSPs, and dedicated accelerators,” said Vijay Janapa Reddi of Harvard University, chair of the MLPerf Mobile working group, in the official results press release. The app comes with built-in support for TensorFlow Lite, providing CPU, GPU, and NNAPI (on Android) inference backends, and also supports alternative inference engines through vendor-specific SDKs.

MLPerf says its mobile application will be available for download on multiple operating systems in the near future, so that consumers across the world can measure the performance of their own smartphones. “We got all three of the major independent (mobile) SoC vendors. We built something we think is strong and hope to see it more widely used, drawing in some of the OEMs and additional SoC vendors,” said Kanter.

The datacenter and edge closed categories drew the lion’s share of submissions and are, perhaps, of most interest to the HPC and broader enterprise AI communities. It’s best to go directly to the results tables, which MLPerf has made available and which are easily searched.
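For readers who would rather slice the tables offline, a short pandas sketch like the one below works once the results are exported to CSV. The file name and column names here are illustrative assumptions, not MLPerf’s actual schema:

```python
# Hypothetical sketch of filtering exported MLPerf results with pandas.
# File name and column names are assumptions for illustration only.
import pandas as pd

df = pd.read_csv("mlperf_inference_v0.7_closed_datacenter.csv")  # placeholder

# e.g., top ResNet-50 results sorted by Offline-scenario throughput
resnet = df[df["Benchmark"] == "ResNet-50"]
print(resnet.sort_values("Offline_samples_per_s", ascending=False).head(10))
```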

Dell EMC, for example, had 16 different systems (PowerEdge and DSS) in various configurations using different accelerators and processors in the closed datacenter grouping. Its top performer on image classification (ImageNet) was a DSS 8440 system with 2 Intel Xeon Gold 6230 processors and 10 Quadro RTX 8000s. The three top performers on that particular test were: an Inspur system (NF5488A5) with 2 AMD Epyc 7742 CPUs and 8 Nvidia A100-SXM4 (NVLink) GPUs; an Nvidia DGX-A100, also with 2 AMD Epyc 7742 CPUs and 8 Nvidia A100-SXM4s; and a QCT system (D526) with 2 Intel Xeon Gold 6248 CPUs and 10 Nvidia A100-PCIe GPUs.

This is just one of the tests in the datacenter suite, and performance varies across the various tests (image classification, NLP, medical image analysis, etc.). Here’s a snapshot of a few results excerpted from MLPerf’s tables for the closed datacenter category (some data has been omitted).

As noted earlier, the results are best examined directly; they include information about software stacks, networks, etc., that permits more thorough assessment. MLPerf skipped inference v0.6, jumping directly from v0.5 to v0.7, to more closely align the release of training and inferencing results.

A 2019 paper (MLPerf Inference Benchmark), published roughly a year ago, details the thinking that went into forming the effort.

One interesting note is the growth of GPU use for AI activities generally. The hyperscalers have played a strong role in developing the technology (frameworks and hardware) and have been ramping up accelerator-based instance offerings to accommodate growing demand and to handle increasingly large and complex models.

In his pre-briefing Kharya said, “Since AWS launched our GPUs in 2010, 10 years ago to now, we have exceeded the aggregate amount of GPU compute in the cloud compared to all of the cloud CPUs.”

That’s a big claim. When pressed in Q&A, he confirmed that this estimate of AI inference compute capacity (ops) is based on all the CPUs shipped, not just those shipped for inference. “Yes, that is correct. That’s correct. All CPUs shipped, and all GPUs shipped. For precision, we’ve taken the best precision, meaning for Cascade Lake, Intel introduced integer eight, and so we’ve taken integer eight for CPUs; and similarly, we’ve taken the best precision, integer eight or FP16, depending upon the generation of our GPU architecture,” he said.
