Nvidia Dominates (Again) Latest MLPerf Inference Results

By John Russell

October 22, 2020

The two-year-old AI benchmarking group MLPerf.org released its second set of inferencing results yesterday, and again, as in the most recent MLPerf training results (July 2020), it was almost entirely The Nvidia Show, a point made clearest by the fact that 85 percent of the submissions used Nvidia accelerators. One wonders where the rest of the AI accelerator crowd is: Cerebras (CS-1), AMD (Radeon), Groq (Tensor Streaming Processor), SambaNova (Reconfigurable Dataflow Unit), Google (TPU), et al.

For the moment, Nvidia rules the MLPerf roost. It posted the top performances in the categories in which it participated, dominating the ‘closed’ datacenter and closed edge categories. MLPerf’s closed categories impose system/network restrictions intended to ensure apples-to-apples comparisons among participating systems. The ‘open’ versions of the categories permit customization. Practically speaking, few of the non-Nvidia submissions were expected to outperform Nvidia’s phalanx of A100s, T4s, and Quadro RTXs.

Nvidia touted the results in a media briefing and subsequent blog by Paresh Kharya, senior director, product management, accelerated systems. The A100 GPU is up to 237X faster than CPU-based systems in the open datacenter category, reported Nvidia, and its Jetson AGX video analytics system and T4 chips also performed well in the power-sensitive edge category.

Broadly, CPU-only systems did less well, though interestingly, Intel had a submission in the notebook division using a GPU from its long-awaited Xe line; it was the only submission in that category.

Kharya declared, “The Nvidia A100 is 237 times faster than the Cooper Lake CPU. To put that into perspective, look at the chart on the right: a single DGX-A100 provides the same performance on recommendation systems as 1,000 CPU servers.” The competitive juices are always flowing, and given the lack of alternative accelerators represented, Nvidia can perhaps be forgiven for crowing in the moment.

Leaving aside Nvidia’s dominance, MLPerf continues improving its benchmark suite and process. It added several models, added new categories based on form factor, instituted randomized third-party audits of rules compliance, and attracted roughly double the number of submissions (23 versus 12) from its first inferencing run of a year ago.

Moreover, the head-to-head comparisons among participating systems makers – Dell EMC, Inspur, Fujitsu, Nettrix, Supermicro, QCT, Cisco, Atos – will make interesting reading (more below). It was also good to see that you can run inferencing effectively at the edge on platforms such as the Raspberry Pi 4 and Firefly RK-3399, both based on Arm’s Cortex-A72.

“We’re pleased with the results and progress,” said David Kanter, executive director of MLCommons (organizer of MLPerf.org). “We have more benchmarks to cover more use case areas. I think we did a much better job of having good divisions between the different classes of systems. If you look at the first round of inference, we had smartphone chips, and then we had 300-watt monster chips, right, and it doesn’t really make sense to compare those things for the most part.” The latest inference suite – v0.7 – has the following divisions: datacenter (closed and open); edge (closed and open); mobile phones (closed and open); and notebooks (closed and open).

On balance, observers were mildly disappointed but not surprised by how few young accelerator chip/system companies participated. Overall, the AI community still seems largely supportive of MLPerf and says it remains on track to become an important forum:

  • Karl Freund of Moor Insights and Strategy said, “NVIDIA did great against a shallow field of competitors. Their A100 results were amazing, compared to the V100, demonstrating the value of their enhanced tensor core architecture. That being said, the competition is either too busy with early customer projects or their chips are just not yet ready. For example, SambaNova announced a new partnership with LLNL, and Intel Habana is still in the oven. If I were still at a chip startup, I would wait to run MLPerf (an expensive project) until I already had secured a few lighthouse customers. MLPerf is the right answer, but will remain largely irrelevant until players are farther along their life cycle.”
  • Rick Stevens, associate director of Argonne National Laboratory, said, “I think the other companies are still quite early in optimizing their software and hardware stacks. At some point I would expect Intel and AMD GPUs to start showing up when they have gear in the field and software is tuned up. It takes a mature stack, mature hardware and an experienced team to do well on these benchmarks. Also the benchmarks need to track the research front of AI models and that takes effort as well. For the accelerator ‘startups’ this is a huge amount of work and most of their teams are still small and focused on getting product up and out.”

Stevens also noted, “I should point out that many of the startups are trying to go for particular model types and scenarios somewhat orthogonal to competing with existing players, and MLPerf is more focused on mainstream models and may not represent these new directions very well. One idea might be to create an ‘unlimited’ division where new companies could demonstrate any results they want on any models.”

MLPerf has deliberately worked to stress real-world models, said Kanter, who added that the organization is actively looking at ways to attract more entrants from the burgeoning ranks of AI chip and systems makers.

Each MLPerf Inference benchmark is defined by a model, a dataset, a quality target, and a latency constraint. There are three benchmark suites in MLPerf Inference v0.7: one for datacenter systems, one for edge systems, and one for mobile systems. The datacenter suite targets systems designed for datacenter deployments. The edge suite targets systems deployed outside of datacenters. The suites share multiple benchmarks with different requirements.
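For readers who want a concrete picture of those four elements, here is a minimal sketch (not MLPerf’s actual harness, which is built around its LoadGen tool) of a benchmark definition, with the BERT entry filled in from the descriptions below; the quality target and latency values shown are assumptions for illustration only:

```python
from dataclasses import dataclass

# Illustrative only -- not MLPerf's reference implementation.
@dataclass
class InferenceBenchmark:
    model: str               # the reference network
    dataset: str             # the evaluation dataset
    quality_target: str      # accuracy the submission must meet
    latency_constraint: str  # per-scenario latency bound

bert_qa = InferenceBenchmark(
    model="BERT fine-tuned for question answering",
    dataset="SQuAD 1.1",
    quality_target="fixed fraction of reference accuracy (assumed here)",
    latency_constraint="scenario-dependent tail latency (assumed here)",
)
```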

The MLPerf Inference v0.7 suite includes four new benchmarks for datacenter and edge systems (a brief illustrative sketch of the BERT task follows this list):

  • BERT: Bidirectional Encoder Representations from Transformers (BERT) fine-tuned for question answering using the SQuAD 1.1 data set. Given a question input, the BERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads.
  • DLRM: Deep Learning Recommendation Model (DLRM) is a personalization and recommendation model that is trained to optimize click-through rates (CTR). Common examples include recommendation for online shopping, search results, and social media content ranking.
  • 3D U-Net: The 3D U-Net architecture is trained on the BraTS 2019 dataset for brain tumor segmentation. The network identifies whether each voxel within a 3D MRI scan belongs to healthy tissue or to a particular brain abnormality (i.e., GD-enhancing tumor, peritumoral edema, or necrotic and non-enhancing tumor core), and is representative of many medical imaging tasks.
  • RNN-T: Recurrent Neural Network Transducer is an automatic speech recognition (ASR) model that is trained on a subset of LibriSpeech. Given a sequence of speech input, it predicts the corresponding text. RNN-T is representative of widely used speech-to-text systems.
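To make the BERT task concrete, here is a minimal question-answering sketch using the open-source Hugging Face transformers library and a public SQuAD-tuned checkpoint; this illustrates the workload, not MLPerf’s reference implementation, and the question/context strings are invented for the example:

```python
# Illustrative only: BERT-style extractive question answering on a
# SQuAD-like input, via Hugging Face transformers (not the MLPerf harness).
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

result = qa(
    question="What does MLPerf measure?",
    context="MLPerf is a benchmark suite that measures how fast systems can "
            "run machine learning models for training and inference.",
)
print(result["answer"], result["score"])  # extracted answer span and confidence
```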

The latest inference round introduces MLPerf Mobile, “the first open and transparent set of benchmarks for mobile machine learning. MLPerf Mobile targets client systems with well-defined and relatively homogeneous form factors and characteristics such as smartphones, tablets, and notebooks. The MLPerf Mobile working group, led by Arm, Google, Intel, MediaTek, Qualcomm, and Samsung Electronics, selected four new neural networks for benchmarking and developed a smartphone application.”

The four new mobile benchmarks are available in the TensorFlow, TensorFlow Lite, and ONNX formats (a bare-bones TensorFlow Lite sketch follows the list), and include:

  • MobileNetEdgeTPU: This is an image classification benchmark; image classification is the most ubiquitous task in computer vision. The model deploys the MobileNetEdgeTPU feature extractor, which is optimized with neural architecture search for low latency and high accuracy when deployed on mobile AI accelerators. It classifies input images with 224 x 224 resolution into 1,000 different categories.
  • SSD-MobileNetV2: Single Shot multibox Detection (SSD) with MobileNetv2 feature extractor is an object detection model trained to detect 80 different object categories in input frames with 300×300 resolution. This network is commonly used to identify and track people/objects for photography and live videos.
  • DeepLabv3+ MobileNetV2: This is an image semantic segmentation benchmark. This model is a convolutional neural network that deploys MobileNetV2 as the feature extractor, and uses the Deeplabv3+ decoder for pixel-level labeling of 31 different classes in input frames with 512 x 512 resolution. This task can be deployed for scene understanding and many computational photography applications.
  • MobileBERT: The MobileBERT model is a mobile-optimized variant of the larger BERT model that is fine-tuned for question answering using the SQuAD 1.1 data set. Given a question input, the MobileBERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads.
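Since the mobile benchmarks ship in TensorFlow Lite format, a bare-bones classification run outside the MLPerf app might look like the sketch below. The .tflite filename is hypothetical and the 224 x 224 input shape is keyed to the MobileNetEdgeTPU description above; a real run would feed preprocessed ImageNet images rather than random data:

```python
# Illustrative only: minimal TensorFlow Lite inference, not the MLPerf
# Mobile measurement harness. Model file name is hypothetical.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_edgetpu.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in image
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()

probs = interpreter.get_tensor(out["index"])[0]
print("top-1 class index:", int(np.argmax(probs)))
```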

“The MLPerf Mobile app is extremely flexible and can work on a wide variety of smartphone platforms, using different computational resources such as CPUs, GPUs, DSPs, and dedicated accelerators,” said Vijay Janapa Reddi of Harvard University, chair of the MLPerf Mobile working group, in the official results press release. The app comes with built-in support for TensorFlow Lite, providing CPU, GPU, and NNAPI (on Android) inference backends, and also supports alternative inference engines through vendor-specific SDKs.

MLPerf says its mobile application will be available for download on multiple operating systems in the near future, so that consumers across the world can measure the performance of their own smartphones. “We got all three of the major independent (mobile) SOC vendors. We built something we think is strong and hope to see it more widely used, drawing some of the OEMs and additional SOC vendors,” said Kanter.

The datacenter and edge closed categories drew the lion’s share of submissions and are, perhaps, of most interest to the HPC and broader enterprise AI communities. It’s best to go directly to the results tables, which MLPerf has made available and easily searchable.

Dell EMC, for example, had 16 different systems (PowerEdge and DSS) in various configurations using different accelerators and processors in the closed datacenter grouping. Its top performer on image classification (ImageNet) was a DSS 8440 system with two Intel Xeon Gold 6230 processors and 10 Nvidia Quadro RTX 8000 GPUs. The three top performers overall on that particular test were: an Inspur system (NF5488A5) with two AMD Epyc 7742 CPUs and eight Nvidia A100-SXM4 (NVLink) GPUs; an Nvidia DGX-A100, also with two AMD Epyc 7742 CPUs and eight A100-SXM4s; and a QCT system (D526) with two Intel Xeon Gold 6248 CPUs and 10 Nvidia A100-PCIe GPUs.

This is just one of the tests in the datacenter suite, and performance varies across the various tests (image classification, NLP, medical image analysis, etc.). Here’s a snapshot of a few results excerpted from MLPerf’s tables for the closed datacenter category (some data has been omitted).

As noted earlier, the results are best examined directly; they include information about software stacks, networks, etc., that permits more thorough assessment. MLPerf skipped inference v0.6 entirely, jumping straight from v0.5 to v0.7, to more closely align the release of training and inferencing results.

A 2019 paper (MLPerf Inference Benchmark), published roughly a year ago, details the thinking that went into forming the effort.

One interesting note is the growth of GPU use for AI activities generally. The hyperscalers have played a strong role in developing the technology (frameworks and hardware) and have been ramping up accelerator-based instance offerings to accommodate growing demand and to handle increasingly large and complex models.

In his pre-briefing Kharya said, “Since AWS launched our GPUs in 2010, 10 years ago to now, we have exceeded the aggregate amount of GPU compute in the cloud compared to all of the cloud CPUs.”

That’s a big claim. When pressed in Q&A he confirmed this estimate of AI inference compute capacity (ops) is based on all the CPUs shipped, not just those shipped for inference. “Yes, that is correct. That’s correct. All CPUs shipped, and all GPUs shipped. For precision, we’ve taken the best precision, meaning for Cascade Lake, Intel introduced integer eight, and so we’ve taken integer eight for CPUs, and similarly, we’ve taken the best precision, integer eight or FP16, depending upon the generation of our GPU architecture,” he said.
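The arithmetic behind such an aggregate-capacity claim is simple enough to sketch. The numbers below are entirely hypothetical stand-ins, not Nvidia’s or Intel’s figures; they serve only to show the shipments-times-best-precision-throughput accounting Kharya describes:

```python
# Toy accounting of aggregate inference capacity; hypothetical numbers throughout.
cpu_units, cpu_int8_tops = 10_000_000, 1.0   # assumed CPUs shipped, INT8 TOPS each
gpu_units, gpu_best_tops = 500_000, 100.0    # assumed GPUs shipped, INT8/FP16 TOPS each

cpu_aggregate = cpu_units * cpu_int8_tops    # total CPU inference throughput
gpu_aggregate = gpu_units * gpu_best_tops    # total GPU inference throughput
print(f"CPU pool: {cpu_aggregate:,.0f} TOPS; GPU pool: {gpu_aggregate:,.0f} TOPS")
# With these made-up inputs the GPU pool is 5x the CPU pool; the real claim
# rests on actual shipment and peak-throughput data.
```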
