The two-year-old AI benchmarking group MLPerf.org released its second set of inferencing results yesterday and again, as in the most recent MLPerf training results (July 2020), it was almost entirely The Nvidia Show, a point made clearest by the fact that 85 percent of the submissions used Nvidia accelerators. One wonders where the rest of the AI accelerator crowd is: Cerebras (CS-1), AMD (Radeon), Groq (Tensor Streaming Processor), SambaNova (Reconfigurable Dataflow Unit), Google (TPU), et al.
For the moment, Nvidia rules the MLPerf roost. It posted the top performances in the categories in which it participated, dominating the ‘closed’ datacenter and closed edge categories. MLPerf’s closed categories impose system/network restrictions intended to ensure apples-to-apples comparisons among participating systems. The ‘open’ versions of categories permit customization. Practically speaking, few of the non-Nvidia submissions were expected to outperform Nvidia’s phalanx of A100s, T4s, and Quadro RTXs.
Nvidia touted the results in a media briefing and subsequent blog by Paresh Kharya, senior director, product management, accelerated systems. The A100 GPU is up to 237X faster than CPU-based systems in the open datacenter category, reported Nvidia, and its Jetson AGX video analytics system and T4 chips also performed well in the power-sensitive edge category.
Broadly, CPU-only systems did less well, though interestingly, Intel had a submission in the notebook division using a GPU from its long-awaited Xe line; it was the only submission in that category.
Kharya declared, “The Nvidia A100 is 237 times faster than the Cooper Lake CPU. To put that into perspective, look at the chart on the right, a single DGX-A100 provides the same performance on recommendation systems as 1,000 CPU servers.” The competitive juices are always flowing, and given the lack of alternative accelerators represented, Nvidia can perhaps be forgiven for crowing in the moment.
Leaving aside Nvidia’s dominance, MLPerf continues improving its benchmark suite and process. It added several models, added new categories based on form factor, instituted randomized third-party audits of rules compliance, and attracted roughly double the number of submissions (23 versus 12) from its first inferencing run of a year ago.
Moreover, the head-to-head comparisons among participating systems makers – Dell EMC, Inspur, Fujitsu, Nettrix, Supermicro, QCT, Cisco, Atos – will make interesting reading (more below). It was also good to see that inferencing can be run effectively at the edge on other platforms such as the Raspberry Pi 4 and Firefly RK-3399, both using Arm technology (Cortex-A72).
“We’re pleased with the results and progress,” said David Kanter, executive director of MLCommons (organizer of MLPerf.org). “We have more benchmarks to cover more use case areas. I think we did a much better job of having good divisions between the different classes of systems. If you look at the first round of inference, we had smartphone chips, and then we had 300-watt monster chips, right, and it doesn’t really make sense to compare those things for the most part.” The latest inference suite – v0.7 – has the following divisions: datacenter (closed and open); edge (closed and open); mobile phones (closed and open); and mobile notebooks (closed and open).
On balance, observers were mildly disappointed but not surprised by the scarcity of young accelerator chip/system companies among the participants. Overall, the AI community still seems largely supportive of MLPerf and says it remains on track to become an important forum:
- Karl Freund of Moor Insights and Strategy said, “NVIDIA did great against a shallow field of competitors. Their A100 results were amazing, compared to the V100, demonstrating the value of their enhanced tensor core architecture. That being said, the competition is either too busy with early customer projects or their chips are just not yet ready. For example, SambaNova announced a new partnership with LLNL, and Intel Habana is still in the oven. If I were still at a chip startup, I would wait to run MLPerf (an expensive project) until I already had secured a few lighthouse customers. MLPerf is the right answer, but will remain largely irrelevant until players are farther along their life cycle.”
- Rick Stevens, associate director of Argonne National Laboratory, said, “I think the other companies are still quite early in optimizing their software and hardware stacks. At some point I would expect Intel and AMD GPUs to start showing up when they have gear in the field and software is tuned up. It takes a mature stack, mature hardware and an experienced team to do well on these benchmarks. Also the benchmarks need to track the research front of AI models and that takes effort as well. For the accelerator “startups” this is a huge amount of work and most of their teams are still small and focused on getting product up and out.”
Stevens also noted, “I should point out that many of the startups are trying to go for particular model types and scenarios somewhat orthogonal to competing with existing players, and MLPerf is more focused on mainstream models and may not represent these new directions very well. One idea might be to create an ‘unlimited’ division where new companies could demonstrate any results they want on any models.”
MLPerf has deliberately worked to stress real-world models, said Kanter, who added that the organization is actively looking at ways to attract more entrants from the burgeoning ranks of AI chip and systems makers.
Each MLPerf Inference benchmark is defined by a model, a dataset, a quality target, and a latency constraint. There are three benchmark suites in MLPerf Inference v0.7: one for datacenter systems, one for edge systems, and one for mobile systems. The datacenter suite targets systems designed for datacenter deployments. The edge suite targets systems deployed outside of datacenters. The suites share multiple benchmarks with different requirements.
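As described above, each benchmark pairs a model and dataset with a quality target and a latency constraint, and a submission counts only if it clears both bars. A minimal sketch of that structure in Python (the class, helper, and all numeric values here are illustrative assumptions, not MLPerf's actual definitions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceBenchmark:
    """One MLPerf-style inference benchmark definition (illustrative)."""
    model: str               # e.g. "BERT"
    dataset: str             # e.g. "SQuAD 1.1"
    quality_target: float    # minimum accuracy a submission must reach
    latency_bound_ms: float  # per-query latency constraint

def submission_is_valid(bench, accuracy, observed_latency_ms):
    """A run counts only if it meets both the quality and the latency bar."""
    return accuracy >= bench.quality_target and observed_latency_ms <= bench.latency_bound_ms

# Hypothetical targets, for illustration only
bert = InferenceBenchmark("BERT", "SQuAD 1.1", quality_target=0.90, latency_bound_ms=130.0)
print(submission_is_valid(bert, accuracy=0.91, observed_latency_ms=95.0))  # True
```

The point of the pairing is that neither raw speed nor raw accuracy alone wins: a fast run below the quality target, or an accurate run over the latency bound, is simply invalid.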
MLPerf Inference v0.7 suite includes four new benchmarks for data center and edge systems:
- BERT: Bidirectional Encoder Representations from Transformers (BERT), fine-tuned for question answering using the SQuAD 1.1 data set. Given a question input, the BERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads.
- DLRM: Deep Learning Recommendation Model (DLRM) is a personalization and recommendation model that is trained to optimize click-through rates (CTR). Common examples include recommendation for online shopping, search results, and social media content ranking.
- 3D U-Net: The 3D U-Net architecture is trained on the BraTS 2019 dataset for brain tumor segmentation. The network identifies whether each voxel within a 3D MRI scan belongs to healthy tissue or to a particular brain abnormality (i.e., GD-enhancing tumor, peritumoral edema, or necrotic and non-enhancing tumor core), and is representative of many medical imaging tasks.
- RNN-T: Recurrent Neural Network Transducer is an automatic speech recognition (ASR) model that is trained on a subset of LibriSpeech. Given a sequence of speech input, it predicts the corresponding text. RNN-T is representative of widely used speech-to-text systems.
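The latency constraints behind these benchmarks are judged at a tail percentile rather than on average, so one slow outlier among many fast queries can still pass. A toy sketch of such a check (the 99th-percentile choice mirrors MLPerf's server scenario; the sample latencies and bound are made up for illustration):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest value >= pct percent of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def meets_tail_latency(latencies_ms, bound_ms, pct=99.0):
    """True if the pct-th percentile latency is within the benchmark bound."""
    return percentile(latencies_ms, pct) <= bound_ms

# 100 queries: 98 fast ones, one borderline, one slow outlier
lat = [5.0] * 98 + [9.0, 50.0]
print(meets_tail_latency(lat, bound_ms=10.0))  # p99 = 9.0 ms, so True
```

Measuring at p99 rather than the mean is what keeps a submission honest about worst-case responsiveness: the 50 ms outlier above would drag an average-based check in the wrong direction while telling users little about typical service quality.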
The latest inference round introduces MLPerf Mobile, “the first open and transparent set of benchmarks for mobile machine learning. MLPerf Mobile targets client systems with well-defined and relatively homogeneous form factors and characteristics such as smartphones, tablets, and notebooks. The MLPerf Mobile working group, led by Arm, Google, Intel, MediaTek, Qualcomm, and Samsung Electronics, selected four new neural networks for benchmarking and developed a smartphone application.”
The four new mobile benchmarks are available in the TensorFlow, TensorFlow Lite, and ONNX formats, and include:
- MobileNetEdgeTPU: This is an image classification benchmark; image classification is considered the most ubiquitous task in computer vision. The model deploys the MobileNetEdgeTPU feature extractor, which is optimized with neural architecture search for low latency and high accuracy when deployed on mobile AI accelerators. It classifies input images with 224 x 224 resolution into 1000 different categories.
- SSD-MobileNetV2: Single Shot multibox Detection (SSD) with MobileNetv2 feature extractor is an object detection model trained to detect 80 different object categories in input frames with 300×300 resolution. This network is commonly used to identify and track people/objects for photography and live videos.
- DeepLabv3+ MobileNetV2: This is an image semantic segmentation benchmark. This model is a convolutional neural network that deploys MobileNetV2 as the feature extractor, and uses the Deeplabv3+ decoder for pixel-level labeling of 31 different classes in input frames with 512 x 512 resolution. This task can be deployed for scene understanding and many computational photography applications.
- MobileBERT: The MobileBERT model is a mobile-optimized variant of the larger BERT model that is fine-tuned for question answering using the SQuAD 1.1 data set. Given a question input, the MobileBERT language model predicts and generates an answer. This task is representative of a broad class of natural language processing workloads.
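The classification task above ultimately reduces to picking one winner among 1000 class scores. A minimal sketch of that final step in pure Python, with made-up logits standing in for a real model's output:

```python
import math

def softmax(logits):
    """Convert raw class scores to probabilities (shift by max for numerical stability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top1(logits):
    """Index of the highest-probability class, as a classifier would report it."""
    probs = softmax(logits)
    return max(range(len(probs)), key=probs.__getitem__)

# 1000 classes, all near zero except one peaked class (illustrative values)
logits = [0.0] * 1000
logits[42] = 8.0
print(top1(logits))  # 42
```

Benchmark accuracy for these image tasks is typically just the fraction of test images whose top-1 (or top-5) prediction matches the label, which is why the quality target can be stated as a single number.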
“The MLPerf Mobile app is extremely flexible and can work on a wide variety of smartphone platforms, using different computational resources such as CPU, GPUs, DSPs, and dedicated accelerators,” said Vijay Janapa Reddi from Harvard University and chair of the MLPerf Mobile working group in the official results press release. The app comes with built-in support for TensorFlow Lite, providing CPU, GPU, and NNAPI (on Android) inference backends, and also supports alternative inference engines through vendor-specific SDKs.
MLPerf says its mobile application will be available for download on multiple operating systems in the near future, so that consumers across the world can measure the performance of their own smartphones. “We got all three of the major independent (mobile) SOC vendors. We built something we think is strong and hope to see it more widely used, drawing some of the OEMs and additional SOC vendors,” said Kanter.
The datacenter and edge closed categories drew the lion’s share of submissions and are, perhaps, of most interest to the HPC and broader enterprise AI communities. It’s best to go directly to the results tables, which MLPerf has made available and which are easily searched.
Dell EMC, for example, had 16 different systems (PowerEdge and DSS) in various configurations using different accelerators and processors in the closed datacenter grouping. Its top performer on image classification (ImageNet) was a DSS 8440 system with 2 Intel 6230 Xeon Gold processors and 10 Quadro RTX 8000s. The three top performers on that particular test were: an Inspur system (NF5488A5) with 2 AMD Epyc 7742 CPUs and 8 Nvidia A100-SXM4 (NVLink) GPUs; an Nvidia DGX-A100, also with 2 AMD Epyc 7742 CPUs and 8 Nvidia A100-SXM4s; and a QCT system (D526) with 2 Intel Xeon Gold 6248 CPUs and 10 Nvidia A100-PCIe GPUs.
This is just one of the tests in the datacenter suite. Performance varies across the various tests (image classification, NLP, medical image analysis, etc.). Here’s a snapshot of a very few results excerpted from MLPerf’s tables for the closed datacenter category (some data has been omitted).
As noted earlier, the results are best examined directly; they include information about stacks, networks, etc., that permits more thorough assessment. MLPerf skipped v0.6 of the inference suite, jumping from v0.5 to v0.7, to more closely align the release of training and inferencing results.
There’s a 2019 paper (MLPerf Benchmark), published roughly a year ago, which details the thinking that went into forming the effort.
One interesting note is the growth of GPU use for AI activities generally. The hyperscalers have played a strong role in developing the technology (frameworks and hardware) and have been ramping up accelerator-based instance offerings to accommodate growing demand and to handle increasingly large and complex models.
In his pre-briefing Kharya said, “Since AWS launched our GPUs in 2010, 10 years ago to now, we have exceeded the aggregate amount of GPU compute in the cloud compared to all of the cloud CPUs.”
That’s a big claim. When pressed in Q&A he confirmed this estimate of AI compute inference capacity (ops) is based on all the CPUs shipped, not just those shipped for inference. “Yes, that is correct. That’s correct. All CPUs shipped, and all GPUs shipped. For precision, we’ve taken the best precision, meaning for Cascade Lake, Intel introduced the integer eight, and so we’ve taken integer eight for CPUs, and similarly, we’ve taken the best precision, integer eight or FP16, depending upon the generation of our GPU architecture,” he said.