MLPerf today launched its benchmark suite for inferencing, v0.5, which joins the MLPerf training suite launched a little over a year ago. The long-anticipated inferencing suite covers models applicable to a “wide range of applications including autonomous driving and natural language processing, on a variety of form factors, including smartphones, PCs, edge servers, and cloud computing platforms in the data center,” according to MLPerf.
Efforts to deliver benchmarking tools for machine learning and AI writ large have mushroomed in the past couple of years, mirroring the rapid adoption of AI technologies. MLPerf’s training benchmark suite was launched in May 2018 and followed by the release of its first public results in December. Nvidia did well in that round and widely promoted its performance (see HPCwire article, Nvidia Leads Alpha MLPerf Benchmarking Round).
MLPerf, now supported by roughly 40 companies and industry researchers, has shown remarkable traction. Dell EMC gave an excellent presentation at GTC19, Demystifying Deep Learning Infrastructure Choices Using MLPerf Benchmark Suite, which HPCwire covered. Here’s an excerpt from that coverage:
“There is no one server that does the job perfectly well,” said Ramesh Radhakrishnan, distinguished engineer, Dell EMC, to a packed session at GTC last month. “You see a variety of servers used to execute these kinds of workloads.” Precisely to this point, a flurry of benchmarking tools is emerging to help make sense of ML/DL performance requirements and optimizations. There’s Deep500, which has grand ambitions but is still nascent and aimed mostly at very large-scale systems. There are early movers – DeepBench, TF_CNN_Bench, and DAWNBench, for example – with typically narrower strengths and notable shortfalls. More recently, MLPerf has emerged as a popular tool that borrows from those that came before it.
Training is typically the most compute-intensive piece of developing and deploying AI technologies. However, inferencing technology is likely to be a much bigger piece of the pie in terms of volume of systems (and, of course, chips) deployed.
As explained by MLPerf, “By measuring inference, this benchmark suite will give valuable information on how quickly a trained neural network can process new data to provide useful insights. Previously, MLPerf released the companion Training v0.5 benchmark suite leading to 29 different results measuring the performance of cutting-edge systems for training deep neural networks.”
MLPerf Inference v0.5 consists of five benchmarks, focused on three common ML tasks:
- Image Classification – predicting a “label” for a given image from the ImageNet dataset, such as identifying items in a photo.
- Object Detection – picking out an object, using a bounding box, within an image from the MS-COCO dataset; commonly used in robotics, automation, and automotive applications.
- Machine Translation – translating sentences between English and German using the WMT English-German benchmark, similar to the auto-translate features in widely used chat and email applications.
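To make the image-classification task concrete, here is a minimal sketch of the kind of inference these benchmarks measure, using PyTorch and a pretrained ResNet-50 (a ResNet-50 variant is among the suite’s classification reference models). This is an illustration, not MLPerf’s reference implementation; the input file name is a placeholder.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained ImageNet classifier and switch to inference mode.
model = models.resnet50(pretrained=True)
model.eval()

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg")        # placeholder input image
batch = preprocess(image).unsqueeze(0)   # add a batch dimension

with torch.no_grad():                    # inference only, no gradients
    logits = model(batch)

label_id = logits.argmax(dim=1).item()   # predicted ImageNet class index
print(label_id)
```

An inference benchmark times exactly this forward pass (plus any required preprocessing), repeated under a prescribed load pattern, which is where the scenarios described below come in.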
MLPerf singled out several contributors – Arm, Cadence, Centaur Technology, Dividiti, Facebook, General Motors, Google, Habana Labs, Harvard University, Intel, MediaTek, Microsoft, Myrtle, Nvidia, Real World Insights, University of Illinois at Urbana-Champaign, University of Toronto, and Xilinx – for their efforts in developing the inferencing suite.
“The new MLPerf inference benchmarks will accelerate the development of hardware and software to unlock the full potential of ML applications. They will also stimulate innovation within the academic and research communities. By creating common and relevant metrics to assess new machine learning software frameworks, hardware accelerators, and cloud and edge computing platforms in real-life situations, these benchmarks will establish a level playing field that even the smallest companies can use,” according to the organization.
Here is the description of scenarios and metrics from the MLPerf website: “In order to enable representative testing of a wide variety of inference platforms and use cases, MLPerf has defined four different scenarios as described below. A given scenario is evaluated by the LoadGen generating inference requests in a particular pattern and measuring a specific metric.
- Single-stream: Evaluates real-world scenarios such as a smartphone user taking a picture. For the test run, LoadGen sends an initial query, then continually sends the next query as soon as the previous query is processed. The metric is the 90th percentile latency (the latency such that 90% of queries complete at least that fast).
- Multi-stream: Evaluates a real-world scenario such as a multi-camera automotive system that detects obstacles. The LoadGen uses multiple test runs to determine the maximum number of streams the system can support while meeting the latency constraint. The metric is the number of streams supported.
- Server: Evaluates a real-world scenario such as a server in a datacenter that is servicing online requests. The LoadGen uses multiple test runs to determine the maximum throughput, in queries per second (QPS), the system can support while meeting the latency constraint 90% of the time. The metric is QPS.
- Offline: Evaluates real-world scenarios such as a batch processing system. For the test run, LoadGen sends all queries at once. The metric is throughput.”
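As a rough illustration of the single-stream pattern described above (not the actual LoadGen implementation), the sketch below issues queries back-to-back against a stand-in model and reports the 90th-percentile latency. The `run_inference` function, its simulated timing, and the query count are all hypothetical placeholders.

```python
import time
import random

def run_inference(query):
    """Stand-in for the system under test; replace with a real model call."""
    time.sleep(random.uniform(0.005, 0.020))  # simulated 5-20 ms inference

def single_stream(num_queries=1000):
    """Single-stream pattern: send the next query only after the previous
    one completes, then report the 90th-percentile latency."""
    latencies = []
    for query in range(num_queries):
        start = time.perf_counter()
        run_inference(query)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    # 90th percentile: the latency that 90% of queries complete within.
    idx = min(int(0.9 * len(latencies)), len(latencies) - 1)
    return latencies[idx]

if __name__ == "__main__":
    p90 = single_stream()
    print(f"90th percentile latency: {p90 * 1000:.1f} ms")
```

The other scenarios vary only the load pattern around the same timed inference call: multi-stream and server search for the highest stream count or QPS that still meets a latency bound, while offline simply submits every query up front and measures aggregate throughput.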
With the new benchmark suite released, organizations can submit results that demonstrate the benefits of their ML systems on these benchmarks. Interested organizations should contact [email protected]. It took roughly seven months for the first set of training results to be released, which suggests the first inferencing results might appear around the end of this year or early next year.
Link to MLPerf release: https://mlperf.org/press#mlperf-inference-launched