MLPerf Debuts HPC Training Benchmark with Small but Impressive List of Participants

By John Russell

November 18, 2020

The AI benchmarking organization MLPerf.org dipped a toe into HPC-centric waters today with the release of results from its first HPC training run – MLPerf HPC Training v0.7. The new suite, which includes the CosmoFlow and DeepCAM models, is intended to measure machine learning performance on large-scale high performance computing systems.

While the number of first-round entries was small, they represented impressive HPC systems – Fugaku (RIKEN), top performer on the last two Top500 lists; Piz Daint (CSCS); Cori (Lawrence Berkeley National Laboratory); Frontera (Texas Advanced Computing Center); AI Bridging Cloud Infrastructure (ABCI, Fujitsu); and the HAL cluster (National Center for Supercomputing Applications).

There are a few significant rule changes from MLPerf’s ‘standard’ training benchmarks for the HPC exercise. Time-to-solution is still the metric of merit, but MLPerf has made an effort to capture bottlenecks beyond (mostly) processor performance. The size and complexity of the participating HPC systems make it necessary to dig into the details of the results for a proper performance evaluation. That said, among the more interesting pieces of today’s MLPerf announcement are the statements provided by the participants on their MLPerf work (see below). It’s interesting, for example, to see how batch size was a factor.

Steven Farrell, an engineer at NERSC and a member of the MLPerf team spearheading development of the HPC benchmarking effort, said, “The rules for the HPC differ a little bit from MLPerf training. For example, any data movement as part of the preprocessing from a general parallel file system to local storage or I/O accelerator type system [must] be included in the benchmark reported time. And actually we captured the time that’s spent in this staging process.”

“At the end of the day, the way the MLPerf HPC results are presented doesn’t really give you a strict ranking, like number one, number two, number three. You kind of have to parse the results with some sense of the scale of the system used [to make a judgement],” said Farrell. MLPerf provides easy access to the data for slicing and dicing.

The top performer on CosmoFlow in the closed division was ABCI at 34.2 minutes; its closest rival was Fugaku at 101.49 minutes. ABCI was also the top performer in the open division at 13.21 minutes. Closed division regulations are more restrictive while the open division permits more flexibility in how one runs the benchmark.

David Kanter, executive director of MLPerf, noted, “HPC training is what I think of as a supercomputer and site specific benchmark. Whereas MLPerf training tends to be more vendor specific. There is a lot of analysis that shows that the interconnect matters tremendously on training for HPC and I sincerely hope that that aspect is reflected in the benchmark.”

Per MLPerf’s long-term preference, the HPC training suite uses real-world applications. The first two benchmarks measure the time it takes to train emerging scientific machine learning models to a standard quality target in tasks relevant to climate analytics and cosmology. Both benchmarks make use of large scientific simulations to generate training data:

  • CosmoFlow: A 3D convolutional architecture trained on N-body cosmological simulation data to predict four cosmological parameter targets.
  • DeepCAM: A convolutional encoder-decoder segmentation architecture trained on CAM5+TECA climate simulation data to identify extreme weather phenomena such as atmospheric rivers and tropical cyclones.

The models and data used by the HPC suite differ from the canonical MLPerf training benchmarks in significant ways. For instance, CosmoFlow is trained on volumetric (3D) data, rather than the 2D data commonly employed in training image classifiers.

Similarly, DeepCAM is trained on images with 768 x 1152 pixels and 16 channels, which is substantially larger than standard vision datasets like ImageNet. Both benchmarks have massive datasets – 8.8 TB in the case of DeepCAM and 5.1 TB for CosmoFlow – introducing significant I/O challenges that expose storage and interconnect performance.

More generally, MLPerf HPC v0.7 follows MLPerf Training v0.7 rules. One exception, as noted earlier, is the effort to capture the complexity of large-scale data movement experienced in HPC systems; all data staging from parallel file systems into accelerated and/or on-node storage systems must be included in the measured runtime.
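As a rough illustration of that staging rule, the copy from the parallel file system into node-local storage can itself be timed and folded into the reported total. The sketch below is a toy stand-in (the function name and paths are hypothetical, not part of the MLPerf harness):

```python
import shutil
import time

def stage_dataset(src_dir: str, dst_dir: str) -> float:
    """Copy a dataset from a parallel file system (e.g. a Lustre mount)
    to node-local storage, returning the staging time in seconds so it
    can be included in the benchmark's measured runtime."""
    start = time.perf_counter()
    shutil.copytree(src_dir, dst_dir, dirs_exist_ok=True)
    return time.perf_counter() - start
```

In a real submission this staging step would run once per job across many nodes, and its elapsed time is reported alongside the training time rather than hidden in setup.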

Notably, only two participants submitted results for the DeepCAM benchmark, and although the total number of participants in this first MLPerf HPC Training exercise was small, they did “showcase the state-of-the-art capabilities of supercomputers for training large scale scientific problems, utilizing data-parallel and model-parallel training techniques on thousands to tens of thousands of processors,” said Farrell.

It was interesting that Nvidia did not participate with its Selene supercomputer. Kanter said the COVID-19 pandemic was likely an issue in keeping participation down, as so many HPC systems have been pressed into service for COVID-related research. MLPerf has high hopes for the new HPC metric but recognizes establishing it may take time. Plans call for adding models and perhaps following a twice-yearly cadence (perhaps around ISC and SC), though that is uncertain.

Kanter said, “Part of this is sort of based on demand. Prior to the advent of HPC training we had been approached by a couple of supercomputing centers that were interested in using MLPerf training for bids qualification and acceptance. I think at this point, over a billion dollars of bids have used MLPerf components in the bidding process. Hopefully that’ll be more going forward.

“Frankly one of the things we see as value that we can provide to the industry is sort of aligning sales, marketing, engineering, making sure that people are using the right metrics.”

CSCS

The Swiss National Supercomputing Centre (CSCS) participated in the first MLPerf HPC Training round as part of our benchmarking initiative to identify the needs of future systems to support ML workflows in science. We focused on two data-parallel submissions with CosmoFlow on Piz Daint with 128 and 256 GPUs, one GPU per node. By using Sarus, a container engine with near-native performance for Docker-compatible containers, we were able to rapidly test and tune fine-grained communication for distributed training with Horovod and NCCL for near optimal weak scaling in the range of 100-1000 nodes.

Curiously, execution time per epoch scaled 12% faster than ideal from 128 to 256 GPUs. This scaling is a result of being able to cache the data set in RAM with 256 GPUs, whereas with 128 GPUs parallel filesystem I/O becomes an overhead. This overhead could be alleviated using near-compute storage. Algorithmically, our submission demonstrates the limits of CosmoFlow’s data-parallel scalability under closed division rules. Specifically, the number of epochs to converge scales up by about 1.6X as the system scales from 128 to 256 GPUs, while scaling from 32 to 128 GPUs only increases the epoch count by about 1.3X. Additionally, the standard deviation increases by 7X, making the model harder to train. In summary, we have identified fine-grained communication together with the addition of near-compute storage as key optimizations for ML on HPC systems, and CSCS will continue working on alternative parallelization strategies to overcome the data-parallel scalability challenge found in this round.
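The arithmetic behind “faster than ideal” is easy to check: doubling the GPU count should at best halve the per-epoch time, and super-linear scaling means it more than halves. A toy sketch (the epoch times below are made up for illustration, not CSCS’s measurements):

```python
def scaling_efficiency(t_small: float, t_large: float, factor: int = 2) -> float:
    """Ratio of achieved speedup to ideal speedup when scaling the GPU
    count by `factor`. Values above 1.0 indicate super-linear scaling,
    e.g. from the dataset fitting entirely in RAM at the larger scale."""
    return (t_small / t_large) / factor

# Hypothetical numbers: 100 s/epoch on 128 GPUs, 44.6 s/epoch on 256 GPUs
# gives (100 / 44.6) / 2, roughly 1.12, i.e. ~12% faster than ideal.
print(scaling_efficiency(100.0, 44.6))
```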

Fujitsu

AI Bridging Cloud Infrastructure (ABCI) is the world’s first large-scale Open AI Computing Infrastructure, constructed and operated by the National Institute of Advanced Industrial Science and Technology (AIST) [1]. The ABCI system is powered by 2,176 Intel Xeon Scalable processors (Skylake-SP), 4,352 NVIDIA Tesla V100 GPUs, and dual-rail InfiniBand EDR interconnects. Fujitsu, in collaboration with AIST and Fujitsu Laboratories, submitted CosmoFlow and DeepCAM results. For CosmoFlow, 128 nodes (512 GPUs) were used for the closed division and 512 nodes (2,048 GPUs) for the open division. The dataset was reformatted to tar.xz files to reduce data staging time, and the following performance optimizations were applied to improve training throughput: (1) improve data loader throughput using the NVIDIA Data Loading Library (DALI), (2) apply mixed-precision training, (3) increase the validation batch size. For the open division, the following accuracy-improvement techniques were also applied: (1) use a linear learning rate decay scheduler, (2) apply data augmentation, (3) disable dropout layers. These techniques enabled increasing the batch size from 512 to 2,048 and reduced run time by 2.61x.
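Fujitsu’s data-loader optimization used NVIDIA DALI; the underlying idea is to overlap preprocessing and I/O with device compute so the GPUs are never starved. That idea can be sketched with the Python standard library alone (a toy stand-in for illustration, not DALI’s API):

```python
import queue
import threading

def prefetching_loader(batches, depth: int = 2):
    """Yield batches while a background thread keeps up to `depth` of
    them queued, so preprocessing/I/O overlaps with the consumer's
    compute instead of alternating with it."""
    q: queue.Queue = queue.Queue(maxsize=depth)
    sentinel = object()

    def producer():
        for b in batches:
            q.put(b)  # blocks when the queue is full (backpressure)
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not sentinel:
        yield item
```

DALI goes further by moving decoding and augmentation onto the GPU itself, but the pipelining principle is the same.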

For DeepCAM, 256 nodes (1,024 GPUs) were used for both the closed and open divisions. The dataset was reformatted to tar files to reduce data staging time, distributed data shuffling was applied among the GPUs within each node, and hyperparameters, including warmup steps, were tuned to reduce the number of epochs to convergence. For the open division, the Gradient Skipping (GradSkip) technique, one of the Content-Aware Computing (CAC) techniques developed by Fujitsu Laboratories, was also applied. GradSkip avoids updating the weights of some layers during training by finding layers that have little effect on accuracy, based on automatic analysis of the data content during training.
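Fujitsu has not published GradSkip’s internals here, but the general idea of freezing low-impact layers for a step can be sketched as follows (a hypothetical toy with made-up names, not the actual CAC implementation):

```python
def gradskip_step(params, grads, layer_impact, lr=0.01, threshold=1e-3):
    """Toy sketch of gradient skipping: layers whose measured impact on
    accuracy falls below a threshold skip their weight update this step,
    saving the update (and potentially gradient) computation; all other
    layers take a plain SGD step."""
    updated = {}
    for name, weights in params.items():
        if layer_impact.get(name, 1.0) < threshold:
            updated[name] = weights  # frozen this step: skip the update
        else:
            updated[name] = [w - lr * g for w, g in zip(weights, grads[name])]
    return updated
```

The hard part in practice, which this sketch omits, is the automatic analysis that decides which layers currently have little effect on accuracy.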

Fujitsu + RIKEN

RIKEN and Fujitsu are jointly developing the world’s top-level supercomputer – the supercomputer Fugaku – capable of realizing high effective performance for a broad range of application software, with the goal of full operation in 2021. RIKEN and Fujitsu, in collaboration with Fujitsu Laboratories, submitted CosmoFlow results for the closed division using 512 nodes and 8,192 nodes, and for the open division using 16,384 nodes. The dataset was reformatted to tar.xz files to reduce data staging time, and LLIO (Lightweight Layered IO Accelerator) was used to provide a temporary local file system to each process. An optimized oneAPI Deep Neural Network Library (oneDNN) was developed to exploit the performance of the A64FX.

Since accuracy could not reach the target with batch sizes larger than 4,096, model parallelism was introduced to enable hybrid parallelism across both data and model. Model parallelism in TensorFlow was extended based on Mesh TensorFlow (MTF) so that multiple processes can combine data and model parallelism. Model parallelism was applied to the Conv3D layers via spatial partitioning in two dimensions. The hybrid parallelism enabled scaling the number of CPUs up to 8,192 for the closed division and 16,384 for the open division (about 1/10 of Fugaku).
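The two-dimensional spatial partitioning can be illustrated with a toy index calculation: each model-parallel worker owns one sub-block of the input volume along two spatial axes. This is only a sketch of the idea, not RIKEN/Fujitsu’s Mesh TensorFlow extension:

```python
def spatial_shards(depth: int, height: int, grid=(2, 2)):
    """Assign each model-parallel worker one (depth, height) sub-block of
    a 3D volume, partitioning the first two spatial axes while the third
    axis stays whole on every worker (toy sketch)."""
    d_step, h_step = depth // grid[0], height // grid[1]
    return [
        ((i * d_step, (i + 1) * d_step), (j * h_step, (j + 1) * h_step))
        for i in range(grid[0])
        for j in range(grid[1])
    ]

# A 128^3 volume on a 2x2 worker grid: four shards of 64 x 64 x 128 voxels.
# Convolutions at shard boundaries additionally require a halo exchange of
# neighboring voxels between workers, which is omitted here.
print(spatial_shards(128, 128))
```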

LBNL

MLPerf HPC is an important opportunity for the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory as we prepare for a growing scientific AI workload in the coming years. Berkeley Lab co-led the published scientific applications on which the current benchmarks, DeepCAM and CosmoFlow, are based. For this first round of results, we submitted results measured on the Cori supercomputer at NERSC, demonstrating data-parallel training capabilities on the KNL and GPU partitions at up to 1,024 nodes and 64 GPUs, respectively. Our participation in MLPerf HPC v0.7 marks an important step for us in standardizing our AI benchmarking strategy in preparation for our announced next machine coming online in 2021: Perlmutter.

NCSA

One of the goals of the Innovative Systems Lab (ISL) at the National Center for Supercomputing Applications (NCSA) is to evaluate emerging hardware and software systems of interest to the AI research community. MLPerf HPC provides a great tool for conducting such evaluations. For this round of benchmarks, we submitted results obtained on our Hardware-Accelerated Learning (HAL) cluster, based on IBM POWER9 CPUs and NVIDIA V100 GPUs. The system consists of 16 IBM AC922 nodes backed by an all-flash DDN storage array and an EDR InfiniBand interconnect, and it shows great distributed training capabilities across the entire cluster. We developed significant experience while participating in MLPerf HPC v0.7, which will benefit us in our future system designs.

TACC

The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies. MLPerf HPC applications like CosmoFlow provide an invaluable opportunity to understand the requirements of next-generation ML and DL applications. TACC participated in MLPerf HPC v0.7 and submitted performance for the CosmoFlow application at 64 GPUs on the Frontera RTX partition. The lessons learned will be used to inform the architecture of future TACC systems for the benefit of the fast-growing AI community.

Link to MLPerf results: https://mlperf.org/training-results-0-7

Link to MLPerf announcement: https://mlperf.org/press#mlperf-hpc-v0.7-results
