MLCommons Issues MLPerf HPC Training Results for Larger Systems

By John Russell

November 14, 2022

MLCommons last week issued its third annual set of MLPerf HPC (v2.0) results, intended to showcase the performance of larger systems when training more rigorous scientific models. The size of the systems participating in all of the MLPerf HPC rounds so far has been impressive and includes, for example, Fugaku (at RIKEN) and Longhorn (Texas Advanced Computing Center). However, the number of submitters remains low: it dipped to five this year from eight last year and six the year before.

With the exception of Fugaku, which uses Fujitsu’s A64FX 64-bit Arm-based microprocessor, all of the submissions used Nvidia A100 or V100 GPUs as accelerators. Dell was a new submitter to the MLPerf HPC category (32x PowerEdge XE8545 servers with 128 Nvidia A100 SXM GPUs). The systems in the latest round are all impressive and quite different, making comparisons among them tricky. It’s best to look at the results directly.

There were no changes to the models or datasets used in the latest round – DeepCAM (climate), CosmoFlow (cosmology prediction), and OpenCatalyst (molecular modeling). Both time-to-train (strong scaling) and throughput (weak scaling, models trained per minute) are measured.

“MLPerf HPC, in many ways, inherits the rules from MLPerf Training with a few changes. In particular, the clock starts in a slightly different location,” said David Kanter, executive director of MLCommons, the parent organization for MLPerf. “The data starts in globally shared storage [and must] be distributed across your cluster network to all of the compute nodes. In MLPerf Training we allow the data to reside on local storage for all of your compute nodes. So, there’s more of a storage element in the HPC [exercise]. The HPC workloads selected are very focused on scientific datasets and scientific problems,” he said.

“Time-to-train is used [to measure] strong scaling, but there’s [also] a throughput metric, because many of the HPC systems being measured are large-scale clusters. For example, one of the submissions was done on Fugaku, which has tens of thousands of nodes. We wanted the ability to measure weak scaling – that is how [well] you run multiple jobs – because the reality for really large-scale HPC clusters is they’re typically not running one job. They’re usually running many jobs simultaneously. To reflect that, we built this throughput metric; if you’re training multiple models concurrently [it measures] what the actual throughput of those models is,” said Kanter.
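The weak-scaling arithmetic Kanter describes is simple: train many model instances concurrently and report an aggregate rate of models trained per minute. A minimal sketch follows; the instance count and wall-clock span below are illustrative, not figures from the published results.

```python
# Weak-scaling throughput, roughly as MLPerf HPC reports it: many model
# instances train concurrently, and the metric is models trained per
# minute over the wall-clock span of the whole run.
def weak_scaling_throughput(num_instances: int, span_minutes: float) -> float:
    """Aggregate throughput across all concurrent model instances."""
    return num_instances / span_minutes

# Hypothetical run: 637 concurrent instances completing within a
# 495-minute window.
rate = weak_scaling_throughput(637, 495.0)
print(f"{rate:.2f} models trained per minute")
```

The point of the metric is that doubling the number of concurrent instances on a larger partition should roughly double this rate if the system weak-scales well.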

Nvidia A100 80GB GPU (Image credit: Nvidia)

Nvidia, not surprisingly, touted the fact that its GPUs are widely used in top-end machines, and also showcased the performance of its Selene supercomputer which uses A100 GPUs. Why not? At least for the moment, Nvidia remains the dominant GPU supplier for systems of all sizes including supercomputers and large HPC clusters.

David Salvator, director of AI, benchmarking and cloud at Nvidia, noted: “We’ve been able to improve our time to train on [CosmoFlow] by 9x, which is just a massive improvement. As I’ve talked about, one of the things about training is that it is iterative. You do multiple training runs [and] there is a certain amount of experimentation – that’s nicer sounding than trial and error – but what it means is you’re trying different things (parameters). If it works, great. If it doesn’t, you basically tweak some of your parametric knobs and try again. The ability to run much faster means you can do many more trials in a given period.”

With so few submissions, their widely varying configuration/size, and the overwhelming use of Nvidia GPUs as accelerators, it’s difficult to make too many meaningful comparisons. It will be interesting to see if any of the new/forthcoming systems that use either an AMD CPU/GPU combination or Intel CPU/GPU combination will participate in future MLPerf HPC exercises.

One has the sense that MLPerf HPC is still finding its identity. MLCommons encourages participating organizations to submit statements describing their systems and any special steps used to optimize them for handling the training workloads (full statements included at the end of the article). It probably should be noted that the Fugaku statement submitted this year – which includes a description of some tuning elements – is a direct copy of its statement submitted last year.

Three of the five submitters cited the value of participating in MLPerf HPC.

  • Helmholtz AI: “The MLPerf HPC benchmarking suite is a great opportunity for us to fine-tune both code-based and system-based optimization methods and tools. For CosmoFlow, we were able to improve our submission by over 300 percent compared to last year! While fine-tuning our IO operations, for example, we discovered ways for our filesystems to more reliably deliver read and write performance.”
  • Nvidia: “Importantly, MLPerf HPC exercises, and is sensitive to the impact of every key subsystem from memory bandwidth to shared filesystem throughput. Therefore, we believe the MLPerf HPC benchmark represents one of the best tools for HPC and AI centers system bring-up and acceptance testing while also being the best metric to use for system comparison during design and acquisition phases.”
  • TACC: “MLCommons HPC workgroup provides an excellent opportunity to evaluate Machine Learning applications on supercomputing platforms. In the v2.0 submission round, Dr. Amit Ruhela ran two Machine Learning applications, i.e. Cosmoflow and Deepcam, on the TACC Longhorn system and submitted the performance numbers a third time. These benchmarks allow TACC staff to envisage and plan specifications for their upcoming supercomputing systems.”

Kanter said, “I am sure we will get more submitters next round as well; you’ve probably noticed, some of the supercomputers are just sort of getting up and running.”

Stay tuned.

Link to MLPerf release, https://www.hpcwire.com/off-the-wire/latest-mlperf-results-display-gains-for-all/

Link to MLPerf HPC v2.0 results, https://mlcommons.org/en/training-hpc-20/

SUBMITTED VENDOR STATEMENTS  

Dell

Dell Technologies has long been dedicated to advancing, democratizing, and optimizing HPC to make it accessible to anyone who wants to use it. Together, Dell and Nvidia have partnered to deliver unprecedented acceleration and flexibility for AI, data analytics and HPC workloads to help enterprises tackle some of the world’s toughest computing challenges.

For the MLPerf HPC Training 2.0 testing, Dell submitted results from 32x PowerEdge XE8545 servers with 128 NVIDIA A100 SXM GPUs for the DeepCAM training model. This submission is from the Rattler supercomputer at the Dell Technologies Edge Innovation Center. The HPC system, stemming from a partnership with NVIDIA, is designed to showcase extreme scalability and was previously recognized on the TOP500 list of the world’s fastest supercomputers.

There are always going to be bigger questions and bigger data sets requiring HPC solutions to keep pace with the speed of innovation. Dell has the engineering expertise needed to build large-scale GPU solutions to meet these growing demands across industries. Scientific researchers at Oregon State University (OSU) are using Dell servers with NVIDIA GPUs for climate change research, among other areas. For them, innovative HPC technology in tailored configurations is the must-have capability to drive meaningful discoveries. “It used to take about 10 years to fully sequence a seawater sample”, says Christopher Sullivan, Assistant Director of Biocomputing at OSU’s Center for Genome Research and Biocomputing. “Now it takes less than a week to analyze and sequence all of the DNA in a sample.”

Experience Dell’s solutions for HPC for yourself in one of our worldwide Customer Solution Centers. Tap into one of our HPC & AI Centers of Excellence and/or collaborate with our HPC & AI Innovation Lab. When you engage with the Lab, you work directly with experts to design a solution for your unique HPC workloads.

Fujitsu + RIKEN

RIKEN and Fujitsu jointly developed the world’s top-level supercomputer—the supercomputer Fugaku—capable of realizing high effective performance for a broad range of application software, and started its official operation on March 9, 2021 [1]. RIKEN and Fujitsu submitted CosmoFlow results to the closed division using 512 nodes for strong scaling and 81,536 nodes (=128 nodes×637 model instances) for weak scaling.

For both weak and strong scaling, LLIO (Lightweight Layered IO Accelerator) was used to cache library and program files from FEFS (Fujitsu Exabyte File System) storage. We developed customized TensorFlow and optimized oneAPI Deep Neural Network Library (oneDNN) as the backend [2]. The oneDNN uses JIT assembler Xbyak_aarch64 to exploit the performance of A64FX.

For weak scaling, since the job scheduler cannot launch a large number of instances immediately, inter-instance synchronization across jobs was added to align start times among instances. Moreover, to avoid excessive access to the FEFS from all instances, the dataset is staged to node-local memory using an MPI program in which only the first instance reads the dataset from FEFS and broadcasts it to the other instances. We actually ran 648 instances (82,944 nodes) but submitted results from 637 of them. The pruned instances consist of 1 instance that hung during training, 6 instances that unintentionally used the same seed value as others, and 4 instances that took a particularly long time.

For strong scaling, we used a reformatted, uncompressed TFRecord dataset to improve training throughput. The reference dataset is compressed with gzip and needs decompression at each training step. Since the strong-scaling runs use more nodes than each weak-scaling instance, the amount of staged data per node decreases, so the uncompressed dataset could be used.
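The tradeoff Fujitsu describes, paying the gzip decompression cost once at staging time rather than at every training step, can be sketched in plain Python. The record contents and helper names below are illustrative, not Fugaku's actual data pipeline.

```python
import gzip

def stage_uncompressed(compressed_blobs):
    """One-time staging: decompress every record up front so the
    training loop reads plain bytes at each step."""
    return [gzip.decompress(b) for b in compressed_blobs]

def read_step_compressed(blob):
    """Per-step read from the compressed dataset: pays the
    decompression cost on every training step."""
    return gzip.decompress(blob)

# Toy records standing in for TFRecord samples.
records = [b"sample-0" * 100, b"sample-1" * 100]
compressed = [gzip.compress(r) for r in records]

staged = stage_uncompressed(compressed)  # decompress once, reuse forever
assert staged == records
assert read_step_compressed(compressed[0]) == records[0]
```

With thousands of training steps per epoch, moving decompression out of the step loop trades staging space for steady-state throughput, which is viable here because the per-node share of the staged data shrinks as node count grows.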

In this round, the performance of the Fugaku half-system with more than 80,000 nodes can be evaluated using the weak scaling metric.

[1] https://www.fujitsu.com/global/about/innovation/fugaku/
[2] https://github.com/fujitsu

Helmholtz AI

Through Helmholtz AI, Germany’s largest research association has teamed up to bring cutting-edge AI methods to researchers from the natural sciences. With this in mind, the Helmholtz AI members from the Steinbuch Centre for Computing (SCC) at Karlsruhe Institute of Technology (KIT) and the Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich have jointly submitted their results for the MLPerf HPC benchmarking suite. We are proud of our large-scale training runs using NVIDIA A100 GPUs on both the HoreKa supercomputer at SCC and the JUWELS Booster at JSC. On the latter, we used up to 3,072 NVIDIA A100 GPUs during these measurements.

The MLPerf HPC benchmarking suite is a great opportunity for us to fine-tune both code-based and system-based optimization methods and tools. For CosmoFlow, we were able to improve our submission by over 300% compared to last year! While fine-tuning our IO operations, for example, we discovered ways for our filesystems to more reliably deliver read and write performance.

As the impacts of climate change become more apparent, it is also imperative to be more conscious about our environmental footprint, especially with respect to energy consumption. To that end, the system administrators at HoreKa have enabled the use of the Lenovo XClarity Controller to measure the energy consumption of the compute nodes*. For the submission runs on HoreKa, 1,127.8 kWh were used. This is more than it takes to drive an average electric car from Miami to Vancouver or from Portugal to Finland.

The MLPerf HPC benchmarking suite is vital to determining the utility of our HPC machines for modern work flows. We look forward to submitting again next year!

*This measurement does not include all parts of the system and is not an official MLCommons methodology, however it provides a minimum measurement for the energy consumed on our system. As each system is different, these results cannot be directly transferred to any other submission.

Nvidia

The HPC community is in the midst of a second renaissance – one associated with adopting AI methods to augment or replace traditional HPC approaches. Over the last five years, the number of research papers published about AI-accelerated simulation has increased from fewer than 100 per year to nearly 5,000 in the last year.

MLPerf HPC benchmarks measure training time and throughput for three types of high-performance simulations that have adopted machine learning techniques. Peer-reviewed industry-standard benchmarks are a critical tool for evaluating HPC platforms, and we believe access to reliable performance data will help guide HPC architects of the future in their design decisions.

The MLPerf HPC benchmarks seek to model the types of workloads HPC centers perform:

  • Cosmoflow – physical quantity estimation from cosmological image data
  • Deepcam – identification of hurricanes and atmospheric rivers in climate simulation data
  • Opencatalyst – prediction of molecular configuration energy levels based on graph connectivity

Importantly, MLPerf HPC exercises, and is sensitive to the impact of every key subsystem from memory bandwidth to shared filesystem throughput. Therefore, we believe the MLPerf HPC benchmark represents one of the best tools for HPC and AI centers system bring-up and acceptance testing while also being the best metric to use for system comparison during design and acquisition phases.

Nvidia continues to improve scores year over year, bettering last year’s strong-scaling score on Cosmoflow by 2.1x and the best Opencatalyst score by 5.1x. Nvidia and its partner ecosystem submitted using two generations of Nvidia GPUs (V100 and A100): supercomputing centers Jülich and the Texas Advanced Computing Center, along with Nvidia partner Dell, made submissions.

All software used for Nvidia submissions is available from the MLPerf repository. Nvidia constantly makes performance improvements, including those developed for MLPerf, available in our software on NGC, our software hub for GPU applications.

Texas Advanced Computing Center

MLCommons HPC workgroup provides an excellent opportunity to evaluate Machine Learning applications on supercomputing platforms. In the v2.0 submission round, Dr. Amit Ruhela ran two Machine Learning applications, i.e. Cosmoflow and Deepcam, on the TACC Longhorn system and submitted the performance numbers a third time. These benchmarks allow TACC staff to envisage and plan specifications for their upcoming supercomputing systems.
