Summit — the world’s top-ranking supercomputer — has been used to test-drive a new mixed-precision Linpack benchmark, which for now is being called HPL-AI.
Traditionally, supercomputer performance is measured using the High-Performance Linpack (HPL) benchmark, the basis for the Top500 list that ranks the world’s fastest supercomputers twice a year. The Linpack benchmark tests a supercomputer’s ability to carry out high-performance tasks (like simulations) that use double-precision math. On June’s Top500 list, announced Monday, Summit’s 148 Linpack petaflops land it in first place by a comfortable margin.
Using that same machine configuration, Oak Ridge National Laboratory (ORNL) and Nvidia tested Summit on HPL-AI, achieving a result of 445 petaflops.
A different kind of benchmark
While the HPL benchmark tests supercomputers’ performance in double-precision math, AI is a rapidly growing use case for supercomputers — and most AI models use mixed-precision math.
The HPL-AI benchmark is specifically designed to bridge this gap in evaluation, complementing — rather than supplanting — the traditional HPL approach. Based on the HPL standard, HPL-AI adds mixed-precision calculations to evaluate AI model performance.
“Mixed-precision techniques have become increasingly important to improve the computing efficiency of supercomputers, both for traditional simulations with iterative refinement techniques as well as for AI applications,” said Jack Dongarra, who introduced Linpack in the late 1970s. “Just as HPL allows benchmarking of double-precision capabilities, this new approach based on HPL allows benchmarking of mixed-precision capabilities of supercomputers at scale.”
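The iterative refinement Dongarra mentions can be sketched in a few lines. The code below is a simplified, hypothetical illustration — not the HPL-AI implementation: it does the expensive O(n³) factorization work in float32 (standing in for the half precision used on Tensor Cores), then applies cheap double-precision residual corrections to recover near-double-precision accuracy. Real implementations use an LU factorization rather than an explicit inverse.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve A x = b: costly factorization in low precision,
    cheap residual corrections in double precision."""
    # The O(n^3) step, done in float32 (stand-in for Tensor Core half precision).
    A_lo_inv = np.linalg.inv(A.astype(np.float32))
    x = (A_lo_inv @ b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in float64
        # Correct x using the cheap low-precision factorization.
        x += (A_lo_inv @ r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)         # well-conditioned system
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                        # residual near float64 accuracy
```

Because each refinement step only needs a matrix-vector product and a reuse of the low-precision factorization, most of the flops run at the fast mixed-precision rate while the final answer approaches double-precision quality — which is why the technique pays off for both simulations and AI.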
Reaching new peaks of performance
Nvidia and ORNL tested the HPL-AI benchmark on Summit. The behemoth supercomputer — built by IBM, Mellanox and Nvidia and equipped with 9,216 IBM Power9 CPUs and 27,648 Nvidia Volta V100 GPUs — blazed through the computations, completing the test in half an hour (compared to its 90-minute HPL run). Its performance was rated at 445 petaflops — nearly half an exaflops, and triple Summit’s 148-petaflops result on HPL.
This result marks a few significant accomplishments — one, of course, for Summit; another for GPU-based supercomputing; and a third for the HPL-AI benchmark itself.
“Ever since the delivery and installation of our 200-petaflops Summit system — which included the mixed-precision Tensor Core capability powered by Nvidia’s Volta GPU — it has been a goal of ours to not only use this unique aspect of the system to do AI but also to use it in our traditional HPC workloads,” said Jeff Nichols, associate laboratory director at ORNL. “Achieving a 445 petaflops mixed-precision result on HPL (equivalent to our 148 petaflops [double-precision] result) demonstrates that this system is capable of delivering up to 3x more performance on our traditional and AI workloads. This gives us a huge competitive edge in delivering science at an unprecedented scale.”
Nvidia is hoping that the HPL-AI benchmark can become a new, complementary standard for the supercomputing industry, much like the Green500 list became a standard measure of efficiency.
“Today, no benchmark measures the mixed-precision capabilities of the largest-scale supercomputing systems the way the original HPL does for double-precision capabilities,” wrote Ian Buck, general manager and vice president of Accelerated Computing at Nvidia. “HPL-AI can fill this need, showing how a supercomputing system might handle mixed-precision workloads such as large-scale AI.”
In a blog post, Buck highlighted several use cases (included below) for which scientists are turning to mixed-precision supercomputing.
Simulating fusion reactions
Nuclear fusion is, in effect, replicating the sun in a bottle. While it promises unlimited clean energy, nuclear fusion reactions involve working with temperatures above 10 million degrees Celsius. They’re also prone to disruptions — and tricky to sustain for more than a few seconds. Researchers at ORNL are simulating fusion reactions so that physicists can study the instabilities of plasma fusion, giving them a better understanding of what’s happening inside the reactor. The mixed-precision capabilities of Tensor Core GPUs speed up these simulations by 3.5x to advance the development of sustainable energy at leading facilities such as ITER.
Identifying new molecules
Whether it’s to develop a new chemical compound for industrial use or a new drug to treat a disease, scientists need to identify and synthesize new molecules with desirable chemical properties. Using NVIDIA V100 GPUs for training and inference, Dow Chemical Company researchers developed a neural network to identify new molecules for use in the chemical manufacturing and pharmaceutical industries.
Seismic fault interpretation
The oil and gas industry analyzes seismic images to detect fault lines, an essential step toward characterizing reservoirs and determining well placement. This process typically takes days to weeks for one iteration — but with an NVIDIA GPU, University of Texas researchers trained an AI model that can predict faults in mere milliseconds instead.
Tiffany Trader contributed to this report.