June 29, 2021 — At ISC High Performance 2021, a European virtual conference for high-performance computing (HPC), the Oak Ridge Leadership Computing Facility’s (OLCF’s) Summit was ranked as the world’s second-fastest supercomputer in the 57th TOP500 list. But it also took second place in a relatively new benchmark test apart from the main competition: High-Performance Linpack–Accelerator Introspection (HPL-AI).
With its submitted speed of 1.15 exaflops (an exaflop is a billion billion, or 10¹⁸, floating point operations per second), has Summit somehow jumped into the exascale era of supercomputing ahead of the OLCF’s upcoming Frontier system? No. When operational in 2022, Frontier is expected to deliver more than 1.5 exaflops of double-precision performance. Summit’s HPL-AI result, on the other hand, measures its mixed-precision compute capabilities. These two modes of arithmetic serve different applications in computational science, and double precision remains the gold standard.
“Many modern simulations require double precision to ensure that physical quantities are computed accurately, especially when those quantities are sort of pushing and pulling at once, for example, the forces acting on atoms in molecules or the fight between nuclear fusion and gravity that happens in a star. So, double-precision performance is the key to determining how useful a supercomputer is for science,” said Bronson Messer, the OLCF’s director of science. “But not all of the operations in the codes need to be carried out at this level of precision all the time. Modern GPUs offer very high performance for lower precision, and taking advantage of this fact is a real benefit to many applications.”
The main rankings of the biannual TOP500 list use the High Performance Linpack (HPL) test, the industry standard for measuring the double-precision (64-bit) arithmetic performance of supercomputers. First introduced in 1979 by Jack Dongarra, director of the Innovative Computing Laboratory (ICL) at the University of Tennessee, Knoxville, HPL has evolved over the decades along with supercomputer architectures and techniques. At ISC 2019, the ICL team of Jack Dongarra, Piotr Luszczek, and Azzam Haidar (now a senior engineer at NVIDIA) proposed the first implementation of the HPL-AI benchmark and submitted the first entry for Summit, scoring 450 petaflops. Later that year, they released the HPL-AI reference implementation to address the growing trend of supercomputers that use mixed-precision (16- or 32-bit) arithmetic in data science.
“Historically, HPC workloads are benchmarked at double precision, representing the accuracy requirements in computational astrophysics, computational fluid dynamics, nuclear engineering, and quantum computing,” Dongarra said. “But within the past few years, hardware vendors have started designing special-purpose units for low-precision arithmetic in response to the machine learning community’s demand for high computing power in low-precision formats.”
Unlike high-fidelity simulations, data-science applications such as artificial intelligence and neural networks don’t always require full 64-bit precision to accomplish their tasks effectively. Consequently, GPU makers have been adding the ability to conduct lower-precision calculations in their products, such as the NVIDIA V100 Tensor Core GPUs in Summit or the AMD Instinct™ GPUs coming in Frontier. This can result in a big speed increase for those data-driven applications.
“In general, when you do a simulation, you’re trying to represent the world—the locations of molecules or atoms or climate currents—in the most precise way that you can. So you want all 64 bits of double precision to represent a numeric value,” said Mallikarjun Shankar, head of the Advanced Technologies Section in the National Center for Computational Science at the US Department of Energy’s (DOE) Oak Ridge National Laboratory. “Now, in the world of data science, and for certain classes of operations, you’re often classifying or categorizing quantities or operating on a smaller set of quantities where you don’t need all 64 bits to represent the quantity.”
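Shankar’s point can be illustrated with a short example (a hedged sketch in NumPy; the stored value is purely illustrative): writing the same number in a 64-bit format versus a 16-bit format shows how much representational accuracy the smaller format gives up.

```python
import numpy as np

# An illustrative physical quantity, e.g. a particle coordinate.
x = 0.123456789012345

# 64-bit (double) keeps roughly 15-16 significant decimal digits.
x64 = np.float64(x)   # exact here: Python floats are already 64-bit

# 16-bit (half) keeps only about 3 significant decimal digits.
x16 = np.float16(x)

print(abs(float(x64) - x))   # no representation error
print(abs(float(x16) - x))   # a visibly larger rounding error
```

For classification-style workloads, that half-precision rounding error is often harmless, while the 16-bit value is far cheaper to store and move.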
To test mixed-precision-capable systems such as Summit, HPL-AI runs a workload similar to the HPL benchmark’s but performs part of the algorithm in low precision, then iteratively refines the solution until it reaches FP64-level accuracy.
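The refinement idea can be sketched in a few lines of NumPy (an illustrative toy, not the actual HPL-AI code): a linear system is solved cheaply in 32-bit precision, and the answer is then polished back to 64-bit accuracy using residuals computed in double precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# A deliberately well-conditioned system (illustrative, not HPL's matrix).
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# 1. Cheap low-precision solve (FP32 stands in for the fast format).
A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)

# 2. Iterative refinement: residual in FP64, correction in FP32.
for _ in range(5):
    r = b - A @ x                                   # FP64 residual
    d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
    x = x + d.astype(np.float64)

# x now agrees with a direct FP64 solve to near machine precision.
x_ref = np.linalg.solve(A, b)
print(np.max(np.abs(x - x_ref)))
```

The expensive factorization work happens in the fast low-precision format, while the inexpensive residual updates restore double-precision accuracy, which is exactly the trade HPL-AI is designed to measure.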
When NVIDIA ran the HPL-AI test on Summit in 2020, it achieved 550 petaflops—or 0.55 exaflops. NVIDIA assigned a cross-team group of engineers to further develop the benchmark’s code, including Haidar, compute architect Nikhil Jain, and developer technology engineer Jiqun Tu. The team was able to greatly boost Summit’s performance through a combination of mixed-precision communication and improvements in the CUDA Math library.
“We study all aspects—algorithmic, software, and architectural—that impact the end-to-end performance of mixed-precision computing in HPL-AI and develop new optimizations to improve the overall performance,” Jain said. “We plan to explore more compute-side and communication-related optimizations and also look at using mixed-precision math for matrices prevalent in more scientific domains.”
Dongarra said he doesn’t expect HPL-AI to supplant HPL but rather serve as a complement, bridging the gap in evaluating mixed-precision performance as the technique gains more traction in computational science.
“Given the additional flops available, it is useful to start asking the questions: how does your computer perform in a mixed-precision regime, and what kinds of computing campaigns can effectively use mixed-precision techniques?” Shankar said.
The OLCF is a US DOE Office of Science user facility located at ORNL.
UT-Battelle LLC manages Oak Ridge National Laboratory for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.