Over the last decade, accelerators have seen an increasing rate of adoption in high-performance computing (HPC) platforms, and in the June 2020 Top500 list, eight of the ten fastest systems featured accelerators. The most common form of accelerator is the graphics processing unit (GPU). The June 2020 edition of the Top500 is the first to list a system equipped with Nvidia's new A100 GPU, the HPC-centric Ampere GPU designed with AI applications in mind. With this new flagship Nvidia chip now on the market, domain scientists relying on GPU-accelerated scientific simulation codes wonder whether it is time to upgrade their hardware.
To help answer this question, we take a look at the performance the Nvidia A100 achieves for sparse and batched computations and quantify the acceleration over its predecessor, the Nvidia V100 GPU. The motivation for focusing on these routines is that many scientific applications are either (1) based on batched and sparse linear algebra library routines or (2) composed of operations with very similar characteristics. Consequently, the performance gains for these benchmarks may be indicative of the acceleration we can expect when porting a scientific computing application from a V100 platform to the A100 architecture without applying additional code modifications.
Figure 1 visualizes the speedups we obtain when replacing an Nvidia V100 GPU with an Nvidia A100 GPU without code modification. While the main memory bandwidth has, on paper, increased from 900 GB/s (V100) to 1,555 GB/s (A100), the speedup factors for the STREAM benchmark routines range between 1.6× and 1.72× for large data sets. At the same time, we observed that when accessing small data sets, the memory bandwidth of the A100 architecture is actually lower than that of the V100.
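For readers unfamiliar with how such bandwidth numbers are obtained, a STREAM-style measurement boils down to timing a simple streaming kernel such as the triad below. This is a minimal CUDA sketch rather than the exact benchmark behind Figure 1; the array size, block size, and timing setup are assumptions made purely for illustration.

```cuda
// Minimal STREAM-triad-style bandwidth sketch (illustration only).
// Build with: nvcc triad.cu -o triad
#include <cstdio>
#include <cuda_runtime.h>

__global__ void triad(const double* a, const double* b, double* c,
                      double alpha, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + alpha * b[i];   // two reads, one write per element
}

int main() {
    const size_t n = 1 << 27;                // ~128M doubles per array (assumed size)
    const size_t bytes = n * sizeof(double);
    double *a, *b, *c;
    cudaMalloc((void**)&a, bytes);
    cudaMalloc((void**)&b, bytes);
    cudaMalloc((void**)&c, bytes);
    cudaMemset(a, 0, bytes);
    cudaMemset(b, 0, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    triad<<<(n + 255) / 256, 256>>>(a, b, c, 2.0, n);   // warm-up run
    cudaEventRecord(start);
    triad<<<(n + 255) / 256, 256>>>(a, b, c, 2.0, n);   // timed run
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // The triad touches three arrays, so the reported rate counts 3x the array size.
    double gbps = 3.0 * bytes / (ms * 1e-3) / 1e9;
    printf("triad bandwidth: %.1f GB/s\n", gbps);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Because the effective rate is simply the bytes moved divided by the kernel time, small arrays cannot saturate the memory system: the kernel finishes before enough memory traffic is in flight, which is consistent with the small-data-set behavior observed above.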
For the sparse matrix-vector product (SpMV), a key algorithm for sparse linear algebra and scientific computing applications, the performance improvements depend on the individual sparse data format, the kernel implementation, and the specific problem characteristics. The speedup numbers for the SpMV kernels from Nvidia's cuSPARSE library and the Ginkgo open-source library shown in Figure 1 are all averaged over the more than 2,800 test matrices available in the SuiteSparse Matrix Collection. As many of these matrices are small, the kernels are unable to saturate the memory bandwidth. Consequently, the speedup values for the SpMV kernels are generally much lower than those for the STREAM benchmarks. In the performance analysis for Ginkgo's iterative linear solvers, we focus on large test problems to ensure the bandwidth is saturated in the vector operations. Depending on the individual algorithm, Ginkgo's iterative solvers run between 1.5× and 1.8× faster on the A100 GPU than on the V100 GPU.
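To make concrete what such an SpMV benchmark invokes, the sketch below runs a double-precision y = A·x through cuSPARSE's generic API, the interface shipped with CUDA 11 for the A100. The tiny 4×4 CSR matrix is a made-up example and error checking is omitted for brevity; the matrices used for Figure 1 instead come from the SuiteSparse collection.

```cuda
// Minimal double-precision CSR SpMV via the cuSPARSE generic API (CUDA 11 era).
// Build with: nvcc spmv.cu -lcusparse -o spmv
#include <cstdio>
#include <cuda_runtime.h>
#include <cusparse.h>

int main() {
    // CSR data for a small 4x4 matrix with 6 nonzeros (illustration only).
    const int num_rows = 4, num_cols = 4, nnz = 6;
    int    h_row_ptr[] = {0, 2, 3, 5, 6};
    int    h_col_idx[] = {0, 1, 1, 2, 3, 3};
    double h_vals[]    = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
    double h_x[]       = {1.0, 1.0, 1.0, 1.0};

    int *d_row_ptr, *d_col_idx; double *d_vals, *d_x, *d_y;
    cudaMalloc((void**)&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc((void**)&d_col_idx, sizeof(h_col_idx));
    cudaMalloc((void**)&d_vals, sizeof(h_vals));
    cudaMalloc((void**)&d_x, sizeof(h_x));
    cudaMalloc((void**)&d_y, num_rows * sizeof(double));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vals, h_vals, sizeof(h_vals), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

    cusparseHandle_t handle;
    cusparseCreate(&handle);

    // Describe the sparse matrix and the dense input/output vectors.
    cusparseSpMatDescr_t matA;
    cusparseCreateCsr(&matA, num_rows, num_cols, nnz,
                      d_row_ptr, d_col_idx, d_vals,
                      CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I,
                      CUSPARSE_INDEX_BASE_ZERO, CUDA_R_64F);
    cusparseDnVecDescr_t vecX, vecY;
    cusparseCreateDnVec(&vecX, num_cols, d_x, CUDA_R_64F);
    cusparseCreateDnVec(&vecY, num_rows, d_y, CUDA_R_64F);

    // Query the workspace size, then run y = 1.0 * A * x + 0.0 * y.
    double alpha = 1.0, beta = 0.0;
    size_t buf_size = 0;
    void*  d_buf = nullptr;
    cusparseSpMV_bufferSize(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                            &alpha, matA, vecX, &beta, vecY,
                            CUDA_R_64F, CUSPARSE_MV_ALG_DEFAULT, &buf_size);
    cudaMalloc(&d_buf, buf_size);
    cusparseSpMV(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                 &alpha, matA, vecX, &beta, vecY,
                 CUDA_R_64F, CUSPARSE_MV_ALG_DEFAULT, d_buf);

    double h_y[4];
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
    printf("y = [%g %g %g %g]\n", h_y[0], h_y[1], h_y[2], h_y[3]);

    cusparseDestroySpMat(matA);
    cusparseDestroyDnVec(vecX);
    cusparseDestroyDnVec(vecY);
    cusparseDestroy(handle);
    cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_vals);
    cudaFree(d_x); cudaFree(d_y); cudaFree(d_buf);
    return 0;
}
```

The same benchmark structure applies to the Ginkgo SpMV kernels; only the library calls and the choice of sparse matrix format differ.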
Finally, we also investigate the acceleration of batched routines that are common in scientific computing applications. We note that MAGMA's batched routines are heavily tuned for the V100 architecture, and higher speedups may be possible by tuning for the A100 architecture. Nevertheless, we see attractive performance gains of up to 1.6× that come "for free" by just switching to the newer hardware architecture. It is worth mentioning that the A100 GPU provides tensor core acceleration for FP64 arithmetic, a new hardware capability that did not exist on the A100's predecessors. Such drastic architectural improvements present a challenge for open-source libraries, such as MAGMA, that aim to provide highly tuned numerical software for a wide range of hardware architectures. As an example, the existing compute-bound kernels in MAGMA do not currently take advantage of the A100 tensor cores for double precision, which means that those kernels are bound, at best, by a theoretical peak performance of 9.7 teraflops (about 1.3× better than the V100). However, if MAGMA can take advantage of the new tensor core accelerators, the theoretical peak performance is 19.5 teraflops (2.6× better than the V100). Future versions of MAGMA will take advantage of the new tensor cores.
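To illustrate what a batched routine looks like from the caller's perspective, here is a minimal sketch that launches many small, independent double-precision GEMMs in a single call. It uses cuBLAS's cublasDgemmBatched rather than MAGMA's interface, and the matrix size (32×32) and batch count (1,000) are arbitrary illustration values.

```cuda
// Minimal batched double-precision GEMM sketch using cuBLAS (illustration only).
// Build with: nvcc batched_gemm.cu -lcublas -o batched_gemm
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 32;            // each matrix is n x n (assumed size)
    const int batch = 1000;      // number of independent small problems (assumed)
    const size_t mat_bytes = (size_t)n * n * sizeof(double);

    // One contiguous slab per operand holds all matrices of the batch.
    double *dA, *dB, *dC;
    cudaMalloc((void**)&dA, batch * mat_bytes);
    cudaMalloc((void**)&dB, batch * mat_bytes);
    cudaMalloc((void**)&dC, batch * mat_bytes);
    cudaMemset(dA, 0, batch * mat_bytes);
    cudaMemset(dB, 0, batch * mat_bytes);

    // The batched interface takes device arrays of per-matrix pointers.
    std::vector<const double*> hA(batch), hB(batch);
    std::vector<double*> hC(batch);
    for (int i = 0; i < batch; ++i) {
        hA[i] = dA + (size_t)i * n * n;
        hB[i] = dB + (size_t)i * n * n;
        hC[i] = dC + (size_t)i * n * n;
    }
    const double **dA_array; const double **dB_array; double **dC_array;
    cudaMalloc((void**)&dA_array, batch * sizeof(double*));
    cudaMalloc((void**)&dB_array, batch * sizeof(double*));
    cudaMalloc((void**)&dC_array, batch * sizeof(double*));
    cudaMemcpy(dA_array, hA.data(), batch * sizeof(double*), cudaMemcpyHostToDevice);
    cudaMemcpy(dB_array, hB.data(), batch * sizeof(double*), cudaMemcpyHostToDevice);
    cudaMemcpy(dC_array, hC.data(), batch * sizeof(double*), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const double alpha = 1.0, beta = 0.0;
    // One call launches all `batch` small GEMMs; the GPU processes them concurrently.
    cublasDgemmBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                       &alpha, dA_array, n, dB_array, n,
                       &beta, dC_array, n, batch);
    cudaDeviceSynchronize();
    printf("ran %d GEMMs of size %dx%d\n", batch, n, n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    cudaFree(dA_array); cudaFree(dB_array); cudaFree(dC_array);
    return 0;
}
```

The point of batching is that each individual problem is far too small to occupy the GPU on its own; grouping thousands of them into one library call restores enough parallelism to benefit from the wider A100.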
Given these overall consistent results, we may expect that complex scientific computing applications will also experience a 1.3× to 1.7× speedup when moving from an Nvidia V100 GPU to the new A100 GPU without modification, and this is not even accounting for additional architecture-specific performance optimization. While we cannot answer the question of whether this justifies the investment, it is clear that the Nvidia team succeeded in delivering an architecture with a new focus that delivers considerable performance improvement over its predecessor, not just incremental acceleration.
A preprint that provides many more details on the performance characteristics of sparse linear algebra routines on the Nvidia V100 and A100 GPUs can be found at https://arxiv.org/abs/2008.08478.
Author Bio – Hartwig Anzt
Hartwig Anzt is a Helmholtz Young Investigator Group leader at the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT). He obtained his Ph.D. in mathematics at the Karlsruhe Institute of Technology and afterward joined Jack Dongarra's Innovative Computing Lab at the University of Tennessee in 2013. Since 2015, he has also held a Senior Research Scientist position at the University of Tennessee. Hartwig Anzt has a strong background in numerical mathematics and specializes in iterative methods and preconditioning techniques for next-generation hardware architectures. His Helmholtz group on Fixed-point methods for numerics at Exascale ("FiNE") is funded until 2022. Hartwig Anzt has a long track record of high-quality software development. He is the author of the MAGMA-sparse open-source software package, the managing lead and developer of the Ginkgo numerical linear algebra library, and part of the US Exascale Computing Project delivering production-ready numerical linear algebra libraries.
Author Bio – Ahmad Abdelfattah
Ahmad Abdelfattah is a research scientist at the Innovative Computing Laboratory, the University of Tennessee. He received his Ph.D. in computer science from King Abdullah University of Science and Technology (KAUST) in 2015, where he was a member of the Extreme Computing Research Center (ECRC). His research interests include numerical linear algebra, parallel algorithms, and performance optimization on massively parallel processors. He received his BSc. and MSc. degrees in computer engineering from Ain Shams University, Egypt.
Author Bio – Jack Dongarra
Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee, is a Distinguished Research Staff Member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University.