Is the Nvidia A100 GPU Performance Worth a Hardware Upgrade?

By Hartwig Anzt, Ahmad Abdelfattah and Jack Dongarra

October 16, 2020

Over the last decade, accelerators have seen an increasing rate of adoption in high-performance computing (HPC) platforms, and in the June 2020 Top500 list, eight of the ten fastest systems featured accelerators. The most common accelerators are graphics processing units (GPUs). The June 2020 edition of the Top500 is also the first to list a system equipped with Nvidia’s new A100 GPU, the HPC-centric Ampere GPU designed with AI applications in mind. With this new flagship Nvidia chip now on the market, domain scientists relying on GPU-accelerated scientific simulation codes wonder whether it is time to upgrade their hardware.

To help answer this question, we take a look at the performance we achieve on the Nvidia A100 for sparse and batched computations and quantify the acceleration over its predecessor, the Nvidia V100 GPU. The motivation for focusing on these routines is that many scientific applications are either (1) based on batched and sparse linear algebra library routines or (2) composed of operations with very similar characteristics. Consequently, the performance gains for these benchmarks may be indicative of the acceleration one can expect when porting a scientific computing application from a V100 platform to the A100 architecture without applying additional code modifications.

Figure 1 visualizes the speedups we obtain when replacing an Nvidia V100 GPU with an Nvidia A100 GPU without code modification. The main memory bandwidth has increased on paper from 900 GB/s (V100) to 1,555 GB/s (A100), a ratio of roughly 1.73×, and the measured speedup factors for the STREAM benchmark routines range between 1.6× and 1.72× for large data sets, close to that theoretical improvement. At the same time, we observed that when accessing small data sets, the memory bandwidth of the A100 architecture is actually lower than that of the V100.
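
For readers who want a feel for how such bandwidth numbers are obtained, below is a minimal sketch of a STREAM-style triad measurement in CUDA. It illustrates the measurement principle only and is not the benchmark code behind Figure 1; the vector length and launch configuration are our own illustrative choices.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// STREAM "triad": a[i] = b[i] + scalar * c[i]. The kernel reads two vectors
// and writes one, so it moves 3 * n * sizeof(double) bytes in total.
__global__ void triad(double *a, const double *b, const double *c,
                      double scalar, size_t n) {
  size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) a[i] = b[i] + scalar * c[i];
}

int main() {
  const size_t n = 1 << 28;  // 2 GiB per vector: large enough to saturate the bandwidth
  double *a, *b, *c;
  cudaMalloc(&a, n * sizeof(double));
  cudaMalloc(&b, n * sizeof(double));
  cudaMalloc(&c, n * sizeof(double));

  const unsigned int blocks = (unsigned int)((n + 255) / 256);
  triad<<<blocks, 256>>>(a, b, c, 2.0, n);  // warm-up run

  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaEventRecord(start);
  triad<<<blocks, 256>>>(a, b, c, 2.0, n);  // timed run
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);

  float ms = 0.0f;
  cudaEventElapsedTime(&ms, start, stop);
  printf("triad bandwidth: %.1f GB/s\n",
         3.0 * n * sizeof(double) / 1e9 / (ms / 1e3));

  cudaFree(a); cudaFree(b); cudaFree(c);
  return 0;
}
```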

For the sparse matrix-vector product (SpMV), a key algorithm in sparse linear algebra and scientific computing applications, the performance improvements depend on the individual sparse data format, the kernel implementation, and the specific problem characteristics. The speedup numbers shown in Figure 1 for the SpMV kernels from Nvidia’s cuSPARSE library and the Ginkgo open-source library are all averaged over the more than 2,800 test matrices available in the SuiteSparse Matrix Collection. As many of these matrices are small, the kernels are unable to saturate the memory bandwidth. Consequently, the speedup values for the SpMV kernels are generally much lower than those for the STREAM benchmarks. In the performance analysis of Ginkgo’s iterative linear solvers, we focus on large test problems to ensure the bandwidth is saturated in the vector operations. Depending on the individual algorithm, Ginkgo’s iterative solvers run between 1.5× and 1.8× faster on the A100 GPU than on the V100 GPU.
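
To see why small matrices fall short of the STREAM numbers, consider the textbook CSR SpMV formulation sketched below, with one thread per row. This is an illustrative kernel only, not the cuSPARSE or Ginkgo implementation; both libraries balance the work across rows far more carefully.

```cuda
// Textbook CSR sparse matrix-vector product, one thread per row: y = A * x.
__global__ void csr_spmv(int num_rows, const int *row_ptr, const int *col_idx,
                         const double *vals, const double *x, double *y) {
  int row = blockIdx.x * blockDim.x + threadIdx.x;
  if (row < num_rows) {
    double sum = 0.0;
    // Accumulate this row's nonzeros against the input vector.
    for (int k = row_ptr[row]; k < row_ptr[row + 1]; ++k)
      sum += vals[k] * x[col_idx[k]];
    y[row] = sum;
  }
}
```

In this formulation, a matrix with only a few thousand rows launches far too few threads to occupy the A100’s 108 streaming multiprocessors, so the kernel never comes close to saturating the memory bandwidth.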

Finally, we also investigate the acceleration of batched routines, which are likewise common in scientific computing applications. We note that MAGMA’s batched routines are heavily tuned for the V100 architecture, and higher speedups may be possible by tuning for the A100. Nevertheless, we see attractive performance gains of up to 1.6× that come “for free” simply by switching to the newer hardware architecture. It is worth mentioning that the A100 GPU provides tensor core acceleration for FP64 arithmetic, a hardware capability that did not exist on the A100’s predecessors. Such drastic architectural improvements present a challenge for open-source libraries, such as MAGMA, that aim to provide highly tuned numerical software for a wide range of hardware architectures. As an example, the existing compute-bound kernels in MAGMA do not currently take advantage of the A100 tensor cores for double precision. This means that those kernels are bound, at best, by a theoretical peak performance of 9.7 teraflops (about 1.3× better than the V100). If MAGMA can take advantage of the new tensor cores, however, the theoretical peak performance is 19.5 teraflops (2.6× better than the V100). Future versions of MAGMA will take advantage of the new tensor cores.
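
As an illustration of the batched workload pattern these routines target, the sketch below launches thousands of small, independent double-precision matrix products with a single strided-batched GEMM call from cuBLAS. The matrix size and batch count are hypothetical, and the sketch stands in for, rather than reproduces, MAGMA’s batched routines; on the A100, recent cuBLAS releases can route such FP64 GEMMs through the new tensor cores.

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
  // 10,000 independent 32x32 products: each is far too small to fill the
  // GPU on its own, so all of them are batched into a single library call.
  const int n = 32, batch = 10000;
  const long long stride = (long long)n * n;

  double *A, *B, *C;
  cudaMalloc(&A, batch * stride * sizeof(double));
  cudaMalloc(&B, batch * stride * sizeof(double));
  cudaMalloc(&C, batch * stride * sizeof(double));
  // (A real code would fill A and B with data here.)

  cublasHandle_t handle;
  cublasCreate(&handle);

  const double alpha = 1.0, beta = 0.0;
  // C_i = A_i * B_i for i = 0 .. batch-1; matrix i starts at offset i*stride.
  cublasDgemmStridedBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                            &alpha, A, n, stride, B, n, stride,
                            &beta, C, n, stride, batch);
  cudaDeviceSynchronize();

  cublasDestroy(handle);
  cudaFree(A);
  cudaFree(B);
  cudaFree(C);
  return 0;
}
```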

Given these overall consistent results, we may expect that complex scientific computing applications will experience a 1.3× to 1.7× speedup when moving from an Nvidia V100 GPU to the new A100 GPU without modification, and this is not even accounting for additional architecture-specific performance optimization. While we cannot answer the question of whether this justifies the investment, it is clear that the Nvidia team succeeded in delivering an architecture with a new focus that provides considerable, not merely incremental, performance improvement over its predecessor.

Figure 1: Performance increase that comes “for free” when moving from the Nvidia V100 GPU to the Nvidia A100 GPU without applying hardware-specific code optimization.

A preprint that provides much more detail on the performance characteristics of sparse linear algebra routines on the Nvidia V100 and A100 GPUs can be found at https://arxiv.org/abs/2008.08478.

Author Bio – Hartwig Anzt

Hartwig Anzt is a Helmholtz Young Investigator Group leader at the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT). He obtained his Ph.D. in mathematics at the Karlsruhe Institute of Technology and afterward joined Jack Dongarra’s Innovative Computing Lab at the University of Tennessee in 2013. Since 2015, he has also held a Senior Research Scientist position at the University of Tennessee. Hartwig Anzt has a strong background in numerical mathematics and specializes in iterative methods and preconditioning techniques for next-generation hardware architectures. His Helmholtz group on fixed-point methods for numerics at exascale (“FiNE”) is funded until 2022. Hartwig Anzt has a long track record of high-quality software development. He is the author of the MAGMA-sparse open-source software package, the managing lead and a developer of the Ginkgo numerical linear algebra library, and part of the US Exascale Computing Project delivering production-ready numerical linear algebra libraries.

Author Bio – Ahmad Abdelfattah

Ahmad Abdelfattah is a research scientist at the Innovative Computing Laboratory at the University of Tennessee. He received his Ph.D. in computer science from King Abdullah University of Science and Technology (KAUST) in 2015, where he was a member of the Extreme Computing Research Center (ECRC). His research interests include numerical linear algebra, parallel algorithms, and performance optimization on massively parallel processors. He received his B.Sc. and M.Sc. degrees in computer engineering from Ain Shams University, Egypt.

Author Bio – Jack Dongarra

Jack Dongarra received a Bachelor of Science in mathematics from Chicago State University in 1972 and a Master of Science in computer science from the Illinois Institute of Technology in 1973. He received his Ph.D. in applied mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee, holds the position of Distinguished Research Staff Member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), is a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and is an Adjunct Professor in the Computer Science Department at Rice University.
