NVIDIA Unleashes Monster Pascal GPU Card at GTC16

By Tiffany Trader

April 5, 2016

Earlier today (Tuesday) at the seventh-annual GPU Technology Conference (GTC) in San Jose, Calif., NVIDIA revealed its first Pascal-architecture-based GPU card, the P100, calling it “the most advanced accelerator ever built.” The P100 is based on the NVIDIA Pascal GP100 GPU, a successor to the Kepler GK110/210, and is aimed squarely at HPC, technical computing and deep learning workloads.

Packing a whopping 5.3 teraflops of double-precision floating point performance, the P100 is NVIDIA’s most performant chip to date. And with 15.3 billion transistors, it’s also the largest GPU that NVIDIA has ever made, despite being built on TSMC’s denser 16nm FinFET manufacturing process.

The P100 is the flagship Pascal architecture offering, and it’s also the first product to implement the heralded HBM2 and NVLink technologies. During his keynote address, NVIDIA CEO Jen-Hsun Huang said the chip had entered volume production and would ship by first quarter 2017. Partners Dell, HPE, Cray and IBM are expected to come out with Pascal-equipped servers by the end of 2016, with production availability in early 2017.

Huang also showed how the new Pascal GPU fits in with the Tesla family, with the new DGX-1 deep learning supercomputer shown all the way to the right. With a price tag of $129,000, the DGX-1 puts the equivalent of 250 servers in a box, said Huang. It packs eight of the new P100 GPUs and 7 TB of SSD storage, and can pump out up to 170 teraflops of half-precision floating point performance.

[Image: the Tesla family lineup at GTC16, with the new DGX-1 at far right]

In a separate presentation, Lars Nyland, NVIDIA senior architect, and Mark Harris, chief technologist of GPU computing software at NVIDIA, provided a deep dive into the new architecture. Before unpacking the new features and specs, the duo looked at some real-world performance speedups for the P100. This chart depicts the benefits of the faster GPU and the high-bandwidth NVLink interconnect technology.

[Chart: Tesla P100 application speedups from the faster GPU and the NVLink interconnect]

GPU Breakdown

The cradle for computation in Tesla GPUs is the SM, the streaming multiprocessor. The SM creates, manages, schedules and executes instructions from many threads in parallel. The Tesla GP100 GPU has 60 SMs; for the P100, NVIDIA has enabled 56 of them, for a total of 3,584 CUDA cores. Memory bandwidth is 720 GB/s and the memory size is 16 GB.
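These topline numbers are all visible to developers through the CUDA runtime. As a minimal sketch (using the standard cudaGetDeviceProperties call), the following should report 56 multiprocessors and 16 GB of global memory on a Tesla P100:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0

    // On a Tesla P100 this should print 56 SMs and 16 GB of memory.
    printf("SMs:                  %d\n", prop.multiProcessorCount);
    printf("Registers per SM:     %d\n", prop.regsPerMultiprocessor);
    printf("Shared memory per SM: %zu KB\n", prop.sharedMemPerMultiprocessor / 1024);
    printf("Global memory:        %zu GB\n", prop.totalGlobalMem >> 30);
    return 0;
}
```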

There are 64 CUDA cores in the GP100 SM, which at first seems small in comparison to Maxwell’s SM with 128, but there is a reason. To arrive at the GP100 SM, NVIDIA started with a Maxwell SM and chopped it in half.

“The cores are your most important resource on the SM and if you don’t use them in any particular clock cycle, you are wasting your chip,” explained Nyland. “What we wanted to do was improve the efficiency to make them be used more often. We started with the Maxwell SM and we cut it in half – that gives us two P100 SMs. We could have stopped here, but we added the 64-bit floating point double precision cores and then we doubled the occupancy, we doubled the number of warps per P100 SM so that the occupancy went back to being equivalent to Maxwell SM. Then we also doubled the register files and finally we duplicated the shared memory – so now we have two SMs with 64 cores each from what we started with, one Maxwell 128 core SM.”

[Diagram: the Pascal GP100 SM]

The net effect, the NVIDIA reps went on to explain, is that the GP100 SM has more resources per core: twice the number of registers per core, 1.33x more shared memory capacity, 2x the shared memory bandwidth, and twice as many warps resident at the same time on the SMs.

The overall impact is higher instruction throughput leading to higher utilization when running codes. “There’s a big effort that’s gone into making sure you get more value out of every core,” said Harris.

“We’ve put more resources around each core,” Nyland added. “By having more warps ready, the scheduler has more to choose from, and so it has more opportunities to run something than it would if there were fewer warps. By having more registers, we can have higher occupancy. And with the shared memory, we have double the bandwidth and less contention, fewer cores sharing each shared memory, so there is more access. All of this adds up to more utilization of the cores, which is the real goal.”
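The payoff of those doubled warp slots and register files can be checked with CUDA’s occupancy API. Here is a minimal sketch, using a trivial kernel of our own (`scale` is a placeholder, not NVIDIA’s code), that asks how many 256-thread blocks can be resident on one SM at a time:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A trivial placeholder kernel; the register and shared memory usage of the
// real kernel is what determines how many of its blocks fit on an SM.
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main()
{
    int blocksPerSM = 0;
    // How many 256-thread blocks of `scale` can be co-resident on one SM?
    // GP100's doubled registers and warp slots raise this figure for
    // register-heavy kernels relative to earlier architectures.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, scale, 256, 0);
    printf("Resident blocks per SM: %d\n", blocksPerSM);
    return 0;
}
```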

Floating point is cited as another critical resource. All three sizes, half precision, single precision and double precision, conform to the IEEE 754 standard. The peak speed of 5.3 teraflops of double-precision performance doubles to 10.6 teraflops running in single-precision floating point mode. Double it again, and you get 21.2 peak teraflops of half-precision floating point performance, another first.

“GPUs have used half-precision for at least a dozen years as a storage mechanism to save space — for textures — but we’ve never built an arithmetic pipeline to implement the 16-bit floating point directly, we’ve always converted it,” Nyland said. “What we’ve done is left it in its native size and then pair it together and execute an instruction on a pair of values every clock – this is compared to the single-precision where we execute one instruction every clock and the double-precision runs at one every two clocks.”
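In CUDA, this pairing is exposed through the half2 type: two 16-bit values packed into one 32-bit register and processed by a single instruction. Below is a minimal sketch of the idiom (the kernel names are ours); it requires compute capability 5.3 or higher, so compile with nvcc -arch=sm_60 for the P100:

```cuda
#include <cuda_fp16.h>

// Fill both halves of each __half2 with a constant.
__global__ void init(__half2 *a, __half2 *b, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        a[i] = __float2half2_rn(1.5f);
        b[i] = __float2half2_rn(2.0f);
    }
}

// __hadd2 performs two FP16 additions with a single instruction, which is
// where the 2x half-precision throughput comes from.
__global__ void add_half2(const __half2 *a, const __half2 *b, __half2 *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hadd2(a[i], b[i]);
}

int main()
{
    const int n = 1024;  // n half2 elements = 2*n FP16 values
    __half2 *a, *b, *c;
    cudaMalloc(&a, n * sizeof(__half2));
    cudaMalloc(&b, n * sizeof(__half2));
    cudaMalloc(&c, n * sizeof(__half2));
    init<<<(n + 255) / 256, 256>>>(a, b, n);
    add_half2<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    return 0;
}
```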

NVIDIA has also added atomic addition for 64-bit floating point values.
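A short sketch of what that enables: on GP100 (compute capability 6.0), atomicAdd operates natively on double values, where earlier architectures had to emulate it in software with compare-and-swap loops. The reduction below is our illustration, not NVIDIA’s example code; compile with nvcc -arch=sm_60.

```cuda
#include <cstdio>

// Native double-precision atomicAdd requires sm_60 or newer; on
// Kepler/Maxwell it had to be emulated with an atomicCAS loop.
__global__ void sum(const double *x, double *total, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(total, x[i]);  // hardware FP64 atomic on Pascal
}

int main()
{
    const int n = 1 << 20;
    double *x, *total;
    cudaMallocManaged(&x, n * sizeof(double));
    cudaMallocManaged(&total, sizeof(double));
    for (int i = 0; i < n; i++) x[i] = 1.0;
    *total = 0.0;
    sum<<<(n + 255) / 256, 256>>>(x, total, n);
    cudaDeviceSynchronize();
    printf("sum = %.0f (expected %d)\n", *total, n);
    return 0;
}
```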

[Chart: Tesla P100 specifications compared with prior Tesla generations]

NVLink

One of the most important new features that debuted with the Pascal architecture is NVLink, NVIDIA’s communications protocol that allows high-speed communication from one GPU to another and to future NVLink-enabled CPUs as well. The point-to-point interconnect is said to allow data sharing at rates five to 12 times faster than traditional PCI Express Gen 3 (PCIe). And its compatibility with the GPU ISA means it can support shared memory multiprocessing workloads.

Currently implemented in Tesla P100 accelerator boards and Pascal GP100 GPUs, NVLink supports reads, writes and atomics between GPUs. There are four NVLinks on every P100 GPU. A single link supports up to 40 GB/sec of bidirectional bandwidth between the endpoints, and links can be connected in a gang to enable an aggregate maximum theoretical bandwidth of 160 GB/sec bidirectional per P100. The ability to have four or eight GPUs all sharing data with each other is the real benefit of NVLink, said Harris. “We can build really powerful machines and still program them with familiar programming models,” he added.

[Diagram: eight P100 GPUs connected via NVLink in a hybrid cube mesh]
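From the programmer’s side, NVLink traffic rides under the same peer-to-peer runtime calls CUDA has long used over PCIe, which is what keeps the programming model familiar. A minimal sketch for a two-GPU system (device numbering and sizes are illustrative):

```cuda
#include <cstdio>

int main()
{
    int canAccess = 0;
    // Can GPU 0 directly read and write GPU 1's memory? On a P100 system
    // this traffic travels over NVLink rather than PCIe.
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("No peer access between GPU 0 and GPU 1\n");
        return 1;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // flags argument must be 0

    // Allocate a buffer on each GPU and copy directly between them. With
    // peer access enabled, kernels running on GPU 0 can also dereference
    // ptr1 directly.
    float *ptr0, *ptr1;
    cudaMalloc(&ptr0, 1 << 20);
    cudaSetDevice(1);
    cudaMalloc(&ptr1, 1 << 20);
    cudaMemcpyPeer(ptr1, 1, ptr0, 0, 1 << 20);
    cudaDeviceSynchronize();
    return 0;
}
```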

NVLink can also be coupled with an NVLink-enabled CPU, such as the future POWER CPU that IBM has announced. In this configuration, four P100 GPUs can be fully connected in a gang, and each will also have a link connecting it to the CPU. The GPUs will thus be able to access all the memory of the other GPUs as well as CPU memory.

High Bandwidth Memory 2

The other major first for the P100 is its use of HBM2 stacked memory. This technology enables multiple layers of DRAM components to be integrated vertically on the package along with the GPU. The P100 accelerators have four 4-die HBM2 stacks, for a total of 16 GB of memory and 720 GB/s of peak bandwidth.

With HBM2, error correcting code (ECC) functionality comes free, the NVIDIA reps noted: no capacity is taken away and there is no processing performance penalty for using ECC. That wasn’t the case with GDDR memories, for which NVIDIA implemented an ECC scheme that set aside some of the memory for the ECC bits and had to generate that data itself, which carried a slight penalty.

Unified Memory

Pascal also expands on the unified memory features of CUDA 6 by adding support for a large address space and page faulting capability. Because Kepler and Maxwell could not page fault, the developer was only allowed to allocate unified memory up to the size of the GPU’s physical memory. And because any pages that had migrated to the CPU had to be migrated back to the GPU before a kernel could run, every kernel launch carried migration overhead. It also meant that on Kepler and Maxwell, CPU and GPU code could not simultaneously access the same memory allocations.

“With the page migration engine, the larger address space and the ability to page fault, you now get all of these things you couldn’t do before,” explained Nyland. “You can allocate beyond the size of the GPU memory up to the physical system memory size, which means you can oversubscribe the GPU memory and do the processing of large scale models out of core. You can now simultaneously access those allocations from the CPU and GPU without fatal errors. This means you have much finer-grained communication between the processors but you do still have to take care of proper synchronization so you don’t have race conditions. You can even do unified memory atomic operations, and across NVLink these are native.”
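Here is a minimal sketch of the oversubscription Nyland describes, assuming a 16 GB P100 on a Linux system with comfortably more than 24 GB of host RAM (the sizes are illustrative):

```cuda
#include <cstdio>
#include <cstring>

// Grid-stride loop: the first touch of each page from the GPU triggers a
// page fault, and the page migration engine moves that page into HBM2.
__global__ void touch(char *p, size_t bytes)
{
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
         i < bytes; i += stride)
        p[i] = 1;
}

int main()
{
    // 24 GB exceeds the P100's 16 GB of HBM2. On Kepler or Maxwell this
    // cudaMallocManaged call would simply fail.
    size_t bytes = 24UL << 30;
    char *p;
    if (cudaMallocManaged(&p, bytes) != cudaSuccess) {
        printf("allocation failed\n");
        return 1;
    }
    memset(p, 0, bytes);              // the CPU touches the pages first...
    touch<<<1024, 256>>>(p, bytes);   // ...then the GPU faults them back in
    cudaDeviceSynchronize();
    cudaFree(p);
    return 0;
}
```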

In the future, on systems that support it, Pascal opens the possibility of using the system allocator for unified memory. In that case, pointers returned by ordinary malloc could be passed to either the CPU or the GPU and the data shared between them. NVIDIA is working with Red Hat and the Linux community on the operating system changes necessary to unlock this functionality, reported Nyland.

More to come…

This was the first major walk-through of the Pascal architecture, but it is by no means the whole story. NVIDIA will be discussing other GP100 features, such as preemption and a larger L2 cache, in the days and weeks to come. A white paper is said to be forthcoming and we will link to it here when it’s available. For now, if you are hungry for more Pascal information, check out this blog post by Mark Harris, which reprises the content of today’s deep-dive session.

Pricing for the Tesla P100 is not yet available and shipping is still about a year away, but there is a way to get your hands on one. The GPUs are available for $16,125 each in quantities of eight — if you spring for the DGX-1. NVIDIA also makes select hardware available to its joint lab partners and its early access program partners.
