NVIDIA Steers Roadmap Around GPU Bottlenecks

By Nicole Hemsoth

March 25, 2014

The GPU Technology Conference (GTC ’14) kicked off this morning in San Jose with NVIDIA CEO Jen-Hsun Huang opting to open the event with a preview of what’s ahead for GPUs in big data and big computing. While the gaming and entertainment eye candy one expects at GTC did indeed find its way into the mix, talk of high performance computing, machine learning, computer vision, and large-scale analytics set the tone for the year, leaving no room for doubt that the GPU maker is serious about courting performance- and efficiency-conscious mainstream enterprise and research users.

NVIDIA’s roadmap for GPU computing revolves around resolving some of the core bottlenecks that have always existed for accelerators: data movement and memory capability. In this era of “big data,” performance drops off as ever-larger data streams are added, even with innovations that try to hide the cost by letting the GPU crunch while data moves in the background, as with recent efforts around direct memory access (DMA).
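To make that background-transfer trick concrete, here is a minimal CUDA sketch of the standard overlap pattern, not drawn from NVIDIA’s announcement: pinned host memory plus asynchronous copies issued on separate streams, so the transfer for one chunk of data hides behind the kernel working on another. The kernel, buffer sizes, and four-way chunking are illustrative assumptions.

```
#include <cuda_runtime.h>

// Illustrative kernel: doubles each element of a chunk.
__global__ void process(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int N = 1 << 20;      // total elements (assumed size)
    const int CHUNK = N / 4;    // split the work across four streams
    float *h, *d;
    cudaMallocHost((void**)&h, N * sizeof(float)); // pinned memory: required for true async copies
    cudaMalloc((void**)&d, N * sizeof(float));
    for (int i = 0; i < N; i++) h[i] = 1.0f;

    cudaStream_t stream[4];
    for (int s = 0; s < 4; s++) cudaStreamCreate(&stream[s]);

    // Copies on one stream overlap with kernels running on the others,
    // so the GPU keeps crunching while data moves in the background.
    for (int s = 0; s < 4; s++) {
        int off = s * CHUNK;
        cudaMemcpyAsync(d + off, h + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, stream[s]);
        process<<<(CHUNK + 255) / 256, 256, 0, stream[s]>>>(d + off, CHUNK);
        cudaMemcpyAsync(h + off, d + off, CHUNK * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < 4; s++) cudaStreamDestroy(stream[s]);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}
```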

NVIDIA’s answer to the data movement bottleneck came in today’s announcement of NVLink, a chip-to-chip communication approach that gives the GPU a dedicated line to other GPUs and lets it hook directly to the CPU along unified memory lines, all without the weight of PCIe, which even at its best in the current 3.0 generation can’t compare with what NVIDIA has cooked up. In effect, NVLink behaves much like an extension of PCIe with DMA: a higher-bandwidth (and, one should add, proprietary) set of pipes that separates out the efficiency and performance drains of pure PCIe instead of running both through the same lanes. The end result, said Huang during his keynote, is a 5-12x performance improvement over PCIe 3.0 and a 4x efficiency boost.

NVIDIA was reluctant to share a great deal in the way of detail, but in essence, NVLink is composed of bidirectional eight-lane “bricks” that can be ganged together to deliver the promised bandwidth boost, with each lane in a brick running at around 20 Gb/s. It appears, however, that this describes the second generation of the interconnect; the first iteration will sport a four-lane highway and will debut in Pascal, which we’ll get to in a moment.
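To put those numbers in rough perspective, and taking the stated per-lane rate at face value: an eight-lane brick works out to 8 × 20 Gb/s = 160 Gb/s, or about 20 GB/s in each direction, and a GPU that ganged four such bricks would see on the order of 80 GB/s each way. A PCIe 3.0 x16 slot, by comparison, tops out near 16 GB/s per direction, which puts Huang’s 5-12x claim in plausible territory. (The four-brick configuration here is our own illustrative assumption, not a stated spec.)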

[Image: NVLink1, NVLink connecting the GPU directly to the CPU]

In the event that a user is hooked in with a CPU that doesn’t support NVLink, the same fast lane can be opened between GPUs as below.

[Image: NVLink2, GPU-to-GPU NVLink with the CPU attached over PCIe]
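On the software side, direct GPU-to-GPU traffic already has a home in CUDA’s peer-to-peer API, and a faster physical link would simply slot in underneath it. The sketch below shows the existing pattern, checking peer capability, enabling peer access, and copying straight between two devices’ memories; the device IDs and buffer size are illustrative assumptions.

```
#include <cuda_runtime.h>

int main() {
    // Can GPU 0 map GPU 1's memory directly (no host staging)?
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);

    const size_t bytes = 64 << 20;  // 64 MB test buffer (assumed size)
    float *buf0, *buf1;
    cudaSetDevice(0); cudaMalloc((void**)&buf0, bytes);
    cudaSetDevice(1); cudaMalloc((void**)&buf1, bytes);

    if (canAccess) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // open the direct GPU-GPU path
    }

    // cudaMemcpyPeer takes the fastest route available between the devices,
    // falling back to staging through host memory when no direct path exists.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaSetDevice(0); cudaFree(buf0);
    cudaSetDevice(1); cudaFree(buf1);
    return 0;
}
```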

This is the sort of development one might expect out of a research group led by interconnect wizard Bill Dally. And it might seem that there would have to be a “catch” of some sort. Other than having to start from the ground up in terms of building and investing in new motherboards and an ecosystem, it’s hard to see what the challenges might be at this point, beyond which OEMs will go out of their way to meet the terms of the yet-unannounced licensing plan. While it may mean a new set of motherboards to contend with, the good news is that the module, which is very small, can be snapped in to allow for the construction of very dense servers. Additionally, the programming model shouldn’t be its own bottleneck: NVLink looks very much like PCIe, but with its own special DMA capabilities, so software should adapt to it easily. NVIDIA notes that the first generation will not be memory coherent; users will have to hold out for the second iteration of NVLink, by which time there might be a chance for an ecosystem to develop around it.

All of this work starts to come together around the 2016 timeframe with the arrival of Pascal, which was announced today to fill in the gap between now and Volta. Pascal, named after the famous mathematician, will provide unified memory and 3D memory in addition to sporting what will likely be the first generation of NVLink. As you can see, Maxwell is right on schedule, though NVIDIA declined questions about when it would be extended to meet the needs of the Tesla group.
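The unified memory piece, at least, is something developers can already try: CUDA 6 introduced a managed allocator that presents a single address space to both CPU and GPU, with the runtime migrating data behind the scenes, and Pascal’s hardware support is meant to make that model cheaper. A minimal sketch of the pattern follows; the kernel and sizes are illustrative assumptions.

```
#include <cuda_runtime.h>

// Illustrative kernel: halves each element in place.
__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 0.5f;
}

int main() {
    const int N = 1 << 20;
    float *x;
    // One allocation visible to both host and device; no explicit
    // cudaMemcpy calls, the runtime migrates the data as needed.
    cudaMallocManaged(&x, N * sizeof(float));

    for (int i = 0; i < N; i++) x[i] = 1.0f;  // touched on the host

    scale<<<(N + 255) / 256, 256>>>(x, N);    // touched on the device
    cudaDeviceSynchronize();                  // must finish before the host reads again

    float check = x[0];                       // safe after the synchronize
    (void)check;
    cudaFree(x);
    return 0;
}
```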

[Image: NVPascal, NVIDIA GPU roadmap placing Pascal between Maxwell and Volta]

One of the key features of Pascal is the addition of stacked memory, which NVIDIA says will well over triple the bandwidth, from 288 GB/s today to around 1 TB/s (roughly a 3.5x jump). Additionally, this replacement for off-package GDDR5 is set to offer around 4x the energy efficiency by making the voltage regulators and compute close neighbors.

“GPUs have 288 GB/s of bandwidth already, which is many times that of the CPU—the very reason why GPUs contribute so much to parallel computation,” said Huang. “Of course we would love to have many times more. But the challenge is, the GPU already has a lot of pins; it’s already the biggest chip in the world. The interface is already very wide. How do you solve this when going wider would make the package enormous and making the signaling go faster would push down energy efficiency and we know we’re power limited in almost every application we’re pursuing?”

Huang answered his own questions by introducing Pascal, a board the size of an iPhone (around one-third the size of a PCIe card) that will sport the 3D memory and the first generation of NVLink. What’s rather interesting about the outlook for Pascal is that Huang didn’t talk about it in terms of form factors. He referred to it simply as a “module,” meaning that while servers are a natural home, NVIDIA wants to shop it around to other kinds of devices as well.

During the keynote the emphasis was on various modes of mobility and access, from cloud-delivered services to self-driving cars with modular units in the trunk to hints that ultrasound and other medical devices could be suitable hosts. In short, as we wait for NVIDIA to roll out Volta, Maxwell and eventually Pascal could be making the rounds outside of the box.
