Nvidia Launches Pascal GPUs for Deep Learning Inferencing

By Tiffany Trader

September 12, 2016

Already entrenched in the deep learning community for neural net training, Nvidia wants to secure its place as the go-to chipmaker for datacenter inferencing. At the GPU Technology Conference (GTC) in Beijing today (Tuesday), Nvidia CEO Jen-Hsun Huang unveiled the latest additions to the Tesla line, Pascal-based P4 and P40 GPU accelerators, as well as new software all aimed at improving performance for inferencing workloads that undergird applications like voice-activated assistants, spam filters, and recommendation engines.

Employing the same form factor as the Maxwell-based M4 and M40 GPUs, the new Pascal cards were designed to accelerate inferencing workloads. Most significantly, the GPUs feature specialized inference instructions based on 8-bit integer (INT8) operations. Using the VGG image recognition model as a benchmark, Nvidia reports that the P40 achieved a 45x faster response than an E5-2690 v4 Xeon (running the latest Intel Math Kernel Library) and a 4x improvement over the M40, which debuted last November at Supercomputing. In both cases, the P40 was running INT8 instructions, while the comparison hardware was using FP32.

For the test, Nvidia paired the Tesla P40 with an internal version of the company’s TensorRT library, which is also being announced today. TensorRT, formerly known as GIE (GPU Inference Engine), enables trained neural nets to run efficiently on Pascal GPUs, says Nvidia. The library takes neural nets, typically built with 32-bit or 16-bit floating-point operations, and tunes them for the specific GPU on which they will be deployed.
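The announcement doesn't show the library's interface, but the workflow it describes, ingesting a trained FP32 network and rebuilding it at reduced precision for a target GPU, maps onto what later became the standard TensorRT flow. Below is a minimal sketch using the modern TensorRT Python API, which postdates this article (the 2016-era GIE exposed a C++ interface); the model file name is a hypothetical placeholder:

```python
import tensorrt as trt

# Build-time flow: parse a trained network, then compile an engine
# tuned for the specific GPU it will run on.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("vgg16.onnx", "rb") as f:   # hypothetical exported model file
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)  # request 8-bit kernels
# A real deployment would also set config.int8_calibrator to an object
# implementing trt.IInt8EntropyCalibrator2, which feeds sample inputs
# so the builder can choose per-tensor scale factors.
engine = builder.build_serialized_network(network, config)
```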

“If there’s a GPU in the datacenter like the P4 or P40, then TensorRT will automatically recognize that and transform that neural net into 8 bit,” said Roy Kim, a product manager in Nvidia’s Tesla HPC business unit. “And TensorRT will take [the] neural net and deploy it anywhere – it could deploy it in an embedded Jetson program, for example.”

On the training side, models need the higher precision of at least 16-bit floating point (FP16), but once a model is trained, its dynamic range can be reduced to 8 bits without a loss of accuracy. The upshot of INT8 is that it enables four times the throughput of single-precision floating point (FP32).
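To make the range-reduction step concrete, here is a minimal sketch (Python/NumPy, not from the article) of symmetric linear quantization, the basic scheme behind INT8 inference: derive a per-tensor scale from the observed FP32 range, round to 8-bit integers, and dequantize to see how small the round-trip error is:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of FP32 values to INT8."""
    scale = np.abs(x).max() / 127.0   # map observed range onto [-127, 127]
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(4096).astype(np.float32)  # stand-in for a trained layer
q, scale = quantize_int8(weights)
error = np.abs(dequantize(q, scale) - weights).max()
print(f"scale={scale:.5f}, max round-trip error={error:.5f}")
```

Packed 8-bit instructions let the GPU execute four such multiply-accumulates in the slot of a single FP32 operation, which is where the 4x throughput figure comes from.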

[Image: Nvidia Tesla P4 specifications]

[Image: Nvidia Tesla P40 specifications]

The P4 is designed for scale-out datacenter servers and prioritizes energy efficiency, whereas the P40 emphasizes high throughput for deep learning workloads. The P40 is for customers who want to deploy lots of GPUs in a box for batch-mode work, such as overnight processing of video data, said Kim. A single Tesla P4 provides 22 tera-operations per second (TOPS) of INT8 performance, while the P40 offers 47 TOPS, both figures with boost clock enabled.
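Those throughput numbers are consistent with the 4x INT8-over-FP32 multiplier described above. A back-of-envelope check in Python, using the cards' published CUDA core counts and boost clocks (spec-sheet values assumed here, not figures from this article):

```python
# Peak FP32 FLOPS = CUDA cores x 2 ops/cycle (fused multiply-add) x boost clock;
# INT8 TOPS is then 4x the FP32 TFLOPS via the packed 8-bit instructions.
for name, cores, boost_ghz in [("P4", 2560, 1.063), ("P40", 3840, 1.531)]:
    fp32_tflops = cores * 2 * boost_ghz / 1000.0
    print(f"{name}: {fp32_tflops:.1f} TFLOPS FP32 -> {4 * fp32_tflops:.0f} TOPS INT8")
# P4:  5.4 TFLOPS FP32 -> 22 TOPS INT8
# P40: 11.8 TFLOPS FP32 -> 47 TOPS INT8
```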

Nvidia also unveiled a new software development kit to help speed video analytics workloads. The DeepStream SDK provides APIs for transcoding video into various formats, preprocessing the decoded frames, and hooking into deep learning frameworks for inference, the company said. With DeepStream, a single server with a Tesla P4 (plus two E5-2650 v4 CPUs) can simultaneously decode and analyze up to 93 HD video streams in real time, compared with seven streams on a GPU-less Broadwell-based box, according to Nvidia.
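DeepStream's actual API isn't shown in the announcement, but the pipeline it accelerates (decode a compressed stream, preprocess the frames, push batches through a neural net) is straightforward to picture. A generic illustration in Python with OpenCV, with no DeepStream calls and a stubbed-out network:

```python
import cv2
import numpy as np

def infer(batch: np.ndarray) -> np.ndarray:
    """Placeholder for a GPU-resident network (e.g., a compiled inference engine)."""
    return np.zeros((len(batch), 1000), dtype=np.float32)  # fake class scores

stream = cv2.VideoCapture("camera_feed.mp4")   # hypothetical HD source
batch, BATCH_SIZE = [], 8

while True:
    ok, frame = stream.read()                  # 1. decode one compressed frame
    if not ok:
        break
    frame = cv2.resize(frame, (224, 224))      # 2. preprocess to the net's input size
    batch.append(frame.astype(np.float32) / 255.0)
    if len(batch) == BATCH_SIZE:               # 3. run inference on a full batch
        scores = infer(np.stack(batch))
        batch.clear()

stream.release()
```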

Nvidia continues to count Baidu as a key partner and confirmed that the Chinese search giant still uses Nvidia GPUs for both training and inference with its Deep Speech 2 system. Hyperscalers like Baidu are increasingly concerned with minimizing the time it takes for their systems to recognize speech, images or text in response to queries from users and devices.

“Delivering simple and responsive experiences to each of our users is very important to us,” said Greg Diamos, senior researcher at Baidu. “At Baidu, we have deployed NVIDIA GPUs in production to provide AI-powered services such as our Deep Speech 2 system, and the use of GPUs enables a level of responsiveness that would not be possible on un-accelerated servers.”

“The complexity of that Deep Speech 2 model has increased by 10x in just one year,” said Nvidia’s Kim. “So it makes sense from the training side why they need GPUs. But on the inferencing side, they are seeing a problem. Whereas it used to be okay to deploy on CPU servers, it isn’t tenable anymore. With hyperscalers every millisecond matters. Baidu believes that after 500 milliseconds, user engagement goes down. With the Pascal GPU the response is almost immediate, about 100 milliseconds.”

Nvidia said it took pains to ensure it used the latest Intel hardware and software for its comparison testing. The graphics chipmaker’s message is that even the latest Broadwell CPUs are challenged by today’s complex inferencing workloads. To Intel’s mind, however, the star of its deep learning portfolio is its Xeon Phi manycore processor. We imagine a fuller picture of the comparative performance advantages of Nvidia and Intel silicon will emerge when Pascal GPUs go head to head against Knights Landing on a range of workloads. Things will get even more interesting next year with the debut of the next-generation Phi processor, Knights Mill, which will support lower-precision computations.

The Tesla P40 is expected to be available next month and the P4 the month after. The cards will be available from all major OEMs and ODMs, including Dell Technologies, HPE, Inspur, Inventec, Lenovo, QCT, Quanta Computer and Wistron.

The DeepStream SDK will be available to early users as part of an invite-only closed beta program.
