GTC15 Keynote Highlights 10X GPU Computing Growth

By Tiffany Trader

March 17, 2015

Although the focus of this year’s GPU Technology Conference keynote wasn’t particularly HPC-centric, references to supercomputing abounded throughout the two-hour presentation delivered by charismatic NVIDIA CEO Jen-Hsun Huang to a crowd of 4,000 this morning inside the packed San Jose McEnery Convention Center.

In addition to the requisite-but-very-cool GPU-enabled visual demonstrations meant to showcase this year’s theme of deep learning, attendees also heard about the progress GPU computing has made in the last six years. And, as has become custom at the annual event, NVIDIA debuted a new GPU and revealed key pieces of its graphics computing roadmap out to 2018.

As for that next-generation NVIDIA GPU that just dropped, the honor goes to Titan X, a variant of the Titan chip that NVIDIA launched in 2013 in homage to its flagship supercomputer win of the same name at Oak Ridge National Laboratory. Titan X, though, offers little in the way of double-precision floating point performance (just 0.2 teraflops), so it's obviously not a great fit for most HPC workloads. But with 7 teraflops of single-precision performance, Titan X *is* a boon to deep learning workloads (natch).

[Slide: GTC15 Titan X GPU launch]
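To make that precision split concrete, here is a minimal CUDA sketch (illustrative only, not from NVIDIA's materials): the same AXPY kernel in single and double precision. On a GM200-class part such as Titan X, the FP64 version runs at roughly 1/32 of the FP32 rate, consistent with the 7 versus 0.2 teraflop figures.

```cuda
// Illustrative sketch: identical arithmetic, different precision.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // FP32: runs at the chip's full rate
}

__global__ void daxpy(int n, double a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // FP64: throttled by the chip's few FP64 units
}
```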

For FP64 performance-seekers, Huang pointed to Titan Z, which has 2.6 teraflops DP (and 8.0 teraflops SP). “The Titan X,” said Huang, “is designed for single-precision. For people who want double-precision, we still have Titan Z, the fastest single-card double-precision GPU we have. Titan X is the highest performing single-precision with the largest frame buffer [12GB] and most advanced GPU architecture we have created, all based on Maxwell.”

In a press forum after the keynote, NVIDIA Senior Vice President of GPU Engineering Jonah Alben addressed the dearth of double-precision floating point performance, noting:

“NVIDIA has one common GPU architecture, but we make different choices depending on which particular customers we are targeting a given chip for. Titan X is based on a GPU that’s part of the family of GM20x chips [second-generation Maxwell], therefore it has the same properties as those chips and it’s targeted for a deep learning type of customer… and we have other products that are great for double-precision.”

The keynote was also an opportunity for Jen-Hsun Huang to highlight the growth of GPU computing since 2008 (CUDA debuted in 2007). In that fledgling year, there were 150,000 CUDA downloads, 27 CUDA apps, 4,000 relevant academic papers, 60 universities starting to teach CUDA-accelerated computing, and 6,000 Tesla GPUs shipped, the equivalent of 77 teraflops of GPU-accelerated supercomputing power.

[Slide: GTC15 10X growth of GPU computing]

Fast-forward to the present day and the figures reflect a roughly 10X jump in NVIDIA-backed GPU computing: 3 million CUDA downloads, 319 CUDA applications, 800 universities around the world teaching CUDA and GPU acceleration, 60,000 papers citing the use of GPUs for research, and 450,000 Tesla GPUs shipped, providing a whopping 54 petaflops of accelerated computing to supercomputers and high-performance computing centers globally.

“What we enable is the world’s most popular, world’s most accessible supercomputing platform,” Huang effused. “Any researcher, any student, any engineer can reach out very easily and get a GPU that’s powered by CUDA to accelerate their research.”

“Most of the applications we serve are really about speed,” he later continued. “Without the speed, it is simply impossible for you to do your work. One of my favorite quotes was when a researcher came to me and said ‘Because of your work I’m now able to do my life’s work in my lifetime.’”

Looking Ahead

A refreshed graphics processor roadmap was also on the agenda for the morning talk, providing a glimpse at the upcoming Pascal GPUs and Volta parts. Not much has changed with regard to Pascal in the last twelve months, but Huang did confirm key elements, notably: mixed precision computing for greater accuracy, 3D memory with 3X the bandwidth and nearly 3X the frame buffer capacity of Maxwell, and of course NVLINK, which is on track to provide a 5-to-12 times speedup in data movement between GPUs and CPUs compared with today's standard, PCI-Express.
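For a rough sense of what that NVLink multiplier implies, here is a back-of-the-envelope sketch; the 16 GB/s PCIe 3.0 x16 baseline below is my assumption, not a figure NVIDIA quoted.

```cuda
#include <stdio.h>

/* Sketch only: scale an assumed PCIe 3.0 x16 baseline (~16 GB/s theoretical,
 * per direction) by the 5x-12x NVLink speedup cited in the keynote. */
int main(void) {
    const double pcie3_x16_gbs = 16.0;  /* assumed baseline, not an NVIDIA figure */
    printf("NVLink low estimate:  %.0f GB/s\n",  5.0 * pcie3_x16_gbs);  /*  ~80 */
    printf("NVLink high estimate: %.0f GB/s\n", 12.0 * pcie3_x16_gbs);  /* ~192 */
    return 0;
}
```

In other words, somewhere in the neighborhood of 80 to 192 GB/s of CPU-GPU bandwidth, if the multiplier is taken at face value.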

Volta returns to the lineup after NVIDIA switched up some key pieces of the roadmap last year, putting Pascal in the slot earlier reserved for Volta and leaving many wondering about the fate of that architecture. The 3D stacked memory and NVLINK technology originally planned for a Volta debut were moved over to Pascal, enabling them to still make the 2016 schedule. No details of Volta were disclosed today other than its projected 2018 launch.

[Slides: GTC15 roadmap above; GTC14 roadmap below. Note the return of Volta, which was pulled from last year's roadmap.]

Huang also shared some figures that show Pascal delivering 10X better performance than Maxwell.

[Slide: GTC15 Pascal 10X Maxwell details]

It was later clarified in the press Q&A that this significant speedup specifically referenced applications that benefit from FP16 computation, such as deep learning and imaging in general.
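The FP16 mechanics are easy to sketch. The minimal CUDA illustration below is hypothetical (it uses the cuda_fp16.h half-precision header, which postdates this keynote): data is stored as 16-bit halves, halving memory traffic versus FP32, while the arithmetic is done in FP32.

```cuda
#include <cuda_fp16.h>

// Hypothetical sketch: store values as 16-bit halves (half the memory
// traffic of FP32) while computing in FP32 -- the mixed-precision pattern
// Pascal was slated to accelerate.
__global__ void scale_fp16(int n, float a, const __half *x, __half *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float xf = __half2float(x[i]);   // convert FP16 -> FP32
        y[i] = __float2half(a * xf);     // compute in FP32, store as FP16
    }
}
```

Halving the storage format moves twice as many values per byte of bandwidth, which is broadly where the deep learning and imaging gains come from.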

 
