Baidu Researcher Pushes GPU Scalability for Deep Learning

By Tiffany Trader

June 20, 2016

Editor’s Note: While Andrew Ng, chief scientist at Baidu, was delivering his ISC keynote this morning on how HPC is supercharging AI, his colleague Greg Diamos, a research scientist at Baidu’s Silicon Valley AI Lab, was preparing to present a paper on GPU-based deep learning at the 33rd International Conference on Machine Learning (ICML) in New York.

Greg Diamos, senior researcher at Baidu’s Silicon Valley AI Lab, is on the front lines of the resurgence of machine learning. Before joining Baidu, Diamos worked at NVIDIA, first as a research scientist and then as an architect for the GPU streaming multiprocessor and the CUDA software. Given this background, it is natural that his research focuses on advancing GPU-based deep learning. Ahead of the paper’s presentation, Diamos answered questions about his research and his vision for the future of machine learning.

HPCwire: How would you characterize the current era of machine learning?

Diamos: There are two strong forces in machine learning. One is big data, or the availability of massive data sets enabled by the growth of the internet. The other is deep learning, or the discovery of how to train very deep artificial neural networks effectively. The combination of these two forces is driving fast progress on many hard problems.

HPCwire: There’s a lot of excitement for deep learning – is it warranted, and what would you say to those who say they aren’t on board yet?

Diamos: It is warranted. Deep learning has already tremendously advanced the state of the art on real-world problems in computer vision and speech recognition. Many problems in these domains and others that were previously considered too difficult are now within reach.

HPCwire: What’s the relationship between machine learning and high-performance computing and how is it evolving?

Diamos: The ability to train deep artificial neural networks effectively and the abundance of training data have pushed machine learning into a compute-bound regime, even on the fastest machines in the world. We find ourselves in a situation where faster computers directly enable better application-level performance, for example, better speech recognition accuracy.

HPCwire: So you’re presenting a paper at the 33rd International Conference on Machine Learning in New York today. The title is “Persistent RNNs: Stashing Recurrent Weights On-Chip.” First, can you explain what recurrent neural networks are and what problems they solve?

Diamos: Recurrent neural networks are functions that transform sequences of data – for example, they can transform an audio signal into a transcript, or a sentence in English into a sentence in Chinese. They are similar to other deep artificial neural networks, with the key difference being that they operate on sequences (e.g. an audio signal of arbitrary length) instead of fixed sized data (e.g. an image of fixed dimensions).
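
To make the sequence framing concrete, here is a minimal sketch of a vanilla recurrent network forward pass in NumPy. The layer sizes, random weights, and tanh nonlinearity are illustrative assumptions for this article, not details of Baidu’s speech models.

import numpy as np

# Minimal vanilla RNN forward pass (illustrative only, not Baidu's model).
hidden_size, input_size = 4, 3
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((hidden_size, hidden_size))  # recurrent weights
U = 0.1 * rng.standard_normal((hidden_size, input_size))   # input weights

def rnn_forward(x_seq):
    """Consume a sequence of arbitrary length, one timestep at a time."""
    h = np.zeros(hidden_size)
    for x_t in x_seq:                # the same W and U are reused every step
        h = np.tanh(W @ h + U @ x_t)
    return h

audio_like = rng.standard_normal((100, input_size))  # 100 timesteps of features
print(rnn_forward(audio_like))

The key point of the sketch is the loop: the same recurrent weights are applied at every timestep of an arbitrarily long input, which is what distinguishes these networks from fixed-size feed-forward models.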

HPCwire: Can you provide an overview of your paper? What problem(s) did you set out to solve and what was achieved?

Diamos: It turns out that although deep learning algorithms are typically compute bound, we have not figured out how to train them at the theoretical limits of performance of large clusters, and there is a big opportunity remaining. The difference between the sustained performance of the fastest RNN training system that we know about at Baidu, and the theoretical peak performance of the fastest computer in the world is approximately 2500x.

The goal of this work is to improve the strong scalability of training deep recurrent neural networks in an attempt to close this gap. We do this by making GPUs 30x more efficient on smaller units of work, enabling better strong scaling. We achieve a 16x increase in strong scaling, going from 8 GPUs without our technique to 128 GPUs with it. Our implementation sustains 28 percent of peak floating point throughput at 128 GPUs over the entire training run, compared to 31 percent on a single GPU.
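
A rough way to see why stashing recurrent weights on chip matters: under strong scaling, each GPU’s share of the batch gets small, and if the recurrent weight matrix must be re-read from off-chip memory at every timestep, the step becomes memory-bound. The back-of-envelope sketch below uses illustrative assumptions (a 1152-unit hidden layer, a per-GPU batch of 4), not figures taken from the paper.

# Back-of-envelope arithmetic-intensity estimate (illustrative assumptions).
hidden = 1152            # hidden units per recurrent layer (assumed)
batch = 4                # per-GPU batch size under strong scaling (assumed)
bytes_per_float = 4

flops_per_step = 2 * hidden * hidden * batch       # one recurrent matrix multiply
weight_bytes = hidden * hidden * bytes_per_float   # weights re-read each step
                                                   # if they live off-chip
print(f"{flops_per_step / weight_bytes:.1f} FLOPs per byte of weight traffic")
# ~2 FLOPs/byte at batch 4: far too low to reach peak math throughput, so the
# step is dominated by weight reloads. Keeping the weights resident in on-chip
# registers across timesteps removes that per-step traffic, which is the idea
# behind persistent RNN kernels.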

HPCwire: GPUs are closely associated with machine learning, especially deep neural networks. How important have GPUs been to your research and development at Baidu?

Diamos: GPUs are important for machine learning because they have high computational throughput, and much of machine learning, deep learning in particular, is compute limited.

HPCwire: And a related question – what does the scalability offered by dense servers all the way up to large clusters enable for deep learning and other machine learning workloads?

Diamos: Scaling training to large clusters enables training bigger neural networks on bigger datasets than are possible with any other technology.

HPCwire: What are you watching in terms of other processing architectures (Xeon Phi Knights Landing, FPGAs, ASICs, DSPs, ARM, and so forth)?

Diamos: In the five-year timeframe, I am watching two things: peak floating point throughput and software support for deep learning. So far GPUs are leading in both categories, but there is certainly room for competition. If other processors want to compete in this space, they need to be serious about software, in particular releasing deep learning primitive libraries with simple C interfaces that achieve close to peak performance. Looking farther ahead to the limits of technology scaling, I hope that a processor is developed in the next two decades that enables deep learning model training at 10 PFLOP/s in 300 watts, and 150 EFLOP/s in 25 megawatts.
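
For scale, here is a quick conversion of those targets into energy efficiency. Reading the 300-watt figure as a single processor and the 25-megawatt figure as a full machine built from such processors is my interpretation, not something stated above.

# Implied energy-efficiency targets (sketch; the chip vs. machine split is assumed).
chip_flops, chip_watts = 10e15, 300            # 10 PFLOP/s in 300 W
machine_flops, machine_watts = 150e18, 25e6    # 150 EFLOP/s in 25 MW

print(f"processor target: {chip_flops / chip_watts / 1e12:.0f} TFLOP/s per watt")
print(f"machine target:   {machine_flops / machine_watts / 1e12:.0f} TFLOP/s per watt")
# Roughly 33 TFLOP/s per watt at the processor and 6 TFLOP/s per watt for a
# full 25 MW machine, the gap leaving room for memory, interconnect, and
# cooling overheads.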

HPCwire: Baidu is using machine learning for image recognition, speech recognition, the development of autonomous vehicles, and more. What does the research you’ve done here help enable?

Diamos: This research allows us to train our models faster, which so far has translated into better application-level performance, e.g., speech recognition accuracy. I think this is an important message for people who work on high-performance computing systems. It provides a clear link between the work they do to build faster systems and our ability to apply machine learning to important problems.

Relevant links:

ICML paper: Persistent RNNs: Stashing Recurrent Weights On-Chip: http://jmlr.org/proceedings/papers/v48/diamos16.pdf

Video about Greg’s work at Baidu: https://www.youtube.com/watch?v=JkXbTOt_JxE
