HPC Technique Propels Deep Learning at Scale

By Tiffany Trader

February 21, 2017

Researchers from Baidu’s Silicon Valley AI Lab (SVAIL) have adapted a well-known HPC communication technique to boost the speed and scale of their neural network training and now they are sharing their implementation with the larger deep learning community.

The technique, a modified version of the OpenMPI algorithm “ring all-reduce,” is being used at Baidu to parallelize the training of their speech recognition model, Deep Speech 2, across many GPU nodes. The two pieces of software Baidu is announcing today are the baidu-allreduce C library and a patch for TensorFlow that lets people who have already built models in TensorFlow compile the patched version and use it to parallelize training across many devices. Both are available on GitHub.

Ring all-reduce – all GPUs send data simultaneously

Baidu’s SVAIL team developed the approach about two years ago for their internal deep learning framework, named Gene and Majel (in tribute to Star Trek creator Gene Roddenberry and Majel Barrett, the actress who voiced the onboard computer for the series). The technique is commonplace in HPC circles but underused within artificial intelligence and deep learning, according to Baidu.
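For readers new to the algorithm, the sketch below is a minimal single-process NumPy simulation of ring all-reduce, not Baidu's code: each simulated GPU's buffer is split into N chunks, a reduce-scatter phase sums the chunks around the ring, and an all-gather phase circulates the finished sums. All names are illustrative.

    import numpy as np

    def simulated_ring_allreduce(buffers):
        """Single-process simulation of ring all-reduce (sum) over
        `buffers`, a list of equal-length arrays, one per simulated GPU."""
        n = len(buffers)
        chunks = [np.array_split(buf, n) for buf in buffers]

        # Reduce-scatter: in step s, rank r sends chunk (r - s) mod n to
        # rank (r + 1) mod n, which adds it in place. After n - 1 steps,
        # rank r holds the fully summed chunk (r + 1) mod n.
        for s in range(n - 1):
            for r in range(n):
                c = (r - s) % n
                chunks[(r + 1) % n][c] += chunks[r][c]

        # All-gather: in step s, rank r passes its finished chunk
        # (r + 1 - s) mod n to rank (r + 1) mod n, which overwrites its copy.
        for s in range(n - 1):
            for r in range(n):
                c = (r + 1 - s) % n
                chunks[(r + 1) % n][c] = chunks[r][c].copy()

        return [np.concatenate(ch) for ch in chunks]

    # Four simulated GPUs, each starting with its own gradient buffer.
    grads = [np.full(8, float(r)) for r in range(4)]
    for out in simulated_ring_allreduce(grads):
        assert np.allclose(out, 0.0 + 1.0 + 2.0 + 3.0)

The appeal in a bandwidth-bound setting is that each of the 2(N-1) steps moves only 1/N of the buffer, so per-GPU traffic stays nearly constant as the number of GPUs grows.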

Many of the researchers in the SVAIL group had come from the high performance computing space and recognized the competitive edge it offered.

“The algorithm is actually part of OpenMPI, but the OpenMPI implementation is not as fast,” comments Baidu Research Scientist Shubho Sengupta. “So the way we stumbled upon it was we started using OpenMPI for doing training and we realized it was not scaling to the extent that we want it to scale. I started digging through the OpenMPI source, found the algorithm, saw that it’s not very efficient, and reimplemented it.”

The SVAIL researchers wrote their own implementation of the ring algorithm for higher performance and better stability. The key distinction from the OpenMPI version is that the SVAIL implementation avoids extraneous copies between the CPU and GPU.

Explains Sengupta, “Once OpenMPI does the communication of these matrices, if the matrices are in GPU memory, it actually copies to CPU memory to do the reduction part of it – that’s actually quite wasteful. You don’t really need to do a copy, you could just write a small kernel that does the reduction in GPU memory space itself. And this especially helps when you are doing all-reduce within a node and all the GPUs are within a PCI root complex, then it doesn’t do any of the copies actually – it can just do everything in GPU memory space. This very simple idea of eliminating this copy resulted in this speedup in scaling over OpenMPI’s own implementation.”
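To make the distinction concrete, here is a hedged sketch of the two reduction paths, using CuPy as a stand-in for the small hand-written GPU kernel Sengupta describes; this is illustrative, not Baidu's code.

    import cupy as cp  # GPU arrays; stands in for a custom reduction kernel

    recv_gpu = cp.random.random((2048, 2048)).astype(cp.float32)
    accum_gpu = cp.random.random((2048, 2048)).astype(cp.float32)

    # Host-staged reduction, as described for the stock OpenMPI path:
    # device-to-host copy, reduce on the CPU, copy back to the device.
    host_sum = cp.asnumpy(accum_gpu) + cp.asnumpy(recv_gpu)
    accum_gpu = cp.asarray(host_sum)

    # GPU-resident reduction: one elementwise kernel, no PCIe round trip.
    accum_gpu += recv_gpu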

Employing this algorithm along with SVAIL’s focus on fast networking (InfiniBand) and careful hardware-software codesign has enabled the team to get linear GPU scaling up to 128 GPUs, an achievement that was detailed in their December 2015 paper, “Deep Speech 2: End-to-End Speech Recognition in English and Mandarin.”

With their internal implementation of ring all-reduce, the team achieves a 2.3X to 21.4X speedup over OpenMPI (version 1.8.5), depending on the number of GPUs.

Sengupta notes that their implementation is fastest for a small number of GPUs. “At 8 GPUs it’s about 20x faster, then as you increase the number of GPUs, it drops because now you actually have to copy data to the CPU to send across the network. But for the internal framework, we can scale all the way up to 128 GPUs and get linear scaling.”

Comparison of two different all-reduce implementations. All times are in seconds. Performance gain is the ratio of OpenMPI all-reduce time to SVAIL’s all-reduce time. (Source: Deep Speech 2 paper)

Sengupta’s teammate, Baidu Research Scientist Andrew Gibiansky, says similar benefits can now be seen with TensorFlow: “In terms of the TensorFlow implementation, we get the same linear scaling path past eight [GPUs]. In terms of a comparison with running on a single GPU, it ends up being about 31x faster at 40 GPUs.”

After the Deep Speech 2 paper was published, the SVAIL team began getting requests from community members who wanted to know more about the implementation. Because the algorithm is tightly coupled to SVAIL’s proprietary deep learning framework, the team needed a different way to release it, so they created two new implementations, one specifically for TensorFlow and one that is more general.

Gibiansky, who led the work on the TensorFlow patch, describes their multi-pronged approach to disseminating the information. “You can read the blog post [for a thorough technical explanation] and figure it out. If you’re using TensorFlow, you can use our modification to train your own models with this. And if you’re a deep learning author, you can look at our C library and integrate that. The goal is really to take this idea we’ve found to be really successful internally and try to start spreading it so that other people can also take advantage of it.”

Sengupta shares an interesting perspective on the opportunities to be mined for deep learning within HPC.

“With MPI, people [in deep learning] think that it is this old technology, that it is not relevant, but I think because of our work we have shown that you can build very fast collectives using MPI,” says Sengupta. “That allows you to do synchronous gradient descent, which converges faster and gives you deterministic results, and you don’t need to do asynchronous gradient descent with parameter servers, which was the dominant way of doing this when we first started.”
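As a sketch of the synchronous pattern Sengupta is describing, the loop below averages gradients with mpi4py's generic Allreduce (not Baidu's faster ring implementation); the gradient computation is faked, since the point is the communication structure.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    n_workers = comm.Get_size()

    weights = np.zeros(1000, dtype=np.float32)  # replicated on every rank
    lr = 0.01

    for step in range(100):
        # Stand-in for backprop on this rank's local minibatch.
        local_grad = np.random.randn(1000).astype(np.float32)
        summed_grad = np.empty_like(local_grad)

        # Synchronous step: sum gradients across all ranks, then average.
        comm.Allreduce(local_grad, summed_grad, op=MPI.SUM)
        weights -= lr * summed_grad / n_workers

    # Every rank applies the identical averaged gradient each step, so the
    # replicas stay in lockstep with no parameter server and no staleness.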

As for the reduced-copy approach propagating back to MPI, Gibiansky notes that some of the other MPI implementations are slowly moving their collectives to GPU versions. “MVAPICH recently introduced an all-gather that doesn’t end up copying to the CPU – so OpenMPI will probably get there, it just might take a while. Potentially, by giving this a little more visibility, we can spur that on.”

“There’s a lot of interest now in collectives, and one thing we also realized is that the all-reduce operation used in traditional HPC setups transfers data that’s actually not very large,” Sengupta adds. “What it usually does, when I talk to HPC people, is figure out the status of something across a bunch of machines – while in deep learning we are transferring these large matrices – like 2048×2048, essentially 4 million 32-bit floating-point values. For the traditional HPC community, this is a very atypical input for all-reduce; it does not actually use all-reduce with really large data sizes. I think with deep learning, more and more people are realizing that collective operations for really large matrices are also very important.”
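The back-of-the-envelope arithmetic behind that example, plus the standard per-GPU traffic bound for a ring all-reduce (each GPU sends 2(N-1) chunks of size M/N), works out as follows; the GPU count is illustrative.

    n_values = 2048 * 2048       # 4,194,304 elements, ~4 million
    msg_bytes = n_values * 4     # float32: 16 MiB per all-reduce
    n_gpus = 8                   # illustrative ring size
    per_gpu_bytes = 2 * (n_gpus - 1) * msg_bytes // n_gpus
    print(msg_bytes / 2**20)     # 16.0 MiB
    print(per_gpu_bytes / 2**20) # 28.0 MiB sent per GPU per all-reduce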

A detailed explanation of ring all-reduce and Baidu’s GPU implementation is covered in this technical blog post, published today by Baidu Research. A variant of the technique is also used to provide high-performance node-local scaling for PaddlePaddle, the company’s open source deep learning framework.
