NERSC Scales Scientific Deep Learning to 15 Petaflops

By Rob Farber

August 28, 2017

A collaborative effort between Intel, NERSC, and Stanford has delivered the first 15-petaflops deep learning software running on HPC platforms and is, according to the authors of the paper (and to the best of their knowledge), currently the most scalable deep learning implementation in the world. The work, described in the paper Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data [1], reports that a Cray XC40 system with 9,600 self-hosted nodes based on 1.4 GHz Intel Xeon Phi 7250 processors achieved a peak rate between 11.73 and 15.07 petaflops (single precision) and an average sustained performance of 11.41 to 13.47 petaflops when training on physics and climate data sets. The runs were performed on the Cori Phase-II supercomputer at NERSC (the National Energy Research Scientific Computing Center) at Lawrence Berkeley National Laboratory (Berkeley Lab). The group combined Intel Caffe, the Intel Math Kernel Library (Intel MKL), and the Intel Machine Learning Scaling Library (Intel MLSL) to achieve this scalability and performance. [2]

Along with scalability, Joe Curley, Intel’s senior director of HPC platform and ecosystem enabling, highlighted the scientific accomplishments this level of performance brings to deep learning researchers and to data-intensive scientific communities such as climate and high-energy physics (HEP). He also pointed out how the results further establish the deep learning performance capability of the Intel Xeon Phi processor-based computational nodes in the Cori supercomputer. “These were not just a set of heroic runs, they have solved real problems at the scale of a top five supercomputer using new methods,” Curley said.

Advancing Deep Learning at Scale

Prabhat, Data and Analytics Group Lead at NERSC, Berkeley Lab, emphasized that this performance and scalability result was very much a collaborative effort that: (A) utilized a neural network update scheme devised by Christopher Ré’s group (Department of Computer Science at Stanford University), (B) built on software infrastructure created by the Parallel Computing Lab and the Intel MKL and Intel MLSL product teams at Intel, and (C) leveraged the world-class people and hardware resources at NERSC.

Overall, Curley observes that the collaboration reported “reasonably good scaling performance,” as the 9,600-node cluster delivered an approximate 7,205x speedup. (Perfect scaling would have delivered a 9,600x speedup.) Curley is excited by the potential of this early work, stating, “Opportunities exist to improve performance and scaling in either future runs, or in the course of solving new problems. This really was an amazing early result on a fairly new machine.”

Algorithmic advances for deep learning scalability

The update scheme by Ré allows both synchronous and asynchronous updates of the ANN (artificial neural network) parameters.

Conceptually, asynchronous updates (and asynchronous architectures in general) provide the ability to scale to large numbers of nodes by removing synchronization barriers. This lock-free approach allows for faster model updates, but can require more updates to yield an equally good final model. Thus asynchronous updates can make the training process run longer, meaning it can take longer to converge. The thought process behind asynchronous updates is that the extra computational nodes add enough parallelism (and hence deliver greater performance) to overcome the potentially slower convergence behavior and thus deliver an overall faster time-to-model. Failure to converge to a good solution is also a possibility, although Ioannis Mitliagkas, former postdoctoral scholar at Stanford and currently assistant professor at the University of Montreal, observes from both classical and modern results on asynchrony that, “on well-behaved objectives, failure to converge implies a mis-tuned system.” Thus tuning is critical. However, it is worth noting that on deep learning objectives no system, synchronous or asynchronous, is guaranteed to converge to a good solution.

The asynchronous deep learning architecture used in the paper is illustrated below.

Each node works on its own iteration (mini-batch) and produces independent updates to the model. Those updates are sent to a central parameter store called the parameter server (noted as PS in the figure), which applies the updates to the model in the order they are received. After each update, the PS sends the new model back to the worker where the update originated.
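
To make that data flow concrete, below is a minimal toy sketch of such a parameter-server loop in Python. The thread layout, learning rate, and toy quadratic objective are illustrative assumptions, not the paper’s actual implementation.

```python
# Toy sketch of an asynchronous parameter-server loop (illustrative
# only; the names and the quadratic objective are assumptions).
import numpy as np
import queue
import threading

model = np.zeros(10)      # shared model held by the parameter server (PS)
updates = queue.Queue()   # worker updates arrive here in any order
lock = threading.Lock()

def parameter_server():
    """Apply updates in arrival order, then return the new model."""
    while True:
        worker_id, grad, reply = updates.get()
        if grad is None:                  # shutdown signal
            break
        with lock:
            model[:] -= 0.01 * grad       # SGD step, learning rate 0.01
            reply.put(model.copy())       # send fresh model back to worker

def worker(worker_id, steps=100):
    reply = queue.Queue()
    local_model = model.copy()
    for _ in range(steps):
        # Toy gradient of f(x) = ||x - 1||^2 on the (possibly stale) copy
        grad = 2.0 * (local_model - 1.0)
        updates.put((worker_id, grad, reply))
        local_model = reply.get()         # PS returns the updated model

ps = threading.Thread(target=parameter_server)
ps.start()
workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for w in workers: w.start()
for w in workers: w.join()
updates.put((None, None, None))
ps.join()
print("final model (should approach all-ones):", model.round(3))
```

Note how each worker computes its gradient against whatever model copy it last received; other workers’ updates may land at the PS in between, which is exactly the staleness discussed below.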

Asynchronous systems do not suffer from straggler effects and are not limited by the total batch size in the same way that synchronous systems are, an important property at scale.

Figure 1: Example synchronous and asynchronous architectures

Staleness, meaning that a worker computes its update against a copy of the model that other workers’ updates have since changed, is the reason that asynchronous systems may need more iterations to converge to a solution. Said another way, they have worse statistical efficiency.
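
In symbols, a standard way to write the stale-gradient update (textbook notation, not taken from the paper) is:

```latex
% SGD with a stale gradient: \tau_t counts how many other updates the
% parameter server has applied since this worker last fetched the model.
\[
  \theta_{t+1} = \theta_t - \eta\, \nabla f\!\left(\theta_{\,t-\tau_t}\right)
\]
```

When the staleness is zero at every step this reduces to ordinary SGD; the larger the typical staleness, the more iterations are generally needed.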

In contrast, the reduction operation used by synchronous training introduces an O(log(#Nodes)) runtime growth. Further, it makes the training susceptible to jitter (e.g., the “straggler effect”), as the computation can be rate-limited by the slowest node in the system during each iteration of the training procedure. [3] The straggler effect occurs when a delay on any node exceeds the ability of the reduction implementation to hide latency. Achieving low latency is an important goal for developers of products such as the Intel Omni-Path Architecture (Intel OPA) MPI libraries. In addition, using too many nodes during training can reduce the number of examples per node (i.e., the per-node mini-batch size) to the point of reduced node efficiency. Thus the authors note in their paper that synchronous training can potentially deliver worse hardware efficiency.
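
For contrast with the parameter-server sketch above, a minimal synchronous data-parallel step might look like the following mpi4py sketch. The model size, gradient, and learning rate are toy assumptions; the paper’s actual communication layer is Intel MLSL, not raw MPI.

```python
# Minimal sketch of one synchronous data-parallel SGD step using an
# MPI allreduce (mpi4py). Sizes and values are toy assumptions.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

model = np.ones(1000, dtype=np.float32)               # replicated on every node
local_grad = np.random.rand(1000).astype(np.float32)  # from this node's shard

# Every node blocks here until all contributions arrive: one slow
# node (a "straggler") stalls the entire step.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)

model -= 0.01 * (global_grad / nprocs)  # identical update on all nodes
```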

The trade-off between statistical efficiency vs. hardware efficiency suggested a third kind of architecture to the paper authors, which they call a hybrid system.

Mitliagkas points out, “Synchronous systems are the classic, straightforward approach, and it has some good and bad attributes. The bad attributes (straggler effect and susceptibility to slow nodes, and huge effective mini-batches at scale) motivate asynchronous systems. Those have different strengths and weaknesses motivating a tradeoff. Hybrid systems give you control of the tradeoff.”

In the hybrid approach, worker nodes coalesce into separate, synchronous compute groups where the workers split a mini-batch quantity of work among themselves to produce a single update to the model. There is no synchronization across compute groups so they are able to run asynchronously. The hybrid architecture used in the paper is illustrated below.

Figure 2: Hybrid architecture example
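
To make the grouping concrete, here is a sketch of the hybrid idea using mpi4py communicator splitting: workers inside a group allreduce synchronously, only each group’s root talks to the parameter server, and groups never synchronize with each other. The group size and the PS exchange are assumptions for illustration, not the paper’s code.

```python
# Hybrid sketch: synchronous allreduce *within* each group, no
# synchronization *across* groups. GROUP_SIZE and the parameter-server
# exchange are illustrative assumptions.
import numpy as np
from mpi4py import MPI

GROUP_SIZE = 4
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Color workers by group; each group gets its own communicator.
group = comm.Split(color=rank // GROUP_SIZE, key=rank)

local_grad = np.random.rand(1000).astype(np.float32)
group_grad = np.empty_like(local_grad)

# Synchronous step within the group only: a straggler stalls just
# these GROUP_SIZE workers, not the whole machine.
group.Allreduce(local_grad, group_grad, op=MPI.SUM)

if group.Get_rank() == 0:
    # In the full system, the group root would now send group_grad to
    # the parameter server and receive a fresh model, with no
    # coordination across groups (omitted here).
    pass
```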

The authors report they observed better scaling for their hybrid asynchronous updates over synchronous configurations due to reduced straggler effects. “This has been a big engineering effort,” Mitliagkas notes. “On the Stanford side, this would not have been possible without the engineering skills and hard work of Jian Zhang.” He also points out that in large-scale HPC runs, say on 10,000 nodes, the likelihood that some nodes will be slow is significant. For this reason, synchronous systems’ performance can be quite unpredictable. Asynchronous systems, on the other hand, degrade more gracefully: a slow node only affects its own synchronous group. Results in their paper show the hybrid method performing 1.66x better than the best synchronous run, and about 10x better than the worst synchronous run.

Figure 3: Training loss vs. wall-clock time for HEP on 1K nodes, comparing a synchronous configuration to hybrid runs with 2, 4, and 8 groups

The weak scaling plots below, where the amount of work per node is kept constant, show that the scalability of the system can vary with the task.

Mitliagkas emphatically states, “People typically report weak scaling, because strong scaling is hard.” He continues, “For machine learning systems, strong scaling (keeping the total amount of work constant) is more representative of actual performance.” [4] He reinforces his point by noting that “synchronous approaches can only scale up to the size of the mini-batch.”

Figure 5: Strong scaling results for synchronous and hybrid approaches (batch size = 2048 per synchronous group).

Finding a good configuration is not an easy task. Recognizing this complexity, Thorsten Kurth, HPC consultant at NERSC, notes: “It is unreasonable to expect scientists to be conversant in the art of hyper-parameter tuning. Hybrid schemes, like the one presented in this paper, add an extra parameter to be tuned, which stresses the need for principled momentum tuning approaches, an active area of research (e.g., YellowFin). With hyper-parameter tuning taken care of, higher-level libraries such as Spearmint can be used for automating the search for network architectures.”

Reduced precision

The results presented in the paper were based on 32-bit, single-precision arithmetic because there are open questions regarding the use of reduced precision for training. Specifically, Kurth observes, “more aggressive optimizations involving computing in low-precision and communicating high order bits of weight updates are poorly understood with regards to their implications for classification and regression accuracy for scientific datasets” [italic emphasis by the authors]. He concludes, “The field of Deep Learning is evolving rapidly, and we look forward to adopting advances in the near future.”

Asynchronous momentum and tuning the convergence rate

Mitliagkas notes that in the past, industry groups have reported very good performance from asynchronous systems in commercial applications. Building on that work, his research during his Stanford postdoc showed that asynchronous behavior effectively introduces a momentum term into the optimization. However, this momentum needs to be tuned dynamically to take past history into account, as it can have a significant impact on the convergence rate. Mitliagkas makes the point that the hybrid architecture means the user does not have to choose between running entirely in synchronous or asynchronous mode, but can tune the hybrid method to best fit the machine and problem. In his opinion, this flexibility makes hybrid systems much more useful for general users and accounts for issues such users have had in the past with purely asynchronous systems. More information can be found in the description of the Omnivore system. Mitliagkas and Zhang’s newer work, YellowFin, takes this automatic tuning even further, as described in a Stanford blog post.
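
For reference, the classical momentum update this analysis builds on can be written as follows (standard notation, not the paper’s):

```latex
% Momentum SGD. The asynchrony analysis referenced above argues that
% stale updates act like an extra, implicit contribution to \mu, so
% the explicit \mu should be reduced as effective staleness grows.
\begin{aligned}
  v_{t+1} &= \mu\, v_t - \eta\, \nabla f(\theta_t) \\
  \theta_{t+1} &= \theta_t + v_{t+1}
\end{aligned}
```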

Intel MLSL

Nadathur Satish, research scientist at Intel, noted, “The Intel team performed a significant amount of work to extend the Intel MLSL library to support the hybrid asynchronous code for this paper.” Specifically, the Intel MLSL team added the ability to instantiate multiple synchronous groups and to interface them with a parameter server. Intel MLSL was initially designed to provide scalable behavior for synchronous deep learning codes. Satish noted that scalability is key to advancing the field of deep learning.

Deep Learning for Science

Prabhat notes, “For this paper, it was critical for us to demonstrate the viability of scaling Deep Learning for real scientific applications, in contrast to ImageNet. We have numerous scientific workloads at NERSC that are currently using Deep Learning; this work sets a high bar for HPC systems. Thorsten Kurth and Wahid Bhimiji chose to demonstrate the efficacy of training on simulated HEP data at scale to learn how to separate the rare signals of new particles from background events – without human intervention. Improvements in identifying these new particles could aid discoveries that might redefine our understanding of the fundamental nature of our universe. Similarly, Evan Racah and I took on the problem of identifying features in climate data. Automatically extracting such patterns will enable us to better characterize changes in frequency and intensity of extreme weather under climate change.”

High-Energy Physics

For the paper, the team used data from an LHC simulator to identify massive supersymmetric particles in multi-jet final states as they should appear in real life at the LHC. This required training on 10 million events contained in roughly 10 terabytes of data. The team verified that the trained ANN achieved baseline performance similar to that reported by the ATLAS collaboration. Kurth summarizes the results by saying, “The capability to achieve high sensitivities to new-physics signals from classification on low-level detector quantities, without the need to design, reconstruct, or tune high-level features, offers considerable potential for enabling new-physics discoveries in future HEP analyses.”


For the task of detecting extreme weather patterns, the team developed a novel semi-supervised architecture that uses an auto-encoder to capture various patterns in the dataset while simultaneously asking the network to predict bounding boxes for known patterns (such as hurricanes, extra-tropical cyclones, and atmospheric rivers). Figure 6 highlights predictions from the network: black bounding boxes show ground truth, and red boxes show the network’s predictions. Note the strong overlap in all but one example.

Figure 6: Results from plotting the network’s most confident (>95%) box predictions on an image for integrated water vapor (TMQ) from the test set for the climate problem.
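
As a rough illustration of how such a semi-supervised network can be wired together, here is a loosely-inspired PyTorch sketch. All layer sizes, the loss weighting, and the fixed box count are assumptions made for illustration; they do not reproduce the paper’s architecture.

```python
# Loosely-inspired sketch of a semi-supervised setup: an auto-encoder
# reconstructs the input field (unsupervised) while a box head
# regresses bounding boxes from the shared encoding (supervised).
# Layer sizes, loss weight, and MAX_BOXES are assumptions.
import torch
import torch.nn as nn

MAX_BOXES = 8  # predict up to 8 (x, y, w, h) boxes per image

class SemiSupervisedDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # reconstruction path
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
        self.box_head = nn.Sequential(  # bounding-box path
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, MAX_BOXES * 4),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.box_head(z).view(-1, MAX_BOXES, 4)

model = SemiSupervisedDetector()
images = torch.randn(2, 1, 64, 64)        # e.g. TMQ fields (toy size)
true_boxes = torch.rand(2, MAX_BOXES, 4)  # labeled patterns (toy values)
recon, boxes = model(images)
loss = nn.functional.mse_loss(recon, images) \
     + 0.5 * nn.functional.smooth_l1_loss(boxes, true_boxes)
loss.backward()
```

The key design point is that the reconstruction loss lets unlabeled data shape the shared encoder, while the box loss only needs the smaller labeled subset.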

Single-node Cori Intel Xeon Phi processor performance

The core computation in deep learning is dense linear algebra, specifically matrix multiply and convolution operations. The authors observe that the hardware efficiency of these kernels depends heavily on input data sizes and model parameters (weight matrix dimensions, number of channels, convolution strides, padding, and so on).
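
As a back-of-the-envelope illustration of this shape dependence, the sketch below counts the flops in a single convolution layer; the layer shapes are illustrative assumptions, not taken from the paper’s networks.

```python
# Toy flop count for a convolution layer, showing why efficiency
# depends so heavily on layer shape. Shapes are illustrative.
def conv_flops(batch, h, w, c_in, c_out, k):
    """Flop count of a stride-1, same-padded convolution
    (2 flops per multiply-add)."""
    return 2 * batch * h * w * c_in * c_out * k * k

early = conv_flops(batch=64, h=224, w=224, c_in=4,   c_out=64,  k=3)
deep  = conv_flops(batch=64, h=14,  w=14,  c_in=512, c_out=512, k=3)

# Few input channels leave the wide vector units underfed, so early
# layers tend to run at a much lower fraction of peak than
# channel-rich deeper layers, even when their raw flop counts differ.
print(f"early layer: {early/1e9:.1f} Gflop  deep layer: {deep/1e9:.1f} Gflop")
```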

As a reference, they note that DeepBench from Baidu captures the best-known performance of deep learning kernels with varied input sizes and model parameters on NVIDIA GPUs and Intel Xeon Phi processors. DeepBench results show that performance on all architectures can be as high as 75-80% of peak flops for some kernels, and as low as 20-30% as the mini-batch size decreases (determined by dimension ’N’ for matrix multiplies and convolutions).

With those caveats in place, the team reports that an Intel Xeon Phi 7250 processor delivers an overall rate of 2.09 teraflops for the climate network and 1.90 teraflops for the HEP network. For both networks, most of the runtime is spent in convolutional layers, which can obtain up to 3.5 teraflops for layers with many channels and around 1.25 teraflops on the initial layers, which have very few channels. [1]


Work by a number of researchers around the world is demonstrating the usefulness, performance, and scalability of deep learning on very large, data-intensive workloads. High per-node performance alone is not enough as people apply deep learning technology to increasingly complex problems. Succinctly, the more complex the problem, the larger the data set required to adequately represent the problem space during training. With scalable algorithms, researchers can train on orders-of-magnitude larger data sets and achieve thousands of times faster time-to-model performance, which, in turn, means they can address more complex (and potentially more valuable) problems. Further, this collaborative effort by Intel, NERSC, and Stanford shows that deep learning is a candidate exascale workload that can help realize the tremendous potential of computing at the exascale.

About the Author

Rob Farber is a global technology consultant and author with an extensive background in HPC and in developing machine learning technology that he applies at national labs and commercial organizations. Rob can be reached at


[2] NERSC is a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

[3] I recommend the paper “The case of the missing supercomputer performance” to better understand the impact of jitter at scale in HPC systems.
