NERSC Scales Scientific Deep Learning to 15 Petaflops

By Rob Farber

August 28, 2017

A collaborative effort between Intel, NERSC, and Stanford has delivered the first 15-petaflops deep learning software running on HPC platforms and is, according to the authors of the paper (and to the best of their knowledge), currently the most scalable deep learning implementation in the world. The work, described in the paper Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data [1], reports that a Cray XC40 system with 9,600 self-hosted nodes based on the 1.4 GHz Intel Xeon Phi Processor 7250 achieved a peak rate between 11.73 and 15.07 petaflops (single precision) and an average sustained performance of 11.41 to 13.47 petaflops when training on physics and climate data sets using the Cori Phase-II supercomputer at Lawrence Berkeley National Laboratory's (Berkeley Lab) NERSC (National Energy Research Scientific Computing Center). The group combined Intel Caffe, the Intel Math Kernel Library (Intel MKL), and the Intel Machine Learning Scaling Library (Intel MLSL) to achieve this scalability and performance. [2]

Along with scalability, Joe Curley, Intel's senior director of HPC platform and ecosystem enabling, highlights the scientific accomplishments this level of performance brings to deep learning researchers and to data-intensive scientific communities such as climate and High Energy Physics (HEP). He also points out that the results further establish the deep learning performance capability of the Intel Xeon Phi processor based computational nodes in the Cori supercomputer. "These were not just a set of heroic runs, they have solved real problems at the scale of a top five supercomputer using new methods," Curley said.

Advancing Deep Learning at Scale

Prabhat, Data and Analytics Group Lead at NERSC, Berkeley Lab, emphasized that this performance and scalability result was very much a collaborative effort that: (A) utilized a neural network update scheme developed by Christopher Ré's group (Department of Computer Science at Stanford University), (B) built on software infrastructure created by the Parallel Computing Lab and the Intel MKL and Intel MLSL product teams at Intel, and (C) leveraged the world-class people and hardware resources at NERSC.

Overall, Curley observes that the collaboration reported "reasonably good scaling performance," as the 9,600-node cluster delivered an approximate 7,205x speedup. (Perfect scaling would have delivered a 9,600x speedup.) Curley is excited by the potential of this early work, stating, "Opportunities exist to improve performance and scaling in either future runs, or in the course of solving new problems. This really was an amazing early result on a fairly new machine."

Algorithmic advances for deep learning scalability

The update scheme from Ré's group allows both synchronous and asynchronous updates of the ANN (artificial neural network) parameters.

Conceptually, asynchronous updates (and asynchronous architectures in general) provide the ability to scale to large numbers of nodes by removing synchronization barriers. This lock-free approach allows for faster model updates, but it can require more updates to yield an equally good final model. Thus asynchronous updates can make the training process run longer, meaning it can take longer to converge. The thought process behind asynchronous updates is that the extra computational nodes add enough parallelism (and hence deliver greater performance) to overcome the potentially slower convergence behavior and thus deliver an overall faster time-to-model. Failure to converge to a good solution is also a possibility, although Ioannis Mitliagkas, former postdoctoral scholar at Stanford and currently assistant professor at the University of Montreal, observes from both classical and modern results on asynchrony that, "on well-behaved objectives, failure to converge implies a mis-tuned system." Thus tuning is critical. However, it is worth noting that on deep learning objectives no system, synchronous or asynchronous, is guaranteed to converge to a good solution.
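To make the idea concrete, here is a minimal lock-free sketch of asynchronous SGD in the spirit described above. It is an illustration only, not the implementation used in the paper; the least-squares problem, batch generator, and all constants are hypothetical.

```python
import threading
import numpy as np

# A sketch of lock-free ("Hogwild!"-style) asynchronous SGD: worker
# threads read the shared weights and apply updates with no barrier.

rng = np.random.default_rng(0)
weights = np.zeros(10)                # shared model parameters
lr = 0.1                              # hypothetical learning rate

def make_batch():
    X = rng.normal(size=(32, 10))     # hypothetical mini-batch
    y = X @ np.ones(10)               # true weights are all ones
    return X, y

def worker(steps):
    for _ in range(steps):
        X, y = make_batch()
        grad = X.T @ (X @ weights - y) / len(y)  # may read stale weights
        weights[:] -= lr * grad                  # lock-free update

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("distance to optimum:", np.linalg.norm(weights - 1.0))
```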

The asynchronous deep learning architecture used in the paper is illustrated below.

Each node works on its own iteration (mini-batch) and produces independent updates to the model. Those updates are sent to a central parameter store called the parameter server (noted as PS in the figure), which applies the updates to the model in the order they are received. After each update, the PS sends the new model back to the worker where the update originated.

Asynchronous systems do not suffer from straggler effects and are not limited by the total batch size in the same way that synchronous systems are, an important property at scale.

Figure 1: Example synchronous and asynchronous architectures

Staleness, meaning that a worker's update may have been computed against an out-of-date copy of the model, is the reason that asynchronous systems may need more iterations to converge to a solution. Said another way, they have worse statistical efficiency.
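For concreteness, here is a minimal sketch of the parameter-server protocol just described, with a version counter added to make staleness explicit. This is an illustration under assumed simplifications, not the paper's code.

```python
import queue

# A sketch of the parameter server (PS) in Figure 1: updates are applied
# in the order they arrive, and the fresh model is sent back only to the
# worker that produced the update.

class ParameterServer:
    def __init__(self, weights):
        self.weights = weights
        self.version = 0               # counts applied updates
        self.inbox = queue.Queue()     # (worker_id, base_version, update)

    def serve_one(self):
        worker, base_version, update = self.inbox.get()
        staleness = self.version - base_version  # updates applied since
                                                 # this worker read the model
        self.weights = self.weights + update     # apply in arrival order
        self.version += 1
        return worker, self.weights, staleness   # reply to that worker
```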

In contrast, the reduction operation used by synchronous training introduces an O(log(#Nodes)) runtime growth. Further, it makes the training susceptible to jitter (e.g., the "straggler effect"), as the computation can be rate-limited by the slowest node in the system during each iteration of the training procedure. [3] The straggler effect occurs when a delay on any node exceeds the ability of the reduction operation's implementation to hide latency. Achieving low latency is an important goal for developers of products such as the Intel Omni-Path Architecture (Intel OPA) MPI libraries. In addition, using too many nodes during training can reduce the number of examples per node (i.e., the per-node mini-batch size) to the point of reduced node efficiency. Thus the authors note in their paper that synchronous training can potentially deliver worse hardware efficiency.
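The pattern is easy to see in code. Below is a minimal sketch of one synchronous data-parallel step, assuming mpi4py; the paper's production path is Intel Caffe with Intel MLSL, not this code. The allreduce is the synchronization barrier, so the step runs at the speed of the slowest node.

```python
from mpi4py import MPI
import numpy as np

# One synchronous training step: every node computes a local gradient,
# then an allreduce sums gradients across all nodes before any node may
# proceed. The reduction runs in O(log N) tree/ring stages.

comm = MPI.COMM_WORLD

def synchronous_step(weights, local_gradient, lr=0.01):
    summed = np.empty_like(local_gradient)
    comm.Allreduce(local_gradient, summed, op=MPI.SUM)  # barrier
    return weights - lr * summed / comm.Get_size()      # averaged update
```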

The trade-off between statistical efficiency and hardware efficiency suggested a third kind of architecture to the paper's authors, which they call a hybrid system.

Mitliagkas points out, “Synchronous systems are the classic, straightforward approach, and it has some good and bad attributes. The bad attributes (straggler effect and susceptibility to slow nodes, and huge effective mini-batches at scale) motivate asynchronous systems. Those have different strengths and weaknesses motivating a tradeoff. Hybrid systems give you control of the tradeoff.”

In the hybrid approach, worker nodes coalesce into separate, synchronous compute groups where the workers split a mini-batch quantity of work among themselves to produce a single update to the model. There is no synchronization across compute groups so they are able to run asynchronously. The hybrid architecture used in the paper is illustrated below.

Figure 2: Hybrid architecture example
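A minimal sketch of one hybrid step follows, again assuming mpi4py. The group count and the parameter-server hook (send_to_ps) are hypothetical stand-ins; the point is that the allreduce barrier now spans only one group, while groups run asynchronously relative to each other.

```python
from mpi4py import MPI
import numpy as np

# Hybrid step sketch: workers in the same compute group synchronously
# average their gradients over a group sub-communicator; only the group
# root then sends the single combined update to the parameter server.

world = MPI.COMM_WORLD
num_groups = 8                                   # hypothetical group count
group = world.Split(color=world.Get_rank() % num_groups)

def hybrid_step(local_gradient, send_to_ps):
    summed = np.empty_like(local_gradient)
    group.Allreduce(local_gradient, summed, op=MPI.SUM)  # sync within group
    if group.Get_rank() == 0:                            # one update per group
        send_to_ps(summed / group.Get_size())            # async across groups
```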

The authors report that they observed better scaling for their hybrid asynchronous updates than for synchronous configurations due to reduced straggler effects. "This has been a big engineering effort," Mitliagkas notes. "On the Stanford side, this would not have been possible without the engineering skills and hard work of Jian Zhang." He also points out that in large-scale HPC runs, say on 10,000 nodes, the likelihood that there will be some 'slow' nodes is significant. For this reason, synchronous systems' performance can be really unpredictable. Asynchronous systems, on the other hand, degrade more gracefully: a slow node only affects a single synchronous group. Results in the paper show the hybrid method performing 1.66x better than the best synchronous run, and about 10x better than the worst synchronous run.

Figure 3: Training loss vs. wall-clock time for HEP on 1K nodes, comparing a synchronous configuration to hybrid runs with 2, 4, and 8 groups

The weak scaling plots below, where the amount of work per node is kept constant, show that the scalability of the system can vary with the task.

Mitliagkas emphatically states, "People typically report weak scaling, because strong scaling is hard." He continues, "For machine learning systems, strong scaling (keeping the total amount of work constant) is more representative of actual performance." [4] He reinforces his point by stating that "synchronous approaches can only scale up to the size of the mini-batch."

Figure 5: Strong scaling results for synchronous and hybrid approaches (batch size = 2048 per synchronous group).
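To make the weak/strong distinction concrete, the sketch below computes both efficiencies from timings. The timing values are purely hypothetical, not numbers from the paper; for comparison, the 7,205x speedup on 9,600 nodes quoted earlier corresponds to a scaling efficiency of roughly 75 percent (7,205/9,600).

```python
# Strong scaling keeps total work fixed; ideal is runtime / nodes.
# Weak scaling keeps per-node work fixed; ideal is a flat runtime.

def strong_scaling_efficiency(t1, tN, nodes):
    return (t1 / tN) / nodes          # achieved speedup vs. ideal speedup

def weak_scaling_efficiency(t1, tN):
    return t1 / tN                    # 1.0 means runtime stayed flat

print(strong_scaling_efficiency(t1=100.0, tN=0.8, nodes=256))  # ~0.49
print(weak_scaling_efficiency(t1=100.0, tN=125.0))             # 0.8
```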

Finding a good configuration is not an easy task. Recognizing this complexity, Thorsten Kurth, HPC consultant at NERSC, notes: "It is unreasonable to expect scientists to be conversant in the art of hyper-parameter tuning. Hybrid schemes, like the one presented in this paper, add an extra parameter to be tuned, which stresses the need for principled momentum tuning approaches, an active area of research (e.g., YellowFin). With hyper-parameter tuning taken care of, higher-level libraries such as Spearmint can be used for automating the search for network architectures."

Reduced precision

The results presented in the paper are based on 32-bit, single-precision arithmetic because there are open questions regarding the use of reduced precision for training. Specifically, Kurth observes, "more aggressive optimizations involving computing in low-precision and communicating high order bits of weight updates are poorly understood with regards to their implications for classification and regression accuracy for scientific datasets" [italic emphasis by the authors]. He concludes, "The field of Deep Learning is evolving rapidly, and we look forward to adopting advances in the near future."

Asynchronous momentum and tuning the convergence rate

Mitliagkas notes that industry groups have previously reported very good performance from asynchronous systems in commercial applications. Building on that work, his postdoctoral research at Stanford showed that asynchrony effectively introduces a momentum term into the optimization. However, this momentum needs to be tuned dynamically to take past history into account, as it can have a significant impact on the convergence rate. Mitliagkas makes the point that the hybrid architecture means the user does not have to choose between running entirely in synchronous or asynchronous mode but can tune the hybrid method to best fit the machine and problem. In his opinion, this flexibility makes hybrid systems much more useful for general users and addresses issues general users have had in the past with purely asynchronous systems. More information can be found in the Omnivore system. Mitliagkas and Zhang's newer work, YellowFin, automates this tuning even further, as described in this Stanford blog post.
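The effect is easiest to see against the usual explicit momentum update, sketched below with illustrative default values. Mitliagkas's result is that asynchrony behaves like additional, implicit momentum on top of the explicit coefficient mu, which is why mu must be re-tuned (dynamically, in systems like YellowFin) as the effective asynchrony of the system grows.

```python
# Classical SGD with momentum: the velocity term accumulates gradient
# history, weighted by mu. Asynchrony adds implicit momentum on top of
# mu, so the explicit mu should shrink as staleness increases.

def momentum_step(w, velocity, grad, lr=0.01, mu=0.9):
    velocity = mu * velocity - lr * grad   # accumulate past history
    return w + velocity, velocity
```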

Intel MLSL

Nadathur Satish, research scientist at Intel, noted, "The Intel team performed a significant amount of work to extend the Intel MLSL library to support the hybrid asynchronous code for this paper." Specifically, the Intel MLSL team added the ability to instantiate multiple synchronous groups and interface them with a parameter server. Intel MLSL was initially designed to provide scalable behavior for synchronous deep learning codes. Satish noted that scalability is key to advancing the field of deep learning.

Deep Learning for Science

Prabhat notes, "For this paper, it was critical for us to demonstrate the viability of scaling Deep Learning for real scientific applications, in contrast to ImageNet. We have numerous scientific workloads at NERSC that are currently using Deep Learning; this work sets a high bar for HPC systems. Thorsten Kurth and Wahid Bhimiji chose to demonstrate the efficacy of training on simulated HEP data at scale to learn how to separate the rare signals of new particles from background events, without human intervention. Improvements in identifying these new particles could aid discoveries that might redefine our understanding of the fundamental nature of our universe. Similarly, Evan Racah and I took on the problem of identifying features in climate data. Automatically extracting such patterns will enable us to better characterize changes in frequency and intensity of extreme weather under climate change."

High-Energy Physics

For the paper, the team used data from an LHC simulator to identify massive supersymmetric particles in multi-jet final states as they should appear in real life at the LHC. This required training on 10 million events contained in roughly 10 terabytes of data. The team verified that the trained ANN had baseline performance similar to that reported by the ATLAS collaboration. Kurth summarizes the results: "The capability to achieve high sensitivities to new-physics signals from classification on low-level detector quantities, without the need to design, reconstruct, or tune high-level features, offers considerable potential for enabling new-physics discoveries in future HEP analyses."


For the task of detecting extreme weather patterns, the team developed a novel semi-supervised architecture that uses an autoencoder to capture various patterns in the dataset and simultaneously asks the network to predict bounding boxes for known patterns (such as hurricanes, extra-tropical cyclones, and atmospheric rivers). Figure 6 highlights predictions from the network: black bounding boxes show ground truth, and red boxes are predictions by the network. Note the strong overlap in all but one example.

Figure 6: Results from plotting the network’s most confident (>95%) box predictions on an image for integrated water vapor (TMQ) from the test set for the climate problem.

Single-node Cori Intel Xeon Phi processor performance

The core computation in deep learning is dense linear algebra, specifically matrix multiply and convolution operations. The authors observe that the hardware efficiency of these kernels depends heavily on input data sizes and model parameters (weight matrix dimensions, number of convolutions, convolution strides, padding, etc.).

As a reference, they note that DeepBench from Baidu captures the best known performance of deep learning kernels with varied input sizes and model parameters on NVIDIA GPUs and Intel Xeon Phi processors. DeepBench results show that performance on all architectures can be as high as 75-80% of peak flops for some kernels, and as low as 20-30% as the mini-batch size (determined by dimension 'N' for matrix multiply and convolutions) decreases.
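This sensitivity to the 'N' dimension is easy to reproduce with a plain GEMM measurement. The sketch below is a rough NumPy illustration, not DeepBench itself, and the layer shape (M=K=4096) is an assumption chosen only to make the trend visible as N shrinks and grows.

```python
import time
import numpy as np

# Estimate achieved flops for C = A @ B, where N plays the role of the
# mini-batch dimension. Small N tends to leave the kernel memory-bound
# and far below peak flops; large N improves arithmetic intensity.

def gemm_flops(M, N, K, trials=10):
    A = np.random.rand(M, K).astype(np.float32)
    B = np.random.rand(K, N).astype(np.float32)
    start = time.perf_counter()
    for _ in range(trials):
        A @ B
    elapsed = (time.perf_counter() - start) / trials
    return 2.0 * M * N * K / elapsed      # multiply-adds = 2*M*N*K flops

for n in (16, 256, 4096):                 # shrinking/growing mini-batch
    print(f"N={n}: {gemm_flops(4096, n, 4096) / 1e9:.1f} Gflop/s")
```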

With those caveats in place, the team reports that an Intel Xeon Phi processor 7250 delivers an overall rate of 2.09 teraflops for the climate network and 1.90 teraflops for the HEP network. For both networks, most of the runtime is spent in convolutional layers, which achieve up to 3.5 teraflops for layers with many channels and around 1.25 teraflops on the initial layers with very few channels. [1]


Work by a number of researchers around the world is demonstrating the usefulness, performance, and scalability of deep learning on very large, data-intensive workloads. High per-node performance is simply not enough as people apply deep learning technology to increasingly complex problems. Succinctly, the more complex the problem, the larger the data set required to adequately represent the problem space during training. With scalable algorithms, researchers can train on orders-of-magnitude larger data sets and achieve thousands of times faster time-to-model performance, which, in turn, means they can address more complex (and potentially more valuable) problems. Further, this collaborative effort by Intel, NERSC, and Stanford shows that deep learning is a candidate exascale workload that can help realize the tremendous potential of computing at the exascale.

About the Author

Rob Farber is a global technology consultant and author with an extensive background in HPC and in developing machine learning technology that he applies at national labs and commercial organizations. Rob can be reached at [email protected].


[1] Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data, https://arxiv.org/abs/1708.05256

[2] NERSC is a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

[3] I recommend the paper "The case of the missing supercomputer performance" to better understand the impact of jitter at scale in HPC systems.
