Deep Learning for Science: A Q&A with NERSC’s Prabhat

By Kathy Kincade

November 7, 2017

Deep learning is enjoying unprecedented success in a variety of commercial applications, but it is also beginning to find its footing in science. Just a decade ago, few practitioners could have predicted that deep learning-powered systems would surpass human-level performance in computer vision and speech recognition tasks.

These tools are now poised to help scientists contend with some of the most challenging data analytics problems in a number of domains. For example, extreme weather events pose great potential risks to ecosystems, infrastructure and human health. Analyzing extreme weather data from satellites and weather stations, and characterizing changes in extremes in simulations, is an important task. Similarly, upcoming astronomical sky surveys will obtain measurements of tens of billions of galaxies, enabling precision measurements of the parameters that describe the nature of dark energy. But in each case, analyzing the mountains of resulting data poses a daunting challenge.

[Photo: Prabhat, NERSC]

A growing number of scientists are already employing HPC systems for data analytics, and many are now beginning to apply deep learning and other types of machine learning to their large datasets. Toward this end, in 2016 the U.S. Department of Energy’s National Energy Research Scientific Computing Center (NERSC) expanded its support for deep learning and began forming hands-on collaborations with scientists and industry. NERSC users from science domains such as geosciences, high energy physics, earth systems modeling, fusion and astrophysics are now working with NERSC staff, software tools and services to explore how deep learning can improve their ability to solve challenging science problems.

In this Q&A, Prabhat, who leads the Data and Analytics Services Group at NERSC, talks about the history of deep learning and machine learning and the unique challenges of applying these data analytics tools to science. Prabhat is also an author on two related technical papers being presented at SC17, “Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data” and “Galactos: Computing the 3-pt Anisotropic Correlation for 2 Billion Galaxies,” and is conducting two deep learning roundtables in the DOE Booth (#613) at SC17. He is also giving a plenary talk on deep learning for science on Sunday, November 12 at the Intel HPC Developer Conference held in conjunction with SC17.

How do you define deep learning, and how does it differ from machine learning?

At the Department of Energy, we tackle inference problems across numerous domains. Given a noisy observation, you would like to infer properties of the object of interest. The discipline of statistics is ideally suited to solving inference problems. The discipline of machine learning lies at the intersection of statistics and computer science, wherein core statistical methods were employed by computer scientists to solve applied problems in computer vision and speech recognition. Machine learning has been around for more than 40 years, and a number of different techniques have fallen in and out of favor: linear regression, k-means, support vector machines and random forests. Neural networks have always been part of machine learning – they were developed at MIT starting in the 1960s, and the back-propagation algorithm was a major development in the mid-1980s – but they never really took off until 2012. That is when the new flavor of neural networks – that is, deep learning – gained prominence and finally started working. So the way I think of deep learning is as a subset of machine learning, which in turn is closely related to the field of statistics, and all of them have to do with solving inference problems of one kind or another.

What technological changes occurred that enabled deep learning to finally start working?

Three important trends have emerged over the last 20 years or so. First, thanks to the internet, “big data” – large archives of labeled and unlabeled datasets – has become readily accessible. Second, thanks to Moore’s Law, computers have become extremely powerful. A laptop featuring a GPU and a CPU is more capable than supercomputers from previous decades. These two trends were prerequisites for enabling the third wave of modern neural nets, deep learning, to take off. The basic machinery and algorithms have been in existence for three decades, but it is only the unique confluence of large datasets and massive computational horsepower that has enabled us to explore the expressive capabilities of deep networks.

What are some of the leading types of deep learning methods used today for scientific applications?

As we’ve gone about systematically exploring the application of deep learning to scientific problems over the last four years, we have found that there are two dominant architectures relevant to science problems. The first is the convolutional network. This architecture is widely applicable because much of the data we obtain from experimental and observational sources (telescopes, microscopes) and from simulations tends to be in the form of a grid or an image. As with commodity cameras, we have 2D images, but we also typically deal with 3D, 4D and multi-channel images. Supervised pattern classification is a common task shared across commercial and scientific use cases; applications include face detection, face recognition, object detection and object classification.
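
To make the idea concrete, here is a minimal sketch of a convolutional classifier in PyTorch, one of the framework families mentioned later in this piece. The layer sizes, channel count and class count are illustrative assumptions, not a network NERSC actually deployed:

```python
# A toy convolutional classifier: two conv/pool stages followed by a
# linear head. In a scientific setting the four input channels might be
# simulation variables (e.g., pressure, temperature) rather than RGB.
import torch
import torch.nn as nn

class SimpleConvNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)                # (N, 32, 16, 16) for 64x64 inputs
        return self.classifier(x.view(x.size(0), -1))

model = SimpleConvNet()
scores = model(torch.randn(8, 4, 64, 64))   # batch of 8 synthetic patches
print(scores.shape)                         # torch.Size([8, 3])
```

The same pattern extends to the 3D and multi-channel inputs common in science by swapping in 3D convolutions and adjusting the input channel count.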

The second approach is more sophisticated and has to do with the recurrent neural network, specifically the long short-term memory (LSTM) architecture. In commercial applications, LSTMs are used for translating speech by learning the sequence-to-sequence mapping between one language and another. In our science cases, we also have sequence-to-sequence mapping problems, such as gene sequencing, or tracking storms in space and time in earth systems modeling. There are also problems in neuroscience that take recordings from the brain and use LSTMs to predict speech. So broadly those two flavors of architectures – convolutional networks and LSTMs – are the dominant deep learning methodologies for science today.
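
Again as a hedged sketch rather than production NERSC code, an LSTM that emits a label at every time step – the shape that a storm-tracking problem takes – fits in a few lines of PyTorch; the feature, hidden and label sizes here are arbitrary placeholders:

```python
# A toy sequence tagger: the LSTM carries hidden state across time steps,
# and a linear head produces per-step label scores (e.g., storm / no storm).
import torch
import torch.nn as nn

class SeqTagger(nn.Module):
    def __init__(self, n_features=10, n_hidden=64, n_labels=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_labels)

    def forward(self, x):            # x: (batch, time, features)
        out, _ = self.lstm(x)        # out: (batch, time, hidden)
        return self.head(out)        # scores: (batch, time, labels)

model = SeqTagger()
scores = model(torch.randn(2, 100, 10))   # two synthetic 100-step sequences
print(scores.shape)                       # torch.Size([2, 100, 2])
```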

In recent years, we have also explored auto-encoder architectures, which can be used for unsupervised clustering of datasets. We have had some success in applying such methods to the analysis of galaxy images in astronomy and of Daya Bay sensor data for neutrino discovery. The latest trend in deep learning is the generative adversarial network (GAN). This architecture can be used for creating synthetic data. You can feed in examples from a certain domain, say cosmology images or Large Hadron Collider (LHC) images, and the network will essentially learn a process that can explain these images. Then you can ask that same network to produce more synthetic data that is consistent with the images it has seen. We have empirical evidence that you can use GANs to produce synthetic cosmology or synthetic LHC data without resorting to expensive computational simulations.
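
The auto-encoder idea is simple enough to sketch as well: compress the input to a small latent code, train on reconstruction error, then cluster the codes. The sizes below are placeholders assuming flattened image vectors, not any real Daya Bay or galaxy data:

```python
# A toy fully-connected auto-encoder. After training, the latent codes z
# can be clustered (e.g., with k-means) to group similar objects without
# any labels -- the unsupervised setting described above.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_inputs=784, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_inputs))

    def forward(self, x):
        z = self.encoder(x)            # compressed latent representation
        return self.decoder(z), z      # reconstruction and code

model = AutoEncoder()
x = torch.randn(32, 784)               # stand-in for flattened images
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruction objective
```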

What is driving NERSC’s growing deep learning efforts, and how did you come to lead these efforts?

I have a long-standing interest in image processing and computer vision. During my undergrad at IIT Delhi and grad studies at Brown, I was intrigued by object recognition problems, which seemed to be fairly hard to solve. There was incremental progress in the field through the 1990s and 2000s, and then suddenly in 2012 and 2013 you saw breakthrough performance in solving real problems on real datasets. At that point, the MANTISSA collaboration – a research project originally begun when I was part of Berkeley Lab’s Computational Research Division – was exploring similar pattern detection problems, and it was natural for us to explore whether deep learning could be applied to science problems. We spent the next three to four years exploring applications in earth systems modeling, neuroscience, astronomy and high energy physics.

When a new method or technology comes along, one has to make a judgment call on how long to wait before investing time and energy in exploring the possibilities. I think the DAS group at NERSC was one of the early adopters. We recognized the importance of this technique and demonstrated that it could work for science. In the experimental and observational data community, there are a lot of examples of domain scientists who have been struggling with pattern recognition problems for a long time. And now the broader science community is waking up to the possibilities of machine learning to help solve these problems.

What is NERSC’s current strategy for bringing deep learning capabilities to its users?

Since NERSC is a DOE Office of Science national user facility, we listen to our users, track their emerging requirements and respond to their needs. Our users are telling us that they would like to explore machine learning/deep learning and see what it can do for them. We currently have about 70 users who are actively using deep learning software at NERSC, and we want to make sure that our software, hardware, policies and documentation are all up to speed. Over the past two years, we have worked with the vendor community and identified a few popular deep learning frameworks (TensorFlow, Caffe, Theano and Torch) and have deployed them on Cori. In addition to making the software available, we have documentation and case studies in place. We also have in-depth collaborations in about a dozen areas where NERSC staff, mostly from the DAS group, have worked with scientists to help them explore the application of deep learning. And we are forming strategic relationships with commercial vendors and other research partners in the community to explore the frontier of deep learning for science.

Do certain areas of scientific research lend themselves more than others to applying deep learning?

Right now our success stories span research sponsored by several DOE Office of Science program offices, including BER, HEP and NP. In earth systems modeling, we have shown that convolutional architectures can extract extreme weather patterns in large simulation datasets. In cosmology, we have shown that CNNs can predict cosmological constants, and GANs can potentially be used to supplement existing cosmology simulations. In astronomy, the Celeste project has effectively used auto-encoders for modeling galaxy shapes. In high energy physics, we are using convolutional architectures for discriminating between different models of particle physics and exploring LSTM architectures for particle tracking. We’ve also shown that deep learning can be used for clustering and classifying various event types at the Daya Bay experiment.

So the big takeaway here is that for tasks involving pattern classification, regression and creating fast simulators, deep learning seems to do a good job – IF you can find training data. That’s the big catch: if you have labeled data, you can employ deep learning, but it can be a challenge to find training data in some domain sciences.

Looking ahead, what are some of the challenges in developing deep learning tools for science and applying them to research projects at NERSC and other scientific supercomputing facilities?

We can see a range of short-term and long-term challenges in deep learning for science. The short-term challenges are mostly pragmatic issues pertaining to the development, enhancement and deployment of tools. These include handling complex data: scientific data tends to be very diverse compared to commercial images and speech; we are working with 2D, 3D, even 4D data, and the datasets can be sparse or dense and defined over regular or irregular grids. Deep learning frameworks will need to account for this diversity going forward. Performance and scaling are also barriers. Our current networks can take several days to converge on O(10) GB datasets, but several scientific domains would like to apply deep learning to 10TB-100TB datasets. Thankfully, this problem is right up our alley at HPC centers.
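
The standard HPC response to that scaling problem is data-parallel training: each worker processes a shard of the data and gradients are averaged across workers every step. Below is a hedged sketch using Horovod; this is one common recipe, not necessarily the scheme used in the SC17 paper, and the toy model and synthetic data are placeholders:

```python
# Synchronous data-parallel training with Horovod: launch one process per
# GPU/node (e.g., `horovodrun -np 4 python train.py`); gradients are
# allreduced across workers inside optimizer.step().
import torch
import horovod.torch as hvd

hvd.init()
model = torch.nn.Linear(100, 2)              # toy model stands in for a real net
opt = torch.optim.SGD(model.parameters(),
                      lr=0.01 * hvd.size())  # common heuristic: scale LR with workers
opt = hvd.DistributedOptimizer(opt, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)  # identical start everywhere

for step in range(10):                       # each rank would read its own data shard
    x, y = torch.randn(32, 100), torch.randint(0, 2, (32,))
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()                               # gradient allreduce happens here
```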

Another important challenge faced by domain scientists is hyper-parameter tuning: Which network architecture do you start with? How do you choose an optimization algorithm? How do you get the network to converge? Unfortunately, only a few deep learning experts know how to address this problem; we need automated strategies and tools. Finally, once scientific communities realize that deep learning can work for them and that access to labeled datasets is the key barrier to entry, they will need to self-organize and conduct labeling campaigns.
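
Pending those automated tools, the simplest strategy many practitioners fall back on is a random search over a hand-picked hyper-parameter space. A toy sketch follows; `train_and_score` is a hypothetical stand-in for a full training run that returns validation accuracy:

```python
# Toy random search over a small hyper-parameter space. In practice each
# evaluation is a full (expensive) training run scored on held-out data.
import random

search_space = {
    "lr":        [1e-1, 1e-2, 1e-3, 1e-4],
    "layers":    [2, 4, 8],
    "optimizer": ["sgd", "adam"],
}

def sample_config():
    return {k: random.choice(v) for k, v in search_space.items()}

def train_and_score(cfg):
    # Hypothetical placeholder: train a network with cfg and return
    # validation accuracy. A random number stands in for a real score.
    return random.random()

best_cfg, best_score = None, float("-inf")
for _ in range(20):                      # 20 trials drawn at random
    cfg = sample_config()
    score = train_and_score(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
print("best configuration:", best_cfg, "score:", best_score)
```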

The longer-term challenges for deep learning in science are harder, by definition, and include a lack of theory, interpretability, uncertainty quantification and the need for a formal protocol. I believe it’s very early days in the application of deep learning to scientific problems. There’s a lot of low-hanging fruit in publishing easy papers that demonstrate state-of-the-art accuracy for classification, regression and clustering problems. But in order to ensure that the domain science community truly embraces the power of deep learning methods, we have to keep the longer term, harder challenges in mind.

About the Author

Kathy Kincade is a science & technology writer and editor with the Berkeley Lab Computing Sciences Communications Group.
