Deep Learning for Science: A Q&A with NERSC’s Prabhat

By Kathy Kincade

November 7, 2017

Deep learning is enjoying unprecedented success in a variety of commercial applications, but it is also beginning to find its footing in science. Just a decade ago, few practitioners could have predicted that deep learning-powered systems would surpass human-level performance in computer vision and speech recognition tasks.

These tools are now poised to help scientists contend with some of the most challenging data analytics problems in a number of domains. For example, extreme weather events pose great potential risks to ecosystems, infrastructure and human health. Analyzing extreme weather data from satellites and weather stations, and characterizing changes in extremes in simulations, is therefore an important task. Similarly, upcoming astronomical sky surveys will obtain measurements of tens of billions of galaxies, enabling precision measurements of the parameters that describe the nature of dark energy. But in each case, analyzing the mountains of resulting data poses a daunting challenge.

Prabhat, NERSC

A growing number of scientists are already employing HPC systems for data analytics, and many are now beginning to apply deep learning and other types of machine learning to their large datasets. Toward this end, in 2016 the U.S. Department of Energy’s National Energy Research Scientific Computing Center (NERSC) expanded its support for deep learning and began forming hands-on collaborations with scientists and industry. NERSC users from science domains such as geosciences, high energy physics, earth systems modeling, fusion and astrophysics are now working with NERSC staff, software tools and services to explore how deep learning can improve their ability to solve challenging science problems.

In this Q&A with Prabhat, who leads the Data and Analytics Services Group at NERSC, he talks about the history of deep learning and machine learning and the unique challenges of applying these data analytics tools to science. Prabhat is also an author on two related technical papers being presented at SC17, “Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data” and “Galactos: Computing the 3-pt Anisotropic Correlation for 2 Billion Galaxies,” and is conducting two deep learning roundtables in the DOE Booth (#613) at SC17. He is also giving a plenary talk on deep learning for science on Sunday, November 12 at the Intel HPC Developer Conference held in conjunction with SC17.

How do you define deep learning, and how does it differ from machine learning?

At the Department of Energy, we tackle inference problems across numerous domains: given a noisy observation, you would like to infer properties of the object of interest. The discipline of statistics is ideally suited to solving inference problems. Machine learning lies at the intersection of statistics and computer science, wherein core statistical methods were employed by computer scientists to solve applied problems in computer vision and speech recognition. Machine learning has been around for more than 40 years, and a number of different techniques have fallen in and out of favor: linear regression, k-means, support vector machines and random forests. Neural networks have always been part of machine learning – they were developed starting in the 1960s, and the back-propagation algorithm was a major advance in the mid-1980s – but they never really took off until 2012. That is when the new flavor of neural networks – that is, deep learning – gained prominence and finally started working. So the way I think of deep learning is as a subset of machine learning, which in turn is closely related to the field of statistics, and all of them have to do with solving inference problems of one kind or another.

What technological changes occurred that enabled deep learning to finally start working?

Three important trends have come together over the last 20 years or so. First, thanks to the internet, “big data” – large archives of labeled and unlabeled datasets – has become readily accessible. Second, thanks to Moore’s Law, computers have become extremely powerful; a laptop featuring a GPU and a CPU is more capable than supercomputers from previous decades. These two trends were prerequisites for enabling the third wave of modern neural nets, deep learning, to take off. The basic machinery and algorithms have been in existence for three decades, but it is only the unique confluence of large datasets and massive computational horsepower that has enabled us to explore the expressive capabilities of deep networks.

What are some of the leading types of deep learning methods used today for scientific applications?

As we’ve gone about systematically exploring the application of deep learning to scientific problems over the last four years, what we have found is that there are two dominant architectures relevant to science problems. The first is the convolutional network. This architecture is widely applicable because a lot of the data we obtain from experimental and observational sources (telescopes and microscopes) and from simulations tends to be in the form of a grid or an image. Similar to commodity cameras, we have 2D images, but we also typically deal with 3D, 4D and multi-channel images. Supervised pattern classification is a common task shared across commercial and scientific use cases; applications include face detection, face recognition, object detection and object classification.
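For readers who want a concrete picture, a minimal sketch of such a convolutional classifier is shown below in Keras (TensorFlow is among the frameworks NERSC deploys, per the interview). The input shape, channel count and number of classes are illustrative placeholders, not values from any NERSC project.

```python
# Minimal sketch of a convolutional classifier for gridded scientific data.
# Shapes and class counts are illustrative placeholders, not NERSC values.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(64, 64, 8), num_classes=3):
    """Small CNN for multi-channel 2D 'images' (e.g., simulation fields)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
```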

The second approach is more sophisticated and has to do with the recurrent neural network, specifically the long short-term memory (LSTM) architecture. In commercial applications, LSTMs are used for translating speech by learning the sequence-to-sequence mapping between one language and another. In our science cases, we also have sequence-to-sequence mapping problems, such as gene sequencing, for example, or earth systems modeling, where you are tracking storms in space and time. There are also problems in neuroscience that take recordings from the brain and use LSTMs to predict speech. So broadly those two flavors of architectures – convolutional networks and LSTMs – are the dominant deep learning methodologies for science today.
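As a rough illustration of the recurrent approach, the sketch below shows an LSTM that assigns a label at every time step of a sequence – one simple form of the sequence problems described above. The sequence length, feature count and label count are assumptions made purely for illustration.

```python
# Minimal sketch of an LSTM that maps an input sequence to an output label
# at every time step (e.g., tagging the frames of a storm track).
# Dimensions are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, n_features, n_labels = 100, 16, 4   # assumed, for illustration only

model = models.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.LSTM(64, return_sequences=True),          # keep per-step outputs
    layers.TimeDistributed(layers.Dense(n_labels, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```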

In recent years, we have also explored auto-encoder architectures, which can be used for unsupervised clustering of datasets. We have had some success in applying such methods to the analysis of galaxy images in astronomy and to Daya Bay sensor data for neutrino discovery. The latest trend in deep learning is the generative adversarial network (GAN). This architecture can be used for creating synthetic data: you can feed in examples from a certain domain, say cosmology images or Large Hadron Collider (LHC) images, and the network will essentially learn a process that can explain these images. Then you can ask that same network to produce more synthetic data that is consistent with the other images it has seen. We have empirical evidence that you can use GANs to produce synthetic cosmology or synthetic LHC data without resorting to expensive computational simulations.
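To make the auto-encoder idea concrete, here is a minimal sketch assuming a flattened image vector and a small latent code (both placeholder sizes); the learned code could then feed a conventional clustering step such as k-means. It is an illustration only, not the architecture used in any of the projects mentioned above.

```python
# Minimal sketch of a dense autoencoder for unsupervised representation
# learning; the latent code could feed a downstream clustering step.
# Sizes are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

input_dim, latent_dim = 1024, 16   # assumed flattened image size / code size

encoder = models.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),   # compressed representation
])
decoder = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"), # reconstruct the input
])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Train on synthetic random data just to show the call pattern.
x = np.random.rand(512, input_dim).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=64)
codes = encoder.predict(x)   # latent features for clustering (e.g., k-means)
```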

What is driving NERSC’s growing deep learning efforts, and how did you come to lead these efforts?

I have a long-standing interest in image processing and computer vision. During my undergrad at IIT Delhi and grad studies at Brown, I was intrigued by object recognition problems, which seemed to be fairly hard to solve. There was incremental progress in the field through the 1990s and 2000s, and then suddenly in 2012 and 2013 we saw breakthrough performance in solving real problems on real datasets. At that point, the MANTISSA collaboration – a research project originally begun when I was part of Berkeley Lab’s Computational Research Division – was exploring similar pattern detection problems, and it was natural for us to explore whether deep learning could be applied to science problems. We spent the next three to four years exploring applications in earth systems modeling, neuroscience, astronomy and high energy physics.

When a new method or technology comes along, one has to make a judgment call on how long to wait before investing time and energy in exploring the possibilities. I think the DAS group at NERSC was one of the early adopters: we recognized the importance of this technique and demonstrated that it could work for science. In the experimental and observational data community, there are a lot of examples of domain scientists who have been struggling with pattern recognition problems for a long time. And now the broader science community is waking up to the possibilities of machine learning to help solve these problems.

What is NERSC’s current strategy for bringing deep learning capabilities to its users?

Since NERSC is a DOE Office of Science national user facility, we listen to our users, track their emerging requirements and respond to their needs. Our users are telling us that they would like to explore machine learning/deep learning and see what it can do for them. We currently have about 70 users who are actively using deep learning software at NERSC, and we want to make sure that our software, hardware, policies and documentation are all up to speed. Over the past two years, we have worked with the vendor community and identified a few popular deep learning frameworks (TensorFlow, Caffe, Theano and Torch) and have deployed them on Cori. In addition to making the software available, we have documentation and case studies in place. We also have in-depth collaborations in about a dozen areas where NERSC staff, mostly from the DAS group, have worked with scientists to help them explore the application of deep learning. And we are forming strategic relationships with commercial vendors and other research partners in the community to explore the frontier of deep learning for science.

Do certain areas of scientific research lend themselves more than others to applying deep learning?

Right now our success stories span research sponsored by several DOE Office of Science program offices, including BER, HEP and NP. In earth systems modeling, we have shown that convolutional architectures can extract extreme weather patterns from large simulation datasets. In cosmology, we have shown that CNNs can predict cosmological constants and that GANs can potentially be used to supplement existing cosmology simulations. In astronomy, the Celeste project has effectively used auto-encoders for modeling galaxy shapes. In high energy physics, we are using convolutional architectures to discriminate between different models of particle physics and exploring LSTM architectures for particle tracking. We’ve also shown that deep learning can be used for clustering and classifying various event types at the Daya Bay experiment.

So the big takeaway here is that for tasks involving pattern classification, regression and creating fast simulators, deep learning seems to do a good job – if you can find training data. That’s the big catch: if you have labeled data, you can employ deep learning, but it can be a challenge to find training data in some domain sciences.

Looking ahead, what are some of the challenges in developing deep learning tools for science and applying them to research projects at NERSC and other scientific supercomputing facilities?

We see a range of short-term and long-term challenges in deep learning for science. The short-term challenges are mostly pragmatic issues pertaining to the development, enhancement and deployment of tools. These include handling complex data: scientific data tends to be very diverse (compared to commercial images and speech); we work with 2D, 3D and even 4D data, and the datasets can be sparse or dense and defined over regular or irregular grids. Deep learning frameworks will need to account for this diversity going forward. Performance and scaling are also barriers: our current networks can take several days to converge on O(10) GB datasets, but several scientific domains would like to apply deep learning to 10TB-100TB datasets. Thankfully, this problem is right up our alley at HPC centers.

Another important challenge faced by domain scientists is hyper-parameter tuning: Which network architecture do you start with? How do you choose an optimization algorithm? How do you get the network to converge? Unfortunately, only a few deep learning experts know how to address this problem; we need automated strategies and tools. Finally, once scientific communities realize that deep learning can work for them and that access to labeled datasets is the key barrier to entry, they will need to self-organize and conduct labeling campaigns.
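One of the simplest automated strategies alluded to above is random search over a small hyper-parameter space. The sketch below is illustrative only; the search ranges and the evaluate() stub are assumptions for this example, not NERSC tooling.

```python
# Minimal sketch of random hyper-parameter search: sample configurations,
# score each one, and keep the best. Ranges and the evaluate() stub are
# placeholders for illustration only.
import random

search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [32, 64, 128],
    "conv_filters": [16, 32, 64],
}

def sample_config(space):
    """Draw one random configuration from the search space."""
    return {name: random.choice(choices) for name, choices in space.items()}

def evaluate(config):
    """Placeholder for: build a model with `config`, train briefly,
    and return validation accuracy."""
    return random.random()   # stand-in score so the loop runs end to end

best_config, best_score = None, -1.0
for _ in range(20):                       # 20 random trials
    config = sample_config(search_space)
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best configuration:", best_config, "score:", best_score)
```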

The longer-term challenges for deep learning in science are harder, by definition, and include a lack of theory, interpretability, uncertainty quantification and the need for a formal protocol. I believe it’s very early days in the application of deep learning to scientific problems. There’s a lot of low-hanging fruit in publishing easy papers that demonstrate state-of-the-art accuracy for classification, regression and clustering problems. But in order to ensure that the domain science community truly embraces the power of deep learning methods, we have to keep the longer term, harder challenges in mind.

About the Author

Kathy Kincade is a science & technology writer and editor with the Berkeley Lab Computing Sciences Communications Group.
