Deep Learning-Based Surrogate Models Outperform Simulators and Could Hasten Scientific Discoveries

June 18, 2020

June 18, 2020 — Surrogate models supported by neural networks can perform as well, and in some ways better, than computationally expensive simulators and could lead to new insights in complicated physics problems such as inertial confinement fusion (ICF), Lawrence Livermore National Laboratory (LLNL) scientists reported.

Lawrence Livermore researchers are integrating technologies such as the Sierra supercomputer (left) and the National Ignition Facility (NIF) (right) to understand complex problems such as fusion energy and the effects of aging in nuclear weapons. Data from NIF experiments (inset, right) and simulations (inset, left) are being combined with deep learning methods to advance areas important to national security and the future energy sector. Illustration by Tanya Quijalvo/LLNL.

In a paper published by the Proceedings of the National Academy of Sciences (PNAS), LLNL researchers describe the development of a deep learning-driven Manifold & Cyclically Consistent (MaCC) surrogate model incorporating a multi-modal neural network capable of quickly and accurately emulating complex scientific processes, including the high-energy density physics involved in ICF.

The research team applied the model to ICF implosions performed at the National Ignition Facility (NIF), in which a computationally expensive numerical simulator is used to predict the energy yield of a target imploded by shock waves produced by the facility’s high-energy laser. Comparing the results of the neural network-backed surrogate to the existing simulator, the researchers found the surrogate could adequately replicate the simulator, and significantly outperformed the current state-of-the-art in surrogate models across a wide range of metrics.
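As an illustration of this kind of comparison, the short Python sketch below computes a few standard agreement metrics (mean absolute error, median relative error, R²) between simulator-produced yields and surrogate predictions. The data, metric choices and function names are illustrative assumptions, not the specific metrics or results reported in the PNAS paper.

```python
# Hypothetical sketch: scoring a surrogate against held-out simulator runs.
# Arrays, metrics and names are assumptions for illustration only.
import numpy as np

def compare_to_simulator(y_simulator: np.ndarray, y_surrogate: np.ndarray) -> dict:
    """Summarize agreement between simulator yields and surrogate predictions."""
    abs_err = np.abs(y_simulator - y_surrogate)
    rel_err = abs_err / (np.abs(y_simulator) + 1e-12)  # guard against zero yield
    r2 = 1.0 - ((y_simulator - y_surrogate) ** 2).sum() / (
        (y_simulator - y_simulator.mean()) ** 2).sum()
    return {
        "mean_abs_error": float(abs_err.mean()),
        "median_rel_error": float(np.median(rel_err)),
        "r2": float(r2),
    }

# Usage with made-up data standing in for (simulator, surrogate) yield pairs.
rng = np.random.default_rng(0)
y_sim = rng.uniform(0.0, 1.0, size=1000)           # simulator energy yields
y_sur = y_sim + rng.normal(0.0, 0.02, size=1000)   # surrogate predictions
print(compare_to_simulator(y_sim, y_sur))
```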

“One major question we were dealing with was ‘how do we start using machine learning when you have a lot of different kinds of data?’ ” said LLNL computer scientist and lead author Rushil Anirudh. “What we proposed was making the problem simpler by finding a common space where all these modalities, such as high pressure or temperature, live and do the analysis within that space. We’re saying that deep learning can capture the important relationships between all these different data sources and give us a compact representation for all of them.

“The nice thing about doing all this is not only that it makes the analysis easier, because now you have a common space for all these modalities, but we also showed that doing it this way actually gives you better models, better analysis and objectively better results than with baseline approaches,” Anirudh added.
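The "common space" Anirudh describes can be sketched, very roughly, as a set of modality-specific encoders that all map into one shared embedding. The minimal PyTorch example below is an assumed illustration of that pattern; the layer sizes, modality choices and class names are made up and do not reflect the actual MaCC architecture.

```python
# Minimal sketch of a shared latent space for two modalities (assumed setup).
import torch
import torch.nn as nn

LATENT_DIM = 32

class ImageEncoder(nn.Module):
    """Encode a diagnostic image into the shared latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class ScalarEncoder(nn.Module):
    """Encode scalar diagnostics into the same latent space."""
    def __init__(self, n_scalars: int = 15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_scalars, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM)
        )

    def forward(self, x):
        return self.net(x)

# Both modalities now live in the same 32-dimensional space, so they can be
# compared, fused, or decoded jointly.
images = torch.randn(8, 1, 64, 64)
scalars = torch.randn(8, 15)
z_img, z_scalar = ImageEncoder()(images), ScalarEncoder()(scalars)
print(z_img.shape, z_scalar.shape)  # torch.Size([8, 32]) torch.Size([8, 32])
```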

Simulations that would normally take a numerical simulator a half-hour to run could be completed just as well in a fraction of a second using neural networks, Anirudh explained. Perhaps even more valuable than the savings in compute time, said computer scientist and co-author Timo Bremer, is the demonstrated ability of the deep learning surrogate model to analyze a large volume of complex, high-dimensional data in the ICF test case, which has implications for stockpile modernization efforts. The results indicate the approach could lead to new scientific discoveries and a completely novel class of techniques for performing and analyzing simulations, Bremer said.
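To make the speed claim concrete, the hypothetical sketch below times a single batched forward pass through a small stand-in network; the network shape, input dimensions and batch size are assumptions, not the Lab's benchmark setup.

```python
# Illustrative timing sketch (assumed setup): a trained surrogate's evaluation
# amounts to one batched inference call, which is why it runs in milliseconds
# where a numerical simulator needs minutes per run.
import time
import torch
import torch.nn as nn

surrogate = nn.Sequential(  # stand-in for a trained surrogate network
    nn.Linear(5, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 16),  # predicted diagnostics / yield summary
)

inputs = torch.randn(1000, 5)  # 1,000 candidate target designs
with torch.no_grad():
    start = time.perf_counter()
    predictions = surrogate(inputs)
    elapsed = time.perf_counter() - start
print(f"Evaluated {inputs.shape[0]} designs in {elapsed * 1e3:.1f} ms")
```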

This is particularly important at NIF, Bremer explained, where scientists do not yet fully understand why discrepancies exist between simulations and experiments. In the future, deep learning models could elicit capabilities that didn’t exist before and provide a way for scientists to analyze the massive amounts of X-ray images, sensor data and other information collected from diagnostics of each NIF shot, including data that has not been incorporated because there is too much of it to be analyzed by humans alone, Bremer said.

“This tool is providing us with a fundamentally different way of connecting simulations to experiments,” Bremer said. “By building these deep learning models, it allows us to directly predict the full complexity of the simulation data. Using this common latent space to correlate all these different modalities and different diagnostics, and using that space to connect experiments to simulations, is going to be extremely valuable, not just for this particular piece of science, but everything that tries to combine computational sciences with experimental sciences. This is something that could potentially lead to new insights in a way that’s just unfeasible right now.”

Comparing the predictions of the surrogate model against the simulator typically used for ICF experiments, the researchers found the MaCC surrogate was nearly indistinguishable from the simulator in both its errors and its expected energy yields, and was more accurate than other types of surrogate models. Researchers said the key to the MaCC model’s success was coupling a forward model with an inverse model and training the two on data together. The surrogate used data inputs to make predictions, and those predictions were run through the inverse model to estimate, from the outputs, what the inputs might have been. During training, the surrogate’s neural networks learned to be compatible with the inverse model, so errors did not accumulate as much as they otherwise would have, Anirudh said.
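A rough sketch of that forward/inverse coupling might look like the following, with assumed network sizes and a made-up cycle-consistency weight. It illustrates the idea of penalizing the round trip from inputs to predicted outputs and back; it is not the training procedure used in the paper.

```python
# Hedged sketch of joint forward/inverse training with a cycle-consistency term.
import torch
import torch.nn as nn

forward_model = nn.Sequential(nn.Linear(5, 128), nn.ReLU(), nn.Linear(128, 16))
inverse_model = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 5))
optimizer = torch.optim.Adam(
    list(forward_model.parameters()) + list(inverse_model.parameters()), lr=1e-3
)
mse = nn.MSELoss()

def training_step(x, y, cycle_weight: float = 0.1):
    """One joint update: fit the forward map and keep it consistent with the inverse."""
    optimizer.zero_grad()
    y_pred = forward_model(x)        # inputs -> predicted outputs
    x_cycle = inverse_model(y_pred)  # predicted outputs -> recovered inputs
    loss = mse(y_pred, y) + cycle_weight * mse(x_cycle, x)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy data standing in for (simulation input, simulation output) pairs.
x_batch, y_batch = torch.randn(64, 5), torch.randn(64, 16)
print(training_step(x_batch, y_batch))
```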

“We were exploring this notion of self-consistency,” Anirudh explained. “We found that including the inverse problem into the surrogate modeling process is actually essential. It makes the problem more data-efficient and slightly more robust. When you put these two pieces together, the inverse model and the common space for all the modalities, you get this grand surrogate model that has all these other desirable properties — it is more efficient and better with less amount of data, and it’s also resilient to sampling artifacts.”

The team said the benefit of machine learning-based surrogates is that they can speed up extremely complex calculations and compare varied data sources efficiently without requiring a scientist to scan tremendous amounts of data. As simulators become increasingly complex, producing even more data, such surrogate models will become a fundamental complementary tool for scientific discovery, researchers said.

“The tools we built will be useful even as the simulation becomes more complex,” said computer scientist and co-author Jayaraman Thiagarajan. “Tomorrow we will get new computing power, bigger supercomputers and more accurate calculations, and these techniques will still hold true. We are surprisingly finding that you can produce very powerful emulators for the underlying complex simulations, and that’s where this becomes very important.

“As long as you can approximate the underlying science using a mathematical model, the speed at which we can explore the space becomes really, really fast,” Thiagarajan continued. “That will hopefully help us in the future to make scientific discoveries even quicker and more effectively. We believe that even though we used it for this particular application, this approach is broadly applicable to the general umbrella of science.”

Researchers said the MaCC surrogate model could be adapted to any future change in modality, including new types of sensors or imaging techniques. Because of its flexibility and accuracy, the model and its deep learning approach, referred to at LLNL as “cognitive simulation” or simply CogSim, are being applied to a number of other projects within the Laboratory and are transitioning to programmatic work, including efforts in uncertainty quantification, weapons physics design, magnetic confinement fusion and other laser projects.

MaCC is a key product of the Lab’s broader Cognitive Simulation Director’s Initiative, led by principal investigator and LLNL physicist Brian Spears and funded through the Laboratory Directed Research and Development (LDRD) program. The initiative aims to advance a wide range of AI technologies and computational platforms specifically designed to improve scientific predictions by more effectively coupling precision simulation with experimental data. By focusing on both the needs in critical mission spaces and the opportunities presented by AI and compute advances, the initiative has helped further LLNL’s lead in using AI for science.

“MaCC’s ability to combine multiple, scientifically relevant data streams opens the door for a wide range of new analyses,” Spears said. “It will allow us to extract information from our most valuable and mission-critical experimental and simulation data sets that has been inaccessible until now. Fully exploiting this information in concert with a new suite of related CogSim tools will lead quickly and directly to improved predictive models.”

The research team has made their data publicly available on the web.


Source: Lawrence Livermore National Laboratory
