Normally, even a two-fold speedup is a big deal for a large-scale simulation, saving substantial time (and energy, and money) on machines that are often booked to capacity. Now, a team of researchers from Stanford University and the University of Oxford has applied deep learning to speed up simulations far more dramatically – by up to billions of times – without sacrificing accuracy.
Most simulations start from the ground up, building a system – such as a cell, a climate or a galaxy – one piece at a time and then letting it evolve according to a set of rules and other inputs to produce an outcome and answer a question. Emulators accelerate this process: researchers feed a series of inputs and corresponding outputs into a machine learning model, after which the emulator attempts to predict what the output would be for new, unseen inputs. But producing training data and optimized architectures for emulators means running those costly simulations many, many times, diluting the computational benefits.
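The train-then-predict workflow described above can be sketched in a few lines. This is a minimal illustration, not the researchers' method: the "expensive simulation", the sample inputs, and the use of ordinary least squares in place of a deep network are all stand-ins chosen to keep the example dependency-free.

```python
# Stand-in for an expensive simulation: maps one input parameter to one
# observable. A real code might take hours per run; this takes microseconds.
def expensive_simulation(x):
    return 4.2 * x + 1.3  # pretend this hides a costly numerical solve

# 1. Run the simulator a handful of times to build a training set.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [expensive_simulation(x) for x in xs]

# 2. Train a cheap surrogate on those (input, output) pairs.
#    Real emulators use deep networks; simple least-squares regression
#    shows the same train-then-predict flow without any dependencies.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def emulator(x):
    # 3. Predictions now cost one multiply-add instead of a full run.
    return slope * x + intercept

print(abs(emulator(0.6) - expensive_simulation(0.6)) < 1e-9)
```

The computational benefit comes entirely from step 3: once trained, the surrogate answers new queries without re-running the simulator – which is also why the cost of generating the training set in step 1 can dilute the savings.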
These researchers’ approach takes a different route: a tool called Deep Emulator Network SEarch (DENSE). DENSE randomly inserts layers of computation between the model’s inputs and outputs, testing with each iteration whether the added layer improves performance, and in this way quickly trains an accurate emulator from relatively few simulation runs. DENSE emulators can also solve inverse problems, identifying the simulation parameters that best reproduce a given output. DENSE is based on an approach co-developed by Melody Guan of Stanford University, who told Science that she was excited to see her work used for scientific discovery.
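The search loop described above – propose an extra layer at random, keep it only if it improves performance – can be caricatured in miniature. This is a toy sketch of that greedy idea, not DENSE itself: the "layers" here are simple basis functions rather than neural network blocks, and the target function, candidates, and thresholds are all invented for illustration.

```python
import random

random.seed(0)

# Toy target: emulate y = x^3 + 0.5*x from 21 "simulation runs".
xs = [i / 10.0 for i in range(-10, 11)]
ys = [x ** 3 + 0.5 * x for x in xs]

# Candidate "layers": basis functions standing in for network blocks.
candidates = [("x", lambda v: v), ("x^2", lambda v: v * v),
              ("x^3", lambda v: v ** 3), ("|x|", abs)]

def mse(res):
    return sum(r * r for r in res) / len(res)

residual = ys[:]   # what the emulator still gets wrong
kept = []          # layers that survived the search

# Greedy search: try candidate layers in random order, and keep each one
# only if adding it measurably reduces the emulator's error.
for _ in range(3):
    order = candidates[:]
    random.shuffle(order)
    for name, fn in order:
        feats = [fn(x) for x in xs]
        coef = (sum(f * r for f, r in zip(feats, residual))
                / sum(f * f for f in feats))
        trial = [r - coef * f for f, r in zip(feats, residual)]
        if mse(trial) < mse(residual) - 1e-12:  # keep only if it helps
            kept.append(name)
            residual = trial

print(mse(residual) < 0.1 * mse(ys))
```

The useless candidates (here the even functions `x^2` and `|x|`, which carry no information about an odd-symmetric target) never reduce the error and are discarded, while helpful ones accumulate – the same accept-if-it-improves logic, in spirit, that lets an architecture search settle on a good emulator without exhaustively training every possible network.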
“The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases,” the researchers wrote in the abstract of their paper, “including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology, using the same super-architecture, algorithm, and hyperparameters.”
More impressively still, the resulting emulators – which ran fastest on GPUs – achieved extremely high levels of accuracy: in the case of the astronomy simulation, results 99.9% identical to those of the full simulation. “Compared with other non-deep learning techniques usually employed in building emulators,” the researchers wrote, “the models found and trained by DENSE achieved the best results in all tested cases, and in most cases by a significant margin.”
“It’s a really cool result,” said Laurence Perreault-Levasseur, an astrophysicist at the University of Montreal, in Science. “It’s very impressive that this same methodology can be applied for these different problems, and that they can manage to train it with so few examples.”
“This is a big deal,” said Donald Lucas, head of climate simulations at Lawrence Livermore National Laboratory, in Science. “It would change things in a big way.”
About the research
The research discussed in this article was published as “Up to two billion times acceleration of scientific simulations with deep neural architecture search” and is accessible here. It was written by M. F. Kasim, D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D. H. Froula, G. Gregori, M. Jarvis, S. Khatiwala, J. Korenaga, J. Topp-Mugglestone, E. Viezzer and S. M. Vinko.
To read the original article discussing this research in Science, click here.