April 7, 2021 — The rapidly growing fields of artificial intelligence (AI) and machine learning (ML) have become cornerstones of Lawrence Livermore National Laboratory’s (LLNL) data science research activities. The Lab’s scientific community regularly publishes advancements in both AI/ML applications and theory, contributing to international discourse on the possibilities of these compelling technologies.
The large volume of AI/ML scientific literature can be overwhelming, so researchers sometimes organize reading groups where one person reads a paper and presents the methods and results to colleagues. For instance, the Lab has active reading groups studying ML and reinforcement learning. The Data-Driven Physical Simulation (DDPS) reading group has been meeting biweekly since October 2019. DDPS is led by Youngsoo Choi, a computational scientist at LLNL’s Center for Applied Scientific Computing.
Choi created the group to study literature that bridges the gap between purely data-driven approaches and first-principles physics simulation. He explains, “Now there is a great push to incorporate existing first principles into machine learning, resulting in many good papers and presentations. I wanted to make this work visible to my fellow scientists so their projects can benefit from other research and ideas.” For example, Choi’s work in reduced order modeling (ROM) combines data and underlying first principles to accelerate physical simulations—in other words, reducing computational complexity without losing accuracy.
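To make the ROM idea concrete, here is a minimal, self-contained sketch of one common projection-based approach (proper orthogonal decomposition with Galerkin projection). It is illustrative only and not Choi’s actual method or code: the toy linear system, its size, and the repeated decay rates (which make the dynamics exactly low-dimensional) are all invented for the example.

```python
import numpy as np

# Toy full-order model: dx/dt = A x with n = 100 states but only
# r = 10 distinct decay rates, so the dynamics are truly low-dimensional.
rng = np.random.default_rng(0)
n, r = 100, 10
rates = np.repeat(np.linspace(1.0, 10.0, r), n // r)
A = -np.diag(rates)
x0 = rng.standard_normal(n)

# Run the expensive full-order model once and collect solution snapshots.
dt, steps = 0.01, 200
X = np.empty((n, steps))
x = x0.copy()
for k in range(steps):
    x = x + dt * (A @ x)   # forward Euler time step
    X[:, k] = x

# Proper orthogonal decomposition: dominant left singular vectors
# of the snapshot matrix form a reduced basis V (n x r).
U, s, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]

# Galerkin projection: evolve r coordinates instead of n.
Ar = V.T @ A @ V           # r x r reduced operator
xr = V.T @ x0
for k in range(steps):
    xr = xr + dt * (Ar @ xr)

# The cheap reduced model reconstructs the full final state.
err = np.linalg.norm(V @ xr - x) / np.linalg.norm(x)
print(f"relative error of {r}-dim ROM vs {n}-dim model: {err:.1e}")
```

Because the toy dynamics live exactly in a 10-dimensional subspace, the reduced model here reproduces the full model essentially to machine precision; in real simulations the basis is truncated and the trade-off between speed and accuracy is the research question.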
A Virtual Transformation
The COVID-19 pandemic almost disbanded the group. But Choi says his colleagues were eager to continue via WebEx conferencing. After several months, the virtual format inspired a new possibility. Choi explains, “If we hold this meeting virtually, why limit the speakers to those within our group? Why not listen to the authors of those papers?”
Immediately Choi began contacting authors of standout papers on DDPS topics. “They were all interested in giving a virtual talk to LLNL staff,” he states. “So the reading group turned into a seminar series.” The invited speakers are not necessarily existing research partners with LLNL scientists—but they could be. Choi adds, “I hope the speakers and our scientists will connect and collaborate on research relevant to the Lab’s missions.”
The virtual format is convenient, eliminating the need for speakers to travel, and Choi expects to continue the series after the pandemic has passed. “I am getting great feedback on the series. There is much interest in this topic,” he says, pointing out that the audience has grown since the group’s transformation. The February 18 presentation “Differentiable Physics Simulations for Deep Learning” by Nils Thuerey (Technical University of Munich) drew more than 200 attendees.
Watch and Learn
The DDPS seminar series is available as a playlist on the Livermore Lab Events YouTube channel. Each talk runs about an hour and concludes with Q&A. For first-time viewers, Choi recommends Paris Perdikaris’s presentation “When and Why Physics-Informed Neural Networks Fail to Train.” On February 4, Perdikaris (University of Pennsylvania) introduced a new way of simulating physics using neural networks.
Choi explains, “Although the physics-informed neural network is slower and less accurate than the traditional way of simulating physics, the way it solves the physical simulations is completely different in that you can recover a reasonable solution of physical simulations, usually represented by partial differential equations, even with pretty sparse data.”
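The core idea Choi describes—recovering a PDE solution by jointly minimizing a physics residual at collocation points and a mismatch at a handful of data points—can be illustrated without a neural network at all. The sketch below is a loose, linear stand-in for the physics-informed approach: it uses a polynomial basis instead of a network, and the PDE (u″ = f on [0, 1]), the collocation points, and the two “sparse data” measurements are all invented for the example; it is not Perdikaris’s method.

```python
import numpy as np

# Exact solution, used only to sample sparse "data" and to check the result.
u_true = lambda x: np.sin(np.pi * x)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # PDE: u''(x) = f(x) on [0, 1]

deg = 12
xc = np.linspace(0.0, 1.0, 50)                # collocation points for the residual
xd = np.array([0.25, 0.75])                   # only two sparse data points
ud = u_true(xd)

# Ansatz u(x) = sum_k c_k x^k. PDE rows enforce sum_k c_k (x^k)'' = f(xc);
# data rows enforce sum_k c_k xd^k = ud.
k = np.arange(deg + 1)
D2 = (k * (k - 1)) * np.power.outer(xc, np.maximum(k - 2, 0))
B = np.power.outer(xd, k)

# Minimize PDE residual + data mismatch jointly (the "physics-informed" loss).
Asys = np.vstack([D2, B])
b = np.concatenate([f(xc), ud])
c, *_ = np.linalg.lstsq(Asys, b, rcond=None)

# Even with just two measurements, the physics residual pins down the solution.
xs = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(np.power.outer(xs, k) @ c - u_true(xs)))
print(f"max error vs exact solution: {err:.1e}")
```

In a true physics-informed neural network the same two loss terms appear, but the ansatz is a network and the derivatives come from automatic differentiation, which makes training far harder—the pathology Perdikaris’s talk examines.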
For a quicker example, Choi presented a five-minute introduction to ROM based on a research poster accepted to the 2020 Conference on Neural Information Processing Systems (NeurIPS).