Fluid dynamics simulations are critical for applications ranging from wind turbine design to aircraft optimization. Running them as direct numerical simulations (DNS), however, is computationally costly. Many researchers instead turn to large-eddy simulations (LES), which approximate a fluid’s fine-scale motions rather than resolving them fully in order to reduce computational costs, though these approximations come at the expense of accuracy. Now, a team is using supercomputers at the High-Performance Computing Center Stuttgart (HLRS) to help make that higher level of accuracy accessible to more researchers.
Much of this work focuses on what’s known as “closure terms”: expressions that describe how the fine-scale motions missing from a lower-resolution simulation influence the resolved flow, effectively letting a coarse simulation stand in for a higher-resolution one.
“To use a photograph analogy, the closure term is the expression for what is missing between the coarse grained image and the full image,” said Andrea Beck, the researcher at the University of Stuttgart’s Institute for Aerodynamics and Gas Dynamics (IAG) who led the project, in an interview with HLRS’ Eric Gedenk. “It is a term you are trying to replace, in a sense. A closure tells you how this information from the full image influences the coarse one.”
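In the language of the equations themselves, the idea can be sketched as follows. This is the generic textbook form of the LES closure problem, not necessarily the exact formulation used in this project: filtering the incompressible momentum equation leaves one term that cannot be computed from the coarse-grained fields alone.

$$
\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j .
$$

Every barred quantity can be taken from the coarse-grained (“blurred”) flow field, but the subgrid stress $\tau_{ij}$ depends on the full, unfiltered motion. That unresolved term plays the role of the closure term in Beck’s analogy: the piece of information from the full image that must be modeled so the coarse simulation behaves correctly.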
Using supervised learning, the researchers set out to train an artificial neural network to identify these closure terms. “For supervised learning, it is like giving the algorithm 1,000 pictures of cats and 1,000 pictures of dogs,” Beck said. “Eventually, when the algorithm has seen enough examples of each, it can see a new picture of a cat or a dog and be able to tell the difference.”
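As a rough sketch of what such a supervised-learning workflow looks like in code (a minimal, hypothetical example using PyTorch, with invented array names and sizes; it is not the team’s actual network, data, or software stack), training amounts to repeatedly showing the network labeled examples and nudging its weights whenever it answers incorrectly:

```python
# Minimal, hypothetical sketch of supervised classification with PyTorch.
# "features" stands in for quantities extracted from simulation data, and
# "labels" for the index of the correct answer; both are invented placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, n_features, n_classes = 1024, 16, 3        # assumed sizes, for illustration only
features = torch.randn(n_samples, n_features)          # placeholder training inputs
labels = torch.randint(0, n_classes, (n_samples,))     # placeholder "correct answer" labels

model = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, n_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    optimizer.zero_grad()
    logits = model(features)           # the network's guess for each sample
    loss = loss_fn(logits, labels)     # penalize wrong guesses, like calling a cat a dog
    loss.backward()
    optimizer.step()

# Accuracy is simply the fraction of samples the trained network classifies correctly.
accuracy = (model(features).argmax(dim=1) == labels).float().mean()
print(f"training accuracy: {accuracy.item():.2%}")
```

In the project described here, the role of the cat and dog pictures is played by data drawn from the direct numerical simulations described below, and the label the network learns to produce is the correct closure term.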
But first, to create the data necessary for this training process, they ran a series of direct numerical simulations on supercomputers at HLRS: specifically, the Hawk and Hazel Hen systems. At launch, Hawk’s 5,632 AMD Epyc-powered nodes delivered 26 peak petaflops, while Hazel Hen’s 7,712 Intel Xeon-powered nodes delivered 7.4 peak petaflops. Hawk also recently received an upgrade that added 24 HPE Apollo 6500 Gen10 Plus systems outfitted with 192 Nvidia A100 GPUs.
The team ran around 40 sets of calculations on the two supercomputers, with each run utilizing around 20,000 cores. After using the resulting data to test two training approaches, the team identified one approach that achieved 99 percent accuracy in selecting the correct closure term.
“This is a great step forward in helping us to augment traditional HPC codes with new, data-driven methods and fills a definite gap,” Beck said. “It will not only help speed up our development and research processes, but provide us with the opportunity to deploy them at scale on Hawk.”
The team plans to make its data available within the year.
Header image: a direct numerical simulation of turbulence. Image courtesy of Marius Kurz, University of Stuttgart.