The ExaWind project describes itself in terms like wake formation, turbine-turbine interaction and blade-boundary-layer dynamics, but the pitch to the layperson is comparatively simple: more wind energy at a lower cost is a good thing, and to really get those numbers up, the U.S. will need large wind farms composed of megawatt-scale turbines both onshore and offshore. Achieving that goal, in turn, requires understanding those complex dynamics. Enter ExaWind, a project sponsored by the Department of Energy and the Exascale Computing Project that aims to do just that with the power of high-performance computing. At ISC21, Mike Sprague – a research scientist with the National Wind Technology Center at the National Renewable Energy Laboratory (NREL) – delved into the state of ExaWind and its current benchmarks.
Understanding how different types of wind spin a turbine – or don’t – is critical to siting that turbine for maximum energy production (and thus, maximum cost efficiency). “I would argue that for any system like this, only when we can really model it well can we optimize that system,” Sprague said.
But simulating wind turbines effectively is an enormously challenging proposition: the flow dynamics for a single turbine are already complex, but the complexity compounds as more turbines are introduced and begin to affect one another. “If you think about wind farm flow dynamics and coupled turbine structural dynamics, it is truly a very complex system,” Sprague explained. “Modeling that system is arguably a grand challenge, if you’re actually gonna do a predictive simulation of a wind farm.”
Indeed, as Sprague explained, solving the Navier-Stokes equations with the incompressible-flow constraint at every timestep is a “dominating challenge,” and adding turbulence pushes it toward impossibility. “Once we discretize these equations, it’s very easy to see that high-performance computing is gonna be needed to solve those systems,” he said. “You can easily have millions to many billions of equations, depending on how high of a resolution, or the number of scales we’re trying to capture.”
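For reference, the system being discretized is the incompressible Navier-Stokes equations – a momentum balance plus a divergence-free constraint on the velocity – written here in a generic textbook form (the actual ExaWind formulation layers on further modeling, such as turbulence treatment):

```latex
% Incompressible Navier-Stokes: momentum balance plus the
% divergence-free (incompressibility) constraint on the velocity u.
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \\
\nabla\cdot\mathbf{u} &= 0,
\end{aligned}
```

Here u is the velocity field, p the pressure, ρ the density, ν the kinematic viscosity and f any body forces. Discretizing those fields across a wind-farm-scale domain is what produces the “millions to many billions of equations” Sprague describes.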
One of the main challenges is the enormous range of scales in wind turbine simulation: from large-scale weather systems, down to wind farms spread over a few miles, down to individual turbines, and finally down to the sub-millimeter boundary layers on each turbine’s blades.
The ExaWind researchers tackle this with a combination of tools, predominantly Nalu-Wind – an incompressible-flow computational fluid dynamics solver that uses an unstructured grid – and AMR-Wind, a similar solver that uses a structured grid. A third tool, TIOGA, allows the ExaWind team to overlay those meshes on one another. “Our move to a hybrid solver was a game-changer for us,” Sprague said, explaining that the hybrid model “greatly” improved time-to-solution.
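Conceptually, the hybrid approach advances both grids each time step and uses the overset layer to keep the overlapping region consistent. The sketch below is purely illustrative – the class and method names are invented for this article and are not the actual Nalu-Wind, AMR-Wind or TIOGA APIs:

```python
# A minimal conceptual sketch of one time step in an overset ("hybrid")
# solver, in the spirit of the Nalu-Wind + AMR-Wind + TIOGA coupling.
# All class and method names are invented for illustration only.

class NearBodySolver:
    """Stand-in for an unstructured near-body flow solver (Nalu-Wind's role)."""
    def __init__(self):
        self.fringe_values = None        # data received from the background grid
    def advance(self, dt):
        pass                             # advance the incompressible-flow equations
    def sample_outer_boundary(self):
        return [0.0]                     # values handed to the background grid

class BackgroundSolver:
    """Stand-in for a structured background flow solver (AMR-Wind's role)."""
    def __init__(self):
        self.fringe_values = None
    def advance(self, dt):
        pass
    def sample_hole_boundary(self):
        return [0.0]

class OversetAssembler:
    """Stand-in for the overset-connectivity layer (TIOGA's role)."""
    def connect(self, near, background):
        pass                             # decide which grid owns each overlap cell
    def exchange(self, near, background):
        # Interpolate each solver's solution into the other's fringe points.
        near.fringe_values = background.sample_hole_boundary()
        background.fringe_values = near.sample_outer_boundary()

def advance_one_step(near, background, overset, dt):
    overset.connect(near, background)    # rebuild connectivity (meshes may move)
    overset.exchange(near, background)   # fill fringe points before the solve
    near.advance(dt)                     # each solver steps on its own mesh
    background.advance(dt)
    overset.exchange(near, background)   # re-synchronize the overlap region

advance_one_step(NearBodySolver(), BackgroundSolver(), OversetAssembler(), dt=0.01)
```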
The effects are visible in the example below, where the unstructured mesh around the sphere is handled by Nalu-Wind and sits inside a structured mesh provided by AMR-Wind, with TIOGA stitching the two domains together.
Similarly, in this “victory” simulation of a wind turbine with 122 million grid points, the left image shows the structured grid from AMR-Wind, stepping down in cell size as it approaches the turbine blades. Less visible – but still present – are the unstructured grids around the blades themselves. On the right is the resulting simulation, complete with turbulent atmospheric flow, as resolved by the CPUs of NREL’s Eagle supercomputer. (Eagle, a 4.9 Linpack petaflops HPE system installed in 2018, still ranks among the top hundred publicly ranked supercomputers.)
“The simulation is resolving eight orders of magnitude – it goes from 10⁻⁵-meter cell sizes on the boundary layer of the blades to 10³ meters in the domain size,” Sprague said. “This is a validation-quality simulation.”
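The arithmetic behind that range also shows why the hybrid meshing strategy matters: a single uniform grid at boundary-layer resolution over the whole domain would be astronomically large. A quick back-of-the-envelope check, using only the figures quoted above:

```python
import math

# Back-of-the-envelope arithmetic behind the "eight orders of magnitude" figure.
smallest_cell = 1e-5   # meters: boundary-layer cells on the blade surface
domain_size = 1e3      # meters: overall extent of the simulation domain

print(math.log10(domain_size / smallest_cell))   # 8.0 orders of magnitude

# A single uniform grid at boundary-layer resolution would need roughly
# (1e8)^3 = 1e24 cells -- far beyond any computer, which is why a refined
# background mesh plus a near-body unstructured mesh is used instead.
cells_per_direction = domain_size / smallest_cell
print(f"{cells_per_direction ** 3:.0e} uniform cells")   # 1e+24
```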
But of course, as supercomputer architectures rapidly evolve, the ExaWind researchers are working to ensure that the code remains capable and relevant on tomorrow’s supercomputers – and even on non-supercomputers. “Performance portability is central to our development plan,” Sprague said. ExaWind, he said, is open-source and multi-fidelity, and can be run on anything from laptops to next-gen supercomputers. To that end, the team has lately devoted significant effort to benchmarking and improving performance on GPUs, which are increasingly standard for major supercomputers and computational fluid dynamics simulations – and which are foundational to the three incoming U.S. exascale systems.
On AMR-Wind, Sprague said, the strong and weak scaling across CPUs were “almost ideal” – but, on the other hand, “the GPU results do not show ideal … scaling.” Rather, the GPUs showed substantially diminishing returns, but were also significantly faster on a per-node basis for both types of scaling. On Nalu-Wind, meanwhile, CPUs outperformed GPUs on every metric except at one edge of the scaling chart.
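For readers less familiar with the terminology: strong scaling holds the problem size fixed while adding nodes, while weak scaling grows the problem along with the node count. A minimal sketch of how the corresponding efficiencies are typically computed – with placeholder timings, not ExaWind’s measured results – looks like this:

```python
# Placeholder arithmetic only -- these timings are invented for illustration
# and are not ExaWind's measured benchmark results.

def strong_scaling_efficiency(t_base, t_n, base_nodes, n_nodes):
    """Fixed problem size: ideal is run time shrinking in proportion to nodes."""
    return (t_base * base_nodes) / (t_n * n_nodes)

def weak_scaling_efficiency(t_base, t_n):
    """Problem size grows with node count: ideal is constant time per step."""
    return t_base / t_n

# Example: doubling the node count cuts a fixed job from 100 s to 60 s ...
print(strong_scaling_efficiency(100.0, 60.0, base_nodes=1, n_nodes=2))  # ~0.83
# ... while a job twice as large on twice the nodes takes 110 s instead of 100 s.
print(weak_scaling_efficiency(100.0, 110.0))                            # ~0.91
```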
Sprague and his colleagues found these benchmarks – which were run on the 200.8 Linpack petaflops Summit system at Oak Ridge National Laboratory – to show “very promising GPU-based results” for both tools. The benchmarks were run on each solver separately, rather than on the hybrid model that ExaWind now uses; Sprague said that benchmark results for the hybrid model will be available in the coming weeks.
Looming on the horizon for ExaWind is the next frontier: offshore wind. On that front, Sprague said, stay tuned for the next developments.