According to the American Wind Energy Association, wind energy in the U.S. has more than tripled over the last ten years, making it the largest renewable energy source in the country. As wind increasingly becomes a power player, so to speak, in competition with coal and natural gas, understanding how to maximize efficiency (and thus minimize cost per megawatt-hour) is more important than ever. Now, NERSC’s Jennifer Huber has highlighted ExaWind, a project under the U.S. Department of Energy’s Exascale Computing Project that is working to bring wind energy simulation into the exascale era.
“Our ExaWind challenge problem is to simulate the air flow of nine wind turbines arranged as a three-by-three array inside a space five kilometers by five kilometers on the ground and a kilometer high,” explained Shreyas Ananthan, an NREL research software engineer and lead technical expert on ExaWind. “And we need to run about a hundred seconds of real-time simulation.” This work will help shepherd wind simulation from the single-turbine petascale era into the multi-turbine exascale era.
By doing this, the ExaWind researchers aim to gain a better understanding of the intricate physics that govern wind's passage through a wind farm, leading to breakthroughs in the design, operation, and even siting of wind turbines. The researchers are designing a predictive high-resolution model to tackle these complex physics problems, which include such factors as the impact of ground terrain on the wind and turbine-turbine wind interactions.
For the most critical calculations – solving the wind flow near the turbines – the researchers are applying Nalu-Wind, a flexible code built on unstructured grids. But Nalu-Wind is computationally expensive, which led the researchers to seek an alternative. “Originally, ExaWind planned to use Nalu-Wind everywhere, but coupling Nalu-Wind with a structured grid code may offer a much faster time-to-solution,” said Ann Almgren, head of the Center for Computational Sciences and Engineering in Berkeley Lab’s Computational Research Division.
Ananthan guided the project toward AMReX, a software framework that would allow the researchers to conduct simulations on a structured mesh hierarchy. “AMReX allows you to zoom in to get fine resolution in the regions you care about but have coarse resolution everywhere else,” Almgren said.
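The "zoom in where you care" idea Almgren describes is block-structured adaptive mesh refinement: a coarse grid covers the whole domain, and successively finer grid levels are added only over regions of interest. The following sketch illustrates the concept on a 1D domain; the function name, the fixed region of interest, and the numbers are illustrative assumptions, not AMReX's actual API or ExaWind's configuration.

```python
# Minimal sketch of block-structured adaptive mesh refinement (AMR):
# a coarse base level spans the whole domain, and each finer level
# covers only a region of interest (here, a fixed interval standing in
# for the area near the turbines) at double the resolution.

def build_amr_levels(domain, region_of_interest, base_cells, max_levels, ratio=2):
    """Return a list of (cell_size, covered_interval) per refinement level.

    domain: (lo, hi) extent of the whole 1D domain, in meters
    region_of_interest: (lo, hi) interval to resolve finely
    ratio: refinement ratio between successive levels
    """
    lo, hi = domain
    levels = []
    cell = (hi - lo) / base_cells  # coarse cell size on the base level
    covered = domain               # the base level covers everything
    for _ in range(max_levels):
        levels.append((cell, covered))
        # The next finer level covers only the region of interest,
        # padded by one current-level cell, at `ratio` x the resolution.
        r_lo = max(lo, region_of_interest[0] - cell)
        r_hi = min(hi, region_of_interest[1] + cell)
        covered = (r_lo, r_hi)
        cell = cell / ratio
    return levels

# A 5 km domain with fine resolution concentrated near its center.
levels = build_amr_levels(domain=(0.0, 5000.0),
                          region_of_interest=(2400.0, 2600.0),
                          base_cells=100, max_levels=4)
for i, (dx, (a, b)) in enumerate(levels):
    print(f"level {i}: dx = {dx:6.2f} m over [{a:.1f}, {b:.1f}] m")
```

The payoff is in the cell counts: each finer level doubles the resolution but shrinks the area it covers, so the total number of cells grows far more slowly than refining the entire domain would.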
The researchers built a code based on AMReX, called AMR-Wind, that couples with Nalu-Wind via a coupling code called TIOGA. The result: Nalu-Wind solves the flow around the turbines, while AMR-Wind solves the flow everywhere else, refining its resolution as it approaches the area around the turbines.
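This division of labor is the overset-grid idea: each solver owns part of the domain, the grids overlap, and each grid fills the boundary ("fringe") values of its overlap region by interpolating from the other grid's solution. The toy below shows one direction of such an exchange on 1D grids; the grids, the field being solved, and all function names are stand-ins for illustration, not the TIOGA interface.

```python
# Illustrative overset exchange: a coarse background grid covers the
# whole domain, a fine near-body grid covers a small overlapping patch,
# and the near-body grid's endpoints are filled by interpolating the
# background solution. A real coupler also fills background points that
# lie inside the fine patch from the near-body solution (not shown).

import math

def linear_interp(xs, ys, x):
    """Piecewise-linear interpolation of donor samples (xs, ys) at x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    raise ValueError("x outside donor grid")

# Background grid: coarse, spacing 0.5, covering [0, 10].
bg_x = [i * 0.5 for i in range(21)]
# Near-body grid: fine, spacing 0.1, covering [4, 6].
nb_x = [4.0 + i * 0.1 for i in range(21)]

field = math.sin  # toy field standing in for each solver's solution
bg_u = [field(x) for x in bg_x]
nb_u = [field(x) for x in nb_x]

# Fringe exchange: the fine grid's boundary values come from the
# coarse background solution.
nb_u[0] = linear_interp(bg_x, bg_u, nb_x[0])
nb_u[-1] = linear_interp(bg_x, bg_u, nb_x[-1])
```

In a coupled simulation this exchange happens every time step, in both directions, so the two solvers march forward while agreeing on the overlap region.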
To test the code, the ExaWind team has been leveraging a number of HPC systems. “For the last three years, we’ve been using NERSC’s Cori heavily,” Ananthan said, “as well as NREL’s Peregrine and Eagle.” On Cori, a Cray XC40 system rated at 14 Linpack petaflops, the team used up to 1,024 of its Haswell-based nodes. The team also leveraged Argonne National Laboratory’s Mira, an IBM system rated at 8.6 Linpack petaflops, using up to 49,152 nodes.
Thanks to this testing, work is going swiftly. “Right now we’re still doing proof-of-concept testing for coupling the AMR-Wind and Nalu-Wind codes, but we expect to have the coupled software running on the full domain by the end of FY20,” said Almgren.
To read NERSC’s Jennifer Huber’s article discussing this research, click here.