Advanced Wind Farm Simulations Key to Energy Strategy

By Tiffany Trader

May 14, 2014

With energy consumption on the rise around the world, interest in renewable energy sources has taken off. Wind power is a major component of the US energy strategy – it is affordable, efficient, abundant, and pollution-free. Over the last decade, wind turbine farms have become a common feature, dotting landscapes across the nation, and today they account for about 4 percent of the total electricity generated in the US.

While wind power has many positive attributes, its main downside is its sporadic nature. Actual power production depends on a range of atmospheric variables, such as wind speed and turbulence, which vary across a wide range of spatial and temporal scales.

Getting the most energy from these mechanical giants is thus a complex endeavor, but research teams are working hard to reduce the uncertainty that affects wind power forecasts. One of the main sites dedicated to optimizing wind power in the US is Lawrence Livermore National Laboratory. The lab has about a dozen atmospheric scientists, mechanical and computational engineers, and statisticians using fieldwork, advanced simulation, and statistical analysis to boost wind power production. High-performance computing is integral to the effort.

Jeff Roberts, Program Leader for Renewable Energy and Energy Systems, recently published a letter describing the lab’s role in developing this valuable resource.

“We must reduce our dependence on imported fossil fuels while ensuring plentiful clean energy with renewable sources,” Roberts writes. “The wind, however, is an intermittent resource that is challenging to predict, sometimes varying significantly from minute to minute. What’s more, complex atmospheric factors, such as turbulence, and topographical features, such as hills, modify the wind speed and direction and hence the power that can be extracted by wind turbines. Turbulence also plays an important role in the reliability and life span of turbine components.”

These simulations can be extraordinarily complicated, says Roberts. The complexity stems from length scales that can span eight orders of magnitude – from millimeters in the rotor-blade boundary layer to 100 kilometers for large atmospheric weather patterns.
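
For a sense of that range, 100 kilometers is 10^8 millimeters, so resolving both ends of the spectrum in a single model means bridging eight orders of magnitude. A quick back-of-the-envelope check (an illustration only, not Livermore's code):

```python
import math

# Rough check of the scale separation quoted above (illustrative values only).
blade_boundary_layer_m = 1e-3   # ~millimeters, rotor-blade boundary layer
weather_pattern_m = 100e3       # ~100 km, large atmospheric weather patterns

orders_of_magnitude = math.log10(weather_pattern_m / blade_boundary_layer_m)
print(orders_of_magnitude)      # 8.0 -> eight orders of magnitude
```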

“Simulating wind change and its effects on turbines is challenging because of the complex forces driving wind,” explains Livermore mechanical engineer Wayne Miller, associate program leader for wind and solar power. “We’re essentially simulating a fluid flow in an environment where factors such as aerosols, clouds, humidity, surface–atmosphere energy exchange, and terrain influence to varying degrees both the complexity of the flow and how much power can be extracted by a spinning turbine.”

The computational challenges are numerous, especially when simulating farms of more than 100 turbines. Terrain variations can significantly alter output from one turbine to the next, and the wakes shed by spinning turbine blades diminish the power available to turbines downstream. To negotiate these complexities, scientists are expanding the applicability of the Weather Research and Forecasting (WRF) modeling system to make it suitable for wind farm scales. Developed primarily for larger-scale weather applications, WRF is supported by a worldwide community of more than 10,000 users and contributors.
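
As a rough illustration of why downstream turbines lose power, consider the classic Jensen (Park) engineering wake model – far simpler than the high-fidelity simulations described here, but it captures the basic effect: a turbine's wake expands linearly downstream while the wind-speed deficit inside it decays. The sketch below is a minimal, hypothetical example, not LLNL's approach; the thrust coefficient, wake-decay constant, rotor size, and wind speed are assumed values.

```python
import math

def jensen_wake_speed(u0, x, rotor_radius, ct=0.8, k=0.075):
    """Wind speed at distance x (m) directly downstream of a turbine,
    per the Jensen/Park wake model (illustrative parameter values)."""
    deficit = (1 - math.sqrt(1 - ct)) / (1 + k * x / rotor_radius) ** 2
    return u0 * (1 - deficit)

u0 = 10.0           # free-stream wind speed, m/s (assumed)
rotor_radius = 50   # rotor radius in meters (assumed)

for spacing in (500, 1000, 2000):           # downstream spacing in meters
    u = jensen_wake_speed(u0, spacing, rotor_radius)
    power_ratio = (u / u0) ** 3             # power scales roughly with wind speed cubed
    print(f"{spacing} m downstream: wind {u:.1f} m/s, ~{power_ratio:.0%} of upstream power")
```

Because power scales roughly with the cube of wind speed, even a modest wake deficit translates into a noticeably larger power loss at the downstream turbine.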

The model was modified for use at smaller scales and to satisfy the multiscale requirements of wind power forecasting. For example, a job may start with a simulation of the western US to capture the dominant weather patterns. Then a combination of smaller grid spacing and models developed at Livermore is pulled in to accurately capture the smaller-scale features that affect wind farms.
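
The downscaling works by nesting progressively finer grids inside coarser parent domains. As a hedged sketch of the idea (the outer grid spacing and 3:1 refinement ratio here are assumed for illustration, not the lab's actual configuration), a chain of nests refined by a factor of three at each level can telescope from synoptic-scale spacing down to wind-farm-resolving spacing in a handful of steps:

```python
# Illustration of telescoping grid nests, from synoptic scale down to wind-farm scale.
# Outer spacing and 3:1 refinement ratio are assumed for illustration only.
outer_dx_m = 27_000        # outermost domain grid spacing (~27 km, weather-pattern scale)
refinement_ratio = 3       # each nest refines its parent grid by 3:1
num_nests = 6

dx = outer_dx_m
for level in range(num_nests + 1):
    print(f"domain {level + 1}: grid spacing {dx:,.0f} m")
    dx /= refinement_ratio
```

At the innermost levels the spacing is tens of meters, fine enough that terrain-driven flow features and turbine wakes, rather than just regional weather, begin to be resolved.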

The project seeks to blend WRF atmospheric simulation with scales of motion that are typically the purview of computational fluid dynamics (CFD) codes. To better capture the complex interplay of variables, Livermore scientists have brought in a number of additional codes and techniques, such as WRF-GAD and the immersed boundary method (IBM), as well as CGWind and HPCMP CREATE-AV Helios (aka HELIOS), which are used for even smaller-scale simulations outside the range of WRF.

A team of scientists from Livermore and the University of Wyoming employed the WRF model and HELIOS to perform the first-ever simulation of a 50-turbine wind farm that accounts for individual spinning turbine blades in turbulent winds. This degree of precision and realism is helping researchers understand why real wind farms fall short of their theoretical counterparts.

Atmospheric scientist Jeff Mirocha is one of the project leads exploring ways of studying phenomena that are specific to a wind farm environment. “The simulation framework we are developing will provide advanced tools to address these knowledge gaps,” he says, “leading to improved operations, longer component life spans, and ultimately cheaper electricity.”

President Barack Obama’s administration has set a goal for the nation to obtain 20 percent of its electricity from wind energy by 2030. The LLNL team thinks that’s a reasonable goal given the current high rate of wind turbine deployment nationwide. From 2008 to 2012, wind power capacity expanded by 167 percent.

With precision models like the ones LLNL and its partners are developing, wind farm developers and operators will have the information they need to select ideal wind farm locations and run those sites more efficiently.

“It’s a big team effort,” says Livermore’s Miller. Other collaborators include National Renewable Energy Laboratory, National Center for Atmospheric Research, University of Colorado at Boulder, Sandia and Pacific Northwest national laboratories, University of Wyoming, University of Oklahoma, University of California at Berkeley, U.S. Army, and other wind power industry stakeholders. Funding comes from the Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy, as well as Livermore’s Laboratory Directed Research and Development (LDRD) Program, and industrial partnerships.
