Informing Designs of Safer, More Efficient Aircraft with Exascale Computing

By Rob Johnson

July 18, 2019

During the process of designing an aircraft, aeronautical engineers must perform predictive simulations to understand how airflow around the plane affects flight characteristics. However, modeling the complexities and subtleties of air movement is no easy task. In addition to understanding “ideal” airflow scenarios, engineers need detailed insight into turbulence and vortices and how they interact with an aircraft in flight. Kenneth Jansen, Professor of Aerospace Engineering at the University of Colorado Boulder, seeks to improve the process through his work in computational fluid dynamics (CFD). Where existing predictive models fall short, Jansen’s research steps in.

For several years, Jansen has tapped the supercomputing resources at the Argonne Leadership Computing Facility (ALCF) to improve computational modeling capabilities to provide deeper insight into the problems posed by fluid flow and how their resolution can lead to refined aircraft design. To prepare for Argonne’s future exascale system, Aurora, Jansen is currently leading two ALCF Early Science Program projects focused on advancing simulation, data analytics, and machine learning methods to enable flow simulations of unprecedented scale and complexity.

Jansen’s specialized work involves developing “scale-resolving simulations” to obtain a more detailed analysis of airflow characteristics. His models augment traditional simulation methods by evaluating unsteady, turbulent motions using high-performance computing. Said Jansen, “This approach allows us to resolve the turbulent-scale dynamics to get a much better overall prediction than if we modeled everything at once.” From there, Jansen and his team employ adaptive methods for prediction. When doing any simulation, said Jansen, “We learn where our predictions are right and where they are not as effective. Those predictions that need improvement undergo adaptive methods to hone and refine the simulation for greater accuracy.”

“We call the air that surrounds the airplane a fluid volume. That envelope is exceedingly difficult to analyze holistically, so we break it down into what we call cells. The size of these cells dictates how much of the turbulence detail we can resolve. By adapting the overall mesh of individual cells, we can make the mesh finer in regions where more detail about airflow is needed.”
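The mesh adaptation Jansen describes can be illustrated with a toy sketch. The code below is not the project's actual solver; it is a minimal one-dimensional stand-in showing the core idea: split cells wherever the solution changes sharply, so resolution concentrates where the flow detail is.

```python
# Illustrative sketch only (not the research code): refine a 1-D mesh
# wherever the local solution jump exceeds a threshold, mimicking the
# "finer mesh where more detail about airflow is needed" idea above.
import numpy as np

def refine_mesh(x, u, threshold):
    """Insert a midpoint into every cell whose solution jump exceeds threshold."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if abs(u[i + 1] - u[i]) > threshold:       # large change -> more resolution
            new_x.append(0.5 * (x[i] + x[i + 1]))  # split the cell in two
        new_x.append(x[i + 1])
    return np.array(new_x)

x = np.linspace(0.0, 1.0, 11)      # uniform starting mesh
u = np.tanh(20.0 * (x - 0.5))      # solution with a sharp feature near x = 0.5
refined = refine_mesh(x, u, threshold=0.5)
```

Only the two cells straddling the sharp feature are split; the rest of the mesh stays coarse, which is exactly the economy that makes adaptive methods attractive at scale.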

Jansen’s CFD research models and predicts fluid flow around aerospace vehicles to allow engineers to design more fuel-efficient planes. (Image courtesy Ken Jansen, University of Colorado Boulder, and Argonne National Laboratory).

Safer, more efficient aircraft

Jansen offers an anecdote to describe the nature of the work and the reasoning behind it. “In addition to turbulent airflow, we also seek predictions about other things. For instance, how much lift is generated by airplane wings at a certain speed? Simple models describing a typical flight can accomplish this straightforward task relatively easily. However, the models change dramatically in a scenario like an engine failure on a two-engine plane. To fly the plane straight ahead, the pilot must move the rudder to one side to account for the lack of thrust from the failed engine. Many aircraft designs have rudders sized about 25 percent larger than necessary to handle that type of situation. However, the increased drag caused by oversized rudders means heavier fuel consumption. Smaller rudders alone could save $300 million a year in fuel costs.”

Aerospace is a very competitive, economically sensitive market. Aircraft buyers seek planes with longer range and better fuel economy to make flights more profitable. Jansen’s work simulating airflow helps address these needs by suggesting airframe optimizations that can reduce each plane’s operating costs as well as its carbon footprint.

Exascale computing

“Exascale systems will enable new possibilities in our work,” he noted. “First, their computing prowess can resolve more complex turbulent scales, so we can give engineers better predictive capacity for complicated flow conditions, like when a rudder is compensating for a failed engine. Second, exascale computing empowers us to do many lower-fidelity calculations quickly. This process is especially important when we consider things like wing thickness, where to place flow control devices, and more. By doing thousands of these smaller-scale simulations, we can more efficiently impact an aircraft design in positive ways.”
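The “many quick low-fidelity runs” workflow can be sketched as a parameter sweep. The snippet below is hypothetical: the drag formula is a toy trade-off invented for illustration, not a real aerodynamic correlation, and the thickness range is arbitrary. The point is the pattern: thousands of cheap evaluations rank design candidates before any expensive high-fidelity simulation is committed.

```python
# Hypothetical sketch of a low-fidelity design sweep. The drag model is a
# toy formula (thin wings incur a structural penalty term, thick wings more
# drag), chosen only to give the sweep a minimum to find.
import numpy as np

def toy_drag(thickness_ratio):
    return 0.02 + 0.5 * thickness_ratio**2 + 0.001 / thickness_ratio

candidates = np.linspace(0.05, 0.20, 1000)  # thousands of cheap evaluations
drags = toy_drag(candidates)
best = candidates[np.argmin(drags)]         # shortlist for high-fidelity study
```

In practice each “cheap evaluation” would itself be a reduced-order or coarse-mesh CFD run, but the outer loop looks the same: evaluate, rank, and spend the expensive exascale cycles only on the most promising designs.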

Partners in flight

“In some sense, we blaze a new trail with this research because we can work closely with aircraft designers – and highly advanced compute systems – to help them accomplish work the aircraft industry may not be able to accomplish on its own for many years. Our discoveries can impact new designs today,” Jansen said. He and his colleagues interface with aircraft companies at multiple levels. They work directly with design engineers to increase the accuracy of their simulations, improve current aircraft designs, and plan next-generation airframes. While most major manufacturers have internal ‘think tank’ groups doing research that parallels Jansen’s, the collaboration helps dig deeper for every possible way to tweak current designs. Together they pursue augmented simulations to assist both today’s and tomorrow’s endeavors.

Advanced simulations using Aurora

Exascale computing[*] facilities, like the forthcoming Aurora system at Argonne National Laboratory, will open the doors to new opportunities in this arena.

Argonne anticipates delivery of Aurora in 2021. Once online, the system will be capable of performing a billion billion calculations per second. Built by Cray, Aurora will derive its performance from advanced hardware, including future generations of Intel Xeon processors, Intel Optane DC Persistent Memory, Intel Xe technologies, and more. Commented Jansen, “Aurora would not be possible without the support of companies like Cray and Intel. Aurora will advance many scientific projects, including my own. With a tool that powerful, my team has new opportunities to make meaningful contributions to aircraft manufacturing and the environment too.”

Before high-performance computing (HPC) existed, wind tunnels provided the most accurate large-scale data for airframe design. More recently, Argonne’s Theta supercomputer, Aurora’s petascale predecessor, supported Jansen’s simulations of aircraft flight characteristics. Even with Theta, though, limits on computing speed constrained the simulations: models represented an aircraft at one-nineteenth its actual size, flying at a quarter of its real-world velocity. In contrast, said Jansen, “Aurora will help us learn more about the fundamental physics of flow control in a full-sized, full-speed aircraft simulation. From there we can identify where big or small design improvements can make an important difference in flight characteristics.”
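A back-of-the-envelope calculation shows why the sub-scale constraint matters. The similarity of two flows is governed by the Reynolds number, Re = ρVL/μ, which scales with both size and speed; a 1/19-scale model at one quarter speed therefore sees Re reduced by a factor of 4 × 19 = 76. The cruise speed and reference length below are illustrative placeholders, not figures from the article.

```python
# Reynolds number comparison, full-scale vs. the sub-scale simulation regime
# described above. Air properties are standard sea-level values; the speed
# and length are illustrative assumptions.
RHO = 1.225      # air density, kg/m^3
MU = 1.81e-5     # dynamic viscosity, Pa*s

def reynolds(velocity_m_s, length_m):
    return RHO * velocity_m_s * length_m / MU

full = reynolds(250.0, 60.0)              # assumed cruise speed and length
scaled = reynolds(250.0 / 4, 60.0 / 19)   # quarter speed, 1/19 size
ratio = full / scaled                     # = 4 * 19 = 76
```

Because turbulence behavior changes with Reynolds number, results at 1/76th of the flight Re cannot simply be extrapolated to the real aircraft, which is why full-scale, full-speed simulation is the goal for Aurora.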

Even with exascale systems supporting his work, Jansen recognizes the magnitude of the task ahead: “We want to make the best use of Aurora’s resources, so we must ensure our computational methods are both efficient and effective. Making the best use of the hardware means we need to re-shape data structures and algorithms, and we must develop more accurate numerical methods.”

Overcoming turbulence

“As any airline passenger knows, air turbulence can vary greatly throughout a flight. Sometimes you barely notice it, and other times, well, it’s quite bumpy,” he chuckled. The seemingly infinite variability of turbulence makes it very difficult to simulate an entire aircraft’s interaction with it. At any given second, different parts of a plane experience different impacts from the airflow. Even an exascale computer cannot keep up with storing the enormous volume of data the job requires. Added Jansen, “We need to get data insights without writing all that information to file. That means we must do co-processing of data in real time as the simulation progresses. We call that process in situ data analytics.” Jansen elaborated, “In situ lets us examine visualizations over time increments, allowing us to see airflow dynamics without writing to file.”
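The in situ idea can be sketched in a few lines. In this toy version, a random field stands in for one solver time step (it is not CFD), and the analysis, a running mean and a running peak per cell, is folded directly into the time-stepping loop so that no per-step field is ever written to disk.

```python
# Minimal in situ co-processing sketch: accumulate statistics inside the
# time loop instead of writing every flow field to file. The random field
# is a stand-in for one step of an actual flow solver.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_steps = 200_000, 50

running_mean = np.zeros(n_cells)
peak = np.full(n_cells, -np.inf)
for step in range(1, n_steps + 1):
    field = rng.standard_normal(n_cells)           # stand-in solver step
    running_mean += (field - running_mean) / step  # incremental mean update
    np.maximum(peak, field, out=peak)              # in-place running maximum
# Only the two small statistics arrays survive; the 50 full fields were
# analyzed and discarded as the "simulation" progressed.
```

The memory and I/O savings are the point: here 50 fields are reduced to two arrays, and at exascale the same pattern avoids writing petabytes of raw time-step data.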

“I’m excited about using Aurora for the first time and performing exascale-level simulations. It will put us at the forefront of predicting and understanding fluid flow around complicated things like airplanes.” Continuing, Jansen added, “We finally have the compute performance to simulate complex airframe components like a full vertical tail and rudder assembly and do it at full scale. That feat has not been accomplished before.”

Rob Johnson spent much of his professional career consulting for a Fortune 25 technology company. Currently, Rob owns Fine Tuning, LLC, a strategic marketing and communications consulting company based in Portland, Oregon. As a technology, audio, and gadget enthusiast his entire life, Rob also writes for TONEAudio Magazine, reviewing high-end home audio equipment.

[*] Editor’s note: Aurora disclosures made in March cited a performance goal of sustained exaflop/s.
