Accelerating HPC in Formula One

By Tiffany Trader

November 3, 2015

The Formula One racing community has embraced HPC as an essential part of its workflow to build ever-speedier race cars, but suggestions that CFD should obviate the need for wind tunnel testing continue to draw heated commentary. One of the aerodynamics software vendors that supports this move is Exa Corporation. The Burlington, Mass.-headquartered company serves the international automotive and transportation markets and maintains a global customer roster that includes NASA, BMW, Ford, Jaguar and Tesla, and more than one F1 racing team.

Motor sports are a specialty for Exa, which positions its PowerFLOW software as uniquely suited to the demands of racing-oriented computation.

“PowerFLOW fits into the rapidly changing environment of racing by providing a broad range of data and flow visualization, which can be used to tune the vehicle design for optimal performance,” the company asserts on its website. “You can compute the dependence on ride heights, vehicle speed, wheel positions, and crosswind or slip angles by simulating key points on the track circuit and evaluate the effects of your aerodynamic tuning and design choices.”

At the heart of Exa’s PowerFLOW software suite is a patented Lattice Boltzmann-based implementation of computational fluid dynamics (CFD) that is known for its high accuracy in low Mach number regimes. Founded in 1991, Exa has spent more than two decades perfecting its use of Lattice Boltzmann-based physics to perform simulations that accurately predict the performance of designs and provide a route to further optimizations.
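
Exa's solver is proprietary, but the underlying technique is textbook material. As a rough illustration of how a Lattice Boltzmann update works, here is a minimal D2Q9 (two-dimensional, nine-velocity) BGK sketch in Python; the grid size, relaxation time, and toy initial flow are illustrative assumptions, not anything drawn from PowerFLOW.

```python
import numpy as np

# D2Q9 lattice: nine discrete particle velocities and their quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

nx, ny, tau = 64, 64, 0.6          # toy grid and relaxation time (assumed values)

def equilibrium(rho, ux, uy):
    """Discrete Maxwell-Boltzmann equilibrium, truncated at second order in velocity."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

# Initial state: uniform density with a small sinusoidal shear flow
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny) * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(1000):
    # Streaming: each population hops to the neighboring node along its velocity
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    # Macroscopic moments recovered from the populations
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the local equilibrium distribution
    f += (equilibrium(rho, ux, uy) - f) / tau

print("kinetic energy after 1000 steps:", 0.5 * (rho * (ux**2 + uy**2)).sum())
```

Each node's update depends only on its immediate neighbors, a locality property that becomes relevant later when Duncan explains why the method parallelizes so well.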

HPCwire recently spoke with the Exa team about how CFD generally, and their approach specifically, offers benefits over expensive physical testing and reduces the need for costly late-stage changes. Much of our conversation centered on the limitations of wind tunnels with respect to the complexities of the testing process and the specific shortfalls that arise from its being “a simulated version of reality that creates wind in a building around a car that’s not moving and a road surface that’s not moving underneath it.”

Exa finds itself having to counter the perception that wind tunnels provide a perfect representation of reality, but the fact is that wind tunnel testing is still a facsimile, a physical simulation as opposed to a digital one. Dr. Brad Duncan, aerodynamics applications director at Exa, explained further.

“The primary problem with a wind tunnel,” Duncan began, “is that you’re trying to model a moving car in a stationary laboratory. In order to make the experience road-like you have to blow the air at the speed the car would be driving; you also have to model the fact that the real airflow is in an open space while the lab is a confined space; and there’s the fact that everything about the wind tunnel is fixed, but in the real world the car is on a road and moving, so you have belts moving and wind blowing under the car. Capturing all of this requires quite a few approximations and complex systems that have a lot of issues.

“One of the main challenges relates to airflow. When blowing air through a duct, getting the speed of that air right is a very tricky thing. One of the leading-order errors is that the flow speed around the car doesn’t exactly match what you are trying to model, which is the car driving at a certain speed, say 150-200 miles per hour, down the track. The solution is to pick a point of calibration, but if the speed is correct at two car lengths in front of the car, that doesn’t mean it’s accurate at the car. In aerodynamics, flow speed is very important for accuracy, but in this simulated environment, forces will be too high or too low based on an error in the test speed calibration.
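
The sensitivity Duncan describes follows directly from the physics: aerodynamic forces scale with dynamic pressure, q = ½ρv², so a speed calibration error roughly doubles when it shows up in the measured force. A quick sketch of that arithmetic (the air density and drag-area values are illustrative assumptions):

```python
RHO_AIR = 1.225   # air density at sea level, kg/m^3
CD_A = 1.0        # drag coefficient x frontal area, m^2 (hypothetical F1-scale value)

def drag_newtons(v_mps):
    """Drag force F = 0.5 * rho * v^2 * (Cd * A)."""
    return 0.5 * RHO_AIR * CD_A * v_mps**2

v_target = 80.0   # intended test speed in m/s, roughly 180 mph
for speed_error in (0.01, 0.02, 0.05):
    force_error = drag_newtons(v_target * (1 + speed_error)) / drag_newtons(v_target) - 1
    print(f"{speed_error:.0%} speed calibration error -> {force_error:.1%} force error")
# ~1% speed error -> ~2% force error; ~5% -> ~10%
```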

“There’s another problem because when you are testing in the wind tunnel with a Formula One car, you are not able to test at the true speed. A lot of wind tunnels will go faster than highway speed but max out at a speed that is lower than true track speed. Straightaway speed, depending on the particular circuit, can be as high as 200 miles per hour. When these high speeds are attempted, the tires and the air heat up, which also skews the data. There is a stack of correction factors and adjustments that have to be made to try to address these effects and standardize them, and that creates an error stack-up that can add additional tolerance to every number.”
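
The “error stack-up” Duncan mentions can be made concrete: each correction factor carries its own tolerance, and those tolerances combine. Fully correlated errors add linearly (the worst case), while independent ones add in quadrature. The per-correction numbers below are made-up placeholders, not figures from any actual wind tunnel:

```python
import math

# Hypothetical tolerances, as fractions of the measured force, for a few
# typical corrections: blockage, thermal drift from tire/air heating,
# boundary-layer treatment, and speed calibration.
tolerances = [0.010, 0.008, 0.005, 0.004]

worst_case = sum(tolerances)                       # correlated errors stack linearly
rss = math.sqrt(sum(t**2 for t in tolerances))     # independent errors add in quadrature

print(f"worst-case stack-up: +/-{worst_case:.1%}")
print(f"independent (RSS):   +/-{rss:.1%}")
```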

Another factor is the force of the airflow coming off the rear wing, Duncan explained. Much like takeoff on an airplane (but inverted), the flow past the rear wing is deflected upward in order to create tremendous downforce on the back of the car. The problem is that this upwash, well away from the car, starts to hit the walls of the wind tunnel in the collector, the part of the tunnel where the air is drawn back in. The upwash from the wing makes the car look bigger to the air, presenting an even bigger obstacle to the airflow in the wind tunnel, according to Exa’s Duncan, who added that all of these individual variances accumulate and shift the forces that engineering teams are trying to measure up or down.

Sometimes the car itself is a model at about 60 percent scale, injecting even more approximations into the process. There is also testing with the actual car, but even then consideration must be given to how the vehicle is attached to the wind tunnel. For example, the car may be mounted on a central sting that goes up to the roof (generally used with scale models) or it can be secured at the struts.

Traditionally, the entire process was based on physical testing, with all design done in a wind tunnel. Now each car company has its own workflow; CFD can be used at every stage of the process, and wind tunnels can also be used throughout. Most automotive companies use a mix of both. A typical workflow may be heavy in CFD at the beginning, with wind tunnel testing coming in later for compliance with regulations dictated by the EPA (in the US) and the NEDC (in Europe). Even here, changes are coming, says Exa, pointing to a recent Jaguar model that was developed entirely with Exa’s CFD software, with only the final validation done in a wind tunnel. Setting aside the artificial restrictions imposed by Formula One’s governing body, wind tunnel testing is a more limited resource than CFD, but it remains essential for meeting regulatory requirements around fuel emissions. (There is some overlap with safety regulations, but according to Exa, that tends to be its own domain: crash testing and crash simulation.)

“The real value of simulation for aerodynamics is about designing the vehicle so it performs well. What we’re trying to do is make the world a better place — improve fuel economy, reduce emissions, and still make it possible for car companies to sell those cars because their ability to sell is based on the aesthetics and performance of the car and all the other things that consumers are looking for. They’re not necessarily going to pay for fuel economy as much as some other things but we want the car companies to be able to deliver a competitive vehicle that performs well in every way, is still beautiful, still has the interior space, still has the safety, and meets all the regulations. We want to give car companies a way to be able to be best in class in aerodynamics and then they can say that this car has better fuel economy than all of its competitors,” said Duncan.

“Where aerodynamics simulation fits in is that it allows you to design for fuel economy and emissions. Because you are using a digital test environment, you are able to do that in collaboration with the other work that the car company is doing, which is designing the car for style and checking that it meets all of their other engineering requirements.”

“When it comes to engineering simulations, aerodynamics is the one that is most closely coupled to the aesthetics of the styling of the car,” Duncan shared. “Brand identity and what determines if a car sells has so much more to do with styling.

“They say that form follows function, but that’s not really true in the automotive market.

“And for race cars it’s a different story: it’s about winning the next race. For that, the simulation has to be accurate and it has to give them a better design solution.

“In automotive racing, you have these engineers that are competitive, and they have to find the best answer to the design problem as fast as possible, which means they need accurate data, but they also need something more, what you would call insight. Simulation can give you that insight because it offers so much more data: visual data and practical data that helps them make a design decision.

“That’s one of the key limitations to the wind tunnel; you can’t get insight into why something works or doesn’t work.”

Duncan also noted that doing something on a computer allows engineers to optimize the testing process in a way that just isn’t possible with physical testing.

“The workflow is complex and involves adding and removing parts on a car, positioning them differently, combining them in different ways,” he explained. “When you remove material or reposition a part and then decide to put it back, and you’re trying to do three different things and find the combinations, that means you have to keep putting part A back to test parts B and C in their positions. That can be challenging unless you have a very well-designed test, designed for efficient modularity. All these steps also involve a lot of time delays, and the result is just a number on the screen. Automated optimization on a computer is much more straightforward.”
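
The combinatorial bookkeeping Duncan describes is exactly what a digital workflow automates. Here is a hypothetical sketch of sweeping a small test matrix, where a physical program would have to keep refitting part A to evaluate B and C; the parts, positions, and angles are all invented for illustration:

```python
from itertools import product

# Hypothetical test matrix for three interacting parts
part_a = ["installed", "removed"]
part_b_position = ["low", "mid", "high"]
part_c_angle_deg = [0, 2, 4]

configs = list(product(part_a, part_b_position, part_c_angle_deg))
print(f"{len(configs)} configurations to evaluate")   # 2 * 3 * 3 = 18

for a, b_pos, c_angle in configs:
    # In a real pipeline this line would queue a simulation job; here it is a stub.
    print(f"queue run: part A {a}, part B at {b_pos}, part C at {c_angle} deg")
```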

Duncan reserved his heartiest endorsement of CFD for F1 design for Exa’s PowerFLOW software, a fully transient, time-accurate solver. PowerFLOW’s transient property means that the software can model how turbulence changes with time. According to Exa, the software provides unmatched simulation accuracy of ±1 count (equivalent to a Cd of 0.001), as opposed to an estimated ±30 counts for rival steady-state codes. With Lattice Boltzmann physics, fluid is treated as mesoscopic particles (not molecules). “We’re using a very different approach to the physics,” he added, “based on IP and advanced research. You can look up Lattice Boltzmann in a textbook, but you can’t look up what’s in our solver.”
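
For scale, a “count” translates into a concrete force once a speed and frontal area are fixed, since ΔF = ½ρv²·A·ΔCd. A back-of-the-envelope conversion (the frontal area here is an assumed, F1-scale value):

```python
RHO_AIR = 1.225          # kg/m^3
FRONTAL_AREA = 1.5       # m^2, assumed F1-scale frontal area
V_TRACK = 89.4           # m/s, roughly 200 mph

def force_per_counts(counts):
    """Force difference for a drag-coefficient change of counts * 0.001."""
    return 0.5 * RHO_AIR * V_TRACK**2 * FRONTAL_AREA * 0.001 * counts

print(f"+/-1 count   -> +/-{force_per_counts(1):.1f} N at 200 mph")   # ~7 N
print(f"+/-30 counts -> +/-{force_per_counts(30):.0f} N at 200 mph")  # ~220 N
```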

“Some of the older CFD codes based on steady-state solvers provide only one time-averaged flow result; they’re not able to tell you that the flow is changing in time, so they’re not capturing buffeting and oscillations or the changes that happen due to realistic flow effects. The flow around a car is not actually steady, so they are making a giant assumption by using steady-state, one-shot simulations, and they are not able to get as much benefit out of it. This built-in inaccuracy has traditionally been accepted as a tradeoff made to get the answer faster.”
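
The distinction is easy to see with a toy signal: a steady-state solver can only report the time average, while a time-accurate solver resolves the oscillation around it. The numbers below are invented purely to illustrate the difference:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2000)
# Toy drag history: a mean level plus a 3 Hz buffeting component (made-up values)
cd_history = 0.30 + 0.02 * np.sin(2 * np.pi * 3.0 * t)

steady_view = cd_history.mean()                        # all a steady solver reports
transient_view = (cd_history.min(), cd_history.max())  # envelope a transient solver resolves

print(f"steady-state answer: Cd = {steady_view:.3f}")
print(f"transient envelope:  Cd in [{transient_view[0]:.3f}, {transient_view[1]:.3f}]")
```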

“PowerFLOW doesn’t have a steady-state mode; it only runs in what we call real-mode,” said Duncan. “You see these small vortices and wakes and structures in the airflow that give you the look and feel that the flow is real, and it is also much more accurate because of that. It’s more similar to the wind tunnel and the track in the sense that the flow is realistically modeled, because we have a significant competitive advantage in how we model airflow.”

Most CFD codes use the Navier-Stokes equations, but the Lattice Boltzmann method achieves very realistic simulation of the complex flows around an F1 car. There are other codes based on LBM, including XFlow released by Next Limit Technologies, but Exa claims its PowerFLOW software has much more development behind it.

The tradeoff for this additional accuracy is that PowerFLOW, as a fully time-accurate solver, is more computationally expensive to run. “It’s so much more accurate that it’s hard to say that’s actually a problem, because if you run a steady-state solver and you get the wrong answer it might cost you more: you’re trying to work around the limitation, and you might test something, find that it doesn’t work, and then have expensive delays and rebuilds because the information it gave you wasn’t very helpful. So in the end, you might say that a more accurate solver is priceless.”

In line with these benefits, Exa is not known for being a low-cost solution, Duncan said, but he pointed to Exa’s IP and its higher levels of expertise, service and support. “We’re not just providing software in a box, we’re providing the whole engineering solution, so I think the customer is getting a lot more.”

Every simulation typically runs on a compute cluster of 200-400 cores for about one to two days, with turnaround time being the critical parameter. “The code scales very well, better than most other CFD solvers,” Duncan said, explaining that because the Lattice Boltzmann method is particle-based, there are a lot of independent operations that make it very conducive to scaling up to thousands of processors. “Most customers are running the code on commodity machines, with individual core performance being the most important parameter along with good networking, since overall performance is limited by the data bandwidth between nodes. We try to keep up on the latest Intel racks and interconnect fabrics, InfiniBand and the like,” said Duncan.
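
Why a particle-based method scales well can be seen with a crude strong-scaling model: under a 3D domain decomposition, each rank’s compute shrinks with its sub-volume (1/p), while its halo-exchange traffic shrinks only with the sub-volume’s surface ((1/p)^(2/3)), so communication, limited by network bandwidth, eventually dominates. The coefficients here are arbitrary, chosen only to show the shape of the curve:

```python
def step_time(p, work=1.0, comm_coeff=0.002):
    """Per-step time: local compute plus surface-proportional halo exchange."""
    compute = work / p
    comm = comm_coeff * (work / p) ** (2.0 / 3.0)
    return compute + comm

base = step_time(1)
for p in (200, 400, 1000, 4000):
    print(f"{p:>5} cores: speedup {base / step_time(p):7.1f}x")
```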

The F1 regulations currently enforce limitations on both wind tunnel testing and CFD work, with an inverse relationship between the two. Testing is limited no matter how you do it, although workarounds have been rumored to exist. Given this limitation, there is an argument to be made that a more accurate test would make the most of the limited time available. As for those who point to poor results from teams that have used a pure-play CFD approach, Duncan suggested that this could be due to traditional steady-state codes that trade accuracy for speed. Looked at another way, the faster run times mean more runs can be completed within the mandated envelope, but there’s a catch-22 here. If the rules cap CPU hours, teams can do a lot more tests with a less accurate method, but a less accurate method is less valuable.
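
The catch-22 reduces to simple arithmetic once you put numbers on it: under a fixed CPU-hour cap, a cheap low-accuracy code buys many runs and an expensive accurate one buys few, and which strategy wins depends on how often each run’s answer can be trusted. Every number below (the cap, per-run costs, and “trust rates”) is an invented placeholder:

```python
BUDGET_CPU_HOURS = 20_000   # hypothetical seasonal allocation

methods = {
    "steady-state (fast, ~+/-30 counts)": {"cpu_hours_per_run": 50,   "trust_rate": 0.55},
    "transient (slow, ~+/-1 count)":      {"cpu_hours_per_run": 1000, "trust_rate": 0.95},
}

for name, m in methods.items():
    runs = BUDGET_CPU_HOURS // m["cpu_hours_per_run"]
    trusted = runs * m["trust_rate"]
    print(f"{name}: {runs} runs, ~{trusted:.0f} results you can act on")
```

The volume argument favors the fast code only if its answers can actually be acted on; a misleading result that triggers a rebuild can erase the savings, which is Duncan’s point.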

Clarifying an earlier point, Duncan noted that he wouldn’t claim just any CFD solution is sufficient to replace wind tunnels completely, but a sufficiently accurate simulation method could be. “There are so many ways that you can get better data from CFD than wind tunnels in terms of predicting the performance of the car on the road. You can make the case that a team would benefit from spending their entire allocation on a more accurate solution.”

One gets the sense that these limitations, while intended to enhance competitiveness, are hampering the sport and reducing potential innovations that could set the stage for technology transfer.

Duncan responded that automotive companies are competing intensely right now to improve aerodynamics. With the CAFE requirements from the Obama administration, carmakers are in a tight spot: they have to perform better and still make a car that people want to buy. They can’t give up performance or any other attribute of the car in order to meet these fuel economy mandates. The solution, said Duncan, has to be innovation.

“The only way to get a more aerodynamic car and still meet all these other constraints and make a beautiful car is a truly innovative design process,” he continued. “That’s why, certainly in motor sports and especially Formula One, the things they try are always leading-edge. They have to do something else or they are not going to win. The strides they make there advance aerodynamics and fuel economy for the rest of the world.”

PowerFLOW is available via annual license or pay-as-you-go on-demand, with the option to run on secure, hosted, high-performance systems as part of its ExaCLOUD offering.
