Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

July 22, 2014

The Evolution of HPC in Manufacturing

Tiffany Trader

HPC has reached a tipping point where industry use has expanded dramatically. So what role do supercomputers have in manufacturing? Speaking on this topic is Greg Clifford, manufacturing segment manager at Cray. Manufacturing here refers to any company that produces a product, whether in automotive, aerospace, consumer products, tire manufacturing, and so on. What ties all these segments together, says Clifford, is the use of CAE application codes for simulating products.

In a brief blog and companion video, Clifford sketches out how the HPC application landscape is shaping up in the manufacturing sphere. In automotive, for example, crash simulation is the dominant simulation field. Typical applications include LS-DYNA, Pam-Crash, RADIOSS, and ABAQUS. Computational fluid dynamics comes in as the next biggest consumer of high-performance computing cycles (FLUENT, STAR-CCM+), followed by general structural analysis.

In aerospace, on the other hand, computational fluid dynamics is likely the number one application area, and many in-house codes are used in addition to popular vendor applications. Other industries rely on a combination of codes. Still, Clifford says, what ties all these industries together is that they are using basically the same set of application codes.

While the automotive industry is not new to HPC, the capabilities of HPC systems have evolved significantly. The CAE application segment made the transition to MPI parallelism about ten years ago. Five years ago, a typical MPI job ran on a few dozen cores. Today there is a push to higher levels of parallelism: simulations using 256 compute cores are not uncommon, and demand for simulations using thousands of cores is on the rise.
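To see why the jump from dozens to thousands of cores demands highly scalable codes, consider Amdahl's law (an illustration of the general principle, not a calculation from the article): even a small serial fraction of a simulation caps the achievable speedup, regardless of core count.

```python
# Back-of-the-envelope strong-scaling estimate via Amdahl's law.
# The serial fraction value below is illustrative, not from the article.
def amdahl_speedup(cores, serial_fraction):
    """Ideal speedup on `cores` cores when `serial_fraction`
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With just 1% serial work, a 4096-core run is capped well under 100x,
# which is why solver scalability dominates at these core counts.
for cores in (48, 256, 4096):
    print(cores, round(amdahl_speedup(cores, 0.01), 1))
```

The takeaway: buying more cores only pays off if the CAE solver itself scales, which is why the push to thousands of cores goes hand in hand with reworking the codes.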

For automotive and aerospace, the move is to CAE-driven design, using simulation in lieu of or in addition to physical testing. This has resulted in high-fidelity simulation in which five or ten million elements can be involved in any given crash test, and the technology is moving to embrace hundreds of millions of cells, approaching a billion cells per simulation. Scalability becomes critical in this environment in order to satisfy turnaround times. The other piece is ensemble analysis, in which the manufacturer runs dozens or even hundreds of jobs to fully explore a design space.
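The ensemble-analysis pattern can be sketched in a few lines: enumerate a design space, launch one independent simulation job per design point, and rank the results. The `simulate` function and its parameters below are hypothetical stand-ins for a real CAE solver run; in practice each point would be a separate batch job on the cluster.

```python
# Minimal ensemble-analysis sketch: many independent jobs over a design space.
# simulate() is a hypothetical surrogate for a real crash-simulation run;
# a thread pool stands in for a cluster batch scheduler in this sketch.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def simulate(thickness_mm, alloy_grade):
    """Mock solver: returns a fictitious 'intrusion' metric (lower is better)
    for one design point. A real run would launch a CAE job instead."""
    return 100.0 / (thickness_mm * alloy_grade)

def run_ensemble():
    # Cartesian design space: every combination of panel thickness and alloy.
    design_space = list(product([1.0, 1.5, 2.0], [1, 2, 3]))
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda p: simulate(*p), design_space))
    # Pair each design point with its metric and return the best one.
    return min(zip(design_space, results), key=lambda pair: pair[1])

if __name__ == "__main__":
    print(run_ensemble())
```

Each job is embarrassingly parallel with respect to the others, which is exactly why ensemble analysis multiplies total compute demand even when individual jobs stay modest in size.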

“Both of these things – higher fidelity and ensemble analysis coupled together – are dramatically increasing requirements for computing power and specifically high-performance computing simulation,” says the Cray rep.

As a final point, Clifford points to predictive simulation as the next stage for the industry. This is where manufacturers use computer simulation to actually predict the performance of a product or feature rather than just analyzing it after design. What makes this necessary is the use of many new materials, such as novel aluminum and steel alloys, composite structures, and honeycomb materials, which require simulation to fully understand their behavior. The situation is further compounded by a highly competitive environment and shorter design cycles, leaving the industry seeking out increasingly powerful HPC tools for their competitive value.