For the automotive and aerospace industries, crash and safety analysis by finite elements is used to shorten the design cycle and reduce costs. Recently, a popular crash/safety simulation set a new speed record. Over on the Cray blog, Greg Clifford, manufacturing segment manager at the historic supercomputing company, explains how the LS-DYNA “car2car” simulation reached new heights running on a Cray supercomputer, pointing the way for engineering simulations that can take advantage of the massive computing power offered by next-generation systems.
The Cray XC30 supercomputer, outfitted with Intel Xeon processors and bolstered by the scalability of the Aries interconnect, enabled engineers to run the “car2car” model, a 2.4-million-element crash/safety simulation, in under 1,000 seconds. The results of the LS-DYNA simulation are posted on topcrunch.org, which documents the performance of HPC systems running engineering codes.
The record-setting job turnaround time was 931 seconds, but equally important, the simulation broke new ground by harnessing 3,000 cores. “As the automotive and aerospace industries continue to run larger and more complex simulations, the performance and scalability of the applications must keep pace,” notes Clifford.
Clocking in under 1,000 seconds marks a significant milestone in the ongoing effort to enhance performance. Over the past quarter-century, model sizes for crash/safety simulations have increased by a factor of 500. At first, the available computing power allowed only single load cases, such as frontal crashes. Over time, the models grew to support 30 load cases at once, and they now incorporate frontal, side, rear and offset impacts.
As further detailed in this paper, researchers from Cray and Livermore Software Technology Corporation found the key to improving LS-DYNA scalability was to employ HYBRID LS-DYNA, which combines distributed memory parallelism using MPI with shared memory parallelism using OpenMP. This was preferable to using MPP LS-DYNA, which only scales to about 1,000 to 2,000 cores depending on the size of the problem.
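The appeal of the hybrid scheme can be sketched with some simple arithmetic. In a flat MPI run, every core is its own rank with its own small subdomain; in a hybrid run, OpenMP threads share a rank's subdomain, so the same core count needs far fewer ranks and fewer inter-rank boundaries to communicate across. The following sketch is illustrative only, not LS-DYNA's actual decomposition; the model size and core count come from the article, while the 8-threads-per-rank split is a hypothetical choice.

```python
# Illustrative comparison of a flat MPI layout vs. a hybrid MPI+OpenMP
# layout on the same core count. The threads_per_rank value is a
# hypothetical example, not LS-DYNA's actual configuration.

def layouts(total_elements, total_cores, threads_per_rank):
    """Return (ranks, elements_per_rank) for flat-MPI and hybrid runs."""
    flat_ranks = total_cores                        # pure MPI: one rank per core
    hybrid_ranks = total_cores // threads_per_rank  # hybrid: fewer, fatter ranks
    return (
        (flat_ranks, total_elements // flat_ranks),
        (hybrid_ranks, total_elements // hybrid_ranks),
    )

flat, hybrid = layouts(total_elements=2_400_000, total_cores=3000,
                       threads_per_rank=8)
print("flat MPI:", flat)      # 3000 ranks of ~800 elements each
print("hybrid:  ", hybrid)    # 375 ranks of ~6400 elements each
```

With only 800 elements per rank, a flat MPI run spends a growing share of its time exchanging boundary data; the hybrid layout keeps subdomains large enough that computation still dominates communication, which is one plausible reading of why HYBRID LS-DYNA scales past the 1,000-to-2,000-core ceiling of MPP LS-DYNA.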
Clifford writes that, over time, crash/safety simulation has evolved from a mainly research endeavor into a crucial part of the design process – a change that followed the democratization of HPC ushered in by Moore’s-law progress. The automotive and aerospace fields have become full-fledged HPC-driven enterprises, reaping the benefits of shorter design times and safer, higher-performing end products.
The MPI framework for parallel simulations and steadily rising processor frequencies provided the foundation for this transformation. But the playing field is changing. With chip speeds leveling off, further gains must now come from rooting out inefficiencies in the software itself. This is why, in Clifford’s opinion, the recent car2car benchmark result is so significant: it signals a paradigm shift and points to where the focus must move.
Some of the models in use today incorporate millions of elements. The THUMS human body model, for example, contains 1.8 million elements, and safety simulations are headed toward more than 50 million elements.
“Models of this size will require scaling to thousands of cores just to maintain the current turnaround time,” observes Clifford. “The introduction of new materials, including aluminum, composites and plastics, means more simulations are required to explore the design space and account for variability in material properties. Using average material properties can predict an adequate design, but an unfortunate combination of material variability can result in a failed certification test. Hence the increased requirement for stochastic simulation methods to ensure robust design. This in turn will require dozens of separate runs for a given design and a significant increase in compute capacity — but that’s a small cost compared to the impact of reworking the design of a new vehicle.”
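The stochastic approach Clifford describes can be sketched as follows: rather than one run with average material properties, the properties are sampled from a distribution and the simulation is repeated dozens of times to see whether any realistic combination fails. Everything below is invented for illustration – the surrogate "simulation" formula, the normal distribution, the yield-strength parameters and the intrusion limit are all hypothetical stand-ins, not real crash criteria.

```python
# Hypothetical sketch of a stochastic design study: sample material
# variability and count failing runs. The intrusion model, distribution
# parameters and pass/fail limit are invented for illustration.
import random

def intrusion_mm(yield_strength_mpa):
    """Stand-in for a full crash simulation: stiffer material, less intrusion."""
    return 120_000.0 / yield_strength_mpa

def stochastic_study(mean_mpa=350.0, stdev_mpa=20.0, runs=50,
                     limit_mm=370.0, seed=1):
    """Run many sampled 'simulations'; return how many exceed the limit."""
    rng = random.Random(seed)   # fixed seed for a repeatable study
    failures = 0
    for _ in range(runs):
        strength = rng.gauss(mean_mpa, stdev_mpa)
        if intrusion_mm(strength) > limit_mm:
            failures += 1
    return failures

# A single run with average properties passes the (invented) limit...
print(intrusion_mm(350.0))      # ~342.9 mm, under the 370 mm limit
# ...but sampling the variability can expose failing combinations.
print(stochastic_study())
```

Each loop iteration stands in for one of the "dozens of separate runs" Clifford mentions; in practice each would be a full LS-DYNA job, which is where the demand for additional compute capacity comes from.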