Digital Prototyping a Mercedes

By John Russell

July 14, 2015

ISC 2015’s emphasis on HPC use in industry was reflected in the choice of Monday’s opening keynote speaker, Jürgen Kohler, senior manager, NVH (noise, vibration, and harshness) CAE & Vehicle Concepts, Mercedes-Benz Cars Development. Kohler presented a fascinating overview of the evolution of the auto industry’s use of HPC-based modeling and simulation. (Did you know simulating road noise on American roads is one of the toughest challenges? The surfaces are rougher than elsewhere, said Kohler.)

“I’m not an HPC guy, not an expert who deals all day with exascale or new chip architectures. I’m an engineer developing fascinating cars with the help of modern HPC-based CAE tools. Our goal is that these cars are as safe and as comfortable and as efficient as possible,” Kohler told the ISC audience.

In the rarefied air of HPC it’s sometimes forgotten that technical computing has a concrete role to play in industry. The auto industry has long been a poster child for its effective use of modeling and simulation to improve performance, increase safety, and achieve cost savings and remarkable manufacturing efficiencies.

Begun in the 1970s, early modeling and simulation of the Mercedes fleet was relatively crude (hundreds to a few thousand elements). The results were taken as rough guides, and physical testing regimes remained the gold standard. Today the situation is nearly reversed. Structural integrity, airflow, in-car acoustics, crash dynamics, and passenger safety are just a few of the many variables simulated prior to manufacturing.

In his talk, Kohler loosely summarized the development of M&S at Daimler and reviewed a few examples of how it is used.

Daimler’s pursuit of a digital prototype program started about 15 years ago, he said, and has since become standard operating procedure. Today there are more than 30 digital prototype projects underway, and the computational requirements necessary for effective simulation have grown steadily with the sophistication of the models. Besides assisting in the design and manufacture of beautiful cars, the increased use of M&S has dragged along familiar HPC headaches (bandwidth problems, IO and latency roadblocks, data management and storage challenges, etc.).

“Beside[s] expanding our product line (new models), we are facing many new technologies like dealing with electric drives, dealing with hybrids, and still improving traditional combustion engines. Maintain[ing] sustained mobility through networks [is] another – you can now know if your son or daughter is driving the car when they shouldn’t be. Consider the fascinating field of autonomous driving. We already have [that] available in the new S Class or E Class with autonomous driving in a traffic jam up to a speed of 30 km/h,” said Kohler.

Without digital modeling and simulation it would be virtually impossible to design and efficiently manufacture modern cars and trucks. Moreover investments in required plants and manufacturing equipment are typically made two years ahead of market launch and are based on the digital prototype.

“These results have to be absolutely reliable. It’s very expensive to have to change expensive tooling [after the fact],” he said. “We need competence in software and hardware interacting together [in] modeling, especially in transferring our ideas and measure[ments] into the product. We usually have local clusters with specific applications that run hundreds of jobs every day.”

While not revealing much detail about Daimler’s specific HPC infrastructure, Kohler presented a handful of M&S examples including collision safety modeling, passenger safety, and ride quality. He also showed a short video:

“In [about] 1970, when the film was made, we had started working on these methods, and it took quite a long time before [the] method got established. This is one of our first [crash] simulation models for stiffness, with 1,119 elements,” he said. Simulating crashes and NVH in the new S Class uses models with millions of elements, and some aerodynamics applications use 80 million cells.

Kohler then showed a video of a modern simulation of a crash between an S Class car and a Smart Car (made by Daimler). “Our goal is that both cars are very safe. The S Class is a big car, weighing more than 2,000 kg, and has a long crumple zone in the front. The Smart Car weighs about half as much and has a very stiff cell, which protects the passengers, along with an elaborate restraint system. The simulation is of a 50 km/h collision run on 490 CPUs for about 30 hours (8 million elements). The mesh size is critical.”

Today, Daimler simulates about 70,000 crashes a year in addition to conducting 700 physical crashes per year. “You see that’s a lot of work. Turnaround time began at five days and today it’s a half-day, or one day for bigger problems,” said Kohler.

These simulations generate an avalanche of data. “If you do 70,000 crash simulations a year and you store all the data which is computed, it would be about 40 exabytes. We don’t. Instead we temporarily store about 6 petabytes and reduce that down and store only 400 terabytes a year. I hear a lot about big data and it’s an important topic, but not as important for us in simulation. There are some big data projects in our company, but they are [in] quality and sales,” he said.
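Those figures imply a striking reduction ratio. A back-of-envelope check (a sketch using only the totals quoted above; the per-simulation numbers are derived here, not from the talk):

```python
# Rough arithmetic on the storage figures Kohler quoted.
SIMS_PER_YEAR = 70_000
RAW_TOTAL_EB = 40      # all computed data per year, exabytes
RETAINED_TB = 400      # data actually archived per year, terabytes

EB, TB, GB = 10**18, 10**12, 10**9

raw_per_sim_tb = RAW_TOTAL_EB * EB / SIMS_PER_YEAR / TB
retained_per_sim_gb = RETAINED_TB * TB / SIMS_PER_YEAR / GB
reduction = RAW_TOTAL_EB * EB / (RETAINED_TB * TB)

print(f"raw data per crash simulation: ~{raw_per_sim_tb:.0f} TB")
print(f"retained per simulation:       ~{retained_per_sim_gb:.1f} GB")
print(f"overall reduction factor:      ~{reduction:,.0f}x")
```

In other words, each simulation computes on the order of half a petabyte of data, of which only a few gigabytes survive the reduction pipeline.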

Not surprisingly, passenger safety is an area of emphasis and an area where simulation has distinct advantages. “A traditional dummy is [essentially] an instrument for measuring defined forces [in] a simulated crash. The problem is the bones are made of steel in order to measure forces. If you take a human arm or leg it is so different and so much lighter.

“We use human models. It’s very important to have valid human kinematics to evaluate injury or risk. We are able to make models of ten different body shapes, about 400k elements in the model, and the total cpu time varies from 1 hour up to 25 hours,” said Kohler.

NVH is another important measure as it directly affects comfort in the car. Kohler said these models can get quite large and bog down processing time. Engine excitation, cabin vibration, motor housing vibrations, and stiffness of rubber bearings are just a few of the aspects measured. “You can have a very small excitation of the chassis and you wouldn’t see it without simulations,” said Kohler. Airflow, of course, is important.

“At higher frequencies you have fluctuating turbulence and street noise. [Simulating] an S Class with a mesh of 150 million cells on 500 cores takes two weeks. There’s still potential for improvement,” Kohler said.

The ballooning of model sizes has been challenging, and it was necessary to adopt parallelization techniques to get runtimes down. Adopting HPC software such as the Automated Multi-Level Substructuring (AMLS) solver, and optimizing the system for it, has helped cut processing times.

“Today we are able to simulate very detailed models. As an example, the current model of the S Class, with 25 million degrees of freedom running on 6,000 nodes, would take 200 hours [to compute]; with the AMLS solver it takes less than two hours,” said Kohler.
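The quoted runtimes imply a speedup of at least two orders of magnitude. A trivial arithmetic sketch (the figures come from the talk; the variable names are mine):

```python
# Speedup implied by Kohler's AMLS figures for the S Class NVH model.
dof = 25_000_000     # degrees of freedom in the model
direct_hours = 200   # quoted runtime without AMLS
amls_hours = 2       # quoted upper bound with the AMLS solver

# "less than two hours" means the true speedup is at least this value
speedup = direct_hours / amls_hours
print(f"AMLS speedup: at least {speedup:.0f}x on a {dof:,}-DOF model")
```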

Clearly, said Kohler, HPC technology and techniques have made a major impact. “HPC gives us a deeper understanding of a system and helps reduce the need for prototypes and tests, and shortens development.” He was quick to add that M&S alone isn’t enough. Physical testing is required, and indeed Daimler has a wind tunnel with a 28 m² nozzle.

Most of us take our cars for granted but the truth is they are in many ways technological marvels and remarkably reliable given the wide range of conditions (weather, roads, collisions, temperature swings) in which they operate and the years of service we expect from them.
