EQSIM: Exascale Computing Project Moves Needle on Earthquake Risk Assessment

By Rob Farber

March 19, 2021

As part of the US Department of Energy’s Exascale Computing Project (ECP), the Earthquake Simulation (EQSIM) application development team is creating a computational tool set and workflow for earthquake hazard and risk assessment that moves beyond traditional empirically based techniques, which depend on historical earthquake data. With software assistance from the ECP’s software technology group, the EQSIM team is working to give scientists and engineers the ability to simulate full end-to-end earthquake processes. This means understanding what takes place from the initiation of fault rupture (i.e., the start of an earthquake) to modeling surface ground motions (i.e., the earthquake hazard) to providing engineers with precise information they can use to evaluate infrastructure response and the risk to people and property (i.e., the earthquake risk). EQSIM’s ultimate goal is to remove the computational limitations that currently constrain the understanding of earthquake phenomenology and prevent practical, physics-based earthquake hazard and risk assessments.

Traditional Empirically Based Ground Motion Estimates Do Not Capture Site Specificity

Traditional earthquake hazard and risk assessments for critical infrastructure have relied on empirically based approaches that use historical earthquake ground motions from many different locations to estimate future ground shaking at a specific site of interest, such as a bridge or building. Ground motions at a particular site are strongly influenced by the physics of the specific earthquake processes, including the fault rupture mechanics and the propagation of seismic waves through the ground (a complex heterogeneous medium), so averaging records from many disparate locations discards much of that complexity. As a result, the homogenization inherent in traditional empirically based estimates cannot fully capture the complex site specificity of ground motion, including its frequency content, amplitude, and directionality.

Advances in Computing Power Give Scientists the Ability to Assess Infrastructure Risk

Historically, limitations in available computing power meant that scientists and engineers running regional-scale earthquake simulations could only model ground vibrations at about 1 or 2 Hz, or one or two cycles per second. Although this represented real progress, it was insufficient because critical infrastructure, such as buildings, bridges, and energy systems, can be seriously impacted by higher frequency vibrations of up to 5 or 10 Hz. This mismatch between existing computational capability and what high-fidelity simulation requires has limited scientists’ ability to simulate ground motions at frequencies relevant to structures and thus to assess the risks of building collapse and the economic consequences of key infrastructure damage, such as bridge failures.

Groundbreaking Performance Increase

The focused effort of the EQSIM team has addressed this computational barrier, and the team can now model ground movement up to 10 Hz. Along with other work, a porting effort to new GPU-based supercomputers has been fundamental in delivering this additional computational capability. The initial work was performed on the Summit supercomputer at Oak Ridge National Laboratory with the assistance of ECP’s software technology team, which supports the RAJA performance portability libraries and other efforts to prepare scientific software for efficient execution on the US Department of Energy exaflop platforms expected in the 2022 timeframe.
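To illustrate the portability pattern that RAJA enables, the following minimal sketch shows how a single loop kernel can be written once and retargeted to a different backend by swapping the execution policy. This is an illustrative example only, not EQSIM or SW4 source code; the function name and kernel are hypothetical.

    // Minimal RAJA portability sketch (illustrative; not EQSIM/SW4 code).
    #include "RAJA/RAJA.hpp"

    // Hypothetical kernel: scale one array and accumulate it into another.
    void axpy(double* y, const double* x, double a, RAJA::Index_type n) {
      // With RAJA::seq_exec, the loop body runs sequentially on a CPU.
      // Swapping the policy to RAJA::cuda_exec<256> (with device-resident
      // arrays and a RAJA_DEVICE-annotated lambda) retargets the same loop
      // to an Nvidia GPU such as those in Summit, without rewriting it.
      RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, n),
        [=](RAJA::Index_type i) { y[i] += a * x[i]; });
    }

The design point is that the loop body and the hardware mapping are decoupled, which is what makes porting a large code base like SW4 to GPU-based machines tractable.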

To appreciate this accomplishment, it is necessary to understand that the computational effort to perform ground motion simulations varies as frequency to the fourth power. Thus, doubling the frequency resolution requires 16 times more computational effort.
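A quick back-of-envelope check makes that scaling concrete. The 2 Hz and 10 Hz endpoints below come from the frequencies discussed in this article, and the fourth-power cost model is the one stated above:

    // Fourth-power cost scaling for ground motion simulation.
    #include <cmath>
    #include <cstdio>

    int main() {
      // Doubling the resolved frequency: 2^4 = 16x more computational effort.
      double doubling_cost = std::pow(2.0, 4);
      // Moving from a 2 Hz simulation to a 10 Hz simulation: (10/2)^4 = 625x.
      double two_to_ten_hz = std::pow(10.0 / 2.0, 4);
      std::printf("Doubling frequency: ~%.0fx more work\n", doubling_cost);
      std::printf("2 Hz -> 10 Hz: ~%.0fx more work\n", two_to_ten_hz);
      return 0;
    }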

The increased capability to assess damage for a broad class of infrastructure is shown in Figure 1. In addition to running much higher fidelity models to capture higher frequency resolution, it is important to run the associated models quickly so that the full space of earthquake parameters (e.g., the different ways a given fault can rupture) can be appropriately accounted for.

Figure 1. The EQSIM challenge of regional simulation at frequencies that impact a broad range of infrastructure. (Source: https://ecpannualmeeting.com/assets/overview/sessions/ECP2020McCallenFinal-compressed.pdf.)

EQSIM Demonstrates the Value of Exascale Supercomputers

The large 16× growth in computational capability provides a ground-level view of the tremendous value of exascale supercomputers. Current leadership-class machines provide a platform that acts as a jumping-off point to demonstrate what is possible on these future machines. These initial efforts give scientists important insight into what is needed to accommodate the runtime growth of challenging but tractable physics-based simulations when the exascale systems become available. They also reveal limitations in the current models (e.g., mesh size and mesh resolution) that must be addressed to deliver optimal performance and actionable results.

End Result Will Save Lives and Avoid Severe Economic Consequences

These efforts, as exemplified by EQSIM, will save lives by making it possible to assess and plan against catastrophic infrastructure failure. With such assessments in hand, existing infrastructure can be reinforced and policies amended for new building construction in earthquake zones.

The value proposition for EQSIM is great, and historic failures abound. Examples include the collapse of the double-deck Cypress Street Viaduct off Interstate 880 in West Oakland during the 1989 Loma Prieta earthquake in California. The failure of a 1.25 mi (2.0 km) section of the viaduct killed 42 people and injured many more; the earthquake as a whole caused roughly $11.6–12.4 billion in damage in inflation-adjusted dollars. Similarly, the 1994 Northridge earthquake killed 60 people, injured more than 9,000, and caused approximately $22–86 billion in damage in 2014 inflation-adjusted dollars, making it one of the costliest natural disasters in US history. The Northridge earthquake also damaged portions of several major roads and freeways, including Interstate 10 over La Cienega Boulevard, and the interchanges of Interstate 5 with California State Routes 14 and 118 and with Interstate 210 were closed due to structural failure or collapse. All these events impacted transportation and the economies of the region for extended periods afterward. Figure 2 shows EQSIM’s exascale goal: to execute high-fidelity simulations quickly within a computational ecosystem that delivers relevant results.

Figure 2. The EQSIM exascale goal to be able to execute high-fidelity simulations quickly within a computational ecosystem that delivers relevant results. (Source: https://ecpannualmeeting.com/assets/overview/sessions/ECP2020McCallenFinal-compressed.pdf.)

EQSIM Encapsulates Extraordinary Physics in an End-to-End Workflow

The EQSIM team has focused on three areas to provide an end-to-end workflow that encompasses the relevant physics and performance requirements needed to give scientists the information they need to assess risk and hopefully avoid catastrophe.

David McCallen—Professor in the Department of Civil and Environmental Engineering at the University of Nevada, Reno and Senior Scientist at Lawrence Berkeley National Laboratory—observed in his interview with ECP Communications Specialist Scott Gibson that the team has been working in three areas. (Scott’s interview is available in text and podcast form.)

  • The EQSIM team has been improving the algorithms and sophistication of existing codes for ground motion simulation. The team is working with and optimizing the SW4 code that was originally developed at Lawrence Livermore National Laboratory.
  • The team is translating and porting codes to leadership-class GPU-based supercomputers, such as Summit. Currently, the team has achieved 10 Hz simulations, and seismic inversion capabilities being developed under EQSIM will provide a tool for improving the geologic models necessary to support these high-frequency simulations.
  • The team is rigorously coupling the resulting ground motions to detailed infrastructure models including coupled soil-structure systems.
    This linkage between ground motion and infrastructure is very important because engineers can see how complex 3D incident waves from ground movement impinge on and interact with infrastructure. Previously, engineers had to make simplifying assumptions about the character of those incident waves, which necessarily limited the accuracy of risk assessments based on traditional empirical techniques.

The richness of the information the complete EQSIM workflow provides about the distribution of ground motions and infrastructure risk is shown in Figure 3. The workflow includes several established and respected codes: SW4, a fourth-order, 3D seismic wave propagation model; NEVADA, a nonlinear, finite-displacement program for building earthquake response; and OPENSEES, a nonlinear finite-element program for coupled soil-structure interaction.

Figure 3. EQSIM provides a framework for regional-scale fault-to-structure simulation. Shown is the San Francisco Bay Area regional domain for EQSIM performance testing; the Hayward fault runs along the eastern margin of the San Francisco Bay. (Source: https://ecpannualmeeting.com/assets/overview/sessions/ECP2020McCallenFinal-compressed.pdf.)

Assessing the Results

To evaluate regional-scale simulations and measure the computational progress of the application development and exascale performance goals of this project, the team created a representative large regional-scale detailed model of the San Francisco Bay Area (SFBA), as shown in Figure 3.

This model includes all the necessary geophysics modeling features (e.g., 3D geology, earth surface topography, material attenuation, nonreflecting boundaries, and fault rupture models). For a 10 Hz simulation, the computational domain includes up to 300 billion grid points in the finite-difference domain for models that contain fine-scale representations of soft near-surface sedimentary soils. The SFBA model provides a comprehensive basis for testing and evaluating advanced physics algorithms and computational implementations. The Hayward fault, which runs along the east side of the San Francisco Bay and is a central focus of the EQSIM performance testing (shown by the line paralleling the bay in Figure 3), has generated a major earthquake roughly every 150 years on average, and the last such event occurred in 1868. That history makes simulation of this region, and of this fault in particular, of special societal importance.
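To see why grid counts reach such scales, consider a rough estimate: the grid spacing must be fine enough to sample the shortest seismic wavelengths, which are set by the slowest near-surface materials and the highest frequency of interest. All of the domain dimensions and material values below are assumptions chosen for illustration, not EQSIM’s actual model parameters:

    // Hedged back-of-envelope estimate of grid-point counts at 10 Hz.
    // All inputs are illustrative assumptions, not EQSIM model values.
    #include <cstdio>

    int main() {
      double Lx = 120e3, Ly = 80e3, Lz = 30e3; // assumed domain size (m)
      double vs_min = 500.0; // assumed slowest near-surface shear speed (m/s)
      double f_max  = 10.0;  // target resolved frequency (Hz)
      double ppw    = 8.0;   // assumed grid points per minimum wavelength

      // The shortest wavelength, vs_min / f_max, must be sampled by ppw points.
      double h = vs_min / (f_max * ppw);          // grid spacing (m)
      double n = (Lx / h) * (Ly / h) * (Lz / h);  // uniform-grid point count
      std::printf("h = %.2f m -> ~%.0f billion grid points (uniform grid)\n",
                  h, n / 1e9);
      return 0;
    }

A uniform grid at these assumed values actually overshoots the count cited above; production codes such as SW4 use mesh refinement so that the finest spacing is confined to the soft near-surface soils, which is what keeps a 10 Hz regional model in the hundreds of billions of grid points rather than trillions.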

McCallen notes that moving to the Summit supercomputer and the associated software development effort were “tremendously enabling” because they increased the EQSIM figure of merit (FOM) from 66 to 189, nearly a threefold gain. The FOM is a quantitative metric of the scientific work rate of an application; as the code is optimized to run faster, the FOM increases. As shown in Figure 4, the one-year jump between FY19 and FY20 is huge.

Figure 4. Advancements in the EQSIM FOM; benchmark performance tests A through F. (Source: https://www.exascaleproject.org/research-project/eqsim.)

Summary

The EQSIM project demonstrates the value of extreme-scale (e.g., exascale) supercomputers. When coupled with equally sophisticated software, such computational power can demonstrably deliver the performance scientists and engineers need to solve socially relevant problems, saving lives and preventing future economic distress.

Rob Farber is a global technology consultant and author with an extensive background in HPC and in developing machine learning technology that he applies at national laboratories and commercial organizations. Rob can be reached at [email protected]
