Nvidia Combines Modulus, Omniverse for Earth-2 and Other Digital Twins

By Oliver Peckham

March 22, 2022

An accurate digital twin can be a boon to scientific endeavors, whether it recreates individual buildings in a city to study energy use or the Earth’s climate system to gauge the effects of climate policy. At GTC21, Nvidia made waves by announcing that its Modulus framework for physics-based ML models and its Omniverse real-time simulation platform were being used to power digital twins of power plants for predictive maintenance, and that it planned to use these tools to create a digital twin of the planet in the coming years. At GTC22, Nvidia is doubling down on its scientific twins, announcing further integration of Omniverse and Modulus for Earth-2 and for renewable energy simulation.

“Basically, we’re integrating Modulus within Omniverse,” explained Dion Harris, senior manager for datacenter products at Nvidia. He explained that Modulus would be available as an extension in Omniverse, allowing users to build AI surrogate models that provide real-time, interactive, AI-driven simulation.
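Modulus exposes its own higher-level APIs, but the core idea of a physics-informed surrogate can be sketched in plain PyTorch: train a network whose loss penalizes violations of a governing equation, so that later "what-if" queries cost only a forward pass. The sketch below is purely illustrative and is not the Modulus API; the network shape, the 1D heat equation, and the collocation sampling are all assumptions for demonstration.

```python
# Minimal physics-informed surrogate sketch (illustrative; NOT the Modulus API).
# Trains u(x, t) to satisfy the 1D heat equation u_t = alpha * u_xx on random
# collocation points, so inference later is a single cheap forward pass.
import torch

alpha = 0.1  # assumed diffusivity

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation points in space-time; gradients flow through inputs.
    xt = torch.rand(256, 2, requires_grad=True)
    u = net(xt)
    # First derivatives w.r.t. (x, t) via autograd.
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    # Second derivative in x.
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    # PDE residual loss: drive u_t - alpha * u_xx toward zero.
    # (A real workflow adds initial/boundary-condition losses; without them
    # the trivial constant field also minimizes this residual.)
    loss = ((u_t - alpha * u_xx) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The takeaway is that the expensive physics is paid for at training time, which is what makes the real-time, interactive simulation Harris describes plausible once the surrogate is embedded in Omniverse.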

Nearing Earth-2

Earth-2 was, again, the headline item of this segment. Harris showed a chart illustrating projected progress toward meter-scale resolution in Earth system modeling, the resolution required to resolve many important cloud formations. “It’ll be another 40 years before we get there,” Harris said. “So that’s simply too long, so the whole promise of Earth-2 is to basically bring about AI, bring about digital twin modeling so that we can speed up that process and get a better understanding of climate and therefore, hopefully, do something about it before it’s too late.”

Earth-2, he said, was different: an unprecedented combination of first-principles simulation and data-driven models, presented in a real-time, interactive digital twin.

“As a first step, we’ve developed an AI surrogate model,” he said. “So we’ve worked with collaborators from Berkeley Lab, Caltech, Purdue, Michigan and Rice University, and we’ve built this AI surrogate model called FourCastNet.”

FourCastNet, short for “Fourier Forecasting Neural Network,” is a Transformer-based model built on Fourier neural operators, which allow a network to be trained at a lower resolution and then applied to higher-resolution data. FourCastNet, Harris said, was up to 45,000× faster than traditional numerical weather prediction at comparable accuracy, with a roughly 12,000× improvement in energy efficiency. Nvidia says that this performance enables much larger ensemble forecasts, with thousands of ensemble members.

The Fourier neural operator behind these advances is now integrated into Nvidia’s digital twin tools, letting users build models like FourCastNet within Modulus for use in Omniverse.
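The resolution flexibility comes from where a Fourier neural operator keeps its weights: in the frequency domain, attached to a fixed set of low-frequency modes rather than to grid points. A minimal 1D spectral-convolution layer, sketched below in PyTorch under assumed shapes, shows the mechanism; FourCastNet’s actual architecture (adaptive Fourier neural operator blocks inside a vision-transformer backbone) is considerably more elaborate.

```python
# Minimal 1D Fourier-neural-operator layer (illustrative sketch, not FourCastNet).
import torch

class SpectralConv1d(torch.nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency modes that carry weights
        # Complex channel-mixing weights, one matrix per retained Fourier mode.
        scale = 1.0 / channels
        self.weight = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid) -- grid size may differ from training.
        x_ft = torch.fft.rfft(x)                       # to frequency domain
        out_ft = torch.zeros_like(x_ft)
        m = min(self.modes, x_ft.shape[-1])
        # Mix channels mode-by-mode on the retained low frequencies only.
        out_ft[:, :, :m] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :m], self.weight[:, :, :m]
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to the grid

layer = SpectralConv1d(channels=8, modes=16)
coarse = layer(torch.randn(4, 8, 64))   # a training-resolution grid
fine = layer(torch.randn(4, 8, 256))    # same weights, finer grid
```

Because only the retained modes carry parameters, the same trained layer evaluates on a 256-point grid as readily as on the 64-point grid, which is the train-low, apply-high behavior described above.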

Image courtesy of Nvidia.

“Digital twins allow researchers and decision-makers to interact with data and rapidly explore what-if scenarios, which are nearly impossible with traditional modeling techniques because they are expensive and time-consuming,” said Karthik Kashinath, senior developer technology scientist and engineer at Nvidia. “Central to Earth-2, Nvidia’s FourCastNet enables the development of Earth’s digital twin by emulating the physics and dynamics of global weather faster and more accurately.”

“For the first time, a deep learning model has achieved better accuracy and skill on precipitation forecasting than state-of-the-art numerical models,” added Nvidia CEO Jensen Huang during his keynote today.

Twinned wind

The second use case Nvidia showed off for the Modulus-Omniverse combo was a collaboration with Siemens Energy, which previously worked with Nvidia on digital twins for predictive power plant maintenance. This time, the partner was Siemens Gamesa, a renewable energy company majority-owned by Siemens Energy. “What they’ve been able to do is take Omniverse and Modulus and build a digital twin of wind farms,” Harris said.

“If you look at a wind farm, it looks like it’s randomly distributed,” he continued. “But in actuality they’re always carefully placed so that the wakes and the streams coming off of the subsequent wind farms help power and propel to create more power as you go through that chain of windmills.” Figuring out how to optimize this placement, he explained, requires “intense simulation,” so Siemens Gamesa is using Modulus and Omniverse to create digital twins of its wind farms and simulate the effects of turbines operating in close proximity to one another. Nvidia says that its digital twins for this application, also powered by that Fourier neural operator, allow for an up to 4,000× speedup compared to traditional large-eddy simulation models.
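Siemens Gamesa’s solver is a learned surrogate for large-eddy simulation, but the wake interaction it accelerates can be illustrated with the much simpler, classic Jensen (Park) wake-deficit model: each upstream turbine slows the wind seen downstream, and since power scales with the cube of wind speed, spacing decisions compound quickly. All numbers below (rotor diameter, thrust coefficient, wake-decay constant) are textbook-style assumptions, not Siemens Gamesa values.

```python
# Jensen (Park) wake-deficit sketch for a single row of turbines
# (illustrative only; not the Siemens Gamesa / Modulus surrogate).
import math

D = 120.0     # rotor diameter (m), assumed
CT = 0.8      # thrust coefficient, assumed
K = 0.05      # wake-decay constant (offshore-like), assumed
U_INF = 10.0  # free-stream wind speed (m/s)

def waked_speed(u_in: float, distance: float) -> float:
    """Wind speed a given distance directly downstream of one turbine."""
    a = 0.5 * (1.0 - math.sqrt(1.0 - CT))  # axial induction factor
    deficit = 2.0 * a / (1.0 + 2.0 * K * distance / D) ** 2
    return u_in * (1.0 - deficit)

def row_speeds(n_turbines: int, spacing: float) -> list[float]:
    """Inflow speed at each turbine in a row, cascading wakes downstream.
    (Cascading each wake into the next is itself a simplification.)"""
    speeds = [U_INF]
    for _ in range(n_turbines - 1):
        speeds.append(waked_speed(speeds[-1], spacing))
    return speeds

# Power scales with the cube of wind speed, so small deficits compound fast.
for spacing_d in (5, 7, 10):  # turbine spacing in rotor diameters
    speeds = row_speeds(5, spacing_d * D)
    rel_power = sum((u / U_INF) ** 3 for u in speeds) / 5
    print(f"{spacing_d}D spacing: farm produces {rel_power:.0%} of wake-free power")
```

Tighter spacing saves land and cabling but costs energy; the point of a Modulus-built surrogate is to evaluate far higher-fidelity versions of this trade-off interactively instead of waiting hours for a large-eddy simulation run.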

“The collaboration between Siemens Gamesa and Nvidia has meant a great step forward in accelerating both the computational speed and the deployment speed of our latest algorithms development in such a complex field as computational fluid dynamics, and set the foundations for a strong partnership in the future,” said Sergio Dominguez, onshore digital portfolio manager at Siemens Gamesa.

So, while Earth-2 isn’t quite here yet, Nvidia is still touting real results from its digital twins platform for scientific applications.

“Accelerated computing with AI at datacenter scale has the potential to deliver millionfold increases in performance to tackle challenges, such as mitigating climate change, discovering drugs and finding new sources of renewable energy,” said Ian Buck, vice president, Accelerated Computing at Nvidia. “Nvidia’s AI-enabled framework for scientific digital twins equips researchers to pursue solutions to these massive problems.”
