Rolling Thunder: Predicting Storms 24 Hours in Advance

By Michael Schneider

November 25, 2005

This spring, for the first time, real-time forecasts running daily on Pittsburgh Supercomputing Center’s LeMieux, a lead resource of the TeraGrid, correctly predicted the details of thunderstorms 24 hours in advance.

If anything is certain in 2005, not counting death and taxes, it’s that we’re at the mercy of forces we don’t control. Despite incredible advances in understanding nature, leading to amazing technologies our forebears couldn’t imagine, our planet still unleashes furious energies that devastate communities and lives.

Even before Katrina, U.S. losses from extreme weather such as hurricanes, floods, winter storms and tornadoes averaged $13 billion annually. The human cost of nearly 1,000 people each year is incalculable. Would better forecasting make a difference? No doubt. More to the point, is better forecasting possible?

You bet, according to Kelvin Droegemeier, who directs the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma, Norman. Take thunderstorms, the nasty ones with rotating updrafts — called supercells. They surge across the Great Plains each spring with the potential to spawn deadly tornadoes. How much would it be worth to know six hours in advance — instead of, as with current forecasting, a half-hour to an hour — that one of these storms is headed your way? And not to have just an ambiguous “storm warning” but precise information about when and where it will hit, how severe it will be, and how long it will last?

“We want to be able to say that over Pittsburgh this afternoon at 3:30 there’ll be a thunderstorm with 30 mile-per-hour wind, golfball-sized hail, two and a half inches of rain, and it will last ten minutes, and to give you that forecast six hours in advance,” said Droegemeier.

Since the 90s, CAPS has taken several strides toward demonstrating that, with sufficient resources in data-gathering and computing, it’s possible to do this. This spring, they took another stride. In a major one-of-a-kind collaboration with NOAA (the National Oceanic and Atmospheric Administration), CAPS used resources of the NSF TeraGrid, in particular LeMieux, PSC’s terascale system, to produce the highest-resolution storm forecasts yet attempted. On several occasions, CAPS predicted the occurrence of storms within 20 miles and 30 minutes of where and when they actually happened, and they did it 24 hours in advance.

“That type of result pretty much sets conventional thinking on its ear,” said Droegemeier.

Are Thunderstorms Predictable?

In contrast to the daily weather reports on TV, which are generated from large-scale models that predict atmospheric structure over the continental United States, storm-scale forecasting involves a tighter focus — at the scale of a county or city. It requires observational data such as temperature, pressure, humidity, wind speed and direction, and other variables, at a correspondingly finer spatial resolution, and it demands the most powerful computing available, and then some, to run the models.

When CAPS began in 1988, the prevailing view of storm-scale forecasting was skepticism. Numerical weather prediction itself was not in question; since the 1970s, computers programmed with equations that represent the atmosphere and initialized with observational data had proven to be, by far, the best way to forecast weather. The question was more fundamental. Are thunderstorms predictable?

“The challenge we set for ourselves was, if you take the concept of computer forecast technology and apply it at this smaller scale, does the atmosphere possess any intrinsic, fundamental predictability, or is it all turbulence?” Droegemeier asked. “We had hopes, but we didn’t know. With big help from the Pittsburgh Supercomputing Center, we resolved that question.”

CAPS developed groundbreaking new techniques to gather atmospheric data from Doppler radar and to assimilate this data with other meteorological information. And they developed a computational model that uses this data to predict weather at thunderstorm scale.

“It all starts with observations, because to predict we need to know what’s going on right now,” said Droegemeier. Data to feed weather models comes from many sources, including upper-air balloons, the national Doppler radar network, satellites, and sensing systems on commercial airplanes. From these sources, a huge amount of information, computationally processed and spread across a 3D grid representing the atmosphere, becomes the initial conditions for National Weather Service forecasts. With grid spacing of 10 to 30 kilometers, the NWS operational models do well at showing high and low pressure areas and the storm fronts that develop from them — weather that happens, roughly speaking, on the scale of states. Individual thunderstorms originate at smaller scales, and forecasting them requires much finer spacing, down to one to two kilometers.
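To get a feel for why that finer spacing is so demanding, the sketch below counts grid points at coarse versus storm-resolving spacing. The domain size and the number of vertical levels are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope grid-point counts; the domain size and the
# number of vertical levels below are illustrative assumptions.
def grid_points(domain_x_km, domain_y_km, spacing_km, levels):
    """Approximate number of grid points covering a rectangular domain."""
    nx = round(domain_x_km / spacing_km)
    ny = round(domain_y_km / spacing_km)
    return nx * ny * levels

DOMAIN_X_KM, DOMAIN_Y_KM, LEVELS = 3000.0, 2000.0, 50  # hypothetical domain

for spacing in (30.0, 10.0, 2.0, 1.0):  # horizontal spacing in km
    n = grid_points(DOMAIN_X_KM, DOMAIN_Y_KM, spacing, LEVELS)
    print(f"{spacing:4.0f} km spacing -> {n:>12,} grid points")
```

Dropping from 10-kilometer to 2-kilometer spacing alone multiplies the horizontal point count by 25, before any change to the time step.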

The foundation of a storm-forecast model is 15 to 20 non-linear differential equations. They represent the physical phenomena of the atmosphere and how it interacts with the surface of the Earth. Making a forecast involves feeding the 3D grid with initializing data, solving these equations at each position on the grid, and then doing it all over again for the next time step, every five to ten seconds, for 24 hours of weather. For a single forecast, this means solving trillions of equations. Halving the grid spacing doubles the number of grid points along each of the three dimensions, which requires eight times more computing. If you also halve the time step, to capture correspondingly finer detail in time, it’s a 16-fold increase. For this reason and others, storm-scale forecasting poses an enormous computational challenge.
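A minimal sketch of that scaling argument, with purely illustrative refinement factors: halving the spacing multiplies the work per time step by eight (one doubling per spatial dimension), and halving the time step as well doubles the number of steps, for a combined factor of sixteen.

```python
# Relative compute cost when grid spacing (and optionally the time step)
# is refined; the refinement factors passed in are illustrative.
def relative_cost(spatial_refinement, time_refinement):
    """Work grows as refinement**3 in space, times the number of time steps."""
    return spatial_refinement ** 3 * time_refinement

print(relative_cost(2, 1))  # halve the spacing only              ->  8x
print(relative_cost(2, 2))  # halve the spacing and the time step -> 16x
```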

Since 1993, CAPS has run forecasting experiments during spring storm season. In 1995 and ’96, using PSC’s CRAY T3D, a leading-edge system at the time, for a limited region of the Great Plains, they successfully forecast location, structure and timing of individual storms six hours in advance — a forecasting milestone. For this accomplishment, CAPS and PSC won the 1997 Computerworld-Smithsonian award for science, and CAPS garnered a 1997 Discover Magazine award for technological innovation.

If there were lingering doubts that the question in storm forecasting has shifted from scientific and technological feasibility to national policy — whether sufficient resources can be made available, and when — this spring’s storm forecast experiment should erase them.

Watershed Forecasts

As they have during many storm seasons over the past dozen years, CAPS and PSC this spring collaborated to produce real-time storm forecasts. The difference this year was that the forecasts covered two-thirds of the continental United States, from the Rockies east to the Appalachians. Using LeMieux, they successfully produced an on-time, daily forecast from mid-April through early June. “This was an unprecedented experiment that meteorologists could only dream of several years ago,” said Droegemeier.

Conducted in collaboration with NOAA, the program included about 60 weather researchers and forecasters from several NOAA organizations — the Storm Prediction Center and the National Severe Storms Laboratory, both in Norman, and the Environmental Modeling Center in Maryland — as well as the NSF-sponsored National Center for Atmospheric Research in Boulder, Colorado, and CAPS.

This experiment offered an unprecedented chance for forecasters, as well as researchers, to work with advanced technology on a daily basis, technology that, according to Droegemeier, may be five years from being incorporated into daily forecast operations at the resolutions used. Each evening, meteorologists in Norman transmitted new atmospheric conditions to Pittsburgh. By the next morning, LeMieux had produced a forecast covering the next 30 hours and transmitted it back to SPC and NSSL in Norman, where researchers turned the model output data into images corresponding to what they see on radar. These model runs were conducted daily with virtually no problems.

Using several different versions of the Weather Research and Forecasting (WRF) Model, an advanced model designed for research as well as operational use, the partners generated forecasts three times daily. EMC and NCAR used grid spacing of four to 4.5 kilometers. With LeMieux at its disposal, running on 1,228 processors, CAPS went a step further. With grid spacing of two kilometers, more than five times finer than the most sophisticated NWS operational model — and requiring 300 times more raw computing power — their forecasts are the highest-resolution storm forecasts to date.
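One way to land in the neighborhood of that 300-fold figure, offered here as an assumption rather than the article's own accounting: a grid roughly five times finer costs about five cubed, or 125, times more work per time step, and a time step shortened by roughly 2.5 times for numerical stability multiplies that again.

```python
# Rough estimate of the compute multiplier for ~2 km vs ~10 km spacing.
# The breakdown into spatial and time-step factors is an assumption;
# the article states only the overall figure of roughly 300x.
spatial_refinement = 5.0                 # ~10 km operational grid down to ~2 km
work_per_step = spatial_refinement ** 3  # ~125x more work per time step
time_step_factor = 2.5                   # assumed shorter, stability-limited step
print(f"~{work_per_step * time_step_factor:.0f}x more raw computing")  # ~313x
```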

“Our daily WRF-model forecasts had twice the horizontal resolution and nearly 50-percent greater vertical resolution than the other two experimental products,” said Droegemeier. This higher resolution meant that the forecasts were able to capture individual thunderstorms, including their rotation. On several occasions, when the 24-hour forecast showed development of thunderstorms, it proved to be accurate within 20 miles and 30 minutes.

Just as importantly, the computer model produced images that matched well in structure with what forecasters saw later on radar. “The computer forecasts looked very similar to what we see on radar,” said Steven Weiss, SPC science and operations officer. “The structure you see on the screen is important in judging whether the storm is likely to produce tornadoes, hail or dangerous wind. These results were an eye-opener in many respects.”

“Real-time daily forecasts over such a large area and with such high spatial resolution have never been attempted before, and these results suggest that the atmosphere may be fundamentally more predictable at the scale of individual storms, and especially organized storm systems, than previously thought,” said Droegemeier.

Such results could lead to a revision of the classical predictability theory put forth by Edward Lorenz, the now-retired MIT professor whose pioneering research led to chaos theory. The forecasting community is still absorbing the findings, but they may mark a watershed in the understanding of atmospheric predictability.

For more information, including graphics, visit http://www.psc.edu/science/2005/storms/.
