Topology, Physics & Machine Learning Take on Climate Research Data Challenges

September 7, 2018

Sept. 7, 2018 — Two PhD students who first came to Lawrence Berkeley National Laboratory (Berkeley Lab) as summer interns in 2016 are spending six months a year at the lab through 2020 developing new data analytics tools that could dramatically impact climate research and other large-scale science data projects.

Grzegorz Muszynski is a PhD student at the University of Liverpool, U.K., studying with Vitaliy Kurlin, an expert in topology and computational geometry. Adam Rupe is pursuing his PhD at the University of California, Davis, under the supervision of Jim Crutchfield, an expert in dynamical systems, chaos, information theory and statistical mechanics. Both are also currently working in the National Energy Research Scientific Computing Center’s (NERSC) Data & Analytics Services (DAS) group, and their PhDs are being funded by the Big Data Center (BDC), a collaboration between NERSC, Intel and five Intel Parallel Computing Centers launched in 2017 to enable capability-class, data-intensive applications on NERSC’s supercomputing platforms.

During their first summer at the lab, Muszynski and Rupe so impressed their mentors that they were invited to stay on another six months, said Karthik Kashinath, a computer scientist and engineer in the DAS group who leads multiple BDC climate science projects. Their research also fits nicely with the goals of the BDC, which was just getting off the ground when they first came on board. Muszynski and Rupe are now in the first year of their respective three-year BDC-supported projects, splitting time between their PhD studies and their research at the lab.

A Grand Challenge in Climate Science

From the get-go, their projects have focused on addressing a grand challenge in climate science: finding more effective ways to detect and characterize extreme weather events in the global climate system across multiple geographical regions, and developing more efficient methods for analyzing the ever-increasing amount of simulated and observational data. Automated pattern recognition is at the heart of both efforts, yet the two researchers are approaching the problem in distinctly different ways: Muszynski is using various combinations of topology, applied math and machine learning to detect, classify and characterize weather and climate patterns, while Rupe has developed a physics-based mathematical model that enables unsupervised discovery of coherent structures characteristic of the spatiotemporal patterns found in the climate system.

“When you are investigating extreme weather and climate events and how they are changing in a warming world, one of the challenges is being able to detect, identify and characterize these events in large data sets,” Kashinath said. “Historically we have not been very good at pulling out these events from very large data sets. There isn’t a systematic way to do it, and there is no consensus on what the right approaches are.”

This is why the DAS group and the BDC are so enthusiastic about the work Muszynski and Rupe are doing. In their time so far at the lab, both students have been extremely productive in terms of research progress, publications, presentations and community outreach, Kashinath noted. Together, their work has resulted in six articles, eight poster presentations and nine conference talks over the last two years, which has fueled interest within the climate science community—and for good reason, he emphasized. In particular, Muszynski’s work was singled out as novel and powerful at the Atmospheric Rivers Tracking Method Intercomparison Project (ARTMIP), an international community of researchers investigating atmospheric rivers.

“The volume at which climate data is being produced today is just insane,” he said. “It’s been going up at an exponential pace ever since climate models came out, and these models have only gotten more complex and more sophisticated with much higher resolution in space and time. So there is a strong need to automate the process of discovering structures in data.”

There is also a desire to find climate data analysis methods that are reliable across different models, climates and variables. “We need automatic techniques that can mine through large amounts of data and that work in a unified manner, so they can be deployed across different data sets from different research groups,” Kashinath said.

Using Geometry to Reveal Topology

Muszynski and Rupe are both making steady progress toward meeting these challenges. Over his two years at the lab so far, Muszynski has developed a framework of tools from applied topology and machine learning that are complementary to existing tools and methods used by climate scientists and can be mixed and matched depending on the problem to be solved. As part of this work, Kashinath noted, Muszynski parallelized his codebase across several nodes of NERSC’s Cori supercomputer to accelerate the machine learning training process, which often requires hundreds to thousands of examples to train a model that can classify events accurately.
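The article does not say how that parallelization was implemented. As a rough illustration of the general pattern on a system like Cori, the hypothetical sketch below distributes per-snapshot feature extraction across MPI ranks with mpi4py and fits a scikit-learn classifier on the gathered results; the `extract_features` stub, example counts and labels are placeholders, not Muszynski’s actual code:

```python
# Hypothetical data-parallel feature extraction with MPI.
# Illustrative only; not the actual BDC codebase.
import numpy as np
from mpi4py import MPI
from sklearn.svm import SVC

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_examples = 10_000  # placeholder: number of labeled climate snapshots

# Each rank processes a disjoint slice of the training examples.
my_indices = np.array_split(np.arange(n_examples), size)[rank]

def extract_features(i):
    """Placeholder for per-snapshot topological feature extraction."""
    rng = np.random.default_rng(i)
    return rng.normal(size=8)  # stand-in for an 8-dim feature vector

local_X = np.array([extract_features(i) for i in my_indices])
local_y = (my_indices % 2).astype(int)  # stand-in labels

# Gather all features on rank 0 and fit a single classifier there.
X_parts = comm.gather(local_X, root=0)
y_parts = comm.gather(local_y, root=0)
if rank == 0:
    X, y = np.vstack(X_parts), np.concatenate(y_parts)
    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
```

Launched with, say, `mpirun -n 4 python train.py` (or `srun` under Slurm), the expensive per-snapshot feature extraction parallelizes trivially, while the final fit happens once on the root rank.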

His topological methods also benefited from the guidance of Dmitriy Morozov, a computational topologist and geometer in Berkeley Lab’s Computational Research Division (CRD). In a paper submitted earlier this year to the journal Geoscientific Model Development, Muszynski and his co-authors used topological data analysis and machine learning to recognize atmospheric rivers in climate data, demonstrating that this automated method is “reliable, robust and performs well” when tested on a range of spatial and temporal resolutions of CAM5.1 climate model output. They also tested the method on MERRA-2, a climate reanalysis product whose incorporation of observational data makes pattern detection even more difficult. In addition, they noted, the method is “threshold-free,” a key advantage over existing data analysis methods used in climate research.

“Most existing methods use empirical approaches where they set arbitrary thresholds on different physical variables, such as temperature and wind speed,” Kashinath explained. “But these thresholds are highly dependent on the climate we are living in right now and cannot be applied to different climate scenarios. Furthermore, these thresholds often depend on the type of dataset and spatial resolution. Because Grzegorz’s method looks for the underlying shapes (geometry and topology) of these events in the data, it is inherently free of the threshold problem and can be seamlessly applied across different datasets and climate scenarios. We can also study how these shapes are changing over time, which will be very useful for understanding how these events are changing with global warming.”
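To make the contrast concrete, here is a toy (assumed, not from the article) threshold-based detector. A fixed cutoff on integrated vapor transport (IVT), such as the value near 250 kg m-1 s-1 commonly cited in the atmospheric river literature, flags a very different fraction of grid cells once the climate (and hence the IVT distribution) shifts, even when the shapes of the structures are unchanged:

```python
# Toy threshold-based atmospheric river detector (illustrative only).
# The fixed IVT cutoff is tuned to the present-day climate, so the same
# rule flags far more cells in a warmer, moister simulation.
import numpy as np

def threshold_detector(ivt, cutoff=250.0):
    """Boolean mask of grid cells whose IVT exceeds a fixed cutoff."""
    return ivt > cutoff

rng = np.random.default_rng(0)
ivt_today = rng.gamma(shape=2.0, scale=60.0, size=(180, 360))  # synthetic field
ivt_warmer = 1.15 * ivt_today  # same spatial shapes, ~15% more moisture

print("fraction flagged, today's climate: ", threshold_detector(ivt_today).mean())
print("fraction flagged, warmer climate:  ", threshold_detector(ivt_warmer).mean())
```

A shape-based detector sees the same filament geometry in both fields; the fixed threshold does not.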

While topology has been applied to simpler, smaller scientific problems, this is one of the first attempts to apply topological data analysis to large climate data sets. “We are using topological data analysis to reveal topological properties of structures in the data and machine learning to classify these different structures in large climate datasets,” Muszynski said.
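In outline, that pairing can be as simple as computing a persistence diagram per snapshot and handing summary statistics to a classifier. The sketch below is a minimal illustration of the idea, not the pipeline from the paper; it uses GUDHI’s cubical complexes on synthetic 2D fields, and the feature choice, field sizes and labels are all assumptions:

```python
# Minimal TDA + ML sketch: summarize each 2D field with cubical
# persistence, then classify on simple persistence features.
# Illustrative only; not the Muszynski et al. pipeline.
import numpy as np
import gudhi
from sklearn.linear_model import LogisticRegression

def persistence_features(field, max_pairs=10):
    """Longest `max_pairs` H0 persistence values of the negated field."""
    # Negate so bright structures become early-born components
    # (effectively a superlevel-set filtration).
    f = -field
    cc = gudhi.CubicalComplex(top_dimensional_cells=f)
    cc.persistence()  # must run before querying intervals
    pairs = cc.persistence_intervals_in_dimension(0)
    # Cap the one infinite death at the max cell value so the global
    # component contributes a finite bar.
    deaths = np.where(np.isinf(pairs[:, 1]), f.max(), pairs[:, 1])
    pers = np.sort(deaths - pairs[:, 0])[::-1][:max_pairs]
    return np.pad(pers, (0, max_pairs - len(pers)))

rng = np.random.default_rng(1)

def snapshot(label):
    """Synthetic field: label 1 adds a bright filament to background noise."""
    f = rng.normal(size=(64, 64))
    if label:
        f[20:24, :] += 4.0  # filament-like ridge
    return f

labels = rng.integers(0, 2, size=200)
X = np.array([persistence_features(snapshot(l)) for l in labels])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

Because the features encode the shape of the field rather than its absolute magnitudes, they sidestep the fixed-cutoff problem described above.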

The results so far have been impressive, with notable reductions in computational costs and data extraction times. “I only need a few minutes to extract topological features and classify events using a machine learning classifier, compared to days or weeks needed to train a deep learning model for the same task,” he said. “This method is orders of magnitude faster than traditional methods or deep learning. If you were using vanilla deep learning on this problem, it would take 100 times the computational time.”

Another key advantage of Muszynski’s framework is that “it doesn’t really care where you are on the globe,” Kashinath said. “You can apply it to atmospheric rivers in North America, South America, Europe – it is universal and can be applied across different domains, models and resolutions. And this idea of going after the underlying shapes of events in large datasets with a method that could be used for various classes of climate and weather phenomena and being able to work across multiple datasets—that becomes a very powerful tool.”



Source: Kathy Kincade, NERSC
