HPC Experts Provide Glue Between Supercomputers and Climate Science

By Michael Feldman

November 30, 2011

Some of the most important supercomputing models aimed at climate change research have been developed by the National Oceanic and Atmospheric Administration (NOAA), and in particular its Geophysical Fluid Dynamics Laboratory (GFDL) at Princeton University. The GFDL researchers are experts in climate science, but like many scientists, they are often less adept at dealing with the vagaries of supercomputing technology. That’s where HPTi comes in.

HPTi (High Performance Technologies Inc.) was bought by DRC in July, but maintains its autonomy and mission as a federal contractor for high-end technology support. The company’s strength lies in its HPC expertise and its ability to apply computational and scientific research to its clients’ applications. For NOAA, HPTi provides high performance computing know-how, consulting, and training. As part of that contract, HPTi supports the GFDL climate work, helping researchers there upgrade their climate models as well as providing guidance on future supercomputing hardware and software tools.

A major reason the GFDL work is so important is that its results are incorporated into the climate assessments composed by the Intergovernmental Panel on Climate Change (IPCC). Although the IPCC also uses results from climate research conducted at other labs, the GFDL models are central to its assessments. And the reports themselves have become the de facto standard for climate policymakers, scientists, the press, and the public.

The niche HPTi has carved out with GFDL has allowed researchers at the lab to concentrate on the physics of the models, leaving the nitty-gritty of HPC to the HPTi staff of computational scientists, software developers, systems support people, and other consultants. Because the company provides so much of the computational glue for the researchers, HPTi tends to maintain long-term relationships (and contracts) with agencies like NOAA. In this case, the company has been supporting the GFDL climate research effort since 2008.

That continuity of involvement is important. The HPC hardware for the climate work gets upgraded every few years, requiring a reassessment of the software as well as the software development tools. According to William Cooke, a senior associate who works with the climate modeling team at GFDL, the HPC systems used to run the models have changed dramatically over the last four years.

Cooke says that as recently as a few years ago, GFDL was employing an 8,000-core SGI Altix supercomputer with a shared-memory architecture. A couple of years ago, they moved to a Cray XT6 machine with 30,000-plus cores. In the next few months, they’re going to upgrade that system to an XE6, adding 78,000 more cores in the process. When installed, that system will deliver a peak petaflop of computational horsepower.

Ideally the scientists would like to just recompile their application software and run it on the new machine, but in practice, that’s not what happens. The hardware upgrades, especially the greatly increased core counts, necessitate that the climate models be modified if they are to take advantage of the additional computational power.

The additional power also allows the researchers to consider adding extra features, such as atmospheric chemistry, CO2 feedback, phytoplankton blooms, more detailed landforms and so on. But more directly, the extra cores can be used to increase the fidelity of the existing models.

For example, the current climate models use a two-degree resolution for the atmosphere and land and a one-degree resolution for the ocean and ice. To get more fine-grained results, the scientists would like to get the atmosphere/land model down to half a degree or better and the ocean/ice model to at least a quarter of a degree. The resulting simulations will be better at picking up smaller-scale effects like hurricane activity and the intensity of regional rainfall or drought.
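
To put those numbers in perspective, refining the horizontal grid that way multiplies the number of grid points considerably. The back-of-the-envelope sketch below is a hypothetical illustration (not GFDL code) that assumes a simple regular latitude-longitude grid; real model grids are more sophisticated, but the rough scaling holds:

```python
# Hypothetical illustration: horizontal grid-cell counts on a regular
# lat-lon grid at the resolutions quoted above.

def cells(resolution_deg):
    """Number of horizontal grid cells on a regular lat-lon grid."""
    return int(360 / resolution_deg) * int(180 / resolution_deg)

configs = {
    "atmosphere/land, 2 deg -> 0.5 deg": (2.0, 0.5),
    "ocean/ice, 1 deg -> 0.25 deg": (1.0, 0.25),
}

for name, (old, new) in configs.items():
    factor = cells(new) / cells(old)
    print(f"{name}: {cells(old):,} -> {cells(new):,} cells "
          f"({factor:.0f}x more horizontal points)")
```

Each refinement works out to roughly 16 times more horizontal points, before accounting for the shorter time steps that finer grids typically require, which is why the extra cores matter so much.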

To make it all work, though, the models need to be stitched together, which takes a special piece of software called the coupler. According to Cooke, that’s another critical component that HPTi has been spending a lot of time on. And in this area, he says, the increased core count that came with the Cray supercomputer forced a rewrite of the underlying algorithms. The new version not only enabled the coupled model to run on over 10,000 cores, but also cut the simulation time in half.
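
For readers unfamiliar with the term, a coupler is the piece of software that exchanges boundary fields (heat, moisture, momentum and so on) among the atmosphere, ocean, land and ice components at regular intervals. The sketch below is a minimal, hypothetical illustration of that idea; it is not GFDL's coupler (which, like the models themselves, is parallel HPC code), and all the names are invented:

```python
# Minimal conceptual sketch of a coupled-model time loop (hypothetical,
# not GFDL's actual coupler). In a real coupler, the field exchanges are
# parallel regridding operations spread across thousands of cores.

class Component:
    def __init__(self, name):
        self.name = name
        self.state = {}

    def step(self, dt, forcing):
        # Placeholder for advancing this component's physics by dt using
        # the fields supplied by its partner components.
        self.state["forcing"] = forcing
        return {f"{self.name}_flux": 1.0}  # fields exported to partners

def run_coupled(components, n_steps, dt):
    exports = {c.name: {} for c in components}
    for _ in range(n_steps):
        for comp in components:
            # Gather the fields exported by every other component.
            forcing = {k: v for name, ex in exports.items()
                       if name != comp.name for k, v in ex.items()}
            exports[comp.name] = comp.step(dt, forcing)

run_coupled([Component("atmosphere"), Component("ocean"),
             Component("land"), Component("ice")], n_steps=4, dt=1800.0)
```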

That’s significant, given the amount of computer time devoted to these climate models. Running a 20- to 30-year simulation takes about a week on the current system, but forecasting hundreds of model-years can tie up the same machine for up to six months. The scaled-up software translates into more runs for the researchers, allowing them to refine their results and create more “what if” scenarios.
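
Those figures hang together with a little arithmetic: at roughly 25 model-years per week, a several-hundred-year campaign occupies the machine for on the order of half a year, and halving the runtime of each run doubles the number of scenarios that fit into the same allocation. A quick check (illustrative numbers only; the 600-year campaign length is an assumed stand-in for "hundreds of model-years"):

```python
# Illustrative arithmetic only, not GFDL's accounting.
model_years_per_week = 25     # a 20- to 30-year run in about a week
campaign_years = 600          # assumed value for "hundreds of model-years"
weeks = campaign_years / model_years_per_week
print(f"{campaign_years} model-years ~ {weeks:.0f} weeks (~{weeks / 4.33:.1f} months)")
print("Halving the simulation time roughly doubles the runs per allocation.")
```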

At the back end of the simulation, the computation turns into a typical big data problem. According to Cooke, even at two-degree resolution, the models generate about half a petabyte of data per month, and this has been going on for the last couple of years. With finer resolution, these datasets will get even larger.
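
How much larger is anyone's guess, but if archived output grows roughly in proportion to the number of horizontal grid points (an assumption for illustration, not a GFDL projection), the planned refinement implies a dramatic jump in monthly volume:

```python
# Rough scaling estimate under the stated assumption; not a GFDL figure.
current_pb_per_month = 0.5        # "about half a petabyte per month" at 2 degrees
refinement = (2.0 / 0.5) ** 2     # ~16x more horizontal points at 0.5 degrees
projected = current_pb_per_month * refinement
print(f"~{projected:.0f} PB/month at half-degree resolution, "
      "before any increase in output frequency or vertical levels")
```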

Currently the raw data is sent over NOAA’s research network (N-Wave) every day to be post-processed at GFDL. But as the models generate more and more data, that data tends to become stuck in place, which is why data lifecycle management is becoming a critical component of the research. This is yet another area in which HPTi is providing guidance.

The IPCC’s Fifth Assessment Report, which will include the latest simulation work from GFDL, is now underway and is scheduled for completion in 2013-2014.  The report, the research, and the data upon which it rests will be available in the public domain.
