HPC Experts Provide Glue Between Supercomputers and Climate Science

By Michael Feldman

November 30, 2011

Some of the most important supercomputing models aimed at climate change research have been developed by the National Oceanic and Atmospheric Administration (NOAA), and in particular by its Geophysical Fluid Dynamics Laboratory (GFDL) at Princeton University. The GFDL researchers are experts in climate science, but like many scientists, they are often less adept with the vagaries of supercomputing technology. That’s where HPTi comes in.

HPTi (High Performance Technologies Inc.) was bought by Dynamics Research Corporation (DRC) in July, but maintains its autonomy and mission as a federal contractor for high-end technology support. The company’s strength lies in its HPC expertise and its ability to apply computational and scientific research to its clients’ applications. For NOAA, HPTi provides high performance computing know-how, consulting, and training. As part of that contract, HPTi supports the GFDL climate work, helping researchers there upgrade their climate models as well as providing guidance on future supercomputing hardware and software tools.

A major reason the GFDL work is so important is that its results are incorporated into climate assessments composed by the Intergovernmental Panel on Climate Change (IPCC). Although the IPCC does use results from climate research conducted at other labs, the GFDL models are central to its assessments. And the reports themselves have become the de facto standard for climate policymakers, scientists, the press, and the public.

The niche HPTi has carved out with GFDL has allowed researchers at the lab to concentrate on the physics of the models, leaving the nitty-gritty of HPC to the HPTi staff of computational scientists, software developers, systems support people, and other consultants. Because the company provides a lot of the computational glue for the researchers, HPTi tends to maintain long-term relationships (and contracts) with agencies like NOAA. In this case, the company has been supporting the GFDL climate research effort since 2008.

The continuity of involvement is important. The HPC hardware for the climate work gets upgraded every few years, requiring a reassessment of the software as well as the software development tools. According to William Cooke, a senior associate who works with the climate modeling team at GFDL, the HPC systems used to run the models have changed dramatically over the last four years.

Cooke says that as recently as a few years ago, GFDL was employing an 8,000-core SGI Altix supercomputer with a shared-memory architecture. A couple of years ago, they moved to a Cray XT6 machine with 30,000-plus cores. In the next few months, they will upgrade that system to an XE6, adding 78,000 more cores in the process. When installed, that system will deliver a peak petaflop of computational horsepower.

Ideally the scientists would like to just recompile their application software and run it on the new machine, but in practice, that’s not what happens. The hardware upgrades, especially the greatly increased core counts, necessitate that the climate models be modified if they are to take advantage of the additional computational power.

The additional power also allows the researchers to consider adding extra features, such as atmospheric chemistry, CO2 feedback, phytoplankton blooms, more detailed landforms and so on. But more directly, the extra cores can be used to increase the fidelity of the existing models.

For example, the current climate models use a two-degree square resolution for the atmosphere and land and a one-degree resolution for the ocean and ice. To get more fine-grained results, the scientists would like to get the atmosphere/land model down to half a degree or better and the ocean/ice model to at least a quarter of a degree. The resulting simulations will be better at picking up smaller-scale effects like hurricane activity and the intensity of regional rainfall or drought.
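
To get a feel for what that refinement costs, here is a rough back-of-the-envelope sketch in Python. It simply counts horizontal cells on an idealized latitude-longitude mesh at the resolutions cited above; real GFDL grids are more sophisticated, and finer grids also force smaller timesteps, so the actual computational cost grows even faster than this.

    def horizontal_cells(resolution_deg):
        """Cells on an idealized lat-lon grid at the given spacing (degrees)."""
        return round(360 / resolution_deg) * round(180 / resolution_deg)

    # Current vs. target resolutions cited in the article
    for name, now, target in [("atmosphere/land", 2.0, 0.5),
                              ("ocean/ice", 1.0, 0.25)]:
        growth = horizontal_cells(target) / horizontal_cells(now)
        print(f"{name}: {horizontal_cells(now):,} -> "
              f"{horizontal_cells(target):,} cells ({growth:.0f}x more)")

For both component pairs, the planned refinement multiplies the horizontal grid point count by roughly 16, which is where those extra Cray cores get spent.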

To make it all work, though, the models need to be stitched together, which takes a special piece of software called the coupler. According to Cooke, that’s another critical component that HPTi has been spending a lot of time on. And in this area, he says, the increased core count that came with the Cray supercomputer forced a rewrite of the underlying algorithms. The new version not only enabled the coupled model to run on over 10,000 cores, but also cut the simulation time in half.
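
For readers unfamiliar with the idea, the sketch below shows the general shape of a coupler: independent component models advance their own state, and the coupler periodically exchanges boundary fields (such as surface fluxes) between them. This is a deliberately simplified toy illustration, not GFDL's actual coupler; the component names, fields, and stand-in physics are invented for the example.

    class Component:
        """A toy climate component that advances its own state."""
        def __init__(self, name, state):
            self.name = name
            self.state = state          # e.g., a mean surface temperature (K)
            self.boundary_input = 0.0   # flux received from the other component

        def step(self, dt):
            # Trivial stand-in physics: relax toward the incoming flux.
            self.state += dt * 0.1 * (self.boundary_input - 0.05 * self.state)

        def boundary_output(self):
            # The field this component hands to the coupler.
            return 0.05 * self.state

    def run_coupled(atmos, ocean, n_coupling_steps, dt=1.0, substeps=4):
        """Advance both components, exchanging boundary fields each step."""
        for _ in range(n_coupling_steps):
            # Exchange: each component sees the other's boundary field.
            atmos.boundary_input = ocean.boundary_output()
            ocean.boundary_input = atmos.boundary_output()
            # Components then advance independently between exchanges
            # (concurrently, on separate processor sets, in a real model).
            for _ in range(substeps):
                atmos.step(dt / substeps)
                ocean.step(dt / substeps)

    atmos = Component("atmosphere/land", state=288.0)
    ocean = Component("ocean/ice", state=285.0)
    run_coupled(atmos, ocean, n_coupling_steps=10)
    print(round(atmos.state, 2), round(ocean.state, 2))

On a real machine the components run simultaneously on separate sets of processors, and how efficiently that exchange scales across tens of thousands of cores is precisely what the rewrite had to address.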

That’s significant, given the amount of computer time devoted to these climate models. Running a 20- to 30-year simulation takes about a week on the current system, but forecasting hundreds of model-years can tie up the same machine for up to six months. The scaled-up software translates into more runs for the researchers, allowing them to refine their results and create more “what if” scenarios.
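
The arithmetic behind that payoff is easy to check against the article's round numbers (which are approximate, so treat the output as an estimate rather than a measured figure):

    # Throughput implied by the figures above (all values approximate).
    years_per_week = 25             # a 20- to 30-year run takes about a week
    campaign_years = 500            # "hundreds of model-years"

    weeks_before = campaign_years / years_per_week
    weeks_after = weeks_before / 2  # coupler rewrite cut simulation time in half

    print(f"before rewrite: ~{weeks_before:.0f} weeks "
          f"(~{weeks_before / 4.3:.0f} months)")
    print(f"after rewrite:  ~{weeks_after:.0f} weeks "
          f"(~{weeks_after / 4.3:.0f} months)")

A 500-model-year campaign works out to roughly five months of machine time before the rewrite, consistent with the "up to six months" figure, and about half that afterward.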

At the back end of the simulation, the computation turns into a typical big data problem. According to Cooke, even at two degrees of resolution, the models generate about half a petabyte per month, and this has been going on for the last couple of years. With finer resolution, these datasets will get even larger.
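
How much larger is a matter of simple scaling. As a crude estimate (assuming output volume grows in proportion to horizontal grid points, which the earlier sketch put at about 16x for the planned refinement; in practice researchers thin, subsample, or compress output, so the real figure would likely be lower):

    # Crude scaling assumption: output volume ~ horizontal grid points.
    pb_per_month_now = 0.5   # stated output at two-degree resolution (PB/month)
    grid_growth = 16         # ~16x more cells at the target resolutions

    print(f"projected output: ~{pb_per_month_now * grid_growth:.0f} PB/month")

Even if the true multiplier is a fraction of that, the trend is clear, and it explains the data management concerns described next.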

Currently the raw data is sent over NOAA’s research network (N-Wave) every day to be post-processed at GFDL. But as the models generate more and more data, moving it becomes impractical and the data tends to get stuck in place, which is why data lifecycle management is becoming a critical component of the research. This is yet another area in which HPTi is providing guidance.

The IPCC’s Fifth Assessment Report, which will include the latest simulation work from GFDL, is now underway and is scheduled for completion in 2013-2014. The report, the research, and the data upon which it rests will be available in the public domain.
