Climate Modelers Have Insatiable Appetite for HPC

By Steve Conway

March 14, 2011

Since the dawn of high performance computing, climate modeling has been one of its most demanding domains. The hunger for computational capability is unending, as researchers work to incorporate more of nature’s complexity into their models at higher resolutions. HPCwire talked with NOAA/GFDL Deputy Director Brian Gross and Venkatramani Balaji, head of the lab’s Modeling Systems Group.

HPCwire: How important have HPC-based modeling and simulation been in increasing human understanding of climate behavior and climate change?

Brian Gross: The climate system is inherently complex, as measured by the number of processes and feedbacks between climate variables. It has interactions at all time and space scales, from minutes to millennia, and from millimeters to planet-scale. The role of HPC in addressing these inherent computational challenges, and in achieving the tremendous advances in our understanding of the Earth System, cannot be overstated.

Venkatramani Balaji: In fact, Nature listed the first ocean-atmosphere coupled model — achieved by Suki Manabe, Kirk Bryan, and their collaborators at NOAA/GFDL in 1969 — as a milestone in scientific computing. That model, run on the HPC of the ’60s, was the first to show that adding CO2 to the atmosphere changes the radiative balance so as to increase surface temperatures. HPC-based modeling is the only science-based method to project future climate change.

HPCwire: In the 1990s, US climate researchers published a paper lamenting the lack of access to the most powerful supercomputers for climate modeling, which at that time were vector systems. Has anything been lost in the transition to non-vector supercomputers?

Gross: It turns out, no. On a scientific level, US labs without vector supercomputers kept pace with European and Japanese labs with vector machines. There is no evidence in hindsight that being denied access to vector machines hurt the US labs, whether measured in terms of scientific breakthroughs, or publications, or metrics of model skill.

Balaji: This is not to say that we went through the transition with no pain! The switch from vector to distributed memory machines was certainly disruptive and required a thorough technology refresh of the models. Labs had to expend a lot of effort recoding and then verifying that the new codes were capable of reproducing proven results.

Gross: We also used the occasion to instill better software engineering practices, and I think most people will agree that we’re the better for it. The models today are more agile and more configurable. We can build more complexity into our models than we could in the ’90s because of component-based design. We are now able to include atmospheric chemistry, aerosols, and dynamic ecosystems on land and in the ocean, and we can study the complete Earth system. We couldn’t have done this very easily with models of 1990s vintage.

HPCwire: What are the biggest challenges facing the climate modeling community today?

Gross: The principal challenges we face in climate modeling today remain the same as they have for decades: our limited understanding of the way the Earth System works, how accurately we can translate what we do know into computational algorithms and numerical models, quantifying uncertainty, and efficiently running our increasingly computationally intensive climate models on the largest HPC systems in the world.

It is worth pointing out that the direction of technology today, using more processors rather than faster processors, greatly favors weak scaling over strong scaling. The consequence is that we can often execute more complex, higher-resolution models at a fixed rate, as measured by, say, model years per day.

Balaji: But it’s much more difficult to execute a given model at a faster rate. This can often impede our scientific progress, given the very long time scales associated with some climate processes, such as the global ocean circulation and long-lived greenhouse gases like carbon dioxide. We’ll return to these challenges in a minute.
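
To make the weak- versus strong-scaling distinction concrete, here is a minimal Python sketch. The throughput numbers, core counts, and the Amdahl-style serial fraction are purely hypothetical, not GFDL benchmarks; the point is only that adding cores more readily grows the model you can run at a fixed pace than it speeds up a fixed model.

```python
# Illustrative comparison of weak vs. strong scaling for a climate model,
# measured in simulated years per wall-clock day. All numbers are hypothetical.

def weak_scaling_rate(base_rate_years_per_day):
    # Weak scaling: processor count and problem size grow together, so the
    # pace stays roughly fixed while the model gets finer or more complex.
    return base_rate_years_per_day

def strong_scaling_rate(base_rate_years_per_day, cores, base_cores, serial_fraction=0.1):
    # Strong scaling: the same model on more cores. An Amdahl's-law-style
    # serial/communication fraction caps the speedup, so a given
    # configuration is hard to run much faster.
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) * base_cores / cores)
    return base_rate_years_per_day * speedup

base_rate, base_cores = 5.0, 1_000   # hypothetical: 5 simulated years/day on 1,000 cores
for cores in (1_000, 4_000, 16_000):
    print(f"{cores:>6} cores: weak ~{weak_scaling_rate(base_rate):.1f} yr/day (bigger model), "
          f"strong ~{strong_scaling_rate(base_rate, cores, base_cores):.1f} yr/day (same model)")
```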

HPCwire: In the next few years, what are the goals for increased resolution of coupled earth system models?

Gross: The question of anthropogenic climate change on the scale of the planet is settled from a purely scientific viewpoint. However, understanding the details of climate change on a regional scale is harder. We’re not yet at a point where we can attribute local or regional climate change to human actions with the same confidence.

The goal for the current generation of IPCC-class models is to see if higher resolution yields better skill on regional scales. This is not a given. As processes that are currently unresolved become resolved, their representation in models changes from “parameterized” to “simulated.”

Balaji: There are key processes — for instance mesoscale eddies in the ocean, and deep convection in the atmosphere — that will undergo this transition over the next 5-10 years. Some current problems, such as cloud-climate feedback and ocean mixing, will be solved, but new ones might emerge. But certainly cloud-resolving and ocean-eddy-resolving coupled models promise to yield qualitatively new and exciting science.

HPCwire: What are the biggest barriers to greater scalability? Is it the codes, the models, or the limitations of the known science?

Balaji: All of these are barriers, but this list is incomplete. Why are hardware and system software not on your list? Our main difficulty is that a single operation has not gotten any faster for a while, and it is likely to get slower on many-core and GPU-cluster systems. Compilers have not gotten any better at interpreting our codes for a long time, and they are even less mature on the novel architectures.

Gross: The expectation had been that a given model at a given resolution would get faster over time just by advances in technology. We’ve just had a rude awakening.

Balaji: As an aside, I’d focus on time-to-solution rather than scalability per se. We all know tricks that make models run on more processors, yet take longer to reach the same solution. We class our models as 1 year/day models, 10 year/day models, and so on. Each can be used for a different class of scientific problem.
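
A quick back-of-the-envelope calculation shows why those years-per-day classes, rather than raw processor counts, determine which scientific problems a configuration can address. The experiment lengths below are order-of-magnitude examples, not specific GFDL run plans.

```python
# Wall-clock time required for different experiment lengths at a given model
# pace. Experiment lengths are order-of-magnitude examples only.

experiments = {
    "seasonal-to-interannual forecast (1 simulated year)": 1,
    "century-scale climate projection (100 years)": 100,
    "deep-ocean / carbon-cycle spin-up (1,000 years)": 1_000,
}

for pace in (1, 10, 50):                      # simulated years per wall-clock day
    print(f"\nModel class: {pace} year(s)/day")
    for name, sim_years in experiments.items():
        print(f"  {name}: ~{sim_years / pace:g} wall-clock days")
```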

HPCwire: It seems that generational advances in computing power reduce uncertainty by enabling greater resolution, but that adding new components to coupled models, such as the carbon cycle, can offset these gains by increasing model complexity. How do you balance these choices?

Gross: Good question. Our feeling is that the complexity comes first. When we feel we’ve reached a level of understanding of some process — say aerosol-cloud interactions, or dynamic vegetation — it gets added to the models, and a new realm of scientific problems opens up. We then look at what hardware we can get with our computing budget, and that tells us what resolutions we can use while achieving the target model years/day pace necessary for useful science.
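
One way to read that "complexity first, then resolution" recipe is as a simple cost calculation. A common rule of thumb, used here only as an assumption for illustration, is that halving the horizontal grid spacing costs roughly eight times as much per simulated year: four times as many columns, plus a time step roughly half as long from the CFL condition.

```python
# Rough cost model: halving grid spacing => ~4x the columns and ~2x the time
# steps (CFL), so ~8x the cost per simulated year. Budget and complexity
# factors below are hypothetical, purely for illustration.

def relative_cost(grid_spacing_km, reference_km=100.0):
    # Cost per simulated year relative to a reference resolution.
    return (reference_km / grid_spacing_km) ** 3

budget_factor = 64.0        # hypothetical: 64x the sustained compute of the reference machine
complexity_overhead = 2.0   # hypothetical: added chemistry/biogeochemistry doubles the cost

affordable = budget_factor / complexity_overhead
for dx_km in (100, 50, 25, 12.5):
    cost = relative_cost(dx_km)
    verdict = "fits" if cost <= affordable else "exceeds budget"
    print(f"{dx_km:>6} km grid: ~{cost:6.0f}x reference cost -> {verdict}")
```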

HPCwire: How well do the atmospheric, oceanic and other components of coupled models and ensemble models “talk to” each other? How compatible are the physics and the scales in these models?

Balaji: We typically change one component at a time, so that we can do careful comparisons with previous results and trace differences back to a single component. But resolutions stay close, usually within a factor of two or so.

Not to say that the grids are the same. Atmosphere and ocean modelers have taken different routes to avoiding grid singularities and other numerical issues. Coupling technology is stable and mature. There are good, efficient, scalable, conservative coupling and regridding methods, but there’s always an open question as to whether they’ll keep scaling as we add resolution. Also, we’re not well situated to take advantage of AMR [adaptive mesh refinement], and so on. These methods are not much in use in the climate field today.
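
For readers unfamiliar with the coupling step, the sketch below shows the idea behind first-order conservative regridding in one dimension: source-cell values are averaged onto each destination cell with overlap-length weights, so the integral of the exchanged quantity is preserved. Production couplers do this with spherical polygon overlaps on full atmosphere and ocean grids; this is only a minimal illustration.

```python
# Minimal 1-D illustration of first-order conservative regridding: average a
# source field onto a destination grid with overlap-length weights so that the
# integrated quantity is conserved.

def conservative_regrid(src_edges, src_vals, dst_edges):
    dst_vals = []
    for j in range(len(dst_edges) - 1):
        d0, d1 = dst_edges[j], dst_edges[j + 1]
        total = weight = 0.0
        for i, v in enumerate(src_vals):
            s0, s1 = src_edges[i], src_edges[i + 1]
            overlap = max(0.0, min(d1, s1) - max(d0, s0))
            total += v * overlap
            weight += overlap
        dst_vals.append(total / weight if weight > 0.0 else 0.0)
    return dst_vals

src_edges, src_vals = [0.0, 0.25, 0.5, 0.75, 1.0], [1.0, 2.0, 4.0, 3.0]
dst_edges = [0.0, 1 / 3, 2 / 3, 1.0]
dst_vals = conservative_regrid(src_edges, src_vals, dst_edges)

# Conservation check: the domain integral is unchanged by the regridding.
print(dst_vals)
print(sum(v * 0.25 for v in src_vals), sum(v * (1 / 3) for v in dst_vals))
```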

HPCwire: The “Gaea” Cray XT6 supercomputer at ORNL is slated to grow into a 720-teraflop Cray XE6 system in mid-2011, and later to expand to 1.1 petaflops. What will these increases make possible?

Gross: Gaea puts within reach the eddy-resolving ocean models and cloud-resolving models we just spoke about. Separately, we’re already there. We believe we’ll be doing useful science with these models in coupled mode shortly after we get the full petaflop machine. Okay, maybe not cloud-resolving, but tropical storm-resolving.

Balaji: Additionally, we’re exploring predictability issues with our models. How sensitive are predictions to initial conditions? These studies explore probability distributions across ensembles of runs initialized with an advanced coupled data assimilation system. These will also stress the capacity of the machine.

Putting these two together, to study how predictability changes as a function of resolution, we could use up these cycles many times over. And I haven’t even mentioned the Earth System models, which apply this unique resource to substantially increase complexity, adding in atmospheric chemistry, fully interactive land-based ecosystem dynamics, and carbon, nitrogen, and other biogeochemical cycles.
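
The predictability studies mentioned above examine how an ensemble of runs started from slightly different initial conditions spreads out over time. The toy sketch below uses the classic Lorenz-63 system, a stand-in rather than any GFDL model, to show the basic behavior: tiny initial perturbations grow until the ensemble spread saturates, which is why initialized prediction is treated as a probabilistic exercise.

```python
# Toy illustration of initial-condition sensitivity with the Lorenz-63 system
# (a stand-in for a climate model, not GFDL code): an ensemble of runs with
# tiny initial perturbations diverges over time, so predictions are best
# expressed as probability distributions across the ensemble.
import random

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

random.seed(0)
base = (1.0, 1.0, 20.0)
ensemble = [(base[0] + random.gauss(0.0, 1e-6), base[1], base[2]) for _ in range(20)]

for step in range(1, 4001):
    ensemble = [lorenz_step(member) for member in ensemble]
    if step % 1000 == 0:
        mean_x = sum(m[0] for m in ensemble) / len(ensemble)
        spread = (sum((m[0] - mean_x) ** 2 for m in ensemble) / len(ensemble)) ** 0.5
        print(f"t = {step * 0.005:4.1f}: ensemble spread in x ~ {spread:.2e}")
```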

HPCwire: What elements of this supercomputer are especially important for weather and climate modeling?

Gross: We hope we’ve made it clear that we can now envision an unprecedented set of exciting science that was out of our reach before. The Cray SeaStar interconnect allows extraordinary levels of scaling, and we’re looking forward to seeing results on the Gemini upgrade, which should be even better.

HPCwire: How much of NOAA’s focus is on modeling weather and climate phenomena in the US, versus other areas of the world?

Balaji: All of our models are global, and the processes and feedbacks are linked on the planetary scale. It’s generally found that to get the climate right over the US, you do need to worry about clouds off the coast of Peru, or you need to get North Atlantic sea surface temperatures right to simulate drought in the Sahel, to take some prominent examples of global linkages. Some short runs are undertaken with regional models, but the fundamental basis of all research and operations is global models.

Gross: We are now configuring some variable-resolution models, such as the stretched cubed-sphere grid, where resolution can be focused on the US, for instance.

HPCwire: There’s considerable pressure to reduce federal spending in every area possible. Why should strong funding for weather and climate modeling continue?

Gross: Just check out NOAA’s Next-Generation Strategic Plan. Climate change has already had profound implications for society, and climate model predictions and projections foretell a host of additional significant impacts both nationally and internationally.

We need the best possible science-based information on future climate so that decision-makers can develop and evaluate options that mitigate the human causes of climate change and allow society to adapt to foreseeable climate impacts. This information can only be obtained through state-of-the-science climate models. The cost of the associated HPC is trivial compared to the social gains from mitigation and adaptation.
