International Project Readies Climate Models For Exascale Era

By Michael Feldman

May 12, 2011

However well-meaning, the efforts of individual nations to curb climate change will always fall short. Given that climate does not respect national borders, global cooperation will be the key to any solution. While international political cooperation to deal with the issue has been frustratingly slow, at least one aspect of the problem is now getting some global focus: climate modeling.

The first international effort to bring climate simulation software onto next-generation exascale platforms got underway earlier this spring. The project, named Enabling Climate Simulation (ECS) at Extreme Scale, is funded by the G8 Research Councils Initiative on Multilateral Research and brings together some of the heavyweight organizations in climate research and computer science, not to mention some of the top supercomputers on the planet.

The project grew out of the ongoing collaboration between the University of Illinois at Urbana-Champaign (UIUC) and the French National Institute for Research in Computer Science and Control (INRIA) through their Joint Laboratory for Petascale Computing, and it benefits from the support of NCSA, which will provide access to the upcoming multi-petaflop Blue Waters system.

In a nutshell, the objective of the G8 ECS project is to investigate how to run climate simulations efficiently on future exascale systems and get correct results. It will focus on three main topics: (1) how to complete simulations with correct results despite frequent system failures; (2) how to exploit hierarchical computers with hardware accelerators close to their peak performance; and (3) how to run efficient simulations with one billion threads. The project also aims to educate a new generation of climate and computer scientists in techniques for high performance computing at extreme scale.

The team is led by UIUC’s Marc Snir (project director) and INRIA’s Franck Cappello (associate director). It gathers researchers from five of the G8 nations: the US (University of Illinois at Urbana-Champaign, University of Tennessee and the National Center for Atmospheric Research), France (INRIA), Germany (German Research School for Simulation Sciences), Japan (Tokyo Tech and University of Tsukuba) and Canada (University of Victoria), as well as Spain (Barcelona Supercomputing Center).

HPCwire had the opportunity to ask project director Marc Snir and atmospheric scientist Don Wuebbles at UIUC, along with INRIA’s Franck Cappello, about the particulars of the G8 ECS effort and to get some perspective on what it means for the climate research and computer science communities.

HPCwire: How do the current climate models that are being run on terascale and petascale systems fall short?

Don Wuebbles: There is a strong need to run global climate models with detailed treatments of atmospheric, land, ocean, and biospheric processes at very high resolution. The newest generation of climate models that can be run on petascale computers is able to reach horizontal resolutions as fine as about 13 kilometers. Such a capability allows many relevant processes to be treated without the severe approximations and parameterizations found in the models used in previous climate assessments.

As an example, it is now known that ocean models need to be run at roughly a tenth of a degree, or about 10 kilometers horizontal resolution, in order to adequately represent ocean eddy processes. Even on a petascale machine, only a limited number of runs can be done with the new high-resolution models. An exascale machine will allow for even higher resolution as new dynamical cores are developed. Even more important, though, is that ensembles of climate analyses extending over many hundreds of years can be run, allowing better representation of natural variability in the climate system.

In addition, exascale computing will allow for well-characterized studies of the uncertainties in modeling of the climate system that are impossible on current computer systems because of the extensive resources required.
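To put those resolution figures in perspective, here is a rough back-of-the-envelope sketch of how the horizontal cell count and compute cost grow as the grid spacing shrinks. The surface-area value and the cubic cost scaling (finer grids also force shorter time steps) are simplifying assumptions for illustration, not numbers taken from the models Wuebbles describes.

```python
# Back-of-the-envelope arithmetic for horizontal grid resolution
# (illustrative assumptions, not a model configuration).
EARTH_SURFACE_KM2 = 5.1e8          # approximate surface area of the Earth

def horizontal_cells(dx_km):
    """Rough number of horizontal grid cells at spacing dx_km."""
    return EARTH_SURFACE_KM2 / dx_km ** 2

for dx in (100, 25, 13, 10):
    cells = horizontal_cells(dx)
    # Halving dx quadruples the cell count and also halves the stable
    # time step (CFL), so total work grows roughly like dx**-3.
    relative_cost = (100 / dx) ** 3
    print(f"{dx:>4} km grid: ~{cells:.1e} cells, ~{relative_cost:,.0f}x the 100 km cost")
```

Under these assumptions, a 10 km grid has roughly a hundred times the cells of a 100 km grid and closer to a thousand times the total cost, which is why long ensembles at such resolutions are out of reach below exascale.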

HPCwire: Will the ECS effort be able to leverage any of the work done by the International Exascale Software Project (IESP)?

Marc Snir: Many partners in the project are active participants in IESP, whether as its leader, members of its executive committee or experts. The research program has been defined taking the IESP results into account. IESP’s work was instrumental in clarifying the challenges and defining the research scope of the three main topics of our ECS project. Our project also closely followed the discussions within the European Exascale Software Initiative (EESI) and in Japan, where several G8 ECS partners play leading roles. IESP was instrumental in motivating the RFP that was issued jointly by seven of the G8 countries. However, one should remember that IESP established a roadmap; new collaborations are needed to implement it. The program that funds us and five other projects is a (very modest) first step in this direction.

HPCwire: What kinds of assumptions will have to be made about the future exascale systems to redesign the software?

Franck Cappello: We tried to make reasonable assumptions based on the current state of the art, the projections made in the exascale preparation reports, and discussions with hardware developers. These assumptions essentially follow the ones considered in IESP. Exascale systems are likely to have hybrid (SIMD plus sequential) cores, hundreds of cores per chip, many chips per node and deep memory hierarchies. Another important element is the uncertainty about system MTBF predictions; this will depend essentially on the level of masking provided by the hardware.
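To illustrate why the MTBF question looms so large, a simple calculation under the common assumption of independent, exponentially distributed node failures divides the per-node MTBF by the node count. The node counts and failure rate below are placeholder assumptions, not projections from the project.

```python
# Why the system MTBF is so uncertain at scale: under the assumption of
# independent, exponentially distributed node failures, the system MTBF
# shrinks in proportion to the node count. Numbers are placeholders.
node_mtbf_hours = 10 * 365 * 24        # suppose each node fails once per decade

for nodes in (10_000, 100_000, 1_000_000):
    system_mtbf_minutes = node_mtbf_hours / nodes * 60
    print(f"{nodes:>9,} nodes -> system MTBF ~ {system_mtbf_minutes:7.1f} minutes")
```

Under these assumptions, a machine with a million nodes, each failing once a decade, would see a failure every few minutes unless the hardware masks many of them, which is exactly the uncertainty Cappello points to.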

A key choice in our project was to test our research ideas on a significant variety of available HPC systems: Blue Waters, Blue Gene P and Q, Tsubame2, the K machine in Kobe and Marenostrum2. We believe that what we learn by testing our improvements on these machines will help us better prepare climate codes for exascale.

HPCwire: What kinds of changes to today’s climate simulations do you anticipate in order to bring this software into the exascale realm?

Cappello: Our project focuses on three key issues: system level scalability, node level performance and resilience. No existing climate model scales to the order of a million cores, so studying system level scalability is critical. The main research driver is to preserve locality, since strong locality will be crucial for performance. We shall explore three key areas: topology- and computation-intensity-aware mapping of simulation processes to the system, communication-computation overlap, and the use of asynchronous collective communications.
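As a minimal sketch of the communication-computation overlap idea, the fragment below posts a nonblocking halo exchange with mpi4py, advances the interior work that needs no remote data, and only then waits for the boundary values. The array sizes and neighbor pattern are illustrative assumptions, not taken from any climate code.

```python
# Minimal sketch of communication-computation overlap with nonblocking MPI
# calls via mpi4py (illustrative only).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

local = np.random.rand(1_000_000)          # data owned by this rank
halo_send = local[:1000].copy()            # boundary values to share
halo_recv = np.empty_like(halo_send)

# Start the halo exchange without waiting for it to complete.
reqs = [comm.Isend(halo_send, dest=left),
        comm.Irecv(halo_recv, source=right)]

# Overlap: update the interior, which needs no remote data, while the
# messages are in flight.
interior = np.sin(local[1000:]).sum()

# Only now wait for the halo, then finish the boundary work.
MPI.Request.Waitall(reqs)
boundary = np.sin(halo_recv).sum()
```

Asynchronous collectives follow the same pattern, with the nonblocking point-to-point calls replaced by nonblocking reductions or gathers.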

Concerning node level performance, we shall explore modeling and auto-tuning/scheduling of intra-node heterogeneity with massive numbers of cores, for example, GPUs; exploiting locality and latency hiding extensively to mitigate the performance impact of intra-node traffic; and studying task parallelism for the physics modules in the atmosphere model.
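In the same spirit, here is a tiny illustration of the auto-tuning idea: time one kernel over a few candidate block sizes and keep the fastest. The kernel and block sizes are arbitrary placeholders, far simpler than anything in a climate model, but the search-and-select pattern is the point.

```python
# Tiny auto-tuning sketch (illustrative): measure a kernel at several
# candidate block sizes and keep the fastest configuration.
import time
import numpy as np

def blocked_sum(a, block):
    """Sum an array in chunks of the given block size."""
    return sum(a[i:i + block].sum() for i in range(0, a.size, block))

a = np.random.rand(4_000_000)
timings = {}
for block in (1_024, 8_192, 65_536, 524_288):
    t0 = time.perf_counter()
    blocked_sum(a, block)
    timings[block] = time.perf_counter() - t0

best = min(timings, key=timings.get)
print(f"best block size: {best} ({timings[best]:.4f} s)")
```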

ECS will address resilience through multiple complementary approaches, including resilient climate simulation algorithms, new programming extensions for resilience, and new fault-tolerant protocols for uncoordinated checkpointing and partial restart. These three approaches can be viewed as three levels of failure management, each level being triggered when the previous one is not enough to recover the execution.
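The escalation logic can be pictured with a short sketch; every name in it is a hypothetical placeholder standing in for the project's actual mechanisms.

```python
# Sketch of three-level failure management: try the cheapest recovery
# first and escalate only when it is not enough. Hypothetical names only.

class RecoveryFailed(Exception):
    """Raised when one recovery level cannot restore a consistent state."""

def algorithmic_recovery(state):
    # Level 1: the numerical algorithm reconstructs lost data itself.
    raise RecoveryFailed            # pretend this level is not sufficient here

def local_partial_restart(state):
    # Level 2: restart only the failed processes from uncoordinated,
    # per-process checkpoints.
    return dict(state, restored_from="local checkpoint")

def global_restart(state):
    # Level 3: fall back to a coordinated, whole-application restart.
    return dict(state, restored_from="global checkpoint")

def recover(state):
    for level in (algorithmic_recovery, local_partial_restart, global_restart):
        try:
            return level(state)
        except RecoveryFailed:
            continue                # escalate to the next level
    raise RuntimeError("all recovery levels exhausted")

print(recover({"timestep": 1200}))
```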

Our work is by no means a full solution to the problem of exascale climate simulations. New algorithms will be needed. There is another G8 project that looks at algorithm changes to enhance scalability.

New programming models may be needed to better support fine-grain communication and load balancing. Some of us are involved in other projects that focus on this problem. However, our work is, to a large extent, agnostic on these issues.

HPCwire: By the time the first exascale systems appear in 2018 to 2020, climate change will almost certainly be much further along than it is now. Assuming we’re able to move the software onto these exascale platforms and obtain a much more accurate representation of the climate system, what will policy makers be able to do with the results?

Snir: I suspect that all participants in our project believe that the time to act on global warming is now, not ten years from now. The unfortunate situation is that we seem incapable of radical action, for a variety of reasons. It is hard to have international action when any individual country will be better served by shirking its duties — the prisoner’s paradox — and it is hard to act when the cost of action is immediate and the reward is far in the future.

As unfortunate as this is, we might have to think of mitigation, rather than remediation. More accurate simulations will decrease the existing uncertainty about the rate of global warming and its effects, and will be needed to assess the impact of unmitigated climate change and the effect of various mitigation actions. Current simulations use 100 km grids. At that scale, California is represented by a few points, with no discrimination between the Coast Range and the Central Valley, or between the Coast Range and the Sacramento-San Joaquin Delta. Clearly, global warming will have very different effects on these different geographies. With better simulations, each House member will know how his or her district will be impacted.

HPCwire: How much funding is available for this work and over what time period? Is each country contributing?

Cappello: This three-year project receives G8 coordinated funding from the Natural Sciences and Engineering Research Council of Canada (NSERC), French National Research Agency (ANR), German Research Foundation (DFG), Japan Society for the Promotion of Science (JSPS) and the National Science Foundation (NSF). This project, together with five other projects, was funded as part of the G8 Research Councils Initiative on Multilateral Research, Interdisciplinary Program on Application Software towards Exascale Computing for Global Scale Issues.

This is the first initiative of its kind to foster broad international collaboration on the research needed to enable effective use of future exascale platforms. The total funding for this initiative is modest, about 10 million euros over 3 years, spread over 6 projects.

HPCwire: Is that enough money to meet the goals of the project? Do you anticipate follow-on funding?

Snir: The project has received enough money to fund the research phase and to develop separate prototypes for the three main topics. Our focus is on understanding the limitations of current codes and developing a methodology for making future codes more performant and more resilient. The development of those future codes will require significantly higher funding. We expect to collaborate with other teams that are continuing to improve climate codes, and to seek future funding to continue our work as new codes are developed.
