NOAA-ORNL Climate Research Collaboration Sets Lofty Goals for New Supercomputer

By Nicole Hemsoth

July 26, 2010

A year ago, NOAA and DOE signed an agreement calling for closer cooperation between NOAA and Oak Ridge National Laboratory. The agreement tasked ORNL with “providing research collaboration and technical support for high performance computing and data systems that will deliver improved climate data and model experiments.” Jim Rogers, director of operations for the National Center for Computational Sciences at ORNL, discusses the agreement and the goals for the Climate Modeling and Research System (CMRS), the initial supercomputer chosen for the collaborative work.

HPCwire: What are the scientific goals for CMRS? What kind of modeling resolution are you targeting? Will this allow you to add more components to the ensemble models?

Rogers: The high-level goal for this project is to develop better models for predicting climate variability and change. ORNL’s role is to provide NOAA with both the HPC resources and the collaborative support needed to extend and improve these models.

On NOAA’s current systems, the typical resolution of the coupled climate model has been limited by available computational resources to a grid increment of 200 km for the atmosphere and 100 km for the ocean. On the new Cray XE6, however, we expect NOAA scientists to quickly transition to a much higher-resolution model with a 50 km atmosphere and a 25 km ocean. And while I expect that this configuration will be the initial workhorse, NOAA is already working on a 25 km atmosphere and 10 km ocean model with better physics.

There are several things in play as we move to these higher resolution models. The first is identifying core-count sweet spots for the existing model, the second is improving the scalability of the current code so that it can effectively use larger numbers of cores, and the third is introducing a new version of the atmosphere that includes a more complete treatment of the upper-level atmospheric physics and dynamics.
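To see why these resolution jumps demand so much more machine, a rough cost-scaling rule of thumb helps: halving the horizontal grid spacing quadruples the number of grid columns, and the CFL stability condition roughly halves the allowable timestep, so cost per simulated year grows approximately as the cube of the refinement factor. The sketch below is an illustration of that rule of thumb, not NOAA's actual cost accounting; the function name and scaling exponent are assumptions for this example.

```python
def relative_cost(old_dx_km: float, new_dx_km: float) -> float:
    """Approximate cost multiplier when refining horizontal resolution.

    Two horizontal dimensions contribute (old/new)**2 more columns,
    and the CFL timestep constraint contributes another factor of
    (old/new), giving (old/new)**3 overall.
    """
    return (old_dx_km / new_dx_km) ** 3

# Moving the atmosphere from 200 km to 50 km grid spacing:
atmos = relative_cost(200, 50)   # 4x finer in each direction -> 64x cost
# Moving the ocean from 100 km to 25 km grid spacing:
ocean = relative_cost(100, 25)   # also a ~64x multiplier
print(atmos, ocean)
```

Under this crude scaling, each of the planned resolution steps is roughly a 64-fold increase in computational demand, which is why core-count sweet spots and code scalability dominate the transition plan.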

HPCwire: Who are NOAA’s research partners in this endeavor?

Rogers: This agreement specifically includes collaboration among scientists within NOAA and DOE/ORNL. Jim Hack, Director of the National Center for Computational Sciences, is working with Brian Gross and Venkatramani Balaji of NOAA/GFDL to identify and scope these collaborative efforts.

HPCwire: Why did NOAA decide to use ORNL as a host site for CMRS?

Rogers: ORNL plays a leadership role in climate change science and is a well-established HPC resource provider, with the current fastest computer system in the world. NOAA has been using a significant number of processor hours at ORNL on both the Cray XT4 and XT5 since 2008. This existing relationship provides a strong basis for the more dedicated support that they will receive with the CMRS. The arrangement allows NOAA to leverage our unique strengths as the host site for the equipment, as well as collaborate on the science side by partnering two strong climate science communities.

HPCwire: As part of its energy research mission, ORNL has been active in climate research for a long time, but the lab has really stepped up its climate work in recent years, including recruiting top research talent in this field. What’s driving this escalation?

Rogers: ORNL has definitely increased its focus on climate modeling and research. Day to day, I see growth in this area through the Oak Ridge Climate Change Science Institute. There is a lot of momentum in this area, a lot of attention from the public, and significant opportunities for fostering collaborative work in earth systems modeling.

HPCwire: Is there a “critical mass” effect from having all this climate research talent and multiple petascale supercomputers in one place?

Rogers: There is clearly an advantage to this situation.

HPCwire: Do you expect the petascale CMRS system to attract even more climate research talent to the NOAA site at ORNL?

Rogers: The priorities for use of the CMRS system will be up to NOAA management, but it’s easy to imagine how the huge increase in capability will give NOAA the flexibility to do new things and more fully engage other components of the NOAA climate change program. The opportunity to work on state-of-the-art hardware will always be a draw, especially on this Cray XE6, which offers some very attractive features that even big brother “Jaguar” cannot match, including denser, faster nodes and a higher-speed interconnect.

HPCwire: NOAA is providing ORNL with $215 million over five years for supporting the climate research work. This is federal stimulus money. How much do you expect this big funding infusion to accelerate progress in climate research?

Rogers: Only the first $73 million is ARRA [American Recovery and Reinvestment Act] money. That money has been budgeted for the acquisition, installation, operation, and support of the CMRS. Other funding sources up to the $215 million will round out many of the collaborative science projects and activities. The impact of this stimulus funding is pretty clear, though. In Year 1, the new CMRS provides a 5x increase in computational capability over NOAA’s current largest system. In the second year, the capacity quadruples to more than 1.1 petaflops. This is a huge resource, delivered in step with the scientific community’s needs.
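The capability figures quoted here imply the rest of the numbers. A back-of-envelope reading, sketched below, works backward from the Year-2 figure; note that the Year-1 and baseline values are inferred from the stated ratios, not given in the interview, and the figures are treated as peak capability.

```python
# Back-of-envelope arithmetic from the quoted ratios (inferred values,
# peak rather than sustained performance is assumed).

year2_pf = 1.1              # Year-2 CMRS capability in petaflops (quoted)
year1_pf = year2_pf / 4     # Year 2 is described as a 4x step over Year 1
baseline_pf = year1_pf / 5  # Year 1 is 5x NOAA's current largest system

print(f"Implied Year-1 CMRS capability: ~{year1_pf:.3f} PF")
print(f"Implied current NOAA baseline:  ~{baseline_pf:.3f} PF")
```

In other words, the quoted ratios place Year-1 CMRS at roughly a quarter petaflops and NOAA's current largest system in the tens of teraflops, a roughly 20x capability jump over two years.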

HPCwire: How will the increased computational power and research funding affect America’s standing in the global climate research community? Will the US be taking on a bigger share of the work for IPCC [Intergovernmental Panel on Climate Change] or other collaborative projects?

Rogers: I certainly expect the CMRS systems to be used for IPCC AR5 [Fifth Assessment Report] work.

HPCwire: Is NOAA’s climate research work always collaborative, or do you sometimes compete with other large climate centers around the world?

Rogers: Climate science is by definition a highly collaborative enterprise. I imagine that this machine acquisition will position NOAA to take on additional leadership roles in exploring questions about climate change.

HPCwire: This will bring the number of Cray petascale systems at ORNL to three. Why did you choose the Cray supercomputers for this work?

Rogers: This was the outcome of a competitive procurement that assessed a large number of factors, including technical solution and strategy, benchmarks, past performance, and total cost of ownership. Intense interest from the HPC vendors led to very good proposals. In the end, the Cray solution using the XE6 was the most competitive, demonstrating a very good fit for the high-resolution climate models, an aggressive installation and upgrade plan, and the greatest ability to deliver cycles to the NOAA climate community.

HPCwire: You’ll soon have the CMRS petascale system. What could you do with an exascale supercomputer?

Rogers: The climate modeling community has articulated plans to pursue higher-resolution models with much more realistic physics, with a goal of improving simulation fidelity. Exascale capabilities will be needed to achieve many of these challenging scientific goals. Of course, the modeling activities will need to be able to exploit a much more complex architecture to take advantage of an exascale computer, which will provide an equally challenging technical task for the climate community.
