Climate Science Triggers Torrent of Big Data Challenges

By Dawn Levy

August 15, 2012

Supercomputers at the Oak Ridge National Laboratory (ORNL) computing complex produce some of the world’s largest scientific datasets. Many are from studies using high-resolution models to evaluate climate change consequences and mitigation strategies. The Department of Energy (DOE) Office of Science’s Jaguar (the pride of the Oak Ridge Leadership Computing Facility, or OLCF), the National Science Foundation (NSF) and University of Tennessee’s Kraken (NSF’s first petascale supercomputer), and the National Oceanic and Atmospheric Administration’s Gaea (dedicated solely to climate modeling) all run climate simulations at ORNL to meet the science missions of their respective agencies.

Such simulations reveal Earth’s climatic past; a 2012 Nature article, for example, was the first to show the role carbon dioxide played in helping end the last ice age. They also hint at our climate’s future, as evidenced by the major computational support that ORNL and Lawrence Berkeley National Laboratory continue to provide to U.S. global modeling groups participating in the upcoming Fifth Assessment Report of the United Nations Intergovernmental Panel on Climate Change.

A wide variety of climate observations comes from remote sensing platforms and data archives as well. DOE’s Atmospheric Radiation Measurement facilities support global climate research with a program studying cloud formation processes and their influence on heat transfer. Other climate observation facilities include DOE’s Carbon Dioxide Information Analysis Center at ORNL and the ORNL Distributed Active Archive Center, which archives data from the National Aeronautics and Space Administration’s Earth science missions.

Researchers at the Oak Ridge Climate Change Science Institute (ORCCSI) use coupled Earth system models and observations to explore connections among atmosphere, oceans, land, and ice and to better understand the Earth system. These simulations and climate observations produce a lot of data that must be transported, analyzed, visualized, and stored.

In this interview, Galen Shipman, data-systems architect for ORNL’s Computing and Computational Sciences Directorate and the person who oversees data management at the OLCF, discusses strategies for coping with the “3 Vs” — variety, velocity, and volume — of the big data that climate science generates.

HPCwire: Why do climate simulations generate so much data?    

Galen Shipman: The I/O workloads in many climate simulations are based on saving the state of the simulation, the Earth system, for post-run analysis. Essentially, they’re writing out time-series information at predefined intervals — everything from temperature to pressure to carbon concentration, basically an entire set of concurrent variables that represent the state of the Earth system within a particular spatial region.

If you think of, say, the atmosphere, it can be gridded around the globe as well as vertically, and for each subgrid we’re saving information about the particular state of that spatial area of the simulation. In terms of data output, this generally means large numbers of processors concurrently writing out system state from a simulation platform such as Jaguar.
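To make that output pattern concrete, the following is a minimal sketch of periodic state output from a gridded model, assuming the netCDF4-python library; the variable names, grid sizes, and output interval are illustrative, not any production model’s actual schema.

```python
# Minimal sketch of periodic state output from a gridded climate model.
# Assumes the netCDF4-python package; names and sizes are illustrative only.
import numpy as np
from netCDF4 import Dataset

nlat, nlon, nlev = 180, 360, 30          # horizontal grid plus vertical levels

ds = Dataset("history.nc", "w")
ds.createDimension("time", None)          # unlimited: one record per interval
ds.createDimension("lev", nlev)
ds.createDimension("lat", nlat)
ds.createDimension("lon", nlon)
temp = ds.createVariable("temperature", "f4", ("time", "lev", "lat", "lon"))
pres = ds.createVariable("pressure",    "f4", ("time", "lev", "lat", "lon"))

for step in range(240):                   # simulated time steps
    # ... advance the model state here ...
    if step % 24 == 0:                    # predefined output interval
        t = temp.shape[0]
        temp[t] = np.random.rand(nlev, nlat, nlon).astype("f4")  # stand-in state
        pres[t] = np.random.rand(nlev, nlat, nlon).astype("f4")
ds.close()
```

In a real run, of course, many processors would each write their own subgrid concurrently rather than a single process writing the whole globe, and that concurrency is what stresses the parallel I/O system.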

Many climate simulations output to a large number of individual files over the entire simulation run. For a single run you can have many files created, which, when taken in aggregate, can exceed several terabytes. Over the past few years, we have seen these dataset sizes increase dramatically.

Climate scientists, led by ORNL’s Jim Hack, who heads ORCCSI and directs the National Center for Computational Sciences, have made significant progress in increasing the spatial and temporal resolution of climate models, along with their physical and biogeochemical complexity, and the amount of data the models generate has grown accordingly. Efforts such as increasing the frequency of sampling in simulated time are aimed at better understanding aspects of climate such as the daily cycle of the Earth’s climate. Increased spatial resolution is of particular importance when you’re looking at localized impacts of climate change.

If we’re trying to understand the impact of climate change on extreme weather phenomena, we might be interested in monitoring low-pressure areas, which can be done at a fairly coarse spatial resolution. But to identify a smaller-scale low-pressure anomaly like a hurricane, we need to go to even higher resolution, which means even more data are generated and more analysis of that data is required following the simulation.
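To make the resolution scaling concrete, here is a back-of-the-envelope calculation; the grid sizes, variable counts, and precision are illustrative assumptions, not figures from an actual model run.

```python
# Back-of-the-envelope scaling of output volume with spatial resolution.
# Illustrative numbers only; real models differ in variables and intervals.
def snapshot_bytes(nlat, nlon, nlev=30, nvars=20, bytes_per_value=4):
    """Size of one saved time slice: grid cells x variables x 4-byte floats."""
    return nlat * nlon * nlev * nvars * bytes_per_value

coarse = snapshot_bytes(nlat=180, nlon=360)    # ~1-degree grid
fine   = snapshot_bytes(nlat=720, nlon=1440)   # ~0.25-degree grid

print(f"coarse snapshot: {coarse / 1e9:.2f} GB")   # ~0.16 GB
print(f"fine snapshot:   {fine / 1e9:.2f} GB")     # ~2.49 GB, 16x larger
# Going from 1 to 0.25 degrees multiplies horizontal cell count by 16,
# so every snapshot written at a fixed simulated-time interval grows 16x;
# the stable time step also shrinks roughly 4x, so compute cost grows
# far faster still.
```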

In addition to higher-resolution climate simulations, a drive to better understand the uncertainty of a simulation result, what can naively be thought of as putting “error bars” around a simulation result, is causing a dramatic uptick in the volume and velocity of data generation. Climate scientist Peter Thornton is leading efforts at ORNL to better quantify uncertainty in climate models as part of the DOE Office of Biological and Environmental Research (BER)–funded Climate Science for a Sustainable Energy Future project.

In many of his team’s studies, a climate simulation may be run hundreds, or even thousands, of times, each with slightly different model configurations in an attempt to understand the sensitivity of the climate model to configuration changes. This large number of runs is required even when statistical methods are used to reduce the total parameter space explored. Once simulation results are created, the daunting challenge of effectively analyzing them must be addressed.
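As a schematic illustration of such an ensemble, the sketch below generates many perturbed configurations of a base model setup; the parameter names and perturbation scheme are hypothetical stand-ins for the sampling designs actually used in uncertainty quantification.

```python
# Schematic ensemble study: run the same model under many slightly
# perturbed configurations. Parameter names are hypothetical.
import random

random.seed(0)
base = {"cloud_fraction_scale": 1.0, "ocean_mixing_coeff": 0.5}

def perturb(cfg, spread=0.1):
    """Scale each parameter by a random factor within +/- spread."""
    return {k: v * random.uniform(1 - spread, 1 + spread) for k, v in cfg.items()}

# Hundreds to thousands of runs; statistical designs (e.g., Latin hypercube)
# reduce how many are needed, but the count remains large.
ensemble = [perturb(base) for _ in range(1000)]

for i, cfg in enumerate(ensemble):
    pass  # submit_run(i, cfg): launch one simulation per configuration

print(len(ensemble), "configurations generated")
```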

HPCwire: What is daunting about analysis of climate data?

Shipman: The sheer volume and variety of data that must be analyzed and understood are the biggest challenges. Today it is not uncommon for climate scientists to analyze multiple terabytes of data spanning thousands of files across a number of different climate models and model configurations in order to generate a scientific result. Another challenge climate scientists now face is the need to analyze an increasing variety of datasets — not simply simulation results, but also climate observations, often collected from fixed and mobile monitoring platforms.

The fusion of climate simulation and observation data is driven by the need to develop increasingly accurate climate models and to validate that accuracy against historical measurements of the Earth’s climate. Conducting this analysis is a tremendous challenge, often requiring weeks or even months using traditional analysis tools. Many of the traditional tools climate scientists rely on were designed and developed more than two decades ago, when the volume and variety of data that scientists must now contend with simply did not exist.
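The bottleneck with those older, largely serial tools is that each of thousands of files is processed one at a time. Below is a generic illustration of the alternative, an embarrassingly parallel per-file reduction, assuming netCDF inputs and the netCDF4 package; it shows the pattern only and is not any particular tool’s API.

```python
# Minimal sketch of parallel per-file analysis across thousands of files.
# Assumes netCDF inputs and the netCDF4 package; a generic pattern, not
# the API of any specific climate analysis tool.
import glob
from multiprocessing import Pool
import numpy as np
from netCDF4 import Dataset

def global_mean_temperature(path):
    """Reduce one file to a single scalar; runs independently per file."""
    with Dataset(path) as ds:
        return float(np.mean(ds.variables["temperature"][:]))

if __name__ == "__main__":
    files = sorted(glob.glob("run_output/*.nc"))   # hypothetical layout
    with Pool(processes=16) as pool:               # one worker per core
        means = pool.map(global_mean_temperature, files)
    print(f"time series of global means across {len(means)} files")
```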

To address this challenge, DOE BER began funding a number of projects to develop advanced tools and techniques for climate data analysis, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project, a collaboration including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, the University of Utah, Los Alamos National Laboratory, New York University, and Kitware, a company that develops a variety of visualization and analytic software. Through this project we have developed a number of parallel analysis and visualization tools specifically to address these challenges.

Similarly, we’re looking at ways of integrating this visualization and analysis toolkit within the Earth System Grid Federation, or ESGF, a federated system for managing geographically distributed climate data, to which ORNL is a primary contributor. The tools developed as a result of this research and development are used to support the entire climate science community.

While we have made good progress in addressing many of the challenges in data analysis, the geographically distributed nature of climate data, with archives of data spanning the globe, presents other challenges to this community of researchers.

HPCwire: Does the infrastructure exist to support sharing and analysis of this geographically distributed data?

Shipman: Much has been done to provide the required infrastructure to support this geographically distributed data, particularly between major DOE supercomputing facilities like the one at Lawrence Livermore National Laboratory that stores and distributes climate datasets through the Program for Climate Model Diagnosis and Intercomparison. To support the growing demands of data movement and remote analysis and visualization between major facilities at Oak Ridge, Argonne, and Lawrence Berkeley National Laboratories, the DOE Office of Advanced Scientific Computing Research began the Advanced Networking Initiative in 2009. Its goal was to demonstrate and harden the technologies required to deliver 100-gigabit connectivity between these facilities, which span the United States.

This project has now delivered the capabilities required to transition the high-speed Energy Sciences Network (ESnet) to 100-gigabit communication between these facilities. ESnet serves thousands of DOE scientists and users of DOE facilities and provides connectivity to more than 100 other networks. This base infrastructure will provide a tenfold increase in performance for data movement, remote analysis, and visualization.

Moreover, DOE BER, along with other mission partners, is continuing to make investments in the software technologies required to maintain a distributed data archive with multiple petabytes of climate data stored worldwide through the Earth System Grid Federation project. The ESGF system provides climate scientists and other stakeholders with the tools and technologies to efficiently locate and gain access to climate data of interest from any ESGF portal regardless of where the data reside. While primarily used for sharing climate data today, recent work in integrating UV-CDAT and ESGF allows users to conduct analysis on data anywhere in the ESGF distributed system directly within UV-CDAT as if the data were locally accessible.
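ESGF data nodes commonly expose data over OPeNDAP, which lets a client subset a remote dataset without downloading whole files. Below is a minimal sketch of that access pattern using the netCDF4-python library rather than UV-CDAT itself; the URL is hypothetical, and the variable name follows CMIP naming conventions.

```python
# Sketch of remote access to distributed climate data over OPeNDAP, a
# protocol ESGF data nodes commonly expose. The URL is hypothetical.
from netCDF4 import Dataset

url = "http://esgf-node.example.org/thredds/dodsC/cmip5/tas_Amon.nc"
ds = Dataset(url)                  # opens the remote dataset lazily
tas = ds.variables["tas"]          # surface air temperature (CMIP name)
january = tas[0, :, :]             # only this slice crosses the network
print(january.shape)
ds.close()
```

The design point is that subsetting happens server-side, so a scientist pulls megabytes of the slice of interest instead of the terabytes behind it.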

Further advances such as integrated remote analysis within the distributed archive are still required, however, as even with dramatic improvements in the underlying networking infrastructure, the cost of moving data is often prohibitive. It is often more efficient to simply move the analysis to where the data reside rather than moving the data to a local system and conducting the analysis.

HPCwire: What challenges loom for data analysis, especially data visualization?

Shipman: The major challenge for most visualization workloads today is data movement. Unfortunately, this challenge will become even more acute in the future. As has been discussed broadly in the HPC community, performance improvements in data movement will continue to significantly lag performance improvements in floating-point computation. That is to say, future HPC systems are likely to continue a trend of significant improvements in total floating-point performance, most notably measured via the TOP500 benchmark, while the ability to move data both within the machine and to storage will see much more modest increases. This disparity will necessitate advances in how data analysis and visualization workloads address data movement.

One promising approach is in situ analysis, in which visualization and analysis are embedded within the simulation, eliminating the need to move data from the compute platform to storage for subsequent post-processing. Unfortunately, in situ analysis is not a silver bullet, and post-processing of data from simulations is often required for exploratory visualization and analysis. We are tackling this data-movement problem through advances in analysis and visualization algorithms, parallel file systems such as Lustre, and development of advanced software technologies such as ADIOS [the Adaptable I/O System, open-source middleware for I/O].
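To illustrate the in situ pattern Shipman describes, the sketch below computes reductions inside the simulation loop so that full snapshots never touch the file system; it shows the general idea only and is not ADIOS's actual interface.

```python
# Generic illustration of in situ analysis: reductions are computed inside
# the simulation loop, so full state snapshots never hit the file system.
# This shows the pattern only; it is not ADIOS's actual interface.
import json
import numpy as np

state = np.random.rand(30, 180, 360)     # stand-in model state
reduced = []                             # small time series kept in memory

for step in range(240):
    state += 0.001 * np.random.randn(*state.shape)  # stand-in for model physics
    if step % 24 == 0:
        # Persist only a tiny summary instead of the full multi-GB snapshot.
        reduced.append({"step": step,
                        "mean": float(state.mean()),
                        "max": float(state.max())})

with open("summary.json", "w") as f:
    json.dump(reduced, f)
```

The trade-off, as noted above, is that anything not summarized during the run is gone, which is why exploratory analysis still needs post-processing of saved data.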

HPCwire: What’s the storage architecture evolving to in a parallel I/O environment?

Shipman: From a system-level architecture perspective, most parallel I/O environments have evolved to incorporate a shared parallel file system, similar to the Spider file system that serves all major compute platforms at the OLCF. I expect this trend will continue in most HPC environments as it provides improved usability, availability of all datasets on all platforms, and significantly reduced total cost of ownership over dedicated storage platforms.

At the component level, the industry is clearly trending toward the incorporation of solid-state storage technologies as increases in hard-disk-drive performance significantly lag increases in capacity and continued increases in computational performance. There is some debate as to what this storage technology will be, but in the near term, probably through 2017, NAND Flash will likely dominate.

HPCwire: What hybrid approaches to storage are possible?       

Shipman: Introducing a new layer in the storage hierarchy, something between memory and traditional rotating media, seems to be the consensus. Likely technologies include flash and, in the future, other NVRAM technologies. As improved manufacturing processes are realized for NVRAM technologies, costs will fall significantly. These storage technologies are also more tolerant of varied workloads than rotating media.

For analysis workloads, which are often read-dominant, NVRAM will likely be used as a higher-performance, large-capacity read cache, effectively expanding the application’s total memory space while providing performance characteristics similar to those of a remote memory operation. Unlike most storage systems today, however, future storage platforms may provide more explicit control of the storage hierarchy, allowing applications or middleware to explicitly manage data movement between levels of the hierarchy.
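As a toy illustration of middleware explicitly managing such a hierarchy, the sketch below fronts a slow tier with a fast LRU read cache; the interfaces are invented for illustration and do not correspond to any particular storage system.

```python
# Toy sketch of middleware managing a fast NVRAM tier as a read cache in
# front of slower rotating media. Interfaces are invented for illustration.
from collections import OrderedDict

class TieredReadCache:
    def __init__(self, capacity_blocks, slow_read):
        self.fast = OrderedDict()          # block_id -> data, in LRU order
        self.capacity = capacity_blocks
        self.slow_read = slow_read         # function reading the slow tier

    def read(self, block_id):
        if block_id in self.fast:          # hit: serve at near-memory speed
            self.fast.move_to_end(block_id)
            return self.fast[block_id]
        data = self.slow_read(block_id)    # miss: fetch from rotating media
        self.fast[block_id] = data
        if len(self.fast) > self.capacity: # evict least-recently-used block
            self.fast.popitem(last=False)
        return data

cache = TieredReadCache(capacity_blocks=4, slow_read=lambda b: f"data-{b}")
print(cache.read(0), cache.read(0))        # second read is a cache hit
```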

HPCwire: How does big data for climate relate to other challenges for big data at ORNL and beyond?

Shipman: Many of the challenges we face in supporting climate science at ORNL are similar to the three main challenges of big data — the velocity, variety, and volume of data. The velocity at which high-resolution climate simulations are capable of generating data rivals that of most computational environments of which I am aware and necessitates a scalable, high-performance I/O system.

The variety of data generated from climate science ranges from simulation datasets from a variety of global, regional, and local modeling simulation packages to remote sensing information from both ground-based assets and Earth-observing satellites. These datasets come in a variety of data formats and span a variety of metadata standards. We’re seeing similar volumes, and in some cases larger growth, in other areas of simulation, including fusion science in support of ITER.

A recent release from the White House Office of Science and Technology Policy highlighted many of the big-data challenges faced not only across DOE but also by the National Science Foundation and the Department of Defense. A number of the solutions highlighted in this report have been developed in part here at Oak Ridge National Laboratory, including the ADIOS system, the Earth System Grid Federation, the High Performance Storage System, and our work in streaming data capture and analysis through the ADARA [Accelerating Data Acquisition, Reduction, and Analysis] project. ADARA aims to develop a streaming data infrastructure allowing scientists to go from experiment to insight and result in record time at the world’s highest-energy neutron source, the Spallation Neutron Source at Oak Ridge National Laboratory.
