Climate Science Triggers Torrent of Big Data Challenges

By Dawn Levy

August 15, 2012

Supercomputers at the Oak Ridge National Laboratory (ORNL) computing complex produce some of the world’s largest scientific datasets. Many are from studies using high-resolution models to evaluate climate change consequences and mitigation strategies. The Department of Energy (DOE) Office of Science’s Jaguar (the pride of the Oak Ridge Leadership Computing Facility, or OLCF), the National Science Foundation (NSF)/University of Tennessee’s Kraken (NSF’s first petascale supercomputer), and the National Oceanic and Atmospheric Administration’s Gaea (dedicated solely to climate modeling) all run climate simulations at ORNL to meet the science missions of their respective agencies.

Such simulations reveal Earth’s climatic past; a 2012 Nature article, for example, was the first to show the role carbon dioxide played in helping end the last ice age. They also hint at our climate’s future, as evidenced by the major computational support that ORNL and Lawrence Berkeley National Laboratory continue to provide to U.S. global modeling groups participating in the upcoming Fifth Assessment Report of the United Nations Intergovernmental Panel on Climate Change.

A wide variety of climate observations comes from remote sensing platforms such as DOE’s Atmospheric Radiation Measurement facilities, which support global climate research with a program studying cloud formation processes and their influence on heat transfer, and from other climate observation facilities such as DOE’s Carbon Dioxide Information Analysis Center at ORNL and the ORNL Distributed Active Archive Center, which archives data from the National Aeronautics and Space Administration’s Earth science missions.

Researchers at the Oak Ridge Climate Change Science Institute (ORCCSI) use coupled Earth system models and observations to explore connections among atmosphere, oceans, land, and ice and to better understand the Earth system. These simulations and climate observations produce a lot of data that must be transported, analyzed, visualized, and stored.

In this interview, Galen Shipman, data-systems architect for ORNL’s Computing and Computational Sciences Directorate and the person who oversees data management at the OLCF, discusses strategies for coping with the “3 Vs” — variety, velocity, and volume — of the big data that climate science generates.

HPCwire: Why do climate simulations generate so much data?    

Galen Shipman: The I/O workloads in many climate simulations are based on saving the state of the simulation, the Earth system, for post analysis. Essentially, they’re writing out time series information at predefined intervals—everything from temperature to pressure to carbon concentration, basically an entire set of concurrent variables that represent the state of the Earth system within a particular spatial region.

If you think of, say, the atmosphere, it can be gridded around the globe as well as vertically, and for each subgrid we’re saving information about the particular state of that spatial area of the simulation. In terms of data output, this generally means large numbers of processors concurrently writing out system state from a simulation platform such as Jaguar.
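To make that output pattern concrete, here is a minimal Python sketch of a common file-per-process approach, in which each rank periodically appends its own subdomain’s state variables to its own NetCDF file. The grid sizes, variable names, and output cadence are illustrative assumptions, not OLCF production code.

```python
# Minimal, illustrative sketch (not OLCF code) of file-per-process climate output:
# one rank appends the state of its spatial subdomain at each predefined interval.
import numpy as np
from netCDF4 import Dataset

nlev, nlat, nlon = 30, 64, 128                     # this rank's subgrid (hypothetical)
fields = ("temperature", "pressure", "carbon_concentration")

def advance_one_interval():
    """Stand-in for the model's time stepping; returns synthetic state fields."""
    return {name: np.random.rand(nlev, nlat, nlon).astype("f4") for name in fields}

ds = Dataset("state_rank0042.nc", "w", format="NETCDF4")
ds.createDimension("time", None)                   # unlimited: one record per interval
for dim, size in (("lev", nlev), ("lat", nlat), ("lon", nlon)):
    ds.createDimension(dim, size)
ncvars = {name: ds.createVariable(name, "f4", ("time", "lev", "lat", "lon"))
          for name in fields}

for step in range(10):                             # ten predefined output intervals
    state = advance_one_interval()
    for name in fields:
        ncvars[name][step, :, :, :] = state[name]  # append this interval's snapshot
ds.close()
```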

Many climate simulations output to a large number of individual files over the entire simulation run. For a single run you can have many files created, which, when taken in aggregate, can exceed several terabytes. Over the past few years, we have seen these dataset sizes increase dramatically.

Climate scientists, led by ORNL’s Jim Hack, who heads ORCCSI and directs the National Center for Computational Sciences, have made significant progress in increasing the spatial and temporal resolution of climate models, along with their physical and biogeochemical complexity, resulting in significant increases in the amount of data the models generate. Efforts such as increasing the frequency of sampling in simulated time are aimed at better understanding aspects of climate such as the Earth’s daily climate cycle. Increased spatial resolution is of particular importance when you’re looking at localized impacts of climate change.

If we’re trying to understand the impact of climate change on extreme weather phenomena, we might be interested in monitoring low-pressure areas, which can be done at a fairly coarse spatial resolution. But to identify a smaller-scale low-pressure anomaly such as a hurricane, we need to go to even higher resolution, which means even more data are generated, with more analysis required of that data following the simulation.
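A back-of-envelope calculation, using assumed round numbers rather than any particular model’s configuration, shows how quickly snapshot size and total output grow as resolution increases:

```python
# Illustrative arithmetic only: assumed grid sizes, variable counts, and output cadence.
def snapshot_bytes(nlon, nlat, nlev, nvars, bytes_per_value=4):
    return nlon * nlat * nlev * nvars * bytes_per_value

coarse = snapshot_bytes(nlon=288,  nlat=192, nlev=30, nvars=50)   # ~1-degree grid
fine   = snapshot_bytes(nlon=1152, nlat=768, nlev=30, nvars=50)   # ~0.25-degree grid

print(f"coarse snapshot: {coarse / 1e9:.1f} GB")   # ~0.3 GB
print(f"fine snapshot:   {fine / 1e9:.1f} GB")     # ~5.3 GB
# Writing the fine grid 4x per simulated day for 100 simulated years:
print(f"fine run total:  {fine * 4 * 365 * 100 / 1e12:.0f} TB")   # hundreds of TB
```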

In addition to higher-resolution climate simulations, a drive to better understand the uncertainty of a simulation result, what can naively be thought of as putting “error bars” around a simulation result, is causing a dramatic uptick in the volume and velocity of data generation. Climate scientist Peter Thornton is leading efforts at ORNL to better quantify uncertainty in climate models as part of the DOE Office of Biological and Environmental Research (BER)–funded Climate Science for a Sustainable Energy Future project.

In many of his team’s studies, a climate simulation may be run hundreds, or even thousands, of times, each with slightly different model configurations in an attempt to understand the sensitivity of the climate model to configuration changes. This large number of runs is required even when statistical methods are used to reduce the total parameter space explored. Once simulation results are created, the daunting challenge of effectively analyzing them must be addressed.
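As a rough illustration of what such an ensemble looks like in practice, the sketch below uses Latin hypercube sampling, one common statistical design for covering a parameter space with fewer runs, to generate perturbed model configurations. The parameter names and ranges are hypothetical, not taken from any ORNL study.

```python
# Illustrative ensemble design for uncertainty quantification (hypothetical parameters).
from scipy.stats import qmc

params = ["cloud_entrainment", "ocean_mixing_coeff", "leaf_area_scale"]
lower  = [0.5, 0.1, 0.8]
upper  = [2.0, 1.0, 1.2]

sampler = qmc.LatinHypercube(d=len(params), seed=42)
designs = qmc.scale(sampler.random(n=200), lower, upper)   # 200 ensemble members

for run_id, values in enumerate(designs):
    config = dict(zip(params, values))
    # In practice each configuration would be written to a case file and submitted
    # as a separate simulation job; here we just show the perturbed settings.
    print(run_id, config)
```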

HPCwire: What is daunting about analysis of climate data?

Shipman: The sheer volume and variety of data that must be analyzed and understood are the biggest challenges. Today it is not uncommon for climate scientists to analyze multiple terabytes of data spanning thousands of files across a number of different climate models and model configurations in order to generate a scientific result. Another challenge that climate scientists are now facing is the need to analyze an increasing variety of datasets — not simply simulation results, but also climate observations often collected from fixed and mobile monitoring.

The fusion of climate simulation and observation data is driven by the need to develop increasingly accurate climate models and to validate their accuracy against historical measurements of the Earth’s climate. Conducting this analysis is a tremendous challenge, often requiring weeks or even months using traditional analysis tools. Many of the traditional analysis tools used by climate scientists were designed and developed over two decades ago, when the volume and variety of data that scientists must now contend with simply did not exist.

To address this challenge, DOE BER began funding a number of projects to develop advanced tools and techniques for climate data analysis, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project, a collaboration including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, the University of Utah, Los Alamos National Laboratory, New York University, and Kitware, a company that develops a variety of visualization and analytic software. Through this project we have developed a number of parallel analysis and visualization tools specifically to address these challenges.

Similarly, we’re looking at ways of integrating this visualization and analysis toolkit within the Earth System Grid Federation, or ESGF, a federated system for managing geographically distributed climate data, to which ORNL is a primary contributor. The tools developed as a result of this research and development are used to support the entire climate science community.

While we have made good progress in addressing many of the challenges in data analysis, the geographically distributed nature of climate data, with archives of data spanning the globe, presents other challenges to this community of researchers.

HPCwire: Does the infrastructure exist to support sharing and analysis of this geographically distributed data?

Shipman: Much has been done to provide the required infrastructure to support this geographically distributed data, particularly between major DOE supercomputing facilities like the one at Lawrence Livermore National Laboratory that stores and distributes climate datasets through the Program for Climate Model Diagnosis and Intercomparison. To support the growing demands of data movement and remote analysis and visualization between major facilities at Oak Ridge, Argonne, and Lawrence Berkeley National Laboratories, for example, in 2009 the DOE Office of Advanced Scientific Computing Research began the Advanced Networking Initiative with the goal of demonstrating and hardening the technologies required to deliver 100-gigabit connectivity between these facilities, which span the United States.

This project has now delivered the capabilities required to transition the high-speed Energy Sciences Network (ESnet) to 100-gigabit communication between these facilities. ESnet serves thousands of DOE scientists and users of DOE facilities and provides connectivity to more than 100 other networks. This base infrastructure will provide a tenfold increase in performance for data movement, remote analysis, and visualization.
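For a sense of scale, an idealized calculation that ignores protocol overhead shows what that tenfold upgrade means for moving a single multi-terabyte climate dataset across such a link:

```python
# Idealized transfer-time arithmetic for a hypothetical 10 TB dataset.
dataset_tb = 10
bits_total = dataset_tb * 1e12 * 8                  # terabytes -> bits

for gbps in (10, 100):
    hours = bits_total / (gbps * 1e9) / 3600
    print(f"{dataset_tb} TB at {gbps} Gb/s: {hours:.1f} hours")
# -> about 2.2 hours at 10 Gb/s versus roughly 13 minutes at 100 Gb/s
```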

Moreover, DOE BER, along with other mission partners, is continuing to make investments in the software technologies required to maintain a distributed data archive with multiple petabytes of climate data stored worldwide through the Earth System Grid Federation project. The ESGF system provides climate scientists and other stakeholders with the tools and technologies to efficiently locate and gain access to climate data of interest from any ESGF portal regardless of where the data reside. While primarily used for sharing climate data today, recent work in integrating UV-CDAT and ESGF allows users to conduct analysis on data anywhere in the ESGF distributed system directly within UV-CDAT as if the data were locally accessible.
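The sketch below illustrates the “as if the data were locally accessible” idea in generic terms: many climate data nodes publish datasets over OPeNDAP, which the netCDF library can read lazily so that only the requested slice crosses the network. The URL and variable name are hypothetical placeholders, not a real ESGF endpoint, and this is not the UV-CDAT API itself.

```python
# Generic sketch of lazy remote access to a published climate dataset over OPeNDAP.
from netCDF4 import Dataset

url = "http://esgf-node.example.gov/thredds/dodsC/cmip5/tas_Amon_example_historical.nc"
ds  = Dataset(url)                        # opens the remote dataset; no bulk download
tas = ds.variables["tas"]                 # surface air temperature (assumed name)

# Only this index-space slab is transferred across the network, not the whole file.
recent_band = tas[-120:, 60:120, :]       # last 120 months, a mid-latitude band
print(float(recent_band.mean()))
ds.close()
```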

Further advances such as integrated remote analysis within the distributed archive are still required, however, as even with dramatic improvements in the underlying networking infrastructure, the cost of moving data is often prohibitive. It is often more efficient to simply move the analysis to where the data reside rather than moving the data to a local system and conducting the analysis.

HPCwire: What challenges loom for data analysis, especially data visualization?

Shipman: The major challenge for most visualization workloads today is data movement. Unfortunately, this challenge will become even more acute in the future. As has been discussed broadly in the HPC community, performance improvements in data movement will continue to significantly lag improvements in floating-point performance. That is to say, future HPC systems are likely to continue a trend of significant improvements in total floating-point performance, most notably measured via the TOP500 benchmark, while the ability to move data both within the machine and to storage will see much more modest increases. This disparity will necessitate advances in how data analysis and visualization workloads address data movement.

One promising approach is in situ analysis, in which visualization and analysis are embedded within the simulation, eliminating the need to move data from the compute platform to storage for subsequent post-processing. Unfortunately, in situ analysis is not a silver bullet, and post-processing of data from simulations is often required for exploratory visualization and analysis. We are tackling this data-movement problem through advances in analysis and visualization algorithms, parallel file systems such as Lustre, and development of advanced software technologies such as ADIOS [the Adaptable Input/Output System, open-source middleware for I/O].
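As a schematic example of the in situ idea (not the ADIOS API), the loop below computes a reduced quantity of interest while the simulation runs and writes only that, instead of staging every full-resolution snapshot to disk for later post-processing:

```python
# Schematic in situ reduction: only a tiny time series is written, not the raw state.
import numpy as np

nlev, nlat, nlon = 30, 192, 288
global_means = []                                   # kept in place of full snapshots

for step in range(100):
    state = np.random.rand(nlev, nlat, nlon).astype("f4")  # stand-in for model state
    global_means.append(float(state.mean()))       # ~6.6 MB of state reduced to one value

np.save("global_mean_timeseries.npy", np.asarray(global_means))
```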

HPCwire: What’s the storage architecture evolving to in a parallel I/O environment?

Shipman: From a system-level architecture perspective, most parallel I/O environments have evolved to incorporate a shared parallel file system, similar to the Spider file system that serves all major compute platforms at the OLCF. I expect this trend will continue in most HPC environments as it provides improved usability, availability of all datasets on all platforms, and significantly reduced total cost of ownership over dedicated storage platforms.

At the component level, the industry is clearly trending toward the incorporation of solid-state storage technologies as increases in hard-disk-drive performance significantly lag increases in capacity and continued increases in computational performance. There is some debate as to what this storage technology will be, but in the near term, probably through 2017, NAND Flash will likely dominate.

HPCwire: What hybrid approaches to storage are possible?       

Shipman: Introducing a new layer in the storage hierarchy, something between memory and traditional rotating media, seems to be the consensus. Likely technologies include flash and, in the future, other NVRAM technologies. As improved manufacturing processes are realized for NVRAM technologies, costs will fall significantly. These storage technologies are also more tolerant of varied workloads than rotating media.

For analysis workloads, which are often read-dominant, NVRAM will likely be used as a higher-performance, large-capacity read cache, effectively expanding the application’s total memory space while providing performance characteristics similar to those of a remote memory operation. Unlike most storage systems today, however, future storage platforms may provide more explicit control of the storage hierarchy, allowing applications or middleware to explicitly manage data movement between levels of the hierarchy.
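The sketch below illustrates that kind of application-directed staging in the simplest possible terms: before a read-dominant analysis pass, hot files are explicitly copied from the disk-based parallel file system into a faster flash/NVRAM tier and read from there. The mount points and the single-file policy are hypothetical.

```python
# Conceptual sketch of application-directed staging into a faster storage tier.
import shutil
from pathlib import Path

SLOW_TIER = Path("/lustre/archive/climate/run042")   # hypothetical disk-based tier
FAST_TIER = Path("/nvram/scratch/run042")            # hypothetical flash/NVRAM tier

def stage_in(filename: str) -> Path:
    """Copy one file into the fast tier (if not already present) and return its path."""
    src, dst = SLOW_TIER / filename, FAST_TIER / filename
    FAST_TIER.mkdir(parents=True, exist_ok=True)
    if not dst.exists():
        shutil.copy2(src, dst)
    return dst

# A read-dominant analysis would then open the staged copy rather than the slow tier.
hot_copy = stage_in("tas_monthly_means.nc")
```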

HPCwire: How does big data for climate relate to other challenges for big data at ORNL and beyond?

Shipman: Many of the challenges we face in supporting climate science at ORNL are similar to the three main challenges of big data — the velocity, variety, and volume of data. The velocity at which high-resolution climate simulations are capable of generating data rivals that of most computational environments of which I am aware and necessitates a scalable, high-performance I/O system.

The variety of data generated from climate science ranges from simulation datasets from a variety of global, regional, and local modeling simulation packages to remote sensing information from both ground-based assets and Earth-observing satellites. These datasets come in a variety of data formats and span a variety of metadata standards. We’re seeing similar volumes, and in some cases larger growth, in other areas of simulation, including fusion science in support of ITER.

In a recent release from the Office of Science and Technology Policy, the President highlighted many of the big-data challenges faced not only across DOE but also at the National Science Foundation and the Department of Defense. A number of the solutions to these big-data challenges highlighted in this report have been developed in part here at Oak Ridge National Laboratory, including the ADIOS system, the Earth System Grid Federation, the High Performance Storage System, and our work in streaming data capture and analysis through the ADARA [Accelerating Data Acquisition, Reduction, and Analysis] project, which aims to develop a streaming data infrastructure allowing scientists to go from experiment to insight and result in record time at the world’s highest-energy neutron source, the Spallation Neutron Source at Oak Ridge National Laboratory.
