Climate Science Triggers Torrent of Big Data Challenges

By Dawn Levy

August 15, 2012

Supercomputers at the Oak Ridge National Laboratory (ORNL) computing complex produce some of the world’s largest scientific datasets. Many come from studies that use high-resolution models to evaluate climate change consequences and mitigation strategies. The Department of Energy (DOE) Office of Science’s Jaguar (the pride of the Oak Ridge Leadership Computing Facility, or OLCF), the National Science Foundation (NSF)/University of Tennessee’s Kraken (NSF’s first petascale supercomputer), and the National Oceanic and Atmospheric Administration’s Gaea (dedicated solely to climate modeling) all run climate simulations at ORNL in support of their agencies’ science missions.

Such simulations reveal Earth’s climate past, as described, for example, in a 2012 Nature article that was the first to show the role carbon dioxide played in helping end the last ice age. They also hint at our climate’s future, as evidenced by the major computational support that ORNL and Lawrence Berkeley National Laboratory continue to provide to U.S. global modeling groups participating in the upcoming Fifth Assessment Report of the United Nations Intergovernmental Panel on Climate Change.

Observational facilities generate a wide variety of climate data as well. Remote sensing platforms such as DOE’s Atmospheric Radiation Measurement facilities support global climate research with a program studying cloud formation processes and their influence on heat transfer. Other climate observation facilities include DOE’s Carbon Dioxide Information Analysis Center at ORNL and the ORNL Distributed Active Archive Center, which archives data from the National Aeronautics and Space Administration’s Earth science missions.

Researchers at the Oak Ridge Climate Change Science Institute (ORCCSI) use coupled Earth system models and observations to explore connections among atmosphere, oceans, land, and ice and to better understand the Earth system. These simulations and climate observations produce a lot of data that must be transported, analyzed, visualized, and stored.

In this interview, Galen Shipman, data-systems architect for ORNL’s Computing and Computational Sciences Directorate and the person who oversees data management at the OLCF, discusses strategies for coping with the “3 Vs” of the big data that climate science generates: variety, velocity, and volume.

HPCwire: Why do climate simulations generate so much data?    

Galen Shipman: The I/O workloads in many climate simulations are based on saving the state of the simulation, the Earth system, for post analysis. Essentially, they’re writing out time series information at predefined intervals—everything from temperature to pressure to carbon concentration, basically an entire set of concurrent variables that represent the state of the Earth system within a particular spatial region.

If you think of, say, the atmosphere, it can be gridded around the globe as well as vertically, and for each subgrid we’re saving information about the particular state of that spatial area of the simulation. In terms of data output, this generally means large numbers of processors concurrently writing out system state from a simulation platform such as Jaguar.
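
To make that output pattern concrete, here is a minimal sketch, in Python with the netCDF4 library, of a single subdomain writing its state variables to a time-series file at a fixed interval. The grid dimensions, variable names, output interval, and stub model step are illustrative assumptions, not details of any particular climate code.

```python
# Minimal sketch (assumptions noted above) of periodic time-series output:
# at a fixed interval, write the gridded state of one spatial subdomain
# (temperature, pressure, CO2 concentration) to a NetCDF file.
import numpy as np
from netCDF4 import Dataset

NLEV, NLAT, NLON = 30, 96, 144      # assumed subdomain grid
OUTPUT_INTERVAL = 10                # write every 10 simulated steps (assumed)
VARIABLES = ("temperature", "pressure", "co2")

def model_step():
    """Stand-in for one step of a real climate model; returns random fields."""
    return {name: np.random.rand(NLEV, NLAT, NLON).astype("f4")
            for name in VARIABLES}

with Dataset("subdomain_0001.nc", "w") as nc:
    nc.createDimension("time", None)        # unlimited; grows with each write
    nc.createDimension("lev", NLEV)
    nc.createDimension("lat", NLAT)
    nc.createDimension("lon", NLON)
    for name in VARIABLES:
        nc.createVariable(name, "f4", ("time", "lev", "lat", "lon"))

    for step in range(100):
        state = model_step()
        if step % OUTPUT_INTERVAL == 0:     # predefined output interval
            t = len(nc.dimensions["time"])
            for name in VARIABLES:
                nc.variables[name][t, :, :, :] = state[name]
```

In a real run, many processes execute this pattern concurrently, each for its own subdomain, which is what produces the large aggregate data volumes described below.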

Many climate simulations output to a large number of individual files over the entire simulation run. For a single run you can have many files created, which, when taken in aggregate, can exceed several terabytes. Over the past few years, we have seen these dataset sizes increase dramatically.

Climate scientists, led by ORNL’s Jim Hack, who heads ORCCSI and directs the National Center for Computational Sciences, have made significant progress in increasing the spatial and temporal resolution of climate models, along with their physical and biogeochemical complexity. Those advances significantly increase the amount of data the models generate. Efforts such as more frequent sampling in simulated time aim to better capture aspects of climate such as the Earth’s daily cycle, while increased spatial resolution is particularly important when you’re looking at localized impacts of climate change.

If we’re trying to understand the impact of climate change on extreme weather phenomena, we might be interested in monitoring low-pressure areas, which can be done at a fairly coarse spatial resolution. But to identify a smaller-scale low-pressure anomaly like a hurricane, we need to go to even higher resolution, which means even more data are generated and more analysis of that data is required following the simulation.
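
A back-of-envelope calculation illustrates how quickly output volume grows with resolution. The grid sizes, variable count, and output frequencies below are illustrative assumptions, not figures from an actual model run.

```python
# Back-of-envelope estimate (illustrative numbers only) of how output volume
# grows with spatial and temporal resolution for a single 100-year run.
def run_output_bytes(nlat, nlon, nlev, nvars, writes_per_day, years=100,
                     bytes_per_value=4):
    values_per_write = nlat * nlon * nlev * nvars
    writes = writes_per_day * 365 * years
    return values_per_write * writes * bytes_per_value

coarse = run_output_bytes(nlat=96,  nlon=144, nlev=30, nvars=20, writes_per_day=1)
fine   = run_output_bytes(nlat=384, nlon=576, nlev=30, nvars=20, writes_per_day=4)

TB = 1024**4
print(f"coarse grid, daily output : {coarse / TB:6.1f} TB")
print(f"4x finer grid, 6-hourly   : {fine / TB:6.1f} TB  ({fine / coarse:.0f}x more)")
```

With these assumed numbers, refining the horizontal grid by a factor of four in each direction and writing 6-hourly instead of daily multiplies the output of a single run by a factor of 64.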

In addition to higher-resolution climate simulations, a drive to better understand the uncertainty of a simulation result, what can naively be thought of as putting “error bars” around a simulation result, is causing a dramatic uptick in the volume and velocity of data generation. Climate scientist Peter Thornton is leading efforts at ORNL to better quantify uncertainty in climate models as part of the DOE Office of Biological and Environmental Research (BER)–funded Climate Science for a Sustainable Energy Future project.

In many of his team’s studies, a climate simulation may be run hundreds, or even thousands, of times, each with slightly different model configurations in an attempt to understand the sensitivity of the climate model to configuration changes. This large number of runs is required even when statistical methods are used to reduce the total parameter space explored. Once simulation results are created, the daunting challenge of effectively analyzing them must be addressed.
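
The ensemble workflow Shipman describes can be sketched as generating many slightly different model configurations from a set of parameter ranges. The parameter names and ranges here are invented for illustration, and the plain random sampling stands in for the designed sampling methods (such as Latin hypercube sampling) that real uncertainty-quantification studies typically use.

```python
# Hypothetical sketch of generating an ensemble of model configurations for a
# parameter-sensitivity study. Parameter names and ranges are invented for
# illustration only.
import json
import random

PARAM_RANGES = {                 # assumed tunable parameters and bounds
    "cloud_entrainment_rate": (0.5, 2.0),
    "snow_albedo":            (0.65, 0.85),
    "ocean_mixing_coeff":     (0.1, 1.0),
}

def make_ensemble(n_members, seed=42):
    rng = random.Random(seed)
    members = []
    for i in range(n_members):
        config = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in PARAM_RANGES.items()}
        config["run_id"] = f"member_{i:04d}"
        members.append(config)
    return members

# Write one configuration file per run; each would drive a separate simulation.
for cfg in make_ensemble(n_members=500):
    with open(f"{cfg['run_id']}.json", "w") as f:
        json.dump(cfg, f, indent=2)
```

Each configuration drives a full simulation, so the data volume scales with the ensemble size as well as with resolution.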

HPCwire: What is daunting about analysis of climate data?

Shipman: The sheer volume and variety of data that must be analyzed and understood are the biggest challenges. Today it is not uncommon for climate scientists to analyze multiple terabytes of data spanning thousands of files across a number of different climate models and model configurations in order to generate a scientific result. Another challenge that climate scientists are now facing is the need to analyze an increasing variety of datasets — not simply simulation results, but also climate observations often collected from fixed and mobile monitoring.
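
The multi-file analysis pattern he describes, reducing a statistic over thousands of output files, might look like the following sketch. The directory layout, variable name, and process count are assumptions; production tools such as UV-CDAT perform this kind of reduction at far larger scale.

```python
# Sketch of a multi-file reduction: compute a global, time-mean temperature
# over a directory of NetCDF output files, using a pool of worker processes.
# Directory layout and variable name are assumptions.
import glob
from multiprocessing import Pool

import numpy as np
from netCDF4 import Dataset

def file_sum_and_count(path):
    """Return (sum, count) of the temperature field in one output file."""
    with Dataset(path) as nc:
        data = nc.variables["temperature"][:]   # shape: (time, lev, lat, lon)
        return float(np.sum(data)), int(data.size)

if __name__ == "__main__":
    paths = sorted(glob.glob("run_output/*.nc"))
    with Pool(processes=8) as pool:
        partials = pool.map(file_sum_and_count, paths)
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    if count:
        print(f"Global time-mean temperature over {len(paths)} files: "
              f"{total / count:.3f}")
```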

The fusion of climate simulation and observation data is driven by the need to develop increasingly accurate climate models and to validate that accuracy against historical measurements of the Earth’s climate. Conducting this analysis is a tremendous challenge, often requiring weeks or even months using traditional analysis tools. Many of those tools were designed and developed more than two decades ago, when the volume and variety of data that scientists must now contend with simply did not exist.

To address this challenge, DOE BER began funding a number of projects to develop advanced tools and techniques for climate data analysis, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project, a collaboration among Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, the University of Utah, Los Alamos National Laboratory, New York University, and Kitware, a company that develops visualization and analytics software. Through this project we have developed a number of parallel analysis and visualization tools specifically to address these challenges.

Similarly, we’re looking at ways of integrating this visualization and analysis toolkit within the Earth System Grid Federation, or ESGF, a federated system for managing geographically distributed climate data, to which ORNL is a primary contributor. The tools developed as a result of this research and development are used to support the entire climate science community.

While we have made good progress in addressing many of the challenges in data analysis, the geographically distributed nature of climate data, with archives of data spanning the globe, presents other challenges to this community of researchers.

HPCwire: Does the infrastructure exist to support sharing and analysis of this geographically distributed data?

Shipman: Much has been done to provide the infrastructure required to support this geographically distributed data, particularly between major DOE supercomputing facilities such as the one at Lawrence Livermore National Laboratory, which stores and distributes climate datasets through the Program for Climate Model Diagnosis and Intercomparison. To support the growing demands of data movement and remote analysis and visualization between major facilities at Oak Ridge, Argonne, and Lawrence Berkeley National Laboratories, the DOE Office of Advanced Scientific Computing Research began the Advanced Networking Initiative in 2009, with the goal of demonstrating and hardening the technologies required to deliver 100-gigabit connectivity between these facilities, which span the United States.

This project has now delivered the capabilities required to transition the high-speed Energy Sciences Network (ESnet) to 100-gigabit communication between these facilities. ESnet serves thousands of DOE scientists and users of DOE facilities and provides connectivity to more than 100 other networks. This base infrastructure will provide a tenfold increase in performance for data movement, remote analysis, and visualization.

Moreover, DOE BER, along with other mission partners, is continuing to make investments in the software technologies required to maintain a distributed data archive with multiple petabytes of climate data stored worldwide through the Earth System Grid Federation project. The ESGF system provides climate scientists and other stakeholders with the tools and technologies to efficiently locate and gain access to climate data of interest from any ESGF portal regardless of where the data reside. While primarily used for sharing climate data today, recent work in integrating UV-CDAT and ESGF allows users to conduct analysis on data anywhere in the ESGF distributed system directly within UV-CDAT as if the data were locally accessible.
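
As a rough illustration of locating data through the federation, the sketch below queries an ESGF search node over its REST interface. The node URL and facet values are examples only and should be checked against the portal and project you actually use.

```python
# Hedged sketch of querying a distributed ESGF index for datasets of interest.
# The node URL and facet values are examples, not a prescription.
import requests

ESGF_SEARCH = "https://esgf-node.llnl.gov/esg-search/search"   # example node

params = {
    "project": "CMIP5",            # example facets
    "variable": "tas",             # near-surface air temperature
    "experiment": "historical",
    "latest": "true",
    "format": "application/solr+json",
    "limit": 10,
}

resp = requests.get(ESGF_SEARCH, params=params, timeout=60)
resp.raise_for_status()
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"))
```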

Further advances such as integrated remote analysis within the distributed archive are still required, however, as even with dramatic improvements in the underlying networking infrastructure, the cost of moving data is often prohibitive. It is often more efficient to simply move the analysis to where the data reside rather than moving the data to a local system and conducting the analysis.

HPCwire: What challenges loom for data analysis, especially data visualization?

Shipman: The major challenge for most visualization workloads today is data movement. Unfortunately, this challenge will become even more acute in the future. As has been discussed broadly in the HPC community, performance improvements in data movement will continue to lag significantly behind improvements in floating-point performance. That is to say, future HPC systems are likely to continue a trend of significant gains in total floating-point performance, most notably measured via the TOP500 benchmark, while the ability to move data both within the machine and to storage will see much more modest increases. This disparity will necessitate advances in how data analysis and visualization workloads address data movement.

One promising approach is in situ analysis, in which visualization and analysis are embedded within the simulation, eliminating the need to move data from the compute platform to storage for subsequent post-processing. Unfortunately, in situ analysis is not a silver bullet, and post-processing of simulation data is often required for exploratory visualization and analysis. We are tackling this data-movement problem through advances in analysis and visualization algorithms, parallel file systems such as Lustre, and the development of advanced software technologies such as ADIOS [Adaptable Input/Output System, open-source middleware for I/O].
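
The following sketch illustrates the in situ idea in miniature (it does not use the ADIOS API): summary statistics are computed inside the time loop and only those are written out, rather than the full three-dimensional state at every output step. The grid size and stub model step are assumptions.

```python
# In situ reduction in miniature: write a few summary statistics per step
# instead of the full 3-D field (~1.7 million values per step on this grid).
import numpy as np

NLEV, NLAT, NLON = 30, 192, 288            # assumed grid
rng = np.random.default_rng(seed=0)

def model_step():
    """Stand-in for one step of a real model; returns a temperature-like field."""
    return 250.0 + 50.0 * rng.random((NLEV, NLAT, NLON))

with open("temperature_summary.csv", "w") as out:
    out.write("step,min,mean,max\n")
    for step in range(1000):
        temp = model_step()
        # Analysis runs alongside the simulation; only the reduced values persist.
        out.write(f"{step},{temp.min():.2f},{temp.mean():.2f},{temp.max():.2f}\n")
```

The trade-off is that any quantity not computed in situ is lost unless the full state is also saved, which is why exploratory analysis still relies on post-processing.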

HPCwire: What’s the storage architecture evolving to in a parallel I/O environment?

Shipman: From a system-level architecture perspective, most parallel I/O environments have evolved to incorporate a shared parallel file system, similar to the Spider file system that serves all major compute platforms at the OLCF. I expect this trend will continue in most HPC environments as it provides improved usability, availability of all datasets on all platforms, and significantly reduced total cost of ownership over dedicated storage platforms.

At the component level, the industry is clearly trending toward the incorporation of solid-state storage technologies as increases in hard-disk-drive performance significantly lag increases in capacity and continued increases in computational performance. There is some debate as to what this storage technology will be, but in the near term, probably through 2017, NAND Flash will likely dominate.

HPCwire: What hybrid approaches to storage are possible?       

Shipman: Introducing a new layer in the storage hierarchy, something between memory and traditional rotating media, seems to be the consensus. Likely technologies include flash and, in the future, other NVRAM technologies. As improved manufacturing processes for NVRAM are realized, costs will fall significantly. These storage technologies are also more tolerant of varied workloads than rotating media.

For analysis workloads, which are often read-dominant, NVRAM will likely be used as a higher-performance, large-capacity read cache, effectively expanding the application’s total memory space while providing performance characteristics similar to that of a remote memory operation. Unlike most storage systems today, however, future storage platforms may provide more explicit control of the storage hierarchy, allowing applications or middleware to explicitly manage data movement between levels of the hierarchy.
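
As a toy illustration of that kind of explicitly managed hierarchy, the sketch below stages files from a slow tier into a fast tier on first read and lets the application prefetch data it knows it will need. The directory-based tiers stand in for NVRAM and disk; no real burst-buffer or storage API is used.

```python
# Toy illustration of an explicitly managed read cache in front of slower
# storage. "fast_dir" stands in for an NVRAM/flash tier, "slow_dir" for disk.
import os
import shutil

class TieredReader:
    def __init__(self, slow_dir, fast_dir, capacity_bytes):
        self.slow_dir = slow_dir
        self.fast_dir = fast_dir
        self.capacity = capacity_bytes
        self.used = 0
        os.makedirs(fast_dir, exist_ok=True)

    def read(self, name):
        fast_path = os.path.join(self.fast_dir, name)
        if not os.path.exists(fast_path):                 # cache miss
            slow_path = os.path.join(self.slow_dir, name)
            size = os.path.getsize(slow_path)
            if self.used + size <= self.capacity:         # stage into fast tier
                shutil.copy(slow_path, fast_path)
                self.used += size
            else:                                         # fall back to slow tier
                fast_path = slow_path
        with open(fast_path, "rb") as f:
            return f.read()

    def prefetch(self, names):
        """Application-directed placement: warm the fast tier ahead of analysis."""
        for name in names:
            self.read(name)
```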

HPCwire: How does big data for climate relate to other challenges for big data at ORNL and beyond?

Shipman: Many of the challenges we face in supporting climate science at ORNL are similar to the three main challenges of big data: the velocity, variety, and volume of data. The velocity at which high-resolution climate simulations can generate data rivals that of most computational environments I am aware of and necessitates a scalable, high-performance I/O system.

The variety of data generated from climate science ranges from simulation datasets from a variety of global, regional, and local modeling simulation packages to remote sensing information from both ground-based assets and Earth-observing satellites. These datasets come in a variety of data formats and span a variety of metadata standards. We’re seeing similar volumes, and in some cases larger growth, in other areas of simulation, including fusion science in support of ITER.

In a recent release from the Office of Science and Technology Policy, the President highlighted many of the big-data challenges faced not only across DOE but also by the National Science Foundation and the Department of Defense. A number of the solutions to these challenges highlighted in that report were developed in part here at Oak Ridge National Laboratory, including the ADIOS system, the Earth System Grid Federation, the High Performance Storage System, and our work in streaming data capture and analysis through the ADARA [Accelerating Data Acquisition, Reduction, and Analysis] project. ADARA aims to develop a streaming data infrastructure that allows scientists to go from experiment to insight and result in record time at the world’s highest-energy neutron source, the Spallation Neutron Source at Oak Ridge National Laboratory.
