As Supercomputers Approach Exascale, Experts Wrestle with Big Data

By Dawn Levy

July 21, 2011

The Oak Ridge Leadership Computing Facility (OLCF) plans to deliver a system with peak performance of 10–20 petaflops at Oak Ridge National Laboratory (ORNL) in 2012. This system will begin the OLCF’s transition from petaflop to exaflop computing this decade. Providing an environment that balances computational speed, memory bandwidth, and the input/output of data — so that no single aspect bottlenecks the performance of scientific applications — will require dramatic advances in parallel I/O capabilities. These high performance computing systems will generate unprecedented amounts of data — and unprecedented challenges in managing it.

In this interview, Galen Shipman, who heads OLCF’s Technology Integration group, discusses the challenges of managing big data. Shipman’s group is charged with integrating advanced technologies in the areas of networks, file systems, and archival storage infrastructures into the OLCF’s HPC systems, such as Jaguar, a 2.33-petaflop Cray XT funded by the Department of Energy Office of Science. From the evolution of supercomputing architectures and storage systems to the development of software and tools, it is clear major change is afoot as computing accelerates toward the exascale.

HPCwire: What data management nightmares keep you up at night?

Shipman: In the short term, the challenge of providing high-performance, reliable, and scalable I/O systems to meet the needs of a growing number of users from broadening domains of science. Users of our computing systems often employ a number of different applications to support their research, each with distinct parallel I/O and data management requirements. As these applications are scaled to systems such as Jaguar, I/O techniques that may have worked adequately at a few thousand cores often prove unscalable at tens or even hundreds of thousands of cores. Educating users about how to optimize their I/O and assisting them in doing so has proved effective.
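One such technique is I/O aggregation: instead of every rank touching the file system, a small set of aggregator ranks collects buffers and issues large, contiguous writes (MPI-IO implements a variant of this as collective buffering). The sketch below is a serial, hedged illustration of the grouping idea, not real MPI code; the function name and group-assignment scheme are assumptions for demonstration.

```python
# Serial sketch of I/O aggregation: funnel the buffers of many "ranks"
# through a few aggregators, so storage sees a handful of large
# sequential writes instead of thousands of small ones.

def aggregate_writes(rank_data, aggregators=4):
    """Group per-rank buffers so only `aggregators` ranks touch storage."""
    nranks = len(rank_data)
    groups = [[] for _ in range(aggregators)]
    for rank in range(nranks):
        # Assign contiguous blocks of ranks to each aggregator.
        groups[rank * aggregators // nranks].append(rank_data[rank])
    # Each aggregator writes one large, contiguous buffer.
    return [b"".join(g) for g in groups]
```

At scale the same idea keeps metadata and lock traffic proportional to the number of aggregators rather than the number of cores.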

Once these applications are optimized for our computational environment, the next challenge, managing the data that these applications produce, comes to the forefront. The OLCF currently manages more than 220 million files and more than 22 petabytes of data stored across our high-performance Lustre file systems and our High Performance Storage System (HPSS) archive. Managing this “big data” is truly a grand challenge and spans not only HPC environments but private industry as well.

To facilitate data management at this scale, the OLCF has developed a number of tools, including advanced search and discovery, metadata harvesting, parallel data movement, and system monitoring and administration for our large-scale data systems. While we have made great progress in this area, the sheer scope of this challenge will necessitate a multi-institutional approach that spans government research laboratories and private industry.

From a hardware technology perspective, a more fundamental challenge that the storage community is facing is the widening gap between the performance of traditional storage technologies and the amount of data they can store. Historically, disk bandwidth has improved by 20 percent for sequential I/O and 8 percent for random I/O annually, while disk drive densities have increased by nearly 50 percent per year. For traditional storage technologies, this trend is likely to continue, resulting in ever-increasing storage capacities with ever-dwindling accessibility of the data sets that reside on the storage media.

To illustrate the challenges we will face with traditional storage technologies, take for example the community-developed exascale roadmap. It points toward parallel I/O environments that support up to 60 terabytes per second of bandwidth to transfer data to and from persistent storage. Assuming historical rates of performance improvement continue, achieving this performance level with traditional storage technologies would require more than 200,000 disk drives with a projected cost of more than $200 million. Even at this level of investment, the performance delivered to most applications may be a small fraction of the goal as drive-latency improvements become increasingly difficult to realize.
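The arithmetic behind those projections can be sanity-checked with a quick back-of-the-envelope calculation. The per-drive bandwidth and per-drive cost below are assumptions chosen to be consistent with the figures quoted above, not numbers from the interview.

```python
# Back-of-the-envelope check of the drive-count projection.
# Assumed (not from the article): ~300 MB/s effective bandwidth per
# drive in the exascale timeframe and ~$1,000 per enterprise drive.
TARGET_BW = 60e12          # 60 TB/s aggregate bandwidth target
PER_DRIVE_BW = 300e6       # assumed effective per-drive bandwidth (B/s)
COST_PER_DRIVE = 1_000     # assumed cost per drive (USD)

drives_needed = TARGET_BW / PER_DRIVE_BW
total_cost = drives_needed * COST_PER_DRIVE
print(f"{drives_needed:,.0f} drives, ${total_cost / 1e6:,.0f}M")
```

Note that the drive count is driven entirely by bandwidth, not capacity — the capacity of 200,000 drives would far exceed what the data sets actually need, which is precisely the imbalance Shipman describes.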

Within the HPC community, there is general consensus that, over the next decade, the traditional storage hierarchy will incorporate technologies such as non-volatile random access memory (NVRAM) to address this problem. Many point to flash-based storage as a likely candidate for inclusion in this deeper hierarchy, but I believe it is too early to make a call on a specific NVRAM technology. One thing is clear: A fundamental technological shift coupled with end-to-end optimization will be required to meet our performance requirements within a tractable price point.

HPCwire: What research enterprises are generating big data at the OLCF?

Shipman: The majority of our users — scientists and engineers in academia, government, and industry — come to us through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, through which Oak Ridge and Argonne national laboratories will deliver 1.7 billion processor hours on advanced supercomputers in 2011. While all OLCF users rely on a parallel I/O environment, a number of heavy hitters in terms of I/O have unique performance and data-management requirements. Of particular note are the climate science, fusion energy, and combustion communities.

Fusion energy simulations may generate up to 100 terabytes of data per day. This is really on the edge in terms of parallel I/O performance, dwarfing that of most other science domains. The climate science community is somewhat unique in terms of its need for structured data management to enable intermodel comparison. The Coupled Model Intercomparison Project Phase 5 will not only generate petabytes of data but also require sophisticated data management tools to facilitate scientific review and intercomparison of simulation results from climate modeling centers around the world.

As simulation continues to mature as a fundamental tool for scientific discovery, I expect other scientific domains to have similar data management requirements. The OLCF is well positioned to meet these requirements through its development of advanced data-management technologies.

HPCwire: How do you store such big data sets?

Shipman: Petascale datasets from all major OLCF platforms are accessible through our center-wide Spider file system, which has both short-term and longer-term storage for computational science teams. Spider is one of the world’s largest and fastest parallel I/O environments, with 10.7 petabytes of disk space and more than 240 gigabytes per second of aggregate throughput. In addition to providing a common center-wide file system to all of ORNL’s supercomputing platforms, the Spider parallel I/O environment provides connectivity to remote facilities such as Argonne National Laboratory and Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center via dedicated data-transfer nodes running GridFTP.

Most users generate large datasets as part of their scientific simulations either for improved resiliency through checkpoint/restart — saving the application state at predefined intervals — or for subsequent data analysis. Checkpoint/restart datasets are generally purged by the application itself or through a system-wide sweep of “expired” datasets and thus have a limited shelf life.
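The checkpoint/restart pattern can be sketched in a few lines. This is a minimal, single-process illustration with a hypothetical file name and a toy computation; production codes write parallel checkpoints through middleware such as ADIOS or HDF-5 rather than pickle.

```python
import os
import pickle

CHECKPOINT = "state.ckpt"          # hypothetical checkpoint file name

def run(total_steps=10, interval=3):
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            step, state = pickle.load(f)
    else:
        step, state = 0, {"sum": 0}

    while step < total_steps:
        state["sum"] += step       # stand-in for the real computation
        step += 1
        if step % interval == 0:   # save application state at intervals
            with open(CHECKPOINT, "wb") as f:
                pickle.dump((step, state), f)
    return state
```

If the job dies mid-run, the next invocation of `run` picks up from the last saved step instead of step zero — at the cost of writing the full application state every `interval` steps, which is what makes checkpoint traffic such a dominant I/O load at scale.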

Datasets to support subsequent data analysis have longer-term value, particularly when a scientist is interested in analyzing and comparing the results of multiple simulations. The OLCF provides every INCITE project with a long-term archival area to store these datasets via HPSS.

HPCwire: What are the logistics of longer-term, or archival, storage?

Shipman: When users decide to move data from Spider into long-term storage, the data are archived on our HPSS, which offers tens of petabytes of capacity and more than 12 gigabytes per second of bandwidth. Incoming data are written to disk and later migrated to tape for long-term archiving in high-capacity, robotic libraries. As of June 2011, the OLCF has more than 18 petabytes stored in more than 20 million files in HPSS from researchers in climate science, nuclear fusion, combustion, astrophysics, materials science, and many other scientific fields. It is not uncommon for users from a single science domain to have well over a petabyte of data in archive. These datasets are of high value, often requiring millions of processor hours to generate. Recreating these datasets and the scientific insight they may provide is increasingly difficult as the demand from the scientific community for large-scale computing resources such as Jaguar outpaces our ability to supply the needed processor hours.

For the foreseeable future, we expect tape technology to be in play for archival storage because of its low power consumption and relatively low cost per terabyte of capacity. While massive array of idle disks (MAID) technology will lower the power requirements of disk-only solutions, there are concerns about the reliability of drive technology compared with tape, particularly when drives are frequently spun up and down as in MAID. I expect disk technologies will still be in play for quite some time as well, as their capacity and sequential bandwidth at a relatively competitive cost point will continue to make the technology compelling. For archiving, I expect disk and tape to continue to be the dominant storage technologies for scientific computing environments. Our primary challenge will be balancing the storage requirements of our users with the cost of storage capacity and bandwidth. In response, I expect the scientific computing community to increasingly adopt data-reduction strategies within their simulation and analysis workflows.

HPCwire: For analyzing and visualizing big data, what is the main challenge?

Shipman: From my experiences with the data-analysis and visualization communities, their primary bottleneck in time-to-solution is I/O performance. Our work with the climate community on the Ultrascale Visualization — Climate Data Analysis Tools (UV-CDAT) project is tackling this issue head-on from an end-to-end perspective. We are working on ultrascale visualization techniques with a number of researchers from Los Alamos and Lawrence Livermore national laboratories, New York University, and Kitware to optimize the infrastructure of the Visualization ToolKit, an open-source software system for 3D computer graphics, image processing, analysis, and visualization.

Early results are encouraging, resulting in linear speedup — scaling — of common visualization workloads on Jaguar using Spider. Through an end-to-end approach that encompassed system architecture experts, visualization researchers, and middleware software engineers, we are able to address one of the most challenging aspects of visualization workloads. I believe this coordinated multidisciplinary approach will be of increasing value as the I/O performance gap continues to widen in the future.

Another promising approach to overcoming the I/O performance bottleneck is “in situ” data analysis, moving from the traditional post-processing model of data analysis to integration of data analysis directly within the simulation. This strategy can result in significant data reduction for certain use cases, particularly when scientists are interested in well-understood phenomena within their simulations.
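A minimal sketch of the in situ idea: fold the analysis (here a running maximum, standing in for whatever statistic the scientist actually cares about) into the simulation loop, so only the reduced result ever needs to be written. The function names and the toy "field" are illustrative assumptions.

```python
# In situ analysis sketch: reduce each field as it is produced instead
# of writing every full field to disk for later post-processing.

def simulate_step(step, n=1000):
    # Stand-in for a simulation producing a large field each step.
    return [(i * step) % 97 for i in range(n)]

def run_in_situ(steps=100):
    peak = float("-inf")
    for step in range(steps):
        field = simulate_step(step)
        peak = max(peak, max(field))  # analyze in place, discard the field
    return peak                       # only this scalar needs persisting
```

The post-processing equivalent would write `steps` full fields to storage and read them all back; in situ reduction writes one number, which is the data-reduction payoff Shipman describes for well-understood phenomena.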

The OLCF provides a number of dedicated visualization and analysis resources to our computational scientists. Our 32-node Linux cluster, known as Lens, is a dedicated platform that enables analysis and visualization of simulation data generated on Jaguar, providing a conduit for large-scale scientific discovery.

The facility also features EVEREST, the Exploratory Visualization Environment for REsearch in Science and Technology, and its associated visualization cluster. EVEREST is 30 feet wide and 10 feet tall and features a 27-projector Powerwall to display 35 million pixels for extremely high-definition scientific visualizations. Coupled with a dedicated Lustre file system, EVEREST provides a compelling experience for scientists exploring data at extremely fine scales. ORNL’s Visualization Task Group, led by Sean Ahern, helps researchers gain a better understanding of their data through visualization techniques.

HPCwire: What are the big challenges in scalable I/O?

Shipman: There are a number of big challenges in scalable I/O, from the sheer component counts needed to support hundreds of gigabytes per second of bandwidth and the ever-growing demands on the scalability of the file system software, to the scalability of middleware libraries and the applications that use them. Over the past decade we have seen dramatic increases in the number of components deployed both within our archive and on our high-performance parallel I/O systems. The OLCF currently has more than 24,000 disk drives supporting our high-performance parallel I/O and archival systems with nearly 500 gigabytes per second of bandwidth and more than 15 petabytes of capacity. System reliability and resiliency are critical at this scale. We use a number of techniques to improve the reliability and resiliency of our systems, from hardware-level redundancy to advanced software-level resiliency. An eye toward engineering reliability into our large-scale systems has allowed us to maintain extremely high availability of these critical resources.

Over the past decade we have seen dramatic increases in the scale of the computational platforms that we deploy at the OLCF. Today we support more than 25,000 file system clients on our largest-scale parallel file systems, an increase of an order of magnitude since 2005. Much of our work in providing a scalable parallel I/O environment has been focused on the Lustre file system. Through collaborative development of the open-source Lustre file system with Cluster File Systems, Sun, Oracle, and most recently Whamcloud, we have successfully supported the OLCF’s transition from teraflop to petaflop computing over this relatively short time frame. Our efforts in forming Open Scalable File Systems, a non-profit mutual benefit organization, are aimed at building upon our successes in collaborative development to meet similar challenges in the future.

Although we have made significant strides in improving the scalability and resiliency of the underlying file and storage systems, achieving optimal performance and scalability at the application level can be elusive to all but a handful of I/O experts. To bridge this gap, we employ a number of I/O middleware technologies, including the Adaptable I/O System (ADIOS), HDF-5, MPI-I/O, NetCDF, Parallel NetCDF, and the Parallel Log-structured File System (PLFS). These I/O middleware technologies serve a variety of functions from structured data models as found in NetCDF and HDF-5, to I/O transformation techniques as found in ADIOS and PLFS. Much of the development of ADIOS is conducted here at ORNL and led by Scott Klasky, a member of our Scientific Computing Group. Using these I/O middleware technologies in concert with our large-scale parallel I/O environment, we have delivered unprecedented levels of I/O performance to our end-users.
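To make the "I/O transformation" idea concrete, here is a toy sketch in the spirit of PLFS's log-structured transform — an illustration of the concept only, not the real PLFS on-disk format or API. Each writer appends sequentially to its own log while an index records where each piece logically belongs, so the shared logical file can be reassembled on read.

```python
# Toy log-structured transform: convert an arbitrary shared-file write
# pattern into per-writer sequential appends plus an index.

class LogStructuredFile:
    def __init__(self, nwriters):
        self.logs = [bytearray() for _ in range(nwriters)]
        self.index = []  # (logical_offset, length, writer, log_offset)

    def write(self, writer, logical_offset, data):
        log = self.logs[writer]
        self.index.append((logical_offset, len(data), writer, len(log)))
        log.extend(data)  # every physical write is a sequential append

    def read_all(self):
        # Reassemble the logical file by replaying the index.
        size = max((off + ln for off, ln, *_ in self.index), default=0)
        out = bytearray(size)
        for off, ln, writer, pos in self.index:
            out[off:off + ln] = self.logs[writer][pos:pos + ln]
        return bytes(out)
```

The payoff is that storage only ever sees large sequential appends, even when the logical write pattern is small and strided — the access pattern parallel file systems handle worst.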
