As Supercomputers Approach Exascale, Experts Wrestle with Big Data

By Dawn Levy

July 21, 2011

The Oak Ridge Leadership Computing Facility (OLCF) plans to deliver a system with a peak performance of 10–20 petaflops at Oak Ridge National Laboratory (ORNL) in 2012. This system will begin the OLCF’s transition from petaflop to exaflop computing this decade. Providing an environment that balances computational speed, memory bandwidth, and the input/output (I/O) of data — so that no single aspect bottlenecks the performance of scientific applications — will require dramatic advances in parallel I/O capabilities. These high-performance computing (HPC) systems will generate unprecedented amounts of data — and unprecedented challenges in managing it.

In this interview, Galen Shipman, who heads OLCF’s Technology Integration group, discusses the challenges of managing big data. Shipman’s group is charged with integrating advanced technologies in the areas of networks, file systems, and archival storage infrastructures into the OLCF’s HPC systems, such as Jaguar, a 2.33-petaflop Cray XT funded by the Department of Energy Office of Science. From the evolution of supercomputing architectures and storage systems to the development of software and tools, it is clear major change is afoot as computing accelerates toward the exascale.

HPCwire: What data management nightmares keep you up at night?

Shipman: In the short term, the challenge of providing high-performance, reliable, and scalable I/O systems to meet the needs of a growing number of users from broadening domains of science. Users of our computing systems often employ a number of different applications to support their research, often with distinct parallel I/O and data management requirements. As these applications are scaled to systems such as Jaguar, I/O techniques that may have worked adequately at a few thousand cores often prove unscalable at tens or even hundreds of thousands of cores. Educating users about how to optimize their I/O and assisting them in doing so has proved effective.

Once these applications are optimized for our computational environment, the next challenge, managing the data that these applications produce, comes to the forefront. The OLCF currently manages more than 220 million files and more than 22 petabytes of data stored across our high-performance Lustre file systems and our High Performance Storage System (HPSS) archive. Managing this “big data” is truly a grand challenge and spans not only HPC environments but private industry as well.

To facilitate data management at this scale, the OLCF has developed a number of tools, including advanced search and discovery, metadata harvesting, parallel data movement, and system monitoring and administration for our large-scale data systems. While we have made great progress in this area, the sheer scope of this challenge will necessitate a multi-institutional approach that spans government research laboratories and private industry.

From a hardware technology perspective, a more fundamental challenge that the storage community is facing is the widening gap between the performance of traditional storage technologies and the amount of data they can store. Historically, disk bandwidth has improved by 20 percent for sequential I/O and 8 percent for random I/O annually, while disk drive densities have increased by nearly 50 percent per year. For traditional storage technologies, this trend is likely to continue, resulting in ever-increasing storage capacities with ever-dwindling accessibility of the data sets that reside on the storage media.
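The consequence of those diverging growth rates can be illustrated with a quick compound-growth calculation. The baseline figures below are hypothetical round numbers (a 1 TB drive at 100 MB/s), not OLCF measurements; the point is only the trend:

```python
# Illustrative compound growth: disk capacity vs. sequential bandwidth.
# Baseline figures are assumed: a 1 TB drive with 100 MB/s sequential bandwidth.
capacity_tb = 1.0        # terabytes
bandwidth_mb_s = 100.0   # megabytes per second

for year in range(11):
    # Time to read the entire drive sequentially, in hours.
    hours_to_read = capacity_tb * 1e6 / bandwidth_mb_s / 3600
    if year % 5 == 0:
        print(f"year {year:2d}: {capacity_tb:6.1f} TB, "
              f"{bandwidth_mb_s:6.0f} MB/s, {hours_to_read:5.1f} h to read fully")
    capacity_tb *= 1.50      # ~50% annual density growth
    bandwidth_mb_s *= 1.20   # ~20% annual sequential-bandwidth growth
```

With these assumed rates, the time to read a full drive grows by roughly 25 percent per year — the data becomes steadily less accessible even as capacity soars.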

To illustrate the challenges we will face with traditional storage technologies, take for example the community-developed exascale roadmap. It points toward parallel I/O environments that support up to 60 terabytes per second of bandwidth to transfer data to and from persistent storage. Assuming historical rates of performance improvement continue, achieving this performance level with traditional storage technologies would require more than 200,000 disk drives with a projected cost of more than $200 million. Even at this level of investment, the performance delivered to most applications may be a small fraction of the goal as drive-latency improvements become increasingly harder to realize.
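The drive count and cost figures above follow from simple arithmetic. This back-of-the-envelope sketch reproduces them; the per-drive bandwidth and per-drive cost used here are assumed projections chosen to match the quoted totals, not specifications from the roadmap itself:

```python
# Back-of-the-envelope: how many disks for 60 TB/s of aggregate bandwidth?
target_bandwidth_tb_s = 60.0   # exascale roadmap target
per_drive_mb_s = 300.0         # assumed projected sequential bandwidth per drive
cost_per_drive_usd = 1000.0    # assumed cost per enterprise drive

drives = target_bandwidth_tb_s * 1e6 / per_drive_mb_s
cost_millions = drives * cost_per_drive_usd / 1e6
print(f"drives needed: {drives:,.0f}")        # → 200,000
print(f"estimated cost: ${cost_millions:,.0f}M")  # → $200M
```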

Within the HPC community, there is general consensus that, over the next decade, the traditional storage hierarchy will incorporate technologies such as non-volatile random access memory (NVRAM) to address this problem. Many point to flash-based storage as a likely candidate for inclusion in this deeper hierarchy, but I believe it is too early to make a call on a specific NVRAM technology. One thing is clear: A fundamental technological shift coupled with end-to-end optimization will be required to meet our performance requirements within a tractable price point.

HPCwire: What research enterprises are generating big data at the OLCF?

Shipman: The majority of our users — scientists and engineers in academia, government, and industry — come to us through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, through which Oak Ridge and Argonne national laboratories will deliver 1.7 billion processor hours on advanced supercomputers in 2011. While all OLCF users rely on a parallel I/O environment, a number of heavy hitters in terms of I/O have unique performance and data-management requirements. Of particular note are the climate science, fusion energy, and combustion communities.

Fusion energy simulations may generate up to 100 terabytes of data per day. This is really on the edge in terms of parallel I/O performance, dwarfing that of most other science domains. The climate science community is somewhat unique in terms of its need for structured data management to enable intermodel comparison. The Coupled Model Intercomparison Project Phase 5 will not only generate petabytes of data but also require sophisticated data management tools to facilitate scientific review and intercomparison of simulation results from climate modeling centers around the world.

As simulation continues to mature as a fundamental tool for scientific discovery, I expect other scientific domains to have similar data management requirements. The OLCF is well positioned to meet these requirements through its development of advanced data-management technologies.

HPCwire: How do you store such big data sets?

Shipman: Petascale datasets from all major OLCF platforms are accessible through our center-wide Spider file system, which has both short-term and longer-term storage for computational science teams. Spider is one of the world’s largest and fastest parallel I/O environments, with 10.7 petabytes of disk space and more than 240 gigabytes per second of aggregate throughput. In addition to providing a common center-wide file system to all of ORNL’s supercomputing platforms, the Spider parallel I/O environment provides connectivity to remote facilities such as Argonne National Laboratory and Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center via dedicated data-transfer nodes running GridFTP.

Most users generate large datasets as part of their scientific simulations either for improved resiliency through checkpoint/restart — saving the application state at predefined intervals — or for subsequent data analysis. Checkpoint/restart datasets are generally purged by the application itself or through a system-wide sweep of “expired” datasets and thus have a limited shelf life.
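The checkpoint/restart pattern Shipman describes can be sketched in a few lines. This is a simplified serial illustration with hypothetical file and field names; real HPC applications checkpoint in parallel through libraries such as ADIOS or MPI-IO rather than Python's pickle:

```python
import os
import pickle
import tempfile

def checkpoint(state, path):
    # Write to a temporary file, then atomically rename, so a crash
    # mid-write never corrupts the previous checkpoint.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def restart(path, default):
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return default

state = restart("sim.ckpt", {"step": 0, "field": [0.0] * 8})
while state["step"] < 100:
    state["field"] = [x + 1.0 for x in state["field"]]  # stand-in for real physics
    state["step"] += 1
    if state["step"] % 10 == 0:  # save application state at predefined intervals
        checkpoint(state, "sim.ckpt")
```

The atomic-rename step mirrors why such datasets have a limited shelf life: each new checkpoint supersedes the last, and expired ones can be purged without losing the ability to restart.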

Datasets to support subsequent data analysis have longer-term value, particularly when a scientist is interested in analyzing and comparing the results of multiple simulations. The OLCF provides every INCITE project with a long-term archival area to store these datasets via HPSS.

HPCwire: What are the logistics of longer-term, or archival, storage?

Shipman: When users decide to move data from Spider into long-term storage, the data are archived on our HPSS, which offers tens of petabytes of capacity and more than 12 gigabytes per second of bandwidth. Incoming data are written to disk and later migrated to tape for long-term archiving in high-capacity, robotic libraries. As of June 2011, the OLCF has more than 18 petabytes stored in more than 20 million files in HPSS from researchers in climate science, nuclear fusion, combustion, astrophysics, materials science, and many other scientific fields. It is not uncommon for users from a single science domain to have well over a petabyte of data in archive. These datasets are of high value, often requiring millions of processor hours to generate. Recreating these datasets and the scientific insight they may provide is increasingly difficult as the demand from the scientific community for large-scale computing resources such as Jaguar outpaces our ability to supply the needed processor hours.

For the foreseeable future, we expect tape technology to be in play for archival storage because of its low power consumption and relatively low cost per terabyte of capacity. While massive array of idle disks (MAID) technology will lower the power requirements of disk-only solutions, there are concerns about the reliability of drive technology compared with tape, particularly when drives are frequently spun up and down, as in MAID. I expect disk technologies will still be in play for quite some time as well, as their capacity and sequential bandwidth at a relatively competitive cost point will continue to make them compelling. For archiving, I expect disk and tape to remain the dominant storage technologies for scientific computing environments. Our primary challenge will be balancing the storage requirements of our users with the cost of storage capacity and bandwidth. In response, I expect the scientific computing community to increasingly adopt data-reduction strategies within their simulation and analysis workflows.

HPCwire: For analyzing and visualizing big data, what is the main challenge?

Shipman: From my experiences with the data-analysis and visualization communities, their primary bottleneck in time-to-solution is I/O performance. Our work with the climate community on the Ultrascale Visualization — Climate Data Analysis Tools (UV-CDAT) project is tackling this issue head-on from an end-to-end perspective. We are working on ultrascale visualization techniques with a number of researchers from Los Alamos and Lawrence Livermore national laboratories, New York University, and Kitware to optimize the infrastructure of the Visualization ToolKit, an open-source software system for 3D computer graphics, image processing, analysis, and visualization.

Early results are encouraging, showing linear scaling of common visualization workloads on Jaguar using Spider. Through an end-to-end approach that brought together system-architecture experts, visualization researchers, and middleware software engineers, we have been able to address one of the most challenging aspects of visualization workloads. I believe this coordinated multidisciplinary approach will be of increasing value as the I/O performance gap continues to widen in the future.

Another promising approach to overcoming the I/O performance bottleneck is “in situ” data analysis, moving from the traditional post-processing model of data analysis to integration of data analysis directly within the simulation. This strategy can result in significant data reduction for certain use cases, particularly when scientists are interested in well-understood phenomena within their simulations.
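A minimal sketch of the in situ idea follows, with hypothetical names throughout; real in situ frameworks hook analysis kernels directly into the simulation's time-step loop rather than computing simple statistics in Python. The data reduction comes from retaining small summaries instead of the full field at every step:

```python
import statistics

def simulate_step(step, n=100_000):
    # Stand-in for one time step of a real simulation field (values in [0, 1)).
    return [(i * 0.001 + step * 0.1) % 1.0 for i in range(n)]

reduced = []
for step in range(10):
    field = simulate_step(step)
    # In situ: compute summary statistics now, while the field is in memory,
    # instead of writing the full field to disk for post-processing later.
    reduced.append({"step": step,
                    "mean": statistics.fmean(field),
                    "max": max(field)})

# 10 small records persist instead of 10 x 100,000 raw values.
print(len(reduced), "records retained")
```

The trade-off, as the interview notes, is that this works best for well-understood phenomena: whatever is not summarized in situ is discarded and cannot be re-analyzed later.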

The OLCF provides a number of dedicated visualization and analysis resources to our computational scientists. Our 32-node Linux cluster, known as Lens, is a dedicated platform that enables analysis and visualization of simulation data generated on Jaguar, providing a conduit for large-scale scientific discovery.

The facility also features EVEREST, the Exploratory Visualization Environment for REsearch in Science and Technology, and its associated visualization cluster. EVEREST is 30 feet wide and 10 feet tall and features a 27-projector Powerwall to display 35 million pixels for extremely high-definition scientific visualizations. Coupled with a dedicated Lustre file system, EVEREST provides a compelling experience for scientists exploring data at extremely fine scales. ORNL’s Visualization Task Group, led by Sean Ahern, helps researchers gain a better understanding of their data through visualization techniques.

HPCwire: What are the big challenges in scalable I/O?

Shipman: There are a number of big challenges in scalable I/O, from the sheer component counts needed to support hundreds of gigabytes per second of bandwidth and the ever-growing demands on the scalability of the file system software, to the scalability of middleware libraries and the applications that use them. Over the past decade we have seen dramatic increases in the number of components deployed both within our archive and on our high-performance parallel I/O systems. The OLCF currently has more than 24,000 disk drives supporting our high-performance parallel I/O and archival systems with nearly 500 gigabytes per second of bandwidth and more than 15 petabytes of capacity. System reliability and resiliency are critical at this scale. We use a number of techniques to improve the reliability and resiliency of our systems, from hardware-level redundancy to advanced software-level resiliency. An eye toward engineering reliability into our large-scale systems has allowed us to maintain extremely high availability of these critical resources.

Over the past decade we have seen dramatic increases in the scale of the computational platforms that we deploy at the OLCF. Today we support more than 25,000 file system clients on our largest-scale parallel file systems, an increase of an order of magnitude since 2005. Much of our work in providing a scalable parallel I/O environment has been focused on the Lustre file system. Through collaborative development of the open-source Lustre file system with Cluster File Systems, Sun, Oracle, and most recently Whamcloud, we have successfully supported the OLCF’s transition from teraflop to petaflop computing over this relatively short time frame. Our efforts in forming Open Scalable File Systems, a non-profit mutual benefit organization, are aimed at building upon our successes in collaborative development to meet similar challenges in the future.

Although we have made significant strides in improving the scalability and resiliency of the underlying file and storage systems, achieving optimal performance and scalability at the application level can be elusive to all but a handful of I/O experts. To bridge this gap, we employ a number of I/O middleware technologies, including the Adaptable I/O System (ADIOS), HDF-5, MPI-I/O, NetCDF, Parallel NetCDF, and the Parallel Log-structured File System (PLFS). These I/O middleware technologies serve a variety of functions from structured data models as found in NetCDF and HDF-5, to I/O transformation techniques as found in ADIOS and PLFS. Much of the development of ADIOS is conducted here at ORNL and led by Scott Klasky, a member of our Scientific Computing Group. Using these I/O middleware technologies in concert with our large-scale parallel I/O environment, we have delivered unprecedented levels of I/O performance to our end-users.
