Storage at Exascale: Some Thoughts from Panasas CTO Garth Gibson

By Nicole Hemsoth

May 25, 2011

Exascale computing is not just about FLOPS. It will also require a new breed of external storage capable of feeding these exaflop beasts. Panasas co-founder and chief technology officer Garth Gibson has some ideas on how this can be accomplished and we asked him to expound on the topic in some detail.

HPCwire: What kind of storage performance will need to be delivered for exascale computing?

Garth Gibson: The top requirement for storage in an exascale supercomputer is the capability to store a checkpoint in approximately 15 minutes or less, so as to keep the supercomputer busy with computational tasks most of the time. If you can take a checkpoint in 15 minutes, your compute period can be as short as two and a half hours and you still spend only 10 percent of your time checkpointing. The size of the checkpoint is determined by the memory size, which some experts expect will be approximately 64 petabytes based on the power and capital costs involved. From that memory size, we estimate the storage system must be capable of writing at 70 terabytes per second to support a 15-minute checkpoint.
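To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 64-petabyte memory, 15-minute window, and 2.5-hour compute period come straight from Gibson's answer; everything else is simple division.

```python
# Back-of-the-envelope checkpoint bandwidth estimate (figures from the interview).
PB = 10**15  # bytes in a petabyte
TB = 10**12  # bytes in a terabyte

memory_size = 64 * PB        # projected exascale main memory
checkpoint_window = 15 * 60  # 15-minute checkpoint, in seconds

required_bw = memory_size / checkpoint_window
print(f"Required write bandwidth: {required_bw / TB:.0f} TB/s")  # -> ~71 TB/s

# Overhead check: a 15-minute checkpoint after every 2.5 hours of compute
compute_period = 2.5 * 3600
print(f"Time spent checkpointing: {checkpoint_window / compute_period:.0%}")  # -> 10%
```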

HPCwire: Given the slower performance slope of disk compared to compute, what types of hardware technologies and storage tiering will be required to provide such performance?

Gibson: While we see peak throughput in the hundreds of gigabytes per second range today, we have to scale roughly 1000x to reach the required write speed for exascale. The challenge with the 70 terabyte-per-second write requirement is that traditional disk drives will not get significantly faster over the coming decade, so it will take almost 1000x today's number of spindles to sustain this level of write capability.

After all, we can only write as fast as the sum of the individual disk drives. We can look at other technologies like flash storage, such as SSDs, with faster write capabilities. The challenge there, however, is the huge cost delta between flash-based solutions and ones based on traditional hard drives. Given that the scratch space will likely be at least 10 times the size of main memory, we are looking at 640 petabytes of scratch storage, which translates to over half a billion dollars in flash-based storage alone.
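A quick sketch of both numbers follows. The 70 TB/s target and 640 PB scratch size are from the interview; the per-drive streaming rate (roughly 100 MB/s was typical circa 2011) and the flash price per gigabyte are my assumptions, used only to show the shape of the calculation.

```python
TB, PB, GB = 10**12, 10**15, 10**9

target_bw = 70 * TB      # required checkpoint bandwidth (from the interview)
drive_bw = 100 * 10**6   # assumed ~100 MB/s sustained per disk, typical c. 2011
print(f"Spindles needed: {target_bw / drive_bw:,.0f}")  # -> 700,000 drives

scratch = 640 * PB       # 10x the 64 PB of main memory (from the interview)
flash_usd_per_gb = 1.00  # assumed c. 2011 flash price; illustrative only
print(f"All-flash scratch cost: ${scratch / GB * flash_usd_per_gb / 1e6:,.0f}M")  # -> $640M
```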

The solution is a hybrid approach in which the data is initially copied to flash at 70 terabytes per second, but the second layer gets 10 times as long to write from flash to disk, lowering the disk tier's bandwidth requirement to 7 terabytes per second and its component count to only about 100x today's systems. You get the performance out of flash and the capacity out of spinning disk. In essence, the flash layer is really temporary “cheap memory,” possibly not part of the storage system at all, making little or no use of its non-volatility, and perhaps not using a disk interface like SATA.
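The two-tier arithmetic works out as in the sketch below. The 10x drain time is from the interview; the double-buffering assumption (the flash tier absorbs the next checkpoint while the previous one drains, so it holds two checkpoints' worth of data) is my illustration, not something Gibson specifies.

```python
TB, PB = 10**12, 10**15

checkpoint = 64 * PB
absorb_time = 15 * 60           # flash tier absorbs the checkpoint in 15 minutes
drain_time = 10 * absorb_time   # disk tier gets 10x as long to drain it

print(f"Flash ingest: {checkpoint / absorb_time / TB:.0f} TB/s")  # -> ~71 TB/s
print(f"Disk drain:   {checkpoint / drain_time / TB:.1f} TB/s")   # -> ~7.1 TB/s

# Assumption (mine, not from the interview): double-buffered flash must hold
# two checkpoints at once, one draining and one being absorbed.
print(f"Flash capacity needed: {2 * checkpoint / PB:.0f} PB")     # -> 128 PB
```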

HPCwire: What types of software technologies will have to be developed?

Gibson: If we solve the performance/capacity/cost issue with a hybrid model that uses flash as a temporary memory dump before data is written off to disk, it will require a significant amount of intelligent copy and tiering software to manage the data movement between main memory and the temporary flash memory, and from there on to spinning disks. It is not even clear which layer of the stack (application, runtime system, operating system, or file system) should manage this flash memory.
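As a toy illustration of the tiering idea (all names here are hypothetical, and this is a sketch of the general pattern rather than any vendor's implementation): the compute side dumps checkpoint chunks into a fast flash-backed buffer and returns immediately, while a background mover drains the buffer to the slower disk tier at its own pace.

```python
import queue
import threading

flash_buffer: queue.Queue = queue.Queue()  # stands in for the flash tier

def write_to_disk_tier(chunk: bytes) -> None:
    pass  # placeholder for the real (slow) disk-tier write path

def drain_to_disk() -> None:
    # Background mover: runs at ~1/10th the ingest bandwidth, long after
    # the compute nodes have resumed work.
    while (chunk := flash_buffer.get()) is not None:
        write_to_disk_tier(chunk)

mover = threading.Thread(target=drain_to_disk)
mover.start()

for chunk in (b"chunk0", b"chunk1", b"chunk2"):  # checkpoint data from memory
    flash_buffer.put(chunk)  # fast: compute resumes as soon as this returns
flash_buffer.put(None)       # sentinel: no more chunks
mover.join()
```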

Perhaps more challenging, there will have to be a significant amount of software investment in building reliability into the system. An exascale storage system is going to have two orders of magnitude more components than current systems. With a lot more components comes a significantly higher rate of component failure. This means more RAID reconstructions needing to rebuild bigger drives and more media failures during these reconstructions.

Exascale storage will need higher tolerance for failure as well as the capability for much faster reconstruction, such as is provided by Panasas’ parallel reconstruction, in addition to improved defense against media failures, such as is provided by Panasas’ vertical parity. Even more important is end-to-end integrity checking of stored data, data in transit, data in caches, data pushed through servers, and data received at compute nodes, because with so much data flowing, detecting the inevitable flipped bit is going to be key. The storage industry has started on this type of high-reliability feature development, but exascale computing will need these mechanisms years before the broader engineering marketplace requires them.
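A minimal sketch of the end-to-end integrity idea (a generic illustration, not Panasas's implementation): the writer computes a checksum at the source, the checksum travels with the data, and every later consumer re-verifies before trusting the bytes.

```python
import hashlib

def wrap(data: bytes) -> tuple[bytes, str]:
    """Writer side: pair the payload with a checksum computed at the source."""
    return data, hashlib.sha256(data).hexdigest()

def verify(data: bytes, digest: str) -> bytes:
    """Any later hop (network receive, cache fill, server relay, final read):
    re-verify against the source checksum before trusting the bytes."""
    if hashlib.sha256(data).hexdigest() != digest:
        raise IOError("integrity check failed: bit flipped in transit or at rest")
    return data

payload, digest = wrap(b"checkpoint block 42")
assert verify(payload, digest) == payload  # a flip anywhere en route would raise
```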

HPCwire: How will metadata management need to evolve?

Gibson: At Carnegie Mellon University we have already seen, with tests run at Oak Ridge National Laboratory, that it doesn’t take a very big configuration before opening all the files, end-to-end, starts to take thousands of seconds. As you scale up the supercomputer, the increased processor count puts tremendous pressure on the available metadata server concurrency and throughput. Frankly, this is one of the key pressure points we have right now: simply creating, opening, and deleting files can really eat into your available compute cycles. This is the base problem with metadata management.

Exascale is going to mean 100,000 to 250,000 nodes or more. With hundreds to thousands of cores per node and many threads per core — GPUs in the extreme — the number of concurrent threads in exascale computing can easily be estimated in the billions. With this level of concurrent activity, a highly distributed, scalable metadata architecture is a must, with dramatically superior performance over what any vendor offers today. While we at Panasas believe we are in a relatively good starting position, it will nevertheless require a very significant software investment to adequately address this challenge.
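One common way to distribute metadata load, sketched below, is to hash each path to one of many metadata servers so that file creates, opens, and deletes spread out instead of serializing on a single node. This is a generic illustration of the scaling idea, not Panasas's architecture; a real system must also handle directories, renames, and rebalancing, and the server count here is hypothetical.

```python
import hashlib

NUM_MDS = 4096  # hypothetical metadata server count for an exascale system

def mds_for(path: str) -> int:
    """Map a file path to one of many metadata servers by hashing, so that
    create/open/delete load spreads instead of bottlenecking on one server."""
    digest = hashlib.md5(path.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_MDS

# A checkpoint where every rank creates its own file touches many servers:
paths = [f"/scratch/run042/ckpt.{rank:07d}" for rank in range(6)]
print({p: mds_for(p) for p in paths})
```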

HPCwire: Do you believe there is a reasonable roadmap to achieve all this? Do you think the proper investments are being made?

Gibson: I believe there is a well-reasoned and well-understood roadmap to get from petascale to exascale. However, it will take a lot more investment than is currently being made to reach the roadmap’s goals. The challenge is the return on investment for vendors. Consider that the work will take most of the time running up to 2018, when the first exascale systems will be needed, and that there will be barely more than 500 publicly known petascale computers at that time, based on TOP500.org’s historical seven-year lag in the scale of the 500th-largest computer.

It is going to be hard to pay for systems development on that scale now, knowing that there will be only a few installations to apportion the cost against this decade, and that it will take most of the following decade for the exascale installed base to grow to 500. Exascale features become a commercially viable program only far enough down the line that the investment cost can be spread across many customers, such as those doing oil exploration or design modeling.

However, in the meantime, funding a development project like exascale storage could sink a small company, and it would be highly unattractive on the P&L of a publicly traded company. What made petascale storage systems such as Panasas and Lustre a reality was the investment the government made with DARPA in the 1990s and with the DOE Path Forward program this past decade. Similar programs are going to be required to make exascale a reality. The government needs to share in this investment if it wants production-quality solutions to be available in the target exascale timeframe.

HPCwire: What do you think is the biggest hurdle for exascale storage?

Gibson: The principal challenge at this scale will be software capability. Software that can manage these levels of concurrency, stream at such high bandwidth without bottlenecking on metadata throughput, and at the same time ensure high levels of reliability, availability, integrity, and ease of use, all in a package that is affordable to operate and maintain, is going to require a high level of coordination; it cannot come from stringing together a bunch of open-source modules. Simply making the data path go fast by hooking it together with baling wire and duct tape is possible, but it gives you false confidence, because the capital costs look good and there is a piece of software that runs for a while and appears to do the right thing.

But in fact, having a piece of software that maintains high availability, doesn’t lose data, and has high integrity and a manageable cost of operation is way harder than many people give it credit for being. You can see this tension today in the Lustre open-source file system, which seems to require a non-trivial, dedicated staff trained to keep the system up and effective.

HPCwire: Will there be a new parallel file system for exascale?

Gibson: The probability of starting from scratch today and building a brand new production file system deployable in time for 2018 is just about zero. A huge investment in software technology is required to get to exascale, and we cannot get there without significant further investment in the parallel file systems available today. So if we want to hit the exascale timeline, it is going to take investment in both new ideas and existing implementations.
