SC20 Panel – OK, You Hate Storage Tiering. What’s Next Then?

By John Russell

November 25, 2020

Tiering in HPC storage has a bad rep. No one likes it. It complicates things and slows I/O. At least one storage technology newcomer – VAST Data – advocates dumping the whole idea. One large-scale user, NERSC storage architect Glenn Lockwood, sort of agrees. The challenge, of course, is that tiering is a practical necessity for the vast majority of HPC systems. Fast and flexible as the new solid-state hardware choices are, they are not yet cheap enough. And there’s a huge base of existing storage infrastructure (software and hardware) that’s not going away quickly.

An SC20 panel held last week – Diverse Approaches to Tiering HPC Storage – dug into current tiering trends and although it produced little agreement on when or precisely how storage tiering will enter its own end-of-life phase, the conversation covered a lot of ground. This was a large group of prominent panelists:  Glenn Lockwood, storage architect, NERSC/LBNL; Wayne Sawdon, CTO and strategy architect, IBM (Spectrum Scale); Jeff Denworth, VP of products and marketing, and co-founder, VAST Data; Liran Zvibel, CEO and founder, WekaIO; Curtis Anderson, senior software architect, Panasas; Matthew Starr, CTO, Spectra Logic; and Andreas Dilger, Lustre principal architect, Whamcloud (owned by DDN). The moderator was Addison Snell, CEO and founder, Intersect360 Research.

Lockwood’s early comments set the stage well and described, at least in an aspirational sense, how many in HPC view storage tiering. He’s on the team standing up NERSC’s next supercomputer, Perlmutter. Here’s a somewhat extended (and lightly edited) excerpt from his opening comments:

“We have a very large and very broad user base comprised of thousands of active users, hundreds of projects and hundreds of different applications every year. We also support not only traditional simulation workloads, but the large-scale data analytics workflows coming from experimental facilities such as telescopes and beam lines, and because of this breadth, our aggregate workload is very data intensive.

“Our storage systems typically see over an exabyte of I/O annually. Balancing this I/O intensive workload with the economics of storage means that at NERSC, we live and breathe tiering. And this is a snapshot of the storage hierarchy we have on the floor today at NERSC. Although it makes for a pretty picture, we don’t have storage tiering because we want to, and in fact, I’d go so far as to say it’s the opposite of what we and our users really want. Moving data between tiers has nothing to do with scientific discovery.

“To put some numbers behind this, last year we did a study that found that between 15% and 30% of that exabyte of I/O is not coming from our users’ jobs, but instead coming from data movement between storage tiers. That is to say that 15% to 30% of the I/O at NERSC is a complete waste of time in terms of advancing science. But even before that study, we knew that both the changing landscape of storage technology and the emerging large-scale data analysis and AI workloads arriving at NERSC required us to completely rethink our approach to tiered storage.

“Back in 2015, we began devising a strategic plan [for storage]. Our goal is ultimately to have as few tiers as possible for our users’ sake, while balancing the economic realities of storage media as they evolve over the next decade. Fortunately, economic trends are on our side for storage. And in anticipation of falling flash prices, we gave ourselves a goal of collapsing our performance tiers by 2020 to coincide with our 2020 HPC procurement, which is now called Perlmutter.

“I’m happy to report that we are on track with our 2020 goal. We’re currently in the process of deploying the world’s first all NVMe parallel file system at scale that will replace both our previous burst buffer tier and disk-based scratch tier. Once the dust has settled with Perlmutter, we’re aiming to further reduce down to just two tiers that map directly to the two fundamental categories of data that users store at NERSC: one a hot tier for data that users are actively using to carry out their work, and two a cold tier for data that users don’t actively need but are storing just in case they or someone else needs it somewhere down the road.

“By only having these two tiers, the only data movement that users need to worry about is moving data before and after their entire scientific project is carried out. At NERSC, this means that they will only have to worry about this once or twice per year. We will still keep two separate tiers because they represent two fundamentally different ways users interact with their data.

“The hot tier will be optimized for performance and getting data to and from user applications as fast as possible, so that they can spend their time advancing science rather than waiting for I/O. And the cold tier will be optimized for making data easy to search, index and share with their collaborators reliably. Because we don’t expect any magical media to hit the market between now and 2025, we’re relying on software to bridge the gap between different media types, so that even if there are both, say, flash and non-volatile memory in our hot tier, users don’t have to worry about which media is backing their data within that tier.”

Keep in mind Perlmutter is a government-funded supercomputer supported by technical and financial resources that are beyond the scope of most HPCers. That said, Lockwood’s description of the problem and NERSC’s target solution echo the perspectives of many in the HPC user community.
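To put Lockwood’s 15-to-30-percent figure in concrete terms, a quick back-of-the-envelope calculation using round numbers (the one-exabyte annual total is an approximation taken from his remarks, not a NERSC measurement) shows the scale of the movement overhead:

```python
# Rough arithmetic on the figures quoted above: if a center handles about an
# exabyte of I/O per year and 15-30% of it is inter-tier data movement, that
# is on the order of 150-300 PB of annual traffic doing no science.
ANNUAL_IO_EB = 1.0          # ~1 EB/year, a round-number assumption
PB_PER_EB = 1000            # decimal petabytes per exabyte

for fraction in (0.15, 0.30):
    wasted_pb = ANNUAL_IO_EB * PB_PER_EB * fraction
    print(f"{fraction:.0%} tier-movement share -> ~{wasted_pb:.0f} PB/year")
```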

Snell pointed out in shaping the discussion, “The implementation of solid state or flash storage has continued to grow among HPC users, and in most cases it exists together with, not separate from, conventional spinning disk hard drives. Having data on the right tier at the right time has become a compelling conversation point in determining what constitutes high performance storage, together with the potential latency of moving data between tiers. Meanwhile, the need for long term data stewardship is no less important. And to make things even more complicated, cloud computing is increasingly common in high performance computing, bringing in the notion of the cloud as yet another data location.”

Because the panel ran quite long (1.5 hours), presented here are just a few comments from each of the panelists on how their companies implement tiering.

PANASAS

Panasas’s view is that conventional tiering, when all things are considered, just hurts price-performance. “You’ve got your compute cluster. You have a hot tier that’s probably made of NVMe flash or something expensive. You have a cold tier that’s made out of hard drives and lower-cost technologies. Then you have the storage management software layer that’s moving data back and forth. What do you get out of that? You get unpredictable performance. If you have guessed right, or your data has been recently used, then it’s in the hot tier [and] performance is not bad. If you guessed wrong or the workloads have changed or the compute cluster is overloaded, the data is still in the cold tier, and your nodes are idle while you’re pulling data up to the hot tier,” said Anderson.

“In addition, you get three separate management domains. There are probably three separate products, and they all need management. This is an example of temperature-based data placement, where how recently something has been accessed determines where it lives, unless you’re reaching in and manually managing where data lives, which has its own set of costs and issues,” he said.

“We believe that data placement based upon size, not on temperature, is the right solution. So we’ve built the ActiveStor Ultra around this concept. Hard drives, just as an example, are not terrible products from an HPC storage perspective; they’re great products for HPC. They deliver a tremendous amount of bandwidth, as long as you only give them large files and large sequential transfers to operate on. They’re very inexpensive per terabyte of storage, and they deliver great performance. So this [ActiveStor Ultra] is an architecture that’s based on device efficiency, getting the most out of the fundamental building blocks of SSDs and HDDs that are part of the solution.”

Panasas is getting a 2x performance advantage over competitive products, according to Anderson, although he didn’t specify which products. Using what he called a single tier approach, where data placement is based on size using a mix of technologies, is the key. “Comparing HPC storage products is difficult, because there’s such wildly different hardware. But if you boil it down to gigabytes delivered per 100 spindles of hard drive, then you get a comparison we believe is fair. The reason we’re getting this benefit is the architectural difference,” he said.
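The contrast Anderson draws between temperature-based and size-based placement is easy to sketch. The toy policy below is purely illustrative (the 1 MiB cutoff and device names are assumptions, not Panasas’s actual ActiveStor Ultra logic):

```python
# Hypothetical sketch of size-based placement: small files go to flash (where
# random I/O is cheap), large files go to HDD (where sequential bandwidth per
# dollar is best). Thresholds are illustrative only.
SMALL_FILE_LIMIT = 1 << 20      # 1 MiB: purely illustrative cutoff

def place_by_size(size_bytes: int) -> str:
    """Return the device class for a new file based only on its size."""
    return "nvme_flash" if size_bytes < SMALL_FILE_LIMIT else "hdd"

def place_by_temperature(seconds_since_last_access: int) -> str:
    """The conventional alternative: placement driven by recency of access."""
    return "nvme_flash" if seconds_since_last_access < 7 * 86400 else "hdd"

print(place_by_size(64 * 1024))          # nvme_flash -- small file
print(place_by_size(10 * 1024**3))       # hdd -- large sequential file
print(place_by_temperature(30 * 86400))  # hdd -- cold by recency
```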

VAST DATA

VAST Data has vast aspirations and Denworth’s enthusiastic pitch matched those aspirations. Time will tell how this young company fares. About tiering, he said, “Basically just throw the whole concept right out the window. We don’t see a lot of utility in the topic in 2020.” The key ingredients in VAST’s formula for dispensing with tiering include QLC flash memory, 3D XPoint, NVMe fabric, fresh coding, and buying scale.

“What we’ve done is we’ve combined a next generation computing layer that’s built in stateless Docker containers over a next generation NVMe fabric which can be implemented over Ethernet or InfiniBand, essentially, to disaggregate the containers from the underlying storage media, and make it such that they share all of the media, all of the NVMe devices within the cluster,” said Denworth. “That allows us to basically implement a new class of global codes, global codes around how we get to really, really efficient data protection. Imagine two and a half percent overhead for RAID at scale without compromising on resilience, and global data reduction codes, such that you can get to now dramatic efficiency gains from where customers have come from in the past. And finally, global flash translation codes that get up to 10 years of use out of low-cost QLC flash that other storage systems can’t even use in their architectures.”

“When you put this together, you now have this stateless containerized architecture that can scale to exabytes, doesn’t cost any more for your infrastructure than what you paid for hard drive-based infrastructure, and the cost of flash is declining at a rate that’s much more aggressive than hard drives, such that we feel that we’re on the right side of that curve,” he said.
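The “two and a half percent overhead” figure Denworth cites is essentially a consequence of very wide erasure-code stripes. The stripe geometries below are illustrative assumptions chosen to show the arithmetic, not VAST’s published layout:

```python
# Worked example: with very wide erasure-code stripes, parity overhead shrinks.
# The stripe geometries below are illustrative, not a vendor specification.
def parity_overhead(data_strips: int, parity_strips: int) -> float:
    """Fraction of extra capacity consumed by parity."""
    return parity_strips / data_strips

for data, parity in [(8, 2), (10, 2), (150, 4)]:
    pct = parity_overhead(data, parity) * 100
    print(f"{data}+{parity} stripe -> {pct:.1f}% parity overhead")
# A conventional 8+2 layout costs 25%; a 150+4 layout costs ~2.7%, which is
# roughly the "two and a half percent" figure quoted above.
```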

Denworth noted the rise of AI workloads as an important factor driving change: “AI changes the I/O paradigm altogether. Where HPC systems of yesterday were designed to optimize for writes, and then burst buffers came in and further optimized for writes, on the flip side these new algorithms want to randomly read through the largest amounts of data at the fastest rates of speed, and you just can’t prefetch a random read. And we’re not alone on the [flash] island. Organizations like the DOE Office of Science have concluded that the only way to efficiently feed next generation AI algorithms is to use random access flash.”

WEKAIO

WekaIO’s take on tiering was interesting. “It’s a central part of what we do,” said Zvibel. He described WekaIO as an enterprise-grade storage file system for customers that need the NAS feature set but also need the performance of a parallel file system. “If you go with NAS, you get feature rich enterprise grade, but you’re usually limited by your scale and performance. On the other hand, if you’re going with a parallel file system, you’re going to get a lot of scale, great throughput [but] be limited on mixed workloads and low latency I/Os.”

The WekaIO file system, says the company, has been built for use with flash, optimized for NVMe and cloud performance, and is agnostic about whose hardware is used (see architecture slide).

“You buy your commodity servers from vendors you like, put the Weka software on, you’re getting a parallel file system for NVMe over fabric. Then you install our clients on your compute and the I/O runs parallel to all of the NVMe-over-fabric servers and [provides] the extra performance. We also support GPUDirect Storage for Nvidia GPUs, NFS, SMB and S3,” said Zvibel.

Regarding tiering, Zvibel said, “A lot of the projects have a portion of the data that is their active window. This could be hundreds of terabytes. It could be a few petabytes or dozens of petabytes, but usually they have a lot more capacity that is stored, and they don’t need to access it for every run. For that we’re enabling tiering to any S3-enabled object storage, and we can tier to more than one, on-prem or in the cloud. You can have your data stored in separate locations. We’re not just tiering, we’re actually enabling a feature we call snap-to-object. So if you’re tiering to two locations, you can save snapshots and have the Weka system spun up on the other side, and basically keep running from that point in time.”
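As a rough illustration of what tiering a cold file out to an S3-compatible object store involves, here is a minimal sketch using the generic boto3 client. The endpoint, bucket and key names are placeholders, and this is not Weka’s internal mechanism (which, per Zvibel, also captures snapshots via snap-to-object):

```python
# Illustrative only: push a cold file to an S3-compatible object store and
# pull it back on demand. Endpoint, bucket and keys are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # any S3-compatible target
    region_name="us-east-1",
)

def tier_out(local_path: str, bucket: str, key: str) -> None:
    """Copy a file's contents to object storage; the fast tier can then keep
    only metadata and re-hydrate the data on demand."""
    s3.upload_file(local_path, bucket, key)

def tier_in(bucket: str, key: str, local_path: str) -> None:
    """Bring a cold object back onto the fast tier."""
    s3.download_file(bucket, key, local_path)

# tier_out("/weka/project/run_2019/ckpt.dat", "cold-tier", "run_2019/ckpt.dat")
```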

SPECTRA LOGIC

Starr from long-time tape powerhouse Spectra Logic briefly reviewed the storage technology pyramid and noted its relative inefficiencies. He singled out the weakness of the metadata handling capabilities of file systems as an ongoing issue. He also singled out 3D XPoint’s load-store functionality as a potential game-changer “for what applications can do for doing things like snapshots; instead of writing to a disk or an NVMe system to a disk drive interface, or file system interface, [they can use] XPoint to do load-store and actually get persistent storage.”

On balance, Starr sees the emergence of dynamic tiering across hybrid technologies as the wave of the future. “I think the model is going to [be one] where you end up with CPU RAM, XPoint, NVMe and compute together. That system will be the scratch file system, the sandbox for people to play in. Then [you’ll have] a separate storage area, made of hybrid [technologies] of the flash arrays, HDD, tape, cloud, and most likely that’s going to have an object RESTful interface.” He thinks that latter interface is likely to “be an immutable interface so that as data’s coming into this scratch area, and being written back out, new versions are being written back out on the right side, when they need to be written out.”

Overall Starr suggested the following trends: “First, we’re seeing a move off HSMs (hierarchical storage management systems) – not saying those systems are going away tomorrow, but the idea is that HSMs are going to be replaced with object storage interfaces. I think the new storage tiers like XPoint are going to start changing how applications are written, especially when you think about snapshots, or VM farms, how much RAM you can actually get into a server with 3D XPoint. Customers are going to look at open standards a lot more, like LTFS (linear tape file system). Those are the winners; just like Linux won the Unix war, LTFS is going to win the tape format standard war.

“I think we are going to see a lot more RESTful object storage interfaces and open standards being deployed where you can deploy content in the cloud [but still keep] a copy of that data on site for easy retrieval, but have a copy in the cloud to share it with other people. [I think we’ll see more] immutable archives. Lastly, I think that we’re going to be looking a lot more at search [capability] and how we perform data capture and search before we put data deep into an archive. Those are the things I think are trending in the storage areas today.”

IBM

Spectrum Scale, formerly GPFS, has a long history of storage tiering support. “By 2000, two years after we introduced the product, we supported it for tape using the XDSM DMAPI standard. In 2006, we introduced Information Lifecycle Management, which divides the online storage into pools, allowing us to distinguish fast storage from slower storage. Along with the pools we introduced policy rules. In 2010, we introduced Active File Management, which allows us to tier data across the wide area network to other sites in other locations,” said Sawdon.


“In 2016, we introduced transparent cloud tiering to allow us to move data to and from cloud storage. Spectrum Scale continues to invest in data tiering to extend its common namespace across data lakes and object storage. We are also investing to transparently tier data into and out of client compute nodes using storage-rich servers with local NVMe and persistent memory.”

Unlike Denworth, IBM sees storage tiering as something that will continue to be important. The goal, reiterated Sawdon, is to move the data as close to the computation as possible to reduce the time required to derive value from the data. IBM believes Spectrum Scale will keep evolving to meet the task of serving different media (and functional) cost-performance requirements.

“With today’s analytics on data, we see an increase in both the volume of the data and the value of the data. High volume increases the demand for tiering to cheaper storage; high value increases the demand for tiering to high-performance storage to reduce the time for analytics. Thus, we conclude that data tiering is important now and will be even more important in the future,” said Sawdon.
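A toy model of the pool-and-policy tiering Sawdon describes might look like the following. It is written in plain Python for illustration; Spectrum Scale expresses such rules in its own ILM policy language, and the thresholds and pool names here are assumptions:

```python
# Toy illustration of a threshold-driven migration rule between two pools:
# when the fast pool passes a high-water mark, migrate the least recently
# used files to the capacity pool until it drops below a low-water mark.
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    capacity: int
    files: dict = field(default_factory=dict)   # path -> (size, last_access)

    @property
    def used(self) -> int:
        return sum(size for size, _ in self.files.values())

def migrate(fast: Pool, slow: Pool, high: float = 0.80, low: float = 0.60) -> None:
    if fast.used < high * fast.capacity:
        return
    # Evict coldest files first until the fast pool reaches the low-water mark.
    for path, (size, _) in sorted(fast.files.items(), key=lambda kv: kv[1][1]):
        slow.files[path] = fast.files.pop(path)
        if fast.used <= low * fast.capacity:
            break

fast = Pool("flash", capacity=100)
slow = Pool("nearline", capacity=10_000)
fast.files = {"a": (50, 100), "b": (30, 50), "c": (10, 200)}   # (size, last_access)
migrate(fast, slow)
print(sorted(fast.files), sorted(slow.files))   # "b" (coldest) moved to nearline
```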

Sawdon also emphasized Spectrum Scale’s software defined nature. “The benefit of being software defined is we can run on any hardware; we don’t care if we’re running on cloud hardware, we don’t care if it’s on prem. So we do have HPC deployments in the cloud. What we’re seeing for cloud deployments is [that] these are, in fact, larger than our on-prem ones. Virtual machines are easy to spin up and getting 100,000 node clusters just happens. So we’re seeing lots of activity in that space. The interesting thing for Spectrum Scale is we’ve built common namespaces across different installations. You can build common namespaces between your cloud deployment and an on-prem deployment, and transparently move data if the customer wants to do it. Today, customers aren’t doing that yet. But we do have customers in the cloud who are running HPC.”

WHAMCLOUD/LUSTRE

Lustre has long been an HPC mainstay. Since DDN acquired Whamcloud (Lustre) from Intel, it has been working to incorporate common features used in enterprise file systems and to add ease of use features. Lustre underpins DDN’s EXAScaler product line (see HPCwire coverage). Dilger provided a brief overview of tiering in Lustre.

“The primary mechanism by which Lustre achieves storage tiering is through the use of storage pools to identify different storage technologies, and then the file layout, which is stored on every file individually and can be specified either as a default for the whole file system, per directory or per file. This allows a great deal of flexibility in terms of where files are located. To avoid applications or users trying to consume all of this storage space in, say, a flash pool on a system, there are quotas to prevent abuse of the resources,” said Dilger.

Typically, files are initially placed on a flash pool, then files can be mirrored to a different storage pool. “If the flash storage pool is becoming full, the policy agent can release the copy of older files and leave the one copy on the disk. It’s even possible to split a single file across different storage types. This is practical for very large files that can’t necessarily all fit into a single storage pool, or for files that have different types of data in them, for instance, an index at the start of the file, and then large data stored at the end of the file,” said Dilger.
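A minimal sketch of the mirror-then-release pattern Dilger describes, using hypothetical Python stand-ins (in practice this is expressed through Lustre file layouts and driven by a policy engine, not application code):

```python
# Hypothetical model of the pattern described above: a file starts on the
# flash pool, gets mirrored to the disk pool, and the flash replica is
# released when the flash pool fills up, leaving the disk copy in place.
from dataclasses import dataclass, field

@dataclass
class FileLayout:
    path: str
    replicas: set = field(default_factory=lambda: {"flash"})

    def mirror_to(self, pool: str) -> None:
        self.replicas.add(pool)

    def release_from(self, pool: str) -> None:
        # Only drop a replica if at least one other copy remains.
        if len(self.replicas) > 1:
            self.replicas.discard(pool)

f = FileLayout("/lustre/project/results.dat")
f.mirror_to("disk")        # second copy on the HDD pool
f.release_from("flash")    # flash pool under pressure: keep only the HDD copy
print(f.replicas)          # {'disk'}
```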

In addition to managing storage on the server, Lustre can now also manage storage directly in the client. “This is through the persistent client cache. This allows files that are very large or need very low latency access to be stored on local storage. This leverages a local file system such as ext4 or NOVA. There are two types of storage for the persistent client cache: either a read-only mirror that’s shared among the file system and multiple clients, or an exclusive copy where the client can write locally and then the data is mirrored asynchronously back to the Lustre file system,” he said.
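The “exclusive copy” mode is essentially a write-back cache: writes land on fast local storage first and are copied back to the shared file system later. A toy sketch of that flow, with placeholder paths and a manual flush standing in for Lustre’s actual asynchronous mirroring:

```python
# Toy write-back cache: writes go to fast local storage immediately and are
# copied back to the shared file system afterwards. Paths are placeholders.
import shutil
from pathlib import Path

LOCAL_CACHE = Path("/tmp/pcc_cache")       # stand-in for local NVMe
SHARED_FS = Path("/tmp/shared_lustre")     # stand-in for the parallel file system
dirty: set[Path] = set()

def write(name: str, data: bytes) -> None:
    LOCAL_CACHE.mkdir(parents=True, exist_ok=True)
    (LOCAL_CACHE / name).write_bytes(data)  # low-latency local write
    dirty.add(LOCAL_CACHE / name)           # remember it needs syncing back

def flush() -> None:
    SHARED_FS.mkdir(parents=True, exist_ok=True)
    while dirty:
        src = dirty.pop()
        shutil.copy2(src, SHARED_FS / src.name)  # mirror-back step

write("timestep_0001.bin", b"\x00" * 1024)
flush()
print(sorted(p.name for p in SHARED_FS.iterdir()))
```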

Discussion around how quickly NVMe, flash, and 3D XPoint technology would overturn the current tiering paradigm was lively but ultimately inconclusive. Best to watch the SC20 video for that interchange. Broadly, in the near term, most panelists think dynamic tiering across media types will collapse tiers, at least in the sense that the tiers are increasingly invisible to users and applications. Also, the persistence of POSIX, installed infrastructure (HDD et al.) and improving tape performance suggest some form of discrete storage tiering will remain for quite some time.

There was a fair amount of discussion around cloud storage economics and interestingly also around AI application I/O patterns. The thinking on AI I/O patterns was that they were largely developed on smaller devices (laptops) without much thought about large-scale systems or storage systems behavior – that may change.

On balance, the panelists agreed NVMe flash and 3D XPoint are the future; it’s a matter of when, and most of the panelists expect an evolution rather than an abrupt replacement of existing storage tech.

Lockwood noted, “Ideally, economics is what allows us to keep collapsing tiers and not the other way around. So the project cost, how much we spend on storage, doesn’t change. For Perlmutter, that is approximately true. Our previous system, Cori, I want to say the storage budget was somewhere between 10% and 15% of the total system cost. For Perlmutter, it was about the same. So we’re just leveraging the falling economics of flash. Where additional money probably will be needed is in the software and software enablement part of this, and whether or not you consider that part of the capital expenditure or non-recurring engineering or internal R&D effort is a much more complicated question.”
