A new era needs new HPC/AI storage

October 30, 2019

New ClusterStor E1000 HPC/AI storage system from Cray, a Hewlett Packard Enterprise company

Introduction / Executive Summary

High-performance computing (HPC) is changing fundamentally, with disruptive implications for HPC storage. The classic modeling and simulation workloads running on supercomputers and large HPC clusters are now joined on the same machines by new workloads such as machine learning and deep learning (ML/DL) running in mission- or business-critical workflows. In addition, high-performance data analysis (HPDA) of experimental and observational data from much more powerful instruments, and of machine-generated data from Internet of Things (IoT) sensors at the edge, soon to be connected by fifth-generation wireless (5G) cellular networks, is being integrated into these workflows.

Cray calls this new era the exascale era. It is characterized by converged workloads for crucial workflows running on the same machine, often simultaneously.

Cray saw this coming years ago and has developed the new Cray® ClusterStor® E1000 HPC/AI storage platform to enable HPC users to efficiently cope with the data explosion in this new paradigm.

Storage challenges facing today’s customers

Most customers have separate infrastructures for classic HPC (modeling and simulation workloads) and AI (ML/DL workloads) today.

  • Supercomputers and HPC clusters are based mainly on compute nodes with CPUs, served by HDD-based parallel HPC storage with capacity measured in petabytes (PB).
  • Artificial intelligence (AI) infrastructures are based mainly on compute nodes with GPUs, served by SSD-based enterprise network-attached storage (NAS) with capacity measured in terabytes (TB).

This status quo becomes a challenge once AI methods like ML/DL run together with modeling and simulation workloads in workflows on supercomputers and HPC clusters. More than 50% of HPC users already run machine learning programs today.[1] In general, HDD-based storage was designed for the write-intensive, sequential input/output (I/O) of large files. It is not well suited to serving the read-intensive I/O of large numbers of often small files that must be accessed in random order during the training phases of ML/DL models.
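To make the contrast concrete, the following minimal sketch (not Cray code; the file counts, sizes, and paths are purely illustrative) compares the two access patterns: one large, sequential checkpoint-style write versus many small training-sample files read back in random order.

```python
import os
import random
import shutil
import tempfile
import time

# Hypothetical illustration: sequential large-file write (simulation checkpoint)
# versus random small-file reads (ML/DL training samples). Sizes are kept tiny so
# the sketch runs anywhere; real workloads involve GB-scale checkpoints and
# millions of sample files.
workdir = tempfile.mkdtemp()

# 1) Checkpoint-style I/O: one large file, written sequentially in big chunks.
checkpoint = os.path.join(workdir, "checkpoint.bin")
t0 = time.perf_counter()
with open(checkpoint, "wb") as f:
    for _ in range(64):
        f.write(os.urandom(1 << 20))   # 64 x 1 MiB sequential writes
print(f"sequential large-file write: {time.perf_counter() - t0:.3f} s")

# 2) Training-style I/O: many small sample files, read back in random order.
samples = []
for i in range(512):
    path = os.path.join(workdir, f"sample_{i:04d}.dat")
    with open(path, "wb") as f:
        f.write(os.urandom(64 << 10))  # 64 KiB per sample
    samples.append(path)

random.shuffle(samples)                # each training epoch shuffles the dataset
t0 = time.perf_counter()
for path in samples:
    with open(path, "rb") as f:
        f.read()
print(f"random small-file reads: {time.perf_counter() - t0:.3f} s")

shutil.rmtree(workdir)                 # clean up the temporary files
```

On an HDD the second pattern is dominated by seek latency rather than transfer rate, which is why a flash tier is typically placed in front of training data.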

As workloads converge, customers cannot easily stay on their current HPC storage infrastructure. If they do, I/O bottlenecks will cause job pipeline congestion, and expensive compute infrastructure will sit idle waiting for I/O to complete. At the same time, they cannot simply scale their current AI storage, deployed at terabyte capacities, to the petabyte-scale capacities needed by supercomputers and HPC clusters running converged HPC and AI workloads. SSD storage is much more expensive than HDD storage today, and IDC research projects that in 2023 SSD will still be eight times more expensive than HDD in $/GB.[2]
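A back-of-the-envelope calculation illustrates why simply scaling the SSD tier to HPC capacities is unattractive. In this hypothetical sketch the absolute $/GB values are placeholders, not IDC figures; only the roughly 8x price ratio is taken from the cited projection.

```python
# Hypothetical cost sketch: the absolute $/GB values below are made up for
# illustration; only the ~8x SSD-to-HDD price ratio reflects the cited projection.
HDD_USD_PER_GB = 0.02                      # placeholder price
SSD_USD_PER_GB = HDD_USD_PER_GB * 8        # projected ~8x ratio in 2023

usable_capacity_pb = 100                   # a modest exascale-era file system
capacity_gb = usable_capacity_pb * 1_000_000

print(f"All-HDD: ${capacity_gb * HDD_USD_PER_GB:,.0f}")
print(f"All-SSD: ${capacity_gb * SSD_USD_PER_GB:,.0f}")

# A hybrid system keeps a small flash tier for random/small-file I/O and
# leaves the bulk of the capacity on HDD.
flash_fraction = 0.10
hybrid = capacity_gb * (flash_fraction * SSD_USD_PER_GB
                        + (1 - flash_fraction) * HDD_USD_PER_GB)
print(f"Hybrid (10% flash): ${hybrid:,.0f}")
```

Even with generous assumptions, an all-flash system at petabyte scale consumes a disproportionate share of the total system budget, which is the tradeoff the hybrid approach is meant to avoid.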

Cray ClusterStor E1000 solves the exascale storage challenge

Customers should consider moving to the Cray® ClusterStor® E1000 HPC/AI storage solution.

“It is a flexible HPC/AI storage platform that leverages the strength of both SSD and HDD without incurring their respective weaknesses. ClusterStor E1000 solves the I/O bottleneck in the era of converged workloads without breaking the bank by orchestrating the data flow with the workflow through intelligent software. Cray saw the storage challenges of converged workloads coming as early as 2017 when development of the next generation ClusterStor storage system began,” says Uli Plechschmidt, director of storage product marketing for Cray.

Introducing the Cray ClusterStor E1000

Figure 1. Cray ClusterStor E1000 at a glance

Cray ClusterStor E1000 benefits

Performance: Meets the most demanding requirements of HPC/AI workloads, with up to 1.6 TB/s and 50 million input/output operations per second (IOPS) per SSD rack, and up to 120 GB/s and up to 10 PB usable capacity per HDD rack.

Flexibility: All-SSD, all-HDD, and hybrid configurations orchestrate the data flow with the workflow through intelligent Cray software. Attaches to any supercomputer or HPC cluster from any vendor via 100/200 Gigabit Ethernet, EDR/HDR InfiniBand, or the Cray Slingshot™ interconnect.

Scalability: Enables customers to start small, with as little as one-fifth of a rack, and to grow wherever they need to go, up to and beyond the largest ClusterStor E1000 system already contracted (nearly 10 TB/s with more than 700 PB usable capacity).
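Using the per-rack figures quoted above, a rough and purely hypothetical sizing exercise shows how a configuration on the scale of the largest contracted system could be composed from HDD and SSD racks; the mix below is illustrative, not Cray's actual design.

```python
import math

# Per-rack figures from the benefits list above.
SSD_RACK_TBPS = 1.6      # TB/s per all-SSD rack
HDD_RACK_GBPS = 120      # GB/s per all-HDD rack
HDD_RACK_PB   = 10       # usable PB per all-HDD rack

# Approximate targets quoted for the largest contracted system.
target_tbps = 10.0
target_pb = 700

# Capacity is carried by HDD racks...
hdd_racks = math.ceil(target_pb / HDD_RACK_PB)
hdd_tbps = hdd_racks * HDD_RACK_GBPS / 1000   # convert GB/s to TB/s

# ...and any remaining bandwidth comes from flash racks in front of them.
ssd_racks = max(0, math.ceil((target_tbps - hdd_tbps) / SSD_RACK_TBPS))

print(f"HDD racks for {target_pb} PB: {hdd_racks} (~{hdd_tbps:.1f} TB/s)")
print(f"Additional SSD racks toward {target_tbps} TB/s: {ssd_racks}")
```

In practice the achievable aggregate bandwidth also depends on how the software places data across the tiers, so this should be read as an upper-bound style estimate rather than a configuration rule.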

Conclusion

The converged workflows of the exascale era break legacy HPC storage and AI storage models. The “cost of doing nothing” is either massive job pipeline congestion through I/O bottlenecks or spending most of the total system budget on storage. A storage solution is needed for the new era.

“Unlike HPC/AI storage-only vendors, we believe that storage is a means to an end. The end being insights and breakthroughs that can make the world a better place to work and live in,” says Plechschmidt. “Cray has architected the ClusterStor E1000 in a way that can meet the storage requirements of converged workloads in the most efficient way. Exploiting the strengths of HDD and SSD without incurring their weaknesses — for HDD the challenge to serve random I/O of large numbers of small files and for SSD the high cost per gigabyte — by intelligent hardware design and through intelligent software that aligns the data flow with the workflow. Due to that differentiated design, customers who choose the ClusterStor E1000 typically will be able to achieve their storage requirements in a more efficient way — enabling them to spend more of their total system budget on CPU/GPU compute nodes when compared to alternative HPC/AI storage offerings.”


References

[1] Intersect360, HPC User Budget Map Survey: Machine Learning’s Impact on HPC Environments, September 2018.

[2] IDC, Worldwide 2019–2023 Enterprise SSD and HDD Combined Market Overview, June 2019.

 

About Cray Inc.

Cray, a Hewlett Packard Enterprise company, combines computation and creativity so visionaries can keep asking questions that challenge the limits of possibility. Drawing on more than 45 years of experience, Cray develops the world’s most advanced supercomputers, pushing the boundaries of performance, efficiency and scalability. Cray continues to innovate today at the convergence of data and discovery, offering a comprehensive portfolio of supercomputers, high-performance storage, data analytics and artificial intelligence solutions. Go to www.cray.com for more information.

CRAY and ClusterStor are registered trademarks of Cray Inc. in the United States and other countries, and Shasta and Slingshot are trademarks of Cray Inc. Other product and service names mentioned herein are the trademarks of their respective owners.

Copyright 2019

 
