MinIO Debuts DataPod, a Reference Architecture for Exascale AI Storage

By Alex Woodie

August 6, 2024

The number of companies planning to store an exabyte of data or more is skyrocketing, thanks to the AI revolution. To help streamline the storage buildouts and calm queasy CFO stomachs, MinIO last week proposed DataPod, a reference architecture for exascale storage that lets enterprises get to exascale in repeatable 100 PB increments using industry-standard, off-the-shelf infrastructure.

Ten years ago, at the height of the big data boom, the average analytics deployment among enterprises was in the single-digit petabytes, and only the largest data-first companies had data sets exceeding 100 PB, usually on HDFS clusters, according to AB Periasamy, co-founder and co-CEO at MinIO.

“That has completely shifted now,” Periasamy said. “One hundred to 200 petabytes is the new single-digit petabytes, and the data-first organization is moving towards consolidating all of their data. They’re actually going to exabytes.”

The generative AI revolution is driving enterprises to rethink their storage architectures. Enterprises are planning to build these massive storage clusters on-prem, since putting them in the cloud would be 60% to 70% more expensive, MinIO says. Oftentimes, enterprises have already invested in GPUs and need bigger and faster storage to keep them fed with data.

MinIO spells out exactly what goes into its exascale DataPod reference architecture (Image courtesy MinIO)

MinIO’s DataPod reference architecture features industry-standard X86 servers from Dell, HPE, and Supermicro, NVMe drives, Ethernet switches, and MinIO’s S3-compatible object storage system.

Each 100 PB DataPod is composed of 11 identical racks, and each rack is composed of 11 2U storage servers, two top-of-rack (TOR) layer 2 switches, and one management switch. Each 2U storage server in the rack is equipped with a 64-core, single-socket processor, 256GB of RAM, a dual-port 200 GbE NIC, 24 2.5” U.2 NVMe drive bays, and 1,600W redundant power supplies. The spec calls for 30TB NVMe drives, for a total of 720 TB of raw capacity per server.
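For readers who want to sanity-check those figures, the back-of-the-envelope sketch below simply multiplies out the drive counts, drive sizes, servers per rack, and racks per pod quoted above. It totals raw drive capacity only; the 100 PB pod designation is MinIO's own, and usable capacity after erasure coding will differ, so treat this as illustrative rather than MinIO's sizing tool.

```python
# Back-of-the-envelope raw-capacity math for one DataPod, using only the
# figures quoted in the reference architecture above. Illustrative only.

DRIVES_PER_SERVER = 24   # 2.5" U.2 NVMe bays per 2U server
DRIVE_TB = 30            # 30 TB NVMe drives
SERVERS_PER_RACK = 11    # 2U storage servers per rack
RACKS_PER_POD = 11       # identical racks per DataPod

tb_per_server = DRIVES_PER_SERVER * DRIVE_TB       # 720 TB raw per server
tb_per_rack = tb_per_server * SERVERS_PER_RACK     # 7,920 TB raw per rack
tb_per_pod = tb_per_rack * RACKS_PER_POD           # 87,120 TB raw per pod

print(f"Raw per server: {tb_per_server:,} TB")
print(f"Raw per rack:   {tb_per_rack:,} TB (~{tb_per_rack / 1000:.2f} PB)")
print(f"Raw per pod:    {tb_per_pod:,} TB (~{tb_per_pod / 1000:.1f} PB)")
```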

Thanks to the sudden demand for AI, enterprises are now adopting scalability concepts that folks in the HPC world have been using for years, says Periasamy, who is a co-creator of the Gluster distributed file system used in supercomputing.

“It’s actually a simple term we used in the supercomputing case. We called it scalable units,” he tells Datanami and HPCwire. “When you build very large systems, how do you even build and ship them? We delivered in scalable units. That’s how they planned everything, from logistics to rolling out. A core operational system was designed in terms of scalable units. And that’s how they also expanded.

MinIO uses dual 100GbE switches with its DataPod reference architecture (Image courtesy MinIO)

“At that scale, you don’t really think in terms of ‘Oh I’m going to add few more drives, a few more enclosures, a few more servers,’” he continues. “You don’t do one server, two servers. You think in terms of rack units. And now that we are talking in terms of exascale, when you are looking at exascale, your unit is different. That unit we are talking about is the DataPod.”

MinIO has worked with enough customers with exascale plans over the past 18 months that it felt comfortable defining the core tenets in a reference architecture, with the hope that it will simplify life for customers in the future.

“What we learned from our top line customers, now we are seeing a common pattern emerging for the enterprise,” Periasamy says. “We are simply teaching the customers that, if you follow this blueprint, your life is going to be easy. We don’t need to reinvent the wheel.”

MinIO has validated this architecture with multiple customers, and can vouch that it scales up to an exabyte of data and beyond, says MinIO CMO Jonathan Symonds.

“It just takes much friction out of the equation, because they don’t go back and forth,” Symonds says. “It facilitates for them ‘This is how to think about the problem.’ I want to think about it in terms of A, units of measure, buildable units; B, the network piece; and C, these are the types of vendors and these are the types of boxes.”

AB Periasamy, the co-founder and co-CEO of MinIO

MinIO has worked with Dell, HPE, and Supermicro to come up with this reference architecture, but that doesn’t mean it’s limited to them. Customers can plug other hardware vendors into the equation, and even mix and match their server and drive vendors as they build out their DataPods.

Enterprises are concerned about hitting limits to their scalability, which is something MinIO took into consideration in devising the architecture, Symonds says.

“’Smart software, dumb hardware’ is very much embedded into the kind of corpus of what DataPod offers,” he says. “Now you can think about it and be like, alright, I can plan for the future in a way that I can understand the economics, because I know what these things cost and I can understand the performance implications of that, particularly that they can scale linearly. Because that’s a huge problem: Once you can get to 100 petabytes or 200 petabytes or up to an exabyte, is this concept of performance at scale. That’s the huge challenge.”

In its white paper, MinIO published average street pricing, which amounted to $1.50 per TB/month for the hardware and $3.54 per TB/month for the MinIO software. At a rate of about $5 per TB per month, a 100 PiB (pebibyte) system would cost roughly $500,000 per month. Multiply that by 10 to get the rough cost for an exabyte system.
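Those published prices multiply out straightforwardly. The sketch below is a rough cost model using the street pricing quoted above, treating 1 PB as 1,000 TB for simplicity; MinIO's white paper is the authority on the exact unit conventions.

```python
# Rough monthly-cost math from the street pricing MinIO published:
# hardware at $1.50 per TB/month, MinIO software at $3.54 per TB/month.
# Illustrative only; treats 1 PB as 1,000 TB.

HW_PER_TB_MONTH = 1.50
SW_PER_TB_MONTH = 3.54
RATE = HW_PER_TB_MONTH + SW_PER_TB_MONTH   # ~$5.04 per TB/month

def monthly_cost(capacity_pb: float) -> float:
    """Approximate monthly cost in dollars for a given capacity in petabytes."""
    return capacity_pb * 1_000 * RATE

print(f"100 PB DataPod: ~${monthly_cost(100):,.0f} per month")     # ~$504,000
print(f"1 EB (10 pods): ~${monthly_cost(1_000):,.0f} per month")   # ~$5,040,000
```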

The large costs may have you looking twice, but it's important to keep in mind that storing that much data in the cloud would cost 60% to 70% more, Periasamy says. Plus, it would cost much more to actually move that data into the cloud if it wasn't already there, he adds.

“Even if you want to take hundreds of petabytes into the cloud, the closest thing you’ve got is UPS and FedEx,” Periasamy says. “You don’t have the kind of bandwidth on the network even if the network is free. But network is very expensive compared to even the storage costs.”
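To put the bandwidth point in rough perspective, the sketch below calculates the transfer time for 100 PB over a hypothetical dedicated 100 Gbps link running at perfect line rate; the link speed is an assumption for illustration, not a figure from MinIO, and real transfers would carry protocol overhead on shared links.

```python
# Hypothetical transfer-time math: how long would moving 100 PB take over a
# dedicated 100 Gbps link at full line rate? (Assumed link speed; real links
# carry protocol overhead and are rarely dedicated to a single transfer.)

CAPACITY_PB = 100
LINK_GBPS = 100

bits_to_move = CAPACITY_PB * 1e15 * 8        # 100 PB expressed in bits
seconds = bits_to_move / (LINK_GBPS * 1e9)   # seconds at full line rate
days = seconds / 86_400

print(f"Moving {CAPACITY_PB} PB over {LINK_GBPS} Gbps: ~{days:.0f} days")  # ~93 days
```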

When you factor in how much customers can save on the compute side of the equation by using their own GPU clusters, the savings really add up, he says.

“GPUs are ridiculously expensive on the cloud,” Periasamy says. “For some time, cloud really helped, because those vendors could procure all of the GPUs available at the time and that was the only way to go do any kind of GPU experimentation. Now that that’s easing out, customers are figuring out that going to the co-lo, they save tons, not just on the storage side, but on the hidden part–the network and the compute side. That’s where all the savings are enormous.”

You can read more about MinIO’s DataPod here.

Related Items:

Data Is the Foundation for GenAI, MIT Tech Review Says

GenAI Show Us What’s Most Important, MinIO Creator Says: Our Data

MinIO, Now Worth $1B, Still Hungry for Data

 
