HPC Center Traces Storage Selection Experience

By Nicole Hemsoth

July 8, 2011

We often hear about national labs and universities settling on a particular vendor for server and storage solutions, but details are usually in short supply when it comes to how vendors stacked up against one another in a head-to-head bidding war.

HP announced last week that the University of Utah’s Center for High Performance Computing (CHPC) moved into its Converged Infrastructure arena by selecting the HP X9320 IBRIX Network Storage System coupled with ProLiant SL160z G6 servers. This announcement, like many others of its ilk, was full of the expected hyperbole about scalability and cost, so we followed up with Brian Haymore, who heads the HPC storage team at CHPC, to find out how his team evaluated the competing vendors to enhance the center’s Updraft cluster and what ultimately led to their storage decision.

The I/O issue isn’t new for Haymore’s team. He says they recognized this pain point early on, but it came into sharper focus when one or two users would run large cases on the clusters while everyone else wanted to hit the scratch file system to look at results they had generated weeks or months earlier. At that point, he said, the file system would be dead in the water, quite a problem when their users expected interactive responsiveness. He says they knew the applications were saturating everything the existing file system could offer and that it was not a network saturation issue. He remained convinced that NFS simply would not offer the scalability some applications needed and that proprietary solutions might offer the only remedy.

The chemical and fuels engineering group at CHPC was running an application authored by the Center for the Simulation of Accidental Fires and Explosions. The application is a composite of code contributed by scientists across the country, which fine-tunes its results but makes it difficult to modify from an I/O perspective. For Haymore’s team, this meant the storage selection process required more than just comparing price points: they needed a file system that would fit the application without modifying the application itself.

With that in mind, the I/O difficulties were at the heart of the performance hitches. In the baseline test against their standard NFS server, the application ran at about 90 seconds per iteration, with roughly 45 percent of that time consumed by I/O. In other words, nearly half of each iteration on the NFS-backed baseline was spent waiting on the file system.
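To make that arithmetic concrete, here is a minimal, hypothetical sketch (not CHPC’s actual test harness) of how one might split an iteration’s wall time into its I/O and compute shares; the `do_io` and `do_compute` callables are stand-ins for whatever the real application does in each phase.

```python
import time

def measure_iteration(do_io, do_compute):
    """Time one iteration, split into an I/O phase and a compute phase.

    do_io and do_compute are hypothetical placeholders for the real
    application's file-system activity and solver work.
    """
    t0 = time.perf_counter()
    do_io()                          # e.g., reading input and writing scratch output
    io_time = time.perf_counter() - t0

    t1 = time.perf_counter()
    do_compute()                     # the simulation work for the step
    compute_time = time.perf_counter() - t1

    total = io_time + compute_time
    return total, io_time / total    # wall time and the fraction of it spent in I/O

# On the figures quoted above (roughly 90 seconds per iteration and an I/O
# fraction of about 0.45), some 40 of every 90 seconds went to waiting on
# the NFS scratch file system rather than computing.
```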

Four vendors were vying for a chance to improve I/O capabilities at CHPC: Panasas; HP with its IBRIX solution; Dell and Terascala with their joint Lustre offering; and IBM and DDN with a GPFS-based system. Haymore told us that while these were the four main contenders, others, including Isilon, were evaluated early on. Isilon’s solution would only have been suitable if the application could be changed, which was not a possibility.

Haymore says that Panasas provided no performance increase with their application. His team wanted to dig deeper with the Panasas engineering team to look for the choke point, but they were unable to gain any traction with that process. Eventually, he says, this option timed out and they turned to the alternatives.

While the Dell and Terascala Lustre offering tripled performance, the excitement over that gain was dampened by a troubling series of mysterious I/O errors that affected half of the runs, even runs using the exact same dataset. As Haymore described it, there was no rhyme or reason; the “file system just puked.”

He says they received good support from the Dell and Terascala team, but the error was never resolved; after ruling out a tuning problem, they determined it was likely a bug already filed against the Lustre package that could not be fixed in a reasonable timeframe. Besides, as Haymore noted, aside from these practical concerns about stability, the very status of the Lustre file system was in question as it was being handed off to Oracle.

In the end, the choice boiled down to the DDN/IBM GPFS and HP IBRIX solutions, which performed almost identically. The tipping point, he says, wasn’t pricing alone; the support model was a major factor. As Haymore pointed out, getting hardware from DDN and software from IBM required two hops for support, whereas HP offered a single, unified support model, an important factor in his team’s final decision.

Make no mistake, however, price did play a role. While he admits he expected the HP solution to be quite expensive from the start, he says they were ultimately able to fit it within their budget, the icing on the cake as far as Haymore was concerned.

On that note, we asked if he went into the closed bidding process thinking that one solution would win out. He says that he would have counted on Lustre as being the champion if he had to make an early pre-benchmarking guess. This is because, as he put it, “Part of us doing our jobs is to keep our finger on the pulse of what the big boys are doing and for us, those big boys are the national labs. Lustre is heavily deployed there but it’s hard to tell if it’s because that’s what won the bid on a price point or if it was really the king of performance….We don’t know why it is always selected. We just figured we’d mimic national labs since it’s been their trend for the last several years.” While he notes that they do use other file systems, he says he’s still surprised at the errors they faced with Lustre.
