Hyperion: HPC Server Market Ekes 1 Percent Gain in 2020, Storage Poised for ‘Tipping Point’

By Tiffany Trader

May 12, 2021

The HPC User Forum meeting taking place virtually this week (May 11-13) kicked off with Hyperion Research’s market update, covering the 2020 period. Although the HPC server market had been headed for a 6.7 percent COVID-related decline in 2020, the inclusion of Fugaku system revenue in the fourth quarter brought total HPC spending for the year to $13.7 billion, an increase of 1.1 percent over the previous year. On the horizon are continued growth for HPC cloud and a possible surge in storage spending.

Riken’s Fugaku supercomputer – the world’s fastest at 442 Linpack petaflops – was put into service a year ahead of schedule to fight COVID. Although the system did not reach full operational status until 2021, system revenue was recorded by Fujitsu in December 2020.

The shift of that server revenue from 2021 into 2020 portends a flat growth rate for the next couple of years, said Hyperion Research CEO Earl Joseph. But the market will return to sizable growth, reaching roughly $19 billion by 2024, he said.

That puts the five-year, compound annual growth rate (2019-2024) for on-premises purchases at 6.8 percent, subject of course to COVID dynamics in addition to the usual market uncertainties. Because of COVID-19, Hyperion is adjusting its forecasts more regularly, making updates once or even twice a quarter, according to Joseph.
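
For readers who want to check the arithmetic, a compound annual growth rate is simply the ratio of ending to starting revenue raised to the power of one over the number of years. The minimal Python sketch below illustrates this; the 2019 baseline is inferred from the article’s 2020 figure of $13.7 billion (up 1.1 percent) rather than taken from a published Hyperion table, so the output is only an approximation of the cited 6.8 percent.

```python
# Minimal sketch of the CAGR arithmetic behind the forecast figures.
# The 2019 baseline (~$13.55B) is inferred from the article's 2020 figure
# ($13.7B, up 1.1 percent), not a value reported directly by Hyperion.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two revenue points."""
    return (end / start) ** (1 / years) - 1

revenue_2019 = 13.7 / 1.011        # ~$13.55B, implied by the 1.1 percent gain
revenue_2024_forecast = 19.0       # "roughly $19 billion by 2024"

rate = cagr(revenue_2019, revenue_2024_forecast, years=5)
print(f"Implied 2019-2024 server CAGR: {rate:.1%}")  # ~7%, near the 6.8% cited
```

Rounding the 2024 forecast to $19 billion nudges the computed rate slightly above the 6.8 percent Hyperion reports.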

Looking at the market by system segment (supercomputer, divisional, departmental and workgroup), the supercomputer segment reached nearly $6 billion (up 13.7 percent from 2019). This robust growth was largely due to the Fugaku system, which accounted for about $1 billion. The workgroup segment declined the most and is under pressure on multiple fronts, said Joseph.

By vertical, five segments hit or exceeded the $1 billion per year mark: government lab, university/academic, CAE, defense, and biosciences. Government lab was the highest of these, responsible for nearly $3.4 billion in HPC spending, with Fugaku contributing about a third of that.

Shifting to the vendor landscape, HPE and Dell Technologies maintained their first and second place positions, with 33.4 percent and 20.8 percent shares respectively, although server sales were down for both year-over-year. With the Fugaku sale on its books, Fujitsu had a strong year, earning $1.3 billion for a 9.6 percent share. Inspur is next with 7.2 percent market share, followed by Lenovo with 6.8 percent. The other category (which includes 39 companies) accounts for $1.5 billion in spending, a 10.9 percent share.

Adding in the rest of the on-prem market categories (storage, middleware, applications and services), Hyperion expects the total on-premises HPC market to grow to nearly $38 billion in 2024 (a CAGR of 6.6 percent).

For comparison, here is the market forecast that Hyperion issued last June, not yet revised for COVID effects. Note that the pandemic has seemingly caused, or at least contributed to, a two-year “growth lag” relative to the most recent pre-COVID forecast.

Hyperion’s June 2020 numbers (prior to COVID adjustments)


AI, Cloud, Storage & Exascale = Growth!

Certain subsegments of the market are growing faster than others, some as much as seven times faster. The highest growth areas are related to AI, machine learning and deep learning, along with HPDA (high-performance data analytics) and big data, said Joseph. The use of deep learning in HPC is expected to grow the fastest, with a 41.8 percent projected five-year CAGR. The traditional modeling and simulation area is expected to grow at roughly 7-8 percent over the same timeframe, driven by new enterprise users, said Joseph, and at the very high end of the market, exascale systems are driving growth.

Other major growth areas are the use of clouds for running HPC workloads (17.6 percent five-year CAGR), and storage (8.3 percent five-year CAGR).

End users spent $4.3 billion running HPC workloads in the public cloud in 2020, according to Hyperion, and that number is projected to more than double over the next four years, reaching $8.8 billion in 2024. That’s more than 2.5 times the growth rate of the on-prem HPC server market.
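
As a rough consistency check, the figures in this paragraph imply an annual growth rate of about 20 percent between 2020 and 2024; a short sketch using values assumed from the text is below. The slightly lower 17.6 percent CAGR cited above presumably runs over five years from a 2019 base.

```python
# Hedged check: growth implied by the 2020 and 2024 cloud-spend figures in the text.
cloud_2020 = 4.3   # $B spent running HPC workloads in the public cloud, 2020
cloud_2024 = 8.8   # $B forecast for 2024

implied_rate = (cloud_2024 / cloud_2020) ** (1 / 4) - 1
print(f"Implied 2020-2024 cloud growth rate: {implied_rate:.1%}")  # ~19.6% per year
```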

When public cloud is added to traditional on-premises purchases, the 2024 market forecast jumps to $47 billion. The public cloud spend makes up the second largest segment (with an 18.7 percent share) next to servers (40.5 percent share). Storage is a close third (with a 17.2 percent share).
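
Applying the quoted shares to the $47 billion total gives the approximate dollar breakdown below, a back-of-the-envelope sketch rather than Hyperion’s own segmentation; note that 18.7 percent of $47 billion lands right on the $8.8 billion cloud figure above.

```python
# Back-of-the-envelope dollar values implied by the 2024 shares quoted in the text.
total_2024 = 47.0  # $B, combined on-prem plus public cloud forecast for 2024
shares = {"servers": 0.405, "public cloud": 0.187, "storage": 0.172}

for segment, share in shares.items():
    print(f"{segment:>12}: ~${total_2024 * share:.1f}B")
# servers ~$19.0B, public cloud ~$8.8B, storage ~$8.1B
```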

HPC in the cloud hit a tipping point in 2019, Joseph said, with the numbers coming in $1 billion higher than Hyperion had forecast.

Now Hyperion is predicting a second tipping point for HPC cloud spending, sometime in the 2022-2023 period, Joseph said.

The analyst firm lists several potential big drivers and notes that if only two or three of them materialize, there could be another billion-dollar surge. Cited drivers include dramatic in-cloud performance improvements, increased ease of use, cost effectiveness/transparency, a significant uptick in AI workloads, and/or an emerging workload that is particularly well-matched to the cloud.

Storage will likely also see major growth, Joseph said, driven by big data, AI and machine learning as well as larger traditional simulations.

“It’s possible that the storage growth numbers will actually be larger than what we’re showing,” said Joseph. “It all depends on how quickly the AI, machine learning and big data applications are successful and are applied in more areas. At some point in time, we expect storage to go through a tipping point, much like we saw the cloud, where there’s going to be a major increase in usage.”

The dawning era of exascale is another growth driver.

Exascale and near-exascale systems are nearing readiness in China, the EU, the UK, Japan, the U.S. and other countries. Four to six systems per year, representing on the order of $2 billion a year, will be coming online starting within the next 12-24 months, according to Hyperion’s research. By 2026, Hyperion forecasts the total (cumulative) value will reach $10 billion to $15 billion.

Earl Joseph

“We are expecting 2022 to 2024 to be strong growth years, driven heavily by the exascale systems coming onboard, AI and HPDA, as well as HPC spending in the cloud,” said Joseph. “The exciting part to me is all the new technologies are showing up, whether it’s in the processor side, the hardware, the software, new storage approaches, memories, and everything. So it’s a very exciting time in the marketplace right now, providing many different choices for users.”

Hyperion has also updated its ROI report that examines investments in HPC projects and their financial and innovation returns. Based on 763 successful (revenue-generating) HPC projects, Hyperion’s model shows $507 in revenue was generated per dollar of HPC invested. On the profit and cost saving side, Hyperion saw $47 generated per dollar of HPC invested. For the next phase of the ROI study, Hyperion is working on including the projects that were not successful, so that it’s possible to determine the ROI generated by a given supercomputer or datacenter over time. 
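
The per-dollar ROI figures are aggregate ratios across the surveyed projects. The sketch below illustrates how such a ratio is computed; the project values are hypothetical placeholders, not Hyperion data, which lives in the downloadable database described below.

```python
# Illustration only: how a revenue-per-HPC-dollar ratio is derived.
# The project figures below are hypothetical placeholders, not Hyperion data.
projects = [
    {"hpc_invested": 2.0, "revenue": 900.0},   # $M, hypothetical
    {"hpc_invested": 0.5, "revenue": 300.0},
    {"hpc_invested": 1.2, "revenue": 600.0},
]

total_invested = sum(p["hpc_invested"] for p in projects)
total_revenue = sum(p["revenue"] for p in projects)
print(f"Revenue generated per HPC dollar: ${total_revenue / total_invested:.0f}")
```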

The ROI reports and the methodology (as well as the raw data) are available on Hyperion’s website. “The entire database is available to be downloaded by anyone in the community. So for example, if you want to use a different calculation, a different economic model than what we’ve used here, you’re able to do that,” said Joseph.
