Labs Keep Supercomputers Alive for Ten Years as Vendors Pull Support Early

By Agam Shah

June 12, 2024

Laboratories are running supercomputers well beyond the typical lifespan as vendors deprecate hardware prematurely and stop providing support. A typical supercomputer lifecycle is about five to six years, but Japan's RIKEN plans to run its existing Fugaku system for ten years, and Lawrence Livermore National Laboratory (LLNL) has kept some systems running for seven to ten years.

“We plan on extending the lifetime of our machines,” said Satoshi Matsuoka, director of Japan’s RIKEN Center for Computational Science, during a panel discussion on the sustainability of supercomputing at ISC 2024.

Panel members, who included some top names from supercomputing labs, criticized vendors for deliberately deprecating hardware early and called for an end to the practice.

[Image: The Fugaku supercomputer]

“These machines are still good to go after five years, but we sometimes have no choice because they are ending their support. We have to stop these practices and tell the vendors to prepare for a much longer lifespan,” Matsuoka said.

LLNL plans for a five-year system lifespan, since hardware maintenance typically becomes cost-prohibitive beyond that point, though in practice its systems run longer.

“We run systems … in practice, it’s about 7 to 8 years. We’ve run several systems for ten years,” said Bronis de Supinski, chief technology officer at LLNL.

The decision to retire supercomputers largely depends on the energy efficiency and power-performance benefits of newer systems.

An Uptime Institute report pegs system longevity at anywhere from 18 months to seven years, according to Jon Summers, research lead for data centers at RISE Research Institutes of Sweden.

RIKEN’s ten-year Fugaku run will overlap with the upcoming FugakuNEXT, which is expected around 2030. “We plan on keeping it for a two- or three-year overlap, and also because we think it’s worth it,” Matsuoka said.

Fugaku, built with Arm-based processors, is low-power by design and architected around data movement. Experts agree it remains one of the best-architected supercomputers to date, and optimizing software and algorithms will go a long way toward extending its life.

HPC is mostly data- or memory-bound rather than compute-bound, and Fugaku's efficiency on that front will remain high; the system will still be reasonably power-efficient even at ten years, Matsuoka said.

“We expect to see over time, unless there’s some innovation in memory technology, that prolonging the lifetime of a machine is the best way to be sustainable,” Matsuoka said.
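The panel did not spell out the arithmetic, but the standard roofline model captures why memory-bound codes blunt the advantage of newer processors (a minimal sketch for illustration; the notation is ours, not the panel's):

```latex
% Attainable performance is capped by the lesser of peak compute and the
% memory system's ability to feed the cores. Arithmetic intensity I is the
% ratio of flops executed to bytes moved.
\[
  P_{\text{attainable}} \;=\; \min\!\left(P_{\text{peak}},\; I \times B_{\text{mem}}\right),
  \qquad
  I = \frac{\text{flops executed}}{\text{bytes moved}}
\]
% When I < P_peak / B_mem, the code is memory-bound: a faster processor
% raises P_peak but not attainable performance. Only better memory bandwidth,
% or algorithms that raise I, move the needle -- which is why a well-balanced
% machine like Fugaku ages gracefully.
```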

Panelists at ISC said supercomputers are running longer partly because, despite many innovations, system power efficiency hasn't improved significantly.

The average age of systems on the June 2024 Top500 list was about 35 months, a record high; from 1995 to 2011, the average hovered around 5 to 10 months.

Systems are also staying on the Top500 list much longer, which drives the list's average age up further.

“That’s not just a result of the dynamic of the list, but it is actually something we see in the field – systems are used longer because the incentive to replace them is not as strong as it used to be,” said Erich Strohmaier, organizer of the Top500 list.

Supercomputers are also kept in service longer because new systems are expensive to build.

Large supercomputer installations are experimenting with various ways to achieve efficiency, such as direct liquid cooling and packing in more accelerators such as Nvidia GPUs.

For example, LLNL added 18,000 tons of cooling capacity, bringing its total to 28,000 tons, and raised its power supply to 85 megawatts to support current and future systems.

“El Capitan will fit into the envelope from its RFP; it will be under 40 megawatts, about 30 megawatts, but that is a lot of electricity,” de Supinski said.

Although a system like El Capitan may not be the most environmentally friendly, it serves other needs, helping society address problems such as climate change.

“A 30-megawatt supercomputer? I’m not going to tell you that that’s a sustainable resource, but it could do a lot to address the societal problems that we want to get addressed,” de Supinski said.

Panelists agreed that no single metric can measure sustainability. PUE (power usage effectiveness), which Google began using some 20 years ago, is a widely accepted metric, but it has issues.

De Supinski called PUE an ineffective metric because it doesn't measure the usefulness of the work done relative to the power consumed.

Panelists agreed: within the same power envelope, research on climate change may be more worthwhile than bitcoin mining.
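For reference, PUE in its standard form is the ratio of total facility energy to the energy delivered to the IT equipment (this formulation is included for illustration; the panel did not write it out):

```latex
\[
  \mathrm{PUE} \;=\; \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}
\]
% An ideal facility scores 1.0: every joule entering the building reaches
% the computers. The critique above follows directly: the denominator counts
% energy the IT gear consumes, not the value of the work it does, so a
% bitcoin-mining hall and a climate-modeling center can report the same PUE.
```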

Other sustainability metrics, such as carbon offsets, which are hard to measure, were also debated. Panelists also discussed capturing and reusing waste heat, disposing of e-waste effectively, and reusing materials, Summers said.

“A server has 17 to 25 of the critical raw materials on the planet, and we’re disposing of them at a high rate. We should be trying to recycle some of that, but not all of it is captured,” Summers said.

Labs are turning to renewable energy and liquid cooling to make computing more sustainable.

Germany’s LRZ compared an air-cooled Nvidia DGX A100 setup, with 16 to 24 GPUs per rack and a PUE of 1.65 to 1.80, to a water-cooled Nvidia HGX setup with 144 A100 chips per rack, which was far more efficient at a PUE of 1.05.

“We are saving about 50% of the energy for cooling,” said Dieter Kranzlmüller, chairman of LRZ (the Leibniz Supercomputing Centre) in Germany.
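A rough way to read those PUE figures uses the standard overhead relation (a back-of-the-envelope sketch, not LRZ's own accounting; the "about 50%" savings is LRZ's measured figure):

```latex
% Facility overhead (cooling, power distribution, etc.) per unit of IT energy:
\[
  E_{\text{overhead}} \;=\; \left(\mathrm{PUE} - 1\right)\times E_{\text{IT}}
\]
% Per megawatt-hour of IT load: the air-cooled racks at PUE 1.65-1.80 imply
% roughly 0.65-0.80 MWh of facility overhead, while the water-cooled racks at
% PUE 1.05 imply about 0.05 MWh -- liquid cooling removes most of the
% cooling burden.
```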

Kranzlmüller said HPC users should not look at energy efficiency from the perspective of the supercomputer alone, but rather think more about “the energy getting into the building, what we get out of the building, and what we are doing with energy overall.”

The panelists didn’t name Nvidia but cracked a joke about its “sustainable” GPUs. The chipmaker’s upcoming, super-hot Blackwell GPU, announced in March, has a TDP of 1,200 watts.

“I was watching the news the other day, and they came on talking about a certain processor manufacturer that likes to promote their processors as being ‘sustainable’ and ‘energy efficient.’ They happen to be making 1000-watt processors. A 1000-watt processor is not green,” de Supinski said.

“It’s color green,” Matsuoka responded in jest.
