Exascale: Power Is Not the Problem!

By Andrew Jones

August 29, 2011

To build exascale systems, power is probably the biggest technical hurdle on the hardware side. In terms of getting to exascale computing, demonstrating the value of supercomputing to funders and the public is a more urgent challenge. But the top roadblock for realizing the potential benefits from exascale is software.

That title is probably controversial to most readers. If you asked members of the supercomputing community what the single biggest challenge for exascale computing is, the most common answer would likely be “power.” It is widely reported, widely talked about, and in many places generally accepted that finding a few orders of magnitude improvement in power consumption is the biggest roadblock on the way to viable exascale computing. Without that improvement, the first exascale computers will require 60MW, 120MW or 200MW — pick your favorite horror figure. I’m not so convinced.

I’m not saying the power estimates for exascale computing are not a problem — they are — but they are not the problem. Because, in the end, it is just a money problem. For most in the community, the objection is not so much to the fact of 60-plus MW supercomputers. Instead, the objection is the resulting operating costs of 60-plus MW supercomputers. We simply don’t want to pay $60 million each year for electricity (or more precisely we don’t want to have to justify to someone else — e.g., funding agencies — that we need to pay that much). But why are we so concerned about large power costs?
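The scale of that money problem is easy to sketch. The power draws below are the article’s figures; the electricity tariff is an assumed illustrative rate of $0.10 per kWh (the handy rule of thumb being roughly $1M per MW per year), not a figure from the article:

```python
# Back-of-envelope annual electricity cost for a constantly loaded machine.
# Assumption: a flat industrial tariff of $0.10 per kWh (illustrative only).
HOURS_PER_YEAR = 24 * 365   # 8760
PRICE_PER_KWH = 0.10        # USD, assumed tariff

def annual_power_cost(megawatts):
    """Annual electricity bill in USD for a constant draw of `megawatts` MW."""
    kwh_per_year = megawatts * 1000 * HOURS_PER_YEAR
    return kwh_per_year * PRICE_PER_KWH

for mw in (60, 120, 200):
    print(f"{mw} MW -> ${annual_power_cost(mw) / 1e6:.0f}M per year")
```

At that assumed rate, a 60MW machine costs on the order of $50M a year to power, which is consistent with the $60 million figure above once real-world tariffs and cooling overheads are factored in.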

Are we really saying, with our concerns over power, that we simply don’t have a good enough case for supercomputing — the science case, business case, track record of innovation delivery, and so on? Surely if supercomputing is that essential, as we keep arguing, then the cost of the power is worth it.

There are several large scientific facilities that have comparable power requirements, often with much narrower missions — remember that supercomputing can advance almost all scientific disciplines — for example, LHC, ITER, NIF, and SNS. And indeed, most of the science communities behind those facilities are also large users of supercomputing.

I occasionally say, glibly and deliberately provocatively, if the scientific community can justify billions of dollars, 100MW of power, and thousands of staff in order to fire tiny particles that most people have never heard of around a big ring of magnets for a fairly narrow science purpose that most people will never understand, then how come we can’t make a case for a facility needing only half of those resources that can do wonders for a whole range of science problems and industrial applications?

[There is a partial answer to that, which I have addressed on my HPC Notes blog to avoid distraction here.]

But secondly, and more importantly, the power problem can be solved with enough money if we can make the case. Accepting huge increases in budgets would also go a long way toward solving several of the other challenges of exascale computing. For example, resiliency could be substantially helped if we could afford comprehensive redundancy and other advanced RAS features; data movement challenges could be helped if we could afford huge increases in memory bandwidth at all levels of the system; and so on.

Those technical challenges would not be totally solved but they would be substantially reduced by money. I don’t mean to trivialize those technical challenges, but certainly they could be made much less scary if we weren’t worried about the cost of solutions.

So, the biggest challenge for exascale computing might not be power (or your other favorite architectural roadblock) but rather our ability to justify enough budget to pay for the power, or more expensive hardware, etc. However, beyond even that, there is a class of challenges for which money alone is not enough.

Assume a huge budget meant that an exascale computer with good enough resiliency, plenty of memory bandwidth, and every other needed architectural attribute was delivered tomorrow, and never mind the power bills. Could we use it? No, because of a series of challenges that need not only money but also lots of time to solve, and that in most cases need research, because we just don’t know the solutions.

I am thinking of the software related challenges.

Even if we have highly favorable architectures (expensive systems with lots of bandwidth, good resiliency, etc.), I think the community and most, if not all, of the applications are still years away from having algorithms and software implementations that can exploit that scale of computing efficiently.

There is a reasonable effort underway to identify the software problems that we might face in using exascale computing (e.g., IESP and EESI). However, in most cases we can only identify the problems; we still don’t have much idea about the solutions. Even where we have a good idea of the way forward, sensible estimates of the effort required to implement software capable of using exascale computing — OS, tools, applications, post-processing, etc. — are measured in years of work by large teams.

It certainly requires money, but it also needs other scarce resources, specifically time and skills: a large pool of skilled parallel software engineers, scientists with computational expertise, numerical algorithms research, and so on. Scarce resources like these are possibly even harder to create than money!

Power is a problem for exascale computing, and with current budget expectations is probably the biggest technical challenge for the hardware. In terms of getting to exascale computing, demonstrating the value of increased investment in supercomputing to funders and the public/media is probably a more urgent challenge. But the top roadblock for achieving the hugely beneficial potential output from exascale computing is software. There are many challenges to do with the software ecosystem that will take years, lots of skilled workers, and sustained/predictable investment to solve.

That “sustained/predictable” part is important. Ad hoc research grants are not an efficient way to plan and conduct a many-year, many-person, community-wide software research and development agenda. Remember that this agenda will consume a non-trivial portion of the careers of many of the individuals involved. And when the researchers start out on this necessary software journey, they need confidence that funding will be there all the way to production deployment and ongoing maintenance many years into the future.

About the Author

Andrew is Vice-President of HPC Services and Consulting at the Numerical Algorithms Group (NAG). He was originally a researcher using HPC and developing related software, later becoming involved in leadership of HPC services. He is also interested in exascale, manycore, skills development, broadening usage, and other future concerns of the HPC community.
