Berkeley Lab’s John Shalf Ponders the Future of HPC Architectures

By Kathy Kincade

June 27, 2019

Editor’s note: Ahead of John Shalf’s well-attended and well-received “high-bandwidth” keynote at ISC 2019, Shalf discussed the talk’s major themes in an interview with Berkeley Lab’s Kathy Kincade. 

What will scientific computing at scale look like in 2030? With the impending demise of Moore’s Law, there are still more questions than answers for users and manufacturers of HPC technologies as they try to figure out what their next best investments should be. As he prepared to head to ISC19 in Frankfurt, Germany, to give a keynote address on the topic, John Shalf – who leads the Computer Science Department in Lawrence Berkeley National Laboratory’s Computational Research Division – shared his thoughts on what the future holds for computing technologies and architectures in the era beyond exascale. ISC took place June 16-20; Shalf’s keynote was on Tuesday, June 18.

What was the focus of your keynote at ISC?

What the landscape of computing, in general, is going to look like after the end of Moore’s Law. We’ve come to depend on Moore’s Law and to really expect that every generation of chips will double the speed, performance, and efficiency of the previous generation. Exascale will be the last iteration of Moore’s Law before the bottom drops out – and the question then is, how do we continue? Is exascale the last of its kind, or are we going to embark on a first-of-its-kind machine for the future of computing?

How long have you been thinking/talking about what’s next for HPC after Moore’s Law?

Where we are now is really the second shoe dropping. I got involved in the Exascale Computing Initiative discussions back in 2008, but my interest in this actually predates exascale. Back in 2005, David Patterson’s group at UC Berkeley was talking about it in the Parallel Computing Laboratory, and we spent two years there in discussion and debate about the end of Dennard scaling. Ultimately we published “The Landscape of Parallel Computing Research: A View from Berkeley,” which predicted that parallel computing would become ubiquitous because clock frequencies would no longer scale at exponential rates. This was followed closely by the DARPA 2008 Exascale report, which set the stage for the Exascale Initiative for HPC. So the end of Dennard scaling was the first shoe to drop, but we always knew the second shoe would drop fairly soon after the first. The second shoe dropping means we can’t shrink transistors at all anymore, and that is the real end of Moore’s Law. Exascale is addressing the massive parallelism from the first shoe dropping, and I’ve been concerned about the second shoe dropping during the entire 10-year ramp-up to the Exascale Computing Initiative and subsequent Project, as were many others who were involved in writing the View from Berkeley report and the DARPA 2008 report.
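The end of Dennard scaling that Shalf describes can be captured in a back-of-the-envelope calculation. The sketch below is illustrative, not from the interview: dynamic power per transistor is roughly P = C·V²·f, and under classical Dennard scaling each generation shrinks linear dimensions (and thus capacitance C and voltage V) by a factor k ≈ 0.7 while frequency grows by 1/k, leaving power density constant. Once V stops scaling, frequency can no longer grow without power density exploding.

```python
# Illustrative sketch (assumed normalized units, not measured data):
# why clock frequencies stopped scaling when Dennard scaling ended.

def power_density(C, V, f, area):
    """Dynamic power per unit area: P/A = C * V^2 * f / area."""
    return C * V**2 * f / area

k = 0.7  # per-generation linear shrink factor

# Baseline generation (everything normalized to 1).
base = power_density(C=1.0, V=1.0, f=1.0, area=1.0)

# Dennard era: C and V shrink by k, f grows by 1/k, area shrinks by k^2.
# The terms cancel and power density stays constant.
dennard = power_density(C=k, V=k, f=1 / k, area=k**2)

# Post-Dennard: leakage prevents V from shrinking further. If f still
# grew by 1/k, power density would rise by 1/k^2 (~2x) per generation --
# which is why clock rates went flat and parallelism took over.
post = power_density(C=k, V=1.0, f=1 / k, area=k**2)

print(round(base, 3), round(dennard, 3), round(post, 3))  # 1.0 1.0 2.041
```

Running this shows constant power density through the Dennard era and a roughly 2x jump per generation once voltage scaling stops, which is the physical reason the "first shoe" forced the industry toward massive parallelism.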

How is the slowing of Moore’s Law already affecting HPC technologies and the industry itself?

We are already seeing procurement cycles stretch out, so machines are being replaced at a slower pace than they have been historically. Erich Strohmaier at Berkeley Lab has been tracking replacement rates on the TOP500 very closely, and he has seen a noticeable slowdown in system replacement rates. I’ve also heard from our colleagues in industry that this is a troubling development that will affect their business model in the future. But we are also seeing these effects in the mega-datacenter space, such as Google, Facebook, and Amazon. Google has actually taken to designing its own chips, specialized for particular parts of its workflow, such as the Tensor Processing Unit (TPU). We will probably see even more specialization in the future, but how this applies to HPC is less clear at this point – and that’s what I would like to get people thinking about during my keynote.

Is the lithography industry experiencing a parallel paradigm shift?

Yes, the lithography industry is also being affected, and something is going to need to change in the economics of that industry. What we have seen in the past decade is that we’ve gone from nearly a dozen leading-edge fabs down to two. GlobalFoundries recently dropped out as a leading-edge fab, and Intel has had a huge amount of trouble getting its 10nm fab line off the ground. So clearly there are huge tectonic shifts happening in the lithography market as we speak, and how that will ultimately resolve itself remains unclear.

Do we have to start imagining an entirely new computing technology development and production process?

I think the way in which we select and procure systems is going to have to be revisited. Running benchmarks drawn from user application codes to assess the performance and usability of emerging systems is a great way to select today’s systems, which use general-purpose processors, but it doesn’t seem to be a very good approach for selecting systems that might have specialized features for science. In the future, we need to be more closely involved with our suppliers in the design of the machines, to deliver machines that are truly effective for scientific workloads. This is as much about sustainable economic models as it is a change in the design process. The most conventional or even the most technologically elegant solution might not survive, but the one that makes a lot of money will. And our current economic model is breaking.

Looking ahead, I see three paths going forward. The first is specialization and better packaging – specialization meaning designing a machine for a targeted class of applications. This has already been demonstrated in the successful case of the Google TPU, for example. So that is the most immediate path forward.
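Why specialization pays off only for targeted applications can be seen with simple Amdahl-style accounting. The sketch below is an illustration added for clarity (the figures are hypothetical, not from the interview): a specialized unit that accelerates one kind of operation, say matrix math on a TPU-like device, helps a great deal only when that operation dominates the workload.

```python
# Hedged illustration (hypothetical numbers): Amdahl-style accounting
# for a specialized accelerator. Only the accelerable fraction of a
# workload benefits from the specialized unit.

def speedup(accel_fraction, accel_factor):
    """Overall speedup when accel_fraction of the runtime moves to a
    unit that is accel_factor times faster; the rest is unchanged."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_factor)

# Suppose a specialized matrix unit is 30x faster at matrix math.
# A workload that is 95% matrix math gains ~12x overall, while a
# mixed workload that is only 50% matrix math gains under 2x.
print(round(speedup(0.95, 30), 1))
print(round(speedup(0.50, 30), 2))
```

This is why "designing a machine for a targeted class of applications" is the operative phrase: the economics of specialization hinge on how much of the target workload the specialized hardware actually covers.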

Another potential path forward is a new transistor technology that replaces CMOS and is much more energy efficient and scalable. However, we know from past experience that it takes about 10 years to get from a lab demo to a production product. There are promising candidates, but no clear replacement has been demonstrated in the lab, which means we are already 10 years too late for that approach to be adopted by the time Moore’s Law fails. We need to dramatically accelerate the discovery process in that area through a much more comprehensive materials-to-systems co-design process.

The third approach is to explore alternative models of computation, such as quantum, neuromorphic, and other related approaches. These are all fantastic, but they are really expanding computing into areas where digital computing performs very poorly. They aren’t necessarily replacement technologies for digital general-purpose computing; they are merely expanding into areas where digital isn’t very effective to start with. So I think these are worthy investments, but they aren’t the replacement technology. They will have a place, but how broadly applicable they will be is still being explored.

What about the development of new chip materials – what role might they play in the future of HPC architectures?

New materials are definitely part of the CMOS replacement. It’s not just new materials; fundamental breakthroughs in solid-state physics will be required to create a suitable CMOS replacement. The fundamental principle of operation for existing transistor technology cannot be substantially improved beyond what we see today. So to truly realize a CMOS replacement will require a new physical principle for switching, whether electrical, optical, or magnetic switching. A fundamentally new physical principle will need to be discovered and that, in turn, will require new materials and new material interfaces to realize effective and manufacturable solutions.

Are there any positives when you look at what is happening in this field right now?

Yes, there are definitely positives. We believe the co-design process is going to require not just software and hardware people to collaborate; it is going to require this collaboration to go all the way down to the materials and materials-physics level. And for the national laboratories, this is a great opportunity to work closely with our colleagues in the materials science divisions of our respective laboratories. I work at a national laboratory because I’m excited by cross-disciplinary collaboration, and clearly that is the only way we are going to make forward progress in this area. The recent ASCR Extreme Heterogeneity and DOE Microelectronics BRNs show strong interest by DOE in the deep co-design and collaborative research that is really needed in this space. So to that extent, it is kind of an exciting time.

When you think about the future of HPC and supercomputing architectures and technologies, what do you imagine they will look like 10 years from now?

I think we’re going to have smaller machines that are more effective for the workflows they target. For three decades we have become used to ever-growing, larger and larger machines, but that doesn’t seem to be the winning approach for creating effective science in the post-exascale and post-Moore era.


About the Author

Kathy Kincade is a science & technology writer and editor with the Berkeley Lab Computing Sciences Communications Group.


Article courtesy Berkeley Lab; Feature image credit: ISC High Performance.
