ANL Special Colloquium on The Future of Computing

By John Russell

May 19, 2022

There are, of course, myriad ideas about computing’s future. At yesterday’s Argonne National Laboratory Director’s Special Colloquium, The Future of Computing, guest speaker Sadasivan Shankar did his best to convince the audience that the high energy cost of the current computing paradigm – not (just) economic cost; we’re talking entropy here – is fundamentally undermining computing’s progress, such that it will never be able to solve today’s biggest challenges.

The broad idea is that the steady abstracting away of informational content from each piece of modern computing’s complicated assemblage (chips, architecture, programming) inexorably increases the cumulative energy cost, pushing toward a hard ceiling. Leaving aside, for a moment, the decline of Moore’s law (really just a symptom), it is the separation (abstraction) of information from direct computation that is the culprit, argues Shankar. Every added step adds energy cost.

Nature, on the other hand, bakes information into things. Consider, said Shankar, how a string of amino acids folds into its intended 3-D conformation on a tiny energy budget and in a very short time just by interacting with its environment, and contrast that with the amount of compute required – i.e. energy expended – to accurately predict protein folding from a sequence of amino acids. Shankar, research technology manager at the SLAC National Laboratory and adjunct Stanford professor, argues computing must take a lesson from nature and strive to pack information more tightly into applications and compute infrastructure.

Information theory is a rich field with a long history of debate. Turning theory into practice has often proven more difficult and messy. Shankar and his colleagues have been developing a formal framework for classifying the levels of information content in human-made computation schemes and natural systems in a way that permits direct comparison between the two. The resulting scale has eight classification levels (0-7).

There’s a lot to digest in Shankar’s talk. Rather than going off the rails here with a garbled explanation, it’s worth noting that Argonne has archived the video and that Shankar has a paper nearing completion, expected in a couple of months. No doubt some of his ideas will stir conversation. Given that Argonne will be home to Aurora, the exascale supercomputer now being built at the lab, it was an appropriate site for a talk on the future of computing.

Before jumping into what the future may hold, here’s a quick summary of Shankar’s two driving points – 1) Moore’s law, or more properly the architecture and semiconductor technology on which it rests, is limited, and 2) the growing absolute energy cost of information processing using traditional (von Neumann) methods is limiting (a quick back-of-the-envelope check of both sets of figures follows the list):

  • Moore’s Law. “Let us assume we are looking at 10 nanometers today. How many more generations (doublings) are there if you go down all the way to a hydrogen atomic diameter, [though] we agree you cannot get there? You have 14 doublings beyond the 10-nanometer technology. However, more realistic would be silicon’s van der Waals diameter, because that is when the properties of the silicon material are changed; you essentially have nine doublings. But [even] more realistically, you may have only five doublings, because if you look at the number of atoms on the surface of silicon, as opposed to the bulk, the surface atoms increase [more] than the number of atoms in the bulk, which means it’s no longer behaving like silicon. If you put that as the endpoint, you essentially have only five doublings left. No matter how you look at it, you have about five to 14 doublings,” said Shankar.
  • Energy Cost. “Why is [this] a major problem for computing? The 10-year estimate of the amount of information processed is going to be 10^24 bits per year. The energy to process those bits could end up being 100 exajoules per year. The total energy consumption of humans this year is about 580 exajoules, so 20 percent of the energy could go into computing. And this is going to be a big problem. The three most important items for the future of computing would be energy for information processing, energy for manufacturing – I didn’t even tell you [that] EUV [extreme ultraviolet lithography], for example, takes 100 kilowatts just to process pattern layers. Energy appears to be the single most important problem.”
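For readers who want to sanity-check those figures, here is a minimal Python sketch (not from the talk). It assumes that one “doubling” means a doubling of areal density, i.e., each generation shrinks the linear feature size by a factor of √2, and it uses rough textbook values for the hydrogen atomic diameter (~0.1 nm) and silicon’s van der Waals diameter (~0.42 nm); depending on exactly which diameters you pick, the count lands at roughly 13-14 and 9 doublings, in line with Shankar’s numbers. The energy share simply divides the two quoted exajoule figures.

```python
import math

def density_doublings(feature_nm: float, limit_nm: float) -> float:
    """Doublings of areal density left if linear features shrink from
    feature_nm down to limit_nm (each doubling shrinks length by sqrt(2))."""
    return 2 * math.log2(feature_nm / limit_nm)

# Rough, assumed length scales in nanometers
FEATURE_TODAY = 10.0      # "10 nanometers today"
H_ATOM_DIAMETER = 0.106   # hydrogen atomic diameter, roughly one angstrom
SI_VDW_DIAMETER = 0.42    # silicon van der Waals diameter, roughly 4.2 angstroms

print(f"To hydrogen diameter:         ~{density_doublings(FEATURE_TODAY, H_ATOM_DIAMETER):.0f} doublings")
print(f"To Si van der Waals diameter: ~{density_doublings(FEATURE_TODAY, SI_VDW_DIAMETER):.0f} doublings")

# Energy share of computing, using only the figures quoted in the talk
COMPUTE_EJ_PER_YEAR = 100.0      # projected energy to process ~10^24 bits/year
TOTAL_HUMAN_EJ_PER_YEAR = 580.0  # approximate total human energy consumption
share = COMPUTE_EJ_PER_YEAR / TOTAL_HUMAN_EJ_PER_YEAR
print(f"Computing's share of total energy use: ~{share:.0%}")  # ~17%, i.e. 'about 20 percent'
```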

A big part of the answer to the question of how computing must progress, suggested Shankar, is to take a page from Feynman’s reverberating idea – not just for quantum computing – and emulate the way nature computes, pack[ing] all of the information needed for the computing into the things themselves or at least by reducing abstraction as much as possible.

Argonne assembled an expert panel to bat Shankar’s ideas around. The panel included moderator Rick Stevens (associate laboratory director and Argonne distinguished fellow), Salman Habib (director, Argonne computational science division and Argonne distinguished fellow), Yanjing Li (assistant professor, department of computer science, University of Chicago), and Fangfang Xia (computer scientist, data science and learning division, ANL).

Few quibbled with the high energy cost of computing as Shankar described it, but the panelists had a variety of perspectives on moving forward. One of the more intriguing comments came from Xia, an expert in neuromorphic computing, who suggested that using neuromorphic systems to discover new algorithms is a potentially productive approach.

“My answer goes back to the earlier point Sadas and Rick made, which is, if we’re throwing away efficiency in the information-power conversion process, why don’t we stay with biological systems for a bit longer. There’s this interesting field called synthetic biological intelligence. They are trying to do these brain-computer interfaces, not in a Neuralink way, because that’s still shrouded in uncertainty. But there is a company, and they grow these brain cells in a petri dish. Then they connect this to an Atari Pong game. And you can see that after just 10 minutes, these brain cells self-organize into neural networks, and they can learn to play the game,” said Xia.

“Keep in mind, this is 10 minutes in real life; it’s not simulation time. It’s only dozens of games, just like how we pick up games. So this data efficiency is enormous. What I find particularly fascinating about this is that in this experiment there was no optimization goal. There is no loss function you have to tweak. The system, when connected in this closed-loop fashion, will just learn in an embodied way. That opens so many possibilities. You think about all these dishes, just consuming glucose; you can have them learn latent representations, maybe to be used in digital models.”

 

Li, a computer architecture expert, noted that general-purpose computing infrastructure has existed for a long time.

“I remember this is the same architecture of processor design I learned at school, and I still teach the same materials today. For the most part, when we’re trying to understand how CPUs work, and even some of the GPUs, those have been around for a long time. I don’t think there have been a lot of really revolutionary changes to those architectures. There’s a reason for that: we have developed good tool chains, the compiler tool chains; people are educated to understand and program and build those systems. So anytime we want to make a big change, [it has] to be competitive and as usable as what we know of today,” Li said.

On balance, she expects more incremental changes. “I think it’s not going to be just a big jump and we’ll get there tomorrow. We have to build on small steps, building on existing understanding and also evolving along with the application requirements. I do think that there will be places where we can increase energy efficiency. If we’re looking at the memory hierarchy, for example, we know caches and that [they help] us with performance. But [they’re] also super inefficient from an energy standpoint. This has worked for a long time because traditional applications have good locality, but we are increasingly seeing new applications where [there] may not be as much locality, so there’s room for innovation in the memory hierarchy path. For example, we can design for different memory reference patterns and infrastructures for applications that do not [exhibit] locality. That will be one way of making the whole computing system much more efficient.”
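Li’s locality point is easy to demonstrate empirically. The snippet below is a rough, illustrative sketch (not from the talk): it sums the same large NumPy array once in sequential order and once in a random order. On typical hardware the random-order pass is several times slower, largely because it defeats caches and hardware prefetchers; the exact ratio is machine-dependent.

```python
import time
import numpy as np

N = 20_000_000                       # ~160 MB of float64, much larger than the caches
data = np.random.rand(N)
rand_idx = np.random.permutation(N)  # a cache-hostile access order

def timed(label, fn):
    """Time a callable and print the elapsed wall-clock seconds."""
    t0 = time.perf_counter()
    result = fn()
    print(f"{label:>10}: {time.perf_counter() - t0:.3f} s (sum={result:.2f})")

# Sequential pass: streams through contiguous memory, friendly to caches/prefetchers
timed("sequential", lambda: data.sum())

# Random-order pass: same arithmetic, but the gather turns most accesses into cache misses
timed("random", lambda: data[rand_idx].sum())
```

The arithmetic is identical in both passes; only the memory reference pattern changes, which is exactly the kind of overhead Li suggests attacking through new memory hierarchies.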

Li noted the trend toward specialized computing was another promising approach: “If we use a general-purpose computing system like a CPU, there’s overhead that goes into fetching the instructions and decoding them. All of those overheads are not directly solving the problem; they’re just what you need to get the generality to solve all problems. Increasing specialization, offloading different specialized tasks, would be another interesting perspective for approaching this problem.”

There was an interesting exchange between Shankar and Stevens over the large amount of energy consumed in training today’s large natural language processing models.

Shankar said, “I’m quoting from the literature on deep neural networks or any of these image recognition networks. They scale quadratically with the number of data points. One of the latest things being hyped in the last few weeks is a trillion-parameter natural language processing [model]. So here are the numbers. To train one of those models takes the energy equivalent of four cars being driven for a whole year, including the manufacturing cost of the cars, just to train the model. That is how much energy is spent in the training of this, so there is a real problem, right?”

Not so fast, countered Stevens. “Consider using the same numbers for how much energy is going into Bitcoin, right? So the estimate is maybe something like 5 percent of global energy production. At least these neural network models are useful. They’re not just used for natural language processing. You can use them for distilling knowledge. You can use them for imaging and so forth. I want to shift gears a little bit. Governments around the world and VCs are putting a lot of money into quantum computing, and based on what you were talking about, it’s not clear to me that that’s actually the right thing we should be doing. We have lots of opportunities for alternative computing models, alternative architectures that could open up spaces that we know in principle can work. We have classical systems that can do this,” he said.

Today, an army of computational scientists around the world is seeking ways to advance computing, some focused on the energy aspect of the problem, others on areas such as performance or capacity. It will be interesting to see whether the framework embodied in Shankar’s forthcoming paper not only provokes discussion but also provides a concrete methodology for comparing the efficiency of computing systems.

Link to ANL video: https://vimeo.com/event/2081535/17d0367863

Brief Shankar Bio

Sadasivan (Sadas) Shankar is Research Technology Manager at SLAC National Laboratory and Adjunct Professor of Materials Science and Engineering at Stanford. He is also an Associate in the Department of Physics in the Harvard Faculty of Arts and Sciences, and was the first Margaret and Will Hearst Visiting Lecturer at Harvard University and the first Distinguished Scientist in Residence at the Harvard Institute for Applied Computational Science. He has co-instructed classes related to materials, computing, and sustainability, and was awarded the Harvard University Teaching Excellence Award. His research spans materials, chemistry, specialized AI methods for complex problems in the physical and natural sciences, and new frameworks for studying computing. He is a co-founder and the Chief Scientist of Material Alchemy, a “last mile” translational and independent venture for sustainable design of materials.

Dr. Shankar was a Senior Fellow at UCLA-IPAM during a program on Machine Learning and Many-body Physics, an invited speaker at The Camille and Henry Dreyfus Foundation on the application of machine learning to chemistry and materials, a Carnegie Science Foundation panelist for Brain and Computing, a National Academies speaker on Revolutions in Manufacturing through Mathematics, an invitee to a White House event for the Materials Genome Initiative, a Visiting Lecturer at the Kavli Institute for Theoretical Physics at UC Santa Barbara, and the first Intel Distinguished Lecturer at Caltech and MIT. He has given numerous colloquia and lectures at universities around the world. Dr. Shankar also worked in the semiconductor industry in the areas of materials, reliability, processing, and manufacturing, and is a co-inventor on over twenty patent filings. His work has been featured in the journal Science and as a TED talk.
