Cerebras CEO Andrew Feldman Sounds Off on Nvidia’s Roadmap and Chiplets 

By Agam Shah

October 30, 2023

Nvidia this month unexpectedly released an updated GPU roadmap that promises new products every year.

The new GPUs, slated for 2024-2026, come despite customers lining up for the red-hot A100 and H100 GPUs for their AI computing needs.

Tesla was among the companies waiting to receive Nvidia GPUs and finally received a batch of 10,000 H100s to power its AI operations, CEO Elon Musk said during an earnings call last week. 

Nvidia clearly is not resting on its laurels, though the company declined to comment on the roadmap.

Industry observers suggested Nvidia could leverage chiplets, advanced packaging, and manufacturing technologies to advance its chips on an unprecedented yearly cadence. Others noted that the roadmap may be a placeholder; the company is under no obligation to deliver on it.

Andrew Feldman, the CEO of Cerebras Systems, felt differently: he called Nvidia’s roadmap a “predatory pre-announcement” and said the company was using deceptive practices and its dominant position to hinder competition.

Feldman offered HPCwire his unabashed opinion on why Nvidia’s roadmap may not be realistic and why it could turn customers off.

Feldman is one of the most vocal critics of Nvidia, but he also has the pedigree, as the architect of the world’s largest AI chip. He also talked about why Cerebras’ integrated approach to chip development – albeit at wafer scale – remains relevant in a world heading toward chiplets.

Cerebras WSE-2 vs. Nvidia GPU (Source: Cerebras)

HPCwire: What do you think of Nvidia’s yearly product roadmap? 

Andrew Feldman: I think this is very likely a predatory pre-announce. It is hard to say. Is the pre-announcement because they want to do it or because it helps confuse the market? I think it is the latter.  

It is what Cisco did – they pre-announced a three-phase program that supposedly solved world peace but never got to phase two, let alone three.

HPCwire: What was Cisco’s predatory pre-announce affair? 

Feldman: In the late 90s, suddenly, there were a whole bunch of competitors that were eating Cisco’s lunch. And they could not do their engineering as fast. 

They put out a three-phase plan that would take five years. The whole kitchen sink got thrown in. It froze the market for a little bit and gave their engineering a chance to sort of catch up. They never delivered on all three phases, ever.  

In many ways, it has been a terrible block of time for Nvidia. Stability AI said they were going to go on Intel. Amazon said Anthropic was going to run on them. We announced a monstrous deal that would produce enough compute so it would be clear that you could build… large clusters with us.

[Nvidia’s] response, not surprising to me, in the strategy realm, is not a better product. It’s… throw sand up in the air and move your hands a lot. And you know, Nvidia was a year late with the H100. 

HPCwire: It is an interesting time… you can accelerate roadmaps with chiplets and advances in manufacturing. You can add different parts, especially SRAM and analog, which do not scale well to three nanometers.

Feldman: Companies have been making chips for a long time, and nobody has ever been able to succeed on a one-year cadence because the fabs do not change at a one-year pace. 

That means you are paying a huge amount of money to wait for masks and not getting enough time to amortize the cost of those masks. Your vendor does not make money on masks; they make money on the runs. 

I think of that as not designing a new chip but modifying the package. You might be able to swap chiplets at regular intervals but remember, that means every nine months, you are going to piss off a customer by selling them a chip that is out of date three months later.  

If they are changing the package, it is certainly a smaller lift. It puts some pressure on your software team. And it certainly puts pressure on your customers … every nine months, everything they bought is immediately moved off the cutting edge in favor of some other product. 

HPCwire: Cerebras has gone big, with everything integrated into one giant wafer. Others are going in the opposite direction, decomposing integrated chips into chiplets. Why don’t you do the same?

Feldman: There are two ways to look at it. One is that they are going small, but the other is that they were not good enough to go big. They need more silicon, too, and they are just doing it on lots of little pieces of separate silicon. 

We can put it on one piece of silicon, but they want more total silicon. And they [Nvidia] are using an 800 mm² primary chip, and then they are using lots of memory chips, and then they are using IO chips. And all of that. We just went with a big chip.

I think both strategies try to use more silicon area. We used it on one undiced wafer. They’ve broken it up into many little pieces that must be reassembled on a motherboard or the package.  

At the highest level, there’s absolute agreement that you need more silicon area, and we need more transistors for these problems. Whether you do it with one big chip or lots of little chips is an implementation detail of the general idea that you need more silicon.  

HPCwire: How do you look at chip design going into the future?  

Feldman: We have the most memory bandwidth. We have huge amounts of IO, and I think everybody wants more.  

Thinking about how to get more is hugely important, whether it is with chiplets, stacking, or other innovative approaches. Everyone is hunting for more memory bandwidth because these problems are memory-bandwidth constrained. And that is why we are faster than GPUs. But nobody is standing still.

HPCwire: How do you pack more memory in integrated versus the chiplet design approach?  

Feldman: SRAM is on your main die. It is the memory that lives next to compute. If you have a limited-size chip, like the roughly 800 mm² H100, every square millimeter you give to SRAM you take away from a core. You have this dilemma: you can put more memory on chip, which is blisteringly fast, or you can have more compute.

What has been done is on GPUs — they have skinnied up the SRAM on the chip in favor of DRAM or HBM off-chip, which costs a ton. It is a hard problem. That is why we went to wafer scale, so we could slam down a huge amount of SRAM and a huge number of cores. That is what all those architectural choices are about. 
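The dilemma Feldman describes is easy to see with a little arithmetic. Below is a toy area-budget sketch, not real H100 or Cerebras data; the die size comes from the interview, but the SRAM and core density figures are invented placeholders chosen only to show the shape of the trade-off.

```python
# Toy model of the on-die SRAM vs. compute trade-off. All density
# figures are hypothetical placeholders, not real chip data.

DIE_AREA_MM2 = 800          # reticle-limited die, per the interview
MM2_PER_MB_SRAM = 0.5       # assumed SRAM density (illustrative)
MM2_PER_CORE = 2.0          # assumed area per compute core (illustrative)

def die_budget(sram_fraction: float) -> tuple[float, int]:
    """Split a fixed die between SRAM and cores; more of one
    necessarily means less of the other."""
    sram_area = DIE_AREA_MM2 * sram_fraction
    core_area = DIE_AREA_MM2 - sram_area
    sram_mb = sram_area / MM2_PER_MB_SRAM
    cores = int(core_area / MM2_PER_CORE)
    return sram_mb, cores

for frac in (0.05, 0.25, 0.50):
    mb, cores = die_budget(frac)
    print(f"{frac:>4.0%} of die to SRAM -> {mb:6.0f} MB on-chip, {cores} cores")
```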

HPCwire: Is the advantage the bandwidth? 

Feldman: That’s it. That’s how you get on and off of the chip. That is how you power the chip. Those are fundamental elements often overlooked — the package delivers power and IO. 

Our decision to put everything on one wafer vastly simplified our ability to communicate across the equivalent of hundreds of GPUs. They have to put switches down, invent NVLink, and then they’ve got some of their customers that don’t buy NVLink and have to use InfiniBand or Ethernet. We move data at one-thousandth of the power and 1,000 times as fast.

[Nvidia] recognizes now that they are going to need more IO, do some chiplets, and those are going to spin at a different rate than their primary processors. But they are attacking the same fundamental problem, which is — how on earth do we get more silicon to bear on the problem? 

HPCwire: Chiplets seem better for technologies like analog chips, which may not scale to cutting edge. How do you overcome that with your integrated approach?  

Feldman: Historically, there were parts on your chip, in particular SERDES (serializer/deserializer, a transceiver that converts parallel data to serial data and vice versa), that were analog. And that IP was not moving at the same speed as the rest of the CMOS design, the rest of your logic. We designed around that problem early on.

Our view was that it is a huge problem, and it is also a huge problem that you are likely to buy SERDES from a small number of vendors, and they are extraordinarily expensive. Why don’t we design them out completely? So, instead of disaggregating them, we designed them out.

HPCwire: Where is the complexity in AI chip design – is it in learning or inferencing? 

Feldman: Inference is a very easy problem, except generative inference, which is a very hard problem and extremely memory- and bandwidth-intensive. All the inference we do on images is a trivial problem.

Generative AI is a very hard inference problem. GPUs are very bad at it. And we all do it this minute. But CPUs did it for a while. I think you will see a whole bunch of new parts coming out over the next six, nine, 12 months that will be better at it.

But it is a very, very hard problem; it is extremely memory intensive because you are generating each token based on the previous tokens, and that is a linked problem, and you are doing that within a context. And that’s memory, memory, memory. 
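To make the “memory, memory, memory” point concrete, here is a minimal sketch of the traffic pattern Feldman is describing. The model dimensions are assumed placeholders, not any specific product’s; the sketch just counts the bytes of cached keys and values that must be re-read at every decode step, since each new token depends on all previous ones.

```python
# Toy model of why autoregressive generation is memory-bound: every
# new token attends over all previously cached keys and values, so
# each decode step re-reads the whole cache. Dimensions below are
# hypothetical, not any real model's.

D_MODEL, N_LAYERS, BYTES_PER_VALUE = 4096, 32, 2   # assumed dims, fp16

def kv_cache_traffic(context_len: int, new_tokens: int) -> int:
    """Bytes of key/value cache read while generating `new_tokens`."""
    total = 0
    for t in range(new_tokens):
        tokens_so_far = context_len + t
        # two tensors (K and V) per layer, d_model values each
        total += tokens_so_far * N_LAYERS * 2 * D_MODEL * BYTES_PER_VALUE
    return total

print(f"{kv_cache_traffic(2048, 256) / 1e9:.1f} GB read from the cache "
      "to emit just 256 tokens")
```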

HPCwire: Sparsity and keeping data closer to processing seem to be a big deal in your AI stack.

Feldman: Sparsity gives you an advantage in every step. You do not store stuff you are not going to use. It is not going to produce any new information. You do not transport bits that don’t carry information. In each of those, you can think about it as a form of compression. You compress the amount of data you need to move so you get more bang for your bandwidth. Each of those is fundamental to the way we think about the problem.
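As a back-of-the-envelope illustration of “more bang for your bandwidth,” the sketch below compresses a mostly-zero weight vector into (index, value) pairs so that only informative values are stored and moved. The format and the sparsity level are assumptions for illustration, not Cerebras’ actual scheme.

```python
# Toy illustration of sparsity as compression: do not store or move
# values that carry no information (zeros). The (index, value) format
# and 90% sparsity are illustrative assumptions.

import random

def to_sparse(dense):
    """Keep only (index, value) pairs for the nonzero entries."""
    return [(i, v) for i, v in enumerate(dense) if v != 0.0]

random.seed(0)
weights = [random.gauss(0, 1) if random.random() < 0.1 else 0.0
           for _ in range(100_000)]              # ~90% zeros (assumed)

sparse = to_sparse(weights)
dense_bytes = len(weights) * 4                   # 4-byte value per entry
sparse_bytes = len(sparse) * (4 + 4)             # 4-byte index + value

print(f"dense:  {dense_bytes / 1e3:.0f} KB moved")
print(f"sparse: {sparse_bytes / 1e3:.0f} KB moved "
      f"({dense_bytes / sparse_bytes:.1f}x less data for the same information)")
```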

HPCwire: You are still at 7nm, and Nvidia carries a significant advantage in process. With your chip being a whole wafer, does the nanometer process even matter for you?

Feldman: Our ability to put transistors down is one of humanity’s crowning achievements. That we can put transistors down at five or three nanometers is extraordinary. The gains you get are real and meaningful, and that cannot be ignored.  

However, in the most recent generation, [Nvidia] did not come with any pricing advantage. The H100 is approximately twice [the size of] A100; it has approximately twice as many transistors. So you got twice the compute for twice the price. And that is not a huge gain traditionally. 

Your choices are to invent things, like we did, and put down 46,000 square millimeters of silicon. If you do not want to invent things, you are going to reorganize chips at 800 mm² and smaller.

It is like saying, ‘Oh, look, we can put two on a motherboard.’ Okay. ‘Oh look, we can tie two together with an NVLink switch and put in a CPU complex.’ Okay. ‘Look, we can put a chip down and another little chiplet that helps it with IO.’ Each of those is the same but slightly different in the grand scheme of things. It is tossing your salad differently.

HPCwire: What have you got coming up? 

Feldman: I cannot share that with you right now. This industry is a treadmill. Either you are moving forward, or you are racing backward. There is all sorts of really interesting stuff that will be announced over time. Right now, we’re building and selling a huge amount of [silicon].
