Nvidia’s GTC Is the New Intel IDF

By Agam Shah

April 9, 2024

After a multiyear hiatus, Nvidia’s GPU Technology Conference (GTC) was back in person, and it has become the conference for anyone who cares about semiconductors and AI.

In a way, GTC is the new Intel IDF, which was the hottest chip show until Intel disbanded it in 2017 after a successful 20-year run. Intel was then the big boss of the semiconductor industry, and other hardware companies danced to its tune.

Nvidia’s GTC started in 2009 and has kept a consistent format: CEO Jensen Huang rocks a packed stadium with new GPUs, prognostications about the future, new systems, and lots of AI and metaverse demos.

Huang talked about AI assistants helping humans be more productive. He also described how computing will look over the next five years: AI will let humans interact with computers simply by talking in English, reducing the need for programming. English will be the ultimate programming language; all humans will need to do is talk, and computers will use AI models to get the job done.

Right now, companies build models by scraping information off the Internet. In five years, live information will be fed to models, which will learn in real time.

The GTC action started a week ahead of the show, and it was a blast from the past.

For a few years, about a week before Intel’s IDF, AMD held a demo in the Bay Area to show it had the world’s fastest desktop PC chip. Being the world’s fastest mattered: AMD was flailing and needed market share, and gigahertz numbers were, and still are, a big deal.

With Nvidia now the big chip boss, AI chip rivals and data center providers came out a week ahead of GTC to take some of the thunder away. It’s important to recap those announcements as a reminder that there is a world outside Nvidia.

The message from Nvidia’s rivals is: “We have the world’s fastest chips, and you don’t need Nvidia GPUs.”

Either way, GTC is helping the entire AI industry. Even Intel got its own pre-GTC AI announcement from an AI partner, highlighting just how far the chip company has fallen in the pecking order.

The most significant announcement came from Cerebras, which introduced WSE-3, the world’s largest and fastest AI chip. The chip’s wow factor is in its numbers: it delivers close to two times the performance of its predecessor, the WSE-2, at the same size and price.

Cerebras’s mega-AI chip is meant only for training large models, which are then deployed for consumption through inferencing. The WSE-3 has 900,000 cores and provides 125 petaflops of computing performance, and it packs 4 trillion transistors, up from the 2.6 trillion in the WSE-2.

The WSE-3 is made on TSMC’s 5nm process, more advanced than the 7nm process used for the WSE-2. On-chip memory is 44GB and memory bandwidth is 21 petabytes per second, both slight increases over the previous generation.

The chip is 57 times larger than Nvidia’s H100 GPU, has 52 times more cores, and exceeds the Nvidia GPU in memory, bandwidth, and other major measures by orders of magnitude. It’s not exactly an apples-to-apples comparison, but Cerebras CEO Andrew Feldman has never been shy about criticizing Nvidia’s approach to GPUs, and there are real differences between the implementations.
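As a sanity check, the headline ratios can be reproduced from published specs. The H100 figures below (an 814 mm² die with 16,896 CUDA cores) and the WSE-3 die area (46,225 mm²) are public vendor numbers, not from this article, so treat this as a back-of-the-envelope sketch:

```python
# Rough ratio check using publicly quoted specs (assumed, not from the article).
WSE3_AREA_MM2 = 46_225   # Cerebras' quoted wafer-scale die area
H100_AREA_MM2 = 814      # Nvidia's quoted H100 die area
WSE3_CORES = 900_000     # WSE-3 AI cores
H100_CORES = 16_896      # H100 CUDA cores (SXM variant)

area_ratio = WSE3_AREA_MM2 / H100_AREA_MM2
core_ratio = WSE3_CORES / H100_CORES

print(f"WSE-3 is ~{area_ratio:.0f}x larger than an H100")  # ~57x
print(f"WSE-3 has ~{core_ratio:.0f}x more cores")
```

The area ratio lands on the 57x figure; the core ratio comes out near 53x here, so the 52x claim presumably uses a slightly different core count.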

Cerebras chips save space and are more power-efficient than GPUs in training, and they offer an easier programming model that relies largely on Python, Git-style repositories, and customized wrappers for libraries such as PyTorch. The wrappers allow PyTorch programs to run on the CS-3.
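The wrapper idea can be pictured as a thin dispatch layer: user code keeps calling a familiar interface, and the wrapper reroutes the work to a different backend. The sketch below is purely illustrative plain Python; the backend names and registry are hypothetical and are not Cerebras or PyTorch APIs:

```python
# Illustrative dispatch-wrapper pattern (hypothetical names, not a real SDK).
from typing import Callable, Dict

_BACKENDS: Dict[str, Callable[[str], str]] = {}

def register_backend(name: str):
    """Decorator that registers an execution backend under a name."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        _BACKENDS[name] = fn
        return fn
    return wrap

@register_backend("cpu")
def run_on_cpu(op: str) -> str:
    return f"{op} executed on CPU"

@register_backend("wafer")  # stand-in for a wafer-scale target
def run_on_wafer(op: str) -> str:
    return f"{op} executed on wafer-scale engine"

def run(op: str, backend: str = "cpu") -> str:
    # The same user-facing call is routed to whichever backend is
    # selected; the caller's code does not change.
    return _BACKENDS[backend](op)

print(run("matmul"))                   # matmul executed on CPU
print(run("matmul", backend="wafer"))  # matmul executed on wafer-scale engine
```

This is the general shape of such compatibility layers: the user-visible calls stay the same while the execution target is swapped underneath.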

“CS-3 delivers answers in minutes or hours that would take days, weeks, or longer on large multi-rack clusters of legacy, general purpose processors,” the company says in its literature.

Nvidia’s GPUs have a wide installed base; the company was long the only one making AI hardware, with others arriving much later. Nvidia’s GPUs are also all-purpose AI and graphics chips: already used to train AI models, they can also run inferencing, graphics, metaverse, and scientific computing applications.

Companies have already invested billions of dollars in Nvidia hardware. Switching may be difficult for such organizations, as the cost and risk of exploring new hardware could be high: it could mean rewriting software and fine-tuning it to recover performance.

“Most of the time, you engage our platforms for inferencing. Today, 100% of the world’s inferencing is Nvidia,” Huang said in a recent keynote at Stanford.

Cerebras has a go-to chip for inferencing through a partnership with Qualcomm.

Separately, Stability AI, which makes the Stable Diffusion text-to-image model, promoted Intel’s Gaudi 2 chips by saying they were faster at inferencing than Nvidia’s older A100 chips. Inferencing on the upcoming Stable Diffusion 3 model processed over three times more images per second than on A100-80GB GPUs, the company said in a blog entry.

“Companies like ours face an increasing demand for more powerful and efficient computing solutions. Our findings underscore the need for alternatives like the Gaudi 2, which not only offers superior performance to other 7nm chips but also addresses critical market needs such as affordability, reduced lead times, and superior price-to-performance ratios,” the company wrote.

AI chip maker Groq has received extensive press coverage recently, raising its profile. Late last week, it added support for the lightweight, open-source Gemma 7B Instruct model, which Google built from the same research behind its Gemini models. It is the latest open LLM to hit the market, following Mistral and Llama 2, which Meta open-sourced last year.

But Groq also received bad news: Jay Zaveri, its booster at venture capital firm Social Capital, was fired. The firing was linked to the Groq investment, according to a story in Fortune.

Groq posted a statement on Twitter saying it had little knowledge of the firing and that it “will determine how Mr. Zaveri’s departure from Social Capital will impact Groq’s Board of Directors.”

Meta, a big backer of Nvidia’s GPUs, also provided additional details on its investment in GPUs in a blog entry.
