Google Launches TPU v4 AI Chips

By Todd R. Weiss

May 20, 2021

Google CEO Sundar Pichai spoke for only one minute and 42 seconds about the company’s latest TPU v4 Tensor Processing Units during his keynote at the Google I/O virtual conference this week, but it may have been the most important and awaited news from the event.

With the new release, the company has more than doubled the performance of its TPU hardware over the previous-generation TPU v3 chips, bringing critical new power and promise to machine learning training speeds on the Google Cloud Platform.

“Our compute infrastructure is how we drive and sustain these [AI and ML] advances and Tensor Processing Units are a big part of that,” said Pichai during the almost two-hour-long keynote on May 18 (Tuesday). “Today I’m excited to announce our next generation, the TPU v4. TPUs are connected together into supercomputers, called pods. A single v4 pod contains 4,096 v4 chips, and each pod has 10x the interconnect bandwidth per chip at scale, compared to any other networking technology.”

Google CEO Sundar Pichai announcing TPU v4 at Google I/O 2021.

The resulting computing power of the new TPUs means that one TPU pod of v4 chips can deliver more than one exaflops of floating point performance, said Pichai. The performance metrics are based on Google’s custom floating point format, called “Brain Floating Point Format,” or bfloat16.
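For illustration, bfloat16 keeps float32's 8-bit exponent (and thus its dynamic range) but shortens the mantissa from 23 bits to 7, trading precision for cheaper arithmetic and memory traffic. A minimal Python sketch of the float32-to-bfloat16 conversion (round-to-nearest-even; NaN handling omitted for brevity):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Convert a float32 value to bfloat16's 16-bit pattern (round-to-nearest-even)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Add a rounding bias to the 16 bits that will be discarded,
    # breaking ties toward the even (zero) low bit of the result.
    rounding = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding) >> 16) & 0xFFFF

def bfloat16_to_float(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# 1.0 is exactly representable; pi loses mantissa precision but keeps its magnitude.
print(bfloat16_to_float(float32_to_bfloat16_bits(1.0)))         # 1.0
print(bfloat16_to_float(float32_to_bfloat16_bits(3.14159265)))  # 3.140625
```

This is a sketch of the format's layout, not Google's hardware implementation; in practice frameworks such as TensorFlow expose bfloat16 as a built-in dtype.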

The new TPU v4 infrastructure, which will be available to Google Cloud customers later this year, is the fastest system ever deployed at Google, which Pichai called “a historic milestone.”

Creating an exaflops of computing power previously required a custom-built supercomputer, he said. “But we already have many of these deployed today, and we’ll soon have dozens of TPU v4 pods in our datacenters, many of which will be operating at or near 90 percent carbon-free energy. It’s tremendously exciting to see this pace of innovation.”
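The per-chip arithmetic behind that pod-level claim is straightforward. Taking the round numbers quoted above (at least one exaflops across a 4,096-chip pod; these are not official per-chip specifications), a back-of-the-envelope sketch:

```python
# Back-of-the-envelope: implied peak bf16 throughput per chip,
# derived from the quoted pod-level figure.
pod_flops = 1.0e18      # "more than one exaflops" per v4 pod (bfloat16)
chips_per_pod = 4096    # chips in a single v4 pod, per the keynote

per_chip_tflops = pod_flops / chips_per_pod / 1e12
print(f"~{per_chip_tflops:.0f} TFLOPS per TPU v4 chip (bf16, lower bound)")
# prints "~244 TFLOPS per TPU v4 chip (bf16, lower bound)"
```

Since the pod figure is stated as a lower bound ("more than one exaflops"), the actual per-chip peak would be at least this value.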

Google’s previous version, TPU v3, was unveiled in 2018.

TPUs are Google’s custom-developed application-specific integrated circuits (ASICs) which are used to accelerate ML workloads. Developers can use Google Cloud TPUs and Google’s TensorFlow open source machine learning software library to run their ML workloads. TensorFlow was developed and first released by Google in 2015.

Google Cloud TPU is designed to help researchers, developers and businesses build TensorFlow compute clusters that can use CPUs, GPUs and TPUs as needed. TensorFlow APIs allow users to run replicated models on Cloud TPU hardware, while TensorFlow applications can access TPU nodes from containers, instances or services on Google Cloud.

Several AI analysts were quick to tout the TPU v4 news and what it will mean for enterprises that are faced with constantly growing ML training demands.

“If you’re trying to train a large AI/ML system, and you are using Google’s TensorFlow specifically, this will be a big deal,” Jack E. Gold, president and principal analyst with J. Gold Associates, told EnterpriseAI. “There is never enough processing power when large models are being trained, with some taking days or weeks to run on current systems available in the cloud, and mostly based on highly parallel GPUs. And this can be very costly in terms of cloud costs and power.”

In response, Google has built TPU chips that are highly optimized for TensorFlow-based modeling, expediting the training of models, especially those that must be updated often or that use large data sets, said Gold.

“So, what Google is doing here with its v4 chip is to dramatically increase the compute horsepower available, and reduce time to model significantly,” said Gold. “They are also enabling much larger models to run in a reasonable amount of time. But equally importantly they are reducing the amount of power per model – since if the models run faster, they use less total power. And that’s also good for their cloud datacenters costs, as well as just sheer capacity to handle more users.”

Deploying its own TPUs is also a move by the company to continue substituting its own processors for those of other vendors, he said. “Google wants to stay ahead of the others like AWS and Microsoft, that are also building their own accelerators for their AI cloud-based services.”

Gold also noted that since Google does a lot of its own AI/ML/DL modeling, anything the company can do to enhance its own internal capabilities is a big win. “It’s not just about supporting external customers – it’s also about their own requirements,” he said.

Charles King, principal analyst with Pund-IT, said that Google’s doubling of the previous v3 chips’ performance and its achievement of exascale performance in a single v4 pod are both impressive.

“It’s a notable achievement that demonstrates the company’s technical acumen and its willingness to continue funding chip development,” said King. It’s also important for the company’s customers, he added.

“Absolutely, since these new chips will be powering AI-related workloads and services offered in Google Cloud,” said King. “If Google can deliver superior performance at highly competitive prices, it could diminish the value of competitors’ services.”

Holger Mueller, principal analyst at Constellation Research, said the TPU v4 news was “one of the most exciting announcements of Google I/O … as the company builds out its lead with algorithms on silicon with TPU v4.”

With this development, Google keeps building its lead on AI compute over AWS and Microsoft Azure, Mueller said. “[This is the] first architecture to reach an exaflops – and AI needs it. When you do it Google-style… the faster and cheaper AI will win in business and government, including with the military.”

Another analyst, Karl Freund, founder and principal analyst for machine learning, HPC and AI with Cambrian AI Research, said that early benchmarks are promising for the new TPUs.

“TPU v4 looks like a winner, based on early MLPerf benchmarks,” said Freund. “We await final benchmarks which I expect to see this summer when we get closer to the announcement of availability and pricing later this year. It has been a much longer time coming compared to earlier TPUs but may well be worth the wait.”
