Google’s Controversial AI Chip Paper Under Scrutiny Again 

By Agam Shah

October 3, 2023

A controversial research paper in which Google claimed the superiority of AI techniques in creating chips is under the microscope over the authenticity of its claims. Science publication Nature is investigating Google’s claim that artificial intelligence techniques helped floor-plan its AI chip, that is, establish the chip’s basic construct, in under six hours, faster than human experts. 

Nature put an editor’s note on the paper, saying, “Readers are alerted that the performance claims in this article have been called into question. The Editors are investigating these concerns, and, if appropriate, editorial action will be taken once this investigation is complete.” 

Initially published in 2021, the paper described using AI to design a version of Google’s Tensor Processing Unit, or TPU, which the company uses in its cloud data centers to power AI in applications that include Search, Maps, and Google Workspace. 

The chip in question was identified on Twitter as TPU v5 by researcher Anna Goldie, who was among the 20 authors of the paper. Nature has put an asterisk on the paper. 

Google TPU board. Source: Google, “Inside a Google Cloud TPU Data Center” video

Google said the intention was not to replace human designers but to show how AI could be a collaborative technique to speed up chip designs. 

A version of the TPU v5, the TPU v5e, came out last month and is now available in Google Cloud. 

It was Google’s first AI chip released with a suite of software, development, and virtualization tools so customers can budget for and manage the orchestration and deployment of AI workloads. The new AI chip competes with Nvidia’s H100 GPU and succeeds the previous-generation TPU v4, which was used to train the PaLM 2 large language model. 

The controversial research paper has been plagued with trouble from the start. The paper’s merits were questioned internally, and one of its authors who spoke out, Satrajit Chatterjee, was fired and subsequently filed a lawsuit against Google for wrongful termination. 

Google researchers said the paper went through peer review, but the research has not held up well under challenges from independent researchers. 

Google was criticized for releasing minimal amounts of information related to the research and resisting calls for the full release of data for public scrutiny. The company ultimately placed limited amounts of information on GitHub.

The research provides a framework for using deep reinforcement learning to floor-plan the chip, or lay down the building blocks of the TPU v5. The paper revolves around using AI to place large circuit blocks that perform specific macro functions in logical spots to generate chip designs. Macro placement is critical to chip design and a very challenging process. 

Google’s reinforcement-learning technique developed a chip design from input information such as a circuit netlist of connected circuit components and data such as the tracks available for wire routing. The output was a clean chip design conducive to good macro placements. 

In six hours, Google was able to put together the building blocks of a cohesive chip over a specific area and within a specific power and performance envelope. Over time, the AI agent draws on past learning, which reinforces its current knowledge, to better place chip modules at process nodes below 10 nanometers. 

The Google technique used a learning model that took 48 hours to train on 200 CPUs and 20 GPUs, and those hours were not accounted for in the total time it took to design the chip. 
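To make the approach concrete, the following is a minimal, hypothetical sketch of the kind of sequential macro-placement loop the paper describes. It is not Google’s released code: the grid size, netlist, and wirelength-proxy reward are toy stand-ins, and the learned policy network is replaced by random sampling purely to keep the example self-contained.

```python
# Toy sketch of a sequential macro-placement episode, in the spirit of the
# deep-RL floor-planning described above. All names and values are illustrative.
import random

GRID = 8  # hypothetical 8 x 8 placement canvas

# Toy netlist: each macro lists the macros it is wired to.
NETLIST = {
    "macro_a": ["macro_b", "macro_c"],
    "macro_b": ["macro_a", "macro_d"],
    "macro_c": ["macro_a", "macro_d"],
    "macro_d": ["macro_b", "macro_c"],
}

def wirelength(placement):
    """Proxy cost: sum of Manhattan distances between connected macros."""
    total = 0
    for macro, neighbors in NETLIST.items():
        x0, y0 = placement[macro]
        for other in neighbors:
            x1, y1 = placement[other]
            total += abs(x0 - x1) + abs(y0 - y1)
    return total

def policy(placed):
    """Stand-in for the learned policy network: pick an unused cell at random."""
    while True:
        cell = (random.randrange(GRID), random.randrange(GRID))
        if cell not in placed.values():
            return cell

def run_episode():
    """Place macros one at a time; the reward is the negative final wirelength."""
    placement = {}
    for macro in NETLIST:
        placement[macro] = policy(placement)
    return -wirelength(placement), placement

# A trained agent would update its policy from these rewards; here we simply
# keep the best of many random episodes to show how the reward ranks layouts.
best = max((run_episode() for _ in range(1000)), key=lambda episode: episode[0])
print("best reward:", best[0])
print("best placement:", best[1])
```

In the published work, the reward reportedly combines a wirelength proxy with congestion and density terms, and the policy is a neural network trained across many chip blocks; the sketch above only mirrors the overall shape of the placement loop.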

One challenger of Google’s research, Andrew B. Kahng, a professor of computer science at the University of California, San Diego, found that Google needed to be more cooperative. He criticized the company’s unwillingness to release critical material such as training datasets, baseline information, and code that would allow other researchers to reproduce the results. 

He had to reverse-engineer Google’s chip-design technique and found that human chip designers and conventional automated tools could sometimes be faster or more effective than Google’s AI-only approach. He presented a paper detailing those findings at the International Symposium on Physical Design in March; however, he did not question the value of Google’s techniques. 

Flaws aside, the research contributes to chip-design research, and Google is one of the few companies to share information on the AI techniques it uses for chip design. It builds on work already done behind closed doors by Cadence and Synopsys to bring AI to chip design. AMD and Amazon have claimed to use AI in chip design but have not discussed their techniques. 

The Nature fiasco is not the first time Google’s hardware research has come under the microscope. In 2019, Google claimed quantum supremacy, with quantum computers outperforming classical computers. Google argued that its 54-qubit system, called Sycamore, in which the qubits are arranged in a 2D array, solved in 200 seconds a specific problem that would take classical supercomputers 10,000 years. 

IBM disputed the claim, saying the paper was flawed and was creating confusion about quantum and supercomputing performance, and set out to disprove Google’s theory. A subsequent IBM paper claimed that its Summit computer, with the help of additional secondary storage, could achieve six times better performance than that declared in Google’s quantum supremacy paper and solve the problem in a reasonable amount of time. 

Google’s controversial 2019 quantum paper, considered groundbreaking at that time, was also based on closed-door experiments and has not aged well. In subsequent years, more researchers stepped forward to challenge Google’s claims. The flaw was Google’s apples-to-oranges comparison of its optimized quantum algorithms against older, slower classical algorithms. 

It is unclear if the TPU v5e was designed using the reinforcement-learning technique, but Google has claimed superior performance for the chip compared with the previous-generation TPU v4. 

Eight TPU v5e chips can train large language models with up to 2 trillion parameters. This month, Google claimed that “each TPU v5e chip provides up to 393 trillion int8 operations per second (TOPS), allowing fast predictions for the most complex models,” implying the chip is designed primarily for lower-precision inferencing operations. Training typically requires a floating-point pipeline. 
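As a back-of-the-envelope illustration of what that quoted figure implies, the snippet below simply scales the per-chip claim to the eight-chip configuration mentioned above. The numbers come from Google’s marketing claim rather than independent measurement, and delivered throughput would depend on the model, batch size, and interconnect.

```python
# Illustrative arithmetic only, based on the figures quoted above.
TOPS_PER_CHIP = 393   # claimed peak int8 tera-operations per second per TPU v5e
CHIPS = 8             # the eight-chip configuration cited above

aggregate_tops = TOPS_PER_CHIP * CHIPS
print(f"Aggregate peak int8 throughput: {aggregate_tops} TOPS")  # 3144 TOPS
```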

Google is trying to play catch-up in AI with Microsoft, which uses OpenAI’s GPT-4 and Nvidia GPUs in its Azure AI supercomputer. Google recently integrated its Bard chatbot into Google Workspace, web search, and other tools, and Bard runs its AI calculations on TPUs. 
