Google Goes Public with Quantum Supremacy Achievement; IBM Disagrees

By John Russell

October 23, 2019

A month ago the quantum world was abuzz following the discovery of a paper on NASA’s website detailing Google’s supposed success at achieving quantum supremacy. The paper quickly disappeared from the site, but copies were made and a general consensus emerged that the work was likely genuine. Today Google confirmed the work in a big way: the cover article of Nature’s 150th anniversary issue, a blog by John Martinis and Sergio Boixo, Google’s top quantum researchers, an article by Google CEO Sundar Pichai on the significance of the achievement, and a conference call briefing with media from London.

That’s one way to recoup lost “wow power” from an accidentally leaked paper. In their blog, Martinis and Boixo label the work as “The first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any ‘reasonable’ model of computation.” Martinis and Boixo declare, “With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.”

Much of what’s being publicly disclosed today was known from the leaked paper. Google used a new 54-qubit quantum processor – Sycamore – which features a 2D grid in which each qubit is connected to four other qubits, along with higher-fidelity two-qubit “gates.” Google also says the improvements in Sycamore are forward compatible with much-needed quantum error correction schemes. Using Sycamore, Google solved a problem (a kind of random number generator) in 200 seconds that would take on the order of 10,000 years on today’s fastest supercomputers. In this instance Google used DOE’s Summit supercomputer as the basis for its estimate.

“The success of the quantum supremacy experiment was due to our improved two-qubit gates with enhanced parallelism that reliably achieve record performance, even when operating many gates simultaneously. We achieved this performance using a new type of control knob that is able to turn off interactions between neighboring qubits. This greatly reduces the errors in such a multi-connected qubit system. We made further performance gains by optimizing the chip design to lower crosstalk, and by developing new control calibrations that avoid qubit defects,” wrote Martinis and Boixo.

Here’s how Google describes the project in the abstract of its Nature paper:

“A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.”
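The scale of that 2^53-dimensional state space is what makes brute-force classical simulation so costly. A quick back-of-the-envelope sketch (this assumes a full state-vector simulation holding one complex amplitude per basis state, a common but by no means the only classical simulation strategy):

```python
# Back-of-the-envelope: memory needed to hold the full state vector
# of an n-qubit system, one complex amplitude per basis state.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """complex128 = 16 bytes per amplitude; 2**n amplitudes total."""
    return (2 ** n_qubits) * bytes_per_amplitude

n = 53
dim = 2 ** n
print(f"state-space dimension: {dim:.3e}")  # ~9.0e15, i.e. roughly 10^16
print(f"full state vector: {state_vector_bytes(n) / 1e15:.0f} PB")  # ~144 PB
```

At roughly 144 petabytes, the raw state vector alone far exceeds the memory of any single machine, which is why classical challengers (IBM included) rely on disk storage and circuit-partitioning tricks rather than naive simulation.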

Not so fast, says IBM.

Rival quantum pioneer IBM has disputed the Google claim in a blog – “Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.”

Whether it’s sour grapes, a valid claim, or something in between will become clearer in time. Even if IBM’s classical approach is better than the one chosen by Google, it still takes longer than the 200 seconds Google’s Sycamore chip required. (For an excellent insider’s view of the controversy, see Scott Aaronson’s blog, Quantum Supremacy: the gloves are off.)
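Simple arithmetic on the publicly claimed figures shows why even IBM’s rebuttal leaves Sycamore well ahead on raw runtime (the variable names below are illustrative; the figures are the ones cited by the two companies):

```python
# Comparing the three runtime claims for the same sampling task.
SECONDS_PER_DAY = 86_400
sycamore_s = 200                                        # Google's quantum runtime
google_classical_s = 10_000 * 365 * SECONDS_PER_DAY     # Google's Summit estimate
ibm_classical_s = 2.5 * SECONDS_PER_DAY                 # IBM's disputed estimate

print(f"IBM estimate vs Sycamore:    {ibm_classical_s / sycamore_s:,.0f}x slower")
print(f"Google estimate vs Sycamore: {google_classical_s / sycamore_s:.1e}x slower")
```

Even under IBM’s 2.5-day figure, the classical run is roughly a thousand times slower than the quantum one; the dispute is over whether the gap is a factor of ~10^3 or ~10^9, not whether a gap exists for this task.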


In response to questions about Big Blue’s objection, Martinis frankly noted there is an unavoidable “moving target” element in chasing quantum supremacy, as classical and quantum systems each constantly advance (hardware and algorithms), but he didn’t waver on the current Google claim. “We expect in the future that the quantum computers will vastly outstrip what’s going on with these [new classical computing] algorithms. We see no reason to doubt that so I encourage people to read the paper,” said Martinis.

Debate has swirled around the race for quantum supremacy since the term was coined. Detractors call it a gimmicky trick with little bearing on real-world applications or practical quantum machines. Advocates argue it not only proves the conceptual case for quantum computing but will also pave the way for useful quantum computing because of the technologies the race produces. The latter seems certainly true but is sometimes overwhelmed by the desire to deploy practically useful quantum computing sooner rather than later.

Many contend that attaining quantum advantage – performing a task sufficiently better on a quantum computer to warrant switching from a classical machine – is more important in today’s era of so-called noisy quantum computers, which are prone to error.

To put the quantum error correction (QEC) challenge into perspective, consider this excerpt from a recent paper on the topic by Georgia Tech researchers Swamit Tannu and Moinuddin Qureshi: “Near-term quantum computers face significant reliability challenges as the qubits are extremely fickle and error-prone. Furthermore, with a limited number of qubits, implementing quantum error correction (QEC) may not be possible as QEC require 20 to 50 physical qubit devices to build a single fault-tolerant qubit. Therefore, fault-tolerant quantum computing is likely to become viable only when we have a system with thousands of qubits. In the meanwhile, the near-term quantum computes with several dozens of qubits are expected to operate in a noisy environment without any error correction using a model of computation called as Noisy Intermediate Scale Quantum (NISQ) Computing.” (BTW, Tannu and Qureshi’s paper is a good, accessible, and fast read on several key quantum computing error correction issues and approaches to mitigate them.)
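The overhead Tannu and Qureshi describe is easy to quantify. A minimal sketch using their 20-to-50 physical-qubits-per-logical-qubit range (the function name is illustrative, not from their paper):

```python
# Rough logical-qubit yield given the 20-50 physical qubits per
# fault-tolerant qubit overhead cited by Tannu and Qureshi.
def logical_qubits(physical: int, overhead: int) -> int:
    return physical // overhead

for physical in (53, 1_000, 10_000):
    low, high = logical_qubits(physical, 50), logical_qubits(physical, 20)
    print(f"{physical:>6} physical qubits -> {low} to {high} logical qubits")
```

A 53-qubit device yields at most one or two fault-tolerant qubits under this overhead, which is why near-term machines run uncorrected in the NISQ regime and why fault tolerance is generally pegged at the thousands-of-qubits scale.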

It is interesting to dig a bit into the Google work. As in most R&D efforts there were unexpected twists and turns. You may remember the Bristlecone quantum processor, a 72-qubit device that Google was promoting roughly a year ago. The plan was to keep pushing that work. However, a second team was working on a chip with an adjustable coupling mechanism for four qubits. The latter had some advantages, and the researchers fairly quickly scaled it to 18 qubits.

“We thought we could get to quantum supremacy [with that approach] and we just moved over all the research and focused on [it],” recalled Martinis. However, the added circuitry on Sycamore required more wires (and space) for mounting; as a result it could only be scaled to 54 qubits at the time. And when the first 54-qubit Sycamore was manufactured, one of its mounting wires broke, turning it into a 53-qubit device. Even so, that device performed well enough to do the quantum supremacy calculation. Martinis said they’re now able to handle wiring more efficiently and will be able to scale up the number of qubits. He says they have three or four Sycamore processors in the lab now.

For those of you so inclined here’s a bit more technical detail on the chip taken from the paper:

“The processor is fabricated using aluminium for metallization and Josephson junctions, and indium for bump-bonds between two silicon wafers. The chip is wire-bonded to a superconducting circuit board and cooled to below 20 mK in a dilution refrigerator to reduce ambient thermal energy to well below the qubit energy. The processor is connected through filters and attenuators to room-temperature electronics, which synthesize the control signals. The state of all qubits can be read simultaneously by using a frequency-multiplexing technique. We use two stages of cryogenic amplifiers to boost the signal, which is digitized (8 bits at 1 GHz) and demultiplexed digitally at room temperature. In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor.

“We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off. The pulses are shaped to minimize transitions to higher transmon states. Gate performance varies strongly with frequency owing to two-level-system defects, stray microwave modes, coupling to control lines and the readout resonator, residual stray coupling between qubits, flux noise and pulse distortions. We therefore optimize the single-qubit operation frequencies to mitigate these error mechanisms.”

It’s good to remember the engineering challenges being faced. All of the wiring, just like the chip itself, must operate in a dilution refrigerator at extremely low temperatures. As the number of wires grows – i.e., to accommodate the increasing number of qubits – heat losses are likely to affect the scalability of these systems. Asked how many qubits can be squeezed into a dilution refrigerator – thousands or millions – Martinis said, “For thousands, we believe yes. We do see a pathway forward…but we’ll be building a scientific instrument that is really going to have to bring a lot of new technologies.”

More qubits are needed in general for most applications. Consider rendering RSA encryption ineffective, one of the most talked about quantum computing applications. Martinis said, “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.”
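Martinis’ figures make the distance to that application easy to frame. A quick sketch of the gap (the doubling framing is purely illustrative, not a forecast from Google or the paper):

```python
import math

# How far 53 qubits is from the ~100 million physical qubits
# Martinis cites for breaking RSA.
current, target = 53, 100_000_000
ratio = target / current
doublings = math.log2(ratio)
print(f"gap: ~{ratio:,.0f}x, about {doublings:.0f} successive doublings of qubit count")
```

Roughly a two-million-fold increase, or about 21 doublings of qubit count, sits between today’s devices and that target, which puts Martinis’ “a few years” in perspective.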

That’s the rub for quantum computing generally. Martinis went so far as to call the exercise run on Sycamore (most of the work was done in the spring) a practical application: “We’re excited that there’s a first useful application. It’s a little bit ‘nichey’, but there will be a real application there as developers work with it.”

Perhaps more immediately concrete are nascent Google plans to offer access to its quantum systems via a web portal. “We actually are using the Sycamore chip now internally to do internal experiments and test our interface to [determine] whether we can use it in this manner [as part of a portal access]. Then we plan to do a cloud offering. We’re not talking about it yet but next year people will be using it… internal people and collaborators first, and then opening it up,” said Martinis. IBM, Rigetti Computing, and D-Wave all currently offer web-based access to their systems spanning a wide variety of development tools, educational resources, simulation, and run-time on quantum processors.

In his blog, Google CEO Pichai said:

“For those of us working in science and technology, it’s the “hello world” moment we’ve been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality. But we have a long way to go between today’s lab experiments and tomorrow’s practical applications; it will be many years before we can implement a broader set of real-world applications.

“We can think about today’s news in the context of building the first rocket that successfully left Earth’s gravity to touch the edge of space. At the time, some asked: Why go into space without getting anywhere useful? But it was a big first for science because it allowed humans to envision a totally different realm of travel … to the moon, to Mars, to galaxies beyond our own. It showed us what was possible and nudged the seemingly impossible into frame.”

Over the next few days there will be a chorus of opinion. Treading the line between recognizing real achievement and not fanning the fires of unrealistic expectation is an ongoing challenge for the quantum computing community. Oak Ridge touted the role of Summit in support of the work and issued a press release: “This experiment establishes that today’s quantum computers can outperform the best conventional computing for a synthetic benchmark,” said ORNL researcher and Director of the laboratory’s Quantum Computing Institute Travis Humble. “There have been other efforts to try this, but our team is the first to demonstrate this result on a real system.”

Intel, which waded in enthusiastically when the unsanctioned paper was first discovered, did so again today in a blog by Rich Uhlig, Intel senior fellow and managing director of Intel Labs:

“Bolstered by this exciting news, we should now turn our attention to the steps it will take to build a system that will enable us to address intractable challenges – in other words, to demonstrate “quantum practicality.” To get a sense of what it would take to achieve quantum practicality, Intel researchers used our high-performance quantum simulator to predict the point at which a quantum computer could outpace a supercomputer in solving an optimization problem called Max-Cut. We chose Max-Cut as a test case because it is widely used in everything from traffic management to electronic design, and because it is an algorithm that gets exponentially more complicated as the number of variables increases.

“In our study, we compared a noise-tolerant quantum algorithm with a state-of-the art classical algorithm on a range of Max-Cut problems of increasing size. After extensive simulations, our research suggests it will take at least hundreds, if not thousands, of qubits working reliably before quantum computers will be able to solve practical problems faster than supercomputers…In other words, it may be years before the industry can develop a functional quantum processor of this size, so there is still work to be done.”

While practical quantum computing may be years away, the Google breakthrough seems impressive. Time will tell. Google’s quantum program is roughly 13 years old, begun by Google scientist Hartmut Neven in 2006. Martinis joined the effort in 2014 and set up the Google AI Quantum team. It will be interesting to watch how Google rolls out its web access program and how the quantum community reacts. No firm timeline for the web portal was mentioned.

Link to Nature paper: https://www.nature.com/articles/s41586-019-1666-5

Link to Martinis’ and Boixo’s blog: https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html

Link to Pichai blog: https://blog.google/perspectives/sundar-pichai/what-our-quantum-computing-milestone-means

 
