Google Goes Public with Quantum Supremacy Achievement; IBM Disagrees

By John Russell

October 23, 2019

A month ago the quantum world was abuzz following the discovery of a paper on NASA’s website detailing Google’s supposed success at achieving quantum supremacy. The paper quickly disappeared from the site, but copies were made and a general consensus emerged that the work was likely genuine. Today Google confirmed the work in a big way: the cover article of Nature’s 150th anniversary issue, a blog by John Martinis and Sergio Boixo, Google’s top quantum researchers, an article by Google CEO Sundar Pichai on the significance of the achievement, and a conference call briefing with media from London.

That’s one way to recoup lost “wow power” from an accidentally leaked paper. In their blog, Martinis and Boixo label the work as “The first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any ‘reasonable’ model of computation.” Martinis and Boixo declare, “With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.”

Much of what’s being publicly disclosed today was known from the leaked paper. Google used a new 54-qubit quantum processor – Sycamore – which features a 2D grid in which each qubit is connected to four other qubits and has higher-fidelity two-qubit “gates.” Google also says the improvements in Sycamore are forward compatible with much-needed quantum error correction schemes. Using Sycamore, Google solved a problem (sampling the output of a pseudo-random quantum circuit, a kind of random number generator) in 200 seconds that would take on the order of 10,000 years on today’s fastest supercomputers. In this instance they used DOE’s Summit supercomputer for the estimate.

“The success of the quantum supremacy experiment was due to our improved two-qubit gates with enhanced parallelism that reliably achieve record performance, even when operating many gates simultaneously. We achieved this performance using a new type of control knob that is able to turn off interactions between neighboring qubits. This greatly reduces the errors in such a multi-connected qubit system. We made further performance gains by optimizing the chip design to lower crosstalk, and by developing new control calibrations that avoid qubit defects,” wrote Martinis and Boixo.

Here’s how Google describes the project in the abstract of its Nature paper:

“A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.”
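To get a feel for the scale of that state space, here is a back-of-the-envelope Python sketch (ours, not Google's benchmark method) of why a brute-force state-vector simulation of 53 qubits is infeasible on memory grounds alone:

```python
# Rough estimate of the memory needed to hold a full state vector for an
# n-qubit system. Illustrative only -- real simulators (including the ones
# IBM proposes) use cleverer strategies than storing every amplitude in RAM.

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """A 2^n-dimensional vector of complex128 amplitudes (16 bytes each)."""
    return (2 ** n_qubits) * bytes_per_amplitude

dim = 2 ** 53
print(f"State-space dimension: {dim:.3e}")             # ~9.007e+15, i.e. about 10^16
pib = state_vector_bytes(53) / 2 ** 50
print(f"Memory for a full state vector: {pib:.0f} PiB")  # 128 PiB
```

At 16 bytes per complex amplitude, 53 qubits already demand about 128 pebibytes, which is why even Summit cannot simply hold the state in memory.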

Not so fast says IBM.

Rival quantum pioneer IBM has disputed the Google claim in a blog – “Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.”

Whether it’s sour grapes, a valid claim, or something in between will become clearer in time. Even if IBM’s classical approach is better than the one chosen by Google, it still takes longer than the 200 seconds Google’s Sycamore chip required. (For an excellent insider’s view on the controversy see Scott Aaronson’s blog, Quantum Supremacy: the gloves are off.)


In response to questioning about Big Blue’s objection, Martinis frankly noted there is an unavoidable “moving target” element in chasing quantum supremacy as classical systems and quantum systems each constantly advance (hardware and algorithms), but he didn’t waver over the current Google claim. “We expect in the future that the quantum computers will vastly outstrip what’s going on with these [new classical computing] algorithms. We see no reason to doubt that so I encourage people to read the paper,” said Martinis.

Debate has swirled around the race for quantum supremacy since the term was coined. Detractors call it a gimmicky trick with little bearing on real-world applications for quantum machines. Advocates argue it not only proves the conceptual case for quantum computing but will also pave the way for useful quantum computing because of the technologies the race to achieve quantum supremacy will produce. The latter certainly seems true but is sometimes overwhelmed by the desire to deploy practically useful quantum computing sooner rather than later.

Many contend that attaining Quantum Advantage – the notion of performing a task sufficiently better on a quantum computer to warrant switching from a classical machine – is more important in today’s era of so-called noisy quantum computers which are prone to error.

To put the quantum error correction (QEC) challenge into perspective, consider this excerpt from a recent paper by Georgia Tech researchers Swamit Tannu and Moinuddin Qureshi on the topic: “Near-term quantum computers face significant reliability challenges as the qubits are extremely fickle and error-prone. Furthermore, with a limited number of qubits, implementing quantum error correction (QEC) may not be possible as QEC require 20 to 50 physical qubit devices to build a single fault-tolerant qubit. Therefore, fault-tolerant quantum computing is likely to become viable only when we have a system with thousands of qubits. In the meanwhile, the near-term quantum computes with several dozens of qubits are expected to operate in a noisy environment without any error correction using a model of computation called as Noisy Intermediate Scale Quantum (NISQ) Computing.” (BTW, Tannu and Qureshi’s paper is a good, accessible, and fast read on several key quantum computing error correction issues and on approaches to mitigate them.)
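The overhead Tannu and Qureshi describe can be made concrete with a little arithmetic. The 20-to-50 ratio below is their figure; the logical-qubit targets are purely illustrative:

```python
# Illustrative QEC overhead: physical qubits per fault-tolerant (logical) qubit.
# The 20-50 range comes from the Tannu/Qureshi excerpt quoted above;
# the logical-qubit counts are hypothetical examples.

def physical_qubits_needed(logical_qubits: int, overhead: int) -> int:
    return logical_qubits * overhead

for logical in (100, 1000):
    lo = physical_qubits_needed(logical, 20)
    hi = physical_qubits_needed(logical, 50)
    print(f"{logical} logical qubits -> {lo:,} to {hi:,} physical qubits")
```

Even a modest 100 fault-tolerant qubits implies thousands of physical devices, which squares with their point that fault tolerance likely awaits systems with thousands of qubits.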

It is interesting to dig a bit into the Google work. As in most R&D efforts there are unexpected twists and turns. You may remember the Bristlecone quantum processor, a 72-qubit device that Google was promoting roughly a year ago. The plans were to keep pushing that work. However, a second team was working on a chip with an adjustable coupling mechanism for four qubits. The latter had some advantages and the researchers fairly quickly scaled it to 18 qubits.

“We thought we could get to quantum supremacy [with that approach] and we just moved over all the research and focused on [it],” recalled Martinis. However, the added circuitry on Sycamore required more wires (and space) for mounting; as a result it could only be scaled to 54 qubits at the time. And when the first 54-qubit Sycamore was manufactured, one of its mounting wires broke, turning it into a 53-qubit device. Even so, that device performed well enough to do the quantum supremacy calculation. Martinis said they’re now able to handle wiring more efficiently and will be able to scale up the number of qubits. He says they have three or four Sycamore processors now in the lab.

For those of you so inclined here’s a bit more technical detail on the chip taken from the paper:

“The processor is fabricated using aluminium for metallization and Josephson junctions, and indium for bump-bonds between two silicon wafers. The chip is wire-bonded to a superconducting circuit board and cooled to below 20 mK in a dilution refrigerator to reduce ambient thermal energy to well below the qubit energy. The processor is connected through filters and attenuators to room-temperature electronics, which synthesize the control signals. The state of all qubits can be read simultaneously by using a frequency-multiplexing technique. We use two stages of cryogenic amplifiers to boost the signal, which is digitized (8 bits at 1 GHz) and demultiplexed digitally at room temperature. In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor.

“We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off. The pulses are shaped to minimize transitions to higher transmon states. Gate performance varies strongly with frequency owing to two-level-system defects, stray microwave modes, coupling to control lines and the readout resonator, residual stray coupling between qubits, flux noise and pulse distortions. We therefore optimize the single-qubit operation frequencies to mitigate these error mechanisms.”

It’s good to remember the engineering challenges being faced. All of the wiring, just like the chip itself, must operate in a dilution refrigerator at extremely low temperatures. As the number of wires grows – i.e. to accommodate the increasing number of qubits – heat losses are likely to affect the scalability of these systems. Asked how many qubits can be squeezed into a dilution refrigerator – thousands or millions – Martinis said, “For thousands, we believe yes. We do see a pathway forward…but we’ll be building a scientific instrument that is really going to have to bring a lot of new technologies.”

More qubits are needed in general for most applications. Consider rendering RSA encryption ineffective, one of the most talked about quantum computing applications. Martinis said, “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.”

That’s the rub for quantum computing generally. Martinis went so far as to call the exercise run on Sycamore (most of the work was done in the spring) a practical application: “We’re excited that there’s a first useful application. It’s a little bit ‘nichey’, but there will be a real application there as developers work with it.”

Perhaps more immediately concrete are nascent Google plans to offer access to its quantum systems via a web portal. “We actually are using the Sycamore chip now internally to do internal experiments and test our interface to [determine] whether we can use it in this manner [as part of a portal access]. Then we plan to do a cloud offering. We’re not talking about it yet but next year people will be using it… internal people and collaborators first, and then opening it up,” said Martinis. IBM, Rigetti Computing, and D-Wave all currently offer web-based access to their systems spanning a wide variety of development tools, educational resources, simulation, and run-time on quantum processors.

In his blog, Google CEO Pichai said:

“For those of us working in science and technology, it’s the “hello world” moment we’ve been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality. But we have a long way to go between today’s lab experiments and tomorrow’s practical applications; it will be many years before we can implement a broader set of real-world applications.

“We can think about today’s news in the context of building the first rocket that successfully left Earth’s gravity to touch the edge of space. At the time, some asked: Why go into space without getting anywhere useful? But it was a big first for science because it allowed humans to envision a totally different realm of travel … to the moon, to Mars, to galaxies beyond our own. It showed us what was possible and nudged the seemingly impossible into frame.”

Over the next few days there will be a chorus of opinion. Treading the line between recognizing real achievement and not fanning the fires of unrealistic expectation is an ongoing challenge for the quantum computing community. Oak Ridge touted the role of Summit in support of the work and issued a press release: “This experiment establishes that today’s quantum computers can outperform the best conventional computing for a synthetic benchmark,” said ORNL researcher and Director of the laboratory’s Quantum Computing Institute Travis Humble. “There have been other efforts to try this, but our team is the first to demonstrate this result on a real system.”

Intel, which waded in enthusiastically when the unsanctioned paper was first discovered, did so again today in a blog by Rich Uhlig, Intel senior fellow and managing director of Intel Labs:

“Bolstered by this exciting news, we should now turn our attention to the steps it will take to build a system that will enable us to address intractable challenges – in other words, to demonstrate “quantum practicality.” To get a sense of what it would take to achieve quantum practicality, Intel researchers used our high-performance quantum simulator to predict the point at which a quantum computer could outpace a supercomputer in solving an optimization problem called Max-Cut. We chose Max-Cut as a test case because it is widely used in everything from traffic management to electronic design, and because it is an algorithm that gets exponentially more complicated as the number of variables increases.

“In our study, we compared a noise-tolerant quantum algorithm with a state-of-the art classical algorithm on a range of Max-Cut problems of increasing size. After extensive simulations, our research suggests it will take at least hundreds, if not thousands, of qubits working reliably before quantum computers will be able to solve practical problems faster than supercomputers…In other words, it may be years before the industry can develop a functional quantum processor of this size, so there is still work to be done.”
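Max-Cut itself is easy to state: partition a graph’s vertices into two sets so as to maximize the number of edges crossing between them. A brute-force sketch (our illustration, not Intel’s simulator) shows the exponential blow-up Uhlig alludes to, since the number of bipartitions doubles with every added vertex:

```python
from itertools import combinations

# Brute-force Max-Cut: try every bipartition of the vertices and count
# crossing edges. Illustrative only -- cost grows as 2^(n-1), which is
# why the problem gets exponentially harder as it scales.

def max_cut(n_vertices, edges):
    best = 0
    # Fix vertex 0 in set A so each cut is counted once, not twice.
    for size in range(n_vertices):
        for subset in combinations(range(1, n_vertices), size):
            side_a = {0, *subset}
            cut = sum(1 for u, v in edges if (u in side_a) != (v in side_a))
            best = max(best, cut)
    return best

# A 4-cycle: splitting alternating vertices {0, 2} vs {1, 3} cuts all 4 edges.
print(max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4
```

Quantum approaches to Max-Cut (such as variational algorithms on NISQ devices) aim to find good cuts without this exhaustive search, which is why Intel uses it as a yardstick for “quantum practicality.”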

While practical quantum computing may be years away, the Google breakthrough seems impressive. Time will tell. Google’s quantum program is roughly 13 years old, begun by Google scientist Hartmut Neven in 2006. Martinis joined the effort in 2014 and set up the Google AI Quantum Team. It will be interesting to watch how Google rolls out its web access program and what the quantum community’s reaction is. No firm timeline for the web portal was mentioned.

Link to Nature paper: https://www.nature.com/articles/s41586-019-1666-5

Link to Martinis’ and Boixo’s blog: https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html

Link to Pichai blog: https://blog.google/perspectives/sundar-pichai/what-our-quantum-computing-milestone-means
