A month ago the quantum world was abuzz following the discovery of a paper on NASA’s website detailing Google’s supposed success at achieving quantum supremacy. The paper quickly disappeared from the site, but copies were made and a general consensus emerged that the work was likely genuine. Today Google confirmed the work in a big way: the cover article of Nature’s 150th anniversary issue, a blog by John Martinis and Sergio Boixo, Google’s top quantum researchers, an article by Google CEO Sundar Pichai on the significance of the achievement, and a conference call briefing with media from London.
That’s one way to recoup lost “wow power” from an accidentally leaked paper. In their blog, Martinis and Boixo label the work as “The first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any ‘reasonable’ model of computation.” Martinis and Boixo declare, “With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.”
Much of what’s being publicly disclosed today was known from the leaked paper. Google used a new 54-qubit quantum processor – Sycamore – which features a 2D grid in which each qubit is connected to four other qubits, and which has higher-fidelity two-qubit gates. Google also says the improvements in Sycamore are forward compatible with much-needed quantum error correction schemes. Using Sycamore, Google solved a problem (a kind of random number generator) in 200 seconds that it estimates would take on the order of 10,000 years on today’s fastest supercomputers. In this instance they used DOE’s Summit supercomputer as the basis for the estimate.
“The success of the quantum supremacy experiment was due to our improved two-qubit gates with enhanced parallelism that reliably achieve record performance, even when operating many gates simultaneously. We achieved this performance using a new type of control knob that is able to turn off interactions between neighboring qubits. This greatly reduces the errors in such a multi-connected qubit system. We made further performance gains by optimizing the chip design to lower crosstalk, and by developing new control calibrations that avoid qubit defects,” wrote Martinis and Boixo.
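To picture the layout Martinis and Boixo describe, here is a small illustrative Python sketch (not Google’s code) of a 54-site grid in which each interior qubit has four nearest-neighbour couplers. The 6 × 9 rectangle is an assumption chosen simply to give 54 sites; it is not Sycamore’s actual floor plan.

```python
# Illustrative sketch only (not Google's code): a 54-qubit device laid out on a
# 2D grid where each qubit couples to its nearest neighbours. The 6 x 9
# rectangle is an assumed layout chosen to give 54 sites, not Sycamore's
# actual floor plan.
ROWS, COLS = 6, 9

def neighbours(r, c):
    """Grid coordinates adjacent to qubit (r, c)."""
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(rr, cc) for rr, cc in candidates if 0 <= rr < ROWS and 0 <= cc < COLS]

# Interior qubits have four couplers; edge and corner qubits have fewer.
degrees = {(r, c): len(neighbours(r, c)) for r in range(ROWS) for c in range(COLS)}
print(len(degrees), "qubits;", sum(degrees.values()) // 2, "nearest-neighbour couplers")
```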
Here’s how Google describes the project in the abstract of its Nature paper:
“A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.”
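The size of that state-space is the crux: a full classical description of a 53-qubit state holds 2^53 ≈ 9.0 × 10^15 complex amplitudes, on the order of a hundred petabytes of memory. The toy Python sketch below, offered only as a stand-in and not the paper’s benchmark, samples bitstrings from a random state at a much smaller size to show what the sampling task looks like.

```python
# Toy illustration (not the paper's benchmark): the full state vector of an
# n-qubit system holds 2**n complex amplitudes, so memory grows exponentially.
# At n = 53 that is 2**53 ~ 9.0e15 amplitudes, roughly 10^16, or on the order
# of a hundred petabytes at 16 bytes per complex amplitude.
import numpy as np

n = 12                                        # 2**12 = 4,096 amplitudes -- easy
rng = np.random.default_rng(0)

# Stand-in for the output of a random circuit: a normalized random state.
state = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
state /= np.linalg.norm(state)

probs = np.abs(state) ** 2                    # Born-rule probabilities
samples = rng.choice(2**n, size=5, p=probs)   # sample measurement outcomes
print([format(int(s), f"0{n}b") for s in samples])  # bitstrings, as in the experiment
```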
Not so fast, says IBM.
Rival quantum pioneer IBM has disputed the Google claim in a blog – “Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.”
Whether it’s sour grapes, a valid claim, or something in between will become clearer in time. Even if IBM’s classical approach is better than the one chosen by Google, it still takes far longer than the 200 seconds Google’s Sycamore chip required. (For an excellent insider’s view of the controversy, see Scott Aaronson’s blog post, Quantum Supremacy: the gloves are off.)
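Some back-of-the-envelope arithmetic on the figures quoted by both sides puts the gap in perspective; the numbers below simply restate the claims above.

```python
# Back-of-the-envelope arithmetic on the quoted figures.
sycamore_seconds = 200                          # Google's measured runtime
ibm_estimate = 2.5 * 24 * 3600                  # IBM: 2.5 days classically
google_estimate = 10_000 * 365 * 24 * 3600      # Google: 10,000 years classically

print(f"IBM's estimate is ~{ibm_estimate / sycamore_seconds:,.0f}x slower than Sycamore")
print(f"Google's estimate is ~{google_estimate / sycamore_seconds:.1e}x slower than Sycamore")
```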
In response to questioning about Big Blue’s objection, Martinis frankly noted there is an unavoidable “moving target” element in chasing quantum supremacy, as classical systems and quantum systems each constantly advance (hardware and algorithms), but he didn’t waver on the current Google claim. “We expect in the future that the quantum computers will vastly outstrip what’s going on with these [new classical computing] algorithms. We see no reason to doubt that so I encourage people to read the paper,” said Martinis.
Debate has swirled around the race for quantum supremacy since the term was coined. Detractors call it a gimmicky trick with little bearing on real-world applications for quantum machines. Advocates argue it not only proves the conceptual case for quantum computing but will also pave the way for useful quantum computing because of the technologies the race produces. The latter certainly seems true but is sometimes overwhelmed by the desire to deploy practically useful quantum computing sooner rather than later.
Many contend that attaining Quantum Advantage – the notion of performing a task sufficiently better on a quantum computer to warrant switching from a classical machine – is more important in today’s era of so-called noisy quantum computers, which are prone to error.
To put the quantum error correction (QEC) challenge into perspective, consider this excerpt from a recent paper on the topic by Georgia Tech researchers Swamit Tannu and Moinuddin Qureshi: “Near-term quantum computers face significant reliability challenges as the qubits are extremely fickle and error-prone. Furthermore, with a limited number of qubits, implementing quantum error correction (QEC) may not be possible as QEC require 20 to 50 physical qubit devices to build a single fault-tolerant qubit. Therefore, fault-tolerant quantum computing is likely to become viable only when we have a system with thousands of qubits. In the meanwhile, the near-term quantum computers with several dozens of qubits are expected to operate in a noisy environment without any error correction using a model of computation called as Noisy Intermediate Scale Quantum (NISQ) Computing.” (BTW, Tannu and Qureshi’s paper is a good, accessible, and fast read on several key quantum computing error correction issues and on approaches to mitigate them.)
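Taking only the 20-to-50 overhead range quoted above, a rough Python calculation shows why fault tolerance implies systems with many thousands of physical qubits.

```python
# Rough arithmetic using only the 20-to-50 overhead range quoted above:
# physical qubits needed to realize a given number of fault-tolerant
# (logical) qubits.
def physical_range(logical_qubits, overhead=(20, 50)):
    """(low, high) physical-qubit counts implied by the quoted overhead."""
    return tuple(logical_qubits * k for k in overhead)

for logical in (1, 100, 1_000):
    low, high = physical_range(logical)
    print(f"{logical:>5} logical qubits -> {low:,} to {high:,} physical qubits")
```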
It is interesting to dig a bit into the Google work. As in most R&D efforts there are unexpected twists and turns. You may remember the Bristlecone quantum processor, a 72-qubit device that Google was promoting roughly a year ago. The plan was to keep pushing that work. However, a second team was working on a chip with an adjustable coupling mechanism for four qubits. The latter approach had some advantages, and the researchers fairly quickly scaled it to 18 qubits.
“We thought we could get to quantum supremacy [with that approach] and we just moved over all the research and focused on [it],” recalled Martinis. However, the added circuitry on Sycamore required more wires (and space) for mounting; as a result it could only be scaled to 54 qubits at the time. And when the first 54-qubit Sycamore was manufactured, one of its mounting wires broke, turning it into a 53-qubit device. Even so, that device performed well enough to do the quantum supremacy calculation. Martinis said they’re now able to handle the wiring more efficiently and will be able to scale up the number of qubits. He says they have three or four Sycamore processors in the lab now.
For those of you so inclined here’s a bit more technical detail on the chip taken from the paper:
“The processor is fabricated using aluminium for metallization and Josephson junctions, and indium for bump-bonds between two silicon wafers. The chip is wire-bonded to a superconducting circuit board and cooled to below 20 mK in a dilution refrigerator to reduce ambient thermal energy to well below the qubit energy. The processor is connected through filters and attenuators to room-temperature electronics, which synthesize the control signals. The state of all qubits can be read simultaneously by using a frequency-multiplexing technique. We use two stages of cryogenic amplifiers to boost the signal, which is digitized (8 bits at 1 GHz) and demultiplexed digitally at room temperature. In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor.
“We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off. The pulses are shaped to minimize transitions to higher transmon states. Gate performance varies strongly with frequency owing to two-level-system defects, stray microwave modes, coupling to control lines and the readout resonator, residual stray coupling between qubits, flux noise and pulse distortions. We therefore optimize the single-qubit operation frequencies to mitigate these error mechanisms.”
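The circuits described in the paper alternate layers of single-qubit rotations (applied while the couplers are off) with layers of two-qubit gates between neighbouring qubits. Purely as an illustration of that layered structure, here is a minimal sketch using Cirq, Google’s open-source quantum framework, on a toy 2 × 3 grid. The CZ gate stands in for the tunable two-qubit gate in the paper, and nothing here reproduces the pulse-level control or calibration described above.

```python
# A minimal, assumed sketch of the layered random-circuit structure (toy scale).
import numpy as np
import cirq

rng = np.random.default_rng(0)
rows, cols = 2, 3
qubits = [cirq.GridQubit(r, c) for r in range(rows) for c in range(cols)]
pairs = [(cirq.GridQubit(r, 0), cirq.GridQubit(r, 1)) for r in range(rows)]

circuit = cirq.Circuit()
for _ in range(4):                                    # a few cycles
    # Single-qubit layer: random rotations on every qubit (couplers "off").
    circuit.append(cirq.rz(rng.uniform(0, 2 * np.pi)).on(q) for q in qubits)
    circuit.append(cirq.rx(rng.uniform(0, 2 * np.pi)).on(q) for q in qubits)
    # Two-qubit layer: CZ between neighbouring qubits (a stand-in gate).
    circuit.append(cirq.CZ(a, b) for a, b in pairs)
circuit.append(cirq.measure(*qubits, key="m"))

# Sample the output distribution many times, as in the supremacy experiment.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key="m").most_common(5))
```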
It’s good to remember the engineering challenges being faced. All of the wiring, just like the chip itself, must operate in a dilution refrigerator at extremely low temperatures. As the number of wires grows – i.e., to accommodate more qubits – heat losses are likely to affect scalability for these systems. Asked how many qubits can be squeezed into a dilution refrigerator – thousands or millions – Martinis said, “For thousands, we believe yes. We do see a pathway forward…but we’ll be building a scientific instrument that is really going to have to bring a lot of new technologies.”
More qubits are needed in general for most applications. Consider rendering RSA encryption ineffective, one of the most talked about quantum computing applications. Martinis said, “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.”
That’s the rub for quantum computing generally. Martinis went so far as to call the exercise run on Sycamore (most of the work was done in the spring) a practical application: “We’re excited that there’s a first useful application. It’s a little bit ‘nichey’, but there will be a real application there as developers work with it.”
Perhaps more immediately concrete are nascent Google plans to offer access to its quantum systems via a web portal. “We actually are using the Sycamore chip now internally to do internal experiments and test our interface to [determine] whether we can use it in this manner [as part of a portal access]. Then we plan to do a cloud offering. We’re not talking about it yet but next year people will be using it… internal people and collaborators first, and then opening it up,” said Martinis. IBM, Rigetti Computing, and D-Wave all currently offer web-based access to their systems spanning a wide variety of development tools, educational resources, simulation, and run-time on quantum processors.
In his blog, Google CEO Pichai said:
“For those of us working in science and technology, it’s the “hello world” moment we’ve been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality. But we have a long way to go between today’s lab experiments and tomorrow’s practical applications; it will be many years before we can implement a broader set of real-world applications.
“We can think about today’s news in the context of building the first rocket that successfully left Earth’s gravity to touch the edge of space. At the time, some asked: Why go into space without getting anywhere useful? But it was a big first for science because it allowed humans to envision a totally different realm of travel … to the moon, to Mars, to galaxies beyond our own. It showed us what was possible and nudged the seemingly impossible into frame.”
Over the next few days there will be a chorus of opinion. Treading the line between recognizing real achievement and not fanning fires of unrealistic expectation is an ongoing challenge for the quantum computing community. Oak Ridge touted the role of Summit in support of the work and issued a press release – “This experiment establishes that today’s quantum computers can outperform the best conventional computing for a synthetic benchmark,” said ORNL researcher and Director of the laboratory’s Quantum Computing Institute Travis Humble. “There have been other efforts to try this, but our team is the first to demonstrate this result on a real system.”
Intel, which waded in enthusiastically when the unsanctioned paper was first discovered, did so again today in a blog by Rich Uhlig, Intel senior fellow and managing director of Intel Labs:
“Bolstered by this exciting news, we should now turn our attention to the steps it will take to build a system that will enable us to address intractable challenges – in other words, to demonstrate “quantum practicality.” To get a sense of what it would take to achieve quantum practicality, Intel researchers used our high-performance quantum simulator to predict the point at which a quantum computer could outpace a supercomputer in solving an optimization problem called Max-Cut. We chose Max-Cut as a test case because it is widely used in everything from traffic management to electronic design, and because it is an algorithm that gets exponentially more complicated as the number of variables increases.
“In our study, we compared a noise-tolerant quantum algorithm with a state-of-the-art classical algorithm on a range of Max-Cut problems of increasing size. After extensive simulations, our research suggests it will take at least hundreds, if not thousands, of qubits working reliably before quantum computers will be able to solve practical problems faster than supercomputers…In other words, it may be years before the industry can develop a functional quantum processor of this size, so there is still work to be done.”
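For readers unfamiliar with Max-Cut, the idea is to split a graph’s vertices into two groups so that as many edges as possible cross between the groups. The brute-force Python sketch below is purely illustrative and unrelated to Intel’s simulator study; it shows why the search space explodes, doubling with every added vertex.

```python
# Brute-force Max-Cut on a toy graph: try all 2**n ways to split the vertices
# and count the edges that cross the split. Exponential scaling is the point.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a small 4-vertex example
n = 4

best_cut, best_split = -1, None
for split in product([0, 1], repeat=n):            # 2**n candidate partitions
    cut = sum(1 for u, v in edges if split[u] != split[v])
    if cut > best_cut:
        best_cut, best_split = cut, split

print("max cut:", best_cut, "partition:", best_split)   # -> 4, (0, 1, 0, 1)
```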
While practical quantum computing may be years away, the Google breakthrough seems impressive. Time will tell. Google’s quantum program is roughly 13 years old, begun by Google scientist Hartmut Neven in 2006. Martinis joined the effort in 2014 and set up the Google AI Quantum Team. It will be interesting to watch how Google rolls out its web access program and how the quantum community reacts. No firm timeline for the web portal was mentioned.
Link to Nature paper: https://www.nature.com/articles/s41586-019-1666-5
Link to Martinis’ and Boixo’s blog: https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html
Link to Pichai blog: https://blog.google/perspectives/sundar-pichai/what-our-quantum-computing-milestone-means