Google Goes Public with Quantum Supremacy Achievement; IBM Disagrees

By John Russell

October 23, 2019

A month ago the quantum computing world was abuzz following the discovery of a paper on NASA’s website detailing Google’s purported success at achieving quantum supremacy. The paper quickly disappeared from the site, but copies were made and a general consensus emerged that the work was likely genuine. Today Google confirmed the work in a big way: the cover article of Nature’s 150th anniversary issue, a blog post by John Martinis and Sergio Boixo, Google’s top quantum researchers, an article by Google CEO Sundar Pichai on the significance of the achievement, and a conference call briefing with media from London.

That’s one way to recoup lost “wow power” from an accidentally leaked paper. In their blog, Martinis and Boixo label the work as “The first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any ‘reasonable’ model of computation.” They declare, “With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.”

Much of what’s being publicly disclosed today was known from the leaked paper. Google used a new 54-qubit quantum processor – Sycamore – which features a 2D grid in which each qubit is connected to four other qubits, along with higher-fidelity two-qubit gates. Google also says the improvements in Sycamore are forward compatible with much-needed quantum error correction schemes. Using Sycamore, Google solved a problem (a kind of random number generator) in 200 seconds that it estimates would take on the order of 10,000 years on today’s fastest supercomputers; DOE’s Summit supercomputer was used as the basis for that estimate.

“The success of the quantum supremacy experiment was due to our improved two-qubit gates with enhanced parallelism that reliably achieve record performance, even when operating many gates simultaneously. We achieved this performance using a new type of control knob that is able to turn off interactions between neighboring qubits. This greatly reduces the errors in such a multi-connected qubit system. We made further performance gains by optimizing the chip design to lower crosstalk, and by developing new control calibrations that avoid qubit defects,” wrote Martinis and Boixo.

Here’s how Google describes the project in the abstract of its Nature paper:

“A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.”
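To make the exponential cost concrete, here is a minimal, purely illustrative sketch of what brute-force classical simulation of such a sampling task involves (a toy circuit of my own construction, not Google’s actual gate set): the full state vector holds 2^n complex amplitudes, so every added qubit doubles the memory and work.

```python
# Toy state-vector simulation of a small random circuit (illustration only;
# not Google's circuits). Memory grows as 2**n_qubits complex amplitudes.
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 10                               # 2**10 = 1,024 amplitudes; 53 would need ~9e15
state = np.zeros(2**n_qubits, dtype=np.complex128)
state[0] = 1.0                              # start in |00...0>

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 unitary to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Controlled-Z: flip the sign of amplitudes where both qubits are 1."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

for _ in range(20):                          # a few layers of random gates
    theta = rng.uniform(0, 2 * np.pi)
    rot = np.array([[np.cos(theta), -1j * np.sin(theta)],
                    [-1j * np.sin(theta), np.cos(theta)]])
    state = apply_single_qubit_gate(state, rot, rng.integers(n_qubits), n_qubits)
    q1, q2 = rng.choice(n_qubits, size=2, replace=False)
    state = apply_cz(state, q1, q2, n_qubits)

# Sample bitstrings from the resulting distribution, loosely analogous to
# the experiment's million repeated measurements.
probs = np.abs(state) ** 2
samples = rng.choice(2**n_qubits, size=1000, p=probs)
print(np.binary_repr(samples[0], width=n_qubits))
```

At 53 qubits the same state vector would hold roughly nine quadrillion amplitudes, which is the crux of the supremacy argument.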

Not so fast, says IBM.

Rival quantum pioneer IBM has disputed the Google claim in a blog – “Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.”

Whether it’s sour grapes, a valid claim, or something in between will become clearer in time. Even if IBM’s classical approach is better than the one chosen by Google, it still takes far longer than the 200 seconds Google’s Sycamore chip required. (For an excellent insider’s view of the controversy, see Scott Aaronson’s blog, Quantum Supremacy: the gloves are off.)
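For a rough sense of why the classical side is hard at all (and why workarounds such as spilling the state vector to disk come into play), consider a back-of-envelope estimate of the memory needed to hold a full 53-qubit state vector; the figures below are my own arithmetic, assuming 16 bytes per double-precision complex amplitude, and come from neither paper.

```python
# Back-of-envelope: memory to store a full 53-qubit state vector, assuming
# one double-precision complex amplitude (16 bytes) per basis state.
n_qubits = 53
amplitudes = 2 ** n_qubits               # ~9.0e15 basis states
bytes_needed = amplitudes * 16           # ~1.44e17 bytes
print(f"{amplitudes:.2e} amplitudes -> ~{bytes_needed / 1e15:.0f} PB")
# ~144 PB: far beyond any system's RAM, which is why full-fidelity classical
# simulation at this scale leans on secondary storage or circuit-cutting tricks.
```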


In response to questioning about Big Blue’s objection, Martinis frankly noted there is an unavoidable “moving target” element in chasing quantum supremacy, as classical and quantum systems each constantly advance (in both hardware and algorithms), but he didn’t waver on the current Google claim. “We expect in the future that the quantum computers will vastly outstrip what’s going on with these [new classical computing] algorithms. We see no reason to doubt that so I encourage people to read the paper,” said Martinis.

Debate has swirled around the race for quantum supremacy since the term was coined. Detractors call it a gimmicky trick without bearing on real-world applications or practical quantum machines. Advocates argue it not only proves the conceptual case for quantum computing but also paves the way for useful quantum computing because of the technologies the race to achieve quantum supremacy will produce. The latter certainly seems true, but it is sometimes overwhelmed by the desire to deploy practically useful quantum computing sooner rather than later.

Many contend that attaining quantum advantage – the notion of performing a task sufficiently better on a quantum computer to warrant switching from a classical machine – is more important in today’s era of so-called noisy quantum computers, which are prone to error.

To put the quantum error correction (QEC) challenge into perspective, consider this excerpt from a recent paper by Georgia Tech researchers Swamit Tannu and Moinuddin Qureshi on the topic: “Near-term quantum computers face significant reliability challenges as the qubits are extremely fickle and error-prone. Furthermore, with a limited number of qubits, implementing quantum error correction (QEC) may not be possible as QEC require 20 to 50 physical qubit devices to build a single fault-tolerant qubit. Therefore, fault-tolerant quantum computing is likely to become viable only when we have a system with thousands of qubits. In the meanwhile, the near-term quantum computes with several dozens of qubits are expected to operate in a noisy environment without any error correction using a model of computation called as Noisy Intermediate Scale Quantum (NISQ) Computing.” (BTW, Tannu and Qureshi’s paper is a good, accessible, and fast read on several key quantum computing error correction issues and on approaches to mitigate them.)
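To make the overhead cited above concrete, here is a quick illustrative calculation (the 20-to-50 ratio comes from the quote; everything else is my own arithmetic):

```python
# Illustrative arithmetic for the QEC overhead quoted above: 20-50 physical
# qubits per fault-tolerant (logical) qubit. Numbers are for illustration only.
available_physical = 53                       # a Sycamore-class device

for overhead in (20, 50):
    logical = available_physical // overhead
    needed_for_1000 = 1000 * overhead
    print(f"overhead {overhead}: {available_physical} physical qubits -> "
          f"{logical} logical; 1,000 logical would need {needed_for_1000:,} physical")
```

At best, today’s ~50-qubit devices yield a couple of logical qubits, which is why the NISQ era runs without error correction.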

It is interesting to dig a bit into the Google work. As in most R&D efforts, there were unexpected twists and turns. You may remember the Bristlecone quantum processor, a 72-qubit device that Google was promoting roughly a year ago. The plan was to keep pushing that work. However, a second team was working on a chip with an adjustable coupling mechanism for four qubits. The latter had some advantages, and the researchers fairly quickly scaled it to 18 qubits.

“We thought we could get to quantum supremacy [with that approach] and we just moved over all the research and focused on [it],” recalled Martinis. However, the added circuitry on Sycamore required more wires (and mounting space); as a result it could only be scaled to 54 qubits at the time. And when the first 54-qubit Sycamore was manufactured, one of its mounting wires broke, turning it into a 53-qubit device. Even so, that device performed well enough to do the quantum supremacy calculation. Martinis said they’re now able to handle wiring more efficiently and will be able to scale up the number of qubits. He says they have three or four Sycamore processors now in the lab.

For those of you so inclined here’s a bit more technical detail on the chip taken from the paper:

“The processor is fabricated using aluminium for metallization and Josephson junctions, and indium for bump-bonds between two silicon wafers. The chip is wire-bonded to a superconducting circuit board and cooled to below 20 mK in a dilution refrigerator to reduce ambient thermal energy to well below the qubit energy. The processor is connected through filters and attenuators to room-temperature electronics, which synthesize the control signals. The state of all qubits can be read simultaneously by using a frequency-multiplexing technique. We use two stages of cryogenic amplifiers to boost the signal, which is digitized (8 bits at 1 GHz) and demultiplexed digitally at room temperature. In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor.

“We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off. The pulses are shaped to minimize transitions to higher transmon states. Gate performance varies strongly with frequency owing to two-level-system defects, stray microwave modes, coupling to control lines and the readout resonator, residual stray coupling between qubits, flux noise and pulse distortions. We therefore optimize the single-qubit operation frequencies to mitigate these error mechanisms.”

It’s good to remember the engineering challenges being faced. All of the wiring, just like the chip itself, must operate in a dilution refrigerator at extremely low temperatures. As the number of wires grows – to accommodate the increasing number of qubits – the added heat load is likely to affect the scalability of these systems. Asked how many qubits can be squeezed into a dilution refrigerator – thousands or millions – Martinis said, “For thousands, we believe yes. We do see a pathway forward…but we’ll be building a scientific instrument that is really going to have to bring a lot of new technologies.”

More qubits are needed in general for most applications. Consider rendering RSA encryption ineffective, one of the most talked-about quantum computing applications. Martinis said, “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.”
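Taken at face value, Martinis’ back-of-the-envelope figure implies an enormous scale gap; a trivial calculation (my arithmetic, using his numbers) makes the point:

```python
# Scale gap implied by Martinis' rough estimate for breaking RSA.
rsa_physical_qubits = 100_000_000   # Martinis' off-the-cuff figure
today = 53                          # qubits on the Sycamore device described here
print(f"~{rsa_physical_qubits / today:,.0f}x today's qubit count")   # ~1,886,792x
```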

That’s the rub for quantum computing generally. Martinis went so far as to call the exercise run on Sycamore (most of the work was done in the spring) a practical application: “We’re excited that there’s a first useful application. It’s a little bit ‘nichey’, but there will be a real application there as developers work with it.”

Perhaps more immediately concrete are nascent Google plans to offer access to its quantum systems via a web portal. “We actually are using the Sycamore chip now internally to do internal experiments and test our interface to [determine] whether we can use it in this manner [as part of a portal access]. Then we plan to do a cloud offering. We’re not talking about it yet but next year people will be using it… internal people and collaborators first, and then opening it up,” said Martinis. IBM, Rigetti Computing, and D-Wave all currently offer web-based access to their systems spanning a wide variety of development tools, educational resources, simulation, and run-time on quantum processors.
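Google did not detail what the portal interface will look like, but its open-source Cirq framework (already publicly available) gives a flavor of how circuits are built and sampled programmatically. The sketch below runs on Cirq’s local simulator only and is illustrative; it makes no assumptions about the eventual cloud offering.

```python
# Illustrative only: build and sample a tiny circuit with Google's open-source
# Cirq framework on the local simulator (no hardware or portal access assumed).
import cirq

qubits = [cirq.GridQubit(0, i) for i in range(3)]    # a small row of grid qubits
circuit = cirq.Circuit(
    [cirq.H(q) for q in qubits],                     # put each qubit in superposition
    cirq.CZ(qubits[0], qubits[1]),                   # entangle a neighboring pair
    cirq.measure(*qubits, key='m'),                  # measure all three qubits
)

result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key='m'))                     # counts over the 8 possible outcomes
```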

In his blog, Google CEO Pichai said:

“For those of us working in science and technology, it’s the “hello world” moment we’ve been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality. But we have a long way to go between today’s lab experiments and tomorrow’s practical applications; it will be many years before we can implement a broader set of real-world applications.

“We can think about today’s news in the context of building the first rocket that successfully left Earth’s gravity to touch the edge of space. At the time, some asked: Why go into space without getting anywhere useful? But it was a big first for science because it allowed humans to envision a totally different realm of travel … to the moon, to Mars, to galaxies beyond our own. It showed us what was possible and nudged the seemingly impossible into frame.”

Over the next few days there will be a chorus of opinion. Treading the line between recognizing real achievement and not fanning the fires of unrealistic expectation is an ongoing challenge for the quantum computing community. Oak Ridge touted the role of Summit in support of the work and issued a press release: “This experiment establishes that today’s quantum computers can outperform the best conventional computing for a synthetic benchmark,” said Travis Humble, ORNL researcher and director of the laboratory’s Quantum Computing Institute. “There have been other efforts to try this, but our team is the first to demonstrate this result on a real system.”

Intel, which waded in enthusiastically when the unsanctioned paper was first discovered, did so again today in a blog by Rich Uhlig, Intel senior fellow and managing director of Intel Labs:

“Bolstered by this exciting news, we should now turn our attention to the steps it will take to build a system that will enable us to address intractable challenges – in other words, to demonstrate “quantum practicality.” To get a sense of what it would take to achieve quantum practicality, Intel researchers used our high-performance quantum simulator to predict the point at which a quantum computer could outpace a supercomputer in solving an optimization problem called Max-Cut. We chose Max-Cut as a test case because it is widely used in everything from traffic management to electronic design, and because it is an algorithm that gets exponentially more complicated as the number of variables increases.

“In our study, we compared a noise-tolerant quantum algorithm with a state-of-the art classical algorithm on a range of Max-Cut problems of increasing size. After extensive simulations, our research suggests it will take at least hundreds, if not thousands, of qubits working reliably before quantum computers will be able to solve practical problems faster than supercomputers…In other words, it may be years before the industry can develop a functional quantum processor of this size, so there is still work to be done.”

While practical quantum computing may be years away, the Google breakthrough seems impressive. Time will tell. Google’s quantum program is roughly 13 years old, begun by Google scientist Hartmut Neven in 2006. Martinis joined the effort in 2014 and set up the Google AI Quantum Team. It will be interesting to watch how Google rolls out its web access program and what the quantum community’s reaction is. No firm timeline for the web portal was mentioned.

Link to Nature paper: https://www.nature.com/articles/s41586-019-1666-5

Link to Martinis’ and Boixo’s blog: https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html

Link to Pichai blog: https://blog.google/perspectives/sundar-pichai/what-our-quantum-computing-milestone-means

 
