Google Goes Public with Quantum Supremacy Achievement; IBM Disagrees

By John Russell

October 23, 2019

A month ago the quantum world was abuzz following the discovery of a paper on NASA’s website detailing Google’s supposed success at achieving quantum supremacy. The paper quickly disappeared from the site, but copies were made and a general consensus emerged that the work was likely genuine. Today Google confirmed the work in a big way with the cover article of Nature’s 150th anniversary issue, a blog by John Martinis and Sergio Boixo, Google’s top quantum researchers, an article by Google CEO Sundar Pichai on the significance of the achievement, and a conference call briefing with media from London.

That’s one way to recoup lost “wow power” from an accidentally leaked paper. In their blog, Martinis and Boixo label the work as “The first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any ‘reasonable’ model of computation.” Martinis and Boixo declare, “With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.”

Much of what’s being publicly disclosed today was known from the leaked paper. Google used a new 54-qubit quantum processor – Sycamore – which features a 2D grid in which each qubit is connected to four other qubits, and has higher-fidelity two-qubit “gates.” Google also says the improvements in Sycamore are forward compatible with much-needed quantum error correction schemes. Using Sycamore, Google solved a problem (a kind of random number generator) in 200 seconds that would take on the order of 10,000 years on today’s fastest supercomputers. In this instance they used DOE’s Summit supercomputer as the basis for the estimate.
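To give a flavor of the benchmark, the sketch below is a toy classical statevector simulation of random-circuit sampling on a handful of qubits. It is illustrative only – Sycamore’s real circuits use a specific gate set and 2D qubit layout – but it shows the structure of the task (random gates, entangling layers, sample bitstrings) and why classical simulation cost grows as 2^n in the number of qubits n.

```python
import numpy as np

# Toy random-circuit sampling. NOT Google's actual circuits: we just apply
# random 2x2 unitaries and nearest-neighbor CZ gates, then sample bitstrings
# from the resulting distribution -- the task the hardware performs natively.
rng = np.random.default_rng(0)
n = 3                                   # tiny; Sycamore used 53
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # start in |00...0>

def random_unitary():
    """Random 2x2 unitary via QR of a complex Gaussian matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, u, target):
    """Apply 2x2 unitary u to qubit `target` of the statevector."""
    psi = np.moveaxis(state.reshape([2] * n), target, 0)
    psi = np.tensordot(u, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

idx = np.arange(2 ** n)
for layer in range(4):                  # alternate 1-qubit and entangling layers
    for t in range(n):
        state = apply_1q(state, random_unitary(), t)
    for a in range(n - 1):              # CZ: flip sign where both bits are 1
        both = ((idx >> a) & 1) & ((idx >> (a + 1)) & 1)
        state = np.where(both == 1, -state, state)

probs = np.abs(state) ** 2              # the distribution the hardware samples
samples = rng.choice(2 ** n, size=8, p=probs)
print(probs.round(3), samples)
```

The statevector has 2^n complex amplitudes, so memory and per-gate work both scale exponentially – which is exactly what makes the 53-qubit case so expensive classically.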

“The success of the quantum supremacy experiment was due to our improved two-qubit gates with enhanced parallelism that reliably achieve record performance, even when operating many gates simultaneously. We achieved this performance using a new type of control knob that is able to turn off interactions between neighboring qubits. This greatly reduces the errors in such a multi-connected qubit system. We made further performance gains by optimizing the chip design to lower crosstalk, and by developing new control calibrations that avoid qubit defects,” wrote Martinis and Boixo.

Here’s how Google describes the project in the abstract of its Nature paper:

“A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.”
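The 2^53 figure is easy to verify, and it makes plain why brute-force classical simulation hurts: storing a single 8-byte amplitude per basis state already runs to tens of petabytes. A quick back-of-envelope check:

```python
n_qubits = 53
dim = 2 ** n_qubits
print(dim)                      # 9007199254740992, i.e. about 10^16 basis states

# Memory to hold one 8-byte complex amplitude per basis state:
bytes_needed = dim * 8
print(bytes_needed / 1e15)      # ~72 petabytes
```

That memory wall is why classical simulations of circuits this size rely on clever tensor-contraction and disk-based tricks rather than a full statevector.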

Not so fast says IBM.

Rival quantum pioneer IBM has disputed the Google claim in a blog – “Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.”

Whether it’s sour grapes, a valid claim, or something in between will become clearer in time. Even if IBM’s classical approach is better than the one chosen by Google, it still takes far longer than the 200 seconds Google’s Sycamore chip required. (For an excellent insider’s view of the controversy, see Scott Aaronson’s blog, Quantum Supremacy: the gloves are off.)
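The gap is worth quantifying. Even taking IBM’s 2.5-day estimate at face value, Sycamore’s 200 seconds is still about three orders of magnitude faster:

```python
classical_s = 2.5 * 24 * 3600   # IBM's estimate: 2.5 days, in seconds
quantum_s = 200                 # Sycamore's runtime for the same task
print(classical_s / quantum_s)  # 1080.0 -- still roughly 1,000x slower
```

So IBM’s dispute is really about whether the 10,000-year claim (and hence the word “supremacy”) holds, not about which machine finished first.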

John Martinis, Google

In response to questions about Big Blue’s objection, Martinis frankly noted there is an unavoidable “moving target” element in chasing quantum supremacy, as classical and quantum systems each constantly advance (hardware and algorithms), but he didn’t waver on the current Google claim. “We expect in the future that the quantum computers will vastly outstrip what’s going on with these [new classical computing] algorithms. We see no reason to doubt that so I encourage people to read the paper,” said Martinis.

Debate has swirled around the race for quantum supremacy since the term was coined. Detractors call it a gimmicky trick without bearing on real-world applications or quantum machines. Advocates argue it not only proves the conceptual case for quantum computing but will also pave the way for useful quantum computing because of the technologies the race to achieve quantum supremacy will produce. The latter certainly seems true but is sometimes overwhelmed by the desire to deploy practically useful quantum computing sooner rather than later.

Many contend that attaining Quantum Advantage – the notion of performing a task sufficiently better on a quantum computer to warrant switching from a classical machine – is more important in today’s era of so-called noisy quantum computers which are prone to error.

To put the quantum error correction (QEC) challenge into perspective, consider this excerpt from a recent paper by Georgia Tech researchers Swamit Tannu and Moinuddin Qureshi on the topic: “Near-term quantum computers face significant reliability challenges as the qubits are extremely fickle and error-prone. Furthermore, with a limited number of qubits, implementing quantum error correction (QEC) may not be possible as QEC require 20 to 50 physical qubit devices to build a single fault-tolerant qubit. Therefore, fault-tolerant quantum computing is likely to become viable only when we have a system with thousands of qubits. In the meanwhile, the near-term quantum computes with several dozens of qubits are expected to operate in a noisy environment without any error correction using a model of computation called as Noisy Intermediate Scale Quantum (NISQ) Computing.”  (BTW, Tannu and Qureshi’s paper is a good, accessible, and fast read on several key quantum computing error correction issues and on approaches to mitigate them.)
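Tannu and Qureshi’s 20-to-50 physical qubits per fault-tolerant qubit makes the scaling problem concrete. A back-of-envelope calculation (the 100-logical-qubit target is an arbitrary illustration, not a figure from their paper):

```python
overhead_low, overhead_high = 20, 50    # physical qubits per logical qubit (per the paper)
logical_needed = 100                    # illustrative target machine, not from the paper
print(logical_needed * overhead_low,
      logical_needed * overhead_high)   # 2000 5000
```

Even a modest fault-tolerant machine lands in the thousands of physical qubits – consistent with the paper’s claim that QEC only becomes viable at that scale, while today’s devices sit in the dozens.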

It is interesting to dig a bit into the Google work. As in most R&D efforts, there were unexpected twists and turns. You may remember the Bristlecone quantum processor, a 72-qubit device that Google was promoting roughly a year ago. The plan was to keep pushing that work. However, a second team was working on a chip with an adjustable coupling mechanism for four qubits. The latter had some advantages, and the researchers fairly quickly scaled it to 18 qubits.

“We thought we could get to quantum supremacy [with that approach] and we just moved over all the research and focused on [it],” recalled Martinis. However, the added circuitry on Sycamore required more wires (and space) for mounting; as a result it could only be scaled to 54 qubits at the time. And when the first 54-qubit Sycamore was manufactured, one of its mounting wires broke, turning it into a 53-qubit device. Even so, that device performed well enough to do the quantum supremacy calculation. Martinis said they’re now able to handle wiring more efficiently and will be able to scale up the number of qubits. He says they have three or four Sycamore processors now in the lab.

For those of you so inclined here’s a bit more technical detail on the chip taken from the paper:

“The processor is fabricated using aluminium for metallization and Josephson junctions, and indium for bump-bonds between two silicon wafers. The chip is wire-bonded to a superconducting circuit board and cooled to below 20 mK in a dilution refrigerator to reduce ambient thermal energy to well below the qubit energy. The processor is connected through filters and attenuators to room-temperature electronics, which synthesize the control signals. The state of all qubits can be read simultaneously by using a frequency-multiplexing technique. We use two stages of cryogenic amplifiers to boost the signal, which is digitized (8 bits at 1 GHz) and demultiplexed digitally at room temperature. In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor.

“We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off. The pulses are shaped to minimize transitions to higher transmon states. Gate performance varies strongly with frequency owing to two-level-system defects, stray microwave modes, coupling to control lines and the readout resonator, residual stray coupling between qubits, flux noise and pulse distortions. We therefore optimize the single-qubit operation frequencies to mitigate these error mechanisms.”

It’s good to remember the engineering challenges being faced. All of the wiring, just like the chip itself, must operate in a dilution refrigerator at extremely low temperatures. As the number of wires grows – to accommodate the increasing number of qubits – heat loss is likely to affect the scalability of these systems. Asked how many qubits can be squeezed into a dilution refrigerator – thousands or millions – Martinis said, “For thousands, we believe yes. We do see a pathway forward…but we’ll be building a scientific instrument that is really going to have to bring a lot of new technologies.”

More qubits are needed in general for most applications. Consider rendering RSA encryption ineffective, one of the most talked about quantum computing applications. Martinis said, “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.”

That’s the rub for quantum computing generally. Martinis went so far as to call the exercise run on Sycamore (most of the work was done in the spring) a practical application: “We’re excited that there’s a first useful application. It’s a little bit ‘nichey’, but there will be a real application there as developers work with it.”

Perhaps more immediately concrete are nascent Google plans to offer access to its quantum systems via a web portal. “We actually are using the Sycamore chip now internally to do internal experiments and test our interface to [determine] whether we can use it in this manner [as part of a portal access]. Then we plan to do a cloud offering. We’re not talking about it yet but next year people will be using it… internal people and collaborators first, and then opening it up,” said Martinis. IBM, Rigetti Computing, and D-Wave all currently offer web-based access to their systems spanning a wide variety of development tools, educational resources, simulation, and run-time on quantum processors.

In his blog, Google CEO Pichai said:

“For those of us working in science and technology, it’s the “hello world” moment we’ve been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality. But we have a long way to go between today’s lab experiments and tomorrow’s practical applications; it will be many years before we can implement a broader set of real-world applications.

“We can think about today’s news in the context of building the first rocket that successfully left Earth’s gravity to touch the edge of space. At the time, some asked: Why go into space without getting anywhere useful? But it was a big first for science because it allowed humans to envision a totally different realm of travel … to the moon, to Mars, to galaxies beyond our own. It showed us what was possible and nudged the seemingly impossible into frame.”

Over the next few days there will be a chorus of opinion. Treading the line between recognizing real achievement and not fanning the fires of unrealistic expectation is an ongoing challenge for the quantum computing community. Oak Ridge touted the role of Summit in support of the work and issued a press release: “This experiment establishes that today’s quantum computers can outperform the best conventional computing for a synthetic benchmark,” said ORNL researcher and Director of the laboratory’s Quantum Computing Institute Travis Humble. “There have been other efforts to try this, but our team is the first to demonstrate this result on a real system.”

Intel, which waded in enthusiastically when the unsanctioned paper was first discovered, did so again today in a blog by Rich Uhlig, Intel senior fellow and managing director of Intel Labs:

“Bolstered by this exciting news, we should now turn our attention to the steps it will take to build a system that will enable us to address intractable challenges – in other words, to demonstrate “quantum practicality.” To get a sense of what it would take to achieve quantum practicality, Intel researchers used our high-performance quantum simulator to predict the point at which a quantum computer could outpace a supercomputer in solving an optimization problem called Max-Cut. We chose Max-Cut as a test case because it is widely used in everything from traffic management to electronic design, and because it is an algorithm that gets exponentially more complicated as the number of variables increases.

“In our study, we compared a noise-tolerant quantum algorithm with a state-of-the art classical algorithm on a range of Max-Cut problems of increasing size. After extensive simulations, our research suggests it will take at least hundreds, if not thousands, of qubits working reliably before quantum computers will be able to solve practical problems faster than supercomputers…In other words, it may be years before the industry can develop a functional quantum processor of this size, so there is still work to be done.”

While practical quantum computing may be years away, the Google breakthrough seems impressive. Time will tell. Google’s quantum program is roughly 13 years old, begun by Google scientist Hartmut Neven in 2006. Martinis joined the effort in 2014 and set up the Google AI Quantum team. It will be interesting to watch how Google rolls out its web access program and what the quantum community’s reaction is. No firm timeline for the web portal was mentioned.

Link to Nature paper: https://www.nature.com/articles/s41586-019-1666-5

Link to Martinis’ and Boixo’s blog: https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html

Link to Pichai blog: https://blog.google/perspectives/sundar-pichai/what-our-quantum-computing-milestone-means

 
