The Case Against ‘The Case Against Quantum Computing’

By Ben Criger

January 9, 2019

Editor’s note: In this contributed piece, Ben Criger, a post-doctoral researcher at QuTech, part of the TU Delft in the Netherlands, responds to criticisms of quantum computing and offers an explanation for why such criticisms tend to garner a lot of attention.

It’s not easy to be a physicist. Richard Feynman (basically the Jimi Hendrix of physicists) once said: “The first principle is that you must not fool yourself – and you are the easiest person to fool.” This maxim motivates us to be critical of our research, even if we’re more critical when it comes to the research of others. From time to time, we even look through journals and technical magazines for arguments against the things we’re trying to do.

Last month, while I was looking for some nice criticism of quantum computing, I had the opportunity to read an article called “The Case Against Quantum Computing,” written by Mikhail Dyakonov, in IEEE Spectrum. While I was reading, I noticed two things that seemed out of the ordinary. First, all of the physics-based criticism of quantum computing was either wrong or had been addressed twenty years ago, when the field was starting. Second, and perhaps more important, I could see the appeal of the article despite its technical deficiencies.

I noticed that this article had been reviewed on the 27th of November by John Russell, here in HPCwire, so I thought that this would also be a good forum for a rebuttal (many thanks to Tiffany Trader for giving me the opportunity to write one). In the following sections, I’m going to go over two of the main technical points that Dyakonov makes, and try to give people a better idea about where we’re at in quantum computing. I’ll conclude with a comment on the article’s appeal.

Precision in Computing

Dyakonov: “A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.”

No computer, classical or quantum, ever has to process even a single continuous parameter. In classical computers, we can use floating-point arithmetic to approximate continuous parameters using a finite number of bits. Most of the time, we can even manage to do it to within the desired relative precision, in order to avoid catastrophic error propagation. This is because the number of distinct values we can express using a floating-point type scales exponentially with the number of bits.

Normally, I wouldn’t belabour this point so heavily, but I’m going to do the “quantum” version of this in a minute, so let’s take a look at an animation of floating-point representations in action:

Here, I’m writing out all numbers of the form (−1)^(sign) × significand × 10^((−1)^(exp sign) × exponent), where the variables significand and exponent are each n-bit integers. Now, I can’t plot the whole real line (my monitor isn’t wide enough), so I’ve used a Riemann projection, drawing a ray from the center of the semi-circle shown above to the point on the real line that I’d like to show, and instead showing where that ray intersects the semi-circle, like so:

[Animation: floating-point values projected onto a semi-circle, with the bit width n increasing frame by frame.]

If we begin with 0 bits in the significand and exponent, we can assign any value we like to the sign bits, and the only number we can represent is 0. There are four independent ways, therefore, to represent 0, so there’s a little inefficiency in the representation. However, by the time I get up to 9 bits each in the significand and exponent, all of the points plotted are overlapping, and it’s clear that I have enough precision for the task at hand, for any real number I care to approximate.
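If you’d like to check this scaling yourself, here’s a minimal Python sketch (a toy of my own, not the script behind the animations) that enumerates the values representable with n-bit significands and exponents:

```python
from itertools import product

def representable(n):
    """All values of the form (-1)^s * m * 10^((-1)^t * e),
    where m (significand) and e (exponent) are n-bit integers."""
    values = set()
    for s, t in product((0, 1), repeat=2):  # the two sign bits
        for m in range(2**n):
            for e in range(2**n):
                values.add((-1)**s * m * 10.0**((-1)**t * e))
    return values

for n in range(6):
    print(n, len(representable(n)))
# n = 0 yields a single value (0, represented four times over);
# after that, the count grows roughly exponentially with n.
```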

A similar result holds in quantum computing, though the ‘data type’ we’ll consider here is a single qubit’s state, rather than a real number. The continuous complex parameters α and β mentioned by Professor Dyakonov go in a length-two vector:

|ψ> = α |0> + β |1>, i.e. the column vector (α, β)ᵀ

These parameters can also be mapped to angles θ and φ on the Bloch sphere, like so:

α = cos(θ/2)        β = e^(iφ) sin(θ/2)

(exercise for the reader: show that the state |0>, with α = 1 and β = 0, sits at the North Pole).
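To make the mapping concrete, here’s a short Python sketch (assuming numpy) that recovers θ and φ from α and β; the reader’s exercise falls out of the first line of output:

```python
import numpy as np

def bloch_angles(alpha, beta):
    """Map a normalised qubit state (alpha, beta) to Bloch-sphere angles
    (theta, phi), using alpha = cos(theta/2), beta = e^(i*phi) sin(theta/2)."""
    # Strip the global phase, so that alpha is real and non-negative.
    if abs(alpha) > 1e-12:
        phase = np.exp(-1j * np.angle(alpha))
        alpha, beta = alpha * phase, beta * phase
    theta = 2 * np.arccos(np.clip(alpha.real, -1.0, 1.0))
    phi = np.angle(beta) if abs(beta) > 1e-12 else 0.0
    return theta, phi

print(bloch_angles(1 + 0j, 0j))   # |0>: theta = 0, the North Pole
print(bloch_angles(0j, 1 + 0j))   # |1>: theta = pi, the South Pole
```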

The operations we can apply in quantum computing are unitary matrices, equivalent to rotations of the Bloch sphere. For a single qubit, these matrices have two rows and two columns. Now, in fault-tolerant quantum computing, the operations which we can implement with arbitrarily low (but not exactly zero) error rates are limited to a discrete set. Let’s suppose, for the sake of example, that there are two, and that they’re called H and T. Furthermore, let’s suppose that we only know how to initialise a single fixed state of our fault-tolerant qubit, the |0> state. How many qubit states can we reach with a string of Hs and Ts of fixed length n? Again, just as in floating-point arithmetic, the number of states I can reach scales exponentially with respect to the length of the sequence, despite a few collisions at low n (for example, HH |0> = |0>):

This animation doesn’t look quite as nice as the last one. There’s a lot more space to cover on the sphere than there is on the semi-circle that we used for floating-point arithmetic. From this, we can conclude that quantum computing is harder than classical computing, though I suspect that this does not come as a surprise.
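Here’s a rough Python sketch (again my own, assuming numpy, and deduplicating states up to global phase) of the counting behind that animation; the number of distinct reachable states grows quickly with the sequence length:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

def canonical(psi):
    """Fix the global phase (first nonzero amplitude made real and
    positive) and round, so that equivalent states compare equal."""
    idx = 0 if abs(psi[0]) > 1e-9 else 1
    psi = psi * np.exp(-1j * np.angle(psi[idx]))
    return tuple(complex(z) for z in np.round(psi, 9))

def count_reachable(n):
    """Count distinct states reachable from |0> by H/T strings of length <= n."""
    start = np.array([1, 0], dtype=complex)
    seen, frontier = {canonical(start)}, [start]
    for _ in range(n):
        new_frontier = []
        for psi in frontier:
            for gate in (H, T):
                key = canonical(gate @ psi)
                if key not in seen:
                    seen.add(key)
                    new_frontier.append(gate @ psi)
        frontier = new_frontier
    return len(seen)

for n in range(10):
    print(n, count_reachable(n))
# Note the collisions at low n: T|0> = |0> and HH|0> = |0>.
```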

Now, this isn’t the only thing fundamentally wrong with quantum computing, according to Professor Dyakonov. In his view, the entire discussion above is irrelevant, since imprecision and error will inevitably ruin any large-scale quantum computation before we can even think about stringing our Hs and Ts together. This is probably also not a surprise, but it was one of the first big problems ever solved in quantum computing, and I’ll talk about it a bit in the following section.

The Threshold Theorem

Dyakonov: “Indeed, [scientists studying quantum computing] claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.”

The threshold theorem, initially proven by Aharonov and Ben-Or, has been around for about twenty years. The proof itself is in a 63-page paper, but the basic qualitative argument is relatively easy to grasp in a few paragraphs. At the cost of oversimplifying things, I’ll try to summarise that argument here.

Let’s define a logical gate as a small quantum computation that uses a number of physical gates acting on encoded states to simulate the effect of a single physical quantum logic gate acting on an unencoded state. Some logical gates can be made fault-tolerant by adding quantum error correction subroutines. The function of these subroutines is to correct the failure of a small number (typically one) of the physical quantum logic gates included in either the logical gate, or the error correction subroutines themselves. Each of these gadgets (that’s the technical term) contains a certain number of physical gates, let’s call it G. Also, let’s assume that, if any pair of these gates does something unanticipated, the whole thing fails. When, then, does such a circuit have a low error probability? Let’s suppose, for the sake of simplicity, that each physical gate fails independently with probability p. The probability of error for the fault-tolerant gadget is then at most (G choose 2)p², and whenever that’s less than p, we’re in business.
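As a sanity check on that estimate, here’s a quick Python calculation comparing the exact probability that two or more of G independent gates fail with the (G choose 2)p² pair-counting approximation (G = 20 and p = 10⁻³ are made-up numbers, just for illustration):

```python
from math import comb

def p_two_or_more(G, p):
    """Exact probability that at least 2 of G independent gates fail."""
    return 1 - (1 - p)**G - G * p * (1 - p)**(G - 1)

G, p = 20, 1e-3
print(p_two_or_more(G, p))   # exact: ~1.87e-4
print(comb(G, 2) * p**2)     # pair-counting estimate: 1.9e-4
# Both are well below p = 1e-3, so the gadget beats the bare gate.
```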

Now, (G choose 2)p² may not be a low enough probability of error for a given computation. In that case, we take advantage of something called concatenation, which is where you replace every physical gate in a fault-tolerant logical gate with yet another fault-tolerant logical gate, as depicted below:

If we use l levels of this concatenation, the number of gates we need to execute scales exponentially in l, but (very importantly) the final probability of error scales as p^(2^l), so it’s doubly-exponentially suppressed.
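The recursion is easy to play with numerically. Below is a minimal sketch of the level-by-level error rate, with c standing in for the number of failing pairs, (G choose 2); the values of c and p here are made up for illustration:

```python
def logical_error(p, c, levels):
    """Logical error rate after `levels` rounds of concatenation,
    using the recursion p_{l+1} = c * p_l**2. Below the threshold
    p < 1/c, this equals (c*p)**(2**levels) / c."""
    for _ in range(levels):
        p = c * p * p
    return p

c = 100  # assumed: roughly (G choose 2) for a gadget of G ~ 15 gates
for level in range(5):
    below = logical_error(1e-3, c, level)    # p below the threshold 1/c
    above = logical_error(1.1e-2, c, level)  # p slightly above threshold
    print(level, below, above)
# Below threshold, the error rate collapses doubly-exponentially;
# above it, each round of concatenation only makes things worse.
```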

If this sounds clunky and inefficient to you, you’d be more or less right. The important thing for this initial proof of concept was not that the scheme be particularly efficient, but that it use simple ideas which could be widely understood. Over the past twenty years, a small community of quantum computing researchers has been concerned with finding more efficient schemes, with fewer gates and the ability to tolerate higher error rates, and the results have been fairly positive. They’ve also been hard at work proving that quantum computing can still be made fault-tolerant if the errors are correlated, rather than independent, as I’ve assumed above (though Aharonov and Ben-Or consider weak correlations in their original work).

During this time, people like Mikhail Dyakonov (and Gil Kalai, and other noted skeptics of quantum computing) have been career researchers. If the theorem were false, we’d expect one of these skeptics, or someone they’ve inspired, to have proven it false, or to have shown that physically-reasonable correlated noise precludes quantum computing. They have not done this. Instead, Dyakonov has loosely suggested that the theorem is false, without a direct statement or evidence. I, for one, think that the theorem is more or less correct, and that quantum computing is possible.

These are the official fact-based rebuttals that we physicists rely on when confronted with critiques from Dyakonov and the other scientists and engineers who believe that quantum computing is doomed for one reason or another. They’ve been used before, and I suspect that they’ll be used again. In one sense, they’re perfectly sufficient, but I don’t think they address the core problem. Dyakonov’s critiques are unfounded, and yet they endure. Why?

The Important Question

So, why was Dyakonov’s article written? Why was it published? I hope I’ve argued adequately that there’s not a lot of science behind it, so why is it so appealing?

I think this article was published because, in a sense we don’t often talk about, it’s correct. We who study quantum computing don’t tend to view it as our responsibility to oppose the unjustified hype building up in the popular press. Times are tough for scientists in every field, as the budgets of the funding agencies Dyakonov mentions dwindle. There’s a temptation not to rock the boat, especially when the critics we do have don’t do a great job of challenging us on technical grounds.

We lament the lack of well-founded criticism, but how often, and how loudly, do we lament the abundance of unfounded optimism? Are these two things not equally dangerous to the progress of science? If we’re the people best able to criticise quantum computing, is it not then our responsibility to do so?

So far, we’ve left editors with little to choose from when they look for something to stem the tide of breathless proclamations about how quantum computing is going to solve everything. In the end, the only chance we have to elevate the level of criticism is to do it ourselves.

About the Author

Ben is a post-doctoral researcher at QuTech, part of the TU Delft in the Netherlands. His research is focused on near-term implementations of fault-tolerant quantum computing. He can be reached via Twitter (@BenCriger) and GitHub (github.com/bcriger). Scripts producing the animations in this article can be found at github.com/bcriger/examples/tree/master/articles/2019_01_HPCWire.
