The Case Against ‘The Case Against Quantum Computing’

By Ben Criger

January 9, 2019

Editor’s note: In this contributed piece, Ben Criger, a post-doctoral researcher at QuTech, part of the TU Delft in the Netherlands, responds to criticisms of quantum computing and offers an explanation for why such criticisms tend to garner a lot of attention.

It’s not easy to be a physicist. Richard Feynman (basically the Jimi Hendrix of physicists) once said: “The first principle is that you must not fool yourself – and you are the easiest person to fool.” This maxim motivates us to be critical of our research, even if we’re more critical when it comes to the research of others. From time to time, we even look through journals and technical magazines for arguments against the things we’re trying to do.

Last month, while I was looking for some nice criticism of quantum computing, I had the opportunity to read an article called “The Case Against Quantum Computing,” written by Mikhail Dyakonov, in IEEE Spectrum. While I was reading, I noticed two things that seemed out of the ordinary. First, all of the physics-based criticism of quantum computing was wrong, or had been addressed twenty years ago when the field was starting. The second, and perhaps more important, thing is that I could see the appeal of the article, despite its technical deficiencies.

I noticed that this article had been reviewed on the 27th of November by John Russell, here in HPCwire, so I thought that this would also be a good forum for a rebuttal (many thanks to Tiffany Trader for giving me the opportunity to write one). In the following sections, I’m going to go over two of the main technical points that Dyakonov makes, and try to give people a better idea about where we’re at in quantum computing. I’ll conclude with a comment on the article’s appeal.

Precision in Computing

Dyakonov: “A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.”

No computer, classical or quantum, ever has to process even a single continuous parameter. In classical computers, we can use floating-point arithmetic to approximate continuous parameters using a finite number of bits. Most of the time, we can even manage to do it to within the desired relative precision, in order to avoid catastrophic error propagation. This is because the number of distinct values which we can express using a floating-point type scales exponentially with the number of bits.
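
To make “relative precision” concrete, here’s a small Python check (my own illustration, not something from Dyakonov’s article): the gap between a double-precision float and its nearest neighbour, measured relative to the number itself, sits near machine epsilon whether the number is of order one, astronomically large, or vanishingly small.

```python
import math
import sys

# Relative precision of IEEE double-precision floats: the spacing between
# adjacent representable values, divided by the value itself, stays close to
# machine epsilon (~2.2e-16) across an enormous range of magnitudes.
print(sys.float_info.epsilon)
for x in (1.0, 1e100, 1e-100):
    print(x, math.ulp(x) / x)   # math.ulp: gap to the next representable float (Python 3.9+)
```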

Normally, I wouldn’t belabour this point so heavily, but I’m going to do the “quantum” version of this in a minute, so let’s take a look at an animation of floating-point representations in action:

Here, I’m writing out all numbers of the form (−1)^(base sign) × significand × 10^((−1)^(exp sign) × exponent), where the variables significand and exponent are each n-bit integers. Now, I can’t plot the whole real line (my monitor isn’t wide enough), so I’ve used a Riemann projection, drawing a ray from the center of the semi-circle shown above to the point on the real line that I’d like to show, and instead showing where that ray intersects the semi-circle, like so:

[Diagram: projecting a point on the real line onto the semi-circle.]

If we begin with 0 bits in the significand and exponent, we can assign any value we like to the sign bits, and the only number we can represent is 0. There are four independent ways, therefore, to represent 0, so there’s a little inefficiency in the representation. However, by the time I get up to 9 bits each in the significand and exponent, all of the points plotted are overlapping, and it’s clear that I have enough precision for the task at hand, for any real number I care to approximate.
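
If you’d like to check that counting argument yourself, here’s a rough Python sketch (my own toy version, much simpler than the animation scripts linked at the end of this article) that enumerates every value of the format above for n-bit significands and exponents. It shows both the four redundant encodings of 0 when n = 0 and the rapid growth in distinct values as n increases.

```python
from fractions import Fraction

def count_toy_floats(n_bits):
    """Count encodings and distinct values of the toy format
    (-1)**s * significand * 10**((-1)**t * exponent),
    where significand and exponent are unsigned n-bit integers."""
    encodings, values = 0, set()
    for significand in range(2 ** n_bits):
        for exponent in range(2 ** n_bits):
            for s in (0, 1):        # sign of the number
                for t in (0, 1):    # sign of the exponent
                    encodings += 1
                    values.add((-1) ** s * significand
                               * Fraction(10) ** ((-1) ** t * exponent))
    return encodings, len(values)

for n in range(5):
    encodings, distinct = count_toy_floats(n)
    print(f"n = {n}: {encodings} encodings, {distinct} distinct values")
```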

A similar result holds in quantum computing, though the ‘data type’ we’ll consider here is a single qubit’s state, rather than a real number. The continuous complex parameters α and β mentioned by Professor Dyakonov go in a length-two vector:

|ψ> = α|0> + β|1>, i.e. the column vector with entries α and β, normalised so that |α|² + |β|² = 1.

These parameters can also be mapped to angles θ and φ on the Bloch sphere, like so:

α = cos(θ/2)        β = e^(iφ) sin(θ/2)

(exercise for the reader: show that the state |0>, with α = 1 and β = 0, sits at the North Pole).
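
As a quick check of that exercise (my own sketch, not part of the original article), here are a few lines of Python that recover θ and φ from α and β using the mapping above, and confirm that |0> does indeed sit at the North Pole of the Bloch sphere:

```python
import cmath
import math

def bloch_angles(alpha, beta):
    """Recover (theta, phi) from a normalised state (alpha, beta), using
    alpha = cos(theta/2) and beta = e^(i*phi) * sin(theta/2); the global
    phase is fixed so that alpha is real and non-negative."""
    phase = cmath.phase(alpha) if abs(alpha) > 1e-12 else cmath.phase(beta)
    alpha = alpha * cmath.exp(-1j * phase)
    beta = beta * cmath.exp(-1j * phase)
    theta = 2 * math.acos(min(1.0, abs(alpha)))
    phi = cmath.phase(beta) if abs(beta) > 1e-12 else 0.0
    return theta, phi

def bloch_point(theta, phi):
    """Cartesian coordinates of the Bloch vector; the North Pole is (0, 0, 1)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

for label, state in (("|0>", (1, 0)), ("|1>", (0, 1)), ("|+>", (1 / 2 ** 0.5, 1 / 2 ** 0.5))):
    x, y, z = bloch_point(*bloch_angles(*state))
    print(label, (round(x, 6), round(y, 6), round(z, 6)))   # |0> -> (0, 0, 1), the North Pole
```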

The operations we can apply in quantum computing are unitary matrices, equivalent to rotations of the Bloch sphere. For a single qubit, these matrices have two rows and two columns. Now, in fault-tolerant quantum computing, the operations which we can implement with arbitrarily low (but not exactly zero) error rates are limited to a discrete set. Let’s suppose, for the sake of example, that there are two, and that they’re called H and T. Furthermore, let’s suppose that we only know how to initialise a single fixed state of our fault-tolerant qubit, the |0> state. How many qubit states can we reach with a string of Hs and Ts of fixed length n? Again, just as in floating-point arithmetic, the number of distinct states I can reach scales exponentially with respect to the length of the sequence, despite a few collisions at low n (for example, HH |0> = |0>):

[Animation: the states reachable with H/T strings of increasing length, plotted on the Bloch sphere.]

This animation doesn’t look quite as nice as the last one. There’s a lot more space to cover on the sphere than there is on the semi-circle that we used for floating-point arithmetic. From this, we can conclude that quantum computing is harder than classical computing, though I suspect that this does not come as a surprise.
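
To see that scaling in miniature, here’s a rough numpy sketch of my own (small n only, and states are identified up to a global phase by rounding) that applies every length-n string of Hs and Ts to |0> and counts the distinct states it reaches:

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])           # T (pi/8) gate
KET0 = np.array([1, 0], dtype=complex)                        # the |0> state

def canonical(state, digits=8):
    """Hashable key for a state, unique up to a global phase."""
    idx = int(np.argmax(np.abs(state) > 1e-9))      # first non-negligible amplitude
    state = state * np.exp(-1j * np.angle(state[idx]))
    return tuple(np.round(state, digits))

def reachable(n):
    """Number of distinct states produced by the 2**n length-n strings of H and T."""
    states = set()
    for word in product((H, T), repeat=n):
        state = KET0
        for gate in word:
            state = gate @ state
        states.add(canonical(state))
    return len(states)

for n in range(1, 9):
    print(n, reachable(n))   # grows quickly with n, despite collisions like HH|0> = |0>
```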

Now, this isn’t the only thing fundamentally wrong with quantum computing, according to Professor Dyakonov. According to him, the entire discussion above is irrelevant, since imprecision and error will inevitably ruin any large-scale quantum computation before we can even think about stringing our Hs and Ts together. This is probably also not a surprise, but this was one of the first big problems that was ever solved in quantum computing, and I’ll talk about it a bit in the following section.

The Threshold Theorem

Dyakonov: “Indeed, [scientists studying quantum computing] claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.”

The threshold theorem, initially proven by Aharonov and Ben-Or, has been around for about twenty years. The proof itself is in a 63-page paper, but the basic qualitative argument is relatively easy to grasp in a few paragraphs. At the cost of oversimplifying things, I’ll try to summarise that argument here.

Let’s define a logical gate as a small quantum computation that uses a number of physical gates acting on encoded states to simulate the effect of a single physical quantum logic gate acting on an unencoded state. Some logical gates can be made fault-tolerant by adding quantum error correction subroutines. The function of these subroutines is to correct the failure of a small number (typically one) of the physical quantum logic gates included in either the logical gate, or the error correction subroutines themselves. Each of these gadgets (that’s the technical term) contains a certain number of physical gates, let’s call it G. Also, let’s assume that if any pair of these gates does something unanticipated, the whole thing fails. When, then, does such a circuit have a low error probability? Let’s suppose, for the sake of simplicity, that each physical gate fails independently with probability p. The probability of error for the fault-tolerant gadget is then at most (G choose 2)p² (there are (G choose 2) pairs of gates, and each pair fails together with probability p²), and whenever that’s less than p, we’re in business.
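
To put rough numbers on that crossover (a toy calculation of my own, with a made-up gadget size G), here’s the pessimistic pair-counting bound in a few lines of Python. The point at which the encoded gate starts to beat the bare one is exactly the threshold:

```python
from math import comb

def gadget_error(p, G):
    """Upper bound on gadget failure: any single fault is corrected, and any
    pair of the G physical gates failing together is (pessimistically) fatal."""
    return comb(G, 2) * p ** 2

G = 100                        # hypothetical gadget size, for illustration only
p_threshold = 1 / comb(G, 2)   # ~2e-4: below this, encoding helps rather than hurts
for p in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"p = {p:.0e}: gadget error <= {gadget_error(p, G):.1e}, "
          f"beats the bare gate: {gadget_error(p, G) < p}")
```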

Now, (G choose 2)p² may not be a low enough probability of error for a given computation. In that case, we take advantage of something called concatenation, which is where you replace every physical gate in a fault-tolerant logical gate with yet another fault-tolerant logical gate, as depicted below:

[Animation: each physical gate in the fault-tolerant gadget is replaced, in turn, by a smaller copy of the whole gadget.]

If we use l levels of this concatenation, the number of gates we need to execute scales exponentially in l, but (very importantly) the final probability of error scales like p^(2^l), so it’s doubly-exponentially suppressed.
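
Putting the two scalings side by side (again with invented numbers for the gadget size and physical error rate, and keeping the combinatorial prefactor that the bare p^(2^l) estimate drops), the trade-off looks something like this:

```python
from math import comb

def concatenate(p, G, levels):
    """Error bound and physical-gate count after `levels` rounds of
    concatenation, iterating the pair-counting bound from above."""
    error, gates = p, 1
    for _ in range(levels):
        error = comb(G, 2) * error ** 2   # doubly-exponential suppression of errors...
        gates *= G                        # ...at an exponential cost in gate count
    return error, gates

G, p = 100, 1e-5   # illustrative numbers only
for level in range(4):
    error, gates = concatenate(p, G, level)
    print(f"l = {level}: {gates:>8} gates, error <= {error:.1e}")
```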

If this sounds clunky and inefficient to you, you’d be more or less right. The important thing for this initial proof of concept was not that the scheme be particularly efficient, but that it use simple ideas which could be widely understood. Over the past twenty years, a small community of quantum computing researchers has been concerned with finding more efficient schemes, ones with fewer gates and the ability to tolerate higher error rates, and the results have been fairly positive. They’ve also been hard at work proving that quantum computing can still be made fault-tolerant if the errors are correlated, rather than independent, as I’ve assumed above (though Aharonov and Ben-Or consider weak correlations in their original work).

During this time, people like Mikhail Dyakonov (and Gil Kalai, and other noted skeptics of quantum computing) have been career researchers. If the theorem were false, we’d expect one of these skeptics, or someone they’ve inspired, to have proven that it was false, or to show that physically-reasonable correlated noise precludes quantum computing. They have not done this. Instead, Dyakonov has loosely suggested that the theorem is false, without a direct statement, or evidence. I, for one, think that the theorem is more or less correct, and that quantum computing is possible.

These are the official fact-based rebuttals that we physicists rely on when confronted with critiques from Dyakonov and the other scientists and engineers who believe that quantum computing is doomed for some reason or another. They’ve been used before, and I suspect that they’ll be used again. In one sense, they’re perfectly sufficient, but I don’t think they’ve addressed the core problem. Dyakonov’s critiques are unfounded, and yet they endure. Why?

The Important Question

So, why was Dyakonov’s article written? Why was it published? I hope I’ve argued adequately that there’s not a lot of science behind it, so why is it so appealing?

I think this article was published because, in a sense we don’t often talk about, it’s correct. Those of us who study quantum computing don’t view it as our responsibility to oppose the unjustified hype building up in the popular press. Times are tough for scientists in every field, as the budgets for those funding agencies Dyakonov mentions dwindle. There’s a temptation not to rock the boat, especially when the critics we do have don’t do a great job of challenging us on technical grounds.

We lament the lack of well-founded criticism, but how often, and how loudly, do we lament the abundance of unfounded optimism? Are these two things not equally dangerous to the progress of science? We’re the people best able to criticise quantum computing; is it then our responsibility to do so?

So far, we’ve left editors with few options when they look for something to stem the tide of breathless proclamations about how quantum computing is going to solve everything. We often lament the lack of good critiques of quantum computing, but in the end, the only chance we have to elevate the level of criticism is to do it ourselves.

About the Author

Ben is a post-doctoral researcher at QuTech, part of the TU Delft in the Netherlands. His research is focused on near-term implementations of fault-tolerant quantum computing. He can be reached via Twitter (@BenCriger) and GitHub (github.com/bcriger). Scripts producing the animations in this article can be found at github.com/bcriger/examples/tree/master/articles/2019_01_HPCWire.
