There are many issues in quantum computing today – among the more pressing are benchmarking, networking, and the development of hybrid classical-quantum approaches. For example, will quantum networking be necessary to practically scale up quantum computers? There are differing perspectives on this question, but most currently think networking will be necessary to achieve scale. Likewise, well-drawn benchmarks can help both quantum technology developers and users compare systems and identify strengths and weaknesses. But what does well-drawn mean?
In this most recent HPCwire/QCwire survey, senior researchers from D-Wave Systems, Oak Ridge National Laboratory, and PsiQuantum tackle benchmarking, networking, and hybrid classical-quantum computing approaches, and you may be surprised by some of their answers. For example, Peter Shadbolt of PsiQuantum offers a nuanced view on hybrid classical-quantum computing that’s well worth reading. (D-Wave didn’t weigh in on networking as that is not Murray Thom’s area of expertise.)
Our respondents include:
- Nicholas A. Peters, section head, Quantum Information Science (QIS) Section, Oak Ridge National Laboratory. Peters leads ORNL’s QIS efforts, focusing on networking technologies.
- Murray Thom, vice president of product management, D-Wave Systems. A pioneer in quantum annealing, D-Wave has also launched a gate-based system development effort and is expected to report on its progress later in the year. The company has also been a leader in commercial engagements.
- Peter Shadbolt, co-founder and chief scientific officer, PsiQuantum, which is developing a quantum system using photonics-based qubits. PsiQuantum believes its approach is perhaps the most scalable of current approaches and has a detailed plan to reach one million qubits, the often-cited threshold many believe will enable fault-tolerant quantum computing.
Thanks to all of the respondents; their answers are thoughtful. These regular HPCwire/QCwire surveys, intended to provide a kind of real-time view into important issues, couldn’t happen without their efforts. We expect perspectives to evolve as the technology evolves, and we’re hopeful our regular survey will reflect the current views of leaders in the quantum community.
1 Hybrid Classical-Quantum or Pure-play Quantum. There’s a lot of discussion around using quantum computing as essentially another accelerator in the advanced computing landscape, and around parsing problems into pieces, with some portions best run on quantum computers and other portions best run on classical resources.
a) What’s your take on the hybrid classical-quantum computing approach? Is it worthwhile? How significant a portion of quantum computing will the hybrid approach become? Do you see distinct roles for hybrid classical-quantum computing and for pure-play quantum computing?
Unless you are building an algorithm-specific quantum computer, much like how one might use an analog classical computer, I’d expect a hybrid classical-quantum system will be the primary way to leverage the power of quantum computers as they mature. Algorithm-optimized quantum-only machines could be used to simulate parts of problems that are hard on classical machines before we have a good way to integrate with larger classical infrastructures. Further, algorithm-optimized quantum computers may even make up core co-processing units used in more general hybrid classical-quantum systems.
We believe hybrid computing is central to achieving our quantum future. The combination of the best quantum computing methods and the best classical approaches will be the optimal way to solve problems. As powerful as modern classical computing technologies may be, there is an emerging set of applications that require new resources – quantum resources – to meet the demands of businesses in today’s increasingly competitive markets.
Pure-play quantum computing will likely be the realm of specialists and hybrid processing workflow designers. There will be uses for remote processing with direct calls to quantum processors – for example, in physics studies of spin glasses or sub-routines of a real Shor’s algorithm implementation. But from a commercial applications point of view, industry users will need whole-problem hybrid solvers with self-contained quantum subroutines.
As we look ahead, performant, high-value hybrid solvers across multiple problem types will continue to expand, delivering the benefits of both quantum and classical resources – for annealing quantum computers and gate-model systems alike – to emerging quantum use cases. What we have seen, and believe others will find as well, is that for problems you can solve most effectively with a quantum computer, you can reach an even larger problem size once you hybridize with classical systems.
We anticipate that most end-to-end applications enabled by quantum computing will depend on a mixture of both classical and quantum computation to produce valuable answers. However, there are two widely held misconceptions. The first is that this mixed responsibility “lowers the bar” for the performance of the quantum computer and creates opportunities for real utility using very small or weak quantum computers. This is not the case. As far as we understand, you need a powerful, error-corrected quantum computer before you can start talking seriously about quantum advantage – no matter how great your integration with conventional hardware might be.
Secondly, it is often thought that the quantum computer must be very tightly integrated with the supporting conventional hardware – high-bandwidth networking, colocation, etc. Consider that a “world-changing”, million-physical-qubit quantum computer supports only hundreds of logical qubits, billions of gates, and has a single-shot run-time much (much!) longer than a second. The bandwidth of user-facing data coming out of such a system is minuscule – on the order of kilobytes per second. Assuming that the program to be run can be expressed in less than a few gigabytes (an extremely conservative estimate), the entire machine can be operated remotely over a regular consumer internet connection. Latency and bandwidth are not prohibitive, and colocation is not required.
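Shadbolt’s remote-operation argument can be checked with back-of-envelope arithmetic. The numbers below are illustrative assumptions consistent with his description (a few-gigabyte program, a consumer-grade uplink, kilobytes-per-second output), not PsiQuantum specifications:

```python
# Back-of-envelope check: can a fault-tolerant quantum computer be
# driven over a regular consumer internet connection?
# All figures are illustrative assumptions, not vendor specifications.

program_size_bytes = 2e9   # assume a few-GB program description (conservative)
uplink_bps = 100e6         # assume a 100 Mbit/s consumer connection
output_rate_bps = 1e3 * 8  # ~kilobytes/s of user-facing results (per the text)

upload_time_s = program_size_bytes * 8 / uplink_bps
print(f"One-time program upload: {upload_time_s:.0f} s "
      f"(~{upload_time_s / 60:.1f} min)")

# With single-shot run-times well over a second, a one-off upload of a
# few minutes plus a ~kB/s result stream are negligible overheads --
# hence the claim that colocation is unnecessary.
```

With these assumptions the upload takes roughly 160 seconds, which is small next to run-times measured in many seconds or longer.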
b) Do you think quantum computing capability will become embedded in existing HPC application suites? For example, in a suite such as ANSYS, will quantum computing become incorporated as an accelerator option for users to target?
Eventually, it seems likely that quantum computers will be a part of future HPC. I don’t think it is clear yet whether we will be able to automate breaking up code into calls optimized for the different types of accelerators, or whether that will be left to programmers, though automation would be the desirable outcome.
Yes, at this point this seems like a natural outcome of the co-evolution of quantum and classical processors. We think it will result in a continuum of quantum-accelerated computations, each varying in the degree to which it depends on quantum computation.
At some point in the far future, I think this is a reasonable expectation, in the same way that features for exploiting SIMD, GPUs and TPUs have crept into other scientific software libraries. However, in the short term, we expect the use of quantum computers to be more bespoke, more hands-on, and less widely available than is suggested by the question.
2 Quantum Networking. Quantum networking is an active area of research on at least two fronts. 1) Many believe it will be necessary to network quantum processors together to achieve scale, whether at the chip level or system clustering. 2) Quantum networks (LAN/MAN/WAN, etc.) might offer many attractive attributes, ranging from secure communications to distributed quantum processing environments; DOE even has a Quantum Internet Blueprint.
a) How necessary do you think quantum networking will be for scaling up quantum computers? Will clustering smaller systems together be required to deliver adequate scale to tackle practical problems? When do you expect to see networked quantum chips/systems to start to appear, even if only in R&D? What key challenges remain?
One could argue that a quantum network will be needed to scale quantum computers. The value proposition is that, even if not strictly required, a quantum network of two quantum computers is potentially much more than a factor of two more powerful than two independent quantum computers. A quantum network might not, however, be optimized the same way for different types of qubits; once a particular qubit technology is selected, it drives many architectural considerations for supporting technology development. Another potential advantage of networked quantum computing resources is the potential to reduce crosstalk when addressing qubits living in different parts of a multi-core quantum processor machine. Finally, one could use different quantum computing technologies for different parts of a computation, not unlike how we use GPUs and CPUs in HPC today.
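The “more than a factor of two” intuition comes from how quantum state space scales: networking lets two machines share entanglement and behave as one larger machine, and the accessible state-space dimension grows exponentially with total qubit count. A minimal sketch with an illustrative qubit count:

```python
# Why networking two quantum computers can be worth far more than 2x:
# the dimension of the accessible state space grows exponentially
# with the number of jointly entangled qubits.

n = 50  # qubits per machine (illustrative)

dim_single = 2 ** n                   # one n-qubit machine
dim_two_independent = 2 * dim_single  # two machines with no shared entanglement
dim_networked = 2 ** (2 * n)          # one entangled 2n-qubit machine

ratio = dim_networked // dim_two_independent
print(f"Two independent machines: ~2 * 2^{n} amplitudes")
print(f"Networked into one:        2^{2 * n} amplitudes ({ratio:,}x larger)")
```

The networked configuration’s state space is larger by a factor of 2^(n-1), not 2, which is the sense in which two linked machines can far exceed two isolated ones.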
At least a million physical qubits are necessary for all known useful applications of quantum computers. For most qubit implementations, the qubits are and will forever remain too large to fit a million qubits onto a single chip (die/reticle), and therefore high-performance quantum networking will be critical to achieve any utility. Probably the most compelling exception to this generalization is quantum dots, where it is reasonable to expect that a million qubits can be fabricated into a single reticle field, albeit with challenges associated with control electronics. Outside of special cases such as quantum dots, where very high density can be achieved, we see chip-to-chip quantum networking as an essential prerequisite for commercial viability of quantum computers.
b) What’s your sense of progress to date in developing quantum networking and a quantum internet? What kinds of applications will be enabled, and how soon do you expect nascent quantum networks and prototype quantum internets to appear? What are the key technical hurdles remaining?
The progress in the US has been rapidly accelerating with recent investments. However, we may have small fault-tolerant quantum computers before we have fault-tolerant quantum networks, since the historic focus has been on the computers themselves. We can enable some limited quantum-based cybersecurity functions already, but they need further study to ensure methods of accreditation are developed and implemented. In addition to quantum computing, networking quantum sensors promises to greatly improve our ability to measure events of interest, including, potentially, the discovery of new physical phenomena such as dark matter, which we cannot directly detect today. The key technical hurdles are correcting for loss and other operational errors when transmitting quantum information.
The most compelling use-case that we are aware of for the proposed “quantum internet” is device-independent quantum key distribution, which enables secure communication with very specific and differentiated guarantees on security. PsiQuantum does develop components that are relevant to the challenges posed by a hypothetical quantum internet. For instance, we invest in low-loss photonic devices, high-efficiency manufacturable single photon detectors, high-performance optical phase-shifters, etc. However, PsiQuantum is focused on building a quantum computer, and does not pursue the quantum internet as a goal.
3 Benchmarks. We seem to love benchmarks and top-performer lists (think Top500 and MLPerf). These metrics can be useful or not so useful. Currently, there’s a lot of activity around developing benchmarks for quantum computing, from IBM’s Quantum Volume and IonQ’s Algorithmic Qubits (which is based on QED-C efforts) to diverse efforts underway at DOE. The idea, of course, is to provide reasonable ways to compare quantum systems based on criteria ranging from hardware performance characteristics to application performance across differing systems and qubit technologies.
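For readers unfamiliar with the first metric mentioned: in IBM’s convention, Quantum Volume is 2^n for the largest width n at which a machine passes a statistical “heavy-output” test on random square circuits (width equal to depth). The sketch below shows only that scoring convention; the heavy-output test itself is hardware-dependent and omitted, and the pass/fail data is hypothetical:

```python
# Sketch of the Quantum Volume scoring convention: QV = 2^n for the
# largest circuit width n at which the device passes the heavy-output
# test on random square (width == depth) circuits. The test results
# fed in here are hypothetical illustrative data.

def quantum_volume(passes: dict) -> int:
    """passes maps circuit width n -> whether the heavy-output test passed."""
    passed_widths = [n for n, ok in passes.items() if ok]
    return 2 ** max(passed_widths) if passed_widths else 1

# Hypothetical device that passes square circuits up to width 6:
print(quantum_volume({4: True, 5: True, 6: True, 7: False}))  # -> 64
```

The exponential scoring is deliberate: each additional passing width doubles the reported volume, reflecting the doubling of the state space.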
a) What’s your sense of the need for benchmarks in quantum computing? Which of the existing still-young offerings, if any, do you prefer and why? Are you involved in any benchmark development collaborations? To what extent do you use existing benchmarks to compare systems now?
Generally speaking, benchmarks are needed, though in conventional computing infrastructures careful consideration is given to practical issues like cost and energy consumption along with performance. How exactly one should quantify the performance of a quantum computer is still an active area of research, so how to relate the performance of a hybrid system to what’s possible with equal resources spent on an entirely classical infrastructure is also not yet clear. The technology is probably too immature to make a meaningful comparison at this point, and I am not currently involved in any quantum computing benchmark development efforts, though I am interested in understanding whether they might be applied to quantum repeater systems.
Benchmarks are vital in quantum computing, having two distinct purposes: communicating technological progress by measuring performance against an ideal (noise-free) quantum computation and informing customers about which products are most suitable for their computational needs.
For D-Wave’s quantum annealing computers, we prefer the second instance, comparing quantum hybrid application performance against existing commercial methods because we believe that customers need real-world comparisons to demonstrate business value.
D-Wave researchers are members of a few committees (IEEE, QED-C) working to develop benchmark tests for both gate-model and annealing quantum computers, and we have also published papers that illustrate our approach. We also have a large repertoire of internal benchmarks that measure the performance of bare hardware components, of the full quantum processing unit, and of our online hybrid solvers. We normally publish benchmark results when new products go live, again viewed, as often as possible, through the lens of commercial applications.
We welcome the concerted and sensitive effort by the community to define good benchmarks.
b) What elements do you think good quantum benchmarks should include? Should the benchmark be a single number, such as in Top500, or offer a suite of results such as is done in MLPerf? Who should develop the benchmarks? Do you think we will end up with an analog of the Top500 List for quantum computers?
Good quantum benchmarks should be able to capture and quantify the challenging aspects that currently make it difficult to build a scalable quantum computing platform. Perhaps they will be able to abstract to existing metrics, but that might be too lofty a goal considering the types of problems quantum computers will likely be good at solving. The broader computing community, including academia, industry, and government, should develop benchmarks. One could have a top500 list for quantum computers, however, I think it would be more desirable to find benchmarks that quantify the capability of hybrid systems.
Good user benchmarks should include performance measurements at whole-problem solving, as opposed to the performance of individual circuits or components (or else better information about how individual component performance is relevant to whole-problem performance). In addition, test designs should reflect the user experience in accounting for the full computation, using realistic inputs, and not unrealistically over-tuned for narrow test scenarios. Measurements also should incorporate both computation time and solution quality. Basically, they should follow standards and expectations that have been set out for classical computational benchmarking, with some necessary modifications for the quantum scenario.
In terms of whether the benchmark should be a single number: given the unusual properties of quantum computers, a single number can be misleading because single-number rankings over-generalize performance across too many applications and metrics. No quantum computer can be best at every task it is given, and a suite of numbers is needed to characterize the kinds of scenarios in which a given machine can outperform classical and other quantum alternatives.
The benchmarks need to be developed through dialog between quantum producers and quantum users. Producers want to be able to highlight the kinds of scenarios in which their computer performs best, and users want to know about test results that are relevant to their application or industry.
A single list for quantum computers is unlikely because of the current variety of incomparable technologies. Perhaps it will be possible a long time from now, after the technologies shake themselves out and settle on a small handful of best designs.
One way to use benchmarks is to help determine whether a particular machine is better or worse than another. However, in general what we would really like to quantify is the distance (essentially, the amount of time and money) between a particular machine, and the scale and performance that is required to achieve genuine utility – i.e. large-scale, fault-tolerant quantum computing. Current benchmarks are very good for the former, but in general are not as useful for the latter, primarily because nobody has yet built a device that is meaningfully large or performant. In other words, benchmarks allow us to rank-order current hardware, but since we also know that none of this hardware is remotely close to a genuinely useful quantum computer, the usefulness of the rank-ordering exercise is limited. This is not to dismiss current benchmarking efforts, but is merely a note of caution.
4 Your work. Please describe in a paragraph or two your current top project(s) and priorities.
My current top priority is the development of tools and techniques needed to build a national-scale quantum network. This will likely require the development of new concepts and quantum technologies to build a network of quantum repeaters. Such a network will probably look similar to a special purpose distributed quantum computer and will probably require us to encode our quantum information in photons of many different frequencies, or at the very least use these frequencies to improve the number of entangled photons that are probabilistically carried over an optical fiber. One of the major difficulties compared to quantum computing is that in networking we lose most of our quantum information carriers (the photons on which qubits are encoded) as they are transmitted. As a result, we need to fix large loss errors as well as other operation errors.
Supporting our track record of relentless product delivery, we’re continuing to focus on our Clarity roadmap to bring new innovations to market. In June 2022, we released an experimental prototype of our next-generation Advantage2 quantum system, which shows great promise with a new Zephyr topology and 20-way inter-qubit connectivity. This new prototype represents an early version of the upcoming full-scale product, and early benchmarks show increased energy scale and improved solution quality. New and existing customers can try out the experimental Advantage2 prototype by signing into Leap, our quantum cloud service.
Photonic quantum computers have not yet demonstrated very large entangled states of dual-rail-encoded photonic qubits. The reason for this is that multiplexing (essentially, trial-until-success) is required to overcome nondeterminism in single-photon sources and linear-optical entangling gates. Multiplexing is technically challenging for multiple reasons, but the most fundamental issue is the need for a very high-performance optical switch. PsiQuantum is investing heavily in a novel, high-performance, mass-manufacturable optical switch to overcome this issue. Beyond this, we are investing across the entire stack, from semiconductor process development, device design, packaging, test, reliability, systems integration and architecture, to control electronics and software, networking, cryogenic infrastructure, quantum architecture, error-correcting codes, implementations of fault-tolerant logic and algorithms, and application development.
(Interested in participating in HPCwire/QCwire’s periodic sampling of current thinking? Contact [email protected] for more details.)