Quantum computing technology advances so quickly that it is hard to stay current. HPCwire recently asked a handful of senior researchers and executives for their thoughts on nearer-term progress and challenges. We’ll present their responses as they trickle in through the late summer and fall. (These execs take vacations too!) This also allows us to present the respondents’ full answers. As a regular practice, HPCwire will continue to survey executives in the community to present a kind of rolling glimpse into current thinking. Think of these as real-time snapshots of the constantly evolving quantum landscape.
Here we present responses from Jay Gambetta, VP Quantum, IBM, and Timothy Hirzel, chief evangelist, Zapata Computing – two very different companies. IBM covers, basically, all aspects of quantum computing, with an emphasis on semiconductor-based superconducting qubits. Zapata is a software-only startup, tiny in comparison to IBM, and agnostic about underlying qubit technology. Their answers reflect this difference, but they also reflect IBM’s and Zapata’s shared view that quantum computing will achieve at least some level of practical use in the NISQ (noisy intermediate-scale quantum) computing era. Their responses, without formatting changes, are presented below.
1 Significant advance. What’s your sense of the most significant advance(s) achieved in the past six months to year or so, and why? What nearer-term future advance does it lay the groundwork for?
IBM’s Gambetta:
Multilayer wiring, packaging and coherence have enabled superconducting qubit systems to break the 100-qubit barrier. This is a landmark for quantum computing, as this system size allows us to potentially tackle quantum circuits of complexity beyond the scope of classical processors. These advances have been accompanied by two-qubit error rates reaching 1e-3, which is approaching the point at which error mitigation techniques can enable noise-free estimation of observables in a reasonable amount of time.
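Error mitigation at the error rates Gambetta mentions can be made concrete with a toy example. The Python sketch below illustrates zero-noise extrapolation, one common mitigation technique: the same circuit is run at deliberately amplified noise levels, and the results are extrapolated back to the zero-noise limit. The exponential-decay noise model and all constants here are assumptions for illustration only, not device data.

```python
# A minimal sketch of zero-noise extrapolation (ZNE), one family of the
# error mitigation techniques referred to above. The noise model and all
# numbers are illustrative assumptions, not measurements.
import math

def noisy_expectation(noise_scale, ideal=1.0, decay=0.05):
    """Model a measured observable that decays with noise. noise_scale=1
    mimics the bare device; larger scales mimic deliberately amplified
    noise (e.g. via gate folding)."""
    return ideal * math.exp(-decay * noise_scale)

def richardson_extrapolate(scales, values):
    """Least-squares line through (scale, value) pairs, evaluated at
    scale = 0, i.e. the estimated zero-noise value."""
    n = len(scales)
    mean_x = sum(scales) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(scales, values))
             / sum((x - mean_x) ** 2 for x in scales))
    return mean_y - slope * mean_x  # intercept = zero-noise estimate

scales = [1.0, 2.0, 3.0]  # noise amplification factors
values = [noisy_expectation(s) for s in scales]
mitigated = richardson_extrapolate(scales, values)
print(f"raw at scale 1: {values[0]:.4f}, mitigated: {mitigated:.4f}")
```

The mitigated estimate lands closer to the ideal value than any single noisy measurement, at the cost of running the circuit several times at different noise levels.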
Zapata’s Hirzel:
- Quantum advantage in generative modeling: Recent works such as “Generation of High-Resolution Handwritten Digits with an Ion-Trap Quantum Computer”, “Enhancing Generative Models Via Quantum Correlations” and “Evaluating Generalization in Quantum and Classical Generative Models” have laid the groundwork, both experimentally and theoretically, for establishing the near-term potential for quantum computers to improve machine learning algorithms.
- Approaches to using early fault-tolerant quantum computers: There is a growing body of recent research that focuses on developing algorithms and resource estimations suited for “early fault-tolerant quantum computers,” or quantum computers with limited quantum error correction capabilities. Early fault-tolerant quantum computations will need to balance power with error robustness. Recent work has laid the groundwork for designing quantum algorithms that let us tune this balance. This departs from approaches with too little error robustness (design of algorithms for fault-tolerant quantum computers) and approaches with too much error robustness but not enough power (development of costly error mitigation techniques).
- Xanadu quantum supremacy experiment: Like other quantum supremacy demonstrations, this is a significant milestone in showing that we are now firmly in the era of engineered quantum systems that can manifest computational capabilities beyond what is possible with classical computers.
2 Algorithm development. We hear a lot about Shor’s and Grover’s algorithms and VQE solvers. What are the most important missing algorithms/applications needed for quantum computing, and how close are we to developing them?
IBM’s Gambetta:
As in classical computing, where it is commonly argued that there are 13 motifs needed for high-performance programming, in my view it is not that we need to find many more algorithms. The missing step is how we can program these and minimize the effects of noise. Long term, error correction is the solution, but the most important question is whether it is possible to implement the core quantum circuits with error mitigation and show a continuous path to error correction. I believe we have some ideas showing this path can be continuous. If we can leverage progress on error mitigation techniques to advance quantum applications, improvements in the hardware will have a more direct impact on quantum technologies. From these core quantum circuits, I expect there to be many applications, similar to the case in HPC, with the most likely areas being simulating nature (high energy physics, materials science, chemistry, drug design), data with structure (quantum machine learning, ranking, detecting signals), and non-exponential applications such as search and optimization.
Zapata’s Hirzel:
- Algorithms that leverage the sampling capabilities of quantum devices: Applications include machine learning (generative and recurrent models), optimization, and cryptography. One salient example in this category is to use quantum devices as a source of statistical power to enhance optimization (see this recent paper), which represents a fundamentally new paradigm of using near-term quantum devices for deriving practical advantage.
- Algorithms that leverage early fault-tolerant quantum device capabilities: A pertinent example is robust amplitude estimation (RAE), which is derived from a long line of work (see here, here, and here). Building on top of amplitude estimation, we can then make further improvements to hybrid quantum-classical schemes such as VQE, as well as algorithms for state property estimation (see here). These methods have applications in quantum chemistry, optimization, finance, and other areas.
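The appeal of amplitude estimation can be seen in raw query counts. A toy comparison, assuming the textbook scalings (direct sampling needs on the order of 1/ε² circuit repetitions to reach precision ε, while amplitude estimation needs on the order of 1/ε oracle queries); the constant factors are ignored here for illustration:

```python
# Illustrative query counts, assuming the textbook scalings: direct
# sampling costs ~1/eps^2 repetitions for precision eps, amplitude
# estimation ~1/eps queries. Constant prefactors are omitted.

def sampling_shots(eps):
    """Approximate repetitions for direct sampling at precision eps."""
    return round(1 / eps**2)

def amplitude_estimation_queries(eps):
    """Approximate oracle queries for amplitude estimation at precision eps."""
    return round(1 / eps)

for eps in [1e-2, 1e-3, 1e-4]:
    print(f"eps={eps:.0e}: sampling ~{sampling_shots(eps):>9} shots, "
          f"AE ~{amplitude_estimation_queries(eps):>6} queries")
```

The gap widens quadratically as the target precision tightens, which is why these methods matter for chemistry and finance workloads that need tight error bars.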
3 Qubit technology. Which technology(s) is most likely to succeed as an underlying qubit technology and why? Which technology(s) is most unlikely to succeed?
IBM’s Gambetta:
For a technology to succeed, it needs a path to scale the QPU, improve the quality of the quantum circuits run on the QPU, and speed up the running of quantum circuits on the QPU. Currently, in my opinion, not all qubit technologies can do all three of these, and for some it will be physically impossible to improve one or more of these components. I prefer superconducting qubits, as they offer the best path forward when optimized against all three of these components.
Zapata’s Hirzel:
It’s still too early to say. We anticipate that the best qubit technology will depend on the problem: different problem types will work best with different qubit approaches, and that will continue to evolve for some time.
We have had great results on superconducting and ion trap devices, and are excited to explore quantum photonics as well. The answer depends on what time scale one is considering and what is meant by success. Without error correction, doing an experiment using ion traps will probably give better results. On the other hand, ion traps may face limitations when the number of qubits scales up. A single trap can only hold so many ions, so different traps would need to somehow be entangled to reach larger numbers of qubits. There hasn’t been much experimental work in this area, so it’s not clear how well this setup will perform or how easy it will be to do QEC. The feedback between the CPU and different ion traps on the QPU will add a layer of complexity, mostly in terms of latency times.
Photonic approaches face different opportunities and challenges. With their scalable but short-lived qubits, they have been aimed more at realizing fault-tolerant architectures. By contrast, one can imagine some superconducting platforms being able to have all the qubits on one “module,” meaning one is not combining different chips into one mega chip; this would reduce latency problems in comparison with ion traps. For a neutral atom platform, scaling to larger numbers of qubits should be easier than for superconducting and ion traps, because unwanted interactions between different qubits will be small; but for this same reason, making gates is harder, since gates require interaction between the qubits. There are two platforms that could potentially be attractive over all the others: topological qubits (no need for QEC, but none has yet been created) and qubits constructed using cat states (this platform has inherent exponential suppression of bit-flip errors, and one needs only to correct phase-flip errors, greatly reducing the overhead of QEC, but it is a new platform).
4 Significant challenge. There’s no lack of challenges. What do you think are the top 3 challenges facing quantum computing and QIS today?
IBM’s Gambetta:
Maybe one could summarize the top challenges as: 1) scaling quantum systems up in size while 2) making them less noisy and faster, and 3) identifying and developing error mitigation techniques to allow noise-free estimates from quantum circuits.
Zapata’s Hirzel:
- Talent shortages. The quantum talent pool is relatively small and dwindling fast. According to our recent report on enterprise quantum computing adoption, 51% of enterprises that have started on the path to quantum adoption have already started identifying talent and building their teams. If you wait until the technology is mature, all the best talent will already be working for somebody else.
- The complexity of integrating quantum with existing IT. This is a familiar challenge for any enterprise that adopted AI and machine learning. You can’t just rip and replace; you need to integrate quantum computing with your existing tech stack. Any quantum speedup can easily be negated by an unwieldy quantum workflow. This includes moving data to compute and vice versa.
- Time and urgency. Quantum computing is moving fast, and many enterprises have little appreciation for how much time it will take to upgrade their infrastructure and build valuable quantum applications. Those that wait until the hardware is mature will spend a long time catching up with their peers that started early.
5 Error correction. What’s your sense of the qubit redundancy needed to implement quantum error correction? In other words, how many physical qubits will be needed to implement a logical qubit? Estimates have varied based on many factors (fidelity, speed, underlying qubit technology).
IBM’s Gambetta:
This is one of the public’s most misunderstood questions about quantum computing. Rather than just dive into QEC, I prefer to start with quantum circuits and ask what is needed to implement a quantum circuit (qubits, runtime, gate fidelity), because at this level the gates and operations, as well as the encoding, become important. The minimum number of qubits to encode a fully correctable logical qubit is 5. A popular LDPC code known as the surface code (and planar codes in general) has a good threshold, but has an encoding rate (number of encoded qubits to physical qubits) that approaches zero as the distance of the code increases. Furthermore, these codes do not support all gates natively and need techniques such as magic state injection to allow universal quantum circuits. This means these codes are good for demonstrations exploiting qubits with lower gate fidelities, but they are not practical for quantum computing in the long term, due to the very large numbers of physical qubits that you see in the literature. This makes a bigger difference to the physical qubit count than the underlying qubit technology.
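Gambetta’s point about the surface code’s vanishing encoding rate can be checked with a quick count, assuming the standard rotated-surface-code layout of d² data qubits plus d² − 1 measurement qubits per logical qubit:

```python
# How the surface code's encoding rate vanishes with distance, assuming
# the standard rotated-surface-code layout: one logical qubit encoded in
# d^2 data qubits plus d^2 - 1 measurement (ancilla) qubits.

def physical_qubits_per_logical(d):
    """Physical qubit count for one distance-d rotated surface code patch."""
    return d * d + (d * d - 1)  # = 2*d^2 - 1

for d in [3, 5, 11, 25]:
    n = physical_qubits_per_logical(d)
    rate = 1 / n  # encoded qubits per physical qubit
    print(f"distance {d:>2}: {n:>5} physical qubits, encoding rate {rate:.5f}")
```

Each logical qubit costs 2d² − 1 physical qubits, so the rate falls as 1/(2d² − 1): already under 2% at distance 5, and shrinking quadratically from there.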
In my view, the path forward is to ask whether we can implement quantum circuits by using ideas such as error suppression, error mitigation, error mitigation + error correction, and in the future build systems with long-range coupling to allow higher-rate quantum LDPC codes. I believe this path will find value in the near term and show a continuous track to more value with improvements in the hardware, rather than waiting until we can build a 1M+ qubit system with magic state injection. I also believe science is about the undiscovered, and I’m very excited about the revolution happening in error correction with new quantum LDPC codes. We need to maximize the co-design between hardware and theory to minimize the size of the system we need to build to bring value to our users.
Zapata’s Hirzel:
Under the current theory of quantum error correction, every order of magnitude improvement in the gate error (for example, a 1% error rate vs. a 10% error rate) reduces the required number of physical qubits by a roughly constant multiplier.
A subtlety worth mentioning is that “qubit redundancy” is not the only relevant metric. For example, error correction cycle rate and architecture scalability (even if it costs high qubit redundancy) might be equally important. We were recently awarded a grant from DARPA through which we are building tools to carry out fault-tolerant resource estimates. Stay tuned!
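One way to see the multiplier Hirzel describes is with a back-of-the-envelope surface-code model. The sketch below assumes the commonly quoted heuristic p_L ≈ 0.1 · (p/p_th)^((d+1)/2) with a threshold near 1%; the constants are rough assumptions for illustration, not measurements or resource estimates.

```python
# Back-of-the-envelope overhead estimate, assuming a commonly quoted
# surface-code heuristic: logical error per cycle
#   p_L ~ 0.1 * (p / p_th)^((d+1)/2)
# with an assumed threshold p_th of about 1%. Constants are rough
# illustrative assumptions, not device data.

P_TH = 1e-2  # assumed threshold error rate

def min_distance(p_gate, p_logical_target):
    """Smallest odd code distance d whose modeled p_L meets the target."""
    d = 3
    while 0.1 * (p_gate / P_TH) ** ((d + 1) / 2) > p_logical_target:
        d += 2  # surface code distances are odd
    return d

for p_gate in [1e-3, 1e-4]:  # a 10x improvement in gate error
    d = min_distance(p_gate, 1e-12)
    qubits = 2 * d * d - 1  # rotated surface code patch
    print(f"gate error {p_gate:.0e}: distance {d}, ~{qubits} physical qubits")
```

Under these assumptions, a tenfold improvement in gate error shrinks the required distance additively and the physical qubit count by a few-fold multiplier, which is the qualitative behavior described above.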
6 Your work. Please describe in a paragraph or two your current top project(s) and priorities.
IBM’s Gambetta:
As we go forward, there are two big challenges that we need to solve in the next couple of years. The first is to push scale by embracing the concept of modularity. Modularity across the entire system is critical, from the QPU to the cryo-components, the electronics for controls, and even the entire cryogenic environment. We are looking at this on multiple fronts, as detailed in our extended development roadmap. To allow for more efficient usage of the QPUs, we will introduce modularity in terms of classical control and classical links between multiple QPUs. This enables certain techniques for dealing with errors, known as error mitigation, and enables larger circuits to be explored with tight integration with classical compute through circuit knitting. The second strategy for modularity is to break the need for ever larger individual processor chips by having high-speed chip-to-chip quantum links. These links extend the quantum computing fabric through a multi-chip strategy. However, even this is not enough, as the rest of the components, like connectors and even cooling, could become a bottleneck, so a slightly longer-distance modularity is also required. For this we imagine meter-long microwave cryogenic links between QPUs that still provide a quantum communication link, albeit slower than the direct chip-to-chip ones. These strategies for scaling are reflected by Heron, Crossbill, and Flamingo in our roadmap.
The second [challenge] is HPC + quantum integration; this is not simply classical + quantum integration, but true integration of HPC and quantum into a workflow. Digging into this more, classical and quantum will work together in many ways. At the lowest level, we need dynamic circuits, which bring concurrent classical calculations to quantum circuits, allowing simple calculations to happen within the coherence time (100 nanoseconds). At the next level, we will need classical compute to perform runtime compilation, error suppression, error mitigation, and eventually error correction; this needs low latency and must be close to the QPU. Above this level, I am very excited by circuit knitting, an idea that shows how we can extend the computational reach of quantum by adding classical computing. For example, by combining linear algebra techniques and quantum circuits, we can effectively simulate a larger quantum circuit. To build this layer, we need to develop ideas that, within milliseconds, can do a calculation on a classical computer (which could be a GPU), then run a quantum circuit and obtain the output.
Zapata’s Hirzel:
We can’t share all our projects, but there are several that stand out. Our QML (Quantum Machine Learning) Suite is now available to our enterprise customers via our quantum workflow orchestration platform, Orquestra. The QML Suite is a toolbox of plug-and-play, user-defined workflows for building quantum machine learning applications. This new offering embodies our commitment to helping our customers generate near-term value from quantum computers. We’re particularly excited about generative modeling as a near-term application for QML, which can be used for optimization problems and to create synthetic data for training models of situations with small sample sizes, such as financial crashes and pandemics.
One of our most involved and public customer projects right now is our work with Andretti Autosport to upgrade their data analytics infrastructure to be quantum-ready. Not many people know this, but INDYCAR racing is a very analytics-heavy sport — each car generates around 1TB of data in a single race. We’re helping Andretti build advanced machine learning models to help determine the best time for a pit stop, ways to reduce fuel consumption, and other race strategy decisions. See our latest joint press release here for more details.
Lastly, cybersecurity has become a top priority for us. We have been approached by customers at the senior CIO/CISO levels asking for our help in assessing their post-quantum vulnerabilities. People assume encryption-busting algorithms like Shor’s algorithm are still decades away, but the threat could come much sooner. In fact, it is already here in the form of save now, decrypt later (SNDL) attacks. As the inventors of Variational Quantum Factoring (an algorithm that significantly reduces the qubits required to factor a 2048-bit RSA number), we have a unique perspective on the timeline to quantum vulnerability. Orquestra also gives us the ability to assess the threats across the ecosystem at scale and offer swappable PQC (Post Quantum Cryptography) infrastructure upgrades in all data workflows over multiple clouds.
(Interested in participating in HPCwire’s periodic sampling of current thinking? Contact [email protected] for more details.)