HPCwire Quantum Survey: First Up – IBM and Zapata – on Algorithms, Error Mitigation, More

By John Russell

August 15, 2022

Quantum computing technology advances so quickly that it is hard to stay current. HPCwire recently asked a handful of senior researchers and executives for their thoughts on nearer-term progress and challenges. We’ll present their responses as they trickle in through the late summer and fall. (These execs take vacations too!) This also allows us to present the respondents’ full answers. As a regular practice, HPCwire will continue to survey executives in the community to present a kind of rolling glimpse into current thinking. Think of them as real-time snapshots of the constantly evolving quantum landscape.

Here we present responses from Jay Gambetta, VP Quantum, IBM, and Timothy Hirzel, chief evangelist, Zapata Computing – two very different companies. IBM covers, basically, all aspects of quantum computing, with an emphasis on semiconductor-based superconducting qubits. Zapata is a software-only startup, tiny in comparison to IBM, and agnostic about underlying qubit technology. Their answers reflect this difference, but they also reflect IBM’s and Zapata’s shared view that quantum computing will achieve at least some levels of practical use in the NISQ (noisy intermediate-scale quantum) computing era. Their responses, without formatting changes, are presented below.


1 Significant advance. What’s your sense of the most significant advance(s) achieved in the past six months to a year or so, and why? What nearer-term future advance does it lay the groundwork for?

IBM’s Gambetta:

Jay Gambetta, IBM

Multilayer wiring, packaging and coherence have enabled superconducting qubit systems to break the 100-qubit barrier. This is a landmark for quantum computing, as this system size allows us to potentially tackle quantum circuits of complexity beyond the scope of classical processors. These advances have been accompanied by two-qubit error rates reaching 1e-3, which is approaching the point at which error mitigation techniques can enable noise-free estimation of observables in a reasonable amount of time.
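The link Gambetta draws between ~1e-3 error rates and noise-free estimation of observables can be illustrated with zero-noise extrapolation, one common error mitigation technique. The sketch below is a toy classical model, not IBM's implementation; the exponential noise decay and all numeric values are invented for illustration.

```python
import numpy as np

def noisy_expectation(ideal, error_rate, noise_scale):
    """Toy model: depolarizing-style noise damps an observable's
    expectation value exponentially in the scaled error rate."""
    return ideal * np.exp(-noise_scale * error_rate * 100)

ideal = 1.0          # true <Z> of some circuit (assumed for the toy)
error_rate = 1e-3    # two-qubit error rate cited above

# Run the "circuit" at stretched noise levels (1x, 2x, 3x) ...
scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(ideal, error_rate, s) for s in scales])

# ... then fit a polynomial in the noise scale and extrapolate to zero noise.
coeffs = np.polyfit(scales, values, deg=2)
mitigated = np.polyval(coeffs, 0.0)

print(f"noisy (1x):  {values[0]:.4f}")
print(f"mitigated:   {mitigated:.4f}")  # much closer to the ideal 1.0
```

The price of this trick, paid in the shot count needed to resolve the extrapolation, is why lower physical error rates make mitigation practical "in a reasonable amount of time."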

 

Zapata’s Hirzel:

Tim Hirzel, Zapata

  • Quantum advantage in generative modeling: Recent work such as “Generation of High-Resolution Handwritten Digits with an Ion-Trap Quantum Computer,” “Enhancing Generative Models Via Quantum Correlations,” and “Evaluating Generalization in Quantum and Classical Generative Models” has laid the groundwork, both experimentally and theoretically, for establishing the near-term potential of quantum computers to improve machine learning algorithms.

  • Approaches to using early fault-tolerant quantum computers: There is a growing body of recent research that focuses on developing algorithms and resource estimations suited for “early fault-tolerant quantum computers,” or quantum computers with limited quantum error correction capabilities. Early fault-tolerant quantum computations will need to balance power with error robustness. Recent work has laid the groundwork for designing quantum algorithms that let us tune this balance. This departs from approaches with too little error robustness (design of algorithms for fault-tolerant quantum computers) and approaches with too much error robustness, but not enough power (development of costly error mitigation techniques).
  • Xanadu quantum supremacy experiment: Like other quantum supremacy demonstrations, this is a significant milestone in showing that we are now firmly in the era of engineered quantum systems that can manifest computational capabilities beyond what is possible with classical computers.
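The generative-modeling thread above can be made concrete with a toy "Born machine," where the measurement probabilities of a parameterized quantum state model a data distribution. The ansatz and target below are invented, not taken from the cited papers. Notably, this two-qubit ansatz is unentangled, so it can only represent product distributions; that is exactly why it cannot drive the KL divergence to zero on a correlated target, and it hints at why the quantum correlations studied in the second paper matter.

```python
import numpy as np

rng = np.random.default_rng(0)

def born_distribution(theta):
    """Toy 2-qubit 'Born machine': a parameterized (unentangled) state
    whose squared amplitudes define a distribution over bitstrings."""
    amps = np.array([np.cos(theta[0]) * np.cos(theta[1]),
                     np.cos(theta[0]) * np.sin(theta[1]),
                     np.sin(theta[0]) * np.cos(theta[1]),
                     np.sin(theta[0]) * np.sin(theta[1])])
    return amps ** 2 / np.sum(amps ** 2)

target = np.array([0.4, 0.1, 0.1, 0.4])  # correlated distribution to learn (invented)

def kl(p, q):
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

# Random-search "training": a crude stand-in for the gradient-based
# optimizers used in the literature.
best_theta, best_loss = None, np.inf
for _ in range(2000):
    theta = rng.uniform(0, np.pi, size=2)
    loss = kl(target, born_distribution(theta))
    if loss < best_loss:
        best_theta, best_loss = theta, loss

# The loss plateaus near the target's mutual information (~0.193 nats):
# without entanglement, the bitstring correlations are unreachable.
print(f"best KL divergence: {best_loss:.4f}")
```

Adding an entangling gate to the ansatz would let the model close that residual gap, which is the toy-scale version of the expressivity arguments made in the cited work.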

2 Algorithm development. We hear a lot about Shor’s and Grover’s algorithms and VQE solvers. What are the most important missing algorithms/applications needed for quantum computing, and how close are we to developing them?

IBM’s Gambetta:

As in classical computing, where it is commonly argued that there are 13 motifs needed for high performance programming, in my view it is not that we need to find many more algorithms. The missing step is how we can program these and minimize the effects of noise. Long term, error correction is the solution, but the most important question is whether it is possible to implement the core quantum circuits with error mitigation and show a continuous path to error correction. I believe we have some ideas showing this path can be continuous. And if we can leverage progress on error mitigation techniques to advance quantum applications, improvements in the hardware will have a more direct impact on quantum technologies. From these core quantum circuits, I expect there to be many applications, similar to the case in HPC, with the most likely areas being simulating nature (high energy physics, material science, chemistry, drug design), data with structure (quantum machine learning, ranking, detecting signals), and non-exponential applications such as search and optimization.

Zapata’s Hirzel:

  • Algorithms that leverage the sampling capabilities of quantum devices: Applications include machine learning (generative and recurrent models), optimization, and cryptography. One salient example in this category is to use quantum devices as a source of statistical power to enhance optimization (see this recent paper), which represents a fundamentally new paradigm of using near-term quantum devices for deriving practical advantage.
  • Algorithms that leverage early fault-tolerant quantum device capabilities: A pertinent example is robust amplitude estimation (RAE), which is derived from a long line of works (see here, here, and here). Building on top of amplitude estimation, we can then make further improvements to hybrid quantum-classical schemes such as VQE as well as algorithms for state property estimation (see here). These methods have applications in quantum chemistry, optimization, finance, and other areas.
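The amplitude-estimation line of work Hirzel cites rests on a common idea: run a circuit at several amplification depths, where the success probability oscillates as sin²((2d+1)θ), and post-process the counts to recover the amplitude a = sin²θ more efficiently than direct sampling. Below is a toy maximum-likelihood version, a simplified classical simulation in the spirit of those papers rather than Zapata's RAE; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

theta_true = 0.2          # amplitude angle; a = sin^2(theta) is the target
shots = 200

def measure(depth):
    """Simulate measuring an amplitude-amplified circuit: after `depth`
    amplification steps the success probability is sin^2((2*depth+1)*theta)."""
    p = np.sin((2 * depth + 1) * theta_true) ** 2
    return rng.binomial(shots, p) / shots

depths = [0, 1, 2, 4, 8]
data = [(d, measure(d)) for d in depths]

# Maximum-likelihood grid search over candidate theta values -- the core
# of amplitude estimation without full quantum phase estimation.
grid = np.linspace(1e-3, np.pi / 2 - 1e-3, 4000)
def loglik(th):
    ll = 0.0
    for d, f in data:
        p = np.clip(np.sin((2 * d + 1) * th) ** 2, 1e-9, 1 - 1e-9)
        ll += shots * (f * np.log(p) + (1 - f) * np.log(1 - p))
    return ll

theta_hat = grid[np.argmax([loglik(th) for th in grid])]
print(f"true a = {np.sin(theta_true)**2:.4f}, "
      f"estimated a = {np.sin(theta_hat)**2:.4f}")
```

The deeper circuits sharpen the estimate, which is why these methods become more powerful as hardware approaches early fault tolerance; the "robust" variants add explicit noise models to the likelihood.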

3 Qubit technology. Which technology(s) is most likely to succeed as an underlying qubit technology and why? Which technology(s) is least likely to succeed?

IBM’s Gambetta:

For a technology to succeed, it needs a path to scale the QPU, improve the quality of the quantum circuits run on the QPU, and speed up the running of quantum circuits on the QPU. Currently, in my opinion, not all qubit technologies can do all three of these, and for some it will be physically impossible to improve one or more of these components. I prefer superconducting qubits, as they offer the best path forward when optimized against all three of these components.

Zapata’s Hirzel:

It’s still too early to say. We anticipate that the best qubit technology will depend on the problem: different problem types will work best with different qubit approaches, and that will continue to evolve for some time.

We have had great results on superconducting and ion trap devices — and are excited to explore quantum photonics as well. The answer depends on what time scale one is considering and what is meant by success. Without error correction, doing an experiment using ion traps will probably give better results. On the other hand, ion traps may face limitations when the number of qubits scales up. A single trap can only hold so many ions, so different traps would need to somehow be entangled to reach larger numbers of qubits. There hasn’t been much experimental work in this area, so it’s not clear how well this setup will do and how easy it will be to do QEC. The feedback between the CPU and different ion traps on the QPU will add a layer of complexity, mostly in terms of latency times.

Photonic approaches face different opportunities and challenges. With their scalable but short-lived qubits, they have been more aimed at realizing fault-tolerant architectures. But one can imagine some superconducting platforms might be able to have all the qubits on one “module.” In other words, one is not combining different chips in one mega chip — this would reduce latency problems in comparison with ion traps. For a neutral atom platform, scaling to larger numbers of qubits should be easier than for superconducting and ion traps because unwanted interactions between different qubits will be small; but for this same reason making gates is harder, since gates require interaction between the qubits. There are two platforms that could potentially be attractive over all the others, namely topological qubits (no need for QEC, but none has been created) and qubits constructed from cat states (this platform has inherent exponential suppression of bit-flip errors, so one need only correct for phase-flip errors, greatly reducing the overhead of QEC, but it is a new platform).


4 Significant challenge. There’s no lack of challenges. What do you think are the top 3 challenges facing quantum computing and QIS today?

IBM’s Gambetta:

Maybe one could summarize the top challenges as: 1) scaling quantum systems up in size while 2) making them less noisy and faster, and 3) identifying and developing error mitigation techniques that allow noise-free estimates from quantum circuits.

Zapata’s Hirzel:

  • Talent shortages. The quantum talent pool is relatively small and dwindling fast. According to our recent report on enterprise quantum computing adoption, 51% of enterprises that have started on the path to quantum adoption have already started identifying talent and building their teams. If you wait until the technology is mature, all the best talent will already be working for somebody else.
  • The complexity of integrating quantum with existing IT. This is a familiar challenge for any enterprise that adopted AI and machine learning. You can’t just rip and replace; you need to integrate quantum computing with your existing tech stack. Any quantum speedup can easily be negated by an unwieldy quantum workflow. This includes moving data to compute and vice versa.
  • Time and urgency. Quantum computing is moving fast, and many enterprises have little appreciation for how much time it will take to upgrade their infrastructure and build valuable quantum applications. Those that wait until the hardware is mature will spend a long time catching up with their peers that started early.

5 Error correction. What’s your sense of the qubit redundancy needed to implement quantum error correction? In other words, how many physical qubits will be needed to implement a logical qubit? Estimates have varied based on many factors (fidelity, speed, underlying qubit technology).

IBM’s Gambetta:

This is one of the public's most misunderstood questions about quantum computing. Rather than dive straight into QEC, I prefer to start with quantum circuits and ask what is needed to implement a quantum circuit (qubits, runtime, gate fidelity). This is because at this level the gates and operations, as well as the encoding, become important. The minimum number of qubits to encode a fully correctable logical qubit is 5. The surface code, a popular LDPC code (and planar codes in general), has good thresholds but an encoding rate (number of encoded qubits to physical qubits) that approaches zero as the distance of the code increases. Furthermore, these codes do not support all gates and need to use techniques such as magic state injection to allow universal quantum circuits. This means that these codes are good for demonstrations exploiting qubits with lower gate fidelities, but they are not practical for quantum computing in the long term due to the very large numbers of physical qubits that you see in the literature. This makes a bigger difference to the physical qubit count than the underlying qubit technology.
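Gambetta's point about the surface code's vanishing encoding rate is easy to quantify. In one common layout, a distance-d surface code spends d² data qubits plus d² − 1 measurement qubits on a single logical qubit (exact counts vary by variant; this sketch assumes that layout):

```python
# Physical-qubit cost of one surface-code logical qubit as a function of
# code distance d: d*d data qubits plus d*d - 1 measurement qubits.
def surface_code_qubits(d):
    return 2 * d * d - 1

for d in (3, 5, 11, 25):
    n = surface_code_qubits(d)
    print(f"distance {d:2d}: {n:5d} physical qubits, "
          f"encoding rate 1/{n} = {1/n:.5f}")
```

Since one logical qubit costs O(d²) physical qubits, the encoding rate falls as 1/(2d² − 1), which is the "approaches zero as the distance increases" behavior described above.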

In my view, the path forward is to ask whether we can implement quantum circuits by using ideas such as error suppression, error mitigation, error mitigation + error correction, and in the future build systems with long range coupling to allow higher rate quantum LDPC codes. I believe this path will find value in the near term and show a continuous track to more value with improvements in the hardware, rather than waiting until we can build a 1M+ qubit system with magic state injection. I also believe science is about the undiscovered, and I’m very excited about the revolution happening in error correction with new quantum LDPC codes. We need to maximize the co-design between hardware and theory to minimize the size of the system we need to build to bring value to our users.

Zapata’s Hirzel:

Under the current theory of quantum error correction, every order of magnitude improvement in the gate error (for example, a 1% error rate vs. a 10% error rate) requires a constant multiplier in the number of physical qubits.

A subtlety worth mentioning is that “qubit redundancy” is not the only relevant metric. For example, error correction cycle rate and architecture scalability (even if it costs high qubit redundancy) might be equally important. We were recently awarded a grant from DARPA through which we are building tools to carry out fault-tolerant resource estimates. Stay tuned!


6 Your work. Please describe in a paragraph or two your current top project(s) and priorities.

IBM’s Gambetta:

As we go forward there are two big challenges that we need to solve in the next couple of years. The first is to push scale by embracing the concept of modularity. Modularity across the entire system is critical, from the QPU to the cryo-components, electronics for controls, and even the entire cryogenic environment. We are looking at this on multiple fronts, as detailed in our extended development roadmap. To allow for more efficient usage of the QPUs, we will introduce modularity in terms of classical control and classical links between multiple QPUs. This enables certain techniques for dealing with errors, known as error mitigation, and enables larger circuits to be explored with tight integration with classical compute through circuit knitting. The second strategy for modularity is to break the need for ever larger individual processor chips by having high-speed chip-to-chip quantum links. These links extend the quantum computing fabric through a multi-chip strategy. However, this is still not enough, as the rest of the components, like connectors and even cooling, could be a bottleneck, so slightly longer-distance modularity is also required. For this we imagine meter-long microwave cryogenic links between QPUs that still provide a quantum communication link, albeit slower than the direct chip-to-chip ones. These strategies for scaling are reflected by Heron, Crossbill, and Flamingo in our roadmap.

The second [challenge] is HPC + quantum integration; this is not simply classical + quantum integration but true HPC and quantum integration into a workflow. Digging into this more, classical and quantum will work together in many ways. At the lowest level we need dynamic circuits, which bring concurrent classical calculations to quantum circuits, allowing simple calculations to happen within the coherence time (100 nanoseconds). At the next level we will need classical compute to perform runtime compilation, error suppression, error mitigation, and eventually error correction. This needs low latency and must be close to the QPU. Above this level I am very excited by circuit knitting, an idea that shows how we can extend the computational reach of quantum by adding classical computing. For example, by combining linear algebra techniques and quantum circuits we can effectively simulate a larger quantum circuit. To build this layer we need to develop ideas that, within milliseconds, can do a calculation on a classical computer (which could be a GPU), then run a quantum circuit and obtain the output.
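The circuit-knitting idea Gambetta describes, extending quantum reach with classical linear algebra, can be sketched in its simplest possible case: when a four-qubit circuit factors into two independent two-qubit halves, each half can be run separately and the full expectation value recovered classically as a product. Real circuit-knitting schemes handle entangling cuts via quasi-probability decompositions at extra sampling cost; the random "circuits" below are a toy stand-in, not IBM's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_state(n_qubits):
    """State produced by a random circuit acting on |0...0> (toy model)."""
    dim = 2 ** n_qubits
    m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, _ = np.linalg.qr(m)          # random unitary via QR decomposition
    return q[:, 0]                  # first column = U|0...0>

def expect_z_all(state):
    """<Z x Z x ... x Z> for a state vector: sign is parity of the bitstring."""
    signs = np.array([(-1) ** bin(i).count("1") for i in range(state.size)])
    return float(np.real(np.sum(signs * np.abs(state) ** 2)))

a = random_state(2)                 # left half, run on a small QPU
b = random_state(2)                 # right half, run separately

# Classical "knitting": for an uncut (product) circuit, the 4-qubit
# expectation is just the product of the two 2-qubit expectations.
knitted = expect_z_all(a) * expect_z_all(b)
full = expect_z_all(np.kron(a, b))  # brute-force 4-qubit reference

print(f"knitted: {knitted:+.6f}   full: {full:+.6f}")
```

The two values agree exactly here because nothing crosses the cut; the quasi-probability machinery exists precisely to pay for the gates that do.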

Zapata’s Hirzel:

We can’t share all our projects, but there are several that stand out. Our QML (Quantum Machine Learning) Suite is now available to our enterprise customers via our quantum workflow orchestration platform, Orquestra. The QML Suite is a toolbox of plug-and-play, user-defined workflows for building quantum machine learning applications. This new offering embodies our commitment to helping our customers generate near-term value from quantum computers. We’re particularly excited about generative modeling as a near-term application for QML, which can be used for optimization problems and to create synthetic data for training models of situations with small sample sizes, such as financial crashes and pandemics.

One of our most involved and public customer projects right now is our work with Andretti Autosport to upgrade their data analytics infrastructure to be quantum-ready. Not many people know this, but INDYCAR racing is a very analytics-heavy sport — each car generates around 1TB of data in a single race. We’re helping Andretti build advanced machine learning models to help determine the best time for a pit stop, ways to reduce fuel consumption, and other race strategy decisions. See our latest joint press release here for more details.

Lastly, cybersecurity has become a top priority for us. We have been approached by customers at the senior CIO/CISO levels asking for our help in assessing their post-quantum vulnerabilities. People assume encryption-busting algorithms like Shor’s algorithm are still decades away, but the threat could be much sooner. In fact, it is already here in the form of save now, decrypt later (SNDL) attacks. As the inventors of Variational Quantum Factoring (an algorithm that significantly reduces the qubits required to factor a 2048-bit RSA number), we have a unique perspective on the timeline to quantum vulnerability. Orquestra also gives us the ability to assess the threats across the ecosystem at scale and offer swappable PQC (Post Quantum Cryptography) infrastructure upgrades in all data workflows over multiple clouds.

(Interested in participating in HPCwire’s periodic sampling of current thinking? Contact [email protected] for more details.)
