HPCwire Quantum Survey: First Up – IBM and Zapata – on Algorithms, Error Mitigation, More

By John Russell

August 15, 2022

Quantum computing technology advances so quickly that it is hard to stay current. HPCwire recently asked a handful of senior researchers and executives for their thoughts on nearer-term progress and challenges. We'll present their responses as they trickle in through the late summer and fall. (These execs take vacations too!) This also allows us to present the respondents' full answers. As a regular practice, HPCwire will continue to survey executives in the community to present a kind of rolling glimpse into current thinking. Think of them as real-time snapshots of the constantly evolving quantum landscape.

Here we present responses from Jay Gambetta, VP Quantum, IBM, and Timothy Hirzel, chief evangelist, Zapata Computing – two very different companies. IBM covers, basically, all aspects of quantum computing, with an emphasis on superconducting qubits. Zapata is a software-only startup, tiny in comparison to IBM, and agnostic about underlying qubit technology. Their answers reflect this difference, but they also reflect IBM's and Zapata's shared view that quantum computing will achieve at least some level of practical use in the NISQ (noisy intermediate-scale quantum) computing era. Their responses are presented below.


1 Significant advance. What's your sense of the most significant advance(s) achieved in the past six months to a year or so, and why? What nearer-term future advance does it lay the groundwork for?

IBM’s Gambetta:


Multilayer wiring, packaging, and coherence advances have enabled superconducting qubit systems to break the 100-qubit barrier. This is a landmark for quantum computing, as this system size allows us to potentially tackle quantum circuits of complexity beyond the scope of classical processors. These advances have been accompanied by two-qubit error rates reaching 1e-3, which is approaching the point at which error mitigation techniques can enable noise-free estimation of observables in a reasonable amount of time.
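
[Editor's note: For readers new to the idea, the essence of one widely used error mitigation technique, zero-noise extrapolation, fits in a short Python sketch. The noise model below is a toy of our own: an assumed exponential decay of the signal with a noise-amplification factor, plus shot noise. On real hardware the amplification is typically done by gate folding, and this is not IBM's implementation.]

    import numpy as np

    rng = np.random.default_rng(0)
    ideal = 0.8   # noiseless expectation value <Z> we want to recover (toy value)
    decay = 0.25  # assumed exponential damping of the signal per unit of noise

    def noisy_expval(scale, shots=100_000):
        """Measured <Z> at amplified noise level `scale` (toy noise model).

        On hardware the noise is amplified by folding gates; here the signal
        is modeled as ideal*exp(-decay*scale) plus binomial shot noise."""
        ev = ideal * np.exp(-decay * scale)
        p1 = (1 - ev) / 2                        # map <Z> to P(outcome = 1)
        return 1 - 2 * rng.binomial(shots, p1) / shots

    scales = np.array([1.0, 1.5, 2.0, 3.0])      # noise amplification factors
    values = np.array([noisy_expval(s) for s in scales])

    # Fit log(signal) vs. scale, then extrapolate to scale = 0 (exponential fit)
    slope, intercept = np.polyfit(scales, np.log(values), 1)
    print(f"raw value at scale 1: {values[0]:.3f}")
    print(f"zero-noise estimate : {np.exp(intercept):.3f}")
    print(f"ideal value         : {ideal}")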


Zapata’s Hirzel:

  • Quantum advantage in generative modeling: Recent work such as “Generation of High-Resolution Handwritten Digits with an Ion-Trap Quantum Computer,” “Enhancing Generative Models via Quantum Correlations,” and “Evaluating Generalization in Quantum and Classical Generative Models” has laid the groundwork, both experimentally and theoretically, for establishing the near-term potential for quantum computers to improve machine learning algorithms (a minimal sketch of such a generative model follows this list).

  • Approaches to using early fault-tolerant quantum computers: There is a growing body of recent research that focuses on developing algorithms and resource estimations suited for “early fault-tolerant quantum computers,” or quantum computers with limited quantum error correction capabilities. Early fault-tolerant quantum computations will need to balance power with error robustness. Recent work has laid the groundwork for designing quantum algorithms that let us tune this balance. This departs from approaches with too little error robustness (the design of algorithms for fully fault-tolerant quantum computers) and approaches with too much error robustness but not enough power (the development of costly error mitigation techniques).
  • Xanadu quantum supremacy experiment: Like other quantum supremacy demonstrations, this is a significant milestone in showing that we are now firmly in the era of engineered quantum systems that can manifest computational capabilities beyond what is possible with classical computers.
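
[Editor's note: To make the generative modeling bullet concrete, here is a minimal sketch of a quantum circuit Born machine: a parameterized circuit whose measurement probabilities are trained toward a target distribution. The two-qubit simulated ansatz, the correlated target distribution, and the finite-difference training loop are all illustrative choices of ours, not the models from the papers Hirzel cites.]

    import numpy as np

    def ry(theta):
        """Single-qubit RY rotation (real-valued, so amplitudes stay real)."""
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    # CNOT on two qubits, basis order |00>, |01>, |10>, |11>
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

    def circuit_probs(params):
        """Born-machine probabilities |<x|U(params)|00>|^2 for a tiny ansatz."""
        t1, t2, t3, t4 = params
        psi = np.zeros(4); psi[0] = 1.0
        psi = np.kron(ry(t1), ry(t2)) @ psi   # layer 1: local rotations
        psi = CNOT @ psi                      # entangling gate
        psi = np.kron(ry(t3), ry(t4)) @ psi   # layer 2: local rotations
        return psi ** 2                       # real amplitudes -> probabilities

    target = np.array([0.4, 0.1, 0.1, 0.4])  # toy correlated target distribution

    def kl(p, q):
        """KL divergence KL(p || q), the training loss."""
        return np.sum(p * np.log((p + 1e-12) / (q + 1e-12)))

    params = np.random.default_rng(0).uniform(0, np.pi, 4)
    lr, eps = 0.2, 1e-4
    for step in range(500):
        grad = np.zeros_like(params)
        for i in range(4):  # finite-difference gradient of the loss
            d = np.zeros(4); d[i] = eps
            grad[i] = (kl(target, circuit_probs(params + d)) -
                       kl(target, circuit_probs(params - d))) / (2 * eps)
        params -= lr * grad

    print("learned:", np.round(circuit_probs(params), 3))
    print("target :", target)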

2 Algorithm development. We hear a lot about Shor's and Grover's algorithms and VQE solvers. What are the most important missing algorithms/applications needed for quantum computing, and how close are we to developing them?

IBM’s Gambetta:

As in classical computing, where it is commonly argued that there are 13 motifs needed for high-performance programming, in my view it is not that we need to find many more algorithms. The missing step is how we can program these and minimize the effects of noise. Long term, error correction is the solution, but the most important question is whether we can implement the core quantum circuits with error mitigation and show a continuous path to error correction. I believe we have some ideas showing this path can be continuous. And if we can leverage progress on error mitigation techniques to advance quantum applications, improvements in the hardware will have a more direct impact on quantum technologies. From these core quantum circuits, I expect there to be many applications, similar to the case in HPC, with the most likely areas being simulating nature (high-energy physics, materials science, chemistry, drug design), data with structure (quantum machine learning, ranking, detecting signals), and non-exponential applications such as search and optimization.

Zapata’s Hirzel:

  • Algorithms that leverage the sampling capabilities of quantum devices: Applications include machine learning (generative and recurrent models), optimization, and cryptography. One salient example in this category is to use quantum devices as a source of statistical power to enhance optimization (see this recent paper), which represents a fundamentally new paradigm of using near-term quantum devices for deriving practical advantage.
  • Algorithms that leverage early fault-tolerant quantum device capabilities: A pertinent example is robust amplitude estimation (RAE), which is derived from a long line of work (see here, here, and here). Building on top of amplitude estimation, we can then make further improvements to hybrid quantum-classical schemes such as VQE, as well as algorithms for state property estimation (see here). These methods have applications in quantum chemistry, optimization, finance, and other areas. (The sketch after this list illustrates the sampling advantage that amplitude estimation targets.)
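
[Editor's note: The sketch below, promised above, illustrates why amplitude estimation is attractive. It simulates plain Bernoulli sampling of a probability p and prints the error shrinking as ~1/√N with shot count, next to the ~1/N Heisenberg-limited scaling that amplitude-estimation-style algorithms such as RAE target. Only the classical sampling is simulated; the 1/N column is the theoretical goal, not a simulation of RAE.]

    import numpy as np

    rng = np.random.default_rng(1)
    p_true = 0.3        # the amplitude (probability) to be estimated
    trials = 2000       # repetitions used to measure the empirical error

    print(f"{'N':>8} {'classical RMSE':>15} {'shot-noise limit':>17} {'AE target ~1/N':>15}")
    for n in [100, 400, 1600, 6400]:
        # Classical/NISQ-style sampling: estimate p from n Bernoulli shots
        est = rng.binomial(n, p_true, size=trials) / n
        rmse = np.sqrt(np.mean((est - p_true) ** 2))
        shot = np.sqrt(p_true * (1 - p_true) / n)   # ~1/sqrt(N) scaling
        print(f"{n:>8} {rmse:>15.4f} {shot:>17.4f} {1.0 / n:>15.4f}")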

3 Qubit technology. Which technology(s) is most likely to succeed as an underlying qubit technology and why? Which technology(s) is most unlikely to succeed?

IBM’s Gambetta:

For a technology to succeed, it needs a path to scale the QPU, improve the quality of the quantum circuits run on the QPU, and speed up the running of quantum circuits on the QPU. Currently, in my opinion, not all qubit technologies can do all three of these, and for some it will be physically impossible to improve one or more of these components. I prefer superconducting qubits, as they offer the best path forward when optimized against all three of these components.

Zapata’s Hirzel:

It’s still too early to say. We anticipate that the best qubit technology will depend on the problem: different problem types will work best with different qubit approaches, and that will continue to evolve for some time.

We have had great results on superconducting and ion trap devices, and we are excited to explore quantum photonics as well. The answer depends on what time scale one is considering and what is meant by success. Without error correction, doing an experiment using ion traps will probably give better results. On the other hand, ion traps may face limitations when the number of qubits scales up. A single trap can only hold so many ions, so different traps would need to somehow be entangled to reach larger numbers of qubits. There hasn't been much experimental work in this area, so it's not clear how well this setup will perform and how easy it will be to do QEC. The feedback between the CPU and different ion traps on the QPU will add a layer of complexity, mostly in terms of latency.

Photonic approaches face different opportunities and challenges. With their scalable but short-lived qubits, they have been aimed more at realizing fault-tolerant architectures. But one can imagine that some superconducting platforms might be able to have all the qubits on one “module”; in other words, one is not combining different chips into one mega chip, which would reduce latency problems in comparison with ion traps. For a neutral atom platform, scaling to larger numbers of qubits should be easier than for superconducting and ion trap platforms, because unwanted interactions between different qubits will be small, but for this same reason making gates is harder, since gates require interaction between the qubits. There are two platforms that could potentially be attractive over all the others, namely topological qubits (no need for QEC, but none has been created) and qubits constructed using cat states (this platform has inherent exponential suppression of bit-flip errors, so one need only correct for phase-flip errors, greatly reducing the overhead of QEC, but it is a new platform).
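
[Editor's note: The cat-state point can be checked numerically. If the hardware suppresses bit flips, a simple repetition code that corrects only phase flips already yields exponential suppression of the logical error rate. The Monte Carlo sketch below assumes pure phase-flip noise with probability p per qubit and majority-vote decoding; the numbers are illustrative and do not model any particular platform.]

    import numpy as np

    rng = np.random.default_rng(2)
    shots = 200_000

    def logical_error_rate(d, p):
        """Phase-flip repetition code of distance d under pure-Z noise.

        Each qubit suffers a phase flip with probability p; majority-vote
        decoding fails when more than d//2 of the d qubits flip."""
        flips = rng.random((shots, d)) < p
        return np.mean(flips.sum(axis=1) > d // 2)

    # Logical error rate drops exponentially with distance when p is small
    for p in [0.05, 0.02, 0.01]:
        rates = {d: logical_error_rate(d, p) for d in (3, 5, 7)}
        print(f"p={p:<5} " + "  ".join(f"d={d}: {r:.2e}" for d, r in rates.items()))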


4 Significant challenge. There’s no lack of challenges. What do you think are the top 3 challenges facing quantum computing and QIS today?

IBM’s Gambetta:

Maybe one could summarize the top challenges as: 1) scaling quantum systems up in size, while 2) making them less noisy and faster, and 3) identifying and developing error mitigation techniques to allow noise-free estimates from quantum circuits.

Zapata’s Hirzel:

  • Talent shortages. The quantum talent pool is relatively small and dwindling fast. According to our recent report on enterprise quantum computing adoption, 51% of enterprises that have started on the path to quantum adoption have already started identifying talent and building their teams. If you wait until the technology is mature, all the best talent will already be working for somebody else.
  • The complexity of integrating quantum with existing IT. This is a familiar challenge for any enterprise that adopted AI and machine learning. You can't just rip and replace; you need to integrate quantum computing with your existing tech stack. Any quantum speedup can easily be negated by an unwieldy quantum workflow. This includes moving data to compute and vice versa.
  • Time and urgency. Quantum computing is moving fast, and many enterprises have little appreciation for how much time it will take to upgrade their infrastructure and build valuable quantum applications. Those that wait until the hardware is mature will spend a long time catching up with their peers that started early.

5 Error correction. What's your sense of the qubit redundancy needed to implement quantum error correction? In other words, how many physical qubits will be needed to implement a logical qubit? Estimates have varied based on many factors (fidelity, speed, underlying qubit technology).

IBM’s Gambetta:

This is one of the most misunderstood questions about quantum computing among the public. Rather than just dive into QEC, I prefer to start with quantum circuits and ask what is needed to implement a quantum circuit (qubits, runtime, gate fidelity), because at this level the gates and operations, as well as the encoding, become important. The minimum number of qubits to encode a fully correctable logical qubit is 5. The surface code, a popular LDPC code, and planar codes in general have good thresholds, but their encoding rate (the ratio of encoded qubits to physical qubits) approaches zero as the distance of the code increases. Furthermore, these codes do not support all gates and need to use techniques such as magic state injection to allow universal quantum circuits. This means that these codes are good for demonstrations exploiting qubits with lower gate fidelities, but they are not practical for quantum computing in the long term, due to the very large numbers of physical qubits that you see in the literature. This makes a bigger difference to the physical qubit count than the underlying qubit technology does.

In my view, the path forward is to ask whether we can implement quantum circuits by using ideas such as error suppression, error mitigation, error mitigation + error correction, and, in the future, building systems with long-range coupling to allow higher-rate quantum LDPC codes. I believe this path will find value in the near term and show a continuous track to more value with improvements in the hardware, rather than waiting until we can build a 1M+ qubit system with magic state injection. I also believe science is about the undiscovered, and I'm very excited about the revolution happening in error correction with new quantum LDPC codes. We need to maximize the co-design between hardware and theory to minimize the size of the system we need to build to bring value to our users.
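
[Editor's note: The "minimum of 5" Gambetta mentions is the [[5,1,3]] code, whose four stabilizer generators are the cyclic shifts of XZZXI. The sketch below uses the standard binary-symplectic representation of Pauli operators (two Pauli strings commute exactly when their symplectic inner product is zero mod 2) to verify that the generators mutually commute; it is a pedagogical check, not a decoder.]

    import numpy as np
    from itertools import combinations

    def to_symplectic(pauli):
        """Map a Pauli string over {I, X, Y, Z} to binary (x|z) vectors."""
        x = np.array([c in "XY" for c in pauli], dtype=int)
        z = np.array([c in "ZY" for c in pauli], dtype=int)
        return x, z

    def commute(p, q):
        """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
        (x1, z1), (x2, z2) = to_symplectic(p), to_symplectic(q)
        return (x1 @ z2 + z1 @ x2) % 2 == 0

    base = "XZZXI"   # the [[5,1,3]] generators are cyclic shifts of this string
    gens = [base[-i:] + base[:-i] for i in range(4)]

    for g, h in combinations(gens, 2):
        assert commute(g, h), (g, h)
    print("4 commuting generators over 5 qubits ->",
          "5 physical qubits encode 5 - 4 = 1 logical qubit")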

Zapata’s Hirzel:

Under the current theory of quantum error correction, every order of magnitude improvement in the gate error (for example, a 1% error rate vs. a 10% error rate) requires a constant multiplier in the number of physical qubits.

A subtlety worth mentioning is that “qubit redundancy” is not the only relevant metric. For example, error correction cycle rate and architecture scalability (even if it costs high qubit redundancy) might be equally important. We were recently awarded a grant from DARPA through which we are building tools to carry out fault-tolerant resource estimates. Stay tuned!
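
[Editor's note: A crude version of such a fault-tolerant resource estimate fits in a few lines. The sketch below uses a textbook surface-code approximation, p_logical ≈ 0.1·(p/p_th)^((d+1)/2), with an assumed threshold p_th of 1% and roughly 2d^2 - 1 physical qubits per distance-d patch. All constants are literature rules of thumb of our choosing, not numbers from Zapata's DARPA project.]

    # Back-of-the-envelope surface-code overhead estimate (assumptions above)
    P_TH = 1e-2   # assumed surface-code threshold error rate

    def physical_qubits_per_logical(p_phys, p_logical_target):
        """Smallest odd code distance d meeting the target, and qubit count."""
        d = 3
        while 0.1 * (p_phys / P_TH) ** ((d + 1) / 2) > p_logical_target:
            d += 2                 # surface-code distances are odd
        return d, 2 * d * d - 1    # data qubits + syndrome qubits per patch

    for p in [1e-3, 1e-4]:
        d, n = physical_qubits_per_logical(p, 1e-12)
        print(f"gate error {p:.0e}: distance {d}, ~{n} physical qubits per logical")

Run as written, an order-of-magnitude improvement in the physical gate error (1e-3 to 1e-4) cuts the estimated per-logical-qubit overhead from roughly 881 to roughly 241 physical qubits, which is the flavor of trade-off Hirzel describes.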


6 Your work. Please describe in a paragraph or two your current top project(s) and priorities.

IBM’s Gambetta:

As we go forward into the future, there are two big challenges that we need to solve in the next couple of years. The first is to push scale by embracing the concept of modularity. Modularity across the entire system is critical, from the QPU to the cryo-components, the control electronics, and even the entire cryogenic environment. We are looking at this on multiple fronts, as detailed in our extended development roadmap. To allow for more efficient usage of the QPUs, we will introduce modularity in terms of classical control and classical links between multiple QPUs. This enables certain techniques for dealing with errors, known as error mitigation, and enables larger circuits to be explored with tight integration with classical compute through circuit knitting. The second strategy for modularity is to break the need for ever larger individual processor chips by having high-speed chip-to-chip quantum links. These links extend the quantum computing fabric through a multi-chip strategy. However, this is still not enough, as the rest of the components, like connectors and even cooling, could be a bottleneck, and so modularity at slightly longer distances is also required. For this we imagine meter-long microwave cryogenic links between QPUs that still provide a quantum communication link, albeit slower than the direct chip-to-chip ones. These strategies for scaling are reflected by Heron, Crossbill, and Flamingo in our roadmap.

The second [challenge] is HPC + quantum integration; this is not simply classical + quantum integration but true HPC and quantum integration into a workflow. Digging into this more, classical and quantum will work together in many ways. At the lowest level we need dynamic circuits, which bring concurrent classical calculations to quantum circuits, allowing simple calculations to happen within the coherence time (100 nanoseconds). At the next level we will need classical compute to perform runtime compilation, error suppression, error mitigation, and eventually error correction. This needs low latency and must be close to the QPU. Above this level I am very excited by circuit knitting, which is an idea that shows how we can extend the computational reach of quantum by adding classical computing. For example, by combining linear algebra techniques and quantum circuits we can effectively simulate a larger quantum circuit. To build this layer we need to develop ideas that, within milliseconds, can do a calculation on a classical computer (which could be a GPU), then run a quantum circuit and obtain the output.
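
[Editor's note: Circuit knitting rests on a linear-algebra identity that can be demonstrated exactly on a simulator. The toy below "cuts" a one-qubit wire between two circuit fragments U1 and U2 via the Pauli decomposition ρ = ½ Σ_P Tr(Pρ) P and recombines the fragments' results classically. Real schemes replace the full tomography with sampled measure-and-prepare circuits and pay a sampling overhead; the random unitaries here are stand-ins, not an IBM workload.]

    import numpy as np

    # Pauli basis for the identity rho = (1/2) * sum_P Tr(P rho) P on one qubit
    I = np.eye(2); X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
    paulis = [I, X, Y, Z]

    rng = np.random.default_rng(3)
    def random_unitary():
        """Haar-ish random 2x2 unitary via QR with phase correction."""
        a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        q, r = np.linalg.qr(a)
        return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

    U1, U2 = random_unitary(), random_unitary()  # fragment A, fragment B
    O = Z                                        # final observable

    # Direct evaluation of the uncut circuit: <O> = <0|U1† U2† O U2 U1|0>
    ket0 = np.array([1.0, 0.0], dtype=complex)
    psi = U2 @ (U1 @ ket0)
    direct = np.real(psi.conj() @ O @ psi)

    # "Cut" the wire between U1 and U2: fragment A supplies Tr(P rho_A) for
    # each Pauli P, fragment B supplies Tr(O U2 P U2†), and classical
    # postprocessing knits them: <O> = (1/2) * sum_P Tr(P rho_A) Tr(O U2 P U2†)
    rho_A = np.outer(U1 @ ket0, (U1 @ ket0).conj())
    knitted = 0.5 * sum(np.trace(P @ rho_A) * np.trace(O @ U2 @ P @ U2.conj().T)
                        for P in paulis)

    print("direct :", direct)
    print("knitted:", np.real(knitted))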

Zapata’s Hirzel:

We can’t share all our projects, but there are several that stand out. Our QML (Quantum Machine Learning) Suite is now available to our enterprise customers via our quantum workflow orchestration platform, Orquestra. The QML Suite is a toolbox of plug-and-play, user-defined workflows for building quantum machine learning applications. This new offering embodies our commitment to helping our customers generate near-term value from quantum computers. We’re particularly excited about generative modeling as a near-term application for QML, which can be used for optimization problems and to create synthetic data for training models of situations with small sample sizes, such as financial crashes and pandemics.

One of our most involved and public customer projects right now is our work with Andretti Autosport to upgrade their data analytics infrastructure to be quantum-ready. Not many people know this, but INDYCAR racing is a very analytics-heavy sport — each car generates around 1TB of data in a single race. We’re helping Andretti build advanced machine learning models to help determine the best time for a pit stop, ways to reduce fuel consumption, and other race strategy decisions. See our latest joint press release here for more details.

Lastly, cybersecurity has become a top priority for us. We have been approached by customers at the senior CIO/CISO level asking for our help in assessing their post-quantum vulnerabilities. People assume encryption-busting algorithms like Shor's algorithm are still decades away, but the threat could arrive much sooner. In fact, it is already here in the form of save now, decrypt later (SNDL) attacks. As the inventors of Variational Quantum Factoring (an algorithm that significantly reduces the qubits required to factor a 2048-bit RSA number), we have a unique perspective on the timeline to quantum vulnerability. Orquestra also gives us the ability to assess the threats across the ecosystem at scale and offer swappable PQC (post-quantum cryptography) infrastructure upgrades in all data workflows over multiple clouds.
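
[Editor's note: The reduction Shor's algorithm exploits can be seen in miniature with a classical stand-in. Factoring N reduces to finding the multiplicative order r of a random base a modulo N; the sketch below does the order-finding by brute force (the exponentially expensive step a quantum computer would replace with phase estimation) and then extracts a factor with a gcd. It illustrates only the reduction, and says nothing about how Zapata's Variational Quantum Factoring, a different, variational approach, works.]

    from math import gcd

    def order(a, n):
        """Smallest r > 0 with a^r = 1 (mod n), found by brute force here;
        this is the step a quantum computer would accelerate."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor(n, a):
        """Try to split n using base a via the order-finding reduction."""
        if gcd(a, n) != 1:
            return gcd(a, n)          # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            return None               # this base fails; try another a
        return gcd(pow(a, r // 2, n) - 1, n)

    print(factor(15, 7))   # -> 3
    print(factor(21, 2))   # -> 7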

(Interested in participating in HPCwire's periodic sampling of current thinking? Contact [email protected] for more details.)
