Part scorecard, part grand vision, IBM’s annual Quantum Summit, held last month, is a fascinating snapshot of IBM’s progress, its evolving technology roadmap, and issues facing the quantum landscape broadly. Thankfully, IBM squeezes most of the highlights into a roughly hour-long recorded session, led by Jay Gambetta, IBM fellow and vice president, IBM Quantum. This year’s event cited 12 “breakthroughs and announcements,” with the introduction of its 433-qubit Osprey processor and significant error mitigation/correction and hybrid classical-quantum capabilities as centerpieces.
Not all 12 announcements are new; many were hinted at or rolled out over the course of the year. But few companies have the resources to tackle virtually every aspect of quantum computing; IBM has done so and has consistently met the milestones on its quantum roadmap. While IBM’s efforts focus on semiconductor-based superconducting qubits, many of the problems addressed cut across qubit modalities and are at least directionally informative for the quantum computing landscape writ large.
Here’s a look at the roadmap and recap of announcements (click to enlarge figures).
The big news, of course, was formal introduction of Osprey, making good on an IBM promise last year. In his opening keynote, Dario Gil, IBM senior vice president and director of research, said, “Osprey [is] by far the largest processor ever created in the world of superconducting qubits. Last year when we announced Eagle with 127 qubits – it was the first time anybody had built across the 100-qubit barrier. With Osprey, it brings all of the technologies we’ve been building over the years including 3D integration, multi-layer wiring, being able to separate the qubit control plane from the connectivity and the readout planes. It’s a tour de force in terms of materials, devices, packaging, and on the quantum processor itself.” Osprey will be available to users in Q1.
Next up are two processors planned for 2023: Condor, at 1,121 qubits, and Heron, at a more modest 133 qubits but with many of the features necessary for connecting multiple QPUs into a larger system. Taken together, says IBM, advances in these two QPUs help set the stage for larger hybrid architectures. Longer term, as shown on the roadmap above, IBM plans to introduce Crossbill (408 qubits) and Flamingo (1,386+ qubits) in 2024, leading to Kookaburra (4,158+ qubits) in 2026.
IBM’s vision for quantum computing – like that for most of the quantum industry – has strongly pivoted towards a hybrid classical-quantum vision. IBM calls its version Quantum-centric Supercomputing, a concept it introduced last May.
Gambetta talked about the future and said “2023 marks the point where everything changes.”
“Today, we build single processors, but we realize the path ahead is multiple processors. Today, we build bespoke infrastructure solutions, which aren’t fast enough, aren’t scalable, and cost too much. In the future, we need scalable controls. Today, we’re employing classical compute to enhance quantum hardware. But next, we will develop what we’re calling middleware for quantum that will enhance it further. This next wave is what we are calling quantum-centric supercomputing. To me, a quantum-centric supercomputer is a modular computing architecture, which will enable scaling, it will use communication to increase the computational capacity, and it will use a hybrid-cloud middleware to seamlessly integrate quantum and classical workflows,” said Gambetta.
There’s a lot to unpack here. Besides the Gambetta video, IBM has posted blogs on various specifics; most are available by poking around on the IBM Summit highlights page. The Gambetta-led session, with several colleagues, presented overviews on specific topics. Presented here are portions of the QPU hardware strategy, discussion of error handling techniques, discussion of plans to implement quantum serverless technology relying heavily on a “multi-cloud” approach, and a snapshot of its 100×100 challenge to users.
Let’s start with the Osprey chip, which is encapsulated in a printed circuit board to accommodate new signal wiring to control the 433 qubits, according to Jerry Chow, IBM Fellow and Director of Quantum Infrastructure.
“All 433 qubits are arranged in our heavy hex lattice topology. We introduced this concept and technology of multilayer wiring last year with Eagle and that performs the critical function of providing flexibility for the signal wiring as well as optimal device layout. With [Osprey] we’ve had to make further advances there, as well as adding integrated filtering to reduce noise and improve stability. It’s alive and being tested as we speak. Much like many of our first generations of large birds, we find these coherence times at the moment between 70 and 80 microseconds, median T1 time. At 433 qubits, there’s a lot to measure. So please stay tuned for further calibration updates.”
Chow said IBM quickly migrates learning from one generation to the next. “We already have a new revision of Osprey, and this is just coming off the experimental pipeline where we’re seeing a significant coherence time improvement,” said Chow. Last year’s introduction of tunable coupling architecture has helped push the error rates on Falcon Revision 10 devices into the 10⁻³ range – typically referred to as three-nines territory, i.e. 99.9 percent gate fidelity – noted Chow.
“With Falcon r10, we were able to actually double our quantum volume not once, but twice this past year, first at 256, then again at 512 with our IBM product – so our innovations and tunable coupling architecture have allowed us to drive a 4x increase in quality.”
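As the footnote at the end of this article explains, QV is 2^d for the largest “square” circuit (width equal to depth, d) that a system runs successfully, so the jump from 256 to 512 corresponds to passing 9-qubit-by-9-layer test circuits rather than 8-by-8 ones. A minimal sketch of that bookkeeping (the pass/fail data here is hypothetical, not measured):

```python
# Quantum Volume bookkeeping sketch. QV = 2**d for the largest square
# circuit (width == depth == d) the system runs successfully. The
# pass/fail inputs below are hypothetical, not real benchmark data.

def quantum_volume(passed):
    """passed: dict mapping square-circuit size d -> bool
    (whether the heavy-output test passed at that size)."""
    ok = [d for d, result in passed.items() if result]
    return 2 ** max(ok) if ok else 1

# Passing every square circuit up to d=9 corresponds to QV 512;
# stopping at d=8 would correspond to QV 256.
print(quantum_volume({d: True for d in range(1, 10)}))  # 512
```

The doubling Chow describes (256 to 512) is thus one extra qubit and one extra layer of depth in the passing test circuit, which is why each QV doubling gets progressively harder.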
Signal delivery technology is also advancing.
“This photo is striking (left), but it’s really a relic of the past in some sense. A lot of the wiring that you see within it is built by hand. It’s handcrafted and at 100 qubits with a few hundred cables, I can convince our team to actually do that busy work. But when we push this to 400 or 1000 and need to hand tighten all the different bolts, this becomes impractical. It’s simply not cost-effective, and not nearly dense enough for the solutions that we need in the future. It has to change. So we’re really excited to show the next evolution of high density control signal delivery with cryogenic flex wiring (photo below). It’s going to make it easier to wire hundreds to 1000s of lines. It’s critical for the reliability of our deployed systems. Today, it’s already 70 percent more dense and five times cheaper, and we have plans to make this even better,” said Chow.
Broadly, IBM characterizes its QPUs by three criteria: scale (qubit count), quality (the QV metric[i]), and speed (CLOPS, circuit layer operations per second, introduced last year).
“Let’s talk about that third element, speed. The capacity to actually run a large number of circuits is critical for targeting quantum advantage as well as applications down the road. And [with] the overhead of error mitigation added on top of that and eventually for error correction, speed is absolutely important,” said Chow. IBM set a goal this year to go from 1.4K CLOPS to 10,000 CLOPS and Chow cited improvements to the runtime compiler, the quantum engine, and hardware digital systems. “In June, we introduced code pipelining, followed by improvements in our control systems and quantum engine. Not only have we hit our mark of 10,000 CLOPS, we’ve surpassed it at 15,000 CLOPS, a 10x improvement over our fastest integrated systems last year,” he said.
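To make the CLOPS figures concrete: IBM’s published benchmark definition runs a set of parameterized QV-style circuit templates, each with several parameter updates and shots, at the system’s QV depth, and divides the total number of circuit layers executed by elapsed time. A rough sketch, with all numbers illustrative rather than measured:

```python
# CLOPS sketch following IBM's published benchmark definition:
# M circuit templates x K parameter updates x S shots x D layers,
# divided by elapsed wall-clock time. Numbers are illustrative only.

def clops(M, K, S, D, elapsed_seconds):
    """Circuit layer operations per second."""
    return (M * K * S * D) / elapsed_seconds

# With M=100 templates, K=10 updates, S=100 shots, and D=9 layers
# (the depth matching QV 512), hitting 15,000 CLOPS means finishing
# the whole workload in one minute.
print(clops(M=100, K=10, S=100, D=9, elapsed_seconds=60))  # 15000.0
```

The point of the metric is that raw gate speed isn’t enough: compiler latency, control-system round trips, and parameter-update overhead all land in the denominator, which is why Chow credits the runtime compiler and quantum engine improvements.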
Dealing with errors is a quantum community-wide problem. The hope is that intermediate measures will make NISQ (noisy intermediate-scale quantum) systems robust enough to soon deliver some measure of quantum advantage. Blake Johnson, IBM quantum platform lead, reviewed IBM efforts to introduce various error mitigation strategies.
“In practice, we have to contend with the presence of errors. Fortunately, we have powerful tools to deal with these errors,” said Johnson. “One category of tool is error suppression [which] reduces errors in circuits by modifying the underlying circuits without changing their meaning. For instance, we can inject additional gates to echo certain error sources. Another category is error mitigation, which can deliver accurate expectation values by executing collections or ensembles of related circuits and combining their outputs in post processing.”
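The “inject additional gates” idea can be seen at the matrix level: two back-to-back X gates compose to the identity, so an inserted echo pair leaves the circuit’s meaning unchanged while refocusing certain slow noise sources (the intuition behind dynamical decoupling). The sketch below checks only the identity algebra, not the noise physics:

```python
# Error suppression intuition: an inserted X-X echo pair composes to
# the identity, so the circuit's meaning is unchanged even though the
# extra pulses can refocus slow noise. Pure-Python 2x2 matrix multiply;
# no quantum library assumed.

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]   # Pauli-X (bit flip) gate
I = [[1, 0], [0, 1]]   # identity

print(matmul2(X, X) == I)  # True: the echo pair acts as identity
```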
“Error mitigation is also powerful. In fact, earlier this year, we showed how error mitigation can deliver unbiased estimates from noisy quantum computers, but error mitigation also comes at a cost and that cost is exponential (qubit overhead),” said Johnson. “The question becomes, how are we going to make those tools easy to use and accessible to everyone?”
IBM’s approach has been to build Qiskit runtime primitives, launched earlier this year. “They elevate the fundamental abstraction for interfacing with quantum hardware to directly expose the kinds of queries that are relevant to quantum applications. These more abstract interfaces allow us to expose error suppression and error mitigation through simple-to-configure options. When we do it right, it can have a major impact,” said Johnson.
“Today we’re launching a beta support for error suppression in the Qiskit runtime primitives through a simple optimization level in the API. We can go further though, and will introduce a new option that we’re calling Resilience Level. This is a simple-to-use control that allows the user to adjust the cost-accuracy trade off of a primitive query.”
Here’s a snapshot:
- Resilience Level One. “We’re going to turn on options for methods that specifically address errors in readout operations. We’re going to adapt the choice of method to the specific context of sampling or estimating. These methods come at fairly minimal overhead. Level one is the default resilience level,” said Johnson.
- Resilience Level Two. “This level will enable zero noise extrapolation [and] can reduce error in an estimator, but it doesn’t come with a guarantee that the answer is unbiased.”
- Resilience Level Three. “We turn on our most advanced error mitigation strategy, which is probabilistic error cancellation. This method incurs a substantial overhead both in terms of noise model learning and circuit sampling, but also comes with the most robust guarantees about the quality of the result. For developers in the audience, here’s what it looks like in code to manipulate this resilience level and the new options interface,” he said.
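The zero-noise extrapolation behind Resilience Level Two can be sketched without any quantum hardware: run the same circuit at deliberately amplified noise scales, then extrapolate the measured expectation values back to the zero-noise limit. The decay model below is a hypothetical stand-in for a real noisy backend:

```python
# Zero-noise extrapolation (ZNE) sketch. noisy_expectation() is a toy
# stand-in for estimating an observable on hardware with noise amplified
# by `scale` (e.g. via gate folding); the true noiseless value is 1.0.

def noisy_expectation(scale, ideal=1.0, decay=0.9):
    # toy model: signal decays exponentially with the noise scale
    return ideal * (decay ** scale)

def linear_zne(s1, s2):
    """Two-point Richardson extrapolation of the expectation to scale 0."""
    v1, v2 = noisy_expectation(s1), noisy_expectation(s2)
    return (s2 * v1 - s1 * v2) / (s2 - s1)

raw = noisy_expectation(1)     # 0.9 -- biased by noise
mitigated = linear_zne(1, 3)   # ~0.986 -- closer to the ideal 1.0
print(raw, mitigated)
```

This also makes Johnson’s caveat visible: the extrapolated value is usually closer to the ideal, but nothing in the fit guarantees it is unbiased, and the extra circuit executions at amplified noise are exactly the sampling overhead he mentions.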
Johnson also reviewed progress to incorporate dynamic circuit capability into its quantum stack. “Dynamic circuits marry real time classical computation with quantum operations, allowing feedback and feed forward of quantum measurements to steer the course of the computation,” said Johnson.
Broadly speaking, a quantum circuit is a sequence of quantum operations — including gates, measurements, and resets — acting on qubits. In static circuits, none of those operations depend on data produced at run time. For example, static circuits might only contain measurement operations at the end of the circuit. Dynamic circuits, on the other hand, incorporate classical processing within the coherence time of the qubits. This means that dynamic circuits can make use of mid-circuit measurements and perform feed-forward operations, using the values produced by measurements to determine what gates to apply next.
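The canonical dynamic-circuit primitive is conditional reset: measure a qubit mid-circuit, then feed the classical outcome forward to decide whether to apply an X gate, leaving the qubit in |0⟩ either way. A classical toy of that control flow (no quantum library assumed; the “qubit” here is just a bit):

```python
import random

# Classical toy of feed-forward conditional reset: measure, then apply
# an X flip only if the measurement returned 1, leaving the "qubit" in
# state 0. A real dynamic circuit performs this branch within the
# qubits' coherence time.

def conditional_reset(qubit):
    outcome = qubit      # mid-circuit "measurement"
    if outcome == 1:     # feed-forward: classical result steers the gates
        qubit ^= 1       # apply X
    return qubit

states = [random.randint(0, 1) for _ in range(1000)]
print(all(conditional_reset(q) == 0 for q in states))  # True
```

The hard part IBM describes is not this branch itself but doing it fast enough: the classical decision must complete within the qubits’ coherence time, which is why the control hardware and compilation toolchain had to be rebuilt.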
IBM posted a blog on the topic coinciding with the event. Here’s an excerpt:
“[W]e have been exploring dynamic circuits for several years. Recently, we compared the performance of static and dynamic circuits for phase estimation, which is a crucial primitive for many algorithms. At the time, this demonstration required working side-by-side with our hardware engineers to enable the required feed-forward operations.
“To make this capability more broadly accessible has required enormous effort. We had to rebuild a substantial portion of our underlying infrastructure, such as re-architecting our control hardware such that we could move data around in real time. We needed to update OpenQASM so it could describe circuits with feed-forward and feed-backward control flow. And we had to re-engineer our compilation toolchain so that we could convert these circuits into something our systems could actually run.
“We are rolling out dynamic circuits on 18 IBM Quantum systems — those QPUs engineered for fast readout. Users can create these circuits using Qiskit or directly in OpenQASM3. They can execute either form through the backend.run() interface. We expect that early next year, we’ll be able to execute payloads with dynamic circuits in the Sampler and Estimator primitives in the Qiskit Runtime, too.”
Continuing the thread of blending classical and quantum computing, Katie Pizzolato, director of quantum strategy and application research, announced the alpha release of IBM’s circuit knitting toolbox.
“We’re finding there’s many ways that we can weave quantum and classical together to extend what we can achieve. We call this our circuit knitting toolbox. First, we can embed quantum simulations inside larger classical problems [in which] we use quantum to treat pieces of the problem and use classical to approximate the rest,” said Pizzolato.
“Also, with things like entanglement forging, we can break the problem down into smaller circuits and run the smaller quantum circuits on the quantum hardware and then reconstruct them classically, which allows us to double the size of what we can do otherwise. With circuit cutting, we really cut the less entangled connections into subsystems, [and] we compute the global energy by classically coupling the results from each of the QPUs,” she said.
Pizzolato noted these tools have a common approach – they decompose the problem, run a lot of Qiskit runtimes in parallel, and reconstruct the outcomes into a single answer. However, said Pizzolato, “Many users don’t want to worry about the underlying infrastructure, [they] just want to run their code.” To accommodate that desire, she said IBM was releasing an alpha version of quantum serverless with these capabilities built in. IBM’s quantum serverless concept, introduced a year ago, is a programming model that IBM hopes will enable users to easily orchestrate hybrid workloads via the cloud for execution on varying combinations of classical and quantum resources. It’s still fairly nascent.
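The common pattern Pizzolato describes – decompose the problem, run many jobs in parallel, classically reconstruct one answer – looks roughly like this as control flow. Here `run_fragment` is a hypothetical stand-in for submitting one subcircuit to a Qiskit Runtime primitive, not IBM’s actual API:

```python
from concurrent.futures import ThreadPoolExecutor

# Circuit-knitting control-flow sketch: decompose, run fragments in
# parallel, reconstruct. run_fragment() is a hypothetical stand-in for
# one Qiskit Runtime job returning an expectation value.

def run_fragment(fragment):
    return sum(fragment)            # pretend "expectation value"

def knit(fragments, reconstruct=sum):
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(run_fragment, fragments))
    return reconstruct(partials)    # classical recombination step

print(knit([[1, 2], [3, 4], [5]]))  # 15
```

The serverless pitch is essentially that users write only the decompose and reconstruct pieces, while the platform owns the parallel dispatch in the middle.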
A good deal more was touched upon at the IBM Summit including, for example: work on a cryo-CMOS controller chip for quantum circuits; cryogenic dense flexible interconnect cable; its new modular IBM Quantum System Two designed to hold quantum and classical resources; work on a new, smaller gen3 classical control system; a Quantum Safe service to assist organizations seeking to implement post-quantum cryptography; and more.
The last item on its list was something IBM is calling the 100×100 challenge; it’s intended to provide a practical platform for users to explore new, more complex applications and is part of what Gambetta called IBM’s no-nonsense path to quantum advantage. No specific date for reaching quantum advantage was announced.
Pizzolato said, “We’re issuing a challenge to all of you. We’re calling it the 100×100 challenge. And we’re pledging that in 2024 [we] will offer our partners and clients a system that will generate reliable outcomes running 100 qubits and a gate depth of 100. We’ve said we’ve had a two-fold path to quantum computing [advantage]; we still have to make better hardware, software and infrastructure, and our users have to devise use cases. We see plenty of avenues to explore use cases using these reliable results, like ground states, thermodynamic properties, quantum kernels and more. But we need everyone’s help here and in our network partnerships to really think about what circuits they want to run on a processor like this.”
Gambetta added in his closing comments, “So creating this 100×100 device will really allow us to set up a path to understand how can we get quantum advantage in these systems and lay a future going forward.”
Stay tuned.
Link to the Gambetta et al. overview video.
[i] Quantum Volume (QV) is a single-number metric that can be measured using a concrete protocol on near-term quantum computers of modest size. The QV method quantifies the largest random circuit of equal width and depth that the computer successfully implements. Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit rewriting toolchains are expected to have higher quantum volumes.