IBM Quantum Summit: Osprey Flies; Error Handling Progress; Quantum-centric Supercomputing

By John Russell

December 1, 2022

Part scorecard, part grand vision, IBM’s annual Quantum Summit, held last month, is a fascinating snapshot of IBM’s progress, its evolving technology roadmap, and issues facing the quantum landscape broadly. Thankfully, IBM squeezes most of the highlights into a roughly hour-long recorded session led by Jay Gambetta, IBM fellow and vice president, IBM Quantum. This year’s event cited 12 “breakthroughs and announcements,” with the introduction of its 433-qubit Osprey processor and significant error mitigation/correction and hybrid classical-quantum capabilities as centerpieces.

Not all 12 announcements are new; many were hinted at or rolled out over the course of the year. But few companies have the resources to tackle virtually every aspect of quantum computing; IBM has done so and has consistently met the milestones on its quantum roadmap. While IBM’s efforts focus on semiconductor-based superconducting qubits, many of the problems addressed cut across qubit modalities and are at least directionally informative for the quantum computing landscape writ large.

Here’s a look at the roadmap and a recap of the announcements.

The big news, of course, was the formal introduction of Osprey, making good on a promise IBM made last year. In his opening keynote, Dario Gil, IBM senior vice president and director of research, said, “Osprey [is] by far the largest processor ever created in the world of superconducting qubits. Last year when we announced Eagle with 127 qubits – it was the first time anybody had built across the 100-qubit barrier. With Osprey, it brings all of the technologies we’ve been building over the years including 3D integration, multi-layer wiring, being able to separate the qubit control plane from the connectivity and the readout planes. It’s a tour de force in terms of materials, devices, packaging, and on the quantum processor itself.” Osprey will be available to users in Q1.

Next up are two processors planned for 2023: Condor, at 1,121 qubits, and Heron, at a more modest 133 qubits but with many of the features necessary for connecting multiple QPUs into a larger system. Taken together, says IBM, advances in these two QPUs help set the stage for larger hybrid architectures. Longer term, as shown on the roadmap above, IBM plans to introduce Crossbill (408 qubits) and Flamingo (1,386+ qubits) in 2024, leading to Kookaburra (4,158+ qubits) in 2026.

IBM’s vision for quantum computing – like that for most of the quantum industry – has strongly pivoted towards a hybrid classical-quantum vision. IBM calls its version Quantum-centric Supercomputing, a concept it introduced last May.

Gambetta talked about the future and said “2023 marks the point where everything changes.”

Dario Gil, Jay Gambetta and Jerry Chow holding the new 433 qubit ‘IBM Osprey’ processor. Credit: IBM

“Today, we build single processors, but we realize the path ahead is multiple processors. Today, we build bespoke infrastructure solutions, which aren’t fast enough, aren’t scalable, and cost too much. In the future, we need scalable controls. Today, we’re employing classical compute to enhance quantum hardware. But next, we will develop what we’re calling middleware for quantum that will enhance it further. This next wave is what we are calling quantum-centric supercomputing. To me, a quantum-centric supercomputer is a modular computing architecture, which will enable scaling, it will use communication to increase the computational capacity, and it will use a hybrid-cloud middleware to seamlessly integrate quantum and classical workflows,” said Gambetta.

There’s a lot to unpack here. Besides the Gambetta video, IBM has posted blogs on various specifics; most are available by poking around on the IBM Summit highlights page. The Gambetta-led session, with several colleagues, presented overviews on specific topics. Presented here are portions of the QPU hardware strategy, discussion of error handling techniques, discussion of plans to implement quantum serverless technology relying heavily on a “multi-cloud” approach, and a snapshot of its 100×100 challenge to users.

Let’s start with the Osprey chip, which is encapsulated in a printed circuit board to accommodate new signal wiring to control the 433 qubits, according to Jerry Chow, IBM Fellow and Director of Quantum Infrastructure.

“All 433 qubits are arranged in our heavy-hex lattice topology. We introduced this concept and technology of multilayer wiring last year with Eagle, and that performs the critical function of providing flexibility for the signal wiring as well as optimal device layout. With Osprey, we’ve had to make further advances there, as well as adding integrated filtering to reduce noise and improve stability. It’s alive and being tested as we speak. Much like many of our first generations of large birds, we find coherence times at the moment between 70 and 80 microseconds median T1 time. At 433 qubits, there’s a lot to measure. So please stay tuned for further calibration updates.”

Chow said IBM quickly migrates learning from one generation to the next. “We already have a new revision of Osprey, and this is just coming off the experimental pipeline where we’re seeing a significant coherence time improvement,” said Chow. Last year’s introduction of a tunable coupling architecture has helped push the error rates on Falcon Revision 10 devices into the 10⁻³ range – typically referred to as three-9s territory – noted Chow.

“With Falcon r10, we were able to actually double our quantum volume not once, but twice this past year, first at 256, then again at 512 with our IBM product – so our innovations and tunable coupling architecture have allowed us to drive a 4x increase in quality.”
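Those Quantum Volume numbers translate directly into circuit sizes: per IBM’s definition (see footnote), QV is 2^n, where n is the width and depth of the largest square random circuit the machine reliably runs. A quick plain-Python sketch of that conversion (an illustration, not an IBM tool):

```python
import math

def qv_to_circuit_size(qv):
    """Width (= depth) of the largest square random circuit implied
    by a given Quantum Volume, since QV is defined as 2**n."""
    n = int(round(math.log2(qv)))
    assert 2 ** n == qv, "QV values are powers of two"
    return n

print(qv_to_circuit_size(256))  # → 8
print(qv_to_circuit_size(512))  # → 9
```

So doubling QV twice, from 128 to 512, means passing square circuits two qubits wider and two layers deeper.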

Signal delivery technology is also advancing.

Quantum hardware. Credit: Graham Carlow for IBM

“This photo is striking (left), but it’s really a relic of the past in some sense. A lot of the wiring that you see within it is built by hand. It’s handcrafted, and at 100 qubits with a few hundred cables, I can convince our team to actually do that busy work. But when we push this to 400 or 1,000 and need to hand-tighten all the different bolts, this becomes impractical. It’s simply not cost-effective, and not nearly dense enough for the solutions that we need in the future. It has to change. So we’re really excited to show the next evolution of high-density control signal delivery with cryogenic flex wiring (photo below). It’s going to make it easier to wire hundreds to thousands of lines. It’s critical for the reliability of our deployed systems. Today, it’s already 70 percent more dense and five times cheaper, and we have plans to make this even better,” said Chow.

IBM’s dense, cryogenic flex cable

Broadly, IBM characterizes its QPUs by three criteria: scale (qubit count), quality (the QV metric[i]), and speed (CLOPS, circuit layers per second, introduced last year).

“Let’s talk about that third element, speed. The capacity to actually run a large number of circuits is critical for targeting quantum advantage as well as applications down the road. And [with] the overhead of error mitigation added on top of that, and eventually for error correction, speed is absolutely important,” said Chow. IBM set a goal this year to go from 1.4K CLOPS to 10,000 CLOPS, and Chow cited improvements to the runtime compiler, the quantum engine, and hardware digital systems. “In June, we introduced code pipelining, followed by improvements in our control systems and quantum engine. Not only have we hit our mark of 10,000 CLOPS, we’ve surpassed it at 15,000 CLOPS, a 10x improvement over our fastest integrated systems last year,” he said.

Dealing with error is a quantum community-wide problem. The hope is that intermediate measures will make NISQ (noisy intermediate-scale quantum) systems robust enough to soon deliver some measure of quantum advantage. Blake Johnson, IBM quantum platform lead, reviewed IBM’s efforts to introduce various error mitigation strategies.

“In practice, we have to contend with the presence of errors. Fortunately, we have powerful tools to deal with these errors,” said Johnson. “One category of tool is error suppression [which] reduces errors in circuits by modifying the underlying circuits without changing their meaning. For instance, we can inject additional gates to echo certain error sources. Another category is error mitigation, which can deliver accurate expectation values by executing collections or ensembles of related circuits and combining their outputs in post processing.”
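The “inject additional gates to echo certain error sources” idea Johnson describes is the principle behind dynamical decoupling, and its algebra fits in a few lines. A toy sketch in plain 2x2 matrix arithmetic (an illustration of the principle only, not Qiskit’s actual suppression pass):

```python
def matmul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]   # identity
X = [[0, 1], [1, 0]]   # bit-flip (echo) pulse
Z = [[1, 0], [0, -1]]  # dephasing error

# An X-X echo pair composes to the identity, so inserting it into an
# idle window does not change the circuit's meaning...
assert matmul(X, X) == I

# ...but it flips the sign of a Z error occurring between the pulses,
# so slow phase drift before and after the echo cancels out.
assert matmul(X, matmul(Z, X)) == [[-1, 0], [0, 1]]  # = -Z
```

That is the sense in which suppression modifies circuits “without changing their meaning”: the inserted gates are a logical identity, yet they refocus certain physical errors.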

“Error mitigation is also powerful. In fact, earlier this year, we showed how error mitigation can deliver unbiased estimates from noisy quantum computers, but error mitigation also comes at a cost and that cost is exponential (qubit overhead),” said Johnson. “The question becomes, how are we going to make those tools easy to use and accessible to everyone?”

IBM’s approach has been to build Qiskit runtime primitives, launched earlier this year. “They elevate the fundamental abstraction for interfacing with quantum hardware to directly expose the kinds of queries that are relevant to quantum applications. These more abstract interfaces allow us to expose error suppression and error mitigation through simple-to-configure options. When we do it right, it can have a major impact,” said Johnson.

“Today we’re launching a beta support for error suppression in the Qiskit runtime primitives through a simple optimization level in the API. We can go further though, and will introduce a new option that we’re calling Resilience Level. This is a simple-to-use control that allows the user to adjust the cost-accuracy trade off of a primitive query.”

Here’s a snapshot:

  • Resilience Level One. “We’re going to turn on options for methods that specifically address errors in readout operations. We’re going to adapt the choice of method to the specific context of sampling or estimating. These methods come at fairly minimal overhead. Level one is the default resilience level,” said Johnson.
  • Resilience Level Two. “This level will enable zero noise extrapolation [and] can reduce error in an estimator, but it doesn’t come with a guarantee that the answer is unbiased.”
  • Resilience Level Three. “We turn on our most advanced error mitigation strategy, which is probabilistic error cancellation. This method incurs a substantial overhead, both in terms of noise-model learning and circuit sampling, but also comes with the most robust guarantees about the quality of the result. For developers in the audience, here’s what it looks like in code to manipulate this resilience level and the new options interface,” he said.
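The cost-accuracy trade-off these levels expose can be made concrete with zero-noise extrapolation, the level-two technique: deliberately amplify the noise by known scale factors, measure the expectation value at each, and extrapolate back to the zero-noise limit. A minimal linear-fit sketch in plain Python (the scale factors and the linear noise model are invented for illustration; this is not Qiskit’s implementation):

```python
def zne_extrapolate(scale_factors, expectations):
    """Least-squares fit of E(s) = a + b*s through (scale, expectation)
    pairs; return the zero-noise intercept a."""
    n = len(scale_factors)
    mean_s = sum(scale_factors) / n
    mean_e = sum(expectations) / n
    b = (sum((s - mean_s) * (e - mean_e)
             for s, e in zip(scale_factors, expectations))
         / sum((s - mean_s) ** 2 for s in scale_factors))
    return mean_e - b * mean_s

# Synthetic data: the ideal value is 1.0, and noise pulls the measured
# expectation down linearly as the noise is amplified.
scales = [1.0, 2.0, 3.0]
noisy = [1.0 - 0.1 * s for s in scales]   # 0.9, 0.8, 0.7
print(zne_extrapolate(scales, noisy))      # → 1.0 (approximately)
```

Each extra scale factor means re-running the whole circuit ensemble, which is where the extra cost, but no unbiasedness guarantee, comes from.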

Johnson also reviewed progress to incorporate dynamic circuit capability into its quantum stack. “Dynamic circuits marry real time classical computation with quantum operations, allowing feedback and feed forward of quantum measurements to steer the course of the computation,” said Johnson.

Broadly speaking, a quantum circuit is a sequence of quantum operations — including gates, measurements, and resets — acting on qubits. In static circuits, none of those operations depend on data produced at run time. For example, static circuits might only contain measurement operations at the end of the circuit. Dynamic circuits, on the other hand, incorporate classical processing within the coherence time of the qubits. This means that dynamic circuits can make use of mid-circuit measurements and perform feed-forward operations, using the values produced by measurements to determine what gates to apply next.
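The feed-forward idea can be sketched with a toy single-qubit statevector simulator: measure mid-circuit, then choose the next gate based on the outcome. The example below implements a deterministic reset, a classic dynamic-circuit primitive; it is an illustrative plain-Python sketch under simplified assumptions, not IBM’s control stack:

```python
import random

def h(state):
    """Apply a Hadamard to a single-qubit statevector (amp0, amp1)."""
    a, b = state
    s = 2 ** -0.5
    return (s * (a + b), s * (a - b))

def x(state):
    """Apply an X (bit-flip) gate."""
    a, b = state
    return (b, a)

def measure(state, rng):
    """Projective Z measurement; returns (outcome, collapsed state)."""
    a, b = state
    p1 = abs(b) ** 2
    if rng.random() < p1:
        return 1, (0.0, b / abs(b))
    return 0, (a / abs(a), 0.0)

def dynamic_reset(rng):
    """Deterministically return the qubit to |0> using feed-forward."""
    state = h((1.0, 0.0))                 # put the qubit in superposition
    outcome, state = measure(state, rng)  # mid-circuit measurement
    if outcome == 1:                      # feed-forward: flip only on a 1
        state = x(state)
    return state

rng = random.Random(0)
for _ in range(20):
    a, b = dynamic_reset(rng)
    assert abs(a) > 0.999 and abs(b) < 1e-9  # always ends in |0>
```

A static circuit could not do this: the X must be applied or withheld based on a value that only exists at run time, within the qubit’s coherence window.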

IBM posted a blog on the topic coinciding with the event. Here’s an excerpt:

“[W]e have been exploring dynamic circuits for several years. Recently, we compared the performance of static and dynamic circuits for phase estimation, which is a crucial primitive for many algorithms. At the time, this demonstration required working side-by-side with our hardware engineers to enable the required feed-forward operations.

“To make this capability more broadly accessible has required enormous effort. We had to rebuild a substantial portion of our underlying infrastructure, such as re-architecting our control hardware such that we could move data around in real time. We needed to update OpenQASM so it could describe circuits with feed-forward and feed-backward control flow. And we had to re-engineer our compilation toolchain so that we could convert these circuits into something our systems could actually run.

“We are rolling out dynamic circuits on 18 IBM Quantum systems — those QPUs engineered for fast readout. Users can create these circuits using Qiskit or directly in OpenQASM3. They can execute either form through the interface. We expect that early next year, we’ll be able to execute payloads with dynamic circuits in the Sampler and Estimator primitives in the Qiskit Runtime, too.”

Continuing the thread of blending classical and quantum computing, Katie Pizzolato, director of quantum strategy and application research, announced the alpha release of IBM’s circuit knitting toolbox.

“We’re finding there’s many ways that we can weave quantum and classical together to extend what we can achieve. We call this our circuit knitting toolbox. First, we can embed quantum simulations inside larger classical problems [in which] we use quantum to treat pieces of the problem and use classical to approximate the rest,” said Pizzolato.

“Also, with things like entanglement forging, we can break the problem down into smaller circuits, run the smaller quantum circuits on the quantum hardware, and then reconstruct them classically, which allows us to double the size of what we can do otherwise. With circuit cutting, we cut the less-entangled connections into subsystems, and we compute the global energy by classically combining the results from each of the QPUs,” she said.

Pizzolato noted these tools have a common approach – they decompose the problem, run a lot of Qiskit runtimes in parallel, and reconstruct the outcomes into a single answer. However, said Pizzolato, “Many users don’t want to worry about the underlying infrastructure, [they] just want to run their code.” To accommodate that desire, she said IBM was releasing an alpha version of quantum serverless with these capabilities built in. IBM’s quantum serverless concept, introduced a year ago, is a programming model that IBM hopes will enable users to easily orchestrate hybrid workloads via the cloud for execution on varying combinations of classical and quantum resources. It’s still fairly nascent.
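That decompose/run/reconstruct pattern is easiest to see in the trivially separable case: for a product state, a two-qubit observable like Z⊗Z factors, so each one-qubit “subcircuit” can be evaluated on its own and the results multiplied classically. Real circuit cutting handles entangled cuts via quasi-probability decompositions with sampling overhead; this toy plain-Python sketch shows only the bookkeeping:

```python
def expval_z(state):
    """<Z> for a single-qubit statevector (amp0, amp1)."""
    a, b = state
    return abs(a) ** 2 - abs(b) ** 2

def expval_zz_full(state0, state1):
    """<Z(x)Z> evaluated on the full, uncut product state, for comparison."""
    total = 0.0
    for i, a in enumerate(state0):
        for j, b in enumerate(state1):
            total += (-1) ** (i + j) * abs(a * b) ** 2
    return total

# Two single-qubit "subcircuits" prepared and measured independently.
psi0 = (0.8, 0.6)   # <Z> = +0.28
psi1 = (0.6, 0.8)   # <Z> = -0.28

# Decompose, run the pieces in parallel, reconstruct classically:
cut_estimate = expval_z(psi0) * expval_z(psi1)
assert abs(cut_estimate - expval_zz_full(psi0, psi1)) < 1e-12
print(round(cut_estimate, 6))  # → -0.0784
```

The two sub-evaluations are independent jobs, which is exactly why the pattern maps naturally onto many Qiskit runtimes running in parallel.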

A good deal more was touched upon at the IBM Summit including, for example: work on a cryo-CMOS controller chip for quantum circuits; cryogenic dense flexible interconnect cable; its new modular IBM Quantum System Two, designed to hold quantum and classical resources; work on a new, smaller gen3 classical control system; a Quantum Safe service to assist organizations seeking to implement post-quantum cryptography; and more.

The last item on its list was something IBM is calling the 100×100 challenge; it’s intended to provide a practical platform for users to explore new, more complex applications and is part of what Gambetta called IBM’s no-nonsense path to quantum advantage. No specific date for reaching quantum advantage was announced, however.

Pizzolato said, “We’re issuing a challenge to all of you. We’re calling it the 100×100 challenge. And we’re pledging that in 2024 we’ll offer our partners and clients a system that will generate reliable outcomes running 100 qubits at a gate depth of 100. We’ve said we’ve had a two-fold path to quantum computing [advantage]; we still have to make better hardware, software and infrastructure, and our users have to devise use cases. We see plenty of avenues to explore use cases using these reliable results, like ground states, thermodynamic properties, quantum kernels and more. But we need everyone’s help here and in our network partnerships to really think about what circuits they want to run on a processor like this.”

Gambetta added in his closing comments, “So creating this 100×100 device will really allow us to set up a path to understand how can we get quantum advantage in these systems and lay a future going forward.”

Stay tuned.

Link to the Gambetta et al. overview video.

[i] Quantum Volume (QV) is a single-number metric that can be measured using a concrete protocol on near-term quantum computers of modest size. The QV method quantifies the largest random circuit of equal width and depth that the computer successfully implements. Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit rewriting toolchains are expected to have higher quantum volumes.
