IBM Quantum Summit: Osprey Flies; Error Handling Progress; Quantum-centric Supercomputing

By John Russell

December 1, 2022

Part scorecard, part grand vision, IBM’s annual Quantum Summit, held last month, offered a fascinating snapshot of IBM’s progress, its evolving technology roadmap, and issues facing the quantum landscape broadly. Thankfully, IBM squeezed most of the highlights into a roughly hour-long recorded session led by Jay Gambetta, IBM fellow and vice president, IBM Quantum. This year’s event cited 12 “breakthroughs and announcements,” with the introduction of the 433-qubit Osprey processor and significant error mitigation/correction and hybrid classical-quantum capabilities as the centerpieces.

Not all 12 announcements are new; many were hinted at or rolled out over the course of the year. But few companies have the resources to tackle virtually every aspect of quantum computing; IBM has done so and has consistently met the milestones on its quantum roadmap. While IBM’s efforts focus on superconducting qubits, many of the problems it is addressing cut across qubit modalities and are at least directionally informative for the quantum computing landscape writ large.

Here’s a look at the roadmap and a recap of the announcements.

The big news, of course, was the formal introduction of Osprey, making good on a promise IBM made last year. In his opening keynote, Dario Gil, IBM senior vice president and director of research, said, “Osprey [is] by far the largest processor ever created in the world of superconducting qubits. Last year when we announced Eagle with 127 qubits – it was the first time anybody had built across the 100-qubit barrier. With Osprey, it brings all of the technologies we’ve been building over the years, including 3D integration, multi-layer wiring, [and] being able to separate the qubit control plane from the connectivity and the readout planes. It’s a tour de force in terms of materials, devices, packaging, and on the quantum processor itself.” Osprey will be available to users in Q1.

Next up are two processors planned for 2023: Condor, at 1,121 qubits, and Heron, at a more modest 133 qubits but with many of the features necessary for connecting multiple QPUs into a larger system. Taken together, says IBM, the advances in these two QPUs help set the stage for larger hybrid architectures. Longer term, as shown on the roadmap above, IBM plans to introduce Crossbill (408 qubits) and Flamingo (1,386+ qubits) in 2024, leading to Kookaburra (4,158+ qubits) in 2026.

IBM’s vision for quantum computing – like that for most of the quantum industry – has strongly pivoted towards a hybrid classical-quantum vision. IBM calls its version Quantum-centric Supercomputing, a concept it introduced last May.

Gambetta talked about the future and said “2023 marks the point where everything changes.”

Dario Gil, Jay Gambetta and Jerry Chow holding the new 433-qubit ‘IBM Osprey’ processor. Credit: IBM

“Today, we build single processors, but we realize the path ahead is multiple processors. Today, we build bespoke infrastructure solutions, which aren’t fast enough, aren’t scalable, and cost too much. In the future, we need scalable controls. Today, we’re employing classical compute to enhance quantum hardware. But next, we will develop what we’re calling middleware for quantum that will enhance it further. This next wave is what we are calling quantum-centric supercomputing. To me, a quantum-centric supercomputer is a modular computing architecture, which will enable scaling, it will use communication to increase the computational capacity, and it will use a hybrid-cloud middleware to seamlessly integrate quantum and classical workflows,” said Gambetta.

There’s a lot to unpack here. Besides the Gambetta video, IBM has posted blogs on various specifics; most are available by poking around on the IBM Summit highlights page. The Gambetta-led session, with several colleagues, presented overviews of specific topics. Covered here are portions of the QPU hardware strategy, the discussion of error handling techniques, plans to implement quantum serverless technology relying heavily on a “multi-cloud” approach, and a snapshot of IBM’s 100×100 challenge to users.

Let’s start with the Osprey chip, which is encapsulated in a printed circuit board to accommodate the new signal wiring needed to control its 433 qubits, according to Jerry Chow, IBM Fellow and Director of Quantum Infrastructure.

“All 433 qubits are arranged in our heavy-hex lattice topology. We introduced this concept and technology of multilayer wiring last year with Eagle, and that performs the critical function of providing flexibility for the signal wiring as well as optimal device layout. [With Osprey] we’ve had to make further advances there, as well as adding integrated filtering to reduce noise and improve stability. It’s alive and being tested as we speak. Much like many of our first generations of large birds, we find coherence times at the moment between 70 and 80 microseconds median T1 time. At 433 qubits, there’s a lot to measure, so please stay tuned for further calibration updates.”
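For readers curious about the heavy-hex layout Chow mentions, Qiskit ships a constructor for heavy-hex coupling maps. Below is a minimal sketch; the distance-3 lattice is a small illustration, not Osprey’s actual 433-qubit graph.

```python
# A small heavy-hex lattice built with Qiskit's coupling-map constructor.
# distance=3 is illustrative; real IBM devices use much larger instances
# of the same topology.
from qiskit.transpiler import CouplingMap

cmap = CouplingMap.from_heavy_hex(distance=3)
print(cmap.size(), "qubits")               # lattice qubit count
print(len(cmap.get_edges()), "couplings")  # qubit-qubit connections
```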

Chow said IBM quickly migrates learning from one generation to the next. “We already have a new revision of Osprey, and this is just coming off the experimental pipeline where we’re seeing a significant coherence time improvement,” said Chow. Last year’s introduction of the tunable coupling architecture has helped push the error rates on Falcon Revision 10 devices into the 10⁻³ range – often called “three 9s” territory, since a 10⁻³ error rate corresponds to 99.9 percent gate fidelity – noted Chow.

“With Falcon r10, we were able to actually double our quantum volume not once but twice this past year, first at 256, then again at 512 with our IBM product – so our innovations in tunable coupling architecture have allowed us to drive a 4x increase in quality.”
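For context, the QV protocol (see footnote [i]) can be run with the qiskit-experiments package. Here is a hedged sketch against a local, noiseless Aer simulator standing in for real hardware; at this small size the test should pass.

```python
# Run the Quantum Volume protocol on 4 qubits: random square (width =
# depth = 4) model circuits, checking that heavy outputs occur with
# probability above 2/3 with sufficient confidence.
from qiskit_aer import AerSimulator
from qiskit_experiments.library import QuantumVolume

backend = AerSimulator()
exp = QuantumVolume(range(4), trials=50)  # 50 random circuits is illustrative
result = exp.run(backend).block_for_results()
print(result.analysis_results("quantum_volume").value)
```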

Signal delivery technology is also advancing.

Quantum hardware. Credit: Graham Carlow for IBM

“This photo is striking (left), but it’s really a relic of the past in some sense. A lot of the wiring that you see within it is built by hand. It’s handcrafted, and at 100 qubits with a few hundred cables, I can convince our team to actually do that busy work. But when we push this to 400 or 1,000 [qubits] and need to hand-tighten all the different bolts, this becomes impractical. It’s simply not cost-effective, and not nearly dense enough for the solutions that we need in the future. It has to change. So we’re really excited to show the next evolution of high-density control signal delivery with cryogenic flex wiring (photo below). It’s going to make it easier to wire hundreds to thousands of lines. It’s critical for the reliability of our deployed systems. Today, it’s already 70 percent more dense and five times cheaper, and we have plans to make this even better,” said Chow.

IBM’s dense, cryogenic flex cable

Broadly, IBM characterizes its QPUs by three criteria: scale (qubit count), quality (the QV metric[i]), and speed (CLOPS, circuit layer operations per second, introduced last year).

“Let’s talk about that third element, speed. The capacity to actually run a large number of circuits is critical for targeting quantum advantage as well as applications down the road. And [with] the overhead of error mitigation added on top of that, and eventually error correction, speed is absolutely important,” said Chow. IBM set a goal this year to go from 1,400 CLOPS to 10,000 CLOPS, and Chow cited improvements to the runtime compiler, the quantum engine, and digital hardware systems. “In June, we introduced code pipelining, followed by improvements in our control systems and quantum engine. Not only have we hit our mark of 10,000 CLOPS, we’ve surpassed it at 15,000 CLOPS, a 10x improvement over our fastest integrated systems last year,” he said.
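IBM’s benchmarking work defines CLOPS as (M × K × S × D) / elapsed time, where M parameterized templates each receive K parameter updates of S shots at a layer depth D. The toy below sketches that arithmetic with reduced sizes on a local simulator; the real benchmark rebinds parameters in templated circuits rather than re-running fixed ones, so treat this as an illustration only.

```python
# Toy illustration of the CLOPS formula: (M * K * S * D) / elapsed_time.
# M, K, S, D are shrunk for a quick local run and the Aer simulator is a
# stand-in, so the printed number is not a meaningful benchmark result.
import time
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume as QVCircuit
from qiskit_aer import AerSimulator

M, K, S, D = 10, 10, 100, 5   # templates, updates, shots, layers
backend = AerSimulator()

templates = []
for i in range(M):
    qc = QVCircuit(D, seed=i)  # QV-style model circuit of depth D
    qc.measure_all()
    templates.append(transpile(qc, backend))

start = time.perf_counter()
for circ in templates:
    for _ in range(K):         # stands in for K parameter updates
        backend.run(circ, shots=S).result()
elapsed = time.perf_counter() - start
print(f"toy CLOPS ~ {M * K * S * D / elapsed:,.0f}")
```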

Dealing with errors is a quantum community-wide problem. The hope is that intermediate measures will make NISQ (noisy intermediate-scale quantum) systems robust enough to soon deliver some measure of quantum advantage. Blake Johnson, IBM quantum platform lead, reviewed IBM’s efforts to introduce various error mitigation strategies.

“In practice, we have to contend with the presence of errors. Fortunately, we have powerful tools to deal with these errors,” said Johnson. “One category of tool is error suppression [which] reduces errors in circuits by modifying the underlying circuits without changing their meaning. For instance, we can inject additional gates to echo certain error sources. Another category is error mitigation, which can deliver accurate expectation values by executing collections or ensembles of related circuits and combining their outputs in post processing.”
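The injected-gates technique Johnson alludes to is dynamical decoupling. A hedged sketch using Qiskit’s scheduling passes follows; the instruction durations are made-up placeholders, not real backend values.

```python
# Error suppression by dynamical decoupling: schedule the circuit, then
# pad idle windows with an X-X echo pair that cancels low-frequency noise
# without changing the circuit's meaning.
from qiskit import QuantumCircuit
from qiskit.circuit.library import XGate
from qiskit.transpiler import PassManager, InstructionDurations
from qiskit.transpiler.passes import ALAPScheduleAnalysis, PadDynamicalDecoupling

# Illustrative durations (in dt units) for every instruction used below.
durations = InstructionDurations(
    [("h", None, 50), ("cx", None, 300), ("x", None, 50), ("measure", None, 1000)]
)

qc = QuantumCircuit(3, 3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)                    # qubit 0 idles while this gate runs
qc.measure(range(3), range(3))

pm = PassManager([
    ALAPScheduleAnalysis(durations),
    PadDynamicalDecoupling(durations, dd_sequence=[XGate(), XGate()]),
])
echoed = pm.run(qc)            # qubit 0's idle window now carries an X-X echo
print(echoed.draw())
```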

“Error mitigation is also powerful. In fact, earlier this year, we showed how error mitigation can deliver unbiased estimates from noisy quantum computers. But error mitigation also comes at a cost, and that cost is exponential (in sampling overhead),” said Johnson. “The question becomes, how are we going to make those tools easy to use and accessible to everyone?”

IBM’s approach has been to build Qiskit runtime primitives, launched earlier this year. “They elevate the fundamental abstraction for interfacing with quantum hardware to directly expose the kinds of queries that are relevant to quantum applications. These more abstract interfaces allow us to expose error suppression and error mitigation through simple-to-configure options. When we do it right, it can have a major impact,” said Johnson.

“Today we’re launching beta support for error suppression in the Qiskit Runtime primitives through a simple optimization level in the API. We can go further, though, and will introduce a new option that we’re calling Resilience Level. This is a simple-to-use control that allows the user to adjust the cost-accuracy trade-off of a primitive query.”

Here’s a snapshot:

  • Resilience Level One. “We’re going to turn on options for methods that specifically address errors in readout operations. We’re going to adapt the choice of method to the specific context of sampling or estimating. These methods come at fairly minimal overhead.” Level one is the default resilience level, said Johnson.
  • Resilience Level Two. “This level will enable zero noise extrapolation [and] can reduce error in an estimator, but it doesn’t come with a guarantee that the answer is unbiased.”
  • Resilience Level Three. “We turn on our most advanced error mitigation strategy, which is probabilistic error cancellation. This method incurs a substantial overhead, both in terms of noise-model learning and circuit sampling, but it also comes with the most robust guarantees about the quality of the result. For developers in the audience, here’s what it looks like in code to manipulate this resilience level and the new options interface,” he said.
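Johnson’s code slide isn’t reproduced here, but a hedged sketch of what setting these knobs looked like in the qiskit-ibm-runtime interface of that era follows; the backend name is a placeholder and saved IBM Quantum credentials are assumed.

```python
# Sketch of configuring error suppression (optimization_level) and error
# mitigation (resilience_level) via Qiskit Runtime primitive options:
#   resilience 0 = none, 1 = readout mitigation (default),
#   2 = zero-noise extrapolation, 3 = probabilistic error cancellation.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Estimator, Options

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
observable = SparsePauliOp("ZZ")  # estimate <ZZ> on a Bell state

options = Options(optimization_level=3, resilience_level=2)

service = QiskitRuntimeService()  # assumes saved credentials
with Session(service=service, backend="ibmq_qasm_simulator") as session:
    estimator = Estimator(session=session, options=options)
    job = estimator.run(circuits=circuit, observables=observable)
    print(job.result().values)
```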

Johnson also reviewed progress to incorporate dynamic circuit capability into its quantum stack. “Dynamic circuits marry real time classical computation with quantum operations, allowing feedback and feed forward of quantum measurements to steer the course of the computation,” said Johnson.

Broadly speaking, a quantum circuit is a sequence of quantum operations — including gates, measurements, and resets — acting on qubits. In static circuits, none of those operations depend on data produced at run time. For example, static circuits might only contain measurement operations at the end of the circuit. Dynamic circuits, on the other hand, incorporate classical processing within the coherence time of the qubits. This means that dynamic circuits can make use of mid-circuit measurements and perform feed-forward operations, using the values produced by measurements to determine what gates to apply next.
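As a concrete illustration, here is a minimal dynamic circuit expressed with Qiskit’s if_test control flow; this is a sketch, and actually exploiting the feed-forward requires one of the enabled backends.

```python
# A mid-circuit measurement feeds forward to condition a later gate.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.measure(0, 0)                     # mid-circuit measurement
with qc.if_test((qc.clbits[0], 1)):  # classical branch on the outcome
    qc.x(1)                          # applied only if qubit 0 read 1
qc.measure(1, 1)
```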

IBM posted a blog on the topic coinciding with the event. Here’s an excerpt:

“[W]e have been exploring dynamic circuits for several years. Recently, we compared the performance of static and dynamic circuits for phase estimation, which is a crucial primitive for many algorithms. At the time, this demonstration required working side-by-side with our hardware engineers to enable the required feed-forward operations.

“To make this capability more broadly accessible has required enormous effort. We had to rebuild a substantial portion of our underlying infrastructure, such as re-architecting our control hardware such that we could move data around in real time. We needed to update OpenQASM so it could describe circuits with feed-forward and feed-backward control flow. And we had to re-engineer our compilation toolchain so that we could convert these circuits into something our systems could actually run.

“We are rolling out dynamic circuits on 18 IBM Quantum systems — those QPUs engineered for fast readout. Users can create these circuits using Qiskit or directly in OpenQASM3. They can execute either form through the backend.run() interface. We expect that early next year, we’ll be able to execute payloads with dynamic circuits in the Sampler and Estimator primitives in the Qiskit Runtime, too.”

Continuing the thread of blending classical and quantum computing, Katie Pizzolato, director of quantum strategy and application research, announced the alpha release of IBM’s circuit knitting toolbox.

“We’re finding there’s many ways that we can weave quantum and classical together to extend what we can achieve. We call this our circuit knitting toolbox. First, we can embed quantum simulations inside larger classical problems [in which] we use quantum to treat pieces of the problem and use classical to approximate the rest,” said Pizzolato.

“Also, with things like entanglement forging, we can break the problem down into smaller circuits, run the smaller quantum circuits on the quantum hardware, and then reconstruct them classically, which allows us to double the size of what we could do otherwise. With circuit cutting, we cut the less-entangled connections into subsystems, and we compute the global energy by classically combining the results from the QPUs,” she said.

Pizzolato noted these tools share a common approach – they decompose the problem, run many Qiskit Runtime jobs in parallel, and reconstruct the outcomes into a single answer. However, said Pizzolato, “Many users don’t want to worry about the underlying infrastructure, [they] just want to run their code.” To accommodate that desire, she said IBM was releasing an alpha version of quantum serverless with these capabilities built in. IBM’s quantum serverless concept, introduced a year ago, is a programming model that IBM hopes will enable users to easily orchestrate hybrid workloads via the cloud for execution on varying combinations of classical and quantum resources. It’s still fairly nascent.
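The decompose/run-in-parallel/reconstruct shape of that workflow can be sketched in plain Python. To be clear, this is not the API of IBM’s circuit knitting toolbox or its serverless service, and the plain-sum reconstruction is a placeholder; the real combination rule depends on the cutting method.

```python
# Workflow shape only: fan fragments out in parallel, recombine classically.
from concurrent.futures import ThreadPoolExecutor
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit_aer.primitives import Estimator  # local stand-in for Qiskit Runtime

def run_fragment(fragment, observable):
    # In production each fragment would be a parallel Qiskit Runtime job.
    return Estimator().run(fragment, observable).result().values[0]

# Hypothetical fragments from a cutting step (placeholders, not a real cut).
frag = QuantumCircuit(2)
frag.h(0)
frag.cx(0, 1)
fragments = [(frag, SparsePauliOp("ZZ")), (frag, SparsePauliOp("XX"))]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(lambda fo: run_fragment(*fo), fragments))

print(sum(partials))  # placeholder reconstruction; real rules vary by method
```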

A good deal more was touched on at the IBM Summit, including, for example: work on a cryo-CMOS controller chip for quantum circuits; the cryogenic dense flexible interconnect cable; the new modular IBM Quantum System Two, designed to hold quantum and classical resources; work on a new, smaller gen3 classical control system; a Quantum Safe service to assist organizations seeking to implement post-quantum cryptography; and more.

The last item on the list was something IBM is calling the 100×100 challenge. It’s intended to provide a practical platform for users to explore new, more complex applications, and it is part of what Gambetta called IBM’s no-nonsense path to quantum advantage. No specific date for reaching quantum advantage was announced.

Pizzolato said, “We’re issuing a challenge to all of you. We’re calling it the 100×100 challenge. And we’re pledging that in 2024 we’ll offer our partners and clients a system that will generate reliable outcomes running [circuits with] 100 qubits and a gate depth of 100. We’ve said we’ve had a two-fold path to quantum computing [advantage]; we still have to make better hardware, software and infrastructure, and our users have to devise use cases. We see plenty of avenues to explore use cases using these reliable results, like ground states, thermodynamic properties, quantum kernels and more. But we need everyone’s help here and in our network partnerships to really think about what circuits they want to run on a processor like this.”

Gambetta added in his closing comments, “So creating this 100×100 device will really allow us to set up a path to understand how we can get quantum advantage in these systems and lay out a future going forward.”

Stay tuned.

Link to the Gambetta et al. overview video.

[i] Quantum Volume (QV) is a single-number metric that can be measured using a concrete protocol on near-term quantum computers of modest size. The QV method quantifies the largest random circuit of equal width and depth that the computer successfully implements. Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit rewriting toolchains are expected to have higher quantum volumes.
