IBM Quantum Summit: Two New QPUs, Upgraded Qiskit, 10-year Roadmap and More

By John Russell

December 4, 2023

IBM kicks off its annual Quantum Summit today and will announce a broad range of advances, including its much-anticipated 1,121-qubit Condor QPU; a smaller 133-qubit Heron QPU optimized for combining multiple QPUs into larger quantum systems; and the introduction of IBM Quantum System Two, its next-gen modular infrastructure designed to accommodate multiple systems and dilution refrigerators. The first System Two is up and running in IBM’s Yorktown Heights facility with three Heron devices operating inside it.

IBM will also announce the planned introduction of Qiskit 1.0 in February and the incorporation of generative AI capabilities to make it easier to use. It will share an expanded 10-year quantum roadmap, doubling its earlier five-year roadmaps. Not least, and signaled by work published last spring using its 127-qubit Eagle processor, IBM will declare the (early) start of the era of Quantum Utility, made possible by improved error mitigation and correction techniques. (See HPCwire coverage, IBM Reports Eagle QPU Outperforms Classical System on Simulation.)

So, what is Quantum Utility?

“Simply put, it’s when we can get a quantum computer to perform reliable computations at a scale beyond brute-force classical computing methods. This is a really enormous milestone. It’s the first time we’ve had a new tool to compete with and to understand what it looks like, and to explore an area that has not been available before. We also believe that those who are exploring this area and using this tool will be the first to find quantum advantage in the future,” said Katie Pizzolato, VP of quantum algorithms and scientific partnerships. These experiments will accelerate algorithm development, she said.

It is hard to match the breadth of IBM’s development efforts in quantum computing. While all quantum developers’ roadmaps undergo twists and turns – that’s the nature of the development beast – IBM’s plans have been remarkably stable and available for all to see and poke at.

Jay Gambetta, IBM Fellow and vice president of IBM Quantum, noted, “We take pride in hitting every milestone on our roadmap. Putting a roadmap out for 10 years is a big deal. The roadmap, you’ll notice, is actually going to be split into two. The top is what we call a development roadmap, and the bottom is called an innovation roadmap. Our goal is to be transparent and to show how we’re making progress and all the innovations.”

“The main thing I want you to take away from this roadmap is a transition from scaling the number of qubits to [scaling] the quality,” he said. “Over the next five years, we want to increase the quality by five times. And this will allow us to extend the utility experiments that Katie talked about, and really push the limit of what can be done using quantum computing as a tool for advancing science. Then a big jump happens in 2029, where we want to be able to achieve a system, which we call Starling, capable of running 100 million gates on 200 qubits. For me, this clearly articulates our path, where we go from error mitigation continuously to error correction.”

There’s a lot to unpack here, and it’s worth taking a moment to look at the IBM roadmap.


In a pre-briefing with media and analysts, Gambetta, Pizzolato, and Matthias Steffen, IBM Fellow for quantum processor technologies, walked through the main discussion points planned for this week’s summit. Notably, a good deal more technical granularity will be provided during the three-day, invitation-only conference. Per past practice, IBM will post videos of key talks within a few days of the Summit. Broadly, many of the progress points presented will have been discussed or hinted at earlier in the year, alongside the new introductions.

Here are a few IBM-provided highlights:

  • University of Tokyo, Argonne National Laboratory, Fundacion Ikerbasque, Qedma, Algorithmiq, University of Washington, University of Cologne, and Q-CTRL demonstrate new research exploring the power of utility-scale quantum computing.
  • ‘IBM Quantum Heron’ is released as IBM’s most performant quantum processor in the world, with newly built architecture offering up to five-fold improvement in error reduction over ‘IBM Quantum Eagle’.
  • IBM Quantum System Two begins operation with three IBM Heron processors, designed to bring quantum-centric supercomputing to reality.
  • Expansion of IBM Quantum Development Roadmap for next ten years prioritizes improvements in gate operations to scale with quality towards advanced error-corrected systems.
  • Qiskit 1.0 announced. IBM says it’s the “world’s most widely used open-source quantum programming software,” with new features to help computational scientists execute quantum circuits with ease and speed.
  • IBM will showcase generative AI models with the capability to automate quantum code development with watsonx and to optimize quantum circuits.

Let’s start with the latest quantum chips.

Condor’s size and scale are impressive. “It pushes the limit of scale,” said Steffen. “It features an unprecedented 1,121 superconducting qubits, with all qubits yielding on the single chip, and it further has a 50 percent increase in qubit density. Astonishingly, it has one mile of flex cable inside the refrigerator. The whole chip is housed in a single dilution refrigerator, and we managed to cool down the device; it has comparable performance to our Osprey device.”

All of that said, IBM’s emphasis seems to have turned more squarely to Heron and the notion that modularity using smaller QPUs to build bigger systems is the key to the future.

“Heron is our best performing quantum processor to date, with a fivefold improvement in error reduction compared to our flagship Eagle device,” said Steffen. “This was a journey that was four years in the making. It was designed for modularity and scale, and we ended up with a tunable coupler architecture that enables this. The design will give us core components to continue increasing the quality of gate operations within each of our subsequent processors. Eventually, we plan to link several Heron chips together with quantum communications for increased computational power.” One would expect more Heron performance characteristics (gate fidelity, etc.) to be presented at the summit.

IBM plans to begin offering Heron in its global fleet of utility-scale systems next year, according to Steffen.

IBM is also likely to dig deeper into its new tunable coupler architecture, the development of more efficient error correction codes, and leveraging these features to extend the gate-length of executable quantum circuits. One goal, for example, is to reduce the number of redundant physical qubits required for error correction/mitigation.

Steffen said, “We have shown a method to reduce, by a factor of more than 10, the number of qubits needed to perform the same degree of error correction as compared to the popular surface code. This is a significant reduction in the number of qubits necessary. This is [work from] a preprint from August earlier this year. Even more excitingly, this new code can be comprised of small blocks – you see here a total of 12 blocks that are quantum processor chips. And when we connect them with the black-colored M couplers over short distances, and longer couplers, called L couplers, over longer distances, we know we can arrange the chips in a larger architecture.

“This new code also requires something we call a C coupler, indicated here in pink; it couples qubits within a single processor. With all of these together, we’re confident we have a path forward to bring this code to reality and extend quantum utility.”
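To put the “factor of more than 10” claim in rough numbers: assuming the preprint Steffen references is IBM’s [[144,12,12]] bivariate bicycle code (288 physical qubits – 144 data plus 144 check – encoding 12 logical qubits at distance 12), a back-of-the-envelope comparison against the standard rotated surface code looks like this. The formulas and parameters here are illustrative assumptions, not figures from the Summit itself.

```python
# Hypothetical physical-qubit overhead comparison, assuming the August
# preprint is IBM's [[144,12,12]] bivariate bicycle code.

def surface_code_qubits(logical: int, d: int) -> int:
    """Rotated surface code: 2*d*d - 1 physical qubits per logical qubit
    at code distance d (d*d data qubits plus d*d - 1 ancillas)."""
    return logical * (2 * d * d - 1)

def bicycle_code_qubits() -> int:
    """Assumed [[144,12,12]] code: 144 data + 144 check qubits
    encoding 12 logical qubits."""
    return 144 + 144

logical, d = 12, 12  # match the assumed bicycle-code parameters
surface = surface_code_qubits(logical, d)  # 12 * 287 = 3444
bicycle = bicycle_code_qubits()            # 288
print(f"surface: {surface}, bicycle: {bicycle}, "
      f"reduction: {surface / bicycle:.1f}x")
```

Under these assumptions the reduction works out to roughly 12x, consistent with the “more than 10” Steffen cites.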

Overall, said Gambetta, “Condor makes it clear that we now know our qubits and how we’re going to scale. Heron [shows] we know the gate [architecture] going forward, and that’s why we’re putting on the roadmap that next year we want to go from 3,000 gates to 5,000 gates. And then we want to go from 7,500 gates to 10,000 gates to 15,000 gates, and eventually get to 100 million.”

The introduction of the IBM Quantum System Two infrastructure is also a major step toward being able to scale up complete, modular systems.

Steffen said, “IBM Quantum System Two is the foundation of IBM’s next-generation quantum computing system architecture that combines expandable cryogenic infrastructure, modular qubit control electronics, and scalable classical runtime servers to define the core element of our vision toward quantum-centric supercomputing. IBM Quantum System Two will enable forthcoming generations of quantum processors with a fully scalable and modular core infrastructure to run longer and deeper quantum circuits than ever before.” Again, one would expect more technical details at the summit. (Update – IBM video on System Two)

Turning to Qiskit, the latest improvements add new features and gen-AI capability. First introduced by IBM in 2017, Qiskit will see its 1.0 release debut in February. “We’ve done a lot of learning, but it’s time to get it to be performant, stable and reliable,” said Gambetta.

“We’re introducing a concept we call Qiskit Patterns. If we think developers are going to require quantum circuit knowledge to be able to do the utility work, that’s not going to be the case if we’re going to continue to expand the reach of quantum computing. So, we’ve come up with a simple framework for developing an algorithm. It consists of mapping a problem to quantum circuits and operators, optimizing the problems for quantum execution, executing them on the runtime that is powered by the System Two [infrastructure] that Matthias talked about, and then post-processing those results such that we can get a simple output.”
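The four stages Gambetta describes – map, optimize, execute, post-process – can be sketched as a plain-Python pipeline. The stage names follow his description, but the function bodies and the toy list-of-gates “circuit” are purely illustrative; they are not the actual Qiskit Patterns API.

```python
# Schematic of the four Qiskit Patterns stages; all internals are toy
# stand-ins, not real Qiskit calls.

def map_problem(problem):
    """Stage 1: map a problem to circuits and operators (toy encoding)."""
    return {"circuit": [("h", q) for q in range(problem["n_qubits"])],
            "observable": problem["observable"]}

def optimize(job):
    """Stage 2: optimize for quantum execution (here: drop duplicate gates)."""
    job["circuit"] = list(dict.fromkeys(job["circuit"]))
    return job

def execute(job):
    """Stage 3: run on the runtime (here: fake counts from a stub backend)."""
    n = len(job["circuit"])
    return {"0" * n: 512, "1" * n: 512}  # placeholder GHZ-like outcome

def post_process(counts):
    """Stage 4: reduce raw counts to a simple output (probabilities)."""
    shots = sum(counts.values())
    return {bits: c / shots for bits, c in counts.items()}

result = post_process(execute(optimize(map_problem(
    {"n_qubits": 2, "observable": "ZZ"}))))
print(result)  # {'00': 0.5, '11': 0.5}
```

The value of the pattern is the fixed shape of the pipeline: each stage can be swapped out (a real transpiler, a real runtime backend) without the surrounding stages changing.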

With Qiskit Patterns and quantum serverless, said Gambetta, users can build, deploy, and run workloads – and, in the future, share them for other users to use.

Connecting the generative AI tools from watsonx to Qiskit will allow programmers to use a simple language command to generate a quantum circuit: basically, they simply write out what they want to do, and that goes to a trained foundation model – Granite – which is fine-tuned on all the Qiskit data and generates code that is executable. “We [think] the full power of using quantum computing will be powered by generative AI to simplify the developer experience,” said Gambetta.

The broad hope is that leveraging the new hardware and software tools will enable an expanding user community, dominated by domain scientists, to explore the so-called Quantum Utility era and build more applications.

“We have over 60 industry clients that have been working either with us or our partners on enterprise experiments. I’m not going to sit here and say they’ve gotten a return on investment yet, but they’re actually starting to transition from getting ready to actually doing use-case prototypes. One of the demonstrations that we’ll show at the Summit is Hyundai tackling a very large optimization problem. I’m looking forward to seeing what others do, but it’s all going to depend on whether we can discover those algorithms,” said Gambetta.

Pizzolato handled most of the discussion of the emerging quantum utility era and the ability to use noisy, small-scale quantum computers for productive work. She said, “You’ll see a lot at Summit on foundational questions as we start to apply longer circuits. What does that mean? What are the capabilities we need to get to those longer circuits? I think the use cases you’re going to see early are largely in the condensed matter and high-energy physics space, where we’re investigating ground states, among other things. It’s going to be a continued press of extending the capabilities, extending the circuits, and then mapping those to problems. We’ve always said that we need to find the circuits that are difficult to simulate classically, and then map those to interesting problems. The early use cases that we’re seeing are definitely in the high-energy physics and condensed matter spaces, which can parlay into some materials-type discussions.”

She noted the state of the art for most of these early explorations has been in the sub-20-qubit range. “If you follow that trajectory, it’s going to take us a really long time to get to a scale at which we are beyond these brute-force classical methods that we’re talking about. We needed a disruptive change to try to raise this bar and to create a different trajectory for the technology. We believe that this change, this step function, occurred in June with the publication of this paper. This was the first time that a noisy quantum computer produced accurate expectation values at a scale that was outside of brute-force classical computation,” she said.
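A rough illustration of why brute-force classical methods run out of road: one standard brute-force approach, full statevector simulation, needs 2^n complex amplitudes for n qubits, so memory doubles with every added qubit. (The actual classical attempts to match the Eagle experiment used tensor-network approximations rather than full statevectors; this sketch only shows the naive scaling.)

```python
# Memory footprint of a full statevector: 2**n amplitudes at 16 bytes
# each (complex128). Illustrative only.

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (20, 30, 50, 127):  # 127 = Eagle, used in the June utility paper
    gib = statevector_bytes(n) / 2**30
    print(f"{n:>3} qubits: {gib:.3e} GiB")
```

At 20 qubits the state fits in a few megabytes; at 30 it needs 16 GiB; at 50 it is already petabyte-scale; and at 127 qubits it is astronomically beyond any conceivable memory, which is why the sub-20-qubit regime Pizzolato describes was comfortably simulable and 127 qubits is not (at least by this method).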

“Since this paper (Evidence for the utility of quantum computing before fault tolerance, Nature), we’ve seen a lot of people publishing papers using quantum as a tool to explore areas outside [the sub-20-qubit range] and start exploring with larger qubit counts. What you’re going to see at Summit is 10 more of these [kinds of] demonstrations with partners,” she said.

Capturing a three-day conference in an hour-long pre-briefing is a tall order, and, again, IBM does a nice job of posting videos after the conference.

In practical terms, characterizing quantum computing’s progress is still challenging. Will near-term narrow quantum advantage applications emerge before more profound step-function quantum advantage applications? Error correction and modular scaling are now the top priorities. Bottom line: it’s still a journey, but seemingly with many more side streets. IBM seems very focused on achieving fault-tolerant computing while reaping intermediate opportunities as they emerge along the way.

Having deeper pockets than most no doubt helps IBM. But there’s still a diverse and noisy debate over what quantum will deliver in the short and long term.

Today, IBM offers basically three ways to access its quantum systems: it’s free to get started, with roughly 10 minutes of access a month; there’s a pay-as-you-go approach that runs around $96 per minute; and there is a premium service to reserve capacity. Of course, other big cloud providers – Azure, AWS Braket, Google – also provide access to a growing diversity of quantum devices and tools.

Gambetta said, “I would add that if you want to do algorithmic research, even though the price of quantum can be expensive, the equivalent price to do this on a classical computer at the utility scale – if you can even do it – at maybe 40 qubits, is actually going to be more expensive using the simulator.”

Making sense of the bubbling quantum computing landscape remains difficult – it has so many moving parts, even without diving into fundamental issues such as qubit modality, noise control, hybrid systems, etc.

Asked when quantum will deliver quantum advantage or supremacy, Gambetta said, “We think of quantum supremacy or quantum advantage as a two-step process. The first step is to be able to run a quantum circuit that you cannot do with a brute-force classical simulation. The next is to work out what quantum circuit you would run. I feel confident we’re in the first stage. We have succeeded [there]. We can run quantum circuits that are beyond brute-force simulation.

“To actually achieve the second stage, quantum advantage, is actually really, really hard. Because when you’re comparing a quantum method to the best classical method that may use a different way of simulating the problem, it gets very hard to do that comparison. That’s actually where we see that you [can] use this as a tool for advancing science. It’s going to be a dialogue that goes back and forth between these domain experts on simulating one method using quantum computing, simulating another method using a classical approximation.”
