IBM Breaks 100-Qubit QPU Barrier, Marks Milestones on Ambitious Roadmap

By John Russell

December 13, 2021

The highlights reel of IBM’s steady progress in quantum computing was on full display at the company’s 2021 Quantum Summit, held last month while most of the HPC community was wrapped up in SC21. Underpinned by six milestones met this year, IBM has declared that 2023 will be the year, broadly speaking, when its systems deliver quantum advantage and quantum computing takes its early place as a powerful tool on the HPC landscape.

At present, the advances being made throughout the quantum computing community are impressive and accelerating, perhaps beyond the expectations of many observers. In that context, IBM has long been the 800-pound gorilla of the quantum computing world, digging into virtually every aspect of the technology, its use cases, and customer/developer engagements. IBM, of course, is focused on semiconductor-based, superconducting qubit technology, and the jury is still out on which of the many qubit technologies will prevail. Likely, it won’t be just one.

Last year, IBM laid out a detailed quantum roadmap with milestones around hardware, software, and system infrastructure. At this year’s IBM Quantum Summit, Jay Gambetta, IBM fellow and vice president, quantum computing, along with a few colleagues, delivered a report card and glimpse into future IBM plans. He highlighted six milestones – not least the recent launch of IBM’s 127-qubit quantum processor, Eagle, and plans for IBM System Two, a new complete infrastructure that will supplant System One.

Look over the IBM roadmap shown below. In many ways, it encompasses the challenges and aspirations faced by everyone in the quantum community.

While fault-tolerant quantum computing remains distant, the practical use of noisy intermediate-scale quantum (NISQ) computers seems closer than many expected. We are starting to see early quantum-based applications emerge – mostly around random number generation (see HPCwire articles on Quantinuum and Zapata, both of which are working to leverage quantum-generated random numbers).

Before digging into the tech talk, it’s worth noting how IBM expects the commercial landscape to emerge (figure below). Working with the Boston Consulting Group, IBM presented a rough roadmap for commercial applications. “IBM’s roadmap is not just concrete. It’s also ambitious,” said Matt Langione, principal and North America head of deep tech, BCG, at the IBM Summit. “We think the technical capabilities [IBM has] outlined today will help create $3 billion in value for end users during the period described.”

He cited portfolio optimization in financial services as an example. Efforts to scale up classical computing-based optimizers “struggle with non-continuous non-convex functions, things like interest rate yield curves, trading logs, buy-in thresholds, and transaction costs,” said Langione. Quantum optimizers could overcome those challenges and, “improve trading strategies by as much as 25 basis points with great fidelity at four nines by 2024 with [quantum] runtimes that integrate classical resources and have error mitigation built in. We believe this is the sort of capability that could be in trader workflows [around] 2025,” he said.

He also singled out mesh optimizers for computational fluid dynamics used in aerospace and automotive design which have similar constraints. He predicted, “In the next three years, quantum computers could start powering past limits that constrain surface size and accuracy.” Look over BCG/IBM’s market projection shown below.

Quantum computing has no shortage of big plans. IBM is betting that by laying out a clear vision and meeting its milestones, it will entice broader buy-in from the wait-and-see crowd as well as from within the quantum community. Here are brief summaries of the six topics reviewed by Gambetta and colleagues. IBM has posted a video of the talk, which in just over 30 minutes does a good, succinct job of reviewing IBM’s progress and plans.

  1. Breaking the 100-Qubit Barrier

IBM starts the formal count of its current quantum processor portfolio with the introduction of the Falcon processor in 2019; it introduced IBM’s heavy-hexagonal qubit layout and has 27 qubits. IBM has been refining this design since. Hummingbird debuted in 2020 with 65 qubits. Eagle, just launched at the 2021 Summit, has 127 qubits. The qubit count has roughly doubled with each new processor, and the next jump is bigger still: Osprey, due in 2022, will have 433 qubits.

Jerry Chow, director of quantum hardware system development at IBM, explained the lineage this way, “With Falcon, our challenge was reliable yield. We met that challenge with a novel Josephson junction tuning process, combined with our collision-reducing heavy hexagonal lattice. With Hummingbird, we implemented a large-ratio multiplexed readout allowing us to bring down the total cryogenic infrastructure needed for qubit state readout by a factor of eight. This reduced the raw amount of componentry needed.”

“Eagle [was] born out of a necessity to scale up the way that we do our device packaging so we can bring signals to and from our superconducting qubits in a more efficient way. Our work to achieve this relied heavily upon IBM’s experience with CMOS technology. It’s actually two chips.”

For Eagle, “The Josephson-junction-based qubits sit on one chip, which is attached to a separate interposer chip through bump bonds. This interposer chip provides connections to the qubits through packaging techniques which are common throughout the CMOS world. These include things like substrate vias and a buried wiring layer, which is completely novel for this technology. The presence of the buried layer provides flexibility in terms of routing the signals and laying out the device,” said Chow.

IBM says Eagle is the most advanced quantum computing chip ever built, the world’s first quantum processor over 100 qubits. Chow said, “Let me stress this isn’t just a processor we fabricated, but a full working system that is running quantum circuits today.” He said Eagle will be widely available by the end of the year, which presumably means now-ish.

Looking at the impact of Eagle, IBM isn’t shy: “The increased qubit count will allow users to explore problems at a new level of complexity when undertaking experiments and running applications, such as optimizing machine learning or modeling new molecules and materials for use in areas spanning from the energy industry to the drug discovery process. ‘Eagle’ is the first IBM quantum processor whose scale makes it impossible for a classical computer to reliably simulate. In fact, the number of classical bits necessary to represent a state on the 127-qubit processor exceeds the total number of atoms in the more than 7.5 billion people alive today.”
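IBM’s atoms comparison is easy to sanity-check: a 127-qubit state is described by 2^127 complex amplitudes, while a human body contains very roughly 7 x 10^27 atoms (that per-person figure is an assumption for this back-of-envelope check):

    # Back-of-envelope check of IBM's "more than the atoms in humanity" claim.
    amplitudes = 2 ** 127                         # complex amplitudes in a 127-qubit state
    atoms_per_person = 7e27                       # rough figure, an assumption
    atoms_in_humanity = atoms_per_person * 7.5e9  # 7.5 billion people
    print(f"{amplitudes:.2e} amplitudes vs {atoms_in_humanity:.2e} atoms")
    # ~1.70e+38 amplitudes vs ~5.25e+37 atoms, so the claim checks out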

Osprey, due next year, will have 433 qubits as noted earlier and, said Chow, will introduce “the next generation of scalable input/output that can deliver signals from room temperature down to cryogenic temperatures.”

  2. Overcoming the Gate Error Barrier

Measuring quality in quantum computing can be tricky. Key properties such as coherence time and gate fidelity are adversely affected by many factors, usually lumped together as system and environmental noise. Taming these influences is why most quantum processors are housed in big dilution refrigerators. IBM developed a benchmark metric, Quantum Volume (QV), which has various performance attributes baked in, and QV has been fairly widely used in the quantum community. IBM has achieved a QV of 128 on some of its systems. Honeywell (now Quantinuum) also reported achieving QV 128 on its trapped-ion device.
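For context, Quantum Volume is conventionally reported as 2^n, where n is the width (and equal depth) of the largest “square” random circuit the system executes with heavy-output probability above two-thirds; a QV of 128 thus corresponds to reliably running 7-qubit, depth-7 circuits. A minimal sketch of that bookkeeping:

    # Quantum Volume bookkeeping: QV = 2**n for the largest n-qubit, depth-n
    # "square" random circuit a system passes (heavy-output frequency > 2/3).
    def quantum_volume(largest_passing_width: int) -> int:
        return 2 ** largest_passing_width

    print(quantum_volume(7))  # 128, the figure reported by IBM and Quantinuum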

At the IBM Quantum Summit, Matthias Steffen, IBM fellow and chief quantum architect, reviewed progress on extending coherence times and improving gate fidelity.

“We’ve had a breakthrough with our new Falcon r8 processors. We have succeeded in improving our T1 times (spin-lattice relaxation) dramatically, from about 0.1 milliseconds to 0.3 milliseconds. This breakthrough is not limited to a one-off chip (good yield); it has now been repeated several times. In fact, some of our clients may have noticed [on] the device map showing up for IBM Peekskill recently,” said Steffen. “This is just the start. We have tested several research test devices and we’re now measuring 0.6 milliseconds, closing in on reliably crossing the one-millisecond barrier.”

“We also had a breakthrough this year with improved gate fidelities. You can see these improvements (figure below) color-coded by device family. Our Falcon r4 devices generally achieved gate errors near 0.5 x 10^-3. Our Falcon r5 devices, which also include faster readout, are about 1/3 better. In fact, many of our recent demonstrations came from this r5 device family. Finally, in gold, you see some of our latest test devices, which include Falcon r8 with the improved coherence times.”

“You also see measured fidelity for other devices, including our very recently [developed] Falcon r10 [on which] we have measured a two-qubit gate breaking the 0.001 error-per-gate plane,” said Steffen.

IBM touts the 0.001 error rate, which works out to more than 1,000 gates per error, or three nines (99.9 percent) gate fidelity, as a major milestone.
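A rough way to see why that matters: a circuit with d gates succeeds with probability of roughly (1 - p)^d at per-gate error rate p, so p = 0.001 keeps circuits of about a thousand gates viable. A hedged rule-of-thumb sketch (it ignores error structure and mitigation):

    # Rule of thumb: a circuit with n_gates succeeds with probability
    # ~(1 - p)**n_gates, so the usable gate count is roughly 1/p.
    def circuit_success(p_error: float, n_gates: int) -> float:
        return (1.0 - p_error) ** n_gates

    print(circuit_success(0.001, 1000))  # ~0.37, i.e. about 1,000 gates per error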

  3. Mainstreaming Falcon r5

Currently, the Falcon architecture is IBM’s workhorse. As explained by IBM, the portfolio of accessible QPUs includes core and exploratory chips: “Our users have access to the exploratory devices, but those devices are not online all the time. Premium users get access to both core and exploratory systems.”

IBM says there are three metrics that characterize system performance – quality, speed, and scale – and recently issued a white paper (brief excerpt at the article end) defining what’s meant by that. Speed is a core element and is defined as ‘primitive circuit layer operations per second’. IBM calls this CLOPS (catchy), roughly analogous to FLOPS in classical computing parlance.

“There’s no getting away from it,” said Katie Pizzolato, IBM director, quantum theory & applications systems. “Useful quantum computing requires running lots of circuits. Most applications require running at least a billion. If it takes my system more than five milliseconds to run a circuit, it’s simple math, a billion circuits will take you 58 days; that’s not useful quantum computing.”
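Her arithmetic checks out, as a quick script confirms:

    # Pizzolato's "simple math": a billion circuits at 5 ms per circuit.
    circuits = 1_000_000_000
    seconds_per_circuit = 0.005
    days = circuits * seconds_per_circuit / 86_400  # seconds per day
    print(f"{days:.0f} days")  # ~58 days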

At the lowest level, QPU speed is driven by the underlying architecture. “This is one of the reasons we chose superconducting qubits. In these systems, we can easily couple the qubits to the resonators in the processors. This gives us fast gates, fast resets and fast readout: the fundamentals for speed,” said Pizzolato.

“Take the Falcon r5 processor, for example, [which] is a huge upgrade over the Falcon r4. With the r5, we integrated new components into the processor that have an eight-times-faster measurement rate than the r4, without any effect on coherence. This brings the measurement time down to a few hundred nanoseconds, compared to a few microseconds. Add this to other improvements we’ve made to gate time, and you have a major step forward with the Falcon r5,” she said.

IBM is now officially labelling Falcon r5 a core system, a step up from exploratory. “We’re making sure that Falcon r5 is up and running and with high reliability. We are confident that the r5, which has faster readout, can be maintained with high availability, so it is now labeled as a core system,” she said.

Pizzolato didn’t give a specific CLOPS number for Falcon r5, but in another talk, given to the Society of HPC Professionals in early December, IBM’s Scott Crowder (VP and CTO, quantum) showed a slide indicating 4.3K CLOPS for IBM (though he didn’t specify which QPU) versus 45 CLOPS for trapped ion.

  4. IBM Systems All Support Qiskit Runtime

In May, IBM rolled out a beta version of Qiskit Runtime, which it says is “a new architecture offered by IBM Quantum that streamlines computations requiring many iterations.” The idea is to leverage classical systems to accelerate access to QPUs not unlike the way CPUs manage access to GPUs in classical computing. Qiskit Runtime is now supported by all IBM QPUs.

“We created Qiskit Runtime to be the container platform for executing classical codes in an environment that has very fast access to quantum hardware,” said Pizzolato. “[It] completely changes the use model for quantum hardware. It allows users to submit programs of circuits, rather than simply circuits, to IBM’s quantum datacenters. This approach gives us a 120-fold improvement. A program like VQE (variational quantum eigensolver), which used to take our users 45 days to run, can now be done in nine hours.”
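For flavor, invoking a hosted runtime program in the 2021-era qiskit-ibmq-provider interface looked roughly like the sketch below; the backend name and input payload are placeholders, not IBM’s exact schema:

    # Hedged sketch: invoking a hosted Qiskit Runtime program with the 2021-era
    # qiskit-ibmq-provider API. Backend name and inputs are placeholders.
    from qiskit import IBMQ
    from qiskit.circuit.library import EfficientSU2

    provider = IBMQ.load_account()                  # requires a saved IBMQ account
    job = provider.runtime.run(
        program_id="vqe",                           # a hosted program, e.g. VQE
        options={"backend_name": "ibmq_montreal"},  # hypothetical backend choice
        inputs={"ansatz": EfficientSU2(4), "optimizer": "SPSA"},  # placeholder inputs
    )
    print(job.result())                             # blocks until the program finishes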

IBM contends that these advances combined with the 127-qubit Eagle processor mean, “no one really needs to use a simulator anymore.”

Here’s the Qiskit Runtime description from the IBM website: “Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or others to use. A Qiskit quantum program, also called a Qiskit Runtime program, is a piece of Python code that takes certain inputs, performs quantum and maybe classical computation, interactively provides intermediate results if desired, and returns the processing results. The same or other authorized users can then invoke these quantum programs by simply passing in the required input parameters.”
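Concretely, a runtime program of that era was a Python module exposing a main() entry point, which received a backend plus a messenger object for streaming interim results. A minimal hedged skeleton (the iteration logic is invented for illustration):

    # Minimal skeleton of a Qiskit Runtime program (2021-era interface): main()
    # receives the backend and a UserMessenger for publishing interim results.
    def main(backend, user_messenger, **kwargs):
        iterations = kwargs.get("iterations", 3)      # an example input parameter
        for i in range(iterations):
            # ... build, transpile, and run circuits on `backend` here ...
            user_messenger.publish({"iteration": i})  # stream an interim result
        return {"status": "done", "iterations": iterations}  # final result to caller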

  5. Serverless Quantum Introduction

Qiskit Runtime, says IBM, is part of a broader effort to bring classical and quantum resources closer together via the cloud and to create serverless quantum computing. This would be a big step in abstracting away many obstacles now faced by developers.

“Qiskit Runtime involves squeezing more performance from our QPU at the circuit level by combining it with classical resources to remove latency and increase efficiency. We call this classical with a little c,” said Sarah Sheldon, an IBM Research staff member. “We’ve also discovered we can use classical resources to accelerate progress towards quantum advantage and get us there earlier.”

“To do this, we use something we call classical with a capital C. These capabilities will be at both the kernel and algorithm levels. We see them as a set of tools allowing users to trade off quantum and classical resources to optimize the overall performance of an application. At the kernel level, this will be achieved using libraries of circuits for sampling, time evolution, and more. At the algorithm level, we see a future where we’re offering pre-built Qiskit Runtimes in conjunction with classical integration libraries. We call this circuit knitting,” said Sheldon.

Broadly, circuit knitting is a technique that decomposes a large quantum circuit with more qubits and larger gate depth into multiple smaller quantum circuits with fewer qubits and smaller gate depth; it then combines the outcomes in classical post-processing. “This allows us to simulate much larger systems than ever before. We can also knit together circuits along an edge where a high level of noise or crosstalk would be present. This lets us simulate quantum systems with higher levels of accuracy,” said Sheldon.

IBM reported having demonstrated circuit knitting by simulating the ground state of a water molecule using only five qubits, with a specific technique called ‘entanglement forging,’ which knits circuits across weakly entangled halves. With circuit knitting, says IBM, users can boost the scale of the problem tackled or increase the quality of the result by making speed trade-offs with these tools.
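The classical recombination step is easiest to see in the limiting case where the two halves are not entangled at all: each half’s outcome distribution can be measured on a small device and the joint distribution rebuilt as a simple product. Entanglement forging generalizes this by summing weighted product terms; the toy sketch below covers only the trivial unentangled case:

    import numpy as np

    # Toy "knitting": if a two-qubit state factors into unentangled halves, each
    # half can be sampled separately and the joint distribution rebuilt classically.
    p_left = np.array([0.7, 0.3])    # hypothetical outcome probabilities, left qubit
    p_right = np.array([0.1, 0.9])   # hypothetical outcome probabilities, right qubit

    # Classical post-processing: the joint distribution is the outer product.
    p_joint = np.outer(p_left, p_right).flatten()  # outcome order: 00, 01, 10, 11
    print(dict(zip(["00", "01", "10", "11"], p_joint)))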

The new capabilities are being bundled into IBM Code Engine on the IBM Cloud. Code Engine, combined with lower-level tools, will deliver serverless quantum computing, says IBM. Pizzolato walked through an example: “The first step is to define the problem. In this case, we’re using VQE. Second, we use Lithops, a Python multicloud distributed computing framework, to execute the code. Inside this function, we open a communication channel to the Qiskit Runtime and run the program estimator.”

“As an example, for the classical computation, we use the simultaneous perturbation stochastic approximation (SPSA) algorithm. This is just an example; you could put anything here. So now the user can just sit back and enjoy the results. As quantum is increasingly adopted by developers, quantum serverless enables developers to just focus on their code without getting dragged into configuring classical resources,” she said.
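Lithops exposes a simple map-style executor, so the pattern Pizzolato described can be sketched as below; the cost function is a stand-in (a real version would call into Qiskit Runtime), and the parameter sweep is invented for illustration:

    import lithops

    # Hedged sketch of the serverless pattern: each map task would normally open
    # a channel to Qiskit Runtime and evaluate the VQE energy for one parameter
    # set; a dummy cost function keeps this example self-contained and runnable.
    def evaluate_energy(theta):
        return (theta - 0.5) ** 2  # stand-in for a Qiskit Runtime "estimator" call

    fexec = lithops.FunctionExecutor()  # dispatches to whichever backend is configured
    fexec.map(evaluate_energy, [0.0, 0.25, 0.5, 0.75, 1.0])
    print(fexec.get_result())           # gathered classical results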

  6. Early Plans for System Two

IBM’s final announcement was that it is “closing the chapter on” IBM Quantum System One, its fully enclosed quantum computer infrastructure, which debuted in 2019. Chow said System One would be able to handle Eagle, but that IBM was partnering with Finnish company Bluefors to develop System Two, its next-generation cryogenic infrastructure.

“We are actively working on an entirely new set of technologies from novel high-density, cryogenic microwave flex cables to a new generation of FPGA based high-bandwidth, integrated control electronics,” said Chow.

Bluefors introduced its newest cryogenic platform, Kide, which will be the basis for IBM System Two.

“We call it Kide because in Finnish, Kide means snowflake or crystal, which represents the hexagonal, crystal-like geometry of the platform that enables unprecedented expandability and access,” said Russell Lake of Bluefors. “Even when we create a larger platform, we maintain the same user accessibility as with a smaller system. This is crucial as advanced quantum hardware scales up. We optimize cooling power by separating the cooling for the quantum processor from the operational heat loads. In addition, the six-fold symmetry of the Kide platform means that systems can be joined and clustered to enable vastly expanded quantum hardware configurations.”

“The modular nature of IBM Quantum System Two will be the cornerstone of the future quantum datacenters,” said Gambetta. Presumably, the 433-qubit Osprey processor will be housed in a version of the new System Two infrastructure.

There was a lot to absorb in the IBM presentation, and IBM was naturally putting its best foot forward. Practically speaking, many companies are working on the various quantum computing aspects discussed by IBM, but few are tackling all of them. For this reason, IBM’s report serves as an interesting overview of progress throughout the quantum community generally.

Reaching quantum advantage in 2023, even if for only a few applications, would be a big deal.

Link to video: https://www.youtube.com/watch?v=-qBrLqvESNM

Link to IBM paper (Quality, Speed, and Scale: three key attributes to measure the performance of near-term quantum computers): https://arxiv.org/abs/2110.14108

Excerpt from IBM paper

“Quantum computing performance is defined by the amount of useful work accomplished by a quantum computer per unit of time. In a quantum computer, the information processing is actualized by quantum circuits containing instructions to manipulate quantum data. Unlike classical computer systems, where instructions are executed directly by a CPU, the Quantum Processing Unit (QPU), which is the combination of the control electronics and quantum memory, is supported by a classical runtime system for converting the circuits into a form consumable by the QPU and then retrieving results for further processing. Performance on actual applications depends on the performance of the complete system, and as such any performance metric must holistically consider all of the components.

“In this white paper, we propose that the performance of a quantum computer is governed by three key factors: scale, quality, and speed. Scale, or the number of qubits, determines the size of problem that can be encoded and solved. Quality determines the size of quantum circuit that can be faithfully executed. And speed is related to the number of primitive circuits that the quantum computing system can execute per unit of time. We introduce a benchmark for measuring speed in section III C.”
