IBM Breaks 100-Qubit QPU Barrier, Marks Milestones on Ambitious Roadmap

By John Russell

December 13, 2021

The highlight reel of IBM’s steady progress in quantum computing was on full display at the company’s 2021 Quantum Summit, presented last month while most of the HPC community was wrapped up in SC21. Underpinned by six milestones met this year, IBM has declared that 2023 will be the year, broadly, when its systems deliver quantum advantage, and quantum computing takes its early place as a powerful tool on the HPC landscape.

At present, the advances being made throughout the quantum computing community are impressive and accelerating perhaps beyond the expectations of many observers. In that context, IBM has long been the 500-pound gorilla in the quantum computing world, digging into virtually every aspect of the technology, its use cases, and customer/developer engagements. IBM, of course, is focused on semiconductor-based, superconducting qubit technology and the jury is out on which of the many qubit technologies will prevail. Likely, it won’t be just one.

Last year, IBM laid out a detailed quantum roadmap with milestones around hardware, software, and system infrastructure. At this year’s IBM Quantum Summit, Jay Gambetta, IBM fellow and vice president, quantum computing, along with a few colleagues, delivered a report card and glimpse into future IBM plans. He highlighted six milestones – not least the recent launch of IBM’s 127-qubit quantum processor, Eagle, and plans for IBM System Two, a new complete infrastructure that will supplant System One.

Look over the IBM roadmap shown below. In many ways, it encompasses the challenges and aspirations faced by everyone in the quantum community.

While fault-tolerant quantum computing remains distant, the practical use of quantum computing on noisy intermediate-scale quantum (NISQ) computers seems closer than many expected. We are starting to see early quantum-based applications emerge – mostly around random number generation (see HPCwire articles on Quantinuum and Zapata, both of which are working to leverage quantum-generated random numbers).

Before digging into the tech talk, it’s worth noting how IBM expects the commercial landscape to emerge (figure below). Working with the Boston Consulting Group, IBM presented a rough roadmap for commercial applications. “IBM’s roadmap is not just concrete. It’s also ambitious,” said Matt Langione, principal and North America head of deep tech, BCG, at the IBM Summit. “We think the technical capabilities [IBM has] outlined today will help create $3 billion in value for end users during the period described.”

He cited portfolio optimization in financial services as an example. Efforts to scale up classical computing-based optimizers “struggle with non-continuous non-convex functions, things like interest rate yield curves, trading logs, buy-in thresholds, and transaction costs,” said Langione. Quantum optimizers could overcome those challenges and “improve trading strategies by as much as 25 basis points with great fidelity at four nines by 2024 with [quantum] runtimes that integrate classical resources and have error mitigation built in. We believe this is the sort of capability that could be in trader workflows [around] 2025,” he said.

He also singled out mesh optimizers for computational fluid dynamics used in aerospace and automotive design which have similar constraints. He predicted, “In the next three years, quantum computers could start powering past limits that constrain surface size and accuracy.” Look over BCG/IBM’s market projection shown below.

Quantum computing has no shortage of big plans. IBM is betting that by laying out a clear vision and meeting its milestones it will entice broader buy-in from the wait-and-see community as well as within the quantum community. Here are brief summaries of the six topics reviewed by Gambetta and colleagues. IBM has posted a video of the talk, which in just over 30 minutes does a good, succinct job of reviewing IBM’s progress and plans.

  1. Breaking the 100-Qubit Barrier

IBM starts the formal counting of its current quantum processor portfolio with the introduction of the Falcon processor in 2019; the 27-qubit Falcon introduced IBM’s heavy-hexagonal qubit layout, a design IBM has been refining since. Hummingbird debuted in 2020 with 65 qubits. Eagle, just launched at the 2021 Summit, has 127 qubits. The qubit count has roughly doubled with each new processor. Next up is Osprey, due in 2022, which will have 433 qubits.

Jerry Chow, director of quantum hardware system development at IBM, explained the lineage this way, “With Falcon, our challenge was reliable yield. We met that challenge with a novel Josephson junction tuning process, combined with our collision-reducing heavy hexagonal lattice. With Hummingbird, we implemented a large-ratio multiplexed readout allowing us to bring down the total cryogenic infrastructure needed for qubit state readout by a factor of eight. This reduced the raw amount of componentry needed.”

“Eagle [was] born out of a necessity to scale up the way that we do our device packaging so we can bring signals to and from our superconducting qubits in a more efficient way. Our work to achieve this relied heavily upon IBM experience with CMOS technology. It’s actually two chips.”

For Eagle, “The Josephson-junction-based qubits sit on one chip which is attached to a separate interposer chip through bump bonds. This interposer chip provides connections to the qubits through packaging techniques which are common throughout the CMOS world. These include things like substrate vias and a buried wiring layer, which is completely novel for this technology. The presence of the buried layer provides flexibility in terms of routing the signals and laying out the device,” said Chow.

IBM says Eagle is the most advanced quantum computing chip ever built, the world’s first quantum processor over 100 qubits. Chow said, “Let me stress this isn’t just a processor we fabricated, but a full working system that is running quantum circuits today.” He said Eagle will be widely available by the end of the year, which presumably means now-ish.

Looking at the impact of Eagle, IBM isn’t shy: “The increased qubit count will allow users to explore problems at a new level of complexity when undertaking experiments and running applications, such as optimizing machine learning or modeling new molecules and materials for use in areas spanning from the energy industry to the drug discovery process. ‘Eagle’ is the first IBM quantum processor whose scale makes it impossible for a classical computer to reliably simulate. In fact, the number of classical bits necessary to represent a state on the 127-qubit processor exceeds the total number of atoms in the more than 7.5 billion people alive today.”
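
A quick back-of-the-envelope check of that claim is easy to do; the atoms-per-person figure below is a commonly cited estimate of roughly 7 x 10^27, not an IBM number:

```python
# Back-of-the-envelope check of IBM's scale claim. A full 127-qubit state
# vector has 2**127 complex amplitudes (each needing many classical bits).
amplitudes = 2 ** 127                  # ~1.7e38 amplitudes
atoms_in_humanity = 7e27 * 7.5e9       # ~5.3e37 atoms across 7.5 billion people
print(amplitudes > atoms_in_humanity)  # True
```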

Osprey, due next year, will have 433 qubits as noted earlier and, said Chow, will introduce “the next generation of scalable input/output that can deliver signals from room temperature down to cryogenic temperatures.”

  2. Overcoming the Gate Error Barrier

Measuring quality in quantum computing can be tricky. Key measures such as coherence times and gate fidelity are adversely affected by many factors usually lumped together as system and environmental noise. Taming these influences is why most quantum processors are housed in big dilution refrigerators. IBM developed a benchmark metric, Quantum Volume (QV), which has various performance attributes baked in, and QV has been fairly widely used in the quantum community. IBM has achieved a QV of 128 on some of its systems. Honeywell (now Quantinuum) also reported achieving QV 128 on its trapped ion device.
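
For context, QV is defined as 2^n, where n is the width and depth of the largest square random model circuit a system can execute with heavy-output probability above two-thirds; a QV of 128 therefore corresponds to reliably running seven-qubit, depth-seven circuits.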

At the IBM Quantum Summit, Matthias Steffen, IBM fellow and chief quantum architect, reviewed progress on extending coherence times and improving gate fidelity.

“We’ve had a breakthrough with our new Falcon r8 processors. We have succeeded in improving our T1 times (spin-lattice relaxation) dramatically from about 0.1 milliseconds to 0.3 milliseconds. This breakthrough is not limited to a one-off chip (good yield). It has now been repeated several times. In fact, some of our clients may have noticed [on] the device map showing up for IBM Peekskill recently,” said Steffen. “This is just the start. We have tested several research test devices and we’re now measuring 0.6 milliseconds, closing in on reliably crossing the one millisecond barrier.”

“We also had a breakthrough this year with improved gate fidelities. You can see these improvements (figure below) color coded by device family. Our Falcon r4 devices generally achieved gate errors near 0.5 x 10^-3. Our Falcon r5 devices that also include faster readout are about 1/3 better. In fact, many of our recent demonstrations came from this r5 device family. Finally, in gold, you see some of our latest test devices, which include Falcon r8 with the improved coherence times.”

“You also see measured fidelity for other devices, including our very recently [developed] Falcon r10 [on which] we have measured a two-qubit gate breaking the 0.001 error-per-gate plane,” said Steffen.

IBM is touting the 0.001 gate error rate – roughly 1,000 gates per error, or three nines (99.9 percent) gate fidelity – as a major milestone.

  3. Mainstreaming Falcon r5

Currently, the Falcon architecture is IBM’s workhorse. As explained by IBM, the portfolio of accessible QPUs includes core and exploratory chips: “Our users have access to the exploratory devices, but those devices are not online all the time. Premium users get access to both core and exploratory systems.”

IBM says there are three metrics that characterize system performance – quality, speed, and scale – and recently issued a white paper (brief excerpt at the article end) defining what’s meant by that. Speed is a core element and is defined as ‘primitive circuit layer operations per second’. IBM calls this CLOPS (catchy), roughly analogous to FLOPS in classical computing parlance.

“There’s no getting away from it,” said Katie Pizzolato, IBM director, quantum theory & applications systems. “Useful quantum computing requires running lots of circuits. Most applications require running at least a billion. If it takes my system more than five milliseconds to run a circuit, it’s simple math, a billion circuits will take you 58 days; that’s not useful quantum computing.”
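
Her arithmetic is easy to verify; a minimal sanity check in Python:

```python
# Sanity check of Pizzolato's "simple math": one billion circuits at
# 5 milliseconds per circuit.
circuits = 1_000_000_000
seconds = circuits * 0.005   # 5 ms each
print(seconds / 86_400)      # ~57.9 days, matching the "58 days" in the quote
```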

At the lowest level, QPU speed is driven by the underlying architecture. “This is one of the reasons we chose superconducting qubits. In these systems, we can easily couple the qubits to the resonators in the processors. This gives us fast gates, fast resets and fast readout – fundamentals for speed,” said Pizzolato.

“Take the Falcon r5 processor for example, [which] is a huge upgrade over the Falcon r4. With the r5 we integrated new components into the processor that have an eight times faster measurement rate than the r4 without any effect on coherence. This allows the measurement time to be a few hundred nanoseconds compared to a few microseconds. Add this to other improvements we’ve made to gate time, and you have a major step forward with the Falcon r5,” she said.

IBM is now officially labeling Falcon r5 a core system, a step up from exploratory. “We’re making sure that Falcon r5 is up and running with high reliability. We are confident that the r5, which has faster readout, can be maintained with high availability, so it is now labeled as a core system,” she said.

Pizzolato didn’t give a specific CLOPS number for Falcon r5, but in another talk given to the Society of HPC Professionals in early December, IBM’s Scott Crowder (VP and CTO, quantum) showed a slide indicating 4.3 CLOPS for IBM (though it didn’t specify which QPU) versus 45 CLOPS for trapped ion.

  4. IBM Systems All Support Qiskit Runtime

In May, IBM rolled out a beta version of Qiskit Runtime, which it says is “a new architecture offered by IBM Quantum that streamlines computations requiring many iterations.” The idea is to leverage classical systems to accelerate access to QPUs not unlike the way CPUs manage access to GPUs in classical computing. Qiskit Runtime is now supported by all IBM QPUs.

“We created Qiskit Runtime to be the container platform for executing classical codes in an environment that has very fast access to quantum hardware,” said Pizzolato. “[It] completely changes the use model for quantum hardware. It allows users to submit programs of circuits rather than simply circuits to IBM’s quantum datacenters. This approach gives us a 120-fold improvement. A program like VQE (variational quantum eigensolver), which used to take our users 45 days to run, can now be done in nine hours.”
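
To make the usage model concrete, here is a minimal sketch of submitting work through Qiskit Runtime using the qiskit-ibmq-provider interface of that era; the backend name is illustrative and saved IBM Quantum credentials are assumed:

```python
# Minimal sketch of the Qiskit Runtime usage model circa 2021, via the
# qiskit-ibmq-provider interface. Backend name is illustrative only.
from qiskit import IBMQ, QuantumCircuit

provider = IBMQ.load_account()  # assumes saved IBM Quantum credentials

bell = QuantumCircuit(2, 2)
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])

# Submit a program of circuits (here just one) rather than raw circuits.
job = provider.runtime.run(
    program_id="circuit-runner",                # a built-in Runtime program
    options={"backend_name": "ibmq_montreal"},  # illustrative Falcon backend
    inputs={"circuits": bell, "shots": 4000},
)
print(job.result())
```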

IBM contends that these advances combined with the 127-qubit Eagle processor mean, “no one really needs to use a simulator anymore.”

Here’s the Qiskit Runtime description from the IBM website: “Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or others to use. A Qiskit quantum program, also called a Qiskit Runtime program, is a piece of Python code that takes certain inputs, performs quantum and maybe classical computation, interactively provides intermediate results if desired, and returns the processing results. The same or other authorized users can then invoke these quantum programs by simply passing in the required input parameters.”
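
The shape of such a program follows the main(backend, user_messenger, **kwargs) entry point IBM documented for uploaded Runtime programs; the computation in this sketch is a hypothetical placeholder:

```python
# Hedged sketch of a custom Qiskit Runtime program. The entry-point
# signature follows IBM's documentation; the body is a placeholder.
def main(backend, user_messenger, **kwargs):
    """Entry point Qiskit Runtime calls when the program is invoked."""
    circuits = kwargs.pop("circuits")             # caller-supplied inputs
    counts = []
    for i, circ in enumerate(circuits):
        result = backend.run(circ).result()       # quantum execution step
        counts.append(result.get_counts())
        user_messenger.publish({"completed": i})  # interim results, if desired
    return counts                                 # final processing results
```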

  5. Serverless Quantum Introduction

Qiskit Runtime, says IBM, is part of a broader effort to bring classical and quantum resources closer together via the cloud and to create serverless quantum computing. This would be a big step in abstracting away many obstacles now faced by developers.

“Qiskit Runtime involves squeezing more performance from our QPU at the circuit level by combining it with classical resources to remove latency and increase efficiency. We call this classical with a little c,” said Sarah Sheldon, an IBM Research staff member. “We’ve also discovered we can use classical resources to accelerate progress towards quantum advantage and get us there earlier.”

“To do this, we use something we call classical with a capital C. These capabilities will be both at the kernel and algorithm levels. We see them as a set of tools allowing users to trade off quantum and classical resources to optimize the overall performance of an application. At the kernel level, this will be achieved using libraries of circuits for sampling, time evolution, and more. But at the algorithm level, we see a future where we’re offering pre-built Qiskit Runtimes in conjunction with classical integration libraries. We call this circuit knitting,” said Sheldon.

Broadly, circuit knitting is a technique that decomposes a large quantum circuit with more qubits and larger gate depth into multiple smaller quantum circuits with fewer qubits and smaller gate depth; it then combines the outcomes in classical post-processing. “This allows us to simulate much larger systems than ever before. We can also knit together circuits along an edge where a high level of noise or crosstalk would be present. This lets us simulate quantum systems with higher levels of accuracy,” said Sheldon.
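
A toy illustration of the idea, for the easiest possible cut – a product state with no entanglement across it – where the expectation value of a product observable factorizes exactly. The four-qubit example and helper names are mine, not IBM’s:

```python
# Toy circuit-knitting illustration: on a 4-qubit product state,
# <Z0 Z1 Z2 Z3> factorizes exactly into <Z0 Z1> * <Z2 Z3>, so two
# 2-qubit circuits plus classical post-processing reproduce the
# 4-qubit answer.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Pauli, Statevector

def expectation(circuit: QuantumCircuit, pauli: str) -> float:
    """Exact expectation value of a Pauli string on a circuit's output state."""
    return Statevector(circuit).expectation_value(Pauli(pauli)).real

def bell_pair() -> QuantumCircuit:
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    return qc

# Run two small circuits and knit the results classically.
knitted = expectation(bell_pair(), "ZZ") * expectation(bell_pair(), "ZZ")

# Reference: the uncut 4-qubit circuit gives the same answer.
full = QuantumCircuit(4)
full.h(0); full.cx(0, 1)
full.h(2); full.cx(2, 3)
assert abs(knitted - expectation(full, "ZZZZ")) < 1e-9
```

Real workloads, of course, have some entanglement across the cut; techniques like entanglement forging (below) handle the weakly entangled case at the cost of extra circuits and classical post-processing.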

IBM reported having demonstrated circuit knitting by simulating the ground state of a water molecule using only five qubits with a specific technique, ‘entanglement forging,’ which knits circuits across weakly entangled halves. With circuit knitting, says IBM, users can boost the scale of the problem tackled or increase the quality of the result by making speed trade-offs with these tools.

The new capabilities are being bundled into IBM Code Engine on the IBM cloud. Code Engine, combined with lower-level tools, will deliver serverless computing, says IBM. Pizzolato walked through an example: “The first step is to define the problem. In this case, we’re using VQE. Secondly, we use Lithops, a Python multicloud distributed computing framework, to execute the code. Inside this function, we open a communication channel to the Qiskit Runtime and run the program estimator.”

“As an example, for the classical computation, we use the simultaneous perturbation stochastic approximation (SPSA) algorithm. This is just an example; you could put anything here. So now the user can just sit back and enjoy the results. As quantum is increasingly adopted by developers, quantum serverless enables developers to just focus on their code without getting dragged into configuring classical resources,” she said.
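
A compressed sketch of that flow, assuming Lithops’ standard FunctionExecutor API; the program name “estimator,” the input shapes, and the backend name are illustrative assumptions, not IBM specifics:

```python
# Compressed sketch of the serverless flow Pizzolato described: Lithops
# dispatches a function that opens a channel to Qiskit Runtime.
import lithops

def run_quantum_step(params):
    # Runs inside the serverless function; qiskit must be available in
    # the configured Lithops runtime image.
    from qiskit import IBMQ
    provider = IBMQ.load_account()
    job = provider.runtime.run(
        program_id="estimator",                     # assumed program name
        options={"backend_name": "ibmq_montreal"},  # illustrative backend
        inputs={"parameters": params},              # hypothetical inputs
    )
    return job.result()

fexec = lithops.FunctionExecutor()  # uses whatever cloud backend is configured
fexec.call_async(run_quantum_step, [0.1, 0.2])
print(fexec.get_result())
```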

  6. Early Plans for System Two

IBM’s final announcement was that it is “closing the chapter on” IBM Quantum System One, its fully enclosed quantum computer infrastructure, which debuted in 2019. Chow said System One would be able to handle Eagle, but that IBM was partnering with Finnish company Bluefors to develop System Two, its next generation cryogenic infrastructure.

“We are actively working on an entirely new set of technologies from novel high-density, cryogenic microwave flex cables to a new generation of FPGA based high-bandwidth, integrated control electronics,” said Chow.

Bluefors introduced its newest cryogenic platform, Kide, which will be the basis for IBM System Two.

“We call it Kide because in Finnish, Kide means snowflake or crystal, which represents the hexagonal crystal-like geometry of the platform that enables unprecedented expandability and access,” said Russell Lake of Bluefors. “Even when we create a larger platform, we maintain the same user accessibility as with a smaller system. This is crucial as advanced quantum hardware scales up. We optimize cooling power by separating the cooling for the quantum processor from the operational heat loads. In addition, the six-fold symmetry of the Kide platform means that systems can be joined and clustered to enable vastly expanded quantum hardware configurations.”

“The modular nature of IBM Quantum System Two will be the cornerstone of the future quantum datacenters,” said Gambetta. Presumably, the 433-qubit Osprey processor will be housed in a version of the new System Two infrastructure.

There was a lot to absorb in the IBM presentation. IBM was naturally attempting to put its best foot forward. Practically speaking, there are many companies working on the various quantum computing aspects discussed by IBM, but few are tackling all of them. For this reason, IBM’s report serves as an interesting overview of progress generally throughout the quantum community.

Reaching quantum advantage in 2023, even if for only a few applications, would be a big deal.

Link to video: https://www.youtube.com/watch?v=-qBrLqvESNM

Link to IBM paper (Quality, Speed, and Scale: three key attributes to measure the performance of near-term quantum computers): https://arxiv.org/abs/2110.14108

Excerpt from IBM paper

“Quantum computing performance is defined by the amount of useful work accomplished by a quantum computer per unit of time. In a quantum computer, the information processing is actualized by quantum circuits containing instructions to manipulate quantum data. Unlike classical computer systems, where instructions are executed directly by a CPU, the Quantum Processing Unit (QPU), which is the combination of the control electronics and quantum memory, is supported by a classical runtime system for converting the circuits into a form consumable by the QPU and then retrieving results for further processing. Performance on actual applications depends on the performance of the complete system, and as such any performance metric must holistically consider all of the components.

“In this white paper, we propose that the performance of a quantum computer is governed by three key factors: scale, quality, and speed. Scale, or the number of qubits, determines the size of problem that can be encoded and solved. Quality determines the size of quantum circuit that can be faithfully executed. And speed is related to the number of primitive circuits that the quantum computing system can execute per unit of time. We introduce a benchmark for measuring speed in section III C.”
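
As a pointer for readers, the speed benchmark in the paper’s section III C measures CLOPS by repeatedly executing parameterized Quantum Volume circuits; roughly, with M circuit templates, K parameter updates per template, S shots, and D = log2(QV) layers, CLOPS = (M x K x S x D) / total elapsed time, with IBM using M = 100, K = 10, and S = 100.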
