IBM Breaks 100-Qubit QPU Barrier, Marks Milestones on Ambitious Roadmap

By John Russell

December 13, 2021

The highlights reel of IBM’s steady progress in quantum computing was on full display at the company’s 2021 Quantum Summit presented last month while most of the HPC community was wrapped up in SC21. Underpinned by six milestones met this year, IBM has declared that 2023 will be the year, broadly, when its systems deliver quantum advantage, and quantum computing takes its early place as a powerful tool on the HPC landscape.

At present, the advances being made throughout the quantum computing community are impressive and accelerating perhaps beyond the expectations of many observers. In that context, IBM has long been the 500-pound gorilla in the quantum computing world, digging into virtually every aspect of the technology, its use cases, and customer/developer engagements. IBM, of course, is focused on semiconductor-based, superconducting qubit technology and the jury is out on which of the many qubit technologies will prevail. Likely, it won’t be just one.

Last year, IBM laid out a detailed quantum roadmap with milestones around hardware, software, and system infrastructure. At this year’s IBM Quantum Summit, Jay Gambetta, IBM fellow and vice president, quantum computing, along with a few colleagues, delivered a report card and glimpse into future IBM plans. He highlighted six milestones – not least the recent launch of IBM’s 127-qubit quantum processor, Eagle, and plans for IBM System Two, a new complete infrastructure that will supplant System One.

Look over the IBM roadmap shown below. In many ways, it encompasses the challenges and aspirations faced by everyone in the quantum community.

While fault-tolerant quantum computing remains distant, the practical use of quantum computing on noisy intermediate-scale quantum (NISQ) computers seems closer than many expected. We are starting to see early quantum-based applications emerge – mostly around random number generation (see HPCwire articles on Quantinuum and Zapata, both of whom are working to leverage quantum-generated random numbers).

Before digging into the tech talk, it’s worth noting how IBM expects the commercial landscape to emerge (figure below). Working with the Boston Consulting Group, IBM presented a rough roadmap for commercial applications. “IBM’s roadmap is not just concrete. It’s also ambitious,” said Matt Langione, principal and North America head of deep tech, BCG, at the IBM Summit. “We think the technical capabilities [IBM has] outlined today will help create $3 billion in value for end users during the period described.”

He cited portfolio optimization in financial services as an example. Efforts to scale up classical computing-based optimizers “struggle with non-continuous non-convex functions, things like interest rate yield curves, trading logs, buy-in thresholds, and transaction costs,” said Langione. Quantum optimizers could overcome those challenges and, “improve trading strategies by as much as 25 basis points with great fidelity at four nines by 2024 with [quantum] runtimes that integrate classical resources and have error mitigation built in. We believe this is the sort of capability that could be in trader workflows [around] 2025,” he said.

He also singled out mesh optimizers for computational fluid dynamics used in aerospace and automotive design which have similar constraints. He predicted, “In the next three years, quantum computers could start powering past limits that constrain surface size and accuracy.” Look over BCG/IBM’s market projection shown below.

Quantum computing has no shortage of big plans. IBM is betting that by laying out a clear vision and meeting its milestones, it will entice broader buy-in from the wait-and-see community as well as within the quantum community. Here are brief summaries of the six topics reviewed by Gambetta and colleagues. IBM has posted a video of the talk, which in just over 30 minutes does a good, succinct job of reviewing IBM progress and plans.

  1. Breaking the 100-Qubit Barrier

IBM starts the formal counting of its current quantum processor portfolio with the introduction of the Falcon processor in 2019; it introduced IBM’s heavy-hexagonal qubit layout and has 27 qubits. IBM has been refining this design since. Hummingbird debuted in 2020 with 65 qubits. Eagle, just launched at the 2021 Summit, has 127 qubits. The qubit count has roughly doubled with each new processor. Next up is Osprey, due in 2022, which will have 433 qubits.

Jerry Chow, director of quantum hardware system development at IBM, explained the lineage this way, “With Falcon, our challenge was reliable yield. We met that challenge with a novel Josephson junction tuning process, combined with our collision-reducing heavy hexagonal lattice. With Hummingbird, we implemented a large-ratio multiplexed readout allowing us to bring down the total cryogenic infrastructure needed for qubit state readout by a factor of eight. This reduced the raw amount of componentry needed.”

“Eagle [was] born out of a necessity to scale up the way that we do our device packaging so we can bring signals to and from our superconducting qubits in a more efficient way. Our work to achieve this relied heavily upon IBM’s experience with CMOS technology. It’s actually two chips.”

For Eagle, “The Josephson junction base (qubits) sit on one chip which is attached to a separate interposer chip through bump bonds. This interposer chip provides connections to the qubits through the packaging techniques which are common throughout the CMOS world. These include things like substrate vias and a buried wiring layer, which is completely novel for this technology. The presence of the buried layer provides flexibility in terms of routing the signals and the layout of the device,” said Chow.

IBM says Eagle is the most advanced quantum computing chip ever built, the world’s first quantum processor over 100 qubits. Chow said, “Let me stress this isn’t just a processor we fabricated, but a full working system that is running quantum circuits today.” He said Eagle will be widely available by the end of the year, which presumably means now-ish.

Looking at the impact of Eagle, IBM isn’t shy: “The increased qubit count will allow users to explore problems at a new level of complexity when undertaking experiments and running applications, such as optimizing machine learning or modeling new molecules and materials for use in areas spanning from the energy industry to the drug discovery process. ‘Eagle’ is the first IBM quantum processor whose scale makes it impossible for a classical computer to reliably simulate. In fact, the number of classical bits necessary to represent a state on the 127-qubit processor exceeds the total number of atoms in the more than 7.5 billion people alive today.”
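IBM’s comparison is easy to sanity-check: a general 127-qubit state has 2^127 complex amplitudes. A rough sketch follows; note the per-person atom count is an assumption on my part (a commonly cited ballpark of roughly 7 × 10^27 atoms in an adult human body), not a figure from IBM’s statement.

```python
# Rough check of IBM's 127-qubit claim, assuming ~7e27 atoms per person
# (a commonly cited ballpark, not a figure from the article).
amplitudes = 2 ** 127    # complex amplitudes in a general 127-qubit state
atoms = 7e27 * 7.5e9     # assumed atoms per person x 7.5 billion people

print(f"{amplitudes:.2e} amplitudes vs roughly {atoms:.2e} atoms")
print(amplitudes > atoms)  # True
```

Even with generous rounding of the atom estimate, 2^127 (about 1.7 × 10^38) comfortably exceeds it.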

Osprey, due next year, will have 433 qubits as noted earlier and, said Chow, will introduce “the next generation of scalable input output that can deliver signals from room temperature to the cryogenic temperatures.”

  2. Overcoming the Gate Error Barrier

Measuring quality in quantum computing can be tricky. Key attributes such as coherence duration and gate fidelity are adversely affected by many factors usually lumped together as system and environmental noise. Taming these influences is why most quantum processors are housed in big dilution refrigerators. IBM developed a benchmark metric, Quantum Volume (QV), which bakes in various performance attributes and has been fairly widely used in the quantum community. IBM achieved QV of 128 on some of its systems. Honeywell (now Quantinuum) also reported achieving QV 128 on its trapped-ion device.

At the IBM Quantum Summit, Matthias Steffen, IBM fellow and chief quantum architect, reviewed progress on extending coherence times and improving gate fidelity.

“We’ve had a breakthrough with our new Falcon r8 processors. We have succeeded in improving our T1 times (spin-lattice relaxation) dramatically from about 0.1 milliseconds to 0.3 milliseconds. This breakthrough is not limited to a one-off-chip (good yield). It has now been repeated several times. In fact, some of our clients may have noticed [on] the device map showing up for IBM Peekskill recently,” said Steffen. “This is just the start. We have tested several research test devices and we’re now measuring 0.6 milliseconds closing in on reliably crossing the one millisecond barrier.”

“We also had a breakthrough this year with improved gate fidelities. You can see these improvements (figure below) color coded by device family. Our Falcon r4 devices generally achieved gate errors near 0.5 x 10^-3. Our Falcon r5 devices that also include faster readout are about 1/3 better. In fact, many of our recent demonstrations came from this r5 device family. Finally, in gold, you see some of our latest test devices, which include Falcon r8 with the improved coherence times.”

“You also see measured fidelity for other devices, including our very recently [developed] Falcon r10 [on which] we have measured a two-qubit gate breaking the 0.001 error-per-gate plane,” said Steffen.

IBM is touting the 0.001 gate error rate, which corresponds to more than 1,000 gates per error, as achieving three nines (99.9 percent) fidelity and a major milestone.

  3. Mainstreaming Falcon r5

Currently, the Falcon architecture is IBM’s workhorse. As explained by IBM, the portfolio of accessible QPUs includes core and exploratory chips: “Our users have access to the exploratory devices, but those devices are not online all the time. Premium users get access to both core and exploratory systems.”

IBM says there are three metrics that characterize system performance – quality, speed, and scale – and recently issued a white paper (brief excerpt at the end of this article) defining what’s meant by each. Speed is a core element and is defined as ‘primitive circuit layer operations per second’. IBM calls this CLOPS (catchy), roughly analogous to FLOPS in classical computing parlance.

“There’s no getting away from it,” said Katie Pizzolato, IBM director, quantum theory & applications systems. “Useful quantum computing requires running lots of circuits. Most applications require running at least a billion. If it takes my system more than five milliseconds to run a circuit, it’s simple math, a billion circuits will take you 58 days; that’s not useful quantum computing.”
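Pizzolato’s “simple math” checks out; a quick back-of-the-envelope sketch:

```python
# One billion circuits at 5 ms each, converted to days.
circuits = 1_000_000_000
seconds_per_circuit = 0.005  # 5 milliseconds

total_days = circuits * seconds_per_circuit / (60 * 60 * 24)
print(round(total_days))  # 58
```

Five million seconds of pure circuit execution is just under two months, before any queueing or classical overhead is counted.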

At the lowest level QPU speed is driven by the underlying architecture. “This is one of the reasons we chose superconducting qubits. In these systems, we can easily couple the qubits to the resonators in the processors. This gives us fast gates, fast resets and fast readout fundamentals for speed,” said Pizzolato.

“Take the Falcon r5 processor for example, [which] is a huge upgrade over the Falcon r4. With the r5 we integrated new components into the processor that have an eight-times-faster measurement rate than the r4 without any effect on coherence. This allows the measurement time to be a few hundred nanoseconds, compared to a few microseconds. Add this to other improvements we’ve made to gate time, and you have a major step forward with the Falcon r5,” she said.

IBM is now officially labelling Falcon r5 a core system, a step up from exploratory. “We’re making sure that Falcon r5 is up and running and with high reliability. We are confident that the r5, which has faster readout, can be maintained with high availability, so it is now labeled as a core system,” she said.

Pizzolato didn’t give a specific CLOPS number for Falcon r5, but in another talk given to the Society of HPC Professionals in early December, IBM’s Scott Crowder (VP and CTO, quantum) showed a slide indicating 4.3 CLOPS for IBM (though he didn’t specify which QPU) versus 45 CLOPS for trapped ion.

  4. IBM Systems All Support Qiskit Runtime

In May, IBM rolled out a beta version of Qiskit Runtime, which it says is “a new architecture offered by IBM Quantum that streamlines computations requiring many iterations.” The idea is to leverage classical systems to accelerate access to QPUs not unlike the way CPUs manage access to GPUs in classical computing. Qiskit Runtime is now supported by all IBM QPUs.

“We created Qiskit Runtime to be the container platform for executing classical codes in an environment that has very fast access to quantum hardware,” said Pizzolato. “[It] completely changes the use model for quantum hardware. It allows users to submit programs of circuits rather than simply circuits to IBM’s quantum datacenters. This approach gives us a 120-fold improvement. A program like VQE (variational quantum eigensolver), which used to take our users 45 days to run, can now be done in nine hours.”
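The quoted figures are self-consistent, as a quick check shows:

```python
# 45 days compressed by the quoted 120-fold speedup, expressed in hours.
before_hours = 45 * 24            # 1080 hours
after_hours = before_hours / 120

print(after_hours)  # 9.0
```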

IBM contends that these advances combined with the 127-qubit Eagle processor mean, “no one really needs to use a simulator anymore.”

Here’s the Qiskit Runtime description from the IBM website: “Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or others to use. A Qiskit quantum program, also called a Qiskit Runtime program, is a piece of Python code that takes certain inputs, performs quantum and maybe classical computation, interactively provides intermediate results if desired, and returns the processing results. The same or other authorized users can then invoke these quantum programs by simply passing in the required input parameters.”

  5. Serverless Quantum Introduction

Qiskit Runtime, says IBM, is part of a broader effort to bring classical and quantum resources closer together via the cloud and to create serverless quantum computing. This would be a big step in abstracting away many obstacles now faced by developers.

“Qiskit Runtime involves squeezing more performance from our QPU at the circuit level by combining it with classical resources to remove latency and increase efficiency. We call this classical with a little c,” said Sarah Sheldon, an IBM Research staff member. “We’ve also discovered we can use classical resources to accelerate progress towards quantum advantage and get us there earlier.”

“To do this, we use something we call classical with a capital C. These capabilities will be at both the kernel and algorithm levels. We see them as a set of tools allowing users to trade off quantum and classical resources to optimize the overall performance of an application. At the kernel level, this will be achieved using libraries of circuits for sampling, time evolution, and more. At the algorithm level, we see a future where we’re offering pre-built Qiskit Runtimes in conjunction with classical integration libraries. We call this circuit knitting,” said Sheldon.

Broadly, circuit knitting is a technique that decomposes a large quantum circuit with more qubits and larger gate depth into multiple smaller quantum circuits with fewer qubits and smaller gate depth; it then combines the outcomes in classical post-processing. “This allows us to simulate much larger systems than ever before. We can also knit together circuits along an edge where a high level of noise or crosstalk would be present. This lets us simulate quantum systems with higher levels of accuracy,” said Sheldon.

IBM reported having demonstrated circuit knitting by simulating the ground state of a water molecule using only five qubits with a specific technique of ‘entanglement forging,’ which knits circuits across weakly entangled halves. With circuit knitting, says IBM, users can boost the scale of the problem tackled or increase the quality of the result by making speed trade-offs with these tools.
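As a toy illustration of the knitting idea (not IBM’s actual entanglement-forging method), consider the simplest possible cut: if the state across the cut is a product state, an observable of the form A ⊗ B can be estimated by running the two halves separately and multiplying the results in classical post-processing, since ⟨A ⊗ B⟩ = ⟨A⟩·⟨B⟩ for product states.

```python
import numpy as np

def expval(state, op):
    """Expectation value <psi|op|psi> of a pure state vector."""
    return float(np.real(np.vdot(state, op @ state)))

Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z observable

# Two single-qubit "sub-circuits", each preparing its own half of the state.
psi_a = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)
psi_b = np.array([np.cos(1.1), np.sin(1.1)], dtype=complex)

# Run each half separately, then knit the outcomes together classically.
knitted = expval(psi_a, Z) * expval(psi_b, Z)

# Reference: the full, unsplit two-qubit computation.
full = expval(np.kron(psi_a, psi_b), np.kron(Z, Z))

print(knitted, full)  # the two agree exactly for a product state
```

Real circuit-cutting schemes handle entanglement across the cut by sampling over extra circuit variants and paying a classical post-processing cost, which is where the speed trade-off IBM describes comes from.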

The new capabilities are being bundled into IBM Code Engine on the IBM cloud. Code Engine, combined with lower-level tools, will deliver serverless computing, says IBM. Pizzolato walked through an example, “The first step is to define the problem. In this case, we’re using VQE. Secondly, we use Lithops, a Python multicloud distributed computing framework to execute the code. Inside this function, we open a communication channel to the Qiskit Runtime and run the program estimator.”

“As an example, for the classical computation, we use the simultaneous perturbation stochastic approximation (SPSA) algorithm. This is just an example; you could put anything here. So now the user can just sit back and enjoy the results. As quantum is increasingly adopted by developers, quantum serverless enables developers to just focus on their code without getting dragged into configuring classical resources,” she said.
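SPSA suits VQE-style outer loops because it estimates a gradient in any number of parameters from only two cost evaluations per iteration. A minimal sketch follows, minimizing a stand-in classical quadratic rather than a quantum expectation value; the gain schedules are illustrative, not tuned.

```python
import numpy as np

# Minimal SPSA (simultaneous perturbation stochastic approximation) sketch.
rng = np.random.default_rng(0)

def cost(theta):
    """Stand-in for a quantum expectation value; minimum at theta = 1."""
    return float(np.sum((theta - 1.0) ** 2))

theta = np.zeros(3)
for k in range(1, 1001):
    a_k = 0.2 / k ** 0.602   # step-size schedule
    c_k = 0.1 / k ** 0.101   # perturbation-size schedule
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random +/-1 directions
    # Two cost evaluations estimate all gradient components at once.
    diff = cost(theta + c_k * delta) - cost(theta - c_k * delta)
    grad = diff / (2.0 * c_k) * delta   # 1/delta_i == delta_i for +/-1 entries
    theta = theta - a_k * grad

print(theta)  # approaches [1, 1, 1]
```

The appeal in a quantum setting is that each iteration costs a fixed number of circuit executions regardless of how many variational parameters the ansatz has.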

  6. Early Plans for System Two

IBM’s final announcement was that it is “closing the chapter on” IBM Quantum System One, its fully enclosed quantum computer infrastructure, which debuted in 2019. Chow said System One would be able to handle Eagle, but that IBM was partnering with Finnish company Bluefors to develop System Two, its next generation cryogenic infrastructure.

“We are actively working on an entirely new set of technologies from novel high-density, cryogenic microwave flex cables to a new generation of FPGA based high-bandwidth, integrated control electronics,” said Chow.

Bluefors introduced its newest cryogenic platform, Kide, which will be the basis for IBM System Two.

“We call it Kide because in Finnish, Kide means snowflake or crystal, which represents the hexagonal, crystal-like geometry of the platform that enables unprecedented expandability and access,” said Russell Lake of Bluefors. “Even when we create a larger platform, we maintain the same user accessibility as with a smaller system. This is crucial as advanced quantum hardware scales up. We optimize cooling power by separating the cooling for the quantum processor from the operational heat loads. In addition, the six-fold symmetry of the Kide platform means that systems can be joined and clustered to enable vastly expanded quantum hardware configurations.”

“The modular nature of IBM Quantum System Two will be the cornerstone of the future quantum datacenters,” said Gambetta. Presumably, the 433-qubit Osprey processor will be housed in a version of the new System Two infrastructure.

There was a lot to absorb in the IBM presentation. IBM was naturally attempting to put its best foot forward. Practically speaking, there are many companies working on the various quantum computing aspects discussed by IBM, but few tackling all of them. For this reason, IBM’s report serves as an interesting overview of progress across the quantum community generally.

Reaching quantum advantage in 2023, even if for only a few applications, would be a big deal.

Link to video: https://www.youtube.com/watch?v=-qBrLqvESNM

Link to IBM paper (Quality, Speed, and Scale: three key attributes to measure the performance of near-term quantum computers): https://arxiv.org/abs/2110.14108

Excerpt from IBM paper

“Quantum computing performance is defined by the amount of useful work accomplished by a quantum computer per unit of time. In a quantum computer, the information processing is actualized by quantum circuits containing instructions to manipulate quantum data. Unlike classical computer systems, where instructions are executed directly by a CPU, the Quantum Processing Unit (QPU), which is the combination of the control electronics and quantum memory, is supported by a classical runtime system for converting the circuits into a form consumable by the QPU and then retrieving results for further processing. Performance on actual applications depends on the performance of the complete system, and as such any performance metric must holistically consider all of the components.

“In this white paper, we propose that the performance of a quantum computer is governed by three key factors: scale, quality, and speed. Scale, or the number of qubits, determines the size of problem that can be encoded and solved. Quality determines the size of quantum circuit that can be faithfully executed. And speed is related to the number of primitive circuits that the quantum computing system can execute per unit of time. We introduce a benchmark for measuring speed in section III C.”
