New IBM Blog Details Path to Quantum Advantage in 2023

By John Russell

July 19, 2022

At IBM’s Quantum Summit last November, the company issued a detailed roadmap and declared it would deliver quantum advantage (outperforming classical computers) on selected applications in 2023. It was a bold claim that was greeted with enthusiasm in some quarters and skepticism in others. Today, IBM reiterated that claim with a lengthy blog on how error mitigation is the key to achieving quantum advantage (QA) while still using noisy intermediate-scale quantum (NISQ) computers.

“There’s still a common refrain in the field: if you want value from a quantum computer, it needs to be a large, fault-tolerant quantum processor. Until then we’re stuck with noisy devices, inferior to classical counterparts,” an IBM spokesperson told HPCwire. “Our team has a different view. They have developed new techniques to draw ever more value from noisy qubits.”


The blog, written by Jay Gambetta, IBM fellow and vice president of quantum and HPCwire Person to Watch in 2022, and research colleagues Kristan Temme, Ewout van den Berg, and Abhinav Kandala, argues that probabilistic error cancellation is the secret sauce which, when combined with other advances, will enable IBM to make good on its promise to deliver selective quantum advantage sometime next year. It’s an important part of IBM’s long-term path to fault-tolerant quantum computing.

The IBM researchers write:

“The history of classical computing is one of advances in transistor and chip technology yielding corresponding gains in information processing performance. Although quantum computers have seen tremendous improvements in their scale, quality and speed in recent years, such a gradual evolution seems to be missing from the narrative. Indeed, it is widely accepted that one must first build a large fault-tolerant quantum processor before any of the quantum algorithms with proven super-polynomial speed-up can be implemented. Building such a processor therefore is the central goal for our development.

“However, recent advances in techniques we refer to broadly as quantum error mitigation allow us to lay out a smoother path towards this goal. Along this path, advances in qubit coherence, gate fidelities, and speed immediately translate to measurable advantage in computation, akin to the steady progress historically observed with classical computers.

“Of course, the ultimate litmus test for practical quantum computing is to provide an advantage over classical approaches for a useful problem. Such an advantage can take many forms, the most prominent one being a substantial improvement of a quantum algorithm’s runtime over the best classical approaches. For this to be possible, the algorithm should have an efficient representation as quantum circuits, and there should be no classical algorithm that can simulate these quantum circuits efficiently. This can only be achieved for specific kinds of problems.”

That’s the IBM plan.

Many in the quantum community agree. Others demur. Photonic-based quantum computing start-up PsiQuantum has consistently said that pursuit of full fault tolerance is the only realistic way to produce practical, broadly useful quantum computers, and it has embarked on just such a strategy. (See HPCwire article, PsiQuantum’s Path to 1 Million Qubits.)

Bob Sorensen, chief quantum analyst for Hyperion Research, noted that IBM’s portrayal of the quantum industry, or at least big portions of it, as clinging to the idea that fault tolerance must be achieved before quantum computers become useful may be overstated. “I think there’s a lot more optimism for error-tolerant NISQ systems than is being suggested,” said Sorensen.

Nevertheless, he was impressed with IBM’s activities. “What IBM is doing, as a full stack company, is addressing error correction at every level, down to the qubit structure and all the way up to software. They can cover everything in between. That’s a key advantage for a full stack vendor. So I think IBM is trying to aggressively take advantage of its full stack perspective. [Moreover,] they’re not putting everything under the hood. They’re saying, here’s our error correction. Here’s how you guys can play in that game as well,” said Sorensen.

Sorensen said, “I was more struck by IBM’s emphasis on runtime improvement and how all of its efforts, not just error correction per se, are aimed at reducing runtime. They’re moving away from a somewhat more abstract, classical-versus-quantum argument to an emphasis on how their runtime is improving on these applications. That’s something that every classical guy can resonate with. That emphasis on runtime was an interesting perspective, because it embodies all of the things that they’re looking at – circuit depth, error rates, and the algorithms themselves. And it’s a much more accessible version, in my mind, of the quantum volume measure (a benchmark measure).”

Sorensen thought the blog, ostensibly about error mitigation and achieving quantum advantage, was really a broader positioning statement. The lengthy IBM blog did encompass a mini-summary of its steady advances to date, along with a deeper description of its error mitigation efforts.

Looking at error mitigation, IBM has developed two general-purpose methods: zero-noise extrapolation (ZNE) and probabilistic error cancellation (PEC). “The ZNE method cancels subsequent orders of the noise affecting the expectation value of a noisy quantum circuit by extrapolating measurement outcomes at different noise strength,” write the authors. PEC, says IBM, can already enable noise-free estimators of quantum circuits on noisy quantum computers.
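To make the ZNE idea concrete, here is a minimal sketch, not IBM’s implementation: run the same circuit at deliberately amplified noise strengths (for example, via gate folding), fit a low-order polynomial to the measured expectation values, and read the fit off at zero noise. The run_at_noise_scale callable is a hypothetical stand-in for whatever backend call actually executes the noise-scaled circuit.

    import numpy as np

    def zero_noise_extrapolate(run_at_noise_scale, scales=(1.0, 2.0, 3.0), order=2):
        """Fit measured expectation values vs. noise scale and evaluate the fit at zero."""
        values = [run_at_noise_scale(s) for s in scales]   # noisy expectation values
        coeffs = np.polyfit(scales, values, deg=order)     # polynomial in the noise scale
        return np.polyval(coeffs, 0.0)                     # extrapolated zero-noise estimate

    # Toy usage: pretend the ideal value is 1.0 and noise damps it exponentially.
    noisy = lambda s: 1.0 * np.exp(-0.05 * s)              # stand-in for hardware measurements
    print(zero_noise_extrapolate(noisy))                   # prints a value close to 1.0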

IBM issued a PEC paper in January, which noted, “Noise in pre-fault-tolerant quantum computers can result in biased estimates of physical observables. Accurate bias-free estimates can be obtained using probabilistic error cancellation (PEC), which is an error-mitigation technique that effectively inverts well-characterized noise channels. Learning correlated noise channels in large quantum circuits, however, has been a major challenge and has severely hampered experimental realizations. Our work presents a practical protocol for learning and inverting a sparse noise model that is able to capture correlated noise and scales to large quantum devices.”
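The core mechanics behind PEC can be illustrated with a short, generic sketch; this is the textbook quasi-probability trick, not IBM’s noise-learning protocol. Write the inverse of the noise channel as a signed mixture of operations that can actually be run, sample operations in proportion to the magnitudes of their coefficients, reweight each measurement by the coefficient’s sign and by gamma (the sum of the magnitudes), and average. The negative coefficients are what push gamma above 1 and create the sampling overhead discussed below; run_and_measure is a hypothetical callable standing in for an actual circuit execution.

    import numpy as np

    def pec_estimate(quasi_probs, run_and_measure, shots=10_000, seed=None):
        """quasi_probs maps an implementable operation label to its (possibly negative)
        quasi-probability coefficient; run_and_measure(label) returns one measured
        value for the circuit with that operation inserted."""
        rng = np.random.default_rng(seed)
        labels = list(quasi_probs)
        coeffs = np.array([quasi_probs[k] for k in labels], dtype=float)
        gamma = np.abs(coeffs).sum()                  # sampling-overhead factor (>= 1)
        probs = np.abs(coeffs) / gamma                # sample operations by |coefficient|
        signs = np.sign(coeffs)
        samples = []
        for _ in range(shots):
            i = rng.choice(len(labels), p=probs)      # pick an implementable operation
            samples.append(gamma * signs[i] * run_and_measure(labels[i]))
        return float(np.mean(samples)), float(gamma)  # unbiased estimate and its overhead

Loosely speaking, the gamma returned here plays the same role as the γ̄ quantity IBM describes below, which folds the overhead of the whole circuit into a single per-qubit, per-layer base.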

IBM was able to demonstrate PEC on a superconducting quantum processor with crosstalk errors. Here’s a somewhat lengthy excerpt (to avoid garbling) from today’s blog digging into PEC:

“In PEC, we learn and effectively invert the noise of the circuit of interest for the computation of expectation values by averaging randomly sampled instances of noisy circuits. The noise inversion, however, leads to pre-factors in the measured expectation values that translate into a circuit sampling overhead. This overhead is exponential in the number of qubits n and circuit depth d. The base of this exponential, represented as γ̄, is a property of the experimentally learned noise model and its inversion. We can therefore conveniently express the circuit sampling overhead as a quantum runtime, J:

     J = γ̄^(nd) · β · d

“Here, γ̄ is a powerful quality metric of quantum processors — in technical terms, γ̄ is a measure of the negativity of the quasi-probability distribution used to represent the inverse of the noise channel. Improvements in qubit coherence, gate fidelity and crosstalk will reflect in lower γ̄ values and consequently reduce the PEC runtime dramatically. Meanwhile, β is a measure of the time per circuit layer operation (see CLOPS), an important speed metric.

“The above expression therefore highlights why the path to quantum advantage will be one driven by improvements in the quality and speed of quantum systems as their scale grows to tackle increasingly complex circuits. In recent years, we have pushed the needle on all three fronts.”
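To get a feel for why γ̄ dominates that expression, here is a small numeric sketch of the runtime formula quoted above. The γ̄ and β values are purely illustrative, chosen only to show the exponential sensitivity; they are not IBM’s published numbers.

    def pec_runtime_seconds(gamma_bar, n_qubits, depth, beta_seconds):
        # J = gamma_bar**(n*d) * beta * d, as in the expression quoted above
        return (gamma_bar ** (n_qubits * depth)) * beta_seconds * depth

    for gb in (1.010, 1.005, 1.002):                  # hypothetical gamma_bar values
        t = pec_runtime_seconds(gb, n_qubits=10, depth=10, beta_seconds=1e-3)
        print(f"gamma_bar={gb}: ~{t:.3g} s")
    # Even for this tiny 10-qubit, depth-10 circuit, shrinking gamma_bar from 1.010
    # to 1.002 cuts the sampling overhead by more than a factor of two; at 100 qubits
    # and depth 100 the same improvement is astronomically larger.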

Here’s IBM’s overall checklist of notable milestones:

  • “We unveiled our 127-qubit Eagle processor, pushing the scale of these processors well beyond the boundaries of exact classical simulability.
  • “We also introduced a metric to quantify the speed of quantum systems — CLOPS — and demonstrated a 120x reduction in the runtime of a molecular simulation.
  • “The coherence times of our transmon qubits exceeded 1 ms, an incredible milestone for superconducting qubit technology.
  • “Since then, these improvements have extended to our largest processors, and our 65-qubit Hummingbird processors have seen a 2-3x improvement in coherence, which further enables higher fidelity gates.
  • “In our latest Falcon r10 processor, IBM Prague, two-qubit gate errors dipped under 0.1%, yet another first for superconducting quantum technology, allowing this processor to demonstrate two steps in Quantum Volume of 256 and 512.”

The blog, while not at the depth of a technical paper, has a fuller discussion of the error mitigation schemes, and IBM presents a few examples of error-mitigated circuits run on its processors compared with classical results.

IBM points out that while the improvements in γ̄ might seem modest, their runtime implications are exponential and therefore huge: for instance, for a 100-qubit circuit of depth 100, the runtime overhead going from the quality of the Hummingbird r2 processor to that of the Falcon r10 is reduced by 110 orders of magnitude.
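The arithmetic behind a number like that is straightforward: the overhead ratio between two processor generations is (γ̄_old/γ̄_new)^(n·d), so the orders of magnitude saved are n·d·log10(γ̄_old/γ̄_new). The two γ̄ values below are hypothetical stand-ins chosen only to land in the right ballpark; they are not IBM’s published figures.

    import math

    n, d = 100, 100                           # the 100-qubit, depth-100 circuit above
    gamma_old, gamma_new = 1.030, 1.0042      # illustrative gamma_bar values only
    orders = n * d * math.log10(gamma_old / gamma_new)
    print(f"overhead shrinks by ~{orders:.0f} orders of magnitude")   # roughly 110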

IBM presents a second example of how reductions in two-qubit gate errors will lead to dramatic improvements in quantum runtime. “Consider the example of 100 qubit Trotter-type circuits of varying depth representing the evolution of an Ising spin chain. This is at a size well beyond exact classical simulation. In the figure below we estimate the PEC circuit overhead as a function of two-qubit gate error.

“As discussed above, two-qubit gate errors on our processors have recently dipped below 1e-3. This suggests that if we could further reduce our gate error down to ~ 2-3e-4, we could have access to noise-free observables of 100 qubit, depth 400 circuits in less than a day of runtime. Furthermore, this runtime will be reduced even further as we simultaneously improve the speed of our quantum systems.”

The IBM team writes, “We can therefore think of our proposed path to practical quantum computing as a game of exponentials. Despite the exponential cost of PEC, this framework, and the mathematical guarantees on the accuracy of error mitigated expectation values, allow us to lay down a concrete path and a metric (γ̄) to quantify when even noisy processors can achieve computational advantage over classical machines. The metrics γ̄ and β provide numerical criteria for when the runtime of quantum circuits will outperform classical methods that produce accurate expectation values, even before the advent of fault tolerant quantum computation.

“To understand this better, let us consider the example of a specific circuit family: hardware-efficient circuits with layers of Clifford entangling gates and arbitrary single-qubit gates. These circuits are extremely relevant for a host of applications ranging from chemistry to machine learning. The table below compares the runtime scaling of PEC with the runtime scaling of the best classical circuit simulation methods, and estimates the γ̄ required for error mitigated quantum computation to be competitive with best-known classical techniques.”

The blog is best read directly and provides a fuller view of what IBM sees as its path to quantum advantage using error mitigation techniques. So far, IBM has generally hit its milestones (hardware and software). The latest roadmap includes delivery of the previously announced single-chip 433-qubit Osprey processor this year and the 1,121-qubit Condor processor next year, and it introduces newly architected processors intended for use in multi-chip modules, starting with a 133-qubit processor (Heron) in 2023, followed by a 408-qubit processor (Crossbill) and a 462-qubit processor (Flamingo) in 2024, and leading to the 1,386-qubit, multi-chip Kookaburra processor in 2025.

The blog’s concluding statement was quite broad:

“At IBM Quantum, we plan to continue developing our hardware and software with this path in mind. As we improve the scale, quality, and speed of our systems, we expect to see decreases in γ̄ and β resulting in improvements in quantum runtime for circuits of interest.

“At the same time, together with our partners and the growing quantum community, we will continue expanding the list of problems that we can map to quantum circuits and develop better ways of comparing quantum circuit approaches to traditional classical methods to determine if a problem can demonstrate quantum advantage. We fully expect that this continuous path that we have outlined will bring us practical quantum computing.”

Link to IBM blog: https://research.ibm.com/blog/gammabar-for-quantum-advantage
