IBM Pitches Quantum Volume as Benchmarking Tool for Gate-based Quantum Computers

By John Russell

March 6, 2019

IBM this week announced at the American Physical Society (APS) March meeting in Boston that it had achieved its highest Quantum Volume (QV) number to date. What’s Quantum Volume, you ask? Broadly, it’s a ‘holistic measure’ introduced by IBM in a paper last November that’s intended to characterize gate-based quantum computers, regardless of their underlying technology (semiconductor, ion trap, etc.), with a single number. IBM is urging wide adoption of QV by the quantum computing community.

The idea is interesting. The highest QV score so far is 16, attained by IBM’s fourth-generation 20-qubit IBM Q System One; that’s double the QV of IBM’s 20-qubit IBM Q Network devices. Clearly qubit count alone isn’t the determinant (though it is a factor). Many system-wide facets – gate error rates, decoherence times, qubit connectivity, operating software efficiency, and more – are effectively baked into the measure. In the paper, IBM likens QV to LINPACK for its ability to compare diverse systems.
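For reference, the paper defines the metric in terms of the largest “square” model circuit – equal numbers of qubits and layers – that a machine can run successfully. In lightly paraphrased notation from the preprint:

```latex
\[
\log_2 V_Q \;=\; \max_{m} \, \min\bigl(m,\; d(m)\bigr)
\]
```

Here $m$ is the number of qubits used and $d(m)$ is the largest depth at which width-$m$ model circuits still pass the paper’s success test. A QV of 16 thus corresponds to passing width-4, depth-4 circuits ($2^4 = 16$) while larger square circuits fail.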

IBM has laid out a roadmap in which it believes it can roughly double QV every year. This rate of progress, argues IBM, will produce quantum advantage – which IBM defines as “a quantum computation is either hundreds or thousands of times faster than a classical computation, or needs a smaller fraction of the memory required by a classical computer, or makes something possible that simply isn’t possible now with a classical computer” – in the 2020s.
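Taken at face value, the doubling claim is simple exponential growth from this year’s QV 16. As a worked illustration (our extrapolation of the stated roadmap, not an IBM forecast of specific values):

```latex
\[
\mathrm{QV}(y) \approx 16 \cdot 2^{\,y-2019}
\quad\Rightarrow\quad
\mathrm{QV}(2022) \approx 128, \qquad \mathrm{QV}(2025) \approx 1024.
\]
```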

Addison Snell, CEO, Intersect360 Research noted, “Quantum volume is an interesting metric for tracking progress toward the ability to leverage quantum computing in ways that would be impractical for conventional supercomputers. With the different approaches to quantum computing, it is difficult to compare this achievement across the industry, but it is nevertheless a compelling statistic.”

There’s a lot to unpack here and it’s best done by reading the IBM paper, which isn’t overly long. Bob Sutor, VP, IBM Q Strategy and Ecosystem, and Sarah Sheldon, research staff member at the IBM T.J. Watson Research Center, briefed HPCwire on QV’s components, use, and relevance to the pursuit of quantum advantage. Before jumping into how Quantum Volume is determined, consider Sutor’s comments on timing and on what the magic QV number might be to achieve quantum advantage.

“We’re not going to go on record saying this or that particular QV number [will produce quantum advantage]. We have now educated hunches based on the different paths that people are taking, that people are taking for chemistry, for AI explorations, for some of the Monte Carlo simulations, and frankly the QV number may be different and probably will be different for each of those. We are certainly on record as saying in the 2020s and we hope in 3-to-5 years,” said Sutor.

The APS meeting served as a broad launchpad for QV, with IBM making several presentations on various quantum topics while also seeking to stimulate conversation and urging adoption of QV among the gate-based quantum computing crowd. IBM issued a press release and a more technical blog with data points, and continued promoting the original paper (Validating quantum computers using randomized model circuits), which is freely downloadable. Rigetti has reportedly implemented QV. Of note, QV is not meant for use with quantum annealing systems such as D-Wave’s.

A central challenge in quantum computing is the variety of error sources and system influences that degrade system control and performance. Lacking practical and sufficiently powerful error correction technology, the community has labeled the current class of quantum computers noisy intermediate-scale quantum (NISQ) systems. Recognizing that this situation is likely to persist for some time, the IBM paper’s authors[i] do a nice job describing the problem and their approach to measuring performance. Excerpt:

“In these noisy intermediate-scale quantum (NISQ) systems, performance of isolated gates may not predict the behavior of the system. Methods such as randomized benchmarking, state and process tomography, and gateset tomography are valued for measuring the performance of operations on a few qubits, yet they fail to account for errors arising from interactions with spectator qubits. Given a system such as this, whose individual gate operations have been independently calibrated and verified, how do we measure the degree to which the system performs as a general purpose quantum computer? We address this question by introducing a single-number metric, the quantum volume, together with a concrete protocol for measuring it on near-term systems. Similar to how LINPACK is used for comparing diverse classical computers, this metric is not tailored to any particular system, requiring only the ability to implement a universal set of quantum gates.

“The quantum volume protocol we present is strongly linked to gate error rates, and is influenced by underlying qubit connectivity and gate parallelism. It can thus be improved by moving toward the limit in which large numbers of well-controlled, highly coherent, connected, and generically programmable qubits are manipulated within a state-of-the-art circuit rewriting toolchain. High-fidelity state preparation and readout are also necessary. In this work, we evaluate the quantum volume of current IBM Q devices, and corroborate the results with simulations of the same circuits under a depolarizing error model.”
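The depolarizing error model the authors mention can be sketched in its crudest, global form: with some probability the register is replaced by the maximally mixed state, pulling the outcome distribution toward uniform. The paper applies depolarizing noise per gate; collapsing it to a single parameter, as below, is an illustrative simplification of ours, not the paper’s actual model.

```python
# Simplified global depolarizing model: with probability p the register
# is replaced by the maximally mixed state, so the ideal outcome
# distribution is mixed with the uniform distribution. (The paper uses
# per-gate depolarizing noise; one global p is an illustrative shortcut.)
import numpy as np

def depolarize(probs, p):
    """Mix an ideal outcome distribution with the uniform distribution."""
    uniform = np.full_like(probs, 1.0 / probs.size)
    return (1.0 - p) * probs + p * uniform
```

As p grows, the success statistic discussed below decays toward 0.5, which is why noise ultimately caps the circuit size – and hence the QV – a device can pass.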

In practice, explained Sheldon, “We generate model circuits which have a specific form where they are sequences of different layers of random entangling gates. The first step is entangling gates between different pairs of qubits on the device. Then we permute the pairing of qubits into another layer of entangling gates. Each of these layers we call the depth. So if we have three layers, it’s depth 3. What we are looking at are circuits we call square circuits, with the same number of qubits as the depth in the circuit, since we are still talking about small enough numbers of qubits that we can simulate these circuits [on classical systems].

“We run an ideal simulation of the circuit and from it get a probability distribution of all the possible outcomes. At the end of applying the circuit, the system should be in some state and if we were to measure it we would get a bunch of bit streams, outcomes, with some probabilities. Then we can compare the probabilities from the ideal case to what we actually measured. Based on how close we are to the ideal situation, we say whether or not we were successful. There are details in the paper about how we actually define the success and how we compare the experimental circuits to the ideal circuits. The main point is by doing these model circuits we’re sort of representing a generic quantum algorithm – [we realize] a quantum algorithm doesn’t use random circuits but this is kind of a proxy for that,” she said.
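Sheldon’s description maps onto a short simulation. The sketch below is a minimal NumPy illustration of both steps: it generates a square model circuit (random qubit pairings each layer, Haar-random two-qubit unitaries, depth equal to width) and then applies the heavy-output success test from the paper, in which outcomes whose ideal probability exceeds the median are “heavy” and a device passes if it samples them more than two-thirds of the time. Function names and the toy sampling check at the end are ours; IBM’s actual protocol (statistical confidence bounds, transpilation to native gates) lives in the paper and in Qiskit.

```python
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim)) +
         1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))           # normalize phases

def apply_gate(state, gate, qubits, m):
    """Apply a k-qubit gate to the given qubit indices of an m-qubit state."""
    k = len(qubits)
    psi = state.reshape((2,) * m)
    psi = np.moveaxis(psi, qubits, list(range(k)))
    psi = (gate @ psi.reshape(2 ** k, -1)).reshape((2,) * m)
    psi = np.moveaxis(psi, list(range(k)), qubits)
    return psi.reshape(-1)

def model_circuit_probs(m, rng):
    """Ideal outcome distribution of one width-m, depth-m model circuit."""
    state = np.zeros(2 ** m, dtype=complex)
    state[0] = 1.0
    for _ in range(m):                   # depth equals width: a "square" circuit
        perm = rng.permutation(m)        # random re-pairing of qubits each layer
        for i in range(0, m - 1, 2):     # entangle pairs; any odd qubit sits out
            state = apply_gate(state, haar_unitary(4, rng),
                               [int(perm[i]), int(perm[i + 1])], m)
    return np.abs(state) ** 2

def heavy_output_fraction(probs, counts):
    """Fraction of measured shots that land in the heavy-output set."""
    heavy = set(np.flatnonzero(probs > np.median(probs)).tolist())
    shots = sum(counts.values())
    return sum(n for outcome, n in counts.items() if outcome in heavy) / shots

# Toy check: sample the ideal distribution itself, i.e. a noiseless "device".
rng = np.random.default_rng(42)
probs = model_circuit_probs(4, rng)      # QV 16 corresponds to passing at m = 4
samples = rng.choice(probs.size, size=2000, p=probs)
outcomes, freqs = np.unique(samples, return_counts=True)
counts = dict(zip(outcomes.tolist(), freqs.tolist()))
print(heavy_output_fraction(probs, counts) > 2 / 3)   # noiseless runs pass easily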

Shown below are some data characterizing IBM systems – IBM Q System One, IBM Q Network systems “Tokyo” and “Poughkeepsie,” and the publicly available IBM Q Experience system “Tenerife.” As noted in IBM’s blog, the performance of a particular quantum computer can be characterized on two levels: metrics associated with the underlying qubits in the chip – what IBM calls the “quantum device” – and overall full-system performance.

“IBM Q System One’s performance is reflected in some of the best/lowest error rates we have ever measured. The average two qubit gate error is less than two percent, and the best gate has less than one percent error rate. Our devices are close to being fundamentally limited by coherence times, which for IBM Q System One averages 73μs,” write Jay Gambetta (IBM Fellow) and Sheldon in the blog. “The mean two-qubit error rate is within a factor of two (x1.68) of the coherence limit, the theoretical limit set by the qubit T1 and T2 (74μs and 69μs on average for IBM Q System One). This indicates that the errors induced by our controls are quite small, and we are achieving close to the best possible qubit fidelities on this device.”
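For a rough sense of what “within a factor of two of the coherence limit” means, the arithmetic below plugs the T1/T2 figures quoted above into one common first-order approximation for decoherence-limited gate fidelity. The ~400 ns two-qubit gate duration is our assumption for illustration – the article does not give IBM’s actual gate times – and the exact limit formula IBM uses may differ.

```python
# Rough coherence-limit arithmetic. Approximates each qubit's survival
# fidelity over a gate of duration t as F(t) = 1/2 + exp(-t/T2)/3 +
# exp(-t/T1)/6 (a standard first-order form for combined amplitude and
# phase damping), treating the two qubits as decaying independently.
import numpy as np

T1, T2 = 74e-6, 69e-6      # average coherence times quoted in the blog (s)
t_gate = 400e-9            # HYPOTHETICAL two-qubit gate duration (s), not from the article

def idle_fidelity(t, t1, t2):
    return 0.5 + np.exp(-t / t2) / 3 + np.exp(-t / t1) / 6

limit = 1 - idle_fidelity(t_gate, T1, T2) ** 2
print(f"approx. coherence-limited two-qubit error: {limit:.3%}")
# With the device's actual gate durations and measured mean error (not
# given here), IBM reports a ratio of about 1.68 to this kind of limit.
```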

It will be interesting to see how the quantum computing community responds to the QV metric. Back in May when Hyperion Research launched its quantum practice, analyst Bob Sorensen said, “One of the things I’m hoping we can at least play a role in is the idea of thinking about quantum computing benchmarks. Right now, if you read the popular press, and I say ‘IBM’ and the first thing you think of is, yes they have a 50-qubit system. That doesn’t mean much to anybody other than it’s one more qubit than a 49-qubit system. What I am thinking about is asking these people how can we start to characterize across a number of different abstractions and implementations to gain a sense of how we can measure progress.”

IBM has high hopes for Quantum Volume.

Link to release: https://newsroom.ibm.com/2019-03-04-IBM-Achieves-Highest-Quantum-Volume-to-Date-Establishes-Roadmap-for-Reaching-Quantum-Advantage

Link to blog: https://www.ibm.com/blogs/research/2019/03/power-quantum-device/

Link to paper: https://arxiv.org/pdf/1811.12926.pdf

Feature image: IBM Q System One

[i] Validating quantum computers using randomized model circuits, Andrew W. Cross, Lev S. Bishop, Sarah Sheldon, Paul D. Nation, and Jay M. Gambetta, IBM T. J. Watson Research Center, https://arxiv.org/pdf/1811.12926.pdf
