Researchers Discover New Method for Correcting Errors in Quantum Computers

September 1, 2022

Sept. 1, 2022 — In conventional computers, fixing errors is a well-developed field. Every cellphone requires checks and fixes to send and receive data over messy airwaves. Quantum computers offer enormous potential to solve certain complex problems that are impossible for conventional computers, but this power depends on harnessing extremely fleeting behaviors of subatomic particles. These computing behaviors are so ephemeral that even looking in on them to check for errors can cause the whole system to collapse.

Jeff Thompson, Princeton Associate Professor of Electrical and Computer Engineering, led the interdisciplinary research team. Credit: Princeton University

In a paper outlining a new theory for error correction, published Aug. 9 in Nature Communications, an interdisciplinary team led by Jeff Thompson, an associate professor of electrical and computer engineering at Princeton, and collaborators Yue Wu and Shruti Puri at Yale University and Shimon Kolkowitz at the University of Wisconsin-Madison, showed that they could dramatically improve a quantum computer’s tolerance for faults, and reduce the amount of redundant information needed to isolate and fix errors. The new technique increases the acceptable error rate four-fold, from 1% to 4%, which is practical for quantum computers currently in development.

“The fundamental challenge to quantum computers is that the operations you want to do are noisy,” said Thompson, meaning that calculations are prone to myriad modes of failure.

In a conventional computer, an error can be as simple as a bit of memory accidentally flipping from a 1 to a 0, or as messy as one wireless router interfering with another. A common approach for handling such faults is to build in some redundancy, so that each piece of data is compared with duplicate copies. However, that approach increases the amount of data needed and creates more possibilities for errors. Therefore, it only works when the vast majority of information is already correct. Otherwise, checking wrong data against wrong data leads deeper into a pit of error.
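
To see why redundancy helps only when most of the data is already correct, here is a minimal classical sketch of majority voting over redundant copies. The five-bit strings and function name are illustrative, not drawn from the paper:

```python
# Minimal sketch of classical repetition-code voting (illustrative only).
from collections import Counter

def majority_vote(copies: str) -> str:
    """Recover one logical bit by voting over its redundant copies."""
    return Counter(copies).most_common(1)[0][0]

print(majority_vote("11101"))  # -> '1': one flipped bit is outvoted
print(majority_vote("10010"))  # -> '0': if '1' was intended, three flips
                               #    make the vote confidently wrong
```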

“If your baseline error rate is too high, redundancy is a bad strategy,” Thompson said. “Getting below that threshold is the main challenge.”

Rather than focusing solely on reducing the number of errors, Thompson’s team essentially made errors more visible. The team delved deeply into the actual physical causes of error and engineered their system so that the most common source of error effectively erases the damaged data rather than merely corrupting it. Thompson said this behavior represents a particular kind of error known as an “erasure error,” which is fundamentally easier to weed out than data that is corrupted but still looks like all the other data.

In a conventional computer, if a packet of supposedly redundant information comes across as 11001, it might be risky to assume that the slightly more prevalent 1s are correct and the 0s are wrong. But if the information comes across as 11XX1, where the corrupted bits are evident, the case is more compelling.
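
In code, the difference is that erased positions can simply be dropped before voting. A short sketch reusing the article’s example, with “X” marking a bit known to be corrupted (again illustrative, not from the paper):

```python
# Sketch of voting when corrupted bits are flagged as erasures ('X').
from collections import Counter

def erasure_vote(copies: str) -> str:
    """Vote only over positions that are known to be intact."""
    survivors = [c for c in copies if c != "X"]
    return Counter(survivors).most_common(1)[0][0]

print(erasure_vote("11XX1"))  # -> '1': the two erased bits cast no
                              #    wrong votes, so three good bits agree
```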

Overview of a fault-tolerant neutral atom quantum computer using erasure conversion. Details are available as a footnote below. Credit: Nature Communications.

“These erasure errors are vastly easier to correct because you know where they are,” Thompson said. “They can be excluded from the majority vote. That is a huge advantage.”

Erasure errors are well understood in conventional computing, but researchers had not previously considered trying to engineer quantum computers to convert errors into erasures, Thompson said.

As a practical matter, their proposed system could withstand an error rate of 4.1%, which Thompson said is well within the realm of possibility for current quantum computers. In previous systems, the state-of-the-art error correction could handle less than 1% error, which Thompson said is at the edge of the capability of any current quantum system with a large number of qubits.

The team’s ability to generate erasure errors turned out to be an unexpected benefit from a choice Thompson made years ago. His research explores “neutral atom qubits,” in which quantum information (a “qubit”) is stored in a single atom. His group pioneered the use of the element ytterbium for this purpose. Thompson said the group chose ytterbium partly because it has two electrons in its outermost shell, while the atoms used in most other neutral atom qubits have just one.

“I think of it as a Swiss army knife, and this ytterbium is the bigger, fatter Swiss army knife,” Thompson said. “That extra little bit of complexity you get from having two electrons gives you a lot of unique tools.”

One of those extra tools turned out to be useful for eliminating errors. The team proposed pumping ytterbium’s electrons from their stable “ground state” to excited states called “metastable states,” which can be long-lived under the right conditions but are inherently fragile. Counterintuitively, the researchers propose to use these states to encode the quantum information.

“It’s like the electrons are on a tightrope,” Thompson said. And the system is engineered so that the same factors that cause error also cause the electrons to fall off the tightrope.

As a bonus, once they fall to the ground state, the electrons scatter light in a very visible way, so shining a light on a collection of ytterbium qubits causes only the faulty ones to light up. Those that light up should be written off as errors.
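
A hypothetical control-loop sketch of that detection step follows; read_fluorescence() and replace_atom() are assumed interfaces standing in for the real optics and atom-reservoir hardware, not functions from any published control stack.

```python
# Hypothetical erasure-detection cycle for a register of atom qubits.
# read_fluorescence() and replace_atom() are assumed interfaces.

def detect_erasures(register, read_fluorescence, replace_atom):
    """Return positions of qubits that decayed out of the metastable states.

    Atoms that fell to the ground state scatter light under the probe
    beam; bright sites are flagged as erasures and refilled from a
    reservoir before computation continues.
    """
    erased = [i for i, atom in enumerate(register) if read_fluorescence(atom)]
    for i in erased:
        replace_atom(register, i)  # refill the site from the reservoir
    return erased                  # known error locations for the decoder
```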

This advance required combining insights in both quantum computing hardware and the theory of quantum error correction, leveraging the interdisciplinary nature of the research team and their close collaboration. While the mechanics of this setup are specific to Thompson’s ytterbium atoms, he said the idea of engineering qubits to generate erasure errors could be a useful goal in other systems — of which there are many in development around the world — and is something that the group is continuing to work on.

“We see this project as laying out a kind of architecture that could be applied in many different ways,” Thompson said, adding that other groups have already begun engineering their systems to convert errors into erasures. “We are already seeing a lot of interest in finding adaptations for this work.”

As a next step, Thompson’s group is now working on demonstrating the conversion of errors to erasures in a small working quantum computer with several tens of qubits.

*Image: Overview of a fault-tolerant neutral atom quantum computer using erasure conversion: (a) Schematic of a neutral atom quantum computer, with a plane of atoms under a microscope objective used to image fluorescence and project trapping and control fields. (b) The physical qubits are individual ¹⁷¹Yb atoms. The qubit states are encoded in the metastable 6s6p ³P₀, F = 1/2 level (subspace Q), and two-qubit gates are performed via the Rydberg state |r⟩, which is accessed through a single-photon transition (λ = 302 nm) with Rabi frequency Ω. The dominant errors during gates are decays from |r⟩ with a total rate Γ = Γ_B + Γ_R + Γ_Q. Only a small fraction (Γ_Q/Γ ≈ 0.05) returns to the qubit subspace, while the remaining decays are either blackbody (BBR) transitions to nearby Rydberg states (Γ_B/Γ ≈ 0.61) or radiative decay to the ground state 6s² ¹S₀ (Γ_R/Γ ≈ 0.34). At the end of a gate, these events can be detected and converted into erasure errors by detecting fluorescence from ground-state atoms (subspace R), or by ionizing any remaining Rydberg population via autoionization and collecting fluorescence on the Yb⁺ transition (subspace B). (c) A patch of the XZZX surface code studied in this work, showing data qubits (open circles), ancilla qubits (filled circles) and stabilizer operations, performed in the order indicated by the arrows. (d) Quantum circuit representing a measurement of a stabilizer on data qubits D1–D4 using ancilla A1, with interleaved erasure conversion steps. Erasure detection is applied after each gate, and erased atoms are replaced from a reservoir as needed using a movable optical tweezer. Strictly, only the atom detected to have left the subspace needs to be replaced, but replacing both protects against the possibility of undetected leakage on the second atom.
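
For orientation, panel (d) can be summarized as the following hedged sketch of one stabilizer round with interleaved erasure conversion; apply_gate(), erasure_check(), and replace() are assumed stand-ins, not the authors’ software.

```python
# Hedged sketch of one stabilizer measurement with interleaved erasure
# conversion, paraphrasing panel (d); all callables are assumed stand-ins.

def measure_stabilizer(ancilla, data, apply_gate, erasure_check, replace):
    """Measure a four-qubit stabilizer, checking for erasures after each gate."""
    erasure_sites = []
    for d in data:                     # D1..D4, in the order the arrows indicate
        apply_gate(ancilla, d)         # two-qubit gate via the Rydberg state
        if erasure_check(ancilla, d):  # did either atom leave subspace Q?
            # Replacing both atoms is stricter than necessary, but it
            # protects against undetected leakage on the second atom.
            replace(ancilla)
            replace(d)
            erasure_sites.append(d)
    return erasure_sites               # erasure locations fed to the decoder
```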

The paper, “Erasure conversion for fault-tolerant quantum computing in alkaline earth Rydberg atom arrays,” was published Aug. 9 in Nature Communications. The work was supported by the National Science Foundation QLCI Center for Robust Quantum Simulation, as well as grants from the Army Research Office, the Office of Naval Research, the Defense Advanced Research Projects Agency and the Sloan Foundation.


Source: Steven Schultz, Princeton University
