Quantinuum Launches H2, Reports Breakthrough in Work on Topological Qubits

By John Russell

May 9, 2023

Today is a good day for trapped-ion quantum computer developer Quantinuum. The young company launched its newest system – System Model H2 – with 32 qubits capable of all-to-all connectivity. Perhaps more importantly, Quantinuum also reported “controlled creation and manipulation of non-Abelian anyons” on the H2. These elusive quasiparticles are thought to be key to developing inherently error-resistant topological qubits which, if realized, would enable fault-tolerant quantum computers.

Quantinuum, you may recall, is a spinout from Honeywell that was formed in 2021 by the merger of Honeywell’s quantum division and quantum software specialist Cambridge Quantum. Strengths of the trapped-ion qubit technology used by Quantinuum include long coherence times and high-fidelity gates, which have helped produce consistently strong quantum volume (QV) benchmark scores. The H2 is being launched with a benchmarked QV of 65,536, higher than the record QV of 32,768 set by System Model H1 in February, and the roadmap calls for expanding H2 to 50 qubits in roughly 12 months.
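For context, quantum volume is conventionally reported as a power of two: QV = 2^n means the machine passed the benchmark on randomly generated n-qubit, depth-n circuits. A quick sketch of what the two figures above correspond to:

import math

# Quantum volume is reported as a power of two: QV = 2**n means the machine
# passed the benchmark on n-qubit, depth-n random circuits.
for qv in (32_768, 65_536):
    n = int(math.log2(qv))
    print(f"QV {qv:,} -> passed at {n} qubits and circuit depth {n}")
# QV 32,768 -> passed at 15 qubits and circuit depth 15  (H1 record)
# QV 65,536 -> passed at 16 qubits and circuit depth 16  (H2 at launch)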

What’s more, collaborator JPMorgan Chase is issuing a paper today on quantum algorithm design for portfolio optimization, with numerical results that were successfully validated on H2 during early access. The H2 system has, in fact, been in operation since November while Quantinuum worked in stealth.

Not a bad day for Quantinuum.

Overall, the bigger news is probably the advance towards developing topological qubits.

In a pre-briefing with HPCwire, Quantinuum president Tony Uttley said, “This is a first of its kind. For over a quarter of a century people have been trying to do this. Theoretically, these things should exist. Now we can make one and prove that we made one. We created these non-Abelian topological qubits and we can control them.”

Rajeeb Hazra, the newly appointed CEO of Quantinuum, said, “For anyone who thought that quantum computers that are able to push forward the boundaries of human knowledge and scientific progress are still in the far distance, today marks a turning point. The H2 provides a breakaway moment for Quantinuum.”

Broadly speaking, topological qubits are made from a class of anyon quasiparticles – Abelian anyons and non-Abelian anyons – that are associated with so-called topological states of matter. The study of non-Abelian anyons, and efforts to create and control them, are active research areas among many quantum developers; Microsoft and the Quantum Science Center (Oak Ridge National Laboratory) are two prominent examples. What makes non-Abelian topological qubits useful is their inherent resistance to noise – all of the things (EM interference, heat, etc.) that cause qubits to decohere.

“There have been two approaches to trying to actually find and create a non-Abelian topological qubit,” said Uttley. “One approach is engineering the material – physically creating a material that, if you supercool it down to basically absolute zero, would demonstrate this behavior. That’s engineering the material. But there’s another approach, which is engineering the wave function. The anyon doesn’t care how it’s made; it just cares that it was made.

“We use this engineering-the-wave-function approach, which requires a quantum computer to do it, because what you’re doing is leveraging entanglement. As you bring these trapped ions together and entangle them, you’re not grabbing the qubits, you’re grabbing the entanglement – which you can, in your mind, imagine as a web of tangles that has been created – and we grab that to create the topological state,” he said.

Frankly, getting a firm handle on topological quantum computing is challenging, as Uttley points out. Quantinuum has released a paper on its work (Creation of Non-Abelian Topological Order and Anyons on a Trapped-Ion Processor) in conjunction with the H2 launch. Below is a brief excerpt from the paper:

“Wavefunctions can exhibit a type of entanglement called ‘topological order’, appearing at the frontiers of condensed matter and high-energy physics and forming the backbone of many proposals for fault-tolerant quantum information processing. Such states come in two levels of complexity. The simplest topological wavefunctions are Abelian, whose pointlike excitations, called anyons, acquire a phase factor upon braiding one around another (see Fig. 1b). They have been proposed as robust quantum memories, and the fractional statistics of Abelian anyons have been verified in certain fractional quantum Hall states. More recently, the correlations associated to Abelian phases have been probed in a variety of engineered quantum devices.

“The situation for non-Abelian topological phases is rather different. These more exotic entangled states host excitations called non-Abelian anyons, which come with internal states. Braiding of non-Abelian particles generically effectuates a matrix action on this degenerate manifold. Such braiding is the operating principle of a topological quantum computer. This is associated to robustness to errors and thus defines a coveted goal, as is more generally the controlled realization of a non-Abelian topological phase—bringing their many remarkable properties under the experimental spotlight.”
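To make the distinction concrete: exchanging Abelian anyons multiplies the wavefunction by a phase, and phases commute, whereas exchanging non-Abelian anyons applies a matrix to a degenerate internal space, and matrices generally do not commute – so the order of braids matters. The minimal numerical sketch below uses two generic unitaries as illustrative stand-ins, not the braid matrices of any particular anyon model:

import numpy as np

# Abelian case: each exchange contributes a phase factor, and phases commute,
# so the order of exchanges cannot matter.
theta1, theta2 = 0.3, 1.1
assert np.isclose(np.exp(1j * theta1) * np.exp(1j * theta2),
                  np.exp(1j * theta2) * np.exp(1j * theta1))

# Non-Abelian case: an exchange acts as a unitary matrix on a degenerate
# internal space. These two rotations are illustrative stand-ins only.
def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

B1, B2 = rz(0.7), rx(1.3)
print("do the two braids commute?", np.allclose(B1 @ B2, B2 @ B1))  # False

Because different braid orders produce different unitaries, sequences of braids can enact computation – the operating principle of a topological quantum computer described in the excerpt.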

Henrik Dreyer, managing director and scientific lead, described how the approach is implemented on Quantinuum systems. “There’s always this base layer, the ions in the trap, and we shine lasers at them and that’s how they change their state. That’s always going to happen. We didn’t change what we’re doing in the trap for this specific anyon experiment. The ion qubits are the zeros and ones, and there’s a lot of information that you can encode in 32 qubits – around 4 billion states. [But] there’s also another layer, and that layer is about the description of the system.

“You can group the 4 billion states into different classes of states. One particular class of states includes these topological states, which are among the most difficult to find for ions. If you just initialize everything (the ions), it is not topological; you have to follow a very specific path to find these topological states, and at this point it’s just a specific definition of the amplitudes of that state, just a description of the state.”

Information encoded in these global or topological states, by virtue of the braiding, resists error. “The usual noise processes that we’re trying to combat in quantum computing never change the topology of the state. Once you’re in this class, this topological class of states, you’re never going to get out of it just by noise,” said Dreyer.

Done effectively, the use of topological qubits should produce fault-tolerant computing. (Apologies for any garbling here – best to read the paper directly.) Uttley says the latest work on topological qubits shows a clear, realistic path for Quantinuum to follow towards creating a fault-tolerant quantum computer. There’s still work to be done. For example, a universal gate set for the new approach still needs to be developed, though Dreyer says there’s a fair amount of available information to help guide that process. Work is also needed to stabilize anyon braids long enough to do computation. But the Quantinuum work is significant.

Uttley contends Quantinuum’s H2 is the only quantum system on which this kind of work can currently be done. He also says the H2 – even before there’s such a thing as topological computing – is likely to be a useful tool for others doing basic materials research.

Heather West, IDC’s lead analyst for quantum computing, was bullish on the progress towards topological quantum computing and on the new System Model H2 as a demonstration of Quantinuum’s ability to scale its trapped-ion platform.

“I think the topological approach, if it works as it is supposed to, is very important. This is supposed to be the most reliable approach to achieving fault tolerance. I think this development is one of the developments in 2023 that is going to send us into the next technological wave for quantum, which is going to bring us closer to this quantum advantage and this quantum value that people have been talking about,” said West.

About the new system, West said, “This all-to-all connectivity gives you much more leeway to use those qubits in a fashion that you weren’t able to use them before. I think it’s going to allow for deeper circuits to be developed and allow more complex problems to be solved, which is going to deliver value.”

Let’s turn to the new H2 system.

The new architecture leans heavily on past learnings and the prior H1 approach. That’s intentional, says Uttley. The H1 system was linear, with separate zones for loading, staging, gating, and entangling. The H2 architecture enhances all of the control mechanisms and adopts an oval racetrack configuration that conserves real estate and smooths ion transport.

Quantinuum System H2 chip. Credit: Quantinuum

Trapped-ion technology has sometimes been criticized as difficult to scale and too slow (switching speeds). Uttley counters that what users want are good qubits, not more unreliable ones – and unreliable qubits are indeed a common complaint. As for scaling, Quantinuum has developed and steadily improved its ability to move ions around with its quantum charge-coupled device (QCCD) architecture.

Overall, the chip-based quantum processor is housed in what Quantinuum calls a physics package: an ultra-high-vacuum chamber made of titanium, cooled with liquid helium, along with other instrumentation. The cooling is to provide a better vacuum, not to cool the qubits. This is unlike superconducting qubits, which require dilution refrigerators to cool QPUs to a few millikelvin. Quantinuum’s qubits are made from ytterbium (Yb+)/barium (Ba+) ion pairs. Magnetic fields are used to contain the ions, and lasers are used to nudge them around.

Lasers are used to cool the qubits – basically reduce their motion – to microkelvin temperatures. “It turns out, because they’re both ions and have charge, that they connect [via] a resonance between them. Think about them as being on a spring, and it turns out if you cool one down, it slows the other one down. Most of the time, what we’re doing is cooling the barium to slow it down to almost the ground state. It’s part of our secret sauce,” said Uttley.

H2 features all-to-all connectivity between qubits, meaning that every qubit in the H2 can directly be pairwise entangled with any other qubit in the system. Quantinuum says, “Near-term doing so reduces the overall errors in algorithms, and long term opens up additional opportunities for new, more efficient error correcting codes – both critical for continuing to accelerate the capabilities of quantum computing. When combined with the demonstration of controlled non-Abelian anyons, the integrated achievement highlights an important step in topological quantum information storage and processing.”
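A rough way to see why connectivity matters: on hardware with a fixed nearest-neighbor layout, a two-qubit gate between distant qubits must first be routed with SWAP operations, each typically compiled into three two-qubit gates, and every extra gate adds error. The sketch below assumes a simple 1D nearest-neighbor chain purely as a point of comparison; with all-to-all connectivity the routing overhead is zero.

def extra_swaps_linear_chain(i: int, j: int) -> int:
    # SWAPs needed to bring qubits i and j next to each other on a 1D
    # nearest-neighbor device before a two-qubit gate can act on them.
    return max(0, abs(i - j) - 1)

# Each SWAP is typically compiled into three two-qubit gates, so routing
# overhead translates directly into extra error on noisy hardware.
for i, j in [(0, 1), (0, 15), (0, 31)]:
    s = extra_swaps_linear_chain(i, j)
    print(f"gate on qubits {i} and {j}: {s} SWAPs (~{3 * s} extra two-qubit "
          f"gates) on a line, 0 with all-to-all connectivity")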

Other prominent H2 features include qubit reuse, mid-circuit measurement with conditional logic, “industry leading high-fidelity qubit operations”, and long coherence time. Quantinuum has issued a paper on the H2 architecture.
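Mid-circuit measurement with conditional logic means a qubit can be measured partway through a circuit and later gates applied or skipped depending on the outcome. Here is a minimal statevector sketch of the idea in plain NumPy (not Quantinuum’s API): a Bell pair is prepared, one qubit is measured mid-circuit, and the other is flipped only if the result was 1, leaving it deterministically in |0>.

import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def on_qubit(gate, qubit):
    # Embed a 1-qubit gate into the 2-qubit space (basis order |q0 q1>).
    return np.kron(gate, I) if qubit == 0 else np.kron(I, gate)

# CNOT with control qubit 0, target qubit 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Prepare a Bell pair: H on qubit 0, then CNOT 0 -> 1
state = np.array([1, 0, 0, 0], dtype=complex)   # |00>
state = CNOT @ (on_qubit(H, 0) @ state)

# Mid-circuit measurement of qubit 0 in the computational basis
p0 = np.sum(np.abs(state[:2]) ** 2)             # indices 0,1 have q0 = 0
outcome = 0 if rng.random() < p0 else 1
proj = np.zeros(4)
proj[:2] = 1.0 if outcome == 0 else 0.0
proj[2:] = 0.0 if outcome == 0 else 1.0
state = proj * state
state /= np.linalg.norm(state)

# Conditional logic: flip qubit 1 only if the measured result was 1
if outcome == 1:
    state = on_qubit(X, 1) @ state

print("measured qubit 0 =", outcome, "-> final state:", np.round(state, 3))
# Either branch leaves qubit 1 deterministically in |0>.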

While H2 uses an oval racetrack configuration, Quantinuum’s roadmap calls for moving to a grid architecture, with integrated optics. These grids can be tiled across a chip and eventually stacked in a 3D manner.

The H2 is available now through cloud-based access from Quantinuum and will be available through Microsoft Azure Quantum beginning in June. Also, a noise-informed emulator of H2 is being made available, built with Nvidia’s cuQuantum SDK of optimized libraries and tools.

Uttley said, “Like our first generation, System H2 is upgradable [and] within the next 12 months we will be crossing over 50 qubits in the system.” That growth may cause problems for GPU-based simulation.

“We’ve worked very closely with Nvidia to have it be optimized, because what’s interesting is when we run 32 qubits on our quantum computer and we run 32 qubits on our GPU cluster, most of the time we actually get there faster on our quantum computer. That’s how challenging this has become,” said Uttley. “Think about state vector calculations when you go from 32 up to 40 or over 50. Now you’ve just gone from a GPU cluster to a Frontier or Summit supercomputer to be able to try to go and do that kind of computation.”
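The scaling Uttley describes is easy to quantify for brute-force statevector simulation: an n-qubit state has 2^n complex amplitudes, and at 16 bytes per double-precision complex number the memory alone grows exponentially. A back-of-the-envelope sketch (ignoring the simulator’s additional workspace):

# Memory just to hold a full state vector, at 16 bytes per complex amplitude.
for n in (32, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits: 2^{n} = {amplitudes:,} amplitudes ~ {gib:,.0f} GiB")
# 32 qubits ->        64 GiB  (fits on a large GPU node)
# 40 qubits ->    16,384 GiB  (~16 TiB, a sizable GPU cluster)
# 50 qubits -> 16,777,216 GiB (~16 PiB, leadership-class machine territory)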

Like virtually everyone in the quantum computing community, Quantinuum is hardly waiting for the arrival of fault-tolerant computing before trying to apply its technology and derive revenue. It’s working with leading-edge hopefuls like JPMorgan Chase on long-term projects. Nearer term, Quantinuum has a few offerings.

One is InQuanto, a Python-based computational chemistry tool that can be used now on classical systems to experiment with quantum algorithms for chemistry.

Another, Quantum Origin, is billed as the provider of the world’s only quantum-hardened encryption keys. It’s a cloud-hosted platform that leverages quantum-generated randomness to generate “superior cryptographic keys.” The cryptography market has heated up quickly, fueled by the threat that quantum computers will be able to crack existing RSA codes and by NIST-led efforts to develop quantum-resistant algorithms.

Stay tuned.

Link to press release: https://www.hpcwire.com/off-the-wire/quantinuum-unveils-system-model-h2-a-major-leap-towards-fault-tolerant-quantum-computing/

Link to paper on creating non-Abelian anyons: https://arxiv.org/abs/2305.03766

Link to System H2 architecture paper: https://arxiv.org/abs/2305.03828

Feature image: Quantinuum H2 trapped-ion processor. Credit: Quantinuum
