IonQ Plots Path to Commercial (Quantum) Advantage

By John Russell

July 2, 2024

IonQ, the trapped ion quantum computing specialist, delivered a progress report last week firming up 2024/25 product goals and reviewing its technology roadmap. Next up on the product roadmap is Forte Enterprise, intended to be deployed in client datacenters. In 2025 the company plans to introduce Tempo, its next-gen system that will use barium ions instead of ytterbium ions as qubits and will likely have many of the features being developed to enable IonQ’s modular strategy.

The update webinar was led by CEO Peter Chapman and focused broadly on technology, with Dean Kassmann, recently promoted to SVP of engineering and technology, doing most of the presenting. While there was little new material or granular detail, Kassmann dug deeper into IonQ’s modular architecture, its developing photon-based interconnect scheme to link many QPUs into larger systems, the switch to barium as IonQ’s preferred qubit, and the miniaturization of its vacuum package. IonQ has posted a video of the webinar.

Like many in the quantum computing world, IonQ is focused on delivering value — and gaining revenue — in the noisy intermediate-scale quantum (NISQ) era while still pursuing long-range plans for fully fault-tolerant quantum computing.

“Right now, we have Harmony and Aria available on the cloud,” said Kassmann. “Forte is available in kind of early access. [It] is our flagship. It’s commercially available. We passed #AQ 36 (algorithmic qubits) early this year on Forte, and that was a year ahead of schedule. As we move forward, Forte Enterprise adds a focus on manufacturability and data center readiness for that system. We’re going to be able to mirror the performance and the capabilities that we have with Forte and add to it customer requests such as robustness and uptime improvements.”

“Tempo is targeting more qubits and higher quality gates. So it’s going to be the first system that we’re using our reconfigurable multi-core quantum architecture (RMQA). We’ll be able to have a multi-core system, and we expect its performance will exceed anything that can be simulated on a classical computer. Tempo is going to be the first system [to use] and leverage barium — we have a number of development systems in place right now, but that’ll be our first commercially available system in barium.”

There are several terms floating around to describe delivering quantum advantage in the near term. IonQ calls it Commercial Advantage, and that idea seems to inform how it describes its systems — rather than using physical qubit count, the company prefers a measure it calls Algorithmic Qubits (#AQ) — a benchmark of sorts, derived loosely from work by the Quantum Economic Development Consortium (QED-C), that defines a set of algorithms a quantum system must be able to perform.

Here’s the #AQ description from IonQ’s website: “A system’s qubit count reveals information about the physical structure of the system but does not indicate the quality of the system, which is the largest indicator of utility. For a qubit to contribute to an algorithmic qubit it must be able to run enough gates to successful[ly] return useful results across the 6 algorithms in the #AQ definition. This is a high bar to pass and is the reason many systems’ #AQ is significantly lower than their physical qubit count.”

The higher the #AQ rating, the more capable the machine in IonQ parlance.
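Loosely, an #AQ-style score can be thought of as the largest circuit width at which every benchmark algorithm still clears a result-fidelity threshold. A minimal sketch of that idea follows; the fidelity numbers are made up for illustration (not IonQ data), and this simplifies the actual QED-C-derived methodology:

```python
# Toy illustration of an #AQ-style score: the largest width n at which
# every benchmark algorithm still clears a result-fidelity threshold.
# All numbers below are hypothetical -- not IonQ measurements.

THRESHOLD = 0.37  # illustrative cutoff, roughly 1/e as in QED-C-style plots

# fidelities[algorithm][width] -> measured result fidelity (hypothetical)
fidelities = {
    "bernstein_vazirani": {4: 0.95, 8: 0.80, 16: 0.50, 32: 0.20},
    "quantum_fourier":    {4: 0.90, 8: 0.70, 16: 0.40, 32: 0.10},
    "phase_estimation":   {4: 0.92, 8: 0.75, 16: 0.30, 32: 0.05},
}

def toy_aq(fids, threshold=THRESHOLD):
    """Largest width at which all algorithms stay above the threshold."""
    widths = sorted(next(iter(fids.values())))
    best = 0
    for w in widths:
        if all(per_algo[w] > threshold for per_algo in fids.values()):
            best = w
        else:
            break  # score stops at the first width that fails
    return best

print(toy_aq(fidelities))  # -> 8
```

With these numbers the toy score is 8, not 32, even though the "device" has 32 qubits: the 16-qubit circuits already fall below the threshold, which is exactly the gap between physical qubit count and #AQ that IonQ's description highlights.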

A year ago, IonQ CEO Chapman declared reaching #AQ 64 would be a ChatGPT moment for quantum computing — “At #AQ 64, classical computers will no longer be able to fully simulate an IonQ system, and as a result, we believe these systems will enable customers to tackle certain problems that even the best classical supercomputers can’t solve. We currently expect to deliver #AQ 64 by the end of 2025. We believe IonQ is the only public company today that is executing against a roadmap that can deliver these technical results and sell systems in this time period.” (See the HPCwire article, IonQ Says Reaching #AQ 64 will be a ChatGPT Moment for Quantum Computing.)

IonQ expects to use between 80 and 100 physical qubits to reach its #AQ 64 goal. “The additional qubits will help with operation and support the overall width requirements needed to execute circuits. These and other hardware improvements, along with software advances, will enable us to deliver #AQ 64,” IonQ told HPCwire.
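The "can't fully simulate" claim rests on simple arithmetic: a brute-force statevector simulation of n qubits must store 2^n complex amplitudes, so memory grows exponentially. A back-of-the-envelope sketch (ignoring tensor-network and other compression tricks that can sometimes push a bit further):

```python
# Memory needed to hold a full n-qubit statevector, at 16 bytes per
# complex128 amplitude. This is the naive worst case; specialized
# simulators can sometimes compress, but not in general.

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (36, 50, 64):
    tib = statevector_bytes(n) / 2**40  # tebibytes
    print(f"{n} qubits: {tib:,.0f} TiB")
# 36 qubits fit in ~1 TiB; 64 qubits need 2**28 TiB (256 EiB),
# far beyond any classical machine's memory.
```

This is why 36 qubits (Forte's current size) remains classically simulable while a faithful 64-qubit system, if the circuits are deep and entangling enough, is not.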

There are many opinions but so far no consensus on how best to benchmark quantum computers. DARPA just last week released early results from a quantum benchmarking project. Indeed, there are many such public and private quantum benchmarking projects.

Complicating any benchmarking effort is the wide variety of qubit types vying for dominance. IBM, Google, and Rigetti are betting on superconducting-based qubits. IonQ and Quantinuum use trapped ions. Atom Computing and QuEra use neutral atoms. PsiQuantum and Xanadu rely on photonics-based qubits. Microsoft is exploring topological qubits based on the elusive Majorana particle. Each qubit modality has strengths and weaknesses.

What’s new and growing is competitive zeal among companies using similar modalities, particularly among those expecting to field NISQ offerings. IonQ and Quantinuum have both bet big on trapped ions and have competing architectures. Broadly, all trapped ion-based systems have long coherence times, which helps achieve higher gate fidelity, but they also have relatively slow switching speeds. Trapped ion systems can also implement all-to-all qubit connectivity – this is an area where the IonQ and Quantinuum approaches differ. IonQ’s method doesn’t require moving the qubits (ions) around in the trap, while Quantinuum has a sophisticated transport approach that moves the ions to various areas in the trap.
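One way to see why all-to-all connectivity matters: on hardware where qubits only interact with their nearest neighbors, a two-qubit gate between distant qubits must first be routed with SWAP gates, each of which costs extra noisy two-qubit operations. A toy comparison on a hypothetical 1-D nearest-neighbor device (an illustration of the general routing issue, not a model of any specific machine):

```python
# Toy illustration of connectivity overhead. On a hypothetical device
# whose qubits sit on a line and only talk to nearest neighbors, a gate
# between qubits i and j needs SWAPs to bring them adjacent. With
# all-to-all connectivity (as in a trapped-ion chain), any pair can
# interact directly and the routing overhead is zero.

def swaps_needed_on_line(i: int, j: int) -> int:
    """SWAPs to make qubits i and j adjacent on a 1-D nearest-neighbor line."""
    return max(abs(i - j) - 1, 0)

def swaps_needed_all_to_all(i: int, j: int) -> int:
    return 0  # any pair can interact directly

print(swaps_needed_on_line(0, 35))     # 34 extra SWAPs on a 36-qubit line
print(swaps_needed_all_to_all(0, 35))  # 0
```

Since each SWAP typically decomposes into three two-qubit gates, that routing overhead multiplies the error budget on limited-connectivity hardware, which is why both trapped-ion vendors emphasize this property.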

Taking a mild jab at Quantinuum, Kassmann said, “We have known for quite a long time, several decades now, that by shuttling only two qubits into an operational zone you can achieve great fidelity, but as a result end up with a very poor time [to] solution. This was known back in the 2000s [and] was originally developed by NIST.”

“Through our programmable beam steering — AOD (acousto-optic deflector) technology — we’re focused on longer chains. Those longer chains allow us to increase our all-to-all connectivity and also improve time to solution, because we do not have the large shuttling and other overheads in place. This speaks to the philosophy that we have in terms of making those architectural trade-offs for our system and the engineering trade-offs that are required to provide performance, scale, and enterprise grade,” he said.

No doubt Quantinuum would rebut the criticism. The competition among those seeking footholds in the NISQ-era market is clearly heating up. IonQ has articulated a strategy with three legs — performance, scale, and enterprise grade — to achieve commercial advantage. Both IonQ and Quantinuum have been pushing hard with impressive advances. (Just yesterday Quantinuum announced joint work with the University of Colorado on improved error correction.)

Chapman reviewed the IonQ strategy:

“The first leg is performance, in particular two-qubit native gate fidelities. Fidelity, in the short term, controls how big a quantum circuit you can run in the NISQ era, and in the long term, [it] determines how much error correction is needed. This is one of our sweet spots. The second leg of the stool is getting to scale,” said Chapman. “While we hope to find commercially significant applications in this NISQ era, the true promise of quantum will need a lot more qubits and faster gate speeds. But just as importantly, as we scale up and network these quantum computers, we need to reduce the cost of the machines to make them affordable, because future quantum computers are going to be made up of networked individual machines, so the cost per qubit needs to go down as the computational power increases.”

“The last (third) leg is what we call enterprise grade, and to be honest, [we had] a little bit of trouble trying to label this one, because it encompasses so much,” he said. “You can think of it as product maturity, or for that matter, product at all. The reality is, for quantum to meet its promises, all three of these legs are required, [and] to overemphasize one leg means that you have a one-legged stool, which isn’t worth much. Quantum is all about the architectural choices and navigating the compromises. You can optimize one parameter, but often at the expense of another.”

Kassmann said, “Right now, we have 99.6% two-qubit gate fidelity in our Forte systems, [and] about 600-microsecond two-qubit gate times. We have 36 qubits. Our objective for next year is to break three nines in our two-qubit gate fidelity. That’s in long chains with over 100 qubits. This is all on our road to #AQ 64 that we talked about before. We’re going to be reducing gate [times] next year, and as we move forward beyond next year, [drive] greater improvements in native gate fidelities and [reach] logical gate fidelities at six nines in 2026.”
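A rough way to see what “three nines” buys in the NISQ era: if each two-qubit gate succeeds with fidelity p, a circuit with g such gates retains very roughly p^g of its signal. Under that crude model (which ignores error structure, crosstalk, and measurement error), the gate budget before overall circuit fidelity halves is:

```python
import math

# Crude NISQ arithmetic: circuit fidelity ~ p**g for g two-qubit gates
# at per-gate fidelity p. This ignores error structure, crosstalk, and
# measurement error -- a rough model, not a hardware spec.

def gate_budget(p: float, floor: float = 0.5) -> int:
    """Approximate number of two-qubit gates before fidelity drops below `floor`."""
    return math.floor(math.log(floor) / math.log(p))

print(gate_budget(0.996))  # -> 172 gates at today's 99.6%
print(gate_budget(0.999))  # -> 692 gates at three nines
```

By this estimate, moving from 99.6% to 99.9% roughly quadruples the usable two-qubit gate count, which is why the fidelity targets dominate IonQ's near-term roadmap.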

Leveraging photonically interconnected systems would enable scaling to thousands of qubits. IonQ touts its AOD-guided all-to-all connectivity and modular architecture as a practical, efficient approach.

The switch to barium ions from ytterbium, said Kassmann, brings both practical advantages and underlying physics benefits that assist in scaling.

“Barium is enabling us to use visible spectrum lasers. It allows us to leverage standard atomic technologies for higher levels of integration and better stability. It also has long-lived internal states in its atomic structure; those give us lower state preparation [errors] as well as [fewer] measurement errors. But the big advantage it has is it [improves] our fundamental native gate fidelity limits. Those are all part of our story as we move to #AQ 64 and progress,” said Kassmann.

IonQ is leveraging barium’s strengths, for example, when implementing mid-circuit measurement and reset, and barium qubits will be part of the planned Tempo system. Kassmann showed a slide: “If you look on the right-hand side of the slide (below), you will see a picture of our 64 ions in a chain loaded into one of our barium development systems. That was taken earlier this year as part of an internal imaging and readout demonstration we performed as we work through those systems in our barium test beds,” he said. The barium work and internal R&D “is allowing us to leverage some of the underlying physics to scale and simplify the overall engineering.”

One of the advantages inherent to trapped ion and neutral atom based systems is that the cooling needs are less exotic than those of superconducting-based qubits. Oscillating electric fields confine the ions in a line. Lasers are used to cool the ions and limit their jiggling. The ions act as qubits. Lasers are also used to excite the ions into the desired state. Accomplishing all of this, as you might imagine, requires precision engineering and optics but steers clear of the big dilution refrigerators required by superconducting qubits.

“In parallel to some of the photonic interconnect technology development, our research team is also thinking about what we need to do to miniaturize our vacuum packages to support the overall scale of our systems. Vacuum is required in our trapped ion quantum computers to maintain the overall chain. It allows us to manipulate those qubits, isolated from the external environment, and within that vacuum package we use lasers to excite, cool, and do actual readout of the qubits,” said Kassmann.

“I would say current practice and current state of the art for trapped ion systems is to augment vacuum with cryostats, either open cycle or, in our case, closed cycle, to bring the pressure lower than what you can achieve with normal pumping technology. We are currently working toward full room-temperature trap technology with what we’re calling our extreme high vacuum packages.”

“This is vacuum that is basically about the vacuum you find on the surface of the Moon, lower than 10⁻¹² torr. It’s going to allow us to maintain our overall vacuum for days. And so that is a core piece of our scaling [effort] as we try to drive the overall form factor and size down. One of the cool parts of this is that it’s not just the trap itself in the package that’s being scaled. It’s also the buildup of the manufacturing technology and the assembly capability that we have,” said Kassmann.

IonQ finished its presentation with a few slides on relatively recent case histories (summary slides below), notably with the U.S. Navy, Airbus, and Deutsches Elektronen-Synchrotron (DESY). None are in production per se, but they are suggestive of near-term applications. Chapman said IonQ was nearing $100M in bookings for the year but didn’t break out the projects/clients.

Here are a few summary slides from the presentation on recent case histories.
