PsiQuantum’s Path to 1 Million Qubits

By John Russell

April 21, 2022

PsiQuantum, founded in 2016 by four researchers with roots at Bristol University, Stanford University, and York University, is one of a few quantum computing startups that’s kept a moderately low PR profile. (That’s if you disregard the roughly $700 million in funding it has attracted.) The main reason is that PsiQuantum has eschewed the clamorous public chase for NISQ (noisy intermediate-scale quantum) computers and set out to develop a million-qubit system the company says will deliver big gains on big problems as soon as it arrives.

When will that be?

PsiQuantum says it will have all the manufacturing processes in place “by the middle of the decade” and it’s working closely with GlobalFoundries (GF) to turn its vision into reality. The generous size of its funding suggests many think it will succeed. PsiQuantum is betting on a photonics-based approach called fusion-based quantum computing (paper) that relies mostly on well-understood optical technology but requires extremely precise manufacturing tolerances to scale up. It also relies on managing individual photons, something that has proven difficult for others.

Here’s the company’s basic contention:

Success in quantum computing will require large, fault-tolerant systems and the current preoccupation with NISQ computers is an interesting but ultimately mistaken path. The most effective and fastest route to practical quantum computing will require leveraging (and innovating) existing semiconductor manufacturing processes and networking thousands of quantum chips together to reach the million-qubit system threshold that’s widely regarded as necessary to run game-changing applications in chemistry, banking, and other sectors.

It’s not that incrementalism is bad. In fact, it’s necessary. But it’s not well served when focused on delivering NISQ systems, argues Peter Shadbolt, one of PsiQuantum’s founders and its current chief scientific officer.

Peter Shadbolt, PsiQuantum

“Conventional supercomputers are already really good. You’ve got to do some kind of step change, you can’t increment your way [forward], and especially you can’t increment with five qubits, 10 qubits, 20 qubits, 50 qubits to a million. That is not a good strategy. But it’s also not true to say that we’re planning to leap from zero to a million,” said Shadbolt. “We have a whole chain of incrementally larger and larger systems that we’re building along the way. Those allow us to validate the control electronics, the systems integration, the cryogenics, the networking, etc. But we’re not spending time and energy, trying to dress those up as something that they’re not. We’re not having to take those things and try to desperately extract computational value from something that doesn’t have any computational value. We’re able to use those intermediate systems for our own learnings and for our own development.”

That’s a much different approach from the majority of quantum computing hopefuls. Shadbolt suggests the broad message about the need to push beyond NISQ dogma is starting to take hold.

“There is a change that is happening now, which is that people are starting to program for error-corrected quantum computers, as opposed to programming for NISQ computers. That’s a welcome change and that’s happening across the whole space. If you’re programming for NISQ computers, you very rapidly get deeply entangled – if you’ll forgive the pun – with the hardware. You start looking under the hood, and you start trying to find shortcuts to deal with the fact that you have so few gates at your disposal. So, programming NISQ computers is a fascinating, intellectually stimulating activity, I’ve done it myself, but it rapidly becomes sort of siloed and you have to pick a winner,” said Shadbolt.

“With fault tolerance, once you start to accept that you’re going to need error correction, then you can start programming in a fault-tolerant gate set which is hardware agnostic, and it’s much more straightforward to deal with. There are also some surprising characteristics, which mean that the optimizations that you make to algorithms in a fault-tolerant regime are in many cases, the diametric opposite of the optimizations that you would make in the NISQ regime. It really takes a different approach but it’s very welcome that the whole industry is moving in that direction and spending less time on these kinds of myopic, narrow efforts,” he said.

That sounds a bit harsh. PsiQuantum is no doubt benefitting from the manifold efforts by the young quantum computing ecosystem to tout advances and build traction by promoting NISQ use cases. There’s an old business axiom that a little hype is often a necessary lubricant to accelerate development of young industries; quantum computing certainly has its share. A bigger question is whether PsiQuantum will beat rivals to the end-game. IBM has laid out a detailed roadmap and said 2023 is when it will start delivering quantum advantage, using a 1,000-qubit system, with plans for eventual million-qubit systems. Intel has trumpeted its CMOS strength to scale up manufacturing of its quantum dot qubits. D-Wave has been selling its quantum annealing systems to commercial and government customers for years.

It’s really not yet clear which of the qubit technologies – semiconductor-based superconducting, trapped ions, neutral atoms, photonics, or something else – will prevail and for which applications. What’s not ambiguous is PsiQuantum’s Go Big or Go Home strategy. Its photonics approach, argues the company, has distinct advantages in manufacturability and scalability, operating environment (less frigid), ease of networking, and error correction. Shadbolt recently talked with HPCwire about the company’s approach, technology and progress.

What is fusion-based quantum computing?

Broadly, PsiQuantum uses a form of linear optical quantum computing in which individual photons are used as qubits. Over the past year and a half, the previously stealthy PsiQuantum has issued several papers describing the approach while keeping many details close to the vest (papers listed at end of article). The computation flow is to generate single photons and entangle them. PsiQuantum uses dual-rail encoding, in which each photonic qubit is defined across a pair of waveguide modes. The entangled photons are the qubits and are grouped into what PsiQuantum calls resource states (small collections of qubits, if you will). Fusion measurements (more below) act as gates. Shadbolt says the operations can be mapped to a standard gate-set to achieve universal, error-corrected quantum computing.
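As a toy illustration of the dual-rail idea, a single photon shared between two waveguide modes can be modeled as a two-component amplitude vector, with a 50:50 beam splitter acting as a fixed 2x2 unitary on the two rails. This sketch is purely illustrative; the matrix convention and variable names are this article's, not PsiQuantum's:

```python
import numpy as np

# Dual-rail qubit: one photon across two waveguide modes.
# |0> = photon in rail 0, |1> = photon in rail 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A 50:50 beam splitter acts on the two rails as a fixed 2x2 unitary.
BS = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])

state = BS @ ket0              # photon in equal superposition of both rails
probs = np.abs(state) ** 2
print(probs)                   # detection probabilities: [0.5 0.5]

# Two identical beam splitters in series route the photon
# deterministically to the opposite rail (up to a global phase).
state2 = BS @ BS @ ket0
print(np.abs(state2) ** 2)     # [0. 1.]
```

Interference of this kind, plus single-photon detection, is the raw material from which the probabilistic entangling operations described below are built.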

On-chip components carry out the process. It all sounds quite exotic, in part because it differs from the more widely used matter-based qubit technologies. The figure below, taken from a PsiQuantum paper (Fusion-based quantum computation) issued about a year ago, roughly describes the process.

Digging into the details is best done by reading the papers, and the company has archived videos exploring its approach on its website. The video below is a good brief summation by Mercedes Gimeno-Segovia, vice president of quantum architecture at PsiQuantum.

Shadbolt also briefly described fusion-based quantum computation (FBQC).

“Once you’ve got single photons, you need to build what we refer to as seed states. Those are pretty small entangled states and can be constructed again using linear optics. So, you take some single photons and send them into an interferometer and together with single photon detection, you can probabilistically generate small entangled states. You can then multiplex those again and basically the task is to get as fast as possible to a large enough, complex enough, appropriately structured, resource state which is ready to then be acted upon by a fusion network. That’s it. You want to kill the photon as fast as possible. You don’t want photons living for a long time if you can avoid it. That’s pretty much it,” said Shadbolt.
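The multiplexing step Shadbolt describes can be caricatured with a few lines of probability: if each heralded source succeeds with probability p per clock cycle, firing n of them in parallel behind a switch raises the odds of having a photon to 1 − (1 − p)^n. The numbers below (p = 0.1, n = 32) are illustrative assumptions, not PsiQuantum figures:

```python
import random

def multiplexed_success(p_single: float, n_sources: int) -> bool:
    """One clock cycle: fire n probabilistic heralded sources in parallel;
    a switch network routes out a photon if at least one of them fired."""
    return any(random.random() < p_single for _ in range(n_sources))

p, n, trials = 0.1, 32, 100_000
analytic = 1 - (1 - p) ** n          # ~0.966 for these toy numbers
observed = sum(multiplexed_success(p, n) for _ in range(trials)) / trials
print(analytic, observed)
```

The point of the exercise: even sources that fail 90% of the time can, with enough parallelism and fast switching, feed the fusion network almost every cycle.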

“The fusion operators are the smallest, simplest piece of the machine. The multiplexed single-photon sources are the biggest, most expensive piece. Everything in the middle is kind of the secret sauce of our architecture; some of that we’ve put out in that paper and you can see kind of how that works,” he said. (At the risk of overkill, another brief description of the system from PsiQuantum is presented at the end of the article.)

One important FBQC advantage, says PsiQuantum, is that the shallow depth of optical circuits makes error correction easier. “The small entangled states fueling the computation are referred to as resource states. Importantly, their size is independent of code distance used or the computation being performed. This allows them to be generated by a constant number of operations. Since the resource states will be immediately measured after they are created, the total depth of operations is also constant. As a result, errors in the resource states are bounded, which is important for fault-tolerance.”

Some of the differences between the PsiQuantum’s FBQC design and the more familiar MBQC (measurement-based quantum computing) paradigm are shown below.

Another advantage is the operating environment.

“Nothing about photons themselves requires cryogenic operation. You can do very high fidelity manipulation and generation of qubits at room temperature, and in fact, you can even detect single photons at room temperature just fine. But the efficiency of room temperature single photon detectors is not good enough for fault tolerance. These room temperature detectors are based on pretty complex semiconductor devices, avalanche photodiodes, and there’s no physical reason why you couldn’t push those to the necessary efficiency, but it looks really difficult [and] people have been trying for a very long time,” said Shadbolt.

“We use a superconducting single-photon detector, which can achieve the necessary efficiencies without a ton of development. It’s worth noting those detectors run in the ballpark of 4 Kelvin. So liquid helium temperature, which is still very cold, but it’s nowhere near as cold as milli-Kelvin temperatures required for superconducting qubits or some of the competing technologies,” said Shadbolt.

This has important implications for control circuit placement as well as for the reduced power needed to maintain the 4 Kelvin environment.

There’s a lot to absorb here, and it’s best done directly from the papers. PsiQuantum, like many other quantum start-ups, was founded by researchers who were already digging into the quantum computing space, and they’ve shown that PsiQuantum’s FBQC flavor of linear optical quantum computing will work. While at Bristol, Shadbolt was involved in the first demonstration of running a variational quantum eigensolver (VQE) on a photonic chip.

The biggest challenges for PsiQuantum, he suggests, are developing manufacturing techniques and system architecture around well-known optical technology. The company argues having a Tier-1 fab partner such as GlobalFoundries is decisive.

PsiQuantum wafer

“You can go into infinite detail on the architecture and how all the bits and pieces go together. But the point of optical quantum computing is that the network of components is pretty complicated – all sorts of modules and structures and multiplexing strategies, and resource state generation schemes and interferometers, and so on – but they’re all just made out of beam splitters, and switches, and single photon sources and detectors. It’s kind of like in a conventional CPU, you can go in with a microscope and examine the structure of the cache and the ALU and whatever, but underneath it’s all just transistors. It’s the same kind of story here. The limiting factor in our development is the semiconductor process enablement. The thesis has always been that if you tried to build a quantum computer anywhere other than a high-volume semiconductor manufacturing line, your quantum computer isn’t going to work,” he said.

“Any quantum computer needs millions of qubits. Millions of qubits don’t fit on a single chip. So you’re talking about heaps of chips, probably billions of components realistically, and they all need to work and they all need to work better than the state of the art. That brings us to the progress, which is, again, rearranging those various components into ever more efficient and complex networks in pretty close analogy with CPU architecture. It’s a very key part of our IP, but it’s not rate limiting and it’s not terribly expensive to change the network of components on the chip once we’ve got the manufacturing process. We’re continuously moving the needle on that architecture development and we’ve improved these architectures in terms of their tolerance to loss by more than 150x, [actually] well beyond that. We’ve reduced the size of the machine, purely through architectural improvements by many, many orders of magnitude.

“The big, expensive, slow pieces of the development are in being able to build high quality components at GlobalFoundries in New York. What we’ve already done there is to put single photon sources and superconducting nanowire, single photon detectors into that manufacturing process engine. We can build wafers, 300-millimeter wafers, with tens of thousands of components on the wafer, including a full silicon photonics PDK (process design kit), and also a very high performing single photon detector. That’s real progress that brings us closer to being able to build a quantum computer, because that lets us build millions to billions of components.”

PsiQuantum FBQC processor

Shadbolt says real systems will quickly follow development of the manufacturing process. PsiQuantum, like everyone in the quantum computing community, is collaborating closely with potential users. Roughly a week ago, it issued a joint paper with Mercedes-Benz discussing quantum computer simulation of Li-ion chemistry. If the PsiQuantum-GlobalFoundries process is ready around 2025, can a million-qubit system (100 logical qubits) be far behind?

Shadbolt would only say that things will happen quickly once the process has been fully developed. He noted there are three ways to make money with a quantum computer: sell machines, sell time, and sell solutions that come from that machine. “I think we were exploring all of the above,” he said.

“Our customers, which is a growing list at this point – pharmaceutical companies, car companies, materials companies, big banks – are coming to us to understand what a quantum computer can do for them. To understand that, what we are doing, principally, is fault-tolerant resource counting,” said Shadbolt. “So that means we’re taking the algorithm or taking the problem the customer has, working with their technical teams to look under the hood, and understand the technical requirements of solving that problem. We are turning that into the quantum algorithms and subroutines that are appropriate. We’re compiling that for the fault-tolerant gate set that will run on top of that fusion network, which by the way is a completely vanilla textbook fault-tolerant gate set.”

Stay tuned.

PsiQuantum Papers

Fusion-based quantum computation, https://arxiv.org/abs/2101.09310

Creation of Entangled Photonic States Using Linear Optics, https://arxiv.org/abs/2106.13825

Interleaving: Modular architectures for fault-tolerant photonic quantum computing, https://arxiv.org/abs/2103.08612

Description of PsiQuantum’s Fusion-Based System from the Interleaving Paper

“Useful fault-tolerant quantum computers require very large numbers of physical qubits. Quantum computers are often designed as arrays of static qubits executing gates and measurements. Photonic qubits require a different approach. In photonic fusion-based quantum computing (FBQC), the main hardware components are resource-state generators (RSGs) and fusion devices connected via waveguides and switches. RSGs produce small entangled states of a few photonic qubits, whereas fusion devices perform entangling measurements between different resource states, thereby executing computations. In addition, low-loss photonic delays such as optical fiber can be used as fixed-time quantum memories simultaneously storing thousands of photonic qubits.

“Here, we present a modular architecture for FBQC in which these components are combined to form “interleaving modules” consisting of one RSG with its associated fusion devices and a few fiber delays. Exploiting the multiplicative power of delays, each module can add thousands of physical qubits to the computational Hilbert space. Networks of modules are universal fault-tolerant quantum computers, which we demonstrate using surface codes and lattice surgery as a guiding example. Our numerical analysis shows that in a network of modules containing 1-km-long fiber delays, each RSG can generate four logical distance-35 surface-code qubits while tolerating photon loss rates above 2% in addition to the fiber-delay loss. We illustrate how the combination of interleaving with further uses of non-local fiber connections can reduce the cost of logical operations and facilitate the implementation of unconventional geometries such as periodic boundaries or stellated surface codes. Interleaving applies beyond purely optical architectures, and can also turn many small disconnected matter-qubit devices with transduction to photons into a large-scale quantum computer.”
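To get a rough feel for the “multiplicative power of delays” in that passage: light in silica fiber travels at about two-thirds of c, so a 1-km delay holds each photon for roughly 5 microseconds. The resource-state clock rate below is a hypothetical placeholder (the excerpt above does not commit to one), making this a back-of-envelope sketch rather than a figure from the paper:

```python
# Back-of-envelope: a fiber delay line as a fixed-time quantum memory.
fiber_length_m = 1_000           # the 1-km delay mentioned in the paper
group_velocity_m_s = 2.0e8       # ~c/1.5 in silica fiber
rsg_clock_hz = 1.0e9             # hypothetical resource-state rate (assumption)

delay_s = fiber_length_m / group_velocity_m_s            # ~5 microseconds
states_in_flight = fiber_length_m * rsg_clock_hz / group_velocity_m_s
print(delay_s, states_in_flight)  # -> 5e-06 5000.0
```

At those assumed numbers, thousands of photonic qubits are simultaneously “in flight” inside a single spool of fiber, which is the sense in which one RSG plus a delay can stand in for a large array of static qubits.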

Slides/Figures from various PsiQuantum papers and public presentations
