Qubit Watch: Intel Process, IBM’s Heron, APS March Meeting, PsiQuantum Platform, QED-C on Logistics, FS Comparison

By John Russell

May 1, 2024

Intel has long argued that leveraging its semiconductor manufacturing prowess and quantum dot qubits will help it emerge as a leader in the race to deliver practical quantum computing – a race that James Clarke, Intel’s director of quantum hardware, has long maintained will be a marathon, not a (NISQ) sprint.

Today, Intel posted a blog outlining its process progress toward that goal, alongside a paper published today in Nature (Probing single electrons across 300-mm spin qubit wafers).

Intel researcher Samuel Neyens writes in the blog, “Spin qubits based on electrons in silicon have shown impressive control fidelities but have historically been challenged by yield and process variation. To achieve high yield, researchers used a combination of processes from industrial transistor manufacturing. The quantum dots are defined by a planar architecture (see Figure 1). Active gates, used for controlled accumulation, are defined in a single layer. In later devices, a second passive layer for screening/depletion is also integrated. The gate electrodes are isolated from the heterostructure by a high-dielectric-constant composite stack (high-K stack) while neighboring gates are isolated by a spacer stack.”

The highlights as described by Intel include:

  • Quantum computing researchers at Intel Foundry Technology Research developed a 300-mm cryogenic probing process to collect high-volume data on the performance of spin qubit devices across full wafers.
  • The results demonstrate state-of-the-art uniformity, fidelity, and measurement statistics of spin qubits.
  • Researchers also found that single-electron devices from these wafers perform well when operated as spin qubits, achieving 99.9% fidelity for qubits fabricated using CMOS manufacturing.
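
That 99.9% figure can be put in rough perspective. Here’s a back-of-envelope sketch (my illustration, not Intel’s math, assuming independent, uncorrelated gate errors) of how per-gate fidelity bounds usable circuit depth:

```python
# Back-of-envelope: how many consecutive gates can run before the
# accumulated error pushes the circuit's success probability below 50%,
# given a per-gate fidelity F. Assumes independent errors -- illustrative only.

def survival_probability(fidelity: float, n_gates: int) -> float:
    """Probability that n_gates consecutive gates all succeed."""
    return fidelity ** n_gates

def max_depth(fidelity: float, target: float = 0.5) -> int:
    """Largest gate count whose survival probability stays above target."""
    import math
    return int(math.log(target) / math.log(fidelity))

print(max_depth(0.999))   # -> 692
print(max_depth(0.9999))  # -> 6931, i.e. one more "9" buys roughly 10x depth
```

The point of the sketch: each additional “nine” of fidelity multiplies tolerable circuit depth by about ten, which is why wafer-scale uniformity at the 99.9% level matters.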

Here’s the basic Intel pitch:

“Intel is taking steps toward building fault-tolerant quantum computers by improving three factors: qubit density, reproducibility of uniform qubits, and measurement statistics from high volume testing. First, Intel’s silicon spin qubits are smaller and denser than other qubit types such as superconducting and trapped ion qubits, enabling more spin qubits on a chip. The company’s extreme ultraviolet (EUV) lithography helps achieve this density in combination with high volume on devices. Second, making quantum computers with millions of uniform qubits requires highly reproducible and reliable fabrication. Spin qubits leverage Intel’s 300-mm CMOS manufacturing techniques, which routinely produce billions of transistors per chip. Third, developing large-scale quantum computers in the CMOS manufacturing space requires a high-volume 300-mm cryogenic probing system for fast process iteration and learning. Intel’s entire testing process, from alignment to device measurement, is fully automated and programmable, speeding up device data collection by several orders of magnitude compared with the measurement of singular devices in a cryostat.”

Many observers think the rationale is solid and are waiting to see more concrete expression of results in the form of bigger quantum dot systems that are running quantum algorithms. Best to read the blog or paper directly.

IBM’s Heron QPU is Designed for Ganging Up

You may have seen IBM’s announcement earlier this week of a collaboration with RIKEN to install an IBM System Two with a Heron QPU-based system. It’s interesting on several fronts. Heron (133 qubits) was designed to be combined with other QPUs to scale up system size. Pivoting to this approach is part of the IBM roadmap released late last year.

IBM System Two — the enclosure, fridge, and control electronics — is likewise intended to be modular and capable of right-sizing as needed. The RIKEN-IBM collaboration has several goals, not least direct connection of the IBM quantum system to Fugaku. The first iteration will have just one QPU.

“It will have one Heron, and is capable of expanding. The first System Two was installed in New York, and announced at last year’s Summit. The next deployment of this System Two architecture was in January with Korea Quantum Computing (which will be completed in 2028). But while we just announced this installation with RIKEN, we expect its installation to be complete over next year,” said an IBM spokesman.

IBM currently reports Heron has the highest performance metrics of any IBM Quantum processor released to date, “offering a five-fold improvement over the previous best records set by IBM Eagle.” Access to Heron QPUs is available through the System Two in New York “and over the cloud to premium clients.” No doubt more details will emerge at IBM’s Think conference in a few weeks.

PsiQuantum’s Down Under Plan and New Paper

PsiQuantum, the developer of photonics-based quantum computing, has always said its goal was to come out of the box with a fault-tolerant quantum computer and not mess around with NISQ variants. It’s doubling down on that bet, having announced a $620M (USD) investment from the Australian Commonwealth and Queensland Governments to build its first utility-scale quantum computer in Brisbane, Australia.

Leaving aside funding — PsiQuantum may be the best-funded pure-play QC developer — and the choice of Australia as the site to build — Australia, in fact, has been an early pioneer in quantum information technologies — PsiQuantum issued a paper last week detailing its platform progress and directions.

The paper (A manufacturable platform for photonic quantum computing) provides a fair amount of detail around how PsiQuantum plans to actually build its system. Much of this work has been done in close collaboration with GlobalFoundries. Here’s an excerpt:

“Whilst holding great promise for low noise, ease of operation and networking, useful photonic quantum computing has been precluded by the need for beyond-state-of-the-art components, manufactured by the millions. Here we introduce a manufacturable platform for quantum computing with photons. We benchmark a set of monolithically-integrated silicon photonics-based modules to generate, manipulate, network, and detect photonic qubits, demonstrating dual-rail photonic qubits with 99.98% ± 0.01% state preparation and measurement fidelity, Hong-Ou-Mandel quantum interference between independent photon sources with 99.50% ± 0.25% visibility, two-qubit fusion with 99.22% ± 0.12% fidelity, and a chip-to-chip qubit interconnect with 99.72% ± 0.04% fidelity, not accounting for loss. In addition, we preview a selection of next generation technologies, demonstrating low-loss silicon nitride waveguides and components, fabrication-tolerant photon sources, high-efficiency photon-number-resolving detectors, low-loss chip-to-fiber coupling, and barium titanate electro-optic phase shifters.”
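
For a sense of how those module fidelities compose, here is a back-of-envelope sketch (my arithmetic, not PsiQuantum’s; it assumes independent errors, treats the HOM visibility as a fidelity for illustration, and, like the quoted figures, ignores loss):

```python
# Back-of-envelope: if the quoted module fidelities composed independently
# (a simplifying assumption), a prepare -> interfere -> fuse -> interconnect
# chain would retain roughly the product of the stage fidelities.
stages = {
    "state prep & measurement": 0.9998,
    "Hong-Ou-Mandel interference": 0.9950,   # visibility, treated as fidelity
    "two-qubit fusion": 0.9922,
    "chip-to-chip interconnect": 0.9972,
}

combined = 1.0
for name, f in stages.items():
    combined *= f

print(f"{combined:.4f}")  # -> 0.9843
```

Even with every stage above 99%, the chain lands near 98.4%, which is why the paper’s “next generation” loss and fidelity improvements matter so much for fault tolerance.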

In the conclusion, the authors write: “We have described modifications made to an industrial semiconductor manufacturing process for integrated quantum photonics, demonstrating record performance. Through the addition of new materials, designs and process steps, we have enabled volume manufacturing of heralded photon sources and superconducting single photon detectors, together with photon manipulation via interferometry, tunability, and control of unwanted light. We have also described higher-performing devices, towards a resolution of the outstanding limitations of this baseline platform.”

For quantum industry watchers, the paper is worth a read. Link to paper.

March Meeting Snippets from Nature

Quickly capturing the depth and breadth of quantum information research talks presented at the annual APS March Meeting is probably an impossible task, and the journal Nature doesn’t try to do that. However, it did make a short attempt (Harnessing quantum information to advance computing) last week in Nature Computational Science.

No surprise, error correction/mitigation talks caught the writer’s attention: “A pressing issue in the field is the high level of noise in quantum bits (qubits), resulting in an error rate of about 10⁻² to 10⁻³, which is much larger than the ideal error rate (10⁻¹⁵) required for the successful implementation of large-scale quantum algorithms in practical applications. As such, overcoming the effects of noise remains the foremost challenge for advancing the field. At the APS meeting, a total of 14 sessions — possibly the most attended ones in the event, at least to the eye of our editor in attendance — were devoted to quantum error correction (QEC) and quantum error mitigation.”
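
The gap between those two error rates is exactly what QEC must bridge. A minimal sketch of the standard surface-code scaling heuristic makes the arithmetic concrete (the threshold p_th ≈ 1e-2 and prefactor A ≈ 0.1 are assumed illustrative constants, not figures from the article):

```python
# Rough surface-code scaling heuristic:
#   p_L ~ A * (p / p_th) ** ((d + 1) // 2)
# where p is the physical error rate, d the code distance, p_th the
# threshold, and A a prefactor. A and p_th here are illustrative guesses.

def logical_error_rate(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    return A * (p / p_th) ** ((d + 1) // 2)

# With a physical error rate of 1e-3 (the better end of the quoted range),
# each +4 in code distance buys about two orders of magnitude:
for d in (3, 7, 11, 15):
    print(d, logical_error_rate(1e-3, d))
```

Under these assumed constants, reaching the ~10⁻¹⁵ regime requires code distances in the high twenties, i.e. hundreds of physical qubits per logical qubit, which is why QEC dominated the session count.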

Actually, as tough as it would be to accomplish, a more comprehensive summary of QIS talks at the March Meeting would be quite useful; maybe next year Nature could tackle it. Link to article.

In Case You Missed It – QED-C Paper on Logistics

One of the Quantum Economic Development Consortium’s (QED-C) missions is spotlighting emerging uses for quantum computing. In March, the organization issued a paper (Quantum Computing for Transportation and Logistics) that analyzed “83 use cases for quantum technologies in logistics identified by experts,” consolidated them into 15 examples, and ranked the use cases by expected impact and judged feasibility.

  • The use case ranked the most feasible and impactful is continuous route optimization, making it the best target for research. High impact and high feasibility imply less time is required to develop a working solution than is required for other concepts.
  • The concept rated the second most feasible is operating plan design and train scheduling. This concept refers to a process of forecasting needs for a fleet, including the crew, vehicles, and load, and developing a plan to meet the needs.

According to the report, “The ranked use cases suggest where applying quantum computing could be both feasible and impactful. This means even noisy intermediate-scale quantum (NISQ) solutions, while not perfect, can be explored to discover usefulness in the short-term. New investments in quantum computing targeted to the logistics ecosystem could accelerate exploration and expand benefits for business operations and sustainable supply chains.” Link to report.
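
To make the route-optimization use case concrete, here is a minimal sketch (mine, not the report’s) of how such problems are typically encoded for NISQ-era solvers: a toy three-stop tour written as a QUBO, solved here by brute force where a quantum annealer or QAOA circuit would search instead. The distances and penalty weight are made up for illustration.

```python
from itertools import product

# Toy route optimization as a QUBO: x[i][t] == 1 means stop i is visited
# at step t. Constraints (each stop once, each step one stop) become
# quadratic penalty terms -- the encoding annealers and QAOA target.
D = [[0, 2, 9],
     [2, 0, 6],
     [9, 6, 0]]            # made-up pairwise distances
n = 3
P = 20                     # penalty weight, chosen larger than any tour cost

def cost(x):
    tour = sum(D[i][j] * x[i][t] * x[j][(t + 1) % n]
               for i in range(n) for j in range(n) for t in range(n))
    pen = sum((sum(x[i][t] for t in range(n)) - 1) ** 2 for i in range(n))
    pen += sum((sum(x[i][t] for i in range(n)) - 1) ** 2 for t in range(n))
    return tour + P * pen

# Exhaustive search over all 2^9 bit assignments stands in for the sampler.
best = min(product((0, 1), repeat=n * n),
           key=lambda bits: cost([list(bits[i * n:(i + 1) * n]) for i in range(n)]))
x = [list(best[i * n:(i + 1) * n]) for i in range(n)]
print(cost(x))  # -> 17, the length of the only 3-stop tour (2 + 6 + 9)
```

The exponential blow-up of this brute-force step for realistic fleets is precisely the opening the report sees for quantum samplers.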

Which Quantum Architecture is Best for Financial Applications?

No one really knows the answer to that question, as QC paradigms (gate-based, analog, and quantum annealing) and qubit modalities (superconducting, trapped ion, neutral atom, etc.) are still maturing. Nevertheless, a pair of Korean researchers looked at the question and issued a paper in April (Comparative Study of Quantum-Circuit Scalability in a Financial Problem) that found an edge for trapped ion systems over superconducting qubits in a problem using two-qubit gate counts to determine T-bill pricing.

Researchers Jaewoong Heo and Moonjoo Lee of Pohang University of Science and Technology write, “As the number of evaluation qubits increases, the more precise the outcome expectation value is. This augmentation in qubits, however, also leads to a varied escalation in circuit complexity, contingent upon the type of quantum computing device. By analyzing the number of two-qubit gates in the superconducting circuit and ion-trap quantum system, this study examines that the native gates and connectivity nature of the ion-trap system lead to less complicated quantum circuits. Across a range of experiments conducted with one to nineteen qubits, the examination reveals that the ion-trap system exhibits a two to three factor reduction in the number of required two-qubit gates when compared to the superconducting circuit system.”

Their study has a brief review of techniques often used in FS calculations and the performance of particular algorithms on an IBM superconducting qubit-based system and an IonQ trapped ion machine. “The IBM superconducting quantum system and the IonQ ion-trap system represent two leading approaches, each with distinct advantages and challenges. These platforms differ fundamentally in their physical realization of qubits, the basic units of quantum information, as well as in their operational mechanisms, including how qubits are manipulated and how they interact with each other. This comparative analysis focuses on IBM’s superconducting circuit devices and IonQ’s ion-trap systems, highlighting their native gates, qubit connectivity, and the implications of these characteristics for quantum computing,” write Heo and Lee.

Here’s an excerpt:

  • “IBM’s quantum computers utilize superconducting circuits to create qubits. These circuits operate at extremely low temperatures, close to absolute zero, to maintain superconductivity. Qubits in these systems are typically realized as Josephson junctions, which allow for the creation of superpositions and entanglement, fundamental properties of quantum computing. In terms of connectivity, superconducting qubits are generally arranged in fixed layouts, such as IBM’s heavy-hex lattice. The connectivity is determined by the physical placement of the qubits and the resonators that link them. This architecture allows for direct interactions between neighboring qubits, but interactions between non-neighboring qubits require the use of SWAP operations to move quantum information across the chip, potentially leading to increased operation times and error rates. The native gates of IBM’s superconducting quantum computers typically include several single-qubit gates and one two-qubit gate, where the current basis gates are CX, I, RZ, SX, and X. These gates form a universal set that can implement any quantum algorithm. The precision and speed of these gates are critical for the performance of the quantum computer, with gate errors and decoherence times being key metrics of system quality.
  • “IonQ’s quantum computers use trapped ions as qubits, leveraging the ions’ electronic states to encode quantum information. These ions are trapped and isolated in a vacuum chamber using electromagnetic fields. A significant advantage of ion-trap systems is their flexible qubit connectivity. Unlike superconducting qubits, which have fixed neighbors, ions in a trap can be rearranged using electric fields, allowing any qubit to interact directly with any other qubit. This all-to-all connectivity reduces the need for SWAP operations, potentially offering more efficient quantum algorithms. The native gates in ion-trap systems often include single-qubit rotation gates and the Mølmer-Sørensen (MS(θ)) gate, which is a two-qubit entangling gate. Ion-trap quantum computers can precisely control these gates using laser beams, with the ions’ motion mediating qubit-qubit interactions. This capability allows for the implementation of high-fidelity operations across the entire qubit register.”

As always, it’s best to read the paper directly.

The researchers do find an advantage for the IonQ system in their conclusion:

“In this paper, the evolution of quantum circuits within two distinct systems (superconducting circuits and ion-trap) is investigated through the T-Bill expectation value problem. A better expectation value necessitates a greater number of evaluation qubits, which in turn requires increased entanglement for the implementation of the quantum amplitude estimation algorithm. Theoretically implemented circuits perform identically in terms of the number of two-qubit gates on the ion-trap system; however, an inconsistent increase in the number of two-qubit gates is required on the superconducting circuits system. This discrepancy is attributed to the fixed topology of superconducting circuits quantum processor, as opposed to ion-trap’s all-to-all connectivity, making qubit interaction more challenging and necessitating SWAP operations for qubit connection. The exact number of required SWAP operations, being an NP-hard problem, remains unpredictable. Therefore, the circuit size is incrementally enlarged and the number of two-qubit gates is measured using the transpile function in Qiskit. Data obtained and subsequent fitting into a polynomial graph reveal a difference of a factor of two to three in the highest-order coefficient between ion-trap and superconducting circuits systems, suggesting that the increase in evaluation qubits escalates this disparity, potentially impacting the efficiency of quantum circuits.”
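
The SWAP-overhead argument in that conclusion can be sketched with a toy routing model (my simplification, not the paper’s Qiskit transpile counts): on a linear-connectivity device, entangling two distant qubits costs a chain of SWAPs, each worth three CX gates, while all-to-all hardware pays one entangling gate regardless of distance.

```python
# Toy routing model: two-qubit gate cost of one entangling operation
# between qubits `distance` apart. On a line, we SWAP the qubit over
# (distance - 1) hops, apply the CX, then SWAP back to restore the
# layout; each SWAP decomposes into 3 CX gates. This is an illustrative
# upper-bound model, not an optimized router.

def cx_count_line(distance: int) -> int:
    """Two-qubit gates to entangle qubits `distance` apart on a line."""
    swaps = 2 * (distance - 1)   # route there and restore the layout
    return 3 * swaps + 1         # each SWAP = 3 CX, plus the CX itself

def cx_count_all_to_all(distance: int) -> int:
    return 1                     # e.g. a single Molmer-Sorensen gate

for d in (1, 2, 5, 10):
    print(d, cx_count_line(d), cx_count_all_to_all(d))
```

Even this crude model shows the gate-count gap widening with qubit separation, which is consistent with the factor-of-two-to-three advantage the authors measured via Qiskit’s transpile function.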

Link to paper: https://arxiv.org/abs/2404.04911
