Quantum Watch: Neutral Atoms Draw Growing Attention as Promising Qubit Technology

By John Russell

January 25, 2022

Currently, many qubit technologies are vying for dominance in quantum computing. So far, superconducting (IBM, Google) and trapped ion (IonQ, Quantinuum) approaches have dominated the conversation. Microsoft’s proposed topological qubit, which relies on the existence of a still-unproven particle (the Majorana fermion), may be the most intriguing. Recently, neutral atom approaches have quickened pulses in the quantum community. Advocates argue the technology is inherently more scalable, offers longer coherence times (key for error correction), and point to proof-of-concept 100-qubit systems that have already been built.

Atom Computing, founded in 2018, is one of the neutral atom quantum computing pioneers. Last week, it announced a successful $60 million Series B funding round. It currently has a 100-qubit system (Phoenix) and says it will use the latest cash infusion to build its launch system (Valkyrie), which will have a much larger qubit count and will likely be formally announced in 2022 and brought to market in 2023. The company is also touting a 40-second coherence time for its nuclear spin qubits, which it says is a world record.

Rob Hays, CEO, Atom Computing

“Our first product will launch as a cloud service initially, probably with a partner. And we know which one or two partners [we] want to go with, and just haven’t signed any contracts yet. We’re still kind of negotiating the T’s and C’s,” said Rob Hays, Atom Computing’s relatively new (July ’21) CEO and president. Most recently, Hays was chief strategy officer at Lenovo. Before that, he spent 20-plus years in Intel’s datacenter group, working on Xeon, GPU and OmniPath products.

Company founder Ben Bloom shifted to CTO with Hays’ arrival. Bloom’s Ph.D. work was cold atom quantum research, done with Jun Ye at the University of Colorado. Notably, Ye recently won the 2022 Breakthrough Prize in Fundamental Physics[i] for “outstanding contributions to the invention and development of the optical lattice clock, which enables precision tests of the fundamental laws of nature.” Ye is on Atom Computing’s science advisory board. Company headcount is now roughly 40, and the next leg in its journey is to bring a commercial neutral atom system to market.

If this formula sounds familiar, that’s because it is. There’s been a proliferation of quantum computing startups founded by prominent quantum researchers who, after producing POC systems, bring on veteran electronics industry executives to grow the company. (Link to a growing list of quantum computing/communication companies).

Several of the newer emerging qubit technologies are drawing attention. Quantum market watcher Bob Sorensen of Hyperion Research told HPCwire, “I am somewhat of a fan of neutral atom qubit technology. It’s room temp, it has impressive coherence times, but to me, most importantly, it shows good promise for scaling to a single large qubit processor. Atom, along with France’s Pasqal, is committed to the technology, and they are getting the funding and additional support to keep on their development and deployment track. So we do need to keep an eye on their progress.”

So what is neutral atom-based quantum computing? Bloom and Hays recently briefed HPCwire on the company’s technology and plans.

Broadly, neutral atom qubit technology shares much with trapped ion technology — except, obviously, the atoms aren’t charged. Instead of confining ions with electromagnetic forces, neutral atom approaches use light to trap atoms and hold them in position. The qubits are the atoms themselves, whose nuclear spin states (levels) are manipulated to set the qubit state. Atom Computing recently published a paper (Assembly and coherent control of a register of nuclear spin qubits) describing its approach.

Ben Bloom, founder and CTO, Atom Computing

Bloom said, “We use atoms in the second column of the periodic table (alkaline earth metals). All those atoms share properties. We use strontium, but it doesn’t actually have to be strontium; it could be any [atom] in that column. Similar to trapped ion technology, we capture single atoms and we optically trap them. We create this optical trapping landscape with lasers. The nice thing about this is every atom you trap and put in those light traps is exactly the same. The coherence times you can achieve are really, really long. It was kind of only theorized you could create them that long, but now we’ve shown that you can.”

Hays describes the apparatus. “We put some strontium crystals and a little oven next to the vacuum chamber. There’s a little tube that [takes in the] gaseous form of strontium as [the crystals] get heated up and off-gassed. The atoms are sucked into the vacuum chamber. Then we shine lasers through the little windows in the vacuum chamber to [form] a grid of light, and the little individual atoms that are floating around in there get stuck like a magnet to those spots of light. Once we get them stuck in space, we can actually move them if we want, and we can write quantum information with them using a separate set of lasers at a different wavelength. We’ve got a camera that sits under the microscope objective in the top of the system that reads out the results.

“All that gets fed back into a standard rack of servers that’s running our software stack, you know, the classic compute system off to the side. That’s running our operating system, our scheduler, all the APIs for access, programming, data storage. That rack also has our proprietary radio frequency control system, which is how we control the lasers. We’re basically just controlling how many spots of light there are, and what the frequency, phase and amplitude of those spots of light are. People interact with it remotely.”

It’s pretty cool. Think of a cloud of atoms trapped in the vacuum chamber. Lasers are shined through the cloud along the X and Y axes (2D). Wherever the beams intersect, a sticky spot is created, and nearby atoms get stuck in those spots. You don’t get 100 percent of the sticky spots filled on the first pass, but Atom Computing has demonstrated the ability to move individual atoms to fill in open spots. The result is a 10×10 array of stuck neutral atoms which serve as qubits at addressable locations. The trapped atoms are spaced four microns apart, which is far enough to prevent nuclear spin (qubit state) interaction.
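The trap-grid geometry is easy to picture in a few lines of Python. This is only an illustrative sketch: the 10×10 array size and four-micron pitch come from the article, while the coordinate layout itself is an assumption for visualization.

```python
import itertools

PITCH_UM = 4.0   # trap spacing reported by Atom Computing, in microns
GRID = 10        # 10x10 array of beam intersections -> 100 qubit sites

# Each trap site is an (x, y) coordinate where two laser beams intersect.
sites = [(col * PITCH_UM, row * PITCH_UM)
         for row, col in itertools.product(range(GRID), repeat=2)]

print(len(sites))                # 100 addressable qubit sites
print(max(x for x, _ in sites))  # 36.0 -> the array spans ~36 um per side
```

Each site is individually addressable, which is what lets the company move atoms into any unfilled spot on later passes.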

Entanglement between qubits is accomplished by pumping the atoms up into a Rydberg state. This basically puffs up the atoms’ outer electron shell, enlarging their spatial footprint and permitting them to become entangled with neighbors. This is how Atom Computing gets two-qubit gates.
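The geometric intuition can be sketched as a toy blockade check. The four-micron spacing is from the article; the blockade radius below is a purely hypothetical placeholder (real values depend on the Rydberg state used), so treat this as an illustration of the idea, not Atom Computing's numbers.

```python
PITCH_UM = 4.0            # trap spacing (from the article)
BLOCKADE_RADIUS_UM = 6.0  # hypothetical Rydberg interaction range (illustrative)

def can_entangle(distance_um: float) -> bool:
    """Two atoms can form a two-qubit gate only if, once excited to a
    Rydberg state, each sits within the other's interaction range."""
    return distance_um <= BLOCKADE_RADIUS_UM

print(can_entangle(PITCH_UM))      # nearest neighbors (4 um): within range
print(can_entangle(2 * PITCH_UM))  # next-nearest (8 um): out of range
```

In the ground state the same atoms sit well outside each other's reach, which is why idle qubits four microns apart don't interact.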

“To scale the system, we just simply create more spots of light. So instead of a 10 by 10 array, if we went to a 100 by 100 array of lasers, then we get to 10,000 qubits, and if we went to 1000 by 1000, we get to a million qubits,” said Hays. “So, at four microns [apart], to get to a million qubits we’re still less than a millimeter on a side in a cube. And it’s all, again, wireless control. We don’t have to worry about cabling up the different chips together and then putting them in a dilution refrigerator and all that kind of stuff; we just put more spots of light in the same vacuum chamber and read them with the control systems and the cameras.”
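Hays' scaling arithmetic checks out with a quick calculation (qubit counts and the four-micron pitch are from his quote; note the sub-millimeter figure holds for the 3D cube he describes, while a 2D million-qubit plane would be a few millimeters per side):

```python
PITCH_UM = 4.0  # trap spacing, in microns

def side_length_mm(n_qubits: int, dims: int) -> float:
    """Edge length of a dims-dimensional array of n_qubits atoms
    spaced PITCH_UM apart (distance from first atom to last)."""
    per_side = round(n_qubits ** (1 / dims))
    return (per_side - 1) * PITCH_UM / 1000.0  # microns -> mm

print(side_length_mm(10_000, dims=2))      # 100x100 plane: 0.396 mm per side
print(side_length_mm(1_000_000, dims=3))   # 100x100x100 cube: 0.396 mm per side
print(side_length_mm(1_000_000, dims=2))   # 1000x1000 plane: 3.996 mm per side
```

Either way, the entire register stays far smaller than a superconducting chip, which is the scalability argument in a nutshell.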

Hays noted that achieving 3D arrays is possible, but much trickier. Currently, Atom Computing is focused on 2D arrays. The current 100-qubit system was built with lots of hand-tuning and was intended for experimental flexibility. Moving forward, said Hays, CAD tools with an emphasis on manufacturability and efficient use will guide development of the Valkyrie system. Bloom and Hays declined to say how many qubits it will have.

It will be interesting to watch the ongoing jostling among qubit technologies.

Sorensen said, “I still think it is too early to start picking winners and losers in the qubit modality race… and isn’t that part of the fun right now? In reality, there are lots of variables to consider besides qubit count and other qubit-specific technical parameters. To me, increasingly, the goal to focus on is not how to build a qubit, but how to build a processor. That is why when I look at a modality, I consider its overall architectural potential: does it scale, can you do reasonable I/O to the classical side, does it have ready solutions for networking, and does it require esoteric equipment to manufacture and/or operate in a traditional compute environment?”

The issue, says Sorensen, is that there are many factors to consider here, so a specific modality may not be the only valid indicator of the winner: “IBM, Quantinuum, Rigetti, and IonQ are quite visible in the sector, representing a range of modalities, but they recognize that they need to bring more to the table in terms of vision, experience, market philosophy, and end use relevance. The smart players know that it is entirely possible, as we have seen in the past, that the best pure technology does not always win in the final market analysis.”

Hays emphasizes that Atom Computing is a hardware company and will work with the growing ecosystem for other tools. “We’re focusing on the hardware and the necessary software levels – operating system, scheduler, APIs, etc. – that allow people to interact with the system. [For other needs] we’re working with the ecosystem. We’re going to support Qiskit – we support it internally – and for whichever cloud service provider we choose to go to market with, we’ll support their tool suite as well. Then there are companies like QC Ware, Zapata, Classiq and others that are building their own platforms. We’re going to be very partner friendly.”

Atom Computing says it has early collaborators, but it’s hard to judge progress without fuller public access to the system. It will be interesting to see just how big (in qubit count) the forthcoming system ends up being, and also what benchmarks Atom Computing supplies to the community along the way.

Figure describing Atom Computing’s approach, from its paper.

[i] The Breakthrough Prize in Fundamental Physics is awarded by the Fundamental Physics Prize Foundation, a not-for-profit organization dedicated to awarding physicists involved in fundamental research. The foundation was founded in July 2012 by Russian physicist and internet entrepreneur Yuri Milner.

As of September 2018, this prize is the most lucrative academic prize in the world and is more than twice the amount given to Nobel Prize awardees. It has also been dubbed by the media the “XXI Century Nobel”.
