Quantum Upstart: IonQ Sets Sights on Challenging IBM, Rigetti, Others

By John Russell

June 5, 2019

Until now most of the buzz around quantum computing has been generated by folks already in the computer business – systems makers, chip makers, and big cloud providers. Their efforts have been dominated by semiconductor-based, superconducting approaches. The old saw “to a hammer all else looks like a nail” seems to fit here.

Now, a two-year-old start-up – IonQ – that’s pioneering trapped ion technology for quantum computing is jumping into the fray with some brash claims. IonQ reports there’s less overhead required for error correction with its system, that entangling large numbers of qubits is much easier, and that the base technology is mundane, less costly, and compact. No exotic dilution refrigerators here. Indeed, much of the approach is derived from decades-old atomic clock technology.

Traction for trapped ion technology in the QC world is fairly recent. It was just a year ago that NSF initiated a trapped ion quantum computing project (STAQ). Late last month IonQ installed a new president and CEO, Peter Chapman, whose job is to accelerate commercial success; accomplishing that has eluded everyone in the commercial quantum space so far, as the machines and needed ecosystem (tools, developers, breadth of quantum algorithms, etc.) remain in developmental stages. IonQ’s founding president and CEO, Christopher Monroe, is stepping into the chief scientist role; he is a pioneer in trapped ion technology and one of the authors of an influential 2016 paper[i] on the technology.

Earlier this week Chapman and Stewart Allen, the company COO, briefed HPCwire on IonQ’s technology and roll-out plans. Interestingly much of the conversation focused on hammering home their view that trapped ion technology is set to zoom past the semiconductor-based, superconducting approaches practiced by IBM, Google, and Rigetti Computing.

Based in College Park, MD, not far from the University of Maryland where Monroe did much of his work, IonQ has built three 11-qubit systems. Access to those machines is still “private and in beta stages” with broader access via the web coming, perhaps later this year. Notably, New Enterprise Associates, GV (formerly Google Ventures) and AWS are all investors. In fact, Chapman was director of engineering at Amazon Prime before joining IonQ.

One big problem with quantum computers today is that they are noisy. Qubits, by and large, are delicate things that fall apart when disturbed by virtually anything (heat, vibration, stray electromagnetic influence, etc.). Building systems to eliminate that noise is an ongoing challenge, particularly for systems based on semiconductor-based, superconducting qubits. These noisy systems require daunting error correction approaches that have so far largely proved impractical. A second thorny problem is figuring out how to controllably entangle large numbers of qubits. Don’t forget that it is entanglement that gives quantum computing its real power.

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

Chapman argued trapped ion technology is vastly superior in handling these issues compared with semiconductor-based superconducting approaches. We haven’t heard as much about it, he says, because trapped ion technology grew up in a quieter community unlike the boisterous, jostling world of computer technology suppliers. With the fundamental work now completed, he argues trapped ion technology, and IonQ in particular, will quickly move to the forefront of quantum computing.

Here’s roughly how the trapped ion approach works (apologies in advance for any errors of compression; Monroe’s 2016 paper is terrific and an accessible reference):

Ionized atoms with an appropriate valence structure are used as the qubit registers; IonQ uses ytterbium (Yb) ions. A key strength here is that the ions are essentially identical and reliable in their behavior. Outer electrons can be readily pumped into a higher energy level and have a relatively long time before collapse. Depending on its state, the ion represents a zero or a one. It is straightforward to generate and insert these ions into the ion trap – a “magnetic bottle” if you will – and hold them steady. Chapman used a mag-lev chip analogy, with the ions suspended above the chip.

Interacting with these ions (the qubit registers) is done using external lasers, which ‘perform’ gate operations by putting ions into a given state; likewise, the lasers can be coordinated to interact with one or many qubits and induce entanglement. Unlike semiconductor-based superconducting quantum computers, which require, among other things, exotic deep refrigeration, ion trap systems are cheaper and easier to build and operate.
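The laser-driven gate operations described above can be sketched as plain linear algebra: the ion qubit is a two-level system and a resonant laser pulse acts as a small unitary matrix. This is a generic textbook illustration in NumPy, not IonQ’s actual control stack, and the pulse “area” theta stands in for the real Rabi-frequency-times-duration product.

```python
import numpy as np

# Qubit basis state |0>, mapped to one of two long-lived levels of the ion.
ket0 = np.array([1, 0], dtype=complex)

# A resonant laser pulse rotates the qubit between its two levels.
# The pulse "area" theta (Rabi frequency x duration) sets the rotation angle.
def laser_pulse(theta):
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

# A pi pulse flips |0> to |1> -- a quantum NOT gate.
flipped = laser_pulse(np.pi) @ ket0
print(np.abs(flipped) ** 2)  # measurement probabilities ~ [0, 1]

# A pi/2 pulse puts the ion in an equal superposition of |0> and |1>.
superposed = laser_pulse(np.pi / 2) @ ket0
print(np.abs(superposed) ** 2)  # measurement probabilities ~ [0.5, 0.5]
```

Tuning the pulse duration, rather than swapping hardware, is what selects the gate, which is part of why the control electronics stay comparatively mundane.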

Said Allen, “For the cost of a dilution refrigerator alone, not even given the parts and components and the rest of the things [required for a superconducting quantum computer], you can build an entire ion trap based system of much greater power and capability. It’s also smaller, like the volume size of a kitchen refrigerator versus room-size and you can scale up qubits without changing the physical hardware.

“The vacuum chambers are cheap. The chips are not transistor-based chips. They are just electrodes. The lasers we use are off the shelf. We have to set up some optical paths to route the lasers and impart different waveforms onto the lasers to create the transitions in the ions for operations. None of the stuff is super exotic. They don’t require a lot of power to run. They run off wall power, they don’t need 480 volts or 220 volts.” (Shown below is a figure taken from Monroe’s 2016 paper showing roughly the trapped ion approach.)

IonQ plans to double its qubit count roughly every year. Its current architecture, according to Chapman, can support scaling full mesh connectivity to 32 qubits. That’s impressive. Given the reduced need for error correction and long coherence times relative to gate times, he believes IonQ’s approach will enable tackling larger, more complicated algorithms and shorten the time before someone achieves quantum advantage on an IonQ machine.
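Full mesh connectivity is a bigger deal than it may sound: the number of qubit pairs that can be entangled directly grows quadratically, while a nearest-neighbor grid (the layout typical of superconducting chips) grows only linearly. A back-of-the-envelope comparison, purely illustrative:

```python
# Directly entangleable pairs in a fully connected register of n qubits.
def full_mesh_pairs(n):
    return n * (n - 1) // 2

# Directly entangleable pairs on a nearest-neighbor rows x cols grid:
# horizontal neighbors plus vertical neighbors.
def grid_pairs(rows, cols):
    return rows * (cols - 1) + cols * (rows - 1)

print(full_mesh_pairs(32))  # 496 pairs, any qubit to any other
print(grid_pairs(4, 8))     # 52 pairs for the same 32 qubits on a 4x8 grid
```

On a grid, a two-qubit gate between distant qubits must be routed through chains of SWAP operations, adding circuit depth and error; full connectivity avoids that overhead entirely.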

It all sounds very compelling.

One quantum watcher, Bob Sorensen, VP research and technology, Hyperion Research, offers a nuanced view: “Strictly speaking, trapped ions are a good way to go because there are some significant advantages to the scheme. Trapped ions can build on an existing set of technologies used to develop things like atomic clocks and precision measurement instruments, and they operate at room temperature. In addition, trapped ions have relatively long coherence times – the amount of time a qubit can stay in a superposition state so it can be used to do quantum operations – compared with just about every other QC modality, as well as high gate fidelity, a measure of the error introduced during a QC gate operation.”

Conversely, he noted, “Trapped ion schemes need to be controlled with a complex combination of microwave and optical devices which can be problematic when it comes to scaling trapped ion quantum computers to more than a few qubits. The point here is there are distinct advantages, but also technical hurdles that cannot be ignored. Perhaps more important, to date there has not been much real demonstration of the ability to build large, controllable trapped ion devices. Theoretical advantages are one thing, demonstration of capability on real quantum applications is another.”

IonQ would likely dispute some of that and also argue that it has been steadily advancing the state of the art. In March, the company published two papers: one benchmarking its 11-qubit system and demonstrating high gate fidelity; the second describing work performed on an IonQ system to estimate the ground state energy of a water molecule.

With a bit of marketing bravado, the IonQ website touts: “Our quantum cores use lasers pointed at individual atoms to perform longer, more sophisticated calculations with fewer errors than any quantum computer yet built. In 2019, leading companies will start investigating real-world problems in chemistry, medicine, finance, logistics, and more using our systems.”

At least the last portion seems a stretch. Solid developmental work is ongoing in many quantum camps but use of production-quality applications or quantum algorithms on quantum computers to solve real-world problems seems distant.

Chapman and Allen offered few details on how soon the current “private use, beta user” effort will transform into a broader offering except to say that something web-based is likely later this year. They were also chary about revealing too much of their plans for tool development or even developer community building.

“What we have in our software stack is an API that allows you to run a quantum program. We expect that most people will get to our quantum hardware via cloud providers in the future. In the future you will presumably have, instead of an EC2 instance just as an example, a quantum computer instance. Until the world gets enough experience with quantum computers, they are probably not moving to datacenters. They will probably stay at our datacenter for awhile where we have the expertise to fix them and keep them up and running,” said Chapman.
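To make Chapman’s “quantum computer instance” idea concrete, submitting a job to a cloud-hosted quantum backend might look roughly like the sketch below. Every name here – endpoint, backend identifier, payload schema – is hypothetical; IonQ had not published a public API at the time of writing.

```python
import json
import urllib.request

# Hypothetical endpoint and backend name, invented for illustration only.
ENDPOINT = "https://api.example-quantum-cloud.com/v1/jobs"

job = {
    "target": "ion-trap-11q",   # hypothetical backend identifier
    "shots": 1024,              # how many times to repeat the circuit
    "circuit": [
        {"gate": "h", "targets": [0]},                  # superposition on qubit 0
        {"gate": "cnot", "control": 0, "target": 1},    # entangle qubits 0 and 1
    ],
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(job).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <api-key>"},
)
# response = urllib.request.urlopen(request)  # would return measurement counts
```

The point of such an interface is that the developer never sees the trap, lasers, or vacuum hardware – just a circuit in, measurement counts out, much like any other cloud service call.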

Chapman and Allen emphasize IonQ has invested heavily in compiler and optimizer technology for turning algorithms into runnable items on the computer. Said Allen, “In addition to the advantage of requiring substantially less overhead for error correction, the native gates allow us to do a compression of algorithms without any loss of fidelity; that allows us to run algorithms in fewer steps and a shorter period of time, and that mapping is something that can be hidden from the developer.”
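Allen’s “compression without loss of fidelity” describes what compiler writers call gate fusion: consecutive gates multiply into a single equivalent unitary, so the hardware executes one pulse instead of several. A minimal sketch with generic Z-rotations – illustrative only, not IonQ’s actual native gate set or compiler:

```python
import numpy as np

# A rotation about the Z axis, a typical native single-qubit gate.
def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

# Three Z-rotations in sequence, as an algorithm might naively emit them...
a, b, c = 0.3, 1.1, -0.5
sequential = rz(c) @ rz(b) @ rz(a)

# ...fuse into a single native gate: same unitary, one step instead of three.
fused = rz(a + b + c)
print(np.allclose(sequential, fused))  # True
```

Because the fused gate is mathematically identical to the sequence it replaces, the shorter circuit accumulates less decoherence and control error – exactly the “no loss of fidelity” claim.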

They also cited a growing cadre of what they call Q-tips, consultants who will work with customers to help them get their algorithms and applications running.

Some of IonQ’s aggressive marketing is likely intended to make up ground in the quest for mindshare in the quantum computing community. It’s also in keeping with its QC brethren’s habits; standing out from the growing crowd becomes more difficult as the din around quantum computing grows.

Sorensen said, “The main point to remember is that quantum computing research is still in a nascent stage and much more speculative than traditional computing, where the fundamental device technology has been fixed for a while and universally adopted – silicon-based CMOS – but instead includes a wide range of vastly different schemes, each offering its own set of challenges and opportunities. Indeed, I like to posit that the fundamental QC technology that may support the QCs of the 2030s across a wide range of applications may not even have been conceived yet.

“I look forward to the day when IonQ can demonstrate a significant QC-based application that not only outperforms classical counterparts but that also leads the pack in performance compared with the range of other QC modalities currently under consideration.”

Stay tuned.

[i] K. R. Brown, J. Kim, C. Monroe, “Co-Designing a Scalable Quantum Computer with Trapped Atomic Ions,” https://arxiv.org/abs/1602.02840