Intel’s Jim Clarke on its New Cryo-controller and why Intel isn’t Late to the Quantum Party

By John Russell

December 9, 2019

Intel today introduced the ‘first-of-its-kind’ cryo-controller chip for quantum computing and previewed a cryo-prober tool for characterizing quantum processor chips. The new controller is a mixed-signal SoC named Horse Ridge after one of the coldest regions in Oregon and is designed to operate at approximately 4 Kelvin. Both devices attack key challenges in quantum computing – the ability to scale up the number of qubits in a processor and the ability to quickly characterize them.

The announcements came at the IEEE International Electron Devices Meeting (2019 IEDM) being held this week in San Francisco with more details expected at the 2020 IEEE Solid State Circuits Conference (ISSCC) in February.

To some extent, Intel has avoided the limelight in quantum computing. For one thing, its quantum computing research program is relatively young, roughly four years old. It hasn’t yet produced a named prototype system or publicly engaged in the race to increase qubit counts. Instead Intel insists it has taken a long view and consistently maintained it will be years before quantum computing is mainstream.

Jim Clarke, Intel

Jim Clarke, Intel’s director of quantum hardware, says eight years is a good guess for the time required before we reach ‘Quantum Practicality’, Intel’s term of art for when quantum computers will be able to do useful work. He deflects criticism that Intel is simply late to the game by wondering why most current quantum computing research embraces exotic technologies that are difficult to work with. The current leading contender, the superconducting qubit, requires exotic hardware and extreme cold. Why not CMOS, asks Clarke.

“Let me describe it like this,” said Clarke in pre-briefing with HPCwire. “Superconducting qubits have a lot of momentum because there are already systems that are let’s say 50 qubits. Trapped ions are interesting because the best two-qubit gate, for example, is done with a trapped ion system. Topological qubits are interesting, because they might not need error correction. Nitrogen vacancy or diamond qubits might not have to operate at low temperature.

“This is the kind of feedback I get when I meet with a roomful of academics. They list off these technologies. They don’t usually talk about the limitations of each technology. And relatively few of them will say, ‘Well heck, the world’s entire technology has been based on silicon devices for more than 50 years. Why isn’t this more interesting?’

“So, on the one hand when I think about technologies, relying on the one that has built our entire technological infrastructure for the last 50 years isn’t getting enough attention. And actually, I’m okay with that. Because that means Intel is going to be all that farther ahead. So, all these technologies have strengths and weaknesses, the one that I think has the most potential is the one that’s just building on Moore’s law and good old silicon.”

Intel, not surprisingly, is focused on developing silicon spin qubit technology[I] that leverages existing CMOS manufacturing techniques, although it also has a superconducting effort. “To put [it] in perspective, the current superconducting qubits studied by some of our competitors are roughly a million times larger than our silicon spin qubits which look a lot like transistors,” Clarke said.

Intel isn’t wrong about the formidable technical hurdles facing quantum computing. Its new Horse Ridge cryo-controller is aimed at one of the most vexing problems – connecting to and controlling qubits in a way that permits dramatic scaling up of the number of qubits. Currently, individual wires are used to control qubits and must pass through normal-to-frigid temperature zones to do so.

John Martinis, head of Google’s quantum work, didn’t minimize the challenge in his comments following Google’s public announcement of achieving Quantum Supremacy (see HPCwire coverage) in October. Martinis said, “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.” Asked how many qubits can be squeezed into a dilution refrigerator using wires – thousands or millions – Martinis said, “For thousands, we believe yes. We do see a pathway forward…but we’ll be building a scientific instrument that is really going to have to bring a lot of new technologies.”

Indeed, Google’s 54-qubit Sycamore chip actually functioned as a 53-qubit device during the supremacy exercise because one of the control wires broke.

You get the picture. There’s lots to do before QC hits even a modest practical stride. Amid the quantum noise Intel has been relatively quiet. During the briefing with HPCwire, Clarke discussed the new cryo-controller, the new cryo-prober, Intel’s long-term strategy, and more.

Presented below are a few of Clarke’s comments but first, given Intel’s long-time role as a key component supplier to the electronics world, it is natural to wonder if Intel is considering a similar path within the emerging quantum systems community.

Bob Sorensen, VP of research and technology, Hyperion Research noted, “What is unclear from this announcement is if Intel intends to make this new SoC technology available to the larger QC development community or keep the technology in-house to support their own internal QC development activities. The answer to that question is critical: does Intel plan on building their own soup-to-nuts QC system in-house with all of the associated technical demand from both a hardware and software perspective, or will they make the chip available as a commercial part, seeking to take the first steps in dividing up the commercial QC hardware stack by supplying this and perhaps other key QC sub-assemblies to a wide range of QC hardware developers? Each option brings with it some interesting challenges and opportunities, not only for Intel but for the QC sector writ large.”

We’ll see. Clarke wouldn’t say much on the matter but didn’t rule it out.

HPCwire: Maybe start with a recap of the news. Why is the new cryo-controller important and what does it do?

Stefano Pellerano, Principal Engineer at Intel Labs, holds Horse Ridge. The new cryogenic control chip will speed development of full-stack quantum computing systems, marking a milestone in the development of a commercially viable quantum computer.

Clarke: If what you see is a system of 50 qubits where each qubit is controlled with an individual wire or individual coaxial cable, it’s hard to imagine a system of a million qubits controlled in the same way. These are wires that go out of the [dilution] fridge to a rack of instruments, not unlike what you would see in a university laboratory. It’s a brute force type of control scheme.

What we’ve done, using our baseline CMOS technology, is design a control chip for qubits where this control chip is actually inside the dilution refrigerator. We’ve used a chip fabbed on our 22-nm process line. So this is Intel FinFET technology that has been optimized for performance at low temperature.

Intel is focusing on what’s known as a silicon spin qubit, which looks a lot like a transistor. The energetics of this [type of] qubit allows us to put these control chips in close proximity to the qubit chip; so to a certain extent compared to some of the other technologies out there, like the Google technology, the IBM technology, we’re a little less sensitive to the temperature effects. Ours is basically an RF microwave chip. We put in a fundamental frequency and then we’re able to multiplex it and shift it to the frequencies tailored to the qubits [for control].
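The multiplexing idea Clarke describes can be sketched in a few lines: one fundamental tone on a shared line is shifted to each qubit’s tailored drive frequency, so one cable addresses many qubits. The frequencies and names below are hypothetical illustrations, not Horse Ridge specifications.

```python
# Illustrative sketch of frequency-multiplexed qubit control: a single
# fundamental (local-oscillator) tone is shifted to per-qubit drive
# frequencies. All numbers here are made up for illustration.

LO_GHZ = 12.0  # hypothetical fundamental frequency fed into the controller

# Hypothetical per-qubit frequency offsets (GHz) sharing one output line.
qubit_offsets_ghz = {"q0": -0.45, "q1": -0.20, "q2": 0.10, "q3": 0.35}

def drive_frequency(lo_ghz, offset_ghz):
    """Mixing the LO with an offset tone yields the qubit's drive frequency."""
    return lo_ghz + offset_ghz

drive_plan = {q: drive_frequency(LO_GHZ, off)
              for q, off in qubit_offsets_ghz.items()}

# One physical line now addresses four qubits instead of four separate cables.
print(drive_plan)
```

The point of the sketch is the wiring arithmetic: with per-qubit cabling, control lines grow one-for-one with qubit count; with frequency multiplexing, one line serves as many qubits as can be packed into its usable bandwidth.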

HPCwire: Given the nature of the qubit control problem it sounds like this could be technology or a product offering to other quantum computer systems makers. Does Intel plan to sell the devices as components to others?

Clarke: You ask a good question. Here is what I would say nominally without overcommitting. There are a few things that are interesting. Actually, cryogenic electronics are pretty appealing. These devices actually work well at low temperature. It requires a certain redesign of both the device and the circuit, so it’s nontrivial to design these circuits for low temperature. But there may actually be other cryogenic applications where cryogenic CMOS would be useful.

What we’re finding so far (Intel has bets on both superconducting and spin technologies) is that the control chips, from an efficiency perspective, are better tailored to one technology or another. Horse Ridge has been tested on spin qubits. It could also have been tested on superconducting qubits. As we move to more and more complex chips, we will probably tailor them a little bit more to one technology versus another. That being said, there isn’t anything fundamental that would prevent the co-integration of this chip and other technologies.

Remember, the qubit chip [processor] itself is just one component of a larger system. Intel is working on all the components of that large system. I think when we piece it together, this is one of the reasons why we feel confident that Intel will be in the lead by the time quantum computers become practical – because we have all the puzzle pieces: the control chip, which is made in our factories, the qubit chip, which is made in our factories, and the quantum architecture, which will loosely be based on the Intel architecture.

HPCwire: What can you say about the cryo-prober also announced today?

Clarke: You’re familiar with the dilution refrigerators. We have a bunch of them at Intel. But the experiments are very slow. These refrigerators, you basically put a sample in, you cool it down for a few days, you study it for several weeks, if not longer, and then you warm it up and try again. I’m going to contrast that with what we do at Intel with a 300-millimeter wafer. We take the 300-millimeter wafer off our production line, put it on an electrical prober, and can characterize millions of transistors in an hour. Now that’s at room temperature. It’s a very mature technology and really one of the heartbeats for providing the feedback loop for advancement in semiconductors.

We asked the question, could you combine one of these room temperature probers with a refrigerator? That’s essentially what we’re doing. We’re in the final stages of assembling this tool, and hope to have it at Intel in the next quarter. This is called the cryo-prober. So when we talk about timelines [to practical quantum computing], it’s not only having the algorithms ready, but it’s also how much information you can get to really accelerate the development program. This is what we’re going to be talking about at the [IEDM] conference in San Francisco next week. This cryo-prober, we think, will allow us to go, I would say, 100 times faster (one of my peers would say 10,000 times faster) in terms of device characterization. It’s basically statistical process control and development.
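The throughput gap Clarke is pointing at is easy to put in back-of-envelope terms. The cadence numbers below are rough figures consistent with his description (days to cool, weeks to measure per fridge run), not published cryo-prober specs.

```python
# Illustrative arithmetic only: hypothetical characterization cadences,
# comparing a dilution-fridge cycle against a wafer-scale cryo-prober.

fridge_cycle_days = 30          # cool down for days, study for weeks, warm up
fridge_devices_per_cycle = 1    # one sample per dilution-fridge run

prober_devices_per_day = 10     # hypothetical cryo-prober cadence

fridge_rate = fridge_devices_per_cycle / fridge_cycle_days  # devices per day
speedup = prober_devices_per_day / fridge_rate

print(f"~{speedup:.0f}x faster device characterization")
```

Even with these conservative made-up numbers the factor lands in the hundreds, which is the same order as Clarke’s “100 times faster” estimate; more aggressive wafer-scale cadences would push it toward his colleague’s 10,000x figure.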

So these timelines that I give you are somewhat historical, but also somewhat based on the velocity of how fast we can go, and not only are we developing things like Horse Ridge, to give us a more scalable system, but we’re also developing tools like this [cryo-prober] to help us go much, much faster in our development cycle.

HPCwire: What’s your take on the National Quantum Initiative and industry’s and government’s patience in terms of what you say will be a long journey to practical quantum computing?

Clarke: Both Intel and IBM, I can speak to those, and Microsoft participated in the National Quantum Initiative Act [of last year]. We were all on the same page. I mean we could always have more funding, but this is not an insignificant amount of funding ($1.2 billion) and they recognize that this is a long-term play rather than a short-term deal. I think we’re all quite pleased with how that’s shaping up. The nice thing was the larger players in the quantum space were all on the same page and were sitting next to each other at tables in DC. We had both houses of Congress and the White House supporting it. I think we all recognize that this is a marathon, not a sprint.

One of the thrusts of the NQIA, primarily through NIST, is to help develop at least the hardware ecosystem. So these would be related to refrigerators or, in the case of ion traps, related to laser technology. What’s not seen and what isn’t talked about are things like amplifiers and signal filters that are in every single fridge, no matter the technology. These need advancements [too]. So there’s been something called the QED-C (Quantum Economic Development Consortium) that has spun out of NIST as a result of the NQIA and is focusing on those sorts of aspects of the business. That’s just starting. It tends to have broad attendance and is active throughout the community.

Now, the software ecosystem is kind of interesting. At last check there are more than 100 companies or startups in the quantum space, and most of them are on the software side of things. And it’s interesting. We have all these software companies, but we don’t have enough qubits to actually test them with, and so it’s somewhat upside down from how the software ecosystem has developed for other types of technologies, where the hardware existed first and then came the software. My personal belief is the hardware needs to be a bit more mature before you’re really going to be able to develop the software to go along with it. Some might argue, develop the software first and then make the hardware work with the software. That’s really hard to do when you’re dealing with quantum physics. I think the qubit technologies need to mature first, to see which direction things go.

Feature Photo: A 2018 photo shows Intel’s new quantum computing chip balanced on a pencil eraser. Researchers started testing this “spin qubit chip” at the extremely low temperatures necessary for quantum computing: about 460 degrees below zero Fahrenheit. Intel projects that qubit-based quantum computers, which operate based on the behaviors of single electrons, could someday be more powerful than today’s supercomputers. (Credit: Walden Kirsch/Intel Corporation)

[i] http://meetings.aps.org/Meeting/MAR19/Session/B35.1

Abstract of talk by Jim Clarke at APS March meeting

Intel is developing a 300mm process line for spin qubit devices using state-of-the-art immersion lithography and isotopically pure epitaxial silicon layers. Both Si-MOS and Si/SiGe devices are being evaluated in this multi-layer integration scheme. In this talk, we will be sharing our current progress towards spin qubits starting with substrate characterization. Transistors and quantum dot devices are then co-fabricated on the same wafer and allow calibration to Intel’s internal transistor processes. Electrical characterization and feedback is accomplished through wafer scale testing at both room temperature and 1.6K prior to milli-kelvin testing. Accelerated testing across a 300mm wafer provides a vast amount of data that can be used for continuous improvement in both performance and variability. This removes one of the bottlenecks towards a large scale system: trying to deliver an exponentially fast compute technology with a slow and linear characterization scheme using only dilution refrigerators.
