Intel Bets Big on 2-Track Quantum Strategy

By Doug Black

January 15, 2019

Quantum computing has lived so long in the future it’s taken on a futuristic life of its own, with a Gartner-style hype cycle that includes an innovation trigger, a peak of inflated expectations and – though a useful quantum system is still years away – an anticipatory trough of disillusionment.

To wit, there’s Mikhail Dyakonov in the November IEEE Spectrum, who, even as investors, companies and countries pour billions into quantum, says it will never work. The theoretical physicist at the Université de Montpellier, France, contends that error correction (monitoring variables and correcting errors) isn’t possible at quantum scale. “A useful quantum computer needs to process a set of continuous parameters,” Dyakonov wrote, “…larger than the number of subatomic particles in the observable universe.”

Maybe. But betting against quantum flies in the face of centuries of technological breakthroughs reinforcing the maxim: what we can conceive we can achieve. At a technical level, countering Dyakonov is a post-doctoral researcher at QuTech, part of the Delft University of Technology in the Netherlands (see his rebuttal in HPCwire), who writes, “No computer, classical or quantum, ever has to process even a single continuous parameter. In classical computers, we can use floating-point arithmetic to approximate continuous parameters using a finite number of bits.”
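To make the rebuttal’s point concrete, here is a minimal sketch (ours, not the QuTech researcher’s) of how a classical program stores a “continuous” qubit amplitude in a finite number of bits using ordinary floating point:

```python
import math
import struct
import sys

# A "continuous" qubit amplitude, e.g. cos(theta/2) for some rotation angle.
amplitude = math.cos(math.pi / 7)

# A classical simulator never stores a true continuum: a 64-bit float
# approximates the value to roughly 15-16 significant decimal digits.
packed = struct.pack(">d", amplitude)  # the 8 bytes actually stored
print(f"amplitude ~= {amplitude:.17f}")
print(f"stored in {len(packed) * 8} bits: {packed.hex()}")

# The approximation error is bounded by machine epsilon, independent of any
# count of particles in the universe.
print(f"machine epsilon: {sys.float_info.epsilon}")
```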

Intel’s Jim Clarke

Also at odds with Dyakonov is Jim Clarke, director of quantum hardware at Intel Labs, who told us during a recent interview that, daunting as quantum error correction may be, there are tougher quantum challenges to overcome, beginning with scale – which in the quantum world is otherworldly.

This is because, as described in a recent MIT Technology Review article, “The fundamental units of computation…are qubits, which — unlike bits — can occupy a quantum state of 1 and 0 simultaneously. By linking qubits through an almost mystical phenomenon known as entanglement, quantum computers can generate exponential increases in processing power.”

Leaving aside the headsplitting notion that qubits can be both 1 and 0 (University of Nottingham professor Phil Moriarty: “As a quantum physicist, it’s not that you understand it, you just get used to it.”), in Clarke’s explanation, a qubit is like a spinning coin – it’s both heads and tails. This is the principle of “quantum superposition.”

“So if I get two coins spinning at the same time, I’d simultaneously have four states; with three coins, eight states,” Clarke said. “With 300 spinning coins, how many states can I have? That’s two to the 300th, which is more states than there are in the universe. At 50 you can represent more states than any supercomputer could do.”
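Clarke’s coin arithmetic is easy to check. Here is a back-of-envelope sketch in Python (the qubit counts are the ones he quotes; the ~10^80-atom figure for the observable universe is a common order-of-magnitude estimate):

```python
# Clarke's spinning-coin arithmetic: n entangled qubits span 2**n basis states.
ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80  # common order-of-magnitude estimate

for n in (2, 3, 50, 300):
    states = 2 ** n
    print(f"{n:>3} qubits -> {states:.3e} basis states")

# 300 qubits: ~2.0e90 states, far beyond the ~1e80 atoms estimate.
print(2 ** 300 > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```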

This brings us to Clarke’s biggest quantum worry: interconnection. Holding a superconducting qubit processing unit (QPU – see left), Clarke said, “If you take this chip, I’ve got 49 qubits and 108 coaxial connectors to the outside world. What would it look like if I had a million qubits? I can’t have 2 million coax cables to the outside world. Maybe that’s what an ENIAC system looked like in the 1940s, but that’s not what a conventional system looks like. So what worries me most is wiring your interconnects.”

By comparison, Clarke said, an Intel Xeon server chip has 7 billion transistors and only 2000 connectors, mostly for power and ground.
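The scaling gap Clarke describes is stark even as back-of-envelope arithmetic; a quick sketch (simply extrapolating the figures he quotes, not an Intel projection):

```python
# Today's superconducting test chip: 49 qubits behind 108 coax lines.
qubits_today, coax_today = 49, 108
lines_per_qubit = coax_today / qubits_today  # ~2.2 coax lines per qubit

# Naive linear extrapolation to a million qubits.
target_qubits = 1_000_000
print(f"coax lines per qubit today: {lines_per_qubit:.2f}")
print(f"naive 1M-qubit scale-up: {int(lines_per_qubit * target_qubits):,} cables")

# Contrast with a Xeon-class server chip: ~7e9 transistors, ~2,000 connectors.
print(f"Xeon transistors per connector: {7_000_000_000 / 2_000:,.0f}")
```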

The wiring problem is a factor contributing to Intel’s two-track quantum strategy. One track is development of a “mainstream” (in the quantum world) superconducting qubit, which most companies (IBM, Google, Rigetti and others) are attempting to perfect. The other track is the silicon spin qubit (SSQ), which looks like a transistor and which Intel is pursuing along with Delft University, the University of New South Wales and Princeton.

While developmentally less mature than superconducting qubits (which have reached about 50 entangled qubits), the SSQ (now at about 23 entangled qubits) may tap more readily into Intel’s chip heritage.

“When you think of Intel, you probably think of it as a transistor company,” Clarke said, “and you’d be right. (SSQs)…look a lot like a transistor. I’d describe it as a single electron transistor.”

Intel is the only large company investigating SSQs, according to Clarke.

“Our thought is to look at similar (superconducting qubit) technologies as some of our competitors, but we’re also looking at a novel technology that resembles our transistors in the infrastructure we have,” Clarke said. “We can build on our multi-billion dollar infrastructure to make these devices. So that’s a big bet we’re making…. We’re doing both; we’re basically hedging our bet.”

A potential advantage of the SSQ (picture at right) is that it’s “a million times smaller than the superconducting qubit,” said Clarke. “…Just from a real estate perspective, if this has 49 (coax cables), then I have to wonder what a million would look like. It would be huge. But with silicon spin, there’s no reason we can’t have a density similar to our advanced logic or advanced memory, so there’s no reason we couldn’t get into the millions easily… We’re hoping to accelerate that technology and make it competitive with superconducting qubits and then, hopefully, it will be the technology that will scale.”

Intel’s two-track strategy poses another challenge for Clarke: how to accelerate the development of one versus the other. “But it’s hard for me to imagine that we’d give up early on the technology that looks like transistors, being a transistor company,” he said.

Clarke concedes that IBM, Google and others have been working on quantum longer than Intel, but he argues that Intel has advantages the others can’t match, notably Intel’s architecture and process expertise.

“We can tap into state-of-the-art fabrication facilities,” Clarke said, citing research the quantum group has done jointly with Intel’s packaging group in Arizona aimed at improving performance and reliability.

“We have our fabrication engineers working on the chip, we have folks working on control electronics for the QPU, we have people developing architectures for the QPU,” Clarke said. “They’re all working based on the background of Intel’s experience. We’re trying to put together a complete system.”

Companies without advanced fabrication, Clarke said, probably rely on “something that resembles a university lab. You can imagine that a university professor can make one good transistor. But can he build 7 billion of them all the same, and put them on a chip that you could buy for a server?”

He cited Intel’s 500-acre Ronler Acres campus in Hillsboro, Ore., at which, though Intel’s quantum work is experimental, “we’re still running our material through the same factories that our advanced technologies are running on, and that has to be considered an advantage…. We get to use state-of-the-art tools with process controls, we take great care with our material deposition.”

With practical quantum computing still years away, developers necessarily adopt long time horizons. Clarke estimates quantum is half a decade from a significant step in system performance.

“I think there’s a race to quantum supremacy,” he said. “At some number, 50 or 60 (qubits), someone will say we’ve contrived a problem that can’t be solved with a classical computer. That would be a milestone, but it’s not a practical milestone. I think being able to do an optimization problem or characterize the configuration of a molecule that can’t be done with a classical computer, that would be a first milestone. We’re probably five or six years away from that.”

Waiting that long for results runs counter to our impatient culture, but Clarke suggests quantum progress be put in historical perspective.

“In a field where computer advances are measured on the span of a year or two, then when you say something (quantum) is 10 years away, some will say it might as well be forever,” he said. “But if you look at the history of electronics, the first transistor was 1947, the first silicon transistor was 1954, the integrated circuit was 1958, and the first microprocessor was 1970. So these things don’t happen overnight. If you take a look at where we are, we’ve surpassed the equivalent of the first integrated circuit, and now we’re trying to get to a large enough size to do something useful. So…if we say we’re 10 years away from having a few thousand qubits to do something that cannot be done otherwise, it’s actually not so far of a stretch.”
