D-Wave Breaks New Ground in Quantum Simulation

By John Russell

July 16, 2018

Last Friday D-Wave scientists and colleagues published work in Science that they say represents the first fulfillment of Richard Feynman’s 1982 notion that physical systems could be simulated most effectively on quantum computers. In this instance, the project simulated a quantum magnetism problem, the transverse field Ising model (TFIM), which has potential practical application in materials science research.

Using a standard D-Wave 2,048-qubit processor, the researchers simulated interacting Ising spins on 3D cubic lattices up to 8x8x8 in size. In some sense, the lattice represents an imaginary ‘substance’ composed solely of magnetic moments; put another way, the researchers were simulating correlated electron systems.
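To make the setup concrete, here is a minimal sketch (not the authors’ code; the helper names and the disorder fraction are purely illustrative) of how the couplings for such a lattice might be laid out in Python: antiferromagnetic bonds between nearest neighbors, with a tunable fraction flipped to ferromagnetic to introduce the kind of disorder described in the paper.

```python
# Illustrative sketch (not the paper's code): nearest-neighbor couplings for
# interacting Ising spins on an L x L x L cubic lattice, using the convention
# E = sum_ij J_ij * s_i * s_j. Bonds default to antiferromagnetic (J = +1);
# a fraction p_fm is flipped to ferromagnetic (J = -1) to add disorder.
# Periodic boundaries are assumed here purely for simplicity.
import itertools
import numpy as np

def cubic_lattice_couplings(L=8, p_fm=0.1, seed=0):
    rng = np.random.default_rng(seed)
    def idx(x, y, z):
        return (x % L) * L * L + (y % L) * L + (z % L)
    J = {}
    for x, y, z in itertools.product(range(L), repeat=3):
        i = idx(x, y, z)
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            j = idx(x + dx, y + dy, z + dz)               # periodic neighbor
            J[(i, j)] = -1.0 if rng.random() < p_fm else +1.0
    return J

def ising_energy(spins, J):
    """Classical energy of one configuration of +/-1 spins."""
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

J = cubic_lattice_couplings(L=8, p_fm=0.1)
spins = np.random.default_rng(1).choice([-1, +1], size=8 * 8 * 8)
print(len(J), "bonds; energy of a random configuration:", ising_energy(spins, J))
```

The quantum part of the TFIM, the transverse field that lets spins tunnel between configurations, is what the annealer supplies natively and what a classical energy function like this does not capture.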

As the authors explain, “By tuning the amount of disorder within the lattice and varying the effective transverse magnetic field, we demonstrate phase transitions between a paramagnetic (PM), an ordered anti-ferromagnetic (AFM), and a spin-glass (SG) phase. The experimental results compare well with theory for this particular SG problem, thus validating the use of a probabilistic quantum computer to simulate materials physics. This represents an important step forward in the realization of integrated quantum circuits at a scale that is relevant for condensed matter research.”

In essence, they fiddled with the simulation dials to watch how nature would unfold under different conditions. Using D-Wave’s quantum annealing technology meant, in effect, that each simulation evolved just as it would naturally. D-Wave’s usual programming tools were used.
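The paper does not spell out the exact programming path, but D-Wave’s publicly available Python tools express exactly this kind of problem as an Ising model. Below is a minimal, hypothetical sketch using the Ocean SDK’s dimod and neal packages; the local simulated-annealing sampler merely stands in for QPU access, and the toy couplings are illustrative rather than taken from the paper.

```python
# Hypothetical sketch of posing an Ising problem with D-Wave's Ocean tools.
# A local simulated-annealing sampler stands in for QPU access here; the
# couplings could instead come from a lattice builder like the one above.
import dimod
import neal

h = {}                                                      # no local fields in this toy
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}    # small antiferromagnetic ring

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)
sampler = neal.SimulatedAnnealingSampler()
sampleset = sampler.sample(bqm, num_reads=100)

print(sampleset.first.sample)                               # lowest-energy state found
print(sampleset.first.energy)

# To target hardware, one would instead wrap a QPU sampler, e.g.
# EmbeddingComposite(DWaveSampler()) from the dwave-system package, which
# handles mapping the problem graph onto the physical qubit layout.
```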

An illustration of one particular 8x8x8 cubic lattice studied in Science, July 13, 2018. Red and blue spheres represent two possible states of magnetic moments. Silver bars represent antiferromagnetic interactions that favor alternating (blue-red) ordering of the moments. Gold bars represent randomly added ferromagnetic interactions that favor uniform (blue-blue or red-red) ordering. These latter interactions serve to disorder antiferromagnetic (alternating) ordering of the moments.
Source: D-Wave; Science

At least one observer calls the research ground-breaking. “Characterization of the phase behavior of a genuinely new material not found in nature by a precisely controlled quantum computer used as a simulator…[is] the first truly useful application of a quantum computer. [I]t shows us how to explore the behavior of novel system designs without having to completely understand them first, as we must to write a useful digital simulation code,” said Ned Allen, chief scientist and corporate senior fellow at Lockheed Martin – admittedly a D-Wave customer – in the official announcement.

D-Wave CEO Vern Brownell told HPCwire, “One of the slight nuances here is in order to do this type of modeling you actually have to take advantage of the quantum mechanical effects of the machine. If you were to simulate this on a classical machine like a large HPC cluster, the only way to do that is to simulate the quantum mechanics and there are ways to do that; Monte Carlo simulation is probably the most common way of doing that. That’s incredibly intensive computationally. The advantage that this machine has is actually leveraging those quantum mechanical effects to do a more efficient modeling.”
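For context, the classical route Brownell describes usually means some flavor of Monte Carlo. The sketch below shows a bare-bones Metropolis sweep over the classical Ising energy only; the quantum transverse-field term is deliberately left out, since handling it classically requires the far more expensive quantum Monte Carlo machinery he alludes to. Function names and the toy example are illustrative.

```python
# Bare-bones Metropolis Monte Carlo over the classical Ising energy
# E = sum_ij J_ij * s_i * s_j. The quantum transverse-field term is omitted;
# treating it classically is what makes the full simulation so expensive.
import math
import random

def build_neighbors(J, n):
    nbrs = [[] for _ in range(n)]
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    return nbrs

def metropolis_sweep(spins, neighbors, T, rng):
    beta = 1.0 / T
    for i in range(len(spins)):
        # Energy change if spin i is flipped: dE = -2 * s_i * sum_j J_ij * s_j
        local = sum(Jij * spins[j] for j, Jij in neighbors[i])
        dE = -2.0 * spins[i] * local
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] *= -1

# Tiny demonstration: a 4-spin antiferromagnetic ring.
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
rng = random.Random(42)
spins = [rng.choice([-1, 1]) for _ in range(4)]
neighbors = build_neighbors(J, 4)
for _ in range(1000):
    metropolis_sweep(spins, neighbors, T=0.5, rng=rng)
print(spins)   # typically settles into alternating order, e.g. [1, -1, 1, -1]
```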

D-Wave, of course, has been in the thick of the race to develop quantum computers. Its approach – quantum annealing – has advocates and skeptics. Unlike a traditional gate-model machine, D-Wave’s system architecture relies on the tendency of quantum systems to find low-energy states. Here’s the company’s summary of its most current machine:

  • A lattice of 2,000 tiny superconducting devices, known as qubits, is chilled close to absolute zero to harness quantum effects.
  • A user models a problem into a search for the “lowest energy point in a vast landscape”.
  • The processor considers all possibilities simultaneously to determine the lowest energy and the values that produce it.
  • Multiple solutions are returned to the user, scaled to show optimal answers.
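The “vast landscape” in that summary is simply the set of all 2^N spin assignments and their energies. For a handful of spins the landscape can be enumerated outright, which makes the idea concrete; the toy problem below is illustrative, and anything at QPU scale is far beyond brute force.

```python
# Toy illustration of the workflow summarized above: define an Ising energy
# landscape, enumerate every configuration, and rank solutions by energy.
# The frustrated triangle has six degenerate lowest-energy states, so
# "multiple solutions" come back even from an exact search.
from itertools import product

J = {(0, 1): 1.0, (1, 2): 1.0, (2, 0): 1.0}   # frustrated antiferromagnetic triangle

def energy(spins):
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

landscape = sorted((energy(s), s) for s in product((-1, 1), repeat=3))
for e, s in landscape:
    print(e, s)
```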

In last week’s paper (Phase transitions in a programmable quantum spin glass simulator), researchers emphasized, “[The] structure of the magnetic system studied was vastly different from the physical layout of qubits within the QPU.”

D-Wave System

Said Brownell, “There are certainly many ways you can build a quantum computer. You can build quantum annealers [like] we build. You can build a gate model, which is what most of the other large companies are trying to build. Then there’s a topological model which Microsoft is trying to build. They’re all quantum computers. The differences are the relative exposure or susceptibility to error. The gate model of quantum computing is the most susceptible to errors, so you’ll need tens of thousands of qubits to simulate one logical qubit and there’s a huge overhead to that. That’s why gate model computers are 5 or 10 or 15 years away from being able to do useful applications. Certainly very far away from the scale of being able to do anything like what we have demonstrated here. Maybe a decade away.”

No doubt D-Wave’s rivals would disagree. To a significant extent D-Wave has always been a small player jostling with giants. It’s often received faint praise designed to spotlight perceived weaknesses of its quantum annealing technology. That hasn’t stopped the Canada-based quantum computing pioneer from punching above its weight in terms of actually selling systems (Lockheed and NASA, for example). The company is perhaps understandably sensitive to criticism.

Brownell points to a report from the Jülich Supercomputing Center, Germany, presented at a D-Wave user meeting last April. “They use IBM’s and our system and have done a comparison. On a scale of 1-to-9 – what they call quantum technology readiness (QTRL, detailed at the end of this article) – we are at level 8 and they have IBM at 5, along with Google and pretty much everybody else in quantum computing. It’s good to see these reports. There’s a lot of talk from the other folks and a lot of bluster about what their quantum computers can do, but here they have to expose their quantum computers to third-party scrutiny and people can now make fair comparisons.”

Source: Jülich; D-Wave

The first D-Wave system was a 128-qubit machine introduced in 2010 with larger systems introduced roughly every two years. The current state of the art is the D-Wave 2000Q, announced in September 2016 and officially launched in early 2017. While a new machine is not expected soon, Brownell promises more important news towards the end of the summer, likely a large-scale cloud program and new tools. He also said another landmark paper is in the works.

Given the tremendous noise currently surrounding quantum computing, Brownell is determined that D-Wave not be lost in the din. Earlier this month, D-Wave hired Jennifer Houston as SVP of marketing. “We had effectively no marketing or very little marketing going on,” said Brownell. A year ago, the company hired Alan Baratz as SVP of software and applications. Previously president of JavaSoft (Sun Microsystems), Baratz is charged with ecosystem development, and presumably we will see the fruits of his efforts in the cloud/tool rollout.

Last week’s paper, though important, doesn’t mean quantum computing of any sort is suddenly ready for real-world materials science applications. Brownell agreed, “It’s certainly scientifically relevant to materials science research but you would have to work with very deep scientists in order to take advantage of this capability. [But] it is the start of the ability to use a quantum computer to do something useful.”

Jülich Quantum Computing Technology Readiness Level (source: Forschungszentrum Jülich)

A quantum computing technology is at QTRL1 when the theoretical framework for quantum computing (annealing) is formulated. Theoretical studies of the basic properties of quantum computing (annealing) devices move toward applied research and development. The technology reaches QTRL2 once the basic device principles have been studied and applications or technologically relevant algorithms are formulated. QTRL2 quantum computing technology is speculative, as there are few or no experimental results supporting the theoretical studies.

Fabricated imperfect physical qubits, the basic building blocks of quantum computing devices, are at QTRL3. Laboratory studies aim to validate theoretical predictions of qubit properties. Theoretical and laboratory studies are required to determine whether these basic elements of the quantum computing technology are ready to proceed further through the development process.

During QTRL4, multi-qubit systems are fabricated and classical devices for qubit manipulation are developed. Both components of the quantum computing technology are tested with one another. QTRL5 quantum computing technology comprises components integrated in a small quantum processor without error correction. Quantum computing devices labeled as QTRL5 must undergo rigorous testing, including running various algorithms for benchmarking. Components integrated in a small quantum processor with error correction are at QTRL6. Rigorous testing and algorithm benchmarking are repeated for QTRL6 quantum computing technology.

QTRL7 quantum computing technology is a prototype quantum computer (annealer) solving small but user-relevant problems. The prototype is demonstrated in a user environment. A scalable version of a quantum computer (annealer) completed and qualified through test and demonstration is at QTRL8. Once quantum computers (annealers) exceed the computational power of classical computers for general (specific) problems, the quantum computing technology can be labeled QTRL9.

Link to paper: http://science.sciencemag.org/content/361/6398/162

Link to release: https://www.dwavesys.com/press-releases/d-wave-demonstrates-large-scale-programmable-quantum-simulation
