D-Wave Breaks New Ground in Quantum Simulation

By John Russell

July 16, 2018

Last Friday, D-Wave scientists and colleagues published work in Science that they say represents the first fulfillment of Richard Feynman’s 1982 notion that physical systems could be simulated most effectively on quantum computers. In this instance, the project simulated a quantum magnetism problem, the transverse field Ising model (TFIM), which has potential practical application in materials science research.

Using a standard D-Wave 2,048-qubit processor, the researchers simulated interacting Ising spins on 3D cubic lattices up to 8x8x8 in size. In some sense, the lattice represents an imaginary ‘substance’ composed solely of magnetic moments; put another way, it is a simulation of a correlated electron system.
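For readers who want a concrete picture of such a disordered lattice, here is a minimal classical sketch. It is illustrative only, not the paper’s method or D-Wave’s tooling: it builds an LxLxL cubic lattice whose bonds are antiferromagnetic except for a randomly chosen ferromagnetic fraction (the “disorder”), and computes the classical Ising energy of a spin configuration. The function names and the `p_ferro` parameter are our own, not from the paper.

```python
import itertools
import random

def build_lattice(L=8, p_ferro=0.1, seed=0):
    """Nearest-neighbor couplings on an L x L x L cubic lattice (open
    boundaries): J = +1 antiferromagnetic bonds, with a fraction p_ferro
    flipped to J = -1 ferromagnetic 'disorder' bonds."""
    rng = random.Random(seed)
    J = {}
    for x, y, z in itertools.product(range(L), repeat=3):
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            nbr = (x + dx, y + dy, z + dz)
            if all(c < L for c in nbr):
                J[((x, y, z), nbr)] = -1 if rng.random() < p_ferro else +1
    return J

def energy(spins, J):
    """Classical Ising energy E = sum over bonds of J_ij * s_i * s_j (s = +/-1).
    In this sign convention an antiferromagnetic bond (J = +1) is lowest
    when neighboring spins alternate."""
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
```

With no disorder (`p_ferro=0`), a perfectly alternating (checkerboard) configuration satisfies every bond; adding ferromagnetic bonds frustrates that order, which is what drives the spin-glass physics the paper explores.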

As the authors explain, “By tuning the amount of disorder within the lattice and varying the effective transverse magnetic field, we demonstrate phase transitions between a paramagnetic (PM), an ordered anti-ferromagnetic (AFM), and a spin-glass (SG) phase. The experimental results compare well with theory for this particular SG problem, thus validating the use of a probabilistic quantum computer to simulate materials physics. This represents an important step forward in the realization of integrated quantum circuits at a scale that is relevant for condensed matter research.”

In essence, they fiddled with the simulation dials to watch how nature would unfold under different conditions. Using D-Wave’s quantum annealing technology meant, in effect, that each simulation evolved just as it would naturally. D-Wave’s usual programming tools were used.

An illustration of one particular 8x8x8 cubic lattice studied in Science, July 13, 2018. Red and blue spheres represent two possible states of magnetic moments. Silver bars represent antiferromagnetic interactions that favor alternating (blue-red) ordering of the moments. Gold bars represent randomly added ferromagnetic interactions that favor uniform (blue-blue or red-red) ordering. These latter interactions serve to disorder antiferromagnetic (alternating) ordering of the moments.
Source: D-Wave; Science

At least one observer calls the research ground-breaking. “Characterization of the phase behavior of a genuinely new material not found in nature by a precisely controlled quantum computer used as a simulator…[is] the first truly useful application of a quantum computer. [I]t shows us how to explore the behavior of novel system designs without having to completely understand them first, as we must to write a useful digital simulation code,” said Ned Allen, chief scientist and corporate senior fellow at Lockheed Martin – admittedly a D-Wave customer – in the official announcement.

D-Wave CEO Vern Brownell told HPCwire, “One of the slight nuances here is in order to do this type of modeling you actually have to take advantage of the quantum mechanical effects of the machine. If you were to simulate this on a classical machine like a large HPC cluster, the only way to do that is to simulate the quantum mechanics and there are ways to do that; Monte Carlo simulation is probably the most common way of doing that. That’s incredibly intensive computationally. The advantage that this machine has is actually leveraging those quantum mechanical effects to do a more efficient modeling.”
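To give a flavor of the classical baseline Brownell describes, here is a minimal single-spin-flip Metropolis sketch with an annealing schedule. It is a deliberate simplification on our part: a faithful classical treatment of the transverse field model would require quantum Monte Carlo (e.g., path-integral methods), which is far costlier; the sketch below only anneals the classical spin energy.

```python
import math
import random

def local_field(spins, neighbors, i):
    """h_i = sum over neighbors j of J_ij * s_j for site i."""
    return sum(Jij * spins[j] for j, Jij in neighbors[i])

def anneal(spins, neighbors, betas, sweeps, rng):
    """Single-spin-flip Metropolis updates over a rising schedule of
    inverse temperatures. With E = sum_ij J_ij s_i s_j, flipping s_i
    changes the energy by dE = -2 * s_i * h_i."""
    sites = list(spins)
    for beta in betas:
        for _ in range(sweeps * len(sites)):
            i = rng.choice(sites)
            dE = -2 * spins[i] * local_field(spins, neighbors, i)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
    return spins
```

On toy sizes this finds low-energy states quickly; Brownell’s point is that doing this faithfully for the quantum model, at the scale of thousands of spins, is what becomes computationally punishing on classical hardware.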

D-Wave, of course, has been in the thick of the race to develop quantum computers. Its approach – quantum annealing – has advocates and skeptics. Unlike a traditional gate-model machine, D-Wave’s system architecture relies on the tendency of quantum systems to settle into low-energy states. Here’s the company’s summary of its most current machine:

  • A lattice of 2,000 tiny superconducting devices, known as qubits, is chilled close to absolute zero to harness quantum effects.
  • A user models a problem into a search for the “lowest energy point in a vast landscape”.
  • The processor considers all possibilities simultaneously to determine the lowest energy and the values that produce it.
  • Multiple solutions are returned to the user, scaled to show optimal answers.
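The bullets above can be made concrete with a toy example. The sketch below is a classical stand-in, not D-Wave’s software stack: it brute-forces the “lowest energy point” of a tiny Ising instance and returns every configuration sorted by energy, mimicking (on an exhaustive, classical footing) the ranked list of solutions an annealer samples. The function name and the frustrated-triangle instance are illustrative choices of ours.

```python
import itertools

def solve_ising_exact(h, J):
    """Brute-force 'lowest energy point' search for a tiny Ising problem:
    E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j with s_i = +/-1.
    Returns all configurations sorted by energy, optimal first."""
    sites = sorted(h)
    results = []
    for assignment in itertools.product((-1, 1), repeat=len(sites)):
        s = dict(zip(sites, assignment))
        E = (sum(h[i] * s[i] for i in sites)
             + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
        results.append((E, s))
    results.sort(key=lambda t: t[0])  # key avoids comparing dicts on ties
    return results

# A 3-spin antiferromagnetic triangle is frustrated: no assignment
# satisfies all three J = +1 bonds, so the ground energy is -1, not -3.
h = {0: 0.0, 1: 0.0, 2: 0.0}
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
best_energy, best_state = solve_ising_exact(h, J)[0]
```

Exhaustive search scales as 2^N, which is exactly why it is only a pedagogical stand-in: an annealer is pitched as sampling low-energy states of such landscapes physically rather than enumerating them.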

In last week’s paper (Phase transitions in a programmable quantum spin glass simulator), researchers emphasized, “[The] structure of the magnetic system studied was vastly different from the physical layout of qubits within the QPU.”

D-Wave System

Said Brownell, “There are certainly many ways you can build a quantum computer. You can build quantum annealers [like] we build. You can build a gate model, which is what most of the other large companies are trying to build. Then there’s a topological model which Microsoft is trying to build. They’re all quantum computers. The differences are the relative exposure or susceptibility to error. The gate model to quantum computing is the most susceptible to errors, so you’ll need tens of thousands of qubits to simulate one logical qubit and there’s a huge overhead to that. That’s why gate model computers are 5- or 10- or 15 years away from being able to do useful applications. Certainly very far away from the scale of being able to do anything like what we have demonstrated here. Maybe a decade away.”

No doubt D-Wave’s rivals would disagree. To a significant extent D-Wave has always been a small player jostling with giants. It’s often received faint praise designed to spotlight perceived weaknesses of its quantum annealing technology. That hasn’t stopped the Canada-based quantum computing pioneer from punching above its weight in terms of actually selling systems (Lockheed and NASA, for example). The company is perhaps understandably sensitive to criticism.

Brownell points to a report from Jülich Supercomputing Centre, Germany, presented at a D-Wave user meeting last April. “They use IBM’s and our system and have done a comparison. On a scale of 1-to-9 – what they call quantum technology readiness levels (QTRL, detailed at the end of this article) – we are at level 8 and they have IBM at 5, along with Google and pretty much everybody else in quantum computing. It’s good to see these reports. There’s a lot of talk from the other folks and a lot of bluster about what their quantum computers can do, but here they have to expose their quantum computers to third party scrutiny and people can now make fair comparisons.”

Source: Jülich; D-Wave

The first D-Wave system was a 128-qubit machine introduced in 2010 with larger systems introduced roughly every two years. The current state of the art is the D-Wave 2000Q, announced in September 2016 and officially launched in early 2017. While a new machine is not expected soon, Brownell promises more important news towards the end of the summer, likely a large-scale cloud program and new tools. He also said another landmark paper is in the works.

Given the tremendous noise currently surrounding quantum computing, Brownell is determined that D-Wave not be lost in the din. Earlier this month, D-Wave hired Jennifer Houston as SVP, marketing. “We had effectively no marketing or very little marketing going on,” said Brownell. A year ago, the company hired Alan Baratz as SVP of software and applications. Previously president of JavaSoft (Sun Microsystems), Baratz is charged with ecosystem development, and presumably we will see the fruits of his efforts in the cloud/tool rollout.

Last week’s paper, though important, doesn’t mean quantum computing of any sort is suddenly ready for real-world materials science applications. Brownell agreed, “It’s certainly scientifically relevant to materials science research but you would have to work with very deep scientists in order to take advantage of this capability. [But] it is the start of the ability to use a quantum computer to do something useful.”

Jülich Quantum Computing Technology Readiness Levels (source: Forschungszentrum Jülich)

A quantum computing technology is at QTRL1 when the theoretical framework for quantum computing (annealing) is formulated. Theoretical studies of the basic properties of quantum computing (annealing) devices then move toward applied research and development. The technology reaches QTRL2 once the basic device principles have been studied and applications or technologically relevant algorithms are formulated. QTRL2 quantum computing technology is speculative, as there are few to no experimental results supporting the theoretical studies.

Fabricated imperfect physical qubits, the basic building blocks of quantum computing devices, are at QTRL3. Laboratory studies aim to validate theoretical predictions of qubit properties. Theoretical and laboratory studies are required to determine whether these basic elements of the quantum computing technology are ready to proceed further through the development process.

During QTRL4, multi-qubit systems are fabricated and classical devices for qubit manipulation are developed. Both components of the quantum computing technology are tested with one another. QTRL5 quantum computing technology comprises components integrated in a small quantum processor without error correction. Quantum computing devices labeled as QTRL5 must undergo rigorous testing including running of various algorithms for benchmarking. Components integrated in a small quantum processor with error correction are at QTRL6. Rigorous testing and running algorithms is repeated for the QTRL6 quantum computing technology.

QTRL7 quantum computing technology is a prototype quantum computer (annealer) solving small but user-relevant problems. The prototype is demonstrated in a user environment. A scalable version of a quantum computer (annealer) completed and qualified through test and demonstration is at QTRL8. Once quantum computers (annealers) exceed the computational power of classical computers for general (specific) problems the quantum computing technology can be labeled with QTRL9.

Link to paper: http://science.sciencemag.org/content/361/6398/162

Link to release: https://www.dwavesys.com/press-releases/d-wave-demonstrates-large-scale-programmable-quantum-simulation
