Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access

By John Russell

March 21, 2017

For a technology that’s usually characterized as far off and in a distant galaxy, quantum computing has been steadily picking up steam. Just how close real-world applications are depends on whom you talk to and which applications you mean. Los Alamos National Lab, for example, has an active application development effort for its D-Wave system, and LANL researcher Susan Mniszewski and colleagues have made progress on using the D-Wave machine for aspects of quantum molecular dynamics (QMD) simulations.

At CeBIT this week D-Wave and Volkswagen will discuss their pilot project to monitor and control taxi traffic in Beijing using a hybrid HPC-quantum system – this is on the heels of recent customer upgrade news from D-Wave (more below). Last week IBM announced expanded access to its five-qubit cloud-based quantum developer platform. In early March, researchers from the Google Quantum AI Lab published an excellent commentary in Nature examining real-world opportunities, challenges and timeframes for quantum computing more broadly. Google is also considering making its homegrown quantum capability available through the cloud.

As an overview, the Google commentary provides a great snapshot, noting soberly that challenges such as the lack of solid error correction and the small size (number of qubits) of today’s machines – whether “universal” digital machines like IBM’s or “analog” adiabatic annealing machines like D-Wave’s – have prompted many observers to declare useful quantum computing is still a decade away. Not so fast, says Google.

“This conservative view of quantum computing gives the impression that investors will benefit only in the long term. We contend that short-term returns are possible with the small devices that will emerge within the next five years, even though these will lack full error correction…Heuristic ‘hybrid’ methods that blend quantum and classical approaches could be the foundation for powerful future applications. The recent success of neural networks in machine learning is a good example,” write Masoud Mohseni, Peter Read, and John Martinis (a 2017 HPCwire Person to Watch) and colleagues (Nature, March 8, “Commercialize early quantum technologies”).

The D-Wave/VW project is a good example of a hybrid approach (details to follow) but first here’s a brief summary of recent quantum computing news:

  • IBM released a new API and an upgraded simulator, capable of modeling circuits of up to 20 qubits, for its five-qubit platform. It also announced plans for a software developer kit by mid-year for building “simple” quantum applications. So far, says IBM, its quantum cloud has attracted about 40,000 users, including, for example, the Massachusetts Institute of Technology, which used the cloud service for its online quantum information science course. IBM also noted heavy use of the service by Chinese researchers. (See HPCwire coverage, IBM Touts Hybrid Approach to Quantum Computing)
  • D-Wave has been actively extending its development ecosystem (qbsolv from D-Wave and qmasm from LANL, et al.) and says researchers have recently been able to simulate a 20,000-qubit system on a 1,000-qubit machine using qbsolv (more below). After announcing a 2,000-qubit machine in the fall, the company has begun deploying them. The first will go to a new customer, Temporal Defense Systems, and another is planned for the Google/NASA/USRA partnership, which has a 1,000-qubit machine now. D-Wave also just announced that Virginia Tech and the Hume Center will begin using D-Wave systems for work on defense and intelligence applications.
  • Google’s commentary declares: “We anticipate that, within a few years, well-controlled quantum systems may be able to perform certain tasks much faster than conventional computers based on CMOS (complementary metal oxide–semiconductor) technology. Here we highlight three commercially viable uses for early quantum-computing devices: quantum simulation, quantum-assisted optimization and quantum sampling. Faster computing speeds in these areas would be commercially advantageous in sectors from artificial intelligence to finance and health care.”
D-Wave 2000Q System

Clearly there is a lot going on even at this stage of quantum computing’s development. There’s also been a good deal of wrangling over just what constitutes a quantum computer and over the differences between IBM’s “universal” digital approach – essentially a machine able to do anything computers do now – and D-Wave’s adiabatic annealing approach, which is currently intended to solve specific classes of optimization problems.

“They are different kinds of machines. No one has a universal quantum computer now, so you have to look at each case individually for its particular strengths and weaknesses,” explained Martinis to HPCwire. “The D-Wave has minimal quantum coherence (it loses the information exchanged between qubits quite quickly), but makes up for it by having many qubits.”

“The IBM machine is small, but the qubits have quantum coherence enough to do some standard quantum algorithms. Right now it is not powerful, as you can run quantum simulations on classical computers quite easily. But by adding qubits the power will scale up quickly. It has the architecture of a universal machine and has enough quantum coherence to behave like one for very small problems,” Martinis said.

Notably, Google has developed 9-qubit devices with three to five times more coherence than IBM’s, according to Martinis, though they are not on the cloud yet. “We are ready to scale up now, and plan to have this year a ‘quantum supremacy’ device that has to be checked with a supercomputer. We are thinking of offering cloud also, but are more or less waiting until we have a hardware device that gives you more power than a classical simulation.”

Quantum supremacy as described in the Google commentary is a term coined by theoretical physicist John Preskill to describe “the ability of a quantum processor to perform, in a short time, a well-defined mathematical task that even the largest classical supercomputers (such as China’s Sunway TaihuLight) would be unable to complete within any reasonable time frame. We predict that, in a few years, an experiment achieving quantum supremacy will be performed.”

Bo Ewald

For the moment, D-Wave is the only vendor offering near-production machines rather than research machines, said Bo Ewald, the company’s ever-cheerful evangelist. He quickly agrees, though, that at least for now there aren’t any production-ready applications. Developing a quantum tool/software ecosystem is a driving focus at D-Wave. The LANL app dev work, though impressive, still represents proto-application development. Nevertheless, the ecosystem of tools is growing quickly.

“We have defined a software architecture that has several layers starting at the quantum machine instruction layer where if you want to program in machine language you are certainly welcome to do that; that is kind of the way people had to do it in the early days,” said Ewald.

“The next layer up is if you want to be able to create quantum machine instructions from C or C++ or Python. We now have libraries that run on host machines – regular HPC machines – so you can use those languages to generate programs that run on the D-Wave machine. But the challenge that we have faced, that customers have faced, is that our machines had 500 qubits or 1,000 qubits and now 2,000; we know there are problems that are going to consume many more qubits than that,” he said.
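To make that concrete, here is a minimal, self-contained sketch of the host-side representation such libraries produce: a QUBO expressed as a Python dictionary of linear and quadratic weights. The weights are invented for illustration, and a brute-force search stands in for the call a real library would make to the D-Wave sampler:

    from itertools import product

    # Toy QUBO: minimize E(x) = sum over (i, j) of Q[i, j] * x_i * x_j
    # for binary x. A host-side library would hand a dict like this to
    # the D-Wave sampler; here brute force keeps the example runnable
    # anywhere. Weights are invented for illustration only.
    Q = {
        (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms (diagonal)
        (0, 1):  2.0, (1, 2):  2.0,                # quadratic couplings
    }

    def qubo_energy(x, Q):
        return sum(w * x[i] * x[j] for (i, j), w in Q.items())

    best = min(product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
    print("best assignment:", best, "energy:", qubo_energy(best, Q))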

For D-Wave systems, qbsolv helps address this problem. It lets users write a meta-description of the problem they want to solve as a quadratic unconstrained binary optimization, or QUBO – an intermediate representation decoupled from the size of the machine itself. D-Wave then extended this capability to what it calls virtual QUBOs, likening the technique to virtual memory.

“You can create QUBOs, or representations of problems, which are much larger than the machine itself, and then, using combined classical and quantum computing techniques, we could partition the problem, solve it in chunks, and then kind of glue them back together after we solved the D-Wave part. We’ve done that now with the 1,000-qubit machine and run problems that have the equivalent of 20,000 qubits,” said Ewald, adding the new 2,000-qubit machines will handle problems of even greater size using this capability.
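From the programmer’s side, that partition-and-stitch workflow is largely invisible. The sketch below assumes the open-source dwave_qbsolv Python wrapper; by default qbsolv solves the subproblems with an internal classical tabu search rather than a QPU, so this runs anywhere, and the chain-shaped random QUBO is purely illustrative:

    from dwave_qbsolv import QBSolv
    import random

    # Build a QUBO chain with far more variables than any current QPU
    # has qubits. qbsolv partitions it, solves the chunks, and stitches
    # the partial solutions back together, as Ewald describes.
    random.seed(42)
    n = 20000  # reduce for a quicker demonstration run
    Q = {(i, i): random.uniform(-1, 1) for i in range(n)}
    Q.update({(i, i + 1): random.uniform(-1, 1) for i in range(n - 1)})

    response = QBSolv().sample_qubo(Q)
    best = next(iter(response.samples()))  # lowest-energy sample found
    print(sum(best.values()), "variables set to 1 out of", n)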

At LANL, researcher Scott Pakin has developed another tool – a quantum macro assembler for D-Wave systems (QMASM). Ewald said part of the goal of Pakin’s work was to determine “if you could map gates onto the machine even though we are not a universal or a gate model. You can in fact model gates on our machine and he has started to [create] a library of gates (OR gates, AND gates, NAND gates) and you can assemble those to become macros.”
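The idea underneath such gate macros can be shown in a few lines. Below is the standard QUBO penalty for an AND gate – zero energy exactly on the rows of the gate’s truth table, positive energy otherwise – written as plain Python for illustration (this shows the general technique, not QMASM’s actual syntax):

    from itertools import product

    def and_penalty(x, y, z):
        # E = x*y - 2*(x + y)*z + 3*z is zero iff z == (x AND y), so an
        # annealer settling into a ground state enforces the gate.
        return x*y - 2*(x + y)*z + 3*z

    for x, y, z in product([0, 1], repeat=3):
        tag = "valid" if z == (x & y) else "excited"
        print(x, y, z, "->", and_penalty(x, y, z), tag)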

Pakin said, “My personal research interest has been in making the D-Wave easier to program. I’ve recently built something really nifty on top of QMASM: edif2qmasm, which is my answer to the question: Can one write classical-style code and run it on the D-Wave?

“For many difficult computational problems, solution verification is simple and fast. The idea behind edif2qmasm is that one can write an ordinary(-ish) program that reports whether a proposed solution to a problem is in fact valid. This gets compiled for the D-Wave, then run _backwards_: give it ‘true’ for the proposed solution being valid, and get back a solution to the difficult computational problem.”
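Running a verifier backwards has a simple energy-landscape reading. Continuing the AND-gate penalty from the sketch above, clamping the output to “true” and minimizing the remaining energy recovers the inputs that satisfy the checker – a toy, purely illustrative version of what edif2qmasm arranges at circuit scale:

    from itertools import product

    def and_penalty(x, y, z):
        return x*y - 2*(x + y)*z + 3*z

    z = 1  # clamp the verifier's output: "the proposed solution is valid"
    best = min(product([0, 1], repeat=2),
               key=lambda xy: and_penalty(xy[0], xy[1], z))
    print("inputs recovered:", best)  # (1, 1): the only inputs AND accepts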

Pakin noted there are many examples on GitHub to provide a feel for the power of this tool.

“For example, mult.v is a simple, one-line multiplier. Run it backwards, and it factors a number, which underlies modern data decryption. In a dozen or so lines of code, circsat.v evaluates a Boolean circuit. Run it backwards, and it tells you what inputs lead to an output of “true”, which is used in areas of artificial intelligence, circuit design, and automatic theorem proving. map-color.v reports if a map is correctly colored with four colors such that no two adjacent regions have the same color. Run it backwards, and it _finds_ such a coloring.

“Although current-generation D-Wave systems are too limited to apply this approach to substantial problems, the trends in system scale and engineering precision indicate that some day we should be able to perform real work on this sort of system. And with the help of tools like edif2qmasm, programmers won’t need an advanced degree to figure out how to write code for it,” he explained.
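For a feel of what a problem like map coloring looks like once it reaches the annealer, here is a hedged, self-contained sketch of the classic map-coloring QUBO: one binary variable per (region, color) pair, a penalty unless each region takes exactly one color, and a penalty when neighbors match. The three-region map and the brute-force search are invented stand-ins for the compiled circuit and the quantum anneal:

    from itertools import product

    regions = ["A", "B", "C"]
    edges = [("A", "B"), ("B", "C"), ("A", "C")]  # a triangle: needs 3 colors
    colors = range(3)
    variables = [(r, c) for r in regions for c in colors]

    def energy(assign):
        e = 0
        for r in regions:  # exactly-one-color constraint: (sum_c x_rc - 1)^2
            s = sum(assign[r, c] for c in colors)
            e += (s - 1) ** 2
        for u, v in edges:  # penalize neighbors that share a color
            e += sum(assign[u, c] * assign[v, c] for c in colors)
        return e

    best = min((dict(zip(variables, bits))
                for bits in product([0, 1], repeat=len(variables))),
               key=energy)
    print({r: c for (r, c), bit in best.items() if bit})  # a valid coloring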

The D-Wave/VW collaboration, just a year or so old, is one of the more interesting quantum computing proof-of-concept efforts because it tackles an optimization problem of a kind that is widespread in everyday life. As described by Ewald, VW CIO Martin Hofmann was making his yearly swing through Silicon Valley and stopped in at D-Wave, where talk turned to the many optimization challenges big automakers face – supply logistics, vehicle delivery, various machine learning tasks – and to doing a D-Wave project around one of them. In the end, said Ewald, VW settled on a more driver-facing problem.

It turns out there are about 10,000 taxis in Beijing, said Ewald. Each has a GPS device, and its position is recorded every five seconds. Traffic congestion, of course, is a huge problem in Beijing. The idea was to explore whether an application running on both traditional computing resources and the D-Wave could help monitor and guide taxi movement more quickly and effectively.

“Ten thousand taxis on all of the streets in Beijing is way too big for our machine at this point, but they came to this same idea we talked about with qbsolv where you partition problems,” said Ewald. “On the traditional machines VW created a map and grid and subdivided the grid into quadrants and would find the quadrant that was the most red.” That’s red as in long cab waits.

The problem quadrant was then sent to D-Wave to be solved. “We would optimize the flow, basically minimize the wait time for all of the taxis within the quadrant, send that [solution] back to the traditional machine which would then send us the next most red, and we would try to turn it green,” said Ewald.
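In code, the control flow Ewald describes might look like the toy sketch below. Everything here is invented for illustration – twelve random taxis, a 2x2 grid, congestion measured simply as taxi count, and the “QPU step” replaced by a brute-force solve of a small load-balancing QUBO that splits the busiest quadrant’s taxis across two alternate routes:

    from itertools import product
    import random

    random.seed(0)
    taxis = [(random.random(), random.random()) for _ in range(12)]

    def quadrant(p):                       # classical: bucket GPS fixes
        return (p[0] > 0.5, p[1] > 0.5)

    def split_energy(bits, weights):
        # Number-partitioning QUBO: (sum_i w_i * (2*x_i - 1))^2 is zero
        # when the two routes carry equal load.
        return sum(w * (2 * b - 1) for b, w in zip(bits, weights)) ** 2

    grid = {}
    for p in taxis:
        grid.setdefault(quadrant(p), []).append(p)

    busiest = max(grid, key=lambda q: len(grid[q]))  # the "most red" quadrant
    weights = [1.0] * len(grid[busiest])             # toy per-taxi load
    best = min(product([0, 1], repeat=len(weights)),
               key=lambda bits: split_energy(bits, weights))
    print("quadrant", busiest, "route assignment:", best)

A production loop would iterate just as Ewald describes: merge the quadrant’s solution back, re-score the grid classically, and dispatch the next “most red” quadrant to the annealer.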

According to Ewald, VW was able to create the “hybrid” solution relatively quickly and “get what they say are pretty good results.” The company has talked about extending the project to predict where traffic jams are going to form and giving people perhaps 45-minute warnings that there is the potential for a traffic jam at a given intersection. The two companies have a press conference planned this week at CeBIT to showcase the project.

It’s worth emphasizing that the VW/D-Wave exercise is developmental – what Ewald labels a proto-application: “But just the fact that they were able to get it running is a great step forward in many ways, in that we believe our machine will be used side by side with existing machines, much like GPUs were in their early days for graphics. In this case VW has demonstrated quite clearly how our machine – our QPU if you will – can be used to help accelerate the work being done on traditional HPC machines.”

Image art, chip diagram: D-Wave
