Quantum Annealing Pioneer D-Wave Introduces Expanded Hybrid Solver

By John Russell

November 3, 2022

D-Wave Systems, a pioneer in quantum annealing-based computing, today announced significant upgrades to its constrained quadratic model (CQM) hybrid solver that the company says will make it easier to use and able to tackle much larger problems. The solver can now handle optimization problems with up to 1 million variables (including continuous variables) and 100,000 constraints. In addition, D-Wave has introduced a “new [pre-solver] set of fast classical algorithms that reduces the size of the problem and allows for larger models to be submitted to the hybrid solver.”

While talk of using hybrid quantum-classical solutions has intensified recently among the gate-based quantum computer developer community, D-Wave has actively explored hybrid approaches for use with its quantum annealing computers for some time. It introduced a hybrid solver service (HSS) in 2020 as part of its Leap web access portal and Ocean SDK development kit. The broad hybrid idea is to use classical compute resources where they make sense – for example, GPUs perform matrix multiplication faster – and quantum resources where they add benefit.

D-Wave Advantage System

The HSS also relies on familiar tools and helps with the nagging challenge of squeezing large practical problems onto D-Wave’s relatively small quantum systems. Its systems are large (Advantage has more than 5,000 qubits; Advantage2 is expected to have 7,000) compared with current gate-based quantum computers (IBM is expected to soon debut a 400-plus qubit processor), but quantum annealing works differently, and in most ways the comparison is not apples-to-apples.

“No quantum computer is ever going to be large enough for people to fit their entire application into the computer itself,” said Murray Thom, D-Wave vice president of product management, in a briefing with HPCwire.

Here’s D-Wave’s description of the benefits of using its HSS:

  • “Hybrid solvers in the HSS can accept inputs that are much larger than those solved directly by the QPU. They are designed to leverage the unique capability of the QPU to find good solutions fast, thereby extending this property to larger and more varied types of inputs than would otherwise be possible.
  • “Solvers in the HSS are designed to take care of low-level operational details for the user: solving problems with this service does not require any knowledge whatsoever about how to select parameter settings for D-Wave QPUs.
  • “Different types of solvers tend to work best on different types of inputs. Portfolio solvers can run multiple solvers in parallel using a cloud-based platform, and return the best solution from the pool of results. This approach relieves the user from having to know beforehand which solver might work best on any given input, and minimizes the computation time needed to obtain best results.”

The new expanded solver capability, reported D-Wave, “allows quantum developers to better represent commercial problems, enabling them to more easily and accurately model problems where it is not possible to satisfy all constraints through classical computing logic. For example, in an employee scheduling scenario where employees should work 8 hours per shift with optional overtime, the solver is now able to allow for a soft “weighted” constraint of <8 hours and a hard constraint of <12 hours, thereby increasing the utility of the proposed solution.”
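The soft-versus-hard constraint distinction can be sketched in plain Python. The example below is a hypothetical illustration of the scheduling scenario above, not the Ocean SDK API: a hard constraint rejects a candidate outright, while a soft (weighted) constraint keeps it feasible but adds a penalty to the objective the solver minimizes.

```python
# Sketch of weighted ("soft") vs. hard constraints in a shift-scheduling
# objective. Hypothetical illustration only -- not D-Wave's Ocean API.

HARD_LIMIT = 12    # hours; shifts above this are infeasible
SOFT_LIMIT = 8     # hours; preferred limit, exceeding it is penalized
SOFT_WEIGHT = 5.0  # penalty per hour of overtime

def shift_cost(hours, base_cost_per_hour=20.0):
    """Return (feasible, cost) for a proposed shift length."""
    if hours > HARD_LIMIT:           # hard constraint: reject outright
        return False, float("inf")
    cost = hours * base_cost_per_hour
    overtime = max(0.0, hours - SOFT_LIMIT)
    cost += SOFT_WEIGHT * overtime   # soft constraint: weighted penalty
    return True, cost

# A solver minimizes total cost over feasible schedules; candidates that
# violate only the soft limit stay in play, just at a higher cost.
for h in (8, 10, 13):
    print(h, shift_cost(h))
```

The weight encodes the priority Thom describes: raising `SOFT_WEIGHT` makes overtime progressively less attractive without ruling it out, while the hard limit stays absolute.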

The familiar caveat, again, is that quantum annealing-based computing is best suited for a somewhat narrower set of problems – optimization is high on the quantum annealing suitability list – than gate-based quantum computers. It is expected that fault-tolerant, gate-based quantum computers will be able to handle all manner of computational workloads; however, scaling up fault tolerant system-size (number of qubits) and implementing effective error mitigation/correction remain challenging. D-Wave’s quantum annealing system operates differently and does not use error correction.

It is noteworthy that roughly a year ago D-Wave announced an effort to expand beyond quantum annealing and to develop a gate-based system (see HPCwire coverage) and this year the company completed the process of going public via a SPAC. The company has said little about the progress in its gate-based development initiative but is expected to provide a report at its annual conference Qubits, being held in January.

Thom emphasized the value of now being able to use weighted constraints with the CQM solver.

“Weighted constraints allow [developers] to say, I’m willing to violate constraints in this prioritized order. If they’re looking at, say, the overtime constraint for a fleet of delivery vehicles and the weight constraint of the vehicle, they might say the weight constraint is going to be hard. That’s a high priority, it has a lot of weight to it. But the workforce is willing to work two extra hours of overtime since they are getting paid, so I can have a softer barrier (weight) for going from an eight-hour workday to a 10-hour workday and then a hard boundary for going above a 10-hour workday. This puts power in the developers’ hands to make the solver aware of these nuances in the tradeoffs in the customer problem,” he said.

The new pre-solve techniques are “baked into” the Ocean SDK, said Thom. “Pre-solve techniques are used to take a problem instance that a developer has submitted, and start analyzing it to say, ‘There’s a bunch of variables here that I can set ahead of time. They don’t have any variability. If I change their value, they immediately are invalid, so I’m going to set the value and then fix them, and then pass the remaining portion of the problem to the solver.’ [Pre-solve] is run every single time a problem is submitted; it’s very, very fast, and basically boosts your success.”
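The variable-fixing step Thom describes can be illustrated with a toy pre-solve pass. This is a generic sketch, not D-Wave's actual pre-solver: it fixes binary variables whose values are already forced by the constraints and hands only the smaller remainder to the solver.

```python
# Toy pre-solve pass: fix variables whose values are forced, then pass
# the reduced problem on. Generic sketch -- not D-Wave's implementation.

def presolve(variables, forced):
    """
    variables: list of variable names in the submitted problem
    forced: dict mapping some variables to their only feasible value
            (e.g. implied by single-variable constraints like x4 == 1)
    Returns (fixed, remaining): assignments made up front, and the
    smaller variable set still sent to the hybrid solver.
    """
    fixed = {v: forced[v] for v in variables if v in forced}
    remaining = [v for v in variables if v not in forced]
    return fixed, remaining

variables = ["x1", "x2", "x3", "x4", "x5"]
forced = {"x2": 0, "x4": 1}   # values implied by the constraints
fixed, remaining = presolve(variables, forced)
print(fixed)      # {'x2': 0, 'x4': 1}
print(remaining)  # ['x1', 'x3', 'x5']
```

Because the pass only scans for forced values, it is cheap enough to run on every submission, which is why it can shrink the effective problem size at essentially no cost.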

Of note here is D-Wave’s ambitious commercial engagement program which is one of the largest in the quantum community. The fact is very few quantum applications are being run in a production environment. Proof-of-concept demonstration and prototype applications are plentiful. The D-Wave quantum annealing approach, in several aspects, is simpler than gate-based approaches which may be one reason it is apparently further along. The company says at least one organization – Pier 300 of the Port of Los Angeles – is using a D-Wave system for logistics scheduling in a production environment.

When pressed on how far along real-world applications have advanced, Thom said, “Well, we’re talking about a whole ecosystem of application development, and a range of applications, but the most advanced ones are like the Port of Los Angeles operations run on Pier 300. They are currently optimizing the way that cargo is being handled on [Pier] 300, which is moving materials around and moving cargo containers. They’ve been able to improve cargo handling efficiency by the rubber tyred gantry (RTG) cranes by 60%, and the turnaround time for the trucks picking up those cargo containers by 12%. And they’re making calls to our quantum computer live right now, 24×7.”

This is a fascinating case history. The RTGs are giant and, as you might expect, expensive to run, so minimizing their movement was a major objective. D-Wave has posted a short video on the project, which began in 2018 following the acquisition of Pier 300 for about $850M. The RTGs unload incoming ships stacked high with 20-foot containers. Detailed simulations and collaboration with D-Wave and SavantX eventually produced a new system for choreographing the movement of the giant cranes and the truck traffic loading containers.

D-Wave is an interesting company to watch. It’s a pioneer, founded in 1999, that has often found itself criticized for not being quantum enough and for the restricted scope of problems it can attack. Quantum annealing is indeed different, but it has also proven to be powerful. In recent years, other companies have emulated versions of annealing-based computation – whether on quantum devices or on classical systems. Now, D-Wave is pushing on both the quantum annealing front and the gate-based, fault-tolerant front, where it says its experience in manufacturing, systems control, software tools, and application development gives it a leg up.

Stay tuned.
