D-Wave Previews Next-Gen Platform; Debuts Pegasus Topology; Targets 5000 Qubits

By John Russell

February 27, 2019

Quantum computing pioneer D-Wave Systems today “previewed” plans for its next-gen adiabatic annealing quantum computing platform, which will feature a new underlying fab technology, reduced noise, increased connectivity, 5000-qubit processors, and an expanded toolset for creating hybrid quantum-classical applications. The company plans to “incrementally” roll out platform elements over the next 18 months.

One major change is implementation of a new topology, Pegasus, in which each qubit is connected to 15 other qubits, making it “the most connected of any commercial quantum system in the world,” according to D-Wave. In the current topology, Chimera, each qubit is connected to six other qubits. The roughly 2.5x jump in connectivity will enable users to tackle larger problems with fewer qubits and achieve better performance, reports D-Wave.
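For readers who want to see the connectivity difference concretely, D-Wave’s open-source dwave-networkx package (part of the Ocean tools) can generate both graph topologies. The following is a minimal sketch, assuming a recent Ocean installation (pip install dwave-networkx); the lattice sizes chosen are illustrative only.

```python
# Minimal sketch: compare qubit connectivity in Chimera vs. Pegasus graphs.
# Assumes Ocean's dwave-networkx package is installed (pip install dwave-networkx).
import dwave_networkx as dnx

chimera = dnx.chimera_graph(4, 4, 4)   # small Chimera lattice (4x4 unit cells of 8 qubits)
pegasus = dnx.pegasus_graph(4)         # small Pegasus lattice

for name, graph in [("Chimera", chimera), ("Pegasus", pegasus)]:
    max_degree = max(dict(graph.degree()).values())
    print(f"{name}: {graph.number_of_nodes()} qubits, "
          f"max couplers per qubit = {max_degree}")
```

The reported maximum degrees should reflect the six-way versus 15-way connectivity described above, though exact counts at the edges of small lattices can vary.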

“The reason we are announcing the preview now is because we will be making this technology available incrementally over the next 18 months and we wanted to provide a framework,” Alan Baratz, executive vice president, R&D and Chief Product Officer, D-Wave, told HPCwire. The plan, he said, is “to start by talking about the new topology now, how it fits into the whole. Then we’ll be announcing new tools, how they fit in. Next you’ll start to see some of the new low noise technology – that will initially be on our current generation system and you’ll see that in the cloud.” The final piece will be early versions of the 5000-qubit next generation systems.

It’s an ambitious plan. Identifying significant milestones now, but without specific dates, is an interesting gambit. Starting now, users can work with D-Wave’s Ocean development tools, which include compilers for porting problems onto the Pegasus topology. D-Wave launched its cloud-accessed development platform, LEAP, last fall, and many of the new features and tools will show up there first (see HPCwire article, D-Wave Is Latest to Offer Quantum Cloud Platform).
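Ocean’s minor-embedding tools already accept Pegasus-structured target graphs, so users can see how a problem maps onto the new topology before the hardware ships. Below is a minimal, hypothetical sketch assuming the networkx, dwave-networkx and minorminer packages are installed; it embeds a small fully connected problem graph into a Pegasus lattice and reports the resulting chain lengths.

```python
# Minimal sketch: embed a small, fully connected problem graph into a Pegasus target.
# Assumes networkx, dwave-networkx and minorminer are installed (all part of Ocean).
import networkx as nx
import dwave_networkx as dnx
import minorminer

problem = nx.complete_graph(8)       # an 8-variable, fully connected toy problem
target = dnx.pegasus_graph(4)        # illustrative Pegasus target size

embedding = minorminer.find_embedding(problem.edges(), target.edges())
if embedding:
    chain_lengths = [len(chain) for chain in embedding.values()]
    print(f"Embedded {len(embedding)} logical variables; "
          f"longest chain uses {max(chain_lengths)} physical qubits")
else:
    print("No embedding found; try a larger target graph")
```

The higher connectivity of Pegasus should generally yield shorter chains (fewer physical qubits per logical variable) than an equivalent embedding on Chimera.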

Bob Sorensen, chief analyst for quantum computing at Hyperion Research, had a positive reaction to D-Wave’s plan, “This announcement indicates that D-Wave continues to advance the state of the art in its quantum computing efforts. Although the increase from 2000 to 5000 qubits is impressive in itself, what strikes me is the new Pegasus topology. I expect that this increased connectivity will prove to be a major driver of new, interesting, and heretofore unrealizable QC algorithms and applications. Finally, I think it is important to note that D-Wave continues to listen to its wide, growing, and increasingly experienced customer base to help guide D-Wave’s future system designs. Being able to tap into the collective expertise of such a user base continues to be a critical element driving the evolution of D-Wave systems.”

Altogether, says D-Wave, the features of its next-gen system are expected to accelerate the race for commercial relevance and so-called quantum advantage – the goal of solving a problem sufficiently better on a quantum computer than on a classical computer to warrant switching to quantum computing for that application. D-Wave has aggressively marketed its success selling machines to commercial and government customers and says those users have developed “more than 100 early applications in areas as diverse as airline scheduling, election modeling, quantum chemistry simulation, automotive design, preventative healthcare, logistics and more.” How ready those apps are is sometimes debated. In any case, Baratz expects the next-gen platform to have enough power (compute, developer tools, etc.) to lead to demonstrations of customer advantage.

Sorensen is more circumspect about quantum advantage’s importance, “To my mind, the issue of quantum advantage is not a critical one. I really don’t think most users care about a somewhat artificial milestone. What matters is the development of algorithms/applications that bring a new capability to an existing problem or offer some significant speed-up over an existing application. Give a user 50X performance improvement and he/she is not going to lose much sleep debating quantum advantage.

“Bottom line. If at some point the headline reads, ‘Company Z demonstrates quantum advantage in algorithm X,’ what will that mean to the existing and potential QC user base writ large? Not much, I suspect. Not without a spate of algorithms to back it up.”

Here are marketing bullet points as excerpted from D-Wave’s announcement:

  • New Topology: Pegasus is the most connected topology of any commercial quantum system in the world. Each qubit is connected to 15 other qubits (compared to Chimera’s 6), giving it 2.5x more connectivity and enabling the embedding of larger problems with fewer physical qubits. The D-Wave Ocean software development kit (SDK) includes tools for generating the Pegasus topology, and interested users can try embedding their problems on Pegasus today.
  • Lower Noise: The next-generation system will include the lowest-noise commercially available quantum processing units (QPUs) ever produced by D-Wave. The new QPU fabrication technology improves system performance and solution precision to pave the way to greater speedups.
  • Increased Qubit Count: With more than 5000 qubits, the next-generation platform will more than double the qubit count of the existing D-Wave 2000Q, giving programmers access to a larger, denser, more powerful graph for building commercial quantum applications.
  • Expansion of Hybrid Software & Tools: Investments in ease of use and automation provide a more powerful hybrid development environment building upon D-Wave Hybrid, allowing developers to run across classical and next-generation quantum platforms in Python and other common languages. A modular approach incorporates logic to simplify distribution, letting developers interrupt processing and synchronize across systems to draw maximum computing power out of each system (a rough workflow sketch follows this list).
  • Ongoing Releases: Components of the D-Wave next-generation quantum platform will come to market between now and mid-2020 via ongoing QPU and software updates available through the cloud. The complete system will be available through the cloud and on premises in mid-2020. Users can explore a simulation of the new Pegasus topology today.
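As a rough illustration of the D-Wave Hybrid style of development referenced in the bullets above, the sketch below is adapted from the patterns in the open-source dwave-hybrid package: each iteration races a classical tabu search against a QPU-sampled subproblem and keeps whichever branch found the lower-energy answer. It is a sketch only; running the QPU branch requires configured D-Wave cloud access, and the component names and parameters should be checked against the current dwave-hybrid documentation.

```python
# Rough sketch of a hybrid classical/quantum workflow in the dwave-hybrid style.
# Assumes the dimod and dwave-hybrid packages plus configured D-Wave API access;
# component names follow the dwave-hybrid examples and may differ across releases.
import dimod
import hybrid

# A tiny illustrative Ising problem over three coupled spins.
bqm = dimod.BinaryQuadraticModel({}, {('a', 'b'): 1, ('b', 'c'): -1, ('c', 'a'): 1},
                                 0.0, dimod.SPIN)

# Each iteration races a classical tabu search against a QPU-solved subproblem,
# then keeps whichever branch produced the lower-energy sample.
iteration = hybrid.RacingBranches(
    hybrid.InterruptableTabuSampler(),
    hybrid.EnergyImpactDecomposer(size=2)
    | hybrid.QPUSubproblemAutoEmbeddingSampler()
    | hybrid.SplatComposer()
) | hybrid.ArgMin()
workflow = hybrid.LoopUntilNoImprovement(iteration, convergence=3)

final_state = workflow.run(hybrid.State.from_problem(bqm)).result()
print(final_state.samples.first)
```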

D-Wave didn’t reveal much detail about the enabling technology advances. Mark Johnson, VP of processor design & development, said, “In terms of the integrated circuit we have basically redone the stack and that allowed us to make the design more compact. It also allowed us to get more connectivity. We are also making changes within that stack to reduce the intrinsic contribution to noise and decoherence from the materials. We’re not going to be talking about the recipe, just realize it is a fundamental technology node change, [with] new materials, new fabrication processes, a new stack.”

Baratz said, “I’d add only that the new materials and processes are not just ‘in design’. We’ve actually used them on our current generation system, our 2000 qubit system. We’ve rebuilt it, using this newer technology stack, have several of them operating in our lab now, and are seeing the results from it we expected to see.”

D-Wave 2000Q System

The lower noise technology, said Baratz, will enable longer coherence times and higher quality solutions. The new operating software “will be designed specifically to support hybrid applications and that means we will be significantly reducing latency. This is important for hybrid applications where you run part classically and send to the quantum processors, get the result, run classically, and back and forth,” he said. For LEAP users, D-Wave will also offer new scheduling options: instead of having to run in a queue, users can reserve blocks of time if necessary to run longer applications.

A brief review of the D-Wave approach may be useful. It differs rather dramatically from the universal gate-based model. With a gate-model quantum computer you have to specify the sequence of instructions and gates required to solve the problem; in that sense it’s a bit more like programming a classical system, where you likewise specify the sequence of instructions.

“For our system you don’t do that,” said Baratz. “All you do is specify the problem in a mathematical formulation that our system understands. It understands two different formulations. One of them is the quadratic binary optimization problem. The other is an Ising optimization problem. It’s basically a well-defined mathematical construct. So really programming our system has nothing to do with physics, nothing to do with qubits, nothing to do with entanglement, nothing to do with tuning with pulses; it is about mapping your problem into this mathematical formulation. It’s more like a declarative programming model where you don’t really have to specify the sequence of instructions. As a result it’s much easier to program.”
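In practice that declarative style looks roughly like the following: the user supplies the linear and quadratic coefficients of an Ising (or QUBO) model and hands it to a sampler. This minimal sketch uses Ocean’s dimod package with its brute-force ExactSolver as a stand-in for the quantum annealer; pointing the same model at a hardware sampler would be the main change needed to target a real QPU.

```python
# Minimal sketch: express a problem declaratively as an Ising model with dimod.
# Uses dimod's brute-force ExactSolver as a stand-in for the quantum annealer.
import dimod

h = {'a': -1.0, 'b': 1.0, 'c': 0.5}        # linear biases on each spin variable
J = {('a', 'b'): 1.0, ('b', 'c'): -1.0}    # pairwise couplings between variables

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)
sampleset = dimod.ExactSolver().sample(bqm)

# Lowest-energy assignment of the three spin variables.
print(sampleset.first.sample, sampleset.first.energy)
```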

This description of how D-Wave systems work, taken from D-Wave’s site, may be helpful:

“In nature, physical systems tend to evolve toward their lowest energy state: objects slide down hills, hot things cool down, and so on. This behavior also applies to quantum systems. To imagine this, think of a traveler looking for the best solution by finding the lowest valley in the energy landscape that represents the problem.

“Classical algorithms seek the lowest valley by placing the traveler at some point in the landscape and allowing that traveler to move based on local variations. While it is generally most efficient to move downhill and avoid climbing hills that are too high, such classical algorithms are prone to leading the traveler into nearby valleys that may not be the global minimum. Numerous trials are typically required, with many travelers beginning their journeys from different points.

“In contrast, quantum annealing begins with the traveler simultaneously occupying many coordinates thanks to the quantum phenomenon of superposition. The probability of being at any given coordinate smoothly evolves as annealing progresses, with the probability increasing around the coordinates of deep valleys. Quantum tunneling allows the traveler to pass through hills—rather than be forced to climb them—reducing the chance of becoming trapped in valleys that are not the global minimum. Quantum entanglement further improves the outcome by allowing the traveler to discover correlations between the coordinates that lead to deep valleys.”

Like its quantum computing rivals IBM and Rigetti, D-Wave is betting heavily on cloud delivery, both as a means of attracting and training QC users and as a way of offering production capability. Of course, D-Wave is still the only vendor selling systems outright for on-premises installation, though IBM’s new IBM Q System One seems to be a step in that direction.

D-Wave has made it quite easy to create a LEAP account. Users get one minute of free time to try out the system, and one minute per month on an ongoing basis if they agree to open source any work created. Baratz says a minute of time buys more than you might think (roughly 400 to 4,000 experiments). Fees for commercial use start at $2,000 per hour per month, with discounts for longer sign-up periods.

No doubt quantum watchers will monitor how well, and how promptly, D-Wave delivers on its promises. There has been no shortage of optimism from the QC development community (vendors and academia alike). Likewise, the recent $1.25 billion U.S. National Quantum Initiative, passed in December, has added to the chorus of those arguing there’s a high-stakes global quantum computing race underway. We’ll see.

Feature Image: Illustration of Pegasus connectivity, Source: D-Wave Systems
