Europe’s Race towards Quantum-HPC Integration and Quantum Advantage

By John Russell

May 16, 2024

What an interesting panel: Quantum Advantage — Where are We and What is Needed? While the panelists looked slightly weary — theirs was, after all, one of the last panels at ISC 2024 — the discussion was fascinating and the panelists knowledgeable. No such panel would be complete without also asking when quantum advantage will be achieved. The broad, unsurprising answer to that question: not especially soon.

The panel included: Thomas Lippert, head of Jülich Supercomputing Centre (JSC) and director at the Institute for Advanced Simulation; Laura Schulz, acting head of quantum computing and technologies, Leibniz Supercomputing Centre; Stefano Mensa, advanced computing and emerging technologies group leader STFC Hartree Centre; and Sabrina Maniscalco, CEO and co-founder, Algorithmiq Ltd. The moderator was Heike Riel, IBM Fellow, head of science & technology and lead of IBM Research Quantum Europe.

From left to right: Sabrina Maniscalco, CEO and co-founder, Algorithmiq Ltd; Stefano Mensa, advanced computing and emerging technologies group leader, STFC Hartree Centre; Thomas Lippert, head of Jülich Supercomputing Centre (JSC) and director at the Institute for Advanced Simulation; Laura Schulz, acting head of quantum computing and technologies, Leibniz Supercomputing Centre.

Missing from the panel was a pure-play quantum computer developer — that might have added a different perspective. Maybe next year. Topics included quantum-HPC integration, the need for benchmarks (though when and how remained unclear), the likely role for hybrid quantum-HPC applications in the NISQ era, familiar discussion around error mitigation and error correction, and more.

Of the many points made, perhaps the strongest was around the idea that Europe has mobilized to rapidly integrate quantum computers into its advanced HPC centers.

Schulz said, “The reason that our work in the Munich Quantum Valley (MQV) is so important is because of what happens when we look at the European level. We have the EuroHPC Joint Undertaking. We have the six quantum systems that are going to be placed in hosting centers Europe-wide, and [they are] all different modalities, and we all have to integrate. We have to think about this at the European level for how we’re going to bring these systems together. We do not want multiple schedulers. We do not want multiple solutions that could then clash with one another. We want to try to find unity where it makes sense and be able to amplify and smooth the user experience Europe-wide.”

The idea is to connect all of these EuroHPC JU systems and make them widely available to academia and industry. LRZ and JSC, for example, have already fielded or are about to field several quantum computers in their facilities (see slides below).

Lippert emphasized that, at least for this session, the focus was on how to achieve quantum advantage — “when we talk about quantum utility, when this becomes useful, then the quantum computer is able to solve problems of practical usage significantly faster than any classical computer [based on] CPUs [and] GPUs of comparable size, weight, and power in similar environments. We think this is the first step to be made with quantum-HPC hybrid types of simulation, optimization, and machine learning algorithms. Now, how do you realize such quantum advantage? You build HPC-hybrid compute systems. Our approach is what we call the modular supercomputing architecture.

“Our mission is to establish a vendor-agnostic, comprehensive, public quantum computer user infrastructure integrated into our modular complex of supercomputers. [It offers] user-friendly and peer-reviewed access, just like we do with supercomputing.”
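Lippert’s framing of quantum advantage as hybrid quantum-HPC simulation, optimization, and machine learning boils down, in practice, to a classical outer loop on the HPC side repeatedly calling quantum subroutines. The sketch below is a minimal, hypothetical illustration of that loop, not any center’s actual software; the QPU call is a stub with shot noise, and all function names are invented for illustration.

```python
# Minimal sketch of a hybrid quantum-HPC loop: a classical optimizer on the HPC
# side repeatedly asks a quantum backend for noisy expectation values and updates
# circuit parameters. The QPU call below is a stand-in stub; in a real deployment
# it would be a call into the quantum system's scheduler.
import numpy as np

rng = np.random.default_rng(42)

def qpu_expectation(theta, shots=1024):
    """Stand-in for a quantum backend call: returns a noisy estimate of <H>(theta).
    Here the 'true' landscape is sum(cos(theta_i)); real hardware would run a
    parameterized circuit and average measurement outcomes."""
    exact = np.sum(np.cos(theta))
    return exact + rng.normal(scale=1.0 / np.sqrt(shots))

def parameter_shift_gradient(theta, shots=1024):
    """Estimate the gradient with the parameter-shift rule: two QPU calls per parameter."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        grad[i] = 0.5 * (qpu_expectation(theta + shift, shots) -
                         qpu_expectation(theta - shift, shots))
    return grad

theta = rng.uniform(0, 2 * np.pi, size=4)   # variational parameters
for step in range(200):                     # classical outer loop on the HPC side
    theta -= 0.1 * parameter_shift_gradient(theta)

print("final energy estimate:", qpu_expectation(theta, shots=8192))
```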

Schulz drilled down into the software stack being developed at LRZ in collaboration with many partners. On the left side of the slide below are the traditional parts — “co-scheduling, co-resource management, all those components that we need to think of, and that we do think of with things like disaggregated acceleration,” said Schulz.

“When you get to the right side,” she noted, “we have to deal with the new physics environment, or the new quantum computing environment. So we have a quantum compiler that we are developing, we have a quantum representation moving between them. We’ve got a robust, customized, comprehensive toolkit with things like the debuggers, the optimizers, all of those components that are built with our partners in the ecosystem. Then we have an interface, this QBMI (quantum back-end manager interface), and this is what connects the systems individually into our whole framework.”

“Now, this is really important, and this is part of the evolution. We’ve been working on this for two years, actively building this up, and we’re already starting to see the fruits of our labor. In our Quantum Integration Centre (QIC), we are already able to go from our HPC environment, our HPC testbed, using our Munich Quantum Software Stack, to an access node on the HPC system, the same hardware, and call to the quantum system. We have that on-prem, these systems are co-located, and it is an integrated effort with our own software stack. So we are making great strides,” Schulz said.
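The QBMI itself is not spelled out in the talk, but the general shape of such a back-end manager interface can be sketched. The Python below is purely hypothetical; the class and method names are invented placeholders, not the Munich Quantum Software Stack’s actual API. It illustrates the idea of one common contract, sitting behind an HPC access node, that multiple quantum modalities plug into.

```python
# Purely illustrative: the shape of a back-end manager interface of the kind
# Schulz describes. Names are hypothetical placeholders, not a real API.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    circuit: str          # e.g., an intermediate representation emitted by the compiler

class QuantumBackendInterface(ABC):
    """Common contract the HPC-side resource manager schedules against."""
    @abstractmethod
    def submit(self, circuit: str, shots: int) -> Job: ...
    @abstractmethod
    def result(self, job: Job) -> dict: ...

class SimulatedBackend(QuantumBackendInterface):
    """A toy backend so the sketch runs; a real one would drive on-prem hardware."""
    def submit(self, circuit: str, shots: int) -> Job:
        return Job(job_id="sim-0001", circuit=circuit)
    def result(self, job: Job) -> dict:
        return {"counts": {"00": 512, "11": 512}}   # fake Bell-state statistics

# From an HPC access node, the workflow is: compile -> submit -> poll -> post-process.
backend = SimulatedBackend()
job = backend.submit(circuit="H 0; CX 0 1; MEASURE", shots=1024)
print(backend.result(job)["counts"])
```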

The pan-European effort to integrate quantum computing into HPC centers is impressive and perhaps the furthest along worldwide. Its emphasis is on handling multiple quantum modalities (superconducting, trapped ion, photonic, neutral atom) and approaches (gate-based and annealing) while trying to develop a relatively common, easy-to-use software stack connecting HPC and quantum resources.

Mensa of the U.K.’s STFC zeroed in on benchmarking. Currently, there are many efforts but few widely agreed-upon benchmarks. Roughly, the quantum community talks about system benchmarks (low and middle level) that evaluate a system’s basic attributes (fidelity, speed, connectivity, etc.) and application-oriented benchmarks intended to look more at time-to-solution, quantum resources needed, and accuracy.

No one disputes the need for quantum benchmarks. Mensa argued for a coordinated effort and suggested the SPEC model as something to look at. “The SPEC Consortium for HPC is a great example, because it’s a nonprofit and it establishes and maintains and endorses standardized benchmarks. We need to seek something like that,” he said.

He took a light shot at the Top500 metric, noting it doesn’t represent practical workloads today, and added, “You know that your car can go up to 260. But on a normal road, we never do that.” Others noted the Top500, based on Linpack, does at least show you can actually get your system up and running correctly. Moreover, noted Lippert and Schulz, the Top500 score is not on the criteria lists they use to evaluate advanced systems procurements.

Opinions on benchmarking varied, but it seems the flurry of separate benchmark initiatives is likely to continue and remain disparate for now. One point folks agree on is that quantum technology is moving so fast that it’s hard to keep up, and maybe it’s too early to settle on just a few benchmarks. Moreover, benchmarking hybrid quantum-HPC systems is even more confusing. All seem to favor a suite of benchmarks over a single metric. This is definitely a stay-tuned topic.
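As a concrete illustration of the application-oriented benchmarking idea described above, the toy harness below records the kinds of quantities such a benchmark would report: time-to-solution, quantum resources consumed, and accuracy against a reference. It assumes nothing about any existing benchmark suite; the solver is a placeholder so the harness runs end to end.

```python
# Illustrative sketch (not any established suite) of an application-oriented
# benchmark record: time-to-solution, quantum resources used, accuracy reached.
import time

def run_hybrid_solver(instance_size: int) -> tuple[float, int]:
    """Placeholder for a hybrid quantum-HPC solver. Returns (answer, shots_used).
    Here it only does classical work so the harness is runnable."""
    shots_used = 1000 * instance_size
    answer = sum(i * i for i in range(instance_size)) / instance_size
    return answer, shots_used

def benchmark(instance_size: int, reference: float, tolerance: float) -> dict:
    start = time.perf_counter()
    answer, shots = run_hybrid_solver(instance_size)
    wall = time.perf_counter() - start
    rel_err = abs(answer - reference) / abs(reference)
    return {
        "time_to_solution_s": wall,
        "quantum_shots": shots,            # proxy for quantum resources consumed
        "relative_error": rel_err,
        "within_tolerance": rel_err <= tolerance,
    }

print(benchmark(instance_size=1000, reference=332833.5, tolerance=1e-6))
```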

Turning to efforts to achieve practical uses, Maniscalco presented two use cases that demonstrate the ability to combine quantum and HPC resources by using classical computing to mitigate errors. Her company, Algorithmiq Ltd, is developing algorithms for use in bioscience. She provided a snapshot of a technique Algorithmiq has developed that uses tensor-network processing in post-processing on classical systems to mitigate errors from the quantum computer.

“HPC and quantum computers are seen almost as antagonists, in the sense that we can use, for example, tensor network methods to simulate quantum systems, and this is, of course, very important for benchmarking,” said Maniscalco. “But what we are interested in is bringing these two together, and the quantum-centric supercomputing idea brought forward by IBM is important for us. What we do is specifically focused on this interface between the quantum computer and the HPC.

“We develop techniques that are able to measure or extract information from the quantum computers in a way that allows [you] to optimize the efficiency in terms of the number of measurements, which eventually corresponds to shorter wall-time overhead overall, and also allows [you] to optimize the information that you extract from the quantum computer and, importantly, allows [mitigating errors] in post-processing,” she said. (It’s best to read the associated papers for details.)
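Algorithmiq’s own technique uses tensor-network methods in classical post-processing, as described above, and the papers are the right source for the details. As a much simpler stand-in that illustrates the same division of labor, the hypothetical sketch below applies a common readout-error mitigation step (inverting a measured calibration matrix) entirely in classical post-processing, after the quantum measurements are done. The numbers are made up; this is not Algorithmiq’s method.

```python
# Simple stand-in for "errors mitigated in classical post-processing":
# readout-error mitigation by calibration-matrix inversion. Values are invented.
import numpy as np

# Calibration matrix A[i, j] = P(measure outcome i | prepared basis state j),
# estimated beforehand by preparing known states on the device.
A = np.array([[0.95, 0.08],
              [0.05, 0.92]])

# Noisy outcome probabilities observed for the circuit of interest.
p_noisy = np.array([0.58, 0.42])

# Classical post-processing: invert the calibration model to estimate the
# error-free distribution, then clip and renormalize to keep it a valid distribution.
p_mitigated = np.linalg.solve(A, p_noisy)
p_mitigated = np.clip(p_mitigated, 0.0, None)
p_mitigated /= p_mitigated.sum()

print("noisy:    ", p_noisy)
print("mitigated:", p_mitigated)
```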

At the end of the Q&A, moderator Heike Riel asked the panel, “Where will we be in five years?” Here are their brief answers in the order given:

  • Sabrina Maniscalco: “Well, I think in five years, we will have commercially useful quantum advantage. And so we will be at a new place when it comes to technology, which combines quantum and HPC.”
  • Stefano Mensa: “I think in five years…I agree with that. I think we will see early signs of actual usefulness in the fields of machine learning and chemistry.”
  • Laura Schulz: “I hope, honestly, that we see our first procurements for supercomputers with quantum accelerating [as part of the procurement], and that there has been enough evolution and integration and thinking about how these work together in a realistic way…that we work towards tighter coupling of these systems and make sure that it actually shows up in our procurements.”
  • Thomas Lippert: “I hope we will have the first machines with error correction. I’m sure that in five years, quantum annealing technologies will be much better than they are today. But I believe that the data collection will be a breakthrough.”