IBM Debuts Qiskit Runtime for Quantum Computing; Reports Dramatic Speed-up

By John Russell

May 11, 2021

In conjunction with its virtual Think event, IBM today introduced its enhanced Qiskit Runtime software for quantum computing, which it says demonstrated a 120x speedup in simulating molecules. Qiskit is IBM’s quantum software development platform; the new containerized runtime software runs in the IBM Cloud, where it leverages IBM classical hardware and proximity to IBM quantum processors to accelerate performance.

An IBM blog by researchers Blake Johnson and Ismael Faro said, “Last fall, we made the ambitious promise to demonstrate a 100x speedup of quantum workloads in our IBM Quantum roadmap for scaling quantum technology. Today, we’re pleased to announce that we didn’t just meet that goal; we beat it. The team demonstrated a 120x speedup in simulating molecules thanks to a host of improvements, including the ability to run quantum programs entirely on the cloud with Qiskit Runtime.”

The necessarily hybrid nature of quantum computing has spurred community-wide efforts in recent years to accelerate the classical portion of the work. Not only have there been improvements in the control elements handled by classical systems, but there have also been steady advances in understanding how to break up quantum algorithms themselves, with portions of an algorithm run on classical systems. Co-locating classical and quantum compute systems, or placing them near each other, has also shown advantages.

The latest IBM test demonstration repeated a past simulation of the lithium hydride (LiH) molecule. Here’s an excerpt from the blog:

“Back in 2017, the IBM Quantum team demonstrated that a quantum computer could simulate the behavior of the lithium hydride molecule. However, the process of modeling the LiH molecule would take 45 days with today’s quantum computing services, as circuits repeatedly passed back-and-forth between a classical and quantum processor and introduced large latencies. Now, we can solve the same problem in just nine hours — a 120x speedup.

“A host of improvements went into this feat. Algorithmic improvements reduced the number of iterations of the algorithm required to receive a final answer by two to 10 times. Improvements in system software removed around 17 seconds per iteration. Improved processor performance led to a 10x decrease in the number of shots, or repeated circuit runs, required by each iteration of the algorithm. And finally, improved control systems such as better readout and qubit reset performance reduced the amount of time per job execution (that is, execution of each batch of a few dozen circuits) from 1,000 microseconds to 70 microseconds.”
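As a quick sanity check, the headline figure follows from straightforward arithmetic. The short Python sketch below simply recomputes the ratios quoted in the excerpt; the values come from the blog, and nothing here is measured independently:

    # Back-of-envelope check of the figures quoted above (illustrative only)
    days_before = 45
    hours_after = 9
    hours_before = days_before * 24                    # 45 days = 1,080 hours
    end_to_end_speedup = hours_before / hours_after    # 1,080 / 9 = 120x

    job_time_before_us = 1_000   # microseconds per job execution, before
    job_time_after_us = 70       # microseconds per job execution, after
    control_speedup = job_time_before_us / job_time_after_us  # ~14x from control systems alone

    print(f"End-to-end: {end_to_end_speedup:.0f}x, per-job execution: {control_speedup:.1f}x")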

The researchers noted that until recently IBM mostly focused on the execution of quantum circuits, or sequences of quantum operations, on IBM Quantum systems. “However, real applications also require substantial amounts of classical processing. We use the term quantum program to describe this mixture of quantum circuits and classical processing. Some quantum programs have thousands or even millions of interactions between quantum and classical. Therefore, it is critical to build systems that natively accelerate the execution of quantum programs, and not just quantum circuits,” wrote Johnson and Faro.
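To make that quantum-classical interplay concrete, here is a minimal, purely illustrative Python sketch of such a hybrid loop. Everything in it is hypothetical: submit_circuits stands in for dispatching a batch of circuits to a cloud backend, and the sleep call models the queueing and network round-trip latency that Qiskit Runtime is designed to avoid by keeping the loop next to the quantum processor:

    import random
    import time

    def submit_circuits(parameters):
        # Hypothetical stand-in for sending a batch of circuits to a cloud
        # quantum backend and waiting for measurement results.
        time.sleep(0.001)                                     # placeholder round-trip latency
        return sum(p * random.random() for p in parameters)   # fake "energy" estimate

    def classical_update(parameters, energy, step=0.01):
        # Hypothetical classical optimizer step choosing the next parameters.
        return [p - step * energy for p in parameters]

    params = [0.1, 0.2, 0.3]
    for _ in range(100):                           # real programs may need thousands of iterations
        energy = submit_circuits(params)           # quantum side: run circuits, measure
        params = classical_update(params, energy)  # classical side: pick new parameters

Each pass through the loop is one quantum-classical interaction; when every interaction pays a network and queueing penalty, those latencies dominate the runtime, which is exactly the overhead a server-side runtime environment removes.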

Paul Smith-Goodson, analyst-in-residence for quantum computing at Moor Insights & Strategy, agreed: “Not only is it more efficient, it is also more technically expedient to have classical resources in the cloud. The IBM classical machines are designed and maintained specifically for the process. In that way the end user doesn’t have to worry about such things as control software, cloud software, capacity, etc.”

Providing context for the lithium hydride simulation, Smith-Goodson said, “Running chemistry simulations is a complicated process. You’re looking for the lowest energy state of the molecule. To find it requires a back-and-forth process between a classical computer and a quantum computer running many nested loops across the cloud. The process, called ansatz, allows a researcher to make calculations on the classical computer using iterative data from the quantum machine and make continuous adjustments until the ground state is found.

“This process takes a long time, depending on many factors including technical constraints/issues with the classical computer. Qiskit Runtime makes it much easier to run quantum algorithms like VQE (Variational Quantum Eigensolver) to simulate molecules,” said Smith-Goodson.
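For readers who want to see what such a workload looks like in code, below is a minimal VQE sketch using the Qiskit algorithms APIs as they existed around the time of this article (Terra 0.17-era; later Qiskit releases reorganized these modules). It uses a toy two-qubit Hamiltonian and a local simulator rather than the actual LiH operator, which would require Qiskit Nature to construct, so treat it as an illustration of the loop, not a reproduction of IBM’s experiment:

    from qiskit import Aer
    from qiskit.utils import QuantumInstance
    from qiskit.algorithms import VQE
    from qiskit.algorithms.optimizers import SPSA
    from qiskit.circuit.library import TwoLocal
    from qiskit.opflow import I, X, Z

    # Toy two-qubit Hamiltonian standing in for a real molecular operator
    hamiltonian = (Z ^ Z) + 0.5 * (X ^ I)

    ansatz = TwoLocal(2, "ry", "cz", reps=2)   # parameterized trial circuit
    optimizer = SPSA(maxiter=100)              # classical optimizer driving the loop
    quantum_instance = QuantumInstance(Aer.get_backend("aer_simulator"), shots=1024)

    vqe = VQE(ansatz=ansatz, optimizer=optimizer, quantum_instance=quantum_instance)
    result = vqe.compute_minimum_eigenvalue(hamiltonian)
    print("Estimated ground-state energy:", result.eigenvalue)

Every optimizer iteration triggers fresh circuit executions, which is why shaving seconds per iteration and shots per circuit compounds into the large end-to-end gains described above.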

Along those lines, IBM reported that the final boost in performance came from the introduction of Qiskit Runtime: “Rather than building up latencies as code passes between a user’s device and the cloud-based quantum computer, developers could run their program in the Qiskit Runtime execution environment, where the IBM hybrid cloud handles the work for them. New software architectures and OpenShift Operators allow us to maximize the time spent computing, and minimize the time spent waiting,” wrote the researchers.
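In practice, submitting a workload to the hosted environment looks roughly like the sketch below, which uses the qiskit-ibmq-provider runtime interface from the beta period. The program_id, backend name, and input schema shown are assumptions for illustration (access required membership in the Runtime beta), so consult IBM’s documentation for the exact parameters:

    from qiskit import IBMQ

    # Assumes an IBM Quantum account has already been saved locally
    provider = IBMQ.load_account()

    # Illustrative inputs for a hosted VQE-style runtime program; the exact
    # schema of the hosted program is an assumption here, not a spec
    program_inputs = {
        "ansatz": ansatz,            # e.g., the TwoLocal circuit from the sketch above
        "operator": hamiltonian,
        "optimizer": {"name": "SPSA", "maxiter": 100},
    }

    job = provider.runtime.run(
        program_id="vqe",                              # hosted program identifier (assumed)
        options={"backend_name": "ibmq_placeholder"},  # placeholder backend name
        inputs=program_inputs,
    )
    result = job.result()

Because the whole optimization loop executes server-side, the user’s machine submits the job once and retrieves the final result, rather than paying a round trip on every iteration.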

Big Blue reiterated its commitment to finding practical quantum computing use cases: “We hope that the Qiskit Runtime will allow users around the world to take full advantage of the 127-qubit IBM Quantum Eagle device slated for this year — or the 1,121-qubit Condor device planned for 2023. Qiskit Runtime is currently in beta for some members of the IBM Quantum Network.”

Overall, activity in quantum computing has mushroomed in recent years, particularly following the launch of the U.S. National Quantum Initiative. There’s now a global race to achieve practical quantum computing.

Recent DOE work showcases some of the concrete progress being made. Consider this observation from Raphael Pooser of Oak Ridge National Laboratory, a PI on DOE’s Quantum Testbed Pathfinder Project: “Two or three years ago, we were seeing that we could work really hard to get interesting results on quantum chemistry out of the quantum computers of the day. The concept of chemical accuracy, which is sort of the gold standard, was in a nutshell very hard to attain on the hardware if you didn’t have an in-house device that you’d built yourself. Fast forward to today, we just got through running this benchmark on the latest quantum computers from IBM, and we have some unpublished results from other devices. These systems’ performances have grown by leaps and bounds. It’s gone from being very hard to achieve chemical accuracy on those same problems three years ago to becoming routine now,” said Pooser. (See HPCwire coverage, Fast Pass Through (Some of) the Quantum Landscape with ORNL’s Raphael Pooser.)

Link to IBM blog: https://www.research.ibm.com/blog/120x-quantum-speedup
