Recipe for Scaling: ARQUIN Framework for Simulating a Distributed Quantum Computing System

By Sarah Wong, PNNL

October 14, 2024

Editor note: This article was first posted today on the Pacific Northwest National Laboratory website and is repurposed here with PNNL’s permission. It’s a great example of a multi-institution team — in this case 14 institutions — tackling difficult problems in quantum computing.

One of the most difficult problems in quantum computing is increasing the size of the quantum computer. Researchers around the world are working to solve this “challenge of scale.”

To bring quantum scaling closer to reality, researchers from 14 institutions collaborated through the Co-design Center for Quantum Advantage (C2QA), a Department of Energy (DOE) Office of Science National Quantum Information Science Research Center. Together, they constructed the ARQUIN (Architectures for Multinode Superconducting Quantum Computers) framework—a pipeline to simulate large-scale distributed quantum computers as different layers. Their results were published in ACM Transactions on Quantum Computing.

Connecting qubits

The research team, led by Michael DeMarco from Brookhaven National Laboratory and the Massachusetts Institute of Technology (MIT), started with a standard computing strategy of combining multiple computing “nodes” into one unified computing framework.

In theory, this multi-node strategy can be applied to scale up quantum computers—but there’s a catch. In superconducting quantum systems, qubits must be kept incredibly cold. This is usually done with the help of a cryogenic device called a dilution refrigerator. The problem is that scaling a quantum computing chip to a sufficiently large size within a single fridge is hard.

Even in larger fridges, the superconducting electric circuits on a single chip become difficult to maintain. To create a powerful multi-node quantum computer, researchers need not only to connect nodes inside one dilution refrigerator but also to connect nodes across multiple dilution refrigerators.

Assembling the quantum ingredients

No one institution could carry out the full breadth of research needed for the ARQUIN framework. The ARQUIN team included researchers from Pacific Northwest National Laboratory (PNNL), Brookhaven, MIT, Yale University, Princeton University, Virginia Tech, IBM, and more.

“A lot of quantum research is being done in isolation, with research groups only looking at one piece of the puzzle,” said Samuel Stein, quantum computer scientist at PNNL. “It’s almost like gathering ingredients without knowing how they will work together in a recipe. When experiments are done on only one aspect of the quantum computer, you don’t get to see how the results may impact other parts of the system.”

Instead, the ARQUIN team broke down the problem of constructing a multi-node quantum computer into different “layers,” with each institution working on a different layer based on their area of expertise.

“It’s a huge optimization problem,” said Mark Ritter, chair of the Physical Sciences Council at IBM. “The team had to do an in-depth assessment of the field to look at where we were in terms of technology and algorithms, then do simulations to find out where the bottlenecks were and what could be improved.”

The ARQUIN framework focused on superconducting quantum devices connected by microwave-to-optical links. Each institution concentrated on a different ingredient of the quantum computing recipe. For example, while some researchers investigated how to optimize microwave-to-optical transduction, others created algorithms that exploit the distributed architecture.

“Such cross-domain systems research is essential to charting roadmaps toward useful quantum information processing applications and is uniquely enabled by the DOE’s national quantum initiatives,” said Professor Isaac Chuang of MIT.

For their part of the ARQUIN framework, PNNL researchers including Stein, Ang Li, and James (Jim) Ang designed and built the simulation pipeline and generated the Quantum Roofline Model that connected all the ingredients together—essentially creating a framework for trying out different recipes for future quantum computers.
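The team’s actual Quantum Roofline Model is detailed in the ARQUIN paper. Purely as an illustration of the idea, internode entanglement generation can be treated the way memory bandwidth is treated in the classical roofline model: a program’s attainable gate throughput is capped either by local gate speed or by how fast the optical link supplies entangled pairs. All rates and the `gates_per_ebit` parameter below are hypothetical, not values from the paper:

```python
def attainable_gate_rate(local_rate, ebit_rate, gates_per_ebit):
    """Roofline-style bound on gate throughput (gates/s).

    local_rate:     peak local gate rate of one node (gates/s)
    ebit_rate:      rate at which the link delivers entangled pairs (ebits/s)
    gates_per_ebit: local gates the algorithm runs per internode gate
    """
    return min(local_rate, ebit_rate * gates_per_ebit)

# Hypothetical numbers: local gates at 1e6/s, a link two to three
# orders of magnitude slower at 1e3 ebits/s (as the abstract notes).
link_bound = attainable_gate_rate(1e6, 1e3, 100)    # link-limited regime
local_bound = attainable_gate_rate(1e6, 1e3, 5000)  # compute-limited regime
```

As with the classical roofline, the crossover point shows compilers where effort pays off: communication-heavy circuits gain from better links or distillation, while compute-heavy ones gain from faster local gates.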

From his unique vantage point, PNNL physicist Chenxu Liu understands the need for multi-institutional collaborations well. He worked on the ARQUIN framework while he was a postdoctoral researcher at Virginia Tech.

“While each research group had expertise in their portion of the project, no one had a very deep understanding of what all of the other groups within the project were doing,” said Liu. “However, each group’s work needed to be embedded into the whole pipeline view of the quantum computer in order to make it meaningful.”

After compiling the different pieces of the project together, ARQUIN became a framework for simulating and benchmarking future multi-node quantum computers. This marks a promising first step toward enabling efficient and scalable quantum communication and computation by integrating modular systems.

Expanding the quantum recipe

Though the functional multi-node quantum computer outlined in the ARQUIN paper has not yet been built, this research provides a road map for future quantum hardware/software co-design.

“Creating a layer-based hierarchical simulation environment—including microwave-to-optical simulation, distillation simulation, and system simulation—was a crucial component in this work,” said Li. “It allowed the ARQUIN team to understand and evaluate the tradeoffs between various design factors and performance metrics regarding the complex distributed quantum computing communication stack.”
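The distillation layer Li describes trades pair consumption for fidelity. A minimal sketch of that tradeoff uses the textbook BBPSSW recurrence protocol on Werner-state pairs; this is a standard illustration, not the ARQUIN team’s distillation simulation:

```python
def bbpssw_round(f):
    """One BBPSSW distillation round on two Werner pairs of fidelity f.

    Returns (output fidelity, success probability). Fidelity improves
    only when f > 1/2, and each attempt consumes two raw pairs.
    """
    num = f**2 + ((1 - f) / 3) ** 2
    p_success = f**2 + 2 * f * (1 - f) / 3 + 5 * ((1 - f) / 3) ** 2
    return num / p_success, p_success

# Hypothetical starting fidelity for a noisy internode link.
f, expected_raw_pairs = 0.7, 1.0
for _ in range(3):
    f, p = bbpssw_round(f)
    expected_raw_pairs = 2 * expected_raw_pairs / p  # cost per distilled pair
```

Iterating the round drives fidelity toward 1 while the expected raw-pair cost grows, which is exactly the kind of tradeoff a system-level simulation must weigh against algorithm requirements.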

Some of the software products created for ARQUIN have already been used by members of the team for other projects. Many of the ARQUIN authors collaborated on another project, called HetArch, to further investigate different superconducting quantum architectures.

“This is an example of applying the principles of co-design from exascale computing to our ARQUIN/HetArch design space explorations,” said Ang.

The ARQUIN study was supported by the Department of Energy, Office of Science, National Quantum Information Science Research Center, Co-design Center for Quantum Advantage (C2QA). HetArch was supported by C2QA and the Advanced Scientific Computing Research, Accelerated Research for Quantum Computing Program. HetArch was also supported in part by the National Science Foundation projects Enabling Practical-Scale Quantum Computation, Software-Tailored Architecture for Quantum Co-design, and the Quantum Leap Challenge Institute for Hybrid Quantum Architectures and Networks.

BONUS

Here’s the abstract to the ARQUIN team’s paper:

Many proposals to scale quantum technology rely on modular or distributed designs wherein individual quantum processors, called nodes, are linked together to form one large multinode quantum computer (MNQC). One scalable method to construct an MNQC is using superconducting quantum systems with optical interconnects. However, internode gates in these systems may be two to three orders of magnitude noisier and slower than local operations. Surmounting the limitations of internode gates will require improvements in entanglement generation, use of entanglement distillation, and optimized software and compilers. Still, it remains unclear what performance is possible with current hardware and what performance algorithms require. In this article, we employ a systems analysis approach to quantify overall MNQC performance in terms of hardware models of internode links, entanglement distillation, and local architecture. We show how to navigate tradeoffs in entanglement generation and distillation in the context of algorithm performance, lay out how compilers and software should balance between local and internode gates, and discuss when noisy quantum internode links have an advantage over purely classical links. We find that a factor of 10–100× better link performance is required and introduce a research roadmap for the co-design of hardware and software towards the realization of early MNQCs. While we focus on superconducting devices with optical interconnects, our approach is general across MNQC implementations.
