Berkeley Lab’s Collaborative Research Enhances Quantum Visualization and Data Analysis

April 19, 2024 — In an ever-changing technological landscape, quantum computing is emerging as a promising way to address the limitations and scaling issues of classical computing. Quantum computing platforms will, in some instances, be able to solve problems more efficiently than traditional computers, potentially surpassing the capabilities of current exascale-class platforms, especially as Moore's Law scaling diminishes.

The collaboration: (Top Row, Left to Right) Talita Perciano, Jan Balewski, Daan Camps. (Bottom Row, Left to Right) Roel Van Beeumen, Mercy G. Amankwah, E. Wes Bethel.

Despite all the advances in the era of noisy intermediate-scale quantum (NISQ) devices, there remains a need for basic research to gain a better understanding of the capabilities and applicability of quantum information science and technology (QIST). Although contemporary quantum computing hardware platforms are constrained in accuracy and scale, the field of quantum computing is rapidly advancing in terms of hardware capabilities, software environments for algorithm development, and educational programs.

In scientific exploration, visualization allows researchers to explore the unknown and “see the unseeable,” effectively transferring abstract information into easily understandable images. Lawrence Berkeley National Laboratory (Berkeley Lab) researchers from the Scientific Data Division, the Applied Mathematics & Computational Research Division, and the National Energy Research Scientific Computing Center (NERSC), in collaboration with teams from San Francisco State University (SFSU) and Case Western Reserve University, recently released two papers introducing new methods of data storage and analysis to make quantum computing more practical and exploring how visualization helps in understanding quantum computing.

“This work represents significant strides in understanding and harnessing current quantum devices for data encoding, processing, and visualization. These contributions build on our previous efforts to highlight the ongoing exploration and potential of quantum technologies in shaping scientific data analysis and visualization,” explained Talita Perciano, a Research Scientist in the Scientific Data Division and the leader of this effort. “The realization of these projects underscores the vital role of teamwork, as each member brought their unique expertise and perspective. This collaboration is a testament to the fact that in the quantum realm, as in many aspects of life, progress is not just about individual achievements, but about the team’s collective effort and shared vision.”

With the recent call to build and educate a quantum workforce, many organizations, including the U.S. Department of Energy (DOE), are looking for ways to help advance research and develop new algorithms, systems, and software environments for QIST. To that end, Berkeley Lab’s ongoing collaboration with SFSU, a minority-serving institution, leverages the Lab’s efforts in QIST and expands SFSU’s existing curricula to include new QIST-focused coursework and training opportunities. Formerly a Berkeley Lab Senior Computer Scientist, SFSU Associate Professor Wes Bethel led the charge toward producing a new generation of graduating SFSU Computer Science Master’s students, many from underrepresented groups, with theses focusing on QIST topics.

Mercy Amankwah, a Ph.D. student at Case Western Reserve University, has been part of this collaboration since June 2021, dedicating 12 weeks of her summer breaks each year to the Sustainable Research Pathways program, a partnership between Berkeley Lab and the Sustainable Horizons Institute. Amankwah applied her expertise in linear algebra to the design and manipulation of quantum circuits, achieving the efficiency the team sought in two new methods, QCrank and QBArt, which use the team’s innovative techniques to encode data for quantum computers.

“The work we’re doing is truly captivating,” said Amankwah. “It’s a journey that constantly pushes us to contemplate the next big breakthroughs. I’m excitedly looking forward to making more impactful contributions to this field as I step into my post-Ph.D. career adventure.”

Balancing Classical and Quantum Capabilities

The team’s focus on encoding classical data for use by quantum algorithms is a stepping stone toward progress in leveraging QIST methods as part of graphics and visualization, both of which are historically computationally expensive. “Finding the right balance between the capabilities of QIST and classical computing is a big research challenge. On the one side, quantum systems can handle exponentially larger problems as we add more qubits. On the other side, classical systems and HPC platforms have decades of solid research and infrastructure, but they hit technological limits in scaling,” said Bethel. “One likely pathway is the idea of hybrid classical-quantum computing, blending classical CPUs with quantum processing units (QPUs). This approach combines the best of both worlds, offering exciting possibilities for specific science applications.”

The first paper, recently published in Nature Scientific Reports, explores how to encode and store classical data in quantum systems to improve analytic capabilities and covers the two new methods and how they function. QCrank works by encoding sets of real numbers into continuous rotations of selected qubits, allowing the representation of more data using less space. QBArt, on the other hand, directly represents binary data as a series of zeros and ones mapped to pure zero and one qubit states, making it easier to do calculations on the data.
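As a rough illustration of the angle-encoding idea behind QCrank (a minimal sketch, not the paper's actual circuit construction), the snippet below maps a real value to an RY rotation angle and recovers it from the ideal, noise-free measurement probability. The mapping x → xπ is an assumed convention chosen for illustration:

```python
import numpy as np

def encode_angle(x):
    # Map a real value x in [0, 1] to a rotation angle (assumed convention).
    return x * np.pi

def ry_state(theta):
    # State after applying RY(theta) to |0>: [cos(theta/2), sin(theta/2)].
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def decode(p_one):
    # Invert P(|1>) = sin^2(theta/2)  =>  x = 2*arcsin(sqrt(p)) / pi
    return 2 * np.arcsin(np.sqrt(p_one)) / np.pi

# Round trip: encode, read off the ideal probability of measuring |1>, decode.
for x in [0.0, 0.25, 0.5, 0.9]:
    p_one = ry_state(encode_angle(x))[1] ** 2
    assert abs(decode(p_one) - x) < 1e-12
```

On real NISQ hardware the probability is estimated from repeated measurements, so the recovered value carries shot noise on top of device errors; the sketch ignores both.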

In the second paper, the team delved into the interaction between visualization and quantum computing. They showed how visualization has contributed to quantum computing by enabling the graphical representation of complex quantum states, and they explored the potential benefits and challenges of integrating quantum computing into visual data exploration and analysis.

The team tested their methods on NISQ quantum hardware using several types of data-processing tasks, such as matching patterns in DNA, calculating the distance between sequences of integers, manipulating a sequence of complex numbers, and writing and retrieving images made of binary pixels.
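As a classical point of reference for two of these tasks, the hypothetical sketch below computes a Hamming-style distance between sequences; the paper's quantum algorithms perform comparable matching on data encoded in qubits, and the function names here are illustrative only:

```python
def hamming(a, b):
    # Number of positions at which two equal-length sequences differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# DNA pattern matching: count mismatching bases between two sequences.
ref   = "ATGGCCATTGTA"
query = "ATGGCGATTGTA"
assert hamming(ref, query) == 1  # sequences differ at a single base

# Distance between two sequences of integers, position by position.
assert hamming([1, 2, 3, 4], [1, 0, 3, 9]) == 2
```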

The team ran these tests on a quantum processor called Quantinuum H1-1, as well as on other quantum processors available through IBMQ and IonQ. Quantum algorithms that process data samples this large as a single circuit on NISQ devices often perform very poorly or yield completely random output. The authors demonstrated that their new methods obtained remarkably accurate results on such hardware.

Dealing with Data Encoding and Crosstalk

When designing and implementing quantum algorithms processing classical data, a significant challenge arises known as the data encoding problem, which is how to convert classical data into a form that a quantum computer can work with. During the encoding process, there is a tradeoff between using quantum resources efficiently and keeping the computational complexity of algorithms simple enough to manage.

Figure 1. Recovery of a 384-pixel black-and-white image using QCrank executed on the Quantinuum H1-1 QPU: a) ground-truth image; b) recovered image, with 97% of pixels correct; c) residual showing the locations of the 12 incorrect pixels.

“The focus was on balancing the current quantum hardware constraints. Some mathematically solid encoding methods use so many steps, or quantum gates, that the quantum system loses the initial information before even reaching the final gate. This leaves no opportunity to correctly compute the encoded data,” said Jan Balewski, Consultant at NERSC and first author of the Scientific Reports paper. “To address this, we came up with the scheme of breaking one long sequence into many parallel encoding streams.”

Unfortunately, this method led to a new problem, crosstalk among streams, which distorted the stored information. “It’s like trying to listen to multiple conversations in a crowded room; when they overlap, understanding each message becomes challenging. In data systems, crosstalk distorts information, making insights less accurate,” said Balewski. “We tackled the crosstalk in two ways: for QCrank, we introduced a calibration step; for QBArt, we simplified the language used in the messages. Reducing the number of used tokens is like switching from the Latin alphabet to Morse code – slower to send but less affected by distortions.”
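The calibration idea can be illustrated with a purely classical analogy: if the crosstalk between parallel streams behaves like a linear mixing, a calibration step can characterize the mixing and invert it. The matrix values below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Hypothetical crosstalk: each stream's readout picks up a small
# contribution from the other stream (values assumed for illustration,
# as if estimated from calibration runs with known inputs).
M = np.array([[0.95, 0.05],
              [0.08, 0.92]])

true_signal = np.array([0.7, 0.2])
measured = M @ true_signal                # distorted readout
recovered = np.linalg.solve(M, measured)  # calibration: invert the mixing

assert np.allclose(recovered, true_signal)
```

The real device calibration is more involved, but the principle is the same: measure known inputs first, then use that characterization to correct unknown ones.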

This research introduces two significant advancements, making quantum data encoding and analysis more practical. First, parallel uniformly controlled rotation (pUCR) circuits drastically reduce the complexity of quantum circuits compared to previous methods. These circuits allow for multiple operations to occur simultaneously, making them well-suited for quantum processors, such as the H1-1 device from Quantinuum, with high connectivity and support for parallel gate execution.
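A uniformly controlled rotation applies a different rotation angle to a data qubit for each basis state of an address register. The statevector sketch below simulates that effect directly with NumPy; it shows what the gate does, not the pUCR decomposition or the parallel gate scheduling described in the paper:

```python
import numpy as np

def ucr_ry(thetas):
    """Simulate a uniformly controlled RY: for each basis state |j> of the
    address register, rotate the data qubit by RY(thetas[j]).
    Minimal sketch; len(thetas) must be a power of two."""
    n_addr = int(np.log2(len(thetas)))
    dim = 2 ** (n_addr + 1)
    # Uniform superposition over addresses; data qubit (least-significant
    # bit of the index) starts in |0>.
    state = np.zeros(dim)
    state[0::2] = 1 / np.sqrt(len(thetas))
    for j, t in enumerate(thetas):
        c, s = np.cos(t / 2), np.sin(t / 2)
        a0, a1 = state[2 * j], state[2 * j + 1]
        state[2 * j]     = c * a0 - s * a1
        state[2 * j + 1] = s * a0 + c * a1
    return state

# Four data values stored on two address qubits plus one data qubit.
thetas = np.array([0.1, 0.7, 1.3, 2.0])
psi = ucr_ry(thetas)
# P(data = 1 | address = j) recovers sin^2(thetas[j] / 2).
probs = psi[1::2] ** 2 * len(thetas)
assert np.allclose(probs, np.sin(thetas / 2) ** 2)
```

In this picture, reading out the data qubit conditioned on each address recovers the stored angles, which is the measurement principle the encoding relies on.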

Second, the study introduces QCrank and QBArt, the two data encoding techniques that utilize pUCR circuits: QCrank encodes continuous real data as rotation angles and QBArt encodes integer data in binary form. The research also presents a series of experiments conducted using IonQ and IBMQ quantum processors, demonstrating successful quantum data encoding and analysis on a larger scale than previously achieved. These experiments also incorporate new error mitigation strategies to correct noisy hardware results, enhancing the reliability of the computations.

The experiments conducted with QCrank show promising results, successfully encoding and retrieving 384 black-and-white pixels on 12 qubits with high accuracy in recovering the information (Figure 1). Notably, this is the largest image ever successfully encoded on a quantum device, a groundbreaking achievement.

Storing that same image on a classical computer requires 384 bits, roughly 30 times more storage elements than the 12 qubits used here. Because the capacity of a quantum system grows exponentially with the number of qubits, just 35 qubits on an ideal quantum computer could, for example, hold the entire 150 gigabytes of DNA information found in the human genome.

Figure 2. Results of DNA sequence matching executed on the Quantinuum H1-1 QPU. The algorithm correctly detects the differences between the six codons in positions 5 to 10, marked in red.

Experiments conducted with QBArt showcased its remarkable prowess in encoding and processing diverse sequences of data, from intricate DNA sequences (Figure 2) to complex numbers, with near-perfect fidelity. Additionally, the study delves into the performance evaluation of different quantum processors in encoding binary data, unveiling the exceptional capabilities of ion trap-based processors for tasks relying on the pUCR circuits.

These findings not only set the stage for deeper investigations into the applications of compact, parallel circuits across different quantum algorithms and hybrid quantum-classical algorithms; they also pave the way for exciting advancements in future quantum machine learning and data processing tasks.

“Navigating the forefront of quantum computing, our team, energized by emerging talents, is exploring theoretical advances leveraging our data encoding methods to tackle a wide range of analysis tasks. These novel approaches hold the promise of unlocking analytical capabilities on a scale we haven’t seen before with NISQ devices,” said Perciano. “Leveraging both HPC and quantum hardware, we aim to expand the horizons of quantum computing research, envisioning how quantum can revolutionize problem-solving methods across various scientific domains. As quantum hardware evolves, all of us on the research team believe in its potential for practicality and usefulness as a powerful tool for large-scale scientific data analysis and visualization.”

This research was supported by the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) Exploratory Research for Extreme-Scale Science program, the Sustainable Horizons Institute, and Berkeley Lab’s Laboratory Directed Research and Development (LDRD) program, and used computing resources at NERSC and the Oak Ridge Leadership Computing Facility.

About Berkeley Lab

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.


Source: Carol Pott, Berkeley Lab
