What’s a good way to characterize dynamic noise in QPUs? Why should CMOS manufacturing and single-electron spin qubits be a preferred approach? Can variational quantum algorithms (VQAs) truly deliver speed-up over classical systems? Can hybrid VQAs improve quantum machine learning? What about implementing generative adversarial networks (GANs) on today’s NISQ machines?
Here are five papers from the Quantum Science Center (ORNL), Intel, the University of Science and Technology of China, the Quantum Information Sciences Section (ORNL), and University College Dublin that tackle the topics above. All were posted to arXiv in July. The steady flow of quantum computing papers, even through the summer, is impressive.
QSC Works to Accurately Predict Noise Boundary
Travis Humble, director of the Quantum Science Center at ORNL, and his colleague Samudra Dasgupta report work on a stability metric that accounts for dynamic noise as well as easier-to-characterize stationary noise. Their paper – Reliable Devices Yield Stable Quantum Computations – underscores the need to account for dynamic noise when setting expectations.
Here’s the abstract:
“Stable quantum computation requires noisy results to remain bounded even in the presence of noise fluctuations. Yet non-stationary noise processes lead to drift in the varying characteristics of a quantum device that can greatly influence the circuit outcomes. Here we address how temporal and spatial variations in noise relate device reliability to quantum computing stability. First, our approach quantifies the differences in statistical distributions of characterization metrics collected at different times and locations using Hellinger distance. We then validate an analytical bound that relates this distance directly to the stability of a computed expectation value. Our demonstration uses numerical simulations with models informed by the transmon device from IBM called washington. We find that the stability metric is consistently bounded from above by the corresponding Hellinger distance, which can be cast as a specified tolerance level. These results underscore the significance of reliable quantum computing devices and the impact for stable quantum computation.”
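The central quantity in the paper’s approach is the Hellinger distance between distributions of a device characterization metric collected at different times or locations. As a minimal sketch of that quantity only (the samples, bins, and metric below are illustrative assumptions, not data from the paper), consider two batches of a hypothetical gate error rate gathered in different calibration windows:

```python
import numpy as np

def hellinger_distance(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Hypothetical characterization data: a gate error rate sampled in two
# different calibration windows, histogrammed on a common grid.
rng = np.random.default_rng(0)
week1 = rng.normal(1.0e-2, 1.0e-3, 1000)   # earlier window
week2 = rng.normal(1.2e-2, 1.5e-3, 1000)   # later window, drifted noise
bins = np.linspace(0.5e-2, 2.0e-2, 40)
p, _ = np.histogram(week1, bins=bins)
q, _ = np.histogram(week2, bins=bins)
print(f"Hellinger distance: {hellinger_distance(p, q):.3f}")
```

In the paper’s framing, a distance like this, cast as a tolerance level, upper-bounds how much a computed expectation value can drift between the two windows.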
Intel Demonstrates High-Volume On-Wafer QPU Testing
Intel has long argued that its CMOS manufacturing expertise and spin qubit technology are best suited for scaling system size to the millions of qubits needed to achieve fault-tolerant quantum computing. In this paper – Probing single electrons across 300 mm spin qubit wafers – written by Intel researchers including James Clarke, Intel’s director of quantum hardware, the team looks at a piece of the manufacturing puzzle – qubit probing – for building large-scale quantum computers.
Here’s the abstract:
“Building a fault-tolerant quantum computer will require vast numbers of physical qubits. For qubit technologies based on solid state electronic devices, integrating millions of qubits in a single processor will require device fabrication to reach a scale comparable to that of the modern CMOS industry. Equally importantly, the scale of cryogenic device testing must keep pace to enable efficient device screening and to improve statistical metrics like qubit yield and process variation. Spin qubits have shown impressive control fidelities but have historically been challenged by yield and process variation. In this work, we present a testing process using a cryogenic 300 mm wafer prober to collect high-volume data on the performance of industry-manufactured spin qubit devices at 1.6 K.
“This testing method provides fast feedback to enable optimization of the CMOS compatible fabrication process, leading to high yield and low process variation. Using this system, we automate measurements of the operating point of spin qubits and probe the transitions of single electrons across full wafers. We analyze the random variation in single-electron operating voltages and find that this fabrication process leads to low levels of disorder at the 300mm scale. Together these results demonstrate the advances that can be achieved through the application of CMOS industry techniques to the fabrication and measurement of spin qubits.”
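To give a rough sense of the kind of wafer-level statistics such high-volume probing enables, the sketch below computes a mean operating voltage, a spread that stands in for process variation, and a per-die yield from simulated probe data. All numbers, the device counts, and the pass/fail window are hypothetical illustrations, not Intel’s results.

```python
import numpy as np

# Hypothetical probe results: single-electron transition voltage (V) for each
# quantum-dot device measured across dies on a 300 mm wafer.
rng = np.random.default_rng(1)
n_dies, dots_per_die = 200, 12
v_transition = rng.normal(loc=0.45, scale=0.008, size=(n_dies, dots_per_die))

# Yield: fraction of dots whose first-electron voltage falls inside a
# screening window (an illustrative pass/fail criterion only).
window = (0.40, 0.50)
passed = (v_transition > window[0]) & (v_transition < window[1])
yield_per_die = passed.mean(axis=1)

print(f"wafer-level mean operating voltage: {v_transition.mean()*1e3:.1f} mV")
print(f"wafer-level spread (process-variation proxy): {v_transition.std()*1e3:.2f} mV")
print(f"median die yield: {np.median(yield_per_die):.1%}")
```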
VQA Performance Still Lags Classical
Huan-Yu Liu and his colleagues from the University of Science and Technology of China argue that VQAs built on low-depth quantum neural networks, while promising, can’t currently outperform classical systems. They cite wall-clock times of more than a year in their paper – Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters.
That said, the researchers emphasize, “We do not want to deny the potential of VQAs and the NISQ algorithms. In view of VQAs, optimizations need to be made to reduce the time cost, examples like more efficient sampling strategies and more parameter-saving ansatzes. And one of our future works is to design backpropagation-type algorithms for efficiently training QNNs.”
Here’s the abstract:
“Applying low-depth quantum neural networks (QNNs), variational quantum algorithms (VQAs) are both promising and challenging in the noisy intermediate-scale quantum (NISQ) era: Despite its remarkable progress, criticisms on the efficiency and feasibility issues never stopped. However, whether VQAs can demonstrate quantum advantages is still undetermined till now, which will be investigated in this paper. First, we will prove that there exists a dependency between the parameter number and the gradient-evaluation cost when training QNNs. Noticing there is no such direct dependency when training classical neural networks with the backpropagation algorithm, we argue that such a dependency limits the scalability of VQAs.
“Second, we estimate the time for running VQAs in ideal cases, i.e., without considering realistic limitations like noise and reachability. We will show that the ideal time cost easily reaches the order of a 1-year wall time. Third, by comparing with the time cost using classical simulation of quantum circuits, we will show that VQAs can only outperform the classical simulation case when the time cost reaches the scaling of 10^0 to 10^2 years. Finally, based on the above results, we argue that it would be difficult for VQAs to outperform classical cases in view of time scaling, and therefore, demonstrate quantum advantages, with the current workflow. Since VQAs as well as quantum computing are developing rapidly, this work does not aim to deny the potential of VQAs. The analysis in this paper provides directions for optimizing VQAs, and in the long run, seeking more natural hybrid quantum-classical algorithms would be meaningful.”
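The core of the scaling argument is easy to reproduce with a back-of-the-envelope calculation: with the commonly used parameter-shift rule, every trainable parameter costs two expectation-value estimates per optimizer step, and every estimate costs many shots. The numbers below are illustrative assumptions, not figures from the paper, but they land in the same year-scale regime the authors describe.

```python
# Back-of-the-envelope wall-clock estimate for training a QNN with the
# parameter-shift rule. All quantities are illustrative assumptions.
n_params = 1_000         # trainable parameters in the ansatz
n_shots = 10_000         # shots per expectation-value estimate
n_iterations = 1_000     # optimizer steps
t_circuit = 1e-3         # seconds per circuit execution (prep + run + readout)

# Parameter-shift gradients need two expectation values per parameter,
# so each optimizer step costs 2 * n_params * n_shots circuit executions.
executions_per_step = 2 * n_params * n_shots
total_seconds = executions_per_step * n_iterations * t_circuit
print(f"total circuit executions: {executions_per_step * n_iterations:.2e}")
print(f"estimated wall time: {total_seconds / 86400 / 365:.1f} years")
```

Classical backpropagation has no comparable per-parameter circuit-evaluation cost, which is the dependency the authors argue limits VQA scalability.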
Hybrid VQA Enhances Quantum Machine Learning
Researchers Joseph Wang and Ryan Bennink of the Quantum Information Sciences Section, ORNL, look at VQAs from a different perspective. They report in their paper – Variational quantum regression algorithm with encoded data structure – developing a hybrid quantum regression algorithm to improve VQA performance in machine learning.
They write, “We propose a method to solve the linear regression problem using variational quantum circuits whose parameters encode the regression coefficients. The best regression coefficients are found by classical optimization with respect to a regularized cost function that furthermore helps to find the subset of features that are most important.”
Here’s their abstract:
“Variational quantum algorithms (VQAs) prevail to solve practical problems such as combinatorial optimization, quantum chemistry simulation, quantum machine learning, and quantum error correction on noisy quantum computers. For variational quantum machine learning, a variational algorithm with model interpretability built into the algorithm is yet to be exploited. In this paper, we construct a quantum regression algorithm and identify the direct relation of variational parameters to learned regression coefficients, while employing a circuit that directly encodes the data in quantum amplitudes reflecting the structure of the classical data table. The algorithm is particularly suitable for well-connected qubits. With compressed encoding and digital-analog gate operation, the run time complexity is logarithmically more advantageous than that for digital 2-local gate native hardware with the number of data entries encoded, a decent improvement in noisy intermediate-scale quantum computers and a minor improvement for large-scale quantum computing.
“Our suggested method of compressed binary encoding offers a remarkable reduction in the number of physical qubits needed when compared to the traditional one-hot-encoding technique with the same input data. The algorithm inherently performs linear regression but can also be used easily for nonlinear regression by building nonlinear features into the training data. In terms of measured cost function which distinguishes a good model from a poor one for model training, it will be effective only when the number of features is much less than the number of records for the encoded data structure to be observable. To echo this finding and mitigate hardware noise in practice, the ensemble model training from the quantum regression model learning with important feature selection from regularization is incorporated and illustrated numerically.”
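A purely classical stand-in helps illustrate the workflow the authors describe: parameters playing the role of regression coefficients are tuned by a classical optimizer against a regularized cost, with the penalty term encouraging the feature selection mentioned in the abstract. The data, penalty, and optimizer below are illustrative assumptions, not the paper’s quantum construction, which instead estimates the cost from circuit measurements.

```python
import numpy as np

# Illustrative regularized regression: the parameters below stand in for the
# circuit parameters that encode regression coefficients in the paper.
rng = np.random.default_rng(2)
n_records, n_features = 200, 8
X = rng.normal(size=(n_records, n_features))
true_w = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.7, 0.0, 0.0])  # sparse ground truth
y = X @ true_w + 0.1 * rng.normal(size=n_records)

def cost(w, lam=0.05):
    """Mean-squared error plus an L1 penalty that encourages feature selection."""
    return np.mean((X @ w - y) ** 2) + lam * np.sum(np.abs(w))

# Simple (sub)gradient descent; a VQA would instead estimate the cost from
# quantum circuit measurements and update the circuit parameters classically.
w = np.zeros(n_features)
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / n_records + 0.05 * np.sign(w)
    w -= 0.01 * grad

print("learned coefficients:", np.round(w, 2))
print(f"final cost: {cost(w):.3f}")
```

The small coefficients driven toward zero by the penalty mark the less important features, which is the interpretability the authors build into their algorithm.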
Quantum GAN Improved with a Classical Component
Generative adversarial networks have become key components for a wide variety of applications. Researchers Albha O’Dwyer Boyle and Reza Nikandish of University College Dublin have reported developing a hybrid classical-quantum approach to implementing GANs on NISQ devices that improves performance.
In their paper – A Hybrid Quantum-Classical Generative Adversarial Network for Near-Term Quantum Processors – they write, “The developed hybrid quantum-classical GAN is trained successfully using uniform and nonuniform data distributions. Using the nonuniform distribution for training data, the mode collapse failure, which GANs are prone to, can be mitigated. The proposed approach for the realization of hybrid quantum-classical GAN can open up a research direction for the implementation of more advanced GANs on the near-term quantum processors.”
Here’s the abstract:
“In this article, we present a hybrid quantum classical generative adversarial network (GAN) for near-term quantum processors. The hybrid GAN comprises a generator and a discriminator quantum neural network (QNN). The generator network is realized using an angle encoding quantum circuit and a variational quantum ansatz. The discriminator network is realized using multi-stage trainable encoding quantum circuits. A modular design approach is proposed for the QNNs which enables control on their depth to compromise between accuracy and circuit complexity. Gradient of the loss functions for the generator and discriminator networks are derived using the same quantum circuits used for their implementation. This prevents the need for extra quantum circuits or auxiliary qubits.
“The quantum simulations are performed using the IBM’s Qiskit open-source software development kit (SDK), while the training of the hybrid quantum-classical GAN is conducted using the mini-batch stochastic gradient descent (SGD) optimization on a classic computer. The hybrid quantum-classical GAN is implemented using a two-qubit system with different discriminator network structures. The hybrid GAN realized using a five-stage discriminator network, comprises 63 quantum gates and 31 trainable parameters, and achieves the Kullback-Leibler (KL) and the Jensen–Shannon (JS) divergence scores of 0.39 and 4.16, respectively, for similarity between the real and generated data distributions.”
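The KL and JS scores quoted in the abstract measure how close the generated distribution is to the real one. Here is a minimal sketch of both divergences for a two-qubit (four-outcome) example; the probability values are made up for illustration and are not the paper’s data.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between discrete distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_divergence(p, q):
    """Jensen-Shannon divergence, a symmetrized, smoothed variant of KL."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Illustrative two-qubit example: probabilities over the four basis states.
real = np.array([0.40, 0.10, 0.10, 0.40])        # "real" data distribution
generated = np.array([0.35, 0.15, 0.12, 0.38])    # hypothetical generator output
print(f"KL divergence: {kl_divergence(real, generated):.3f}")
print(f"JS divergence: {js_divergence(real, generated):.3f}")
```

Lower values of either divergence indicate that the generator’s output distribution more closely matches the training data.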
Links to papers cited:
Reliable Devices Yield Stable Quantum Computations, https://arxiv.org/abs/2307.05381
Probing single electrons across 300 mm spin qubit wafers, https://arxiv.org/abs/2307.04812
Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters, https://arxiv.org/abs/2307.04089
Variational quantum regression algorithm with encoded data structure, https://arxiv.org/abs/2307.03334
A Hybrid Quantum-Classical Generative Adversarial Network for Near-Term Quantum Processors, https://arxiv.org/abs/2307.03269