Somewhat quietly, while quantum hardware developers have been steadily improving today’s early quantum computers (in scale and error correction), quantum algorithm and application developers have been accelerating their own efforts. Today, AWS released a new study – Quantum algorithms: A survey of applications and end-to-end complexities – intended to help early quantum computing users.
This is a nice resource to add to the growing basket of tools available to early explorers of quantum computing. It’s also a good indication of steady progress in quantum algorithm and application development. Increasingly, it appears that a considerable number of quantum algorithms and applications will become usable on NISQ (noisy intermediate-scale quantum) devices to deliver early quantum advantage (QA), and then eventually on fault-tolerant quantum computers when they arrive.
The timetable for reaching QA is, of course, debated. Both IBM and IonQ argued at last week’s HPC & AI on Wall Street conference that 24-36 months is a likely target for reaching early QA on NISQ devices. We’ll see. Fully fault-tolerant quantum computers are not expected for several years.
“The primary focus of the survey is quantum algorithms with the greatest potential to generate customer value in the long term, once fault-tolerant quantum computers are available – however, it also comments on the most relevant near-term noisy intermediate-scale quantum (NISQ) algorithms, where appropriate,” write Sam McArdle and Alexander Dalzell, AWS quantum researchers, in a blog today. AWS has also posted a preprint paper on arXiv with the full survey.
The survey is written in a wiki-like fashion that encourages modular treatment of each step shown above. The survey document is split into the following parts:
- Areas of application including quantum chemistry, several subfields of physics, optimization, cryptanalysis, solving differential equations, finance, and machine learning. These sections detail the high-level goals of customers in these areas, as well as specific computational tasks that may be amenable to quantum computation. For each area, the survey identifies the actual end-to-end problems solved, discusses which algorithmic primitives are used to solve the problems, and collects the overall complexity of the solutions. It also discusses state-of-the-art classical solutions for the same tasks, which is important since quantum computers can provide value only if they are superior to these classical solutions.
- Algorithmic primitives including quantum linear algebra, Hamiltonian simulation, quantum linear system solvers, amplitude amplification, and phase estimation, among others. These primitives are the building blocks of quantum algorithms. They can be combined in a modular, abstracted fashion, but there are often caveats involved with using certain primitives that a designer should be aware of. The survey describes how the primitives work, quantifies their resource requirements, and identifies these key nuances.
- Fault-tolerant quantum computation. Since the algorithms discussed in the survey generally cannot be successfully run on existing noisy intermediate-scale quantum devices, the survey works under the assumption that these algorithms will require a fault-tolerant implementation. Fault tolerance introduces significant overheads to the quantum computation. The survey reviews the theory and the leading proposals on how to implement algorithms in a fault-tolerant way, explaining how to compute the relevant overheads for a fully end-to-end resource analysis.
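To make one of these primitives concrete, the outcome statistics of textbook quantum phase estimation (named above, and used in the blog’s worked example below) can be reproduced with a short classical simulation. This is an illustrative sketch only; the eigenphase value and qubit count chosen here are hypothetical and do not come from the survey.

```python
import numpy as np

def qpe_distribution(phase, n_ancilla):
    """Outcome probabilities of textbook quantum phase estimation for an
    eigenstate whose eigenvalue is exp(2*pi*i*phase), using n_ancilla
    counting qubits (classical simulation, not a circuit execution)."""
    m = 2 ** n_ancilla
    j = np.arange(m)
    probs = np.empty(m)
    for k in range(m):
        # Amplitude of outcome k after the inverse QFT:
        # (1/m) * sum_j exp(2*pi*i * j * (phase - k/m))
        amp = np.sum(np.exp(2j * np.pi * j * (phase - k / m))) / m
        probs[k] = np.abs(amp) ** 2
    return probs

phase = 0.3435  # hypothetical eigenphase to estimate
probs = qpe_distribution(phase, n_ancilla=8)
estimate = np.argmax(probs) / len(probs)
print(f"estimate = {estimate:.4f}")  # → estimate = 0.3438
```

With 8 counting qubits the estimate is resolved to one part in 256, which is why fault-tolerant resource analyses track how precision targets drive ancilla counts and circuit depth.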
McArdle and Dalzell write, “When analyzing and optimizing quantum algorithms for a range of problems, we found that the most useful insights and pieces of knowledge appeared multiple times in different quantum algorithms and applications.”
“We decided to collect this information in a single location. We have found that the streamlined, modular structure of the survey has enabled us to quickly understand and analyze new quantum algorithms. We hope that by sharing this survey with the community, other researchers will be empowered to view quantum algorithms in an end-to-end fashion. Ultimately, we hope that this work can grow beyond the research of our own team, and become an important resource for the quantum computing community.”
Shown below is a figure from the blog depicting the overall algorithm/application development workflow.
Here’s an excerpt:
“The two-dimensional Fermi–Hubbard model on a square lattice is parameterized by two numbers, t and U, where t is the strength of the kinetic contribution to the energy and U is the strength of the “on-site” interaction between fermions occupying the same lattice site (each site can have at most two electrons). Background information on the Fermi–Hubbard model can be found in Section 1.1 of the survey [1].
“As described in that section, this customer use-case is tackled by the following workflow:
“1. Define the computational task: The goal is to compute the ground state energy density in the thermodynamic limit, that is, to estimate E = (GSE)/N in the limit N → ∞ to precision ε, where GSE denotes the ground-state energy. This is achieved by computing the energy density for increasingly large L × L lattices (N = 2L²), and performing an extrapolation.
“2. Construct a high-level quantum algorithm that solves the computational task: The task can be solved by preparing an initial state with sufficiently high overlap with the ground state, and performing quantum phase estimation to measure the ground state energy. A deeper dive into the resource requirements of these steps is performed below, with help from other sections of the survey.
“3. Fault-tolerant implementation: Sections 25–27 of the survey, which cover fault-tolerant quantum computation, provide rules of thumb that allow for a rough estimate of the resources needed to run the algorithm in a surface code architecture – a common choice of architecture which is compatible with a superconducting qubit hardware platform.
“4. Processing the outcomes of the computation: We see from Figure 2 that we require on the order of 10 energy values for systems of size up to 20 × 20 sites (N = 800). An extrapolation is performed to determine the estimate for infinite system size. Together with the resource estimates determined in the previous steps, this will yield an estimate of the total cost for this problem.”
“These steps are depicted in Figure 3, where color-coding indicates which section of the survey contains the relevant information for each component.”
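The extrapolation in step 4 can be sketched numerically. The snippet below uses synthetic energy densities with a known thermodynamic-limit value and an assumed leading 1/L finite-size correction; both are hypothetical choices for illustration, not data or a scaling form taken from the survey.

```python
import numpy as np

# Hypothetical energy densities E(L) for L x L lattices (synthetic data,
# generated with a known infinite-size value so the fit can be checked).
E_inf_true = -0.75                 # assumed thermodynamic-limit energy density
Ls = np.array([8, 10, 12, 14, 16, 18, 20], dtype=float)
E = E_inf_true + 0.4 / Ls          # assumed leading 1/L finite-size correction

# Fit E(L) = E_inf + a/L; the intercept is the extrapolated estimate.
a, E_inf_fit = np.polyfit(1.0 / Ls, E, deg=1)
print(f"extrapolated energy density: {E_inf_fit:.4f}")  # → -0.7500
```

In practice each E(L) would come from a separate quantum phase estimation run, so the classical post-processing cost here is negligible next to the quantum resource estimates from the earlier steps.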

The bloggers recognize today’s QC shortcomings and make a pitch for using Amazon Braket’s expanding portfolio of offerings.
“Quantum hardware is not yet capable of performing the fault-tolerant quantum algorithms discussed in this blog post or in the survey. While the field develops devices able to support quantum error correction, researchers and customers are exploring NISQ algorithms, which can run on hardware today, but are typically more difficult to study analytically since in most cases they are based on heuristics,” they write.
“With Amazon Braket, the quantum computing service of AWS, you can have on-demand access to a variety of different NISQ quantum computers to design and investigate near-term algorithms, research and develop error mitigation strategies, or run analog Hamiltonian simulations of physical spin systems and certain optimization problems. To get started, visit our webpage or explore the Amazon Braket example notebooks.”
Link to blog, https://aws.amazon.com/blogs/quantum-computing/constructing-end-to-end-quantum-algorithm/
Link to full paper, https://arxiv.org/pdf/2310.03011.pdf