New Computing Algorithms Expand the Boundaries of a Quantum Future

April 5, 2021

Quantum computing promises to harness the strange properties of quantum mechanics in machines that will outperform even the most powerful supercomputers of today. But the extent of their application, it turns out, isn’t entirely clear.

To fully realize the potential of quantum computing, scientists must start with the basics: developing step-by-step procedures, or algorithms, for quantum computers to perform simple tasks, such as factoring a number. These simple algorithms can then be used as building blocks for more complicated calculations.

Prasanth Shyamsundar, a postdoctoral research associate at the Department of Energy’s Fermilab Quantum Institute, has done just that. In a preprint paper released in February, he announced two new algorithms that build upon existing work in the field to further diversify the types of problems quantum computers can solve.

“There are specific tasks that can be done faster using quantum computers, and I’m interested in understanding what those are,” Shyamsundar said. “These new algorithms perform generic tasks, and I am hoping they will inspire people to design even more algorithms around them.”

Shyamsundar’s quantum algorithms, in particular, are useful when searching for a specific entry in an unsorted collection of data. Consider a toy example: Suppose we have a stack of 100 vinyl records, and we task a computer with finding the one jazz album in the stack.

Classically, a computer would need to examine each individual record and make a yes-or-no decision about whether it is the album we are searching for, based on a given set of search criteria.

“You have a query, and the computer gives you an output,” Shyamsundar said. “In this case, the query is: Does this record satisfy my set of criteria? And the output is yes or no.”

Finding the record in question could take only a few queries if it is near the top of the stack, or closer to 100 queries if the record is near the bottom. On average, a classical computer would locate the correct record with 50 queries, or half the total number in the stack.

A quantum computer, on the other hand, would locate the jazz album much faster. This is because it has the ability to analyze all of the records at once, using a quantum effect called superposition.

With this property, the number of queries needed to locate the jazz album is only about 10, the square root of the number of records in the stack. This phenomenon is known as quantum speedup and is a result of the unique way quantum computers store information.
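For readers who want to check the arithmetic, here is a minimal Python sketch of the scaling in the toy example above; the record count is the only input, and nothing here is specific to Shyamsundar's work:

```python
import math

N = 100  # records in the stack

classical_queries = N / 2        # examine one record per query: ~50 on average
quantum_queries = math.isqrt(N)  # superposition-based search: ~sqrt(N) = 10 queries

print(f"classical: ~{classical_queries:.0f} queries, quantum: ~{quantum_queries} queries")
```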

The quantum advantage

Classical computers use units of storage called bits to save and analyze data. A bit can be assigned one of two values: 0 or 1.

The quantum version of this is called a qubit. Qubits can be either 0 or 1 as well, but unlike their classical counterparts, they can also be a combination of both values at the same time. This is known as superposition, and allows quantum computers to assess multiple records, or states, simultaneously.

Qubits can be in a superposition of 0 and 1, while classical bits can be only one or the other. Image: Jerald Pinson

“If a single qubit can be in a superposition of 0 and 1, that means two qubits can be in a superposition of four possible states,” Shyamsundar said. The number of accessible states grows exponentially with the number of qubits used.
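A small numpy sketch makes the counting concrete. This is the standard state-vector picture of qubits, not anything particular to the new algorithms:

```python
import numpy as np

# One qubit in an equal superposition of 0 and 1.
qubit = np.array([1, 1]) / np.sqrt(2)

# Two qubits together: the tensor product spans 4 basis states (00, 01, 10, 11).
two_qubits = np.kron(qubit, qubit)
print(two_qubits)  # four equal amplitudes of 0.5

# The number of accessible states doubles with each added qubit.
for n in (1, 2, 10, 50):
    print(f"{n} qubits -> {2 ** n} states")
```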

Seems powerful, right? It’s a huge advantage when approaching problems that require extensive computing power. The downside, however, is that superpositions are probabilistic in nature — meaning they won’t yield definite outputs about the individual states themselves.

Think of it like a coin flip. When in the air, the state of the coin is indeterminate; it is equally likely to land heads or tails. Only when the coin reaches the ground does it settle into a value that can be determined precisely.

Quantum superpositions work in a similar way. They’re a combination of individual states, each with its own probability of showing up when measured.

But the process of measuring won’t necessarily collapse the superposition into the value we are looking for. That depends on the probability associated with the correct state.

“If we create a superposition of records and measure it, we’re not necessarily going to get the right answer,” Shyamsundar said. “It’s just going to give us one of the records.”
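In simulation, that randomness looks like the following minimal sketch, which "measures" an equal superposition of 100 records using the Born rule (the probability of each outcome is the square of its amplitude):

```python
import numpy as np

N = 100
amplitudes = np.full(N, 1 / np.sqrt(N))  # equal superposition of all records
probabilities = np.abs(amplitudes) ** 2  # Born rule: P(record) = |amplitude|^2

rng = np.random.default_rng()
outcome = rng.choice(N, p=probabilities)  # measurement returns one record at random
print(f"measured record {outcome}")       # only a 1-in-100 chance it's the jazz album
```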

To fully capitalize on the speedup quantum computers provide, then, scientists must somehow be able to extract the correct record they are looking for. If they cannot, the advantage over classical computers is lost.

Amplifying the probabilities of correct states

Luckily, scientists developed an algorithm nearly 25 years ago that performs a series of operations on a superposition to amplify the probabilities of certain individual states and suppress others, depending on a given set of search criteria. That means when it comes time to measure, the superposition will most likely collapse into the state they are searching for.
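That 25-year-old procedure is Grover's search algorithm, later generalized as amplitude amplification. Here is a minimal statevector sketch of its two alternating reflections; the marked index is a hypothetical stand-in for the jazz album:

```python
import numpy as np

N, marked = 100, 42                 # 100 records; index 42 is a hypothetical jazz album
state = np.full(N, 1 / np.sqrt(N))  # start in an equal superposition

iterations = int(np.pi / 4 * np.sqrt(N))  # ~7 rounds for N = 100
for _ in range(iterations):
    state[marked] *= -1                # oracle: flip the sign of the marked record
    state = 2 * state.mean() - state   # diffusion: reflect every amplitude about the mean

print(f"P(marked record) = {state[marked] ** 2:.3f}")  # ~0.995 after 7 rounds
```

After roughly (π/4)√N rounds, nearly all of the probability sits on the marked record, which is exactly the amplification the measurement step needs.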

But the limitation of this algorithm is that it can be applied only to Boolean situations, or ones that can be queried with a yes or no output, like searching for a jazz album in a stack of several records.

A quantum computer can amplify the probabilities of certain individual records and suppress others, as indicated by the size and color of the disks in the output superposition. Standard techniques are able to assess only Boolean scenarios, or ones that can be answered with a yes or no output. Illustration: Prasanth Shyamsundar

Scenarios with non-Boolean outputs present a challenge. Music genres aren’t precisely defined, so a better approach to the jazz record problem might be to ask the computer to rate the albums by how “jazzy” they are. This could look like assigning each record a score on a scale from 1 to 10.

New amplification algorithms expand the utility of quantum computers to handle non-Boolean scenarios, allowing for an extended range of values to characterize individual records, such as the scores assigned to each disk in the output superposition above. Illustration: Prasanth Shyamsundar

Previously, scientists would have to convert non-Boolean problems such as this into ones with Boolean outputs.

“You’d set a threshold and say any state below this threshold is bad, and any state above this threshold is good,” Shyamsundar said. In our jazz record example, that would be the equivalent of saying anything rated 5 or below isn’t jazz, while anything above 5 is.
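As a toy sketch of that conversion (the record names and scores below are made up for illustration):

```python
# Hypothetical 1-10 "jazziness" scores for a handful of records.
scores = {"record_a": 2, "record_b": 7, "record_c": 5, "record_d": 9}

THRESHOLD = 5  # above 5 counts as jazz; 5 or below does not

# Collapse each score into the yes/no answer a Boolean oracle requires.
is_jazz = {name: score > THRESHOLD for name, score in scores.items()}
print(is_jazz)  # {'record_a': False, 'record_b': True, 'record_c': False, 'record_d': True}
```

The conversion discards everything a score conveys beyond which side of the threshold it falls on.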

But Shyamsundar has extended this computation such that a Boolean conversion is no longer necessary. He calls this new technique the non-Boolean quantum amplitude amplification algorithm.

“If a problem requires a yes-or-no answer, the new algorithm is identical to the previous one,” Shyamsundar said. “But this now becomes open to more tasks; there are a lot of problems that can be solved more naturally in terms of a score rather than a yes-or-no output.”

A second algorithm introduced in the paper, dubbed the quantum mean estimation algorithm, allows scientists to estimate the average rating of all the records. In other words, it can assess how “jazzy” the stack is as a whole.
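For intuition, here is the classical baseline such a mean estimator competes against: sample some records and average their scores. This is a generic Monte Carlo sketch with hypothetical scores, not the paper's method; its error shrinks only like one over the square root of the number of samples:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 11, size=100)  # hypothetical jazziness scores, 1-10

# Classical baseline: sample records and average. The error shrinks like
# 1/sqrt(M) in the number of samples M; the quantum algorithm estimates
# the same average with fewer queries.
for M in (10, 50, 100):
    estimate = rng.choice(scores, size=M).mean()
    print(f"{M:4d} samples -> estimate {estimate:.2f} (true mean {scores.mean():.2f})")
```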

Both algorithms do away with having to reduce scenarios into computations with only two types of output, and instead allow for a range of outputs to more accurately characterize information with a quantum speedup over classical computing methods.

Procedures like these may seem primitive and abstract, but they build an essential foundation for more complex and useful tasks in the quantum future. Within physics, the newly introduced algorithms may eventually allow scientists to reach target sensitivities faster in certain experiments. Shyamsundar is also planning to leverage these algorithms for use in quantum machine learning.

And outside the realm of science? The possibilities are yet to be discovered.

“We’re still in the early days of quantum computing,” Shyamsundar said, noting that curiosity often drives innovation. “These algorithms are going to have an impact on how we use quantum computers in the future.”

This work is supported by the Department of Energy’s Office of Science, Office of High Energy Physics, QuantISED program.

The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.


Source: Katrina Miller, Fermilab
