Qubit Stream: Monte Carlo Advance, Infosys Joins the Fray, D-Wave Meeting Plans, and More

By John Russell

September 23, 2021

It seems the stream of quantum computing reports never ceases. This week – IonQ and Goldman Sachs tackle Monte Carlo on quantum hardware, Cambridge Quantum pushes chemistry calculations forward, D-Wave prepares for its sixth annual Qubits 21 conference, Infosys partners with AWS Braket, and a U.K. consortium develops a new abstraction layer. Here’s a recap of just a few recent QC reports.

Let’s start with IonQ-led efforts to bring practical quantum computing to financial services. Risk analysis is at the heart of many FS activities and Monte Carlo simulation is typically the tool of choice. One challenge with running Monte Carlo simulation algorithms on NISQ (noisy intermediate-scale quantum) computers is that they require very deep circuits. So far, NISQ systems can’t reliably execute deep circuits.
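For context, the speedup at stake comes from how estimation error scales with work. Classical Monte Carlo error shrinks as 1/√N in the number of samples, while quantum amplitude estimation promises 1/N in the number of oracle calls, at the cost of those deep circuits. A minimal Python sketch of the classical side (purely illustrative, not from the paper):

```python
import random

def classical_monte_carlo(n_samples, p=0.3, seed=0):
    """Estimate a probability p by plain sampling.

    The standard error shrinks only as 1/sqrt(N); amplitude
    estimation would improve this to 1/N per oracle call.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if rng.random() < p)
    return hits / n_samples

# Quadrupling the samples roughly halves the typical error.
for n in (100, 400, 1600):
    est = classical_monte_carlo(n)
    print(n, round(abs(est - 0.3), 4))
```

The quadratic gap is why even a "modest" quantum advantage here matters: halving the error classically costs four times the work.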

Working with Goldman Sachs and QC Ware, IonQ released a paper (Low depth amplitude estimation on a trapped ion quantum computer) demonstrating progress. Here’s an excerpt:

“Recent works have succeeded in somewhat reducing the necessary resources for such algorithms, by trading off some of the speedup for lower depth circuits, but high quality qubits are still needed for demonstrating such algorithms. Here, we report the results of an experimental demonstration of amplitude estimation on a state-of-the-art trapped ion quantum computer. The amplitude estimation algorithms were used to estimate the inner product of randomly chosen four-dimensional unit vectors, and were based on the maximum likelihood estimation (MLE) and the Chinese remainder theorem (CRT) techniques.

“Significant improvements in accuracy were observed for the MLE based approach when deeper quantum circuits were taken into account, including circuits with more than ninety two-qubit gates and depth sixty, achieving a mean additive estimation error on the order of 10−2. The CRT based approach was found to provide accurate estimates for many of the data points but was less robust against noise on average. Last, we analyze two more amplitude estimation algorithms that take into account the specifics of the hardware noise to further improve the results.”

In the conclusion, they write: “Note that we restricted the experiments to four qubits, because our main goal was to probe the regime where the evaluation oracle is invoked a large number of times in a noisy setting, achieving up to fifteen sequential oracle invocations with still excellent results. A next step would be to establish tradeoffs between circuit depth and number of oracle calls in an experimental setting, as theoretically proved in, and this may soon become feasible with further improvements in hardware.”
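The MLE technique the paper describes can be caricatured in a few lines: the Grover-iterated circuits produce outcomes with probability sin²((2k+1)θ) at oracle power k, and one recovers θ (and hence the amplitude a = sin²θ) by maximizing the likelihood over the pooled shot counts. The Python below is our own toy illustration of that general idea, not the authors' code; the oracle-power schedule, shot count, and grid search are arbitrary choices.

```python
import math
import random

def simulate_shots(theta, powers, shots, rng):
    """For Grover power k, P(measure 1) = sin^2((2k+1) * theta)."""
    counts = []
    for k in powers:
        p1 = math.sin((2 * k + 1) * theta) ** 2
        counts.append(sum(1 for _ in range(shots) if rng.random() < p1))
    return counts

def mle_estimate(counts, powers, shots, grid=2000):
    """Grid-search the log-likelihood over theta in (0, pi/2)."""
    best_theta, best_ll = 0.0, -float("inf")
    for i in range(1, grid):
        theta = (math.pi / 2) * i / grid
        ll = 0.0
        for k, ones in zip(powers, counts):
            p = math.sin((2 * k + 1) * theta) ** 2
            p = min(max(p, 1e-12), 1 - 1e-12)  # clamp for the logs
            ll += ones * math.log(p) + (shots - ones) * math.log(1 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return math.sin(best_theta) ** 2  # estimated amplitude a = sin^2(theta)

rng = random.Random(1)
true_a = 0.25
theta = math.asin(math.sqrt(true_a))
powers = [0, 1, 2, 4, 8]  # illustrative oracle-call schedule
counts = simulate_shots(theta, powers, shots=200, rng=rng)
print(mle_estimate(counts, powers, shots=200))
```

Deeper circuits (larger k) pin θ down more sharply, which is why the paper's accuracy improved with depth, until hardware noise overwhelms the signal.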

The FS world is reluctant to say much about technology advances being put to use – the whole point is to keep an advantage – and there’s a fair amount of work by many parties seeking to develop FS apps for use on quantum computers. Perhaps modest, real-world FS apps on quantum computers are closer than we think.

Broadly IonQ has been busy. It also announced today a new partnership with GE Research to explore the impact of quantum computing in risk analysis applications. IonQ said, “The initiative is expected to lay the groundwork for risk management across key sectors including finance, and government.”

The company also reported, “Over the past six months, IonQ has demonstrated technology that is expected to allow the Company to significantly scale the power of its quantum computers, has expanded its footprint to all major cloud providers and major quantum developer languages, has launched major commercial partnerships with partners like Accenture, Softbank and the University of Maryland, and has tripled its bookings expectations for 2021.”

Tripling bookings expectations is good.

Cambridge Quantum Advances Quantum Chemistry Simulation

Solving quantum chemistry problems in search of new materials and drugs is expected to be an important application for quantum computers. It’s an area where the inherent probabilistic nature of quantum computing (think superposition) mimics nature and is thought to be able to provide a more realistic simulation of physical systems.

Cambridge Quantum Computing is another young QC company pushing that envelope, reporting work this week that improves the accuracy of quantum system modeling and mitigates some of the errors associated with those calculations. The work, Quantum hardware calculations of periodic systems: hydrogen chain and iron crystals, is published online.

Running quantum algorithms on real hardware is essential for understanding their strengths and limitations, say the researchers, especially in the noisy intermediate scale quantum (NISQ) era.

“We select two periodic systems with different level of complexity for these calculations. One of them is the distorted hydrogen chain as an example of very simple systems, and the other one is iron crystal in the BCC and FCC phases as it is considered to be inaccessible by using classical computational wavefunction methods. The ground state energies are evaluated based on the translational quantum subspace expansion (TransQSE) method for the hydrogen chain, and periodic boundary condition adapted VQE for our iron models,” write the researchers, led by Kentaro Yamamoto of Cambridge Quantum.

Besides “usual” mitigation measures, “We apply a novel noise mitigation technique, which performs post-selection of shot counts based on Z2 and U1 symmetry verification. By applying these techniques for the simplest 2 qubit iron model systems, the energies obtained by the hardware calculations agree with those of the state-vector simulations within ∼5 kJ/mol. Although the quantum computational resources used for those experiments are still limited, the systematic resource reduction applied to obtain our simplified models will give us a way to scale up by rolling approximations back as quantum hardware matures.”
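Post-selection on symmetry is easy to picture classically: if the ideal state is known to occupy, say, only the even-parity (Z2) sector, then any shot with odd parity must come from noise and can be discarded before estimating energies. A hypothetical sketch of the filtering step (our illustration, not Cambridge Quantum's implementation):

```python
from collections import Counter

def postselect_by_parity(counts, parity=0):
    """Keep only shots whose bitstring parity matches the known Z2
    symmetry of the target state; everything else is treated as noise."""
    kept = Counter()
    for bitstring, n in counts.items():
        if sum(int(b) for b in bitstring) % 2 == parity:
            kept[bitstring] += n
    return kept

# Noisy 2-qubit shot counts: the ideal state populates only even-parity
# bitstrings, so the '01' and '10' counts are attributed to noise.
raw = Counter({"00": 480, "11": 470, "01": 30, "10": 20})
print(postselect_by_parity(raw))  # Counter({'00': 480, '11': 470})
```

The cost of the technique is shot efficiency: discarded shots are wasted, so the filter pays off only when the symmetry violation is genuinely noise-dominated.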

While the models examined are simple, the research team believes their results “set an important starting point for systematic improvement of quantum chemical calculations on quantum computers by rolling back the simplification procedure presented in this paper.”

D-Wave Meeting to Showcase Use Cases

D-Wave’s Advantage chip

D-Wave Systems is one of a few pioneers in quantum computing. Its quantum annealing approach, though sometimes criticized, has proven applicable in many optimization use cases, and D-Wave has one of the quantum community’s more expansive and mature industry engagement programs. Its annual user meeting, Qubits, will be held October 5-7. It will be virtual again this year and there is no charge to attend.

On the agenda are D-Wave’s technology roadmap as well as user/practitioner presentations in finance, energy, life sciences, manufacturing/logistics, mobility, and retail. Here’s a link to the agenda. D-Wave and IBM are, at least so far, the only companies to sell systems intended for on-premises use. Most of the QC community provides access to its systems via a web portal of some kind. D-Wave and IBM, of course, also do this.

Infosys Moves into Quantum Computing

India-based global IT services and consulting firm Infosys announced a strategic collaboration with AWS this week to develop quantum computing capabilities and use cases. Like many IT services firms, Infosys has been aggressively expanding its cloud expertise and said the quantum effort would be part of its Cobalt cloud offering and use the AWS Braket quantum portal and services.

According to the official announcement, “Infosys will look to build, test, and evaluate quantum applications on circuit simulators and quantum hardware technologies using Amazon Braket. This will enable researchers and developers to experiment and study complex computational problems as quantum technologies continue to evolve. Enterprises will get access to use cases for rapid experimentation and can explore how quantum computing can potentially help them in the future in a variety of areas, assess new ideas and plan adoption strategies to drive innovation. The use of Amazon Braket by Infosys aims at getting businesses ready for a future where quantum computers will impact business.”

U.K. Consortium Develops HAL to Facilitate QC Collaboration

A U.K. consortium led by quantum software start-up Riverlane and the National Physical Laboratory (NPL) has developed an open-source hardware abstraction layer (HAL) that makes software portable across different quantum computing hardware platforms, according to Riverlane.

This is an idea being worked on by many parties. It’s not at all clear which qubit technologies will eventually win out. Currently several different qubit technologies are in operation and more are being developed. It seems likely there will be a few different kinds of quantum computers, featuring different qubit technologies that are better suited for specific application areas. An abstraction layer to hide the underlying hardware complexity from developers will be essential for success, say many observers.

HAL, reports the consortium, “is designed to be portable across four leading qubit technologies: superconducting qubits, trapped-ion qubits, photonic systems and silicon-based qubits. It will allow high-level quantum computer users, such as application developers, platform and system software engineers, and cross-platform software architects, to write programs for quantum computers portable to these four qubit technologies while maximizing performance.”
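The shape of such an abstraction layer is familiar from classical computing: portable programs target a common interface, and each backend maps that interface onto its native operations. The Python sketch below is a hypothetical illustration of the concept only; the class and method names are invented, not taken from the consortium's version 0 specification.

```python
from abc import ABC, abstractmethod

class QuantumHAL(ABC):
    """Illustrative hardware abstraction layer: application code calls
    these methods, and each qubit technology supplies its own backend."""

    @abstractmethod
    def apply_gate(self, gate: str, *qubits: int) -> None: ...

    @abstractmethod
    def measure(self, qubit: int) -> int: ...

class TrappedIonBackend(QuantumHAL):
    """Stub backend: a real one would translate each abstract gate
    into the platform's native gate set and calibration data."""
    def __init__(self):
        self.log = []

    def apply_gate(self, gate, *qubits):
        self.log.append((gate, qubits))

    def measure(self, qubit):
        self.log.append(("measure", (qubit,)))
        return 0  # stub result

def bell_pair(hal: QuantumHAL):
    """Portable program: runs unchanged on any HAL-conformant backend."""
    hal.apply_gate("H", 0)
    hal.apply_gate("CNOT", 0, 1)
    return hal.measure(0), hal.measure(1)

backend = TrappedIonBackend()
bell_pair(backend)
print(backend.log[0])  # ('H', (0,))
```

The design tension the consortium cites, portability "while maximizing performance," shows up here too: the thinner the interface, the more portable the program, but the less room a backend has for hardware-specific optimization.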

Besides Riverlane and NPL, the consortium currently includes quantum hardware companies SeeQC, Hitachi Europe, Universal Quantum, Duality Quantum Photonics, Oxford Ionics, and Oxford Quantum Circuits, as well as U.K.-based chip designer Arm.

The first specification of the HAL, “version 0,” is freely accessible on GitHub. The consortium is seeking feedback from the quantum community, with the eventual aim of incorporating the concepts into an international standard on which the community can build.
