IBM, Intel Papers Report AI Breakthroughs for Quantum Science

By John Russell

March 13, 2019

Fascinatingly, two announcements today show how AI (machine and deep learning) can influence quantum computing in quite different ways. IBM (et al.) reported developing ‘AI’ algorithms that “demonstrate how noisy quantum computers can solve machine learning classification problems that classical computers cannot,” paving a way toward quantum advantage. Intel (et al.) reported having “mathematically proven that artificial intelligence can help us understand currently unreachable quantum physics phenomena,” which, among other things, could lead to better quantum computers.

The twin announcements closely track prestigious publications. The MIT, Oxford, and IBM-led paper, Supervised learning with quantum-enhanced feature spaces, was published in Nature today. The Intel-led paper, Quantum Entanglement in Deep Learning Architectures, was published in the APS journal Physical Review Letters last month. Intel made its announcement in conjunction with a keynote today by Amnon Shashua, co-founder and CEO of Intel subsidiary Mobileye, at the National Academy of Sciences ‘Science of Deep Learning’ conference. Shashua is also a professor at Hebrew University and one of the paper’s authors.

IBM posted a blog by IBM researchers Kristan Temme and Jay Gambetta explaining the work.

“There are high hopes that quantum computing’s tremendous processing power will someday unleash exponential advances in artificial intelligence. AI systems thrive when the machine-learning algorithms used to train them are given massive amounts of data to ingest, classify and analyze. The more precisely that data can be classified according to specific characteristics, or features, the better the AI will perform. Quantum computers are expected to play a crucial role in machine learning, including the crucial aspect of accessing more computationally complex feature spaces – the fine-grain aspects of data that could lead to new insights,” write Temme and Gambetta.

“[In the paper] we describe developing and testing a quantum algorithm with the potential to enable machine learning on quantum computers in the near future. We’ve shown that as quantum computers become more powerful in the years to come, and their Quantum Volume increases, they will be able to perform feature mapping, a key component of machine learning, on highly complex data structures at a scale far beyond the reach of even the most powerful classical computers…Our methods were also able to classify data with the use of short-depth circuits, which opens a path to dealing with decoherence. Just as significantly, our feature-mapping worked as predicted: no classification errors with our engineered data, even as the IBM Q systems’ processors experienced decoherence.”
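For readers new to feature maps, the underlying idea is classical: lift data into a higher-dimensional feature space where a simple linear separator exists, and let a kernel compute inner products in that space. The sketch below is a minimal classical illustration, not IBM's algorithm, using scikit-learn, a hand-picked polynomial feature map, and toy XOR-labeled data; the quantum approach instead realizes the feature map as a quantum circuit believed hard to simulate classically, estimating the kernel values from measurements.

```python
# A minimal classical sketch (not IBM's algorithm): a kernel classifier
# built from an explicit feature map. The quantum version replaces phi()
# with a circuit whose kernel values are estimated from measurements.
import numpy as np
from sklearn.svm import SVC

def phi(x):
    # Hand-picked nonlinear feature map: lift 2D points into 5D, where
    # XOR-labeled data becomes linearly separable.
    x1, x2 = x
    return np.array([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def kernel(A, B):
    # Gram matrix K[i, j] = <phi(a_i), phi(b_j)>.
    FA = np.array([phi(a) for a in A])
    FB = np.array([phi(b) for b in B])
    return FA @ FB.T

# Toy XOR data: no linear classifier on the raw (x1, x2) inputs fits it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

clf = SVC(kernel=kernel, C=100.0).fit(X, y)
print(clf.predict(X))  # [0 1 1 0]
```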

Given the nature of the material, the IBM blog and paper are best read directly.

Intel’s work attacked a different issue, and the paper’s authors do a nice job of framing the challenge in this excerpt:

“A prominent approach for classically simulating many-body wave functions makes use of their entanglement properties in order to construct tensor network (TN) architectures that aptly model them in the thermodynamic limit. Though this method is successful in modeling one-dimensional (1D) systems that obey area-law entanglement scaling with subsystem size through the matrix product state (MPS) TN, it still faces difficulties in modeling two-dimensional (2D) systems due to intractability.

“In the seemingly unrelated field of machine learning, deep neural network architectures have exhibited an unprecedented ability to tractably encompass the convoluted dependencies that characterize difficult learning tasks such as image classification or speech recognition. A consequent machine learning inspired approach for modeling wave functions makes use of fully connected neural networks and restricted Boltzmann machines (RBMs), which represent relatively veteran machine learning constructs.

“In this Letter, we formally establish that highly entangled many-body wave functions can be efficiently represented by deep learning architectures that are at the forefront of recent empirical successes. Specifically, we address two prominent architectures in the form of convolutional neural networks (CNNs), commonly used over spatial inputs (e.g., image pixels), and recurrent neural networks (RNNs), commonly used over temporal inputs (e.g., phonemes of speech).”
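To make the first of those ideas concrete, here is a minimal matrix product state sketch (our illustration, not code from the paper): each site carries one small tensor, and the amplitude of a spin configuration is obtained by multiplying the matrices each spin selects. The site count n and bond dimension D are arbitrary; the point is that storage grows as roughly 2nD^2 rather than 2^n.

```python
# A minimal sketch of a matrix product state (MPS); sizes are arbitrary.
import numpy as np

n, D = 6, 4  # sites and bond dimension (illustrative values)
rng = np.random.default_rng(0)

# One rank-3 tensor per site, indexed [physical, left-bond, right-bond];
# boundary bonds have dimension 1 so the chain contracts to a scalar.
tensors = [rng.normal(size=(2, 1, D))]
tensors += [rng.normal(size=(2, D, D)) for _ in range(n - 2)]
tensors += [rng.normal(size=(2, D, 1))]

def amplitude(bits):
    # <bits|psi>: multiply the matrices selected by each physical index.
    M = tensors[0][bits[0]]
    for site in range(1, n):
        M = M @ tensors[site][bits[site]]
    return M[0, 0]

# The MPS stores ~2*n*D**2 numbers instead of 2**n amplitudes.
print(amplitude([0, 1, 1, 0, 1, 0]))
```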
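The “relatively veteran” RBM construct the authors mention has a standard closed form as a wave function, psi(s) = exp(sum_i a_i s_i) * prod_j 2cosh(b_j + sum_i W_ij s_i) for spins s_i in {-1, +1}. A minimal sketch, with all sizes and parameters chosen purely for illustration:

```python
# A minimal sketch of the RBM wave-function ansatz; all sizes and
# parameters here are illustrative, not taken from the paper.
import numpy as np

n_visible, n_hidden = 8, 16
rng = np.random.default_rng(1)
a = rng.normal(scale=0.1, size=n_visible)            # visible biases
b = rng.normal(scale=0.1, size=n_hidden)             # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def psi(s):
    # Unnormalized amplitude psi(s) = exp(a.s) * prod_j 2*cosh(theta_j),
    # with theta = b + s @ W, for spins s_i in {-1, +1}.
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

s = rng.choice([-1.0, 1.0], size=n_visible)
print(psi(s))
```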
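And for a rough picture of what a convolutional wave function can look like in code, here is a sketch under assumed filter shapes and a multiplicative pooling readout; it is meant to convey the shape of such an ansatz, not the paper's exact architecture:

```python
# A rough illustration (not the paper's exact architecture): a 1D
# convolution over a spin chain followed by product pooling yields one
# unnormalized amplitude per configuration.
import numpy as np

rng = np.random.default_rng(2)
n, k, c = 8, 3, 4  # sites, filter width, channels (assumed values)
filters = rng.normal(scale=0.5, size=(c, k))

def psi(s):
    # Slide each filter across the chain (periodic boundary), apply a
    # nonlinearity, then pool multiplicatively down to a scalar.
    padded = np.concatenate([s, s[:k - 1]])
    windows = np.stack([padded[i:i + k] for i in range(n)])  # (n, k)
    activations = np.cosh(windows @ filters.T)               # (n, c)
    return np.prod(activations)

s = rng.choice([-1.0, 1.0], size=n)
print(psi(s))
```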

Once again, this is a topic best examined by reading the original paper. That said, the implications are far-reaching, affecting many areas of research at the quantum level.

Link to IBM-led paper: https://www.nature.com/articles/s41586-019-0980-2

Link to IBM Blog: https://www.ibm.com/blogs/research/2019/03/machine-learning-quantum-advantage/

Link to Intel-led paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.065301

Link to Intel announcement: https://newsroom.intel.com/news/intel-executive-leads-artificial-intelligence-researchers-linking-ai-quantum-physics-insight/
