Scientists Conduct First Quantum Simulation of Atomic Nucleus

By Rachel Harken, ORNL

May 23, 2018

OAK RIDGE, Tenn., May 23, 2018—Scientists at the Department of Energy’s Oak Ridge National Laboratory are the first to successfully simulate an atomic nucleus using a quantum computer. The results, published in Physical Review Letters, demonstrate the ability of quantum systems to compute nuclear physics problems and serve as a benchmark for future calculations.

Quantum computing, in which computations are carried out based on the quantum principles of matter, was proposed by American theoretical physicist Richard Feynman in the early 1980s. Unlike conventional computer bits, the qubits used by quantum computers store information in two-state quantum systems, such as electrons or photons, which can exist in a combination of both states at once (a phenomenon known as superposition).

“In classical computing, you write in bits of zero and one,” said Thomas Papenbrock, a theoretical nuclear physicist at the University of Tennessee and ORNL who co-led the project with ORNL quantum information specialist Pavel Lougovski. “But with a qubit, you can have zero, one, and any possible combination of zero and one, so you gain a vast set of possibilities to store data.”

In October 2017 the multidivisional ORNL team started developing codes to perform simulations on the IBM QX5 and the Rigetti 19Q quantum computers through DOE’s Quantum Testbed Pathfinder project, an effort to verify and validate scientific applications on different quantum hardware types. Using the freely available pyQuil software, a library designed for producing programs in the quantum instruction language Quil, the researchers wrote code that was sent first to a simulator and then to the cloud-based IBM QX5 and Rigetti 19Q systems.
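The article does not reproduce the team’s code, but a minimal pyQuil-style sketch (pyQuil 2.x conventions) of the workflow it describes might look like the following. The trial circuit, shot count, and backend name here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the team's actual code): prepare a one-parameter trial
# state on two qubits, measure it repeatedly on a local simulator, and return
# the shot-by-shot outcomes. Requires the quilc and qvm servers to be running.
from pyquil import Program, get_qc
from pyquil.gates import RY, CNOT, MEASURE

def sample_trial_state(theta, shots=8000):
    qc = get_qc("2q-qvm")                  # local quantum virtual machine (simulator)
    prog = Program()
    ro = prog.declare("ro", "BIT", 2)      # classical readout registers
    prog += RY(theta, 0)                   # one-parameter rotation on qubit 0
    prog += CNOT(0, 1)                     # entangle the two qubits
    prog += MEASURE(0, ro[0])
    prog += MEASURE(1, ro[1])
    prog = prog.wrap_in_numshots_loop(shots)
    executable = qc.compile(prog)
    return qc.run(executable)              # array of measurement outcomes, shots x 2
```

Running on cloud hardware would reuse the same program with a different backend name in place of the local simulator.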

The team performed more than 700,000 quantum computing measurements of the energy of a deuteron, the nuclear bound state of a proton and a neutron. From these measurements, the team extracted the deuteron’s binding energy—the minimum amount of energy needed to disassemble it into these subatomic particles. The deuteron is the simplest composite atomic nucleus, making it an ideal candidate for the project.

“Qubits are generic versions of quantum two-state systems. They have no properties of a neutron or a proton to start with,” Lougovski said. “We can map these properties to qubits and then use them to simulate specific phenomena—in this case, binding energy.”
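As a rough illustration of that mapping idea (the coefficients below are placeholders, not the values from the paper), a two-qubit Hamiltonian can be written as a weighted sum of Pauli operators, and the binding energy estimated by scanning a one-parameter trial state for its minimum energy:

```python
# Illustrative only: a two-qubit Pauli-sum Hamiltonian with hypothetical
# coefficients, and a one-parameter trial state whose minimum energy
# approximates the ground-state (binding) energy.
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def two_qubit(a, b):
    """Tensor product of operators acting on the two qubits."""
    return np.kron(a, b)

# Hypothetical coefficients, for illustration only
c0, c1, c2, c3 = 1.0, 0.5, -2.0, -1.5
H = (c0 * two_qubit(I, I)
     + c1 * two_qubit(Z, I)
     + c2 * two_qubit(I, Z)
     + c3 * (two_qubit(X, X) + two_qubit(Y, Y)))

def energy(theta):
    """Energy of the trial state cos(theta/2)|01> + sin(theta/2)|10>."""
    psi = (np.cos(theta / 2) * np.array([0, 1, 0, 0])
           + np.sin(theta / 2) * np.array([0, 0, 1, 0]))
    return np.real(psi.conj() @ H @ psi)

# Scan the parameter to estimate the minimum (ground-state) energy
thetas = np.linspace(0, 2 * np.pi, 200)
print(min(energy(t) for t in thetas))
```

On real hardware the expectation value of each Pauli term is estimated from repeated measurements rather than computed from the state vector as above.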

A challenge of working with these quantum systems is that scientists must run simulations remotely and then wait for results. ORNL computer science researcher Alex McCaskey and ORNL quantum information research scientist Eugene Dumitrescu ran single measurements 8,000 times each to ensure the statistical accuracy of their results.
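To see why so many repetitions are needed, the short sketch below (with simulated outcomes, not the team’s data) shows how an expectation value and its statistical uncertainty are estimated from shot counts; the error shrinks roughly as one over the square root of the number of shots.

```python
# Hedged illustration: estimate <Z> and its standard error from 0/1 outcomes.
import numpy as np

def estimate_expectation(bits):
    """Estimate <Z> and its standard error from an array of 0/1 outcomes."""
    values = 1 - 2 * np.asarray(bits)          # map outcome 0 -> +1, 1 -> -1
    mean = values.mean()
    stderr = values.std(ddof=1) / np.sqrt(len(values))
    return mean, stderr

# Example with simulated outcomes: 8,000 shots where P(outcome = 1) = 0.3
rng = np.random.default_rng(0)
shots = rng.random(8000) < 0.3
print(estimate_expectation(shots))             # close to <Z> = 0.4, error ~ 0.01
```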

“It’s really difficult to do this over the internet,” McCaskey said. “This algorithm has been done primarily by the hardware vendors themselves, and they can actually touch the machine. They are turning the knobs.”

The team also found that quantum devices are tricky to work with because of inherent noise on the chip, which can drastically alter results. McCaskey and Dumitrescu successfully employed strategies to mitigate the high error rates, such as artificially adding more noise to the calculation to gauge its impact and then extrapolating back to what the results would be with zero noise.
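A minimal sketch of that zero-noise extrapolation idea, using made-up numbers rather than the team’s measurements, might look like this:

```python
# Illustrative only: measure the same quantity at artificially stretched noise
# levels, fit a polynomial in the noise scale factor, and read off the value
# at zero noise.
import numpy as np

def extrapolate_to_zero_noise(scale_factors, noisy_energies, degree=1):
    """Fit energy vs. noise scale and return the extrapolated zero-noise value."""
    coeffs = np.polyfit(scale_factors, noisy_energies, degree)
    return np.polyval(coeffs, 0.0)

# Hypothetical data: energies measured with the noise stretched 1x, 2x, 3x
scales = [1.0, 2.0, 3.0]
energies = [-2.04, -1.96, -1.88]       # made-up numbers for illustration
print(extrapolate_to_zero_noise(scales, energies))   # ~ -2.12
```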

“These systems are really susceptible to noise,” said Gustav Jansen, a computational scientist in the Scientific Computing Group at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. “If particles are coming in and hitting the quantum computer, it can really skew your measurements. These systems aren’t perfect, but in working with them, we can gain a better understanding of the intrinsic errors.”

At the completion of the project, the team’s results on two and three qubits were within 2 and 3 percent, respectively, of the correct answer on a classical computer, and the quantum computation became the first of its kind in the nuclear physics community.

The proof-of-principle simulation paves the way for computing much heavier nuclei with many more protons and neutrons on quantum systems in the future. Quantum computers have potential applications in cryptography, artificial intelligence, and weather forecasting because each additional qubit becomes entangled—or tied inextricably—to the others, exponentially increasing the number of possible outcomes for the measured state at the end. This very benefit, however, also has adverse effects on the system because errors may also scale exponentially with problem size.
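For a sense of that scaling: describing the state of n entangled qubits classically requires 2^n complex amplitudes, as the toy loop below illustrates.

```python
# Quick illustration of the exponential growth mentioned above.
for n in (2, 10, 50):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```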

Papenbrock said the team’s hope is that improved hardware will eventually enable scientists to solve problems that cannot be solved on traditional high-performance computing resources—not even on the ones at the OLCF. In the future, quantum computations of complex nuclei could unravel important details about the properties of matter, the formation of heavy elements, and the origins of the universe.

Results from the study, titled “Cloud Quantum Computing of an Atomic Nucleus,” were published in Physical Review Letters.

The paper’s coauthors, all from ORNL, were Eugene F. Dumitrescu, Alex J. McCaskey, Gaute Hagen, Gustav R. Jansen, Titus D. Morris, Thomas Papenbrock, Raphael C. Pooser, David J. Dean, and Pavel Lougovski. Hagen, Morris, Papenbrock, and Pooser also are affiliated with the University of Tennessee, Knoxville.

The team’s research was supported by DOE’s Office of Science. ORNL is managed by UT-Battelle for DOE’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.


Image credit: Andy Sproles/Oak Ridge National Laboratory, U.S. Dept. of Energy.
