Quantum Evolution: an Interview with Berkeley Lab’s Bert de Jong

April 17, 2023 — Bert de Jong leads the Applied Computing for Scientific Discovery Group at Lawrence Berkeley National Laboratory. He also heads the Advancing Integrated Development Environments for Quantum Computing through Fundamental Research (AIDE-QC) project, a multi-institution effort in open-source computing, programming, and simulation for the Department of Energy's Advanced Scientific Computing Research (ASCR) program, and he holds leadership roles in several other DOE-supported quantum information science programs, including as deputy director of the Quantum Systems Accelerator. Before joining Berkeley Lab, de Jong spent 14 years in leadership roles at Pacific Northwest National Laboratory. He earned his doctorate in theoretical chemistry at the University of Groningen.

Bert de Jong. Image credit: Berkeley Lab.

Now, some terminology: In quantum mechanics, subatomic particles, even far from each other, can experience entanglement – that is, measuring one particle can affect the state of the other, even at a distance. Superposition: particles can exist in multiple states at once until they're observed. There are different types of qubits, the quantum version of the binary bit. Superconducting qubits are chips built around Josephson junctions, structures that weakly link superconductors; current flow between them can produce superposition and entanglement. Trapped-ion and neutral-atom qubits are both trap-based approaches. Trapped-ion qubits are held in place by an electric field. Neutral-atom qubits are trapped by tweezers made of light; lasers coax entanglement and rotate qubits between 0 and 1.
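
The two ideas come together in the simplest entangling circuit, a Bell state. Here is a minimal sketch in Python using the open-source Qiskit library (any gate-level toolkit would do; the circuit is illustrative and not tied to any particular hardware discussed below): a Hadamard gate puts one qubit into superposition, and a controlled-NOT entangles it with a second.

    # A minimal Bell-state sketch (pip install qiskit).
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)      # Hadamard: put qubit 0 into an equal superposition of 0 and 1
    qc.cx(0, 1)  # CNOT: entangle qubit 1 with qubit 0

    state = Statevector.from_instruction(qc)
    print(state)  # (|00> + |11>)/sqrt(2): measuring one qubit fixes the other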

ASCR Discovery: How and when did you start working with quantum computing?

Bert de Jong: About eight years ago I realized that classical computers cannot scale forever. We cannot make them smaller; we cannot make them faster. There's too much energy that we have to put into that. So I started looking at other computational tools we could use.

Tell us about the Quantum Systems Accelerator.

It’s one of five Department of Energy Office of Science National QIS Research Centers funded to work on quantum research and development as part of the United States National Quantum Initiative. We’re focusing on scaling and building more-accurate quantum computers in three different types of technologies: superconducting qubits, trapped ions, and neutral atoms.

We want to understand each platform’s challenges and find commonalities. It’s not just about building better qubits but also figuring out how to scale them up to many qubits.

On the hardware side, for about two and a half years we've been working with a neutral-atom system at the scale of 256 qubits. We've been able to do real materials science simulations at that scale. With trapped ions, we are now building the first trap that can hold up to 200 ions. If you look at superconducting qubits, we are now on the order of 25. There, it's not just building more but also trying to find better ways to control the qubits.

We're also doing a lot of work on the algorithms and software side. How do you know that you get real quantum advantage with a quantum computer? We have research programs focused on chemistry and materials, nuclear physics, and high-energy physics, trying to understand how we can use these quantum computers to do real science.

What is quantum computing best poised to work on at this point?

A lot of problems are what we call exponentially hard, which means the number of operations I need to solve them blows up as the problem grows. We can attack these on classical computers – that's why we now have an exascale computer at Oak Ridge National Laboratory. But take, for example, a standard set of equations that we use in chemistry, where the complexity, the number of operations we do, scales as the sixth power of the system size. Computational power that increases from a petaflop to an exaflop then only lets me solve a problem that is about four times as large. Quantum computers inherently could do this more efficiently. They use things like superposition and entanglement to do a lot more operations simultaneously and to handle complex operations in a natural way.
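
A quick check of that scaling argument: if a method's cost grows as the sixth power of the system size N, a 1,000-fold jump in compute (petaflop to exaflop) buys a factor of 1,000^(1/6) ≈ 3.2 in N, which the interview rounds up to about four. In Python:

    # If cost scales as N**6, how much bigger a problem does 1000x compute buy?
    compute_ratio = 1e18 / 1e15        # exaflop / petaflop = 1,000
    growth = compute_ratio ** (1 / 6)  # invert cost ~ N**6
    print(f"problem size grows by ~{growth:.1f}x")  # ~3.2x, the rough "four times" above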

An exaflop on a classical computer requires 16,000 to 20,000 large GPUs to run; I can do that, probably, with 30 to 40 qubits. So the ratio is very big. And every qubit I add doubles the state space, so the amount of compute I can do expands far more quickly than it does on a classical computer.
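
That asymmetry comes from the exponential size of the quantum state space: n qubits span 2^n amplitudes, so even storing the state of 40 qubits classically is daunting. A rough memory estimate, assuming 16 bytes per complex amplitude:

    # Memory needed to hold a full n-qubit state on a classical machine.
    for n in (30, 40, 50):
        amplitudes = 2 ** n                 # state-space dimension
        terabytes = amplitudes * 16 / 1e12  # 16 bytes per complex amplitude
        print(f"{n} qubits: ~{terabytes:,.2f} TB")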

How are researchers dealing with the problem of noise?

The reality is quantum computers are still physics experiments at this point. And we do not fully understand these quantum computers. We have error models that describe what a quantum computer might do, but they’re not exact either.

With classical computers, the chips have gotten so small that we have to do error correction – run many copies of a computation at the same time and take a majority vote to decide what the right answer should be. That approach is a lot harder on a quantum computer. To do full error correction with today's quantum computers, we would need tens of thousands, up to potentially millions, of physical qubits to get maybe 100 to 200 really good working qubits.
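
The classical scheme he describes is essentially a repetition code: copy the bit several times and take a majority vote to undo an occasional flip. A toy sketch of that idea follows; quantum codes such as the surface code are far more involved, since quantum states cannot simply be copied.

    from collections import Counter

    def majority_vote(noisy_copies):
        """Decode a repetition code: the most common value wins."""
        return Counter(noisy_copies).most_common(1)[0][0]

    # Three copies of a logical 1; noise flipped one of them.
    print(majority_vote([1, 0, 1]))  # -> 1, the error is corrected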

We may never get a perfect quantum computer. So we're trying to learn how we can mitigate these errors at the hardware, software, and algorithm levels.
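
One widely studied software-level technique, offered here as an illustration rather than a description of QSA's specific methods, is zero-noise extrapolation: run the same circuit at deliberately amplified noise levels and extrapolate the measured results back to the zero-noise limit.

    import numpy as np

    # Hypothetical expectation values measured at noise scale factors 1x, 2x, 3x.
    noise_scales = np.array([1.0, 2.0, 3.0])
    measured = np.array([0.82, 0.69, 0.58])  # the signal degrades as noise is amplified

    # Fit a line and read off the intercept at zero noise (Richardson-style).
    slope, intercept = np.polyfit(noise_scales, measured, deg=1)
    print(f"zero-noise estimate: {intercept:.3f}")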

This vacuum apparatus, located on the UC Berkeley Campus, is used in QSA-funded R&D in trapped-ion systems for quantum computing. Trapped ions are one of three major quantum computing technologies QSA is investigating. Credit: Thor Swift, Berkeley Lab.

What are other challenges?

We also need to scale this up to an industrial level. IBM, for example, is providing access to lots of quantum computers. Their biggest system has 433 qubits, and they are trying to build even bigger ones. We know that we cannot store an infinite number of ions in a trap for a trapped-ion quantum computer, so we must couple multiple traps. That creates a communication bottleneck – something classical computing is really good at handling. How do you do that in the quantum realm?

We have been taking algorithms we use on classical computers and translating them to quantum computers. But we may really have to think about it in a completely new way, like Einstein did with gravity. He realized that space and time are parts of the same thing. That kind of revolutionary thinking still has to happen in quantum computing.

Do you see people shuttling certain problems to a quantum system versus other types of computers? How do you see this working in a larger ecosystem?

I don't expect quantum computing to be an isolated technology. Most of the work is at the interface with classical computers. One technological approach might be quantum computers acting as accelerators, like GPUs in our largest HPC systems. For problems that are very hard to do on a classical computer – in chemistry and materials, for example – can we offload those to a quantum computer?
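
In practice, that offload pattern often takes the form of a variational loop: a classical optimizer proposes circuit parameters, the quantum processor evaluates an energy, and the optimizer iterates. A schematic sketch, with the quantum call deliberately left as a placeholder function:

    import numpy as np
    from scipy.optimize import minimize

    def quantum_energy(params):
        """Placeholder for a QPU call that prepares a parameterized
        circuit and measures the energy of a chemistry Hamiltonian."""
        # Stand-in landscape so the sketch runs without quantum hardware.
        return np.cos(params[0]) + 0.5 * np.sin(params[1])

    # The classical optimizer drives the quantum subroutine (the "accelerator").
    result = minimize(quantum_energy, x0=[0.1, 0.1], method="COBYLA")
    print("estimated ground-state energy:", result.fun)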

The other angle that comes into this mix is AI, but not all the computations that AI needs are something a quantum computer is going to be good at. We need to use the strengths of each piece of hardware to its full advantage.

What role can quantum processing play in AI workloads?

The biggest challenge with AI is that we need to process a large amount of data and learn from it. The hard part is getting the information into a quantum computer, and to do that efficiently we really need something like quantum memory.
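
Part of the difficulty is data loading. Amplitude encoding, for example, packs a vector of 2^n numbers into the amplitudes of just n qubits, but preparing that state can cost as many operations as the data itself – one reason efficient quantum memory matters. A minimal sketch using Qiskit's initialize method:

    import numpy as np
    from qiskit import QuantumCircuit

    # Eight data points fit in the amplitudes of three qubits (8 = 2**3),
    # but the state-preparation circuit itself can scale with the data size.
    data = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    data /= np.linalg.norm(data)  # amplitudes must form a unit vector

    qc = QuantumCircuit(3)
    qc.initialize(data, [0, 1, 2])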

As quantum computing matures, what benchmarks would identify a transition from the physics-experiment phase to something that facilitates other advances?

If we can use a quantum computer to discover a new solar-cell material that is ten times better, that would be a metric. For us chemists, a longtime benchmark is the so-called FeMoco system, a route to finding a natural way to convert nitrogen to ammonia – a very important fertilizer problem that consumes about a percent of worldwide energy these days. That would be a win. But it requires more than 100 to 200 very good qubits doing many quantum operations, and we're not there yet.

What might future computers look like as new technologies, including quantum processors, come online?

When we went to the moon, we developed enormous numbers of technologies, including computer chips. We are learning a lot about the role of quantum in physics and in building chips. It might not be just that we have a classical computer and a quantum computer. We might get to a point where the classical computer starts to look a lot more like a quantum computer, because we're starting to integrate components that we have learned to harness as part of the quantum computing revolution.


About Berkeley Lab

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 14 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.


Source: ASCR Discovery
