IBM Touts Quantum Network Growth, Improving QC Quality, and Battery Research

By John Russell

January 8, 2020

IBM today announced that its Q (quantum) Network community had grown to 100-plus members – Delta Air Lines and Los Alamos National Laboratory are among the most recent additions – and that an IBM quantum computer had achieved a quantum volume (QV) benchmark of 32, in keeping with plans to double QV yearly. IBM also showcased proof-of-concept (POC) work with Daimler, using a quantum computer to tackle materials research in battery development.

Perhaps surprisingly, the news was released at the 2020 Consumer Electronics Show (CES) taking place in Las Vegas this week – “Very few ‘consumers’ will ever buy a quantum computer,” agreed IBM’s Jeff Welser in a pre-briefing with HPCwire.

That said, CES has broadened its technology compass in recent years, and Delta CEO Ed Bastian delivered the opening keynote, touching upon technology’s role in transforming travel and the travel experience. Quantum computing, for example, holds promise for a wide range of relevant optimization problems, such as traffic control and logistics. “We’re excited to explore how quantum computing can be applied to address challenges across the day of travel,” said Rahul Samant, Delta’s CIO, in the official IBM announcement.

Jeff Welser, IBM

IBM’s CES quantum splash was mostly about demonstrating the diverse and growing interest in quantum computing (QC) among companies. “Many of our clients are consumer companies themselves who are utilizing these systems within the Q Network,” said Welser, who wears a number of hats for IBM Research, including VP of exploratory science and lab director of the Almaden Lab. “Think about the many companies who are trying to use quantum technology to come up with new materials that will make big changes in future consumer electronics,” he said.

Since its launch in 2016, IBM has aggressively sought to grow the IBM Q Network and its available resources. IBM now has a portfolio of 15 quantum computers, ranging in size from 53 qubits down to a single-qubit ‘system,’ as well as extensive quantum simulator capabilities. Last year IBM introduced Quantum Volume, a new metric for benchmarking QC progress, and suggested others adopt it. QV is a composite measure encompassing many attributes – gate fidelity, noise, coherence times, and more – not just qubit count; so far, QV’s industry-wide traction has seemed limited.

Welser emphasized that IBM Q Network membership has steadily grown and now spans multiple industries, including airlines, automotive, banking and finance, energy, insurance, and materials and electronics. The newest large commercial members include Anthem, Delta, Goldman Sachs, Wells Fargo and Woodside Energy. New academic/government members include the Georgia Institute of Technology and LANL. (A list and brief description of new members appears at the end of the article.)

“IBM’s focus, since we put the very first quantum computer on the cloud in 2016, has been to move quantum computing beyond isolated lab experiments conducted by a handful of organizations, into the hands of tens of thousands of users,” said Dario Gil, director of IBM Research in the official announcement. “We believe a clear advantage will be awarded to early adopters in the era of quantum computing and with partners like Delta, we’re already making significant progress on that mission.”

IBM’s achievement of a QV score of 32 and the recent Daimler work are also significant. When IBM introduced the QV concept broadly at the American Physical Society meeting last March, it had achieved a QV score of 16 on its fourth-generation, 20-qubit system. At that time IBM likened QV to the Linpack benchmark used in HPC, calling it suitable for comparing diverse quantum computing systems. Translating QV into a specific target score indicative of the ability to solve real-world problems is still mostly guesswork; indeed, different QV ratings may be adequate for different applications.
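For readers new to the metric, QV is defined operationally: a device has a QV of 2^n if it can reliably run randomized ‘model circuits’ of width and depth n, where ‘reliably’ means the measured outputs land on the heavy outputs – those whose ideal probability exceeds the median – more than two-thirds of the time. Below is a minimal Python sketch of that pass/fail test, assuming the ideal output distribution and measured counts for each model circuit are already in hand (IBM’s full protocol also requires a statistical-confidence margin on the two-thirds threshold):

```python
import numpy as np

def heavy_output_probability(ideal_probs, measured_counts):
    """Fraction of measured shots that land on 'heavy' outputs.

    ideal_probs: ideal output probabilities for one n-qubit model circuit
                 (array of length 2**n).
    measured_counts: measured shot counts from hardware, same indexing.
    """
    median = np.median(ideal_probs)
    heavy = ideal_probs > median     # heavy outputs: above the median ideal probability
    return measured_counts[heavy].sum() / measured_counts.sum()

def passes_qv_test(per_circuit_heavy_probs, threshold=2/3):
    """QV test at width/depth n: the mean heavy-output probability over many
    random model circuits must exceed 2/3."""
    return float(np.mean(per_circuit_heavy_probs)) > threshold

# A device that passes at n = 5 but not at n = 6 has a quantum volume of
# 2**5 = 32 -- the score IBM reports for Raleigh.
```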

IBM issued a blog today discussing the latest QV result, which was achieved on a new 28-qubit system named Raleigh. IBM also elaborated somewhat on its internal practices and timetable expectations.

Writing in the blog, IBM quantum researchers Jerry Chow and Jay Gambetta note, “Since we deployed our first system with five qubits in 2016, we have progressed to a family of 16-qubit systems, 20-qubit systems, and (most recently) the first 53-qubit system. Within these families of systems, roughly demarcated by the number of qubits (internally we code-name the individual systems by city names, and the development threads as different birds), we have chosen a few to drive generations of learning cycles (Canary, Albatross, Penguin, and Hummingbird).”

It gets a bit confusing, and it is best to consult the blog directly for a discussion of error mitigation efforts across the different IBM systems. Each system undergoes revision to improve and experiment with topology and error mitigation strategies.

Chow and Gambetta write, “We can look at the specific case for our 20-qubit systems (internally referred to as Penguin), shown in this figure:

[Figure: distributions of CNOT errors across all deployed 20-qubit systems; see the IBM blog for the plots.]

“Shown in the plots are the distributions of CNOT errors across all of the 20-qubit systems that have been deployed, to date. We can point to four distinct revisions of changes that we have integrated into these systems, from varying underlying physical device elements, to altering the connectivity and coupling configuration of the underlying qubits. Overall, the results are striking and visually beautiful, taking what was a wide distribution of errors down to a narrow set, all centered around ~1-2% for the Boeblingen system. Looking back at the original 5-qubit systems (called Canary), we are also able to see significant learning driven into the devices.”

Looking at the evolution of quantum computing by decade IBM says:

  • 1990s: fundamental theoretical concepts showed the potential of quantum computing
  • 2000s: experiments with qubits and multi-qubit gates demonstrated quantum computing could be possible
  • 2010s (the decade just completed): evolution from gates to architectures and cloud access, revealing a path to real demand for quantum computing systems

“So where does that put us with the 2020s? The next ten years will be the decade of quantum systems, and the emergence of a real hardware ecosystem that will provide the foundation for improving coherence, gates, stability, cryogenics components, integration, and packaging,” write Chow and Gambetta. “Only with a systems development mindset will we as a community see quantum advantage in the 2020s.”

On the application development front, the IBM-Daimler work is interesting. A blog describing the work was posted today by Jeannette Garcia (global lead for quantum applications in quantum chemistry, IBM), who is also an author on the paper (Quantum Chemistry Simulations of Dominant Products in Lithium-Sulfur Batteries). She framed the challenge nicely in the blog post:

“Today’s supercomputers can simulate fairly simple molecules, but when researchers try to develop novel, complex compounds for better batteries and life-saving drugs, traditional computers can no longer maintain the accuracy they have at smaller scales. The solution has typically been to model experimental observations from the lab and then test the theory.

“The largest chemical problems researchers have been so far able to simulate classically, meaning on a standard computer, by exact diagonalization (or FCI, full configuration interaction) comprise around 22 electrons and 22 orbitals, the size of an active space in the pentacene molecule. For reference, a single FCI iteration for pentacene takes ~1.17 hours on ~4096 processors and a full calculation would be expected to take around nine days.

“For any larger chemical problem, exact calculations become prohibitively slow and memory-consuming, so approximation schemes need to be introduced in classical simulations, which are not guaranteed to be accurate and affordable for all chemical problems. It’s important to note that reasonably accurate approximations to classical FCI approaches also continue to evolve and remain an active area of research, so we can expect that accurate approximations to classical FCI calculations will also continue to improve over time.”
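To make the scale of that classical problem concrete, the determinant count for a 22-electron, 22-orbital FCI calculation, and the iteration count implied by the timings Garcia quotes, can be checked with a few lines of arithmetic – a back-of-the-envelope sketch, with the iteration count inferred here rather than taken from the paper:

```python
from math import comb

# FCI determinant count for 22 electrons in 22 spatial orbitals:
# 11 spin-up and 11 spin-down electrons, each set choosing among 22 orbitals.
n_orb, n_alpha, n_beta = 22, 11, 11
n_determinants = comb(n_orb, n_alpha) * comb(n_orb, n_beta)
print(f"{n_determinants:.3e} determinants")        # ~4.98e+11

# Iteration count implied by the quoted figures:
# ~1.17 hours per FCI iteration vs. ~9 days for the full calculation.
hours_per_iteration = 1.17
total_hours = 9 * 24
print(f"~{total_hours / hours_per_iteration:.0f} iterations")   # ~185
```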

IBM and Daimler researchers, building on earlier algorithm development work, were able to simulate the dipole moment of three lithium-containing molecules, “which brings us one step closer to the next-generation lithium-sulfur (Li-S) batteries that would be more powerful, longer lasting and cheaper than today’s widely used lithium-ion batteries.”

Garcia writes, “We have simulated the ground state energies and the dipole moments of the molecules that could form in lithium-sulfur batteries during operation: lithium hydride (LiH), hydrogen sulfide (H2S), lithium hydrogen sulfide (LiSH), and the desired product, lithium sulfide (Li2S). In addition, and for the first time ever on quantum hardware, we demonstrated that we can calculate the dipole moment for LiH using 4 qubits on IBM Q Valencia, a premium-access 5-qubit quantum computer.”

She notes Daimler hopes that quantum computers will eventually help it design next-generation lithium-sulfur batteries, because they have the potential to compute and precisely simulate the batteries’ fundamental behavior. Current QCs are too noisy and limited in size, but the POC work is promising. It also represents a specific, real-world opportunity.
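Under the hood, results like the LiH dipole calculation rest on the variational quantum eigensolver (VQE) pattern: a parameterized circuit prepares trial states, a classical optimizer tunes the parameters to minimize the measured energy, and other observables – such as the dipole operator – are then evaluated in the optimized state. Below is a toy statevector sketch of that loop in plain NumPy; the two-qubit Hamiltonian and dipole operator are hypothetical stand-ins, not the actual LiH operators or the IBM Q Valencia hardware stack used in the study:

```python
import numpy as np
from scipy.optimize import minimize

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of a sequence of operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Hypothetical two-qubit Hamiltonian and dipole operator -- illustrative
# coefficients only, not the IBM-Daimler LiH operators.
H = 0.4 * kron(Z, I2) + 0.4 * kron(I2, Z) + 0.2 * kron(X, X)
D = 0.3 * kron(Z, I2) - 0.1 * kron(I2, Z)

def ansatz(theta):
    """Trial state: an Ry rotation on each qubit followed by a CNOT."""
    def ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
    state00 = np.array([1, 0, 0, 0], dtype=complex)   # |00>
    return cnot @ (kron(ry(theta[0]), ry(theta[1])) @ state00)

def energy(theta):
    psi = ansatz(theta)
    return (psi.conj() @ H @ psi).real

# Classical outer loop: minimize the energy over the circuit parameters.
result = minimize(energy, x0=[0.1, 0.1], method="COBYLA")
psi = ansatz(result.x)
dipole = (psi.conj() @ D @ psi).real
print(f"ground-state energy ~ {result.fun:.4f}, dipole expectation ~ {dipole:.4f}")
```

On real hardware, the expectation values come from repeated measurement rather than an exact statevector, which is where the noise Garcia alludes to enters.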

Link to IBM blog: https://www.ibm.com/blogs/research/2020/01/quantum-volume-32/

Link to Daimler paper: https://arxiv.org/abs/2001.01120

Feature image: Photo of the IBM Q System One quantum computer being shown at CES. Source: IBM


List of New IBM Q Network Members, Excerpted from the Release (unedited)

Commercial organizations:

  • Anthem: Anthem is a leading health benefits company and will be expanding its research and development efforts to explore how quantum computing may further enhance the consumer healthcare experience. Anthem brings its expertise in working with healthcare data to the Q Network. This technology also has the potential to help individuals lead healthier lives in a number of ways, such as helping in the development of more accurate and personalized treatment options and improving the prediction of health conditions.
  • Delta Air Lines: The global airline has agreed to join the IBM Q Hub at North Carolina State University. They are the first airline to embark on a multi-year collaborative effort with IBM to explore the potential capabilities of quantum computing to transform experiences for customers and employees and address challenges across the day of travel.

Academic institutions and government research labs:

  • Georgia Tech: The university has agreed to join the IBM Q Hub at the Oak Ridge National Laboratory to advance the fundamental research and use of quantum computing in building software infrastructure to make it easier to operate quantum machines, and developing specialized error mitigation techniques. Access to IBM Q commercial systems will also allow Georgia Tech researchers to better understand the error patterns in existing quantum computers, which can help with developing the architecture for future machines.
  • Los Alamos National Laboratory: Joining as an IBM Q Hub will greatly help the Los Alamos National Laboratory research efforts in several directions, including developing and testing near-term quantum algorithms and formulating strategies for mitigating errors on quantum computers. The 53-qubit system will also allow Los Alamos to benchmark the abilities to perform quantum simulations on real quantum hardware and perhaps to push beyond the limits of classical computing. Finally, the IBM Q Network will be a tremendous educational tool, giving students a rare opportunity to develop innovative research projects in the Los Alamos Quantum Computing Summer School.

Startups:

  • AIQTECH: Based in Toronto, AiQ is an artificial intelligence software enterprise set to unleash the power of AI to “learn” complex systems. In particular, it provides a platform to characterize and optimize quantum hardware, algorithms, and simulations in real time. This collaboration with the IBM Q Network provides a unique opportunity to expand AiQ’s software backends from quantum simulation to quantum control and contribute to the advancement of the field.
  • BEIT: The Kraków, Poland-based startup is hardware-agnostic, specializing in solving hard problems with quantum-inspired hardware while preparing the solutions for the proper quantum hardware, when it becomes available. Their goal is to attain super-polynomial speedups over classical counterparts with quantum algorithms via exploitation of problem structure.
  • Quantum Machines: QM is a provider of control and operating systems for quantum computers, with customers among the leading players in the field, including multinational corporations, academic institutions, start-ups and national research labs. As part of the IBM and QM collaboration, a compiler between IBM’s quantum computing programming languages, and those of QM is being developed and offered to QM’s customers. Such development will lead to the increased adoption of IBM’s open-sourced programming languages across the industry.
  • TradeTeq: TradeTeq is the first electronic trading platform for the institutional trade finance market. With teams in London, Singapore, and Vietnam, TradeTeq is using AI for private credit risk assessment and portfolio optimization. TradeTeq is collaborating with leading universities around the globe to build the next generation of machine learning and optimization models, and is advancing the use of quantum machine learning to build models for better credit, investment and portfolio decisions.
  • Zurich Instruments: Zurich Instruments is a test and measurement company based in Zurich, Switzerland, with the mission to progress science and help build the quantum computer. It is developing state-of-the-art control electronics for quantum computers, and now offers the first commercial Quantum Computing Control System linking high-level quantum algorithms with the physical qubit implementation. It brings together the instrumentation required for quantum computers from a few qubits to 100 qubits. They will work on the integration of IBM Q technology with the company’s own electronics to ensure reliable control and measurement of a quantum device while providing a clean software interface to the next higher level in the stack.