IBM Opens Quantum Computing Center; Announces 53-Qubit Machine

By John Russell

September 19, 2019

Gauging progress in quantum computing is a tricky thing. IBM yesterday announced the opening of the IBM Quantum Computing Center in New York, with five 20-qubit systems up and running and a 53-qubit system expected to go online next month. The latter will become “the largest universal [gate based] quantum system made available for external access in the industry,” says IBM. New additions, including the 53-qubit system, will bring IBM’s fleet of commercially available quantum computers to 14 by year’s end.

Actually, the new “center” encompasses two locations, Poughkeepsie and Yorktown, with the following systems: Poughkeepsie has two 20-qubit systems and three 5-qubit systems, and will host the 53-qubit machine; Yorktown has three 20-qubit systems, one 5-qubit system, and one 14-qubit system. IBM isn’t yet releasing details on the other three systems soon to be made available.

What’s interesting here, besides the introduction of the new 53-qubit system, is that IBM director of research Dario Gil says demand for access to quantum systems is what has been driving IBM’s rapid expansion of its fleet. Just last week, IBM announced a collaboration with leading European research organization Fraunhofer-Gesellschaft in which a new IBM Q System One, owned and operated by IBM, will be located in an IBM facility in Germany. IBM reports on the order of 150,000 registered users for its quantum systems.

More qubits. More machines. All chasing something called quantum advantage, which IBM labels the “single goal” of the quantum community. That sounds like clear progress, and it is. However, progress, even steady progress, is one thing; payoff is another.

Dario Gil, IBM Research Director

IBM, to its credit, has tended to limit its contribution to the hype surrounding quantum computing. Gil, who moved into the IBM Research director role this year, briefed HPCwire on the news and also touched on Big Blue’s overall strategy, the Quantum Volume (QV) benchmark pitched by IBM, and a few of the technology issues being tackled – for example, IBM uses a variety of quantum processor topologies in its systems, seeking to identify which topologies work best for particular use cases.

He also injected a note of realism. Asked to define quantum advantage and forecast when it would be achieved, Gil said, “We define QA as when we will have systems that are powerful enough, and, of course, programmable, that would allow us to solve problems that matter, right, something of relevance to your business or science that we couldn’t do before. So my best estimate is that we’re still years away.”

Quantum industry watcher Bob Sorensen, VP of research and technology and chief quantum computing analyst at Hyperion Research, offered praise and caution:

“IBM is demonstrating its long-term commitment to developing quantum computers for the commercial sector and is working hard to roll out a continual stream of tangible gains in technology. But perhaps more important is IBM’s recent announcement that the firm will install a Q System One quantum computer at one of its facilities in Germany as part of a two-year partnership with the Fraunhofer Society to build a research unit and community around the system. To me, such a deal validates that IBM is not just building systems and hoping to attract customers but instead is working to establish a complete QC ecosystem that spans hardware, software, applications and real-world use cases.

“My major concern with the sector right now is that a seemingly steady stream of announcements across the broader QC supply base citing increasing qubit counts, or related metrics, may soon trigger a ‘breakthrough fatigue’, garnering less and less public attention. Strong interest, within both the government and commercial sectors, needs to be maintained if the QC sector is to stay on a robust virtuous development cycle. As such, the sector needs to start rolling out demonstrated quantum advances that translate into real-world application success. I am hoping (perhaps even expecting) some significant developments there in the short-term.”

A rendering of IBM Q System One, the world’s first fully integrated universal quantum computing system, currently installed at the Thomas J. Watson Research Center. Source: IBM

IBM reports its IBM Q Network program now supports “nearly 80 commercial clients, academic institutions and research laboratories to explore and develop quantum computing algorithms.” IBM offered the following examples of progress in its recent announcement:

  • “J.P. Morgan Chase and IBM published a methodology to price financial options, and portfolios of such options, on a gate-based quantum computer. This resulted in an algorithm that provides a quadratic speedup, i.e., where classical methods need millions of samples, the new methodology requires only a few thousand samples to achieve the same result, when compared to classical Monte Carlo methods. This may allow financial analysts to perform option pricing and risk analysis in near real time. The implementation is available as open source in Qiskit Finance. [A back-of-the-envelope sample-count comparison follows this list.]
  • Mitsubishi Chemical, Keio University and IBM simulated the initial steps of the reaction mechanism between lithium and oxygen in lithium-air batteries. Published on arXiv, “Computational Investigations of the Lithium Superoxide Dimer Rearrangement on Noisy Quantum Devices” is a first step in modeling the entire lithium-oxygen reaction on a quantum computer. Better understanding this interaction could lead to more efficient batteries for mobile devices or automotive vehicles.
  • The IBM Q Hub at Keio University, in collaboration with its partners Mizuho and Mitsubishi UFJ Financial Group (MUFG), proposed an algorithm that reduces the number of qubits and the circuit length of an original methodology proposed by IBM for quantum risk analysis, demonstrated in financial applications.”
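
For a rough sense of what that quadratic speedup means in sample counts: classical Monte Carlo error shrinks as roughly 1/√N with N samples, while amplitude-estimation-style quantum methods shrink as roughly 1/M with M samples, so hitting the same accuracy target takes M ≈ √N. A back-of-the-envelope sketch, with the accuracy target an assumed figure for illustration and all constants omitted:

```python
import math

# Back-of-the-envelope comparison behind the "millions vs. a few thousand"
# claim. Classical Monte Carlo error scales as ~1/sqrt(N); quantum
# amplitude estimation error scales as ~1/M, so M ~ sqrt(N) for equal
# accuracy. The target error is an assumed figure, constants omitted.
target_error = 1e-3

n_classical = math.ceil(1 / target_error ** 2)  # ~1/eps^2 samples
m_quantum = math.ceil(1 / target_error)         # ~1/eps samples

print(f"classical Monte Carlo: ~{n_classical:,} samples")  # ~1,000,000
print(f"amplitude estimation:  ~{m_quantum:,} samples")    # ~1,000
```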

You may recall IBM pitched QV to the industry last March as a benchmark for assessing and comparing quantum computing platforms. It’s a composite measure combining, among other things, qubit count, error rates, and decoherence times. It’s not yet clear how much uptake the new metric is generating in the quantum community, but these are still early days.

IBM has said it believes it can double the QV of its machines on a roughly yearly basis. Gil said, “Within the 10 systems [now accessible] five of those are 20-qubit systems with the quantum volume of 16.” For most of us it’s not exactly clear what QV 16 means, or even what a range of desirable QV targets would be, beyond continued improvement.
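
For context, IBM defines Quantum Volume as QV = 2^n, where n is the largest width at which square (width equals depth) random “model circuits” pass a heavy-output sampling test on the device. A simplified sketch of that bookkeeping, assuming per-width pass/fail results are already in hand (the results below are hypothetical):

```python
# Simplified sketch of Quantum Volume bookkeeping: QV = 2**n for the
# largest square-circuit width n that passes the heavy-output test.
# The pass/fail results below are hypothetical, for illustration only.

def quantum_volume(heavy_output_pass):
    """Map of width n (width == depth) -> True if random model circuits
    of that size sampled heavy outputs with probability > 2/3 on the
    device, with the required statistical confidence."""
    n_max = 0
    for n in sorted(heavy_output_pass):
        if not heavy_output_pass[n]:
            break
        n_max = n
    return 2 ** n_max

# Hypothetical results for a 20-qubit device:
print(quantum_volume({2: True, 3: True, 4: True, 5: False}))  # -> 16
```

On that reading, QV 16 suggests those 20-qubit systems reliably run random circuits only about four qubits wide and four layers deep, well short of engaging all 20 qubits at full depth.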

Quantum computer at IBM, Yorktown, NY

Speaking more broadly about IBM’s growing fleet, Gil said, “I think what it shows is [our ability] to go from the demonstrations in the laboratory to rolling out [systems] with 95% availability to an entire community. We do a tremendous amount of research and [still] have lots of things that we haven’t talked about or published, and a roadmap of larger systems [with] high performance, all of that stuff. What we’re communicating here, I think, is fundamental: this inflection point we saw in the community in [the] last three years was going from five, six laboratories in the world [that could conduct] multi-qubit experiments to a community of tens of thousands of people who can run experiments.”

Gil outlined IBM’s overall quantum strategy like this:

  • Lead with bigger and better machines. “We want to have the most advanced systems in the world, right, and that’s linked to wanting the highest quantum volume machines produced to date, the number of those machines, and expanding qubit counts like the new 53-qubit system.”
  • Build a large community. “This is embodied in two things. There is the open source component, which is Qiskit (IBM’s Python-based developer kit), which is the most widely adopted open source environment for programming in quantum. And the IBM Q Experience, which is the mechanism by which people can program and experience the technology and the community.” [A minimal Qiskit example follows this list.]
  • Maximize value of the network. “This has to do with commercial partners, now including large companies, startups, and universities. It’s about using all of these resources. The purpose is to discover [the] things that matter with practical applications.”
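
Since Gil points to Qiskit and the IBM Q Experience as the community’s on-ramp, here is a minimal example of the kind of circuit users submit through that stack: a two-qubit Bell state run on the bundled simulator. The API shown reflects Qiskit at the time of writing and may change; swapping the simulator for an IBM Q backend, via an IBM Q Experience account, runs the same circuit on real hardware.

```python
from qiskit import QuantumCircuit, Aer, execute

# Build a two-qubit Bell-state circuit: H on qubit 0, then CNOT 0 -> 1.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Run on the local simulator shipped with Qiskit; an IBM Q device
# backend can be substituted to run on real hardware.
backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # expect roughly half '00' and half '11'
```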

No doubt the core ideas are similar across the quantum technology vendor ecosystem, but it’s useful to hear them. Gil declined to say much about the forthcoming 53-qubit system beyond saying it embodies a number of advances in control electronics, noise reduction, packaging, etc.

“These are still transmon-based devices. So that’s common to all the fleet. From that perspective, we haven’t changed the device. But if you look at everything from the lattice and the topology [of] the quantum processor itself, to a lot of core technology that goes inside in terms of how things get coupled to each other, the packaging, and so on, you know, there’s a lot of change,” said Gil.

“One of the things we do is that we give our community different device topologies, in terms of qubit structure. It is a very interesting and important aspect. The relationship between the device topology, meaning the connectivity of the qubits to one another, can have really profound implications on the performance of the system, dealing with things like what’s called spectator error, right, the unwanted coupling of qubits with one another, and algorithmic implementations. So for many of these systems it is not only a question of capacity, but it’s also a variety of approaches.

“That’s very important because as a community we’re still learning what is the right topology and the intersection between error mitigation strategies, circuit implementations and topology. Every time we introduce systems, for any given size system, we also introduce the right topology, or what we think is the right topology at any given time, but expect us to keep changing. For example, even on the 53-qubit system, expect that we will have multiple iterations where we keep upgrading it and changing it. The task ultimately is to continue to increase quantum volume, but also to find the right mapping between topology and algorithms,” said Gil.
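
To make the topology point concrete: a device topology, or coupling map, is just a graph of which qubit pairs support a direct two-qubit gate, and a gate between non-adjacent qubits must be routed through SWAP operations, each of which adds error. A small sketch, with the line and ring layouts purely illustrative (not IBM’s actual lattices):

```python
from collections import deque

def swap_overhead(edges, a, b):
    """Estimate SWAPs needed for a two-qubit gate between qubits a and b
    on a coupling map given as a list of edges: roughly (distance - 1),
    where distance is the shortest path in the connectivity graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return max(dist[u] - 1, 0)
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None  # qubits not connected

# Illustrative 5-qubit layouts (not IBM's actual lattices):
line = [(0, 1), (1, 2), (2, 3), (3, 4)]
ring = line + [(4, 0)]
print(swap_overhead(line, 0, 4))  # 3 SWAPs: qubits 0 and 4 are far apart
print(swap_overhead(ring, 0, 4))  # 0 SWAPs: the ring makes them adjacent
```

The same two-qubit gate can be free on one layout and cost several error-prone SWAPs on another, which is why Gil says the right topology depends on the algorithms being run.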

Circling back to the question of when quantum computing will start solving practical problems, Gil is realistic and optimistic:

“Recall when the narrative was: OK, here’s what we’re going to do. We’re going to all work really hard and one day we’re going to have a quantum machine with billions of qubits that will be Nirvana. When that occurs, we know that there is a class of algorithms, a few of them, that would take exponential time [on] classical systems and that could then be run on these quantum machines. Right? That was the narrative.

“What we have been advocating, successfully changing that narrative, is: don’t wait for the holy grail. That’s not how technology works. What you’ve got to do is go from where we are today and systematically create generation after generation of systems to eventually get there. And value is going to be created and accrued along the way. The first value will be at the level of skills, training, intellectual property, [for] the folks building the first generation of systems. And we all agree that the community is not millions of people, but it’s hundreds of thousands of people who are involved. I don’t know how many startups there are now. Last time I checked it’s like 20 companies trying to build quantum hardware. If I add software, [the count] is probably in the triple digits. We’ve seen national networks in quantum all over the world. So value is accruing along the way,” said Gil.

It’s going to be a long ride to QA and practical quantum computing. Perhaps too many of us are like kids in the back of a car on a long journey annoying our parents with chants of “Are we there yet?” half-intended as a real question and half-shouted just to provoke a response.
