IBM-led Webinar Tackles Quantum Developer Community Needs

By John Russell

July 21, 2020

Quantum computing has many needs to meet before it can become the transformative tool many expect. High on the list is a robust software developer community, not least because developers rarely follow ‘intended use’ rules; instead, they bend and break rules in unforeseen ways that transform new technology into applications that catalyze market growth, sometimes explosively.

In fact, efforts to fill out the quantum computing ecosystem (hardware and software) have continued to expand rapidly. Just today, for example, the Trump Administration announced the establishment of three new Quantum Leap Challenge Institutes around quantum sensing, hybrid classical-quantum systems, and large-scale quantum system development (See HPCwire coverage). Making full use of the new class of machines – whenever they arrive – will require a robust and sufficiently large quantum developer community.

Roughly a week ago, IBM held the first of a planned series of webinars on quantum computing – this one on The Future of Quantum Software Development – and it was a treat. Moderated by long-time Forrester analyst Jeffrey Hammond, the panel included three prominent quantum community voices with diverse quantum expertise – Blake Johnson (control systems delivery lead, IBM, and formerly Rigetti), Prineha Narang (Harvard professor and CEO/founder, Aliro Quantum), and tech entrepreneur William Hurley (CEO/founder, Strangeworks).

It was a “glass-half-full” crowd, so consider their enthusiasm when evaluating comments, but it was also a well-informed group not dismissive of challenges.

“You have to realize where we’re actually at,” said Hurley. “We’re not in the days of AMD versus Intel. We’re in the days of, like, little mechanical gates versus vacuum tubes and other solutions. These things aren’t computers from my pure developer standpoint. At this moment in time they’re really great equipment for exploring the quantum landscape and are the foundation for building machines.”

Indeed, today’s quantum systems are fragile and complicated, and even the notion of gates can be confusing. “At what point is a gate so complicated that a developer is never going to touch it or understand it anyway? Is that 1,000 (qubits)? Is it 100,000? Is it a million? Because you hear people with all these stories about, you know, millions-of-qubits machines. If you had one, my question is, who would program it? Because that sounds really difficult based on where we’re at, and where we’re trying to go,” said Hurley.

Here are a few themes from the discussion.

  • Fast Followers Will Lose! Quantum computing’s inflection point will be such that if you’re not already in the game, you won’t catch up. “You can’t be a fast follower. You have to either be placing a bet now or deciding to take the risk of not being involved at the point where something that is clearly a tremendous change in computing happens,” said Hurley with agreement from Johnson.
  • It Will be a Hybrid World. Yes, there will be ‘general-purpose’ quantum computers, albeit for a limited set of quantum-appropriate problems. There will also be specialization, with various qubit technologies (ion trap, cold atom, superconducting, etc.) excelling on different applications. Lastly, all quantum computing will be done in a hybrid classical-quantum computing environment.
  • QA isn’t Far Away (Maybe). There was adamant agreement by panelists that a decade is too pessimistic…but there was waffling on just how soon quantum advantage (QA) would be achieved. Two-to-five years was the consensus guess-of-choice although Narang declined to make any guess. One had the sense their belief in quantum computing’s inevitable breakthrough trumped worry over when. Meanwhile the QA watch continues.
  • Expanded (Developer) Conversation Needed. The quantum conversation now is mostly between system developers who tend to be physicists and algorithm developers who tend to be physicists. That has to change. It will require better quantum computers, wider access to various qubit technologies, better tools, and a level of software abstraction that lets developers do what they do without worrying about quantum physics.

The panel discussion was casual and substantive if not technically deep, and IBM has posted a link to it. Next up is a webinar on Building a Quantum Workforce (July 28), and there are plans for another in late August (no date yet) on Commercial Use of Quantum Computers.

Clearly, each of the participating companies has its own agenda but nevertheless the give-and-take had an insider feel.

IBM, of course, is the biggest player in quantum computing today, with deep expertise in hardware and software and its IBM Q Network, which offers various levels of access to quantum resources. IBM quantum systems use superconducting qubits. Panelist Johnson is a relatively recent IBM import from Rigetti. His work is fairly deep in the weeds and focuses on control systems, which convert conventional instructions (electrical signals) into quantum processor control signals.

Aliro and Strangeworks are start-ups focused on software.

Aliro describes its offering as a “hardware-independent toolkit for developers of quantum algorithms and applications. The development platform is implemented as a scalable cloud-based service. Features include: access to multiple QC hardware vendors and devices via an intuitive GUI as well as REST API; quantum circuit and hybrid workflow visualization and debugging tools; cross-compilation between high- and low-level languages; and hardware-specific optimizations enabling best execution on every supported hardware platform.” CEO Narang is also an assistant professor of computational materials science at Harvard.

Strangeworks says its “platform is a hardware-agnostic, software-inclusive, collaborative development environment that brings the latest advancements, frameworks, open source, and tools into a single user interface.” CEO Hurley is a veteran tech entrepreneur who also chairs the IEEE Quantum Computing Work Group.

These are early days for both of these young companies, and they are broadly representative of a growing number of start-ups seeking to fill out the quantum computing ecosystem. It is probably best to watch/listen to the conversation directly to get a sense of the issues facing software development in quantum computing, but also to gain a glimpse into the mindset of the young companies entering the quantum computing fray.


Presented here are a few soundbites (lightly edited) from the panel.

WHAT’S THE STATE OF QUANTUM PROGRAMMING AND WHAT ARE SOME OF THE CHALLENGES?  

Johnson: Recognizing that there was something maybe intimidating about quantum, IBM chose to develop first a graphical interface, a graphical drag-and-drop way to build quantum circuits, which are the fundamental unit of quantum compute. That’s what’s available today in the IBM Quantum Experience. You can drag and drop gates, which are the logical operations, and manipulate qubits, which are quantum bits, to build up a program. For more real tasks, you need a real programming interface, and we have Qiskit, an open source quantum computing framework developed by IBM, which is a Python interface for building quantum circuits and for building algorithms that take advantage of quantum processors.
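For readers who haven’t tried it, here is a minimal sketch (ours, not the panel’s) of the circuit-building workflow Johnson describes, assuming the circa-2020 Qiskit and Aer packages are installed:

```python
# Minimal Qiskit sketch: build and simulate a two-qubit Bell-state circuit.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
qc.h(0)                     # Hadamard puts qubit 0 into superposition
qc.cx(0, 1)                 # CNOT entangles qubits 0 and 1
qc.measure([0, 1], [0, 1])  # read both qubits out to classical bits

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)               # roughly half '00' and half '11'
```

The same circuit can be assembled gate-by-gate in the Quantum Experience GUI; Qiskit is simply the programmatic route.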

Narang: What I like about Qiskit is that it’s very accessible. The challenge is, with that abstraction, you lose a lot of the control over the actual hardware; you don’t necessarily have all of the tools to directly program the system. So the pulse-level control that IBM has made available is a good way to bridge that. I wonder, as we go toward other types of hardware, how some of the programs that are written for superconducting circuits will be translated to those (other types of hardware), and if everything is not based on the same pulse-level control scheme, what would be a good way of translating? And I don’t have an answer to this.
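The pulse-level layer Narang refers to is exposed through Qiskit Pulse. A rough sketch of playing a single drive pulse follows; import paths shifted across Qiskit releases, and the pulse parameters here are illustrative only:

```python
# Hedged sketch of Qiskit Pulse: play a Gaussian drive pulse on qubit 0's channel.
from qiskit.pulse import Schedule, Play, DriveChannel, Gaussian

sched = Schedule(name="gaussian_drive")
# 160 samples long, amplitude 0.1, sigma of 40 samples -- made-up values
sched += Play(Gaussian(duration=160, amp=0.1, sigma=40), DriveChannel(0))
```

Each backend publishes its own channels and timing constraints, which is exactly why a schedule written for one superconducting device does not transfer automatically to, say, a trapped-ion machine.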

Hurley: Our approach is to let all of the languages battle it out. We’re big Qiskit fans. I say that not because we’re on [here] with IBM, but because we first started [there because] there were already tons of people working with it. We’re big supporters, soon to be making our first contributions to it. But you can’t take a developer and make them a quantum developer overnight. Some things are fundamentally different. For example, if I’m programming on any other platform that’s possible in the world, and I run into an error, I can find it or I know how it works. Whereas [with quantum] what we see happen with developers is they get in and they can instantiate a teleportation thing through Microsoft Quantum Katas or IBM Qiskit, or whatever. Then the moment it breaks, if they don’t understand the fundamental physics behind it, they’re at a dead end.

Narang: A lot of things that we take for granted in a classical computer are not yet possible with quantum hardware. [I’m] thinking about conditional statements and intermediate measurements, things that are not trivial to do in a quantum circuit at the moment, but that are going to be very important for writing more complex quantum programs in the future. As those advances come from the hardware side, we think about how to translate them into something that you can use on the software side.
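Qiskit did already expose a limited form of what Narang describes, via classically conditioned instructions; a hedged sketch, with the caveat that few backends of the day could actually execute mid-circuit measurement:

```python
# Sketch: an intermediate measurement feeding a classically conditioned gate.
# Circa 2020, many devices rejected c_if; simulators generally accepted it.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

qr = QuantumRegister(2)
cr = ClassicalRegister(1)
qc = QuantumCircuit(qr, cr)

qc.h(qr[0])
qc.measure(qr[0], cr[0])   # measure mid-circuit...
qc.x(qr[1]).c_if(cr, 1)    # ...and apply X to qubit 1 only if the result was 1
```

This is the quantum analogue of an if-statement, and as Narang notes, making it work routinely is as much a hardware problem as a software one.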

GIVEN THE NASCENT STAGE OF QUBIT TECHNOLOGIES – SUPERCONDUCTING, TRAPPED ION, COLD ATOMS, ETC. – WILL THERE BE SPECIALIZED MACHINES?

Narang: Tricky question. I’m trying to see how to answer it without making all of my colleagues angry at me. My personal view is there will be certain problems that will run just fine on a variety of hardware, and some that might be more specialized to particular types of hardware, and that’s just associated with the physics underlying that type of hardware. This will be especially important as we try and map problems from condensed matter and chemistry onto some of these devices. We’ll see that not every technology is ideal or even possible for all kinds of problems. But we don’t have that kind of experience yet.

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

[Currently] there are only a few different trapped ion systems out there. It’s very hard to get access to those, and not many of them have gone the route that IBM did of making at least the smallest devices available very broadly. So there’s a whole lot more that needs to be done on a simulator before you can actually try it on real hardware. And of course, systems like cold atoms or photonic circuits are really niche. Getting time on those is very expensive and almost unaffordable for the average developer. The best you can hope to offer to them is to say, “Hey, if you write something that works on this genre of systems, we can get you to a point where it runs on other systems,” and that’s something that my group is trying to accomplish at the moment.

Hurley: On my desk [I have an] iPhone and iPad and a 16-inch laptop, okay? And those things can all do email and they can all surf the web, and they can do them really well. But I can’t open Xcode and compile a big program on my iPhone. It’s going to be like that in quantum, and it’s going to take it in directions that none of us can imagine; it would be foolish to try. There are already 16 startups I’m following who are making quantum processors for specific applications, exactly like Pri described. Look at supercomputing today and high performance computing. There are computers built on Wall Street just for doing trades or [even] specific types of trades. We’ve seen this throughout computing history, right? I don’t think quantum will be any different. I hope that there are as many hardware and software solutions available as possible.

WHEN WILL WE ACHIEVE QUANTUM ADVANTAGE?

Johnson: The problem is we don’t know exactly how powerful a quantum machine needs to be in order to get quantum advantage. We’ve committed to, and have been on a track of, doubling quantum volume[i] (a broad performance metric) every year. We put up and made publicly available our first machine with a quantum volume of 32 back in April. We now have eight machines with a quantum volume of 32 that are available in the IBM Quantum Experience, and we continue to march along that path of doubling QV [yearly]. At what point is it a powerful enough machine for quantum advantage? I’m not sure. I would say personally, I’d be surprised if we have to get all the way to fault tolerance to find a single application where you can do something with quantum advantage, whether that’s time-to-solution, cost or whatever, against a classical resource.

Hurley: If you’re in this industry, if you want to be a developer in this industry, it needs to be a long-term play, a long-term vision that you have. The pessimist thinks [quantum payoff is] 20 years out, the optimist [thinks] it’s three years out, and the reality is, if you want to be involved in it, then you should be preparing now, because all I can tell you is at some point between tomorrow and some future tomorrow it will happen, and the inflection point will be steep.

A rendering of IBM Q System One, the world’s first fully integrated universal quantum computing system, currently installed at the Thomas J Watson Research Center. Source: IBM

Johnson: Those of us that build hardware understand that the most critical thing preventing us from reaching quantum advantage is the hardware, and so our first tools are really focused on those domain experts, to give them the tools they need to build better hardware. That’s an important audience that we like, and we will continue to make better and better tools to serve that audience. But as Pri mentioned, people are doing applications research with the devices that exist today, and are finding they maybe can’t yet solve a given problem better than they could with a classical resource, but they can solve problems. They’re starting to figure out what the limitations are, how they can squeeze the most utility out of the devices today, and then get ready for the devices that will exist tomorrow.

So we’re starting to build tools that try to lower the barriers to entry for those people, not the domain experts, but a new audience. We don’t want them to have to learn everything about quantum computing in order to be able to get started. The idea here was to try to reach out to this new audience of developers [such that] they can write their programs by describing the problem that they want to solve, in this case, an optimization problem. So they can write down a quadratic formula, they write down some description of the constraints of their optimization problem. They can choose different solvers, both quantum solvers and classical solvers, because a lot of these developers are trying to [understand] the value today and [how] quantum works versus the classical.
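Johnson appears to be describing Qiskit’s optimization module. A hedged sketch of that workflow, assuming the circa-2020 package layout (these modules were later split out into the standalone qiskit_optimization package):

```python
# Sketch: state an optimization problem as a quadratic program, then pick a solver.
from qiskit.optimization import QuadraticProgram
from qiskit.optimization.algorithms import MinimumEigenOptimizer
from qiskit.aqua.algorithms import NumPyMinimumEigensolver

qp = QuadraticProgram()
qp.binary_var("x")
qp.binary_var("y")
# minimize -x - 2y + 3xy; the developer writes the math, not the circuit
qp.minimize(linear={"x": -1, "y": -2}, quadratic={("x", "y"): 3})

# a classical exact solver; swapping in QAOA or VQE here gives the quantum route
solver = MinimumEigenOptimizer(NumPyMinimumEigensolver())
print(solver.solve(qp))
```

The design point is exactly the one Johnson makes: the same QuadraticProgram can be handed to a classical or a quantum solver, so a developer can compare the two without touching gates.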

Hurley: Looking at it purely from 30 years of seeing new tech come down and doing the development, I think 10 years sounds very pessimistic now. What do most people imagine as a quantum computer in 10 years? Is it a full general-purpose machine, whatever? Who knows? But I don’t think you’re going to wait 10 years; I think it’s more in the two-to-five-year range to where there are things that start to become economically advantageous to enterprises, and it will probably be in chemistry and materials science.

HOW BIG MUST THE DEVELOPER COMMUNITY BECOME AND WHAT’S THE KEY TO DOING THAT?

Hurley: Most of the people building machines aren’t actually talking to developers. They’re talking to physicists who can download development tools, and they’re playing the role of developers. Software developers are not necessarily the greatest physicists, and physicists are not all the greatest software developers, right? [We need] to drive it to a point where it’s not 200 people who can program the machines; it’s 2 million people. I believe this is the first real leap; in the next 10 years computing will change more than it has in the last hundred.

Johnson: Open source, I think, is a critical piece to accelerating these kinds of developments, because we want these tools to be available to the broadest audience possible. Open source is a great accelerator for making that happen. [But even] on the fastest timescale, we’re a long way off from the App Store experience where, on my phone, I can get an app that takes advantage of quantum resources. But I think we’re not that far away from the developer equivalent of that, which is, you know, package management systems where I can just say pip install or brew install some package, which is a quantum library for some application domain. The goal is to have the equivalent of the iPhone experience today.

If I’m an iPhone developer and I want to develop a new app that uses augmented reality to check the position of a basketball, that itself is a non-trivial machine vision task, right? But we don’t ask every iPhone developer to be a machine vision expert. They just plug in [what] they want to use with ARKit, which is Apple’s augmented reality solution, and off they go. We need to get to that point, and I don’t think it’s that far away, where a developer can use quantum resources without having to be an expert in quantum computing.

Google’s Sycamore quantum chip

Hurley: If you think back to 2007, there were 400 of us a week after [the iPhone introduction] with iPhones and people hacking on them. It went to tens of thousands almost instantly, right? Within six months to a year. And then over the course of 10 or 11 years, you get to 23 million people who are doing that. That mass of developers being involved drives apps, exactly as Blake just said. You have to have a mass of developers to do that. That’s where quantum computing faces its biggest challenge: when it gets, what I call, out of the lab and into the real world, and all of a sudden there’s a million developers. Because developers rarely use things in the exact way they were intended to be used. They will find more uses, they will find the bugs, they will find the weaknesses. So the faster we can get to that point the better. I mean, Pri, what are your thoughts?

Narang: We’re taking some of the circuits run on silicon superconducting [devices] to trapped ion [devices] and realizing that some don’t work the same way. And yeah, forget about developers breaking things in new ways; even experienced people break things in ways they didn’t anticipate, and have to call an engineer and say, hey, how do I fix this? If we expect, you know, a million developers entering the community to have to get answers from an expert engineer, that’s probably not a very scalable model. Something that could be useful is having better simulators that allow you to replicate some of the noise associated with current hardware, to see how things are performing. Also, simple stuff like getting runtime estimates, or getting a yea or nay on whether your circuit is actually going to fit on the device you’re trying to run it on. That’s a problem I’ve seen a lot of people have: they have a beautiful idea, and they assume that it can run on this really tiny device. I think there are different levels to how we make it easier for developers who are entering the field.
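Some of what Narang asks for existed in early form: Qiskit’s Aer simulator could already build an approximate noise model from a real device’s calibration data. A hedged sketch, assuming a configured IBM Q account and the circa-2020 provider API (‘ibmq_vigo’ is just an example device):

```python
# Hedged sketch: run a circuit on a simulator whose noise model is derived
# from a real backend's calibration data (assumes a saved IBM Q account).
from qiskit import IBMQ, Aer, QuantumCircuit, execute
from qiskit.providers.aer.noise import NoiseModel

provider = IBMQ.load_account()
backend = provider.get_backend("ibmq_vigo")     # example 5-qubit device
noise_model = NoiseModel.from_backend(backend)  # gate/readout errors from calibration

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

counts = execute(qc, Aer.get_backend("qasm_simulator"),
                 noise_model=noise_model, shots=1024).result().get_counts()
print(counts)  # unlike the ideal case, expect some '01'/'10' error counts
```

Checking qc.num_qubits against backend.configuration().n_qubits answers Narang’s “will it fit” question before anything is submitted.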

Link to panel video: https://www.youtube.com/watch?v=fBP6qTc_fGU&feature=youtu.be

[i] Quantum Volume (QV) is a hardware-agnostic metric that we defined to measure the performance of a real quantum computer. Each system we develop brings us along a path where complex problems will be more efficiently addressed by quantum computing; therefore, the need for system benchmarks is crucial, and simply counting qubits is not enough. As we have discussed in the past, Quantum Volume takes into account the number of qubits, connectivity, and gate and measurement errors. Material improvements to underlying physical hardware, such as increases in coherence times, reduction of device crosstalk, and software circuit compiler efficiency, can point to measurable progress in Quantum Volume, as long as all improvements happen at a similar pace. https://www.ibm.com/blogs/research/2020/01/quantum-volume-32/
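For reference, the formal definition from the IBM team’s paper (Cross et al., Phys. Rev. A 100, 032328, 2019) can be stated as a gloss on the footnote: quantum volume is set by the largest square (width equals depth) random model circuit a device runs “successfully,” i.e., with heavy-output probability above two-thirds.

```latex
% Quantum Volume, per Cross et al. (2019): N qubits, d(m) the largest depth
% at which width-m model circuits still pass the heavy-output test.
\log_2 V_Q = \max_{m \le N} \min\bigl(m, d(m)\bigr)
```

So a quantum volume of 32 corresponds to reliably running 5-qubit, depth-5 model circuits, and doubling QV each year means adding one to that exponent annually.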
