Imagine that some years hence the HPC landscape, already confusing in its heterogeneity, also includes quantum computing resources. How does a user choose from among those resources the best workflow and computing engines for a project? Agnostiq, a Toronto-based start-up, aims to solve that broad problem, and its initial product, Covalent, targets the intersection of HPC and quantum resources.
Agnostiq presented at last week’s ISC conference, and this week Will Cunningham, Agnostiq head of software, briefed HPCwire on the company’s nascent but ambitious plans. Co-founded in 2018 by CEO Oktay Goktas (a physicist trained at the Max Planck Institute in Stuttgart) and COO Elliot MacGowan (MBA from the University of Toronto), Agnostiq is one of a growing number of Canadian quantum computing startups.
Cunningham told HPCwire, “Agnostiq right now is a seed-stage startup. We have about 25 full-time employees across the globe – Canada, North America, and beyond – so we’re fully remote right now and intend to stay that way. We position ourselves at the intersection of quantum and high-performance computing, since we see these technologies growing and eventually merging together into a broader continuum of compute technology.”
This idea of quantum computing becoming part of the broader HPC landscape is starting to take hold, with more companies like Agnostiq seeking to provide orchestration software and services to knit the technologies together. Covalent, the company’s first product, is a workflow orchestration tool that hides much of the underlying quantum system complexity from users and allows fast prototyping among many resources to choose a preferred workflow (time-to-solution, cost, viability).
Noted Cunningham, “Right now we’re focused on Covalent and putting all of our effort into developing tools at the HPC-quantum intersection. We started as a pure quantum software company, trying to develop finance applications. What we found is that it’s a little bit too early to build out enterprise applications that are really ready to be consumed by large companies. Of course, the reason being that the hardware is getting there, but it’s not quite there yet.
“When it comes to classical HPC, one of the things that we’re interested in [and] that’s in our roadmap is understanding how people can better provision and manage and schedule tasks on compute in general. We view quantum as part of this broader landscape of compute, but one of the things that we’re interested in, both with Covalent and beyond, is understanding how to appropriately map software to hardware and that hardware can be high compute and supercomputers or it could be general compute or low compute,” said Cunningham.
Released last January, Covalent is open source and freely available on GitHub; it is a workflow orchestration tool designed for rapid iteration and pre-production R&D workflows. The idea is to be able to quickly build, test and compare workflows.
Complicated workflows using multiple compute resources are a growing part of HPC.
“For example, a user may perform some data pre-processing on a local laptop, then transfer it to a supercomputer where it is used as an input to a high compute simulation. Finally, results are collected, some post-processing may be performed to remove corrupt or missing entries, and the results are visualized in some plot. In more complex settings, users may be interacting with multiple supercomputing clusters, cloud HPC resources, and now even quantum computers,” noted Cunningham during his ISC presentation.
“In the era of hybrid computing, there are more options than ever when it comes to how you interact with HPC devices. Experiments become heterogeneous in so many ways. They can be classical or quantum, use high compute or general compute resources, use cloud or on-prem compute clusters, involve serial or parallel algorithms, and even deterministic or probabilistic algorithms. We are transitioning to a world where all of these options can be considered together for a single application, and it can become very difficult to make sense of so many options.”
Covalent is intended for prototyping and test-driving workflows that use quantum resources. Agnostiq says users can quickly experiment and iterate using different input parameters, software environments and hardware resources.
“At the top of the stack above Covalent, we find commercial workflow orchestration tools such as Prefect, Dagster, Airflow, and Luigi. These tools are used in large scale enterprise machine learning and data analytics applications where certain tasks must run on a time-based schedule. Covalent is a layer in between the distributed computation and workflow management layers, which is why we call it a distributed workflow tool. In Covalent, instances of the workflow, rather than the workflow itself, are the primary objects. This core design principle enables the type of rapid iteration needed for pre-production R&D workflows while remaining compatible with tools at the other layers in the stack,” said Cunningham.
Users can start from a Jupyter notebook, which has become the platform of choice for prototyping in high performance settings, or use standard scripts. The first step is to functionalize their existing code. “While this is good practice anyways, in that it increases maintainability and code quality, it is required for Covalent as functions are what will be mapped to tasks in a workflow,” said Cunningham.
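To make the functionalization step concrete, here is a minimal sketch of turning flat script logic into named functions so each one can later be mapped to a workflow task. The function names are illustrative, not part of any Covalent API.

```python
# Flat script logic like this:
#
#   data = [x * 2 for x in range(5)]
#   total = sum(data)
#
# becomes named functions, so each step can later be mapped to a task:

def generate_data(n):
    """Produce the input data set (illustrative stand-in for real prep code)."""
    return [x * 2 for x in range(n)]

def reduce_data(data):
    """Aggregate the results (illustrative stand-in for real analysis code)."""
    return sum(data)

total = reduce_data(generate_data(5))
print(total)  # → 20
```

Once the code is factored this way, each function boundary becomes a natural task boundary for a workflow tool.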
Users then add one-line decorators to each function. “Internally, this converts the functions to callable class objects, which have the ability to save loads of metadata about the function and how it ran. However, these functions remain callable as standard Python functions in the way that you might expect. This differs from some workflow tools which require the use of YAML rather than Python. Notice we use two different decorators – the electron decorator refers to a task while the lattice decorator refers to a workflow. Next, users can add execution information to the decorators and executors are used to tell Covalent where and how to run the tasks,” explained Cunningham.
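The decorator-to-callable-class conversion Cunningham describes can be sketched in plain Python. This toy `Electron` class is a stand-in to illustrate the idea, not Covalent’s actual implementation; the real library’s decorators live in the `covalent` package.

```python
import functools
import time

class Electron:
    """Toy task object: still callable like the original function,
    but it records metadata about each run."""

    def __init__(self, func):
        functools.update_wrapper(self, func)  # preserve name/docstring
        self.func = func
        self.metadata = []  # one entry per invocation

    def __call__(self, *args, **kwargs):
        start = time.perf_counter()
        result = self.func(*args, **kwargs)
        self.metadata.append({
            "args": args,
            "elapsed_s": time.perf_counter() - start,
        })
        return result

def electron(func):
    """One-line decorator: wrap a plain function in a callable class object."""
    return Electron(func)

@electron
def add(a, b):
    return a + b

print(add(2, 3))          # still callable as ordinary Python → 5
print(len(add.metadata))  # one recorded run → 1
```

The key point, as in the quote, is that the decorated function stays callable as a normal Python function while the wrapping object quietly accumulates run metadata.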
Shown (slide) above are two examples. One uses the Slurm executor, which submits a high compute task to a Slurm partition in an HPC cluster managed by Slurm. The second uses the AWS Fargate executor, which submits low compute tasks to the AWS Elastic Container Service, where it can run using AWS Fargate. With these two different executors, a workflow using these tasks can flexibly send tasks to both high compute and low compute resources.
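The executor idea can be illustrated with a toy dispatcher: each task carries an executor label, and the dispatcher routes it to the matching backend. The `run_on_slurm` and `run_on_fargate` functions below are local stand-ins, not the real Covalent Slurm or AWS Fargate executor plugins.

```python
def run_on_slurm(func, *args):
    # A real executor would sbatch the task to a Slurm partition.
    return ("slurm", func(*args))

def run_on_fargate(func, *args):
    # A real executor would submit a container task to AWS ECS/Fargate.
    return ("fargate", func(*args))

EXECUTORS = {"slurm": run_on_slurm, "fargate": run_on_fargate}

class Task:
    """A function paired with the name of the executor that should run it."""
    def __init__(self, func, executor):
        self.func, self.executor = func, executor

def dispatch(task, *args):
    """Route the task to whichever backend its executor label names."""
    return EXECUTORS[task.executor](task.func, *args)

high_compute = Task(lambda n: sum(i * i for i in range(n)), "slurm")
low_compute = Task(lambda s: s.upper(), "fargate")

print(dispatch(high_compute, 10))   # → ('slurm', 285)
print(dispatch(low_compute, "ok"))  # → ('fargate', 'OK')
```

The same workflow code can thus mix high compute and low compute tasks simply by attaching different executor labels, which is the flexibility the slide illustrates.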
Cunningham said Agnostiq understands that Python is not the first language of choice for many HPC users: “We don’t ask that you rewrite legacy code in a new language. Instead, we provide bindings between Covalent and other common HPC languages, so that Python is used for orchestration, while C or C++, Bash, Julia and Fortran are used for the actual compute task.”
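One simple way to picture Python-as-orchestrator is to have a Bash one-liner do the “work” while Python handles dispatch and result collection; the same pattern extends to C/C++, Julia or Fortran binaries. This sketch uses plain `subprocess`, not Covalent’s language bindings, and assumes `bash` is available on the host.

```python
import subprocess

def bash_task(command: str) -> str:
    """Run a shell command and return its stdout, raising on failure."""
    proc = subprocess.run(
        ["bash", "-c", command], capture_output=True, text=True, check=True
    )
    return proc.stdout.strip()

# Python orchestrates; Bash computes.
result = bash_task("echo $((6 * 7))")
print(result)  # → 42
```

In a real workflow the shell command would invoke a compiled simulation binary, with the Python layer only wiring inputs and outputs between tasks.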
Focusing for a moment on the quantum portion, Cunningham said, “In today’s NISQ (noisy intermediate scale quantum) era, quantum computers are in high demand, and it is commonplace to wait for hours in queues. Practically, this means both hardware providers and research groups need to be careful about how HPC and quantum reservations are utilized. Quantum computers are expensive, difficult to maintain, and have a relatively short lifetime compared to classical resources.”
Obviously, these are early days for Agnostiq. “We’re engaging with a variety of potential customers before we go for any enterprise or monetized version. We’re really trying to get good user feedback from a variety of use cases. So, this includes software startups in deep tech like machine learning, and pharmaceuticals. It includes national supercomputing centers. We’re sort of exploring how this performs, you know, in some of these larger systems in North America, and after ISC, hopefully in Europe, as well.”
Agnostiq agrees that broad quantum advantage – the time when many applications will perform significantly better on quantum computers than classical ones – is some years away. Cunningham suggests the middle of the decade for select use cases and maybe another five years after that for more widespread adoption. Agnostiq hopes to produce tools that help hybrid HPC-quantum users make better compute choices at the workflow prototype and R&D stage.
Recalling his early days at Agnostiq, Cunningham said, “My first task was to go onto AWS, and learn how to provision infrastructure, so that we could actually start doing some experiments and ultimately do them at scale. What we found is that very quickly, it became difficult to manage costs. As you know, we’re a small organization. We found it was kind of difficult to manage this interplay between classical and quantum, and all of today’s quantum algorithms are going to require some classical compute. We’ve seen various cloud providers like IBM and AWS starting to provide these sort of hybrid platforms through Qiskit Runtime, and AWS has Braket Hybrid Jobs.”
Now, Agnostiq is developing tools to help users with their hybrid HPC-quantum workflows.
“This problem of trying to understand where to invest money covers not only which applications are going to be good for the long term, but also where to invest in a particular technology without going all in. So today, you might have superconducting quantum computers that are the best for a particular application, and maybe you get some sort of incremental speed up from using these devices, or at least you can use them to train employees. In a couple of years from now, maybe you’re going to be looking at photonic quantum computers, or maybe you’re going to be looking at neutral atoms. Our take on this is that it’s very dynamic. Nobody really knows who’s going to be the winner.”