ORNL’s Raphael Pooser on DoE’s Quantum Testbed Project

By John Russell

March 11, 2020

Quantum computing, and quantum information science generally, are areas of aggressive research at the Department of Energy. Their promise, of course, is tantalizing – vast computational scale and impenetrable communication, for starters. Depending on how one defines practical utility, a few applications may not be just distant visions. At least that’s the hope. The most visible sign of that hope, and of the worry about falling behind in a global race to practical quantum computing writ large, is the $1.2 billion U.S. National Quantum Initiative Act passed in 2018.

HPCwire recently spoke with Raphael Pooser, PI for DoE’s Quantum Testbed Pathfinder project and a member of Oak Ridge National Laboratory’s Quantum Information Science group, whose work encompasses quantum sensing, quantum communications, and quantum computing. Pooser also leads the quantum sensing team at ORNL. Broadly, DoE’s Quantum Testbed project is a multi-institution effort involving national labs and academia whose mission has two prongs: one – the Quantum Testbed Pathfinder – is intended to assess quantum computing technologies and deliver tools and benchmarks; and the second – the Quantum Testbeds for Science – is intended to provide quantum computing resources to the research community to foster understanding of how to best use quantum computing to advance science.

Part of what’s noteworthy here is the project’s candid acknowledgement of quantum computing’s nascent stage, the so-called NISQ era in which noisy intermediate-scale quantum computers dominate. The Quantum Testbed program is trying to figure out how to improve and make practical use of NISQ systems while also pursuing fault-tolerant quantum computers. Moreover, the whole quantum computing community is seeking to demonstrate quantum advantage – that is, use of a quantum computer to do something practical sufficiently faster (and more economically) than a classical computer to warrant switching to quantum computing for that application.

Raphael Pooser, ORNL

As Pooser told HPCwire, “I’m personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we’re contributing towards finding a quantum advantage for real scientific applications. But if it doesn’t happen before fault tolerance, it won’t necessarily shock me. It’ll just be disappointing.”

Without doubt there are challenges ahead, but there have also been notable accomplishments. Pooser noted the use of quantum communications to secure voting results, albeit over a short distance, in Vienna, and that some banks use quantum key encryption for short-distance communication. Quantum computing, too, has shown progress, though it remains much further from general practical use. It has been used, for example, in proof-of-concept efforts to calculate ground state energies for a few molecules. Note too that the quantum testbed project is just one, although a big one, of many DoE-backed quantum science research efforts.

The ORNL quantum information science efforts emphasize multidisciplinary collaboration. “We are a group of about 20 full-time staff, and have several postdocs, grad students, and interns,” said Pooser. “The group members are distributed about evenly over the teams. One thing to note is that the team members pretty much engage in whatever research they are interested in, and are not limited by what team they’re on. I do research in all three areas of QIS, for example. Others choose to engage in research solely for quantum computing. We have a very large breadth due to our need to be ready to serve the needs of government agencies as they emerge. For example, quantum computing, though long studied elsewhere, has only recently become a core program within DoE; our group made sure to maintain ORNL’s level of expertise in this area over time so that we were able to rise to meet DoE’s needs when it expressed them.”

Presented here is part one of Pooser’s conversation with HPCwire, which focuses on quantum computing and what the Testbed Pathfinder group is doing. Part two of the interview, which will be published shortly, focuses on quantum information.

The Testbed Pathfinder group is charged with delivering benchmarks, code, and technology assessment. Peer-reviewed papers are a big part of the expected output, 10-to-15 a year, said Pooser, noting, “Those are important because most of them almost come with how-to guides. If you download a paper and you are versed in the state of the art, you can reproduce our work on a quantum computer. You can actually take our work and apply it, at least right now, to the freely available IBM cloud machine.” Code too is being made available, such as ORNL-developed XACC, which stands for accelerated cross compiler. Most of the work is accessible through the ORNL quantum information science archive or on GitHub.

The interview installment presented here touches on benchmarking, competing qubit technologies, the emerging software ecosystem, and the quest for quantum advantage. Also, it doesn’t dig deeply into basic quantum computing concepts as they have been covered earlier.

HPCwire: Let’s start with an overview of DoE and ORNL quantum work.

Raphael Pooser: DoE has multiple quantum programs going on. One of the first was this program called Quantum Testbed Pathfinder. This is really about benchmarking quantum computers. The reason for this project is that we need to help DoE understand what quantum computers are capable of within the context of the things DoE is interested in. So we want to understand how quantum computing can help DoE reach its goals, more or less independently of other agencies. It’s not as concerned with some of the applications that other agencies might be concerned with. What we’re really talking about are fundamental science questions. To do this we need to benchmark quantum systems and, through this process of benchmarking, tell DoE what it is about quantum computers that needs to be improved in the future.

HPCwire: What does benchmarking actually mean in this context, and are you using commercially developed machines such as those from IBM, Rigetti, etc.?

Raphael Pooser: Yeah, great question. Quantum computing is in such a nascent stage. What do we even mean by benchmarking? So, we are using the commercially available devices. That includes IBM and Rigetti, and a company called IonQ, which is an ion trap technology company. We are also working, though not as tightly yet, with Google, [which] has benchmarked its own machine. We are working with Google to benchmark their machines more closely and with an independent mindset. Those are the four companies we’re now working with. We’re also in talks with various other quantum computing companies that run the gamut from US-based all the way to Canadian- and Australian-based companies.

IBM Q System (IBM photo)

In addition, DoE has its own quantum computing testbed efforts underway. In fact, there’s a second part to this program in which two national labs are building quantum computer testbed facilities (Testbeds for Science), which are meant to give folks like me deeper access. By deeper access, what I mean is much closer-to-the-metal access so that we can really stress the quantum computers. Those two systems being built are at Berkeley (National Laboratory) and Sandia (National Laboratories). Those are superconducting quantum computers and ion trap quantum computers. Finally, to round it out, we also have optical quantum computers here at Oak Ridge (National Laboratory) which have been used in my project a couple of times. We haven’t really gotten around to deeply benchmarking and stressing those. But I think the one-sentence answer to your question is that we are technology agnostic and our goal is to benchmark every quantum computer that we can get our hands on.

HPCwire: You mentioned ion trap and superconducting, which are certainly the quantum computing technologies that have gotten the most attention. What about others, such as Intel’s silicon spin-based approach? Are you looking at other technologies?

Raphael Pooser: We do believe we’re going to get our hands on some other technologies soon. I can’t say exactly what those technologies are due to business considerations for the companies involved. I can tell you without giving anything away that my project in particular, and Oak Ridge more generally, have been in talks with every single company that has a quantum computer in the works, and we’re in the process of gaining access to many of them. That doesn’t just include Intel. Speaking broadly, the silicon quantum dot-based qubit (Intel) is a very interesting system. Those are hard to benchmark now because access to those systems is limited. They are still more laboratory-based, but we expect that because of companies like Intel, and because of work going on in this field at Sandia and at the University of New South Wales in Australia, these systems are going to become benchmarkable in the future.

The short answer is, we haven’t benchmarked any silicon-based qubit-based systems yet because we don’t have access to them, but we know who the players are, and we’re in talks with those players.

HPCwire: Back to basics, what exactly does benchmarking mean here? One can think of many things to look at when assessing how these systems perform. What exactly does quantum benchmarking involve?

Rigetti quantum processor

Raphael Pooser: That’s also a really good question, because the concept of benchmarking in quantum computing, especially at this stage of quantum computing, is different from classical benchmarking, although they do bear similarities. One of the things that we want to measure is performance, and by performance what we’re talking about are the resource costs required to get an answer. At the same time, we also want to measure the quality of the answer. So one of the places where classical computing and quantum computing can vary quite dramatically, especially at this stage in the quantum computing life cycle, is in the quality of an answer. The quantum computers you have access to today give rather noisy results. One of our jobs is to quantify to what extent noise affects the quality of the answer you can get from the quantum computer.

The other major component of benchmarking is asking what kind of resources it takes to run this or that interesting problem. Again, these are problems of interest to DoE, so basic science problems in chemistry and nuclear physics and things like that. What we’ll do is take applications in chemistry and nuclear physics and convert them into what we consider a benchmark. We consider it a benchmark when we can distill a metric from it. So the metric could be the accuracy, the quality of the solution, or the resources required to get a given level of quality. Look at our papers and you’ll see that we’ll discuss using a certain number of qubits to do a computation versus a different number of qubits. Or we’ll talk about using one particular level of theory versus another level of theory, and say that if you were able to use such and such level of theory you would get a better answer, but because the computer has this much noise, it tempers the quality of your answer by some amount.
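To make the “distill a metric” idea concrete, here is a deliberately crude sketch in Python (the noise model, numbers, and tolerance are invented for illustration; this is not ORNL’s benchmark code):

```python
# Toy benchmark harness (illustrative only): a "run" returns an energy
# estimate whose quality degrades with circuit depth and per-step error.

def simulated_run(true_energy, depth, error_per_step):
    """Model: each time step multiplies the signal by (1 - error_per_step)."""
    fidelity = (1.0 - error_per_step) ** depth
    return true_energy * fidelity

def benchmark(true_energy, depth, error_per_step, tolerance):
    """Distill metrics from the run: accuracy, and whether it meets tolerance."""
    estimate = simulated_run(true_energy, depth, error_per_step)
    accuracy = abs(estimate - true_energy)
    return {"estimate": estimate,
            "accuracy": accuracy,
            "meets_tolerance": accuracy <= tolerance}

# A deeper circuit (say, a higher level of theory) gives the noise more
# chances to temper the quality of the answer:
shallow = benchmark(true_energy=-1.0, depth=10, error_per_step=0.005, tolerance=0.05)
deep = benchmark(true_energy=-1.0, depth=100, error_per_step=0.005, tolerance=0.05)
```

The point of the toy is the shape of the output: a benchmark run yields both a resource count (here, depth) and a quality metric, rather than a single pass/fail number.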

HPCwire: It’s interesting to hear discussion about ‘noise’ in quantum computing and how its sources vary, spanning everything from manufacturing issues to characteristics of different gate types to the way you lay out a circuit. They all affect performance. Do I have this idea correct? And can you talk about how you handle noise, and whether that translates into assessing noise for specific quantum circuits and algorithms, since they are so intertwined?

Raphael Pooser: You’re hitting on something that’s deeply important in the NISQ era of quantum computing. You really do have different gates which are more or less useful for different architectures. It’s just as in classical computing, back in the days when people used to get very concerned about the compiler optimizations that an Intel compiler would use when compiling benchmarks for Intel processors versus using that same compiler for AMD processors. There used to be quite a bit of wondering about whether a benchmark would have run better on a RISC architecture versus an x86 architecture.

In quantum computing you have to think in a similar way, but more deeply, because the quantum processors can be vastly different when it comes down to how they implement the gates. What we have to do is say, “let’s get this algorithm for this application that we’ve compiled as a benchmark. If we want to run it on a different quantum computing processor, we have to make sure that we make the most efficient implementation in terms of the circuit.” In this early era of quantum computing, what you really quickly realize is that noise forces you to limit what we call the depth of the circuit (very roughly, gate sequence execution as measured by time steps).

Google’s Sycamore quantum chip

We’re of the view, and I think other people are of the view nowadays even in classical benchmarking, that if a machine has an advantage over another machine – like, let’s say the entangling gate in one quantum computer is more efficient than some other architecture’s in the sense that it has a higher entangling fidelity or can entangle more qubits at a time and it allows you to simplify your algorithms – then all the more power to that platform; it should be allowed to exploit that advantage when running the benchmark. So we move forward with the idea that we want to squeeze the most out of every machine possible, and that does mean exactly what you said. You’re looking at a circuit that in some cases may be general enough to run on any architecture, but in other cases, as in the case of translating from a superconducting to an ion-based system, we need to translate the gate set a little bit. Luckily, in this era, because circuit depths are so short, this is not an onerous task. We work with the developers of the hardware to do this. We’re able to do this because there’s not an overabundance of hardware platforms out there and there’s not an overabundance of circuit depth.
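The gate-set translation Pooser describes can be sketched abstractly (the backend names and native gate sequences below are hypothetical stand-ins, not any vendor’s actual gate set):

```python
# Hypothetical gate-set translation (gate names are illustrative). Each
# abstract gate expands into the sequence a given backend runs natively.

NATIVE_GATES = {
    "superconducting": {
        "H": ["H"],                        # implemented directly
        "CNOT": ["CNOT"],                  # native entangler
    },
    "ion_trap": {
        "H": ["RY", "RX"],                 # composed from single-qubit rotations
        "CNOT": ["RY", "MS", "RX", "RY"],  # entangle via a Molmer-Sorensen-style gate
    },
}

def translate(circuit, backend):
    """Expand each abstract gate into the backend's native sequence."""
    table = NATIVE_GATES[backend]
    native = []
    for gate in circuit:
        native.extend(table[gate])
    return native

bell_prep = ["H", "CNOT"]  # abstract Bell-state preparation
sc_circuit = translate(bell_prep, "superconducting")
ion_circuit = translate(bell_prep, "ion_trap")  # longer, but still shallow
```

Because NISQ circuit depths are short, the expansion stays small, which is why Pooser calls the translation "not an onerous task."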

HPCwire: That sounds like a major headache for would-be users of quantum computing, this idea that they need to write applications differently, or at least have different compilers, to get the most out of a platform.

Raphael Pooser: Well, we do tune a lot of things by hand. However, if you are a user out on the street who is interested in this stuff, you can grab our software suite and write code once and then run it on quite a few different architectures at this point: you can run it on IBM and Rigetti, and on IonQ using their simulator right now, because they haven’t put their cloud access up yet. We even have Cirq built in. Cirq is Google’s language. You could, say, write in Qiskit if you want, which is IBM’s language, and then our stack will translate it to Google or Rigetti or any other hardware language you want. We even have support for IBM’s low-level language. IBM has done something very smart; they have enabled access to what we’re calling the quantum control layer of the quantum computer. They call this language OpenPulse. We even have support built in for the quantum control layer. Essentially, if any vendor will give it to us, we’ll build it. So this is actually open source software that’s kind of a product of our work.

HPCwire: Before turning to software issues and your project’s deliverables, could you comment on the competing qubit technologies – superconducting, ion trap, the silicon spin, Microsoft’s topological qubit, etc. – and handicap them in terms of strengths, weaknesses, closeness to practical use? Also, what application areas do you think each is perhaps more suited for?

Raphael Pooser: Another good question. First, going straight to the topological qubit. Yes, Microsoft has been researching this area for a while. They’re rather excited about it and frankly, I am too, because if you can discover a topological qubit then you get around a lot of the problems that all the current qubits have – and that is the physical error rate using current technologies. A topological logical qubit would basically jump you forward by leaps and bounds on the path to fault tolerance. The flip side is that, speaking in terms of what you call a handicap, the topological qubits are further off into the future. They’re definitely not impossible, but there has not currently been a demonstration of a fully functioning topological qubit, in the sense of the DiVincenzo criteria, right? You really can’t talk about scalably building a quantum computer with a system until you meet these DiVincenzo criteria. But there’s great promise there, because if we can find topological qubits with low physical error rates, it’s going to be quite a large breakthrough. So that’s several years off in the future.

Guys like me are super excited about using quantum computers right now. We’ve got the semiconductor-based superconducting devices and ion traps, and yes, there are different applications each system excels at. For superconducting architectures, you really need look no further than the current scientific literature and you’ll see that people are using them extensively for many different application types. One of the most widespread is an application called the Variational Quantum Eigensolver (VQE). This is a method of searching for the ground state of Hamiltonians. As long as you can represent the Hamiltonian of the system of interest in a way that encodes on a quantum computer, you can get ground state energies out. One of the big breakthroughs, of course, was using this to calculate ground state energies for chemistry, for molecules, and it was also recently demonstrated for nuclear interactions. Superconducting devices have proven themselves to be quite strong for that. But they’re not limited to that. There are other things called quantum approximate optimization algorithms and some machine learning protocols. (Link to a good overview of VQE implementation by Talia Gershon of IBM/MIT).
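For readers who want a concrete picture of VQE, here is a toy, purely classical sketch (a made-up single-qubit Hamiltonian and a one-parameter ansatz; a real VQE evaluates the energy on quantum hardware inside a classical optimization loop):

```python
import math

# Toy VQE sketch (classical simulation, illustrative only): minimize
# E(theta) = <psi(theta)|H|psi(theta)> for a made-up real symmetric
# single-qubit Hamiltonian H = [[h00, h01], [h01, h11]], using the
# one-parameter ansatz |psi(theta)> = cos(theta)|0> + sin(theta)|1>.

h00, h11, h01 = 1.0, -1.0, 0.5

def energy(theta):
    c, s = math.cos(theta), math.sin(theta)
    return c * c * h00 + s * s * h11 + 2 * c * s * h01

# Classical outer loop: scan the variational parameter. A real VQE would
# measure energy(theta) on a quantum device and use a smarter optimizer.
thetas = [i * math.pi / 1000 for i in range(1000)]
best_theta = min(thetas, key=energy)
ground = energy(best_theta)  # exact ground-state energy is -sqrt(1 + h01**2)
```

The structure, quantum evaluation of an energy inside a classical minimization loop, is the same whether the "molecule" is this 2x2 toy or a chemistry Hamiltonian on many qubits.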

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

If you look at the ion traps, they have very high gate fidelities. Now, they’re a little bit different from the superconducting devices in the sense that ion traps have slower clock speeds but longer coherence times, which means you can do more high-fidelity operations on them before they decohere. That enables you to do certain applications that need very exact computation, so you could attempt time evolution on them. They are also good for analog computation of spin systems; they’ve proven to be very robust for calculating the input-output correlations for large numbers of spins. I’m thinking of a paper from Chris Monroe’s group where they did a 53-qubit simulation of a 53-qubit spin chain. You’re able to get large numbers of qubits with high-fidelity gates between them on ion trap technology.

Where ions and superconductors have something in common is that both of them have proven to be interesting platforms for machine learning. So folks have run machine learning algorithms, the same mechanisms, on both platforms. I just want to say that in this NISQ era, while there are large differences in the platform technology, their capabilities in terms of the types of algorithms you can run on them are fairly comparable. In other words, it’s too early to really say which is better than the other.

HPCwire: It is interesting to track efforts by various quantum computing technology vendors to weigh in on metrics and benchmarks. IonQ has done this. IBM has perhaps made the most noise pitching its Quantum Volume measure at last year’s APS March meeting. It’s a composite measure with many system-wide facets – gate error rates, decoherence times, qubit connectivity, operating software efficiency, and more – effectively baked into the measure.  

Raphael Pooser: I think quantum volume is a good benchmark. Quantum volume will give a sense of how many gates (circuit depth) a given set of qubits (circuit width) can support before decohering. I think it is most useful when combined or correlated with other benchmarks. That is, certain benchmarks look at certain aspects of the machines, and to get a good picture, you need to run multiple benchmarks of different types. I think that since Honeywell recently announced their new device in terms of Quantum Volume, you’re going to see a few companies here and there picking it up going forward. However, in the early stage of quantum computing, it’s important not to try and reduce the machines down to single numbers.
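Pooser’s description, how much depth a given width can support before decohering, can be caricatured with a toy error model (the independence assumption and numbers are invented; IBM’s actual protocol uses heavy-output sampling on randomized square circuits):

```python
# Toy model of the quantum volume idea (illustrative only). Find the
# largest n for which an n-qubit, depth-n "square" circuit succeeds with
# probability above 2/3, then report the volume as 2**n.

def success_probability(n, gate_error):
    """Crude model: n*n gates, each succeeding independently."""
    return (1.0 - gate_error) ** (n * n)

def quantum_volume(gate_error, threshold=2.0 / 3.0, max_n=64):
    best = 0
    for n in range(1, max_n + 1):
        if success_probability(n, gate_error) > threshold:
            best = n
    return 2 ** best

qv_good = quantum_volume(gate_error=0.005)  # better gates support deeper circuits
qv_poor = quantum_volume(gate_error=0.02)
```

Even in this caricature you can see why a single number hides detail: a machine with many qubits but noisy gates and one with few qubits but excellent gates can report the same volume.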

HPCwire: Turning to software, we frequently receive press releases claiming to be able to dramatically improve quantum hardware performance – outcome quality, ease-of-use, etc. – with little background on what’s being measured or how improvements were achieved. That makes it hard for non-experts like me to assess. What’s your sense of the emerging software ecosystem and the accompanying clamor around their efforts?

Raphael Pooser: I totally agree that clarity is definitely difficult to come by there. There has been a proliferation of quantum computing software stacks in the past few years, and it’s definitely true that you’re not quite sure what to do with them all. There seems to be this tendency to try to build platform-agnostic software stacks, because most people believe doing that will garner them the most users and make them the most relevant.

Now, about how these software stacks and tools can supposedly reduce error rates and improve hardware performance: you kind of scratch your head and wonder how a guy sitting halfway across the nation can reduce error rates on hardware he doesn’t even have control over. However, it’s actually possible. Most of the time they are talking about building something into their suite called error mitigation. We certainly do; we build this concept called error mitigation into our software and it really does help.

What error mitigation is, in a nutshell, is rooting out the noise that causes errors in quantum computers and trying to correct your data for it. You can either post-process your data or you can change what you’re doing. [For example] if you’re doing an iterative calculation that uses expectation values, you can actually change what you do when you calculate the expectation value to try to calculate it with less error, based on some characterization of the machine.
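One common flavor of the post-processing Pooser mentions is zero-noise extrapolation, which can be sketched with a toy noise model (the linear damping and the numbers are illustrative only):

```python
# Zero-noise extrapolation sketch (toy model). Run the same circuit at
# deliberately amplified noise levels, then extrapolate the measured
# expectation value back to the zero-noise limit.

def noisy_expectation(true_value, noise_scale, damping=0.08):
    """Model: noise damps the measured expectation value linearly."""
    return true_value * (1.0 - damping * noise_scale)

true_value = 0.75                      # what an ideal machine would return
e1 = noisy_expectation(true_value, 1)  # the hardware's native noise level
e2 = noisy_expectation(true_value, 2)  # noise deliberately doubled

# Linear (Richardson-style) extrapolation to noise_scale = 0:
mitigated = 2 * e1 - e2
```

In this toy the damping really is linear, so the extrapolation recovers the ideal value exactly; on real hardware the noise model is only approximate, which is why characterization of the machine matters so much.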

In addition to the software stack companies, there are companies that specialize in quantum characterization. One example is a company called Quantum Benchmark. These companies specialize in characterizing the quantum computers so that we can find out where the noise comes from, and we can use that knowledge to make our answers better. That’s error mitigation.

Now, back to the bigger question: what are all these stacks doing? What do we do with all these software stacks everywhere? It seems like this write-once, run-on-any-backend approach is a popular way to go. And I think it’s definitely a good idea for the industry as a whole, because you really don’t know which quantum computing technology is going to win out.

That goes back to your other question about the relative merits and strengths and weaknesses of these various hardware architectures. Part of the answer to that question was: we don’t know yet. We’re in the process of testing all these machines, so we need these write-once, run-maybe-not-everywhere-but-in-most-places type stacks so that we can quickly, and with a lot of agility, test new hardware as it comes out, or existing hardware, and discover what it’s good for. So I support these ideas that companies like Q-CTRL are advancing. Azure (Microsoft) has Q# (“q sharp”). They’re also becoming a multi-backend platform.

HPCwire: You’ve said producing code is one of the Testbed Pathfinder’s goals. Maybe talk a little bit about the software it’s developed. I’m thinking of XACC and QCOR.

A quantum computer produced by the Canadian company D-Wave systems. Image: D-Wave Systems Inc.

Raphael Pooser: Oak Ridge has developed this platform we call XACC, which stands for accelerated cross compiler. The reason we developed our own is that we were just one of the first to do it; a lot of other companies started doing it only recently. Also, DoE actually needs its own software stack. We’re happy to use the vendor software stacks, and we do, but DoE also wants its own so that it can have maximum control over it and really know the nuts and bolts of what’s going on under the hood. In general, these are all very good developments. I don’t know what will win out in the end and who will be around 10 years from now, but I think it’s all good stuff.

The fastest way to distinguish XACC and QCOR is to point out that QCOR is really a language, while XACC is, at its core, a cross compilation framework. That is, you will not write computer programs in XACC per se, but XACC will be able to “speak” many different computer languages and compile them to the appropriate machine that you’d like to run the particular algorithm on. QCOR is specifically engineered to make it easy to program quantum computers using methods that are familiar to traditional C++ programmers. Getting a little more technical, QCOR is in fact a library plus extensions that extend the existing C++ language by providing handy functions that enable heterogeneous quantum-classical programming.
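The cross-compilation idea, one intermediate representation with per-backend emitters, can be caricatured in a few lines of Python (the IR and output formats below are hypothetical, not XACC’s real API):

```python
# Caricature of a cross-compilation framework (hypothetical IR and output
# formats): programs are parsed into a common intermediate representation,
# and per-backend emitters lower the IR to each target's syntax.

IR = [("h", 0), ("cx", 0, 1)]  # common IR for a Bell-state circuit

def emit_qasm_style(ir):
    """Emitter for a hypothetical assembly-like backend."""
    return "; ".join(f"{op} q{','.join(map(str, args))}" for op, *args in ir)

def emit_sexpr_style(ir):
    """Emitter for a hypothetical s-expression backend."""
    return " ".join(
        "(" + " ".join([op.upper()] + [str(a) for a in args]) + ")"
        for op, *args in ir
    )

EMITTERS = {"qasm_style": emit_qasm_style, "sexpr_style": emit_sexpr_style}

def compile_to(ir, backend):
    """Write once in the IR, then lower to whichever backend you like."""
    return EMITTERS[backend](ir)
```

Supporting a new backend then means writing one new emitter, not rewriting user programs, which is the design choice behind the write-once, run-in-most-places stacks discussed above.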

HPCwire: Let’s talk about Quantum Supremacy versus Quantum Advantage and the timeline to reach either. Google, of course, created a stir with its claim of demonstrating quantum supremacy in the fall. What’s your sense of the relative importance of these two measures, and how soon can we expect either of them?

Raphael Pooser: Interesting question. The difference between quantum advantage and quantum supremacy, as you noted, is that advantage is actually useful for something, whereas supremacy is basically a demonstration of faster number crunching. It’s really hard to pin down when quantum advantage will happen. What I will say is that there are two schools of thought. One school of thought is that you can’t have a quantum advantage without fault tolerance, which means that we need quantum error correction. Now, the problem with that is that fully quantum-error-corrected quantum computers are quite far off, at this point probably over a decade away.

There’s another school of thought that says we might be able to gain quantum advantage in this NISQ era if we play our cards right, and that quantum supremacy is a leading indicator that it might be possible. The way this would work is that you choose a specific application that is of interest, right? Like, say, calculating electron mobility in a molecule or something, and you tailor your quantum computer, maybe even at an analog level, to the application at hand. In other words, it doesn’t have to be a universal machine, but you do a computation that is bona fide faster and more accurate than a classical machine could do, even without error correction. Error mitigation is believed to play a very big role here because of the prevalence of noise in this era.

So there are the two schools of thought, which really just break down on either side of fault tolerance: do you believe advantage can come before or after fault tolerance? I don’t feel comfortable saying things like “in a couple of years quantum advantage will happen.” Although, you know, the word is that as soon as Google demonstrated supremacy they said “we’re going to have a quantum advantage next in this application,” which I’ve heard might be a random number generator certification and would technically be quantum advantage. So the definition of quantum advantage varies from person to person. Some people might just wave their hands and say, “Bah! Certifying a random number generator, that’s not quantum advantage. I wanted the nuclear bound state calculation with 100 qubits in it. That’s something that you could never compute in a million years, and it’s scientifically useful versus random number generation.” Others might say no, that’s actually useful for something like communication and that is quantum advantage.

It’s all kind of a moot argument, of course, because Google hasn’t actually done that yet. My personal opinion is that quantum advantage is not [just] a couple of years away; it may be more than 10 years away. But if the others are right about not needing fault tolerance for a true quantum advantage, it may be five years away. I’m personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we’re contributing towards finding a quantum advantage for real scientific applications. But if it doesn’t happen before fault tolerance, it won’t necessarily shock me. It’ll just be disappointing.

HPCwire: Thanks for your time!

Brief Raphael Pooser Bio (Source: ORNL)

Dr. Pooser is an expert in continuous variable quantum optics. He leads the quantum sensing team within the quantum information science group. His research interests include quantum computing, neuromorphic computing, and sensing. He currently leads the Quantum Computing Testbed project at ORNL, a large multi-institution collaboration. He has also developed a quantum sensing program from the ground up based on quantum networks over a number of years at ORNL. He has been working to demonstrate that continuous variable quantum optics, quantum noise reduction in particular, has important uses in the quantum information field. One of his goals is to show that the quantum control and error correction required in computing applications are directly applicable to quantum sensing efforts. He is also interested in highlighting the practicality of these systems, demonstrating their ease of use and broad applicability. His research model uses quantum sensors as a showcase for the technologies that will enable quantum computing. Dr. Pooser has over 16 years of quantum information science experience, having led the quantum sensing program at ORNL for the past eight years. Dr. Pooser publishes in high-impact journals, including Science, Nature, and Physical Review Letters. He previously served as a distinguished Wigner Fellow. He also worked as a postdoctoral fellow in the Laser Cooling and Trapping Group at NIST after receiving his PhD in Engineering Physics from the University of Virginia. He received a B.S. in Physics from New York University, graduating cum laude on an accelerated schedule. Dr. Pooser is active in the community, having served as a spokesperson for United Way and for the Boys and Girls Clubs of the TN Valley on many occasions, in addition to volunteer work.


September 16, 2021

Five months ago, when Cerebras Systems debuted its second-generation wafer-scale silicon system (CS-2), co-founder and CEO Andrew Feldman hinted of the company’s coming cloud plans, and now those plans have come to fruition. Today, Cerebras and Cirrascale Cloud Services are launching... Read more…

AI Hardware Summit: Panel on Memory Looks Forward

September 15, 2021

What will system memory look like in five years? Good question. While Monday's panel, Designing AI Super-Chips at the Speed of Memory, at the AI Hardware Summit, tackled several topics, the panelists also took a brief glimpse into the future. Unlike compute, storage and networking, which... Read more…

ECMWF Opens Bologna Datacenter in Preparation for Atos Supercomputer

September 14, 2021

In January 2020, the European Centre for Medium-Range Weather Forecasts (ECMWF) – a juggernaut in the weather forecasting scene – signed a four-year, $89-million contract with European tech firm Atos to quintuple its supercomputing capacity. With the deal approaching the two-year mark, ECMWF... Read more…

Quantum Computer Market Headed to $830M in 2024

September 13, 2021

What is one to make of the quantum computing market? Energized (lots of funding) but still chaotic and advancing in unpredictable ways (e.g. competing qubit tec Read more…

Amazon, NCAR, SilverLining Team for Unprecedented Cloud Climate Simulations

September 10, 2021

Earth’s climate is, to put it mildly, not in a good place. In the wake of a damning report from the Intergovernmental Panel on Climate Change (IPCC), scientis Read more…

After Roadblocks and Renewals, EuroHPC Targets a Bigger, Quantum Future

September 9, 2021

The EuroHPC Joint Undertaking (JU) was formalized in 2018, beginning a new era of European supercomputing that began to bear fruit this year with the launch of several of the first EuroHPC systems. The undertaking, however, has not been without its speed bumps, and the Union faces an uphill... Read more…

How Argonne Is Preparing for Exascale in 2022

September 8, 2021

Additional details came to light on Argonne National Laboratory’s preparation for the 2022 Aurora exascale-class supercomputer, during the HPC User Forum, held virtually this week on account of pandemic. Exascale Computing Project director Doug Kothe reviewed some of the 'early exascale hardware' at Argonne, Oak Ridge and NERSC (Perlmutter), while Ti Leggett, Deputy Project Director & Deputy Director... Read more…

Ahead of ‘Dojo,’ Tesla Reveals Its Massive Precursor Supercomputer

June 22, 2021

In spring 2019, Tesla made cryptic reference to a project called Dojo, a “super-powerful training computer” for video data processing. Then, in summer 2020, Tesla CEO Elon Musk tweeted: “Tesla is developing a [neural network] training computer called Dojo to process truly vast amounts of video data. It’s a beast! … A truly useful exaflop at de facto FP32.” Read more…

Berkeley Lab Debuts Perlmutter, World’s Fastest AI Supercomputer

May 27, 2021

A ribbon-cutting ceremony held virtually at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC) today marked the official launch of Perlmutter – aka NERSC-9 – the GPU-accelerated supercomputer built by HPE in partnership with Nvidia and AMD. Read more…

Esperanto, Silicon in Hand, Champions the Efficiency of Its 1,092-Core RISC-V Chip

August 27, 2021

Esperanto Technologies made waves last December when it announced ET-SoC-1, a new RISC-V-based chip aimed at machine learning that packed nearly 1,100 cores onto a package small enough to fit six times over on a single PCIe card. Now, Esperanto is back, silicon in-hand and taking aim... Read more…

Enter Dojo: Tesla Reveals Design for Modular Supercomputer & D1 Chip

August 20, 2021

Two months ago, Tesla revealed a massive GPU cluster that it said was “roughly the number five supercomputer in the world,” and which was just a precursor to Tesla’s real supercomputing moonshot: the long-rumored, little-detailed Dojo system. “We’ve been scaling our neural network training compute dramatically over the last few years,” said Milan Kovac, Tesla’s director of autopilot engineering. Read more…

CentOS Replacement Rocky Linux Is Now in GA and Under Independent Control

June 21, 2021

The Rocky Enterprise Software Foundation (RESF) is announcing the general availability of Rocky Linux, release 8.4, designed as a drop-in replacement for the soon-to-be discontinued CentOS. The GA release is launching six-and-a-half months after Red Hat deprecated its support for the widely popular, free CentOS server operating system. The Rocky Linux development effort... Read more…

Intel Completes LLVM Adoption; Will End Updates to Classic C/C++ Compilers in Future

August 10, 2021

Intel reported in a blog this week that its adoption of the open source LLVM architecture for Intel’s C/C++ compiler is complete. The transition is part of In Read more…

Google Launches TPU v4 AI Chips

May 20, 2021

Google CEO Sundar Pichai spoke for only one minute and 42 seconds about the company’s latest TPU v4 Tensor Processing Units during his keynote at the Google I Read more…

AMD-Xilinx Deal Gains UK, EU Approvals — China’s Decision Still Pending

July 1, 2021

AMD’s planned acquisition of FPGA maker Xilinx is now in the hands of Chinese regulators after needed antitrust approvals for the $35 billion deal were receiv Read more…

Leading Solution Providers


Hot Chips: Here Come the DPUs and IPUs from Arm, Nvidia and Intel

August 25, 2021

The emergence of data processing units (DPU) and infrastructure processing units (IPU) as potentially important pieces in cloud and datacenter architectures was Read more…

10nm, 7nm, 5nm…. Should the Chip Nanometer Metric Be Replaced?

June 1, 2020

The biggest cool factor in server chips is the nanometer. AMD beating Intel to a CPU built on a 7nm process node* – with 5nm and 3nm on the way – has been i Read more…

Julia Update: Adoption Keeps Climbing; Is It a Python Challenger?

January 13, 2021

The rapid adoption of Julia, the open source, high level programing language with roots at MIT, shows no sign of slowing according to data from Julialang.org. I Read more…

HPE Wins $2B GreenLake HPC-as-a-Service Deal with NSA

September 1, 2021

In the heated, oft-contentious, government IT space, HPE has won a massive $2 billion contract to provide HPC and AI services to the United States’ National Security Agency (NSA). Following on the heels of the now-canceled $10 billion JEDI contract (reissued as JWCC) and a $10 billion... Read more…

Quantum Roundup: IBM, Rigetti, Phasecraft, Oxford QC, China, and More

July 13, 2021

IBM yesterday announced a proof for a quantum ML algorithm. A week ago, it unveiled a new topology for its quantum processors. Last Friday, the Technical Univer Read more…

Intel Launches 10nm ‘Ice Lake’ Datacenter CPU with Up to 40 Cores

April 6, 2021

The wait is over. Today Intel officially launched its 10nm datacenter CPU, the third-generation Intel Xeon Scalable processor, codenamed Ice Lake. With up to 40 Read more…

Frontier to Meet 20MW Exascale Power Target Set by DARPA in 2008

July 14, 2021

After more than a decade of planning, the United States’ first exascale computer, Frontier, is set to arrive at Oak Ridge National Laboratory (ORNL) later this year. Crossing this “1,000x” horizon required overcoming four major challenges: power demand, reliability, extreme parallelism and data movement. Read more…

Intel Unveils New Node Names; Sapphire Rapids Is Now an ‘Intel 7’ CPU

July 27, 2021

What's a preeminent chip company to do when its process node technology lags the competition by (roughly) one generation, but outmoded naming conventions make it seem like it's two nodes behind? For Intel, the response was to change how it refers to its nodes with the aim of better reflecting its positioning within the leadership semiconductor manufacturing space. Intel revealed its new node nomenclature, and... Read more…

  • arrow
  • Click Here for More Headlines
  • arrow