ORNL’s Raphael Pooser on DoE’s Quantum Testbed Project

By John Russell

March 11, 2020

Quantum computing and quantum information science generally are areas of aggressive research at the Department of Energy. Their promise, of course, is tantalizing – vast computational scale and impenetrable communication, for starters. Depending on how one defines practical utility, a few applications may not be just distant visions. At least that’s the hope. The most visible sign of that hope – and of the worry about falling behind in a global race to practical quantum computing writ large – is the $1.2B U.S. National Quantum Initiative passed in 2018.

HPCwire recently spoke with Raphael Pooser, PI for DoE’s Quantum Testbed Pathfinder project and a member of Oak Ridge National Laboratory’s Quantum Information Science group, whose work encompasses quantum sensing, quantum communications, and quantum computing. Pooser also leads the quantum sensing team at ORNL. Broadly, DoE’s Quantum Testbed project is a multi-institution effort involving national labs and academia whose mission has two prongs: one – the Quantum Testbed Pathfinder – is intended to assess quantum computing technologies and deliver tools and benchmarks; and the second – the Quantum Testbeds for Science – is intended to provide quantum computing resources to the research community to foster understanding of how to best use quantum computing to advance science.

Part of what’s noteworthy here is the project’s candid acknowledgement of quantum computing’s nascent stage, the so-called NISQ era in which noisy intermediate-scale quantum computers dominate. The Quantum Testbed program is trying to figure out how to improve and make practical use of NISQ systems while also pursuing fault-tolerant quantum computers. Moreover, the whole quantum computing community is seeking to demonstrate quantum advantage – that is, use of a quantum computer to do something practical sufficiently faster (and more economically) than a classical computer to warrant switching to quantum computing for that application.

Raphael Pooser, ORNL

As Pooser told HPCwire, “I’m personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we’re contributing towards finding a quantum advantage for real scientific applications. But if it doesn’t happen before fault tolerance, it won’t necessarily shock me. It’ll just be disappointing.”

Without doubt there are challenges ahead, but there have also been notable accomplishments. Pooser noted the use of quantum communications to secure voting results, albeit over a short distance, in Vienna, and that some banks use quantum key encryption for short-distance communication. Quantum computing, too, has shown progress, though it remains much further from general practical use. It’s been used, for example, in proof-of-concept efforts to calculate ground state energies for a few molecules. Note too that the quantum testbed project is just one, although a big one, of many DoE-backed quantum science research efforts.

The ORNL quantum information science efforts emphasize multidiscipline collaboration. “We are a group of about 20 full time staff, and have several postdocs, grad students, and interns,” said Pooser. “The group members are distributed about evenly over the teams. One thing to note is that the team members pretty much engage in whatever research they are interested in, and are not limited by what team they’re on. I do research in all three areas of QIS, for example. Others choose to engage in research solely for quantum computing. We have a very large breadth due to our need to be ready to serve the needs of government agencies as they emerge. For example, quantum computing, though long studied elsewhere, has only recently become a core program within DOE; our group made sure to maintain ORNL’s level of expertise in this area over time so that we were able to rise to meet DoE’s needs when it expressed them.”

Presented here is part one of Pooser’s conversation with HPCwire, which focuses on quantum computing and what the Testbed Pathfinder group is doing. Part two of the interview, which will be published shortly, focuses on quantum information.

The Testbed Pathfinder group is charged with delivering benchmarks, code, and technology assessment. Peer-reviewed papers are a big part of the expected output, 10-to-15 a year, said Pooser, noting, “Those are important because most of them come with how-to guides almost. If you download a paper and you are versed in the state of the art, you can reproduce our work on a quantum computer. You can actually take our work and apply it, at least right now, to the freely available IBM cloud machine.” Code too is being made available, such as ORNL-developed XACC, which stands for accelerated cross compiler. Most of the work is accessible through the ORNL quantum information science archive or on GitHub.

The interview installment presented here touches on benchmarking, competing qubit technologies, the emerging software ecosystem, and the quest for quantum advantage. Also, it doesn’t dig deeply into basic quantum computing concepts as they have been covered earlier.

HPCwire: Let’s start with an overview of DoE and ORNL quantum work.

Raphael Pooser: DoE has multiple quantum programs going on. One of the first was this program called Quantum Testbed Pathfinder. This is really about benchmarking quantum computers. The reason for this project is we need to help DoE understand what quantum computers are capable of within the context of the things DoE is interested in. So we want to understand how quantum computing can help DoE reach its goals, more or less independently of other agencies. It’s not as concerned with some of the applications that other agencies might be concerned with. What we’re really talking about are fundamental science questions. To do this we need to benchmark quantum systems and, through this process of benchmarking, tell DoE what it is about quantum computers that needs to be improved in the future.

HPCwire: What does benchmarking actually mean in this context and are you using commercially-developed machines such as from IBM, Rigetti, etc.?

Raphael Pooser: Yeah, great question. Quantum computing is in such a nascent stage. What do we even mean by benchmarking? So we are using the commercially available devices. That includes IBM and Rigetti, and a company called IonQ, which is an ion trap technology company. We are also working, though not as tightly yet, with Google, which has benchmarked its own machine. We are working with Google to benchmark their machines more closely and with an independent mindset. Those are the four companies we’re now working with. We’re also in talks with various other quantum computing companies that run the gamut from US-based all the way to Canadian- and Australian-based companies.

IBM Q System (IBM photo)

In addition, DoE has its own quantum computing testbed efforts underway. In fact, there’s a second part to this program in which two national labs are building quantum computer testbed facilities (Testbeds for Science), which are meant to give folks like me deeper access. By deeper access, what I mean is much closer-to-the-metal access so that we can really stress the quantum computers. Those two systems being built are at Berkeley (National Laboratory) and Sandia (National Laboratories). Those are superconducting quantum computers and ion trap quantum computers. Finally, to round it out, we also have optical quantum computers here at Oak Ridge (National Laboratory) which have been used in my project a couple of times. We haven’t really gotten around to deeply benchmarking and stressing those yet. But I think the one-sentence answer to your question is we are technology agnostic and our goal is to benchmark every quantum computer that we can get our hands on.

HPCwire: You mentioned the ion trap and superconducting, which are certainly the quantum computing technologies that have gotten the most attention. What about others, such as Intel’s silicon spin-based approach? Are you looking at other technologies?

Raphael Pooser: We do believe we’re going to get our hands on some other technologies soon. I can’t say exactly what those technologies are due to business considerations for the companies involved. I can tell you without giving anything away that my project in particular, and Oak Ridge more generally, have been in talks with every single company that has a quantum computer in the works, and we’re in the process of gaining access to many of them. That doesn’t just include Intel. Speaking broadly, the silicon quantum dot-based qubit (Intel) is a very interesting system. Those are hard to benchmark now because access to those systems is limited. They are still more laboratory-based, but we expect that because of companies like Intel, and because of work going on in this field at Sandia and at the University of New South Wales in Australia, these systems are going to become benchmarkable in the future.

The short answer is, we haven’t benchmarked any silicon-based qubit-based systems yet because we don’t have access to them, but we know who the players are, and we’re in talks with those players.

HPCwire: Back to basics, what exactly does benchmarking mean here? One can think of many things to look at when assessing how these systems perform. What exactly is quantum benchmarking involving?

Rigetti quantum processor

Raphael Pooser: That’s also a really good question because the concept of benchmarking in quantum computing, especially at this stage of quantum computing, is different from classical benchmarking, although they do bear similarities. One of the things that we want to measure is performance, and by performance what we’re talking about are the resource costs required to get an answer. At the same time, we also want to measure the quality of the answer. So one of the places where classical computing and quantum computing can vary quite dramatically, especially at this stage in the quantum computing life cycle, is in the quality of an answer. The quantum computers you have access to today give rather noisy results. One of our jobs is to quantify to what extent noise affects the quality of the answer you can get from the quantum computer.

The other major component of benchmarking is asking what kind of resources it takes to run this or that interesting problem. Again, these are problems of interest to DoE, so basic science problems in chemistry and nuclear physics and things like that. What we’ll do is take applications in chemistry and nuclear physics and convert them into what we consider a benchmark. We consider it a benchmark when we can distill a metric from it. So the metric could be the accuracy, the quality of the solution, or the resources required to get a given level of quality. Look at our papers and you’ll see that we’ll discuss using a certain number of qubits to do a computation versus a different number of qubits. Or we’ll talk about using one particular level of theory versus another level of theory, and say that if you were able to use such and such level of theory you would get a better answer, but because the computer has this much noise, it tempers the quality of your answer by some amount.

HPCwire: It’s interesting to hear discussion about ‘noise’ in quantum computing and how its sources vary, spanning manufacturing issues to characteristics of different gate types to the way you lay out a circuit. They all affect performance. Do I have this idea correct and can you talk about how you handle noise and does that translate into assessing noise for specific quantum circuits and algorithms since they are so intertwined?

Raphael Pooser: You’re hitting on something that’s deeply important in the NISQ era of quantum computing. You really do have different gates which are more or less useful for different architectures. It’s just as in classical computing, back in the days when people used to get very concerned about the compiler optimizations that an Intel compiler would use when compiling benchmarks for Intel processors versus using that same compiler for AMD processors. There used to be quite a bit of wondering about whether a benchmark would have fared better on a RISC architecture versus an x86 architecture.

In quantum computing you have to think in a similar way, but more deeply, because the quantum processors can be vastly different when it comes down to how they implement the gates. What we have to do is say, “let’s take this algorithm for this application that we’ve compiled as a benchmark; if we want to run it on a different quantum computing processor, we have to make sure that we make the most efficient implementation in terms of the circuit.” In this early era of quantum computing, what you very quickly realize is that noise forces you to limit what we call the depth of the circuit (very roughly, gate sequence execution as measured by time steps).

Google’s Sycamore quantum chip

We’re of the view, and I think other people are of this view nowadays even in classical benchmarking, that if a machine has an advantage over another machine – let’s say the entangling gate in one quantum computer is more efficient than in some other architecture, in the sense that it has a higher entangling fidelity or can entangle more qubits at a time and allows you to simplify your algorithms – then all the more power to that platform; it should be allowed to exploit that advantage when running the benchmark. So we move forward with the idea that we want to squeeze the most out of every machine possible, and that does mean exactly what you said. You’re looking at a circuit that in some cases may be general enough to run on any architecture, but in other cases, as in the case of translating from a superconducting to an ion-based system, we need to translate the gate set a little bit. Luckily, in this era, because circuit depths are so short, this is not an onerous task. We work with the developers of the hardware to do this. We’re able to do this because there’s not an overabundance of hardware platforms out there and there’s not an overabundance of circuit depth.
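To make the gate-set translation Pooser describes concrete, here is a minimal sketch (not code from the Pathfinder project) of what it means to re-express a gate in a backend’s native set. The hypothetical target backend is assumed to implement only Rz and Rx rotations; a Hadamard gate is then replaced by the sequence Rz(π/2)·Rx(π/2)·Rz(π/2), which matches H up to an unobservable global phase.

```python
import numpy as np

# The abstract Hadamard gate we want to translate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def rz(theta):
    # Rotation about the Z axis.
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def rx(theta):
    # Rotation about the X axis.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def translate_h():
    # Hypothetical translation-table entry: H in an {Rz, Rx}-only native
    # gate set, valid up to global phase.
    return [("rz", np.pi / 2), ("rx", np.pi / 2), ("rz", np.pi / 2)]

def compose(seq):
    # Multiply gates in circuit order (first listed is applied first).
    u = np.eye(2, dtype=complex)
    for name, angle in seq:
        gate = rz(angle) if name == "rz" else rx(angle)
        u = gate @ u
    return u

def equal_up_to_phase(a, b, tol=1e-9):
    # |tr(A† B)| equals the dimension exactly when A = e^{i phi} B.
    return abs(abs(np.trace(a.conj().T @ b)) - a.shape[0]) < tol

print(equal_up_to_phase(compose(translate_h()), H))  # True
```

A real cross compiler does the same check algebraically for every gate in the circuit, and the physical angles and native gates differ per vendor; this just shows why the translation preserves the computation.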

HPCwire: That sounds like a major headache for would-be users of quantum computing, this idea they need to write applications differently, or at least have different compilers to get the most out of a platform.

Raphael Pooser: Well, we do tune a lot of things by hand. However, if you are a user out on the street who is interested in this stuff, you can grab our software suite and write code once and then run it on quite a few different architectures at this point; you can run it on IBM and Rigetti, and on IonQ using their simulator right now because they haven’t put their cloud access up yet. We even have Cirq built in. Cirq is Google’s language. You could, say, write in Qiskit if you want, which is IBM’s language, and then our stack will translate it to Google or Rigetti or any other hardware language you want. We even have support for IBM’s low-level language. IBM has done something very smart; they have enabled access to what we’re calling the quantum control layer of the quantum computer. They call this language OpenPulse. We even have support built in for the quantum control layer. Essentially, if any vendor will give it to us, we’ll build it. So this is actually open source software that’s kind of a product of our work.

HPCwire: Before turning to software issues and your project’s deliverables, could you comment on the competing qubit technologies – superconducting, ion trap, the silicon spin, Microsoft’s topological qubit, etc. – and handicap them in terms of strengths, weaknesses, closeness to practical use? Also, what application areas do you think each is perhaps more suited for?

Raphael Pooser: Another good question. First, going straight to the topological qubit. Yes, Microsoft has been researching this area for a while. They’re rather excited about it and frankly, I am too, because if you can discover a topological qubit then you get around a lot of the problems that all the current qubits have – namely, the physical error rate using current technologies. A topological logical qubit would basically jump you forward by leaps and bounds on the path to fault tolerance. The flip side is that, speaking in terms of what you call a handicap, the topological qubits are further off into the future. They’re definitely not impossible, but there has not yet been a demonstration of a fully functioning topological qubit in the sense of the DiVincenzo criteria, right? You really can’t talk about scalably building a quantum computer with a system until you meet the DiVincenzo criteria. But there’s great promise there, because if we can find topological qubits with low physical error rates, it’s going to be quite a large breakthrough. So that’s several years off in the future.

Guys like me are super excited to be using quantum computers right now. We’ve got the superconducting devices and ion traps, and yes, they do have different applications each system excels at. For superconducting architectures, you really need look no further than the current scientific literature and you’ll see that people are using them extensively for many different application types. One of the most widespread is an application called the Variational Quantum Eigensolver (VQE). This is a method of searching for the ground state of Hamiltonians. As long as you represent the Hamiltonian of the system of interest in a way you can encode on a quantum computer, you can get ground state energies out. One of the big breakthroughs, of course, was using this to calculate ground state energies for chemistry, for molecules, and it was also recently demonstrated for nuclear interactions. Superconducting devices have proven themselves to be quite strong for that. But they’re not limited to that. There are other approaches, such as quantum approximate optimization algorithms and some machine learning protocols. (Link to a good overview of VQE implementation by Talia Gershon of IBM/MIT.)
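As a rough illustration of the VQE idea Pooser describes, the following classical sketch (a toy, not one of the chemistry problems from the interview) minimizes the expectation value of an assumed one-qubit Hamiltonian H = Z + 0.5·X over a single-parameter ansatz state, exactly the hybrid loop VQE runs with a quantum processor supplying the expectation values.

```python
import numpy as np

# Pauli matrices and a toy one-qubit Hamiltonian standing in for a
# chemistry Hamiltonian that has been mapped onto qubits.
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = Z + 0.5 * X

def ansatz(theta):
    # |psi(theta)> = Ry(theta)|0> = [cos(theta/2), sin(theta/2)]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # Expectation value <psi|H|psi>; on hardware this number would come
    # from repeated measurements of the prepared circuit.
    psi = ansatz(theta)
    return psi @ H @ psi

# "Variational" loop: a classical optimizer (here a simple grid search)
# drives the circuit parameter toward the minimum energy.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)

exact = np.linalg.eigvalsh(H)[0]  # exact ground state energy for comparison
print(energy(best), exact)
```

For this tiny Hamiltonian the ansatz is expressive enough that the variational minimum matches the exact ground state energy, about −1.118; real VQE runs trade ansatz depth against the hardware noise Pooser discusses.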

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

If you look at the ion traps, they have very high gate fidelities. Now, they’re a little bit different from the superconducting devices in the sense that ion traps have slower clock speeds but longer coherence times, which means you can do more high-fidelity operations on them before they decohere. That enables you to do certain applications that need very exact computation, so you could attempt time evolution on them. They are also good for analog computation of spin systems; they’ve proven to be very robust for calculating the input-output correlations for large numbers of spins. I’m thinking of a paper from Chris Monroe’s group where they did a simulation of a 53-qubit spin chain. You’re able to get large numbers of qubits with high-fidelity gates between them on ion trap technology.

Where ions and superconductors have something in common is both of them have proven to be interesting platforms for machine learning. So folks have run machine learning algorithms, the same mechanisms, on both platforms. I just want to say that in this NISQ era that while there are large differences in the platform technology, their capabilities in terms of the types of algorithms you can run on them are fairly comparable. In other words it’s too early to really say which is better than the other.

HPCwire: It is interesting to track efforts by various quantum computing technology vendors to weigh in on metrics and benchmarks. IonQ has done this. IBM has perhaps made the most noise pitching its Quantum Volume measure at last year’s APS March meeting. It’s a composite measure with many system-wide facets – gate error rates, decoherence times, qubit connectivity, operating software efficiency, and more – effectively baked into the measure.  

Pooser: I think quantum volume is a good benchmark. Quantum volume will give a sense of how many gates (circuit depth) a given set of qubits (circuit width) can support before decohering. I think it is most useful when combined or correlated with other benchmarks. That is, certain benchmarks look at certain aspects of the machines, and to get a good picture, you need to run multiple benchmarks of different types. I think that since Honeywell recently announced their new device in terms of Quantum Volume, you’re going to see a few companies here and there picking it up going forward. However, in this early stage of quantum computing, it’s important not to try to reduce the machines down to single numbers.
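For readers unfamiliar with the metric, the shape of the quantum volume calculation can be sketched as follows. This is only an illustration of the scoring step: IBM’s full protocol runs many random “square” circuits (depth equal to width n) and requires the measured heavy-output probability to exceed 2/3 with statistical confidence at each width.

```python
def quantum_volume(passed_widths):
    """Illustrative scoring step only. Given the set of circuit widths n
    at which a device passed the square-circuit test, quantum volume is
    2**n for the largest n such that ALL widths up to n passed."""
    n = 0
    while (n + 1) in passed_widths:
        n += 1
    return 2 ** n

# Hypothetical device: passes the test for widths 1-6, fails at 7.
print(quantum_volume({1, 2, 3, 4, 5, 6}))  # 64
```

The exponential form is why a single extra usable qubit-of-depth doubles the quoted number, and also why, as Pooser cautions, one number compresses away which aspect of the machine (fidelity, connectivity, compiler) actually limited it.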

HPCwire: Turning to software, we frequently receive press releases claiming the ability to dramatically improve quantum hardware performance – outcome quality, ease-of-use, etc. – with little background on what’s being measured or how improvements were achieved. That makes it hard for non-experts like me to assess. What’s your sense of the emerging software ecosystem and the accompanying clamor around these efforts?

Raphael Pooser: I totally agree that that clarity is definitely difficult to come by there. It seems that there was a proliferation of quantum computing software stacks in the past few years, and it’s definitely true that you’re not quite sure what to do with them all. There seems to be this tendency to try to get platform agnostic software stacks, because most people believe doing that will garner them the most users and make them the most relevant.

Now, about how these software stacks and tools can supposedly reduce error rates and improve hardware performance: you kind of scratch your head and wonder how a guy sitting halfway across the nation can reduce error rates on hardware he doesn’t even have control over. However, it’s actually possible. Most of the time they are talking about building something called error mitigation into their suite. We certainly do. We build error mitigation into our software and it really does help.

What error mitigation is, in a nutshell, is rooting out the noise that causes errors in quantum computers and trying to correct your data for it. You can either post-process your data or you can change what you’re doing. [For example] if you’re doing like an iterative calculation that uses expectation values – you can actually change what you do when you calculate the expectation value to try to calculate it with less error based on some characterization of the machine.
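One widely used post-processing technique of the kind Pooser describes is zero-noise extrapolation: deliberately amplify the noise (for example by stretching pulses or repeating gates), measure the expectation value at each noise scale, and extrapolate back to the zero-noise limit. The sketch below uses an assumed toy noise model, not data from any real device.

```python
import numpy as np

def noisy_expectation(true_value, noise_scale, decay=0.05):
    # Toy noise model: depolarizing-style noise shrinks the measured
    # expectation value toward zero as the noise scale grows.
    return true_value * (1 - decay * noise_scale)

true_value = 0.80                      # the ideal, noise-free answer
scales = np.array([1.0, 2.0, 3.0])     # 1.0 = the hardware's native noise level
measured = np.array([noisy_expectation(true_value, s) for s in scales])

# Richardson-style linear extrapolation of the measurements to scale 0.
slope, intercept = np.polyfit(scales, measured, 1)
print(intercept)   # close to 0.8, versus the raw reading of 0.76
```

On real hardware the decay is not perfectly linear, so practitioners also fit exponential or higher-order models; the point is that characterizing how the answer degrades with noise lets you correct the data without touching the machine.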

In addition to the software stack companies, there’s companies that specialize in quantum characterization. One example is a company called Quantum Benchmark. These companies specialize in characterizing the quantum computers so that we can find out where the noise comes from, and we can use that knowledge to make our answers better. That’s error mitigation.

Now back to the bigger question about what all these stacks are doing. What do we do with all these software stacks everywhere? It seems like this write-once, run-on-any-backend approach is a popular way to go. And I think it’s definitely a good idea for the industry as a whole because you really don’t know what quantum computing technology is going to win out.

That goes back to your other question about the relative merits and strengths and weaknesses of these various hardware architectures. Part of the answer to that question was, we don’t know yet. We’re in the process of testing all these machines, so we need these write-once, run-maybe-not-everywhere-but-in-most-places type stacks so that we can quickly, and with a lot of agility, test new hardware as it arrives, or existing hardware, and discover what they’re good for. So I support these ideas that companies like Q-CTRL are advancing. Azure (Microsoft) has Q# (“q sharp”). They’re also becoming a multi-backend platform.

HPCwire: You’ve said producing code is one of the Testbed Pathfinder’s goals. Maybe talk a little bit about the software it’s developed. I’m thinking of XACC and QCOR.

A quantum computer produced by the Canadian company D-Wave systems. Image: D-Wave Systems Inc.

Raphael Pooser: Oak Ridge has developed this platform we call XACC, which stands for accelerated cross compiler. We developed our own because we were one of the first to do it; a lot of other companies only started doing it recently. DoE also actually needs its own software stack. We’re happy to use the vendor software stacks, and we do, but DoE also wants its own so that it can have maximum control over it and really know the nuts and bolts of what’s going on under the hood. In general, these are all very good developments. I don’t know what will win out in the end and who will be around 10 years from now, but I think it’s all good stuff.

The fastest way to distinguish XACC and QCOR is to point out that QCOR is really a language, while XACC is, at its core, a cross compilation framework. That is, you will not write computer programs in XACC per se, but XACC will be able to “speak” many different computer languages and compile them to the appropriate machine that you’d like to run the particular algorithm on. QCOR is specifically engineered to make it easy to program quantum computers using methods that are familiar to traditional C++ programmers. Getting a little more technical, QCOR is in fact a library plus extensions that extend the existing C++ language by providing handy functions that enable heterogeneous quantum-classical programming.

HPCwire: Let’s talk about Quantum Supremacy versus Quantum Advantage and the timeline to reach either. Google, of course, created a stir with its claim of demonstrating supremacy in the fall. What’s your sense of the relative importance of these two measures, and how soon can we expect either of them?

Raphael Pooser: Interesting question. The difference between quantum advantage and quantum supremacy, as you noted, is that advantage is actually useful for something whereas supremacy is a demonstration of faster number crunching basically. It’s really hard to pin down when quantum advantage will happen. What I will say is that there are two schools of thought. One school of thought is that you can’t have a quantum advantage without fault tolerance, which means that we need quantum error correction. Now the problem with that is that fully quantum-error-corrected quantum computers are quite far off, at this point probably over a decade away.

There’s another school of thought that says we might be able to gain quantum advantage in this NISQ era if we play our cards right, and that quantum supremacy is a leading indicator that it might be possible. The way that this would work is that you choose a specific application that is of interest, right? Like, say, calculating electron mobility in a molecule or something, and you tailor your quantum computer, maybe even at an analog level, to the application at hand. In other words, it doesn’t have to be a universal machine, but you do a computation that is bona fide faster and more accurate than what the classical machine could do, even without error correction. Error mitigation is believed to play a very big role here because of the prevalence of noise in this era.

So there are the two schools of thought, which really can just be broken down on either side of fault tolerance: do you believe advantage can come before or after fault tolerance? I don’t feel comfortable saying things like “in a couple of years quantum advantage will happen.” Although, you know, the word is that as soon as Google demonstrated supremacy they said “we’re going to have a quantum advantage next in this application,” which I’ve heard might be a random number generator certification and would technically be quantum advantage. So the definition of quantum advantage varies from person to person. Some people might just wave their hands and say, “Bah! Certifying a random number generator, that’s not quantum advantage. I wanted the nuclear bound state calculation with 100 qubits in it. That’s something that you could never compute in a million years, and it’s scientifically useful versus random number generation.” Others might say no, that’s actually useful for something like communication, and that is quantum advantage.

It’s all kind of a moot argument, of course, because Google hasn’t actually done that yet. My personal opinion is that quantum advantage is not [just] a couple years away; it may be more than 10 years away. But if the others are right about not needing fault tolerance for a true quantum advantage, it may be five years away. I’m personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we’re contributing towards finding a quantum advantage for real scientific applications. But if it doesn’t happen before fault tolerance, it won’t necessarily shock me. It’ll just be disappointing.

HPCwire: Thanks for your time!

Brief Raphael Pooser Bio (Source: ORNL)

Dr. Pooser is an expert in continuous variable quantum optics. He leads the quantum sensing team within the quantum information science group. His research interests include quantum computing, neuromorphic computing, and sensing. He currently leads the Quantum Computing Testbed project at ORNL, a large multi-institution collaboration. He has also developed a quantum sensing program from the ground up, based on quantum networks, over a number of years at ORNL. He has been working to demonstrate that continuous variable quantum optics, quantum noise reduction in particular, has important uses in the quantum information field. One of his goals is to show that the quantum control and error correction required in computing applications are directly applicable to quantum sensing efforts. He is also interested in highlighting the practicality of these systems, demonstrating their ease of use and broad applicability. His research model uses quantum sensors as a showcase for the technologies that will enable quantum computing. Dr. Pooser has over 16 years of quantum information science experience, having led the quantum sensing program at ORNL for the past eight. Dr. Pooser publishes in high-impact journals, including Science, Nature, and Physical Review Letters. He previously served as a distinguished Wigner Fellow. He also worked as a postdoctoral fellow in the Laser Cooling and Trapping Group at NIST after receiving his PhD in Engineering Physics from the University of Virginia. He received a B.S. in Physics from New York University, graduating cum laude on an accelerated schedule. Dr. Pooser is active in the community, having served as a spokesperson for United Way and for the Boys and Girls Clubs of the TN Valley on many occasions in addition to volunteer work.


US Senators Propose $32 Billion in Annual AI Spending, but Critics Remain Unconvinced

July 5, 2024

Senate leader, Chuck Schumer, and three colleagues want the US government to spend at least $32 billion annually by 2026 for non-defense related AI systems.  T Read more…

Point and Click HPC: High-Performance Desktops

July 3, 2024

Recently, an interesting paper appeared on Arvix called Use Cases for High-Performance Research Desktops. To be clear, the term desktop in this context does not Read more…

IonQ Plots Path to Commercial (Quantum) Advantage

July 2, 2024

IonQ, the trapped ion quantum computing specialist, delivered a progress report last week firming up 2024/25 product goals and reviewing its technology roadmap. Read more…


Nvidia Economics: Make $5-$7 for Every $1 Spent on GPUs

June 30, 2024

Nvidia is saying that companies could make $5 to $7 for every $1 invested in GPUs over a four-year period. Customers are investing billions in new Nvidia hardwa Read more…

Atos Outlines Plans to Get Acquired, and a Path Forward

May 21, 2024

Atos – via its subsidiary Eviden – is the second major supercomputer maker outside of HPE, while others have largely dropped out. The lack of integrators and Atos' financial turmoil have the HPC market worried. If Atos goes under, HPE will be the only major option for building large-scale systems. Read more…

Everyone Except Nvidia Forms Ultra Accelerator Link (UALink) Consortium

May 30, 2024

Consider the GPU. An island of SIMD greatness that makes light work of matrix math. Originally designed to rapidly paint dots on a computer monitor, it was then Read more…

Comparing NVIDIA A100 and NVIDIA L40S: Which GPU is Ideal for AI and Graphics-Intensive Workloads?

October 30, 2023

With long lead times for the NVIDIA H100 and A100 GPUs, many organizations are looking at the new NVIDIA L40S GPU, which it’s a new GPU optimized for AI and g Read more…

Nvidia’s New Blackwell GPU Can Train AI Models with Trillions of Parameters

March 18, 2024

Nvidia's latest and fastest GPU, codenamed Blackwell, is here and will underpin the company's AI plans this year. The chip offers performance improvements from Read more…


Nvidia Economics: Make $5-$7 for Every $1 Spent on GPUs

June 30, 2024

Nvidia is saying that companies could make $5 to $7 for every $1 invested in GPUs over a four-year period. Customers are investing billions in new Nvidia hardwa Read more…

Nvidia Shipped 3.76 Million Data-center GPUs in 2023, According to Study

June 10, 2024

Nvidia had an explosive 2023 in data-center GPU shipments, which totaled roughly 3.76 million units, according to a study conducted by semiconductor analyst fir Read more…

Some Reasons Why Aurora Didn’t Take First Place in the Top500 List

May 15, 2024

The makers of the Aurora supercomputer, which is housed at the Argonne National Laboratory, gave some reasons why the system didn't make the top spot on the Top Read more…

Nvidia H100: Are 550,000 GPUs Enough for This Year?

August 17, 2023

The GPU Squeeze continues to place a premium on Nvidia H100 GPUs. In a recent Financial Times article, Nvidia reports that it expects to ship 550,000 of its lat Read more…

Leading Solution Providers


AMD Clears Up Messy GPU Roadmap, Upgrades Chips Annually

June 3, 2024

In the world of AI, there's a desperate search for an alternative to Nvidia's GPUs, and AMD is stepping up to the plate. AMD detailed its updated GPU roadmap, w Read more…

Intel’s Next-gen Falcon Shores Coming Out in Late 2025 

April 30, 2024

It's a long wait for customers hanging on for Intel's next-generation GPU, Falcon Shores, which will be released in late 2025.  "Then we have a rich, a very Read more…

Google Announces Sixth-generation AI Chip, a TPU Called Trillium

May 17, 2024

On Tuesday May 14th, Google announced its sixth-generation TPU (tensor processing unit) called Trillium.  The chip, essentially a TPU v6, is the company's l Read more…

Choosing the Right GPU for LLM Inference and Training

December 11, 2023

Accelerating the training and inference processes of deep learning models is crucial for unleashing their true potential and NVIDIA GPUs have emerged as a game- Read more…

IonQ Plots Path to Commercial (Quantum) Advantage

July 2, 2024

IonQ, the trapped ion quantum computing specialist, delivered a progress report last week firming up 2024/25 product goals and reviewing its technology roadmap. Read more…

The NASA Black Hole Plunge

May 7, 2024

We have all thought about it. No one has done it, but now, thanks to HPC, we see what it looks like. Hold on to your feet because NASA has released videos of wh Read more…

Q&A with Nvidia’s Chief of DGX Systems on the DGX-GB200 Rack-scale System

March 27, 2024

Pictures of Nvidia's new flagship mega-server, the DGX GB200, on the GTC show floor got favorable reactions on social media for the sheer amount of computing po Read more…

MLPerf Inference 4.0 Results Showcase GenAI; Nvidia Still Dominates

March 28, 2024

There were no startling surprises in the latest MLPerf Inference benchmark (4.0) results released yesterday. Two new workloads — Llama 2 and Stable Diffusion Read more…

  • arrow
  • Click Here for More Headlines
  • arrow