ORNL’s Raphael Pooser on DoE’s Quantum Testbed Project

By John Russell

March 11, 2020

Quantum computing and quantum information science generally are areas of aggressive research at the Department of Energy. Their promise, of course, is tantalizing – vast computational scale and impenetrable communication, for starters. Depending on how one defines practical utility, a few applications may no longer be just distant visions. At least that's the hope. The most visible sign of that hope, and of worry about falling behind in a global race to practical quantum computing, is the $1.2 billion U.S. National Quantum Initiative Act passed in 2018.

HPCwire recently spoke with Raphael Pooser, PI for DoE’s Quantum Testbed Pathfinder project and a member of Oak Ridge National Laboratory’s Quantum Information Science group, whose work encompasses quantum sensing, quantum communications, and quantum computing. Pooser also leads the quantum sensing team at ORNL. Broadly, DoE’s Quantum Testbed project is a multi-institution effort involving national labs and academia whose mission has two prongs: one – the Quantum Testbed Pathfinder – is intended to assess quantum computing technologies and deliver tools and benchmarks; and the second – the Quantum Testbeds for Science – is intended to provide quantum computing resources to the research community to foster understanding of how to best use quantum computing to advance science.

Part of what's noteworthy here is the project's candid acknowledgement of quantum computing's nascent stage, the so-called NISQ era, in which noisy intermediate-scale quantum computers dominate. The Quantum Testbed program is trying to figure out how to improve and make practical use of NISQ systems while also pursuing fault-tolerant quantum computers. Moreover, the whole quantum computing community is seeking to demonstrate quantum advantage – that is, using a quantum computer to do something practical sufficiently faster (and more economically) than a classical computer to warrant switching to quantum computing for that application.

Raphael Pooser, ORNL

As Pooser told HPCwire, "I'm personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we're contributing towards finding a quantum advantage for real scientific applications. But if it doesn't happen before fault tolerance, it won't necessarily shock me. It'll just be disappointing."

Without doubt there are challenges ahead, but there have also been notable accomplishments. Pooser noted the use of quantum communications to secure voting results, albeit over a short distance, in Vienna, and that some banks use quantum key encryption for short-distance communication. Quantum computing, too, has shown progress, though it remains much further from general practical use. It has been used, for example, in proof-of-concept efforts to calculate ground state energies for a few molecules. Note too that the quantum testbed project is just one, albeit a big one, of many DoE-backed quantum science research efforts.

The ORNL quantum information science efforts emphasize multidisciplinary collaboration. "We are a group of about 20 full-time staff, and have several postdocs, grad students, and interns," said Pooser. "The group members are distributed about evenly over the teams. One thing to note is that the team members pretty much engage in whatever research they are interested in, and are not limited by what team they're on. I do research in all three areas of QIS, for example. Others choose to engage in research solely for quantum computing. We have a very large breadth due to our need to be ready to serve the needs of government agencies as they emerge. For example, quantum computing, though long studied elsewhere, has only recently become a core program within DoE; our group made sure to maintain ORNL's level of expertise in this area over time so that we were able to rise to meet DoE's needs when it expressed them."

Presented here is part one of Pooser’s conversation with HPCwire, which focuses on quantum computing and what the Testbed Pathfinder group is doing. Part two of the interview, which will be published shortly, focuses on quantum information.

The Testbed Pathfinder group is charged with delivering benchmarks, code, and technology assessments. Peer-reviewed papers are a big part of the expected output, 10 to 15 a year, said Pooser, noting, "Those are important because most of them almost come with how-to guides. If you download a paper and you are versed in the state of the art, you can reproduce our work on a quantum computer. You can actually take our work and apply it, at least right now, to the freely available IBM cloud machine." Code too is being made available, such as ORNL-developed XACC, which stands for accelerated cross compiler. Most of the work is accessible through the ORNL quantum information science archive or on GitHub.

The interview installment presented here touches on benchmarking, competing qubit technologies, the emerging software ecosystem, and the quest for quantum advantage. It doesn't dig deeply into basic quantum computing concepts, as those have been covered earlier.

HPCwire: Let’s start with an overview of DoE and ORNL quantum work.

Raphael Pooser: DoE has multiple quantum programs going on. One of the first was this program called Quantum Testbed Pathfinder. This is really about benchmarking quantum computers. The reason for this project is that we need to help DoE understand what quantum computers are capable of within the context of the things DoE is interested in. So we want to understand how quantum computing can help DoE reach its goals, more or less independent of other agencies. It's not as concerned with some of the applications that other agencies might be concerned with. What we're really talking about are fundamental science questions. To do this we need to benchmark quantum systems and, through this process of benchmarking, tell DoE what it is about quantum computers that needs to be improved in the future.

HPCwire: What does benchmarking actually mean in this context and are you using commercially-developed machines such as from IBM, Rigetti, etc.?

Raphael Pooser: Yeah, great question. Quantum computing is in such a nascent stage. What do we even mean by benchmarking? So we are using the commercially available devices. That includes IBM and Rigetti, as well as a company called IonQ, which is an ion trap technology company. We are also working, though not as tightly yet, with Google, [which] has benchmarked its own machine. We are working with Google to benchmark their machines more closely and with an independent mindset. Those are the four companies we're now working with. We're also in talks with various other quantum computing companies that run the gamut from U.S.-based to Canadian- and Australian-based companies.

IBM Q System (IBM photo)

In addition, DoE has its own quantum computing testbed efforts underway. In fact, there's a second part to this program in which two national labs are building quantum computer testbed facilities (Testbeds for Science), which are meant to give folks like me deeper access. By deeper access, what I mean is much closer-to-the-metal access so that we can really stress the quantum computers. Those two systems being built are at Berkeley (Lawrence Berkeley National Laboratory) and Sandia (National Laboratories); they are superconducting and ion trap quantum computers, respectively. Finally, to round it out, we also have optical quantum computers here at Oak Ridge (National Laboratory) which have been used in my project a couple of times. We haven't really gotten around to deeply benchmarking and stressing those. But I think the one-sentence answer to your question is that we are technology agnostic and our goal is to benchmark every quantum computer that we can get our hands on.

HPCwire: You mentioned ion trap and superconducting, which are certainly the quantum computing technologies that have gotten the most attention. What about others, such as Intel's silicon spin-based approach? Are you looking at other technologies?

Raphael Pooser: We do believe we're going to get our hands on some other technologies soon. I can't say exactly what those technologies are due to business considerations for the companies involved. I can tell you without giving anything away that my project in particular, and Oak Ridge more generally, has been in talks with every single company that has a quantum computer in the works, and we're in the process of gaining access to many of them. That doesn't just include Intel. Speaking broadly, the silicon quantum dot-based qubit (Intel) is a very interesting system. Those are hard to benchmark now because access to those systems is limited. They are still more laboratory-based, but we expect that because of companies like Intel, and because of work going on in this field at Sandia and at the University of New South Wales in Australia, these systems are going to become benchmarkable in the future.

The short answer is, we haven't benchmarked any silicon-based qubit systems yet because we don't have access to them, but we know who the players are, and we're in talks with those players.

HPCwire: Back to basics, what exactly does benchmarking mean here? One can think of many things to look at when assessing how these systems perform. What, specifically, does quantum benchmarking involve?

Rigetti quantum processor

Raphael Pooser: That's also a really good question, because the concept of benchmarking in quantum computing, especially at this stage of quantum computing, is different from classical benchmarking, although they do bear similarities. One of the things that we want to measure is performance, and by performance what we're talking about are the resource costs required to get an answer. At the same time, we also want to measure the quality of the answer. So one of the places where classical computing and quantum computing can vary quite dramatically, especially at this stage in the quantum computing life cycle, is in the quality of an answer. The quantum computers you have access to today give rather noisy results. One of our jobs is to quantify to what extent noise affects the quality of the answer you can get from the quantum computer.

The other major component of benchmarking is asking what kind of resources it takes to run this or that interesting problem. Again, these are problems of interest to DoE, so basic science problems in chemistry and nuclear physics and things like that. What we'll do is take applications in chemistry and nuclear physics and convert them into what we consider a benchmark. We consider it a benchmark when we can distill a metric from it. So the metric could be the accuracy, the quality of the solution, or the resources required to get a given level of quality. Look at our papers and you'll see that we'll discuss using a certain number of qubits to do a computation versus a different number of qubits. Or we'll talk about using one particular level of theory versus another level of theory, and say that if you were able to use such and such level of theory you would get a better answer, but because the computer has this much noise, it tempers the quality of your answer by some amount.
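To make the notion of "distilling a metric" concrete, here is a toy sketch of the idea: take repeated noisy energy estimates obtained at different resource levels and reduce them to a single accuracy figure. All numbers are made-up placeholders, not data from the project.

```python
import numpy as np

# Toy sketch: distill a benchmark metric (relative error in a ground-state
# energy) from runs at different resource levels. All numbers are made-up
# placeholders, not real device data.
E_EXACT = -1.137  # hypothetical exact reference energy

runs = {
    # qubits used -> noisy energy estimates from repeated runs
    2: [-1.10, -1.08, -1.12],
    4: [-1.12, -1.13, -1.11],
}

for n_qubits, estimates in runs.items():
    mean_e = np.mean(estimates)
    rel_error = abs(mean_e - E_EXACT) / abs(E_EXACT)
    print(f"{n_qubits} qubits: mean E = {mean_e:.3f}, "
          f"relative error = {rel_error:.1%}")
```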

HPCwire: It's interesting to hear discussion about 'noise' in quantum computing and how its sources vary, spanning manufacturing issues, characteristics of different gate types, and the way you lay out a circuit. They all affect performance. Do I have this idea correct? And can you talk about how you handle noise, and whether that translates into assessing noise for specific quantum circuits and algorithms, since they are so intertwined?

Raphael Pooser: You're hitting on something that's deeply important in the NISQ era of quantum computing. You really do have different gates which are more or less useful for different architectures. It's just as in classical computing back in the days when people used to get very concerned about the compiler optimizations that an Intel compiler would use when compiling benchmarks for Intel processors versus using that same compiler for AMD processors. There used to be quite a bit of wondering about whether a benchmark would have been better on a RISC architecture versus an x86 architecture.

In quantum computing you have to think in a similar way, but more deeply, because the quantum processors can be vastly different when it comes down to how they implement the gates. What we have to do is say, "let's take this algorithm for this application that we've compiled as a benchmark; if we want to run it on a different quantum computing processor, we have to make sure that we make the most efficient implementation in terms of the circuit." In this early era of quantum computing, what you really quickly realize is that [noise forces you to limit] what we call the depth of the circuit (very roughly, the gate sequence execution as measured by time steps).

Google’s Sycamore quantum chip

We're of the view, and I think other people are of the view nowadays even in classical benchmarking, that if a machine has an advantage over another machine – let's say the entangling gate in one quantum computer is more efficient than in some other architecture, in the sense that it has a higher entangling fidelity or can entangle more qubits at a time and allows you to simplify your algorithms – then all the more power to that platform; it should be allowed to exploit that advantage when running the benchmark. So we move forward with the idea that we want to squeeze the most out of every machine possible, and that does mean exactly what you said. You're looking at a circuit that in some cases may be general enough to run on any architecture, but in other cases, as in the case of translating from a superconducting to an ion-based system, we need to translate the gate set a little bit. Luckily, in this era, because circuit depths are so short, this is not an onerous task. We work with the developers of the hardware to do this. We're able to do this because there's not an overabundance of hardware platforms out there and there's not an overabundance of circuit depth.
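As a minimal illustration of what "translating the gate set a little bit" can look like, the following NumPy check verifies a standard textbook identity: a CNOT is equivalent to a CZ sandwiched between Hadamards on the target qubit. A cross-compiler applies rewrites of exactly this flavor when a backend's native two-qubit gate differs from the one in the source circuit.

```python
import numpy as np

# Verify a standard gate-set translation identity:
# CNOT = (I (x) H) . CZ . (I (x) H), with H applied to the target qubit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

translated = np.kron(I2, H) @ CZ @ np.kron(I2, H)
print("Identity holds:", np.allclose(translated, CNOT))  # True
```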

HPCwire: That sounds like a major headache for would-be users of quantum computing, this idea that they need to write applications differently, or at least have different compilers to get the most out of a platform.

Raphael Pooser: Well, we do tune a lot of things by hand. However, if you are a user out on the street who is interested in this stuff, you can grab our software suite, write code once, and then run it on quite a few different architectures at this point; you can run it on IBM and Rigetti, and on IonQ using their simulator right now, because they haven't put their cloud access up yet. We even have Cirq built in. Cirq is Google's language. You could, say, write in Qiskit if you want, which is IBM's language, and then our stack will translate it to Google or Rigetti or any other hardware language you want. We even have support for IBM's low-level language. IBM has done something very smart; they have enabled access to what we're calling the quantum control layer of the quantum computer. OpenPulse is the name for this language. We even have support built in for the quantum control layer. Essentially, if any vendor will give it to us, we'll build it. So this is actually open source software that's a product of our work.
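The write-once idea can be pictured as a small intermediate representation plus per-backend lowering rules. The sketch below is a hypothetical mini-IR for illustration only; it is not XACC's actual API, and the ion-trap rewrite rule is a stand-in with the rotation angles deliberately elided.

```python
# Hypothetical mini-IR illustrating write-once, run-on-many-backends.
# This is NOT XACC's actual API; gate names and rewrite rules are stand-ins.
GENERIC = [("h", 0), ("cx", 0, 1)]  # a Bell-state circuit, generic gate set

def lower_superconducting(gate):
    # Placeholder rule: assume h and cx are already native on this backend.
    return [gate]

def lower_ion_trap(gate):
    # Placeholder rewrite: express cx via a Molmer-Sorensen-style two-qubit
    # gate plus single-qubit rotations (angles deliberately elided).
    if gate[0] == "cx":
        c, t = gate[1], gate[2]
        return [("ry", c), ("ms", c, t), ("rx", c), ("rx", t)]
    return [gate]

BACKENDS = {"superconducting": lower_superconducting,
            "ion_trap": lower_ion_trap}

def lower(circuit, backend):
    """Rewrite a generic circuit into a backend's native gate sequence."""
    native = []
    for gate in circuit:
        native.extend(BACKENDS[backend](gate))
    return native

for name in BACKENDS:
    print(name, "->", lower(GENERIC, name))
```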

HPCwire: Before turning to software issues and your project’s deliverables, could you comment on the competing qubit technologies – superconducting, ion trap, the silicon spin, Microsoft’s topological qubit, etc. – and handicap them in terms of strengths, weaknesses, closeness to practical use? Also, what application areas do you think each is perhaps more suited for?

Raphael Pooser: Another good question. First, going straight to the topological qubit. Yes, Microsoft has been researching this area for a while. They're rather excited about it and frankly, I am too, because if you can discover a topological qubit then you get around a lot of the problems that all the current qubits have – and that is the physical error rate using current technologies. A topological logical qubit would basically jump you forward by leaps and bounds on the path to fault tolerance. The flip side is that, speaking in terms of what you call a handicap, the topological qubits are further off in the future. They're definitely not impossible, but there has not yet been a demonstration of a fully functioning topological qubit, in the sense of the DiVincenzo criteria, right? You really can't talk about building a quantum computer scalably with a system until you meet the DiVincenzo criteria. But there's great promise there, because if we can find topological qubits with low physical error rates, it's going to be quite a large breakthrough. So that's several years off in the future.

Guys like me are super excited to be using quantum computers right now. We've got the superconducting devices and ion traps, and yes, there are different applications each system excels at. For superconducting architectures, you really need look no further than the current scientific literature, and you'll see that people are using them extensively for many different application types. One of the most widespread is an application called the Variational Quantum Eigensolver (VQE). This is a method of searching for the ground state of Hamiltonians. As long as you represent the Hamiltonian of the system of interest in a way that can be encoded on a quantum computer, you can get ground state energies out. One of the big breakthroughs, of course, was using this to calculate ground state energies for chemistry, for molecules, and it was also recently demonstrated for nuclear interactions. Superconducting devices have proven themselves to be quite strong for that. But they're not limited to that. There are other things, such as quantum approximate optimization algorithms and some machine learning protocols. (Link to a good overview of VQE implementation by Talia Gershon of IBM/MIT.)
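For readers who want to see the variational loop VQE is built on, here is a toy sketch with the quantum processor replaced by exact linear algebra: a one-parameter ansatz is optimized classically to minimize the energy of an arbitrary single-qubit Hamiltonian (the coefficients are made up). On real hardware, the `energy` function would instead be estimated from repeated circuit measurements.

```python
import numpy as np
from scipy.optimize import minimize

# Toy VQE loop: minimize <psi(theta)|H|psi(theta)> over a 1-parameter ansatz.
# H is a made-up single-qubit Hamiltonian written in the Pauli basis.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * Z + 0.3 * X  # placeholder coefficients

def energy(theta):
    # Ansatz |psi(theta)> = Ry(theta)|0> = [cos(theta/2), sin(theta/2)]
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])
    return float(psi @ H @ psi)  # expectation value <psi|H|psi>

result = minimize(energy, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE estimate: {result.fun:.6f}   exact ground state: {exact:.6f}")
```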

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

If you look at the ion traps, they have very high gate fidelities. Now, they're a little bit different from the superconducting devices in the sense that ion traps have slower clock speeds but longer coherence times, which means you can do more high-fidelity operations on them before they decohere. That enables you to do certain applications that need very exact computation, so you could attempt time evolution on them. They are also good for analog computation of spin systems; they've proven to be very robust for calculating the input-output correlations for large numbers of spins. I'm thinking of a paper from Chris Monroe's group back in 2017 where they did a 53-qubit simulation of a 53-spin chain. You're able to get large numbers of qubits with high-fidelity gates between them on ion trap technology.
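As a sense of scale for why a roughly 50-spin quantum simulation matters: an exact classical treatment of an n-spin chain needs 2^n amplitudes, which is trivial at n = 4 but hopeless at n = 53, where the state vector alone would hold roughly 10^16 numbers. The sketch below diagonalizes a small transverse-field Ising chain (arbitrary placeholder couplings) and computes ground-state spin-spin correlations, the kind of quantity mentioned above.

```python
import numpy as np
from functools import reduce

# Exact diagonalization of a small transverse-field Ising chain, computing
# ground-state spin-spin correlations. Arbitrary J, h; 4 spins only, since
# memory grows as 2^n (the reason ~50-spin chains need a quantum simulator).
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op(single, site, n):
    """Embed a single-spin operator at position `site` in an n-spin chain."""
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

n, J, h = 4, 1.0, 0.5  # chain length, coupling, transverse field (placeholders)
H = -J * sum(op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1))
H = H - h * sum(op(X, i, n) for i in range(n))

vals, vecs = np.linalg.eigh(H)
ground = vecs[:, 0]  # ground state (lowest eigenvalue)
for j in range(1, n):
    corr = ground @ op(Z, 0, n) @ op(Z, j, n) @ ground
    print(f"<Z0 Z{j}> = {corr:+.3f}")
```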

Where ions and superconductors have something in common is that both of them have proven to be interesting platforms for machine learning. So folks have run machine learning algorithms, the same ones, on both platforms. I just want to say that in this NISQ era, while there are large differences in the platform technology, their capabilities in terms of the types of algorithms you can run on them are fairly comparable. In other words, it's too early to really say which is better than the other.

HPCwire: It is interesting to track efforts by various quantum computing technology vendors to weigh in on metrics and benchmarks. IonQ has done this. IBM has perhaps made the most noise pitching its Quantum Volume measure at last year’s APS March meeting. It’s a composite measure with many system-wide facets – gate error rates, decoherence times, qubit connectivity, operating software efficiency, and more – effectively baked into the measure.  

Raphael Pooser: I think quantum volume is a good benchmark. Quantum volume will give a sense of how many gates (circuit depth) a given set of qubits (circuit width) can support before decohering. I think it is most useful when combined or correlated with other benchmarks. That is, certain benchmarks look at certain aspects of the machines, and to get a good picture, you need to run multiple benchmarks of different types. I think that since Honeywell recently announced their new device in terms of Quantum Volume, you're going to see a few companies here and there picking it up going forward. However, in this early stage of quantum computing, it's important not to try to reduce the machines down to single numbers.
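Mechanically, the Quantum Volume number is assembled as follows: for each circuit width m, find the largest depth d(m) at which the randomized model circuits still pass the heavy-output test; log2(QV) is then the largest min(m, d(m)). A sketch of that final step, assuming hypothetical pass/fail data rather than real measurements:

```python
# Sketch of assembling a Quantum Volume number from (hypothetical) heavy-
# output test results. passing_depth[m] is the deepest width-m model circuit
# that still passed; the data below is made up for illustration.
def quantum_volume(passing_depth):
    # log2(QV) = max over widths m of min(m, d(m))
    return 2 ** max(min(m, d) for m, d in passing_depth.items())

passing_depth = {2: 6, 3: 4, 4: 4, 5: 3}
print("Quantum Volume =", quantum_volume(passing_depth))  # min(4,4)=4 -> 16
```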

HPCwire: Turning to software, we frequently receive press releases claiming the ability to dramatically improve quantum hardware performance – outcome quality, ease of use, etc. – with little background on what's being measured or how improvements were achieved. That makes it hard for non-experts like me to assess. What's your sense of the emerging software ecosystem and the accompanying clamor around these efforts?

Raphael Pooser: I totally agree that clarity is definitely difficult to come by there. There has been a proliferation of quantum computing software stacks in the past few years, and it's definitely true that you're not quite sure what to do with them all. There seems to be this tendency to try to build platform-agnostic software stacks, because most people believe doing that will garner them the most users and make them the most relevant.

Now, about how these software stacks and tools can supposedly reduce error rates and improve hardware performance: you kind of scratch your head and wonder how a guy sitting halfway across the nation can reduce error rates on hardware he doesn't even have control over. However, it's actually possible. Most of the time they are talking about building something into their suite called error mitigation. We certainly do; we build this concept called error mitigation into our software, and it really does help.

What error mitigation is, in a nutshell, is rooting out the noise that causes errors in quantum computers and trying to correct your data for it. You can either post-process your data or you can change what you're doing. [For example] if you're doing an iterative calculation that uses expectation values, you can actually change what you do when you calculate the expectation value, to try to calculate it with less error based on some characterization of the machine.

In addition to the software stack companies, there are companies that specialize in quantum characterization. One example is a company called Quantum Benchmark. These companies specialize in characterizing the quantum computers so that we can find out where the noise comes from, and we can use that knowledge to make our answers better. That's error mitigation.
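One widely used post-processing scheme of the kind Pooser describes is zero-noise extrapolation: run the same circuit at several deliberately amplified noise levels, then extrapolate the measured expectation value back to the zero-noise limit. A minimal sketch, with made-up numbers standing in for device measurements:

```python
import numpy as np

# Zero-noise extrapolation sketch: fit expectation values measured at
# amplified noise levels, then extrapolate to zero noise. Values are made up.
noise_scale = np.array([1.0, 2.0, 3.0])      # e.g., via gate stretching
measured = np.array([-0.92, -0.85, -0.79])   # hypothetical noisy <H> values

# Linear (Richardson-style) fit; the intercept is the zero-noise estimate.
slope, intercept = np.polyfit(noise_scale, measured, deg=1)
print(f"Mitigated (zero-noise) estimate: {intercept:.3f}")
```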

Now, back to the bigger question about what all these stacks are doing. What do we do with all these software stacks everywhere? It seems like this write-once, run-on-any-backend model is a popular way to go, and I think it's definitely a good idea for the industry as a whole, because you really don't know which quantum computing technology is going to win out.

That goes back to your other question about the relative merits and strengths and weaknesses of these various hardware architectures. Part of the answer to that question was, we don't know yet. We're in the process of testing all these machines, so we need these write-once, run-maybe-not-everywhere-but-in-most-places type stacks so that we can quickly, and with a lot of agility, test new hardware as it comes, or existing hardware, and discover what it's good for. So I support these ideas that companies like Q-CTRL are advancing. Azure (Microsoft) has Q# ("q sharp"); they're also becoming a multi-backend platform.

HPCwire: You’ve said producing code is one of the Testbed Pathfinder’s goals. Maybe talk a little bit about the software it’s developed. I’m thinking of XACC and QCOR.

A quantum computer produced by the Canadian company D-Wave Systems. Image: D-Wave Systems Inc.

Raphael Pooser: Oak Ridge has developed this platform we call XACC, which stands for accelerated cross compiler. The reason we developed our own is that we were one of the first to do it; a lot of other companies only started doing it recently. Also, DoE actually needs its own software stack. We're happy to use the vendor software stacks, and we do, but DoE also wants its own so that it can have maximum control over it and really know the nuts and bolts of what's going on under the hood. In general, these are all very good developments. I don't know what will win out in the end and who will be around 10 years from now, but I think it's all good stuff.

The fastest way to distinguish XACC and QCOR is to point out that QCOR is really a language, while XACC is, at its core, a cross-compilation framework. That is, you will not write computer programs in XACC per se, but XACC will be able to "speak" many different computer languages and compile them to the appropriate machine that you'd like to run the particular algorithm on. QCOR is specifically engineered to make it easy to program quantum computers using methods that are familiar to traditional C++ programmers. Getting a little more technical, QCOR is in fact a library plus extensions that extend the existing C++ language by providing handy functions that enable heterogeneous quantum-classical programming.

HPCwire: Let's talk about Quantum Supremacy versus Quantum Advantage and the timeline to reach either. Google, of course, created a stir with its claim of demonstrating quantum supremacy in the fall. What's your sense of the relative importance of these two measures, and how soon can we expect either of them?

Raphael Pooser: Interesting question. The difference between quantum advantage and quantum supremacy, as you noted, is that advantage is actually useful for something, whereas supremacy is a demonstration of faster number crunching, basically. It's really hard to pin down when quantum advantage will happen. What I will say is that there are two schools of thought. One school of thought is that you can't have a quantum advantage without fault tolerance, which means that we need quantum error correction. Now, the problem with that is that fully quantum-error-corrected quantum computers are quite far off, at this point probably over a decade away.

There's another school of thought that says we might be able to gain quantum advantage in this NISQ era if we play our cards right, and that quantum supremacy is a leading indicator that it might be possible. The way this would work is that you choose a specific application that is of interest, right? Like, say, calculating electron mobility in a molecule or something, and you tailor your quantum computer, maybe even at an analog level, to the application at hand. In other words, it doesn't have to be a universal machine, but you do a computation that is bona fide faster and more accurate than a classical machine could do, even without error correction. Error mitigation is believed to play a very big role here because of the prevalence of noise in this era.

So there are the two schools of thought, which really just break down on either side of fault tolerance: do you believe advantage can come before or after fault tolerance? I don't feel comfortable saying things like quantum advantage will happen in a couple of years. Although, you know, the word is that as soon as Google demonstrated supremacy, they said "we're going to have a quantum advantage next in this application," which I've heard might be a random number generator certification and would technically be quantum advantage. So the definition of quantum advantage varies from person to person. Some people might just wave their hands and say, "Bah! Certifying a random number generator, that's not quantum advantage. I wanted the nuclear bound state calculation with 100 qubits in it. That's something that you could never compute in a million years, and it's scientifically useful, versus random number generation." Others might say no, that's actually useful for something like communication, and that is quantum advantage.

It's all kind of a moot argument, of course, because Google hasn't actually done that yet. My personal opinion is that quantum advantage is not [just] a couple of years away; it may be more than 10 years away. But if the others are right about not needing fault tolerance for a true quantum advantage, it may be five years away. I'm personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we're contributing towards finding a quantum advantage for real scientific applications. But if it doesn't happen before fault tolerance, it won't necessarily shock me. It'll just be disappointing.

HPCwire: Thanks for your time!

Brief Raphael Pooser Bio (Source: ORNL)

Dr. Pooser is an expert in continuous variable quantum optics. He leads the quantum sensing team within the quantum information science group. His research interests include quantum computing, neuromorphic computing, and sensing. He currently leads the Quantum Computing Testbed project at ORNL, a large multi-institution collaboration. He has also developed a quantum sensing program from the ground up based on quantum networks over a number of years at ORNL. He has been working to demonstrate that continuous variable quantum optics, quantum noise reduction in particular, has important uses in the quantum information field. One of his goals is to show that the quantum control and error correction required in computing applications are directly applicable to quantum sensing efforts. He is also interested in highlighting the practicality of these systems, demonstrating their ease of use and broad applicability. His research model uses quantum sensors as a showcase for the technologies that will enable quantum computing. Dr. Pooser has over 16 years of quantum information science experience, having led the quantum sensing program at ORNL over the past eight years. Dr. Pooser publishes in high-impact journals, including Science, Nature, and Physical Review Letters. He previously served as a distinguished Wigner Fellow. He also worked as a postdoctoral fellow in the Laser Cooling and Trapping Group at NIST after receiving his PhD in Engineering Physics from the University of Virginia. He received a B.S. in Physics from New York University, graduating cum laude on an accelerated schedule. Dr. Pooser is active in the community, having served as a spokesperson for United Way and for the Boys and Girls Clubs of the TN Valley on many occasions in addition to volunteer work.
