ORNL’s Raphael Pooser on DoE’s Quantum Testbed Project

By John Russell

March 11, 2020

Quantum computing and quantum information science generally are areas of aggressive research at the Department of Energy. Their promise, of course, is tantalizing – vast computational scale and impenetrable communication, for starters. Depending on how one defines practical utility, a few applications may not be just distant visions. At least that's the hope. The most visible sign of that hope, and of the worry about falling behind in a global race to practical quantum computing, is the $1.2 billion U.S. National Quantum Initiative Act passed in 2018.

HPCwire recently spoke with Raphael Pooser, PI for DoE’s Quantum Testbed Pathfinder project and a member of Oak Ridge National Laboratory’s Quantum Information Science group, whose work encompasses quantum sensing, quantum communications, and quantum computing. Pooser also leads the quantum sensing team at ORNL. Broadly, DoE’s Quantum Testbed project is a multi-institution effort involving national labs and academia whose mission has two prongs: one – the Quantum Testbed Pathfinder – is intended to assess quantum computing technologies and deliver tools and benchmarks; and the second – the Quantum Testbeds for Science – is intended to provide quantum computing resources to the research community to foster understanding of how to best use quantum computing to advance science.

Part of what's noteworthy here is the project's candid acknowledgement of quantum computing's nascent stage, the so-called NISQ era in which noisy intermediate-scale quantum computers dominate. The Quantum Testbed program is trying to figure out how to improve and make practical use of NISQ systems while also pursuing fault-tolerant quantum computers. Moreover, the whole quantum computing community is seeking to demonstrate quantum advantage – that is, using a quantum computer to do something practical sufficiently faster (and more economically) than a classical computer to warrant switching to quantum computing for that application.

Raphael Pooser, ORNL

As Pooser told HPCwire, "I'm personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we're contributing towards finding a quantum advantage for real scientific applications. But if it doesn't happen before fault tolerance, it won't necessarily shock me. It'll just be disappointing."

Without doubt there are challenges ahead, but there have also been notable accomplishments. Pooser noted the use of quantum communications to secure voting results, albeit over a short distance, in Vienna, and that some banks use quantum key encryption for short-distance communication. Quantum computing, too, has shown progress, though it remains much further from general practical use. It has been used, for example, in proof-of-concept efforts to calculate ground state energies for a few molecules. Note too that the quantum testbed project is just one, although a big one, of many DoE-backed quantum science research efforts.

The ORNL quantum information science efforts emphasize multidiscipline collaboration. “We are a group of about 20 full time staff, and have several postdocs, grad students, and interns,” said Pooser. “The group members are distributed about evenly over the teams. One thing to note is that the team members pretty much engage in whatever research they are interested in, and are not limited by what team they’re on. I do research in all three areas of QIS, for example. Others choose to engage in research solely for quantum computing. We have a very large breadth due to our need to be ready to serve the needs of government agencies as they emerge. For example, quantum computing, though long studied elsewhere, has only recently become a core program within DOE; our group made sure to maintain ORNL’s level of expertise in this area over time so that we were able to rise to meet DoE’s needs when it expressed them.”

Presented here is part one of Pooser’s conversation with HPCwire, which focuses on quantum computing and what the Testbed Pathfinder group is doing. Part two of the interview, which will be published shortly, focuses on quantum information.

The Testbed Pathfinder group is charged with delivering benchmarks, code, and technology assessment. Peer-reviewed papers are a big part of the expected output, 10 to 15 a year, said Pooser, noting, "Those are important because most of them almost come with how-to guides. If you download a paper and you are versed in the state of the art, you can reproduce our work on a quantum computer. You can actually take our work and apply it, at least right now, to the freely available IBM cloud machine." Code too is being made available, such as ORNL-developed XACC, which stands for accelerated cross compiler. Most of the work is accessible through the ORNL quantum information science archive or on GitHub.

The interview installment presented here touches on benchmarking, competing qubit technologies, the emerging software ecosystem, and the quest for quantum advantage. Also, it doesn’t dig deeply into basic quantum computing concepts as they have been covered earlier.

HPCwire: Let’s start with an overview of DoE and ORNL quantum work.

Raphael Pooser: DoE has multiple quantum programs going on. One of the first was this program called Quantum Testbed Pathfinder. This is really about benchmarking quantum computers. The reason for this project is that we need to help DoE understand what quantum computers are capable of within the context of the things DoE is interested in. So we want to understand how quantum computing can help DoE reach its goals, more or less independent of other agencies. It's not as concerned with some of the applications that other agencies might be concerned with. What we're really talking about are fundamental science questions. To do this we need to benchmark quantum systems, and through this process of benchmarking tell DoE what it is about quantum computers that needs to be improved in the future.

HPCwire: What does benchmarking actually mean in this context and are you using commercially-developed machines such as from IBM, Rigetti, etc.?

Raphael Pooser: Yeah, great question. Quantum computing is in such a nascent stage. What do we even mean by benchmarking? So we are using the commercially available devices. That includes IBM and Rigetti, and a company called IonQ, which is an ion trap technology company. We are also working, though not as tightly yet, with Google, [which] has benchmarked its own machine. We are working with Google to benchmark their machines more closely and with an independent mindset. Those are the four companies we're now working with. We're also in talks with various other quantum computing companies that run the gamut from US-based to Canadian and Australian-based companies.

IBM Q System (IBM photo)

In addition, DoE has its own quantum computing testbed efforts underway. In fact, there's a second part to this program in which two national labs are building quantum computer testbed facilities (Testbeds for Science), which are meant to give folks like me deeper access. By deeper access, what I mean is much closer-to-the-metal access so that we can really stress the quantum computers. Those two systems being built are at Berkeley (Lawrence Berkeley National Laboratory) and Sandia (National Laboratories). Those are superconducting quantum computers and ion trap quantum computers, respectively. Finally, to round it out, we also have optical quantum computers here at Oak Ridge (National Laboratory), which have been used in my project a couple of times. We haven't yet gotten around to deeply benchmarking and stressing those. But I think the one-sentence answer to your question is that we are technology agnostic and our goal is to benchmark every quantum computer that we can get our hands on.

HPCwire: You mentioned ion trap and superconducting, which are certainly the quantum computing technologies that have gotten the most attention. What about others, such as Intel's silicon spin-based approach? Are you looking at other technologies?

Raphael Pooser: We do believe we're going to get our hands on some other technologies soon. I can't say exactly what those technologies are due to business considerations for the companies involved. I can tell you without giving anything away that my project in particular, and Oak Ridge more generally, have been in talks with every single company that has a quantum computer in the works, and we're in the process of gaining access to many of them. That doesn't just include Intel. Speaking broadly, the silicon quantum dot-based qubit (Intel) is a very interesting system. Those are hard to benchmark now because access to those systems is limited. They are still more laboratory-based, but we expect that because of companies like Intel, and because of work going on in this field at Sandia and at the University of New South Wales in Australia, these systems are going to become benchmarkable in the future.

The short answer is, we haven’t benchmarked any silicon-based qubit-based systems yet because we don’t have access to them, but we know who the players are, and we’re in talks with those players.

HPCwire: Back to basics, what exactly does benchmarking mean here? One can think of many things to look at when assessing how these systems perform. What exactly does quantum benchmarking involve?

Rigetti quantum processor

Raphael Pooser: That's also a really good question, because the concept of benchmarking in quantum computing, especially at this stage of quantum computing, is different from classical benchmarking, although they do bear similarities. One of the things that we want to measure is performance, and by performance what we're talking about are the resource costs required to get an answer. At the same time, we also want to measure the quality of the answer. So one of the places where classical computing and quantum computing can vary quite dramatically, especially at this stage in the quantum computing life cycle, is in the quality of an answer. The quantum computers you have access to today give rather noisy results. One of our jobs is to quantify to what extent noise affects the quality of the answer you can get from the quantum computer.

The other major component of benchmarking is asking what kind of resources it takes to run this or that interesting problem. Again, these are problems of interest to DoE, so basic science problems in chemistry and nuclear physics and things like that. What we'll do is take applications in chemistry and nuclear physics and convert them into what we consider a benchmark. We consider it a benchmark when we can distill a metric from it. So the metric could be the accuracy, the quality of the solution, or the resources required to get a given level of quality. Look at our papers and you'll see that we'll discuss using a certain number of qubits to do a computation versus a different number of qubits. Or we'll talk about using one particular level of theory versus another level of theory, and say that if you were able to use such and such level of theory you would get a better answer, but because the computer has this much noise, it tempers the quality of your answer by some amount.
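As a toy illustration of how an application run can be distilled into a metric, the sketch below computes a relative-error score from simulated noisy energy estimates. The reference energy and noise model are invented for the demo; this is not the project's actual benchmark code.

```python
import numpy as np

E_EXACT = -2.904  # assumed reference ground-state energy for the demo

def relative_error(samples):
    """Benchmark metric: |mean estimate - exact| / |exact|."""
    return abs(np.mean(samples) - E_EXACT) / abs(E_EXACT)

rng = np.random.default_rng(42)
# Simulate device runs: shot noise plus a systematic offset from decoherence.
noisy_runs = E_EXACT + 0.05 + rng.normal(0.0, 0.02, size=100)
score = relative_error(noisy_runs)
print(f"relative error: {score:.3f}")
```

A real benchmark would sweep this score over qubit counts or levels of theory, which is exactly the kind of comparison described above.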

HPCwire: It's interesting to hear discussion about 'noise' in quantum computing and how its sources vary, spanning manufacturing issues to characteristics of different gate types to the way you lay out a circuit. They all affect performance. Do I have that right? And can you talk about how you handle noise, and whether that translates into assessing noise for specific quantum circuits and algorithms, since they are so intertwined?

Raphael Pooser: You're hitting on something that's deeply important in the NISQ era of quantum computing. You really do have different gates which are more or less useful for different architectures. It's just as in classical computing back in the days when people used to get very concerned about the compiler optimizations that an Intel compiler would use when compiling benchmarks for Intel processors versus using that same compiler for AMD processors. There used to be quite a bit of wondering about whether a benchmark would have been better on a RISC architecture versus an x86 architecture.

In quantum computing you have to think in a similar way, but more deeply, because the quantum processors can be vastly different when it comes down to how they implement the gates. What we have to do is say, "let's take this algorithm for this application that we've compiled as a benchmark. If we want to run it on a different quantum computing processor, we have to make sure that we make the most efficient implementation in terms of the circuit." In this early era of quantum computing, what you really quickly realize is that [noise] forces you to limit what we call the depth of the circuit (very roughly, the length of the gate sequence as measured in time steps).

Google’s Sycamore quantum chip

We're of the view, and I think other people are of the view nowadays even in classical benchmarking, that if a machine has an advantage over another machine – let's say the entangling gate in one quantum computer is more efficient than in some other architecture, in the sense that it has a higher entangling fidelity or can entangle more qubits at a time and allows you to simplify your algorithms – then all the more power to that platform; it should be allowed to exploit that advantage when running the benchmark. So we move forward with the idea that we want to squeeze the most out of every machine possible, and that does mean exactly what you said. You're looking at a circuit that in some cases may be general enough to run on any architecture, but in other cases, as in the case of translating from a superconducting to an ion-based system, we need to translate the gate set a little bit. Luckily, in this era, because circuit depths are so short, this is not an onerous task. We work with the developers of the hardware to do this. We're able to do this because there's not an overabundance of hardware platforms out there and there's not an overabundance of circuit depth.
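As a small, self-contained illustration of this kind of gate-set translation (not the project's actual tooling), the following NumPy sketch re-expresses a CNOT, native to many superconducting devices, in terms of the Mølmer-Sørensen XX entangling gate that ion traps implement natively, and checks the equivalence numerically up to a global phase. Conventions assumed: R(theta) = exp(-i·theta·P/2) and XX(t) = exp(-i·t·X⊗X).

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def rot(P, theta):
    """Single-qubit rotation exp(-i*theta*P/2) for a Pauli matrix P."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * P

# Molmer-Sorensen entangling gate XX(pi/4) = exp(-i*(pi/4)*X(x)X).
XXgate = np.cos(np.pi / 4) * np.eye(4) - 1j * np.sin(np.pi / 4) * np.kron(X, X)

# Time-ordered sequence: Ry(pi/2) on control, then XX(pi/4), then
# Rx(-pi/2) on both qubits, then Ry(-pi/2) on control
# (matrix product reads right to left).
U = (np.kron(rot(Y, -np.pi / 2), I2)
     @ np.kron(rot(X, -np.pi / 2), rot(X, -np.pi / 2))
     @ XXgate
     @ np.kron(rot(Y, np.pi / 2), I2))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Equal up to a global phase of exp(i*pi/4):
print(np.max(np.abs(U * np.exp(-1j * np.pi / 4) - CNOT)))  # ~0
```

Because circuit depths are short in this era, translations like this one stay tractable, which is the point made above.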

HPCwire: That sounds like a major headache for would-be users of quantum computing, this idea they need to write applications differently, or at least have different compilers to get the most out of a platform.

Raphael Pooser: Well, we do tune a lot of things by hand. However, if you are a user out on the street who is interested in this stuff, you can grab our software suite, write code once, and then run it on quite a few different architectures at this point: you can run it on IBM, Rigetti, and IonQ (using their simulator right now, because they haven't put their cloud access up yet). We even have Cirq built in. Cirq is Google's language. You could, say, write in Qiskit if you want, which is IBM's language, and then our stack will translate it to Google or Rigetti or any other hardware language you want. We even have support for IBM's low-level language. IBM has done something very smart; they have enabled access to what we're calling the quantum control layer of the quantum computer. They call this language OpenPulse. We have support built in for the quantum control layer as well. Essentially, if any vendor will give it to us, we'll build it. So this is actually open source software that's a product of our work.

HPCwire: Before turning to software issues and your project’s deliverables, could you comment on the competing qubit technologies – superconducting, ion trap, the silicon spin, Microsoft’s topological qubit, etc. – and handicap them in terms of strengths, weaknesses, closeness to practical use? Also, what application areas do you think each is perhaps more suited for?

Raphael Pooser: Another good question. First, going straight to the topological qubit. Yes, Microsoft has been researching this area for a while. They're rather excited about it, and frankly, I am too, because if you can discover a topological qubit then you get around a lot of the problems that all the current qubits have – namely, the physical error rate of current technologies. A topological logical qubit would basically jump you forward by leaps and bounds on the path to fault tolerance. The flip side is that, speaking in terms of what you call a handicap, topological qubits are further off into the future. They're definitely not impossible, but there has not yet been a demonstration of a fully functioning topological qubit, in the sense of the DiVincenzo criteria, right? You really can't talk about scalably building a quantum computer with a system until it meets the DiVincenzo criteria. But there's great promise there, because if we can find topological qubits with low physical error rates, it's going to be quite a large breakthrough. So that's several years off in the future.

Guys like me are super excited about using quantum computers right now. We've got the superconducting devices and ion traps, and yes, there are different applications each system excels at. For superconducting architectures, you really need look no further than the current scientific literature and you'll see that people are using them extensively for many different application types. One of the most widespread is an application called the Variational Quantum Eigensolver (VQE). This is a method of searching for the ground state of Hamiltonians. As long as you can represent the Hamiltonian of the system of interest in a way that can be encoded on a quantum computer, you can get ground state energies out. One of the big breakthroughs, of course, was using this to calculate ground state energies for chemistry, for molecules, and it was also recently demonstrated for nuclear interactions. Superconducting devices have proven themselves to be quite strong for that. But they're not limited to that. There are other things called quantum approximate optimization algorithms and some machine learning protocols. (Link to a good overview of VQE implementation by Talia Gershon of IBM/MIT.)
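For readers who want a concrete feel for VQE, here is a deliberately tiny, purely classical sketch: a one-parameter ansatz and a single-qubit Hamiltonian, both invented for illustration (real chemistry runs use many qubits, a hardware backend, and a proper optimizer), with a grid scan standing in for the classical minimization loop.

```python
import numpy as np

# Toy Hamiltonian H = 0.5*Z + 0.3*X (coefficients chosen arbitrarily).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    """One-parameter trial state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta):
    """Expectation value <psi(theta)| H |psi(theta)>."""
    psi = ansatz(theta)
    return np.real(psi.conj() @ H @ psi)

# Grid scan stands in for the classical optimizer in the VQE loop.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)
exact = np.min(np.linalg.eigvalsh(H))
print(f"VQE estimate: {energy(best):.4f}, exact: {exact:.4f}")
```

On hardware, `energy` would be estimated from repeated measurements of a real circuit, which is where the noise issues discussed earlier enter.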

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

If you look at the ion traps, they have very high gate fidelities. Now, they're a little bit different from the superconducting devices in the sense that ion traps have slower clock speeds but longer coherence times, which means you can do more high-fidelity operations on them before they decohere. That enables you to do certain applications that need very exact computation, so you could attempt time evolution on them. They are also good for analog computation of spin systems; they've proven to be very robust for calculating the input-output correlations for large numbers of spins. I'm thinking of a paper from Chris Monroe's group where they did a 53-qubit simulation of a spin chain. You're able to get large numbers of qubits with high-fidelity gates between them on ion trap technology.

Where ions and superconductors have something in common is that both of them have proven to be interesting platforms for machine learning. So folks have run the same machine learning algorithms on both platforms. I just want to say that in this NISQ era, while there are large differences in the platform technology, their capabilities in terms of the types of algorithms you can run on them are fairly comparable. In other words, it's too early to really say which is better than the other.

HPCwire: It is interesting to track efforts by various quantum computing technology vendors to weigh in on metrics and benchmarks. IonQ has done this. IBM has perhaps made the most noise pitching its Quantum Volume measure at last year’s APS March meeting. It’s a composite measure with many system-wide facets – gate error rates, decoherence times, qubit connectivity, operating software efficiency, and more – effectively baked into the measure.  

Pooser: I think quantum volume is a good benchmark. Quantum volume will give a sense of how many gates (circuit depth) a given set of qubits (circuit width) can support before decohering. I think it is most useful when combined or correlated with other benchmarks. That is, certain benchmarks look at certain aspects of the machines, and to get a good picture, you need to run multiple benchmarks of different types. I think that since Honeywell recently announced their new device in terms of Quantum Volume, you're going to see a few companies here and there picking it up going forward. However, in this early stage of quantum computing, it's important not to try to reduce the machines down to single numbers.
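For intuition, log2 of the Quantum Volume is defined as the largest n for which "square" model circuits, n qubits wide and n layers deep, still pass a heavy-output success test on the device. A minimal sketch of that reduction, with a wholly invented pass/fail table standing in for real measurements:

```python
def quantum_volume(passed):
    """passed[(width, depth)] -> True if heavy-output success exceeded 2/3.

    Returns 2**n for the largest n where the n-by-n square circuit passed,
    or 1 if no square circuit passed.
    """
    n = 0
    k = 1
    while passed.get((k, k), False):
        n = k
        k += 1
    return 2 ** n if n else 1

# Hypothetical device results: square circuits pass up to size 4.
results = {(k, k): k <= 4 for k in range(1, 8)}
print(quantum_volume(results))  # 16
```

Collapsing width, depth, fidelity, and connectivity into one number is exactly the simplification Pooser cautions about, which is why he pairs it with other benchmarks.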

HPCwire: Turning to software, we frequently receive press releases claiming to be able to dramatically improve quantum hardware performance – outcome quality, ease-of-use, etc. – with little background on what's being measured or how improvements were achieved. That makes it hard for non-experts like me to assess. What's your sense of the emerging software ecosystem and the accompanying clamor around these efforts?

Raphael Pooser: I totally agree that clarity is definitely difficult to come by there. There has been a proliferation of quantum computing software stacks in the past few years, and it's definitely true that you're not quite sure what to do with them all. There seems to be a tendency to build platform-agnostic software stacks, because most people believe doing that will garner them the most users and make them the most relevant.

Now, about how these software stacks and tools can supposedly reduce error rates and improve hardware performance: you kind of scratch your head and wonder how a guy sitting halfway across the nation can reduce error rates on hardware he doesn't even have control over. However, it's actually possible. Most of the time they are talking about building something into their suite called error mitigation. We certainly do. We build this concept called error mitigation into our software and it really does help.

What error mitigation is, in a nutshell, is rooting out the noise that causes errors in quantum computers and trying to correct your data for it. You can either post-process your data or you can change what you’re doing. [For example] if you’re doing like an iterative calculation that uses expectation values – you can actually change what you do when you calculate the expectation value to try to calculate it with less error based on some characterization of the machine.
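One widely used error-mitigation technique of this post-processing kind is zero-noise extrapolation: run the same circuit at deliberately amplified noise levels, then extrapolate the measured expectation value back to the zero-noise limit. A hedged sketch with synthetic data (the linear noise response and the numbers are assumptions for the demo, not real device readings):

```python
import numpy as np

TRUE_VALUE = 0.90                      # ideal expectation value (assumed)
scales = np.array([1.0, 2.0, 3.0])     # noise amplification factors
measured = TRUE_VALUE - 0.12 * scales  # synthetic noisy readings

# Fit a line in the noise scale and evaluate it at scale = 0.
slope, intercept = np.polyfit(scales, measured, 1)
print(f"raw at scale 1: {measured[0]:.3f}, mitigated: {intercept:.3f}")
```

In practice the extrapolation model is informed by exactly the kind of device characterization mentioned below, and higher-order fits are used when the noise response isn't linear.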

In addition to the software stack companies, there’s companies that specialize in quantum characterization. One example is a company called Quantum Benchmark. These companies specialize in characterizing the quantum computers so that we can find out where the noise comes from, and we can use that knowledge to make our answers better. That’s error mitigation.

Now, back to the bigger question about what all these stacks are doing. What do we do with all these software stacks everywhere? It seems like this write-once, run-on-any-backend approach is a popular way to go, and I think it's definitely a good idea for the industry as a whole, because you really don't know which quantum computing technology is going to win out.

That goes back to your other question about the relative merits and strengths and weaknesses of these various hardware architectures. Part of the answer to that question was, we don't know yet. We're in the process of testing all these machines, so we need these write-once, run-maybe-not-everywhere-but-in-most-places type stacks so that we can quickly, and with a lot of agility, test new hardware as it comes, or existing hardware, and discover what they're good for. So I support these ideas that companies like Q-CTRL are advancing. Azure (Microsoft) has Q# ("q sharp"). They're also becoming a multi-backend platform.

HPCwire: You’ve said producing code is one of the Testbed Pathfinder’s goals. Maybe talk a little bit about the software it’s developed. I’m thinking of XACC and QCOR.

A quantum computer produced by the Canadian company D-Wave systems. Image: D-Wave Systems Inc.

Raphael Pooser: Oak Ridge has developed this platform we call XACC, which stands for accelerated cross compiler. The reason we developed our own is that we were one of the first to do it; a lot of other companies only started doing it recently. Also, DoE actually needs its own software stack. We're happy to use the vendor software stacks, and we do, but DoE also wants its own so that it can have maximum control over it and really know the nuts and bolts of what's going on under the hood. In general, these are all very good developments. I don't know what will win out in the end or who will be around 10 years from now, but I think it's all good stuff.

The fastest way to distinguish XACC and QCOR is to point out that QCOR is really a language, while XACC is, at its core, a cross compilation framework. That is, you will not write computer programs in XACC per se, but XACC will be able to "speak" many different computer languages and compile them to the appropriate machine that you'd like to run the particular algorithm on. QCOR is specifically engineered to make it easy to program quantum computers using methods that are familiar to traditional C++ programmers. Getting a little more technical, QCOR is in fact a library plus extensions that extend the existing C++ language by providing handy functions that enable heterogeneous quantum-classical programming.

HPCwire: Let's talk about Quantum Supremacy versus Quantum Advantage and the timeline to reach either. Google, of course, created a stir with its claim of demonstrating quantum supremacy in the fall. What's your sense of the relative importance of these two measures, and how soon can we expect either of them?

Raphael Pooser: Interesting question. The difference between quantum advantage and quantum supremacy, as you noted, is that advantage is actually useful for something, whereas supremacy is basically a demonstration of faster number crunching. It's really hard to pin down when quantum advantage will happen. What I will say is that there are two schools of thought. One school of thought is that you can't have a quantum advantage without fault tolerance, which means that we need quantum error correction. Now, the problem with that is that fully quantum-error-corrected quantum computers are quite far off, at this point probably over a decade away.

There's another school of thought that says we might be able to gain quantum advantage in this NISQ era if we play our cards right, and that quantum supremacy is a leading indicator that it might be possible. The way this would work is that you choose a specific application that is of interest, right? Like, say, calculating electron mobility in a molecule or something, and you tailor your quantum computer, maybe even at an analog level, to the application at hand. In other words, it doesn't have to be a universal machine, but you do a computation that is bona fide faster and more accurate than a classical machine could do, even without error correction. Error mitigation is believed to play a very big role here because of the prevalence of noise in this era.

So there are the two schools of thought, which really can just be broken down on either side of fault tolerance. Do you believe advantage can come before or after fault tolerance? I don't feel comfortable saying things like, in a couple of years quantum advantage will happen. Although, you know, the word is that as soon as Google demonstrated supremacy they said "we're going to have a quantum advantage next in this application," which I've heard might be a random number generator certification and would technically be quantum advantage. So the definition of quantum advantage varies from person to person. Some people might just wave their hands and say, "Bah! Certifying a random number generator, that's not quantum advantage. I wanted the nuclear bound state calculation with 100 qubits in it. That's something that you could never compute in a million years, and it's scientifically useful, versus random number generation." Others might say no, that's actually useful for something like communication, and that is quantum advantage.

It's all kind of a moot argument, of course, because Google hasn't actually done that yet. My personal opinion is that quantum advantage is not [just] a couple of years away; it may be more than 10 years away. But if the others are right about not needing fault tolerance for a true quantum advantage, it may be five years away. I'm personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we're contributing towards finding a quantum advantage for real scientific applications. But if it doesn't happen before fault tolerance, it won't necessarily shock me. It'll just be disappointing.

HPCwire: Thanks for your time!

Brief Raphael Pooser Bio (Source: ORNL)

Dr. Pooser is an expert in continuous variable quantum optics. He leads the quantum sensing team within the quantum information science group. His research interests include quantum computing, neuromorphic computing, and sensing. He currently leads the Quantum Computing Testbed project at ORNL, a large multi-institution collaboration. He has also developed a quantum sensing program from the ground up based on quantum networks over a number of years at ORNL. He has been working to demonstrate that continuous variable quantum optics, quantum noise reduction in particular, has important uses in the quantum information field. One of his goals is to show that the quantum control and error correction required in computing applications are directly applicable to quantum sensing efforts. He is also interested in highlighting the practicality of these systems, demonstrating their ease of use and broad applicability. His research model uses quantum sensors as a showcase for the technologies that will enable quantum computing. Dr. Pooser has over 16 years of quantum information science experience, having led the quantum sensing program at ORNL over the past eight. Dr. Pooser publishes in high-impact journals, including Science, Nature, and Physical Review Letters. He previously served as a distinguished Wigner Fellow. He also worked as a postdoctoral fellow in the Laser Cooling and Trapping Group at NIST after receiving his PhD in Engineering Physics from the University of Virginia. He received a B.S. in Physics from New York University, graduating cum laude on an accelerated schedule. Dr. Pooser is active in the community, having served as a spokesperson for United Way and for the Boys and Girls Clubs of the TN Valley on many occasions in addition to volunteer work.


Meta’s Massive New AI Supercomputer Will Be ‘World’s Fastest’

January 24, 2022

Fresh off its rebrand last October, Meta (née Facebook) is putting muscle behind its vision of a metaversal future with a massive new AI supercomputer called t Read more…

IBM Watson Health Finally Sold by IBM After 11 Months of Rumors

January 21, 2022

IBM has sold its underachieving IBM Watson Health unit for an undisclosed price tag to a global investment firm after almost a year’s worth of rumors that sai Read more…

Supercomputer Analysis Shows the Atmospheric Reach of the Tonga Eruption

January 21, 2022

On Saturday, an enormous eruption on the volcanic islands of Hunga Tonga and Hunga Haʻapai shook the Pacific Ocean. The explosion, which could be heard six tho Read more…

NSB Issues US State of Science and Engineering 2022 Report

January 20, 2022

This week the National Science Board released its biannual U.S. State of Science and Engineering 2022 report, as required by the NSF Act. Broadly, the report presents a near-term view of S&E based mostly on 2019 data. To a large extent, this year’s edition echoes trends from the last few reports. The U.S. is still a world leader in R&D spending and S&E education... Read more…

Multiverse Targets ‘Quantum Computing for the Masses’

January 19, 2022

The race to deliver quantum computing solutions that shield users from the underlying complexity of quantum computing is heating up quickly. One example is Multiverse Computing, a European company, which today launched the second financial services product in its Singularity product group. The new offering, Fair Price, “delivers a higher accuracy in fair price calculations for financial... Read more…

IonQ Is First Quantum Startup to Go Public; Will It be First to Deliver Profits?

November 3, 2021

On October 1 of this year, IonQ became the first pure-play quantum computing start-up to go public. At this writing, the stock (NYSE: IONQ) was around $15 and its market capitalization was roughly $2.89 billion. Co-founder and chief scientist Chris Monroe says it was fun to have a few of the company’s roughly 100 employees travel to New York to ring the opening bell of the New York Stock... Read more…

US Closes in on Exascale: Frontier Installation Is Underway

September 29, 2021

At the Advanced Scientific Computing Advisory Committee (ASCAC) meeting, held by Zoom this week (Sept. 29-30), it was revealed that the Frontier supercomputer is currently being installed at Oak Ridge National Laboratory in Oak Ridge, Tenn. The staff at the Oak Ridge Leadership... Read more…

AMD Launches Milan-X CPU with 3D V-Cache and Multichip Instinct MI200 GPU

November 8, 2021

At a virtual event this morning, AMD CEO Lisa Su unveiled the company’s latest and much-anticipated server products: the new Milan-X CPU, which leverages AMD’s new 3D V-Cache technology; and its new Instinct MI200 GPU, which provides up to 220 compute units across two Infinity Fabric-connected dies, delivering an astounding 47.9 peak double-precision teraflops. “We're in a high-performance computing megacycle, driven by the growing need to deploy additional compute performance... Read more…

Intel Reorgs HPC Group, Creates Two ‘Super Compute’ Groups

October 15, 2021

Following on changes made in June that moved Intel’s HPC unit out of the Data Platform Group and into the newly created Accelerated Computing Systems and Graphics (AXG) business unit, led by Raja Koduri, Intel is making further updates to the HPC group and announcing... Read more…

Nvidia Buys HPC Cluster Management Company Bright Computing

January 10, 2022

Graphics chip powerhouse Nvidia today announced that it has acquired HPC cluster management company Bright Computing for an undisclosed sum. Unlike Nvidia’s bid to purchase semiconductor IP company Arm, which has been stymied by regulatory challenges, the Bright deal is a straightforward acquisition that aims to expand... Read more…

D-Wave Embraces Gate-Based Quantum Computing; Charts Path Forward

October 21, 2021

Earlier this month D-Wave Systems, the quantum computing pioneer that has long championed quantum annealing-based quantum computing (and sometimes taken heat fo Read more…

Killer Instinct: AMD’s Multi-Chip MI200 GPU Readies for a Major Global Debut

October 21, 2021

AMD’s next-generation supercomputer GPU is on its way – and by all appearances, it’s about to make a name for itself. The AMD Radeon Instinct MI200 GPU (a successor to the MI100) will, over the next year, begin to power three massive systems on three continents: the United States’ exascale Frontier system; the European Union’s pre-exascale LUMI system; and Australia’s petascale Setonix system. Read more…

Three Chinese Exascale Systems Detailed at SC21: Two Operational and One Delayed

November 24, 2021

Details about two previously rumored Chinese exascale systems came to light during last week’s SC21 proceedings. Asked about these systems during the Top500 media briefing on Monday, Nov. 15, list author and co-founder Jack Dongarra indicated he was aware of some very impressive results, but withheld comment when asked directly if he had... Read more…

Leading Solution Providers

Contributors

Lessons from LLVM: An SC21 Fireside Chat with Chris Lattner

December 27, 2021

Today, the LLVM compiler infrastructure world is essentially inescapable in HPC. But back in the 2000 timeframe, LLVM (low level virtual machine) was just getting its start as a new way of thinking about how to overcome shortcomings in the Java Virtual Machine. At the time, Chris Lattner was a graduate student of... Read more…

2021 Gordon Bell Prize Goes to Exascale-Powered Quantum Supremacy Challenge

November 18, 2021

Today at the hybrid virtual/in-person SC21 conference, the organizers announced the winners of the 2021 ACM Gordon Bell Prize: a team of Chinese researchers leveraging the new exascale Sunway system to simulate quantum circuits. The Gordon Bell Prize, which comes with an award of $10,000 courtesy of HPC pioneer Gordon Bell, is awarded annually... Read more…

Meta’s Massive New AI Supercomputer Will Be ‘World’s Fastest’

January 24, 2022

Fresh off its rebrand last October, Meta (née Facebook) is putting muscle behind its vision of a metaversal future with a massive new AI supercomputer called t Read more…

Nvidia Defends Arm Acquisition Deal: a ‘Once-in-a-Generation Opportunity’

January 13, 2022

GPU-maker Nvidia is continuing to try to keep its proposed acquisition of British chip IP vendor Arm Ltd. alive, despite continuing concerns from several governments around the world. In its latest action, Nvidia filed a 29-page response to the U.K. government to point out a list of potential benefits of the proposed $40 billion deal. Read more…

Julia Update: Adoption Keeps Climbing; Is It a Python Challenger?

January 13, 2021

The rapid adoption of Julia, the open source, high level programing language with roots at MIT, shows no sign of slowing according to data from Julialang.org. I Read more…

Top500: No Exascale, Fugaku Still Reigns, Polaris Debuts at #12

November 15, 2021

No exascale for you* -- at least, not within the High-Performance Linpack (HPL) territory of the latest Top500 list, issued today from the 33rd annual Supercomputing Conference (SC21), held in-person in St. Louis, Mo., and virtually, from Nov. 14–19. "We were hoping to have the first exascale system on this list but that didn’t happen," said Top500 co-author... Read more…

TACC Unveils Lonestar6 Supercomputer

November 1, 2021

The Texas Advanced Computing Center (TACC) is unveiling its latest supercomputer: Lonestar6, a three peak petaflops Dell system aimed at supporting researchers Read more…

10nm, 7nm, 5nm…. Should the Chip Nanometer Metric Be Replaced?

June 1, 2020

The biggest cool factor in server chips is the nanometer. AMD beating Intel to a CPU built on a 7nm process node* – with 5nm and 3nm on the way – has been i Read more…

  • arrow
  • Click Here for More Headlines
  • arrow
HPCwire