ORNL’s Raphael Pooser on DoE’s Quantum Testbed Project

By John Russell

March 11, 2020

Quantum computing and quantum information science generally are areas of aggressive research at the Department of Energy. Their promise, of course, is tantalizing – vast computational scale and impenetrable communication, for starters. Depending on how one defines practical utility, a few applications may not be just distant visions. At least that’s the hope. The most visible sign of that hope – and of worry about falling behind in a global race to practical quantum computing – is the $1.2B U.S. National Quantum Initiative passed in 2018.

HPCwire recently spoke with Raphael Pooser, PI for DoE’s Quantum Testbed Pathfinder project and a member of Oak Ridge National Laboratory’s Quantum Information Science group, whose work encompasses quantum sensing, quantum communications, and quantum computing. Pooser also leads the quantum sensing team at ORNL. Broadly, DoE’s Quantum Testbed project is a multi-institution effort involving national labs and academia whose mission has two prongs: one – the Quantum Testbed Pathfinder – is intended to assess quantum computing technologies and deliver tools and benchmarks; and the second – the Quantum Testbeds for Science – is intended to provide quantum computing resources to the research community to foster understanding of how to best use quantum computing to advance science.

Part of what’s noteworthy here is the project’s candid acknowledgement of quantum computing’s nascent stage, the so-called NISQ era, in which noisy intermediate-scale quantum computers dominate. The Quantum Testbed program is trying to figure out how to improve and make practical use of NISQ systems while also pursuing fault-tolerant quantum computers. Moreover, the whole quantum computing community is seeking to demonstrate quantum advantage – that is, using a quantum computer to do something practical sufficiently faster (and more economically) than a classical computer to warrant switching to quantum computing for that application.

Raphael Pooser, ORNL

As Pooser told HPCwire, “I’m personally working in NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we’re contributing towards finding a quantum advantage for real scientific applications. But if it doesn’t happen before fault tolerance, it won’t necessarily shock me. It’ll just be disappointing.”

Without doubt there are challenges ahead, but there have also been notable accomplishments. Pooser noted the use of quantum communications to secure voting results, albeit over a short distance, in Vienna, and that some banks use quantum key encryption for short-distance communication. Quantum computing, too, has shown progress, though it remains much further from general practical use. It’s been used, for example, in proof-of-concept efforts to calculate ground state energies for a few molecules. Note too that the quantum testbed project is just one, although a big one, of many DoE-backed quantum science research efforts.

The ORNL quantum information science efforts emphasize multidisciplinary collaboration. “We are a group of about 20 full-time staff, and have several postdocs, grad students, and interns,” said Pooser. “The group members are distributed about evenly over the teams. One thing to note is that the team members pretty much engage in whatever research they are interested in, and are not limited by what team they’re on. I do research in all three areas of QIS, for example. Others choose to engage in research solely for quantum computing. We have a very large breadth due to our need to be ready to serve the needs of government agencies as they emerge. For example, quantum computing, though long studied elsewhere, has only recently become a core program within DOE; our group made sure to maintain ORNL’s level of expertise in this area over time so that we were able to rise to meet DoE’s needs when it expressed them.”

Presented here is part one of Pooser’s conversation with HPCwire, which focuses on quantum computing and what the Testbed Pathfinder group is doing. Part two of the interview, which will be published shortly, focuses on quantum information.

The Testbed Pathfinder group is charged with delivering benchmarks, code, and technology assessment. Peer-reviewed papers are a big part of the expected output, 10-to-15 a year, said Pooser, noting, “Those are important because most of them come with how-to guides almost. If you download a paper and you are versed in the state of the art, you can reproduce our work on a quantum computer. You can actually take our work and apply it, at least right now, to the freely available IBM cloud machine.” Code too is being made available, such as ORNL-developed XACC, which stands for accelerated cross compiler. Most of the work is accessible through the ORNL quantum information science archive or on github.

The interview installment presented here touches on benchmarking, competing qubit technologies, the emerging software ecosystem, and the quest for quantum advantage. Also, it doesn’t dig deeply into basic quantum computing concepts as they have been covered earlier.

HPCwire: Let’s start with an overview of DoE and ORNL quantum work.

Raphael Pooser: DoE has multiple quantum programs going on. One of the first was this program called Quantum Testbed Pathfinder. This is really about benchmarking quantum computers. The reason for this project is we need to help DoE understand what quantum computers are capable of within the context of the things DoE is interested in. So we want to understand how quantum computing can help DoE reach its goals, more or less independent of other agencies. It’s not as concerned with some of the applications that other agencies might be concerned with. What we’re really talking about are fundamental science questions. To do this we need to benchmark quantum systems and, through this process of benchmarking, tell DoE what it is about quantum computers that needs to be improved in the future.

HPCwire: What does benchmarking actually mean in this context and are you using commercially-developed machines such as from IBM, Rigetti, etc.?

Raphael Pooser: Yeah, great question. Quantum computing is in such a nascent stage. What do we even mean by benchmarking? So we are using the commercially available devices. That includes IBM and Rigetti and a company called IonQ, which is an ion trap technology company. We are also working, though not as tightly yet, with Google, [which] has benchmarked its own machine. We are working with Google to benchmark their machines more closely and with an independent mindset. Those are the four companies we’re now working with. We’re also in talks with various other quantum computing companies that run the gamut from US-based all the way to Canadian- and Australian-based companies.

IBM Q System (IBM photo)

In addition, DoE has its own quantum computing testbed efforts underway. In fact, there’s a second part to this program in which two national labs are building quantum computer testbed facilities (Testbeds for Science), which are meant to give folks like me deeper access. By deeper access, what I mean is much closer-to-the-metal access so that we can really stress the quantum computers. Those two systems being built are at Berkeley (National Laboratory) and Sandia (National Laboratories). Those are superconducting quantum computers and ion trap quantum computers. Finally, to round it out, we also have optical quantum computers here at Oak Ridge (National Laboratory) which have been used in my project a couple of times. We haven’t really gotten around to deeply benchmarking and stressing those. But I think the one-sentence answer to your question is we are technology agnostic and our goal is to benchmark every quantum computer that we can get our hands on.

HPCwire: You mentioned the ion trap and superconducting, which are certainly the quantum computing technologies that have gotten most of the attention. What about others, such as Intel’s silicon spin-based approach? Are you looking at other technologies?

Raphael Pooser: We do believe we’re going to get our hands on some other technologies soon. I can’t say exactly what those technologies are due to business considerations for the companies involved. I can tell you without giving anything away that my project in particular, and Oak Ridge more generally, have been in talks with every single company that has a quantum computer in the works, and we’re in the process of gaining access to many of them. That doesn’t just include Intel. Speaking broadly, the silicon quantum dot-based qubit (Intel) is a very interesting system. Those are hard to benchmark now because access to those systems is limited. They are still more laboratory-based, but we expect that because of companies like Intel, and because of work going on in this field at Sandia and at the University of New South Wales in Australia, these systems are going to become benchmarkable in the future.

The short answer is, we haven’t benchmarked any silicon-based qubit-based systems yet because we don’t have access to them, but we know who the players are, and we’re in talks with those players.

HPCwire: Back to basics, what exactly does benchmarking mean here? One can think of many things to look at when assessing how these systems perform. What exactly does quantum benchmarking involve?

Rigetti quantum processor

Raphael Pooser: That’s also a really good question because the concept of benchmarking in quantum computing, especially at this stage of quantum computing, is different from classical benchmarking although they do bear similarities. One of the things that we want to measure is performance, and by performance what we’re talking about are the resource costs required to get an answer. At the same time, we also want to measure the quality of the answer. So one of the places where classical computing and quantum computing can vary quite dramatically, especially at this stage in the quantum computing life cycle, is in the quality of an answer. The quantum computers you have access to today give rather noisy results. One of our jobs is to quantify to what extent noise affects the quality of the answer you can get from the quantum computer.

The other major component of benchmarking is asking what kind of resources it takes to run this or that interesting problem. Again, these are problems of interest to DoE, so basic science problems in chemistry and nuclear physics and things like that. What we’ll do is take applications in chemistry and nuclear physics and convert them into what we consider a benchmark. We consider it a benchmark when we can distill a metric from it. So the metric could be the accuracy, the quality of the solution, or the resources required to get a given level of quality. Look at our papers and you’ll see that we’ll discuss using a certain number of qubits to do a computation versus a different number of qubits. Or we’ll talk about using one particular level of theory versus another level of theory, and say that if you were able to use such and such level of theory you would get a better answer, but because the computer has this much noise, it tempers the quality of your answer by some amount.

HPCwire: It’s interesting to hear discussion about ‘noise’ in quantum computing and how its sources vary, from manufacturing issues to characteristics of different gate types to the way you lay out a circuit. They all affect performance. Do I have this idea correct? Can you talk about how you handle noise, and does that translate into assessing noise for specific quantum circuits and algorithms, since they are so intertwined?

Raphael Pooser: You’re hitting on something that’s deeply important in the NISQ era of quantum computing. You really do have different gates which are more or less useful for different architectures. It’s just as in classical computing back in the days when people used to get very concerned about the compiler optimizations that an Intel compiler would use when compiling benchmarks for Intel processors versus using that same compiler for AMD processors. There used to be quite a bit of wondering about whether a benchmark would have run better on a RISC architecture versus an x86 architecture.

In quantum computing you have to think in a similar way, but more deeply, because the quantum processors can be vastly different when it comes down to how they implement the gate. What we have to do is say, “let’s get this algorithm for this application that we’ve compiled as a benchmark. If we want to run it on a different quantum computing processor, we have to make sure that we make the most efficient implementation in terms of the circuit.” In this early era of quantum computing, what you really quickly realize is that noise forces you to limit what we call the depth of the circuit (very roughly, the length of the gate sequence as measured by time steps).

Google’s Sycamore quantum chip

We’re of the view – and I think other people are of the view nowadays, even in classical benchmarking – that if a machine has an advantage over another machine – let’s say the entangling gate in one quantum computer is more efficient than some other architecture’s, in the sense that it has a higher entangling fidelity or can entangle more qubits at a time and it allows you to simplify your algorithms – then all the more power to that platform; it should be allowed to exploit that advantage when running the benchmark. So we move forward with the idea that we want to squeeze the most out of every machine possible, and that does mean exactly what you said. You’re looking at a circuit that in some cases may be general enough to run on any architecture, but in other cases, as in the case of translating from a superconducting to an ion-based system, we need to translate the gate set a little bit. Luckily, in this era, because circuit depths are so short, this is not an onerous task. We work with the developers of the hardware to do this. We’re able to do this because there’s not an overabundance of hardware platforms out there and there’s not an overabundance of circuit depth.
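Gate-set translation of this kind can be checked numerically. As a hedged sketch (this is a textbook-style decomposition, not necessarily the one ORNL’s tools use), the snippet below verifies that a CNOT is equivalent, up to a global phase, to single-qubit rotations wrapped around a Mølmer–Sørensen XX gate, the native entangling operation on trapped-ion hardware:

```python
import numpy as np

# Single-qubit rotations and the Moelmer-Soerensen gate XX(t) = exp(-i t X⊗X).
def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]], dtype=complex)

def xx(t):
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    return np.cos(t) * np.eye(4) - 1j * np.sin(t) * np.kron(X, X)

I2 = np.eye(2)

# Translate CNOT (control = first qubit) into the ion-native gate set:
# Ry(pi/2) on the control, then XX(pi/4), then Rx(-pi/2) on both qubits,
# then Ry(-pi/2) on the control. Matrices compose right to left.
U = (np.kron(ry(-np.pi / 2), I2)
     @ np.kron(rx(-np.pi / 2), rx(-np.pi / 2))
     @ xx(np.pi / 4)
     @ np.kron(ry(np.pi / 2), I2))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

phase = U[0, 0] / abs(U[0, 0])          # strip the global phase
print(np.allclose(U / phase, CNOT))     # True
```

Because circuit depths are short in the NISQ era, translations like this stay tractable, as Pooser notes above.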

HPCwire: That sounds like a major headache for would-be users of quantum computing, this idea they need to write applications differently, or at least have different compilers to get the most out of a platform.

Raphael Pooser: Well, we do tune a lot of things by hand. However, if you are a user out on the street who is interested in this stuff, you can grab our software suite and write code once and then run it on quite a few different architectures at this point; you can run it on IBM and Rigetti, and on IonQ using their simulator right now because they haven’t put their cloud access up yet. We even have Cirq built in. Cirq is Google’s language. You could, say, write in Qiskit if you want, which is IBM’s language, and then our stack will translate it to Google or Rigetti or any other hardware language you want. We even have support for IBM’s low-level language. IBM has done something very smart; they have enabled access to what we’re calling the quantum control layer of the quantum computer. They call this language OpenPulse. We even have support built in for the quantum control layer. Essentially, if any vendor will give it to us, we’ll build it. So this is actually open source software that’s kind of a product of our work.
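The write-once, run-on-many-backends idea can be pictured with a deliberately simplified sketch. All names below are hypothetical; real stacks such as XACC lower a full intermediate representation with gate-set rewrites, not a mere name table:

```python
# A backend-agnostic circuit: (gate, operands) pairs in an abstract vocabulary.
abstract_circuit = [("h", 0), ("cx", 0, 1), ("measure", (0, 1))]

# Per-backend lowering tables (hypothetical backend and gate names).
BACKENDS = {
    "generic_qasm": {"h": "h", "cx": "cx", "measure": "measure"},
    "ion_native": {"h": "ry_rz", "cx": "ms_xx", "measure": "detect"},
}

def lower(circuit, backend):
    """Rewrite each abstract operation into the backend's native gate name."""
    table = BACKENDS[backend]
    return [(table[op[0]], *op[1:]) for op in circuit]

print(lower(abstract_circuit, "ion_native"))
# [('ry_rz', 0), ('ms_xx', 0, 1), ('detect', (0, 1))]
```

The user writes the abstract circuit once; only the lowering tables differ per machine, which is the property Pooser describes.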

HPCwire: Before turning to software issues and your project’s deliverables, could you comment on the competing qubit technologies – superconducting, ion trap, the silicon spin, Microsoft’s topological qubit, etc. – and handicap them in terms of strengths, weaknesses, closeness to practical use? Also, what application areas do you think each is perhaps more suited for?

Raphael Pooser: Another good question. First, going straight to the topological qubit. Yes, Microsoft has been researching this area for a while. They’re rather excited about it and frankly, I am too, because if you can discover a topological qubit then you get around a lot of the problems that all the current qubits have – and that is the physical error rate using current technologies. A topological logical qubit would basically jump you forward by leaps and bounds on the path to fault tolerance. The flip side is that, speaking in terms of what you call a handicap, the topological qubits are further off into the future. They’re definitely not impossible, but there has not currently been a demonstration of a fully functioning topological qubit in the sense of the DiVincenzo criteria, right? You really can’t talk about building a quantum computer scalably with a system until you meet these DiVincenzo criteria. But there’s great promise there because if we can find topological qubits with low physical error rates, it’s going to be quite a large breakthrough. So that’s several years off in the future.

Guys like me are super excited to be using quantum computers right now. We’ve got the superconducting devices and ion traps, and yes, there are different applications each system excels at. For superconducting architectures, you really need look no further than the current scientific literature and you’ll see that people are using them extensively for many different application types. One of the most widespread is an application called the Variational Quantum Eigensolver (VQE). This is a method of searching for the ground state of Hamiltonians. As long as you can represent the Hamiltonian of the system of interest in a way you can encode on a quantum computer, you can get ground state energies out. One of the big breakthroughs, of course, was using this to calculate ground state energies for chemistry, for molecules, and it was also recently demonstrated for nuclear interactions. Superconducting devices have proven themselves quite strong for that. But they’re not limited to that. There are other things called quantum approximate optimization algorithms and some machine learning protocols. (Link to a good overview of VQE implementation by Talia Gershon of IBM/MIT).
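The VQE loop Pooser describes – a parameterized quantum state evaluated on hardware, steered by a classical optimizer – can be sketched in a few lines. This is a toy single-qubit example with a made-up Hamiltonian, and a plain parameter scan stands in for the optimizer:

```python
import numpy as np

# Toy problem (hypothetical, not one of the published benchmarks):
# H = Z + 0.5 X on a single qubit.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def ansatz(theta):
    """Variational trial state Ry(theta)|0>, a real two-component vector."""
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(theta):
    """The expectation value <psi|H|psi> a quantum device would estimate."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop: scan the parameter and keep the lowest energy seen.
thetas = np.linspace(0.0, 2.0 * np.pi, 4001)
vqe_energy = min(energy(t) for t in thetas)
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE estimate: {vqe_energy:.5f}, exact ground state: {exact:.5f}")
```

On real hardware the energy comes from noisy shot statistics rather than exact linear algebra, which is exactly where the noise benchmarking discussed earlier comes in.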

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

If you look at the ion traps, they have very high gate fidelities. Now they’re a little bit different from the superconducting devices in the sense that ion traps have slower clock speeds but longer coherence times, which means you can do more high-fidelity operations on them before they decohere. That enables you to do certain applications that need very exact computation, so you could attempt time evolution on them. They are also good for analog computation of spin systems; they’ve proven to be very robust for calculating the input-output correlations for large numbers of spins. I’m thinking of a paper from Chris Monroe back in 2017 where they did a 53-qubit simulation of a 53-spin chain. You’re able to get large numbers of qubits with high-fidelity gates between them on ion trap technology.

Where ions and superconductors have something in common is both of them have proven to be interesting platforms for machine learning. So folks have run machine learning algorithms, the same mechanisms, on both platforms. I just want to say that in this NISQ era that while there are large differences in the platform technology, their capabilities in terms of the types of algorithms you can run on them are fairly comparable. In other words it’s too early to really say which is better than the other.

HPCwire: It is interesting to track efforts by various quantum computing technology vendors to weigh in on metrics and benchmarks. IonQ has done this. IBM has perhaps made the most noise pitching its Quantum Volume measure at last year’s APS March meeting. It’s a composite measure with many system-wide facets – gate error rates, decoherence times, qubit connectivity, operating software efficiency, and more – effectively baked into the measure.  

Pooser: I think quantum volume is a good benchmark. Quantum volume will give a sense of how many gates (circuit depth) a given set of qubits (circuit width) can support before decohering. I think it is most useful when combined or correlated with other benchmarks. That is, certain benchmarks look at certain aspects of the machines, and to get a good picture, you need to run multiple benchmarks of different types. I think that since Honeywell recently announced their new device in terms of Quantum Volume, you’re going to see a few companies here and there picking it up going forward. However, in this early stage of quantum computing, it’s important not to try to reduce the machines down to single numbers.
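The depth-versus-width intuition behind quantum volume can be made concrete with a deliberately crude toy model (my own simplification for illustration, not IBM’s actual heavy-output protocol): treat a width-n, depth-n model circuit as n² gates that each fail independently with error rate eps, and take the largest square circuit whose success probability stays above the 2/3 threshold:

```python
def toy_log2_qv(eps, n_max=64):
    """Largest n with (1 - eps)^(n*n) > 2/3; quantum volume is then 2**n.

    Toy independent-gate-failure model, not the real benchmark protocol.
    """
    best = 0
    for n in range(1, n_max + 1):
        if (1.0 - eps) ** (n * n) > 2.0 / 3.0:
            best = n
    return best

for eps in (0.01, 0.005):
    print(f"gate error {eps}: toy quantum volume = {2 ** toy_log2_qv(eps)}")
```

Even this caricature shows why single numbers compress so much: halving the gate error from 1% to 0.5% moves the toy quantum volume from 64 to 256, yet says nothing about which applications improve.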

HPCwire: Turning to software, we frequently receive press releases claiming the ability to dramatically improve quantum hardware performance – outcome quality, ease-of-use, etc. – with little background on what’s being measured or how improvements were achieved. That makes it hard for non-experts like me to assess. What’s your sense of the emerging software ecosystem and the accompanying clamor around these efforts?

Raphael Pooser: I totally agree that clarity is definitely difficult to come by there. There has been a proliferation of quantum computing software stacks in the past few years, and it’s definitely true that you’re not quite sure what to do with them all. There seems to be this tendency to try to build platform-agnostic software stacks, because most people believe doing that will garner them the most users and make them the most relevant.

Now, about how these software stacks and tools can supposedly reduce error rates and improve hardware performance: you kind of scratch your head and wonder how a guy sitting halfway across the nation can reduce error rates on hardware he doesn’t even have control over. However, it’s actually possible. Most of the time they are talking about building something into their suite called error mitigation. We certainly do; we build this concept called error mitigation into our software, and it really does help.

What error mitigation is, in a nutshell, is rooting out the noise that causes errors in quantum computers and trying to correct your data for it. You can either post-process your data or you can change what you’re doing. [For example] if you’re doing an iterative calculation that uses expectation values, you can actually change what you do when you calculate the expectation value to try to calculate it with less error, based on some characterization of the machine.

In addition to the software stack companies, there are companies that specialize in quantum characterization. One example is a company called Quantum Benchmark. These companies specialize in characterizing the quantum computers so that we can find out where the noise comes from, and we can use that knowledge to make our answers better. That’s error mitigation.
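One common form of error mitigation built on such characterization is readout-error correction: measure a calibration (confusion) matrix, then invert it to post-process observed counts. A minimal sketch, with an assumed two-outcome error model and made-up flip rates:

```python
import numpy as np

# Assumed readout-error model (rates are invented for illustration):
p01 = 0.03  # P(read 1 | true 0)
p10 = 0.08  # P(read 0 | true 1)

# Calibration ("confusion") matrix: column = prepared state, row = observed.
# In practice this is characterized by preparing |0> and |1> many times each.
A = np.array([[1 - p01, p10],
              [p01, 1 - p10]])

ideal = np.array([0.5, 0.5])   # true distribution for an ideal 50/50 state
observed = A @ ideal           # what the noisy device would report

# Mitigation: invert the calibration matrix, then clip and renormalize so
# the result is still a valid probability distribution.
mitigated = np.linalg.solve(A, observed)
mitigated = np.clip(mitigated, 0.0, None)
mitigated /= mitigated.sum()

print("observed :", observed)    # [0.525 0.475]
print("mitigated:", mitigated)   # [0.5 0.5]
```

With finite shot counts the inversion can produce small negative entries, which is why the clip-and-renormalize step (or a more careful least-squares fit) is needed in practice.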

Now back to the bigger question about what all these stacks are doing. What do we do with all these software stacks everywhere? It seems like this write-once, run-on-any-backend approach is a popular way to go, and I think it’s definitely a good idea for the industry as a whole because you really don’t know what quantum computing technology is going to win out.

That goes back to your other question about the relative merits and strengths and weaknesses of these various hardware architectures. Part of the answer to that question was, we don’t know yet. We’re in the process of testing all these machines, so we need these write-once, run-maybe-not-everywhere-but-in-most-places type stacks so that we can quickly, and with a lot of agility, test new hardware as it comes, or existing hardware, and discover what they’re good for. So I support these ideas that companies like Q-CTRL are advancing. Azure (Microsoft) has Q# (“q sharp”); they’re also becoming a multi-backend platform.

HPCwire: You’ve said producing code is one of the Testbed Pathfinder’s goals. Maybe talk a little bit about the software it’s developed. I’m thinking of XACC and QCOR.

A quantum computer produced by the Canadian company D-Wave systems. Image: D-Wave Systems Inc.

Raphael Pooser: Oak Ridge has developed this platform we call XACC, which stands for accelerated cross compiler. The reason we developed our own is that we were just one of the first to do it; a lot of other companies started doing it more recently. Also, DoE actually needs its own software stack. We’re happy to use the vendor software stacks, and we do, but DoE also wants its own so that it can have maximum control over it and really know the nuts and bolts of what’s going on under the hood. In general, these are all very good developments. I don’t know what will win out in the end and who will be around 10 years from now, but I think it’s all good stuff.

The fastest way to distinguish XACC and QCOR is to point out that QCOR is really a language, while XACC is, at its core, a cross-compilation framework. That is, you will not write computer programs in XACC per se, but XACC will be able to “speak” many different computer languages and compile them to the appropriate machine that you’d like to run the particular algorithm on. QCOR is specifically engineered to make it easy to program quantum computers using methods that are familiar to traditional C++ programmers. Getting a little more technical, QCOR is in fact a library plus extensions that extend the existing C++ language by providing handy functions that enable heterogeneous quantum-classical programming.

HPCwire: Let’s talk about quantum supremacy versus quantum advantage and the timeline to reach either. Google, of course, created a stir with its claim of demonstrating quantum supremacy in the fall. What’s your sense of the relative importance of these two measures, and how soon can we expect either of them?

Raphael Pooser: Interesting question. The difference between quantum advantage and quantum supremacy, as you noted, is that advantage is actually useful for something whereas supremacy is a demonstration of faster number crunching basically. It’s really hard to pin down when quantum advantage will happen. What I will say is that there are two schools of thought. One school of thought is that you can’t have a quantum advantage without fault tolerance, which means that we need quantum error correction. Now the problem with that is that fully quantum-error-corrected quantum computers are quite far off, at this point probably over a decade away.

There’s another school of thought that says we might be able to gain quantum advantage in this NISQ era if we play our cards right, and that quantum supremacy is a leading indicator that it might be possible. The way that this would work is that you choose a specific application that is of interest, right? Like, say, calculating electron mobility in a molecule, and you tailor your quantum computer, maybe even at an analog level, to the application at hand. In other words, it doesn’t have to be a universal machine, but you do a computation that is bona fide faster and more accurate than a classical machine could do, even without error correction. Error mitigation is believed to play a very big role here because of the prevalence of noise in this era.

So there are the two schools of thought, which really can just be broken down on either side of fault tolerance. Do you believe advantage can come before or after fault tolerance? I don’t feel comfortable saying things like in a couple of years quantum advantage will happen. Although, you know, the word is that as soon as Google demonstrated supremacy they said “we’re going to have a quantum advantage next in this application,” which I’ve heard might be a random number generator certification and would technically be quantum advantage. So the definition of quantum advantage varies from person to person. Some people might just wave their hands and say, “Bah! Certifying a random number generator, that’s not quantum advantage. I wanted the nuclear bound state calculation with 100 qubits in it. That’s something that you could never compute in a million years, and it’s scientifically useful versus random number generation.” Others might say no, that’s actually useful for something like communication, and that is quantum advantage.

It’s all kind of a moot argument, of course, because Google hasn’t actually done that yet. My personal opinion is that quantum advantage is not [just] a couple of years away; it may be more than 10 years away. But if the others are right about not needing fault tolerance for a true quantum advantage, it may be five years away. I’m personally working in the NISQ era right now. It would be really nice to find a quantum advantage in this era, and at Oak Ridge, we hope that we’re contributing towards finding a quantum advantage for real scientific applications. But if it doesn’t happen before fault tolerance, it won’t necessarily shock me. It’ll just be disappointing.

HPCwire: Thanks for your time!

Brief Raphael Pooser Bio (Source: ORNL)

Dr. Pooser is an expert in continuous variable quantum optics. He leads the quantum sensing team within the quantum information science group. His research interests include quantum computing, neuromorphic computing, and sensing. He currently leads the Quantum Computing Testbed project at ORNL, a large multi-institution collaboration. He has also developed a quantum sensing program from the ground up based on quantum networks over a number of years at ORNL. He has been working to demonstrate that continuous variable quantum optics, quantum noise reduction in particular, has important uses in the quantum information field. One of his goals is to show that the quantum control and error correction required in computing applications are directly applicable to quantum sensing efforts. He is also interested in highlighting the practicality of these systems, demonstrating their ease of use and broad applicability. His research model uses quantum sensors as a showcase for the technologies that will enable quantum computing. Dr. Pooser has over 16 years of quantum information science experience, having led the quantum sensing program at ORNL over the past eight years. Dr. Pooser publishes in high impact journals, including Science, Nature, and Physical Review Letters. He previously served as a distinguished Wigner Fellow. He also worked as a postdoctoral fellow in the Laser Cooling and Trapping Group at NIST after receiving his PhD in Engineering Physics from the University of Virginia. He received a B.S. in Physics from New York University, graduating cum laude on an accelerated schedule. Dr. Pooser is active in the community, having served as a spokesperson for United Way and for the Boys and Girls Clubs of the TN Valley on many occasions in addition to volunteer work.

IBM researchers have taken another step towards making in-memory computing based on phase change (PCM) memory devices a reality. Papers in Nature and Frontiers Read more…

By John Russell

Hats Over Hearts: Remembering Rich Brueckner

May 26, 2020

HPCwire and all of the Tabor Communications family are saddened by last week’s passing of Rich Brueckner. He was the ever-optimistic man in the Red Hat presiding over the InsideHPC media portfolio for the past decade and a constant presence at HPC’s most important events. Read more…

Nvidia Q1 Earnings Top Expectations, Datacenter Revenue Breaks $1B

May 22, 2020

Nvidia’s seemingly endless roll continued in the first quarter with the company announcing blockbuster earnings that exceeded Wall Street expectations. Nvidia Read more…

By Doug Black

Microsoft’s Massive AI Supercomputer on Azure: 285k CPU Cores, 10k GPUs

May 20, 2020

Microsoft has unveiled a supercomputing monster – among the world’s five most powerful, according to the company – aimed at what is known in scientific an Read more…

By Doug Black

HPC in Life Sciences 2020 Part 1: Rise of AMD, Data Management’s Wild West, More 

May 20, 2020

Given the disruption caused by the COVID-19 pandemic and the massive enlistment of major HPC resources to fight the pandemic, it is especially appropriate to re Read more…

By John Russell

AMD Epyc Rome Picked for New Nvidia DGX, but HGX Preserves Intel Option

May 19, 2020

AMD continues to make inroads into the datacenter with its second-generation Epyc "Rome" processor, which last week scored a win with Nvidia's announcement that Read more…

By Tiffany Trader

Supercomputer Modeling Tests How COVID-19 Spreads in Grocery Stores

April 8, 2020

In the COVID-19 era, many people are treating simple activities like getting gas or groceries with caution as they try to heed social distancing mandates and protect their own health. Still, significant uncertainty surrounds the relative risk of different activities, and conflicting information is prevalent. A team of Finnish researchers set out to address some of these uncertainties by... Read more…

By Oliver Peckham

[email protected] Turns Its Massive Crowdsourced Computer Network Against COVID-19

March 16, 2020

For gamers, fighting against a global crisis is usually pure fantasy – but now, it’s looking more like a reality. As supercomputers around the world spin up Read more…

By Oliver Peckham

[email protected] Rallies a Legion of Computers Against the Coronavirus

March 24, 2020

Last week, we highlighted [email protected], a massive, crowdsourced computer network that has turned its resources against the coronavirus pandemic sweeping the globe – but [email protected] isn’t the only game in town. The internet is buzzing with crowdsourced computing... Read more…

By Oliver Peckham

Global Supercomputing Is Mobilizing Against COVID-19

March 12, 2020

Tech has been taking some heavy losses from the coronavirus pandemic. Global supply chains have been disrupted, virtually every major tech conference taking place over the next few months has been canceled... Read more…

By Oliver Peckham

Supercomputer Simulations Reveal the Fate of the Neanderthals

May 25, 2020

For hundreds of thousands of years, neanderthals roamed the planet, eventually (almost 50,000 years ago) giving way to homo sapiens, which quickly became the do Read more…

By Oliver Peckham

DoE Expands on Role of COVID-19 Supercomputing Consortium

March 25, 2020

After announcing the launch of the COVID-19 High Performance Computing Consortium on Sunday, the Department of Energy yesterday provided more details on its sco Read more…

By John Russell

Steve Scott Lays Out HPE-Cray Blended Product Roadmap

March 11, 2020

Last week, the day before the El Capitan processor disclosures were made at HPE's new headquarters in San Jose, Steve Scott (CTO for HPC & AI at HPE, and former Cray CTO) was on-hand at the Rice Oil & Gas HPC conference in Houston. He was there to discuss the HPE-Cray transition and blended roadmap, as well as his favorite topic, Cray's eighth-gen networking technology, Slingshot. Read more…

By Tiffany Trader

Honeywell’s Big Bet on Trapped Ion Quantum Computing

April 7, 2020

Honeywell doesn’t spring to mind when thinking of quantum computing pioneers, but a decade ago the high-tech conglomerate better known for its control systems waded deliberately into the then calmer quantum computing (QC) waters. Fast forward to March when Honeywell announced plans to introduce an ion trap-based quantum computer whose ‘performance’ would... Read more…

By John Russell

Leading Solution Providers

SC 2019 Virtual Booth Video Tour

AMD
AMD
ASROCK RACK
ASROCK RACK
AWS
AWS
CEJN
CJEN
CRAY
CRAY
DDN
DDN
DELL EMC
DELL EMC
IBM
IBM
MELLANOX
MELLANOX
ONE STOP SYSTEMS
ONE STOP SYSTEMS
PANASAS
PANASAS
SIX NINES IT
SIX NINES IT
VERNE GLOBAL
VERNE GLOBAL
WEKAIO
WEKAIO

Contributors

Fujitsu A64FX Supercomputer to Be Deployed at Nagoya University This Summer

February 3, 2020

Japanese tech giant Fujitsu announced today that it will supply Nagoya University Information Technology Center with the first commercial supercomputer powered Read more…

By Tiffany Trader

Tech Conferences Are Being Canceled Due to Coronavirus

March 3, 2020

Several conferences scheduled to take place in the coming weeks, including Nvidia’s GPU Technology Conference (GTC) and the Strata Data + AI conference, have Read more…

By Alex Woodie

Exascale Watch: El Capitan Will Use AMD CPUs & GPUs to Reach 2 Exaflops

March 4, 2020

HPE and its collaborators reported today that El Capitan, the forthcoming exascale supercomputer to be sited at Lawrence Livermore National Laboratory and serve Read more…

By John Russell

‘Billion Molecules Against COVID-19’ Challenge to Launch with Massive Supercomputing Support

April 22, 2020

Around the world, supercomputing centers have spun up and opened their doors for COVID-19 research in what may be the most unified supercomputing effort in hist Read more…

By Oliver Peckham

Cray to Provide NOAA with Two AMD-Powered Supercomputers

February 24, 2020

The United States’ National Oceanic and Atmospheric Administration (NOAA) last week announced plans for a major refresh of its operational weather forecasting supercomputers, part of a 10-year, $505.2 million program, which will secure two HPE-Cray systems for NOAA’s National Weather Service to be fielded later this year and put into production in early 2022. Read more…

By Tiffany Trader

Summit Supercomputer is Already Making its Mark on Science

September 20, 2018

Summit, now the fastest supercomputer in the world, is quickly making its mark in science – five of the six finalists just announced for the prestigious 2018 Read more…

By John Russell

15 Slides on Programming Aurora and Exascale Systems

May 7, 2020

Sometime in 2021, Aurora, the first planned U.S. exascale system, is scheduled to be fired up at Argonne National Laboratory. Cray (now HPE) and Intel are the k Read more…

By John Russell

TACC Supercomputers Run Simulations Illuminating COVID-19, DNA Replication

March 19, 2020

As supercomputers around the world spin up to combat the coronavirus, the Texas Advanced Computing Center (TACC) is announcing results that may help to illumina Read more…

By Staff report

  • arrow
  • Click Here for More Headlines
  • arrow
Do NOT follow this link or you will be banned from the site!
Share This