Rigetti's (and Others') Pursuit of Quantum Advantage

By John Russell

September 11, 2018

Remember ‘quantum supremacy’, the much-touted but little-loved idea that the age of quantum computing would be signaled when quantum computers could tackle tasks that classical computers couldn’t? It was always a fuzzy idea and a moving target; classical computers keep advancing too. Today, ‘quantum advantage’ has supplanted quantum supremacy as the milestone of choice. Broadly, quantum advantage is the ability of quantum computers to tackle real-world problems, likely a small set to start with, more effectively than their classical counterparts.

While quantum advantage has its own fuzzy edges, it nevertheless seems a more moderate idea whose emergence will be signaled by the competitive edge it offers industry and science, perhaps not unlike the early emergence of GPUs, which offered advantages in specific applications. Talked-about targets include quantum chemistry, machine learning, and optimization, hardly newcomers to the quantum computing hit list.

Last week quantum computing pioneer Rigetti Computing announced a $1 million prize for the first conclusive demonstration of quantum advantage performed on its also just-announced Rigetti Quantum Cloud Services (QCS) platform (see HPCwire coverage of the announcement). Rigetti, you may recall, is Lilliputian compared to its better-known rivals (IBM, Google, Microsoft, Alibaba) in the race to develop quantum computers, yet has muscled its way into the thick of the field.

Rigetti Computing’s full stack

Founded in 2013 by former IBMer Chad Rigetti, and based in Berkeley, CA, Rigetti bills itself as a full stack quantum company with its own fabrication and testing facilities as well as a datacenter. Headcount is roughly 120 and its efforts span hardware and a complete software environment (Forest). The state of the hardware technology at Rigetti today is a new 16-qubit quantum processor whose architecture, Rigetti says, will scale to 128 qubits by around this time next year. The $1M prize and cloud services platform just introduced are efforts to stoke activity among applications developers and potential channel partners.

“We definitely need bigger hardware than we have today [to achieve quantum advantage],” said Will Zeng, Rigetti head of evangelism and special products, in a lengthy interview with HPCwire. “We believe that our 128-qubit processor is going to be a sufficient size of quantum memory to step up to the plate and head towards quantum advantage. But we will need to make continued improvements in algorithms. To really find quantum advantage, we are probably far off; it will remain the major pursuit of the industry for the next five years.”

The rest of the HPC world is watching quantum’s rise with interest. Bob Sorensen, who leads Hyperion Research’s quantum tracking practice, noted, “The big question here for Rigetti, as well as other QC aspirants offering on-line cloud access, is if their particular QC software ecosystem is accessible enough to entice a wide range of users to experiment, but still sophisticated enough to support the development of breakthrough algorithms. Only time will tell, but either way, the more developers attracted to QC, the greater the potential of someone making real algorithmic advances. And I don’t think offering a million dollars to do that can hurt.

“I particularly like the emphasis by Rigetti on the integrated traditional HPC and cloud architecture. I think that some of the first real performance gains we see in this sector will come out of the confluence of traditional HPC and QC capabilities,” said Sorensen.

Quantum computing (QC) remains mysterious for many of us and understandably so. Many of its ideas are counter-intuitive. Think superposition. Indeed, the way QC is implemented is sort of the reverse of traditional von Neumann architecture. In superconducting approaches, like the one Rigetti follows, instead of gates etched in silicon with data flowing through them, qubits (memory registers, really) are ‘etched’ in the silicon and microwaves interact with the qubits as gates to perform computations.

Will Zeng, Rigetti Computing. Source: Everipedia.org

Don’t give up now. Presented below are portions of HPCwire’s interview with Zeng in which he looks at the quantum computing landscape writ large, including deeper dives into Rigetti technology and strategy, and also takes a creditable stab at clarifying how quantum computing works and explaining Rigetti’s hybrid classical-quantum approach.

HPCwire: Thanks for your time Will. In the last year or two quantum computing has burst onto the more public scene. Are we moving too fast in showcasing quantum computing? What’s the reality and what are the misconceptions today?

Zeng: I wouldn’t say it’s too soon. It’s a new type of technology and it’s going to take a while to communicate the subtlety of it. As a developer, I am excited that folks are talking about quantum computers. In terms of misconceptions, one of the important things to emphasize is that quantum computers are now real and are here. They are something you can download a Python library for and, in 15 or 20 minutes, run a program on, and not just from us but from a couple of companies. Not more than a couple, [but still] that’s a really big deal.

The second thing to note is that just because quantum computers are here today doesn’t mean that breaking encryption is going to happen any time soon. A lot of what is holding back real-world applications is that the algorithms of the last 20 years were [designed] for perfect quantum computers, and the quantum computers we have today, while they are real, have some limitations. You have to think about them more practically, and you need software to actually do this and people who are educated in that software to work with it to find applications in the so-called near-term horizon.

HPCwire: Given the giant size of your competitors, why did Rigetti choose to become vertically integrated? Seems like an expensive gamble.

Zeng: I was here at the beginning and we were initially thinking, ‘let’s try to be as fabless as we can.’ We talked to a lot of people and looked at a lot of places, and it turned out there were so many innovations that needed to get made that we would be paying for the innovation anyway, so we might as well build it up in house. We were able to find capital and that’s paid off. We were able to go from building our first qubit in early 2016 to, two years later, starting to talk about triple digits (qubits), and we caught up to IBM, which has been making qubits for 15 years.

Really, it’s necessary to deliver the whole product. A quantum chip, while very cool to show people, isn’t something you can really sell to anybody and have them know how to use it. You have to go all the way up to the QCS (quantum cloud services) layer. Because we chose to deliver the whole product we’ve also been able to optimize our whole stack. Having our own fab facility and doing our own testing means our iteration cycles are much tighter and we are able to advance more rapidly than if we relied on a supply chain that doesn’t really exist yet.

HPCwire: Maybe take a moment to talk about the just announced QCS and distinguish it from, say, IBM’s Q platform.

Zeng: The types of algorithms that have been developed over the last few years that are most likely to be applied to quantum advantage, such as in the areas of optimization, machine learning, and quantum chemistry, all require very tight integration between the quantum system and a classical compute stack. All previous offerings, ours [and] IBM’s, have a very loose link between the classical part and the quantum part. There’s actually an API separating the two. So this means when you want to run some kind of algorithm that involves an integration between classical and quantum, it might have a latency of seconds between iterations. I will run something on the classical side, then I’ll run a quantum API call, and I’ll get an answer back a second or a few seconds later. With QCS we’re working toward lowering the latency by up to 20x-50x.

Chad Rigetti, CEO

The development flow is you log into your quantum machine image and practice by developing on a simulator back end and then, along with a sort of self-service scheduling, deploy your quantum machine image on a QPU back end. One of the reasons we talk about being the first real quantum platform with QCS is that what I am describing sounds a little bit like how an AWS platform works for you: set up your instance and you’ve got different back ends, different types of GPUs or CPUs. In terms of the terminology setup, we think about Quantum Cloud Services as the big bucket for our whole platform offering. The Forest SDK, which includes Quil and Grove and our Python and other libraries, is going to come preinstalled in everyone’s quantum machine image that they log into. You can still download the SDK locally and work there if you want.

One of the innovations in the QCS framework is [the ability] to do what’s called parameterized compilation. Think back to the integrated loops where you are running a classical-quantum computation and you have to go between classical and quantum many times. If you have to compile every time you change a parameter, then that’s going to increase your latency, sometimes by a lot. It can take seconds to hours to compile depending on what we are trying to compile. We had to change our compiler and upgrade it so that you can actually compile a parameterized program. You compile once, then tweak the parameters very quickly, without having to recompile every single time. If we hadn’t done this we wouldn’t have gotten any of the latency advantages we have built into QCS.
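The compile-once, run-many pattern Zeng describes can be sketched in a few lines of plain Python. This is purely illustrative: the `compile_program` function, the string-template "schedule," and the gate spellings are hypothetical stand-ins, not the real Quil toolchain.

```python
import math

def compile_program(template):
    """Stand-in for an expensive one-time compilation step: turn the
    template into a callable that accepts parameter values at run time."""
    # Imagine heavy circuit optimization happening here, exactly once.
    def executable(theta):
        # Fill the free parameter into the precompiled schedule.
        return [op.format(theta=theta) for op in template]
    return executable

# A parameterized program: one free angle, one fixed gate (illustrative syntax).
template = ["RZ({theta}) 0", "RX(pi/2) 0", "RZ({theta}) 0"]
run = compile_program(template)          # slow step, done once

# Fast inner loop: only the parameter changes, nothing is recompiled.
schedules = [run(theta) for theta in (0.0, math.pi / 4, math.pi / 2)]
print(schedules[0][0])   # RZ(0.0) 0
```

The point of the pattern is that the heavy work happens outside the iterative loop, so each parameter tweak costs only a cheap substitution.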

HPCwire: What kind of demand are you expecting for QCS?

Zeng: It is still a very technical environment but the goal of QCS is to make this much more accessible. We can support 100, 500 or more simultaneous quantum machine images now and we are going to be scaling this up over time. We’re starting with one QPU (16-qubit) in the deployed system as we start to ramp up our research partners. But if we get a lot more demand we’ll do more. The way we are doing this release is we are first making it available to our research partners, some of them in academia, some of them in industry.

So our scheduling is not a queue-based system. We allow people to book dedicated time on the QPU. Previously it was manual, but in the quantum cloud services model it will be automated. In your machine you can look at the calendar and schedule an hour block of compute. During that hour your quantum [jobs] can be deployed on the quantum processor and you will have dedicated access to it. That dedicated access window is critical for the integrated classical-quantum iterative approach. If your [job] gets interrupted by other folks, the quantum computer can become a very unreliable interface when you are doing iterative work.

HPCwire: How quickly will you ramp up the chips in QCS? In the past Rigetti has said it would have a roughly six-month cadence for introducing new chips. Maybe also give a brief description of the QCS datacenter.

Zeng: We’re launching QCS with the first chips in a new family that leads up to our 128-qubit processor. The first member of the family is a 16-qubit chip with a new design. We’ll have a 32-qubit variant as we build out towards the 128-qubit part over the next year. The new design incorporates some of what we’ve learned in building coherent qubits but also, more importantly, in making the device scalable. You’ll see the 16-qubit layout, and that tiles to two chips to reach 32 qubits and [so on]. One highlight is 3D signal delivery: delivering signals not just to the side of the chip but also to the interior of the chip. This has been a big hardware hurdle for a few years that we have been working on and have now solved. If we didn’t have that, and a few other things on the fab side as well, we wouldn’t be able to get to 128 qubits.

[As far as a description of the QCS equipment], there’s kind of the cylinder and rack of controls next to it; the cylinder houses the quantum computer and cools it down. The quantum machine image is going to be hosted on a rack of regular servers that’s right there with the control servers in the same building in the same datacenter. So when you log in you are actually going to log into something that is right there with the quantum computer.

HPCwire: Most of us think about von Neumann architectures and gates etched in silicon and data moving through them. Quantum chips are almost the reverse. Qubits, the registers of the data if you will, are ‘etched’ in silicon and you operate on them by applying external signals, the gates, to them? Is this close?

Zeng: Yes. You have a chunk of quantum memory, and you have operations applied to it. What’s interesting is that because a quantum chip is a chunk of quantum memory it’s kind of reconfigurable at will by applying different pulses to it. In a sense it’s maybe a little more like an FPGA analog.

The way quantum computing works in the superconducting qubit model is you cool down a microwave/radiofrequency circuit, which is broadly aluminum on silicon, pretty standard technology. You cool it to ten millikelvin (mK) and apply microwave pulses and dc signals to cause interactions to happen. An individual qubit is a resonator. The presence or absence of a microwave photon in that qubit is the zero or one state. So no photon means zero, one photon means one, and because it is a single photon, if you understand quantum mechanics, it can be zero and one at the same time. You can get superposition by applying pulses in a controlled manner.
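The superposition Zeng describes can be made concrete with a tiny numeric sketch (pure Python, no quantum SDK): a Hadamard rotation, the textbook stand-in for such a pulse, takes a qubit from the definite zero state into an equal superposition of zero and one.

```python
import math

def apply(gate, state):
    """Multiply a 2x2 complex matrix by a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]          # Hadamard gate

zero = [1 + 0j, 0 + 0j]        # qubit initialized to |0> (no photon)
plus = apply(H, zero)          # after the "pulse": equal superposition

# Born rule: the probability of each measurement outcome is |amplitude|^2.
p0, p1 = abs(plus[0]) ** 2, abs(plus[1]) ** 2
print(round(p0, 3), round(p1, 3))   # 0.5 0.5
```

Measuring such a qubit yields zero or one with equal probability, which is what "zero and one at the same time" cashes out to operationally.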

[When programming,] you are writing some instructions, these are digital instructions, which in our case are in Quil, our instruction language. Those digital instructions get turned into analog microwave/dc pulses on the chip, and then computation happens, and answers come back as analog signals which are changed back into digital signals. That gives a little bit of a sense of what happens when you run a computation.

HPCwire: A big advantage is the scale of the quantum computation that results from being able to entangle qubits, right? In essence a single instruction is executed on all of the associated entangled qubits at the same time in parallel.

Zeng: You can think of quantum computers as large linear algebra machines. Every operation time step is approximately 100 nanoseconds, depending on exactly what operations you do. You are doing a 2^n by 2^n complex matrix multiplication, and every matrix has more elements than you can measure. There’s no technology that has a scaling like that that humans are aware of. But of course there are some caveats. Today there are some limitations on the multiplications because you accumulate errors, and at the end of the day, when you sample from that, you can only get out a small number of bits because you are sampling from a probabilistic computation.

[In any case] there is this fundamentally impressive resource, which is exponentially large linear algebra (2^n by 2^n), inside quantum computing. The game becomes how to build systems and programming languages to make use of and apply that resource.
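The exponential resource Zeng is pointing at is easy to quantify: an n-qubit state is a vector of 2^n complex amplitudes, so simulating the gate arithmetic classically means storing (and multiplying) objects of that size. A quick back-of-the-envelope, at 16 bytes per complex amplitude:

```python
# Classical memory needed just to store an n-qubit state vector.
for n in (16, 32, 64, 128):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16   # 16 bytes per complex double
    print(f"{n:3d} qubits -> 2^{n} = {amplitudes:.3e} amplitudes, "
          f"{bytes_needed / 2 ** 40:.3e} TiB")
```

Sixteen qubits fit in about a megabyte; 64 qubits already need hundreds of millions of terabytes, which is why a 128-qubit device cannot be brute-force simulated.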

HPCwire: What about the gates, how do you achieve a so-called universal quantum computer?

Bloch sphere

Zeng: In a classical circuit, a NAND (not and) gate is what’s known as a universal gate. So if you have a NAND gate you can make any kind of Boolean function and computation. This is also true in quantum computing. There are lots of universal gate sets; you can do anything on a quantum computer as long as you have some set of building blocks. Different hardware players have different building blocks, but they tend to be universal. In our case, we can apply arbitrary single-qubit operations. Qubits can be represented by a Bloch sphere; we can do arbitrary rotations about the x, y, or z axis. So we really have full control over a single qubit.

The Hadamard gate is one example of rotations on the sphere. You kind of go 180 degrees around one axis and then 90 degrees around another one. We have that. That covers all individual qubits. Then on two qubits we have something called a controlled-phase gate, which means I give you two qubits, one called the control and one called the target. If the control is in the one state, then I apply a phase change to the target, which is actually a 180 degree rotation around the z axis of the Bloch sphere. You have those gates. That’s universal quantum computing; we can do any arbitrary operation. Those are what we call the native gates. Those are the gates that live on our hardware; we can actually tune up some others and we have worked with them, [but] those are the currently supported gates on our hardware set.
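Zeng's point that arbitrary rotations are enough can be checked numerically. A classic identity is that the Hadamard equals RZ(pi/2)·RX(pi/2)·RZ(pi/2) up to a global phase (which is physically unobservable). A pure-Python sketch, using the standard matrix definitions of RZ and RX:

```python
import cmath
import math

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rz(t):
    """Rotation by angle t about the z axis."""
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def rx(t):
    """Rotation by angle t about the x axis."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -1j * s], [-1j * s, c]]

q = math.pi / 2
m = matmul(rz(q), matmul(rx(q), rz(q)))

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]

# Divide out the global phase (multiply by e^{i*pi/2} = i) and compare.
ok = all(abs(1j * m[i][j] - H[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)   # True
```

The same kind of decomposition is how a compiler targets hardware whose native gates are rotations rather than named textbook gates.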

HPCwire: How important is close integration between classical and quantum processing?

Zeng: It’s especially important for the latency. You are running the program many times because it is probabilistic. The algorithms that are most useful today are probabilistic so you optimize the algorithm for the QPU. So you will write a parameterized version of the program for your QPU, with some parameters in it. You’ll run it once and then you’ll tweak the parameters and run it again. Your classical computer figures out how to adjust the parameters and you’ll run this loop back many, many times. This is the programming model that has come out in the last few years. It’s a hybrid algorithm model and this allows you to optimize around imperfections in the quantum processors and get much more out of a relatively small system than anyone had thought possible. If you have that loop going back and forth the latency really matters.
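The hybrid loop Zeng describes can be sketched as a toy in plain Python. The "quantum" step here is a noiseless classical stand-in (the expectation of Z after rotating |0> by RY(theta) is cos(theta)); the classical optimizer nudges the parameter and reruns, many times, which is exactly where per-iteration latency bites.

```python
import math

def quantum_expectation(theta):
    # Stand-in for a probabilistic QPU run estimating <Z> after RY(theta)|0>.
    # On real hardware this would be many shots on the quantum processor.
    return math.cos(theta)

theta, step = 0.5, 0.1
for _ in range(200):                                   # classical outer loop
    # Finite-difference gradient: two more "QPU" calls per iteration.
    grad = (quantum_expectation(theta + 1e-4)
            - quantum_expectation(theta - 1e-4)) / 2e-4
    theta -= step * grad                               # nudge parameter, rerun

print(round(quantum_expectation(theta), 4))   # -1.0, the minimum of <Z>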

HPCwire: What about error correction which we hear so much about? How do you handle that?

Zeng: There’s a way to do error correction, but I wouldn’t call it correction; I’d call it error mitigation. The goal is to have ways of dealing with the noise as well as error correction. There’s this hybrid method where you optimize the quantum algorithm, which is very good for making robust algorithms, but we are also working very hard on improving the chips themselves so their error rates are lower, and thirdly on developing the technology to apply active quantum error correction schemes. So those exist. You look for an error, then you correct it with redundancy; you have extra qubits to try to correct the errors. It’s a cool concept that’s on our roadmap for the next few years.
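The redundancy idea Zeng sketches can be shown in classical miniature: encode one logical bit in three physical bits, let at most one flip, and recover by majority vote. (Real quantum codes are subtler, since a qubit can't simply be copied, but the extra-qubits-plus-correction principle is the same.)

```python
def encode(bit):
    """Protect one logical bit with threefold redundancy."""
    return [bit, bit, bit]

def majority_decode(bits):
    """Recover the logical bit by majority vote."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1                  # a single bit-flip error sneaks in
print(majority_decode(codeword))  # 1, the logical bit survives
```

Quantum codes like the surface code follow the same budget logic: extra physical qubits buy the ability to detect and undo a bounded number of errors.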

HPCwire: How about a few comments on competing technologies?

Zeng: Superconducting qubits and ion trap qubits have really, in the last couple of years, leapt ahead of the pack. IBM and Alibaba provide systems today. We’re also working on superconducting qubits. Quantum computing is hard, but superconducting is the easiest of the options to scale. It’s really the approach that is most likely to get to quantum advantage first. In the long term there are a few other approaches that might matter, in 10 or 20 years. There are always revolutions in technology, but superconducting qubits are the ones so far that scale and have been useful to people earliest.

Ion traps certainly have a long history. I think there are a few [systems] up to like 60 or 80 qubits that turn out pretty good. They have lower error rates. Their big challenge is you can only make 60 or 80 qubits or so in a single trap; how they get bigger than that is the question. There are theoretical approaches but they have yet to be demonstrated. I would say ion traps are in the race, but we’ll see how it unfolds over the next few years. That’s why we are excited to go to 128 qubits; it’s a strong marker for anyone else to get up to.

HPCwire: What about D-Wave’s annealing technology? It is sometimes criticized but D-Wave does have machines, albeit research machines, and customers.

Zeng: The thing to remember is that while they are both called quantum computers, quantum annealers are really a very different technology from the gate model. Annealers are more different from our quantum computers than GPUs are from CPUs. It’s not a digital machine, it’s an analog machine. It may have some kind of application but it’s a very different piece of tech, and there are certain things you have in gate model computing, such as it’s easy to show there’s quantum mechanical [activity]. Because we have control, we can show it’s quantum mechanical. Secondly, we can correct errors. There’s not really an error correction path for an analog machine like D-Wave’s. The key thing about quantum computing is this exponential scaling that you have control over, and it doesn’t apply to D-Wave. You have to have gate model control to really unlock that exponential resource.

HPCwire: We saw the announcement that Chinese scientists had set a record for quantum entanglement of 18 qubits which seems impressive. How many qubits are you able to entangle with the 16-qubit processor?

Zeng: As we have done with all of our previous chips, we’ll release the full spec sheet when it’s done, which will let you know how many qubits you can entangle and at what error. The headline announcement from China is a little bit vague because entanglement is not a yes or no thing. You can be 60 percent entangled, 70 percent entangled, or 80 percent entangled and so on. On our 8-qubit processor you can entangle large numbers of qubits, but it might be with a high error rate. Those are the kind of benchmarks that we are pretty excited to share when ready.
