Rigetti's (and Others') Pursuit of Quantum Advantage

By John Russell

September 11, 2018

Remember ‘quantum supremacy’, the much-touted but little-loved idea that the age of quantum computing would be signaled when quantum computers could tackle tasks that classical computers couldn’t? It was always a fuzzy idea and a moving target; classical computers keep advancing too. Today, ‘quantum advantage’ has supplanted quantum supremacy as the milestone of choice. Broadly, quantum advantage is the ability of quantum computers to tackle real-world problems, likely a small set to start with, more effectively than their classical counterparts.

While quantum advantage has its own fuzzy edges, it nevertheless seems a more moderate idea whose emergence will be signaled by the competitive edge it offers industry and science, perhaps not unlike the early emergence of GPUs, which offered advantages in specific applications. Talked-about targets include quantum chemistry, machine learning, and optimization, hardly newcomers to the quantum computing hit list.

Last week quantum computing pioneer Rigetti Computing announced a $1 million prize for the first conclusive demonstration of quantum advantage performed on its also just-announced Rigetti Quantum Cloud Services (QCS) platform (See HPCwire coverage of the announcement). Rigetti, you may recall, is Lilliputian compared to its better-known rivals (IBM, Google, Microsoft, Alibaba) in the race to develop quantum computers yet has muscled its way into the thick of the quantum computing race.

Rigetti Computing’s full stack

Founded in 2013 by former IBMer Chad Rigetti, and based in Berkeley, CA, Rigetti bills itself as a full-stack quantum company with its own fabrication and testing facilities as well as a datacenter. Headcount is roughly 120 and its efforts span hardware and a complete software environment (Forest). The state of the hardware technology at Rigetti today is a new 16-qubit quantum processor whose architecture, Rigetti says, will scale to 128 qubits by around this time next year. The $1M prize and cloud services platform just introduced are efforts to stoke activity among application developers and potential channel partners.

“We definitely need bigger hardware than we have today [to achieve quantum advantage],” said Will Zeng, Rigetti head of evangelism and special products, in a lengthy interview with HPCwire. “We believe that our 128-qubit processor is going to be a sufficient size of quantum memory to step up to the plate and head towards quantum advantage. But we will need to make continued improvements in algorithms. To really find the quantum advantage we are probably far off; it will remain the major pursuit of the industry for the next five years.”

The rest of the HPC world is watching quantum’s rise with interest. Bob Sorensen, who leads Hyperion Research’s quantum tracking practice, noted, “The big question here for Rigetti, as well as other QC aspirants offering on-line cloud access, is if their particular QC software ecosystem is accessible enough to entice a wide range of users to experiment, but still sophisticated enough to support the development of breakthrough algorithms. Only time will tell, but either way, the more developers attracted to QC, the greater the potential of someone making real algorithmic advances. And I don’t think offering a million dollars to do that can hurt.

“I particularly like the emphasis by Rigetti on the integrated traditional HPC and cloud architecture.  I think that some of the first real performance gains we see in this sector will come out of the confluence of traditional HPC and QC capabilities,” said Sorensen.

Quantum computing (QC) remains mysterious for many of us and understandably so. Many of its ideas are counter-intuitive. Think superposition. Indeed, the way QC is implemented is sort of the reverse of traditional von Neumann architecture. In superconducting approaches, like the one Rigetti follows, instead of gates etched in silicon with data flowing through them, qubits (memory registers, really) are ‘etched’ in the silicon and microwaves interact with the qubits as gates to perform computations.

Will Zeng, Rigetti Computing. Source: Everipedia.org

Don’t give up now. Presented below are portions of HPCwire’s interview with Zeng in which he looks at the quantum computing landscape writ large, including deeper dives into Rigetti technology and strategy, and also takes a creditable stab at clarifying how quantum computing works and explaining Rigetti’s hybrid classical-quantum approach.

HPCwire: Thanks for your time Will. In the last year or two quantum computing has burst onto the more public scene. Are we moving too fast in showcasing quantum computing? What’s the reality and what are the misconceptions today?

Zeng: I wouldn’t say it’s too soon. It’s a new type of technology and it’s going to take a while to communicate the subtlety of it. As a developer, I am excited that folks are talking about quantum computers. In terms of misconceptions, one of the important things to emphasize is that quantum computers are now real and are here. They are something that you can download a Python library for and, in 15 or 20 minutes, run a program on, and not just from us but from a couple of companies. Not more than a couple, [but still] that’s a really big deal.

The second thing to note is that just because quantum computers are here today doesn’t mean that breaking encryption is going to happen any time soon. A lot of what is holding back real-world applications is that the algorithms of the last 20 years were [designed] for perfect quantum computers, and the quantum computers we have today, while they are real, have some limitations. You have to think about them more practically, and you need software to actually do this, and you need people who are educated in that software to work with it to find applications in the so-called near-term horizon.

HPCwire: Given the giant size of your competitors, why did Rigetti choose to become vertically integrated? Seems like an expensive gamble.

Zeng: I was here at the beginning and we were initially thinking, ‘let’s try to be as fabless as we can,’ and we talked to a lot of people and looked at a lot of places, and it turned out there were so many innovations that needed to get made that we would be paying for the innovation anyway, so we might as well build it up in house. We were able to find capital and that’s paid off. We were able to go from building our first qubit in early 2016 to two years later starting to talk about triple digits (qubits), and we caught up to IBM, which has been making qubits for 15 years.

Really, it’s necessary to deliver the whole product. A quantum chip, while very cool to show people, isn’t something you can sell to anybody and have them know how to use. You have to go all the way up to the QCS (quantum cloud services) layer. Because we chose to deliver the whole product we’ve also been able to optimize our whole stack. Having our own fab facility and doing our own testing means our iteration cycles are much tighter and we are able to advance more rapidly than relying on a supply chain that doesn’t really exist yet.

HPCwire: Maybe take a moment to talk about the just announced QCS and distinguish it from, say, IBM’s Q platform.

Zeng: The types of algorithms that have been developed over the last few years that are most likely going to be applied to quantum advantage, such as in the areas of optimization, machine learning, and quantum chemistry, all require very tight integration between the quantum system and a classical compute stack. All previous offerings, ours and IBM’s, have a very loose link between the classical part and the quantum part. There’s actually an API separating the two. So this means when you want to run some kind of algorithm that involves an integration between classical and quantum, it might have a latency of seconds between iterations. So I will run something on the classical side, then I’ll run a quantum API call and I’ll get an answer back a second or a few seconds later. With QCS we’re working toward lowering the latency by up to 20x-50x.

Chad Rigetti, CEO

The development flow is you log into your quantum machine image and practice by developing on a simulator back end and then, along with a sort of self-service scheduling, deploy your quantum machine image on a QPU back end. One of the reasons we talk about being the first real quantum platform with QCS is what I am describing sounds a little bit like how an AWS platform works for you – set up your instance and you’ve got different back ends, different types of GPUs or CPUs. So in terms of the terminology setup, we think about Quantum Cloud Services as the big bucket for our whole platform offering. So the Forest SDK, which includes Quil and Grove and our Python or other libraries, is going to come preinstalled in everyone’s quantum machine image that they log into. You can still download locally the SDK and work there if you want.

One of the innovations in the QCS framework is [the ability] to do what’s called parameterized compilation. Think back to the integrated loops where you are running a classical-quantum computation and you have to go between classical and quantum many times. If you have to compile every time you change a parameter, then that’s going to increase your latency, and sometimes by a lot. It can take seconds to hours to compile depending on what we are trying to compile. We had to change our compiler and upgrade it so that you can actually compile parameterized programs. You compile once, and then tweak the parameters very quickly, without having to recompile every single time. If we hadn’t done this then we wouldn’t have gotten any of the latency advantages that we have built into QCS.
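The idea behind parameterized compilation can be sketched with a toy model (illustrative Python only, not the actual Forest/Quil toolchain): the expensive compile step runs once and returns an executable with a free parameter, and each iteration merely binds a new value.

```python
import numpy as np

def compile_parameterized():
    # Toy "compiler": pretend this step is expensive (pulse scheduling,
    # optimization, ...). It returns an executable for an RX(theta)
    # rotation applied to |0>, with theta left as a free parameter.
    def executable(theta):
        rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                       [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
        state = rx @ np.array([1, 0], dtype=complex)
        return np.abs(state) ** 2  # probabilities of measuring |0>, |1>
    return executable

program = compile_parameterized()      # compile once...
for theta in (0.0, np.pi / 2, np.pi):  # ...then bind new parameters freely
    print(program(theta))              # no recompilation per iteration
```

In a real hybrid loop the per-iteration cost is dominated by exactly this bind-and-run step, which is why amortizing the compile matters.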

HPCwire: What kind of demand are you expecting for QCS?

Zeng: It is still a very technical environment but the goal of QCS is to make this much more accessible. We can support at least 100, 500 or more simultaneous quantum machine images now and we are going to be scaling this up over time. We’re starting with one QPU (16-qubit) in the deployed system as we start to ramp up our research partners. But if we get a lot more demand we’ll do more. The way we are doing this release is we are first making it available to our research partners; some of them are in academia, some of them are in industry.

So our scheduling is not a queue-based system. We allow people to book dedicated time on the QPU. Previously it was manual but in the quantum cloud services model it will be automated. In your machine you can look at the calendar and schedule an hour block of compute. During that hour your quantum [jobs] can be deployed on the quantum processor and you will have dedicated access to it. That dedicated access window is critical for the integrated classical-quantum iterative approach. If your [job] gets interrupted by other folks, it can become a very unreliable interface to the quantum computer when you are doing iterative work.

HPCwire: How quickly will you ramp up the chips in QCS? In the past Rigetti has said it would have a roughly six-month cadence for introducing new chips. Maybe also give a brief description of the QCS datacenter.

Zeng: We’re launching QCS with chips in our new family of chips that lead up to our 128-qubit processor. The first member of the family is a 16-qubit chip in a new design. We’ll have a 32-qubit variant as we build out towards the 128-qubit over the next year. The new design incorporates some of the lessons we’ve learned in building coherent qubits but also, more importantly, in how to make the device scalable. You’ll see the 16-qubit layout, and that tiles to two chips to reach 32 qubits and [so on]. One highlight is 3D signal delivery, so delivering signals not just to the side of the chip but also to the interior of the chip. This has been a big hardware hurdle for a few years that we have been working on and have now solved. If we didn’t have that, and a few other things as well on the fab side, we wouldn’t be able to get to 128 qubits.

[As far as a description of the QCS equipment], there’s kind of the cylinder and rack of controls next to it; the cylinder houses the quantum computer and cools it down. The quantum machine image is going to be hosted on a rack of regular servers that’s right there with the control servers in the same building in the same datacenter. So when you log in you are actually going to log into something that is right there with the quantum computer.

HPCwire: Most of us think about von Neumann architectures and gates etched in silicon and data moving through them. Quantum chips are almost the reverse. Qubits, the registers of the data if you will, are ‘etched’ in silicon and you operate on them by applying external signals, the gates, to them? Is this close?

Zeng: Yes. You have a chunk of quantum memory, and you have operations applied to it. What’s interesting is that because a quantum chip is a chunk of quantum memory it’s kind of reconfigurable at will by applying different pulses to it. In a sense it’s maybe a little more like an FPGA analog.

The way quantum computing works in the superconducting qubit model is you cool down a microwave/radiofrequency circuit, which is broadly aluminum on silicon, pretty standard technology. You cool it to ten millikelvin (mK) and apply microwave pulses and dc signals to cause interactions to happen. An individual qubit is a resonator. The presence or absence of a microwave photon in that qubit is the zero or one state. So no photon means zero, one photon means one, and because it is a single photon, quantum mechanically it can be zero and one at the same time. You can get superposition by applying pulses in a controlled manner.
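That photon-in-a-resonator picture maps directly onto textbook linear algebra. A minimal numpy sketch of putting a qubit into superposition (standard definitions, nothing hardware-specific):

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)  # |0>: no photon in the resonator
one  = np.array([0, 1], dtype=complex)  # |1>: one photon

# The Hadamard gate (realized on hardware as a controlled pulse sequence)
# takes |0> to an equal superposition of |0> and |1>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero
print(np.abs(state) ** 2)  # measurement probabilities: [0.5, 0.5]
```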

[When programming,] you are writing some instructions, these are digital instructions, which in our case are in Quil, our instruction language. Those digital instructions get turned into analog microwave/dc pulses on the chip, and then computation happens, and answers come back as analog signals which are changed back into digital signals. That gives a little bit of a sense of what happens when you run a computation.

HPCwire: A big advantage is the scale of the quantum computation that results from being able to entangle qubits, right? In essence a single instruction is executed on all of the associated entangled qubits at the same time in parallel.

Zeng: You can think of quantum computers as large linear algebra machines. Every operation time step is approximately 100 nanoseconds, depending exactly on what operations you do. You are doing a 2^n-by-2^n complex matrix multiplication, and every matrix has more elements than you can measure. There’s no technology that has a scaling like that that humans are aware of. But of course there are some caveats. Today there are some limitations on the multiplications because you accumulate errors, and at the end of the day, when you sample from that, you can only get out a small number of bits because you are sampling from a probabilistic computation.
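The scale Zeng is describing is easy to make concrete with plain arithmetic (nothing Rigetti-specific): just storing the state vector classically blows up exponentially with qubit count.

```python
# An n-qubit state is a vector of 2**n complex amplitudes, and each gate
# is mathematically a 2**n x 2**n matrix acting on that vector.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):  # complex128
    return (2 ** n_qubits) * bytes_per_amplitude

print(state_vector_bytes(16) / 1e6, "MB")   # ~1 MB for 16 qubits
print(state_vector_bytes(50) / 1e15, "PB")  # ~18 PB for 50 qubits:
                                            # beyond any classical machine
```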

[In any case] there is this fundamentally impressive resource, which is exponentially large linear algebra (2^n by 2^n), inside quantum computing. The game becomes how to build systems and programming languages to make use of and apply that resource.

HPCwire: What about the gates, how do you achieve a so-called universal quantum computer?

Bloch sphere

Zeng: In a classical circuit, a NAND (not and) gate is what’s known as a universal gate. So if you have a NAND gate you can make any kind of Boolean function and computation. This is also true in quantum computing. There are lots of universal gate sets: you can do anything on a quantum computer as long as you have some set of building blocks. Different hardware players have different building blocks but they tend to be universal. In our case, we can apply arbitrary single-qubit operations. Qubits can be represented by a Bloch sphere; we can do arbitrary rotations about the x, y, or z axis. So we really have full control over a single qubit.
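The classical universality Zeng cites is easy to demonstrate directly: every basic Boolean operator falls out of NAND alone.

```python
def nand(a, b):
    # The single universal classical gate
    return not (a and b)

# Every other Boolean operator built from NAND alone
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return or_(and_(a, not_(b)), and_(not_(a), b))

print(xor(True, False))  # True
```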

The Hadamard gate is one example of rotations on the sphere. You kind of go 180 degrees around one axis and then 90 degrees around another one. We have that. That covers all individual qubits. Then on two qubits we have something called a controlled-phase (gate), which means that I give you two qubits, one called the control and one called the target. If the control is in the one state, then I apply a phase change to the target, which is actually a 180-degree rotation around the z axis of the Bloch sphere. You have those gates. That’s universal quantum computing. We can do any arbitrary operation. Those are what we call the native gates. Those are the gates that live on our hardware; we can actually tune up some others and we have worked with them, [but] those are the currently supported gates on our hardware set.
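Those native gates can be checked in a few lines of numpy (standard textbook definitions, not Rigetti's calibration data): the Hadamard really is a 180-degree rotation followed by a 90-degree one, up to an unobservable global phase, and the controlled-phase gate is just a diagonal matrix.

```python
import numpy as np

def rz(theta):  # rotation about the z axis of the Bloch sphere
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def ry(theta):  # rotation about the y axis
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# 180 degrees about z, then 90 degrees about y, reproduces the Hadamard
# up to a global phase (here -i), which has no physical effect
H = ry(np.pi / 2) @ rz(np.pi)
H_textbook = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.allclose(H, -1j * H_textbook))  # True

# Controlled-phase (CZ): flips the sign of the |11> amplitude only
CZ = np.diag([1, 1, 1, -1])
```

Single-qubit rotations plus CZ form a universal gate set: any quantum circuit can be decomposed into these building blocks.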

HPCwire: How important is close integration between classical and quantum processing?

Zeng: It’s especially important for the latency. You are running the program many times because it is probabilistic. The algorithms that are most useful today are probabilistic, so you optimize the algorithm for the QPU. You will write a parameterized version of the program for your QPU, with some parameters in it. You’ll run it once and then you’ll tweak the parameters and run it again. Your classical computer figures out how to adjust the parameters and you’ll run this loop many, many times. This is the programming model that has come out in the last few years. It’s a hybrid algorithm model, and it allows you to optimize around imperfections in the quantum processors and get much more out of a relatively small system than anyone had thought possible. If you have that loop going back and forth, the latency really matters.
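A stripped-down version of that hybrid loop, with the QPU call replaced by a toy probabilistic model (all names here are illustrative, not the Forest API): a classical optimizer repeatedly nudges a circuit parameter based on noisy, sampled results.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_parameterized_circuit(theta, shots=1000):
    # Stand-in for a QPU call: sample a probabilistic circuit many times
    # and return the estimated probability of measuring |1>. In this toy
    # model that probability is sin^2(theta / 2).
    p1 = np.sin(theta / 2) ** 2
    return rng.binomial(shots, p1) / shots

# Classical outer loop: estimate a gradient from noisy QPU calls, nudge
# the parameter, repeat -- many fast round trips, so latency matters.
theta, eps, lr = 1.0, 0.1, 0.5
for _ in range(200):
    grad = (run_parameterized_circuit(theta + eps)
            - run_parameterized_circuit(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(np.sin(theta / 2) ** 2)  # near 0: the optimizer found the minimum
```

Each iteration costs three "QPU" calls here; with a seconds-long API round trip per call, 200 iterations is minutes to hours, which is exactly the latency QCS targets.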

HPCwire: What about error correction which we hear so much about? How do you handle that?

Zeng: There’s a way to do error correction, but I wouldn’t call it correction; I’d call it error mitigation. The goal is both to have ways of dealing with the noise and, eventually, error correction. There’s this hybrid method where you optimize the quantum algorithm, which is very good for making robust algorithms, but we are also working very hard on improving the chips themselves so their error rates are lower, and thirdly on developing the technology to apply active quantum error correction schemes. So those exist. You look for an error, then you correct it with redundancy; you have extra qubits to try to correct the errors. It’s a cool concept that’s on our roadmap for the next few years.

HPCwire: How about a few comments on competing technologies?

Zeng: Superconducting qubits and ion trap qubits have really, in the last couple of years, leapt ahead of the pack. IBM and Alibaba provide systems today. We’re also working on superconducting qubits. Quantum computing is hard, but superconducting is the easiest one of the options to scale. It’s really the approach that is most likely to get to quantum advantage first. In the long term there are a few other approaches that might matter, in 10 or 20 years. There are always revolutions in technology, but superconducting qubits are the ones so far that scale and have been useful to people earliest.

Ion traps certainly have a long history. I think there are a few [approaches] up to like 60 or 80 qubits that turn out pretty good. They have lower error rates. Their big challenge is you can only make 60 or 80 qubits or so in a single trap; how they get bigger than that is the question. There are theoretical approaches but they have yet to be demonstrated. I would say ion traps are in the race but we’ll see how it unfolds over the next few years. That’s why we are excited to go to 128 qubits; it’s a strong marker for anyone else to get up to.

HPCwire: What about D-Wave’s annealing technology? It is sometimes criticized but D-Wave does have machines, albeit research machines, and customers.

Zeng: The thing to remember is that while they are both called quantum computers, quantum annealers are really a very different technology from the gate model. Annealers are more different from our quantum computers than GPUs are from CPUs. It’s not a digital machine, it’s an analog machine. It may have some kind of application, but it’s a very different piece of tech, and there are certain things you have in gate-model computing, such as it’s easy to show there’s quantum mechanical [activity]. Because we have control, we can show it’s quantum mechanical. Secondly, we can correct errors. There’s not really an error correction path for an analog machine like D-Wave’s. The one thing about quantum computing is this exponential scaling that you have control over, and it doesn’t apply to D-Wave. You have to have gate-model control to really unlock that exponential resource.

HPCwire: We saw the announcement that Chinese scientists had set a record for quantum entanglement of 18 qubits which seems impressive. How many qubits are you able to entangle with the 16-qubit processor?

Zeng: As we have done with all of our previous chips, we’ll release the full spec sheet when it’s done that will let you know how many you can entangle with up to one error. The headline announcement from China is a little bit vague because entanglement is not a yes or no thing. You can be 60 percent entangled, 70 percent entangled, or 80 percent entangled and so on. On our 8-qubit processor you can entangle large numbers of qubits, but it might be with a high error rate. But those are the kind of benchmarks that we are pretty excited to share when ready.
