Rigetti’s (and Others’) Pursuit of Quantum Advantage

By John Russell

September 11, 2018

Remember ‘quantum supremacy,’ the much-touted but little-loved idea that the age of quantum computing would be signaled when quantum computers could tackle tasks that classical computers couldn’t? It was always a fuzzy idea and a moving target; classical computers keep advancing too. Today, ‘quantum advantage’ has supplanted quantum supremacy as the milestone of choice. Broadly, quantum advantage is the ability of quantum computers to tackle real-world problems, likely a small set to start with, more effectively than their classical counterparts.

While quantum advantage has its own fuzzy edges, it nevertheless seems a more moderate idea whose emergence will be signaled by the competitive edge it offers industry and science, perhaps not unlike the early emergence of GPUs, which offered advantages in specific applications. Talked-about targets include quantum chemistry, machine learning, and optimization, hardly newcomers to the quantum computing hit list.

Last week quantum computing pioneer Rigetti Computing announced a $1 million prize for the first conclusive demonstration of quantum advantage performed on its also just-announced Rigetti Quantum Cloud Services (QCS) platform (See HPCwire coverage of the announcement). Rigetti, you may recall, is Lilliputian compared to its better-known rivals (IBM, Google, Microsoft, Alibaba) in the race to develop quantum computers yet has muscled its way into the thick of the quantum computing race.

Rigetti Computing’s full stack

Founded in 2013 by former IBMer Chad Rigetti, and based in Berkeley, CA, Rigetti bills itself as a full-stack quantum company with its own fabrication and testing facilities as well as a datacenter. Headcount is roughly 120 and its efforts span hardware and a complete software environment (Forest). The state of the hardware technology at Rigetti today is a new 16-qubit quantum processor whose architecture, Rigetti says, will scale to 128 qubits by around this time next year. The $1M prize and cloud services platform just introduced are efforts to stoke activity among applications developers and potential channel partners.

“We definitely need bigger hardware than we have today [to achieve quantum advantage],” said Will Zeng, Rigetti head of evangelism and special products, in a lengthy interview with HPCwire. “We believe that our 128-qubit processor is going to be a sufficient size of quantum memory to step up to the plate and try to head toward quantum advantage. But we will need to make continued improvements in algorithms. To really find quantum advantage we are probably still far off. It will remain the major pursuit of the industry for the next five years.”

The rest of the HPC world is watching quantum’s rise with interest. Bob Sorensen, who leads Hyperion Research’s quantum tracking practice, noted, “The big question here for Rigetti, as well as other QC aspirants offering on-line cloud access, is if their particular QC software ecosystem is accessible enough to entice a wide range of users to experiment, but still sophisticated enough to support the development of breakthrough algorithms. Only time will tell, but either way, the more developers attracted to QC, the greater the potential of someone making real algorithmic advances. And I don’t think offering a million dollars to do that can hurt.

“I particularly like the emphasis by Rigetti on the integrated traditional HPC and cloud architecture.  I think that some of the first real performance gains we see in this sector will come out of the confluence of traditional HPC and QC capabilities,” said Sorensen.

Quantum computing (QC) remains mysterious for many of us and understandably so. Many of its ideas are counter-intuitive. Think superposition. Indeed, the way QC is implemented is sort of the reverse of traditional von Neumann architecture. In superconducting approaches, like the one Rigetti follows, instead of gates etched in silicon with data flowing through them, qubits (memory registers, really) are ‘etched’ in the silicon and microwaves interact with the qubits as gates to perform computations.

Will Zeng, Rigetti Computing. Source: Everipedia.org

Don’t give up now. Presented below are portions of HPCwire’s interview with Zeng in which he looks at the quantum computing landscape writ large, including deeper dives into Rigetti technology and strategy, and also takes a creditable stab at clarifying how quantum computing works and explaining Rigetti’s hybrid classical-quantum approach.

HPCwire: Thanks for your time Will. In the last year or two quantum computing has burst onto the more public scene. Are we moving too fast in showcasing quantum computing? What’s the reality and what are the misconceptions today?

Zeng: I wouldn’t say it’s too soon. It’s a new type of technology and it’s going to take a while to communicate the subtlety of it. As a developer, I am excited that folks are talking about quantum computers. In terms of misconceptions, one of the important things to emphasize is that quantum computers are now real and are here. They are something that you can download a Python library for and, in 15 or 20 minutes, run a program on, and not just from us but from a couple of companies. Not more than a couple, [but still] that’s a really big deal.

The second thing to note is that just because quantum computers are here today doesn’t mean that breaking encryption is going to happen any time soon. A lot of what is holding back real-world applications is that the algorithms of the last 20 years were [designed] for perfect quantum computers, and the quantum computers we have today, while they are real, have some limitations. You have to think about them more practically, and you need software to actually do this, and you need people who are educated in that software to work with it to find applications in the so-called near-term horizon.

HPCwire: Given the giant size of your competitors, why did Rigetti choose to become vertically integrated? Seems like an expensive gamble.

Zeng: I was here at the beginning and we were initially thinking, ‘let’s try to be as fabless as we can.’ We talked to a lot of people and looked at a lot of places, and it turned out there were so many innovations that needed to get made that we would be paying for the innovation anyway, so we might as well build it up in house. We were able to find capital and that’s paid off. We were able to go from building our first qubit in early 2016 to, two years later, starting to talk about triple digits (qubits), and we caught up to IBM, which has been making qubits for 15 years.

Really, it’s necessary to deliver the whole product. A quantum chip, while very cool to show people, isn’t something you can sell to anybody and have them know how to use it. You have to go all the way up to the QCS (quantum cloud services) layer. Because we chose to deliver the whole product we’ve also been able to optimize our whole stack. Having our own fab facility and doing our own testing means our iteration cycles are much tighter and we are able to advance more rapidly than if we relied on a supply chain that doesn’t really exist yet.

HPCwire: Maybe take a moment to talk about the just announced QCS and distinguish it from, say, IBM’s Q platform.

Zeng: The types of algorithms that have been developed over the last few years that are most likely going to be applied to quantum advantage, such as in the areas of optimization, machine learning, and quantum chemistry, all require very tight integration between the quantum system and a classical compute stack. All previous offerings, ours [and] IBM’s, have a very loose link between the classical part and the quantum part. There’s actually an API separating the two. This means when you want to run some kind of algorithm that involves an integration between classical and quantum, it might have a latency of seconds between iterations. So I will run something on the classical side, then I’ll run a quantum API call, and I’ll get an answer back a second or a few seconds later. With QCS we’re working toward lowering the latency by up to 20x-50x.

Chad Rigetti, CEO

The development flow is you log into your quantum machine image and practice by developing on a simulator back end and then, along with a sort of self-service scheduling, deploy your quantum machine image on a QPU back end. One of the reasons we talk about being the first real quantum platform with QCS is that what I am describing sounds a little bit like how an AWS platform works for you – set up your instance and you’ve got different back ends, different types of GPUs or CPUs. So in terms of the terminology setup, we think about Quantum Cloud Services as the big bucket for our whole platform offering. The Forest SDK, which includes Quil and Grove and our Python and other libraries, is going to come preinstalled in everyone’s quantum machine image that they log into. You can still download the SDK locally and work there if you want.

One of the innovations in the QCS framework is [the ability] to do what’s called parameterized compilation. Think back to the integrated loops where you are running a classical-quantum computation and you have to go between classical and quantum many times. If you have to compile every time you change a parameter, then that’s going to increase your latency, sometimes by a lot. It can take seconds to hours to compile, depending on what we are trying to compile. We had to change our compiler and upgrade it so that you can actually compile a parameterized program. You compile once, then tweak the parameters very quickly, without having to recompile every single time. If we hadn’t done this we wouldn’t have gotten any of the latency advantages that we have built into QCS.
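The compile-once, bind-later idea is easy to see in miniature. The Python sketch below is a toy analogy, not Rigetti’s actual compiler or API: the expensive “compilation” step runs once, and each subsequent iteration only swaps in a new parameter value (the stand-in for a hardware run is the analytic expectation cos θ of an RX(θ) rotation on one qubit):

```python
import math

# Toy sketch of parameterized compilation (an analogy, not Rigetti's
# toolchain): pay the expensive compile cost once, then bind new
# parameter values cheaply on every iteration of the hybrid loop.

def compile_parametric(param_name):
    """Stand-in for the slow compile step; returns an executable that
    accepts the parameter at run time instead of baking it in."""
    def executable(**params):
        theta = params[param_name]
        # stand-in for running RX(theta) on a qubit and reading out <Z>
        return math.cos(theta)
    return executable

program = compile_parametric("theta")   # compiled once
# each call below only rebinds the parameter -- no recompilation
results = [program(theta=t) for t in (0.0, math.pi / 2, math.pi)]
# results ~ [1.0, 0.0, -1.0]
```

In the real system the per-iteration saving is the difference between re-invoking a compiler (seconds to hours, per Zeng) and a near-instant parameter rebind.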

HPCwire: What kind of demand are you expecting for QCS?

Zeng: It is still a very technical environment but the goal of QCS is to make this much more accessible. We can support at least 100, 500 or more simultaneous quantum machine images now and we are going to be scaling this up over time. We’re starting with one QPU (16-qubit) in the deployed system as we start to ramp up our research partners. But if we get a lot more demand we’ll do more. The way we are doing this release is we are first making it available to our research partners; some of them are in academia, some of them are in industry.

So our scheduling is not a queue-based system. We allow people to book dedicated time on the QPU. Previously it was manual, but in the quantum cloud services model it will be automated. In your machine you can look at the calendar and schedule an hour block of compute. During that hour your quantum [jobs] can be deployed on the quantum processor and you will have dedicated access to it. That dedicated access window is critical for the integrated classical-quantum iterative approach. If your [job] gets interrupted by other folks, it can become a very unreliable interface to the quantum computer when you are doing iterative work.

HPCwire: How quickly will you ramp up the chips in QCS? In the past Rigetti has said it would have a roughly six-month cadence for introducing new chips. Maybe also give a brief description of the QCS datacenter.

Zeng: We’re launching QCS with the first chips in a new family that leads up to our 128-qubit processor. The first member of the family is a 16-qubit chip in a new design. We’ll have a 32-qubit variant as we build out toward the 128-qubit over the next year. The new design incorporates some of what we’ve learned in building coherent qubits but also, more importantly, in making the device scalable. You’ll see the 16-qubit layout, and that tiles to two chips to reach 32 qubits and [so on]. One highlight is 3D signal delivery, delivering signals not just to the side of the chip but also to the interior of the chip. This has been a big hardware hurdle that we have been working on for a few years and have now solved. If we didn’t have that, and a few other things on the fab side as well, we wouldn’t be able to get to 128 qubits.

[As far as a description of the QCS equipment], there’s kind of the cylinder and rack of controls next to it; the cylinder houses the quantum computer and cools it down. The quantum machine image is going to be hosted on a rack of regular servers that’s right there with the control servers in the same building in the same datacenter. So when you log in you are actually going to log into something that is right there with the quantum computer.

HPCwire: Most of us think about von Neumann architectures and gates etched in silicon and data moving through them. Quantum chips are almost the reverse. Qubits, the registers of the data if you will, are ‘etched’ in silicon and you operate on them by applying external signals, the gates, to them? Is this close?

Zeng: Yes. You have a chunk of quantum memory, and you have operations applied to it. What’s interesting is that because a quantum chip is a chunk of quantum memory it’s kind of reconfigurable at will by applying different pulses to it. In a sense it’s maybe a little more like an FPGA analog.

The way quantum computing works in the superconducting qubit model is you cool down a microwave/radiofrequency circuit, which is broadly aluminum on silicon, pretty standard technology. You cool it to ten millikelvin (mK) and apply microwave pulses and dc signals to cause interactions to happen. An individual qubit is a resonator. The presence or absence of a microwave photon in that qubit is the zero or one state. So no photon means zero, one photon means one, and because it is a single photon, if you understand quantum mechanics, it can be zero and one at the same time. You can get superposition by applying pulses in a controlled manner.
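Zeng’s zero-photon/one-photon picture maps onto the textbook state-vector model. A minimal sketch (pure Python, illustrative only, not hardware code): a qubit is a pair of complex amplitudes, a calibrated “pulse” acts as a 2x2 unitary matrix, and a Hadamard-like pulse takes the no-photon state to an equal superposition:

```python
import math

# Toy state-vector sketch: a qubit is two complex amplitudes, and a
# microwave "pulse" is a 2x2 unitary applied to them.

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

zero = [1 + 0j, 0 + 0j]              # no photon -> the |0> state
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                # Hadamard "pulse"

plus = apply(H, zero)                # equal superposition of 0 and 1
probs = [abs(a) ** 2 for a in plus]  # measurement probabilities
# probs ~ [0.5, 0.5]: measuring gives 0 or 1 with equal likelihood
```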

[When programming,] you are writing some instructions, these are digital instructions, which in our case are in Quil, our instruction language. Those digital instructions get turned into analog microwave/dc pulses on the chip, and then computation happens, and answers come back as analog signals which are changed back into digital signals. That gives a little bit of a sense of what happens when you run a computation.

HPCwire: A big advantage is the scale of the quantum computation that results from being able to entangle qubits, right? In essence a single instruction is executed on all of the associated entangled qubits at the same time in parallel.

Zeng: You can think of quantum computers as large linear algebra machines. Every operation time step is approximately 100 nanoseconds, depending exactly on what operations you do. You are doing a 2^n-by-2^n complex matrix multiplication, and every matrix has more elements than you can measure. There’s no technology that has a scaling like that that humans are aware of. But of course there are some caveats. Today there are some limitations on the multiplications because you accumulate errors, and at the end of the day, when you sample from that, you can only get out a small number of bits because you are doing sampling of a probabilistic computation.

[In any case] there is this fundamentally impressive resource, which is exponentially large linear algebra (2^n by 2^n), inside quantum computing. The game becomes how to build systems and programming languages to make use of and apply that resource.
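The 2^n scaling Zeng cites is what makes classical simulation blow up. The toy simulator below (illustrative only, not production code) tracks all 2^n amplitudes of an n-qubit register and applies a single-qubit gate to each qubit; at n = 3 that is 8 amplitudes, while at n = 128 it would be 2^128 of them, far beyond any classical memory:

```python
import math

# Toy illustration of "exponentially large linear algebra": an n-qubit
# state is a vector of 2**n complex amplitudes, and gates act on it.

def zero_state(n):
    amps = [0j] * (2 ** n)   # 2**n amplitudes just to hold the state
    amps[0] = 1 + 0j         # start in |00...0>
    return amps

def apply_single_qubit(gate, target, amps, n):
    """Apply a 2x2 gate to one qubit of the n-qubit state vector."""
    out = list(amps)
    step = 2 ** target
    for i in range(2 ** n):
        if (i >> target) & 1 == 0:          # pair states differing in that bit
            a0, a1 = amps[i], amps[i + step]
            out[i] = gate[0][0] * a0 + gate[0][1] * a1
            out[i + step] = gate[1][0] * a0 + gate[1][1] * a1
    return out

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
n = 3
amps = zero_state(n)
for q in range(n):           # Hadamard on every qubit
    amps = apply_single_qubit(H, q, amps, n)
# all 2**3 = 8 basis states now carry equal probability 1/8
```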

HPCwire: What about the gates, how do you achieve a so-called universal quantum computer?

Bloch sphere

Zeng: In a classical circuit, a NAND (not and) gate is what’s known as a universal gate. If you have a NAND gate you can make any kind of Boolean function and computation. This is also true in quantum computing. There are lots of universal gate sets where you can do anything on a quantum computer as long as you have some set of building blocks. Different hardware players have different building blocks but they tend to be universal. In our case, we can apply arbitrary single-qubit operations. Qubits can be represented by a Bloch sphere; we can do arbitrary rotations on the x, y, or z axis. So we really have full control over a single qubit.

The Hadamard gate is one example of rotations on the sphere. You kind of go 180 degrees around one axis and then 90 degrees around another one. We have that. That opens up all individual qubit operations. Then on two qubits we have something called a controlled phase (gate), which means that I give you two qubits, one called the control and one called the target. If the control is in the one state, then I apply a phase change to the target, which is actually a 180-degree rotation around the z axis of a Bloch sphere. You have those gates. That’s universal quantum computing. We can do any arbitrary operation. Those are what we call the native gates; those are the gates that live on our hardware. We can actually tune up some others and we have worked with them, [but] those are the currently supported gates on our hardware set.
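Zeng’s claim that rotations alone give full single-qubit control can be checked numerically. The sketch below (toy math, not Quil or the Forest API) composes three native-style rotations, Rz(90°)·Rx(90°)·Rz(90°), and confirms the result equals the Hadamard gate up to an unobservable global phase:

```python
import cmath
import math

# Verify that composed rotations reproduce Hadamard up to global phase.

def rz(theta):
    """Rotation by theta about the z axis of the Bloch sphere."""
    return [[cmath.exp(-1j * theta / 2), 0],
            [0, cmath.exp(1j * theta / 2)]]

def rx(theta):
    """Rotation by theta about the x axis of the Bloch sphere."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

half_pi = math.pi / 2
composed = matmul(rz(half_pi), matmul(rx(half_pi), rz(half_pi)))

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]

# divide out the (physically unobservable) global phase, then compare
phase = composed[0][0] / H[0][0]
matches = all(abs(composed[i][j] - phase * H[i][j]) < 1e-9
              for i in range(2) for j in range(2))
# matches is True: three native rotations realize the Hadamard
```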

HPCwire: How important is close integration between classical and quantum processing?

Zeng: It’s especially important for the latency. You are running the program many times because it is probabilistic. The algorithms that are most useful today are probabilistic, so you optimize the algorithm for the QPU. You will write a parameterized version of the program for your QPU, with some parameters in it. You’ll run it once and then you’ll tweak the parameters and run it again. Your classical computer figures out how to adjust the parameters and you’ll run this loop back many, many times. This is the programming model that has come out in the last few years. It’s a hybrid algorithm model, and it allows you to optimize around imperfections in the quantum processors and get much more out of a relatively small system than anyone had thought possible. If you have that loop going back and forth, the latency really matters.
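The loop Zeng describes can be sketched in a few lines. This is a simplified stand-in, not the Forest API: the “QPU” call returns the expectation cos θ of a one-parameter circuit, and a classical finite-difference gradient step tweaks θ between runs. Latency matters because every optimizer step costs at least two quantum round trips:

```python
import math

# Toy hybrid classical-quantum loop: classical optimizer outside,
# parameterized "QPU" evaluation inside.

def qpu_expectation(theta):
    # stand-in for running the parametric circuit and sampling <Z>
    return math.cos(theta)

theta, lr = 0.3, 0.25
for _ in range(200):                     # classical outer loop
    # finite-difference gradient: two "QPU" runs per step
    grad = (qpu_expectation(theta + 1e-4) -
            qpu_expectation(theta - 1e-4)) / 2e-4
    theta -= lr * grad                   # classical parameter update
# the loop drives <Z> = cos(theta) toward its minimum of -1 at theta = pi
```

Each pass through the loop crosses the classical/quantum boundary, which is why cutting the per-call latency from seconds to milliseconds multiplies straight through the whole optimization.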

HPCwire: What about error correction which we hear so much about? How do you handle that?

Zeng: There’s a way to do error correction, but I wouldn’t call it correction; I’d call it error mitigation. The goal is both to have ways of dealing with the noise and, eventually, full error correction. There’s the hybrid method, where you optimize the quantum algorithm, which is very good for making robust algorithms. We are also working very hard on improving the chips themselves so their error rates are lower, and thirdly on developing the technology to apply active quantum error correction schemes. Those exist. You look for an error, then you correct it with redundancy; you have extra qubits to try to correct the errors. It’s a cool concept that’s on our roadmap for the next few years.
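The redundancy idea generalizes a much older classical trick. The sketch below is a classical three-bit repetition code, far simpler than a real quantum code (which must also protect phase information and avoid directly measuring the data qubits), but it shows how extra bits plus a majority vote suppress the error rate from p to roughly 3p²:

```python
import random

# Classical repetition code: encode one logical bit in three physical
# bits; a majority vote corrects any single bit-flip.

random.seed(7)                       # fixed seed for reproducibility
p_flip = 0.05                        # per-bit error rate

def noisy_copy(bit):
    return bit ^ (random.random() < p_flip)

def send_encoded(bit):
    copies = [noisy_copy(bit) for _ in range(3)]
    return int(sum(copies) >= 2)     # majority vote

trials = 10_000
raw_errors = sum(noisy_copy(0) for _ in range(trials))
coded_errors = sum(send_encoded(0) for _ in range(trials))
# coded_errors << raw_errors: redundancy suppresses the logical error
# rate from p = 5% to roughly 3*p**2, under 1%
```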

HPCwire: How about a few comments on competing technologies?

Zeng: Superconducting qubits and ion trap qubits have really, in the last couple of years, leapt ahead of the pack. IBM and Alibaba provide systems today. We’re also working on superconducting qubits. Quantum computing is hard, but superconducting is the easiest of the options to scale. It’s really the approach that is most likely to get to quantum advantage first. In the long term there are a few other approaches that might matter, in 10 or 20 years. There are always revolutions in technology, but superconducting qubits are the ones so far that scale and have been useful to people earliest.

Ion traps certainly have a long history. I think there are a few [approaches] up to like 60 or 80 qubits that turn out pretty good. They have lower error rates. Their big challenge is you can only make 60 or 80 qubits or so in a single trap; how they get bigger than that is the question. There are theoretical approaches but they have yet to be demonstrated. I would say ion traps are in the race, but we’ll see how it unfolds over the next few years. That’s why we are excited to go to 128 qubits; it’s a strong marker for anyone else to get up to.

HPCwire: What about D-Wave’s annealing technology? It is sometimes criticized, but D-Wave does have machines, albeit research machines, and customers.

Zeng: The thing to remember is that while they are both called quantum computers, quantum annealers are really a very different technology from the gate model. Annealers are more different from our quantum computers than GPUs are from CPUs. It’s not a digital machine, it’s an analog machine. It may have some kind of application, but it’s a very different piece of tech, and there are certain things you have in gate-model computing. For one, it’s easy to show there’s quantum mechanical [activity]; because we have control, we can show it’s quantum mechanical. Secondly, we can correct errors. There’s not really an error correction path for an analog machine like D-Wave’s. The one thing about quantum computing is this exponential scaling that you have control over, and it doesn’t apply to D-Wave. You have to have gate-model control to really unlock that exponential resource.

HPCwire: We saw the announcement that Chinese scientists had set a record for quantum entanglement of 18 qubits which seems impressive. How many qubits are you able to entangle with the 16-qubit processor?

Zeng: As we have done with all of our previous chips, we’ll release the full spec sheet when it’s done, and that will let you know how many you can entangle with up to one error. The headline announcement for China is a little bit vague because entanglement is not a yes-or-no thing. You can be 60 percent entangled, 70 percent entangled, or 80 percent entangled and so on. On our 8-qubit processor you can entangle large numbers of qubits, but it might be with a high error rate. Those are the kind of benchmarks that we are pretty excited to share when ready.
