IBM-led Webinar Tackles Quantum Developer Community Needs

By John Russell

July 21, 2020

Quantum computing has many needs before it can become the transformative tool many expect. High on the list is a robust software developer community, not least because developers rarely follow ‘intended use’ rules; instead, they bend and break rules in unforeseen ways that transform new technology into applications that catalyze market growth, sometimes explosively.

In fact, efforts to fill out the quantum computing ecosystem (hardware and software) have continued to expand rapidly. Just today, for example, the Trump Administration announced the establishment of three new Quantum Leap Challenge Institutes around quantum sensing, hybrid classical-quantum systems, and large-scale quantum system development (See HPCwire coverage). Making full use of the new class of machines – whenever they arrive – will require a robust and sufficiently large quantum developer community.

Roughly a week ago, IBM held the first of a planned series of webinars on quantum computing – this one on The Future of Quantum Software Development – and it was a treat. Moderated by long-time Forrester analyst Jeffrey Hammond, the panel included three prominent quantum community voices with diverse quantum expertise – Blake Johnson (control systems delivery lead, IBM, and formerly Rigetti), Prineha Narang (Harvard professor and CEO/founder, Aliro Quantum), and tech entrepreneur William Hurley (CEO/founder, Strangeworks).

It was a “glass-half-full” crowd, so consider their enthusiasm when evaluating comments, but it was also a well-informed group not dismissive of challenges.

“You have to realize where we’re actually at,” said Hurley. “We’re not in the days of AMD versus Intel. We’re in the days of, like, little mechanical gates versus vacuum tubes and other solutions. These things aren’t computers from my pure developer standpoint. At this moment in time they’re really great equipment for exploring the quantum landscape and are the foundation for building machines.”

Indeed, today’s quantum systems are fragile and complicated and even the notion of gates can be confusing. “At what point is a gate so complicated that a developer is never going to touch it or understand it anyway? Is that 1,000 (qubits)? Is it 100,000? Is it a million? Because you hear people with all these stories about, you know, millions-of-qubits machines. If you had it, my question is who would program it? Because that sounds really difficult based on where we’re at, and where we’re trying to go,” said Hurley.

Here are a few themes from the discussion.

  • Fast Followers Will Lose! Quantum computing’s inflection point will be such that if you’re not already in the game, you won’t catch up. “You can’t be a fast follower. You have to either be placing a bet now or deciding to take the risk of not being involved at the point where something that is clearly a tremendous change in computing happens,” said Hurley with agreement from Johnson.
  • It Will be a Hybrid World. Yes, there will be ‘general purpose’ quantum computers although for a limited set of quantum-appropriate problems. There will also be specialization with various qubit technologies (ion trap, cold atom, superconducting, etc.) excelling on different applications. Lastly, all quantum computing will be done in a hybrid classical-quantum computing environment.
  • QA isn’t Far Away (Maybe). There was adamant agreement by panelists that a decade is too pessimistic…but there was waffling on just how soon quantum advantage (QA) would be achieved. Two-to-five years was the consensus guess-of-choice although Narang declined to make any guess. One had the sense their belief in quantum computing’s inevitable breakthrough trumped worry over when. Meanwhile the QA watch continues.
  • Expanded (Developer) Conversation Needed. The quantum conversation now is mostly between system developers who tend to be physicists and algorithm developers who tend to be physicists. That has to change. It will require better quantum computers, wider access to various qubit technologies, better tools, and a level of software abstraction that lets developers do what they do without worrying about quantum physics.

The panel discussion was casual and substantive if not technically deep, and IBM has posted a link to it. Next up is a webinar on Building a Quantum Workforce (July 28), and there are plans for another in late August (no date yet) on Commercial Use of Quantum Computers.

Clearly, each of the participating companies has its own agenda but nevertheless the give-and-take had an insider feel.

IBM, of course, is the biggest player in quantum computing today with deep expertise in hardware and software and its IBM Q network which offers various levels of access to quantum resources. IBM quantum systems use semiconductor-based superconducting qubits. Panelist Johnson is a relatively recent IBM import from Rigetti. His work is fairly deep in the weeds and focuses on control systems which convert conventional instructions (electrical signals) into quantum processor control signals.

Aliro and Strangeworks are start-ups focused on software.

Aliro describes its offering as a “hardware-independent toolkit for developers of quantum algorithms and applications. The development platform is implemented as a scalable cloud-based service. Features include: access to multiple QC hardware vendors and devices via an intuitive GUI as well as REST API; quantum circuit and hybrid workflow visualization and debugging tools; cross-compilation between high and low-level languages; and hardware-specific optimizations enabling best execution on every supported hardware platform.” CEO Narang is also an assistant professor of computational materials science at Harvard.

Strangeworks says its “platform is a hardware-agnostic, software inclusive, collaborative development environment that brings the latest advancements, frameworks, open source, and tools into a single user interface.” CEO Hurley is a veteran tech entrepreneur who also chairs the IEEE Quantum Computing Work Group.

These are early days for both of these young companies and they are broadly representative of a growing number of start-ups seeking to fill out the quantum computing ecosystem. It is probably best to watch/listen to the conversation directly to get a sense of issues facing software development in quantum computing, but also to gain a glimpse into the mindset of the young companies entering the quantum computing fray.

 

Presented here are a few soundbites (lightly edited) from the panel.

WHAT’S THE STATE OF QUANTUM PROGRAMMING AND WHAT ARE SOME OF THE CHALLENGES?  

Johnson: Recognizing that there was something maybe intimidating about quantum, IBM chose to first develop a graphical interface, a drag-and-drop way to build quantum circuits, which are the fundamental unit of quantum computation. That’s what’s available today in IBM Quantum Experience. You can drag and drop gates, which are the logical operations that manipulate qubits, which are quantum bits, to build up a program. For more real tasks, you need a real programming interface, and we have Qiskit, an open source quantum computing framework developed by IBM, which is a Python interface for building quantum circuits and for building algorithms that take advantage of quantum processors.
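Qiskit is the actual interface Johnson describes; purely to illustrate what "gates manipulating qubits to build up a program" means, here is a toy hand-rolled statevector simulation (plain Python with hypothetical helper names, not Qiskit code) of the two-gate circuit most tutorials start with: a Hadamard followed by a CNOT, which entangles two qubits into a Bell state.

```python
import math

def apply(gate, state):
    """Multiply a gate matrix (list of rows) into a statevector."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

s = math.sqrt(0.5)
# Hadamard on qubit 0 of a two-qubit register (H tensor I), big-endian ordering
H0 = [[s, 0, s, 0], [0, s, 0, s], [s, 0, -s, 0], [0, s, 0, -s]]
# CNOT with qubit 0 as control and qubit 1 as target: swaps |10> and |11>
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

state = apply(CNOT, apply(H0, [1.0, 0.0, 0.0, 0.0]))  # start in |00>
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # [0.5, 0.0, 0.0, 0.5]: only |00> and |11> survive, i.e. entanglement
```

In Qiskit the same circuit is a few method calls on a `QuantumCircuit`; the point is only that a quantum "program" at this level is a short sequence of linear-algebra operations on a state.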

Narang: What I like about Qiskit is that it’s very accessible. The challenge is, with that abstraction, you lose a lot of the control over the actual hardware, you don’t necessarily have all of the tools to directly program the system. So the pulse level control that IBM has made available is a good way to bridge that. I wonder, as we go towards other types of hardware, how some of the programs that are written for superconducting circuits will be translated to those (other types of hardware) and if everything is not based off of the same pulse level control scheme, what would be a good way of translating? And I don’t have an answer to this.

Hurley: Our approach is to let all of the languages battle it out. We’re big Qiskit fans. I say that not because we’re on IBM, but because when we first started [there were] already tons of people working with it. We’re big supporters, soon to be making our first contributions to it. But you can’t take a developer and make them a quantum developer overnight. Some things are fundamentally different. For example, if I’m programming on any other platform in the world and I run into an error, I can find it or I know how it works. Whereas [with quantum] what we see happen with developers is they get in and they can instantiate a teleportation thing through Microsoft Quantum Katas or IBM Qiskit, or whatever. Then the moment it breaks, if they don’t understand the fundamental physics behind it, they’re at a dead end.

Narang: A lot of things are not yet possible with quantum hardware that we take for granted in a classical computer. [I’m] thinking about conditional statements and intermediate measurements, things that are not trivial to do in a quantum circuit at the moment, but that’s going to be very important to write more complex quantum programs in future. As those advances come from the hardware side, we think about how to translate those into something that you can use on the software side.
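The pattern Narang is pointing at, measure mid-circuit and branch classically on the outcome, is easy to show in a classical toy simulation even though it is hard on real hardware. A minimal sketch (plain Python, illustrative only, not tied to any vendor SDK):

```python
import math
import random

def measure(state):
    """Collapse a one-qubit statevector [a, b] to a classical bit."""
    p1 = abs(state[1]) ** 2
    bit = 1 if random.random() < p1 else 0
    return bit, ([1.0, 0.0] if bit == 0 else [0.0, 1.0])

def pauli_x(state):
    """Bit flip."""
    return [state[1], state[0]]

s = math.sqrt(0.5)
bit, state = measure([s, s])  # intermediate measurement of the |+> superposition
if bit == 1:                  # classically conditioned correction
    state = pauli_x(state)
print(state)  # [1.0, 0.0]: the classical branch steers every run back to |0>
```

Teleportation and error correction both lean on exactly this measure-then-correct structure, which is why its absence on current devices matters for more complex programs.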

GIVEN THE NASCENT STAGE OF QUBIT TECHNOLOGIES – SUPERCONDUCTING, TRAPPED ION, COLD ATOMS, ETC. – WILL THERE BE SPECIALIZED MACHINES?

Narang: Tricky question. I’m trying to see how to answer it without making all of my colleagues angry at me. My personal view is there will be certain problems that will run just fine on a variety of hardware, and some that might be more specialized to particular types of hardware, and that’s just associated with the physics underlying that type of hardware. This will be especially important as we try and map problems from condensed matter and chemistry onto some of these devices. We’ll see that not every technology is ideal or even possible for all kinds of problems. But we don’t have that kind of experience yet.

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

[Currently] there are only a few different trapped ion systems out there. It’s very hard to get access to those and not many of them have gone the route that IBM did of making at least the smallest devices available very broadly. So there’s a whole lot more that needs to be done on a simulator before you can actually try it on real hardware. And of course, systems like cold atoms or photonic circuits are really niche. Getting time on those is very expensive and almost unaffordable for the average developer. The best you can hope to offer to them is to say, “Hey, if you write something that works on this genre of systems, we can get you to a point where it runs on other systems,” and that’s something that my group is trying to accomplish at the moment.

Hurley: On my desk [I have an] iPhone and iPad and a 16-inch laptop, okay? And those things can all do email and they can all surf the web and they do them really well. But I can’t open Xcode and compile a big program on my iPhone. It’s going to be like that in quantum, and it’s going to take it in directions none of us can imagine; it would be foolish to try. There are already 16 startups I’m following who are making quantum processors for specific applications exactly like Pri described. Look at supercomputing today and high performance computing. There are computers built on Wall Street just for doing trades or [even] specific types of trades. We’ve seen this throughout computing history, right? I don’t think quantum will be any different. I hope that there are as many hardware and software solutions available as possible.

WHEN WILL WE ACHIEVE QUANTUM ADVANTAGE?

Johnson: The problem is we don’t know exactly how powerful a quantum machine needs to be in order to get quantum advantage. We’ve committed to and have been on a track of doubling quantum volume[i] (a broad performance metric) every year. We put up and made publicly available our first machine with a quantum volume of 32 back in April. We now have eight machines with quantum volume of 32 that are available in IBM Quantum Experience, and we continue to march along that path of doubling QV [yearly]. At what point is it a powerful enough machine for quantum advantage? I’m not sure. I would say personally, I’d be surprised if we have to get all the way to fault tolerance to find a single application where you can do something with quantum advantage, whether that’s time-to-solution, cost or whatever, against a classical resource.

Hurley: If you’re in this industry, if you want to be a developer in this industry, it needs to be a long-term play, a long-term vision that you have. The pessimist thinks [quantum payoff is] 20 years out, the optimist [thinks] it’s three years out, and the reality is if you want to be involved in it then you should be preparing now, because all I can tell you is at some point between tomorrow and some future tomorrow it will happen and the inflection point will be steep.

A rendering of IBM Q System One, the world’s first fully integrated universal quantum computing system, currently installed at the Thomas J Watson Research Center. Source: IBM

Johnson: Those of us that build hardware understand that the most critical thing preventing us from reaching quantum advantage is the hardware, and so our first tools are really focused on those domain experts, to give them the tools they need to build better hardware. That’s an important audience that we like, and we will continue to make better and better tools to serve that audience. But as Pri mentioned, people are doing applications research with the devices that exist today, and are finding they maybe can’t yet solve a problem better than they could with a classical resource, but they can solve problems. They’re starting to figure out what the limitations are, how they can squeeze the most utility out of the devices today, and then get ready for the devices that will exist tomorrow.

So we’re starting to build tools that try to lower the barriers to entry for those people, not the domain experts, but a new audience. We don’t want them to have to learn everything about quantum computing in order to be able to get started. The idea here was to try to reach out to this new audience of developers [such that] they can write their programs by describing the problem that they want to solve, in this case, an optimization problem. So they can write down a quadratic formula, they write down some description of the constraints of their optimization problem. They can choose different solvers, both quantum solvers and classical solvers, because a lot of these developers are trying to [understand] the value today and [how] quantum works versus the classical.
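The workflow Johnson outlines, state the optimization problem declaratively and then swap solvers behind a common interface, can be sketched with a toy QUBO (quadratic unconstrained binary optimization) problem and a brute-force classical solver standing in for a quantum one. The matrix, problem, and function names below are all illustrative, not IBM's actual optimization-module API:

```python
from itertools import product

def qubo_energy(Q, bits):
    """Energy of a bitstring under QUBO matrix Q: sum over i, j of Q[i][j]*x_i*x_j."""
    n = len(bits)
    return sum(Q[i][j] * bits[i] * bits[j] for i in range(n) for j in range(n))

def brute_force_solver(Q):
    """Classical baseline; a quantum solver (e.g. variational) would share this interface."""
    return min(product([0, 1], repeat=len(Q)), key=lambda b: qubo_energy(Q, b))

# Toy constraint "pick exactly one of two items": the diagonal rewards a pick,
# the off-diagonal term penalizes picking both.
Q = [[-1, 2],
     [0, -1]]
best = brute_force_solver(Q)
print(best, qubo_energy(Q, best))  # one item picked, energy -1
```

Because the solver is a pluggable function, running "quantum versus classical on the same problem description" becomes a one-line change, which is exactly the comparison Johnson says these developers want to make.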

Hurley: Looking at it from a pure 30 years of seeing new tech coming down and doing the development, I think 10 years sounds very pessimistic now. What do most people imagine as a quantum computer in 10 years? Is it a full general purpose machine, whatever? Who knows? But I don’t think you’re going to wait 10 years; I think it’s more in the two-to-five-year range to where there are things that start to become economically advantageous to enterprises, and it will probably be in chemistry and materials science.

HOW BIG MUST THE DEVELOPER COMMUNITY BECOME AND WHAT’S THE KEY TO DOING THAT?

Hurley: Most of the people building machines aren’t actually talking to developers. They’re talking to physicists who can download development tools and they’re playing the role of developers. Software developers are not necessarily the greatest physicists and physicists are not all the greatest software developers, right? [We need] to drive it to a point where it’s not 200 people who can program the machines; it’s 2 million people. I believe this is the first real leap, then in the next 10 years computing will change more than it has in the last hundred.

Johnson: Open source, I think, is a critical piece to accelerating these kinds of developments because we want these tools to be available to the broadest audience possible. Open source is a great accelerator for making that happen. [But even] on the fastest timescale, we’re a long way off from the App Store experience where on my phone, I can get an app that takes advantage of quantum resources. But I think we’re not that far away from the developer equivalent of that, which is, you know, package management systems where I can just say, pip install or brew install some package, which is a quantum library for some application domain. The goal is to have the equivalent of the iPhone experience of today.

If I’m an iPhone developer and I want to develop a new app that uses augmented reality to check the position of a basketball, that itself is a non-trivial machine vision task, right? But we don’t ask every iPhone developer to be a machine vision expert. They just plug into ARKit, which is Apple’s augmented reality solution, and off they go. We need to get to that point, and I don’t think it’s that far away, where a developer can use quantum resources without having to be an expert in quantum computing.

Google’s Sycamore quantum chip

Hurley: If you think back to 2007, there were 400 of us a week after [the iPhone introduction] with iPhones, and people hacking on them. It went to tens of thousands almost instantly, right? Within six months to a year, and then over the course of 10 or 11 years, you get to 23 million people who are doing that. That mass of developers being involved drives apps, exactly as Blake just said. You have to have a mass of developers to do that. That’s where quantum computing faces its biggest challenge: when it gets, what I call, out of the lab into the real world, and all of a sudden there’s a million developers. Because developers rarely use things in the exact way they were intended to be used. They will find more uses, they will find the bugs, they will find the weaknesses. So the faster we can get to that point the better. I mean, Pri, what are your thoughts?

Narang: We’re taking some of the circuits run on superconducting silicon over to trapped ion and realizing that some don’t work the same way. And yeah, forget about developers breaking things in new ways; even experienced people break things in ways they didn’t anticipate, and have to call an engineer and say, hey, how do I fix this? If we expect, you know, a million developers entering the community to have to get answers from an expert engineer, that’s probably not a very scalable model. Something that could be useful is having better simulators that allow you to replicate some of the noise associated with current hardware, to see how things are performing. Also, simple stuff like getting runtime estimates. Getting a yea or nay on whether your circuit is actually going to fit on the device you’re trying to run it on? That’s a problem I’ve seen a lot of people have. They have a beautiful idea, and they assume that it can run on this really tiny device. I think there are different levels to how we make it easier for developers who are entering the field.

Link to panel video, https://www.youtube.com/watch?v=fBP6qTc_fGU&feature=youtu.be

[i] Quantum Volume (QV) is a hardware-agnostic metric that we defined to measure the performance of a real quantum computer. Each system we develop brings us along a path where complex problems will be more efficiently addressed by quantum computing; therefore, the need for system benchmarks is crucial, and simply counting qubits is not enough. As we have discussed in the past, Quantum Volume takes into account the number of qubits, connectivity, and gate and measurement errors. Material improvements to underlying physical hardware, such as increases in coherence times, reduction of device crosstalk, and software circuit compiler efficiency, can point to measurable progress in Quantum Volume, as long as all improvements happen at a similar pace. https://www.ibm.com/blogs/research/2020/01/quantum-volume-32/
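The arithmetic behind the footnote is simple to check: Quantum Volume is 2^n for the largest n-qubit, depth-n "square" model circuit a machine runs successfully (in IBM's published protocol, with heavy-output probability above two-thirds), so QV 32 corresponds to n = 5, and doubling QV yearly means one additional qubit-and-depth step per year. A small sketch:

```python
def quantum_volume(n):
    """QV for the largest passing n-qubit, depth-n model circuit."""
    return 2 ** n

print(quantum_volume(5))  # 32, the QV of the IBM systems mentioned above
# Doubling yearly from QV 32 adds one to n each year:
print([quantum_volume(5 + k) for k in range(4)])  # [32, 64, 128, 256]
```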
