Crystal Ball Gazing: IBM’s Vision for the Future of Computing

By John Russell

October 14, 2019

Dario Gil, IBM’s relatively new director of research, painted an intriguing portrait of the future of computing, along with a rough idea of how IBM thinks we’ll get there, at the MIT-IBM Watson AI Lab’s AI Research Week, held last month at MIT. Just as Moore’s law, now fading, was always a metric with many ingredients baked into it, Gil’s evolving post-Moore vision is a composite view with multiple components.

“We’re beginning to see an answer to what is happening at the end of Moore’s law. It’s a question that has been [at] the front of the industry for a long, long time,” said Gil in his talk. “And the answer is that we’re going to have this new foundation of bits plus neurons plus qubits coming together, over the next decade [at] different maturity levels – bits [are] enormously mature, the world of neural networks and neural technology [is] next in maturity, [and] quantum the least mature of those. [It] is important to anticipate what will happen when those three things intersect within a decade.”

Dario Gil, IBM

Not by coincidence, IBM Research has made big bets in all three areas. Its neuromorphic chip (TrueNorth) and ‘analog logic’ research efforts (e.g. phase change memory) are vigorous. Given the size and scope of its IBM Q systems and Q networks, it seems likely that IBM is spending more on quantum computing than any other non-governmental organization. Lastly, of course, IBM hasn’t been shy about touting the Summit and Sierra supercomputers, now ranked one and two in the world (Top500), as the state of the art in heterogeneous computing architectures suited for AI today. In fact, IBM recently donated a 2-petaflops system, Satori, to MIT that is based on the Summit design and well-suited for AI and hybrid HPC-AI workloads.

Gil was promoted to director of IBM Research last February and has begun playing a more visible role. For example, he briefed HPCwire last month on IBM’s new quantum computing center. A longtime IBMer (~16 years) with a Ph.D. in electrical engineering and computer science from MIT, Gil became the 12th director of IBM Research in its storied 74-year history. That IBM Research will turn 75 in 2020 is no small feat in itself. It has about 3,000 researchers at 12 labs spread around the world, with 1,500 of those researchers based at IBM’s Watson Research Center in N.Y. IBM likes to point out that its research army has included six Nobel prize winners, and the truth is IBM’s research effort dwarfs those of all but a few of the biggest companies.

In his talk at MIT, though thin on technical details for the future, Gil did a nice job of reprising recent computer technology history and current dynamics. Among other things he looked at how the basic idea of separating information – digital bits – from the things it represents proved incredibly powerful in enabling computing for a long time. He then pivoted, noting that ultimately nature doesn’t seem to work that way and that for many problems, as Richard Feynman famously suggested, quantum computers based on quantum bits (qubits) are required. Qubits, of course, are intimately connected to “their stuff” and behave in the same probabilistic ways as nature does. (Making qubits behave nicely has proven devilishly difficult.)

Pushing beyond Moore’s law, argued Gil, will require digital bits, data-driven AI, and qubits working in collaboration. Before jumping into his talk, it’s worth hearing his summary of why even the pace of progress experienced in Moore’s law’s heyday would be a problem today. As you might guess, both flops performance and energy consumption are front and center, along with AI’s dramatically growing appetite for compute:

“If you look at what is the core of the issue: if you look at some very state-of-the-art [AI] models, you can see [a] plot in terms of petaflops per day [consumed] for training, from examples of recent research work [with AlexNet and AlphaGo Zero], as a function of time. One of the things we are witnessing is the compute requirement for training jobs is doubling every three and a half months. So we were very impressed with Moore’s law, doubling every 18 months, right? This thing is doubling every three and a half months. Obviously, it’s unsustainable. If we keep at that rate for sustained periods of time we will consume every piece of energy the world has to just do this. So that’s not the right answer,” said Gil.
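The arithmetic behind those two doubling rates is worth making explicit, since it drives the rest of Gil’s argument. A quick back-of-the-envelope sketch using only the figures he quotes:

```python
# Back-of-the-envelope comparison of the two doubling periods Gil cites.
demand_doubling_months = 3.5      # training-compute demand doubles every 3.5 months
moore_doubling_months = 18.0      # classic Moore's-law doubling period

demand_per_year = 2 ** (12 / demand_doubling_months)   # ~10.8x per year
moore_per_year = 2 ** (12 / moore_doubling_months)     # ~1.6x per year

print(f"training-compute demand grows ~{demand_per_year:.1f}x per year")
print(f"Moore's-law-style supply grows ~{moore_per_year:.1f}x per year")
```

Roughly an 11x annual increase in demand against a 1.6x annual increase in supply is the gap Gil is describing.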

“There’s a dimension of [the solution] that has to do with hardware innovation and there’s another dimension that has to do with algorithmic innovation. So this is the roadmap that we have laid out in terms of the next eight years or so of how we’re going to go from Digital AI cores [CPU plus accelerators] like we have today, based on reduced precision architectures, to mixed analog-digital cores, to in the future, perhaps, entirely analog cores that implement very efficiently the multiply-accumulate function inherently in these devices as we perform training.

“Even in this scenario, which is, you know, still going to require billions of dollars of investments and a lot of talent, the best we can forecast is about 2.5x improvement per year. That’s well short of three-and-a-half months, right, of doubling computing power. We have to deliver this for sure. But the other side of the equation is the work that you all do and that is: we have got to dramatically improve the algorithmic efficiency of AI on the problems that we solve,” he said.
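Gil offered no implementation detail on those cores, but the “reduced precision” first step of the roadmap is straightforward to illustrate: perform the multiply-accumulate in narrow integers with a wide accumulator and a scale factor, instead of in 32-bit floating point. A minimal NumPy sketch of that idea (the symmetric 8-bit scheme and the function names are illustrative assumptions, not IBM’s designs):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric int8 quantization: map floats to [-127, 127] with one scale factor."""
    scale = np.max(np.abs(x))
    scale = scale / 127.0 if scale > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def mac_int8(a, b):
    """Multiply-accumulate in reduced precision: int8 products summed in a wide accumulator, rescaled at the end."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    acc = np.dot(qa.astype(np.int32), qb.astype(np.int32))  # wide accumulator, as real int8 MAC units use
    return acc * sa * sb

rng = np.random.default_rng(0)
a, b = rng.standard_normal(1024), rng.standard_normal(1024)
print("fp32 dot :", float(np.dot(a, b)))
print("int8 MAC :", float(mac_int8(a, b)))   # close, at a fraction of the arithmetic cost in hardware
```

The analog cores Gil mentions push the same idea further by performing the accumulate step physically in the device rather than in digital logic.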

Gil noted, for example, that a team of MIT researchers recently developed a technique for training video recognition models that is up to three times faster than current state-of-the-art methods. Their work will be presented at the upcoming International Conference on Computer Vision in South Korea, and a copy of their paper (TSM: Temporal Shift Module for Efficient Video Understanding) is posted on arXiv.org.

Top video recognition models currently use three-dimensional convolutions to encode the passage of time in a sequence of images, which creates bigger, more computationally intensive models. By mingling spatial representations of the past, present and future, the new MIT model gets a sense of time passing without explicitly representing it and greatly reduces the computational cost. According to the researchers, it normally takes about two days to train such a powerful model on a system with one GPU. They borrowed time on Summit – not a luxury many have – and using 256 nodes with a total of 1,536 GPUs, could train the model in 14 minutes (see the paper’s abstract[i] at the end of the article).
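The shift itself is a remarkably cheap operation. Here is a minimal NumPy sketch of the channel-shift idea described in the paper’s abstract; it is an illustration only, not the authors’ released implementation (which lives at github.com/mit-han-lab/temporal-shift-module):

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift a fraction of channels forward/backward in time for a clip of shape (T, C, H, W).

    One slice of channels carries information from the previous frame, another slice from the
    next frame, and the rest stay put: zero parameters, essentially zero extra arithmetic.
    """
    t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]                   # shift "past" channels forward in time
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]   # shift "future" channels backward in time
    out[:, 2 * fold:] = x[:, 2 * fold:]              # remaining channels are untouched
    return out

clip = np.random.rand(8, 64, 14, 14).astype(np.float32)   # 8 frames, 64 channels
shifted = temporal_shift(clip)
print(shifted.shape)   # (8, 64, 14, 14): same tensor shape, now mixing neighboring frames
```

Because the shift adds no parameters and essentially no computation, it can be dropped inside an ordinary 2D CNN to give it a sense of temporal ordering, which is the crux of the paper.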

IBM has posted the video of Gil’s talk; it is fairly short (~30 minutes) and worth watching to get a flavor for IBM’s vision of the future of computing. A portion of Gil’s wide-ranging comments, lightly edited and with apologies for any garbling, and a few of his slides are presented below.

  1. CLASSICAL COMPUTING: HOW DID WE GET HERE

“We’re all very familiar with the foundational idea of the binary digit and the bit, and this sort of understanding that we can look at information abstractly. Claude Shannon advocated the separation [of information from its physical form]; this almost platonic idea of zeros and ones, to decouple them from their physical manifestation, was an interesting insight. It’s actually what allowed us, for the first time in history, to look at the world and look at images as different as this, right, a punch card and DNA. [We’ve] come to appreciate that they have something in common: they’re both carriers and expressers of information.

“Now, there was another companion idea that was not theoretical in nature but practical, and that was Moore’s law. This is the re-creation of the original plot (see slide) from Gordon Moore, when he had four data points in the 1960s and the observation that the number of transistors that you could fit per unit area was doubling every 18 months. Moore extrapolated that, and amazingly enough, that has happened over 60 years, and not because it fell off a tree but thanks to the work of scientists and engineers. I always like to cite, just to give an example of the level of global coordination in R&D that is required: $300 billion a year is what the world spends to move from node to node.

Recreation of the original four data points that led Intel co-founder Gordon Moore to postulate Moore’s law

“The result of that is we digitized the world, right? Essentially, bits have become free, and the technology is extraordinarily mature. A byproduct of all of this is that there’s a community of over 25 million software developers around the world that now have access to digital technology, creating and innovating, and that is why software has become like the fabric that binds business and institutions together. So it’s very, very mature technology. We are of course pushing the limits. It turns out you need 12 atoms, magnetic atoms, to store a piece of information. In the end, there is a limit [set by] the physical properties. So we also need to explore alternative ways to represent information in richer and more complex ways.

“We have seen a consequence of [what] I was talking about with Moore’s law and the fact that devices did not get better after 2003 as we scaled them: there were a set of architectural innovations the community responded with. One was the idea of multi-cores, right, adding more cores in a chip. But also [there] was the idea of accelerators of different forms; we knew that a form of specialization in computing architecture was going to be required to be able to adapt and continue the evolution of computing.

Using Summit and Sierra as an example: “Every once in a while [it’s] useful to stop and look back at the numbers and reflect, right? It is kind of mind blowing that it’s possible to build these kinds of systems with the reliability we see. [What you see] architecturally here is that you’re bringing this blend between a large number of accelerators and a large number of CPUs. And you must create system architectures with high-bandwidth interconnect, because you must keep the system utilization really, really high. So this is important, and it’s illustrative of what the future is going to be [built on the] back of, combining sort of this bit- and neural-based architectures.”
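To put rough numbers on that blend: using commonly cited figures for Summit’s configuration (about 4,600 nodes, each pairing two Power9 CPUs with six V100 GPUs), the accelerators supply nearly all of the peak flops. The per-device numbers below are approximations for illustration, not IBM specifications:

```python
nodes = 4608                 # commonly cited Summit node count
gpus_per_node = 6            # Nvidia V100s per node
cpus_per_node = 2            # IBM Power9s per node
gpu_fp64_tflops = 7.8        # approximate V100 peak double precision
cpu_fp64_tflops = 0.5        # rough per-socket Power9 figure (assumption for illustration)

gpu_total = nodes * gpus_per_node * gpu_fp64_tflops / 1000   # petaflops
cpu_total = nodes * cpus_per_node * cpu_fp64_tflops / 1000

print(f"GPU contribution ~ {gpu_total:.0f} PF, CPU contribution ~ {cpu_total:.0f} PF")
print(f"accelerators supply ~ {100 * gpu_total / (gpu_total + cpu_total):.0f}% of peak")
```

That lopsided split is why the high-bandwidth interconnect Gil mentions matters: the CPUs and network have to keep the accelerators fed or utilization collapses.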

 

  2. AI: ALGORITHM PROGRESS & NEW HARDWARE NEEDED

“There’s been another idea that has been running for well over a century now, which is the intersection of the world of biology and information. Santiago Ramón y Cajal, at the turn of the 1900s, was among the first to understand that we have these structures in our brain called neurons, and the linkage between these neural structures and memory and learning. It wasn’t with a whole lot more than this biological inspiration that, starting in the 1940s and ’50s and of course to today, we saw the emergence of artificial neural networks that took loose inspiration from the brain. What has happened over the last six years, in terms of this intersection between the bit revolution and the consequence of digitizing the world and the associated computing revolution, [is] we have now big enough computers to train some of these deep neural networks at scale.

“We have been able to demonstrate [that] fields that have been with us for a long time, like speech recognition and language processing, have been deeply impacted by this approach. We’ve seen the accuracy of these environments really improve, but we’re still in this narrow AI domain.

“I mean, the term AI [is] a mixed blessing, right? It’s a fascinating scientific and technological endeavor. But it’s a scary term for society. And when we use the word AI, we often are speaking past each other. We mean very different things when we say those words. So one useful thing is to add an adjective in front of it. Where we really are today is that a narrow form of AI has begun to work; that’s a [far] cry from a general form of AI being present. And we’re seeing dates here, we don’t know when that’s going to happen. You know, my joke on this: when we put things like 2050 (see slide), when scientists put numbers like that, what we really mean is we have no idea, right?

“So the journey is to take advantage of the capability that we have today and to push the frontier and boundary towards broader forms of AI. We are passionate advocates, within IBM and the collaborations we have, around bringing the strengths and the great traditions within the field of AI together in neuro-symbolic systems. As profound and as important as the advancements we are seeing in deep learning are, we have to combine them with knowledge representation and forms of reasoning, and bring those together so that we can build systems capable of performing more tasks in more domains.

“Importantly, as technology gets more powerful, the dimension of trust becomes more essential to fulfill the potential of these advancements and get society to adopt them. How do we build the trust layer and the whole AI process around explainability and fairness and the security of AI, and the ethics of AI, and the entire engineering lifecycle of models? In this journey of neuro-symbolic AI, I think it’s going to have implications at all layers of the stack.
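Gil did not sketch what a neuro-symbolic system looks like in practice, but the flavor of pairing a learned model with explicit reasoning can be shown in miniature: a (mocked) neural scorer proposes facts with confidences, and a symbolic rule derives new conclusions only from well-supported premises. Everything below is a hypothetical toy, not an IBM system:

```python
# Toy neuro-symbolic loop: perception scores come from a "neural" model (mocked here),
# and a symbolic rule (transitivity of `part_of`) does the reasoning on top.
neural_scores = {                      # (subject, relation, object) -> model confidence
    ("wheel", "part_of", "car"): 0.93,
    ("car", "part_of", "traffic"): 0.81,
    ("cloud", "part_of", "car"): 0.07,
}

def derive_transitive(facts, relation="part_of", threshold=0.5):
    """Apply one symbolic rule: part_of(a, b) and part_of(b, c) imply part_of(a, c)."""
    derived = {}
    for (a, r1, b), p1 in facts.items():
        for (b2, r2, c), p2 in facts.items():
            if r1 == r2 == relation and b == b2 and p1 > threshold and p2 > threshold:
                derived[(a, relation, c)] = round(p1 * p2, 3)   # naive confidence combination
    return derived

print(derive_transitive(neural_scores))   # {('wheel', 'part_of', 'traffic'): 0.753}
```

The appeal Gil is pointing at is that the rule, unlike the learned scores, is explicit and inspectable, which is where the explainability and trust thread connects.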

 

  3. SEPARATING IT FROM PHYSICALITY – NOT IN QUANTUM

“In the same way that I was alluding to this intersection of mathematics and information as the world of classical bits, and that biology and information gave us the inspiration for neurons, it is physics and information coming together that is giving us the world of qubits. [T]here were physicists asking questions about the world of information and it was very interesting. They would ask questions like “Is there a fundamental limit to the energy efficiency of computation?” Or “Is information processing thermodynamically reversible?” The kinds of questions only physicists would have, right?
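One of those physicists’ questions has a famous concrete answer: Landauer’s principle puts a thermodynamic floor of k_B T ln 2 on the energy needed to erase a single bit, a useful yardstick for how efficient computation could ever be. A quick calculation (the 1 fJ/op comparison figure is an assumption for illustration, not something Gil quoted):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # room temperature, K

landauer_joules_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_joules_per_bit:.2e} J per bit erased")

# Compare with a rough, assumed figure for today's logic: ~1e-15 J per switching event,
# i.e. many orders of magnitude above the thermodynamic floor.
assumed_switch_energy = 1e-15
print(f"headroom vs. an assumed 1 fJ/op device: ~{assumed_switch_energy / landauer_joules_per_bit:.0e}x")
```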

“Looking at that world and sort of pulling at that thread and this assumption that Shannon gave us of separating information and physics – Shannon says, ‘Don’t worry about that coupling’ – they actually poked at that question as to whether that was true or not. We learned that the foundational information block is actually not the bit, but something called the qubit, short for quantum bit, and that we could express some fundamental principles of physics in this representation of information. Specifically for quantum computing, three ideas – the principle of superposition, the principle of entanglement, and the idea of interference – actually have to come together for how we represent and process information with qubits.
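Those three ideas can be seen directly in a two-qubit statevector, using nothing more than textbook linear algebra. A minimal NumPy sketch, independent of any particular quantum hardware:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                    # entangling gate (control = first qubit)
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)     # |00>

state = np.kron(H, I) @ state                     # superposition on the first qubit
state = CNOT @ state                              # entangle: (|00> + |11>) / sqrt(2)
print("Bell state amplitudes:", np.round(state, 3))

# Interference: applying H twice returns |0> because amplitudes cancel, not probabilities.
single = H @ (H @ np.array([1, 0], dtype=complex))
print("H applied twice to |0>:", np.round(single, 3))
```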

“The reason why this matters is we know there are many classes of problems in the world of computing and the world of information that are very hard for classical computers, and that in the end, [classical computing is] bound to things that don’t blow up exponentially in the number of variables. [A] very famous example of a thing that blows up exponentially in the number of variables is simulating nature itself. That was the original idea of Richard Feynman when he advocated the fact that we needed to build a quantum computer, or a machine that behaved like nature, to be able to model nature. But that’s not the only problem in the realm of mathematics. We know other problems that also have that character. Factoring is an example. The traveling salesman problem, optimization problems, there’s a whole host of problems that are intractable with classical computers, and the best we can do is approximate them.
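The exponential blow-up in the simulating-nature case is easy to quantify: a full statevector for n qubits holds 2^n complex amplitudes, so classical memory runs out quickly. A rough sketch, assuming 16 bytes per complex amplitude:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold a full n-qubit statevector (2**n complex double-precision amplitudes)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50, 60):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits fit on a laptop; by ~50 the statevector already exceeds the memory
# of the largest supercomputers.
```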

“Now, quantum is not going to solve all of them. There is a subset of them that [it] will be relevant for, but it’s the only technology that we know that alters that equation of something that becomes intractable to tractable. And what is interesting is we find ourselves in a moment like 1944, [when we built] what is arguably the first digital programmable computer. In a similar fashion now, we built the first programmable quantum computers. This is just a recent event, it just happened in the last few years. So, in fact, in the last few years, we’ve gone from that kind of laboratory environment to building the first engineered systems that are designed for reproducible and stable operation. There’s a picture of the IBM Q System One, one that sits in Yorktown.

“What I really love about what is happening right now is you can [using IBM quantum networks] sit in front of any laptop anywhere in the world, you can write a program now, and it takes those zeros and ones coming in from your computer. In our case we use superconducting technology, converting them to microwave pulses, about five gigahertz, [which] travel down the cryostat over superconducting coaxial cables; these operate at 50 millikelvin. Then we’re able to perform the superposition and entanglement and interference operations in a controlled fashion on the qubits, able to get the microwave signal readout, convert it back to zeros and ones, and present an answer back. It’s a fantastic scientific and engineering tour de force.
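In practice, the “sit at a laptop and write a program” part of that loop typically goes through IBM’s Qiskit toolkit. A minimal sketch of roughly what such a program looked like in this era, run here against the bundled simulator rather than real hardware (treat the exact calls as era-specific; the API has since evolved):

```python
from qiskit import QuantumCircuit, execute, Aer

# Two-qubit Bell-state circuit: the same superposition + entanglement demo as above.
qc = QuantumCircuit(2, 2)
qc.h(0)            # superposition on qubit 0
qc.cx(0, 1)        # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend('qasm_simulator')      # swap in an IBM Q backend to target real hardware
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)      # roughly half '00' and half '11'
```

Pointing the same circuit at an IBM Q backend instead of the simulator is what turns those zeros and ones into the microwave pulses Gil describes.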

“Since we put the first system online, now we have over 150,000 users who are learning how to program these quantum computers [and run programs]; there have been over 200 scientific publications generated with these environments. It’s the beginning of, I’m not going to say a new field, the field of quantum computing has been with us for a while, but it’s the beginning of a totally new community, a new paradigm of computation that is coming together. One of the things is we gave access to both a simulator and the actual hardware, and now it has crossed over; right now what people really want [is] access to the real hardware to be able to solve these problems.”

 

  4. TRIUMPHANT THREESOME: WHAT WILL WE DO NEXT?

“So let me bring it to a close and make an argument that finally we’re beginning to see an answer to what is happening at the end of Moore’s law. It’s a question that has been [at] the front of the industry for a long, long time. And the answer is that we’re going to have this new foundation of bits plus neurons plus qubits coming together, over the next decade [at] different maturity levels – bits [are] enormously mature, the world of neural networks and neural technology [is] next in maturity, [and] quantum the least mature of those. [It] is important to anticipate what will happen when those three things intersect within a decade.”

“I think the implications [this] will have for intelligent, mission-critical applications for the world of business and institutions, and the possibilities to accelerate discovery, are so profound. Imagine the discovery of new materials, which is going to be so important to the future of this world in the context of global warming and so many of the challenges we face. The ability to engineer materials is going to be at the core of that battle, and look at the three scientific communities that are interested in the intersection of computation [and] that task.

“Historically, we’ve been very experimentally driven in this approach of the discovery of materials. You have the classical guys, the HPC community, that has been on that journey for a long time, [who] say, “We know the equations of physics, we can simulate things with larger and larger systems. And we’re quite good at it.” There have been amazing accomplishments in that community. But now you have the AI community [that] says, “Hey, excuse me, I’m going to approach it with a totally different methodology, a data-driven approach to that problem; I’m going to be able to revolutionize and make an impact to discovery.” Then you have the quantum community, who says [this is the very reason] why we’re creating quantum computers. All three are right. And imagine what will happen when all three are combined. That is what is ahead for us for the next decade.”

Link to Gil presentation video: https://www.youtube.com/watch?v=2RBbw6uG94w&feature=youtu.be

[i] TSM: Temporal Shift Module for Efficient Video Understanding

Abstract

“The explosive growth in video streaming gives rise to challenges on performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making it expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN’s complexity. TSM shifts part of the channels along the temporal dimension; thus facilitate information exchanged among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extended TSM to online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranks the first place on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13ms and 35ms for online video recognition. The code is available at: https://github.com/mit-han-lab/temporal-shift-module.”

 
