Quantum Roundup: IBM, Rigetti, Phasecraft, Oxford QC, China, and More

By John Russell

July 13, 2021

IBM yesterday announced a mathematical proof of a potential quantum advantage for a quantum ML algorithm. A week ago, it unveiled a new topology for its quantum processors. Last Friday, the Technical University of Denmark released a template for an optical quantum computer that’s compatible with current fiber optics. Over the past roughly two weeks: Rigetti announced a multi-chip quantum processor and an industry collaboration; Phasecraft presented a novel approach to quantum system modeling; Harvard introduced a beefed-up quantum simulator; Oxford Quantum Circuits announced Quantum Computing-as-a-Service, and China claimed to have achieved Quantum Supremacy. This isn’t even the full list.

You get the idea. The flow of quantum computing announcements to HPCwire has spiraled higher in recent months. They span all aspects of the quantum ecosystem and all states of “technology readiness” from hopeful start-up plans to veteran companies working in the quantum computing trenches. Covering the flow is a challenge. Here’s a brief roundup of just a few recent QC-related announcements spilling out before, during and just after ISC21 (which incidentally had a quantum computing keynote).

IBM Hits a Double

There’s a fair amount of debate over whether quantum computing, or at least near-term NISQ (noisy intermediate-scale quantum) computers, will be effective for machine learning. In a blog posted Monday – IBM researchers have found mathematical proof of a potential quantum advantage for quantum machine learning – IBM notes rather realistically:

“Few concepts in computer science cause as much excitement—and perhaps as much potential for hype and misinformation—as quantum machine learning. Several algorithms in this space have hinted at exponential speedups over classical machine learning approaches, by assuming that one can provide classical data to the algorithm in the form of quantum states. However, we don’t actually know whether a method exists that can efficiently provide data in this way.

“Many proposals for quantum machine learning algorithms have been made that can be best characterized as “heuristics,” meaning that these algorithms have no formal proof that supports their performance. These proposals are motivated by the challenge to find algorithms that are friendly towards near-term experimental implementation with only conventional access to data. One such class of algorithms was the proposal for quantum enhanced feature spaces—also known as quantum kernel methods, where a quantum computer steps in for just a part of the overall algorithm—by Havlíček et al.”

IBM now says it has done just that: “We’re excited to announce a quantum kernel algorithm that, given only classical access to data, provides a provable exponential speedup over classical machine learning algorithms for a certain class of classification problems.” Classification, of course, is one of the most fundamental problems in machine learning.

The details of IBM’s new algorithm are in a Nature article published yesterday (A rigorous and robust quantum speed-up in supervised machine learning). Here’s the abstract: “[W]e construct a classification problem with which we can rigorously show that heuristic quantum kernel methods can provide an end-to-end quantum speed-up with only classical access to data. To prove the quantum speed-up, we construct a family of datasets and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely-believed hardness of the discrete logarithm problem. Furthermore, we construct a family of parameterized unitary circuits, which can be efficiently implemented on a fault-tolerant quantum computer, and use them to map the data samples to a quantum feature space and estimate the kernel entries. The resulting quantum classifier achieves high accuracy and is robust against additive errors in the kernel entries that arise from finite sampling statistics.”
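
For readers who want to experiment, the flavor of a quantum kernel method is easy to reproduce at small scale. The sketch below (ours, not IBM’s discrete-logarithm construction) maps toy data through Qiskit’s ZZFeatureMap, evaluates kernel entries as state fidelities by statevector simulation, and hands the matrix to a classical SVM; data and parameters are illustrative.

```python
# A minimal quantum-kernel sketch (not IBM's DLP-based construction):
# map classical data into a quantum feature space and hand the resulting
# kernel matrix to a classical SVM. Assumes qiskit and scikit-learn.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector
from sklearn.svm import SVC

def quantum_kernel(X1, X2, feature_map):
    """K[i, j] = |<phi(x1_i)|phi(x2_j)>|^2, via statevector simulation."""
    states1 = [Statevector.from_instruction(feature_map.assign_parameters(x)) for x in X1]
    states2 = [Statevector.from_instruction(feature_map.assign_parameters(x)) for x in X2]
    return np.array([[np.abs(np.vdot(s1.data, s2.data)) ** 2 for s2 in states2]
                     for s1 in states1])

rng = np.random.default_rng(7)
X_train = rng.uniform(0, 2 * np.pi, (20, 2))      # toy two-feature dataset
y_train = (np.sin(X_train[:, 0]) * np.sin(X_train[:, 1]) > 0).astype(int)

fm = ZZFeatureMap(feature_dimension=2, reps=2)    # entangling feature map
K_train = quantum_kernel(X_train, X_train, fm)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print("training accuracy:", clf.score(K_train, y_train))
```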

Meanwhile, less than a week ago, IBM announced in a technical note that it was moving to a new topology for its hardware devices: “As of Aug 8, 2021, the topology of all active IBM Quantum devices will use the heavy-hex lattice, including the IBM Quantum System One’s Falcon processors installed in Germany and Japan.”

IBM spokesperson Kortney Easterly told HPCwire, “We believe the heavy hex lattice offers the clearest path to reach Quantum Advantage – the point in which a quantum computer can solve a problem faster than a classical computer…The heavy-hex lattice design helps minimize qubit errors – an issue that plagues noisy device performance. Based on proven fidelity improvements and manufacturing scalability, we believe that the heavy hex lattice is superior to a square lattice – from enabling more accurate near-term experimentation to reaching the critical goal of demonstrating fault tolerant error correction. The heavy-hex lattice represents the fourth iteration of the topology for IBM Quantum systems and the Eagle quantum processor that we’re debuting later this year will also have heavy hex lattice layout.”

IBM has made a big bet on quantum computing and in the last year has spelled out its technology roadmap for hardware and software, intended to deliver a 1,000-plus-qubit quantum computer in 2023. The evolution of topology is part of the process. In the new configuration, each unit cell of the lattice consists of a hexagonal arrangement of qubits, with an additional qubit on each edge.

According to the technical note, “The heavy-hex topology is a product of co-design between experiment, theory, and applications, that is scalable and offers reduced error-rates while affording the opportunity to explore error correcting codes. Based on lessons learned from earlier systems, the heavy-hex topology represents a slight reduction in qubit connectivity from previous generation systems, but, crucially, minimizes both qubit frequency collisions and spectator qubit errors that are detrimental to real-world quantum application performance.”
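
Readers who want to poke at the topology itself can build it directly, assuming a Qiskit version that provides CouplingMap.from_heavy_hex. The sketch below tallies per-qubit connectivity to show the trade-off: heavy-hex qubits have degree 2 or 3, versus degree 4 on a square lattice.

```python
# A quick way to inspect the heavy-hex topology, assuming a Qiskit version
# that ships CouplingMap.from_heavy_hex (the distance argument must be odd).
from qiskit.transpiler import CouplingMap

cmap = CouplingMap.from_heavy_hex(distance=3, bidirectional=False)

degrees = {}
for a, b in cmap.get_edges():                # tally connectivity per qubit
    degrees[a] = degrees.get(a, 0) + 1
    degrees[b] = degrees.get(b, 0) + 1

print("qubits:", cmap.size())
# Heavy-hex qubits have degree 2 or 3 -- sparser than a square lattice's
# degree 4, which is the trade IBM makes to suppress frequency collisions
# and spectator-qubit errors.
print("max degree:", max(degrees.values()))
```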

Phasecraft Tackles Fermion Mapping

Simulating quantum systems is one of the most promising applications for quantum computing. Doing so requires mapping these problems efficiently to qubits, both to take advantage of their quantum attributes and to mitigate error. Phasecraft reported it has developed a “compact representation of fermions [that] outperforms all previous representations improving memory use and algorithm size each by at least 25% – a significant step towards realizing practical scientific applications on near-term quantum computers.”

The company has a paper published this month in Physical Review B (APS) describing its novel modeling approach. This excerpt is from the abstract:

“The number of qubits required per fermionic mode, and the locality of mapped fermionic operators strongly impact the cost of such simulations. We present a fermion to qubit mapping that outperforms all previous local mappings in both the qubit to mode ratio and the locality of mapped operators. In addition to these practically useful features, the mapping bears an elegant relationship to the toric code, which we discuss. Finally, we consider the error mitigating properties of the mapping—which encodes fermionic states into the code space of a stabilizer code. Although there is an implicit tradeoff between low weight representations of local fermionic operators, and high distance code spaces, we argue that fermionic encodings with low-weight representations of local fermionic operators can still exhibit error mitigating properties which can serve a similar role to that played by high code distances. In particular, when undetectable errors correspond to “natural” fermionic noise.”
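
Phasecraft’s compact encoding is detailed in the paper; for context, the snippet below shows the standard Jordan-Wigner baseline such mappings improve on, using the OpenFermion library, and illustrates why locality matters: a single hopping term between distant modes maps to a long Pauli string.

```python
# Phasecraft's compact encoding is described in the paper; as a baseline,
# here is the standard Jordan-Wigner mapping it improves on, using
# OpenFermion. Note how one hopping term picks up a long Pauli string.
from openfermion import FermionOperator, jordan_wigner

# hopping between fermionic modes 0 and 7: a_0^dagger a_7 + a_7^dagger a_0
hopping = FermionOperator("0^ 7") + FermionOperator("7^ 0")
qubit_op = jordan_wigner(hopping)
print(qubit_op)
# Under Jordan-Wigner this term acts on 8 qubits (a Z-string spans modes
# 1..6), so operator weight grows with mode separation; local encodings
# like Phasecraft's trade extra qubits for constant-weight operators.
```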

Oxford Quantum Circuits’ Coaxmon?

Like many others, including IBM, Rigetti and Google, Oxford Quantum Circuits (OQC) uses superconducting qubits as the core of its quantum computer. OQC is a spinout from Oxford University and contends that its novel 3D architecture, developed in part at Oxford, avoids many limitations faced by superconducting qubit systems and enables better scaling. OQC’s core differentiator is what it calls a ‘coaxmon’, which works in conjunction with the industry-standard superconducting transmon.

OQC fired up its first system in 2018. Last week the company announced launching “the UK’s first commercially available Quantum Computing-as-a-Service built entirely using its proprietary technology.” Pragmatically, that delivery model is standard practice for most quantum system developers. (IBM and D-Wave, in addition to portal access, also provide on-premises systems.)

Back to the coaxmon. Company literature describes OQC’s technology this way: “Typical superconducting qubit technologies only allow scaling in 1-dimension, ‘in-plane’. This makes wiring-up large arrays of qubits difficult. Any fixes to allow scaling in 2-dimensions require increasingly intricate engineering to route control wiring across the chip to the qubits, degrading their performance. OQC’s innovation – the coaxmon – solves these issues. Our quantum processor is built around a unique 3D architecture, which allows fewer fabrication steps and produces lower unwanted cross-talk than typical superconducting circuit technologies. It also makes the unit-cell readily scalable to large qubit arrays while maintaining the high level of quality and control required for useful quantum computation.”

A 2017 paper (Double-sided coaxial circuit QED with out-of-plane wiring) written by some of the company’s researchers may provide a better look at its technology approach. The figure below is from that paper with a description excerpted from the paper’s text (the actual figure caption is at the end of the article).

“The device is depicted in Fig. 1. It consists of a superconducting charge qubit in the transmon regime with coaxial electrodes, which we call the coaxmon (similar to the concentric and aperture transmons) coupled to a lumped element LC microwave resonator fabricated on the opposite side of the chip, realizing dispersive circuit quantum electrodynamics (QED). The device is controlled and measured via coaxial ports, perpendicular to the plane of the chip (see Fig. 1(a)), whose distance from the chip can be modified to change the external quality factor of the circuits. These ports can be used for independent control of the qubit and measurement of the resonator in reflection, or to measure the device in transmission.”
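
To make the circuit parameters named in the figure caption (reproduced at the end of the article) concrete, here is a back-of-envelope sketch using the standard circuit-QED formulas; the component values are illustrative assumptions, not OQC’s device numbers.

```python
# Rough numbers for the circuit elements named in the figure caption
# (L_R, C_R, E_J, C_Sigma). Values below are illustrative assumptions,
# not OQC's actual device parameters.
import math

e = 1.602176634e-19           # electron charge (C)
h = 6.62607015e-34            # Planck constant (J*s)

L_R, C_R = 2e-9, 0.5e-12      # assumed resonator inductance/capacitance
f_res = 1 / (2 * math.pi * math.sqrt(L_R * C_R))

C_sigma = 70e-15              # assumed total junction capacitance
E_C = e**2 / (2 * C_sigma)    # charging energy (J)
E_J = 15e9 * h                # assumed Josephson energy, E_J/h = 15 GHz

# Standard transmon approximation: h*f_q ~ sqrt(8*E_J*E_C) - E_C
f_qubit = (math.sqrt(8 * E_J * E_C) - E_C) / h

print(f"resonator ~ {f_res/1e9:.2f} GHz, qubit ~ {f_qubit/1e9:.2f} GHz")
```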

Strangeworks Offers Qiskit Runtime

Strangeworks last week announced it is the first IBM partner to offer exclusive early preview access to Qiskit Runtime, a new service offered by IBM Quantum that streamlines computations requiring multiple iterations.

Qiskit Runtime, announced earlier this year, is a containerized service for quantum computers. Rather than accumulating latencies as code passes between a user’s device and the cloud-based quantum computer, developers can run their program in the Qiskit Runtime execution environment, where IBM’s hybrid cloud reduces the time between executions. Users of the Strangeworks platform can now use the new technology free of charge via a dedicated seven-qubit IBM quantum computer.
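
For flavor, here is a minimal sketch of the 2021-era Runtime invocation pattern, assuming an IBM Quantum account and the qiskit-ibmq-provider package; the program ID and input fields are illustrative and may differ from what Strangeworks exposes.

```python
# A minimal sketch of the 2021-era Qiskit Runtime invocation pattern,
# assuming an IBM Quantum account and the qiskit-ibmq-provider package.
# Program IDs and input fields may differ; check the provider's docs.
from qiskit import IBMQ, QuantumCircuit

provider = IBMQ.load_account()

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# The circuit runs inside IBM's cloud execution environment, so iterative
# workloads avoid a network round-trip per circuit submission.
job = provider.runtime.run(
    program_id="circuit-runner",                   # built-in runtime program
    options={"backend_name": "ibmq_qasm_simulator"},
    inputs={"circuits": qc, "shots": 1024},
)
print(job.result())
```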

Cambridge Quantum Describes Quantum-Safe Blockchain

Cambridge Quantum (CQ), together with the Inter-American Development Bank (IDB) and the Monterrey Institute of Technology (TEC de Monterrey), published a paper describing the implementation of a quantum-safe blockchain, which was successfully demonstrated on the LACChain network and secured using CQ’s IronBridge quantum key generation platform.

Defending the blockchain against the threat of quantum computing required two enhancements to be made.

  • Firstly, the blockchain was updated to use quantum-safe cryptographic algorithms rather than vulnerable algorithms (such as ECDSA) that quantum computers could break in as little as 5-10 years. (A minimal sketch of a quantum-safe, hash-based signature scheme follows this list.)
  • Secondly, the keys signing the blockchain transactions had to be completely unpredictable to present-day attackers as well as quantum-powered adversaries; otherwise fraudulent transactions could occur. This second step was achieved using CQ’s IronBridge quantum key generation platform, which CQ describes as the only source of provably perfect and unpredictable cryptographic keys in the world.
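
As promised above, here is a toy Lamport one-time signature, the classic hash-based scheme often cited as quantum-safe because its security rests only on the hash function (which Grover’s algorithm weakens but does not break). It is illustrative only and unrelated to CQ’s actual implementation.

```python
# A toy Lamport one-time signature -- a classic hash-based scheme whose
# security rests only on the hash function. Illustrative only; not CQ's
# implementation.
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()
N = 256  # one secret pair per message-digest bit

def keygen():
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(N)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(message, sk):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return [sk[i][bit] for i, bit in enumerate(bits)]  # reveal one secret per bit

def verify(message, sig, pk):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(b"transfer 10 tokens", sk)
print(verify(b"transfer 10 tokens", sig, pk))   # True
print(verify(b"transfer 99 tokens", sig, pk))   # False -- and the key is one-time
```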

Excerpt from a preprint (Quantum-resistance in blockchain networks) of the paper on arXiv.org:

“The advent of quantum computing threatens internet protocols and blockchain networks because they utilize non-quantum resistant cryptographic algorithms. When quantum computers become robust enough to run Shor’s algorithm on a large scale, the most used asymmetric algorithms, utilized for digital signatures and message encryption, such as RSA, (EC)DSA, and (EC)DH, will be no longer secure. Quantum computers will be able to break them within a short period of time. Similarly, Grover’s algorithm concedes a quadratic advantage for mining blocks in certain consensus protocols such as proof of work.”

China Researchers Report Besting Google in Quantum Supremacy

If you’ve followed quantum computing, you’re familiar with the on-again/off-again battle to demonstrate quantum supremacy – the ability to perform a calculation on a quantum computer in a reasonable time that cannot realistically be done on a classical system. Google was first to claim this prize (see HPCwire coverage), amid dispute over whether it actually succeeded – and over whether the feat matters compared with doing something on a quantum computer that is useful and sufficiently better than the classical alternative to be worthwhile.

Researchers from China report performing a similar exercise even faster than Google on what’s described as a two-dimensional programmable quantum processor composed of 66 functional qubits in a tunable coupling architecture. Rather than dive into the pros and cons, here’s the abstract of the preprint paper (Strong quantum computational advantage using a superconducting quantum processor) describing the work, led by Jian-Wei Pan, who has been called the father of China’s quantum computing efforts.

“Scaling up to a large number of qubits with high-precision control is essential in the demonstrations of quantum computational advantage to exponentially outpace the classical hardware and algorithmic improvements. Here, we develop a two-dimensional programmable superconducting quantum processor, Zuchongzhi, which is composed of 66 functional qubits in a tunable coupling architecture. To characterize the performance of the whole system, we perform random quantum circuits sampling for benchmarking, up to a system size of 56 qubits and 20 cycles.

“The computational cost of the classical simulation of this task is estimated to be 2-3 orders of magnitude higher than the previous work on 53-qubit Sycamore processor (Google). We estimate that the sampling task finished by Zuchongzhi in about 1.2 hours will take the most powerful supercomputer at least 8 years. Our work establishes an unambiguous quantum computational advantage that is infeasible for classical computation in a reasonable amount of time. The high-precision and programmable quantum computing platform opens a new door to explore novel many-body phenomena and implement complex quantum algorithms.”
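
The benchmark behind such claims is random circuit sampling scored by the linear cross-entropy benchmark (XEB). The sketch below reproduces the scoring at a classically trivial size using Qiskit; a Zuchongzhi-scale run is, by construction, infeasible to simulate this way.

```python
# A small-scale sketch of the linear cross-entropy benchmark (XEB) used to
# score random-circuit sampling, here on a handful of simulated qubits.
# Assumes qiskit; values and the circuit are illustrative stand-ins.
import numpy as np
from qiskit.circuit.random import random_circuit
from qiskit.quantum_info import Statevector

n, shots = 5, 2000
qc = random_circuit(n, depth=8, seed=11)          # stand-in random circuit
probs = Statevector.from_instruction(qc).probabilities()

# Sample bitstrings from the ideal distribution (a noiseless "device").
rng = np.random.default_rng(11)
samples = rng.choice(2**n, size=shots, p=probs)

# Linear XEB fidelity: F = 2^n * <p(x_i)> - 1. Near 1 for an ideal
# Porter-Thomas-distributed device, near 0 for uniform noise.
f_xeb = (2**n) * probs[samples].mean() - 1
print(f"linear XEB fidelity estimate: {f_xeb:.3f}")
```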

Harvard-MIT Researchers Extend Quantum Simulator Capability

Physicists from the Harvard-MIT Center for Ultracold Atoms and other universities have developed a special type of quantum computer known as a programmable quantum simulator capable of operating with 256 quantum bits. A paper on the work (Quantum phases of matter on a 256-atom programmable quantum simulator) was published in Nature last week. The device seems more of a tool to investigate quantum states that might later be used in quantum computers.

As described in an article written by Juan Siliezar in the Harvard Gazette, “The system marks a major step toward building large-scale quantum machines that could be used to shed light on a host of complex quantum processes and eventually help bring about real-world breakthroughs in material science, communication technologies, finance, and many other fields, overcoming research hurdles that are beyond the capabilities of even the fastest supercomputers today. Qubits are the fundamental building blocks on which quantum computers run and the source of their massive processing power.”

“The workhorse of this new platform is a device called the spatial light modulator, which is used to shape an optical wavefront to produce hundreds of individually focused optical tweezer beams,” said Sepehr Ebadi, a physics student in the Harvard Graduate School of Arts and Sciences and the study’s lead author. “These devices are essentially the same as what is used inside a computer projector to display images on a screen, but we have adapted them to be a critical component of our quantum simulator.”

It’s best to read the paper directly. The researchers demonstrated a programmable quantum simulator “based on deterministically prepared two-dimensional arrays of neutral atoms, featuring strong interactions controlled by coherent atomic excitation into Rydberg states. Using this approach, we realize a quantum spin model with tunable interactions for system sizes ranging from 64 to 256 qubits.”

They benchmarked the system by characterizing high-fidelity antiferromagnetically ordered states and demonstrating quantum critical dynamics consistent with an Ising quantum phase transition in (2 + 1) dimensions. “We then create and study several new quantum phases that arise from the interplay between interactions and coherent laser excitation, experimentally map the phase diagram and investigate the role of quantum fluctuations,” wrote the authors. “Offering a new lens into the study of complex quantum matter, these observations pave the way for investigations of exotic quantum phases, non-equilibrium entanglement dynamics and hardware-efficient realization of quantum algorithms.”
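
The underlying physics can be previewed at tiny sizes by exact diagonalization. The sketch below builds a short Rydberg-style chain (the Hamiltonian form is standard in the literature; parameter values are illustrative) and shows the alternating-density Z2 order the experiment detects at 256 atoms.

```python
# A tiny exact-diagonalization sketch of the Rydberg-blockade physics the
# 256-atom simulator probes: H = (Omega/2) sum X_i - Delta sum n_i
# + V sum n_i n_{i+1} on a short chain. Parameters are illustrative.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
n_op = np.diag([0.0, 1.0])

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    return reduce(np.kron, [op if j == i else I2 for j in range(L)])

L, Omega, Delta, V = 8, 1.0, 3.0, 6.0   # blockade regime: V > Delta > Omega
H = sum(0.5 * Omega * site_op(X, i, L) - Delta * site_op(n_op, i, L)
        for i in range(L))
H += sum(V * site_op(n_op, i, L) @ site_op(n_op, i + 1, L) for i in range(L - 1))

vals, vecs = np.linalg.eigh(H)
ground = vecs[:, 0]
density = [ground @ site_op(n_op, i, L) @ ground for i in range(L)]
# Alternating high/low Rydberg density signals the antiferromagnetic
# (Z2-ordered) phase the experiment detects at much larger sizes.
print(np.round(density, 2))
```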

Rigetti’s New Processor and Industry Collaboration

Rigetti Computing made significant announcements on two fronts. At the beginning of the month, it launched a new multi-chip quantum processor design which the company calls the first in the world. Today, it announced a collaboration with Riverlane (quantum software developer) and Astex Pharmaceuticals to develop an integrated application for simulating molecular systems using Rigetti Quantum Cloud Services.

Rigetti modular, multi-chip quantum processor.

Rigetti says its new multi-chip approach incorporates a proprietary modular architecture that “accelerates the path to commercialization and solves key scaling challenges toward fault-tolerant quantum computers.” Not a lot of detail was provided. Rigetti expects to make an 80-qubit system powered by the breakthrough multi-chip technology available on its Quantum Cloud Services platform later this year.

The company notes that scaling quantum computers comes with inherent challenges: “As chips increase in size, there is a higher likelihood of failure and lower manufacturing yield, making it increasingly difficult to produce high-quality devices. Rigetti has eliminated these roadblocks by developing the technology to connect multiple identical dies into a large-scale quantum processor. This modular approach exponentially reduces manufacturing complexity and allows for accelerated, predictable scaling.”
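
The yield argument is easy to see with back-of-envelope arithmetic; the numbers below are illustrative assumptions, not Rigetti’s figures.

```python
# Back-of-envelope yield arithmetic behind the multi-chip argument.
# Assume each qubit site survives fabrication independently with
# probability p (illustrative numbers, not Rigetti's).
p = 0.995
monolithic_80 = p ** 80                 # one 80-qubit die must be perfect
die_40 = p ** 40                        # a 40-qubit die
two_good_dies = die_40 ** 2             # ...but dies can be screened first,
                                        # so only known-good dies are paired

print(f"80-qubit monolithic yield: {monolithic_80:.1%}")   # ~67%
print(f"40-qubit die yield:        {die_40:.1%}")          # ~82%
# With pre-screening, effective module yield approaches the die yield,
# since failed dies are discarded before assembly rather than sinking
# a whole large chip.
```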

On the application front, it announced a new partnership that aims to design more efficient drugs and shorten the time to market. Today, drug researchers often use advanced computational methods to model molecular structures and drug-target interactions. Quantum computers have the potential to model more complex systems and improve the drug discovery process, but today’s quantum computers remain too noisy for results to evolve past proof-of-concept studies.

“Building on previous work with Astex, our collaboration aims to overcome this technological barrier and address a real business need for the pharmaceutical sector,” said Riverlane CEO Steve Brierley in the official announcement. The project will leverage Riverlane’s algorithm expertise and existing technology for high-speed, low-latency processing on quantum computers using Rigetti’s commercially available quantum systems. The team will also develop error mitigation software to help optimize the performance of the hardware architecture, which they expect to result in up to a threefold reduction in errors and runtime improvements of up to 40x.

Honeywell/Cambridge Quantum Work with Nippon Steel

You may know that Honeywell Quantum Solutions and Cambridge Quantum Computing merged this spring, when parent company Honeywell agreed to combine its quantum unit with CQC. The entity, not yet named, is majority-owned by Honeywell but will operate independently, as do many Honeywell units.

The new company recently discussed some of the work it is doing with Nippon Steel to devise an optimal schedule for the intermediate products it uses during the steel manufacturing process. CQC developed an algorithm and ran it on the System Model H1 (ion trap system), Honeywell Quantum Solutions’ latest commercial computer.

“Scheduling at our steel plants is one of the biggest logistical challenges we face, and we are always looking for ways to streamline and improve operations in this area,” said Koji Hirano, chief researcher at Nippon Steel. The partners report the System Model H1 was able to find the optimal solution after only a few steps, and say the results are encouraging for scaling up the size of problems tackled.
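
To give a flavor of how such scheduling problems are posed for quantum and annealing-style hardware, here is a toy two-job, two-slot assignment encoded as a QUBO and solved by brute force; it is illustrative only, not CQC’s algorithm or Nippon Steel’s model.

```python
# A toy of the kind of scheduling problem described: assign two jobs to
# two time slots, one job per slot, encoded as a QUBO and solved by brute
# force. Illustrative only -- not CQC's algorithm or Nippon Steel's model.
import itertools
import numpy as np

# x[j*2 + t] = 1 means job j runs in slot t. Costs favor job 0 early.
cost = np.array([1.0, 3.0,    # job 0 in slot 0 / slot 1
                 2.0, 2.5])   # job 1 in slot 0 / slot 1
P = 10.0  # penalty weight enforcing one-slot-per-job and one-job-per-slot

def energy(x):
    e = float(cost @ x)
    for j in (0, 1):                       # each job in exactly one slot
        e += P * (x[2*j] + x[2*j + 1] - 1) ** 2
    for t in (0, 1):                       # each slot holds exactly one job
        e += P * (x[t] + x[2 + t] - 1) ** 2
    return e

best = min(itertools.product((0, 1), repeat=4), key=lambda x: energy(np.array(x)))
print("assignment bits:", best, "energy:", energy(np.array(best)))
# Best: job 0 -> slot 0, job 1 -> slot 1 (bits 1, 0, 0, 1).
```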

ColdQuanta Reports 100-Qubit Milestone

There are, of course, many qubit technologies under development. ColdQuanta is developing a cold atom approach that exploits condensed matter physics (Bose-Einstein condensate). Last week the company reported successfully trapping and addressing 100 qubits in a large, dense 2-D cold atom array.

ColdQuanta reports it is on track to deliver a digital gate-based quantum computer (code-named “Hilbert”) later this year. The company claims it will be among the most powerful in the world, using pristine qubits with the stability of atomic clocks to scale qubit count beyond what is possible with other quantum computing approaches. We’ll see. Here’s past HPCwire coverage of ColdQuanta’s technology: ColdQuanta – Life in Quantum’s Slow (and Cold) Lane Heats Up.

Links to a few other recent articles in HPCwire

Technical University of Denmark Researchers Tighten Grip on Quantum Computer

Aalto Researchers Unlock Radiation-Free Quantum Technology with Graphene

Griffith University Researchers Work to Build Error-Proof Quantum Computer Using $2M Grant

Caption to Figure 1 of Oxford Quantum Circuits 2017 paper

Figure 1. (a) CAD design of the unit cell, with transmon qubit and lumped element resonator on opposing sides of a substrate, and control and measurement ports perpendicular to the chip plane. (b) Designs of the transmon and resonator. In the transmon the two electrodes are connected by a single Josephson junction, whereas the electrodes of the resonator are connected by an inductor line. (c) Equivalent circuit of the device, showing the resonator inductance and capacitance, LR and CR, the junction Josephson energy EJ and effective capacitance over the junction CΣ.
