Quantum Roundup: IBM, Rigetti, Phasecraft, Oxford QC, China, and More

By John Russell

July 13, 2021

IBM yesterday announced a proof for a quantum ML algorithm. A week ago, it unveiled a new topology for its quantum processors. Last Friday, the Technical University of Denmark released a template for an optical quantum computer that’s compatible with current fiber optics. Over the past roughly two weeks: Rigetti announced a multi-chip quantum processor and an industry collaboration; Phasecraft presented a novel approach to quantum system modeling; Harvard introduced a beefed-up quantum simulator; Oxford Quantum Circuits announced Quantum Computing-as-a-Service, and China claimed to have achieved Quantum Supremacy. This isn’t even the full list.

You get the idea. The flow of quantum computing announcements to HPCwire has spiraled higher in recent months. They span all aspects of the quantum ecosystem and all states of “technology readiness” from hopeful start-up plans to veteran companies working in the quantum computing trenches. Covering the flow is a challenge. Here’s a brief roundup of just a few recent QC-related announcements spilling out before, during and just after ISC21 (which incidentally had a quantum computing keynote).

IBM Hits a Double

There’s a fair amount of debate over whether quantum computing, or at least near-term NISQ (noisy intermediate-scale quantum) computers, will be effective for machine learning. In a blog post on Monday – IBM researchers have found mathematical proof of a potential quantum advantage for quantum machine learning – IBM notes, rather realistically:

“Few concepts in computer science cause as much excitement—and perhaps as much potential for hype and misinformation—as quantum machine learning. Several algorithms in this space have hinted at exponential speedups over classical machine learning approaches, by assuming that one can provide classical data to the algorithm in the form of quantum states. However, we don’t actually know whether a method exists that can efficiently provide data in this way.

“Many proposals for quantum machine learning algorithms have been made that can be best characterized as “heuristics,” meaning that these algorithms have no formal proof that supports their performance. These proposals are motivated by the challenge to find algorithms that are friendly towards near-term experimental implementation with only conventional access to data. One such class of algorithms was the proposal for quantum enhanced feature spaces—also known as quantum kernel methods, where a quantum computer steps in for just a part of the overall algorithm—by Havlíček et al.”

IBM now says it has done just that: “We’re excited to announce a quantum kernel algorithm that, given only classical access to data, provides a provable exponential speedup over classical machine learning algorithms for a certain class of classification problems.” Classification, of course, is one of the most fundamental problems in machine learning.

The details of IBM’s new algorithm are in a Nature article published yesterday (A rigorous and robust quantum speed-up in supervised machine learning). Here’s the abstract: “[W]e construct a classification problem with which we can rigorously show that heuristic quantum kernel methods can provide an end-to-end quantum speed-up with only classical access to data. To prove the quantum speed-up, we construct a family of datasets and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely-believed hardness of the discrete logarithm problem. Furthermore, we construct a family of parameterized unitary circuits, which can be efficiently implemented on a fault-tolerant quantum computer, and use them to map the data samples to a quantum feature space and estimate the kernel entries. The resulting quantum classifier achieves high accuracy and is robust against additive errors in the kernel entries that arise from finite sampling statistics.”
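
The division of labor in a quantum kernel method is easy to sketch: a quantum processor estimates the kernel entries between data points mapped into a quantum feature space, and an ordinary classical SVM consumes the resulting Gram matrix. The minimal sketch below (a classical stand-in feature map and invented data, not IBM’s construction) shows that structure using scikit-learn’s precomputed-kernel interface.

```python
# Minimal sketch of the kernel-method division of labor described above.
# A quantum computer would estimate the kernel entries k(x, x') = |<phi(x)|phi(x')>|^2;
# here a toy classical feature map stands in so the sketch runs anywhere.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def feature_map(x):
    # Toy stand-in for a quantum feature map |phi(x)> (hypothetical, for illustration).
    return np.array([np.cos(x[0]), np.sin(x[0]), np.cos(x[1]), np.sin(x[1])])

def kernel_matrix(A, B):
    # Gram matrix of inner products between mapped samples -- the piece a
    # quantum processor would estimate by repeated circuit sampling.
    FA = np.array([feature_map(a) for a in A])
    FB = np.array([feature_map(b) for b in B])
    return (FA @ FB.T) ** 2

X_train = rng.uniform(0, 2 * np.pi, size=(40, 2))
y_train = (np.sin(X_train[:, 0]) * np.sin(X_train[:, 1]) > 0).astype(int)
X_test = rng.uniform(0, 2 * np.pi, size=(10, 2))

# The classical SVM consumes the (quantum-estimated) kernel unchanged.
clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(X_train, X_train), y_train)
print(clf.predict(kernel_matrix(X_test, X_train)))
```

The quantum claim lives entirely inside kernel_matrix: for IBM’s construction, no efficient classical procedure can reproduce those entries, while the rest of the pipeline stays classical.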

Meanwhile, less than a week ago, IBM announced in a technical note that it was moving to a new topology for its hardware devices: “As of Aug 8, 2021, the topology of all active IBM Quantum devices will use the heavy-hex lattice, including the IBM Quantum System One’s Falcon processors installed in Germany and Japan.”

IBM spokesperson Kortney Easterly told HPCwire, “We believe the heavy hex lattice offers the clearest path to reach Quantum Advantage – the point at which a quantum computer can solve a problem faster than a classical computer…The heavy-hex lattice design helps minimize qubit errors – an issue that plagues noisy device performance. Based on proven fidelity improvements and manufacturing scalability, we believe that the heavy hex lattice is superior to a square lattice – from enabling more accurate near-term experimentation to reaching the critical goal of demonstrating fault tolerant error correction. The heavy-hex lattice represents the fourth iteration of the topology for IBM Quantum systems, and the Eagle quantum processor that we’re debuting later this year will also have a heavy hex lattice layout.”

IBM has made a big bet in quantum computing and in the last year spelled out its technology roadmap for hardware and software intended to deliver a 1000-plus-qubit quantum computer in 2023. The evolution of topology is part of the process. In the new configuration, each unit cell of the lattice consists of a hexagonal arrangement of qubits, with an additional qubit on each edge.

According to the technical note, “The heavy-hex topology is a product of co-design between experiment, theory, and applications, that is scalable and offers reduced error-rates while affording the opportunity to explore error correcting codes. Based on lessons learned from earlier systems, the heavy-hex topology represents a slight reduction in qubit connectivity from previous generation systems, but, crucially, minimizes both qubit frequency collisions and spectator qubit errors that are detrimental to real-world quantum application performance.”
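
To make the geometry concrete, here is a minimal sketch of a single heavy-hex unit cell as a graph, assuming only the description above (a hexagonal arrangement of qubits with an additional qubit on each edge); the indexing and the networkx representation are illustrative, not IBM’s numbering.

```python
# Sketch of a single heavy-hex unit cell: six "corner" qubits on a hexagon,
# plus one qubit inserted on each of the six edges (12 qubits total).
# Indexing and layout here are illustrative, not IBM's actual numbering.
import networkx as nx

def heavy_hex_unit_cell():
    g = nx.Graph()
    corners = list(range(6))           # hexagon vertices
    edge_qubits = list(range(6, 12))   # one extra qubit per hexagon edge
    for i in range(6):
        a, b = corners[i], corners[(i + 1) % 6]
        e = edge_qubits[i]
        # Each edge qubit couples the two corners it sits between, so within
        # the cell every qubit has degree 2.
        g.add_edge(a, e)
        g.add_edge(e, b)
    return g

cell = heavy_hex_unit_cell()
print(sorted(cell.degree()))  # low, uniform connectivity within the cell
```

In the full lattice, neighboring cells share qubits, so some qubits reach degree three; the point is the low, nearly uniform connectivity, which is what limits the frequency collisions and spectator-qubit errors the technical note describes.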

Phasecraft Tackles Fermion Mapping

Simulating quantum systems is one of the most promising applications for quantum computing. Doing so requires mapping these problems efficiently onto qubits, both to take advantage of their quantum attributes and to mitigate error. Phasecraft reported it has developed a “compact representation of fermions [that] outperforms all previous representations improving memory use and algorithm size each by at least 25% – a significant step towards realizing practical scientific applications on near-term quantum computers.”

The company has a paper published this month in Physical Review B (APS) describing its novel modeling approach. This excerpt is from the abstract:

“The number of qubits required per fermionic mode, and the locality of mapped fermionic operators strongly impact the cost of such simulations. We present a fermion to qubit mapping that outperforms all previous local mappings in both the qubit to mode ratio and the locality of mapped operators. In addition to these practically useful features, the mapping bears an elegant relationship to the toric code, which we discuss. Finally, we consider the error mitigating properties of the mapping—which encodes fermionic states into the code space of a stabilizer code. Although there is an implicit tradeoff between low weight representations of local fermionic operators, and high distance code spaces, we argue that fermionic encodings with low-weight representations of local fermionic operators can still exhibit error mitigating properties which can serve a similar role to that played by high code distances. In particular, when undetectable errors correspond to “natural” fermionic noise.”
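
Phasecraft’s compact encoding itself isn’t reproduced here, but the textbook Jordan-Wigner transform makes the idea of a fermion-to-qubit mapping concrete, and shows the nonlocality (Pauli-string weight growing with mode index) that local mappings like Phasecraft’s are designed to avoid. A minimal NumPy sketch:

```python
# Baseline illustration of a fermion-to-qubit mapping: the textbook
# Jordan-Wigner transform (NOT Phasecraft's compact encoding, which is local).
# a_j^dagger -> (Z_0 ... Z_{j-1}) (X_j - iY_j)/2 : the Z-chain makes operator
# weight grow linearly with j, the nonlocality local mappings avoid.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def creation_op(j, n_modes):
    # Z-string on modes 0..j-1, raising operator on mode j, identity after.
    sigma_plus = (X - 1j * Y) / 2  # |1><0| on a single qubit
    return kron_all([Z] * j + [sigma_plus] + [I2] * (n_modes - j - 1))

n = 3
a1dag = creation_op(1, n)
a2dag = creation_op(2, n)
# Fermionic anticommutation {a_i^dag, a_j^dag} = 0 survives the mapping:
print(np.allclose(a1dag @ a2dag + a2dag @ a1dag, 0))  # True
```

Mappings like Phasecraft’s trade a few extra qubits per mode for operators that stay low-weight and local, which is exactly what the qubit-to-mode ratio and locality figures in the abstract measure.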

Oxford Quantum Circuits’ Coaxmon?

Like many others, including IBM, Rigetti and Google, Oxford Quantum Circuits (OQC) uses superconducting qubits as the core of its quantum computer. OQC is a spinout from Oxford University and contends that its novel 3D architecture, developed in part at Oxford, avoids many limitations faced by superconducting qubit systems and enables better scaling. OQC’s core differentiator is what it calls a ‘coaxmon,’ which works in conjunction with the industry-standard superconducting transmon.

OQC fired up its first system in 2018. Last week the company announced the launch of “the UK’s first commercially available Quantum Computing-as-a-Service built entirely using its proprietary technology.” Pragmatically, that delivery model is standard practice for most quantum system developers. (IBM and D-Wave, in addition to portal access, also provide on-premises systems.)

Back to the coaxmon. Company literature describes OQC technology thusly: “Typical superconducting qubit technologies only allow scaling in 1-dimension, ‘in-plane’. This makes wiring-up large arrays of qubits difficult. Any fixes to allow scaling in 2-dimensions require increasingly intricate engineering to route control wiring across the chip to the qubits, degrading their performance. OQC’s innovation – the coaxmon – solves these issues. Our quantum processor is built around a unique 3D architecture, which allows fewer fabrication steps and produces lower unwanted cross-talk than typical superconducting circuit technologies. It also makes the unit-cell readily scalable to large qubit arrays while maintaining the high level of quality and control required for useful quantum computation.”

A 2017 paper (Double-sided coaxial circuit QED with out-of-plane wiring) written by some of the company’s researchers may provide a better look at its technology approach. The description below is excerpted from the paper’s text; the caption to the paper’s Figure 1 is reproduced at the end of this article.

“The device is depicted in Fig. 1. It consists of a superconducting charge qubit in the transmon regime with coaxial electrodes, which we call the coaxmon (similar to the concentric and aperture transmons) coupled to a lumped element LC microwave resonator fabricated on the opposite side of the chip, realizing dispersive circuit quantum electrodynamics (QED). The device is controlled and measured via coaxial ports, perpendicular to the plane of the chip (see Fig. 1(a)), whose distance from the chip can be modified to change the external quality factor of the circuits. These ports can be used for independent control of the qubit and measurement of the resonator in reflection, or to measure the device in transmission.”

Strangeworks Offers Qiskit Runtime

Strangeworks last week announced it is the first IBM partner to offer exclusive early preview access to Qiskit Runtime, a new service offered by IBM Quantum that streamlines computations requiring multiple iterations.

Qiskit Runtime, announced earlier this year, is a containerized service for quantum computers. Rather than accumulating latencies as code passes back and forth between a user’s device and the cloud-based quantum computer, developers can run their programs in the Qiskit Runtime execution environment, where IBM’s hybrid cloud reduces the time between executions. Users of the Strangeworks platform may now use the new technology for free via a dedicated seven-qubit IBM quantum computer.
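
The arithmetic behind the latency claim is simple to sketch. Assuming (hypothetically) a fixed per-job network and queuing overhead, an iterative workload such as a variational algorithm pays that overhead once under the runtime model instead of once per iteration. The numbers below are invented for illustration, and the code uses no actual Qiskit API.

```python
# Illustrative sketch (invented numbers, no real Qiskit calls) of why
# co-locating an iterative loop with the backend reduces total wall time.
NETWORK_ROUND_TRIP = 2.0   # seconds of queue/transfer overhead per job (assumed)
EXECUTION = 0.1            # seconds of actual quantum execution per job (assumed)
ITERATIONS = 100           # e.g. optimizer steps in a variational algorithm

# Conventional pattern: every iteration is a separate cloud job.
loop_over_cloud = ITERATIONS * (NETWORK_ROUND_TRIP + EXECUTION)

# Runtime pattern: the whole loop is shipped once and iterates server-side.
runtime_loop = NETWORK_ROUND_TRIP + ITERATIONS * EXECUTION

print(f"loop over cloud jobs:  {loop_over_cloud:7.1f} s")
print(f"containerized runtime: {runtime_loop:7.1f} s")
```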

Cambridge Quantum Describes Quantum-Safe Blockchain

Cambridge Quantum (CQ), together with the Inter-American Development Bank (IDB) and the Monterrey Institute of Technology (TEC de Monterrey), published a paper describing the implementation of a quantum-safe blockchain, which was successfully demonstrated on the LACChain network and secured using CQ’s IronBridge quantum key generation platform.

Defending the blockchain against the threat of quantum computing required two enhancements to be made.

  • Firstly, the blockchain was updated to use quantum-safe cryptographic algorithms in place of vulnerable algorithms (such as ECDSA) that quantum computers may be able to break in as little as 5-10 years.
  • Secondly, the keys signing the blockchain transactions had to be completely unpredictable to present-day attackers as well as to quantum-powered adversaries; otherwise fraudulent transactions could occur. This second step was achieved using CQ’s IronBridge quantum key generation platform, which CQ describes as the only source of provably perfect and unpredictable cryptographic keys in the world.

Excerpt from a preprint (Quantum-resistance in blockchain networks) of the paper on arXiv.org:

“The advent of quantum computing threatens internet protocols and blockchain networks because they utilize non-quantum resistant cryptographic algorithms. When quantum computers become robust enough to run Shor’s algorithm on a large scale, the most used asymmetric algorithms, utilized for digital signatures and message encryption, such as RSA, (EC)DSA, and (EC)DH, will be no longer secure. Quantum computers will be able to break them within a short period of time. Similarly, Grover’s algorithm concedes a quadratic advantage for mining blocks in certain consensus protocols such as proof of work.”
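
The two threats in that excerpt can be put in rough numbers. Shor’s algorithm breaks RSA and (EC)DSA in polynomial time, so no key size rescues them, while Grover’s quadratic speedup roughly halves the effective bit strength of brute-force searches such as proof-of-work mining. A back-of-envelope sketch (idealized: it ignores quantum gate speeds and error-correction overhead):

```python
# Back-of-envelope sketch of the two threats in the excerpt (idealized:
# ignores quantum clock speeds and error-correction overheads).
import math

def grover_queries(n_bits):
    # Grover: brute-forcing an n-bit preimage/PoW target drops from ~2^n
    # classical tries to ~(pi/4) * 2^(n/2) quantum queries.
    return (math.pi / 4) * 2 ** (n_bits / 2)

for n in (128, 256):
    print(f"{n}-bit search: classical ~2^{n} tries, "
          f"Grover ~2^{math.log2(grover_queries(n)):.1f} queries")

# Shor is different in kind: it factors / takes discrete logs in polynomial
# time, so growing the key size cannot save RSA or ECDSA -- hence the move
# to quantum-safe signature schemes on the blockchain itself.
```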

China Researchers Report Besting Google in Quantum Supremacy

If you’ve followed quantum computing, you’re familiar with the on-again/off-again battle to demonstrate quantum supremacy – the ability to perform a calculation on a quantum computer in a reasonable time that cannot realistically be done on a classical system. Google was first to claim this prize (see HPCwire coverage), amid dispute over whether it actually succeeded, and over whether supremacy matters as much as doing something useful on a quantum computer sufficiently better than on a classical machine to make it worthwhile.

Researchers from China report performing a similar exercise even faster than Google on what’s described as a two-dimensional programmable processor composed of 66 functional qubits in a tunable coupling architecture. Rather than dive into the pros and cons, here’s the abstract of the preprint paper (Strong quantum computational advantage using a superconducting quantum processor) describing the work, which was led by Jian-Wei Pan, who has been called the father of China’s quantum computing efforts.

“Scaling up to a large number of qubits with high-precision control is essential in the demonstrations of quantum computational advantage to exponentially outpace the classical hardware and algorithmic improvements. Here, we develop a two-dimensional programmable superconducting quantum processor, Zuchongzhi, which is composed of 66 functional qubits in a tunable coupling architecture. To characterize the performance of the whole system, we perform random quantum circuits sampling for benchmarking, up to a system size of 56 qubits and 20 cycles.

“The computational cost of the classical simulation of this task is estimated to be 2-3 orders of magnitude higher than the previous work on 53-qubit Sycamore processor (Google). We estimate that the sampling task finished by Zuchongzhi in about 1.2 hours will take the most powerful supercomputer at least 8 years. Our work establishes an unambiguous quantum computational advantage that is infeasible for classical computation in a reasonable amount of time. The high-precision and programmable quantum computing platform opens a new door to explore novel many-body phenomena and implement complex quantum algorithms.”
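
Claims like this are scored with linear cross-entropy benchmarking (XEB), the same metric Google used: sample bitstrings from the hardware, look up their probabilities under a classical simulation of the ideal circuit, and compute F = 2^n · mean(p_ideal) − 1, which is near 1 for a faithful device and near 0 for pure noise. A toy-scale sketch, with a random distribution standing in for the simulated circuit:

```python
# Tiny-scale sketch of linear cross-entropy benchmarking (XEB), the score
# behind random-circuit-sampling claims: F = 2^n * mean(p_ideal(x)) - 1.
# A random "ideal" distribution stands in for a simulated circuit here.
import numpy as np

rng = np.random.default_rng(0)
n = 10                     # qubits (toy scale; the real devices use 53-66)
D = 2 ** n

# Stand-in for the ideal output distribution of one random circuit
# (exponentially distributed weights mimic the Porter-Thomas regime).
p_ideal = rng.exponential(size=D)
p_ideal /= p_ideal.sum()

def xeb(samples):
    return D * p_ideal[samples].mean() - 1

perfect = rng.choice(D, size=20000, p=p_ideal)   # noiseless sampler
noisy = rng.integers(0, D, size=20000)           # fully depolarized sampler
print(f"ideal sampler   F ~ {xeb(perfect):.2f}")  # close to 1
print(f"uniform sampler F ~ {xeb(noisy):.2f}")    # close to 0
```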

Harvard-MIT Researchers Extend Quantum Simulator Capability

Physicists from the Harvard-MIT Center for Ultracold Atoms and other universities have developed a special type of quantum computer known as a programmable quantum simulator, capable of operating with 256 quantum bits. A paper on the work (Quantum phases of matter on a 256-atom programmable quantum simulator) was published in Nature last week. The device seems to be more of a tool for investigating quantum states that might later be used in quantum computers.

As described in an article written by Juan Siliezar in the Harvard Gazette, “The system marks a major step toward building large-scale quantum machines that could be used to shed light on a host of complex quantum processes and eventually help bring about real-world breakthroughs in material science, communication technologies, finance, and many other fields, overcoming research hurdles that are beyond the capabilities of even the fastest supercomputers today. Qubits are the fundamental building blocks on which quantum computers run and the source of their massive processing power.”

“The workhorse of this new platform is a device called the spatial light modulator, which is used to shape an optical wavefront to produce hundreds of individually focused optical tweezer beams,” said Sepehr Ebadi, a physics student in the Harvard Graduate School of Arts and Sciences and the study’s lead author. “These devices are essentially the same as what is used inside a computer projector to display images on a screen, but we have adapted them to be a critical component of our quantum simulator.”

It’s best to read the paper directly. The researchers demonstrated a programmable quantum simulator “based on deterministically prepared two-dimensional arrays of neutral atoms, featuring strong interactions controlled by coherent atomic excitation into Rydberg states. Using this approach, we realize a quantum spin model with tunable interactions for system sizes ranging from 64 to 256 qubits.”

They benchmarked the system by characterizing high-fidelity antiferromagnetically ordered states and demonstrating quantum critical dynamics consistent with an Ising quantum phase transition in (2 + 1) dimensions. “We then create and study several new quantum phases that arise from the interplay between interactions and coherent laser excitation, experimentally map the phase diagram and investigate the role of quantum fluctuations,” the authors wrote. “Offering a new lens into the study of complex quantum matter, these observations pave the way for investigations of exotic quantum phases, non-equilibrium entanglement dynamics and hardware-efficient realization of quantum algorithms.”
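
The “quantum spin model with tunable interactions” quoted above can be written down concretely. For a Rydberg array it takes the form H = (Ω/2) Σ X_i − Δ Σ n_i + Σ V_ij n_i n_j, with Rabi drive Ω, detuning Δ and blockade interactions V_ij. The sketch below exactly diagonalizes a tiny 1D chain with nearest-neighbor interactions only and invented parameters (the experiment uses 2D arrays of up to 256 atoms and full 1/r^6 interactions).

```python
# Minimal sketch of the Rydberg-array spin model quoted above, for a tiny
# 1D chain with nearest-neighbor interactions only (illustrative parameters;
# the experiment uses 2D arrays and full 1/r^6 interactions).
import numpy as np

n_sites = 6
Omega, Delta, V = 1.0, 1.2, 4.0    # Rabi drive, detuning, blockade strength

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
num = np.diag([0.0, 1.0])          # Rydberg-state projector n_i = |r><r|

def site_op(op, i):
    # Tensor 'op' onto site i of the chain, identity elsewhere.
    out = np.array([[1.0]])
    for k in range(n_sites):
        out = np.kron(out, op if k == i else I2)
    return out

# H = (Omega/2) sum_i X_i - Delta sum_i n_i + V sum_i n_i n_{i+1}
H = sum((Omega / 2) * site_op(X, i) - Delta * site_op(num, i)
        for i in range(n_sites))
H += V * sum(site_op(num, i) @ site_op(num, i + 1)
             for i in range(n_sites - 1))

evals = np.linalg.eigvalsh(H)
print(f"ground-state energy for {n_sites} atoms: {evals[0]:.3f}")
```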

Rigetti’s New Processor and Industry Collaboration

Rigetti Computing made significant announcements on two fronts. At the beginning of the month, it launched a new multi-chip quantum processor design which the company calls the first in the world. Today, it announced a collaboration with Riverlane (quantum software developer) and Astex Pharmaceuticals to develop an integrated application for simulating molecular systems using Rigetti Quantum Cloud Services.

Rigetti modular, multi-chip quantum processor.

Rigetti says its new multi-chip approach incorporates a proprietary modular architecture that “accelerates the path to commercialization and solves key scaling challenges toward fault-tolerant quantum computers.” Not a lot of detail was provided. Rigetti expects to make an 80-qubit system powered by the breakthrough multi-chip technology available on its Quantum Cloud Services platform later this year.

The company notes that scaling quantum computers comes with inherent challenges: “As chips increase in size, there is a higher likelihood of failure and lower manufacturing yield, making it increasingly difficult to produce high-quality devices. Rigetti has eliminated these roadblocks by developing the technology to connect multiple identical dies into a large-scale quantum processor. This modular approach exponentially reduces manufacturing complexity and allows for accelerated, predictable scaling.”
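
The yield argument can be sketched with back-of-envelope arithmetic (the defect rate below is an assumption; Rigetti has published no yield figures): if each qubit site fails independently, the yield of a monolithic die falls geometrically with qubit count, while smaller identical dies can be tested individually and only the good ones combined.

```python
# Back-of-envelope yield arithmetic behind multi-chip scaling (assumed
# numbers; Rigetti has not published yield figures).
p_defect = 0.01          # assumed probability a given qubit site is defective
N = 80                   # target processor size, as in Rigetti's planned system

monolithic_yield = (1 - p_defect) ** N

# With k identical dies, a bad die is discarded and replaced rather than
# scrapping the whole processor, so cost tracks the much higher per-die yield.
k = 4
per_die_yield = (1 - p_defect) ** (N // k)

print(f"80-qubit monolithic die yield:   {monolithic_yield:.1%}")
print(f"20-qubit die yield (bin & join): {per_die_yield:.1%}")
```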

On the application front, it announced a new partnership that aims to design more efficient drugs and shorten the time to market. Today, drug researchers often use advanced computational methods to model molecular structures and drug-target interactions. Quantum computers have the potential to model more complex systems and improve the drug discovery process, but today’s quantum computers remain too noisy for results to evolve past proof-of-concept studies.

“Building on previous work with Astex, our collaboration aims to overcome this technological barrier and address a real business need for the pharmaceutical sector,” said Riverlane CEO Steve Brierley in the official announcement. The project will leverage Riverlane’s algorithm expertise and existing technology for high-speed, low-latency processing on quantum computers using Rigetti’s commercially available quantum systems. The team will also develop error mitigation software to help optimize the performance of the hardware architecture, which they expect to result in up to a threefold reduction in errors and runtime improvements of up to 40x.

Honeywell/Cambridge Quantum Work with Nippon Steel

You may know that Honeywell Quantum Solutions and Cambridge Quantum Computing were merged in the spring when parent company Honeywell acquired CQC. The combined entity, not yet named, is still owned by Honeywell but will operate independently, as many Honeywell units do.

The new company recently discussed some of the work it is doing with Nippon Steel to devise an optimal schedule for the intermediate products it uses during the steel manufacturing process. CQC developed an algorithm and ran it on the System Model H1 (ion trap system), Honeywell Quantum Solutions’ latest commercial computer.

“Scheduling at our steel plants is one of the biggest logistical challenges we face, and we are always looking for ways to streamline and improve operations in this area,” said Koji Hirano, chief researcher at Nippon Steel. The partners report that the System Model H1 was able to find the optimal solution after only a few steps, and they say the results are encouraging for scaling up the size of the problems tackled.
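
No formulation details were disclosed, but scheduling problems of this kind are typically cast as QUBO (quadratic unconstrained binary optimization) instances before being handed to a quantum optimizer. A toy sketch with an invented three-variable instance, solved here by brute force where a quantum routine would search:

```python
# Toy sketch of casting a scheduling choice as a QUBO, the usual bridge from
# problems like this to quantum optimizers (the Nippon Steel formulation is
# not public; this tiny instance is invented purely for illustration).
import itertools
import numpy as np

# x_i = 1 if job i is placed in the early slot; Q encodes costs and conflicts
# (diagonal = per-job cost, off-diagonal = penalty for clashing placements).
Q = np.array([
    [-2.0,  1.5,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.5],
])

def qubo_energy(x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=3), key=qubo_energy)
print(best, qubo_energy(best))  # brute force; a quantum routine would search this
```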

ColdQuanta Reports 100-Qubit Milestone

There are, of course, many qubit technologies under development. ColdQuanta is developing a cold atom approach that exploits condensed matter physics (Bose-Einstein condensate). Last week the company reported successfully trapping and addressing 100 qubits in a large, dense 2-D cold atom array.

ColdQuanta reports it is on track to deliver a digital gate-based quantum computer (code-named “Hilbert”) later this year. The company claims it will be among the most powerful in the world, using pristine qubits with the stability of atomic clocks to scale qubit counts beyond what is possible with other quantum computing approaches. We’ll see. Here’s past HPCwire coverage of ColdQuanta’s technology: ColdQuanta – Life in Quantum’s Slow (and Cold) Lane Heats Up.

Links to a few other recent articles in HPCwire

Technical University of Denmark Researchers Tighten Grip on Quantum Computer

Aalto Researchers Unlock Radiation-Free Quantum Technology with Graphene

Griffith University Researchers Work to Build Error-Proof Quantum Computer Using $2M Grant

Caption to Figure 1 of Oxford Quantum Circuits 2017 paper

Figure 1. (a) CAD design of the unit cell, with transmon qubit and lumped element resonator on opposing sides of a substrate, and control and measurement ports perpendicular to the chip plane. (b) Designs of the transmon and resonator. In the transmon the two electrodes are connected by a single Josephson junction, whereas the electrodes of the resonator are connected by an inductor line. (c) Equivalent circuit of the device, showing the resonator inductance and capacitance, LR and CR, the junction Josephson energy EJ and effective capacitance over the junction CΣ.
