Quantum Roundup: IBM, Rigetti, Phasecraft, Oxford QC, China, and More

By John Russell

July 13, 2021

IBM yesterday announced a proof for a quantum ML algorithm. A week ago, it unveiled a new topology for its quantum processors. Last Friday, the Technical University of Denmark released a template for an optical quantum computer that’s compatible with current fiber optics. Over the past roughly two weeks: Rigetti announced a multi-chip quantum processor and an industry collaboration; Phasecraft presented a novel approach to quantum system modeling; Harvard introduced a beefed-up quantum simulator; Oxford Quantum Circuits announced Quantum Computing-as-a-Service, and China claimed to have achieved Quantum Supremacy. This isn’t even the full list.

You get the idea. The flow of quantum computing announcements to HPCwire has spiraled higher in recent months. They span all aspects of the quantum ecosystem and all states of “technology readiness” from hopeful start-up plans to veteran companies working in the quantum computing trenches. Covering the flow is a challenge. Here’s a brief roundup of just a few recent QC-related announcements spilling out before, during and just after ISC21 (which incidentally had a quantum computing keynote).

IBM Hits a Double

There’s a fair amount of debate over whether quantum computing, or at least whether near-term NISQ (noisy intermediate scale quantum) computers, will be effective for machine learning. In a blog on Monday – IBM researchers have found mathematical proof of a potential quantum advantage for quantum machine learning – IBM notes rather realistically:

“Few concepts in computer science cause as much excitement—and perhaps as much potential for hype and misinformation—as quantum machine learning. Several algorithms in this space have hinted at exponential speedups over classical machine learning approaches, by assuming that one can provide classical data to the algorithm in the form of quantum states. However, we don’t actually know whether a method exists that can efficiently provide data in this way.

“Many proposals for quantum machine learning algorithms have been made that can be best characterized as “heuristics,” meaning that these algorithms have no formal proof that supports their performance. These proposals are motivated by the challenge to find algorithms that are friendly towards near-term experimental implementation with only conventional access to data. One such class of algorithms was the proposal for quantum enhanced feature spaces—also known as quantum kernel methods, where a quantum computer steps in for just a part of the overall algorithm—by Havlíček et al.”

IBM now says it has done just that: “We’re excited to announce a quantum kernel algorithm that, given only classical access to data, provides a provable exponential speedup over classical machine learning algorithms for a certain class of classification problems.” Classification is, of course, one of the most fundamental problems in machine learning.

The details of IBM’s new algorithm are in a Nature article published yesterday (A rigorous and robust quantum speed-up in supervised machine learning). Here’s the abstract: “[W]e construct a classification problem with which we can rigorously show that heuristic quantum kernel methods can provide an end-to-end quantum speed-up with only classical access to data. To prove the quantum speed-up, we construct a family of datasets and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely-believed hardness of the discrete logarithm problem. Furthermore, we construct a family of parameterized unitary circuits, which can be efficiently implemented on a fault-tolerant quantum computer, and use them to map the data samples to a quantum feature space and estimate the kernel entries. The resulting quantum classifier achieves high accuracy and is robust against additive errors in the kernel entries that arise from finite sampling statistics.”
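IBM’s DLP-based construction is beyond a news brief, but the quantum-kernel workflow it builds on is easy to sketch. Below is a toy classical stand-in (names and data are illustrative, not IBM’s): a feature map embeds each data point as a state vector, kernel entries are squared overlaps – the quantity a quantum computer would estimate by sampling – and a simple classifier consumes those entries.

```python
import numpy as np

def feature_map(x):
    # Toy stand-in for a quantum feature map: encode x as a single-qubit state.
    return np.array([np.cos(x), np.sin(x)])

def kernel(x, y):
    # Kernel entry: squared overlap |<phi(x)|phi(y)>|^2 -- the quantity a
    # quantum computer would estimate from finite sampling statistics.
    return float(np.dot(feature_map(x), feature_map(y)) ** 2)

def classify(x, class_a, class_b):
    # Nearest-class-mean in kernel space; a real pipeline would feed the
    # full kernel matrix into an SVM instead.
    score_a = np.mean([kernel(x, y) for y in class_a])
    score_b = np.mean([kernel(x, y) for y in class_b])
    return "A" if score_a > score_b else "B"

class_a = [0.0, 0.1, -0.1]      # angles clustered near 0
class_b = [1.47, 1.57, 1.67]    # angles clustered near pi/2
print(classify(0.05, class_a, class_b))   # -> A
print(classify(1.5, class_a, class_b))    # -> B
```

The quantum claim is that for certain data (built on the discrete logarithm problem), no efficient classical feature map can do the kernel’s job, while a fault-tolerant quantum circuit can.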

Meanwhile, less than a week ago, IBM announced in a technical note that it was moving to a new topology for its hardware devices: “As of Aug 8, 2021, the topology of all active IBM Quantum devices will use the heavy-hex lattice, including the IBM Quantum System One’s Falcon processors installed in Germany and Japan.”

IBM spokesperson, Kortney Easterly told HPCwire, “We believe the heavy hex lattice offers the clearest path to reach Quantum Advantage – the point in which a quantum computer can solve a problem faster than a classical computer…The heavy-hex lattice design helps minimize qubit errors – an issue that plagues noisy device performance. Based on proven fidelity improvements and manufacturing scalability, we believe that the heavy hex lattice is superior to a square lattice – from enabling more accurate near-term experimentation to reaching the critical goal of demonstrating fault tolerant error correction. The heavy-hex lattice represents the fourth iteration of the topology for IBM Quantum systems and the Eagle quantum processor that we’re debuting later this year will also have heavy hex lattice layout.”

IBM has made a big bet in quantum computing and in the last year spelled out its technology roadmap for hardware and software intended to deliver a 1000-plus-qubit quantum computer in 2023. The evolution of topology is part of the process. In the new configuration, each unit cell of the lattice consists of a hexagonal arrangement of qubits, with an additional qubit on each edge.

According to the technical note, “The heavy-hex topology is a product of co-design between experiment, theory, and applications, that is scalable and offers reduced error-rates while affording the opportunity to explore error correcting codes. Based on lessons learned from earlier systems, the heavy-hex topology represents a slight reduction in qubit connectivity from previous generation systems, but, crucially, minimizes both qubit frequency collisions and spectator qubit errors that are detrimental to real-world quantum application performance.”
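As a sketch of what that unit cell looks like as a coupling map, the snippet below builds one isolated heavy-hex cell (the indexing is my own, not IBM’s) and checks its qubit connectivity:

```python
# One heavy-hex unit cell: 6 "corner" qubits on a hexagon plus one extra
# qubit on each of the 6 edges (12 qubits total).
corners = list(range(6))            # qubits 0..5 around the hexagon
edge_qubits = list(range(6, 12))    # qubit 6+i sits on edge (i, (i+1) % 6)

couplings = []
for i in range(6):
    e = 6 + i
    couplings.append((corners[i], e))             # corner -> edge qubit
    couplings.append((e, corners[(i + 1) % 6]))   # edge qubit -> next corner

degree = {q: 0 for q in range(12)}
for a, b in couplings:
    degree[a] += 1
    degree[b] += 1

# Within an isolated cell every qubit has degree 2; in the full tiled
# lattice, shared corners reach degree 3 at most. That low connectivity is
# what suppresses frequency collisions and spectator errors.
print(max(degree.values()))
```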

Phasecraft Tackles Fermion Mapping

Simulating quantum systems is one of the most promising applications for quantum computing. Doing so requires mapping these problems efficiently to qubits, both to take advantage of their quantum attributes and to mitigate error. Phasecraft reported it has developed a “compact representation of fermions [that] outperforms all previous representations improving memory use and algorithm size each by at least 25% – a significant step towards realizing practical scientific applications on near-term quantum computers.”

The company has a paper published this month in Physical Review B (APS) describing its novel modeling approach. This excerpt is from the abstract:

“The number of qubits required per fermionic mode, and the locality of mapped fermionic operators strongly impact the cost of such simulations. We present a fermion to qubit mapping that outperforms all previous local mappings in both the qubit to mode ratio and the locality of mapped operators. In addition to these practically useful features, the mapping bears an elegant relationship to the toric code, which we discuss. Finally, we consider the error mitigating properties of the mapping—which encodes fermionic states into the code space of a stabilizer code. Although there is an implicit tradeoff between low weight representations of local fermionic operators, and high distance code spaces, we argue that fermionic encodings with low-weight representations of local fermionic operators can still exhibit error mitigating properties which can serve a similar role to that played by high code distances. In particular, when undetectable errors correspond to “natural” fermionic noise.”
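Phasecraft’s compact encoding isn’t reproduced here, but the baseline it improves on – the standard Jordan–Wigner fermion-to-qubit mapping – can be sketched and checked numerically. Note how the mapped operator for mode j drags along a string of j Pauli-Z factors; it is exactly this poor locality that better mappings attack.

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])
SM = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^- = (X + iY)/2

def jw_annihilation(j, n):
    """Jordan-Wigner image of fermionic a_j on n modes:
    Z (x) ... (x) Z (j factors) (x) sigma^- (x) I (x) ... (x) I."""
    op = np.eye(1)
    for k in range(n):
        factor = Z if k < j else (SM if k == j else I)
        op = np.kron(op, factor)
    return op

n = 3
a0, a1 = jw_annihilation(0, n), jw_annihilation(1, n)

def anticomm(A, B):
    return A @ B + B @ A

# Canonical anticommutation relations: {a_i, a_j^dag} = delta_ij, {a_i, a_j} = 0.
print(np.allclose(anticomm(a0, a0.conj().T), np.eye(2 ** n)))  # -> True
print(np.allclose(anticomm(a0, a1), 0))                        # -> True
```

A mapping like Phasecraft’s must satisfy these same relations while keeping both the qubit-per-mode ratio and the Pauli weight of mapped operators low.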

Oxford Quantum Circuits’ Coaxmon?

Like many others, including IBM, Rigetti and Google, Oxford Quantum Circuits (OQC) uses chip-based superconducting qubits as the core of its quantum computer. OQC is a spinout from Oxford University and contends that its novel 3D architecture, developed in part at Oxford, avoids many limitations of superconducting qubit systems and enables better scaling. OQC’s core differentiator is what it calls a ‘coaxmon,’ which works in conjunction with the industry-standard superconducting transmon.

OQC fired up its first system in 2018. Last week the company announced the launch of “the UK’s first commercially available Quantum Computing-as-a-Service built entirely using its proprietary technology.” Pragmatically, that delivery model is standard practice for most quantum system developers. (IBM and D-Wave, in addition to portal access, also provide on-premises systems.)

Back to the coaxmon. Company literature describes OQC technology thusly: “Typical superconducting qubit technologies only allow scaling in 1-dimension, ‘in-plane’. This makes wiring-up large arrays of qubits difficult. Any fixes to allow scaling in 2-dimensions require increasingly intricate engineering to route control wiring across the chip to the qubits, degrading their performance. OQC’s innovation – the coaxmon – solves these issues. Our quantum processor is built around a unique 3D architecture, which allows fewer fabrication steps and produces lower unwanted cross-talk than typical superconducting circuit technologies. It also makes the unit-cell readily scalable to large qubit arrays while maintaining the high level of quality and control required for useful quantum computation.”

A 2017 paper (Double-sided coaxial circuit QED with out-of-plane wiring) written by some of the company’s researchers may provide a better look at its technology approach. The figure below is from that paper with a description excerpted from the paper’s text (the actual figure caption is at the end of the article).

“The device is depicted in Fig. 1. It consists of a superconducting charge qubit in the transmon regime with coaxial electrodes, which we call the coaxmon (similar to the concentric and aperture transmons), coupled to a lumped element LC microwave resonator fabricated on the opposite side of the chip, realizing dispersive circuit quantum electrodynamics (QED). The device is controlled and measured via coaxial ports, perpendicular to the plane of the chip (see Fig. 1(a)), whose distance from the chip can be modified to change the external quality factor of the circuits. These ports can be used for independent control of the qubit and measurement of the resonator in reflection, or to measure the device in transmission.”

Strangeworks Offers Qiskit Runtime

Strangeworks last week announced it is the first IBM partner to offer exclusive early preview access to Qiskit Runtime, a new service offered by IBM Quantum that streamlines computations requiring multiple iterations.

Qiskit Runtime, announced earlier this year, is a containerized service for quantum computers. Rather than accumulating latencies as code passes between a user’s device and the cloud-based quantum computer, developers can run their program in the Qiskit Runtime execution environment, where IBM’s hybrid cloud reduces the time between executions. Users of the Strangeworks platform may now use the new technology for free via a dedicated 7-qubit IBM quantum computer.
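A back-of-the-envelope latency model shows why moving the loop server-side matters for iterative algorithms; the numbers below are illustrative assumptions, not IBM measurements.

```python
# Toy latency model for why a server-side runtime helps iterative workloads.
iterations = 100          # e.g. a variational optimization loop
round_trip_s = 2.0        # assumed client <-> cloud latency per job submission
execute_s = 0.5           # assumed on-device execution time per iteration

# Classic pattern: every iteration pays the network round trip.
client_loop = iterations * (round_trip_s + execute_s)
# Runtime pattern: one upload, then the loop runs next to the hardware.
runtime_loop = round_trip_s + iterations * execute_s

print(client_loop)    # -> 250.0 seconds
print(runtime_loop)   # -> 52.0 seconds
```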

Cambridge Quantum Describes Quantum-Safe Blockchain

Cambridge Quantum (CQ), together with the Inter-American Development Bank (IDB) and the Monterrey Institute of Technology (TEC de Monterrey), published a paper describing the implementation of a quantum-safe blockchain, which was successfully demonstrated on the LACChain network and secured using CQ’s IronBridge quantum key generation platform.

Defending the blockchain against the threat of quantum computing required two enhancements to be made.

  • Firstly, the blockchain was updated to use quantum-safe cryptographic algorithms, rather than vulnerable algorithms (such as ECDSA) that will be broken by quantum computers in as little as 5-10 years.
  • Secondly, the keys signing the blockchain transactions had to be completely unpredictable to present-day attackers as well as quantum-powered adversaries; otherwise fraudulent transactions would occur. This second step was achieved using CQ’s IronBridge quantum key generation platform, which CQ bills as the only source of provably perfect and unpredictable cryptographic keys in the world.

Excerpt from a preprint (Quantum-resistance in blockchain networks) of the paper on arXiv.org:

“The advent of quantum computing threatens internet protocols and blockchain networks because they utilize non-quantum resistant cryptographic algorithms. When quantum computers become robust enough to run Shor’s algorithm on a large scale, the most used asymmetric algorithms, utilized for digital signatures and message encryption, such as RSA, (EC)DSA, and (EC)DH, will be no longer secure. Quantum computers will be able to break them within a short period of time. Similarly, Grover’s algorithm concedes a quadratic advantage for mining blocks in certain consensus protocols such as proof of work.”
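As an illustration of the kind of quantum-safe replacement the paper has in mind (not IronBridge itself, whose internals aren’t published), here is a minimal Lamport one-time signature in pure Python: its security rests on hash preimage resistance, which Shor’s algorithm does not break.

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg, sk):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]   # reveal one secret per bit

def verify(msg, sig, pk):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(b"transfer 10 tokens", sk)
print(verify(b"transfer 10 tokens", sig, pk))   # -> True
print(verify(b"transfer 99 tokens", sig, pk))   # -> False
```

A real deployment would use standardized hash-based or lattice-based schemes (e.g. XMSS or the NIST post-quantum candidates); Lamport keys are large and strictly one-time-use, but they show the principle.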

China Researchers Report Besting Google in Quantum Supremacy

If you’ve followed quantum computing, you’re familiar with the on-again/off-again battle to demonstrate quantum supremacy – the ability to perform a calculation on a quantum computer in a reasonable time that cannot realistically be done on a classical system. Google was first to claim this prize (see HPCwire coverage), amid dispute over whether it had actually done so, and over whether supremacy matters as much as doing something useful on a quantum computer sufficiently better than on a classical computer to make it worthwhile.

Researchers from China report performing a similar exercise even faster than Google on what’s described as a two-dimensional programmable superconducting processor composed of 66 functional qubits in a tunable coupling architecture. Rather than dive into the pros and cons, here’s the abstract of the pre-print paper (Strong quantum computational advantage using a superconducting quantum processor) describing the work led by Jian-Wei Pan, who has been called the father of China’s quantum computing efforts (a figure from the paper is shown).

“Scaling up to a large number of qubits with high-precision control is essential in the demonstrations of quantum computational advantage to exponentially outpace the classical hardware and algorithmic improvements. Here, we develop a two-dimensional programmable superconducting quantum processor, Zuchongzhi, which is composed of 66 functional qubits in a tunable coupling architecture. To characterize the performance of the whole system, we perform random quantum circuits sampling for benchmarking, up to a system size of 56 qubits and 20 cycles.

“The computational cost of the classical simulation of this task is estimated to be 2-3 orders of magnitude higher than the previous work on 53-qubit Sycamore processor (Google). We estimate that the sampling task finished by Zuchongzhi in about 1.2 hours will take the most powerful supercomputer at least 8 years. Our work establishes an unambiguous quantum computational advantage that is infeasible for classical computation in a reasonable amount of time. The high-precision and programmable quantum computing platform opens a new door to explore novel many-body phenomena and implement complex quantum algorithms.”
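Random-circuit sampling claims like this are typically scored with linear cross-entropy benchmarking (XEB). The sketch below illustrates the metric on a stand-in distribution – it is not a simulation of Zuchongzhi – showing that samples from the ideal distribution score near 1 while uniform noise scores near 0.

```python
import random
random.seed(0)

def linear_xeb(probs, samples):
    # Linear cross-entropy benchmark: F = D * mean(p(sampled)) - 1.
    # ~1 for samples drawn from the ideal distribution, ~0 for uniform noise.
    D = len(probs)
    return D * sum(probs[s] for s in samples) / len(samples) - 1

n_bits = 8
D = 2 ** n_bits
# Stand-in for an ideal random-circuit output distribution (not a real
# circuit simulation): exponentially distributed, Porter-Thomas-like weights.
raw = [random.expovariate(1.0) for _ in range(D)]
total = sum(raw)
probs = [w / total for w in raw]

ideal_samples = random.choices(range(D), weights=probs, k=20000)
noise_samples = random.choices(range(D), k=20000)   # uniform, fully decohered

print(round(linear_xeb(probs, ideal_samples), 2))   # close to 1
print(round(linear_xeb(probs, noise_samples), 2))   # close to 0
```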

Harvard-MIT Researchers Extend Quantum Simulator Capability

Physicists from the Harvard-MIT Center for Ultracold Atoms and other universities have developed a special type of quantum computer known as a programmable quantum simulator capable of operating with 256 quantum bits. A paper on the work (Quantum phases of matter on a 256-atom programmable quantum simulator) was published in Nature last week. The device seems more of a tool to investigate quantum states that might later be used in quantum computers.

As described in an article written by Juan Siliezar in the Harvard Gazette, “The system marks a major step toward building large-scale quantum machines that could be used to shed light on a host of complex quantum processes and eventually help bring about real-world breakthroughs in material science, communication technologies, finance, and many other fields, overcoming research hurdles that are beyond the capabilities of even the fastest supercomputers today. Qubits are the fundamental building blocks on which quantum computers run and the source of their massive processing power.”

“The workhorse of this new platform is a device called the spatial light modulator, which is used to shape an optical wavefront to produce hundreds of individually focused optical tweezer beams,” said Sepehr Ebadi, a physics student in the Harvard Graduate School of Arts and Sciences and the study’s lead author. “These devices are essentially the same as what is used inside a computer projector to display images on a screen, but we have adapted them to be a critical component of our quantum simulator.”

It’s best to read the paper directly. The researchers demonstrated a programmable quantum simulator “based on deterministically prepared two-dimensional arrays of neutral atoms, featuring strong interactions controlled by coherent atomic excitation into Rydberg states. Using this approach, we realize a quantum spin model with tunable interactions for system sizes ranging from 64 to 256 qubits.”

They benchmarked the system by characterizing high-fidelity antiferromagnetically ordered states and demonstrating quantum critical dynamics consistent with an Ising quantum phase transition in (2 + 1) dimensions. “We then create and study several new quantum phases that arise from the interplay between interactions and coherent laser excitation, experimentally map the phase diagram and investigate the role of quantum fluctuations. Offering a new lens into the study of complex quantum matter, these observations pave the way for investigations of exotic quantum phases, non-equilibrium entanglement dynamics and hardware-efficient realization of quantum algorithms,” wrote the authors.
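The 256-atom regime is precisely what needs the hardware, but the Rydberg blockade underlying these phases can be seen in a two-atom toy model diagonalized exactly; the parameters below are illustrative, not the paper’s.

```python
import numpy as np

# Two-atom Rydberg Hamiltonian (illustrative units, hbar = 1):
# H = (Omega/2)(X1 + X2) - Delta(n1 + n2) + V n1 n2
omega, delta, V = 1.0, 0.5, 50.0   # V >> Omega: the blockade regime

X = np.array([[0.0, 1.0], [1.0, 0.0]])
n = np.diag([0.0, 1.0])            # projector onto the Rydberg state
I = np.eye(2)

H = (omega / 2) * (np.kron(X, I) + np.kron(I, X)) \
    - delta * (np.kron(n, I) + np.kron(I, n)) \
    + V * np.kron(n, n)

vals, vecs = np.linalg.eigh(H)
ground = vecs[:, 0]
double_occ = ground @ np.kron(n, n) @ ground   # P(both atoms excited)

# The strong interaction V suppresses simultaneous excitation of neighbors,
# which at scale produces the antiferromagnetic ordering the paper studies.
print(double_occ < 0.01)   # -> True
```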

Rigetti’s New Processor and Industry Collaboration

Rigetti Computing made significant announcements on two fronts. At the beginning of the month, it launched a new multi-chip quantum processor design which the company calls the first in the world. Today, it announced a collaboration with Riverlane (quantum software developer) and Astex Pharmaceuticals to develop an integrated application for simulating molecular systems using Rigetti Quantum Cloud Services.

Rigetti modular, multi-chip quantum processor.

Rigetti says its new multi-chip approach incorporates a proprietary modular architecture that “accelerates the path to commercialization and solves key scaling challenges toward fault-tolerant quantum computers.” Not a lot of detail was provided. Rigetti expects to make an 80-qubit system powered by the breakthrough multi-chip technology available on its Quantum Cloud Services platform later this year.

The company notes that scaling quantum computers comes with inherent challenges: “As chips increase in size, there is a higher likelihood of failure and lower manufacturing yield, making it increasingly difficult to produce high-quality devices. Rigetti has eliminated these roadblocks by developing the technology to connect multiple identical dies into a large-scale quantum processor. This modular approach exponentially reduces manufacturing complexity and allows for accelerated, predictable scaling.”
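Rigetti didn’t publish numbers, but the yield argument can be put in toy figures. Assuming a per-qubit defect probability (my assumption, not Rigetti data), small dies are individually far more likely to be good, and a modular processor only needs enough good dies from a batch:

```python
from math import comb

# Toy yield model; p is an assumed per-qubit defect probability, not Rigetti data.
p = 0.01

def die_yield(n_qubits):
    # A die is good only if every one of its qubits is defect-free.
    return (1 - p) ** n_qubits

y80 = die_yield(80)   # ~0.45: most monolithic 80-qubit dies have a defect
y20 = die_yield(20)   # ~0.82: small dies are far more likely to be clean

# A modular processor needs any 4 good small dies; from a batch of 12
# candidates, the chance of finding at least 4 good ones is essentially 1.
k = 12
at_least_4 = sum(comb(k, g) * y20 ** g * (1 - y20) ** (k - g)
                 for g in range(4, k + 1))
print(round(y80, 2), round(y20, 2), round(at_least_4, 4))
```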

On the application front, it announced a new partnership that aims to design more efficient drugs and shorten the time to market. Today, drug researchers often use advanced computational methods to model molecular structures and drug-target interactions. Quantum computers have the potential to model more complex systems and improve the drug discovery process, but today’s quantum computers remain too noisy for results to evolve past proof-of-concept studies.

“Building on previous work with Astex, our collaboration aims to overcome this technological barrier and address a real business need for the pharmaceutical sector,” said Riverlane CEO Steve Brierley in the official announcement. The project will leverage Riverlane’s algorithm expertise and existing technology for high-speed, low-latency processing on quantum computers using Rigetti’s commercially available quantum systems. The team will also develop error mitigation software to help optimize the performance of the hardware architecture, which they expect to result in up to a threefold reduction in errors and runtime improvements of up to 40x.

Honeywell/Cambridge Quantum Work with Nippon Steel

You may know that Honeywell Quantum Solutions and Cambridge Quantum Computing agreed to merge this spring, with the parent Honeywell corporation acquiring CQC. The entity, not yet named, remains owned by Honeywell but will operate independently, as do many Honeywell units.

The new company recently discussed some of the work it is doing with Nippon Steel to devise an optimal schedule for the intermediate products it uses during the steel manufacturing process. CQC developed an algorithm and ran it on the System Model H1 (ion trap system), Honeywell Quantum Solutions’ latest commercial computer.

“Scheduling at our steel plants is one of the biggest logistical challenges we face, and we are always looking for ways to streamline and improve operations in this area,” said Koji Hirano, chief researcher at Nippon Steel. The partners report the System Model H1 was able to find the optimal solution after only a few steps, a result they say is encouraging for scaling up the size of the problems tackled.
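CQC hasn’t published the algorithm’s details, but the general recipe for such scheduling work – encode the constraints as a QUBO and search for its minimum-energy assignment – can be sketched at toy scale, with brute force standing in for the quantum solver:

```python
from itertools import product

# Toy scheduling QUBO: assign 3 jobs to 2 time slots, one binary variable
# x[j * SLOTS + t] per (job, slot) pair. Penalty A enforces "each job is
# scheduled exactly once"; penalty B discourages jobs 0 and 1 sharing a
# slot. Jobs, slots, and weights are all illustrative.
JOBS, SLOTS, A, B = 3, 2, 10.0, 3.0

def cost(x):
    c = 0.0
    for j in range(JOBS):
        assigned = sum(x[j * SLOTS + t] for t in range(SLOTS))
        c += A * (assigned - 1) ** 2                   # exactly-once constraint
    for t in range(SLOTS):
        c += B * x[0 * SLOTS + t] * x[1 * SLOTS + t]   # jobs 0 and 1 conflict
    return c

# Brute force over all 2^6 assignments; a quantum optimizer or annealer
# would search this same energy landscape instead.
best = min(product((0, 1), repeat=JOBS * SLOTS), key=cost)
print(cost(best))   # -> 0.0: a conflict-free schedule exists
```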

ColdQuanta Reports 100-Qubit Milestone

There are, of course, many qubit technologies under development. ColdQuanta is developing a cold atom approach that exploits condensed matter physics (Bose-Einstein condensate). Last week the company reported successfully trapping and addressing 100 qubits in a large, dense 2-D cold atom array.

ColdQuanta reports it is on track to deliver a digital gate-based quantum computer (code-named “Hilbert”) later this year. The company claims it will be among the most powerful in the world, using pristine qubits with the stability of atomic clocks to massively scale qubit count beyond what is possible with other quantum computing approaches. We’ll see. Here’s past HPCwire coverage of ColdQuanta’s technology: ColdQuanta – Life in Quantum’s Slow (and Cold) Lane Heats Up.

Links to a few other recent articles in HPCwire

Technical University of Denmark Researchers Tighten Grip on Quantum Computer

Aalto Researchers Unlock Radiation-Free Quantum Technology with Graphene

Griffith University Researchers Work to Build Error-Proof Quantum Computer Using $2M Grant

Caption to Figure 1 of Oxford Quantum Circuits 2017 paper

Figure 1. (a) CAD design of the unit cell, with transmon qubit and lumped element resonator on opposing sides of a substrate, and control and measurement ports perpendicular to the chip plane. (b) Designs of the transmon and resonator. In the transmon the two electrodes are connected by a single Josephson junction, whereas the electrodes of the resonator are connected by an inductor line. (c) Equivalent circuit of the device, showing the resonator inductance and capacitance, LR and CR, the junction Josephson energy EJ and effective capacitance over the junction CΣ.
