Survey Results: PsiQuantum, ORNL, and D-Wave Tackle Benchmarking, Networking, and More

By John Russell

September 19, 2022

There are many issues in quantum computing today – among the more pressing are benchmarking, networking, and the development of hybrid classical-quantum approaches. For example, will quantum networking be necessary to practically scale up the size of quantum computers? There are differing perspectives on this question, but most currently think networking will be necessary to achieve scale. Likewise, well-drawn benchmarking can help both quantum technology developers and users compare systems and identify strengths and weaknesses. But what does well-drawn mean?

In this most recent HPCwire/QCwire survey, senior researchers from D-Wave Systems, Oak Ridge National Laboratory, and PsiQuantum tackle benchmarking, networking, and hybrid classical-quantum computing approaches – and you may be surprised by some of their answers. For example, Peter Shadbolt of PsiQuantum offers a nuanced view on hybrid classical-quantum computing that’s well worth reading. (D-Wave didn’t weigh in on networking as that is not Murray Thom’s expertise.)

Our respondents include:

  • Nicholas A. Peters, section head, Quantum Information Science (QIS) Section, Oak Ridge National Laboratory. Peters leads ORNL’s QIS efforts, focusing on networking technologies.
  • Murray Thom, vice president of product management, D-Wave Systems. A pioneer in quantum annealing, D-Wave has also launched a gate-based system development effort and is expected to report on its progress later in the year. The company has also been a leader in commercial engagements.
  • Peter Shadbolt, co-founder and chief scientific officer, PsiQuantum, which is developing a quantum system using photonics-based qubits. PsiQuantum believes its approach is perhaps the most scalable of current approaches and has a detailed plan to get to one million qubits, the often-cited threshold many believe will enable fault-tolerant quantum computing.

Thanks to all of the respondents; their answers are thoughtful. These regular HPCwire/QCwire surveys couldn’t provide a kind of real-time view into important issues without their efforts. We expect perspectives to evolve as the technology matures, and we’re hopeful our regular survey will reflect the current views of leaders in the quantum community.

1 Hybrid Classical-Quantum or Pure-play Quantum. There’s a lot of discussion around using quantum computing as simply another accelerator in the advanced computing landscape, and around parsing problems into pieces, with some portions best run on quantum computers and other portions best run on classical resources.

a) What’s your take on the hybrid classical-quantum computing approach? Is it worthwhile? How significant a portion of quantum computing will the hybrid approach become? Do you see distinct roles for hybrid classical-quantum computing and for pure-play quantum computing? 

Nicholas Peters, ORNL

ORNL’s Peters:
Unless you are building an algorithm-specific quantum computer, much like how one might use an analog classical computer, I’d expect a hybrid classical-quantum system will be the primary way to leverage the power of quantum computers as they mature. Algorithm-optimized quantum-only machines could be used to simulate parts of problems that are hard on classical machines before we have a good way to integrate with larger classical infrastructures. Further, algorithm-optimized quantum computers may even make up core co-processing units used in more general hybrid classical-quantum systems.

D-Wave’s Thom:
We believe hybrid computing is central to achieving our quantum future. The combination of the best quantum computing methods and the best classical approaches will be the optimal way to solve problems. As powerful as modern classical computing technologies may be, there is an emerging set of applications that require new resources – quantum resources – to meet the demands of businesses in today’s increasingly competitive markets.

Murray Thom, D-Wave Systems

Pure-play quantum computing will likely be the realm of specialists and hybrid processing workflow designers. There will be uses for remote processing with direct calls to quantum processors – for example, in physics studies of spin glasses or sub-routines of a real Shor’s algorithm implementation. But from a commercial applications point of view, industry users will need whole-problem hybrid solvers with self-contained quantum subroutines.

As we look ahead, performant, high-value hybrid solvers across multiple problem types will continue to expand and deliver the benefits of both quantum and classical resources for both annealing quantum computers and gate-model systems for emerging quantum use cases. What we have seen, and believe others will find as well, is that for problems you can solve most effectively with a quantum computer, you can reach an even larger size once you hybridize with classical systems.
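To make the “whole-problem hybrid solver with self-contained quantum subroutines” pattern concrete, here is a minimal sketch using D-Wave’s open-source Ocean SDK. It assumes an installed Ocean SDK and a configured Leap API token, and the toy problem values are invented for illustration – this is a sketch of the usage pattern, not a production example.

```python
# A minimal sketch of the whole-problem hybrid solver pattern, using
# dimod + dwave-system from D-Wave's Ocean SDK. Assumes a configured
# Leap API token; problem coefficients are illustrative only.
import dimod
from dwave.system import LeapHybridSampler

# Build a tiny QUBO: minimize x0 + x1 - 2*x0*x1 (rewards x0 and x1 agreeing).
bqm = dimod.BinaryQuadraticModel({'x0': 1.0, 'x1': 1.0},
                                 {('x0', 'x1'): -2.0},
                                 0.0, dimod.BINARY)

# The hybrid solver decomposes the problem internally: classical heuristics
# handle the bulk, and quantum annealing is invoked on selected subproblems.
sampler = LeapHybridSampler()
result = sampler.sample(bqm, time_limit=5)  # seconds of hybrid solve time
print(result.first.sample, result.first.energy)
```

From the user’s perspective the quantum subroutine is entirely self-contained, which is Thom’s point: industry users submit the whole problem and receive a solution, without orchestrating the quantum calls themselves.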

Peter Shadbolt, PsiQuantum

PsiQuantum’s Shadbolt:
We anticipate that most end-to-end applications enabled by quantum computing will depend on a mixture of both classical and quantum computation to produce valuable answers. However, there are two widely held misconceptions. The first is that this mixed responsibility “lowers the bar” for the performance of the quantum computer and creates opportunities for real utility using very small or weak quantum computers. This is not the case. As far as we understand, you need a powerful, error-corrected quantum computer before you can start talking seriously about quantum advantage – no matter how great your integration with conventional hardware might be.

Secondly, it is often thought that the quantum computer must be very tightly integrated with the supporting conventional hardware – high-bandwidth networking, colocation, etc. Consider that a “world-changing”, million-physical-qubit quantum computer only supports hundreds of logical qubits, runs billions of gates, and has a single-shot run-time much (much!) longer than a second. The bandwidth of user-facing data coming out of this system is minuscule – on the order of kilobytes per second. Assuming that the program to be run can be expressed in less than a few gigabytes (an extremely conservative estimate), the entire machine can be operated remotely over a regular consumer internet connection. Latency and bandwidth are not prohibitive at all, and colocation is not required.
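Shadbolt’s remote-operation argument is easy to check on the back of an envelope. The sketch below uses his own figures (a few-gigabyte program, kilobytes-per-second output) plus an assumed 100 Mbit/s consumer connection, which is not a number from the interview:

```python
# Back-of-envelope check of the remote-operation argument. Program size and
# output rate come from Shadbolt's figures; the link rate is assumed.
program_size_bytes = 3e9    # "less than a few gigabytes" (conservative)
output_rate_Bps    = 1e3    # "on the order of kilobytes per second"
link_rate_bps      = 100e6  # assumed 100 Mbit/s consumer broadband

upload_time_s = program_size_bytes * 8 / link_rate_bps
print(f"Program upload: ~{upload_time_s / 60:.0f} minutes")  # ~4 minutes

# Result stream as a fraction of the link capacity:
print(f"Output uses {output_rate_Bps * 8 / link_rate_bps:.4%} of the link")
```

Uploading even a deliberately oversized program takes minutes, and the result stream occupies a vanishing fraction of an ordinary connection – consistent with his claim that colocation is unnecessary.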

b) Do you think quantum computing capability will become embedded in existing HPC application suites? For example, in a suite such as ANSYS, will quantum computing become incorporated as an accelerator option for users to target? 

ORNL’s Peters:
Eventually, it seems likely that quantum computers will be a part of future HPC. I don’t think it is clear yet if we will be able to automate breaking up the code into calls optimized for the different types of accelerators or leave that to the programmers, though automation would be a desirable outcome.

D-Wave’s Thom:
Yes, at this point this seems like a natural outcome of the co-evolution of quantum and classical processors. We think it will result in a continuum of quantum-accelerated computations, each varying in the degree to which it depends on quantum computation.

PsiQuantum’s Shadbolt:
At some point in the far future, I think this is a reasonable expectation, in the same way that features for exploiting SIMD, GPUs and TPUs have crept into other scientific software libraries. However, in the short term, we expect the use of quantum computers to be more bespoke, more hands-on, and less widely available than is suggested by the question.
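As a purely hypothetical illustration of what an “accelerator option” might eventually look like from the application side, here is a sketch of the dispatch pattern the respondents describe – a solver routine that routes work to whichever backend the user targets. Every name in it is invented for illustration; no existing HPC suite exposes this interface.

```python
# Hypothetical sketch: quantum as one registered accelerator backend among
# several, selected by configuration rather than by rewriting user code.
from typing import Callable, Dict

BACKENDS: Dict[str, Callable] = {}

def backend(name: str):
    """Register a solver implementation under a backend name."""
    def register(fn: Callable) -> Callable:
        BACKENDS[name] = fn
        return fn
    return register

@backend("cpu")
def solve_classical(problem: str) -> str:
    return f"classical solution of {problem}"

@backend("quantum")
def solve_quantum(problem: str) -> str:
    # In practice this branch would wrap a call to a quantum cloud service.
    return f"quantum-accelerated solution of {problem}"

def solve(problem: str, accelerator: str = "cpu") -> str:
    """User-facing entry point: the accelerator is a configuration choice."""
    return BACKENDS[accelerator](problem)

print(solve("structural eigenproblem", accelerator="quantum"))
```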

2 Quantum Networking. Quantum networking is an active area of research on at least two fronts. 1) Many believe it will be necessary to network quantum processors together to achieve scale, whether at the chip level or via system clustering. 2) Quantum networks (LAN/MAN/WAN, etc.) might offer many attractive attributes, from secure communications to distributed quantum processing environments; DOE even has a Quantum Internet Blueprint.

a) How necessary do you think quantum networking will be for scaling up quantum computers? Will clustering smaller systems together be required to deliver adequate scale to tackle practical problems? When do you expect to see networked quantum chips/systems to start to appear, even if only in R&D? What key challenges remain? 

ORNL’s Peters:
One could argue that a quantum network will be needed to scale quantum computers. The value proposition is that, even if not required, a quantum network of two quantum computers is potentially much more than a factor of two more powerful than two independent quantum computers. However, a quantum network might not be optimized the same way for different types of qubits; once a particular qubit technology is selected, it drives a lot of architectural considerations for supporting technology development. Another potential advantage of networked quantum computing resources is the potential to reduce crosstalk when we address qubits living in different parts of a multi-core quantum-processor machine. Finally, one could use different quantum computing technologies to do different parts of a computation, not unlike how we use GPUs and CPUs in HPC today.
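Peters’ “much more than a factor of two” claim follows from how quantum state spaces combine: entangling two machines multiplies their dimensions rather than adding them. A quick worked example, with an illustrative qubit count:

```python
# Why networking two quantum processors can beat a factor-of-two gain:
# the joint state space of entangled machines grows multiplicatively.
n = 50  # qubits per machine (illustrative)

dim_one       = 2 ** n          # state-space dimension of one machine
dim_separate  = 2 * dim_one     # two machines with no shared entanglement
dim_networked = 2 ** (2 * n)    # one effective 100-qubit machine via a link

print(f"two separate machines : {dim_separate:.3e}")
print(f"networked machines    : {dim_networked:.3e}")
print(f"advantage factor      : {dim_networked / dim_separate:.3e}")  # ~5.6e14
```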

D-Wave’s Thom:
N/A

PsiQuantum’s Shadbolt:
At least a million physical qubits are necessary for all known useful applications of quantum computers. For most qubit implementations, the qubits are, and will forever remain, too large to fit a million onto a single chip (die/reticle), and therefore high-performance quantum networking will be critical to achieving any utility. Probably the most compelling exception to this generalization is quantum dots, where it is reasonable to expect that a million qubits can be fabricated into a single reticle field, albeit with challenges associated with control electronics. Outside of special cases such as quantum dots, where very high density can be achieved, we see chip-to-chip quantum networking as an essential prerequisite for the commercial viability of quantum computers.
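The die-size argument can be sanity-checked with rough numbers. A standard lithography reticle field is about 26 mm × 33 mm; the qubit footprints below are order-of-magnitude assumptions for illustration, not figures from any vendor:

```python
# Rough feasibility check of fitting a million qubits into one reticle field.
# Footprints are order-of-magnitude assumptions, not vendor specifications.
reticle_area_mm2 = 26 * 33  # ~858 mm^2, standard reticle field

assumed_footprint_mm2 = {
    "superconducting qubit (~1 mm pitch)":   1.0 ** 2,
    "photonic qubit circuit (~0.1 mm pitch)": 0.1 ** 2,
    "silicon quantum dot (~100 nm pitch)":   (100e-6) ** 2,
}

for tech, footprint in assumed_footprint_mm2.items():
    print(f"{tech}: ~{reticle_area_mm2 / footprint:.0e} qubits per reticle")
```

Under these assumptions, only the densest technologies (such as quantum dots) clear the million-qubit bar on one reticle, which is exactly the exception Shadbolt singles out.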

b) What’s your sense of progress to date in developing quantum networking and a quantum internet? What kinds of applications will be enabled, and how soon do you expect nascent quantum networks and prototype quantum internets to appear? What are the key technical hurdles remaining?

ORNL’s Peters:
The progress in the US has been rapidly accelerating with recent investments. However, we may have small fault-tolerant quantum computers before we have fault-tolerant quantum networks, since the historic focus has been on the computers themselves. We can enable some limited quantum-based cybersecurity functions already, but they need further study to ensure methods of accreditation are developed and implemented. In addition to quantum computing, networking quantum sensors promises to greatly improve our ability to measure events of interest, including, potentially, the discovery of new physical phenomena such as dark matter, which we cannot directly detect today. The key technical hurdles to overcome are correcting for loss and other operation errors when transmitting quantum information.
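The loss problem Peters highlights is stark in numbers. Standard telecom fiber attenuates light by roughly 0.2 dB/km at 1550 nm, so photon survival falls exponentially with distance – and because the no-cloning theorem forbids amplifying unknown qubits, repeaters rather than amplifiers are needed:

```python
# Photon survival in standard telecom fiber (~0.2 dB/km at 1550 nm).
# Classical amplification can't rescue the signal: copying unknown qubits
# is forbidden by the no-cloning theorem, hence quantum repeaters.
ALPHA_DB_PER_KM = 0.2

for distance_km in (10, 100, 300, 1000):
    survival = 10 ** (-ALPHA_DB_PER_KM * distance_km / 10)
    print(f"{distance_km:5d} km: photon survival probability {survival:.1e}")
```

At 1,000 km the direct-transmission survival probability is around 10^-20, which is why a national-scale quantum network is inconceivable without repeaters.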

D-Wave’s Thom:
N/A

PsiQuantum’s Shadbolt:
The most compelling use-case that we are aware of for the proposed “quantum internet” is device-independent quantum key distribution, which enables secure communication with very specific and differentiated guarantees on security. PsiQuantum does develop components that are relevant to the challenges posed by a hypothetical quantum internet. For instance, we invest in low-loss photonic devices, high-efficiency manufacturable single photon detectors, high-performance optical phase-shifters, etc. However, PsiQuantum is focused on building a quantum computer, and does not pursue the quantum internet as a goal.

3 Benchmarks. We seem to love benchmarks and top-performer lists (think Top500 list and MLPerf). These metrics can be useful or not so useful. Currently, there’s a lot of activity around developing benchmarks for quantum computing, from IBM’s Quantum Volume and IonQ’s Algorithmic Qubits (which is based on QED-C efforts) to diverse efforts underway at DOE. The idea, of course, is to provide reasonable ways to compare quantum systems based on criteria ranging from hardware performance characteristics to application performance across differing systems and qubit technologies.

a) What’s your sense of the need for benchmarks in quantum computing? Which of the existing still-young offerings, if any, do you prefer and why? Are you involved in any benchmark development collaborations? To what extent do you use existing benchmarks to compare systems now?

ORNL’s Peters:
Generally speaking, benchmarks are needed, though in conventional computing infrastructures careful consideration is made for practical issues like cost and energy consumption along with performance. How exactly one should quantify the performance of a quantum computer is still an active area of research, so relating the performance one gets from a hybrid system to what’s possible with equal resources spent on an entirely classical infrastructure is also not yet clear. The technology is probably too immature to make a meaningful comparison at this point, and I am not currently involved in any quantum computing benchmark development efforts, though I am interested in understanding if they might be applied to quantum repeater systems.

D-Wave’s Thom:
Benchmarks are vital in quantum computing, serving two distinct purposes: communicating technological progress by measuring performance against an ideal (noise-free) quantum computation, and informing customers about which products are most suitable for their computational needs.

For D-Wave’s quantum annealing computers, we prefer the second approach – comparing quantum hybrid application performance against existing commercial methods – because we believe that customers need real-world comparisons to demonstrate business value.

D-Wave researchers are members of a few committees (IEEE, QED-C) working to develop benchmark tests for both gate-model and annealing quantum computers, and we have also published papers that illustrate our approach. We also have a huge repertoire of internal benchmarks that measure the performance of bare hardware components, of the full quantum processing unit, and of our online hybrid solvers. We normally publish benchmark results when new products go live, again through the lens, as often as possible, of commercial applications.

PsiQuantum’s Shadbolt:
We welcome the concerted and sensitive effort by the community to define good benchmarks.

b) What elements do you think good quantum benchmarks should include? Should the benchmark be a single number, such as in Top500, or offer a suite of results such as is done in MLPerf? Who should develop the benchmarks? Do you think we will end up with an analog of the Top500 List for quantum computers? 

ORNL’s Peters:
Good quantum benchmarks should be able to capture and quantify the challenging aspects that currently make it difficult to build a scalable quantum computing platform. Perhaps they will be able to abstract to existing metrics, but that might be too lofty a goal considering the types of problems quantum computers will likely be good at solving. The broader computing community, including academia, industry, and government, should develop benchmarks. One could have a Top500 list for quantum computers; however, I think it would be more desirable to find benchmarks that quantify the capability of hybrid systems.

D-Wave Advantage System

D-Wave’s Thom:
Good user benchmarks should include performance measurements at whole-problem solving, as opposed to the performance of individual circuits or components (or else better information about how individual component performance is relevant to whole-problem performance). In addition, test designs should reflect the user experience by accounting for the full computation, using realistic inputs, and not being unrealistically over-tuned for narrow test scenarios. Measurements also should incorporate both computation time and solution quality. Basically, they should follow standards and expectations that have been set out for classical computational benchmarking, with some necessary modifications for the quantum scenario.

In terms of whether the benchmark should be a single number: given the unusual properties of quantum computers, a single number can be misleading because single-number rankings over-generalize performance across too many applications and metrics. No quantum computer can be best at every task it is given, and a suite of numbers is needed to characterize the kinds of scenarios for which a given one can outperform classical and other quantum alternatives.

The benchmarks need to be developed through dialogue between quantum producers and quantum users. Producers want to be able to highlight the kinds of scenarios on which their computer performs best, and users want to know about test results that are relevant to their application/industry.

A single list for quantum computers is unlikely because of the current variety of incomparable technologies. Perhaps it will be possible a long time from now, after the technologies shake themselves out and settle on a small handful of best designs.
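Thom’s point that measurements should combine computation time and solution quality has a standard formalization in the annealing benchmarking literature: time-to-solution (TTS), the expected run time needed to reach a target-quality answer at least once with 99% confidence. The numbers below are illustrative:

```python
# Time-to-solution (TTS): a standard way to fold run time and success
# probability into one benchmark figure. Illustrative values only.
import math

def time_to_solution(t_run_s: float, p_success: float,
                     confidence: float = 0.99) -> float:
    """Expected total run time to hit the target with the given confidence."""
    if p_success >= confidence:
        return t_run_s
    return t_run_s * math.log(1 - confidence) / math.log(1 - p_success)

# A solver 10x slower per run but 100x more likely to succeed still wins:
print(f"fast, unreliable solver: TTS = {time_to_solution(0.02, 0.001):6.1f} s")
print(f"slow, reliable solver  : TTS = {time_to_solution(0.20, 0.100):6.1f} s")
```

This is one reason single-number rankings mislead: neither raw speed nor raw success rate alone predicts which machine solves the user’s problem first.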

PsiQuantum’s Shadbolt:
One way to use benchmarks is to help determine whether a particular machine is better or worse than another. However, in general what we would really like to quantify is the distance (essentially, the amount of time and money) between a particular machine, and the scale and performance that is required to achieve genuine utility – i.e. large-scale, fault-tolerant quantum computing. Current benchmarks are very good for the former, but in general are not as useful for the latter, primarily because nobody has yet built a device that is meaningfully large or performant. In other words, benchmarks allow us to rank-order current hardware, but since we also know that none of this hardware is remotely close to a genuinely useful quantum computer, the usefulness of the rank-ordering exercise is limited. This is not to dismiss current benchmarking efforts, but is merely a note of caution.

4 Your work. Please describe in a paragraph or two your current top project(s) and priorities.

ORNL’s Peters:
My current top priority is the development of tools and techniques needed to build a national-scale quantum network. This will likely require the development of new concepts and quantum technologies to build a network of quantum repeaters. Such a network will probably look similar to a special-purpose distributed quantum computer and will probably require us to encode our quantum information in photons of many different frequencies, or at the very least use these frequencies to improve the number of entangled photons that are probabilistically carried over an optical fiber. One of the major difficulties compared to quantum computing is that in networking we lose most of our quantum information carriers (the photons on which qubits are encoded) as they are transmitted. As a result, we need to fix large loss errors as well as other operation errors.

D-Wave’s Thom:
Supporting our track record of relentless product delivery, we’re continuing to focus on our Clarity roadmap to bring new innovations to market. In June 2022, we released an experimental prototype of our next-generation Advantage2 quantum system, which shows great promise with a new Zephyr topology and 20-way inter-qubit connectivity. This new prototype represents an early version of the upcoming full-scale product, and early benchmarks show increased energy scale and improved solution quality. New and existing customers can try out the experimental Advantage2 prototype by signing into Leap, our quantum cloud service.
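For readers who want to inspect the Zephyr topology Thom mentions, D-Wave’s open-source dwave-networkx package can generate it directly. This sketch assumes a recent version of the package, which added Zephyr support alongside the Advantage2 prototype work:

```python
# Inspecting the Zephyr topology with D-Wave's dwave-networkx package
# (assumes a recent release that provides zephyr_graph).
import dwave_networkx as dnx

Z = dnx.zephyr_graph(6)  # Zephyr graph with grid parameter m=6
max_degree = max(d for _, d in Z.degree())

# Interior qubits couple to 20 neighbors -- the "20-way" connectivity.
print(f"{Z.number_of_nodes()} nodes, max degree {max_degree}")
```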

PsiQuantum wafer

PsiQuantum’s Shadbolt:
Photonic quantum computers have not yet demonstrated very large entangled states of dual-rail-encoded photonic qubits. The reason for this is that multiplexing (essentially, trial-until-success) is required to overcome nondeterminism in single-photon sources and linear-optical entangling gates. Multiplexing is technically challenging for multiple reasons, but the most fundamental issue is the need for a very high-performance optical switch. PsiQuantum is investing heavily in a novel, high-performance, mass-manufacturable optical switch to overcome this issue. Beyond this, we are investing across the entire stack, from semiconductor process development, device design, packaging, test, reliability, systems integration and architecture, to control electronics and software, networking, cryogenic infrastructure, quantum architecture, error-correcting codes, implementations of fault-tolerant logic and algorithms, and application development.
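The multiplexing (“trial-until-success”) idea Shadbolt describes is easy to quantify: run N nondeterministic attempts in parallel and switch out the first success. The per-attempt probability below is an assumed illustrative value, not a PsiQuantum figure:

```python
# Multiplexing in numbers: N parallel nondeterministic attempts, keep the
# first success. Per-attempt probability is an assumed illustrative value.
P_ATTEMPT = 0.25  # assumed success probability of one source/gate attempt

for n in (1, 4, 16, 64):
    p_mux = 1 - (1 - P_ATTEMPT) ** n
    print(f"N = {n:2d} parallel attempts: P(at least one success) = {p_mux:.4f}")
```

The catch, as Shadbolt notes, is that routing the one successful attempt onward requires exactly the fast, low-loss optical switch PsiQuantum is developing.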

(Interested in participating in HPCwire/QCwire’s periodic sampling of current thinking? Contact [email protected] for more details.)

 
