May 25, 2023
As HPC and AI continue to rapidly advance, the alluring vision of nuclear fusion and its endless zero-carbon, low-radioactivity energy is the sparkle in many a… Read more…
February 25, 2021
The integrated Fujitsu HPC/AI supercomputer, Wisteria, is coming to Japan this spring. The University of Tokyo is preparing to deploy a heterogeneous computing… Read more…
August 17, 2020
Ten years ago, the Department of Energy put out a call for innovators to change the world of nuclear energy. What DOE hoped to accomplish with the then-new… Read more…
March 3, 2020
Normally, even a two-fold speedup is a big deal for a large-scale simulation, saving large amounts of time (and energy, and money) on machines that are often booked to capacity. Now, a team of researchers from Stanford University and the University of Oxford has applied deep learning to speed simulations far more dramatically, up to billions of times faster, without sacrificing accuracy. Read more…
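The article doesn't include the team's code, but the core emulator idea can be sketched in a few lines: run the slow simulator offline to collect input/output pairs, fit a small neural network to them, then query the network instead of the simulator. Everything below (the toy simulator, network size, learning rate) is an illustrative assumption, not the researchers' actual setup.

```python
import numpy as np

# Toy stand-in for an expensive simulator; in the real work this would be
# a full physics code taking seconds to hours per run.
def expensive_simulator(x):
    return np.sin(3.0 * x) * np.exp(-0.5 * x**2)

rng = np.random.default_rng(0)

# Offline phase: run the slow simulator once to build a training set.
X = rng.uniform(-2.0, 2.0, size=(512, 1))
y = expensive_simulator(X)

# One-hidden-layer MLP, trained by plain gradient descent on squared error
# (the constant factor of 2 in the gradient is absorbed into the step size).
W1 = rng.normal(0, 0.5, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(5000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y                        # residuals
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h**2)        # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Online phase: the trained emulator answers in microseconds.
x_test = np.array([[0.3]])
print("emulator: ", (np.tanh(x_test @ W1 + b1) @ W2 + b2).item())
print("simulator:", expensive_simulator(x_test).item())
```

Once trained, a forward pass through a small network costs microseconds, which is where enormous speedups over a full physics code come from.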
December 12, 2019
Formula 1, Rob Smedley explained, is maybe the biggest racing spectacle in the world, with five hundred million fans tuning in for every race. Smedley, a chief… Read more…
November 22, 2019
At SC19, the Association for Computing Machinery (ACM) awarded the prestigious Gordon Bell Prize to the Swiss Federal Institute of Technology (ETH) Zurich. The… Read more…
October 1, 2019
In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming… Read more…
September 4, 2019
As Moore’s law runs out of steam, new programming approaches are being pursued with the goal of extracting greater hardware performance from less coding. The Defense Advanced Research Projects Agency (DARPA) is launching a new programming effort aimed at leveraging the benefits of massive distributed parallelism with less sweat. Read more…
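The article doesn't describe DARPA's programming model, but the goal, parallelism without hand-written plumbing, can be illustrated with a standard-library sketch: the same computation runs serially or across all cores, and the parallel version changes only one line. This is a generic Python example, not the DARPA effort's technology.

```python
from concurrent.futures import ProcessPoolExecutor
import math

# A compute-heavy kernel applied independently to many inputs.
def work(n):
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000 + i for i in range(8)]

    # Sequential baseline.
    serial = [work(n) for n in inputs]

    # The same computation spread across cores: the only change is
    # swapping the list comprehension for an executor map.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(work, inputs))

    assert serial == parallel
```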
Making the Most of Today’s Cloud-First Approach to Running HPC and AI Workloads With Penguin Scyld Cloud Central™
Bursting to the cloud has long been used to complement on-premises HPC capacity to meet variable compute demands. But in today's cloud-first era, many workloads start in the cloud with little IT or corporate oversight. What is needed is a way to operationalize the use of these cloud resources so that users get the compute power they need when they need it, within guardrails that account for cost and for efficient use of existing on-premises capacity. Download this special report to learn more about this topic.
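As a concrete illustration of cost-aware guardrails, here is a minimal, hypothetical dispatch policy: prefer on-premises capacity, burst to cloud only while a budget cap has headroom, otherwise queue. The function name, thresholds, and figures are invented for the sketch and are not part of Scyld Cloud Central.

```python
# Hypothetical placement policy for a cost-constrained cloud burst.
def place_job(cores_needed, onprem_free_cores,
              cloud_spend, cloud_budget, est_cloud_cost):
    if cores_needed <= onprem_free_cores:
        return "on-prem"                 # existing capacity covers the job
    if cloud_spend + est_cloud_cost <= cloud_budget:
        return "cloud"                   # burst, budget still has headroom
    return "queue"                       # wait rather than exceed the cap

print(place_job(64,  128, 900.0, 1000.0, 50.0))   # -> on-prem
print(place_job(256, 128, 900.0, 1000.0, 50.0))   # -> cloud
print(place_job(256, 128, 990.0, 1000.0, 50.0))   # -> queue
```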
Data center infrastructure running AI and HPC workloads relies on powerful CPUs, GPUs, and acceleration chips to carry out compute-intensive tasks. That processing generates substantial heat, which drives up data center power consumption and operating costs.
Data centers have traditionally used air cooling, heatsinks and fans, which often cannot reduce energy consumption while sustaining performance under AI and HPC loads. Liquid-cooled systems are increasingly replacing air-cooled ones in data centers running these workloads to meet heat-removal and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip cooling solution that delivers significant per-rack cooling-power savings over air-cooled alternatives and, with QoolRack smart management, reduces a data center's carbon footprint.
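The blurb above gives no concrete figures, but the scale of such savings can be bounded with a back-of-envelope calculation. The 35% air-cooling and 10% liquid-cooling overheads below are illustrative assumptions, not QCT measurements.

```python
# Back-of-envelope cooling comparison for one dense HPC rack.
# All numbers are assumed for illustration; the article provides none.
it_load_kw = 40.0             # assumed IT power of one rack

air_cooling_overhead = 0.35     # assumed: fans/CRAC ~35% of IT load
liquid_cooling_overhead = 0.10  # assumed: direct-to-chip ~10% of IT load

air_kw = it_load_kw * air_cooling_overhead
liquid_kw = it_load_kw * liquid_cooling_overhead
saved_kw = air_kw - liquid_kw

hours_per_year = 24 * 365
print(f"cooling power, air:    {air_kw:.1f} kW")
print(f"cooling power, liquid: {liquid_kw:.1f} kW")
print(f"saved per rack:        {saved_kw:.1f} kW "
      f"({saved_kw * hours_per_year / 1000:.1f} MWh/yr)")
```

Under these assumptions, direct-to-chip cooling saves about 10 kW per rack, roughly 88 MWh per rack per year, which is the kind of arithmetic behind per-rack savings claims.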