Intel’s sprawling, optimistic vision for the future was on full display yesterday in CEO Pat Gelsinger’s opening keynote at the Intel Innovation 2023 conference being held in San Jose. While …
Intel used the latest MLPerf Inference (version 3.1) results as a platform to reinforce its developing “AI Everywhere” vision, which rests upon 4th gen Xeon CPUs and Gaudi2 (Habana) accelerat …
Data center infrastructure running AI and HPC workloads depends on powerful processors — CPUs, GPUs, and dedicated accelerator chips — to carry out compute-intensive tasks. That processing generates excessive heat, which drives up data center power consumption and adds to data center costs.
Data centers have traditionally relied on air-cooling solutions such as heatsinks and fans, which often cannot reduce energy consumption while sustaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air cooling in data centers running these workloads to meet their heat-dissipation and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip liquid-cooling solution that delivers substantial per-rack cooling power savings over air-cooled alternatives and reduces data centers' carbon footprint through QCT QoolRack smart management.
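To make the cooling power savings concrete, here is a minimal back-of-the-envelope sketch using the common PUE (power usage effectiveness) metric. All figures — the per-rack IT load and the PUE values for air and direct-to-chip liquid cooling — are hypothetical assumptions for illustration, not QCT or Intel measurements.

```python
# Illustrative sketch: per-rack cooling/overhead power under air vs.
# direct-to-chip liquid cooling, via PUE = total power / IT power.
# All numbers below are assumed for illustration only.

def cooling_overhead(it_load_kw: float, pue: float) -> float:
    """Non-IT (cooling and overhead) power for a given IT load and PUE."""
    return it_load_kw * (pue - 1.0)

it_load_kw = 40.0   # assumed IT load per rack, kW
pue_air = 1.5       # assumed PUE with traditional air cooling
pue_liquid = 1.1    # assumed PUE with direct-to-chip liquid cooling

air_overhead = cooling_overhead(it_load_kw, pue_air)
liquid_overhead = cooling_overhead(it_load_kw, pue_liquid)
savings_kw = air_overhead - liquid_overhead

print(f"Air-cooled overhead:    {air_overhead:.1f} kW per rack")
print(f"Liquid-cooled overhead: {liquid_overhead:.1f} kW per rack")
print(f"Savings:                {savings_kw:.1f} kW per rack")
```

Under these assumed numbers the overhead drops from 20 kW to 4 kW per rack; actual savings depend on rack density, facility design, and climate.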
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication