August 25, 2021
A new 44-petaflops (theoretical peak) supercomputer is under construction at the Department of Energy’s Argonne National Laboratory. Called Polaris, this new Read more…
April 1, 2021
Aurora, to be hosted by Argonne National Laboratory, is one of three planned exascale-class systems in the U.S. While the Intel-led system has encountered a variety of conceptual transformations (it was originally planned as a pre-exascale system) and setbacks... Read more…
March 27, 2018
The lithium-ion battery, which essentially transformed portable electronics, has been a tough act to follow. Last week, researchers from Argonne National Laboratory Read more…
September 14, 2015
To borrow a phrase from paleontology, the HPC community has historically evolved in punctuated equilibrium. In the 1970s we transitioned from serial to vector a Read more…
September 3, 2015
Comprising one half of the U.S. Department of Energy’s (DOE) Leadership Computing Facility, the Argonne Leadership Computing Facility (ALCF) operates a supercomputer Read more…
April 20, 2015
In our third and final video from the HPC User Forum panel (The Who-What-When of Getting Applications Ready to Run On, And Across, Office of Science Next-Gen L Read more…
April 17, 2015
In our second video feature from the HPC User Forum panel, "The Who-What-When of Getting Applications Ready to Run On, And Across, Office of Science Next-Gen L Read more…
March 12, 2015
Paul Messina, director of science for the Argonne Leadership Computing Facility (ALCF), discusses the primary objectives, curriculum and importance of the Argonne Read more…
Data center infrastructure running AI and HPC workloads depends on powerful processors, combining CPUs, GPUs, and accelerator chips to carry out compute-intensive tasks. That processing generates substantial heat, which drives up data center power consumption and operating costs.
Data centers have traditionally relied on air cooling, using heatsinks and fans, but these solutions may not reduce energy consumption while sustaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled ones in data centers running HPC and AI workloads to meet their heat and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip cooling solution that meets data center needs with significant per-rack cooling power savings over air-cooled solutions and reduces data centers’ carbon footprint through QCT QoolRack smart management.
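To make the per-rack savings claim concrete, the back-of-the-envelope arithmetic can be sketched from PUE (power usage effectiveness): cooling overhead is roughly the IT load times (PUE − 1). The rack load and PUE values below are illustrative assumptions for this sketch, not published QCT or Intel figures.

```python
def cooling_power_kw(it_load_kw: float, pue: float) -> float:
    """Estimate cooling/overhead power for a rack from its PUE.

    PUE = total facility power / IT power, so the non-IT
    overhead is approximately IT load * (PUE - 1).
    """
    return it_load_kw * (pue - 1.0)

# Assumed figures for a dense HPC/AI rack (hypothetical):
rack_it_load_kw = 40.0      # IT load of one rack
air_cooled_pue = 1.5        # assumed typical air-cooled facility PUE
liquid_cooled_pue = 1.1     # assumed direct-to-chip liquid-cooled PUE

air_overhead = cooling_power_kw(rack_it_load_kw, air_cooled_pue)
liquid_overhead = cooling_power_kw(rack_it_load_kw, liquid_cooled_pue)
savings = air_overhead - liquid_overhead

print(f"air-cooled overhead:    {air_overhead:.1f} kW")
print(f"liquid-cooled overhead: {liquid_overhead:.1f} kW")
print(f"savings per rack:       {savings:.1f} kW")
```

Under these assumed numbers the overhead drops from 20 kW to about 4 kW per rack; the real savings depend entirely on the actual rack density and facility PUE.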
© 2023 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.