Intel used the latest MLPerf Inference (version 3.1) results as a platform to reinforce its developing “AI Everywhere” vision, which rests upon 4th gen Xeon CPUs and Gaudi2 (Habana) accelerators…
The Role of Containers in Alzheimer's Disease Research in the Ebbert Lab
Navigating the complexities of scientific research often involves juggling large data sets, multiple tools, and speci …
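For readers unfamiliar with how containers fit into this kind of workflow, the minimal sketch below shows one containerized analysis step invoked from Python. The image name, data path, and samtools command are hypothetical placeholders, not the Ebbert Lab's actual pipeline.

import subprocess
from pathlib import Path

# Hypothetical container image and data location; the lab's actual images and tools may differ.
IMAGE = Path("rnaseq_tools.sif")              # Apptainer/Singularity image bundling the analysis tools
DATA_DIR = Path("/scratch/alzheimers_cohort")  # host directory holding the large data sets

def run_in_container(cmd: list[str]) -> None:
    """Run a command inside the container, bind-mounting the data directory into it."""
    full_cmd = [
        "apptainer", "exec",
        "--bind", f"{DATA_DIR}:{DATA_DIR}",    # expose host data at the same path inside the container
        str(IMAGE),
        *cmd,
    ]
    subprocess.run(full_cmd, check=True)

if __name__ == "__main__":
    # Example step: index a reference genome with a tool packaged in the image.
    run_in_container(["samtools", "faidx", str(DATA_DIR / "reference.fa")])

The point of the pattern is that every tool version lives inside the image, so the same step runs identically on a laptop, a cluster node, or a cloud instance.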
September 6, 2023
An on-premises computing powerhouse is taking shape in Tuscaloosa. According to reporting from Ken Roberts of the Tuscaloosa News, the University of Alabama… Read more…
August 14, 2023
Intel's newest under-the-hood improvements to boost chip performance, APX and AVX10 (the latter targeted more at HPC), sparked excitement among… Read more…
July 27, 2023
The AI supercomputing options in the cloud have expanded at an unprecedented rate over the last few weeks. Amazon joined the party on Wednesday by announcing… Read more…
July 24, 2023
While not a golden HPC spike, the final blade has been loaded into Aurora. As mentioned previously, final preparation of Aurora is underway… Read more…
July 24, 2023
The Texas Advanced Computing Center (TACC) today announced Stampede3, a powerful new Dell Technologies and Intel-based supercomputer that will enable groundbreaking… Read more…
July 20, 2023
Atom Computing, a pioneer in the use of neutral atoms for quantum computing, will collaborate with the U.S. Department of Energy’s National Renewable Energy Laboratory. Read more…
July 12, 2023
Worldwide revenue for the public cloud services market totaled $545.8 billion in 2022, an increase of 22.9% over 2021, according to new data from the IDC Worldwide… Read more…
July 9, 2023
Performance benchmarking is the hallmark of HPC. One need look no further than the Top500 list, which has been recording HPC performance since 1993… Read more…
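As a toy illustration of the principle behind such benchmarks (this is not the HPL/LINPACK code actually used for the Top500), one can time a dense matrix multiply and report the achieved GFLOP/s:

import time
import numpy as np

# Toy dense-matrix benchmark: times C = A @ B and reports achieved GFLOP/s.
# Illustrates the idea of measuring sustained floating-point rate, nothing more.
N = 4096
A = np.random.rand(N, N)
B = np.random.rand(N, N)

start = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - start

flops = 2.0 * N**3          # multiply-add operation count for an N x N x N GEMM
print(f"{flops / elapsed / 1e9:.1f} GFLOP/s in {elapsed:.2f} s")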
Data center infrastructure running AI and HPC workloads relies on powerful processors, combining CPUs, GPUs, and accelerator chips, to carry out compute-intensive tasks. AI and HPC processing generates substantial heat, which drives up data center power consumption and adds to data center costs.
Data centers have traditionally used air cooling, such as heatsinks and fans, which may not reduce energy consumption while sustaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled solutions in data centers running HPC and AI workloads to meet their heat and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip liquid cooling solution that delivers significant per-rack cooling power savings over air-cooled solutions and reduces data centers’ carbon footprint through QCT QoolRack smart management.
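As a rough illustration of how such rack-level savings are typically reasoned about, the back-of-the-envelope sketch below compares facility power under two assumed PUE values. All figures are hypothetical assumptions for illustration, not QCT's or Intel's published results.

# Back-of-the-envelope comparison of facility power under two cooling regimes.
# PUE, load, and price values below are illustrative assumptions only.

IT_LOAD_KW = 40.0          # assumed IT load of one rack, in kW
PUE_AIR = 1.6              # assumed PUE with traditional air cooling
PUE_LIQUID = 1.2           # assumed PUE with direct-to-chip liquid cooling
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10       # assumed electricity price in USD

def facility_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power = IT load * PUE (cooling and other overhead included)."""
    return it_load_kw * pue

savings_kw = facility_kw(IT_LOAD_KW, PUE_AIR) - facility_kw(IT_LOAD_KW, PUE_LIQUID)
savings_kwh = savings_kw * HOURS_PER_YEAR
print(f"Power saved per rack: {savings_kw:.1f} kW")
print(f"Energy saved per rack-year: {savings_kwh:,.0f} kWh "
      f"(~${savings_kwh * PRICE_PER_KWH:,.0f}/yr at $0.10/kWh)")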