August 31, 2023
Supercomputing remains largely an on-premises affair for many reasons that include horsepower, security, and system management. Companies need more time to move... Read more…
March 21, 2023
If you are a die-hard Nvidia loyalist, be ready to pay a fortune to use its AI factories in the cloud. Renting the GPU company's DGX Cloud, which is an all-inclusive AI supercomputer in the cloud, starts at $36,999 per instance for a month. The rental includes access to a cloud computer with eight Nvidia H100 or A100 GPUs and 640GB... Read more…
February 21, 2023
In an interesting twist on quantum-inspired work making its way into traditional HPC – and in this case a step further into cloud-based HPC – AWS today intr... Read more…
December 7, 2022
Ahead of SC22 in Dallas last month, I met up virtually with Ian Colle, general manager of high performance computing at Amazon Web Services. In this fast-paced... Read more…
November 30, 2022
AWS has announced three new Amazon Elastic Compute Cloud (Amazon EC2) instances powered by AWS-designed chips, as well as several new Intel-powered instances... Read more…
November 9, 2022
Nvidia does not have all the internal pieces to build out its massive AI computing empire, so it is enlisting software and hardware partners to scale its so-called AI factories in the cloud. The chip maker's latest partnership is with Rescale, which provides the middleware to orchestrate high-performance computing workloads on public and... Read more…
November 1, 2022
Server hardware has taken a backseat to software-defined virtual machines handling datacenter workloads, but HPE is emphasizing the importance of hardware in these virtual operating models. HPE created waves when it released the next-generation ProLiant Gen11 servers with a flagship server based on Arm CPUs, which sent a strong... Read more…
October 18, 2022
Oracle is bringing Nvidia's AI Enterprise software suite alongside thousands of its latest GPUs to its cloud infrastructure, which could fuel the chipmaker’s... Read more…
Data center infrastructure running AI and HPC workloads relies on powerful processors, including CPUs, GPUs, and accelerators, to carry out compute-intensive tasks. AI and HPC processing generates excessive heat, which drives up data center power consumption and adds to operating costs.
Data centers have traditionally relied on air cooling solutions, such as heatsinks and fans, which may not reduce energy consumption while sustaining performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled solutions in data centers running HPC and AI workloads to meet their heat and performance demands.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip cooling solution that delivers substantial per-rack cooling power savings over air-cooled alternatives and reduces data centers’ carbon footprint through QCT QoolRack smart management.
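For a rough sense of why cooling efficiency translates into cost, the sketch below estimates annual facility energy spend from power usage effectiveness (PUE). The IT load, PUE values, and electricity price are illustrative assumptions only, not QCT, Intel, or any vendor's published figures.

```python
# Minimal PUE-based cost sketch. All numbers below are assumptions for illustration.
IT_LOAD_KW = 500        # assumed IT (compute) load of the racks, in kW
PUE_AIR = 1.5           # assumed PUE for a conventional air-cooled facility
PUE_LIQUID = 1.2        # assumed PUE with direct-to-chip liquid cooling
PRICE_PER_KWH = 0.10    # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw: float, pue: float) -> float:
    """Total facility energy cost per year: IT load scaled by PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR * PRICE_PER_KWH

cost_air = annual_energy_cost(IT_LOAD_KW, PUE_AIR)
cost_liquid = annual_energy_cost(IT_LOAD_KW, PUE_LIQUID)
print(f"Air-cooled:    ${cost_air:,.0f}/year")
print(f"Liquid-cooled: ${cost_liquid:,.0f}/year")
print(f"Estimated savings: ${cost_air - cost_liquid:,.0f}/year")
```

Under these assumed numbers the lower PUE alone saves on the order of $130,000 per year for a 500 kW IT load; actual savings depend on facility design, climate, and workload.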