June 19, 2024
Samsung may have unintentionally revealed its intent to develop a RISC-V CPU, which a presentation slide showed may be used in an AI chip. The company plans to Read more…
April 18, 2023
Weather and climate applications are some of the most important for high-performance computing, often serving as raisons d'être and flagship workloads for the Read more…
June 17, 2019
As the Top500 list was being announced at ISC in Frankfurt today with an upgraded petascale Arm supercomputer in the top third of the list, Nvidia announced its Read more…
December 5, 2018
Experts at General Atomics (GA) have achieved a major improvement in processing speed for an important plasma physics code by working with experts from Nvidia Read more…
November 12, 2018
OpenACC – the directive-based parallel programming model for optimizing applications on heterogeneous architectures (illustrated briefly after this listing) – is showcasing user traction and HPC Read more…
June 20, 2018
In an era of multicore processors coupled with manycore accelerators in all kinds of devices from smartphones all the way to supercomputers, it is important to Read more…
November 14, 2017
OpenACC, the directive-based parallel programming model used mostly for porting codes to GPUs for use on heterogeneous systems, came to SC17 touting impressive Read more…
September 14, 2017
PGI today announced a fairly lengthy list of new features in version 17.7 of its 2017 Compilers and Tools. The centerpiece of the additions is support for the Read more…
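Several of the items above concern OpenACC, the directive-based model for offloading loops to accelerators. For readers unfamiliar with it, the sketch below is a minimal, illustrative SAXPY example; it is not drawn from any of the articles listed and assumes an OpenACC-capable compiler (for example, pgcc or nvc invoked with -acc).

```c
/* Minimal OpenACC sketch: the pragma asks the compiler to offload the
 * loop to an accelerator such as a GPU. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;
    const float a = 2.0f;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));

    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Directive-based parallelism: copy x to the device, copy y in and
     * back out, and run the loop iterations in parallel. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
    free(x);
    free(y);
    return 0;
}
```

The appeal of the model, as the articles above note, is that the same source compiles and runs unchanged on a plain multicore CPU if the directive is ignored.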
As Federal agencies navigate an increasingly complex and data-driven world, learning how to get the most out of high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) technologies is imperative to their missions. These technologies can significantly improve efficiency and effectiveness and drive innovation that better serves citizens' needs. Implementing HPC and AI solutions in government also brings challenges and pain points, such as fragmented datasets, computational hurdles when training ML models, and the ethical implications of AI-driven decision-making. To address these, CTG Federal, Dell Technologies, and NVIDIA have joined forces to unlock new possibilities and seamlessly integrate HPC capabilities into existing enterprise architectures. This integration empowers organizations to glean actionable insights, improve decision-making, and gain a competitive edge across domains ranging from supply chain optimization to financial modeling and beyond.
Data centers are experiencing increasing power consumption, space constraints, and cooling demands due to the unprecedented computing power required by today’s chips and servers. HVAC cooling systems consume approximately 40% of a data center’s electricity. These systems traditionally use air conditioning, air handling, and fans to cool the data center facility and IT equipment, ultimately resulting in high energy consumption and high carbon emissions. Data centers are moving to direct liquid cooling (DLC) systems to improve cooling efficiency, thereby lowering their power usage effectiveness (PUE), operating expenses (OPEX), and carbon footprint.
This paper describes how CoolIT Systems (CoolIT) meets the need for improved energy efficiency in data centers and includes case studies showing how CoolIT’s DLC solutions improve energy efficiency, increase rack density, lower OPEX, and enable sustainability programs. CoolIT is the global market and innovation leader in scalable DLC solutions for the world’s most demanding computing environments. CoolIT’s end-to-end solutions meet the rising demands for both cooling capacity and energy efficiency.
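As a brief aside on the PUE metric cited above, the sketch below shows the arithmetic behind it: PUE is total facility power divided by IT equipment power. The overhead figures are hypothetical round numbers chosen only to illustrate how lower cooling overhead translates into a lower PUE; they are not taken from the CoolIT case studies.

```python
# Minimal sketch of the PUE (power usage effectiveness) calculation.
# All numbers below are hypothetical, for illustration only.

def pue(it_power_kw: float, overhead_power_kw: float) -> float:
    """Return PUE given IT load and facility overhead (cooling, power delivery, etc.)."""
    return (it_power_kw + overhead_power_kw) / it_power_kw

# Air-cooled facility where cooling and other overhead add ~70% on top of a 1 MW IT load.
print(f"Air-cooled PUE:    {pue(1000.0, 700.0):.2f}")   # -> 1.70

# Direct-liquid-cooled facility where overhead is cut to ~15% of the same IT load.
print(f"Liquid-cooled PUE: {pue(1000.0, 150.0):.2f}")   # -> 1.15
```

A PUE of 1.0 would mean every watt entering the facility reaches the IT equipment, which is why reducing cooling overhead pushes the ratio toward 1.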