May 25, 2023
As HPC and AI continue to rapidly advance, the alluring vision of nuclear fusion and its endless zero-carbon, low-radioactivity energy is the sparkle in many a Read more…
November 14, 2022
Over the past months, Nvidia has put a spotlight on its OVX hardware – purpose-built systems aimed at its Omniverse digital twins platform. Now, at SC22, Nvid Read more…
April 10, 2013
Randall J. LeVeque, Professor of Applied Mathematics at the University of Washington in Seattle, will be conducting a free course that brings the principles of parallelism in high-performance computers to those in scientific computing. Read more…
July 31, 2012
Traditionally, running scientific workloads in AWS has given researchers a diverse toolkit for easily slinging data across time zones, regions, or even the globe once the data is inside the infrastructure sandbox. Getting data into and out of AWS, however, has historically been more of a challenge. Cycle Computing's Andrew Kaczorek and Dan Harris offer some helpful tips on optimizing ingress and egress transfers. Read more…
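The article's specific tips aren't reproduced in this teaser, but one widely used ingress optimization is parallel multipart transfer. Here is a minimal present-day sketch using boto3 (which postdates the article); the bucket and file names are placeholders, not details from the piece:

```python
# Minimal sketch: speeding up S3 ingress with parallel multipart upload.
# Assumes AWS credentials are configured and the bucket already exists;
# "my-research-bucket" and the file name are hypothetical placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split large objects into 64 MB parts and upload 10 parts concurrently,
# which typically fills a fat pipe far better than a single stream.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
    use_threads=True,
)

s3.upload_file(
    "genome_run_042.tar",            # local file to ingest
    "my-research-bucket",            # destination bucket
    "ingest/genome_run_042.tar",     # object key
    Config=config,
)
```

In practice, most of the gains come from tuning the chunk size and concurrency to the bandwidth actually available between the data source and AWS.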
July 18, 2011
Software engineering is still something that gets too little attention from the technical computing community, much to the detriment of the scientists and engineers writing the applications. Greg Wilson has been on a mission to remedy that, mainly through his efforts at Software Carpentry, where he is the project lead. HPCwire asked Wilson about the progress he's seen over the last several years and what remains to be done. Read more…
October 19, 2010
Last week at its eScience Workshop at the University of California, Berkeley, Microsoft Research announced two key technological advances related to its Azure cloud. The advances are already serving researchers in ecological studies and biology, and they further demonstrate the potential of Microsoft's cloud offering for scientific computing projects. Read more…
July 13, 2010
The announcement this morning that Amazon is offering Cluster Compute Instances for EC2 specifically for the needs of HPC users might just be that long-awaited game-changer when it comes to the viability of scientific computing in the public cloud. While it is fresh from a private beta and the results are promising, only time will tell to what degree users will snatch up this opportunity to have supercomputing power on demand. Read more…
July 9, 2010
Researchers from Berkeley Lab are examining options for scientific computing users to move beyond physical infrastructure, including the possibility of deploying on public clouds. A recently published study of Amazon EC2's handling of data from the Nearby Supernova Factory sheds light, in practice and in theory, on putting large-scale scientific computing into the cloud. Read more…
As Federal agencies navigate an increasingly complex and data-driven world, getting the most out of high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) technologies is imperative to their missions. These technologies can significantly improve efficiency and effectiveness and drive innovation to better serve citizens' needs. Implementing HPC and AI solutions in government brings challenges and pain points such as fragmented datasets, computational hurdles in training ML models, and the ethical implications of AI-driven decision-making. To address them, CTG Federal, Dell Technologies, and NVIDIA have joined forces to integrate HPC capabilities seamlessly into existing enterprise architectures. This integration empowers organizations to glean actionable insights, improve decision-making, and gain a competitive edge across domains ranging from supply chain optimization to financial modeling and beyond.
Data centers are experiencing rising power consumption, space constraints, and cooling demands driven by the unprecedented power density of today's chips and servers. HVAC cooling systems consume approximately 40% of a data center's electricity. These systems traditionally use air conditioning, air handling, and fans to cool the facility and IT equipment, ultimately resulting in high energy consumption and high carbon emissions. Data centers are therefore moving to direct liquid cooled (DLC) systems to improve cooling efficiency, thus lowering their power usage effectiveness (PUE), operating expenses (OPEX), and carbon footprint.
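As a rough illustration of the arithmetic behind that claim: only the ~40% HVAC share comes from the text above; the DLC cooling share below is a hypothetical assumption. Treating cooling as a fraction of total facility power gives:

```python
# Rough PUE illustration. PUE = total facility power / IT equipment power.
# Only the ~40% HVAC share is taken from the text; the 10% DLC share is
# a hypothetical assumption for comparison, not a CoolIT figure.
def pue_from_cooling_share(cooling_share: float, other_overhead: float = 0.0) -> float:
    """If cooling plus other overhead consume a given fraction of total
    facility power, IT gets the remainder, so PUE = 1 / (IT share)."""
    it_share = 1.0 - cooling_share - other_overhead
    return 1.0 / it_share

print(f"Air-cooled (HVAC ~40% of facility power): PUE ~ {pue_from_cooling_share(0.40):.2f}")
print(f"Hypothetical DLC (cooling ~10%):          PUE ~ {pue_from_cooling_share(0.10):.2f}")
# Prints roughly 1.67 for the air-cooled case and 1.11 for the DLC case,
# which is the mechanism by which better cooling efficiency lowers PUE.
```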
This paper describes how CoolIT Systems (CoolIT) meets the need for improved energy efficiency in data centers, with case studies showing how CoolIT's DLC solutions improve energy efficiency, increase rack density, lower OPEX, and enable sustainability programs. CoolIT is the global market and innovation leader in scalable DLC solutions for the world's most demanding computing environments. CoolIT's end-to-end solutions address the rising demands for both cooling capacity and energy efficiency.