November 23, 2011
It's been a little over a year since Nimbix announced the initial beta launch of its Nimbix Accelerated Compute Cloud (NACC). During the SC11 show in Seattle last week, HPC in the Cloud sat down with Nimbix Co-Founder and CEO Steve Hebert to find out where the company fits in with the small-but-growing stable of cloud providers who specialize in supporting HPC workloads. Read more…
November 22, 2011
Indiana University's SCinet Research Sandbox entry sets new records and renews the promise of cloud for data-intensive science workloads. Read more…
November 19, 2011
HPC in the Cloud talks to Cycle Computing CEO Jason Stowe at SC11 to get the details on the CycleCloud BigScience Challenge 2011. Cycle crafted the contest around the noble ideal that science should not be hindered by a lack of computational resources. So the company put out the call to non-profit institutions: do you have an HPC problem that will benefit humanity on a large scale? Read more…
November 18, 2011
If you thought Lustre and GPFS were your only two choices for a high-performance, scalable parallel file system, then you've probably never heard of OrangeFS. We talked with three of the file system's developers and backers about the unique attributes of OrangeFS and how it's being used in the field. Read more…
November 18, 2011
When NVIDIA CEO Jen-Hsun Huang delivered his keynote at SC11 this week, it was easy to forget that a few short years ago, the company and its GPU products had absolutely nothing to do with supercomputing. Today, of course, the technology is a driving force in the HPC ecosystem and is challenging the entrenched interests of chip makers Intel, AMD, and IBM. Read more…
November 17, 2011
With a number of government and commercial exascale projects in full swing, SC11 has provided a convenient venue for vendors, academics and government types to tout their visions of the future of supercomputing. To get a broad perspective on these efforts, we spoke with Thomas Sterling, Professor of Informatics and Computing at Indiana University and one of the foremost experts on supercomputing architectures. Read more…
November 17, 2011
Advances in silicon photonic integration will present an opportunity for hardware engineers to reconsider basic computer designs. That topic is the theme of a Disruptive Technology session at SC11 on Thursday, conducted by Keren Bergman of Columbia University and Nadya Bliss of MIT Lincoln Laboratory. Prior to the conference, we asked Bergman and Bliss to discuss the technology issues surrounding integrated photonics and how it could impact computer systems, including HPC machines. Read more…
November 16, 2011
Amazon Web Services just announced its most powerful offering yet for supercomputing users who require the power of a large cluster on demand. The newest EC2 Cluster Compute Instance, called Cluster Compute Eight Extra Large (CC2), is aimed at businesses and researchers who need additional HPC capacity in an elastic, pay-as-you-go format. Read more…