May 18, 2011
French high performance computing vendor Bull announced its HPC cloud service, eXtreme Factory, at SC10, emphasizing its value for simulation-driven customers. This week we checked in with the company's head of HPC, Pascal Barbolosi, to see how the platform has weathered its first six months. Read more…
February 3, 2011
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the computing power on display at SC10's Student Cluster Competition; the University of Portsmouth's new supercomputer; IBM Watson's SUSE Linux platform; multicore advances at North Carolina State; and Intel's new approach to university funding. Read more…
January 10, 2011
At this year's annual Supercomputing Conference, held in New Orleans, Platform Computing surveyed 100 IT professionals in academia, government, and industry about their experimentation with cloud computing models. Randy Clark from Platform provides some insights into the findings, which show overall satisfaction with clouds for HPC among those with some experience. Read more…
November 24, 2010
Argonne National Laboratory announces the debut of its Exascale Technology and Computing Institute, and NCSA selects IBM's GPFS file system for facility-wide deployment. We recap those stories and more in our weekly wrapup. Read more…
November 24, 2010
At the SC10 event in New Orleans we captured quite a bit of video, some of which never made it live on the site during the course of the busy week. We wanted to share a few highlights with you, and to thank those of you who stopped by the HPCwire/HPC in the Cloud booth to say hello. Read more…
November 19, 2010
During this year's SC event in New Orleans, we caught up with Songnian Zhou, co-founder and CEO of Platform Computing, to take a big-picture look at key movements in computing, and where grids and clouds fit within the "Renaissance" Zhou feels is taking place. Read more…
November 19, 2010
If there was a dominant theme at the Supercomputing Conference this year, it had to be GPU computing. Read more…
Making the Most of Today’s Cloud-First Approach to Running HPC and AI Workloads With Penguin Scyld Cloud Central™
Bursting to the cloud has long been used to complement on-premises HPC capacity and meet variable compute demands. But in today's cloud-first era, many workloads start in the cloud with little IT or corporate oversight. What is needed is a way to operationalize the use of these cloud resources so that users get the compute power they need when they need it, within constraints that account for costs and for efficient use of existing on-premises capacity. Download this special report to learn more.
Data center infrastructure running AI and HPC workloads requires powerful microprocessors, with CPUs, GPUs, and acceleration chips carrying out compute-intensive tasks. AI and HPC processing generate significant heat, which results in higher data center power consumption and additional costs.
Data centers have traditionally relied on air cooling solutions, including heatsinks and fans, which may not be able to reduce energy consumption while maintaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems will increasingly replace air-cooled solutions in data centers running HPC and AI workloads to meet heat and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip cooling solution that meets data center needs with substantial cooling power savings per rack over air-cooled solutions and reduces data centers' carbon footprint with QCT QoolRack smart management.