November 5, 2014
Canadian researchers have turned the country's fastest supercomputer into an Ebola-fighting machine. There are no approved vaccines for Ebola yet, but Chematria Read more…
May 12, 2014
You don't need to be a mathematician to appreciate the beauty and elegance of fractal geometries, those infinitely complex patterns that are self-similar across Read more…
September 3, 2013
Lawrence Livermore National Laboratory's High Performance Computing Innovation Center (HPCIC) in the US and the Science and Technology Facilities Council (STFC) in the United Kingdom are combining efforts to help industry stakeholders in both countries leverage supercomputing to accelerate innovation and boost economic competitiveness. Read more…
June 26, 2013
The eighth-ranked Blue Gene/Q Vulcan system at Lawrence Livermore National Lab has opened its doors for business--at least to companies reliant on advanced modeling and simulation. The 5-petaflop super has already been used in a number of incubator projects but now that they are extending the focus of.... Read more…
September 25, 2012
Argonne's 10-petaflop Blue Gene/Q will be used to gain a better understanding of dark matter. Read more…
August 1, 2012
DOE lab is taking applications from researchers who want time on 8-petaflop super. Read more…
August 22, 2011
At the Hot Chips conference in Santa Clara last week, IBM lifted the curtain on its Blue Gene/Q SoC, which will soon power some of the highest performing supercomputers in the world. Next year, two DOE labs are slated to boot up the most powerful Blue Gene systems ever deployed: the 10-petaflop "Mira" system at Argonne National Lab, and the 20-petaflop "Sequoia" super at Lawrence Livermore. Both will employ the latest Blue Gene/Q processor described at the conference. Read more…
July 20, 2011
Researchers in high-energy physics are gearing up to test theories on Argonne, Oak Ridge iron. Read more…
Making the Most of Today’s Cloud-First Approach to Running HPC and AI Workloads With Penguin Scyld Cloud Central™
Bursting to the cloud has long been used to complement on-premises HPC capacity and meet variable compute demands. But in today's cloud-first era, many workloads start in the cloud with little IT or corporate oversight. What is needed is a way to operationalize the use of these cloud resources so that users get the compute power they need, when they need it, within constraints that account for cost and for efficient use of existing compute capacity. Download this special report to learn more about this topic.
Data center infrastructure running AI and HPC workloads relies on powerful CPUs, GPUs, and acceleration chips to carry out compute-intensive tasks. AI and HPC processing generates significant heat, which raises data center power consumption and adds to data center costs.
Data centers have traditionally used air cooling solutions such as heatsinks and fans, which may not be able to reduce energy consumption while maintaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled solutions in data centers running these workloads to meet their heat and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip liquid cooling solution that delivers substantial cooling power savings per rack over air-cooled solutions and reduces data centers' carbon footprint through QCT QoolRack smart management.