May 18, 2023
If you work in scientific computing, MPI (Message Passing Interface) is likely a part of your life. It may be hidden underneath the applications you run or you Read more…
January 25, 2021
In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming Read more…
November 20, 2020
A new record for HPC scaling on the public cloud has been achieved on Microsoft Azure. Led by Dr. Jer-Ming Chia, the cloud provider partnered with the Beckman I Read more…
April 24, 2019
Panels tend to be among the livelier conference sessions and the “Containers” panel at Tabor’s Advanced Scale Forum last week in Jacksonville, Fla., was c Read more…
May 1, 2017
Has it really been 25 years since the Message Passing Interface standard was born? It has indeed, and at this year's EuroMPI meeting in September in Chicago, a Read more…
February 21, 2017
Researchers from Baidu's Silicon Valley AI Lab (SVAIL) have adapted a well-known HPC communication technique to boost the speed and scale of their neural network Read more…
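The well-known technique in question is allreduce, the collective operation that sums (or averages) a buffer across all workers. As a rough illustration only (not Baidu's actual hand-rolled ring implementation), the sketch below uses the standard MPI_Allreduce call to average a gradient buffer across training processes; the buffer size and contents are made up.

// allreduce_grads.cpp -- hypothetical sketch: average a gradient buffer across ranks.
// Build and run: mpicxx allreduce_grads.cpp -o allreduce_grads && mpirun -np 4 ./allreduce_grads
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Pretend each worker computed a local gradient (here: simply filled with its rank).
    std::vector<float> grad(1024, static_cast<float>(rank));

    // Sum the gradients element-wise across all workers; every rank gets the result.
    MPI_Allreduce(MPI_IN_PLACE, grad.data(), static_cast<int>(grad.size()),
                  MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    // Divide by the worker count to turn the sum into an average.
    for (float &g : grad) g /= static_cast<float>(size);

    if (rank == 0) std::printf("averaged gradient[0] = %f\n", grad[0]);
    MPI_Finalize();
    return 0;
}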
August 3, 2016
The Message Passing Interface (MPI) is the standard definition of a communication API that has underpinned traditional HPC for decades. The message-passing programming model represents distributed-memory hardware architectures as processes that send messages to each other. When first standardised in 1993–94, MPI was a major step forward from the many proprietary, system-dependent, and semantically different message-passing libraries that came before it. Read more…
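As a minimal illustration of that model (not taken from the article), the sketch below has rank 0 send a single integer to rank 1 via MPI_Send and MPI_Recv; the payload value and tag are arbitrary.

// ping.cpp -- minimal sketch of the message-passing model: rank 0 sends, rank 1 receives.
// Build and run: mpicxx ping.cpp -o ping && mpirun -np 2 ./ping
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int payload = 42;
    if (rank == 0) {
        // Each process owns its own memory; data moves only via explicit messages.
        MPI_Send(&payload, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&payload, 1, MPI_INT, /*source=*/0, /*tag=*/0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}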
May 16, 2016
Nielsen has collaborated with Intel to migrate important pieces of HPC technology into Nielsen’s big-data analytic workflows, including MPI, mature numerical libraries from NAG (the Numerical Algorithms Group), and custom C++ analytic codes. This complementary hybrid approach integrates the benefits of Hadoop data management and workflow scheduling with an extensive pool of HPC tools and C/C++ capabilities for analytic applications. In particular, the use of MPI reduces latency, permits reuse of the Hadoop servers, and co-locates the MPI applications close to the data. Read more…
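As a rough sketch of that co-location pattern (the partition paths and the metric are hypothetical; Nielsen's actual analytic codes are proprietary), each MPI rank below aggregates the data shard stored on its own node, and a single collective combines the partial results instead of a full data shuffle.

// partition_sum.cpp -- hypothetical sketch of the Hadoop/MPI co-location pattern:
// each rank aggregates the partition local to its node, then one collective
// combines the partial results. The paths and metric are made up for illustration.
// Build and run: mpicxx partition_sum.cpp -o partition_sum && mpirun -np 4 ./partition_sum
#include <mpi.h>
#include <cstdio>
#include <fstream>
#include <string>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Hypothetical node-local partition, e.g. written earlier by a Hadoop job.
    std::string path = "/data/partitions/part-" + std::to_string(rank) + ".txt";

    double local_sum = 0.0, value = 0.0;
    std::ifstream in(path);
    while (in >> value) local_sum += value;   // aggregate only the local shard

    // One low-latency collective gathers the partial sums onto rank 0.
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("global sum = %f\n", global_sum);
    MPI_Finalize();
    return 0;
}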
Data center infrastructure running AI and HPC workloads relies on powerful CPUs, GPUs, and accelerator chips to carry out compute-intensive tasks. That processing generates a great deal of heat, which drives up data center power consumption and adds to data center costs.
Data centers have traditionally used air cooling solutions such as heatsinks and fans, which may not be able to reduce energy consumption while maintaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled solutions in data centers running HPC and AI workloads to meet their heat and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip liquid cooling solution that meets data center needs with substantial cooling power savings per rack over air-cooled solutions and reduces data centers’ carbon footprint through QCT QoolRack smart management.