October 16, 2023
If you are waiting in a giant line for Nvidia's H100 GPUs, be advised that the next-generation H200 chip is already on its way. The GPU maker earlier this mo Read more…
May 24, 2022
Durham University’s Institute for Computational Cosmology (ICC) is home to the COSMA series of supercomputers (short for “cosmological machine”). COSMA— Read more…
November 10, 2021
Nvidia yesterday introduced Quantum-2, its new networking platform that features NDR InfiniBand (400 Gbps) and BlueField-3 DPU (data processing unit) capabilities. The name is perhaps confusing – it's not a quantum computing device, even though Nvidia is getting into the true quantum computing market with its cuQuantum simulator. The name stems from the legacy line of Nvidia/Mellanox Quantum switches. That said, the new Quantum-2 platform specs are impressive. Jensen Huang, Nvidia CEO, introduced... Read more…
July 23, 2021
Put on a shelf by Intel in 2019, Omni-Path faced an uncertain future, but under new custodian Cornelis Networks, Omni-Path is looking to make a comeback as an independent high-performance interconnect solution. A "significant refresh" – called Omni-Path Express – is coming later this year, according to the company. Cornelis Networks formed last September as a spinout of Intel's Omni-Path division. Read more…
November 16, 2020
With the publication of the 56th Top500 list today from SC20's virtual proceedings, Japan's Fugaku supercomputer – now fully deployed – notches another win, Read more…
November 16, 2020
Nvidia today introduced its Mellanox NDR 400 gigabit-per-second InfiniBand family of interconnect products, which are expected to be available in Q2 of 2021. Th Read more…
January 30, 2019
The latest rumors and reports around an acquisition of Mellanox focus on Intel, which has reportedly offered a $6 billion bid for the high performance interconn Read more…
July 19, 2018
In the competitive global HPC landscape, system and processor vendors, nations and end user sites certainly get a lot of attention, deservedly so, but more than Read more…
Making the Most of Today’s Cloud-First Approach to Running HPC and AI Workloads With Penguin Scyld Cloud Central™
Bursting to cloud has long been used to complement on-premises HPC capacity to meet variable compute demands. But in today's age of cloud, many workloads start in the cloud with little IT or corporate oversight. What is needed is a way to operationalize the use of these cloud resources so that users get the compute power they need, when they need it, within constraints that account for cost and the efficient use of existing compute capacity. Download this special report to learn more about this topic.
Data center infrastructure running AI and HPC workloads relies on powerful processors, including CPUs, GPUs, and acceleration chips, to carry out compute-intensive tasks. AI and HPC processing generates significant heat, which drives up data center power consumption and adds to data center costs.
Data centers have traditionally relied on air cooling solutions, such as heatsinks and fans, that may not be able to reduce energy consumption while maintaining infrastructure performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled solutions in data centers running HPC and AI workloads to meet their heat and performance needs.
QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip cooling solution that meets data center needs with substantial per-rack cooling power savings over air-cooled solutions and reduces data centers' carbon footprint through QCT QoolRack smart management.