June 29, 2022
MLCommons’ latest MLPerf Training results (v2.0) issued today are broadly similar to v1.1 released last December. Nvidia still dominates, but less so (no gran Read more…
June 16, 2022
The long-troubled, hotly anticipated MareNostrum 5 supercomputer finally has a vendor: Atos, which will be supplying a system that includes both Nvidia and Inte Read more…
June 1, 2022
“I don’t think anybody here is ignorant of what supercomputing is,” said Rev Lebaredian, Nvidia’s vice president of Omniverse and Simulation Technology, as he opened the first keynote at ISC 2022 in Hamburg, Germany. “We’ve been building supercomputers for decades, but our uses for them have been evolving over time.” In his keynote, Lebaredian made the case for what he views as... Read more…
May 30, 2022
Just two years ago, chip company SiPearl was a bootstrapped startup helping Europe achieve a long-term goal of becoming self-sufficient in technology, and to cut Read more…
May 30, 2022
In March, Nvidia unveiled its two new Grace Superchips: the Grace CPU Superchip, aimed at datacenters, comprises dual Arm-based Grace CPU chips; the Grace Hopper Superchip, meanwhile, combines a Grace CPU with a Hopper GPU in a single SoC. Now, at ISC 2022... Read more…
May 30, 2022
During a special address at ISC today, general manager and vice president of Accelerated Computing at Nvidia, Ian Buck, shared promising news for the future of Read more…
May 25, 2022
Nvidia is lining up Arm-based server platforms for a diverse range of HPC, AI and cloud applications. The new systems employ Nvidia’s custom Grace Arm CPUs in Read more…
May 24, 2022
Nvidia is bringing liquid cooling, which it typically pairs with GPUs in high-performance computing systems, to its mainstream server GPU portfolio. The company will start shipping its A100 PCIe Liquid Cooled GPU, which is based on the Ampere architecture, for servers later this year. The liquid-cooled PCIe GPU based on the company's new Hopper architecture will ship early next year. Read more…
For many organizations, decisions about whether to run HPC workloads in the cloud or in on-premises datacenters are less all-or-nothing and more about leveraging both infrastructures strategically to optimize HPC workloads across hybrid environments. From multi-clouds to on-premises, dark, edge, and point of presence (PoP) datacenters, data comes from all directions and in all forms, while HPC workloads run in every dimension of modern datacenter schemes. HPC has become multi-dimensional and must be managed as such.
This white paper explores several of these new strategies and tools for optimizing HPC workloads across all dimensions to achieve breakthrough results in Microsoft Azure.
© 2022 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.