May 10, 2022
Installation has begun on the Aurora supercomputer, Rick Stevens (associate laboratory director at Argonne National Laboratory) revealed today during the Intel Vision event keynote taking place in Dallas, Texas, and online. Joining Intel exec Raja Koduri on stage, Stevens confirmed that the Aurora build is underway – a major development for a system that is projected to deliver more... Read more…
April 28, 2022
As the pandemic swept across the world, virtually every research supercomputer lit up to support Covid-19 investigations. But even as the world transformed, the Read more…
April 11, 2022
Some wearable electronics—like sensors sewn into fabrics, or applicable “skins”—rely on the development of new, durable, stretchable electronic material Read more…
March 9, 2022
The world is (once again) returning to some semblance of pre-pandemic life as the omicron variant wanes. Many are now wondering about the risk calculus for popu Read more…
November 10, 2021
It was with a hint of nostalgia that Argonne Lab’s Bill Allcock described the Argonne Leadership Computing Facility’s (ALCF) decision to switch to a commercially-supported workload management suite after 20+ years spent developing and using ALCF’s custom workload manager, Cobalt. Argonne National Laboratory announced today that it is deploying Altair PBS Professional across the organization’s HPC systems and clusters. “From the inception of ALCF, we wrote our own scheduler called Cobalt... Read more…
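For readers unfamiliar with PBS Professional, the sketch below shows roughly what submitting work to a PBS-managed cluster looks like: a batch script with #PBS resource directives handed to the qsub command. It is a minimal illustration only; the job name, project/allocation ("MyProject"), queue ("debug"), and resource request are hypothetical placeholders, not ALCF-specific settings.

    # Minimal sketch: compose a PBS Professional batch script and submit it with qsub.
    # Assumes qsub is on the PATH of a PBS-managed login node.
    import subprocess
    import tempfile

    job_script = """#!/bin/bash
    # Job name, allocation, queue, and resources below are hypothetical placeholders.
    #PBS -N hello_pbs
    #PBS -A MyProject
    #PBS -q debug
    #PBS -l select=1:ncpus=32
    #PBS -l walltime=00:10:00
    cd $PBS_O_WORKDIR
    echo "Running on $(hostname)"
    """

    # Write the script to a temporary file and hand it to qsub, which prints the job ID.
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(job_script)
        script_path = f.name

    result = subprocess.run(["qsub", script_path], capture_output=True, text=True, check=True)
    print("Submitted job:", result.stdout.strip())
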
September 8, 2021
Additional details came to light about Argonne National Laboratory's preparations for the 2022 Aurora exascale-class supercomputer during the HPC User Forum, held virtually this week on account of the pandemic. Exascale Computing Project director Doug Kothe reviewed some of the 'early exascale hardware' at Argonne, Oak Ridge and NERSC (Perlmutter), while Ti Leggett, Deputy Project Director & Deputy Director... Read more…
August 25, 2021
A new 44-petaflops (theoretical peak) supercomputer is under construction at the Department of Energy’s Argonne National Laboratory. Called Polaris, this new Read more…
August 12, 2021
In 2020, residential and commercial buildings in the U.S. accounted for 40 percent of all energy consumption in the country – and with climate change rapidly Read more…
For many organizations, the decision about whether to run HPC workloads in the cloud or in on-premises datacenters is less an either-or choice and more a matter of leveraging both infrastructures strategically to optimize HPC workloads across hybrid environments. From multi-cloud to on-premises, dark, edge, and point-of-presence (PoP) datacenters, data arrives from all directions and in all forms, while HPC workloads run across every dimension of modern datacenter architectures. HPC has become multi-dimensional and must be managed as such.
This white paper explores several of these new strategies and tools for optimizing HPC workloads across all dimensions to achieve breakthrough results in Microsoft Azure.