May 10, 2022
Installation has begun on the Aurora supercomputer, Rick Stevens (associate laboratory director at Argonne National Laboratory) revealed today during the Intel Vision event keynote, held in Dallas, Texas, and online. Joining Intel exec Raja Koduri on stage, Stevens confirmed that the Aurora build is underway – a major development for a system that is projected to deliver more... Read more…
November 9, 2021
It’s an understatement to say that the effort to adapt AI technology for use in scientific computing has gained steam. Last spring, the Department of Energy released a formal report – AI for Science – proposing an AI program not unlike the exascale program now reaching fruition. There’s also the broader U.S. National Artificial Intelligence Initiative pushing for AI use throughout society. Last week, as part of a year-long celebration of the 75th anniversary of its founding, Argonne National Laboratory held... Read more…
November 17, 2020
COVID-19 isn’t over – not even close. With about six months until broad vaccine distribution is expected, the world will likely face a long, difficult winter. Read more…
August 13, 2019
Twelve years ago the Department of Energy (DOE) was just beginning to explore what an exascale computing program might look like and what it might accomplish. Today, DOE is repeating that process for AI, once again starting with science community town halls to gather input and stimulate conversation. The town hall program... Read more…
August 2, 2019
Argonne National Lab, future home to the Intel-Cray Aurora supercomputer, recently hosted the first in a series of four AI for Science town hall meetings being convened by Department of Energy laboratories. The meetings are aimed at soliciting and collecting "community input on the opportunities and challenges facing the scientific community in the era of convergence of high-performance computing and artificial intelligence (AI) technologies." Read more…
March 13, 2019
Machine learning researchers are pushing back on the recent assertion that AI technology is a key contributor to a reproducibility crisis in scientific research. Rick Stevens, associate laboratory director for computing, environment and life sciences at Argonne National Laboratory... Read more…
February 21, 2011
Exascale computing promises incredible science breakthroughs, but it won't come easily, and it won't come free. Read more…
For many organizations, the decision about whether to run HPC workloads in the cloud or in on-premises datacenters is less an all-or-nothing choice and more a matter of leveraging both infrastructures strategically to optimize workloads across hybrid environments. From multi-clouds to on-premises, dark, edge, and point-of-presence (PoP) datacenters, data arrives from all directions and in all forms, while HPC workloads run across every dimension of modern datacenter architectures. HPC has become multi-dimensional and must be managed as such.
This white paper explores several strategies and tools for optimizing HPC workloads across these dimensions to achieve breakthrough results in Microsoft Azure.