May 25, 2022
The battle among high-performance computing hubs to stock up on cutting-edge computers for quicker time to science is heating up as new chip technologies become mainstream. A European supercomputing hub near Munich, called the Leibniz Supercomputing Centre, is deploying Cerebras Systems' CS-2 AI system as part of an internal initiative called Future Computing to assess alternative computing... Read more…
April 28, 2022
As the pandemic swept across the world, virtually every research supercomputer lit up to support Covid-19 investigations. But even as the world transformed... Read more…
April 21, 2022
Cerebras Systems has been making waves for a few years with its massive, dinner plate-sized Wafer Scale Engine (WSE) chips, which are aimed at helping organizations... Read more…
January 28, 2022
Computational biology—particularly via combined HPC and AI—has taken the spotlight during the pandemic as pharmaceutical companies and research institutes... Read more…
September 16, 2021
Five months ago, when Cerebras Systems debuted its second-generation wafer-scale silicon system (CS-2), co-founder and CEO Andrew Feldman hinted at the company’s coming cloud plans, and now those plans have come to fruition. Today, Cerebras and Cirrascale Cloud Services are launching... Read more…
August 24, 2021
At the Hot Chips conference today, held as a virtual event, wafer-scale computing company Cerebras Systems unveiled its “brain-scale” approach for running the largest models in the world across clusters of up to 192 CS-2 systems. To enable this, Cerebras is debuting its weight streaming technology, which flips the way that models are usually run, and launching two new products: MemoryX and SwarmX. Read more…
April 20, 2021
Nearly two years after its massive 1.2 trillion transistor Wafer Scale Engine chip debuted at Hot Chips, Cerebras Systems is announcing its second-generation technology (WSE-2), which it says packs twice the... Read more…
August 19, 2020
The company driving wafer-scale computing for AI and machine learning applications, Cerebras, has announced a number of system wins in the last 12 months... Read more…