Using Exascale Supercomputers to Make Clean Fusion Energy Possible

September 2, 2022

Fusion, the nuclear reaction that powers the Sun and the stars, has incredible potential as a source of safe, carbon-free and essentially limitless energy. Read more…

Argonne Deploys Polaris Supercomputer for Science in Advance of Aurora

August 9, 2022

Argonne National Laboratory has made its newest supercomputer, Polaris, available for scientific research. The system, which ranked 14th on the most recent Top500 list, is serving as a testbed for the exascale Aurora system slated for delivery in the coming months. The HPE-built Polaris system consists of 560 nodes... Read more…

Exascale Watch: Aurora Installation Underway, Now Open for Reservations

May 10, 2022

Installation has begun on the Aurora supercomputer, Rick Stevens (associate director of Argonne National Laboratory) revealed today during the Intel Vision event keynote taking place in Dallas, Texas, and online. Joining Intel exec Raja Koduri on stage, Stevens confirmed that the Aurora build is underway – a major development for a system that is projected to deliver more... Read more…

Exascale Readiness Key to Solving High Energy Physics Mysteries

April 13, 2022

Scientists at Brookhaven National Laboratory, Columbia University, the University of Connecticut, University of Edinburgh, Regensburg University, and the University of Southampton are seeking answers to physics mysteries at the highest energies and shortest distances. The team is devising new methods and enhancing their code in order to exploit the huge potential... Read more…

Intel’s ‘Borealis’ Testbed Targets Exascale Readiness for Aurora Supercomputer

December 16, 2021

As Intel, HPE, and Argonne National Laboratory drive toward a 2022 delivery of the Aurora leadership-class supercomputer, HPCwire spoke with Dr. Robert Wisniewski, Intel Fellow: SuperCompute Software, Aurora technical lead and PI, to learn about Intel’s Borealis testbed for Aurora. Wisniewski also explains why he views High Bandwidth Memory as a game-changer for HPC. Read more…

Jack Dongarra on SC21, the Top500 and His Retirement Plans

November 29, 2021

HPCwire's Managing Editor sits down with Jack Dongarra, Top500 co-founder and Distinguished Professor at the University of Tennessee, during SC21 in St. Louis to discuss the 2021 Top500 list, the outlook for global exascale computing, and what exactly is going on in that Viking helmet photo. Read more…

Quantum Monte Carlo at Exascale Could Be Key to Finding New Semiconductor Materials

September 27, 2021

Researchers are urgently trying to identify possible materials to replace silicon-based semiconductors. The processing power in modern computers continues to increase... Read more…

How Argonne Is Preparing for Exascale in 2022

September 8, 2021

Additional details came to light on Argonne National Laboratory’s preparation for the 2022 Aurora exascale-class supercomputer during the HPC User Forum, held virtually this week on account of the pandemic. Exascale Computing Project director Doug Kothe reviewed some of the 'early exascale hardware' at Argonne, Oak Ridge and NERSC (Perlmutter), while Ti Leggett, Deputy Project Director & Deputy Director... Read more…


Whitepaper

2021 Storage Technology Series

Data growth is relentless and inevitable, as data has come to define many aspects of research computing. Simulation data, ever more sophisticated sensor data, and now the rise of machine learning all contribute to the accelerating pace of data growth.

Learn how archiving differs from backup, how much data should be archived, and how to architect a comprehensive HSM solution using new data management technologies that can handle this accelerating growth today and well into the future.

Download

Sponsored by Seagate

Whitepaper

End-to-End Support Toward Accelerating Discovery and Innovation with a Converged HPC and AI Supercomputer

A workload-driven system capable of running HPC and AI workloads is more important than ever, but organizations face many challenges when building such a system, along with considerable complexity in system design and integration. Building a workload-driven solution requires expertise and domain knowledge that organizational staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan’s academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and providing worldwide end-to-end support, from system design through integration, benchmarking, and installation, for end users and system integrators to ensure customer success.

Download Now

Sponsored by QCT
