Supercomputers Project Wetter San Francisco Storms in a Future Climate

May 4, 2022

With climate change dramatically accelerating, scientists continue to struggle to predict the shape of a substantially warmer world. This is particularly true with regard to weather and storms, which – due to the granular, mercurial processes at play – elude climate scientists more than, say, ice melt projections. Recently, a climate study commissioned by the City and County of San Francisco... Read more…

What’s New in HPC Research: HipBone, GPU-Aware Asynchronous Tasks, Autotuning & More

March 10, 2022

In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming… Read more…

Supercomputers Illuminate Supernova Formation

March 6, 2021

Supernovae are perhaps the galaxy’s best fireworks shows, with dying stars’ death rattles coming in the form of unimaginably large explosions. Astrophysicists… Read more…

Berkeley Lab Team Improves HPC Datacenter Efficiency with Analytics

February 25, 2020

As HPC datacenters scale up, improving efficiency is crucial to avoiding correspondingly large energy use (and the ensuing high costs and large carbon footprint)… Read more…

NERSC-9 Clues Found in NERSC 2017 Annual Report

October 8, 2018

If you’re eager to find out who’ll supply NERSC’s next-gen supercomputer, codenamed NERSC-9, here’s a project update to tide you over until the winning bid and system details are revealed. The upcoming system is referenced several times in the recently published 2017 NERSC annual report. Read more…

Data Management at NERSC in the Era of Petascale Deep Learning

May 9, 2018

Now that computer scientists at Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC) have demonstrated 15 petaflops deep-learning training performance on the Cori supercomputer, the NERSC staff is working to address the data management issues that arise when running production deep-learning codes at such scale. Read more…

NERSC Cori Shows the World How Many-Cores for the Masses Works

April 21, 2017

As the mission high-performance computing center for the U.S. Department of Energy Office of Science, NERSC (the National Energy Research Scientific Computing Center)… Read more…

DOE Supercomputer Achieves Record 45-Qubit Quantum Simulation

April 13, 2017

In order to simulate larger and larger quantum systems and usher in an age of “quantum supremacy,” researchers are stretching the limits of today’s most advanced supercomputers. Read more…
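To get a feel for why a 45-qubit simulation stretches even leading supercomputers, here is a rough back-of-the-envelope sketch (not figures from the study): a full state-vector simulation of n qubits must hold 2^n complex amplitudes in memory, so 45 qubits already implies hundreds of terabytes. The bytes-per-amplitude figure below is an assumption, not the precision the researchers actually used.

    // Back-of-the-envelope state-vector memory for n qubits (hypothetical sketch).
    // Assumes 8 bytes per amplitude (single-precision complex); the study's actual
    // precision and data layout may differ.
    #include <cstdio>
    #include <cstdint>

    int main() {
        const int n_qubits = 45;
        const std::uint64_t amplitudes = 1ULL << n_qubits;          // 2^45 basis states
        const double bytes = static_cast<double>(amplitudes) * 8.0; // state-vector size in bytes
        std::printf("%d qubits -> %.0f TiB\n", n_qubits,
                    bytes / (1024.0 * 1024.0 * 1024.0 * 1024.0));   // prints 256 TiB
        return 0;
    }

Each additional qubit doubles that footprint, which is why pushing much past the mid-40s quickly exhausts the aggregate memory of even the largest systems.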


Whitepaper

Porting CUDA Applications to Run on AMD GPUs

Giving developers the ability to write code once and run it on different platforms is important. Organizations are increasingly moving to open-source and open-standard solutions that can aid code portability. AMD developed a porting solution that allows developers to port proprietary NVIDIA® CUDA® code to run on AMD graphics processing units (GPUs).

This paper describes the AMD ROCm™ open software platform, which provides porting tools to convert NVIDIA CUDA code to AMD’s native, open-source Heterogeneous-computing Interface for Portability (HIP) so that it can run on AMD Instinct™ accelerator hardware. The AMD solution addresses the performance and portability needs of application developers in artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC). Using the AMD ROCm platform, developers can port their GPU applications to AMD Instinct accelerators with minimal changes, allowing the same code to run in both NVIDIA and AMD environments.
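As a concrete illustration of what a HIP port looks like (a minimal sketch, not code from the whitepaper; the kernel and variable names are hypothetical), the listing below is an ordinary vector-add written against the HIP runtime. In the original CUDA source, only the runtime prefixes differ: cudaMalloc, cudaMemcpy, and cudaFree become hipMalloc, hipMemcpy, and hipFree, while the __global__ kernel body is unchanged; the ROCm hipify tools automate exactly this renaming.

    // Minimal HIP vector-add sketch (hypothetical example, not from the whitepaper).
    #include <hip/hip_runtime.h>
    #include <vector>
    #include <cstdio>

    __global__ void vector_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // same body as the CUDA kernel
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> h_a(n, 1.0f), h_b(n, 2.0f), h_c(n);
        float *d_a, *d_b, *d_c;
        hipMalloc(&d_a, n * sizeof(float));   // was cudaMalloc in the CUDA version
        hipMalloc(&d_b, n * sizeof(float));
        hipMalloc(&d_c, n * sizeof(float));
        hipMemcpy(d_a, h_a.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
        hipMemcpy(d_b, h_b.data(), n * sizeof(float), hipMemcpyHostToDevice);
        const int threads = 256, blocks = (n + threads - 1) / threads;
        hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, d_a, d_b, d_c, n);
        hipMemcpy(h_c.data(), d_c, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("c[0] = %f\n", h_c[0]);   // expect 3.0
        hipFree(d_a); hipFree(d_b); hipFree(d_c);   // was cudaFree
        return 0;
    }

The same source builds with hipcc for AMD Instinct GPUs and, because HIP acts as a thin portability layer over CUDA on NVIDIA hardware, can also be compiled for NVIDIA GPUs, which is the write-once, run-on-both workflow the paper describes.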


Sponsored by AMD

Whitepaper

QCT HPC BeeGFS Storage: A Performance Environment for I/O Intensive Workloads

A workload-driven system capable of running HPC and AI workloads is more important than ever, but organizations face many challenges in building one, along with considerable complexity in system design and integration. Building such a workload-driven solution requires expertise and domain knowledge that organizational staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan’s academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and its worldwide end-to-end support, spanning system design, integration, benchmarking, and installation for end users and system integrators, to ensure customer success.


Sponsored by QCT
