Quantum Monte Carlo at Exascale Could Be Key to Finding New Semiconductor Materials

September 27, 2021

Researchers are urgently trying to identify possible materials to replace silicon-based semiconductors. The processing power in modern computers continues to increase... Read more…

ISC 2021 Keynote: Thomas Sterling on Urgent Computing, Big Machines, China Speculation

July 1, 2021

In a somewhat shortened version of his annual ISC keynote surveying the HPC landscape, Thomas Sterling lauded the community’s effort in bringing HPC to bear in the fight against the pandemic, welcomed the start of the exascale – if not yet exaflops – era with a quick tour of some big machines, speculated a little on what China may be planning, and paid tribute to new and ongoing efforts to bring fresh talent into HPC. Sterling is a longtime HPC leader... Read more…

AI Systems Summit Keynote: Brace for System Level Heterogeneity Says de Supinski

April 1, 2021

Heterogeneous computing has quickly come to mean packing a couple of CPUs and one or more accelerators, mostly GPUs, onto the same node. Today, a system built from such nodes has become the standard AI server offered by dozens of vendors. This is not to diminish the many advances... Read more…

ORNL’s Jeffrey Vetter on How IRIS Runtime will Help Deal with Extreme Heterogeneity

March 3, 2021

Jeffrey Vetter is a familiar figure in HPC. Last year he became one of the new section heads in a reorganization at Oak Ridge National Laboratory. Read more…

Let’s Talk Exascale: ECP Leadership Discusses Project Highlights, Challenges, and the Expected Impact of Exascale Computing

August 19, 2020

In this special episode of the Let’s Talk Exascale podcast from the US Department of Energy’s (DOE’s) Exascale Computing Project (ECP), members of the ECP leadership team discuss project highlights, challenges, and the expected impact of exascale computing. Read more…

AI is the Next Exascale – Rick Stevens on What that Means and Why It’s Important

August 13, 2019

Twelve years ago the Department of Energy (DOE) was just beginning to explore what an exascale computing program might look like and what it might accomplish. Today, DOE is repeating that process for AI, once again starting with science community town halls to gather input and stimulate conversation. The town hall program... Read more…

DEEP-EST Stands up Cluster Module at Jülich Supercomputing Centre

May 6, 2019

European exascale efforts continued to advance with the recent standing up of the “Cluster Module” (CM) at the Jülich Supercomputing Centre (JSC). Read more…

Congress Passes DOE Research and Innovation Act

October 9, 2018

Congress recently passed the Department of Energy Research and Innovation Act (H.R.589) which essentially authorizes many existing Department of Energy activities. It also emphasizes efforts to ease and accelerate technology transfer to the private sector. Read more…


Whitepaper

Porting CUDA Applications to Run on AMD GPUs

Giving developers the ability to write code once and run it on different platforms is important. Organizations are increasingly moving to open-source and open-standard solutions, which aid code portability. AMD developed a porting solution that allows developers to port proprietary NVIDIA® CUDA® code to run on AMD graphics processing units (GPUs).

This paper describes the AMD ROCm™ open software platform, which provides porting tools to convert NVIDIA CUDA code to AMD’s native open-source Heterogeneous-computing Interface for Portability (HIP), which can run on AMD Instinct™ accelerator hardware. The AMD solution addresses the performance and portability needs of application developers in artificial intelligence (AI), machine learning (ML), and high performance computing (HPC). Using the AMD ROCm platform, developers can port their GPU applications to run on AMD Instinct accelerators with minimal changes, allowing the same code to run in both NVIDIA and AMD environments.


Sponsored by AMD
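
As an illustration of the kind of translation ROCm’s hipify tools perform, the sketch below shows a trivial vector-add program after conversion from CUDA to HIP. This is a hedged sketch, not code from the whitepaper: it assumes a system with the ROCm HIP toolchain installed, and the file and variable names are invented for the example. Apart from the header and the mechanical `cuda*`-to-`hip*` API renames (noted in the comments), the kernel body and the `<<<...>>>` launch syntax are unchanged from the CUDA original.

```cpp
// vector_add_hip.cpp -- a minimal HIP port of a CUDA vector-add (illustrative).
// In the CUDA original, each hip* call below was the matching cuda* call;
// the __global__ kernel and the triple-chevron launch are identical in both APIs.
#include <hip/hip_runtime.h>   // CUDA original: #include <cuda_runtime.h>
#include <cstdio>

__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = float(i); hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    hipMalloc((void **)&da, bytes);   // was cudaMalloc
    hipMalloc((void **)&db, bytes);
    hipMalloc((void **)&dc, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);   // was cudaMemcpy / cudaMemcpyHostToDevice
    hipMemcpy(db, hb, bytes, hipMemcpyHostToDevice);

    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // launch syntax unchanged

    hipMemcpy(hc, dc, bytes, hipMemcpyDeviceToHost);   // was cudaMemcpyDeviceToHost
    hipFree(da); hipFree(db); hipFree(dc);             // was cudaFree
    printf("c[42] = %f\n", hc[42]);
    return 0;
}
```

A file like this typically comes out of `hipify-perl vector_add.cu > vector_add_hip.cpp` and is compiled with `hipcc`; because HIP maps onto the CUDA runtime on NVIDIA systems, the same source can build for both vendors’ GPUs, which is the portability claim the whitepaper makes.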

Whitepaper

QCT HPC BeeGFS Storage: A Performance Environment for I/O Intensive Workloads

A workload-driven system capable of running HPC and AI workloads is more important than ever, but organizations face many challenges in building one, and system design and integration add further complexity. Building a workload-driven solution requires expertise and domain knowledge that organizational staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan’s academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and for its end-to-end support, from system design through integration, benchmarking, and installation, ensuring success for end users and system integrators.


Sponsored by QCT
