Gordon Bell Special Prize Goes to LLM-Based Covid Variant Prediction

November 17, 2022

For three years running, ACM has awarded not only its long-standing Gordon Bell Prize (read more about this year’s winner here!) but also its Gordon Bell Special Prize. Read more…

Gordon Bell Nominee Used LLMs, HPC, Cerebras CS-2 to Predict Covid Variants

November 17, 2022

Large language models (LLMs) have taken the tech world by storm over the past couple of years, dominating headlines with their ability to generate convincing human-like text. Read more…

Cerebras Builds ‘Exascale’ AI Supercomputer

November 14, 2022

Cerebras is putting down stakes to be a player in AI cloud computing with a supercomputer called Andromeda, which achieves over an exaflops of "AI performance." Read more…

Cerebras Chip Part of Project to Spot Post-exascale Technology

October 19, 2022

Cerebras Systems has secured another U.S. government win for its Wafer Scale Engine chip – considered the largest chip in the world. The company's chip technology will be part of a research project sponsored by the National Nuclear Security Administration to find... Read more…

Computer History Museum Honors Cerebras Systems – Watch a Replay of the Event

August 3, 2022

When Cerebras Systems had its coming out at Hot Chips in August 2019, the hardware community wasn't sure what to think. Attendees were understandably skeptical of the novel "wafer-scale" technology, not to mention an estimated power envelope of ~15 kilowatts for the chip alone. In the intervening three years, the company... Read more…

LRZ Adds Mega AI System as It Stocks Up on Future Computing Systems

May 25, 2022

The battle among high-performance computing hubs to stock up on cutting-edge computers for quicker time to science is heating up as new chip technologies become mainstream. A European supercomputing hub near Munich, called the Leibniz Supercomputing Centre, is deploying Cerebras Systems' CS-2 AI system as part of an internal initiative called Future Computing to assess alternative computing... Read more…

Argonne Talks AI Accelerators for Covid Research

April 28, 2022

As the pandemic swept across the world, virtually every research supercomputer lit up to support Covid-19 investigations. But even as the world transformed... Read more…

Cerebras Systems Marks Energy Customer Win with TotalEnergies

March 2, 2022

Cerebras Systems, pioneer of wafer-scale computing for AI and HPC, today announced that TotalEnergies (formerly “Total”) has deployed the Cerebras CS-2 system. Read more…


Whitepaper

Porting CUDA Applications to Run on AMD GPUs

Giving developers the ability to write code once and use it on different platforms is important. Organizations are increasingly moving to open-source and open-standard solutions, which can aid in code portability. AMD developed a porting solution that allows developers to port proprietary NVIDIA® CUDA® code to run on AMD graphics processing units (GPUs).

This paper describes the AMD ROCm™ open software platform, which provides porting tools to convert NVIDIA CUDA code to AMD's native open-source Heterogeneous Computing Interface for Portability (HIP) so it can run on AMD Instinct™ accelerator hardware. The AMD solution addresses the performance and portability needs of application developers in artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC). Using the AMD ROCm platform, developers can port their GPU applications to run on AMD Instinct accelerators with minimal changes, enabling the same code to run in both NVIDIA and AMD environments.
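At its core, the porting workflow the paper describes is a source-to-source translation: CUDA API names are rewritten to their HIP equivalents (ROCm ships tools such as hipify-perl for this). A minimal toy sketch of that idea, not the real hipify tool, might look like:

```python
import re

# Toy illustration only (NOT AMD's actual hipify tool): a few well-known
# CUDA-to-HIP API name mappings, applied as a source-to-source rewrite.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def toy_hipify(source: str) -> str:
    """Replace each known CUDA identifier with its HIP counterpart."""
    pattern = re.compile("|".join(re.escape(name) for name in CUDA_TO_HIP))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

cuda_snippet = "#include <cuda_runtime.h>\ncudaMalloc(&d_a, n); cudaFree(d_a);"
print(toy_hipify(cuda_snippet))
```

Because the HIP runtime mirrors the CUDA API so closely, most ports amount to mechanical renames like these; the resulting HIP source can then be compiled for either vendor's GPUs.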


Sponsored by AMD

Whitepaper

QCT HPC BeeGFS Storage: A Performance Environment for I/O Intensive Workloads

A workload-driven system capable of running HPC/AI workloads is more important than ever. Organizations face many challenges when building such a system, along with many complexities in system design and integration. Building a workload-driven solution requires expertise and domain knowledge that organizational staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan’s academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and for providing worldwide end-to-end support, from system design through integration, benchmarking, and installation, to ensure success for end users and system integrators.


Sponsored by QCT
