Google’s DeepMind Has a Long-term Goal of Artificial General Intelligence

September 14, 2022

When DeepMind, an Alphabet subsidiary, started out more than a decade ago, solving the field's most pressing research questions and problems with AI wasn't at the top of the company's agenda. Instead, it began its AI research with computer games. Every score and win was a measuring stick of success... Read more…


The Mainstreaming of MLPerf? Nvidia Dominates Training v2.0 but Challengers Are Rising

June 29, 2022

MLCommons’ latest MLPerf Training results (v2.0) issued today are broadly similar to v1.1 released last December. Nvidia still dominates, but less so (no gran Read more…

Google Cloud’s New TPU v4 ML Hub Packs 9 Exaflops of AI

May 16, 2022

Almost exactly a year ago, Google launched its Tensor Processing Unit (TPU) v4 chips at Google I/O 2021, promising twice the performance compared to the TPU v3. At the time, Google CEO Sundar Pichai said that Google’s datacenters would “soon have dozens of TPU v4 Pods, many of which will be... Read more…

Google Launches TPU v4 AI Chips

May 20, 2021

Google CEO Sundar Pichai spoke for only one minute and 42 seconds about the company’s latest TPU v4 Tensor Processing Units during his keynote at the Google I Read more…

Nvidia Dominates Latest MLPerf Training Benchmark Results

July 29, 2020

MLPerf.org released its third round of training benchmark (v0.7) results today and Nvidia again dominated, claiming 16 new records. Meanwhile, Google provided e Read more…

Hardware Acceleration of Recurrent Neural Networks: the Need and the Challenges

July 27, 2020

Recurrent neural networks (RNNs) have shown phenomenal success in several sequence learning tasks such as machine translation, language processing, image captio Read more…

Nvidia, Google Tie in Second MLPerf Training ‘At-Scale’ Round

July 10, 2019

Results for the second round of the AI benchmarking suite known as MLPerf were published today with Google Cloud and Nvidia each picking up three wins in the at Read more…

Google Cloud to Offer Nvidia P4 Graphics Card for Inferencing Tasks

July 25, 2018

Google continues to add GPU horsepower in tandem with its internally developed deep learning processors to its cloud platform with this week’s announcement th Read more…


Whitepaper

Porting CUDA Applications to Run on AMD GPUs

Giving developers the ability to write code once and use it on different platforms is important. Organizations are increasingly moving to open-source and open-standard solutions, which can aid code portability. AMD developed a porting solution that allows developers to port proprietary NVIDIA® CUDA® code to run on AMD graphics processing units (GPUs).

This paper describes the AMD ROCm™ open software platform, which provides porting tools to convert NVIDIA CUDA code to AMD's native, open-source Heterogeneous Computing Interface for Portability (HIP) so that it can run on AMD Instinct™ accelerator hardware. The AMD solution addresses the performance and portability needs of application developers in artificial intelligence (AI), machine learning (ML), and high performance computing (HPC). Using the AMD ROCm platform, developers can port their GPU applications to AMD Instinct accelerators with minimal changes, allowing the same code to run in both NVIDIA and AMD environments.
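The porting workflow described above is, for many applications, largely a mechanical renaming of runtime API calls. As a rough, hypothetical sketch (not taken from the paper; the kernel and buffer names are illustrative, and compiling it requires the HIP toolchain), a CUDA vector-add passed through a hipify tool might come out looking like this, with the original CUDA calls noted in comments:

```cpp
// Illustrative HIP port of a CUDA vector-add. A hipify tool maps
// cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, cudaFree -> hipFree;
// the <<<grid, block>>> launch syntax and thread built-ins are unchanged.
#include <hip/hip_runtime.h>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same built-ins as CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);
    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));   // was: cudaMalloc
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was: cudaMemcpy
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // launch syntax identical
    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(da); hipFree(db); hipFree(dc);  // was: cudaFree
    return 0;
}
```

Because the HIP runtime API mirrors the CUDA runtime API nearly one-to-one, the resulting source can still be compiled for NVIDIA GPUs (via HIP's CUDA backend) as well as for AMD Instinct hardware.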


Sponsored by AMD

Whitepaper

QCT HPC BeeGFS Storage: A Performance Environment for I/O Intensive Workloads

A workload-driven system capable of running HPC and AI workloads is more important than ever, but organizations face many challenges in building one, along with considerable complexity in system design and integration. Building a workload-driven solution requires expertise and domain knowledge that organizational staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan's academic, industrial, and enterprise users. Taiwan's National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and for its worldwide end-to-end support, spanning system design, integration, benchmarking, and installation for end users and system integrators, to ensure customer success.


Sponsored by QCT
