AMD’s Exascale Strategy Hinges on Heterogeneity

By Tiffany Trader

July 29, 2015

In a recent IEEE Micro article, a team of engineers and computer scientists from chipmaker Advanced Micro Devices (AMD) describes the company’s vision for exascale computing: a heterogeneous approach built on “exascale nodes” (integrating CPUs and GPUs) along with the hardware and software needed to deliver real-world application performance.

The authors of the paper, titled “Achieving Exascale Capabilities through Heterogeneous Computing,” also discuss the challenges involved in building a heterogeneous exascale machine and how AMD is addressing them.

As an example of the improvement needed to reach this next performance milestone, the AMD staffers point out that exascale systems will conceivably span 100,000 nodes, which would require each node to provide at least 10 teraflops on real applications. Today’s highest-performing GPUs offer a peak of about three double-precision teraflops.
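To spell out the arithmetic behind that target (a back-of-the-envelope restatement of the authors’ figures, not a calculation from the paper itself):

$$100{,}000\ \text{nodes} \times 10^{13}\ \tfrac{\text{flops}}{\text{node}} = 10^{18}\ \text{flops} = 1\ \text{exaflop}$$

In other words, each node must sustain more than triple the peak of today’s best double-precision GPUs, and sustained throughput on real applications typically falls well short of peak.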

A system with this much aggregate punch could be assembled from today’s components, but memory and internode communication bandwidth wouldn’t keep pace with demand, the authors contend. The other main challenges are the strict power constraint of “just” tens of megawatts per system and the non-negotiable need for better resilience and reliability to keep such a high-investment machine up and running.

AMD’s vision for realizing this overarching goal features a heterogeneous approach, which won’t come as a surprise to followers of the company. AMD talked up the potential benefits of tight CPU-GPU integration for HPC workloads when it acquired graphics chipset manufacturer ATI in 2006, and kicked off the Fusion program. In January 2012, AMD rebranded the Fusion platform as the Heterogeneous Systems Architecture (HSA). For much of 2013 and 2014, the company seemed focused almost exclusively on the enterprise and desktop space, but in recent months announced a return to the high-end server space and high-performance computing.

In the paper’s abstract, the authors note that as performance becomes harder and harder to extract from general-purpose processors [with Moore’s law diminishing], customized hardware regains some of its appeal, but after more than a decade of access to cheap commodity off-the-shelf components, that course is a difficult one to reverse. The heterogeneous approach holds that commodity economies of scale still apply, but there will no longer be one ISA to rule them all.

They write:

“Hardware optimized for specific functions is much more energy efficient than implementing those functions with general purpose cores. However, there is a strong desire for supercomputer customers to not have to pay for custom components designed only for high-end HPC systems, and therefore high-volume GPU technology becomes a natural choice for energy-efficient data-parallel computing.”

[Figure 1: AMD’s exascale vision (Source: IEEE Micro, July 2015)]

In AMD’s envisioned exascale machine, each node consists of a high-performance accelerated processing unit (APU) that integrates a high-throughput general-purpose GPU (GPGPU) with a high-performance multicore CPU. In the authors’ words, “the GPUs provide the high throughput required for exascale levels of computation, whereas the CPU cores handle hard-to-parallelize code sections and provide support for legacy applications.”

The AMD-conceived system also employs a heterogeneous memory architecture, combining die-stacked dynamic RAM (DRAM) and high-capacity nonvolatile memory (NVM) to achieve high bandwidth, low energy, and sufficient total memory capacity for the large problem sizes that will characterize exascale science. Rounding out AMD’s proposed system, compute and memory would connect to the other system nodes via a high-bandwidth, low-overhead network interface controller (NIC).

AMD considered a CPU-only system as an exascale candidate, but believes the requisite power envelope is unattainable with that design. It also considered a system with external discrete GPU cards connected to CPUs, but believes an integrated chip is superior for the following reasons:

+ Lower overheads (both latency and energy) for communicating between the CPU and GPU, for both data movement and launching tasks/kernels.

+ Easier dynamic power shifting between the CPU and GPU.

+ Lower overheads for cache coherence and synchronization between the CPU and GPU cache hierarchies, which in turn improves programmability.

+ Higher flops per m³ (performance density).

AMD believes so strongly in its APU-based approach (combined with its Heterogeneous Systems Architecture framework) that it refers to its next-generation APU as an exascale heterogeneous processor (EHP).

“A critical part of our heterogeneous computing vision is that each EHP fully supports HSA, which provides (among other things) a system architecture where all devices within a node (such as the CPU, GPU, and other accelerators) share a single, unified virtual memory space,” the authors state. “This lets programmers write applications in which CPU and GPU code can freely exchange pointers without needing expensive memory transfers over PCI Express (PCIe), reformatting or marshalling of data structures, or complicated device-specific memory allocation.
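To make the pointer-sharing point concrete, here is a minimal C++ sketch in which an ordinary thread stands in for a GPU agent. It illustrates the programming model the authors describe, not actual HSA runtime code; gpu_kernel and the thread-based launch are hypothetical stand-ins:

```cpp
// Illustrative sketch only: a plain C++ thread stands in for an HSA GPU agent.
// Under HSA's shared virtual memory, CPU and GPU code can dereference the
// same pointers; no PCIe transfer or data marshalling is needed.
#include <cstdio>
#include <thread>
#include <vector>

struct Node {           // a pointer-linked structure, expensive to marshal
    double value;
    Node*  next;
};

// Hypothetical "kernel": in an HSA system this would run on the GPU,
// dereferencing the same virtual addresses the CPU built.
void gpu_kernel(Node* head) {
    for (Node* n = head; n; n = n->next)
        n->value *= 2.0;          // operate in place on shared memory
}

int main() {
    std::vector<Node> pool(4);
    for (size_t i = 0; i < pool.size(); ++i) {
        pool[i].value = double(i);
        pool[i].next  = (i + 1 < pool.size()) ? &pool[i + 1] : nullptr;
    }
    // Non-HSA flow would require: allocate a device buffer, flatten and copy
    // the list over PCIe, translate pointers, launch, copy results back.
    // HSA flow: just hand over the pointer.
    std::thread gpu(gpu_kernel, &pool[0]);
    gpu.join();
    for (const Node& n : pool) std::printf("%.1f\n", n.value);
}
```

The key property is that the pointer-linked structure is handed to the “kernel” as-is; in a conventional discrete-GPU flow, the list would first have to be flattened, copied across PCIe, and its pointers translated to device addresses.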

“HSA also provides user-level task queues supported by the hardware, wherein any computing unit can generate work for any other unit. For example, a GPU can launch new tasks on the GPU itself, or even back to the CPU, without involving the operating system or complex drivers, whereas in most conventional (non-HSA) GPU-based heterogeneous computing, all control must flow through the CPU, which can lead to significant inefficiencies and harder-to-program code structures.”
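Real HSA queues are hardware-backed, user-level structures, but the control-flow idea can be modeled in plain C++. In the sketch below, a mutex-protected queue stands in for the hardware queue, and either “agent” can enqueue work for the other without any OS or driver round-trip; this is a conceptual illustration under those stand-in assumptions, not HSA API usage:

```cpp
// Conceptual sketch of HSA-style user-level task queues, modeled with plain
// C++ threads: any "agent" can generate work for any other agent entirely in
// user space -- no operating-system call or driver round-trip involved.
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

struct UserQueue {                       // stand-in for a hardware-backed queue
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    void push(std::function<void()> t) {
        std::lock_guard<std::mutex> lock(m);
        tasks.push(std::move(t));
    }
    bool pop(std::function<void()>& t) {
        std::lock_guard<std::mutex> lock(m);
        if (tasks.empty()) return false;
        t = std::move(tasks.front());
        tasks.pop();
        return true;
    }
};

int main() {
    UserQueue cpu_q, gpu_q;

    // "GPU" work that, upon finishing, enqueues a follow-up task directly
    // onto the CPU's queue -- the cross-enqueueing capability quoted above.
    gpu_q.push([&cpu_q] {
        std::printf("GPU: finished kernel, handing result to CPU\n");
        cpu_q.push([] { std::printf("CPU: post-processing result\n"); });
    });

    std::thread gpu([&] {                // the "GPU" drains its queue
        std::function<void()> t;
        while (gpu_q.pop(t)) t();
    });
    gpu.join();

    std::function<void()> t;             // the "CPU" drains its queue
    while (cpu_q.pop(t)) t();
}
```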

The figure from AMD shows what the EHP architecture might look like. Note how it integrates CPU and GPU computational resources along with in-package memory (such as 3D DRAM) to provide 10 teraflops of sustained throughput, making it possible to reach the target computational throughput of 1 exaflop by coupling 100,000 EHP nodes. AMD points out that while the integrated 3D DRAM provides the bulk of the memory bandwidth, additional off-package memory is still required to serve total per-node memory capacity needs.

Hierarchical memory organization is employed to address the conflicting objectives of bandwidth and capacity, something the AMD scientists explain in detail in the journal article. AMD envisions that “the first-level DRAM will offer high bandwidth and low energy-per-bit memory access, as well as buffering of store operations for the NVM layer.” In the exascale timeframe, the second level is considered likely to be implemented with NVM technologies (such as phase-change memory and memristors). This second-level, off-package memory is intended to satisfy per-node capacity mandates at less cost and lower energy than DRAM. AMD notes that for systems needing higher memory capacities, a third level of storage-class memory, such as flash or resistive memory, could be added to the node.
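As a rough illustration of the store-buffering role described for the first-level DRAM, the sketch below fronts a large, slow “NVM” array with a tiny write-back “DRAM” tier. The capacities, eviction policy, and the TwoLevelMemory class itself are invented for illustration; the paper does not prescribe a design at this level:

```cpp
// Sketch of the two-level idea: a small, fast "DRAM" tier absorbs and buffers
// stores, writing them back to a large "NVM" tier only on eviction.
#include <cstdio>
#include <unordered_map>
#include <vector>

class TwoLevelMemory {
    static const size_t kDramLines = 4;         // tiny, for demonstration only
    std::unordered_map<size_t, double> dram;    // fast, low-capacity tier
    std::vector<double> nvm;                    // slow, high-capacity tier
public:
    explicit TwoLevelMemory(size_t capacity) : nvm(capacity, 0.0) {}

    void store(size_t addr, double v) {
        if (dram.size() >= kDramLines && !dram.count(addr)) {
            // Evict one buffered line to NVM to make room (write-back).
            auto victim = dram.begin();
            nvm[victim->first] = victim->second;
            dram.erase(victim);
        }
        dram[addr] = v;                         // the store lands in DRAM
    }

    double load(size_t addr) {
        auto it = dram.find(addr);              // hit in DRAM if still buffered
        return it != dram.end() ? it->second : nvm[addr];
    }
};

int main() {
    TwoLevelMemory mem(1024);
    for (size_t i = 0; i < 8; ++i) mem.store(i, double(i) * 1.5);
    std::printf("mem[2] = %.1f\n", mem.load(2));
}
```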

AMD’s conceptual EHP design isn’t limited to just x86 cores. As the company has detailed in the past, its vision for APUs is an open one. ARM is an HSA partner, and AMD hints that the ARM instruction set architecture could be used in a similar manner to x86 within the node: to execute serial portions of applications, non-performance-critical sections, or legacy applications that haven’t yet undergone porting to GPUs.

The 12-page paper offers a lot more than what’s covered here, including:

+ A deep discussion of the memory bandwidth and memory capacity requirements of exascale in the context of both current and in-development memory technologies.

+ An overview of the significance of the HSA project, which has a prominent role in providing “open hardware and software interfaces…that will enable HPC application programmers to unlock the computing capabilities of the underlying heterogeneous exascale system.”

+ Proposed solutions to such issues as programmability at scale and physical constraints relating to power, resilience and reliability.

+ The framing of heterogeneous computing as a key technology for enabling higher performance and lower power across the complete spectrum of computing devices, from laptops to game consoles to supercomputers.

The paper didn’t, however, offer many details as far as AMD’s GPU and APU roadmaps are concerned. The company does have a next-gen server APU in development that is on target to deliver “multi-teraflops for HPC and workstation” in the 2016-2017 timeframe, but it’s unclear whether those teraflops will be of the half-, single-, or double-precision variety.

And earlier this month, AMD announced the newest member of its GPU family, the FirePro S9170, said to be “the world’s first and fastest 32GB single-GPU server card for DGEMM heavy double-precision workloads.” The chip is based on the second-generation AMD Graphics Core Next (GCN) architecture and is capable of delivering up to 5.24 teraflops of peak single-precision compute performance and up to 2.62 teraflops of peak double-precision performance. AMD says the card delivers 40 percent better double-precision performance while using 10 percent less power than the competition.
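Those two peak figures are consistent with this part running double-precision at half its single-precision rate. Assuming the publicly listed specs of 2816 stream processors and a roughly 930 MHz clock (numbers not given in the article), the peaks work out as:

$$2816\ \text{ALUs} \times 2\ \tfrac{\text{flops}}{\text{cycle}} \times 0.93\ \text{GHz} \approx 5.24\ \text{TF (SP)}, \qquad \tfrac{1}{2} \times 5.24\ \text{TF} = 2.62\ \text{TF (DP)}$$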
