AMD’s Exascale Strategy Hinges on Heterogeneity

By Tiffany Trader

July 29, 2015

In a recent IEEE Micro article, a team of engineers and computer scientists from chipmaker Advanced Micro Devices (AMD) describe the company’s vision for exascale computing: a heterogeneous approach based on “exascale nodes” (integrated CPUs and GPUs) along with the hardware and software support needed to deliver real-world application performance.

The authors of the paper, titled “Achieving Exascale Capabilities through Heterogeneous Computing,” also discuss the challenges involved in building a heterogeneous exascale machine and how AMD is addressing them.

As an example of the improvement that is needed to reach this next performance marker, the AMD staffers point out that exascale systems will conceivably span 100,000 nodes, which would require each node to be capable of providing at least 10 teraflops on real applications. Today, the most-performant GPUs offer a peak of about three double-precision teraflops.
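The arithmetic behind that per-node target is simple; the short sketch below (illustrative only, not from the paper) works out the required sustained per-node performance and the gap relative to the roughly 3-teraflop peak GPUs of the time.

```cpp
#include <cstdio>

int main() {
    // Target system throughput and node count cited in the article.
    const double target_flops = 1e18;      // 1 exaflop
    const double node_count   = 100000.0;  // ~100,000 nodes

    // Required sustained performance per node.
    const double per_node_flops = target_flops / node_count;  // 1e13 = 10 teraflops

    // Peak of a leading double-precision GPU circa 2015 (per the article).
    const double gpu_peak_2015 = 3e12;

    printf("Per-node requirement: %.0f teraflops sustained\n", per_node_flops / 1e12);
    printf("Gap vs. ~3 teraflops peak: %.1fx\n", per_node_flops / gpu_peak_2015);
    return 0;
}
```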

A system with this much punch could conceivably be assembled through sheer aggregation, but at today’s technology levels, memory and internode communication bandwidth wouldn’t satisfy demand, the authors contend. The other main challenges are strict power constraints of “just” tens of megawatts per system and the non-negotiable need for better resilience and reliability to keep such a high-investment machine up and running.

AMD’s vision for realizing this overarching goal features a heterogeneous approach, which won’t come as a surprise to followers of the company. AMD talked up the potential benefits of tight CPU-GPU integration for HPC workloads when it acquired graphics chipset manufacturer ATI in 2006, and kicked off the Fusion program. In January 2012, AMD rebranded the Fusion platform as the Heterogeneous Systems Architecture (HSA). For much of 2013 and 2014, the company seemed focused almost exclusively on the enterprise and desktop space, but in recent months announced a return to the high-end server space and high-performance computing.

In the abstract, the authors note that as it gets harder and harder to extract performance gains [thanks to a diminished Moore’s law], customized hardware regains some of its appeal, but more than a decade of reliance on cheap commodity off-the-shelf components is a difficult course to reverse. The heterogeneous approach holds that you can still benefit from commodity economies of scale, but there will no longer be one ISA to rule them all.

They write:

“Hardware optimized for specific functions is much more energy efficient than implementing those functions with general purpose cores. However, there is a strong desire for supercomputer customers to not have to pay for custom components designed only for high-end HPC systems, and therefore high-volume GPU technology becomes a natural choice for energy-efficient data-parallel computing.”

Figure 1: AMD’s exascale vision (Source: IEEE Micro, July 2015)

In AMD’s envisioned exascale machine, each node consists of a high-performance accelerated processing unit (APU) that integrates a high-throughput general-purpose GPU (GPGPU) with a high-performance multicore CPU. In the authors’ words, “the GPUs provide the high throughput required for exascale levels of computation, whereas the CPU cores handle hard-to-parallelize code sections and provide support for legacy applications.”

The AMD-conceived system also employs a heterogeneous memory architecture, combining die-stacked dynamic RAM (DRAM) and high-capacity nonvolatile memory (NVM) to achieve high bandwidth, low energy, and sufficient total memory capacity for the large problem sizes that will characterize exascale science. Rounding out AMD’s proposed system, compute and memory would connect to the other system nodes via a high-bandwidth, low-overhead network interface controller (NIC).

AMD considered a straight-CPU system as an exascale candidate, but believes the requisite power envelope is unattainable in such a design. It also considered a system with external discrete GPU cards connected to CPUs, but believes an integrated chip is superior for the following reasons:

+ Lower overheads (both latency and energy) for communicating between the CPU and GPU, for both data movement and launching tasks/kernels.

+ Easier dynamic power shifting between the CPU and GPU.

+ Lower overheads for cache coherence and synchronization between the CPU and GPU cache hierarchies, which in turn improves programmability.

+ Higher flops per cubic meter (performance density).

AMD believes so strongly in its APU-based approach (combined with its Heterogeneous Systems Architecture framework) that it refers to its next-generation APU as an exascale heterogeneous processor (EHP).

“A critical part of our heterogeneous computing vision is that each EHP fully supports HSA, which provides (among other things) a system architecture where all devices within a node (such as the CPU, GPU, and other accelerators) share a single, unified virtual memory space,” the authors state. “This lets programmers write applications in which CPU and GPU code can freely exchange pointers without needing expensive memory transfers over PCI Express (PCIe), reformatting or marshalling of data structures, or complicated device-specific memory allocation.

“HSA also provides user-level task queues supported by the hardware, wherein any computing unit can generate work for any other unit. For example, a GPU can launch new tasks on the GPU itself, or even back to the CPU, without involving the operating system or complex drivers, whereas in most conventional (non-HSA) GPU-based heterogeneous computing, all control must flow through the CPU, which can lead to significant inefficiencies and harder-to-program code structures.”
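To make that contrast concrete, here is a minimal sketch of what pointer sharing under a unified virtual memory space looks like from the host side. The gpu_enqueue call and kernel are hypothetical stand-ins for an HSA-style runtime (stubbed out so the example compiles and runs), not AMD’s actual API.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for an HSA-style user-level queue; a real runtime
// would dispatch to the GPU, whereas this stub simply runs the "kernel" on the CPU.
using Kernel = void (*)(double*, size_t);
void gpu_enqueue(Kernel k, double* data, size_t n) { k(data, n); }

// A kernel that both CPU and GPU agents can address through the same pointer.
void scale_kernel(double* data, size_t n) {
    for (size_t i = 0; i < n; ++i) data[i] *= 2.0;
}

int main() {
    // With unified virtual memory, an ordinary host allocation is visible to
    // the GPU: the pointer itself is handed to the kernel, with no device-side
    // malloc, no PCIe staging copy, and no marshalling of the data structure.
    std::vector<double> field(1024, 1.0);
    gpu_enqueue(scale_kernel, field.data(), field.size());

    // In a conventional (non-HSA) discrete-GPU model the same step would need
    // roughly: allocate device buffer, copy host-to-device, launch, copy back,
    // free; each transition adds latency and energy.
    printf("field[0] after kernel: %f\n", field[0]);
    return 0;
}
```

The user-level task queues described in the second paragraph of the quote extend the same idea: the enqueue above could just as well be issued from inside a kernel running on the GPU, without a round trip through the operating system or driver.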

The figure from AMD shows what the EHP architecture might look like. Note how it integrates CPU and GPU computational resources along with in-package memory (such as 3D DRAM) to provide 10 teraflops of sustained throughput, making it possible to reach a target computational throughput of 1 exaflop by coupling 100,000 EHP nodes. AMD points out that while the integrated 3D DRAM provides the bulk of the memory bandwidth, additional off-package memory is still required to serve total per-node memory capacity needs.

A hierarchical memory organization is employed to address the conflicting objectives of bandwidth and capacity, something the AMD scientists explain in detail in the journal article. AMD envisions that “the first-level DRAM will offer high bandwidth and low energy-per-bit memory access, as well as buffering of store operations for the NVM layer.” In the exascale timeframe, the second level is expected to be implemented with NVM technologies (such as phase-change memory and memristors). This second-level off-package memory is intended to satisfy per-node capacity mandates at lower cost and energy than DRAM. AMD notes that for systems that need higher memory capacities, a third level of storage-class memory, such as flash or resistive memory, could be added to the node.
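The placement policy that falls out of such a hierarchy can be sketched in a few lines. The sketch below is illustrative only (the tier sizes are made up and the policy is not taken from the paper), but it shows the basic idea of steering bandwidth-critical data to in-package DRAM and spilling bulk capacity to NVM.

```cpp
#include <cstddef>
#include <cstdio>

// Two memory tiers per node, as described above: die-stacked DRAM for
// bandwidth, off-package NVM for capacity (sizes here are hypothetical).
enum class Tier { InPackageDRAM, OffPackageNVM };

struct NodeMemory {
    size_t dram_free;  // high bandwidth, low energy per bit, limited capacity
    size_t nvm_free;   // high capacity, lower cost, lower bandwidth

    // Keep bandwidth-critical allocations in DRAM while room remains; spill
    // large or cold allocations to the capacity tier.
    Tier place(size_t bytes, bool bandwidth_critical) {
        if (bandwidth_critical && bytes <= dram_free) {
            dram_free -= bytes;
            return Tier::InPackageDRAM;
        }
        nvm_free -= bytes;
        return Tier::OffPackageNVM;
    }
};

int main() {
    NodeMemory mem{32ull << 30, 1ull << 40};    // e.g. 32 GB DRAM, 1 TB NVM (made-up figures)
    Tier hot  = mem.place(8ull << 30, true);    // hot working set
    Tier bulk = mem.place(256ull << 30, false); // bulk dataset
    printf("hot set -> %s, bulk data -> %s\n",
           hot  == Tier::InPackageDRAM ? "in-package DRAM" : "off-package NVM",
           bulk == Tier::InPackageDRAM ? "in-package DRAM" : "off-package NVM");
    return 0;
}
```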

AMD’s conceptual EHP design isn’t limited to just x86 cores. As the company has detailed in the past, its vision for APUs is an open one. ARM is an HSA partner, and AMD hints that the ARM instruction set architecture could be used in a similar manner to x86 within the node: to execute serial portions of applications, non-performance-critical sections, or legacy applications that haven’t yet been ported to GPUs.

The 12-page paper offers a lot more than what’s covered here, including:

+ A deep discussion of the memory bandwidth and memory capacity requirements of exascale in the context of both current and in-development memory technologies.

+ An overview of the significance of the HSA project, which has a prominent role in providing “open hardware and software interfaces…that will enable HPC application programmers to unlock the computing capabilities of the underlying heterogeneous exascale system.”

+ Proposed solutions to such issues as programmability at scale and physical constraints relating to power, resilience and reliability.

+ The framing of heterogeneous computing as a key technology for enabling higher performance and lower power across the complete spectrum of computing devices, from laptops to game consoles to supercomputers.

The paper doesn’t, however, offer many details about AMD’s GPU and APU roadmaps. The company does have a next-gen server APU in development that is on target to deliver “multi-teraflops for HPC and workstation” in the 2016-2017 timeframe, but it’s unclear whether those teraflops will be of the half-, single-, or double-precision variety.

And earlier this month, AMD announced the newest member of its GPU family, the FirePro S9170, said to be “the world’s first and fastest 32GB single-GPU server card for DGEMM heavy double-precision workloads.” The chip is based on the second-generation AMD Graphics Core Next (GCN) architecture and is capable of delivering up to 5.24 teraflops of peak single-precision compute performance and up to 2.62 teraflops of peak double-precision performance. AMD says the card offers 40 percent better double-precision performance while using 10 percent less power than the competition.
