AMD’s Exascale Strategy Hinges on Heterogeneity

By Tiffany Trader

July 29, 2015

In a recent IEEE Micro article, a team of engineers and computer scientists from chipmaker Advanced Micro Devices (AMD) describe the company’s vision for exascale computing as a heterogeneous approach based on “exascale nodes” (comprised of integrated CPUs and GPUs) along with the hardware and software support needed to deliver real-world application performance.

The authors of the paper, titled “Achieving Exascale Capabilities through Heterogeneous Computing,” also discuss the challenges involved in building a heterogeneous exascale machine and how AMD is addressing them.

As an example of the improvement that is needed to reach this next performance marker, the AMD staffers point out that exascale systems will conceivably span 100,000 nodes, which would require each node to be capable of providing at least 10 teraflops on real applications. Today, the most-performant GPUs offer a peak of about three double-precision teraflops.
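As a back-of-envelope check on those figures, the short sketch below works out the gap. The node count, the per-node target, and the roughly three-teraflop figure for today’s GPUs come from the article; the snippet itself is purely illustrative.

```cpp
#include <cstdio>

int main() {
    // Figures quoted in the article (IEEE Micro, July 2015).
    const double nodes         = 100000.0;                  // envisioned exascale system size
    const double exaflop       = 1.0e18;                    // one exaflop, in flops
    const double tflops_needed = exaflop / nodes / 1.0e12;  // per-node target, in teraflops
    const double tflops_today  = 3.0;                       // ~peak DP teraflops of a 2015-era GPU

    std::printf("Per-node target: %.1f TFLOPS\n", tflops_needed);                  // 10.0
    std::printf("Gap vs. a ~3 TFLOPS GPU: %.1fx\n", tflops_needed / tflops_today); // ~3.3x
    return 0;
}
```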

A system with this much punch could be assembled by sheer aggregation, but at today’s technology levels, memory and internode communication bandwidth wouldn’t keep pace, the authors contend. The other main challenges are strict power constraints of “just” tens of megawatts per system and the non-negotiable need for better resilience and reliability to keep such a high-investment machine up and running.

AMD’s vision for realizing this overarching goal features a heterogeneous approach, which won’t come as a surprise to followers of the company. AMD talked up the potential benefits of tight CPU-GPU integration for HPC workloads when it acquired graphics chipset manufacturer ATI in 2006, and kicked off the Fusion program. In January 2012, AMD rebranded the Fusion platform as the Heterogeneous Systems Architecture (HSA). For much of 2013 and 2014, the company seemed focused almost exclusively on the enterprise and desktop space, but in recent months announced a return to the high-end server space and high-performance computing.

In the abstract, the authors note that as performance becomes harder and harder to extract [thanks to a diminishing Moore’s law], customized hardware regains some of its appeal, but more than a decade of reliance on cheap commodity off-the-shelf components is a difficult course to reverse. The heterogeneous approach holds that you can still benefit from commodity economies of scale, but there will no longer be one ISA to rule them all.

They write:

“Hardware optimized for specific functions is much more energy efficient than implementing those functions with general purpose cores. However, there is a strong desire for supercomputer customers to not have to pay for custom components designed only for high-end HPC systems, and therefore high-volume GPU technology becomes a natural choice for energy-efficient data-parallel computing.”

Figure 1: AMD’s exascale vision (Source: IEEE Micro, July 2015)

In AMD’s envisioned exascale machine, each node consists of a high-performance accelerated processing unit (APU) which integrates a high-throughput general-purpose GPU (GPGPU) with a high-performance multicore CPU. In the authors’ words, “the GPUs provide the high throughput required for exascale levels of computation, whereas the CPU cores handle hard-to-parallelize code sections and provide support for legacy applications.”

The AMD-conceived system also employs a heterogeneous memory architecture, combining die-stacked dynamic RAM (DRAM) with high-capacity nonvolatile memory (NVM) to achieve high bandwidth, low energy, and sufficient total memory capacity for the large problem sizes that will characterize exascale science. Rounding out AMD’s proposed system, compute and memory would connect to the other system nodes via a high-bandwidth, low-overhead network interface controller (NIC).

A straight-CPU system was considered as an exascale candidate, but AMD believes the requisite power envelope is unattainable in this design. It has also considered a system with external discrete GPU cards connected to CPUs, but believes an integrated chip is superior for the following reasons:

+ Lower overheads (both latency and energy) for communicating between the CPU and GPU, for both data movement and launching tasks/kernels.

+ Easier dynamic power shifting between the CPU and GPU.

+ Lower overheads for cache coherence and synchronization among the CPU and GPU cache hierarchies that in turn improve programmability.

+ Higher flops per m³ (performance density).

AMD believes so strongly in its APU-based approach (combined with its Heterogeneous Systems Architecture framework) that it refers to its next-generation APU as an exascale heterogeneous processor (EHP).

“A critical part of our heterogeneous computing vision is that each EHP fully supports HSA, which provides (among other things) a system architecture where all devices within a node (such as the CPU, GPU, and other accelerators) share a single, unified virtual memory space,” the authors state. “This lets programmers write applications in which CPU and GPU code can freely exchange pointers without needing expensive memory transfers over PCI Express (PCIe), reformatting or marshalling of data structures, or complicated device-specific memory allocation.

“HSA also provides user-level task queues supported by the hardware, wherein any computing unit can generate work for any other unit. For example, a GPU can launch new tasks on the GPU itself, or even back to the CPU, without involving the operating system or complex drivers, whereas in most conventional (non-HSA) GPU-based heterogeneous computing, all control must flow through the CPU, which can lead to significant inefficiencies and harder-to-program code structures.”
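To make the contrast concrete, here is a minimal C++ sketch of the two models the authors describe. The function names are hypothetical stand-ins, not actual HSA, OpenCL, or driver APIs; the point is only that a shared virtual address space removes the explicit staging copies and device-specific allocation, and that work need not always be funneled through the CPU.

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <vector>

// Hypothetical stand-ins for vendor APIs -- illustrative stubs only.
static void* device_alloc(size_t bytes)                          { return std::malloc(bytes); }
static void  copy_to_device(void* d, const void* s, size_t n)    { std::memcpy(d, s, n); }
static void  copy_from_device(void* d, const void* s, size_t n)  { std::memcpy(d, s, n); }
static void  launch_kernel(double* p, size_t n) { for (size_t i = 0; i < n; ++i) p[i] *= 2.0; }

// Conventional (non-HSA) discrete-GPU flow: the CPU owns all control,
// and data is staged across PCIe in both directions.
void conventional_flow(std::vector<double>& data) {
    size_t bytes = data.size() * sizeof(double);
    double* d_buf = static_cast<double*>(device_alloc(bytes));
    copy_to_device(d_buf, data.data(), bytes);      // explicit transfer in
    launch_kernel(d_buf, data.size());
    copy_from_device(data.data(), d_buf, bytes);    // explicit transfer back
    std::free(d_buf);
}

// HSA-style flow in a single unified virtual address space: the GPU can
// dereference the same pointer the CPU uses, so the staging copies and
// device-specific allocation disappear; with user-level queues, a kernel
// could likewise enqueue follow-up work for the CPU or the GPU itself.
void hsa_style_flow(std::vector<double>& data) {
    launch_kernel(data.data(), data.size());        // pass the CPU pointer directly
}

int main() {
    std::vector<double> v(1024, 1.0);
    conventional_flow(v);
    hsa_style_flow(v);
    return 0;
}
```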

The figure from AMD shows what the EHP architecture might look like. Note how it integrates CPU and GPU computational resources along with in-package memory (such as 3D DRAM) to provide 10 teraflops of sustained throughput, making it possible to reach a target computational throughput of 1 exaflop by coupling 100,000 EHP nodes. AMD points out that while the integrated 3D DRAM provides the bulk of the memory bandwidth, additional off-package memory is still required to serve total per-node memory capacity needs.

Hierarchical memory organization is employed to address the conflicting objectives of bandwidth and capacity, something the AMD scientists explain in detail in the journal article. AMD envisions that “the first-level DRAM will offer high bandwidth and low energy-per-bit memory access, as well as buffering of store operations for the NVM layer.” In the exascale timeframe, the second level is considered likely to be implemented with NVM technologies (such as phase change memory and memristors). This second-level off-package memory is intended to satisfy per-node capacity mandates at lower cost and energy than DRAM. AMD notes that for systems that need higher memory capacities, a third level of storage-class memory, such as flash or resistive memory, could be added to the node.
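As a rough illustration of how such a tiered organization trades bandwidth against capacity, the sketch below models a naive “fill the fastest tier first” placement policy. The tier ordering follows the article; the capacity and bandwidth figures are placeholders for illustration only, not values from the paper.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// One level of the node's memory hierarchy. The numbers used in main()
// are hypothetical placeholders, not figures from the IEEE Micro paper.
struct MemoryTier {
    std::string name;
    double capacity_gb;    // total capacity of this tier
    double bandwidth_gbs;  // nominal bandwidth (illustrative only)
    double used_gb;        // capacity already consumed
};

// Naive placement policy: put each allocation in the fastest tier that
// still has room, spilling to the next (larger, slower) tier otherwise.
const MemoryTier* place(std::vector<MemoryTier>& tiers, double size_gb) {
    for (auto& t : tiers) {
        if (t.used_gb + size_gb <= t.capacity_gb) {
            t.used_gb += size_gb;
            return &t;
        }
    }
    return nullptr;  // does not fit anywhere on the node
}

int main() {
    // Tier ordering from the article: in-package 3D DRAM, off-package NVM,
    // and an optional storage-class layer for capacity-hungry systems.
    std::vector<MemoryTier> node = {
        {"die-stacked DRAM",       32.0, 1000.0, 0.0},  // high bandwidth, low capacity
        {"off-package NVM",       512.0,  100.0, 0.0},  // high capacity, lower bandwidth
        {"storage-class memory", 4096.0,   10.0, 0.0},  // optional third level
    };

    const double requests_gb[] = {16.0, 24.0, 300.0, 1000.0};
    for (double alloc : requests_gb) {
        const MemoryTier* t = place(node, alloc);
        std::printf("%6.0f GB -> %-22s (~%.0f GB/s)\n",
                    alloc,
                    t ? t->name.c_str() : "(no room)",
                    t ? t->bandwidth_gbs : 0.0);
    }
    return 0;
}
```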

AMD’s conceptual EHP design isn’t limited to just x86 cores. As the company has detailed in the past, its vision for APUs is an open one. ARM is an HSA partner, and AMD hints that the ARM instruction set architecture could be used in a similar manner to x86 within the node: to execute serial portions of applications, non-performance-critical sections, or legacy applications that haven’t yet undergone porting to GPUs.

The 12-page paper offers a lot more than what’s covered here, including:

+ A deep discussion of the memory bandwidth and memory capacity requirements of exascale in the context of both current and in-development memory technologies.

+ An overview of the significance of the HSA project, which has a prominent role in providing “open hardware and software interfaces…that will enable HPC application programmers to unlock the computing capabilities of the underlying heterogeneous exascale system.”

+ Proposed solutions to such issues as programmability at scale and physical constraints relating to power, resilience and reliability.

+ The framing of heterogeneous computing as a key technology for enabling higher performance and lower power across the complete spectrum of computing devices, from laptops to game consoles to supercomputers.

The paper didn’t, however, offer many details as far as AMD’s GPU and APU roadmaps are concerned. The company does have a next-gen server APU in development that is on target to deliver “multi-teraflops for HPC and workstation” in the 2016-2017 timeframe, but it’s unclear whether these will be of the half-, single-, or double-precision variety.

And earlier this month, AMD announced the newest member of its GPU family, the FirePro S9170, said to be “the world’s first and fastest 32GB single-GPU server card for DGEMM heavy double-precision workloads.” The GPU is based on the second-generation AMD Graphics Core Next (GCN) architecture and is capable of delivering up to 5.24 teraflops of peak single precision compute performance and up to 2.62 teraflops of peak double precision performance. AMD says the card delivers 40 percent better double precision performance while using 10 percent less power than the competition.
