AMD’s Exascale Strategy Hinges on Heterogeneity

By Tiffany Trader

July 29, 2015

In a recent IEEE Micro article, a team of engineers and computer scientists from chipmaker Advanced Micro Devices (AMD) describes the company’s vision for exascale computing: a heterogeneous approach built around “exascale nodes” (integrated CPUs and GPUs) together with the hardware and software support needed to deliver real-world application performance.

The authors of the paper, titled “Achieving Exascale Capabilities through Heterogeneous Computing,” also discuss the challenges involved in building a heterogeneous exascale machine and how AMD is addressing them.

To illustrate the leap required to reach this next performance milestone, the AMD staffers point out that an exascale system will conceivably span 100,000 nodes, which would require each node to deliver at least 10 teraflops on real applications. Today’s fastest GPUs peak at about three double-precision teraflops.
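The arithmetic behind that gap is straightforward; here is a minimal sketch in C, assuming the 1-exaflop system target and roughly 100,000-node count cited above:

```c
#include <stdio.h>

int main(void) {
    const double system_flops = 1.0e18;  /* 1 exaflop sustained target            */
    const double node_count   = 1.0e5;   /* roughly 100,000 nodes                 */
    const double gpu_peak_tf  = 3.0;     /* today's ~3 TF double-precision GPU    */

    /* Sustained throughput each node must deliver, in teraflops */
    double per_node_tf = system_flops / node_count / 1.0e12;

    printf("Required per node: %.0f TF sustained\n", per_node_tf);
    printf("Gap vs. a %.0f TF peak GPU: %.1fx\n", gpu_peak_tf, per_node_tf / gpu_peak_tf);
    return 0;
}
```

That works out to 10 teraflops sustained per node, more than a 3x gap before even accounting for the difference between peak and sustained performance.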

A system with this much punch could be assembled through sheer aggregation, but at today’s technology levels, memory and internode communication bandwidth would not keep pace with demand, the authors contend. The other main challenges are a strict power constraint of “just” tens of megawatts per system and the non-negotiable need for better resilience and reliability to keep such a high-investment machine up and running.
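The power constraint can be translated into an efficiency target in the same back-of-the-envelope fashion; the sketch below assumes a 20 MW system budget, a figure commonly attached to exascale programs rather than one stated in the paper:

```c
#include <stdio.h>

int main(void) {
    const double system_flops = 1.0e18;  /* 1 exaflop sustained           */
    const double power_watts  = 20.0e6;  /* assumed 20 MW system budget   */

    /* Required whole-system energy efficiency in gigaflops per watt */
    double gflops_per_watt = system_flops / power_watts / 1.0e9;
    printf("Required efficiency: %.0f GF/W\n", gflops_per_watt);
    return 0;
}
```

That comes to 50 gigaflops per watt, delivered by the entire node including memory and network, which is the efficiency gap heterogeneous integration is meant to help close.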

AMD’s vision for realizing this overarching goal features a heterogeneous approach, which won’t come as a surprise to followers of the company. AMD talked up the potential benefits of tight CPU-GPU integration for HPC workloads when it acquired graphics chipset manufacturer ATI in 2006, and kicked off the Fusion program. In January 2012, AMD rebranded the Fusion platform as the Heterogeneous Systems Architecture (HSA). For much of 2013 and 2014, the company seemed focused almost exclusively on the enterprise and desktop space, but in recent months announced a return to the high-end server space and high-performance computing.

In the abstract for the piece, the authors note that as it gets harder and harder to extract performance gains (thanks to a diminishing Moore’s law), customized hardware regains some of its appeal, but more than a decade of reliance on cheap commodity off-the-shelf components is a difficult course to reverse. The heterogeneous approach keeps the economics of commodity scale, but accepts that there will no longer be one ISA to rule them all.

They write:

“Hardware optimized for specific functions is much more energy efficient than implementing those functions with general purpose cores. However, there is a strong desire for supercomputer customers to not have to pay for custom components designed only for high-end HPC systems, and therefore high-volume GPU technology becomes a natural choice for energy-efficient data-parallel computing.”

Figure 1: AMD’s exascale vision (Source: IEEE Micro, July 2015)

In AMD’s envisioned exascale machine, each node consists of a high-performance accelerated processing unit (APU) that integrates a high-throughput general-purpose GPU (GPGPU) with a high-performance multicore CPU. In the authors’ words, “the GPUs provide the high throughput required for exascale levels of computation, whereas the CPU cores handle hard-to-parallelize code sections and provide support for legacy applications.”

The AMD-conceived system also employs a heterogeneous memory architecture, combining die-stacked dynamic RAM (DRAM) with high-capacity nonvolatile memory (NVM) to achieve high bandwidth, low energy, and sufficient total memory capacity for the large problem sizes that will characterize exascale science. Rounding out AMD’s proposed system, compute and memory would connect to the other system nodes via a high-bandwidth, low-overhead network interface controller (NIC).

A CPU-only system was considered as an exascale candidate, but AMD believes the requisite power envelope is unattainable with that design. The company also considered a system of external discrete GPU cards connected to CPUs, but argues that an integrated chip is superior for the following reasons:

+ Lower overheads (both latency and energy) for communicating between the CPU and GPU, for both data movement and launching tasks/kernels.

+ Easier dynamic power shifting between the CPU and GPU.

+ Lower overheads for cache coherence and synchronization among the CPU and GPU cache hierarchies that in turn improve programmability.

+ Higher flops per m³ (performance density).

AMD believes so strongly in its APU-based approach (combined with its Heterogeneous Systems Architecture framework) that it refers to its next-generation APU as an exascale heterogeneous processor (EHP).

“A critical part of our heterogeneous computing vision is that each EHP fully supports HSA, which provides (among other things) a system architecture where all devices within a node (such as the CPU, GPU, and other accelerators) share a single, unified virtual memory space,” the authors state. “This lets programmers write applications in which CPU and GPU code can freely exchange pointers without needing expensive memory transfers over PCI Express (PCIe), reformatting or marshalling of data structures, or complicated device-specific memory allocation.

“HSA also provides user-level task queues supported by the hardware, wherein any computing unit can generate work for any other unit. For example, a GPU can launch new tasks on the GPU itself, or even back to the CPU, without involving the operating system or complex drivers, whereas in most conventional (non-HSA) GPU-based heterogeneous computing, all control must flow through the CPU, which can lead to significant inefficiencies and harder-to-program code structures.”
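To make those two HSA properties concrete, here is a minimal sketch in C of what pointer sharing and a user-level task queue look like from the programmer’s side. The task descriptor, queue layout, and enqueue helper are hypothetical simplifications for illustration, not the HSA runtime API; the point is that any agent hands over a plain pointer and a queue slot instead of copying buffers over PCIe or calling into a driver.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical task descriptor: in a shared virtual address space the
 * kernel argument is just a pointer -- no marshalling, no device-specific
 * allocation, no explicit host<->device copy. */
typedef struct {
    uint32_t kernel_id;   /* which kernel/function to run              */
    void    *args;        /* pointer valid on CPU *and* GPU under HSA  */
} task_t;

/* Hypothetical user-level queue: a ring written by any agent (CPU or GPU)
 * and drained by hardware, with no operating system involvement. */
typedef struct {
    task_t           slots[256];
    _Atomic uint64_t write_index;
    uint64_t         read_index;   /* advanced by the consuming agent */
} user_queue_t;

/* Any computing unit can enqueue work for any other unit. */
static void enqueue(user_queue_t *q, uint32_t kernel_id, void *shared_args)
{
    uint64_t slot = atomic_fetch_add(&q->write_index, 1) % 256;
    q->slots[slot].kernel_id = kernel_id;
    q->slots[slot].args      = shared_args;  /* raw pointer handed over     */
    /* A real implementation would also ring a hardware doorbell here.      */
}

int main(void)
{
    static user_queue_t q;          /* zero-initialized shared queue        */
    static double partial_sums[4];  /* buffer visible to CPU and GPU alike  */

    /* One agent enqueues work for another simply by passing a pointer. */
    enqueue(&q, /* kernel_id = */ 7, partial_sums);
    return 0;
}
```

In a conventional discrete-GPU setup, the same hand-off would require an explicit device allocation, a host-to-device copy, and a driver call to launch the kernel.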

The figure from AMD shows what the EHP architecture might look like. Note how it integrates CPU and GPU computational resources along with in-package memory (such as 3D DRAM) to provide 10 teraflops of sustained throughput, making it possible to reach a target computational throughput of 1 exaflop by coupling 100,000 EHP nodes. AMD points out that while the integrated 3D DRAM provides the bulk of the memory bandwidth, additional off-package memory is still required to serve total per-node memory capacity needs.

Hierarchical memory organization is employed to address the conflicting objectives of bandwidth and capacity, something the AMD scientists explain in detail in the journal article. AMD envisions that “the first-level DRAM will offer high bandwidth and low energy-per-bit memory access, as well as buffering of store operations for the NVM layer.” In the exascale timeframe, the second level is expected to be implemented with NVM technologies (such as phase change memory and memristors). This second-level off-package memory is intended to satisfy per-node capacity requirements at lower cost and lower energy than DRAM. AMD notes that for systems needing higher memory capacities, a third level of storage-class memory, such as flash or resistive memory, could be added to the node.
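A minimal sketch of how node software might treat such a two-level memory is shown below; the capacity and bandwidth figures are assumptions chosen for illustration, not numbers from the paper:

```c
#include <stdio.h>

typedef enum { LEVEL_DRAM = 0, LEVEL_NVM = 1 } mem_level_t;

/* Illustrative first-level (in-package DRAM) state: small but fast. */
typedef struct {
    double capacity_gb;    /* assumed 32 GB of die-stacked DRAM       */
    double bandwidth_gbs;  /* assumed ~4 TB/s aggregate bandwidth     */
    double used_gb;
} level_state_t;

/* Simple placement heuristic: bandwidth-hungry allocations go to the
 * in-package DRAM while it has room; everything else spills to NVM. */
static mem_level_t place(level_state_t *dram, double size_gb, int bandwidth_critical)
{
    if (bandwidth_critical && dram->used_gb + size_gb <= dram->capacity_gb) {
        dram->used_gb += size_gb;
        return LEVEL_DRAM;
    }
    return LEVEL_NVM;
}

int main(void) {
    level_state_t dram = { .capacity_gb = 32.0, .bandwidth_gbs = 4000.0, .used_gb = 0.0 };

    printf("hot working set  -> %s\n", place(&dram,  24.0, 1) == LEVEL_DRAM ? "DRAM" : "NVM");
    printf("checkpoint state -> %s\n", place(&dram, 256.0, 0) == LEVEL_DRAM ? "DRAM" : "NVM");
    return 0;
}
```

The design intent is the same as the authors describe: keep the bandwidth-critical working set in the in-package DRAM and let capacity-bound data spill to the cheaper, denser NVM level.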

AMD’s conceptual EHP design isn’t limited to x86 cores. As the company has detailed in the past, its vision for APUs is an open one. ARM is an HSA partner, and AMD hints that the ARM instruction set architecture could be used within the node in the same manner as x86: to execute serial portions of applications, non-performance-critical sections, or legacy applications that haven’t yet been ported to GPUs.

The 12-page paper offers a lot more than what’s covered here, including:

+ A deep discussion of the memory bandwidth and memory capacity requirements of exascale in the context of both current and in-development memory technologies.

+ An overview of the significance of the HSA project, which has a prominent role in providing “open hardware and software interfaces…that will enable HPC application programmers to unlock the computing capabilities of the underlying heterogeneous exascale system.”

+ Proposed solutions to such issues as programmability at scale and physical constraints relating to power, resilience and reliability.

+ The framing of heterogeneous computing as a key technology for enabling higher performance and lower power across the complete spectrum of computing devices, from laptops to game consoles to supercomputers.

The paper didn’t, however, offer many details on AMD’s GPU and APU roadmaps. The company does have a next-gen server APU in development that is on target to deliver “multi-teraflops for HPC and workstation” in the 2016-2017 timeframe, but it’s unclear whether those teraflops will be of the half-, single-, or double-precision variety.

And earlier this month, AMD announced the newest member of its GPU family, the FirePro S9170, said to be “the world’s first and fastest 32GB single-GPU server card for DGEMM heavy double-precision workloads.” The chip is based on the second-generation AMD Graphics Core Next (GCN) architecture and is capable of delivering up to 5.24 teraflops of peak single-precision compute performance and up to 2.62 teraflops of peak double-precision performance. AMD says the card delivers 40 percent better double-precision performance while using 10 percent less power than the competition.
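Peak figures like these fall out of shader count and clock speed; the sketch below assumes the commonly listed 2,816 stream processors and roughly 930 MHz clock for the S9170 (neither figure appears in the article), with GCN’s two FLOPs per ALU per cycle and a 1:2 double-precision rate on this part:

```c
#include <stdio.h>

int main(void) {
    /* Assumed FirePro S9170 parameters (not taken from the article) */
    const double stream_processors = 2816.0;
    const double clock_ghz         = 0.93;
    const double flops_per_cycle   = 2.0;   /* fused multiply-add per ALU */

    double sp_tflops = stream_processors * clock_ghz * flops_per_cycle / 1000.0;
    double dp_tflops = sp_tflops / 2.0;     /* assumed 1:2 DP rate        */

    printf("Peak SP: %.2f TF, peak DP: %.2f TF\n", sp_tflops, dp_tflops);
    return 0;
}
```

Under those assumptions the calculation reproduces the 5.24 TF single-precision and 2.62 TF double-precision peaks quoted above.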
