With the race towards exascale heating up – for example, the Exascale Computing Project’s PathForward awards are expected soon – AMD delivered more details of its exascale vision at last month’s 23rd IEEE International Symposium on High-Performance Computer Architecture (HPCA 2017). The chipmaker presented an Exascale Node Architecture (ENA) as the “primary building block for exascale machines,” including descriptions of its component, interconnect, and packaging strategy, along with simulation-based benchmark results to bolster its case.
The new work, captured in an AMD-authored paper (Design and Analysis of an APU for Exascale Computing), comes at a time when many technologies (and vendors) are competing for sway in the exascale race; it also follows an earlier AMD position paper (Achieving Exascale Capabilities through Heterogeneous Computing) that broadly championed the need for a heterogeneous computing approach to exascale. (See HPCwire coverage, AMD’s Exascale Strategy Hinges on Heterogeneity).
“The ENA consists of an Exascale Heterogeneous Processor (EHP) coupled with an advanced memory system. The EHP provides a high-performance accelerated processing unit (CPU+GPU), in-package high-bandwidth 3D memory, and aggressive use of die-stacking and chiplet technologies to meet the requirements for exascale computing in a balanced manner. We present initial experimental analysis to demonstrate the promise of our approach, and we discuss remaining open research challenges for the community,” write the authors.
To an extent, the document ticks through familiar challenges – the exascale race is hardly new – and touches on techniques that have already received attention. The authors also note that unsolved issues remain. That said, AMD spells out its ideas for the solution architecture in some detail. Here are a few of the specifics:
- A high-performance accelerated processing unit (APU) that integrates high-throughput GPUs with excellent energy efficiency required for exascale levels of computation, tightly coupled with high-performance multi-core CPUs for serial or irregular code sections and legacy applications
- Aggressive use of die-stacking capabilities that enable dense component integration to reduce data-movement overheads and enhance power efficiency
- A chiplet-based approach that decouples performance-critical processing components (e.g., CPUs and GPUs) from components that do not scale well with technology (e.g., analog components), allowing fabrication in disparate, individually optimized process technologies for cost reduction and design reuse in other market segments
- Multi-level memories that enhance memory bandwidth with in-package 3D memory, which is stacked directly above high-bandwidth-consuming GPUs, while provisioning high-capacity memory outside of the package
- Advanced circuit techniques and active power-management techniques, which yield energy reductions with little performance impact
- Hardware and software mechanisms to achieve high resilience and reliability with minimal impact on performance and energy efficiency
- Concurrency frameworks that leverage the Heterogeneous System Architecture (HSA) and Radeon Open Compute platform (ROCm) software ecosystem to support new and existing applications with high performance and high programmer productivity
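To make the last point a little more concrete: ROCm’s HIP interface is the most visible programmer-facing piece of that software stack today. Below is a minimal, self-contained HIP sketch of the familiar offload pattern; the kernel, problem size, and host-side staging are illustrative choices, not code from the paper. On an HSA-style APU with shared memory, the explicit copies shown could in principle be dropped.

```cpp
// Minimal HIP (ROCm) sketch: a SAXPY-style kernel offloaded to the GPU portion
// of an APU. Illustrative only; names and sizes are assumptions, not from the paper.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    hipMalloc(&dx, n * sizeof(float));
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // On an HSA-style APU with unified memory, the explicit copies above could
    // be avoided; they are kept here for portability to discrete GPUs.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    hipLaunchKernelGGL(saxpy, dim3(blocks), dim3(threads), 0, 0, n, 2.0f, dx, dy);
    hipDeviceSynchronize();

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```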
The paper includes a fair amount of discussion around choices made. For example, “Rather than build a single, monolithic system on chip (SOC), we propose to leverage advanced die-stacking technologies to decompose the EHP into smaller components consisting of active interposers and chiplets. Each chiplet houses either multiple GPU compute units or CPU cores. The chiplet approach differs from conventional multi-chip module (MCM) designs in that each individual chiplet is not a complete SOC. For example, the CPU chiplet contains CPU cores and caches, but lacks memory interfaces and external I/O.”
Chiplet benefits, according to AMD, include die yield, process optimization, and re-usability. On the latter point, AMD reported, “The decomposition of the EHP into smaller pieces enables silicon-level reuse. A single, large HPC-optimized APU would be great for HPC markets, but may be less appropriate for others. For example, one or more of the CPU clusters could be packaged together to create a conventional CPU-only server processor.”
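Purely as an illustration of the distinction being drawn (a chiplet is not a complete SoC; memory interfaces and external I/O live at the package level, which is what enables reuse), here is a toy software model of the idea. The part types, core and CU counts, and package mixes below are invented for the example and are not AMD’s actual design.

```cpp
// Toy model of the chiplet concept (illustrative assumptions only, not AMD's design):
// chiplets carry compute but no memory/I/O interfaces; those belong to the
// interposer/package, so the same chiplet type can be reused across products.
#include <iostream>
#include <string>
#include <vector>

struct Chiplet {            // deliberately not a complete SoC on its own
    std::string kind;       // "CPU" or "GPU"
    int computeUnits;       // CPU cores or GPU CUs (made-up counts)
};

struct Package {            // the package/interposer supplies memory and external I/O
    std::string name;
    std::vector<Chiplet> chiplets;
    int hbmStacksInPackage; // in-package 3D memory stacks (assumed parameter)
};

int main() {
    Chiplet cpu{"CPU", 8};   // hypothetical 8-core CPU chiplet
    Chiplet gpu{"GPU", 40};  // hypothetical 40-CU GPU chiplet

    // HPC-oriented APU package: CPU and GPU chiplets plus stacked memory.
    Package ehp{"EHP-like APU", {cpu, cpu, cpu, cpu, gpu, gpu, gpu, gpu}, 8};

    // Silicon reuse: the same CPU chiplet type packaged as a CPU-only server part.
    Package server{"CPU-only server", {cpu, cpu, cpu, cpu}, 0};

    for (const auto& p : {ehp, server})
        std::cout << p.name << ": " << p.chiplets.size() << " chiplets\n";
    return 0;
}
```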
Six open-source scientific and security-related proxy applications (see table below) were studied to measure the maximum achievable floating-point throughput. AMD characterized application kernels into three categories (a roofline-style sketch of this classification follows the list):
- Compute-intensive Kernels. Compute-intensive kernels have infrequent main-memory accesses, and their performance is bound by compute throughput. As such, these kernels benefit from higher CU counts and GPU frequencies, but they are relatively insensitive to memory bandwidth. In fact, in a power-constrained system like an exascale supercomputer, provisioning higher bandwidth can be detrimental to overall performance because it simply takes power away from the compute resources. “MaxFlops falls under this category, which is a highly compute-intensive kernel as shown in Fig. 4 (shown below). While the performance increases linearly with more CUs and frequency (i.e., each bandwidth curve increases with higher ops-per-byte), bandwidth does not help (i.e., the corresponding CU-frequency points across different bandwidth curves have roughly the same performance level).”
- Balanced Kernels. Balanced kernels, such as CoMD shown in Fig. 5 (not shown), stress both the compute and memory resources. The best performance is observed when all resources are increased together. However, the rate of performance increase plateaus beyond a certain point. It is important to note that the plateau point is different across kernels.
- Memory-intensive Kernels. Memory-intensive kernels, such as LULESH shown in Fig. 6 (not shown), issue a high rate of memory accesses, hence are sensitive to the memory bandwidth. A notable characteristic of this class of kernels is that more CUs and higher GPU frequency are beneficial only up to a certain point. After that, the excessive number of concurrent memory requests starts to thrash the caches and increases contention in the memory and interconnect network, resulting in performance degradation.
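These three categories follow the familiar roofline logic: compare a kernel’s arithmetic intensity (flops per byte of main-memory traffic) against the machine balance (peak compute divided by memory bandwidth). The sketch below walks through that reasoning with invented kernel intensities, machine numbers, and classification thresholds; it is not the model or the data used in the paper.

```cpp
// Roofline-style classification sketch (all numbers invented for illustration).
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Kernel {
    std::string name;
    double flopsPerByte;  // arithmetic intensity: flops per byte of DRAM traffic
};

int main() {
    // Hypothetical machine: 40 TFLOP/s peak compute, 3 TB/s memory bandwidth.
    const double peakTflops = 40.0;
    const double bandwidthTBs = 3.0;
    const double machineBalance = peakTflops / bandwidthTBs;  // ~13 flops/byte

    // Made-up intensities chosen only to place one kernel in each class.
    std::vector<Kernel> kernels = {
        {"compute-intensive (MaxFlops-like)", 100.0},
        {"balanced (CoMD-like)", 12.0},
        {"memory-intensive (LULESH-like)", 0.5},
    };

    for (const auto& k : kernels) {
        // Attainable throughput = min(peak compute, intensity * bandwidth).
        double attainable = std::min(peakTflops, k.flopsPerByte * bandwidthTBs);
        // Thresholds (2x and 0.5x machine balance) are arbitrary cutoffs for the demo.
        std::string cls =
            (k.flopsPerByte > 2.0 * machineBalance) ? "compute-bound" :
            (k.flopsPerByte < 0.5 * machineBalance) ? "bandwidth-bound" : "balanced";
        std::cout << k.name << ": ~" << attainable
                  << " TFLOP/s attainable, " << cls << "\n";
    }
    return 0;
}
```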
“We use a range of HPC applications that exercise various components of the architecture differently. Our analysis of over a thousand different hardware configurations found that utilizing a total of 320 CUs at 1 GHz with 3 TB/s of memory bandwidth achieves the best performance (when considering an average across all applications) under the ENA-node power budget of 160W and area constraints,” report the authors.
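In spirit, that result comes from a constrained design-space sweep: enumerate candidate (CU count, frequency, bandwidth) points, discard any that exceed the node’s power and area budget, and keep the point with the best average performance across the application set. The sketch below mimics only the flow of such a sweep; its power and performance formulas are crude placeholders invented for illustration (and the area constraint is omitted), whereas the paper’s numbers come from detailed simulation of the proxy applications.

```cpp
// Constrained design-space sweep sketch. The power and performance models here
// are made-up placeholders, not the paper's models, which rely on detailed
// simulation of real proxy applications.
#include <algorithm>
#include <iostream>
#include <vector>

struct Config {
    int cus;      // GPU compute units
    double ghz;   // GPU frequency
    double tbps;  // memory bandwidth, TB/s
};

// Placeholder power model: per-CU dynamic power plus a bandwidth cost (invented).
double powerWatts(const Config& c) {
    return c.cus * c.ghz * 0.35 + c.tbps * 15.0;
}

// Placeholder performance model: min of compute and bandwidth limits (invented);
// assumes 128 flops/cycle/CU, which is an assumption for this sketch only.
double perf(const Config& c, double kernelFlopsPerByte) {
    double computeTflops = c.cus * c.ghz * 0.128;
    double memTflops = c.tbps * kernelFlopsPerByte;
    return std::min(computeTflops, memTflops);
}

int main() {
    const double powerBudgetW = 160.0;  // ENA node power budget cited in the paper
    const std::vector<double> appIntensities = {0.5, 4.0, 12.0, 100.0};  // made-up mix

    Config best{};
    double bestScore = -1.0;
    for (int cus = 64; cus <= 384; cus += 32)
        for (double ghz : {0.8, 1.0, 1.2, 1.5})
            for (double tbps : {1.0, 2.0, 3.0, 4.0}) {
                Config c{cus, ghz, tbps};
                if (powerWatts(c) > powerBudgetW) continue;  // enforce power constraint
                double avg = 0.0;
                for (double ai : appIntensities) avg += perf(c, ai);
                avg /= appIntensities.size();
                if (avg > bestScore) { bestScore = avg; best = c; }
            }

    std::cout << "best (toy model): " << best.cus << " CUs @ " << best.ghz
              << " GHz, " << best.tbps << " TB/s, avg ~" << bestScore << " TFLOP/s\n";
    return 0;
}
```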
As an exercise, the AMD paper is worth reading as many of its ideas are likely to be absorbed into resulting exascale computing architectures.
Link to paper: http://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf
Authors (AMD Research):
Thiruvengadam Vijayaraghavan, Yasuko Eckert, Gabriel H. Loh, Michael J. Schulte, Mike Ignatowski, Bradford M. Beckmann, William C. Brantley, Joseph L. Greathouse, Wei Huang, Arun Karunanithi, Onur Kayiran, Mitesh Meswani, Indrani Paul, Matthew Poremba, Steven Raasch, Steven K. Reinhardt, Greg Sadowski, Vilas Sridharan.