Terascale Memory Challenges and Solutions

By Dave Dunning, Randy Mooney, Pat Stolt, Bryan Casper and James E. Jaussi

December 6, 2010

Introduction

Modern computer architectures commonly include one or more CPUs, a cache or caches, a few DDR-based memory channels, rotational and/or solid state disks and one or more Ethernet ports.
 
Figure 1: System block diagram

A high percentage of CPU-based systems use DDR-based DRAM for external memory. DDR-based DRAM currently provides a very favorable cost per bit while delivering enough bandwidth, at low enough latency, to meet application demands. Although process engineers have continued to find ways to cost-effectively scale feature size, the power consumed by CPUs has become prohibitive.

In contrast to the previous decade, CPU clock rates are scaling more slowly over time due to power constraints. However, the number of transistors per unit of silicon area continues to increase roughly at the rate of Moore’s Law. Therefore, CPUs are being designed and built with an increasing number of cores, with each core executing one or more threads of instructions.

This puts a new kind of pressure on the memory subsystem. Though the demand for instructions and data per thread is not increasing very quickly, the rapid growth in the number of available threads puts an increasing emphasis on memory bandwidth. This article summarizes the challenges that arise for the memory subsystem associated with these terascale CPUs.

Memory Key Metrics and Fundamentals

The key metrics for examining the memory sub-systems are bandwidth, capacity, latency, power, system volume, and cost.

Bandwidth (Bytes/second, B/s or bits/second, b/s). Bandwidth is the number of Bytes transferred in a given amount of time. It is usually the most talked-about performance metric. The bandwidth required for a system usually depends on the market segment and the application (working set size, code arrangement, and structure). Interestingly, bandwidth alone is not a very useful metric for system design decisions; it must be considered in conjunction with other factors such as cost, power, and form factor (size/space) constraints.

Capacity (Bytes or B). Capacity is the total number of bytes that can be stored in the region of memory.

Latency (seconds, sec or simply s). This is the time it takes to read a word from the region of memory. The focus is usually on read latency. Write latency is often of less interest, because the time required to write to memory is usually not a factor in application performance.

Power (Watts or W). Power equals the energy consumed divided by the time in which that energy is consumed.

System volume, Form Factor. This is the physical volume required to integrate a given technology into a system. It is usually driven by the physical size of components and/or cooling requirements.

Cost ($). Cost usually refers to the money required to use components in a system.

Often metrics are combined. Frequently used combinations include bandwidth per cost and power per unit bandwidth (Watts per b/s, which is equivalent to Joules per bit).
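Power per unit bandwidth and energy per bit are the same quantity in different units: 1 W per Gb/s equals 1 nJ per bit, and 1 mW per Gb/s equals 1 pJ per bit. A minimal Python sketch of the conversion (our illustration, not from the original article):

    # Convert a power-per-bandwidth figure to energy per bit.
    # 1 mW / (1 Gb/s) = 1e-3 J/s / 1e9 b/s = 1e-12 J/bit = 1 pJ/bit
    def mw_per_gbps_to_pj_per_bit(mw_per_gbps):
        joules_per_bit = (mw_per_gbps * 1e-3) / 1e9   # Watts divided by bits/second
        return joules_per_bit * 1e12                  # Joules -> picojoules

    for figure in (40, 200):  # the per-Gb/s range quoted later in this article
        print(f"{figure} mW/(Gb/s) = {mw_per_gbps_to_pj_per_bit(figure):.0f} pJ/bit")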

Memory Scaling

Double data rate (DDR) memory has become the dominant memory technology (in terms of number of units sold). DDR-based DRAM products are optimized for high capacity and low cost, not high bandwidth, low latency or low power.

As the CPUs continue to increase in capability toward the terascale level, many of the key metrics are not scaling well and are becoming system design challenges. The metrics being stressed most are bandwidth, power and latency. As potential solutions are investigated, the other metrics of capacity and form factor become challenging as well.

The expression “hitting the memory wall” is often used. Commonly, the “memory wall” has the connotation that DDR cannot supply enough bandwidth for CPUs. A more accurate statement is that, based on the DDR interface and channel specifications, bandwidth per pin cannot scale up as quickly as the compute capability of CPUs. Simply adding more pins in parallel is not an appealing option for system cost reasons. The problem becomes acute when CPUs reach the terascale performance level. More precisely, the rate at which bits can be moved between CPUs and DDR devices is limited by frequency-dependent loss, impedance discontinuities, the power available, and the cost to implement. It will be extremely challenging to push and pull data at rates that exceed 2.4 – 3.2 Gb/s per data signal across DDR channels.
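To see why simply adding pins is unattractive, a rough back-of-the-envelope count (our assumptions; only the 2.4 – 3.2 Gb/s per-pin ceiling comes from the text above) of the data signals needed for an assumed aggregate of 1 TB/s:

    # Data signals needed for an assumed 1 TB/s aggregate at the quoted
    # per-pin ceiling. 1 TB/s = 8,000 Gb/s.
    target_gbps = 1.0 * 8 * 1000
    for pin_rate_gbps in (2.4, 3.2):
        pins = target_gbps / pin_rate_gbps
        print(f"{pin_rate_gbps} Gb/s per signal -> ~{pins:,.0f} data signals for 1 TB/s")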

The need to reduce latency, and the value of doing so, is very difficult to assess. Most systems today place a higher value on bandwidth and use forms of pipelining, such as pre-fetching, to hide latency. As CPUs approach the terascale range via many threads running in parallel, pipeline-based methods for hiding memory latency will become less effective. To keep cost and power low, more emphasis will be placed on reducing the latency of the first level of the memory hierarchy that is external to the CPU chip.

Increasing the bandwidth by adding data pins, as well as reducing the read latency of DDR devices, could be done while maintaining the existing architectures of both the DRAM and the interface. However, addressing these bandwidth and latency metrics alone is not enough, since one of the greatest challenges to achieving terascale bandwidths is maintaining low power consumption.

DRAM device power is composed of three main components: power consumed by the storage array, power consumed by the datapath and power consumed by the I/O pins. Roughly 50 percent of the power consumed is in the datapath, with the other 50 percent split between I/O circuits and the array. All three areas need to be addressed to create DRAM products suitable for terascale systems.

Evolutionary DRAM Summary

In summary, the key trends for evolutionary memory sub-system scaling are:

• Bandwidth scaling for traditional DDRx-based systems will end at about 2.4 – 3.2 Gb/s per pin (bump).
• To achieve the bit rates above, each channel will likely be limited to one DIMM without extra components, such as buffer on board (motherboard).
• GDDRx gives increased bandwidth but at the cost of capacity. Pin bandwidth will be limited to 5-6 Gb/s for GDDR channels being constructed today.
• Power in the memory sub-system varies from 40 to 200 mW per Gb/s, translating to hundreds of Watts for a TB/s of bandwidth (see the sketch after this list).
• Adding capacity to evolutionary memory sub-systems is limited to adding channels, buffer on board or other forms of buffered DIMMs.
• Latency improvements for evolutionary systems will be minimal.
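A quick check of the power bullet above, applying the quoted 40-200 mW per Gb/s to an assumed aggregate bandwidth of 1 TB/s (our arithmetic, not the authors'):

    # 1 TB/s = 8,000 Gb/s; multiply by the quoted power-per-bandwidth range.
    aggregate_gbps = 1.0 * 8 * 1000
    for mw_per_gbps in (40, 200):
        watts = mw_per_gbps * 1e-3 * aggregate_gbps
        print(f"{mw_per_gbps} mW/(Gb/s) x 1 TB/s = {watts:,.0f} W")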

Terascale Memory Challenges and Future Memory Technologies

In the following section, we describe some of the challenges facing memory architects and designers, along with potential solutions.

Memory Technology

The first question we need to ask is which memory technology, or technologies, will fill the needs of these systems. DRAM technology has long dominated the market for off-chip memory bandwidth solutions in computing systems. While non-volatile memory technologies such as NAND Flash and Phase Change Memory are vying for a share of this market, they are at a disadvantage with respect to bandwidth, latency, and power.

A holistic approach is needed to achieve the required results. The main factors that will need to be addressed to achieve the optimal solution for increased bandwidth and lower energy per bit of future terascale memory sub-systems are the channel materials, the I/O density, the memory density, and the memory device architecture. We examine the changes required in these areas.

Channel Materials

First we look at the materials that could be used to construct channels between CPUs and memory modules.

Figure 2: Data Rate versus Trace Length for different materials

Adding complexity to the I/O circuits in the form of additional equalization, more complex clocking circuits, and possibly data coding can increase the data rate, but it also increases the energy per bit moved. More complex interconnects such as flex cabling, improved board materials such as Rogers or high-density interconnect (HDI), and eventually optical solutions must be considered. The emphasis on higher bandwidth per pin, higher I/O density and lower energy per bit read/written will lead to selective use of new channel materials.

Memory Density

A DRAM technology that supports high bandwidth per pin, high capacity and low energy per bit moved will be required. A promising approach to these issues is 3-D technology based on through-silicon vias (TSVs). 3-D stacked memory will provide an increase in memory density through stacking, and it will enable a wide datapath from the memory to the external pins, relaxing the per-pin bandwidth requirement in the memory array, as shown in Figure 3.

Figure 3: 3-D Stacked Memory Module
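To make the wide-datapath benefit concrete, here is a small sketch (all numbers are our assumptions, not from the article) of how widening the interface lowers the per-signal rate for a fixed aggregate bandwidth:

    # For a fixed aggregate bandwidth, per-signal rate = bandwidth / width.
    aggregate_gbps = 128 * 8                 # assume a 128 GB/s module target
    for width_signals in (64, 256, 1024):    # hypothetical interface widths
        per_signal_gbps = aggregate_gbps / width_signals
        print(f"{width_signals:5d} signals -> {per_signal_gbps:5.1f} Gb/s per signal")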

This design achieves six objectives:

  1. A method for further scaling of DRAM density.
  2. A relatively wide datapath from the memory array to the memory pins, relaxing the speed constraints on the DRAM technology.
  3. A high density connection from the memory module to the memory controller, which makes for more efficient use of power.
  4. The elimination of many of the traditional interconnect components from the electrical path.
  5. Separation of the high-bandwidth I/O from the microprocessor and memory controller power-delivery path, by using the top of the package for high-speed I/O.
  6. Elimination, through the increased density, of the electrically challenged and energy-inefficient multi-drop DIMM bus.

This introduces a key new challenge: we need a way to move the data from the wide memory-array datapath to the memory device pins. An optimal solution must multiplex the data up to the rate of the faster device pins (Gb/s), rather than the rate of the slower, wider memory datapath, and it must do so at an energy level (low pJ per bit) that closely matches the characteristics of the CPU generating the memory requests. The architecture, design and implementation of this data-collection function will depend on the usage of the 3-D memory module, ranging from specialized DRAM chips to a mix of logic-process chips and DRAM-process chips.
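A hypothetical sizing of this data-collection (serialization) function, with every number assumed purely for illustration: the multiplexing ratio is the per-pin external rate divided by the per-wire rate of the internal array datapath.

    # Hypothetical numbers: a wide, slow array datapath serialized onto a
    # narrow set of fast device pins with matched aggregate bandwidth.
    internal_width = 1024         # wires out of the DRAM array
    internal_rate_gbps = 0.2      # per-wire rate of the array datapath
    external_pins = 32            # high-speed device pins
    external_rate_gbps = 6.4      # per-pin rate toward the CPU

    internal_bw = internal_width * internal_rate_gbps
    external_bw = external_pins * external_rate_gbps
    mux_ratio = external_rate_gbps / internal_rate_gbps

    print(f"internal {internal_bw:.0f} Gb/s vs external {external_bw:.0f} Gb/s")
    print(f"serialization ratio ~{mux_ratio:.0f}:1 per device pin")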

Memory Hierarchy

Given a memory of the type we describe, we must also examine the entire memory hierarchy. For example, it may be advantageous to add a level of memory to the hierarchy.

Analyzing different memory hierarchies is a huge challenge. All the metrics mentioned previously need to be evaluated in the context of the applications of interest (see “Key Metrics”). When considering additional levels of the memory hierarchy, the key decisions are where to add a level or levels in the memory hierarchy and how the levels of memory are managed.

Memory Hierarchy — Where to Add Memory

Earlier, we concluded that to meet the needs of terascale systems, designers should investigate new architectures and manufacturing techniques for DRAM, with an emphasis on 3-D stacking with TSVs. We are confident that these techniques will lead to improved DRAM products, while maintaining a low cost per bit stored. We also realize that when the new technologies are introduced, it will take time for the price per bit to drop. Therefore, early use of 3-D stacked memory as near memory, backed up by DDR-based DRAM or other low cost per bit memory technologies, may be an appealing and cost-effective choice for designers.
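One way to reason about such a near/far arrangement is a standard first-order average-latency model (a textbook approximation, not from the article, with assumed numbers):

    # Average read latency for a two-level external hierarchy:
    # near (3-D stacked) memory backed by far (DDR-based) memory.
    near_latency_ns = 30.0    # assumed near-memory read latency
    far_latency_ns = 80.0     # assumed far-memory read latency

    for hit_rate in (0.5, 0.8, 0.95):
        avg_ns = hit_rate * near_latency_ns + (1.0 - hit_rate) * far_latency_ns
        print(f"near-memory hit rate {hit_rate:.0%}: ~{avg_ns:.0f} ns average read")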

The policies governing what data (or instructions) are placed where, as well as what is copied and shared, are the key research issues facing system designers. The simple statement that data movement must be minimized will take on additional importance as terascale CPUs are built.

Summary and Conclusions

The demand for bandwidth continues to increase. Terascale CPUs will exacerbate the challenges of memory subsystem design, including the architecture and design of memory controllers, memory modules and the memory devices themselves. DDR-based memory and interfaces will continue to be used for the market segments where they suffice, but the shift to something new will begin in the next few years.

To learn more, read the Intel Technology Journal, Volume 13, Issue 4, December 2009, “Addressing the Challenges of Tera-scale Computing,” ISBN 978-1-934053-23-2.
