Terascale Memory Challenges and Solutions

By Dave Dunning, Randy Mooney, Pat Stolt, Bryan Casper and James E. Jaussi

December 6, 2010

Introduction

Modern computer architectures commonly include one or more CPUs, a cache or caches, a few DDR-based memory channels, rotational and/or solid state disks and one or more Ethernet ports.
 
Figure 1: System block diagram

A high percentage of CPU-based systems use DDR-based DRAM for external memory. DDR-based DRAM currently provides a very favorable cost per bit while supplying enough bandwidth, at low enough latency, to meet application demands. Although process engineers have continued to find ways to cost-effectively scale feature size, the power consumed by CPUs has become prohibitive.

In contrast to the previous decade, CPU clock rates are scaling more slowly over time due to these power constraints. However, the number of transistors per unit of silicon area continues to increase roughly at the rate of Moore’s Law. Therefore, CPUs are being designed and built with an increasing number of cores, with each core executing one or more threads of instructions.

This puts a new kind of pressure on the memory subsystem. Though the demand for instructions and data per thread is not increasing very quickly, the rapid growth in the number of available threads places increasing emphasis on memory bandwidth. This article summarizes the challenges that these terascale CPUs create for the memory subsystem.

Memory Key Metrics and Fundamentals

The key metrics for examining the memory sub-systems are bandwidth, capacity, latency, power, system volume, and cost.

Bandwidth (Bytes/second, B/s, or bits/second, b/s). Bandwidth is the number of bytes transferred in a given amount of time, and it is usually the most talked-about performance metric. The bandwidth required for a system usually depends on the market segment and the application (working set size, code arrangement, and structure). Interestingly, bandwidth alone is not a very useful metric for system design decisions; other factors, such as cost, power, and form factor (size/space) constraints, must be considered in conjunction with it.

Capacity (Bytes or B). Capacity is the total number of bytes that can be stored in the region of memory.

Latency (seconds, sec or simply s). This is the time it takes to read a word from the region of memory. The focus is usually on read latency; write latency is often of less interest because the time required to write to memory is usually not a limiting factor for application performance.

Power (Watts or W). Power equals the energy consumed divided by the time in which that energy is consumed.

System volume, Form Factor. This is the volume required to integrate a given technology into a system. It is usually driven by the physical size of components and/or their cooling requirements.

Cost ($). Cost usually refers to the money required to use components in a system.

Often these metrics are combined. Frequently used combinations include bandwidth per cost and power per bandwidth (Watts per b/s, which is equivalent to Joules per bit).
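To make the units concrete, here is a minimal Python sketch of that conversion (the function name and sample values are illustrative assumptions, not from the article): a power efficiency quoted in mW per Gb/s is the same quantity as an energy per bit, with 1 mW/(Gb/s) working out to exactly 1 pJ/bit.

    # Sketch: power-per-bandwidth expressed as energy per bit.
    def mw_per_gbps_to_pj_per_bit(mw_per_gbps: float) -> float:
        """Convert a power efficiency in mW/(Gb/s) to energy per bit in pJ/bit."""
        watts = mw_per_gbps * 1e-3            # mW -> W (J/s)
        bits_per_second = 1e9                 # 1 Gb/s in b/s
        joules_per_bit = watts / bits_per_second
        return joules_per_bit * 1e12          # J -> pJ

    if __name__ == "__main__":
        for eff in (40, 100, 200):            # illustrative values
            print(f"{eff} mW/(Gb/s) = {mw_per_gbps_to_pj_per_bit(eff):.0f} pJ/bit")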

Memory Scaling

Double data rate (DDR) memory has become the dominant memory technology (in terms of number of units sold). DDR-based DRAM products are optimized for high capacity and low cost, not high bandwidth, low latency or low power.

As the CPUs continue to increase in capability toward the terascale level, many of the key metrics are not scaling well and are becoming system design challenges. The metrics being stressed most are bandwidth, power and latency. As potential solutions are investigated, the other metrics of capacity and form factor become challenging as well.

The expression “hitting the memory wall” is often used. Commonly, the “memory wall” carries the connotation that DDR cannot supply enough bandwidth for CPUs. A more accurate statement is that, given the DDR interface and channel specifications, the bandwidth per pin cannot scale up as quickly as the compute capabilities of CPUs, and simply adding more pins in parallel is not an appealing option for system cost reasons. The problem becomes acute when CPUs reach the terascale performance level. More precisely, the rate at which bits can be moved between CPUs and DDR devices is limited by frequency-dependent loss, impedance discontinuities, the power available, and the cost to implement. It will be extremely challenging to push and pull data at rates that exceed 2.4 – 3.2 Gb/s per data signal across DDR channels.
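To illustrate why simply adding pins does not scale, the following sketch (our illustrative assumption of a 1 TB/s aggregate bandwidth target, combined with the per-pin rates quoted above) estimates how many data pins a DDR-style interface would need.

    # Sketch: data pins needed to hit a terascale bandwidth target at
    # DDR-class per-pin rates. Target and rates are illustrative.
    import math

    def data_pins_required(target_bytes_per_s: float, gbps_per_pin: float) -> int:
        target_bits_per_s = target_bytes_per_s * 8
        return math.ceil(target_bits_per_s / (gbps_per_pin * 1e9))

    if __name__ == "__main__":
        one_tb_per_s = 1e12                   # assumed 1 TB/s aggregate target
        for rate in (2.4, 3.2):
            pins = data_pins_required(one_tb_per_s, rate)
            print(f"{rate} Gb/s per pin -> {pins} data pins for 1 TB/s")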

The need to reduce latency, and the value of doing so, are very difficult to assess. Most systems today place a higher value on bandwidth and use forms of pipelining, such as pre-fetching, to hide latency. As CPUs approach the terascale range via many threads running in parallel, pipeline-based methods of hiding memory latency will become less effective. To keep cost and power low, more emphasis will be placed on reducing the latency of the first level of the memory hierarchy that is external to the CPU chip.

Increasing the bandwidth by adding data pins as well as reducing the read latency of DDR devices could be done while maintaining the existing architectures of both the DRAM as well as the interface. However, addressing these bandwidth and latency metrics alone is not enough since one of the greatest challenges to achieving terascale bandwidths is maintaining low power consumption.

DRAM device power is composed of three main components: power consumed by the storage array, power consumed by the datapath and power consumed by the I/O pins. Roughly 50 percent of the power consumed is in the datapath, with the other 50 percent split between I/O circuits and the array. All three areas need to be addressed to create DRAM products suitable for terascale systems.

Evolutionary DRAM Summary

In summary, the key trends for evolutionary memory sub-system scaling are:

• Bandwidth scaling for traditional DDRx-based systems will end at about 2.4 – 3.2 Gb/s per pin (bump).
• To achieve the bit rates above, each channel will likely be limited to one DIMM unless extra components, such as a buffer on board (motherboard), are added.
• GDDRx gives increased bandwidth but at the cost of capacity. Pin bandwidth will be limited to 5-6 Gb/s for the GDDR channels being constructed today.
• Power in the memory sub-system varies from 40-200 mW per Gb/s, translating to hundreds of Watts for a TB/s of bandwidth (see the sketch after this list).
• Adding capacity to evolutionary memory sub-systems is limited to adding channels, buffer on board, or other forms of buffered DIMMs.
• Latency improvements for evolutionary systems will be minimal.
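The sketch below works through the power bullet above, assuming 1 TB/s of bandwidth, the quoted 40-200 mW per Gb/s range, and the rough datapath/I/O/array split described earlier; the function name and the exact split are illustrative.

    # Sketch: total memory sub-system power for 1 TB/s, split roughly
    # half datapath and a quarter each for I/O circuits and the array.
    def subsystem_power_watts(bandwidth_tb_per_s: float, mw_per_gbps: float) -> float:
        gbps = bandwidth_tb_per_s * 8 * 1000   # TB/s -> Gb/s
        return gbps * mw_per_gbps * 1e-3       # mW -> W

    if __name__ == "__main__":
        for eff in (40, 200):
            total = subsystem_power_watts(1.0, eff)
            print(f"{eff} mW/(Gb/s): total {total:.0f} W "
                  f"(~{0.5 * total:.0f} W datapath, ~{0.25 * total:.0f} W I/O, "
                  f"~{0.25 * total:.0f} W array)")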

Terascale Memory Challenges and Future Memory Technologies

In the following sections, we describe some of the challenges facing memory architects and designers, along with potential solutions.

Memory Technology

The first question we need to ask is which memory technology or technologies will fill the needs of these systems. DRAM technology has long dominated the market for off-chip memory bandwidth solutions in computing systems. While non-volatile memory technologies such as NAND Flash and Phase Change Memory are vying for a share of this market, they are at a disadvantage with respect to bandwidth, latency, and power.

A holistic approach is needed to achieve the required results. The main factors that will need to be addressed to achieve the optimal solution for increased bandwidth and lower energy per bit of future terascale memory sub-systems are the channel materials, the I/O density, the memory density, and the memory device architecture. We examine the changes required in these areas.

Channel Materials

First we look at the materials that could be used to construct channels between CPUs and memory modules.

Figure 2: Data Rate versus Trace Length for different materials

Adding complexity to the I/O circuits, in the form of additional equalization, more complex clocking circuits, and possibly data coding, can increase the data rate, but it also increases the energy per bit moved. More complex interconnects (such as flex cabling), improved board materials (such as Rogers or high-density interconnect, HDI), and eventually optical solutions must be considered. The emphasis on higher bandwidth per pin, higher I/O density, and lower energy per bit read or written will lead to selective use of new channel materials.

Memory Density

A DRAM technology that supports high bandwidth per pin, high capacity, and low energy per bit moved will be required. A promising solution to these issues is 3-D technology based on through-silicon vias (TSVs). 3-D stacked memory provides an increase in memory density through stacking, and it enables a wide datapath from the memory to the external pins, relaxing the per-pin bandwidth requirement in the memory array, as shown in Figure 3.

Figure 3: 3-D Stacked Memory Module

This design achieves six objectives:

  1. A method for further scaling of DRAM density.
  2. A relatively wide datapath from the memory array to the memory pins, relaxing the speed constraints on the DRAM technology.
  3. A high density connection from the memory module to the memory controller, which makes for more efficient use of power.
  4. The elimination of many of the traditional interconnect components from the electrical path.
  5. Separation of the high bandwidth I/O solution from the microprocessor and memory controller power delivery path when the top of the package is used for high-speed I/O.
  6. Elimination, through the increased density, of the electrically-challenged and energy-inefficient multi-drop DIMM bus.

A key new challenge is introduced: we need a way to move data from the wide datapath of the memory array to the memory device pins. An optimal solution must multiplex the data efficiently at a rate that matches the data rate of the device pins (Gb/s), rather than the rate of the slower, wider memory datapath, and must do so at an energy level (low pJ per bit) that closely matches the characteristics of the CPU generating the memory requests. The architecture, design and implementation of this data collection function will depend on the usage of the 3-D memory module, ranging from specialized DRAM chips to a mix of logic-process chips and DRAM-process chips.
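As a rough illustration of that multiplexing requirement, the following sketch computes the per-pin rate implied by a wide, slow internal datapath feeding a narrow set of device pins. All widths and rates here are hypothetical, chosen only to show the serialization ratio involved.

    # Sketch: serialization from a wide internal datapath to fast device pins.
    def per_pin_rate_gbps(internal_width_bits: int, internal_rate_gbps: float,
                          external_pins: int) -> float:
        """Per-pin rate needed for the pins to carry the full internal bandwidth."""
        internal_bandwidth = internal_width_bits * internal_rate_gbps  # total Gb/s
        return internal_bandwidth / external_pins

    if __name__ == "__main__":
        width, core_rate, pins = 512, 0.2, 16      # assumed, illustrative values
        rate = per_pin_rate_gbps(width, core_rate, pins)
        print(f"{width} wires at {core_rate} Gb/s -> {pins} pins at {rate:.1f} Gb/s "
              f"(serialization ratio {width // pins}:1)")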

Memory Hierarchy

Given a memory of the type we describe, we must also examine the entire memory hierarchy. For example, it may be advantageous to add a level of memory to the hierarchy.

Analyzing different memory hierarchies is a huge challenge. All the metrics mentioned previously need to be evaluated in the context of the applications of interest (see “Memory Key Metrics and Fundamentals”). When considering additional levels of the memory hierarchy, the key decisions are where to add a level or levels and how those levels of memory are managed.

Memory Hierarchy — Where to Add Memory

Earlier, we concluded that to meet the needs of terascale systems, designers should investigate new architectures and manufacturing techniques for DRAM, with an emphasis on 3-D stacking with TSVs. We are confident that these techniques will lead to improved DRAM products, while maintaining a low cost per bit stored. We also realize that when the new technologies are introduced, it will take time for the price per bit to drop. Therefore, early use of 3-D stacked memory as near memory, backed up by DDR-based DRAM or other low cost per bit memory technologies, may be an appealing and cost-effective choice for designers.
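One simple way to reason about such a two-level arrangement is a textbook average-access-time model. The sketch below assumes hypothetical hit rates and latencies for the near (3-D stacked) and far (DDR-based) levels, purely for illustration; none of these numbers come from the article.

    # Sketch: average read latency for a two-level (near/far) memory hierarchy.
    def average_access_time_ns(near_hit_rate: float, near_latency_ns: float,
                               far_latency_ns: float) -> float:
        return near_hit_rate * near_latency_ns + (1.0 - near_hit_rate) * far_latency_ns

    if __name__ == "__main__":
        for hit_rate in (0.5, 0.8, 0.95):          # assumed near-memory hit rates
            t = average_access_time_ns(hit_rate, near_latency_ns=25.0,
                                       far_latency_ns=80.0)
            print(f"near-memory hit rate {hit_rate:.0%}: average read latency ~{t:.0f} ns")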

The policies governing what data (or instructions) are placed where, as well as what is copied and shared, are the key research issues facing system designers. The simple statement that data movement must be minimized will take on additional importance as terascale CPUs are built.

Summary and Conclusions

The demand for bandwidth continues to increase. Terascale CPUs will exacerbate the challenges of memory subsystem design, including the architecture and design of memory controllers, memory modules, and the memory devices themselves. DDR-based memory and interfaces will continue to be used for the market segments where they suffice, but the shift to something new will begin in the next few years.

To learn more, read the Intel Technology Journal, Volume 13, Issue 4, December 2009, Addressing the Challenges of Tera-scale Computing, ISBN 978-1-934053-23-2
