Terascale Memory Challenges and Solutions

By Dave Dunning, Randy Mooney, Pat Stolt, Bryan Casper and James E. Jaussi

December 6, 2010

Introduction

Modern computer architectures commonly include one or more CPUs, a cache or caches, a few DDR-based memory channels, rotational and/or solid state disks and one or more Ethernet ports.
 
Figure 1: System block diagram

A high percentage of CPU-based systems use DDR-based DRAM for external memory. DDR-based DRAM currently offers a very favorable cost per bit while providing enough bandwidth, at low enough latency, to meet application demands. Although process engineers have continued to find ways to cost-effectively scale feature size, the power consumed by CPUs has become prohibitive.

In contrast to the previous decade, CPU clock rates are scaling more slowly over time due to power constraints. However, the number of transistors per unit of silicon area continues to increase roughly at the rate of Moore’s Law. Therefore, CPUs are being designed and built with an increasing number of cores, with each core executing one or more threads of instructions.

This puts a new kind of pressure on the memory subsystem. Though the demand for instructions and data per thread is not increasing very quickly, the rapid growth in the number of available threads puts an increasing emphasis on memory bandwidth. This article summarizes the challenges that arise for the memory subsystem associated with these terascale CPUs.

Memory Key Metrics and Fundamentals

The key metrics for examining the memory sub-systems are bandwidth, capacity, latency, power, system volume, and cost.

Bandwidth (Bytes/second, B/s or bits/second, b/s). Bandwidth is the number of bytes transferred in a given amount of time. Bandwidth is usually the most talked-about performance metric, and the bandwidth required for a system usually depends on the market segment and the application (working set size, code arrangement, and structure). Interestingly, bandwidth alone is not a very useful metric for system design decisions; other factors, such as cost, power, and form factor (size/space) constraints, must be considered in conjunction with it.

Capacity (Bytes or B). Capacity is the total number of bytes that can be stored in the region of memory.

Latency (seconds, sec or simply s). This is the time it takes to read a word from the region of memory. The focus is usually on read latency; write latency is usually of less interest, because the time required to write to memory is rarely a factor in application performance.

Power (Watts or W). Power equals the energy consumed divided by the time in which that energy is consumed.

System volume, Form Factor. This is the physical volume required to integrate a given technology into a system. It is usually driven by the physical size of components and/or cooling requirements.

Cost ($). Cost usually refers to the money required to use components in a system.

These metrics are often combined. Frequently used combined metrics include bandwidth per cost and power per bandwidth (Watts per b/s, which is equivalent to Joules per bit).
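As a concrete illustration of these combined metrics, the short Python sketch below derives energy per bit and bandwidth per dollar from the basic quantities above; the numeric values are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical illustration of the combined metrics described above.

def energy_per_bit_joules(power_watts, bandwidth_bps):
    # Watts divided by bits/second gives Joules per bit.
    return power_watts / bandwidth_bps

def bandwidth_per_dollar(bandwidth_bps, cost_dollars):
    # Bits per second delivered for each dollar spent.
    return bandwidth_bps / cost_dollars

# Assumed example: a memory channel moving 100 Gb/s while drawing 5 W and costing $40.
bw_bps = 100e9
power_w = 5.0
cost = 40.0

print(energy_per_bit_joules(power_w, bw_bps) * 1e12, "pJ/bit")      # ~50 pJ/bit
print(bandwidth_per_dollar(bw_bps, cost) / 1e9, "Gb/s per dollar")  # 2.5 Gb/s per dollar
```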

Memory Scaling

Double data rate (DDR) memory has become the dominant memory technology (in terms of number of units sold). DDR-based DRAM products are optimized for high capacity and low cost, not high bandwidth, low latency or low power.

As CPUs continue to increase in capability toward the terascale level, many of the key metrics are not scaling well and are becoming system design challenges. The metrics stressed most are bandwidth, power, and latency. As potential solutions are investigated, the remaining metrics, capacity and form factor, become challenging as well.

The expression “hitting the memory wall” is often used. Commonly the “memory wall” has the connotation that DDR cannot supply enough bandwidth for CPUs. A more accurate statement is that, given the DDR interface and channel specifications, the bandwidth per pin cannot scale up as quickly as the compute capabilities of CPUs. Simply adding more pins in parallel is not an appealing option for system cost reasons. The problem becomes acute when CPUs reach the terascale performance level. More precisely, the rate at which bits can be moved between CPUs and DDR devices is limited by frequency-dependent loss, impedance discontinuities, the power available, and the cost to implement. It will be extremely challenging to push and pull data at rates that exceed 2.4 – 3.2 Gb/s per data signal across DDR channels.
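To make the pin-count tradeoff concrete, the sketch below is a back-of-the-envelope calculation (my own arithmetic, not from the article) counting how many data signals a 1 TB/s target would require at the per-pin rate quoted above.

```python
# Data pins needed for a terascale bandwidth target at the ~2.4-3.2 Gb/s per-pin
# ceiling quoted above (counts data signals only; clocks, strobes, and
# command/address pins would come on top).

def data_pins_needed(target_bytes_per_s, per_pin_gbps):
    target_bits_per_s = target_bytes_per_s * 8
    return target_bits_per_s / (per_pin_gbps * 1e9)

print(data_pins_needed(1e12, 3.2))  # 2500.0 data pins for 1 TB/s at 3.2 Gb/s per pin
```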

The need to reduce latency, and the value of doing so, are very difficult to assess. Most systems today place a higher value on bandwidth and use forms of pipelining, such as prefetching, to hide latency. As CPUs approach the terascale range via many threads running in parallel, pipeline-based methods for hiding memory latency will become less effective. To keep cost and power low, more emphasis will be placed on reducing the latency of the first level of the memory hierarchy that is external to the CPU chip.

Increasing bandwidth by adding data pins, as well as reducing the read latency of DDR devices, could be done while maintaining the existing architectures of both the DRAM and the interface. However, addressing bandwidth and latency alone is not enough, since one of the greatest challenges to achieving terascale bandwidths is maintaining low power consumption.

DRAM device power is composed of three main components: power consumed by the storage array, power consumed by the datapath and power consumed by the I/O pins. Roughly 50 percent of the power consumed is in the datapath, with the other 50 percent split between I/O circuits and the array. All three areas need to be addressed to create DRAM products suitable for terascale systems.
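For illustration only, the following sketch expresses that split as a simple budget; it assumes, as a placeholder, that the non-datapath half is divided evenly between the array and the I/O circuits, which the article does not specify.

```python
# Illustrative DRAM device power budget using the rough split described above:
# ~50% datapath, with the remaining 50% split between the array and the I/O circuits
# (the even 25/25 division is an assumption for this example).

def dram_power_budget(total_watts, datapath_frac=0.50, array_frac=0.25, io_frac=0.25):
    assert abs(datapath_frac + array_frac + io_frac - 1.0) < 1e-9
    return {
        "datapath_W": total_watts * datapath_frac,
        "array_W": total_watts * array_frac,
        "io_W": total_watts * io_frac,
    }

# Assumed example: a device dissipating 2 W in total.
print(dram_power_budget(2.0))  # {'datapath_W': 1.0, 'array_W': 0.5, 'io_W': 0.5}
```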

Evolutionary DRAM Summary

In summary, the key trends for evolutionary memory sub-system scaling are:

• Bandwidth scaling for traditional DDRx-based systems will end at about 2.4 – 3.2 Gb/s per pin (bump).
• To achieve the bit rates above, each channel will likely be limited to one DIMM without extra components, such as a buffer on board (motherboard).
• GDDRx gives increased bandwidth, but at the cost of capacity. Pin bandwidth will be limited to 5-6 Gb/s for the GDDR channels being constructed today.
• Power in the memory sub-system varies from 40-200 mW per Gb/s, translating to hundreds of Watts for a TB/s of bandwidth (see the sketch after this list).
• Adding capacity to evolutionary memory sub-systems is limited to adding channels, buffer on board, or other forms of buffered DIMMs.
• Latency improvements for evolutionary systems will be minimal.
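The sketch referenced in the power bullet above simply converts the quoted 40-200 mW per Gb/s range into total memory sub-system power for a 1 TB/s target; it is a back-of-the-envelope illustration, not a measurement.

```python
# Convert the 40-200 mW per Gb/s range quoted above into total power for 1 TB/s.

def memory_power_watts(bandwidth_bytes_per_s, mw_per_gbps):
    gbps = bandwidth_bytes_per_s * 8 / 1e9   # bytes/s -> Gb/s
    return gbps * mw_per_gbps / 1000.0       # mW -> W

print(memory_power_watts(1e12, 40))   # 320.0 W at the efficient end of the range
print(memory_power_watts(1e12, 200))  # 1600.0 W at the inefficient end
```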

Terascale Memory Challenges and Future Memory Technologies

In the following sections, we describe some of the challenges facing memory architects and designers, along with potential solutions.

Memory Technology

The first question we need to ask is which memory technology, or technologies, will fill the needs of these systems. DRAM technology has long dominated the market for off-chip memory bandwidth solutions in computing systems. While non-volatile memory technologies such as NAND Flash and Phase Change Memory are vying for a share of this market, they are at a disadvantage with respect to bandwidth, latency, and power.

A holistic approach is needed to achieve the required results. The main factors that must be addressed to increase the bandwidth and lower the energy per bit of future terascale memory sub-systems are the channel materials, the I/O density, the memory density, and the memory device architecture. We examine the changes required in each of these areas.

Channel Materials

First we look at the materials that could be used to construct channels between CPUs and memory modules.

Figure 2: Data Rate versus Trace Length for different materials

Adding complexity to the I/O circuits, in the form of additional equalization, more complex clocking circuits, and possibly data coding, can increase the data rate, but it also increases the energy per bit moved. More complex interconnects (such as flex cabling), improved board materials (such as Rogers laminates or high-density interconnect, HDI), and, eventually, optical solutions must be considered. The emphasis on higher bandwidth per pin, higher I/O density, and lower energy per bit read/written will lead to selective use of new channel materials.

Memory Density

A DRAM technology that supports high bandwidth per pin, high capacity, and low energy per bit moved will be required. A promising solution to these issues is 3-D technology based on through-silicon vias (TSVs). 3-D stacked memory will provide an increase in memory density through stacking, and it will enable a wide datapath from the memory to the external pins, relaxing the per-pin bandwidth requirement on the memory array, as shown in Figure 3.

Figure 3: 3-D Stacked Memory Module

This design achieves six objectives:

  1. A method for further scaling of DRAM density.
  2. A relatively wide datapath from the memory array to the memory pins, relaxing the speed constraints on the DRAM technology.
  3. A high-density connection from the memory module to the memory controller, which makes for more efficient use of power.
  4. The elimination of many of the traditional interconnect components from the electrical path.
  5. Separation of the high-bandwidth I/O solution from the microprocessor and memory controller power delivery path, when the top of the package is used for high-speed I/O.
  6. Enough density to eliminate the electrically challenged, energy-inefficient multi-drop DIMM bus.

A key new challenge is introduced: we need a way to move the data from the wide memory-array datapath to the memory device pins. An optimal solution must multiplex the data efficiently, at a rate that matches the data rate of the faster device pins (Gb/s) rather than the slower, wider memory datapath, and at an energy level (low pJ per bit) that closely matches the characteristics of the CPU generating the memory requests. The architecture, design, and implementation of this data collection function will depend on the usage of the 3-D memory module, ranging from specialized DRAM chips to a mix of logic-process chips and DRAM-process chips.
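As a rough illustration of the multiplexing problem described above, the sketch below checks that the slow, wide internal datapath and the fast, narrow pin interface carry the same aggregate bandwidth and reports the resulting mux ratio; the widths and rates are assumed, hypothetical values, not figures from the article.

```python
# Hypothetical example of matching a wide, slow internal DRAM datapath to a
# narrow, fast set of device pins; the widths and rates are illustrative only.

def mux_ratio(internal_width_bits, internal_rate_gbps, pin_count, pin_rate_gbps):
    internal_bw = internal_width_bits * internal_rate_gbps  # aggregate Gb/s inside the device
    external_bw = pin_count * pin_rate_gbps                 # aggregate Gb/s at the pins
    assert abs(internal_bw - external_bw) < 1e-9, "datapath and pin bandwidth must balance"
    return internal_width_bits // pin_count

# Assumed example: a 1024-bit datapath at 0.25 Gb/s per wire feeding 16 pins at 16 Gb/s each.
print(mux_ratio(1024, 0.25, 16, 16.0))  # 64 -> a 64:1 multiplexing stage
```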

Memory Hierarchy

Given a memory of the type we describe, we must also examine the entire memory hierarchy. For example, it may be advantageous to add a level of memory to the hierarchy.

Analyzing different memory hierarchies is a huge challenge. All the metrics mentioned previously need to be evaluated in the context of the applications of interest (see “Memory Key Metrics and Fundamentals”). When considering additional levels of the hierarchy, the key decisions are where to add a level or levels and how the levels of memory are managed.

Memory Hierarchy — Where to Add Memory

Earlier, we concluded that to meet the needs of terascale systems, designers should investigate new architectures and manufacturing techniques for DRAM, with an emphasis on 3-D stacking with TSVs. We are confident that these techniques will lead to improved DRAM products, while maintaining a low cost per bit stored. We also realize that when the new technologies are introduced, it will take time for the price per bit to drop. Therefore, early use of 3-D stacked memory as near memory, backed up by DDR-based DRAM or other low cost per bit memory technologies, may be an appealing and cost-effective choice for designers.
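A simple way to reason about a near/far arrangement of this kind is an average-access-time model; the latencies and hit rate in the sketch below are hypothetical placeholders, not measured values.

```python
# Hypothetical two-level model: requests that hit in 3-D stacked near memory are
# served quickly, misses fall through to DDR-based far memory.

def average_access_time_ns(near_hit_rate, near_latency_ns, far_latency_ns):
    return near_hit_rate * near_latency_ns + (1.0 - near_hit_rate) * far_latency_ns

# Assumed example: 30 ns near memory, 80 ns far memory, 90% of requests served near.
print(average_access_time_ns(0.90, 30.0, 80.0))  # 35.0 ns average
```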

The policies governing what data (or instructions) are placed where, as well as what is copied and shared, are the key research issues facing system designers. The simple statement that data movement must be minimized will take on additional importance as terascale CPUs are built.

Summary and Conclusions

The demand for bandwidth continues to increase. Terascale CPUs will exacerbate the challenges of memory subsystem design, including the architecture and design of memory controllers, memory modules, and the memory devices themselves. DDR-based memory and interfaces will continue to be used in the market segments where they suffice, but the shift to something new will begin in the next few years.

To learn more, read the Intel Technology Journal, Volume 13, Issue 4, December 2009, Addressing the Challenges of Tera-scale Computing, ISBN 978-1-934053-23-2
