Micron Technology’s Hybrid Memory Cube (HMC) got a big boost this week when IBM announced it will supply some critical support for the technology. HMC is a 3D integrated memory chip that Micron is touting as a revolutionary device designed to make a direct assault on the memory wall.
The memory wall has come about because DRAM I/O has not kept up with multicore processors. Although DRAM capacities are keeping pace with Moore’s Law, the performance of the data channel between the memory and the processor has barely budged. Since every new core on a processor adds another hungry mouth to feed, and since core counts are doubling every couple of years, the data channel has become a worsening bottleneck. Micron’s solution was to move memory into the third dimension, allowing for the creation of more and wider I/O channels.
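To put rough numbers on that squeeze, here is a purely illustrative calculation; the core counts and per-channel bandwidths below are assumptions for the sketch, not vendor figures. As cores double each generation while peak channel bandwidth only creeps up, the bandwidth available to each core keeps shrinking.

```python
# Purely illustrative memory-wall arithmetic: assumed core counts and assumed
# peak DRAM channel bandwidths (GB/s); these are not vendor figures.
generations = [
    ("gen 1",  2, 10.7),
    ("gen 2",  4, 12.8),
    ("gen 3",  8, 17.0),
    ("gen 4", 16, 21.3),
]

for name, cores, channel_bw in generations:
    # Bandwidth per core shrinks even though the channel itself gets faster.
    print(f"{name}: {cores:2d} cores, {channel_bw:4.1f} GB/s -> {channel_bw / cores:4.2f} GB/s per core")
```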
In a nutshell, the Hybrid Memory Cube is a 3D stack of memory chips glued together with through-silicon vias (TSVs). The TSVs provide the electrical interconnect for the DRAM chips. A logic controller is integrated at the base of the Cube.
Although Micron invented the Cube, the company has also brought in Samsung as part of its Hybrid Memory Cube Consortium. Intel, Altera, Open Silicon, and Xilinx are also on board, although it’s not clear if they are officially part of the Consortium or just technology partners. Intel seems particularly enthusiastic. CTO Justin Rattner demonstrated a prototype HMC at the fall Intel Developer Forum in September, noting that it was the world’s highest bandwidth DRAM device ever built.
Cheerleading aside, the Consortium’s main purpose is to define an interface for the technology with enough industry backing to spur adoption by system vendors and board makers. Initially targeted at high performance computing, networking, and other memory-bandwidth hungry applications, the technology is expected by Micron to make its way down into consumer devices. HMC can be coupled with CPUs, GPUs, FPGAs, or ASICs.
According to the announcement this week, IBM will be manufacturing the HMC controller and will use its 3D chipmaking technology to produce the Cubes. The company intends to manufacture the HMC parts using its 32nm process technology at its fab in East Fishkill, NY, with first shipments scheduled for the second half of 2012.
For high performance computing, networking, and other applications where the memory wall is already a bottleneck, the potential impact could be enormous. The HMC technology is advertised to deliver more than 15 times the performance of DDR3 memory. Using the current HMC design, transfer speeds of up to 128 GB/second (1 terabit per second) have been achieved. And because of the 3D configuration, Micron says it takes up 80 percent less space than traditional RDIMMs.
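As a quick sanity check on the units, the quoted 128 GB/second does work out to roughly a terabit per second; this is simple arithmetic on the figure above, nothing vendor-specific beyond it.

```python
# Unit check on the quoted transfer rate: 128 GB/s expressed in terabits/s.
gb_per_sec = 128
tbit_per_sec = gb_per_sec * 8 / 1000   # 8 bits per byte, 1000 Gb per Tb
print(f"{gb_per_sec} GB/s = {tbit_per_sec:.3f} Tb/s")   # ~1.02 Tb/s, i.e. about 1 terabit/s
```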
Significantly, Micron notes the Cube uses 70 percent less energy per bit than conventional DDR3 modules. A single HMC built with current memory parts would draw about 10 watts, compared to the 82 watts needed to get equivalent performance from 15 DDR3-1333 DIMMs.
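A back-of-the-envelope estimate using only the figures quoted above, and assuming the full 128 GB/second rate, gives a feel for what those wattages mean per bit moved. Micron’s 70 percent figure is presumably measured against a different baseline, so treat this strictly as a rough sketch.

```python
# Rough energy-per-bit estimate from the quoted figures, assuming the full
# 128 GB/s rate. Micron's own 70% per-bit claim may use a different baseline.
bits_per_sec = 128e9 * 8                     # 128 GB/s in bits per second

hmc_pj_per_bit = 10 / bits_per_sec * 1e12    # 10 W HMC -> ~9.8 pJ/bit
print(f"HMC: ~{hmc_pj_per_bit:.1f} pJ/bit at peak rate")

# Power ratio of the two configurations quoted for equivalent performance.
print(f"HMC power vs 15 DDR3-1333 DIMMs: {10 / 82:.0%} of the DDR3 setup")
```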
The speedup and better energy efficiency are achieved principally through parallelism. Because the memory chips are stacked, there is more room for I/O connections through the TSVs, so each DRAM can be accessed with more (and/or wider) channels. The end result is that the controller can access many more banks of memory concurrently than is possible with a two-dimensional DIMM. And because the controller and DRAM chips are in close proximity, latencies can be extremely low.
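A toy model makes the scaling argument concrete. The channel counts, widths, and per-pin rates below are assumptions chosen for illustration, not the actual internal organization of the HMC; the point is simply that aggregate bandwidth grows with the number of independent channels the controller can drive in parallel.

```python
# Illustrative-only model: aggregate bandwidth = channels x width x per-pin rate.
# The specific values are assumptions for the sketch, not Micron's HMC design.
def aggregate_gb_per_sec(channels, width_bits, gbit_per_sec_per_pin):
    return channels * width_bits * gbit_per_sec_per_pin / 8

dimm  = aggregate_gb_per_sec(channels=1,  width_bits=64, gbit_per_sec_per_pin=1.333)  # DDR3-1333-like
stack = aggregate_gb_per_sec(channels=16, width_bits=32, gbit_per_sec_per_pin=2.0)    # hypothetical 3D stack

print(f"Single DIMM-style channel: {dimm:6.1f} GB/s")
print(f"16 stacked TSV channels  : {stack:6.1f} GB/s")
```

With these made-up numbers, sixteen narrow stacked channels land in the same neighborhood as the 128 GB/second figure quoted earlier, while a single conventional channel sits around 10 GB/second.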
Prices for the HMC modules have not been discussed. But given that the initial target market is high-end systems, one could expect to pay a premium for these parts, at least from a memory capacity (bytes per dollar) perspective. Where performance, space, and energy consumption are the primary considerations, though, the HMCs could provide a much better TCO than traditional DDR technology.
Certainly for the supercomputing community, which is looking to achieve exascale computing within a strict (20 MW) power budget before the end of the decade, the Cube could become the go-to memory technology. For more generic HPC, pricing could be the issue, inasmuch as getting enough memory capacity at scale is already price-limited for many customers. In those cases, the Cubes might be used more sparingly, as a low-capacity, high-performance memory tier.
Although IBM says the first HMC chips are expected in the second half of 2012, that doesn’t mean the parts will be shipping in volume at that time. The interface spec for the Cube isn’t available yet and isn’t expected to be ready until sometime next year. Given that, it’s more likely the first Cubes will start appearing in high-end servers, networking equipment, and compute appliances sometime in 2013.