Micron Steers Roadmap Around Memory Scaling Obstacles

By Tiffany Trader

August 27, 2015

In a packed session at IDF 2015 in San Francisco last week, Scott Graham, Micron's general manager of Hybrid Memory, discussed some of the key themes shaping the memory landscape from Micron's perspective.

“It’s an exciting time in the industry and there’s a lot going on with memory development in system architecture and software architecture and how they combine together to provide system solutions in the server, mobile computing and embedded and networking environments,” he offered as prelude.

Noting that Micron has a portfolio that spans platforms and sectors, Graham asked the primarily developer audience to consider how they can use these new and existing memory technologies to build platforms that solve complex challenges across the industry.

As the focus in computing moves from the compute bottleneck to the data bottleneck with the slowdown of Moore's law and the proliferation of data, memory and storage technologies are more important than ever. And while HPC certainly has some unique challenges and specific requirements, many concerns related to price, performance and system balance are shared across the larger computing market.

Memory is more diversified than ever and Micron has several technologies and products that are optimized for power and performance and target HPC, including Hybrid Memory Cube, solid state drives, NVDIMMs, 3D NAND, and most recently 3D XPoint, which it developed with partner Intel. The non-volatile memory process technology, unveiled last month, is being heralded by its backers as the first new memory category since the introduction of NAND flash in 1989.

3D XPoint, said Graham, previewing content to come later in his presentation, delivers 1000X the performance of regular multi-level cell (MLC) NAND and 10X the density of conventional volatile memory such as DRAM.

The Update

Graham went on to deliver a technology update for the four key technologies that undergird Micron’s portfolio: DRAM, NAND, package technology (aka Hybrid Memory Cube), and new memory technology (aka 3D XPoint).

In terms of DRAM, Graham said the product line continues to come along nicely, with strong progress on 20nm yield. Micron has 1Xnm development underway in Asia and 1Y/1Znm in the US.

For NAND, 16nm TLC NAND is also ramping up, but Micron will be focusing its efforts more on 3D NAND. First-generation 3D NAND is on track for production now, and Micron will move to the second generation next year.

Micron notes that its 3D packaging technology, productized in the HMC line, continues to mature. The company is currently manufacturing HMC generation 2 and will launch HMC generation 3 over the next year to enable even higher density and bandwidth. Graham noted that on the networking side, HMC is being used in data packet processing and in data packet buffering and storage applications. In the high performance computing space, HMC is used for very high-speed, high-bandwidth transactions.

“To be frank, we cannot achieve the applications and system needs without developing a really good packaging technology,” said Graham. “We’re not going to achieve these bandwidth capabilities. We’re not going to achieve the reliability needs. We’re not going to overcome some of the scaling challenges without developing some of these new technology methods. If you look at Hybrid Memory Cube, that’s been the lead vehicle for Micron in order to develop these package technologies for future emerging memories.”

Graham went on to review the benefits of Micron's in-package memory, stating that it achieves bandwidth, efficiency and form factor all in one package. "If we have the ability to take DRAM and stack it on top of a logic layer and SoC and be able to control that DRAM with that SoC, it allows us to overcome scaling challenges. Being able to combine these technologies gives us unprecedented memory bandwidth that keeps pace with multiple CPU cores, and DRAM alone is not going to do that. This all allows for increased savings in energy/bit, density in a small form factor, higher performance and lower energy, and compelling RAS features," Graham continued.
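To make the bandwidth argument concrete, here is a back-of-envelope sketch of why a multi-link, stacked package can outrun a conventional DRAM bus. The link count and lane rates below are illustrative assumptions for this article, not Micron's published HMC specifications.

```python
# Back-of-envelope bandwidth for a stacked, serial-link-attached memory
# package in the HMC mold. Link/lane parameters are illustrative
# assumptions, not Micron's published HMC specifications.

def aggregate_bandwidth_gbs(links, lanes_per_link, gbps_per_lane):
    """Aggregate bandwidth in GB/s per direction (8 bits per byte)."""
    return links * lanes_per_link * gbps_per_lane / 8

# Example: 4 serial links, 16 lanes each, 10 Gb/s per lane.
print(aggregate_bandwidth_gbs(4, 16, 10.0))   # -> 80.0 GB/s per direction

# A single 64-bit DDR3-1600 channel, for comparison: 1600 MT/s * 8 bytes.
print(1600e6 * 8 / 1e9)                        # -> 12.8 GB/s
```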

Challenges to the Longevity of DRAM

Graham also spoke about the impact of DRAM process complexity, noting that as the industry scales from 50nm to 30nm and then to 20nm, complexity drives a significant uptick in the number of mask levels, over 35 percent. The number of non-litho steps per critical mask level is up a staggering 110 percent going from 30nm to 20nm, and clean room space per wafer output is up over 80 percent. Since acquiring Elpida in 2013, Micron says it is ahead of its original plan on 20nm yield. Keeping cost per bit down is a key goal, and Micron believes it can do so by enabling the scaling path to sub-15nm DRAM. Specifically, Graham noted that 1Xnm is driving over a 30 percent improvement in cost per Gb over 20nm.
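Taking the quoted figure at face value, the cost math is straightforward. A minimal sketch, normalizing the 20nm cost to 1.0 (the values are relative, not actual prices):

```python
# Relative DRAM cost per Gb across nodes, using the "over 30 percent"
# improvement Graham quoted for 1Xnm versus 20nm. Normalized values,
# not actual prices.

cost_20nm = 1.00                      # normalize 20nm cost/Gb to 1.0
improvement_1x = 0.30                 # "over 30 percent" per Graham
cost_1xnm = cost_20nm * (1 - improvement_1x)

print(f"1Xnm relative cost/Gb: {cost_1xnm:.2f}")   # -> 0.70
```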

DRAM is still the primary memory inside nearly every computer, from mobile phones to datacenter servers to supercomputers. But with scaling challenges, improvements have already started slowing. There are also power concerns, with DRAM main memory systems accounting for about 30-50 percent of a node's overall power consumption. These points are all highlighted in a recent journal article by Jeffrey S. Vetter and Sparsh Mittal of Oak Ridge National Laboratory, who set out to examine what the future might hold for non-volatile memory systems in extreme-scale high performance computing systems.

“For DRAM, there are possible improvements from redesigning and optimizing DRAM protocols, moving DRAM closer to processors, and improved manufacturing processes,” they write. “In fact, this integration of memory onto the package in future systems may provide for performance and power benefits of about one order of magnitude [5]. Second, emerging memory technologies with different characteristics could replace or complement DRAM [13, 15, 19, 24].”

In another part of the paper, Vetter and Mittal write: “Moreover, as the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, a number of emerging memory technologies – nonvolatile memory (NVM) devices – are being investigated as an alternative for DRAM. Moving forward, these NVM devices may offer a number of solutions for HPC architectures. First, as the name, NVM, implies, these devices retain state without continuous power, which can, in turn, reduce power costs. Second, certain NVM devices can be as dense as DRAM, facilitating more memory capacity in the same physical volume. Finally, NVM, such as contemporary NAND flash memory, can be less expensive than DRAM in terms of cost per bit. Taken together, these benefits can provide opportunities for revolutionizing the design of extreme-scale HPC systems.” The full paper fleshes out each of these potential technology trends.
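To put the power figure in perspective, a quick back-of-envelope using the 30-50 percent DRAM share cited above; the 400 W node budget is a made-up example, not a figure from the paper:

```python
# Node power attributable to DRAM at the 30-50 percent share cited by
# Vetter and Mittal. The 400 W node budget is a hypothetical example.

node_power_w = 400
for dram_share in (0.30, 0.50):
    dram_w = node_power_w * dram_share
    print(f"DRAM at {dram_share:.0%} of a {node_power_w} W node: {dram_w:.0f} W")
```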

Micron's general manager of Hybrid Memory echoes many of the same concerns in discussing Micron's outlook on the future of memory. "As we look at the future, in order to overcome the scaling challenges, specifically related to DRAM, we need to either find a better DRAM or some type of DRAM replacement," says Graham. "So we continue to have a strong strategic investment in our roadmap enablement for storage-class memory as well as some type of DRAM or NAND replacement, as well as multiple generations of 3D NAND. The core technologies that we're looking at today, and will continue to invest research dollars in, are resistive RAM and STT-RAM. STT-RAM, spin-torque magnetic random-access memory, we think has a really promising opportunity to perhaps replace DRAM: it's DRAM-like but with non-volatile capability. As we continue to explore other opportunities, we will update the community."

Micron’s condensed roadmap of technologies is shown below:

[Slide: Micron's condensed technology roadmap, IDF 2015]

Emerging Memory and 3D XPoint

When it comes to Micron's emerging memory line, not surprisingly the focus is on 3D XPoint, with generation one sampling this year (although first deliveries are not promised until 2016) and a subsequent generation coming the following year. On the roadmap, New Memory B Gen 1 is positioned just a little farther out. At first, all Graham would say was that "we are working on it now and it will be disclosed at a later date," but he later confirmed that Micron's first-generation offering would be cost-optimized, while the emerging "new memory B" technology would be focused on performance and on addressing some of the bigger industry challenges.

“As we develop new memory technologies and learn from XPoint and develop XPoint even further, then we will have subsequent versions of this technology and other technologies that can fit into this roadmap,” said Graham, declining to provide further details.

This slide gives an idea of where these new memories fall in terms of performance versus cost relative to DRAM and NAND.

[Slide: Micron new memory performance versus cost, IDF 2015]

Nonvolatile memory latency is the major challenge for emerging memory in Micron's view. As CPU technology continues to scale, memory IO continues to experience significant performance bottlenecks, so emerging memory products need to fill that latency gap. The gap has continued to widen with the progression from DDR2 to DDR3 and DDR4.
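The shape of that gap is easiest to see as an order-of-magnitude latency ladder. The figures below are commonly cited ballpark values, not vendor specifications; the XPoint entry simply applies the roughly 1000x-over-NAND claim from the talk:

```python
# Order-of-magnitude latency ladder illustrating the gap that emerging
# memories target. Ballpark values, not vendor specs; the XPoint figure
# just applies the ~1000x-over-NAND claim from the presentation.

latencies_ns = {
    "DRAM access":     100.0,      # on the order of 100 ns
    "NAND flash read": 100_000.0,  # on the order of 100 microseconds
}
# Applying the claimed ~1000x speedup over NAND lands near DRAM territory,
# which is the "DRAM-like performance" pitch.
latencies_ns["3D XPoint (claimed)"] = latencies_ns["NAND flash read"] / 1000

for tech, ns in sorted(latencies_ns.items(), key=lambda kv: kv[1]):
    print(f"{tech:20s} ~{ns:>10,.0f} ns")
```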

[Slide: 3D XPoint latency positioning, IDF 2015]

Micron and Intel developed 3D XPoint to bridge this gap. As such, 3D XPoint is not intended as a replacement for DRAM or SSDs (at least in Micron's view) but rather targets a niche of applications that includes in-memory databases, metadata storage and application logging, in verticals such as oil and gas exploration, big data analytics, financial transactions and medical research.

Graham refers to 3D XPoint as an emerging storage-class memory technology that offers DRAM-like performance with higher density and lower energy, plus non-volatility at a fraction of DRAM's cost per bit. It is also said to be 1000x faster than NAND, and the performance can be realized on PCIe or DDR buses, but there is concern about the new memory interface being proprietary. For example, Intel's first go-to-market product, Optane, which slots into a DDR4 socket, is electrically compatible but will require a new CPU and new extensions to access 3D XPoint. Micron has yet to reveal its first XPoint-based product, but said it would be announcing its product plans over the next couple of months.

Micron says it has multiple technologies in development showing promise around XPoint, and it recognizes the importance of broad industry support in making an emerging memory technology successful. Further development is still needed around controller technology, which is critical to exploiting the characteristics of each type of memory, as well as software capable of taking advantage of persistent memory semantics.
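What "persistent memory semantics" means for software can be sketched in a few lines: data is updated with ordinary loads and stores into a mapped region, then explicitly flushed to make it durable. In the sketch below, an ordinary memory-mapped file stands in for persistent memory; a real stack would map DAX-backed NVDIMMs and flush CPU cache lines, and the file path here is hypothetical.

```python
# Minimal sketch of the persistent-memory programming model: update via
# loads/stores into a mapped region, then flush explicitly for durability.
# A plain memory-mapped file stands in for real persistent memory here;
# an actual stack would use DAX mappings and cache-line flushes.
import mmap

PATH = "/tmp/pmem_demo.bin"   # hypothetical backing file, not a real NVDIMM
SIZE = 4096

with open(PATH, "wb") as f:   # create the backing file at the desired size
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    mem[0:5] = b"hello"       # store directly into the mapped region
    mem.flush()               # msync: the update is durable from this point
    mem.close()
```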

[Graphic: Micron 3D XPoint memory structure, IDF 2015]

For the record, Micron and Intel still aren't saying exactly what XPoint is made of, except to reiterate that the memory element plus diode are positioned at the intersection of word and bit lines. The "memory grid" 3-D checkerboard structure maximizes cell density and allows memory cells to be addressed individually.
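The individually addressable grid is easy to model: every cell sits at the intersection of one word line and one bit line, so a flat cell index decomposes into a line pair. A conceptual sketch, with an arbitrary array width:

```python
# Conceptual crosspoint addressing: each cell lives at the intersection of
# a word line and a bit line, so a flat index maps to a line pair.
# The array width is arbitrary, chosen only for illustration.

BIT_LINES = 1024  # illustrative array width

def cell_to_lines(cell_index):
    """Map a flat cell index to its (word_line, bit_line) intersection."""
    return divmod(cell_index, BIT_LINES)

word, bit = cell_to_lines(5000)
print(f"cell 5000 -> word line {word}, bit line {bit}")   # -> 4, 904
```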

Micron now looks at memory in three buckets, according to Graham: near, bulk and far memory. This of course mirrors the trend in HPC, where increasing attention is being paid to memory hierarchies.
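One way to read the three-bucket framing is as a simple tier table; the example technologies and qualitative attributes below are this article's reading of the talk, not Micron-supplied figures:

```python
# The near/bulk/far framing as a simple tier table. Example technologies
# and qualitative attributes are interpretive, not Micron-supplied.

memory_tiers = {
    "near": {"example": "in-package stacked DRAM", "latency": "lowest",
             "capacity": "smallest"},
    "bulk": {"example": "DDR DRAM / storage-class memory", "latency": "middle",
             "capacity": "middle"},
    "far":  {"example": "NAND flash / SSD", "latency": "highest",
             "capacity": "largest"},
}

for tier, attrs in memory_tiers.items():
    print(f"{tier:>4}: {attrs['example']:32s} "
          f"latency={attrs['latency']}, capacity={attrs['capacity']}")
```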
