June 23, 2011
With all the focus on more powerful microprocessors, it's easy to forget that speedier chips do no good if memory or storage is your bottleneck. Since processor performance is increasing at a much faster rate than memory bandwidth, there has been a renewed focus on technologies to close the gap between processors and memory, as well as between memory and storage. Advanced DRAM, NAND flash, and other more exotic memories are being developed with just that challenge in mind.
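The widening gap can be made concrete with a back-of-the-envelope calculation of memory bytes deliverable per floating-point operation. The figures below are rough illustrative assumptions, not specifications for any actual system.

```python
# Illustrative sketch of the "memory wall": bytes of memory bandwidth
# available per floating-point operation. The numbers are assumed round
# figures for illustration only, not vendor specs.

def bytes_per_flop(peak_gflops, mem_bw_gbs):
    """Memory bytes deliverable per peak floating-point operation."""
    return mem_bw_gbs / peak_gflops

# Assumed figures: compute grows faster than bandwidth between generations.
older_node = bytes_per_flop(peak_gflops=10, mem_bw_gbs=10)    # ~1 byte/flop
newer_node = bytes_per_flop(peak_gflops=500, mem_bw_gbs=50)   # ~0.1 byte/flop

print(f"older node: {older_node:.2f} bytes/flop")
print(f"newer node: {newer_node:.2f} bytes/flop")
```

Even with these made-up numbers, the trend is the point: each processor generation gets fewer bytes of bandwidth per flop, which is exactly the imbalance new memory architectures aim to correct.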
In the final keynote of the week at the International Supercomputing Conference, Dean Klein addressed this topic in some depth in a session titled "Trends in Memory Systems: Showstopper or Performance Potential for HPC." Klein is the VP of Memory Systems Development at Micron Technology and has been immersed in this area since joining the company in 1999. Prior to the conference, HPCwire asked Klein to preview the topic and give us his take on where memory technology is heading, especially for high performance computing systems.
HPCwire: What are the big drivers today for memory technology?
Dean Klein: Process, economics and architecture. Process technology has always been a driver, and it remains so today; memory processes have always led the advanced lithography charge for the semiconductor industry. Economics have been a significant driver as well: the economics of shrinking have allowed us to pack over 64 gigabits of NAND on a single die today using 20nm processes.
For DRAM, the economics of government subsidies and expensive process R&D have created an industry of booms and busts. This cyclical market has shown recent signs of stabilization, which will allow the industry to tackle the challenge of architecture. Memory has always been on a path of incremental evolution, with minor architectural improvements that unfortunately have not kept pace with processors. Today, architecture is driving memory towards dramatically higher performance with aggressive power savings.
HPCwire: What sort of memory technologies do you think will become more important in the coming years, especially in regard to high performance computing and servers in general?
Klein: My biases here probably show in the previous answer! Memory technologies that allow greater performance, reliability and power savings are clearly going to play big roles in HPC this decade. We have shown our Hybrid Memory Cube (HMC) concept, which is clearly an example of the type of DRAM technology that can revolutionize HPC.
But we can’t forget the other side of memory, either -- the non-volatile side. Non-volatile memory (NVM) will play a major role in HPC and servers. The impact in servers is already being felt in systems employing SSDs as part of the storage hierarchy. But we’re only touching the tip of the iceberg so far!
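The storage hierarchy Klein mentions can be sketched as a tiered lookup in which each request is served from the fastest tier that holds the data. The tier latencies below are assumed orders of magnitude for illustration, not measured figures.

```python
# Minimal sketch of a tiered storage hierarchy (DRAM -> SSD -> HDD).
# Latencies are assumed rough orders of magnitude, for illustration only.

TIERS = [
    ("DRAM", 100e-9),   # ~100 ns
    ("SSD",  100e-6),   # ~100 us
    ("HDD",  10e-3),    # ~10 ms
]

def lookup(key, tier_contents):
    """Return (tier_name, latency) for the fastest tier containing key."""
    for (name, latency), contents in zip(TIERS, tier_contents):
        if key in contents:
            return name, latency
    raise KeyError(key)

# Usage: hot data lives in DRAM, warm data on SSD, cold data on HDD.
contents = [{"hot"}, {"hot", "warm"}, {"hot", "warm", "cold"}]
print(lookup("warm", contents))  # served from the SSD tier
```

Inserting an SSD tier between DRAM and disk shortens the latency cliff for warm data by roughly two orders of magnitude under these assumptions, which is why SSDs slot so naturally into the server storage hierarchy.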
HPCwire: What role will 3D memory technologies play?
Klein: Shrinking in the X and Y dimensions will grow to be a greater challenge throughout this decade. There are multiple places where 3D memory technologies will play big roles. At the process level, 3D is already huge, with the DRAM cells themselves being highly engineered 3D structures; transistors and capacitors both are highly 3D. At the die level, stacking of DRAM die is already occurring to meet density requirements, and with DDR4 this will become even more common. Of course, the Hybrid Memory Cube takes die stacking to a whole new level, utilizing through-silicon vias to build a much more efficient stack.
HPCwire: How will these new technologies affect processor and server architectures?
Klein: This is an area where a lot of innovation is set to occur, and I can only dream of the impact of these technologies. Certainly, the memory hierarchy will likely expand as extreme bandwidth, coupled with dramatic power savings, from technologies like HMC and NVM is adopted into HPC. But this is only the start. There are a lot of factors influencing HPC architecture today that will also play a role. Other processor architectures, such as Power, ARM and GPUs, can integrate new memory technologies in some pretty exciting ways.
HPCwire: What's the next step for SSDs?
Klein: The next major step for SSDs is to leave the legacy storage connections behind. PCIe is the next obvious connection, as highlighted by products from Fusion-IO, Micron, and others. Of course, NAND is today's choice for SSDs, but Micron has demonstrated phase change memory (PCM) in the PCIe environment as well.
HPCwire: How would you rate the potential of the more exotic solid state technologies like phase change memory, spin-torque transfer memories (STTM) or others, compared to conventional NAND?
Klein: NAND is real, and it is very inexpensive. Some companies have shown that NAND can continue to scale if the cells are constructed in a 3D manner. As long as NAND continues to scale, the economics will continue to be in its favor. However, memory technologies like PCM or STTM have other advantages, including an ability to read and write single words. This alone gives them an architectural advantage over NAND in non-storage applications.
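The architectural difference Klein points to -- word-addressable writes versus NAND's rewrite-whole-pages model -- can be sketched with a write-amplification comparison. The page and word sizes below are assumed for illustration; real NAND page and block sizes vary by device.

```python
# Sketch of write cost: a word-addressable NVM (e.g. PCM or STTM) can
# update a single word in place, while NAND must rewrite an entire page
# (after erasing the containing block). Sizes are illustrative assumptions.

PAGE_BYTES = 4096   # assumed NAND page size
WORD_BYTES = 8      # one 64-bit word

def nand_bytes_written(num_word_updates):
    # On NAND, each in-place word update forces a full page rewrite.
    return num_word_updates * PAGE_BYTES

def nvm_bytes_written(num_word_updates):
    # Word-addressable NVM writes only the word itself.
    return num_word_updates * WORD_BYTES

updates = 1000
amplification = nand_bytes_written(updates) / nvm_bytes_written(updates)
print(f"write amplification vs. word-addressable NVM: {amplification:.0f}x")
```

Under these assumed sizes, every small in-place update costs NAND hundreds of times more bytes written, which is why word-addressable memories hold an architectural edge in non-storage roles such as main-memory extension, even while NAND keeps its cost advantage for bulk storage.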
HPCwire: What will be the role of HDDs when SSDs become a ubiquitous element in all computing systems?
Klein: HDDs are NOT going away! Globally we are creating data at a tremendous rate and we will still need rotating media to store much of it. But this rotating media will be focused more on density and less on performance.