The Wednesday keynote at this year’s ISC High Performance conference by HPC veteran Dr. Thomas Sterling promises to be an enlightening and lively presentation of the HPC year in review. And if previous years are a guide, Dr. Sterling will deliver it with the unique humor and style that have become his trademark.
The late Hans Meuer created this concept of a “continuing series” to complement the other focused talks at this conference, where the international HPC community comes together to contemplate the breadth of progress and the latest trends in this rapidly advancing field. Dr. Sterling has served as the medium for this topic for more than a decade now.
Dr. Sterling will also be chairing a session titled Memory Technologies & Systems for HPC, which will take place the day before his keynote presentation. We got in touch with him recently so he could give us some background on this highly topical subject.
ISC: Could you explain why the memory subsystem has become such a bottleneck in applications performance?
The memory system has certainly been a significant bottleneck, which has motivated substantial investment in cache hierarchies and coherency hardware. The separation of processor logic from main memory, in terms of both bandwidth and latency of data access channels, has been a fundamental limitation to program efficiency. In the last decade, this “von Neumann bottleneck” has been aggravated by multi/many-core processors, which have imposed increased demands on the processor/memory interface. These demands have grown exponentially to the present day, with only slow improvements to socket pins and memory channel bandwidths. Worse has been the inclusion of GPU accelerators, which has severely complicated information flow at the memory interface. The use of fast scratch pad memories, NVRAM, and burst buffers, among other innovations, will require further advances in architecture and programming.
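To make the bandwidth limitation concrete, consider a simple streaming kernel, offered here only as an illustrative sketch rather than anything discussed in the interview. Each iteration moves 24 bytes of data for just two floating-point operations, so sustained memory bandwidth, not processor peak, bounds its performance; the bandwidth figure in the comment is an assumed round number.

```c
/* Sketch: a STREAM-style triad kernel illustrating the memory wall.
 * Each iteration reads b[i] and c[i] (16 bytes), writes a[i] (8 bytes),
 * and performs 2 flops, an arithmetic intensity of 2/24 ~ 0.083 flop/byte.
 * On a node sustaining ~100 GB/s of memory bandwidth (an assumed figure),
 * performance is capped near 100 * 0.083 ~ 8 GFLOP/s, far below the
 * floating-point peak of a modern multi-core socket. */
#include <stddef.h>

void triad(double *a, const double *b, const double *c,
           double s, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + s * c[i];
}
```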
Should codes be written differently to help deal with the memory wall problem or should developers leave such efforts up to the compiler?
The memory wall is a fundamental constraint imposed by the architecture both in terms of latency and bandwidth. To the extent that data reuse can be enhanced through reorganization of data access patterns, the effects of this barrier can be mitigated. Depending on the nesting of loops and the striding of data, the use of compilation techniques, perhaps assisted by auto-tuning, may be able to make better use of caches and memory channels. However, the programmer is better informed as to the overall possibilities and should structure the code accordingly.
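As an illustration of the kind of restructuring Dr. Sterling alludes to, here is a minimal sketch of loop blocking (tiling), assuming a square row-major matrix; the tile size is a hypothetical, tunable parameter that would normally be chosen, perhaps by auto-tuning, to match the cache.

```c
/* Sketch: cache blocking (tiling) to improve data reuse.
 * A naive transpose strides through 'in' column-wise on the writes,
 * touching a new cache line on nearly every access. The blocked version
 * works on BLOCK x BLOCK tiles small enough to stay cache-resident, so
 * each loaded line is reused before it is evicted. BLOCK = 32 is an
 * assumed value that would be tuned per platform. */
#include <stddef.h>

#define BLOCK 32

void transpose_blocked(double *out, const double *in, size_t n)
{
    for (size_t ii = 0; ii < n; ii += BLOCK)
        for (size_t jj = 0; jj < n; jj += BLOCK)
            for (size_t i = ii; i < ii + BLOCK && i < n; ++i)
                for (size_t j = jj; j < jj + BLOCK && j < n; ++j)
                    out[j * n + i] = in[i * n + j];
}
```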
Performance portability is jeopardized by variations in cache architecture across distinct platforms. Irregular and time-varying data structures, such as dynamic graphs, also make it difficult for either the compiler or the programmer to manage memory traffic successfully, due to inadequate foreknowledge of the data access demands. In these cases, advanced runtime systems may deliver new optimization strategies using dynamic adaptive coordination.
The growth of “big data” analytics has greatly expanded the demand for in-memory computing. Is in-memory computing a viable alternative to the distributed memory model HPC has lived with for so long?
Big data analytics emphasizes the importance of support for treating the full system memory as a single resource even though it is physically partitioned and distributed. The notion of in-memory computing is a revival of prior art, although across larger scale problems than ever before. It can greatly improve overall system efficiencies and scalability, especially when supported by advanced hardware mechanisms in the communication network control and the memory system. The HPC vendor community is exploring a number of ideas in this area and we can anticipate significant innovations through the rest of this decade.
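One existing software expression of this idea, offered purely as an illustrative sketch and not as anything Dr. Sterling specifically endorses, is one-sided communication such as MPI's RMA windows, which let any rank read another rank's memory as if the machine held a single pool of data. The array size and neighbor choice below are arbitrary.

```c
/* Sketch: treating physically distributed memory as one logical pool
 * using MPI one-sided communication (RMA). Each rank exposes a local
 * array through a window; any rank can MPI_Get data from any other
 * without the target participating explicitly in the transfer. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local[N];                 /* this rank's share of the "global" array */
    for (int i = 0; i < N; ++i)
        local[i] = rank * N + i;

    MPI_Win win;
    MPI_Win_create(local, N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double remote[N];
    int target = (rank + 1) % size;  /* read the neighbor's slice directly */

    MPI_Win_fence(0, win);
    MPI_Get(remote, N, MPI_DOUBLE, target, 0, N, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    printf("rank %d read %g from rank %d\n", rank, remote[0], target);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```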
3D memory is poised to debut in supercomputers very soon. What do you think are the long-term prospects for this technology in HPC?
Stacking of memory dies is crucial to extending the viability of Moore’s Law by significantly increasing memory capacity on the motherboard. Of particular importance is the ability of through-silicon vias to deliver substantial bandwidth to drive the combined memory banks while minimizing latency and latency variability across the memory system.
But 3D packaging will extend beyond pure memory chips to include CMOS logic devices, such as many-core chips and communication networking dies, possibly with optical interconnects. The challenge of such structures is cooling, with the possibility of micro-channel water cooling, or other fluids, through the stack.
Are there other promising memory technologies on the horizon that you think might make a difference for HPC?
There are other emerging memory technologies; perhaps the most significant and immediate are the various forms of NVRAM, which deliver higher density and lower cost than conventional DRAM. These benefit from economies of scale through mass production for a wide array of mobile computing applications, such as digital cameras and phones. How NVRAM may be used in the HPC memory hierarchy is still a subject of exploration; the challenges of disparate read and write times, combined with capability degradation over time, will complicate its ultimate manifestation. But the cost benefits it affords will drive this technology to some form of major integration and use.
Scratch pad memories, either SRAM or high-speed DRAM, will be employed to augment, if not fully replace, automatic caches. It is ironic that caches, which, like virtual memory, were first devised to simplify use of the memory hierarchy, are sometimes an impediment to both performance and productivity. Scratch pad memories permit explicit control of data allocation where usage models are known and can be exploited. This is hardly a new idea; early Cray computers employed similar techniques. What is interesting is to what degree compiler advances can facilitate this technology opportunity.
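The programming pattern behind explicit control is staging, sketched below with a small local buffer standing in for a hardware scratch pad; on a real system the buffer would map to on-chip SRAM or similar fast storage, and the tile size is an assumed parameter. This is an illustrative sketch, not a specific vendor interface.

```c
/* Sketch of explicit, software-managed data staging, the pattern that a
 * scratch pad memory enables. A small buffer ('scratch') stands in for
 * fast storage under program control: data is staged in, operated on
 * while resident, and written back, rather than relying on an automatic
 * cache to infer the access pattern. TILE = 256 is an assumed size. */
#include <stddef.h>

#define TILE 256

void scale_and_accumulate(double *out, const double *in,
                          double alpha, size_t n)
{
    double scratch[TILE];                    /* stand-in for a scratch pad */

    for (size_t base = 0; base < n; base += TILE) {
        size_t len = (n - base < TILE) ? (n - base) : TILE;

        for (size_t i = 0; i < len; ++i)     /* stage in */
            scratch[i] = in[base + i];

        for (size_t i = 0; i < len; ++i)     /* compute while resident */
            scratch[i] *= alpha;

        for (size_t i = 0; i < len; ++i)     /* write back */
            out[base + i] += scratch[i];
    }
}
```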
Mass storage may be improved through the integration of both processor and memory technologies at the disk sites, to process streaming information on the fly (for example, for compression and decompression) and for disk drive caching (for example, of metadata). This is particularly applicable to big data analytics, as previously discussed.
I am betting that the biggest advance in future memory systems is going to be the reincarnation of a two-decades-old concept known as PIM, or processor in memory. It was first explored around 1992 by Peter Kogge of IBM, Ken Lobst of IDA, Jeff Draper of USC ISI, and Bill Dally, then of MIT, each working on significantly different forms. PIM integrates logic and primitive controllers onto the same semiconductor dies as the mainstream memory fabric, dramatically increasing bandwidth and reducing effective latencies, since all the action can be kept on the chip. While special cases, usually related to the SIMD execution model, have been explored through experimental parts, there has never been a successful generalized component with wide applicability and performance advantage. Since this technology also promises better energy efficiency, and given that Moore’s Law is asymptoting (I know: it’s not a word), this may prove to be the era of opportunity for this innovation. There are many issues to be addressed before commercial viability, but exciting work is already being undertaken behind the scenes.
Find out more about Dr. Sterling’s Wednesday keynote here.