ISC 2015 Keynoter Thomas Sterling on Memory in HPC

By Nages Sieslack, Public Relations Manager at ISC Events

April 15, 2015

The Wednesday keynote at this year’s ISC High Performance conference by HPC veteran Dr. Thomas Sterling promises to be an enlightening and lively presentation of the HPC year in review. And if previous years are any guide, Dr. Sterling will deliver it with the humor and style that have become his trademark.

The late Hans Meuer created the concept of this “continuing series” to complement the other focused talks at this conference, where the international HPC community comes together to contemplate the breadth of progress and the latest trends in this rapidly advancing field. Dr. Sterling has served as the medium for this topic for more than a decade now.

Dr. Sterling will also be chairing a session titled Memory Technologies & Systems for HPC, which will take place the day before his keynote presentation. We got in touch with him recently so he could give us some background on this highly topical subject.

ISC: Could you explain why the memory subsystem has become such a bottleneck in applications performance?

Memory has certainly been a significant bottleneck, which has motivated substantial investment in cache hierarchies and coherency hardware. The separation of processor logic from main memory, in terms of both bandwidth and latency of data access channels, has been a fundamental limitation to program efficiency. In the last decade, this “von Neumann bottleneck” has been aggravated by multi/many-core processors, which have imposed increased demands on the processor/memory interface. These demands have grown exponentially to the present day, with only slow improvements to socket pins and memory channel bandwidths. Worse still, the inclusion of GPU accelerators has severely complicated information flow at the memory interface. The use of fast scratch pad memories, NVRAM, and burst buffers, among other innovations, will further demand new architectural and programming advances.
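
To make the bandwidth side of this bottleneck concrete, here is a minimal, hypothetical STREAM-style triad in C (a sketch in the spirit of the well-known STREAM benchmark, not the benchmark itself). The kernel performs one multiply-add for every 24 bytes that cross the memory interface, so its runtime on any modern socket is set almost entirely by the memory channels rather than the arithmetic units; the array size N is an assumed value chosen only to overflow the last-level cache.

```c
/*
 * Illustrative STREAM-style triad: one multiply-add per 24 bytes of
 * memory traffic, so performance is bounded by memory bandwidth, not
 * compute. N is an assumption, sized to overflow the last-level cache
 * so that every access reaches DRAM.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1L << 25)   /* 32M doubles per array, ~256 MB each */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);   /* POSIX timer */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];          /* 2 loads + 1 store per iteration */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    /* printing a[0] also keeps the optimizer from discarding the stores */
    printf("triad: %.2f GB/s effective bandwidth (a[0]=%g)\n",
           3.0 * N * sizeof(double) / sec / 1e9, a[0]);
    free(a); free(b); free(c);
    return 0;
}
```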

Should codes be written differently to help deal with the memory wall problem or should developers leave such efforts up to the compiler?

The memory wall is a fundamental constraint imposed by the architecture, in terms of both latency and bandwidth. To the extent that data reuse can be enhanced through reorganization of data access patterns, the effects of this barrier can be mitigated. Depending on the nesting of loops and the striding of data, compilation techniques, perhaps assisted by auto-tuning, may be able to make better use of caches and memory channels. However, the programmer is better informed as to the overall possibilities and should structure the code accordingly.
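
As a concrete illustration of this kind of reorganization, the sketch below contrasts a naive matrix transpose, whose strided writes defeat the cache, with a blocked (tiled) version that keeps a small working set resident. The row-major layout and the tile edge B are illustrative assumptions; the tile size is exactly the sort of parameter an auto-tuner would search over.

```c
/*
 * Naive vs. blocked matrix transpose, illustrating data-access
 * reorganization for cache reuse. Matrices are row-major; the tile
 * edge B is an assumed value of the kind auto-tuning would pick.
 */
#define B 64

/* Naive: the write out[j*n + i] strides by n doubles, so once n is
 * large each store touches a fresh cache line that is used only once. */
void transpose_naive(int n, double *restrict out, const double *restrict in) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            out[j * n + i] = in[i * n + j];
}

/* Blocked: both the B-by-B input tile and its output tile fit in cache,
 * so each cache line loaded is reused B times before eviction. */
void transpose_blocked(int n, double *restrict out, const double *restrict in) {
    for (int ii = 0; ii < n; ii += B)
        for (int jj = 0; jj < n; jj += B)
            for (int i = ii; i < ii + B && i < n; i++)
                for (int j = jj; j < jj + B && j < n; j++)
                    out[j * n + i] = in[i * n + j];
}
```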

Performance portability is jeopardized by variations in cache architecture across distinct platforms. In addition, irregular and time-varying data structures, such as dynamic graphs, make it difficult for either the compiler or the programmer to successfully manage memory traffic due to inadequate foreknowledge of the data access demands. In these cases, advanced runtime systems may deliver new optimization strategies using dynamic adaptive coordination.

The growth of “big data” analytics has greatly expanded the demand for in-memory computing. Is in-memory computing a viable alternative to the distributed memory model HPC has lived with for so long?

Big data analytics emphasizes the importance of support for treating the full system memory as a single resource even though it is physically partitioned and distributed. The notion of in-memory computing is a revival of prior art, although applied to larger-scale problems than ever before. It can greatly improve overall system efficiency and scalability, especially when supported by advanced hardware mechanisms in the communication network control and the memory system. The HPC vendor community is exploring a number of ideas in this area, and we can anticipate significant innovations through the rest of this decade.
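
One existing mechanism in this spirit, offered only as an illustration and not as a description of any vendor's approach, is MPI's one-sided (RMA) interface: each rank exposes a window of its local DRAM, after which any rank can read remote data directly, treating the machine's physically distributed memory as one logical resource.

```c
/*
 * Minimal sketch of one-sided access to physically distributed memory,
 * using standard MPI RMA. Rank 0 reads a double straight out of rank
 * 1's exposed memory window, without rank 1 posting a receive.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 100.0 * rank;   /* this rank's slice of the "global" memory */
    MPI_Win win;
    MPI_Win_create(&local, sizeof local, sizeof local,
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double remote = 0.0;
    MPI_Win_fence(0, win);                       /* open access epoch */
    if (rank == 0 && nranks > 1)
        MPI_Get(&remote, 1, MPI_DOUBLE, 1,       /* target rank 1 */
                0, 1, MPI_DOUBLE, win);          /* displacement 0 */
    MPI_Win_fence(0, win);                       /* close access epoch */

    if (rank == 0)
        printf("rank 0 read %.1f directly from rank 1's memory\n", remote);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```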

3D memory is poised to debut in supercomputers very soon. What do you think are the long-term prospects for this technology in HPC?

Stacking of memory dies is crucial to extending the viability of Moore’s Law by significantly increasing the memory capacity on the motherboard. Of particular importance is the ability of through-silicon vias to deliver substantial bandwidth to drive the combined memory banks while minimizing the latency and latency variability across the memory system.

But 3D packaging will extend beyond the limitations of pure memory chips to include CMOS logic devices, like many-core chips and communication networking dies, possibly with optical interconnects. The challenge of such structures is cooling, with the possibility of routing micro-channel water or other fluid cooling through the stack.

Are there other promising memory technologies on the horizon that you think might make a difference for HPC?

There are other emerging memory technologies; perhaps the most significant and immediate are the various forms of NVRAM, which deliver higher density and lower cost than conventional DRAM. These benefit from economies of scale through mass production for a wide array of mobile computing applications, such as digital cameras and phones. How NVRAM may be used in the HPC memory hierarchy is still a subject of exploration; the challenges of disparate read and write times, combined with capability degradation over time, will complicate its ultimate manifestation. But the cost benefits it affords will drive this technology to some form of major integration and use.

Scratch pad memories, either SRAM or high-speed DRAM, will be employed to augment, if not fully replace, automatic caches. It is ironic that caches, which were first devised to simplify use of the memory hierarchy, like virtual memory, are sometimes an impediment to both performance and productivity. Scratch pad memories permit explicit control of data allocation where usage models are known and can be exploited. This is hardly a new idea: early Cray computers employed similar techniques. What is interesting is the degree to which compiler advances can facilitate this technology opportunity.
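
The following sketch shows the programming style explicit scratch pads encourage, under the assumption of a hypothetical routine that stages tiles through a small fast buffer. On real hardware the buffer would be dedicated SRAM (a GPU's shared memory or a DSP's local store are close analogues); here an ordinary C array stands in, and the three phases make the programmer-controlled data movement visible.

```c
/*
 * Hypothetical scratch-pad staging pattern: explicitly copy a tile
 * into a small fast buffer, compute on it, and write it back, rather
 * than trusting an automatic cache. The function and tile size are
 * illustrative assumptions, not an existing API.
 */
#define TILE 256

void scale_in_tiles(long n, double *data, double factor) {
    double scratch[TILE];                 /* stand-in for on-chip scratch pad */
    for (long base = 0; base < n; base += TILE) {
        long len = (n - base < TILE) ? n - base : TILE;
        for (long i = 0; i < len; i++)    /* stage: main memory -> scratch */
            scratch[i] = data[base + i];
        for (long i = 0; i < len; i++)    /* compute entirely in fast memory */
            scratch[i] *= factor;
        for (long i = 0; i < len; i++)    /* write back: scratch -> main memory */
            data[base + i] = scratch[i];
    }
}
```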

Mass storage may be improved through integration of both processor and memory technologies at the disk sites, to process streaming information on the fly (for example, for compression and decompression) and for disk drive caching (for example, of metadata). This is particularly applicable to big data analytics, as previously discussed.

I am betting that the biggest advance in future memory systems is going to be the reincarnation of a two-decades-old concept known as PIM, or processor in memory. It was first explored around 1992 by Peter Kogge of IBM, Ken Lobst of IDA, Jeff Draper of USC ISI, and Bill Dally, then of MIT, with each working on significantly different forms. PIM integrates logic and primitive controllers onto the same semiconductor dies as the mainstream memory fabric, dramatically increasing bandwidth and reducing effective latencies, since all the action can be kept on the chip. While special cases, usually related to the SIMD execution model, have been explored through experimental parts, there has never been a successful generalized component with wide applicability and performance advantage. Since this technology also promises better energy efficiency, and given that Moore’s Law is asymptoting (I know: it’s not a word), this may prove to be the era of opportunity for this innovation. There are many issues to be addressed prior to commercial viability, but exciting work is already being undertaken behind the scenes.

Find out more about Dr. Sterling’s Wednesday keynote here.
