Future Challenges of Large-Scale Computing

By Nicole Hemsoth

April 15, 2013

Now in its 28th year, the International Supercomputing Conference, ISC’13, is fast approaching. On Monday, June 17, Bill Dally, chief scientist at NVIDIA and senior vice president of NVIDIA Research, will deliver the opening keynote, titled “Future Challenges of Large-Scale Computing.”

Dally will address the multiple advances that will be necessary for the community to achieve the potential of HPC and data analytics going forward. The thrust of his talk will be on the challenges around power, programmability, and scalability, and most notably the role that energy efficiency will play in determining system performance.

ISC’13 will be held from June 16-20, 2013, at the Congress Center Leipzig (CCL) in Leipzig, Germany.

In this Q&A, Dally shares his views on where HPC is headed in the context of such important topics as heterogeneous computing, the memory wall, government belt-tightening, and more…

HPCwire: Are different types of workloads, such as big data, HPC and Web 2.0, beginning to demand different types of processors? Will server processors diversify over the next five to ten years, or will they converge?

Bill Dally: HPC, Web servers, and big data all have similar requirements for processors. Within these applications there are program segments that are limited by single-thread performance and other segments that are limited by throughput. To meet this need, there will be a convergence on heterogeneous multicore processors where each “socket” will contain a small number of cores optimized for latency (like today’s CPU cores) and many more cores optimized for throughput (like today’s GPU cores).
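To see why such a mix of core types pays off, here is a minimal Amdahl-style sketch in Python. The speed figures in it are illustrative assumptions, not NVIDIA data; the point is only that latency-limited segments and throughput-limited segments each want a different kind of core.

```python
# Illustrative model only: why a socket mixing latency-optimized and
# throughput-optimized cores beats either kind alone.
# All relative-speed numbers below are made-up assumptions.

def runtime(serial_frac, latency_speed, throughput_speed, work=1.0):
    """Time to run `work` when the serial fraction runs on the
    latency-optimized core(s) and the rest on throughput cores."""
    serial = work * serial_frac / latency_speed
    parallel = work * (1.0 - serial_frac) / throughput_speed
    return serial + parallel

# Assumed relative speeds: one latency core = 1.0 on serial code;
# the throughput cores together = 20.0 on parallel code, 0.2 on serial code.
print(runtime(0.05, 1.0, 20.0))   # heterogeneous socket: ~0.10
print(runtime(0.05, 0.2, 20.0))   # throughput cores only: serial part dominates (~0.30)
print(runtime(0.05, 1.0, 1.0))    # latency cores only: parallel part dominates (~1.00)
```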

HPCwire: The increase in processor performance seems to be outpacing memory technology. What can be done about the memory wall?

Dally: There are three aspects of memory relevant here: bandwidth, latency and capacity. To address the slow scaling of memory bandwidth we plan to move to memory technologies that involve placing memory dice on the same package as the processor chip and connecting them with very high-bandwidth, low-energy links. This on-package memory technology will enable us to scale memory bandwidth with processor performance, holding the byte/FLOP ratio roughly constant for the next few generations.
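As a quick illustration of what "holding the byte/FLOP ratio roughly constant" means, the sketch below computes the ratio from a hypothetical peak FLOP rate and memory bandwidth; neither figure comes from the interview.

```python
# Illustrative arithmetic only: computing a byte/FLOP ratio.
# Both figures are hypothetical, not roadmap numbers.

flops = 1.3e12        # assumed peak: 1.3 TFLOPS per socket
mem_bw = 250e9        # assumed memory bandwidth: 250 GB/s

print(f"{mem_bw / flops:.2f} bytes/FLOP")
# Keeping this ratio roughly constant means bandwidth must grow in step
# with peak FLOPS, which is what on-package memory is meant to enable.
```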

Memory latency is remaining roughly constant as processor performance increases. We deal with this by increasing parallelism to hide the latency. With adequate parallelism we can keep the memory pipeline full – using all of the available bandwidth.
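A back-of-the-envelope Little's Law estimate shows how much parallelism is needed to keep that memory pipeline full; the latency, bandwidth, and request-size numbers below are assumptions chosen only for illustration.

```python
# Little's Law sketch (assumed numbers): outstanding requests needed
# to hide memory latency and use all available bandwidth.

latency_s = 100e-9       # ~100 ns memory latency (assumption)
bandwidth_Bps = 250e9    # 250 GB/s memory bandwidth (assumption)
request_bytes = 128      # bytes moved per outstanding request (assumption)

outstanding = bandwidth_Bps * latency_s / request_bytes
print(f"~{outstanding:.0f} requests must be in flight to saturate bandwidth")
```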

Memory capacity is largely a matter of cost. The challenge here is that high-bandwidth memories, like on-package or stacked DRAM, cost significantly more than commodity memory. Thus, for cost-sensitive applications we are likely to see a two-tiered memory system with a moderate capacity, high-bandwidth on-package memory and a high-capacity commodity memory. A non-volatile memory technology like flash or phase-change memory could have a place in such a hierarchy as well.
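The cost argument for a two-tier hierarchy can be made concrete with a small sizing sketch; all capacities and dollar-per-gigabyte figures here are hypothetical placeholders, not actual component prices.

```python
# Hypothetical two-tier memory sizing: every number is an assumption.

fast_capacity_gb = 32     # on-package / stacked high-bandwidth tier (assumed)
fast_cost_per_gb = 20.0   # assumed $/GB for high-bandwidth memory
bulk_capacity_gb = 512    # commodity DRAM tier (assumed)
bulk_cost_per_gb = 5.0    # assumed $/GB for commodity memory

two_tier = fast_capacity_gb * fast_cost_per_gb + bulk_capacity_gb * bulk_cost_per_gb
all_fast = (fast_capacity_gb + bulk_capacity_gb) * fast_cost_per_gb
print(f"two-tier: ${two_tier:.0f}  vs  all high-bandwidth: ${all_fast:.0f}")
```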

HPCwire: How important will 3D stacked chip technology be to processors and memory? When do you think we’ll see the first commercial products?

Dally: Placing memory on-package will be critical to scale bandwidth. Stacking technology is important to extend the capacity of this high-bandwidth memory.

Stacked memory is shipping today. However, most of it uses wire bonds rather than through-silicon vias. At the 2013 GPU Technology Conference (GTC) this past March, we announced that we expect to introduce stacked memories with our Volta architecture-based generation of GPUs – in about 2016.

HPCwire: Government austerity restrictions look as though there could be pressure to reduce investments in exascale computing, especially in the US. How capable is industry of driving these initiatives by itself?

Dally: Industry will continue to move forward on its own on exascale projects; however, progress will be much slower than it would be with government assistance.

It is disappointing that government priorities are such that investment in computing innovation is being scaled back. At the same time, other nations like China are investing heavily in computing. Even the EU with all of its economic problems is moving forward with their exascale program. With reduced investment, the US runs a grave risk of giving up its leadership in computing.

HPCwire: With current technology, it seems as though exascale computing would require so much energy as to render it impractical. Will we see new breakthrough technologies to sufficiently reduce power consumption to make exascale practical and affordable?

Dally: Improving energy efficiency to reach the goal of a sustained exaflops on a real application within 20 MW is a significant challenge. However, I am optimistic that we can meet this challenge. There are many emerging circuit, architecture and software technologies that have the potential to dramatically improve the energy efficiency of one or more parts of the system. For example, at NVIDIA we have recently developed a new signaling technology that reduces the energy required for communication by more than an order of magnitude, and we have developed an SRAM technology that permits operation at dramatically lower voltages – and hence lower power. It won’t be a single breakthrough technology that gets us to the exascale energy goal; it will be multiple breakthroughs, at least one in each of the areas that require improvement: processor, communication, memory, and so on. We have a number of research projects targeted at these different areas. If a sufficient number of these projects have successful outcomes, we will meet the goal.

These improvements, however, depend on research, which in turn will be slowed considerably without government funding.

HPCwire: What do you see as the biggest challenges to reaching exascale?

Dally: Energy efficiency and programmability are the two biggest challenges.

For energy, we will need to improve from roughly 2 GFLOPS/watt (500 pJ/FLOP), where we are today with the NVIDIA-Kepler-based Titan machine at Oak Ridge National Laboratory in Tennessee, to 50 GFLOPS/watt (20 pJ/FLOP), a 25x improvement in efficiency, while at the same time increasing scale – which tends to reduce efficiency. Of this 25x improvement we expect to get only a factor of 2x to 4x from improved semiconductor process technology.
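The efficiency targets above are just unit conversions of one another, which the short Python check below makes explicit using only the figures quoted in the text.

```python
# Unit check for the efficiency targets quoted above.

def pj_per_flop(gflops_per_watt):
    # 1 W = 1 J/s, so GFLOPS/W -> J/FLOP -> pJ/FLOP
    return 1.0 / (gflops_per_watt * 1e9) * 1e12

print(pj_per_flop(2))    # 500 pJ/FLOP (Titan-class, ~2 GFLOPS/W)
print(pj_per_flop(50))   # 20 pJ/FLOP (exascale target, 50 GFLOPS/W)

# 50 GFLOPS/W within a 20 MW budget yields a sustained exaflops:
print(50e9 * 20e6 / 1e18, "EFLOPS")
```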

As I described before, we are optimistic that we can meet this challenge through a number of research advances in circuits, architecture and software.

Making it easy to program a machine that requires 10 billion threads to be used at full capacity is also a challenge. While a backward-compatible path will be provided to allow existing MPI codes to run, MPI plus C++ or Fortran is not a productive programming environment for a machine of this scale. We need to move toward higher-level programming models where the programmer describes the algorithm with all available parallelism and locality exposed, and tools automate much of the process of efficiently mapping and tuning the program to a particular target machine.
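The flavor of that shift, though not the specific programming model Dally has in mind, can be seen in a toy Python contrast: hand-scheduling every element leaves tools nothing to work with, while stating the whole data-parallel operation at once leaves the mapping and tuning to the underlying implementation.

```python
# Illustrative contrast only, not the programming system described above.

import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Low-level style: the programmer fixes the iteration order element by
# element; little is exposed for tools to map or tune.
c_loop = np.empty_like(a)
for i in range(a.size):
    c_loop[i] = a[i] * b[i]

# Higher-level style: the full data-parallel operation is stated at once,
# so the implementation is free to vectorize, thread, or offload it.
c_vec = a * b

assert np.allclose(c_loop, c_vec)
```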

A number of research projects are underway to develop more productive programming systems – and most importantly the tools that will permit automated mapping and tuning.

Changing a large code base, however, is a very slow process, so we need to start moving on this now. As with energy efficiency, progress will be slowed without government funding.

About Bill Dally

Bill Dally is chief scientist at NVIDIA and senior vice president of NVIDIA Research, the company’s world-class research organization, which is chartered with developing the strategic technologies that will help drive the company’s future growth and success.

Dally first joined NVIDIA in 2009 after spending 12 years at Stanford University, where he was chairman of the computer science department and the Willard R. and Inez Kerr Bell Professor of Engineering. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today.

Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at the California Institute of Technology (Caltech), where he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered wormhole routing and virtual-channel flow control.

Dally is a cofounder of Velio Communications and Stream Processors. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM. He received the 2010 Eckert-Mauchly Award, considered the highest prize in computer architecture, as well as the 2004 IEEE Computer Society Seymour Cray Computer Engineering Award and the 2000 ACM Maurice Wilkes Award. He has published more than 200 papers, holds more than 75 issued patents and is the author of two textbooks, “Digital Systems Engineering” and “Principles and Practices of Interconnection Networks.”

Dally received a bachelor’s degree in electrical engineering from Virginia Tech, a master’s degree in electrical engineering from Stanford University and a PhD in computer science from Caltech.
