Multicore processors are here, and new articles and products are targeting the need to parallelize applications. There are three common misunderstandings about parallel processing discussed in this article: (1) Embarrassingly parallel execution will always benefit from multicore architectures; (2) The memory wall is a hardware-only showstopper for performance; (3) Going green only implies investing in power-efficient hardware.
Many HPC applications, sequential and parallel alike, treat the shared resources of a multicore processor wastefully and needlessly crash into the memory wall at high speed. This can lead to superlinear slowdown, even for embarrassingly parallel applications. Often, simple rewrites of an application can avoid the memory wall, or at least push it further away. Thus, we cannot blame hitting the memory wall on hardware alone. Usually, some alteration to the software can avoid the bottleneck, and making the software more efficient may be the simplest and least expensive way to save power and resources.
HPC Computer Landscape of 2009
A multicore architecture replicates some hardware resources, typically the pipelines and the L1 caches, while other resources are not replicated and are shared by the many active threads running on the processor. Examples of such shared resources are the shared cache (L2 and/or L3) and the DRAM bandwidth. But if increasing the number of cores keeps raising peak FLOPS and MIPS in the future, what will happen to the shared resources?
As pointed out in HPCwire by Michael Feldman, we cannot expect any major improvement in memory bandwidth beyond DDR3 any time soon. This is likely to be an Achilles' heel for future multicore processors. Studying the improvement of cache capacity per active thread does not make the picture any prettier. Looking back, the HPC servers of the 1990s often had several megabytes of last-level cache per CPU. Since each CPU executed only one thread, that was also the cache capacity per thread. A multiprocessor system (e.g., an SMP) built from many such CPU chips grew its total cache capacity linearly with the number of threads (i.e., CPUs, a.k.a. cores). Sometimes this even resulted in superlinear speedups, since more active threads also implied more total cache capacity for the parallel application.
Typically, each new and faster generation of CPUs in the 1990s came with larger caches. A rule of thumb for us architects at Sun Microsystems back then — building servers based on UltraSPARC I and II — was that doubling the cache capacity yields roughly 50 percent performance improvement for important applications. Of course, that is not an absolute truth but may say something about the state of mind of large-scale server designers in the previous decade.
In a multicore design, cache capacity is shared among all the cores, where each core may be running several threads simultaneously. The new Nehalem processor, for example, has four cores, each running two threads, so eight threads share 8 megabytes of cache. Cache capacity per thread in 2009 (1 megabyte) is thus lower than what we grew used to in the 1990s, and the total cache capacity does not scale with the number of cores, as was the case for the SMPs back then. This is likely to make the memory wall higher, rather than lower, in the future.
Embarrassingly Parallel Execution and Superlinear Slowdown
One example of embarrassingly parallel execution is throughput computing, where several independent applications run on a server. Most supercomputers today do exactly this, that is, many single-threaded applications execute simultaneously and perform their tasks completely independently. This embarrassingly parallel workload may appear to be ideal for multicore processors, but it can actually result in devastating performance, as demonstrated by this graph:
The graph shows the amount of useful work performed by a multicore chip running a throughput workload, in this example, instances of a Lattice Boltzmann Method application, similar to the one used in SPEC2006. While going from one to two cores does indeed result in a throughput improvement of about 50 percent, there is actually performance degradation when going from two to three cores, which we jokingly refer to as a superlinear slowdown.
There is a simple explanation for this. When running on two cores, the high miss rate in the shared L2 cache increases the traffic from the two cores to DRAM so much that the DRAM interface becomes the bottleneck of the system: performance is capped by the number of bytes you can transfer per second across the DRAM interface. Actively using three cores decreases the amount of L2 cache available to each thread, resulting in a higher probability that data will have to be brought in from DRAM (a higher L2 cache miss rate). Since the DRAM interface is already the bottleneck of the system, having to access DRAM more often per unit of useful work degrades throughput, which means less useful work per unit of time. The memory wall rears its ugly head. This example drives home the point made by Intel's Sanjiv Shah: do not parallelize an application unless it is optimized. A parallel version of the application would also be capped by the memory wall and would not run any faster than the sequential version.
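To make the effect concrete, here is a minimal sketch in C with OpenMP. It is not the Lattice Boltzmann code behind the graph, and the array sizes and flags are illustrative assumptions: an embarrassingly parallel streaming kernel whose working set dwarfs any shared cache, so its throughput is set by the DRAM interface rather than by the core count.

```c
/* Minimal sketch (NOT the LBM code behind the graph): an embarrassingly
 * parallel streaming kernel whose working set (two ~512 MB arrays) dwarfs
 * any shared cache, so throughput is capped by the DRAM interface.
 * Build with, e.g.:  gcc -O2 -fopenmp bandwidth_cap.c -o bandwidth_cap
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (64 * 1024 * 1024)   /* 64 Mi doubles = ~512 MB per array */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;

    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    for (int threads = 1; threads <= omp_get_max_threads(); threads++) {
        double t0 = omp_get_wtime();

        /* Each thread streams its own chunk: no sharing, no synchronization,
         * yet all threads compete for the same DRAM bandwidth. */
        #pragma omp parallel for num_threads(threads) schedule(static)
        for (long i = 0; i < N; i++)
            a[i] = a[i] + 3.0 * b[i];

        double gbytes = 3.0 * N * sizeof(double) / 1e9; /* read a, read b, write a */
        printf("%d thread(s): %.2f GB/s\n",
               threads, gbytes / (omp_get_wtime() - t0));
    }

    free(a);
    free(b);
    return 0;
}
```

On a bandwidth-bound system, the reported GB/s figure flattens out (or even dips) well before all cores are in use, mirroring the throughput curve described above.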
Finding the Software Key to the Door
However, this behavior cannot be blamed on hardware limitations alone. Rather, it is a combined hardware/software effect. Using our SlowSpotter tool, we easily see that less than half of the data transferred across the DRAM interface is ever used before being evicted from the cache, wasting precious shared bandwidth. This also implies that every other byte stored in the shared L2 cache is just wasting space.
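The kind of waste SlowSpotter points to can be pictured with a small, purely illustrative C fragment; the struct and function names below are mine, not taken from the application. Each 64-byte cache line dragged in from DRAM carries only 8 bytes that the loop ever touches, so most of the transferred bandwidth, and most of the space those bytes occupy in the shared L2, is wasted.

```c
/* Purely illustrative, not taken from the application: a data layout and
 * access pattern where most of each fetched cache line is never used.
 * Each element occupies a full 64-byte line, but the loop reads only one
 * 8-byte field, so roughly 7/8 of the transferred bytes are wasted. */
#include <stddef.h>

struct cell {
    double value;       /* the only field this loop ever touches          */
    double other[7];    /* unrelated state that pads the line to 64 bytes */
};

double sum_values(const struct cell *cells, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += cells[i].value;   /* one useful double per 64-byte cache line */
    return sum;
}
```

Splitting the hot field into its own densely packed array (a struct-of-arrays layout) would let every fetched line carry eight useful values instead of one.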
A simple rewrite of the application (changing one of its macro definitions) results in a slightly changed memory access order and the much better throughput reported in Graph 2. Now, most of the data dragged across the DRAM interface is useful, and most bytes stored in the L2 cache help to remove cache misses. This example shows that it is not fair for a programmer to blame the memory wall unless the cache utilization has been fully understood.
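The application's actual macro is not shown here, but a hypothetical sketch of this kind of one-line rewrite looks like the following: swapping the index expression changes the memory layout so that the inner loop walks memory with stride one, and every byte of each fetched cache line gets used before it is evicted.

```c
/* Hypothetical sketch of the kind of one-macro rewrite described above;
 * the real application's macro and array names are not shown in the text.
 * Only the index expression changes: the loop nest stays the same, but the
 * inner x-loop now walks memory with stride 1 instead of stride ny. */

/* Before: consecutive x iterations are ny doubles apart in memory. */
/* #define IDX(x, y) ((x) * ny + (y)) */

/* After: consecutive x iterations touch adjacent memory locations. */
#define IDX(x, y) ((y) * nx + (x))

void relax(double *grid, int nx, int ny)
{
    for (int y = 1; y < ny - 1; y++)
        for (int x = 1; x < nx - 1; x++)
            grid[IDX(x, y)] = 0.25 * (grid[IDX(x - 1, y)] + grid[IDX(x + 1, y)]
                                    + grid[IDX(x, y - 1)] + grid[IDX(x, y + 1)]);
}
```

A change like this leaves the arithmetic untouched; only the mapping from (x, y) coordinates to memory addresses changes, which is why such fixes can be this cheap.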
Multicore processors replicate some hardware resources and can provide MIPS and FLOPS en masse, while their shared resources, cache capacity and DRAM bandwidth, offer far less impressive per-thread capabilities. This can degrade performance even for embarrassingly parallel applications. Spotting inefficient usage of the shared resources can lead to simple, local changes in the code and cheap performance improvements. This can also be translated into doing the same amount of work with fewer resources, also known as going green. Instead of buying new, power-lean hardware to go green, this software approach saves resources on existing systems.
In Part 2, I'll talk about how far more drastic performance improvements can be realized if the usage of cache and bandwidth is taken into consideration when an algorithm is designed, rather than as an afterthought when fixing the code, as in the example above. I'll also show how a redesigned parallel algorithm consumes an order of magnitude less bandwidth than the (almost) embarrassingly parallel state-of-the-art algorithm commonly used today. The new algorithm turns out to be an order of magnitude more scalable on multicore processors.
About the Author
Erik Hagersten is chief technology officer at Acumem, a Sweden-based company that offers performance analysis tools for modern processors. He was the chief architect for high-end servers at Sun Microsystems (the former Thinking Machines development team) for six years before moving back to Sweden in 1999. Erik remained a consultant to Sun until Acumem started in 2006. Since 2000, his research team at Uppsala University has developed the key technology behind Acumem.