March 23, 2009

Finding the Door in the Memory Wall, Part 2

by Erik Hagersten

Trading Parallelism for Performance

It is a common belief that only sequential applications need to be adapted for parallel execution on multicore processors. However, many existing parallel algorithms are also a poor fit. They have simply been optimized for the wrong design parameters.

In the past we have striven for algorithms that maximize parallelism while minimizing the communication between threads. For multicore processors, however, thread communication is relatively cheap as long as the communicated data resides in a cache shared by the threads. Also, the amount of parallelism that a multicore processor can exploit is limited by its number of cores multiplied by the number of threads running on each core. Instead, a third parameter is gaining importance for parallel multicore applications: memory usage.

In this, the second article of the series, we contrast the behavior of a highly parallel, state-of-the-art algorithm with that of a moderately parallel algorithm in which some of the parallelism has been traded for lower DRAM bandwidth demands. We show that the latter outperforms the highly parallel algorithm by a factor of three on today’s multicore processors. The techniques used and some of the performance numbers are summarized here. A more detailed description of the algorithms discussed in this article was presented at ICS 2006 together with colleagues and students from Uppsala University.

Highly Parallel Algorithm

The Gauss-Seidel algorithm (GS) is used to smooth an array of NxN elements. The original GS algorithm is pictured in Figure 1a. The new value (yellow) for each element of the array is calculated as the average of its own value and those of its four neighbors. The elements of the array are updated row-wise. The element numbers in the figure refer to their iteration age. At the end of each iteration, convergence is checked and, if the condition is not met, the array is iterated again. Typically, the array is iterated 10–30 times before convergence is reached. The red arrows in Figure 1a indicate the data dependencies of this algorithm: the new values to the left of and above the yellow element have to be calculated before the yellow value can be calculated. These data dependencies make the original algorithm hard to parallelize.

[Figure 1: the original Gauss-Seidel update (a), the red/black variant (b), and the division of work between two cores (c)]
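For concreteness, here is a minimal sketch of one sweep of the original algorithm in C. The array size N, the fixed boundary rows and columns, and the function name are illustrative assumptions, not details from the article.

    /* A minimal sketch of one sweep of the original Gauss-Seidel
     * smoother, assuming an (N+2)x(N+2) array of doubles whose
     * outermost rows and columns hold fixed boundary values.
     * N, the layout and the function name are illustrative. */
    #define N 1024

    void gs_sweep(double a[N + 2][N + 2])
    {
        /* Row-wise update: the new value is the average of the element
         * and its four neighbors. The left and upper neighbors have
         * already been updated in this sweep, which is the dependency
         * marked by the red arrows in Figure 1a. */
        for (int i = 1; i <= N; i++)
            for (int j = 1; j <= N; j++)
                a[i][j] = 0.2 * (a[i][j] +
                                 a[i - 1][j] + a[i + 1][j] +
                                 a[i][j - 1] + a[i][j + 1]);
    }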

Figure 1b shows the popular red/black variation of the algorithm, where only every other element is updated in a sweep of the array (the update of red elements is shown in Figure 1b). In a second sweep, the other (black) elements are updated. Unlike the original scheme, this red/black algorithm has no data dependencies during sweeps since red elements do not depend on any other red elements. In other words, all the elements of a sweep can theoretically be updated in parallel – its parallelism is N²/2.
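A corresponding sketch of the red/black sweep, under the same assumptions as above; whether "red" means (i + j) even or odd is an arbitrary illustrative choice.

    /* The red/black variant under the same assumptions as above.
     * Here "red" is taken to be the elements with (i + j) even and
     * "black" those with (i + j) odd; no element of the active color
     * is read during its own sweep, so all its updates are independent. */
    void gs_redblack_sweep(double a[N + 2][N + 2], int color)  /* 0 = red, 1 = black */
    {
        for (int i = 1; i <= N; i++)
            for (int j = 1; j <= N; j++)
                if (((i + j) & 1) == color)
                    a[i][j] = 0.2 * (a[i][j] +
                                     a[i - 1][j] + a[i + 1][j] +
                                     a[i][j - 1] + a[i][j + 1]);
    }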

Figure 1c shows how two cores may divide the work. This scheme keeps the communication between the cores at a minimum: only the values of the elements on the border between the threads need to be communicated, and the threads only need to synchronize once per sweep. So, according to the old definition of a good algorithm, the red/black algorithm is close to perfect: plenty of parallelism and a minimum of communication. There is only one drawback: it runs slowly on a multicore processor, as shown in Figure 2.
[Figure 2: red/black Gauss-Seidel performance on a multicore processor]
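One plausible OpenMP rendering of that division of work (an assumed sketch, not the article's code) splits the outer row loop statically across threads: only the border rows between neighboring threads' chunks are shared, and the implicit barrier at the end of the parallel loop provides the once-per-sweep synchronization.

    /* The red/black sweep split across cores with OpenMP. Contiguous
     * blocks of rows go to each thread, so only the border rows are
     * communicated, and the implicit barrier at the end of the loop
     * is the once-per-sweep synchronization. Illustrative only. */
    void gs_redblack_parallel(double a[N + 2][N + 2], int color)
    {
        #pragma omp parallel for schedule(static)
        for (int i = 1; i <= N; i++)
            for (int j = 1; j <= N; j++)
                if (((i + j) & 1) == color)
                    a[i][j] = 0.2 * (a[i][j] +
                                     a[i - 1][j] + a[i + 1][j] +
                                     a[i][j - 1] + a[i][j + 1]);
    }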

Typically, the array size used with Gauss-Seidel is too large to fit in a multicore processor cache. Each iteration will force the entire array to be read from memory. Actually, for the red/black scheme, the array will have to be read twice per iteration, first during the red updates and then during the black updates. This will quickly saturate the DRAM bandwidth and limit the performance on a multicore processor.
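To put rough numbers on this (an illustrative estimate, not a measurement from the article): a 4096x4096 array of doubles occupies about 134 MB, far larger than any last-level cache. Reading it twice per red/black iteration therefore streams roughly 270 MB from DRAM per iteration, before write-backs are counted, so a handful of cores sharing one memory interface will be bandwidth-bound long before they run out of parallelism.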

Finding the Door in the Memory Wall

Instead of just maximizing parallelism, we could try to minimize the DRAM bandwidth usage of a GS implementation. If we apply a blocking scheme to the original GS algorithm, we can keep an active subset of the array, called a block, in the cache and reuse its elements many times before the data are evicted. Because of the data dependencies of the original GS algorithm (the red arrows in Figure 1a) we have to apply a sliding blocking technique, shown in Figure 3a. The active block inside the red frame includes three rows. Once the next-iteration values for all the elements in the block have been updated, as shown in Figure 3a, the block is slid down one row, as shown in Figure 3b, and the next-iteration values for those elements are updated. This improves the reuse of element values while they reside in the cache. Using this scheme, each element of the array advances three iterations per sweep, which means that the array is only read from DRAM every third iteration. This implies that only one sixth of the DRAM bandwidth is needed compared with the red/black algorithm, which reads the array twice per iteration.
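A minimal sketch of the idea, again under the assumptions used in the earlier sketches and with a three-row window as in Figure 3: at each window position the bottom row is advanced one iteration, the middle row a second and the top row a third, so a single pass over the array advances every element three iterations while its rows are still resident in the cache.

    /* The sliding-block scheme with a three-row window. At window
     * position r the bottom row advances to iteration 1, the middle
     * row to iteration 2 and the top row to iteration 3, so one pass
     * over memory advances the whole array three iterations. The
     * window runs slightly past the last row so the trailing rows
     * also receive all three updates. */
    void gs_sliding_block(double a[N + 2][N + 2])
    {
        for (int r = 1; r <= N + 2; r++)         /* slide the window down one row at a time */
            for (int d = 0; d < 3; d++) {        /* d = 0: bottom row ... d = 2: top row */
                int i = r - d;
                if (i < 1 || i > N)
                    continue;                    /* rows outside the array are skipped */
                for (int j = 1; j <= N; j++)
                    a[i][j] = 0.2 * (a[i][j] +
                                     a[i - 1][j] + a[i + 1][j] +
                                     a[i][j - 1] + a[i][j + 1]);
            }
    }

Calling the routine repeatedly gives further iterations in steps of three; a deeper window trades more cache footprint for fewer passes over DRAM, which is exactly the trade-off plotted in Figure 4.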

If the number of rows in the active block is increased, even less DRAM bandwidth is needed. Figure 4 shows the relationship between bandwidth usage and block size, as reported by the ThreadSpotter tool for different cache sizes. A typical last-level multicore cache is in the 2–12 MByte range. In this range, the blocked GS scheme needs more than an order of magnitude less bandwidth than the red/black GS scheme.

Figure 3c shows a parallel version of the blocked GS. A drawback is that the threads have to synchronize row-wise to make sure the thread to the left stays slightly ahead of the thread to the right. In sum, the blocked GS algorithm produces about an order of magnitude more thread communication than red/black GS. Its parallelism is also much lower: only on the order of N threads (one per column) can help out simultaneously, compared with the N²/2 parallelism of the red/black algorithm. Still, it outperforms the red/black algorithm by a factor of three on a dual-socket quad-core system thanks to its much lower DRAM bandwidth demand. Similar results have been observed when comparing 3D versions of the algorithms. Figure 5 compares the performance of the two algorithms running on a two-socket quad-core system: the red/black algorithm saturates the bandwidth with only two active cores, while the sliding GS algorithm scales well even across both sockets without any special thread placement.
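To make the row-wise synchronization concrete, here is a hedged sketch of the pipelining idea, shown on a single plain GS sweep for brevity (the article applies the same idea to the sliding-block version). Each thread owns a band of columns and may update row i only after the thread to its left has finished row i, so the left thread stays slightly ahead; the progress counters, the spin-wait and the thread limit are illustrative choices.

    /* Pipelined (wavefront) parallelization of one GS sweep: columns
     * are divided into per-thread bands, and a thread may process row i
     * only after its left neighbor has published row i, which supplies
     * the new left-border values the stencil needs. */
    #include <omp.h>
    #include <stdatomic.h>

    #define MAX_THREADS 64

    void gs_sweep_pipelined(double a[N + 2][N + 2], int nthreads)
    {
        atomic_int progress[MAX_THREADS];        /* last row finished by each thread */
        for (int t = 0; t < nthreads; t++)
            atomic_init(&progress[t], 0);

        #pragma omp parallel num_threads(nthreads)
        {
            int t    = omp_get_thread_num();
            int cols = N / nthreads;             /* assume N is divisible by nthreads */
            int jlo  = 1 + t * cols;
            int jhi  = jlo + cols - 1;

            for (int i = 1; i <= N; i++) {
                /* Row-wise synchronization with the left neighbor. */
                if (t > 0)
                    while (atomic_load_explicit(&progress[t - 1],
                                                memory_order_acquire) < i)
                        ;                        /* spin until the left thread is ahead */

                for (int j = jlo; j <= jhi; j++)
                    a[i][j] = 0.2 * (a[i][j] +
                                     a[i - 1][j] + a[i + 1][j] +
                                     a[i][j - 1] + a[i][j + 1]);

                atomic_store_explicit(&progress[t], i, memory_order_release);
            }
        }
    }

The per-row waits are the extra thread communication the blocked scheme pays in exchange for its much lower bandwidth demand.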

Summing Up

In Part 1 of this article series, we saw how a throughput workload suffered a superlinear slowdown on a multicore architecture due to increased cache pressure. In this second article, we were forced to change "the ideal" highly parallel and low-communication algorithm in order for it to run well on a multicore processor. This once more drives home the point made by Sanjiv Shah a couple of weeks ago: focusing only on parallelism is not always the best way to get good performance on a multicore architecture. In Part 3, we will take a look at various techniques for identifying when optimizations are needed and compare a few simple optimization tricks.

About the Author

Erik Hagersten is chief technology officer at Acumem, a Sweden-based company that offers performance analysis tools for modern processors. He was the chief architect for high-end servers at Sun Microsystems (the former Thinking Machines development team) for six years before moving back to Sweden in 1999. Erik remained a consultant to Sun until Acumem was started in 2006. Since 2000 his research team at Uppsala University has developed the key technology behind Acumem.
