Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

October 22, 2010

Will Multicore Save the Day?

Tiffany Trader

Moore’s Law is dead, or is it? There’s the camp that believes Moore’s Law, which states that transistor density on integrated circuits doubles about every two years, will be viable for only another decade or two. But there’s another camp that thinks the technology already exists to extend the trend: multicore processors. National Instruments’ P.J. Tanzillo is a proponent of the latter theory and has written an article on the subject at Technology Review.

The general purpose computing market has made another quantum leap in processing power in the last five years, but this time it’s not in clock rates, it’s in the number of processing cores. Contrary to popular belief, Moore’s Law is not dead. The number of transistors on modern processors continues to double every 18 months. Those transistors are now just manifesting themselves as additional processing cores. There are two primary reasons that this shift has been made: power and memory.

Tanzillo goes on to explain that with single-core processors, one way to increase performance is to increase clock rates, but with heating and energy concerns, that only goes so far. The increased density of multicore processors allows each core to be clocked well below its theoretical maximum, which assists with heat dissipation and power management.

As for the memory problem, Tanzillo relates how DRAM memory speed has been unable to keep pace with increases in microprocessor speed. Both are increasing exponentially, but microprocessor speed has the larger exponent. This creates a situation where memory latency becomes the biggest bottleneck to system performance, also known as the memory wall problem. Although it would be nice to think multicore has solved this problem, it has really just postponed it a bit. The disparity still exists.

Machines with multiple applications that are each well suited to running on one core (as with a desktop computer) can take advantage of multicore architectures rather easily, with little reprogramming. But HPC presents a challenge because you have one application that must be divvied up to run on multiple cores. Tanzillo explains:

So, just like the supercomputing clusters of the past, algorithms written in FORTRAN and C need to be modified to take advantage of parallel processing cores. These applications need to be broken into threads and these threads need to be designed to avoid some of the common mistakes in parallelization of code like race conditions and priority inversion. In addition, memory and communication between processes must be made thread-safe, and shared resources need to be avoided or addressed. These issues continue to haunt developers updating legacy code to new architectures, and they often result in instability and/or disappointing performance gains. As a result, a set of complementary technologies are growing into maturity that allow programmers to take advantage of multicore systems in new and interesting ways.
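The pitfalls Tanzillo lists, race conditions in particular, come down to unsynchronized access to shared state. As a minimal illustrative sketch (not from the article), here is the classic shared-counter case in Python, with a lock making the read-modify-write update thread-safe:

```python
import threading

# Hypothetical example: four threads increment one shared counter.
# Without the lock, the `total += 1` read-modify-write sequence can
# interleave across threads and lose updates -- a race condition.
total = 0
lock = threading.Lock()

def worker(n_increments):
    global total
    for _ in range(n_increments):
        with lock:  # serialize access to the shared resource
            total += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # 400000 with the lock held on every update
```

The same discipline applies whether the threads live in FORTRAN, C, or a managed runtime: every shared resource either gets a synchronization primitive or gets designed out of the parallel section.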

Some of those “new and interesting ways” revolve around dataflow programming and virtualization, and cloud computing should be considered too, according to Tanzillo.

One thing to keep in mind with multicore is that the math doesn't completely work out. Ideally, doubling the cores would double the performance, but in practice the gain is often closer to 50 percent. And then there's the 2009 Sandia study that suggested performance actually decreases for machines with more than eight cores:

A Sandia team simulated key algorithms for deriving knowledge from large data sets. The simulations show a significant increase in speed going from two to four multicores, but an insignificant increase from four to eight multicores. Exceeding eight multicores causes a decrease in speed. Sixteen multicores perform barely as well as two, and after that, a steep decline is registered as more cores are added.
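This kind of sub-linear scaling is commonly modeled with Amdahl's law, which the article doesn't name but which captures the same intuition: the serial fraction of a program caps the speedup no matter how many cores are added. A small sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup for a program in which `parallel_fraction`
    of the work can be split evenly across `cores` processors; the
    remaining serial fraction runs on one core regardless."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A program that is 75% parallelizable can never exceed 4x speedup,
# and the marginal gain per added core shrinks quickly:
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.75, n), 2))
```

Note that Amdahl's law alone predicts diminishing returns, not the outright slowdown Sandia measured; that decline comes from real-world costs such as memory contention, which the idealized model ignores.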

For an alternate perspective on the multicore debate, we can look to NVIDIA's Bill Dally, who believes that building parallel computers from the ground up using GPUs is the way to go. In his Forbes article from last April, Dally stated:

To continue scaling computer performance, it is essential that we build parallel machines using cores optimized for energy efficiency, not serial performance. Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance.

The path toward parallel computing will not be easy. After 40 years of serial programming, there is enormous resistance to change, since it requires a break with longstanding practices. Converting the enormous volume of existing serial programs to run in parallel is a formidable task, and one that is made even more difficult by the scarcity of programmers trained in parallel programming.

A key point raised by both Tanzillo and Dally is that whether using multicore or parallel GPU-based machines, there's still the problem of parallelizing the software to take advantage of multiple processors. And it's not a minor problem. And yes, there's resistance to change. But at the end of the day, it's important to remember that while science isn't about technology, technology is a primary enabler of science.
