When I stop and think about the radical changes that computer architectures are undergoing today, it reminds me of that ancient proverb: “May your life be filled with many cores.” OK, I just made that up. The one that really applies is: “May you live in interesting times.” It is intended as both a blessing and a curse. But the proverb pretty much reflects the state of information technology today.
After half a century of domination, the unicore processor is an endangered species. Multicore processing is now mainstream. The future is massively parallel computing performed on manycore processors. That's the fundamental assumption underlying the recent report, “The Landscape of Parallel Computing Research: A View from Berkeley.” According to the authors of the paper, “Successful manycore architectures and supporting software technologies could reset microprocessor hardware and software roadmaps for the next 30 years.”
For a first-hand perspective on “A View from Berkeley,” read our feature interview with John Shalf and David Patterson, two of the authors of the report.
The creation of manycore architectures — hundreds to thousands of cores per processor — is seen by many as a natural evolution of multicore, as Moore's Law and nanoscale physics conspire to force chip designers to add transistors rather than increase processor clocks. If manycore is destined to be the way forward, a new parallel computing ecosystem will need to be developed, one that is very different from the environment that supports the current sequential and multicore processing systems. This is the subject of the Berkeley report.
“A View from Berkeley” delineates the state of parallel computing as it exists today and where it needs to go for our manycore future. In doing so I think they've put together one of the more valuable texts on the subject — valuable not because it claims to have all the answers, but because it manages to ask all the right questions.
One of the central issues discussed in the report is the type of hardware building blocks to be used for manycore systems. On this topic, the researchers take a reasonably definitive stand. They envision processors with thousands of simple (i.e., RISC) processing cores. The researchers argue that small, simple cores are the most efficient structures for parallel codes, providing the best tradeoff between energy consumption, performance, and manufacturability. They point to the new 128-core NVIDIA GPUs and Cisco's 188-core Metro network processor as two early examples of this approach. The researchers also entertain the notion of heterogeneous cores, but seem ambivalent about the tradeoffs between better code performance and system complexity, especially software complexity.
One of the more interesting areas the report explores is the convergence taking place between the embedded and HPC markets. Once at opposite ends of the computing spectrum, embedded computing and HPC are being brought together by shared needs: energy efficiency, low-cost hardware building blocks, software reuse, and high-bandwidth data access.
The IBM Blue Gene/L is one system that has some of its roots in the embedded world. The Blue Gene's low-power PowerPC-based processors are essentially embedded microcontrollers recast as HPC processors. A more recent version of an embedded-type HPC architecture is the SiCortex system, based on a chip containing six MIPS cores. It wouldn't be surprising to see some other HPC startups pick up this model.
Certainly the Berkeley folks aren't looking to CISC to achieve anything meaningful in a manycore architecture. Although Intel and AMD have done a remarkable job of driving up the performance/watt numbers for the CISC x86 architecture, the feasibility of using the x86 for manycore seems questionable. Just this week, a similar sentiment was reflected in an ITweek commentary by Martin Banks, who questioned the suitability of the x86 as a basis for scaled-out systems. To balance that viewpoint, in early February, InfoWorld's Tom Yager penned a love letter to the quad-core Barcelona, noting how much AMD has achieved with energy efficiency and performance in its next-generation Opteron.
Intel's own 80-core terascale prototype processor uses simple RISC-type cores to achieve a teraflop (in less than 70 watts!), although the company implied that commercial versions would use Intel Architecture-based cores. But even 80 cores is an order of magnitude fewer than the Berkeley researchers envision.
One of the other big issues the report addresses is the type of applications that will run on manycore systems. The authors believe parallel computing apps will be based on a set of 13 different computational methods, their so-called 13 Dwarfs. The set consists of Phil Colella's original Seven Dwarfs from scientific computing, plus six more drawn from other computing domains: embedded, general purpose, machine learning, graphics/games, databases, and Intel's recognition, mining and synthesis (RMS) applications.
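To make the notion of a dwarf concrete, here is a minimal sketch of my own (not code from the report) of one of Colella's original seven, sparse linear algebra, expressed as a sparse matrix-vector multiply over a compressed sparse row (CSR) matrix. The point is simply what a dwarf looks like as a kernel: a regular computational pattern whose independent rows are natural units of parallel work.

    /* Illustrative sketch only -- not code from the Berkeley report. One of
     * Colella's original dwarfs, sparse linear algebra, boiled down to a
     * sparse matrix-vector multiply (y = A*x) with A in compressed sparse
     * row (CSR) form. Each row is independent, which is exactly the kind of
     * fine-grained parallel work a manycore processor would be fed.
     */
    #include <stdio.h>

    static void spmv_csr(int nrows, const int *row_ptr, const int *col_idx,
                         const double *vals, const double *x, double *y)
    {
        for (int i = 0; i < nrows; i++) {        /* rows can run in parallel */
            double sum = 0.0;
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++)
                sum += vals[j] * x[col_idx[j]];
            y[i] = sum;
        }
    }

    int main(void)
    {
        /* 3x3 matrix [[4,0,1],[0,3,0],[2,0,5]] in CSR form */
        int    row_ptr[] = {0, 2, 3, 5};
        int    col_idx[] = {0, 2, 1, 0, 2};
        double vals[]    = {4, 1, 3, 2, 5};
        double x[]       = {1, 2, 3};
        double y[3];

        spmv_csr(3, row_ptr, col_idx, vals, x, y);
        for (int i = 0; i < 3; i++)
            printf("y[%d] = %g\n", i, y[i]);
        return 0;
    }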
It's conceivable that within a few short years, parallelized applications will dominate IT. One could make a case for that today with Internet-based applications like text searching, which are massively parallelized (albeit in a distributed model). Search engines represent one of the dominant applications today. Ten or fifteen years ago, the killer app was the word processor. Tomorrow, it may be a personal multimedia synthesizer.
But many people are concerned that only a small subset of applications can actually be parallelized to any meaningful degree. That's certainly true if you just look at current applications statically. For example, word processors are not compute-limited to any extent. Even today's single-core systems could happily calculate your tax return in between keystrokes. But if word processing is going to evolve into a more compelling application, it will need to add capabilities such as voice recognition, next-generation language translation, and semantic analysis, features that are likely to require high degrees of parallelism. I would argue that only the most trivial end-user applications would be unable to take advantage of parallelism.
The real concern is how massive parallelism will be programmed. The Berkeley researchers believe that neither the sequential nor multicore programming models provide the right approach. A central precept for manycore programming is that the model should be independent of the number of processors. That's certainly not the case for applications implemented with MPI. Removing the dependency between the application and the processor/core count provides for automatic application scaling as succeeding microprocessor generations increase compute density. This would be a huge step in the right direction, bringing us back to the good old days when application performance automatically increased with every processor clock speed bump.
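To illustrate the distinction, here is a minimal sketch of my own (not an example from the report), assuming an OpenMP-style data-parallel model: the loop below never names a processor count, so the same source could in principle scale across whatever cores future chips provide, whereas a typical MPI code is decomposed around an explicit number of ranks.

    /* Illustrative sketch only -- not an example from the report. A dot
     * product as a data-parallel loop: the code never mentions a core or
     * rank count, so the OpenMP runtime spreads the iterations over however
     * many cores the machine provides. Compile with, e.g., gcc -fopenmp.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double sum = 0.0;

        for (long i = 0; i < N; i++) {
            a[i] = i * 0.5;
            b[i] = i * 0.25;
        }

        /* The decomposition is implicit: no rank IDs, no explicit core count. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot product = %g\n", sum);
        free(a);
        free(b);
        return 0;
    }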
A related concern is productivity. A programming model must allow the software developer to balance the competing goals of productivity and implementation efficiency. Here, unfortunately, there is no consensus. The Berkeley report does some hand-waving about human-centric programming, expanding data types, and providing support for different types of parallelism, but the authors recognize that there's usually a tradeoff between ease of programming and runtime performance.
Our manycore future has an enormous upside, but the anxiety about it in the IT world is palpable. No single computing community, not even HPC, seems to have the breadth of expertise to attack this alone. But if you're a computer systems architect looking to change the world, these are indeed interesting times to be alive.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].