Interest in the Cell processor by the high performance computing community appears to be building rapidly. Last week's feature article on the proposed use of the Cell for HPC, “Researchers Analyze HPC Potential of Cell Processor,” generated a large response from our readers. In fact, it was the most downloaded article in this publication's history.
That's not too surprising. With its PowerPC scalar core controlling eight SIMD cores — the synergistic processing elements (SPEs) — the Cell represents the first commodity implementation of a high-performance multi-core heterogeneous processor. In the world of HPC, heterogeneity is seen by many as the next evolutionary step in computer architecture.
However, the heterogeneous nature of the Cell is not conventional in the supercomputing sense. The processor's scalar PowerPC core is used to control the SPE cores and manage the chip's memory hierarchy, while the SPEs themselves do the computation. There's no real division of heterogeneous workloads.
That's not to suggest that the Cell architecture isn't innovative. According to the Berkeley researchers, the three-tiered memory hierarchy, which decouples memory accesses from computation and is explicitly managed by the software, provides some significant advantages over typical cache-based architectures. In fact, the Cell's software-controlled memory system may be its most compelling technological feature, offering a powerful solution to memory latency when data access has some level of predictability.
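The classic idiom for exploiting that kind of predictability on a software-managed memory is double buffering: while the compute core works on one local buffer, the next chunk of data is transferred into the other. The Python sketch below is purely illustrative (the function name and sequential "prefetch" are my own stand-ins; on the Cell the transfer would be an asynchronous DMA issued by the memory flow controller, overlapping with computation rather than running before it):

```python
def process_stream(data, chunk, compute):
    """Double-buffering sketch: compute on one local buffer while the
    next chunk is staged into the other. A sequential simulation of
    what a DMA engine would do asynchronously on the Cell."""
    buffers = [None, None]
    results = []
    buffers[0] = data[0:chunk]              # initial "DMA-in"
    i, cur = chunk, 0
    while buffers[cur]:
        nxt = cur ^ 1
        buffers[nxt] = data[i:i + chunk]    # prefetch next chunk
                                            # (would overlap with compute on Cell)
        results.append(compute(buffers[cur]))  # work on the current buffer
        i += chunk
        cur = nxt                           # swap buffers
    return results

# Summing 8 elements in chunks of 3:
print(process_stream(list(range(8)), 3, sum))   # -> [3, 12, 13]
```

The point of the pattern is that the data movement for chunk N+1 and the computation on chunk N happen at the same time, hiding memory latency entirely when access is predictable.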
The Wikipedia entry on the Cell processor offers another way to look at it: “In some ways the Cell system resembles early Seymour Cray designs in reverse.” The entry notes that the CDC 6600 used one fast processor to handle the math and ten slower systems to keep memory fed with data, while the Cell reverses the model by using the central processor to supply data to the eight math elements.
So how does this translate into an HPC solution? Overall, the impressive power and performance results that the researchers obtained with the Cell do appear to indicate real potential for high performance computing. When comparing scientific benchmark codes run on AMD Opteron, Intel Itanium 2 and Cray X1E processors, the Cell beat the Opteron and Itanium 2 rather easily; the X1E, less so. The results show that the Cell was about seven times faster than either the Opteron or the Itanium 2, 15 times more power-efficient than the Opteron, and 21 times more power-efficient than the Itanium 2. Pretty impressive.
The researchers went on to propose a “Cell+” architecture as a way to greatly enhance the architecture's 64-bit floating-point performance for scientific codes. Using this virtual processor, the performance and power-efficiency results more than doubled, when compared to the already blazingly fast Cell.
And, as pointed out by the authors of the research paper, the fact that the Cell will be mass-produced for the Sony PlayStation 3 platform makes it a tempting target for building affordable supercomputing systems. “Cell is particularly compelling because it will be produced at such high volumes that it will be cost-competitive with commodity CPUs,” state the authors.
For anyone in the HPC community, the idea of adopting a commodity architecture that got its start in another market segment should not be too hard to wrap your head around. When Intel introduced the x86 architecture in 1978, and went on to become the standard chip for desktop PCs, who thought it would end up in supercomputers? Even the IBM Blue Gene supercomputer is based on PowerPC chips, whose original habitat was in Apple desktop computers and embedded devices. In contrast, the processors that were specifically designed for high performance computing have struggled in the marketplace. Not because they didn't perform. It's just that the economic model to develop custom chips exclusively for HPC systems is rather tenuous. Just ask Cray or SGI.
So should HPC OEMs start building Cell systems to blow the chips off every other blade and cluster machine out there? Maybe, but it has to be for more than just bragging rights. The IBM Cell-based blade was unveiled this past February and is planned to be generally available in the third quarter of 2006. Mercury Computer Systems has sold several test systems to military and commercial customers, and plans to release its first production-quality Cell blades by the end of June. So there's certainly activity afoot.
But there is the matter of a software ecosystem to contend with. For the benchmark study, the Berkeley researchers admitted to using assembly-level insertion to hand-code the algorithms. Obviously, for production development, this is unacceptable. A Cell Broadband Engine Software Development Kit, including a compiler, is available from IBM. And with the release of kernel version 2.6.16 in March 2006, Linux now officially supports the Cell processor. But this is just the start. Many applications will have to be ported to provide a mature software environment.
And some have doubts that the architecture is a useful model for next-generation supercomputing. Here are a few sobering comments from the High-End Crusader:
“The paper by Williams et al., 'The Potential of the Cell Processor for Scientific Computing', is guarded in its conclusions and cannot really be faulted. Nonetheless, its unintended consequence may be regressive, further retarding the emergence of novel computational paradigms upon which the future of high-end computing so critically depends.

The paper needs to be put in perspective.

A general-purpose parallel computer must adapt to many variations in an application, including granularity, communication regularity, and dependence on runtime data. For applications with simple static communication patterns, it is straightforward to algorithmically schedule/overlap communication and computation to optimize performance. In the Cell microarchitecture, the programmed scalar core both 1) issues nonpreemptive vector threads to vector cores, and 2) manages the flow of data between the Cell's off-chip local DRAM and the local SRAMs of individual vector cores; this is ideal for software-controlled scheduling/overlap, assuming that the programming effort can be amortized.

Yet computing is also about parallel applications with dynamic, unstructured parallelism. Historically, the correct solution to this problem has been dynamic thread creation ('spawning') together with dynamic scheduling. We also need hardware support for synchronization and scheduling. The authors of the Cell paper are cleverly programming a software-controlled memory hierarchy to stream operands to a blindingly fast vector processor. By orchestrating pre-communication from local DRAM, they _fill_ the vector-thread closures; they tolerate the latency to local DRAM by using long messages.

Fine, I suppose. Even so, the better way to avoid the approaching train wreck in high-end computing is more progress on (heterogeneous) machines with agile threads, cheap synchronization, and low-overhead dynamic scheduling, which alone can deal with dynamic, unstructured parallelism. These machines will be heterogeneous in the deepest sense of the word. Software is a major challenge (see 'Heterogeneous Processing Needs A Software Revolution', forthcoming).

Finally, sparse MV multiply normally requires random-stride access to the source vector 'x'. Are there hidden assumptions in this paper (perhaps matrix preconditioning) that allow DMA transfer of appropriate blocks of 'x' into local stores of vector cores? Is the Cell processor really being touted as a _general_ platform for sparse linear algebra?”
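The Crusader's last point is easiest to see in code. In the standard compressed sparse row (CSR) formulation of sparse matrix-vector multiply, every nonzero gathers an element of the source vector through a stored column index, so the access pattern into 'x' is data-dependent rather than streaming — exactly the kind of access a DMA engine cannot prefetch without knowing the matrix structure in advance. A minimal illustrative sketch (not taken from the Berkeley paper):

```python
# CSR storage for the sparse matrix:
#   [[2, 0, 1],
#    [0, 3, 0],
#    [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]      # column of each nonzero
row_ptr = [0, 2, 3, 5]         # where each row's nonzeros begin

def spmv(values, col_idx, row_ptr, x):
    """y = A*x for a CSR matrix A."""
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            # The gather x[col_idx[k]] is the random-stride access:
            # its address depends on the matrix's sparsity pattern.
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

print(spmv(values, col_idx, row_ptr, [1.0, 2.0, 3.0]))   # -> [5.0, 6.0, 19.0]
```

Unless the matrix is blocked or reordered so that each SPE only ever touches a contiguous slice of 'x' that fits in its local store, those gathers defeat a bulk-DMA strategy — which is precisely the hidden assumption the Crusader is asking about.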
One interesting addendum to the story regards the Berkeley researchers' proposed Cell+ architecture, which is designed to boost the processor's 64-bit floating-point performance. There may actually be an alternative approach to speeding up double-precision performance on this architecture. Jack Dongarra, director of the Innovative Computing Laboratory at the University of Tennessee, and his colleagues have devised software that implements 64-bit floating-point accuracy using 32-bit floating-point math. One of the processors they targeted was the Cell. The results of this work will be featured in an upcoming issue of HPCwire.
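The Dongarra group's specific software hasn't been detailed here yet, but the general technique — mixed-precision iterative refinement — is well established: solve the system in fast 32-bit arithmetic, compute the residual in 64-bit, solve for a correction in 32-bit again, and repeat until the answer reaches double-precision accuracy. A toy Python sketch of the idea (the 2x2 Cramer's-rule solver, the matrix, and the emulation of single precision by rounding through IEEE binary32 are all my own illustrative choices, not the team's code):

```python
import struct

def f32(v):
    """Round a Python double to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', v))[0]

def solve2_single(A, r):
    """Solve a 2x2 system by Cramer's rule, rounding every operation
    to single precision — stands in for the fast 32-bit solve."""
    a, b, c, d = f32(A[0][0]), f32(A[0][1]), f32(A[1][0]), f32(A[1][1])
    r0, r1 = f32(r[0]), f32(r[1])
    det = f32(f32(a * d) - f32(b * c))
    return [f32(f32(f32(r0 * d) - f32(b * r1)) / det),
            f32(f32(f32(a * r1) - f32(c * r0)) / det)]

def refine(A, b, iters=5):
    """Mixed-precision iterative refinement:
    32-bit solves, 64-bit residuals and accumulation."""
    x = solve2_single(A, b)                  # initial single-precision solve
    for _ in range(iters):
        # residual computed in full double precision
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        d = solve2_single(A, r)              # cheap single-precision correction
        x = [x[i] + d[i] for i in range(2)]  # accumulate in double
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = refine(A, b)   # converges to the double-precision answer (1/11, 7/11)
```

The appeal on the Cell is obvious: the bulk of the flops (the solves) run at the SPEs' fast single-precision rate, while only the inexpensive residual steps need the chip's much slower 64-bit arithmetic.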
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].