Larrabee for HPC: Not So Fast
For those of you who thought Intel was angling for an HPC play with its upcoming Larrabee processor family, think again. In case you’re not a regular reader of this publication, Larrabee is Intel’s manycore x86 GPU-like processor scheduled to debut in late 2009 or early 2010. With Larrabee, Intel is gearing up to challenge NVIDIA and AMD for GPU leadership, but doesn’t appear interested in exploiting the chip for GPGPU.
Although the company has engaged in some mixed messaging with regard to Larrabee, recent conversations with Richard Dracott, general manager of Intel's high performance computing business unit, and Stephen Wheat, Intel's senior director of HPC, have convinced me that Larrabee will be targeted only at standard graphics applications and the more generalized visual computing realm. The latter does overlap with the HPC space, inasmuch as visual computing encompasses applications like video rendering and other types of compute-intensive image processing. But for general-purpose HPC, Intel has other designs in mind.
My first hint that Larrabee was being more narrowly targeted came from a recent chat with Dracott at the Wall Street on HPC Conference, where he talked about how the CPU will eventually prevail over accelerator offload engines (GPUs, FPGAs, Cell-type processors and FP ASICs), which I wrote about back in September. His basic point was that CPUs offered a better long-term value proposition than accelerators because industry-standard processors like the x86 offer lower and more predictable software development costs. Also, any performance advantage demonstrated by an accelerator would eventually erode as CPUs continued to evolve — a dubious assumption, I thought, since GPUs and FPGAs are evolving at least as quickly as CPUs. From his perspective, though, a relatively small number of HPC users would continue to experiment with acceleration over the next several years, but would eventually return to the comfort of the CPU.
What I didn’t mention from that conversation is that while Dracott was trash-talking accelerators, he also managed to diss Larrabee — at least as a scientific computing architecture. Although Larrabee would at least offer an x86-compatible ISA, the problem, he said, was that the implementation shares some of the same drawbacks as the traditional GPU for scientific computing — namely, the lack of ECC memory to protect against soft errors and a shortage of double precision floating point capability. From his perspective, that would prevent Larrabee or GPUs from being deployed more generally in high performance computing. “But,” Dracott added, “we are working on products that will meet that need.”
At this point, the nature of those products is a mystery. But Dracott did offer that “it will be feasible to have products that combine small IA cores and large IA cores that would be more suited for some HPC workloads.” To speculate a bit, the architecture will likely have many of the attributes of Larrabee, that is, a manycore x86 design with fully coherent L1 and L2 caches. At least some of those cores will contain wide SIMD units similar to the Larrabee design, but those units will deliver plenty of 64-bit floating point horsepower and be fully IEEE 754 compliant. Also, instead of GDDR-type memory, ECC-capable DDR memory will be supported. In short, it would be an x86 vector processor.
When I talked with Stephen Wheat at the Supercomputing Conference in Austin last month, he reiterated Dracott’s (and Intel’s) CPU-centric view of the universe. Wheat suggested that if accelerator features, like vector processing, become more widely used, they will naturally migrate onto the CPU. There is historical precedent for such a position. Before the 80486 chip, floating point operations were performed in external coprocessors (the x87 series). In the late ’80s, Intel decided FP operations were general-purpose enough to warrant transistor space on the CPU.
The rationale behind feature integration is that it’s difficult to have different silicon engines move forward individually and maintain a balanced performance profile without being tied to a unified architecture. This dovetails nicely with Intel’s business strategy, in which the x86 CPU ISA is the common denominator in all IT sectors — desktop, mobile, enterprise, HPC, and, if Larrabee is successful, gaming and visualization.
So is now the time for heavy-duty vector processing to be integrated as well? “It’s not that far in the future where the things that make these [accelerators] attractive can become an integral part of the processor itself,” offered Wheat.
The market for this family of chips is not necessarily going to be driven by server-based HPC. Intel has a whole strategy around model-based computing, called RMS (recognition, mining and synthesis), which requires teraflops aplenty. RMS encompasses a range of advanced apps from real-time rendering to portfolio management, and some of these will be running on desktop and mobile platforms.
The company is already starting to build a software foundation for such applications with Intel Parallel Studio. Released as a beta this week, Intel Parallel Studio is a suite of tools and libraries for Windows C/C++ developers that supports Threading Building Blocks (TBB), OpenMP, auto-vectorization and other parallel programming goodies. It is built around the concept of “forward scaling,” Intel’s nomenclature for automatic application scaling for multicore/manycore architectures.
The idea is that the same programs that are initially built for quad-core Nehalems can transparently scale up to 8-core Nehalems, manycore Larrabees, and all their descendants. Intel’s research language for manycore throughput computing, Ct, will probably get productized into the company’s software offerings at some point as these manycore products start to hit the streets. If all goes according to plan, by 2015 manycore x86 will be the dominant processor species and parallel programming will be the norm.