If you thought computing was just getting interesting with four cores, what happens when the chipmakers start delivering 100-core chips with multiple types of processing units? In this week’s issue, the High End Crusader (HEC) returns, delivering the first of a three-part series about the future of parallel computing and heterogeneous processing. For those of you not familiar with HEC, he’s an HPC insider who has been a regular contributor to HPCwire. He remains anonymous so he can speak freely in this public forum. Anonymous or not, HEC always has an interesting take on which way the cutting edge of computing is slicing.
In part one, HEC describes the current state of high-end computing and gives us a glimpse of the road ahead. In parts two and three he will argue that the community needs to reconceptualize both parallel computing and heterogeneous processing as we move toward what he refers to as “nanocore” — that is, the point at which processors exceed 64 cores. This is the level at which HEC believes “wholly innovative microarchitectural strategies are required to scale further.” The 64-core inflection point he’s referring to applies to general-purpose processor architecture, not simpler GPU or DSP architectures, which already have core counts at this scale and above.
While increased core count will make systems much more powerful, heterogeneity will make them more intelligent. In truth, heterogeneous computing has come to mean many things. Traditionally it refers to matching different types of processor architectures — scalar, vector, multithreaded, etc. — to the types of workloads that are most suited to them. So, for example, an application that needs to do matrix multiplication along with some non-arithmetic control logic might best be served by a system that encompasses both GPUs and CPUs. Other forms of heterogeneity involve the architecture of the memory hierarchy and the programming mechanisms that tie the various hardware models together.
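To make that division of labor concrete, here is a minimal sketch in CUDA C++ (mine, not HEC’s; the matrix size, data and stopping condition are illustrative placeholders) in which the host CPU runs the irregular control logic while the GPU executes the regular, arithmetic-heavy matrix multiply:

```cuda
// A minimal sketch (not from the article) of the CPU/GPU split described
// above: the GPU does the regular, arithmetic-heavy matrix multiply, and
// the host CPU handles the irregular control logic around it. The matrix
// size, data and stopping condition are illustrative placeholders.
#include <cstdio>
#include <cuda_runtime.h>

#define N 512  // assumed square-matrix dimension

// Naive kernel: each thread computes one element of C = A * B.
__global__ void matmul(const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main() {
    size_t bytes = (size_t)N * N * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);  // managed memory keeps the sketch short
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    dim3 block(16, 16);
    dim3 grid((N + 15) / 16, (N + 15) / 16);

    // The CPU's non-arithmetic control logic: decide, pass by pass,
    // whether to keep offloading work to the GPU.
    for (int pass = 0; pass < 4; ++pass) {
        matmul<<<grid, block>>>(A, B, C);  // offload the regular work
        cudaDeviceSynchronize();           // wait for the GPU to finish
        if (C[0] > 1.0e6f) break;          // placeholder stopping condition
    }
    printf("C[0] = %f\n", C[0]);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Even in this toy example, a noticeable share of the code is devoted to allocating, moving and synchronizing data between the two processor types; taming that coordination overhead is precisely what tighter on-chip integration and better system software aim to do.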
On the multicore front, we’re already starting to see some early attempts at nanocore. This week Tilera announced TILE64, a 64-core chip aimed at the high performance embedded computing market. With an architecture that is reminiscent of Intel’s 80-core terascale processor prototype, TILE64 has an 8×8 grid of general-purpose processing cores (tiles) connected via an on-chip network, called iMesh. Tilera’s press release claims that it has achieved a scalable architecture significantly beyond current multicore processors:
Because the aggregate bandwidth is orders of magnitude greater than a bus and the distance between cores is shorter, the iMesh technology can be leveraged to create grids as large or small as an application requires, creating a “computing-by-the-yard” scalability…
By including a communication switch on each core, the processor is able to achieve 27 terabits per second of aggregate on-chip bandwidth. At 1 GHz and just 300 milliwatts per core, the whole (32-bit) processor can reach 192 gigaops. This is just a fraction of Intel’s one-plus teraflop of performance for its 80-core terascale prototype, but to some extent that’s comparing apples to oranges, since TILE64’s figure counts integer operations while Intel’s measures floating point. Still, both vendors take advantage of a tiled arrangement of relatively simple processing cores connected by a 2D mesh to achieve much higher levels of performance than the current crop of commodity processors.
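A quick back-of-the-envelope check on Tilera’s numbers (my arithmetic, not the company’s): 64 cores at 300 milliwatts apiece puts the whole chip at roughly 19 watts, and 192 gigaops spread across 64 cores running at 1 GHz works out to about three operations per core per clock, consistent with each tile issuing a few operations per cycle rather than relying on exotic per-core performance.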
As core count gets into the triple-digit range, the performance of the on-chip network becomes relatively more important than the performance of the individual computational units. The result will be that more silicon logic and power consumption will be devoted to the internal interconnect and off-chip memory access. HEC, in particular, points out that we’re going to have to start paying a lot more attention to the power consumed by the communication elements as these components start to dominate the system architecture.
For its part, Intel has stated its plans to bring the x86 ISA into HEC’s nanocore world, not just with high core counts, but with some elements of heterogeneous computing thrown in as well. Nehalem, the company’s next-generation microarchitecture, will have a heterogeneous-friendly design that will be able to accommodate GPU cores or perhaps other types of acceleration units on-chip. But Nehalem will probably top out at 8 cores.
Intel’s terascale effort, which should be commercially viable in the 2010 timeframe, represents the company’s intention to place hundreds of cores on the same processor die. At least some of these cores will be x86 compatible. But Intel has also talked about incorporating “special-purpose” computational engines for workloads like signal processing, graphics or network security. It’s likely that Intel’s contribution to the PSC/Carnegie Mellon NSF petascale Track 1 bid involved some form of this terascale chip.
Cray, as the extreme example of the high performance system vendor, is fully committed to moving beyond multicore in both core count and heterogeneity. So far, it has only proposed loosely coupled heterogeneous systems that encompass scalar, vector, multithreaded and FPGA processors. It is also actively working on the all-important system software that makes heterogeneous processing accessible to the application developer.
But unless the economic model for processor manufacturing gets turned on its head, system vendors will need to rely on the big chipmakers (e.g., Intel, IBM, AMD, NVIDIA, Sun Microsystems) to supply heterogeneity at the chip level. The expense of microprocessor R&D and the cost of fabs have created a rather exclusive club of chip manufacturers. Of the big chip vendors, only Intel and AMD have shown an inclination to pursue the heterogeneous path — not counting IBM and its Cell processor, which wasn’t really intended to be used for hosting disparate workloads.
While it’s unlikely that processor manufacturing will get turned on its head anytime soon, it’s possible that nanocore will turn it on its side. Imagine a semiconductor manufacturing technology that allowed system vendors to order customized processors from chip manufacturers. So, for example, an OEM that had a contract with an oil & gas company to provide systems for seismic simulations could specify a chip with, say, 80 GPU cores and 20 CPU cores. Maybe even user-designed cores could be included. While a customized processor is likely to be more expensive than a standard one, the value proposition seems pretty compelling when you’re talking about a 100-core chip.
That’s just one example of how the next wave of parallel processing and heterogeneous computing could radically alter the IT ecosystem. Certainly both software vendors and hardware manufacturers will be in for some big changes in the years ahead. Get ready for an interesting ride.
—–
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].