The Seduction of Ultra Low-Power Servers
It seems like every time a chip vendor talks about its latest netbook processor, there is a flurry of articles about how such chips could be worked into a server. The impetus for this line of thinking is the power crisis in the datacenter. Processors aimed at netbooks and handheld consumer devices are ultra low-power and usually deliver better performance per watt than traditional server chippery. Not only that, these power-saving CPUs sell for just a fraction of the price of a typical server processor.
The most recent example of this line of thinking was precipitated by AMD’s unveiling of its upcoming Bobcat microarchitecture last month at Hot Chips. Bobcat is the company’s future core design destined for the netbook and notebook market. It wasn’t long before articles like this from HotHardware.com showed up, suggesting that the new core design might be a great fit for ultra low-power servers and microblades.
The idea is that these power-sipping CPUs are especially efficient at scaled-out computing, where individual core performance is less important than the aggregate performance of the entire system. The goal, of course, is to offer equivalent computational throughput for much less power than a conventional Opteron- or Xeon-based server. On paper that's true. Bobcat, for example, is advertised as a sub-one-watt core with about 90 percent of the performance of a mainstream notebook chip. Certainly one would expect Bobcat-based CPUs to offer much better performance-per-watt numbers than their larger Opteron brethren.
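The scale-out arithmetic is easy to sketch. Under a fixed power budget, many slow cores can out-produce a few fast ones in aggregate throughput. The per-core numbers below are invented for illustration only, not vendor figures:

```python
# Back-of-the-envelope model of the scale-out argument: at a fixed power
# budget, how much aggregate throughput do small cores buy versus big ones?
# All figures here are illustrative assumptions, not vendor specifications.

def throughput_at_budget(per_core_perf, watts_per_core, power_budget):
    """Cores that fit in the power budget, and their aggregate throughput."""
    cores = int(power_budget // watts_per_core)
    return cores * per_core_perf, cores

# Hypothetical big server core: 10 units of performance at 25 W per core.
big_perf, big_cores = throughput_at_budget(10.0, 25.0, power_budget=100.0)

# Hypothetical Bobcat-class core: 2 units of performance at roughly 1 W.
small_perf, small_cores = throughput_at_budget(2.0, 1.0, power_budget=100.0)

print(f"big cores:   {big_cores:3d} cores, aggregate throughput {big_perf}")
print(f"small cores: {small_cores:3d} cores, aggregate throughput {small_perf}")
```

With these made-up ratios, the small cores deliver five times the aggregate throughput in the same power envelope, which is exactly the pitch. The catch, as discussed below, is that no single task runs any faster.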
In some cases, this creative thinking has gone somewhat further, that is, into actual product roadmaps. Earlier this summer, startup SeaMicro announced it was going to use an Intel 1.6 GHz Atom processor to power a new breed of low-power server. The SM10000 stuffs 512 Atom processors into a single box, while being able to run off-the-shelf applications and operating systems. SeaMicro claims it can deliver performance comparable to a conventional x86 server while using just a fourth of the power and space.
Meanwhile startup Smooth-Stone is looking to use ARM processors as the basis for another kind of low-power server. While ARM chips are mostly associated with cell phones and other mobile devices, the latest designs will include support for OS virtualization and the ability to address up to a terabyte of memory.
Given that there has been little experience with this type of computing, the application set for these ultra low-power servers is still a bit fuzzy. It appears that Dell and SeaMicro are aiming their offerings at cloud hosting, Web farms and other light-load applications. The practical consideration here is that single-thread performance is not all that good on these under-powered chips, especially compared to a Xeon or Opteron processor. But applications that can be divvied up efficiently across many processors into independent lightweight tasks are perfect for this kind of computing.
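The "many independent lightweight tasks" pattern can be illustrated with a toy sketch. The task body and the thread pool here are stand-ins of my own invention; in a real Web farm the workers would be separate cores or nodes rather than threads:

```python
# Toy version of the workload that suits these machines: many independent,
# lightweight tasks with no communication between them. Since each task
# stands alone, throughput scales with worker count rather than per-core speed.

from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Stand-in for a light-load task, such as serving one web request.
    return request_id * 2

# Eight workers, each handling its own request independently.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)
```

Because no task waits on any other, doubling the worker count roughly doubles throughput, which is why aggregate performance, not single-thread speed, is what matters for this class of application.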
On the other hand, where single-thread application performance is the bottleneck, execution times will suffer. Sure, power is expensive, but time is even more so. That makes most compute-bound workloads, including the vast majority of HPC apps, unsuitable for these lowly chips, with the possible exception of embarrassingly parallel codes.
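Amdahl's law makes this trade-off concrete. Assuming a small core runs at a quarter the speed of a big server core (a made-up ratio for illustration), any serial bottleneck puts a hard floor under runtime that no amount of extra small cores can lift:

```python
# Amdahl's-law sketch of why serial bottlenecks punish slow cores.
# Speeds are relative: a big server core is 1.0, a small core 0.25.
# These ratios are assumptions for illustration only.

def runtime(serial_fraction, core_speed, cores):
    """Runtime relative to running the whole job on one speed-1.0 core."""
    serial_time = serial_fraction / core_speed
    parallel_time = (1.0 - serial_fraction) / (core_speed * cores)
    return serial_time + parallel_time

# Embarrassingly parallel (no serial part): many small cores win easily.
print(runtime(0.0, core_speed=0.25, cores=100))  # ~0.04
print(runtime(0.0, core_speed=1.0, cores=4))     # ~0.25

# Half the work is serial: the small-core box can never beat ~2.0,
# while four big cores finish in ~0.625.
print(runtime(0.5, core_speed=0.25, cores=100))  # ~2.02
print(runtime(0.5, core_speed=1.0, cores=4))     # ~0.625
```

The serial term depends only on core speed, so for compute-bound codes with any meaningful sequential portion, the slow cores lose no matter how many of them you buy.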
That might be the end of it if it weren’t for this GPGPU phenomenon. In this case, the CPU is used to drive the GPU, where the most compute-intensive piece of the application is executed. If enough of the app can be offloaded to the graphics accelerator, the CPU need not be all that muscular. Thus a power-sipping CPU might be the perfect companion to the power-hogging GPU.
In practice, though, I don’t think we’re quite there yet. From what I’ve gathered, the profile of many GPU-ported codes is such that they still rely on speedy CPUs for at least a portion of the application. It would be interesting for GPGPU developers to track execution cycles on the two processors, and determine how big a CPU is really required for a given code. It might even give some enterprising vendor an idea about how to build a better balanced GPGPU server.
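That profiling exercise can be modeled crudely. Assuming, say, a 20x GPU speedup on the offloaded portion and a host CPU at a quarter the speed of a full server part (both numbers invented for the sketch), the offload fraction decides whether a weak CPU is good enough:

```python
# Sketch of the profiling question posed above: if a fraction of a code's
# work moves to the GPU at some speedup, how fast must the host CPU be for
# the port to pay off? The speedup and CPU-speed values are assumptions.

def ported_runtime(offload_fraction, cpu_speed, gpu_speedup=20.0):
    """Runtime relative to the all-CPU run on a full-speed (1.0) CPU."""
    gpu_time = offload_fraction / gpu_speedup
    cpu_time = (1.0 - offload_fraction) / cpu_speed
    return gpu_time + cpu_time

# 95% offloaded: even a quarter-speed CPU leaves the port ~4x faster overall.
print(ported_runtime(0.95, cpu_speed=0.25))
# Only 50% offloaded: the weak CPU makes the ported code slower than before.
print(ported_runtime(0.50, cpu_speed=0.25))
```

In this toy model the break-even point moves sharply with the offload fraction, which is why measuring where the cycles actually go, as suggested above, is the right first step toward sizing the CPU in a balanced GPGPU server.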