As servers get denser and hotter, datacenters are scrambling for ways to get rid of all that waste heat. But for the average homeowner forking out thousands of dollars a year to keep their house warm, the idea of waste heat is something of an oxymoron.
Why not just put the servers in people's homes and let nature do the rest? A recent article in the New York Times points to some research that aims to do just that. The article describes a paper presented at the latest USENIX Workshop on Hot Topics in Cloud Computing that makes a case for relocating compute cloud servers inside homes, where the waste heat can be recycled at the source. The researchers, who hail from Microsoft and the University of Virginia, refer to the concept as the "data furnace."
From a manageability and security point of view, the researchers admit that, at least initially, the most likely scenario for waste heat recovery is mid-sized datacenters located in or near office buildings or apartment complexes. In fact, experimental versions of this model are starting to pop up around the world, especially where electricity rates are high.
The NYT article mentions the IBM Research-Zurich effort to recapture waste heat from a water-cooled supercomputer for a local university. The technology, called Aquasar, uses hot water to cool the processors on the x86 servers; that water can then be used to warm buildings. Next year, the technology will get a field trial with the three-petaflop SuperMUC supercomputer to be installed in Munich, Germany.
But the paper's primary purpose is to push the envelope beyond these larger-scale setups and look at the feasibility of setting up micro-datacenters as the primary heat source in a single-family home. The researchers performed a TCO analysis of their data furnace idea across different climates by looking at the costs and benefits in five cities: Minneapolis, Pittsburgh, Washington DC, San Francisco, and Houston.
They used Dell PowerEdge 850 servers as the hardware and assumed a 1,700-square-foot residential house that is moderately insulated and sealed, with a heating setpoint of 21°C (70°F). They also assumed the necessary air circulation would be provided by the house's existing heat distribution system. Even though residential electricity rates were assumed to be twice industrial rates, the results showed savings of between $280 and $324 per server per year.
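The basic economics can be sketched in a few lines: the server's waste heat offsets furnace output during the heating season, while the homeowner pays a residential rather than an industrial electricity rate. The sketch below is a back-of-envelope illustration, not the paper's actual TCO model; the wattage, rates, and heating fraction are all placeholder assumptions (only the two-to-one residential-to-industrial rate ratio comes from the article), so the number it produces will not match the paper's $280 to $324 figure, which also accounts for factors this toy calculation ignores.

```python
# Back-of-envelope data furnace economics (illustrative only, not the
# paper's TCO model). Every numeric default below is an assumption.

def annual_savings_per_server(server_watts=250.0,     # assumed server draw
                              residential_rate=0.12,  # $/kWh, assumed
                              industrial_rate=0.06,   # $/kWh, half of residential per the article
                              heating_fraction=0.7,   # share of the year the heat is useful, assumed
                              heat_value_rate=0.12):  # $/kWh of displaced furnace heat, assumed
    """Net annual benefit of hosting one always-on server as a heater."""
    kwh_per_year = server_watts * 24 * 365 / 1000.0
    # Penalty: the server runs on pricier residential power instead of
    # the industrial rate a datacenter would pay.
    rate_penalty = kwh_per_year * (residential_rate - industrial_rate)
    # Credit: during the heating season, every kWh the server dissipates
    # displaces a kWh of furnace output.
    heat_credit = kwh_per_year * heating_fraction * heat_value_rate
    return heat_credit - rate_penalty

print(f"${annual_savings_per_server():.2f} per server per year")
```

With these placeholder numbers the net benefit is modest; the real analysis varies the climate (hence the heating fraction) across the five cities, which is why the savings differ by location.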
The analysis was done for generic cloud computing infrastructure, but it could apply to typical HPC setups as well. The data furnaces can house 40 to 400 CPUs, which covers a lot of middle ground for moderate-sized HPC clusters today. In fact, for HPC work the economics may be even more favorable, given that these systems run hotter than the average cloud cluster and the resulting computational work tends to be more valuable.
Transferring large data sets to and fro, however, is another matter, given the limited broadband available to most homes. Security is another potential showstopper, and latency issues may also preclude a number of applications that require real-time response. Also, since your average homeowner is not a computer geek, system management can be another big challenge.
The paper goes into a lot more detail about different scenarios, various classes of data furnaces, and some of the other limitations. And while the approach looks impractical for certain types of applications, it offers a thought-provoking look at how true distributed computing might come to pass in the not-too-distant future.