November 28, 2011
As servers get denser and hotter, datacenters are scrambling for ways to get rid of all that waste heat. But for the average homeowner forking out thousands of dollars a year to keep their house warm, the idea of waste heat is something of an oxymoron.
Why not just put the servers in people's homes and let nature do the rest? A recent article in the New York Times points to some research that aims to do just that. The article describes a paper presented at the latest USENIX Workshop on Hot Topics in Cloud Computing that makes a case for relocating compute cloud servers inside homes, where the waste heat can be recycled at the source. The researchers, who hail from Microsoft and the University of Virginia, refer to the concept as the "data furnace."
From a manageability and security point of view, the researchers admit that, at least initially, the most likely scenario for waste heat recovery is mid-sized datacenters located in or near office buildings or apartment complexes. In fact, experimental versions of this model are starting to pop up around the world, especially where electricity rates are high.
The NYT article mentions the IBM Research-Zurich effort to recapture waste heat from a water-cooled supercomputer for a local university. The technology, called Aquasar, uses hot water to cool the processors on the x86 servers -- water which can then be used to warm buildings. Next year, that research will get a field trial with the three-petaflop SuperMUC supercomputer to be installed in Munich, Germany.
But the paper's primary purpose is to push the envelope beyond these larger-scale setups and look at the feasibility of setting up micro-datacenters as the primary heat source in a single-family home. The researchers performed a total cost of ownership (TCO) analysis of their data furnace idea across different climates, weighing the costs and benefits in five cities: Minneapolis, Pittsburgh, Washington DC, San Francisco, and Houston.
They used Dell PowerEdge 850 servers as the hardware and assumed a 1,700-square-foot residential house that is moderately insulated and sealed, with a heating setpoint of 21°C (70°F). They also assumed the necessary air circulation would be provided by the house's existing heat distribution system. Even with residential electricity rates assumed to be twice industrial rates, the results showed savings of between $280 and $324 per year per server.
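To give a feel for how such a calculation might be framed, the toy Python sketch below nets the value of the furnace heat a server displaces against the premium of buying its electricity at residential rather than industrial rates. This is not the paper's model, and every constant in it is an illustrative assumption rather than a figure from the study.

# Back-of-envelope sketch of a per-server "data furnace" saving.
# All constants are illustrative assumptions, NOT values from the paper.

SERVER_POWER_KW = 0.25        # assumed average draw of one server, in kW
HEATING_HOURS = 4000          # assumed hours per year the house calls for heat
RESIDENTIAL_RATE = 0.12       # assumed residential electricity price, $/kWh
INDUSTRIAL_RATE = 0.06        # assumed industrial electricity price, $/kWh
FURNACE_HEAT_COST = 0.10      # assumed cost of conventional heat, $/kWh delivered

# Essentially all server power ends up as heat, so useful heat is simply
# power times the hours the house actually needs heating.
heat_kwh = SERVER_POWER_KW * HEATING_HOURS

# Value of the furnace heat the server displaces.
heating_offset = heat_kwh * FURNACE_HEAT_COST

# Penalty for buying that electricity at residential instead of industrial rates.
rate_premium = heat_kwh * (RESIDENTIAL_RATE - INDUSTRIAL_RATE)

net_saving = heating_offset - rate_premium
print(f"Estimated net saving per server: ${net_saving:.0f} per year")

The paper's actual TCO comparison is considerably more detailed and presumably folds in factors such as avoided datacenter infrastructure and cooling costs, which is how it arrives at the larger per-server figures quoted above.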
The analysis was done for generic cloud computing infrastructure, but it could apply to typical HPC setups as well. The data furnaces can house 40 to 400 CPUs, which covers a lot of middle ground for moderate-sized HPC clusters today. In fact, for HPC work the economics may be even more favorable, given that these systems run hotter than the average cloud cluster and the resulting computational work tends to be more valuable.
Transferring large data sets back and forth, however, is another matter, given the limited broadband available to most homes. Security is another potential showstopper, and latency may preclude applications that require real-time response. And since the average homeowner is not a computer geek, system management is yet another challenge.
The paper goes into a lot more detail about different scenarios, various classes of data furnaces, and some of the other limitations. And while this looks impractical for certain types of applications, it offers a thought-provoking look at how true distributed computing might come to pass in the not too distant future.
Full story at www.usenix.org