September 05, 2012
It’s no secret that datacenters consume massive amounts of energy. Along with supporting hundreds or thousands of servers, these facilities must also provide electricity for lighting, air conditioning and other essential services. This has led to unique designs and practices aimed at increasing power efficiency. Last week, GigaOM reported on a year-long test conducted by Intel in which servers were cooled with non-conductive mineral oil.
The chipmaker decided to test out Green Revolution Cooling’s CarnotJet system, a setup that looks like a high-tech bath for servers. As the oil absorbs heat from the hardware components, it is cycled through a radiator to dissipate that heat, and the cooled liquid is then returned to the servers.
By submerging server components in its GreenDEF coolant, the company claims the system can cut cooling energy consumption by 90 to 95 percent compared to air-cooled solutions. And since submerged servers no longer require fans, additional power is saved on the IT side as well.
Datacenter efficiency is commonly measured by power usage effectiveness (PUE), calculated as total facility power divided by IT equipment power. The goal is to get a PUE as close to 1.00 as possible, meaning nearly all power goes to the computing hardware itself.
Intel reported that the CarnotJet-cooled servers achieved a PUE between 1.02 and 1.03. By comparison, standard air-cooled racks operate at a PUE of roughly 1.6. This isn’t to say that all air-cooled datacenters are unable to achieve high efficiency ratings; Facebook’s datacenter in Prineville, Oregon, for example, achieved a PUE of 1.07.
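To put those ratings in concrete terms, here is a quick sketch of the PUE arithmetic using the figures from the article. The 100 kW IT load is a hypothetical number chosen purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical rack drawing 100 kW of IT load:
# at PUE 1.03 (oil immersion), overhead is only ~3 kW of extra facility power,
# while at PUE 1.6 (typical air cooling) the same load needs ~60 kW of overhead.
oil_cooled = pue(103.0, 100.0)   # 1.03 -- upper end of Intel's reported range
air_cooled = pue(160.0, 100.0)   # 1.6  -- typical air-cooled rack

print(f"oil-immersion PUE: {oil_cooled:.2f}")
print(f"air-cooled PUE:    {air_cooled:.2f}")
```

The difference between the two cases is the facility overhead (cooling, power distribution, lighting) per watt of useful compute, which is exactly what immersion cooling attacks.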
As with any new technology, this one comes with a unique downside. If a component fails or a user decides to upgrade their servers, dealing with the oil-soaked hardware can be a messy endeavor. Wired reported that one technician involved in the test always kept a spare change of clothes handy, just in case.
Intel isn’t the only company taking the oil-based racks for a spin. The Texas Advanced Computing Center (TACC) was Green Revolution Cooling’s first customer, using a CarnotJet system to cool what was once the ninth-fastest system on the TOP500 list. Since then, CGGVeritas, KTH in Stockholm and Tokyo Tech in Japan have all adopted the company’s unique cooling technology, and Supermicro is getting ready to ship oil-cooled servers to interested customers.
Full story at GigaOM