August 31, 2010

Startup Makes Liquid Cooling an Immersive Experience

by Michael Feldman

There’s nothing like a blazing hot summer to focus one’s attention on the best ways to keep cool. That goes for datacenter operators as well, who are equally worried about keeping their servers properly chilled. While there is no shortage of innovative cooling solutions being proffered by various vendors, a new liquid immersion cooling solution from startup Green Revolution Cooling could end up being the best of them all.

The stakes for more efficient datacenter cooling are already high. In a traditional air-cooled facility, cooling eats up a third to more than half of the total energy bill. Making cooling more efficient leaves more money available for computing, which, after all, is the central purpose of the datacenter. Efficient cooling is an especially important consideration in high performance computing, since this class of users gravitates toward faster and denser (and thus hotter) server configurations. If the cooling setup is not optimal, you end up sacrificing a lot of FLOPS to it.

With the increasing density of servers, storage, switches and other equipment, facility managers are taking an extra hard look at liquid cooling. Water-cooled servers have been around for decades, and direct-cooled CPUs are now being offered by a handful of vendors. Submerged liquid cooling, too, has been around since the days of the Cray 2, but this technology may be poised for a big comeback.

Servers Take a Bath

Green Revolution Cooling (GRC), a two-year-old company based in Austin, Texas, is offering a general-purpose liquid immersion cooling solution that they introduced at SC09 in Portland last November. It was selected as one of the “Disruptive Technologies of the Year” for the 2009 conference, an award they’ve recaptured for SC10.

In a nutshell, the system consists of a 42U rack enclosure tipped on its back and filled with an inert mineral oil mixture in which you immerse the server hardware. A pump is used to circulate the oil to an external heat exchanger, typically located outside the building.

The big advantage is that, unlike water, the oil formulation is not electrically conductive, yet it has 1,200 times the heat capacity of air. And since the oil is in direct contact with all the components, it only needs to be cooled down to about 104F (40C) to be effective. (CPUs can operate at 75C and hard drives at 45C.) Unless your datacenter happens to be located in Yuma, Arizona, cooling a liquid to 40C is relatively easy with a simple heat exchanger or cooling tower. The solution is advertised to reduce cooling energy by 90 percent and cut overall power consumption in the datacenter by up to 45 percent. The pitch is that a single 10kW server rack at 8 cents per kWh will save over $5,000 per year on energy costs alone.
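As a rough sanity check on that last figure, here is a minimal back-of-the-envelope sketch in Python. The cooling-overhead assumption (conventional air cooling drawing roughly 0.7 to 1.0 times the IT load, consistent with the one-third-to-one-half share cited above) is illustrative and not a number supplied by GRC; the 10kW rack, 8 cents per kWh, and 90 percent cooling reduction come from the article.

    # Rough check of the advertised savings for a single 10 kW rack.
    # Assumed (not from GRC): air cooling draws ~0.7-1.0x the IT load,
    # and immersion cooling eliminates ~90% of that cooling energy.
    IT_LOAD_KW = 10.0
    PRICE_PER_KWH = 0.08      # 8 cents per kWh
    HOURS_PER_YEAR = 8760

    for cooling_overhead in (0.7, 1.0):
        saved_kw = IT_LOAD_KW * cooling_overhead * 0.90
        savings = saved_kw * HOURS_PER_YEAR * PRICE_PER_KWH
        print(f"overhead {cooling_overhead:.1f}x -> ${savings:,.0f} per year")
    # Prints roughly $4,400 and $6,300, bracketing the quoted $5,000 figure.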

According to Green Revolution co-founder Christiaan Best, basically any piece of datacenter equipment — rackmount server, blade, switch — that adheres to the standard 19-inch form factor can be slid into the GRC enclosure. The only equipment modifications required are the removal of the internal fans (you don’t need air cooling anymore) and the sealing of any hard drives with an epoxy coating to make them airtight. Typically this procedure takes a few minutes per server.

Because the GRC enclosure is laid on its back, it does take up more floor space than a regular vertical rack. But since you no longer need hot aisles, chillers, and CRAC units, there is extra square footage to play with. Also, because there is no need to run cold air beneath the equipment anymore, the raised floor is now superfluous. “Essentially you could run it in a barn,” says Best. “All you need is a level floor.”

If you’re looking for performance, the GRC rack allows you to overclock the processors without worrying about melting the server. An NSF-funded study found that cranking up the clock on an Intel E5520 “Nehalem” CPU inside a GRC-cooled server yielded a 54 percent performance boost on Linpack, while keeping the CPU temperature at 76C. The server cost per gigaflop was reduced by about 50 percent.

It’s not just for overclocking. Theoretically, you could throw almost any sort of artificially dense board — multi-GPU servers, custom blades with 10 CPUs on the motherboard, etc. — into the oil bath and realize the additional cost benefit of shrinking down your hardware footprint.

One possible roadblock to widespread adoption is the lack of warranty support from the OEMs. Warranties don’t typically allow the customer to take the server apart and dunk it into foreign liquids. According to Best, they’ve been talking with all the major OEMs to get the solution qualified under the original warranties, but so far none has committed to supporting the GRC setup. Since many of the big system vendors have their own liquid cooling solutions they’d like to sell, they are likely to be less than enthusiastic about qualifying a third-party alternative.

In any case, Best says they’ve retained third-party support that will honor the original equipment warranties, so customers can be covered for any mishaps. GRC has logged over a quarter million server hours on their in-house test system and has yet to encounter a failure, apart from mechanical hard drive failures. Although there is no data to support it yet, Best is fairly certain that their solution will extend the life of the servers, given the more stable thermal environment, the lack of vibration from internal fans, and the elimination of oxidation on the electrical contacts.

Looking for a Few Brave Customers

Austin-based Midas Networks, a colocation firm, is the company’s first customer. Midas has purchased four of the GRC racks, and the systems are scheduled to be up and running later this year. Best says they also have a number of other customers in the pipeline, including some with HPC facilities, but no checks are in the bank just yet.

With the exception of Green Revolution itself, the Texas Advanced Computing Center (TACC) has acquired the most experience with the technology. TACC installed a pre-production GRC unit back in April and has been putting the system through its paces for the past five months.

Even in oil-rich Texas, energy is not cheap, so power savings have become a big priority at TACC. “We’re really, really chill-water limited where we are now,” says Dan Stanzione, TACC’s deputy director. According to him, they don’t have the ability to add any more chilled water capacity, but they do have plans to expand computing capability over the next several years.

The TACC experiment started with immersing some older 1U servers in the GRC enclosure, and since then they’ve added other equipment including InfiniBand switches, GPU-powered servers, and blades. According to Stanzione, all the hardware has performed flawlessly, with no failures to date. They’ve even overclocked some of the server CPUs by 30 to 40 percent, without incident.

At present they have about 10kW of equipment in the rack, and are using just 250 watts to power the GRC solution. That’s more than a 90 percent reduction compared to the 3,000 to 4,000 watts they would have consumed with a conventional air-cooled system. Stanzione estimates that total power consumption for the whole system (equipment plus cooling) has dropped by 25 to 30 percent. “The overall power consumption has been fantastic,” he says.
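Those two percentages hang together arithmetically. A quick sketch using the figures quoted above, with 3,500 watts taken as an illustrative midpoint of the conventional cooling estimate rather than a number from TACC:

    it_load_w = 10_000     # compute equipment in the rack
    grc_cooling_w = 250    # power drawn by the GRC pump and heat exchanger
    air_cooling_w = 3_500  # assumed midpoint of the 3,000-4,000 W estimate

    cooling_cut = 1 - grc_cooling_w / air_cooling_w
    system_cut = (air_cooling_w - grc_cooling_w) / (it_load_w + air_cooling_w)
    print(f"cooling power reduction: {cooling_cut:.0%}")  # ~93%
    print(f"whole-system reduction:  {system_cut:.0%}")   # ~24%, just under the quoted 25-30%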

The TACC crew is going to continue collecting data with the GRC system for the rest of the year. If everything checks out, Stanzione would like to start putting some production units into the upcoming datacenter buildout. They’re already thinking about loading 30 to 40 kW of compute equipment into a single rack, and GRC cooling would make that level of density quite practical. Further into the future, Stanzione is thinking about the cost savings they could accrue by immersing all 140 racks of the center’s equipment. “I think this has a tremendous amount of potential,” he says.

Barring some unforeseen technological breakthrough, datacenter computing is only going to get denser and hotter in the years ahead. And since the cooling capacity of air isn’t going to change, the move to liquid-cooled systems appears all but inevitable. “You may not buy liquid cooling from us,” concludes Best, “but you will buy it from someone.”
