Startup Makes Liquid Cooling an Immersive Experience

By Michael Feldman

August 31, 2010

There’s nothing like a blazing hot summer to focus one’s attention on the best ways to keep cool. That goes for datacenter operators as well, who are equally worried about keeping their servers properly chilled. While there is no shortage of innovative cooling solutions being proffered by various vendors, a new liquid immersion cooling solution from startup Green Revolution Cooling could end up being the best of them all.

The stakes for more efficient datacenter cooling are already high. In a traditional air-cooled facility, cooling eats up a third to more than half of the energy cost. Making cooling more efficient leaves more money available for computing, which, after all, is the central purpose of the datacenter. Efficient cooling is an especially important consideration in high performance computing, since this class of users gravitates toward faster and denser (and thus hotter) server configurations. If the cooling setup is not optimal, you end up sacrificing a lot of FLOPS for cooling.

With the increasing density of servers, storage, switches and other equipment, facility managers are taking an extra hard look at liquid cooling. Water-cooled servers have been around for decades, and direct-cooled CPUs are now being offered by a handful of vendors. Submerged liquid cooling, too, has been around since the days of the Cray 2, but this technology may be poised for a big comeback.

Servers Take a Bath

Green Revolution Cooling (GRC), a two-year-old company based in Austin, Texas, is offering a general-purpose liquid immersion cooling solution that it introduced at SC09 in Portland last November. It was selected as one of the “Disruptive Technologies of the Year” for the 2009 conference, an award the company has recaptured for SC10.

In a nutshell, the system consists of a 42U rack enclosure tipped on its back and filled with an inert mineral oil mixture in which you immerse the server hardware. A pump is used to circulate the oil to an external heat exchanger, typically located outside the building.

The big advantage is that, unlike water, the oil formulation is not electrically conductive, yet has 1,200 times the heat capacity of air. And since the oil is in direct contact with all the components, it only needs to be cooled down to about 104F (40C) to be effective. (CPUs can operate at 75C and hard drives at 45C.) Unless your datacenter happens to be located in Yuma, Arizona, cooling a liquid to 40C is relatively easy with a simple heat exchanger or cooling tower. The solution is advertised to reduce cooling energy by 90 percent and cut overall power consumption in the datacenter by up to 45 percent. The pitch is that a single 10kW server rack at 8 cents per kWh will save over $5,000 per year on energy costs alone.
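As a rough sanity check on that $5,000 figure, the savings can be reconstructed from the article's own numbers. The cooling-overhead ratio below is an assumption on our part (the article says cooling consumes a third to more than half of a conventional facility's energy, which brackets it), not a number GRC supplied:

```python
HOURS_PER_YEAR = 24 * 365   # 8,760 hours
RATE = 0.08                 # $/kWh, per the article

it_load_kw = 10.0           # the 10 kW rack in GRC's pitch
cooling_overhead = 0.8      # assumed: ~0.8 W of cooling per W of IT load in an air-cooled room

cooling_kw = it_load_kw * cooling_overhead   # 8 kW spent on cooling
saved_kw = cooling_kw * 0.90                 # the advertised 90% cooling-energy reduction
annual_savings = saved_kw * HOURS_PER_YEAR * RATE

print(f"Estimated annual savings: ${annual_savings:,.0f}")
```

With that assumed overhead, the estimate lands just above $5,000 per year, consistent with the pitch; a higher overhead ratio (cooling equal to IT load) pushes it past $6,000.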

According to Green Revolution co-founder Christiaan Best, basically any piece of datacenter equipment — rackmount server, blade, switch — that adheres to the standard 19-inch form factor can be slid into the GRC enclosure. The only equipment modifications required are the removal of the internal fans (you don’t need air cooling any more) and the sealing of any hard drive units with an epoxy coating to make them airtight. Typically this procedure takes a few minutes per server.

Because the GRC enclosure is laid on its back, it does take up more floor space than a regular vertical rack. But since you no longer need hot aisles, chillers, and CRAC units, there is extra square footage to play with. Also, because there is no need to run cold air beneath the equipment anymore, the raised floor is now superfluous. “Essentially you could run it in a barn,” says Best. “All you need is a level floor.”

If you’re looking for performance, the GRC rack allows you to overclock the processors without worrying about melting the server. An NSF-funded study found that cranking up the clock on an Intel E5520 “Nehalem” CPU inside a GRC-cooled server yielded a 54 percent performance boost on Linpack, while keeping the CPU temperature at 76C. The server cost per gigaflop was reduced by about 50 percent.

It’s not just for overclocking. Theoretically, you could throw almost any sort of artificially dense board — multi-GPU servers, custom blades with 10 CPUs on the motherboard, etc. — into the oil bath and realize the additional cost benefit of shrinking down your hardware footprint.

One possible roadblock to widespread adoption is the lack of warranty support from the OEMs. Warranties don’t typically allow the customer to take the server apart and dunk it into foreign liquids. According to Best, they’ve been talking with all the major OEMs to get their solution qualified under the original warranties, but currently none have committed to supporting the GRC setup. Since many of the big system vendors have their own liquid cooling solutions they’d like to sell, they are likely to be less than enthusiastic about qualifying a third-party solution.

In any case, Best says they’ve retained third-party support that will honor the original equipment warranties, so customers can be covered for any mishaps. GRC has logged over a quarter million server hours on their in-house test system and has yet to encounter a failure (with the exception of hard drive mechanical failures). Although there is no data to support it, Best is fairly certain that their solution will extend the life of the servers, given the more stable thermal environment, the lack of vibration from internal fans, and the elimination of oxidation on the electrical contacts.

Looking for a Few Brave Customers

Austin-based Midas Networks, a colocation firm, is the company’s first customer. Midas has purchased four of the GRC racks, and the systems are scheduled to be up and running later this year. Best says they also have a number of other customers in the pipeline, including some with HPC facilities, but no checks are in the bank just yet.

With the exception of Green Revolution itself, the Texas Advanced Computing Center (TACC) has acquired the most experience with the technology. TACC installed a pre-production GRC unit back in April and has been putting the system through its paces for the past five months.

Even in oil-rich Texas, energy is not cheap, so power savings has become a big priority at TACC. “We’re really, really chill-water limited where we are now,” says Dan Stanzione, TACC’s deputy director. According to him, they don’t have the ability to add any more chilled water capacity, but do have plans to expand computing capability over the next several years.

The TACC experiment started with immersing some older 1U servers in the GRC enclosure, and since then they’ve added other equipment including InfiniBand switches, GPU-powered servers, and blades. According to Stanzione, all the hardware has performed flawlessly, with no failures to date. They’ve even overclocked some of the server CPUs by 30 to 40 percent, without incident.

At present they have about 10kW of equipment in the rack, and are using just 250 watts to power the GRC solution. That’s more than a 90 percent reduction compared to the 3,000 to 4,000 watts they would have consumed with a conventional air-cooled system. Stanzione estimates that total power consumption for the whole system (equipment plus cooling) was reduced by 25 to 30 percent. “The overall power consumption has been fantastic,” he says.
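TACC's figures check out arithmetically. Comparing the 250-watt GRC pump load against the 3,000 to 4,000 watts a conventional air-cooled setup would draw:

```python
grc_cooling_w = 250                        # power to run the GRC pump/exchanger
conventional_w_low = 3000                  # low end of the air-cooled estimate
conventional_w_high = 4000                 # high end of the air-cooled estimate

# Fractional reduction in cooling power at each end of the range
reduction_low = 1 - grc_cooling_w / conventional_w_low
reduction_high = 1 - grc_cooling_w / conventional_w_high

print(f"Cooling power reduced by {reduction_low:.1%} to {reduction_high:.1%}")
```

Either way the cooling-power cut comes out north of 90 percent, matching Stanzione's account; the smaller 25 to 30 percent whole-system figure reflects the fact that the 10kW of compute load itself is unchanged.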

The TACC crew is going to continue collecting data with the GRC system for the rest of the year. If everything checks out, Stanzione would like to start putting some production units into the upcoming datacenter buildout. They’re already thinking about loading 30 to 40 kW of compute equipment into a single rack, and GRC cooling would make that level of density quite practical. Further into the future, Stanzione is thinking about the cost savings they could accrue by immersing all 140 racks of the center’s equipment. “I think this has a tremendous amount of potential,” he says.

Barring some unforeseen technological breakthrough, datacenter computing is only going to get denser and hotter in the years ahead. And since the cooling capacity of air isn’t going to change, the move to liquid-cooled systems appears all but inevitable. “You may not buy liquid cooling from us,” concludes Best, “but you will buy it from someone.”
