Startup Makes Liquid Cooling an Immersive Experience

By Michael Feldman

August 31, 2010

There’s nothing like a blazing hot summer to focus one’s attention on the best ways to keep cool. That goes for datacenter operators as well, who are equally worried about keeping their servers properly chilled. While there is no shortage of innovative cooling solutions being proffered by various vendors, a new liquid immersion cooling solution from startup Green Revolution Cooling could end up being the best of them all.

The stakes for more efficient datacenter cooling are already high. In a traditional air-cooled facility, cooling eats up anywhere from a third to more than half of the energy bill. Making cooling more efficient leaves more money available for computing, which, after all, is the central purpose of the datacenter. Efficient cooling is an especially important consideration in high performance computing, since this class of users gravitates toward faster and denser (and thus hotter) server configurations. If the cooling setup in the center is not optimal, you end up sacrificing a lot of FLOPS for cooling.

With the increasing density of servers, storage, switches and other equipment, facility managers are taking an extra hard look at liquid cooling. Water-cooled servers have been around for decades, and direct-cooled CPUs are now being offered by a handful of vendors. Submerged liquid cooling, too, has been around since the days of the Cray-2, but this technology may be poised for a big comeback.

Servers Take a Bath

Green Revolution Cooling (GRC), a two-year-old company based in Austin, Texas, is offering a general-purpose liquid immersion cooling solution that it introduced at SC09 in Portland last November. It was selected as one of the “Disruptive Technologies of the Year” for the 2009 conference, an award the company has recaptured for SC10.

In a nutshell, the system consists of a 42U rack enclosure tipped on its back and filled with an inert mineral oil mixture in which you immerse the server hardware. A pump is used to circulate the oil to an external heat exchanger, typically located outside the building.

The big advantage is that, unlike water, the oil formulation is not electrically conductive, yet it has 1,200 times the heat capacity of air. And since the oil is in direct contact with all the components, it only needs to be cooled down to about 104F (40C) to be effective. (CPUs can operate at 75C and hard drives at 45C.) Unless your datacenter happens to be located in Yuma, Arizona, cooling a liquid to 40C is relatively easy with a simple heat exchanger or cooling tower. The solution is advertised to reduce cooling energy by 90 percent and cut overall power consumption in the datacenter by up to 45 percent. The pitch is that a single 10kW server rack at 8 cents per kWh will save over $5,000 per year on energy costs alone.
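
As a rough sanity check, that figure works out if you assume the cooling plant in a conventional air-cooled facility draws on the order of 0.8 watts for every watt of IT load (our assumption, consistent with the one-third-to-one-half figure above) and that the GRC setup eliminates 90 percent of that cooling energy. A minimal back-of-envelope sketch in Python:

# Back-of-envelope check of the $5,000-per-year claim for a 10 kW rack.
# The cooling-to-IT ratio is an assumption, not a figure from GRC.
IT_LOAD_KW = 10.0          # rack IT load
PRICE_PER_KWH = 0.08       # dollars per kWh
HOURS_PER_YEAR = 8760
COOLING_TO_IT_RATIO = 0.8  # assumed conventional cooling overhead
COOLING_ENERGY_CUT = 0.90  # GRC's advertised reduction in cooling energy

it_cost = IT_LOAD_KW * HOURS_PER_YEAR * PRICE_PER_KWH
cooling_cost = it_cost * COOLING_TO_IT_RATIO
annual_savings = cooling_cost * COOLING_ENERGY_CUT

print(f"Annual IT energy cost:      ${it_cost:,.0f}")          # ~$7,008
print(f"Conventional cooling cost:  ${cooling_cost:,.0f}")     # ~$5,606
print(f"Savings at a 90% cut:       ${annual_savings:,.0f}")   # ~$5,046

Under those assumptions the savings land just over $5,000 a year, in line with the company's pitch.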

According to Green Revolution co-founder Christiaan Best, basically any piece of datacenter equipment — rackmount server, blade, switch — that adheres to the standard 19-inch form factor can be slid into the GRC enclosure. The only equipment modifications required are the removal of the internal fans (you don’t need air cooling any more) and sealing any hard drive units with an epoxy coating to make them airtight. Typically this procedure takes a few minutes per server.

Because the GRC enclosure is laid on its back, it does take up more floor space than a regular vertical rack. But since you no longer need hot aisles, chillers, and CRAC units, there is extra square footage to play with. Also, because there is no need to run cold air beneath the equipment anymore, the raised floor is now superfluous. “Essentially you could run it in a barn,” says Best. “All you need is a level floor.”

If you’re looking for performance, the GRC rack allows you to overclock the processors without worrying about melting the server. An NSF-funded study found that cranking up the clock on an Intel E5520 “Nehalem” CPU inside a GRC-cooled server yielded a 54 percent performance boost on Linpack, while keeping the CPU temperature at 76C. The server cost per gigaflop was reduced by about 50 percent.

It’s not just for overclocking. Theoretically, you could throw almost any sort of artificially dense board — multi-GPU servers, custom blades with 10 CPUs on the motherboard, etc. — into the oil bath and realize the additional cost benefit of shrinking down your hardware footprint.

One possible roadblock to widespread adoption is the lack of warranty support from the OEMs. Warranties don’t typically allow the customer to take the server apart and dunk it into foreign liquids. According to Best, they’ve been talking with all the major OEMs to get their solution qualified under the original warranties, but currently none have committed to supporting the GRC setup. Since many of the big system vendors have their own liquid cooling solutions they’d like to sell, they are likely to be less than enthusiastic about qualifying a third-party solution.

In any case, Best says they’ve retained third-party support that will honor the original equipment warranties, so customers can be covered for any mishaps. GRC has logged over a quarter million server hours on their in-house test system and has yet to encounter a failure (with the exception of hard drive mechanical failures). Although there is no data to support it, Best is fairly certain that their solution will extend the life of the servers, given the more stable thermal environment, the lack of vibration from internal fans, and the elimination of oxidation on the electrical contacts.

Looking for a Few Brave Customers

Austin-based Midas Networks, a colocation firm, is the company’s first customer. Midas has purchased four of the GRC racks, and the systems are scheduled to be up and running later this year. Best says they also have a number of other customers in the pipeline, including some with HPC facilities, but no checks are in the bank just yet.

With the exception of Green Revolution itself, the Texas Advanced Computing Center (TACC) has acquired the most experience with the technology. TACC installed a pre-production GRC unit back in April and has been putting the system through its paces for the past five months.

Even in oil-rich Texas, energy is not cheap, so power savings have become a big priority at TACC. “We’re really, really chill-water limited where we are now,” says Dan Stanzione, TACC’s deputy director. According to him, they don’t have the ability to add any more chilled water capacity, but do have plans to expand computing capability over the next several years.

The TACC experiment started with immersing some older 1U servers in the GRC enclosure, and since then they’ve added other equipment including InfiniBand switches, GPU-powered servers, and blades. According to Stanzione, all the hardware has performed flawlessly, with no failures to date. They’ve even overclocked some of the server CPUs by 30 to 40 percent, without incident.

At present they have about 10kW of equipment in the rack, and are using just 250 watts to power the GRC solution. That’s more than a 90 percent reduction when compared to the 3,000 to 4,000 watts they would have consumed with a conventional air-cooled system. Stanzione estimates that total power consumption for the whole system (equipment plus cooling) was reduced by 25 to 30 percent. “The overall power consumption has been fantastic,” he says.
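
A quick pass over those numbers (a sketch, not TACC’s own accounting) shows how the two figures relate: swapping 3,000 to 4,000 watts of conventional cooling for a 250-watt pump cuts cooling power by more than 90 percent, while total rack draw falls by roughly 20 to 27 percent. The remaining points in Stanzione’s 25 to 30 percent estimate presumably reflect the internal server fans the GRC setup removes, which this sketch does not model.

# Rough check of the TACC figures: 10 kW of IT gear plus a 250 W pump,
# versus the same gear plus an estimated 3,000-4,000 W of air cooling.
# Savings from removing the servers' internal fans are not modeled here.
IT_LOAD_W = 10_000
GRC_COOLING_W = 250

for conv_cooling_w in (3_000, 4_000):
    cooling_cut = 1 - GRC_COOLING_W / conv_cooling_w
    total_cut = 1 - (IT_LOAD_W + GRC_COOLING_W) / (IT_LOAD_W + conv_cooling_w)
    print(f"vs {conv_cooling_w} W air cooling: cooling power down {cooling_cut:.0%}, "
          f"total rack power down {total_cut:.0%}")
# vs 3000 W: cooling down ~92%, total down ~21%
# vs 4000 W: cooling down ~94%, total down ~27%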

The TACC crew is going to continue collecting data with the GRC system for the rest of the year. If everything checks out, Stanzione would like to start putting some production units into the upcoming datacenter buildout. They’re already thinking about loading 30 to 40 kW of compute equipment into a single rack, and GRC cooling would make that level of density quite practical. Further into the future, Stanzione is thinking about the cost savings they could accrue by immersing all 140 racks of the center’s equipment. “I think this has a tremendous amount of potential,” he says.

Barring some unforeseen technological breakthrough, datacenter computing is only going to get denser and hotter in the years ahead. And since the cooling capacity of air isn’t going to change, the move to liquid-cooled systems appears all but inevitable. “You may not buy liquid cooling from us,” concludes Best, “but you will buy it from someone.”
