This week IBM unveiled the HPC packaging of its Power6 chip. The Power 575 server, aimed at the HPC market, is based on the 4.7 GHz version of the Power6. The 575, along with the Power 595 (which uses a 5 GHz version of the Power6 chip), completes IBM’s transition to the new processor in its server lines and will be available in early May.
IBM is a company with three apparently vibrant lines of supercomputers: the Blue Genes, the Power line, and the x86-based systems. Where does IBM think the Power line fits for customers? According to Dave Turek, VP of Deep Computing at IBM, “This is the platform we recommend for applications that are looking for a big memory footprint…[and] this supports both AIX and Linux, so from an operating system choice perspective you really have flexibility in that regard as well.”
The 575 is dense, very dense in fact. One rack can hold 14 2U nodes, each with 32 Power6 cores at 4.7 GHz and up to 256 GB of memory. That puts 448 cores in a rack, compared with the mid-to-high 300s per rack in current offerings from many HPC vendors. IBM has benchmarked the 575 at 600 GFLOPS per node, or 8.4 peak TFLOPS in a rack.
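The rack-level arithmetic is easy to check; a quick sketch using only the figures quoted above:

```python
# Back-of-the-envelope check of the Power 575 rack density figures
# (all inputs are IBM's own quoted numbers).

nodes_per_rack = 14
cores_per_node = 32
gflops_per_node = 600  # IBM's per-node benchmark figure

cores_per_rack = nodes_per_rack * cores_per_node
tflops_per_rack = nodes_per_rack * gflops_per_node / 1000

print(cores_per_rack)   # 448 cores in a rack
print(tflops_per_rack)  # 8.4 peak TFLOPS in a rack
```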
The density and high clock frequency come at a price: the Power 575 is hot. To deal with this problem, and to enhance the system’s green cred, IBM has brought back a blast from the past: liquid cooling.
In order to reduce the current leakage and remove the heat generated by this chip at nearly 5 GHz, IBM is bringing the cooling directly to the CPUs. Chilled water is circulated through the racks and delivered directly to a copper cooling block on the surface of the CPU package.
An IBM representative I spoke with in an email exchange observed that transferring heat is a fundamentally inefficient process, and that adding in multiple transfer processes (as you do when chilled water is used to cool air which is then blown over processors) compounds these inefficiencies. In the Power 575 rack the cold water is applied directly to the surface of the CPU package; no pesky middleman, at least when it comes to cooling the CPU.
But the processors generate only about half the total heat in the system. The Power 575 uses a rear-door heat exchanger to remove 50-60 percent of the heat generated by the other system components before it enters the datacenter.
IBM says that all of this cooling work pays dividends on the energy bill. Despite being 5 times faster than the previous version of this system, the new Power6 575 uses only 40 percent as much power. And, according to the company, this cooling approach also allowed IBM to use 80 percent fewer air conditioning units (known as CRAC units) in its test installation and still keep the machine cool. This reduction helps with two of the items in short supply in many data centers around the world: space and power.
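Taken at face value, those two claims compound. A hedged sketch of the implied performance-per-watt gain, assuming "40 percent as much power" refers to total system power on an equivalent workload (the article does not give absolute baselines):

```python
# Implied performance-per-watt improvement, using only the two
# ratios IBM quotes: 5x the speed at 40% of the power.
# (Derived figure; IBM does not state this number directly.)
speedup = 5.0
power_fraction = 0.40

perf_per_watt_gain = speedup / power_fraction
print(perf_per_watt_gain)  # 12.5x better performance per watt
```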
But what about the water supply itself? Are you still supplying the same amount of water cooling, just redirecting it to the Power 575 racks instead of air conditioning units? IBM says no.
By the company’s calculations a 30 ton CRAC unit typically requires 90 gallons per minute (gpm) of water flow. 30 tons of on-chip water cooling at the same supply temperature requires just 26 gpm. You do still need a chiller in both cases, but since the water-cooled processors operate at lower temperature, the whole system draws less power than an air cooled equivalent system.
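Those flow numbers are consistent with the standard chilled-water sizing rule of thumb, tons ≈ gpm × ΔT / 24 (ΔT in °F): the on-chip loop simply runs a much larger temperature rise per pass than a CRAC coil. A sketch applying that rule to IBM's figures (the ΔT values below are derived from it, not quoted by IBM):

```python
# Standard chilled-water rule of thumb: tons = gpm * delta_T_F / 24,
# so delta_T_F = tons * 24 / gpm. Applying it to IBM's figures shows
# why on-chip cooling needs far less flow: the water absorbs a much
# larger temperature rise. (Derived values, not quoted by IBM.)

def delta_t_f(tons, gpm):
    """Water temperature rise (deg F) for a given cooling load and flow."""
    return tons * 24 / gpm

print(round(delta_t_f(30, 90), 1))  # CRAC unit at 90 gpm: 8.0 deg F rise
print(round(delta_t_f(30, 26), 1))  # on-chip at 26 gpm: 27.7 deg F rise
```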
What about safety? Sure, everyone already has water on their machine room floor supplying CRAC units, so water in the machine room isn’t a fundamental issue. But water to a relatively few CRACs is a far cry from connecting every rack in a large system directly to the chillers.
What if a pump ruptures and dumps its water? Or what about leaks on fittings and joints?
IBM says it has addressed these issues. On the front end, the company tries to prevent leaks from happening in the first place. Copper joints are brazed where possible, and threaded joints get extra sealant. Special hose joints combining a hose barb with a clamp were developed, and sub-assemblies are tested at 10 times normal operating pressure and leak-checked with helium.
If a leak does occur within the chip cooling system itself, a sensor warns of the leak and, if it’s beyond a certain threshold, openings in the cooling system will direct the water under the raised floor, away from the electrical innards of the rack. For leaks at hose connections, plastic shields contain and direct the water to the bottom of the frame.
Seymour Cray, the iconic supercomputer designer, often joked he was an overpaid plumber because a lot of the system design had to do with figuring out how to run pipes through the machines to extract excess heat. In those days, lots of big inefficient processors forced supercomputer makers to use liquid cooling. Today, energy use and floor space are the big challenges. With IBM committed to fast chips in small boxes, the plumbers at Big Blue will have to be increasingly clever to keep those systems cool.