April 11, 2008
This week IBM unveiled the HPC packaging of its Power6 chip. The Power 575 server, aimed at the HPC market, is based on the 4.7 GHz version of the Power6. The 575, along with the Power 595 (which uses a 5 GHz version of the Power6 chip), completes IBM's transition to the new processor in its server lines and will be available in early May.
IBM is a company with three apparently vibrant lines of supercomputers: the Blue Genes, the Power line, and the x86-based systems. Where does IBM think the Power line fits for customers? According to Dave Turek, VP of Deep Computing at IBM, "This is the platform we recommend for applications that are looking for a big memory footprint...[and] this supports both AIX and Linux, so from an operating system choice perspective you really have flexibility in that regard as well."
The 575 is dense, very dense in fact. One rack can hold 14 2U nodes, each with 32 4.7 GHz Power6 cores and up to 256 GB of memory per node. That puts 448 cores in a rack, compared to current offerings from many HPC vendors that top out in the mid-to-high 300s of cores per rack. IBM has benchmarked the 575 at 600 GFLOPS per node, or 8.4 peak TFLOPS in a rack.
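Those peak numbers follow directly from the clock rate and core count if you assume four floating-point operations per core per cycle (two fused multiply-add pipes), the figure usually cited for Power6. A minimal back-of-the-envelope sketch, with the flops-per-cycle value as an assumption:

```python
# Back-of-the-envelope check of the Power 575 peak figures quoted above.
# Assumption: 4 double-precision flops per core per cycle (2 FMA pipes x 2 flops).

clock_hz = 4.7e9          # 4.7 GHz Power6
flops_per_cycle = 4       # assumed for Power6
cores_per_node = 32
nodes_per_rack = 14

node_gflops = clock_hz * flops_per_cycle * cores_per_node / 1e9
rack_tflops = node_gflops * nodes_per_rack / 1e3
cores_per_rack = cores_per_node * nodes_per_rack

print(f"{node_gflops:.0f} GFLOPS per node")      # ~602, matching IBM's ~600
print(f"{rack_tflops:.2f} peak TFLOPS per rack") # ~8.4
print(f"{cores_per_rack} cores per rack")        # 448
```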
The density and high clock frequency come at a price: the Power 575 is hot. To deal with this problem, and to enhance the system's green cred, IBM has brought back a blast from the past: liquid cooling.
In order to reduce the current leakage and remove the heat generated by this chip at nearly 5 GHz, IBM is bringing the cooling directly to the CPUs. Chilled water is circulated through the racks and delivered directly to a copper cooling block on the surface of the CPU package.
An IBM representative I spoke with in an email exchange observed that transferring heat is a fundamentally inefficient process, and that adding in multiple transfer processes (as you do when chilled water is used to cool air which is then blown over processors) compounds these inefficiencies. In the Power 575 rack the cold water is applied directly to the surface of the CPU package; no pesky middleman, at least when it comes to cooling the CPU.
But the processors generate only about half the total heat in the system. The Power 575 uses a rear-door heat exchanger to remove 50-60 percent of the heat generated by the other system components before it enters the datacenter.
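Taken together, the two cooling stages imply that only a small fraction of the rack's heat ever reaches the room air. A rough tally based on the figures above, assuming the on-chip water loop captures essentially all of the processor heat:

```python
# Rough heat budget for a Power 575 rack, using the article's figures.
# Assumptions: CPUs produce ~50% of the total heat and the on-chip loop
# removes essentially all of it; the rear-door exchanger then removes
# 50-60% of the remaining (non-processor) heat.

processor_fraction = 0.5        # share of total heat from the CPUs
rear_door_capture = (0.5, 0.6)  # range quoted for the rear-door exchanger

for capture in rear_door_capture:
    to_room_air = (1 - processor_fraction) * (1 - capture)
    print(f"rear door at {capture:.0%}: ~{to_room_air:.0%} of total heat reaches room air")
# Roughly 20-25% of the heat is left for the room's air conditioning.
```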
IBM says that all of this cooling work pays dividends on the energy bill. Despite being 5 times faster than the previous version of this system, the new Power6 575 uses only 40 percent as much power. And, according to the company, this cooling approach also allowed IBM to use 80 percent fewer air conditioning units (known as CRAC units) in its test installation and still keep the machine cool. This reduction helps with two of the items in short supply in many data centers around the world: space and power.
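Taken at face value, those two claims compound: a system that is 5 times faster while drawing 40 percent of the power delivers 12.5 times the performance per watt. A quick illustration of the stated figures:

```python
# Performance-per-watt implication of IBM's stated figures (illustrative only).
speedup = 5.0      # new system claimed to be 5x faster than its predecessor
power_ratio = 0.4  # while drawing only 40% as much power

print(f"~{speedup / power_ratio:.1f}x performance per watt")  # 12.5x
```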
But what about the water supply itself? Are you still supplying the same amount of water cooling, just redirecting it to the Power 575 racks instead of air conditioning units? IBM says no.
By the company's calculations, a 30-ton CRAC unit typically requires 90 gallons per minute (gpm) of water flow, while 30 tons of on-chip water cooling at the same supply temperature requires just 26 gpm. You still need a chiller in both cases, but since the water-cooled processors operate at a lower temperature, the whole system draws less power than an equivalent air-cooled system.
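The difference in flow rates is basic heat-capacity physics: moving the same 30 tons (roughly 105 kW) of heat with less water means accepting a larger temperature rise across the loop, which a cold plate on the CPU package can tolerate but a room-air CRAC coil cannot. A minimal sketch of that relationship, assuming standard properties for chilled water:

```python
# How the same 30-ton heat load trades water flow rate against temperature rise.
# Q = mdot * c_p * dT, with standard properties assumed for chilled water.

TON_TO_KW = 3.517          # 1 ton of refrigeration ~ 3.517 kW
GPM_TO_KG_S = 3.7854 / 60  # US gallons/min of water -> kg/s (density ~1 kg/L)
CP_WATER = 4.186           # kJ/(kg*K)

load_kw = 30 * TON_TO_KW   # ~105.5 kW

for label, gpm in (("CRAC coil", 90), ("on-chip cooling", 26)):
    mdot = gpm * GPM_TO_KG_S               # mass flow in kg/s
    delta_t = load_kw / (mdot * CP_WATER)  # temperature rise across the loop
    print(f"{label}: {gpm} gpm -> water warms by ~{delta_t:.1f} degC")
# The 26 gpm loop simply runs its water ~15 degC warmer instead of ~4 degC.
```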
What about safety? Sure, everyone already has water on their machine room floor supplying CRAC units, so water in the machine room isn't a fundamental issue. But water to a relatively few CRACs is a far cry from connecting every rack in a large system directly to the chillers.
What if a pump ruptures and dumps its water? Or what about leaks on fittings and joints?
IBM says it has addressed these issues. On the front end, the company tries to prevent leaks from happening in the first place. Copper joints are brazed where possible, and threaded joints get extra sealants. Special hose joints with a combination of a hose barb and clamp were developed, and sub-assemblies are tested to 10 times the routine operational pressure and helium leak tested.
If a leak does occur within the chip cooling system itself, a sensor warns of the leak and, if it's beyond a certain threshold, openings in the cooling system will direct the water under the raised floor, away from the electrical innards of the rack. For leaks at hose connections, plastic shields contain and direct the water to the bottom of the frame.
Seymour Cray, the iconic supercomputer designer, often joked he was an overpaid plumber because a lot of the system design had to do with figuring out how to run pipes through the machines to extract excess heat. In those days, lots of big inefficient processors forced supercomputer makers to use liquid cooling. Today, energy use and floor space are the big challenges. With IBM committed to fast chips in small boxes, the plumbers at Big Blue will have to be increasingly clever to keep those systems cool.