Keeping Your Cool in the Data Center

By Michael Feldman

April 21, 2006

As more computational muscle is incorporated into blade servers, clusters and supercomputers, the resulting increases in power and heat have become a significant challenge for the data center. Power-hungry blade servers, in particular, have become a major source of thermal pollution. Additional IT equipment, such as routers and other communication gear, is also adding to the power and heat loads. The IT manager is left trying to reconcile growing computational demand with the data center’s ability to accommodate it.

Solutions are emerging. Technological advances in processors, such as lower-power chips, multi-core designs and on-chip power management, are being developed to slow power demands. But in the short term, computational demand is overwhelming these advances. Fortunately, companies specializing in power and cooling equipment have developed strategies that address even the most power-demanding computing centers.

One of these companies, American Power Conversion (APC), offers a variety of solutions for powering and cooling the modern data center, which it provides to thousands of data centers around the world for both commercial and non-commercial organizations. Last month, Richard Sawyer, APC’s Data Center Technology director, presented a tutorial session at the High Performance Computing and Communications (HPCC) conference in Newport, Rhode Island, to educate attendees about some of the latest power and cooling strategies the industry has to offer.

Sawyer gives these presentations to educate IT professionals about the industry’s progress over the past few years in solving the high-heat-density problem. The evolution from mainframes to blades is occurring rapidly, and many IT managers are unaware of the strategies that have recently become available to address the ensuing power and heat dilemma.

“Blades reared their ugly head about three years ago,” explained Sawyer. “Manufacturers were all of a sudden dealing with [power] densities of 5 to 20 kW per rack, which led to a lot of hot spots. The hot spots were what drew everybody’s attention; we developed fixes for that. In the past, they always designed data centers around the power reliability array. Today it’s all about cooling.”

According to APC studies, blade servers require about 20 times the power and cooling of the average data center design. In the past five years, blade server power density has increased rapidly, to the point where systems of 24 kW per rack are becoming common. A 24 kW blade rack generates heat equivalent to two electric ranges. This year, IBM has been talking about driving its Blue Gene/L technology, currently at 31 kW per rack, into its BladeCenter products.

“In the last two years, we poured a lot of money into solving the high-heat-density problem in the data center,” said Sawyer. “There’s some interesting technology out there, but it forces a little bit of a rethinking on how to design data centers.”

As a first step in determining a facility’s power and cooling requirements, APC will run a 3D computational fluid dynamics (CFD) analysis using the known data center parameters. Once the model is built, they incorporate the intended IT equipment into the virtual data center. For example, if they’re going to add a couple of blade chassis, they simply plug them into the model and perform a what-if scenario. That lets them know pretty quickly where the capacity is going to be used up and what potential problems could occur.
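To make the idea concrete, here is a minimal sketch, in Python, of the first-order what-if arithmetic such a model automates. It is not a CFD analysis, and the function name, rack loads and cooling budget below are all illustrative assumptions rather than APC figures.

    # A hypothetical first-order what-if capacity check (not a CFD model).
    # All figures are illustrative assumptions, not APC data.
    def capacity_check(room_cooling_kw, rack_loads_kw, new_equipment_kw):
        """Flag whether planned IT additions exceed the room's cooling budget."""
        current = sum(rack_loads_kw)
        projected = current + sum(new_equipment_kw)
        headroom = room_cooling_kw - projected
        return {"current_kw": current, "projected_kw": projected,
                "headroom_kw": headroom, "over_capacity": headroom < 0}

    # What-if: add two 24 kW blade chassis to a room with 200 kW of cooling
    existing_racks = [4, 4, 6, 8, 12]   # today's per-rack loads, in kW
    print(capacity_check(200, existing_racks, [24, 24]))

A real CFD model goes much further, predicting where the hot spots form, but the capacity question it answers has the same shape: projected load versus available cooling.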

“Then we reach into our bag of technological tricks and come up with a [solution] that solves that particular problem for them,” said Sawyer.

And just what are those technological tricks? According to Sawyer, the whole industry is moving toward the concept of close-coupling, which means putting the cooling units as close as possible to the source of heat. Instead of arranging a data center with rows of racks in the middle and cooling units around the edge of the room, the cooling units are being moved into close proximity to the IT equipment.

Another strategy is to migrate from air cooling to liquid cooling. Liquid is a much better cooling medium than air; if you have any doubts, compare the effect of sticking your hand in the refrigerator with that of plunging it into cold water. As heat densities increase in the data center, the ability of air cooling to keep temperatures in the optimal range (68°F to 77°F) becomes problematic.
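A back-of-the-envelope calculation using the standard heat-transport relation, Q = rho x flow x c_p x delta-T, shows just how lopsided the comparison is. The sketch below is a rough illustration, assuming a 24 kW rack and a 10-kelvin coolant temperature rise; the fluid properties are textbook values for room-temperature air and water.

    # Rough comparison of air vs. water as a heat-removal medium.
    # Q = rho * flow * c_p * delta_T  =>  flow = Q / (rho * c_p * delta_T)
    # The 24 kW load and 10 K temperature rise are illustrative assumptions.
    RACK_LOAD_W = 24_000   # heat to remove (a 24 kW blade rack), in watts
    DELTA_T_K = 10.0       # coolant temperature rise across the rack

    def required_flow_m3s(load_w, rho, c_p, delta_t):
        """Volumetric flow (m^3/s) needed to carry away load_w watts."""
        return load_w / (rho * c_p * delta_t)

    air = required_flow_m3s(RACK_LOAD_W, rho=1.2, c_p=1005.0, delta_t=DELTA_T_K)
    water = required_flow_m3s(RACK_LOAD_W, rho=998.0, c_p=4186.0, delta_t=DELTA_T_K)

    print(f"air:   {air:.3f} m^3/s (about {air * 2119:,.0f} CFM)")
    print(f"water: {water * 1000:.2f} L/s (about {water * 15850:,.1f} gal/min)")
    print(f"ratio: water needs ~{air / water:,.0f}x less volume flow")

Under these assumptions, air needs roughly 2 cubic meters per second (over 4,000 CFM) to do what about half a liter per second of water can, which is the physics behind close-coupled liquid cooling.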

Sawyer says that when you go over about 140 to 150 watts per square foot, which equates to about 3 to 4 kW per rack, you start to get into trouble. Beyond this power density, you have more cooling equipment than IT equipment in the data center. So the question becomes how best to cool the equipment while preserving the use of that space. To do this, you have to go to some type of high-density cooling solution.
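As a sanity check on that rule of thumb, the short sketch below converts watts per square foot into a per-rack figure. The assumption of roughly 25 square feet of floor attributed to each rack (footprint plus aisle share) is mine for illustration, not Sawyer's.

    # Converting the floor-loading rule of thumb into per-rack terms.
    # The ~25 sq ft of floor per rack (footprint plus aisle share) is an
    # illustrative assumption.
    FLOOR_PER_RACK_SQFT = 25.0

    for watts_per_sqft in (140, 150):
        rack_kw = watts_per_sqft * FLOOR_PER_RACK_SQFT / 1000.0
        print(f"{watts_per_sqft} W/sq ft  ->  about {rack_kw:.1f} kW per rack")
    # 140-150 W/sq ft works out to roughly 3.5-3.8 kW per rack,
    # consistent with the 3 to 4 kW range quoted above.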

“There are two things that the users — the IT side of the house — have to concede,” said Sawyer. “One is that they are going to have cooling units very close to the racks — in fact, probably in the same row as the racks. The second thing is that there’s going to be some kind of fluid cooling involved — water, glycol or a waterless liquid refrigerant.”

According to Sawyer, that’s not as bad as it sounds, because data centers were originally designed around mainframes, which typically were water-cooled. In fact, raised floors were invented to accommodate the water pipes for mainframe cooling when data centers were first built. Those raised floors will be needed again to deliver liquid to cooling units intermixed with the racks.

But there is resistance to liquid-cooled units among the IT folks. The mantra Sawyer often hears is: “We don’t want water in our data center.” But, according to Sawyer, they already have to deal with water; the standard air-conditioning units have humidifiers to compensate for the dehumidification that takes place during cooling. And most of the older IT folks are already comfortable with the idea of liquids, since they grew up with water-cooled mainframes.

“It’s a bit of a marketing problem, not just for us, but also for our competitors, to [suggest fluid cooling] in a data center, especially after all these years where we’ve had air cooling,” explained Sawyer. “It’s a little bit of a re-education. So my basic line is: if you’ve got hydrophobia, get over it.”

While power and cooling infrastructure isn’t the most prominent technology in high-tech facilities, typically representing only 10 to 20 percent of the IT hardware investment, it is among the most critical. As Richard Sawyer likes to remind people: “When your server fails, you lose the application; if we fail, you lose all your applications.”
