Rackable Eases Power Struggle in the Data Center

By Michael Feldman

September 8, 2006

Founded in 1999, Rackable Systems has been one of the fastest-growing x86 server makers over the last four years. It now stands as the fourth-largest x86 server vendor in the U.S. (ahead of Sun Microsystems) and eighth globally. With just over $20 million in revenue in 2002, Rackable expects to top $300 million this year. Its customers, including Yahoo, Amazon and Microsoft, represent some of the largest scale-out deployments of capacity cluster infrastructure in the industry.

The secret to its success? Rackable does some of the same things that a lot of other tier two x86 server vendors do. It offers industry-standard hardware from multiple vendors at competitive prices, allows for lots of customization, and is willing to go after both large and small accounts.

But Rackable provides a couple of features that differentiate its offerings from run-of-the-mill server vendors. The company has designed a half-depth form factor arranged in a “back-to-back” rack-mounted configuration, which results in a much denser footprint than a standard server rack. The company also offers DC power options that it claims can provide an energy savings of 10 to 30 percent. Together, these features enable Rackable servers to inhabit some challenging data center environments.

The half-depth back-to-back rack mounting, besides creating a smaller footprint, produces a couple of other advantages. One is that all the I/O and network cabling ends up in the front of the cabinet, where it's easier to access and service. No more scrambling to the back of the cabinet to figure out which cables are connected to which servers. The front-side cabling also leaves space for an air plenum in the middle of the cabinet (at the back of each half-depth unit), which provides for efficient ventilation. Rackable had the foresight to patent the back-to-back rack design and, according to the company, has already invoked its protection against at least one would-be imitator.

The inconvenient side of compute density is the increased need for power and cooling. But Rackable offers a solution for that too. Instead of relying on individual power supplies in the servers to convert the AC power to DC power, the company claims it makes more sense to do the conversion outside of the machines and feed them directly with DC. Rackable's most popular way of doing this is with an AC-to-DC rectifier for each cabinet. The rectifier sits on top of the rack and distributes DC power to all the servers beneath it. Each server contains a DC card instead of a whole power supply, removing a major source of heat from the machine.

Energy savings can add up quickly. For a cabinet-level AC-to-DC rectifier solution, the company claims that a 10 percent reduction in energy requirements is fairly conservative. If your data center houses a large server farm, cost savings could reach hundreds of thousands of dollars per year.
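The article's savings claim can be sanity-checked with a back-of-envelope calculation. The fleet size, per-server power draw, and electricity rate below are illustrative assumptions, not figures from Rackable; only the 10 percent savings fraction comes from the article:

```python
# Back-of-envelope estimate of annual savings from a 10 percent
# reduction in data center energy use. All input figures except the
# savings fraction are assumptions for illustration.

SERVERS = 10_000            # a large server farm (assumed)
WATTS_PER_SERVER = 300      # rough draw for a 2006-era 1U server (assumed)
KWH_PRICE = 0.08            # USD per kWh, illustrative utility rate
HOURS_PER_YEAR = 24 * 365
SAVINGS_FRACTION = 0.10     # Rackable's "fairly conservative" figure

annual_kwh = SERVERS * WATTS_PER_SERVER / 1000 * HOURS_PER_YEAR
annual_cost = annual_kwh * KWH_PRICE
annual_savings = annual_cost * SAVINGS_FRACTION

print(f"Annual power bill: ${annual_cost:,.0f}")
print(f"Savings at 10%:    ${annual_savings:,.0f}")
```

Even with these modest assumptions the savings land in the low hundreds of thousands of dollars per year, consistent with the claim above.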

Also, by replacing all the power supplies with DC cards, reliability improves substantially. AC power supplies are notoriously unreliable — thus the presence of redundant power supplies for mission-critical systems. The DC cards themselves have much higher MTBF ratings, while redundancy at the rectifier level can be used to cope with an AC power failure in the facility. And by removing the heat load of the AC power supply from the server box, the longevity of the other system components can be extended.
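The redundancy argument can be sketched with a standard availability model. All MTBF and repair-time figures below are assumed for illustration, not vendor data; the point is only that an N+1 set of rectifier modules feeding a cabinet is far more available than a single in-box supply:

```python
# Availability sketch: one non-redundant supply vs. N+1 redundant
# rectifier modules (3 needed, 4 installed). Figures are illustrative.
from math import comb

def availability(mtbf_hours, mttr_hours):
    # Steady-state availability of a single unit.
    return mtbf_hours / (mtbf_hours + mttr_hours)

def k_of_n_availability(k, n, a):
    # Probability that at least k of n independent units are working,
    # via the binomial distribution.
    return sum(comb(n, i) * a**i * (1 - a)**(n - i)
               for i in range(k, n + 1))

a_supply = availability(100_000, 24)            # one supply, 24 h to repair
a_rectifiers = k_of_n_availability(3, 4, a_supply)  # N+1 rectifier set

print(f"Single supply availability: {a_supply:.6f}")
print(f"N+1 rectifier availability: {a_rectifiers:.9f}")
```

Even using the same per-unit MTBF for both cases, the redundant rectifier set wins by orders of magnitude in expected downtime; the higher MTBF of the DC cards themselves only widens the gap.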

Rackable offers vanilla AC-powered servers as well, but interest in its DC solution has been growing. In the second quarter of 2006, the company reported that about half of all units sold used the DC-powered solution. And it's not just the large deployments; smaller installations like the University of Florida's High Performance Computing Center have selected DC-based Rackable systems for their cluster computing needs.

Cool Cluster for Florida

The HPC Initiative at the University of Florida is on an aggressive schedule to expand its computing resources every 12 to 18 months. In 2005 they were looking to double or triple the performance of their legacy Xeon cluster, but realized their cramped machine room was going to be a problem.

“The existing cluster occupied about nine racks in the machine room,” said Charles Taylor, senior HPC systems engineer at the University of Florida. “The size of the new cluster that we were looking at would have been about 18 to 22 racks. And as we looked at this, we realized that we didn't have the room and the capacity in our machine room to do this.”

An engineering estimate of about $350,000 to renovate the machine room was just the beginning. The university's physical plant would also charge a one-time impact fee of $2,375 per ton of additional chilled-water cooling. Since they were looking at around 40 tons of additional cooling, that worked out to about $100,000. So the HPC group was looking at close to half a million dollars just to get the facility upgraded.
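The facility-upgrade figures above can be worked out directly. The renovation estimate, per-ton impact fee, and tonnage are from the article; the rest is arithmetic:

```python
# Facility-upgrade cost estimate for the University of Florida machine
# room, using the figures cited in the article.
renovation = 350_000           # engineering estimate for the renovation
impact_fee_per_ton = 2_375     # one-time fee per ton of chilled water
extra_cooling_tons = 40        # additional cooling required

impact_fee = impact_fee_per_ton * extra_cooling_tons
total = renovation + impact_fee

print(f"Impact fee:             ${impact_fee:,}")
print(f"Total facility upgrade: ${total:,}")
```

The impact fee comes to $95,000 (the article's "about $100 thousand"), for a total of $445,000, which is indeed close to half a million dollars.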

The search was on to find a better solution. Almost immediately they realized that if they switched to dual-core Opterons, they would be able to reduce their power requirements by half. For three extra watts per processor, they could get a second core — essentially free. So they started looking at the vendors offering Opteron-based servers.

Rackable Systems quickly rose to the top of the list. Its emphasis on low-power systems with small footprints seemed like a perfect fit for the university's needs. Taylor said no one could match Rackable for a standard rack configuration. They investigated blade servers from a couple of tier one vendors, but these were priced at a premium. And even the blade systems they were looking at couldn't match Rackable's server density.

“Their half depth servers and their racks, which are front and back loaded, allowed us to put twice as many nodes in a rack than HP, IBM or Sun,” said Taylor. “And when you include the fact that we were going to two cores per processor, we just cut our space requirement by a factor of four. So we realized that we could probably fit our new cluster into our existing space — which was really remarkable to us.”

Taylor said that by avoiding the renovation of the machine room, they probably saved nine or ten months — not to mention the hundreds of thousands of dollars they would have needed to upgrade the facility. Rackable swapped out the university's original cluster, giving them a pretty good deal in the process. The new 200-node (4-way: dual-processor, dual-core) cluster fit in six racks, using eighteen tons of cooling, including storage. This represented only three tons more cooling than the original Xeon cluster. And they achieved their goal of approximately a 300 percent performance increase.
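Taylor's space and cooling arithmetic can be made explicit. All figures below come from the article; only the breakdown into two factor-of-two gains restates his own reasoning:

```python
# The factor-of-four space reduction Taylor describes, plus the
# cooling delta, using the figures cited in the article.
rack_density_gain = 2   # half-depth, back-to-back racks: 2x nodes per rack
core_density_gain = 2   # single-core Xeon -> dual-core Opteron: 2x cores
space_reduction = rack_density_gain * core_density_gain

new_cooling_tons = 18       # the new six-rack cluster, including storage
extra_cooling_tons = 3      # "only three tons more" than the old cluster
old_cooling_tons = new_cooling_tons - extra_cooling_tons

print(f"Space reduction factor: {space_reduction}x")
print(f"Original cluster cooling: {old_cooling_tons} tons")
```

So the original nine-rack Xeon cluster drew about 15 tons of cooling, and the fourfold density gain is what let the new cluster fit in the existing room.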

No AC Power, No Problem

Data393, a company that provides colocation services and managed Web hosting, had a slightly different dilemma. It was trying to figure out how it could expand its server infrastructure as the company's managed hosting business grew. Complicating the situation was the fact that Data393 had inherited a DC-powered facility from a defunct telecommunications provider. While DC power is often used for networking infrastructure, in general it represents an unfriendly environment for most data center hardware.

Not so for Rackable. Besides being able to offer a cabinet-level DC power solution, the company can also deal with entire data centers powered with DC. In fact, Rackable is able to take advantage of a facility-wide DC power supply to an even greater degree than a normal AC powered data center since they can skip the power conversion step at each rack. In this type of set-up, Rackable claims users can achieve a 30 percent power savings.

Like the University of Florida, Data393 was looking to expand its server capacity within limited space and power constraints. But they also needed servers that could feed directly from DC.

“There were other providers that had DC-capable servers, but not necessarily with highly dense footprints,” said Steve Merkel, senior engineer at Data393. “Some of the blade environments did have DC options, but they were closed form factor solutions. We could find little bits and pieces of what we wanted, but to wrap everything into a single package, the only one we came across at the time was Rackable Systems.”

Data393 engineers were able to specify motherboards, hard drives, network adapters and RAID controllers while still getting the high-density footprint. They acquired four cabinets (about 400 servers) from Rackable. By going with a DC-powered solution, they significantly reduced their cooling costs and increased reliability.

“Given that we rectify in a separate room, a large chunk of our heat load is generated outside of the data center,” said Merkel. “We have noticed a decrease in thermal output by those servers, so consequently we've reduced costs from a cooling standpoint so we can increase density within the same infrastructure.”

DC For the Masses?

So why doesn't everyone use DC power in the data center? For some of the same reasons it's not used in general power distribution — namely, it is not very practical to distribute direct current over long distances. Even at the scale of a data center, there are some significant barriers. Once you get past the additional cost of installing the DC power plant, deploying DC across a data center can be problematic. Direct current requires thick copper bus bars that must be built and maintained correctly for safe service. All this extra cost for specialized infrastructure becomes a hindrance to widespread DC adoption.

At the level of the rack or cabinet, the objections to DC power are somewhat different. Many server makers have denigrated Rackable's solution as just a “gimmick.” They say the energy efficiency gains are an illusion; the conversion from AC to DC just gets moved outside the server. Rackable maintains its cabinet-level DC rectifier solution is significantly more efficient than even the best AC power supplies.

Some of the major server OEMs such as HP, IBM and Sun offer their own DC-capable systems, but they're mainly targeted at DC-powered facilities, where AC power is unavailable. With the exception of Rackable, no server maker provides DC capability as a general-purpose solution. Why is that?

“First of all it's a very difficult technology to build,” said Colette LaForce, vice president of Marketing at Rackable Systems. “We launched it in 2003 but it certainly took a lot of engineering and ingenuity to get it to where it is. I think that for a lot of large x86 server manufacturers this would be like turning the giant ship in another direction. The advantage when you are a younger, more nimble organization is that you can do that. So I think one of the key barriers to entry is that it's just very difficult; this doesn't get solved overnight.”

The company has filed for patents around some of its DC technology. So if other OEMs decide to go this route, they're going to have to develop their own solutions. Until then, Rackable seems to have cornered the market for DC-friendly servers.
