Founded in 1999, Rackable Systems has been one of the fastest growing x86 server makers over the last four years. It now stands as the fourth largest x86 server vendor in the U.S. (ahead of Sun Microsystems) and eighth globally. Revenue has grown from just over $20 million in 2002 to an expected $300 million-plus this year. Its customers, including Yahoo, Amazon and Microsoft, represent some of the largest scale-out deployments of capacity cluster infrastructure in the industry.
The secret to its success? Rackable does some of the same things that a lot of other tier two x86 server vendors do. It offers industry-standard hardware from multiple vendors at competitive prices, allows for lots of customization, and is willing to go after both large and small accounts.
But Rackable provides a couple of features that differentiate its offerings from run-of-the-mill server vendors. The company has designed a half-depth form factor arranged in a “back-to-back” rack-mounted configuration, which results in a much denser footprint than a standard server rack. The company also offers DC power options that it claims can deliver energy savings of 10 to 30 percent. Together, these features enable Rackable servers to fit into some challenging data center environments.
The half-depth back-to-back rack mounting, besides creating a smaller footprint, produces a couple of other advantages. One is that all the I/O and network cabling ends up in the front of the cabinet, where it's easier to access and service. No more scrambling to the back of the cabinet to figure out which cables are connected to which servers. The front-side cabling also leaves space for an air plenum in the middle of the cabinet (at the back of each half-depth unit), which provides for efficient ventilation. Rackable had the foresight to patent the back-to-back rack design and, according to the company, has already invoked its protection against at least one would-be imitator.
The inconvenient side of compute density is the increased need for power and cooling. But Rackable offers a solution for that too. Instead of relying on individual power supplies in the servers to convert the AC power to DC power, the company claims it makes more sense to do the conversion outside of the machines and feed them directly with DC. Rackable's most popular way of doing this is by using an AC-to-DC rectifier for each cabinet. The rectifier sits on top of the rack and distributes DC power to all the servers beneath it. Each server contains a DC card instead of a whole power supply, removing a major source of heat from the machine.
Energy savings can add up quickly. For a cabinet-level AC-to-DC rectifier solution, the company claims that a 10 percent reduction in energy requirements is fairly conservative. If your data center houses a large server farm, cost savings could reach hundreds of thousands of dollars per year.
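To put that in perspective, here is a rough back-of-the-envelope calculation in Python. All of the inputs (farm size, per-server draw, electricity price) are assumptions chosen for illustration rather than figures supplied by Rackable.

    # Illustrative estimate of annual savings from a 10 percent cut in power draw.
    # Every input below is an assumption, not a figure from Rackable.
    servers = 10_000              # a large server farm
    watts_per_server = 300        # average draw per server, including cooling overhead
    price_per_kwh = 0.08          # dollars per kilowatt-hour
    hours_per_year = 24 * 365

    baseline_kwh = servers * watts_per_server / 1000 * hours_per_year
    annual_cost = baseline_kwh * price_per_kwh
    savings_at_10_percent = 0.10 * annual_cost

    print(f"Baseline energy cost:  ${annual_cost:,.0f} per year")
    print(f"Savings at 10 percent: ${savings_at_10_percent:,.0f} per year")

With these assumptions the farm spends roughly $2.1 million a year on power, so even the conservative 10 percent figure is worth about $210,000 annually.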
Replacing all the power supplies with DC cards also improves reliability substantially. AC power supplies are notoriously unreliable, which is why mission-critical systems typically carry redundant supplies. The DC cards themselves have much higher MTBF ratings, while redundancy at the rectifier level can be used to cope with an AC power failure in the facility. And by removing the heat load of the AC power supply from the server box, the longevity of the other system components can be extended.
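The reliability argument can be made concrete with a similar sketch. The MTBF figures below are illustrative assumptions only; actual ratings vary widely by component and vendor.

    # Expected power-related component failures per year across a farm,
    # comparing per-server AC supplies with simple DC input cards.
    # MTBF values are assumed for illustration.
    servers = 1_000
    hours_per_year = 24 * 365

    ac_supply_mtbf = 100_000      # hours, typical server AC power supply (assumed)
    dc_card_mtbf = 1_000_000      # hours, passive DC input card (assumed)

    ac_failures_per_year = servers * hours_per_year / ac_supply_mtbf   # ~88
    dc_failures_per_year = servers * hours_per_year / dc_card_mtbf     # ~9

    print(f"Expected AC supply failures: {ac_failures_per_year:.0f} per year")
    print(f"Expected DC card failures:   {dc_failures_per_year:.0f} per year")

Under these assumptions a thousand-server farm would see an order of magnitude fewer power-related failures, with the remaining single point of failure covered by redundancy at the rectifier.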
Rackable offers vanilla AC-powered servers as well, but interest in its DC solution has been growing. In the second quarter of 2006, the company reported that about half of all units sold used the DC-powered solution. And it's not just the large deployments; smaller installations like the University of Florida's High Performance Computing Center have selected DC-based Rackable systems for their cluster computing needs.
Cool Cluster for Florida
The HPC Initiative at the University of Florida is on an aggressive schedule to expand its computing resources every 12 to 18 months. In 2005 they were looking to double or triple the performance of their legacy Xeon cluster, but realized their cramped machine room was going to be a problem.
“The existing cluster occupied about nine racks in the machine room,” said Charles Taylor, senior HPC systems engineer at the University of Florida. “The size of the new cluster that we were looking at would have been about 18 to 22 racks. And as we looked at this, we realized that we didn't have the room and the capacity in our machine room to do this.”
An engineering estimate of about $350 thousand to renovate the machine room was just the beginning. The university's physical plant would also charge a one-time impact fee of $2,375 per ton of cooling to deliver additional chilled water. Since the group was looking at around 40 tons of additional cooling, that worked out to about $100 thousand. In all, the HPC group was looking at close to half a million dollars just to get the facility upgraded.
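The arithmetic is easy to check; a quick sketch of the figures cited above:

    # Facility-upgrade costs cited by the University of Florida HPC group.
    renovation_estimate = 350_000          # engineering estimate for the machine room
    impact_fee_per_ton = 2_375             # one-time chilled-water impact fee
    additional_cooling_tons = 40

    impact_fee = impact_fee_per_ton * additional_cooling_tons   # $95,000, roughly $100K
    total_facility_cost = renovation_estimate + impact_fee      # about $445,000

    print(f"Impact fee:          ${impact_fee:,}")
    print(f"Total facility cost: ${total_facility_cost:,}")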
The search was on to find a better solution. Almost immediately they realized that if they switched to dual-core Opterons, they would be able to reduce their power requirements by half. For three extra watts per processor, they could get a second core — essentially free. So they started looking at the vendors offering Opteron-based servers.
Rackable Systems quickly rose to the top of the list. Its emphasis on low-power systems with small footprints seemed like a perfect fit for the university's needs. Taylor said no one could match Rackable for a standard rack configuration. They investigated blade servers from a couple of tier one vendors, but these were priced at a premium. And even the blade systems they were looking at couldn't match Rackable's server density.
“Their half depth servers and their racks, which are front and back loaded, allowed us to put twice as many nodes in a rack than HP, IBM or Sun,” said Taylor. “And when you include the fact that we were going to two cores per processor, we just cut our space requirement by a factor of four. So we realized that we could probably fit our new cluster into our existing space — which was really remarkable to us.”
Taylor said that by avoiding the renovation of the machine room, they probably saved nine or ten months, not to mention the hundreds of thousands of dollars they would have needed to upgrade the facility. Rackable swapped out the university's original cluster, giving them a pretty good deal in the process. The new 200-node cluster (4-way nodes, each with two dual-core processors) fit in six racks and used eighteen tons of cooling, including storage. That represented only three tons more cooling than the original Xeon cluster. And they achieved their goal of approximately a 300 percent performance increase.
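Taylor's factor-of-four space reduction follows directly from the two changes. A small sketch, using assumed baseline numbers (the rack and node counts below are illustrative, not the university's exact figures):

    # How half-depth racks plus dual-core processors cut space by roughly 4x.
    # Baseline numbers are assumed for illustration.
    conventional_racks = 20               # midpoint of the 18-to-22 rack estimate
    nodes_per_conventional_rack = 40      # assumed full-depth 1U nodes

    nodes_needed = conventional_racks * nodes_per_conventional_rack

    nodes_per_rackable_rack = 2 * nodes_per_conventional_rack   # half-depth, back-to-back
    nodes_needed_dual_core = nodes_needed // 2                  # two cores per processor

    racks_needed = nodes_needed_dual_core / nodes_per_rackable_rack
    print(f"Racks needed: {racks_needed:.0f}")   # about a quarter of the original estimate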
No AC Power, No Problem
Data393, a company that provides colocation services and managed Web hosting, had a slightly different dilemma. It was trying to figure out how to expand its server infrastructure as its managed hosting business grew. Complicating the situation was the fact that Data393 had inherited a DC-powered facility from a defunct telecommunications provider. While DC power is often used for networking infrastructure, it generally represents an unfriendly environment for most data center hardware.
Not so for Rackable. Besides offering a cabinet-level DC power solution, the company can also deal with entire data centers powered by DC. In fact, Rackable can take advantage of a facility-wide DC supply to an even greater degree than it can a normal AC-powered data center, since the power conversion step at each rack can be skipped entirely. In this type of setup, Rackable claims users can achieve a 30 percent power savings.
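The difference between the cabinet-level and facility-wide numbers comes down to where the AC-to-DC conversion happens and how efficient that stage is. A simple way to see it, with conversion efficiencies that are assumptions for the sake of the arithmetic rather than measured values from Rackable or Data393:

    # Wall-power draw for a fixed server load under three power schemes.
    # Efficiency figures are illustrative assumptions only.
    def wall_power(load_watts, efficiencies):
        power = load_watts
        for eff in efficiencies:
            power /= eff       # each conversion stage adds its own loss
        return power

    load = 250   # watts of useful load per server (assumed)

    per_server_ac = wall_power(load, [0.70])   # AC supply inside every server
    cabinet_dc    = wall_power(load, [0.80])   # shared rectifier on top of the rack
    facility_dc   = wall_power(load, [0.92])   # single plant-level rectification

    for name, watts in [("Per-server AC supply", per_server_ac),
                        ("Cabinet-level rectifier", cabinet_dc),
                        ("Facility-wide DC", facility_dc)]:
        print(f"{name:25s} {watts:6.1f} W at the wall")

With these assumed efficiencies the cabinet-level rectifier draws about 12 percent less power than per-server AC supplies, and the facility-wide scheme about 24 percent less, which is at least in the neighborhood of the 10 and 30 percent figures Rackable quotes.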
Like the University of Florida, Data393 was looking to expand its server capacity within limited space and power constraints. But they also needed servers that could feed directly from DC.
“There were other providers that had DC-capable servers, but not necessarily with highly dense footprints,” said Steve Merkel, senior engineer at Data393. “Some of the blade environments did have DC options, but they were closed form factor solutions. We could find little bits and pieces of what we wanted, but to wrap everything into a single package, the only one we came across at the time was Rackable Systems.”
Data393 engineers were able to specify motherboards, hard drives, network adapters and RAID controllers, while still getting the high-density footprint. They acquired four cabinets (about 400 servers) from Rackable. By going with a DC-powered solution, they were able to significantly reduce their cooling costs and increase reliability.
“Given that we rectify in a separate room, a large chunk of our heat load is generated outside of the data center,” said Merkel. “We have noticed a decrease in thermal output by those servers, so consequently we've reduced costs from a cooling standpoint so we can increase density within the same infrastructure.”
DC For the Masses?
So why doesn't everyone use DC power in the data center? For some of the same reasons it's not used in general power distribution: it is not very practical to distribute direct current over long distances. Even at the scale of a data center, there are significant barriers. Beyond the cost of installing the DC power plant itself, deploying DC across a data center can be problematic. Direct current requires thick copper bus bars that must be built and maintained correctly for safe service. The extra cost of this specialized infrastructure remains a hindrance to widespread DC adoption.
At the level of the rack or cabinet, the objections to DC power are somewhat different. Many server makers have denigrated Rackable's solution as just a “gimmick.” They say the energy efficiency gains are an illusion; the conversion from AC to DC just gets moved outside the server. Rackable maintains its cabinet-level DC rectifier solution is significantly more efficient than even the best AC power supplies.
Some of the major server OEMs such as HP, IBM and Sun offer their own DC-capable systems, but they're mainly targeted at DC-powered facilities, where AC power is simply unavailable. With the exception of Rackable, no server maker provides DC capability as a general-purpose solution. Why is that?
“First of all it's a very difficult technology to build,” said Colette LaForce, vice president of Marketing at Rackable Systems. “We launched it in 2003 but it certainly took a lot of engineering and ingenuity to get it to where it is. I think that for a lot of large x86 server manufacturers this would be like turning the giant ship in another direction. The advantage when you are a younger, more nimble organization is that you can do that. So I think one of the key barriers to entry is that it's just very difficult; this doesn't get solved overnight.”
The company has filed for patents around some of its DC technology. So if other OEMs decide to go this route, they're going to have to develop their own solutions. Until then, Rackable seems to have cornered the market for DC-friendly servers.