Rackable Eases Power Struggle in the Data Center

By Michael Feldman

September 8, 2006

Founded in 1999, Rackable Systems has been one of the fastest-growing x86 server makers over the last four years. It now stands as the fourth-largest x86 server vendor in the U.S. (ahead of Sun Microsystems) and eighth globally. With just over $20 million in revenue in 2002, this year Rackable expects to reach over $300 million. Its customers, including Yahoo, Amazon and Microsoft, represent some of the largest scale-out deployments of capacity cluster infrastructure in the industry.

The secret to its success? Rackable does some of the same things that a lot of other tier two x86 server vendors do. It offers industry-standard hardware from multiple vendors at competitive prices, allows for lots of customization, and is willing to go after both large and small accounts.

But Rackable provides a couple of features that differentiate its offerings from run-of-the-mill server vendors. The company has designed a half-depth form factor arranged in a “back-to-back” rack-mounted configuration, which results in a much denser footprint than a standard server rack. The company also offers DC power options that it claims can provide energy savings of 10 to 30 percent. Together, these features enable Rackable servers to inhabit some challenging data center environments.

The half-depth back-to-back rack mounting, besides creating a smaller footprint, produces a couple of other advantages. One is that all the I/O and network cabling ends up in the front of the cabinet, where it's easier to access and service. No more scrambling to the back of the cabinet to figure out which cables are connected to which servers. The front-side cabling also leaves space for an air plenum in the middle of the cabinet (at the back of each half-depth unit), which provides for efficient ventilation. Rackable had the foresight to patent the back-to-back rack design and, according to the company, has already invoked its protection against at least one would-be imitator.

The inconvenient side of compute density is the increased need for power and cooling. But Rackable offers a solution for that too. Instead of relying on individual power supplies in the servers to convert the AC power to DC power, the company claims it makes more sense to do the conversion outside of the machines and feed them directly with DC. Rackable's most popular way of doing this is by using an AC-to-DC rectifier for each cabinet. The rectifier sits on top of the rack and distributes DC power to all the servers beneath it. Each server contains a DC card instead of a whole power supply, removing a major source of heat from the machine.

Energy savings can add up quickly. For a cabinet-level AC-to-DC rectifier solution, the company claims that a 10 percent reduction in energy requirements is fairly conservative. If your data center houses a large server farm, cost savings could reach hundreds of thousands of dollars per year.
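To see how a 10 percent reduction can reach "hundreds of thousands of dollars," consider a back-of-the-envelope sketch. The server count, average wattage and electricity rate below are assumptions for illustration, not figures from Rackable or the article:

```python
# Hypothetical illustration of how a 10 percent energy reduction scales.
# All inputs (server count, wattage, electricity rate) are assumed values.

SERVERS = 5_000          # servers in the farm (assumed)
WATTS_PER_SERVER = 400   # average draw per server, in watts (assumed)
RATE_PER_KWH = 0.10      # electricity cost in dollars per kWh (assumed)
HOURS_PER_YEAR = 8_760
SAVINGS_FRACTION = 0.10  # the "fairly conservative" 10 percent figure

annual_kwh = SERVERS * WATTS_PER_SERVER / 1_000 * HOURS_PER_YEAR
annual_cost = annual_kwh * RATE_PER_KWH
annual_savings = annual_cost * SAVINGS_FRACTION

print(f"Annual energy bill: ${annual_cost:,.0f}")      # $1,752,000
print(f"Annual savings at 10%: ${annual_savings:,.0f}")  # $175,200
```

Even at these modest assumptions, a 10 percent cut is worth roughly $175 thousand a year, and the number scales linearly with farm size and utility rates.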

Also, by replacing all the power supplies with DC cards, reliability improves substantially. AC power supplies are notoriously unreliable — thus the presence of redundant power supplies for mission-critical systems. The DC cards themselves have much higher MTBF ratings, while redundancy at the rectifier level can be used to cope with an AC power failure in the facility. And by removing the heat load of the AC power supply from the server box, the longevity of the other system components can be extended.

Rackable offers vanilla AC-powered servers as well, but interest in their DC solution has been growing. In the second quarter of 2006, the company reported that about half of all units sold used the DC-powered solution. And it's not just the large deployments; smaller installations like the University of Florida's High Performance Computing Center have selected DC-based Rackable systems for their cluster computing needs.

Cool Cluster for Florida

The HPC Initiative at the University of Florida is on an aggressive schedule to expand its computing resources every 12 to 18 months. In 2005 they were looking to double or triple the performance of their legacy Xeon cluster, but realized their cramped machine room was going to be a problem.

“The existing cluster occupied about nine racks in the machine room,” said Charles Taylor, senior HPC systems engineer at the University of Florida. “The size of the new cluster that we were looking at would have been about 18 to 22 racks. And as we looked at this, we realized that we didn't have the room and the capacity in our machine room to do this.”

An engineering estimate of about $350 thousand to renovate the machine room was just the beginning. The university's physical plant would also charge a one-time impact fee of $2,375 per ton of additional chilled-water cooling. Since they were looking at around 40 tons of additional cooling, this worked out to about $100 thousand. So the HPC group was looking at close to half a million dollars just to get the facility upgraded.
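The arithmetic behind those figures is straightforward and can be checked directly from the numbers quoted above:

```python
# Back-of-the-envelope check of the facility-upgrade figures quoted above.
FEE_PER_TON = 2_375        # one-time impact fee, dollars per ton of cooling
ADDITIONAL_TONS = 40       # extra chilled-water capacity needed
RENOVATION_ESTIMATE = 350_000  # machine room renovation, dollars

impact_fee = FEE_PER_TON * ADDITIONAL_TONS   # 95,000 -> "about $100 thousand"
total = RENOVATION_ESTIMATE + impact_fee     # 445,000 -> "close to half a million"

print(f"Impact fee: ${impact_fee:,}")
print(f"Facility upgrade total: ${total:,}")
```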

The search was on to find a better solution. Almost immediately they realized that if they switched to dual-core Opterons, they would be able to reduce their power requirements by half. For three extra watts per processor, they could get a second core — essentially free. So they started looking at the vendors offering Opteron-based servers.

Rackable Systems quickly rose to the top of the list. Its emphasis on low-power systems with small footprints seemed like a perfect fit for the university's needs. Taylor said no one could match Rackable for a standard rack configuration. They investigated blade servers from a couple of tier one vendors, but these were priced at a premium. And even the blade systems they were looking at couldn't match Rackable's server density.

“Their half depth servers and their racks, which are front and back loaded, allowed us to put twice as many nodes in a rack as HP, IBM or Sun,” said Taylor. “And when you include the fact that we were going to two cores per processor, we just cut our space requirement by a factor of four. So we realized that we could probably fit our new cluster into our existing space — which was really remarkable to us.”

Taylor said that by avoiding the renovation of the machine room, they probably saved nine or ten months — not to mention the hundreds of thousands of dollars they would have needed to upgrade the facility. Rackable swapped out the university's original cluster, giving them a pretty good deal in the process. The new 200-node (4-way dual processor, dual-core) cluster fit in six racks, using eighteen tons of cooling, including storage. This represented only three tons more cooling than the original Xeon cluster. And they achieved their goal of roughly tripling performance.
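Taylor's "factor of four" works out cleanly from the figures in the article: back-to-back mounting doubles the nodes per rack, and dual-core Opterons double the cores per processor. A short sketch, using only numbers quoted above:

```python
# Rough arithmetic behind the "factor of four" density claim,
# using figures quoted in the article.
est_racks_standard = (18, 22)  # initial estimate with standard servers
half_depth_factor = 2          # twice the nodes per rack (back-to-back mounting)
dual_core_factor = 2           # twice the cores per processor (dual-core Opterons)

density_gain = half_depth_factor * dual_core_factor  # 4x
racks_needed = tuple(r / density_gain for r in est_racks_standard)
print(f"Density gain: {density_gain}x")
print(f"Estimated racks needed: {racks_needed[0]:.1f} to {racks_needed[1]:.1f}")

# The deployed system: 200 nodes, each with two dual-core processors.
nodes, sockets, cores_per_socket = 200, 2, 2
print(f"Total cores: {nodes * sockets * cores_per_socket}")
```

The estimate of 4.5 to 5.5 racks lines up with the six racks (including storage) the 800-core cluster actually occupied.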

No AC Power, No Problem

Data393, a company that provides colocation services and managed Web hosting, had a slightly different dilemma. It was trying to figure out how it could expand its server infrastructure as the company's managed hosting business grew. Complicating the situation was the fact that Data393 had inherited a DC-powered facility from a defunct telecommunications provider. While DC power is often used for networking infrastructure, in general it represents an unfriendly environment for most data center hardware.

Not so for Rackable. Besides being able to offer a cabinet-level DC power solution, the company can also deal with entire data centers powered with DC. In fact, Rackable is able to take advantage of a facility-wide DC power supply to an even greater degree than a normal AC powered data center since they can skip the power conversion step at each rack. In this type of set-up, Rackable claims users can achieve a 30 percent power savings.

Like the University of Florida, Data393 was looking to expand its server capacity within limited space and power constraints. But they also needed servers that could feed directly from DC.

“There were other providers that had DC-capable servers, but not necessarily with highly dense footprints,” said Steve Merkel, senior engineer at Data393. “Some of the blade environments did have DC options, but they were closed form factor solutions. We could find little bits and pieces of what we wanted, but to wrap everything into a single package, the only one we came across at the time was Rackable Systems.”

Data393 engineers were able to specify motherboards, hard drives, network adapters and RAID controllers while still getting the high-density footprint. They acquired four cabinets (about 400 servers) from Rackable. By going with a DC-powered solution, they were able to significantly reduce their cooling costs and increase reliability.

“Given that we rectify in a separate room, a large chunk of our heat load is generated outside of the data center,” said Merkel. “We have noticed a decrease in thermal output by those servers, so consequently we've reduced costs from a cooling standpoint so we can increase density within the same infrastructure.”

DC For the Masses?

So why doesn't everyone use DC power in the data center? For some of the same reasons it's not used in general power distribution — namely, it is not very practical to distribute direct current over long distances. Even at the scale of a data center, there are some significant barriers. Once you get past the additional cost of installing the DC power plant, deploying DC across a data center can be problematic. Direct current requires thick copper bus bars that must be built and maintained correctly for safe service. All this extra cost for the specialized infrastructure becomes a hindrance to widespread DC adoption.

At the level of the rack or cabinet, the objections to DC power are somewhat different. Many server makers have denigrated Rackable's solution as just a “gimmick.” They say the energy efficiency gains are an illusion; the conversion from AC to DC just gets moved outside the server. Rackable maintains its cabinet-level DC rectifier solution is significantly more efficient than even the best AC power supplies.

Some of the major server OEMs such as HP, IBM and Sun offer their own DC-capable systems, but they're mainly targeted at DC-powered facilities, where utility AC is unavailable. With the exception of Rackable, no server maker provides DC capability as a general-purpose solution. Why is that?

“First of all it's a very difficult technology to build,” said Colette LaForce, vice president of Marketing at Rackable Systems. “We launched it in 2003 but it certainly took a lot of engineering and ingenuity to get it to where it is. I think that for a lot of large x86 server manufacturers this would be like turning the giant ship in another direction. The advantage when you are a younger, more nimble organization is that you can do that. So I think one of the key barriers to entry is that it's just very difficult; this doesn't get solved overnight.”

The company has filed for patents around some of its DC technology. So if other OEMs decide to go this route, they're going to have to develop their own solutions. Until then, Rackable seems to have cornered the market for DC-friendly servers.
