Rackable Eases Power Struggle in the Data Center

By Michael Feldman

September 8, 2006

Founded in 1999, Rackable Systems has been one of the fastest-growing x86 server makers of the last four years. It now stands as the fourth-largest x86 server vendor in the U.S. (ahead of Sun Microsystems) and eighth globally. With just over $20 million in revenue in 2002, Rackable expects to exceed $300 million this year. Its customers, including Yahoo, Amazon and Microsoft, represent some of the largest scale-out deployments of capacity cluster infrastructure in the industry.

The secret to its success? Rackable does some of the same things that a lot of other tier two x86 server vendors do. It offers industry-standard hardware from multiple vendors at competitive prices, allows for lots of customization, and is willing to go after both large and small accounts.

But Rackable provides a couple of features that differentiate its offerings from run-of-the-mill server vendors. The company has designed a half-depth form factor arranged in a “back-to-back” rack-mounted configuration, which results in a much denser footprint than a standard server rack. The company also offers DC power options that it claims can provide energy savings of 10 to 30 percent. Together, these features enable Rackable servers to inhabit some challenging data center environments.

The half-depth back-to-back rack mounting, besides creating a smaller footprint, produces a couple of other advantages. One is that all the I/O and network cabling ends up in the front of the cabinet, where it's easier to access and service. No more scrambling to the back of the cabinet to figure out which cables are connected to which servers. The front-side cabling also leaves space for an air plenum in the middle of the cabinet (at the back of each half-depth unit), which provides for efficient ventilation. Rackable had the foresight to patent the back-to-back rack design and, according to the company, has already invoked its protection against at least one would-be imitator.

The inconvenient side of compute density is the increased need for power and cooling. But Rackable offers a solution for that too. Instead of relying on individual power supplies in the servers to convert AC power to DC, the company argues it makes more sense to do the conversion outside the machines and feed them DC directly. Rackable's most popular way of doing this is with an AC-to-DC rectifier for each cabinet. The rectifier sits on top of the rack and distributes DC power to all the servers beneath it. Each server contains a DC card instead of a whole power supply, removing a major source of heat from the machine.

Energy savings can add up quickly. For a cabinet-level AC-to-DC rectifier solution, the company claims that a 10 percent reduction in energy requirements is fairly conservative. If your data center houses a large server farm, cost savings could reach hundreds of thousands of dollars per year.
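
As a rough back-of-envelope check on that claim, consider a sketch like the one below. The server count, per-server draw and electricity rate are illustrative assumptions, not Rackable's figures.

```python
# Back-of-envelope savings from a 10 percent cut in energy requirements.
# All inputs here are illustrative assumptions for a large server farm.

SERVERS = 10_000           # assumed number of servers
WATTS_PER_SERVER = 300     # assumed average draw, including cooling overhead
PRICE_PER_KWH = 0.08       # assumed electricity price in dollars
HOURS_PER_YEAR = 24 * 365

annual_kwh = SERVERS * WATTS_PER_SERVER / 1000 * HOURS_PER_YEAR
annual_cost = annual_kwh * PRICE_PER_KWH
savings = 0.10 * annual_cost   # the 10 percent figure Rackable calls conservative

print(f"Annual power bill: ${annual_cost:,.0f}")   # about $2.1 million
print(f"Savings at 10 percent: ${savings:,.0f}")   # about $210,000
```

Under those assumptions the annual bill runs about $2.1 million, so a 10 percent reduction does indeed land in the hundreds of thousands of dollars.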

Replacing all the power supplies with DC cards also improves reliability substantially. AC power supplies are notoriously unreliable, which is why mission-critical systems carry redundant ones. The DC cards themselves have much higher MTBF ratings, while redundancy at the rectifier level can cope with an AC power failure in the facility. And removing the heat load of the AC power supply from the server box extends the longevity of the other system components.
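
The value of rectifier-level redundancy follows from a standard reliability calculation; the failure probabilities below are illustrative assumptions, not measured figures.

```python
# With a redundant pair of rectifiers, the rack loses power only if both
# units fail. Probabilities are illustrative assumptions.

p_fail = 0.02                 # assumed chance one rectifier fails in a given year
p_pair_fails = p_fail ** 2    # assumes the two units fail independently

print(f"Single rectifier: {p_fail:.1%} per year")
print(f"Redundant pair:   {p_pair_fails:.2%} per year")  # 0.04%, a 50x improvement
```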

Rackable offers vanilla AC-powered servers as well, but interest in its DC solution has been growing. In the second quarter of 2006, the company reported that about half of all units sold used the DC-powered design. And it's not just the large deployments; smaller installations like the University of Florida's High Performance Computing Center have selected DC-based Rackable systems for their cluster computing needs.

Cool Cluster for Florida

The HPC Initiative at the University of Florida is on an aggressive schedule to expand its computing resources every 12 to 18 months. In 2005 they were looking to double or triple the performance of their legacy Xeon cluster, but realized their cramped machine room was going to be a problem.

“The existing cluster occupied about nine racks in the machine room,” said Charles Taylor, senior HPC systems engineer at the University of Florida. “The size of the new cluster that we were looking at would have been about 18 to 22 racks. And as we looked at this, we realized that we didn't have the room and the capacity in our machine room to do this.”

An engineering estimate of about $350,000 to renovate the machine room was just the beginning. The university's physical plant would also charge a one-time impact fee of $2,375 per ton of additional chilled-water cooling capacity. Since the group was looking at around 40 tons of additional cooling, that worked out to about $95,000. All told, the HPC group faced close to half a million dollars just to get the facility upgraded.
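
The arithmetic is simple enough to check directly; all figures come from the estimates above.

```python
# University of Florida facility-upgrade estimate, using the quoted figures.
renovation = 350_000         # engineering estimate to renovate the machine room
fee_per_ton = 2_375          # one-time impact fee per ton of chilled-water cooling
extra_tons = 40              # approximate additional cooling required

impact_fee = fee_per_ton * extra_tons    # $95,000
total = renovation + impact_fee          # $445,000, close to half a million

print(f"Impact fee: ${impact_fee:,}")
print(f"Facility upgrade total: ${total:,}")
```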

The search was on to find a better solution. Almost immediately they realized that switching to dual-core Opterons would cut their power requirements roughly in half: for three extra watts per processor, they could get a second core essentially for free. So they started looking at vendors offering Opteron-based servers.
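
That trade-off is easy to quantify. In the sketch below, the baseline processor wattage is a hypothetical placeholder; only the three-watts-per-extra-core figure comes from the account above.

```python
# Why a second core for ~3 W nearly halves power for the same performance.
single_core_watts = 90                    # assumed draw of a single-core processor
dual_core_watts = single_core_watts + 3   # second core adds about three watts

# For the same total core count, dual-core needs half as many processors:
power_ratio = dual_core_watts / (2 * single_core_watts)
print(f"Power for equal performance: {power_ratio:.0%} of single-core")  # ~52%
```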

Rackable Systems quickly rose to the top of the list. Its emphasis on low-power systems with small footprints seemed like a perfect fit for the university's needs. Taylor said no one could match Rackable for a standard rack configuration. The group investigated blade servers from a couple of tier-one vendors, but those were priced at a premium. And even the blade systems they looked at couldn't match Rackable's server density.

“Their half depth servers and their racks, which are front and back loaded, allowed us to put twice as many nodes in a rack than HP, IBM or Sun,” said Taylor. “And when you include the fact that we were going to two cores per processor, we just cut our space requirement by a factor of four. So we realized that we could probably fit our new cluster into our existing space — which was really remarkable to us.”

Taylor said that by avoiding the renovation of the machine room, they probably saved nine or ten months, not to mention the hundreds of thousands of dollars they would have needed to upgrade the facility. Rackable swapped out the university's original cluster, giving them a pretty good deal in the process. The new 200-node cluster (dual-processor, dual-core, or four cores per node) fit in six racks and used eighteen tons of cooling, including storage. That is only three tons more cooling than the original Xeon cluster required. And they achieved their goal of an approximately 300 percent performance increase.
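
The density figures are consistent with the rack counts quoted earlier; the sketch below uses only numbers from the story.

```python
# Rack arithmetic from the quoted figures.
projected_racks_old_design = (18, 22)  # new cluster in the old Xeon form factor
density_gain = 2 * 2                   # 2x nodes per rack, 2x cores per processor

racks_needed = [r / density_gain for r in projected_racks_old_design]
print(racks_needed)   # 4.5 to 5.5 compute racks; six total including storage
```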

No AC Power, No Problem

Data393, a company that provides colocation services and managed Web hosting, had a slightly different dilemma. It was trying to figure out how to expand its server infrastructure as its managed hosting business grew. Complicating the situation was the fact that Data393 had inherited a DC-powered facility from a defunct telecommunications provider. While DC power is often used for networking infrastructure, it generally represents an unfriendly environment for most data center hardware.

Not so for Rackable. Besides offering a cabinet-level DC power solution, the company can also deal with entire data centers powered by DC. In fact, Rackable can exploit a facility-wide DC power supply to an even greater degree than a normal AC-powered data center, since the power conversion step at each rack can be skipped. In this type of setup, Rackable claims users can achieve a 30 percent power savings.
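
A simplified conversion-chain model shows where savings on that scale could come from; every stage efficiency below is an illustrative assumption, not a figure from Rackable.

```python
# Each power conversion stage loses energy; fewer stages means less loss.
# All stage efficiencies here are illustrative assumptions.

def end_to_end(*stages):
    """Overall efficiency is the product of the stage efficiencies."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Conventional path: double-conversion UPS plus a commodity server PSU.
ac_path = end_to_end(0.88, 0.70)

# Facility-wide DC: one central rectification stage plus a server DC card.
dc_path = end_to_end(0.92, 0.96)

saving = 1 - ac_path / dc_path   # input power needed scales as 1/efficiency
print(f"AC path: {ac_path:.0%}, DC path: {dc_path:.0%}")
print(f"Power saved for the same IT load: {saving:.0%}")   # about 30 percent
```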

Like the University of Florida, Data393 was looking to expand its server capacity within limited space and power constraints. But it also needed servers that could run directly off DC power.

“There were other providers that had DC-capable servers, but not necessarily with highly dense footprints,” said Steve Merkel, senior engineer at Data393. “Some of the blade environments did have DC options, but they were closed form factor solutions. We could find little bits and pieces of what we wanted, but to wrap everything into a single package, the only one we came across at the time was Rackable Systems.”

Data393 engineers were able to specify motherboards, hard drives, network adapters and RAID controllers, while still getting the high-density footprint. They acquired four cabinets (about 400 servers) from Rackable. By going with a DC-powered solution, they were able to significantly reduce their cooling costs and increase reliability.

“Given that we rectify in a separate room, a large chunk of our heat load is generated outside of the data center,” said Merkel. “We have noticed a decrease in thermal output by those servers, so consequently we've reduced costs from a cooling standpoint so we can increase density within the same infrastructure.”

DC For the Masses?

So why doesn't everyone use DC power in the data center? For some of the same reasons it's not used in general power distribution, namely that it is impractical to distribute direct current over long distances. Even at the scale of a data center, there are significant barriers. Beyond the cost of installing the DC power plant itself, deploying DC across a data center can be problematic: direct current requires thick copper bus bars that must be built and maintained correctly for safe service. The extra cost of this specialized infrastructure is a hindrance to widespread DC adoption.

At the level of the rack or cabinet, the objections to DC power are somewhat different. Many server makers have dismissed Rackable's solution as a “gimmick.” They say the energy efficiency gains are an illusion; the conversion from AC to DC just gets moved outside the server. Rackable maintains its cabinet-level DC rectifier solution is significantly more efficient than even the best AC power supplies.

Some of the major server OEMs, such as HP, IBM and Sun, offer their own DC-capable systems, but these are mainly targeted at DC-powered facilities, where AC power is simply unavailable. With the exception of Rackable, no server maker provides DC capability as a general-purpose solution. Why is that?

“First of all it's a very difficult technology to build,” said Colette LaForce, vice president of Marketing at Rackable Systems. “We launched it in 2003 but it certainly took a lot of engineering and ingenuity to get it to where it is. I think that for a lot of large x86 server manufacturers this would be like turning the giant ship in another direction. The advantage when you are a younger, more nimble organization is that you can do that. So I think one of the key barriers to entry is that it's just very difficult; this doesn't get solved overnight.”

The company has filed for patents around some of its DC technology, so if other OEMs decide to go this route, they're going to have to develop their own solutions. Until then, Rackable seems to have cornered the market for DC-friendly servers.
