Sun Cofounder Evangelizes Liquid Blade Server

By Michael Feldman

May 4, 2011

What does a Sun Microsystems cofounder do with his spare time? Well, if you’re Scott McNealy, you spend some of it lending your expertise to promising tech vendors looking to break into the IT big leagues. One such company that he has taken a personal interest in is Hardcore Computer, which recently introduced a line of servers that use liquid submersion technology. HPCwire spoke with McNealy to get his take on the technology and to ask him why he thinks the company deserves the spotlight.

McNealy signed on as a non-paid advisor and consultant with Hardcore in January at the behest of longtime friend and former Stanford classmate Doug Burgum. Burgum’s venture firm, Kilbourne Group, has invested in Hardcore, a Rochester, Minnesota-based computer maker that specializes in high performance gear based on the company’s patented liquid submersion cooling technology.

“This is one of the few companies innovating on top of the Intel architecture — rather than just strapping a power supply on and porting Linux,” McNealy told HPCwire.

Hardcore makes a range of liquid-cooled machines, including desktops, workstations, and servers. Its latest offering is the “Liquid Blade,” a server line the company announced in May 2010 and launched in November at the Supercomputing Conference in New Orleans (SC10). The new blade is more or less a standard dual-socket x86 design, using Intel Xeon 5500 and 5600 parts. It sports eight DDR3 memory slots per CPU, six SATA slots for storage, and a PCIe x16 slot for a GPU card or other external device.

Liquid Blade’s secret sauce — and in this case it literally is a sauce — is Hardcore’s patented liquid submersion technology. The company uses a proprietary dielectric fluid, called Core Coolant, to entirely submerge the blades within a specially-built 5U rack-mounted chassis. The coolant is inert, biodegradable, and most importantly non-conductive, so all of the electrical components inside the server are protected.

As with any liquid coolant, the idea is to draw off excess heat much more efficiently than an air-cooled setup and ensure all the server components are kept comfortably cool even under maximum load. According to company literature, Core Coolant has 1,350 times the cooling capacity of air. Since the coolant is so effective at heat dissipation and the internal fans have been dispensed with, the server components can be packed quite densely. In this case, the 5U Hardcore chassis can house up to seven of the dual-socket blades.
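To put that density figure in perspective, here is a back-of-the-envelope sketch. The seven-blades-per-5U number comes from the article; the 42U rack height and the half-populated air-cooled baseline are assumptions made purely for illustration.

```python
# Back-of-the-envelope rack density comparison.
# 7 dual-socket blades per 5U chassis is the article's figure;
# the rack height and the air-cooled baseline are assumptions.

RACK_U = 42                                 # standard full-height rack (assumed)

# Liquid Blade: eight 5U chassis fit in a 42U rack
chassis_per_rack = RACK_U // 5              # -> 8 chassis
liquid_sockets = chassis_per_rack * 7 * 2   # 7 blades x 2 sockets = 112 sockets

# Air-cooled baseline: 1U dual-socket servers, rack only half-populated
# because of power and cooling limits (an assumed, not uncommon, scenario)
air_sockets = 21 * 2                        # -> 42 sockets

print(f"liquid-cooled sockets per rack: {liquid_sockets}")
print(f"air-cooled sockets per rack:    {air_sockets}")
```

On these assumptions a liquid-cooled rack holds well over twice the sockets of its air-cooled counterpart, which is consistent with the rack-count reduction the article cites below.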

The company launched its liquid-dipped server at SC10 last November to get the attention of the HPC community, but the offering is suitable for any installation where the datacenter is constrained by power and space. Besides HPC centers, these include DoD facilities, telco firms, and Internet service providers. “You have to look at the users who think at scale and have a huge electric bill,” explains McNealy.

The datacenter cooling problem is well-known, of course. As servers get packed with hotter and faster chips and datacenters scale up to meet growing demand, getting enough power and space has become increasingly challenging. Datacenter cooling has traditionally relied on air conditioning, but air makes for a poor heat exchange medium, and it’s hard to direct it where it’s most needed. “Air goes everywhere but where you want it to,” laughs McNealy. Cooling a hot server, he says, is “like trying to blow a candle out from the other side of the room.”

Because of the density of the Hardcore solution, you need about 50 percent fewer racks to deliver the same compute. And since the blades essentially never overheat, one can expect better reliability and longevity. As any datacenter administrator knows, heat is a major cause of server mortality, especially in facilities filled to capacity.

But the really big savings come on the power side. Since cooling and the associated equipment consume such a large chunk of a datacenter’s energy budget, any effort to reduce these costs tends to pay for itself in just a few years. An independent study found that a Liquid Blade setup could reduce datacenter cooling costs by up to 80 percent and operating costs by up to 25 percent.
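The arithmetic behind those percentages can be sketched as follows. The 80 percent cooling reduction is the article’s “up to” figure; the dollar amount and the 30 percent cooling share of the energy bill are assumed placeholders, not vendor numbers.

```python
# Rough energy-budget arithmetic. Only the 80% cooling reduction comes
# from the article; the bill size and the 30% cooling share are
# hypothetical placeholders chosen for illustration.

annual_energy_bill = 1_000_000.0   # assumed $/year for a mid-size datacenter
cooling_share = 0.30               # assumed fraction of the bill spent on cooling
cooling_reduction = 0.80           # article's "up to 80 percent"

annual_savings = annual_energy_bill * cooling_share * cooling_reduction
print(f"annual savings: ${annual_savings:,.0f}")  # $240,000 on these assumptions
```

A 24 percent cut in the total bill under these assumptions lands close to the study’s “up to 25 percent” operating-cost figure, which suggests the two headline numbers are roughly consistent with each other.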

Hardcore isn’t alone in the liquid submersion biz. Other companies, most notably Austin-based Green Revolution, are providing these types of products. In the case of Green Revolution, they offer a general-purpose solution for all sorts of hardware — rack servers, blades, and network switches. The company will strip down the gear to its essentials and immerse the components in a specially-built 42U enclosure filled with an inert mineral oil.

But since Hardcore is dunking its own servers, it has the option to build high performance gear that would be impractical to run in an air-cooled environment. As McNealy points out, efficient liquid cooling is a natural fit for the highest-bin x86 chips running the fastest clocks. For example, the company could stuff Intel’s latest 4.4 GHz Xeon 5600 processors into its blades and offer a special-purpose product for high frequency traders (as Appro has done, sans immersive liquid cooling, with its HF1 servers). Hardcore has never talked about such a setup for HFT, but it does tout the servers outfitted with high-wattage graphics cards for GPGPU-type computation. Applications using such capabilities include medical imaging, CGI rendering, engineering simulation and modeling, and web-based gaming.

One of the things McNealy has been working on with the Hardcore people is getting an apples-to-apples comparison of their liquid-cooled gear versus conventional air-cooled servers. To do this, he says, you have to come up with a higher-level analysis that takes into account the service cost across the entire datacenter.

According to the company, the cost of a Liquid Blade setup is on par with a comparably equipped air-cooled product, since all the fans are eliminated and the chassis design is simpler. If users opted for Liquid Blade when it came time to upgrade their servers, they could start to realize energy cost savings immediately. But the big savings occur when a datacenter can be built from scratch with liquid submersion in mind.

In that case, the datacenter can dispense with many of the CRAC units, use 12-foot ceilings instead of 16-foot ones (no overhead air ductwork is needed), and install fewer UPS units thanks to reduced power requirements. The only extra cost comes with the chilled-water-to-oil heat exchangers used to draw heat from the chassis coolant. Also, since more servers fit into the same space, datacenter floor space can be reduced by about 30 percent for a given compute capacity.
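The article’s “about 50 percent fewer racks” and “about 30 percent less floor space” figures can be reconciled with one assumed number: the fraction of the floor that racks and their aisles actually occupy. The sketch below uses 60 percent for that share, which is an assumption, not a figure from the article.

```python
# Reconciling "about 50 percent fewer racks" with "about 30 percent
# less floor space": if racks and their aisles occupy roughly 60% of
# the floor (an assumed figure), halving the rack count frees about
# 30% of the total floor area.

rack_floor_share = 0.60   # assumed fraction of floor taken by racks + aisles
rack_reduction = 0.50     # the article's "about 50 percent fewer racks"

floor_reduction = rack_floor_share * rack_reduction
print(f"floor-space reduction: {floor_reduction:.0%}")  # prints 30%
```

The rest of the floor (power distribution, chillers, staging, offices) doesn’t shrink with rack count, which is why halving the racks yields only about a third less floor space overall.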

So why isn’t everyone flocking to liquid submersion? Customer inertia, says McNealy. According to him, he’s spent most of his career knowing the right thing to do and trying to get others to realize it themselves.

With Hardcore, the challenge is that most organizations are already set up with existing air-cooled facilities, so a lot of the cost incentives for the big switch aren’t there. He thinks that if a large Internet service provider bought into this technology for a new datacenter, the business could quickly take off. For McNealy’s Sun, that tipping point came in the late 1980s, when Computervision made a big deal to go with his company’s Unix-based workstations. Hardcore, no doubt, would love to repeat history, this time with the likes of Google, Amazon, or Facebook.

“Their biggest challenge is the barrier to exit from the old strategy, not the barrier to entry to the new one,” says McNealy.
