Sun Cofounder Evangelizes Liquid Blade Server

By Michael Feldman

May 4, 2011

What does a Sun Microsystems cofounder do with his spare time? Well, if you’re Scott McNealy, you spend some of it lending your expertise to promising tech vendors that are looking to break into the IT big leagues. One such company that he has taken a personal interest in is Hardcore Computer, which recently introduced a line of servers that use liquid submersion technology. HPCwire spoke with McNealy to get his take on the technology and to ask him why he thinks the company deserves the spotlight.

McNealy signed on as a non-paid advisor and consultant with Hardcore in January at the behest of longtime friend and former Stanford classmate Doug Burgum. Burgum’s venture firm, Kilbourne Group, has invested in Hardcore, a Rochester, Minnesota-based computer maker that specializes in high performance gear based on the company’s patented liquid submersion cooling technology.

“This is one of the few companies innovating on top of the Intel architecture — rather than just strapping a power supply on and porting Linux,” McNealy told HPCwire.

Hardcore makes a range of liquid-cooled offerings, including desktops, workstations, and servers. Its latest offering is the “Liquid Blade,” a server line the company announced in May 2010 and launched in November at the Supercomputing Conference in New Orleans (SC10). The new blade is more or less a standard dual-socket x86-based blade using Intel Xeon 5500 and 5600 series parts. It sports eight DDR3 memory slots per CPU, six SATA ports for storage, and a PCIe x16 slot for a GPU or other expansion card.

Liquid Blade’s secret sauce — and in this case it literally is a sauce — is Hardcore’s patented liquid submersion technology. The company uses a proprietary dielectric fluid, called Core Coolant, to entirely submerge the blades within a specially-built 5U rack-mounted chassis. The coolant is inert, biodegradable, and most importantly non-conductive, so all of the electrical components inside the server are protected.

As with any liquid coolant, the idea is to draw off the excess heat much more efficiently than an air-cooled setup and ensure all the server components are kept comfortably cool even under maximum load. According to the company literature, the Core Coolant has 1,350 times the cooling capacity of air. Since the coolant is so effective at heat dissipation and the internal fans have been dispensed with, the server components can be packed rather densely. In this case the 5U Hardcore chassis can house up to seven of the dual-socket blades.

The company launched its liquid-dipped server at SC10 last November to get the attention of the HPC community, but the offering is suitable for any installation where the datacenter is constrained by power and space. Besides HPC centers, these include DoD facilities, telco firms, and Internet service providers. “You have to look at the users who think at scale and have a huge electric bill,” explains McNealy.

The datacenter cooling problem is well-known, of course. As servers get packed with hotter and faster chips and datacenters scale up to meet growing demand, getting enough power and space has become increasingly challenging. Datacenter cooling has traditionally relied on air conditioning, but air makes for a poor heat exchange medium, and it’s hard to direct it where it’s most needed. “Air goes everywhere but where you want it to,” laughs McNealy. Cooling a hot server, he says, is “like trying to blow a candle out from the other side of the room.”

Because of the density of the Hardcore solution, you need about 50 percent fewer racks to deliver the same compute. And since the blades essentially never overheat, one can expect better reliability and longevity. As any datacenter administrator knows, heat is a major cause of server mortality, especially in facilities filled to capacity.
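The arithmetic behind that rack count can be sketched from the chassis figures above. The sketch below is purely illustrative: the 42U rack and the 1U air-cooled baseline are assumptions for comparison, not Hardcore numbers.

```python
# Back-of-the-envelope rack-density comparison; the 1U baseline is an
# assumption for illustration, not a figure from Hardcore.
RACK_UNITS = 42                # standard rack height, in U (assumed)
CHASSIS_U = 5                  # Liquid Blade chassis height (from the article)
BLADES_PER_CHASSIS = 7         # dual-socket blades per chassis (from the article)
BASELINE_SERVERS_PER_U = 1     # hypothetical air-cooled baseline: 1U dual-socket servers

liquid_nodes_per_rack = (RACK_UNITS // CHASSIS_U) * BLADES_PER_CHASSIS  # 8 * 7 = 56
air_nodes_per_rack = RACK_UNITS * BASELINE_SERVERS_PER_U                # 42

print(f"Liquid Blade nodes per rack: {liquid_nodes_per_rack}")
print(f"1U baseline nodes per rack:  {air_nodes_per_rack}")
# Note: with both racks fully populated this works out to roughly 1.3x density;
# the article's "50 percent fewer racks" figure presumably also reflects that
# air-cooled racks are often only partially filled because of power and
# cooling limits.
```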

But the really big savings is on the power side. Since cooling and the associated equipment take up such a large chunk of a datacenter’s energy budget, any effort to reduce these costs tends to pay for itself in just a few years. An independent study found that a Liquid Blade setup could reduce datacenter cooling costs by up to 80 percent and operating costs by up to 25 percent.
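To get a rough feel for that payback claim, here is a minimal sketch using the study’s “up to 80 percent” cooling figure; the dollar amounts and the retrofit premium are hypothetical placeholders, not numbers from the study or the vendor.

```python
# Illustrative payback estimate; all dollar figures are hypothetical.
annual_cooling_cost = 400_000   # assumed yearly cooling bill for a mid-size facility ($)
cooling_reduction = 0.80        # the article's "up to 80 percent" figure
one_time_premium = 600_000      # assumed cost of heat exchangers and plumbing ($)

annual_savings = cooling_reduction * annual_cooling_cost
payback_years = one_time_premium / annual_savings

print(f"Annual cooling savings: ${annual_savings:,.0f}")     # $320,000
print(f"Simple payback period:  {payback_years:.1f} years")  # ~1.9 years
```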

Hardcore isn’t alone in the liquid submersion biz. Other companies, most notably Austin-based Green Revolution, are providing these types of products. In the case of Green Revolution, they offer a general-purpose solution for all sorts of hardware — rack servers, blades, and network switches. The company will strip down the gear to its essentials and immerse the components in a specially-built 42U enclosure filled with an inert mineral oil.

But since Hardcore is dunking its own servers, it has the option to build high performance gear that would be impractical to run in an air-cooled environment. As McNealy points out, the efficient liquid cooling is a natural for the highest-bin x86 chips running the fastest clocks. For example, the company could stuff Intel’s latest 4.4 GHz Xeon 5600 processors into its blades and offer a special-purpose product for high frequency traders (as Appro has done, sans immersive liquid cooling, with its HF1 servers). Hardcore has never talked about such a setup for HFT, but it does tout the servers outfitted with high-wattage graphics cards for GPGPU-type computation. Applications using such capabilities include medical imaging, CGI rendering, engineering simulation and modeling, and web-based gaming.

One of the things McNealy has been working on with the Hardcore people is getting an apples-to-apples comparison of their liquid-cooled gear versus conventional air-cooled servers. To do this, he says, you have to come up with a higher-level analysis that takes into account the service cost over the entire datacenter.

According to the company, the cost of a Liquid Blade setup is on par with a comparably equipped air-cooled product since all the fans are eliminated and the chassis design is simpler. If a user opted for Liquid Blade when it came time to upgrade their servers, they could start to realize energy cost savings immediately. But the big savings occur when a datacenter can be built from scratch with liquid submersion in mind.

In that case, the datacenter can dispense with a lot of the CRAC units, use 12-foot ceilings instead of 16-foot ones (no overhead air ductwork is needed), and use fewer UPS units thanks to reduced power requirements. The only extra cost comes from the chilled-water-to-oil heat exchangers used to draw the heat from the chassis coolant. Also, since you can fit more servers into the same space, the datacenter floor space can be reduced by about 30 percent for a given compute capacity.
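A minimal sketch of the kind of whole-datacenter, apples-to-apples analysis McNealy describes might look like the following. Every dollar figure and quantity here is a hypothetical placeholder; the cost buckets simply mirror the items mentioned above (servers, CRAC units, UPS capacity, floor space, heat exchangers, and the energy bill).

```python
# A rough total-cost-of-ownership comparison; all figures are hypothetical.
def datacenter_cost(servers, crac, ups, floor_sqft, heat_exchangers=0,
                    cost_per_sqft=250, annual_energy=0, years=5):
    """Sum build-out capex plus energy spend over a planning horizon."""
    capex = servers + crac + ups + heat_exchangers + floor_sqft * cost_per_sqft
    return capex + annual_energy * years

# Conventional air-cooled build-out (assumed figures)
air = datacenter_cost(servers=5_000_000, crac=800_000, ups=600_000,
                      floor_sqft=10_000, annual_energy=1_200_000)

# Liquid-submersion build-out: comparable server pricing (per the article),
# fewer CRAC and UPS units, roughly 30 percent less floor space, plus the
# chilled-water-to-oil heat exchangers as the only extra line item.
liquid = datacenter_cost(servers=5_000_000, crac=200_000, ups=450_000,
                         floor_sqft=7_000, heat_exchangers=300_000,
                         annual_energy=900_000)

print(f"Air-cooled, 5-year:    ${air:,.0f}")
print(f"Liquid-cooled, 5-year: ${liquid:,.0f}")
```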

So why isn’t everyone flocking to liquid submersion? Customer inertia, says McNealy. According to him, he’s spent most of his career knowing the right thing to do and trying to get others to realize it themselves.

With Hardcore, the challenge is that most organizations are already set up with their existing air-cooled facilities, so a lot of the cost incentives for the big switch aren’t there. He thinks that if a large Internet service provider bought into this technology for a new datacenter, the business could quickly take off. For McNealy’s Sun, that tipping point came in the late 80s when Computervision signed a big deal to go with his company’s Unix-based workstations. Hardcore, no doubt, would love to repeat history, this time with the likes of Google, Amazon, or Facebook.

“Their biggest challenge is the barrier to exit from the old strategy, not the barrier to entry to the new one,” says McNealy.
