March 11, 2019 — Hyperion Research forecasts that the worldwide market for high performance computing (HPC) server systems will grow robustly (9.8% CAGR) to reach $19.6 billion in 2022, an aggregate 59% gain from the 2017 total of $12.3 billion. Accompanying this growth is a strong trend toward the use of liquid cooling to manage the fast-rising heat levels generated by HPC servers that are increasingly large and more densely packed with processors, memory, and other components.
HPC data centers have often had to rely on liquid cooling systems borrowed from other industries, and this mismatch exacerbated “data center hydrophobia” — the fear of leaks damaging expensive electronic equipment. Fortunately, current growth in spending for HPC liquid cooling systems has begun to attract purpose-built solutions designed to handle the extreme demands of HPC systems more effectively.
This paper briefly reviews the rise of liquid cooling in the global HPC market, the technical challenges associated with this rise, and how liquid cooling vendors are addressing these challenges.
The Liquid Cooling Trend in the Global HPC Market
Since the start of the supercomputer era in the 1960s, the most powerful HPC systems have needed liquid cooling to keep their many tightly packed integrated circuits from overheating and damaging expensive electronic components. (Cooling with water or other liquids can be 3-4 times more efficient than air cooling.) In recent years, average processor counts and densities in the HPC market have skyrocketed. Between April 2000 and June 2018, the average peak performance of systems on the Top500 list of the world's most powerful supercomputers jumped from 154 gigaflops to 2.44 petaflops, a factor of 15,844. Today, HPC systems pack significantly more heat-generating components into much tighter confines. In addition, cooling can account for up to half of the energy costs for an HPC system. It's no wonder HPC vendors and data center managers alike have been seeking ways to improve cooling efficiency and lower cooling costs.
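To put that cooling cost figure in perspective, a back-of-the-envelope calculation shows what "up to half of energy costs" implies for a data center's power bill. The load, overhead, and electricity-price numbers below are purely illustrative assumptions, not Hyperion data:

```python
# Illustrative estimate of cooling's share of data center energy cost.
# All figures are assumptions chosen to match the worst case cited in the text,
# where cooling draws roughly as much power as the IT equipment itself.

it_load_kw = 1000.0        # assumed IT (compute) load in kilowatts
cooling_overhead = 1.0     # assumed: cooling power equals IT power (worst case)
price_per_kwh = 0.10       # assumed electricity price in USD per kWh

total_kw = it_load_kw * (1 + cooling_overhead)
cooling_share = (it_load_kw * cooling_overhead) / total_kw
annual_cooling_cost = it_load_kw * cooling_overhead * 24 * 365 * price_per_kwh

print(f"cooling share of energy: {cooling_share:.0%}")      # 50%
print(f"annual cooling cost: ${annual_cooling_cost:,.0f}")  # $876,000
```

Even modest improvements in cooling efficiency therefore translate into six-figure annual savings at this assumed scale, which is the economic pressure behind the liquid cooling trend the text describes.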
Even many midrange HPC systems and data centers have been making the move to liquid cooling. In a recent Hyperion Research worldwide study, entitled Power and Cooling Practices and Planning at HPC Sites, nearly all of the 100-plus surveyed sites employing air cooling said they were exploring liquid cooling alternatives to meet their future needs. Today's liquid options include immersion cooling, cold plate cooling, in-door liquid cooling, and direct-to-chip cooling, using water at varying temperatures or more esoteric liquids.
Liquid cooling options deployed in HPC centers have often been designed for other industrial uses. Even with these make-do solutions, coolant leaks at HPC data centers are uncommon, but when they do occur they can cause extensive damage to computer systems that may cost millions, sometimes tens of millions, of dollars each. Cooling efficiency also suffers when components cannot meet the flow rate, temperature, pressure, or chemical compatibility needs of larger systems. But the growth in spending for liquid-cooled HPC systems and data centers has motivated some pioneering vendors to design products purpose-built to handle the demanding requirements of HPC liquid cooling deployments in both on-premises and cloud environments.
An Example of Meeting Evolving Cooling Needs
As liquid cooling steadily advances in use and sophistication, system suppliers are increasingly focusing on purpose-built liquid cooling products and components for the worldwide HPC market.
One of the most commonly used, critical components in data centers today is the metal quick disconnect (QD). As a key fluid management component, these QDs affect flow rates, allow hot swapping of equipment in liquid cooled racks if they have integrated stop-flow capabilities, and serve as a point of reliability or vulnerability in a comprehensive liquid cooling system. If QDs work as designed, thermal engineers and data center operators do not typically give them much thought. If they fail, they suddenly become the most scrutinized, critical components in the system.
QDs are now being expressly designed to handle the demanding flow rates and related pressures associated with liquid cooling deployments in HPC and other large data centers. Flow rates are typically low at the server (e.g., 0.5 liters/minute) but considerably higher at the coolant distribution unit (up to 70 liters/minute), and flow rates that exceed the connector's maximum capacity can produce seal failure or accelerate the erosion of parts. As a result, choosing the best QD for the application is a crucial step for reliability and performance.
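The sizing step described above amounts to comparing the required flow rate at each point in the cooling loop against the connector's rated maximum, typically with a derating margin. The QD ratings and the safety margin in this sketch are hypothetical examples, not vendor specifications; only the server and CDU flow figures come from the text:

```python
# Hypothetical QD flow-rate sizing check; the ratings and margin are
# illustrative assumptions, not data from any vendor datasheet.

def qd_is_adequate(required_lpm: float, rated_max_lpm: float,
                   safety_margin: float = 0.8) -> bool:
    """A QD is sized adequately if the required flow stays within a
    derated fraction of its rated maximum (assumed 80% here)."""
    return required_lpm <= rated_max_lpm * safety_margin

# Flow rates from the text: ~0.5 L/min per server, up to 70 L/min at the CDU.
server_flow_lpm = 0.5
cdu_flow_lpm = 70.0

# Hypothetical connector ratings.
small_qd_rating = 2.0    # a compact server-level QD
large_qd_rating = 100.0  # a manifold/CDU-level QD

print(qd_is_adequate(server_flow_lpm, small_qd_rating))  # True
print(qd_is_adequate(cdu_flow_lpm, small_qd_rating))     # False: risks seal failure/erosion
print(qd_is_adequate(cdu_flow_lpm, large_qd_rating))     # True
```

The point of the derating margin is the failure mode named in the text: running a connector at or beyond its rated maximum invites seal failure and erosion, so a QD selected for server-level flows cannot simply be reused at the CDU.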
For a growing number of data centers, new liquid cooling-specific QDs serve as ideal alternatives to bulkier, more difficult to handle ball-and-sleeve hydraulic connectors. These newer products have been optimized for the temperature, pressure, and coolant needs of HPC systems and are made of durable metal and engineered polymers designed for years of dripless operation. Companies like CPC (Colder Products Company) in St. Paul, Minn., work closely with HPC manufacturers and data centers to deliver QDs specifically for their applications. Cray, for example, is among the HPC manufacturers to use CPC QDs, a version of which optimizes flow rates in a compact format while operating in tight spaces. To ease installation, these liquid cooling-specific QDs feature swivel joints, elbows, and low connection force. An integrated thumb latch allows one-handed installation, simplifying use even further. These seemingly small features take on added significance when you multiply installation, operational, and maintenance efforts across hundreds of racks and thousands of servers.
New to the liquid cooling market are thermoplastic QDs built specifically for liquid cooling use. The reported advantages of these couplers are their lightweight design, chemical compatibility with most conventional and specialty fluids, and significant creep and corrosion resistance. Other reported attributes of thermoplastic QDs made of polyphenylsulphone (PPSU) include:
- Flame retardant in accordance with UL 94 V-0 rating
- Low water absorption
- Thermal insulator: No external condensation; not hot to the touch
- Leak tested to 10,000 cycles
- Broad operating temperature range: 0°F to 240°F (-17°C to 115°C)
Hyperion Research sees spending for liquid cooling deployments as an escalating, enduring trend as average HPC system sizes and densities continue to increase. This spending growth will motivate more vendors to design products that specifically meet the most demanding liquid cooling needs of advanced HPC systems and large data centers.
About Hyperion Research, LLC
Hyperion Research provides data-driven research, analysis, and recommendations for technologies, applications, and markets in high performance computing and emerging technology areas to help organizations worldwide make effective decisions and seize growth opportunities. Research includes market sizing and forecasting, share tracking, segmentation, technology and related trend analysis, and both user and vendor analysis for multi-user technical server technology used for HPC and HPDA (high performance data analysis). We provide thought leadership and practical guidance for users, vendors, and other members of the HPC community by focusing on key market and technology trends across government, industry, commerce, and academia.
Source: Hyperion Research, LLC