December 03, 2008
From Seymour Cray's early heroic implementations of heat dissipation to today's state-of-the-art technology, fans, heat sinks, exotic vapor cooling and even simple water cooling have come full circle. Yet we still face the same challenges: dissipating heat and designing usable, cost-effective solutions. Many of these solutions also draw additional energy, and energy equals expense.
Passive Thermal Technology, LLC, based in Plymouth, Mass., is an innovator in passive cooling technologies and has developed a series of elegant and cost-effective solutions based on rather simple principles of thermodynamics and engineering. More amazing still, these solutions are more than price/performance competitive with existing technologies and are applicable across a wide range of servers, blades, GPUs, FPGAs and other processing elements.
As an example, a typical multi-processor server today with 100 watt quad-core CPUs has up to 16 fans, each drawing 10 watts. This additional 160 watts must then be removed from the enclosure and cabinet as well. Some newer systems use liquids to capture local heat and remove it, employing pumps to move the fluid. Passive Thermal Technology's (PTT) approach is based on Loop Heat Pipes (LHPs). I am told that the LHP's key differentiation is a 4x higher heat-removal capacity and the ability to transport heat over 10x longer distances while using zero electrical energy to operate, providing 10x higher thermal efficiency. The energy cost savings can be very meaningful to HPC, cloud computing and datacenters, as this is a highly efficient and totally passive process and technology.
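The fan overhead described above is easy to put in dollar terms. A minimal sketch using the article's own figures (16 fans at 10 W each); the $0.10/kWh electricity rate is an illustrative assumption, not a number from the article:

```python
# Back-of-the-envelope fan power overhead for one server, using the
# article's figures: 16 fans drawing 10 W each. The electricity rate
# below is an assumed value for illustration only.

FANS_PER_SERVER = 16
WATTS_PER_FAN = 10
RATE_USD_PER_KWH = 0.10        # assumed utility rate
HOURS_PER_YEAR = 24 * 365

fan_watts = FANS_PER_SERVER * WATTS_PER_FAN        # 160 W of pure overhead
annual_kwh = fan_watts * HOURS_PER_YEAR / 1000     # energy burned by fans per year
annual_cost = annual_kwh * RATE_USD_PER_KWH        # cost per server per year

print(f"Fan overhead: {fan_watts} W, ~{annual_kwh:.0f} kWh/yr, ~${annual_cost:.0f}/yr")
```

Multiplied across the thousands of servers in a datacenter, and doubled again by the cooling plant that must remove that fan heat, the passive (zero-watt) alternative becomes attractive quickly.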
The secret to how all of this is accomplished is PTT's Loop Heat Pipe technology, a two-phase passive device. It employs a very small evaporator, a condenser and connecting pipes. The evaporator contains a wick that transfers the heat rejected by the processor to the working fluid inside the LHP. The wick also provides the capillary pressure that drives the working fluid around a cooling loop, the length of which can be measured in meters. This loop includes the condenser, which is cooled by water or air; its function is to return the vapor to a liquid, completing the cycle, while rejecting the heat to the air or water that cools it. These LHPs are small enough to be used in 1U rack servers, blades, COTS systems and avionics, and they also make it possible to cool single-slot GPU cards rejecting 250 or more watts using the existing cooling fan. GPUs represent a prime application that would immediately benefit from LHP technology: they run extremely fast and hot in a very dense form factor, so not only must the GPU itself be cooled, but the heat must also be extracted from the server enclosure.
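The reason a two-phase loop can stay so small is that it moves heat as latent heat of vaporization rather than by warming a bulk coolant. A minimal sketch of the governing relation Q = m_dot * h_fg, assuming water as the working fluid (latent heat roughly 2.26 MJ/kg); real LHPs often use other fluids (ammonia, for example) at other operating points:

```python
# Rough estimate of the working-fluid mass flow needed to carry a given
# heat load in a two-phase loop: Q = m_dot * h_fg. Water's latent heat
# of vaporization (~2.26e6 J/kg) is an illustrative assumption; actual
# LHP working fluids and operating temperatures vary.

H_FG_WATER = 2.26e6   # J/kg, latent heat of vaporization of water

def mass_flow_for_load(q_watts, h_fg=H_FG_WATER):
    """Mass flow rate (kg/s) of fluid that must evaporate to move q_watts."""
    return q_watts / h_fg

# The article's 250 W GPU load needs only about a tenth of a gram of
# evaporated water per second, which is why the evaporator, wick and
# transport lines can be small enough for a single-slot card.
m_dot = mass_flow_for_load(250)
print(f"{m_dot * 1000:.3f} g/s")
```

By contrast, a single-phase water loop moving the same 250 W across a modest temperature rise needs orders of magnitude more flow, and hence a pump.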
Heat is the enemy of all processors and a consequence of higher clock speeds: greater heat leads to significantly higher failure rates of processor elements. Heat is particularly problematic in HPC environments, where CPUs, GPUs and virtually all semiconductors are driven at the absolute edge of their operating envelope, right up against acceptable MTBF rates. Moreover, as systems scale up to many tens of thousands of cores, the challenge grows more problematic and expensive in the forms of reliability, downtime and energy consumption. LHP technology appears to be highly cost-effective: it is expected to be priced in the $13-$26 per copy range for what is designed to be a very reliable, passive (no moving parts), effective green technology.
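The link between operating temperature and failure rate is commonly approximated by the rule of thumb that every 10 degrees C of additional junction temperature roughly doubles the failure rate (an Arrhenius-style approximation; the figures here are illustrative, not from the article or from PTT):

```python
# Rule-of-thumb reliability model: failure rate roughly doubles for each
# 10 C rise in operating temperature. This is an Arrhenius-style
# approximation used for illustration; real derating curves are
# device-specific.

def relative_failure_rate(delta_t_celsius):
    """Failure-rate multiplier for a temperature change of delta_t_celsius."""
    return 2 ** (delta_t_celsius / 10.0)

# Running 20 C hotter roughly quadruples the expected failure rate;
# conversely, better cooling that drops the part 20 C cuts it to a quarter.
print(relative_failure_rate(20))
print(relative_failure_rate(-20))
```

Under this rule, even modest cooling improvements compound into meaningful reliability and downtime gains at the scale of tens of thousands of cores.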
Traditional HPC centers have not been attracted to low-power computing if it comes at the sacrifice of absolute performance; HPC still has a "damn the torpedoes" attitude toward getting its important jobs done. LHP technology clearly plays to that attitude in a reliable and cost-effective manner. However, with the explosive growth of mega-datacenters like Google, Yahoo and Amazon and the scaling up of cloud computing, the need for low-power, cost-effective computing will bear directly on pressure points such as reliability and ROI, which are of great importance to these data-intensive mega-datacenters.