Here’s an interesting dilemma: What if you were awarded millions of dollars to build a new, state-of-the-art, 1-petaflops supercomputer, but had no place to put it?
That’s the situation that the US Department of Energy’s National Renewable Energy Laboratory (NREL) faced a few years ago. Congress appropriated money for NREL to order a new supercomputer system, but its existing datacenter was too small to hold it.
NREL’s solution: It began working on a new energy-efficient datacenter, one designed to be cheaper to build and operate than comparable datacenters. At the same time, it pooled some money with Sandia National Laboratories in Albuquerque, NM, in order to jointly purchase a 500 teraflops system. The labs installed that system in the Sandia datacenter, and both organizations have access to it until NREL’s datacenter is fully equipped.
NREL then requisitioned its new computer from HP.
The whole process is now nearing completion. The new datacenter is largely done. The first phase of the new computer system has been installed and tested. Delivery of phase two – the 1-petaflops system – is about to begin.
It’s just in time. The shared computer at Sandia, a Red Mesa system from Sun’s pre-Oracle days, is no longer sufficient to serve both labs’ needs. It is averaging 92 percent utilization, day in and day out.
Despite the time it took, there were advantages to this approach. HP is sending NREL some new, still-unnamed servers that not only include some of the latest Intel Xeon processors and Xeon Phi co-processors, but also a new warm-water liquid cooling system that HP has not yet unveiled to the public.
NREL was also able to essentially design a datacenter around its new computer system in order to create an integrated whole. The cooling system, for example, makes compressor-based chillers unnecessary. The servers use 480 VAC power, which eliminates power converters. Less equipment means more space, enabling the servers to be packed into just 10,000 square feet of raised floor space. Warm-water cooling means most of the servers do not require hot and cold aisle containment. The hot water can be used to heat the building or melt snow.
“Taking this integrated look at a datacenter from an energy efficient building perspective drove a lot of the decisions we made,” says NREL Computational Science Center Director Steve Hammond. “Otherwise you could make locally-optimized decisions that are not as efficient as they could be if you stepped back” to see the big picture.
The first racks of the new system were delivered last November, right after SC12. More arrived in early January. The final four racks (out of 10 total) arrived on February 19. Most of the equipment consists of HP ProLiant SL230s and SL250s Gen8 servers powered by Intel Xeon E5-2670 8-core CPUs. This is the Sandy Bridge generation, using 32nm technology.
However, those last four racks each contain something new. They hold prototypes of a next-generation server family that HP will be introducing to the rest of the world next year. This new server uses next-generation Intel Xeon Ivy Bridge processors and Intel Xeon Phi coprocessors, both built on 22nm technology.
These servers also feature HP’s prototype direct-to-chip warm-water liquid cooling system. “The primary heat exchange is at the chip level, with heat going (directly) to liquid rather than going to air first and then liquid,” says NREL’s Hammond. Water will arrive at the servers at about 75 degrees Fahrenheit and leave at about 100 degrees Fahrenheit.
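Those two temperatures are enough for a back-of-envelope sense of how much heat such a loop carries. The sketch below applies the standard relation q = ṁ·c·ΔT; only the 75°F inlet and 100°F outlet come from the article, while the flow rate is a purely illustrative assumption.

```python
# Rough estimate of heat carried by a warm-water cooling loop: q = m_dot * c * dT.
# Only the inlet/outlet temperatures come from the article; the flow rate
# is an assumed, illustrative figure.
c_water = 4186.0                 # specific heat of water, J/(kg*K)
dT_fahrenheit = 100.0 - 75.0     # temperature rise across the servers, in F
dT_kelvin = dT_fahrenheit * 5.0 / 9.0   # same rise, ~13.9 K
flow_kg_per_s = 1.0              # assumed flow: 1 kg/s (roughly 16 gpm)

heat_kw = flow_kg_per_s * c_water * dT_kelvin / 1000.0
print(f"~{heat_kw:.0f} kW removed per kg/s of water flow")
```

At that temperature rise, every kilogram per second of flow hauls away on the order of 58 kW of server heat, which is why a modest plumbing loop can replace a lot of air-handling equipment.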
Together, the ProLiant servers and the new prototypes make up phase one, which consists of about 11,500 cores in 10 compute racks. That system reached over 200 teraflops on LINPACK tests last month, meeting its intermediate performance milestone.
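That LINPACK figure is plausible given the core count. As a rough sanity check (the clock speed and flops-per-cycle figures below are assumptions based on the Xeon E5-2670's published specs, not numbers from the article), the theoretical peak of 11,500 Sandy Bridge cores lands in the same ballpark:

```python
# Back-of-envelope check of the phase-one LINPACK result.
# Assumed, not from the article: E5-2670 base clock of 2.6 GHz and
# 8 double-precision flops per core per cycle (Sandy Bridge AVX).
cores = 11_500            # phase-one core count, from the article
clock_ghz = 2.6           # assumed base clock of the Xeon E5-2670
flops_per_cycle = 8       # assumed DP flops/core/cycle with AVX

peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
efficiency = 200.0 / peak_tflops   # 200 Tflops LINPACK, from the article

print(f"theoretical peak: ~{peak_tflops:.0f} Tflops")
print(f"LINPACK efficiency: ~{efficiency:.0%}")
```

Under those assumptions the peak works out to roughly 240 teraflops, so a 200-teraflops LINPACK run corresponds to efficiency in the low-to-mid 80s percent, a healthy figure for a CPU-based cluster.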
The real show, however, comes with phase two. That’s a 1-petaflops system made up entirely of HP’s new servers, including the new cooling system. These are the first production versions HP is delivering to a customer. They should start arriving from Houston by early summer and will be standing in the datacenter before the end of August.
To say this is a showcase datacenter is an understatement. It has floor-to-ceiling glass windows to allow visitors to look in from the corridors. “People say it looks more like an aquarium than a datacenter,” says Hammond. Part of the idea is to show off its energy efficiency for others interested in saving energy and money.
Hammond is hoping, however, that the datacenter-under-glass doesn’t become too popular a display. He’s already regularly guiding visitors past the aquarium, despite the fact that the main system is not yet installed. He needs to get some work done.