March 13, 2013
Here's an interesting dilemma: What if you were awarded millions of dollars to build a new, state-of-the-art, 1-petaflops supercomputer, but had no place to put it?
That's the situation that the US Department of Energy's National Renewable Energy Laboratory (NREL) faced a few years ago. Congress appropriated money for NREL to order a new supercomputer system, but its existing datacenter was too small to hold it.
NREL's solution: It began working on a new energy-efficient datacenter, one designed to be cheaper to build and operate than comparable datacenters. At the same time, it pooled some money with Sandia National Laboratories in Albuquerque, NM, in order to jointly purchase a 500 teraflops system. The labs installed that system in the Sandia datacenter, and both organizations have access to it until NREL's datacenter is fully equipped.
NREL then requisitioned its new computer from HP.
The whole process is now coming to completion. The new datacenter is largely done. The first phase of the new computer system has been installed and tested. Delivery of phase two – the 1-petaflops system – is about to begin.
It's just in time. The shared computer at Sandia, a Red Mesa system from Sun's pre-Oracle days, is no longer sufficient to serve both labs' needs. It is averaging 92 percent utilization, day in and day out.
Despite the time it took, there were advantages to this approach. HP is sending NREL some new, still-unnamed servers that not only include some of the latest Intel Xeon processors and Xeon Phi coprocessors, but also a new warm-water liquid cooling system that HP has not yet unveiled to the public.
NREL was also able to essentially design a datacenter around its new computer system in order to create an integrated whole. The cooling system, for example, makes compressor-based chillers unnecessary. The servers use 480 VAC power, which eliminates power converters. Less equipment means more space, enabling the servers to be packed into just 10,000 square feet of raised floor space. Warm-water cooling means most of the servers do not require hot and cold aisle containment. The hot water can be used to heat the building or melt snow.
"Taking this integrated look at a datacenter from an energy efficient building perspective drove a lot of the decisions we made," says NREL Computational Science Center Director Steve Hammond. "Otherwise you could make locally-optimized decisions that are not as efficient as they could be if you stepped back" to see the big picture.
The first racks of the new system were delivered last November, right after SC12. More arrived in early January. The final four racks (out of 10 total) arrived on February 19. Most of the equipment consists of HP ProLiant SL230s and SL250s Gen8 servers powered by Intel Xeon E5-2670 8-core CPUs. This is the Sandy Bridge generation, using 32nm technology.
However, those last four racks each contain something new. They hold prototypes of a next-generation server family that HP will be introducing to the rest of the world next year. This new server uses next-generation Intel Xeon Ivy Bridge processors and Intel Xeon Phi coprocessors, both built on 22nm technology.
These servers also feature HP's prototype direct-to-chip warm-water liquid cooling system. "The primary heat exchange is at the chip level, with heat going (directly) to liquid rather than going to air first and then liquid," says NREL's Hammond. Water will arrive at the servers at about 75 degrees Fahrenheit and leave at about 100 degrees F.
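The 75°F-in, 100°F-out figures imply how much water the loop must move for a given heat load. A minimal sketch of that arithmetic, using the standard relation q = ṁ·c·ΔT (the 60 kW rack load is a hypothetical example, not an NREL figure):

```python
# Illustrative estimate (not NREL's actual numbers): water flow needed
# for a direct-to-chip loop to absorb a given heat load, via q = m_dot * c_p * delta_T.

C_P_WATER = 4186.0  # specific heat of water, J/(kg*K)

def flow_rate_lpm(heat_load_kw, t_in_f=75.0, t_out_f=100.0):
    """Water flow (liters/min) needed to carry away heat_load_kw
    with the given inlet/outlet temperatures in degrees Fahrenheit."""
    delta_t_k = (t_out_f - t_in_f) * 5.0 / 9.0          # F difference -> kelvin
    m_dot = heat_load_kw * 1000.0 / (C_P_WATER * delta_t_k)  # mass flow, kg/s
    return m_dot * 60.0                                  # ~1 kg of water per liter

# A hypothetical 60 kW rack with the article's 75 F in / 100 F out:
print(f"{flow_rate_lpm(60):.1f} L/min")  # ~62 L/min
```

The wide 25°F temperature rise is what keeps the flow rate modest; a narrower delta would demand proportionally more pumping.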
The combination of the ProLiant servers and the new prototypes comprises phase one, consisting of about 11,500 cores in 10 compute racks. That system reached over 200 teraflops on LINPACK tests last month, meeting its intermediate performance milestone.
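A quick back-of-the-envelope check (our arithmetic, not NREL's) puts the 200-teraflops result in context. Assuming every core is a Xeon E5-2670 (2.6 GHz, 8 double-precision flops per cycle with AVX) — a simplification, since the last four racks hold newer prototype parts:

```python
# Rough theoretical-peak estimate for ~11,500 Sandy Bridge cores,
# compared against the ~200 TF LINPACK milestone from the article.

CLOCK_GHZ = 2.6        # Xeon E5-2670 base clock
FLOPS_PER_CYCLE = 8    # AVX: 4-wide DP add + 4-wide DP multiply per cycle

def peak_tflops(cores):
    """Theoretical double-precision peak in teraflops."""
    return cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0

peak = peak_tflops(11_500)   # ~239 TF theoretical peak
efficiency = 200.0 / peak    # measured LINPACK vs. peak
print(f"peak = {peak:.0f} TF, LINPACK efficiency = {efficiency:.0%}")
```

Roughly 84 percent of theoretical peak would be a respectable LINPACK efficiency for a cluster of this class, though the mix of prototype Ivy Bridge and Xeon Phi parts makes the real accounting less clean.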
The real show, however, comes with phase two. That's a 1-petaflops system made up entirely of HP's new servers, including the new cooling system. These are the first production versions HP is delivering to a customer. They should start arriving from Houston by early summer and will be standing in the datacenter before the end of August.
To say this is a showcase datacenter is an understatement. It has floor-to-ceiling glass windows to allow visitors to look in from the corridors. "People say it looks more like an aquarium than a datacenter," says Hammond. Part of the idea is to show off its energy efficiency for others interested in saving energy and money.
Hammond is hoping, however, that the datacenter-under-glass doesn't become too popular a display. He's already regularly guiding visitors past the aquarium, despite the fact that the main system is not yet installed. He needs to get some work done.