December 18, 2008
Argonne National Laboratory is part of the Department of Energy, so it's not exactly surprising to learn that the lab is actively looking for ways to reduce energy use. But using Chicago's cold winters to save $25,000 a month on cooling costs for the supers in their Leadership Computing Facility is, well, cool.
I talked briefly to Pete Beckman, the division director at Argonne's Leadership Computing Facility (ALCF), about their overall focus on energy conservation. According to Beckman, it's an effort that pervades the entire organization: "Across the organization, everyone has been told 'let's find ways to reduce power.'" In computing, that mandate gets executed in two ways.
The HPC staff in Beckman's division are focused on practical ways to design datacenters and supercomputers to conserve energy. In the Mathematics and Computer Science division, researchers look at longer-term routes to more energy-efficient computation. Among the initiatives Argonne has already implemented are thin clients in offices that don't need full workstations, and software that automatically sleeps or turns off electronic and computer equipment after hours or during periods of non-use. Farther down the road? How about capturing the heat generated by the ALCF's supers and doing something useful with it? As Beckman puts it: "no electricity should ever be wasted."
The ALCF also made some big decisions about energy use, including their investment in IBM's Blue Gene/P as the centerpiece of their high performance computation. Their largest system, Intrepid, is the production workhorse with nearly 164,000 cores and over 557 TFLOPS of peak performance. This system is complemented by another BG/P used primarily for testing and code development. Intrepid is number 5 on the latest TOP500 list, but for Beckman and his team, it is just as important that the system is very energy efficient -- it ranks #16 on the Green500 List released in November. The systems ahead of it on that list are other Blue Gene/P systems or systems built out of IBM's QS22 Cell processor blades, another highly energy efficient option.
All told, the ALCF uses about a megawatt of power, a fraction of the amount used by less power-efficient computers at other centers. "Because the ALCF can effectively meet the demands of this world-class computer, the laboratory ends up saving taxpayers more than a million dollars a year," said Paul Messina, director of science at the ALCF, in a statement.
Interesting stat? Left uncooled, the Blue Genes would heat up the machine room to 100 degrees Fahrenheit within ten minutes. So with all that heat, how do they save that extra $25,000 a month when it's cold outside? The ALCF's chilled water system uses cooling towers. According to Beckman, once the temperature falls to 35 degrees or below outside, the temperature in the chilled water system is maintained solely by the cooling towers. Although humidity control is still an issue, that's free cooling.
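The free-cooling arrangement Beckman describes is easy to picture as a simple control rule: below the 35-degree threshold, the cooling towers alone carry the chilled-water load and the chillers sit idle. Here is a minimal, hypothetical sketch of that logic and the back-of-the-envelope savings math -- the threshold and the $25,000/month figure come from the article, but the control function, its names, and the per-hour cost breakdown are illustrative assumptions, not ALCF's actual control system.

```python
# Illustrative sketch only -- not ALCF's real control logic.
# The 35 F threshold and ~$25,000/month savings are from the article;
# everything else (names, per-hour cost model) is assumed for illustration.

FREE_COOLING_THRESHOLD_F = 35  # outdoor temp at or below which towers suffice


def use_free_cooling(outdoor_temp_f: float) -> bool:
    """Cooling towers alone maintain the chilled-water temperature
    once outdoor air is at or below the threshold."""
    return outdoor_temp_f <= FREE_COOLING_THRESHOLD_F


def monthly_savings(hourly_temps_f, chiller_cost_per_hour: float) -> float:
    """Estimate chiller dollars avoided in a month: every hour cold
    enough for free cooling is an hour the chillers don't run."""
    free_hours = sum(1 for t in hourly_temps_f if use_free_cooling(t))
    return free_hours * chiller_cost_per_hour

# A fully cold month (~720 hours) at roughly $34.70/hour of avoided
# chiller cost would line up with the article's ~$25,000/month figure.
savings = monthly_savings([30] * 720, 25_000 / 720)
print(round(savings))
```

The humidity caveat in the article is real: economizer operation still has to manage moisture in the loop, so "free" cooling is free in chiller energy, not entirely free in operations.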