December 15, 2008
ARGONNE, Ill., Dec. 12 -- From Deep Blue, the computer that defeated Garry Kasparov in a 1997 chess match, to the new Blue Gene line of high-performance computers created by IBM, a single color has traditionally been associated with advanced computing.
With the recent opening of the Argonne Leadership Computing Facility (ALCF) at the U.S. Department of Energy's Argonne National Laboratory, however, high-performance computing has taken on a different hue: green. Several innovative steps designed to maximize the efficiency of Argonne's new Blue Gene/P high-performance computer have saved many taxpayer dollars while reducing the laboratory's environmental footprint.
While similar computing centers at other laboratories and institutions often require several megawatts of electricity -- enough to meet the energy demands of a small town -- the ALCF needs only a little more than one megawatt of power. "Because the ALCF can effectively meet the demands of this world-class computer, the laboratory ends up saving taxpayers more than a million dollars a year," said Paul Messina, director of science at the ALCF.
The Blue Gene/P currently runs at a speed of more than 557 teraflops, which means that it can complete more than 557 trillion calculations per second. While several high-performance computing facilities recently established or upgraded at some of Argonne's sister laboratories have surpassed that mark, only one exceeds the efficiency of Argonne's Blue Gene/P. "The Blue Gene/P uses about a third as much electricity as a machine of comparable size built with more conventional parts," Messina said.
While a megawatt of electricity might seem like a lot of power, the massive number of computations the Blue Gene/P performs puts that figure in perspective. The energy efficiency of high-performance computers is measured in flops per watt -- how many calculations per second the computer can perform for every watt of electricity it uses.
According to the November 2008 Green500 ranking of supercomputers, the Blue Gene/P's energy efficiency averages out to more than 350 million calculations per second per watt. By contrast, a common household light bulb typically uses between 50 and 100 watts of electricity. Among the top 20 supercomputers in the world, the Blue Gene/P is the second-most energy-efficient. "The fact that we are running such a powerful computer so efficiently shows that we can simultaneously respond to the demands of the advanced simulation and modeling community and the environmental concerns of today's society," Messina said.
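The flops-per-watt metric can be illustrated with a short calculation. This is only a sketch: the Green500 list rates machines by their sustained Linpack performance rather than the peak figure quoted above, and both inputs below (a sustained rate of roughly 450 teraflops and a total draw of roughly 1.26 megawatts) are illustrative assumptions, not figures stated in the article.

```python
def mflops_per_watt(flops: float, watts: float) -> float:
    """Energy efficiency: millions of calculations per second per watt."""
    return flops / watts / 1e6

# Assumed, illustrative inputs -- not figures from the article:
sustained_flops = 450e12   # sustained (Linpack) rate, below the 557-teraflop peak
power_watts = 1.26e6       # "a little more than one megawatt"

print(f"{mflops_per_watt(sustained_flops, power_watts):.0f} Mflops per watt")
```

With these assumed inputs, the result lands just above the 350-million-calculations-per-watt figure the article cites.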
Much of the electricity that the Blue Gene/P requires is used not to process the computations themselves, but to cool the machinery. Without any cooling at all, the room housing the computer would reach 100 degrees Fahrenheit within ten minutes of the machine starting up.
To keep the facility cool and safe, six air handlers move 300,000 cubic feet of air per minute under the floor, keeping the room chilled to 64 degrees Fahrenheit. These air handlers, according to Messina, cool more cost-effectively than large air conditioners used at other facilities. "Many other high-performance computing centers require as much electricity to cool their computers as they do to operate them, but here at Argonne we need only an additional 60 percent," he said. "We not only have a green computer, we have an entire green facility."
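Messina's 60-percent figure can be read as a statement about total facility power. A minimal sketch, assuming a notional one-megawatt compute load (the overhead fractions come from the article; the absolute wattage is an assumption):

```python
def total_facility_power(compute_watts: float, cooling_overhead: float) -> float:
    """Total draw when cooling adds `cooling_overhead` (a fraction) on top of compute."""
    return compute_watts * (1.0 + cooling_overhead)

compute_watts = 1.0e6  # assumed: ~1 MW for the machine itself

argonne_total = total_facility_power(compute_watts, 0.60)       # only 60% extra for cooling
conventional_total = total_facility_power(compute_watts, 1.00)  # cooling equals compute power

print(f"Argonne-style facility: {argonne_total / 1e6:.1f} MW total")
print(f"Conventional facility:  {conventional_total / 1e6:.1f} MW total")
```

On these assumptions the conventional facility draws twice its compute load, while the Argonne-style facility draws 1.6 times -- the gap behind the "entire green facility" claim.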
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory