December 15, 2008
ARGONNE, Ill., Dec. 12 -- From Deep Blue, the computer that defeated Garry Kasparov in a 1997 chess match, to the new Blue Gene line of high-performance computers created by IBM, a single color has traditionally been associated with advanced computing.
With the recent opening of the Argonne Leadership Computing Facility (ALCF) at the U.S. Department of Energy's Argonne National Laboratory, however, high-performance computing has taken on a different hue: green. Several innovative steps designed to maximize the efficiency of Argonne's new Blue Gene/P high-performance computer have saved taxpayers substantial money while reducing the laboratory's environmental footprint.
While similar computing centers at other laboratories and institutions often require several megawatts of electricity -- enough to meet the energy demands of a small town -- the ALCF needs only a little more than one megawatt of power. "Because the ALCF can effectively meet the demands of this world-class computer, the laboratory ends up saving taxpayers more than a million dollars a year," said Paul Messina, director of science at the ALCF.
The Blue Gene/P currently runs at a speed of more than 557 teraflops, which means that it can complete more than 557 trillion calculations per second. While several high-performance computing facilities recently established or upgraded at some of Argonne's sister laboratories have surpassed that mark, only one exceeds the efficiency of Argonne's Blue Gene/P. "The Blue Gene/P uses about a third as much electricity as a machine of comparable size built with more conventional parts," Messina said.
While a megawatt of electricity might seem like a lot of power, the massive number of computations the Blue Gene/P performs puts that figure in perspective. The energy efficiency of high-performance computers is measured in flops per watt -- how many calculations per second the computer can do for every watt of electricity it uses.
According to the November 2008 Green500 ranking of supercomputers, the Blue Gene/P's energy efficiency averages out to more than 350 million calculations per second per watt. By contrast, a common household light bulb typically uses between 50 and 100 watts of electricity. Among the top 20 supercomputers in the world, the Blue Gene/P is the second-most energy-efficient. "The fact that we are running such a powerful computer so efficiently shows that we can simultaneously respond to the demands of the advanced simulation and modeling community and the environmental concerns of today's society," Messina said.
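To make the flops-per-watt arithmetic concrete, here is a minimal Python sketch. The efficiency figure is the article's Green500 number; the 60-watt bulb is an assumed value within the 50-to-100-watt range mentioned above.

```python
# A minimal sketch of the flops-per-watt metric. The efficiency figure is
# the article's November 2008 Green500 number; the 60 W bulb is an assumed
# value within the article's 50-100 W range.

def flops_per_watt(calcs_per_second: float, watts: float) -> float:
    """Energy efficiency: calculations per second for every watt consumed."""
    return calcs_per_second / watts

GREEN500_EFFICIENCY = 350e6  # calculations per second per watt
BULB_WATTS = 60              # assumed common incandescent bulb

# Sanity check: 21 billion calculations per second on a 60 W budget
# matches the quoted efficiency exactly.
assert flops_per_watt(21e9, BULB_WATTS) == GREEN500_EFFICIENCY

# One light bulb's worth of electricity powers about 21 billion
# Blue Gene/P calculations every second.
print(f"{GREEN500_EFFICIENCY * BULB_WATTS:.3g} calculations/s per {BULB_WATTS} W bulb")
```

In other words, the power budget of a single reading lamp buys roughly 21 billion calculations every second on this machine.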
Much of the electricity that the Blue Gene/P requires is used not to actually process the computations, but rather to cool the machinery. Without any cooling at all, the room that houses the computer would reach 100 degrees Fahrenheit within ten minutes of the machine starting to run.
To keep the facility cool and safe, six air handlers move 300,000 cubic feet of air per minute under the floor, keeping the room chilled to 64 degrees Fahrenheit. These air handlers, according to Messina, cool more cost-effectively than large air conditioners used at other facilities. "Many other high-performance computing centers require as much electricity to cool their computers as they do to operate them, but here at Argonne we need only an additional 60 percent," he said. "We not only have a green computer, we have an entire green facility."
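Messina's comparison can be restated as a simple overhead calculation, sketched below in Python. The overhead factors (100 percent extra electricity for cooling at a typical center, 60 percent extra at Argonne) come from his remarks; the one-megawatt compute figure is an assumed round number based on the article's "a little more than one megawatt."

```python
# A minimal sketch of the cooling-overhead comparison. The overhead
# factors come from the article; the 1 MW compute draw is an assumed
# round number, not an official ALCF figure.

COMPUTE_MW = 1.0  # assumed electricity used for computation itself

for label, cooling_overhead in [("typical HPC center", 1.0), ("Argonne ALCF", 0.6)]:
    total_mw = COMPUTE_MW * (1 + cooling_overhead)
    print(f"{label}: {total_mw:.1f} MW total to deliver {COMPUTE_MW:.1f} MW of computing")
# typical HPC center: 2.0 MW total to deliver 1.0 MW of computing
# Argonne ALCF: 1.6 MW total to deliver 1.0 MW of computing
```

In the terms the industry now uses, that is roughly the difference between a power usage effectiveness (PUE) of 2.0 and one of 1.6.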
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory