October 19, 2007
With crude oil prices hitting a record $87 a barrel this week, IT users are being reminded once again that datacenter energy consumption and computing demand are on a collision course. Earlier this month, the IT analyst firm Gartner reported that "by 2011, more than 70 percent of U.S. enterprise datacenters will face tangible disruptions related to energy consumption, floor space, and/or costs. In fact, during the next five years, most U.S. enterprise datacenters will spend as much on energy (power and cooling) as they will on hardware infrastructure."
The problem is even more acute in the high performance computing realm, where ever more powerful systems are being built to work on ever bigger problems, global warming (ironically) among them. While overall performance-per-watt is certainly improving, those gains are being outstripped by the demand for still greater amounts of compute power. And although the energy issue has been with us for a while, it is taking on a new urgency now that energy consumption is starting to limit system size.
The 500 teraflop "Ranger" supercomputing cluster being built at the Texas Advanced Computing Center (TACC) is a good example. That machine is expected to draw 2.4 megawatts and require an additional megawatt just to keep it cool. Since Ranger is based on the latest quad-core Opteron technology, it pretty much represents the current level of performance-per-watt you can get from commercial x86 cluster technology.
The IBM Blue Gene is better in this respect. The German publication heise online reports that the 220 teraflop Blue Gene/P system installed this week at the Jülich Supercomputing Centre uses just 500 kilowatts. That's more than twice the energy efficiency of the TACC system. Along the same lines, the new SiCortex system installed at Argonne National Laboratory on Monday (which I write about in this issue) uses just 18 kilowatts to achieve 5.8 teraflops. Like the PowerPC-based Blue Gene, the SiCortex machine leverages low-power RISC engines, in this case 500 MHz MIPS64 processors, to achieve its energy savings. By using a larger number of slower CPUs to deliver the same raw performance as a smaller number of faster x86 CPUs, overall energy use is reduced. It's analogous to the multicore strategy of delivering many slower cores rather than a single fast core.
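For a rough side-by-side view, here is a back-of-the-envelope comparison in Python using the peak performance and power figures quoted above (Ranger's cooling load excluded). These are nameplate numbers, not measured sustained efficiency, so treat the results as ballpark only.

```python
# Rough performance-per-watt comparison using the figures quoted above.
# Peak teraflops and nameplate power only -- not sustained, measured numbers.
systems = {
    # name: (peak teraflops, power draw in kilowatts)
    "Ranger (quad-core Opteron)": (500, 2400),   # excludes ~1 MW of cooling
    "Blue Gene/P (Juelich)":      (220, 500),
    "SiCortex (Argonne)":         (5.8, 18),
}

for name, (teraflops, kilowatts) in systems.items():
    gflops_per_kw = teraflops * 1000 / kilowatts
    print(f"{name:28s} {gflops_per_kw:6.0f} GFLOPS per kilowatt")
```

Run as-is, this puts Ranger at roughly 210 GFLOPS per kilowatt, the Blue Gene/P at about 440, and the SiCortex machine at around 320.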
The heise online article also points to a recent presentation by Alan Gara, chief architect for Blue Gene, where he talks about the looming energy problem of many-petaflop systems:
Mr. Gara is convinced that it will be possible some time between 2015 and 2020 to achieve a peak performance of 200 petaflops, but that the machines capable of such feats will require 25 to 50 megawatts of power. And this assessment already takes a 20-fold improvement in energy efficiency for granted. According to Mr. Gara, acquisition costs and running costs for such a supercomputer would be on par.
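Those figures hold up to a quick back-of-the-envelope check. Assuming the Jülich Blue Gene/P numbers above as the efficiency baseline and the 20-fold improvement Gara takes for granted, a 200 petaflop machine lands near the low end of his 25 to 50 megawatt range:

```python
# Sanity check on Gara's projection, using the Blue Gene/P figures quoted
# earlier (220 teraflops at 500 kilowatts) as the baseline efficiency.
baseline_gflops_per_watt = 220_000 / 500_000       # ~0.44 GFLOPS per watt today
assumed_improvement = 20                           # the 20x efficiency gain Gara assumes
target_gflops = 200 * 1_000_000                    # 200 petaflops expressed in GFLOPS

projected_gflops_per_watt = baseline_gflops_per_watt * assumed_improvement
power_megawatts = target_gflops / projected_gflops_per_watt / 1_000_000
print(f"~{power_megawatts:.0f} MW for a 200 petaflop system")  # roughly 23 MW
```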
Setting aside the feasibility of a 50 megawatt datacenter, Gara's assessment essentially corroborates Gartner's prediction that energy and hardware costs are equalizing across the industry. That should cause users to rethink their buying strategy for future systems. And since HPC systems have such high power draws and such rapid technology obsolescence, one might assume this community would be leading the way to energy-efficient systems.
With the exception of some in the HPC research community, this is not the case. While green IT organizations, consortiums and initiatives have become a growth industry, green HPC has not. Why? Lots of reasons:
Keep in mind that system acquisition and upgrade costs also reflect energy consumption -- the energy used to develop, build and ship the hardware (and software!). So presumably this should be factored into the lifetime energy consumption of the machine. I don't know if anyone has ever determined the energy required to construct a supercomputer, but I assume it's significant.
I'll finish with a sobering thought about the performance-per-watt metric that most of us throw around: it's not that useful. In fact, it's no more useful than the peak performance number it's derived from. Sustained application performance-per-watt is a more realistic way to measure the energy efficiency of a system. Better yet would be to measure the amount of energy required to solve a problem -- "watt-hours to solution."
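Here is a minimal sketch of the idea, with purely hypothetical power draws and run times: the machine with the better peak rating can still lose once you count the energy actually consumed getting to an answer.

```python
# "Watt-hours to solution": energy consumed to finish one job, rather than
# a peak performance-per-watt rating. All figures below are hypothetical.
def watt_hours_to_solution(average_power_watts, runtime_hours):
    return average_power_watts * runtime_hours

# A big, fast cluster versus a leaner machine that takes longer on the same job.
fast_cluster = watt_hours_to_solution(average_power_watts=2_400_000, runtime_hours=1.0)
lean_cluster = watt_hours_to_solution(average_power_watts=500_000, runtime_hours=3.5)

print(f"fast cluster: {fast_cluster / 1000:,.0f} kWh to solution")  # 2,400 kWh
print(f"lean cluster: {lean_cluster / 1000:,.0f} kWh to solution")  # 1,750 kWh
```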
Thinking about it like that might also help us to realize that software can play a huge role in energy conservation, even beyond virtualization technology. So no, we're not green yet. We're barely chartreuse.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - October 18, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.