With crude oil prices hitting a record $87 a barrel this week, IT users are being reminded once again that datacenter energy consumption and computing demand are on a collision course. Earlier this month, the IT analyst firm Gartner reported that “by 2011, more than 70 percent of U.S. enterprise datacenters will face tangible disruptions related to energy consumption, floor space, and/or costs. In fact, during the next five years, most U.S. enterprise datacenters will spend as much on energy (power and cooling) as they will on hardware infrastructure.”
The problem is even more acute in the high performance computing realm, where ever more powerful systems are being built to work on ever bigger problems, such as (ironically) global warming. While overall performance-per-watt is certainly improving, those gains are being outstripped by the demand for even greater amounts of compute power. And even though the energy issue has been with us for a while, it’s taking on a new urgency as energy consumption starts to limit system size.
The 500 teraflop “Ranger” supercomputing cluster being built at the Texas Advanced Computing Center (TACC) is a good example. That machine is expected to draw 2.4 megawatts and require an additional megawatt just to keep it cool. Since Ranger is based on the latest quad-core Opteron technology, it pretty much represents the current level of performance-per-watt you can get from commercial x86 cluster technology.
The IBM Blue Gene is better in this respect. The German publication heise online reports that the 220 teraflop Blue Gene/P system installed this week at the Jülich Supercomputing Centre uses just 500 kilowatts. That’s more than twice the energy efficiency of the TACC system. Along the same lines, the new SiCortex system installed at Argonne National Laboratory on Monday (which I write about in this issue) uses just 18 kilowatts to achieve 5.8 teraflops. Like the PowerPC-based Blue Gene, the SiCortex machine leverages low-power RISC engines, in this case 500 MHz MIPS64 processors, to achieve energy savings. Using a larger number of slower CPUs to deliver the same raw performance as a smaller number of faster x86 CPUs reduces overall energy use. It’s analogous to the multicore strategy of delivering a larger number of slower cores versus a single fast core.
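For those who want to check the arithmetic, here’s a quick back-of-envelope sketch using the peak ratings and power figures cited above. These are peak numbers only; sustained performance would tell a somewhat different story.

```python
# Back-of-envelope performance-per-watt comparison using the peak
# figures cited above (compute power only; Ranger's extra ~1 MW of
# cooling is shown separately at the end).
systems = {
    # name: (peak teraflops, power draw in kilowatts)
    "Ranger (quad-core Opteron)": (500, 2400),
    "Blue Gene/P (Juelich)":      (220, 500),
    "SiCortex (Argonne)":         (5.8, 18),
}

for name, (tflops, kw) in systems.items():
    # Peak teraflops delivered per kilowatt of compute power
    print(f"{name}: {tflops / kw:.2f} teraflops per kilowatt")

# Ranger again, this time counting the ~1 MW needed for cooling
print(f"Ranger incl. cooling: {500 / 3400:.2f} teraflops per kilowatt")
```

That works out to roughly 0.21 teraflops per kilowatt for Ranger, 0.44 for the Blue Gene/P and 0.32 for the SiCortex machine, which is where the “more than twice” figure comes from.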
The heise online article also points to a recent presentation by Alan Gara, chief architect for Blue Gene, where he talks about the looming energy problem of many-petaflop systems:
Mr. Gara is convinced that it will be possible at some time between 2015 and 2020 to achieve peak performance of 200 petaflops, but that the machines capable of such feats will require 25 to 50 megawatts of power. And this assessment already takes a 20-fold improvement in energy efficiency for granted. According to Mr. Gara, the acquisition costs and running costs of such a supercomputer would be on par.
Setting aside the feasibility of a 50 megawatt datacenter, Gara’s assessment essentially corroborates Gartner’s prediction that energy and hardware costs are equalizing throughout the industry. That should cause users to rethink their buying strategy for future systems. And since HPC systems draw so much power and become obsolete so quickly, one might assume this community would be leading the way to energy-efficient systems.
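To get a feel for what cost parity means, here’s an illustrative calculation. The ten-cents-per-kilowatt-hour rate and five-year service life are my own assumptions, not figures from Gara or Gartner.

```python
# Rough electricity bill for a hypothetical 50 MW system.
# The $0.10/kWh rate and five-year service life are illustrative
# assumptions, not figures from Gara or Gartner.
power_mw = 50          # continuous draw, megawatts
rate_per_kwh = 0.10    # assumed electricity price, dollars
years = 5              # assumed service life

hours = years * 365 * 24
energy_kwh = power_mw * 1000 * hours
cost = energy_kwh * rate_per_kwh
print(f"{energy_kwh:.2e} kWh over {years} years -> ${cost / 1e6:.0f} million")
```

A power bill on the order of $200 million over five years is indeed in the same ballpark as the purchase price of a leadership-class machine.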
With the exception of some in the HPC research community, this is not the case. While green IT organizations, consortiums and initiatives have become a growth industry, green HPC has not. Why? Lots of reasons:
- Lack of HPC Virtualization. The industry trend of reducing energy consumption by consolidating compute infrastructure with traditional server virtualization is a bad fit for HPC. In general, high performance computing has the opposite problem of the overbuilt enterprise datacenter. HPC users want to distribute workloads over as much hardware as possible to speed execution, not crowd a lot of performance-hungry apps into a single box.
- Technology Momentum. Even in the cutting-edge realm of high performance computing, users have made long-term investments in software, hardware infrastructure, and human expertise that are tied to established technologies. If this weren’t the case, SiCortex and ClearSpeed would be filing for IPOs and there would be Blue Genes in every HPC facility. Application retargeting costs, additional infrastructure support and cultural bias all slow adoption of new technologies.
- New Problem. The urgency of the energy problem has grown faster than the industry’s understanding of it. Marketing departments have been quick to capitalize on this, since green computing is perceived as a “Mom and Apple Pie” issue by vendors. But the multitude of solutions and marketing claims is causing confusion. Every piece of silicon out there seems to be branded with the green label nowadays.
- Acquisition Costs. Initial acquisition costs still carry a lot of weight in decision-making. Part of the problem is that people who buy the hardware are often not the same ones paying the electric bill. In his Real World IT blog, George Ou argues that until the people who procure the hardware are the ones who get billed for the electricity that the hardware uses, the incentive to purchase energy efficient systems won’t exist. This is an industry-wide problem.
- Refresh Cycle. Related to acquisition costs is the hardware upgrade strategy. Most enterprises, HPC or not, refresh their hardware every three to five years. For high-end supercomputing, this cycle can be even longer because of the initial high acquisition costs. (No one’s going to decommission a multi-million dollar Blue Gene/L just because it uses more energy than the newer Blue Gene/P.)
Keep in mind that system acquisition and upgrade costs also reflect energy consumption — the energy used to develop, build and ship the hardware (and software!). So presumably this should be factored into the lifetime energy consumption of the machine. I don’t know if anyone has ever determined the energy required to construct a supercomputer, but I assume it’s significant.
I’ll finish with a sobering thought about the performance-per-watt metric that most of us throw around. It’s not that useful. In fact it’s no more useful than the peak performance metric that it’s derived from. Sustained application performance-per-watt is a more realistic way to measure the energy efficiency of a system. Better yet would be to measure the amount of energy required to solve a problem — “watt-hours to solution.”
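To make the distinction concrete, here’s a toy sketch. The peak and power figures loosely mirror the systems mentioned earlier, but the sustained-efficiency fractions are invented for illustration; the point is simply that a machine with a higher peak rating can still burn more watt-hours finishing the same job if the application sustains a smaller fraction of peak.

```python
# A toy "watt-hours to solution" comparison. The sustained fractions
# below are invented for illustration -- they are not benchmark results.
def watt_hours_to_solution(work_pflop, sustained_pflops, power_kw):
    """Energy to finish a fixed workload: run time (hours) x power (kW)."""
    runtime_hours = work_pflop / sustained_pflops / 3600
    return runtime_hours * power_kw

workload = 1.0e5  # total work, in petaflop (1e15 floating point operations)

# System A: 500 TF peak, assuming the application sustains only 10% of peak
a = watt_hours_to_solution(workload, sustained_pflops=0.5 * 0.10, power_kw=2400)
# System B: 220 TF peak, assuming the application sustains 40% of peak
b = watt_hours_to_solution(workload, sustained_pflops=0.22 * 0.40, power_kw=500)

print(f"System A: {a:,.0f} kWh to solution")
print(f"System B: {b:,.0f} kWh to solution")
```

Under those assumptions the nominally faster machine uses roughly eight times the energy to finish the same job, which is exactly the kind of difference that peak performance-per-watt hides.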
Thinking about it like that might also help us to realize that software can play a huge role in energy conservation, even beyond virtualization technology. So no, we’re not green yet. We’re barely chartreuse.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].