August 18, 2009
In these times of energy resource and climate change worries, green computing continues to be on the minds of high performance computing practitioners and providers. Green computing is mostly about energy efficiency, that is, performance per watt, but also encompasses other aspects like reuse, biodegradability, and optimal resource use in general. But the more I hear about it, the more I realize it's really about the economics of computing rather than any environmental sensitivity.
As John Gustafson has noted, "HPC users are not tree huggers." Like many in the industry, he believes the goal is not to reduce energy use and other resource costs per se, but to maximize computing within a fixed budget. If that's the case, then this is basically just another element of the Total Cost of Ownership (TCO).
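A back-of-the-envelope way to see that framing is to fold energy into the lifetime cost of a system and ask how much computing a fixed budget actually buys. The sketch below is purely illustrative; the purchase price, power draw, electricity rate, and five-year lifetime are assumptions made up for the example, not figures from this article.

```python
# Illustrative only: folding energy into the total cost of ownership (TCO).
# Every figure here is an assumption made up for the sketch, not data from the article.

def lifetime_tco(capex, power_kw, price_per_kwh, years=5):
    """Purchase price plus energy cost for a system running 24x7 over its lifetime."""
    hours = years * 365 * 24
    return capex + power_kw * hours * price_per_kwh

# A hypothetical 100 kW cluster bought for $2,000,000 at $0.10 per kWh:
print(f"${lifetime_tco(2_000_000, 100, 0.10):,.0f}")  # -> $2,438,000
```

Even with these modest assumed numbers, energy is a sizable slice of the total, which is why "performance per watt" and "performance per dollar" end up being two views of the same problem.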
But from a marketing point of view, the term "green computing" has a lot more of a Mom-and-apple-pie sound to it than, say, "TCO-optimized computing." So it's not surprising that every chipmaker, storage provider, interconnect company, and system vendor is selling green these days. Of course, it's no guarantee of success. SiCortex, the cluster vendor that made green computing the centerpiece of its business, went belly up this year when it failed to attract the VC funding needed to continue operations.
So with all this newfound love of all things green, what are the results? Depends on how you measure it. Certainly x86 chips are getting more efficient with each processor generation. Intel's Nehalem chips are advertised as having twice the performance per watt of the previous-generation Penryn processors, but at the system level this gets diluted significantly. For example, in the June 2008 Green500 list, the most energy-efficient Intel-based (presumably Penryn) clusters achieved 220 to 240 megaflops per watt, while in the June 2009 list, the top Nehalem-based clusters topped out at 250 to 270 megaflops per watt -- an increase of only about 10 percent.
In fact, the average efficiency for the whole Green500 also increased by 10 percent compared to last year. During that same period, the aggregate power of the list increased by 15 percent. The conclusion of the Green500 crowd is that "while the supercomputers on the Green500 are collectively consuming more power, they are using the power more efficiently than before." The other conclusion that could be drawn is that the gains realized in energy efficiency are not keeping up with computing demand.
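To make that second conclusion concrete, here is the back-of-the-envelope arithmetic using the figures cited above: if aggregate power rose about 15 percent while average efficiency rose about 10 percent, the aggregate computation represented by the list grew by roughly 26 percent, well ahead of the efficiency gain. A minimal sketch of that calculation:

```python
# Back-of-the-envelope check of the Green500 figures cited above:
# aggregate flops ~= aggregate watts * (flops per watt)
power_growth = 1.15       # aggregate power of the list up ~15 percent
efficiency_growth = 1.10  # average megaflops per watt up ~10 percent

compute_growth = power_growth * efficiency_growth
print(f"Aggregate compute grew ~{(compute_growth - 1) * 100:.1f}%")  # roughly 26 percent
```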
Keep in mind that the Green500 measurements are based on either Linpack or peak performance numbers, not actual applications. Therefore, real-world energy efficiencies are potentially much higher, given that a lot of the power smarts built into these new chips, and the servers constructed around them, are aimed at reducing power at idle or partial load -- conditions not likely to occur during a Linpack run.
Having said that, my instinct is that energy use in HPC and the broader industry will continue to grow, despite more efficient infrastructure. Computing demand seems insatiable right now and I don't see any end in sight. And since computing is a high value commodity relative to its energy inputs, the economic incentive will continue to be in favor of more computing.
That doesn't mean energy efficiency isn't worthwhile. For individual datacenters, minimizing energy use is a big motivator, since there are practical limits to how much power can be delivered to a particular site. Also, energy and cooling costs are becoming (or in some cases have already become) the largest expense over the lifetime of a system. Dan Reed's recent blog post about how the new focus on Power Usage Effectiveness (PUE) is changing the way these facilities are designed points to the same reality: the cost ratio of computing infrastructure to energy is inverting.
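For reference, PUE is simply the ratio of total facility power to the power delivered to the IT equipment itself; a PUE of 2.0 means every watt of computing carries another watt of cooling and distribution overhead. A minimal sketch, with sample wattages that are made up for illustration:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers: 1.5 MW drawn at the meter, 1.0 MW reaching the racks.
print(pue(total_facility_kw=1500, it_equipment_kw=1000))  # 1.5
```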
All of this is driven by the economic realities of maintaining these facilities as more and more computing capability is stuffed into them. If the industry needs to feel good about itself by calling it green computing, so be it. It's all TCO to me.
Posted by Michael Feldman - August 18, 2009 @ 6:13 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.