November 06, 2008
Supercomputing vendor SiCortex has long trumpeted the power-, cooling-, and space-friendliness of its HPC gear. Over time it's added those advantages together to create a picture of eco-friendly HPC, and it's reinforced the message in special events where it uses pedal power from teams of bicyclists to power its boxes. This week the company is introducing a new metric, the Green Computing Performance Index, that assesses the performance of individual supercomputers based on the ratio of their performance on the HPC Challenge benchmark to power consumption.
Although the broader IT community has whipped itself into a foamy eco-green froth over the past two years, the conversation about the ecological impact of computing is still fairly new in HPC. The only major community effort to assess the impact of supercomputing on the ecosystem thus far has been the Green500 List, which didn't get started until November 2007. The Green500, curated by Wu-chun Feng and Kirk W. Cameron, uses performance figures from the TOP500 List and divides them by the total power draw of the machine. Power is either peak (indicated in gray on the Green500 Web site) or measured according to a methodology described on the Web site.
This approach carries with it a significant advantage, namely the TOP500 list itself. The list is well-understood and widely quoted. Most serious HPC organizations submit results to it, and so the Green500 team has been able to build upon the momentum that the TOP500 team has established over many years. However, using the TOP500 List as the performance basis also brings along the disadvantage of that list: it relies on a single performance benchmark, the Linpack, which is often observed to be inadequate for characterizing a supercomputer's usefulness on real-world problems.
The team at SiCortex addressed this shortcoming of the Green500's approach by adopting the benchmark suite that was developed to address the deficiencies of the Linpack itself: the HPC Challenge Benchmark. The HPCC consists of seven tests, each stressing a different aspect of a machine's architecture. It includes the same floating point performance measure used on the TOP500 list, plus additional tests of memory bandwidth, interprocessor communication, and floating point performance on more complex computational kernels.
Results from an HPCC run are divided by the power consumed in kilowatts -- again, either measured or peak -- to yield SiCortex's proposed index, the Green Computing Performance Index or GCPI. John Goodhue, SiCortex's CTO and a member of the team doing the thinking on the GCPI, recognizes that different users will want to see this information in different ways, so the metric admits three different ways to compute the GCPI.
First, one can compute the GCPI on a benchmark-by-benchmark basis. For example, dividing the performance of the Cray XT4 at the ERDC MSRC on the single STREAM triad metric reported at the HPCC Web site by its power consumption yields 129.4 GB/(s*kW). This approach gives a detail-rich view: multiple measures reveal the various dimensions of power efficiency and permit fine-grained analysis of a system's green computing performance.
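The per-benchmark calculation is just a ratio of a single HPCC result to power draw. A minimal sketch follows; the 129.4 GB/(s*kW) figure comes from the article's Cray XT4 example, but the bandwidth and power values used to produce it here are hypothetical, chosen only so the ratio matches.

```python
def gcpi(benchmark_result: float, power_kw: float) -> float:
    """Green Computing Performance Index for one benchmark:
    performance per kilowatt of power consumed (measured or peak)."""
    return benchmark_result / power_kw

# Hypothetical inputs chosen so the ratio matches the article's example:
stream_triad_gb_s = 323.5  # assumed aggregate STREAM triad bandwidth, GB/s
power_kw = 2.5             # assumed system power draw, kW

print(gcpi(stream_triad_gb_s, power_kw))  # 129.4 GB/(s*kW)
```

The same function applies to any of the seven HPCC components; only the units of the result change with the benchmark.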
For those who need more of a shorthand, or who only need the overall picture, the measurements can be combined into a single GCPI number for a machine using an average of the GCPIs resulting from a complete HPCC run. Finally, users may decide to selectively include only the portions of the HPCC that matter most to them, or to weight the components individually, to form a "roll your own" metric serving a set of highly specialized needs. The flexibility of SiCortex's approach is valuable because it retains the "one number" convenience of both the TOP500 and the Green500 while preserving more levels of detail for later analysis.
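The second and third modes are both weighted means of the per-benchmark GCPIs: equal weights give the single-number summary, and user-chosen weights give the "roll your own" variant. A sketch under that assumption, with hypothetical benchmark names and values:

```python
def combined_gcpi(per_benchmark, weights=None):
    """Weighted mean of per-benchmark GCPI values.
    With no weights, every benchmark counts equally (the single-number GCPI);
    custom weights implement a "roll your own" metric."""
    if weights is None:
        weights = {name: 1.0 for name in per_benchmark}
    total = sum(weights.values())
    return sum(per_benchmark[n] * w for n, w in weights.items()) / total

# Hypothetical per-benchmark GCPIs (units differ per test; shown as numbers):
scores = {"HPL": 40.2, "STREAM": 129.4, "RandomAccess": 0.8}

simple = combined_gcpi(scores)                 # plain average of all three
custom = combined_gcpi(scores, {"STREAM": 2.0, "HPL": 1.0})  # bandwidth-heavy
```

Dropping a benchmark from the weights dictionary excludes it entirely, which is how a site would restrict the metric to the tests that resemble its own workloads.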
SiCortex recognizes that, if the GCPI is to be broadly accepted and used by the community, it cannot be the owner and maintainer of the measure. According to Goodhue, SiCortex is in active discussions with several third parties to own the metric and host its governing body. "At that point," says Goodhue, "we won't have anything to do with it other than by participating in the GCPI organization and submitting results for our machines."
Although SiCortex isn't talking publicly about organizations it is in talks with, one potential partner is The Green Grid. The Green Grid (covered in an HPCwire feature earlier this year) is a relatively new organization focused on improving energy efficiency in datacenters and "business computing ecosystems." After nearly two years, the organization has over 150 members, including power companies, hardware vendors, and end user organizations. Their strategy is focused on the datacenter as a whole, but when I talked with them earlier this year, they could foresee a time when they might be interested in driving their focus down further. This is still probably a little early for them, but if someone else has done the legwork, it might make sense.