Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

February 21, 2014

Simulating HPC Workload Energy Costs

Carlo del Mundo

In today's computing world, energy and power efficiency in data centers is critical to reducing a system's total cost of ownership. Energy efficiency matters so much because, over a system's lifetime, the energy cost of operating a datacenter can far exceed its initial capital investment.

In addition to cost savings, improvements in energy efficiency also translate into lower carbon emissions. As Omar Al-Saadoon, System Specialist at EXPEC Computer Center, puts it, "one megawatt generates close to 8,000 metric tons of CO2 per year when burning petroleum to produce electricity." In short, it behooves data center specialists to educate themselves on the energy efficiency (or inefficiency) of their systems, both to save on operating costs and to reduce environmental impact.
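The quoted figure can be turned into a rough per-job emissions estimate. The sketch below (not from the article; the function name and default emission factor are illustrative assumptions) derives a kg-CO2-per-kWh factor from the one-megawatt-per-year figure and applies it to a job's measured power draw and runtime:

```python
# Derive an emission factor from the quoted figure: 1 MW sustained for a
# year is 8,760 MWh, so ~8,000 metric tons of CO2 per year implies roughly
# 0.9 kg CO2 per kWh for petroleum-fired generation.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours
TONS_CO2_PER_MW_YEAR = 8000.0
KG_CO2_PER_KWH = (TONS_CO2_PER_MW_YEAR * 1000) / (HOURS_PER_YEAR * 1000)

def job_co2_kg(avg_power_kw, runtime_hours, emission_factor=KG_CO2_PER_KWH):
    """Estimate the CO2 emitted (kg) by a job, given its average power
    draw in kW and its runtime in hours. A hypothetical helper, not the
    team's actual framework."""
    return avg_power_kw * runtime_hours * emission_factor
```

For example, a job averaging 100 kW for 10 hours would emit on the order of 900 kg of CO2 under this petroleum-fired assumption.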

To promote an energy- and green-conscious way of computing, Al-Saadoon and his team have developed a simulation framework that provides an intuitive view of the energy costs of a workload. His goal is to “empower simulation engineers to better assess the environmental effect of their simulation runs and become green-conscious.”

His team collected the energy and power characteristics of their hydrocarbon reservoir simulation on 1,024 compute nodes over the course of three months. Overall, the work provides insight into the energy usage of datacenter-scale computers and incorporates environmental metrics such as carbon emissions on a per-job basis.

Energy usage in data centers is bifurcated into two groups: servers and supporting infrastructure. Servers are the physical computing systems that perform computation. The supporting infrastructure includes components such as cooling, lighting, UPS batteries, interconnects, and AC/DC conversion. Typically, the supporting infrastructure adds a significant component to the overall cost of running a data center. In fact, data center efficiency is measured across both the physical computing systems and the supporting infrastructure using a metric called Power Usage Effectiveness (PUE).

PUE is the ratio of total facility power to server (IT equipment) power. A PUE value of 2 means that for every 1 kWh of server power, another 1 kWh is spent on cooling, lighting, and other infrastructure needs. The most efficient PUE values tend toward 1, the ideal case in which little to no power is used for infrastructure.
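The PUE definition above can be captured in a few lines. This is a minimal sketch (the function names are illustrative, not from the article) showing both the ratio itself and how PUE scales a job's measured server energy up to its true facility-level cost:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided by the
    energy consumed by the IT equipment (servers) alone."""
    return total_facility_kwh / it_equipment_kwh

def facility_energy_for_job(server_kwh, pue_value):
    """Scale a job's measured server energy by the facility PUE to
    estimate its total energy footprint, including cooling, lighting,
    and other infrastructure overhead."""
    return server_kwh * pue_value
```

With a PUE of 2, a job that consumed 500 kWh at the servers actually cost the facility 1,000 kWh, which is why driving PUE toward 1 matters as much as making the workload itself efficient.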

