Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

November 16, 2013

SC13 Research Highlight: COCA Targets Datacenter Costs, Carbon Neutrality

Shaolei Ren and Yuxiong He

The rapid growth of high performance computing and cloud computing services in recent years has contributed to the dramatic increase in the number and scale of data centers, resulting in a huge demand for electricity.

According to recent studies, the combined electricity consumption of global data centers amounts to 623 billion kWh annually and would rank 5th in the world if data centers were collectively a country. Because a significant portion of this electricity is produced from coal and other carbon-intensive sources, it is often labeled “brown energy,” and the growing electricity consumption of data centers has raised serious concerns about their carbon footprint and environmental impacts, such as altered global patterns of temperature and rainfall and more frequent droughts and floods.

Recently, large data center operators such as Google and Microsoft have been increasingly urged to find effective ways to reduce their carbon emissions for sustainable computing and, ultimately, to achieve an overall net-zero carbon footprint (i.e., carbon neutrality), whether mandated by governments through Kyoto-style protocols, pursued voluntarily for the sake of public image, or urged by environmental organizations.

While beneficial for sustainability, achieving carbon neutrality presents significant challenges for data center operators: the best location for building a data center may not be well suited to generating enough green energy (e.g., solar, wind) to satisfy the data center's demand, and carbon-free electricity purchased directly from utility companies is not yet widely available. Thus, as a practical alternative, carbon-neutral data centers often rely on a bundle of approaches, such as generating off-site green (or renewable) energy and purchasing renewable energy credits (RECs), using renewable energy to indirectly offset electricity usage.

Completely offsetting electricity usage via off-site renewable energy generation for long-term carbon neutrality is desirable yet challenging: data centers need to carefully budget electricity usage over a long timescale (often a year) so that the unknown future brown energy consumption can be completely offset by limited renewables. While it may seem easy to plan electricity usage over a long timescale based on future computing demand, in practice neither far-future time-varying workloads nor intermittent renewable energy availability can be accurately predicted, so data centers must decide their electricity usage in an online manner.

In our research, we study long-term energy budgeting for a carbon-neutral data center and propose a provably efficient online resource management algorithm, called COCA (optimizing for COst minimization and CArbon neutrality), that minimizes the operational cost while satisfying carbon neutrality without requiring long-term future information. Both electricity cost and delay performance are incorporated into our optimization objective.

COCA eliminates the need for long-term future computing demand information by keeping track of the “carbon deficit” online. As the name implies, the carbon deficit indicates how far the current data center operation deviates from carbon neutrality, or more precisely, by how much the electricity usage so far has exceeded the available renewables. By incorporating the carbon deficit into the optimization objective, COCA progressively adheres to carbon neutrality by adapting the weight placed on electricity consumption. Specifically, when the carbon deficit is larger, COCA places more emphasis on reducing electricity consumption, turning off more servers so that the deficit can be offset by future renewables. Thus, COCA follows the philosophy of “if violating carbon neutrality, then use less electricity.” While the intuition is straightforward, we formally prove, by extending the recently developed Lyapunov optimization technique, that COCA achieves a close-to-minimum operational cost compared to the optimal offline algorithm with look-ahead information, while bounding the maximum possible carbon deficit.
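The deficit-tracking idea above can be sketched in a few lines. This is a minimal illustration, not the authors' exact formulation: the update rule, the cube-free linear weighting, and the trade-off parameter `V` are illustrative assumptions in the spirit of Lyapunov-style optimization.

```python
def carbon_deficit_step(deficit, electricity_used, renewables_available):
    """Update the carbon-deficit 'queue' after one time slot: it grows when
    electricity usage exceeds the available renewables and shrinks otherwise,
    never dropping below zero."""
    return max(deficit + electricity_used - renewables_available, 0.0)


def weighted_objective(op_cost, electricity_used, deficit, V=10.0):
    """Per-slot objective combining operational cost with deficit-weighted
    electricity usage: the larger the accumulated deficit, the more weight is
    placed on cutting electricity consumption. V (an assumed tuning knob)
    trades off cost minimization against how tightly the deficit is bounded."""
    return V * op_cost + deficit * electricity_used
```

Minimizing this per-slot objective is what makes the behavior "if violating carbon neutrality, then use less electricity" emerge automatically, without any forecast of future workloads or renewables.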

Large data centers often consist of tens of thousands of servers, so distributed server management is highly desirable for scalability. Toward this end, we embed distributed resource management in COCA: each server autonomously adjusts its processing speed (and hence its power consumption) and optimally decides the amount of workload to process. Specifically, each server can “learn” the optimal decision by sampling a set of possible decisions and eventually choosing the best one with very high probability.
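The per-server sampling idea can be sketched as follows. The local cost model here is a hedged assumption, not the paper's exact one: power is taken to grow with the cube of speed (a common convention for CPU dynamic power) and delay is modeled simply as workload divided by speed.

```python
import random


def server_decide(speeds, workload, deficit, samples=100, V=10.0):
    """Sample candidate processing speeds and return the one minimizing a
    local deficit-weighted cost. With enough samples, the best candidate is
    found with very high probability. All constants are illustrative:
    power ~ speed**3 (assumed cube law), delay ~ workload / speed."""
    best_speed, best_cost = None, float("inf")
    for _ in range(samples):
        s = random.choice(speeds)
        power = s ** 3            # assumed power model
        delay = workload / s      # assumed delay model
        cost = V * delay + deficit * power
        if cost < best_cost:
            best_speed, best_cost = s, cost
    return best_speed
```

Because each server evaluates only its own workload and the shared deficit value, the decision requires no central coordinator, which is what makes the scheme scale to tens of thousands of servers.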

To validate COCA, we perform an extensive simulation study modeling the one-year operation of a large data center. Using real-world production traces to drive the simulation, we first compare COCA against a state-of-the-art prediction-based method in terms of average hourly operational cost. In particular, we choose as the benchmark PerfectHP (Perfect Hourly Prediction heuristic), which perfectly predicts workloads 48 hours ahead and allocates the carbon budget in proportion to the hourly workloads. As shown in the figure, COCA is more cost effective than PerfectHP, saving more than 25% in cost over one year. COCA achieves this benefit because, even when the workload spikes and carbon neutrality is temporarily violated, it can focus on cost minimization while the carbon deficit later guides the data center operation back toward carbon neutrality. By contrast, without foreseeing the long-term future, the short-term prediction-based PerfectHP may over-allocate the carbon budget at inappropriate time slots and thus has to set a stringent budget for certain time slots when the workload is high.


Next, we show that, under different electricity usage budgets, the operational cost of COCA is always fairly close to the minimum achieved by the optimal offline algorithm with complete future information. Operational cost and electricity usage are normalized with respect to a carbon-unaware algorithm that disregards carbon neutrality and purely minimizes operational cost. With a normalized electricity usage of 0.9 (i.e., saving 10% of the electricity used by the carbon-unaware algorithm), COCA increases the operational cost by less than 3% compared with both the carbon-unaware algorithm and the optimal offline algorithm. This demonstrates the strong applicability of COCA in real systems, thanks to its good performance and online execution without complete future information.


To summarize, COCA addresses carbon neutrality, an emerging issue in data centers: it enables data centers to achieve a low operational cost while satisfying carbon neutrality in the absence of long-term future information. Its distinguishing online and distributed implementation makes COCA an appealing candidate for autonomously managing computing resources in large data centers.

SESSION: Performance Management of HPC Systems


TIME: Tuesday, 11:30AM – 12:00PM

ROOM: 401/402/403

Shaolei Ren received his Ph.D. from the University of California, Los Angeles, in 2012 and is currently an Assistant Professor at Florida International University. His research focuses on sustainability and emerging topics in cloud computing such as water usage effectiveness.

Yuxiong He is a researcher at Microsoft Research. Her research interests include resource management, algorithms, modeling, and performance evaluation of parallel and distributed systems. Her recent work focuses on improving the responsiveness, quality, and throughput of large-scale interactive cloud services such as web search. Yuxiong received her Ph.D. in Computer Science from the Singapore-MIT Alliance in 2008.
