The Green Grid’s Datacenter Metrics

By John E. West

May 1, 2008

At the beginning of April, HPCwire profiled The Green Grid, a relatively new organization focused on improving energy efficiency in datacenters and “business computing ecosystems.” After two years of operation, it has over 150 members, including power companies, hardware vendors, and end users, and it is poised to serve as the channel through which the IT industry develops its management of energy use and environmental impact. On an energy-per-square-foot basis, HPC’ers face some of the most significant challenges in an industry that the EPA estimates uses nearly two percent of the electricity consumed in the U.S. every year.

The old saw is that you cannot manage what you cannot measure, and The Green Grid has proposed two metrics for understanding datacenter efficiency: Datacenter Infrastructure Efficiency (DCiE) and Power Usage Effectiveness (PUE). These metrics are designed to give datacenter managers insight into how much power goes to actual computing tasks versus the power consumed in the datacenter as a whole (for cooling, lighting, and so on).

The metrics use the same quantities — one is simply the inverse of the other. PUE divides total facility power (TFP) by IT equipment power (ITEP); DCiE is the inverse of PUE. TFP is measured at the utility meter for the datacenter, while ITEP is measured from individual meters on circuits supplying servers, monitors, KVMs, network gear, storage, and so on. In order to get an accurate TFP, you should be sure to include equipment that supports the datacenter (and only the datacenter) but may not be located within the datacenter — for example, UPS equipment and chillers — as well as items that are often fed as “house power” — lights in the server rooms, for example — but which the datacenter usually doesn’t think about. Also, don’t forget to keep the IT equipment in the total figure for TFP. According to “Green Grid Metrics,” a whitepaper available at http://www.thegreengrid.org/gg_content/, ITEP should be measured “after all power conversion, switching, and conditioning is completed and before the IT equipment itself,” say at the output of the relevant PDUs.
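As a concrete illustration, here is a minimal Python sketch of the arithmetic behind the two metrics. The meter readings are hypothetical; in practice TFP would come from the facility’s utility meter and ITEP from the PDU-output metering described above.

# Minimal sketch (hypothetical readings) of the PUE/DCiE arithmetic.
def pue(total_facility_power_kw, it_equipment_power_kw):
    # Power Usage Effectiveness: total facility power divided by IT equipment power.
    return total_facility_power_kw / it_equipment_power_kw

def dcie(total_facility_power_kw, it_equipment_power_kw):
    # Datacenter Infrastructure Efficiency: the inverse of PUE, as a fraction.
    return it_equipment_power_kw / total_facility_power_kw

# Hypothetical readings: 3,000 kW at the utility meter, 1,000 kW at the PDU outputs.
tfp_kw, itep_kw = 3000.0, 1000.0
print(f"PUE  = {pue(tfp_kw, itep_kw):.2f}")   # 3.00
print(f"DCiE = {dcie(tfp_kw, itep_kw):.0%}")  # 33%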

PUE ranges from 1 to infinity (lower is better) and gives you a multiplier for approximating the real power demands of equipment placed in your center. If you are considering adding a supercomputer that needs 1 MW, for example, and you have a PUE of 3.0, you’ll know that you’ll need to supply a total of 3 MW to run the system and all of its support components. This number is only an approximation for the new system because the datacenter has fixed loads that don’t vary with IT demand, such as lighting. But to the extent that these fixed loads are small in proportion to the energy uses that do scale with IT demand, PUE can be a good approximation.
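The sizing arithmetic can be written the same way. This is a sketch only; the 1 MW load and PUE of 3.0 are simply the numbers from the example above, not a measured facility.

def estimated_total_power_mw(new_it_load_mw, measured_pue):
    # Approximation: multiply the planned IT load by the facility's measured PUE.
    # Assumes fixed overheads (lighting, etc.) are small relative to loads that
    # scale with IT demand, as noted above.
    return new_it_load_mw * measured_pue

print(estimated_total_power_mw(1.0, 3.0))  # 3.0 MW to run the system and its support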

DCiE ranges from 0 to 100 percent (higher is better) and is a useful thumbnail for knowing what proportion of the total power consumed in your datacenter is used by the IT equipment itself. For example, a DCiE of 0.33 indicates that 33 percent of your power goes to IT gear.

So what is a “good” number, and what does this mean in practice? The Green Grid doesn’t know yet. According to their metrics papers, research indicates that PUEs of 1.6 to 2.0 should be achievable with careful design, but the recommendation right now is for datacenters simply to begin using and reporting the numbers so that the community can get a feel for the range.

To get a better handle on what this all means in practice, I talked with Jim Smith, vice president of engineering for The Green Grid member Digital Realty Trust. From an IT perspective, Digital Realty’s datacenters and mission look a lot more like supercomputing than enterprise computing. The $400 million company provides technology-related real estate (think turnkey datacenters) for customers around the world.

One of the insights I got from Jim was the degree to which the metrics vary due to factors that aren’t under the datacenter manager’s control. My preconception, from simply reading the papers, was that a datacenter would eventually automate these measurements and continually monitor them, responding to fluctuations and looking to them for immediate feedback on the effectiveness of energy efficiency initiatives.

In practice, it turns out that the datacenter metrics may be more useful as point measurements or as long-term trends. Right now Digital Realty Trust evaluates PUE primarily at datacenter commissioning, with load banks in the center to simulate a fully loaded datacenter.

“PUE and DCiE is indicative at commissioning because you’re at 100 percent load,” says Jim Smith. The commissioning load is a point event during which factors like server load and changes in climate and the outside environment are known. Digital Realty uses this commissioning assessment to measure the health of a new or recently reconfigured datacenter. Digital Realty is also beginning to explore the use of PUE as a contractual deliverable in new datacenter construction contracts.

In Smith’s experience, the metrics are highly sensitive to both server load and changes in the outside environment. Digital Realty’s centers in Dallas, Texas, and Dublin, Ireland, are good examples of the influence of climate. The climate in Dublin is very stable relative to Dallas, where temperature and humidity vary widely throughout the year. Smith indicates that in Dublin a day-to-day measurement of PUE might be useful, but in Dallas the metric can vary widely even within a single day as the weather changes; there, little beyond long-term trend data or carefully controlled measurements is of much use.

The metric also varies with server load, which can fluctuate dramatically from hour to hour in a datacenter. “Higher loads are more efficient in terms of the metrics,” he says. This observation has led Digital Realty Trust to modularize its datacenters. For example, if you need additional capacity today and think that you’ll eventually need a 2.4MW datacenter, it is better to build three separate 800kW datacenters and grow into the new units as your needs grow than to build one 2.4MW datacenter today. This approach leads to more efficient cooling in the growth stages leading up to full load. The modularized centers can be three discrete centers in different locations, or three units of what used to be one large continuous space, physically and electrically divided.

Smith is measuring PUE at all of Digital Realty’s datacenters and is using those measurements to build a design database that he hopes will provide insight into which combinations of supporting infrastructure technologies are most effective in datacenters of differing sizes and locations.

Despite the fact that the IT community is still in the early stages with PUE and DCiE, Smith emphasizes the importance of beginning to measure and track the quantities. “My one piece of advice is to get your measurement program going now because that’s going to be key to understanding what’s going on inside your datacenter, and it’s not that expensive,” he says. “Most people should have the ability to measure today.” A measurement program can start as simply as sending out a guy once a day with a clipboard to look at the meter. He also advises that datacenter managers should get hold of the power bill, and actually read it.

When you are ready to take things a step further, installing automated monitoring is not an expensive proposition. According to Smith, “In a $10M datacenter, a complete metering solution would cost between $40k and $60k.”

The biggest opportunity a manager is likely to have to improve the efficiency of operations is during a remodel, or during new construction when you can replace or install a highly efficient power and cooling distribution infrastructure. Almost everyone is going to have some significant remodeling in their datacenter over the next five to seven years, and if you start measuring these quantities now you’ll be in a much better position to make the changes that matter for your organization when that time comes.

Something else to think about? The specter of government regulation hangs over both U.S. and European datacenter operators. Jim Smith points to new EU regulations that go into effect next year requiring datacenters larger than 500kW to report their carbon footprint, which is yet another reason to jump-start your monitoring program.

As you become more aware of what the metrics for your datacenter are, you are likely to want to start making changes to see what impact you can have. Smith explains that it is key at this point to tell everyone involved with the datacenter, from the users to the operations and maintenance people, what it is you are doing. “Taking this step gets a dialogue going and starts to get everyone involved,” and that’s when an efficiency agenda can really take off.
