May 10, 2010
Green IT initiatives are popular among IT professionals. In fact, "Green IT 2.0" is a new buzzword that prescribes a holistic approach not only to greener datacenter designs and practices, but also to more eco-friendly practices across all functional departments of an organization (e.g., proper e-waste management of toner cartridges and paper). So why are these trends so hot?
Enterprise-level datacenters consume more energy than ever before due to their increasing size, growing design densities, and the popularity of multicore processor technology in servers. As a result, roughly 50 percent of an enterprise datacenter's operating cost can come from the electricity needed to power and cool the IT equipment and the facility. For many years, IT departments have been asked to do more with less, and now there is a positive, politically correct name for this kind of initiative: "next generation, green datacenters." The bottom line is that being green means being more cost efficient over the long term. Again, green is only as good as the "Greenbacks" that go in your pocket.
As I talk with CIOs, they tell me that best practices in next generation, green datacenters involve the entire organization, not just the IT department: they also include the facilities department that designs the datacenters and the executive committee that oversees corporate eco-responsibility. It's a very good thing that green IT initiatives are beginning to cross all lines of business toward better, sustainable approaches to cutting energy costs and e-waste. Consider this, for example: desktop/laptop computers are scattered everywhere, PDAs are assigned to employees by the thousands, and departmental printers populate every office floor.
I'm enthusiastic about a newer kind of datacenter called the containerized datacenter, or simply, the container. For about two years, companies like Sun Microsystems, Verari, PDI, HP, and SGI (formerly Rackable Systems) have been delivering high-density, energy-efficient containers.
The container is designed to be a fully optimized, tightly controlled system complete with sophisticated management and monitoring tools. It delivers the most energy-efficient "datacenter in a box," with a power usage effectiveness (PUE) rating of less than 1.20. PUE has become the universal measurement for datacenters, but it is certainly not the best approach for calculating energy savings. The better approach is "Industrial Efficiency," which is a discussion all by itself.
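To make the PUE figure concrete, here is a minimal sketch (in Python, with hypothetical wattage numbers) of how the metric is computed: total facility power divided by the power delivered to the IT equipment, where 1.0 is the theoretical ideal.

```python
# Minimal PUE calculation with hypothetical figures.
# PUE = total facility power / IT equipment power; 1.0 is the ideal.

it_load_kw = 500.0    # power drawn by servers, storage, and networking
overhead_kw = 90.0    # cooling, power-distribution losses, lighting

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE: {pue:.2f}")  # 1.18 -- under the 1.20 figure cited above
```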
The containerized datacenter confronts the challenges faced by IT and facilities professionals today more effectively than a traditional brick-and-mortar datacenter. The market is also moving beyond early adopters: companies like Microsoft and Google have been pioneers, but now there are commercial installations at Qualcomm, Verizon, Intel and NASA, to name a few.
When all of the container's attributes are taken together, the realized benefits are clear. The container datacenter cuts capital cost by 50 percent and power and cooling (electricity) expenditures by at least 25 percent. In addition, you have the flexibility to attach to renewable energy sources that have a much smaller carbon footprint, such as hydro dams and windmills.
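As a rough illustration of what those percentages mean in dollars, here is a short sketch with hypothetical baseline figures (only the 25 and 50 percent reductions come from the text; the baseline spend is invented for the example):

```python
# Hypothetical baseline figures; only the percentages come from the text.
baseline_power_cooling = 400_000   # annual power and cooling spend ($)
baseline_capital = 2_000_000       # capital cost of a comparable build-out ($)

power_cooling_savings = baseline_power_cooling * 0.25   # at least 25 percent
capital_savings = baseline_capital * 0.50               # 50 percent

print(f"Annual power/cooling savings: ${power_cooling_savings:,.0f}")
print(f"One-time capital savings:     ${capital_savings:,.0f}")
```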
When you consider productivity improvement, there are many benefits, including faster deployment: usually 90 days, which is two years faster than building a brick-and-mortar datacenter. There are fewer permits, fewer construction delays, and fewer labor-related issues.
From a financial perspective, this approach gives you the flexibility, in baseball terms, to execute a triple play. Start with a datacenter and energy audit to identify which applications, servers, storage, networking and power supplies are aging energy hogs and potential candidates for replacement.
The first play is to determine the value of those stranded assets and get cash onto your balance sheet. The second play is to look at an operating lease to reduce your OpEx and create a take-out strategy that lets you upgrade to newer, more energy-efficient equipment in 22-36 months. The third play is to work with the utility companies and demonstrate the energy efficiency savings to qualify for rebates of up to $500,000, putting more cash on the balance sheet.
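A back-of-the-envelope sketch of the triple play's cash impact, again with hypothetical inputs (only the $500,000 rebate ceiling and the 22-36 month lease window come from the text):

```python
# Triple-play cash sketch; every dollar input except the rebate cap is invented.
stranded_asset_value = 300_000   # play 1: cash recovered from stranded assets
annual_opex_reduction = 120_000  # play 2: savings from the operating lease
lease_term_years = 3             # the 36-month end of the take-out window
utility_rebate = 450_000         # play 3: utility rebate, capped at $500,000

total_cash = (stranded_asset_value
              + annual_opex_reduction * lease_term_years
              + utility_rebate)
print(f"Cash impact over the lease term: ${total_cash:,.0f}")
```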
All of these container qualities make for a superior total cost of ownership (TCO) versus brick-and-mortar datacenters.
Posted by Dan Gatti - May 10, 2010 @ 4:00 PM, Pacific Daylight Time
Dan Gatti is the President and CEO of Data Center Rebates, and serves on the board of directors at the Telecommunications Industry Association.