May 10, 2010
Green IT initiatives are popular among IT professionals. In fact, "Green IT 2.0" is a new buzzword that prescribes a holistic approach not only to greener datacenter designs and practices, but also to more eco-friendly practices across all functional departments of an organization (e.g., proper e-waste management of toner cartridges or paper). So why are these trends so hot?
Enterprise-level datacenters consume more energy than ever before due to their increasing size, growing design densities, and the popularity of multicore processor technology in servers. As a result, about 50 percent of an enterprise datacenter's operating cost can come from the electricity needed to power and cool the IT equipment and facility. For many years, IT departments have been asked to do more with less, and now there is a positive, politically correct name for this kind of initiative: "next generation, green datacenters." The bottom line is that being green means being more cost efficient over the long term. In the end, green is only as good as the "Greenbacks" that go in your pocket.
As I talk with CIOs, they tell me the best practices in next generation, green datacenters include the entire organization, not just the IT department; they also include the facilities department designing datacenters and the executive committee overseeing corporate eco-responsibility. It's a very good thing that green IT initiatives are beginning to cross all lines of businesses for better, sustainable approaches to cut energy costs and e-waste. Consider this for example: desktop/laptop computers are scattered everywhere; PDAs are assigned to employees by the thousands; and departmental printers populate every office floor.
I'm enthusiastic about a newer kind of datacenter called the containerized datacenter, or simply, the container. For about two years, companies like Sun Microsystems, Verari, PDI, HP, and SGI (formerly Rackable Systems) have been delivering high-density, energy-efficient containers.
The container is designed to be a fully optimized, tightly controlled system complete with sophisticated management and monitoring tools. The container delivers the most energy-efficient "datacenter in a box," with a PUE rating of less than 1.20. PUE seems to be the universal measurement for datacenters, but it is certainly not the best approach for calculating energy savings. The better approach is "Industrial Efficiency," which is a discussion all by itself.
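For readers unfamiliar with the metric, PUE is simply the ratio of total facility power to the power delivered to IT equipment, so a lower number means less overhead for cooling, power distribution, and lighting. A minimal sketch of the calculation (the kW figures below are hypothetical, not measurements from any vendor's container):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt drawn by the facility reaches
    the IT equipment; real datacenters are always above 1.0.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical container: 600 kW total facility draw, 500 kW to IT gear.
print(round(pue(600, 500), 2))  # 1.2, at the threshold cited above
```

A traditional enterprise datacenter of that era often ran at a PUE of 2.0 or higher, which is why a sub-1.20 figure is notable.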
The containerized datacenter confronts the challenges faced by IT and facilities professionals today more effectively than a traditional brick-and-mortar datacenter. The market is also moving beyond early adopters: companies like Microsoft and Google have been pioneers, but now there are commercial installations at Qualcomm, Verizon, Intel and NASA, to name a few.
When all of the container's attributes are taken together, the realized benefits are clear: capital cost is reduced by about 50 percent, and power and cooling expenditures drop by at least 25 percent. In addition, you have the flexibility to attach to renewable energy sources that have a much smaller carbon footprint, such as hydro-dams and windmills.
When you consider productivity improvement, there are many benefits, including faster deployment, typically about 90 days, versus the two-plus years needed to build a brick-and-mortar datacenter. There are fewer permits, fewer construction delays, and fewer labor-related issues.
From a financial perspective, this approach gives you the flexibility to execute what, in baseball terms, is a triple play. You start by conducting a datacenter and energy audit to identify which applications, servers, storage, networking and power supplies are older energy hogs and potential opportunities for replacement.
The first play is to determine the value of those stranded assets and get cash on your balance sheet. The second play is to look at an operating lease to reduce your OpEx and create a take-out strategy that lets you upgrade to newer, more energy-efficient equipment in 22-36 months. The third play is to work with the utility companies and demonstrate the energy efficiency savings to qualify for rebates of up to $500,000, putting more cash on the balance sheet.
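The triple play can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical dollar figures (the brick-and-mortar build cost, power bill, and stranded-asset value are assumptions for illustration), combined with the 50 percent capital reduction, 25 percent power savings, and $500,000 rebate ceiling cited above:

```python
# Hypothetical baseline figures -- illustration only; a real audit
# would supply actual numbers for a given facility.
capital_cost_brick = 4_000_000        # assumed brick-and-mortar build cost, USD
annual_power_brick = 1_000_000        # assumed yearly power/cooling bill, USD

# Container advantages cited in the article.
capital_cost_container = capital_cost_brick * 0.50   # ~50% capital reduction
annual_power_container = annual_power_brick * 0.75   # ~25% power/cooling savings

stranded_asset_sale = 300_000   # play 1: assumed resale value of retired gear
utility_rebate = 500_000        # play 3: maximum rebate cited above

first_year_cash_impact = (
    (capital_cost_brick - capital_cost_container)  # avoided capital spend
    + (annual_power_brick - annual_power_container)  # year-one power savings
    + stranded_asset_sale
    + utility_rebate
)
print(first_year_cash_impact)  # 3050000
```

Under these assumed inputs, the three plays together put over $3 million back on the balance sheet in the first year; the operating-lease play (play 2) shifts further spend from CapEx to OpEx on top of that.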
All of these container qualities make for a superior TCO versus brick-and-mortar datacenters.
Posted by Dan Gatti - May 10, 2010 @ 4:00 PM, Pacific Daylight Time
Dan Gatti is the President and CEO of Data Center Rebates, and serves on the board of directors at the Telecommunications Industry Association.