The IT industry's focus on energy efficiency might seem like a "Mom and Apple Pie" type of pursuit, but there may be a darker side to the trend. There's an illuminating article by Steve Denegri featured on Robin Harris' StorageMojo blog that talks about some of the more depressing ramifications of the green datacenter. Denegri, a storage analyst, posits that the increasing share of the IT energy budget being consumed by storage may portend a shakeout for that industry.
He notes that storage vendors -- like all hardware vendors -- are pushing their latest offerings with a big emphasis on energy efficiency. From the marketing and sales point of view, this looks like a great opportunity to sell new products that aren't so power-hungry. But that's the glass half-full perspective. Here's Denegri's take:
[T]hese vendors would be better off recognizing that this heightened attention to energy efficiency is less indicative of a new growth opportunity and, more likely, portends an uncertain future for the industry, as a whole. Countless industries have reached an energy ceiling over the past half century, only to realize, soon after, that revenue potential had peaked.
What follows is a survival contest that only Darwin would love: more combinations at the top of the food chain and significant consolidation or closed doors among the multitude of suppliers. As revenue potential falls, those who are fortunate enough to survive must remain in cost-cutting mode in order to stay competitive.
Doesn't exactly make you want to buy shares of EMC.
One of the problems with storage systems is that, unlike compute and network boxes, they still depend on mechanical devices to operate, and this tends to suck up a lot of power. Until solid state disk (SSD) technology is ready for prime time, tape and disk machines will continue to take an ever-increasing share of the datacenter energy budget. Even beyond that, the need for data storage is growing faster than the need for computation, so moving to SSD will only flatten the curve a bit.
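To put rough numbers on that claim, here's a quick back-of-envelope model in Python. Every figure in it -- watts per terabyte, the data growth rate, the annual drive efficiency gain -- is my own illustrative assumption, not data from Denegri's analysis:

# Project storage power draw under spinning disk vs. SSD.
# All constants below are illustrative assumptions, not measured figures.
HDD_W_PER_TB = 10.0      # assumed watts per terabyte, spinning disk
SSD_W_PER_TB = 2.0       # assumed watts per terabyte, solid state
DATA_GROWTH = 1.50       # assume stored data grows 50 percent per year
EFFICIENCY_GAIN = 1.15   # assume drives get 15 percent more efficient per year

def storage_kilowatts(years, w_per_tb, tb_start=1000.0):
    """Projected storage power draw (kW) after a number of years."""
    data_tb = tb_start * DATA_GROWTH ** years
    w_per_tb_then = w_per_tb / EFFICIENCY_GAIN ** years
    return data_tb * w_per_tb_then / 1000.0

for year in range(6):
    print(f"year {year}: HDD {storage_kilowatts(year, HDD_W_PER_TB):.1f} kW, "
          f"SSD {storage_kilowatts(year, SSD_W_PER_TB):.1f} kW")

Under these assumptions, switching to SSD cuts the draw by a constant factor, but both curves still climb about 30 percent a year -- which is why the move only flattens the curve rather than bending it down.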
Eventually, servers and networks may end up in the same boat anyway. Up until now, IT managers have been able to take advantage of Moore's Law to increase performance per watt, and use virtualization to make better use of datacenter hardware resources. But once virtualization pushes utilization to 100 percent, no further efficiency can be wrung out there. And since demand for computing and communication is outracing Moore's Law, power consumption will become a limiting factor here too. In HPC, where use of server virtualization is almost nil, we already see system size being limited by power costs and infrastructure.
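The arithmetic behind that argument is simple enough to sketch. Suppose, purely for illustration, that demand for computing grows 60 percent a year while Moore's Law delivers a 40 percent annual gain in performance per watt:

# Illustrative growth rates only; neither figure comes from the article.
DEMAND_GROWTH = 1.60          # assumed annual growth in computing demand
PERF_PER_WATT_GROWTH = 1.40   # assumed annual gain in performance per watt

power = 1.0  # relative datacenter power draw, normalized to year 0
for year in range(1, 6):
    power *= DEMAND_GROWTH / PERF_PER_WATT_GROWTH
    print(f"year {year}: relative power draw {power:.2f}x")

With demand outracing efficiency by about 14 percent a year, the power draw roughly doubles in five years -- and once virtualization has already maxed out utilization, there's nothing left to absorb that growth.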
Denegri's recommendation is that IT vendors should use their collective clout to lobby for increasing the capacity of the energy grid. That will pave the way for industry growth, which he says is predicated on delivering performance and capacity, not energy efficiency. "Green computing is almost the equivalent of battling a raging inferno through the design of smaller matches," writes Denegri. "If only these consortiums realized that by hailing their energy-efficiency activities, they merely appear content with a reputation of environmental responsibility as they proclaim their industry’s doomed state."
The other way I think this might play out is for datacenters to set up shop at energy-rich locations. We can see the beginnings of this as Google, Microsoft, and other big players build ultra-scale datacenters along the Columbia River to take advantage of the cheap hydroelectric power and cooling along the waterway. Likewise, supercomputing at Oak Ridge National Laboratory benefits from the large energy resources of the Tennessee Valley Authority. One could imagine the next generation of supercomputing being hosted in Iceland, where geothermal energy resources are abundant and far in excess of local demand. (I'm also guessing it's fairly simple to cool big petascale machines in Iceland.) Of course, not all computing and storage can be relegated to remote sites.
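The economics of siting are easy to illustrate. Here's a quick calculation of the annual electricity bill for a hypothetical 10-megawatt datacenter; the load and the per-kilowatt-hour rates below are my own assumptions, not published figures:

# Annual power cost for a hypothetical 10 MW facility at assumed rates.
LOAD_MW = 10.0
HOURS_PER_YEAR = 8760

rates_per_kwh = {                  # assumed average industrial rates, $/kWh
    "typical grid power":   0.07,
    "Columbia River hydro": 0.03,
    "Icelandic geothermal": 0.025,
}

for site, rate in rates_per_kwh.items():
    annual_cost = LOAD_MW * 1000 * HOURS_PER_YEAR * rate
    print(f"{site}: ${annual_cost / 1e6:.1f}M per year")

A few cents per kilowatt-hour, multiplied by a load that never sleeps, works out to millions of dollars a year -- reason enough to follow the cheap power.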
Down the road, datacenter operators may decide to develop and build power plants as part of their infrastructure, as aluminum producers did for their smelting operations. In that case, datacenters could even sell off excess energy capacity to help defray operating costs. Especially if power doesn't have to be distributed over long distances, exotic technologies like solar-powered hydrogen generation and sea thermal gradient power come into play.
One thing is certain: the dynamic between energy resources and IT is going to reshape the computing landscape. But IT doesn't have to play the victim here. Innovation is what it does best, and I'm hoping this is one area where the market will work its magic.
Posted by Michael Feldman - July 28, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.