March 03, 2009
The industry's headlong rush into cloud computing is shaking up the old order, sometimes in ways even the biggest IT firms can't anticipate. And while there has not been a wholesale conversion to the idea of utility computing, momentum seems to be steadily building in spite of the dire economic situation -- or maybe because of it.
In a Wisconsin Technology Network column this week, Peter Coffee, the director of platform research at Salesforce.com, wonders if the plummeting economic indicators may actually be hiding the IT sector's transformation taking place beneath all the financial carnage. His main argument is that capital spending may no longer be the best indicator of economic growth in the information age since it's now possible for firms to rent things like compute cycles from utility computing providers:
You don't need to own a car if you live in a place that's served by Zipcar. You don't need to own a collection of recording media artifacts if you're just as happy with unlimited music on demand, for a fixed subscription fee, at Napster. And you don't need to buy, or even lease, a supercomputer to run complex models when you can buy capacity by the minute from Amazon.
While the main audience for utility computing is the larger enterprise market, HPC apps continue to show up in the cloud with increasing frequency. It's mainly the smaller firms -- those that have trouble rationalizing a large cluster buy -- that are being attracted to HPC in the cloud. But in these challenging financial times, companies of all sizes are likely to take a look at renting cycles off-site.
A recent article at Fortune points to Kenworth Truck Company's use of aerodynamic design software hosted on an IBM cluster to design truck mudflaps. The truck design firm determined that it could buy access to supercomputer-level hardware for a fraction of the price of actually buying one outright. The design software used by Kenworth came from Exa, who noted that although two-thirds of its revenue still comes from selling software in the conventional way, sales from utility-based packages are "growing almost twice as fast."
If mudflaps seem a bit mundane, last week I wrote about biotech startup Pathwork Diagnostics, which was using Amazon EC2 and Univa UD's UniCloud as a platform for its cancer diagnostics tool. Pathwork's rationale for shifting to the cloud model: a two-thirds cost savings compared to buying a new machine, plus the flexibility to scale up for peak computing needs.
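The rent-versus-buy logic behind decisions like Pathwork's can be sketched as a simple break-even calculation. All the figures below are illustrative assumptions, not Pathwork's or Amazon's actual numbers:

```python
# Hypothetical rent-vs-buy break-even sketch. Every figure here is an
# assumption for illustration, not a real price from the article.

cluster_price = 300_000.0   # assumed up-front cost of an in-house cluster (USD)
cloud_rate = 2.50           # assumed cloud cost per node-hour (USD)
nodes = 40                  # nodes needed during a run
hours_per_month = 100       # bursty usage: compute runs ~100 hours/month

monthly_cloud_cost = cloud_rate * nodes * hours_per_month

# Months of renting before the cumulative cloud bill matches the
# cluster's purchase price (ignoring power, admin, and depreciation,
# all of which tilt the math further toward renting for bursty loads)
break_even_months = cluster_price / monthly_cloud_cost

print(f"Cloud: ${monthly_cloud_cost:,.0f}/month")
print(f"Break-even vs. buying: {break_even_months:.0f} months")
```

For genuinely bursty workloads, where the cluster would sit idle most of the month, the break-even point stretches out far enough that renting wins even before counting power and administration costs.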
IT vendors are taking notice. Today, every major IT firm has a "cloud computing strategy," although it's way too soon to tell who the big winners and losers will be. A company like Microsoft would seem to have the furthest to go, since it has relied on its traditional client-side software for so long. Transitioning from a shrink-wrapped software model to a service model is going to be tricky for the software giant, but over the past few years the company has been making a huge effort to shift course. Last year it rolled out its Azure cloud operating system in the hopes of duplicating the success it enjoyed with its flagship Windows platform.
Of particular interest to the HPC crowd was Microsoft's announcement last week regarding a new research initiative named Cloud Computing Futures (CCF). The group is being led by long-time HPC'er Dan Reed and we'll be covering the project in more depth later this week. In a nutshell, CCF is a collection of hardware and software technologies -- including Azure -- that attempts to define the next-generation cloud platform. Considering that cloud computing 1.0 is still coalescing, that's a pretty ambitious undertaking.
One of the major goals of CCF is to come up with a much more energy- and cost-efficient cloud computing platform than is available today. Toward that end, the Microsoftians are experimenting with Intel Atom-based servers. The Atom is Intel's ultra-low-power CPU aimed at MIDs, netbooks, and nettops. Its big draw: for around 30 or 40 dollars and a handful of watts, the chip delivers x86 compatibility.
Using the Atom for servers is not an entirely new idea. Last year at SC08, SGI debuted an experimental Atom server called Molecule. Even though the performance of the individual Atom CPU was meager by Xeon standards, the performance-per-watt of the system was much better. Plus, the memory bandwidth of an Atom processor was about three times better than that of a conventional x86 CPU. A Molecule rack with 10,000 cores boasted an aggregate memory bandwidth of 15 terabytes per second.
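A quick back-of-the-envelope check shows what that aggregate figure implies per core, assuming the bandwidth is spread evenly across the rack:

```python
# Back-of-the-envelope check of the Molecule figures quoted above,
# assuming an even split of aggregate bandwidth across all cores.
cores = 10_000
aggregate_tb_per_s = 15.0   # 15 TB/s aggregate, as reported at SC08

# Per-core share of the aggregate bandwidth, converted TB/s -> GB/s
per_core_gb_per_s = aggregate_tb_per_s * 1000 / cores
print(f"{per_core_gb_per_s:.1f} GB/s per core")  # 1.5 GB/s per core
```

That works out to about 1.5 GB/s per core, which is a generous memory-bandwidth budget for a low-power core of that era.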
Of course, Intel wouldn't be happy if Atom servers became all the rage in cloud computing. It would much rather sell its more expensive, higher margin Xeon server parts to datacenter customers. Figuring out how to keep its Atoms in line could turn out to be a real challenge for Intel. The low power and low cost of mobile CPUs are the exact attributes that are so attractive to computing at scale. Yes, even for chipmakers, the rise of cloud computing may demand some tricky maneuvers.
Posted by Michael Feldman - March 03, 2009 @ 5:34 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.