March 03, 2009
The industry's headlong rush into cloud computing is shaking up the old order, sometimes in ways even the biggest IT firms can't anticipate. And while there has not been a wholesale conversion to the idea of utility computing, momentum seems to be steadily building in spite of the dire economic situation -- or maybe because of it.
In a Wisconsin Technology Network column this week, Peter Coffee, the director of platform research at Salesforce.com, wonders if the plummeting economic indicators may actually be hiding the IT sector's transformation taking place beneath all the financial carnage. His main argument is that capital spending may no longer be the best indicator of economic growth in the information age since it's now possible for firms to rent things like compute cycles from utility computing providers:
You don't need to own a car if you live in a place that's served by Zipcar. You don't need to own a collection of recording media artifacts if you're just as happy with unlimited music on demand, for a fixed subscription fee, at Napster. And you don't need to buy, or even lease, a supercomputer to run complex models when you can buy capacity by the minute from Amazon.
While the main audience for utility computing is the larger enterprise market, HPC apps continue to show up in the cloud with increasing frequency. It's mainly smaller firms -- those that have trouble justifying a large cluster buy -- that are being attracted to HPC in the cloud. But in these challenging financial times, companies of all sizes are likely to take a look at renting cycles off-site.
A recent article at Fortune points to Kenworth Truck Company's use of aerodynamic design software hosted on an IBM cluster to design truck mudflaps. The truck design firm determined that it could rent access to supercomputer-level hardware for a fraction of the price of buying one outright. The design software used by Kenworth came from Exa, which noted that although two-thirds of its revenue still comes from selling software in the conventional way, sales from utility-based packages are "growing almost twice as fast."
If mudflaps seem a bit mundane, last week I wrote about biotech startup Pathwork Diagnostics, which was using Amazon EC2 and Univa UD's UniCloud as a platform for its cancer diagnostics tool. Pathwork's rationale for shifting to the cloud model: a two-thirds cost savings compared to buying a new machine, plus the flexibility to scale up for peak computing needs.
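The rent-versus-buy tradeoff that Pathwork and Kenworth describe comes down to a simple break-even calculation on utilization. The sketch below illustrates the reasoning; every price and node count in it is a hypothetical placeholder, not an actual Amazon, Pathwork, or IBM figure:

```python
# Back-of-envelope rent-vs-buy comparison for compute cycles.
# All prices and sizes are hypothetical, for illustration only.

CLUSTER_PRICE = 150_000.0   # upfront cost of buying a cluster ($)
RENT_PER_NODE_HOUR = 0.10   # cloud price per node-hour ($)
NODES = 64                  # cluster size / rented instance count

def cost_to_rent(hours):
    """Total cloud cost for running NODES instances for `hours` hours."""
    return NODES * hours * RENT_PER_NODE_HOUR

def breakeven_hours():
    """Hours of full-cluster usage at which renting costs as much as buying."""
    return CLUSTER_PRICE / (NODES * RENT_PER_NODE_HOUR)

print(f"Break-even at {breakeven_hours():,.0f} hours of full-cluster use")
print(f"Cost to rent the cluster for 1,000 hours: ${cost_to_rent(1000):,.0f}")
```

The qualitative point survives any choice of numbers: if demand is bursty -- a few peak weeks a year, as with Pathwork's diagnostics runs -- renting wins easily; if the cluster would run near-continuously for years, buying wins.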
IT vendors are taking notice. Today, every major IT firm has a "cloud computing strategy," although it's way too soon to tell who the big winners and losers will be. A company like Microsoft would seem to have the furthest to go, since it has relied on its traditional client-side software for so long. Transitioning from a shrink-wrapped software model to a service model is going to be tricky for the software giant, but over the past few years the company has been making a huge effort to shift course. Last year it rolled out its Azure cloud operating system in the hopes of duplicating the success it enjoyed with its flagship Windows platform.
Of particular interest to the HPC crowd was Microsoft's announcement last week regarding a new research initiative named Cloud Computing Futures (CCF). The group is being led by long-time HPC'er Dan Reed and we'll be covering the project in more depth later this week. In a nutshell, CCF is a collection of hardware and software technologies -- including Azure -- that attempts to define the next-generation cloud platform. Considering that cloud computing 1.0 is still coalescing, that's a pretty ambitious undertaking.
One of the major goals of CCF is to come up with a much more energy- and cost-efficient cloud computing platform than is available today. Toward that end, the Microsoftians are experimenting with Intel Atom-based servers. The Atom is Intel's ultra-low-power CPU aimed at MIDs, netbooks, and nettops. Its big draw: for around 30 or 40 dollars and drawing just a handful of watts, the chip gives you x86 compatibility.
Using the Atom for servers is not a completely new idea. Last year at SC08, SGI debuted an experimental Atom server called Molecule. Even though the performance of an individual Atom CPU was meager by Xeon standards, the performance-per-watt of the system was much better. Plus, the memory bandwidth of an Atom processor was about three times that of a conventional x86 CPU. A Molecule rack with 10,000 cores boasted an aggregate memory bandwidth of 15 terabytes per second.
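The 15 terabyte-per-second figure is just per-core arithmetic scaled up, which is easy to sanity-check from the two numbers SGI quoted:

```python
# Sanity-check SGI's Molecule bandwidth claim: 10,000 Atom cores
# with 15 TB/s aggregate implies ~1.5 GB/s of memory bandwidth
# available per core. Only SGI's quoted figures are used as inputs.

CORES = 10_000
AGGREGATE_TB_PER_S = 15.0

per_core_gb_per_s = AGGREGATE_TB_PER_S * 1000 / CORES
print(f"Implied per-core bandwidth: {per_core_gb_per_s:.1f} GB/s")
```

That per-core figure is modest in absolute terms; the advantage comes from how cheaply, in both dollars and watts, the cores can be multiplied.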
Of course, Intel wouldn't be happy if Atom servers became all the rage in cloud computing. It would much rather sell its more expensive, higher margin Xeon server parts to datacenter customers. Figuring out how to keep its Atoms in line could turn out to be a real challenge for Intel. The low power and low cost of mobile CPUs are the exact attributes that are so attractive to computing at scale. Yes, even for chipmakers, the rise of cloud computing may demand some tricky maneuvers.
Posted by Michael Feldman - March 03, 2009 @ 5:34 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.