I stopped by a few sessions of Gartner’s Symposium/ITxpo in Las Vegas this week — all of which were focused on cloud computing — and I heard no shortage of prepared comments about how Google is making its play in the space. In case you’re unaware, instead of offering up infrastructure as a service, Google utilizes its sprawling infrastructure to offer a wide range of applications and information as services over the Internet.
Funny thing, though … while Gartner’s well-prepared analysts were going through their PowerPoint slides, Google was drastically shifting its position by announcing (with its usual lack of pomp and circumstance) App Engine — a Web application platform that sits on Google’s infrastructure and appears to have its sights set on Amazon’s suite of Web services (EC2, S3, etc.). The long and short of App Engine is that developers write their applications (the platform currently supports Python only) and Google takes care of the rest — including dynamic resource allocation, load balancing and data replication. Sounds too good to be true, right?
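To make that concrete, here is roughly what a complete App Engine application looked like in the preview SDK. This is a minimal sketch in the vein of the SDK’s hello-world sample, using the bundled webapp framework; the handler and application names are my own.

```python
# Minimal App Engine handler, in the style of the preview SDK's samples.
# This is essentially the whole application -- Google supplies the servers,
# the scaling and the data replication.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from the cloud!')

# Map URL paths to handler classes; App Engine routes requests here.
application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
```

Upload that with the SDK’s appcfg.py tool and, as advertised, Google takes care of the rest.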
I’ve come across some criticism of App Engine, though, especially concerning its limitations in language support, storage quota and bandwidth. However, while these barbs might be well founded, we have to remember that App Engine is in the “preview release” stage — not yet Beta — and it is, for the time being, free. If there was one recurrent theme about Google at the Symposium/ITxpo, it was that Google has been very diligent in its quest to continuously update and improve the services it offers. Thus, it stands to reason that these issues will be all but resolved as Google works the bugs out of App Engine, builds in more functionality and eventually rolls out a production-ready, pay-for-use model.
Whether Google can topple Amazon Web Services is a whole other question. Actually, the more accurate question might be whether Google is competing directly with Amazon at all. In its preview form, for example, App Engine seems more reminiscent of solutions like Mosso’s Hosting Cloud. These types of offerings minimize complexity (and flexibility) for application developers, allowing them to simply write their code and be on their way, content, for the most part, with the presumption that the service provider will take care of everything (scale, updates, etc.) as long as the bill gets paid every month. Amazon EC2 (along with S3 and SimpleDB), on the other hand, offers users as close to bare metal as they can get: users pick their OS, their application platform and their architecture (failover, grid, cluster, whatever the job demands). It takes a more advanced user to take full advantage of EC2 et al., but the possibilities for its use seem endless. In short, where App Engine might be well suited to powering your Web site, EC2 and its suitemates are designed to run your business.
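To illustrate the difference in abstraction, here is a hypothetical sketch of launching a server on EC2 with the open-source boto library; the AMI ID and key pair name below are placeholders, not real resources.

```python
# Hypothetical EC2 launch via boto; the AMI ID and key pair are placeholders.
import boto

# boto picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
conn = boto.connect_ec2()

# Unlike App Engine, the user chooses the machine image (OS plus software
# stack), the hardware profile and how many instances to run.
reservation = conn.run_instances(
    'ami-12345678',             # placeholder AMI: your OS, your stack
    min_count=1,
    max_count=1,
    key_name='my-keypair',      # placeholder SSH key pair for login access
    instance_type='m1.small')   # you pick the instance size

instance = reservation.instances[0]
print 'Launched %s -- ssh in and build whatever architecture you like' % instance.id
```

The flip side of all that control, of course, is that provisioning, failover and load balancing become the user’s problem — exactly the trade-off described above.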
The reality, however, is that it’s still early in the game for both App Engine and Amazon Web Services, and whether they are direct competitors or not, the safe bet is that both will capitalize on the cloud computing hype and carve out sizeable chunks of market share.
That’s All Fine and Dandy, but Is Cloud Computing Actually for Real?
The simple answer, according to Gartner, is “Yes.” Among the myriad statistics thrown out at the conference was Gartner’s prediction that by 2012, 80 percent of Fortune 1000 companies will pay for some cloud computing service, and 30 percent of them will pay for cloud computing infrastructure. Pretty impressive if it comes true.
Another prediction that bears repeating is that through 2010, more than 80 percent of enterprise use of cloud computing will be devoted to very large data queries, short-term massively parallel workloads, or IT use by startups with little to no IT infrastructure. This seems to be on par with what we’ve been told by the companies marketing cloud offerings, most of which cite the Web 2.0 market as their No. 1 customer. However, one use case notably absent from this prediction is easily off-loadable enterprise jobs, like e-mail or disaster recovery. Especially among hosting providers, the story has been, to a person, that customers are getting their feet wet with functions they consider low-risk before moving more substantial, mission-critical functions into the cloud. Whether virtualized, grid-based hosting platforms should be considered part and parcel of cloud computing is up for debate, but I would imagine the business propositions for moving to either don’t diverge too far from one another.
Outside the talk about the various models of cloud computing and the pros, cons and concerns around each (which, by the way, resemble the concerns people had about grid computing in its early days), the real takeaway for me came in the form of some perspective on the analogy between cloud computing and the electric utility. Discussing why the advent of cloud computing doesn’t necessarily mirror the advent of centralized electricity, Gartner’s Thomas Bittman made a few really good points. First, he noted that whereas only a few thousand companies were generating their own power when the electric utility became available, there are millions of companies already managing their own IT. As a result, cloud computing won’t shatter the current model the way electricity did, because the proportion of companies for whom cheap, readily available computing changes the equation is drastically smaller. Another difference is that it is difficult to measure service levels in IT, so there is no discernible universal metric like the kilowatt.
While the first two points definitely are insightful, it is Bittman’s final point that really struck a chord with me as someone who covers the enterprise software space. Because IT is always evolving and getting cheaper, he said, moving to a cloud model to achieve flexibility or agility just isn’t all that necessary. Depending on how highly a company values the notion of security, or how much capital it is willing to invest in its datacenter, modernizing to an internal cloud-like solution (what Gartner calls real-time infrastructure) can be just as, if not more, effective. If all you’re after is scalability, elasticity and dynamism, I’ve spoken over the past year alone with what seems like dozens of vendors — including Appistry, Cassatt, GigaSpaces, IBM, Majitek and even Oracle — who can deliver just that. For large companies that can build their own economies of scale, insourcing these capabilities makes a lot of sense; for smaller companies that reap more reward from investing in their core business than in infrastructure, cloud computing (which Gartner defines as “external”) might be the way to go.
But that’s enough cloud computing for now. Looking at the rest of this week’s issue, I cannot recommend strongly enough Dennis Barker’s article on Liquid Computing and its fabric-based architecture, LiquidIQ. By virtualizing everything and putting it in a single chassis, LiquidIQ actually falls into that class of solutions companies might opt for in lieu of cloud computing. Although I believe Liquid Computing initially made its mark in the HPC space, LiquidIQ’s ability to give users the resources they need when they need them probably makes it a worthwhile proof of concept for the majority of enterprise users, as well. You also can read all about Abacus Data Exchange’s LiquidIQ deployment here.
Elsewhere, you might want to note the following announcements: “3Leaf Enables On-Demand Allocation across the Board”; “Oracle Intros In-Memory Cache for 11g”; “Bungee Labs Links CRM, Third-Party Data”; “IBM Takes SOA to the Bank”; “Cisco Furthers ‘Data Center 3.0’ Initiative w/ Nexus 5000”; and “TACC Hosts Summer Supercomputing Institute.”