It Takes More than Virtualization to Build a Cloud

By Derrick Harris, Editor

July 14, 2008

As the IT world struggles for a nicely packaged definition of cloud computing, marketing personnel who wish to leverage its buzz are engaged in a struggle of their own. First, they need to determine which, if any, of their offerings can arguably be considered cloud computing under any of the various working definitions currently populating the market. If the connection is tenuous, the question becomes whether to shoehorn their solution into that definition or to hold off for the time being and suggest the company start implementing features that will make for a better fit.

This decision is not easy. Jumping on a blazing hot bandwagon could be a ticket to overnight success, but latching onto an unproven fad could be fatal. And what about that struggle for a universal definition? Is stretching the boundaries to include your offering detrimental in that it adds further confusion to the mix, or is there plenty of room for everyone who wants to frolic up in the clouds?

Companies with roots in the traditional managed hosting field are having to address this issue more and more as virtualization establishes itself in their offerings. As it turns out, their reasons for leveraging or not leveraging the cloud buzz are just as varied as the definitions with which they are working.

“We are cloud computing”

The cloud computing landscape is changing so fast that after an early-spring blog post distinguishing his company’s offering from cloud computing offerings, GoGrid Technology Evangelist Michael Sheehan concluded that the two simply were not one and the same. Fast-forward a few months to late June, and Sheehan has changed his tune — perhaps with a little goading from President and Co-Founder John Keagy, who believes that, given the breadth of definitions out there, “for us to say that GoGrid is not cloud computing is just silly.”

One of the definitions from which Keagy draws his unequivocal determination comes from Forrester Research, which has included an underlying “grid engine” architecture as one element of a cloud. As its name might imply, GoGrid, a division of dedicated Web-hosting provider ServePath, utilizes a virtualized grid architecture to instantly deliver VMs to customers. “Grids are reincarnated in a new form, and it’s called ‘cloud computing,’” says Keagy. Sheehan concurs, adding that not only is GoGrid built using a grid architecture, but all other clouds are, as well.

Additionally, Sheehan has devised a pyramid diagram of the cloud marketplace, with the base level being infrastructure-as-a-service offerings, like Amazon’s Elastic Compute Cloud (EC2). Allowing users to launch and provision VMs with only a credit card, GoGrid provides a service that, essentially, is the same as EC2. To hear Keagy tell it, no one would deny that EC2 is a cloud, and because GoGrid is the only alternative to EC2, it, too, must be a cloud.

Taking it a step further, the folks at GoGrid actually believe their product is truer to cloud computing’s notions of openness, simplicity and flexibility than is EC2. In terms of simplicity, Sheehan says GoGrid is all about making cloud computing “less nebulous and [more] tangible to the end-user.” Whereas Amazon has an 18-minute video instructing EC2 greenhorns on how to get started, he says a GoGrid first-timer can be up and running in five minutes using the company’s almost-too-easy GUI. (Ed. Note: He’s not lying — at least in terms of provisioning a few machines and adding a load balancer and a database.)
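For readers who have never touched an infrastructure-as-a-service control panel or API, the pattern behind that five-minute claim is simple. The sketch below is hypothetical: the endpoint, credentials and field names are invented stand-ins, not GoGrid’s or Amazon’s actual API, but it shows how a few authenticated HTTP calls can stand up servers, a load balancer and a database.

```python
# Hypothetical sketch of the IaaS provisioning pattern described above.
# The base URL, credentials, paths and field names are invented for
# illustration; they are NOT GoGrid's (or Amazon's) real API.
import requests

API = "https://api.example-cloud.com/v1"  # placeholder endpoint
AUTH = ("api_key", "shared_secret")       # placeholder credentials

def provision(resource, spec):
    """POST a resource description and return the provider's record for it."""
    resp = requests.post(f"{API}/{resource}", json=spec, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# Launch two web servers, put a load balancer in front, add a database.
web1 = provision("servers", {"image": "linux-web", "ram_gb": 1})
web2 = provision("servers", {"image": "linux-web", "ram_gb": 1})
lb = provision("loadbalancers", {"members": [web1["ip"], web2["ip"]]})
db = provision("servers", {"image": "linux-db", "ram_gb": 2})

print("Site reachable at", lb["ip"])
```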

In terms of flexibility, Keagy believes GoGrid’s hardware virtualization model gives users far more options than does EC2. For one, it gives users as close to a bare-metal experience as possible, even providing console access as if you were working on a dedicated Windows, Linux or Debian server, giving users, for all intents and purposes, as much control as an “old-school environment.” In fact, says Keagy, a user could load software from his desktop DVD drive onto a GoGrid machine if he were so inclined.

Addressing the comparisons between infrastructure-as-a-service offerings like GoGrid and more-limited platform-as-a-service offerings like Google App Engine and Mosso, Keagy offers the following explanation: “App Engine, albeit it’s new, is very exciting if you’re a punk kid who wants to whip off a quick app fresh from scratch in your dorm room … [and] that’s obviously cloud computing … But GoGrid also is obviously cloud computing, and you can take your enterprise application, which might need a combination of servers running Debian, Linux and Windows, and a bunch of private networking features that you need to coordinate within your vLAN, and we give you root access.”

As for what makes GoGrid a cloud service provider and parent company ServePath a traditional hosting provider, Sheehan notes that on top of simply giving customers resources on which to host their applications and providing SLAs, cloud services should offer instant scalability and utility pricing, both of which describe GoGrid.

“Above all,” says Keagy, “we are cloud computing.”

All in Due Time

On the opposite side of the aisle is Layered Technologies, a company that definitely could squeeze its Virtual Private Datacenter (VPDC) solutions into some definition of cloud computing and run with it, but that is in no hurry to do so.

For Todd Abrams, president and chief operating officer of Layered Tech, cloud computing means having resources distributed over multiple points of presence, but connected and pooled so that applications are not concerned with where or what they are. “It’s a big marketing thing because, honestly, I don’t see how you can say you’ve got cloud computing if you have a single location,” he explains. “In my mind, you have to have diversity.” Layered Tech is addressing this, he adds, by buying and opening new datacenters across the globe, in cities like Dallas, Chicago and Tokyo.

Functionality-wise, Abrams sees easy-to-understand GUIs and utility-style billing as components of a cloud solution, and neither is currently available in the company’s VPDC offerings. However, he notes, Layered Tech is working on its GUI, its utility billing scheme and its credit card-billing mechanism, and should have a “cloud” of resources in a few months. Cloud computing, he says, is both an evolution and an extension of VPDC, but one that will materialize as technology continues to change and features continue to be added.
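To make the utility-billing half of that definition concrete, the difference from flat-rate hosting is that the meter runs only while resources do. The rates in the minimal sketch below are hypothetical, chosen purely to illustrate the mechanics:

```python
# Minimal sketch of utility-style billing versus flat-rate hosting.
# Both rates are hypothetical, chosen only to show the mechanics.

HOURLY_RATE = 0.19     # $ per server-hour (invented figure)
FLAT_MONTHLY = 199.00  # $ per server per month (invented figure)

def utility_charge(server_hours):
    """Bill only for the server-hours actually consumed."""
    return server_hours * HOURLY_RATE

# A server spun up for a one-week traffic spike, then torn down:
spike_hours = 7 * 24
print(f"utility bill: ${utility_charge(spike_hours):.2f}")  # $31.92
print(f"flat bill:    ${FLAT_MONTHLY:.2f}")  # full month, used or not
```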

One reason Layered Tech isn’t gung-ho about ramping up its cloud marketing is that the company has a good thing going with VPDC. Abrams also says he gets good bang for his buck from Amazon’s EC2 and S3 solutions, which customers often reference when looking at Layered Tech’s GridLayer family of offerings. This could help explain why, even without being called cloud computing, VPDC represents 10 percent of the company’s revenue and has garnered attention from “big, big” players in the financial services, credit card, medical and telecom industries.

Of course, the other reason Layered Tech isn’t marketing itself as a cloud computing provider could be that the distinction between VPDC and solutions like EC2 is a good thing. Abrams says that even if they are addressed sufficiently, enterprises always will have security concerns around shared infrastructures — it’s a mind thing. “A lot of guys in IT are server-huggers; they want to touch their servers and feel their servers,” he says. “Once you take that away from them, they don’t really know what to do, because that’s their life and their job.” To address this, VPDC users get a dedicated instance of nodes with which they can do as they please, and if one user’s nodes go down, it won’t affect other users. VPDC is part of a larger backbone, says Abrams, but each instance is contained to a user’s own nodes.

In another deviation from cloud standard operating practices, VPDC, which ranges from $1,700 to $4,000 a month to start, isn’t cheap, acknowledges Abrams. “But … when you work out the math and what the benefits are of what’s in that Virtual Private Datacenter, if you’re going to build in the scalability, you’re going to build in the redundancy, if you’re going to have the load balancers, firewalls, etc., in that, you’re at way more than four grand a month [doing it internally or through a traditional provider like AT&T].”
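Abrams’s “work out the math” argument is easy to sanity-check with a back-of-the-envelope tally. Every line item below is a hypothetical placeholder rather than a Layered Tech or AT&T price; the sketch only shows the shape of the build-versus-buy comparison.

```python
# Back-of-the-envelope build-it-yourself tally, with invented numbers.
# None of these figures come from the article; they only illustrate how
# a self-built, redundant stack can clear $4,000 a month.

monthly_costs = {
    "redundant servers (amortized)": 1500,
    "load balancers":                 600,
    "firewalls":                      400,
    "bandwidth and power":            800,
    "admin time (fractional FTE)":   1200,
}

total = sum(monthly_costs.values())
print(f"self-built total: ${total:,}/month")  # $4,500 under these assumptions
print("VPDC quoted range: $1,700 to $4,000/month")
```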

Finally, Abrams sees cloud computing as being driven by applications that are completely agnostic about hardware type and geographical location, but many enterprises don’t have the luxury of not caring. Compliance measures can require knowing where your data is being stored, as well as following various data retention practices, and Layered Tech’s ability to address customers’ compliance needs is not within the present realm of cloud computing. For example, says Abrams, the company’s Dallas datacenter is SAS 70-compliant. For testing and development purposes, however, Abrams sees cloud computing and solutions like Layered Tech’s VPDC developer packages being great fits.

What Layered Tech’s cloud product will look like remains to be seen, but due to the popularity of VPDC among enterprises, it appears the two solutions will be clearly distinct.

Keep ‘em Separate

Just as ServePath formed GoGrid to handle what it now calls its cloud computing offering, Rackspace formed Mosso to address the cloud computing market. However, unlike ServePath, Rackspace already had a virtualization offering in the same vein as Layered Tech’s VPDC. So why not use that offering as its foray into cloud computing?

According to Lew Moorman, Rackspace’s senior vice president of strategy and corporate development, the answer has a lot to do with both his definition of cloud computing and the strategic goal of attracting enterprise users. Talking about the latter, Moorman believes that, given the current state of cloud computing, highly customized and complex applications (like most mission-critical enterprise applications) are better suited for a traditional infrastructure. “The problem … is that in pretty much every single cloud … you are dealing with a multi-tenant, shared environment. And with multi-tenant and shared environments come restrictions,” he says. “If you meet them, then, man, you get all the benefits — and isn’t that great — but if you don’t, you’ve got problems. It won’t work.” He added that it will be a long time “until someone is running their Oracle accounting in the cloud.”

Aside from being multi-tenant, Moorman also defines clouds as revolving around concepts like pay per use, self service, quick provisioning and automation of key tasks, like scaling. In addition, he believes clouds must have a “very serious” software layer between the applications and underlying infrastructure to make the whole experience as seamless as possible.

Although Rackspace’s virtualization offering does provide utility billing and rapid provisioning, Moorman says the fact that it consists of dedicated virtual servers means it is not a cloud service. “We have to wrestle with how to market these things, and we just haven’t applied the ‘cloud’ label to it yet,” he says. If the offering matures and Rackspace starts selling it more incrementally, then maybe the “cloud” label will make sense, but, for now, the company is selling dedicated infrastructure using virtualization as a tool.

However, Moorman notes, depending on the definition with which someone is working, the line between dedicated hosting and cloud computing can get a bit murky. In some ways, he says, the whole idea of centralized computing, of not having a server closet in your office, can be considered cloud computing. If that’s the case, the “cloud” label probably will spread over traditional hosting to the point where the two become indistinguishable. For the time being, Moorman says, Rackspace welcomes the distinction and applies the cloud label only to the Mosso offering, which has a cloud software layer and sits clearly in the platform-as-a-service sector of the cloud. Overall, he believes the hosting market has done a good job of not co-opting the term.

The Expert’s View

Antonio Piraino’s professional life probably was running smoothly enough until cloud computing came along. Now, says Piraino, senior analyst for managed hosting with Tier 1 Research, he’s “more and more” involved with cloud computing because “that’s where there’s a bigger impact on all the managed hosters … from all those cloud infrastructures that have been put in place.”

Like the rest of us, Piraino has had to sift through the rubble to find a definition that suits his job functions. For starters, advances in virtualization have really helped to blur the line between what is managed hosting and what is cloud computing. Now, he says, managed hosting providers can tell customers, “You not only come on and get a raw VM, but if you need billing on the fly, we’ll suddenly create a billing system for you. If you suddenly need load balancing or this type of thing, we’ll do it all for you.” The result, he says, resembles glorified dedicated hosting, but being able to move stuff around on the fly and “play around” with the set-up brings a cloud element to it. However, if you ask Amazon, says Piraino, they’ll tell you it’s all about how you charge users. And IT service providers like AT&T and Verizon Business will say they provide storage on demand and utility computing capabilities, but it takes five days to provision. “[F]or an enterprise customer,” he noted, “that might be a really quick provisioning time.”

The result of Piraino’s quest to define cloud computing is a list of properties that every cloud should have. Among those properties, and where the distinction between managed hosting and cloud services becomes a little clearer, are: real-time, accessible infrastructure; on-the-fly scalability; a utility billing model; credit card billing; and a platform orientation. Managed hosting solutions usually lack these characteristics, and usually by design. For example, says Piraino, automated scalability is easier with a consistent, simple configuration than with a more complex enterprise-style configuration (see the sketch below). Likewise, while credit card billing might eliminate the hassle of dealing with a salesperson in order to get started, large corporations might actually want contracts and might not want to be anonymous. If Wal-Mart is going to use EC2 to handle excess holiday traffic, Piraino explains, it would seem the retailer would want Amazon to know what’s coming so nothing is at risk.
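Piraino’s point about configuration consistency is easiest to see in a scaling rule. The hypothetical threshold autoscaler sketched below only works because every node it adds is interchangeable with the rest; a bespoke enterprise topology offers no single “add another one of these” operation to automate.

```python
# Hypothetical threshold autoscaler, sketched to show why uniform
# configurations scale automatically and bespoke ones do not.

def autoscale(nodes, avg_load, clone_node, max_nodes=20):
    """Add an identical node when average load crosses a threshold.

    This only makes sense because clone_node() yields a node that is
    interchangeable with the existing ones -- the "consistent, simple
    configuration" Piraino describes.
    """
    if avg_load > 0.75 and len(nodes) < max_nodes:
        nodes.append(clone_node())  # scale out
    elif avg_load < 0.25 and len(nodes) > 1:
        nodes.pop()                 # scale back in
    return nodes

# Toy usage: nodes are just labels standing in for identical VMs.
fleet = ["web-1"]
fleet = autoscale(fleet, avg_load=0.9,
                  clone_node=lambda: f"web-{len(fleet) + 1}")
print(fleet)  # ['web-1', 'web-2']
```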

A prime example of this dichotomy, he says, is Rackspace, which hasn’t wanted to get involved with automated scalability and application-aware capabilities, but which has Mosso to fill that void.

Tomayto or Tomahto, the Result is the Same

If the ultimate goal of both managed hosting and cloud computing providers is to relieve IT departments of the headaches and costs associated with datacenter management, perhaps it doesn’t matter what we call them. GoGrid’s Keagy seems to think so, saying that “we’d all be wise to let cloud computing be as broad and fanatical [as possible].”

“I think what’s going on, what’s driving your career and mine, is outsourcing,” he explained. “Ninety-nine percent of IT still is not outsourced. I’m looking out across downtown San Francisco, and I just know this place is chock-a-block full of servers packed back in little closets, and this cloud computing thing is what’s going to drive all those people [who] are buying and maintaining all those servers in all those closets … to overcome their outsourcing objections.”
