The Virtual Evolution of Managed Services

By Derrick Harris, Editor

March 24, 2008

There is no doubt that managed hosting has been undergoing an on-demand transformation thanks to advances in virtualization and grid technologies, but although the underlying technologies might be similar, the available services are increasingly specialized — targeted at doing a few things and doing them well. This is especially true for two relatively recent entrants into the virtualized hosting fray.

Bringing Virtualization to Canada’s SMBs

Radiant Communications, a Vancouver, British Columbia-based provider of commercial broadband solutions, publicly entered the virtual space in October 2007 with its AlwaysThere hosted Exchange offering. Leveraging Radiant’s Grid Computing Utility (GCU), a collection of virtual machines combined with a flexible storage area network (SAN), the hosted Exchange offering gives users a dedicated instance of Microsoft Exchange with, according to Radiant’s director of advanced hosting, Jason Leeson, all the security and flexibility of an on-premise offering, as well as the economies of scale that come along with a shared environment. Thus far, the company has been focusing its marketing efforts around the Exchange service (specifically within the Canadian SMB market), and has been gaining a lot of traction as a result, but Leeson says that this is just the tip of the iceberg.

Radiant also is offering virtual servers (running Windows Server 2003, Red Hat Linux or any VMware-compatible OS) to customers who want the ability to scale their resources as needed, on demand, without having to invest in purchasing and managing physical machines. Leeson said the concept of renting virtual servers on a pay-per-use basis is currently showing the most opportunity for uses such as disaster recovery (i.e., backup) and business continuity (i.e., automatic failover), but some customers are actually hosting their own applications on the grid. Radiant’s grid computing environment is connected directly to its existing MPLS (Multi-Protocol Label Switching) core network, which Leeson says allows Radiant to host and deliver grid-based virtual servers and applications for each customer in a completely secure and private manner.

Concerning the latter, Leeson acknowledges that Radiant’s virtual server model is still in its infancy, but points to significant advancements on the horizon. Currently, users wishing to cluster several VMs must wire the servers together themselves, as Radiant has not yet incorporated that level of automation. Additionally, customers must add servers directly through Radiant, with servers usually provisioned and ready to go in a few hours. However, Leeson explained, Radiant is only in the first stage of a three-phase rollout: (1) standardize the grid infrastructure; (2) build the internal management tools for automation and provisioning; and (3) delegate administration, control and management to users. Once the final phase is complete — or at least underway — Leeson says customers will have the full virtual private datacenter (VPDC) experience of being able to turn up or turn down servers on demand, track server utilization, etc. He thinks these customer-side management tools and consoles will be big differentiators as Radiant continues to grow its service.

Right now, the main target for virtual servers and VPDCs is the independent IT consultant market. “We talk to a lot of IT consultants, for example, who don’t have their own datacenters [but] have niche vertical apps that they offer to their customer base,” said Leeson. “This is an opportunity for them to tap into the Radiant datacenter, and we set them up with the virtual servers and they can run what they want.” Professional service organizations, such as law firms, have been the main customers for the AlwaysThere hosted Exchange offering, added Leeson.

Hosting in the Cloud

Attacking the management problem from a different angle is Mosso, a Rackspace company that started in 2006 when co-founders Jonathan Bryce and Todd Morey had the idea to offer Rackspace’s enterprise-level technology to smaller users in a multi-tenant environment. In February of this year, Mosso introduced a revamped service — the Hosting Cloud — in an attempt to make the Web hosting experience as simple as possible without sacrificing reliability.

Leveraging a cloud of computers and VMs to offer customers as-needed scalability, Bryce describes the Hosting Cloud as “a place where developers can basically upload their code and we take care of the rest.” With this in mind, the infrastructure uses standard Web technologies like PHP, Ruby, Perl, .NET, and ASP, and users don’t do any server provisioning, as Mosso’s internally developed software manages provisioning, scaling and other aspects of the environment automatically. Once the application has been uploaded via the Web interface, $100 per month gives customers access to 500GB of bandwidth, 50GB of high-performance storage and 3 million Web requests. Scaling is done automatically as applications experience greater traffic or require more resources, and the extra resources and/or Web requests cost only “pennies”: $.50 per gigabyte of disk space; $.25 per gigabyte of bandwidth; and $.03 per 1,000 Web requests. “A lot of those other systems,” said Bryce, “… make it easy to provision additional resources quickly, but they don’t necessarily do it automatically.”
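The pricing model above is simple enough to sketch. The following is a hypothetical illustration using the rates quoted in this article; the function name and structure are invented for clarity and are not part of any Mosso API.

```python
# Sketch of the Hosting Cloud pricing described above (2008 rates):
# a flat $100/month base fee covering fixed quantities, plus metered
# overages for anything beyond them. Names here are illustrative only.

BASE_FEE = 100.00            # USD per month
BASE_BANDWIDTH_GB = 500
BASE_STORAGE_GB = 50
BASE_REQUESTS = 3_000_000

OVERAGE_STORAGE_PER_GB = 0.50
OVERAGE_BANDWIDTH_PER_GB = 0.25
OVERAGE_PER_1K_REQUESTS = 0.03

def monthly_bill(bandwidth_gb, storage_gb, requests):
    """Estimate a month's bill: base fee plus per-unit overages."""
    bill = BASE_FEE
    bill += max(0, bandwidth_gb - BASE_BANDWIDTH_GB) * OVERAGE_BANDWIDTH_PER_GB
    bill += max(0, storage_gb - BASE_STORAGE_GB) * OVERAGE_STORAGE_PER_GB
    bill += max(0, requests - BASE_REQUESTS) / 1000 * OVERAGE_PER_1K_REQUESTS
    return round(bill, 2)

# A site within all base quantities pays only the flat fee:
print(monthly_bill(400, 30, 2_500_000))   # 100.0
# A million extra Web requests adds just $30:
print(monthly_bill(400, 30, 4_000_000))   # 130.0
```

The sketch makes the “pennies” point concrete: even a substantial traffic spike translates into a modest overage rather than a new pricing tier.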

As good as this all sounds, though, even Bryce acknowledges that the platform has limitations — some of which, like not allowing users administrative access, are by design. “It’s a plus because it means they don’t have responsibility for that, but it’s a minus because it means there are limitations that we put in place — you couldn’t run SAP or something on our cluster,” explained Bryce. “It’s meant to do a few things really well. It’s meant to serve Web applications and their … databases.” Essentially, if users have needs that are out of the norm for Web applications (e.g., connecting to a legacy system with custom C code), Mosso will not currently handle them within its system, as such custom installations might affect downtime or otherwise throw a wrench in the system.

The reality, says Bryce, is that there always are trade-offs when dealing with a fully managed platform. “One of the questions we get is why would someone go with Rackspace over Mosso, and that’s generally what it is,” he elaborated. “Rackspace’s customers have more complex and more customization needed to work with their overall architecture, and we generally do really well for the set of standard technologies that we support.”

Among the technologies that Mosso does not support is Java, although Bryce hopes that will change by the end of the year. Java support will come hand in hand with a “sandbox” environment Mosso is currently working on, which would allow customers administrative access of individual virtual instances without Mosso having to install any unique software across its entire pool of resources. Customers also will have more in-depth insight into how their applications are running, thanks to an improved control panel that will, among other things, allow users to access storage snapshots. In addition to these upgrades, Mosso also plans to expand into an additional Rackspace datacenter and to introduce larger base packages for customers who know they will regularly go beyond the current base quantities.

Mosso is an optimistic company, though, and Bryce firmly believes that Mosso’s pros outweigh its seemingly minimal cons. And one big thing the company has going for it is its level of service, which Bryce describes as deeper and more proactive than those of many other managed service providers. Support is available 24 hours a day via phone, e-mail or chat, says Bryce, and because everyone is in-house, customer service representatives have easy access to the technology team should they need it. “Even though everything we’re doing is high technology, we still keep a people element in it,” says Bryce. “That’s been one of the keys to Rackspace’s success over the years, and we definitely stick with that legacy.”

One example of this personal service to which Bryce points involves a recent appearance by a Mosso customer on a national network morning talk show. The customer gave Mosso a heads-up as to when it would be on, and Mosso scaled up its infrastructure in advance to avoid any potential traffic-related downtime. Generally, however, customers don’t know when the need for increased scale will arise, but that doesn’t mean there is any less service involved. According to Bryce, Mosso looks at every deviation from users’ averages and acts accordingly. If it’s just more Web traffic, then the answer is more resources. If, however, it’s a funky SQL query, the Mosso reps will play the role of database administrators and help get everything running smoothly.
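The “deviation from users’ averages” approach Bryce describes can be sketched with a simple statistical check. This is an illustrative toy, not Mosso’s actual software: flag any metric reading that sits well above its historical average, so it can be routed either to automatic scaling or to a human rep for investigation.

```python
# Toy version of deviation-based monitoring: compare the current reading
# of a metric (e.g., requests per minute) against its running history and
# flag anything more than `threshold` standard deviations above average.

from statistics import mean, stdev

def classify_deviation(history, current, threshold=3.0):
    """Return 'normal' for readings near the historical average,
    'investigate' for readings far above it."""
    avg = mean(history)
    sd = stdev(history)
    if sd == 0 or current <= avg + threshold * sd:
        return "normal"
    return "investigate"

# Steady traffic around 1,000 req/min looks normal...
print(classify_deviation([980, 1010, 995, 1005], 1020))     # normal
# ...but a sudden 10x spike gets flagged for scaling or a human look.
print(classify_deviation([980, 1010, 995, 1005], 10_000))   # investigate
```

In the scenario from the article, a traffic spike would trigger automatic resource allocation, while an anomalous database query pattern would be handed to a support rep acting as a DBA.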

Clearly, customers aren’t shying away from Mosso, as the company currently boasts more than 2,000 customers, with the majority having joined in the past year. Bryce says Mosso adds 900 applications per week to its cloud, and is hosting more than 76,000 mailboxes.

One of these customers is David Ponce, owner and managing editor of consumer technology blog Oh Gizmo, who has been with Mosso for almost a year. He was turned on to Mosso after seeking input via a post on his site, as increasing traffic and “terrible” experiences with other providers left him needing to make a change.

At one point, Ponce downgraded from one provider’s “limited” virtual private server offering to the regular grid-based offering, and while it handled his needs just fine, customer service was another story altogether. Corroborating Bryce’s account of Mosso’s level of customer service, Ponce says he has “never seen anything like it.” “I can get in touch with a human within two minutes,” he added, “and, to me, that is worth every penny.”

As for the product itself, Ponce points to some early issues with downtime following appearances on the front pages of Digg or Slashdot, but says everything has been running pretty much “perfectly” — nearly 99 percent uptime — after a couple months of tweaking the settings. “Whatever they’re doing, their clusters are working, because whenever there’s a spike, it grows and handles it just fine,” said Ponce.

Ponce also has some concerns with the pricing around Web requests, as he has been exceeding the limits lately and expects to do so occasionally in the months to come. He welcomes the opportunity to upgrade to a package offering more base requests, but notes that for the most part, Mosso Hosting Cloud is perfect for his needs, which generally involve 600,000 to 700,000 page views per month.

One Step at a Time

Regardless of how many customers managed hosting providers draw or how grand their master plans, both Radiant and Mosso understand that success in the utility hosting space requires a measured approach. According to Mosso’s Bryce, truly pervasive cloud computing will occur only if today’s providers stay narrowly focused on doing one thing well — managing that service “all the way up and down the stack.” Like Amazon’s S3 for storage and Mosso’s Hosting Cloud for Web applications, Bryce foresees a day when “[t]here’ll be enough of these services and enough of these technology-specific utilities that are high-quality and high-performance that most things will be running on them.”

Radiant’s Leeson sees the market unfolding in much the same way, noting that Radiant saw e-mail as a great opportunity to introduce customers to its grid-based service because e-mail is a mission-critical application that doesn’t provide any real strategic advantage to organizations. Disaster recovery and business continuity are other areas where customers, particularly SMBs, can move some tasks into the cloud without over-committing. “When we talk about SaaS or cloud computing, it’s not all or nothing,” he explained. “It’s not companies that are suddenly going to move everything into the cloud and get rid of all their on-premise stuff. It’s going to be slow, and it’s going to be gradual, and they’re going to start with certain things.”
