Cloud computing promises numerous benefits to businesses – among these are agility, scalability, and reduced cost – but the virtualization layer inherent in most public clouds has been something of an anathema to the HPC community. Are bare metal clouds the answer?
Today’s clouds are built on a mix of technologies, including virtualization, automation and orchestration. In some circles, virtualization is nearly synonymous with cloud, but above all a cloud is a pool of resources that is elastic, scalable and accessible on-demand. For the HPC community especially, much of the cloud computing that takes place is of the “bare metal” kind, aka non-virtualized cloud.
Although today’s server virtualization is a lot sleeker than in years past, nothing can beat the performance of bare metal. Some of the issues with virtualized cloud computing were detailed recently by Internap Vice President of Hosted Services Gopala Tumuluri.
Tumuluri points out what most HPCers already know: the virtualized, multi-tenant platform common to most public clouds is subject to performance degradation. “While the hypervisor enables the visibility, flexibility and management capabilities required to run multiple virtual machines on a single box, it also creates additional processing overhead that can significantly affect performance,” writes Tumuluri.
Data-heavy loads are the most likely to be negatively impacted, especially when the service is oversubscribed. Such a setting is ripe for the so-called noisy-neighbor problem that occurs when too many virtual machines compete for server resources.
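The noisy-neighbor effect can be sketched with a simple back-of-the-envelope model: tenants on an oversubscribed host split a fixed pool of CPU capacity, so each VM's share drops as neighbors pile on. The function name and all numbers below are illustrative assumptions, not measurements from any real hypervisor.

```python
# Toy model of the noisy-neighbor problem: under fair sharing, an
# oversubscribed host divides its capacity evenly among tenants, so
# each VM receives less than it demands. Numbers are hypothetical.

def per_vm_share(host_capacity: float, num_vms: int, demand_per_vm: float) -> float:
    """Return the CPU share each VM actually receives under fair sharing."""
    total_demand = num_vms * demand_per_vm
    if total_demand <= host_capacity:
        return demand_per_vm          # undersubscribed: everyone gets what they ask for
    return host_capacity / num_vms    # oversubscribed: capacity is split evenly

# A 16-core host where each VM wants 4 cores' worth of CPU:
print(per_vm_share(16, 2, 4))   # 4.0 -> no contention
print(per_vm_share(16, 8, 4))   # 2.0 -> each VM gets half its demand
```

On dedicated hardware, of course, `num_vms` is effectively one and the question never arises – which is precisely the bare metal pitch.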
This is where the bare metal cloud offers a significant advantage, especially for latency-sensitive workloads. Dedicated hardware delivered as a service offers the user the benefits of cloud – flexibility, scalability and efficiency – without the drawbacks of a shared server.
Tumuluri cautions would-be cloud adopters to pay careful attention to terminology:
“It’s important not to confuse true bare-metal cloud capabilities with other, related terminology, such as ‘dedicated instances,’ which can still be part of a multi-tenant environment; and ‘bare-metal servers,’ or ‘dedicated servers,’ which could refer to a managed hosting service that involves fixed architectures and longer-term contracts,” he writes. “A bare-metal cloud model enables on-demand usage and metered hourly billing with physical hardware that was previously only sold on a fully dedicated basis.”
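The economics Tumuluri describes – metered hourly billing on hardware formerly sold only on a dedicated basis – come down to simple arithmetic. A quick sketch, using entirely hypothetical rates, shows how a buyer might compare hourly bare metal against a fixed monthly dedicated-server contract:

```python
# Hypothetical cost comparison: metered hourly bare metal vs. a
# fixed monthly dedicated-server contract. Rates are made up for
# illustration only.

def metered_cost(hourly_rate: float, hours_used: float) -> float:
    """Total cost of an hourly-billed bare metal server."""
    return hourly_rate * hours_used

def breakeven_hours(hourly_rate: float, monthly_rate: float) -> float:
    """Usage level at which the fixed monthly contract becomes cheaper."""
    return monthly_rate / hourly_rate

# A bursty workload running 200 hours at $1.50/hr vs. a $900/month contract:
print(metered_cost(1.50, 200))      # 300.0 -> far cheaper than the contract
print(breakeven_hours(1.50, 900))   # 600.0 hours before the contract wins
```

For the periodic, bursty workloads discussed below, usage rarely approaches the breakeven point, which is why metered billing is attractive.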
The bare metal cloud is a good match for bursty, I/O-heavy workloads. Ideal use cases include media encoding and render farms, which are both periodic and data-intensive in nature. “In the past, organizations couldn’t put these workloads into the cloud, or they simply had to accept lower performance levels,” remarks Tumuluri.
Companies with big data applications may also be interested in exploring the bare metal option, since high-volume, high-velocity data can run into latency issues on virtualized cloud servers.
Last but not least is the issue of security. Organizations with stringent compliance guidelines – think finance, government and healthcare – can benefit by having their data contained in a well-defined physical environment.
As an umbrella term, Infrastructure-as-a-Service includes many different scenarios that appeal to different use cases. For some HPC workloads, dedicated hosting options with cloud-like features (elasticity, ease-of-use, utility-style billing, etc.) may offer the best of both worlds.