The Perils of Becoming Trapped in the Cloud

By Gabor Samu, HPC Solutions, IBM

February 20, 2019

Terms like ‘open systems’ have been bandied about for decades. While modern computer systems are relatively open compared to their predecessors, there are still plenty of opportunities to become locked into proprietary interfaces. In this article, we explore the challenge of proprietary lock-in in the age of cloud-based HPC and recommend strategies to help you stay portable and flexible.

An evolution in what it means to be open

In the early 1990s, lock-in at the level of hardware and operating systems was a major concern. Frustrated with costly proprietary systems, customers began looking towards UNIX as a preferred platform. While not widely used for commercial applications at the time, it was perceived as open. There were only a few dialects, and implementations were mostly standard among vendors. The IEEE’s Portable Operating System Interface standards (POSIX 1003.x) began appearing in RFPs, and although there continued to be multiple processor architectures, the push-pull of innovation and customer demand helped impose more uniformity across operating systems and file systems.

As Linux and the open-source movement gained momentum in the early 2000s, our definition of open shifted from standards-based to software with open-source roots. Widespread use of scripting, cross-platform languages such as Java and Python, and consolidation in processor technologies made infrastructure-level lock-in less of a concern.

Today, it seems that open-source frameworks are everywhere we look. Open-source software helped further improve portability and quality and reduce cost, but customers quickly realized that it was not always a panacea. Open-source projects providing valuable functionality were often backed by just one or a handful of commercial entities. While customers could in theory download and compile a GitHub-based community edition, for many this was impractical, and it was valuable to have a commercial entity able to integrate and support open-source components. Customers learned that in some cases it was just as easy to become locked into single-source open-source software as into a proprietary solution.

In the age of cloud, the meaning of open is shifting yet again. Ironically, many people describe cloud services as open, but they are confusing openness with convenience and flexibility. While it’s true that most cloud services leverage open-source frameworks, depending on how those services are consumed, the risk of lock-in is real.

Cloud services present new opportunities and challenges

Infrastructure-as-a-service (IaaS) offerings are reasonably uniform across cloud providers, but even these standard services carry some degree of lock-in. Each cloud provider has its own native tools, CLIs, and APIs to provision and manage building blocks such as VMs, containers, VPCs, and storage, and different tools again to assemble these building blocks into HPC-ready clustered environments.
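To make this concrete, here is a minimal sketch, assuming Python and the boto3 SDK, of what launching a single small VM looks like through one provider’s native API; the image ID, instance type, and region are placeholder values, and the closing comment shows roughly how the same request would look on another provider’s CLI.

```python
# Illustrative sketch only: launching one small VM on AWS with boto3.
# The AMI ID, instance type, and region below are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])

# The equivalent request on another provider uses a different SDK and a
# different vocabulary entirely, e.g. roughly:
#   gcloud compute instances create my-node --machine-type=e2-micro
# so even "plain" IaaS automation ends up being provider-specific.
```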

As we climb the cloud provider’s stack, the risk of lock-in becomes higher. At the PaaS layer, most providers offer open-source services such as MySQL, Redis, and Kubernetes. Sticking to these relatively standard services provides some level of portability between providers, but there are plenty of proprietary PaaS offerings as well. Examples include services such as AWS Lambda, Azure Batch, and Google Cloud Functions. While convenient, the APIs and interfaces to these services are usually proprietary, and they are designed to fuel the consumption of additional cloud-native services.
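The lock-in is visible even at the level of function signatures. Purely as an illustration, the sketch below shows the same trivial logic written once as an AWS Lambda handler and once as a Google Cloud Functions HTTP handler, both in Python; the function names are arbitrary, and neither snippet does anything useful outside its provider’s runtime.

```python
# Illustration only: the same trivial logic written against two proprietary
# serverless interfaces. Function names are arbitrary.

# AWS Lambda entry point: receives an event dict and a context object.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

# Google Cloud Functions (HTTP) entry point: receives a Flask-style request.
def gcf_handler(request):
    name = request.args.get("name", "world")
    return f"hello {name}"
```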

Being trapped by proprietary cloud services

Building applications by wiring together proprietary cloud services is a sure-fire way to become trapped in a single cloud ecosystem. Cloud pricing schemes can be complex, melding multi-dimensional, tiered metrics such as resource usage, API calls, data storage, and network traffic, making it difficult to forecast costs. Much like the mainframe service bureaus of old, customers can wake up to find that they have essentially outsourced their operations, lost cost transparency, and have little or no leverage in negotiating pricing and terms with their cloud provider.

This is not to say you should never use proprietary cloud services or software. Proprietary solutions often represent compelling value. For example, a cloud provider can probably serve a speech-to-text or image recognition service much more cost-effectively than you can build one yourself. The trick is to select proprietary services with care and treat each service as a utility. Design your applications with a view to what it will take to move to a new provider should the need arise.
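One practical way to put this into practice is to keep each proprietary service behind a thin, neutral interface of your own. The following is a hypothetical Python sketch of such an adapter for a speech-to-text utility; the class and method names are invented for illustration, and the provider-specific calls are left as stubs.

```python
# Hypothetical sketch of treating a proprietary service as a swappable utility.
# All names here are invented for illustration; the provider-specific calls
# would be implemented with each provider's real SDK.
from abc import ABC, abstractmethod


class SpeechToText(ABC):
    """Neutral interface the rest of the application codes against."""

    @abstractmethod
    def transcribe(self, audio_path: str) -> str:
        ...


class ProviderATranscriber(SpeechToText):
    def transcribe(self, audio_path: str) -> str:
        # Call provider A's speech-to-text API here (omitted).
        raise NotImplementedError


class ProviderBTranscriber(SpeechToText):
    def transcribe(self, audio_path: str) -> str:
        # Call provider B's speech-to-text API here (omitted).
        raise NotImplementedError


def build_transcriber(provider: str) -> SpeechToText:
    # Switching providers becomes a one-line configuration change
    # rather than a rewrite of every caller.
    return {"a": ProviderATranscriber, "b": ProviderBTranscriber}[provider]()
```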

Five tips to reduce the risk of cloud lock-in

  • Plan for hybrid environments – While it may be tempting to deploy applications fully in the cloud, this can be expensive and reduce flexibility. It’s prudent to pursue a hybrid strategy maintaining some on-premises capacity so that you’re able to shift workloads to where they can run most efficiently. Hybrid clouds provide the convenience of additional computing capacity to help with peak workload demands and help ensure that you’re in a position to take applications back in-house should the need arise.
  • Keep control over your data – While this may be challenging in cases where data originates in the cloud, your data is your most precious asset. Look for solutions that accommodate local replicas of your most critical data (see the sketch after this list), and avoid proprietary cloud storage solutions that would make it difficult or costly to extract and migrate data.
  • Stay low on the cloud provider’s value stack – If you stick to infrastructure-oriented cloud services or standard open-source platform services, your chances of becoming locked into a single cloud provider will be significantly reduced.
  • Have a backout plan – If you do decide to take advantage of a proprietary cloud service, it’s prudent to have thought through a “plan B” in case it becomes necessary to move to a different provider or implement a similar service in-house.
  • Lean toward multi-cloud solutions – When selecting software, put applications and middleware that support multiple clouds at the top of your list. This will help you stay flexible and avoid becoming dependent on a single provider.
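
As a concrete illustration of the data-control tip above, the following is a minimal sketch, assuming an S3-compatible object store accessed with boto3, that copies everything under a bucket prefix to a local replica; the bucket name, prefix, and destination directory are placeholders.

```python
# Minimal sketch, assuming an S3-compatible object store reachable via boto3.
# Bucket name, prefix, and destination directory are placeholder values.
import os
import boto3

s3 = boto3.client("s3")  # add endpoint_url=... for non-AWS S3-compatible stores


def replicate_prefix(bucket: str, prefix: str, dest_dir: str) -> None:
    """Copy every object under `prefix` into a local directory."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            local_path = os.path.join(dest_dir, key.replace("/", os.sep))
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            s3.download_file(bucket, key, local_path)


replicate_prefix("my-critical-data", "results/", "/data/local-replica")
```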


Solutions for multi-cloud HPC

For HPC users, IBM Spectrum Computing solutions can help clients avoid lock-in and stay portable across clouds. Spectrum Computing software runs on-premises or across your choice of cloud providers in a hybrid multi-cloud model, and it supports hundreds of existing applications, including containerized workloads. Whether applications run locally or in the cloud, an integrated resource connector enables applications to “burst,” seamlessly tapping cloud provider IaaS services based on configurable policies without administrator intervention.

Spectrum Computing software manages the interaction with the cloud provider’s infrastructure APIs automatically, quickly standing up application-ready environments, running workloads, and tearing down infrastructure in a fashion that is transparent to application users. Built-in adapters are provided for IBM Cloud, Microsoft Azure, Google Cloud, Amazon Web Services (AWS) and OpenStack.
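To illustrate the general bursting pattern only (this is not IBM’s actual implementation or configuration syntax), the hypothetical Python sketch below captures the kind of policy decision a resource connector makes: when pending demand exceeds on-premises capacity, a provider-specific adapter is asked to stand up hosts, and idle cloud hosts are otherwise reclaimed. All names and thresholds are invented.

```python
# Hypothetical sketch of policy-driven cloud bursting; this is NOT actual
# IBM Spectrum LSF resource connector code or configuration, just the pattern.

class CloudAdapter:
    """Stand-in for a provider-specific adapter (AWS, Azure, GCP, ...)."""

    def provision(self, count: int) -> None:
        ...  # call the provider's IaaS API to add hosts

    def teardown_idle(self) -> None:
        ...  # release cloud hosts that are no longer needed


def burst_policy(pending_slots: int, free_onprem_slots: int,
                 adapter: CloudAdapter, max_cloud_hosts: int = 10) -> None:
    """If demand exceeds local capacity, burst; otherwise reclaim idle hosts."""
    shortfall = pending_slots - free_onprem_slots
    if shortfall > 0:
        adapter.provision(min(shortfall, max_cloud_hosts))
    else:
        adapter.teardown_idle()
```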

Staying open and portable in the age of cloud computing takes special care. Utilizing hybrid, multi-cloud environments, keeping close control of your data, and designing your applications with a view to what it will take to move to a new provider will ensure that you stay portable and flexible for whatever the future holds.
