With the rise of cloud services, CIOs are recognizing that applications, middleware, and infrastructure running in various compute environments need a common management and operating model. Maintaining different application and middleware stacks on-premises and in cloud environments, possibly with a different specialized infrastructure and application management solution for each cloud provider, adds significant friction to dynamically allocating, using, and managing those resources.
Without a common management and operating model, hybrid cloud environments suffer from several problems:
- Inhomogeneous, fragmented environments add complexity for managers, operators, and security teams.
- The speed of innovation slows down.
- Cloud resources are hard to change or shut down when they depend on a cloud provider’s proprietary services.
- Workloads bound to a specific cloud environment setup can’t easily be migrated back on-premises, and vice versa.
Kubernetes has become the de facto standard container orchestrator, as pointed out in a previous article. All major IT vendors and cloud providers build solutions on top of its standardized API, which is available everywhere. CIOs are now looking into the applicability of Kubernetes for HPC in the hybrid cloud, as it offers a common management and operating model for every environment.
Kubernetes: A Common Management and Operating Model for Hybrid Cloud
Kubernetes facilitates the use and administration of countless containers running on fleets of servers. It is the new standard platform for hybrid environments, supported by many IT vendors and cloud providers. CIOs can now allocate a fully configured and supported container orchestrator as the base for all of their application workloads.
Unlike proprietary infrastructure solutions, Kubernetes provides portability, ease of administration, high availability, integrability, and monitoring capabilities. When managing resources on Kubernetes, CIOs are no longer bound to a specific infrastructure. They can offer their users the same set of functionalities, be it on-premises or in any cloud, using the same application stack. Users need not even be aware that their applications are running on Kubernetes, nor on which infrastructure they are running, be it their own data center or a specific cloud provider like Google, Microsoft, or Amazon.
Reducing complexity in hybrid cloud environments by using a standardized software stack like Kubernetes brings many advantages: improvements made for one platform become automatically available on the other platforms; deployment and operational aspects are simplified; and security audits are easier and more rigorous to execute.
Kubernetes and HPC
Kubernetes is already the de facto platform for AI and ML. However, when it comes to traditional HPC, some challenges remain. A set of features built into HPC workload managers is not yet available in Kubernetes; we discussed the major differences in our earlier HPCwire Part I article. The major HPC gaps in Kubernetes today are the lack of native support for distributed memory jobs, i.e., MPI applications, and the absence of a job queueing system compatible with existing HPC applications.
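To illustrate the MPI gap: a vanilla Kubernetes Job knows nothing about ranks or gang scheduling, so MPI workloads typically rely on an add-on such as the Kubeflow MPI Operator. The following is a minimal sketch, assuming that operator and its MPIJob custom resource (API group kubeflow.org) are installed in the cluster; the image name, job layout, and sizing are hypothetical.

```python
# Sketch: submitting an MPI job through the Kubeflow MPI Operator's CRD.
# Assumes the MPI Operator is installed; image and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

mpi_job = {
    "apiVersion": "kubeflow.org/v2beta1",
    "kind": "MPIJob",
    "metadata": {"name": "cfd-solver"},
    "spec": {
        "slotsPerWorker": 4,
        "mpiReplicaSpecs": {
            "Launcher": {
                "replicas": 1,
                "template": {"spec": {"containers": [{
                    "name": "launcher",
                    "image": "example.com/cfd-solver:1.0",  # hypothetical
                    "command": ["mpirun", "-np", "8", "/opt/solver/run"],
                }]}},
            },
            "Worker": {
                "replicas": 2,  # 2 pods x 4 slots = 8 MPI ranks
                "template": {"spec": {"containers": [{
                    "name": "worker",
                    "image": "example.com/cfd-solver:1.0",
                    "resources": {"limits": {"cpu": "4", "memory": "8Gi"}},
                }]}},
            },
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org", version="v2beta1", namespace="default",
    plural="mpijobs", body=mpi_job)
```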
Kubernetes has built-in high availability on many layers. For HPC jobs, however, restarting a single failed container is not enough, because the whole distributed job may already have failed with it. In that case, the entire distributed memory job must be rescheduled automatically, which is something Kubernetes doesn’t handle on its own.
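Here is a minimal sketch of the kind of rescheduling logic a workload manager or custom controller has to supply on top of Kubernetes: watch the pods of a distributed job and, if any worker fails, tear down and resubmit the entire job rather than the single container. The job name and the resubmit helper are hypothetical.

```python
# Sketch: whole-job rescheduling on worker failure; names are hypothetical.
from kubernetes import client, config, watch

config.load_kube_config()
core, batch = client.CoreV1Api(), client.BatchV1Api()
JOB_NAME, NS = "cfd-mpi-job", "default"

def resubmit_job(name: str, namespace: str) -> None:
    """Hypothetical helper: rebuild the Job manifest and create it again."""
    ...

w = watch.Watch()
for event in w.stream(core.list_namespaced_pod, namespace=NS,
                      label_selector=f"job-name={JOB_NAME}"):
    pod = event["object"]
    if pod.status.phase == "Failed":
        # One failed rank invalidates the whole distributed run, so the
        # entire job is deleted (with its pods) and submitted from scratch.
        batch.delete_namespaced_job(JOB_NAME, NS,
                                    propagation_policy="Foreground")
        resubmit_job(JOB_NAME, NS)
        w.stop()
```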
Besides these challenges, Kubernetes brings many benefits for HPC: for example, the environment for the engineer and for the containerized HPC application is always the same, be it on-premises or in a cloud-based environment; and the capability to quickly change from one infrastructure to another allows the HPC team to align with their company’s cloud roadmap. The freedom to move workloads between infrastructures based on a common API – the Kubernetes API – is what makes it valuable.
Containerized HPC Applications on Kubernetes
Over the past five years, dozens of HPC applications have been containerized, be they commercial codes like ANSYS, COMSOL, and STAR-CCM+, or open-source packages like OpenFOAM and GROMACS, along with HPC cluster schedulers like Univa Grid Engine and Slurm. Thanks to container technology, a constant stream of updates and improvements can be delivered and adopted by customers promptly and seamlessly. Additionally, container images allow users to roll back to a previous application version at any time, so they can always reproduce earlier results.
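In practice, that reproducibility comes down to image pinning: a job references an exact version tag (or an immutable digest) rather than a floating one. The following is a minimal sketch of rerunning a case against a pinned solver version; the image name, tag, and case path are hypothetical.

```python
# Sketch: pinning a containerized solver to an exact version so a run
# can be reproduced later (image name, tag, and case path hypothetical).
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    api_version="batch/v1", kind="Job",
    metadata=client.V1ObjectMeta(name="openfoam-rerun"),
    spec=client.V1JobSpec(template=client.V1PodTemplateSpec(
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="solver",
                # A digest reference (...@sha256:...) would pin it immutably.
                image="example.com/openfoam:7.0",
                command=["simpleFoam", "-case", "/data/motorbike"],
            )]))))

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```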
In the meantime, many container environments have been implemented using infrastructure and configuration management tools like Terraform and Puppet, or by building cloud-specific HPC integrations into existing portals. With the advent of Kubernetes, however, container environments have become easier to maintain and much more dynamic: rolling out a cluster, rescaling the worker nodes, maintaining a constant set of preemptible instances, and high availability are all driven by controllers which continuously steer the cluster toward the desired state. The major HPC gaps of Kubernetes have also been closed. Distributed memory/MPI jobs can now be supported in any Kubernetes environment by running an HPC workload manager integration inside the HPC containers, which allows traditional HPC applications to run without any changes. GPU and non-GPU enabled applications based on ANSYS and COMSOL have been launched successfully through a high-performance, GPU-enabled desktop running inside a pod. Once logged in to the desktop, the engineer can submit batch jobs or single MPI applications which are distributed across a set of pods allocated on multiple nodes.
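To make the desktop piece concrete: a pod can request a GPU through the device-plugin resource name nvidia.com/gpu. The following is a minimal sketch, assuming the NVIDIA device plugin is deployed on the cluster nodes; the image, port, and sizing are hypothetical.

```python
# Sketch: a GPU-enabled remote-desktop pod (hypothetical image; assumes the
# NVIDIA device plugin exposes the nvidia.com/gpu resource on the nodes).
from kubernetes import client, config

config.load_kube_config()

desktop = client.V1Pod(
    api_version="v1", kind="Pod",
    metadata=client.V1ObjectMeta(name="engineer-desktop"),
    spec=client.V1PodSpec(containers=[client.V1Container(
        name="desktop",
        image="example.com/hpc-desktop:latest",  # hypothetical remote desktop
        ports=[client.V1ContainerPort(container_port=5901)],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1", "cpu": "8", "memory": "32Gi"}),
    )]))

client.CoreV1Api().create_namespaced_pod(namespace="default", body=desktop)
```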
Conclusions
Kubernetes supports not only microservice-based enterprise applications but also self-service engineering HPC applications. In summary, as this research has shown, the key advantages of using Kubernetes as a foundation for running containerized engineering applications are:
- Unified application stack available on virtually any infrastructure
- True hybrid cloud usage scenarios for engineering workloads: for engineers it is transparent where the application runs, be it on-premises or in the cloud, which also means the best available performance, since the newest and fastest machines in the cloud can always be allocated
- Building and resizing a self-contained HPC application and compute cluster as a self-service for the engineer, limited only by cloud quotas and the budget per time period
- Robust management stack, supported by many cloud providers
- Optimized costs: paying only for what is used, with no idle resources that need to be allocated before use
- High security through self-contained dedicated compute clusters
- Minimal operational overhead through self-provisioning and disposable components, for which updates are simple destroy-and-re-create commands
- Kubernetes-based workloads are easier to integrate into widely adopted continuous integration and deployment solutions (like Tekton, Concourse, or future versions of Jenkins)
In this research, container-based HPC application environments have been implemented on top of Kubernetes (e.g., on Google GCP and Amazon AWS) and used as self-service test environments which can be deployed from scratch by HPC application specialists rather than operators. They have also been used in CI/CD pipelines to automatically build test environments which run tests against existing container solutions and shut down the infrastructure afterwards. In customer environments, the IT group benefits from an easier-to-maintain system built on a supported, managed Kubernetes, where computing resources can be ramped up, resized, and deleted within minutes.
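As an illustration of such disposable test environments, here is a minimal sketch of a CI/CD step, scoped to a namespace rather than a whole cluster for brevity; the namespace, job, and image names are hypothetical. It creates an isolated namespace, runs the test suite as a Job, and deletes the namespace afterwards, which removes everything it contained.

```python
# Sketch: ephemeral test environment in a CI/CD step (hypothetical names).
import time
from kubernetes import client, config

config.load_kube_config()
core, batch = client.CoreV1Api(), client.BatchV1Api()
NS = "hpc-test-1234"  # e.g., derived from the CI build number

# 1. Create an isolated, disposable environment.
core.create_namespace(client.V1Namespace(
    metadata=client.V1ObjectMeta(name=NS)))

# 2. Run the container test suite as a Job.
batch.create_namespaced_job(NS, client.V1Job(
    api_version="batch/v1", kind="Job",
    metadata=client.V1ObjectMeta(name="container-tests"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="tests", image="example.com/hpc-tests:latest")])))))

# 3. Wait for completion, then tear everything down with the namespace.
while True:
    status = batch.read_namespaced_job_status("container-tests", NS).status
    if status.succeeded or status.failed:
        break
    time.sleep(10)
core.delete_namespace(NS)
```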
About the Authors
Daniel Gruber, Burak Yenier, and Wolfgang Gentzsch are with UberCloud, a company that started in 2013 developing HPC container technology and containerized engineering applications to facilitate access to and use of engineering HPC workloads in shared on-premises or on-demand cloud environments. In this article and in the part-one article published on HPCwire last September, they describe their experiences during the last 12 months using UberCloud HPC containers on Kubernetes.