Making Clouds Fly

By Gabor Samu, IBM HPC Solutions

November 7, 2018

There comes a point where good ideas are not enough; workable, proven solutions must take flight.

This is the place where cloud computing finds itself today, especially when the conversation turns toward high performance computing (HPC).

The vast majority of enterprises these days are leveraging the agility, cost advantages, and convenience offered by the public cloud services model. HPC facilities are also exploring the benefits of cloud, but perhaps not for the same reasons. Whereas more traditional enterprises are most often looking to public cloud services providers for backup and disaster recovery solutions or the option to lower storage costs by parking seldom-accessed data sets on less expensive cloud storage, HPC facilities are more likely to be exploring the possibilities of accommodating spikes in demand by dynamically adding and subtracting cloud-based resources.

These all sound like very good ideas, but making them work, simply and reliably, has been a real challenge. Cloud compute resources, unlike cloud storage, can be more expensive than on-premises infrastructure. Security is always a consideration. Moving enormous HPC data sets over networks can be prohibitively slow and cumbersome. And you need applications that can perform acceptably on cloud-based resources.

When off-site cloud resources are integrated with on-premises computing, the result is called a hybrid cloud architecture. There is no one correct way to construct an HPC hybrid cloud computing cluster. Every site is different, and users run workloads with very different resource consumption patterns: some deep learning training jobs, for example, consume huge amounts of data to produce a small model, while other workloads are primarily CPU or GPU intensive. The hybrid cluster actually implemented will of course be driven by the workloads HPC users want to run, but two hybrid architectures often rise to the top when IT architects tackle the challenge of making cloud fly for HPC: the "stretch cluster" configuration and the "multi-cluster" configuration. Both give organizations operating traditional HPC environments dynamic access to extra cloud capacity when it's really needed to accommodate spikes in demand.

A stretch cluster means that you extend or augment your on-premises HPC compute cluster, when needed, with additional compute nodes at another location, either a second on-premises site or a cloud. Formally, it is a single cluster stretched over a wide area network (WAN): compute nodes at the remote location or in the cloud communicate with a master scheduling host at the originating location. Though much simpler in concept than a multi-cluster, this design means that all communication and coordination with the master scheduler happens over the WAN, which can add cost and reduce reliability.
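
To make the idea concrete, here is a hedged sketch of how a stretch cluster's host list might look in an LSF cluster file. The host names and the "awshost" resource tag are purely illustrative, and the exact columns vary by LSF version; the deployment guide is the authoritative reference.

    # lsf.cluster.<clustername> -- illustrative sketch, not taken from the guide
    # On-premises and cloud hosts appear in one cluster; "!" means auto-detect.
    Begin Host
    HOSTNAME        model   type   server   RESOURCES
    onprem-master   !       !      1        (mg)
    onprem-node01   !       !      1        ()
    aws-node01      !       !      1        (awshost)
    aws-node02      !       !      1        (awshost)
    End Host

Because it is still a single cluster, users keep submitting jobs exactly as before; the scheduler simply has more (remote) hosts on which to place work, with the WAN trade-off described above.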

A multi-cluster hybrid cloud configuration is a more complex architecture, because it adds a master scheduler running in the cloud. By including this additional master scheduler in the cloud, the architecture eliminates much of the communication from cloud compute nodes to the on-premises master. The two master schedulers instead exchange task meta-data in a “job forwarding” model. In this model, users on-premises submit application workloads to a queue on-premises, which in turn forwards that workload to the cloud for execution. Upon job completion, the scheduler running in the cloud synchronizes the job status with the on-premises environment and a notification is provided to the user.
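
As a hedged sketch of the job forwarding model in LSF MultiCluster terms (the queue and cluster names below are invented for illustration), the on-premises cluster defines a queue that forwards work, and the cloud cluster defines a queue that receives it:

    # lsb.queues on the on-premises (submission) cluster -- illustrative names
    Begin Queue
    QUEUE_NAME  = cloudburst
    SNDJOBS_TO  = rcv_from_onprem@awscluster    # forward jobs to the cloud-side scheduler
    PRIORITY    = 30
    DESCRIPTION = Forwards workload to the cloud cluster for execution
    End Queue

    # lsb.queues on the cloud (execution) cluster -- illustrative names
    Begin Queue
    QUEUE_NAME   = rcv_from_onprem
    RCVJOBS_FROM = onpremcluster                # accept jobs forwarded from on-premises
    PRIORITY     = 30
    DESCRIPTION  = Runs jobs forwarded from the on-premises cluster
    End Queue

From the user's point of view nothing changes: a submission such as "bsub -q cloudburst ./my_app" goes to the on-premises queue, is forwarded to the cloud, and has its status synchronized back when the job completes.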

The question becomes, what are the deployment, configuration, and optimization details for HPC stretch and multi-cluster hybrid cloud solutions? How do I actually make this thing fly? A recent announcement from IBM provides the answers. In an important step along the pathway to workable, reliable, easy-to-use hybrid cloud solutions, IBM has developed and released a deployment guide that facilitates the use of IBM Spectrum LSF with Amazon Web Services (AWS). This LSF cloud deployment guide builds on an existing relationship with AWS. IBM is providing expertise, services, and management capabilities that will give IBM Spectrum LSF users fast, flexible access to AWS offerings.

The new deployment guide provides information for building a wide range of customizable stretch and multi-cluster IBM Spectrum LSF configurations that enable users to more easily and effectively gain the agility, cost advantages, and convenience offered by cloud computing.

IBM Spectrum LSF is part of a comprehensive Suite of solutions supporting traditional HPC and high throughput environments, as well as big data, Artificial Intelligence, GPU, machine learning, and containerized workloads, among many others. IBM Spectrum LSF, the core of the Suite, is a powerful workload management platform for demanding, distributed HPC environments. It provides a complete set of intelligent, policy-driven scheduling features that help maximize utilization of compute infrastructure resources while optimizing application performance. LSF is the HPC workload management standard, with the most complete set of capabilities – from license scheduling and session scheduling to advanced analytics.

IBM Spectrum LSF Suite comes in three editions and includes a number of additional components such as IBM Spectrum LSF Resource Connector, which enables policy-driven cloud bursting to all major cloud services, including IBM Cloud, AWS, Google, and Azure. And now, IBM is offering variable use licensing for IBM Spectrum LSF, which means LSF users can run the solution almost anywhere and pay only for what they use.
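
To give a flavor of what that policy-driven bursting looks like in configuration terms, here is a hedged sketch only: the queue name, resource tag, and every value in the template are made up for illustration, and the field names follow the general shape of the AWS provider template format, so check the deployment guide for the definitive syntax.

    # lsb.queues -- let this queue borrow Resource Connector (cloud) hosts
    Begin Queue
    QUEUE_NAME  = aws_burst
    RC_HOSTS    = awshost        # resource tag carried by provisioned AWS hosts
    PRIORITY    = 40
    DESCRIPTION = Bursts to AWS when on-premises capacity is exhausted
    End Queue

    # awsprov_templates.json -- describes the instances LSF may provision (values invented)
    {
      "templates": [
        {
          "templateId": "aws-c5-ondemand",
          "maxNumber": 10,
          "imageId": "ami-0123456789abcdef0",
          "vmType": "c5.4xlarge",
          "subnetId": "subnet-0123456789abcdef0",
          "securityGroupIds": ["sg-0123456789abcdef0"],
          "keyName": "lsf-rc-key",
          "attributes": {
            "type":    ["String", "X86_64"],
            "ncpus":   ["Numeric", "16"],
            "awshost": ["Boolean", "1"]
          }
        }
      ]
    }

With something along these lines in place, LSF can launch instances from the template when pending demand warrants, run the work, and release the instances when they go idle, which is exactly the pay-for-what-you-use pattern the variable use licensing is meant to complement.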

The cloud offers many benefits, but constructing effective hybrid cloud solutions isn’t so simple – until now. With the leading-edge functionality provided by IBM Spectrum LSF and the help of the new LSF deployment guide for AWS, HPC facilities are much more ready to make cloud fly for them.

Learn about leading-edge HPC and AI solutions at the IBM booth next week at SC18 (booth #3433) in Dallas, Texas. Register here for technical briefings and user group sessions.

Read more about dynamic hybrid cloud with IBM Spectrum LSF at the IBM IT Infrastructure blog – IBM Spectrum LSF Goes Multicloud.

A closer look at the deployment guide and supporting materials for building IBM Spectrum LSF hybrid-cloud configurations with Amazon Web Services can be found here.
