Cloud technologies have become integral to a number of services, including video streaming, file sharing, and social media. But when it comes to HPC applications, the benefits enjoyed by those services don’t always translate. This leaves some HPC users preferring private clusters sans virtualization, or straight-up supercomputers. Yet there have been quite a few high-profile accounts of users running HPC workloads on public clouds like Amazon Web Services and Microsoft Azure. Does this mean it’s a good idea to run HPC applications in the cloud? Cluster Monkey’s Douglas Eadline addresses this question and points out which HPC applications are best suited for today’s multi-purpose clouds.
In the HPC space, users prefer the following features:
- Bare metal infrastructure
- Tuned hardware and storage
- Batch Scheduling
- Userspace Communication (applications bypassing the OS kernel)
Cloud systems, by contrast, are built around a different set of characteristics:
- Immediate Access
- Large Capacity
- Available Software
- Virtualization
- Service Level Agreements
Eadline notes that where the two spaces overlap is in the need for scalability and redundancy through hardware independence, but otherwise these are very distinct worlds. HPC users have limited choice regarding what hardware they can use, and if they decide to use a service like Amazon, their resources will be virtualized. Furthermore, applications that lean heavily on a system’s interconnect and call for high levels of I/O would suffer serious performance degradation.
Despite these differences, some HPC applications are good candidates for the cloud. Known as embarrassingly parallel (EP) applications, these programs do not require high interconnect performance and can run on 10 Gigabit Ethernet or even Gigabit Ethernet. Within this EP category is a subset of programs that also don’t require especially high I/O rates; this group is typically well-suited for traditional cloud implementations.
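The EP pattern described above can be sketched in a few lines: each work item is processed independently, with no communication between tasks, which is why interconnect speed is largely irrelevant. This is a minimal illustration only; the `score` function is a hypothetical stand-in for real per-item work such as evaluating one candidate compound.

```python
# Minimal sketch of an embarrassingly parallel (EP) workload: tasks only
# fan out and collect results, never talking to one another, so such jobs
# tolerate slow (or commodity Ethernet) cluster interconnects.
from concurrent.futures import ProcessPoolExecutor


def score(item: int) -> float:
    # Hypothetical placeholder for compute-heavy, independent work
    # on one item (e.g., scoring one drug compound).
    return (item * 2654435761 % 2**32) / 2**32


def run(items):
    # Each input is scored in its own process, entirely independently.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(score, items))


if __name__ == "__main__":
    results = run(range(8))
    print(f"best score: {max(results):.4f}")
```

Because no task depends on another, the same structure scales from a laptop to a 50,000-core virtual cluster simply by spreading the input list across more workers.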
Earlier this year, Cycle Computing ran one such application on Amazon’s EC2 infrastructure. The program, called Glide, was used to test various potential cancer drugs for pharmaceutical firm Schrödinger and their research partner Nimbus Discovery. A 50,000-core virtual cluster named “Naga” was spun up for three hours and used the application to test 21 million drug compounds. The project processed over twelve-and-a-half years of compute work in just three hours. Total cost? $4,900. By comparison, had the end users built their own facility and purchased a comparable supercomputer, it would have cost between $20 and $30 million.
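A quick back-of-envelope calculation, using only the figures quoted above, puts those numbers in perspective (actual per-core pricing on EC2 varied by instance type, so this is a rough sketch, not Cycle’s billing):

```python
# Rough cost arithmetic for the "Naga" run, from the article's figures.
cores = 50_000   # size of the virtual cluster
hours = 3        # wall-clock duration of the run
cost = 4_900     # total cost in USD

core_hours = cores * hours                 # 150,000 core-hours
cost_per_core_hour = cost / core_hours     # roughly 3 cents per core-hour

print(f"{core_hours:,} core-hours at ~${cost_per_core_hour:.4f} each")
```

At roughly three cents per core-hour, the gap between a $4,900 run and a $20–30 million facility is easy to see.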
Cycle’s example demonstrates the extreme savings an HPC user can realize with cloud services, but not all applications can be handled in the same fashion on a service like Amazon. AWS, however, is not the only cloud in town.
Some infrastructure providers, like Penguin Computing, SoftLayer, and Zunicore, focus on creating HPC-friendly cloud environments. These companies offer bare metal services, which allow end users to access resources without virtualization layers. Although these features are attractive for HPC applications, users need to analyze their requirements carefully before migrating to the cloud.