In discussions of cloud computing, as with most discourse today, strong opinions are par for the course. Cloud is either the way of the future or an unrealistic marketing ploy, and when the topic turns to running HPC applications in the cloud, viewpoints are, if anything, even more contentious.
Questions such as “should I rent or buy?” don’t mean much without a specific example to refer to, and this is true regardless of the application being considered. The best answer usually starts with “it depends…”
A recent article from Jeff Layton, a well-known industry technologist, demonstrates a more balanced approach to the subject of HPC cloud. He writes:
I don’t think the shift to cloud computing for some HPC applications is happening because a CIO or director of research computing watched a cloud computing commercial and thinks it sounds really cool. Rather, I think HPC has existing workloads that fit well into cloud computing and can actually save money over traditional solutions. Perhaps more importantly, I think HPC has evolved to include non-traditional workloads and is adapting to meet those workloads, in many cases using cloud computing to do so.
Layton backs up his point with two examples. The first scenario is massively concurrent runs. At an HPC center, a group of researchers needs to examine 25,000 to 30,000 different data sets as part of a parameter sweep. Most of the time, these are applications that take only a couple of minutes and do not produce much data. A second group of OS and security researchers is exploring different simulations with different inputs. The process involves running thousands of jobs simultaneously and then sifting through the results. Both groups need somewhere between 50,000 and 100,000 cores to run their applications. The important metric for them is core count, not per-core performance, says Layton.
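The workload Layton describes is what practitioners call embarrassingly parallel: thousands of short, independent jobs with no communication between them, which is exactly the shape that maps cleanly onto rented cloud cores. A minimal local sketch of the pattern follows; the `simulate` function and the parameter range are hypothetical stand-ins, and a real sweep would fan the jobs out across cloud instances or a batch scheduler rather than a thread pool on one machine.

```python
# Sketch of an embarrassingly parallel parameter sweep: each job is
# short and independent, so jobs can be fanned out to as many workers
# (cores, nodes, or cloud instances) as are available.
from concurrent.futures import ThreadPoolExecutor

def simulate(param):
    # Hypothetical stand-in for a few-minute simulation run; here it
    # is just a trivial computation on the input parameter.
    return param, param * param

def run_sweep(params, workers=4):
    # Fan the independent jobs out across a pool of workers and
    # gather the (parameter, result) pairs into a dictionary.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(simulate, params))

if __name__ == "__main__":
    results = run_sweep(range(8))
    print(results)  # {0: 0, 1: 1, 2: 4, ...}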
Layton adds that he originally thought this HPC center had a unique workload, but he has since encountered more users with similar requirements. “Although this might not describe your particular workload,” writes Layton, “a number of centers fit this scenario, and this number is growing rapidly.”
The second use case is Web services. No, this isn’t an HPC use case in the traditional sense, but there is, according to Layton, an “increasing need for hosting servers for classes or training, for websites (internal and external), and for other general research-related computing in which the applications are not parallel or might not even be ‘scientific.’” Layton says some people have dubbed this “Ash and Trash computing,” to distinguish it from bread-and-butter HPC apps. [Editor’s note: A Google sanity-check came up short on the term “ash-and-trash” in the context of IT speak, but according to this Vietnam War jargon website, the term referred to any type of non-combat aviation mission.]
Layton goes on to outline several scenarios where renting cycles on-demand makes more sense than doing the work in-house. Using a real-world example from Cycle Computing, he determines that “cloud computing works out to half the cost of a dedicated [virtualized] system for these workloads.”
One of the take-home points from this article is that HPC workloads are changing. They have a different set of characteristics and requirements compared to traditional HPC applications, and in many cases it proves advantageous to run these workloads in an off-site (public) cloud. Benefits include reduced cost and the freeing up of on-site resources for traditional HPC workloads.
Layton writes:
“At first it was fairly easy to dismiss cloud computing for traditional HPC workloads. The ‘HP,’ after all, stands for ‘high performance,’ and doing anything to reduce performance is counterproductive. You are paying more and getting less. However, new workloads are being added to HPC all of the time that might be very different from the classic MPI applications in HPC and have different characteristics. The amount of computation in these new workloads is increasing at an alarming rate – so much so that I think HPC is giving way to RC (research computing).”
Many of these research computing applications share similar resource requirements. Productivity for this class of workloads has less to do with improving per-core performance and more to do with running many instances of the application at once – something that the public cloud, with its illusion of unlimited cycles, does well. As is so often the case in HPC, the optimal approach is the one that best fits the application at hand.