Weighing the Cloud ROI for High Performance Computing
This week Forrester analyst James Staten examined the economics of cloud computing for high-performance computing users based on the research firm’s review of 30 enterprise HPC cloud deployments.
Just as with almost anything related to high-performance computing, be it in the ether or on a cluster down the hall, the answer to whether the investment makes good sense is the following:
“It all depends…”
Given the diversity of applications and user needs, it’s difficult for anyone to come up with a one-size-fits-all approach to ROI questions. Still, Staten notes that “it is clear that the ROI of cloud for HPC is heavily dependent on how your application scales and how rapidly you can enter and leave the cloud.”
That is useful advice—but it’s hopelessly general, especially for a range of applications that hinge on the idea of customization and specificity.
It seems to be getting harder for analysts to simply tell the eager throngs of HPC users that, unfortunately, serious legwork is involved: there are no quick and easy answers that fit every application or user.
Between benchmarking applications, determining ideal hardware configurations, and the host of other steps behind evaluating investments in any compute power, physical or in the mysterious clouds, assessing possible cloud ROI is a process. Sometimes a long one.
While many specifics vary with the diversity of applications, hardware requirements, and broader user needs, ROI for HPC cloud investments is, in Staten’s view, driven by transiency and elasticity. These make up two of the only real hard-and-fast rules (for many parallel codes, anyway) that potential users have to go on.
The better an application scales, the greater the ROI. The “power of cloud economics pays off best with applications that can scale out massively and then scale back down to zero. This means workloads that use a lot of parallel processing…fit best.” Of course, this doesn’t work for every scenario.
Certainly the transiency issue is going to affect cost—since you pay to play, the longer you’re in the game, the more you’re going to need to invest.
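The pay-to-play point can be made concrete with a back-of-the-envelope break-even calculation. The sketch below is purely illustrative: the hourly rate, cluster cost, node count, and amortization window are all assumed figures, not real provider or hardware prices.

```python
# Hypothetical break-even sketch: cumulative pay-per-use cloud spend versus
# the fixed cost of owning a cluster outright. All figures are assumptions
# for illustration only.

CLOUD_RATE = 1.60              # $ per node-hour (assumed)
CLUSTER_COST = 500_000         # purchase + operating cost over its life (assumed)
CLUSTER_NODES = 100            # size of the hypothetical owned cluster
LIFETIME_HOURS = 3 * 365 * 24  # 3-year amortization window, in hours

def cloud_spend(node_hours: float) -> float:
    """Total cloud bill after consuming the given number of node-hours."""
    return node_hours * CLOUD_RATE

def breakeven_node_hours() -> float:
    """Node-hours at which cumulative cloud spend equals owning the cluster."""
    return CLUSTER_COST / CLOUD_RATE

hours = breakeven_node_hours()
utilization = hours / (CLUSTER_NODES * LIFETIME_HOURS)
print(f"Break-even at {hours:,.0f} node-hours "
      f"({utilization:.0%} utilization of the owned cluster)")
```

Under these made-up numbers, a shop that keeps the cluster busy past that utilization level is better off owning; a transient user who stays well below it is better off renting, which is exactly the dependence on "how rapidly you can enter and leave the cloud" that Staten describes.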
For many HPC applications, adding as much compute power as possible is a quick way to arrive at the end result, so traditional installations have placed a significant focus on upfront infrastructure costs. Staten notes, however, that since cloud computing provides a near-limitless supply of that much-needed computational muscle, the pay-per-hour economic model “encourages use of more resources per hour than fewer resources over more hours—a direct hit on the needs of HPC workloads.”
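The arithmetic behind that observation is simple to sketch. Under pay-per-hour pricing, the bill tracks total core-hours consumed, so a perfectly parallel job costs the same whether it runs wide and fast or narrow and slow; the wide run just delivers results sooner. The rate below is an assumed figure for illustration, not a real provider price.

```python
# Hypothetical illustration: with per-hour billing, cost is proportional to
# total core-hours, so scaling out a perfectly parallel workload buys speed
# at no extra cost.

RATE_PER_CORE_HOUR = 0.10  # $ per core-hour (assumed, not a real price)

def cloud_cost(cores: int, hours: float) -> float:
    """Total bill for running `cores` cores for `hours` each."""
    return cores * hours * RATE_PER_CORE_HOUR

TOTAL_CORE_HOURS = 1000  # total work in a perfectly parallel job

narrow = cloud_cost(cores=10, hours=TOTAL_CORE_HOURS / 10)     # 100 h wall time
wide = cloud_cost(cores=1000, hours=TOTAL_CORE_HOURS / 1000)   # 1 h wall time

print(f"10 cores for 100 hours: ${narrow:.2f}")
print(f"1000 cores for 1 hour:  ${wide:.2f}")
```

Both runs cost the same under this model, which is why the economics favor workloads that can scale out massively and then scale back to zero, and why poorly scaling codes see far less benefit.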
What analysts might consider is an extensive HPC application benchmarking process: pull together 30 applications, run them in a series of differently configured cloud environments, then repeat the same process on bare metal. Follow that up with an analysis of not only how the ROI plays out, but how long it takes to see results, what degree of hassle is shed or gained (which should also factor into ROI considerations), and finally, what the users had to say about the fruits of these labors.
There are no hard and fast rules for HPC cloud implementations, thus ROI evaluations are going to be far more work than one would have to plow through before the delivery of a fully integrated physical system.
Full story at Computer Weekly