When HPC Becomes a Service: The POD Example
It might have been difficult a few years ago to imagine offering HPC as a service, but we are now living in the era of everything-as-a-service, so why should high performance computing be exempt? As enterprises and research institutions face dramatic compute demands that don't justify maintaining a cluster of their own, the cloud is a hot topic, and for good reason. Even with all of the discussion about security (much of which is nebulous, treating security as an overarching, non-specific concern), the move to the cloud becomes something of a no-brainer, particularly when budgets require the scalability and cost flexibility that cloud computing provides.
If you were to search for HPC as a service, one of your first results would be the trademark owner of that very phrase: Penguin Computing. The company's on-demand offering, called POD, has gained significant traction in academia, manufacturing, and aerospace, and, like many other long-time players in the HPC space, the company had a booth at ISC. I sat down with Penguin Computing's manager of software development to discuss the concept of HPC as a service and the cloud in general, as well as to get some insights about POD and its uses and applications. In this case, the discussion centered on a biosciences firm, Life Technologies, which has used POD in much the same way that other life sciences companies are using similar offerings from other vendors, a topic that will be addressed in later updates from the ISC floor.
While other companies offer roughly the same service as Penguin Computing, it was worthwhile to hear about some end user experiences and get a sense of who is looking beyond an in-house solution in order to avoid the time and management burden of maintaining an on-site cluster that sits idle or underutilized much of the time. Since HPC as a service is inherently scalable, it stands to reason that the heaviest users of a service like POD would be research institutions and industry sectors that require big data processing, but in unexpected loads, bursts, or cycles that would otherwise be hard to predict, or hard to justify investing in for occasional use.
At ISC this year, Penguin Computing and the other cloud vendors willing to talk in detail about their end users consistently pointed to bioscience, R&D, and financial services, in that order. One reason the cloud calls to these sectors is that provisioning large-scale number crunching on the basis of anticipated maximum need would not only waste compute resources but also be incredibly expensive.