Linux cluster maker Penguin Computing hopped on the HPC-in-a-cloud bandwagon this week with the announcement of its HPC on-demand service. Called Penguin On Demand (POD), the service consists of an HPC compute infrastructure whose capacity can be rented on a pay-as-you-go basis or through a monthly subscription.
As it exists today, the POD infrastructure consists of 1,200 Xeon cores spread over a number of clusters at a single facility. Penguin offers a choice of GigE or DDR InfiniBand interconnects, plus the option to tap into NVIDIA Tesla GPU computing hardware. By cloud standards the number of cores is tiny, but since Penguin also builds systems for a living, the company could scale up the infrastructure relatively quickly if customer demand warranted additional capacity.
According to Penguin, the on-demand facility has sufficient bandwidth to allow reasonably large data files to be transferred directly to POD over the Internet. The company also offers a “disk caddy” service for moving 1 TB+ files overnight: the disks are provided as part of the service, are owned by the customer, and are returned once the data has been transferred to POD storage.
The software stack consists of CentOS, a community-supported OS based on Red Hat Enterprise Linux, as well as the company’s Scyld ClusterWare cluster management software. “Scyld enables us to rapidly provision a set of compute nodes for our customers based on their demand — so we can scale up and scale down efficiently,” says Penguin Computing CEO Charles Wuischpard.
Penguin is aiming POD at a variety of HPC verticals. According to Wuischpard, the initial interest came from the life sciences sector, but the company has recently seen interest from a number of Fortune 500 manufacturing companies and some smaller hedge fund firms.
Users with in-house Penguin systems can get access to the POD service via the Scyld software suite. Since Scyld ClusterWare includes TORQUE and offers a scheduling package called TaskMaster, policies in the scheduling software can be set such that when a particular threshold is reached, jobs submitted on the local resource are automatically redirected to the POD system.
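From the user's side, that bursting behavior is transparent: the same TORQUE job script gets submitted either way, and the scheduler's policy decides where it runs. A minimal sketch of such a script follows; the `#PBS` directives are standard TORQUE, but the job name, resource requests, and application are illustrative, and the threshold-based routing itself lives in the TaskMaster policy configuration, not in the script.

```shell
#!/bin/bash
#PBS -N md_sim                 # job name (illustrative)
#PBS -l nodes=4:ppn=8          # request 4 nodes, 8 cores per node
#PBS -l walltime=02:00:00      # 2-hour run-time limit

cd "$PBS_O_WORKDIR"            # start from the directory of submission
mpirun -np 32 ./my_mpi_app input.dat
```

Submitted with `qsub`, a job like this would normally land on the local cluster; under a bursting policy, TaskMaster could redirect it to POD once local utilization crosses the configured threshold.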
Unlike generic cloud computing set-ups like Amazon’s EC2, user applications run directly on the compute nodes without virtualization in order to maximize performance. “POD is geared strictly towards applications that thrive in an HPC environment and would otherwise be starved for performance on a virtualized cloud computing environment,” explains Wuischpard.
In that sense, it’s not really a cloud in the classic sense (if there is such a thing), but rather a dedicated infrastructure built for on-demand HPC. In fact, the model used by Penguin is the same as most HPC on-demand offerings, such as IBM’s Computing On Demand service and R Systems’ dedicated hosting service. Thus far, a virtualized purpose-built HPC cloud with elastic capacity has yet to appear.
At the hardware level, the biggest criticism of general-purpose clouds is that they lack the low-latency interconnects so important to tightly coupled MPI applications. As Ian Foster pointed out recently, for short-running HPC applications this may not be much of an issue. But for codes expected to execute for hours, days, or even longer, fast server-to-server communication is all but mandatory. Since at least some of the POD hardware includes InfiniBand-equipped servers, the service has a natural advantage here.
Setting up a POD account requires some initial hand-holding from Penguin technical staff, who will help set up the compute environment, explain the account management features, and answer any questions. After that, the POD service can be accessed via SSH to run user applications directly. If a customer requires more assistance, Penguin techies are available (via the Customer Portal) to help with issues that come up or to help users squeeze more performance from their codes.
According to Penguin, their on-demand service is priced to provide a significant improvement in price-performance for HPC applications when compared to running on traditional cloud computing offerings. (The implication is that you will pay more per CPU-hour than for, say, EC2, but better performance will more than offset the price premium.) “Users pay only for the core hours that they use,” says Wuischpard. “Monthly contracts are available, which provide for a reduction in the average cost per core hour. And yes, we do have the concept of ‘roll-over’ hours!”
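The billing model described above is simple enough to sketch in a few lines. The rate and prepaid-hour figures below are hypothetical, used only to illustrate the pay-per-core-hour and roll-over mechanics; Penguin has not published specific POD pricing here.

```python
def pod_cost(core_hours_used, rate=0.25, prepaid=0):
    """Return (dollars owed, unused prepaid core-hours that roll over).

    rate and prepaid are illustrative placeholders, not Penguin's figures.
    """
    billable = max(core_hours_used - prepaid, 0)   # only hours beyond the prepaid block are charged
    rollover = max(prepaid - core_hours_used, 0)   # unused prepaid hours carry to the next month
    return billable * rate, rollover

# Pay-as-you-go: 64 cores for 10 hours = 640 core-hours.
print(pod_cost(640))                 # (160.0, 0)

# Monthly contract with 1,000 prepaid core-hours: 360 hours roll over.
print(pod_cost(640, prepaid=1000))   # (0.0, 360)
```

The point of the monthly contract, per Wuischpard, is that the effective per-core-hour rate drops and unused hours are not forfeited.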
At this point, Penguin is not offering SLAs or QoS guarantees in the general offering. But, according to Wuischpard, these could be implemented if a customer has such a requirement. He says they do guarantee that if a job fails because of a POD hardware failure, then it can be rerun at no cost.
From a business point of view, the OEM-as-cloud-provider will be an interesting model to follow. If margins continue to shrink on commodity-based clusters, selling compute on-demand services may offer a natural way to tap into new revenue streams. As pointed out by many cloud gazers, the largest compute utility today is essentially being run out of the back of a bookstore. Renting CPU cycles from a system vendor would seem at least as reasonable.