Amazon’s cloud platform got a high-performance boost this week with the announcement of its Cluster Compute Instances (CCI). CCI specifically targets HPC workloads, incorporating high-end CPU horsepower and a low-latency interconnect fabric into the company’s popular EC2 on-demand computing offering. The new capability welcomes HPC into the most well-recognized public cloud in the world.
In a nutshell, the new offering is based on a new EC2 instance under the CCI category: the Cluster Compute Quadruple Extra Large Instance, which, for the sake of brevity, I’m going to refer to as the HPC instance. It consists of a dual-socket Intel Xeon X5570 (2.93 GHz, quad-core) server, virtualized, with 23 GB of memory and 1,690 GB of external storage. Servers are connected via a 10 Gigabit Ethernet network. The HPC instance is the ninth EC2 instance type offered by Amazon and the only one that actually spells out the specific CPU and I/O fabric being employed. For the other eight instances, you are provided a generic notion of capability based on a specified number of EC2 compute units and a general metric for network I/O performance (moderate or high).
For users of the HPC instance, the default cluster size (aka the instance limit) is eight servers, providing 64 cores. That’s probably the sweet spot for the type of customer Amazon is going after — presumably middle-range HPC users with moderately scalable applications. But, as in any computing on-demand offering worthy of that title, capacity can be extended dynamically.
“An instance limit is only an initial limit and can be easily removed by sending us an email, just like any other Amazon EC2 instance,” said Deepak Singh, business development manager for Amazon Web Services (AWS), in an email to HPCwire. “Customers can provision instances in minutes and shut them down and restart as they need in a truly scalable and elastic environment.” The exact extent of this elasticity is something of a mystery, though. At this point, Amazon is not revealing how big a cluster can be devoted to a single customer.
It’s worth noting that Amazon has run Linpack on 880 of their HPC-style servers, reporting a performance result of 41.82 teraflops. That’s well into TOP500 territory (equivalent to the 146th slot on the June 2010 list). It’s also worth noting that, according to Intel, the peak performance of the Xeon X5570 is 46.88 gigaflops per CPU, which means the Linpack efficiency for the EC2 cluster is just a shade over 50 percent. That’s pretty much on par with vanilla GigE clusters, although the best 10 GbE clusters can hit 84 percent Linpack efficiency and most InfiniBand-based systems will be in the 70 to 92 percent range.
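The 50 percent figure follows directly from the numbers quoted above; a quick back-of-the-envelope check (all inputs taken from the article):

```python
# Linpack efficiency estimate for Amazon's 880-server EC2 run.
# All figures come from the article; Rpeak = servers x sockets x per-CPU peak.
servers = 880
sockets_per_server = 2
peak_per_cpu_gflops = 46.88      # Intel's peak figure for one Xeon X5570
linpack_tflops = 41.82           # Amazon's reported Linpack (Rmax) result

rpeak_tflops = servers * sockets_per_server * peak_per_cpu_gflops / 1000
efficiency = linpack_tflops / rpeak_tflops
print(f"Rpeak = {rpeak_tflops:.2f} TF, efficiency = {efficiency:.1%}")
# → Rpeak = 82.51 TF, efficiency = 50.7%
```

The same ratio computed for a good InfiniBand system (Rmax/Rpeak around 0.8 or better) shows how much performance the interconnect leaves on the table here.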
Most customers won’t care about unimpressive Linpack yields, but the numbers suggest that even the new HPC instance may behave less like a supercomputer than users might be expecting. Amazon has provided few details about the 10 GbE setup or how the Hardware Virtual Machine (HVM) virtualization scheme being employed might impact performance. And since there are no performance metrics publicly available for real applications, it’s too early to tell how traditional MPI codes will fare. To its credit, Amazon is being careful not to make claims it can’t demonstrate.
“During our private beta period, customers ran a variety of MPI codes, including MATLAB, in-house computational fluid dynamics software for aircraft and automobile design, and molecular dynamics codes for protein simulation like NAMD,” said Singh. “Our partners and AWS used standard benchmark packages like HPCC and IMB. Now that the service is available to the broad public, we expect an increased variety in the types of applications our customers will be running.”
The Magellan Cloud research team at the National Energy Research Scientific Computing Center (NERSC) was one of those beta customers and got a chance to test drive the new EC2 offering prior to this week’s official launch. They reported that a series of HPC application benchmarks “ran 8.5 times faster on Cluster Compute Instances for Amazon EC2 than the previous EC2 instance types.” But considering the lesser CPUs and GigE configurations on the non-HPC instances, that may end up being faint praise.
EC2 has surely left some room at the high end for more performant on-demand platforms, and for customers who require a greater level of HPC expertise than Amazon can muster. Experienced HPC vendors like IBM, SGI, Penguin Computing, and others are already staking out this territory. While those vendors may be gratified that a company like Amazon thinks the HPC on-demand model is ready for prime time, those same companies will now have to prove their offerings are better than Amazon’s.
Penguin Computing seems more than willing to make that case. From CEO Charles Wuischpard’s point of view, his company’s one-year-old Penguin On-Demand (POD) HPC rental service is clearly differentiated from Amazon’s new HPC offering. At the hardware level, POD offers more memory per core than EC2, InfiniBand connectivity, a GPU acceleration option, and Panasas-based parallel file storage.
But the big differentiator, according to Wuischpard, is the level of engineering support they’re able to provide. Every POD deal comes with its own HPC engineer, who makes sure the whole software stack — cluster management, network drivers, compilers, and so on — is configured correctly for the end-user applications. “The customers we have today are truly not computer scientists and we help them through the whole process,” said Wuischpard.
Unit pricing is somewhat comparable. POD charges $0.25 per core-hour for compute time, while Amazon offers one HPC instance (two quad-core CPUs) for $1.60 per hour. Both provide cost incentives for longer time commitments. But overall, Wuischpard thinks POD will offer better value than Amazon. It should be remembered that wall clock time is the key metric here. If an on-demand platform can run a given application twice as fast as its competitor, it has effectively cut its per-unit cost in half. “As long as we’re less expensive overall, I’m pretty comfortable with where we are,” said Wuischpard.
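To put the two list prices on the same footing, and to illustrate the wall-clock point, here is the arithmetic using the rates quoted above (the 2x speedup is a hypothetical, not a measured result):

```python
# Per-core-hour comparison from the list prices quoted in the article.
pod_core_hour = 0.25                    # POD's quoted rate, $/core-hour
ec2_instance_hour = 1.60                # one EC2 HPC instance, $/hour
ec2_cores = 8                           # two quad-core Xeon X5570s
ec2_core_hour = ec2_instance_hour / ec2_cores   # $0.20 per core-hour

# Wall-clock time is the key metric: a platform that runs the same job
# faster divides its effective per-job cost by the speedup factor.
# Hypothetical example: POD finishing a job 2x faster than EC2 would
# bring its effective rate below EC2's nominal one.
speedup = 2.0
pod_effective = pod_core_hour / speedup          # $0.125 per core-hour
print(f"EC2: ${ec2_core_hour:.3f}/core-hr, POD effective: ${pod_effective:.3f}/core-hr")
```

So on raw list price EC2 is the cheaper of the two per core-hour; POD's bet is that faster runtimes flip the effective cost.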