Comprehensively Evaluating HPC Cloud Cost Benefits
HP Labs partnered with the University of Illinois at Urbana-Champaign to comprehensively evaluate the feasibility of running high performance applications in the cloud. The research set out to answer several questions: how HPC applications fare in the cloud versus on supercomputers (the Ranger and Taub machines served as the baselines for those tests), which applications are best suited for cloud deployment, and what cost benefits certain organizations stand to gain by meeting their high performance computing needs in the cloud.
Below is a grid of all the platforms used in testing the various applications. As one can see, the Ranger and Taub systems appear alongside public and private cloud instances.
It is important to note the approach the research team took in setting up their cloud systems. While they could have built a dedicated instance that would perform closer to supercomputing standards, they reasoned that such an instance would be unrealistic for a mid-sized enterprise or startup looking to purchase on-demand HPC resources.
With that said, they still took steps to optimize the performance. “To get maximum performance from virtual machines, we avoided any sharing of physical cores between virtual cores. In case of cloud, most common deployment of multi-tenancy is not sharing individual physical cores, but rather done at the node, or even coarser level. This is even more true with increasing number of cores per server.”
They tested those cloud systems and the control supercomputers on a variety of applications, including Jacobi2D, used for scientific simulation and image processing; NAMD, a molecular dynamics application; ChaNGa, used for cosmology simulation; and the NQueens problem, among others.
The graphs above show how well the various machines’ performance scaled relative to the various applications. The applications that reportedly had trouble scaling were those that were communication intensive. “IS is a communication intensive benchmark and involves data reshuffling and permutation operations for sorting. Sweep3D also exhibits poor weak scaling after 4–8 cores on cloud. Other communication intensive applications such as LU, NAMD and ChaNGa also stop scaling on private cloud around 32 cores,” the report noted.
In all instances except for the public cloud, the EP, Jacobi2D and NQueens applications scaled up to 256 cores, while the public cloud imposed performance penalties once more than four cores were used.
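The scaling comparison the report draws can be illustrated with a short sketch. The timings below are hypothetical, not the report's data; the point is how parallel efficiency, T(1) / (p × T(p)), stays flat for a compute-bound code but collapses for a communication-bound one past a few cores.

```python
# Illustrative only: hypothetical execution times in seconds, NOT the report's data.
# Parallel efficiency = T(1) / (p * T(p)); near 1.0 means good scaling.

def efficiency(t1, tp, p):
    """Parallel efficiency on p cores, given 1-core time t1 and p-core time tp."""
    return t1 / (p * tp)

# A compute-bound code (EP-like) keeps shrinking its runtime as cores grow...
times_scalable = {1: 512.0, 4: 130.0, 16: 34.0, 64: 9.0, 256: 2.4}
# ...while a communication-bound code (IS-like) stops improving past a few cores.
times_comm_bound = {1: 512.0, 4: 150.0, 16: 90.0, 64: 85.0, 256: 88.0}

for p in sorted(times_scalable):
    e_scal = efficiency(times_scalable[1], times_scalable[p], p)
    e_comm = efficiency(times_comm_bound[1], times_comm_bound[p], p)
    print(f"{p:4d} cores: scalable eff {e_scal:.2f}, comm-bound eff {e_comm:.2f}")
```

With these made-up numbers, the communication-bound code's efficiency falls below a few percent by 256 cores, which is the shape of the curves the report describes for IS, LU, NAMD and ChaNGa on cloud.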
Once the performance drop-off on clouds was established (hardly a surprising result), the next task was to determine exactly what kind of penalty was suffered, so that it could be weighed against the cost of provisioning those systems and thereby determine whether the cloud is indeed a cost-effective means of securing HPC resources.
“To quantify the amount of variability on cloud and compare it with a supercomputer, we calculated the coefficient of variation (standard deviation/mean) for execution time of ChaNGa across 5 executions,” the report stated. According to the research team, the amount of variability increases as they scale up, a result of the decrease in granularity. “For the case of 256 cores at public cloud, standard deviation is equal to half the mean, implying that on average, values are spread out between 0.5x mean and 1.5x mean resulting in low predictability of performance across runs. In contrast, private cloud shows less variability.”
Overall, latency and bandwidth on the cloud came in a couple of orders of magnitude worse than on the Ranger and Taub machines, as shown in the logarithmic graphs below.
These bandwidth and latency issues make things difficult for the aforementioned communication intensive applications, where communication among cores and nodes is key to completing a problem.
Again, the researchers note that a dedicated public cloud instance would solve a great deal of these problems. However, such an instance would likely cost more and therefore become less feasible for the mid-sized companies and startups that would utilize it. The multi-tenancy cloud setup renders many high performance applications untenable. “The performance of many HPC applications is very sensitive to the interconnect, as we showed in our experimental evaluation. In particular low latency requirements are typical for the HPC applications that incur substantial communication. This is in contrast with the commodity Ethernet network (1Gbps today moving to 10Gbps) typically deployed in cloud infrastructure,” the report noted.
With that said, it is still prudent for those small-to-medium companies to enlist cloud-based HPC services, as the cost analysis below shows.
Even the communication intensive applications work well up to a certain number of cores, a count unlikely to be exceeded by a medium-sized institution. “The ability to take advantage of a large variety of different architectures (with different interconnects, processor types, memory sizes, etc.) can result in better utilization at global scale, compared to the limited choices available in any individual organization,” the report argued. Below is a sample of what such an architecture, relying on just four-core cloud-based machines, would look like.
The report does go on to say that dedicated instances would be advantageous to large institutions looking for burst capacity, a concept that has been discussed here.
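The cost trade-off described above can be sketched in a few lines. The prices and runtimes here are hypothetical, not the report's figures; the point is that a slower cloud run can still come out cheaper per job if its per-core-hour price is low enough.

```python
# Illustrative only: hypothetical prices and runtimes, NOT the report's figures.
# Cost per job = price per core-hour * cores * runtime in hours.

def job_cost(price_per_core_hour, cores, runtime_hours):
    """Total cost of one job on a given platform."""
    return price_per_core_hour * cores * runtime_hours

# A communication-light job on four cores: the cloud run is slower (3 h vs 2 h)...
cloud_cost = job_cost(0.10, 4, 3.0)
# ...but the dedicated machine's per-core-hour cost is assumed much higher.
super_cost = job_cost(0.45, 4, 2.0)

print(f"cloud: ${cloud_cost:.2f}, supercomputer: ${super_cost:.2f}")
```

Under these assumptions the cloud job costs a third of the supercomputer job despite running 50 percent longer, which is the kind of arithmetic behind the report's conclusion that small-to-medium organizations can come out ahead at modest core counts.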