Is Amazon’s ‘Fast’ Interconnect Fast Enough for MPI?
One of the hallmarks of HPC is a speedy interconnect. Amazon’s EC2 Cluster Compute instance runs on a 10 Gigabit Ethernet network, but is it fast enough for MPI applications?
HPC users wondering if Amazon’s virtual cluster is right for them just got some additional data points to consider, thanks to a series of MPI benchmark tests undertaken by Glenn K. Lockwood. A user services consultant at the San Diego Supercomputer Center, Lockwood ran the OSU microbenchmark suite on both Amazon’s EC2 Cluster Compute instances and a Myrinet 10GigE cluster.
The Point-to-Point MPI Benchmarks from Ohio State measure latency, bandwidth and bidirectional bandwidth. Lockwood ran each test five times and averaged the results, as represented by this chart:
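The averaging step is simple to script. As a minimal sketch, assuming OSU’s usual output format (comment lines prefixed with `#`, then two columns: message size in bytes and the measured value), the following Python averages per-size values across repeated runs; the sample strings are invented stand-ins for real benchmark output:

```python
from collections import defaultdict

def parse_osu(output):
    """Parse OSU benchmark output into {message_size: value} pairs,
    skipping the '#' header lines the suite prints."""
    results = {}
    for line in output.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        size, value = line.split()
        results[int(size)] = float(value)
    return results

def average_runs(runs):
    """Average per-message-size values across repeated runs."""
    sums = defaultdict(float)
    for run in runs:
        for size, value in parse_osu(run).items():
            sums[size] += value
    return {size: total / len(runs) for size, total in sorted(sums.items())}

# Hypothetical output from two runs of osu_latency (numbers are made up):
run_a = "# OSU MPI Latency Test\n# Size  Latency (us)\n1  55.2\n2  55.4\n"
run_b = "# OSU MPI Latency Test\n# Size  Latency (us)\n1  54.8\n2  55.0\n"
print(average_runs([run_a, run_b]))
```

The same parser works for the bandwidth and bidirectional-bandwidth tests, since they emit the same two-column layout.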
Discussing the results, Lockwood doesn’t beat around the bush.
“The numbers speak for themselves,” he writes. “EC2’s interconnect performance is not great, and the disparity only worsens when comparing EC2 to InfiniBand.” (He’s referring to Adam DeConinck’s blog, which compared Amazon’s Cluster Compute instances to QDR InfiniBand.)
In another experiment, Lockwood ran a quantum chemistry application across four EC2 Cluster Compute instances and again on the Myrinet reference architecture. The setups were otherwise identical, with two Intel Xeon E5-2670 processors and 60 GB of RAM. EC2 came up short again, by about 30 percent.
In a bonus trial, Lockwood put the EC2 cluster up against the 2007-era Blue Gene/P torus interconnect as well as the newer Myricom adapter. The graphed results show EC2 and Blue Gene/P on an equivalent trajectory, with Myrinet the clear winner, especially at larger message sizes.
While Lockwood’s report focuses primarily on point-to-point communications, he notes that the Collective and One-Sided Benchmarks did not work out any better for EC2.