Today Purdue University’s Coates Cluster, ranked at the #103 spot on the TOP500 supercomputer list, was declared the first native 10Gb Ethernet cluster system to make the honor roll, which means, of course, that every ranked cluster before this one has relied on the mighty InfiniBand to sate its low-latency imperatives.
There is little room to question that the purist side of the high performance computing community sees InfiniBand as the gold standard. After my initial surprise at the announcement of Amazon’s new HPC-inspired Cluster Compute Instances, which have enough power to place them at the equivalent of the #145 position on the TOP500 list, I figured the word “InfiniBand” would follow. It didn’t. Amazon instead went with 10GbE, a decision that has ruffled a few feathers because some still see it as inferior on the low-latency front.
In an interview with HPCwire’s Michael Feldman, Deepak Singh, Business Development Manager at Amazon Web Services, responded to a question many were asking once they’d had a day to sit with Amazon’s news: why did Amazon opt for a 10GbE network rather than, say, InfiniBand?
Singh replied that Amazon looked to the customer base to understand what technology options were best suited to their needs, saying, “we know that for HPC, microseconds matter. We specifically engineered Cluster Compute Instances with 10Gbps Ethernet bandwidth to give customers the low-latency network performance required for tightly-coupled, node-to-node communication. Cluster Compute Instances will provide more CPU than any other instance type and customers can expect to find the same performance provided by custom-built infrastructure but with the additional benefits of elasticity, flexibility and low per-hour pricing.”
When asked whether or not they had plans to add InfiniBand-networked clusters, Singh stated that Amazon would “continue to evaluate all technologies as we receive customer feedback on the new instance type,” which translates roughly into: no, not anytime soon, but we appreciate that you asked.
Amazon revealed a surprising amount of information about this new instance type, at least compared to its other releases, which offered just enough detail for users to form a rough idea (another big weakness in the EC2 option for running HPC-type applications). While they did share the hardware specs this time around, the specifics are still cloudy. For instance, when HPCwire asked about configuration details (i.e., adapters, switches and so on) and for metrics on node-to-node latency, or any latency information at all, Singh’s response reverted to EC2 generalities. He stated that Amazon “does not share details on the specifics of network implementation. What I can tell you is that the new Cluster Compute instances operate on a 10GbE network that provides full cross-sectional bandwidth to members of a cluster and very low latency.”
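Since Amazon won’t publish latency numbers, anyone curious would have to measure for themselves. As a minimal sketch of the kind of round-trip measurement involved, here is a simple TCP ping-pong test in Python. It runs against loopback for illustration; on a real cluster the client would target a remote node’s address. The host, port handling, and round count are all illustrative choices, not anything from Amazon’s setup, and a serious benchmark would use MPI-level tools rather than raw sockets.

```python
# Hypothetical sketch: estimating round-trip latency with a TCP ping-pong.
# Uses loopback as a stand-in for a second cluster node.
import socket
import threading
import time

HOST = "127.0.0.1"   # stand-in for a remote node
ROUNDS = 1000        # number of one-byte round trips to average over

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(1)
            if not data:
                break
            conn.sendall(data)  # echo the byte straight back

listener = socket.socket()
listener.bind((HOST, 0))        # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

client = socket.create_connection((HOST, port))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

start = time.perf_counter()
for _ in range(ROUNDS):
    client.sendall(b"x")
    client.recv(1)              # block until the echo returns: one round trip
elapsed = time.perf_counter() - start
client.close()

rtt_us = elapsed / ROUNDS * 1e6
print(f"mean round-trip latency: {rtt_us:.1f} microseconds")
```

Even on loopback this illustrates why microseconds matter: the per-message cost, multiplied across millions of small messages in a tightly-coupled job, is exactly the overhead InfiniBand is designed to shave.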
Gilad Shainer, Senior Director of HPC and Technical Computing at Mellanox Technologies, a company that is definitely an advocate of InfiniBand (although it still caters to the 10GbE market), commented: “Many of the HPC systems around the world are being built for maximum performance and efficiency—hence InfiniBand, GPUs, etc. People using HPC want to be able to run their simulations as fast as possible and as many as possible per day. Amazon’s new entry includes 10GigE for the I/O and incorporates the latest CPUs, but is currently limited in the amount of CPUs that users can utilize. I believe that Amazon will need to continue to improve their HPC cloud offering to include technology being used in most of today’s HPC systems to provide more compute resources per user.”
Now that the thrill of the news has worn off, people are taking a much closer look not only at the Linpack results behind Amazon’s virtual placement (it takes more than a benchmark run to get on the TOP500; this was more an exercise to demonstrate CCI’s capabilities) but also at whether this is a viable alternative to in-house HPC clusters. CCI delivers far more than standard EC2 and answers the concern, voiced by many in the community, that they just weren’t getting enough out of what was being offered.
I look forward to seeing how others rise to the challenge, since it’s now clear that the HPC market is important enough to cater to. If someone else ups the ante with InfiniBand and more CPU horsepower (via magic, of course), what will this mean?
I would love to hear some thoughts on this issue. How important is the network, and do the other drawbacks, even with the capabilities provided by CCI, still stand in the way? In short, is it just about latency, or something more?