While the broader spectrum of HPC users seems to be watching closely from the wings as new developments in the ARM ecosystem appear, it could be some time before these pose a significant threat to the long-dominant x86 order. However, with the general availability release of two new ARM entrants from HP in its Moonshot line, and presumably more to come from the rest of the vendor ranks in the coming months, the battle lines are just being drawn.
HPC users often have a more defined desire for tuning and customization than the general datacenter masses, needs that are being met by HPC-oriented vendors and the ever-growing list of SKUs from Intel. With the arrival of 64-bit ARM and new host platforms, including HP's Moonshot line of servers and machines from smaller players like Eurotech that pair GPUs with Applied Micro SoCs for big performance and efficiency gains, the next few years could offer a few surprises.
Part of what is interesting here is that while ARM may be moving into HPC here and there via Moonshot, Eurotech, and other vendors, several lingering questions remain. First, what are the performance differences between these systems and x86 servers primed for HPC? Second, what is the TCO when you work the math over a few years, and, related to that, how does all of it fit into the power and cooling profile versus any potential sacrifice or benefit in performance? Further, what are the development hurdles, and what do those mean for specific, complex codes that have been humming away on x86 servers since they were a few lines old? And with that in mind, for a processor ecosystem dedicated to one important goal, flexibility, how much customization will be possible up and down the stack for HPC users?
It turns out, as you might have imagined, that these aren't easy elements to tease out yet, in part because, on the HPC side, there's just not enough data. It's part of that chicken-and-egg scenario in which we can't see the real performance, TCO, and development matrix as a point of comparison. But with more "off the shelf" systems like those HP rolled out officially today, it might start getting a bit easier. As more such systems emerge, we'll get a better sense of what these metrics look like and what the real trajectory for ARM in HPC might be.
The first, and perhaps juiciest for HPC, is the HP ProLiant m400, which features Applied Micro's X-Gene SoC and "provides up to 35% reduction in TCO compared to rack servers," according to recent research that evaluated the new ProLiant over a projected three-year term, including power, cooling, and space savings. The comparison was based on a combination of mid-range and high-end x86 1U rack servers.
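To make the shape of that claim concrete, here is a minimal sketch of the kind of three-year TCO comparison behind an "up to 35% reduction" figure. All dollar amounts below are hypothetical placeholders, not HP's or the researchers' numbers; only the three-year term and the categories (acquisition, power/cooling, space) come from the article.

```python
# Sketch of a three-year TCO comparison. Every dollar figure is a
# hypothetical placeholder; only the structure mirrors the study's
# stated scope (purchase price plus power, cooling, and space costs).
YEARS = 3

def three_year_tco(capex, power_cooling_per_year, space_per_year):
    """Sum acquisition cost plus yearly power/cooling and space costs."""
    return capex + YEARS * (power_cooling_per_year + space_per_year)

# Hypothetical x86 1U rack-server baseline vs. an ARM-based alternative.
x86 = three_year_tco(capex=120_000, power_cooling_per_year=18_000,
                     space_per_year=6_000)
arm = three_year_tco(capex=90_000, power_cooling_per_year=9_000,
                     space_per_year=2_000)

reduction = 1 - arm / x86
print(f"{reduction:.0%}")  # lands in the advertised ballpark
```

The point of the exercise is that most of the claimed savings in comparisons like this come from the recurring power, cooling, and space terms compounding over the term, not from the sticker price alone.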
These SoCs feature eight cores running at Applied Micro's stated maximum of 2.4 GHz. HP has implemented all four memory channels on this board, which allows up to 64 GB of memory across those channels. HP has also laid down a ConnectX-3 device from Mellanox that provides two 10 GbE links into a new HP-built switch, on which we're awaiting more details. It's possible to fit 45 of these cartridges into 4.3U, which makes them rather dense. Each cartridge packs in that 64 GB of DDR3 memory along with SSD storage in 120, 240, or 480 GB capacities, depending on tastes and budgets. While you can't dress it up with a GPU for the HPC fit that Eurotech tailored, the density is nothing to overlook. The cartridges are packed into the existing Moonshot 1500 chassis in the 45-cartridge-plus-switching configuration.
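The density figures above are worth working through. Using only the numbers in the article (45 cartridges per 4.3U chassis, 8 cores and 64 GB of DDR3 per cartridge), a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope density math for the m400, using only figures
# from the article: 45 cartridges per 4.3U chassis, 8 cores and
# 64 GB of DDR3 memory per cartridge.
CARTRIDGES_PER_CHASSIS = 45
CORES_PER_CARTRIDGE = 8
MEM_GB_PER_CARTRIDGE = 64
CHASSIS_HEIGHT_U = 4.3

def m400_chassis_totals():
    """Return (cores, memory in GB) for one fully populated chassis."""
    cores = CARTRIDGES_PER_CHASSIS * CORES_PER_CARTRIDGE
    mem_gb = CARTRIDGES_PER_CHASSIS * MEM_GB_PER_CARTRIDGE
    return cores, mem_gb

cores, mem_gb = m400_chassis_totals()
print(cores, mem_gb)                       # 360 cores, 2880 GB per chassis
print(round(cores / CHASSIS_HEIGHT_U, 1))  # ~83.7 cores per rack unit
```

That works out to 360 cores and nearly 3 TB of memory per chassis, which is the density story HP is selling even without a GPU option.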
While we have already profiled an early user of the DSP variant of the new Moonshot servers, PayPal and its use of the m800, one of the star customers HP identified for the m400 is the University of Utah, which is building an HPC research cloud using the platform. "HP Moonshot with the ARM-based 64-bit system-on-a-chip server cartridge offers lower cost, higher density, and lower power consumption—three factors that will be critical to the future of cloud computing," said Robert Ricci, Research Assistant Professor of Computer Science at the university. Additionally, Sandia National Labs has already discussed its early use of the m400 in its advanced architecture testbed, which is being used to evaluate future architectures for exascale computing. Dr. James Ang from Sandia spoke about this at ISC in light detail, citing work with the Mantevo project on an unnamed X-Gene 1 cluster.
The m800, which is more of a real-time analytics engine, as seen in the PayPal case cited above, is built on Texas Instruments' KeyStone architecture, pairing a 32-bit instruction set with integrated DSPs. With four ARM cores down-clocked to 1 GHz and eight DSPs running at 1 GHz, HP has managed to pack 180 such cartridges into its 4.3U platform.
While HP is appealing to HPC folks as well as mainstream enterprise markets, the meatiest markets at this point appear to be telco, web services, and other large-scale operations requiring "big data" and real-time operations. It's worth noting that the new systems do require the "typical" HPC shop to do some gymnastics, including wrapping around Canonical's Ubuntu (not impossible, though Ubuntu holds only a rather small piece of the HPC Linux pie). To make those backflips easier on the programming side, HP is offering a full development kit through its Partner One program.
Pricing starts at around $65,000 for a full unit of the m400 and $81,000 for the DSP-laden m800.