Cray Launches Cascade, Embraces Intel-Based Supercomputing
AMD-loving Cray has launched the XC30 supercomputer, a product line that will be powered by Intel Xeon processors. The platform is based on the company’s “Cascade” architecture, which is designed to bring a variety of processors and coprocessors under a common infrastructure. XC will become Cray’s flagship computing platform as it phases out its XE and XK lines over the next year or so.
Cascade was one of the two designs that received government funding under DARPA’s High Productivity Computing Systems (HPCS) program. The research agency also injected money into IBM’s PERCS project, which was transformed into the company’s latest Power7-based line of supercomputers. The Cascade project became the basis for the XC line. And although the feds sank a few hundred million into Cray’s coffers over the three HPCS phases, the supercomputer maker contributed the majority of the funding for the project’s R&D.
The XC30 series represents the first fruit of that effort, and this particular model will be powered solely by Xeon CPUs — initially the “Sandy Bridge” chips, and later, the socket-compatible “Ivy Bridge” Xeons, which are expected to make their appearance in 2013. Xeon Phi coprocessors will be added as an accelerator option later on, as will NVIDIA Tesla GPUs, but these will be introduced under different XC product SKUs.
Each XC30 blade houses four dual-socket Xeon nodes fed by a single Aries interconnect chip. A chassis can hold up to 16 of these blades, which are linked together via a backplane (no cables). A fully outfitted chassis of 128 Sandy Bridge CPUs will deliver about 22 teraflops; the future Ivy Bridge chips should kick that up to 100 teraflops or so.
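The chassis-level number is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes 8-core Sandy Bridge Xeons at roughly 2.6 GHz, each core doing 8 double-precision flops per cycle with AVX; those specific chip figures are assumptions, not something Cray has confirmed for the XC30.

```python
# Rough peak-flops estimate for one fully outfitted XC30 chassis.
# Assumed parts: 8-core "Sandy Bridge" Xeons at ~2.6 GHz, with AVX
# sustaining 8 double-precision flops per core per cycle.
blades_per_chassis = 16
nodes_per_blade = 4        # dual-socket nodes, four per blade
sockets_per_node = 2
cores_per_cpu = 8          # assumed Xeon E5 configuration
clock_ghz = 2.6            # assumed clock speed
flops_per_cycle = 8        # AVX: 4-wide DP add + 4-wide DP multiply

cpus = blades_per_chassis * nodes_per_blade * sockets_per_node
peak_teraflops = cpus * cores_per_cpu * clock_ghz * flops_per_cycle / 1000.0

print(cpus)                   # 128 CPUs per chassis, as quoted
print(round(peak_teraflops))  # about 21 TF, in line with the ~22 quoted
```

With those assumed chip specs, 128 CPUs land at roughly 21 peak teraflops per chassis, close to the ~22 teraflops figure above.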
Up to six chassis can be hooked together in a couple of cabinets using short passive copper cabling. To scale beyond that, you need active optical cables, which allow configurations that span multiple rooms, even extending to different floors. That will get you into hundreds of cabinets and well into the multi-petaflop realm. Once the Xeon Phi and GPU accelerator options are available, these machines should be able to reach beyond 100 petaflops.
There are no short-term plans to build an Opteron-based XC, the reason for which will become apparent in a moment. And although Cray has not hinted about more exotic processors for XC, the original idea behind Cascade was to be able to swallow just about any chip with an HPC bent. So FPGAs, Cray’s own ThreadStorm processor, or even future ARM-based chips might end up in the mix at some point.
The one constant in the XC platform will be Aries, Cray’s third-generation supercomputing interconnect fabric, which will tie together all the processors and accelerators. Aries follows SeaStar and Gemini, which glued together the processors in the XT and XE/XK lines, respectively. As you might suspect, the new interconnect is higher performing than its predecessors, offering improvements in global communications and synchronization. Aries is capable of 8 to 10 GB/sec of real I/O in and out of each node. Global bandwidth has been increased 20-fold, delivering up to 120 million gets and puts per second.
Aries will use PCIe Gen3 as the processor interface, which for all practical purposes excludes Opterons — even the new Opteron 6300 CPUs unveiled last week are using PCIe Gen2. Presumably the future Opteron 6400 series, or whatever they’re called, will incorporate Gen3, and Cray will then have the option to design an AMD blade for XC. In any case, according to Cray VP Barry Bolding, they don’t need an Opteron solution for the XC right away, inasmuch as the current XE/XK will be offered for another year.
Aries is unique in another way: it represents the company’s last in-house-designed interconnect. The technology was sold to Intel back in April, but Cray still has rights to produce as many Aries NICs as it wants. Cray currently employs TSMC to manufacture the 40nm Aries chip, and will likely do so for the lifetime of the XC line.
Besides the interconnect upgrade, the new platform also incorporates a new network topology. Unlike the older XT/XE/XK lines, which relied on a 3D torus, XC uses a Dragonfly topology, which is a kind of flattened Butterfly. Dragonfly offers the all-to-all bandwidth of a fat tree topology, but does so with the network infrastructure and cost of a 3D torus.
Bolding says that unlike a torus network, which adds extra hops as it scales out, the Dragonfly design maxes out at five hops regardless of system size. That substantially improves latency on larger systems, while also making it easier to distribute jobs across the nodes without worrying about performance loss.
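The five-hop bound can be illustrated with a toy model of a canonical dragonfly: routers sit in groups, every router pair within a group is directly connected, and every group pair shares a global link. This is a sketch of the topology idea only, not Cray’s actual Aries routing logic:

```python
# Toy model of worst-case path length in a canonical dragonfly network.
# Assumed topology: all-to-all links inside each group, and a global
# link between every pair of groups. Not a model of Aries' real
# adaptive routing -- just why the hop count is bounded.

def worst_case_hops(src_group, dst_group, via_group=None):
    """Worst-case router-to-router hops for one packet.

    Minimal route: a local hop to the router holding the right global
    link, the global hop, then a local hop in the destination group.
    A non-minimal (Valiant-style) detour through an intermediate group
    adds one more global+local pair -- five hops at most, no matter
    how many groups the machine has.
    """
    if src_group == dst_group:
        return 1                  # one local hop inside the group
    if via_group is None:
        return 3                  # local + global + local
    return 5                      # local + global + local + global + local

# The bound stays flat as the (hypothetical) machine grows:
for n_groups in (8, 64, 1024):
    print(n_groups, worst_case_hops(0, 1, via_group=2))   # always 5
```

Contrast this with a 3D torus, where worst-case hop count grows with the length of each torus dimension as the machine scales out.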
The software stack remains fundamentally the same as the XE/XK line, with Cray’s Linux Environment (CLE) at the center, along with an array of compilers, debuggers, performance tools and job schedulers. The Intel compiler has been added for obvious reasons, as well as SLURM, an increasingly popular job scheduler. Cray has also provided a compiler and runtime for Chapel, a programming language that was developed as part of the HPCS work to improve developer productivity for parallel programming. Chapel is open source, but the version you get with the XC30 is targeted to that platform and carries support with it.
The XC30 is shipping now to some early customers, but will be generally available in January. Bolding says they already have a healthy pipeline of customers for the XC30 (and future XC offerings), including NERSC in the US, HLRS in Germany, CSCS in Switzerland, Pawsey in Australia, Kyoto University in Japan, and CSC in Finland. “It’s great to have such broad acceptance of the product line,” says Bolding, who expects it to be the company’s most successful supercomputer ever.
Pricing is not available, but if you have to ask, you probably can’t afford it. Most, if not all, of these systems will end up at government labs and large research institutions with deep pockets. A smaller variant aimed at commercial customers is in the works; that version is slated to become available around the middle of next year.