NVIDIA, Supermicro Give Birth to CPU-GPU Server
Until now, the only practical way for customers to get GPU-accelerated clusters was to combine NVIDIA’s own S1070 Tesla servers with x86 CPU servers from a traditional system vendor. Before May, the onus was on the users to configure the Tesla and x86 boxes themselves. But on May 4, NVIDIA launched its pre-configured cluster program, which brought in OEM partners to construct these mixed-processor clusters, allowing customers to purchase pre-built GPU-accelerated systems.
Now NVIDIA has taken its next step in GPU computing with the introduction of a new Tesla card, the M1060, that is designed to fit neatly inside CPU servers. With this new offering, NVIDIA hopes to expand the scope of GPU high performance computing by using a more traditional model for building large-scale HPC systems.
The M1060 module contains a single 1.3 GHz Tesla Series 10 GPU, the same device found in the C1060 for workstations. The GPU contains 240 stream processing cores, which provide 933 gigaflops of single precision floating point performance or 78 gigaflops of double precision. Four gigabytes of GDDR3 memory are included in the module, and can be accessed at up to 102 GB/second.
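The quoted peak numbers follow directly from the chip's published specifications. The back-of-the-envelope arithmetic below is a sketch, assuming NVIDIA's exact shader clock of 1.296 GHz (rounded to 1.3 GHz above), 30 double-precision units, and a 512-bit GDDR3 bus at an effective 1600 MT/s; these figures come from NVIDIA's spec sheets, not from this article.

```python
# Rough peak-throughput arithmetic for the Tesla Series 10 GPU
# (M1060 / C1060). Clock and unit counts are assumptions drawn
# from NVIDIA's published specs.

SHADER_CLOCK_GHZ = 1.296   # rounded to 1.3 GHz in marketing material
SP_CORES = 240             # single-precision stream processors
DP_UNITS = 30              # one double-precision unit per multiprocessor

# Each SP core can dual-issue a multiply-add plus a multiply:
# 3 flops per cycle.
sp_gflops = SP_CORES * SHADER_CLOCK_GHZ * 3

# Each DP unit performs a fused multiply-add: 2 flops per cycle.
dp_gflops = DP_UNITS * SHADER_CLOCK_GHZ * 2

# Memory bandwidth: 512-bit bus at an effective 1600 MT/s.
mem_bw_gbs = (512 / 8) * 1.6

print(f"{sp_gflops:.0f} SP gigaflops")   # ~933
print(f"{dp_gflops:.0f} DP gigaflops")   # ~78
print(f"{mem_bw_gbs:.1f} GB/s")          # 102.4
```

The same arithmetic explains the Supermicro server's headline figure: two such GPUs give roughly 1.87 single-precision teraflops, which the vendors round to two.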
Supermicro will be the first vendor to bring an integrated CPU-GPU server to the HPC market. At Computex in Taiwan this week, the company announced its new SS6016T-GF, a 1U server that houses two Tesla GPU modules alongside two quad-core Nehalem (Xeon 5500) CPUs. The new server delivers two single-precision teraflops of computing power. According to Andy Walsh, who heads the NVIDIA Tesla business unit, the encapsulation of dual GPUs inside the Supermicro box will make it “the world’s fastest 1U server.” Although Supermicro is the only vendor that has announced a GPU-juiced server, Walsh says other vendors are being lined up and will offer CPU-GPU systems later this year.
Having a couple of teraflops in a 1U server provides the same compute density as when the CPU and GPU servers are purchased separately. But Walsh explains that having all the processor chips under one roof provides much easier deployment and better manageability. Setup is simpler since there are no external cables to hook up between separate CPU and GPU servers. Instead, each GPU module is connected internally via a PCIe 2.0 x16 interface. Also, when the GPUs inhabit the same host, the server’s management software (which monitors and controls temperature, fans, voltage, etc.) can be applied to the GPU components as well.
Inside the SS6016T-GF Supermicro box, the two M1060 GPU modules sit on opposite sides of the server chassis in a mirror-image configuration, one facing up, the other facing down, allowing the heat to be distributed more evenly. The NVIDIA M1060 part uses a passive heat sink, and is cooled in conjunction with the rest of the server, which contains a total of eight counter-rotating fans. Supermicro also builds a variant of this model, which uses a Tesla C1060 card in place of the M1060. The C1060 has the same technical specs as the M1060, the principal difference being that the C1060 has an active fan heat sink of its own. In both instances, though, the servers require plenty of juice. Supermicro uses a 1,400 watt power supply to drive these CPU-GPU hybrids.
Pricing on the servers has not been released, although Boston Limited, a European distribution partner for Supermicro, is offering the C1060-based server variant for £4999 ($8,227) and claims it is ready to ship such systems today.
For its part, NVIDIA is positioning these integrated servers as a way to help push its GPUs into the largest supercomputing systems. As such, they represent the company’s relentless climb up the HPC food chain, starting with GPU-accelerated workstations, moving to heterogeneous CPU/GPU clusters, and now to monolithic CPU-GPU servers. As GPUs reach parity with CPUs, it’s more likely that these hybrid systems will start to vie for the top spots in supercomputing. And until AMD or Intel manages to come up with a compelling alternative, NVIDIA will continue to define how GPU-based supercomputing is done.