SMP virtualization specialist ScaleMP has announced vSMP Foundation 3.0, the third generation of its virtual SMP solution for high-performance computing clusters. Unlike traditional virtualization, which partitions hardware resources into smaller chunks, vSMP aggregates those resources to build bigger, more powerful virtual machines. Version 3.0 of the software adds some much-needed updates, including increased scalability and additional hardware support.
Specifically, vSMP 3.0 can aggregate up to 8,192 Intel Xeon cores, i.e., 16,384 threads, and 64 TB of RAM (spread across as many as 128 cluster nodes) into a single virtual SMP. It will also handle as many as four InfiniBand HCAs per node, providing backplane connectivity of up to 160 Gbps (netting 128 Gbps) per node. The new software also folds in support for the latest server silicon from Intel, the Nehalem-EX (Xeon 7500) and Westmere-EP (Xeon 5600) processors. In addition, ScaleMP has added support for two new peripherals: the Emulex LPe12xxx (8 Gbps Fibre Channel) and the Broadcom NetXtreme II 57711 (10 GigE).
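A quick back-of-the-envelope check shows how those headline limits break down per node. The totals are from the announcement; the per-node figures below are simply derived from them, assuming the maximum 128-node configuration:

```python
# Derived per-node figures for a maxed-out vSMP 3.0 configuration.
# Totals are from the article; per-node values are simple division.
NODES = 128
TOTAL_CORES = 8192
TOTAL_THREADS = 16384
TOTAL_RAM_TB = 64

cores_per_node = TOTAL_CORES // NODES             # 64 cores per node
threads_per_core = TOTAL_THREADS // TOTAL_CORES   # 2 (Hyper-Threading)
ram_per_node_gb = TOTAL_RAM_TB * 1024 // NODES    # 512 GB per node

print(cores_per_node, threads_per_core, ram_per_node_gb)  # 64 2 512
```

Sixty-four cores per node lines up with an eight-socket, eight-core Nehalem-EX box, the kind of high-end building block ScaleMP says it targeted in this release.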
The scalability boost is particularly noteworthy. Version 2.0 of vSMP supported a mere 128 cores and 4 TB of memory per VM (across 16 nodes). And connectivity was limited to just a single HCA per node. Upping the cores, memory capacity, and bandwidth is a reflection of the increasing core counts on the latest x86 silicon — up to eight cores on Nehalem-EX and six cores on Westmere-EP — but also points to anticipated demand for HPC apps that want to spread out into larger SMP-type environments.
According to Shai Fultheim, founder and president of ScaleMP, version 2.0 supported more than 20 different dual-socket servers and a handful of four-socket systems. “For 3.0 we put significant investment into expanding that, and specifically on the high-end building blocks,” he says.
Although he can’t name names yet, Fultheim says they’ll be supporting all the Tier 1 Nehalem-EX based systems as soon as the OEMs officially roll them out. Certified configurations for systems supporting up to 32 nodes will be generally available on June 14, but certification for up to 128 nodes won’t be ready until the end of Q3 2010.
The current vSMP user base encompasses commercial, higher education/research, and government customers. All told, ScaleMP has over 150 engagements, including big names like Bloomberg, SDSC, University of Cambridge, Harvard, Naval Research Lab, NIH, and Lockheed Martin. According to Fultheim, they already have some vSMP 3.0 customers signed up for 32-node systems. One of them (in the government space) is looking to build a 48 TB virtual SMP that will consist of 16 eight-socket Nehalem-EX boxes, with 3 TB on each node. There are also a few customers getting ready to build systems in the 40- to 50-node range.
An interesting comparison can be made with SGI’s new Altix UV system, which debuted last November. The Altix UV is a traditional hardware SMP geared for HPC, and uses SGI’s custom NUMAlink 5 interconnect and UV hub controller to glue together dual-socket Nehalem-EX blades.
The top-of-the-line UV 1000 scales to 2,048 cores and 16 TB of memory for a single system image, versus 8,192 cores and 64 TB for vSMP 3.0. The memory advantage for vSMP is especially significant, inasmuch as memory footprint is more often the limiting factor (as opposed to core count) for global address space HPC codes. Backplane bandwidth is similar: 120 Gbps (15 GB/sec) for the NUMAlinked UV blades versus 128 Gbps (4 x 32 Gbps) for the vSMP version — assuming four QDR InfiniBand adapters per node.
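The bandwidth figures in that comparison are easy to reconcile. A sketch of the arithmetic, using the article's numbers plus the standard QDR InfiniBand rates (40 Gbps signaling, 32 Gbps effective after 8b/10b encoding):

```python
# Reconciling the per-node backplane bandwidth figures in the article.
QDR_RAW_GBPS = 40   # QDR InfiniBand signaling rate per HCA
QDR_NET_GBPS = 32   # effective data rate after 8b/10b encoding
HCAS_PER_NODE = 4

vsmp_raw_gbps = HCAS_PER_NODE * QDR_RAW_GBPS   # 160 Gbps, as quoted
vsmp_net_gbps = HCAS_PER_NODE * QDR_NET_GBPS   # 128 Gbps net

numalink_gbytes_per_sec = 15                   # per UV blade, per the article
numalink_gbps = numalink_gbytes_per_sec * 8    # 120 Gbps

print(vsmp_raw_gbps, vsmp_net_gbps, numalink_gbps)  # 160 128 120
```

So on raw per-node bandwidth the two approaches land within about 7 percent of each other; latency, as the next paragraph notes, is where NUMAlink is likely to pull ahead.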
The NUMAlink 5 network is apt to deliver better latency than InfiniBand, and SGI’s MPI Offload Engine may prove to be of particular value for message passing codes, but overall, ScaleMP appears to have built a compelling high-end SMP environment. The fact that it’s being done in software is an extra bonus, especially in the vSMP Cloud offering, which enables dynamic provisioning of virtual machines on cloud infrastructure.
The company is also using vSMP 3.0 to introduce its “VM on VM” technology, aimed at the general enterprise space rather than HPC, per se. In a nutshell, the ScaleMP software aggregates multiple x86 nodes as before, but instead of placing Linux on the virtual SMP, a hypervisor is used, which is then able to load multiple VMs on the virtualized platform. The goal is much the same as with the traditional HPC offering: making big systems out of little ones so that system management can be simplified and overall utilization and flexibility can be improved. “From a computer science perspective, there’s not much difference between running a 100 processors of MATLAB in parallel and 100 VMs,” explains Fultheim.
The VM on VM feature currently only works with KVM and Xen, but ScaleMP intends to add support for Microsoft’s Hyper-V and VMware down the road. It’s still in the preview stage, so the company is registering interested parties on its Web site.