HP Adds New HPC Server with On-Board GPGPU
Hewlett Packard has launched a new purpose-built HPC rack server with a formidable GPGPU capability. That product, the ProLiant SL390s G7, provides more raw FLOPS per square inch than any server HP has delivered to date, and is the basis for the 2.4 petaflop TSUBAME 2.0 supercomputer currently being deployed at the Tokyo Institute of Technology.
HP announced two new servers this week. Besides the SL390s G7, the company also introduced the SL170s G6, a no-frills server aimed at hyper-scale computing deployments. Both the SL390s and SL170s plug into HP’s new ProLiant SL6500 Scalable System chassis, a 4U box that accommodates up to eight half-width servers. The SL6500 succeeds the SL6000 system announced last year.
The SL170s and SL390s come as skinless trays rather than the typical server boxes encased in metal, top and bottom. This design could catch on as more vendors look to minimize extraneous hardware and come up with ever-denser rack configurations. SGI uses a similar skinless design in its CloudRack trays.
The more general-purpose of the new ProLiant servers is the SL170s G6, a single- or dual-socket server that incorporates the latest Intel Xeon Westmere (5600 series) processors in a half-width form factor. Ed Turkel, HP’s manager of business development for its HPC group, describes it as the company’s “lean and mean server,” where scalable performance, serviceability, and manageability are the driving concerns. As such, it’s aimed mostly at Web 2.0 and service provider environments, but it’s also quite suitable for embarrassingly parallel HPC applications, such as portfolio risk analysis and BLAST-based bioinformatics. The base price on this model is $1,559.
But for “true” supercomputing applications, the SL390s G7 is the go-to server. Like its sibling, the SL390s comes with Xeon 5600 processors, but the option to pair the CPUs with up to three on-board NVIDIA “Fermi” 20-series GPUs puts a lot more floating point performance into this design. Customers can choose either the M2050 or the M2070 Tesla GPU module, the only difference being the amount of graphics memory: 3 GB of GDDR5 for the M2050 versus 6 GB for the M2070. Each GPU module is served by its own PCIe Gen2 x16 channel to maximize bandwidth to the graphics chips. In the maximum configuration, with all three Fermi GPUs and two Westmere CPUs, a single server delivers on the order of 1 teraflop of double precision performance. “So this is very much a server that has been designed for HPC,” said Turkel.
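The per-node figure can be sanity-checked with a quick back-of-envelope calculation. The numbers below are the vendors' published peaks rather than anything HP has stated here: an M2050/M2070 Fermi module peaks at 515 double-precision gigaflops, and a 6-core Westmere X5670 retires 4 DP flops per cycle per core via SSE.

```python
# Back-of-envelope peak double-precision FLOPS for a fully loaded SL390s G7.
# Assumed vendor peak figures (not from the article):
GPU_PEAK_GF = 515.0      # Tesla M2050/M2070, double precision
CPU_CLOCK_GHZ = 2.93     # Xeon X5670
DP_FLOPS_PER_CYCLE = 4   # SSE: 2 DP adds + 2 DP muls per cycle
CORES_PER_CPU = 6

cpu_peak_gf = CPU_CLOCK_GHZ * DP_FLOPS_PER_CYCLE * CORES_PER_CPU  # ~70.3 GF
node_peak_gf = 3 * GPU_PEAK_GF + 2 * cpu_peak_gf                  # ~1686 GF
print(f"Per-node peak: {node_peak_gf / 1000:.2f} DP teraflops")
```

The sketch lands at roughly 1.7 DP teraflops per node, consistent with Turkel's "on the order of 1 teraflop" characterization, with the GPUs contributing over 90 percent of the total.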
With GPUs on board, the SL390s fills out a 2U half-width tray, so up to four of these can be packed into a 4U SL6500 chassis. A CPU-only version is also available and takes up just half the space (half-width 1U), enabling twice as many Xeons to occupy the same chassis. This configuration will likely be the server of choice for the majority of HPC setups, given that GPGPU deployment is really just getting started. Pricing on the CPU-only model starts at $2,259.
Another HPC-centric feature on the SL390s is the inclusion of on-board network adapters, in this case Mellanox’s ConnectX-2 silicon. The embedded adapter supports either 40 Gbps InfiniBand or 10 Gigabit Ethernet, making it suitable for low-latency applications on either fabric. If dual-rail InfiniBand is desired, an external adapter can be hooked into the server’s PCIe slot. The Mellanox silicon has also been incorporated in HP’s ProLiant BL2x220 G7 server blade.
Although the official debut of the SL390s was on Tuesday, HP has been shipping the server for some time, most notably to Tokyo Institute of Technology (Tokyo Tech), where it serves as the foundation for the 2.4 petaflop TSUBAME 2.0 supercomputer. That system is now fully deployed and will be formally launched later this week.
The new TSUBAME consists of 1,432 SL390s G7 servers, each of which contains three M2050 GPUs. CPU-wise, each server is outfitted with two 6-core Westmere processors (X5670, 2.93 GHz) and either 54 or 96 GB of RAM. For ultra-fast local storage, two SSDs plug into each server node. The network fabric is all QDR InfiniBand, taking advantage of the on-board Mellanox chips; an additional InfiniBand adapter is plugged into each node to provide dual-rail InfiniBand. The whole fabric delivers a system-wide aggregate bandwidth of 200 terabits per second.
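Scaling a per-node estimate across the full build-out reproduces the headline petaflops number. Again, this sketch assumes vendor peak figures not quoted in the article: 515 DP gigaflops per M2050 and roughly 70.3 DP gigaflops per X5670.

```python
# System-wide peak for TSUBAME 2.0 from the node count given above.
NODES = 1432  # SL390s G7 servers, 3 GPUs + 2 CPUs each

gpu_total_tf = NODES * 3 * 515.0 / 1000            # GPU contribution, teraflops
cpu_total_tf = NODES * 2 * (2.93 * 4 * 6) / 1000   # CPU contribution, teraflops
system_pf = (gpu_total_tf + cpu_total_tf) / 1000
print(f"System peak: ~{system_pf:.2f} petaflops")
```

The estimate comes out at about 2.41 petaflops, right in line with the 2.4 petaflop figure cited for the machine.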
The servers are housed in HP’s 42U Module Cooling System G2 rack, which represents the basic building block for TSUBAME’s computing infrastructure. Each rack contains 30 SL390s G7 nodes (60 CPUs and 90 GPUs), 8 power management chassis, an HP network switch for the shared console and the local area network, two airflow dams, and 4 Voltaire 4036 leaf switches.
A single rack consumes around 35 kW, 20 kW of which comes from the GPUs alone. Not surprisingly, the G2 rack is water cooled to handle the considerable heat generated by the CPU-GPU configuration. All this makes for a very computationally dense system, and despite the 35 kW power draw per rack, results in a supercomputer that is rather efficient in both space and energy consumption. “They wanted a world-class system, but they wanted it to fit into 200 square meters of floor space and into 1.8 MW of power,” explained Turkel.
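The rack and power figures above can be checked against that 1.8 MW budget. Note this covers only the compute racks' draw as stated, not storage, network, or cooling overhead.

```python
import math

# Compute-rack power budget check for TSUBAME 2.0, from figures in the text:
# 1,432 nodes, 30 nodes per 42U rack, ~35 kW per rack, 1.8 MW target.
NODES = 1432
NODES_PER_RACK = 30
KW_PER_RACK = 35

racks = math.ceil(NODES / NODES_PER_RACK)        # rounds up to whole racks
compute_power_mw = racks * KW_PER_RACK / 1000
print(f"{racks} racks, ~{compute_power_mw:.2f} MW for compute")
assert compute_power_mw < 1.8   # fits inside the stated power envelope
```

At 48 racks and roughly 1.68 MW, the compute infrastructure fits inside the stated envelope with modest headroom for everything else.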
Besides Tokyo Tech, the SL390s G7 has attracted some other early customers. Although Turkel couldn’t name names, he said the new HPC server is already garnering a lot of interest from scientific research organizations, oil and gas firms, and financial services institutions.
With the new server, HP joins IBM, Dell, SGI, and just about every other HPC system vendor in offering on-board GPGPU. Although HP has offered plug-in Tesla cards for its servers, and even qualified NVIDIA’s 1U quad-GPU box in the past, the SL390s G7 represents the company’s first generally available native GPU server design. According to Turkel, there will be other variations of GPGPU-equipped rack servers in the future, but he was noncommittal regarding any plans to offer this capability in HP’s blade server line. Turkel did say the company is aware that NVIDIA will soon be shipping the compact X2070 Tesla module designed specifically for blades and other small form-factor designs, admitting “we’re certainly looking at that.”