Russian HPC cluster vendor T-Platforms says it will be adding NVIDIA’s Tesla 20-series (Fermi-class) GPUs into its latest blade offering. According to the company, the GPGPU blade will feature a “very high computing density design along with aggressive power-saving schemes for heterogeneous environments.” This week’s announcement is a prelude to the official unveiling of the blade at NVIDIA’s GPU Technology Conference (GTC), which takes place September 20 to 23 in San Jose, California.
T-Platforms is just the latest HPC system vendor to catch Fermi fever. Cray, IBM, Dell, Appro, and Bull have all announced products, or plans for products, that incorporate NVIDIA’s latest Tesla 20-series processors. Because of the delayed Fermi launch, system deployments are only now starting to appear. So far, China has been the most aggressive in installing large-scale GPU-accelerated systems and currently has three such machines in the TOP500, including “Nebulae,” the number 2 ranked supercomputer on the June 2010 list. That system is powered by Fermi GPUs and delivers 1.27 petaflops on Linpack. Research centers in the US, Japan, and Europe can be expected to follow suit with GPGPU-style petascale supers over the next 12 months.
Given that T-Platforms is the number one provider of HPC systems in Russia and the former Soviet states (CIS), we should also expect to see some Fermi-based GPGPU deployments in that part of the world in the near future. Moscow State University’s Lomonosov supercomputer, which just happens to be a T-Platforms machine, is slated for a petaflop expansion, and it’s a good bet that system will use the company’s upcoming Fermi-based blades to achieve this.
This isn’t T-Platforms’ first foray into GPGPU territory. The company partnered with NVIDIA last year to use the GPU maker’s previous-generation Tesla 10-series hardware (C1060 and S1070) with the T-Blade 1.1. However, the new GPGPU offering will be based on the company’s latest T-Blade 2 platform, whose CPU-only implementation is already one of the most compute-dense in the industry. Using Intel Westmere chips, the T-Blade 2 delivers 27 teraflops per rack. In a recent interview with EnterTheGrid-PrimeurWeekly, Alexey Nechuyatov, Director of Product Marketing at T-Platforms, said that Fermi-based blades will deliver “up to 3 times more compute density.”
We got a chance to ask Nechuyatov about the upcoming heterogeneous blade product. While he declined to offer much in the way of system specs, which will remain under wraps until NVIDIA’s GTC conference in September, he did outline the rationale for the new product and how he expects it to play to the company’s customer base.
HPCwire: Can you briefly characterize your new GPGPU blade offering?
Alexey Nechuyatov: Our solution is based on the T-Blade 2 infrastructure, the same platform that serves as the basis for the 420-teraflop Lomonosov cluster deployed at Moscow State University. While we do not comment on future products in detail, I can say our planned blade solution is based on Tesla 20-series GPUs and will also feature Intel Xeon 5500/5600 CPUs.
HPCwire: Putting 200-plus watt GPUs into an already dense blade configuration can be a bit of a challenge. What is the power and cooling setup like?
Nechuyatov: The power and cooling setup is again based on the T-Blade 2 infrastructure. A standard 32-node T-Blade 2 system can draw up to 11kW peak power, and the fully populated Fermi-based solution will draw about the same. T-Blade 2 is the world’s densest x86 system, packing 64 Xeon 5600-series CPUs into 7U, which works out to up to 27 teraflops of peak performance per industry-standard 42U rack cabinet. The system was designed using extremely thorough thermal simulations, and the upcoming Fermi blade will be a straightforward snap-in upgrade. We can recommend or supply a turnkey solution, using cold door or hot aisle containment, to ensure flawless thermal operation of fully loaded racks.
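As a back-of-the-envelope check, the 27-teraflop figure is consistent with the stated density: six 7U enclosures fill a 42U rack, and 64 six-core Westmere CPUs per enclosure at a clock of roughly 2.93 GHz (an assumed SKU; the interview doesn’t name the exact part) with 4 double-precision flops per core per cycle lands right at 27 teraflops per rack:

```python
# Sanity check of T-Blade 2 peak performance per rack.
# Assumptions (not stated in the interview): Xeon 5600-series "Westmere"
# parts at 2.93 GHz, 6 cores, 4 double-precision flops per core per cycle.
GHZ = 2.93
CORES = 6
FLOPS_PER_CYCLE = 4            # SSE: 2 adds + 2 multiplies per cycle
CPUS_PER_ENCLOSURE = 64        # per 7U T-Blade 2 chassis
ENCLOSURES_PER_RACK = 42 // 7  # six 7U enclosures in a 42U rack

gflops_per_cpu = GHZ * CORES * FLOPS_PER_CYCLE  # ~70.3 Gflops per CPU
tflops_per_rack = (gflops_per_cpu * CPUS_PER_ENCLOSURE
                   * ENCLOSURES_PER_RACK) / 1000

print(f"{tflops_per_rack:.1f} Tflops per rack")  # prints "27.0 Tflops per rack"
```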
HPCwire: What size HPC systems can be built with this new offering?
Nechuyatov: The T-Blade 2 infrastructure is positioned at the higher end of our product portfolio. It scales from a single 32-node enclosure to the multi-petaflop level. To achieve this we implemented specialized global barrier and global interrupt networks that significantly improve collective operations across large node counts. The GPU-based blade system will fully support this specialized network functionality, but exploiting it requires the right software layer, our Clustrx OS.
HPCwire: Is there GPU awareness built into the Clustrx OS?
Nechuyatov: Clustrx is a next-generation OS that moves away from the node-level CNL (Compute Node Linux) approach toward a single, distributed OS operating cluster-wide, treating all the compute infrastructure as one dynamic resource. And yes, the integrated resource manager of Clustrx natively supports heterogeneous architectures. It supports Fermi technology, which means you can run CUDA- and OpenCL-compiled applications. By the time pre-production samples are ready, we expect to implement dynamic, application-driven node activation and suspend-to-idle, and to provide a so-called connector so the management subsystem fully recognizes GPU-based nodes.
HPCwire: What kind of demand do you see for GPU-accelerated HPC in your customer base? Are they in specific industries?
Nechuyatov: GPU-based computing is causing a lot of buzz today. We are seeing demand from oil and gas customers in Russia. We also see demand from our existing academic customer base to upgrade their installations with a heterogeneous compute segment. Most of this is still pilot projects, where customers want to understand the technology’s potential, the application porting and tuning process, and the readiness of commercial applications. We already have a few customers interested in acquiring the technology, and we are trying to release the product early to gain differentiation and time-to-market advantage.
HPCwire: When will the new blades become generally available?
Nechuyatov: We expect general availability in the fall of 2010, and we hope that not only European but also US customers will have access to our T-Blade 2 system to try the technology out. We are planning to showcase a sample at NVIDIA’s GPU Technology Conference in San Jose in September, and we will also make a demo unit of the T-Blade 2 with Tesla 20-series GPUs available at the NVIDIA lab in the US.
HPCwire: Is this part of a larger strategy to penetrate the North American market?
Nechuyatov: Our target market is Europe, yet we do not exclude cases where US customers of NVIDIA would want to evaluate our product.