Later this week, QLogic will announce a partnership with Dell, Microsoft, AMD and ANSYS to offer an HPC cluster that will be publicly available to users for test driving applications. Dell is providing a 16-node PowerEdge cluster; Microsoft, its Windows HPC Server cluster software; and AMD, 32 of its six-core “Istanbul” Opteron processors. QLogic, of course, will be contributing its own InfiniBand switches and adapters. The first application to be tested on the cluster is ANSYS’ FLUENT computational fluid dynamics (CFD) code.
The cluster is housed at QLogic’s NETtrack Developer Center in Shakopee, Minnesota. Although this is the company’s first publicly announced test cluster from a major OEM, there are actually four HPC systems at the center today and more are on the way. “Over the next 60 to 90 days, we will have clusters from all the major server providers in the industry,” said Joe Yaworski, QLogic’s manager for Strategic Global Alliances.
The NETtrack center is part of a larger strategy being pursued by QLogic to build an ecosystem of hardware and software around the company’s switches and adapters. According to Yaworski, the idea is to attract ISVs, OEMs, and storage and chip vendors to come together to test and certify their products with QLogic gear. The NETtrack program was officially launched in September 2008 and now has over 70 partners.
QLogic has also expanded the charter of the NETtrack program to encourage ISVs to do performance profiling of their codes. The idea here is to highlight the performance advantages of QLogic-equipped computers, while enabling the software vendors to demonstrate the advantages of their products on the latest interconnect, processor or storage technologies. An additional purpose is to show how codes scale across cluster nodes. Performance profiling is also intended to illustrate the benefits of moving from a workstation to a cluster or from a smaller cluster to a larger one. Both ANSYS and CD-adapco have published performance results in conjunction with the NETtrack program.
More recently, QLogic has opened up the center to end users, allowing them to run their applications on the NETtrack infrastructure. Over the last six months, a number of universities have taken QLogic up on the offer, as well as an aircraft maker and a construction equipment manufacturer. QLogic views this as a way of getting more intimately connected with end users. “We actually see it as a resource to drive incremental business,” said Yaworski.
It’s worth noting that Mellanox also maintains a cluster center under the auspices of its HPC Advisory Council. The council currently lists systems from three vendors — Dell, Rackable (now SGI) and Colfax. Of course, these clusters are all flavored with Mellanox interconnects, but the idea is the same: to bring vendors and users together for the purpose of testing InfiniBand-based clusters.
The fact that Mellanox and now QLogic are willing to go to the trouble of maintaining a cluster test facility points to the unique place InfiniBand has in the HPC ecosystem. Selling integrated, self-contained HPC machines is fairly straightforward, but selling the cluster glue is a little trickier, since one must deal with competing interests from a vendor-neutral stance. And unlike other system components such as processors, memory or (to a lesser extent) storage, InfiniBand is not quite plug-and-play. Until it becomes so, demonstrating interoperability with other hardware and compatibility with application software will be the key.