Since InfiniBand came onto the scene, users have focused their efforts on using the high performance network fabric to connect compute and storage boxes within the datacenter. But two enterprising companies, Network Equipment Technologies and Obsidian Research Corp., have extended InfiniBand connectivity to wide area networks (WANs). Both vendors offer solutions that can transparently connect IB clusters and storage over long distances of hundreds or even thousands of miles. From the application's viewpoint, the remote compute and storage nodes look and (more or less) act as if they're sitting right next to each other.
The benefits of long-distance InfiniBand mirror its advantages in the datacenter: high bandwidth and low latency. While WAN InfiniBand performance won't always match local performance, these solutions have demonstrated user data rates of up to 8 Gbps over thousands of miles across SONET OC-192 or 10 GbE backbones. Bandwidth tends to drop a bit and latency to climb the farther you go, but unlike TCP/IP implementations, quality of service (QoS) is maintained.
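Physics sets a hard floor on how low WAN latency can go: light in glass fiber travels at roughly two-thirds the vacuum speed of light, so path length alone dictates the minimum round trip. A back-of-the-envelope sketch (the distances and the two-thirds figure are illustrative assumptions, not numbers from either vendor):

```python
# Minimum latency imposed by fiber propagation alone, ignoring all
# switching and protocol overhead. Assumes light moves at ~2/3 of its
# vacuum speed inside glass fiber (a common rule of thumb).

C_VACUUM_KM_S = 299_792                    # speed of light in vacuum, km/s
FIBER_SPEED_KM_S = C_VACUUM_KM_S * 2 / 3   # roughly 200,000 km/s in fiber

def fiber_latency_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over a fiber path."""
    return distance_km / FIBER_SPEED_KM_S * 1000

# Sample datacenter-to-datacenter path lengths (illustrative)
for km in (100, 1000, 5000):
    one_way = fiber_latency_ms(km)
    print(f"{km:>5} km: one-way {one_way:6.2f} ms, round trip {2 * one_way:6.2f} ms")
```

At 5,000 km the round trip already costs about 50 ms before any equipment touches a packet, which is why sustaining multi-gigabit user data rates at that distance is a notable engineering result.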
Obsidian's Longbow IB WAN solution has been deployed at NASA Ames, Arizona State University and the University of Florida, and is being researched by Oak Ridge National Laboratory and Ohio State. The Longbow product has also been a feature at the last three Supercomputing (SC) conferences. Last year, Canada-based Obsidian set up a subsidiary to go after the lucrative U.S. federal, intelligence and defense market spaces.
Network Equipment Technologies (NET) has a competitive product, the NX5010 InfiniBand bridge, a $100K+ box that is already fairly well-established in the U.S. DoD and Intelligence Community market. NET, a provider of a range of telecommunication platforms, got into the long distance InfiniBand market about a year and a half ago when its government customers started demanding long haul InfiniBand capability. Many of these federal organizations maintain a network of HPC sites dispersed across the country. These customers have developed a need to use wide area clusters to run some of their most critical MPI-based programs. Although the three-letter agencies don’t talk about specific applications, wide area InfiniBand is a good fit for things such as dispersed intelligence gathering, network centric warfare, and general data mining.
NET's current InfiniBand offering, the 2U NX5010 box, works with any standard IB protocol. To the subnet manager, the NX5010 looks like a two-port InfiniBand switch. The device acts as a network bridge, converting the InfiniBand stream to the underlying WAN protocol, whether ATM, 10 Gigabit Ethernet, or something else. At the other end, a companion NX5010 box attached to the remote cluster or SAN reverses the conversion. The magic is that the translation to and from the WAN protocol is performed at the 10 Gbps line rate, without losing the InfiniBand semantics or incurring a big latency penalty.
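One way to appreciate the "at line rate" claim: InfiniBand's link layer is lossless and credit-based, so a bridge can only keep a long pipe full if it can hold at least a round trip's worth of data in flight, i.e., the bandwidth-delay product of the path. A rough sizing sketch (the 50 ms RTT and the rates are illustrative assumptions, not NX5010 specifications):

```python
# Minimum buffering a WAN bridge needs to keep a lossless, credit-based
# link saturated: the bandwidth-delay product (BDP) of the path.
# All figures here are illustrative assumptions, not NX5010 specs.

def bdp_megabytes(rate_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product in megabytes for a given line rate and RTT."""
    bits_in_flight = rate_gbps * 1e9 * (rtt_ms / 1000)
    return bits_in_flight / 8 / 1e6

# A ~5,000 km fiber path has a round trip of roughly 50 ms.
for rate_gbps in (10, 40):   # today's 10 Gbps links and a future 40 Gbps one
    mb = bdp_megabytes(rate_gbps, 50)
    print(f"{rate_gbps} Gbps over a 50 ms RTT needs ~{mb:.0f} MB of in-flight buffering")
```

That works out to tens of megabytes of buffer at 10 Gbps, far more than an ordinary IB switch carries, which hints at why purpose-built range-extension hardware is needed at all.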
NET says it has sold about 100 NX systems so far. That's hardly a commodity market, but the company now thinks it can drive its solution into the commercial space. As InfiniBand adoption grows beyond HPC, NET is eyeing the demand for real-time data capture on remote InfiniBand-equipped storage area networks. The company is looking at the financial market, where there is demand to synchronize streaming data in real time across storage silos. In particular, for these institutions, the need for remote disaster recovery (DR) may turn out to be the first killer app for long distance InfiniBand.
In the dot-com days, a number of financial firms on Wall Street bought a lot of dark fiber, which is still underutilized. NET is pitching them the idea of using this capacity for InfiniBand-enabled DR. “They have the bandwidth,” says Haseeb Budhani, director of strategic planning for NET. “They just don’t have a way to push the data.” The traditional TCP/IP solution, which was never intended for high performance data transfer, incurs a heavy latency penalty, especially at longer distances.
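The TCP penalty Budhani alludes to is easy to quantify: a single TCP stream can never move data faster than its window size divided by the round-trip time, so without large-window tuning, throughput collapses as distance grows. A sketch using the classic 64 KB default window (the window size and RTT values are illustrative, not figures from NET):

```python
# Per-stream TCP throughput ceiling: window_size / round_trip_time.
# Shows why an untuned TCP connection struggles over long distances.
# The 64 KB window and the RTTs below are illustrative assumptions.

def tcp_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Maximum single-stream TCP throughput in Mbps for a given window and RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024   # 64 KB: the classic maximum without window scaling
for rtt_ms in (1, 10, 50):   # LAN-like, regional, and cross-country RTTs
    mbps = tcp_ceiling_mbps(WINDOW, rtt_ms)
    print(f"RTT {rtt_ms:>2} ms: at most {mbps:8.1f} Mbps per TCP stream")
```

At a cross-country 50 ms RTT, a 64 KB window caps a stream at roughly 10 Mbps, three orders of magnitude below the 8 Gbps user data rates the InfiniBand WAN gear has demonstrated.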
NET is looking to piggyback onto deployments from Oracle, SAP, EMC, NetApp and system vendors as a way to enter the commercial market. The recent decision by Colfax International to offer NX 5000 systems alongside its high performance cluster gear is a development NET would like to see repeated with other system integrators and OEMs.
While NET is excited about connecting remote storage over IB, at this point the company doesn't perceive a big demand for long haul computing over InfiniBand outside the government space. But in that market, the need for speed is unrelenting. NET is planning to introduce NX bridges that support a 40 Gbps data rate later this year. These devices will be especially handy if you happen to be connected to a next-generation 40G OC-768 backbone.
But for most organizations, remote computing over high performance networks is still a bit too expensive. While NET expects to drive its NX boxes below $100K at some point, it still makes sense for the average HPC customer to expand their compute capacity on-site. As high performance network infrastructure becomes more commonplace and the InfiniBand ecosystem continues to mature, we may see a more general demand for IB-based wide area networking.
As the only two vendors of WAN InfiniBand gear, Obsidian and NET are in a good position to take advantage of those opportunities. From NET’s perspective, Budhani would welcome more players, if only to validate the business opportunity. “More competitors absolutely make a case for the market,” he says.