Voltaire has announced the Grid Director 4036E, a new QDR InfiniBand switch with a built-in Ethernet gateway. As such, it acts as a network bridge that ties QDR InfiniBand and 10 GigE infrastructure together, all inside a 1U box. The offering is the latest example of Voltaire's broader strategy of expanding into the datacenter Ethernet arena.
The 4036E is especially designed for applications where low latency communication is a top priority. It provides 34 QDR (40 Gbps) InfiniBand ports, delivering less than 100 nanoseconds of port-to-port latency, and two GigE or 10 GigE ports bridging traffic in less than two microseconds. The bridge is an evolution of Voltaire’s product set for InfiniBand/Ethernet gateways. For DDR InfiniBand, the company offers core switches (in 6U and 15U form factors) that can bridge to Ethernet via optional line cards. The 4036E adds QDR capability and collapses the Ethernet bridging functionality into a modular 1U design.
The new offering uses Voltaire’s fifth generation silicon for the Ethernet gateway. Unlike the company’s switch technology, which relies on ASICs from Mellanox (for InfiniBand switching) and Fulcrum (for Ethernet switching), Voltaire’s gateway technology has always been based on in-house ASICs. It’s worth noting that Mellanox recently developed its own gateway chips (BridgeX), but so far they have only been used in the company’s BX line of gateway systems.
The 4036E is designed to operate transparently from the standpoint of the application. Since the bridge supports the OpenFabrics Enterprise Distribution (OFED) standard, no special drivers are needed. So, for example, 10 GigE-attached storage hooked up to the 4036E will think the InfiniBand cluster at the other end is talking Ethernet, and vice versa.
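Because the bridging is transparent at the network layer, application code need not change at all. The sketch below illustrates the point with an ordinary TCP echo exchange: the same socket calls work whether the peer is reached over IPoIB through a gateway like the 4036E or over plain Ethernet. (The demo runs over loopback; the deployment scenario described in the comments is an assumption for illustration, not vendor-documented behavior.)

```python
# Sketch: an unmodified TCP application is transport-agnostic.
# Since the gateway bridges traffic and OFED handles the InfiniBand
# side, code like this needs no special drivers or APIs -- a node on
# the IB cluster and a 10 GigE-attached host would run it unchanged.
# (This demo simply uses loopback so it is runnable anywhere.)
import socket
import threading

def echo_server(listener):
    # Accept one connection and echo back whatever arrives.
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# In production this bind address might be an IPoIB interface on a
# cluster node; here it is loopback with an OS-assigned port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"market data feed")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())  # -> market data feed
```

The takeaway is that the bridge's transparency lives below the socket API: neither endpoint can tell, at this level, what link layer carried the bytes.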
For Voltaire, the new offering is expected to play especially well in one of its application strongholds — algorithmic trading, a.k.a. high frequency trading. According to Asaf Somekh, Voltaire’s VP of marketing, the company has essentially 100 percent of the market for InfiniBand installations in this application segment. The segment also happens to be Voltaire’s top revenue generator, accounting for up to 40 percent of its total sales. “In Q1 [of 2009], which was the worst quarter for us over the past couple of years, the biggest vertical that was still buying was financial services,” notes Somekh.
Given the chaos in the financial sector over the past year, that might seem surprising. But during the economic downturn, high frequency trading has been a bright spot for the industry, boosting the bottom line for banks, exchanges, hedge funds, and algo trading companies. Unlike other types of Wall Street activity that depend upon rising asset values, algo trading can make money regardless of asset health.
For several reasons, the 4036E is an especially good fit for algo trading in a mixed networking environment. Over the past few years, many traders have adopted InfiniBand because it lets them squeeze maximum performance from their compute clusters. But the exchange data feeds and the WAN in these datacenters remain Ethernet-based. And because most investment firms co-locate their algo trading systems close to the exchanges (in places like New York or London) to minimize data feed latency, datacenter real estate and power tend to be at a premium. A 1U box that can handle all these requirements is likely to look attractive to the Wall Street crowd.
Given this feature set, Voltaire thinks the 4036E will actually spur its financial services customers to move to QDR InfiniBand. While many HPC customers, such as those in government and education, have been busy upgrading from DDR to QDR in 2009, most commercial customers have not. Somekh sees the 4036E as "the trigger" to start migration to 40 Gbps InfiniBand in commercial HPC, and especially in the high frequency trading arena.
Another target for the 4036E is commercial HPC, especially the manufacturing, life sciences, and oil & gas segments. In these areas, InfiniBand clusters often require Ethernet connectivity for 10 GigE-connected storage from vendors like NetApp, Panasas and BlueArc. The idea here is to hook the compute nodes to the storage via the Voltaire bridge, which eliminates the need for 10 GigE NICs on the server side. Given that 10 GigE NICs are more expensive than IB QDR adapters, this can save money up front, while also allowing for more configuration flexibility.
Another application is database acceleration. Again, the setup here is InfiniBand clusters connected to Ethernet storage. In these cases, the cluster size is usually limited to 4-8 nodes, so a 1U switch/gateway solution seems like an especially good fit.
The Grid Director 4036E lists at about $1,000 per port and is expected to be available toward the end of the quarter. Somekh says the company already has a number of financial services and commercial HPC customers lined up.