In the quest for a unified data center fabric, a new company has introduced a novel Ethernet-based solution. On Tuesday, Woven Systems, Inc. announced its EFX 1000 Ethernet Fabric Switch. The 144-port 10 Gigabit Ethernet (GbE) switch is designed to create a lossless Ethernet fabric with latency comparable to InfiniBand, and at one-fifth the cost of other 10 GbE solutions.
The switches, which can be linked together to support up to 4,000 10 GbE ports, are intended for scale-out data centers and traditional high performance computing systems. According to Woven, the EFX switch is compatible with any 10 GbE-capable server, storage system, or router.
Woven Systems was founded in November 2003 by Bert Tanaka, the company's chief technology officer, and Dan Maltbie, the chief product officer, with the goal of developing the next generation of high performance interconnect technology. Both previously worked for Caspian Networks, a company known for its high performance routing technology. In 2005, Woven raised $10 million in Series A funding and has been spending it judiciously on developing the EFX 1000. Much of the in-house work has focused on the vSCALE packet processing chip at the heart of the switch.
Derek Granath, Woven's vice president of marketing, says the company's business direction is being driven by two trends: scale-out computing environments and multicore processors. Clusters, grids, utility computing, and virtualization environments are all taking advantage of these trends, but as compute and storage systems grow in capacity, users are finding 1 Gb/sec of bandwidth inadequate for many applications. In addition, data centers often have to support a mix of interconnect types to meet compute, storage, and WAN requirements.
“The challenge that IT managers are faced with for the scale-out model is that the cost of interconnecting can far exceed that of the servers,” says Granath, “especially if you have redundant Fibre Channel HBAs, multiple Gigabit Ethernet connections, and possibly even InfiniBand for high performance applications.”
For high performance technical computing users needing 10 Gb/sec throughput, InfiniBand has been the most cost-effective solution. Even commercial enterprise users, like Wall Street firms, are starting to look at InfiniBand for high-throughput market data applications.
It's not just the purchase price of the interconnect; its power consumption can exceed that of the server itself. A high-end server today may draw about 200 watts, while a single 10 GbE connection consumes at least 70 watts, and a server may need three 10 GbE ports in a two-tiered system to get the required throughput. Three ports at 70 watts apiece comes to 210 watts, more than the server itself draws. This high power consumption is one of the factors inhibiting the adoption of current 10 GbE solutions in HPC and in the broader commercial market.
According to Woven, its EFX switch burns 16 watts per port, which works out to about four times better than the 10 GbE competition and is in the ballpark of InfiniBand (although IB switches tend to run at single-digit watts per 10 Gb/sec port).
The other downside of current 10 GbE solutions is relatively high latency, usually in the range of 10 to 40 microseconds. Woven has achieved four microseconds of end-to-end latency, which is better than Fibre Channel but still not as speedy as InfiniBand; Mellanox recently announced one-microsecond latency for its 20 Gb/sec InfiniBand offering.
Perhaps the most compelling feature of the EFX switch is its intelligent congestion management, which allows the switch to dynamically load balance traffic across the available 10 GbE paths. Real-time traffic management is especially valuable when applications exhibit different I/O profiles depending on their input data, or when a system is used to run multiple types of applications.
Dynamic congestion management is something even InfiniBand switches don't currently offer. In those environments, traffic is tuned manually, with a subnet manager configuring static route maps. That leaves the fabric open to congestion; relieving it means going back and retuning the route maps.
The intelligence in Woven's congestion management resides in its vSCALE packet processor ASICs (three per card), each of which manages 40 Gb/sec of traffic. The chip inspects Ethernet packets to monitor latency, a proxy for congestion, across the network. When a latency threshold is crossed, traffic is rerouted onto a less loaded alternate path. This is done by modifying the packets' VLAN tags at layer 2.
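Woven hasn't published the vSCALE algorithm, but the behavior Granath describes can be sketched in a few lines. The Python fragment below is a conceptual illustration only; the path structure, function names, and threshold value are assumptions, not Woven's implementation. It rewrites a flow's VLAN tag, steering the flow onto the least congested alternate path, only when the current path's measured latency crosses the threshold:

```python
from dataclasses import dataclass

# Conceptual sketch only: Woven's actual vSCALE logic is proprietary silicon.
# The Path structure, function names, and threshold value are assumptions.
LATENCY_THRESHOLD_US = 4.0   # illustrative trigger, in microseconds


@dataclass
class Path:
    vlan_id: int             # each alternate path through the fabric is named by a VLAN tag
    latency_us: float        # running latency estimate, the proxy for congestion


def steer(flow_vlan: int, paths: list[Path]) -> int:
    """Return the VLAN tag a flow should carry next.

    Rerouting is done per flow, so packets within a flow stay in order,
    consistent with the 'no reordering' claim.
    """
    current = next(p for p in paths if p.vlan_id == flow_vlan)
    if current.latency_us <= LATENCY_THRESHOLD_US:
        return flow_vlan                        # no congestion detected: stay on the current path
    lightest = min(paths, key=lambda p: p.latency_us)
    return lightest.vlan_id                     # rewrite the tag, shifting the flow to the lightest path


paths = [Path(10, 9.2), Path(20, 1.1), Path(30, 3.4)]
print(steer(10, paths))   # path 10 is hot, so the flow moves to VLAN 20
```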
Essentially the switches are load balancing the data traffic across the entire fabric. The ability to dynamically reroute traffic circumvents the static routing that is inherent in typical Ethernet networks, where the inability to manage congestion has become a defining weakness. If it works as promised, this would be a big step forward for Ethernet, especially for applications where Quality of Service (QoS) requirements are specified.
“It turns out that nobody had ever done dynamic routing based upon congestion measurement,” says Granath. “Standard Ethernet and standard InfiniBand don't have any way of being able to change paths dynamically in real time. What our packet processor ASIC has is the ability to steer traffic onto different paths in a dynamic manner by monitoring any of those congestion points or hot spots that might occur, and do so intelligently, without dropping or reordering any packets.”
In situations where all paths are congested, such as when multiple servers are writing to a storage device through a single port, an Ethernet PAUSE is issued to slow down the senders. The pauses work like anti-lock brakes: they slow the traffic just enough to prevent packets from being dropped, something you definitely want to avoid in an HPC application. In this way, Woven has essentially created a lossless Ethernet fabric.
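The PAUSE mechanism itself is standard IEEE 802.3x flow control, not anything Woven-specific. A PAUSE frame is just a tiny MAC control frame sent to a reserved multicast address; the sketch below builds one in Python (the source MAC address is a made-up example):

```python
import struct

PAUSE_DEST = bytes.fromhex("0180c2000001")   # reserved multicast address for MAC control frames
ETHERTYPE_MAC_CONTROL = 0x8808               # IEEE 802.3 MAC control EtherType
OPCODE_PAUSE = 0x0001


def build_pause_frame(src_mac: bytes, quanta: int) -> bytes:
    """Build an IEEE 802.3x PAUSE frame telling the link partner to stop
    transmitting for `quanta` pause quanta (one quantum = 512 bit times)."""
    header = PAUSE_DEST + src_mac + struct.pack("!H", ETHERTYPE_MAC_CONTROL)
    payload = struct.pack("!HH", OPCODE_PAUSE, quanta)
    return (header + payload).ljust(60, b"\x00")   # pad to minimum frame size; FCS is added by the MAC hardware


frame = build_pause_frame(bytes.fromhex("02aabbccddee"), quanta=0xFFFF)
print(frame.hex())
```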
A user can also partition the fabric by work class or priority to guarantee resources to specific applications. A partition may be shared by several applications or dedicated to a particular one. As with the congestion management, this capability is implemented with VLANs.
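Because the partitioning rides on VLANs, it maps directly onto the standard IEEE 802.1Q tag, whose 3-bit priority field and 12-bit VLAN ID can carry exactly this kind of work-class and partition information. The sketch below shows how such a tag packs; the VLAN numbers and priority assignments are hypothetical, not Woven's:

```python
import struct

ETHERTYPE_8021Q = 0x8100   # TPID identifying an 802.1Q-tagged frame


def vlan_tag(vid: int, priority: int = 0) -> bytes:
    """Build an IEEE 802.1Q tag: the TPID followed by the tag control info,
    which packs a 3-bit priority (PCP), a drop-eligible bit, and a 12-bit VLAN ID."""
    assert 0 <= vid < 4096 and 0 <= priority < 8
    tci = (priority << 13) | vid
    return struct.pack("!HH", ETHERTYPE_8021Q, tci)


# Hypothetical partitioning: VLAN 100 at high priority dedicated to an HPC
# application, VLAN 200 at default priority for bulk storage traffic.
hpc_tag = vlan_tag(100, priority=6)
storage_tag = vlan_tag(200, priority=0)
```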
“One of the practical limitations of an InfiniBand fabric today is that they're typically used for one application,” says Granath. “This is because the I/O profiles of these applications can vary widely.”
He says the two initial markets for the switch will be Web service data centers, where it will serve as an aggregator for Gigabit Ethernet ports, and HPC. The HPC solution applies to both server clusters and clustered storage. Granath says Woven has talked to a number of the national labs and a few commercial HPC companies, and is also starting to establish relationships with cluster manufacturers and other HPC system integrators.
The first trial will be at a national lab that is benchmarking their solution against InfiniBand. Although the company declined to name the organization, Sandia National Laboratories (mentioned in Woven's press release) would be a likely candidate.
Woven says it intends to establish four beta customers (one national lab and three Web services companies) by the end of the month and to have the product ready for general availability in Q3. As it ramps up, the company is looking to ride the rise of 10 GbE in the data center over the next three years. According to IDC, 5 million 10 GbE ports will be added by 2009, 20 times the number of InfiniBand ports.
“Reasonably priced Ethernet NICs will come on the market this year,” says Granath. “The volume ramp starts in 2008. By 2009 it's expected to become a very, very large market.”