Founded in June 2004 as the OpenIB Alliance to develop a Linux-based InfiniBand software stack, the OpenFabrics Alliance has recently expanded its charter to support iWARP (RDMA over Ethernet) in addition to InfiniBand. The OpenFabrics Alliance provides tools, communications and resources for vendors and developers to create, refine and publish standard open source software stacks for RDMA-capable datacenter fabrics. It comprises approximately 25 technology vendors and end-user organizations.
HPCwire got the opportunity to speak with Jim Ryan, Chairman of the OpenFabrics Alliance, just before he departed for this week's OpenFabrics Workshop in Paris. In this interview, he describes the mission of the Alliance and its significance to the high performance computing community.
HPCwire: Why did OpenIB change its name to OpenFabrics?
Ryan: The Alliance changed its name from OpenIB to OpenFabrics in March this year because we wanted to ensure HPC and datacenter customers' software would work no matter what underlying fabric they chose today or in the future. We're adding the iWARP open source Remote Direct Memory Access (RDMA) over Ethernet code to our existing open source InfiniBand code, making one integrated stack.
HPCwire: Why should application providers be interested in OpenFabrics?
Ryan: The OpenFabrics Alliance is creating a multi-vendor supported open source software stack for the enterprise datacenter, as well as HPC environments. People are starting to call this field “datacenter networking” (DAN) to differentiate it from local area networking (LAN) because the infrastructure, service and application architectures that lead technical managers to deploy these two types of networking are quite different. For example, virtualization support in the network or fabric is required in the DAN but not the LAN.
HPCwire: What other differences does OpenFabrics enable?
Ryan: The LAN is about connectivity and servicing email, browsing, calendaring, information sharing, mobility, VoIP, etc.
Datacenter networking in the enterprise exists to support multi-tiered servers running collaborative applications or applets that access databases, and in an increasing number of cases the services these applets provide within a particular datacenter are highly interconnected. The applications are therefore dependent on network bandwidth and latency, and they frequently collaborate with applications running in related multi-tiered servers in other datacenters, with a similar dependency. For some, this has become known as Grid Computing. For mainstream IT, this is the environment that Service Oriented Architectures enable.
Latencies in the typical datacenter are usually hundreds of milliseconds. Today's datacenter network is a collection of wires: 100BaseT for management, GbE for communication and control, and 2 Gb Fibre Channel for storage. In some cases, HPC customers have deployed proprietary cluster interconnect solutions.
In an HPC datacenter, on the other hand, the combination of multiple interconnected servers with collaborating applets is known as a cluster. Interestingly, some 70 percent of the Top500 have bandwidth and latency characteristics similar to those of enterprise datacenters (i.e., Ethernet), but the remaining 30 percent have much lower latency and much higher bandwidth interconnects, typified by InfiniBand.
In these clusters, latency is usually 5 to 10 microseconds and bandwidth is at least 10, if not 20, gigabits per second.
HPCwire: Clearly some see InfiniBand and Ethernet as competitive technologies. Why has OpenFabrics chosen to adopt both?
Ryan: Intelligent hardware-assisted RDMA and kernel bypass adapters for InfiniBand and 10 Gigabit Ethernet are expected to become even more widely available over the next year or two. Both kinds of adapters need to support the same set of upper layer protocols, such as IP, Sockets Direct Protocol (SDP), the Message Passing Interface (MPI), iSER (iSCSI over RDMA), the SCSI RDMA Protocol (SRP), NFS over RDMA, and Reliable Datagram Sockets (RDS) for Oracle. With all the upper layer protocols being common and many of the architectural issues of RDMA being the same, it just makes sense to have a single open source stack for common cluster interconnect, storage and networking protocols.
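To make the "single stack" point concrete, here is a minimal sketch of the kind of fabric-neutral code the OpenFabrics verbs library (libibverbs) makes possible: the same program enumerates and queries RDMA devices whether the adapter underneath is an InfiniBand HCA or an iWARP-capable Ethernet NIC. The attributes printed here are chosen only for illustration.

```c
/* list_rdma_devices.c
 * A minimal sketch using the OpenFabrics verbs library (libibverbs).
 * Compile with: gcc list_rdma_devices.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;

    /* Returns every RDMA-capable device the stack knows about,
     * regardless of the underlying fabric technology. */
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(dev_list[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: %d port(s), max queue pairs %d\n",
                   ibv_get_device_name(dev_list[i]),
                   attr.phys_port_cnt, attr.max_qp);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(dev_list);
    return 0;
}
```

The point of the sketch is that none of the calls name a fabric; the application sees one verbs interface, and the stack maps it onto whichever RDMA hardware is present.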
HPCwire: As a user of HPC/Grid systems, why would I be interested in using both InfiniBand and Ethernet?
Ryan: As enterprise datacenters move through consolidation phases toward next generation architectures that increasingly leverage virtualization technologies, the importance of very high performance RDMA fabrics continues to grow. Costs can be greatly reduced by leveraging the software development and application costs across both InfiniBand and Ethernet. Datacenter architectures based on a unified switching fabric can be implemented without any of the performance compromises that exist today. Fabric “consolidation” at the highest level of performance for all datacenter applications will serve as the basis for future waves of consolidation and evolution of the datacenter architecture.
In today's high performance datacenter, it is vital that information be transformed into answers; information is the lifeblood of the datacenter. Information technology is a strategic asset and a competitive advantage. The datacenter must continue to evolve to ensure businesses are prepared to meet the new challenges and opportunities of the emerging answer economy.
HPCwire: I understand that you're developing an OpenFabrics Enterprise Distribution (OFED). Can you please explain what it is, and why you are doing it?
Ryan: Today's servers in the enterprise and nodes in HPC clusters nearly always share the same system architecture; that is, in both realms we are using commodity processor, memory, motherboard and I/O chip set technologies. So, in general, applications in the enterprise and applications in HPC are running on architecturally the same hardware, and even more important, there are two dominant operating systems: Linux and Windows.
At the hardware and OS level, the OpenFabrics open source software stack has been developed through the combined efforts of multiple manufacturers, is currently being tested, and will be released by the Alliance members this month as the OpenFabrics Enterprise Distribution 1.0 (OFED 1.0). [Visit the OpenFabrics web site for a picture of this stack at http://www.openfabrics.org/doc.html.]
This software will be distributed by the major Linux distributions, for example, Red Hat and Novell (SUSE), and by the Alliance for Windows.
We anticipate that this concurrent and inclusive availability with releases of Linux and Windows will continue for the foreseeable future. For Linux, we can anticipate this because the kernel level components of the OpenFabrics software are now integrated into the work of the Linux kernel developers.
When customers adopt OFED 1.0, they will be able to rely on the Linux distribution of their choice to contain and support the cluster interconnect of their choice, starting very soon. OpenFabrics applies to enterprise computing just as well as HPC.
For Ethernet with RDMA, we have the open source code in the OpenFabrics software repository and integrated with InfiniBand, but the combined stack is still at the development and test stage. Our goal is to coordinate with the Linux distributions and make OFED 2.0, supporting both iWARP Ethernet and InfiniBand, available late this year or early next.
HPCwire: What makes this software stack unique from other solutions that are available commercially today?
Ryan: The upper level Application Program Interfaces (APIs) in the OpenFabrics software all conform to API specifications defined by various international standards bodies or industry associations such as the IETF, the Open Group, the InfiniBand Trade Association, the RDMA Consortium and others in the storage industry, as well as the key Linux distributions.
Therefore these APIs do not change with processor technologies, network architectures or storage components. They also do not change with OS release levels, or even between Linux, Windows and proprietary systems. So applications coded to these APIs are future-proof: they'll work with architectures and infrastructure deployed in HPC and the enterprise today and with any foreseeable ones tomorrow, particularly as sites start to deal with the pain points of scaling up by a factor of ten (1 GigE to 10 GigE, for example). So applications can be OS and transport agnostic. That's big for the IT technology community.
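As an illustration of that transport independence, consider an ordinary MPI program: the same source code runs unchanged whether the MPI library beneath it was built for TCP over GigE, iWARP, or InfiniBand verbs. This is a minimal sketch, not tied to any particular MPI implementation.

```c
/* hello_fabric.c
 * A minimal MPI ping between two ranks; the interconnect is chosen
 * entirely by the MPI library, not by this application code.
 * Compile with: mpicc hello_fabric.c ; run with: mpirun -np 2 ./a.out
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char buf[64];

    if (size >= 2) {
        if (rank == 0) {
            /* Rank 0 sends a message; the fabric underneath is invisible here. */
            strcpy(buf, "hello over whatever fabric MPI was built for");
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received: %s\n", buf);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Moving this program from a GigE cluster to an InfiniBand or iWARP cluster is a relink or a runtime choice of MPI build, not a code change, which is exactly the kind of insulation the common APIs are meant to provide.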
Even better, customers can deploy their latest Linux or Windows consistent with OpenFabrics and make sure their infrastructure works today as they need and their software will be ready for scaling up their hardware as future business needs require.
Customers can now have their core software ready before they make hardware changes. Usually it's the other way around, and months are spent settling the software and applications down to achieve the stability and performance for which the hardware was purchased.
HPCwire: What's next for the OpenFabrics Alliance?
Ryan: The OpenFabrics Alliance is hosting its first European workshop in Paris June 22-23 and the details and presentations will be available on our website at http://www.openfabrics.org. We also just added IBM as a member, and just formed an Interoperability Working Group (IWG), working with the University of New Hampshire Interoperability Laboratory. This will allow us to develop a process and an environment to create and test enterprise-quality RDMA software. The test plan is available on the website. Also on the site, anyone can join any of the email reflectors or wikis. Participation and contribution of open source code is open, but inclusion in the stack is rigorously managed.
This first release of OpenFabrics is the start of a significant set of changes in the datacenter. It took 10 years for Ethernet and Fibre Channel to settle down and achieve widespread adoption. We need to understand that this is a change of similar significance.