by Tim Staub, associate editor, LIVEwire
Dallas, Texas — Stuart Bailey is founder and chief technology officer of InfoBlox Inc., 912 Chicago Ave., Evanston, IL 60202; 847-475-8500; http://www.infoblox.com
InfoBlox develops and sells high-performance server appliances for the rapidly growing Internet and intranet markets. InfoBlox's mission is to reduce complexity, lower total cost of ownership, enhance reliability, and improve the speed and throughput of Internet services for its customers.
After working for five years as technical lead for the Laboratory for Advanced Computing/National Center for Data Mining at the University of Illinois at Chicago (Dr. Robert Grossman, Director), Bailey founded InfoBlox Inc. During his time at the University of Illinois, he led four competitive teams that received awards in the High Performance Computing (HPC) Challenge at the Supercomputing conference series, including two Most Innovative awards ('96, '98) and one Best of Show award ('99) for their work in high-performance, distributed data mining.
Bailey has co-authored several papers on high-performance networking, wide-area clustering, distributed data management, and distributed data mining, including "PSockets: The Case for Application-level Network Striping for Data Intensive Applications using High Speed Wide Area Networks," presented at SC00 in Dallas.
HPCwire: Is TCP suitable for high performance networking?
BAILEY: TCP is not well suited for high-performance networking. However, the prevalence of TCP, or more importantly the lack of more network-centric feedback, control, and QoS facilities (even in many of today's research networks, such as vBNS and Abilene), will continue to force researchers to use TCP and to develop workarounds for its limitations.
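One well-known class of workaround is application-level network striping, the approach behind the PSockets paper mentioned above: split one logical stream into blocks, deal the blocks round-robin across several parallel TCP connections, and reassemble them in order at the far end. The sketch below shows only the core data-shuffling idea; the function names, block size, and structure are illustrative and are not taken from the PSockets implementation itself.

```python
# Toy sketch of application-level network striping (the idea behind
# PSockets). Real striping would send each "lane" over its own TCP
# socket; here we only model the split/reassemble bookkeeping.

BLOCK = 4  # block size in bytes; real striping uses far larger blocks


def stripe(data: bytes, n_sockets: int) -> list[list[bytes]]:
    """Deal consecutive blocks of `data` round-robin onto n_sockets lanes."""
    lanes: list[list[bytes]] = [[] for _ in range(n_sockets)]
    for i in range(0, len(data), BLOCK):
        lanes[(i // BLOCK) % n_sockets].append(data[i:i + BLOCK])
    return lanes


def reassemble(lanes: list[list[bytes]]) -> bytes:
    """Interleave blocks back into the original byte order."""
    out = []
    for rnd in range(max(len(lane) for lane in lanes)):
        for lane in lanes:
            if rnd < len(lane):
                out.append(lane[rnd])
    return b"".join(out)
```

The payoff in practice is that N parallel connections each open their own TCP congestion window, so the aggregate ramps up to the available wide-area bandwidth far faster than a single tuned connection.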
TCP is an appropriate technology for making end-to-end transmissions reliable (i.e., making sure all the data gets to its destination) without adding complexity to network elements such as routers. This was especially important in the beginnings of the Internet, when it consisted primarily of relatively low-bandwidth, high-loss connections.
TCP = easily implemented reliability at the cost of bandwidth and any kind of bandwidth guarantees.
In contrast to the early days of the Internet, today's high-performance networks have relatively high bandwidth and extremely low loss rates. In other words, the networks themselves have a high degree of reliability. Using TCP over such reliable links is therefore analogous to using a sledgehammer to tack up a picture frame. Today's networks are reliable and more intelligent; why use a heavyweight, host-centric control mechanism like TCP?
As the technical lead for the National Center for Data Mining (NCDM) at the University of Illinois at Chicago (UIC) (Dr. Robert Grossman, Director) from 1995-2000, I had the privilege to work on several projects exploring wide-area clustering, data mining, and data management over both vBNS and Abilene. We quickly realized that what we wanted out of a high-performance WAN was not just reliability, but PREDICTABILITY. Being able to schedule transfers was extremely important to overall resource management in a distributed application.
We experimented with both wide-area native ATM (with network-enforced QoS parameters) and wide-area TCP/IP-based applications. TCP required a tremendous amount of tuning to be competitive on bandwidth, and the best-effort nature of connectionless IP networks made network saturation and traffic storms difficult problems to adapt to. My conclusion after five years of high-performance networking research is that TCP is not ideal for high-performance, wide-area networking.
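The tuning referred to here typically starts with sizing socket buffers toward the bandwidth-delay product (BDP) of the path, since a TCP connection can keep at most one buffer's worth of data in flight per round trip. A rough sketch in Python; the link figures are illustrative examples rather than numbers from the interview, and the operating system may silently clamp the requested buffer sizes.

```python
import socket

# Sketch of one common TCP tuning step: size socket buffers toward the
# bandwidth-delay product. For example, a 622 Mbit/s OC-12 path with a
# 50 ms round-trip time needs roughly 622e6/8 * 0.05 ~= 3.9 MB in
# flight, far above typical default buffers of 64 KB or less.


def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return int(bandwidth_bits_per_s / 8 * rtt_s)


def tuned_socket(bufsize: int) -> socket.socket:
    """TCP socket with enlarged buffers (the kernel may clamp the request)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return s
```

Without buffers near the BDP, a single TCP connection simply cannot keep a long, fat pipe full, no matter how much raw bandwidth the network offers.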
HPCwire: Do you have any views on TCP's future, or any recommendations or insights?
BAILEY: I fully recognize that the vast majority of operating systems and applications are designed to use TCP, and TCP will be in use for a number of years. However, it is my opinion that better network-centric protocols, ones that don't waste bandwidth on unnecessary error checking, that provide better network guarantees and QoS, and that are connection-oriented/switched (e.g., MPLS), need to be aggressively pursued by the research community. In the end, we want high-performance WANs to appear to distributed applications more like a bus than an unreliable, unpredictable network.
HPCwire: Then what is TCP suited for, and what is it not suited for?
BAILEY: TCP is well suited for general distributed applications that need to be developed quickly, or that must be compatible with the commodity networks of today and the near future.
TCP is inadequate for high performance, widely distributed applications that have a mixture of heavy network I/O, CPU usage, and storage I/O. Furthermore, TCP is not at all appropriate for streaming applications, which tend to be loss tolerant by their very nature.
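The streaming point can be made concrete with a small, purely illustrative sketch: when each packet carries its own sequence number, the receiver can detect a gap and simply keep playing, whereas TCP would stall the entire stream until the missing bytes were retransmitted. Loopback UDP is used here only for self-containment, and the "lost" packet is simulated by never sending it; all names and payloads are invented for the example.

```python
import socket

# Illustrative only: the sender "loses" packet 2 by skipping it; the
# receiver notices the sequence gap and keeps consuming frames anyway.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))   # OS picks a free port
recv_sock.settimeout(2.0)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in (0, 1, 3):              # packet 2 is "lost in the network"
    send_sock.sendto(seq.to_bytes(4, "big") + b"frame-data", addr)

received = []
for _ in range(3):
    pkt, _ = recv_sock.recvfrom(1024)
    received.append(int.from_bytes(pkt[:4], "big"))

# A streaming application records the gap and plays on; TCP would
# instead block the whole stream until packet 2 was retransmitted.
gaps = [b for a, b in zip(received, received[1:]) if b != a + 1]
send_sock.close()
recv_sock.close()
```

This head-of-line blocking is exactly why loss-tolerant media streams pay for TCP's reliability in latency they never asked for.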
HPCwire: Tell us something about the technology you wish to promote.
BAILEY: While high-performance network applications research is no longer my primary responsibility, my experience in tuning systems and networks for high-performance distributed applications has helped InfoBlox develop scalable, easy-to-deploy, secure server appliances for core network services. Our first product, DNS One (TM), is an enterprise/ISP-scale Domain Name Service (DNS) server appliance that ensures unprecedented reliability, performance, and ease of use for your DNS solution.
As your network bandwidth continues to rise at a blistering pace, InfoBlox's server appliances will keep core services like DNS from becoming a liability in overall performance and reliability.
You can find out more at: http://www.infoblox.com/