Myricom is well known for its innovations in cluster computing interconnects. Witness the three 2005 HPCwire Reader's Choice Awards, including “Most Innovative HPC Networking Solutions” and “Most Important Software Innovation” for the Myrinet MX software. In the TOP500 Supercomputer list, there are more systems using the Myrinet interconnect family than any other low-latency network. In its 13th year, Myricom is still a pioneer in HPC.
Michael Feldman, Editor, HPCwire, had the opportunity to talk with Myricom CEO and founder Dr. Chuck Seitz just prior to his departure for ISC2006 in Dresden, Germany, and to ask him questions ranging from the acceptance of Myricom's new products to his views on the raging Ethernet versus Ethernot debates.
HPCwire: A year ago at ISC, you announced the new generation of Myricom products, Myri-10G, a convergence of your popular Myrinet technology with Ethernet. What was your strategic thinking in this departure from a successful business model?
Seitz: The Myri-10G and Ethernet-convergence strategy wasn't just my idea, but came out of planning discussions during 2002-2003 with several of Myricom's senior techies. We were ready to do a fourth generation of products. The evolution to 10-Gigabit/s data rates was obvious, and the convergence with standard Ethernet, while a bit more formidable, appealed to us as a way to expand our market.
Myrinet has always been compatible with Ethernet. For example, Myrinet NICs have always had Ethernet MAC addresses, the software drivers advertise themselves to the host OS as Ethernet drivers, and Myrinet carries TCP/IP traffic along with MPI and other traffic by what we call “Ethernet emulation.” Meanwhile, the IEEE 802.3ae 10-Gigabit Ethernet standard had been released, and we thought that this standard was downright elegant. Adopting the 10-Gigabit Ethernet PHYs (layer 1, the Physical layer) for what we then called Myrinet-10G was an easy technical decision.
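As a rough illustration of the “Ethernet emulation” idea Seitz describes, the sketch below wraps an ordinary Ethernet II frame inside a source-routed packet, the way a Myrinet-style fabric might carry TCP/IP traffic alongside MPI. The header layout, route bytes, and MAC addresses here are hypothetical, for illustration only, not Myricom's actual wire format.

```python
import struct

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes,
                         ethertype: int, payload: bytes) -> bytes:
    """Assemble a standard Ethernet II frame (without the trailing FCS)."""
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload

def encapsulate(route: bytes, eth_frame: bytes) -> bytes:
    """Prefix the frame with a hop-by-hop source route, as a
    source-routed fabric might, so switches can forward it without
    tables. One length byte, then the route, then the frame."""
    return bytes([len(route)]) + route + eth_frame

dst = bytes.fromhex("00a0c9112233")   # hypothetical MAC addresses
src = bytes.fromhex("00a0c9445566")
# EtherType 0x0800 marks an IPv4 payload; "hello" stands in for a
# real IP datagram in this sketch.
frame = build_ethernet_frame(dst, src, 0x0800, b"hello")
packet = encapsulate(b"\x02\x05", frame)
```

Because the encapsulated payload is a legal Ethernet frame, the same NIC and driver can hand TCP/IP traffic to the host OS as an ordinary Ethernet device while MPI traffic bypasses that path.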
It was then largely a business decision for us to design at least our NICs and software to be able to support both Ethernet and Myrinet protocols at the Data Link layer (layer 2). This approach improved the Ethernet interoperability of our HPC products, and also allowed us to leverage our silicon, software, and sales channel to diversify into the 10-Gigabit Ethernet market. We knew even then how to take several additional steps in this technical convergence, the second of which (Low Latency Ethernet) will be announced at ISC2006, to bring low latency and low host-CPU utilization into the Ethernet world. Also on the business side, we had a lot of analyst forecasts that the 10-Gigabit Ethernet market in particular would grow very rapidly in the 2006-2009 period, a distinct contrast to the limited HPC market.
I've been fairly outspoken on conference panels that I don't see much future in “specialty networks.” Mainstream Ethernet will prevail. Myrinet is a specialty network that's used mostly in HPC, a limited market. Fibre Channel is a specialty network used mostly in storage. InfiniBand is also a specialty network, one that does not have the installed base or maturity of Myrinet or Fibre Channel. I believe that over the next few years all of these specialty networks will decline or disappear, their functions being taken over by Ethernet, perhaps with slight extensions of Ethernet such as those Myri-10G provides. In addition to Ethernet being ubiquitous in general networking, it is capable of being an excellent HPC-cluster network, or a storage network (iSCSI).
HPCwire: The Myri-10G products have been shipping for several months now. At this point, how are they being received by the market?
Seitz: When I look at Myricom sales reports, I see four distinct markets.
Myrinet-2000 (2-Gigabit Myrinet) sales continue to be so strong that it won't be until the third or fourth quarter of 2006 that the accelerating Myri-10G revenue passes Myrinet-2000 revenue.
With 10-Gigabit Myrinet, we have shipped NICs, switches, and software for about 30 clusters since the end of February, many to the usual early adopters. Now that our long-time OEM customers have completed their qualification testing, we expect 10-Gigabit Myrinet volume to jump steeply. We will be shipping Myri-10G products for several large clusters during the third quarter, and for some very large clusters during the fourth quarter.
In the pure Ethernet market, we're selling Myri-10G NICs with the driver and firmware that make them protocol-offload 10-Gigabit Ethernet NICs. We've closed some key “design wins,” particularly in areas such as storage, video, and media, which need the wire-speed throughput of our NICs. The sales volume in this segment is accelerating.
Finally, the innovative “hybrid” or “interoperability” market is typified by the DAS-3 grid of clusters that you described in your very lively Grid Envy article. Here people really do demand the best of both worlds: the low latency, low host-CPU utilization, and scalability of Myri-10G in Myrinet mode together with the interoperability of Ethernet. This segment extends beyond grids and clusters to metro networks, private corporate networks, and affordable, secure, multi-site installations that include disaster-recovery capabilities.
HPCwire: What are the initial customer experiences with your Myri-10G product offering?
Seitz: Let me give you one data point for HPC, and one for Ethernet.
Dual Opterons, often with dual-core Opterons in each of two sockets, have been popular for HPC clusters. With the Linpack BLAS used for Opterons, one achieves about 90 percent of peak performance. With jobs running on sub-clusters in the range from 32 to 64 nodes, including dual dual-core systems, customers are seeing High Performance Linpack (HPL) results at about 87 percent of peak. HPL is not very sensitive to latency, but it is sensitive to data rate, so these 87 percent results are expected given the 1.2 GByte/s one-way and 2.3 GByte/s two-way data rates of MX with Myri-10G. These kinds of results make the end customers quite happy.
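A quick back-of-the-envelope check of these figures: the ~90 percent per-node BLAS efficiency and ~87 percent cluster HPL efficiency come from the interview, while the clock rate below is an assumed value for illustration (the interview does not state one).

```python
# Assumed parameters for a dual-socket, dual-core Opteron node.
GHZ = 2.2             # assumed clock rate, not stated in the interview
FLOPS_PER_CYCLE = 2   # one add + one multiply per core per cycle
CORES = 4             # 2 sockets x 2 cores

peak_per_node = CORES * FLOPS_PER_CYCLE * GHZ   # GFLOP/s, theoretical peak
hpl_per_node = 0.87 * peak_per_node             # delivered HPL rate

# Fraction of the single-node BLAS efficiency (90%) that survives
# once the interconnect is in the loop at 32-64 nodes (87%):
scaling = 0.87 / 0.90

print(f"peak {peak_per_node:.1f} GFLOP/s, HPL {hpl_per_node:.1f} GFLOP/s")
print(f"cluster run retains {scaling:.1%} of single-node efficiency")
```

The point of the arithmetic is that the network costs only about three percentage points of efficiency relative to a single node, which is why the interconnect's data rate, rather than its latency, dominates the HPL result.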
For 10-Gigabit Ethernet, our tech-support people got an email recently from a customer who said that it was really nice to buy a 10-Gigabit Ethernet NIC that installed, worked, and performed as advertised, and without leaving him with the feeling he had been “gouged.” Tech-support people spend much of their time dealing with problems, so it's great when they get a message such as this one.
HPCwire: How will Microsoft's entry into HPC affect the cluster computing market? More specifically, how will it affect Myricom?
Seitz: Myricom has been working closely with Microsoft. In addition to Myricom supplying Myrinet-2000 and MX software support for Microsoft Windows Compute Cluster Server 2003, Myricom is a member of the Microsoft Partners Solution Center (MPSC) in Redmond. The MPSC operates a 70-node Myri-10G cluster of HP dual dual-core Opterons. Of course, the cluster runs Windows CCS. This cluster is very useful for Windows software testing, Myri-10G software testing, application testing, and application development, and is one of the reasons we're confident that Myri-10G clusters with Windows CCS will be trouble-free when they appear at end customers.
My views about the likely success of Windows CCS clusters for HPC are informed by the fact that about 85 percent of Myricom's market (in dollars) consists of customers who have no interest in being software developers or system administrators, even though they are probably running Linux today. These are customers such as auto companies running LS-Dyna or Fluent, or Saudi Aramco running seismic and reservoir-modeling codes, or the chemistry department at the University of Anywhere. These customers don't operate giant clusters, but perhaps up to a few hundred nodes. This is the market that Microsoft has said that they are going after, clusters of limited size where the most important factor for customer satisfaction is that the installation and maintenance be turnkey.
In the Linux world, licensed applications are distributed as binaries linked to specific libraries, such as specific MPI libraries. Each time Linux and the libraries change, the application vendors have to generate and test new binaries. One very smart thing that Microsoft has done to reduce the logistical problems in operating a cluster and to make life easier for application developers is to standardize the APIs. For fast networks, the APIs for message passing employ Sockets (Winsock Direct). This approach gives up a very small amount in performance, but allows applications to be distributed as binaries, exactly fitting the Microsoft Windows world.
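The sockets approach Seitz refers to can be sketched generically: the application codes against the stable, standard socket API, and whatever fast transport sits underneath is the platform's concern. The example below is an illustrative stand-in (not Microsoft's Winsock Direct or Myricom's code) showing length-prefixed message passing over a socket pair.

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    """Length-prefix each message so the receiver knows its extent."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping because recv may return less."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    """Read the 4-byte length header, then the message body."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()          # stand-in for a cluster connection
send_msg(a, b"halo exchange data")
msg = recv_msg(b)
```

Because only the socket API is visible to the application, an ISV can ship one binary and let the OS map the sockets onto the fast interconnect, which is exactly the portability argument made above.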
So, as much as Myricom loves Linux, I think Microsoft will become a significant force in HPC clusters simply by reducing the system-management hassle factor, and by making it attractive for ISVs to develop applications to run under Windows CCS. Of course, there's no reason why the Linux vendors couldn't do the same thing.
HPCwire: What do you think of the growth in popularity of InfiniBand interconnects for high-performance clusters? How are Myricom's products going to compete?
Seitz: I hope that you'll be satisfied with a short answer.
First of all, it's clear from public information such as the TOP500 lists that Myricom is competing very effectively. Outside of the TOP500 list, we continue to win the lion's share of the market for production clusters, even with Myrinet-2000 versus InfiniBand. With Myri-10G, our products compete by having lower latency, higher throughput, lower host-CPU utilization, Ethernet interoperability, highly evolved and mature software, and wonderful tech support.
Anyone can get a foothold in the HPC market — whether in interconnect or as a cluster integrator — if they are willing to “buy business” by quoting prices below costs. As a general rule, HPC customers love a good deal. We see a lot of this “buying business” from the InfiniBand companies, particularly for large clusters that give them some bragging rights. I don't believe that any of the InfiniBand companies are profitable. They are living off of venture capital. By contrast, 2005 was Myricom's 11th consecutive profitable year. We fund all of our research and development, as well as sales, marketing, and G&A expenses, from profits on sales. We don't quote unprofitable business.
Not to be crass about it, but if we're talking about a war between upstart InfiniBand companies and Myricom, or between InfiniBand and Ethernet, it's partly a war of attrition. How long will the venture backers of the InfiniBand companies be willing to see these companies lose money?
Also, as I said before in my comments about “specialty networks,” InfiniBand is very much a specialty network. Myricom is making a successful transition from being a specialty-network company to being a supplier of high-performance Ethernet products. Ethernet will thrive. Specialty networks will fade away.
HPCwire: What is Myricom looking to accomplish over the next year?
Seitz: We see the next year as devoted to continued refinements to our Myri-10G components and software. We're working on new chips and software aimed at yet better performance, and at taking Ethernet places it's never gone before.
As you know, Myrinet is an ANSI standard. Our products also conform to IEEE Ethernet standards and to a large number of software and API standards. As throughout Myricom's history, we'll continue to make the software and network protocols we develop open, and will contribute software to open software distributions. A recent example is that we contributed our 10-Gigabit Ethernet driver to the Linux kernel. It was accepted and will first appear in the Linux 2.6.18 kernel. We expect to devote greater resources in the future to such activities.
Dr. Charles L. Seitz earned BS, MS, and PhD degrees from MIT, and during the 17 years prior to founding Myricom in 1994 was a Professor of Computer Science at Caltech. Myricom products are based in part on the communication, switching, and software technologies that Seitz's Caltech research group developed under DARPA sponsorship for advanced multicomputers. Among Seitz's many professional honors, he was elected to the US National Academy of Engineering in 1992 “for pioneering contributions to the design of asynchronous and concurrent computing systems.”