Earlier this month, ClusterVision announced that it has been selected to build the DAS-3 Grid (Distributed ASCI Supercomputer) in the Netherlands. The DAS-3 Grid will consist of five Linux supercomputer clusters, with an aggregate theoretical peak performance of more than 3.5 teraflops. The individual clusters will be hosted at four leading Dutch universities and will be connected with SURFnet's dedicated multi-color optical network and Myricom's Myri-10G interconnect. Because of the advanced interconnect technology, data transfer rates between clusters will be up to 80 gigabits per second (Gbps).
DAS-3 is the third-generation DAS. Unlike its DAS-1 (1997) and DAS-2 (2002) predecessors, DAS-3 will use the inter-city SURFnet optical network as the grid's backbone. In total, DAS-3 will link more than 550 AMD Opteron processors, 1 TB of memory, and 100 TB of mass storage. The five grid clusters will run the Linux-based ClusterVisionOS cluster operating system.
All the DAS grids were designed as research environments for studying distributed computing architectures. Topics of interest include parallel programming languages, operating systems, runtime language systems and algorithms. The Ibis open source Java grid software environment has been studied extensively on the DAS-2 Grid.
Compared to other grids, the DAS architecture is very homogeneous. Although each individual cluster's memory capacity and number of processors may vary, all systems are Opteron-based, run the same Linux OS and are linked with the same interconnect hardware. This greatly simplifies system administration. More importantly, since the fundamental characteristics of each system are identical, distributed application performance is much simpler to measure. No “apples to oranges” comparisons are necessary.
Because of this, Dr. Henri Bal, a researcher from Vrije Universiteit, thinks the DAS Grid model is an ideal environment to do parallel computing research.
“You could say it's almost like a laboratory grid,” said Bal. “Some people say it's not really a grid because it's too nice and clean and it's too homogeneous. But you can do really meaningful, controlled experiments.”
The high-speed SURFnet optical ring network and the small size of the Netherlands allow for a rather low-latency inter-city interconnect. The Dutch like to remind people that their country measures only two milliseconds by three milliseconds (in terms of the speed of light in the optical fiber). So they're going to enjoy reasonably low latency across their network, which will allow them to run fairly tightly coupled distributed computing applications.
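The "milliseconds instead of kilometers" figure is easy to sanity-check. The sketch below is a back-of-envelope calculation, assuming light travels through fiber at roughly c divided by a refractive index of about 1.5 (approximately 200,000 km/s); actual fiber routes wind between cities, so effective path lengths run longer than straight-line distances.

```python
# Back-of-envelope check of measuring a country in milliseconds:
# one-way propagation delay over optical fiber.

SPEED_OF_LIGHT_KM_S = 300_000   # c in vacuum, km/s
FIBER_INDEX = 1.5               # assumed refractive index of the fiber

def fiber_latency_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over distance_km of fiber."""
    speed_in_fiber = SPEED_OF_LIGHT_KM_S / FIBER_INDEX   # ~200,000 km/s
    return distance_km / speed_in_fiber * 1000

# The Netherlands is roughly 200 km x 300 km across; since fiber paths
# are longer than straight lines, a few hundred km of fiber comes out
# at a couple of milliseconds one way.
for km in (200, 300, 400, 600):
    print(f"{km} km of fiber -> {fiber_latency_ms(km):.1f} ms one way")
```

Even the longest plausible intra-country fiber path stays in the low single-digit milliseconds, which is what makes tightly coupled distributed applications feasible across the grid.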
Myri-10G Finds Its Natural Habitat
Presumably many of the top high performance interconnect vendors bid for the DAS-3 work. In what has become a crowded field, numerous companies are offering 10 Gigabit interconnect solutions, either Ethernet or InfiniBand. And although the bidding vendors were not privy to the details of the competition, it's a good bet that the leading InfiniBand and Ethernet interconnect companies all wanted this work.
The selection of Myricom's Myri-10G interconnect for DAS-3 was the result of its dual-protocol capabilities. The product offers Myricom's proprietary Myrinet technology converged with industry-standard Ethernet. At the physical level, the ports are 10 Gigabit Ethernet (10 GbE). The data rate is 10+10 Gbps, full-duplex. At the data-link level, the links may use either Ethernet or Myrinet protocols.
The Myrinet protocol will be used within the clusters to take advantage of its lower latency and lower CPU overhead (no IP protocol stack). Between clusters, the Myri-10G will use TCP/IP over Ethernet. In both cases the data rates will be uniformly 10 Gbps. According to Myricom, it's this dual-protocol interoperability that makes its solution unique.
“This is the first major business that we've seen that depends so critically on the Myrinet-Ethernet convergence,” said Chuck Seitz, Myricom founder and CEO. “Here they're getting it all. They're getting the low latency communication in the clusters; they're getting the Ethernet IP communication between the clusters. It's just plug-and-play.”
Seitz is hard-pressed to contain his enthusiasm for the project. For him, the DAS-3 Grid represents an ideal showcase for Myricom's converged Myrinet-Ethernet technology. Each cluster is expected to be installed with eight 10 GbE links to the Myricom switch, which will be directly connected into the inter-city ring that links the universities. So data traffic between clusters will be on the order of 80 Gbps.
“To me this is the way people interested in performance always wanted to make clusters,” said Seitz. “When this thing goes [operational] in August it's going to be the fastest grid of clusters in the world.”
The largest grid in the United States is the TeraGrid, whose partner sites include NCSA, SDSC, PSC, ORNL, Purdue University, Indiana University and TACC. Each site connects to the TeraGrid at either 10 or 30 Gbps.
Many other grids use one or two special server nodes that have interfaces to the typical telecom OC-48 links that run from city to city. Basically, a cluster host talks through a specialized communications server that shuttles data across the backbone. An OC-48 link supports bandwidth up to 2.5 Gbps; the more advanced OC-192 links take it up to 10 Gbps.
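The gap between those telecom rates and DAS-3's aggregate of eight 10 GbE links is worth putting in concrete terms. The sketch below is a rough comparison using nominal line rates from the article (the 100 GB dataset size is an illustrative assumption, and protocol overhead is ignored):

```python
# Rough comparison of the inter-cluster link capacities mentioned in
# the article. Rates are nominal; protocol overhead is ignored.

LINK_RATES_GBPS = {
    "OC-48": 2.5,
    "OC-192": 10.0,
    "DAS-3 (8 x 10 GbE)": 8 * 10.0,
}

def transfer_seconds(gigabytes: float, rate_gbps: float) -> float:
    """Seconds to move `gigabytes` of data at a nominal rate of `rate_gbps`."""
    gigabits = gigabytes * 8          # 1 byte = 8 bits
    return gigabits / rate_gbps

# Illustrative: time to move a 100 GB dataset between sites.
for name, rate in LINK_RATES_GBPS.items():
    print(f"{name:>20}: {transfer_seconds(100, rate):6.1f} s")
```

At nominal rates, a bulk transfer that ties up an OC-48 link for more than five minutes completes in about ten seconds across DAS-3's aggregate inter-cluster bandwidth.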
In the DAS model, no communication server nodes are needed. Data goes straight from the Myricom switch into a Nortel router that's connected to SURFnet's WDM optical fiber. According to Seitz, one way to look at it is that the IP protocol stack processing is done in the cluster host rather than in a communication server. Myricom's solution simply extends the communication fabric, which allows the grid builder to significantly simplify the communication infrastructure.
“This is the dual-protocol interoperability business,” said Seitz, “where the connection between the host in the cluster and the inter-city optics fiber [between clusters] is absolutely seamless. They don't have to add an extra box to shuttle the data around.”
A New Model for Distributed Computing
Seitz is hoping that the DAS-3 architecture will become a model not only for other grids, but for many kinds of high performance wide-area networking applications as well. For example, Manhattan brokerage firms that use a lot of HPC and require high-bandwidth connections to the trading floor could use this sort of setup to great advantage. Another potential application would be a typical airline reservation system, where two or more sites must do a lot of real-time computing while keeping distributed databases synchronized.
For businesses like these, it costs millions of dollars per minute to go without service, so they perform constant data mirroring for disaster recovery, usually over their own private fiber infrastructures. Seitz believes that the DAS-3 model is a near perfect architecture for supporting these types of applications.
As the cost of optical network communication technology decreases and distributed computing becomes mainstream, more commercial users are going to be looking for ways to take advantage of this new infrastructure. The emergence of enterprise grid applications within the last several years points the way towards more widespread adoption of distributed computing architectures. With their Myri-10G product, Myricom sees itself as a key enabler for this new paradigm.
“This is the way people always wanted to build grids,” said Seitz. “When people in the U.S., Japan, China and other areas of Europe see this, they're going to develop grid envy.”