SCIENCE & ENGINEERING NEWS
SAN DIEGO, Calif. — Ron Anderson, Mike Lee and Steve J. Chapin report that scientific clusters are used across a broad range of disciplines, including biology (genome mapping, protein folding), engineering (turbofan design, automobile design), high-energy physics (nuclear-weapons simulation), astrophysics (galaxy simulation) and meteorology (climate simulation, earth/ocean modeling). For example, computer scientists at Syracuse University’s High Performance Distributed Computing Lab in Syracuse, N.Y., are working with biochemists at the University of Washington on ab initio protein folding, a critical step in interpreting the results of Celera Genomics’ recent sequencing of the human genome. To attack this problem, Syracuse’s Orange Grove cluster uses Condor from the University of Wisconsin to run thousands of independent jobs, each covering a portion of a search space. The jobs use a technique known as simulated annealing, which starts from a guess at a solution and iteratively refines it, occasionally accepting worse candidates to escape local minima, until it converges.
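To make the annealing loop concrete, here is a minimal sketch in Python. It is a generic illustration, not the Orange Grove code: the energy function, the perturbation step and the geometric cooling schedule are all assumptions.

```python
import math
import random

def simulated_annealing(energy, perturb, initial,
                        t_start=1.0, t_end=1e-3, cooling=0.95):
    """Start from a guess and iteratively refine it, occasionally
    accepting worse states to escape local minima."""
    state = best = initial
    temp = t_start
    while temp > t_end:
        candidate = perturb(state)
        delta = energy(candidate) - energy(state)
        # Always accept improvements; accept worse states with
        # probability exp(-delta / temp) (the Metropolis criterion).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = candidate
            if energy(state) < energy(best):
                best = state
        temp *= cooling  # geometric cooling schedule (an assumption)
    return best

# Example: minimize f(x) = x^2 from a random starting guess.
result = simulated_annealing(
    energy=lambda x: x * x,
    perturb=lambda x: x + random.uniform(-0.5, 0.5),
    initial=random.uniform(-10, 10),
)
```

In the Condor setting, each of the thousands of independent jobs would run a loop like this over its own portion of the search space.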
One of the first “modern” clusters was the Beowulf project at NASA’s CESDIS, led by Thomas Sterling and Donald Becker (http://www.beowulf.org). Beowulf was inspired by physicists’ need to analyze large data sets and by the scarcity of computer time on their local supercomputers. The Beowulf team bought 16 off-the-shelf PCs and connected them with two Fast Ethernet networks. The emphasis of the project was on using the cheapest available commodity components, so the team chose commodity Ethernet and Intel-compatible processors.
At the other end of the spectrum, the Computational Plant (C-Plant) project at Sandia National Labs in Albuquerque, N.M., is an attempt to build a true supercomputer from COTS (commercial off-the-shelf) components. The Sandia scientists focused on distributed computing using message passing (that is, no shared memory between processors). Prioritizing performance, they built their machine from Digital (now Compaq) Alpha processors with a gigabit-speed system-area network based on Myrinet. The scientists have developed low-latency message-passing software, called Portals, which lets them extract maximum performance from the underlying network. The newest C-Plant cluster, Antarctica, is scheduled to be in place in mid-October and will have more than 1,800 Alpha computers connected by Myrinet.
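Portals itself is a Sandia-specific, low-level API, so as a generic illustration of the message-passing style (private memory per process, explicit sends and receives), here is a sketch using the mpi4py bindings for MPI; the ring-sum example and all names in it are assumptions for illustration, not the C-Plant software.

```python
# Generic message-passing sketch (not Portals): pass a running sum
# around a ring of processes. Run with, e.g.: mpiexec -n 4 python ring.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()   # assumes at least two processes

# Each process holds private data; the only way to share it is a message.
local_value = rank * rank

if rank == 0:
    comm.send(local_value, dest=1)
    total = comm.recv(source=size - 1)
    print(f"Sum of squares over {size} ranks: {total}")
else:
    partial = comm.recv(source=rank - 1)
    comm.send(partial + local_value, dest=(rank + 1) % size)
```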
The choice of hardware for scientific computing depends on the applications you need to run. For high-performance applications, the critical factor is usually granularity, the ratio of computation to communication. Coarse-grained applications tend to send large messages infrequently, while fine-grained computations send smaller messages more often. Beowulf-class clusters, because of their slower networks, are best for coarse-grained applications. C-Plant-class clusters preserve the computation-to-communication ratio seen on past supercomputers, such as the Intel Paragon, and can handle more communication-intensive applications than Beowulf clusters can. Commercial clusters, which typically use Fast Ethernet as the interconnection network, are likewise better suited to coarse-grained applications.
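A back-of-the-envelope calculation shows why granularity matters; the figures below are invented for illustration, assuming a 100-Mbps (12.5-MB/s) Fast Ethernet link.

```python
# Granularity estimate: ratio of compute time to communication time
# per step. All numbers below are invented for illustration.

def granularity(flops_per_step, flops_per_sec,
                bytes_per_step, bandwidth, latency):
    compute_time = flops_per_step / flops_per_sec
    comm_time = latency + bytes_per_step / bandwidth
    return compute_time / comm_time

# Coarse-grained: heavy computation per large, infrequent message.
print(granularity(1e9, 1e8, 1e6, 12.5e6, 100e-6))  # ~125: compute-bound
# Fine-grained: little computation per small, frequent message.
print(granularity(1e5, 1e8, 1e3, 12.5e6, 100e-6))  # ~5.6: network matters
```

A gigabit-class network with lower latency, such as Myrinet, shrinks both terms of the communication time, which is the advantage C-Plant-class clusters exploit on fine-grained workloads.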
The Orange Grove cluster at Syracuse University represents a midpoint on the spectrum of hardware choices. The Orange Grove has 48 dual-processor Intel machines and 16 Alphas, all connected by 100-Mbps switched Fast Ethernet. In addition, the cluster has 16 nodes connected via Giganet’s cLAN, a native hardware implementation of the VIA (Virtual Interface Architecture).
Most scientific clusters use some form of batch-processing or work-sharing software, such as the PBS (Portable Batch System) from MRJ Technology Solutions (now part of Veridian Information Solutions), LSF (Load Sharing Facility) from Platform Computing Corp. or Condor. These software packages control the placement and execution of programs on the cluster automatically, freeing the end user from worrying about administrative details.
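As a sketch of how such a system hides placement details from the user, here is a hypothetical PBS submission driven from Python; the job name, resource requests and program name are all invented, and the example assumes qsub is on the path.

```python
# Sketch: generate a PBS job script and hand it to the scheduler.
# Job name, resources and executable are illustrative assumptions.
import subprocess
import tempfile

job_script = """#!/bin/sh
#PBS -N protein_fold
#PBS -l nodes=4
#PBS -l walltime=02:00:00
cd $PBS_O_WORKDIR
./fold_conformation input.dat
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    path = f.name

# qsub prints the new job's ID; PBS decides where and when the job runs.
result = subprocess.run(["qsub", path], capture_output=True, text=True)
print("queued as", result.stdout.strip())
```

The user never names a machine; the batch system picks four free nodes, runs the program and returns the output.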
Commercial vendors are offering complete scientific cluster packages. Linux NetworX’s solutions bundle both the hardware and the management software in the box. Sun Microsystems sells a packaged hardware solution called the Sun Technical Compute Farm, and its recent acquisition of Gridware, a batch-processing software provider, suggests a turnkey solution is on the way.
Research clusters are almost exclusively Unix-based, with Linux the dominant operating system. This dates back to Don Becker’s choice of Linux for the original Beowulf cluster; the rapid propagation of Beowulf established Linux as the default cluster OS. The cluster realm is one in which Unix and its variants have a substantial lead over Windows NT. Microsoft has taken steps to address this, but for the near term, Unix remains the operating system of choice for cluster applications. Research clusters have been built on Windows, including Andrew Chien’s work at the University of Illinois and the University of California at San Diego, and the AC3 cluster at Cornell University, and many of the batch-processing packages have been ported to Windows as well. Even though these researchers have established that it is possible, if not easy, to build Windows clusters for scientific research, the research community remains strongly biased toward Unix.
Two trends that originated in research computing are now emerging and should soon find application in more businesses. The first is the construction of clusters of multiprocessors, or clumps. These clusters use the same networking technology as plain clusters, but each node is itself a parallel, shared-memory machine. While traditional clusters use message passing almost exclusively, clumps encourage a combination of shared-memory and message-passing programming known as mixed-mode programming. In the commercial world, this will make it easy to replicate multithreaded servers while speeding up each individual server.
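A minimal sketch of the mixed-mode idea, assuming mpi4py for the message-passing layer and Python threads standing in for the shared-memory layer (in production codes that layer is typically OpenMP in C or Fortran); the data sizes and function names are invented.

```python
# Mixed-mode sketch: message passing between nodes, shared-memory
# threads within each node. Run with, e.g.: mpiexec -n 4 python clump.py
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

def crunch(chunk):
    # Stand-in for real per-thread work on shared data.
    return sum(x * x for x in chunk)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Message passing: rank 0 scatters one slice of the data to each node.
data = [list(range(i * 1000, (i + 1) * 1000)) for i in range(size)] \
    if rank == 0 else None
my_slice = comm.scatter(data, root=0)

# Shared memory: four threads on this node split the slice among them.
pieces = [my_slice[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    local_total = sum(pool.map(crunch, pieces))

# Message passing again: combine the per-node totals at rank 0.
total = comm.reduce(local_total, op=MPI.SUM, root=0)
if rank == 0:
    print("grand total:", total)
```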
At the same time, we are moving toward a convergence of the system-area network (used to pass messages between processes) and the storage-area network (used to access devices). The VIA is an emerging standard for message passing. Another new standard, InfiniBand, is beginning to take hold for storage networks. While InfiniBand is the anointed standard, whether customers will adopt it in lieu of Fibre Channel remains to be seen (the contest will be akin to that between the OSI networking standards and TCP/IP). If InfiniBand pans out, the next logical step is a “VIA++” that merges system-area and storage-area networks. This convergence will help applications fronting large databases and will make access to remote servers and remote devices seamless, enabling a new wave of applications.