by Karen Green, NCSA Science Writer
San Diego, CA — Two years ago NCSA had its first success running scientific code on a brand new hardware platform called the NT supercluster – a collection of high-end, commodity PCs running the Windows NT operating system and linked together with a fast network to operate at supercomputer performance levels.
Since then the supercluster has evolved and now serves a variety of NCSA and Alliance users. Between February 2000, when the NT supercluster moved from friendly-user to production mode, and May 2000, 26 allocations of CPU time were granted on the cluster. The supercluster runs a variety of scientific Message Passing Interface (MPI) codes at performance levels that are comparable to, and sometimes better than, those achieved on more conventional supercomputers.
After the transition to production mode, utilization of the supercluster quickly rose to about 30 percent. That figure should double in the next few months, as more scientists are granted allocations on the system, according to Rob Pennington, head of the Alliance NT cluster development team at NCSA.
“That’s a very good start for production, and that figure will undoubtedly go up as more researchers are granted time on the cluster,” he adds.
At present the supercluster consists of more than 400 processors. A cluster of 288 Intel processors in 144 compute nodes (with one compute node equal to one dual-processor machine) is available to the scientific user community. A total of 256 of those processors – 128 Hewlett-Packard machines with dual 550-MHz Intel Pentium III Xeon processors – are capable of running MPI codes using Myricom’s Myrinet interconnect and software called High Performance Virtual Machine (HPVM). HPVM was developed by Andrew Chien and the Concurrent Systems Architecture Group at the University of California, San Diego. MPI codes pass messages among processes running on separate machines so that they can cooperate on a single computation, which is what makes distributed computing possible. The majority of distributed scientific codes use MPI for message passing.
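A minimal sketch of what such an MPI code looks like (a generic illustration in C, not one of the applications run on the supercluster) is shown below. Each process learns its own rank within the job, and a single collective call combines a value from all of them:

    /* Illustrative MPI example: every process contributes a value and
     * rank 0 prints the combined result. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, sum = 0;

        MPI_Init(&argc, &argv);                /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        /* Every process contributes its rank; rank 0 receives the total. */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d processes, sum of ranks = %d\n", size, sum);

        MPI_Finalize();                        /* shut the runtime down     */
        return 0;
    }

Real scientific codes replace the single reduction with many exchanges of boundary data and partial results, but the structure is the same: one program, many cooperating processes.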
Thirty-two 333-MHz Intel Pentium II processors are available to run serial codes. The remainder of the supercluster’s processors are used for testbeds that examine infrastructure and development issues, as well as performance and portability with different interconnects and operating systems.
Codes that run on the supercluster include the MIMD Lattice Calculation (MILC), a parallel code used for lattice quantum chromodynamics (QCD) simulations. The MILC collaboration, led by Bob Sugar of the University of California at Santa Barbara, includes physicists from nine universities working on a Department of Energy Grand Challenge Initiative that involves using parallel computing to model QCD. The MILC code was easily ported to the NT supercluster. Since the supercluster entered production mode, MILC researchers Doug Toussaint and Kostas Orginos of the University of Arizona have used more hours on the cluster than researchers on any other allocated project to date.
Other codes that have been successfully ported to the supercluster include GMIN (chemistry), OVERFLOW (aeronautical fluid dynamics), Cactus (general relativity), ZEUS-MP (astrophysics), Tree Particle Mesh (cosmology), ARPI-3D (weather research), a Quantum Monte Carlo materials science code, and a polymer research code.
One of the cluster testbeds focuses on cluster middleware called Virtual Machine Interface (VMI), developed by NCSA cluster team member Avneesh Pant.
VMI, unlike HPVM, makes it possible for applications to run on a cluster using different types of interconnects to communicate among processors. VMI also allows MPI applications to run between clusters, including those using different operating systems. The team has successfully run the MILC, Cactus, and ARPI-3D applications across multiple clusters, including heterogeneous clusters running Windows NT and Linux.
“Without VMI a scientist who has compiled an application on a cluster using Myrinet would have to recompile that code to use it with a different cluster, such as one that has only Ethernet,” says Pant. “If you compile your MPI code for systems that use the VMI layer, it will run on other types of clusters or on multiple clusters. In other words, you could have just one executable that could run on any NT cluster on the Grid, and it could also be part of a larger computation that includes a Linux cluster or other NT clusters.”
The development of VMI grew out of the need to do performance testing on clusters, says Pennington. VMI works as middleware that allows MPICH – a tool developed by Argonne National Laboratory that implements MPI on most computer systems – to work transparently across the different interconnects and systems in the supercluster testbeds. MPICH is the standard for implementing MPI on most high-end computing systems, although it is often fine-tuned to the characteristics of specific computing systems. VMI allows the cluster team to tune MPICH to the supercluster environment. In addition, VMI allows machines with different operating systems to work together on an application with just a simple recompilation for each operating system.
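A rough illustration of why that layering matters: an MPI source file refers only to MPI calls, never to a particular interconnect or operating system. The generic sketch below (an assumed example, not NCSA’s test code) would look exactly the same whether its messages travel over Myrinet or Ethernet, because that choice is made in the MPICH and VMI layers beneath it:

    /* Illustrative point-to-point MPI example. Nothing here names Myrinet,
     * Ethernet, Windows NT, or Linux; the transport is chosen below the
     * MPI layer, which is what a VMI-backed MPICH takes advantage of. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        char msg[64];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* Rank 0 sends a greeting to every other process; the wire
             * carrying these messages is invisible at this level. */
            int dest;
            for (dest = 1; dest < size; dest++) {
                sprintf(msg, "hello to rank %d", dest);
                MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, dest, 0,
                         MPI_COMM_WORLD);
            }
        } else {
            MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     &status);
            printf("rank %d received: %s\n", rank, msg);
        }

        MPI_Finalize();
        return 0;
    }

The same source, recompiled once per operating system as described above, can then take part in a computation that spans NT and Linux machines.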
“We now have a common layer of middleware for MPI applications, so we can make direct comparisons among clusters running various applications,” says Pennington. “As a result, we can get a better idea of the performance of specific codes on different cluster systems.”
The ability to run applications on multiple clusters using different interconnects and operating systems is important as the Alliance continues to develop the concept of computing on the Grid. The Alliance is building the PACI Grid, an experimental system that links high-speed hardware and cutting-edge applications into an efficient, persistent infrastructure. This Virtual Machine Room gives researchers remote access to any of the Alliance’s computing resources regardless of physical location and lets resources at different locations work together as one seamless system.
“This type of technology, which can bridge multiple clusters using multiple interconnects to create a seamless system, is important to the concept of Grid computing,” says Pennington. “Clusters are built from commodity components, and since no single vendor supplies all the components, it is unlikely that a particular interconnect will become the standard for clusters.”
The VMI middleware addresses this reality and at the same time makes the computational scientist’s job easier. It is another step in the supercluster’s journey from experimental system to one of the main computing platforms offered to scientific users by the Alliance.
This project is supported by the National Computational Science Alliance, with additional support from Microsoft Corp., Intel Corporation, and Myricom.