by Christopher Lazou
San Diego, CALIF. — In the late 1980s it became fashionable to give presentations with sound bites such as “supercomputers and the killer micro.” The message was that cheap micros would replace the large-scale supercomputers then in use for solving the pressing problems of our time.
Today the “killer Beowulf” has arrived: a cheap route to supercomputing. A lot of time and money, albeit in small parcels, has been spent by “geeks” installing clusters of Pentium, Alpha, Power 3 or other commodity chips on a DIY (Do It Yourself) basis. These systems are mostly experimental and relatively small, 32-64 nodes, cheap in capital outlay but fraught with danger and poor reliability, especially in novice hands. “Buy (DIY) in haste, repent at leisure”, as the Romans used to say.
Yet there are others with a much more serious mission. For example, the Beowulf machine at Sandia, with 2,600 Alpha EV67 chips, uses LINUX and Myrinet2000 as interconnect. Bill Camp at Sandia wants to enhance this system to 20,000 Alpha EV7 processors by 2004-5. The largest civilian Beowulf system planned, paid for out of $45 million of NSF money, is the one at the Pittsburgh Supercomputing Centre, consisting of 2,728 Alpha EV67 processors in 682 nodes, using QSNet and rated at 6 Tflop/s peak performance.
Thus, in the last three years the Beowulf paradigm has been used to build departmental and desktop systems from off-the-shelf components. These typically consist of 32-64, several hundred or, in a few cases, thousands of Intel Pentium III, IBM Power 3 or Compaq Alpha chips, cobbled together using Myrinet or QSNet as interconnect. This Do It Yourself (DIY) system building is cheaper on paper, as costs often do not include the engineering and commissioning manpower or the vendor profit margins. It has become an attractive option in academic institutions strapped for money, especially those doing experimental, non-mission-critical research. Apart from the money constraints, those who take this path tend to be young academics with a technical scientific background who find the idea of bolting their own systems together challenging. This activity gives them job satisfaction even though it diverts energy from their scientific task.
Experience from the US ASCI programme showed that scalable systems built from commodity chips are viable and provide capability computing, although they are not as efficient in throughput for conventional large-scale applications.
Experience also shows that prototype Beowulf systems with 32 CPUs compare favourably with the Cray T3E, but that larger systems do not. Prototype systems with Pentium, IBM Power 3 and Alpha chips are available, with benchmark results, at Daresbury Laboratory. For example, the applications platform developed under the QUASI project is a 32-CPU Alpha cluster connected with QSNet and running LINUX. In the UK some clusters are mainly for a single application, e.g. at the University of Liverpool, while Daresbury Laboratory in the UK and NOAA, Sandia, Cornell and Pittsburgh in the USA are trying to prove the Beowulf paradigm for HPC.
At present a 32-node Beowulf system using Myrinet interconnect can be assembled for about $100 thousand. It is scalable to 128 nodes, and even with 1 TByte of archive disk added it costs about $200 thousand. For the moment, HPC Beowulf systems are mainly experimental, with many question marks over whether they are suited to these applications. The cost per peak Mflop/s is, however, very seductive. A traditional HPC system, such as the Cray T3E, Fujitsu VPP5000 or NEC SX-5, is typically five times more expensive than a commodity SMP system.
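For a rough sense of why that figure is so seductive, a back-of-envelope calculation such as the C sketch below can be used. The cost and node count are those quoted above; the CPUs-per-node and per-CPU peak figures are purely illustrative assumptions, not vendor specifications.

```c
/* Back-of-envelope cost per peak Mflop/s for the 32-node cluster described
 * above. Cost and node count are as quoted; cpus_per_node and the per-CPU
 * peak are illustrative assumptions, not vendor figures. */
#include <stdio.h>

int main(void)
{
    double cost_usd        = 100000.0; /* about $100 thousand, as quoted      */
    int    nodes           = 32;
    int    cpus_per_node   = 2;        /* assumption                          */
    double peak_mflops_cpu = 1334.0;   /* assumption: ~667 MHz x 2 flops/cycle */

    double peak_mflops = nodes * cpus_per_node * peak_mflops_cpu;

    printf("Peak: %.1f Gflop/s\n", peak_mflops / 1000.0);
    printf("Cost per peak Mflop/s: $%.2f\n", cost_usd / peak_mflops);
    return 0;
}
```

Under these assumptions the cluster comes out at roughly a dollar per peak Mflop/s, which is the kind of headline number that makes the DIY route so tempting on paper.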
What is often neglected is the performance one gets from these systems. For instance, taking the Compaq Alpha as having performance 100 on a Daresbury benchmark, the SGI Origin 3200 scores 61, the Compaq Alpha ES40/667 scores 113, the IBM RS/6000 Power 3 scores 94, the SunBlade 1000/750 scores 85 and the NEC SX-5 scores 3975. Thus the NEC SX-5 achieves 39.75 times the performance of the baseline Alpha chip. This is why one reads (HPCWire, June 2nd 2000) that on sparse matrix problems using FMSLIB, a 16-CPU NEC SX-5 delivers 126.5 Gflop/s sustained performance out of its 128 Gflop/s peak. To get the same sustained performance on scalar SMPs one would need to buy a system with Tflop/s peak.
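The Tflop/s remark follows from simple arithmetic, sketched in C below. The SX-5 sustained and peak figures are those quoted above; the 10 per cent sustained-to-peak ratio assumed for a scalar SMP on sparse matrix work is an illustrative assumption only.

```c
/* Sustained-versus-peak arithmetic behind the "Tflop/s peak" remark. The
 * SX-5 figures are those quoted above; the 10% sustained-to-peak ratio for
 * a scalar SMP on sparse matrix work is an illustrative assumption only. */
#include <stdio.h>

int main(void)
{
    double sx5_sustained_gf = 126.5;  /* Gflop/s, quoted above             */
    double sx5_peak_gf      = 128.0;  /* Gflop/s, quoted above             */
    double smp_efficiency   = 0.10;   /* assumed sustained/peak on sparse  */

    printf("SX-5 efficiency: %.1f%% of peak\n",
           100.0 * sx5_sustained_gf / sx5_peak_gf);
    printf("Scalar SMP peak needed for %.1f Gflop/s sustained: %.2f Tflop/s\n",
           sx5_sustained_gf, sx5_sustained_gf / smp_efficiency / 1000.0);
    return 0;
}
```

On that assumed efficiency, matching 126.5 Gflop/s sustained would indeed require well over a Tflop/s of scalar SMP peak.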
Although a lot of effort is going into proving the Beowulf paradigm, there are still many issues to be resolved. These crucially include the bandwidth of the interconnect, shared memory size, the effect of variable message passing on L2 cache, the reliability of clusters, and the immaturity of LINUX, which still lacks many functions needed for HPC. These issues cumulatively add up to deliver poor performance at present. This state of affairs is, however, temporary, and once the many HPC components from high-end computer vendors are imported into LINUX and standardised as open source, the picture should change fundamentally, though whether performance will actually improve remains to be seen.
The Beowulf paradigm is also gaining momentum, with a number of small companies providing build and maintenance services for made-to-order systems using commodity chips and the interconnect of the customer’s choice. Currently Myrinet2000 and QSNet are the preferred interconnects.
In summary, LINUX is a disruptive technology, but whether Beowulf should be considered one is, in my view, not proven. Although Beowulf could be a threat to the scalable clusters presently offered by computer vendors, it is unlikely to have a significant impact on the Parallel Vector Processor (PVP) supercomputer line. The reason is simple: the PVP market was never strong in strapped-for-money university departments. I guess the initial impact is likely to be seen in commodity servers, where LINUX is in the process of being customised and adopted by all the main vendors: IBM, Compaq, HP, SUN, Fujitsu and so on.
It is bemusing to observe that some of the problems of the 1970s are revisiting computing. The programming difficulties of optimising vector codes are now re-appearing in the guise of multi-level caches; the challenge is how to achieve optimisation across the different levels of cache without loss of coherence. The problems that derived from woefully inadequate operating system functionality and non-standard source are about to be imported into production computing in the guise of open source LINUX. In the past, modifying operating systems to enhance functionality, and paying the price in unreliability, was a necessity; now, we are told, it is to be done as a matter of preferred choice.
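To make the cache point concrete, the sketch below shows the kind of loop blocking (tiling) programmers now write by hand so that data is reused at each level of the cache hierarchy, much as vector lengths once had to be managed explicitly. The matrix size and block size are arbitrary illustrative values, not a prescription.

```c
/* A minimal sketch of cache blocking (tiling), the kind of restructuring
 * that multi-level caches force on programmers. N and BS are arbitrary
 * illustrative values; BS is assumed small enough that the working tiles
 * fit in cache. */
#include <stdio.h>

#define N  512
#define BS 64

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    /* fill the operands with something non-trivial */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = 1.0;
            b[i][j] = 0.5;
        }

    /* blocked matrix multiply: work on one cache-sized tile at a time */
    for (int ii = 0; ii < N; ii += BS)
        for (int kk = 0; kk < N; kk += BS)
            for (int jj = 0; jj < N; jj += BS)
                for (int i = ii; i < ii + BS; i++)
                    for (int k = kk; k < kk + BS; k++)
                        for (int j = jj; j < jj + BS; j++)
                            c[i][j] += a[i][k] * b[k][j];

    printf("c[0][0] = %.1f\n", c[0][0]);  /* expect N * 1.0 * 0.5 = 256.0 */
    return 0;
}
```

The arithmetic is identical to the naive triple loop; only the order of the work changes, which is precisely the sort of hand restructuring that vector programmers of the 1970s will find familiar.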
Maybe “Supercomputing history repeats itself” or, maybe “History proceeds by changing the subject”.
Copyright: Christopher Lazou, HiPerCom Consultants, Ltd., UK. Email: [email protected]