Much of the bleeding-edge work that goes on in the global high performance computing community takes place in federally funded supercomputing centers and laboratories around the world. These are the places destined to get the first petascale supercomputers, where grand challenge applications can be pushed to their grandest limits. But now that high performance computing is firmly entrenched in all types of organizations — commercial, academic and government — it's safe to say that mainstream HPC is much more common than high-end supercomputing.
This democratization of supercomputing means clusters have become the de facto HPC architecture for people without deep pockets, i.e., most of the world. This is the reason the cluster model has become the dominant force shaping the industry, and the reason the number of departmental and workgroup cluster systems is growing at double-digit rates. Since the way people use capability supercomputers is rather different from the way they use clusters, this dichotomy tends to split the HPC community in a variety of ways. One of the most important is the different ways HPC software is developed.
The legacy applications and libraries of high performance computing are almost entirely written in Fortran or C, with MPI thrown in to provide the parallelism. This level of technology almost always requires a software engineer to be in the room when any non-trivial application is developed. This is a workable arrangement for supercomputing centers and the national labs; programmers can be sequestered on demand. In this environment, plenty of time is devoted to squeezing the last bit of performance from the application. Such are the advantages of federally funded enterprises (at least when they're properly funded).
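To give a flavor of what that low-level style demands, here is a toy sketch — in Python rather than C, and simulating the "ranks" sequentially instead of calling real MPI — of the index bookkeeping that message-passing code makes the programmer spell out by hand: each rank owns a slice of the data, computes a local result, and the partial results are reduced manually.

```python
# Toy sketch (not real MPI): the slice arithmetic and hand-rolled
# reduction that an MPI-style block distribution makes explicit.

def block_bounds(rank, nprocs, n):
    """Return the [lo, hi) slice owned by a given rank, handling the
    case where n does not divide evenly across nprocs."""
    base, extra = divmod(n, nprocs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

def parallel_sum(data, nprocs=4):
    """Each 'rank' sums its own slice; the partials are then combined,
    standing in for an MPI_Reduce call."""
    partials = []
    for rank in range(nprocs):
        lo, hi = block_bounds(rank, nprocs, len(data))
        partials.append(sum(data[lo:hi]))  # each rank's local work
    return sum(partials)  # the manual reduction step

print(parallel_sum(list(range(10))))  # 45
```

None of this logic is the application; it is pure distribution plumbing, which is exactly the kind of code that keeps a software engineer in the room.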
Once you leave the rarefied atmosphere of supercomputing centers and national labs, the situation changes. As I've implied in previous articles on the subject, not every department or workgroup is going to be able to afford its very own software engineer. It's a simple numbers game. So if these groups are planning to develop their own applications, rather than just using someone else's, then another programming model must be considered.
One promising development is the emergence of high-level parallel languages like UPC (Unified Parallel C), CAF (Co-Array Fortran), and Titanium (a parallel dialect of Java). Also in progress are the High Productivity Computing Systems (HPCS) languages being developed with DARPA money. Currently there are three HPCS languages: Chapel, X10, and Fortress. Eventually DARPA will whittle these down to one. In general, though, this entire group represents third-generation languages (3GLs) with built-in constructs for parallelism.
By increasing the level of abstraction for parallelism, these languages promise to increase productivity for HPC development. But by themselves, they won't be able to deliver high performance computing to the small developer.
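What "built-in constructs for parallelism" buys you can be suggested with a rough stdlib analogue — illustrative only, not actual UPC or Chapel syntax: the programmer writes something close to a single parallel loop, and the runtime decides how iterations are divided, so none of the rank-and-offset bookkeeping from the message-passing sketch above appears in user code.

```python
# Rough analogue of a language-level parallel loop: hand iterations
# to a pool and let the runtime divide the work.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_loop_sum(xs):
    # One "parallel for": no slice arithmetic, no manual reduction
    # protocol in the application code.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(square, xs))

print(parallel_loop_sum(range(10)))  # 285
```

The point is the shape of the code, not the mechanism: the distribution logic has moved out of the program and into the language runtime.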
Why not?
Because they are high-level languages in name only. The term is applied to any language that rises above the abstraction of assembly code, which includes all 3GLs: C/C++, Fortran, Java, and so on. To be honest, true high-level programming languages don't exist yet. I mean this in the same sense that high-level human languages don't exist either. Is Mandarin high-level? Certainly not to me. All these "new" parallel programming languages will still depend on software engineers.
There are other programming models that may come to play a very important role in high performance computing. They fall under the general categories of domain-specific languages (DSLs) and fourth-generation languages (4GLs). As compared to a general-purpose language, DSLs attempt to be expressive for a specific subject matter or domain, while 4GLs emphasize a higher level of abstraction. For both types, the execution model is often interactive, rather than compiled. Much of this technology is built on 3GLs, making use of both the older language environments and 3GL libraries.
In practice, the features of these DSLs and 4GLs overlap quite a bit, and are often just referred to as “very high-level languages”. Because these languages target rather large domains and usually provide greater abstraction, they are accessible to a wider audience. Specifically, they are targeted to people without formal computer science training.
Examples of these languages include MATLAB for scientific and mathematical computing; SQL for database applications; and Excel for financial and other numerical business applications. Today, all three of these have parallelized variants. The MathWorks and Interactive Supercomputing (ISC) each have their own version of a parallel-enabled MATLAB. In this week's issue we discuss ISC's version with Bill Blake, the new CEO. Parallel versions of SQL have been developed by a number of vendors. For example, both Microsoft and Oracle have built platforms that can distribute SQL database queries across multiple machines. Finally, Microsoft is combining an Excel Services front end with Microsoft Windows Compute Cluster Server 2003 to parallelize spreadsheet computations. A prototype was demonstrated at the SIA (Securities Industry Association) Technology Management Conference last June.
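SQL makes the declarative payoff easy to see. The query below — run here against an in-memory SQLite database, with table and column names invented for illustration — states only *what* to compute; *how* it gets computed (indexes, join order, and in the parallel variants, distribution across machines) is left entirely to the engine, which is precisely what lets a vendor parallelize it underneath an unchanged query.

```python
# SQL as a DSL: a declarative aggregation with no loops and no
# partitioning logic in user code. Schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("IBM", 100), ("IBM", 50), ("MSFT", 75)])

# The engine, not the programmer, decides how this executes.
rows = conn.execute(
    "SELECT symbol, SUM(qty) FROM trades "
    "GROUP BY symbol ORDER BY symbol"
).fetchall()
print(rows)  # [('IBM', 150), ('MSFT', 75)]
```

A domain expert who knows trades and totals can write that query without ever hearing the words "message passing."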
While parallel programming languages like UPC may help increase productivity for the HPC elite, my money is on the parallelized versions of the DSLs/4GLs to help spread HPC to the masses. At least until we get to the 5GLs.
—–
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].