The Case for a Parallel Programming Alternative
Cray engineers have been working on a new parallel programming language called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated in the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems.
To explain why Cray is pursuing a new language when established programming models are already in place, Principal Engineer Brad Chamberlain has penned a detailed blog post.
Chamberlain maintains that programmers have never had a decent programming language for large-scale parallel computing. By that, he means “one that contains sufficient concepts for expressing the parallelism and locality control required to leverage supercomputers, while also being as general, effective, and feature-rich as languages like Fortran, C, C++, or Java.”
“Ideally,” he continues, “such a language would strive to be more than simply ‘decent’ and feel as attractive and productive to programmers as Python or MATLAB are. Libraries and pragma-based notations are very reasonable and effective alternatives to creating a language. Yet, given the choice between the three, languages are almost always going to be preferable from the perspectives of: providing good, clear notation; supporting semantic checks on the programmer’s operations; and enabling optimization opportunities by expressing the programmer’s intent most clearly to the compiler, runtime, and system.”
The community’s current go-to technologies for parallel programming, namely MPI and OpenMP, have done the job, but they are lower-level and lack many of the features of more modern languages.
As to the claim that HPC workflows necessitate lower-level techniques, Chamberlain clarifies that those who are completely satisfied with currently-available tools can certainly keep using them, but he wants to provide an alternative for those who find them lacking. He also wants to push back on the idea that HPC programming can only be done close to the metal.
It is possible to use abstractions that boost productivity as well as performance, Chamberlain contends. “With good design,” he writes, “not only can raising the level of abstraction improve programmability and portability, it can also help a compiler — to say nothing of subsequent programmers — better understand and optimize a piece of code.”
Chapel is not just a higher-level language, however. It was designed with a multiresolution philosophy. According to this overview, the approach allows users to begin by writing very abstract code and then add more detail until they are as close to the machine as their needs require.
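To give a flavor of what this multiresolution approach looks like in practice, here is a minimal sketch in Chapel. It is an illustrative example, not drawn from Chamberlain’s post, and syntax details may vary across Chapel versions. The first loop stays abstract and lets the compiler and runtime decide how to parallelize; the second drops down a level, explicitly creating tasks and placing them on specific locales (compute nodes).

```chapel
// Higher-level view: a data-parallel loop over an array.
// The programmer states *what* should happen in parallel;
// the compiler and runtime decide *how*.
var A: [1..1000] real;

forall i in 1..1000 do
  A[i] = i * 2.0;

// Lower-level view: explicit tasks and explicit placement.
// `coforall` creates one task per locale, and the `on`
// clause moves each task's execution to its locale.
coforall loc in Locales do on loc {
  writeln("Hello from locale ", loc.id);
}
```

Both styles coexist in one program, which is the point of the multiresolution design: start with the `forall`, and reach for `coforall`/`on` only where finer control pays off.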
The overarching goal of the Chapel initiative is to make parallel programming more accessible so that computational scientists, domain experts, and mainstream programmers can leverage the full benefits of parallelism as core counts proliferate.
Chapel 1.9.0 was released on April 17, 2014. More details about the project are laid out in an earlier blog post from Chamberlain.