ISC is looming fast, and on Wednesday we will be holding a panel asking whether it is time to focus more on the consolidation and interoperability of existing parallel programming technologies, rather than the development of new ones.
MPI P2P, MPI RMA, OpenMP, OmpSs, Legion, GPI-Space, UPC++, Charm++, HPX, Chapel, GASPI, OpenACC, OpenCL, CUDA: these are just a tiny subset of the HPC programming technologies available to programmers for writing their parallel code. However, with the exception of a few that have reached ubiquity in our community, very many of these have enjoyed only limited uptake. This is a massive shame, because some of the less popular technologies contain really useful features, but it is an uphill struggle to get new HPC programming technologies widely adopted. A variety of reasons drive this, but probably the most significant is that HPC codes tend to be long-lived, so there is significant risk in an HPC developer choosing a technology that isn't yet fully mature: they can't be sure whether it will do what's needed, whether it will be installed on their target machines, or whether it will be fully supported throughout the lifetime of the code.
There is a saying, better the devil you know, and even though classical parallel technologies such as MPI v1 and OpenMP might not be perfect, at least their ubiquity means that they are well supported, their future is assured, and programmers know, to some extent, what to expect. Many in the HPC community agree that these common tools are not ideal, but when it comes to writing parallel code that's about all we can agree on! So instead of developing new solutions, which inevitably spreads the community's effort thinly across many technologies, the question is whether we should tackle the problem by consolidating what already exists and ensuring that these technologies work together. One can imagine a future where programmers pick and mix the parallel programming technologies that suit their needs, with these working seamlessly together and the overall value being greater than the sum of the parts.
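The closest thing we have today is hybrid programming, most commonly MPI between nodes combined with OpenMP within them, although here the programmer carries the interoperability burden themselves. As a minimal sketch (assuming an MPI installation and an OpenMP-capable C compiler, compiled with something like mpicc -fopenmp), two technologies can already be combined in a single code:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Ask MPI for a thread-support level compatible with OpenMP threads */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process then spawns its own team of OpenMP threads */
    #pragma omp parallel
    printf("Rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}

Even in this simple case the programmer must explicitly negotiate the thread-support level between the two runtimes; seamless interoperability would make this kind of composition the default rather than something each developer must manage by hand.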
Of course it’s easy to say this, but actually achieving it will require a combination of technical and political effort. This panel at ISC will be focussed on these questions: whether we, as a community, should be looking more closely at consolidating and combining existing parallel programming technologies, at standardisation to enable better interoperability, and at which parallel programming technologies we should be getting behind. The panel, hosted by Nick Brown of EPCC at the University of Edinburgh, features four panellists: Brad Chamberlain from Cray, who is responsible for the Chapel effort; Rosa Badia from BSC, who is responsible for the Workflows and Distributed Computing group; Hari Subramoni from Ohio State University, an MPI expert and co-organiser of the successful SC workshop on Extreme Scale Programming Models and Middleware; and Mirko Rahn from Fraunhofer, who has been instrumental in the development of GASPI and GPI-Space.
The panel is on Wednesday, June 19th, between 4pm and 5pm, in Panorama 1 (link).