June 19, 2013

Developers Tout GPI Model for Exascale Computing

Alex Woodie

Supercomputer architectures have evolved considerably over the last 20 years, particularly in the number of processors that are linked together. One aspect of HPC architecture that hasn’t changed is the MPI programming model. To get around the bottleneck that MPI poses to exascale computing, developers are banking on the new GPI programming model to unlock the potential of future parallel architectures.

GPI, which stands for Global Address Space Programming Interface, takes an entirely different approach from MPI to enabling communication among processors in a supercomputer. The model implements an asynchronous communication paradigm based on remote completion, according to a story in Phys.org.

Each processor in a parallel HPC system can directly access all data, regardless of where it resides and without affecting other parallel processes. This gives GPI the potential to scale beyond what’s possible with MPI, and to fully exploit today’s highly parallel clusters of multicore systems, using traditional HPC programming languages, such as C and Fortran. 

The effort to create GPI was spearheaded by Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM. Lojewski was working on an HPC problem involving seismic data, and the existing methods weren’t working. “The problems were a lack of scalability, the restriction to bulk-synchronous, two-sided communication, and the lack of fault tolerance,” Lojewski tells Phys.org. “So out of my own curiosity I began to develop a new programming model.”

The GPI model, which was first unveiled at the ISC 2010 conference in Hamburg, continues to be developed by dozens of developers around the world, including Rui Machado of Fraunhofer ITWM and Dr. Christian Simmendinger of T-Systems Solutions. Machado, Simmendinger, and Lojewski were awarded the Joseph von Fraunhofer Prize this year for their work.

GPI is also finding its way into production as development continues. According to Simmendinger, the European aerospace industry worked with the German Aerospace Center (DLR) to port an aerospace HPC program called TAU to use GPI. The results have been impressive. “GPI allowed us to significantly increase parallel efficiency,” Simmendinger tells Phys.org.

GPI is not a drop-in replacement for MPI; developers must port their applications to the new low-level API. Squeezing the most benefit from GPI also requires applications to be multithreaded, which may mean additional work. But based on early reports, GPI has a promising future as the communication layer for tomorrow’s exascale supercomputers.
