May 28, 2013

GASPI Targets Exascale Programming Limits

Nicole Hemsoth

As we look ahead to the exascale era, many have noted that the MPI programming model will face limitations at that scale.

According to researchers who work on the Global Address Space Programming Interface (GASPI), there are some critical programming elements that must be addressed to ensure system reliability as programmers construct codes that can scale to hundreds of thousands of cores and beyond.

The one-sided communication in GASPI is based on remote completion and targets highly scalable dataflow implementations for distributed memory architectures. As such, one-sided communication does not require specific communication epochs for message exchanges. Rather, data is written asynchronously whenever it is produced, and data is locally available whenever a corresponding notification has been flagged by the underlying network infrastructure. Failure-tolerant and robust execution in GASPI is achieved through timeouts in all non-local procedures of the GASPI API. GASPI also features support for asynchronous collectives.
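The write-with-notification pattern described above can be sketched in C. The sketch follows the GPI-2 rendering of the API (`GASPI.h`); the segment id, sizes, and queue number are illustrative, and the code requires a GASPI runtime to actually execute:

```c
/* Sketch of GASPI one-sided communication with remote completion,
 * based on the GPI-2 API. Segment id 0, queue 0 and the transfer
 * size are illustrative choices, not prescribed by the standard. */
#include <GASPI.h>
#include <stdlib.h>

int main(void)
{
    gaspi_rank_t rank, num;

    /* GASPI_BLOCK waits for completion; any finite timeout (in ms)
     * turns the same call into a time-bounded, failure-tolerant one. */
    gaspi_proc_init(GASPI_BLOCK);
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&num);

    /* One pre-pinned RDMA segment per process (segment id 0, 1 MiB). */
    gaspi_segment_create(0, 1 << 20, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    if (rank == 0 && num > 1) {
        /* Push 4 KiB into rank 1's segment and flag notification id 0
         * there in one call: data flows as soon as it is produced,
         * with no separate communication epoch. */
        gaspi_write_notify(0, 0, 1, 0, 0, 4096,
                           0, 1, 0, GASPI_BLOCK);
        gaspi_wait(0, GASPI_BLOCK);   /* local queue completion */
    } else if (rank == 1) {
        gaspi_notification_id_t first;
        gaspi_notification_t    val;

        /* The data is locally usable once the notification arrives. */
        gaspi_notify_waitsome(0, 0, 1, &first, GASPI_BLOCK);
        gaspi_notify_reset(0, first, &val);
    }

    gaspi_proc_term(GASPI_BLOCK);
    return EXIT_SUCCESS;
}
```

Note that remote completion is signalled by the notification, not by the writer's queue wait, which only guarantees local buffer reuse.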

The GASPI collectives rely on time-based blocking with flexible timeout parameters, where the latter range from minimal-progress tests to full synchronous blocking. GASPI supports passive communication and mechanisms for global atomic operations. The former mechanism is unique to GASPI and is most directly comparable to a non-time-critical active message, which triggers a corresponding user-defined remote execution. Global atomic operations in GASPI allow low-level operations such as compare-and-swap or fetch-and-add to be applied to any data in the RDMA memory segments of GASPI.
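A timed collective and a global atomic might look as follows in the GPI-2 rendering of the API. This is a sketch: the segment id, offset and counter semantics are assumptions for illustration, and the code needs a GASPI runtime:

```c
/* Sketch of a time-based GASPI collective plus a global atomic,
 * using GPI-2 names. Segment/offset values are illustrative. */
#include <GASPI.h>

void reduce_and_count(void)
{
    double local = 1.0, global = 0.0;
    gaspi_return_t ret;

    /* Time-based blocking: GASPI_TEST makes minimal progress and
     * returns immediately, a millisecond value bounds the wait, and
     * GASPI_BLOCK is fully synchronous. Looping on GASPI_TIMEOUT
     * lets the caller overlap the collective with other work. */
    do {
        ret = gaspi_allreduce(&local, &global, 1,
                              GASPI_OP_SUM, GASPI_TYPE_DOUBLE,
                              GASPI_GROUP_ALL, GASPI_TEST);
        /* ... do useful local work here while the collective runs ... */
    } while (ret == GASPI_TIMEOUT);

    /* Global atomic: fetch-and-add on the first 8 bytes of segment 0
     * at rank 0 -- e.g. a global work counter living in RDMA memory. */
    gaspi_atomic_value_t old;
    gaspi_atomic_fetch_add(0, 0, 0, 1, &old, GASPI_BLOCK);
}
```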

The following shows how the GASPI segments are mapped to an architecture like Xeon Phi.

Just in time for ISC 2013 in Leipzig, the GASPI consortium will release the new GASPI standard. GASPI is a PGAS API for developers who seek high scalability as well as low-level support for fault-tolerant execution.

The creators say the GASPI API is very flexible and offers full control over the underlying network resources and the pre-pinned GASPI memory segments. GASPI allows the memory heterogeneity (RAM, GPGPU, NVRAM) of modern supercomputers to be mapped to dedicated memory segments, and also makes it possible for multiple memory management schemes (e.g. symmetric and non-symmetric memory management) and/or multiple applications to co-exist in the same partitioned global address space.
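The segment model described above can be illustrated with a short sketch, again using GPI-2 names. Which memory backs which segment is an assumption here; the point is simply that each memory type gets its own segment id within one global address space:

```c
/* Sketch: dedicated GASPI segments for different memory types.
 * The mapping "segment 0 = RAM working set, segment 1 = staging
 * area (e.g. for device or NVRAM data)" is illustrative only. */
#include <GASPI.h>

void create_segments(void)
{
    /* Segment 0: 64 MiB main working set in ordinary RAM. */
    gaspi_segment_create(0, 1 << 26, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_UNINITIALIZED);

    /* Segment 1: 16 MiB staging segment, zero-initialized. */
    gaspi_segment_create(1, 1 << 24, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    /* Remote ranks address data as (segment id, offset), so multiple
     * memory managers or applications can partition the segment ids
     * among themselves without clashing. */
}
```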

The first implementation of GASPI is GPI-2, from Fraunhofer ITWM. GPI-2 implements the GASPI standard and will be available as open source software shortly before ISC'13 in Leipzig, and also at the Fraunhofer ITWM booth during the event.