May 28, 2013

GASPI Targets Exascale Programming Limits

Nicole Hemsoth

As we look ahead to the exascale era, many have noted that the MPI programming model will run into limitations.

According to researchers who work on the Global Address Space Programming Interface (GASPI), there are some critical programming elements that must be addressed to ensure system reliability as programmers construct codes that can scale to hundreds of thousands of cores and beyond.

The one-sided communication in GASPI is based on remote completion and targets highly scalable dataflow implementations for distributed memory architectures. As such, one-sided communication does not require specific communication epochs for message exchanges. Rather, data is written asynchronously whenever it is produced, and it is locally available as soon as a corresponding notification has been flagged by the underlying network infrastructure. Failure-tolerant and robust execution in GASPI is achieved through timeouts in all non-local procedures of the GASPI API. GASPI also features support for asynchronous collectives.
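As a hedged illustration, the following sketch uses the C API of GPI-2 (the GASPI implementation discussed at the end of this article) to perform a one-sided write whose remote completion is signaled by a notification. The segment id, queue id, and 64-byte payload are arbitrary choices for the example, not requirements of the standard.

/* One-sided write with remote completion notification (GPI-2 sketch). */
#include <GASPI.h>
#include <stdlib.h>

int main(void)
{
    gaspi_rank_t rank, nprocs;

    gaspi_proc_init(GASPI_BLOCK);
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&nprocs);

    /* One pre-pinned RDMA segment (id 0, 1 MiB), visible to all ranks. */
    gaspi_segment_create(0, 1 << 20, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    if (rank == 0 && nprocs > 1) {
        /* Write 64 bytes into rank 1's segment and flag notification 0
         * in the same call; no matching receive is needed. */
        gaspi_write_notify(0, 0, 1,   /* local segment, offset, target rank */
                           0, 0, 64,  /* remote segment, offset, size */
                           0, 1,      /* notification id and value */
                           0, GASPI_BLOCK);
        gaspi_wait(0, GASPI_BLOCK);   /* flush queue 0 locally */
    } else if (rank == 1) {
        gaspi_notification_id_t first;
        gaspi_notification_t val;
        /* The data is locally usable as soon as the notification arrives. */
        gaspi_notify_waituntil(0, 0, 1, &first, GASPI_BLOCK);
        gaspi_notify_reset(0, first, &val);
    }

    gaspi_proc_term(GASPI_BLOCK);
    return EXIT_SUCCESS;
}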

The GASPI collectives rely on time-based blocking with flexible timeout parameters, ranging from minimal-progress tests to fully synchronous blocking. GASPI also supports passive communication and mechanisms for global atomic operations. The former is unique to GASPI and is most directly comparable to a non-time-critical active message, which triggers a corresponding user-defined remote execution. Global atomic operations allow low-level functionality such as compare-and-swap or fetch-and-add to be applied to all data in the RDMA memory segments of GASPI.
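To make the timeout semantics and the atomics concrete, here is a minimal sketch against the GPI-2 C API; the 10 ms timeout, the segment id, and the location of the shared counter on rank 0 are illustrative assumptions rather than anything the standard prescribes.

#include <GASPI.h>
#include <stdlib.h>

int main(void)
{
    gaspi_proc_init(GASPI_BLOCK);

    /* Segment 0 holds a shared counter at offset 0 on rank 0. */
    gaspi_segment_create(0, sizeof(gaspi_atomic_value_t),
                         GASPI_GROUP_ALL, GASPI_BLOCK,
                         GASPI_MEM_INITIALIZED);

    /* Global atomic: each rank increments the counter on rank 0
     * and learns the previous value. */
    gaspi_atomic_value_t old;
    gaspi_atomic_fetch_add(0, 0, 0, 1, &old, GASPI_BLOCK);

    /* Time-based blocking on a collective: poll with a 10 ms timeout
     * and overlap local work instead of blocking indefinitely. */
    gaspi_return_t ret;
    do {
        ret = gaspi_barrier(GASPI_GROUP_ALL, 10);
        /* ... do useful local work while other ranks catch up ... */
    } while (ret == GASPI_TIMEOUT);

    gaspi_proc_term(GASPI_BLOCK);
    return EXIT_SUCCESS;
}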

[Figure: GASPI memory segments mapped to an architecture such as the Intel Xeon Phi.]

Just in time for ISC 2013 in Leipzig, the GASPI consortium will release the new GASPI standard. GASPI is a PGAS API for developers who seek high scalability as well as low-level support for fault-tolerant execution.

The creators say the GASPI API is very flexible and offers full control over the underlying network resources and the pre-pinned GASPI memory segments. GASPI allows the memory heterogeneity (RAM, GPGPU, NVRAM) of modern supercomputers to be mapped to dedicated memory segments, and also makes it possible for multiple memory management systems (e.g. symmetric and non-symmetric memory management) and/or multiple applications to coexist in the same partitioned global address space.
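A brief sketch of this segment model, again against the GPI-2 C API: the segment ids and sizes below are arbitrary, and whether a particular segment is backed by GPGPU or NVRAM memory is an implementation and configuration choice rather than something this code establishes.

#include <GASPI.h>

void setup_segments(void)
{
    /* Segment 0: a large segment in host RAM, created collectively
     * and managed symmetrically across all ranks. */
    gaspi_segment_create(0, 1UL << 28, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_UNINITIALIZED);

    /* Segment 1: a second, independently managed segment; an
     * implementation may back such a segment with GPGPU or NVRAM
     * memory instead of host RAM. */
    gaspi_segment_create(1, 1UL << 26, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_UNINITIALIZED);

    /* Obtain a local pointer into a pre-pinned segment for direct use. */
    gaspi_pointer_t ptr;
    gaspi_segment_ptr(0, &ptr);
}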

The first implementation of GASPI is GPI-2, from Fraunhofer ITWM. GPI-2 implements the GASPI standard and will be available as open source software shortly before ISC'13 in Leipzig, as well as at the Fraunhofer ITWM booth during the event.
