The top research stories of the week have been hand-selected from prominent journals and leading conference proceedings. Here’s another diverse set, with items on GPU programming, distributed file systems, exhaustive search with parallel agents, the benefits of invasive computing, and an HPC cloud proof-of-concept.
Extending OpenMP for GPU Programming
The International Journal of Computational Science and Engineering (Volume 8, Number 1/2013) includes an interesting research item from Seyong Lee (Computer Science and Mathematics Division, Oak Ridge National Laboratory) and Rudolf Eigenmann (School of Electrical and Computer Engineering, Purdue University). The duo have developed a directive-based OpenMP extension to address programmability and tunability issues relevant to the GPGPU developer community.
GPGPU computing provides an inexpensive parallel computing platform for compute-intensive applications, yet programming complexity can challenge developers and hinder more widespread adoption, the authors note. “Even though the compute unified device architecture (CUDA) programming model offers better abstraction, developing efficient GPGPU code is still complex and error-prone,” they argue.
Thus the authors propose a new programming interface, called OpenMPC, which comprises standard OpenMP plus a new set of compiler directives and environment variables extended for CUDA. They argue that OpenMPC, a directive-based, high-level programming model, offers better programmability and tunability for GPGPU code.
“We have developed a fully automatic compilation and user-assisted tuning system supporting OpenMPC. In addition to a range of compiler transformations and optimisations, the system includes tuning capabilities for generating, pruning, and navigating the search space of compilation variants. Evaluation using 14 applications shows that our system achieves 75% of the performance of the hand-coded CUDA programmes (92% if excluding one exceptional case),” they write.
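The tuning workflow the authors describe, generating a search space of compilation variants and then pruning it, can be sketched in miniature. The option names and the pruning rule below are invented for illustration; they are not OpenMPC's actual tuning parameters:

```python
from itertools import product

# Hypothetical boolean tuning options, standing in for the CUDA-related
# compiler options a tuning system like OpenMPC's would explore.
OPTIONS = {
    "use_shared_memory": (False, True),
    "loop_unrolling": (False, True),
    "coalesced_access": (False, True),
}

def generate_variants(options):
    """Enumerate every combination of tuning options (the full search space)."""
    names = sorted(options)
    for values in product(*(options[n] for n in names)):
        yield dict(zip(names, values))

def prune(variants, constraint):
    """Drop variants known to be invalid or unprofitable for a given kernel."""
    return [v for v in variants if constraint(v)]

# Illustrative constraint (not OpenMPC's): only enable coalesced access
# together with shared-memory staging.
space = list(generate_variants(OPTIONS))
pruned = prune(space, lambda v: v["coalesced_access"] <= v["use_shared_memory"])

print(len(space), len(pruned))  # 8 variants before pruning, 6 after
```

The tuner would then time-test only the surviving variants, which is what makes navigating the space tractable.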
Six Distributed File Systems
A trio of French scientists provide a thorough analysis of six distributed file systems in this recent 39-page research paper, appearing in the HAL/INRIA open archive. The authors, one from SysFera and two from Laboratoire MIS at the Université de Picardie Jules Verne, start with the observation that a large number of HPC applications rely on distributed computing environments to process and analyze large amounts of data. (Examples provided include probabilistic analysis, weather forecasting and aerodynamic research.) They further note the emergence of new infrastructures designed to handle the increased computational demand. Most of these new architectures, the authors assert, involve some manner of distributed computing, such that the computing process is spread among the nodes of a large distributed computing platform.
Furthermore, the team believes that the growing body of scientific data will likewise necessitate innovations in distributed storage. “Easy to use and reliable storage solutions” are essential for scientific computing, they argue, and the community already has a “well-tried solution to this issue,” in the form of Distributed File Systems (DFSs).

The paper compares six modern DFSs on three fundamental issues: scalability, transparency and fault tolerance. For their study, the authors selected popular, widely used and frequently updated DFSs: HDFS, MooseFS, iRODS, Ceph, GlusterFS, and Lustre.
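To make two of those criteria concrete, here is a minimal, hypothetical data-placement sketch in the spirit of how DFSs distribute and replicate chunks. It uses rendezvous (highest-random-weight) hashing as a simplified stand-in; none of the six systems uses exactly this code:

```python
import hashlib

def place(chunk_id, nodes, replicas=2):
    """Rank nodes by a per-chunk hash and keep the top `replicas`:
    rendezvous hashing, a toy stand-in for real DFS placement schemes
    (Ceph's CRUSH, for example, is far more sophisticated)."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256((chunk_id + n).encode()).hexdigest(),
    )
    return ranked[:replicas]

nodes = ["node-a", "node-b", "node-c", "node-d"]
primary, backup = place("file.dat/chunk-0", nodes)

# Scalability: adding a node only relocates chunks that now rank it first.
# Fault tolerance: if the primary fails, the replica still serves the chunk.
survivors = [n for n in nodes if n != primary]
print(primary, backup, backup in survivors)
```

Real systems layer metadata management, consistency and recovery on top of placement, which is where the six DFSs in the study differ most.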
Exhaustive Search with Parallel Agents
In a recent paper, researcher Toni Draganov Stojanovski of the University for Information Science and Technology in the Republic of Macedonia sets out to examine the performance of exhaustive search when it is conducted with many search agents working in parallel.
Stojanovski and his research team observe that the advance of manycore processors and more sophisticated distributed processing offers more opportunities for exhaustive search using multiple search agents. While a selection of elegant algorithms is available for solving complex problems, exhaustive search remains the best, or only, solution for real-life problems with no regular structure.
The paper reviews the performance achieved by the exhaustive search approach with several different search agents, paying special attention to the following parameters:
• Differences in speeds of search agents.
• Length of allocated search subregions.
• Type of communication between the central server and the agents.
The findings reveal that search performance improves as the level of mutual assistance between agents increases. Furthermore, nearly identical performance can be achieved with homogeneous and heterogeneous search agents as long as “the lengths of subregions allocated to individual search regions follow the differences in the speeds of heterogeneous search agents.” The research team also demonstrate how to achieve optimum search performance by increasing the dimension of the search region.
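The quoted condition, subregion lengths that follow agent speeds, can be illustrated with a small sketch; the agent speeds and region size below are made up, and this is not the paper's implementation:

```python
def allocate_subregions(total_length, speeds):
    """Split a 1-D search region so each agent's subregion length is
    proportional to its speed; every agent then finishes at roughly the
    same time, which is the condition under which heterogeneous agents
    match homogeneous performance. Illustrative sketch only."""
    total_speed = sum(speeds)
    bounds, start = [], 0
    for s in speeds:
        length = total_length * s // total_speed
        bounds.append((start, start + length))
        start += length
    # Hand any integer-rounding remainder to the last agent.
    lo, _ = bounds[-1]
    bounds[-1] = (lo, total_length)
    return bounds

speeds = [1, 2, 4]  # hypothetical heterogeneous agent speeds
regions = allocate_subregions(7_000, speeds)
times = [(hi - lo) / s for (lo, hi), s in zip(regions, speeds)]
print(regions)  # [(0, 1000), (1000, 3000), (3000, 7000)]
print(times)    # all agents need the same ~1000 time units
```

With equal-length subregions instead, the fastest agent would sit idle while the slowest one determined the overall search time.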
The work appears in the January issue of the Turkish Journal of Electrical Engineering & Computer Sciences.
The Benefits of Invasive Computing
In their paper, titled Invasive Computing on High Performance Shared Memory Systems, three researchers from the Department of Informatics in Garching, Germany, offer new approaches for improving the throughput of runtime-adaptive applications on cutting-edge HPC systems. Their work was published as a chapter in Facing the Multicore Challenge III.
According to the team, there are multiple issues at play:
A first issue is that the runtime impact of unforeseeable, adaptivity-driven workloads, and of an unknown number of time steps or iterations, is generally not known in advance for adaptive applications. Another issue is that resource scheduling on HPC systems is currently done before an application is started and remains unchanged afterwards, even when requirements vary. Furthermore, an application cannot be started after another running application has allocated all resources.
The authors propose designing algorithms that adapt their use of resources during runtime, e.g., by relinquishing or adding compute cores. When concurrent applications compete for resources, they recommend adopting an appropriate resource-management solution.
To improve the throughput of runtime-adaptive applications, the computer scientists employed invasive paradigms that start applications and schedule resources during runtime. Scheduling work can be achieved through the use of a global resource manager, and scalability graphs help improve load balancing of multiple applications. In the case of adaptive simulations, several scalability graphs are employed.
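A toy version of such a global resource manager, greedily handing out cores according to the marginal throughput gain read off each application's scalability graph, might look like the following. The applications and their curves are hypothetical, and this is a sketch of the idea rather than the authors' invasive-computing runtime:

```python
def schedule(cores, apps):
    """Greedy global resource manager: give each free core to the
    application whose scalability graph promises the largest marginal
    throughput gain from one more core. Illustrative sketch only."""
    alloc = {name: 0 for name in apps}
    for _ in range(cores):
        # Marginal gain of one additional core, per application.
        best = max(apps, key=lambda a: apps[a](alloc[a] + 1) - apps[a](alloc[a]))
        alloc[best] += 1
    return alloc

# Hypothetical scalability graphs: throughput as a function of core count.
apps = {
    "adaptive_sim": lambda c: min(c, 6),        # scales well up to 6 cores
    "io_bound":     lambda c: min(c, 2) * 0.5,  # saturates after 2 cores
}
print(schedule(8, apps))  # → {'adaptive_sim': 6, 'io_bound': 2}
```

Because the manager can rerun this allocation whenever an application's scalability graph changes, cores can migrate between applications at runtime instead of being fixed at launch.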
The paper includes a proof-of-concept that demonstrates runtime/throughput results for a fully adaptive shallow-water simulation.
Easy to Use Cloud Service
Among the many HPC cloud research pieces published this week was an Australian endeavor that seeks to transform complicated HPC applications into easy-to-use SaaS cloud services. Researchers Adam K.L. Wong and Andrzej M. Goscinski from the School of Information Technology at Deakin University in Australia set out to develop and test a unified framework for offering HPC applications as services in clouds.
The duo acknowledge the benefits of HPC cloud. Scalable, affordable and accessible on demand, the use of HPC resources in a cloud environment has been a natural fit for many scientific disciplines, including biology, medicine and chemistry, they write. Still, they have observed a steep learning curve when it comes to preparing for and deploying HPC applications in the cloud. This, they say, has stood in the way of many innovative HPC-backed discoveries.
To remedy this situation and improve ease of use of and access to HPC resources, the researchers are looking to the world of Web-based tools, although, as they write, “high-performance computational research are both unique and complex, which make the development of web-based tools for this research difficult.”
The paper describes their approach to developing a unified cloud framework, one that makes it easier for users across domains to deploy HPC applications in public clouds as services. Their proof-of-concept integrates three components:
(i) Amazon EC2 public cloud for providing HPC infrastructure.
(ii) an HPC service software library for accessing HPC resources.
(iii) the Galaxy web-based platform for exposing and accessing HPC application services.
The authors conclude that “this new approach can reduce the time and money needed to deploy, expose and access discipline HPC applications in clouds.”