Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Tag: MPI

The Portability Mandate

Jul 24, 2014 |

Argonne National Laboratory recently published several sessions from its Summer 2013 Extreme-Scale Computing program to YouTube. One of these is a lesson on combining performance and portability presented by Argonne Assistant Computational Scientist Jeff Hammond. For some reason the video image does not match the lecture, but you will find a link to Hammond’s slide deck here. Read more…

Getting to Exascale

Jul 24, 2014 |

As the exascale barrier draws ever closer, experts around the world turn their attention to enabling this major advance. Providing a truly deep dive into the subject matter is the Harvard School of Engineering and Applied Science. The institution’s summer 2014 issue of “Topics” takes a hard look at the way that supercomputing is progressing. In Read more…

An Easier, Faster Programming Language?

Jun 18, 2014 |

The HPC community has turned out supercomputers surpassing tens of petaflops of computing power by stringing together thousands of multicore processors, often in tandem with accelerators like NVIDIA GPUs and Intel Phi coprocessors. Of course, these multi-million dollar systems are only as useful as the programs that run on them, and developing applications that can Read more…

Is It Time to Look Beyond MPI?

Apr 30, 2014 |

MPI…the acronym stands for Message Passing Interface, yet in some places it is nearly synonymous with HPC. While this was true in years past, is it still the case? A recent blog post by PhD candidate Andreas Schäfer (specialty: HPC, supercomputing, and discrete optimization) tackles this subject, raising a number of excellent questions in the process. “MPI Read more…

HPC and Big Data: A “Best of Both Worlds” Approach

Mar 31, 2014 |

While they may share a number of similar, overarching challenges, data-intensive computing and high performance computing have some rather different considerations, particularly in terms of management, emphasis on performance, storage and data movement. Still, there is plenty of room for the two areas to merge, according to Indiana University’s Dr. Geoffrey Fox. Fox and his Read more…

Benchmarking MPI Communication on Phi-Based Clusters

Mar 12, 2014 |

Intel’s Many Integrated Core (MIC) architecture was designed to accommodate highly parallel applications, a great many of which rely on the Message Passing Interface (MPI) standard. Applications deployed on Intel Xeon Phi coprocessors may use offload programming, an approach similar to the CUDA framework for general purpose GPU (GPGPU) computing, in which the CPU-based application is Read more…

SC13 Research Highlight: There Goes the Performance Neighborhood…

Nov 16, 2013 |

Message passing can take up a significant fraction of the run time for massively parallel science simulation codes. Consistently high message passing rates are required for these codes to deliver good performance. At Supercomputing 2013 (SC13), our research team from Lawrence Livermore (LLNL) will present the results of our study that show that run-to-run variability Read more…

Developers Tout GPI Model for Exascale Computing

Jun 19, 2013 |

Supercomputer architectures have evolved considerably over the last 20 years, particularly in the number of processors linked together. One aspect of HPC that hasn’t changed, however, is the MPI programming model.

Development Tools Hold Key to Results for Heterogeneous HPC

Jun 3, 2013 |

The mainstream adoption of accelerator-based computing in HPC is driving the most significant change to software since the arrival of MPI almost twenty years ago. Faced with competing “similar but different” approaches to heterogeneous computing, developers and computational scientists need to tackle their software challenges quickly. They are rapidly discovering that a single unified development toolkit able to both debug and profile is the key to results – whichever platform they choose.

GASPI Targets Exascale Programming Limits

May 28, 2013 |

As we look ahead to the exascale era, many have noted that there will be some limitations to the MPI programming model. According to researchers who work on the Global Address Space Programming Interface…