Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Tag: MPI

Benchmarking MPI Communication on Phi-Based Clusters

Mar 12, 2014 |

Intel’s Many Integrated Core (MIC) architecture was designed to accommodate highly parallel applications, a great many of which rely on the Message Passing Interface (MPI) standard. Applications deployed on Intel Xeon Phi coprocessors may use offload programming, an approach similar to the CUDA framework for general-purpose GPU (GPGPU) computing, in which the CPU-based application is… Read more…

SC13 Research Highlight: There Goes the Performance Neighborhood…

Nov 16, 2013 |

Message passing can take up a significant fraction of the run time for massively parallel science simulation codes, and consistently high message-passing rates are required for these codes to deliver good performance. At Supercomputing 2013 (SC13), our research team from Lawrence Livermore National Laboratory (LLNL) will present results of a study showing that run-to-run variability… Read more…

Developers Tout GPI Model for Exascale Computing

Jun 19, 2013 |

Supercomputer architectures have evolved considerably over the last 20 years, particularly in the number of processors that are linked together. One aspect of HPC software that hasn’t changed is the MPI programming model.

Development Tools Hold Key to Results for Heterogeneous HPC

Jun 3, 2013 |

The mainstream adoption of accelerator-based computing in HPC is driving the most significant change to software since the arrival of MPI almost twenty years ago. Faced with competing “similar but different” approaches to heterogeneous computing, developers and computational scientists need to tackle their software challenges quickly. They are rapidly discovering that a single unified development toolkit able to both debug and profile is the key to results – whichever platform they choose.

GASPI Targets Exascale Programming Limits

May 28, 2013 |

As we look ahead to the exascale era, many have noted that there will be some limitations to the MPI programming model. According to researchers who work on the Global Address Space Programming Interface…

Is Amazon’s ‘Fast’ Interconnect Fast Enough for MPI?

Apr 10, 2013 |

Amazon’s EC2 Cluster Compute instance goes head-to-head with a Myrinet 10GigE cluster.

The Week in HPC Research

Mar 21, 2013 |

The top research stories of the week include an evaluation of sparse matrix multiplication performance on the Xeon Phi versus four other architectures; a survey of HPC energy efficiency; performance modeling of OpenMP, MPI, and hybrid scientific applications using weak scaling; an exploration of anywhere, anytime cluster monitoring; and a framework for data-intensive cloud storage.

XPRESS Route to Exascale

Feb 28, 2013 |

The Center for Research in Extreme Scale Computing (CREST) at Indiana University just got a $1.1 million grant to help further the move to exascale computing. Director Thomas Sterling is using some of the money to bolster IU’s research into highly parallel processing for HPC. He talks to HPCwire about his plans.

Intel Adds Programming Support for Latest Silicon

Sep 6, 2012 |

We’re only a little more than halfway through 2012, but Intel has already announced the 2013 versions of Parallel Studio XE and Cluster Studio XE, two software suites that support x86-based parallel programming for high performance computing and beyond. Intel refreshes its software development offerings each year at about this time to sync up its tool support with the latest and greatest silicon and to add new features for developers.