
Tag: MPI

HPC and Big Data: A “Best of Both Worlds” Approach

Mar 31, 2014 |

While they may share a number of similar, overarching challenges, data-intensive computing and high performance computing have some rather different considerations, particularly in terms of management, emphasis on performance, storage and data movement. Still, there is plenty of room for the two areas to merge, according to Indiana University’s Dr. Geoffrey Fox. Fox and his Read more…

Benchmarking MPI Communication on Phi-Based Clusters

Mar 12, 2014 |

Intel’s Many Integrated Core (MIC) architecture was designed to accommodate highly parallel applications, a great many of which rely on the Message Passing Interface (MPI) standard. Applications deployed on Intel Xeon Phi coprocessors may use offload programming, an approach similar to the CUDA framework for general purpose GPU (GPGPU) computing, in which the CPU-based application is Read more…
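
For readers unfamiliar with the offload approach described above, here is a minimal sketch (not code from the benchmarking study): each MPI rank runs on the host CPU and hands a compute loop to its Xeon Phi coprocessor through Intel's offload pragma, while the MPI calls stay on the host. The problem size and build command are illustrative assumptions.

```c
/* Sketch only: MPI on the host, work offloaded to a Xeon Phi coprocessor
 * via Intel's offload pragma. Assumed build: mpiicc -qopenmp offload_sketch.c
 * If no coprocessor or offload support is present, the region runs on the host. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *x = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++)
        x[i] = rank + i * 1e-6;

    double local_sum = 0.0;

    /* Host code hands the loop to the coprocessor; the array is copied in,
     * the scalar result is copied back. */
    #pragma offload target(mic) in(x:length(N)) inout(local_sum)
    {
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < N; i++)
            local_sum += x[i];
    }

    /* MPI communication remains on the host side. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    free(x);
    MPI_Finalize();
    return 0;
}
```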

SC13 Research Highlight: There Goes the Performance Neighborhood…

Nov 16, 2013 |

Message passing can take up a significant fraction of the run time for massively parallel science simulation codes. Consistently high message passing rates are required for these codes to deliver good performance. At Supercomputing 2013 (SC13), our research team from Lawrence Livermore National Laboratory (LLNL) will present the results of our study showing that run-to-run variability Read more…
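
The study's own benchmarks are not reproduced here, but a minimal two-rank ping-pong sketch like the one below illustrates the kind of message-rate measurement whose run-to-run variability the work examines; the iteration count and one-byte message size are arbitrary choices.

```c
/* Sketch only: two-rank ping-pong reporting achieved message rate.
 * Run with at least two ranks, e.g. mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 100000;
    char msg = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    /* Two messages per iteration (one each way). */
    if (rank == 0)
        printf("message rate: %.0f msgs/s\n", 2.0 * iters / (t1 - t0));

    MPI_Finalize();
    return 0;
}
```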

Developers Tout GPI Model for Exascale Computing

Jun 19, 2013 |

Supercomputer architectures have evolved considerably over the last 20 years, particularly in the number of processors that are linked together. One aspect of the HPC landscape that hasn’t changed, however, is the dominance of the MPI programming model.

Development Tools Hold Key to Results for Heterogeneous HPC

Jun 3, 2013 |

The mainstream adoption of accelerator-based computing in HPC is driving the most significant change to software since the arrival of MPI almost twenty years ago. Faced with competing “similar but different” approaches to heterogeneous computing, developers and computational scientists need to tackle their software challenges quickly. They are rapidly discovering that a single unified development toolkit able to both debug and profile is the key to results – whichever platform they choose.

GASPI Targets Exascale Programming Limits

May 28, 2013 |

As we look ahead to the exascale era, many have noted that there will be some limitations to the MPI programming model. According to researchers who work on the Global Address Space Programming Interface…
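
The GASPI API itself is not shown here; as a rough illustration of the one-sided, asynchronous communication style that partitioned global address space (PGAS) approaches emphasize, the sketch below uses MPI-3's RMA interface instead, where one rank writes directly into another rank's exposed memory window without the target posting a matching receive.

```c
/* Sketch only: one-sided communication with MPI-3 RMA, used here merely to
 * illustrate the one-sided style that PGAS approaches such as GASPI target. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int value = -1;
    MPI_Win win;
    /* Each rank exposes one int that remote ranks may write directly. */
    MPI_Win_create(&value, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0) {
        /* Rank 0 pushes a value into every other rank's window; no receives
         * are posted on the targets. */
        int payload = 42;
        for (int r = 1; r < nprocs; r++)
            MPI_Put(&payload, 1, MPI_INT, r, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);

    printf("rank %d sees value %d\n", rank, value);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```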

Is Amazon’s ‘Fast’ Interconnect Fast Enough for MPI?

Apr 10, 2013 |

Amazon’s EC2 Cluster Compute instance goes head-to-head with a Myrinet 10GigE cluster.
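
As a rough illustration only (this is not the benchmark used in the comparison), the sketch below measures point-to-point bandwidth between two MPI ranks with large messages, the kind of microbenchmark typically used to compare interconnects; the 4 MB message size and iteration count are arbitrary.

```c
/* Sketch only: two-rank large-message bandwidth test.
 * Run with at least two ranks, e.g. mpirun -np 2 ./bw */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int size = 4 * 1024 * 1024;   /* 4 MB messages */
    const int iters = 100;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0)
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("bandwidth: %.2f MB/s\n", (double)size * iters / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```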

The Week in HPC Research

Mar 21, 2013 |

The top research stories of the week include an evaluation of sparse matrix multiplication performance on Xeon Phi versus four other architectures; a survey of HPC energy efficiency; performance modeling of OpenMP, MPI and hybrid scientific applications using weak scaling; an exploration of anywhere, anytime cluster monitoring; and a framework for data-intensive cloud storage.
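
As context for the hybrid OpenMP/MPI modeling work mentioned above, here is a minimal hybrid sketch (not taken from any of the cited papers): MPI ranks span nodes while OpenMP threads fill each rank.

```c
/* Sketch only: hybrid MPI + OpenMP hello world. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Request threaded MPI; FUNNELED suffices when only the master thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        #pragma omp critical
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```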

XPRESS Route to Exascale

Feb 28, 2013 |

The Center for Research in Extreme Scale Computing (CREST) at Indiana University just got a $1.1 million grant to help further the move to exascale computing. Director Thomas Sterling is using some of the money to bolster IU’s research into highly parallel processing for HPC. He talks to HPCwire about his plans.