Tag: MPI

Nielsen and Intel Migrate HPC Efficiency and Data Analytics to Big Data

May 16, 2016 |

Nielsen has collaborated with Intel to migrate important pieces of HPC technology into Nielsen’s big-data analytic workflows, including MPI, mature numerical libraries from NAG (the Numerical Algorithms Group), and custom C++ analytic codes. This complementary hybrid approach integrates the benefits of Hadoop data management and workflow scheduling with an extensive pool of HPC tools and C/C++ capabilities for analytic applications. In particular, the use of MPI reduces latency, permits reuse of the Hadoop servers, and co-locates the MPI applications with the data.

The Scalability Dilemma and the Case for Decoupling

Mar 30, 2016 |

The need for extreme-scale computing is driven by the ever-expanding Internet. In the abstract, the entire network is already an extreme-scale computing engine. The technical difficulty, however, is harnessing its dispersed computing power for a single purpose. An analogy would be building an engine capable of harnessing the Read more…

Energy-Aware MPI Communication Library for Power Management?

Feb 9, 2016 |

Can MPI communication runtimes be designed to be energy-aware? Can energy be saved during MPI calls without a loss in performance? These are two questions briefly examined by Dhabaleswar K. (DK) Panda, of The Ohio State University, in a blog post on the Top500 website today (Designing Energy-Aware MPI Communication Library: Opportunities and Challenges). Power, Read more…

COSMOS Team Achieves 100x Speedup on Cosmology Code

Aug 24, 2015 |

One of the most popular sessions at the Intel Developer Forum last week in San Francisco, and certainly one of the most exciting from an HPC perspective, brought together two of the world’s foremost experts in parallel programming to discuss current state-of-the-art methods for leveraging parallelism on processors and coprocessors. The speakers, Intel’s Jim Jeffers and Read more…

The Portability Mandate

Jul 24, 2014 |

Argonne National Laboratory recently published several sessions from its Summer 2013 Extreme-Scale Computing program to YouTube. One of these is a lesson on combining performance and portability presented by Argonne Assistant Computational Scientist Jeff Hammond. For some reason the video image does not match the lecture, but you will find a link to Hammond’s slide deck here. Read more…

Getting to Exascale

Jul 24, 2014 |

As the exascale barrier draws ever closer, experts around the world are turning their attention to enabling this major advance. Providing a truly deep dive into the subject is the Harvard School of Engineering and Applied Sciences. The institution’s summer 2014 issue of “Topics” takes a hard look at the way supercomputing is progressing. In Read more…

An Easier, Faster Programming Language?

Jun 18, 2014 |

The HPC community has turned out supercomputers surpassing tens of petaflops of computing power by stringing together thousands of multicore processors, often in tandem with accelerators like NVIDIA GPUs and Intel Xeon Phi coprocessors. Of course, these multi-million-dollar systems are only as useful as the programs that run on them, and developing applications that can Read more…

Is It Time to Look Beyond MPI?

Apr 30, 2014 |

MPI…the acronym stands for Message Passing Interface, yet in some circles it is nearly synonymous with HPC. While this was true in years past, is it still the case? A recent blog post by PhD candidate Andreas Schäfer (specialty: HPC, supercomputing, and discrete optimization) tackles this subject, raising a number of excellent questions in the process. “MPI Read more…

HPC and Big Data: A “Best of Both Worlds” Approach

Mar 31, 2014 |

While they may share a number of similar, overarching challenges, data-intensive computing and high performance computing have some rather different considerations, particularly in terms of management, emphasis on performance, storage, and data movement. Still, there is plenty of room for the two areas to merge, according to Indiana University’s Dr. Geoffrey Fox. Fox and his Read more…

Benchmarking MPI Communication on Phi-Based Clusters

Mar 12, 2014 |

Intel’s Many Integrated Core (MIC) architecture was designed to accommodate highly parallel applications, a great many of which rely on the Message Passing Interface (MPI) standard. Applications deployed on Intel Xeon Phi coprocessors may use offload programming, an approach similar to the CUDA framework for general-purpose GPU (GPGPU) computing, in which the CPU-based application is Read more…