MPI Is 25 Years Old!

By Ewing Lusk and Jesper Larsson Träff

May 1, 2017

Has it really been 25 years since the Message Passing Interface standard was born? It has indeed, and at this year’s EuroMPI meeting in September in Chicago, a “birthday” symposium will be held to celebrate the occasion. Speakers from the remote past of MPI, the middle years, and the current time will touch on the ideas that have given MPI its long life and will highlight the impact the standard has had on multiple aspects of parallel computing, from applications to libraries to its multiple implementations.

The concept of a standard for message passing emerged over time. While assorted systems, both commercial and free, competed for “mind share” and commercial success, a small meeting of researchers took place in 1991 at a conference in Oberlech, Austria. There Jack Dongarra, Rolf Hempel, Tony Hey, and David Walker drafted a white paper outlining a proposal for what a standard might look like, borrowing heavily from Marc Snir’s work at IBM. Jack Dongarra, Professor of Computer Science at the University of Tennessee, recalls, “Each of the existing systems had merit, but none had everything needed to move application development forward. We decided to instigate a community effort to address the problem.” It seems reasonable to affix the label “Birth of MPI” to the resulting workshop entitled “Standards for Message Passing in a Distributed Memory Environment” organized by Jack Dongarra and David Walker with funding from Ken Kennedy’s Center for Research on Parallel Computation at Rice University in April 1992. That was the first time a wide variety of interested stakeholders gathered in an open meeting dedicated to the topic of a standard for message passing, forecasting the openness of the process that would follow. The result of that workshop, which featured presentations on multiple vendor-specific and portable systems, was a realization that a great diversity of good ideas existed among then-current message-passing libraries but that the lack of a standard was impeding the progress of parallel computing.


At the Supercomputing ’92 conference in November, a committee was formed to define a message-passing standard. At the time of creation, no one knew what the outcome might look like, but the effort was begun with the following objectives:  (1) to define a portable standard for message-passing, which would not be an official, ANSI-like standard but would attract both implementers and users; (2) to operate in a completely open way, allowing anyone to join the discussions, either by attending meetings in person or by monitoring open email discussions; and (3) to be finished in one year.

The MPI effort was a lively one, as a result of the tensions among these three objectives. The committee decided to follow the format used by the High-Performance Fortran Forum, whose procedures had been well received by its community. (It even decided to meet in the same hotel in North Dallas.) An early decision of the MPI Forum was not to adopt any existing system or proposal as a starting point but to start from scratch, with the explicit goals of portability, expressiveness, and performance capability. “Ease of use” was not a primary goal; the idea was that libraries, compilers, and other software layers would provide this aspect of parallel programming, and that applications would rely on such layers, implemented over MPI, for convenience of programming.

More formal meetings began in January 1993 under the name “MPI Forum,” an extension of the SC ’92 committee, and continued until the following February. Over that time, more than 60 people from 40 organizations participated, although attendance at most meetings was about 30. The procedures for submitting proposals and voting were adopted from those of the HPF Forum, which had worked well. One reason the MPI standardization effort succeeded was that the MPI Forum itself was so broadly based. At the original (MPI-1) Forum, the parallel computer vendors were represented by Convex, Cray, IBM, Intel, Meiko, nCUBE, NEC, and Thinking Machines. Members of the groups associated with portable software libraries were also there: PVM, p4, Zipcode, Chameleon, PARMACS, TCGMSG, and Express were all represented, as well as some application groups. One subgroup committed to providing a test implementation of each iteration of the standard as it evolved from meeting to meeting; this proved valuable in uncovering the implementation consequences of API decisions, as well as ensuring that when the standard definition was completed, a prototype implementation was immediately available. Marc Snir, Professor of Computer Science at the University of Illinois and an original Forum member representing IBM, has said, “The MPI Forum was an outstanding example of many companies, research labs, and individuals working together to achieve a common good.”

The first version of the MPI standard was published in May 1994. It included standard versions of many well-known message-passing operations such as blocking and nonblocking sends and receives, together with collective operations such as broadcast, reduce, and scan. It broke new ground with its concept of communicators (essential for the modularity of MPI-based libraries), datatypes (to deal efficiently with structured and noncontiguous messages), and process topologies (ignored by many in those days but becoming more significant on today’s machines). Its inclusion of both Fortran and C bindings (with identical semantics) signaled its desire to be immediately useful to both libraries and end-user scientific applications.
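
To give a flavor of the interface, here is a minimal MPI-1-style example (our sketch, not taken from the standard document); the ranks, tag value, and payload are purely illustrative.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes? */

        if (rank == 0)
            value = 42;

        /* Collective operation: broadcast the root's value to every rank. */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Blocking point-to-point: rank 0 sends to rank 1, if it exists. */
        if (rank == 0 && size > 1)
            MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(&value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);

        printf("rank %d of %d has value %d\n", rank, size, value);
        MPI_Finalize();
        return 0;
    }

Note that every call names a communicator, here MPI_COMM_WORLD, the concept that made library modularity possible.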

MPI also took an innovative approach to the problem of tools for debugging and performance analysis. Rather than designing such a tool into the standard specification itself, MPI provided a mechanism, its “profiling interface,” by which anyone could write a library that intercepted a subset of MPI calls in order to count, measure, or display them in some way, before (and after) passing them to the underlying MPI implementation for actual execution. As expected, this has spawned a wide collection of tools that are completely portable, since the profiling interface is part of the standard rather than the tool itself.
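
To make the mechanism concrete, here is a minimal sketch of such a wrapper (our illustration, not any particular tool): it redefines two MPI routines, keeps a simple count, and forwards each call to the underlying implementation through the standard PMPI_ entry points.

    #include <mpi.h>
    #include <stdio.h>

    static long send_count = 0;

    /* Intercept MPI_Send: count the call, then pass it to the
       underlying implementation via its PMPI_ name. */
    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        send_count++;
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

    /* Intercept MPI_Finalize to report the count before shutdown. */
    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d issued %ld MPI_Send calls\n", rank, send_count);
        return PMPI_Finalize();
    }

Because such a wrapper is linked ahead of the MPI library, it works unchanged with any conforming implementation, which is precisely why portable tools proliferated.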

During the 1993-1994 meetings of the MPI Forum, several issues were postponed in order to reach early agreement on a core of message-passing functionality, which nonetheless included several innovative concepts, such as communicators, datatypes, and topologies. The Forum reconvened during 1995-1997 to extend MPI to include remote memory operations, parallel I/O, and dynamic process management, along with a number of features designed to increase the convenience and robustness of MPI. This effort resulted in the MPI-2 standard, released in 1997. MPI-2 had three major new feature sets:  an extensive interface to efficiently support parallel file I/O to and from MPI programs; support for one-sided (put/get) communication; and dynamic process management, namely, the ability to create additional processes from a running MPI program and the ability for separately started MPI applications to connect to each other and communicate. MPI-2 also introduced other features, such as precisely defined semantics for multithreaded communication that in some way foreshadowed the multiple modes of OpenMP parallelism, bindings for Fortran-90 and C++, and detailed support for mixed language programming (how to send a message from Fortran and have it received in C, for example).

While the MPI-2 standard was finished in 1997, it took a few years for full implementations to appear. In contrast to the MPI-1 effort, there was no hand-in-hand prototype developed for most of the additions of MPI-2, and in retrospect, some of the useful feedback on the standardization process from a co-developed prototype was missing. Nevertheless, over the next decade and a half, MPI filled the needs of most computational science codes that required a high-performance, scalable, portable programming system. The Forum itself disbanded.

The timing of MPI seems to have been about right. Trying to establish such a standard earlier might have failed to benefit from research into multiple approaches. Indeed, some feared that adoption of a standard would shut down research into the message-passing model. In fact, the opposite happened. Having a fairly complete, performance-enabling, portable interface target stimulated a wealth of research into implementation approaches, tool development, and application algorithms. Much of the research appeared in the Proceedings of the Euro-* conferences, underlining the international nature of MPI-based research. These workshops started as PVM (Parallel Virtual Machine) user group meetings, became EuroPVM workshops from 1994 to 1996, EuroPVM/MPI from 1997 to 2009, and EuroMPI from 2010 to 2017. It is telling and amusing that “Euro”MPI 2017 will be held in Chicago this year.

Over the next decade or so, the MPI Forum itself was inactive, the published standard remained unchanged, and MPI was a stable interface for users and implementers alike. Vendors used the open-source implementations (MPICH, and later Open MPI), layered to allow optimizations at multiple levels, to evolve their own proprietary implementations and gradually take advantage of their specialized hardware.

This was no mean feat. As Bill Gropp, Acting Director and Chief Scientist at the National Center for Supercomputing Applications, says, “One of the hardest things about an MPI implementation is keeping the implementation focused on the future. This requires finding a balance between making engineering decisions based on today’s hardware and designing and implementing for likely directions in the future.” Many message-passing applications, written in customized ways to deal with the portability problem, switched to making direct MPI calls, improving efficiency and maintainability. And library development was unleashed, fulfilling one of MPI’s original goals. Barry Smith, Senior Computer Scientist at Argonne National Laboratory and primary developer of the PETSc library, explains MPI’s contribution to library development as follows: “MPI changed everything, by providing an extensive API for message passing and collectives that allowed portable distributed memory scientific libraries to no longer need to be programmed to the lowest common denominator of message passing systems. Equally important, MPI eliminated the problem of ‘tag collision’ where each library might utilize the same tags for messages, resulting in messages sent from one library being (improperly) received and processed by a different library or the application code. The MPI communicator concept made distributed parallel scientific libraries practical in two ways, it eliminated the tag collision problem and (by the use of subcommunicators) allowed applications to simply utilize scientific libraries to perform needed computations on subsets of processes, for example with ‘divide and conquer’ algorithms.”
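
The mechanics behind Smith’s observations are easy to illustrate. In the hypothetical sketch below (our example, not PETSc code), a library duplicates the caller’s communicator so that its messages travel in a separate communication context, and MPI_Comm_split carves the processes into subcommunicators that a library can use for divide-and-conquer work.

    #include <mpi.h>

    /* A library would typically do this once, keeping lib_comm private:
       same processes, but a separate communication context, so tags used
       inside the library can never collide with the application's tags. */
    static void library_init(MPI_Comm app_comm, MPI_Comm *lib_comm)
    {
        MPI_Comm_dup(app_comm, lib_comm);
    }

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Comm lib_comm, half_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        library_init(MPI_COMM_WORLD, &lib_comm);

        /* Split the processes into two halves; each half_comm can be
           handed to a library that then works only on that subset. */
        MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &half_comm);

        MPI_Comm_free(&half_comm);
        MPI_Comm_free(&lib_comm);
        MPI_Finalize();
        return 0;
    }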

For more than a decade after the Forum disbanded in 1997, the MPI specification remained stable, providing a period during which MPI could “sink in” while implementations steadily improved, parallel libraries flourished, and applications, now portable, took advantage of multiple new tera- and petascale machines, challenging those implementations and libraries to become ever more scalable. However, HPC moves fast, and after a dozen years multiple trends had gradually increased community pressure to restart the MPI process, whose inclusiveness and openness had served the community so well in the past.

For one thing, the scale of massively parallel systems had reached more than a million cores. Single-core processors had disappeared, nodes had become symmetric multiprocessors, and defining how a distributed-memory model like MPI’s would interact with threads (specifically, the emerging OpenMP standard) and shared memory became more critical. Remote memory access (put/get) support in networks became mainstream, making efficient remote memory access (RMA) more attractive as a programming model. Although MPI-2’s RMA was used by some applications, it had failed to live up to expectations and needed an overhaul. C and Fortran had both evolved, requiring updates to the MPI interfaces. Nonblocking collective operations had been proposed, and some experience with them had been gained. At the time of MPI-2, nonblocking collectives had been considered but deliberately left out of the standard because of the expectation that they could be implemented on top of MPI by issuing blocking operations in separate threads. However, threads turned out to be more difficult to use efficiently, and support for threads was uneven. The increase in scale had brought fault tolerance issues to the fore. And finally, a list of (mostly) minor errata had accumulated.

In response to all this, the MPI Forum reconstituted itself in 2008, at first tidying up MPI-2 and eventually releasing the initial version of MPI-3 in September 2012. Major new features of MPI-3 include the nonblocking collective operations, together with “neighborhood” collectives, useful for stencil computations and relying on the topology functions from MPI-1. (The concept of a nonblocking barrier was considered a joke during the MPI-1 meetings; now MPI has one!) There is an improved one-sided communication interface as well as a tools interface that goes beyond MPI-1’s profiling interface to dynamically access the behavior of an MPI implementation. The Fortran bindings have been updated to take advantage of the Fortran 2008 standard, which was a major step forward in making Fortran work well with libraries in a parallel environment. C bindings were modernized to catch more errors at compile time. Other new features improved interactions with threads and shared memory.
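
As a small illustration of the new nonblocking collectives, here is how the once-joked-about nonblocking barrier can be used. This is a minimal sketch, with do_local_work standing in for whatever computation an application chooses to overlap with the collective.

    #include <mpi.h>

    static void do_local_work(void) { /* placeholder for overlapped computation */ }

    void overlapped_barrier(MPI_Comm comm)
    {
        MPI_Request req;

        MPI_Ibarrier(comm, &req);            /* start the collective */
        do_local_work();                     /* compute while it proceeds */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete the barrier */
    }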

Some topics that the MPI-3 Forum grappled with have not (yet) become part of MPI, such as fault tolerance and more complex support for multithreaded programming, because the Forum decided that current proposals were not quite ready for standardization. The Forum continues to work on these and other issues. Martin Schulz, Computer Scientist at Lawrence Livermore National Laboratory and current chairperson of the MPI Forum, says, “As MPI has established itself as the dominant standard in HPC, it has been exciting and rewarding to see that the members of the MPI forum have not been resting on their laurels. Instead, the Forum continues to drive innovation balanced with the pragmatism necessary for a standards document as we race towards exascale as well as to embrace new commercial application fields and their different requirements.”

Many of the participants in this decades-long effort will speak at the “25 Years of MPI” symposium during the EuroMPI Workshop to be held at Argonne National Laboratory near Chicago on September 25-27, 2017.

About the Authors

Ewing “Rusty” Lusk is Argonne Distinguished Fellow Emeritus at Argonne National Laboratory.

Prof. Jesper Larsson Träff is on the Faculty of Informatics at the Vienna University of Technology.
