Volume 7, Issue 1
Message-passing Standards Developed for Parallel Systems
Jack Dongarra, University of Tennessee; Tom Haupt and Sanjay Ranka, Syracuse University; and Bill Gropp and Rusty Lusk, Argonne National Laboratory
CRPC researchers at the University of Tennessee, Argonne National Laboratory, and Syracuse University have developed a message-passing interface (MPI) standard for parallel systems. This effort will help define the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran or C. The MPI effort is an outgrowth of discussions from the CRPC-sponsored "Standards for Message Passing in a Distributed Memory Environment" workshop held in April 1992. It involves some 40 researchers from various companies, laboratories, and universities and is being conducted in a similar spirit to the High Performance Fortran Forum (see Parallel Computing Research, Vol. 1, Issue 1, p. 3).
MPI provides parallel hardware vendors with a clearly defined base set of routines that can be implemented efficiently. As a result, hardware vendors can build upon this collection of standard low-level routines to create higher-level routines for the distributed-memory communication environments supplied with their parallel machines. MPI offers a simple-to-use interface for the basic user, yet is powerful enough to let programmers exploit the high-performance message-passing operations available on advanced machines.
In an effort to create a "true" standard for message passing, researchers incorporated into MPI the most useful features of several existing systems, rather than adopting any one system as the standard. Features were drawn from systems developed by IBM, Intel, and nCUBE, as well as from Express, p4, and PARMACS. The message-passing paradigm is attractive because of its wide portability: it can be used for communication on distributed-memory and shared-memory multiprocessors, on networks of workstations, and on any combination of these elements. The paradigm will not be made obsolete by increases in network speeds or by architectures combining shared- and distributed-memory components.
"The development of MPI has been a collective process," said CRPC researcher Jack Dongarra. "We will continue to promote discussion within the parallel computing research community on issues that must be addressed in establishing a practical, portable, and flexible standard for message passing."