Message-passing Standards Developed for Parallel Systems

Jack Dongarra, University of Tennessee; Tom Haupt and Sanjay Ranka, Syracuse University; and Bill Gropp and Rusty Lusk, Argonne National Laboratory

CRPC researchers at the University of Tennessee, Argonne National Laboratory, and Syracuse University have developed a Message Passing Interface (MPI) standard for parallel systems. This effort will help define the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran or C. The MPI effort is an outgrowth of discussions from the CRPC-sponsored "Standards for Message Passing in a Distributed Memory Environment" workshop held in April 1992. It involves some 40 researchers from various companies, laboratories, and universities and is being conducted in a spirit similar to that of the High Performance Fortran Forum (see Parallel Computing Research, Vol. 1, Issue 1, p. 3).

MPI provides parallel hardware vendors with a clearly defined base set of routines that can be implemented efficiently. Vendors can then build upon this collection of standard low-level routines to create higher-level routines for the distributed-memory communication environments supplied with their parallel machines. MPI provides a simple-to-use interface for the basic user, yet is powerful enough to let programmers use the high-performance message-passing operations available on advanced machines.
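To give a flavor of the interface described above, the following is a minimal sketch of a portable C program using the core routines from the MPI standard's C bindings (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, and MPI_Finalize). It is an illustrative example, not taken from the standard itself; the message contents and tag value are arbitrary.

```c
#include <mpi.h>
#include <stdio.h>

/* Each non-root process sends its rank to process 0,
 * which receives and prints the values in rank order. */
int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start up the MPI environment   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identifier      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes      */

    if (rank != 0) {
        /* send one integer (our rank) to process 0, with message tag 99 */
        MPI_Send(&rank, 1, MPI_INT, 0, 99, MPI_COMM_WORLD);
    } else {
        int value;
        MPI_Status status;
        for (int src = 1; src < size; src++) {
            /* receive one integer from each worker in turn */
            MPI_Recv(&value, 1, MPI_INT, src, 99, MPI_COMM_WORLD, &status);
            printf("process 0 received rank %d from process %d\n", value, src);
        }
    }

    MPI_Finalize();                         /* shut down the MPI environment  */
    return 0;
}
```

Because these same calls are implemented on every conforming system, the program runs unchanged on distributed-memory machines, shared-memory multiprocessors, or a network of workstations.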

In an effort to create a "true" standard for message passing, the researchers incorporated into MPI the most useful features of several existing systems, rather than adopting any one system as the standard. Features were drawn from systems by IBM, Intel, and nCUBE, as well as from Express, p4, and PARMACS. The message-passing paradigm is attractive because of its wide portability: it can be used for communication on distributed-memory and shared-memory multiprocessors, networks of workstations, and any combination of these elements. The paradigm will not be made obsolete by increases in network speed or by architectures that combine shared- and distributed-memory components.

"The development of MPI has been a collective process," said CRPC researcher Jack Dongarra. "We will continue to promote discussion within the parallel computing research community on issues that must be addressed in establishing a practical, portable, and flexible standard for message passing."
