Volume 7, Issue 1 (Spring/Summer 1999)
Volume 6, Issue 3 (Fall 1998)
Volume 6, Issue 2 (Spring/Summer 1998)
Volume 6, Issue 1 (Winter 1998)
Volume 5, Issue 4 (Fall 1997)
Volume 5, Issue 3 (Summer 1997)
Volume 5, Issue 2 (Spring 1997)
Volume 5, Issue 1 (Winter 1997)
Volume 4, Issue 4 (Fall 1996)
Volume 4, Issue 3 (Summer 1996)
Volume 4, Issue 2 (Spring 1996)
Volume 4, Issue 1 (Winter 1996)
Volume 3, Issue 4 (Fall 1995)
Volume 3, Issue 3 (Summer 1995)
Volume 3, Issue 2 (Spring 1995)
Volume 3, Issue 1 (January 1995)
Volume 2, Issue 4 (October 1994)
Volume 2, Issue 3 (July 1994)
Volume 2, Issue 2 (April 1994)
Volume 2, Issue 1 (January 1994)
Volume 1, Issue 4 (October 1993)
Volume 1, Issue 3 (July 1993)
Volume 1, Issue 2 (April 1993)
Volume 1, Issue 1 (January 1993)

MPI, A STANDARD FOR MESSAGE PASSING, FINISHED AND AVAILABLE

Jack Dongarra, University of Tennessee/Oak Ridge National Laboratory

Over the past 18 months, researchers from the United States and Europe have been meeting to develop a standard for message passing. The standard, called the Message Passing Interface (MPI), provides a common interface for distributed-memory concurrent computers and networks of workstations. MPI functionality includes point-to-point and collective communication routines, as well as support for process groups, communication contexts, and application topologies. While making use of new ideas, the MPI standard is based largely on current practice, including methods from Express, PVM, NX/2, Vertex, and p4.
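As a rough illustration of the kind of routines the standard defines, the following is a minimal sketch (not taken from the standard document) of an MPI program in C. It uses point-to-point sends and receives plus one collective operation on the predefined communicator MPI_COMM_WORLD, and should behave the same way under any conforming implementation.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, i, msg, sum;
    MPI_Status status;

    MPI_Init(&argc, &argv);                  /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of processes */

    /* Point-to-point: every process except 0 sends its rank to process 0. */
    if (rank != 0) {
        MPI_Send(&rank, 1, MPI_INT, 0, 99, MPI_COMM_WORLD);
    } else {
        for (i = 1; i < size; i++) {
            MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 99,
                     MPI_COMM_WORLD, &status);
            printf("process 0 received %d from process %d\n",
                   msg, status.MPI_SOURCE);
        }
    }

    /* Collective: sum the ranks of all processes onto process 0. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of all ranks = %d\n", sum);

    MPI_Finalize();                          /* shut down MPI */
    return 0;
}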

The main advantages of establishing a message-passing interface are portability and ease of use; a message-passing standard is a key component in building a concurrent computing environment in which applications, software libraries, and tools can be transparently ported between different machines. Furthermore, a standard gives vendors a clearly defined set of routines that they can implement efficiently or, in some cases, support in hardware or low-level system software, thereby enhancing scalability.

The MPI standardization effort involved about 60 people from 40 organizations, mainly from the United States and Europe. Most of the major vendors of concurrent computers have been involved in MPI, along with researchers from universities, government laboratories, and industry. MPI is intended to be a standard message-passing interface for applications running on MIMD distributed-memory concurrent computers and workstation networks.

A number of vendors are preparing MPI implementations. IBM has a complete prototype implementation on its SP1 machines and plans to offer MPI as a product on the SPx platform. A reference implementation is available through a joint project between Argonne National Laboratory and Mississippi State University; see ftp://info.mcs.anl.gov/pub/mpi/README for additional information.

The MPI standard document can be obtained by anonymous FTP from www.netlib.org (cd mpi; get mpi-report.ps). An HTML version of the report is available at http://www.mcs.anl.gov/mpi/index.html.

