Volume 7, Issue 1
PARALLEL DISTRIBUTED COMPUTING TEAM SUPPORTS MPI
Established in the fall of 1994, the Los Alamos National Laboratory-based Parallel Distributed Computing Team (PDCT) has joined the CRPC and other research organizations throughout the United States and Europe in support of the Message Passing Interface (MPI). MPI defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran or C. The PDCT supports MPI as development software available on the ptool-server. The team plans to beta-test native implementations of MPI and make them available to users.
The CRPC took the lead in the development of MPI. CRPC's Jack Dongarra of the University of Tennessee and members of the Oak Ridge National Laboratory chaired the MPI Forum, which included 60 people from 40 organizations who helped develop the MPI standard from its conception more than two years ago to its completion in March 1994.
MPI provides a simple-to-use portable interface for the basic user, yet is powerful enough to allow programmers to use the high-performance message-passing operations available on advanced machines. MPI is also sophisticated enough to take advantage of a variety of specialized hardware and software offered by individual vendors.
In an effort to create a true standard for message passing, researchers drew on several existing systems rather than adopting any single one as the standard. Features were incorporated from systems by IBM, Intel, and nCUBE, as well as from PVM, Express, p4, and PARMACS.
The message-passing paradigm is attractive because of its wide portability and its application to communications for distributed-memory and shared-memory multiprocessors, networks of workstations, and any combination of these elements. Users writing portable message-passing programs in Fortran and C will benefit from MPI. It is particularly useful for individual application programmers, developers of software designed to run on parallel machines, and creators of environments and tools. Among the biggest potential MPI users are parallel library and application writers, for whom efficient, portable, and highly functional code is most important. MPI allows them to write applications and libraries that are truly portable.
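To illustrate the message-passing style described above, here is a minimal sketch of an MPI program in C (not taken from the article): each worker process sends its rank to process 0, which receives and prints the greetings. It uses only core MPI-1 routines (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, MPI_Finalize) and would be compiled with an MPI wrapper such as mpicc and launched with mpirun; the message contents and tags are illustrative choices.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    if (rank != 0) {
        /* every worker sends its rank to process 0 (tag 0 is arbitrary) */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int i, msg;
        MPI_Status status;
        printf("Running on %d processes\n", size);
        for (i = 1; i < size; i++) {
            /* receive one message from each worker, in rank order */
            MPI_Recv(&msg, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
            printf("Greeting from process %d\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Because the same source compiles against any conforming MPI implementation, a program written this way runs unchanged on a distributed-memory machine, a shared-memory multiprocessor, or a network of workstations.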
MPI provides many features intended to improve performance on scalable parallel computers with specialized interprocessor communication hardware. Native high-performance implementations of MPI are expected to be provided on such machines. Implementations of MPI on top of standard UNIX interprocessor communication protocols will provide portability to workstation clusters and heterogeneous networks of workstations. Several proprietary, native implementations of MPI are in progress, and these are expected to become available on CRPC machines.
Information on MPI and how to deploy the software can be obtained by contacting email@example.com or on the web. The URL is
For further information, contact MaryDell Tholburn of the Parallel Distributed Computing Team/CIC-8 at (505) 667-0619 or firstname.lastname@example.org.
Information about MPI was provided courtesy of MaryDell Tholburn, Los Alamos National Laboratory. Using MPI: Portable Parallel Programming with the Message-Passing Interface, by William Gropp, Ewing Lusk, and Anthony Skjellum (MIT Press), was used as a reference.