Research Focus


HPF 2.0 AND MPI-2 DRAFTS TO BE RELEASED AT SC '96

Two of the most important informal standards for parallel computing are the High Performance Fortran (HPF) and Message Passing Interface (MPI) specifications. Both specifications began at meetings convened by CRPC researchers and have since been developed by consortia of academic, government, vendor, and industrial users. The original specifications for HPF and MPI were presented at Supercomputing '92 and '93, respectively. Now, the HPF Forum, chaired by CRPC Director Ken Kennedy, and the MPI Forum, chaired by Rusty Lusk of Argonne National Laboratory, will present new versions of their specifications at Supercomputing '96 Birds-of-a-Feather (BOF) sessions.

The MPI BOF will take place first, on Wednesday, November 20, from 3:30 to 5:00 pm. MPI version 1.1 includes many standard facilities used by message-passing programs, such as point-to-point communication and collective operations. MPI-1 also introduced the concept of "communicators," which provides greatly enhanced modularity for message passing. The MPI-2 Forum devoted itself to extensions of MPI-1; all MPI-1 operations remain exactly the same. The Forum divided its work on the extensions into eight subcommittees by technical area:

  • Dynamic Process Management
  • One-Sided Communication
  • Extended Collective Operations
  • External Interfaces
  • Parallel I/O
  • Real-time Extensions
  • Language Bindings (C++ and Fortran 90)
  • Miscellaneous Topics
Certain features, such as the real-time extensions, are reserved for the MPI-2 Journal of Development and will not be supported by all MPI-2 implementations. Most, however, will officially be part of MPI-2 and supported everywhere. Rusty Lusk described the impact of the new extensions: "Users have been asking for the process management, one-sided communication, and I/O features for a long time. We're happy to provide them in a standardized way, and we're sure they will lead to much improved parallel programs."
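
To make the MPI-1 facilities mentioned above concrete, here is a minimal Fortran sketch (the program and variable names are invented for illustration, not taken from either specification) that exercises point-to-point communication, a collective operation, and a derived communicator. It uses only routines defined in MPI 1.1.

      PROGRAM example
      INCLUDE 'mpif.h'
      INTEGER ierr, rank, nprocs, token, subcomm
      INTEGER status(MPI_STATUS_SIZE)

      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      token = 0

*     Point-to-point communication: process 0 sends one integer
*     to process 1 with message tag 99.
      IF (rank .EQ. 0) THEN
         token = 42
         IF (nprocs .GT. 1) THEN
            CALL MPI_SEND(token, 1, MPI_INTEGER, 1, 99,
     &                    MPI_COMM_WORLD, ierr)
         END IF
      ELSE IF (rank .EQ. 1) THEN
         CALL MPI_RECV(token, 1, MPI_INTEGER, 0, 99,
     &                 MPI_COMM_WORLD, status, ierr)
      END IF

*     Collective operation: broadcast the value held by process 0
*     to every process in the communicator.
      CALL MPI_BCAST(token, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

*     Communicators: split the processes into two groups, each with
*     its own isolated message space, as a parallel library might do.
      CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, MOD(rank, 2), rank,
     &                    subcomm, ierr)
      CALL MPI_COMM_FREE(subcomm, ierr)

      CALL MPI_FINALIZE(ierr)
      END

The MPI-2 extensions described above add new routines on top of this model; none of the calls shown here change.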

The HPF BOF will take place on Wednesday, November 20, at 7:00 pm. HPF took a very different route to parallelism than MPI, emphasizing data-parallel computations. This focus led the HPF Forum to extend Fortran 90 (then the latest Fortran standard) with the ALIGN and DISTRIBUTE directives for dividing data among processors, and with the FORALL, INDEPENDENT, and HPF library features for specifying data-parallel computations (illustrated in the sketch below). In defining HPF 2.0, the Forum considered a number of capabilities:

  • New data distribution patterns, such as INDIRECT mappings and SHADOW regions
  • New parallel control mechanisms, such as the ON directive and TASK_REGION directive
  • Asynchronous I/O operations
  • New EXTRINSIC types, such as F77_LOCAL and HPF_CRAFT
Asynchronous I/O was included in the HPF 2.0 "base language," which all implementations are expected to support within approximately one year. Many of the other features appear in the HPF Approved Extensions, a separate section of the language specification with more advanced features. These features will be slower to appear, although some vendors have said that they will give them priority in response to requests from users. Kennedy summed up the HPF process by saying, "This was a long process, even harder than the original HPF effort because we were dealing with deeper issues. But I think that we have a very good document at the end of it, and HPF 2.0 will be a huge benefit for programmers who need data parallelism."
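
The core HPF directives named earlier can be sketched in a few lines. The fragment below is illustrative only (the array names, sizes, and computation are invented) and sticks to the established ALIGN, DISTRIBUTE, FORALL, and INDEPENDENT features rather than the HPF 2.0 additions still under discussion.

      PROGRAM smooth
      INTEGER N
      PARAMETER (N = 1000)
      REAL A(N), B(N)
      INTEGER I
*     Divide A into contiguous blocks across the processors and
*     keep B aligned element-for-element with A.
!HPF$ DISTRIBUTE A(BLOCK)
!HPF$ ALIGN B(:) WITH A(:)

*     FORALL expresses an elementwise, data-parallel assignment.
      FORALL (I = 1:N) A(I) = REAL(I)

*     INDEPENDENT asserts that the loop iterations do not interfere,
*     so the compiler may execute them in parallel.
!HPF$ INDEPENDENT
      DO I = 2, N - 1
         B(I) = (A(I-1) + A(I) + A(I+1)) / 3.0
      END DO
      END

Because the directives are structured comments, the same source compiles unchanged under an ordinary Fortran 90 compiler, a deliberate design choice of the HPF Forum.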

Both the MPI-2 and HPF 2.0 documents are now in their public comment phase, and both Forums are eagerly awaiting feedback from their BOF sessions. Readers can obtain the latest drafts from ftp://titan.cs.rice.edu/public/HPFF/hpf2/hpf-report.ps (HPF) and http://www.cs.wisc.edu/~lederman/mpi2/mpi2-report.ps.Z (MPI).

Details on where to send comments are included in the documents themselves; readers can also attend the BOF sessions to give feedback directly.

