From the Director


HIGH PERFORMANCE FORTRAN: PROBLEMS AND PROGRESS

Ken Kennedy, Director, CRPC

Two years ago, I wrote an editorial in this column entitled "High Performance Fortran: What Can Go Wrong?" In it I outlined three potential obstacles to the success of HPF: slow progress toward the sophisticated compiler technology needed to compile the language; the unavailability of debugging and performance-tuning tools; and the need for additional features to provide higher-level support for important computations, particularly irregular problems.

From today's vantage point, it is clear that those concerns were amply justified. Commercial HPF compilers have been slow to mature, and research compilers have not been sufficiently robust; this has discouraged many users. Current commercial tools tend to concentrate on message passing and are not well suited to supporting programming in HPF. Mapping irregular problems to HPF and performing efficient I/O remain difficult at best, due both to immature compilers and to users' lack of experience. As a result, the language has not yet achieved widespread acceptance in the high-performance computing community.

However, there is good news as well. Compilers now exist for every high-performance computing system on the market, largely because of the existence of fairly portable commercial implementations that use message passing as an underlying implementation. In addition, the most recent round of compiler releases shows great promise, in some cases outperforming hand-coded message-passing programs for the same problem. When one considers the difference in development time between the HPF and message-passing versions, it is quite possible that the total turnaround time for the HPF version was actually less. The current Portland Group, Inc. (PGI) HPF compiler reportedly produces code for the simulated CFD applications in the NAS Parallel Benchmarks that is within a factor of two of the highly optimized MPI message-passing versions on the IBM SP2 and CRAY T3D.

Although there are still very few commercial tools available for HPF, significant progress has been made in this area. Through a joint effort between the CRPC at Rice University and the CRPC-affiliated site at the University of Illinois at Urbana-Champaign, led by Dan Reed, the Pablo performance analysis and tuning system has been adapted to work with HPF compilers. Initially the target compiler was the experimental HPF compiler developed at Rice, but more recently the system has been ported to the PGI HPF compiler as well. The result is a system that can annotate the source of an HPF program with information on the generated communication and its cost without forcing the user to look at the actual message-passing program that is generated.

Finally, the problem of limited coverage is being addressed in the current round of HPF standardization, which is scheduled to complete a new standard document by Supercomputing '96. Included in that document will be a set of "approved extensions" that are designed to provide the new functionality needed to support irregular applications. Examples of such extensions include data distribution via a mapping array, asynchronous I/O, and computation partitioning via the ON-clause. Vendors will not implement all of these features immediately because they are currently concentrating on reliable performance for features in the core language. However, the approved extensions pave the way for development of HPF toward much broader applicability.
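
To make these extensions concrete, the following is a minimal Fortran sketch of two of them: irregular data distribution via a mapping array and computation partitioning via the ON directive. The syntax follows the draft extension proposals under discussion at the time and is illustrative only; the array names, sizes, and processor arrangement are invented for the example, and what vendors eventually implement may differ.

      REAL x(1000), y(1000)
      INTEGER map(1000)
!HPF$ PROCESSORS procs(4)
!HPF$ ALIGN y(i) WITH x(i)

!     Irregular distribution via a mapping array: element i of x (and of
!     y, by alignment) is placed on the processor named by map(i).
!HPF$ DISTRIBUTE x(INDIRECT(map)) ONTO procs

!     Computation partitioning via the ON directive: each iteration is
!     executed by the processor that owns x(i).
!HPF$ INDEPENDENT
      DO i = 1, 1000
!HPF$    ON HOME(x(i))
         y(i) = 2.0 * x(i)
      END DO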

The result of these advances has been a resurgence of interest in the language. PGI now has HPF installations at more than 100 sites, including site licenses at three of the six major NASA research centers and two of the four NSF Supercomputer Centers. PGI will soon install site licenses at all four DoD Major Shared Resource Centers. However, there are still significant concerns about the future of the language, in part because of the unequivocal success of MPI, the Message Passing Interface. MPI is a portable message-passing standard that resulted from an open standardization effort similar to the one that produced HPF. The CRPC hosted the first meeting of the MPI Forum and has remained a strong supporter of the effort.

Because reliable and efficient implementations of MPI are available now, many grand challenge groups are using it instead of HPF. This is appropriate given the immediate need for high performance on those projects. Also, the grand challenge teams are highly experienced in parallel computation and are willing to expend considerable energy in porting a code to the highest performing machine available. Such careful coding is, in fact, necessary to work on the "bleeding edge" of computational science.

Other users, however, may find HPF more to their liking than MPI. Because HPF relieves the user of responsibility for handling communication and synchronization, it should be more accessible to novice users. Of course, new HPF users must master the intricacies of data distribution, but that is a task MPI requires as well. In short, as HPF implementations improve, they will provide an easy entry into parallelism for many working scientists and engineers.
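
For contrast, here is a minimal sketch of what the core language asks of such a user: one or two data-layout directives, after which the compiler, not the programmer, generates all communication and synchronization. The array names and sizes are invented for illustration.

      REAL a(1000), b(1000)
!HPF$ DISTRIBUTE a(BLOCK)
!HPF$ ALIGN b(i) WITH a(i)

!     A data-parallel stencil: any exchange of boundary elements between
!     the block-distributed pieces of a is inserted by the compiler.
      FORALL (i = 2:999) b(i) = 0.5 * (a(i-1) + a(i+1))

The corresponding message-passing program would have to declare local array sections, compute their extents, and exchange halo elements with explicit sends and receives.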

Given these recent developments, it seems to be the right time to take stock of HPF's suitability for programming real applications. To that end, the CRPC is co-sponsoring, with Los Alamos National Laboratory and the HPF Forum, a workshop for current and potential HPF users, scheduled to take place in Santa Fe, New Mexico, in February 1997. A major goal of this meeting is for users to give the HPF vendors feedback about what is needed to make the language a success. This feedback will support the vendors' efforts and help prioritize the tasks ahead, including implementation of the approved extensions. That prioritization will be of enormous benefit to the vendors and, as a result, to the community at large. Those who have heard of HPF but are not yet certain that it is a good vehicle for their applications should definitely consider attending. Only through active participation by the user community can we bring the HPF effort to a successful conclusion.
