Fortran Parallel Programming Systems

Ken Kennedy (director), Vikram Adve, Christian Bischof, Preston Briggs, Debbie Campbell, Alan Carle, Alok Choudhary, Keith Cooper, Kevin Cureton, Geoffrey Fox, Elana Granston, Gil Hansen, Tom Haupt, Seema Hiranandani, Charles Koelbel, John Mellor-Crummey, Ravi Ponnusamy, Sanjay Ranka, Joel Saltz, Alan Sussman, Linda Torczon, and Scott Warren


The objective of the Fortran Parallel Programming Systems project is to make parallel computer systems usable for programmers in Fortran, a widely used language in the scientific community. In this effort, a special emphasis is placed on data-parallel programming and scalable parallelism. To achieve this objective, the research group is developing a coordinated programming system that includes compilers and tools for Fortran D, an extended dialect of Fortran that supports machine-independent data-parallel programming. The tools support a variety of parallel programming activities, including compilation, intelligent editing and program transformation, parallel debugging, performance estimation, performance visualization and tuning, and automatic data partitioning.

Research efforts also include validation of the compilers and tools on realistic applications, as well as investigations of new functionality to handle irregular computations, parallel I/O, and automatic differentiation using the program analysis infrastructure developed for the project.

The Fortran D Language and Compilers

Existing languages for parallel programming on scalable parallel systems are primitive and hard to use. Each language reflects the architecture of the target machine for which it is intended, making programs written for current parallel systems highly machine dependent. As a result, the programming investment in parallel machines is unprotected: a program written for one target machine may need to be completely rewritten when the next-generation machine becomes available. This situation is the principal impediment to widespread use of scalable parallel systems for science and engineering problems.

To address this problem, CRPC researchers have developed Fortran D, a set of extensions to Fortran 77 and Fortran 90 that permit the programmer to specify, in a machine-independent way, how to distribute a program's principal data structures among the processors of a parallel system. In addition, Fortran D makes programming easier than it is with explicit message passing, because programmers can write codes that use a shared name space, independent of the target architecture. Programmers find a shared name space easier to use than a distributed name space because, aside from optionally adding data distribution directives, they can ignore data placement and access issues. Using sophisticated compiler techniques, these "high-level" programs can be compiled for both SIMD and MIMD parallel architectures.
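
As a minimal sketch of the directive style (following the published Fortran D design; details may differ across compiler releases), the program below declares an abstract decomposition, aligns two arrays with it, and distributes it blockwise across the processors:

      REAL X(1024), Y(1024)
C     Declare an abstract index domain, align the arrays with it,
C     and give each processor one contiguous block of elements.
      DECOMPOSITION D(1024)
      ALIGN X WITH D
      ALIGN Y WITH D
      DISTRIBUTE D(BLOCK)
C     The loop is written against the global index space; the
C     compiler partitions the iterations and inserts whatever
C     communication the X(I-1) and X(I+1) references require.
      DO 10 I = 2, 1023
         Y(I) = 0.5 * (X(I-1) + X(I+1))
 10   CONTINUE

Because only the three directives mention data placement, retargeting the program amounts to changing the distribution rather than rewriting the computation.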

The Fortran D research effort has led to the development of prototype compilers for the Intel Paragon and Thinking Machines CM-5 for both Fortran 77D and Fortran 90D. In addition, the Fortran 90D compiler has been ported to a number of other platforms, including the nCUBE/2 and networks of workstations. Compilers for other machines, such as the SIMD MasPar MP-2, are under development. The strategy for all these compilers is based upon thorough program analysis, aggressive communication optimization, advanced code-generation techniques, and the use of sophisticated computation and communication libraries. The effectiveness of these methods is being evaluated using a suite of scientific programs developed by CRPC researchers at Syracuse University.

High Performance Fortran

Fortran D was a major impetus behind the definition of High Performance Fortran (HPF), a data-parallel language that is being widely adopted by computer vendors and users alike. The High Performance Fortran Forum (HPFF), which was originally convened by the CRPC to produce the definition of HPF, includes representatives from industry, academia, and government laboratories. Now starting its third year, HPFF continues to meet to refine the language and consider further extensions. The Fortran D compilers produced by the CRPC are being used as models for several commercial HPF compilers. Thus, the project has established an efficient technology transfer mechanism by which new features in Fortran D, once demonstrated, may be included in future mainstream languages.
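
In HPF, distribution specifications are written as structured comments, so the annotated source remains a legal sequential Fortran program. A minimal sketch of the directive syntax:

!HPF$ PROCESSORS P(4)
      REAL A(1000,1000), B(1000,1000)
!HPF$ DISTRIBUTE A(BLOCK,*) ONTO P
!HPF$ ALIGN B(I,J) WITH A(I,J)

A compiler that does not understand the directives simply ignores them and still produces a correct sequential program, which is central to the language's portability.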

Irregular Problems

The Fortran group also works closely with applications scientists and engineers working on "irregular" scientific problems, such as computational fluid dynamics, computational chemistry, computational biology, structural mechanics, and electrical power grid calculations. One key aspect of the research associated with irregular scientific problems is the development of portable runtime support libraries that 1) coordinate interprocessor data movement, 2) manage the storage of and access to copies of off-processor data, 3) support a shared name space, and 4) couple runtime data and workload partitioners to compilers. These runtime support libraries are being used to port application codes to a variety of multiprocessor architectures and are being incorporated into the Fortran D distributed-memory compilers. Other aspects of compiling irregular problems include extending the index-based data distribution paradigm of data-parallel languages such as Fortran D to value-based distributions, and placing communication for array references that use indirect subscripts.
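
To illustrate what such a library does, consider an edge loop whose access pattern is known only at run time. Under the inspector/executor approach, the runtime first analyzes the indirection array, builds a communication schedule, and fetches the off-processor values; the loop then runs on purely local data. The routine names below are hypothetical placeholders, not the actual library interface:

C     IB names the elements of X each edge reads; the pattern is
C     unknown until run time, so the compiler cannot precompute it.
C     Inspector (hypothetical calls): scan IB, build a schedule of
C     the off-processor elements this node needs, fetch them into
C     a ghost region at the end of the local section of X, and
C     translate IB into local addresses LOCIB.
      CALL BUILDSCHED(IB, NEDGES, SCHED, LOCIB)
      CALL GATHER(X, SCHED)
C     Executor: the original loop, now entirely local.
      DO 20 I = 1, NEDGES
         Y(IA(I)) = Y(IA(I)) + X(LOCIB(I))
 20   CONTINUE

As long as IB is unchanged from one time step to the next, the schedule can be saved and reused, amortizing the inspector's cost over many executions.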

Parallel I/O

The CRPC is also an active collaborator in the Scalable I/O Consortium, a project researching methods for parallel input and output. The goals of this project are to develop runtime support for managing and accessing data in secondary storage, to develop compiler support for out-of-core problems, and to identify language extensions needed to enhance parallel I/O support in Fortran D and High Performance Fortran. In particular, CRPC researchers at Syracuse University have studied methods for reading and writing distributed data to disk. These techniques extend the concepts of collective communication to I/O operations. Experiments at Rice University have extended Fortran D to out-of-core data structures and applied these techniques to an LU factorization code. The preliminary results were encouraging, and the researchers are now studying compiler implementations of these techniques.
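
The collective-I/O idea can be sketched as follows (the call shown is a hypothetical placeholder, not the consortium's interface): every processor participates in one coordinated request, and the runtime maps the array's distribution onto file offsets so that the disks see a few large transfers rather than many small ones.

C     Each processor holds one BLOCK of the distributed array A.
      REAL ALOCAL(NLOCAL)
C     Hypothetical collective read: all processors call together;
C     the runtime translates the distribution into large contiguous
C     disk requests and delivers to each node only its own block.
      CALL COLLREAD(FD, ALOCAL, NLOCAL, MYID, NPROCS)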

Fortran D Tools

Data-parallel languages such as Fortran D and HPF will allow a programmer to express a parallel algorithm in an abstract, machine-independent manner, in particular without the need for explicitly managing the communication, synchronization, and partitioning of work in the program. Concomitantly, however, both debugging a program for correctness and tuning it for good performance become much more difficult because of the great dissimilarity between the code written by the programmer and the explicitly parallel code that actually executes on the underlying parallel system. Thus, it becomes essential to provide sophisticated support in the programming environment for simplifying the task of understanding, debugging, and tuning a program under development.

A project under way within the CRPC parallel Fortran group aims to explore the research issues that arise in providing such programming support and to develop the requisite suite of programming tools. The tools emerging from this research, which is supported by funding from ARPA, are collectively called the D System. This system is primarily aimed at supporting the development of programs written in Fortran D, but the techniques developed as part of this research will extend to HPF and similar data-parallel programming environments as well.

At the heart of the D System are the Fortran D compiler described above, which includes support for sophisticated interprocedural analysis and communication optimizations, and an intelligent editor that provides programmers with information about communication between processors, helping them make decisions about how parallelism can best be exploited. The other key components of the D System are tools for performance visualization and debugging of Fortran D programs. These tools will collectively help the programmer understand the impact of Fortran D source-level constructs on parallel performance, select efficient data distributions, and tune overall program performance; they will also support debugging of code executing on a parallel machine in terms of the original Fortran D source. A preliminary version of the D System will be available through Softlib this summer.

As the D System technology matures, the group will explore the issues involved in retargeting the Fortran D compiler and programming tools to distributed as well as tightly coupled shared-memory systems.

Related Projects

Members of the Fortran group are involved in several additional collaborations that are capitalizing on the software infrastructure underlying the D System. For instance, researchers at Rice University and Argonne National Laboratory are continuing to enhance ADIFOR, an automatic differentiation tool for Fortran built upon that infrastructure. Working with members of the CRPC Parallel Optimization and Automatic Differentiation group, they are applying ADIFOR to sensitivity analysis of large simulation codes for use in multidisciplinary design optimization.
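
In simplified form (this is illustrative chain-rule code, not ADIFOR's actual generated output), automatic differentiation augments each assignment in a program with a companion statement that propagates derivatives:

C     Original statement:
C        Y = X1*X2 + SIN(X1)
C     Augmented code: GX1, GX2, and GY carry derivatives with
C     respect to the chosen independent variables, so each
C     assignment gains a chain-rule companion.
      GY = X2*GX1 + X1*GX2 + COS(X1)*GX1
      Y  = X1*X2 + SIN(X1)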

The Massively Scalar Compiler Project at Rice is an ARPA-sponsored project investigating compilation techniques for modern microprocessors. The project's goal is to improve the level of routinely attainable performance on such processors. The research focuses on several issues that arise in uniprocessor systems: compiler management of the memory hierarchy, code generation for microprocessors with instruction-level parallelism, and classical code optimization. While the project does not directly address multiprocessor parallelism, its results should find application in compilers for massively parallel machines, because such machines are almost always built from microprocessors.
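
Loop tiling is one example of the memory-hierarchy transformations under study. Blocking a matrix multiply keeps a small tile of B resident in cache across many iterations (a generic sketch of the transformation, not output of the project's compiler):

C     NB is the tile size, chosen so a tile of B fits in cache.
      DO JJ = 1, N, NB
        DO KK = 1, N, NB
          DO J = JJ, MIN(JJ+NB-1, N)
            DO K = KK, MIN(KK+NB-1, N)
C             B(K,J) is reused across the entire inner I loop.
              DO I = 1, N
                C(I,J) = C(I,J) + A(I,K)*B(K,J)
              END DO
            END DO
          END DO
        END DO
      END DO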

The Fortran group is also collaborating with the CRPC Parallel Paradigm Integration project to investigate ways of integrating Fortran D-style data decomposition directives into Fortran M, a modular version of Fortran. Syracuse University is setting up the Parallel Compiler Runtime Consortium to design and implement common runtime support for both data and task parallelism in several languages.

