Volume 7, Issue 1
Research Focus: Parallel Paradigm Integration
Participants: Mani Chandy (director), Keith Cooper, Ian Foster, Herb Keller, Ken Kennedy, Carl Kesselman, Tal Lancaster, Rajit Manohar, Berna Massingill, Dan Meiron, Bob Olson, Sharif Rahman, Adam Rifkin, Paolo Sivilotti, Mei Su, John Thornley, Linda Torczon, Steve Tuecke, and Eric Van de Velde
The goal of this project is to enable systematic modular development of correct parallel programs. The thesis is that program modules can be specified and developed independently and then put together in systematic ways to obtain parallel programs. A programmer who uses a module need be concerned only about the specification of the module and not its implementation.
One objective is to develop notation and supporting tools that are completely portable across asynchronous parallel computers, and to implement a subset of the notation on data-parallel machines. Research focuses on methods, tools, and theories for (a) specifying, designing, and verifying efficient processes, (b) interfaces between processes that guarantee the absence of race conditions, (c) notations for parallel composition of processes, and (d) integration of different parallel programming paradigms within a single unified framework. The group has been developing notations for parallel programs in two ways: by designing new languages and by extending sequential languages. They have also developed methods and theory for parallel program verification and for stepwise refinement of programs to improve efficiency.
A central theme of the project is to bring "parallel programming to the masses" (see the "Parallel Profile" on Mani Chandy) by providing evolutionary paths from conventional sequential programming, using familiar languages, methods, tools, and computational platforms. The group believes that there is no abyss separating conventional programming (say, using the C language on a PC) from new ideas in computer science in areas such as parallel functional programming, logic programming, verification of concurrent programs, and performance tuning. The new ideas can be applied right now by most people - including high-school students and small-business people - provided the evolutionary path is clear.
Another theme is that of "paradigm integration." Today, many computer scientists view data parallelism and task parallelism as separate research areas, largely because data parallelism grew out of vector computing for scientific applications, while task parallelism grew out of reactive systems such as command and control. The group feels that the distinction between data and task parallelism, and between scientific and reactive applications, will become increasingly blurred. Parallel scientific computing components will be used within larger command-and-control applications. Many scientific and engineering applications, such as multidisciplinary optimization, require task-parallel integration of data-parallel components. Furthermore, programming paradigms such as functional programming can be integrated within a unified framework, allowing programmers to mix and match paradigms.
The group works very closely with application groups to help ensure that the research produced by the group is truly useful to those developing applications today. Indeed, applications development has been the driving force for most of the language constructs and the research on determinism and object libraries.
With Steve Taylor, the group first developed a theory and notation of parallel composition to obtain feedback from application programmers about the modular approach. A simple language, PCN (Program Composition Notation), was implemented to put Fortran and C program modules together to obtain parallel programs. PCN has been used for a variety of applications on a variety of machines. It is available by anonymous FTP from Argonne National Laboratory, and over 350 copies have been distributed. User feedback suggested that the theory and central ideas were sound, and that the approach would help many more programmers use parallelism if it were packaged as small extensions to widely used sequential languages such as C and Fortran. So the group shifted its language efforts to two projects, Fortran M and CC++, languages that extend Fortran and C++ and support a modular, object-oriented style of parallel programming.
Prototypes of Fortran M and CC++ are being used with collaborators both in and outside the CRPC for developing parallel applications. Specific projects include the development of software archetypes for spectral methods, linear algebra, and mesh computations, with particular application to a smog model for the Los Angeles basin. This work is in collaboration with CRPC groups on numerical simulation and linear algebra, and with the environmental engineering department at Caltech. In addition, CC++ is being used to develop a parallel application to predict the three-dimensional structure of proteins. This work is being done with the Center for Computational Biology at Caltech. Another major effort uses Fortran M for global and mesoscale climate models. The group plans to work closely with the Fortran D efforts at Rice and Syracuse.
The group is truly a multi-institutional effort. The Fortran M development is directed from Argonne, the CC++ development is directed from Caltech, and the work on interoperability is carried out together by both groups. Four students from Caltech will spend part of their summers working on group-related projects at Argonne. Interactions with other CRPC groups play a central role in the research of the group.
Editor's note: The CRPC has five major research thrusts. Each issue of Parallel Computing Research will highlight one of these thrusts through an in-depth "Research Focus" article.