Volume 7, Issue 1
Center for Research on Parallel Computation: An Introduction
The Center for Research on Parallel Computation (CRPC), a consortium of Argonne National Laboratory, the California Institute of Technology (Caltech), Los Alamos National Laboratory, Syracuse University, Rice University, and the University of Tennessee, is committed to a program of research that will make parallel computer systems truly usable - as usable as sequential computer systems are today. Basic research at CRPC sites has led to the development of new software tools, new parallel algorithms, and prototype implementations of scientific application programs. In the coming years, these basic research programs will continue and expand, concentrating on two important themes: massive parallelism and architecture independence.
CRPC research is both interdisciplinary and interinstitutional. Projects are shared among the six CRPC sites, and collaborations with industry, academic institutions, and government laboratories are commonplace. The strength of the research lies in the scientists' combined efforts and shared resources. Most of the work of CRPC researchers falls within five principal research thrusts:
Fortran Parallel Programming System
The first objective of the Fortran Parallel Programming System project is to establish that a variant of Fortran presenting a shared-memory programming interface can serve as a vehicle for machine-independent parallel programming. It must be possible to compile this variant of Fortran to efficient code for distributed-memory machines of various types. The second objective is to develop an integrated parallel programming environment that supports the development and debugging of high-performance programs on shared-memory machines.
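Compiling a shared-memory Fortran variant for distributed memory rests on data distribution: each array element is assigned an owning processor, and the compiler applies an "owner computes" rule to partition loop iterations and generate communication for non-local references. The following Python sketch (purely illustrative; the function names are ours, not the project's) shows the arithmetic behind an HPF-style BLOCK distribution:

```python
# Illustrative sketch of a BLOCK distribution: each of p processors owns
# one contiguous block of an n-element array, and the "owner computes"
# rule decides which processor executes each loop iteration.
# Hypothetical helper names, for illustration only.

def block_owner(i, n, p):
    """Processor that owns element i of an n-element array
    distributed BLOCK-wise over p processors."""
    block = -(-n // p)          # ceiling division: elements per block
    return i // block

def local_indices(rank, n, p):
    """Indices of the elements that processor `rank` owns."""
    block = -(-n // p)
    return range(rank * block, min((rank + 1) * block, n))

# Example: 10 elements over 4 processors -> blocks of size 3, 3, 3, 1.
print([block_owner(i, 10, 4) for i in range(10)])
# [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
print(list(local_indices(1, 10, 4)))   # [3, 4, 5]
```

Under this rule, a loop such as `A(i) = B(i) + 1` is split so each processor iterates only over its own `local_indices`, and the compiler inserts messages for any reference whose owner differs from the computing processor.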
Compositional Programming System
The goal of the Compositional Programming System project is to enable systematic modular development of correct parallel programs. Research focuses on methods, tools, and theories for (a) specifying, designing, and verifying efficient program modules, (b) defining interfaces between processes that guarantee the absence of race conditions, and (c) implementing notations for parallel composition. In the first stage of the project, a high-level language, PCN (Program Composition Notation), was implemented that can combine Fortran and C program modules into parallel programs; extensions to Fortran (Fortran M) and C++ (CC++) have also been developed for parallel applications. A key theme of the new projects is to provide an evolutionary (as opposed to revolutionary) path for theories, software engineering, and tools from sequential to parallel programming.
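The race-freedom guarantee in (b) typically comes from restricting inter-process interaction to message channels or single-assignment variables rather than shared mutable state. A minimal sketch of that channel style, using Python threads and a queue (our illustration of the general idea, not PCN's actual notation):

```python
# Two "processes" (threads here) composed so they interact only through
# a channel (a queue), never through shared mutable state -- the kind of
# interface that rules out data races by construction.
import threading
import queue

def producer(out_ch):
    for x in range(5):
        out_ch.put(x * x)       # communicate only by sending values
    out_ch.put(None)            # end-of-stream marker

def consumer(in_ch, results):
    while (x := in_ch.get()) is not None:
        results.append(x)

ch = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(ch,))
t2 = threading.Thread(target=consumer, args=(ch, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                  # [0, 1, 4, 9, 16]
```

Because neither module touches the other's variables, the two can be composed, reordered, or replaced independently, which is the point of compositional development.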
Linear Algebra
To allow the wide use of linear algebra in parallel computing, project researchers address several problems: the development of LAPACK for distributed-memory machines, dense nonsymmetric eigenvalue problems, parallel algorithms for large-scale eigenvalue problems, sparse linear least squares, multigrid algorithms, sparse linear systems, and linear algebra for signal processing. CRPC researchers are working on a version of LAPACK for distributed-memory machines and have also begun to develop a set of core routines for linear algebra in Fortran 90. In another area, this group is developing a prototype scalable library for solving the major problems of numerical linear algebra.
Optimization
This group is developing algorithms for parallel machines to solve larger or harder optimization problems than can be solved today on existing machines, sequential or parallel. Initial applications are in airline crew scheduling, the traveling salesman problem, and the optimal placement of oil wells. The group is also cooperating with the ParaScope project on the development of ADIFOR, an automatic differentiator for Fortran programs. By identifying and exploiting structure currently ignored by designers of sequential optimization algorithms, the group has found significant parallelism in optimization problems and has applied it to both linear programs and differential equations.
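Automatic differentiation, the technique behind ADIFOR, computes exact derivatives of a program rather than approximating them by finite differences. ADIFOR itself works by source transformation of Fortran; the Python sketch below only illustrates the underlying forward-mode mathematics, using dual numbers:

```python
# Forward-mode automatic differentiation with dual numbers: propagate a
# (value, derivative) pair through each arithmetic operation. This is
# the mathematical idea behind tools like ADIFOR, not ADIFOR itself.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot        # value and derivative
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """df/dx at x, exact up to rounding (no differencing step size)."""
    return f(Dual(x, 1.0)).dot

# f(x) = 3x^2 + 2x + 1  ->  f'(x) = 6x + 2, so f'(4) = 26.
print(derivative(lambda x: 3 * x * x + 2 * x + 1, 4.0))   # 26.0
```

Each derivative costs only a small constant factor over the original function evaluation, which is what makes the approach attractive for large optimization codes.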
Differential Equations
To make simulation algorithms scalable, the Differential Equations group conducts research in four areas: multilevel methods, domain decomposition, homotopy and continuation methods, and computational fluid dynamics. These algorithms will have applications to problems in combustion, enhanced oil recovery, ocean and atmospheric circulation, and plasma physics. The group emphasizes the solution of three-dimensional problems and the effects of multi-scale and subgrid-scale phenomena in the areas of linear and nonlinear equations, domain decomposition techniques, continuation methods, and discretization methods, particularly those tailored for computational fluid dynamics.
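Domain decomposition makes such solvers parallel by splitting the grid into subdomains that work independently and exchange only interface ("ghost") values each iteration. A toy one-dimensional illustration (ours, not one of the group's actual codes), using Jacobi sweeps on the Laplace equation u'' = 0:

```python
# Solve u'' = 0 on a 1-D grid with u[0] = 0, u[-1] = 1 by Jacobi sweeps,
# with the grid split into two subdomains. Each subdomain updates its
# points from the previous iterate, so the only data a subdomain needs
# from its neighbor is the interface value -- the ghost-exchange pattern
# of parallel domain decomposition. Toy illustration only.

def jacobi_dd(n=9, iters=2000):
    u = [0.0] * n
    u[-1] = 1.0
    mid = n // 2
    subdomains = [range(1, mid), range(mid, n - 1)]
    for _ in range(iters):
        ghost = u[:]                      # previous iterate (incl. ghosts)
        for dom in subdomains:            # independent subdomain sweeps
            for i in dom:
                u[i] = 0.5 * (ghost[i - 1] + ghost[i + 1])
    return u

# The exact solution is the linear ramp u(x) = x, i.e. u[i] = i/(n-1).
u = jacobi_dd()
print([round(x, 3) for x in u])
```

In a real three-dimensional code each subdomain lives on its own processor and the `ghost = u[:]` line becomes a boundary message exchange; the research questions are how to make the iteration count insensitive to the number of subdomains (via coarse-grid or multilevel corrections) and how to handle the multi-scale phenomena the paragraph above mentions.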
Applications of CRPC Software and Algorithms
Application scientists interact with software and algorithm researchers both within the CRPC and in the external academic and industrial community. The application projects serve several functions: as a testbed for the development of core parallel algorithms, as a testbed for the software and other basic computer science research, and as a demonstration of the relevance of parallel computing in particular application areas.
Successful technologies from the CRPC are transferred to universities, government laboratories, and industry through licensing arrangements, visitor programs, workshops, and work with researchers and graduate students. More than 240 CRPC technical reports are currently available, and CRPC-developed public domain software can be obtained directly from Netlib and Softlib, electronic software distribution systems at Oak Ridge National Laboratory and Rice University. CRPC workshops have been held in software development and applications, Fortran, linear algebra, optimization, and a number of other areas. In addition, all sites have active visitor programs, with an emphasis on applications scientists seeking to use CRPC parallel computation techniques and tools. Presentations are also given at national and international conferences; CRPC researchers often serve as organizers of these meetings. For instance, the High Performance Fortran Forum (HPFF), a series of meetings between industry and academic researchers, was led by Ken Kennedy.
Education and Outreach
An aggressive approach to providing research experiences and opportunities for K-12 students and teachers, university students, and faculty contributes to the overall scope and impact of the CRPC. Through the work of the CRPC, new courses and degree programs have recently begun at Caltech, Rice University, and Syracuse University. (See the article on academic courses and degrees.) CRPC researchers have developed courses within several educational frameworks, including academic courses on applications, compilers, and software use, as well as short courses presented at a number of conferences. Many of these educational experiences have been successful in increasing minority participation in science, engineering, and mathematics. CRPC programs such as the "Summer Program in Parallel Computing for Minority Undergraduates" at Caltech and the "Spend a Summer with a Scientist" program at Rice University have achieved national recognition and have encouraged many underrepresented minorities to pursue careers in the computational sciences.