Volume 7, Issue 1
Spring/Summer 1999

Volume 6, Issue 3
Fall 1998

Volume 6, Issue 2
Spring/Summer 1998

Volume 6, Issue 1
Winter 1998

Volume 5, Issue 4
Fall 1997

Volume 5, Issue 3
Summer 1997

Volume 5, Issue 2
Spring 1997

Volume 5, Issue 1
Winter 1997

Volume 4, Issue 4
Fall 1996

Volume 4, Issue 3
Summer 1996

Volume 4, Issue 2
Spring 1996

Volume 4, Issue 1
Winter 1996

Volume 3, Issue 4
Fall 1995

Volume 3, Issue 3
Summer 1995

Volume 3, Issue 2
Spring 1995

Volume 3, Issue 1
January 1995

Volume 2, Issue 4
October 1994

Volume 2, Issue 3
July 1994

Volume 2, Issue 2
April 1994

Volume 2, Issue 1
January 1994

Volume 1, Issue 4
October 1993

Volume 1, Issue 3
July 1993

Volume 1, Issue 2
April 1993

Volume 1, Issue 1
January 1993

Research Focus: Linear Algebra

Participants: Jack Dongarra (director), University of Tennessee/Oak Ridge National Laboratory; Herb Keller, California Institute of Technology; Danny Sorensen, Rice University; and Eric van de Velde, California Institute of Technology


Several problems in linear algebra must be addressed before parallel computing can be widely used. These include the development of LAPACK for distributed-memory machines, dense nonsymmetric eigenvalue problems, parallel algorithms for large-scale eigenvalue problems, sparse linear least squares, multigrid algorithms, sparse linear systems, and linear algebra for signal processing. One important area for software development is the design and implementation of a mathematical software library that is scalable on parallel computers. The linear algebra group is developing a prototype scalable library for solving the major problems of numerical linear algebra, concentrating first on dense matrix algorithms and sparse structures.

LAPACK for Distributed-memory Machines

The module approach of LAPACK for shared-memory MIMD architectures will soon be adapted for distributed-memory MIMD machines. It would be beneficial to have the same software superstructure for the solution of linear equations on both classes of parallel machines. The linear algebra group plans to use algorithms based upon blocking techniques that exploit surface-to-volume effects and allow effective reuse of data in local memory. This approach can lead to optimal algorithms on supercomputers with modest amounts of parallelism, as well as on distributed-memory machines, such as the hypercube, where the levels of parallelism are much greater. Portability will be achieved through a collection of routines that run across a wide range of architectures without sacrificing performance.
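The payoff from blocking can be seen in a small sketch (hypothetical, written in NumPy purely for illustration; the library itself is not implemented this way). Each tile update reads O(bs&#178;) data but performs O(bs&#179;) arithmetic, which is the surface-to-volume effect the text refers to:

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Blocked matrix multiply C = A @ B, computed one bs-by-bs tile
    at a time so that each tile is reused many times while it is
    resident in fast (local) memory."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):
                # Each small update touches O(bs^2) data but performs
                # O(bs^3) arithmetic: the surface-to-volume effect.
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C
```

The same tiling idea underlies blocked LU and QR factorizations, where the tiles also become the natural unit of distribution across processors.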

Two recent workshops focused on the development of a set of linear algebra communication routines for the specification and manipulation of distributed data structures in algorithms. The result has been a proposed standard for the implementation of a scalable communication library, which is the first step in seeking a community consensus on such standards.

Dense Nonsymmetric Eigenvalue Problems

There is a natural extension of the divide-and-conquer paradigm to the dense nonsymmetric eigenvalue problem. CRPC researchers are investigating this approach to find a parallel alternative to the QR algorithm that exposes significant parallelism. In particular, the original problem is divided into two smaller, independent subproblems by a rank-one modification of the matrix. Once the eigensystems of the smaller subproblems are known, those of the original matrix can be computed. In the nonsymmetric case, however, the eigenvalues of the modified matrix do not interlace with those of the original matrix; indeed, they can scatter anywhere in the complex plane.
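The rank-one splitting is easiest to see in the classical symmetric tridiagonal setting, where subtracting a single rank-one term decouples the matrix into two independent blocks. The sketch below (illustrative only; the group's work targets the harder dense nonsymmetric case) demonstrates the tearing step:

```python
import numpy as np

n, k = 8, 4                      # split an n-by-n tridiagonal at row k
rng = np.random.default_rng(1)
d = rng.standard_normal(n)       # diagonal entries
e = rng.standard_normal(n - 1)   # off-diagonal entries
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Rank-one tearing: subtracting b * z z^T with z = e_{k-1} + e_k
# removes the coupling entry, leaving two independent subproblems.
b = e[k - 1]
z = np.zeros(n)
z[k - 1] = z[k] = 1.0
T_split = T - b * np.outer(z, z)

# The modified matrix is block diagonal: a k-by-k block and an
# (n-k)-by-(n-k) block whose eigensystems can be found in parallel.
assert np.allclose(T_split[:k, k:], 0.0)
assert np.allclose(T_split[k:, :k], 0.0)
```

In the symmetric case the eigenvalues of `T_split` interlace those of `T`, which anchors the conquer phase; as noted above, that property is exactly what is lost in the nonsymmetric case.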

There has been considerable progress on homotopy techniques, which can take the place of the interlacing property available in the symmetric case: they parallelize the root-finding process while avoiding the problem of two independent sets of iterates converging to the same root. Linking divide-and-conquer with homotopy techniques is attractive because the eigensystem of the divided matrix provides a natural starting point for the conquer phase.

In the group's algorithm for the nonsymmetric case, the eigensystem of the subproblem is used only to construct initial guesses for an iterative process that yields the desired eigensystem of the original problem. Under suitable conditions, Newton's method or a continuation method can be used to find the eigenpairs of the original problem. The algorithm currently being tested is based on an iterative refinement procedure that is, in effect, Newton's method.
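One standard way to cast eigenpair refinement as Newton's method is to solve F(x, &#955;) = [Ax &#8722; &#955;x; (x&#7511;x &#8722; 1)/2] = 0 for the augmented unknown (x, &#955;). The sketch below (a generic textbook formulation, not the group's production code; the function name is hypothetical) shows how an approximate eigenpair from a subproblem could be refined:

```python
import numpy as np

def newton_eigenpair(A, x0, lam0, tol=1e-12, maxit=50):
    """Newton's method on F(x, lam) = [A x - lam x; (x^T x - 1)/2],
    refining an approximate eigenpair (x0, lam0) of A."""
    n = len(x0)
    x, lam = x0 / np.linalg.norm(x0), lam0
    for _ in range(maxit):
        F = np.concatenate([A @ x - lam * x, [(x @ x - 1.0) / 2.0]])
        if np.linalg.norm(F) < tol:
            break
        # Jacobian of F with respect to the augmented unknown (x, lam):
        # a bordered matrix that is nonsingular at a simple eigenpair.
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = A - lam * np.eye(n)
        J[:n, n] = -x
        J[n, :n] = x
        step = np.linalg.solve(J, -F)
        x, lam = x + step[:n], lam + step[n]
    return x, lam
```

Starting from a sufficiently close guess, the iteration converges quadratically to a simple eigenpair, which is why good initial guesses from the divide phase matter so much.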

Large-scale Eigenvalue Problems

Iterative methods for solving large-scale algebraic eigenvalue problems and linear systems are essential to the numerical solution of many large-scale applications. Basic research on algorithms for the iterative solution of eigenproblems and linear systems is still needed, particularly for nonsymmetric problems. During the past year, an approach to the numerical solution of large-scale eigenproblems has been developed that is based upon a variant of Arnoldi's method. This approach promises to provide a unified computational framework for the development of numerical software for the iterative solution of both large-scale eigenproblems and large-scale linear systems. Although further investigation is needed to identify the best variants within this framework, construction of a prototype library based upon these ideas can begin. The research codes that have already been developed will serve as a starting point for this library.
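Arnoldi's method builds an orthonormal basis of a Krylov subspace together with a small upper Hessenberg matrix whose eigenvalues (Ritz values) approximate extreme eigenvalues of the large matrix. A minimal, unrestarted sketch (NumPy for illustration; the variants under investigation are more sophisticated than this basic loop):

```python
import numpy as np

def arnoldi(A, v0, m):
    """m steps of Arnoldi: build an orthonormal basis V of the Krylov
    subspace span{v0, A v0, ..., A^{m-1} v0} and the m-by-m upper
    Hessenberg matrix H = V^T A V, whose eigenvalues (Ritz values)
    approximate extreme eigenvalues of A."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # exact invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]
```

Because only matrix-vector products with A are required, the same kernel serves both eigenproblems and (via the closely related GMRES construction) linear systems, which is the unified framework the text describes.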

Editor's note: The CRPC has five major research thrusts. Each issue of Parallel Computing Research will highlight one of these thrusts through an in-depth "Research Focus" article. This article presents the critical issues facing the Linear Algebra group and how the researchers are addressing them.

