TEMPLATES FOR THE SOLUTION OF LINEAR SYSTEMS: BUILDING BLOCKS FOR ITERATIVE METHODS

Jack Dongarra, Director, CRPC Linear Algebra Group, University of Tennessee/Oak Ridge National Laboratory

Which of the following statements is true?

1. Users want "black box" software that they can use with complete confidence for general problem classes, without having to understand the fine algorithmic details.
2. Users want to be able to tune data structures for a particular application, even if the software is not as reliable as that provided for general methods.
Paradoxically, both statements are true. Traditionally, users have asked for and been provided with black box software in the form of mathematical libraries such as LAPACK, LINPACK, NAG, and IMSL. More recently, members of the high-performance community have discovered that they must write custom software for their problem. Their reasons include inadequate functionality of existing software libraries, inappropriate data structures that are not natural or convenient for a particular problem, and overly general software that sacrifices too much performance when applied to a special case of interest.

Can we meet the needs of both groups of users? With the book "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods" (written by Richard Barrett, Mike Berry, Tony Chan, Jim Demmel, June Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Chuck Romine, and Henk van der Vorst), we believe we can.

Accordingly, we introduce the use of templates. A template is a description of a general algorithm rather than the executable object code or the source code more commonly found in a conventional software library. Nevertheless, although templates are general descriptions of key algorithms, they offer whatever degree of customization the user may desire. For example, they can be configured for the specific data structure of a problem or for the specific computing system on which the problem is to run.

The type of problem we focus on is large sparse systems of linear equations. Many methods exist for solving such problems. The trick is to find the most effective method for the problem at hand. Unfortunately, a method that works well for one problem type may not work as well for another. Indeed, it may not work at all. Thus, besides providing templates, we also present suggestions for choosing and implementing an effective method, and ways to extend a method to more specific matrix types.

We are particularly interested in iterative methods, which work by continually refining an initial approximate solution so that it becomes closer and closer to the correct solution. With an iterative method, a sequence of approximate solutions {x^(k)} is constructed that involves the matrix A essentially only through matrix-vector multiplication. The sparsity of A can therefore be exploited so that each iteration requires only O(n) operations (a minimal sketch of such a matrix-vector kernel follows below).
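To make the matrix-vector viewpoint concrete, here is a minimal C sketch of the kind of sparse matrix-vector product an iterative solver is built on, using compressed row storage; the type and routine names are our own illustration, not code from the book.

```c
/* Minimal sketch: y = A*x for a sparse matrix in compressed row storage.
 * An iterative solver needs A only through a routine like this, so the
 * work per iteration is proportional to the number of stored nonzeros --
 * O(n) when each row holds a bounded number of entries.
 * The format and names here are illustrative, not taken from the book. */
#include <stdio.h>

typedef struct {
    int n;             /* matrix dimension                        */
    const int *ptr;    /* ptr[i]..ptr[i+1]-1 index row i's data   */
    const int *col;    /* column index of each stored entry       */
    const double *val; /* value of each stored entry              */
} CRSMatrix;

static void matvec_crs(const CRSMatrix *A, const double *x, double *y)
{
    for (int i = 0; i < A->n; i++) {
        double sum = 0.0;
        for (int k = A->ptr[i]; k < A->ptr[i + 1]; k++)
            sum += A->val[k] * x[A->col[k]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 tridiagonal example: A = [2 -1 0; -1 2 -1; 0 -1 2] */
    int    ptr[] = {0, 2, 5, 7};
    int    col[] = {0, 1, 0, 1, 2, 1, 2};
    double val[] = {2, -1, -1, 2, -1, -1, 2};
    CRSMatrix A  = {3, ptr, col, val};
    double x[]   = {1, 1, 1}, y[3];

    matvec_crs(&A, x, y);               /* y = A*x = (1, 0, 1) */
    printf("%g %g %g\n", y[0], y[1], y[2]);
    return 0;
}
```

A solver that calls only such a routine never needs to know how A is stored, which is precisely the separation of concerns the templates exploit.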
We believe that after reading this book, applications developers will be able to use templates to get their programs running on a parallel machine quickly. Nonspecialists will know how to choose and implement an approach to solve a particular problem. Specialists will be able to assemble and modify their codes without having to make the huge investment that has, up to now, been required to tune large-scale applications for each particular machine. Finally, we hope that all users will gain a better understanding of the algorithms employed. While education has not been one of the traditional goals of mathematical software, we believe that our approach will go a long way toward providing such a valuable service.

Why Use Templates?

Templates offer three significant properties. First, templates are general and reusable. Thus, they can simplify ports to diverse machines. This feature is important given the ever-widening diversity of parallel architectures. Second, templates exploit the expertise of two distinct groups. The expert numerical analyst creates a template reflecting in-depth knowledge of a specific numerical technique. The applications user then provides "value-added" capability to the general template description, customizing it for specific contexts or application needs. Third, templates are not language specific. Rather, they are displayed in an Algol-like structure that is readily translatable into a target high-level language such as Fortran (with the use of the Basic Linear Algebra Subprograms, or BLAS, whenever possible) or C. By using these familiar styles, we believe that users will trust the algorithms. We also hope that users will gain a better understanding of numerical techniques and parallel programming. For each template, we provide:
What Methods are Covered?

So many iterative methods have been developed that it is impossible to cover all approaches. We have focused on the following methods, which are important for historical reasons, are in production use, and/or represent the state of the art today (an illustrative C sketch of the simplest of them, the Jacobi iteration, follows the list):

- Stationary methods: Jacobi, Gauss-Seidel, Successive Overrelaxation (SOR), and Symmetric Successive Overrelaxation (SSOR)
- Nonstationary methods: Conjugate Gradient (CG), MINRES and SYMMLQ, Conjugate Gradient on the Normal Equations (CGNE and CGNR), Generalized Minimal Residual (GMRES), BiConjugate Gradient (BiCG), Quasi-Minimal Residual (QMR), Conjugate Gradient Squared (CGS), BiConjugate Gradient Stabilized (Bi-CGSTAB), and Chebyshev Iteration
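As an illustration of how an Algol-like template translates into a language such as C, here is a small sketch of the simplest method above, the Jacobi iteration, written for a dense matrix for brevity; the variable names, storage choice, and stopping test are ours, not the book's template verbatim.

```c
/* Illustrative Jacobi iteration for Ax = b (dense storage for brevity).
 * Each sweep computes x_i^new = (b_i - sum_{j != i} a_ij * x_j^old) / a_ii.
 * This mirrors the shape of an Algol-like template but is not the book's
 * code; the names and the crude convergence test are our own. */
#include <math.h>
#include <stdio.h>

#define N 3

static void jacobi(const double a[N][N], const double b[N],
                   double x[N], int maxit, double tol)
{
    double xnew[N];
    for (int k = 0; k < maxit; k++) {
        double diff = 0.0;
        for (int i = 0; i < N; i++) {
            double sum = b[i];
            for (int j = 0; j < N; j++)
                if (j != i)
                    sum -= a[i][j] * x[j];
            xnew[i] = sum / a[i][i];
            diff += fabs(xnew[i] - x[i]);
        }
        for (int i = 0; i < N; i++)
            x[i] = xnew[i];
        if (diff < tol)          /* stop when the update is small */
            break;
    }
}

int main(void)
{
    /* Diagonally dominant test system; the exact solution is (1, 1, 1). */
    double a[N][N] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 4}};
    double b[N]    = {3, 2, 3};
    double x[N]    = {0, 0, 0};

    jacobi(a, b, x, 100, 1e-10);
    printf("%f %f %f\n", x[0], x[1], x[2]);
    return 0;
}
```

In practice the same loop structure carries over to sparse storage, with the inner sum replaced by a sparse matrix-vector product like the one sketched earlier.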
For each method we present a global description, including a discussion of the history of the method and numerous references to the literature. We also give the mathematical conditions for selecting a given method. We did not intend to write a "cookbook," and have deliberately avoided the words "numerical recipes." Rather, we present an algorithmic outline, with guidelines for implementing a chosen method in various high-performance computing environments. In addition, we discuss problems of data storage and the use of appropriate preconditioners.

"Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods" is published by SIAM and is available in bound form for $18.00 (list price) and $14.40 (SIAM member price). Royalties from the book will go into a SIAM fund to help support students who wish to attend SIAM meetings. In addition, SIAM is trying an experiment with this book and has allowed the PostScript file containing the book to be distributed freely over the Internet. It is available from Netlib. To retrieve the PostScript file, you can use one of the following methods:
- The book can be viewed through the World Wide Web from Mosaic by giving the following URL: http://netlib2.cs.utk.edu/linalg/html_templates/Templates.html

The algorithm descriptions in Fortran and MATLAB can be found in Netlib under the directory "linalg".