PARALLEL ADAPTIVE MESH REFINEMENT

Marsha Berger, Courant Institute; Daniel Quinlan and Jeffrey Saltzman, Los Alamos National Laboratory


Many solutions of the equations of computational physics are approximated using finite difference methods, which cover a domain of interest with lattices of data. The data on these lattices are advanced using a discretization that approximates the partial differential equations. As the lattices become finer, the computed data agree more closely with the solution of the original differential equations on the continuum domain.
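As a minimal illustration of this convergence (a sketch for this article, not taken from the authors' code), the example below advances the one-dimensional heat equation with an explicit finite difference update on successively finer lattices; the lattice sizes, time step, and error measure are assumptions chosen for the example.

#include <cmath>
#include <cstdio>
#include <vector>

// One explicit Euler step of u_t = u_xx with lattice spacing h and time step dt.
void step(std::vector<double>& u, double h, double dt) {
    std::vector<double> next(u);
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        next[i] = u[i] + dt * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
    u.swap(next);
}

int main() {
    const double pi = 3.14159265358979323846;
    for (int n : {32, 64, 128}) {                     // successively finer lattices
        const double h = 1.0 / n, dt = 0.25 * h * h;  // time step restricted for stability
        std::vector<double> u(n + 1);
        for (int i = 0; i <= n; ++i) u[i] = std::sin(pi * i * h);
        double t = 0.0;
        while (t < 0.1) { step(u, h, dt); t += dt; }
        // The continuum solution decays as exp(-pi^2 t); compare at the midpoint.
        const double err = std::fabs(u[n / 2] - std::exp(-pi * pi * t));
        std::printf("n = %4d   midpoint error = %.3e\n", n, err);
    }
    return 0;
}

Each doubling of the lattice resolution reduces the midpoint error, which is the sense in which finer lattices approach the continuum solution.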

Solutions are rarely uniform in either time or space. Adaptive computations accelerate convergence by dynamically adding and removing local refinements in the lattice as needed. There are tradeoffs between adaptive and uniform techniques. Uniform methods work very well on parallel machines, distributing both the computation and the necessary communication evenly. Adaptive computations may converge more quickly with fewer lattice points, but the cost on parallel machines is possible nonuniformity in both computation and communication, stemming from interpolation and data management.
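As a hedged sketch of how refinement can be targeted locally (the flagging criterion and threshold here are assumptions for illustration, not the authors' method), the code below flags only the lattice points where an estimated second difference is large, so refinement would be added near a steep front and nowhere else.

#include <cmath>
#include <cstdio>
#include <vector>

// Flag lattice points whose second difference exceeds a tolerance; these are
// the points a local refinement would cover.
std::vector<bool> flag_for_refinement(const std::vector<double>& u,
                                      double h, double tol) {
    std::vector<bool> flags(u.size(), false);
    for (std::size_t i = 1; i + 1 < u.size(); ++i) {
        const double second_diff =
            std::fabs(u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
        flags[i] = second_diff > tol;   // refine only where the solution is rough
    }
    return flags;
}

int main() {
    const int n = 100;
    const double h = 1.0 / n;
    std::vector<double> u(n + 1);
    // A profile with a steep front near x = 0.5: only that region gets flagged.
    for (int i = 0; i <= n; ++i) u[i] = std::tanh(50.0 * (i * h - 0.5));
    const std::vector<bool> flags = flag_for_refinement(u, h, 100.0);
    for (int i = 0; i <= n; ++i)
        if (flags[i]) std::printf("refine near x = %.2f\n", i * h);
    return 0;
}

The nonuniformity mentioned above is visible even in this toy example: the flagged region is a small, irregular subset of the lattice, so the extra work and communication it generates are not naturally balanced across processors.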

To make adaptive methods useful on parallel machines, this group has developed an algorithm that uses fixed-size patches and nonstandard data layouts. The end result is that no router communication takes place when patches at the finest level exchange data, and all communication between adjacent levels is local. The algorithm is implemented for two- and three-dimensional computations on the Thinking Machines CM-5. It can be applied to systems of hyperbolic conservation laws and is instantiated here with the equations of compressible fluid dynamics. The implementation is 80% efficient in the sense that most of the execution time is spent in the uniform patch integrator, which is called as a subroutine.
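The following is a minimal data-structure sketch of a hierarchy built from fixed-size patches; the 16x16 extent, the refinement ratio of 2, and all names are illustrative assumptions rather than the group's CM-5 implementation. Giving every patch the same extent is what makes the work per patch uniform and the data layout predictable, which in turn allows patch-to-patch exchanges to be scheduled without general router communication.

#include <array>
#include <cstdio>
#include <vector>

constexpr int PATCH = 16;                  // every patch has the same extent

struct Patch {
    int level;                             // refinement level (0 = coarsest)
    int i0, j0;                            // lattice origin of this patch
    std::array<double, PATCH * PATCH> u{}; // fixed-size storage: uniform work per patch
    double& at(int i, int j) { return u[i * PATCH + j]; }  // row-major layout
};

struct Hierarchy {
    std::vector<std::vector<Patch>> levels; // levels[l] holds all patches at level l

    // Add a refined patch (refinement ratio 2) over the coarse block at (i0, j0).
    void refine(int level, int i0, int j0) {
        if (static_cast<int>(levels.size()) <= level + 1) levels.resize(level + 2);
        levels[level + 1].push_back(Patch{level + 1, 2 * i0, 2 * j0});
    }
};

int main() {
    Hierarchy h;
    h.levels.push_back({Patch{0, 0, 0}, Patch{0, PATCH, 0}});  // two coarse patches
    h.refine(0, PATCH, 0);                 // refine the second coarse patch
    for (const auto& lvl : h.levels)
        for (const auto& p : lvl)
            std::printf("level %d patch at (%d, %d)\n", p.level, p.i0, p.j0);
    return 0;
}

Because a parent patch and its refinements occupy a known, fixed footprint, interlevel interpolation touches only neighboring data, which is the sense in which communication between adjacent levels stays local.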

Future work is divided into two areas. First, as data parallel languages grow in sophistication, the algorithm can be optimized further by integrating multiple patches simultaneously, and interpolation can also be done in parallel. Second, the group plans to move the method to other programming environments, headed by a C++ array-class-based parallel library with an underlying message-passing architecture.

