PARALLEL ADAPTIVE MESH REFINEMENT

Marsha Berger, Courant Institute; Daniel Quinlan and Jeffrey Saltzman, Los Alamos National Laboratory

Many solutions of the equations of computational physics are approximated using finite difference methods, which cover a domain of interest with lattices of data. The data on these lattices are advanced using a discretization that approximates the partial differential equations. As the lattices become finer, the discrete solution agrees more closely with the solution of the original differential equation on the continuum domain. Solutions are rarely uniform in either time or space, so adaptive computations accelerate convergence by dynamically adding and removing local refinements in the lattice as needed.

There are tradeoffs between adaptive methods and uniform techniques. Uniform methods work very well on parallel machines, evenly distributing both the computation and the necessary communication. Adaptive computations may converge more quickly with fewer lattice points, but on parallel machinery they can be nonuniform in both computation and communication because of interpolation and data management.

To make adaptive methods useful on parallel machines, this group has developed an algorithm using fixed-sized patches and nonstandard data layouts. As a result, no router communication takes place when patches at the finest level exchange data, and all communication between adjacent levels is local. The algorithm has been implemented for two- and three-dimensional computations on the Thinking Machines CM-5. It applies to systems of hyperbolic conservation laws and is instantiated with the equations of compressible fluid dynamics. The implementation is 80% efficient in the sense that most of the time is spent in the uniform patch integrator, which is called as a subroutine. (A sketch of these ingredients appears below.)

Future work is divided into two areas. As data parallel languages grow in sophistication, the algorithm can be optimized further by integrating multiple patches simultaneously; interpolation can also be done in parallel. The group also plans to move the method to other programming environments, starting with a C++ array class based parallel library built on an underlying message-passing architecture.
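The sketch below is a minimal one-dimensional illustration of the fixed-size-patch idea, not the authors' implementation: it uses linear advection as a stand-in for the compressible flow equations, and all names (Patch, integrate, exchangeGhosts) are hypothetical. It shows the three ingredients the article describes: equal-sized patches, a uniform patch integrator called as a subroutine, and neighbor exchange that is purely local because every patch has the same extent, so a patch's neighbors are known from its index alone.

```cpp
// Minimal 1D fixed-size-patch sketch (illustrative only).
// Assumes linear advection u_t + a u_x = 0 as a stand-in for the
// hyperbolic systems discussed in the article; all names are
// hypothetical, not taken from the authors' code.
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int PATCH_SIZE = 16;  // every patch has the same extent
constexpr double A = 1.0;       // advection speed (a > 0)

struct Patch {
    double x0 = 0.0, dx = 1.0;           // origin and spacing
    std::array<double, PATCH_SIZE> u{};  // cell-centered data
    double ghostL = 0.0, ghostR = 0.0;   // one ghost cell per side
};

// Uniform patch integrator: one first-order upwind step. In the
// article's terms, this is the subroutine in which most of the
// time is spent.
void integrate(Patch& p, double dt) {
    std::array<double, PATCH_SIZE> next{};
    for (int i = 0; i < PATCH_SIZE; ++i) {
        double left = (i == 0) ? p.ghostL : p.u[i - 1];
        next[i] = p.u[i] - A * dt / p.dx * (p.u[i] - left);
    }
    p.u = next;
}

// Refinement criterion: tag a patch whose data vary too rapidly.
bool needsRefinement(const Patch& p, double tol) {
    for (int i = 1; i < PATCH_SIZE; ++i)
        if (std::fabs(p.u[i] - p.u[i - 1]) > tol) return true;
    return false;
}

// Ghost-cell exchange on one level. Because all patches have the
// same size, neighbors are found by index; no global lookup is
// needed. (ghostR is unused by this upwind scheme but is filled
// for symmetry; boundary patches copy their own edge values.)
void exchangeGhosts(std::vector<Patch>& level) {
    for (std::size_t k = 0; k < level.size(); ++k) {
        level[k].ghostL = (k == 0) ? level[k].u[0]
                                   : level[k - 1].u[PATCH_SIZE - 1];
        level[k].ghostR = (k + 1 == level.size())
                              ? level[k].u[PATCH_SIZE - 1]
                              : level[k + 1].u[0];
    }
}

int main() {
    // One coarse level of four equal patches covering [0, 4),
    // initialized with a Gaussian pulse centered at x = 0.5.
    std::vector<Patch> level(4);
    for (std::size_t k = 0; k < level.size(); ++k) {
        level[k].x0 = static_cast<double>(k);
        level[k].dx = 1.0 / PATCH_SIZE;
        for (int i = 0; i < PATCH_SIZE; ++i) {
            double x = level[k].x0 + (i + 0.5) * level[k].dx;
            level[k].u[i] = std::exp(-40.0 * (x - 0.5) * (x - 0.5));
        }
    }
    double dt = 0.5 * level[0].dx / A;  // CFL-limited time step
    for (int step = 0; step < 20; ++step) {
        exchangeGhosts(level);
        for (auto& p : level) integrate(p, dt);
    }
    // Report which patches would be tagged for refinement.
    for (std::size_t k = 0; k < level.size(); ++k)
        std::printf("patch %zu refine? %d\n", k,
                    needsRefinement(level[k], 0.1) ? 1 : 0);
}
```

Because every patch has identical size, the data layout and neighbor relationships are fixed at patch-creation time; this is what lets the authors' scheme exchange fine-level data without router communication, whereas variable-sized patches would require a dynamic search for overlapping neighbors.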