Astrophysical N-body Simulations Using Hierarchical Tree Data Structures

John K. Salmon, Physics, California Institute of Technology, Center for Research on Parallel Computation; and Michael S. Warren, Theoretical Astrophysics, Los Alamos National Laboratory

The process by which galaxies form is among the most important unsolved problems in physics. Although there is a wealth of observational data, astrophysics is at a disadvantage relative to the more terrestrial sciences because it is impossible to conduct experiments on galaxies. With numerical methods, however, one can simulate galaxy formation. Such simulations employ adaptive tree data structures, or treecodes, because they adequately represent the distribution of matter on all length-scales. Salmon and Warren have developed a parallel N-body code that uses this type of structure.

To study galaxy formation numerically, it is common to use N-body simulations, i.e., numerical simulations in which the distribution of matter is represented by a large number, N, of massive bodies that move under their mutual gravitational influence. Large values of N (upwards of one million) are needed to accurately model the large density contrasts that form in astrophysical systems. To reduce the cost of each evaluation of all the forces in the system, it is beneficial to introduce a data structure that represents the distribution of matter on all length-scales. Adaptive tree data structures have been popular in recent years because of the low cost of the subsequent force evaluation and their relative insensitivity to the spatial distribution of bodies. N-body simulations that use adaptive tree data structures are referred to as treecodes.
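
The force evaluation in a treecode follows the Barnes-Hut idea: a cell of bodies that is far enough away (its size-to-distance ratio below an opening parameter theta) is replaced by a single point mass at its center of mass. The sketch below illustrates this in two dimensions with a quadtree, standing in for the 3-D oct-trees used in the actual code; the class and names are illustrative, not taken from Salmon and Warren's implementation, and distinct body positions are assumed.

```python
import math

class Cell:
    """One square cell of a Barnes-Hut quadtree (a 2-D stand-in for the
    3-D oct-trees discussed in the article). Each cell accumulates the
    total mass and mass-weighted position of everything inside it."""

    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # center and half-width
        self.mass = 0.0
        self.mx = self.my = 0.0   # mass-weighted coordinate sums
        self.body = None          # (m, x, y) when this is an occupied leaf
        self.kids = None          # four subcells once subdivided

    def insert(self, m, x, y):
        if self.mass == 0.0:                      # empty leaf: store the body
            self.body = (m, x, y)
        elif self.kids is None:                   # occupied leaf: subdivide
            om, ox, oy = self.body
            self.body = None
            h = self.half / 2
            self.kids = [Cell(self.cx + (2*ix - 1)*h, self.cy + (2*iy - 1)*h, h)
                         for ix in (0, 1) for iy in (0, 1)]
            self._kid(ox, oy).insert(om, ox, oy)  # push old body down
            self._kid(x, y).insert(m, x, y)
        else:                                     # internal cell: recurse
            self._kid(x, y).insert(m, x, y)
        self.mass += m
        self.mx += m * x
        self.my += m * y

    def _kid(self, x, y):
        return self.kids[int(x >= self.cx) * 2 + int(y >= self.cy)]

    def accel(self, x, y, theta=0.5):
        """Gravitational acceleration at (x, y), with G = 1. A cell whose
        size-to-distance ratio 2*half/d is below theta is treated as a
        single point mass at its center of mass."""
        if self.mass == 0.0:
            return (0.0, 0.0)
        comx, comy = self.mx / self.mass, self.my / self.mass
        dx, dy = comx - x, comy - y
        d = math.hypot(dx, dy)
        if d == 0.0:
            return (0.0, 0.0)     # a leaf holding only the test point itself
        if self.kids is None or 2 * self.half / d < theta:
            f = self.mass / d**3
            return (f * dx, f * dy)
        ax = ay = 0.0
        for k in self.kids:
            kx, ky = k.accel(x, y, theta)
            ax += kx
            ay += ky
        return (ax, ay)
```

With theta = 0 no cell is ever accepted as a point mass, so the traversal reduces to an exact direct sum; raising theta trades accuracy for the O(N log N) cost that makes million-body runs feasible.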

Treecodes represent a formidable challenge for parallel computation because the distribution of astrophysical bodies in the simulation is dynamic and highly non-uniform, and each body needs both global and local data for its update. The researchers use orthogonal recursive bisection (ORB), whereby space is recursively divided in two and half the processors are assigned to each half, until one processor is associated with each rectangular domain. Each body sees distant parts of the data tree at a very coarse level of detail, while nearby sections are seen in much finer detail. The crucial observation is that nearby bodies see essentially the same data. By considering all the bodies in a processor's domain, one can construct the union of all the trees they can see, called the local essential tree.
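
The recursive-halving idea behind ORB can be sketched in a few lines. In the real code the split planes are chosen in parallel to balance the computational work across processors; this sequential sketch (the function name is illustrative) only shows the decomposition itself, cutting at a point proportional to the processor counts on each side and alternating the split axis at each level.

```python
def orb(points, nproc, axis=0):
    """Orthogonal recursive bisection sketch: partition a list of (x, y)
    points into nproc groups, one per processor. Each level sorts along
    one coordinate axis, cuts so body counts are proportional to the
    processor counts assigned to each side, then alternates the axis.
    nproc need not be a power of two."""
    if nproc == 1:
        return [points]
    left = nproc // 2                       # processors assigned to one side
    pts = sorted(points, key=lambda p: p[axis])
    cut = len(pts) * left // nproc          # proportional split point
    nxt = (axis + 1) % 2                    # alternate x- and y-cuts
    return orb(pts[:cut], left, nxt) + orb(pts[cut:], nproc - left, nxt)
```

Each resulting group corresponds to one processor's rectangular domain, from which that processor's local essential tree would then be assembled.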

Salmon and Warren's parallel N-body code has been implemented on an Intel Touchstone Delta, an Intel iPSC/860, an nCUBE, and the Caltech/JPL Mark III. This portability brought increased functionality, convenience, and control over the timing of bug fixes and upgrades. The reported times are for the entire application, including I/O, communication, and program initialization, and correspond to a parallel efficiency of about 87%. In March 1992, the team ran a simulation with 8,783,848 bodies on 512 processors for 780 timesteps, and in June 1992 they ran two large simulations of the Cold Dark Matter model of the universe. These are, by any measure, the largest N-body simulations ever run.

One particular treecode developed by the researchers, the hashed oct-tree (HOT) algorithm, represents a major advance in style and sophistication. Treecodes of this kind can also be useful in molecular dynamics, computational fluid dynamics, reaction-diffusion equations, data compression, image processing, visualization, and database searching. Salmon and Warren believe that the flexible HOT implementation will not only be adaptable enough to satisfy their own data analysis needs, but will also enable them to collaborate with scientists in other disciplines to investigate broader uses of parallel treecodes.
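
The "hashed" in HOT refers to naming tree cells by keys rather than pointers: a body's key is formed by interleaving the bits of its integer coordinates, and cells are then located through a hash table on those keys, so any processor can name and request a remote cell directly. The sketch below shows one such key scheme (function names are illustrative; the exact key format in HOT may differ in detail).

```python
def body_key(ix, iy, iz, levels=10):
    """Interleave the bits of integer coordinates (0 <= ix, iy, iz <
    2**levels) into a single oct-tree key. A leading 1 bit records the
    key's length, so the root cell is key 1 and each deeper tree level
    appends three more bits."""
    key = 1
    for b in reversed(range(levels)):
        key = (key << 3) | (((ix >> b) & 1) << 2) \
                         | (((iy >> b) & 1) << 1) \
                         | ((iz >> b) & 1)
    return key

def parent_key(key):
    """Drop the last three bits: the key of the enclosing cell."""
    return key >> 3
```

Because a cell's parent, children, and depth are all simple bit operations on its key, the tree can live in a flat hash table (key to cell data) instead of a pointer structure, which is what makes distributing it across processors practical.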

This research received the $35,000 Intel Grand Challenge Computing Award this year and was one of five recipients of this year's Gordon Bell Prize. Two other Gordon Bell Prize winners made use of CRPC facilities as well.
