Volume 7, Issue 1 - Spring/Summer 1999

Work in Progress


HIGH-PERFORMANCE I/O FOR A REMOTE-SENSING DATABASE

Joel Saltz, Anurag Acharya, Chialin Chang, Bongki Moon, and Alan Sussman, University of Maryland

Remote-sensing data acquired from satellite-based sensors is widely used in geographical, meteorological, and environmental studies. Data volume has been one of the major limiting factors for these studies. Coarse-grained satellite data (4.4 km per pixel) for a global query that spans the shortest period of interest (10 days) is about 4 GB; a finer-grained version of the same data (1.1 km per pixel) is about 65 GB. The output images are usually significantly smaller than the input data. For example, a multi-band full-globe image corresponding to the 4.4 km dataset mentioned above is 228 MB. This data reduction is achieved by composition of information corresponding to different days. Before it can be used for composition, each data item has to be processed to correct for the effects of various distortions, including instrument drift and atmospheric effects.
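As a rough orientation, the reduction factor implied by these figures can be derived directly. The short Python sketch below simply restates the numbers from the preceding paragraph (4 GB of coarse-grained input, a 228 MB composited output) and computes the ratio; it is back-of-envelope arithmetic, not a measurement.

```python
# Back-of-envelope arithmetic using the figures quoted in the text.
input_gb = 4.0        # coarse-grained (4.4 km) global query, 10-day period
fine_input_gb = 65.0  # finer-grained (1.1 km) version of the same query
output_mb = 228.0     # multi-band full-globe composited image

reduction = (input_gb * 1024.0) / output_mb
print(f"composition reduces ~{input_gb:.0f} GB of input to {output_mb:.0f} MB, "
      f"roughly an {reduction:.0f}x reduction")
```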

These characteristics present two major challenges for the design and implementation of a high-performance remote-sensing database. First, the database must provide low-latency retrieval of very large volumes of spatio-temporal data from secondary storage. This requires effective declustering of a multi-dimensional dataset onto a suitably configured disk farm. Second, the order-of-magnitude reduction in data size makes it imperative, from a performance perspective, that correction and composition operations be performed on the same machine on which the data is stored. This requires careful coordination of computation and data retrieval to avoid slowing down either process.

To meet these requirements, researchers at the University of Maryland have developed Titan, a parallel database designed for handling remote-sensing data. Titan is currently operational and contains about 24 GB of data from the Advanced Very High Resolution Radiometer (AVHRR) sensor on the NOAA-7 satellite. Titan addresses the problem of low-latency retrieval of very large volumes of data in two ways. First, it takes advantage of the AVHRR data format and of common query patterns identified by earth science researchers to partition the entire dataset into coarse-grained chunks that achieve good disk bandwidth. Second, it tries to maximize disk parallelism by declustering the set of chunks onto a large disk farm. To decluster the dataset, the group developed and used the minimax declustering algorithm. This algorithm models the dataset as a complete graph -- each vertex represents a data block and each edge represents the likelihood that the corresponding data blocks will be accessed together. The key idea of the algorithm is to extend Prim's minimal spanning tree algorithm to construct as many spanning trees as there are disks in the disk farm, and to assign all the blocks associated with a single spanning tree to a single disk. To generate the edge costs, the group chose the proximity index proposed by Kamel and Faloutsos.
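The Python sketch below illustrates the flavor of this tree-growing approach. It is an illustrative approximation rather than the published minimax implementation: the function name, the dictionary-based proximity table, the seeding rule, and the greedy choice of extending the smallest tree with the block whose worst-case proximity to that tree is lowest are all assumptions made for the example.

```python
def minimax_decluster(blocks, proximity, num_disks):
    """Greedy, Prim-style declustering sketch (illustrative only).

    blocks    : list of block identifiers
    proximity : dict mapping frozenset({a, b}) -> likelihood that blocks a and b
                are accessed together (e.g. a Kamel/Faloutsos-style proximity index)
    num_disks : number of disks in the farm

    Grows one "spanning tree" per disk. In each round the disk with the fewest
    blocks absorbs the unassigned block whose worst-case proximity to the blocks
    already on that disk is smallest.
    """
    unassigned = list(blocks)
    disks = [[] for _ in range(num_disks)]

    # Seed each tree with one block.
    for d in range(min(num_disks, len(unassigned))):
        disks[d].append(unassigned.pop(0))

    while unassigned:
        # Keep block counts balanced: extend the smallest tree first.
        d = min(range(num_disks), key=lambda i: len(disks[i]))
        # Pick the block least likely to be co-accessed with this disk's blocks.
        best = min(
            unassigned,
            key=lambda b: max(proximity.get(frozenset({b, x}), 0.0)
                              for x in disks[d]),
        )
        unassigned.remove(best)
        disks[d].append(best)

    return disks
```

Because the edge costs encode the likelihood of co-access, keeping each tree's internal proximities low tends to spread blocks that are read together across different disks, which is what maximizes retrieval parallelism for a query.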

Titan delivers good performance for both small and large queries. It achieves interactive response times (less than 10 seconds) for local queries and relatively quick turnaround (1.5 minutes) for global queries. Nevertheless, there are two factors that limit further improvements in performance. First, the high I/O parallelism has been achieved at the cost of locality. In the experiments, this shows up as a large amount of time spent in communication. Second, there is considerable computational load imbalance.

The loss of locality is primarily due to an assumption hidden in both the declustering algorithm and the algorithm that partitions the query processing: that the cost of moving data from a disk to a processing node is the same for all disks and all nodes. In reality, data blocks retrieved by a node must be forwarded to all consumers of the block, resulting in a large amount of communication. Data blocks retrieved for the global query had an average of 1.4 remote consumers; the corresponding numbers for queries over Africa and the United Kingdom are 1.8 and 6.2, respectively. The computational imbalance is primarily due to the use of a uniform partitioning scheme to process AVHRR data that is distributed non-uniformly over the attribute space. The non-uniform distribution is caused by the structure of the satellite's orbit and the shape of the earth.

The researchers are currently working on the trade-off between I/O parallelism and locality. They are considering two techniques: (1) using a phased strategy for composition of data blocks and (2) significantly increasing the size of the data blocks. The phased strategy would perform the processing and composition of all data blocks on a processing node and forward only the composited result for combination with data from other nodes. This will reduce communication requirements. Since most of the processing for a data block will be done at the node on which it resides and since the declustering scheme achieves a good I/O balance, this should significantly improve the computational balance. Increasing the size of the data blocks will increase locality and improve the performance of the local composition operations. It will, however, reduce parallelism, particularly for small queries.
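A minimal sketch of the phased strategy follows, under the assumption that composition is an associative per-pixel merge; the max-based rule and the function names are placeholders, not Titan's actual operators. Each node first composites only the blocks it stores locally, and only the small per-node results are then exchanged and merged.

```python
def composite(partial, block):
    """Fold one corrected data block into a running composite.

    Placeholder composition rule (keep the larger value per pixel); the real
    per-pixel operator used by Titan is not described in the text.
    """
    result = dict(partial)
    for pixel, value in block.items():
        result[pixel] = max(result.get(pixel, value), value)
    return result

def phased_query(local_blocks_per_node):
    """Two-phase ("phased") composition sketch.

    Phase 1: every node composites the blocks stored on its own disks,
    touching no remote data.
    Phase 2: only the per-node composites (output-sized, not input-sized)
    are exchanged and merged into the final image.
    """
    # Phase 1: purely local work on each node.
    node_composites = []
    for node_blocks in local_blocks_per_node:
        partial = {}
        for block in node_blocks:
            partial = composite(partial, block)  # correction step omitted here
        node_composites.append(partial)

    # Phase 2: combine the small per-node results.
    final = {}
    for partial in node_composites:
        final = composite(final, partial)
    return final
```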

