Volume 7, Issue 1
RESEARCH FOCUS -- APPLICATIONS: GEOSCIENCES PARALLEL COMPUTATION PROJECT
Scientific applications play an important and natural role in unifying center research by bringing software and algorithm researchers together from both the CRPC and the external academic and industrial community. The CRPC supports applications projects for two reasons: they serve as testbeds for the development of core parallel algorithms, software, and other basic computer science research, and they demonstrate parallel computing technologies in real scientific computations.
An important part of these activities is the Geosciences Parallel Computation Project (GPCP), established through a grant from the state of Texas to address the computational needs of the petroleum industry. Through a collaboration of researchers from the CRPC, Rice University, the University of Texas at Austin, the University of Houston, and several petroleum industry corporations, the GPCP specifically addresses the use of parallel computation to simulate methods for enhanced oil recovery. In addition to an extensive program of research, the project has fostered knowledge transfer to industry through two very active corporate affiliate programs.
Areas of study for the GPCP include flow in porous media, seismic analysis, optimal well placement, and the development of advanced tools for parallel scientific programming. This article will focus specifically on flow in porous media and seismic analysis, since the work of the optimization group was covered in the April 1993 issue of Parallel Computing Research and the advanced tools research will be covered in a future issue.
Flow in Porous Media Parallel Project
Mary Wheeler (director), Todd Arbogast, Clint N. Dawson, Philip T. Keenan, Luca Pavarino, Marcelo Rame, Chong-Huey Wang
The Flow in Porous Media Parallel Project (FPMPP) group focuses on petroleum reservoir simulation, though a natural technological spin-off is groundwater contaminant simulation. The group is pursuing several objectives: to develop accurate and efficient parallel algorithms for reservoir simulation and for the parallel simulation of contaminant remediation; to develop an understanding of parallel scaling issues in reservoir simulation; to investigate problems in porting reservoir simulators to parallel computers; to develop techniques for conditional simulation on parallel machines; and to perform basic research on various aspects of flow in porous media. These objectives are essential for predicting the response of reservoirs or aquifers to complicated processes and for understanding, designing, and testing economically feasible recovery or decontamination strategies.
Specific work within the FPMPP includes the following:
Parallel Computation for Seismic Inversion William Symes (director), Michel Kern, Alain Sei, Huy Tran, and Roelof Versteeg
The goal of this project is the improvement of reflection seismic data processing through the development of new algorithms and the employment of parallel computation. Reflection seismology provides the most detailed picture of the earth's structure available for petroleum exploration and production. It is the primary tool used by geophysicists to locate and map likely oil and gas prospects, and is of increasing importance in advanced production techniques. Seismic crews generate explosions or other sources of acoustic energy, record the echoes from underground formations, and collect the records of many such "shots." Surveys are carried out at sea from ships towing cables full of microphones, in deserts, on mountains, and in swamps all over the world. The petroleum industry spends several billion dollars annually on the worldwide application of this technology. Texas and the contiguous Gulf of Mexico form the most thoroughly seismically surveyed territory in the world.
The current focus of this project is the development of accurate seismic inversion on a small industrial scale. The meaning of "inversion" in this context includes estimating seismic wave velocity and other rock properties in a manner consistent with the data as measured in the field. Inversion is distinguished in contemporary usage from ordinary processing by its emphasis on automatically extracting as much information as possible. Seismic data does not sensitively reveal all the important details of subsurface structure; for example, seismic wavelengths are generally on the order of tens of meters, whereas rock bed thicknesses important in reservoir simulation are on the order of meters. Thus human intervention and judgment are ultimately necessary to produce geologically meaningful results from seismic data. Nonetheless, many aspects of the subsurface structure are well determined by the data, and the task of inversion is to extract these properties with minimal human effort. The approach pursued by this project is to produce detailed subsurface models that are physically sensible and approximately reproduce the data, through detailed simulation of seismic wave propagation. The production of such models has been studied by many academic and industrial research groups around the world, but is still very much an open research problem. Even small problems of this type involve large-scale computations that exhibit intrinsic parallelism at many levels. Thus, parallel computation is an essential tool and an integral part of this project.
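The core idea of inversion as described above, fitting model parameters so that simulated measurements reproduce observed data, can be illustrated with a deliberately tiny, hypothetical example (this is not the project's code; the names, numbers, and one-parameter model are invented for illustration). Here a single wave velocity is estimated by minimizing the misfit between predicted and observed travel times; real seismic inversion fits enormous data volumes against a full wave-propagation simulator.

```python
# Hypothetical toy illustration of inversion as data fitting.
# Given travel times t_i observed over known path lengths d_i,
# estimate a single wave velocity v by minimizing the misfit
#     J(v) = sum_i (d_i / v - t_i)^2
# with a simple parameter scan.

def misfit(v, paths, times):
    """Sum of squared residuals between predicted and observed times."""
    return sum((d / v - t) ** 2 for d, t in zip(paths, times))

def invert_velocity(paths, times, v_grid):
    """Return the candidate velocity that minimizes the misfit."""
    return min(v_grid, key=lambda v: misfit(v, paths, times))

if __name__ == "__main__":
    true_v = 2500.0                       # m/s, an assumed rock velocity
    paths = [1000.0, 2000.0, 3500.0]      # meters
    times = [d / true_v for d in paths]   # noise-free synthetic "data"
    grid = [2000.0 + 10.0 * k for k in range(101)]  # 2000..3000 m/s
    print(invert_velocity(paths, times, grid))      # recovers 2500.0
```

The scan over a velocity grid stands in for the gradient-based optimization a realistic inversion would use; with noise-free data the misfit vanishes exactly at the true velocity.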
The improved estimation of seismic wave velocities is key to the "focusing" of seismic data and hence to the formation of accurate earth models. Over the past several years, researchers in this group have developed an approach to velocity estimation that overcomes theoretical obstacles fatal to other approaches. This approach is called differential semblance optimization (DSO). DSO is one of very few working techniques for the extraction of velocities and other earth features directly from seismic data, with minimal human intervention. It has been successfully applied both in synthetic model studies and in field data trials. It accommodates a wide variety of physical descriptions of the seismic wave propagation process and yields estimates of physical parameters, such as compressional and shear velocities, density, and source radiation pattern, that are indicative of rock properties and in some cases fluid content. A number of other research groups around the world have developed approaches similar to DSO, some independently, some inspired directly by the work at Rice University. The technological potential of DSO is also being evaluated through The Rice Inversion Project, an industrial research consortium with eight corporate sponsors in 1994 that also supports the work of this group.
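Schematically, and using notation assumed here rather than taken from the article, the DSO idea can be written down as follows. The data are migrated with a trial velocity v to form a prestack image u(x, h) indexed by a redundant parameter h (offset or shot); at the correct velocity the images from different h agree, so DSO penalizes the variation of the image across h:

```latex
% Schematic differential semblance objective (notation assumed):
% u(x,h;v) is the image at subsurface point x from offset (or shot) h,
% formed with trial velocity v.  At the correct velocity, u is
% (approximately) independent of h, so the objective is minimized.
J[v] \;=\; \frac{1}{2} \iint \left| \frac{\partial u}{\partial h}(x,h;v) \right|^{2} dx \, dh
```

A key property, and plausibly the "theoretical obstacle" overcome, is that this functional varies smoothly with the velocity, whereas a direct least-squares data misfit is notoriously oscillatory in v, trapping gradient-based optimizers in spurious local minima.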
Along with developing a theoretical understanding of this approach, researchers from this group have produced a prototype computer implementation that runs efficiently on a variety of platforms (from UNIX workstations to massively parallel supercomputers) with uniform interfaces. This implementation must accommodate a wide variety of data set sizes, formats, and physical modeling assumptions. Therefore, the code is designed to isolate simulation and related modules from those that perform generic tasks (e.g., I/O, linear algebra, and optimization) common to all instances. This design has been very successful in allowing project members to test quickly the effects of changed physical assumptions and data properties, and it may have some utility for researchers in other fields.
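The separation of concerns described above can be sketched in miniature (a hypothetical illustration, not the project's actual code; all class and function names are invented). The generic driver is written against a narrow interface, so a simulation module embodying different physical assumptions can be swapped in without touching the driver:

```python
# Sketch of isolating physics modules from generic tasks.
# The driver (grid_search) knows nothing about the physics; any
# simulator exposing predict()/residual() plugs in unchanged.

class Simulator:
    """Minimal interface every physical-model module must provide."""
    def predict(self, model):
        raise NotImplementedError
    def residual(self, model, data):
        return [p - d for p, d in zip(self.predict(model), data)]

class ConstantVelocityModel(Simulator):
    """Toy physics module: travel times over fixed path lengths."""
    def __init__(self, paths):
        self.paths = paths
    def predict(self, velocity):
        return [d / velocity for d in self.paths]

def grid_search(sim, data, candidates):
    """Generic optimization task, reusable across all simulators."""
    def cost(m):
        return sum(r * r for r in sim.residual(m, data))
    return min(candidates, key=cost)
```

Replacing `ConstantVelocityModel` with a module making different physical assumptions leaves `grid_search` (and, by extension, I/O and linear-algebra utilities built the same way) untouched, which is the reuse the article credits the design with.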
Parallelism is implicit at many levels in seismic inversion software. For example, seismic experiments are really "multi-experiments" consisting of many logically identical sub-experiments that can be simulated in parallel. It is a commonplace of scientific computation that realizing the benefits of parallelism requires considerable effort. Researchers in the Seismic Inversion subproject have implemented multi-experiment parallelism at the level of generic software (i.e., in a way independent of simulation details) to amortize the parallelization effort over many instances of the code. The design uses explicit message passing implemented through Oak Ridge National Laboratory's Parallel Virtual Machine (PVM) package. In principle, this software could even be used to drive multi-experiment simulations having nothing to do with seismology (e.g., conditional simulations of reservoir flow). The parallel inversion software runs especially well in distributed computing environments, i.e., workstation clusters, and could be useful even in smaller firms and laboratories that do not have access to large multiprocessor machines.
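As a rough analogy to the multi-experiment pattern (the project's actual implementation uses PVM message passing, not Python; this sketch and its names are invented, and a thread pool stands in for distributed workers), the driver farms logically identical sub-experiments out to workers and gathers the results, independent of what each sub-experiment computes:

```python
# Analogy to multi-experiment parallelism: each "shot" is an
# independent sub-experiment, so a generic driver can distribute
# shots across workers and collect the records in shot order.
# Threads are used here for simplicity; the real design passed
# messages between processes on a workstation cluster.

from concurrent.futures import ThreadPoolExecutor

def simulate_shot(shot_id):
    """Stand-in for one per-shot simulation (hypothetical placeholder)."""
    return shot_id * shot_id

def run_survey(n_shots, n_workers=4):
    """Generic driver: farm shots out to workers, gather records in order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(simulate_shot, range(n_shots)))
```

Because the driver never inspects what `simulate_shot` returns, the same scheme could drive unrelated multi-experiment workloads, which matches the article's remark about conditional simulations of reservoir flow.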
GPCP: A Valuable Resource for Industry
Oil and natural gas production are of critical importance to the United States economy. Future production is dependent upon the use of computational science to aid in the extraction of oil and gas from the existing oil reserves in the United States. Parallel computers permit researchers to create detailed oil reservoir models that help them better predict the effects of well placement and enhanced oil recovery strategies. Since much of the GPCP's work, particularly that of the FPMPP, can also be applied to groundwater remediation, the GPCP is a valuable resource to environmental scientists and engineers as well as to industries that must work within stringent environmental regulations for local water quality.
The GPCP is one of several applications projects that draw upon the fundamental advances in parallel computing made by the CRPC's main research thrusts to make parallel computing truly usable to scientists and engineers. "Our applications projects are putting the fruits of our labor in parallel computing to work on important real-world problems," said Geoffrey Fox, coordinator of the CRPC's applications projects. "What we're doing now for the petroleum and environmental industries is an excellent example of CRPC's strategy of applying the best HPCC technology in focused industrial and academic areas," said Fox. "In the latter regard, we have a significant role in ten Grand Challenges that we will describe in later articles. CRPC's strong internal computer science projects give us collaborations with the experts in the most important computer technology areas. In this way, we can help application scientists even when CRPC technology is not directly applicable."
Editor's note: Other CRPC applications projects will be highlighted in future "Research Focus" articles.