
Scientific applications play an important and natural role in unifying center research by bringing software and algorithm researchers together from both the CRPC and the external academic and industrial community. The CRPC supports applications projects for two reasons: they serve as testbeds for the development of core parallel algorithms, software, and other basic computer science research, and they demonstrate parallel computing technologies in real scientific computations.

An important part of these activities is the Geosciences Parallel Computation Project (GPCP), established through a grant from the state of Texas specifically to address the computational needs of the petroleum industry. Through a collaboration of researchers from the CRPC, Rice University, the University of Texas at Austin, the University of Houston, and several corporations in the petroleum industry, the GPCP addresses the use of parallel computation to simulate methods for enhanced oil recovery. In addition to an extensive program of research, the project has also fostered knowledge transfer to industry through two very active corporate affiliate programs.

Areas of study for the GPCP include flow in porous media, seismic analysis, optimal well placement, and the development of advanced tools for parallel scientific programming. This article will focus specifically on flow in porous media and seismic analysis, since the work of the optimization group was covered in the April 1993 issue of Parallel Computing Research and the advanced tools research will be covered in a future issue.

Flow in Porous Media Parallel Project

Mary Wheeler (director), Todd Arbogast, Clint N. Dawson, Philip T. Keenan, Luca Pavarino, Marcelo Rame, Chong-Huey Wang

The Flow in Porous Media Parallel Project (FPMPP) group focuses on petroleum reservoir simulation, though a natural technological spin-off is groundwater contaminant simulation. The group is pursuing several objectives: to develop accurate and efficient parallel algorithms for reservoir simulation and for the parallel simulation of contaminant remediation; to develop an understanding of parallel scaling issues in reservoir simulation; to investigate problems in porting reservoir simulators to parallel computers; to develop techniques for conditional simulation on parallel machines; and to perform basic research on various aspects of flow in porous media. These objectives are essential for predicting the response of reservoirs or aquifers to complicated processes and for understanding, designing, and testing economically feasible recovery or decontamination strategies.

Specific work within the FPMPP includes the following:

New Discretization Algorithms.
One thrust has been to develop accurate and efficient parallel algorithms for reservoir simulation. Investigation has focused on the application of solution techniques such as linear and nonlinear multigrids with operator-based averaging and domain decomposition. Of high priority is the investigation of advanced finite difference and finite element discretization methods such as Godunov, characteristic, and Leonard methods, used for advective flow problems. Mixed finite element methods are used for diffusive processes and velocity computations; techniques have been developed to implement mixed methods as finite differences on nonrectangular, fairly geometrically general domains, while maintaining the simplicity of a logically rectangular computational grid. Extensions to nonlinear problems have been considered, including unfavorable miscible displacement, biodegradation, and nonlinear sorption. A three-dimensional code for these extensions has been implemented and is under further development.
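To give a flavor of the advective discretizations mentioned above, the following is a minimal sketch of a first-order Godunov (upwind) step for the one-dimensional linear advection equation on a periodic grid. It is a toy illustration, not the project's code; all names and numbers are illustrative.

```python
# Sketch: first-order Godunov (upwind) step for the 1-D linear advection
# equation u_t + a u_x = 0 on a periodic grid -- a toy stand-in for the
# advective-flow discretizations described in the text.

def godunov_step(u, a, dt, dx):
    """Advance one time step with upwind fluxes (stable when |a|*dt/dx <= 1)."""
    n = len(u)
    nu = a * dt / dx  # CFL number
    if a >= 0:
        # Python's u[-1] wraps around, giving periodic boundaries for free.
        return [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]
    return [u[i] - nu * (u[(i + 1) % n] - u[i]) for i in range(n)]

# Advect a square pulse one full period around the grid.
u = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]
a, dx = 1.0, 1.0
dt = 0.5 * dx / a          # CFL number 0.5
mass0 = sum(u)
for _ in range(32):        # 32 steps * 0.5 cells/step = one full period
    u = godunov_step(u, a, dt, dx)
print(abs(sum(u) - mass0) < 1e-9)  # the upwind scheme conserves total mass
```

The scheme is monotone at this CFL number: the pulse diffuses but never overshoots, which is the property that makes Godunov-type methods attractive for sharp displacement fronts.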
Parallel Implementation of Reservoir Simulators on Distributed Memory Environments.
One of the major problems that limits users in simulation studies is the inability to employ fine enough grids to obtain accurate solutions. For parallel computation environments to alleviate this problem, major changes to existing codes, such as the addition of parallel linear solvers, must be made. The group has converted the University of Texas chemical flood simulator (UTCHEM) and is presently converting the compositional flood simulator (UTCOMP) to parallel machines. So far, these serial simulators have been used by more than 25 major oil companies and ten universities. FPMPP researchers are also working with the CRPC's compiler group to test whether the Fortran language could efficiently express the parallelism needed for the UTCOMP simulator. PIERS, a parallel simulator developed at Exxon for the Intel iPSC/2, has been ported to the Intel Delta and several other distributed-memory machines.
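Porting a serial simulator to a distributed-memory machine centers on one recurring pattern: partitioning the grid across processors and exchanging "ghost" (halo) values at subdomain edges each step. The toy below mimics that pattern for an explicit 1-D diffusion solve; the halo "messages" are plain Python copies where a real port would use message passing, and all names are illustrative.

```python
# Sketch: ghost-cell ("halo") exchange for a 1-D explicit diffusion solve
# split across two subdomains -- the pattern used when porting a serial
# simulator to a distributed-memory machine. Real codes would send the
# halo values as messages between processors.

def step(sub, left_ghost, right_ghost, r=0.25):
    """One explicit diffusion update on a subdomain with given ghost values."""
    padded = [left_ghost] + sub + [right_ghost]
    return [padded[i] + r * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Global grid of 8 cells with fixed zero boundaries, split into two halves.
left, right = [0.0, 0.0, 1.0, 1.0], [1.0, 1.0, 0.0, 0.0]
for _ in range(10):
    # "Exchange" halos: each subdomain hands its edge cell to its neighbor.
    lg, rg = right[0], left[-1]
    left = step(left, 0.0, lg)      # physical boundary on the far left
    right = step(right, rg, 0.0)    # physical boundary on the far right

# The decomposed run reproduces a serial run on the full grid exactly.
full = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
for _ in range(10):
    full = step(full, 0.0, 0.0)
print(all(abs(a - b) < 1e-12 for a, b in zip(left + right, full)))
```

Because the halos are exchanged before either subdomain updates, the decomposed computation performs exactly the same arithmetic as the serial one, which is the correctness check typically applied when validating a ported simulator.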

Conditional Simulation.
FPMPP researchers have been developing techniques for conditional simulation on parallel machines. The physical problem posed is the interpretation of field-scale tracer experiments using two-dimensional simulated annealing. The algorithm attempts to minimize an objective function, which includes two sets of data. Permeability and porosity data are generated and optimized to give the best fit to the semi-variogram, based on the known data at the wells.
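The annealing loop itself is simple to sketch. The toy below perturbs a 1-D permeability field, holds the well values fixed as hard constraints, and minimizes the mismatch between the field's lag-1 semivariance and a target value; the field size, target, and cooling schedule are all illustrative, and the real problem is two-dimensional with a richer objective.

```python
import math
import random

# Sketch: simulated annealing for a toy "conditional simulation" -- perturb
# a 1-D permeability field, keep the well values fixed, and fit the field's
# lag-1 semivariance to a target. All numbers are illustrative.

def semivariance(field, lag=1):
    pairs = [(field[i] - field[i + lag]) ** 2 for i in range(len(field) - lag)]
    return 0.5 * sum(pairs) / len(pairs)

def objective(field, target):
    return (semivariance(field) - target) ** 2

def anneal(field, wells, target, steps=3000, t0=1.0, seed=42):
    rng = random.Random(seed)
    cur, cur_cost = list(field), objective(field, target)
    best, best_cost = list(cur), cur_cost
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(len(cur))
        if i in wells:                          # well data are hard constraints
            continue
        cand = list(cur)
        cand[i] += rng.uniform(-0.2, 0.2)
        c = objective(cand, target)
        # Accept improvements always; accept uphill moves with Metropolis odds.
        if c < cur_cost or rng.random() < math.exp(-(c - cur_cost) / temp):
            cur, cur_cost = cand, c
        if cur_cost < best_cost:
            best, best_cost = list(cur), cur_cost
    return best, best_cost

field = [1.0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 2.0]   # wells at both ends
result, cost = anneal(field, wells={0, 7}, target=0.1)
print("misfit:", objective(field, 0.1), "->", cost)
```

The Metropolis acceptance rule is what lets the search escape local minima early on, while the cooling schedule makes it increasingly greedy; the perturbation loop is also the natural place to introduce parallelism, since many candidate fields can be evaluated at once.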

Dual-Porosity Simulation.
In naturally fractured porous media, different physical phenomena occur on disparate length scales, so it is difficult to properly average their effects. A general dual-porosity model was developed through the mathematical technique of formal two-scale homogenization. The resulting model is naturally suited to parallel computation, since flow in the blocks of porous rock forms a series of small, nearly independent problems.
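That block structure maps directly onto a pool of concurrent workers. The sketch below treats each matrix block as a tiny independent relaxation problem coupled only to a surrounding fracture pressure; the physics is a stand-in, and all names and numbers are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: the dual-porosity structure described in the text -- many small,
# nearly independent matrix-block problems coupled only through the fracture
# system. Each "block solve" here is a toy relaxation toward the fracture
# pressure; in the real model each would be a small flow problem.

def solve_block(args):
    """Relax a block's pressure toward the surrounding fracture pressure."""
    block_pressure, fracture_pressure = args
    for _ in range(50):                       # simple fixed-point iteration
        block_pressure += 0.2 * (fracture_pressure - block_pressure)
    return block_pressure

fracture_pressure = 10.0
blocks = [(p, fracture_pressure) for p in (1.0, 4.0, 7.0, 12.0)]

# The block solves share no state, so they can run concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(solve_block, blocks))

print(all(abs(p - fracture_pressure) < 1e-3 for p in results))
```

Because the blocks never communicate during a solve, this kind of parallelism scales almost perfectly, which is the property the homogenized model exploits.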
Groundwater Simulation.
As a technological spinoff, enhanced oil recovery simulation applies also to the remediation of contaminated aquifers. The broader interests of this group include the modeling and simulation of the transport and reaction of chemicals in groundwater. Of particular interest are biological processes, geochemistry, and radionuclide decay kinetics used in various remediation strategies.
Grand Challenges for High Performance Computing.
The FPMPP is currently collaborating with the Center for Petroleum and Geosystems Engineering at the University of Texas on a Department of Energy project in petroleum reservoir and groundwater modeling. The collaboration is part of an overall $18 million Grand Challenge program developed to solve fundamental and economically significant science and engineering issues that require unprecedented computational power.

Parallel Computation for Seismic Inversion

William Symes (director), Michel Kern, Alain Sei, Huy Tran, and Roelof Versteeg

The goal of this project is the improvement of reflection seismic data processing through development of new algorithms and employment of parallel computation. Reflection seismology provides the most detailed picture of the earth's structure available for petroleum exploration and production. It is the primary tool used by geophysicists to locate and map likely oil and gas prospects, and is of increasing importance in advanced production techniques. Seismic crews generate explosions or other sources of acoustic energy (noise), record the echoes from underground formations, and collect the records of many such "shots." Surveys are carried out at sea from ships towing cables full of microphones, in deserts, on mountains, and in swamps all over the world. The petroleum industry spends several billion dollars annually in the worldwide application of this technology. Texas and the contiguous Gulf of Mexico form the most thoroughly seismically surveyed territory in the world.

The current focus of this project is the development of accurate seismic inversion on a small industrial scale. In this context, "inversion" means estimating seismic wave velocity and other rock properties in a manner consistent with the data as measured in the field. Inversion is distinguished in contemporary usage from ordinary processing by its emphasis on automatically extracting as much information as possible. Seismic data does not sensitively reveal all the important details of subsurface structure; for example, seismic wavelengths are generally on the order of tens of meters, whereas the rock bed thicknesses important in reservoir simulation are on the order of meters. Thus human intervention and judgment are ultimately necessary to produce geologically meaningful results from seismic data. Nonetheless, many aspects of the subsurface structure are well determined by the data, and the task of inversion is to extract these properties with minimal human effort. The approach pursued by this project is to produce detailed subsurface models that are physically sensible and approximately reproduce the data, through detailed simulation of seismic wave propagation. The production of such models has been studied by many academic and industrial research groups around the world, but it remains very much an open research problem. Even small problems of this type involve large-scale computations that exhibit intrinsic parallelism at many levels. Thus, parallel computation is an essential tool and an integral part of this project.
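The inversion idea can be illustrated in miniature: choose the model whose simulated data best reproduce the observations. The sketch below inverts a single seismic velocity from toy straight-ray travel times; the real problem simulates full wave propagation over models with vastly more parameters, and everything here is illustrative.

```python
# Sketch: "inversion" in the sense described in the text -- search for a
# model (here, one seismic velocity) whose simulated data approximately
# reproduce the observed data. The forward model is a toy travel-time
# computation standing in for a full wave-propagation simulation.

def forward(velocity, offsets, depth=1000.0):
    """Toy forward model: two-way straight-ray travel times to each offset."""
    return [2.0 * (depth ** 2 + x ** 2) ** 0.5 / velocity for x in offsets]

def misfit(velocity, offsets, observed):
    """Least-squares mismatch between simulated and observed travel times."""
    return sum((s - o) ** 2 for s, o in zip(forward(velocity, offsets), observed))

offsets = [0.0, 200.0, 400.0, 600.0]
observed = forward(2000.0, offsets)   # synthetic "field data" at v = 2000 m/s

# A coarse grid search over candidate velocities recovers the true model.
best_v = min(range(1500, 2600, 100),
             key=lambda v: misfit(v, offsets, observed))
print(best_v)  # 2000
```

Each misfit evaluation requires an independent forward simulation, which is exactly where the intrinsic parallelism mentioned above enters: candidate models, and the many shots within each, can all be simulated concurrently.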

The improved estimation of seismic wave velocities is key to the "focusing" of seismic data and hence to the formation of accurate earth models. Over the past several years, researchers in this group have developed an approach to velocity estimation that overcomes theoretical obstacles fatal to other approaches. This approach is called differential semblance optimization (DSO). DSO is one of very few working techniques for the extraction of velocities and other earth features directly from seismic data, with minimal human intervention. It has been successfully applied both in synthetic model studies and in field data trials. It accommodates a wide variety of physical descriptions of the seismic wave propagation process and yields estimates of physical parameters such as compressional and shear velocities, density, and source radiation pattern that are indicative of rock properties and in some cases fluid content. A number of other research groups around the world have developed approaches similar to DSO, some independently, some inspired directly by the work at Rice University. The technological potential of DSO is also being evaluated through The Rice Inversion Project, an industrial research consortium with eight corporate sponsors in 1994 that also supports the work of this group.

Along with developing a theoretical understanding of this approach, researchers from this group have produced a prototype computer implementation that runs efficiently on a variety of platforms (from UNIX workstations to massively parallel supercomputers) with uniform interfaces. This implementation must accommodate a wide variety of data set sizes and formats and physical modeling assumptions. Therefore the code is designed to isolate simulation and related modules from those that perform generic tasks (e.g., I/O, linear algebra, and optimization) common to all instances. This design has been very successful in allowing project members to quickly test the effects of changed physical assumptions and data properties, and may have some utility for researchers in other fields.
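The design principle above, generic code written against an abstract simulation interface, can be sketched in a few lines. Here a generic minimizer sees only a `simulate` method, so a new physical model plugs in without any change to the generic code; the class and method names are purely illustrative.

```python
# Sketch: the modular design described in the text -- generic tasks (here, a
# minimizer) are written against an abstract interface, so a new physical
# model plugs in without changing the generic code. Names are illustrative.

class ForwardModel:
    """Interface that every simulation module implements."""
    def simulate(self, model):
        raise NotImplementedError

class QuadraticToy(ForwardModel):
    """Stand-in physics: the 'data' is just the model parameter squared."""
    def simulate(self, model):
        return model * model

def invert(physics, observed, lo, hi, iters=60):
    """Generic bisection solver for simulate(m) = observed on [lo, hi].
    Assumes simulate is increasing there; knows nothing else about it."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if physics.simulate(mid) < observed:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m = invert(QuadraticToy(), observed=9.0, lo=0.0, hi=10.0)
print(abs(m - 3.0) < 1e-6)
```

Swapping `QuadraticToy` for a different `ForwardModel` subclass changes the physics without touching `invert`, which is the property that let project members test changed physical assumptions quickly.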

Parallelism is implicit at many levels in seismic inversion software. For example, seismic experiments are really "multi-experiments" consisting of many logically identical sub-experiments that can be simulated in parallel. It is a commonplace of scientific computation that realizing the benefits of parallelism requires considerable effort. Researchers in the Seismic Inversion subproject have implemented multi-experiment parallelism at the level of generic software (i.e., in a way independent of simulation details) to amortize the parallelization effort over many instances of the code. The design uses explicit message passing implemented through Oak Ridge National Laboratory's Parallel Virtual Machine package. In principle, this software could even be used to drive multi-experiment simulations having nothing to do with seismology (e.g., conditional simulations of reservoir flow). The parallel inversion software runs especially well in distributed computing environments, i.e., workstation clusters, and could be useful even in smaller firms and laboratories that do not have access to large multiprocessor machines.
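The multi-experiment pattern is essentially master/worker: a master scatters one task per shot, workers simulate shots independently, and the master gathers the results. The sketch below mimics that structure with thread-safe queues standing in for the message passing; the per-shot "simulation" and all names are illustrative.

```python
import queue
import threading

# Sketch: master/worker "multi-experiment" parallelism in the spirit of the
# message-passing design described in the text. The master posts one task
# per shot; each worker repeatedly takes a task, simulates that shot, and
# sends back the result. The queues stand in for explicit message passing.

def simulate_shot(shot_id):
    """Toy per-shot simulation: any expensive, independent computation."""
    return shot_id, sum(i * i for i in range(1000)) + shot_id

def worker(tasks, results):
    while True:
        shot = tasks.get()
        if shot is None:               # poison pill: no more work
            break
        results.put(simulate_shot(shot))

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for w in workers:
    w.start()

shots = list(range(8))
for s in shots:                        # master scatters the shot records
    tasks.put(s)
for _ in workers:                      # one poison pill per worker
    tasks.put(None)
for w in workers:
    w.join()

gathered = dict(results.get() for _ in shots)   # master gathers results
print(sorted(gathered) == shots)
```

Because the generic master/worker layer never inspects what a "shot" is, the same driver can schedule any set of independent sub-experiments, which is why the design amortizes the parallelization effort over many instances of the code.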

GPCP: A Valuable Resource for Industry

Oil and natural gas production are of critical importance to the United States economy, and future production depends on computational science to aid the extraction of oil and gas from existing reserves in the United States. Parallel computers permit researchers to create detailed oil reservoir models that help them better predict the effects of well placement and enhanced oil recovery strategies. Since much of the GPCP's work, particularly that of the FPMPP, can also be applied to groundwater remediation, the GPCP is a valuable resource to environmental scientists and engineers as well as to industries that must work within stringent environmental regulations for local water quality.

The GPCP is one of several applications projects that draw upon the fundamental advances in parallel computing made by the CRPC's main research thrusts to make parallel computing truly usable to scientists and engineers. "Our applications projects are putting the fruits of our labor in parallel computing to work on important real-world problems," said Geoffrey Fox, coordinator of the CRPC's applications projects. "What we're doing now for the petroleum and environmental industries is an excellent example of CRPC's strategy of applying the best HPCC technology in focused industrial and academic areas," said Fox. "In the latter regard, we have a significant role in ten Grand Challenges that we will describe in later articles. CRPC's strong internal computer science projects give us collaborations with the experts in most important computer technology areas. In this way, we can help application scientists even when CRPC technology is not directly applicable."

Editor's note: Other CRPC applications projects will be highlighted in future "Research Focus" articles.
