Geosciences Parallel Computation Project

Oil and natural gas production are of critical importance to the United States economy. Future production depends on computational science to aid the extraction of oil and gas from existing reserves in the United States. Parallel computers permit researchers to create detailed oil reservoir models that help them better predict the effects of well placement and enhanced oil recovery strategies. To address the computational needs of the petroleum industry, the Geosciences Parallel Computation Project (GPCP) was established through a grant by the state of Texas, as part of a matching commitment for the CRPC. The application projects of the GPCP are described on the following pages.

Subsurface Modeling Group

Mary Wheeler (director), Todd Arbogast, Clint N. Dawson, Philip T. Keenan, Luca Pavarino, Marcelo Rame, Chong-Huey Wang

The Subsurface Modeling Group (SMG) is pursuing several objectives: to develop accurate and efficient parallel algorithms for reservoir simulation; to develop an understanding of parallel scaling issues in reservoir simulation; to investigate problems in porting reservoir simulators to parallel computers; to develop techniques for conditional simulation on parallel machines and for the parallel simulation of contaminant remediation; and to perform basic research on various aspects of flow in porous media. These objectives are essential for predicting the response of reservoirs to complicated processes, and for understanding, designing, and testing economically feasible recovery or decontamination strategies.

Mary Wheeler's research interests include the numerical solution of partial differential systems with application to flow in porous media and parallel computation. Her numerical work includes formulation, analysis, and implementation of finite-difference/finite-element discretization schemes for nonlinear coupled partial differential equations as well as domain decomposition iterative solution methods. Her applications include reservoir engineering and contaminant transport in groundwater. Current work has emphasized mixed finite-element methods for modeling reactive multi-phase flow and transport in heterogeneous porous media, with the goal of simulating these systems on parallel computing platforms.

Specific work within the SMG includes:

  • New Discretization Algorithms. One thrust has been to develop accurate and efficient parallel algorithms for reservoir simulation. Investigation has focused on the application of solution techniques such as linear and nonlinear multigrid methods with operator-based averaging, and domain decomposition. A high priority is the investigation of advanced finite-difference and finite-element discretization methods, such as Godunov, characteristic, and Leonard schemes, for advective flow problems. Mixed finite-element methods are used for diffusive processes and velocity computations. Extensions to nonlinear problems have been considered, including unfavorable miscible displacement, biodegradation, and nonlinear sorption. A three-dimensional code for these extensions has been implemented and is under further development.
  • Parallel Implementation of Reservoir Simulators on Distributed Memory Environments. One of the major problems that limits users in simulation studies is the inability to employ fine enough grids to obtain accurate solutions. For parallel computation environments to alleviate this problem, major changes to existing codes, such as the addition of parallel linear solvers, must be made. The group has converted the University of Texas chemical flood simulator (UTCHEM) and is presently converting the compositional flood simulator (UTCOMP) to parallel machines. So far, the simulators have been used by more than 25 major oil companies and ten universities. Another simulator, PIERS, has been ported to the Intel Delta and several other distributed-memory machines.
  • Conditional Simulation. SMG researchers have also been developing techniques for conditional simulation on parallel machines. The physical problem posed is the interpretation of field-scale tracer experiments using two-dimensional simulated annealing. The algorithm minimizes an objective function that incorporates two sets of data: permeability and porosity data are generated and optimized to give the best fit to the semi-variogram, conditioned on the known data at the wells.
  • Dual-Porosity Simulation. In naturally fractured porous media, different physical phenomena occur on disparate length scales, so it is difficult to properly average their effects. A general dual-porosity model was developed through the mathematical technique of formal two-scale homogenization. The resulting model is naturally suited to parallel computation, since flow in the blocks of porous rock forms a series of small, nearly independent problems.
  • Groundwater Simulation. As a technological spinoff, enhanced oil recovery simulation applies also to the remediation of contaminated aquifers. The broader interests of this group include the modeling and simulation of the transport and reaction of chemicals in groundwater. Of particular interest are biological processes, geochemistry, and radionuclide decay kinetics used in various remediation strategies.
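The swap-based simulated annealing used in the conditional-simulation work above can be sketched in miniature. The one-dimensional field, exponential variogram model, and parameter choices below are illustrative assumptions, not the group's actual code:

```python
import math
import random

def semivariogram(field, lag):
    """Empirical semi-variogram: half the mean squared difference at a lag."""
    diffs = [(field[i + lag] - field[i]) ** 2 for i in range(len(field) - lag)]
    return 0.5 * sum(diffs) / len(diffs)

def objective(field, model, lags):
    """Misfit between the field's semi-variogram and the target model."""
    return sum((semivariogram(field, h) - model(h)) ** 2 for h in lags)

def anneal(field, model, lags, steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Swap two cells at a time: a swap preserves the histogram of
    property values while reshaping spatial correlation toward the target."""
    rng = random.Random(seed)
    field = list(field)
    energy = objective(field, model, lags)
    best, best_e = list(field), energy
    temp = t0
    for _ in range(steps):
        i, j = rng.randrange(len(field)), rng.randrange(len(field))
        field[i], field[j] = field[j], field[i]
        new_e = objective(field, model, lags)
        if new_e < energy or rng.random() < math.exp((energy - new_e) / temp):
            energy = new_e                               # accept the swap
            if energy < best_e:
                best, best_e = list(field), energy
        else:
            field[i], field[j] = field[j], field[i]      # undo the swap
        temp *= cooling
    return best, best_e
```

In the field-scale problem the objective also honors well data; here only the variogram term is shown, and each annealing chain is independent, which is what makes the method attractive on parallel machines.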

Parallel Computation for Seismic Inversion

William Symes (director), Joakim Blanch, Kenneth P. Bube, Michel Kern, Alain Sei, Huy Tran, and Roelof Versteeg

The goal of this project is the improvement of reflection seismic data processing through development of new algorithms and employment of parallel computation. Reflection seismology provides the most detailed picture of the earth's structure available for petroleum exploration and production. It is the primary tool used by geophysicists to locate and map likely oil and gas prospects, and is of increasing importance in advanced production techniques. Seismic crews generate explosions or other sources of acoustic energy (noise), record the echoes from underground formations, and collect the records of many such "shots." Surveys are carried out at sea from ships towing cables full of microphones, in deserts, on mountains, and in swamps all over the world. The petroleum industry spends several billion dollars annually in the worldwide application of this technology. Texas and the contiguous Gulf of Mexico form the most thoroughly seismically surveyed territory in the world.

The current focus of this project is the development of a feasible plan for accurate seismic inversion on a small industrial scale. This investigation will include estimating seismic wave velocity directly from waveform data and extracting detailed estimates of local parameter fluctuations in the earth model. In this project, models are regarded as successful if they are physically sensible and approximately reproduce the data, through detailed simulation of seismic wave propagation. The production of such models is very much an open research problem, and is the first step toward extraction of maximal information from seismic data. Even small problems of this type involve large-scale computations that exhibit intrinsic parallelism at many levels. Thus, parallel computation is an essential tool and an integral part of this project.

The improved estimation of seismic wave velocities is key to the formation of earth models. Over the past several years, group researchers have developed an approach to velocity estimation that overcomes theoretical obstacles fatal to other approaches. This approach is called differential semblance optimization (DSO). DSO is one of a limited number of working techniques for the extraction of velocities and other earth features directly from seismic data, with minimal human intervention. It has been successfully applied both in synthetic model studies and in field data trials. A number of other research groups around the world have developed similar ideas, some arrived at independently, some inspired by the work at Rice University. The technological potential of DSO is also being demonstrated through The Rice Inversion Project, an industrial research consortium that has tested DSO implementations in selected field data situations.
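The velocity-estimation idea behind DSO can be illustrated with a toy "flatten the gather" misfit: a trial velocity is penalized by the disagreement between depths imaged at adjacent offsets, and the misfit vanishes when all offsets agree. The single flat reflector, straight-ray moveout, and velocity scan below are illustrative assumptions, not the Rice implementation:

```python
import math

def depth_image(v, offset, t_obs):
    """Depth implied by a trial velocity for one source-receiver offset
    (toy straight-ray moveout for a single flat reflector)."""
    s = (v * t_obs) ** 2 - offset ** 2
    return math.sqrt(s) if s > 0 else float("inf")

def dso_misfit(v, offsets, times):
    """Toy differential-semblance misfit: squared disagreement between
    depths imaged at adjacent offsets; zero when every offset agrees."""
    z = [depth_image(v, h, t) for h, t in zip(offsets, times)]
    if any(math.isinf(d) for d in z):
        return float("inf")            # trial velocity too low to image
    return sum((a - b) ** 2 for a, b in zip(z, z[1:]))

# Synthetic data: one reflector at depth 1.0 km, true velocity 2.0 km/s
offsets = [0.0, 0.5, 1.0, 1.5]
times = [math.sqrt(1.0 + h * h) / 2.0 for h in offsets]

# A coarse velocity scan; the misfit varies smoothly with velocity and
# bottoms out at the true value, which is what makes optimization viable
best_v = min((v / 100.0 for v in range(150, 301)),
             key=lambda v: dso_misfit(v, offsets, times))
```

The smoothness of this kind of misfit in the velocity, in contrast to the highly oscillatory behavior of a raw waveform misfit, is the theoretical obstacle DSO was designed to overcome.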

Along with developing a theoretical understanding of this approach, researchers from this group have produced a prototype computer implementation that runs efficiently on a variety of platforms (from UNIX workstations to massively parallel supercomputers) with uniform interfaces.

Reservoir Characterization

John Dennis (director), Robert Michael Lewis, and Virginia Torczon

Several important problems arise in petroleum resources management that are naturally posed as optimization problems. For instance, suppose an enhanced oil recovery (EOR) strategy for an oil reservoir is being designed. In this strategy, one might inject CO2 into the reservoir at some wells to drive the oil to other wells for extraction. A natural question arises: what is the best EOR strategy that can be designed? The meaning of "best" will vary from situation to situation. Roughly speaking, the best EOR strategy maximizes the amount of oil recovered subject to limits on cost and perhaps other economic and technological constraints. A similar optimization problem arises in reservoir modeling. Generally, there is incomplete information about such physical properties as the variation of rock permeability in the reservoir. On the other hand, such knowledge is crucial to accurately modeling the behavior of the reservoir. The question then is: what is the best estimate of such reservoir properties?
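The "best EOR strategy" question can be posed in miniature as a constrained maximization. The recovery response, cost model, and budget below are invented for illustration; in practice the response function is a full reservoir simulation rather than a closed-form expression:

```python
import itertools

def recovered_oil(rates):
    """Toy recovery response (invented): diminishing returns per well."""
    return sum(r - 0.05 * r * r for r in rates)

def injection_cost(rates):
    """Toy linear cost of injecting CO2 at the given per-well rates."""
    return sum(2.0 * r for r in rates)

def best_strategy(candidates, budget):
    """Maximize recovery over candidate injection-rate vectors,
    subject to a cost limit: the EOR design question in miniature."""
    feasible = [r for r in candidates if injection_cost(r) <= budget]
    return max(feasible, key=recovered_oil) if feasible else None

# Enumerate per-well injection rates 0..3 at two injection wells
candidates = list(itertools.product(range(4), repeat=2))
best = best_strategy(candidates, budget=8.0)
```

Brute-force enumeration like this is only feasible for a toy; the point of the group's work is to replace it with efficient optimization methods.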

The solution of these problems in petroleum resources management is the work of the Reservoir Characterization group of the Geosciences Parallel Computation Project. The group's goal is to combine the developments in numerical optimization with the power of parallel computation. Researchers are developing computational methods that will improve upon the time-consuming optimization methods currently used in the oil industry.

The optimization problems that arise in oil reservoir management have enormous computational requirements. For instance, in the problem of determining the best EOR strategy, various sets of injection and extraction rates at the wells are examined to determine the set that produces the best results. This requires the repeated simulation of the oil reservoir's response to the different injection and extraction rates. Only since the early 1980s has research in optimization yielded efficient methods to handle such large problems. At the same time, the development of parallel computation has given us machines on which to solve these large-scale problems quickly and efficiently.
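The passage above does not name a specific algorithm, but a derivative-free pattern search, a class of methods associated with Dennis and Torczon, gives the flavor of how such a search is organized and where parallelism enters: each poll point's objective evaluation (a full reservoir simulation in practice) is independent of the others. A minimal serial sketch, with an invented quadratic misfit standing in for the simulator:

```python
def pattern_search(f, x0, step=1.0, tol=1e-3, max_iter=200):
    """Derivative-free coordinate pattern search. Each iteration polls
    2n trial points around the current iterate; in reservoir problems
    each poll is an independent simulation, so polls can run in parallel."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5      # contract the pattern when no poll improves
    return x, fx
```

Because the method needs only objective values, never derivatives of the simulator output, it suits problems where each evaluation is an expensive black-box simulation.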

