Schlumberger

Time-Lapse Seismic Processing

Factors affecting repeatability in time-lapse 4D processing

There are many factors that affect the repeatability of time-lapse seismic data. In most cases, statistical processes are available to address these challenges, although such processes have some inherent variability due to noise and other causes. The calibrated marine source that is a key component of the Q-Marine acquisition system helps to address them at the acquisition stage.

Offset regularization

The amplitude response of seismic reflectors generally varies with offset (source-to-receiver distance). The data bandwidth also changes with offset due to normal moveout (NMO) stretch and other effects, and the timing may differ as a result of non-hyperbolic NMO or other factors. The characteristics of these reflectors on a stacked section will depend upon the offset range included in the stack. It is therefore essential that the stacked data volumes used for time-lapse analysis are created using the same offset ranges for each survey. A minimum and maximum offset, appropriate for all surveys, is defined, and data outside this range are discarded. This offset range is divided into a fixed number of offset bins, generally with a constant offset increment between bins.
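As a minimal illustration of this binning step (hypothetical offsets and bin parameters; numpy only, not production code):

```python
import numpy as np

def assign_offset_bins(offsets, off_min=100.0, off_max=3500.0, d_off=100.0):
    """Assign traces to regular offset bins shared by all surveys.
    Traces outside [off_min, off_max) are flagged -1 and discarded."""
    offsets = np.asarray(offsets, dtype=float)
    idx = np.floor((offsets - off_min) / d_off).astype(int)
    idx[(offsets < off_min) | (offsets >= off_max)] = -1
    n_bins = int(np.ceil((off_max - off_min) / d_off))
    return idx, n_bins

offsets = np.array([85.0, 130.0, 990.0, 3420.0, 3600.0])
bins, n_bins = assign_offset_bins(offsets)
print(bins, n_bins)   # [-1  0  8 33 -1] 34
```

Applying the identical bin definition to every vintage is what guarantees that the stacks being differenced sample the same offset range.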

Ideally, the number of contributions to each offset bin should be fairly consistent between the surveys, though this can be handled to some extent by careful regularization processing. Occasionally, substantial excess fold is present, for example in the overlap areas between prime and undershoot acquisition, or as the result of a deliberate acquisition policy, e.g., overlapped marine streamer acquisition. A 4D binning scheme can be applied to each offset bin to choose the most similar pairs of traces from the two surveys.
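A sketch of such a 4D binning criterion, using the NRMS difference as the similarity measure (a common choice, though the criterion actually used may differ):

```python
import numpy as np

def nrms(a, b):
    """Normalized RMS difference between two traces: 0 for identical traces,
    about 141 for uncorrelated traces, 200 maximum."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

def most_similar_pairs(base_bin, monitor_bin):
    """For each base trace in an offset bin, keep the monitor trace that
    minimizes NRMS; surplus fold in either vintage is dropped."""
    return [(i, int(np.argmin([nrms(b, m) for m in monitor_bin])))
            for i, b in enumerate(base_bin)]
```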

Variable fold of coverage

The fold of coverage within a given offset bin varies from one cell to the next as a result of the variability of the acquisition. Excess fold can be handled very well by equalized dip moveout (DMO), either alone or as part of a regularization scheme based on forward/inverse DMO. Alternatively, redundant traces can be discarded, either by using a simple nearest-bin-center criterion or by 4D binning to choose the most similar traces from each survey. It is preferable to reconstitute missing data by interpolation rather than to use flex binning.
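A minimal sketch of the nearest-bin-center selection (hypothetical data layout: arrays of trace midpoints with precomputed cell indices):

```python
import numpy as np

def keep_nearest_to_bin_center(trace_ids, cell_idx, midpoints, bin_centers):
    """Within each populated cell of an offset bin, keep only the trace whose
    midpoint lies closest to the cell center; redundant traces are discarded.
    bin_centers is an array of (x, y) centers indexed by cell number."""
    best = {}
    for tid, cell, xy in zip(trace_ids, cell_idx, midpoints):
        d = np.hypot(*(xy - bin_centers[cell]))
        if cell not in best or d < best[cell][1]:
            best[cell] = (tid, d)
    return sorted(tid for tid, _ in best.values())
```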

Rigs and other obstructions can also cause loss of fold, with certain offset bins being empty over a substantial area. Frequently, the lower coverage is present on the newer survey, but not on the older one, which may have been acquired before the obstruction was in place. In this case, there are two options:

  • The regularized data from the old survey can be used to infill the hole in the newer one. Obviously, the two surveys will be identical in this area, which may mislead unsuspecting interpreters.
  • An identical hole can be introduced into the affected offset bins of the older survey. This has the advantage that it is obvious what has been done, and that both surveys are treated identically. There is, however, a risk of introducing edge effects around the hole. These may not be identical from one survey to the next due to differences in noise and other factors, and so could degrade repeatability in this area of the survey (a minimal masking sketch follows this list).
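A trivial sketch of the second option: form the union of the dead cells and kill those cells in both vintages (hypothetical array layout, one offset bin, traces as rows):

```python
import numpy as np

def apply_common_hole(base, monitor, live_base, live_monitor):
    """base/monitor: (n_cells, n_samples) regularized traces of one offset
    bin; live_*: boolean (n_cells,) fold indicators. Any cell that is dead
    in either vintage is zeroed in both, so the surveys carry an identical
    hole and are treated identically."""
    common_live = live_base & live_monitor
    return base * common_live[:, None], monitor * common_live[:, None]
```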

Navigation data quality

Time-lapse seismic processing and analysis are very dependent on the quality of the navigation data. Bulk shifts in positioning between surveys are surprisingly common, and it is important that these be detected at an early stage. Bulk shifts can be compensated by a simple update of the XY coordinates of the sources and receivers, provided that they are discovered early enough. Simple errors in navigation processing can usually be determined and fixed, but more complex problems such as errors in streamer shape may be more difficult to resolve. Poststack repositioning can improve, but not solve, problems caused by inaccurate navigation.
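One simple way to detect a bulk shift is to cross-correlate gridded attribute maps (e.g., water-bottom amplitude) from the two surveys; a sketch, assuming axis-aligned grids:

```python
import numpy as np

def estimate_bulk_shift(map_base, map_monitor, dx_bin, dy_bin):
    """Locate the 2D cross-correlation peak between two attribute maps and
    return the (x, y) shift of map_monitor relative to map_base; the
    correction is then a constant update to the XY coordinates of the
    sources and receivers of the affected survey."""
    cc = np.fft.ifft2(np.fft.fft2(map_base) * np.conj(np.fft.fft2(map_monitor))).real
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
    ny, nx = cc.shape
    if iy > ny // 2: iy -= ny            # wrap lags into the signed range
    if ix > nx // 2: ix -= nx
    return -ix * dx_bin, -iy * dy_bin    # peak lag is minus the monitor shift
```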

Survey azimuth

Marine streamer time-lapse surveys that have been acquired with the same survey azimuths have much greater inherent repeatability than surveys shot with different azimuths. This is because rapid velocity variations in the overburden can cause significant positioning, timing, and amplitude differences between traces that have a common midpoint, but very different azimuths. The positioning issues and amplitude issues are very difficult to address unless the velocity variations can be accurately modeled for prestack depth migration. However, timing differences are often the major cause of non-repeatability and these can be, at least partially, compensated by one of the following methods:

  • Interpret NMO velocities on one of the surveys, apply them to both, and create a reference or model dataset by stacking. Trim statics picked against this reference normalize the timing of the prestack gathers on both datasets and so enhance the repeatability of the two volumes. This works well if the prestack data are of sufficient quality for the trim-static picking process to operate consistently.
  • Interpret NMO velocities on the first survey and use these as the starting point to interpret a new set of NMO velocities on the second survey. It is preferable to use automatic velocity picking, where it is reliable, as this tends to be more consistent than a human interpreter. Trim statics, using a common reference volume, can be applied to each survey after NMO correction to further improve the match (a minimal trim-static sketch follows this list).
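A minimal trim-static sketch, picking the best integer-sample alignment of each NMO-corrected trace against the common reference (production flows interpolate to sub-sample precision):

```python
import numpy as np

def pick_trim_static(trace, reference, max_lag):
    """Return the lag (in samples) at which the trace best matches the
    reference trace, found by scanning the cross-correlation; a positive
    lag means the trace arrives late and should be shifted up by -lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.dot(np.roll(trace, -lag), reference) for lag in lags]
    return int(lags[int(np.argmax(cc))])
```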

Wavelet matching

So that time-lapse datasets have identical wavelets, two issues must be addressed:

  • Correction of bulk wavelet differences between the time-lapse datasets
  • Correction of wavelet variability within each volume

The shape and timing of minimum-phase wavelets are strongly dependent on their amplitude spectra. Zero-phase wavelets are less variable, and so we convert the data to zero phase at an early stage. This simplifies the analysis of repeatability and ensures that the subsequent processing algorithms operate consistently on each dataset. We typically use a deterministic zero-phasing procedure that zero-phases and matches the vertical-incidence wavelets. It assumes that the source and receiver depths are sufficiently similar that differences in the angular variation of the signatures are negligible. If this is not the case, we may apply angular designature.
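As an illustration of the deterministic zero-phasing idea, assuming a measured vertical-incidence source signature is available (stabilized spectral division; a sketch, not the production designature):

```python
import numpy as np

def zero_phasing_filter(signature, n_fft, eps=1e-3):
    """Design a filter that maps the known source signature to its zero-phase
    equivalent (same amplitude spectrum, zero phase). Note the zero-phase
    output is centered at time zero and wraps in the FFT; real flows manage
    the time origin explicitly."""
    S = np.fft.rfft(signature, n_fft)
    amp = np.abs(S)
    return amp * np.conj(S) / (amp ** 2 + eps * amp.max() ** 2)

def apply_filter(trace, F, n_fft):
    """Apply the zero-phasing filter to a trace in the frequency domain."""
    return np.fft.irfft(np.fft.rfft(trace, n_fft) * F, n_fft)[: len(trace)]
```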

The deterministic zero-phasing procedure assumes that the source is stable. We can distinguish between three general stability problems:

  • Variable wavelet amplitude, such as that caused by airgun pressure variations. This is quite common, and can, in theory, be corrected deterministically if source pressures are recorded in the field tape headers. Otherwise, we can perform a process such as surface-consistent amplitude compensation (sketched after this list). Careful testing and QC are required to ensure the stability of the process.
  • Variable bubble shape, caused by changes in ambient conditions and airgun pressure. If this is an issue, a single-window shot-averaged long-gap deconvolution, or surface-consistent deconvolution may help to reduce it.
  • Variable wavelet shape, such as that caused by airgun dropouts. This is generally addressed by a statistical compensation such as shot-averaged or surface-consistent deconvolution.
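A sketch of the surface-consistent amplitude decomposition mentioned in the first point (log-domain least squares over shot and receiver terms only; real implementations add offset and midpoint terms):

```python
import numpy as np

def surface_consistent_gains(trace_rms, shot_idx, rcv_idx, n_shots, n_rcvs):
    """Decompose log RMS trace amplitudes into per-shot and per-receiver
    gains; dividing each trace by shot_gain * rcv_gain compensates both.
    The split between shot and receiver terms has a scale ambiguity;
    lstsq returns the minimum-norm solution."""
    n = len(trace_rms)
    A = np.zeros((n, n_shots + n_rcvs))
    A[np.arange(n), shot_idx] = 1.0
    A[np.arange(n), n_shots + rcv_idx] = 1.0
    x, *_ = np.linalg.lstsq(A, np.log(trace_rms), rcond=None)
    return np.exp(x[:n_shots]), np.exp(x[n_shots:])
```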

Spatial regularization

It is important that the time-lapse seismic volumes are collocated, i.e., traces from one survey have identical XY coordinates to equivalent traces from the second survey. This is achieved by ensuring that all traces of the final processed datasets are located at bin centers. Spatial regularization interpolates the prestack data to bin centers from the irregular locations at which they were acquired. We do this during imaging using one of the following methods (a bin-center mapping sketch follows the list):

  1. Interpolation. The traces of each offset bin may be interpolated to bin centers prior to imaging. WesternGeco does not favor this approach, as interpolation before imaging is only valid if the traces involved have very similar azimuths.
  2. DMO. The use of spatially dealiased (so-called "FAT") DMO ensures that the contribution of each input trace is interpolated to the bin centers, rather than simply being copied as would happen with conventional DMO. Empty cells within a given offset plane are infilled by interpolation before or after DMO.
  3. Prestack Kirchhoff migration. This will also interpolate the irregularly sampled input traces to bin centers, although it is often preferable to regularize the data beforehand using forward/inverse DMO.
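The bin-center mapping referred to above is straightforward; a sketch assuming a grid aligned with the coordinate axes (production grids are usually rotated, which adds a rotation matrix):

```python
import numpy as np

def snap_to_bin_centers(midpoints, origin, d_inline, d_xline):
    """Map midpoint (x, y) coordinates to the nearest bin center of a common
    grid so that traces from both vintages are exactly collocated."""
    step = np.array([d_inline, d_xline])
    return origin + np.round((midpoints - origin) / step) * step
```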

Multiple attenuation

Water-bottom multiples tend to be non-repeatable because minor changes in the two-way water-bottom time, caused by tidal or water velocity variations, are magnified with each reverberation within the water layer. Reverberation trains will not be repeatable from one survey to the next. Also, if the reverberations vary spatially, they may appear on one trace of a given midpoint gather, but not on another. Therefore, in areas with strong water-layer multiples, we remove as much water-bottom multiple energy as possible. Processes such as Radon demultiple tend to be fairly stable and are preferred over more data-adaptive algorithms such as wave-equation demultiple. However, it is often necessary to combine processes to achieve sufficient multiple attenuation and so we perform careful testing to determine the best compromise between multiple removal and possible modification of primary reflections.
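The magnification effect is easy to quantify: a change dtw in the two-way water-bottom time shifts the n-th order reverberation by n * dtw, as this back-of-envelope check shows (hypothetical 2-ms change):

```python
dtw_ms = 2.0   # tide- or water-velocity-induced change in two-way water time
for n in range(1, 6):
    print(f"multiple order {n}: time shift = {n * dtw_ms:.0f} ms")
# 2, 4, 6, 8, 10 ms -- high-order multiples quickly become grossly non-repeatable
```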

Variable group sensitivity

This is common on older surveys and is difficult to correct deterministically as the variations tend to change as cable sections are swapped during acquisition. Surface-consistent amplitude compensation is commonly used to address this problem.

Swell noise attenuation

Swell noise usually only becomes a problem below the reservoir zone, but may be swept up into the reservoir during DMO and prestack migration. Our zone anomaly and swell-noise attenuation processes are highly effective in suppressing swell noise, but need careful QC to ensure that data at the reservoir are not affected. This is achieved by differencing the input and output of the process and autopicking the first live sample of the difference volume. All difference traces that have live data at the level of the reservoir are displayed to ensure that they are dominated by noise.
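A sketch of this QC, assuming input and output are arrays of traces and the reservoir level is known as a sample index:

```python
import numpy as np

def qc_noise_attenuation(before, after, reservoir_sample):
    """Difference the input and output of the noise-attenuation step, autopick
    the first live sample of each difference trace, and return the indices of
    traces whose difference reaches the reservoir level (for visual review)."""
    diff = before - after                          # what the process removed
    live = np.abs(diff) > 0.0
    n_samp = diff.shape[1]
    first_live = np.where(live.any(axis=1), live.argmax(axis=1), n_samp)
    return np.flatnonzero(first_live <= reservoir_sample)
```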

Tidal statics

Tidal variations cause time shifts between surveys. Although these are typically too small to affect the quality of the stacked datasets, they will degrade difference volumes computed between the surveys. Compensating for tidal statics using corrections derived from tide tables or modeled or measured tidal values removes this source of variability, and may allow identification of the causes of other minor time shifts.
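Applying the correction is a simple static; a sketch using a frequency-domain phase shift so that sub-sample tidal values are honored:

```python
import numpy as np

def apply_tidal_static(trace, shift_ms, dt_s):
    """Delay a trace by shift_ms milliseconds (negative values advance it)
    via a linear phase ramp, allowing sub-sample shifts."""
    n = len(trace)
    f = np.fft.rfftfreq(n, dt_s)
    phase = np.exp(-2j * np.pi * f * shift_ms * 1e-3)
    return np.fft.irfft(np.fft.rfft(trace) * phase, n)
```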

Water velocity variations

Variations in the speed of sound in the water layer can cause time shifts between, or within, surveys. When the variations are minor, swath-dependent time shifts can compensate for this effect (and may also correct minor errors in tidal statics and other residual shifts). Alternatively, a 4D-consistent water velocity estimation and compensation procedure may be applied.
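The size of the effect follows directly from the water depth and the velocity change; a back-of-envelope check with hypothetical values:

```python
d = 400.0                  # water depth, m
v1, v2 = 1480.0, 1490.0    # water velocity on the two surveys, m/s
dt_ms = 2.0 * d * (1.0 / v1 - 1.0 / v2) * 1000.0
print(f"{dt_ms:.2f} ms")   # ~3.63 ms -- easily visible on a 4D difference
```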

Coherent noise attenuation

Steeply dipping noise (and primary energy) tends to be less repeatable as it is more affected by differences in positioning and in the performance of imaging algorithms. Pre- or poststack dip filtering in f-k or tau-p space is commonly applied to time-lapse data volumes. Simple k filtering is often very effective. The reservoir differences that are the objective of the processing tend to be fairly flat-lying, and so quite aggressive spatial filtering can often be used on time-lapse datasets.
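A minimal f-k dip filter of the kind described, rejecting events with apparent velocity below a threshold (hard mask for brevity; a tapered mask would be used in practice to avoid ringing):

```python
import numpy as np

def fk_dip_filter(section, dt, dx, v_min):
    """Zero all f-k components with apparent velocity |f/k| < v_min (steep
    dips); flat-lying 4D signal passes untouched."""
    F = np.fft.fft2(section)
    f = np.fft.fftfreq(section.shape[0], dt)[:, None]   # temporal frequency, Hz
    k = np.fft.fftfreq(section.shape[1], dx)[None, :]   # wavenumber, 1/m
    keep = np.abs(f) >= v_min * np.abs(k)
    return np.fft.ifft2(F * keep).real
```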

Random noise attenuation

Because the typical processing flow contains a number of multichannel processes, genuinely random noise is not usually a problem. However, noise can appear randomly organized in a given direction. For example, it is often found that applying 2D FX deconvolution in the crossline direction increases repeatability by attenuating small line-to-line variations in the data volumes.
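A sketch of crossline FX deconvolution (forward prediction only; production implementations average forward and backward predictions and window spatially):

```python
import numpy as np

def fx_decon(section, n_coef=4, eps=1e-4):
    """FX prediction filtering across traces: for each temporal frequency,
    fit a complex AR filter along the (crossline) trace axis and keep the
    predictable part, attenuating random trace-to-trace variations."""
    nt, nx = section.shape
    spec = np.fft.rfft(section, axis=0)
    out = spec.copy()
    for i, s in enumerate(spec):
        # prediction model: s[n] ~ sum_j a[j] * s[n-1-j]
        A = np.column_stack([s[n_coef - 1 - j: nx - 1 - j] for j in range(n_coef)])
        AhA = A.conj().T @ A
        reg = eps * np.trace(AhA).real / n_coef * np.eye(n_coef)
        a = np.linalg.solve(AhA + reg, A.conj().T @ s[n_coef:])
        out[i, n_coef:] = A @ a          # keep the predictable (coherent) part
    return np.fft.irfft(out, nt, axis=0)
```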

Global QC 4D Attributes
