Velocity analysis is one of the most important aspects of imaging seismic data. Regardless of whether the project is
a prestack time or depth migration, finding an Earth model that produces the best possible image is seldom easy.
What we know today is that finding the optimum isotropic velocity is directly related to the experience of the
analyst, the quality of the migration tools at his or her disposal, and, of course, the quality of the seismic data
itself.
Almost all velocity analysis done today is what is normally referred to as migration velocity analysis (MVA), and is
usually based on some form of semblance calculation and picking. This works reasonably well so long as the Earth
model is isotropic, but when the subsurface is anisotropic, it falls far short of producing reasonable estimates of the
totality of parameters defining the anisotropic world. Moreover, as we saw, the estimated velocity may produce a
high-quality image with excellent lateral positioning, yet depth conversions will still be inaccurate. In this case, the
analyst must have proper tools for improving the number and accuracy of the parameters in the ultimate Earth
model.
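To make the semblance idea above concrete, here is a minimal sketch of hyperbolic semblance scanning over a CMP gather. The function name, window size, and synthetic gather are illustrative assumptions, not any particular commercial implementation; the moveout curve t(x) = sqrt(t0^2 + (x/v)^2) and the semblance ratio are the standard textbook forms.

```python
import numpy as np

def semblance(gather, offsets, dt, t0, velocities, win=5):
    """Semblance along hyperbolic moveout t(x) = sqrt(t0^2 + (x/v)^2).

    gather: (nsamples, ntraces) CMP gather. A toy sketch, not production code.
    """
    ns, ntr = gather.shape
    scores = []
    for v in velocities:
        tx = np.sqrt(t0 ** 2 + (offsets / v) ** 2)   # trial moveout trajectory
        base = np.round(tx / dt).astype(int)
        num = den = 0.0
        for k in range(-win, win + 1):               # small vertical window
            idx = np.clip(base + k, 0, ns - 1)
            a = gather[idx, np.arange(ntr)]
            num += a.sum() ** 2                      # energy of the stack
            den += (a ** 2).sum()                    # total trace energy
        scores.append(num / (ntr * den + 1e-12))
    return np.array(scores)

# Synthetic gather with one hyperbolic event at v = 2000 m/s, t0 = 0.8 s
dt, ns = 0.004, 500
offsets = np.linspace(0.0, 2000.0, 24)
t0, v_true = 0.8, 2000.0
gather = np.zeros((ns, offsets.size))
event = np.round(np.sqrt(t0 ** 2 + (offsets / v_true) ** 2) / dt).astype(int)
gather[event, np.arange(offsets.size)] = 1.0

vels = np.arange(1500.0, 2600.0, 100.0)
best = vels[np.argmax(semblance(gather, offsets, dt, t0, vels))]
```

Picking then amounts to locating semblance maxima over (t0, v); here the scan recovers the velocity of the synthetic event.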
Almost nothing can be done about the seismic data from which the required Earth model parameters must be
estimated. There are, however, certain simple preprocessing steps that at least reduce the risk of limiting the
quality of the final image. Some good and bad data preparation practices are summarized in the following
list:
Deconvolution is good, provided that it enhances low frequency content
Removing low frequencies is bad
Velocity analysis requires ample low frequencies but relatively few high frequencies
Migration basically trades horizontal wavenumber for vertical wavenumbers
Migrate the data first to assess the need for low frequency removal
Two-dimensional linear noise reduction may reduce dips
FK or fan filters should be avoided unless absolutely necessary
Prestack migration usually images linear noise to a point or off the section
Multiple suppression can be necessary
SRME/Inverse Scattering is the optimum choice
Parabolic methods should be used with care
Migration from topography should always be a priority
Sea floor topography is the same as topography
Refraction statics should really be refraction tomography
In the author's mind, there are four basic approaches to MVA.
The first approach is what we will call short-spread-semblance-based velocity analysis. This velocity
analysis is based on a short enough spread to avoid anisotropic effects and essentially provides what
we have referred to as the NMO velocity. It is useful for both compressional and, when available,
shear data. Typically, it does not consider issues related to any form of anisotropy. It can be completed
with or without horizons. This approach has been the workhorse of MVA for many years.
The second approach continues the use of the short spread approach, but adds residual tomography to
the mix. When the short-spread analysis methodology is considered to have run its course, residual
picks are used in a tomographic inversion to produce a refined update. Tomography sometimes
suffers from a lack of redundancy that precludes its usefulness. It may also have problems due to
short spread limitations. In the traditional formulation, it may not have sufficiently wide incidence
angles to be effective. In some cases, the tomographic inversion can be used to estimate simple
anisotropic parameters, but this does not appear to be routine.
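The residual-tomography step described above can be sketched as a linearized inversion: picked residual travel times relate to a slowness update through the ray-path matrix, and damping guards against the redundancy and aperture problems noted in the text. The 3-cell, 3-ray geometry and path lengths below are purely illustrative assumptions.

```python
import numpy as np

# Toy linearized travel-time tomography: picked residuals dt_obs relate to a
# slowness update ds through the ray-path matrix L (path length per cell),
# dt_obs = L @ ds. The 3-cell / 3-ray geometry here is purely illustrative.
L = np.array([[1.0, 1.0, 0.0],   # ray 1 samples cells 0 and 1
              [0.0, 1.0, 1.0],   # ray 2 samples cells 1 and 2
              [1.0, 0.0, 1.0]])  # ray 3 samples cells 0 and 2 (km per cell)
ds_true = np.array([0.010, -0.020, 0.005])  # slowness perturbation (s/km)
dt_obs = L @ ds_true                        # residual travel-time picks (s)

# Damped least squares: (L^T L + lam I) ds = L^T dt_obs. The damping term
# stabilizes the solve when ray coverage is sparse or angles are narrow.
lam = 1e-8
ds_est = np.linalg.solve(L.T @ L + lam * np.eye(3), L.T @ dt_obs)
```

In practice L is huge and sparse, the solve is iterative, and regularization carries far more weight; the structure of the update, however, is the same.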
The third velocity analysis approach relaxes the short-spread assumptions, uses all the data, and
incorporates well information directly into the mix. This combination of techniques requires the
availability of additional data, usually in the form of shear measurements, but some form of
subsurface information is a must. That knowledge can be empirical rather than from a drill
bit, but it is essential. This approach requires much more interpretive input than the other two. Perhaps
its chief drawback is its continued dependence on semblance style picking.
The fourth, and definitely least used and understood methodology, is what we will call full-waveform
inversion. This is what we might refer to as a hands-off method. We formulate the problem in a
purely mathematical sense and let a supercomputer do all the work. While this approach, for the
most part, has failed miserably in the past, there are beginning to be indications that with the right
data, full-waveform inversion may eventually become a useful tool. What is becoming clear is that
for full-waveform processing to become a useful technology, the industry must begin to acquire
much lower-frequency and more densely sampled data. In addition, computing power will have
to increase by several orders of magnitude, and the cost of compute cycles will have to decrease
significantly.
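The "purely mathematical" formulation referred to above is, in its most common form, least-squares minimization of the data misfit. Writing d_obs for the recorded data and F(m) for data synthesized from a trial Earth model m, the standard textbook objective (not necessarily the exact variant any given implementation uses) is:

```latex
J(m) \;=\; \frac{1}{2} \sum_{\text{sources},\,\text{receivers}} \int_0^T \bigl|\, d_{\mathrm{obs}}(t) - F(m)(t) \,\bigr|^2 \, dt
```

The supercomputer's work is the repeated wave-equation modeling required to evaluate F(m) and its gradient, typically via the adjoint-state method, which is why compute cost and low-frequency data dominate the method's practicality.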