This paper describes least-squares reverse-time migration in a matrix-based formulation that provides the exact adjoint operator pair for solving the linear inverse problem, thereby enhancing the convergence of gradient-based iterative linear inversion methods. In this formulation, modified source wavelets are used to correct the source-signature imprint in the predicted data. Moreover, a roughness constraint is applied to stabilize the inversion and reduce high-wavenumber artefacts. A mathematical proof verifies that a deconvolution imaging condition is implicitly applied in least-squares migration. Three numerical experiments illustrate that this new formulation produces seismic reflectivity images with higher resolution, more accurate amplitudes, and fewer artefacts than conventional reverse-time migration. The methodology is currently feasible in 2-D; as computational constraints ease, it will extend naturally to 3-D applications.
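As a schematic illustration (not the paper's implementation), the regularized least-squares migration problem min ||Lm − d||² + λ||Dm||² can be sketched with a toy linear operator; the operator L, the roughness operator D, and the weight λ below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized forward (de-migration) operator L mapping
# reflectivity m to data d; a random matrix stands in for illustration.
n_data, n_model = 120, 80
L = rng.standard_normal((n_data, n_model))
m_true = np.zeros(n_model)
m_true[20] = 1.0          # an isolated reflector
d = L @ m_true

# Second-difference roughness operator D penalizing high-wavenumber artefacts.
D = np.diff(np.eye(n_model), n=2, axis=0)

lam = 0.1                 # regularization weight (assumed, problem-dependent)
# Normal equations of min ||L m - d||^2 + lam ||D m||^2, solved directly here;
# in practice a gradient-based iterative method would be used instead.
m_est = np.linalg.solve(L.T @ L + lam * D.T @ D, L.T @ d)

print(np.argmax(np.abs(m_est)))   # position of the recovered reflector
```

In this noise-free, overdetermined toy problem the reflector is recovered at its true position; the roughness term only mildly damps its amplitude.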

We present an alternative to Roy's theorem for direct-current regimes, with the aim of validating the theoretical basis of signal contribution sections in electrical prospecting. Roy's theorem establishes that the electrical potential at a point can be expressed as an integral over all space of the electric field weighted by the gradient of the inverse distance. The integrand is interpreted as representing elementary contributions to the potential, which can be analysed to compare different electrode arrays. Signal contribution sections and depth-of-investigation characteristics can be strikingly illustrated, with important practical applications. However, the electric potential, being the solution of a boundary value problem, cannot be uniquely decomposed into elementary contributions: there is no guarantee that the integrand of a given integral is meaningful in all situations. In the case of Roy's theorem, the concept has been severely criticized by respected scholars, who have challenged the scientific legitimacy of the approach. If the concept of elementary contributions is to be kept alive, we need to go beyond Roy's theorem. In this paper, we develop an alternative theorem and show that it merges with the concept of sensitivity, which is unique, mathematically sound, and open to physical validation. This prevents possible contradictions in the future and, equally important, eliminates the dichotomy between sensitivity and elementary contributions.
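For reference, one common statement of Roy's theorem (sign conventions vary between derivations) expresses the potential φ at a point P as a volume integral of the electric field weighted by the gradient of the inverse distance:

```latex
\varphi(P) \;=\; -\frac{1}{4\pi} \int_{\mathbb{R}^3}
\mathbf{E}(\mathbf{r}) \cdot \nabla\!\left(\frac{1}{|\mathbf{r}-\mathbf{r}_P|}\right) \mathrm{d}V ,
```

where the integrand is the quantity interpreted as the elementary contribution of the volume element at **r**. The identity follows from Green's third identity for a harmonic potential decaying at infinity, which is precisely why the decomposition is not unique: any divergence-free field can be added to the integrand without changing φ(P).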

Waveform inversion is a velocity-model-building technique based on full waveforms as the input and seismic wavefields as the information carrier. Conventional waveform inversion is implemented in the data domain. However, similar techniques referred to as image-domain wavefield tomography can be formulated in the image domain and use a seismic image as the input and seismic wavefields as the information carrier. The objective function for the image-domain approach is designed to optimize the coherency of reflections in extended common-image gathers. The function applies a penalty operator to the gathers, thus highlighting image inaccuracies arising from the velocity model error. Minimizing the objective function optimizes the model and improves the image quality. The gradient of the objective function is computed using the adjoint state method in a way similar to that in the analogous data-domain implementation. We propose an image-domain velocity-model building method using extended common-image-point space- and time-lag gathers constructed sparsely at reflections in the image. The gathers are effective in reconstructing the velocity model in complex geologic environments and can be used as an economical replacement for conventional common-image gathers in wave-equation tomography. A test on the Marmousi model illustrates successful updating of the velocity model using common-image-point gathers and resulting improved image quality.
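A typical form of such an image-domain objective (shown here only as a generic sketch; the paper's exact functional may differ) applies a penalty operator P to the extended common-image-point gathers I:

```latex
J(m) \;=\; \tfrac{1}{2}\,\big\lVert P(\boldsymbol{\lambda},\tau)\;
I(\mathbf{x},\boldsymbol{\lambda},\tau;\, m)\big\rVert^{2},
\qquad \text{e.g. } P = |\boldsymbol{\lambda}| ,
```

where **λ** and τ are the space and time lags. With an accurate velocity model, reflection energy focuses at zero lag, the penalty annihilates the gathers, and J is small; model errors defocus the gathers and increase J, whose adjoint-state gradient then drives the velocity update.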

A spatially non-local model for inelastic deformation of solids is proposed and studied. The non-locality of deformation is taken into account through an additional parameter of state beyond the classical parameters such as the stress and strain tensors. This additional parameter is the curvature tensor expressed in terms of the metric strain tensor, and it is called the failure parameter. In the case of small deformation, it is equivalent to the Saint-Venant incompatibility tensor. Thermodynamic properties of the model are studied, and the governing differential equations of the spatially non-local model are formulated; they comprise the elasticity equations and a parabolic equation for the failure parameter. The model can be applied to the study of rock failure and, as an example, the one-dimensional problem of the deformation of a half-plane loaded by a normal surface stress is studied. Stationary and non-stationary formulations of the problem are considered, and qualitative agreement with available experimental data is observed.

Gaussian beam depth migration overcomes the single-wavefront limitation of most implementations of Kirchhoff migration and provides a cost-effective alternative to full-wavefield imaging methods such as reverse-time migration. Common-offset beam migration was originally derived to exploit symmetries available in marine towed-streamer acquisition. However, sparse acquisition geometries, such as cross-spread and ocean bottom, do not easily accommodate requirements for common-offset, common-azimuth (or common-offset-vector) migration. Seismic data interpolation or regularization can be used to mitigate this problem by forming well-populated common-offset-vector volumes. This procedure is computationally intensive and can, in the case of converted-wave imaging with sparse receivers, compromise the final image resolution. As an alternative, we introduce a common-shot (or common-receiver) beam migration implementation, which allows migration of datasets rich in azimuth, without any regularization pre-processing required. Using analytic, synthetic, and field data examples, we demonstrate that converted-wave imaging of ocean-bottom-node data benefits from this formulation, particularly in the shallow subsurface where regularization for common-offset-vector migration is both necessary and difficult.

Imaging the change in physical parameters in the subsurface requires an estimate of the long-wavelength components of the same parameters in order to reconstruct the kinematics of the waves propagating in the subsurface. One can reconstruct the model by matching the recorded data with modelled waveforms extrapolated in a trial model of the medium. Alternatively, assuming a trial model, one can obtain a set of images of the reflectors from a number of seismic experiments and match the locations of the imaged interfaces. Apparent displacements between migrated images contain information about the velocity model and can be used for velocity analysis. A number of methods are available to characterize the displacement between images; in this paper, we compare shot-domain differential semblance (image difference), penalized local correlations, and image-warping. We show that the image-warping vector field is a more reliable tool for estimating displacements between migrated images and leads to a more robust velocity analysis procedure. Using image-warping, we can redefine the differential semblance optimization problem with an objective function that is more robust against cycle-skipping than the direct image difference. We propose an approach with a straightforward implementation and a reduced computational cost compared with the conventional adjoint-state method calculations. We also discuss the weakness of migration velocity analysis in the migrated-shot domain for highly refractive media, where the Born modelling operator is far from unitary and its adjoint (migration) operator therefore poorly approximates the inverse.

Heating heavy-oil reservoirs is a common method for reducing the high viscosity of heavy oil and thus increasing the recovery factor. Monitoring these viscosity changes in the reservoir is essential for delineating the heated region and controlling production. In this study, we present an approach for estimating viscosity changes in a heavy-oil reservoir. The approach consists of three steps: measuring seismic-wave attenuation between reflections from above and below the reservoir, constructing time-lapse maps of the Q and Q^{−1} factors, and interpreting these maps using the Kelvin–Voigt and Maxwell viscoelastic models. We use a 4D relative spectrum method to measure changes in attenuation. The method is tested on noise-free synthetic seismic data and on data with additive Gaussian noise to demonstrate the robustness and accuracy of the Q-factor estimates. Application of the method to a field data set exhibits alignment of high-attenuation zones along the steam-injection wells and indicates that temperature-dependent viscosity changes in the heavy-oil reservoir can be explained by the Kelvin–Voigt model.
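The attenuation measurement between two reflections can be sketched with the classical log-spectral-ratio method (a standard technique; the paper's 4D relative spectrum method is a time-lapse variant of this idea). All values below are synthetic:

```python
import numpy as np

# Log-spectral-ratio Q estimation between a reflection above (A1) and
# below (A2) the reservoir; spectra and travel time are synthetic.
f = np.linspace(10.0, 60.0, 26)        # frequency band (Hz)
dt = 0.2                               # two-way travel time through reservoir (s)
Q_true = 25.0

A1 = np.exp(-0.05 * f)                 # reference spectrum (arbitrary shape)
A2 = A1 * np.exp(-np.pi * f * dt / Q_true)   # attenuated spectrum

# ln(A2/A1) = -(pi * dt / Q) * f + const, so the slope of a linear fit gives Q.
slope, _ = np.polyfit(f, np.log(A2 / A1), 1)
Q_est = -np.pi * dt / slope
print(round(Q_est, 1))                 # → 25.0
```

Because the reference spectrum cancels in the ratio, the estimate is insensitive to the (unknown) source spectrum, which is what makes the relative-spectrum approach attractive for time-lapse comparisons.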

Interferometric redatuming is a data-driven method to transform seismic responses with sources at one level and receivers at a deeper level into virtual reflection data with both sources and receivers at the deeper level. Although this method has traditionally been applied by cross-correlation, accurate redatuming through a heterogeneous overburden requires solving a multidimensional deconvolution problem. Input data can be obtained either by direct observation (for instance in a horizontal borehole), by modelling or by a novel iterative scheme that is currently being developed. The output of interferometric redatuming can be used for imaging below the redatuming level, resulting in a so-called interferometric image. Internal multiples from above the redatuming level are eliminated during this process. In the past, we introduced point-spread functions for interferometric redatuming by cross-correlation. These point-spread functions quantify distortions in the redatumed data, caused by internal multiple reflections in the overburden. In this paper, we define point-spread functions for interferometric imaging to quantify these distortions in the image domain. These point-spread functions are similar to conventional resolution functions for seismic migration but they contain additional information on the internal multiples in the overburden and they are partly data-driven. We show how these point-spread functions can be visualized to diagnose image defocusing and artefacts. Finally, we illustrate how point-spread functions can also be defined for interferometric imaging with passive noise sources in the subsurface or with simultaneous-source acquisition at the surface.
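Schematically, following the interferometry-by-multidimensional-deconvolution literature (notation here is generic, not necessarily the authors'), the correlation function C obtained by cross-correlation redatuming equals the sought reflection response R blurred by a point-spread function Γ:

```latex
C(\mathbf{x}_B,\mathbf{x}_A,t) \;=\; \int_{\partial\mathbb{D}}
R(\mathbf{x}_B,\mathbf{x},t) \ast \Gamma(\mathbf{x},\mathbf{x}_A,t)\,
\mathrm{d}\mathbf{x} ,
```

where ∗ denotes temporal convolution and ∂𝔻 is the redatuming level. Multidimensional deconvolution amounts to inverting Γ; whenever Γ deviates from a band-limited delta function, the internal multiples it contains distort the redatumed data, which is exactly what the image-domain point-spread functions introduced in this paper are designed to visualize.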

Various methods for computing the terrain correction in a high-precision gravity survey are currently available. The present paper suggests a new method that uses linear analytical terrain approximations. In this method, digital terrain models for the near-station topographic masses are obtained by vectorizing scanned images of large-scale topographic maps, and the terrain correction is computed using a Fourier-series approximation of discrete height values. Distant topography is represented with the help of the digital GTOPO30 and Shuttle Radar Topography Mission cartographic data. We formulate linear analytical approximations of terrain corrections for the whole region using harmonic functions as the basis of our computational algorithm. Stochastic modelling allows an effective assessment of the accuracy of the terrain correction computation. The Perm Krai case study has shown that our method makes full use of all the terrain data available from topographic maps and digital terrain models and delivers a digital terrain correction computed to the *a priori* specified precision. Our computational methodology can be successfully applied to terrain correction computation in other survey areas.

Streaming potentials are produced by electrokinetic effects related to fluid flow and are used for geophysical prospecting. The aim of this study is to model streaming potential measurements under unsaturated conditions using an empirical approach. A conceptual model is applied to streaming potential measurements obtained from two drainage experiments in sand. The streaming potential data presented here show a non-monotonous behaviour with increasing water saturation, following a pattern that cannot be predicted by existing models. A model involving quasi-static and dynamic components is proposed to reproduce the streaming potential measurements. The dynamic component is based on the first time derivative of the driving pore pressure. The influence of this component is investigated with respect to fluid velocity, which differs greatly between the two experiments. The results demonstrate that the dynamic component is predominant at the onset of drainage in the experiment with the slowest water flow, whereas its influence appears to vanish with increasing drainage velocity. Our results suggest that fluid flow and water distribution at the pore scale have an important influence on the streaming potential response under unsaturated conditions. We propose to explain this specific streaming potential response in terms of the behaviour of both the rock/water interface and the water/air interfaces created during desaturation. The water/air interfaces are negatively charged, as is also observed for water/rock interfaces. Both the surface area of these interfaces and the flow velocity across them are thought to contribute to the non-monotonous behaviour of the streaming potential coefficient, as well as to the variations in its amplitude. The non-monotonous behaviour of air/water interfaces created during flow has previously been measured and modelled in studies published in the literature. The streaming potential coefficient can increase by a factor of about 10 to 40 when water saturation decreases; such an increase is possible if the area of water/air interfaces grows sufficiently, which can be the case.

In this study, a locally linear model tree algorithm was used to optimize a neuro-fuzzy model for the prediction of effective porosity from seismic attributes in an Iranian oil field located in the southwest of Iran. Reliable identification of the effective porosity distribution in fractured carbonate reservoirs is essential for reservoir characterization, and high-accuracy predictions facilitate efficient exploration and management of oil and gas resources.

The multi-attribute stepwise linear regression method was used to select five out of 26 seismic attributes one by one. These attributes were introduced into the neuro-fuzzy model to predict effective porosity. The neuro-fuzzy model with seven locally linear models resulted in the lowest validation error. Moreover, a blind test was carried out at the locations of two wells that were used in neither training nor validation. The results of the validation and the blind test confirm the ability of the proposed algorithm to predict effective porosity. Finally, the performance of this neuro-fuzzy model was compared with two standard neural networks, a multi-layer perceptron and a radial basis function network; the results show that a locally linear neuro-fuzzy model trained by a locally linear model tree algorithm yields more accurate porosity predictions than standard neural networks, particularly as irregularities in the data set increase. Production data have also been used to verify the reliability of the porosity model. The porosity sections through the two wells demonstrate that the porosity model conforms to the production rates of the wells.

Comparison of the locally linear neuro-fuzzy model's performance on different wells indicates a distinct discrepancy between this model and the other techniques, and this discrepancy is a function of the correlation between the model inputs and the output. Where the relationship between seismic attributes and effective porosity weakens, the neuro-fuzzy model yields more accurate predictions than standard neural networks, whereas its performance is close to that of the neural networks where this relationship is strong.

The effective porosity map, presented as the output of the method, shows a high-porosity area in the centre of zone 2 of the Ilam reservoir. Furthermore, there is an extensive high-porosity area in zone 4 of Sarvak that extends from the centre to the east of the reservoir.

Ricker-compliant deconvolution spikes at the centre lobe of the Ricker wavelet, which enables deconvolution to preserve and enhance seismogram polarities. Expressing the phase spectrum as a function of lag, it works by suppressing the phase at small lags. A by-product of this deconvolution is a pseudo-unitary (very clean) debubble filter, in which bubbles are lifted off the data while onset waveforms (usually Ricker) are left untouched.
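A Ricker wavelet is zero phase, with its main lobe centred at zero lag, which is where a Ricker-compliant deconvolution places its spike. A minimal sketch (peak frequency chosen arbitrarily):

```python
import numpy as np

def ricker(t, f0):
    """Zero-phase Ricker wavelet with peak frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.arange(-100, 101) * 0.001      # +/- 100 ms at 1 ms sampling
w = ricker(t, 30.0)
print(t[np.argmax(w)])                # → 0.0 : the centre lobe peaks at zero lag
```

Because the wavelet is symmetric about zero lag, its phase spectrum is identically zero, so a deconvolution that suppresses phase only at small lags leaves such onset waveforms (and their polarities) intact.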

Waveform inversion faces a severe challenge in retrieving the long-wavelength background structure. We propose using envelope inversion to recover the large-scale component of the model. Using the large-scale background recovered by envelope inversion as a new starting model, we obtain much better results than with conventional full-waveform inversion. By comparing the shapes of the misfit functionals of envelope inversion and conventional waveform inversion, we show that envelope inversion can greatly reduce the local-minimum problem. The combination of envelope inversion and waveform inversion delivers a more faithful and accurate final result at almost no extra computational cost compared with conventional full-waveform inversion. We also tested the resistance of envelope inversion to Gaussian noise and to seismic interference noise. The results show that envelope inversion is insensitive to Gaussian noise and, to a certain extent, to seismic interference noise, which indicates the robustness of the method and its potential for noisy data.
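The envelope referred to above is conventionally computed as the magnitude of the analytic signal; a self-contained sketch (FFT-based Hilbert transform, synthetic trace) shows how it exposes the low-frequency modulation that carries the long-wavelength model information:

```python
import numpy as np

def envelope(x):
    """Envelope of a real trace via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0       # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0           # keep Nyquist as-is for even n
    return np.abs(np.fft.ifft(X * h))

# A 40 Hz carrier modulated by a slow 2 Hz amplitude variation: the
# envelope recovers the low-frequency modulation even though the trace
# itself contains no energy at those low frequencies.
t = np.arange(0, 1, 0.001)
mod = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)
x = mod * np.cos(2 * np.pi * 40 * t)
env = envelope(x)
print(np.allclose(env, mod, atol=1e-6))   # → True
```

This is exactly the property envelope inversion exploits: the envelope carries ultra-low-frequency information absent from the raw waveform spectrum, which helps recover the large-scale background and mitigates cycle-skipping.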

During surveys, water layers may interfere with the detection of oil layers. To distinguish between oil and water layers, research on how well diameter and the properties of oil and water layers relate to acoustic logging responses is essential. Using Hudson's crack theory, we simulated oil and water layers with different well diameters and crack parameters (angle and number density). We found that when the well radius increases from 0.03 m to 0.05 m, the variation ratio of the compressional-wave amplitude is smaller for the oil layer than for the water layer. The difference in Stoneley-wave amplitude between crack parameters (angle and number density) is greater for the water layer than for the oil layer. The response sensitivity of the wave energy is likewise greater for the water layer than for the oil layer. When the well radius increases from 0.05 m to 0.14 m, the maximum excitation intensity for the oil layer is greater than that for the water layer. We conclude that the propagation of an elastic wave is affected by the medium composition and the well diameter, and that this influence follows certain regularities. These results can guide further field logging of reservoirs.

Seismic conditioning of static reservoir model properties such as porosity and lithology has traditionally been posed as the solution of an inverse problem, while dynamic reservoir model properties have been constrained by time-lapse seismic data. Here, we propose a methodology to jointly estimate rock properties (such as porosity) and dynamic property changes (such as pressure and saturation changes) from time-lapse seismic data. The methodology is based on a fully Bayesian approach to seismic inversion and proceeds in two steps: first, we estimate the conditional probability of the elastic properties and their relative changes; then, we estimate the posterior probability of the rock properties and dynamic property changes. We apply the proposed methodology to a synthetic reservoir study in which we created a synthetic seismic survey for a real dynamic reservoir model, including pre-production and production scenarios. The final result is a set of point-wise probability distributions that allow us to predict the most probable reservoir model at each time step and to evaluate the associated uncertainty. Finally, we also show an application to real field data from the Norwegian Sea, where we estimate changes in gas saturation and pressure from time-lapse seismic amplitude differences. The inverted results show the hydrocarbon displacement at the times of the two repeated seismic surveys.
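The two-step structure described above can be summarized (in generic notation, not necessarily the authors') by marginalizing over the elastic properties:

```latex
p\!\left(\mathbf{R}, \Delta\mathbf{D} \mid \mathbf{d}\right)
\;=\; \int p\!\left(\mathbf{R}, \Delta\mathbf{D} \mid \mathbf{e}\right)\,
p\!\left(\mathbf{e} \mid \mathbf{d}\right)\, \mathrm{d}\mathbf{e} ,
```

where **d** is the time-lapse seismic data, **e** denotes the elastic properties and their relative changes (step one), and **R** and Δ**D** are the rock properties and dynamic property changes (step two). Propagating the full conditional distributions, rather than point estimates, through this chain is what yields the point-wise posterior distributions and their uncertainty.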

Hard-rock seismic exploration normally has to deal with rather complex geological environments, usually characterized by a large number of local heterogeneities (e.g., faults, fracture zones, and steeply dipping interfaces). Seismic data from such environments often have a poor signal-to-noise ratio because of the complexity of hard-rock geology. To obtain reliable images of subsurface structures under such geological conditions, reflection seismic exploration requires processing algorithms capable of handling data with a low signal-to-noise ratio. In this paper, we describe a modification of the 3D Kirchhoff post-stack migration algorithm that utilizes coherency attributes, obtained by a 3D diffraction imaging algorithm, to steer the main Kirchhoff summation. Application to a 3D synthetic model shows the stability of the presented steered migration in the presence of a high level of random noise. A test on a 3D seismic volume acquired at a mine site in Western Australia reveals the capability of the approach to image steep and sharp objects such as fracture and fault zones and lateral heterogeneities.

The Beldih open-cast mine of the South Purulia Shear Zone in Eastern India is well known for apatite deposits associated with Nb–rare-earth-element–uranium mineralization within steeply dipping, altered ferruginous kaolinite and quartz–magnetite–apatite rocks with E–W strikes at the contact of altered mafic–ultramafic and granite/quartzite rocks. A detailed geophysical study using gravity, magnetic, and gradient resistivity profiling surveys has been carried out over a ∼1 km^{2} area surrounding the Beldih mine to further investigate the dip, depth, lateral extension, and associated geophysical signatures of the uranium mineralization in the environs of the South Purulia Shear Zone. The high-to-low transition zone on the northern part and the high-to-low anomaly patches on the southeastern and southwestern parts of the Bouguer, reduced-to-pole magnetic, and trend-surface-separated residual gravity–magnetic anomaly maps indicate the possibility of highly altered zones on the northern, southeastern, and southwestern parts of the Beldih mine. The gradient resistivity survey on either side of the mine has also revealed the correlation of low-resistivity anomalies with low gravity and moderately high magnetic anomalies. In particular, the anomalies and modelled subsurface features along profile P6 match the subsurface geology and uranium mineralization at depth very well. Two-dimensional and three-dimensional residual gravity models along P6 depict a highly altered vertical sheet of low-density material down to a depth of ∼200 m. Drilling results along the same profile confirm the continuation of the uranium mineralization zone within the low-density material. This not only validates the findings of the gravity model but also establishes the geophysical signature of uranium mineralization in this region as low gravity, moderate-to-high magnetic, and low resistivity values.
This study enhances the scope of further integrated geophysical investigations along the South Purulia Shear Zone to delineate suitable target areas for uranium exploration.

Although most rocks are complex multi-mineralic aggregates, quantitative interpretation workflows usually ignore this complexity and employ the Gassmann equation and effective stress laws that assume a micro-homogeneous (mono-mineralic) rock. Even though Gassmann theory and effective stress concepts have been generalized to micro-inhomogeneous rocks, they are seldom, if ever, used in practice because they require a greater number of parameters, which are difficult to measure or infer from data. Furthermore, the magnitude of the effect of micro-heterogeneity on fluid substitution and on effective stress coefficients is poorly understood. In particular, it is an open question whether deviations of experimentally measured effective stress coefficients for drained and undrained elastic moduli from theoretical predictions can be explained by micro-heterogeneity. In an attempt to bridge this gap, we consider an idealized model of a micro-inhomogeneous medium: a Hashin assemblage of double spherical shells, in which each shell consists of a spherical pore surrounded by two concentric spherical layers of two different isotropic minerals. By analysing the exact solution of this problem, we show that the results are exactly consistent with the equations of Brown and Korringa (which extend Gassmann's equation to micro-inhomogeneous media). We also show that the effective stress coefficients for bulk volume *α*, for porosity *n_{ϕ}*, and for drained and undrained moduli are quite sensitive to the degree of heterogeneity (the contrast between the moduli of the two mineral components). For instance, while for micro-homogeneous rocks the theory gives
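For reference, the Brown and Korringa relation mentioned above can be written in compliance form (standard notation assumed here) as:

```latex
\frac{1}{K_u} \;=\; \frac{1}{K_d} \;-\;
\frac{\left(\dfrac{1}{K_d}-\dfrac{1}{K_s}\right)^{2}}
{\left(\dfrac{1}{K_d}-\dfrac{1}{K_s}\right)
 \;+\; \phi\left(\dfrac{1}{K_f}-\dfrac{1}{K_\phi}\right)} ,
```

where K_u and K_d are the undrained and drained bulk moduli, K_f the pore-fluid modulus, ϕ the porosity, and K_s and K_ϕ the unjacketed bulk and pore moduli. For a micro-homogeneous rock, K_s = K_ϕ equals the single mineral modulus and the relation reduces to Gassmann's equation; for micro-inhomogeneous rocks the two unjacketed moduli differ, which is precisely the extra parameter that is difficult to measure in practice.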

A new array type, the γ_{11n} arrays, is introduced in this paper, in which the sequence of the current (C) and potential (P) electrodes is CPCP, and the distance between the last two electrodes is *n* times the distance between the first two electrodes, which equals the distance between the second and the third. These arrays are called quasi-null arrays because, in both their geometry and their behaviour, they lie between the traditional and the null arrays. Numerical modelling shows that, in detecting small-effect inhomogeneities, these configurations may be more effective than the traditional ones, including the optimized Stummer configuration. Certain γ_{11n} configurations, especially γ_{112}, γ_{113}, and γ_{114}, produced better results in both the horizontal- and vertical-resolution investigations. Based on the numerical studies, the γ_{11n} configurations seem very promising for problems where the anomalies are similar to those investigated numerically; that is, they can detect and characterize, e.g., tunnels, caves, cables, tubes, abandoned riverbeds, or discontinuities in a clay layer with greater efficacy than the traditional configurations. γ_{11n} measurements also need fewer data than traditional configurations; therefore, their use can shorten the time demand of electrical resistivity tomography measurements.

We present a modified interferometry method based on local tangent-phase analysis, which corrects the cross-correlated data before summation. The approach makes it possible to synthesize virtual signals that usually vanish in the conventional seismic interferometry summation. For a given pair of receivers and a set of source positions, a plurality of virtual traces is obtained at new stationary projected points located along the signal wavefronts passing through the real reference receiver. The positions of the projected points are estimated by minimizing travel times using a wavefront constraint and correlation-signal tangent information. The method uses mixed processing, based partly on velocity-model knowledge and partly on data-driven blind interferometry. The approach can be applied to selected events, including reflections whose stationary conditions and projected points differ from those of the direct arrivals, to extend the interferometric representation in seismic exploration data where conventional illumination coverage is insufficient to attain the stationary-phase condition. We discuss possible applications in a crosswell geometry with a velocity anomaly and in a time-lapse setting.

Topography and severe variations in near-surface layers lead to travel-time perturbations of events in seismic exploration. Usually, these perturbations can be estimated and eliminated using refraction methods. The virtual refraction method is a relatively new technique for retrieving refraction information from noise-contaminated seismic records. Building on the virtual refraction, this paper proposes super-virtual refraction interferometry by cross-correlation, which retrieves refraction wavefields by summing the cross-correlations of raw and virtual refraction wavefields over all receivers located outside the retrieved source-receiver pair. This method enhances the refraction signal progressively as the source-receiver offset decreases. For further enhancement of the refracted waves, a hybrid scheme is applied by stacking correlation-type and convolution-type super-virtual refractions. Our new method needs no information about the near-surface velocity model; it solves the problem that the virtual refraction energy from a virtual source at the surface cannot be measured directly, and it extends the acquisition aperture to its maximum extent in the raw seismic records. It can also effectively reduce the influence of random noise in raw seismic records and improve the refracted waves' signal-to-noise ratio by a factor proportional to the square root of the number of receivers positioned at stationary-phase points, on top of the improvement in the virtual refraction's signal-to-noise ratio. Using synthetic and field data, we show that our new method is effective in retrieving refraction information from raw seismic records and improving the accuracy of first-arrival picks.
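The √N signal-to-noise gain quoted above is the standard stacking argument; a minimal synthetic sketch (N noisy copies of the same arrival standing in for the contributions from N receivers at stationary-phase points):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stack N noisy realizations of the same arrival: incoherent noise is
# reduced by a factor of sqrt(N) while the coherent signal is preserved.
n, N = 500, 64
signal = np.exp(-0.5 * ((np.arange(n) - 250) / 10.0) ** 2)   # synthetic arrival
traces = signal + rng.standard_normal((N, n))                # unit-variance noise

noise_single = np.std(traces[0] - signal)
noise_stacked = np.std(traces.mean(axis=0) - signal)
print(round(noise_single / noise_stacked, 1))   # close to sqrt(64) = 8
```

The same argument underlies the super-virtual summation: each receiver at a stationary-phase point contributes a coherent copy of the refraction while its noise is incoherent, so the stacked virtual trace gains SNR as the square root of the number of contributing receivers.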

For pre-stack phase-shift migration in homogeneous isotropic media, the offset-midpoint travel time is represented by the double-square-root equation. As a function of offset and midpoint, the travel time resembles the shape of Cheops' pyramid; this also holds for transversely isotropic media with a vertical symmetry axis. In this study, we extend the offset-midpoint travel-time pyramid to 2D transversely isotropic media with a tilted symmetry axis. The analytical P-wave travel-time pyramid is derived under the assumption of weak anellipticity of the tilted transversely isotropic medium. The travel-time equation for the dip-constrained transversely isotropic model is obtained from the depth-domain travel-time pyramid. Potential applications of the derived offset-midpoint travel-time equation include pre-stack Kirchhoff migration, anisotropic parameter estimation, and travel-time computation in transversely isotropic media with a tilted symmetry axis.
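For the homogeneous isotropic case discussed above, the double-square-root travel time from source to image point to receiver reads (in standard notation):

```latex
t(y,h) \;=\; \frac{\sqrt{z^{2} + \left(y - h - x\right)^{2}}}{v}
\;+\; \frac{\sqrt{z^{2} + \left(y + h - x\right)^{2}}}{v} ,
```

where y is the midpoint, h the half-offset, (x, z) the position of the image point, and v the velocity. Plotted over the (y, h) plane, this surface has the characteristic Cheops'-pyramid shape; the anisotropic extensions derived in the paper replace the two square-root branches with their weak-anellipticity counterparts.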

Migration velocity analysis aims at determining the background velocity model. Classical artefacts, such as migration smiles, are observed on subsurface-offset common-image gathers owing to spatial and frequency limitations of the data. We analyse their impact on the differential semblance functional and on its gradient with respect to the model; in particular, the differential semblance functional is not necessarily minimized at the expected model. Tapers are classically applied to common-image gathers to partly reduce these artefacts. Here, we first observe that the migrated image can be defined as the gradient of an objective function formulated in the data domain. For an automatic and more robust formulation, we introduce a weight into the original data-domain objective function. The weight is determined such that the Hessian resembles a Dirac function. In this way, we extend quantitative migration to the subsurface-offset domain; this is an automatic way to compensate for illumination. We analyse the modified scheme on a very simple 2D case and on a more complex velocity model to show how migration velocity analysis becomes more robust.

A new methodology that levels airborne magnetic data without orthogonal tie-lines is presented in this study. The technique utilizes the low-wavenumber content of the flight-line data to construct a smooth representation of the regional field at a scale appropriate to the line lengths of the survey. Levelling errors are then calculated between the raw flight-line data and the derived regional field through a least squares approach. Minimizing the magnitude of the error, with a first-degree error function, results in significant improvements to the unlevelled data. The technique is tested and demonstrated using three recent airborne surveys.
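The core of the levelling step described above can be sketched as a small per-line least-squares fit. The sketch below is illustrative only: the function name is hypothetical, and it assumes the smooth regional field has already been computed (e.g., by low-pass filtering the flight-line data) and sampled along each line. The "first-degree error function" is taken to mean a correction that is linear in along-line distance.

```python
import numpy as np

def level_lines(line_data, line_coords, regional):
    """For each flight line, fit a first-degree (linear-in-distance) levelling
    error by least squares against a smooth regional field, then subtract it.
    line_data:   list of 1-D arrays of raw readings per flight line
    line_coords: list of 1-D arrays of along-line distance per flight line
    regional:    list of 1-D arrays of the regional field on each line
    Returns a list of levelled 1-D arrays."""
    levelled = []
    for d, x, r in zip(line_data, line_coords, regional):
        # Error model: e(x) = a + b*x. Solve min || G [a, b]^T - (d - r) ||^2.
        G = np.column_stack([np.ones_like(x), x])
        coeff, *_ = np.linalg.lstsq(G, d - r, rcond=None)
        levelled.append(d - G @ coeff)
    return levelled
```

With a purely linear levelling error superimposed on the regional field, the fit removes it exactly; in practice the residual after subtraction measures how well the first-degree model explains the mis-levelling of each line.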

Over the past decade, the typical size of airborne electromagnetic data sets has been growing rapidly, along with an emerging need for highly accurate modelling. One-dimensional approximate inversions or data transform techniques have previously been employed for very large-scale studies of quasi-layered settings, but these techniques fail to provide the consistent accuracy needed by many modern applications such as aquifer and geological mapping, uranium exploration, oil sands, and integrated modelling. In these cases, the use of more time-consuming 1D forward and inverse modelling provides the only acceptable solution that is also computationally feasible.

When target structures are known to be quasi-layered and spatially coherent, it is beneficial to incorporate this assumption directly into the inversion. This implies inverting multiple soundings at a time in larger constrained problems, which allows for resolving geological layers that are undetectable using simple independent inversions. Ideally, entire surveys should be inverted at once in huge constrained problems, but poor scaling properties of the underlying algorithms typically make this challenging.

Here, we document how we optimized an inversion code for very large-scale constrained airborne electromagnetic problems. Most importantly, we describe how we solve linear systems using an iterative method that scales linearly with the size of the data set in terms of both solution time and memory consumption. We also describe how we parallelized the core region of the code, in order to obtain almost ideal strong parallel scaling on current 4-socket shared memory computers. We further show how model parameter uncertainty estimates can be efficiently obtained in linear time, and we demonstrate the capabilities of the full implementation by inverting a 3327 line-km SkyTEM survey overnight. Performance and scaling properties are discussed based on the timings of the field example, and we describe the criteria that must be fulfilled in order to adapt our methodology to similar types of problems.

Wave-induced oscillatory fluid flow in the vicinity of inclusions embedded in porous rocks is one of the main causes for *P*-wave dispersion and attenuation at seismic frequencies. Hence, the *P*-wave velocity depends on wave frequency, porosity, saturation, and other rock parameters. Several analytical models quantify this wave-induced flow attenuation and result in characteristic velocity–saturation relations. Here, we compare some of these models by analyzing their low- and high-frequency asymptotic behaviours and by applying them to measured velocity–saturation relations. Specifically, the Biot–Rayleigh model considering spherical inclusions embedded in an isotropic rock matrix is compared with White's and Johnson's models of patchy saturation. The modeling of laboratory data for tight sandstone and limestone indicates that, by selecting appropriate inclusion size, the Biot-Rayleigh predictions are close to the measured values, particularly for intermediate and high water saturations.

In the present work, the waveforms of the reflected-wave sonic log for open and cased boreholes are calculated. Calculations are performed for a borehole containing an acoustic multipole source (monopole, dipole, or quadrupole). A reflected wave is more efficiently excited at resonant frequencies. These frequencies for all source types are close to the frequencies of oscillations of a fluid column located in an absolutely rigid hollow cylinder. It is shown that the acoustic reverberation is controlled by the acoustic impedance of the rock, *Z* = *V_{p}ρ_{s}*, for fixed parameters of the borehole fluid, where *V_{p}* is the P-wave velocity and *ρ_{s}* is the density of the rock.

The aim of this work is to introduce the application of the fuzzy ordered weighted averaging method as a straightforward knowledge-driven approach to explore porphyry copper deposits in an airborne prospect. In this paper, the proposed method is applied to airborne geophysical (potassium radiometry, magnetometry, and frequency-domain electromagnetic) data, geological layers (fault and host rock zones), and various alteration layers extracted from remote sensing images. The central Iranian volcanic–sedimentary belt in Kerman province of Iran, located within the Urumieh–Dokhtar (Sahand–Bazman) magmatic arc, is chosen for this study. This region has a high potential for mineral occurrences, especially porphyry copper, and contains some active world-class copper mines such as Sarcheshmeh. Two evidential layers, the downward-continued map and the analytic signal of the filtered magnetic data, are generated to be used as geophysically plausible traces of porphyry copper occurrences. The low values of the resistivity layer acquired from airborne frequency-domain electromagnetic data are also used as an electrical criterion in this study. Four remote sensing evidential layers, including argillic, phyllic, propylitic, and hydroxyl alterations, are extracted from Advanced Spaceborne Thermal Emission and Reflection Radiometer images in order to map the altered areas associated with porphyry copper deposits. The Enhanced Thematic Mapper Plus images are used to map the iron oxide layer. Since potassium alteration is the mainstay of copper alteration, the airborne potassium radiometry data are used. Here, the fuzzy ordered weighted averaging method uses a wide range of decision strategies in order to generate numerous mineral potential/prospectivity maps. The final mineral potential map based upon the desired geo-data set shows that the high-potential zones match well with existing working mines and known copper deposits.

Full-waveform inversion is re-emerging as a powerful data-fitting procedure for quantitative seismic imaging of the subsurface from wide-azimuth seismic data. This method is suitable to build high-resolution velocity models provided that the targeted area is sampled by both diving waves and reflected waves. However, the conventional formulation of full-waveform inversion prevents the reconstruction of the small wavenumber components of the velocity model when the subsurface is sampled by reflected waves only. This typically occurs as the depth becomes significant with respect to the length of the receiver array. This study first aims to highlight the limits of the conventional form of full-waveform inversion when applied to seismic reflection data, through a simple canonical example of seismic imaging, and then to propose a new inversion workflow that overcomes these limitations. The governing idea is to decompose the subsurface model as a background part, which we seek to update, and a singular part that corresponds to some prior knowledge of the reflectivity. Forcing this scale uncoupling in the full-waveform inversion formalism brings out the transmitted wavepaths that connect the sources and receivers to the reflectors in the sensitivity kernel of the full-waveform inversion, which is otherwise dominated by the migration impulse responses formed by the correlation of the downgoing direct wavefields coming from the shot and receiver positions. This transmission regime makes full-waveform inversion amenable to the update of the long-to-intermediate wavelengths of the background model from the wide scattering-angle information. However, we show that this prior knowledge of the reflectivity does not obviate the need for a suitable misfit measurement, based on cross-correlation, to avoid cycle-skipping issues, nor for a suitable inversion domain, such as the pseudo-depth domain, that allows us to preserve the invariant property of the zero-offset time.
This latter feature is useful to avoid updating the reflectivity information at each non-linear iteration of the full-waveform inversion, hence considerably reducing the computational cost of the entire workflow. Prior information on the reflectivity in the full-waveform inversion formalism, a robust misfit function that prevents cycle-skipping issues, and a suitable inversion domain that preserves the seismic invariant are the three key ingredients that should ensure well-posedness and computational efficiency of full-waveform inversion algorithms for seismic reflection data.

The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm scales nearly linearly, up to a logarithmic factor, with *N*, the representative number of points in a particular dimension of either data space or parameter space, as opposed to the straightforward velocity scan, whose cost grows as the product of the data size and the number of scanned parameter combinations. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.
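For context, the straightforward scan that the butterfly algorithm accelerates can be sketched as follows. This is a minimal isotropic 2D version with nearest-sample moveout (the paper's 3D azimuthally anisotropic scan adds a second, azimuth-dependent moveout parameter); all names are hypothetical, and semblance is computed in its usual stacked-energy form.

```python
import numpy as np

def velocity_scan(data, dt, offsets, velocities):
    """Straightforward hyperbolic velocity scan: for each trial velocity,
    flatten the gather along t(x) = sqrt(t0^2 + x^2/v^2) using nearest-sample
    moveout and compute semblance over offsets.
    data: (noffsets, nt) CMP gather; returns a (nvel, nt) semblance panel."""
    nx, nt = data.shape
    t0 = np.arange(nt) * dt
    semb = np.zeros((len(velocities), nt))
    for iv, v in enumerate(velocities):          # loop over scanned parameters
        shifted = np.zeros_like(data)
        for ix, x in enumerate(offsets):         # loop over the data
            t = np.sqrt(t0**2 + (x / v) ** 2)
            idx = np.minimum((t / dt).astype(int), nt - 1)
            shifted[ix] = data[ix, idx]
        num = shifted.sum(axis=0) ** 2            # stacked energy
        den = nx * (shifted**2).sum(axis=0)       # total energy
        semb[iv] = num / (den + 1e-12)
    return semb
```

The nested loops make the cost the product of the data size and the number of scanned parameters, which is exactly what the butterfly reformulation is designed to avoid.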

Mud volcanism is commonly observed in Azerbaijan and the surrounding South Caspian Basin. This natural phenomenon is very similar to magmatic volcanism but differs in one considerable aspect: magmatic volcanoes are generally the result of ascending molten rock within the Earth's crust, whereas mud volcanoes are characterised by the expulsion of mixtures of water, mud, and gas. The majority of mud volcanoes have been observed on ocean floors or in deep sedimentary basins, such as those found in Azerbaijan. Furthermore, their occurrences in Azerbaijan are generally closely associated with hydrocarbon reservoirs and are therefore of immense economic and geological interest. The broadside long-offset transient electromagnetic method and the central-loop transient electromagnetic method were applied to study the inner structure of such mud volcanoes and to determine the depth of a resistive geological formation that is predicted to contain the majority of the hydrocarbon reservoirs in the survey area. One-dimensional joint inversion of central-loop and long-offset transient electromagnetic data was performed using the inversion schemes of Occam and Marquardt. By using the joint inversion models, a subsurface resistivity structure ranging from the surface to a depth of approximately 7 km was determined. Along a profile running perpendicular to the assumed strike direction, lateral resistivity variations could only be determined in the shallow depth range using the transient electromagnetic data. An attempt to resolve further two-dimensional/three-dimensional resistivity structures, representing possible mud migration paths at large depths, using the long-offset transient electromagnetic data failed. Moreover, the joint inversion models led to ambiguous results regarding the depth and resistivity of the hydrocarbon target formation due to poor resolution at great depths (>5 km).
Thus, 1D/2D modelling studies were subsequently performed to investigate the influence of the resistive terminating half-space on the measured long-offset transient electromagnetic data.

The 1D joint inversion models were utilised as starting models for both the 1D and 2D modelling studies. The results tend to show that a resistive terminating half-space, implying the presence of the target formation, is the favourable geological setting. Furthermore, the 2D modelling study aimed to fit all measured long-offset transient electromagnetic Ex transients along the profile simultaneously. Consequently, 3125 2D forward calculations were necessary to determine the best-fit resistivity model. The results are consistent with the 1D inversion, indicating that the data are best described by a resistive terminating half-space, although the resistivity and depth cannot be determined clearly.

The problem of conversion from time-migration velocity to an interval velocity in depth in the presence of lateral velocity variations can be reduced to solving a system of partial differential equations. In this paper, we formulate the problem as a non-linear least-squares optimization for seismic interval velocity and seek its solution iteratively. The input for the inversion is the Dix velocity, which also serves as an initial guess. The inversion gradually updates the interval velocity in order to account for lateral velocity variations that are neglected in the Dix inversion. The algorithm has a moderate cost thanks to regularization that speeds up convergence while ensuring a smooth output. The proposed method should be numerically robust compared to the previous approaches, which amount to monotonic extrapolation in depth. For a successful time-to-depth conversion, image-ray caustics should be either nonexistent or excluded from the computational domain. The resulting velocity can be used in subsequent depth-imaging model building. Both synthetic and field data examples demonstrate the applicability of the proposed approach.

Seismic diffracted waves carry valuable information for identifying geological discontinuities. Unfortunately, the diffraction energy is generally too weak, and standard seismic processing is biased toward imaging reflections. In this paper, we present a dynamic diffraction imaging method with the aim of enhancing diffraction and increasing the signal-to-noise ratio. The correlation between diffraction amplitudes and their traveltimes generally exists in two forms, with one form based on the Kirchhoff integral formulation, and the other on the uniform asymptotic theory. However, the former encounters singularities at geometrical shadow boundaries, and the latter requires the computation of a Fresnel integral. Therefore, neither of these methods is appropriate for practical applications. Noting the special form of the Fresnel integral, we propose a least-squares fitting method based on double exponential functions to study the amplitude function of diffracted waves. The simple form of the fitting function has no singularities and can accelerate the calculation of diffraction amplitude weakening coefficients. By considering both the fitting weakening function and the polarity reversal property of the diffracted waves, we modify the conventional Kirchhoff imaging conditions and formulate a diffraction imaging formula. The mechanism of the proposed diffraction imaging procedure is based on the edge diffractor, instead of the idealized point diffractor. The polarity reversal property can eliminate the background of strong reflection and enhance the diffraction by same-phase summation. Moreover, the fitting weakening function of diffraction amplitudes behaves like an inherent window to optimize the diffraction imaging aperture by its decaying trend. Synthetic and field data examples reveal that the proposed diffraction imaging method can meet the requirement of high-resolution imaging, with the edge diffraction fully reinforced and the strong reflection mostly eliminated.

Reverse-time migration can accurately image complex geologic structures in anisotropic media. Extended images at selected locations in the Earth, i.e., at common-image-point gathers, carry rich information to characterize the angle-dependent illumination and to provide measurements for migration velocity analysis. However, characterizing the anisotropy influence on such extended images is a challenge. Extended common-image-point gathers are cheap to evaluate since they sample the image at sparse locations indicated by the presence of strong reflectors. Such gathers are also sensitive to velocity errors, which manifest themselves through moveout as a function of space and time lags. Furthermore, inaccurate anisotropy leaves a distinctive signature in common-image-point gathers, which can be used to evaluate anisotropy through techniques similar to the ones used in conventional wavefield tomography. Specifically, inaccurate anisotropy gives rise to a V-shaped residual moveout, with the slope of the “V” flanks depending on the anisotropic parameter η, regardless of the complexity of the velocity model. This signature reflects the fourth-order nature of the anisotropy influence on moveout and appears in extended images once the velocity itself is handled properly in the imaging process. Synthetic and real data observations support this assertion.

Unlike light oils, heavy oils do not have a well-established scheme for modelling elastic moduli from dynamic reservoir properties. One of the main challenges in the fluid substitution of heavy oils is their viscoelastic nature, which is controlled by temperature, pressure, and fluid composition. Here, we develop a framework for fluid substitution modelling that is reliable yet practical for a wide range of cold and thermal recovery scenarios in producing heavy oils and that takes into account the reservoir fluid composition, grounded in the effective-medium theories for estimating elastic moduli of an oil–rock system. We investigate the effect of fluid composition variations on oil–rock elastic moduli with temperature changes. The fluid compositional behaviour is determined by flash calculations. Elastic moduli are then determined using the double-porosity coherent potential approximation method and the calculated viscosity based on the fluid composition. An increase in temperature imposes two opposing mechanisms on the viscosity behaviour of a heavy-oil sample: gas liberation, which tends to increase the viscosity, and melting, which decreases the viscosity. We demonstrate that melting dominates gas liberation, and as a result, the viscosity and, consequently, the shear modulus of the heavy oils always decrease with increasing temperature. Furthermore, it turns out that one can disregard the effects of gas in solution when modelling the elastic moduli of heavy oils. Here, we compare oil–rock elastic moduli when the rock is saturated with fluids that have different viscosity levels. The objective is to characterize a unique relation between the temperature, the frequency, and the elastic moduli of an oil–rock system. We propose an approach that takes advantage of this relation to find the temperature and, consequently, the viscosity in different regions of the reservoir.

Characterizing the pore space of rock samples using three-dimensional (3D) X-ray computed tomography images is a crucial step in digital rock physics. Indeed, the quality of the pore network extracted has a high impact on the prediction of rock properties such as porosity, permeability and elastic moduli. In carbonate rocks, it is usually very difficult to find a single image resolution which fully captures the sample pore network because of the heterogeneities existing at different scales. Hence, to overcome this limitation a multiscale analysis of the pore space may be needed. In this paper, we present a method to estimate porosity and elastic properties of clean carbonate (without clay content) samples from 3D X-ray microtomography images at multiple resolutions. We perform a three-phase segmentation to separate grains, pores and unresolved porous phase using 19 μm resolution images of each core plug. Then, we use images with higher resolution (between 0.3 and 2 μm) of microplugs extracted from the core plug samples. These subsets of images are assumed to be representative of the unresolved phase. We estimate the porosity and elastic properties of each sample by extrapolating the microplug properties to the whole unresolved phase. In addition, we compute the absolute permeability using the lattice Boltzmann method on the microplug images due to the low resolution of the core plug images.

In order to validate the results of the numerical simulations, we compare them with available laboratory measurements at the core plug scale. Simulated porosities for the eight samples agree with the measured values within 13%. Numerical permeability predictions provide realistic values within the range of the experimental data, but with a higher relative error. Finally, the elastic moduli show the largest disagreements, with simulation errors exceeding 150% for three samples.

Least squares Fourier reconstruction is basically a solution to a discrete linear inverse problem that attempts to recover the Fourier spectrum of the seismic wavefield from irregularly sampled data along the spatial coordinates. The estimated Fourier coefficients are then used to reconstruct the data in a regular grid via a standard inverse Fourier transform (inverse discrete Fourier transform or inverse fast Fourier transform).

Unfortunately, this kind of inverse problem is usually under-determined and ill-conditioned. For this reason, the least squares Fourier reconstruction with minimum norm adopts a damped least squares inversion to retrieve a unique and stable solution.
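The damped least-squares step described above can be sketched in one dimension as follows. This is a minimal illustration, not the authors' implementation: it builds the non-uniform Fourier matrix explicitly and uses a direct solve, whereas practical codes rely on iterative solvers and fast transforms; the function name and parameterization are hypothetical.

```python
import numpy as np

def fourier_reconstruct(x, d, nk, damping):
    """Damped least-squares estimate of Fourier coefficients from irregular
    samples, followed by resynthesis on a regular grid.
    x: sample positions normalized to [0, 1); d: data at those positions;
    nk: number of integer wavenumbers; damping: Tikhonov damping weight.
    Minimizes ||A c - d||^2 + damping * ||c||^2 for the coefficients c."""
    k = np.arange(-(nk // 2), nk - nk // 2)            # integer wavenumbers
    A = np.exp(2j * np.pi * np.outer(x, k)) / np.sqrt(nk)
    # Normal equations with damping: (A^H A + eps I) c = A^H d
    c = np.linalg.solve(A.conj().T @ A + damping * np.eye(nk), A.conj().T @ d)
    xr = np.arange(nk) / nk                            # regular output grid
    Ar = np.exp(2j * np.pi * np.outer(xr, k)) / np.sqrt(nk)
    return Ar @ c
```

The damping term is exactly what guarantees a unique, stable solution for under-determined sampling, and it is also the source of the minimum-norm bias and artefacts analysed in this paper.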

In this work, we show how the damping can introduce artefacts in the reconstructed 3D data. To describe this issue quantitatively, we introduce the concept of an “extended” model resolution matrix, and we formulate the reconstruction problem as an appraisal problem. Through the simultaneous analysis of the extended model resolution matrix and of the noise term, we discuss the limits of minimum-norm Fourier reconstruction, assess the validity of the reconstructed data, and identify the possible bias introduced by the inversion process. We can also guide the parameterization of the forward problem to minimize the occurrence of unwanted artefacts. A simple synthetic example and real data from a 3D marine common shot gather are used to discuss our approach and to show the results of minimum-norm Fourier reconstruction.

Airborne geophysical surveys provide spatially continuous regional data coverage, which directly reflects subsurface petrophysical differences and thus the underlying geology. A modern geologic mapping exercise requires the fusion of this information to complement what is typically limited regional outcrop. Often, interpretation of the geophysical data in a geological context is done qualitatively using total field and derivative maps. With a qualitative approach, the resulting map product may reflect the interpreter's bias. Source edge detection provides a quantitative means to map lateral physical property changes in potential and non-potential field data. There are a number of source edge detection algorithms, all of which apply a transformation to convert local signal inflections associated with source edges into local maxima. As a consequence of differences in their computation, the various algorithms generate slightly different results for any given source depth, geometry, contrast, and noise level. To enhance the viability of any detected edge, it is recommended that one combine the output of several source edge detection algorithms. Here we introduce a simple data compilation method, termed edge stacking, which improves the interpretable product of source edge detection through direct gridding, grid addition, and amplitude thresholding. In two examples, a synthetic example and a real-world example from the Bathurst Mining Camp, New Brunswick, Canada, a number of transformation algorithms are applied to gridded geophysical data sets and the resulting source edge detection solutions are combined. Edge stacking combines the benefits and nuances of each source edge detection algorithm; coincident or overlapping and laterally continuous solutions are considered more indicative of a true edge, whereas isolated points are taken as indicative of random noise or false solutions.
When additional data types are available, as in our example, they may also be integrated to create a more complete geologic model. The effectiveness of this method is limited only by the resolution of each survey data set and the necessity of lateral physical property contrasts. The end product is a petrophysical contact map, which, when integrated with known lithological outcrop information, can lead to an improved geological map.
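The grid addition and amplitude thresholding at the heart of edge stacking can be sketched very simply. This is an illustrative reading of the procedure, not the authors' code: each edge-detection grid is normalized so that detectors with different amplitude scales contribute equally, the grids are averaged, and a threshold keeps only locations where several detectors agree.

```python
import numpy as np

def stack_edges(edge_maps, threshold=0.5):
    """Edge stacking: normalize each edge-detection grid to [0, 1], average
    the grids, and zero out amplitudes below a threshold, so that coincident
    responses from several detectors survive while isolated single-detector
    responses (likely noise or false solutions) are suppressed."""
    total = np.zeros_like(edge_maps[0], dtype=float)
    for m in edge_maps:
        rng = m.max() - m.min()
        total += (m - m.min()) / (rng if rng > 0 else 1.0)
    total /= len(edge_maps)
    return np.where(total >= threshold, total, 0.0)
```

With a threshold above 1/n for n detectors, an edge flagged by only one algorithm cannot survive, which is the intended behaviour of the stack.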

The double-square-root equation is commonly used to image data by downward continuation using one-way depth extrapolation methods. A two-way time extrapolation of the double-square-root-derived phase operator allows for up and downgoing wavefields but suffers from an essential singularity for horizontally travelling waves. This singularity is also associated with an anisotropic version of the double-square-root extrapolator. Perturbation theory allows us to separate the isotropic contribution, as well as the singularity, from the anisotropic contribution to the operator. As a result, the anisotropic residual operator is free from such singularities and can be applied as a stand-alone operator to correct for anisotropy. We can apply the residual anisotropy operator even if the original prestack wavefield was obtained using, for example, reverse-time migration. The residual correction is also useful for anisotropic parameter estimation. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach. It also proves useful in approximately imaging the vertical transversely isotropic (VTI) Marmousi model.

Numerical simulation of the acoustic wave equation is widely used to theoretically synthesize seismograms and constitutes the basis of reverse-time migration. With finite-difference methods, the discretization of temporal and spatial derivatives in wave equations introduces numerical grid dispersion. To reduce the grid dispersion effect, we propose to satisfy the dispersion relation for a number of uniformly distributed wavenumber points within a wavenumber range with the upper limit determined by the maximum source frequency, the grid spacing and the wave velocity. This new dispersion-relationship-preserving method relatively uniformly reduces the numerical dispersion over a large-frequency range. Dispersion analysis and seismic numerical simulations demonstrate the effectiveness of the proposed method.
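The coefficient fit described above reduces to an ordinary least-squares problem. The sketch below is illustrative and assumes a symmetric (2N+1)-point second-derivative stencil: plugging a plane wave into the stencil gives a discrete dispersion relation c0 + 2*sum_n cn*cos(n*k*h), which is matched to the exact value -(k*h)^2 at uniformly distributed wavenumber points up to the stated upper limit. Names and the exact sampling are assumptions, not the paper's prescription.

```python
import numpy as np

def drp_coefficients(N, h, kmax, npoints=40):
    """Least-squares fit of a (2N+1)-point second-derivative stencil so that
    its discrete dispersion relation matches the exact one, -k^2, at `npoints`
    uniformly distributed wavenumbers in (0, kmax].
    Returns c[0..N] with f''(x) ~ (1/h^2)*(c[0]*f_0 + sum_n c[n]*(f_{+n}+f_{-n}))."""
    k = np.linspace(kmax / npoints, kmax, npoints)
    G = np.empty((npoints, N + 1))
    G[:, 0] = 1.0
    for n in range(1, N + 1):
        G[:, n] = 2.0 * np.cos(n * k * h)   # cosine terms of the stencil
    rhs = -(k * h) ** 2                      # exact dispersion relation
    c, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return c
```

Unlike a Taylor-series stencil, which is exact only near k = 0, this fit spreads the dispersion error relatively uniformly over the whole fitted wavenumber band, which is the stated goal of the dispersion-relationship-preserving scheme.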

Most modern seismic imaging methods separate input data into parts (shot gathers). We develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield forward or backward in time. This approach has the potential for generating accurate images free of artefacts associated with conventional approaches. We derive novel high-order partial differential equations in the source–receiver time domain. The fourth-order nature of the extrapolation in time leads to four solutions, two of which correspond to the incoming and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. A challenge for implementing two-way time extrapolation is an essential singularity for horizontally travelling waves. This singularity can be avoided by limiting the range of wavenumbers treated in a spectral-based extrapolation. Using spectral methods based on the low-rank approximation of the propagation symbol, we extrapolate only the desired solutions in an accurate and efficient manner with reduced dispersion artefacts. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach.

The estimation of a velocity model from seismic data is a crucial step for obtaining a high-quality image of the subsurface. Velocity estimation is usually formulated as an optimization problem where an objective function measures the mismatch between synthetic and recorded wavefields and its gradient is used to update the model. The objective function can be defined in the data-space (as in full-waveform inversion) or in the image space (as in migration velocity analysis). In general, the latter leads to smooth objective functions, which are monomodal in a wider basin about the global minimum compared to the objective functions defined in the data-space. Nonetheless, migration velocity analysis requires construction of common-image gathers at fixed spatial locations and subsampling of the image in order to assess the consistency between the trial velocity model and the observed data. We present an objective function that extracts the velocity error information directly in the image domain without analysing the information in common-image gathers. In order to include the full complexity of the wavefield in the velocity estimation algorithm, we consider a two-way (as opposed to one-way) wave operator, we do not linearize the imaging operator with respect to the model parameters (as in linearized wave-equation migration velocity analysis) and compute the gradient of the objective function using the adjoint-state method. We illustrate our methodology with a few synthetic examples and test it on a real 2D marine streamer data set.

4D seismic is widely used to remotely monitor fluid movement in subsurface reservoirs. This technique is especially effective offshore where high survey repeatability can be achieved. It comes as no surprise that the first 4D seismic that successfully monitored the CO_{2} sequestration process was recorded offshore in the Sleipner field, North Sea. In the case of land projects, poor repeatability of the land seismic data due to low S/N ratio often obscures the time-lapse seismic signal. Hence, for a successful onshore monitoring program, improving seismic repeatability is essential.

Stage 2 of the CO2CRC Otway project involves an injection of a small amount (around 15,000 tonnes) of CO_{2}/CH_{4} gas mixture into a saline aquifer at a depth of approximately 1.5 km. Previous studies at this site showed that seismic repeatability is relatively low due to variations in weather conditions, near-surface geology, and farming activities. In order to improve time-lapse seismic monitoring capabilities, a permanent receiver array can be utilised to improve the signal-to-noise ratio and hence repeatability.

A small-scale trial of such an array was conducted at the Otway site in June 2012. A set of 25 geophones was installed in 3 m deep boreholes, in parallel with the same number of surface geophones. In addition, four geophones were placed in boreholes at depths ranging from 1 m to 12 m. In order to assess the gain in signal-to-noise ratio and repeatability, both active and passive seismic surveys were carried out. The surveys were conducted in relatively poor weather conditions, with rain, strong wind, and thunderstorms. Even with such an amplified background noise level, we found that the noise level for the buried geophones is on average 20 dB lower than for the surface geophones.

The levels of repeatability for borehole geophones, estimated around the direct wave, reflected waves, and ground roll, are twice as high as for the surface geophones. Both borehole and surface geophones produce the best repeatability in the 30–90 Hz frequency range. Analysis of the influence of burial depth on S/N ratio and repeatability shows that a significant improvement in repeatability is already reached at a depth of 3 m; the level of repeatability remains relatively constant between 3 and 12 m depth.
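The abstract does not state how repeatability is quantified; the standard measure in time-lapse seismic is the normalized RMS (NRMS) difference between repeated traces, sketched here under that assumption.

```python
import numpy as np

def nrms(a, b):
    """Normalized RMS difference between two repeated traces, expressed in
    percent: 0% for identical traces, up to 200% for anti-correlated ones.
    Lower NRMS means better repeatability."""
    rms = lambda x: np.sqrt(np.mean(x**2))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))
```

In this convention, halving the NRMS of the borehole geophones relative to the surface geophones corresponds to a doubling of the repeatability level quoted above.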

We consider the problem of simultaneously estimating three parameters of multiple microseismic events, i.e., the hypocenter, moment tensor, and origin time. This problem is of great interest because its solution could provide a better understanding of reservoir behavior and can help to optimize the hydraulic fracturing process. The existing approaches employing spatial source sparsity have advantages over traditional full-wave inversion-based schemes; however, their validity and accuracy depend on knowledge of the source time function, which is lacking in practical applications. This becomes even more challenging when multiple microseismic sources appear simultaneously. To cope with this shortcoming, we propose to approach the problem from a frequency-domain perspective and develop a novel sparsity-aware framework that is blind to the source time function. Through our simulation results with synthetic data, we illustrate that our proposed approach can handle multiple microseismic sources and can estimate their hypocenters with acceptable accuracy. The results also show that our approach can estimate the normalized amplitude of the moment tensors as a by-product, which can provide worthwhile information about the nature of the sources.

Full-waveform inversion is an appealing technique for time-lapse imaging, especially when prior model information is included in the inversion workflow. Once the baseline reconstruction is achieved, several strategies can be used to assess the physical parameter changes, such as the parallel difference (two separate inversions of the baseline and monitor data sets), sequential difference (inversion of the monitor data set starting from the recovered baseline model) and double-difference (inversion of the difference data starting from the recovered baseline model) strategies. Using synthetic Marmousi data sets, we investigate which strategy should be adopted to obtain more robust and more accurate time-lapse velocity changes in noise-free and noisy environments. This synthetic application demonstrates that the double-difference strategy provides the most robust time-lapse result. In addition, we propose target-oriented time-lapse imaging using regularized full-waveform inversion including a prior model and model weighting, when prior information exists on the location of the expected variations. This scheme applies strong prior-model constraints outside the expected areas of time-lapse changes and relatively weak prior constraints in the time-lapse target zones. In an application of this process to the Marmousi model data set, a local resolution analysis performed with spike tests shows that the target-oriented inversion prevents the occurrence of artefacts outside the target areas, which could otherwise contaminate and compromise the reconstruction of the true time-lapse changes, especially when the sequential difference strategy is used. In a strongly noisy case, the target-oriented prior-model weighting ensures the same behaviour for the double-difference and sequential difference strategies and leads to a more robust reconstruction of the weak time-lapse changes.
The double-difference strategy can deliver more accurate time-lapse variations because it focuses the inversion on the difference data. However, it requires a pre-processing step on the data sets, such as time-lapse binning, to obtain similar source/receiver locations between the two surveys, whereas the sequential difference strategy is less demanding in this respect. If prior information about the area of changes is available, the target-oriented sequential difference strategy is an alternative that can provide results as robust as those of the double-difference strategy.
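The three strategies can be contrasted schematically. The sketch below substitutes a toy linear least-squares "inversion" for full-waveform inversion (all matrices and names are illustrative assumptions); in this noise-free setting, all three estimates recover the true change:

```python
import numpy as np

def invert(data, G, m0, n_iter=2000):
    """Toy stand-in for FWI: steepest descent on ||G m - data||^2,
    started from the model m0 (the warm start is what distinguishes
    the strategies)."""
    step = 0.9 / np.linalg.norm(G.T @ G, 2)
    m = m0.copy()
    for _ in range(n_iter):
        m -= step * (G.T @ (G @ m - data))
    return m

rng = np.random.default_rng(1)
G = rng.normal(size=(8, 4))                 # hypothetical forward operator
m_base = rng.normal(size=4)
dm_true = np.array([0.0, 0.5, 0.0, 0.0])    # true time-lapse change
d_base, d_mon = G @ m_base, G @ (m_base + dm_true)

m0 = np.zeros(4)
mb = invert(d_base, G, m0)                  # baseline reconstruction

# Parallel difference: two independent inversions from the same start.
dm_par = invert(d_mon, G, m0) - mb
# Sequential difference: monitor inversion warm-started from the baseline.
dm_seq = invert(d_mon, G, mb) - mb
# Double difference: invert the data difference around the baseline model.
dm_dd = invert(d_mon - d_base, G, np.zeros(4))
# In this noise-free toy, all three estimates agree with dm_true.
```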

In this paper, we discuss high-resolution coherence functions for the estimation of stacking parameters in seismic signal processing. We focus on multiple signal classification (MUSIC), which uses the eigendecomposition of the seismic data to measure the coherence along stacking curves. This algorithm can outperform the traditional semblance in cases of close or interfering reflections, generating a sharper velocity spectrum. Our main contribution is to propose complexity-reducing strategies for its implementation that make it a feasible alternative to semblance. First, we show how to compute the multiple signal classification spectrum based on the eigendecomposition of the temporal correlation matrix of the seismic data. This matrix has a lower order than the spatial correlation matrix used by other methods, so computing its eigendecomposition is simpler. Then we show how to compute its coherence measure in terms of the signal subspace of the seismic data. This further reduces the computational cost, as we now have to compute fewer eigenvectors than are required by the noise subspace currently used in the literature. Furthermore, we show how these eigenvectors can be computed with the low-complexity power method. As a result of these simplifications, we show that the complexity of computing the multiple signal classification velocity spectrum is only about three times that of semblance. We also propose a new normalization function to deal with the high dynamic range of the velocity spectrum. Numerical examples with synthetic and real seismic data indicate that the proposed approach provides stacking parameters with better resolution than conventional semblance, at an affordable computational cost.
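A minimal sketch of the signal-subspace idea (an illustrative reading, not the authors' implementation): form a temporal correlation matrix from trace windows aligned along a trial stacking curve, extract the dominant eigenvector with the power method, and measure the fraction of energy captured by that signal subspace:

```python
import numpy as np

def power_method(R, n_iter=200):
    """Leading eigenvector of a symmetric matrix via power iteration."""
    v = np.ones(R.shape[0]) / np.sqrt(R.shape[0])
    for _ in range(n_iter):
        w = R @ v
        v = w / np.linalg.norm(w)
    return v

def subspace_coherence(traces):
    """Fraction of energy in the rank-1 signal subspace.

    `traces` has shape (n_receivers, n_samples): each row is a trace
    window aligned along a trial stacking curve."""
    R = traces.T @ traces / traces.shape[0]   # temporal correlation matrix
    v = power_method(R)
    lam = v @ R @ v                           # Rayleigh quotient ~ largest eigenvalue
    return lam / np.trace(R)

# Hypothetical demo: well-aligned traces vs. pure noise.
rng = np.random.default_rng(3)
n_rec, n_t = 24, 12
s = np.sin(np.linspace(0, 2 * np.pi, n_t))    # wavelet inside the window
amps = 1.0 + 0.1 * rng.normal(size=n_rec)
aligned = np.outer(amps, s) + 0.05 * rng.normal(size=(n_rec, n_t))
incoherent = rng.normal(size=(n_rec, n_t))
c_sig = subspace_coherence(aligned)     # close to 1 for a coherent event
c_noise = subspace_coherence(incoherent)  # much smaller for noise
```

The correlation matrix here is n_samples × n_samples, which mirrors the abstract's point: when the window is short, its eigendecomposition is cheaper than that of a larger spatial correlation matrix.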

Scattered ground roll is a type of noise observed in land seismic data that can be particularly difficult to suppress. Typically, this type of noise cannot be removed using conventional velocity-based filters. In this paper, we discuss a model-driven form of seismic interferometry that allows suppression of scattered ground-roll noise in land seismic data. The conventional cross-correlate and stack interferometry approach results in scattered noise estimates between two receiver locations (i.e. as if one of the receivers had been replaced by a source). For noise suppression, this requires that each source we wish to attenuate the noise from is co-located with a receiver. The model-driven form differs, as the use of a simple model in place of one of the inputs for interferometry allows the scattered noise estimate to be made between a source and a receiver. This allows the method to be more flexible, as co-location of sources and receivers is not required, and the method can be applied to data sets with a variety of different acquisition geometries. A simple plane-wave model is used, allowing the method to remain relatively data driven, with weighting factors for the plane waves determined using a least-squares solution. Using a number of both synthetic and real two-dimensional (2D) and three-dimensional (3D) land seismic data sets, we show that this model-driven approach provides effective results, allowing suppression of scattered ground-roll noise without having an adverse effect on the underlying signal.
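The conventional cross-correlate and stack step referred to above can be sketched as follows (synthetic traces, with a single hypothetical arrival delayed by five samples between the two receivers):

```python
import numpy as np

def correlate_and_stack(rec_a, rec_b):
    """Cross-correlate the traces recorded at receivers A and B for each
    source, then stack over sources: the classical interferometric
    estimate of the response between the two receiver positions."""
    n = rec_a.shape[1]
    stack = np.zeros(2 * n - 1)
    for a, b in zip(rec_a, rec_b):
        stack += np.correlate(b, a, mode="full")
    return stack

# Hypothetical example: the wavefield reaches B 5 samples after A for
# every source, so the stacked correlation peaks at a lag of +5 samples.
rng = np.random.default_rng(2)
n_src, n_t, lag = 30, 200, 5
rec_a = rng.normal(size=(n_src, n_t))
rec_b = np.roll(rec_a, lag, axis=1)
c = correlate_and_stack(rec_a, rec_b)
peak_lag = np.argmax(c) - (n_t - 1)
# peak_lag == 5
```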

The nonlinearity of the seismic amplitude-variation-with-offset response is investigated with physical modelling data. Nonlinearity in amplitude-variation-with-offset becomes important in the presence of large relative changes in acoustic and elastic medium properties. A procedure for pre-processing physical modelling reflection data is enacted on the reflection from a water–plexiglas boundary. The resulting picked and processed amplitudes are compared with the exact solutions of the plane-wave Zoeppritz equations, as well as approximations that are first, second, and third order in the relative contrasts ΔVP/VP, ΔVS/VS, and Δρ/ρ. In the low angle range of 0°–20°, the third-order plane-wave approximation is sufficient to capture the nonlinearity of the amplitude-variation-with-offset response of a liquid–solid boundary with VP, VS, and ρ contrasts of 1485–2745 m/s, 0–1380 m/s, and 1.00–1.19 g/cc respectively, to an accuracy of roughly 1%. This is in contrast to the linear Aki–Richards approximation, which is in error by as much as 25% in the same angle range. Even-order nonlinear corrective terms are observed to be primarily involved in correcting the angle dependence of the reflection coefficient, whereas the odd-order nonlinear terms are involved in determining the absolute amplitude-variation-with-offset magnitudes.
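For reference, the first-order (linear) Aki–Richards term discussed above can be evaluated at the quoted water–plexiglas contrasts; the exact Zoeppritz solution and the higher-order terms are omitted in this sketch:

```python
import numpy as np

def aki_richards(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Three-term linear Aki-Richards P-P reflection coefficient,
    using interface-averaged properties."""
    theta = np.radians(theta_deg)
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2 * np.sin(theta) ** 2
    return (0.5 * (1 - 4 * k) * drho / rho
            + dvp / (2 * vp * np.cos(theta) ** 2)
            - 4 * k * dvs / vs)

# Water-plexiglas contrasts quoted in the text (units: m/s, m/s, g/cc).
theta = np.linspace(0.0, 20.0, 5)
r = aki_richards(1485.0, 0.0, 1.00, 2745.0, 1380.0, 1.19, theta)

# Normal-incidence check against the exact impedance contrast:
r0_exact = (2745 * 1.19 - 1485 * 1.00) / (2745 * 1.19 + 1485 * 1.00)
# r[0] is about 0.385 while r0_exact is about 0.375: the linear
# approximation already deviates at zero offset for contrasts this large,
# consistent with the large errors reported for the Aki-Richards form.
```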

Naturally fractured reservoirs are becoming increasingly important for oil and gas exploration in many areas of the world. Because fractures may control the permeability of a reservoir, it is important to be able to find and characterize fractured zones. In fractured reservoirs, the wave-induced fluid flow between pores and fractures can cause significant dispersion and attenuation of seismic waves. For waves propagating normal to the fractures, this effect has been quantified in earlier studies. Here we extend the normal-incidence results to oblique incidence using known expressions for the stiffness tensors in the low- and high-frequency limits. This allows us to quantify frequency-dependent anisotropy due to the wave-induced flow between pores and fractures and gives a simple recipe for computing phase velocities and attenuation factors of quasi-P and SV waves as functions of frequency and angle. These frequency and angle dependencies are concisely expressed through dimensionless velocity anisotropy and attenuation anisotropy parameters. It is found that, although at low frequencies the medium is close to elliptical (which is to be expected, as a dry medium containing a distribution of penny-shaped cracks is known to be close to elliptical), at high frequencies the coupling between the P- and SV-waves results in anisotropy due to the non-vanishing excess tangential compliance.

The Eagle Ford Shale of Central and South Texas is currently of great interest for oil and gas exploration and production. Laboratory studies show that the Eagle Ford Shale is anisotropic, with a correlation between anisotropy and total organic carbon. Organic materials are usually more compliant than other minerals present in organic-rich shales, and their shapes and distribution are usually anisotropic. This makes organic materials an important source of anisotropy in organic-rich shales. Neglecting shale anisotropy may lead to incorrect estimates of rock and fluid properties derived from inversion of amplitude versus offset seismic data. Organic materials have a significant effect on the PP and PS reflection amplitudes from the Austin Chalk/Upper Eagle Ford interface, the Upper Eagle Ford/Lower Eagle Ford interface, and the Lower Eagle Ford/Buda Limestone interface. The higher kerogen content of the Lower Eagle Ford compared with that of the Upper Eagle Ford leads to a negative PP reflection amplitude that dims with offset, whereas the PS reflection coefficient increases in magnitude with increasing offset. The PP and PS reflection coefficients at the Austin Chalk/Upper Eagle Ford interface, the Upper Eagle Ford/Lower Eagle Ford interface, and the Lower Eagle Ford/Buda Limestone interface all increase in magnitude with increasing volume fraction of kerogen.

Blast damage to the tops of coal seams due to incorrect blast standoff distances is a serious issue, costing the Australian industry the equivalent of about one open-cut mine for every ten operating mines. The current approach to mapping coal-seam tops is through drilling and pierce-point logging. To provide appropriate depth control, with an accuracy of ±0.2 m for blast-hole drilling, it is typically necessary to drill deep reconnaissance boreholes on a 50 m × 50 m grid well in advance of overburden removal. Pierce-point mapping is expensive and can be inaccurate, particularly when the seam is disturbed by rolls, faults, and other obstacles. Numerical modelling and prototype field testing are used in this paper to demonstrate the feasibility of two seismic-while-drilling approaches for predicting the approach to the top of coal during blast-hole drilling: (i) reverse "walk-away" vertical seismic profiling, in which the drill-bit vibration provides the source signal and the geophones are planted on the surface near the drill rig, and (ii) in-seam seismic recording, in which channel waves, driven by the coupling to the coal of the seismic signal emitted by the approaching drill bit, are guided by the seam to geophones located within the seam in nearby or remote boreholes.

Predicting reservoir parameters, such as porosity, lithology, and saturations, from geophysical parameters is a problem with non-unique solutions. The variance in solutions can be extensive, especially for saturation and lithology. However, the reservoir parameters will typically vary smoothly within certain zones—in vertical and horizontal directions. In this work, we integrate spatial correlations in the predicted parameters to constrain the range of predicted solutions from a particular type of inverse rock physics modelling method. Our analysis is based on well-log data from the Glitne field, where vertical correlations with depth are expected. It was found that the reservoir parameters with the shortest depth correlation (lithology and saturation) provided the strongest constraint to the set of solutions. In addition, due to the interdependence between the reservoir parameters, constraining the predictions by the spatial correlation of one parameter also reduced the number of predictions of the other two parameters. Moreover, the use of additional constraints such as measured log data at specific depth locations can further narrow the range of solutions.

We compare selected marine electromagnetic methods for sensitivity to the presence of relatively thin resistive targets (e.g., hydrocarbons, gas hydrates, fresh groundwater, etc.). The study includes the conventional controlled-source electromagnetic method, the recently introduced transient electromagnetic prospecting with vertical electric lines method, and the novel marine circular electric dipole method, which is still in the stage of theoretical development. The comparison is based on general physical considerations, analytical (mainly asymptotic) analysis, and rigorous one-dimensional and multidimensional forward modelling. It is shown that the transient electromagnetic prospecting with vertical electric lines and marine circular electric dipole methods represent an alternative to the conventional controlled-source electromagnetic method in shallow sea, where the latter becomes less efficient due to the air-wave phenomenon. Since both of these methods are essentially short-offset time-domain techniques, they exhibit a much better lateral resolution than the controlled-source electromagnetic method in both shallow and deep sea. Their greatest shortcoming comes from the difficulty of accurately assembling the transmitter antenna in the marine environment, which makes them significantly less practical than the controlled-source electromagnetic method. Consequently, the controlled-source electromagnetic method remains the leading marine electromagnetic technique for the exploration of large resistive targets in deep sea.
However, exploring laterally small targets in deep sea, and both small and large targets in shallow sea, may nevertheless require the use of the less practical transient electromagnetic prospecting with vertical electric lines and/or marine circular electric dipole methods as alternatives to the controlled-source electromagnetic method.

Over the last few decades, very low frequency electromagnetics has been widely and successfully applied in mineral and groundwater exploration. Many radio transmitters with strong signal-to-noise ratios are scattered across the very low frequency and low frequency bands. Based on experience gained from ground measurements with the radio-magnetotelluric technique operating in the frequency interval 1–250 kHz, broadband magnetometers have been used to cover both the very low frequency (3–30 kHz) and low frequency (30–300 kHz) bands to increase the resolution of near-surface structure. A metallic aircraft, as a conductive body, will distort the magnetic signal to some extent, and it is therefore important to investigate aircraft interference on the electromagnetic signal. We studied the noise caused by the rotation of an aircraft and by the aircraft itself as a metallic conductive body with three methods: 3D wave polarization, determination of transmitter direction and full tipper estimation. Both the very low frequency and low frequency bands were investigated. The results show that the magnetic field is independent of the aircraft at low frequencies, in the very low frequency band and part of the low frequency band (below 100 kHz). At high frequencies (above 100 kHz), the signals are more strongly influenced by the aircraft, and the wave polarization directions are more scattered, as observed when the aircraft turned. Some aircraft-generated noise mixed with the radio transmitter signals and was detected as 'dummy' signals by the 3D wave polarization method. The estimated scalar magnetic transfer functions depend on the aircraft flight direction at high frequencies because of aircraft interference. The aircraft eigenresponse in the transfer functions (tippers) between the vertical and horizontal magnetic field components was compensated for in the real part of the estimated tippers, but some unknown effect was still observed in the imaginary parts.

Three-component borehole magnetics provides important additional information compared to total-field or horizontal and vertical measurements. These data can be used for several tasks, such as the localization of ferromagnetic objects, the determination of apparent polar wander curves and the computation of the magnetization of rock units. However, the crucial point in three-component borehole magnetics is the reorientation of the measured data from the tool's frame to the geographic reference frame (North, East and Down). For this purpose, our tool, the Göttinger Borehole Magnetometer, comprises three orthogonally aligned fibre-optic gyros along with three fluxgate sensors. With these sensors, the vector of the magnetic field and the tool rotation can be recorded continuously during the measurement. Using the high-precision gyro data, we can compute the vector of the magnetic anomaly with respect to the Earth's reference frame. Based on the comparison of several logs measured in the Outokumpu Deep Drill Hole (OKU R2500, Finland), the repeatability of the magnetic field vector is 0.8° in azimuth, 0.08° in inclination and 71 nT in magnitude.
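The reorientation step can be sketched as a frame rotation (the z-y-x Euler convention and all numbers below are illustrative assumptions; the actual tool integrates the fibre-optic gyro rates continuously):

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Tool-to-geographic (North, East, Down) rotation from Euler angles
    in radians, using the aerospace z-y-x convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Hypothetical example: a tool rotated 90 degrees about its axis still
# recovers the same geographic field vector after reorientation.
b_ned_true = np.array([15000.0, 800.0, 48000.0])   # nT, made-up field
R = rotation_matrix(np.radians(90.0), 0.0, 0.0)
b_tool = R.T @ b_ned_true      # what the fluxgates measure in the tool frame
b_ned = R @ b_tool             # reorientation back to North, East, Down
# b_ned matches b_ned_true
```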

Time-domain electromagnetic data are conveniently inverted using smoothly varying 1D models with a fixed vertical discretization. The vertical smoothness of the obtained models stems from the application of Occam-type regularization constraints, which are meant to address the ill-posedness of the problem. An important side effect of such regularization, however, is that horizontal layer boundaries can no longer be accurately reproduced, as the model is required to be smooth. This issue can be overcome by inverting for fewer layers with variable thicknesses; nevertheless, deciding on a particular, constant number of layers for the parameterization of a large survey inversion can be equally problematic.

Here, we present a focusing regularization technique to obtain the best of both methodologies. The new focusing approach allows for accurate reconstruction of resistivity distributions using a fixed vertical discretization while preserving the capability to reproduce horizontal boundaries. The formulation is flexible and can be coupled with traditional lateral/spatial smoothness constraints in order to resolve interfaces in stratified soils with no additional hypothesis about the number of layers. The method relies on minimizing the number of layers of non-vanishing resistivity gradient, instead of minimizing the norm of the model variation itself. This approach ensures that the results are consistent with the measured data while favouring, at the same time, the retrieval of horizontal abrupt changes. In addition, the focusing regularization can also be applied in the horizontal direction in order to promote the reconstruction of lateral boundaries such as faults.

We present the theoretical framework of our regularization methodology and illustrate its capabilities by means of both synthetic and field data sets. We further demonstrate how the concept has been integrated in our existing spatially constrained inversion formalism and show its application to large-scale time-domain electromagnetic data inversions.
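The focusing idea can be illustrated with a minimum-gradient-support style penalty, a common choice for focusing regularization (whether it matches the exact functional used here is an assumption): unlike the Occam-type L2 penalty, it saturates for large gradients and so effectively counts interfaces instead of penalizing their sharpness:

```python
import numpy as np

def smooth_penalty(m, dz=1.0):
    """Occam-style L2 penalty on the vertical resistivity gradient."""
    g = np.diff(m) / dz
    return np.sum(g ** 2)

def focusing_penalty(m, beta=1e-2, dz=1.0):
    """Minimum-gradient-support penalty: g^2 / (g^2 + beta^2) tends to
    count the non-vanishing gradients rather than their magnitude."""
    g = np.diff(m) / dz
    return np.sum(g ** 2 / (g ** 2 + beta ** 2))

# Two models with one interface: a sharp step and a smeared ramp.
sharp = np.r_[np.full(10, 10.0), np.full(10, 100.0)]
ramp = np.linspace(10.0, 100.0, 20)

s_sharp, s_ramp = smooth_penalty(sharp), smooth_penalty(ramp)
f_sharp, f_ramp = focusing_penalty(sharp), focusing_penalty(ramp)
# s_sharp >> s_ramp: L2 smoothing heavily penalizes the sharp interface.
# f_sharp is about 1 and f_ramp about 19: the focusing penalty counts
# non-zero gradients, so it prefers the single sharp step.
```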

Several power-law relationships of geophysical potential fields have been discussed recently with renewed interest, including field value–distance and power spectrum–wavenumber models. The singularity mapping technique based on the density/concentration–area (C–A) power-law model is applied to act as a high-pass filter, extracting gravity and magnetic anomalies regardless of the background value and detecting the edges of gravity or magnetic sources, with the advantage of scale invariance. This is demonstrated on a synthetic example and a case study from the Nanling mineral district, Southern China. Compared with the analytic signal amplitude and total horizontal gradient methods, the singularity mapping technique provides more distinct and less noisy granite boundaries. Additionally, it is efficient for enhancing and outlining weak anomalies caused by concealed granitic intrusions, indicating that the singularity method based on multifractal analysis is a potential tool for processing gravity and magnetic data.
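The singularity mapping idea can be sketched as a local window-scaling estimate (a windowed variant of the C–A power-law model; the exact estimator used in the paper may differ):

```python
import numpy as np

def singularity_index(field, i, j, sizes=(1, 2, 3, 4)):
    """Estimate the local singularity exponent alpha at pixel (i, j) by
    fitting mean(|field| in window of half-width r) ~ c * r**(alpha - 2)
    in log-log coordinates; alpha < 2 flags a positive singularity
    (local enrichment), alpha = 2 a non-singular background."""
    logs_r, logs_m = [], []
    for r in sizes:
        w = field[i - r:i + r + 1, j - r:j + r + 1]
        logs_r.append(np.log(2 * r + 1))
        logs_m.append(np.log(np.mean(np.abs(w))))
    slope, _ = np.polyfit(logs_r, logs_m, 1)
    return slope + 2.0

# Hypothetical demo: a constant background vs. an isolated spike.
flat = np.ones((21, 21))
spike = np.zeros((21, 21))
spike[10, 10] = 1.0
alpha_flat = singularity_index(flat, 10, 10)    # 2.0: no singularity
alpha_spike = singularity_index(spike, 10, 10)  # 0.0: strong singularity
```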