Four-dimensional imaging using geophysical data is of increasing interest in the oil and gas industry. While travel-time and amplitude variations are commonly used to monitor reservoir properties at depth, their interpretation can suffer from a lack of information with which to separate the parts played by different parameters. In this context, this study focuses on the slowness and azimuth angle measured at the surface using source and receiver arrays as complementary observables. In the first step, array processing techniques are used to extract both azimuth and incidence angles at the source side (departure angles) and at the receiver side (arrival angles). In the second step, the slowness and angle variations are monitored in a laboratory environment. These new observables are compared with traditional arrival-time variations when the propagation medium is subject to temperature fluctuations. Finally, field data from a heavy-oil permanent reservoir monitoring system installed onshore and subject to steam injection and temperature variations are investigated. The slowness variations are computed over a period of 152 days. In agreement with Fermat's principle, strong correlations between the slowness and arrival-time variations are highlighted, as well as good consistency with other techniques and field pressure measurements. Although the temporal variations of slowness and arrival time show the same features, there are still differences that can be exploited for further characterization of the physical changes at depth.

Surface waves in seismic data are often dominant in a land or shallow-water environment. Separating them from primaries is of great importance either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near-surface characterization. However, their complex properties make the surface-wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three-dimensional surface-wave estimation and separation using an iterative closed-loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward-modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface-wave model is parameterized by the frequency-dependent slowness and source properties for each surface-wave mode. The optimal parameters are estimated in such a way that the residual is minimized and, consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them out from the seismic data. In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.
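The closed-loop idea above can be illustrated with a minimal sketch: a forward model of a single surface-wave mode is repeatedly subtracted from the observed data, and the model parameter is chosen to minimize the residual. Everything below (a Ricker wavelet, a single mode, one frequency-independent slowness, the noise level) is an assumption of this sketch, not the parameterization used in the paper.

```python
import numpy as np

# Illustrative closed loop: a "surface wave" is modelled as a known wavelet
# delayed by slowness * offset, and the slowness is estimated by minimizing
# the residual between the modelled and observed data.

def ricker(t, f0=10.0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def model_surface_wave(slowness, offsets, t):
    # One trace per offset: wavelet delayed by slowness * offset.
    return np.array([ricker(t - slowness * x) for x in offsets])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 801)            # time axis [s]
offsets = np.linspace(100.0, 1000.0, 10)  # offsets [m]

true_slowness = 1.0 / 400.0               # 400 m/s surface-wave velocity
observed = model_surface_wave(true_slowness, offsets, t)
observed += 0.05 * rng.standard_normal(observed.shape)

# Closed loop: forward model -> subtract -> evaluate residual energy.
candidates = 1.0 / np.linspace(200.0, 800.0, 121)
residuals = [np.sum((observed - model_surface_wave(s, offsets, t)) ** 2)
             for s in candidates]
best = candidates[int(np.argmin(residuals))]
print(1.0 / best)  # estimated surface-wave velocity, close to 400 m/s
```

In the paper the forward model carries frequency-dependent slowness and source properties per mode and the subtraction is adaptive; the grid search here merely shows how the residual closes the loop.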

We study the azimuthally dependent hyperbolic moveout approximation for small angles (or offsets) for quasi-compressional, quasi-shear, and converted waves in one-dimensional multi-layer orthorhombic media. The vertical orthorhombic axis is the same for all layers, but the azimuthal orientation of the horizontal orthorhombic axes at each layer may be different. By starting with the known equation for normal moveout velocity with respect to the surface-offset azimuth and applying our derived relationship between the surface-offset azimuth and phase-velocity azimuth, we obtain the normal moveout velocity versus the phase-velocity azimuth. As the surface offset/azimuth moveout dependence is required for analysing azimuthally dependent moveout parameters directly from time-domain rich azimuth gathers, our phase angle/azimuth formulas are required for analysing azimuthally dependent residual moveout along the migrated local-angle-domain common image gathers. The angle and azimuth parameters of the local-angle-domain gathers represent the opening angle between the incidence and reflection slowness vectors and the azimuth of the phase velocity ψ_{phs} at the image points in the specular direction. Our derivation of the effective velocity parameters for a multi-layer structure is based on the fact that, for a one-dimensional model assumption, the horizontal slowness and the azimuth of the phase velocity ψ_{phs} remain constant along the entire ray (wave) path. We introduce a special set of auxiliary parameters that allow us to establish equivalent effective model parameters in a simple summation manner. We then transform this set of parameters into three widely used effective parameters: fast and slow normal moveout velocities and azimuth of the slow one. For completeness, we show that these three effective normal moveout velocity parameters can be equivalently obtained in both surface-offset azimuth and phase-velocity azimuth domains.
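The multi-layer averaging of azimuthal moveout parameters can be sketched numerically. The snippet below uses the standard NMO-ellipse representation and a generalized Dix-type average of interval matrices (in the style of Grechka and Tsvankin), which is a stand-in for the paper's auxiliary-parameter summation; the layer values are illustrative, not taken from the paper.

```python
import numpy as np

# Effective NMO ellipse of stacked layers whose horizontal symmetry axes have
# different azimuths, via a time-weighted (Dix-type) average of the interval
# NMO matrices. The three effective parameters (fast/slow NMO velocities and
# slow-velocity azimuth) are then read off from an eigen-decomposition.

def interval_matrix(v_fast, v_slow, az_slow_deg):
    # W^{-1} has eigenvalues v_fast^2, v_slow^2; the slow eigenvector points
    # along the slow-velocity azimuth.
    a = np.radians(az_slow_deg)
    slow = np.array([np.cos(a), np.sin(a)])
    fast = np.array([-np.sin(a), np.cos(a)])
    return v_slow**2 * np.outer(slow, slow) + v_fast**2 * np.outer(fast, fast)

layers = [  # (interval time [s], v_fast [km/s], v_slow [km/s], slow azimuth [deg])
    (0.8, 2.2, 2.0, 20.0),
    (1.2, 3.1, 2.8, 60.0),
]

t_total = sum(dt for dt, *_ in layers)
w_inv_eff = sum(dt * interval_matrix(vf, vs, az)
                for dt, vf, vs, az in layers) / t_total

vals, vecs = np.linalg.eigh(w_inv_eff)   # eigenvalues in ascending order
v_slow_eff, v_fast_eff = np.sqrt(vals)
az_slow_eff = np.degrees(np.arctan2(vecs[1, 0], vecs[0, 0])) % 180.0
print(v_fast_eff, v_slow_eff, az_slow_eff)
```

The effective slow azimuth falls between the two interval azimuths, weighted toward the thicker, faster layer, which is the qualitative behaviour the effective-parameter transformation in the paper captures exactly.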

In theory, the streaming potential coefficient depends not only on the zeta potential but also on the permeability of the rocks, which partially determines their surface conductivity. In practice, however, it is hard to demonstrate this permeability dependence because the zeta potential varies from sample to sample. To study the permeability dependence of the streaming potential coefficient, including the effects of sample-to-sample variations in zeta potential and surface conductance due to differences in mineral composition, we measure the streaming potential coefficients of 12 consolidated samples, both natural and artificial, saturated with 7 different NaCl solutions. The results show that the streaming potential coefficients depend strongly on the permeability of the samples at low fluid conductivity. When the fluid conductivity is larger than 0.50 S/m for the natural samples or 0.25 S/m for the artificial ceramic samples, the streaming potential coefficient is independent of permeability. This behavior is quantitatively explained by a theoretical model.
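The qualitative behaviour can be sketched with the Helmholtz–Smoluchowski form carrying a surface-conductivity correction. The pore radius stands in for permeability, and all numerical values below are typical literature magnitudes, not the paper's measurements.

```python
import numpy as np

# Hedged sketch of why the streaming potential coefficient depends on pore
# size (hence permeability) only at low fluid conductivity:
#   C = -eps * zeta / (eta * (sigma_f + 2 * Sigma_s / r)),
# where r is an effective pore radius and Sigma_s a surface conductance.

eps = 80.0 * 8.854e-12   # water permittivity [F/m]
eta = 1.0e-3             # water viscosity [Pa s]
zeta = -40e-3            # zeta potential [V]
sigma_s = 5.0e-9         # surface conductance [S]

def coupling(sigma_f, pore_radius):
    return -eps * zeta / (eta * (sigma_f + 2.0 * sigma_s / pore_radius))

sigma_f = np.array([1e-3, 1e-2, 1e-1, 1.0])   # fluid conductivity [S/m]
c_small = coupling(sigma_f, 1e-6)             # small pores (low permeability)
c_large = coupling(sigma_f, 1e-4)             # large pores (high permeability)

# At low conductivity the two curves differ strongly; near 1 S/m they merge,
# mirroring the threshold behaviour reported in the abstract.
ratio = c_small / c_large
print(ratio)
```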

A modular borehole monitoring concept has been implemented to provide a suite of well-based monitoring tools that can be deployed cost-effectively in a flexible and robust package. The initial modular borehole monitoring system was deployed as part of a CO_{2} injection test operated by the Southeast Regional Carbon Sequestration Partnership near Citronelle, Alabama. The Citronelle modular monitoring system transmits electrical power and signals, fibre-optic light pulses, and fluids between the surface and a reservoir. Additionally, a separate multi-conductor tubing-encapsulated line was used for borehole geophones, including a specialized clamp for coupling to the casing while deployed on tubing. The joint deployment of geophones and fibre-optic cables allowed comparison testing of distributed acoustic sensing. We designed a large source effort (>64 sweeps per source point) to test fibre-optic vertical seismic profiling and acquired data in 2013. The native measurement of the specific distributed acoustic sensing unit used (an iDAS from Silixa Ltd) is described as a localized strain rate. Following a processing flow of adaptive noise reduction and rebalancing of the signal to dimensionless strain, improvement from repeated stacking of the source was observed. Conversion of the rebalanced strain signal to equivalent velocity units, via scaling by the local apparent velocity, allows quantitative comparison of distributed acoustic sensing and geophone data in units of velocity. We see a very good match of uncorrelated time series in both amplitude and phase, demonstrating that velocity-converted distributed acoustic sensing data can be analysed as equivalent to vertical geophone data. We show that distributed acoustic sensing data, when averaged over an interval comparable to typical geophone spacing, can attain signal-to-noise ratios 18 dB to 24 dB below those of clamped geophones, a result that varies with noise spectral amplitude because the noise characteristics are not identical.
With vertical seismic profile processing, we demonstrate the effectiveness of downgoing deconvolution from the large spatial sampling of distributed acoustic sensing data, along with improved upgoing reflection quality. We conclude that the extra source effort currently needed for tubing-deployed distributed acoustic sensing vertical seismic profile, as part of a modular monitoring system, is well compensated by the extra spatial sampling and lower deployment cost as compared with conventional borehole geophones.
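The strain-to-velocity rebalancing described above can be sketched for an ideal plane wave: time-integrate the native strain rate to strain, then scale by the local apparent velocity, since v ≈ c_app × strain for a plane wave along the fibre. The synthetic signal and the value of c_app below are assumptions of this sketch, not the Citronelle data or processing.

```python
import numpy as np

# Convert a DAS-style strain-rate record to particle-velocity units for
# comparison with geophones: integrate in time, scale by apparent velocity.

dt = 1.0e-3          # sample interval [s]
c_app = 2500.0       # assumed local apparent phase velocity [m/s]
t = np.arange(0, 0.5, dt)

# Synthetic particle velocity a geophone would record.
v_true = np.sin(2 * np.pi * 20 * t) * np.exp(-5 * t)

# What an ideal DAS channel would record: strain rate, where strain = v / c_app
# for a plane wave travelling along the fibre.
strain = v_true / c_app
strain_rate = np.gradient(strain, dt)

# Processing chain: time-integrate the strain rate, then scale by c_app.
v_est = c_app * np.cumsum(strain_rate) * dt

err = np.max(np.abs(v_est - v_true))
print(err)  # a few per cent of the unit amplitude with this simple integrator
```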

For 3-D shallow-water seismic surveys offshore Abu Dhabi, imaging the target reflectors requires high resolution. Characterization and monitoring of hydrocarbon reservoirs by seismic amplitude-versus-offset techniques demand high pre-stack amplitude fidelity. In this region, however, it was still unclear how the survey parameters should be chosen to achieve the required data quality. To answer this question, we applied the focal-beam method to survey evaluation and design. This subsurface- and target-oriented approach enables quantitative analysis of attributes such as the best achievable resolution and pre-stack amplitude fidelity at a fixed grid point in the subsurface for a given acquisition geometry at the surface. The method offers an efficient way to optimize the acquisition geometry for maximum resolution and minimum amplitude-versus-offset imprint. We applied it to several acquisition geometries in order to understand the effects of survey parameters such as the four spatial sampling intervals and the apertures of the template geometry. The results led to a good understanding of the relationship between the survey parameters and the resulting data quality, and to the identification of suitable survey parameters for reflection imaging and amplitude-versus-offset applications.

We have previously applied three-dimensional acoustic, anisotropic, full-waveform inversion to a shallow-water, wide-angle, ocean-bottom-cable dataset to obtain a high-resolution velocity model. This velocity model produced an improved match between synthetic and field data, better flattening of common-image gathers, a closer fit to well logs, and an improvement in the pre-stack depth-migrated image. Nevertheless, close examination reveals that there is a systematic mismatch between the observed and predicted data from this full-waveform inversion model, with the predicted data being consistently delayed in time. We demonstrate that this mismatch cannot be produced by systematic errors in the starting model, by errors in the assumed source wavelet, by incomplete convergence, or by the use of an insufficiently fine finite-difference mesh. Throughout these tests, the mismatch is remarkably robust with the significant exception that we do not see an analogous mismatch when inverting synthetic acoustic data. We suspect therefore that the mismatch arises because of inadequacies in the physics that are used during inversion. For ocean-bottom-cable data in shallow water at low frequency, apparent observed arrival times, in wide-angle turning-ray data, result from the characteristics of the detailed interference pattern between primary refractions, surface ghosts, and a large suite of wide-angle multiple reflected and/or multiple refracted arrivals. In these circumstances, the dynamics of individual arrivals can strongly influence the apparent arrival times of the resultant compound waveforms. In acoustic full-waveform inversion, we do not normally know the density of the seabed, and we do not properly account for finite shear velocity, finite attenuation, and fine-scale anisotropy variation, all of which can influence the relative amplitudes of different interfering arrivals, which in their turn influence the apparent kinematics. 
Here, we demonstrate that the introduction of a non-physical offset-variable water density during acoustic full-waveform inversion of this ocean-bottom-cable field dataset can compensate efficiently and heuristically for these inaccuracies. This approach improves the travel-time match and consequently increases both the accuracy and resolution of the final velocity model that is obtained using purely acoustic full-waveform inversion at minimal additional cost.

In tight gas sands, the signal-to-noise ratio of nuclear magnetic resonance log data is usually low, which limits the application of nuclear magnetic resonance logs in this type of reservoir. This study uses wavelet-domain adaptive filtering to denoise nuclear magnetic resonance log data from tight gas sands. The criteria of maximum correlation coefficient and minimum root mean square error are used to select the optimal basis function for the wavelet transform. The feasibility and effectiveness of this method are verified by analysing numerical simulation results and core experimental data. Compared with the wavelet thresholding denoising method, this adaptive filtering method is more effective at filtering noise, improving both the signal-to-noise ratio of nuclear magnetic resonance data and the inversion precision of the transverse relaxation time *T*_{2} spectrum. Application to nuclear magnetic resonance logs shows that the method not only improves the accuracy of nuclear magnetic resonance porosity but also enhances the ability to recognize tight gas sands in nuclear magnetic resonance logs.
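The selection criteria can be sketched in a self-contained way. The paper chooses among wavelet bases by maximum correlation coefficient and minimum RMSE against a known reference in simulation; to stay dependency-free, the sketch below uses a one-level Haar transform and selects the soft-threshold level by the same two criteria. The two-exponential "echo train" is an illustrative stand-in for NMR log data, not the paper's test signal.

```python
import numpy as np

# One-level Haar denoise with soft thresholding of the detail coefficients.
def haar_denoise(x, thresh):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)                # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)                # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(1)
t = np.arange(1024) * 1e-3
clean = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 0.5)  # two T2 components
noisy = clean + 0.05 * rng.standard_normal(t.size)

def score(est):
    # The two selection criteria from the abstract.
    corr = np.corrcoef(est, clean)[0, 1]
    rmse = np.sqrt(np.mean((est - clean) ** 2))
    return corr, rmse

best = max((score(haar_denoise(noisy, th)) + (th,)
            for th in [0.0, 0.02, 0.05, 0.1, 0.2]),
           key=lambda s: (s[0], -s[1]))
print(best)  # (correlation, RMSE, chosen threshold); a nonzero threshold wins
```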

To reduce the numerical errors arising from improper enforcement of the artificial boundary conditions on the distant surface that encloses the modelled part of the subsurface, and to significantly reduce the computation time and memory cost of 2.5D direct-current resistivity inversion, we present a coupled finite-element–infinite-element method. We first present the boundary value problem for the secondary potential. Then, a new type of infinite element is analysed and applied to replace the mixed boundary condition conventionally used on the distant boundary. In the internal domain, a standard finite-element method is used to derive the final system of linear equations. With a novel shape function for the infinite elements at the subsurface boundary, the final system matrix is sparse, symmetric, and independent of the source electrodes. Through lower–upper (LU) decomposition, the multi-pole potentials can be obtained swiftly by simple back-substitutions. We embed the newly developed forward solution into the inversion procedure. To compute the sensitivity matrix, we adopt the efficient adjoint-equation approach to further reduce the computation cost. Finally, several synthetic examples are tested to show the efficiency of the inversion.
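The "decompose once, back-substitute per source" economy can be sketched directly. Since the system matrix is symmetric, the sketch uses a Cholesky factorization of a small dense SPD stand-in (the real system is large and sparse, and the paper uses LU); each source electrode then costs only two triangular substitutions.

```python
import numpy as np

# Factor the source-independent system matrix once; reuse it for every
# right-hand side (one per source electrode position).

def solve_triangular_lower(L, b):
    # Forward substitution for L x = b.
    x = np.zeros_like(b)
    for i in range(b.size):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def solve_triangular_upper(U, b):
    # Back substitution for U x = b.
    x = np.zeros_like(b)
    for i in range(b.size - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

rng = np.random.default_rng(2)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)      # SPD stand-in for the FE/IE system matrix

L = np.linalg.cholesky(A)        # factor once: A = L L^T

sources = [rng.standard_normal(n) for _ in range(4)]   # one RHS per electrode
potentials = [solve_triangular_upper(L.T, solve_triangular_lower(L, b))
              for b in sources]

res = max(np.linalg.norm(A @ x - b) for x, b in zip(potentials, sources))
print(res)  # residuals at round-off level
```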

Attenuation compensation in reverse-time migration has been shown to improve the resolution of the seismic image. In this paper, three essential aspects of implementing attenuation compensation in reverse-time migration are studied: the physical justification of attenuation compensation, the choice of imaging condition, and the choice of a low-pass filter. The physical illustration of attenuation compensation supports the mathematical implementation by reversing the sign of the absorption operator and leaving the sign of the dispersion operator unchanged in the decoupled viscoacoustic wave equation. Further theoretical analysis shows that attenuation compensation in reverse-time migration using the two imaging conditions (cross-correlation and source-normalized cross-correlation) is able to effectively mitigate attenuation effects. In numerical experiments using a simple-layered model, the source-normalized cross-correlation imaging condition may be preferable based on the criteria of amplitude corrections. The amplitude and phase recovery to some degree depend on the choice of a low-pass filter. In an application to a realistic Marmousi model with added *Q*, high-resolution seismic images with correct amplitude and kinematic phase are obtained by compensating for both absorption and dispersion effects. Compensating for absorption only can amplify the image amplitude but with a shifted phase.
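The sign-reversal and low-pass-filter ingredients can be illustrated with a scalar, one-dimensional frequency-domain cartoon, which is not the migration itself: forward propagation attenuates each frequency as exp(-α(f)x); compensation applies exp(+α(f)x), and a low-pass filter keeps the exponential amplification of high-frequency noise bounded. The Ricker source, Q, velocity, and cut-off below are illustrative assumptions, and dispersion is ignored here.

```python
import numpy as np

# 1-D illustration of attenuation compensation with a stabilizing low-pass.

n, dt = 512, 1e-3
t = np.arange(n) * dt
f0 = 25.0
arg = (np.pi * f0 * (t - 0.1)) ** 2
src = (1.0 - 2.0 * arg) * np.exp(-arg)        # Ricker wavelet at 0.1 s

freqs = np.fft.rfftfreq(n, dt)
x = 1000.0                       # propagation distance [m]
q, c = 50.0, 2000.0              # quality factor and velocity (illustrative)
alpha = np.pi * freqs / (q * c)  # absorption coefficient per metre

# Forward: absorption operator with a negative sign in the exponent.
attenuated = np.fft.irfft(np.fft.rfft(src) * np.exp(-alpha * x), n)

# Compensation: reverse the sign of the absorption operator, then low-pass.
lowpass = freqs < 80.0
compensated = np.fft.irfft(np.fft.rfft(attenuated)
                           * np.exp(alpha * x) * lowpass, n)

diff = np.abs(compensated - src).max()
print(diff)  # small: only energy above the cut-off is lost
```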

In many practical cases, it is necessary to characterize the explored area with a regular set of geodata. Regular matrix data (e.g., ordinary maps) are calculated via interpolation and extrapolation of the existing data. For low-frequency (oversampled) data acquired within a dense profile net (e.g., seismic three-dimensional structural or gravity mapping), this procedure is mathematically more or less stable and, to a certain extent, unique, since we may neglect discrepancies resulting from different interpolations. The situation is quite different for high-resolution, high-frequency, and contaminated data (e.g., raw seismic attributes or geochemistry measurements) represented by sparse profiling. Considering the variety of exploration cases, investigating the efficiency of different interpolation algorithms is therefore important. Since it is impossible to compare all algorithms by means of formal mathematics, we have designed a test program. A representative set of seismic attribute maps was artificially destroyed by introducing blank values (from 20% up to 95%) and then restored by different interpolation algorithms: bicubic, bilinear, nearest neighbour, and "smart averaging". Smart-averaging interpolation is done in a "live" window whose position, form, and size are determined by a mathematical criterion on a trial-and-error basis. Discrepancies between the restored and initial (true) data were assessed and analysed. It is shown that both the total (absolute) efficiency and the comparative (relative) efficiency of the algorithms depend mostly on the characteristics of the initial interpolant data. Identifying a single best interpolation algorithm for all interpretive cases appears impossible. Some aspects of data processing are discussed in connection with interpolation accuracy.
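The destroy-restore-score test program can be sketched in a few lines. The smooth synthetic surface and the single algorithm (nearest neighbour, brute force) are illustrative stand-ins; the study itself compares bicubic, bilinear, nearest-neighbour, and smart-averaging interpolation on real attribute maps.

```python
import numpy as np

# Blank a fraction of a synthetic "attribute map", restore it by nearest-
# neighbour interpolation, and score the discrepancy against the true map.

rng = np.random.default_rng(3)
nx = ny = 40
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
true_map = np.sin(3 * x) * np.cos(2 * y)      # synthetic attribute

def restore_nearest(known_xy, known_v, query_xy):
    # Brute-force nearest neighbour: fine at this grid size.
    d2 = ((query_xy[:, None, :] - known_xy[None, :, :]) ** 2).sum(axis=2)
    return known_v[np.argmin(d2, axis=1)]

pts = np.column_stack([x.ravel(), y.ravel()])
vals = true_map.ravel()

rmses = []
for blank_frac in (0.2, 0.6, 0.95):
    keep = rng.random(vals.size) > blank_frac  # cells that survive blanking
    est = restore_nearest(pts[keep], vals[keep], pts)
    rmses.append(np.sqrt(np.mean((est - vals) ** 2)))
print(rmses)  # discrepancy grows as more of the map is blanked
```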

A detailed magnetotelluric survey was conducted in 2013 in the Sehqanat oil field, southwestern Iran to map the geoelectrical structures of the sedimentary Zagros zone, particularly the boundary between the Gachsaran Formation acting as cap rock and the Asmari Formation as the reservoir. According to the electrical well logs, a large resistivity contrast exists between the two formations. The Gachsaran Formation is formed by tens to hundreds of metres of evaporites and it is highly conductive (ca. 1 Ωm–10 Ωm), and the Asmari Formation consists of dense carbonates, which are considerably more resistive (more than 100 Ωm). Broadband magnetotelluric data were collected along five southwest–northeast directed parallel lines with more than 600 stations crossing the main geological trend. Although dimensionality and strike analysis of the magnetotelluric transfer functions showed that overall they satisfied local 2D conditions, there were also strong 3D conditions found in some of the sites. Therefore, in order to obtain a more reliable image of the resistivity distribution in the Sehqanat oil field, in addition to standard 2D inversion, we investigated to what extent 3D inversion of the data was feasible and what improvements in the resistivity image could be obtained. The 2D inversion models using the determinant average of the impedance tensor depict the main resistivity structures well, whereas the estimated 3D model shows significantly more details although problems were encountered in fitting the data with the latter. Both approaches resolved the Gachsaran–Asmari transition from high conductivity to moderate conductivity. The well-known Sehqanat anticline could also be delineated throughout the 2D and 3D resistivity models as a resistive dome-shaped body in the middle parts of the magnetotelluric profiles.

Recent advances in survey design have led to conventional common-midpoint-based analysis being replaced by subsurface-based seismic acquisition analysis, with emphasis on advanced techniques of illumination analysis. Among them is the so-called focal beam method, a wave-equation-based seismic illumination analysis method. Its objective is to provide quantitative insight into the combined influence of acquisition geometry, overburden structure, and migration operators on the resolution and angle-dependent amplitude fidelity of the image. The method distinguishes between the illumination and sensing capabilities of a particular acquisition geometry by computing the focal source beam and the focal detector beam, respectively. Sensing is related to the detection properties of a detector configuration, whereas illumination is related to the emission properties of a source configuration. The focal source beam analyses the incident wavefield at a specific subsurface grid point from all available sources, whereas the focal detector beam analyses the wavefield reaching the detector locations from the same subsurface grid point. In the past, this method could only address illumination by primary reflections. In this paper, we extend the focal beam method to incorporate the illumination due to surface and internal multiples, in line with the trend of including multiples in the imaging process. Multiple reflections can illuminate a target location from angles other than those of primary reflections, resulting in higher resolution and improved illumination. We demonstrate how an acquisition-related footprint can be corrected using both the surface and the internal multiples.

This paper describes least-squares reverse-time migration. The method provides the exact adjoint operator pair for solving the linear inverse problem, thereby enhancing the convergence of gradient-based iterative linear inversion methods. In this formulation, modified source wavelets are used to correct the source signature imprint in the predicted data. Moreover, a roughness constraint is applied to stabilise the inversion and reduce high-wavenumber artefacts. It is also shown that least-squares migration implicitly applies a deconvolution imaging condition. Three numerical experiments illustrate that this method is able to produce seismic reflectivity images with higher resolution, more accurate amplitudes, and fewer artefacts than conventional reverse-time migration. The methodology is currently feasible in 2-D and can naturally be extended to 3-D when computational resources become more powerful.
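The least-squares formulation with a roughness constraint can be sketched on a toy linear problem. A small random matrix stands in for the Born modelling operator (the real operator is wave-equation based), D is a first-difference roughness operator, and plain conjugate gradients on the normal equations plays the role of the gradient-based iterative inversion using the exact adjoint.

```python
import numpy as np

# Toy least-squares "migration": solve min_m ||L m - d||^2 + lam ||D m||^2.

rng = np.random.default_rng(4)
n_data, n_model = 120, 60
L = rng.standard_normal((n_data, n_model))    # stand-in modelling operator
m_true = np.zeros(n_model)
m_true[15], m_true[40] = 1.0, -0.7            # two "reflectors"
d = L @ m_true                                # noise-free data

D = np.eye(n_model, k=1)[:-1] - np.eye(n_model)[:-1]  # roughness operator
lam = 0.1
A = L.T @ L + lam * D.T @ D                   # normal-equation operator
b = L.T @ d                                   # adjoint (migration) image

m = np.zeros(n_model)                         # conjugate-gradient iterations
r = b - A @ m
p = r.copy()
for _ in range(200):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    m += alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

errnorm = np.linalg.norm(m - m_true)
print(errnorm)  # recovers m_true up to a small regularization bias
```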

We propose new implicit staggered-grid finite-difference schemes with optimized coefficients, based on the sampling approximation method, to improve the accuracy of numerical solutions for seismic modelling. We first derive the optimized implicit staggered-grid finite-difference coefficients of arbitrary even-order accuracy for the first-order spatial derivatives using plane-wave theory and the direct sampling approximation method. The sampling-approximation-based implicit staggered-grid finite-difference coefficients, which widen the range of wavenumbers over which high accuracy is maintained, are then used to solve the first-order spatial derivatives. By comparing the numerical dispersion of implicit staggered-grid finite-difference schemes based on sampling approximation, Taylor series expansion, and least squares, we find that the optimized scheme based on sampling approximation achieves greater precision than that based on Taylor series expansion over a wider range of wavenumbers, and has accuracy similar to that based on least squares. Finally, we apply the sampling-approximation-based implicit staggered-grid finite difference to numerical modelling. The modelling results demonstrate that the new optimized method efficiently suppresses numerical dispersion and yields greater accuracy than the Taylor-series-based implicit staggered-grid finite difference. In addition, the results indicate that the computational cost of the sampling-approximation-based scheme is almost the same as that of the Taylor-series-based scheme.
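The dispersion comparisons above rest on the wavenumber response of the derivative stencil. As a self-contained sketch, the snippet below evaluates this response for explicit Taylor-based staggered-grid coefficients (the implicit and optimized sampling-approximation schemes of the paper widen the accurate band further, but the analysis machinery is the same).

```python
import numpy as np

# A staggered-grid first-derivative operator with half-offset stencil points
# has the normalized wavenumber response
#   D(k) * h = 2 * sum_m c_m * sin((2m - 1) * k * h / 2),
# which ideally equals k*h. We compare the Taylor-based 2nd-order (c1 = 1)
# and 4th-order (c1 = 9/8, c2 = -1/24) explicit coefficients.

def response(kh, coeffs):
    m = np.arange(1, len(coeffs) + 1)
    return 2.0 * np.sum([c * np.sin((2 * mi - 1) * kh / 2)
                         for mi, c in zip(m, coeffs)], axis=0)

kh = np.linspace(0.01, np.pi * 0.8, 200)
err2 = np.abs(response(kh, [1.0]) - kh) / kh
err4 = np.abs(response(kh, [9 / 8, -1 / 24]) - kh) / kh

# The 4th-order stencil stays accurate over a wider wavenumber range.
print(err2.max(), err4.max())
```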

Reverse time migration backscattered events are produced by the cross-correlation between waves reflected from sharp interfaces (e.g., salt bodies). These events, along with head waves and diving waves, produce the so-called *reverse time migration* artefacts, which are visible as low-wavenumber energy on migrated images. Commonly, these events are seen as a drawback of the *reverse time migration* method because they obstruct the image of the geologic structure, which is the real objective of the process. In this paper, we perform numerical and theoretical analyses to understand the reverse time migration backscattering energy in conventional and extended images. We show that the reverse time migration backscattering contains a measure of the synchronization and focusing information between the source and receiver wavefields. We show that this synchronization and focusing information is sensitive to velocity errors; this implies that a correct velocity model produces reverse time migration backscattering with maximum energy. Therefore, before filtering out the reverse time migration backscattered energy, we should try to obtain a model that maximizes it.

Seismoelectric coupling coefficients are difficult to predict theoretically because they depend on a large number of rock properties, including porosity, permeability, and tortuosity. The dependence of the coupling coefficient on properties such as permeability therefore requires experimental data. In this study, we carry out a set of laboratory measurements to determine the dependence of the seismoelectric coupling coefficient on permeability. We use both an artificial porous "sandstone" sample with cracks, built from quartz sand, and natural Berea sandstone samples. The artificial sample is a cube with 39% porosity. Its permeability is anisotropic: 14.7 D, 13.8 D, and 8.3 D in the *x*-, *y*-, and *z*-directions, respectively. Seismoelectric measurements are performed in a water tank in the frequency range of 20 kHz–90 kHz. A piezoelectric P-wave source is used to generate an acoustic wave that propagates through the sample along the three different (*x*, *y*, and *z*) directions. The amplitudes of the seismoelectric signal induced by the acoustic waves vary with direction: the highest signal is in the direction of the highest permeability, and the lowest signal is in the direction of the lowest permeability. Since the porosity of the sample is constant, these results directly show the dependence of the seismoelectric coefficients on permeability. Seismoelectric measurements on natural rocks are performed using Berea 500 and Berea 100 sandstone samples. Because the Berea samples are nearly isotropic in permeability, the amplitudes of the seismoelectric signals induced in the different directions are the same within the measurement error. Because the permeability of Berea 500 is higher than that of Berea 100, the amplitudes of the seismoelectric signals induced in Berea 500 are higher than those in Berea 100.
To determine the relative contributions of porosity and permeability to seismoelectric conversion, we carried out an analysis using the Pride (1994) formulation and the Kozeny–Carman relationship: the normalized amplitudes of the seismoelectric coupling coefficients in the three directions were calculated and compared with the experimental results. The results show that seismoelectric conversion is related to permeability in the frequency range of the measurements. This is an encouraging result, since it opens the possibility of determining the permeability of a formation from seismoelectric measurements.

We carried out a magnetotelluric field campaign in the South–East Lower Saxony Basin, Germany, with the main goal of testing this method for imaging regional Posidonia black shale sediments. Two-dimensional inversion results of the magnetotelluric data show a series of conductive structures correlating with brine-saturated sediments but also with deeper, anthracitic Westphalian/Namurian coals. None of these structures can be directly related to the Posidonia black shale, which appears to be generally resistive and therefore difficult to resolve with the magnetotelluric method. This interpretation is supported by measurements of electrical resistivity on a set of Posidonia shale samples from the Hils syncline in the Lower Saxony Basin. These rock samples were collected in shallow boreholes and span immature (0.53% Ro), oil-window (0.88% Ro), and gas-window (1.45% Ro) thermal maturities. None of the black shale samples showed low electrical resistivity; in particular, those with oil-window maturity show resistivities exceeding 10^{4} Ωm. Moreover, we could not observe a direct correlation between maturity and electrical resistivity: the Harderode samples showed the highest resistivity, whereas the Haddessen samples showed the lowest. A similar trend has been seen for coals in different states of thermal maturation. Saturating the samples with distilled and saline water solutions decreased the electrical resistivity. Moreover, a positive correlation of electrical resistivity with porosity is observed for the Wickensen and Harderode samples, which suggests that the electrical resistivity of the Posidonia black shale is mainly controlled by porosity.

Distributed acoustic sensing is a novel technology for seismic acquisition in which strain changes induced by seismic waves impinging on an optical fibre are monitored. Because glass is relatively rigid, straight glass fibres are not sensitive to broadside waves. We suggest using distributed acoustic sensing systems with fibres helically wound around cables. The fibre sensitivity to broadside waves is increased by decreasing the fibre wrapping angle (the angle between the fibre axis and the plane normal to the cable axis). The optimal wrapping angle is chosen to minimize the impact of Rayleigh waves on the measured signal. This angle depends on the cable's Poisson ratio and is approximately equal to 30° for cables made of plastic. For reliable detection of seismic waves, a good mechanical contact between the cable and the surrounding medium is needed. On the other hand, the sensitivity of distributed acoustic sensing systems to primary waves can be significantly reduced if the cable is placed in a cemented borehole.
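A back-of-envelope projection argument, offered here only as an illustration and not as the paper's derivation, is consistent with the quoted value of roughly 30°. For a strain state with axial strain e along the cable and Poisson contraction -νe across it, a fibre element at wrapping angle a (measured from the plane normal to the cable axis) senses e_f = e(sin²a - ν cos²a), which vanishes when tan²a = ν.

```python
import numpy as np

# Wrapping angle at which the projected fibre strain from an axial-plus-
# Poisson strain state vanishes: a = arctan(sqrt(nu)). Illustrative only.

def null_wrapping_angle_deg(poisson_ratio):
    return np.degrees(np.arctan(np.sqrt(poisson_ratio)))

for nu in (0.3, 0.4, 0.45):
    print(nu, null_wrapping_angle_deg(nu))

# For nu ~ 0.4, typical of plastics, the angle comes out close to 30 degrees.
```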

Modern airborne transient electromagnetic surveys typically produce datasets of thousands of line kilometres, requiring careful data processing in order to extract as much and as reliable information as possible. When surveys are flown in populated areas, data processing becomes particularly time consuming since the acquired data are contaminated by couplings to man-made conductors (power lines, fences, pipes, etc.). Coupled soundings must be removed from the dataset prior to inversion, and this is a process that is difficult to automate. The signature of couplings can be both subtle and difficult to describe in mathematical terms, rendering removal of couplings mostly an expensive manual task for an experienced geophysicist.

Here, we seek to automate the removal of couplings by means of an artificial neural network. We train the network to recognize coupled soundings in manually processed reference data and then use it to identify couplings in other data. The approach provides a significant reduction in the time required for data processing, since the network can be applied directly to the raw data. We describe the neural network used and present the inputs and normalizations required to maximize its effectiveness. We further demonstrate and assess the training state and performance of the network before finally comparing inversions based on unprocessed data, manually processed data, and data processed automatically by the artificial neural network. The results show that a well-trained network can produce high-quality processing of airborne transient electromagnetic data, which is either ready for inversion or in need of only minimal manual processing. We conclude that the use of artificial neural networks can reduce the processing time and cost by as much as 50%.
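The supervised setup can be sketched with a minimal classifier. A single-layer logistic "network" stands in for the paper's artificial neural network, and the two synthetic features (a late-time decay slope and an oscillation measure of each sounding) are illustrative assumptions, not the paper's inputs.

```python
import numpy as np

# Train a logistic classifier on soundings "labelled" by a manual reference
# processing, then flag couplings in new data by thresholding its output.

rng = np.random.default_rng(5)

def make_soundings(n, coupled):
    # Coupled soundings: flatter late-time decay and stronger oscillation.
    slope = rng.normal(-0.5 if coupled else -2.0, 0.3, n)
    wobble = rng.normal(1.5 if coupled else 0.2, 0.3, n)
    return np.column_stack([slope, wobble])

X = np.vstack([make_soundings(200, False), make_soundings(200, True)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Normalize the inputs (the paper stresses input normalization).
X = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([X, np.ones(len(X))])     # bias term

w = np.zeros(3)
for _ in range(500):                           # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-Xb @ w))          # sigmoid output
    w -= 0.1 * Xb.T @ (p - y) / len(y)

pred = 1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5
acc = (pred == y.astype(bool)).mean()
print(acc)                                     # training accuracy near 1.0
```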

Radial-trace time–frequency peak filtering filters a seismic record along the radial-trace direction rather than the conventional channel direction. It takes into account the spatial correlation of the reflected events between adjacent channels and thus performs well in denoising and in enhancing the continuity of reflected events. However, a seismic record often contains random noise whose energy is concentrated in certain directions, so that the noise in those directions is correlated. We refer to this kind of random noise (distributed randomly in time but correlated in space) as directional random noise. Under radial-trace time–frequency peak filtering, directional random noise is treated as signal and enhanced when it has the same direction as the signal. Therefore, we need to identify the directional random noise before filtering. In this paper, we test the linearity in time of signal and directional random noise using the Hurst exponent. Time series of signals with high linearity lead to large Hurst exponent values, whereas directional random noise is a random series in time without a fixed waveform, so its linearity is low; we can therefore differentiate signal from directional random noise by their Hurst exponent values. The directional random noise can then be suppressed by using a long filtering window during the radial-trace time–frequency peak filtering. Synthetic and real data examples show that the proposed method removes most directional random noise and effectively recovers the reflected events.
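The Hurst-exponent discrimination can be sketched with a rescaled-range (R/S) estimator; the window sizes, test series, and the R/S variant itself are illustrative assumptions rather than the paper's exact implementation. A coherent waveform yields an exponent near 1, while a random series yields one near 0.5.

```python
import numpy as np

def hurst_rs(x, min_n=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, float)
    N = len(x)
    ns, rs = [], []
    n = min_n
    while n <= N // 2:
        m = N // n
        segs = x[: m * n].reshape(m, n)
        # Range of the mean-adjusted cumulative sum, divided by the
        # standard deviation, averaged over all windows of length n.
        dev = segs - segs.mean(1, keepdims=True)
        z = np.cumsum(dev, axis=1)
        R = z.max(1) - z.min(1)
        S = segs.std(1)
        ok = S > 0
        if ok.any():
            ns.append(n)
            rs.append((R[ok] / S[ok]).mean())
        n *= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(1)
h_noise = hurst_rs(rng.standard_normal(1024))               # random: ~0.5
h_signal = hurst_rs(np.sin(np.linspace(0, 4 * np.pi, 1024)))  # coherent: ~1
```

Thresholding such values is how signal traces can be separated from directional random noise before choosing the filtering window length.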

Most sedimentary rocks are anisotropic, yet it is often difficult to accurately incorporate anisotropy into seismic workflows because analysis of anisotropy requires knowledge of a number of parameters that are difficult to estimate from standard seismic data. In this study, we provide a methodology to infer azimuthal P-wave anisotropy from S-wave anisotropy calculated from log or vertical seismic profile data. This methodology involves a number of steps. First, we compute the azimuthal P-wave anisotropy in the dry medium as a function of the azimuthal S-wave anisotropy using a rock physics model, which accounts for the stress dependency of seismic wave velocities in dry isotropic elastic media subjected to triaxial compression. Once the P-wave anisotropy in the dry medium is known, we use the anisotropic Gassmann equations to estimate the anisotropy of the saturated medium. We test this workflow on the log data acquired in the North West Shelf of Australia, where azimuthal anisotropy is likely caused by large differences between minimum and maximum horizontal stresses. The obtained results are compared to azimuthal P-wave anisotropy obtained via orthorhombic tomography in the same area. In the clean sandstone layers, anisotropy parameters obtained by both methods are fairly consistent. In the shale and shaly sandstone layers, however, there is a significant discrepancy between results since the stress-induced anisotropy model we use is not applicable to rocks exhibiting intrinsic anisotropy. This methodology could be useful for building the initial anisotropic velocity model for imaging, which is to be refined through migration velocity analysis.

When a seismic source is placed in the water at a height less than a wavelength from the water–solid interface, a prominent S-wave arrival can be observed. It travels kinematically as if it were excited at the projection point of the source on the interface. This non-geometric S-wave has been investigated before, mainly for a free-surface configuration. However, as was shown in a field experiment, the non-geometric S-wave can also be excited in a fluid–solid configuration if the S-wave speed in the solid is less than the sound speed in the water. The amplitude of this wave decreases exponentially when the source is moved away from the interface, revealing its evanescent character in the fluid. In the solid, this particular converted mode propagates as an ordinary S-wave and can be transmitted and reflected as such. There is a specific region of horizontal slownesses where this non-geometric wave exists, depending on the ratio of the S-wave velocity to the sound speed of water. The wave appears only for ratios smaller than 1, and lower ratios result in a wider region of appearance. Due to this property, this particular P-S converted mode can be identified and filtered from other events in the Radon domain.
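The slowness region described above follows directly from the two conditions: evanescence in the water and propagation in the solid. A minimal sketch, with illustrative wave speeds:

```python
# Horizontal-slowness window where the non-geometric S-wave exists: the
# wavefield is evanescent in the water (p > 1/c_water) yet propagating as
# an S-wave in the solid (p < 1/v_s), which requires v_s < c_water.
def slowness_window(c_water, v_s):
    if v_s >= c_water:
        return None            # ratio >= 1: the wave does not appear
    return 1.0 / c_water, 1.0 / v_s

c_water = 1500.0               # illustrative sound speed in water, m/s
w_low = slowness_window(c_water, 400.0)    # low ratio: wide window
w_mid = slowness_window(c_water, 1000.0)   # higher ratio: narrower window
w_none = slowness_window(c_water, 1600.0)  # ratio > 1: no window
```

The widening of the window for lower velocity ratios is what makes the mode separable from other events in the Radon domain.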

We suggest a new method to determine the piecewise-continuous vertical distribution of instantaneous velocities within sediment layers, using different order time-domain effective velocities on their top and bottom points. We demonstrate our method using a synthetic model that consists of different compacted sediment layers characterized by monotonically increasing velocity, combined with hard rock layers, such as salt or basalt, characterized by constant fast velocities, and low velocity layers, such as gas pockets. We first show that, by using only the root-mean-square velocities and the corresponding vertical travel times (computed from the original instantaneous velocity in depth) as input for a Dix-type inversion, many different vertical distributions of the instantaneous velocities can be obtained (inverted). Some geological constraints, such as limiting the values of the inverted vertical velocity gradients, should be applied in order to obtain more geologically plausible velocity profiles. In order to limit the non-uniqueness of the inverted velocities, additional information should be added. We have derived three different inversion solutions that yield the correct instantaneous velocity, avoiding any *a priori* geological constraints. The additional data at the interface points contain either the average velocities (or depths) or the fourth-order average velocities, or both. Practically, average velocities can be obtained from nearby wells, whereas the fourth-order average velocity can be estimated from the quartic moveout term during velocity analysis. Along with the three different types of input, we consider two types of vertical velocity models within each interval: a distribution with a constant velocity gradient and an exponential asymptotically bounded velocity model, which is particularly important for modelling thick layers. It has been shown that, in the case of thin intervals, both models lead to similar results. 
The method allows us to establish the instantaneous velocities at the top and bottom interfaces, where the velocity profile inside the intervals is given by either the linear or the exponential asymptotically bounded velocity models. Since the velocity parameters of each interval are independently inverted, discontinuities of the instantaneous velocity at the interfaces occur naturally. The improved accuracy of the inverted instantaneous velocities is particularly important for accurate time-to-depth conversion.
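The Dix-type step underlying the inversion can be sketched as follows, assuming piecewise-constant interval velocities (the simplest velocity model); the velocities and times are illustrative, echoing the abstract's mix of compacted sediments, a fast hard-rock layer, and a gas pocket.

```python
import numpy as np

# Forward model: root-mean-square velocities from interval velocities and
# vertical travel times; a Dix-type inversion then recovers the intervals.
v_int = np.array([1800.0, 2600.0, 4500.0, 2200.0])   # m/s (gas pocket last)
dt = np.array([0.4, 0.5, 0.3, 0.2])                  # one-way times, s

t = np.cumsum(dt)
v_rms = np.sqrt(np.cumsum(v_int ** 2 * dt) / t)

# Dix formula: v_k^2 = (v_rms,k^2 t_k - v_rms,k-1^2 t_{k-1}) / (t_k - t_{k-1})
num = np.diff(np.concatenate(([0.0], v_rms ** 2 * t)))
v_dix = np.sqrt(num / dt)
```

With noise-free data the intervals are recovered exactly; the non-uniqueness discussed in the abstract arises because many continuous velocity profiles share the same root-mean-square moments.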

This paper presents a case study of mapping basement structures in the northwestern offshore of Abu Dhabi using high-resolution aeromagnetic data. Lineament analysis was carried out on the derivatives of the reduced-to-the-pole magnetic data, along with supporting information from published geologic data. The lineament analysis suggests three well-defined basement trends in the north–south, northeast–southwest, and northwest–southeast directions. The reduced-to-the-pole magnetic data reveal high positive magnetic anomalies hypothesized to be related to intra-basement bodies in the deep-seated Arabian Shield. Depth to basement was estimated using spectral analysis and Source Parameter Imaging techniques. The spectral analysis suggests that the intruded basement blocks are at the same average depth level (around 8.5 km). The estimated Source Parameter Imaging depths from gridded reduced-to-the-pole data range between 4 km and 12 km, with large depth variations over small distances. These estimated depths prevent a reliable interpretation of the nature of the basement relief. However, low-pass filtering of the horizontal local wavenumber data across two profiles shows that the basement terrain is characterized by a basin-like structure trending in the northeast–southwest direction with a maximum depth of 10 km. Two-dimensional forward magnetic modelling across the two profiles suggests that the high positive magnetic anomalies over the basin could be produced by intrusion of mafic igneous rocks with high susceptibility values (0.008 to 0.016 SI).
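The spectral-analysis depth estimate can be sketched under the standard assumption that the log radially averaged power spectrum decays linearly with wavenumber at a rate set by the ensemble source depth; the spectrum below is synthetic and illustrative, built around the paper's 8.5 km average.

```python
import numpy as np

# Depth from the power-spectrum slope: ln P(k) ~ const - 2*h*k (k in rad/km),
# so h = -slope / 2.  Synthetic spectrum with mild noise for illustration.
k = np.linspace(0.05, 0.6, 40)           # wavenumber, rad/km
h_true = 8.5                              # km, the paper's average depth
rng = np.random.default_rng(2)
lnP = 3.0 - 2.0 * h_true * k + 0.05 * rng.standard_normal(k.size)

slope, _ = np.polyfit(k, lnP, 1)
h_est = -slope / 2.0
```

Source Parameter Imaging instead uses the local wavenumber of the analytic signal, which is why the two techniques can disagree point by point while agreeing on the average.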

Gaussian beam migration is a versatile imaging method for geologically complex land areas, which overcomes the limitation of Kirchhoff migration in imaging multiple arrivals and has none of the steep-dip limits of one-way wave-equation migration. However, its imaging accuracy depends on the geometry of the Gaussian beam, which is determined by the initial parameters of dynamic ray tracing. As a result, its applications in exploration areas with strong variations in topography and near-surface velocity are limited. Combining the concept of the Fresnel zone with the theory of wave-field approximation in an effective vicinity, we present a more robust common-shot Fresnel beam imaging method for land areas with complex topography. Compared with conventional Gaussian beam migration for irregular topography, our method improves the beam geometry by limiting its effective half-width to the Fresnel zone radius. Moreover, through a quadratic travel-time correction and an amplitude correction based on the wave-field approximation in the effective vicinity, it provides an accurate method for plane-wave decomposition at complex topography, which produces good imaging results in both shallow and deep zones. Trials on two typical models and an application to field data demonstrate the validity and robustness of our method.

We show how to estimate the fluid permeability changes due to accumulated biopolymer within the pore space of a granular material using laboratory measurements of overall permeability, together with various well-known quantitative measures (e.g., porosity, specific surface area, and formation factor) of the granular medium microstructure. The main focus of the paper is on mutual validation of existing theory and a synthesis of new experimental results. We find that the theory and data are in good agreement within normal experimental uncertainties. We also establish quantitative empirical relationships between seismic and/or acoustic attenuation and overall permeability for these same systems.
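One classical porosity–permeability link of the kind the abstract alludes to is the Kozeny–Carman relation; the relation, grain size, and porosity values below are illustrative assumptions, not the paper's validated theory or laboratory data.

```python
import numpy as np

# Kozeny-Carman-style estimate: k = d^2 * phi^3 / (180 * (1 - phi)^2).
# Accumulated biopolymer reduces the effective porosity, so permeability
# drops; d_grain is an assumed effective grain diameter.
def permeability(phi, d_grain=2e-4):
    return d_grain ** 2 * phi ** 3 / (180.0 * (1.0 - phi) ** 2)

phi_clean = 0.35
phi_clogged = np.array([0.35, 0.30, 0.25, 0.20])   # progressive clogging
k_ratio = permeability(phi_clogged) / permeability(phi_clean)
```

The strong (roughly cubic) sensitivity of permeability to porosity is what makes modest biopolymer accumulation measurable in the overall flow, and correlatable with acoustic attenuation.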

The presence of noise in surface nuclear magnetic resonance measurements is inevitable. Various types of noise, including Gaussian noise, spiky events, and harmonic noise, affect the signal quality of surface nuclear magnetic resonance measurements. In this paper, we describe the application of a two-step noise suppression approach based on a non-linear adaptive decomposition technique called complete ensemble empirical mode decomposition, in conjunction with a statistical optimization process, for enhancing the signal-to-noise ratio of the surface nuclear magnetic resonance signal. The filtering procedure starts by applying the complete ensemble empirical mode decomposition method to decompose the noisy surface nuclear magnetic resonance signal into a finite number of intrinsic mode functions. Afterwards, a threshold region based on de-trended fluctuation analysis is defined to identify the noisy intrinsic mode functions, and the noise-free intrinsic mode functions are used to recover a partially de-noised signal. In the second stage, we apply a statistical method based on the variance criterion to the signal obtained from the initial phase to mitigate the remaining noise. To demonstrate the functionality of the proposed strategy, the method was evaluated on a synthetic surface nuclear magnetic resonance signal with added noise and on field data. The results show that the proposed procedure improves the signal-to-noise ratio significantly and, consequently, allows the signal parameters (e.g., *V*_{0}) to be extracted efficiently from noisy surface nuclear magnetic resonance data.
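The de-trended fluctuation analysis used to flag noisy intrinsic mode functions can be sketched as follows; the scales, test series, and first-order detrending are illustrative assumptions. A noise-like component has an exponent near 0.5, whereas a smooth signal-bearing component has a much larger one.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis exponent of series x."""
    y = np.cumsum(x - np.mean(x))
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        # Remove a least-squares linear trend from each window, then
        # collect the root-mean-square residual at this scale.
        res = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
               for s in segs]
        F.append(np.sqrt(np.mean(res)))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(3)
scales = [16, 32, 64, 128, 256]
alpha_noise = dfa(rng.standard_normal(2048), scales)            # ~0.5
alpha_signal = dfa(np.sin(np.linspace(0, 8 * np.pi, 2048)), scales)
```

Thresholding the exponent of each intrinsic mode function against such reference values decides which modes enter the partially de-noised reconstruction.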

A marine source generates both a direct wavefield and a ghost wavefield. This is caused by the strong surface reflectivity, resulting in a *blended* source array, the blending process being natural. The two unblended response wavefields correspond to the real source at the actual location below the water level and to the ghost source at the mirrored location above the water level. As a consequence, deghosting becomes deblending (‘echo-deblending’) and can be carried out with a deblending algorithm. In this paper we present source deghosting by an iterative deblending algorithm that properly includes the angle dependence of the ghost: It represents a closed-loop, *non-causal* solution. The proposed echo-deblending algorithm is also applied to the detector deghosting problem. The detector cable may be slanted, and shot records may be generated by blended source arrays, the blending being created by simultaneous sources. Similar to surface-related multiple elimination the method is independent of the complexity of the subsurface; only what happens at and near the surface is relevant. This means that the actual sea state may cause the reflection coefficient to become frequency dependent, and the water velocity may not be constant due to temporal and lateral variations in the pressure, temperature, and salinity. As a consequence, we propose that estimation of the actual ghost model should be part of the echo-deblending algorithm. This is particularly true for source deghosting, where interaction of the source wavefield with the surface may be far from linear. The echo-deblending theory also shows how multi-level source acquisition and multi-level streamer acquisition can be numerically simulated from standard acquisition data. The simulated multi-level measurements increase the performance of the echo-deblending process. 
The output of the echo-deblending algorithm on the source side consists of two ghost-free records: one generated by the real source at the actual location below the water level and one generated by the ghost source at the mirrored location above the water level. If we apply our algorithm at the detector side as well, we end up with four ghost-free shot records. All these records are input to migration. Finally, we demonstrate that the proposed echo-deblending algorithm is robust in the presence of background noise.

A two-step calendar-time interpolation method for 2D seismic amplitude maps is presented. The contour interpolation step is formulated as a quadratic programming problem, whereas the amplitude value interpolation is based on a conditional probability formulation. The method is applied to field data from the Sleipner CO_{2} storage project. The output is a continuous image (movie) of the CO_{2} plume. Besides visualization, the output can be used to better couple 4D seismic to other types of acquired data. The interpolation uncertainty increases with the time gap between consecutive seismic surveys and is estimated by leaving a survey out (blind test). Errors from such tests can be used to identify problems in understanding the flow and possibly improve the interpolation scheme for a given case. Field-life costs of various acquisition systems and repeat frequencies are linked to the time-lapse interpolation errors. The error in interpolated amplitudes increased by 3%–4% per year of interpolation gap for the Sleipner case. Interpolation can never fully replace measurements.

We present an approach based on local-slope estimation for the separation of scattered surface waves from reflected body waves. The direct and scattered surface waves contain a significant amount of seismic energy. They present great challenges in land seismic data acquisition and processing, particularly in arid regions with complex near-surface heterogeneities (e.g., dry river beds, wadis/large escarpments, and karst features). The near-surface scattered body-to-surface waves, which have amplitudes comparable to reflections, can mask the seismic reflections. These difficulties, added to the large-amplitude direct and back-scattered surface (Rayleigh) waves, cause a major reduction in signal-to-noise ratio and degrade the final sub-surface image quality. Removal of these waves can be difficult using conventional filtering methods, such as an *f–k* filter, without distorting the reflected signal. The filtering algorithm we present is based on predicting the spatially varying slope of the noise, using steerable filters, and separating the signal and noise components by applying a directional nonlinear filter oriented toward the noise direction to predict the noise and then subtract it from the data. The slope estimation step using steerable filters is very efficient: it requires only a linear combination of a set of basis filters at fixed orientation to synthesize an image filtered at an arbitrary orientation. We apply our filtering approach to simulated data as well as to seismic data recorded in the field to separate the scattered surface waves from the reflected body waves, and we demonstrate its superiority over conventional techniques in signal preservation and noise suppression.
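The slope-estimation idea can be sketched with the two first-derivative basis responses combined in structure-tensor form, a simplified stand-in for the full steerable-filter implementation; the dipping-event image and all parameters below are illustrative.

```python
import numpy as np

# Local slope of linear events from two steerable basis responses (the
# image x- and y-derivatives).  The response steered to angle theta is
# cos(theta)*Ix + sin(theta)*Iy; its mean energy is extremal where
# tan(2*theta) = 2<Ix*Iy> / (<Ix^2> - <Iy^2>).
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
slope_true = 0.5                              # moveout (samples per trace)
img = np.sin(0.4 * (yy - slope_true * xx))    # dipping linear events

Iy, Ix = np.gradient(img)                     # axis 0 is y, axis 1 is x
theta = 0.5 * np.arctan2(2.0 * (Ix * Iy).mean(),
                         (Ix ** 2).mean() - (Iy ** 2).mean())
slope_est = -1.0 / np.tan(theta)              # events are normal to the gradient
```

In the actual algorithm this estimate is computed locally, so the predicted noise direction varies spatially before the directional nonlinear filter is applied.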

Based on the theory of anisotropic elasticity and on static mechanical measurements of transversely isotropic hydrocarbon source rocks or rock-like materials, we reason that one of the three principal Poisson's ratios of transversely isotropic hydrocarbon source rocks should always be greater than the other two, and that they should generally be positive. From these relations, we derive tight physical constraints on *c*_{13}, the Thomsen parameter δ, and the anellipticity parameter η. Some published data from laboratory velocity anisotropy measurements lie outside these constraints, which we attribute primarily to the substantial uncertainty associated with the oblique velocity measurement. These physical constraints will be useful for understanding the Thomsen parameter δ, for data quality checking, and for predicting δ from measurements perpendicular and parallel to the symmetry axis of a transversely isotropic medium. The physical constraints should also have potential application in anisotropic seismic data processing.

Wavefield decomposition forms an important ingredient of various geophysical methods. An example is the decomposition into upgoing and downgoing wavefields, with simultaneous decomposition into different wave/field types. The multi-component field decomposition scheme makes use of recordings of different field quantities (such as particle velocity and pressure). In practice, different recordings can be obscured by different sensor characteristics, requiring calibration with an unknown calibration factor. Not all field quantities required for multi-component field decomposition might be available, or they can suffer from different noise levels. The multi-depth-level decomposition approach makes use of field quantities recorded at multiple depth levels, e.g., two closely separated horizontal boreholes, a single receiver array combined with free-surface boundary conditions, or acquisition geometries with a high density of vertical boreholes. We theoretically describe the multi-depth-level decomposition approach in a unified form, showing that it can be applied to different kinds of fields in dissipative, inhomogeneous, anisotropic media, e.g., acoustic, electromagnetic, elastodynamic, poroelastic, and seismoelectric fields. We express the one-way fields at one depth level in terms of the observed fields at multiple depth levels, using extrapolation operators that depend on the medium parameters between the two depth levels. Lateral invariance at the depth level of decomposition allows us to carry out the multi-depth-level decomposition in the horizontal wavenumber–frequency domain. We illustrate the multi-depth-level decomposition scheme using two synthetic elastodynamic examples. The first example uses particle velocity recordings at two depth levels, whereas the second example combines recordings at one depth level with the Dirichlet free-surface boundary condition of zero traction. 
Comparison with multi-component decomposed fields shows a perfect match in both amplitude and phase for both cases. The multi-depth-level decomposition scheme is fully customizable to the desired acquisition geometry. The decomposition problem is in principle an inverse problem. Notches may occur at certain frequencies, causing the multi-depth-level composition matrix to become uninvertible, requiring additional notch filters. We can add multi-depth-level free-surface boundary conditions as extra equations to the multi-component composition matrix, thereby overdetermining this inverse problem. The combined multi-component–multi-depth-level decomposition on a land data set clearly shows improvements in the decomposition results, compared with the performance of the multi-component decomposition scheme.
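For intuition, the multi-component decomposition against which the multi-depth-level results are compared reduces, for an acoustic field at vertical incidence, to a scalar impedance weighting of pressure and particle velocity; the acoustic simplification and all wave parameters below are illustrative assumptions.

```python
import numpy as np

# Up/down decomposition at a depth level, vertical incidence: the
# wavenumber-frequency operator reduces to the acoustic impedance rho*c.
rho, c = 1000.0, 1500.0
w = 2 * np.pi * 25.0
t = np.linspace(0.0, 0.4, 512)

p_down = np.cos(w * t)              # pressure of a downgoing plane wave
vz_down = p_down / (rho * c)        # impedance relation for its velocity
p_up = 0.6 * np.cos(w * t + 0.3)    # an upgoing wave: opposite velocity sign
vz_up = -p_up / (rho * c)

p = p_down + p_up                   # what a hydrophone records
vz = vz_down + vz_up                # what a geophone records

P_down = 0.5 * (p + rho * c * vz)   # multi-component decomposition
P_up = 0.5 * (p - rho * c * vz)
```

The multi-depth-level scheme achieves the same split using field recordings at two levels and extrapolation operators instead of two collocated field quantities.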

For a magnetic target, the spatial magnetic signal can be expressed as a convolutional integral over the Green's function of an assumed model with susceptibility as its parameter. A filter can be used to obtain the susceptibility by minimizing the mismatch between the observed and computed magnetic anomalies. In this perspective, we report the development of an advanced digital filter, which is efficient and can be used to map rock susceptibility from acquired magnetic data. To design the new filter, we modified the space-domain standard Wiener–Hopf filter by imposing two constraints: (i) a filter energy constraint; and (ii) normalization of the filter coefficients. These constraints make it capable of characterizing source bodies from the magnetic anomalies they produce. We assume that the magnetic data are produced by induced magnetization only, and the interpretation can only be as good as the subsurface model.

Our technique is less sensitive to data noise, which makes it efficient in enhancing the interpretation model. The modified filter demonstrates its applicability on synthetic data with additive white Gaussian noise. To check the efficacy and adaptability of this tool in a more realistic setting, it is also tested on real magnetic data acquired over a kimberlitic district adjoining the western margin of the Cuddapah Basin in India, to identify the source bodies from the anomalies. Our results show that the modified Wiener–Hopf filter with the constraints is more stable and efficient for magnetic data than the standard Wiener–Hopf filter.

In the traditional inversion of the Rayleigh dispersion curve, layer thickness, the second most sensitive parameter in modelling the Rayleigh dispersion curve, is usually assumed to be correct and is used as fixed *a priori* information. Because knowledge of the layer thickness is typically imprecise, the use of such *a priori* information may cause traditional Rayleigh dispersion curve inversions to get trapped in local minima and to yield results that are far from the real solution. In this study, we avoid this issue by jointly inverting the Rayleigh dispersion curve data with vertical electric sounding data, using the common layer thickness to couple the two methods. The key idea of the proposed joint inversion scheme is to combine the methods in one joint Jacobian matrix and to invert for layer S-wave velocity, resistivity, and, in contrast with a traditional Rayleigh dispersion curve inversion, layer thickness as an additional parameter. The proposed joint inversion approach is tested with noise-free and Gaussian-noise data on six characteristic synthetic sub-surface models: a model with typical dispersion; a low-velocity half-space model; models with particularly stiff and soft layers, respectively; and models reproduced from the stiff- and soft-layer cases with different layer resistivities. In the joint inversion process, the non-linear damped least squares method is used together with the singular value decomposition approach to find a proper damping value for each iteration. The proposed joint inversion scheme tests many damping values and chooses the one that best approximates the observed data in the current iteration. The quality of the joint inversion is checked with the relative distance measure. In addition, a sensitivity analysis is performed for the typical dispersive sub-surface model to illustrate the benefits of the proposed joint scheme. 
The results of the synthetic models reveal that combining the Rayleigh dispersion curve and vertical electric sounding methods in a joint scheme provides reliable sub-surface models even in complex and challenging situations, without using any *a priori* information.
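The per-iteration damping search can be sketched on a linear toy problem; the Jacobian, noise level, and candidate damping values below are illustrative assumptions, not the joint Rayleigh/vertical-electric-sounding system.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((30, 5))        # stand-in for the joint Jacobian
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d = G @ m_true + 0.01 * rng.standard_normal(30)

U, s, Vt = np.linalg.svd(G, full_matrices=False)

def damped_solution(eps):
    # Damped least squares via SVD: filter factors s / (s^2 + eps^2).
    f = s / (s ** 2 + eps ** 2)
    return Vt.T @ (f * (U.T @ d))

# Try many damping values and keep the one that best fits the data,
# mimicking the per-iteration damping search described above.
candidates = [10.0 ** e for e in range(-6, 2)]
best = min(candidates, key=lambda e: np.linalg.norm(G @ damped_solution(e) - d))
m_est = damped_solution(best)
```

In the non-linear case the same search is repeated at every iteration around the current model, with the Jacobian re-linearized each time.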

In the field of seismic interferometry, researchers have retrieved surface waves and body waves by cross-correlating recordings of uncorrelated noise sources to extract useful subsurface information. The retrieved wavefields in most applications are between receivers. When the positions of the noise sources are known, inter-source interferometry can be applied to retrieve the wavefields between sources, thus turning sources into virtual receivers. Previous applications of this form of interferometry assume impulsive point sources or transient sources with similar signatures. We investigate the requirements of applying inter-source seismic interferometry using non-transient noise sources with known positions to retrieve reflection responses at those positions, and we show the results using synthetic drilling noise as the source. We show that, if pilot signals (estimates of the drill-bit signals) are not available, it is required that the drill-bit signals are the same and that the phases of the virtual reflections at drill-bit positions can be retrieved by deconvolution interferometry or by cross-coherence interferometry. Further, for this case, classic interferometry by cross-correlation can be used if the source power spectrum can be estimated. If pilot signals are available, virtual reflection responses can be obtained by first using standard seismic-while-drilling processing techniques such as pilot cross-correlation and pilot deconvolution to remove the drill-bit signatures in the data and then applying cross-correlation interferometry. Therefore, provided that pilot signals are reliable, drill-bit data can be redatumed from surface to borehole depths using this inter-source interferometry approach without any velocity information of the medium, and we show that a well-positioned image below the borehole can be obtained using interferometrically redatumed reflection responses with just a simple velocity model. 
We discuss some of the practical hurdles that restrict the application of the proposed method offshore.
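The core correlation step can be sketched for a lossless toy medium with a noise-like (non-transient) source signature; the delay, trace length, and geometry below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4096
signature = rng.standard_normal(n)      # non-transient, noise-like signature

lag_true = 40                           # extra travel time (samples) to B
rec_a = signature                       # recording at position A
rec_b = np.roll(signature, lag_true)    # same signal, delayed, at position B

# Cross-correlation interferometry: the correlation of the two recordings
# peaks at the inter-position travel time, so known source positions can be
# turned into virtual receivers without any velocity information.
xcorr = np.correlate(rec_b, rec_a, mode='full')
lag_est = int(np.argmax(xcorr)) - (n - 1)
```

With identical signatures the correlation collapses the unknown source function; when signatures differ, deconvolution or cross-coherence variants are needed, as the abstract states.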

This paper introduces an efficiency improvement to the sparse-grid geometric sampling methodology for assessing uncertainty in non-linear geophysical inverse problems. Traditional sparse-grid geometric sampling works by sampling in a reduced-dimension parameter space bounded by a feasible polytope, i.e., a generalization of a polygon to dimensions above two. The feasible polytope is approximated by a hypercube. When the polytope is very irregular, the hypercube can be a poor approximation, leading to computational inefficiency in sampling. We show how the polytope can be regularized using a rotation and scaling based on principal component analysis. This simple regularization helps to increase the efficiency of the sampling and, by extension, to reduce the computational cost of the uncertainty solution. We demonstrate this on two synthetic 1D examples related to controlled-source electromagnetic and amplitude-versus-offset inversion. The results show an improvement of about 50% in the performance of the proposed methodology when compared with the traditional one. As the amplitude-versus-offset example shows, however, the efficiency gains are very likely to depend on the shape and complexity of the original polytope, and further investigation of polytope regularization is needed to fully understand when a simple rotation and scaling step is sufficient.
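A minimal sketch of the principal-component-analysis regularization, assuming a thin rotated rectangle as a stand-in for the feasible polytope: rotating onto principal axes makes the bounding hypercube far tighter, which shows up directly in the sampling acceptance rate.

```python
import numpy as np

rng = np.random.default_rng(6)

# A thin, rotated rectangular feasible region (illustrative polytope).
half = np.array([5.0, 0.2])
ang = np.pi / 6
R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
inside = lambda p: (np.abs(p @ R) <= half).all(axis=1)

# Vertices of the region, used to estimate the principal axes.
corners = np.array([[sx * half[0], sy * half[1]]
                    for sx in (-1, 1) for sy in (-1, 1)]) @ R.T

def acceptance(points, to_original):
    # Sample the axis-aligned bounding box of `points` and count how many
    # samples map back into the feasible region.
    lo, hi = points.min(0), points.max(0)
    u = rng.uniform(lo, hi, (20000, 2))
    return inside(to_original(u)).mean()

rate_naive = acceptance(corners, lambda u: u)

# PCA regularization: rotate the region onto its principal axes so the
# bounding hypercube hugs it tightly, boosting sampling efficiency.
_, _, Vt = np.linalg.svd(corners - corners.mean(0), full_matrices=False)
rate_pca = acceptance(corners @ Vt.T, lambda u: u @ Vt)
```

The hypercube wastes most samples on the tilted region but almost none after rotation; the scaling step additionally equalizes the axis lengths in higher dimensions.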

The hydrodynamic characterization of the epikarst, the shallow part of the unsaturated zone in karstic systems, has always been challenging for geophysical methods. This work investigates the feasibility of coupling time-lapse refraction seismic data with petrophysical and hydrologic models for the quantitative determination of water storage and residence time at shallow depth in carbonate rocks. The Biot–Gassmann fluid substitution model describing the seismic velocity variations with water saturation at low frequencies needs to be modified for this lithology. I propose to include a saturation-dependent rock-frame weakening to take into account water–rock interactions. A Bayesian inversion workflow is presented to estimate the water content from seismic velocities measured at variable saturations. The procedure is first tested on previously published laboratory measurements on core samples, and the results show that it is possible to estimate the water content and its uncertainty. The validated procedure is then applied to a time-lapse seismic study to locate and quantify seasonal water storage at shallow depth along a seismic profile. The residence time of the water in the shallow layers is estimated by coupling the time-lapse seismic measurements with rainfall records, simple flow equations, and the petrophysical model. The daily water input computed from these records is used to constrain the inversion of seismic velocities for the daily saturation state and the hydrodynamic parameters of the flow model. The workflow is applied to a real monitoring case, and the results show that the average residence time of the water in the epikarst is generally around three months, but it is only 18 days near an infiltration pathway. During the winter season, the residence times are three times shorter in response to the increase in the effective rainfall.

The increased application of airborne electromagnetic surveys to hydrogeological studies is driving a demand for data that can consistently be inverted for accurate subsurface resistivity structure from the near surface to depths of several hundred metres. We present an evaluation of three commercial airborne electromagnetic systems over two test blocks in western Nebraska, USA. The selected test blocks are representative of shallow and deep alluvial aquifer systems with low groundwater salinity and an electrically conductive base of aquifer. The aquifer units show significant lithologic heterogeneity and include both modern and ancient river systems. We compared the various data sets to one another and inverted resistivity models to borehole lithology and to ground geophysical models. We find distinct differences among the airborne electromagnetic systems as regards the spatial resolution of models, the depth of investigation, and the ability to recover near-surface resistivity variations. We further identify systematic biases in some data sets, which we attribute to incomplete or inexact calibration or compensation procedures.

In seismic interpretation and seismic data analysis, it is of critical importance to effectively identify certain geologic formations in very large seismic data sets. In particular, solving the problem of salt characterization from seismic data efficiently and automatically can lead to important time savings during the interpretation process. In this work, we present a novel numerical approach that automatically segments or identifies salt structures in a post-stack seismic data set with minimal intervention from the interpreter. The proposed methodology is based on the recent theory of sparse representation and consists of three major steps: first, supervised learning assisted by the user, which is performed only once; second, a segmentation process via unconstrained ℓ_{1} optimization; and finally, a post-processing step based on signal separation. Furthermore, since the second step depends only upon local information at each time, the whole process greatly benefits from parallel computing platforms. We conduct numerical experiments on a synthetic 3D seismic data set demonstrating the viability of our method. More specifically, we found that the proposed approach matches up to 98.53% with respect to the corresponding 3D velocity model available in advance. Finally, in Appendices A and B, we present a convergence analysis providing theoretical guarantees for the proposed method.
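The ℓ_{1} segmentation step can be sketched with iterative soft thresholding (ISTA) on a toy two-class dictionary; the atoms, test trace, and regularization weight are illustrative assumptions, not the authors' learned dictionary or labelling rule.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Iterative soft thresholding for min 0.5*||Dx - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 32)
# Toy dictionary: smooth "salt-interior" atoms and oscillatory "sediment"
# atoms; a patch is labelled by the class that captures most coefficient mass.
salt = np.stack([np.exp(-((t - c) / 0.3) ** 2) for c in (0.3, 0.7)], axis=1)
sed = np.stack([np.sin(2 * np.pi * f * t) for f in (4, 8)], axis=1)
D = np.concatenate([salt, sed], axis=1)
D /= np.linalg.norm(D, axis=0)

y = D[:, 2] + 0.02 * rng.standard_normal(32)   # a noisy "sediment" trace
x = ista(D, y)
label = 'salt' if np.abs(x[:2]).sum() > np.abs(x[2:]).sum() else 'sediment'
```

Because each patch is coded independently, this step parallelizes trivially, which is the property the abstract exploits.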

Elastic interactions between pores and cracks reflect how they are organized or spatially distributed in porous rocks. The principal goal of this paper is to understand and characterize the effect of elastic interactions on the effective elastic properties. We perform finite element modelling to quantitatively study how the spatial arrangement of inclusions affects stress distribution and the resulting overall elasticity. It is found that the stress field can be significantly altered by elastic interactions. Compared with a non-interacting situation, stress shielding considerably stiffens the effective media, while stress amplification appreciably reduces the effective elasticity. We also demonstrate that the T-matrix approach, which takes into account the ellipsoidal distribution of pores or cracks, can successfully characterize the competing effects between stress shielding and stress amplification. Numerical results suggest that, when the concentration of cracks increases beyond the dilute limit, the single parameter of crack density is not sufficient to characterize the contribution of the cracks to the effective elasticity. In order to obtain more reliable and accurate predictions for the effective elastic responses and seismic anisotropies, the spatial distribution of pores and cracks should be included. Additionally, such elastic interaction effects are also dependent on both the pore shapes and the fluid infill.

Based on analytic relations, we compute the reflection and transmission responses of a periodically layered medium with a stack of elastic shales and partially saturated sands. The sand layers are considered anelastic (using patchy saturation theory) or elastic (with an effective velocity). Using the patchy saturation theory, we introduce velocity dispersion due to mesoscale attenuation in the sand layer. This intrinsic anelasticity creates a frequency dependence that adds to the one coming from the layering (macroscale). We choose several configurations of the periodically layered medium to enhance the effect of anelasticity to varying degrees. The least favourable case for observing the effect of intrinsic anelasticity combines low dispersion in the sand layer, strong contrast between shales and sands, and a low net-to-gross ratio (sand proportion divided by the sand + shale proportion), whereas the most favourable case combines high dispersion, weak contrast, and a high net-to-gross ratio. We then compare the results to show which dispersion effect dominates the reflection and transmission responses. In the frequency domain, the influence of the intrinsic anelasticity is not negligible compared with the layering effect. Even if the main resonance patterns are the same, the resonance peaks for anelastic cases are shifted towards higher frequencies and have slightly lower amplitudes than for elastic cases. These observations are more pronounced when we combine all effects and when the net-to-gross ratio increases, whereas the differences between anelastic and elastic results are less affected by the level of intrinsic dispersion and by the contrast between the layers. In the time domain, the amplitude of the responses is significantly lower when we consider intrinsically anelastic layers. Even if the phase response has the same features for elastic and anelastic cases, the anelastic model responses are clearly more attenuated than the elastic ones.
We conclude that the frequency dependence due to the layering does not always dominate the responses. The frequency dependence coming from intrinsic visco-elastic phenomena affects the amplitude of the responses in both the frequency and time domains. Intrinsic attenuation and velocity dispersion of individual layers should therefore be taken into account when analysing seismic and log data in thinly layered reservoirs.
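The layering (macroscale) part of the frequency dependence can be reproduced with the standard normal-incidence propagator-matrix recursion for a lossless stack. The sketch below uses illustrative shale/sand values, not the models of the paper, and omits the anelastic (patchy-saturation) dispersion:

```python
import numpy as np

def stack_reflectivity(freqs, rho, vel, thick, z_top, z_bot):
    """Normal-incidence reflection response of a layered stack between two
    half-spaces, via acoustic 2x2 propagator matrices (elastic, lossless)."""
    R = np.empty(len(freqs), dtype=complex)
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        M = np.eye(2, dtype=complex)
        for r, v, d in zip(rho, vel, thick):
            Z = r * v                        # layer impedance
            th = w * d / v                   # vertical phase across the layer
            M = M @ np.array([[np.cos(th), 1j * Z * np.sin(th)],
                              [1j * np.sin(th) / Z, np.cos(th)]])
        # impedance seen from the top, then the reflection coefficient
        Zin = (M[0, 0] * z_bot + M[0, 1]) / (M[1, 0] * z_bot + M[1, 1])
        R[k] = (Zin - z_top) / (Zin + z_top)
    return R

# Periodic shale/sand stack (illustrative values; net-to-gross = 1/3)
n_pairs = 10
rho = [2400.0, 2100.0] * n_pairs            # kg/m^3
vel = [3000.0, 2200.0] * n_pairs            # m/s
thick = [4.0, 2.0] * n_pairs                # m
freqs = np.linspace(1.0, 100.0, 200)
R = stack_reflectivity(freqs, rho, vel, thick,
                       z_top=2400.0 * 3000.0, z_bot=2400.0 * 3000.0)
```

For a lossless stack the reflection magnitude never exceeds unity, and at very low frequency the thin stack is nearly transparent; introducing intrinsic anelasticity would make the layer velocities complex and frequency dependent.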

The Catalão I alkaline–carbonatite complex, which is located in Central Brazil, is one of the main producers of niobium and phosphates in the world. It has been intensely studied geologically and geochemically for its economic potential. This work presents a geophysical analysis over this complex, identifying its behaviour in the subsurface and in portions that have not been explored yet. Different geophysical methods and techniques were applied to achieve the most reliable results possible: at the surface, through radiometric, geological, and topographic data, and at depth, by geological, magnetic, and gravimetric data. The analysis was successfully completed with inversions of gravity and magnetic data that resulted in quite similar models, both in volume and shape. Their density and magnetic susceptibility contrasts were consistent with the expected dunite–pyroxenite lithology from the original mafic intrusion and indicated (by exclusion) the volume of the carbonatite body, which along with the known contents of phosphates and niobium allowed an indirect estimate of the reserves and resources of the complex.

We present a method for inversion of fracture compliance matrix components from wide-azimuth noisy synthetic PS reflection data and quantitatively show that reflection amplitude variations with offset and azimuth for converted PS-waves are more informative than those of P-waves for fracture characterization. We consider monoclinic symmetry for the fractured reservoir (with parameters chosen from the Woodford Shale), which can be formed by two or more sets of vertical fractures embedded in a vertically transverse isotropic background.

Components of effective fracture compliance matrices for a medium with monoclinic symmetry are related to the characteristics of the fractured medium. Monte Carlo simulation results show that inversion of PS reflection data is more robust than inversion of PP reflection data with respect to uncertainties in our *a priori* knowledge (the vertically transverse isotropic parameters of the unfractured rock). We also show that, while inversion of PP reflections is sensitive to contrasts in the elastic properties of the upper and lower media, inversion of PS reflections is robust with respect to such contrasts.

In applications such as oil and gas production, deep geothermal energy production, underground storage, and mining, it is common practice to implement local seismic networks to monitor and mitigate induced seismicity. For this purpose, it is crucial to determine the capability of the network to detect a seismic event of predefined magnitude in the target area. The determination of the magnitude of completeness of a network is particularly required to properly interpret seismic monitoring results. We propose a method to compute the detection probability for existing local seismic networks, which (i) strictly follows the applied detection sequence; (ii) estimates the detection capability where seismicity has not yet occurred; and (iii) delivers the results in terms of probabilities. The procedure includes a calibration of a local magnitude scale using regional earthquakes recorded by the network and located outside the monitored area. It involves pre-processing of the seismograms recorded at each station as performed during the triggering sequence, which is assumed to be based on amplitude thresholds. Then, the calibrated magnitude–distance–amplitude relations are extrapolated to short distances and combined to reproduce the network detection sequence. This yields the probability of detecting a seismic event of a given magnitude at a specified location. This observation-based approach is an alternative to fully theoretical detection capability modelling and includes field conditions. Seismic wave attenuation by geometrical spreading and intrinsic attenuation, site effects, and instrumental responses are partly accounted for by the calibration. We apply this procedure to the seismic network deployed in the Bruchsal geothermal field (Germany). Although the system was in good working order, no induced seismicity was identified in the area between June 2010, when monitoring started, and November 2012.
The recording of distant seismicity during this time period, however, allowed the application of the proposed procedure. According to the applied network detection parameters, the results indicate that the absence of seismicity can be interpreted as a 95% probability that no seismic event with *M _{L}* ≥ 0.7 occurred below the network at 2.4-km depth, i.e., in the geothermal reservoir.
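Combining per-station detection probabilities into a network detection probability can be sketched as a Poisson-binomial computation, assuming independent stations and a minimum-number-of-triggers rule; the per-station probabilities and the three-station rule below are hypothetical placeholders, not values from the Bruchsal network:

```python
import numpy as np

def prob_at_least_k(p, k):
    """Probability that at least k stations trigger, given independent
    per-station detection probabilities p (Poisson binomial distribution)."""
    n = len(p)
    dp = np.zeros(n + 1)          # dp[j] = P(exactly j stations triggered)
    dp[0] = 1.0
    for pi in p:
        # each station either triggers (shift count by one) or does not
        dp[1:] = dp[1:] * (1.0 - pi) + dp[:-1] * pi
        dp[0] *= (1.0 - pi)
    return dp[k:].sum()

# Hypothetical per-station probabilities for one magnitude and location,
# combined with a three-station trigger rule
p_stations = [0.9, 0.8, 0.7, 0.6, 0.5]
p_network = prob_at_least_k(p_stations, k=3)
```

Evaluating `p_network` over a grid of magnitudes and locations is what produces maps of detection probability such as the 95% statement quoted above.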

Integrating migration velocity analysis and full waveform inversion can help reduce the high non-linearity of the classic full waveform inversion objective function. Inverting for the long- and short-wavelength components of the velocity model using a dual objective function that is sensitive to both components is still very expensive and has produced mixed results. We develop an approach in which the two components are integrated to complement each other. We specifically utilize the image to generate reflections in our synthetic data only when the velocity model is not capable of producing such reflections. As a result, migration velocity analysis is active when we need it, and its influence is mitigated when the velocity model produces accurate reflections (possibly first for the low frequencies). This is achieved using a novel objective function that includes both objectives. Applications to a layered model and the Marmousi model demonstrate the main features of the approach.

Decomposing seismic data into local slopes is the basic idea behind velocity-independent imaging. Using accurate moveout approximations enables computing moveout attributes such as normal moveout velocity and nonhyperbolic parameters as functions of zero-offset travel time. Mapping of moveout attributes is performed from the pre-stack seismic data domain into the time-migrated image domain. Different moveout attributes have different accuracy for a given moveout approximation, depending on the corresponding order of the travel-time derivative. The most accurate attribute is the zero-offset travel time, and the nonhyperbolic parameter has the worst accuracy, regardless of the moveout approximation. Typically, the mapping of moveout attributes is performed using a point-to-point procedure, whereas the generalized moveout approximation requires two point-to-point mappings. Testing the attribute mapping on different models shows that the accuracy of the mapped attributes is model dependent, whereas the generalized moveout approximation gives practically exact results.
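For the simplest (hyperbolic) moveout approximation, the point-to-point mapping from a pre-stack pick and its local slope to zero-offset time and normal moveout velocity follows from classical slope relations; the sketch below illustrates this mapping only, not the generalized moveout approximation discussed above:

```python
import numpy as np

def map_hyperbolic(t, x, p):
    """Map a pre-stack pick (time t, offset x, local slope p = dt/dx) to
    zero-offset time t0 and NMO velocity v, assuming hyperbolic moveout
    t^2 = t0^2 + x^2 / v^2, for which p = x / (v^2 t)."""
    v = np.sqrt(x / (p * t))          # NMO velocity from the slope
    t0 = np.sqrt(t * (t - p * x))     # zero-offset travel time
    return t0, v

# Synthetic check: forward-model a hyperbolic event, then map back
v_true, t0_true, x = 2000.0, 1.0, 1000.0
t = np.sqrt(t0_true**2 + (x / v_true) ** 2)
p = x / (v_true**2 * t)
t0_est, v_est = map_hyperbolic(t, x, p)
```

The zero-offset time involves the lowest-order derivative (the slope itself), which is consistent with it being the most accurately mapped attribute.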

Seismic acquisition can be costly and inefficient when using spiked geophones. In many settings, such as deserts, the most practical solution is the use of flat bases, where geophone-ground coupling is based on an optimal choice of the mass and area of contact between the receiver and the ground. This optimization is necessary since sand-covered areas consist of loose sediments in which poor coupling occurs. Other cases include ground coupling on stiff pavements, for instance in urban areas, and ocean-bottom nodes. We consider three different approaches to analyse coupling and model the geophone with a flat base (plate) resting on an elastic half-space. Two existing models, based on the full-wave theory, which we refer to as the Wolf and Hoover-O'Brien models, predict a behaviour different from that of the novel method introduced in this work. This method is based on the transmission coefficient of upgoing waves impinging on the geophone-ground contact, where the ground is described as an anelastic half-space. The boundary conditions at the contact have already been used to model fractures and are shown here to provide the equation of the damped oscillator. This fracture-contact model depends on the stiffness characteristic of the contact between the geophone base plate and the ground. The transmission coefficient from the ground to the plate increases for increasing weight and decreasing base plate area. The new model predicts that the resonant frequency is independent of the geophone weight and plate radius, while the recorded energy increases with increasing weight and decreasing base plate area (as shown by our own experiments and by Krohn's measurements), which is contrary to the theories developed by Wolf and Hoover-O'Brien. The transient response is obtained by an inverse Fourier transform.
Optimal geophone-ground coupling and energy transmission are both required: the first means that the geophone follows the motion of the ground, and the second that the signal is detectable. As a final example, we simulate seismic acquisition based on the novel theory, showing the differences between optimal and poor ground-to-geophone energy transmission.
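The damped-oscillator behaviour that the fracture-contact model reduces to can be sketched with the textbook base-excitation transmissibility of a single-degree-of-freedom oscillator; the natural frequency and damping ratio below are illustrative placeholders, not fitted coupling values:

```python
import numpy as np

def transmissibility(f, fn, zeta):
    """Amplitude ratio (plate motion / ground motion) of a base-excited
    damped oscillator: a classical stand-in for geophone-ground coupling.
    fn is the natural frequency, zeta the damping ratio."""
    r = f / fn
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

f = np.linspace(1.0, 400.0, 2000)            # frequency axis in Hz
H = transmissibility(f, fn=120.0, zeta=0.2)  # illustrative coupling values
f_peak = f[np.argmax(H)]                     # resonance peak location
```

Below the resonance the plate faithfully follows the ground (amplitude ratio near one); near the resonance the response is amplified, which is where coupling distortions concentrate.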

We propose a method for imaging small-scale diffraction objects in complex environments in which Kirchhoff-based approaches may fail. The proposed method is based on a separation between the specular reflection and diffraction components of the total wavefield in the migrated surface angle domain. Reverse-time migration is utilized to produce the common image gathers. This approach provides stable and robust results in cases of complex velocity models. The separation is based on the fact that, in surface angle common image gathers, reflection events are focused at positions that correspond to the apparent dip angle of the reflectors, whereas diffracted events are distributed over a wide range of angles. A high-resolution Radon-based procedure is used to efficiently separate the reflection and diffraction wavefields. In this study, we consider poststack diffraction imaging. The advantages of working in the poststack domain are its numerical efficiency and the reduced computational time. The numerical results show that the proposed method is able to image diffraction objects in complex environments. The application of the method to a real seismic dataset illustrates the capability of the approach to extract diffractions.

Recently, an effective and powerful approach for simulating seismic wave propagation in elastic media with an irregular free surface was proposed. However, in previous studies, researchers used the periodic condition and/or sponge boundary condition to attenuate artificial reflections at the boundaries of a computational domain. As demonstrated in the literature, both the periodic condition and the sponge boundary condition are simple but much less effective than the well-known perfectly matched layer boundary condition. In view of this, we introduce a perfectly matched layer to simulate seismic wavefields in unbounded models with an irregular free surface. We first incorporate a perfectly matched layer into wave equations formulated in the frequency domain in Cartesian coordinates. We then transform them back into the time domain through inverse Fourier transformation. Afterwards, we use a boundary-conforming grid and map a rectangular grid onto a curved one, which allows us to transform the equations and free surface boundary conditions from Cartesian coordinates to curvilinear coordinates. As numerical examples show, if free surface boundary conditions are imposed at the top border of a model, then they should also be incorporated into the perfectly matched layer imposed at the top-left and top-right corners of a 2D model, where the free surface boundary conditions and the perfectly matched layer meet; otherwise, reflections will occur at the intersections of the free surface and the perfectly matched layer, as confirmed in this paper. Thus, by replacing normal second derivatives in the wave equations in curvilinear coordinates with free surface boundary conditions, we successfully implement the free surface boundary conditions into the perfectly matched layer at the top-left and top-right corners of a 2D model at the surface.
A number of numerical examples show that the perfectly matched layer constructed in this study is effective in simulating wave propagation in unbounded media and the algorithm for implementation of the perfectly matched layer and free surface boundary conditions is stable for long-time wavefield simulation on models with an irregular free surface.
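A common recipe for the perfectly matched layer damping profile is the quadratic profile of Collino and Tsogka, chosen for a target theoretical reflection coefficient; the sketch below is a generic Cartesian construction with illustrative parameters, not the curvilinear formulation of this paper:

```python
import numpy as np

def pml_profile(x, L, v, R0=1e-3):
    """Quadratic PML damping profile d(x) = d0 * (x/L)^2 with
    d0 = 3 v ln(1/R0) / (2 L), where x is the distance into the layer,
    L the layer thickness, v a representative velocity, and R0 the
    target theoretical reflection coefficient (Collino-Tsogka style)."""
    d0 = 3.0 * v * np.log(1.0 / R0) / (2.0 * L)
    return d0 * (x / L) ** 2

L, v = 200.0, 3000.0                  # illustrative thickness and velocity
x = np.linspace(0.0, L, 51)           # distance into the absorbing layer
d = pml_profile(x, L, v)              # zero at the interior, max at the edge
```

The profile vanishes at the interior edge (so the layer is matched to the physical domain) and grows smoothly towards the outer boundary, where the damping is strongest.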

Pressure drops associated with reservoir production generate excess stress and strain that cause travel-time shifts of reflected waves. Here, we invert time shifts of P-, S-, and PS-waves measured between baseline and monitor surveys for pressure reduction and reservoir length. The inversion results can be used to estimate compaction-induced stress and strain changes around the reservoir. We implement a hybrid inversion algorithm that incorporates elements of gradient, global/genetic, and nearest neighbour methods and permits exploration of the parameter space while simultaneously following local misfit gradients. Our synthetic examples indicate that optimal estimates of reservoir pressure from P-wave data can be obtained using the reflections from the reservoir top. For S-waves, time shifts from the top of the reservoir can be accurately inverted for pressure if the noise level is low. However, if noise contamination is significant, it is preferable to use S-wave data (or combined shifts of all three modes) from reflectors beneath the reservoir. Joint inversions of several wave types demonstrate improvements over any single mode. Reservoir length can be estimated using the time shifts of any mode from the reservoir top or deeper reflectors. We also evaluate the differences between the actual strain field and those corresponding to the best-case inversion results obtained using P- and S-wave data. Another series of tests addresses the inversion of the time shifts for the pressure drops in two-compartment reservoirs, as well as for the associated strain field. Numerical testing shows that a potentially serious source of error in the inversion is a distortion in the strain-sensitivity coefficients, which govern the magnitude of stiffness changes. This feasibility study suggests which wave types and reflector locations may provide the most accurate estimates of reservoir parameters from compaction-induced time shifts.

Borehole fluid injections are accompanied by microseismic activity not only during but also after termination of the fluid injection. Previously, this phenomenon has been analysed under the assumption that the main triggering mechanism is governed by linear pressure diffusion in a hydraulically isotropic medium. In this context, the so-called back front of seismicity has been introduced, which makes it possible to characterize the hydraulic transport from the spatiotemporal distribution of post-injection induced events. However, rocks are generally anisotropic, and, in addition, fluid injections can strongly enhance permeability. In this case, permeability becomes a function of pressure. For such situations, we carry out a comprehensive study of the behaviour and parametrization of the back front. Based on a model of a factorized anisotropic pressure dependence of permeability, we present an approach to reconstruct the principal components of the diffusivity tensor. We apply this approach to real microseismic data and show that the back front characterizes the least hydraulic transport. To investigate the back front of non-linear pore-fluid pressure diffusion, we numerically consider a power-law and an exponential pressure dependence of the diffusivity. To account for a post-injection enhanced hydraulic state of the rock, we introduce a model of a frozen (i.e., nearly unchanged after the stimulation) medium diffusivity and generate synthetic seismicity. We find that, for a weak non-linearity and 3D exponential diffusion, the linear-diffusion back front is still applicable. This finding is in agreement with microseismic data from Ogachi and Fenton Hill. However, for a strongly non-linear fluid–rock interaction such as hydraulic fracturing, the back front can deviate significantly from the time dependence of the linear-diffusion back front. This is demonstrated for a data set from the Horn River Basin. Hence, the behaviour of the back front is a strong indicator of a non-linear fluid–rock interaction.
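For the linear-diffusion reference case, hydraulic diffusivity is commonly estimated from the spatiotemporal envelope of triggered events via the classical triggering front r(t) = sqrt(4*pi*D*t), the injection-phase counterpart of the back front analysed above. A minimal sketch on synthetic data (the back front itself follows a different, more involved time dependence):

```python
import numpy as np

def fit_diffusivity(t, r):
    """Estimate a scalar hydraulic diffusivity D (m^2/s) from event times
    t (s) and source-event distances r (m), assuming the linear-diffusion
    triggering front r(t) = sqrt(4*pi*D*t). Least squares on r^2 = 4*pi*D*t
    through the origin gives D = sum(r^2 t) / (4*pi*sum(t^2))."""
    return float(np.sum(r**2 * t) / (4.0 * np.pi * np.sum(t**2)))

# Synthetic event cloud whose envelope follows the front with D = 0.05 m^2/s
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(10.0, 3600.0, 200))
r_front = np.sqrt(4.0 * np.pi * 0.05 * t)    # envelope distances
D_est = fit_diffusivity(t, r_front)
```

In an anisotropic medium the same fit applied along principal directions would yield direction-dependent diffusivities, which is the spirit of reconstructing the diffusivity tensor components.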

Multi-offset phase analysis of seismic surface waves is an established technique for the extraction of dispersion curves with high spatial resolution and, consequently, for the investigation of the subsurface in terms of shear wave velocity distribution. However, field applications are rarely documented in the published literature. In this paper, we discuss an implementation of the multi-offset phase analysis consisting of the estimation of the Rayleigh wave velocity by means of a moving window with a frequency-dependent length. This maximizes the lateral resolution at high frequencies while ensuring stability at the lower frequencies. In this way, we can retrieve the shallow lateral variability with high accuracy and, at the same time, obtain a robust surface-wave velocity measurement at depth. We apply this methodology to a dataset collected for hydrogeophysical purposes and compare the inversion results with those obtained by using refraction seismics and electrical resistivity tomography. The surface-wave results are in good agreement with those provided by the other methods and demonstrate a superior capability in retrieving both lateral and vertical velocity variations, including velocity inversions. Our results are further corroborated by the lithological information from a borehole drilled on the acquisition line. The availability of multi-offset phase analysis data also helps to disentangle a fairly complex interpretation of the other geophysical results.

Localization of fractured areas is of primary interest in the study of oil and gas geology in carbonate environments. Hydrocarbon reservoirs in these environments are embedded within an impermeable rock matrix but possess a rich system of various microheterogeneities, i.e., cavities, cracks, and fractures. Cavities accumulate oil, but its flow is governed by the system of fractures. A distinctive feature of wave propagation in such media is the excitation of scattered/diffracted waves by the microheterogeneities. This scattering could be a reliable attribute for characterizing the fine structure of reservoirs, but these waves have extremely low energy, and standard data processing renders them practically invisible in comparison with images produced by specular reflections. Therefore, any attempt to use these waves for imaging microheterogeneities requires a preliminary separation of the scattered waves from the specular reflections. In this paper, this separation is based on asymmetric summation, implemented by double focusing of Gaussian beams. To do this, special weights are computed by propagating Gaussian beams from the target area towards the acquisition system, separately for sources and receivers. The different mutual positioning of the beams in each pair introduces a variety of selective images designed to represent selected singular primitives of the target objects, such as fractures, cavities, and edges. In this way, one can construct various wave images of a target reservoir, particularly in scattered/diffracted waves. Additional removal of remnants of specular reflections is done by means of spectral analysis of the scattered/diffracted-wave images to recognize and cancel extended lineaments. Numerical experiments with Sigsbee 2A synthetic seismic data and some typical structures of the Yurubcheno-Tokhomskoye oil field in East Siberia are presented and discussed.

Recently, new onshore acquisition designs have been presented with multi-component sensors deployed in the shallow sub-surface (20 m–60 m). Virtual source redatuming has been proposed for these data to compensate for surface statics and to enhance survey repeatability. In this paper, we investigate the feasibility of replacing the correlation-based formalism that undergirds virtual source redatuming with multi-dimensional deconvolution, offering various advantages such as the elimination of free-surface multiples and the potential to improve virtual source repeatability. To allow for data-driven calibration of the sensors and to improve robustness in cases with poor sensor spacing in the shallow sub-surface (resulting in a relatively high wavenumber content), we propose a new workflow for this configuration. We assume a dense source sampling and target signals that arrive at near-vertical propagation angles. First, the data are preconditioned by applying synthetic-aperture-source filters in the common receiver domain. Virtual source redatuming is carried out for the multi-component recordings individually, followed by an intermediate deconvolution step. After this specific pre-processing, we show that the downgoing and upgoing constituents of the wavefields can be separated without knowledge of the medium parameters, the source wavelet, or sensor characteristics. As a final step, free-surface multiples can be eliminated by multi-dimensional deconvolution of the upgoing fields with the downgoing fields.

Borehole seismic addresses the need for high-resolution images and elastic parameters of the subsurface. Full-waveform inversion of vertical seismic profile data is a promising technology with the potential to recover quantitative information about elastic properties of the medium. Full-waveform inversion has the capability to process the entire wavefield and to address the wave propagation effects contained in the borehole data: multi-component measurements; anisotropic effects; compressional and shear waves; and transmitted, converted, and reflected waves and multiples. Full-waveform inversion, therefore, has the potential to provide a more accurate result compared with conventional processing methods.

We present a feasibility study with results of the application of high-frequency (up to 60 Hz) anisotropic elastic full-waveform inversion to walkaway vertical seismic profile data from the Arabian Gulf. Full-waveform inversion has reproduced the majority of the wave events and recovered a geologically plausible layered model with physically meaningful values of the medium parameters.

Fluid depletion within a compacting reservoir can lead to significant stress and strain changes and potentially severe geomechanical issues, both inside and outside the reservoir. We extend previous research on time-lapse seismic interpretation by incorporating synthetic near-offset and full-offset common-midpoint reflection data using anisotropic ray tracing to investigate uncertainties in time-lapse seismic observations. The time-lapse seismic simulations use dynamic elasticity models built from hydro-geomechanical simulation output and a stress-dependent rock physics model. The reservoir model is a conceptual two-fault graben reservoir, where we allow the fault fluid-flow transmissibility to vary from high to low to simulate non-compartmentalized and compartmentalized reservoirs, respectively. The results indicate that time-lapse seismic amplitude changes and travel-time shifts can be used to qualitatively identify reservoir compartmentalization. Due to the high repeatability and good quality of the time-lapse synthetic dataset, the estimated travel-time shifts and amplitude changes for near-offset data match the true model subsurface changes with minimal errors. A 1D velocity–strain relation was used to estimate the vertical velocity change for the reservoir bottom interface by applying zero-offset time shifts from both the near-offset and full-offset measurements. For near-offset data, the estimated P-wave velocity changes were within 10% of the true value. However, for full-offset data, time-lapse attributes are quantitatively reliable using standard time-lapse seismic methods only when an updated velocity model is used rather than the baseline model.
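A widely used form of the 1D velocity–strain relation is the R-factor parametrization of Hatchell and Bourne, in which the fractional zero-offset time shift is dt/t = (1 + R) * e_zz and the fractional velocity change is dv/v = -R * e_zz. Whether this exact form is the relation used above is an assumption, and the R value below is hypothetical:

```python
def velocity_change_from_time_shift(dt_over_t, R):
    """Fractional vertical velocity change and vertical strain from a
    zero-offset fractional time shift, using the Hatchell-Bourne-type
    relations dt/t = (1 + R) * e_zz and dv/v = -R * e_zz.
    R is a hypothetical dilation factor, not taken from the paper."""
    e_zz = dt_over_t / (1.0 + R)     # vertical strain
    dv_over_v = -R * e_zz            # fractional velocity change
    return dv_over_v, e_zz

# A 0.6% time shift with a hypothetical R = 5 (typical overburden values
# quoted in the literature are of order 1-5)
dv_over_v, e_zz = velocity_change_from_time_shift(dt_over_t=0.006, R=5.0)
```

The split makes explicit that a measured time shift mixes a path-length (strain) contribution and a velocity contribution, which is why R must be assumed or calibrated.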

A fluid-saturated flat channel between solids, such as a fracture, is known to support guided waves—sometimes called Krauklis waves. At low frequencies, Krauklis waves can have very low velocity and large attenuation and are very dispersive. Because they propagate primarily within the fluid channel formed by a fracture, Krauklis waves can potentially be used for geological fracture characterization in the field. Using an analogue fracture consisting of a pair of flat slender plates with a mediating fluid layer—a trilayer model—we conducted laboratory measurements of the velocity and attenuation of Krauklis waves. Unlike previous experiments using ultrasonic waves, these experiments used frequencies well below 1 kHz, resulting in extremely low velocity and large attenuation of the waves. The mechanical compliance of the fracture was varied by modifying the stiffness of the fluid seal of the physical fracture model, and proppant (fracture-filling high-permeability sand) was also introduced into the fracture to examine its impact on wave propagation. A theoretical frequency equation for the trilayer model was derived using the poroelastic linear-slip interface model, and its solutions were compared to the experimental results.

This paper presents the first controlled-source electromagnetic survey carried out in the German North Sea with a recently developed seafloor-towed electrical dipole–dipole system, i.e., HYDRA II. Controlled-source electromagnetic data are measured, processed, and inverted in the time domain to estimate an electrical resistivity model of the sub-seafloor. The controlled-source electromagnetic survey targeted a shallow, phase-reversed, seismic reflector, which potentially indicates free gas. To compare the resistivity model to reflection seismic data and draw a combined interpretation, we apply a trans-dimensional Bayesian inversion that estimates model parameters and uncertainties, and samples probabilistically over the number of layers of the resistivity model. The controlled-source electromagnetic data errors show time-varying correlations, and we therefore apply a non-Toeplitz data covariance matrix in the inversion that is estimated from residual analysis. The geological interpretation drawn from controlled-source electromagnetic inversion results and borehole and reflection seismic data yields resistivities of ∼1 Ωm at the seafloor, which are typical for fine-grained marine deposits, whereas resistivities below ∼20 mbsf increase to 2–4 Ωm and can be related to a transition from fine-grained (Holocene age) to unsorted, coarse-grained, and compacted glacial sediments (Pleistocene age). Interface depths from controlled-source electromagnetic inversion generally match the seismic reflector related to the contrast between the different depositional environments. Resistivities decrease again at greater depths to ∼1 Ωm with a minimum resistivity at ∼300 mbsf where a seismic reflector (that marks a major flooding surface of late Miocene age) correlates with an increased gamma-ray count, indicating an increased amount of fine-grained sediments.
We suggest that the grain size may have a major impact on the electrical resistivity of the sediment with lower resistivities for fine-grained sediments. Concerning the phase-reversed seismic reflector that was targeted by the survey, controlled-source electromagnetic inversion results yield no indication for free gas below it as resistivities are generally elevated above the reflector. We suggest that the elevated resistivities are caused by an overall decrease in porosity in the glacial sediments and that the seismic reflector could be caused by an impedance contrast at a thin low-velocity layer. Controlled-source electromagnetic interface depths near the reflector are quite uncertain and variable. We conclude that the seismic interface cannot be resolved with the controlled-source electromagnetic data, but the thickness of the corresponding resistive layer follows the trend of the reflector that is inclined towards the west.

The accurate estimation of sub-seafloor resistivity features from marine controlled source electromagnetic data using inverse modelling is hindered due to the limitations of the inversion routines. The most commonly used one-dimensional inversion techniques for resolving subsurface resistivity structures are gradient-based methods, namely Occam and Marquardt. The first approach relies on the smoothness of the model and is recommended when there are no sharp resistivity boundaries. The Marquardt routine is relevant for many electromagnetic applications with sharp resistivity contrasts but subject to the appropriate choice of a starting model. In this paper, we explore the ability of different 1D inversion schemes to derive sub-seafloor resistivity structures from time domain marine controlled source electromagnetic data measured along an 8-km-long profile in the German North Sea. Seismic reflection data reveal a dipping shallow amplitude anomaly that was the target of the controleld source electromagnetic survey. We tested four inversion schemes to find suitable starting models for the final Marquardt inversion. In this respect, as a first scenario, Occam inversion results are considered a starting model for the subsequent Marquardt inversion (Occam–Marquardt). As a second scenario, we employ a global method called Differential Evolution Adaptive Metropolis and sequentially incorporate it with Marquardt inversion. The third approach corresponds to Marquardt inversion introducing lateral constraints. Finally, we include the lateral constraints in Differential Evolution Adaptive Metropolis optimization, and the results are sequentially utilized by Marquardt inversion. Occam–Marquardt may provide accurate estimation of the subsurface features, but it is dependent on the appropriate conversion of different multi-layered Occam model to an acceptable starting model for Marquardt inversion, which is not straightforward. 
Employing parameter spaces, the Differential Evolution Adaptive Metropolis approach can be pertinent for determining Marquardt *a priori* information; nevertheless, the uncertainties in the Differential Evolution Adaptive Metropolis optimization will introduce some inaccuracies into the Marquardt inversion results. Laterally constrained Marquardt may be promising for resolving sub-seafloor features, but it is not stable if there are significant lateral changes of the sub-seafloor structure, owing to the dependence of the method on the starting model. Including the lateral constraints in the Differential Evolution Adaptive Metropolis approach allows faster convergence of the routine with consistent results, furnishing a more accurate estimation of *a priori* models for the subsequent Marquardt inversion.
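The damping that distinguishes the Marquardt scheme from a plain Gauss–Newton step can be illustrated with a toy linear problem. This is a minimal sketch, not the authors' implementation: the matrix `G`, the damping value `lam`, and the fixed iteration count are all illustrative assumptions.

```python
import numpy as np

def marquardt_step(J, r, m, lam):
    """One damped least-squares (Marquardt) update:
    solve (J^T J + lam I) dm = J^T r and take m + dm."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return m + np.linalg.solve(A, J.T @ r)

# Toy linear problem: data d = G @ m_true, started from a zero model.
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
m_true = np.array([2.0, -1.0])
d = G @ m_true
m = np.zeros(2)
for _ in range(20):
    r = d - G @ m                      # residual at the current model
    m = marquardt_step(G, r, m, lam=0.1)
```

Larger `lam` shrinks each step and stabilizes the iteration; as `lam` goes to zero, the step approaches an undamped least-squares solution, which is why a good starting model matters so much for the real (non-linear) problem.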

This study presents a fast imaging technique for the interpretation of very low-frequency data. First, an analytical expression is derived to compute the vertical component of the magnetic field at any point on the Earth's surface for a given current density distribution in a rectangular block in the subsurface. Within each block, the current density is assumed to decrease exponentially with depth, in accordance with the skin-depth rule. Subsequently, the vertical component of the magnetic field due to the entire subsurface is computed as the sum of the contributions from the individual blocks. Since the vertical component of the magnetic field is proportional to the real part of the very low-frequency anomaly, an inversion program was developed for imaging subsurface conductors from the real very low-frequency anomaly in terms of the apparent current density distribution in the subsurface. Imaging results from the presented formulation were compared with other imaging techniques in terms of apparent current density and resistivity distribution using a standard numerical forward modelling and inversion technique. The efficacy of the developed approach is demonstrated for the interpretation of synthetic and field very low-frequency data. The presented imaging technique improves on filtering approaches in depicting subsurface conductors. Further, the results obtained using the presented approach are closer to those of rigorous resistivity inversion. Since the presented approach uses only the real anomaly, which is not sensitive to very small isolated near-surface conducting features, it depicts the prominent conducting features in the subsurface.
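The skin-depth-controlled decay assumed for the block current densities can be sketched as follows. This is an illustrative sketch only, assuming the standard approximation delta ≈ 503·sqrt(rho/f); the resistivity, frequency, and block depths are made-up values, not data from the study.

```python
import numpy as np

def skin_depth(rho, f):
    """Skin depth in metres for resistivity rho (ohm-m) and frequency f (Hz),
    using the standard approximation delta ~ 503 * sqrt(rho / f)."""
    return 503.3 * np.sqrt(rho / f)

def mean_block_current(j0, z_top, z_bot, rho, f, n=200):
    """Mean current density in a block between depths z_top and z_bot (m),
    assuming exponential decay j(z) = j0 * exp(-z / delta)."""
    z = np.linspace(z_top, z_bot, n)
    return np.mean(j0 * np.exp(-z / skin_depth(rho, f)))

delta = skin_depth(100.0, 20e3)                      # 100 ohm-m ground at 20 kHz
shallow = mean_block_current(1.0, 0.0, 20.0, 100.0, 20e3)
deep = mean_block_current(1.0, 20.0, 40.0, 100.0, 20e3)
```

Because the decay is fixed by the skin depth, a deeper block of equal thickness carries a smaller mean current density, which is what lets the surface magnetic field be written as a depth-weighted sum over blocks.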

We present a structural smoothing regularization scheme in the context of inversion of marine controlled-source electromagnetic data. The regularizing hypothesis is that the electrical parameters have a structure similar to that of the elastic parameters observed from seismic data. The regularization is split into three steps. First, we ensure that our inversion grid conforms with the geometry derived from seismic. Second, we use a seismic stratigraphic attribute to define a spatially varying regularization strength. Third, we use an indexing strategy on the inversion grid to define smoothing along the seismic geometry. Enforcing such regularization in the inversion encourages a result that is more intuitive for the interpreter, although the interpreter should also be aware of the bias introduced by using seismic data for regularization. We illustrate the method using one synthetic example and one field data example. The results show how the regularization works and that it clearly enforces the structure derived from seismic data. From the field data example we find that the inversion result improves when the structural smoothing regularization is employed. Including the broadside data improves the inversion results even further, owing to a better balance between the sensitivities for the horizontal and vertical resistivities.
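The combination of the indexing strategy (step three) with the attribute-derived strength (step two) can be sketched as a weighted roughness penalty. This is a minimal illustration under simplifying assumptions, not the authors' implementation: `neighbours[i]` is assumed to index the cell paired with cell `i` along the same seismic horizon, and `weights` stands in for the stratigraphic attribute.

```python
import numpy as np

def roughness_penalty(m, neighbours, weights):
    """Smoothing penalty along structure: neighbours[i] indexes the cell
    lying along the same seismic horizon as cell i, and weights[i]
    (derived from a stratigraphic attribute) sets the local strength."""
    diffs = m - m[neighbours]
    return np.sum(weights * diffs**2)

m = np.array([1.0, 1.1, 3.0, 3.2])       # model values on four cells
neighbours = np.array([1, 0, 3, 2])      # cells paired along a horizon
weights = np.ones(4)                     # uniform attribute for the demo
p = roughness_penalty(m, neighbours, weights)
```

Cells paired across a horizon are penalized for differing, while nothing couples cells in different horizons; lowering a weight (a weak or unreliable attribute) locally relaxes the smoothing.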

Using a subset of the SEG Advanced Modeling Program Phase I controlled-source electromagnetic data, we apply our standard controlled-source electromagnetic interpretation workflows to delineate a simulated hydrocarbon reservoir. Experience gained from characterizing such a complicated model offers us an opportunity to refine our workflows to achieve better interpretation quality. The exercise proceeded in a blind-test style, where the interpreting geophysicists did not know the true resistivity model until the end of the project. Rather, the interpreters were provided a traditional controlled-source electromagnetic data package, including electric field measurements, interpreted seismic horizons, and well log data. Based on petrophysical analysis, a background resistivity model was established first. Then, the interpreters started with feasibility studies to establish the recoverability of the prospect and carefully stepped through 1D, 2.5D, and 3D inversions with seismic and well log data integrated at each stage. A high-resistivity zone is identified with 1D analysis and further characterized with 2.5D inversions. Its lateral distribution is confirmed with a 3D anisotropic inversion. The importance of integrating all available geophysical and petrophysical data to derive a more accurate interpretation is demonstrated.

Marine controlled source electromagnetic methods are used to derive the electrical properties of a wide range of sub-seafloor targets, including gas hydrate reservoirs. In most marine controlled source electromagnetic surveys, a deep-tow transmitter towing a long horizontal electric dipole above the seafloor is used, capable of transmitting dipole moments of up to several thousand ampere-metres. The newly developed deployed transmitter uses two orthogonal horizontal electric dipoles and can land on the seafloor. It can transmit higher-frequency electromagnetic signals, provide accurate transmission orientation, and achieve higher signal stacking, which compensates for the shorter source dipole length. In this paper, we present the study, key technologies, and implementation details of two new marine controlled source electromagnetic transmitters (the deep-tow transmitter and the deployed transmitter). We also present the results of a marine controlled source electromagnetic experiment conducted from April to May 2014 in the South China Sea using both the deep-tow transmitter and the deployed transmitter, which show that the two types of marine transmitters can be used as effective sources for gas hydrate exploration.

We developed a new marine controlled-source electromagnetic receiver for detecting methane hydrate zones and oil and gas reservoirs below the seafloor, which are not imaged well by seismic reflection surveys. To determine the seafloor structure, the electromagnetic receiver should have low noise, low power consumption, small clock drift, and low operating costs while being highly reliable. Because no suitable receiver was available in our laboratory, we developed a new marine controlled-source electromagnetic receiver with these characteristics; the receiver is equipped with an acoustic telemetry modem and an arm-folding mechanism to facilitate deployment and recovery operations. To demonstrate the applicability of our new receiver, we carried out a field experiment offshore of Guangzhou in the South China Sea, where methane hydrates have been discovered. We successfully obtained controlled-source electromagnetic data along a profile about 13 km long. All six new receivers were recovered, and high-quality electromagnetic data were obtained. Relatively high apparent resistivity values were detected. The results of the offshore field experiment support the claim that the electromagnetic data obtained using the new receiver are of sufficient quality for the survey target.

We present a numerical study for 3D time-lapse electromagnetic monitoring of a fictitious CO_{2} sequestration using the geometry of a real geological site and a suite of suitable electromagnetic methods with different source/receiver configurations and different sensitivity patterns. All available geological information is processed and directly implemented into the computational domain, which is discretized by unstructured tetrahedral grids. We thus demonstrate the performance capability of our numerical simulation techniques.

The scenario considers CO_{2} injection at approximately 1100 m depth. The expected changes in conductivity were inferred from preceding laboratory measurements. The injection creates a resistive anomaly within the conductive brines of the undisturbed reservoir horizon. The resistive nature of the anomaly is enhanced by the CO_{2} dissolution regime that prevails in the high-salinity environment. Due to the physicochemical properties of CO_{2}, the affected portion of the subsurface is laterally widespread but very thin.

We combine controlled-source electromagnetics, borehole transient electromagnetics, and the direct-current resistivity method to perform a virtual experiment with the aim of scrutinizing a set of source/receiver configurations with respect to coverage, resolution, and detectability of the anomalous CO_{2} plume prior to the field survey. Our simulation studies are carried out using the 3D codes developed in our working group. They are all based on linear and higher order Lagrange and Nédélec finite-element formulations on unstructured grids, providing the necessary flexibility with respect to the complex real-world geometry. We provide different strategies for addressing the accuracy of numerical simulations in the case of arbitrary structures.

The presented computations demonstrate the expected great advantage of positioning transmitters or receivers close to the target. For direct-current geoelectrics, a 50% change in electric potential may be detected even at the Earth's surface. Monitoring with inductive methods is also promising. For a well-positioned surface transmitter, a difference of more than 10% in the vertical electric field is predicted for a receiver located 200 m above the target. Our borehole transient electromagnetics results demonstrate that traditional transient electromagnetics with a vertical magnetic dipole source is not well suited for monitoring a thin horizontal resistive target. This is due to the mainly horizontal current system induced by a vertical magnetic dipole.

Electromagnetic methods are routinely applied to image the subsurface from shallow to regional structures. Individual electromagnetic methods differ in their sensitivities towards resistive and conductive structures and in their exploration depths. If a good balance between the different electromagnetic datasets can be found, joint 3D inversion of multiple electromagnetic datasets can resolve subsurface structures significantly better than the individual inversions. We present a weighting algorithm to combine magnetotelluric, controlled source electromagnetic, and geoelectric data. Magnetotelluric data are generally more sensitive to regional conductive structures, whereas controlled source electromagnetic and geoelectric data are better suited to recover shallower and more resistive structures. Our new scheme is based on weighting individual components of the total data gradient after each model update. Norms of individual data residuals are used to assess how much of the total data gradient must be assigned to each method to achieve a balanced contribution of all datasets to the joint inverse model. Synthetic inversion tests demonstrate the advantages of joint inversion in general and also the influence of the weighting. In our tests, the controlled source electromagnetic data gradients are larger than those of the magnetotelluric and geoelectric datasets. Consequently, direct joint inversion of controlled source electromagnetic, magnetotelluric, and geoelectric data results in models that are mostly dominated by structures required by the controlled source electromagnetic data. Applying the new adaptive weighting scheme results in an inversion model that fits the data better and more closely resembles the original model. We used the modular system electromagnetic as a framework to implement the new joint inversion and briefly describe the new modules for forward modelling and their interfaces to the modular system electromagnetic package.
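One way such residual-norm weighting could look in code is sketched below. This is a deliberately minimal illustration, not the authors' algorithm: the rule of making each method's weight proportional to its residual norm, and all gradient values, are assumptions for demonstration only.

```python
import numpy as np

def weighted_joint_gradient(gradients, residual_norms):
    """Combine per-method data gradients so that no single dataset
    dominates the update: each method's share of the total gradient is
    proportional to its current residual norm (illustrative rule)."""
    w = np.array(residual_norms, dtype=float)
    w /= w.sum()                              # normalize weights to sum to 1
    return sum(wi * g for wi, g in zip(w, gradients))

g_csem = np.array([10.0, -8.0])   # CSEM gradient: large amplitude
g_mt   = np.array([0.5,  0.4])    # MT gradient: much smaller
g_dc   = np.array([0.2, -0.1])    # geoelectric gradient
g = weighted_joint_gradient([g_csem, g_mt, g_dc], [1.0, 2.0, 1.5])
```

Without the weighting, the combined gradient would be dominated by the large CSEM components; scaling by residual norms lets a method whose data are still poorly fit (here MT, with the largest residual) pull harder on the model update.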

To advance and optimize secondary and tertiary oil recovery techniques, it is essential to know the areal propagation and distribution of the injected fluids in the subsurface. We investigate the applicability of controlled-source electromagnetic methods to monitor fluid movements in a German oilfield (Bockstedt, onshore Northwest Germany) as injected brines (highly saline formation water) have much lower electrical resistivity than the oil within the reservoir. The main focus of this study is on controlled-source electromagnetic simulations to test the sensitivity of various source–receiver configurations. The background model for the simulations is based on two-dimensional inversion of magnetotelluric data gathered across the oil field and calibrated with resistivity logs. Three-dimensional modelling results suggest that controlled-source electromagnetic methods are sensitive to resistivity changes at reservoir depths, but the effect is difficult to resolve with surface measurements only. Resolution increases significantly if sensors or transmitters can be placed in observation wells closer to the reservoir. In particular, observation of the vertical electric field component in shallow boreholes and/or use of source configurations consisting of combinations of vertical and horizontal dipoles are promising. Preliminary results from a borehole-to-surface controlled-source electromagnetic field survey carried out in spring 2014 are in good agreement with the modelling studies.

Steel well casings in or near a hydrocarbon reservoir can be used as source electrodes in time-lapse monitoring using grounded line electromagnetic methods. A requisite component of carrying out such monitoring is the capability to numerically model the electromagnetic response of a set of source electrodes of finite length. We present a modelling algorithm using the finite-element method for calculating the electromagnetic response of a three-dimensional conductivity model excited using a vertical steel-cased borehole as a source. The method is based on a combination of the method of moments and the Coulomb-gauged primary–secondary potential formulation. Using the method of moments, we obtain the primary field in a half-space due to an energized vertical steel casing by dividing the casing into a set of segments, each assumed to carry a piecewise constant alternating current density. The primary field is then substituted into the primary–secondary potential finite-element formulation of the three-dimensional problem to obtain the secondary field. To validate the algorithm, we compare our numerical results with: (i) the analytical solution for an infinite length casing in a whole space, excited by a line source, and (ii) a three-layered Earth model without a casing. The agreement between the numerical and analytical solutions demonstrates the effectiveness of our algorithm. As an illustration, we also present the time-lapse electromagnetic response of a synthetic model representing a gas reservoir undergoing water flooding.
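The piecewise-constant segmentation underlying the method-of-moments step can be sketched in a much-simplified DC analogue: divide the casing into segments, assign each a constant current, and sum point-source contributions (with free-surface images) at the observation point. This sketch only illustrates the discretization idea under uniform half-space assumptions; the segment currents, conductivity, and geometry are illustrative, and the real algorithm solves for the segment current densities rather than prescribing them.

```python
import numpy as np

def casing_segments(z_top, z_bot, n):
    """Divide a vertical casing into n segments; return midpoint depths
    and segment lengths (metres)."""
    edges = np.linspace(z_top, z_bot, n + 1)
    return 0.5 * (edges[:-1] + edges[1:]), np.diff(edges)

def primary_potential(x, mids, lengths, currents, sigma):
    """DC potential at surface offset x (m) from the casing in a uniform
    half-space of conductivity sigma (S/m), treating each segment as a
    point source; the free-surface image doubles each contribution."""
    r = np.sqrt(x**2 + mids**2)               # distance to each segment midpoint
    return np.sum(currents * lengths / (2.0 * np.pi * sigma * r))

mids, lengths = casing_segments(0.0, 100.0, 10)
currents = np.full(10, 1.0)                   # piecewise-constant A/m per segment
phi_near = primary_potential(10.0, mids, lengths, currents, 0.01)
phi_far = primary_potential(100.0, mids, lengths, currents, 0.01)
```

Refining the segmentation converges toward the continuous line-source field; in the actual frequency-domain formulation, this primary field is then fed into the primary–secondary potential finite-element system.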

As motivation for considering new electromagnetic techniques for hydraulic fracture monitoring, we develop a simple financial model for the net present value offered by geophysical characterization to reduce the error in stimulated reservoir volume calculations. This model shows that even a 5% improvement in stimulated reservoir volume for a 1 billion barrel (bbl.) field results in over 1 billion U.S. dollars (US$) in net present value over 24 years for US$100/bbl. oil and US$0.5 billion for US$50/bbl. oil. The application of conductivity upscaling, often used in electromagnetic modeling to reduce mesh size and thus simulation runtimes, is shown to be inaccurate for the high electrical contrasts needed to represent steel-cased wells in the earth. Fine-scale finite-difference modeling with 12.22-mm cells to capture the steel casing and fractures shows that the steel casing provides a direct current pathway to a created fracture that significantly enhances the response compared with neglecting the steel casing. We consider conductively enhanced proppant, such as coke-breeze-coated sand, and a highly saline brine solution to produce electrically conductive fractures. For a relatively small frac job at a depth of 3 km, involving 5,000 bbl. of slurry and a source-midpoint-to-receiver separation of 50 m, the models show that the conductively enhanced proppant produces a 15% increase in the electric field strength (in-line with the transmitter) in a 10-Ωm background. In a 100-Ωm background, the response due to the proppant increases to 213%. Replacing the conductive proppant with brine at a concentration of 100,000-ppm NaCl, the field strength is increased by 23% in the 100-Ωm background and by 2.3% in the 10-Ωm background. All but the 100,000-ppm NaCl brine in a 10-Ωm background produce calculated fracture-induced electric field increases significantly above 2%, a value that has been demonstrated to be observable in field measurements.

A multichannel borehole-to-surface controlled-source electromagnetic experiment was carried out at the onshore CO_{2} storage site of Hontomín (Spain). The electromagnetic source consisted of a vertical electric dipole located 1.5 km deep, and the electric field was measured at the surface. The subsurface response has been obtained by calculating the transfer function between the transmitted signal and the electric field at the receiver positions. The dataset has been processed using a fast processing methodology, appropriate for controlled-source electromagnetics (CSEM) data with a large signal-to-noise ratio. The dataset has been analysed in terms of data quality and repeatability, showing low experimental errors and good repeatability. We evaluate whether the induction of current along the casing of the injection well can reproduce the behaviour of the experimental data.
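A transfer function between a transmitted signal and a recorded field is commonly estimated from segment-averaged cross- and auto-spectra. The sketch below shows that standard estimate on synthetic data; it is a generic illustration under simple assumptions (noise-free records, non-overlapping segments), not the fast processing methodology used in the study.

```python
import numpy as np

def transfer_function(src, rec, nseg=4):
    """Estimate T(f) = S_sr(f) / S_ss(f) by averaging cross- and
    auto-spectra over nseg non-overlapping segments (basic Welch-type)."""
    n = len(src) // nseg
    Sss = np.zeros(n // 2 + 1)
    Ssr = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        s = np.fft.rfft(src[k * n:(k + 1) * n])
        r = np.fft.rfft(rec[k * n:(k + 1) * n])
        Sss += (s * np.conj(s)).real      # auto-spectrum of the source
        Ssr += np.conj(s) * r             # cross-spectrum source/receiver
    return Ssr / Sss

# Synthetic check: the "receiver" is the source scaled by 0.5,
# so the transfer function should be 0.5 at the excited frequencies.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)      # 4096 Hz sampling
src = np.sin(2 * np.pi * 8 * t) + 0.3 * np.sin(2 * np.pi * 24 * t)
rec = 0.5 * src
T = transfer_function(src, rec)                       # 8 Hz -> bin 2, 24 Hz -> bin 6
```

With a high signal-to-noise ratio, as reported for this dataset, even a small number of segment averages yields a stable estimate at the transmitted frequencies.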