In seismic waveform inversion, non-linearity and non-uniqueness require appropriate strategies. We formulate four types of L2-normed misfit functionals for Laplace-Fourier domain waveform inversion: i) subtraction of complex-valued observed data from complex-valued predicted data (the ‘conventional phase-amplitude’ residual), ii) a ‘conventional phase-only’ residual in which amplitude variations are normalized, iii) a ‘logarithmic phase-amplitude’ residual and finally iv) a ‘logarithmic phase-only’ residual in which only the imaginary part of the logarithmic residual is used. We evaluate these misfit functionals using a wide-angle field Ocean Bottom Seismograph (OBS) data set with a maximum offset of 55 km. The conventional phase-amplitude approach is restricted in illumination and delineates only shallow velocity structures. In contrast, the other three misfit functionals retrieve detailed velocity structures with clear lithological boundaries down to the deeper part of the model. We also test the performance of additional phase-amplitude inversions starting from the logarithmic phase-only inversion result. The resulting velocity updates are prominent only in the high-wavenumber components, sharpening the lithological boundaries. We argue that the discrepancies in the behaviours of the misfit functionals are primarily caused by the sensitivity of the model gradient to strong amplitude variations in the data. Because the observed data amplitudes are dominated by the near-offset traces, the conventional phase-amplitude inversion primarily updates the shallow structures. In contrast, the other three misfit functionals naturally eliminate the strong dependence on amplitude variation and enhance the depth of illumination. We further suggest that the phase-only inversions are sufficient to obtain robust and reliable velocity structures, and that the amplitude information is of secondary importance in constraining subsurface velocity models.
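As a concrete illustration, the four residual types can be sketched in a few lines of NumPy; the element-wise formulation and variable names below are ours, not taken from the paper:

```python
import numpy as np

def misfit_residuals(d_pred, d_obs):
    """Four L2 residual definitions for complex-valued (Laplace-Fourier
    domain) data, corresponding to misfit types i)-iv) above."""
    # i) conventional phase-amplitude: straight complex difference
    r_conv = d_pred - d_obs
    # ii) conventional phase-only: normalize out the amplitudes first
    r_conv_phase = d_pred / np.abs(d_pred) - d_obs / np.abs(d_obs)
    # iii) logarithmic phase-amplitude: complex logarithm of the ratio
    r_log = np.log(d_pred / d_obs)
    # iv) logarithmic phase-only: keep only the imaginary (phase) part
    r_log_phase = np.imag(np.log(d_pred / d_obs))
    return r_conv, r_conv_phase, r_log, r_log_phase
```

Note that residuals ii)-iv) are insensitive (or, for iii, only logarithmically sensitive) to the amplitude of either trace, which is the property the abstract credits for the improved depth of illumination.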

In this paper, we treat passive surface microseismic monitoring as a predominantly statistical problem of locating sources of weak seismicity recorded in the presence of strongly correlated noise using dense seismic arrays. We introduce two statistically optimal algorithms (an adaptive maximum likelihood algorithm and a statistically optimal phase algorithm) and show that the traditional semblance-based microseismic processing algorithm (Seismic Emission Tomography) is a limiting case of the maximum likelihood algorithm for Gaussian white noise (i.e., noise that is stationary and uncorrelated in time and space). We evaluate location uncertainties of all three microseismic algorithms for different types of noise patterns and signal-to-noise ratios. For Gaussian white noise, the Seismic Emission Tomography algorithm performs well, demonstrating even slightly better location accuracy than the statistically optimal techniques. Actual noise affecting seismic sensors during hydraulic fracturing is non-stationary: it is correlated in time and space, and varies greatly in power and spectral content across the sensors of the array. We use Monte Carlo simulation to show that the location accuracy of statistically optimal algorithms can be 20 to 40 times better than that of the Seismic Emission Tomography algorithm in the presence of man-made surface noise during hydraulic fracturing.
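The semblance stacking at the core of Seismic Emission Tomography can be sketched as follows; the integer-sample alignment and the function signature are illustrative simplifications of our own, not the published implementation:

```python
import numpy as np

def semblance(traces, shifts):
    """Semblance of N traces after aligning each by its integer sample
    shift (the moveout predicted for a trial source location).
    Returns 1.0 for perfectly coherent, identical aligned traces."""
    aligned = np.array([np.roll(tr, -s) for tr, s in zip(traces, shifts)])
    n = aligned.shape[0]
    num = np.sum(np.sum(aligned, axis=0) ** 2)   # energy of the stack
    den = n * np.sum(aligned ** 2)               # total trace energy
    return num / den
```

In a grid search, the trial location whose predicted moveout best aligns the traces maximizes this measure; the statistically optimal algorithms replace the implicit white-noise assumption with an estimated noise covariance.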

Simultaneous estimation of velocity gradients and anisotropic parameters from seismic reflection data is one of the main challenges in migration velocity analysis for transversely isotropic media with a vertical symmetry axis. In migration velocity analysis, we usually construct the objective function using the *l*_{2} norm along with a linear conjugate gradient scheme to solve the inversion problem. For seismic data, however, this inversion scheme is not stable and may not converge in finite time. In order to ensure the uniform convergence of the parameter inversion and improve the efficiency of migration velocity analysis, this paper develops a double parameterized regularization model and presents the corresponding algorithms. The model is based on the combination of the *l*_{2} norm and the non-smooth *l*_{1} norm. To solve such an inversion problem, a quasi-Newton method is utilized to keep the iterative process stable, which ensures the positive definiteness of the Hessian matrix. Numerical simulation indicates that this method converges quickly to the true model and simultaneously generates inversion results of higher accuracy. Our proposed method is therefore promising for practical migration velocity analysis in anisotropic media.

Window-based Euler deconvolution is commonly applied to magnetic and sometimes to gravity interpretation problems. For the deconvolution to be geologically meaningful, care must be taken to choose parameters properly. The following proposed process design rules are based partly on mathematical analysis and partly on experience.

- The interpretation problem must be expressible in terms of simple structures with integer Structural Index (SI) and appropriate to the expected geology and geophysical source.
- The field must be sampled adequately, with no significant aliasing.
- The grid interval must fit the data and the problem, neither meaninglessly over-gridded nor so sparsely gridded as to misrepresent relevant detail.
- The required gradient data (measured or calculated) must be valid, with sufficiently low noise, adequate representation of necessary wavelengths and no edge-related ringing.
- The deconvolution window size must be at least twice the original data spacing (line spacing or observed grid spacing) and more than half the desired depth of investigation.
- The ubiquitous sprays of spurious solutions must be reduced or eliminated by judicious use of clustering and reliability criteria, or else recognized and ignored during interpretation.
- The process should be carried out using Cartesian coordinates if the software is a Cartesian implementation of the Euler deconvolution algorithm (most accessible implementations are Cartesian).

If these rules are not adhered to, the process is likely to yield grossly misleading results. An example from southern Africa demonstrates the effects of poor parameter choices.
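For reference, the least-squares window solve underlying the process can be sketched as below, assuming gridded total field T with valid gradients on an observation surface at z = 0 (z positive downwards) and a prescribed structural index N; this is the textbook formulation of Euler's homogeneity equation, not any particular software package:

```python
import numpy as np

def euler_window_solve(x, y, T, Tx, Ty, Tz, N):
    """Least-squares Euler deconvolution for one window of gridded data
    observed at z = 0.  Solves Euler's homogeneity equation,
        (x - x0)*Tx + (y - y0)*Ty + (z - z0)*Tz = -N*(T - B),
    rearranged as  x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + N*T,
    for the source position (x0, y0, z0) and base level B."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + N * T      # z = 0, so the z*Tz term vanishes
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B
```

In practice this solve is repeated for every window position, and the resulting cloud of solutions is then thinned by the clustering and reliability criteria discussed above.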

In this paper, we present a case study on the use of the normalized source strength (NSS) for the interpretation of magnetic and gravity gradient tensor data. This application arises in the exploration of nickel, copper and platinum group element (Ni-Cu-PGE) deposits in the McFaulds Lake area, Northern Ontario, Canada. We have used the normalized source strength function derived from recent high-resolution aeromagnetic and gravity gradiometry data to locate geological bodies.

In our algorithm, we use maxima of the normalized source strength to estimate the horizontal location of the causative body. We then estimate the depth to the source and the structural index at that point using the ratio between the normalized source strength and its vertical derivative calculated at two levels: the measurement level and a height *h* above the measurement level. To discriminate more reliable solutions from spurious ones, we reject solutions with unreasonable estimated structural indices.
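A minimal sketch of a two-level depth estimate of this kind, assuming the NSS falls off as a power law c/(z0 + z)^beta with distance from the source (the exponent beta being tied to the structural index); this power-law reading of the scheme and the function names are our own assumptions:

```python
def depth_and_index(nss0, dnss0, nssh, dnssh, h):
    """Depth z0 (below the measurement level) and decay exponent beta
    from the ratio NSS / (dNSS/dz) evaluated at two levels a distance h
    apart.  Assumes NSS(z) = c/(z0 + z)**beta, for which
        -NSS / (dNSS/dz) = (z0 + z)/beta
    so two levels give two linear equations in z0 and 1/beta."""
    r0 = -nss0 / dnss0      # = z0 / beta        (measurement level)
    rh = -nssh / dnssh      # = (z0 + h) / beta  (height h above it)
    beta = h / (rh - r0)
    z0 = beta * r0
    return z0, beta
```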

This method uses an upward continuation filter, which reduces the effect of high-frequency noise. In the magnetic case, the advantage is that the normalized magnetic source strength is, in general, relatively insensitive to magnetization direction and thus provides more reliable information than standard techniques when geologic bodies carry remanent magnetization. For dipping gravity sources, the calculated normalized source strength yields a reliable estimate of the source location by peaking directly above the top surface.

Application of the method to the aeromagnetic and gravity gradient tensor data sets from the McFaulds Lake area indicates that most of the gravity and magnetic sources are located just beneath an overburden averaging 20 m in thickness, and that the delineated magnetic and gravity sources, which can probably be approximated by geological contacts and thin dikes, extend up to the overburden.

Pre-stack seismic data are indicative of subsurface elastic properties through their amplitude-versus-offset characteristics and can be used to detect elastic rock property changes caused by injection. We perform time-lapse pre-stack 3-D seismic data analysis for monitoring sequestration at Cranfield. The time-lapse amplitude differences of the Cranfield datasets are found to be entangled with time shifts. To disentangle these two effects, we apply a local-correlation-based warping method to register the time-lapse pre-stack datasets, which effectively separates the time shifts from the time-lapse seismic amplitude differences without changing the original amplitudes. We demonstrate the effectiveness of our registration method by evaluating the inverted elastic properties. These inverted time-lapse elastic properties can be reliably used for monitoring plumes.

We develop and apply an imaging procedure for simultaneous location and characterization of seismic source properties called Moment Tensor Migration Imaging. The procedure constructs images for moment tensor components using a weighted diffraction stack migration, and combines ray-theoretical Green's functions with a reverse time moment tensor imaging methodology. By applying an approximation we term the ‘ray-angles only approximation’, we form an expression for Moment Tensor Migration Imaging where the migration weights depend only on the take-off and arrival angles for rays leaving receiver positions and incident upon the image points. Moment Tensor Migration Imaging retains the benefits of diffraction stack procedures for source location and characterization, namely speed, flexibility, and the potential for incorporating non-linear stacking procedures, whilst also providing the benefits of moment tensor imaging, such as the inclusion of multiple phase and multiple component data, the collapsing of the source radiation pattern, and estimation of the moment tensor.

We examine variations of the imaging procedure through a synthetic test. We show that although the assumptions required for the imaging and ray-angles only approximation may not be strictly valid for realistic survey geometries, a simple weight adjustment can be used to obtain more accurate and stable results in these situations. In our synthetic example we find that the use of a P-wave only migration without this reweighting structure produces poor results, whereby the resulting images show activity on incorrect moment tensor components. However, many of these effects are mitigated by use of the reweighting scheme and the results are further improved through the introduction of non-linear stacking operators such as semblance weighted stacks. The highest quality moment tensor images (for the synthetic test examined here) are obtained through the use of both P-wave and S-wave wave fields. This highlights the importance of multicomponent data and multiphase modelling when characterizing seismic sources. We also find that the imaged moment tensor components vary proportionately when the input velocities are perturbed by a scale factor. This suggests that, for the geometry investigated here, derived source properties such as fault-plane solutions and shear-tensile components will not be influenced by bulk changes in seismic velocities. Finally, we show the application to a real microseismic event observed using a surface array during hydraulic fracturing. We find that the procedure collapses the seismic radiation pattern into an anomaly with a maximum at the hypocentre and our derived mechanism is consistent with the observed radiation pattern from the source.

Quantitative interpretation of time-lapse seismic data requires knowledge of the relationship between elastic wave velocities and fluid saturation. This relationship is not unique but depends on the spatial distribution of the fluid in the pore-space of the rock. In turn, the fluid distribution depends on the injection rate. To study this dependency, forced imbibition experiments with variable injection rates have been performed on an air-dry limestone sample. Water was injected into a cylindrical sample and was monitored by X-Ray Computed Tomography and ultrasonic time-of-flight measurements across the sample. The measurements show that the P-wave velocity decreases well before the saturation front approaches the ultrasonic raypath. This decrease is followed by an increase as the saturation front crosses the raypath. The observed patterns of the acoustic response and water saturation as functions of the injection rate are consistent with previous observations on sandstone. The results confirm that the injection rate has significant influence on fluid distribution and the corresponding acoustic response. The complexity of the acoustic response, which is not monotonic with changes in saturation and which at the same saturation varies between hydrostatic conditions and states of dynamic fluid flow, may have implications for the interpretation of time-lapse seismic responses.

Updating of reservoir models by history matching of 4D seismic data along with production data gives us a better understanding of changes to the reservoir, reduces risk in forecasting and leads to better management decisions. This process of seismic history matching requires an accurate representation of predicted and observed data so that they can be compared quantitatively when using automated inversion. However, observed seismic data are often obtained only as a relative measure of the reservoir state or its change. The data, usually attribute maps, need to be calibrated before they can be compared to predictions. In this paper we describe an alternative approach in which we normalize the data by scaling them to the model data in regions where predictions are good. To remove measurements of high uncertainty and make the normalization more effective, we use a measure of repeatability of the monitor surveys to filter the observed time-lapse data.

We apply this approach to the Nelson field. We normalize the 4D signature by deriving a least-squares regression equation between the observed and synthetic data, which consist of attributes representing measured acoustic impedances and predictions from the model. Two regression equations are derived as part of the analysis: for the first, the whole 4D signature map of the reservoir is used, while in the second, 4D seismic data are taken from the vicinity of wells with a good production match. The repeatability of the time-lapse seismic data is assessed using the normalized root mean square of measurements outside of the reservoir. Where the normalized root mean square is high, observations and predictions are ignored. Net-to-gross and permeability are modified to improve the match.

The best results are obtained by using the normalized-root-mean-square-filtered maps of the 4D signature, which better constrain the normalization. The misfit of the first six years of history data is reduced by 55 per cent, while that of the forecast of the following three years is reduced by 29 per cent. The well-based normalization uses fewer data when repeatability is used as a filter, and the result is poorer. The value of the seismic data is demonstrated by matching to production data alone, where the history and forecast misfit reductions are 45 per cent and 20 per cent respectively while the seismic misfit increases by 5 per cent; in the best case using seismic data, the seismic misfit instead drops by 6 per cent. We conclude that normalization with repeatability-based filtering is a useful approach in the absence of full calibration and improves the reliability of seismic data.
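The repeatability filter and the regression-based normalization can be sketched as follows; nrms is the standard normalized-root-mean-square repeatability measure, while normalize_observed is a simplified, sample-based stand-in for the map-based workflow described above:

```python
import numpy as np

def nrms(a, b):
    """Normalized RMS difference (in per cent), the standard 4D
    repeatability measure, here computed on matching data windows."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

def normalize_observed(obs, syn, mask):
    """Scale observed 4D attributes to the synthetic (model) attributes
    by least-squares regression over the trusted samples in `mask`
    (e.g. where NRMS repeatability is acceptable)."""
    A = np.column_stack([obs[mask], np.ones(mask.sum())])
    (scale, shift), *_ = np.linalg.lstsq(A, syn[mask], rcond=None)
    return scale * obs + shift
```

Identical traces give an NRMS of 0 per cent and anti-correlated traces 200 per cent; thresholding this value outside the reservoir is what removes the unreliable observations before the regression.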

Wave field reconstruction – the estimation of a three-dimensional (3D) wave field representing upgoing, downgoing or the combined total pressure at an arbitrary point within a marine streamer array – is enabled by simultaneous measurements of the crossline and vertical components of particle acceleration in addition to pressure in a multicomponent marine streamer. We examine a repeated sail line of North Sea data acquired by a prototype multicomponent towed-streamer array for both wave field reconstruction fidelity (or accuracy) and reconstruction repeatability. Data from six cables, finely sampled in-line but spaced at 75 m crossline, are reconstructed and placed on a rectangular data grid uniformly spaced at 6.25 m in-line and crossline. Benchmarks are generated using recorded pressure data and compared with wave fields reconstructed from pressure alone, and from combinations of pressure, crossline acceleration and vertical acceleration. We find that reconstruction using pressure and both crossline and vertical acceleration has excellent fidelity, recapturing highly aliased diffractions that are lost by interpolation of pressure-only data. We model wave field reconstruction error as a linear function of distance from the nearest physical sensor and find, for this data set with some mismatched shot positions, that the reconstructed wave field error sensitivity to sensor mispositioning is one-third that of the recorded wave field sensitivity. Multicomponent reconstruction is also more repeatable, outperforming single-component reconstruction in which wave field mismatch correlates with geometry mismatch. We find that adequate repeatability may mask poor reconstruction fidelity and that aliased reconstructions will repeat if the survey geometry repeats. 
Although the multicomponent 3D data have only 500 m in-line aperture, limiting the attenuation of non-repeating multiples, the level of repeatability achieved is extremely encouraging compared to full-aperture, pressure-only, time-lapse data sets at an equivalent stage of processing.

Wave-equation based methods, such as the estimation of primaries by sparse inversion, have been successful in mitigating the adverse effects of surface-related multiples on seismic imaging and migration-velocity analysis. However, the reliance of these methods on multidimensional convolutions with fully sampled data exposes the ‘curse of dimensionality’, which leads to disproportionate growth in computational and storage demands when moving to realistic 3D field data. To remove this fundamental impediment, we propose a dimensionality-reduction technique in which the ‘data matrix’ is approximated adaptively by a randomized low-rank factorization. Compared to conventional methods, which for each iteration require a pass through all the data, possibly with on-the-fly interpolation, our randomized approach reduces the total number of passes to between one and three. In addition, the low-rank matrix factorization leads to considerable reductions in the storage and computational costs of the matrix multiplies required by the sparse inversion. Application of the proposed method to two-dimensional synthetic and real data shows that significant improvements in speed and memory use are achievable at a low up-front computational cost for the low-rank factorization.
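A generic randomized low-rank factorization of the kind referred to can be sketched as below; this follows the standard randomized range-finder recipe (sketch, orthogonalize, project) rather than the paper's adaptive rank-selection scheme:

```python
import numpy as np

def randomized_lowrank(A, k, p=5, rng=None):
    """Randomized rank-(k+p) factorization A ≈ L @ R.
    One multiply with A sketches its column space, a second forms the
    small factor; subsequent products with A can then be replaced by
    two skinny multiplies with L and R."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Y = A @ rng.standard_normal((n, k + p))   # sketch the column space
    Q, _ = np.linalg.qr(Y)                    # orthonormal range basis
    L, R = Q, Q.T @ A                         # A ≈ Q (Q^T A)
    return L, R
```

The oversampling parameter p trades a slightly larger factorization for a much more reliable capture of the dominant subspace.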

Seismic tomography is a well-established approach to invert smooth macro-velocity models from kinematic parameters, such as traveltimes and their derivatives, which can be directly estimated from data. Tomographic methods differ more in their data domains than in their inverse-problem solving schemes. Typical examples are stereotomography, which is applied to prestack data, and Normal-Incidence-Point-wave tomography, which is applied to common midpoint stacked data. One of the main challenges within the tomographic approach is the reliable estimation of the kinematic attributes from the data that are used in the inversion process. Estimations in the prestack domain (weak and noisy signals), as well as in the post-stack domain (occurrence of triplications and diffractions leading to numerous conflicting dip situations) may lead to parameter inaccuracies that will adversely impact the resulting velocity models. To overcome the above limitations, a new tomographic procedure applied in the time-migrated domain is proposed. We call this method Image-Incidence-Point-wave tomography. The new scheme can be seen as an alternative to Normal-Incidence-Point-wave tomography. The latter method is based on traveltime attributes associated with normal rays, whereas the Image-Incidence-Point-wave technique is based on the corresponding quantities for the image rays. Compared to Normal-Incidence-Point-wave tomography, the proposed method eases the selection of the tomography attributes, as shown by synthetic and field data examples. Moreover, the method provides a direct way to convert time-migration velocities into depth-migration velocities without the need for any Dix-style inversion.

We reformulate the equation of reverse-time migration so that it can be interpreted as summing data along a series of hyperbola-like curves, each one representing a different type of event such as a reflection or multiple. This is a generalization of the familiar diffraction-stack migration algorithm where the migration image at a point is computed by the sum of trace amplitudes along an appropriate hyperbola-like curve. Instead of summing along the curve associated with the primary reflection, the sum is over all scattering events and so this method is named generalized diffraction-stack migration. This formulation leads to filters that can be applied to the generalized diffraction-stack migration operator to mitigate coherent migration artefacts due to, e.g., crosstalk and aliasing. Results with both synthetic and field data show that generalized diffraction-stack migration images have fewer artefacts than those computed by the standard reverse-time migration algorithm. The main drawback is that generalized diffraction-stack migration is much more memory intensive and I/O limited than the standard reverse-time migration method.
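The familiar diffraction-stack summation that the generalized method builds upon can be sketched for the zero-offset case as follows; the nearest-sample interpolation and the grid choices are illustrative simplifications of our own:

```python
import numpy as np

def diffraction_stack(data, xr, dt, xs_grid, zs_grid, v):
    """Zero-offset diffraction-stack migration: the image at (x, z) is
    the sum of trace amplitudes along the diffraction hyperbola
        t(x, z; xr) = 2 * sqrt((xr - x)**2 + z**2) / v.
    `data` has shape (n_receivers, n_time_samples)."""
    nt = data.shape[1]
    image = np.zeros((len(xs_grid), len(zs_grid)))
    for i, x in enumerate(xs_grid):
        for j, z in enumerate(zs_grid):
            t = 2.0 * np.hypot(xr - x, z) / v        # two-way traveltime
            it = np.rint(t / dt).astype(int)         # nearest time sample
            ok = it < nt
            image[i, j] = data[ok, it[ok]].sum()
    return image
```

The generalized method replaces the single primary-reflection hyperbola here with curves for all scattering events, which is where the opportunity to filter individual event types arises.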

Cost reduction in seismic reconnaissance is an issue in geothermal exploration and can principally be achieved by sparse acquisition. To address the inherent decrease in signal-to-noise ratio, the common-reflection-surface method has been proposed. We reduced the data density of an existing 3D dataset and evaluated the results of common-reflection-surface processing using seismic attributes. The application of the common-reflection-surface method leads in all cases to an improvement of the signal-to-noise ratio. The most distinct improvement is seen in the low-fold regions. The improvement depends strongly on the midpoint aperture, and there is a trade-off between reflector continuity and horizontal resolution. If small-scale targets are to be imaged, a small aperture size is necessary, which may be far below the Fresnel zone for a specific reflector. The substantial reduction of the data density leads in our case to an irrecoverable loss of information.

Scattering theory, a form of perturbation theory, is a framework from within which time-lapse seismic reflection methods can be derived and understood. It leads to expressions relating baseline and monitoring data and Earth properties, focusing on differences between these quantities as it does so. The baseline medium is, in the language of scattering theory, the reference medium and the monitoring medium is the perturbed medium. The general scattering relationship between monitoring data, baseline data, and time-lapse Earth property changes is likely too complex to be tractable. However, there are special cases that can be analysed for physical insight. Two of these cases coincide with recognizable areas of applied reflection seismology: amplitude versus offset modelling/inversion, and imaging. The main result of this paper is a demonstration that time-lapse difference amplitude versus offset modelling, and time-lapse difference data imaging, emerge from a single theoretical framework. The time-lapse amplitude versus offset case is considered first. We constrain the general time-lapse scattering problem to correspond with a single immobile interface that separates a static overburden from a target medium whose properties undergo time-lapse changes. The scattering solutions contain difference-amplitude versus offset expressions that (although presently acoustic) resemble the expressions of Landro. In addition, however, they contain non-linear corrective terms whose importance becomes significant as the contrasts across the interface grow. The difference-amplitude versus offset case is exemplified with two-parameter acoustic (bulk modulus and density) and anacoustic (P-wave velocity and quality factor Q) examples. The time-lapse difference data imaging case is considered next.
Instead of constraining the structure of the Earth volume as in the amplitude versus offset case, we make a small-contrast assumption, namely that the time-lapse variations are small enough that we may disregard contributions beyond first order. An initial analysis, in which the case of a single mobile boundary is examined in 1D, justifies the use of a particular imaging algorithm applied directly to difference-data shot records. This algorithm, a least-squares shot-profile imaging method, is additionally capable of supporting a range of regularization techniques. Synthetic examples verify the applicability of linearized imaging methods to difference-image formation under ideal conditions.

We study the stability of source mechanisms inverted from data acquired at surface and near-surface monitoring arrays. The study is focused on P-wave data acquired on vertical components, as this is the most common type of acquisition. We apply ray modelling to three models: a fully homogeneous isotropic model, a laterally homogeneous isotropic model and a laterally homogeneous anisotropic model, to simulate three commonly used models in inversion. We use the geometries of real arrays, one consisting of surface receivers and one of geophones ‘buried’ near the surface. Stability was tested for two of the most frequently observed source mechanisms, strike-slip and dip-slip, and was evaluated by comparing the parameters of the correct and inverted mechanisms. We assume double-couple source mechanisms and use the inversion allowing non-double-couple components as a quantitative measure of the stability of the inversion. To test the robustness, we inverted synthetic amplitudes computed for a laterally homogeneous isotropic model and contaminated with noise, using a fully homogeneous model in the inversion. Analogously, amplitudes computed in a laterally homogeneous anisotropic model were inverted in all three models. We show that a star-like surface acquisition array provides very stable inversion up to a very high level of noise in the data. Furthermore, we find that strike-slip inversion is more stable than dip-slip inversion for the receiver geometries considered here. We show that noise and an incorrect velocity model may result in narrow bands of source mechanisms in Hudson's plots.

In many land seismic situations, the complex seismic wave propagation effects in the near-surface area, due to its unconsolidated character, deteriorate the image quality. Although several methods have been proposed to address this problem, the negative impact of 3D complex near-surface structures remains largely unsolved. This paper presents a complete 3D data-driven solution for the near-surface problem based on 3D one-way traveltime operators, which extends our previous work that was limited to 2D. Our solution is composed of four steps: 1) seismic wave propagation from the surface to a suitable datum reflector is described by parametrized one-way propagation operators, with all the parameters estimated by a new genetic algorithm, the self-adjustable input genetic algorithm, in an automatic and purely data-driven way; 2) surface-consistent residual static corrections are estimated to accommodate the fast variations in the near-surface area; 3) a replacement velocity model based on the traveltime operators in the good data area (without the near-surface problem) is estimated; 4) data interpolation and surface layer replacement based on the estimated traveltime operators and the replacement velocity model are carried out in an interleaved manner in order to both remove the near-surface imprints in the original data and keep the valuable geological information above the datum. Our method is demonstrated on a subset of a 3D field data set from the Middle East, yielding encouraging results.

We present a two-dimensional (2D) gradient operator that produces more accurate results than well-known traditional operators such as the Ando, Sobel and so-called Isotropic operators. We further extend the derivation to three dimensions (3D), a powerful feature missing in all conventional operators.

We start by constructing a parameterized formula that generically represents all 2D numerical gradient operators. We then solve for the required parameter by equating this numerical gradient with that obtained analytically from a single Fourier harmonic (or, equivalently here, a stationary plane wave). As this parameter is frequency- and direction-dependent (by virtue of the underlying Fourier harmonic), we construct a pragmatic version of it that is independent of these two variables yet capable of significantly reducing the error associated with traditional operators. Extension to 3D is achieved similarly; it requires dealing with two parameters as opposed to only one in the 2D case. Synthetic and real-data results confirm higher accuracy from this operator than from traditional ones.
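The parameterized family of 3x3 operators can be sketched as below; the single parameter a recovers Prewitt (a = 1), Sobel (a = 2) and the so-called Isotropic operator (a = sqrt(2)), while the paper selects the parameter by matching the analytic gradient of a Fourier harmonic. The normalization used here is our choice, fixed so that the response to a linear ramp is exact:

```python
import numpy as np

def gradient_kernel_x(a):
    """Parameterized family of 3x3 x-derivative operators for unit grid
    spacing (cross-correlation convention).  Normalized so the operator
    returns exactly 1 on the ramp f(x, y) = x for any a."""
    k = np.array([[-1.0, 0.0, 1.0],
                  [-a,   0.0, a  ],
                  [-1.0, 0.0, 1.0]])
    return k / (2.0 * (a + 2.0))

def apply_kernel(f, k):
    """Valid-region cross-correlation of image f with a 3x3 kernel k."""
    out = np.zeros((f.shape[0] - 2, f.shape[1] - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * f[i:i + out.shape[0], j:j + out.shape[1]]
    return out
```

The y-derivative operator is the transpose of the x-derivative kernel, and the 3D extension stacks two such parameters across the third axis.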

Common shot ray tracing and finite difference seismic modelling experiments were undertaken to evaluate variations in the seismic response of the Devonian Redwater reef in the Alberta Basin, Canada, after replacement of native pore waters in the upper rim of the reef with CO_{2}. This part of the reef is being evaluated for a CO_{2} storage project. The input geological model was based on well data and the interpretation of depth-converted, reprocessed 2D seismic data in the area. Pre-stack depth migration of the ray-traced and finite-difference synthetic data demonstrates similar seismic attributes for the Mannville, Nisku, Ireton, Cooking Lake and Beaverhill Lake formations and clear terminations of the Upper Leduc and Middle Leduc events at the reef margin. Higher amplitudes at the base of the Upper Leduc member are evident near the reef margin, due to the higher porosity of the foreslope facies in the reef rim compared to the tidal-flat lagoonal facies within the central region of the reef. Time-lapse seismic analysis exhibits an amplitude difference of about 14% for Leduc reflections before and after CO_{2} saturation and a traveltime delay through the reservoir of 1.6 ms. Both the ray tracing and finite difference approaches yielded similar results but, for this particular model, the latter provided more precise imaging of the reef margin. From the numerical study we conclude that time-lapse surface seismic surveys should be effective in monitoring the location of the CO_{2} plume in the Upper Leduc Formation of the Redwater reef, although the differences in the results between the two modelling approaches are of similar order to the effects of the CO_{2} fluid replacement itself.

Recent studies have revealed the great potential of acoustic reflection logging for detecting near-borehole fractures and vugs. The new design of the acoustic reflection imaging tool, with a closest spacing of 10.6 m and a certain degree of phase steering, makes it easier to extract the reflection signals from the borehole mode waves. For field applications of the tool, we developed the corresponding processing software, Acoustic Reflection Imaging. In this paper, we further develop an effective data processing flow by employing multi-scale slowness-time-coherence for reflection wave extraction and incorporating reverse-time migration for imaging complicated subtle structures under strong borehole-environment effects. Applications of the processing flow to synthetic data of acoustic reflection logging, generated by a 2D finite-difference method for a fractured formation model and an interface model with a fluid-filled borehole, to physical modelling data from a laboratory water tank, and to field data from two wells in a western Chinese oil field demonstrate the validity and capability of our multi-scale slowness-time-coherence and reverse-time migration algorithms.

In hydraulic fracturing experiments, perforation shots excite body and tube waves that sample, and thus can be used to characterize, the surrounding medium. While these waves are routinely employed in borehole operations, their resolving power is limited by the experiment geometry, the signal-to-noise ratio, and their frequency content. It is therefore useful to look for additional, complementary signals that could increase this resolving power. Tube-to-body-wave conversions (scattering of tube to compressional or shear waves at borehole discontinuities) are one such signal. These waves are not frequently considered in hydraulic fracture settings, yet they possess geometrical and spectral attributes that greatly complement the resolution afforded by body and tube waves alone. Here, we analyze data from the Jonah gas field (Wyoming, USA) to demonstrate that tube-to-shear-wave conversions can be clearly observed in the context of hydraulic fracturing experiments. These waves are identified primarily on the vertical and radial components of geophones installed in monitoring wells surrounding a treatment well. They exhibit a significantly lower frequency content (10–100 Hz) than the primary compressional waves (100–1000 Hz). Tapping into such lower frequencies could help to better constrain velocity in the formation, thus allowing better estimates of fracture density, porosity and permeability. Moreover, the signals of tube-to-shear-wave conversion observed in this particular study provide independent estimates of the shear wave velocity in the formation and of the tube wave velocity in the treatment well.

A towed streamer electromagnetic system capable of simultaneous seismic and electromagnetic data acquisition has recently been developed and tested in the North Sea. We introduce a 3D inversion methodology for towed streamer electromagnetic data that includes a moving sensitivity domain. Our implementation is based on the 3D integral equation method for computing responses and Fréchet derivatives and uses the re-weighted regularized conjugate gradient method for minimizing the objective functional with focusing regularization. We present two model studies relevant to hydrocarbon exploration in the North Sea. First, we demonstrate the ability of a towed electromagnetic system to detect and characterize the Harding field, a medium-sized North Sea hydrocarbon target. We compare our 3D inversion of towed streamer electromagnetic data with 3D inversion of conventional marine controlled-source electromagnetic data and observe few differences between the recovered models. Second, we demonstrate the ability of a towed streamer electromagnetic system to detect and characterize the Peon discovery, which is representative of an infrastructure-led shallow gas play in the North Sea. We also present a case study for the 3D inversion of towed streamer electromagnetic data from the Troll field in the North Sea and demonstrate our ability to image all the Troll West Oil and Gas Provinces and the Troll East Gas Province. We conclude that 3D inversion of data from the current generation of towed streamer electromagnetic systems can adequately recover hydrocarbon-bearing formations to depths of approximately 2 km. We note that by obviating the need for ocean-bottom receivers, the towed streamer electromagnetic system enables electromagnetic data to be acquired over very large areas in frontier and mature basins at higher acquisition rates and lower cost than conventional marine controlled-source electromagnetic methods.

The marine controlled source electromagnetic (CSEM) technique has been adopted by the hydrocarbon industry to characterize the resistivity of targets identified from seismic data prior to drilling. Over the years, marine controlled source electromagnetic has matured to the point that four-dimensional or time-lapse surveys and monitoring could be applied to hydrocarbon reservoirs in production, or to monitor the sequestration of carbon dioxide. Marine controlled source electromagnetic surveys have also been used to target shallow resistors such as gas hydrates. These novel uses of the technique require very well constrained transmitter and receiver geometry in order to make meaningful and accurate geologic interpretations of the data. Current navigation in marine controlled source electromagnetic surveys utilizes a long-baseline or short-baseline acoustic navigation system to locate the transmitter and seafloor receivers. If these systems fail, then rudimentary navigation is possible by assuming the transmitter follows the ship's track. However, these navigational assumptions are insufficient to capture the detailed orientation and position of the transmitter required for both shallow targets and repeat surveys. In circumstances when acoustic navigation systems fail, we propose the use of an inversion algorithm that solves for transmitter geometry. This algorithm utilizes the transmitter's electromagnetic dipole radiation pattern as recorded by stationary, close-range (<1000 m) receivers in order to model the geometry of the transmitter. We test the code with a synthetic model and validate it with data from a well navigated controlled source electromagnetic survey over the Scarborough gas field in Australia.
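
The idea of recovering transmitter geometry from the recorded radiation pattern can be sketched in a toy form: fix a set of close-range receivers, forward-model a dipole amplitude pattern, and search for the transmitter position and azimuth that reproduce the observed amplitudes. This is not the paper's algorithm; the static-dipole amplitude formula, the grid search, and all coordinates are simplifying assumptions for illustration.

```python
import math

def dipole_amplitude(tx, ty, azimuth, rx, ry):
    """Whole-space static-dipole field magnitude at a receiver.

    |E| ~ sqrt(1 + 3 cos^2(theta)) / r^3, where theta is the angle between
    the dipole axis and the source-receiver direction. A deliberately
    simplified stand-in for the full EM radiation pattern.
    """
    dx, dy = rx - tx, ry - ty
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx) - azimuth
    return math.sqrt(1.0 + 3.0 * math.cos(theta) ** 2) / r ** 3

# Stationary close-range receivers (x, y in metres) -- invented layout.
receivers = [(-400.0, 250.0), (300.0, -350.0), (500.0, 400.0), (-250.0, -450.0)]

# "Observed" amplitudes from a true transmitter at (120, -80), azimuth 30 deg.
truth = (120.0, -80.0, math.radians(30.0))
observed = [dipole_amplitude(truth[0], truth[1], truth[2], rx, ry)
            for rx, ry in receivers]

def misfit(tx, ty, az):
    return sum((dipole_amplitude(tx, ty, az, rx, ry) - d) ** 2
               for (rx, ry), d in zip(receivers, observed))

# Coarse grid search for the transmitter geometry (a stand-in for the
# gradient-based inversion a production code would use).
best = min(((tx, ty, az)
            for tx in range(-200, 201, 20)
            for ty in range(-200, 201, 20)
            for az in range(0, 180, 5)),
           key=lambda g: misfit(g[0], g[1], math.radians(g[2])))
print(best)  # → (120, -80, 30) on this noise-free grid
```

With four receivers and three unknowns the geometry is over-determined, which is why even this crude search recovers the transmitter exactly in the noise-free case.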

In order to couple spatial data from frequency-domain helicopter-borne electromagnetics with electromagnetic measurements from ground geophysics (transient electromagnetics and radiomagnetotellurics), a common 1D weighted joint inversion algorithm for helicopter-borne electromagnetics, transient electromagnetics and radiomagnetotellurics data has been developed. The depth of investigation of helicopter-borne electromagnetics data is rather limited compared to time-domain electromagnetics sounding methods on the ground. In order to improve the accuracy of model parameters at shallow as well as greater depths, the helicopter-borne electromagnetics, transient electromagnetics, and radiomagnetotellurics measurements can be combined by using a joint inversion methodology. The 1D joint inversion algorithm is tested on synthetic helicopter-borne electromagnetics, transient electromagnetics and radiomagnetotellurics data. The proposed concept of the joint inversion takes advantage of each method, thus providing the capability to resolve near-surface (radiomagnetotellurics) and deeper electrical conductivity structures (transient electromagnetics) in combination with valuable spatial information (helicopter-borne electromagnetics). Furthermore, the joint inversion has been applied to field data (helicopter-borne electromagnetics and transient electromagnetics) measured in the Cuxhaven area, Germany.

To avoid degrading the resolution capacity of any one data type, and thus to balance the use of the inherent and ideally complementary information content, a parameter reweighting scheme based on the exploration depth ranges of the individual methods is proposed. A comparison of the conventional joint inversion algorithm, proposed by Jupp and Vozoff, and of the newly developed algorithm is presented. The new algorithm applies different weights to different model parameters. It is inferred from the synthetic and field data examples that the weighted joint inversion is more successful in explaining the subsurface than the classical joint inversion approach. In addition, the data fits obtained with the weighted joint inversion are also improved.
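
The generic machinery behind any such joint inversion — stacking the weighted linearized systems of the individual methods and solving them together — can be sketched as follows. The sensitivities, weights, and two-parameter model are invented for illustration, and the per-parameter depth-range reweighting that distinguishes the paper's scheme is not reproduced here.

```python
def lstsq2(rows, rhs):
    """Least squares for a two-parameter model via the normal equations."""
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    c = sum(r[1] * r[1] for r in rows)
    p = sum(r[0] * y for r, y in zip(rows, rhs))
    q = sum(r[1] * y for r, y in zip(rows, rhs))
    det = a * c - b * b
    return (c * p - b * q) / det, (a * q - b * p) / det

# Invented linearized sensitivities to a (shallow, deep) parameter pair.
G_rmt = [1.0, 0.05]            # RMT-like method: mainly shallow sensitivity
G_tem = [0.05, 1.0]            # TEM-like method: mainly deep sensitivity
m_true = (100.0, 10.0)
d_rmt = sum(g * m for g, m in zip(G_rmt, m_true))
d_tem = sum(g * m for g, m in zip(G_tem, m_true))

# Joint inversion: stack the weighted systems and solve them together.
w_rmt, w_tem = 1.0, 1.0        # per-method weights (placeholders)
rows = [[w_rmt * g for g in G_rmt], [w_tem * g for g in G_tem]]
rhs = [w_rmt * d_rmt, w_tem * d_tem]
m1, m2 = lstsq2(rows, rhs)
print(m1, m2)  # recovers m_true up to rounding
```

Neither method alone constrains both parameters well; stacked together, the complementary sensitivities determine the model uniquely, which is the basic payoff of joint inversion that the weighting scheme then refines.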

The seismic imaging of salt diapirs in the Nordkapp Basin gave rise to considerable problems in defining their shape and volume. Independent information was added by integrating the interpretation with high-resolution gravity and magnetic data. We developed a novel, iterative workflow, separated into sub-categories: sediments, salt structures, basement and Moho. Distinction between the sources of anomalies at different depths was achieved by utilizing the different decay characteristics of gravity, gravity gradiometry and high-resolution magnetic anomalies. The workflow was applied to the southern part of the Nordkapp Basin. It started with the sedimentary model derived from seismics, populated with measured densities and magnetic susceptibilities, and a starting model for the base salt. The residual after the removal of this model was interpreted in terms of a crustal model, including flexural isostatic calculations for the Moho with the sedimentary load. The residual after the removal of the crustal and initial sedimentary models was used to tune the salt model. As these major and minor modelling steps depend on each other, an iterative process was applied to improve the density and magnetic susceptibility model stepwise. The first vertical gradient of gravity and the magnetic field were found to give the most information about the cap rock of the diapirs. The improvement in salt imaging, integrated with results from controlled-source electromagnetic and magneto-telluric modelling, is shown for the salt diapir Uranus, where a well, terminated in the salt, constrains the minimum depth to the base salt.

Electrical conductivity of alluvial sediments depends on litho-textural properties, fluid saturation and porewater conductivity. Therefore, for hydrostratigraphic applications of direct current resistivity methods in porous sedimentary aquifers, it can be useful to characterize the prevailing mechanisms of electrical conduction (electrolytic or shale conduction) according to the litho-textural properties and to the porewater characteristics. An experimental device and a measurement protocol were developed and applied to collect data on eight samples of alluvial sediments from the Po plain (Northern Italy), characterized by different grain-size distributions and fully saturated with porewater of variable conductivity. The bulk electrical conductivities obtained with the laboratory tests were interpreted with a classical two-component model, which requires the identification of the intrinsic conductivity of clay particles and the effective porosity for each sample, and with a three-component model. The latter is based on the two endmember mechanisms, surface and electrolytic conduction, but also takes into account the interaction between dissolved ions in the pores and the fluid-grain interface. The experimental data and their interpretation with the phenomenological models show that the volumetric ratio between coarse and fine grains is a simple but effective parameter to determine the electrical behaviour of clastic hydrofacies at the scale of the representative elementary volume.
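
The essence of a two-component interpretation can be sketched as a straight-line fit of bulk conductivity against porewater conductivity. The linear parallel-conductor form used below (slope 1/F for the electrolytic term, intercept for surface/shale conduction) is a simplified stand-in for the models in the abstract, and the data series is synthetic.

```python
def fit_two_component(sigma_w, sigma_bulk):
    """Least-squares fit of sigma_bulk = sigma_w / F + sigma_s.

    The slope 1/F represents electrolytic conduction (controlled by the
    effective porosity through the formation factor F) and the intercept
    sigma_s lumps the surface/shale conduction.
    """
    n = len(sigma_w)
    mx = sum(sigma_w) / n
    my = sum(sigma_bulk) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(sigma_w, sigma_bulk))
             / sum((x - mx) ** 2 for x in sigma_w))
    return 1.0 / slope, my - slope * mx   # formation factor F, sigma_s

# Synthetic lab series: bulk conductivity (S/m) measured at several
# porewater conductivities, generated with F = 5 and sigma_s = 0.02 S/m.
sigma_w = [0.05, 0.1, 0.5, 1.0, 2.0]
sigma_bulk = [w / 5.0 + 0.02 for w in sigma_w]
F, sigma_s = fit_two_component(sigma_w, sigma_bulk)
print(F, sigma_s)  # recovers F = 5, sigma_s = 0.02 on noise-free data
```

Saturating each sample with porewater of several conductivities, as in the described protocol, is what makes the slope and intercept — and hence the two conduction mechanisms — separately identifiable.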

When anomalous gravity gradient signals provide a large signal-to-noise ratio, airborne and marine surveys can be considered with wide line spacing. In these cases, spatial resolution and sampling requirements become the limiting factors for specifying the line spacing, rather than anomaly detectability. This situation is analysed by generating known signals from a geological model and then sub-sampling them using a simulated airborne gravity gradient survey with a line spacing much wider than the characteristic anomaly size. The data are processed using an equivalent source inversion, which is used subsequently to predict and grid the field in-between the survey lines by means of forward calculations. Spatial and spectral error analysis is used to quantify the accuracy and resolution of the processed data, and the advantages of acquiring multiple gravity gradient components are demonstrated. With measurements of the full tensor along survey lines spaced at 4 × 4 km, it is shown that the vertical gravity gradient can be reconstructed accurately over a bandwidth of 2 km with spatial root-mean-square errors of less than 30%. A real airborne full-tensor gravity gradient survey is presented to confirm the synthetic analysis in a practical situation.
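
A minimal 2D sketch of the equivalent-source step: fit sources at depth to data sampled on widely spaced lines, then forward-calculate to grid the field in-between. The line-mass kernel, the two "true" anomalous bodies, and the source depth are all invented for illustration (one source beneath each line, so the fit is exactly determined).

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel(x, xs, z):
    """2D line-mass vertical-gradient kernel (physical constants dropped)."""
    return z / ((x - xs) ** 2 + z ** 2)

# "True" field from two hypothetical bodies, sampled on lines 4 km apart.
true_field = lambda x: kernel(x, 9.3, 1.5) + 0.5 * kernel(x, 14.1, 2.5)
obs_x = [4.0 * i for i in range(6)]      # survey-line positions (km)
d = [true_field(x) for x in obs_x]

# Equivalent sources: one beneath each line at 2 km depth; fit, then grid.
G = [[kernel(x, xs, 2.0) for xs in obs_x] for x in obs_x]
m = solve(G, d)
predict = lambda x: sum(mi * kernel(x, xs, 2.0) for mi, xs in zip(m, obs_x))

residual = max(abs(predict(x) - di) for x, di in zip(obs_x, d))
print(residual)                           # data on the lines fit exactly
print(predict(10.0), true_field(10.0))    # interpolated value between lines
```

Because the sources sit at depth, the forward calculation interpolates with a physically plausible (band-limited) spectrum rather than a purely geometric one, which is the point of equivalent-source gridding.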

Very early times, in the order of 2–3 μs from the end of the turn-off ramp, are crucial for time-domain electromagnetic systems to obtain a detailed resolution of the near-surface geology in the depth interval 0–20 m. For transient electromagnetic systems working in the off time, an electric current is abruptly turned off in a large transmitter loop, causing a secondary electromagnetic field to be generated by the eddy currents induced in the ground. Often, however, there is still a residual primary field generated by remaining slowly decaying currents in the transmitter loop. This decay disturbs or biases the earth response data at the very early times. These biased data must be culled, or some specific processing must be applied in order to compensate for or remove the residual primary field. As the bias response can be attributed to decaying currents with a time constant controlled by the geometry of the transmitter loop, we denote it the ‘Coil Response’. The modelling of a helicopter-borne time-domain system by an equivalent electronic circuit shows that the time decay of the coil response remains identical regardless of the position of the receiver loop, which is confirmed by field measurements. The modelling also shows that the coil response has a theoretical zero location, and positioning the receiver coil at this zero location eliminates the coil response completely. However, spatial variations of the coil response around the zero location are not insignificant, and even a few centimetres of deformation of the carrier frame will introduce a small coil response. Here we present an approach for subtracting the coil response from the data by measuring it at high altitudes and then incorporating an extra shift factor into the inversion scheme. The scheme is successfully applied to data from the SkyTEM system and enables the use of very early time gates, as early as 2–3 μs from the end of the ramp, or 5–6 μs from the beginning of the ramp.
Applied to a large-scale airborne electromagnetic survey, the coil response compensation provides airborne electromagnetic methods with a hitherto unattained resolution of shallow geological layers in the depth interval 0–20 m. This is demonstrated by comparing results from the airborne electromagnetic survey with more than 100 km of electrical resistivity tomography data measured with 5 m electrode spacing.
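
The subtraction step can be sketched with synthetic decay curves: a coil-response shape measured at high altitude, survey data contaminated by an unknown multiple of that shape, and a least-squares estimate of the shift factor. The gate times, time constant, earth-decay model, and shift value below are all hypothetical, and the real scheme re-estimates the shift inside each inversion iteration rather than in one step.

```python
import math

# Hypothetical gate times (s), starting ~2 us after the end of the ramp.
gates = [2e-6 * 1.3 ** k for k in range(12)]

tau = 4e-6                                  # loop time constant (geometry-set)
coil = lambda t: math.exp(-t / tau)         # shape measured at high altitude
earth = lambda t: 1e-9 * t ** -1.5          # schematic late-time earth decay

shift_true = 0.7                            # unknown survey-altitude scale
data = [earth(t) + shift_true * coil(t) for t in gates]
high_alt = [coil(t) for t in gates]         # at altitude the ground term vanishes

def best_shift(data, predicted, coil_resp):
    """Closed-form least-squares shift factor given the current earth model."""
    num = sum(c * (d - p) for d, p, c in zip(data, predicted, coil_resp))
    return num / sum(c * c for c in coil_resp)

# Pretend the earth model has already converged, then solve for the shift
# and subtract the scaled coil response from every gate.
predicted = [earth(t) for t in gates]
est = best_shift(data, predicted, high_alt)
corrected = [d - est * c for d, c in zip(data, high_alt)]
print(est)  # recovers shift_true ≈ 0.7 on this noise-free example
```

After subtraction the earliest gates carry only the earth response, which is what lets the inversion exploit gates as early as 2–3 μs after the ramp.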