Radio Science

Mitigation of 50–60 Hz power line interference in geophysical data

Abstract

[1] The analysis of ELF/VLF radio data has broad applications to ionospheric and magnetospheric phenomena, lightning activity, long-range communications, and geophysical prospecting. However, ground-based recordings of ELF/VLF data are adversely affected by the presence of electromagnetic fields from 50–60 Hz power lines, whose harmonics can extend to many kilohertz and interfere with the detection of natural and man-made signals. Removal of this interference is complicated by the time-varying fundamental frequency of power lines and by the strongly varying characteristics of different power grids. We discuss two methods for isolating and then subtracting this interference: an adaptive filtering technique and a least squares matrix analysis. Methods for estimating the time-varying fundamental frequency are also discussed. Variants of these techniques are applied first to simulated data and then to real data. It is found that least squares isolation gives superior results, although the adaptive filter is potentially more effective for poorly behaved power line interference with a rapidly changing fundamental frequency, as well as being computationally more efficient.

1. Introduction

[2] Ground-based measurement of extremely low frequency (ELF, defined here as 300–3000 Hz) and very low frequency (VLF, defined here as 3–30 kHz) radio waves has long been a productive technique in ionospheric and magnetospheric physics for remote detection and sensing of a variety of geophysical phenomena and structures (many of which are reviewed by Barr et al. [2000]), such as lightning, sprites, the ionospheric D region, the Van Allen radiation belts, the effect of solar and cosmic activity on the ionosphere and magnetosphere, and electron precipitation from the radiation belts. Furthermore, because of their comparatively deep penetration into seawater and efficient long-range propagation (attenuation rates of typically a few dB/Mm [Davies, 1990, p. 389] following the establishment of waveguide modes in the first ∼500 km), ELF/VLF waves propagating in the so-called Earth-ionosphere waveguide have emerged as an important component of naval communications. ELF/VLF waves are also useful for geophysical prospecting [McNeil and Labson, 1991].

[3] ELF/VLF receivers typically consist of magnetic and/or electric antennas connected to amplification circuitry, with the amplified data then digitized at baseband so that the full broadband spectrum is coherently captured. The so-called Atmospheric Weather Electromagnetic System for Observation Modeling and Education (AWESOME) was recently developed based on existing ELF/VLF receivers that have been operated by Stanford University for many years, and is described more fully by Cohen et al. [2010]. These receivers utilize two orthogonal air-core magnetic loop antennas, oriented with axes parallel to the ground, so that both components of the horizontal magnetic field are recorded. A custom-designed transformer between the wire loops and the transistor amplification circuit provides exceptional sensitivity for a known antenna impedance (in this case, 1 Ω, 1 mH), allowing detection of sinusoidally varying magnetic fields as low as ∼3 fT with 20 ms time resolution, utilizing a loop antenna with an area of 25 m². GPS synchronization of a phase-locked sampling clock limits the timing error to approximately the ∼100 ns inherent in GPS, corresponding to ∼0.36° of phase error at 10 kHz. Sixteen-bit digitization enables detection of signals over a 96 dB dynamic range. Accurate calibration can be achieved via injection of a known reference signal into the input of the receiver. For magnetic loop antennas, a given voltage amplitude directly relates to a magnetic field value via Faraday’s law.

[4] Power transmission lines are the primary method of delivering power from electric generation facilities to cities, buildings, and homes, and can be routed overhead or underground. Underground power lines generally emit weaker electromagnetic fields, since the individual wires can be placed closer together, but are typically used only in densely populated areas due to higher costs of installation and maintenance. The voltage of a power transmission line varies from a few hundred volts to hundreds of kilovolts, depending on the age of the line and the amount of power being carried. The voltages typically alternate at 50 or 60 Hz, but the nonmonochromatic nature of this interference generates harmonics of the fundamental which can extend to many kilohertz.

[5] The natural ELF/VLF radio environment on the Earth is dominated by the presence of so-called radio atmospherics [Chrissan and Fraser-Smith, 1996], or “sferics,” impulsive (∼1 ms) broadband radiation originating from lightning strokes even at global distances from a given receiver. Typical sferic amplitudes are 1–100 pT. Other natural ELF/VLF signals detected on the ground such as chorus [Gołkowski and Inan, 2008], hiss [Hayakawa and Sazhin, 1992], and whistlers [Helliwell, 1965] are often present with substantially smaller amplitudes (<1 pT). The root-mean-square (RMS) average spectral density of natural ELF/VLF noise is typically between 1 and 100 fT/√Hz in this frequency range [Chrissan and Fraser-Smith, 1996]. Signals can also be generated artificially via high frequency (HF, 3–10 MHz) heating of the auroral lower ionosphere, with amplitudes as strong as several picoteslas [Cohen et al., 2008].

[6] The electromagnetic interference from power lines, or “hum,” can often be significantly stronger. The U.K. National Grid EMF Information site (http://www.emfs.info) has made available quantitative measurements of both electric and magnetic fields for a wide variety of power lines at distances up to 100 m. For instance, even a low-power 11 kV line (generally used for <500 A or <5.5 MW) will typically generate “hum” magnetic fields at 100 m of between 2 and 90 nT, depending on the structure and loading of the line, i.e., orders of magnitude stronger than typical natural ELF/VLF signals. Being dominated by near-field energy from power lines, these fields may attenuate with distance as 1/r^n (where n lies between 1 and 3, depending on the type of line and the conductivity of the ground). Assuming 1/r^3 attenuation, the 2000 pT field at 100 m distance would fall to 2 pT at 1 km distance, so even low-power lines at kilometer distances can represent a very substantial source of interference when compared to typical natural ELF/VLF sources.
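Written out, the assumed 1/r^3 scaling from 100 m to 1 km is

$$B(1\ \mathrm{km}) \approx B(100\ \mathrm{m})\left(\frac{100\ \mathrm{m}}{1000\ \mathrm{m}}\right)^{3} = 2000\ \mathrm{pT} \times 10^{-3} = 2\ \mathrm{pT}.$$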

[7] Even though most of the power in the hum may be at the fundamental frequency, for ELF/VLF radio science applications it is important to take into account the higher harmonics, which have varying relative amplitudes and phases and which interfere with natural signals in the ELF/VLF frequency range. The time-varying nature of the fundamental frequency is also very important. Drifts in the fundamental frequency are part of generator and power plant design, effectively acting as a feedback mechanism for multiple generators to be connected to the same power grid, and for those generators to respond to changing loads on the power line. These drifts can be very significant. For instance, a drift of ±0.1 Hz in the fundamental frequency (a reasonable amount, as will be shown later) implies that the 100th harmonic (at ∼6000 Hz) changes by ±10 Hz. A static notch filter at 6000 Hz would therefore require at least ±10 Hz bandwidth to consistently remove the 100th harmonic. A series of notch filters with ±10 Hz bandwidth could be used, but when these filters are spaced out by 60 Hz, a substantial fraction (at least 1/3) of the total power is removed, and the natural ELF/VLF signal is therefore distorted. Additionally, coherent subtraction of a constant-frequency sinusoid would be less accurate, as the interference effectively has a finite bandwidth due to the frequency drift, which cannot be captured by a sinusoid. Adaptive response to changing power line characteristics may be particularly important for ELF/VLF magnetic field measurements, since the magnetic field interference may vary substantially as the load (and thus the current) in the power line changes (whereas the electric field interference may be more steady since the voltage on the power line is more or less constant). It may also be more important for smaller power grids (like those in remote areas), for which fewer loads are present on the power line at a given time, and fluctuations may be quicker and more pronounced.

[8] Let us assume that the ELF/VLF data record, x, contains power line interference of a quasiperiodic nature, i.e., consisting of a fundamental frequency with harmonics at exactly integer multiples at any given time. The filtering is achieved by isolating and then subtracting a reconstructed version of the interference, i.e.,

$$\hat{x}(t) = x(t) - p(t) \qquad (1)$$

where x̂(t) is the hum-subtracted data record and p(t) is the isolated power line interference signal. Our task in mitigation of the power line signal is to isolate p(t) accurately and with computational efficiency. We describe two techniques to this end: one in which an adaptive filter is utilized to track the power line interference, and another in which a least squares matrix approach is utilized to estimate it. These techniques are also described by Said [2009, chapter 3] and Cohen [2010, Appendix D].

[9] While the techniques discussed here do not diminish the benefit of a remotely located ELF/VLF antenna (where power line interference is weak to begin with), they can greatly aid in removing moderate levels of power line interference. For the remainder of this manuscript, we consider the “signal” to be the ELF/VLF fluctuations of interest, the “hum” to be the interference from power lines, and the “data” to be the sum of the two, i.e., the real recorded output from the receiver.

2. Adaptive Filtering

[10] Adaptive filters have been used for removal of hum, both for geophysical applications and in other fields of study. As noted by Butler and Russell [1993], these techniques tend to fall into two categories: sinusoidal subtraction and block averaging. The first simply involves estimating the amplitude and phase of a sinusoid (at the hum frequency), which is then subtracted from the data. Widrow [1975] reviews adaptive noise canceling and describes a wide variety of applications, most notably a 60 Hz subtraction algorithm designed to remove interference from electrocardiogram (ECG) data in this manner. This algorithm noticeably reduces the interference, but only at the fundamental frequency. A variant of this technique estimates the fundamental frequency via the autocorrelation of the data and can then subtract many harmonics as well.

[11] The so-called “block subtraction” technique involves isolating a hum period where only the interference is present, and subsequently subtracting this waveform repeatedly from the ensuing data, for instance, in the studies by Furno and Tompkins [1983], Reising [1998, p. 41], and Cummer [1997, p. 85]. Although this technique intrinsically adapts to the complete harmonic structure of the 60 Hz interference, it relies on manually finding a suitable “interference-only” hum period, and still assumes that the hum does not change noticeably in time. However, in using only one block to estimate this hum, it is invariably susceptible to noise (impulsive and Gaussian) being copied over with every block subtraction. On the other hand, block subtraction requires very simple computations.
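For illustration, a minimal sketch of block subtraction follows (in Python; it assumes the hum period is an integer number of samples and that an interference-only block has already been located manually, and the function name is ours):

```python
import numpy as np

def block_subtract(x, block_start, period_samples):
    """Subtract one manually chosen hum period from all ensuing data.

    x              : recorded data (1-D array)
    block_start    : start index of an interference-only hum period
    period_samples : hum period in samples (assumed integer here)
    """
    template = x[block_start : block_start + period_samples]
    y = x.astype(float).copy()
    # Tile the template forward through the ensuing data, period by period;
    # any noise captured in the template is copied into every subtraction.
    for i in range(block_start + period_samples,
                   len(x) - period_samples + 1, period_samples):
        y[i : i + period_samples] -= template
    return y
```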

[12] An adaptive filter can also be achieved with hardware (although we utilize a software implementation). For instance, E. W. Paschal and M. Trimpi are known to have designed and implemented a hardware-based system in which a periodically averaged 60 Hz waveform is built and then subtracted from the incoming data as they arrive into the digitizer of an ELF/VLF receiver. A variable delay in the phasing of the subtraction also accounts to some extent for drifting of the fundamental frequency (E. W. Paschal, personal communication, 2009). This latter design bears several similarities to the adaptive filter technique described here, in that it is also able to track and subtract the complete harmonic content of the hum.

[13] Under the adaptive filtering approach, p(t) is calculated via a convolution of the input data with an impulse train, which can be written as follows:

$$p(t) = C_1 \sum_{\substack{m=-M \\ m \neq 0}}^{M} e^{-|m|/(a f^{\circ})} \, x\!\left(t - \frac{m}{f^{\circ}}\right) \qquad (2)$$

where f° is the hum fundamental frequency. The impulses are separated in time by the time-varying fundamental hum period and weighted with exponentially decreasing values as e^{−|t|/a}, hence the epochs farther away from t become exponentially less valuable in terms of reconstructing the hum. There is no impulse at t = 0 (since this would amount to simply scaling the entire data by a coefficient), but the impulses exist for both positive and negative values of t, so the filter is noncausal.

[14] The parameter a is a characteristic adaptation time which accounts for the variation in the hum harmonic content and fundamental frequency, which can occur in particular as the loading on the power lines changes. Smaller lines with less stable loading would likely require shorter adaptation times, allowing the filter to adapt more quickly. On the other hand, shorter adaptation times decrease the number of epochs effectively included, thereby increasing the bandwidth that is subtracted and possibly including portions of “real” ELF/VLF data.

[15] C1 is a normalization term calculated as follows:

$$C_1 = \left[\, \sum_{\substack{m=-M \\ m \neq 0}}^{M} e^{-|m|/(a f^{\circ})} \right]^{-1} \qquad (3)$$

and M is defined such that e^{−M/(a f°)} is sufficiently small (e.g., 10%) in order to closely approximate an infinite sum.
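As an illustration, a minimal software sketch of this filter follows (in Python; the function name and the eps cutoff parameter are ours, impulse lags are rounded to the nearest sample, and wraparound at the segment edges is ignored for brevity):

```python
import numpy as np

def adaptive_comb_filter(x, fs, f0, a=0.32, eps=0.1):
    """Isolate the hum p(t) via the exponentially weighted impulse train
    of equations (2) and (3).

    x   : data segment (1-D array), constant fundamental f0 assumed
    fs  : sampling rate [Hz]
    f0  : hum fundamental frequency [Hz]
    a   : characteristic adaptation time [s]
    eps : weight of the outermost epoch; sets M via exp(-M/(a*f0)) = eps
    """
    M = int(np.ceil(a * f0 * np.log(1.0 / eps)))
    p = np.zeros(len(x))
    weight_sum = 0.0
    for m in range(-M, M + 1):
        if m == 0:
            continue                          # no impulse at t = 0
        w = np.exp(-abs(m) / (a * f0))
        lag = int(round(m * fs / f0))         # epoch m hum periods away
        p += w * np.roll(x, lag)              # x(t - m/f0); edges wrap
        weight_sum += w
    return p / weight_sum                     # C1 normalization, eq (3)

# Usage sketch: x_clean = x - adaptive_comb_filter(x, fs=100e3, f0=60.02)
```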

3. Least Squares Estimation

[16] A second possible method of estimating the hum comes from least squares estimation, described by Saucier et al. [2006] and reviewed here. The hum interference signal p(t) can be written in terms of a sum of individual harmonic components as follows:

$$p(t) = \sum_{k=1}^{K} \left[ A_k \cos(2\pi k f^{\circ} t) + B_k \sin(2\pi k f^{\circ} t) \right] \qquad (4)$$

where A_k and B_k are real coefficients, which indicate the cosine and sine components of the hum at the kth harmonic, assuming the hum has K harmonics that need to be subtracted.

[17] To set up our least squares problem, we rewrite equation (4) as a matrix equation, or

$$p(t) = Y(t)\,P \qquad (5)$$

where P is a 2K × 1 matrix, containing the sine and cosine coefficients, as follows:

$$P = \begin{bmatrix} A_1 & B_1 & A_2 & B_2 & \cdots & A_K & B_K \end{bmatrix}^{T} \qquad (6)$$

and Y(t) is a T × 2K matrix (where T is the length of data being estimated) defined with rows containing sine and cosine functions, as follows:

$$Y(t) = \begin{bmatrix}
\cos(2\pi f^{\circ} t_1) & \sin(2\pi f^{\circ} t_1) & \cdots & \cos(2\pi K f^{\circ} t_1) & \sin(2\pi K f^{\circ} t_1) \\
\vdots & \vdots & & \vdots & \vdots \\
\cos(2\pi f^{\circ} t_T) & \sin(2\pi f^{\circ} t_T) & \cdots & \cos(2\pi K f^{\circ} t_T) & \sin(2\pi K f^{\circ} t_T)
\end{bmatrix} \qquad (7)$$

[18] We can now rewrite our ELF/VLF data as

$$x(t) = Y(t)\,P + n(t) \qquad (8)$$

where n(t) includes everything else in the ELF/VLF data (natural ELF/VLF radiation, VLF transmitters, and other local interference sources) which, for the purposes of this estimation problem, is effectively the noise. Our task is to estimate P given knowledge of x(t) in the presence of this additive noise. For cases where n(t) is white Gaussian noise, the optimal solution is well known and is given by

$$\hat{P} = \left( Y^{T} Y \right)^{-1} Y^{T} x \qquad (9)$$

[19] The number of harmonics K can be arbitrarily specified; the harmonic numbers k need not form a consecutive series of integers but can simply comprise the strongest harmonics of the hum. For instance, in the presence of natural noise which obscures the hum components below ∼1 kHz, the values of k can be chosen to include only the higher-order harmonics of 50/60 Hz, those above 1 kHz. Saucier et al. [2006] present a technique to analytically solve the least squares inversion to reduce the computation time.
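A minimal sketch of this estimator follows (in Python; the function name is ours, and a numerically stable least squares solver stands in for the explicit inversion of equation (9)):

```python
import numpy as np

def least_squares_hum(x, fs, f0, harmonics):
    """Isolate the hum by fitting the model of equations (4)-(8).

    x         : data segment (1-D array)
    fs        : sampling rate [Hz]
    f0        : hum fundamental frequency [Hz]
    harmonics : harmonic numbers k to fit (need not be consecutive)
    Returns the isolated hum p(t) and the coefficient vector P_hat.
    """
    t = np.arange(len(x)) / fs
    cols = []
    for k in harmonics:
        cols.append(np.cos(2 * np.pi * k * f0 * t))   # A_k column
        cols.append(np.sin(2 * np.pi * k * f0 * t))   # B_k column
    Y = np.column_stack(cols)                         # T x 2K matrix, eq (7)
    # Equivalent to P_hat = (Y^T Y)^{-1} Y^T x, but numerically stabler:
    P_hat, *_ = np.linalg.lstsq(Y, x, rcond=None)     # eq (9)
    return Y @ P_hat, P_hat

# Usage sketch: p, _ = least_squares_hum(x, 100e3, 60.02, range(1, 101))
#               x_clean = x - p
```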

4. Frequency Estimation

[20] The fundamental frequency, f°, may vary significantly on timescales of seconds, and must therefore be explicitly calculated. However, both subtraction techniques described here assume a constant fundamental hum frequency over some block of time, so they cannot be applied directly to long data records. Effective use of this subtraction therefore involves dividing the data into segments short enough that the frequency does not change appreciably, and then estimating the frequency separately in each segment. We discuss two methods for this frequency estimation: one which can be applied to either the adaptive filter or the least squares subtraction, and a second which can be applied only to the least squares technique.

[21] One method for frequency estimation is given by J. O. Smith (see https://ccrma.stanford.edu/~jos/sasp/), referred to by him as “quadratic interpolation.” The log magnitude spectrum of a monochromatic function, after multiplication by a Gaussian window and Fourier transformation, is exactly a quadratic function near its peak. The highest three points of the resulting function (which have the highest signal-to-noise ratio (SNR)) can therefore be fitted to a parabola, whose vertex gives the peak frequency. The method is accurate as long as the function is sufficiently monochromatic that the Fourier transform of the Gaussian-windowed function is well approximated (in logarithmic values) by a parabola. The quadratic interpolation is repeated at each frequency where a hum harmonic is present, and the final estimate of the hum fundamental is formed as a sum of the per-harmonic estimates weighted by the SNR of each spectral peak.
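A sketch of quadratic interpolation on a single harmonic follows (in Python; the Gaussian window width, the search band, and the function name are our choices):

```python
import numpy as np

def quadratic_peak_freq(x, fs, f_guess, search_hz=2.0):
    """Estimate a spectral peak frequency by fitting a parabola to the
    three highest log magnitude bins of the Gaussian-windowed spectrum."""
    n = len(x)
    sigma = n / 8.0                                 # window width (our choice)
    w = np.exp(-0.5 * ((np.arange(n) - n / 2) / sigma) ** 2)
    spec = np.abs(np.fft.rfft(x * w))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = np.flatnonzero(np.abs(freqs - f_guess) < search_hz)
    k = band[np.argmax(spec[band])]                 # highest bin near f_guess
    a, b, c = np.log(spec[k - 1 : k + 2])           # three highest points
    delta = 0.5 * (a - c) / (a - 2 * b + c)         # parabola vertex offset
    return (k + delta) * fs / n

# The fundamental estimate is then an SNR-weighted combination of
# quadratic_peak_freq(x, fs, k * f0_nominal) / k over the harmonics k present.
```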

[22] In the case of least squares estimation, determining the frequency can also be formulated as a nonlinear least squares optimization problem. To find the optimal frequency, we apply equation (9) and search for a value of f° which minimizes the mean squared error, or

$$\hat{f}^{\circ} = \arg\min_{f^{\circ}} \left\| x - Y\hat{P} \right\|^{2} \qquad (10)$$

where Y and P̂ are evaluated at the trial value of f°.

[23] In MATLAB, this nonlinear optimization can be achieved with the lsqnonlin function, which utilizes the trust-region-reflective method of optimization. In addition, determining the optimal segment length for dividing the data is not trivial, as we consider below.
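A sketch of the corresponding search follows (in Python, using scipy's trust-region-reflective solver, the same method as MATLAB's lsqnonlin; the function name, harmonic set, and search bounds are ours):

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_f0_min_error(x, fs, f_nominal=60.0, span=0.5,
                          harmonics=range(1, 17)):
    """Minimized-error frequency estimate of equation (10): find the f0
    whose least squares hum fit (equation (9)) leaves the least residual."""
    t = np.arange(len(x)) / fs

    def residual(params):
        f0 = params[0]
        cols = []
        for k in harmonics:
            cols.append(np.cos(2 * np.pi * k * f0 * t))
            cols.append(np.sin(2 * np.pi * k * f0 * t))
        Y = np.column_stack(cols)
        P_hat, *_ = np.linalg.lstsq(Y, x, rcond=None)  # eq (9) at trial f0
        return x - Y @ P_hat                           # residual of eq (10)

    result = least_squares(residual, x0=[f_nominal], method="trf",
                           bounds=([f_nominal - span], [f_nominal + span]))
    return result.x[0]
```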

5. Comparison

[24] It is helpful to compare the various techniques with computer-simulated data containing both coherent functions and white Gaussian noise, since the comparison can be repeated many times to obtain statistics on their effectiveness. The simulated data consist of a number of sinusoids with frequencies at selected harmonics of 60 Hz summed together. This coherent function is then embedded in white Gaussian noise such that the power of each sinusoid stands at a known ratio to the noise power in a 1 Hz bandwidth, which is defined here as the signal-to-noise ratio (SNR).
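A sketch of such a data generator follows (in Python; parameter names are ours, and a constant-rate fundamental drift, used later in section 5.2, is included as an option):

```python
import numpy as np

def simulate_data(fs=100e3, dur=0.5, f0=60.0, harmonics=(50,),
                  snr_db=30.0, drift=0.0, seed=0):
    """Simulated hum: unit-amplitude sinusoids at the given harmonics of
    f0 (with the fundamental drifting at `drift` Hz/s), embedded in white
    Gaussian noise whose power in a 1 Hz bandwidth sits snr_db below the
    power of each sinusoid."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    phase = 2 * np.pi * np.cumsum(f0 + drift * t) / fs   # integrated phase
    hum = sum(np.sin(k * phase) for k in harmonics)
    # A unit sinusoid has power 1/2, so the target one-sided noise density
    # is N0 = 0.5 / 10^(snr_db/10); sample variance N0*fs/2 yields that
    # density for real white noise sampled at fs.
    n0 = 0.5 / 10 ** (snr_db / 10.0)
    noise = rng.standard_normal(t.size) * np.sqrt(n0 * fs / 2.0)
    return t, hum + noise
```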

5.1. Frequency Error

[25] We first compare the effectiveness of the two frequency estimation techniques for a variety of simulated function types, with the results shown in Figure 1. Figure 1 (left) shows the results for the minimized error technique, and Figure 1 (right) shows the results for quadratic interpolation. For each function duration and SNR, the frequency estimation is repeated 30 times, and the RMS error of the 30 estimates is shown with the color bar. The function durations are taken in fractional multiples of 1/60 s. The hum fundamental frequency is 60.0 Hz for all cases except the third row, where it is 60.1 Hz. If multiple harmonics are in the data, all are assumed to be of the same SNR.

Figure 1.

Frequency estimation error. (top row) Only the first harmonic is present and used for the estimation. (second row) Only the 63rd harmonic is present and used for the estimation. (third row) Only the 63rd harmonic is present and used for the estimation, and f° = 60.1 Hz. (fourth row) The first 16 odd harmonics are present and used for the estimation. (fifth row) The first 32 odd harmonics are present, but only the first 16 harmonics are used for the estimation. (bottom row) The first 32 odd harmonics are present and used for the estimation.

[26] The top row of Figure 1 shows a baseline case, where only the first harmonic exists in the data. Increasing the function length or the SNR improves the frequency estimation. For low-SNR functions, the minimized error technique performs noticeably better, whereas for high-SNR functions, the quadratic interpolation performs as well as the minimized error. This feature is consistent for all the cases tested here.

[27] The second row of Figure 1 shows the same situation, except only the 63rd harmonic is present. Both the minimized error and quadratic interpolation methods become substantially more accurate, indicating that using higher harmonics is more accurate, since there are more periods in a given segment, and thus more information for the frequency estimation.

[28] The third row of Figure 1 shows the 63rd harmonic estimation, but for the case where the fundamental frequency is 60.1 Hz. Both minimized error and quadratic interpolation degrade significantly in accuracy of frequency estimation at certain lengths of data for which the number of periods at the 63rd harmonic is an integer plus 1/2. This example emphasizes the importance of using adaptive window lengths for the minimized error frequency estimation technique as the frequency changes, especially if higher harmonics are used in the estimation. As an interference mitigation scheme is applied in successive segments through the data, the length of each segment should be determined from the frequency estimated in the previous segment, so that the length of the segment is close to an integer number of hum periods.
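One possible implementation of this segment length rule follows (a sketch; the function name and target duration parameter are ours):

```python
def next_segment_length(f_prev, fs, target_sec=0.5):
    """Choose the next segment length (in samples), near target_sec, that
    spans a whole number of hum periods at the previously estimated
    fundamental f_prev, avoiding the integer-plus-1/2 period lengths that
    degrade the estimate."""
    n_periods = max(1, round(target_sec * f_prev))   # whole hum periods
    return int(round(n_periods * fs / f_prev))       # samples per segment
```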

[29] The fourth row of Figure 1 shows the frequency error when the simulated data consist of the first 16 odd harmonics of 60 Hz, all of which are present with equal amplitude in the data. Even with many independent measurements of the frequency, this estimate is not substantially better than the estimate using only the 63rd harmonic (i.e., the 32nd odd harmonic, as in the second row). This result emphasizes the value of using higher harmonics for the estimation.

[30] The fifth row of Figure 1 shows the same estimation as the fourth row, except the simulated data also include 16 more odd harmonics that are present in the data but are not used in the estimation. The minimized error technique suffers a loss of effectiveness, since the additional harmonics effectively constitute additional noise when not taken into account in the estimation. However, the quadratic interpolation suffers no loss of effectiveness, since the interpolation strategy works on one harmonic at a time, and thus the presence of higher harmonics does not affect the estimation of the lower harmonic frequencies.

[31] The sixth row of Figure 1 shows the frequency error when the simulated data consist of the first 32 odd harmonics of 60 Hz. The minimized error scheme becomes exceptionally effective in this case, although the quadratic interpolation technique does not benefit noticeably from the use of additional harmonics. As will be discussed later, the computation time for the minimized error technique scales with the number of harmonics, whereas for quadratic interpolation it remains roughly constant, so the improvement provided by the minimized error technique comes as a trade-off against the added computation time.

[32] It should be noted that although these results indicate that using the higher harmonics for the frequency estimate provides better accuracy, the results of these simulations assume equal SNR for all harmonics present in the data. In practice, however, the amplitude of the hum harmonic often decreases with higher harmonic number, so the specific characteristics of the hum should be taken into account before the optimal technique for frequency estimation is determined.

5.2. Hum Removal

[33] We now quantitatively compare the following four techniques of hum subtraction, again using simulated data with coherent functions embedded in white Gaussian noise. It is important in these simulations to take into account the time-varying nature of the hum frequency, since both of the above techniques assume a constant fundamental frequency, which in practice will never be precisely true. Thus the fundamental frequency is now allowed to drift at a certain constant rate.

[34] The four tested techniques are as follows: (1) minimum error: frequency estimation with nonlinear optimization, and hum isolation with least squares fitting; (2) least squares: frequency estimation with quadratic interpolation, and hum isolation with least squares fitting; (3) fast filter: frequency estimation with quadratic interpolation, and hum isolation with an adaptive filter and a = 0.04 s (2.4 hum periods); and (4) slow filter: frequency estimation with quadratic interpolation, and hum isolation with an adaptive filter and a = 0.32 s (19.2 hum periods).

[35] These simulated data are processed with the four subtraction techniques listed above, and the Fourier spectra of the outputs are calculated. Via repetition of the data simulation and subtraction, the Fourier spectra for each technique can be averaged, yielding a smooth curve showing the input-output frequency characteristics.

[36] Figure 2 shows an example of the average spectrum, after 100 repetitions of the subtraction techniques, with a function length of 0.5 s, and a fundamental drift rate of 0.10 Hz/s (implying that the fiftieth harmonic drifts at 5 Hz/s). In this example, the simulated hum contains only the fiftieth harmonic (3 kHz). The dashed curve shows the spectrum of the raw simulated data, while the solid curve shows the spectrum of the processed data for the minimum error technique (left), least squares technique (second plot), fast filter technique (third plot), and slow filter technique (right).

Figure 2.

An example of an averaged spectrum after postprocessing simulated hum with the (left) minimum error technique, (second plot) least squares technique, (third plot) fast filter technique, and (right) slow filter technique. The spectrum is obtained by averaging the Fourier transforms of 100 repetitions. The 0.5 s long simulated hum consists of the fiftieth harmonic of 60 Hz, where the fundamental drifts at a rate of 0.10 Hz/s (5 Hz/s at the fiftieth harmonic). The two extracted metrics are indicated: the peak reduction (arrows) and the spillover (the RMS value of the red lines).

[37] We define two metrics to evaluate the performance of each technique. The peak reduction is the reduction (in logarithmic values) of the maximum value of the average spectrum, as shown with the arrows in each of the four plots of Figure 2; it evaluates the effectiveness of the subtraction technique at removing the hum. The spillover is the root-mean-square (RMS) value of the deviation between the raw data and the processed data at all spectral points within ±1.5 fundamental frequencies of the harmonic but outside the central spectral peak, i.e., the RMS value of the red lines (which are large enough to be visible only in the right two plots of Figure 2). This spillover metric evaluates the degree to which the rest of the spectrum is altered as a result of the hum subtraction.
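A sketch of how these two metrics might be computed from raw and processed spectra in dB follows (our reading of the definitions above; the width of the excluded central band is our choice):

```python
import numpy as np

def peak_and_spillover(freqs, raw_db, proc_db, f_harm, f0=60.0):
    """Peak reduction and RMS spillover around one hum harmonic."""
    near = np.abs(freqs - f_harm) < 1.5 * f0       # +/- 1.5 fundamentals
    center = np.abs(freqs - f_harm) < 0.5 * f0     # bins holding the peak
    peak_reduction = raw_db[near].max() - proc_db[near].max()
    side = near & ~center                          # spillover region
    spillover = np.sqrt(np.mean((proc_db[side] - raw_db[side]) ** 2))
    return peak_reduction, spillover
```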

[38] In this particular example, the two adaptive filter techniques (right two plots) have a greater peak reduction compared to the minimum error and least squares techniques (left two plots), and therefore are more effective at removing the hum frequencies. The adaptive filters, however, demonstrate considerably more spillover, indicating that the other signals embedded in the data will be somewhat distorted as a result of the adaptive filter subtraction, more so than with the other two techniques.

[39] The above example is repeated for various data lengths and fundamental frequency drift rates, and the results are shown in Figure 3. As before, the hum contains only the fiftieth harmonic and is injected with an SNR in a 1 Hz bandwidth of 30 dB. The top row shows the peak reduction of the fiftieth harmonic for each of the four techniques: minimum error (left column), least squares (second column), fast filter (third column), and slow filter (right column). Since the SNR of the data is 30 dB, a 30 dB peak reduction is the maximum possible, with 0 dB representing no reduction of the hum spectral peak.

Figure 3.

(top) The peak reduction and (bottom) RMS spillover of the four subtraction techniques.

[40] In terms of the peak reduction, for well behaved hum (i.e., with the fundamental frequency drifting very slowly) the minimum error and least squares techniques demonstrate superior hum removal, even for short data durations of a few periods. The performance of the adaptive filter techniques is not substantially worse, but they do require data longer than ∼10 hum periods in order to adequately track and subtract the hum (since there must be enough data for the adaptive filter to adapt). The difference between the least squares and the minimum error techniques is fairly minimal, indicating that the choice of frequency estimation has a relatively small impact on the effectiveness of the subtraction, whereas the choice between adaptive filters and matrix methods strongly affects the subtraction.

[41] As the fundamental frequency drifts more quickly, the performance of all four techniques degrades, but the adaptive filters do not break down as much, holding relatively good performance even for drift rates as high as 0.1 Hz/s. The fast filter, in particular, can still adequately subtract the hum. The likely reason is that the adaptive filters intrinsically determine the hum from portions of the data near a given time index, so that for a rapidly drifting hum frequency, the portions of the data with frequency similar to that at a given location dominate the isolation of the hum. On the other hand, the minimum error and least squares techniques utilize the entire data segment in isolating the hum via matrix methods.

[42] The bottom row of Figure 3 shows the RMS spillover value (in dB) for the same four techniques, data durations, and drift rates. Both the minimum error and least squares techniques have substantially less spillover, indicating that they are clearly superior at subtracting the hum without distorting the rest of the data, even for rapidly varying hum, or long data. In addition, the fast filter, which was able to retain good subtraction even in the case of rapidly varying power line frequency, has an even larger spillover than the slow filter. The fast filter does not utilize enough data to form a “clean” filter that isolates only the hum.

[43] It is also worthwhile to compare the computation times. Figure 4 shows the relative logarithmic computation time (normalized to the length of the data) for each of the four techniques, as a function of the number of estimated and subtracted hum harmonics. Computation time as a function of the number of harmonics is important, since subtraction of hum with harmonics from 60 Hz to 6 kHz requires estimation of the amplitudes and phases of 100 different harmonics (although sufficiently accurate estimation of f° may require fewer harmonics). For the adaptive filtering technique, the computation time changes only marginally with the number of hum harmonics, by ∼25% for each factor of 10 more harmonics being estimated. The adaptive filtering technique is applied with no a priori knowledge of the number of harmonics present, so the growth in computation time with harmonics results from the incremental effect of applying the quadratic interpolation at many hum harmonics. The fast filter, since it utilizes a smaller number of epochs to isolate the hum, also responds even better to hum frequency fluctuations, as noted above.

Figure 4.

The relative computation time of the four hum subtraction algorithms as a function of the number of hum harmonics that are present.

[44] The computation time increase associated with the minimum error technique can be mitigated, however, by restricting the search to a single iterative step around 60 Hz in order to find the frequency of minimum error. Although not shown, in most cases the single step provides sufficient accuracy in the frequency estimation that additional iterations in the optimization provide negligible improvement.

[45] The least squares technique does take somewhat more time to compute than the filtering, and does show some growth in computation time with more harmonics. The computation time still does not scale dramatically, however, increasing by ∼75% for each factor of 10 increase in the number of harmonics, due to the increasing order of the matrix operation in equation (9).

[46] On the other hand, the number of hum harmonics drastically affects the computational efficiency of the minimum error technique, since the optimization requires repeated calculation of the computationally expensive least squares inversion (equation (9)). The computation time scales roughly linearly with the number of harmonics. One way to mitigate this computational cost is to use a small number of (high SNR) harmonics for the frequency estimation, and then apply that frequency estimate to remove a larger set of power line harmonics. However, even for a small number of harmonics, the minimum error technique takes ∼3 times longer to execute compared to the filter techniques.

[47] It therefore appears that the least squares technique generally gives the best combination of effective subtraction, low distortion, and computational efficiency. The added computational cost of the nonlinear optimization method of frequency estimation appears to be unnecessary, since its performance can be met or exceeded by quadratic interpolation of spectral peaks.

[48] However, because the least squares technique does not intrinsically adapt to changing hum, and is therefore less effective with poorly behaved hum, the data must be divided into relatively short (no more than 0.2 s) segments in order for this technique to be effective for hum with rapid fundamental frequency variations (>∼0.1 Hz/s).

[49] On the other hand, because it requires about 3 times more computation time to perform the least squares technique compared to the adaptive filter (for hum containing many harmonics), the least squares technique may not be appropriate for large-scale data processing, or for real-time, computationally intensive applications.

[50] The adaptive filters provide the advantage of rapid calculation, at the expense of distortion of the signal. However, the distortion would not be visible in cases where the hum spectral peaks are only moderately strong, since the distortion would then be below the noise floor. Proper selection of the hum subtraction technique therefore depends in part on the characteristics of the power line and on the need for computational efficiency.

6. Results

[51] Having established the performance of all four techniques, we now apply them to real ELF/VLF data collected with the “AWESOME” receiver described by Cohen et al. [2010]. Figure 5 shows portions of a 20 s data segment taken from Kodiak, Alaska, on 1 March 2007, 0603:20–0603:40 UT. Spectral analysis shows that this period featured fluctuation of the fundamental frequency as fast as ∼0.03 Hz/s, changing by as much as 0.3 Hz over 10 s, presumably due to load fluctuations in a relatively remote area of Alaska.

Figure 5.

Comparison between (top row) raw data, (second row) data after hum removal with the minimum error technique, (third row) least squares technique, and (bottom three rows) data after adaptive filtering with three different values of a. A portion of the time domain is shown at left, and a portion of the frequency domain (derived using Welch’s method) is shown at right. Data are taken from Kodiak, Alaska, 1 March 2007, 0603:20–0603:40 UT, a period when the hum fundamental frequency is observed to change on the order of seconds.

[52] Figure 5 (left) shows a 30 ms segment of time, with the six rows corresponding to raw data, and data after processing with minimum error, least squares, and three different adaptive filters with different characteristic adaptation times a. The short segment of time allows distinguishing individual radio atmospherics, which are millisecond-long impulsive signatures of lightning discharges which can be detected at global distances from the source, such as the one at 0603:22.405 UT. Figure 5 (right) shows a portion of the spectrum for the entire 20 s block of data. The 41st, 42nd, 43rd, and 44th harmonics of 60 Hz are clearly visible in the power spectrum.

[53] All of the subtraction techniques succeed in removing a substantial portion of the hum clearly visible in the raw data. The SNR of the radio atmospheric is clearly increased by the minimum error and least squares techniques, whereas in the frequency domain, the spectral peaks from the hum are removed. The adaptive filter is also effective, but only with correct choice of the characteristic adaptation time. A value of a too long (such as in the fourth row) will render the filter unable to entirely remove the hum, leaving visible artifacts of the hum in the time domain, and remaining spectral peaks in the Fourier domain. A choice of a too short (such as in the bottom row) creates “clones” of the sferic, spaced out by one hum period, since the adaptive filter is formed from only a few epochs. In the frequency domain, this appears as an overshooting of the hum removal, such that the spectral peaks become spectral wells.

[54] The subtraction techniques can also be applied to pull out weaker natural or man-made signals which would ordinarily be extremely difficult to characterize. Figure 6 shows the results of the two subtraction algorithms applied to two different 10 s records of ELF/VLF data taken from Valdez, Alaska (61.06°N, 146.02°W), and presented in the form of a spectrogram, with the data divided into segments and a Fourier transform performed on each segment. In both plots, the first 10 s period shows unfiltered data, the second 10 s period shows data after least squares subtraction (segment length of 0.4 s), and the third 10 s period shows data after adaptive filtering (a = 1/8 s).

Figure 6.

Examples of hum filtered data. (top) Spectrograms of a 10 s chunk of data taken from Valdez, Alaska, on (left) 15 February 2007 and (right) 1 March 2007, shown separately for the raw data, least squares processed data, and adaptive filter processed data. (bottom) Portions of the frequency spectrum for the same data.

[55] Figure 6 (left) shows an example of chorus emissions, which can be seen as a series of brief ∼1 s long rising emissions. This particular case of chorus is also discussed by Gołkowski and Inan [2008]. Although the chorus elements are barely visible in the raw data, they are seen much more clearly in the postprocessed data, after applying either the adaptive filter or the least squares estimation approach. In the spectrogram of the chorus, the difference between the adaptive filter technique and the least squares estimation technique is barely noticeable, as both seem to do an adequate job of removing the hum interference. However, close examination of the spectrum indicates some discernible differences between the two techniques. The bottom plots show 10 s long Fourier transforms of the same data records as those shown in the above spectrograms, where it is visible that the adaptive filtering technique adds some degree of a valley at the frequencies corresponding to the even harmonics of 60 Hz. On the other hand, the least squares estimation technique does not leave such an artifact in the data, because the amplitudes of the harmonics are individually calculated.

[56] Figure 6 (right) shows an example of ELF/VLF radiation generated with high frequency heating of the auroral electrojet (a particular instance presented by Cohen et al. [2008]), utilizing the HAARP facility near Gakona, Alaska, and detected ∼150 km away. The generated transmission consists of constant-frequency tones at 2375 Hz (3 s), 2875 Hz (1 s), and 2175 Hz (1 s), followed by a ramp from 500 Hz to 3 kHz (5 s). The signal-to-noise ratios of all three tones and the ramp are much larger in the postprocessed cases. The differences between the two techniques are relatively minor, indicating that proper selection of the a parameter for the adaptive filter can give results that are very similar to those of the least squares technique.

7. Conclusion

[57] Coupled electromagnetic fields from power lines at harmonic multiples of 60 Hz (or 50 Hz) constitute a major source of interference for ELF/VLF data. This interference is complicated by the time-varying nature of the fundamental frequency and harmonic content, as well as the varying characteristics across different power grids.

[58] We have compared a number of methods of isolating the power line interference so that it can be subtracted from the original data. An adaptive filter tracks the fundamental frequency and also the harmonic content of the power line interference, while a least squares matrix approach finds the values of the amplitude and phase for each of the harmonics, and subtracts a series of sinusoids with these parameters. The choice of optimal parameters can be made empirically, as they depend primarily on the particular power line characteristics.

Acknowledgments

[59] This work was supported under Office of Naval Research grant N00014-09-1-0100-P00003, and by National Science Foundation grant OPP-0233955 to Stanford University. We thank Evans Paschal and Mark Golkowski for helpful discussions. The authors thank the two reviewers for their helpful comments and suggestions.
