High-rate real-time GPS network at Parkfield: Utility for detecting fault slip and seismic displacements



[1] A network of 13 continuous GPS stations near Parkfield, California has been converted from 30 second to 1 second sampling, with positions of the stations estimated in real time relative to a master station. Most stations are near the trace of the San Andreas fault, which exhibits creep. The noise spectra of the instantaneous 1 Hz positions show flicker noise at high frequencies and become frequency independent at low frequencies; the change in character occurs at periods between 6 and 8 hours. Our analysis indicates that 1-second sampled GPS can estimate horizontal displacements of order 6 mm at the 99% confidence level over intervals from a few seconds to a few hours. High-rate GPS can augment existing measurements by capturing large creep events and postseismic slip that would exceed the range of existing creepmeters, and can detect large seismic displacements.

1. Introduction

[2] GPS has established itself as the standard technique for measuring long-term (annual to decadal) crustal deformation at global and regional scales. Continuous GPS (CGPS) and Interferometric Synthetic Aperture Radar (InSAR) are the preferred methods to measure crustal deformation from earthquakes and volcanoes. CGPS has also been used to measure aseismic slow earthquakes [e.g., Dragert et al., 2001] and the long-period components of seismic shaking from large earthquakes [Nikolaidis et al., 2001; Larson et al., 2003; Bock et al., 2004].

[3] For the past 20 years at Parkfield, California, a network of borehole strainmeters, creepmeters, and high-precision geodesy using a two-color electronic distance meter (EDM) has been measuring strain accumulation and release on a section of the San Andreas fault in the transition zone between steady-state creep to the northwest and locked behavior to the southeast [Roeloffs and Langbein, 1994]. In the locked zone, strain is released by large earthquakes, the last being an M8 in 1857 (Figure 1). Much of the deformation measured at Parkfield comes from aseismic slip on the San Andreas fault, which occurs as steady-state creep and as short creep events spanning a few hours to a day with amplitudes between 0.5 and 1.5 mm. The background slip rate at Parkfield ranges from 0 to 18 mm/yr. Surface slip is measured with creepmeters, extensometers that span 30 meters within a wider fault zone. Some creep events are also detected on nearby strainmeters [Johnston and Linde, 2003].

Figure 1.

Map showing the GPS network and other crustal deformation instruments at Parkfield. Also shown is the location of SAFOD and the location of the 1966 Parkfield Earthquake.

[4] The use of high-rate (1 Hz) GPS measurements processed with real-time kinematic (RTK) methods for detecting fault slip was described by Genrich and Bock [1992]. Bock et al. [2000] introduced the method of instantaneous positioning, which allows, without loss of precision, integer-cycle phase ambiguity resolution with only a single epoch of data, unlike static GPS and RTK methods that require multiple epochs to resolve ambiguities and maintain phase continuity. Instantaneous GPS therefore provides a precise, independently computed position at each observation epoch, at the receiver's sampling rate. Nikolaidis et al. [2001] used this method to observe seismic motions from the 1999 Hector Mine earthquake with 30 sec data from the Southern California Integrated GPS Network (SCIGN). Larson et al. [2003] used RTK methods to analyze 1 Hz GPS data collected at individual stations in Alaska and across North America and detect seismic waves caused by the 2002 Denali fault earthquake, while Bock et al. [2004] used 1 sec data from the dense Orange County Real Time Network to monitor teleseismic waves caused by the 2002 Denali fault earthquake in Alaska.

[5] We examine the first 95 days of data (starting in July 2003) recorded from a 13-station high-rate (1 Hz), low-latency (1 sec) CGPS network in the region of Parkfield, to test the sensitivity of instantaneous positioning for detecting, in real time, short-period aseismic fault creep and the long-period displacement components of seismic shaking. GPS stations anticipated near SAFOD (San Andreas Fault Observatory at Depth) will be sensitive to surface creep, a measurement that complements the planned downhole measurements of fault slip.

2. Data

[6] Figure 1 shows the location of CGPS and other crustal deformation instruments in the Parkfield region. Part of the GPS network replaces the two-color EDM network that had been measuring fault slip and strain accumulation since 1984 [Langbein et al., 1990]. Continuous GPS measurements at Parkfield started in 1992; by 2001, there were 14 sites operating. With the exception of PKDB and HOGS, all of the stations use drilled-braced monuments [e.g., Langbein et al., 1995] that extend to a depth of 7–10 meters. HOGS is installed on an abandoned water well that extends to more than 100 meters depth, and PKDB is on a cement block attached to subsurface rock. Initially, all stations recorded GPS observations at the standard 30-second sampling rate, and the data were downloaded daily using 900 MHz, spread-spectrum radio telemetry.

[7] In the summer of 2003, all stations except PKDB were converted to 1 Hz sampling with real-time continuous data streaming using 2.4 GHz Ethernet radios and an 18-minute on-site buffer that minimizes data loss. All data functions are fully controlled by software on a PC workstation at the master site CARH. The 1 Hz data are analyzed in real time (less than 1 second latency) at the master site using the method described by Bock et al. [2000] to give three-dimensional positions for the 12 remote stations relative to station POMM. The distances from POMM range from 1.8 to 28.4 km. The position estimates are derived from an ionosphere-free (LC) solution, with a zenith delay parameter estimated once per epoch at each site to account for the effects of tropospheric refraction.

[8] As in Bock et al. [2000] and Nikolaidis et al. [2001], each coordinate component from each baseline is analyzed in two steps: removing outliers and removing the multipath effect with a sidereal filter. Outliers are identified by high-pass filtering the position time series and removing those measurements that fall outside 3 times the interquartile range (IQR). Our high-pass filter is a subtraction of a 0.25-day running median from the measurements.
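The outlier step can be sketched numerically. The following is a minimal illustration, not the authors' code: a running median serves as the high-pass filter, and samples whose residuals fall outside 3 times the IQR are flagged. The function name, window choice, and synthetic data are all hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_outliers(x, fs=1.0, window_days=0.25, k=3.0):
    """Flag samples whose high-passed residual exceeds k times the IQR.
    The high-pass is the data minus a running median spanning
    window_days, as described in the text."""
    win = int(window_days * 86400 * fs)
    if win % 2 == 0:
        win += 1                          # use an odd median window
    resid = x - median_filter(x, size=win, mode='nearest')
    q1, q3 = np.percentile(resid, [25, 75])
    return np.abs(resid - np.median(resid)) > k * (q3 - q1)

# Hypothetical 1 Hz series (~5 mm scatter) with two injected spikes.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 5.0, 7200)
x[1000] += 100.0
x[5000] -= 120.0
bad = flag_outliers(x, window_days=0.01)  # short window for a 2-hour series
```

In the real network this cleaning is applied separately to each coordinate component of each baseline.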

[9] The IQRs for the raw Parkfield 1 Hz positions range from 14 to 28 mm for the horizontal and 70 to 165 mm for the vertical. Typically, with a 3 IQR threshold, roughly 1 to 2% of the data are flagged as outliers. In addition, another 0.6 to 2% of the data are missing due to telemetry problems. The stations HOGS and LOWS have more missing data because of equipment problems.

[10] After removing outliers, a sidereal filter is used to improve the precision of the position data. This filter relates the current position with a weighted average of positions from the preceding sidereal days. While Nikolaidis et al. [2001] used equal weights to construct the filter, here, we set up the observation equation that relates the preceding measurements to the current measurement by:

x_i = Σ_(m=1)^(M) A_m · x_(i − m·j) + e_i

where x_i is the measurement at epoch i, x_(i − m·j) is the measurement from the m-th preceding sidereal day (j = 86164 s), and e_i is the data error. The weights, A_m, are estimated using least squares by minimizing the norm of the residuals, e_i; we find that M = 5 coefficients provide a satisfactory fit to the data. This adjustment reduces the variance by 70% for MIDA and by roughly 40% at most other sites. After removing the sidereal component, the root mean square (RMS) of the scatter in the data for all stations is 5 to 7 mm in the horizontal and 30 to 40 mm in the vertical.
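The weight estimation can be sketched as an ordinary least-squares problem built from sidereally lagged copies of the series. This is an illustration under simplifying assumptions (gap-free data; `fit_sidereal_weights` is a hypothetical name, and the sidereal lag `j` is parameterized so the demo can run on a short synthetic series):

```python
import numpy as np

def fit_sidereal_weights(x, n_days=5, j=86164):
    """Estimate sidereal-filter weights A_m by least squares from
        x_i = sum_{m=1..n_days} A_m * x_{i - m*j} + e_i,
    where j is the sidereal day in samples (86164 at 1 Hz sampling).
    Assumes a gap-free array x (e.g., positions in mm)."""
    i0 = n_days * j                       # first epoch with a full history
    G = np.column_stack([x[i0 - m * j: len(x) - m * j]
                         for m in range(1, n_days + 1)])
    d = x[i0:]
    A, *_ = np.linalg.lstsq(G, d, rcond=None)
    return A, d - G @ A                   # weights and filtered residuals
```

On real data the residuals are the sidereally filtered positions; the variance reduction quoted in the text is the drop from the raw series to these residuals.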

3. Results

[11] The power spectra of the data (excluding LOWS and HOGS) were estimated after filling gaps using linear interpolation. For each component, the power spectral density (PSD) was estimated using window lengths between 1.1 hours and 12 days with a Hanning taper. The PSD estimates for each component were then averaged, and the results are shown in Figure 2.
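A minimal sketch of this estimation step, assuming a single gap-filled component and using Welch averaging with a Hann taper (the function name and the synthetic white-noise series are illustrative, not the authors' processing):

```python
import numpy as np
from scipy.signal import welch

def psd_with_gaps(x, fs=1.0, nperseg=4096):
    """Welch PSD with a Hann taper after linearly interpolating NaN
    gaps, following the gap-filling approach described in the text."""
    x = np.asarray(x, dtype=float).copy()
    bad = np.isnan(x)
    if bad.any():
        idx = np.arange(len(x))
        x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return welch(x, fs=fs, window='hann', nperseg=nperseg)

# Hypothetical 1 Hz component with ~6 mm white scatter and a short
# telemetry gap, for illustration only.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 6.0, 20000)
x[500:520] = np.nan
f, P = psd_with_gaps(x, fs=1.0)   # one-sided PSD in mm^2/Hz
```

For white noise of variance σ², the one-sided density is flat at 2σ²/fs; the real Parkfield spectra instead rise as 1/f at periods shorter than about 8 hours.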

Figure 2.

Power spectra of GPS station positions.

[12] For all three components, north, east, and vertical, the PSDs have a 1/f character at high frequencies and are frequency independent at low frequencies. The change in character occurs near 3.0 × 10^−5 Hz, or a period of about 8 hours. This period is the commonly accepted occupation time used in survey-mode ("campaign") GPS.

[13] Bock et al. [2000] show PSDs for three baselines. For their 14 km baseline, which is similar in length to those reported here, their PSDs for the three components are similar to the composite presented here over the limited frequency band between 1 × 10^−5 and 2 × 10^−2 Hz. However, because we have more data at a higher sampling rate, and more baselines, the composite PSDs presented here better define the noise spectra over a wider frequency band.

[14] We can characterize the noise spectrum, P(f), as:

P(f) = P_o / (1 + (f/f_o)^n)

where the values of P_o, f_o, and the spectral index, n, can be estimated from the PSDs using least squares; our estimates are shown in Table 1 and compared with those from Bock et al. [2000].
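The fit can be sketched as follows. This is an illustration, not the authors' procedure: `fit_psd` is a hypothetical name, the fit is done in log space (spectra span many decades, so fitting log P equalizes the weighting), and the starting values echo the text's crossover near 3 × 10^−5 Hz with index near 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def band_limited_plaw(f, P0, f0, n):
    """P(f) = P0 / (1 + (f/f0)**n): flat below f0, power law above."""
    return P0 / (1.0 + (f / f0) ** n)

def fit_psd(f, P, p0=(1e4, 3e-5, 1.0)):
    """Least-squares fit of the PSD model in log space."""
    def logmodel(f, P0, f0, n):
        return np.log(band_limited_plaw(f, P0, f0, n))
    popt, _ = curve_fit(logmodel, f, np.log(P), p0=p0,
                        bounds=([1e-3, 1e-8, 0.1], [1e9, 1e-1, 4.0]))
    return popt   # P0, f0 [Hz], n
```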

Table 1. Fit of PSD Functions
Component | n (spectral index) | P_o amplitude (mm² Hz^(1−n)) | f_o frequency (10^−3 Hz)
This Report
Bock et al. [2000]

[15] The sensitivity of these GPS measurements can be assessed directly from the functional form of the PSD. The estimates of P_o, n, and f_o are used to construct a data covariance matrix [Williams, 2003a; Langbein, 2004]. With the data covariance matrix, various scenario functions that might describe deformation can be tested; the standard error in the amplitude of each function is determined from the data covariance. Two functions, an offset in the data and a rate, are used here to quantify the sensitivity of the GPS measurements. The test offset spans a time series of length 2T, with the offset occurring at t = T, half-way through the series. Since the rate over the time series is often unknown, the rate is estimated along with the offset. The test rate function is simply a linear-in-time function spanning a time series of length T.
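This procedure can be sketched numerically: build a Toeplitz data covariance from the PSD model, then propagate it through generalized least squares for an offset co-estimated with an intercept and a rate. This is a rough numerical sketch (published analyses use analytic covariance constructions [Williams, 2003a; Langbein, 2004]); the function names and the flicker-model parameters at the end are hypothetical, not the Table 1 values.

```python
import numpy as np
from scipy.linalg import toeplitz

def covariance_from_psd(P0, f0, n, dt, N, nf=20000):
    """Toeplitz data covariance from the one-sided PSD
    P(f) = P0/(1 + (f/f0)**n), via the autocovariance
    R(tau) = integral of P(f)*cos(2*pi*f*tau) df over the sampled band."""
    f = np.linspace(1.0 / (nf * dt), 0.5 / dt, nf)   # ~0 .. Nyquist
    df = f[1] - f[0]
    P = P0 / (1.0 + (f / f0) ** n)
    tau = np.arange(N) * dt
    R = (P[None, :] * np.cos(2.0 * np.pi * f[None, :] * tau[:, None])).sum(axis=1) * df
    return toeplitz(R)

def offset_sigma(C, dt):
    """1-sigma error of an offset at the series midpoint, with an
    intercept and a rate co-estimated, by generalized least squares."""
    N = C.shape[0]
    t = np.arange(N) * dt
    A = np.column_stack([np.ones(N), t, (t >= t[N // 2]).astype(float)])
    cov = np.linalg.inv(A.T @ np.linalg.inv(C) @ A)
    return np.sqrt(cov[2, 2])

# Illustrative flicker-like model: flat below f0 = 3e-5 Hz, 1/f above,
# 1 Hz sampling, 120 s of data (parameter values are hypothetical).
C = covariance_from_psd(P0=5e3, f0=3e-5, n=1.0, dt=1.0, N=120)
sig = offset_sigma(C, dt=1.0)
```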

[16] Results of estimating the sensitivity of 1-second, 1-minute, 30-minute, and 1-day sampled GPS data for both rate and offset are shown in Figure 3 for the horizontal positions. Although the data covariance for the first three intervals is derived from the PSDs in Figure 2, the data covariance for the daily sampled data is derived with maximum likelihood estimators [Williams et al., 2004; Langbein, 2004] from 3 years of solutions using standard processing techniques applied to continuous GPS data [Wdowinski et al., 1997; Zumberge et al., 1997]. The assumed functional form for the PSD of continuous GPS data is a combination of power-law noise and white noise. Based upon the relative amplitudes of the PSDs in Figure 2, the sensitivity of high-sampling-rate GPS solutions to vertical displacements would be roughly 7 to 8 times worse than for the horizontal displacements shown in Figure 3.

Figure 3.

Estimate of sensitivity of GPS, horizontal measurements to rate and displacements. Values of sensitivity are the 1 standard deviation level. The heavy, black curves are results derived from the PSD for horizontal position changes shown in Figure 2 but with 1-second, 1-minute, and 30-minute sampling. The thin curves are results derived from high precision, daily solutions described in the text. (a) Sensitivity in rate. (b) Sensitivity to offsets. The sensitivity assumes that the time of the offset is known.

[17] For rate sensitivity, the precision improves with the length of the time interval. For periods of less than 0.1 days, the precision in rate, σ_r, is proportional to 1/T, as expected for 1/f or flicker noise [Williams, 2003a]. For periods between 0.1 and 10 days, the precision is proportional to 1/T^1.5, as expected for a white noise process. Recall that the PSD function is constant for low frequencies, f < f_o, equivalent to the spectrum of white noise sampled at an interval of 1/f_o. For periods in excess of 10 days, the rate sensitivity flattens due to the effects of power-law noise with a spectral index near 2 [Williams, 2003a]. Close inspection of the rate-sensitivity plots in Figure 3 reveals a slight improvement in precision with higher sampling rates.

[18] The error in estimating the size of an offset shows some improvement with longer averaging intervals. With only 2 seconds of data, the standard error in the estimate of an offset is 5 mm, but with 60 samples of 1 Hz data, the sensitivity improves to 2 mm. For longer periods, between 0.5 and 10 days, the offset sensitivity is proportional to 1/T^0.5, as expected for a white noise process. Again, as with the rate sensitivity, the higher sampling rate yields a slight improvement in precision: at 0.01 days, 1-second sampling provides a 1 standard deviation precision of 2 mm, whereas 1-minute sampling provides 3 mm.

4. Conclusions

[19] Creepmeters can detect displacements of the order of 0.02 mm over periods between 20 minutes and 3 hours [Langbein et al., 1993]. At periods longer than 3 hours, the displacement sensitivity is roughly 0.04 × T mm, where T is the period in days. At short intervals, creepmeters are much more sensitive than GPS; the two techniques have equal sensitivity at about 10 days. Since typical creep events at Parkfield range from 0.5 to 2 mm over periods of 30 minutes to 1 day, these events will not be detected with GPS.

[20] If the GPS rate curves are normalized to a distance scale of about 10 km, then strain-rate sensitivity can be compared with borehole strain. For borehole strain, Johnston and Linde [2003] show a sensitivity relation of 0.004/T^0.5 ppm; for GPS, the relation is 0.55/T ppm. Thus, at a 20-minute period, the borehole strain sensitivity is 0.03 ppm but that of GPS is 40 ppm; borehole strain is roughly 3 orders of magnitude more sensitive than GPS. On the other hand, the strain field from fault slip decays as 1/r^3, whereas the displacement is proportional to 1/r^2, where r is the distance from the source. Hence, displacement estimated from high-frequency GPS buys back some of its deficit in rate sensitivity relative to borehole strain.
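The arithmetic behind this comparison can be checked directly from the two sensitivity relations quoted above, with T expressed in days:

```python
import math

T = 20.0 / (60.0 * 24.0)                 # 20 minutes = ~0.0139 days
strainmeter = 0.004 / math.sqrt(T)       # borehole strain sensitivity, ppm
gps = 0.55 / T                           # GPS normalized to ~10 km, ppm
ratio = gps / strainmeter                # ~3 orders of magnitude
```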

[21] The sensitivities provided here are at the 1 standard deviation level and should be multiplied by 3 to give 99% confidence. However, this degree of confidence is only valid if the time of the offset is precisely known. According to Williams [2003b], confidently estimating the time of the offset with minimal false alarms requires that the offset exceed 8 times its standard error.

[22] On the other hand, we anticipate a moderate-sized M6 earthquake at Parkfield, which is expected to have 30 cm of surface slip and sizable postseismic displacements [Smith and Wyss, 1968]. One basis for a short-term prediction of an earthquake at Parkfield is more than 5 mm of slip occurring at many sites on the fault over a 6 to 10 hour interval [Bakun et al., 1987]. More than 15 mm of slip would exceed the range of the creepmeters but would be measured with GPS.

[23] High-sampling-rate GPS complements the existing high-precision GPS provided by CGPS networks in two ways. For studying earthquake sources, high-rate GPS can provide an accurate record of strong shaking and direct estimates of the displacement history during the earthquake; seismometers, being inertial sensors, record velocity or acceleration, from which displacement must be recovered by integration, a process prone to drift. In addition, high-frequency GPS extends the dynamic range of the existing creepmeters at Parkfield, thereby allowing for the possibility of providing a time series of displacements during and immediately following a moderate-sized earthquake.


[24] We thank SCIGN for coordinating and installing many of the continuous stations; UC Berkeley and BARD for routinely downloading the 30-second data; and Glen Offield for designing and installing the radio telemetry system. Funding was provided by the Southern California Earthquake Center. Use of the Geodetics, Inc. RTD software for this project was provided by UCSD. Use of commercial software by the USGS does not constitute an endorsement.