Recurrence statistics of great earthquakes

Authors

  • E. Ben-Naim,

    1. Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    2. Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    • Corresponding author: E. Ben-Naim, Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87544, USA. (ebn@lanl.gov)

  • E. G. Daub,

    1. Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    2. Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    3. Institut des Sciences de la Terre, Université Joseph Fourier, Grenoble, France
  • P. A. Johnson

    1. Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico, USA

Abstract

[1] We investigate the sequence of great earthquakes over the past century. To examine whether the earthquake record includes temporal clustering, we identify aftershocks and remove them from the record. We focus on the recurrence time, defined as the time between two consecutive earthquakes, and study both the variance and the maximum of the recurrence time. Using these quantities, we compare the earthquake record with sequences of random events generated by numerical simulations, while systematically varying the minimal earthquake magnitude Mmin. Our analysis shows that the earthquake record is consistent with a random process for magnitude thresholds 7.0≤Mmin≤8.3, where the number of events is larger. Interestingly, the earthquake record deviates from a random process for magnitude thresholds 8.4≤Mmin≤8.5, where the number of events is smaller; however, this deviation is not strong enough to conclude that great earthquakes are clustered. Overall, the findings are robust both qualitatively and quantitatively, as statistics of extreme values and moment analysis yield remarkably similar results.

1 Introduction

[2] Remote triggering of large earthquakes, where one large earthquake causes another large earthquake at a global distance comparable to the size of the earth, is the subject of ongoing debate in geophysics. It is well known that earthquakes do cause aftershocks on local scales, at distances comparable to the size of the fault. In the last 20 years, it has been shown that seismic waves can dynamically trigger earthquakes at large distances [Hill et al., 1993; Gomberg et al., 2004; Freed, 2005], and more recently, that a large earthquake can trigger other large earthquakes at global distances [Pollitz et al., 2012]. However, other recent studies suggest that dynamic triggering of large earthquakes is not widespread [Parsons and Velasco, 2011; van der Elst et al., 2013]. Thus, dynamic triggering of large events at global distances remains an open question, one with potentially significant implications for hazard analysis and earthquake physics.

[3] Remote triggering necessarily implies that large earthquakes are correlated in time, that is, earthquakes are not equivalent to a random process. The increase in earthquake activity over the past decade, including three of the six largest events on record over the past century [Brodsky, 2009; Ammon et al., 2011], raises the question of whether great earthquakes are clustered (Figure 1).

Figure 1.

The sequence of large earthquakes during the years 1900−2012. Three thresholds are used: (bottom) Mmin=7.5, (middle) Mmin=8.0, and (top) Mmin=8.5.

[4] Recent studies have utilized a variety of statistical methods to examine whether the sequence of large earthquakes is consistent with a random process. The approaches used to analyze the earthquake record include, for example, statistics of the number of events in a fixed time interval and statistics of the time between events. However, the small number of powerful events constitutes a serious challenge for such investigations [Kerr, 2011; Dimer de Oliveira, 2012]. To date, some studies have reported deviations from random event statistics [Bufe and Perkins, 2005, 2011], while several others report that the earthquake record is consistent with random statistics [Michael, 2011; Shearer and Stark, 2012; Parsons and Geist, 2012; Daub et al., 2012].

[5] In this study, we focus on the recurrence time between successive earthquake events, a quantity that allows us to probe the most powerful events on record. Our statistical analysis quantifies typical properties as well as extremal properties of the recurrence time. Using numerical simulations, we generate a large number of random sequences, thereby allowing probabilistic comparison between the earthquake record and a random process.

2 Earthquake Record

[6] We analyze the earthquake event times in the U.S. Geological Survey PAGER (Prompt Assessment of Global Earthquake Response) catalog [Allen et al., 2009], supplemented by the Global CMT (Centroid Moment Tensor) catalog [Ekström et al., 2012]. These two catalogs comprise a global record from 1900 through 31 December 2012, containing 1770 events with magnitude M≥7 (Table 1).

Table 1. The Number of Large Events on Record During the Years 1900−2012ᵃ

  Mmin    All Events    Aftershocks Removed
  7.0     1770          1255
  7.5     447           371
  8.0     84            74
  8.5     19            17
  9.0     5             5
  9.5     1             1

  a. Listed are the total number of events (with and without aftershocks) versus the minimum magnitude Mmin.

[7] The catalog contains aftershocks, which must be identified to address whether earthquake occurrence is random over global distances. Removal of aftershocks is not a trivial procedure, as it requires assumptions that cannot be tested due to limited data [Marsan and Lengliné, 2008]. We identify aftershocks using a window method [Gardner and Knopoff, 1974]: any event close enough to a larger event in both space and time is considered an aftershock and is removed from the catalog. We examine a variety of choices for the distance and time windows and verify that our conclusions are robust with respect to the aftershock removal procedure. In the following, we use the time window of the original Gardner and Knopoff study, and our choice for the distance window is a purposely conservative estimate of the rupture length for a given magnitude (i.e., an overestimated spatial extent of aftershocks), based on an empirical scaling law [Wells and Coppersmith, 1994]. We note that our analysis classifies two of the M≥8.5 events as aftershocks: the M=8.6 2005 Nias earthquake and the M=8.5 2007 Sumatra earthquake, both aftershocks of the 2004 M=9.0 Sumatra-Andaman earthquake. Without aftershocks, the catalog contains 1255 events (Table 1). For completeness, we include in our investigation both the raw earthquake catalog and the catalog with aftershocks removed.
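To make the windowing step concrete, the following Python sketch illustrates one way to flag aftershocks; the window functions, the catalog format, and all numerical constants are hypothetical placeholders rather than the values used in this study.

```python
# Illustrative window-based aftershock identification (Gardner-Knopoff style).
# The window functions below are hypothetical placeholders, not the windows
# used in this study.
import numpy as np

def time_window_days(mag):
    # Placeholder time window that grows with magnitude.
    return 10 ** (0.5 * mag - 1.0)

def distance_window_km(mag):
    # Placeholder, deliberately generous rupture-length estimate, standing in
    # for the Wells-Coppersmith based window described in the text.
    return 3.0 * 10 ** (0.5 * mag - 1.8)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def remove_aftershocks(events):
    """events: list of dicts with keys 't' (days), 'lat', 'lon', 'mag'.
    An event is flagged as an aftershock if an earlier, larger event lies
    within both the time and distance windows of that larger event."""
    mainshocks = []
    for e in events:
        is_aftershock = False
        for big in events:
            if big['mag'] <= e['mag'] or big['t'] >= e['t']:
                continue
            close_in_time = e['t'] - big['t'] <= time_window_days(big['mag'])
            close_in_space = (great_circle_km(e['lat'], e['lon'],
                                              big['lat'], big['lon'])
                              <= distance_window_km(big['mag']))
            if close_in_time and close_in_space:
                is_aftershock = True
                break
        if not is_aftershock:
            mainshocks.append(e)
    return mainshocks
```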

3 Recurrence Statistics

[8] The basic quantity in our analysis is the recurrence time, defined as the time between two successive events. Recurrence times are commonly used to characterize seismic activity. For a random process, where events occur at a constant rate and there are no correlations between different events, the cumulative distribution P(t) of recurrence intervals that are larger than t is purely exponential,

P(t) = exp(−t/〈t〉).   (1)

Here, 〈t〉 is the average recurrence time.
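As a minimal illustration (not part of the original analysis), the observed recurrence intervals can be compared with the exponential form (1) as follows; the array of event times is a placeholder.

```python
# Minimal sketch of comparing observed recurrence intervals with the
# exponential form of equation (1); the event-time array is a placeholder.
import numpy as np

def recurrence_distribution(event_times):
    """Return the sorted recurrence intervals, the empirical fraction of
    intervals larger than each value, and the prediction of equation (1)."""
    gaps = np.sort(np.diff(np.sort(np.asarray(event_times, dtype=float))))
    p_empirical = 1.0 - np.arange(1, gaps.size + 1) / gaps.size
    p_random = np.exp(-gaps / gaps.mean())
    return gaps, p_empirical, p_random
```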

[9] As the magnitude threshold increases, the number of events becomes smaller and the distribution of recurrence times can be probed only over a smaller range. Consequently, the tail of the distribution, which quantifies the likelihood of large gaps between events, becomes difficult to measure. To address this issue and to systematically probe high magnitudes, we analyze a standard measure for fluctuations, the normalized variance

V = (〈t²〉 − 〈t〉²)/〈t〉².   (2)

Here the bracket denotes an average over all recurrence intervals in the sequence. The variance involves the lowest (nontrivial) integer moment of the distribution, yet, as discussed below, we also analyze a range of other moments.

[10] We use numerical simulations to characterize how the normalized variance behaves for a random process. We generate a very large number (10⁸) of random sequences where the recurrence times are identically and independently distributed variables, drawn from the exponential distribution (1). The number of events N and the average recurrence time 〈t〉 are set by the earthquake record for each magnitude threshold Mmin. By simulating the precise number of events N on record, our analysis properly quantifies the large fluctuations that are expected when the number of events is small.
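A minimal sketch of this Monte Carlo reference is given below; the event count and mean recurrence time are illustrative placeholders, and far fewer sequences are drawn than the 10⁸ used in the study.

```python
# Minimal sketch of the Monte Carlo reference for the normalized variance.
# The event count and mean recurrence time below are illustrative only.
import numpy as np

def normalized_variance(intervals):
    """V = (<t^2> - <t>^2) / <t>^2, equation (2), for one sequence."""
    t = np.asarray(intervals, dtype=float)
    return t.var() / t.mean() ** 2

def random_sequence_variances(n_events, mean_t, n_sequences=10**5, seed=0):
    """Normalized variance of many synthetic random sequences; N events
    correspond to N - 1 exponentially distributed recurrence intervals."""
    rng = np.random.default_rng(seed)
    gaps = rng.exponential(mean_t, size=(n_sequences, n_events - 1))
    return gaps.var(axis=1) / gaps.mean(axis=1) ** 2

# Illustration: 17 events with a hypothetical 7-year mean recurrence time.
v = random_sequence_variances(n_events=17, mean_t=7.0)
mean_V, delta_V = v.mean(), v.std()   # the <V> and dV of Figure 2a
```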

[11] We measure the average variance, 〈V〉, and the standard deviation in the variance, δV, defined by (δV)² = 〈V²〉 − 〈V〉² (here, the bracket denotes an average over all random sequences). As shown in Figure 2a, the normalized variance is close to unity when the number of events is large, but when the number of events is small (at large magnitudes), the expected variance decreases, and the standard deviation becomes comparable to the mean. We also confirm that, as expected, δV ∼ N^(−1/2) for large N.

Figure 2.

(a) The average variance 〈V〉 and the standard deviation of the variance δV as a function of the magnitude threshold Mmin. These quantities correspond to a random sequence with a number of events that matches that of the earthquake record (aftershocks removed). (b) Normalized variance V as a function of Mmin. Shown are the behaviors with and without aftershocks. (c) The number of standard deviations away from the mean, σ, defined in equation (3), versus Mmin.

[12] The normalized variance defined in equation (2) is shown as a function of the threshold magnitude in Figure 2b. Using the average 〈V〉 and the standard deviation δV obtained from the simulated sequences, we also calculate σ, the number of standard deviations away from the mean (Figure 2c),

σ = (V − 〈V〉)/δV.   (3)

For most magnitude thresholds, even without removing aftershocks, the quantity σ is not large, evidence that the earthquake sequence is consistent with a random process. There are, however, three significant peaks that indicate potential deviations from random event statistics. First, at the magnitude thresholds 7.0≤Mmin≤7.2, the raw earthquake catalog deviates from a random process, but once aftershocks are removed, these deviations are largely eliminated. Second, there is a peak at Mmin=7.8, but again, this peak is eliminated once aftershocks are removed. Third, the most pronounced peak occurs when 8.4≤Mmin≤8.5. In this case, however, removing aftershocks diminishes the magnitude of the peak only slightly (for such powerful earthquakes, aftershocks are of course rare; see Table 1).

[13] To quantify the significance of the peaks in the quantity σ, we use probabilistic analysis. Such analysis requires numerical simulations because the distribution of the variance depends strongly on the number of events: this distribution approaches a normal distribution as the number of events becomes very large, but it is much broader when the number of events is small. Specifically, we measure the fraction FV of simulated random sequences whose normalized variance exceeds the empirical value V. Figure 3 shows the fraction FV as a function of the magnitude threshold Mmin. For each peak in σ, there is a corresponding dip in the fraction FV. These dips are mostly suppressed once aftershocks are removed. Yet, the dip at the narrow band 8.4≤Mmin≤8.5 is robust. At Mmin=8.5, we find FV≈1/300, that is, only one in about 300 random sequences has a variance that exceeds that of the earthquake data. This small fraction implies that the earthquake record deviates from a random process at this particular magnitude threshold. As pointed out by Shearer and Stark [2012], because Mmin=8.5 is chosen a posteriori, the measured fraction FV may represent an underestimate. Regardless, the fraction FV is not sufficiently small to conclude with confidence that the earthquake record violates random statistics, or equivalently, that there are temporal correlations (or causal relationships) between large events.
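The probabilistic comparison can be sketched in a few lines, reusing the same exponential sequences; the observed variance and catalog parameters shown here are hypothetical.

```python
# Sketch of the probabilistic comparison: the fraction F_V of random
# sequences whose normalized variance exceeds the observed value. The
# observed variance and catalog parameters below are hypothetical.
import numpy as np

def fraction_exceeding(v_observed, n_events, mean_t,
                       n_sequences=10**5, seed=1):
    rng = np.random.default_rng(seed)
    gaps = rng.exponential(mean_t, size=(n_sequences, n_events - 1))
    v = gaps.var(axis=1) / gaps.mean(axis=1) ** 2
    return float(np.mean(v > v_observed))

# Hypothetical usage; v_observed would be computed from the catalog
# at the chosen threshold Mmin:
# F_V = fraction_exceeding(v_observed=3.0, n_events=17, mean_t=7.0)
```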

Figure 3.

The fraction FV of random sequences with variance exceeding the empirical value versus magnitude threshold Mmin. Shown are results for the raw catalog (circles) and the catalog with aftershocks removed (squares). The error bars were produced using the moment analysis described in the main text.

[14] As a reference, our simulations show that if 3, 6, or 9 additional M≥8.5 events occur over the next decade [Shearer and Stark, 2012], the quantity FV would drop to 1.8×10⁻³, 2.9×10⁻⁴, and 9×10⁻⁵, respectively. The change in the quantity FV with even a few additional events illustrates the uncertainties associated with such small catalogs.

[15] For further insight, we examine statistical properties of the maximal recurrence time tmax, corresponding to the longest quiescent period between consecutive earthquakes. Similar to the probabilistic analysis above, we measure the fraction Fmax of random sequences where the maximal recurrence time exceeds tmax. When the number of events is large, this fraction is given by the formula Fmax = 1 − [1 − exp(−tmax/〈t〉)]^N. Figure 4a shows the fraction Fmax as a function of Mmin. Statistics of the largest recurrence time are strongly correlated with those of the variance: the fraction Fmax mirrors the behavior of the fraction FV (Figures 3 and 4a). Moreover, if we restrict our attention to magnitudes Mmin≥7.7, where aftershocks are rare, the two fractions are remarkably close to each other (Figure 4b). Indeed, we verify that simulated random sequences with a maximal gap that exceeds tmax also have a variance that exceeds V. This extreme event analysis demonstrates that the very large 39.9 year gap separating two clusters of activity, one during 1950–1965 and one during 2004–2012, is responsible for the anomalously large variability observed at the magnitude threshold Mmin=8.5 (Figure 1).
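The quoted large-N expression and a direct Monte Carlo estimate of Fmax can be compared as in the sketch below; all numerical values are placeholders.

```python
# The large-N expression quoted in the text for Fmax, together with a direct
# Monte Carlo estimate; all numerical values are placeholders.
import numpy as np

def f_max_formula(t_max, mean_t, n_events):
    """Fmax = 1 - [1 - exp(-t_max/<t>)]^N."""
    return 1.0 - (1.0 - np.exp(-t_max / mean_t)) ** n_events

def f_max_simulated(t_max, mean_t, n_events, n_sequences=10**5, seed=2):
    """Fraction of random sequences whose longest gap exceeds t_max."""
    rng = np.random.default_rng(seed)
    gaps = rng.exponential(mean_t, size=(n_sequences, n_events - 1))
    return float(np.mean(gaps.max(axis=1) > t_max))

# Illustration only: a 39.9-year gap among 17 events with a 7-year mean.
# print(f_max_formula(39.9, 7.0, 17), f_max_simulated(39.9, 7.0, 17))
```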

Figure 4.

(a) Fraction Fmax of random sequences where the maximal recurrence time exceeds the largest recurrence time on record versus magnitude threshold Mmin. (b) The fraction FV (see also Figure 3) and the fraction Fmax for the earthquake catalog without aftershocks.

[16] Previous statistical analysis based on the number of events in a given time interval revealed deviations from random event statistics in this magnitude range that can be traced to magnitude uncertainties in the earlier part of the century [Daub et al., 2012]. To assess the effects of uncertainties in the earthquake magnitude [Engdahl and Villasenor, 2002], we introduce unbiased variations in the magnitude, M → M + δM, where δM represents a potential measurement error. The quantity δM is drawn from a uniform distribution in the range −ΔM ≤ δM ≤ ΔM. We systematically increase the range ΔM up to as high as ΔM=0.8 and repeat the analysis used to obtain Figures 2–4. Each data point is obtained using 10⁸ simulated catalogs: 10⁴ distinct modifications of the original earthquake catalog were generated, and for each modification, 10⁴ simulated catalogs were produced. The fractions FV and Fmax become smoother as the range ΔM increases (Figure 5), and moreover, the dips at Mmin=8.5 are strongly suppressed. We also consider situations where the magnitude is always underestimated or overestimated by uniformly drawing δM in the range 0 ≤ δM ≤ ΔM or −ΔM ≤ δM ≤ 0, respectively. Biased errors lead to the same patterns shown in Figure 5. We also verify that variations δM drawn from a normal distribution with standard deviation ΔM lead to similar results. Consistent with Parsons and Geist [2012], the magnitude uncertainty analysis supports the conclusions of our statistical analysis.
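A sketch of the unbiased perturbation step follows; the catalog arrays, the threshold, and the number of modifications are placeholders, and only the uniform-error variant is shown.

```python
# Sketch of the magnitude-perturbation step for unbiased uniform errors;
# catalog arrays, threshold, and modification count are placeholders.
import numpy as np

def perturbed_recurrence_variances(event_times, event_mags, m_min, delta_m,
                                   n_modifications=10**4, seed=3):
    """Add an unbiased uniform error in [-delta_m, delta_m] to every
    magnitude, re-apply the threshold m_min, and return the normalized
    variance of the recurrence times of each modified catalog."""
    rng = np.random.default_rng(seed)
    times = np.asarray(event_times, dtype=float)
    mags = np.asarray(event_mags, dtype=float)
    variances = []
    for _ in range(n_modifications):
        noise = rng.uniform(-delta_m, delta_m, size=mags.size)
        selected = np.sort(times[mags + noise >= m_min])
        if selected.size < 3:      # need at least two recurrence intervals
            continue
        gaps = np.diff(selected)
        variances.append(gaps.var() / gaps.mean() ** 2)
    return np.array(variances)
```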

Figure 5.

Magnitude uncertainty analysis for the earthquake catalog (aftershocks removed). Shown are (a) the quantity FV, as in Figure 3, and (b) the quantity Fmax, as in Figure 4, versus Mmin. The parameter ΔM sets the range of magnitude uncertainty; the different curves correspond to different values of ΔM.

[17] To examine whether the results are sensitive to the particular measure of variability (2), we repeat the analysis using the normalized moments Mn = 〈t^n〉/〈t〉^n instead of the variance V = M2 − 1. We examine a series of moments in the range 1.25≤n≤4 and again measure the fraction FM of simulated catalogs where the moment Mn exceeds the value measured for the earthquake data. By varying the parameter n and identifying the maximal and minimal fractions FM, we produce the error bars shown in Figure 3. The results of this moment analysis confirm that the dips in the quantity FV are robust.
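For completeness, a minimal sketch of the moment measure; n is the tunable exponent scanned over 1.25≤n≤4 in the text.

```python
# Minimal sketch of the normalized-moment measure; n is the exponent that
# the text scans over 1.25 <= n <= 4.
import numpy as np

def normalized_moment(intervals, n):
    """Mn = <t^n> / <t>^n; the normalized variance is M2 - 1."""
    t = np.asarray(intervals, dtype=float)
    return float(np.mean(t ** n) / np.mean(t) ** n)
```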

4 Conclusions

[18] In summary, we analyzed typical and extremal properties of the time intervals between large earthquakes. The results of our statistical tests reconcile recent studies that address the question: “Are great earthquakes clustered?” Our study yields three important conclusions.

[19] First, in the magnitude threshold range 7.0≤Mmin≤8.3, which contains the vast majority of great earthquakes on record, the earthquake sequence does not exhibit significant deviations from a random set of events. These findings reinforce the results of several studies [Michael, 2011; Shearer and Stark, 2012; Parsons and Velasco, 2011; Daub et al., 2012]. At several threshold magnitudes, the earthquake record is consistent with a random process even if aftershocks are not removed from the catalog.

[20] Second, the roughly 20 most powerful events on record, corresponding to the magnitude thresholds 8.4≤Mmin≤8.5, deviate from a random sequence of events. This departure is tied to the anomalously long gap between two clusters of events, one in the middle of the century and one over the past decade, an observation also noted by Bufe and Perkins [2005, 2011] and Shearer and Stark [2012]. However, this departure is not sufficiently strong to conclude that there are temporal correlations between great earthquakes: the likelihood that a random sequence matches the variability in the data (≈1/300) is equivalent to only ≈2.6 standard deviations from the mean for a normal distribution.

[21] Third, the results are qualitatively and quantitatively robust. Analysis of average properties and analysis of extremal properties of the recurrence time lead not only to similar conclusions but also to very similar likelihood figures that the observed sequence of events can be explained by a random process. We have also considered magnitude uncertainties using unbiased and biased measurement errors in earthquake magnitude and observed that such errors systematically suppress deviations from random event statistics.

[22] Finally, our study uses the average recurrence time as a measure for the overall rate of events. Uncertainties in the overall rate of events are significant when the number of events is small, and an important challenge for future research is to generalize the analysis above to incorporate uncertainties in the overall rate of events.

Acknowledgments

[23] We thank Robert Guyer, Robert Ecke, Joan Gomberg, and Thorne Lay for comments. We gratefully acknowledge support for this research through DOE grant DE-AC52-06NA25396.

[24] The Editor thanks Andrew Michael and an anonymous reviewer for their assistance in evaluating this manuscript.