Are megaquakes clustered?

Authors

  • Eric G. Daub,

    1. Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    2. Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • Eli Ben-Naim,

    1. Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    2. Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • Robert A. Guyer,

    1. Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    2. Physics Department, University of Nevada, Reno, Nevada, USA
  • Paul A. Johnson

    1. Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico, USA

Abstract

[1] We study statistical properties of the number of large earthquakes over the past century. We analyze the cumulative distribution of the number of earthquakes with magnitude larger than a threshold M in a time interval T, and quantify the statistical significance of the results by simulating a large number of synthetic random catalogs. We find that, in general, the earthquake record cannot be distinguished from a process that is random in time. This conclusion holds whether or not aftershocks are removed, except at magnitude thresholds below M = 7.3. At long time intervals (T = 2–5 years), we find statistically significant clustering in the catalog for lower magnitude thresholds (M = 7–7.2). However, this clustering is due to a large number of earthquakes on record in the early part of the 20th century, when magnitudes are less certain.
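The significance test described above can be illustrated with a short sketch: count the largest number of events falling in any window of length T, then compare that count against the same statistic computed for many synthetic catalogs whose event times are drawn uniformly at random over the same period. This is a simplified illustration, not the authors' code; the catalog here is hypothetical, and the function and parameter names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_count_in_window(times, T):
    """Largest number of events in any window of length T (years)."""
    t = np.sort(np.asarray(times))
    # For each event i, count events with times in [t_i, t_i + T)
    right = np.searchsorted(t, t + T, side="left")
    return int(np.max(right - np.arange(len(t))))

def cluster_p_value(times, T, span=100.0, n_sim=1000):
    """Fraction of uniform-random synthetic catalogs (same event count,
    same time span) whose maximum window count matches or exceeds the
    observed catalog's — a small value suggests temporal clustering."""
    n = len(times)
    observed = max_count_in_window(times, T)
    hits = 0
    for _ in range(n_sim):
        synth = rng.uniform(0.0, span, n)  # random-in-time null hypothesis
        if max_count_in_window(synth, T) >= observed:
            hits += 1
    return hits / n_sim

# Hypothetical catalog: 75 event times spread over a century
catalog = rng.uniform(0.0, 100.0, 75)
p = cluster_p_value(catalog, T=3.0)
```

A catalog that is itself random in time should yield an unremarkable p-value, while a catalog with many events packed into a short span yields a p-value near zero, mirroring the comparison against synthetic random catalogs described in the abstract.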
