## 1. Introduction

[2] The faults on which earthquakes occur are not simple planar structures, but contain bends, splays, and steps. These geometrical features are used to define fault segments, which in turn are used to delineate expected future large events. The role of fault segmentation in determining future large earthquakes is not, however, well understood. While there are many instances of large earthquakes initiating and terminating at geometrical discontinuities [*King and Nabelek*, 1985], there are also counterexamples such as the 1992 M7.3 Landers event, which jumped across two segment stepovers and then died in the middle of a third segment. Underlying these complications are the long repeat times of large earthquakes, of order hundreds of years, which make simple observational answers hard to find. Despite the limited observations, current planning efforts for future earthquakes revolve centrally around the concept of fault segmentation: fault segments are defined, and panels of experts then vote on which segments they think might break separately or together [*Working Group on California Earthquake Probabilities*, 2002]. Clearly, there is a need for more scientific understanding of this problem.

[3] Improvements in our understanding of the physics operating on various timescales have allowed improvements in our ability to do time dependent hazard estimation [*Dieterich*, 1994; *Parsons et al.*, 2000]. On the long timescales used for planning and mitigation purposes (e.g., the 50 year probabilities used in the national hazard maps), a critical parameter affecting these hazard estimates is the coefficient of variation of large event repeat times (the standard deviation of the repeat times divided by the mean repeat time). For large coefficients of variation there is little change in the probability of large events occurring over the course of the earthquake cycle, and the time dependence of long term probabilities becomes negligible. In contrast, for smaller coefficients of variation the distribution approaches a periodic one, the probabilities change more markedly during the earthquake cycle, and the potential for doing time dependent long term hazard estimation becomes significant. What the appropriate value or values of the coefficient of variation are for earthquakes remains a hotly debated topic, with major implications for earthquake predictability and hazard estimates [*Working Group on California Earthquake Probabilities*, 2002; *Lindh*, 2003].
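To make the role of the coefficient of variation concrete, the following minimal sketch evaluates the conditional probability of a large event in a fixed future window, given the time elapsed since the last event, under a lognormal renewal model. The lognormal form and all the numerical values (a 200 year mean repeat time, a 50 year window) are illustrative assumptions chosen here, not results or choices from this paper.

```python
# Illustrative sketch (not from this paper): conditional probability of a
# large event in the next dt years, given t years elapsed since the last
# one, under a lognormal renewal model with mean repeat time tbar and
# coefficient of variation cv.
import math

def lognormal_cdf(t, tbar, cv):
    # Lognormal parameters matched to the specified mean and CV.
    sigma2 = math.log(1.0 + cv * cv)
    mu = math.log(tbar) - 0.5 * sigma2
    z = (math.log(t) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_prob(t, dt, tbar, cv):
    """P(event in (t, t+dt] | no event up to time t)."""
    F_t = lognormal_cdf(t, tbar, cv)
    return (lognormal_cdf(t + dt, tbar, cv) - F_t) / (1.0 - F_t)

# Mean repeat time 200 yr, 50 yr window, early vs. late in the cycle:
for cv in (0.2, 0.8):
    early = conditional_prob(50.0, 50.0, 200.0, cv)
    late = conditional_prob(150.0, 50.0, 200.0, cv)
    print(f"cv={cv}: early-cycle={early:.3f}, late-cycle={late:.3f}")
```

With the small coefficient of variation the 50 year conditional probability climbs by orders of magnitude through the cycle, while with the large one it stays nearly flat, which is why the value of this single parameter matters so much for time dependent hazard estimation.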

[4] Fueling the controversy is the paucity of observational data from which values can be obtained. Important constraints have been derived from direct observations of the time intervals between large events in the few areas with historical records [*Nishenko and Buland*, 1987; *Lindh*, 2003; *Sykes*, 2003]. There are, however, a number of limitations with this approach, including the small number of events in each sequence, and thus the need to average over widely different fault systems, and the long times between large events, hundreds of years, which preclude much additional improvement in the data.

[5] Other observational contributions have come from paleoseismic trenches, which record sequences of ruptures at individual points along a fault. Trenches, however, have yielded only limited sequence lengths, and concerns about missing events, which may be difficult to see or may have ruptured nearby branches, further complicate these efforts. Even for perhaps the best recorded site, Wrightwood, where a remarkable 14 events have been dated [*Fumal et al.*, 2002], a further issue complicates a simple interpretation of the data: it has been argued that the site may lie near an overlap of large events rupturing to the north and to the south, so that the relatively large coefficient of variation measured there is not typical of values along the length of the fault. Given these observational limitations, and the difficulty of obtaining further data, other approaches which can contribute to this problem are clearly needed.
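The measurement being discussed here reduces to simple arithmetic on a dated event chronology: difference the event dates to get repeat intervals, then divide their standard deviation by their mean. The sketch below illustrates this; the event dates are hypothetical, chosen only to show the calculation, and are not the Wrightwood data.

```python
# Minimal sketch: estimating the coefficient of variation of large-event
# repeat times from a paleoseismic event chronology. The dates below are
# hypothetical, not data from any real trench site.
import statistics

def coefficient_of_variation(event_dates):
    """CV = (std dev of repeat intervals) / (mean repeat interval)."""
    dates = sorted(event_dates)
    intervals = [b - a for a, b in zip(dates, dates[1:])]
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Hypothetical chronology (years A.D.): 8 events -> 7 repeat intervals.
dates = [534, 697, 722, 850, 1016, 1116, 1263, 1487]
print(round(coefficient_of_variation(dates), 2))
```

Even this toy case shows the sample-size problem noted above: with only a handful of intervals, a single missed or misdated event can shift the estimated coefficient of variation substantially.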

[6] Here we present numerical results from a newly developed model which generates long sequences of elastodynamic events on complex fault systems [*Shaw*, 2004]. The model has a number of features which are important to bring to bear on this problem. First, it generates a complex fault system geometry self-consistently, through a physical mechanism rather than by external imposition. This self-consistency is important in ensuring that strain is compatibly accommodated over many earthquake cycles. It also reduces the number of things which must be specified, by allowing the fault system to self-organize from a simple physics, here a random strength heterogeneity combined with a long term slip weakening. The complex geometry makes it possible to study the role of fault geometry in the problem, which is particularly important since fault segmentation is a foundation upon which seismic hazard maps are built. Second, the model self-consistently generates sequences of elastodynamic events on the fault system. Long sequences are critical here because the stresses left over by previous events form the setting for subsequent events. The self-consistency and elastodynamics allow us to study the interaction of geometry and dynamics and to simulate cascading ruptures: studies of individual ruptures on segmented faults have illuminated the critical role of the prestress in the ability of ruptures to jump stepovers [*Harris et al.*, 1991], and here the sequences generate their own distributions of prestress. Finally, our ability to simulate long sequences of events allows us not only to reach a representative population of events, the attractor of the dynamics, but also to examine statistical measures of the system over the timescale of many earthquake cycles, and thus to elucidate quantitative measures relating dynamics, geometry, and the variation of large events.