Using Markov chain successional models backwards

Authors


A Solow, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA (fax 508 4572184; e-mail asolow@whoi.edu)

Summary

  1. Markov chains are commonly used to model succession in plant and animal communities. Once fitted to data, these models are typically used to address ecological issues concerning future successional states. In some situations it may be of interest to use the present successional state to reconstruct past conditions.
  2. The properties of a time-reversed Markov chain are reviewed and used to provide an expression for the conditional probability distribution of the most recent time that the chain was in a particular successional state given its present state.
  3. The estimation of this conditional probability is discussed and a parametric bootstrap is described for constructing a confidence interval.
  4. The calculations are illustrated using a published Markov chain model of succession in a rocky subtidal community.
  5. Synthesis and applications. The term succession refers to the progressive changes over time in the state of an ecological community. Although most analyses of succession focus on characterizing the future, in some situations interest centres on reconstructing the past. For example, forensic entomologists are commonly interested in estimating time of death from the insect community present on a corpse. When succession can be modelled as a Markov chain, the results obtained here can be used for this kind of reconstruction.

Introduction

Markov chains are commonly used to model the dynamics of succession in a variety of settings, including forests (Waggoner & Stephens 1970; Horn 1975; Runkle 1981), plant communities (Isagi & Nakagoshi 1990; Aaviksoo 1995), insect assemblages (Usher 1979), coral reefs (Tanner, Hughes & Connell 1994, 1996) and rocky intertidal communities (Wootton 2001a,b). Hill, Witman & Caswell (2004) gave an excellent account of these models and discussed their use in addressing questions of ecological interest. The questions considered by Hill, Witman & Caswell (2004) concern what might be called the forward problem: characterizing the future successional state given its present state. In some situations, interest may centre on the backward problem of characterizing the successional history given the present state. For example, Hill, Witman & Caswell (2004) described a Markov chain model of succession at a location in a rocky subtidal zone. The successional states in this model represent 14 species and bare rock. A typical backward question is: given the present state, when was the last time that the location was bare rock? The purpose of this note is to show how the Markov chain model can be exercised to address this question.

The remainder of this paper is organized in the following way. The necessary Markov chain theory is outlined in the following section. The issue of estimation is then taken up. An illustration based on results reported by Hill, Witman & Caswell (2004) is presented. The final section contains some brief concluding remarks.

Theory

The results about Markov chains used in this section can be found in Iosifescu (1980). Let the random variable Xt denote the successional state at discrete time t with state space {1, 2, … , n}. The dynamics of Xt are assumed to be governed by a stationary ergodic Markov chain with n-by-n transition matrix P = [pjk], where pjk is the conditional probability that Xt = k given that Xt−1 = j. A Markov chain is stationary if its transition matrix does not change over time and ergodic if it is irreducible (i.e. each state can eventually be reached from every other state), positive recurrent (i.e. the expected return time to each state is finite) and aperiodic (i.e. returns to each state are not restricted to multiples of some fixed period greater than 1). For later use, let πj be the limiting probability that Xt = j. This limiting probability is unique for a regular stationary ergodic Markov chain. A Markov chain is regular if there exists an integer k such that all elements of Pk are positive (i.e. for some integer k, every state is accessible from every other state in k steps). The vector π = (π1, π2, … , πn)′ of these limiting probabilities is given by the eigenvector corresponding to the unit eigenvalue of P′, scaled so that its elements have unit sum. Here and below, ′ denotes the vector or matrix transpose. Note that Hill, Witman & Caswell (2004) defined the transition matrix as the transpose of the definition used here. As the definition used here is standard, it will be retained.
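As a concrete illustration of this step, the sketch below (Python with NumPy; the 3-state matrix P is invented for illustration and is not any matrix discussed here) computes π as the eigenvector of P′ associated with the unit eigenvalue, rescaled to have unit sum.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); not taken from the paper.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

def stationary_distribution(P):
    """Return pi with pi' P = pi', scaled so that its elements sum to 1."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(eigvals - 1.0))   # eigenvalue closest to 1
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()

pi = stationary_distribution(P)            # limiting probabilities pi_1, ..., pi_n
```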

Consider a random time t and define the random variable:

\[ S_x = \min\{\, s \ge 1 : X_{t-s} = x \,\} \qquad \text{(eqn 1)} \]

That is, Sx is the time since the successional state was last in state x. Interest centres on the conditional probability:

\[ p_x(s \mid j) = \Pr(S_x = s \mid X_t = j) \qquad \text{(eqn 2)} \]

that Sx = s given that Xt = j. This section derives an expression for the conditional probability in equation 2.

Define the time-reversed successional process Yu = Xt−u. Under the assumption that the original Markov chain is ergodic and stationary, this time-reversed process is also a Markov chain with transition matrix Q = [qjk] where:

\[ q_{jk} = \frac{\pi_k \, p_{kj}}{\pi_j} \qquad \text{(eqn 3)} \]

This result, which follows directly from Bayes’ theorem, allows the backward problem for the original Markov chain to be expressed as a forward problem for the time-reversed Markov chain. Specifically, Sx represents the so-called first passage time to state x of the time-reversed chain and equation 2 gives the conditional probability mass function of this first passage time given that Y0 = j.
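A minimal sketch of equation 3, reusing the invented P and the stationary_distribution helper from the sketch above, constructs the reversed-chain transition matrix elementwise as qjk = πk pkj/πj:

```python
import numpy as np

def reversed_chain(P, pi):
    """Transition matrix of the time-reversed chain: Q[j, k] = pi[k] * P[k, j] / pi[j]."""
    return (P.T * pi[np.newaxis, :]) / pi[:, np.newaxis]

Q = reversed_chain(P, pi)   # rows of Q sum to 1 because pi is stationary for P
```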

An expression for the probability in equation 2 can be found using the following standard trick. Let Q̃ be the n-by-n transition matrix formed by modifying Q so that its element in row x and column x is 1 and all of its other elements in row x are 0. Under this modification, the state x is a so-called absorbing state: once the chain arrives at state x, it remains there. Let e be the n-vector with jth element equal to 1 and all other elements equal to 0. The elements of the vector:

\[ r(s) = (\tilde{Q}')^{s} e \qquad \text{(eqn 4)} \]

give the state probabilities for the modified Markov chain at time s given initial state j. Let rx(s) be the element of r(s) corresponding to state x. It follows that rx(s) is the probability that, starting in state j, the first passage time to state x of the time-reversed Markov chain is less than or equal to s. In other words, rx(s) is the cumulative distribution function of Sx evaluated at s. Finally, the basic result is:

\[ p_x(s \mid j) = r_x(s) - r_x(s-1) \qquad \text{(eqn 5)} \]
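The calculation in equations 4 and 5 can be carried out by repeated matrix–vector multiplication rather than explicit matrix powers. A minimal sketch, again reusing the invented Q from the sketches above and assuming the present state j differs from the target state x:

```python
import numpy as np

def backward_first_passage_pmf(Q, x, j, s_max):
    """p_x(s | j) for s = 1, ..., s_max via the absorbing-state modification of Q."""
    n = Q.shape[0]
    Q_abs = Q.copy()
    Q_abs[x, :] = 0.0
    Q_abs[x, x] = 1.0                    # make state x absorbing
    r = np.zeros(n)
    r[j] = 1.0                           # the vector e: chain starts in state j
    cdf = np.zeros(s_max + 1)            # cdf[s] = Pr(S_x <= s | X_t = j)
    for s in range(1, s_max + 1):
        r = Q_abs.T @ r                  # r(s) = (Q_abs')^s e, built up one step at a time
        cdf[s] = r[x]
    return np.diff(cdf)                  # p_x(s | j) = r_x(s) - r_x(s - 1)

pmf = backward_first_passage_pmf(Q, x=0, j=1, s_max=100)   # state indices are 0-based here
```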

Estimation

The theoretical results of the previous section are based on the true transition matrix P. In practice, of course, P is not accessible and must be estimated from an observed sequence of successional states. Let mjk be the observed number of transitions from state j to state k. The transition probability pjk can be estimated by:

\[ \hat{p}_{jk} = \frac{m_{jk}}{\sum_{k'=1}^{n} m_{jk'}} \qquad \text{(eqn 6)} \]

(Anderson & Goodman 1957). This is just the observed proportion of transitions starting in state j that end in state k. The corresponding estimate P̂ = [p̂jk] can be used in place of P in the calculations outlined in the previous section to find an estimate p̂x(s | j) of px(s | j).
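A minimal sketch of the estimator in equation 6; the short sequence of 0-based state codes is invented purely for illustration, and with several observed sequences the transition counts would simply be pooled before dividing by the row totals:

```python
import numpy as np

def estimate_transition_matrix(seq, n):
    """Maximum-likelihood estimate of P from one observed sequence of states coded 0, ..., n-1."""
    m = np.zeros((n, n))                          # m[j, k] = number of observed j -> k transitions
    for a, b in zip(seq[:-1], seq[1:]):
        m[a, b] += 1.0
    row_sums = m.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0.0] = 1.0               # avoid dividing by zero for unvisited states
    return m / row_sums

P_hat = estimate_transition_matrix([0, 1, 1, 2, 0, 2, 1, 0], n=3)
```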

In most cases, it will be important to go beyond a point estimate of px(s | j) and to provide a confidence interval. This can be done through a parametric bootstrap (Efron & Tibshirani 1994). The basic idea of the parametric bootstrap is to simulate data from the fitted Markov chain model, form p̂x(s | j) as outlined above using the simulated data, repeat the procedure a large number B of times, and treat the B values generated in this way as an estimate of the sampling distribution of p̂x(s | j). The implementation of this algorithm depends on the way in which the data were collected. This is illustrated in the following section.
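A minimal sketch of this parametric bootstrap, reusing the helper functions from the sketches above and assuming a sampling design of many short sequences; the number of sequences, their length and B are placeholders to be matched to the actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sequence(P, pi, length, rng):
    """Simulate one sequence from the fitted chain, starting from its stationary distribution."""
    n = P.shape[0]
    seq = [rng.choice(n, p=pi)]
    for _ in range(length - 1):
        seq.append(rng.choice(n, p=P[seq[-1]]))
    return seq

def bootstrap_pmfs(P_hat, x, j, s_max, n_seq=100, seq_len=8, B=200):
    """B bootstrap replicates of the estimated pmf of S_x given X_t = j."""
    pi_hat = stationary_distribution(P_hat)
    reps = np.empty((B, s_max))
    for b in range(B):
        counts = np.zeros_like(P_hat)
        for _ in range(n_seq):
            seq = simulate_sequence(P_hat, pi_hat, seq_len, rng)
            for a, c in zip(seq[:-1], seq[1:]):
                counts[a, c] += 1.0
        # Re-estimate the chain from the simulated transitions (assumes every state was visited).
        P_b = counts / counts.sum(axis=1, keepdims=True)
        Q_b = reversed_chain(P_b, stationary_distribution(P_b))
        reps[b] = backward_first_passage_pmf(Q_b, x, j, s_max)
    return reps   # pointwise quantiles across rows give approximate confidence limits
```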

Illustration

This section illustrates the results of the previous sections using table 2 of Hill, Witman & Caswell (2004), which reports the estimated annual transition probabilities for a single location in a rocky subtidal community in the central Gulf of Maine. As noted, the successional states for this community comprise 14 species and bare rock. Interest here centres on the distribution of the backward first passage time Sx to bare rock x given the current successional state. Table 1 reports the estimated backward transition probabilities qjx, given in equation 3, from each of the other successional states to bare rock. This estimated probability is smallest for j = 6, corresponding to the sea anemone Urticina crassicornis, and largest for j = 14, corresponding to the polychaete Spirobis spirobis.

Table 1. Backward transition probabilities qjx to bare rock from each of the other successional states

j     qjx
1     0·024
2     0·097
3     0·026
4     0·016
5     0·114
6     0·015
7     0·165
8     0·124
9     0·048
10    0·146
11    0·084
12    0·097
13    0·128
14    0·208

As an illustration, Fig. 1 shows the estimated probability mass functions of Sx, given in equation 5, in the cases where the current successional state is Urticina and where it is Spirobis. When the current successional state is Spirobis, p̂x(s | j) declines monotonically with s. The upper 0·05-quantile of the estimated distribution is around 49 years. On the other hand, when the current successional state is Urticina, p̂x(s | j) initially increases with s, reaches a mode at 7 years, and thereafter declines monotonically with s. The upper 0·05-quantile of this estimated distribution is around 59 years.
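The quantiles quoted here can be read directly off an estimated probability mass function. A minimal sketch, assuming a pmf vector laid out as in the earlier sketch (element i corresponding to s = i + 1) and an s_max large enough for the cumulative probabilities to reach 1 − α:

```python
import numpy as np

def upper_quantile(pmf, alpha=0.05):
    """Smallest s with Pr(S_x <= s) >= 1 - alpha (the upper alpha-quantile of S_x)."""
    cdf = np.cumsum(pmf)                                  # cdf[i] = Pr(S_x <= i + 1)
    return int(np.searchsorted(cdf, 1.0 - alpha)) + 1     # +1 because pmf[0] corresponds to s = 1

# e.g. upper_quantile(pmf) with the pmf vector from the earlier sketch
```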

Figure 1. Conditional probability mass functions (solid line) and approximate 0·95 confidence intervals (dashed line) of the backward first passage time to bare rock when the current successional states are (a) Spirobis and (b) Urticina.

Although details are unavailable, in rough terms the data analysed by Hill, Witman & Caswell (2004) consisted of annual observations over an 8-year period at each of 5000 locations. The following parametric bootstrap procedure was used to construct approximate 0·95 confidence intervals for the values of px(s | j), the estimates of which are shown in Fig. 1. A total of 5000 sequences of eight observations were simulated from the Markov chain model fitted by Hill, Witman & Caswell (2004). The initial value in each sequence was simulated from the estimated stationary distribution and the subsequent values were simulated from the estimated transition probabilities. The complete set of 5000 sequences yielded a total of 35 000 transitions. The transition matrix was estimated from these as given in equation 6 and the corresponding estimates of px(s | j) were formed. The entire procedure was repeated B = 200 times. In Fig. 1, the approximate 0·95 confidence intervals based on the upper and lower 0·025-quantiles of these 200 values are plotted.
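Given the array of bootstrap replicates returned by the sketch in the previous section, the pointwise interval endpoints are simply across-replicate quantiles:

```python
import numpy as np

# reps is the B-by-s_max array returned by bootstrap_pmfs in the earlier sketch.
lower, upper = np.quantile(reps, [0.025, 0.975], axis=0)   # pointwise 0.95 limits for each s
```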

It is notable that, despite the large overall sample size, the confidence interval for Spirobis is quite wide (0·15–0·27) for s = 1. This is explained by the relative rarity of Spirobis, which accounted for only 225 of the more than 40 000 individuals identified by Hill, Witman & Caswell (2004).

Discussion

The purpose of this note has been to illustrate how Markov chain models of succession can be used to address questions about successional history. One practical situation in which this is useful is in forensic entomology, where interest commonly centres on using the insect community present on a human corpse to estimate the time of death (Byrd & Castner 2000). From a statistical point of view, current practice in this area is ad hoc, in the sense that it is not based on a formal model of succession. A somewhat less fanciful application would be using the macrofauna present at a hydrothermal vent to estimate its age (Tsurumi & Tunnicliffe 2001). To date, work in this relatively recent area has been descriptive.

Acknowledgements

The helpful comments of two anonymous referees are acknowledged with gratitude. This work was supported by the National Science Foundation through Award OCE-0083976.
