## Introduction

Using matrix population models, ecological indices can be calculated as functions of vital rates such as survival or fertility. Measures of population growth rate, including the discrete-time growth rate λ, the continuous-time growth rate *r* = log λ and the net reproductive rate *R*_{0}, are of particular interest. The discrete-time population growth rate λ is given by the dominant eigenvalue of the population projection matrix. Sensitivities (first partial derivatives) of λ with respect to relevant parameters quantify how population growth responds to vital rate perturbations. These first derivatives are used to project the effects of vital rate changes due to environmental or management perturbations, uncertainty in parameter estimates and phenotypic evolution (i.e. with λ as a fitness measure, the sensitivity of λ with respect to a parameter is the selection gradient on that parameter) (Caswell 2001).
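As a concrete illustration of these first-order quantities, λ and its sensitivity matrix can be computed in a few lines of NumPy (the tools discussed later in this paper use Matlab; Python is used here only for illustration, and the 3×3 projection matrix below is hypothetical). The sensitivity of λ to the matrix entry *a*_{ij} is *v*_{i}*w*_{j}/⟨**v**,**w**⟩, where **w** and **v** are the right and left eigenvectors associated with λ:

```python
import numpy as np

# Hypothetical 3-stage projection matrix (illustrative values only):
# fertilities in the first row, survival/growth probabilities below.
A = np.array([[0.0, 1.5, 3.0],
              [0.4, 0.0, 0.0],
              [0.0, 0.6, 0.8]])

def dominant_eig(M):
    """Dominant eigenvalue and its right eigenvector."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    return vals[k].real, vecs[:, k].real

lam, w = dominant_eig(A)      # growth rate and stable stage structure
_, v = dominant_eig(A.T)      # left eigenvector: reproductive values

# Sensitivity matrix: d(lambda)/d(a_ij) = v_i * w_j / <v, w>
# (invariant to the arbitrary signs of the eigenvectors)
S = np.outer(v, w) / (v @ w)

# Check one entry against a forward finite difference
h = 1e-6
Ap = A.copy()
Ap[1, 0] += h
lam_p, _ = dominant_eig(Ap)
print(f"lambda = {lam:.4f}, analytic = {S[1,0]:.4f}, numeric = {(lam_p - lam)/h:.4f}")
```

The finite-difference check is a useful sanity test whenever analytic sensitivities are implemented by hand.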

### Applications of second derivatives of growth rates

The second derivatives of growth rates have applications in both ecology (e.g. assessing and improving recommendations from sensitivity analysis, approximating the sensitivities of stochastic growth rates) and evolution (e.g. characterizing nonlinear selection gradients and evolutionary equilibria). Several of these applications are summarized in Table 1 and described in the following sections.

#### Second-order sensitivity analysis and growth rate estimation

The sensitivity of growth rate provides insight into the population response to parameter perturbations. However, such perturbations also affect the sensitivity itself, that is, sensitivity is 'situational' (Stearns 1992). These second-order effects are quantified by the sensitivity, with respect to a parameter θ_{j}, of the sensitivity of λ to another parameter θ_{i}, that is, by the second derivatives ∂^{2}λ/∂θ_{j}∂θ_{i}. The sensitivity of the elasticity of growth rate to parameters similarly depends on second derivatives (Caswell 1996, 2001).

**Table 1.** Interpretations of the second derivatives of the population growth rate λ.

| Second derivative | Sign | Interpretations |
|---|---|---|
| ∂^{2}λ/∂θ^{2} | = 0 | Sensitivity of λ to θ is independent of θ; linear selection on trait θ |
| ∂^{2}λ/∂θ^{2} | > 0 | Sensitivity of λ to θ increases with θ; convex selection on trait θ; evolutionarily unstable singular strategy |
| ∂^{2}λ/∂θ^{2} | < 0 | Sensitivity of λ to θ decreases with increases in θ; concave selection on trait θ; evolutionarily stable singular strategy |
| ∂^{2}λ/∂θ_{j}∂θ_{i} | > 0 | Sensitivity of λ to θ_{i} increases with θ_{j}; selection to increase correlation between traits θ_{i} and θ_{j} |
| ∂^{2}λ/∂θ_{j}∂θ_{i} | < 0 | Sensitivity of λ to θ_{i} decreases with θ_{j}; selection to decrease correlation between traits θ_{i} and θ_{j} |
| ∂^{2}λ/∂θ_{j}∂θ_{i} (all *i*, *j*) | N/A | Used to calculate sensitivity of the stochastic growth rate λ_{s} |

In conservation applications, attention is often focused on the vital rates to which population growth is particularly sensitive or elastic; these first-order results may themselves change as parameters are perturbed. First derivatives also provide a linear, first-order approximation to the response of the growth rate to changes in parameters. The linear approximation is guaranteed to be accurate for sufficiently small perturbations and is often very accurate even for quite large perturbations (Caswell 2001). If the response of λ to θ is nonlinear, it is tempting to use a second-order approximation for Δλ:

Δλ ≈ ∑_{i} (∂λ/∂θ_{i}) Δθ_{i} + (1/2) ∑_{i,j} (∂^{2}λ/∂θ_{j}∂θ_{i}) Δθ_{i} Δθ_{j}

We caution that although this may, in some cases, provide a more accurate calculation, this is not guaranteed. As shown in Fig. 1 of Carslake, Townley & Hodgson (2008), for example, adding the second-order terms may actually reduce the accuracy of the approximation.
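To make this concrete, one can compare the exact change in λ with its first- and second-order approximations, here using a hypothetical 2×2 matrix and finite-difference derivatives. In this particular example the second-order term happens to improve accuracy, but as cautioned above this is not guaranteed in general:

```python
import numpy as np

def lam(A):
    """Dominant eigenvalue (real part) of a projection matrix."""
    return np.max(np.linalg.eigvals(A).real)

# Hypothetical 2x2 projection matrix; the perturbed parameter is
# theta = a_21 (juvenile survival) -- illustrative values only.
A0 = np.array([[0.5, 2.0],
               [0.3, 0.7]])

def lam_theta(theta):
    A = A0.copy()
    A[1, 0] = theta
    return lam(A)

t0 = A0[1, 0]
h = 1e-5
# Central finite differences for the first and second derivatives
d1 = (lam_theta(t0 + h) - lam_theta(t0 - h)) / (2 * h)
d2 = (lam_theta(t0 + h) - 2 * lam_theta(t0) + lam_theta(t0 - h)) / h**2

dtheta = 0.2   # a deliberately large perturbation
exact = lam_theta(t0 + dtheta) - lam_theta(t0)
first = d1 * dtheta
second = first + 0.5 * d2 * dtheta**2
print(f"exact {exact:.4f}, 1st-order {first:.4f}, 2nd-order {second:.4f}")
```

Whether the quadratic term helps depends on the higher-order curvature of λ(θ) over the perturbation range, which is exactly the point made by Carslake, Townley & Hodgson (2008).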

#### Characterizing nonlinear selection processes

The second derivatives of fitness with respect to trait values have consequences for selection. The first derivatives of fitness are selection gradients (Lande 1982). When fitness is a linear function of a trait, its second derivatives are zero, and there is selection to shift the trait's mean value. When fitness is a nonlinear function of a trait, its second derivatives are nonzero and provide additional information on how selection affects the trait's higher moments (Lande & Arnold 1983, Phillips & Arnold 1989, Brodie, Moore & Janzen 1995). Such nonlinear selection can be classified as concave or convex depending on whether the second derivatives are negative or positive.

One can classify a selection process as linear, concave or convex using quadratic selection gradients, the local second derivatives of fitness with respect to trait value (Phillips & Arnold 1989). If fitness is measured as λ, these quadratic selection gradients are equivalent to ∂^{2}λ/∂θ^{2}, the pure second derivatives of λ with respect to trait θ (e.g. the second derivatives with respect to stage-specific survival in *C. ovandensis*, as shown in Fig. 3a). Concave, linear and convex selection correspond to negative, zero and positive second derivatives, respectively.

Concave selection reduces the variance in the trait, and convex selection increases it; Lande & Arnold (1983, p.1216) equate this to a more sophisticated version of the concepts of stabilizing and disruptive selection. Brodie, Moore & Janzen (1995) provide further analysis of the curvature of the fitness surface and its effects on selection.

Selection operating on pairs of traits is said to be correlational if the cross second derivatives are nonzero. Thus, if the pure second derivatives of two different traits, θ_{i} and θ_{j}, are both nonzero, their mixed second derivative ∂^{2}λ/∂θ_{j}∂θ_{i} is a measure of correlational selection. If ∂^{2}λ/∂θ_{j}∂θ_{i}<0, there is selection to decrease the phenotypic correlation between the two traits; if ∂^{2}λ/∂θ_{j}∂θ_{i}>0, there is selection to increase their correlation. The concepts of nonlinear selection are powerful, but require the second derivatives of fitness to be applied.
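A minimal numerical sketch of correlational selection, again with a hypothetical 2×2 matrix: the mixed second derivative ∂^{2}λ/∂θ_{j}∂θ_{i} is estimated with a cross finite difference, and its sign indicates the direction of selection on the correlation between the two traits:

```python
import numpy as np

def lam(A):
    """Dominant eigenvalue (real part) of a projection matrix."""
    return np.max(np.linalg.eigvals(A).real)

# Hypothetical 2-stage matrix; traits theta_i = juvenile survival a_21
# and theta_j = adult fertility a_12 -- illustrative values only.
A0 = np.array([[0.0, 2.0],
               [0.3, 0.8]])

def lam2(ti, tj):
    A = A0.copy()
    A[1, 0] = ti     # theta_i
    A[0, 1] = tj     # theta_j
    return lam(A)

ti, tj, h = A0[1, 0], A0[0, 1], 1e-4
# Cross finite difference for the mixed second derivative
d2_cross = (lam2(ti + h, tj + h) - lam2(ti + h, tj - h)
            - lam2(ti - h, tj + h) + lam2(ti - h, tj - h)) / (4 * h * h)
print(f"mixed second derivative = {d2_cross:.3f}")
# A positive value indicates selection to increase the correlation
# between the two traits; a negative value, to decrease it.
```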

#### Stability of evolutionary singular strategies

Second derivatives play a role in adaptive dynamics analyses. Evolutionary singular strategies (SSs) are phenotypes at which the selection gradient is locally zero (e.g. Geritz *et al*. 1998). SSs are classified by their stability properties: whether they are attracting or repelling, and whether they can invade or coexist with other nearby phenotypes (Geritz *et al*. 1998, Diekmann 2004, Waxman & Gavrilets 2005, Doebeli 2011).

These classifications depend on the local second derivatives of invasion fitness, the growth rate of a rare mutant in an equilibrium resident environment. For example, the second derivative of the mutant growth rate λ to the mutant trait *y* determines whether a SS is evolutionarily stable (∂^{2}λ/∂*y*^{2}<0) or evolutionarily unstable (∂^{2}λ/∂*y*^{2}>0). Evolutionarily stable strategies, once established, are unbeatable phenotypes against which no nearby mutants can increase under selection and are thus long-term evolutionary endpoints. Evolutionarily unstable strategies, on the other hand, are branching points open to phenotypic divergence and may ultimately become sources of sympatric speciation (Geritz *et al*. 1998).

#### Sensitivity of the stochastic growth rate

Second derivatives provide a way to calculate the sensitivity of the stochastic growth rate in some cases. The stochastic growth rate is

log λ_{s} = lim_{*t*→∞} (1/*t*) log[*N*(*t*)/*N*(0)],

where *N*(*t*) is the population size at time *t*. Tuljapurkar (1982) derived a small-noise approximation for log λ_{s} in the absence of temporal autocorrelation. As shown by Caswell (2001, Section 14.3.6), this approximation can be written in terms of the first derivatives of λ, the dominant eigenvalue of the mean projection matrix. Thus, the derivatives of this approximation can be written in terms of the second derivatives of that eigenvalue (Caswell 2001, Section 14.3.6). We discuss this application further in the section ‘Sensitivity analysis of stochastic growth rates’.
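As a sketch of this application, the small-noise approximation as given in Caswell (2001, Section 14.3.6), log λ_{s} ≈ log λ̄ − τ²/(2λ̄²) with τ² the sensitivity-weighted sum of covariances of matrix entries, can be compared against a direct simulation. The two equiprobable environmental matrices below are hypothetical, and i.i.d. environments are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical environmental states, drawn i.i.d. with equal probability
A1 = np.array([[0.0, 1.8], [0.5, 0.6]])
A2 = np.array([[0.0, 2.2], [0.5, 0.6]])
Abar = 0.5 * (A1 + A2)                      # mean projection matrix

def dominant(M):
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    return vals[k].real, vecs[:, k].real

lam_bar, w = dominant(Abar)
_, v = dominant(Abar.T)
S = np.outer(v, w) / (v @ w)                # sensitivities of lam_bar

# tau^2 = sum_{ij,kl} Cov(a_ij, a_kl) * s_ij * s_kl.  Any consistent
# flattening order works, since the same order is used for C and s.
d1, d2 = (A1 - Abar).ravel(), (A2 - Abar).ravel()
C = 0.5 * (np.outer(d1, d1) + np.outer(d2, d2))
s = S.ravel()
tau2 = s @ C @ s

log_lam_s_approx = np.log(lam_bar) - tau2 / (2 * lam_bar**2)

# Direct simulation of the stochastic growth rate
T = 20000
n = np.ones(2)
n /= n.sum()
log_growth = 0.0
for _ in range(T):
    A = A1 if rng.random() < 0.5 else A2
    n = A @ n
    log_growth += np.log(n.sum())
    n /= n.sum()                            # renormalize each step
print(f"approx {log_lam_s_approx:.5f}, simulated {log_growth / T:.5f}")
```

Because temporal variance always reduces the stochastic growth rate, the approximation lies below log λ̄, and the derivatives of this correction term are what require the second derivatives of λ.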

### Calculating second derivatives of growth rates

The second derivatives of λ with respect to matrix elements were introduced by Caswell (1996); see also Caswell (2001, Section 9.7). However, these calculations are awkward and error-prone, because they involve all the eigenvalues and eigenvectors of the projection matrix. McCarthy, Townley & Hodgson (2008) introduced an alternative approach for calculating the second derivatives of eigenvalues (they call them 'second-order sensitivities') based on transfer functions, partially to avoid the calculation of all the eigenvectors. However, they consider only rank-one perturbations of a subset of the matrix elements, excluding fertilities, and their calculations are perhaps equally difficult.

Here, we reformulate the second derivative calculations using matrix calculus, providing easily computable results. We extend previous results by including not only second derivatives with respect to matrix elements, but also those with respect to any lower-level parameters that may affect the matrix elements, and by presenting the second derivatives of the continuous-time invasion exponent *r* and the net reproductive rate *R*_{0}.

The key to our approach is that the calculation of first derivatives using matrix calculus yields a particular expression, the differentiation of which leads directly to the second derivatives. Second derivatives are easily computed by this method in any matrix-oriented language, such as Matlab or R. Although we consider only the second derivatives of population growth rates, our approach extends naturally to other scalar-dependent variables.
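One standard vec-based form of the first derivative is dλ/d vec **A** = (**w** ⊗ **v**)/(**v**^{⊤}**w**); differentiating such an expression again is what yields the second derivatives. This identity is easy to check numerically against the element-wise sensitivities (NumPy sketch with a hypothetical matrix, in place of the Matlab or R the text mentions):

```python
import numpy as np

# Hypothetical 3-stage projection matrix (illustrative values only)
A = np.array([[0.0, 1.5, 3.0],
              [0.4, 0.0, 0.0],
              [0.0, 0.6, 0.8]])

def dominant(M):
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    return vals[k].real, vecs[:, k].real

lam, w = dominant(A)
_, v = dominant(A.T)

# Vec form: d(lambda)/d(vec A) = (w kron v) / (v' w),
# where vec stacks columns (column-major order).
dlam_dvecA = np.kron(w, v) / (v @ w)

# Element-wise form for comparison: s_ij = v_i * w_j / <v, w>;
# ravel(order="F") is the column-stacking vec operator in NumPy.
S = np.outer(v, w) / (v @ w)
print(np.allclose(dlam_dvecA, S.ravel(order="F")))
```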

In the section ‘A case study: *Calathea ovandensis*’, we present an example of the calculation of second derivatives in a case study of the tropical herb *Calathea ovandensis*.

### Notation

Matrices are denoted by upper-case boldface letters (e.g. **A**) and vectors by lower-case boldface letters (e.g. **w**); unless otherwise indicated, all vectors are column vectors. Transposes of matrices and vectors are indicated by the superscript ⊤ (e.g. **A**^{⊤}). The matrix **I**_{n} is the *n*×*n* identity matrix, the vector **e** is a vector of ones, and **e**_{1} is a vector with 1 as its first entry and zeros elsewhere. The matrix **K**_{m,n} is an *mn*×*mn* commutation matrix (vec-permutation matrix) (Magnus & Neudecker 1979, Henderson & Searle 1981), which can be calculated using the Matlab function provided in Appendix S1-D. The expression diag(**x**) indicates the square matrix with **x** on the diagonal and zeros elsewhere.
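The Matlab function of Appendix S1-D is not reproduced here; for illustration, a direct Python construction of the commutation matrix, defined by **K**_{m,n} vec **A** = vec **A**^{⊤} for any *m*×*n* matrix **A**, might look like:

```python
import numpy as np

def commutation_matrix(m, n):
    """K_{m,n}: the mn x mn 0-1 matrix satisfying
    K @ vec(A) = vec(A.T) for any m x n matrix A
    (vec stacks columns)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(A) puts a_ij at position j*m + i;
            # vec(A.T) puts it at position i*n + j.
            K[i * n + j, j * m + i] = 1.0
    return K

# Quick check on a 2 x 3 example
m, n = 2, 3
A = np.arange(m * n, dtype=float).reshape(m, n)
K = commutation_matrix(m, n)
vec = lambda M: M.ravel(order="F")          # column-major stacking
print(np.allclose(K @ vec(A), vec(A.T)))
```

Each row and column of **K**_{m,n} contains a single 1, so it is a permutation matrix, which is why it is also called the vec-permutation matrix.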

The Kronecker product is denoted by **X**⊗**Y** and the Hadamard (element-by-element) product by **X**∘**Y**. The vec operator (e.g. vec **A**) stacks the columns of a matrix into a single vector. For convenience, we will write (vec **A**)^{⊤} as vec^{⊤} **A**. We will make frequent use of Roth's theorem (Roth 1934), which states that for any conformable matrices **X**, **Y** and **Z**:

vec(**XYZ**) = (**Z**^{⊤} ⊗ **X**) vec **Y**
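Roth's theorem is easy to verify numerically; a quick NumPy check with arbitrary conformable random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
# Conformable matrices with arbitrary illustrative shapes
X = rng.standard_normal((2, 3))
Y = rng.standard_normal((3, 4))
Z = rng.standard_normal((4, 5))

# Column-stacking vec operator
vec = lambda M: M.ravel(order="F").reshape(-1, 1)

# Roth's theorem: vec(XYZ) = (Z' kron X) vec(Y)
lhs = vec(X @ Y @ Z)
rhs = np.kron(Z.T, X) @ vec(Y)
print(np.allclose(lhs, rhs))
```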