#### Self-Starting (selfStart) Functions

Nonlinear equations generally lack analytical solutions because their parameters enter as products or quotients, and thus must be solved numerically. Such iterative estimation approaches require *a priori* specification of starting values for all parameters, which are refined during the fitting process until estimation accuracy meets specified convergence criteria. Selection of starting values is often burdensome and, consequently, self-starting [selfStart()] functions are available for many curves (Pinheiro & Bates 2000, pp. 342–346; Ritz & Streibig 2005; Paine *et al.* 2012).

FlexParamCurve includes a selfStart() function, SSposnegRichards(), for many nonmonotonic curves. SSposnegRichards() combines two Richards curves (Nelder 1962):

*y* = *A* / [1 + *m*e^{−*k*(*t* − *i*)}]^{1/*m*} + *A′* / [1 + *m′*e^{−*k′*(*t* − *i′*)}]^{1/*m′*}

where *A, k, i* and *m* are, respectively, the asymptote, rate parameter, inflection point and shape parameter of the first Richards curve and *A′, k′, i′* and *m′* are the corresponding parameters for the second curve (parameters described in Appendix S1 Table S1·1). In FlexParamCurve, these parameters are designated Asym, K, Infl, M, RAsym, Rk, Ri and RM, respectively. Individual Richards curves can model many sigmoidal forms (e.g. logistic, Gompertz and von Bertalanffy); the double-Richards curve is equally flexible for nonmonotonic relationships (Fig. 1).
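
Stated in code, the double-Richards form is straightforward. The following is a minimal Python sketch (an illustration, not the package's R implementation), using the Asym/K/Infl/M parameter naming described above:

```python
import math

def posneg_richards(t, Asym, K, Infl, M, RAsym, Rk, Ri, RM):
    """Double-Richards curve: the sum of two Richards components.

    Each component has the form A / (1 + m * exp(-k * (t - i)))**(1 / m);
    m must be nonzero (the Gompertz form is only reached as m -> 0).
    """
    first = Asym / (1.0 + M * math.exp(-K * (t - Infl))) ** (1.0 / M)
    second = RAsym / (1.0 + RM * math.exp(-Rk * (t - Ri))) ** (1.0 / RM)
    return first + second
```

With *M* = 1 and RAsym = 0 this collapses to an ordinary logistic curve, so the value at *t* = Infl is exactly Asym/2.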

Parameter redundancy often arises when the equation fitted is too complex for the data and can lead to estimation problems. Therefore, FlexParamCurve allows users to fit [SSposnegRichards()] and plot [posnegRichards.eqn()] reduced versions of the double-Richards curve by fixing ≤5 parameters to user-specified values (or means by default). FlexParamCurve uses this approach because parameters of the Richards curve have empirical biological meaning for many datasets (e.g. Brisbin *et al.* 1987). Fixing a parameter achieves the same numerical advantage (fewer estimable parameters) but avoids the compensatory changes to estimable parameters that occur when a parameter is dropped. In this way, by default FlexParamCurve allows the data to suggest the most parsimonious curve, but also permits users to select appropriate parameterizations. Default parameter bounds (tested across diverse datasets) are provided but can also be specified by the user.
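
The fixing strategy can be illustrated outside R: rather than removing a parameter from the equation, one pins it to a constant, so the functional form is unchanged while the optimizer estimates fewer parameters. A hypothetical Python sketch using functools.partial:

```python
import math
from functools import partial

def richards(t, A, k, i, m):
    # Single Richards curve; m is the shape parameter.
    return A / (1.0 + m * math.exp(-k * (t - i))) ** (1.0 / m)

# "Fixing" m: the curve keeps its Richards form, but a fitting routine
# would now estimate only A, k and i (cf. models 22-36 in FlexParamCurve,
# where M is fixed to the data-set mean rather than dropped).
logistic_like = partial(richards, m=1.0)
```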

SSposnegRichards() and posnegRichards.eqn() use argument *modno* to specify one of 32 versions of the double-Richards curve (all 16 possible reductions in the second curve, fixing *A′, k′, i′* or *m′*, both when *m* is fixed and when it is estimated; see Appendix S1 Table S1·2 and the SSposnegRichards() help file). This allows fitting of monotonic curves such as logistic (model 32, *m* = 1), Gompertz (model 32, *m* ≈ 0) and von Bertalanffy (model 32, *m* = −0·3), as well as many nonmonotonic forms, for example, double-logistic (model 22, *m* = *m′* = 1), double-Gompertz (model 22, *m* = *m′* ≈ 0), double-von Bertalanffy (model 22, *m* = *m′* = −0·3) and biphasic growth models (Fig. 1, Appendix S1 Table S1·2).
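
These special cases can be checked numerically: as *m* → 0, a Richards component converges on the Gompertz curve *A*e^{−e^{−*k*(*t* − *i*)}}. A small Python check (assuming the single-curve form given earlier):

```python
import math

def richards(t, A, k, i, m):
    # Single Richards curve with shape parameter m (m != 0).
    return A / (1.0 + m * math.exp(-k * (t - i))) ** (1.0 / m)

def gompertz(t, A, k, i):
    # Gompertz curve: the m -> 0 limit of the Richards curve.
    return A * math.exp(-math.exp(-k * (t - i)))

# With m very small, the Richards value converges on the Gompertz value.
r = richards(12.0, 100.0, 0.3, 10.0, 1e-6)
g = gompertz(12.0, 100.0, 0.3, 10.0)
```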

The output from SSposnegRichards() feeds directly into functions such as nls(), nlsList() and nlme() (Pinheiro *et al.* 2007) and is thus compatible with all methods for these functions [e.g. anova()].

#### Model Selection

FlexParamCurve includes functions [pn.mod.compare() and pn.modselect.step()] to determine the most suitable reduction in the double-Richards curve for a data set. These fit models in nlsList() (Pinheiro *et al.* 2007), yielding nonlinear least-squares (NLS) fits for each group (e.g. each individual in a growth analysis). This represents the suitability of a particular curve more robustly than a simple NLS across all groups (which ignores individual contributions; Fig. 2).

pn.mod.compare() ranks candidate nlsList() models according to penalized root-mean-square error (pRSE*′*):

pRSE*′* = √(Σ*σ*^{2}/*β*) / √*n*

where *σ*^{2} is the estimated variance (square of residual standard error) for each of the *β* fitted grouping levels and *n* is the number of data points fitted. √(Σ*σ*^{2}/*β*) is the root-mean-square error (RSE) and by default this is divided by √*n*; thus, pRSE*′* represents per-level measurement error discounted by sample size. This penalizes models that fit only a few groups and consequently have low RSE (because there is likely less variation in fit among fewer levels) and allows comparison of nonnested models (because it uses residual squared error rather than maximum likelihood). Users can also edit the formulation of pRSE*′* to match their desired balance between sensitivity and specificity.
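
Under this default formulation, pRSE*′* is simple to compute. A hedged Python sketch (the function name is illustrative, not part of the package):

```python
import math

def penalized_rmse(sigmas, n):
    """pRSE' = sqrt(mean of per-group residual variances) / sqrt(n).

    sigmas: residual standard errors, one per fitted grouping level;
    n: total number of data points fitted across those levels.
    """
    beta = len(sigmas)
    rse = math.sqrt(sum(s * s for s in sigmas) / beta)
    return rse / math.sqrt(n)
```

A model that fits only a few easy groups may achieve a low RSE, but its small *n* inflates pRSE*′* relative to a model that fits most groups.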

Processing time can be considerable for multiple nlsList() models with many groups, so pn.mod.compare() and pn.modselect.step() first evaluate whether the shape parameter *m* can parsimoniously be fixed. Initially, an extra sum-of-squares *F*-test compares the full 8-parameter model (model 1) with a 7-parameter model (model 21) in which *m* is fixed to the mean across the data set. If the 8-parameter model provides a significantly better fit, subsequent reductions explore models in which *m* is estimated (*modno* = 2–16). Otherwise, subsequent evaluations use the same reductions but with *m* fixed to the mean across the data set (*modno* = 22–36) (see Fig. 1 and Appendix S1 Table S1·2).
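
The extra sum-of-squares *F*-test itself follows the standard formulation for nested models (Ritz & Streibig 2009). A minimal Python sketch (the p-value step is omitted because the standard library lacks the *F* distribution):

```python
def extra_ss_F(rss_reduced, df_reduced, rss_full, df_full):
    """Extra sum-of-squares F statistic for two nested models.

    rss_*: residual sums of squares; df_*: residual degrees of freedom
    (df_reduced > df_full, because the reduced model estimates fewer
    parameters). Returns (F, numerator d.f., denominator d.f.); compare
    F against the F distribution with these d.f. for a p-value.
    """
    numerator = (rss_reduced - rss_full) / (df_reduced - df_full)
    denominator = rss_full / df_full
    return numerator / denominator, df_reduced - df_full, df_full
```

A large *F* indicates that the extra parameters of the full model explain substantially more variation than expected by chance, so the reduction is rejected.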

After assessing the need to estimate *m*, pn.modselect.step() uses backward stepwise selection of subsequent nlsList() models. At the next step, four candidate models (each with one of the four second-curve parameters, *A′, k′, i′, m′*, fixed at its mean value) are ranked by pRSE*′* (as they are not mutually nested) and the highest-ranked reduction is compared with the general model (1 or 21) using extra sum-of-squares *F*-tests (Ritz & Streibig 2009). This rank-then-test procedure is used at all subsequent steps.

For additional flexibility, functions extraF() and extraF.nls() allow users to undertake extra sum-of-squares *F*-tests for any two nested nlsList() or nls() models, respectively.

#### Using FlexParamCurve: Examples from Avian Growth Analyses

The help files for FlexParamCurve provide illustrative examples; see also Figs 1–3 and Appendix S2. Here, we demonstrate the general approach for using FlexParamCurve to determine the most suitable parametric curve and then fit NLS models or nonlinear mixed-effects models. We use published data on growth of common terns (*Sterna hirundo* Linnaeus) (Nisbet 1975; Nisbet, Wilson & Broad 1978; tern.data) and little penguins (*Eudyptula minor* Forster) (Chiaradia & Nisbet 2006; penguin.data) and a simulated data set for black-browed albatrosses (*Thalassarche melanophrys* Temminck; posneg.data; see help file). Appendix S3 provides code for these examples.

**1** Run function modpar() to generate a list of initial parameter estimates, fitting options and parameter bounds. This provides information needed to fit [using SSposnegRichards()] and predict [using posnegRichards.eqn()] and can subsequently be modified manually or with change.pnparameters(). Calling either model selection routine automatically calls modpar() if a suitable list is not supplied.
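
modpar() derives its initial estimates from the data themselves; heuristics of the following kind are typical for sigmoidal fits (a hypothetical Python sketch of such heuristics, not the package's algorithm):

```python
def initial_estimates(t, y):
    """Crude starting values for a logistic-type first curve.

    Asym: maximum observed response; Infl: first time at which the
    response reaches half of Asym; K: order-of-magnitude rate guess
    derived from the span of the observation times.
    """
    asym = max(y)
    half = asym / 2.0
    infl = next(ti for ti, yi in zip(t, y) if yi >= half)
    k = 4.0 / (max(t) - min(t))  # rough rate from the data range
    return {"Asym": asym, "Infl": infl, "K": k}
```

Estimates of this kind need only land within the basin of attraction of the optimum; the iterative fit refines them to convergence.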

**2** Perform model selection using pn.mod.compare() and pn.modselect.step(). These functions may suggest different reductions in the double-Richards curve because pn.mod.compare() is more sensitive to curves with low pRSE*′* and pn.modselect.step() relies on sequential model reduction. For example, both routines selected a double-Gompertz curve (*modno* = 22, Table 1, Appendix S1 Table S1·2) as the best fit to the posneg.data data set. In contrast, they each suggested different final models for both penguin and tern data sets (Table 1). For penguins, pn.mod.compare() selected model (*modno*) 31, a 4-parameter model including one second-curve parameter that fitted 90% (122/150) of the individuals in the data set (Table 1, Fig. 2), rather than the anticipated (Chiaradia & Nisbet 2006) double-Gompertz curve that required two second-curve parameters (*modno* = 34). For terns, pn.mod.compare() selected model (*modno*) 32, a 3-parameter model with the shape parameter *m* fixed at 0·72 (mean across the data set); this fitted 89% (67/75) of the individuals in the data set (Table 1, Fig. 2) and was similar in shape to a logistic curve (*m* = 1·0).

Table 1. Top-ranked models by pn.mod.compare() (first subtable) and stepwise selection by pn.modselect.step() (second subtable) for (a) posneg.data (100 levels), (b) little penguin (150 levels) and (c) common tern (75 levels) data sets. For pn.mod.compare(), models are ranked according to minimized, penalized root-mean-square error (pRSE*′*) (lowest value in bold), No. of levels fit is the number of groups (individual chicks) parameterized in nlsList() and No. of params is the number of parameters. For pn.modselect.step(), only the most general and most reduced models are shown (see Appendix S1 Tables S1·4–6 for full output).

(a) *posneg.data* (black-browed albatross; simulated)

*pn.mod.compare()*

| *modno* | pRSE*′* | No. of levels fit | RSE | Model d.f. | Residual d.f. | No. of params |
|---|---|---|---|---|---|---|
| **22** | **0·40** | **79** | **12·9** | **474** | **553** | **6** |
| 24 | 0·42 | 87 | 14·1 | 435 | 696 | 5 |
| 35 | 0·55 | 93 | 19·2 | 465 | 744 | 5 |

*pn.modselect.step()*

| Selected | Step | Reduced | General | *F* | d.f. | *P* |
|---|---|---|---|---|---|---|
| **22** | **6** | **12** | **22** | **323** | **200, 900** | **<0·001** |
| 22 | 2 | 22 | 21 | 2·56 | 100, 700 | <0·001 |

(b) little penguin

*pn.mod.compare()*

| *modno* | pRSE*′* | No. of levels fit | RSE | Model d.f. | Residual d.f. | No. of params |
|---|---|---|---|---|---|---|
| **31** | **1·52** | **122** | **64·5** | **488** | **1317** | **4** |
| 32 | 1·67 | 125 | 72·1 | 375 | 1488 | 3 |
| 33 | 2·40 | 42 | 59·4 | 210 | 400 | 4 |

*pn.modselect.step()*

| Selected | Step | Reduced | General | *F* | d.f. | *P* |
|---|---|---|---|---|---|---|
| **21** | **6** | **12** | **21** | **4·85** | **580, 1630** | **<0·001** |
| 21 | 2 | 22 | 21 | 5·82 | 274, 1324 | <0·001 |

(c) common tern

*pn.mod.compare()*

| *modno* | pRSE*′* | No. of levels fit | RSE | Model d.f. | Residual d.f. | No. of params |
|---|---|---|---|---|---|---|
| **32** | **0·18** | **67** | **5·9** | **201** | **852** | **3** |
| 33 | 0·24 | 32 | 5·8 | 160 | 448 | 5 |
| 30 | 0·27 | 28 | 6·0 | 112 | 376 | 4 |

*pn.modselect.step()*

| Selected | Step | Reduced | General | *F* | d.f. | *P* |
|---|---|---|---|---|---|---|
| **12** | **6** | **12** | **22** | **3·20** | **350, 856** | **<0·001** |
| 22 | 3 | 22 | 33 | 3·09 | 544, 1050 | <0·001 |

**3** Fit NLS [nls()] or nonlinear mixed-effects models [nlme()] using the most suitable curve in SSposnegRichards(). Model selection in nls() or nlme() can then investigate effects of factors, variates or covariates (fixed or random) on the parameters selected (Pinheiro & Bates 2000, pp. 377–409). For example, the penguin data set contains data from two contrasting years (Chiaradia & Nisbet 2006). When analysed within a single NLME model (Appendix S3), both yearly and seasonal differences are evident (Fig. 3).