Varying effort in capture–recapture studies

Authors


*Correspondence author. E-mail: murray.efford@otago.ac.nz

Summary

  1. The standard spatial capture–recapture design for sampling animal populations uses a fixed array of detectors, each operated for the same time. However, methods are needed to deal with the unbalanced data that may result from unevenness of effort due to logistical constraints, partial equipment failure or pooling of data for analysis.
  2. We describe adjustments for varying effort for three types of data each with a different probability distribution for the number of observations per individual per detector per sampling occasion. A linear adjustment to the expected count is appropriate for Poisson-distributed counts (e.g. faeces per searched quadrat). A linear adjustment on the hazard scale is appropriate for binary (Bernoulli-distributed) observations at either traps or binary proximity detectors (e.g. automatic cameras). Data pooled from varying numbers of binary detectors have a binomial distribution; adjustment is achieved by varying the size parameter of the binomial.
  3. We compared a hazard-based adjustment to a more conventional covariate approach in simulations of one temporal and one spatial scenario for varying effort. The hazard-based approach was the more parsimonious and appeared more resistant to bias and confounding.
  4. We analysed a dataset comprising DNA identifications of female grizzly bears Ursus arctos sampled asynchronously with hair snares in British Columbia in 2007. Adjustment for variation in sampling interval had negligible effect on density estimates, but unmasked an apparent decline in detection probability over the season. Duration-dependent decay in sample quality is an alternative explanation for the decline that could be included in future models.
  5. Allowing for known variation in effort ensures that estimates of detection probability relate to a consistent unit of effort and improves the fit of detection models. Failure to account for varying effort may result in confounding between effort and density variation in time or space. Adjustment for effort allows rigorous analysis of unbalanced data with little extra cost in terms of precision or processing time. We suggest it should become routine in capture–recapture analyses. The methods have been made available in the R package secr.

Introduction

Capture–recapture is used to survey animal populations in which the individuals are too cryptic, too numerous or too mobile to count directly. In a typical application, traps or other detectors are placed at known points and operated over one or more time intervals (sampling occasions). Any animal caught or detected is identified individually by tagging or other means and released at the point of capture.

Conventional capture–recapture analyses are nonspatial: per capita detection probability may be modelled as time-specific or depending on previous experience of capture, but is unrelated to the location or nature of the detectors (Otis et al. 1978). Spatially explicit methods developed in the last 10 years use a more fine-grained and realistic model of the detection process. Each individual notionally occupies a fixed home range whose centre is an unknown point; a detection results when an individual interacts with a device or observer at a known spatial location and with known properties. By including the locations of both animal and detector as an integral part of the model, it has become possible to estimate density directly and to avoid the negative effects of unmodelled spatial variation in nonspatial models (Efford 2004; Efford et al. 2004; Borchers & Efford 2008; Royle & Young 2008; Efford et al. 2009a).

Detectors can be classified according to whether or not they detain animals (traps do, proximity detectors do not) and if they do not, whether each detector can detect the same animal more than once within an occasion (‘count’ proximity detectors can; binary proximity detectors cannot; Efford et al. 2009a,b). For example, an automatic camera may be operated as a binary proximity detector (a maximum of one shot per animal per occasion) or as a count proximity detector (accumulating detections throughout the interval). Multi-catch traps that allow multiple animals to be caught by one device are distinguished from single-catch traps.

The standard spatial capture–recapture design for sampling animal populations uses a fixed array of detectors each operated for the same time. Conventionally, sampling is carried out on several occasions, but sampling may be for a single interval if each animal is potentially detected multiple times in that interval (Efford et al. 2009b). The standard design is balanced in the sense that each detector is active for the same duration. In practice, it is common for detectors to be active for differing durations or for the duration to differ between sampling occasions. This can result from logistical constraints or weather forcing the premature closure of some devices, or from equipment failure. The intensity of area searches (Royle & Young 2008; Efford 2011) may differ between searched polygons, each of which may be viewed as a detection device. Unevenness may also be imposed after data collection if the analyst decides to pool data from groups of nearby detectors, or to combine data from consecutive occasions when the total number of occasions is not evenly divisible.

We use the term ‘varying effort’ for any known variation in effort between detectors or between sampling occasions. The spatial detection process comprises a large number of potential events, one for each animal × detector combination on each sampling occasion. A unique level of effort may pertain to each combination of detector × occasion. As sampling effort varies, we expect a commensurate change in the number of detections, which should be allowed for when modelling the data. Failure to allow for variable effort potentially results in bias due to misspecification of the detection model, and confounding between effort and density variation in time or space. Allowing for known variation in effort ensures that estimates of detection probability relate to a consistent unit of effort (e.g. one trap set for 1 day).

A similar issue arises in generalised linear modelling with a log link function when events are counted over differing, known intervals; varying exposure is then accommodated by including in the model an offset variable whose coefficient on the log scale is 1·0 (McCullagh & Nelder 1989). In open-population capture–recapture models, a linear adjustment on the log scale is routinely applied to survival estimates for varying inter-sample intervals (White & Burnham 1999, p. S137). The effects of known temporal variation in effort on capture probability may also be incorporated in these models by ad hoc manipulation of the design matrix, but we are not aware of any systematic theory. One exception is the so-called robust design, in which each primary sampling session comprises several secondary sessions (occasions; e.g. Kendall et al. 1997). Capture probability is parameterised in the robust design at the level of secondary sessions, and the number of secondary sessions may vary freely between primary sessions.
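For readers unfamiliar with offsets, a minimal sketch in R with invented data shows the mechanism: the known exposure enters the linear predictor with a fixed coefficient of 1·0 on the log scale, so no extra parameter is estimated for effort.

```r
# Poisson counts accumulated over known, varying exposures ('days').
# The offset fixes the coefficient of log(days) at 1, so effort is
# adjusted for without estimating an additional parameter.
# All variable names and values here are illustrative only.
counts <- c(4, 7, 1, 12, 3)
days   <- c(2, 5, 1, 10, 2)            # known exposure for each count
x      <- c(0.2, 0.5, 0.1, 0.9, 0.3)   # a covariate of interest
fit <- glm(counts ~ x + offset(log(days)), family = poisson)
summary(fit)
```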

Partial solutions for varying effort exist already in the context of spatially explicit capture–recapture (SECR). The spatial between-session models of Efford et al. (2009a) allow the number of occasions to vary between primary sessions as in the robust design, without affecting the parameterisation. Software has generally provided for a different subset of detectors to be active on each sampling occasion (Efford et al. 2004; Gopalaswamy et al. 2012; Efford 2013). It is straightforward in simulation-based methods [inverse prediction (Efford 2004) or Markov chain Monte Carlo] to simulate only over active detectors. Likewise, the likelihood (Borchers & Efford 2008; Efford et al. 2009b; Efford 2011) may be adapted for incomplete usage by setting the corresponding log likelihood components to zero (Efford 2013). Varying effort may be included in spatially explicit models fitted in the R package secr as a linear covariate on the link scale, varying across occasions or detectors or both (Efford 2013). However, this approach is suboptimal because it requires the estimation of one or more additional model coefficients, and the effect is not expected to be strictly linear on the logit link scale used routinely for detection probability, or the log scale used for expected Poisson counts.

Here we consider how to allow for known variation in effort more generally when modelling spatial detection. Borchers & Efford (2008) allowed the duration of exposure ($T_s$) to vary between sampling occasions in their competing hazard model for multi-catch traps, but to our knowledge, this adjustment for occasion-specific effort has yet to be applied. We generalise the method of Borchers & Efford (2008) to allow joint variation in effort over detectors and over time (occasions) and indicate how effort may be included in models for other detector types. We expect adjustment for varying effort to be more critical in analyses where (i) the variation is confounded with temporal (between-session) or spatial variation in density, and (ii) it is important to estimate the temporal or spatial pattern. For example, if detector usage was high in one part of a landscape, while density was constant, failure to allow for varying effort might produce a spurious density pattern. We performed simulations to assess alternative adjustments for effort in one such scenario.

We also analyse a dataset on grizzly bears (Ursus arctos) in which hair samples for DNA identification were collected over varying time intervals. This example both illustrates how controlling for effort (sampling interval) can unmask other effects and raises the further issue of incorporating sample decay in detection models.

We conclude that allowing for varying effort in SECR models is straightforward and should become routine. The methods have been made available in the R package secr (Efford 2013).

Models for effort

We use $T_{sk}$ for the effort on occasion s at detector k. For small $T_{sk}$, the number of detections and the probability of detecting a particular individual are expected to increase linearly with $T_{sk}$, and clearly there are no detections when $T_{sk} = 0$. Effort is relative to a standard unit $T_0$ such as one detector–day. Examples of possible effort variables are the number of days that each automatic camera was operated in a tiger study, or the number of rub trees sampled for DNA in each grid cell of a grizzly bear study. The key problem is that the response of detection probability to effort is inevitably nonlinear at higher levels of effort for detector types that limit the number of detections per individual per detector, such as binary proximity detectors.

The observations to be modelled are either binary (represented by $y_{sk}$, an indicator variable for the presence of an animal on occasion s at binary detector k) or integer (represented by $n_{sk}$, the number of detections on occasion s at count detector k). We assume the probability of detecting an individual declines with the distance $d_k(X)$ between a detector k and the animal's range centre at coordinates X = (x,y). The binary relationship is described by a spatial detection function $g(d_k(X); \theta)$, where θ is a vector of parameters. We define g(·) so that its intercept when $d_k(X) = 0$ is a nonspatial scale parameter $g_0$. For a concrete example, the half-normal detection function uses $g(d) = g_0 \exp[-d^2/(2\sigma^2)]$.

If the data are counts rather than binary observations, we may choose to define the spatial detection function as the decline in expected count with distance $d_k(X)$. We use the symbol $\lambda_0$ for the intercept ($\lambda(0) = \lambda_0$). For a particular distribution of the counts, we can switch back and forth between the binary and expected-count representations (e.g. $g(d) = 1 - \exp[-\lambda(d)]$ when the counts are Poisson-distributed). The transformation is nonlinear so, for example, a half-normal form for g(·) does not correspond to a half-normal form for λ(·). Other count models such as the negative binomial may sometimes be required, but we know of no examples of their use.
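For illustration, these definitions can be written directly in R (a standalone sketch of our own, independent of the secr package):

```r
# Half-normal detection function and the Poisson link between the binary
# and expected-count scales, as defined above.
g.halfnormal <- function(d, g0, sigma) g0 * exp(-d^2 / (2 * sigma^2))

# For Poisson-distributed counts: g(d) = 1 - exp(-lambda(d)), and inverse.
g.from.lambda <- function(lambda) 1 - exp(-lambda)
lambda.from.g <- function(g) -log(1 - g)

d <- seq(0, 100, 10)                        # distances in metres
g <- g.halfnormal(d, g0 = 0.1, sigma = 20)  # detection probabilities
lambda.from.g(g)                            # expected counts; nonlinear map
```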

The function g(·) or λ(·) is used to model per capita detection probability ($p_{sk}(X)$) or the expected number of detections per individual ($\Lambda_{sk}(X)$) per unit of effort $T_0$ (e.g. detector–day), and these in turn are used to compute the probability of observing each $y_{sk}$ or $n_{sk}$. The probability of observing a particular detection history is the product of a sequence of one or more such probabilities, assuming independence between occasions. The full likelihood combines the model for the probability of each detection history, depending on parameter vector θ (or its expected-count analogue θ′), with a spatial point process model for the distribution of home-range centres, depending on parameter vector ϕ (including population density). Numerical maximisation of the likelihood yields estimates of θ or θ′ and ϕ. Interested readers should consult Borchers & Efford (2008), Efford et al. (2009a) and Efford (2011) for a more complete description of the method.

Binary detectors

Our approach to effort adjustment for binary detectors (traps and binary proximity detectors) is based on the proportional hazard model of Borchers & Efford (2008) extended for detector-specific effort. We assume the effect of varying effort to be linear on the hazard scale. The hazard of detection for an individual located at X given a standard level of effort $T_0$ is $h_k(X) = -\ln[1 - g(d_k(X))]$, so the hazard adjusted for effort is $T_{sk}\,h_k(X)/T_0$. This is back-transformed to a probability of detection (Table 1). The expression $p_{sk}(X) = 1 - [1 - g(d_k(X))]^{T_{sk}/T_0}$ for binary proximity detectors results from expanding and simplifying $1 - \exp[-T_{sk}\,h_k(X)/T_0]$. This simplifies further to $g(d_k(X))$ when $T_{sk} = T_0$.

Table 1. Spatially explicit capture–recapture (SECR) models for the detection ($p_{sk}(X)$ or $\Lambda_{sk}(X)$) of an animal centred at X in detector k on occasion s, allowing for effort $T_{sk}$ at different types of detector

Multi-catch trap: $p_{sk}(X) = \dfrac{T_{sk}\,h_k(X)}{\sum_{k'=1}^{K} T_{sk'}\,h_{k'}(X)}\left\{1 - \exp\left[-\sum_{k'=1}^{K} T_{sk'}\,h_{k'}(X)/T_0\right]\right\}$

Binary proximity: $p_{sk}(X) = 1 - \left[1 - g(d_k(X))\right]^{T_{sk}/T_0}$

Poisson count: $n_{sk} \sim \mathrm{Poisson}\left[\Lambda_{sk}(X)\right]$, where $\Lambda_{sk}(X) = (T_{sk}/T_0)\,\lambda(d_k(X))$

Binomial count: $n_{sk} \sim \mathrm{Binomial}\left[B_{sk},\, g(d_k(X))\right]$, where $B_{sk} = \sum_{s'}\sum_{k'} T_{s'k'}$

  1. Effort at multi-catch traps is included implicitly via the individual hazard $h_k(X) = -\ln[1 - g(d_k(X))]$. For binomial counts aggregated over $S'$ occasions at $K'$ detectors, $T_{s'k'} \in \{0, 1\}$ indicates whether a detector was active on each original occasion and $B_{sk}$ substitutes for $T_{sk}/T_0$. See text for other symbols.

If an animal can be detected at most once on any occasion, then detectors (‘multi-catch traps’) ‘compete’ for animals, and we require a competing hazard model in which $h_k(X)$ appears only as a fraction of the total hazard of detection for an individual, summed across all K detectors ($H_s(X) = \sum_{k=1}^{K} T_{sk}\,h_k(X)/T_0$; Borchers & Efford 2008; Table 1). Adjustment for effort is achieved by scaling each location-specific hazard ($h_k(X) \rightarrow T_{sk}\,h_k(X)/T_0$).
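For concreteness, the two binary-detector rows of Table 1 can be written out in R (our own illustration, independent of secr):

```r
# Effort adjustments for binary detectors from Table 1.
hazard <- function(g) -log(1 - g)          # h_k(X) = -ln[1 - g(d_k(X))]

# Binary proximity detector: p_sk(X) = 1 - [1 - g]^(T_sk/T0).
p.proximity <- function(g, Tsk, T0 = 1) 1 - (1 - g)^(Tsk / T0)

# Multi-catch trap: effort-scaled hazards compete across the K traps on one
# occasion. 'g' and 'Tsk' are vectors over traps for one animal and occasion.
p.multicatch <- function(g, Tsk, T0 = 1) {
  h <- Tsk * hazard(g) / T0                # effort-scaled hazard per trap
  H <- sum(h)                              # total hazard over all traps
  (h / H) * (1 - exp(-H))                  # p_sk(X), one element per trap
}

p.proximity(0.1, Tsk = 2)                  # doubled effort: 0.19, not 0.20
```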

Unbounded counts

An unbounded count detector may register an individual or its cues multiple times in any interval. If the count $n_{sk}$ has a Poisson distribution, then the adjustment of λ(·) for varying effort is linear (Table 1). This follows from formulating λ(·) as the integral of the hazard of detection per unit effort, which makes it consistent with the linear hazard adjustment for binary data. The adjusted Poisson count model may also be parameterised in terms of the probability of at least one detection: $p_{sk}(X) = 1 - \exp[-T_{sk}\,\lambda(d_k(X))/T_0]$.
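The Poisson row of Table 1 is equally direct (continuing the sketch above):

```r
# Poisson count detector: the expected count is linear in effort, and the
# probability of at least one detection follows from the Poisson model.
Lambda.poisson <- function(lambda, Tsk, T0 = 1) Tsk * lambda / T0
p.atleastone   <- function(lambda, Tsk, T0 = 1) 1 - exp(-Tsk * lambda / T0)
```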

Binomial counts

Counts are sometimes modelled as binomial with known size. This is appropriate, for example, when data have been aggregated across a known number of occasions, each representing a binary (Bernoulli-distributed) opportunity for detection with constant probability $g(d_k(X))$ (Efford et al. 2009b). Data may also be aggregated across space from binary proximity detectors, each possibly used on a different set of occasions. Aggregation across space may be justified and efficient when several detectors are close together, relative to the spatial scale of animal movement and detection. Whether aggregation is across time or space or both, $B_{sk}$ is the aggregate number of opportunities for detection within the redefined ‘detector’ k and ‘occasion’ s. Formally, $B_{sk} = \sum_{s'=1}^{S'} \sum_{k'=1}^{K'} T_{s'k'}$, where $T_{s'k'} \in \{0, 1\}$ is an indicator for whether original detector k′ was active on original occasion s′, and $S' \times K'$ opportunities were aggregated. If the original effort matrix contains some zeros, the aggregated effort is also likely to vary (i.e. $B_{sk}$ is a non-negative integer specific to the occasion and the detector). Here, $B_{sk}$ substitutes for $T_{sk}/T_0$ as the measure of effort, and $T_0$ is implicitly a single binary detector × occasion opportunity for detection. Models for aggregated (binomial) counts bear a simple relationship to those for the raw (binary) data (Efford et al. 2009b, Appendix 1), and we do not consider them further.
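The bookkeeping for $B_{sk}$ amounts to counting active detector × occasion combinations; a minimal sketch, assuming a 0/1 usage matrix for the original detectors and occasions:

```r
# Binomial size B_sk for aggregated binary data: count the active original
# detector x occasion combinations pooled into each new unit (illustration).
usage.orig <- matrix(c(1, 1, 0,    # detector 1 inactive on occasion 3
                       1, 0, 1,    # detector 2 inactive on occasion 2
                       1, 1, 1),   # detector 3 active throughout
                     nrow = 3, byrow = TRUE)

# Pool all three detectors and all three occasions into one new unit:
B.sk <- sum(usage.orig)            # 7 opportunities for detection, not 9
```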

Materials and methods

Field example

Grizzly bears have been surveyed for several years in the southern Rocky Mountains of British Columbia, Canada, using baited barbed-wire hair snares as in Mowat et al. (2005). We analysed a subset of data on female grizzlies in the South Rockies grizzly bear population unit from 2007 (G. Mowat, unpublished data). In that year, 66 snares were established over 13–20 June and a further five over 27–30 June (Fig. 1); snares were dispersed one per 10-km × 10-km grid square over 6700 km² (convex hull of sites; median distance to nearest site 6·9 km, range 2·6–12·2 km). Snares were checked after 8–20 days, and most snares were checked twice (Fig. 1). Groups of hairs collected from the wire were assigned individual identities based on their DNA microsatellite genotype (Paetkau 2003). Each check potentially yielded hairs from multiple individuals, and multiple samples relating to one individual were treated as a single record. The snares therefore functioned as binary proximity detectors (Efford et al. 2009a).

Figure 1.

Timing of hair snare setup (open circles) and checks (filled circles) in grizzly bear study, southern Rocky Mountains, British Columbia, 2007.

We considered SECR models both with and without adjustment for the variation in sampling interval $T_{sk}$. The adjustment followed the specification for binary proximity detectors in Table 1, with a reference duration of $T_0 = 1$ day. Bear behaviour may have varied over time, so we also considered models with and without a detector-specific temporal covariate, the date of the midpoint of each sampling interval; the parameter $g_0$ was modelled as a linear function of date, on the logit scale. The region of integration (the notional extent of habitat) was a rectangle extending 20 km from the outermost sites in the cardinal compass directions. Models were fitted by maximising the full likelihood in secr 2.5.0, and other settings followed the defaults in function ‘secr.fit’ (Efford 2013). We used the Akaike Information Criterion with small-sample adjustment (AICc) to compare models (Burnham & Anderson 2002). ΔAICc was the difference in AICc between a particular model and the model with the smallest AICc (the preferred model).
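In secr, such a model set can be specified along the following lines. This is a sketch only: the capthist object ‘bearCH’, the usage matrix ‘intervalDays’ and the covariate ‘date’ are our placeholder names, not objects from the original analysis.

```r
library(secr)
# Sketch of the grizzly bear model set, assuming a capthist object 'bearCH'
# whose traps carry (i) a usage matrix of interval lengths in days (so that
# T0 = 1 day) and (ii) a detector x occasion covariate 'date' of interval
# midpoints, e.g. set up with timevaryingcov(). Names are ours.
# usage(traps(bearCH)) <- intervalDays      # detectors x occasions, in days

fitV  <- secr.fit(bearCH, buffer = 20000, model = g0 ~ 1)     # model V
fitVT <- secr.fit(bearCH, buffer = 20000, model = g0 ~ date)  # model VT
# Models N and T would be refitted after resetting usage to a constant.
AIC(fitVT, fitV)                            # models ranked by AICc
```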

Simulations

We contrived two simulated examples to illustrate the effects of controlling for temporal and spatial variation in effort (Table 2). Both examples used a 10 × 10 square grid of binary proximity detectors with spacing s equal to the half-normal scale parameter σ, operated for five sampling occasions. Home-range centres followed a homogeneous Poisson distribution in a region extending 5σ beyond the detectors. On each occasion, a simulated individual was randomly detected or not detected independently at each detector, with probability determined by effort and a half-normal function of distance between the detector and home-range centre. In the temporal scenario, effort was uniform across detectors, but varied between occasions. In the spatial scenario, effort varied 10-fold in a linear west-to-east gradient across the grid. One hundred random datasets were generated for each scenario. Datasets were generated with the function ‘sim.capthist’ (using s = 20 m) and analysed with function ‘secr.fit’ in secr 2.5.0 (Efford 2013; see R code in Supporting information).

Table 2. Scenarios for temporal and spatial variation in effort, for a population sampled with a 10 × 10 array of detectors over five occasions

Scenario    D    $g_0$    σ    Variation in effort
Temporal    …    0·1    1·0 s    Occasions 1–5: {1, 2, 2, 3, 3}
Spatial    …    0·1    1·0 s    Detector columns 1–10: {0·5, 1, 1·5, 2, 2·5, 3, 3·5, 4, 4·5, 5}

  1. D is the population density; $g_0$ and σ are parameters of a half-normal spatial detection function $g(d) = g_0 \exp[-d^2/(2\sigma^2)]$. The unit of distance for D and σ is the spacing between detectors, s. Values in braces are levels of effort $T_{sk}$.

We fitted three models to the data from each simulation: one with the hazard-based effort adjustment from Table 1, one with a covariate-based effort adjustment and a null model with no effort adjustment. For the temporally varying scenario, the covariate-based adjustment treated $g_0$ as a function of a 3-level categorical predictor tcov ($g_0 \sim$ tcov in secr) with levels varying across sampling occasions as shown in Table 2. For the spatially varying scenario, the covariate-based adjustment treated $g_0$ as a linear function of a detector-level spatial covariate kcov, the x-coordinate of the detector ($g_0 \sim$ kcov). For that scenario, we assumed that a study objective was to estimate a linear west–east trend in density, so all three models included a term D ∼ x, although the true value of this coefficient was zero. Using detector spacing s as the distance metric, the gradient has units s⁻³; a gradient of 0·04 s⁻³ corresponds to a density gradient of 5·0 ha⁻¹ per 100 m when s = 20 m. The spatial model is an extreme example of confounding and a questionable study design, but one that reveals interesting model behaviour. Models were fitted and compared as in the field example. Extent of habitat and the shape of the fitted detection function followed the generating model. An identity link was used for density, a logit link for $g_0$ and a log link for σ. The relative bias of the density estimate $\hat{D}$ was estimated by the average over replicates of $(\hat{D} - D)/D$, where D was the true density. The precision of $\hat{D}$ was expressed as the relative standard error $\widehat{\mathrm{SE}}(\hat{D})/\hat{D}$, where $\widehat{\mathrm{SE}}(\hat{D})$ was an asymptotic estimate (Borchers & Efford 2008).
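One replicate of the temporal scenario might be generated and fitted as follows (a sketch under the settings above; the density value is an arbitrary placeholder):

```r
library(secr)
# One replicate of the temporal scenario, with sigma = spacing = 20 m and a
# 5*sigma buffer. Recent versions of secr should honour the usage attribute
# in both simulation and fitting; D = 5/ha is a placeholder value.
grid <- make.grid(nx = 10, ny = 10, spacing = 20, detector = "proximity")
usage(grid) <- matrix(rep(c(1, 2, 2, 3, 3), each = nrow(grid)), ncol = 5)

ch <- sim.capthist(grid, popn = list(D = 5, buffer = 100), detectfn = "HN",
                   detectpar = list(g0 = 0.1, sigma = 20), noccasions = 5)

# Hazard-based adjustment: effort is read from the usage attribute, so no
# extra coefficients are estimated.
fit.usage <- secr.fit(ch, buffer = 100, detectfn = "HN")

# Covariate-based adjustment: strip usage and model g0 on a 3-level factor.
usage(traps(ch)) <- NULL
fit.tcov <- secr.fit(ch, buffer = 100, detectfn = "HN", model = g0 ~ tcov,
                     timecov = data.frame(tcov = factor(c(1, 2, 2, 3, 3))))
```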

Results

Field example

The grizzly bear dataset comprised 104 detections of 77 females; 13 of the 27 repeat detections were at the same site as the initial detection, and the others ranged from 4·7 km to 14·6 km away. Models differed only slightly in overall fit and yielded nearly the same estimates of density (2 bears per 100 km²; Table 3). Adjustment for varying sampling interval revealed a temporal trend in daily detection probability (slope −0·03 day⁻¹ on the logit scale). This corresponded to a decline in $g_0$ from 0·041 day⁻¹ (SE 0·012 day⁻¹) on 19 June to 0·021 day⁻¹ (SE 0·006 day⁻¹) on 11 July. The sampling interval predicted by linear regression increased from 10·1 to 17·9 days between these dates.
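The reported decline can be recovered from the fitted intercept and slope by back-transforming from the logit scale (our own check, using the model VT estimates in Table 3):

```r
# Model VT: g0 = 0.029 on 1 July, slope -0.030 per day on the logit scale.
# Predicting 12 days earlier (19 June) and 10 days later (11 July) recovers
# the quoted daily rates, up to rounding of the published estimates.
logit    <- function(p) log(p / (1 - p))
invlogit <- function(x) 1 / (1 + exp(-x))
b0 <- logit(0.029)                 # intercept at 1 July
invlogit(b0 + 0.030 * 12)          # 19 June: ~0.041 per day
invlogit(b0 - 0.030 * 10)          # 11 July: ~0.022 per day (0.021 quoted)
```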

Table 3. Spatially explicit capture–recapture (SECR) models fitted to data on female grizzly bears from hair snares in the southern Rocky Mountains of British Columbia, 2007

Model    np    LL    ΔAICc    $\hat{D}$ (km⁻²)    $\hat{g}_0$    $\hat{k}$    $\hat{\sigma}$ (km)
VT    4    −209·6    0·0    0·023 (0·004)    0·029 (0·009)    −0·030 (0·015)    3·85 (0·39)
N    3    −211·4    1·4    0·022 (0·004)    0·367 (0·085)    –    3·97 (0·39)
V    3    −211·7    2·0    0·023 (0·004)    0·031 (0·008)    –    3·85 (0·39)
T    4    −211·8    4·4    0·019 (0·003)    0·412 (0·092)    0·001 (0·024)    3·89 (0·37)

  1. Models differed in whether they included adjustment for varying sampling interval (V), a linear effect of date on $g_0$ with slope k on the logit scale (T), both (VT) or neither (N). For models VT and T, $\hat{g}_0$ is the predicted value on 1 July; $\hat{g}_0$ was scaled to a daily rate in models VT and V. Model descriptors: ‘np’ number of parameters, ‘LL’ log likelihood and ‘ΔAICc’ AICc relative to the best model. Estimated SE is shown in parentheses.

Simulations

The temporal scenario generated on average 129 detections of 35 animals. The model with linear effort adjustment on the hazard scale was selected by AICc in 93 of 100 simulations. The null model was selected only three times and typically lagged behind the best model by a large margin (interquartile range of ΔAICc 12·4–22·8). The model using a covariate-based adjustment for effort was preferred by AICc in only four simulations, but its ΔAICc was generally small (interquartile range 2·5–5·0), commensurate with its two additional parameters. Despite the large discrepancies in AICc, all models yielded essentially the same density estimates, negligible relative bias (−0·036, SE 0·018 for all models) and the same estimated precision (Fig. 2).

Figure 2.

Spatially explicit capture–recapture (SECR) models differing in effort adjustment fitted to simulated data with temporal variation in effort: ‘usage’ hazard adjustment, ‘tcov’ time-varying covariate and ‘null’ no adjustment for effort. True density and other generating values are as in Table 2 (s is the detector spacing). See text for details.

The spatial scenario generated on average 322 detections of 57 animals, and the number of detections per detector increased from west to east. The model with linear effort adjustment on the hazard scale was selected by AICc in 91 of 100 simulations. The model using a covariate-based adjustment for effort was preferred in the remaining nine simulations; in general, ΔAICc for the covariate model was considerable (interquartile range 2·4–6·8) and greater than could be explained by the penalty for fitting one more parameter. The null model was not selected in any simulation (ΔAICc interquartile range 36·6–60·3). Estimated density at the midpoint of the fitted gradient was nearly unbiased when allowance was made for varying effort using either the hazard model (relative bias …, SE 0·015) or the linear covariate model (…, SE 0·014); the estimate was negatively biased if there was no adjustment for varying effort (…, SE 0·014). The relative standard error of the midpoint density estimate was nearly the same across all models (0·134 for both effort models, and 0·131 for the null model). Estimates of the density gradient (true value zero) using the hazard-based effort adjustment were nearly unbiased (estimated gradient …, SE …), whereas adjustment with a detector-level covariate resulted in slight positive bias (…, SE …). Failure to adjust for effort in the ‘null’ model led to erroneous estimates of a strong positive gradient (…, SE …), owing to the confounding of effort and density (Fig. 3).

Figure 3.

Spatially explicit capture–recapture (SECR) models differing in effort adjustment fitted to simulated data with a spatial gradient in effort: ‘usage’ hazard adjustment, ‘kcov’ detector-level covariate and ‘null’ no adjustment for effort. All fitted models included a gradient in density although the true gradient had zero slope. See text for details.

Discussion

We have included a measure of effort on a ratio scale ($T_{sk}$) directly in formulae for the detection models $p_{sk}(X)$ or $\Lambda_{sk}(X)$. In no case is it necessary to estimate additional parameters. For binary detectors, the cumulative hazard of detection is assumed to increase linearly with $T_{sk}$. Doubling $T_{sk}$ doubles the hazard, which results in a less than twofold increase in detection probability, and the fitted detection parameter $g_0$ corresponds to $T_{sk} = T_0$. For Poisson count detectors, the equivalent adjustment is linear because expected counts and the parameter $\lambda_0$ are effectively on an integrated-hazard scale. When data from a ragged collection of binary detectors are aggregated across time or space or both, the result is binomial with size parameter equal to the number of component Bernoulli distributions, each corresponding to one opportunity for detection. For effort adjustment, it is sufficient to keep track of the number of components. Both the hazard-based and binomial adjustments apply cumulatively when aggregated data are aggregated further.

In our limited temporal simulations, adjustment for effort had no discernible effect on the density estimates themselves, while substantially improving the fit of the detection model. The primary benefit from effort adjustment may be in controlling for variation that would otherwise be attributed wrongly to other covariates. We contrived an example in which a spatial trend in effort potentially resulted in false inference regarding spatial trend in density when a null model was fitted ignoring the variation in effort. Direct (hazard-based) adjustment for effort was more parsimonious than covariate-based adjustment (avoiding the estimation of an extra parameter) and resulted in less-biased estimation of the density gradient.

The grizzly bear results appear also to indicate confounding in the naive model, in this case between sampling interval and detection probability: longer intervals later in the season masked a decline in daily detection probability. This interpretation assumes both a uniform rate of sample accumulation within each interval and a constant probability that a sample will yield usable DNA, regardless of sample age. If the DNA half-life is such that there is substantial decay within sampling intervals, then assuming a constant hazard over the sampling interval is potentially misleading. It is possible, for example, that the inferred temporal decline in detection probability was an artefact of lower average quality of samples from the longer intervals late in the study, or lower average accumulation rate over longer intervals due to declining attractiveness of the lure.

Rigorous analysis of varying-interval, passively collected DNA data would combine a model of sample accumulation, such as the constant-hazard model used here, with a model of sample usability, such as exponential decay. DNA half-life might be estimated as an additional parameter from the present data or from a dataset augmented with the numbers of samples for which genotyping failed. We can obtain an approximation by the following back-of-the-envelope method. Between 19 June and 11 July, predicted $g_0$ fell from 0·041 day⁻¹ to 0·021 day⁻¹; over the same period, the sampling interval increased from 10·1 to 17·9 days. If exponential decay of the hair samples was solely responsible for the decline in $g_0$, an approximate DNA sample half-life can be computed from these values.
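A minimal sketch of this calculation, assuming that samples accumulate uniformly within an interval (so their mean age at checking is half the interval length) and attributing the whole decline in the daily rate to decay:

```r
# Back-of-the-envelope DNA half-life from the decline in daily detection
# rate. Assumes the mean age of hair samples at checking is T/2, and that
# the probability a sample yields usable DNA decays exponentially with age.
# An illustration under these assumptions, not a fitted estimate.
g0    <- c(0.041, 0.021)    # predicted daily rate, 19 June and 11 July
T.int <- c(10.1, 17.9)      # sampling interval (days) on the same dates

# Usable fraction ~ exp(-k * T/2), so
# log(g0[1] / g0[2]) = k * (T.int[2] - T.int[1]) / 2.
k <- 2 * log(g0[1] / g0[2]) / (T.int[2] - T.int[1])  # decay rate per day
log(2) / k                  # implied half-life: approximately 4 days
```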

There are other scenarios in which absolute duration does not equate with effort, and caution is required. Consider trapping an animal that is active for only a few hours a day. For example, brushtail possums Trichosurus vulpecula Kerr are generally caught soon after emerging from their daytime dens at dusk (Cowan & Forrester 2012). Traps set in the late afternoon and checked early the next morning can be expected to catch as many animals as traps set in the middle of one day and checked in the middle of the next, despite being open for fewer hours. The integrated hazard is in this case not proportional to clock time within a day, although it may be proportional to time measured in whole days.

Asynchronous sampling

Sampling times differed between sites in the grizzly bear example (Fig. 1). We call this ‘asynchronous sampling’; asynchrony may be due to variation in the initiation of sampling, its duration or both. Conventional closed-population capture–recapture requires that sample times coincide across a detector array (Otis et al. 1978). Data from sub-arrays sampled at different times may be aligned post hoc according to their starting times, but the histories of animals exposed to multiple sub-arrays are ambiguous (e.g. should animal x detected only in sub-array A also have zeros in its capture history for times it was not caught in adjoining sub-array B while B was operating and A was not?).

Spatially explicit capture–recapture largely avoids the requirement for synchronous sampling by modelling detection at the level of each detector, rather than the entire array. Detector-specific adjustment for varying effort, as described here, addresses the duration component of asynchrony. Indexing of successive sampling occasions need not correspond across detectors. For example, the detectors sampled in the bear study for only one interval could be assigned to occasion 1 or occasion 2 with no effect on the result, given zero effort on the alternate occasion.

Certain SECR models do assume synchronous sampling, particularly the spatial analogues of models Mb (general learned response) and Mt (occasion-specific detection probability; Otis et al. 1978). Other, possibly better, SECR models may be substituted. For example, the learned response may be specific to a detector (‘bk’ models in secr) or, as in our bear example, temporal variation may be a continuous function of date (the midpoint of each sampling interval) rather than differing on each occasion.

Methods that require nearly synchronous sampling at all detectors impose a major logistical burden on field studies, especially those requiring significant travel and setup time for each detector. Relaxing this constraint may enable larger studies that result in more precise and more representative estimates. We encourage the careful use of asynchronous sampling designs, recognising that abuse is possible. The total duration of sampling should be consistent with the closure assumption (Otis et al. 1978), and sampling should follow a rigorous spatial design.

Aggregation and variable effort

The methods we describe allow a proper accounting for effort when data are aggregated across occasions or detectors. The initial definition of sampling occasions is arbitrary when sampling devices such as automatic cameras are operated continuously. It is convenient to enter data with high resolution (e.g. hourly or daily) and to aggregate for analysis (the secr function ‘reduce’ – Efford 2013). In general, the resulting data will be counts even when the components are binary, as retaining a binary structure would imply the discarding of information on repeated detections.
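A sketch of such aggregation, assuming a capthist object ‘dailyCH’ entered at daily resolution (this reflects our reading of the ‘reduce’ interface; consult ?reduce.capthist for current arguments):

```r
library(secr)
# Pool daily occasions into weeks. Requesting a 'count' output detector
# retains repeated detections within each week rather than collapsing them
# to binary, and the effort (usage) attribute is aggregated to match.
weeklyCH <- reduce(dailyCH, by = 7, outputdetector = "count")
```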

The binomial size adjustment is suitable only when there is a fundamental unit of binary data with constant effort. We have not proposed any method for binomial aggregation of binary data when the probability varies among the component Bernoulli trials because of varying effort, temporal or spatial variation, or a response to previous detection. The penalty for aggregation is that one loses the ability to model variation in detection probability at finer scales. However, modelling detection probability is usually not an end in itself, data are often too sparse to support fine-scale models and density estimates are typically robust, so we doubt this is often a significant issue.

Single-catch traps

Single-catch traps do not figure in Table 1 because no likelihood model is known and there is no simple expression for $p_{sk}(X)$. A simple SECR model may nevertheless be fitted by simulation and inverse prediction (Efford 2004). This method resolves the simultaneous competition between animals for traps and between traps for animals with a time-to-event model based on the distance-determined hazard for each animal × trap combination. The method may be adapted for varying effort by including a linear effect of effort on each animal × trap hazard, as for multi-catch traps.

Polygons and transects

Binary or count data from searches of polygons or transects (Efford 2011) do not raise any new issues for including effort, at least when effort is homogeneous across each polygon or transect. Binary data arise when animals may be detected at no more than one place on a particular occasion; in this case, the effort-adjusted hazard may be used in a competing hazard model as for multi-catch traps. Count data commonly arise from searches for sign, when multiple cues may be detected from one animal on a particular occasion. Assuming a Poisson model for the counts, a linear effort adjustment is appropriate. Effects of varying polygon or transect size are automatically accommodated in the models of Efford (2011). Models for varying effort within polygons or transects have not been needed for problems encountered to date. Such variation may in any case be accommodated by splitting the searched areas or transects into smaller units that are more nearly homogeneous.

General comments

Our treatment addresses spatial applications of capture–recapture. It is also appropriate to adjust capture probability in nonspatial applications for known variation in effort. Existing nonspatial software (White & Burnham 1999) allows adjustment by means of a linear model on the logit scale ($\mathrm{logit}(p_s) = \beta_0 + \beta_1 T_s$, where $p_s$ is the capture probability on occasion s, and $T_s$ is the corresponding effort). This may approximate a hazard-scale adjustment, but it does not match it exactly, does not relate $p_s$ to a standard unit of effort and requires $\beta_1$ to be estimated. Direct implementation of the hazard adjustment, as we have carried out for SECR, is less restrictive. Whether this is warranted requires further investigation.
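The approximate, but inexact, correspondence between the two adjustments is easy to see numerically (our own sketch):

```r
# Compare the logit-linear effort model used in nonspatial software with
# the hazard-scale adjustment for a binary detector. The logit version
# requires a fitted coefficient b1; the hazard version requires none.
g0  <- 0.1                       # detection probability per unit effort
Tsk <- seq(0.5, 5, 0.1)          # varying effort
p.hazard <- 1 - (1 - g0)^Tsk     # hazard-scale adjustment (Table 1)
b1 <- 0.5                        # example value of the logit coefficient
p.logit  <- plogis(qlogis(g0) + b1 * (Tsk - 1))
plot(Tsk, p.hazard, type = "l", xlab = "Effort (multiples of T0)", ylab = "p")
lines(Tsk, p.logit, lty = 2)     # similar in shape, but not identical
```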

The formal methods suggested here are expected to give unbiased estimates that are independent of the absolute level of effort for large samples in the absence of unmodelled heterogeneity. In other words, they may not adjust perfectly for varying effort in the real world. It is not known how they perform when there is substantial individual heterogeneity in detection probability. Large effort increases detection probability and thereby reduces both the absolute magnitude of any unmodelled heterogeneity and consequential bias in population estimates.

Acknowledgements

We thank Deanna Dawson for her comments on an early draft, and the reviewers and associate editor for their helpful suggestions.
