Lloyd Guy Pardon, Key Centre for Tropical Wildlife Management, Northern Territory University, Darwin NT 0909 Australia. Tel: 08 89466760; Fax: 08 89467088; E-mail: email@example.com
1. More than half of all Australian bandicoot species (family Peramelidae) are listed by the IUCN as extinct or threatened, and changed fire regimes in arid and semi-arid Australia have been identified as an important agent in their decline. The northern brown bandicoot is currently one of Australia's most common bandicoots, but its continued persistence in the tropical savannas cannot be taken for granted. Previous studies in Kakadu National Park, Northern Territory, have shown this species to be prone to sudden declines in abundance, possibly linked to the occurrence of intense fires.
2. Here we examine the impact of four experimental fire management regimes (fire prevention, early dry season burning, late dry season burning and progressive burning several times through the dry season) on survival of the northern brown bandicoot. The analysis is based on capture–mark–recapture data obtained during a landscape-scale fire experiment conducted at Kapalga, in Kakadu National Park, from 1989 to 1995.
3. All experimental fire treatments (including total fire exclusion) were associated with a decline in survival rates over time, indicating that none of the tested approaches was appropriate for this species. Burning in the late dry season, or progressively throughout the dry season, produced substantially more severe declines in survival than did early dry season fires or fire exclusion.
4. Fire regime was found to be the most important determinant of bandicoot survival, far exceeding other factors such as sex, age, vegetation type, rainfall and season, all of which had comparatively little influence. The results demonstrate the importance of the frequency and seasonal timing of fires in determining the survival of bandicoots, and suggest that spatially uniform and temporally invariant fire regimes are inappropriate for bandicoot conservation in the north Australian savannas.
While the patterns of mammal decline have been most severe in arid environments, there is accumulating evidence that declines are extending northward into the wet–dry tropics (Woinarski, Milne & Wanganeen 2001) and that these populations can no longer be considered secure. Previous research in the tropical savannas of northern Australia has postulated two probable agents associated with this apparent decline. Surveys at Kapalga in Kakadu National Park, from 1986 to 1993, revealed that a number of small mammal species were undergoing substantial declines in abundance, and that this was correlated with falling groundwater levels and below-average rainfall over an 8-year period (Braithwaite & Muller 1997). Woinarski et al. (2001) resampled the same trap sites in October 1999 but found no support for the hypothesis that mammal abundance was linked to groundwater, noting that many species had not recovered despite many years of above-average rainfall. Woinarski et al. (2001) proposed that the most likely cause of mammal decline in the northern savannas was progressive environmental change associated with the widespread and continuing loss of Aboriginal fire management. The previously highly mobile Aboriginal population was moved into fixed settlements, and consequently most of northern Australia now suffers from a lack of regular, active fire management, resulting in a much higher frequency of extensive and destructive wildfires (Russell-Smith et al. 2000). However, the inability of these previous studies to identify clear and direct causes of the observed small mammal declines hampers their utility for management of declining populations (Caughley 1994).
To evaluate the hypothesis that fire regime is a dominant factor in small mammal decline in Australian tropical savannas, this study investigated the influence of four different fire regimes on the survival of the northern brown bandicoot (Isoodon macrourus Gould) over a large study area (300 km2) at the Kapalga Research Station, located in an intact savanna landscape in Kakadu National Park in the wet–dry tropics of northern Australia. The northern brown bandicoot is a small (< 2 kg) ground-dwelling omnivorous marsupial, currently regarded as one of Australia's most common bandicoots (Gordon 1995), and is considered one of the most common and widespread small mammals in the Kakadu region (Press, Brock & Andersen 1995). Despite its common status, I. macrourus appears to have contracted along the drier inland margin of its range, in the pastoral zone of south-east Queensland (Gordon 1974a) and the Gulf of Carpentaria hinterland in the Northern Territory (Johnson & Southgate 1990). Most other Australian bandicoots have fared far worse, with more than half of Australia's bandicoot species (family Peramelidae) becoming extinct or threatened (IUCN 2000) since European colonization. Previous studies of I. macrourus conducted at Kapalga have shown this species to be prone to large fluctuations in abundance, linked apparently to climatic cycles and the occurrence of intense fires (Friend 1990).
Here, we investigate the influence of four experimentally imposed fire regimes on the survival probability of bandicoots at the Kapalga study area from 1989 to 1995. Specifically, we set out to test two hypotheses: first, that fire regime was the primary factor influencing bandicoot survival rate (compared with other assessable factors such as sex, age, habitat, rainfall and season); and second, that survival rates were lower in areas experiencing annual fires in the late dry season than in those exposed to annual fires in the early dry season (due to seasonal differences in fire intensity and behaviour; Williams, Gill & Moore 1998). To test these hypotheses it was necessary to overcome the methodological constraints of previous analyses. Braithwaite & Muller (1997) and Woinarski et al. (2001) drew on abundance indices based on mean annual trap success rates; however, such indices failed to maximize the utility of the capture–mark–recapture (CMR) data from which they were derived. Aggregation of bimonthly trapping data to an annual mean obscured finer temporal variation, such as seasonal cycles, and prevented any determination of whether declines coincided with the time of burning. The annual trap success index also carries an implicit assumption of equal catch rates among all animals at all locations and at all times; any variation in catch rates therefore obscures underlying variation in more informative demographic parameters such as survival probability (Pollock et al. 1990). CMR data can provide powerful insight into population processes, as they implicitly relate demographic, environmental and temporal parameters.
Recent developments in CMR analysis (Burnham, White & Anderson 1995; Burnham & Anderson 2001), and the software that simplifies it (White & Burnham 1999; White, Burnham & Anderson 2001), allowed us to look beyond simple abundance indices and to perform a more thorough and rigorous investigation of the factors influencing survival probability, a key demographic parameter of substantially greater utility and reliability (Lebreton et al. 1992).
Mark–recapture data were collected at the CSIRO Kapalga Research Station in the Northern Territory of Australia, 160 km east of Darwin (12°43′ S, 132°26′ E). The landform and vegetation of this 300 km2 area has remained largely unmodified by human activity following European settlement, consisting of a low ridge and gentle side slopes, with mainly open forest and woodland vegetation dominated by Eucalyptus tetrodonta and Eucalyptus miniata. The climate is monsoonal and is characterized by a humid wet season between November and March, when approximately 90% of the average annual rainfall of 1381 mm falls (meteorological data for Oenpelli 1910–2001, 70 km east of the study site, Bureau of Meteorology 2001). Temperatures are high all year, varying between a mean daily minimum of 18·2 °C in June to a mean daily maximum of 37·4 °C in September (Bureau of Meteorology 2001).
From 1989 to 1995, Kapalga was the site of a landscape-scale fire experiment that involved systematic burning of the study area within large experimental compartments. A detailed description of the study area and fire treatments is provided by Andersen et al. (1998) and we summarize here from that source. Each compartment was defined by a separate subcatchment and separated by a fixed firebreak. The area of each compartment was approximately 15–20 km2. Four fire treatments were assigned randomly to experimental compartments as shown in Fig. 1, and applied to those compartments every year for 5 years from 1990 to 1994. The stated aim of the treatments was to simulate the timing of different types of landscape fires occurring in the region (Andersen et al. 1998). ‘Unburnt’ compartments were protected from fire for the duration of the experiment. ‘Early’ compartments were burnt in the early dry season (May–June), as is practiced widely in managed burns in Kakadu National Park and other conservation reserves in the region (Russell-Smith 1995). ‘Late’ compartments were burnt in the late dry season (September–October), as occurs extensively throughout the region from unmanaged wildfires (Russell-Smith et al. 2000). ‘Progressive’ compartments were burnt in the early, mid- and late dry season (May, July and September), simulating fire management in areas of high human activity. The early, late and first of the progressive fires were lit from vehicles on surrounding roads and firebreaks to allow the prevailing south-east winds to carry the fires across the whole compartment. The second and third fires in the progressive compartments were ignited by dropping incendiary pellets from a helicopter along the seasonal creek lines. Prior to the start of burning in 1990, no fires had occurred in the study area since September 1987.
Details of the trapping procedure have also been reported previously (Braithwaite & Griffiths 1996; Braithwaite & Muller 1997). Briefly, trapping occurred bimonthly over 6 years from July 1989 to May 1995, constituting a total of 36 sampling occasions. Trapping occurred in 16 separate 5·7-ha grids located in pairs within eight compartments, representing two of each fire treatment. Within each compartment, grid pairs were placed 500–750 m apart, one within riparian/woodland vegetation and the other in drier open forest further upslope. This placement permitted comparison of vegetation types within fire treatments. Traps within each grid were arranged in four rows 50 m apart, with each row being made up of four possum-sized wire traps (baited with apples) and 16 Elliott traps (baited with a mixture of peanut butter, oats and honey) placed at 20-m intervals. Each 80-trap grid was trapped for two nights during each bimonthly trapping occasion, producing a total of 92 160 trap nights for the entire survey. All bandicoots were sexed, weighed and marked with a small numbered ear tag, then immediately released at site of capture. Age category at first capture was based on weight, with adults classed as females > 500 g, and males > 600 g (Friend 1990). Individuals that were immature at first capture were considered to be adult from the next capture onwards, as time from weaning to sexual maturity is approximately 2 months (Friend 1990).
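As a quick sanity check, the quoted trap-night total follows directly from the design figures above:

```python
# Survey effort implied by the trapping design described above.
rows, wire_traps, elliott_traps = 4, 4, 16      # per grid: 4 rows of 4 + 16
traps_per_grid = rows * (wire_traps + elliott_traps)
grids, nights_per_occasion, occasions = 16, 2, 36

print(traps_per_grid)                                        # → 80
print(grids * traps_per_grid * nights_per_occasion * occasions)  # → 92160
```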
Over the 6 years of the survey, 658 individual I. macrourus were captured and tagged, 317 of which were subsequently retrapped at least once, and at most 14 times, producing a total of 1345 captures and 1004 recaptures. Relative abundance of I. macrourus peaked in 1991 at 3·89 captures per 100 trap nights, declined to 1·09 per 100 trap nights by 1993 (Braithwaite & Muller 1997) and by the end of the study period had decreased to 0·03 captures per 100 trap nights (Braithwaite, unpublished data). Of the 658 individuals captured, 418 were male, representing 64% of individuals and 63% of total captures. Males had a mean weight at first capture of 1034 g (SD 440 g) and females a mean weight of 658 g (SD 224 g). At the time of first capture, 542 individuals were adults and 116 were immature. The most frequently trapped and longest-lived individual was a female that was captured 14 times over a period of 28 months. This individual was reproductively active and weighed 630 g at the time of her first capture, so must have been at least 4 months old at that time (Friend 1990).
To overcome the weaknesses of abundance indices used in previous analyses, we used the Cormack–Jolly–Seber (CJS) model, an open population capture–recapture approach that allows for the separate estimation of survival and recapture probability (Cormack 1964; Jolly 1965; Seber 1965). Combining the CJS model with extensions proposed by Lebreton et al. (1992), permitted modelling of survival and recapture probabilities as functions of environmental, temporal and individual covariates using the flexibility of linear models, and the power of information–theoretic model selection strategies (Burnham & Anderson 1998).
Under this approach, survival probability (φ) is defined as the probability that an animal known to be alive at one capture occasion will remain available for trapping at the next capture occasion (a period of 2 months in this study). Recapture probability (p) is simply the probability of catching an animal that is known to be present in the population. The assumptions underlying this approach are that all animals have independent fates and are identical within identifiable groups (such as those within the same fire treatment); that no tags are lost or misread; and that temporary emigration, if present, is random (Pollock et al. 1990). It should also be noted that while recapture rates account for temporary emigration, permanent emigration cannot be distinguished from mortality, so survival in this context really refers to local persistence at the survey sites.
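To make the roles of φ and p concrete, the probability of a single encounter history under the CJS model can be sketched as follows (a minimal illustration only: the function name and the constant rates are ours, whereas the actual analysis allowed both probabilities to vary with covariates):

```python
# Probability of a CJS encounter history, conditional on first capture,
# with constant bimonthly survival (phi) and recapture (p) probabilities.
def cjs_history_probability(history, phi, p):
    """history: string of 0/1 detections, e.g. '1011' = caught on
    occasions 1, 3 and 4. Assumes the animal was caught on the final
    occasion, so the 'never seen again' term is not required."""
    first = history.index('1')
    prob = 1.0
    for obs in history[first + 1:]:
        prob *= phi                           # survived the 2-month interval
        prob *= p if obs == '1' else (1 - p)  # detected, or present but missed
    return prob

# An animal caught, missed once, then recaught twice:
print(round(cjs_history_probability('1011', phi=0.75, p=0.4), 4))  # → 0.0405
```

The separation of φ from p is exactly what the abundance indices of earlier studies could not provide: a missed animal contributes (1 − p) rather than being treated as absent.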
Model fitting and selection was performed with program MARK version 2·1 (White & Burnham 1999) using information–theoretic model selection methods based on Akaike's information criterion (AIC), as outlined by Anderson & Burnham (1999; for a more comprehensive discussion see Burnham & Anderson 1998). This procedure uses Kullback–Leibler information as an objective basis for selecting the most parsimonious model (also referred to as the ‘K–L best model’) from an a priori candidate set, which is the model that explains the most substantial proportion of variance in the data, yet excludes unnecessary parameters that cannot be justified on the basis of the data (Burnham & Anderson 1998; Burnham & Anderson 2001). Apart from identifying the best model for use in parameter estimation, information–theoretic model selection methods also arrange candidate models in order of parsimony, allowing further inference on the relative importance of modelled effects and their interactions (Burnham & Anderson 2001). Candidate models were constructed as generalized linear models using effects identified previously by Friend & Taylor (1985) and Friend (1990) as being important to this species in the Kapalga study area. These included the individual effects of age and sex, the environmental effects of rainfall and fire regime, and the temporal effects of seasonality and long-term trend. All these effects were considered a priori to have some influence on survival and recapture probability and were therefore incorporated into the global model (subscripts defined in Table 1):
Table 1. Factors (and their symbols) used in parameterization of survival and recapture probability models. Fire and age were assigned as groups to allow for interactive effects. Sex and vegetation type were assigned as individual covariates as only additive effects were considered (group classification would have led to an unreasonable expansion of design matrices and computation time, yet have no influence on parameter estimates or model selection). Month was assigned as a categorical parameterization of full time dependence, allowing independence of the six seasons. Rainfall, trend and fire lag were assigned as time covariates with linear effect on the logit scale
Fire treatment (f). Categories: Group 1, Unburnt; Group 2, Early (May); Group 3, Late (September); Group 4, Progressive (May, July, September). No bandicoots were observed to move between fire treatments during the study.

Age. Categories: Group 1, Immature; Group 2, Adult. Age category at first capture was based on weight, with adults classed as females > 500 g and males > 600 g. Immature individuals entered the adult category from the next capture occasion onwards.

Sex. Categories: 1 = Female; 2 = Male.

Vegetation type. Categories: 1 = Riparian mixed forest; 2 = Eucalypt woodland. Vegetation type was permanently assigned to individuals, as none were observed to move between vegetation types.

Month (m). Divides the year into six bimonthly periods corresponding to capture intervals. These periods can be thought of as six seasons (Braithwaite & Estbergs 1988). Allows independent variation by season.

Rainfall. Range: 0–1358 mm per capture interval. Total rainfall during each capture interval, recorded at Oenpelli (70 km east of Kapalga).

Trend (T). Capture intervals numbered 1–35. Linear trend (on the logit scale) over the course of the experiment.

Fire lag (L). Range: 0 (burnt within the last 2 months) to 45 (90 months unburnt) capture intervals elapsed since the last fire. Fire lag was set to zero for intervals in which fires were lit and accrued one unit each capture interval thereafter until the next fire. All areas began with lag values of 11, as there had been no fires since September 1987 (fire treatments began in 1990).

Constant. Uniform probability across all groups and time intervals.
This was the most general model in the candidate set and contained all effects and relevant interaction terms, such that all other candidate models were nested within it. As there were eight effects, it was impossible to investigate the full CJS model of all effects with all interactions; such a model would be hopelessly over-parameterized and of no biological interest. Care was therefore taken in formulating models to keep the sample size (n) to parameter (K) ratio reasonably high and to include only those parameters and interaction terms thought reasonable a priori. The global model had 42 parameters, which, combined with an effective sample size of 1345 captures, gave an n/K ratio of 32. Because the n/K ratio was less than 40, AICc, a second-order bias-corrected form of AIC, was used as the basis for model selection (Anderson & Burnham 1999). Following the advice of Lebreton et al. (1992), we used a two-stage model selection approach, with recapture probabilities modelled first in combination with the global parameterization of survival, thereby retaining as much power as possible for tests on survival parameters, which were of greater biological interest. Once the most parsimonious recapture rate model had been identified, this parameterization was used in all candidate models for survival probability. Akaike weights (Burnham & Anderson 1998) were calculated for each model set, providing a measure of the relative likelihood of each model given the data and the candidate set.
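The AICc correction and Akaike weights referred to above reduce to simple arithmetic; the following sketch shows the computation (function names are ours, and all values are illustrative apart from n = 1345 and K = 42 for the global model):

```python
import math

def aicc(log_lik, K, n):
    """Second-order AIC: AICc = -2 ln(L) + 2K + 2K(K + 1)/(n - K - 1)."""
    return -2 * log_lik + 2 * K + 2 * K * (K + 1) / (n - K - 1)

def akaike_weights(aicc_values):
    """Relative likelihood of each candidate model given the data:
    w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j)."""
    best = min(aicc_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aicc_values]
    total = sum(rel)
    return [r / total for r in rel]

# n/K ratio for the 42-parameter global model:
print(1345 / 42)   # ≈ 32, below the threshold of 40 that prompts use of AICc
```

A model 4 AICc units behind the best, for example, receives an Akaike weight of roughly 0·12 against 0·88 for the best model, the kind of contrast reported in the Results.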
While it is desirable to perform goodness-of-fit (GOF) testing on the global model (Lebreton et al. 1992), this was not possible, as there is currently no analytical means to assess the fit of a model that contains individual covariates. As an alternative, GOF testing was performed on the K–L best model, which did not include any individual covariates. A parametric bootstrap approach was used as the preferred means of establishing GOF (White et al. 2001) and was implemented in program MARK (White & Burnham 1999; White et al. 2001). This allowed the deviance of the K–L best model to be ranked against the deviances of 1000 simulations generated from the parameter estimates of that model. This revealed that 32·3% of bootstrap iterations had a deviance greater than the selected model, indicating that the observed deviance was reasonably likely, and confirming that the model satisfied the CJS assumptions and was a valid basis for inference. Overdispersion was also estimated from the K–L best model instead of the global model, for the reasons already mentioned. Using the method outlined by White et al. (2001), the quasi-likelihood parameter (ĉ) was estimated as the ratio of model deviance to the mean deviance of the bootstrapped simulations, and was found to be very close to unity (ĉ = 1·031). Because the K–L best model provided no evidence of overdispersion and ĉ could not be calculated for the global model, AICc was used as the basis of model selection rather than its quasi-likelihood modification (QAICc).
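Both bootstrap checks described above are ratios over the simulated deviances; a sketch of the two calculations (function names ours, deviance values hypothetical; the real simulations came from program MARK):

```python
# Parametric-bootstrap goodness-of-fit and overdispersion checks,
# assuming we already have the observed model deviance and a list of
# deviances simulated from the fitted model.
def bootstrap_gof_p(observed, simulated):
    """Proportion of simulated deviances exceeding the observed one;
    a very small proportion would indicate lack of fit."""
    return sum(d > observed for d in simulated) / len(simulated)

def c_hat(observed, simulated):
    """Overdispersion factor: observed deviance / mean simulated deviance."""
    return observed / (sum(simulated) / len(simulated))

sims = [95.0, 102.0, 110.0, 98.0]        # stand-in for 1000 MARK simulations
print(bootstrap_gof_p(100.0, sims))      # fraction of simulations above observed
print(c_hat(100.0, sims))                # values near 1 imply no overdispersion
```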
Using the general parameterization of survival probability, the most parsimonious model for recapture probability was found to be a combination of fire lag and month (pL+m), which had an 82·2% probability of being the best recapture model in the candidate set (as indicated by the Akaike weight, which estimates the relative likelihood of each model; Burnham & Anderson 1998). The next best recapture model combined fire treatment and month (pf+m) and had a 13·8% probability of being the best model. All other models were comparatively unlikely, including the constant recapture rate model (p) that had been assumed implicitly in previous studies. According to Burnham & Anderson (1998), inference should be based only on models that make up the top 90% of Akaike weights. In this case only the top two models met this criterion, and both had quite similar parameterization: a month effect combined additively with a fire effect (either fire lag or fire treatment). Hence both models indicated that recapture rates varied systematically with fire history and time of year. Given the large difference in Akaike weights and the similar parameterization of these models, model-averaging procedures were considered unnecessary. The recapture model pL+m was adopted as the best parameterization for recapture rate and was used in all candidate models in the next stage of survival modelling.
The most parsimonious model for survival probability was found to be the interaction of fire treatment and trend over time (φf*T), which had a 92·3% likelihood of being the best model from the candidate set. The most general survival model was the next most likely, at 7·6% and all remaining nested models were found to be highly unlikely. The K–L best model from the full candidate set was therefore φf*TpL+m. The survival component of this model is very simple, constraining survival to be an independent linear function of time (on the logit scale) within each of the four fire treatments. Survival and recapture probability estimates from this model are shown in Fig. 2.
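The survival structure of this model can be illustrated with a short sketch: within each fire treatment, survival is a linear function of capture interval t on the logit scale, back-transformed to a probability. The coefficients below are back-calculated from the unburnt-treatment endpoints (0·756 to 0·549) purely for illustration; they are not the fitted estimates.

```python
import math

# phi(f*T): a treatment-specific linear time trend on the logit scale.
def survival(intercept, slope, t):
    """Inverse-logit of intercept + slope * t."""
    eta = intercept + slope * t
    return 1 / (1 + math.exp(-eta))

# Hypothetical unburnt-treatment coefficients: roughly 0.756 at the first
# interval (t = 0), declining to roughly 0.549 by the last (t = 34).
print(round(survival(1.13, -0.0275, 0), 3))    # → 0.756
print(round(survival(1.13, -0.0275, 34), 3))   # → 0.549
```

With only an intercept and slope per treatment (plus the shared recapture parameters), this structure is far more parsimonious than full time dependence, which is why it dominated the candidate set.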
Survival probability estimates were found to be almost equal in all fire treatments at the start of the experiment in 1989, prior to the onset of experimental burning at Kapalga. This is not a modelling artefact, as the K–L best model permitted independence of starting conditions and time trends within each fire treatment. This is a clear indication that prior to the onset of fires, experimental compartments were equal in terms of bandicoot survival. Survival trends within each fire treatment showed that survival rates declined in all areas over the course of the experiment, even in unburnt sites. However, the rate at which survival declined depended heavily on fire treatment.
Unburnt and early burnt areas experienced the least overall decline. The bimonthly survival rate in unburnt areas dropped from 0·756 (95% confidence interval 0·694–0·810) in July–August 1989 to 0·549 (0·456–0·640) in March–April 1995. Similarly, early burnt areas dropped from 0·748 (0·663–0·818) to 0·590 (0·423–0·739). Differences between these two treatments are minor and their confidence intervals overlap, so no definitive distinctions can be drawn between them. The effects of late burns on bandicoot survival are clearly different however, leading to a marked decline in bimonthly survival rates from 0·783 (0·638–0·881) in July–August 1989 to 0·187 (0·039–0·565) in March–April 1995. The detrimental effect of progressive burns was the most dramatic, causing a rapid and almost complete collapse in bandicoot survival from 0·782 (0·662–0·867) to 0·058 (0·010–0·273) over the 6 years of the experiment.
Recapture probability estimates from the best model show a distinct pattern of seasonal variation, with bandicoots becoming increasingly trap-prone from March to June, and an increase with time elapsed since the last fire, leading to substantially higher recapture rates in unburnt areas. These relationships suggest that trap success may be influenced by low ground cover, which is known to be influenced strongly by fire history and the phenology of understorey vegetation (Bowman, Wilson & Hooper 1988). Seasonal cycles in activity levels, foraging methods and breeding behaviour may also contribute to variation in recapture rates (Friend 1990).
Fire regime and duration of exposure to that regime were identified as the primary determinants of survival in northern brown bandicoots. The K–L best model stood clearly apart from other candidate models such that the influences of sex, age, vegetation type, rainfall and season had comparatively little bearing on survival, either separately or in combination. In the absence of fire, these other factors may play a role in explaining subtle variation in survival rates, but in this experiment the large differences in survival between fire treatments obscured these influences. This confirmed our first hypothesis that fire regime was the primary factor influencing bandicoot survival rate.
Parameter estimates derived from the K–L best model showed clearly that burning in the late dry season, or progressively throughout the dry season, produced more severe declines than did early dry season fires or no fires at all. This finding confirmed our second hypothesis, that the seasonal timing of fires is crucial to bandicoot survival. It is attributable to seasonal differences in fire intensity: the mean intensity of early dry season fires at Kapalga (2100 kW/m) was significantly lower than that of late dry season fires (7700 kW/m), due largely to higher daily maximum temperatures, stronger winds, higher fuel loads and drier fuels later in the season (Williams et al. 1998).
Declining survival rates
The decline in survival of bandicoots in all fire treatments, including unburnt sites, is consistent with the finding of Braithwaite & Muller (1997) and Woinarski et al. (2001) of an apparent overall decline in bandicoot abundance at the study site over the course of the Kapalga fire experiment. It is particularly alarming that a species generally regarded as common in the region was driven from abundance to virtual absence over a 300 km2 area after just 5 years of experimental burning. Even 5 years after experimental burns had ceased, with the land back under the normal contemporary fire management regime of Kakadu National Park (extensive prescribed burning in the early dry season; Russell-Smith 1995), bandicoots showed no sign of recovery: capture rates fell from 0·03 captures per 100 trap nights in 1995 to only one individual captured in almost 7000 trap nights in 1999 (Woinarski et al. 2001). This finding is made even more alarming by the fact that the experimental fire regimes were intended to simulate a range of fire management outcomes that are becoming increasingly common across the tropical savannas of northern Australia following the widespread and continuing loss of Aboriginal fire management (Andersen et al. 1998; Russell-Smith et al. 2000). Our findings demonstrate that this observed decline at Kapalga was driven largely by declining survival rates in all fire treatments. These declines reflect the combined influence of direct mortality from fire itself, as well as indirect mortality and permanent emigration.
This pattern of decline and poor recovery had been noted previously by Friend (1990), from a mark–recapture investigation of I. macrourus at Kapalga from September 1980 to January 1983. Friend found a similar pattern in bandicoot abundance (a decline from high abundance to a single remaining individual) and looked to climatic variables for an explanation, citing below-average rains in the 1982/83 wet season as the cause. Subsequent retrapping efforts in 1984 and 1985, following higher rainfall, failed to find any bandicoots, and this lack of recovery was attributed to late dry season wildfires in 1984. Clearly, populations of I. macrourus had recovered by the start of sampling for this study in July 1989, but the prolonged suppression may be an indication that bandicoot populations can take many years to recover from late dry season wildfires. This seems highly likely in light of the findings of this study, given the strong negative influence of late dry season fires on bandicoot survival.
It is important to note that the observed decline in survival took place over the duration of the experiment and did not occur as a sudden collapse. If the introduction of fire in 1990 had caused a sudden drop in survival rate at burnt sites, then one of the fire lag (L) models would have provided a better explanation for the data. These models were found to be relatively unlikely; instead, the best model indicates a gradual pattern of decline in survival across all fire treatments (although more rapid in the Late and Progressive treatments). This gradual decline in survival rate indicates that a substantial proportion of mortality arose from the more gradual processes associated with fire regimes (Williams et al. 1999), as opposed to the stepped decline (possible under a fire lag model) that would be expected if mortality were chiefly associated with fire events. In addition, this gradual decline suggests that disease is an unlikely explanation for the decline in survival over the study.
The overall pattern of decline in survival rate is most probably the result of indirect factors related to burning. A reduction in habitat suitability following fire is obviously an important factor that may influence the survival of I. macrourus. Adults have static home ranges and tend not to move very far in response to local environmental changes (Heinsohn 1966; Gordon 1974b; Stoddart & Braithwaite 1979). During its nocturnal foraging movements, the northern brown bandicoot is known to avoid open understorey areas such as those created immediately following fire, and tends to stay within 50 m of low ground cover (Gordon 1974b). Dense patches of understorey vegetation and litter are also required within the home range for construction of grass nests, which are used as daytime refuges (Gordon 1974b). Repeated, intense burning over relatively large areas reduces the suitability of habitat for bandicoots by reducing ground cover and, in turn, habitat heterogeneity. This may affect apparent survival by increasing the rate of emigration or increasing individuals' susceptibility to predation, whereas low-intensity fires during the early dry season leave more unburnt habitat for bandicoots to use. It should be noted that the linear ignition method used in the fire experiment may have increased the ‘treatment effect’ compared with point-source ignition, as it probably reduced the understorey patchiness preferred by bandicoots, especially in the Late and Progressive fire treatments.
However, this does not explain the decline in survival rates at unburnt sites. A possible explanation within the habitat suitability paradigm is that the absence of fire may itself homogenize habitat: unburnt sites may have gradually lost spatial heterogeneity in the understorey, as fire exclusion leads to widespread thickening of understorey vegetation (Bowman et al. 1988; Bowman & Panton 1995). An alternative explanation could lie in a broad regional effect that caused a general pattern of decline in survival rates across all treatments, with intense and frequent fires accelerating this decline within the late and progressively burnt treatments. Under this regional decline paradigm, survival could be viewed as having two additive components: the first and most substantial component being associated with the differences between fire treatments, the second being a global decline across all fire treatments due to unknown causes. One potential cause is the influence of long-term climatic cycles on mammal abundance at Kapalga, explored previously by Braithwaite & Muller (1997). Their abundance estimates for 12 species of mammals at Kapalga showed that many species declined in this region from 1986 to 1993. They found that groundwater levels were correlated strongly with the abundance indices of several mammal species, but not I. macrourus, suggesting that long-term climatic and groundwater patterns are unlikely explanations for the decline in survival of this species. However, there was a strong correlation with the Southern Oscillation Index, and further analysis of this relationship may provide some insight.
All fire treatments at Kapalga produced a decline in bandicoot survival, so none of these approaches can be recommended for the long-term preservation of bandicoots in the tropical savannas. Notably, survival declined least under early dry-season fires, which gives some support to the view expressed by Reading et al. (1996) from southern Australia that: ‘periodic, low-intensity fires may indeed be a requirement in the medium to long-term management of reserves for bandicoots’. The higher survival rates under early dry-season fires are most probably due to a combination of lower direct mortality from fire and the creation of a patchier understorey environment.
It is disconcerting that just 5 years of annual fires or fire exclusion in an otherwise apparently healthy landscape had such a detrimental effect on the relatively common northern brown bandicoot. The speed of this decline and prolonged lack of recovery provide a warning of the potential vulnerability of small mammal populations to poor fire management in the seasonal tropical savannas. This is strikingly consistent with evidence of the rapid process of extinction for mammals in arid central Australia, where loss of Aboriginal fire regimes has been proposed as a critical factor (Burbidge 1985; Burbidge et al. 1988).
Mammals of the Australian wet–dry tropics have suffered less disastrous declines than the arid mammal fauna, with no extinctions recorded as yet, but there is growing evidence of an ongoing pattern of decline right across northern Australia (Woinarski et al. 2001), which appears to parallel the decline in granivorous birds throughout the same region (Franklin 1999). In this study, late dry-season wildfires were found to be highly detrimental to bandicoot survival. The contemporary fire regime in the northern high-rainfall region of the Northern Territory, where up to 40% of the landscape is burnt annually and the majority of fires occur in the late dry season (Williams, Griffiths & Allan 2002), is clearly a high-risk strategy for species such as the northern brown bandicoot. This finding lends support to the incorporation of basic principles of Aboriginal fire management: intermittent, small, low-intensity fires that preclude large, catastrophic wildfires by ensuring that the land is fragmented into a fine-grained mosaic of different age classes (Yibarbuk et al. 2001). Achieving such a pattern over large spatial scales remains the major fire management issue for northern Australia. This is one of the first studies to show a clear link between fire and small mammal population regulation in the Australian tropical savannas, and it provides an important step towards future research and management of this unique and diverse group of small mammals.
This study is built upon the collective efforts of many CSIRO scientists and technicians, who worked for more than a decade at Kapalga, with the support and cooperation of the managers of Kakadu National Park. We acknowledge that our work would not have been possible without their considerable labours. We thank Peter Whitehead, David Bowman and Alan Andersen for their constructive comments on an earlier draft.