A comparison of statistical downscaling methods suited for wildfire applications

Abstract

Place-based data are required in wildfire analyses, particularly in regions of diverse terrain that foster not only strong gradients in meteorological variables, but also complex fire behaviour. However, most downscaling methods are ill-suited to wildfire applications because they lack both daily timescales and variables, such as humidity and wind, that are important for fuel flammability and fire spread. Two statistical downscaling methods that directly incorporate daily data from global climate models, the daily Bias Correction Spatial Downscaling (BCSD) method and the Multivariate Adaptive Constructed Analogs (MACA) method, were validated over the western US using global reanalysis data. While both methods outperformed results obtained by direct interpolation from reanalysis, MACA exhibited additional skill in temperature, humidity, wind, and precipitation due to its ability to jointly downscale temperature and dew point temperature, and its use of analog patterns rather than interpolation. Both downscaling methods added value in tracking fire danger indices and periods of extreme fire danger; however, MACA outperformed the daily BCSD due to its ability to more accurately capture relative humidity and winds.

1. Introduction

Climate and weather are key enablers and drivers of wildfire regimes (e.g. Swetnam and Betancourt, 1990) and a host of other related ecosystem disturbances (e.g. Parmesan et al., 2000). From a purely atmospheric perspective, wildfire potential is a collective response to climatology (i.e. vegetation distribution, e.g. Stephenson, 1990), low-frequency climate variability (i.e. fuel availability and quantity, e.g. Littell et al., 2009), the integrated sequence of daily meteorological conditions in the days to months prior to ignition (i.e. moisture content of fuel, ignition efficiency, e.g. Deeming 1977), lightning ignitions (e.g. Rorig and Ferguson, 1999), and meteorological conditions during active burning (e.g. Flannigan and Harrington, 1988). The historical range of variability of these factors defines wildfire regimes. Climate change has the potential to result in conditions outside the contemporary range of variability for such systems, thus having widespread implications for wildfire and ecosystems (e.g. Torn and Fried, 1992; McKenzie et al., 2004; Flannigan et al., 2009).

Projected changes in meteorology at appropriate spatial and temporal scales are needed to better understand the impacts of climate change on wildfire regimes. While global climate models (GCMs) are the primary tool for such projections, several hurdles must be addressed to translate GCM output into locally relevant meteorological data for use in impact assessment. These include selecting GCMs appropriate for a given study area and application (e.g. Mote and Salathé, 2010), removing model biases, and accounting for the inherent scale mismatch between the coarse horizontal resolution of GCMs and the scale usually needed for local applications (e.g. Fowler et al., 2007). Downscaling addresses the latter two hurdles through dynamical or empirical links between climate at large scales (as simulated by GCMs) and that at finer scales (not directly simulated by GCMs). Downscaling is especially critical in regions of complex terrain characterized by steep spatial gradients in meteorological variables and mesoscale circulations. These features also tend to define areas prone to wildfire and extreme fire behaviour (e.g. Sharples, 2009).

Two fundamental types of downscaling exist: dynamical and statistical. Dynamical downscaling nests a regional climate model in a global climate model, and is advantageous in that it physically resolves processes that occur at scales smaller than the driving GCM. However, dynamical downscaling suffers from biases introduced by the driving GCM (e.g. Plummer et al., 2006) and from heavy computational demands. Thus, current dynamical downscaling efforts are limited by a lack of ensembles and have to date been used sparingly in climate impact assessment. By contrast, statistical downscaling is computationally efficient, is able to directly incorporate observations used in operational decision-making or modelling, and can be applied across multiple GCMs to develop ensembles for scenario building. Statistical downscaling assumes that local meteorology is shaped by synoptic-scale meteorology and local physiographic features; quantitative relationships are then developed between physically meaningful large-scale predictors and local predictands. However, statistical downscaling is not without fault, as methods often assume stationarity through time and ignore first principles of meteorology (e.g. Fowler et al., 2007).

The increased need for place-based climate projections has resulted in a proliferation of downscaling methods and datasets in recent years. While some downscaling methods may suffice for certain applications, they may not be ideal for addressing the needs of others. Although several downscaled products have been developed for hydrologic modelling and impact assessment (e.g. Maurer and Hidalgo, 2008; Salathé et al., 2007), downscaling methods and datasets that specifically address the needs of the fire community have been lacking to date. For example, whereas wildfire potential is a collective response to temperature, precipitation, humidity, winds, and other meteorological variables across a spectrum of timescales, most downscaling has been performed at monthly temporal resolution, and only for temperature and precipitation. To advance the utility of GCM output for wildfire and other natural hazards applications, downscaling methods are needed that encapsulate the spatial and temporal behaviour of meteorological data required to estimate elements such as fuel moisture, fire danger indices (e.g. National Fire Danger Rating System [NFDRS], Canadian Forest Fire Danger Rating System [CFFDRS]), and critical fire weather situations.

This paper compares two statistical downscaling methods across the western continental US (west of 104°W longitude) in the context of fire danger metrics widely used in operational fire management. The study area was selected both because the complex meteorology across its heterogeneous landscape necessitates downscaling and because of the prominence of wildfire impacts on natural resources and human infrastructure across the region. Section 2 provides an overview of the downscaling methodologies and their limitations. A comparison and discussion of the results is presented in Section 3. Concluding remarks are presented in Section 4.

2. Data and methods

2.1. Statistical downscaling methods

Statistical downscaling techniques offer advantages in ease of use, but carry a number of caveats (see Fowler et al., 2007 for a review). The limitations of established downscaling methods fall into three primary categories: space, time, and covariability. First, the spatial limitations imposed by statistical interpolation are problematic in regions of complex topography, such as the western US, where observational evidence suggests that the interaction of the atmospheric circulation with physiographic gradients drives local- to regional-scale variability (e.g. Abatzoglou et al., 2009) and critical fire-weather patterns (e.g. Hughes and Hall, 2010). Second, applications that require daily and sub-daily input may be compromised if the richer temporal spectrum of information available from GCMs is not incorporated. Finally, most statistical downscaling methods downscale variables independently of one another; this decouples first principles of meteorology (e.g. the relationship between temperature and relative humidity) and may produce physically implausible outcomes. Two classes of established downscaling methods are elaborated on here: the bias correction and spatial downscaling (BCSD) method and the constructed analogs (CA) method.

The BCSD method (Wood et al., 2004; Salathé et al., 2007; Maurer and Hidalgo, 2008) has been used extensively for impact assessment in the US and globally (e.g. Karl et al., 2009). This two-step method first corrects for biases in GCM data using a quantile-based mapping of monthly temperature and precipitation from GCMs to observations aggregated to a common resolution. Bias correction matches the statistical moments of observations and GCM output covering a common time period (e.g. late 20th century), and accordingly adjusts for biases in GCM output for projected time periods (e.g. 21st century) by assuming a constant model bias. Second, monthly anomalies of bias corrected GCM output are spatially interpolated to the downscaled resolution and multiplied (added) to climatological precipitation (temperature) fields. Daily data can be obtained by disaggregating monthly output to daily time scales by resampling a historical month and scaling daily data to match the monthly projections (e.g. Wood et al., 2004). Temporal disaggregation is limited in that it can only be applied over restricted domains, may produce physically unrealistic meteorology, and assumes that synoptic meteorology is stationary (precluding changes in higher-order sub-monthly statistics, e.g. the length of dry spells). While the BCSD method may yield useful information for wildfire applications that require data on monthly and longer timescales (e.g. drought indices), the lack of information at sub-monthly timescales limits its utility.
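To make the bias-correction step concrete, the following is a minimal sketch of empirical quantile mapping for a single grid cell and calendar month. It assumes empirical quantiles, no extrapolation beyond the sampled range, and a constant model bias between periods; all names are illustrative rather than drawn from any BCSD codebase.

```python
import numpy as np

def quantile_map(gcm_hist, obs, gcm_future):
    """Map GCM values onto the observed distribution via matched quantiles."""
    # Locate each future value within the historical GCM distribution
    quantiles = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Read off the observed value at the same quantile (constant-bias assumption)
    return np.quantile(obs, quantiles)

# Hypothetical example: correcting a warm-biased GCM temperature series
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 4.0, 900)         # observations, historical period
gcm_hist = rng.normal(18.0, 5.0, 900)    # GCM output, same historical period
gcm_future = rng.normal(20.0, 5.0, 900)  # GCM output, projected period
corrected = quantile_map(gcm_hist, obs, gcm_future)
```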

The CA method identifies commonality between the synoptic-scale field from a GCM and a catalog of observed synoptic-scale fields (Zorita and von Storch, 1999; Hidalgo et al., 2008). The principle of analog-based downscaling aligns more closely with the first principles of meteorology in its ability to directly incorporate daily synoptic patterns from GCMs. As climate change is composed of changes in the magnitude and frequency of meteorological fields, the direct incorporation of daily meteorology is important in elucidating impacts sensitive to the sequencing of weather patterns and extremes. The CA method is also better able to model regionally complex meteorological phenomena (e.g. rain shadowing, inversions, local winds) that are not captured by interpolation-based methods. However, the CA method is not without limitations, including its neglect of model biases and its inability to address no-analog situations that may arise in a future climate.

Two novel downscaling methodologies are presented that build on the BCSD and CA methods: a daily BCSD method and the newly termed Multivariate Adaptive Constructed Analogs (MACA) method. To compare the downscaling skill of these methods, a common domain (30–50°N, 100–126°W) and set of predictor variables is selected. The choice of predictor variables is a critical component of statistical downscaling and of its ability to physically represent downscaled output and change under future climate scenarios (e.g. Fowler et al., 2007). Predictor variables are paired directly to downscaled output (e.g. GCM precipitation used to model downscaled precipitation) based on their ability to express the myriad physical processes simulated under future climate (e.g. Maurer and Hidalgo, 2008). GCM outputs used as predictors included daily maximum and minimum temperature, precipitation, dew point temperature (derived from specific humidity) and wind velocity. As widely available output from GCMs is generally restricted to daily specific humidity, it is assumed that daily maximum and minimum relative humidity coincide with daily minimum and maximum temperatures, respectively.
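As an illustration of the dew point derivation mentioned above, the sketch below converts specific humidity to dew point temperature via vapour pressure using the standard Bolton/Magnus approximation. The assumed surface pressure and all names are ours, not the authors' implementation.

```python
import numpy as np

def dewpoint_from_specific_humidity(q, p_hpa=1000.0):
    """Dew point (deg C) from specific humidity q (kg/kg) and pressure (hPa)."""
    # Vapour pressure from specific humidity (epsilon = 0.622)
    e = q * p_hpa / (0.622 + 0.378 * q)
    # Invert the Magnus formula: e = 6.112 * exp(17.67*Td / (Td + 243.5))
    ln_ratio = np.log(e / 6.112)
    return 243.5 * ln_ratio / (17.67 - ln_ratio)

# Example: q = 8 g/kg near the surface gives a dew point of roughly 10.6 deg C
print(dewpoint_from_specific_humidity(0.008))
```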

2.1.1. Daily BCSD

Whereas the BCSD method has conventionally been used to downscale climate data at monthly scales, the method can be extended to operate on daily timescales. Two modifications to the BCSD method described above are made. First, daily GCM output is spatially interpolated to the downscaled grid. These fields are then bias corrected using quantile mapping. However, instead of restricting the sample distribution for daily bias correction to a single calendar day of the year (e.g. only using 8 July data from all years to bias correct 8 July data for a given year), quantile mapping is performed by populating the sample distribution using a 15-day moving window centered on each calendar day. Temperature, precipitation, and wind speed are bias corrected directly. Relative humidity is first estimated from bias corrected temperature and dew point temperature, after which a secondary bias correction to observed relative humidity is performed. Quantile mapping adjustments determined from late 20th century GCM experiments and observations can be transferred to future time slices, thereby preserving changes in statistical moments between the two periods.
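The sketch below illustrates, under our own naming and the Magnus formula for saturation vapour pressure, how the 15-day sampling window and the relative humidity estimate might be implemented; it is a plausible reading of the steps above, not the authors' code (quantile_map is from the earlier sketch).

```python
import numpy as np

def window_sample(values, doy, center, half_width=7):
    """Pool values whose day-of-year lies within +/- half_width of center,
    wrapping around the calendar year (a 15-day window for half_width=7)."""
    dist = np.minimum(np.abs(doy - center), 365 - np.abs(doy - center))
    return values[dist <= half_width]

def relative_humidity(t_c, td_c):
    """RH (%) from temperature and dew point (deg C) via the Magnus formula."""
    es = 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))   # saturation vapour pressure
    e = 6.112 * np.exp(17.67 * td_c / (td_c + 243.5))  # actual vapour pressure
    return np.clip(100.0 * e / es, 0.0, 100.0)
```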

2.1.2. Multivariate adaptive constructed analogs (MACA)

Although the CA method includes several desirable downscaling qualities, there are limitations to its applicability. MACA builds on the strengths of the CA method and incorporates additional measures to circumvent its limitations by: (1) bias correcting GCM output, (2) accounting for no-analog situations, and (3) incorporating additional variables. The MACA method is described in the steps below:

1. Bias correction: The CA method described in Hidalgo et al. (2008) does not correct for GCM biases, and instead uses pattern matching of anomaly fields. The procedure described in Section 2.1.1 is used to map daily GCM data to the aggregated observations, similar to the Bias Corrected Constructed Analogs of Maurer et al. (2010).
2. Epoch adjustment: The main limitation of analog-based methods is the potential for no analogs under future climate scenarios (e.g. a heat wave during the late 21st century). Differences between the means of future time slices (2046–2065) and the means of historical time slices (e.g. 20th century runs covering 1971–2000) are removed using a 21-day moving window. Differences are taken as additive for temperature and dew point temperature, and multiplicative for precipitation and wind speed.
3. Constructed analogs: Following Hidalgo et al. (2008), a daily GCM field (‘target’ pattern) is built by identifying the 30 best predictor patterns (based on pattern root mean square error) taken from a library of observed patterns that fall within 45 days of the target date. Following Maurer et al. (2010), analogs are selected based on absolute values rather than anomalies. Linear combinations of the accompanying fine-scale patterns yield the downscaled field (a minimal sketch is given after this list). In contrast to methods that downscale variables independently of one another, the MACA method is performed jointly for temperature (maximum and minimum) and dew point temperature to improve coherence across downscaled fields. Analogs are identified separately for wind velocity and precipitation due to the inability to easily weight the influence of all variables in an analog search.
4. Epoch adjustment: Adjustments performed in step 2 are reintroduced.
5. Bias correction: A final quantile mapping procedure is performed on the downscaled output to ensure that the statistical moments of the downscaled data conform to observations (i.e. Maurer et al., 2010). Relative humidity fields are calculated from dew point temperature and temperature, after which the data are bias corrected to observed relative humidity.
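The following is a minimal sketch of step 3 under stated assumptions: coarse fields are flattened to vectors, the library holds paired coarse and fine patterns already restricted to within 45 days of the target date, and analog weights come from an ordinary least-squares fit (the published method may differ in its solver and any regularization). For the joint temperature and dew point search, the coarse vectors would concatenate both variables.

```python
import numpy as np

def construct_analog(target_coarse, library_coarse, library_fine, n_analogs=30):
    """Downscale one day as a linear combination of the best analog patterns.

    target_coarse : (nc,) coarse-resolution field for the target day
    library_coarse: (ndays, nc) candidate coarse patterns (absolute values)
    library_fine  : (ndays, nf) corresponding fine-resolution patterns
    """
    # Rank library patterns by RMSE against the target field
    rmse = np.sqrt(((library_coarse - target_coarse) ** 2).mean(axis=1))
    best = np.argsort(rmse)[:n_analogs]
    # Fit weights so the combination of coarse analogs reproduces the target
    weights, *_ = np.linalg.lstsq(library_coarse[best].T, target_coarse,
                                  rcond=None)
    # Apply the same weights to the fine-scale patterns
    return weights @ library_fine[best]
```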

2.1.3. Advantages and caveats of methods

The daily BCSD and MACA methods are advantageous over other available methodologies for use in wildfire applications for two primary reasons. First, the direct use of daily GCM output avoids a caveat of other downscaling approaches that assume stationarity in synoptic meteorology by resampling historical daily weather. This is particularly critical given that wildfire growth and behaviour are sensitive to sequences of daily synoptic patterns and extremes (e.g. Flannigan and Harrington, 1988; Abatzoglou and Kolden, 2011). Second, these methods are capable of incorporating humidity and winds, which are of critical importance for assessing fire danger.

Similar to other downscaling methods, these methods come with caveats in their application. First, analog approaches are sensitive to the geographic extent of the chosen domain (e.g. Fowler et al., 2007), with the influence of domain dependence differing across variables (Hidalgo et al., 2008). Second, the skill of a downscaling method is contingent on the ability of the GCM to simulate synoptic patterns. This point emphasizes the importance of GCM selection in the application of downscaled data (Sheridan and Lee, 2010), and suggests that models unable to simulate the spatiotemporal features of synoptic meteorology may be ill-suited for downscaling (e.g. Schoof and Pryor, 2006). Third, the accuracy of downscaling methods typically degrades near the tails of the distribution, such that extremes are generally underestimated (e.g. Fowler et al., 2007); however, extreme value theory may be used in statistical downscaling to target extreme events (e.g. Benestad, 2010). Finally, these downscaling methods assume a static relationship between synoptic and sub-synoptic scales. Changes in climate are likely to alter land-surface conditions (e.g. soil moisture, snow cover), thereby producing changes at local scales (e.g. amplified warming in regions of snow cover loss) that may deviate from those determined using statistical methods (e.g. Salathé et al., 2009). As output from regional climate models becomes more widely available, these statistical downscaling methods can be used in hybrid statistical-dynamical downscaling.

2.2. Data sources

Statistical downscaling requires long-term, high-quality data that encompass a representative sample of observations. While the methodology presented herein is applicable to any long-term observed dataset, efforts here are focused on observations tailored to wildfire applications. High-resolution gridded data of surface fire-weather conditions are limited due to either inadequate spatial and temporal scales or an incomplete set of variables. A recently developed daily high-resolution (8-km; 5 arc-min) gridded dataset that covers the continental US from 1979–2008 (Abatzoglou and Brown, 2009) is utilized in the calibration and assessment of the downscaling methods.

The observed gridded dataset of Abatzoglou and Brown (2009) was developed by blending three data sources: (1) the National Center for Environmental Prediction's (NCEP) North American Regional Reanalysis (NARR; Mesinger et al., 2006; 32-km horizontal resolution, 3-hourly), (2) the Parameter-elevation Regressions on Independent Slopes Model (PRISM; Daly et al., 1994; 4-km horizontal resolution, monthly), and (3) local observations of relative humidity and wind speed from Remote Automated Weather Stations (RAWS; 900+ sites, daily observations at 1300 local time). The data provide daily maximum and minimum temperatures and relative humidity; daily accumulated precipitation and precipitation duration; and temperature, relative humidity, wind velocity, and state of the weather at 1300 local time, all variables needed in NFDRS calculations. Temperature, dew point temperature and precipitation adhere to surface observations from PRISM at monthly and lower-frequency timescales, with intra-monthly departures adhering to NARR through the use of time-varying ratios (e.g. Di Luzio et al., 2008). As a final step, relative humidity and wind speeds from NARR were bias corrected to RAWS. Potential shortcomings of this dataset include: (1) the observed tendency of NARR to underestimate precipitation extremes during the warm season (e.g. Becker et al., 2009), (2) potential climate inhomogeneities in underlying station data, and (3) the lack of detailed (<10-km) wind velocity, which may not capture mesoscale features.

Assessing the skill of downscaling using GCM output is problematic due to the lack of validation data and the inability to separate biases introduced by the GCM from those introduced by the downscaling method. Alternatively, the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis (Simmons et al., 2007) from 1989–2008 is employed as a verifiable GCM surrogate. Reanalyses are functionally similar to GCMs in spatial and temporal resolution in that both simulate synoptic-scale features, but fail to resolve complex topography and mesoscale features. The distinct difference between a GCM and a reanalysis is that the reanalysis assimilates observations, including upper-air temperature and moisture, and is constrained to observations at synoptic scales. Many of the same observations (e.g. radiosondes) are assimilated into both the ERA-Interim reanalysis and NARR (included in developing the high-resolution observed dataset), so the datasets are not completely independent. However, the datasets are treated as independent on the basis that (1) the ECMWF and NCEP reanalyses employ different assimilation models and data sources, (2) the NARR is a regional model nested within a global model, and (3) the high-resolution gridded dataset incorporates surface observations that are not assimilated into either reanalysis.

Although reanalysis is regarded as a ‘best case’ GCM, it is by no means a perfect GCM. Surface temperatures are well represented by the reanalysis through strong linkages between assimilated lower-tropospheric temperatures and surface temperatures. Precipitation, however, is derived by the model itself through parameterisations and has been shown to exhibit substantial biases (e.g. Widmann and Bretherton, 2000). Surface fields, including 2-m humidity and 10-m wind velocity, remain challenging to validate and model (e.g. Timbal et al., 2009).

2.3. Assessment of skill

Downscaled data are cross-validated to evaluate the reproducibility of observations at local scales from the ERA-Interim reanalysis covering the period 1989–2008. Cross-validation allows the complete library of data to be used in the quantile mapping and analog search (aside from the day being downscaled); however, the cross-validation does not utilize the epoch adjustment methodology of MACA (steps 2 and 4) given the short period of record. The downscaling methods are quantitatively compared using the correlation coefficient and root mean square error (RMSE) for the cool season (Nov–Apr) and warm season (May–Oct). These downscaling methods are also compared to direct linear interpolation from the reanalysis: statistical skill is assessed as the accuracy (correlation and RMSE) of the downscaling methods relative to direct interpolation.
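For concreteness, a small sketch of the skill metrics is given below, assuming paired daily downscaled and observed series and calendar-month season definitions; all names are illustrative.

```python
import numpy as np

def skill(downscaled, observed):
    """Return (correlation coefficient, RMSE) for paired daily series."""
    r = np.corrcoef(downscaled, observed)[0, 1]
    rmse = np.sqrt(np.mean((downscaled - observed) ** 2))
    return r, rmse

def warm_season_mask(months):
    """Boolean mask selecting the warm season (May-Oct); invert for Nov-Apr."""
    return (months >= 5) & (months <= 10)
```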

In addition to meteorological variables, the downscaling skill is examined for two daily fire danger indices: the Energy Release Component (ERC) and the Fosberg Fire Weather Index (FFWI; Fosberg, 1978). The ERC is a widely used metric from the NFDRS (Deeming et al., 1977) that serves as a proxy for fuel moisture content and fire intensity for a given fuel type (fuel model G, short-needle pine with heavy dead loads, is used in the present case). ERC is a weather-climate hybrid index that considers the cumulative drying effect of previous daily weather conditions by integrating temperature, precipitation and humidity, and hence is a frequently used decision tool in operational fire management. By contrast, the FFWI is a flashy, weather-driven index derived from temperature, wind speed and relative humidity, tailored to the short-term impacts on wildfire potential irrespective of fuel abundance or availability.
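Because the FFWI depends only on instantaneous weather, it can be sketched compactly. The version below follows common implementations of Fosberg (1978), with temperature in °F, relative humidity in %, and wind speed in mph; the coefficients should be checked against the original reference before any operational use.

```python
import math

def ffwi(t_f, rh, u_mph):
    """Fosberg Fire Weather Index from temperature, humidity, and wind speed."""
    # Equilibrium moisture content, piecewise in relative humidity
    if rh < 10.0:
        m = 0.03229 + 0.281073 * rh - 0.000578 * rh * t_f
    elif rh <= 50.0:
        m = 2.22749 + 0.160107 * rh - 0.014784 * t_f
    else:
        m = 21.0606 + 0.005565 * rh ** 2 - 0.00035 * rh * t_f - 0.483199 * rh
    # Moisture damping coefficient
    x = m / 30.0
    eta = 1.0 - 2.0 * x + 1.5 * x ** 2 - 0.5 * x ** 3
    # Wind term and scaling; FFWI is commonly capped at 100
    return min(eta * math.sqrt(1.0 + u_mph ** 2) / 0.3002, 100.0)

# Example: hot, dry, and windy conditions give a high index value (about 88)
print(ffwi(95.0, 8.0, 30.0))
```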

3. Results and discussion

3.1. Validation of meteorological variables

Validation results were sensitive both to the ability of the reanalysis to capture surface observations and to the ability of the two downscaling methods to improve upon results obtained from direct interpolation. Temperatures were well simulated by the reanalysis due to the strong coupling between lower-tropospheric air temperature and surface temperature, with the strongest correlations in summer and for maximum temperatures, when the atmosphere is well mixed (Figure 1). Notably lower correlations were observed in areas of complex physiographic influences such as mountain ranges (e.g. Sierra Nevada, Cascades), valleys (e.g. California's Central Valley, Idaho's Snake River Plain) and coastal regions where local meteorology often decouples from the synoptic signature. Both downscaling methods demonstrated skill through the reduction of RMSE. While the BCSD method exhibited modest improvements in correlation, its reliance on spatial interpolation left unexplained variance in regions of complex topography. By contrast, the MACA method showed additional skill in complex terrain given its use of pattern matching rather than interpolation.

Figure 1.

Correlation coefficients (top three rows) and RMSE (bottom three rows) for daily maximum temperature (left two columns) and daily minimum temperature (right two columns). From top to bottom are validation statistics for direct interpolation from reanalysis (INTP), the daily Bias Correction Spatial Downscaling (BCSD) method, and the Multivariate Adaptive Constructed Analogs (MACA) method. For each variable, daily skill is separated into the cool season (Nov–Apr) and the warm season (May–Oct). Units for RMSE are °C. This figure is available in colour online at wileyonlinelibrary.com/journal/joc

Accurately modelling relative humidity requires capturing both moisture and temperature. Reanalysis fields were more adept at tracking daily minimum relative humidity, whereas weaker correlations were found for daily maximum relative humidity and in locales that often reach 100% humidity and exhibit little variance, such as coastal zones (Figure 2). Both downscaling methods demonstrated added skill, although the MACA method outperformed the daily BCSD method. The MACA method performed well over nearly the entire western US for both maximum and minimum relative humidity during the warm season, an important result given the role humidity plays in determining fuel moisture during the fire season. The additional skill of the MACA method is a consequence of not relying on interpolation-based downscaling (e.g. Figure 3) and of incorporating a multivariate analog search for temperature and dew point temperature. These results suggest the importance of coupling temperature and dew point temperature fields in downscaling relative humidity, as opposed to downscaling variables independently of one another.

Figure 2.

As in Figure 1, but for minimum relative humidity (left two columns) and maximum relative humidity (right two columns). Units for RMSE are %. This figure is available in colour online at wileyonlinelibrary.com/journal/joc

Figure 3.

(a) Interpolated, (b) daily BCSD downscaled, (c) MACA downscaled, and (d) observed minimum relative humidity anomaly for 4 Sep 2006. Anomalies are computed with respect to the daily average taken from the 1989–2008 period of record. This figure is available in colour online at wileyonlinelibrary.com/journal/joc

Precipitation was assessed using its square root to reduce skew in the data. Distinct geographic and seasonal variability was evident for precipitation, with the strongest correlations observed on the windward side of significant topographic barriers during the cool season (Figure 4). While lower correlations were observed in the immediate lee of the Sierra Nevada and Cascades due to the known difficulties of resolving precipitation processes across complex terrain (e.g. Widmann et al., 2003), MACA showed added skill that supports the advantages of a non-interpolation-based approach. While cool season precipitation is associated with large-scale circulation and progressive mid-latitude synoptic systems that are reasonably well simulated by the reanalysis, warm season precipitation over the southwestern interior is heavily influenced by the North American Monsoon. Convective precipitation associated with ill-defined large-scale patterns and locally intense precipitation is not adequately simulated at synoptic scales by reanalyses (e.g. Castro et al., 2007). This exemplifies that obtaining high-quality downscaled fields is contingent upon the ability of the driving global model (reanalysis or GCM) to simulate both synoptic meteorology and the local character of precipitation.

Figure 4.

As in Figure 1, but for precipitation (left two columns) and wind speed (right two columns). Units for RMSE are mm^0.5 and m/s for precipitation and wind speed, respectively. This figure is available in colour online at wileyonlinelibrary.com/journal/joc

Downscaled wind velocity (directional components not shown) exhibited less geographic and seasonal structure than the other analysed variables. Both the daily BCSD and MACA methods showed skill, with the MACA method outperforming the daily BCSD. Scatterplots (not shown) suggest that strong winds (in the upper quartile) that are conducive to fire growth were captured across most locations, and that errors were most acute for lighter wind speeds not associated with robust large-scale forcing in the reanalysis. However, given that surface wind speeds exhibit variability at local scales in response to a combination of large-scale and fine-scale drivers, additional large-scale predictors (e.g. 700-hPa heights, temperature fields) might help improve downscaling of wind in geographic areas decoupled from synoptic influences.

3.2. Validation of fire danger and extreme fire danger periods

On shorter time scales encompassing days to months, the temporal sequencing of meteorology (expressed through, e.g., fire danger indices and ignitions) is heavily used in operational fire management decision-making (e.g. Kolden and Brown, 2010) and in estimating fire behaviour (Finney, 1998). To bridge the gap between downscaling individual meteorological variables and integrating downscaled variables into metrics used in wildfire applications, the downscaling skill for ERC and FFWI is examined.

As a build-up hybrid weather-climate index, ERC is particularly sensitive to relative humidity and precipitation events, and does not incorporate wind speed; thus, methods and areas that exhibited skill for precipitation, temperature, and relative humidity were likely to do so for ERC (Figure 5). Lower correlations across the southwestern US and the Colorado Rockies were a consequence of the inability to capture convective precipitation associated with the North American Monsoon. The daily BCSD method exhibited skill across the Pacific Northwest, the northern half of California and the Great Basin. The MACA method exhibited additional skill over a broader geographic area extending from southwestern California to the northern Rockies.

Figure 5.

As in Figure 1, but for ERC (left two columns) and FFWI (right two columns). RMSE is unitless. This figure is available in colour online at wileyonlinelibrary.com/journal/joc

By contrast, the FFWI is insensitive to precipitation and acutely sensitive to wind speed; the methods thus performed well across the interior southwest and Colorado Plateau, where wind speeds were best captured (Figure 5). Improvements in skill were noted for the downscaling methods as a consequence of improved representation of wind speed and humidity. The strong skill of the methods in tracking FFWI across the interior southwest is important in assessing fire potential, given the role of high-frequency meteorological conditions in wind-driven fires (Crimmins et al., 2006).

Fire managers in the US use fire danger indices operationally in strategic decision making. Of particular interest is extreme fire danger, typically defined by the 90th or 97th percentile threshold calculated from historical fire danger indices. These extremes are designated as critical thresholds for operational fire management, as they tend to represent conditions under which fire suppression becomes problematic and ignitions may more easily become large wildfires. Likewise, empirical analysis has shown strong relationships between ERC and large wildfires in the northern Rockies (Kolden et al., 2010), as well as between FFWI and large wildfires in Southern California (e.g. Moritz et al., 2010).
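As a brief illustration of the threshold definition above, the sketch below flags days at or above the 97th percentile of a historical index series; the function name and interface are ours.

```python
import numpy as np

def extreme_days(index_series, pct=97.0):
    """Flag days at or above the historical percentile threshold of a fire
    danger index (e.g. ERC or FFWI); returns the mask and the threshold."""
    threshold = np.percentile(index_series, pct)
    return index_series >= threshold, threshold
```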

Two examples illustrate the ability of the downscaling methods to track observed fire danger indices during periods of extreme fire danger. Figure 6(a) shows ERC from 1 June–20 Oct 2006 in the Lolo National Forest in western Montana (47.4°N, 114.9°W). Both the daily BCSD and MACA methods tracked observations closely, whereas direct interpolation from the reanalysis exhibited a large positive bias. During the period of extreme fire danger (ERC > 97th percentile, denoted by the dashed horizontal line and derived from historical observations) from August through mid-September, when several conflagrations burned in the northern Rockies, the MACA method was more adept at tracking the magnitude of ERC than the BCSD method. Figure 6(b) shows FFWI from 12 Oct to 28 Nov 2007 in the southwestern foothills of the Angeles National Forest in California (34.3°N, 118.1°W), including the observed offshore Santa Ana wind events of Oct 21–24 and Nov 24 that drove several large fires across the region. The daily BCSD and MACA methods captured the extreme fire danger as observed, suggesting their utility in such settings. In these two cases, as well as in others not explicitly shown, the MACA method did a better job of representing the magnitude of the extremes, whereas the BCSD method was more conservative.

Figure 6.

Time series of (a) ERC for 1 Jun–20 Oct 2006, Lolo National Forest (47.4°N, 114.9°W), and (b) FFWI for 12 Oct–28 Nov 2007, Angeles National Forest (34.3°N, 118.1°W), for observed (black), direct interpolation from reanalysis (dashed green), daily BCSD (blue), and MACA (red) methods. The horizontal dashed black line denotes 97th percentile conditions derived from observations from 1979–2008. This figure is available in colour online at wileyonlinelibrary.com/journal/joc

4. Conclusions

Statistical downscaling was shown to perform well in capturing surface meteorological variables and fire danger indices across the physiographically challenging landscape of the western US. However, the MACA method proved more advantageous due to (1) its use of analogs, which avoids interpolation-based downscaling, and (2) its multivariate approach, which improves the physical relationships between variables compared to treating variables independently. The superior ability of MACA to track fire danger indices suggests that multivariate statistical downscaling is better suited for applications that are sensitive to a spectrum of meteorological variables.

Although the daily BCSD and MACA methods exhibited value-added skill for present-day climate conditions, a few caveats are outlined pertaining to their application to GCM output and utility for end users. First, statistical downscaling needs to be derived from a representative sample distribution of observations; statistical relationships built using short periods of record (e.g. 10 years) may be of reduced quality. Second, robust skill in downscaling for the observed period does not imply that skill will remain constant under climate change scenarios. Finally, in regions and seasons where the GCM lacks skill, statistical downscaling may not yield useful results and regional climate modelling may be preferable.

Although downscaling translates large-scale data into place-based data, downscaling itself does not eliminate the uncertainty inherent in climate projections. Decision makers can use downscaled scenarios in a probabilistic framework to understand impacts and devise adaptation strategies, but for practitioners to make informed decisions, a transparent explanation of the methodology and its validation is required. As with most datasets, practitioners should be cognizant of the assumptions and limitations of the underlying data sources (i.e. observations and GCMs).

Acknowledgements

We thank the anonymous reviewers for their efforts in improving the quality of this manuscript. This research was supported by the NSF Idaho EPSCoR Program and by the National Science Foundation under award number EPS-0814387, the Joint Fire Science Program under award number 08-1-1-19, and the United States Forest Service under award number 10-JV-11261900-039.
