Keywords:

  • hydrologic extremes;
  • climate change;
  • water quality;
  • extreme value statistics

Abstract

[1] Although information about climate change and its implications is becoming increasingly available to water utility managers, additional tools are needed to translate this information into secondary products useful for local assessments. The anticipated intensification of the hydrologic cycle makes quantifying changes to hydrologic extremes, as well as associated water quality effects, of particular concern. To this end, this paper focuses on using extreme value statistics to describe maximum monthly flow distributions at a given site, where the nonstationarity is derived from concurrent climate information. From these statistics, flow quantiles are reconstructed over the historic record and then projected to 2100. This paper extends this analysis to an associated source water quality impact, whereby the corresponding risk of exceeding a water quality threshold is examined. The approach is applied to a drinking water source in the Pacific Northwest United States that has experienced elevated turbidity values correlated with high streamflow. Results demonstrate that based on climate change information from the most recent Intergovernmental Panel on Climate Change assessment report, the variability and magnitude of extreme streamflows substantially increase over the 21st century. Consequently, the likelihood of a turbidity exceedance increases, as do the associated relative costs. The framework is general and could be applied to estimate extreme streamflow under climate change at other locations, with straightforward extensions to other water quality variables that depend on extreme hydroclimate.

1. Introduction

[2] The anticipated changes in extreme events that may accompany climate change are of particular concern, as they hold the most potential for negative societal impacts and disruption [Kunkel et al., 1999; Easterling et al., 2000; Meehl et al., 2000]. Although the Atmosphere-Ocean General Circulation Models (AOGCMs) from the recent Intergovernmental Panel on Climate Change assessment report (IPCC AR4) generally project increases in intense precipitation and flooding at a large spatial scale [Meehl et al., 2007; Christensen et al., 2007], the coarse resolution of the models limits their ability to quantify extreme events at regional and point scales that are important for decision making [Wilbanks et al., 2007]. Typically, statistics of extreme weather events are estimated on a subcontinental scale using coarse techniques such as AOGCM outputs to quantify threshold exceedances (e.g., number of frost days, number of very heavy precipitation events, etc. [see Gutowski et al., 2008, and references therein]). There have been limited efforts at characterizing the extremes in large-scale precipitation associated with climate change using statistical techniques [Kharin and Zwiers, 2005; Kharin et al., 2007], and streamflow extremes have been estimated by using numerical simulations of climate change in a coupled ocean-atmosphere-land model [Milly et al., 2002].

[3] Extreme value statistics are well suited to analyzing hydrologic phenomena [Katz et al., 2002], especially flood events, which are of particular interest to the fields of engineering and water resources. As such, the generalized extreme value (GEV) distribution has been employed widely in describing flood characteristics [Lettenmaier et al., 1987; Hosking and Wallis, 1988; Stedinger and Lu, 1995; Morrison and Smith, 2002]. Traditionally, the GEV distribution is fit assuming that the underlying process is stationary in time, with observations that are independent and identically distributed (IID). However, it has become generally accepted that climate variability can play a significant role in the magnitude and frequency of extreme streamflow events. Natural modes of interannual and interdecadal variability (e.g., the El Niño phenomenon) have been found to influence flood frequency [Cayan et al., 1999; Jain and Lall, 2000; Jain and Lall, 2001]. Furthermore, evidence of long-term trends, such as from global warming, has undermined the long-held assumption of stationarity [Milly et al., 2008].

[4] To account for nonstationarity, the parameters of the GEV can be varied with a set of predictors or covariates [Coles, 2001]. Several extremal distribution studies provide a variety of applications where covariates have been incorporated; examples include the annual cycle of precipitation [Katz et al., 2002; Maraun et al., 2009], trends in extreme sea level [Coles, 2001; Katz et al., 2002] and temperature [Cooley, 2009], the influence of basin characteristics on high flows [Smith, 1989], the effect of winter mean temperature on maximum snow depth [Buishand, 1989], and the influence of disturbances on extreme sediment yields and rates [Katz et al., 2005]. Large-scale atmospheric variables (e.g., sea level pressure or standardized indices such as the Southern Oscillation Index) have been incorporated into modeling extreme precipitation [Katz et al., 2002; El Adlouni et al., 2007], streamflow peaks [Katz et al., 2002], extreme sea level [Coles, 2001], and hurricane intensities [Mestre and Hallegatte, 2009]. Climate indices have also been used as covariates to estimate conditional flood frequency in a quantile regression, semiparametric framework [Sankarasubramanian and Lall, 2003] and in a fully nonparametric approach using local polynomials [Apipattanavis et al., 2010]. However, the GEV-based approach is more flexible and easier to implement, as well as being justified by statistical theory. The GEV approach has also been used in conjunction with Bayesian frameworks to improve parameter estimation for small sample sizes [Martins and Stedinger, 2000], combine local and regional information [Seidou et al., 2006], and incorporate nonstationarity [Renard et al., 2006].

[5] These studies clearly have implications for quantifying extremes under climate change but have not been utilized very often in this context. Our study is motivated by this capability and the pressing need for a robust tool to provide statistics of extreme events at regional and point scales for decision making. To this end, this paper proposes the use of the GEV to describe maximum monthly streamflow quantiles with concurrent climate information to capture the nonstationarity. The developed nonstationary GEV model is used to obtain information about extreme streamflows in the future based on climate projections from the IPCC AR4. Furthermore, the estimates of flow quantiles are integrated with the probabilistic technique developed by Towler et al. [2010] to obtain the risk of turbidity threshold exceedance in source water quality. The two-step framework is applied to a drinking water source in the Pacific Northwest United States that has experienced elevated turbidity values in correlation with high streamflow.

[6] A brief overview of the case study for which the framework is developed and demonstrated is provided in section 2. The methodology includes two main parts: (1) the extreme value model used to describe the maximum flow distributions and (2) the approach used to calculate the corresponding water quality threshold exceedance risk. Results for the case study utility are then presented and discussed.

2. Case Study Overview

[7] The case study overview is divided into two parts: Section 2.1 provides background information on the municipal drinking water source and water quality issue, and section 2.2 describes the data used in the approach.

2.1. Background

[8] The Bull Run Watershed, the primary source of water for the Portland Water Bureau (PWB) in Oregon, is used as a case study to demonstrate the framework. PWB provides water to more than 20% of all Oregonians, including the city of Portland. The Bull Run Watershed is a protected watershed with very high source water quality, enabling PWB to meet federal drinking water standards without filtration treatment. However, historic flooding and subsequent high-turbidity events have underscored PWB's vulnerability as an unfiltered source (PWB, Discover your drinking water, 2007, available at http://www.portlandonline.com/WATER/index.cfm?c=49358&a=225481). For utilities that do not filter, one criterion of the Surface Water Treatment Rule requires that before disinfection, the turbidity level not exceed 5 nephelometric turbidity units (NTU) [U.S. Environmental Protection Agency, 1989]. If conditions arise that could cause an exceedance, PWB follows procedures and makes decisions based on monitored turbidity levels, weather patterns, antecedent conditions, and other case-specific information to ensure compliance. When necessary, PWB switches to its low-turbidity supplemental groundwater source, which ensures that PWB remains in compliance but is more expensive because of pumping costs. As such, the ability to estimate flood distributions and the potential for turbidity exceedance under climate change could provide additional information for management and planning purposes.

2.2. Data

2.2.1. Historic

[9] The following data sets for the period of 1970–2007 were used in the analysis:

[10] 1. Streamflow discharge data for the main stem to the drinking water source were obtained from U.S. Geological Survey Bull Run station gage 14138850. For this analysis, the time series of maximum monthly streamflows was derived from the available daily time series.

[11] 2. Turbidity data from the treatment system headworks were obtained from PWB. The headworks are located below the two storage reservoirs on the Bull Run River, the location from which the municipal drinking water supply is provided to the conduits that take water into the Portland metropolitan area. For this analysis, the time series of maximum monthly turbidity was derived from the available daily time series.

[12] 3. For each month, average temperature and average daily precipitation data over the Oregon Northern Cascades region (Division 4) were obtained from the U.S. climate division data set available at the NOAA-Cooperative Institute for Research in Environmental Sciences Climate Diagnostics Center (CDC) Web site (http://www7.ncdc.noaa.gov/CDO/CDODivisionalSelect.jsp).

[13] The climate of the Pacific Northwest includes a distinct wet winter season (November–February), which is generally when the flooding risk, and therefore the turbidity exceedance risk, is highest. Thus, focus was placed on the winter season, and henceforth the four winter months of November–February were pooled together for the analysis in subsequent figures and calculations. Preliminary analyses showed that the streamflow and turbidity characteristics were similar for all four months, so pooling them yields a larger sample size; a sketch of this pooling step is given below.
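
As a concrete illustration of this preprocessing step, the following R sketch pools winter monthly maxima from a daily record. The data frame `daily` and its column names are illustrative assumptions, not objects from the original study.

    # Pool winter (Nov-Feb) monthly maxima from a daily record.
    # `daily` is assumed to have columns `date` (class Date) and `flow` (cfs).
    daily$year  <- as.numeric(format(daily$date, "%Y"))
    daily$month <- as.numeric(format(daily$date, "%m"))
    winter <- subset(daily, month %in% c(11, 12, 1, 2))
    # One maximum per month; 4 winter months per year gives n = 148 here
    monmax <- aggregate(flow ~ month + year, data = winter, FUN = max)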

2.2.2. Climate Change Projections

[14] AOGCM output that was used in the IPCC AR4 was obtained from the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 3 (CMIP3) multimodel data set. These include bias-corrected and spatially downscaled climate projections derived from CMIP3 data and served at http://gdo-dcp.ucllnl.org/downscaled_cmip3_projections/, as described by Maurer et al. [2007]. This effort has resulted in a public-access archive of monthly data of average daily precipitation (R) and average temperature (T), where each climate projection is downscaled using a two-step technique [Wood et al., 2004; Maurer, 2007]. First, the AOGCM output is bias-corrected for the simulated past relative to the observed record. Second, the data are spatially downscaled onto a 1/8° grid (i.e., grid cells approximately 12 km on a side). The details of each step are outlined on the aforementioned Web site.

[15] From this Web site, monthly R and T values were obtained for the period of January 1950 through December 2099 and averaged over the approximate bounds of the watershed area (i.e., latitude range of 45.25°N–45.75°N and longitude range of −122.25° to −121.625°), for emissions path A2 for all available runs of each climate model, resulting in a total of 36 climate model runs. Similar to the bias-correction step performed in the CMIP3 downscaling, the R and T values from the historic period were bias-corrected relative to the observed data (i.e., the CDC data described in section 2.2.1, paragraph 3). This was carried out for each climate model run as follows: (1) A linear regression was fitted between the quantiles from the AOGCM model run and the observed record for the overlapping historic period (i.e., 1970–2007), and (2) the regression model was applied to the future AOGCM R and T values. Given the prior bias-correction step, it was not surprising that the quantiles were highly correlated (>0.95) for both R and T and that the AOGCM output just slightly overpredicted R and underpredicted T for the majority of the runs.
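
The two-step bias correction described above can be sketched in a few lines of R; this is a minimal illustration for one climate model run and one variable (R or T), with illustrative names (`gcm_hist`, `obs`, `gcm_fut`) that are assumptions rather than objects from the study.

    # Step 1: regress observed quantiles on AOGCM quantiles (1970-2007);
    # step 2: apply the fitted map to the future AOGCM values.
    p  <- seq(0.01, 0.99, by = 0.01)
    qg <- quantile(gcm_hist, probs = p)  # AOGCM quantiles, historic period
    qo <- quantile(obs,      probs = p)  # observed (CDC) quantiles
    fit_bc <- lm(qo ~ qg)                # linear map between quantiles
    gcm_fut_bc <- predict(fit_bc, newdata = data.frame(qg = gcm_fut))
    cor(qg, qo)                          # was > 0.95 for both R and T here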

3. Methodology

[16] This methodology is divided into two main parts: Section 3.1 explains the extreme value model used for the maximum flow distributions, and section 3.2 describes the approach used to calculate the corresponding water quality exceedance risk.

3.1. Extreme Value Model

3.1.1. Generalized Extreme Value Model

[17] Extreme value theory provides a basis for the modeling of data maxima (or minima). On the basis of an underlying asymptotic argument, the theory allows for the extrapolation of the observed to the unobserved [Coles, 2001]. When the data maxima of random variables are taken over sufficiently long blocks of time, it is appropriate to fit the family of GEV distributions. The cumulative distribution function for the GEV is defined as

  G(z; θ) = exp{ −[1 + ξ((z − μ)/σ)]^(−1/ξ) }    (1)

where in our application z denotes a value of the monthly streamflow maxima, θ = [μ, σ, ξ], and 1 + ξ((z − μ)/σ) ≥ 0. The location parameter μ indicates where the distribution is centered, the scale parameter σ > 0 indicates the spread of the distribution, and the shape parameter ξ indicates the behavior of the distribution's upper tail [Coles, 2001]. On the basis of the shape parameter, the GEV can assume three possible types, known as the Gumbel, Fréchet, and Weibull. The Gumbel is an unbounded light-tailed distribution (ξ = 0), whereby the tail decreases relatively rapidly (i.e., exponential decay). The Fréchet is a heavy-tailed distribution (ξ > 0), whereby the tail's rate of decrease is relatively slower (i.e., polynomial decay). The Weibull is a bounded distribution (ξ < 0), whereby the tail has a finite upper bound at z = μ − σ/ξ [Coles, 2001]. Although several methods can be used to estimate the GEV model parameters, here maximum likelihood estimation (MLE) was utilized for its ability to easily incorporate covariate information [Katz et al., 2002]. The unknown parameters θ were estimated by maximizing the log-likelihood (llh), which is defined as

  llh(θ) = Σ(i=1..N) log g(zi; θ)    (2)

where g(z;θ) is the derivative of G(z;θ) with respect to z and N is the sample size. Equation (2) can be expanded as

  llh(θ) = −N log σ − (1 + 1/ξ) Σ(i=1..N) log[1 + ξ((zi − μ)/σ)] − Σ(i=1..N) [1 + ξ((zi − μ)/σ)]^(−1/ξ)    (3)

where 1 + ξ((zi − μ)/σ) ≥ 0 for each i. For the purpose of optimization, it is convenient to minimize nllh(θ) = −llh(θ) instead of directly maximizing llh(θ).
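
A minimal R sketch of this estimation step follows; it minimizes nllh(θ) directly with optim rather than calling the extRemes routines used later, and it assumes `z` is the vector of winter monthly maxima (the ξ → 0 Gumbel limit is omitted for brevity).

    # Negative log-likelihood of the GEV (negative of equation (3)).
    gev_nllh <- function(theta, z) {
      mu <- theta[1]; sigma <- theta[2]; xi <- theta[3]
      if (sigma <= 0) return(Inf)
      y <- 1 + xi * (z - mu) / sigma
      if (any(y <= 0)) return(Inf)  # support constraint of equation (1)
      length(z) * log(sigma) + (1 + 1/xi) * sum(log(y)) + sum(y^(-1/xi))
    }
    fit0 <- optim(c(mean(z), sd(z), 0.1), gev_nllh, z = z, hessian = TRUE)
    theta_hat <- fit0$par                  # MLEs of mu, sigma, xi
    se <- sqrt(diag(solve(fit0$hessian)))  # approximate standard errors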

[18] It is generally useful to analyze the probabilistic quantiles of the associated GEV. Quantiles, z(1−p), are obtained by inverting equation (1):

  z(1−p) = μ − (σ/ξ){1 − [−log(1 − p)]^(−ξ)}    (4)

with 0 < p < 1. That is, there is a p × 100% chance of exceeding z(1−p) in the time block chosen. When the annual maxima are being examined, z(1−p) corresponds to the return period (1/p), where the level z(1 − p) is expected to be exceeded on average once every 1/p years [Coles, 2001]. For time scales other than annual, the interpretation is analogous but must be adjusted appropriately. Confidence intervals can be calculated for the quantiles utilizing techniques such as the delta method or the profile likelihood method (see sections 2.6.4 and 2.6.5, respectively, in the study by Coles [2001]).
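
Continuing the sketch above, the fitted parameters can be inverted to quantiles via equation (4); for example, p = 0.10 yields z90, the level with a 10% chance of being exceeded in a given time block.

    # Quantile z_(1-p) from equation (4), using the MLEs fitted above.
    gev_quantile <- function(theta, p) {
      mu <- theta[1]; sigma <- theta[2]; xi <- theta[3]
      mu - (sigma / xi) * (1 - (-log(1 - p))^(-xi))
    }
    z90 <- gev_quantile(theta_hat, p = 0.10)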

[19] For the case study, we fitted the GEV to the historic winter monthly streamflow maxima. Typically, extreme values are analyzed on an annual time scale, thus resulting in one maximum (or minimum) value per year, but for data sets of limited length, this discards much of the available data. As such, here we examined the winter monthly maxima (the time period of importance in this application); thus, there were four values per year for 37 years, resulting in a sample size of 148. This relatively large sample size helped to justify the use of the MLE, as this estimation technique can be problematic for small sample sizes [Hosking and Wallis, 1988; Hosking, 1990].

[20] We refer to the above model as the unconditional (Uncond) model, and in section 3.1.2, we describe the modification of the GEV to include covariates (referred to as “conditional”).

3.1.2. Nonstationarity in GEV Model

[21] Traditionally, the GEV distribution assumes that observations are IID, but this assumption can be relaxed, with the introduction of covariates to account for nonstationarity. For instance, the parameters of the GEV distribution could be dependent on a given predictor, x, or more generally, for a suite (i.e., vector) of predictors, X. Theoretically, this could apply to any of the three aforementioned model parameters. Here, because of its intuitive appeal, we examined this dependence only for the location parameter:

  μ(X) = β0 + β1x1 + β2x2 + …    (5)

where the β values are the intercept and predictor coefficients, which were fitted so as to maximize the likelihood function (equation (2)). We note that now θ = [β, σ, ξ] and that for each time block, μ and the resulting GEV will change with the covariate(s).
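
The stationary sketch above extends naturally to this conditional case; the following example, a sketch under the same assumptions, adds a single covariate x to the location parameter, mirroring equation (5).

    # Conditional GEV: location parameter mu = b0 + b1 * x (equation (5)).
    gev_nllh_cond <- function(theta, z, x) {
      b0 <- theta[1]; b1 <- theta[2]; sigma <- theta[3]; xi <- theta[4]
      mu <- b0 + b1 * x                # mu now varies with each time block
      if (sigma <= 0) return(Inf)
      y <- 1 + xi * (z - mu) / sigma
      if (any(y <= 0)) return(Inf)
      length(z) * log(sigma) + (1 + 1/xi) * sum(log(y)) + sum(y^(-1/xi))
    }
    fit1 <- optim(c(mean(z), 0, sd(z), 0.1), gev_nllh_cond, z = z, x = x)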

[22] The strong linear relationship between precipitation and streamflow in the winter months [Towler et al., 2010] is indicative of a rainfall-runoff mechanism: rain provides 90%–95% of the Bull Run River's water (PWB, 2007). Hence, R was examined as a covariate, as was T, which influences evaporation and soil moisture. Furthermore, these covariates were readily available and downscaled from the AOGCMs. This resulted in the testing of four conditional (Cond) model combinations (see Table 1): (1) T as a covariate (CondT model), (2) R as a covariate (CondR model), (3) the product of R and T as a covariate (CondRT model), and (4) both R and T as covariates (CondR+T model). The significance of the covariates was evaluated using a likelihood ratio test comparing the nested models, adapted from section 2.6.6 of Coles [2001] and summarized below.

Table 1. Generalized Extreme Value Coefficients and Goodness of Fit Statistics for Models of Winter Monthly Maximum Streamflow

  Variable      Uncond            CondT             CondR             CondRT            CondR+T
  Model for μ   β0                β0 + β1T          β0 + β1R          β0 + β1(RT)       β0 + β1R + β2T
  β0 (SE)       1924 (120)        1930 (1000)       1739 (410)        611.4 (150)       1911 (880)
  β1 (SE)       -                 −0.8914 (27)      61.08 (32)        3.716 (0.36)      141.2 (14)
  β2 (SE)       -                 -                 -                 -                 −36.45 (24)
  σ (SE)        1245 (84)         1220 (81)         1246 (160)        923.7 (69)        968.5 (74)
  ξ (SE)        −0.02246 (0.065)  −0.01286 (0.065)  −0.06180 (0.084)  0.07009 (0.082)   0.01619 (0.075)
  llh           −1289             −1289             −1274             −1250             −1250
  K             1                 2                 2                 2                 3
  AIC           2580              2582              2552              2504              2506
  M0^a          -                 Uncond            Uncond            Uncond            CondR
  D             -                 0                 30                78                48
  Sig^b         -                 No (0.635)        Yes (0.000)       Yes (0.000)       Yes (0.000)
  ρ^c           -                 -                 0.5516            0.5989            0.5918

  ^a Nested model against which each model is compared in the likelihood ratio test.
  ^b Significance is tested at the α = 0.05 level; the value in parentheses is the p-value.
  ^c Correlation between the cross-validated z90 estimates and the observed maximum values.

[23] Consider a model M0 that is nested within (i.e., a submodel of) a model M1, and let llh0(M0) and llh1(M1) be the maximized values of the log-likelihood for the two models. The deviance statistic D can be calculated as

  D = 2{llh1(M1) − llh0(M0)}    (6)

If D > cα, where cα is the (1 − α) quantile of the χ²k distribution, then M0 can be rejected in favor of M1. Here, α is the level of significance, the χ²k distribution is a large-sample approximation, and k is the number of degrees of freedom associated with the test. Models were tested at the α = 0.05 significance level against the appropriate submodels. For CondT, CondR, and CondRT, the appropriate submodel was the Uncond model; CondR+T was tested against CondR. For each test, the degrees of freedom k = 1, with cα = 3.84.
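
In R the test reduces to a few lines; the sketch below compares the stationary and one-covariate fits from the earlier examples (k = 1), recalling that optim returns nllh, so llh = −$value.

    # Likelihood ratio test (equation (6)): fit1 (conditional M1) vs. fit0
    # (unconditional M0).
    D <- 2 * ((-fit1$value) - (-fit0$value))
    c_alpha <- qchisq(0.95, df = 1)   # 3.84 for k = 1 at alpha = 0.05
    p_value <- 1 - pchisq(D, df = 1)
    D > c_alpha                       # TRUE -> reject M0 in favor of M1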

[24] The “best model” was selected from the conditional models based on minimizing the Akaike Information Criterion (AIC) [Akaike, 1974], which can be calculated as:

  AIC = −2 llh + 2K    (7)

where llh was estimated from equation (3) with K being the number of parameters estimated.
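
As a quick check, the AIC can be computed from the minimized nllh values; with the K convention of Table 1, the CondRT entry is reproduced as follows.

    # AIC = -2*llh + 2K (equation (7)); K follows the convention of Table 1.
    aic <- function(nllh, K) 2 * nllh + 2 * K
    aic(1250, K = 2)   # CondRT row of Table 1: 2504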

[25] The conditional and unconditional GEV models were fitted using the extRemes package (http://cran.r-project.org/web/packages/extRemes/extRemes.pdf) in the statistical package "R" (http://www.r-project.org/). The best fit model was used in conjunction with future climate projections of R and T to estimate the corresponding streamflow quantiles. Furthermore, leave-one-out cross-validation of the quantile estimates was performed in the following manner: (1) the first response (z1) and predictors (R1, T1) are removed from the observed data set, (2) the GEV model is fitted to the remaining (N − 1) responses and predictors, (3) the predictors (R1, T1) are used to estimate the first quantile response from the model developed in step (2), and (4) the procedure is repeated for each of the remaining paired responses and predictors. Correlations between the observed maxima and the cross-validated conditional flood quantiles were calculated for four quantiles: z99, z90, z50, and z10. A sketch of this procedure is given below.
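
The leave-one-out procedure can be sketched by reusing the conditional fit from section 3.1.2; here z, R, and T are the pooled winter-month series, and the starting values are illustrative.

    # Leave-one-out cross-validation of the conditional z90, steps (1)-(4).
    xRT <- R * T                       # covariate for the CondRT model
    z90_cv <- numeric(length(z))
    for (i in seq_along(z)) {
      f <- optim(c(mean(z[-i]), 0, sd(z[-i]), 0.1), gev_nllh_cond,
                 z = z[-i], x = xRT[-i])        # refit without point i
      mu_i <- f$par[1] + f$par[2] * xRT[i]      # location for held-out month
      z90_cv[i] <- gev_quantile(c(mu_i, f$par[3], f$par[4]), p = 0.10)
    }
    cor(z90_cv, z)                     # cross-validated skill (rho, Table 1)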

3.2. Translation to Water Quality Extremes

[26] Maximum streamflow and turbidity exhibit a positive linear association (ρ = 0.28), but the scatterplot (Figure 1) reveals a distinct nonlinearity, as modeled by a local smoother [Loader, 1999]. For streamflow below 3000 cubic feet per second (cfs; 85 m³/s), the turbidity is low, typically less than 2 NTU, and the distribution is fairly tight and constant, whereas above 3000 cfs (85 m³/s), the turbidity response shows much more spread. However, all but one of the threshold exceedances occurs above 3000 cfs (85 m³/s). Although the variability of the turbidity response at higher streamflows would make estimation of actual values difficult, the probability of being either above or below the threshold can be modeled. In particular, the turbidity threshold exceedance can be modeled as a function of the monthly maximum flow. A logistic regression technique is appropriate, with the dependent variable (i.e., turbidity) taking on a categorical value of "1" if the value is greater than the prescribed threshold and "0" if it is less. The model can be expressed generally as

  P(E∣S) = f(S) + e    (8)

where P(E∣S) is the probability of a turbidity exceedance E conditioned on streamflow S, f is a general function relating the exceedance probability to its predictor, and e is the associated estimation error. We note that the error term is assumed to be normally distributed with a mean of 0, though the variance is not constant [Helsel and Hirsch, 1995]. The detailed steps of implementation for the logistic regression are given by Helsel and Hirsch [1995]. Our approach made one modification to the traditional approach by using the "local" (i.e., nonparametric) version of the logistic regression [Loader, 1999], which was implemented using the Locfit package (http://cran.r-project.org/web/packages/locfit/locfit.pdf) in the aforementioned statistical package "R." Essentially, the "local" approach provides an advantage in that the function f(S) is able to capture arbitrary underlying features (i.e., nonlinearities) in the data. The local approach is more general and has been found to outperform its global counterpart in terms of several objective criteria [Towler et al., 2010].
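
A minimal sketch of this step with the Locfit package is shown below; the smoothing parameters anticipate the values reported in section 4 (nn = 0.65, deg = 1), and the vectors `flow` and `turb` are illustrative names for the pooled monthly maxima.

    # Local logistic regression for P(E|S) (equation (8)) with locfit.
    library(locfit)
    exceed <- as.numeric(turb > 5)   # 1 if the 5 NTU threshold is exceeded
    lfit <- locfit(exceed ~ lp(flow, nn = 0.65, deg = 1), family = "binomial")
    s_grid <- seq(min(flow), max(flow), length.out = 200)
    pES <- predict(lfit, newdata = s_grid)  # P(E|S) estimates; locfit
                                            # back-transforms to the
                                            # probability scale by default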

Figure 1. Maximum monthly streamflow versus maximum monthly turbidity for the winter months. Dotted horizontal line is the regulatory threshold, 5 NTU, and triangles (and associated printed values) represent outliers outside the y axis range. Solid line is local smoother.

[27] The key quantity of interest is the total likelihood of water quality threshold exceedance for a given projection or forecast probability density function (PDF) of the streamflow. As such, the theorem of total probability [Ang and Tang, 2007] was used in its continuous form:

  P(E) = ∫ P(E∣S) P(S) dS    (9)

where P(E) is the total probability of a turbidity exceedance given the climate change projection, P(E∣S) is the conditional threshold exceedance probability from the logistic regression, and P(S) is the streamflow PDF under the climate change projection. In this case, the conditional GEV probability density function under climate change was used for P(S). To estimate P(E∣S) × P(S), each term was evaluated at equally spaced intervals of the streamflow range and the two were multiplied together, followed by numerical integration. Additional details are available in the study by Towler et al. [2010].
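
The convolution and integration can be sketched as follows, reusing the P(E∣S) grid from the logistic regression example; `RT_t` stands for a given month's R × T value (an illustrative name), and σ and ξ are taken from the CondRT column of Table 1.

    # Total probability (equation (9)): multiply P(E|S) by the conditional GEV
    # density on the flow grid, then integrate with the trapezoidal rule.
    gev_pdf <- function(z, mu, sigma, xi) {
      y <- 1 + xi * (z - mu) / sigma
      ifelse(y > 0, (1 / sigma) * y^(-1/xi - 1) * exp(-y^(-1/xi)), 0)
    }
    mu_t <- 611.4 + 3.716 * RT_t     # month's location parameter, equation (10)
    pS <- gev_pdf(s_grid, mu = mu_t, sigma = 923.7, xi = 0.07009)  # Table 1
    g  <- pES * pS
    P_E <- sum(diff(s_grid) * (head(g, -1) + tail(g, -1)) / 2)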

4. Results

[28] Following the first step of the framework, the GEV was fitted to the winter monthly streamflow maxima using the aforementioned predictor combinations of R and T. Three of the four conditional GEV models were found to be significant by the likelihood ratio test (Table 1). CondRT and CondR+T were very similar in terms of llh and ρ. However, CondRT required the estimation of one fewer parameter, which resulted in a smaller AIC value, making it the "best" model for the remaining analysis. This resulted in a final model for the location parameter μ, given by

  μ = 611.4 + 3.716 (RT)    (10)

Also, the diagnostics of the chosen "best model" were examined using quantile-quantile (i.e., q-q) plots to ensure the appropriateness of the GEV model (figures not shown).

[29] The cross-validated performance of the CondRT model over the historic period was evaluated for all four aforementioned conditional quantiles, and two of the quantiles (z99 and z10) were visually compared to the observed maximum monthly flows (Figure 2). The corresponding unconditional quantiles, estimated from the Uncond model, are also included (horizontal dashed lines) for a "stationary" comparison. It can be seen that the conditional quantiles shift considerably from the unconditional quantiles and generally in the same direction as the observations. This is further emphasized by comparing the density functions from the Uncond model with those of the CondRT model for two selected winter months, chosen as representative examples of "high" and "low" flow months (Figure 3). This shows the dramatic shift in the PDF between high- and low-flow months, as compared to the stationary Uncond model. It can be seen that the PDFs have reasonable probability mass over the observed maxima (i.e., asterisk and triangle). Furthermore, we can quantify these differences by examining the probability of exceeding a given quantile value from the Uncond model. For example, z90 = 4656 cfs (132 m³/s) for the Uncond model (see Table 2 and the vertical gray dotted line in Figure 3), so the probability of exceeding this value under the Uncond model is 10%. However, in the selected high- (low-) flow month, the chance of exceeding z90 (= 4656 cfs) increases (decreases) to 39% (2.9%). As expected, the other examined quantiles behaved similarly.
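
These exceedance probabilities follow directly from the GEV CDF (equation (1)); the sketch below evaluates 1 − G(z) at the unconditional z90, with `mu_t` again standing for the month's conditional location parameter (equation (10)) as in the earlier example.

    # Chance of exceeding the unconditional z90 (4656 cfs) in a given month,
    # from the conditional GEV CDF (equation (1)); the support constraint is
    # assumed satisfied at this flow level.
    gev_cdf <- function(z, mu, sigma, xi)
      exp(-(1 + xi * (z - mu) / sigma)^(-1/xi))
    1 - gev_cdf(4656, mu = mu_t, sigma = 923.7, xi = 0.07009)
    # ~0.39 for the high-flow month and ~0.029 for the low-flow month (Table 2)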

Figure 2. 10th and 99th quantile estimates (z10 and z99) of maximum streamflow for the unconditional model (Uncond) and cross-validated conditional model (CondRT).

Figure 3. Distributions for the unconditional model (Uncond; black line), as well as examples of the conditional model (CondRT) for high (dark gray dashed line) and low (light gray dashed line) flow winter months. The 99th unconditional quantile (Uncond z99; dotted vertical gray line) and observed maximum flows (triangle and asterisk) are overlaid.

Table 2. Probability of Exceeding Unconditional Quantiles for the Unconditional Model (Uncond) and for Selected Low- and High-Flow Months for the Conditional Model (CondRT)

                                      P[Z > z(1−p)] (%)
  Uncond Quantile (cfs)    Uncond    "Low" CondRT (11/2006)    "High" CondRT (2/2005)
  z99 = 7365               1         0.36                      3.8
  z90 = 4656               10        2.9                       39
  z50 = 2379               50        21                        100
  z10 = 876.2              90        67                        100

[30] Using the mean z99 from the observed (RT) data in the CondRT model for the 1970–2007 period as the reference, z99 anomalies were calculated (1) for the observed (RT) data for each winter month from 1970 to 2007 and (2) for the bias-corrected AOGCM output of (RT) values for each winter month from 1950 to 2100 (Figure 4). For (2), each gray line represents the calculation from a different AOGCM model run, and the black line is the ensemble mean. For (1), the observed anomalies from the historic period are overlaid (blue line, Figure 4) and indicate reasonable agreement in variability and magnitude with the AOGCM ensembles. As the time line moves further into the future, the envelope of variability widens, and the ensemble average exhibits an increasing trend. This is seen more clearly when the conditional z99 values are represented as PDFs and broken into four time periods (Figure 5). The PDF of the CondRT model during the historic record (black line) is shown for reference in each frame, with the PDFs from the ensembles (gray lines) exhibiting a flattening of the peak and shifting to the right (Figure 5).

Figure 4. The 99th quantile (z99) anomalies from the conditional model (CondRT) for the 1950–2100 period. The blue line represents the calculation from the observed data (precipitation multiplied by temperature, RT), the gray lines are calculated from the bias-corrected RT values from each climate model run, and the black line is the mean of the climate model runs.

Figure 5. The 99th quantile (z99) values from the conditional model (CondRT) for four epochs. The black line represents the calculation from the observed data (precipitation multiplied by temperature, RT), and the gray lines are calculated for RT values from each climate model run.

[31] As described in section 3, a local logistic regression was developed to quantify how the conditional risk of exceeding the 5 NTU turbidity threshold changes with maximum monthly streamflow. For the developed model, the best parameters were identified as α = 0.65 and degree = 1, indicating that 65% of the data points were used in a first-degree polynomial to fit the local logistic regression at each estimation point (see Loader [1999] for details). The resulting conditional probability of exceedance function from the local logistic regression (Figure 6) shows that the probability estimates increase rapidly around 4000 cfs (113 m³/s). An example of this function's ability to capture local features is the increase around 1000 cfs (28 m³/s), reflecting the historical observed exceedance at this flow.

Figure 6. Monthly categorical turbidity values are regressed against maximum monthly streamflow values using local logistic regression, resulting in the conditional probability of exceedance, P(ES). Vertical dashes are observed categorical data.

[32] The logistic curve and the CondRT model for each winter month were convolved and the area under each curve was summed (i.e., equation (9)), resulting in the total probability of a turbidity exceedance. Total probabilities were calculated for each of the four time periods and are shown as PDFs (Figure 7). Using the historic record PDF (black line) for reference, the PDFs from the AOGCM projections reveal flattening peaks and heavier tails (Figure 7). For each of the four epochs, the total exceedance probabilities from the AOGCM projections (i.e., gray lines in Figure 7) were aggregated and summarized in terms of selected quantiles (Table 3a and the box plots in Figure 8). In the box plots of Figure 8, the bottom and top of each box represent the 25th and 75th quantiles, respectively; the whiskers show the 5th and 95th quantiles; points are values outside this range; and the horizontal line represents the median. The implications of these results for decision making are further explored in section 5.

Figure 7. Resulting total exceedance probability P(E) for each of the four epochs. Black line is from the observed data, and gray lines are from the values of each climate model run.

Figure 8. Same data as in Figure 7, except that results are shown as box plots for the four epochs and the y axis has a log scale. Horizontal dashed lines correspond to values from comparative study [Towler et al., 2010], where dark gray is “very wet” scenario, black is “normal,” and light gray is “very dry.”

Table 3. Total Exceedance Probabilities, P(E) × 100^a

  Percentile^b   1950–2007   2010–2039   2040–2069   2070–2099    Scenario^c   P(E) × 100 (%)
  95th           13          16          20          24           Very wet     14
  75th           6.5         7.2         8.2         9.9          -            -
  50th           4.2         4.5         5.0         5.4          Normal       5.8
  25th           3.3         3.3         3.4         3.5          -            -
  5th            3.0         3.0         3.0         3.0          Very dry     1.5

  ^a For (a) selected quantile values from the climate model ensembles (left columns) and (b) previous scenario-based results (right columns).
  ^b Results from each climate model run were aggregated and are summarized in terms of selected quantiles.
  ^c Forecast scenarios are based on weighted resampling of historic average streamflow terciles. For example, the "very wet" ("very dry") scenario was obtained by resampling from the wet (dry) tercile 90% of the time. See text and the comparison paper [Towler et al., 2010] for details.

[33] The AOGCM simulations for the historic period (i.e., 1950–2007 in Figure 8) show the natural variability as represented by the climate model runs. However, resampling observed data is another way to characterize historic variability. In a previous study [Towler et al., 2010], the total exceedance probability was calculated under a range of forecast scenarios expressed in terms of average monthly streamflow (see Table 3b and the horizontal dashed lines in Figure 8). Results from the two studies are comparable because the association between turbidity exceedance and flow was very similar for both average and maximum flows. In the comparison study, probabilistic seasonal forecasts were used as weights in resampling the historic average flows to generate an ensemble for each scenario. To this end, the observed historic flows were ordered and grouped into terciles; the "normal" ensemble was obtained by resampling equally from each tercile, whereas the "very wet" ("very dry") ensemble was obtained by resampling from the wettest (driest) tercile 90% of the time. These "extreme" scenarios, though not representative of actual historic forecasts, were used to illustrate the envelope of turbidity exceedance likelihood under historic variability. In Figure 8, the log scale on the y axis emphasizes that the "very wet" forecast scenario (dark gray dashed line) was an outlier for the historic period but moves within the 95th quantile whiskers as the time periods progress. Similarly, the "normal" forecast scenario (black dashed line) moves from the 75th quantile toward the median. This indicates that, relative to both the historic variability of the AOGCM runs and the observed resampling technique, the exceedance likelihoods increase.

5. Discussion

[34] Section 5.1 discusses the relevant uncertainties and associated limitations to the approach. Section 5.2 provides an example of how this type of information could be incorporated in a decision application.

5.1. Uncertainties and Limitations

[35] It should be noted that the hydrologic extremes and the turbidity exceedance results need to be viewed in the context of their associated uncertainty. Here we discuss three main sources of uncertainty: climate variability, parameter uncertainty, and model uncertainty. A comprehensive overview of uncertainty estimation in climate change studies is provided by Katz [2002].

[36] In this approach, the main source of uncertainty stems from climate variability, which we characterize by using climate output from the full suite of available AOGCM runs for the A2 emission scenario. Although it is widely recognized that there are underlying uncertainties in future greenhouse gas emissions, in our understanding of the climate system, and in the downscaling methods, a precondition of using these models is to assume that they realistically reproduce the known features of climate and natural variability. Furthermore, as AOGCM projections improve with increased scientific understanding and modeling capabilities, we note the importance of having these types of methods developed and ready to use.

[37] The uncertainty in the statistical framework is more straightforward to characterize. From the statistical theory, parameter uncertainty can be readily obtained through standard error (SE) estimates (see Table 1) and confidence intervals can be calculated for the GEV [Coles, 2001] as well as for the local logistic [Helsel and Hirsch, 1995]. Alternatively, bootstrap resampling techniques [Efron and Tibshirani, 1993] offer an appealing nonparametric option.

[38] The issue of model (or structural) uncertainty is a more complex problem. Here only single "best fit" models, the GEV and the local logistic, were developed to describe the maximum flows and turbidity exceedance risk, respectively. Furthermore, although the covariates examined in this paper were found to be sufficient, other explanatory variables should be considered as additional data become available and system knowledge increases. A robust way to incorporate this type of information is to employ multimodel ensembles, which can help capture the process uncertainty [e.g., Regonda et al., 2006]. Also related is the implicit assumption that the established empirical relationship between flow and water quality will hold in the future, which underscores the importance of updating models as new data become available. Furthermore, mechanistic and process-based models that include additional attributes can be developed to complement data-driven models; combined dynamical-statistical models have been found to outperform individual models [Block and Rajagopalan, 2009].

5.2. Application to a Decision Framework

[39] There has been considerable debate regarding how uncertain results based on AOGCM projections can be interpreted and, ultimately, utilized in decision making. Table 3a summarizes the range of exceedance probabilities for selected quantiles for the historic and future epochs. Three future epochs were chosen because water resource decisions generally correspond to a specific planning horizon, and the 1950–2007 period was included to establish a "standard" reference against which future values could be compared. However, interpreting and utilizing the information from the selected quantiles is more subjective. One option might be to simply look at the 50th quantile results. For the 50th quantile, which can be thought of as the "normal" climate for the given epoch, the probability of exceeding the turbidity threshold rises from 4.2% in 1950–2007 to 5.4% in 2070–2099, an increase of 1.2 percentage points. From this modest result, one might reason that there is no need to factor climate change into a related decision. On the other hand, for the 95th quantile, the exceedance probability in 2070–2099 is 24%, almost double its 1950–2007 value. Put another way, 5% of all the AOGCM projections utilized in this study resulted in an exceedance probability of 24% or more. Is this information compelling enough to factor into a decision? The answer depends on the amount of "risk" that one is willing to take, and the percentiles provide only a qualitative guide.

[40] In an attempt to quantify this in more concrete terms, we provide a hypothetical example of how small probabilistic shifts can result in potentially significant financial impacts. To illustrate, the expected loss L of the event can be found using:

  L = C × P(E)    (11)

where C is the cost of the event occurring and P(E) is the likelihood of the event. We would assume that the cost would depend largely on the price of pumping the groundwater, though other costs may be considered, such as additional operational costs that may be involved with notifications, outreach, and different protocols with using the groundwater system. However, given the complexity of utility systems and finances, it would be difficult to quantify C using exact dollar amounts. As such, we will use relative terms to examine the ratio of L between future and historic epochs:

  Lfuture / Lhistoric = [C × P(E)future] / [C × P(E)historic]    (12)

Furthermore, if it is assumed that C stays constant over time (a conservative assumption, because costs generally increase with time), the ratio is obtained simply by taking the ratio of P(E) between epochs. Now, for the 50th quantile, the modest increase of 1.2 percentage points yields a ratio for the 2070–2099 epoch of 5.4/4.2 ≈ 1.3, a percent increase of roughly 30% (Figure 9). Furthermore, as a utility's risk threshold decreases from the 50th to the 95th quantile, the percent increase rises from about 30% to 85% (Figure 9). For utilities, the costs associated with this increase need to be put in perspective against the total cost of producing and delivering potable water.
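
The arithmetic behind Figure 9 can be reproduced directly from the Table 3a quantiles; the short sketch below computes the percent increase in expected loss for the 2070–2099 epoch at three risk levels (C cancels in the ratio, per equation (12)).

    # Percent increase in expected loss, 2070-2099 relative to 1950-2007,
    # at the 50th, 75th, and 95th quantiles of Table 3a.
    pE_hist <- c(q50 = 4.2, q75 = 6.5, q95 = 13)
    pE_2070 <- c(q50 = 5.4, q75 = 9.9, q95 = 24)
    round(100 * (pE_2070 / pE_hist - 1))   # 29, 52, 85 -> the ~30% and 85%
                                           # quoted above (cf. Figure 9)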

Figure 9. Relative to the 1950–2007 baseline period, the percent increase in expected loss for the three future epochs is shown for the 50th, 75th, and 95th quantiles.

6. Summary and Conclusions

[41] Broad-scale information on climate change and its impacts is just becoming available to municipal drinking water providers [Miller and Yates, 2006; Cromwell et al., 2007; Smith, 2008]. However, examining climate change and its implications in terms of local vulnerabilities and in the context of uncertainty is critical for more effective planning by water utilities. Presently, there is no framework available for utility managers to incorporate climate change projections into their estimates of hydrologic extremes or associated water quality impacts. In this paper, we present an operating framework that begins to address these deficiencies.

[42] This paper provides a method for translating climate change information into secondary products that are useful to local assessments, namely extreme streamflow and water quality characteristics. In the first part of the framework, extreme value statistics that incorporate nonstationarity were used to demonstrate that for this case study location, climate change information indicates that there will be an increase in the variability and magnitude of streamflow extremes. From the second part of the framework, which uses a local logistic regression and total probability estimation technique, the resulting increase in risk of a turbidity exceedance was quantified. The framework is general, with the intention that it can be modified to meet the hydrologic and impact needs of a particular end user. Once the appropriate covariates are identified, the flood frequency analysis could be portable to other locations, with straightforward extensions to secondary impacts such as water quality that depend on extreme hydroclimate. In terms of time scale, though the tool was demonstrated using long-term climate model output, it could be easily altered to incorporate seasonal forecasts or decadal projections.

[43] As concerns over future extreme events and associated impacts become more pressing, utility managers need additional tools to facilitate efficient planning and management. AOGCMs continue to improve, and so the ways in which they can contribute to water management should be exploited. Here we have developed an approach that can utilize large-scale climate change information for estimating local hydrologic extremes and risks of water quality exceedance. Ultimately, the results from this framework can be used in combination with a decision tool that can evaluate various decision strategies under future scenarios.

Acknowledgments

[44] The authors would like to acknowledge AwwaRF project 3132, “Incorporating climate change information in water utility planning: A collaborative, decision analytic approach,” the National Water Research Institute (NWRI) through a NWRI fellowship to the senior author, and the U.S. EPA through a STAR fellowship to the senior author for partial financial support on this research effort. This publication was developed under a STAR research assistance agreement F08C20433 awarded by the U.S. Environmental Protection Agency. It has not been formally reviewed by the EPA. The views expressed in this document are solely those of the authors, and the EPA does not endorse any products or commercial services mentioned in this publication. The second author is thankful to National Center for Atmospheric Research (NCAR) for providing a visitor fellowship during the course of this study. NCAR is sponsored by the National Science Foundation. In addition, they thank the staff of the Portland Water Bureau for providing data and useful discussions, as well as two anonymous reviewers whose comments helped to improve the manuscript. We acknowledge the modeling groups, the Program for Climate Model Diagnosis and Intercomparison and the WCRP's Working Group on Coupled Modelling for their roles in making available the WCRP CMIP3 multimodel data set. Support of this data set is provided by the Office of Science, U.S. Department of Energy.

References

  • Akaike, H. (1974), New look at statistical-model identification, IEEE Trans. Autom. Control, 19, 716–723.
  • Ang, A. H., and W. H. Tang (2007), Probability Concepts in Engineering: Emphasis on Applications in Civil and Environmental Engineering, 2nd ed., 406 pp., Wiley, New York.
  • Apipattanavis, S., B. Rajagopalan, and U. Lall (2010), Local polynomial-based flood frequency estimator for mixed population, J. Hydrol. Eng., 15, 680–691.
  • Block, P., and B. Rajagopalan (2009), Statistical-dynamical approach for streamflow modeling at Malakal, Sudan, on the White Nile River, J. Hydrol. Eng., 14, 185–196.
  • Buishand, T. A. (1989), Statistics of extremes in climatology, Stat. Neerl., 43, 1–30.
  • Cayan, D. R., K. T. Redmond, and L. G. Riddle (1999), ENSO and hydrologic extremes in the western United States, J. Clim., 12, 2881–2893.
  • Christensen, J. H., et al. (2007), Regional climate projections, in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by S. Solomon et al., Cambridge Univ. Press, Cambridge, UK.
  • Coles, S. (2001), An Introduction to Statistical Modeling of Extreme Values, Springer Ser. Stat., 208 pp., Springer, London.
  • Cooley, D. (2009), Extreme value analysis and the study of climate change, Clim. Change, 97, 77–83.
  • Cromwell, J. E., J. B. Smith, and R. S. Raucher (2007), No doubt about climate change and its implications for water suppliers, J. Am. Water Works Assoc., 99, 112–117.
  • Easterling, D. R., G. A. Meehl, C. Parmesan, S. A. Changnon, T. R. Karl, and L. O. Mearns (2000), Climate extremes: Observations, modeling, and impacts, Science, 289, 2068–2074.
  • Efron, B., and R. Tibshirani (1993), An Introduction to the Bootstrap, Monogr. Stat. Appl. Prob., vol. 57, 436 pp., Chapman and Hall, New York.
  • El Adlouni, S., T. B. M. J. Ouarda, X. Zhang, R. Roy, and B. Bobee (2007), Generalized maximum likelihood estimators for the nonstationary generalized extreme value model, Water Resour. Res., 43, W03410, doi:10.1029/2005WR004545.
  • Gutowski, W. J., G. C. Hegerl, G. J. Holland, T. R. Knutson, L. O. Mearns, R. J. Stouffer, P. J. Webster, M. F. Wehner, and F. W. Zwiers (2008), Causes of observed changes in extremes and projections of future changes, in Weather and Climate Extremes in a Changing Climate. Regions of Focus: North America, Hawaii, Caribbean, and U.S. Pacific Islands, edited by T. R. Karl et al., U.S. Clim. Change Sci. Prog., Global Change Res., Washington, D. C.
  • Helsel, D. R., and R. M. Hirsch (1995), Statistical Methods in Water Resources, Stud. Environ. Sci., vol. 49, 529 pp., Elsevier, Amsterdam.
  • Hosking, J. R. M. (1990), L moment analysis and estimation of distributions using linear combinations of order statistics, J. R. Stat. Soc. Ser. B, 52, 105–124.
  • Hosking, J. R. M., and J. R. Wallis (1988), The effect of intersite dependence on regional flood frequency analysis, Water Resour. Res., 24(4), 588–600, doi:10.1029/WR024i004p00588.
  • Jain, S., and U. Lall (2000), Magnitude and timing of annual maximum floods: Trends and large-scale climatic associations for the Blacksmith Fork River, Utah, Water Resour. Res., 36(12), 3641–3651, doi:10.1029/2000WR900183.
  • Jain, S., and U. Lall (2001), Floods in a changing climate: Does the past represent the future?, Water Resour. Res., 37(12), 3193–3205, doi:10.1029/2001WR000495.
  • Katz, R. W. (2002), Techniques for estimating uncertainty in climate change scenarios and impact studies, Clim. Res., 20, 167–185.
  • Katz, R. W., M. B. Parlange, and P. Naveau (2002), Statistics of extremes in hydrology, Adv. Water Resour., 25, 1287–1304.
  • Katz, R. W., G. S. Brush, and M. B. Parlange (2005), Statistics of extremes: Modeling ecological disturbances, Ecology, 86, 1124–1134.
  • Kharin, V. V., and F. W. Zwiers (2005), Estimating extremes in transient climate change simulations, J. Clim., 18, 1156–1173.
  • Kharin, V. V., F. W. Zwiers, X. Zhang, and G. C. Hegerl (2007), Changes in temperature and precipitation extremes in the IPCC ensemble of global coupled model simulations, J. Clim., 20, 1419–1444.
  • Kunkel, K. E., R. A. Pielke, and S. A. Changnon (1999), Temporal fluctuations in weather and climate extremes that cause economic and human health impacts: A review, Bull. Am. Meteorol. Soc., 80, 1077–1098.
  • Lettenmaier, D. P., J. R. Wallis, and E. F. Wood (1987), Effect of regional heterogeneity on flood frequency estimation, Water Resour. Res., 23(2), 313–323, doi:10.1029/WR023i002p00313.
  • Loader, C. (1999), Local Regression and Likelihood, Stat. Comput., 290 pp., Springer, New York.
  • Maraun, D., H. W. Rust, and T. J. Osborn (2009), The annual cycle of heavy precipitation across the United Kingdom: A model based on extreme value statistics, Int. J. Climatol., 29, 1731–1744.
  • Martins, E. S., and J. R. Stedinger (2000), Generalized maximum likelihood generalized extreme-value quantile estimators for hydrologic data, Water Resour. Res., 36(3), 737–744, doi:10.1029/1999WR900330.
  • Maurer, E. P. (2007), Uncertainty in hydrologic impacts of climate change in the Sierra Nevada, California, under two emissions scenarios, Clim. Change, 82, 309–325.
  • Maurer, E. P., L. Brekke, T. Pruitt, and P. B. Duffy (2007), Fine-resolution climate projections enhance regional climate change impact studies, Eos Trans. AGU, 88(47), 504.
  • Meehl, G. A., et al. (2000), An introduction to trends in extreme weather and climate events: Observations, socioeconomic impacts, terrestrial ecological impacts, and model projections, Bull. Am. Meteorol. Soc., 81, 413–416.
  • Meehl, G. A., et al. (2007), Global climate projections, in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by S. Solomon et al., Cambridge Univ. Press, Cambridge, UK.
  • Mestre, O., and S. Hallegatte (2009), Predictors of tropical cyclone numbers and extreme hurricane intensities over the North Atlantic using generalized additive and linear models, J. Clim., 22, 633–648.
  • Miller, K., and D. Yates (2006), Climate Change and Water Resources: A Primer for Municipal Water Providers, Awwa Res. Found., Denver, Colo.
  • Milly, P. C. D., R. T. Wetherald, K. A. Dunne, and T. L. Delworth (2002), Increasing risk of great floods in a changing climate, Nature, 415, 514–517.
  • Milly, P. C. D., J. Betancourt, M. Falkenmark, R. M. Hirsch, Z. W. Kundzewicz, D. P. Lettenmaier, and R. J. Stouffer (2008), Climate change: Stationarity is dead: Whither water management?, Science, 319, 573–574.
  • Morrison, J. E., and J. A. Smith (2002), Stochastic modeling of flood peaks using the generalized extreme value distribution, Water Resour. Res., 38(12), 1305, doi:10.1029/2001WR000502.
  • Regonda, S. K., B. Rajagopalan, M. Clark, and E. Zagona (2006), A multimodel ensemble forecast framework: Application to spring seasonal flows in the Gunnison River Basin, Water Resour. Res., 42, W09404, doi:10.1029/2005WR004653.
  • Renard, B., M. Lang, and P. Bois (2006), Statistical analysis of extreme events in a nonstationary context via a Bayesian framework: Case study with peak-over-threshold data, Stoch. Environ. Res. Risk Assess., 21, 97–112.
  • Sankarasubramanian, A., and U. Lall (2003), Flood quantiles in a changing climate: Seasonal forecasts and causal relations, Water Resour. Res., 39(5), 1134, doi:10.1029/2002WR001593.
  • Seidou, O., T. B. M. J. Ouarda, M. Barbet, P. Bruneau, and B. Bobee (2006), A parametric Bayesian combination of local and regional information in flood frequency analysis, Water Resour. Res., 42, W11408, doi:10.1029/2005WR004397.
  • Smith, J. A. (1989), Regional flood frequency analysis using extreme order statistics of the annual peak record, Water Resour. Res., 25(2), 311–317, doi:10.1029/WR025i002p00311.
  • Smith, J. B. (2008), Climate change is real: How can utilities cope with potential risks?, J. Am. Water Works Assoc., 34, 12–17.
  • Stedinger, J. R., and L. H. Lu (1995), Appraisal of regional and index flood quantile estimators, Stoch. Hydrol. Hydraul., 9, 49–75.
  • Towler, E., B. Rajagopalan, R. S. Summers, and D. Yates (2010), An approach for probabilistic forecasting of seasonal turbidity threshold exceedance, Water Resour. Res., 46, W06511, doi:10.1029/2009WR007834.
  • U.S. Environmental Protection Agency (1989), Final surface water treatment rule, Fed. Regist., 54(124), 27486.
  • Wilbanks, T. J., P. Romero Lankao, M. Bao, F. Berkhout, S. Cairncross, J.-P. Ceron, M. Kapshe, R. Muir-Wood, and R. Zapata-Marti (2007), Industry, settlement and society, in Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by M. L. Parry et al., pp. 357–390, Cambridge Univ. Press, Cambridge, UK.
  • Wood, A. W., L. R. Leung, V. Sridhar, and D. P. Lettenmaier (2004), Hydrologic implications of dynamical and statistical approaches to downscaling climate model outputs, Clim. Change, 62, 189–216.

Supporting Information

  Filename                      Format               Size  Description
  wrcr12556-sup-0001-t01.txt    plain text document  1K    Tab-delimited Table 1.
  wrcr12556-sup-0002-t02.txt    plain text document  0K    Tab-delimited Table 2.
  wrcr12556-sup-0003-t03.txt    plain text document  1K    Tab-delimited Table 3.
