 There remains uncertainty in the projected climate change over the 21st century, in part because of the range of responses to rising greenhouse gas concentrations in current global climate models (GCMs). The representation of potential changes in the form of a probability density function (PDF) is increasingly sought for applications. This article presents a method of estimating PDFs for projections based on the “pattern scaling” technique, which separates the uncertainty in the global mean warming from that in the standardized regional change. A mathematical framework for the problem is developed, which includes a joint probability distribution for the product of these two factors. Several simple approaches are considered for representing the factors by PDFs using GCM results, allowing model weighting. The four-parameter beta distribution is found to provide a smooth PDF that can match the mean and range of GCM results, allowing skewness when appropriate. A beta representation of the range in global warming consistent with the Intergovernmental Panel on Climate Change Fourth Assessment Report is presented. The method is applied to changes in Australian temperature and precipitation, under the A1B scenario of concentrations, using results from 23 GCMs in the CMIP3 database. Statistical results, including percentiles and threshold exceedances, are compared for the case of southern Australian temperature change in summer. For the precipitation example, central Australian winter rainfall, the usual linear scaling assumption produces a net change PDF that extends to unphysically large decreases. This is avoided by assuming an exponential relationship between percentage decreases in rainfall and warming.
 Projections of regional climate change forced by rising greenhouse gas concentrations made by many authors and in the Intergovernmental Panel on Climate Change Assessment Reports, including the chapter on Global Climate Projections in the Working Group I Fourth Assessment Report (AR4) by Meehl et al. , have been largely based on the simulations of comprehensive global atmosphere/ocean climate models (GCMs). The theory behind such projections is still being developed, however (see Collins  for an overview). It is usually assumed, although not always stated, that the ensemble of available GCM results, for a specific period of the 21st century and a specific forcing scenario, gives an indication of the range of possible changes of the real world for that case. While it is possible that the models may have a systematic bias, it is often assumed that this is small. The best estimate of the forced change is then essentially the average or multimodel mean, as presented by Meehl et al.  for the ensemble of models submitted to the CMIP3 database (see http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php). Variations in this estimate among studies can relate to whether some climate models are given more weight than others based on their assessed reliability [e.g., Giorgi and Mearns, 2003], and how that assessment is made and applied.
 With regard to simplifying the presentation of changes for several time slices or periods and for various forcing scenarios, it has long been recognized [e.g., Santer et al., 1990] that for a particular model, the spatial pattern of change for important variables like surface air temperature (T) and precipitation (P) is often similar. Indeed, at many places changes “standardized” by dividing or scaling by the global mean warming at the time are rather constant, aside from the effects of unforced or internal variability. The “pattern scaling” approach can thus be used to provide quite useful estimates of regional or local changes, based on an assessment of a single case, as shown by Mitchell  for simulations with rising CO2.
 The spatial pattern of forced change can vary in some regions due to changing aerosol distributions [e.g., Harvey, 2004]. Watterson  and Harris et al.  consider further differences in patterns, between those from transient climate simulations and those from simulations of warming at near-equilibrium, which are associated with oceanic heat uptake and heat transport changes. Nevertheless, Meehl et al.  show that linear pattern scaling is generally useful for temperature and also precipitation, with respect to the CMIP3 multimodel means for the Special Report on Emissions Scenarios forcing scenarios, particularly later in the 21st century. Note that the similarity of the global patterns relative to their spatial variation is a little less for precipitation (see Table S10.2 of the supplementary material available online at http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter10-supp-material.pdf) than for temperature.
 A further advantage of pattern scaling is that the problem of projecting the global mean warming can be separated from that of estimating the standardized regional change. Dessai et al. , Luo et al. , and others used simpler models to estimate global warming over a wider range of possibilities than is available from full GCMs. Assuming that these warming values can be combined arbitrarily with any scaled change, the product of any choice of the two factors provides a possible regional net (or total) change. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) Climate Impact Group has used this approach for a number of years in providing estimates of the range of change over Australia, as reviewed by Whetton et al. . They noted, however, a limitation of this linear method for a nonnegative variable like rainfall, when larger warmings are combined with strong scaled decreases. Mitchell  also discussed related nonlinearities.
 Given the spread of model results for many cases, it is clear that there is considerable uncertainty in such projections, even for a particular forcing scenario. Some studies have attempted to quantify this uncertainty and to express it in probabilistic terms [e.g., Giorgi and Mearns, 2003; Harris et al., 2006], such as in the form of “probability density functions” (PDFs). In the pattern scaling approach, the uncertainty in the net regional change can be related to both uncertainty in the global mean warming (or sensitivity of the climate system) and uncertainty in the local response factor. Dessai et al.  and others have estimated PDFs for the net change using the Monte Carlo method of random sampling, from the ranges of both warming and scaled change. The basic rationale for this new article is to present a simple mathematical framework by which the scaling approach can be more precisely applied and the resulting PDFs accurately calculated (conditional, of course, on the assumptions made).
 In many of the above studies a key assumption is still that the range of uncertainty in the real-world value is directly given by the range of model results (allowing for weighting, perhaps). Typically, the probability distribution that is derived would apply to the case of randomly choosing one of the models. Hence this assumption is effectively considering the real world to be a single sample from the models, even if these are from a limited “ensemble of opportunity” such as CMIP3. Naturally, such sample-dependent PDFs can only be approximations to the PDF that might represent the full “space of all possible models” [Collins, 2007]. (In that context, “PDF” will be used loosely here.) The uncertainty in the real-world warming so obtained would not depend directly on the number of models in the ensemble. Recently, other researchers have used Bayesian methods based on the assumption that it is the model results that are a sample from a PDF and, in the case of Tebaldi et al.  and Furrer et al.  at least, it is assumed that this is unbiased: The mean of the PDF is the unknown real-world change. Typically in statistical inference under such assumptions, the uncertainty in the estimate of this value tends to diminish as the number of independent samples increases. It is unclear to what extent GCMs can be regarded as independent (especially when important components are shared), but in any case the PDFs from these Bayesian analyses do tend to be relatively narrow.
 Whether the range of the PDF for a real-world change should diminish as more GCMs are assessed is a key issue, for both global mean change and regional change, but it is only just being explored in the literature [e.g., Lopez et al., 2006]. In the case of the approach that combines scaled local change with global mean change, it is conceivable that some reduction in range should apply to both factors. This issue will not be pursued here, except through an example of scaled change with a deliberately narrowed PDF. Neither will we specifically include additional unquantifiable uncertainty, as described by Jones , such as due to processes that are poorly represented in GCMs or from unknown biases. While the intention is to represent the real-world change here, in practice, the factor PDFs used in the examples are smoothed distributions of GCM results.
 The following section presents the mathematical framework that attempts to advance the approach of Dessai et al.  for producing PDFs. The linear scaling assumption is presented, and the probabilistic formulation described. Two related methods for calculation of the PDF for net change are given.
 In section 3, several representations of global warming under the A1B scenario are developed, including one using the beta distribution, which allows skewness. In section 4 the approach is illustrated by its application to the case of summer temperature change in southern Australia, as simulated by the CMIP3 models. These include the new CSIRO model Mark 3.5, recently submitted to CMIP3. Plausible weights of the models are used, calculated as described in Appendix A. Net change PDFs are calculated. The results for this regional case can be compared with those from Dessai et al. , as well as with PDFs generated from the “raw” model data.
 In section 5 the focus is on winter precipitation change in central Australia, which is considered using both absolute and relative changes from the GCMs. Problems with the linear assumption emerge, with the larger decreases exceeding the base climate rainfall, which is clearly unphysical. An advance made here is the use of an exponential assumption, which avoids this problem. Section 6 summarizes the methods used and makes several recommendations.
 It should be noted that the original motivation for the present study was to develop a method that could be readily applied to the provision of PDFs for change in a large range of quantities, for all seasons, several periods and scenarios, and at all locations in Australia. Some aspects of the application were extensions of previous CSIRO practice, as in the recent analysis by Suppiah et al. . The dependence of the results on some choices is discussed here. Further results and their context can be seen in the publication “Climate Change in Australia” [Commonwealth Scientific and Industrial Research Organisation and Bureau of Meteorology, 2007] (hereinafter referred to as CSIRO 2007). Please see also the acknowledgments.
2. Pattern Scaling Method
2.1. Assumed Relationships
 The pattern scaling method relies on approximate relationships that can be expressed in two parts. The first is that the forced change in a climate variable v (e.g., precipitation for the winter season at a particular location) is a function of a scaling quantity, denoted here y. The past history of v and y is unimportant. As in other studies, the scaling quantity here is the annual global-mean temperature, and it is assumed that this also does not include interannual, unforced variability. Further, both quantities are anomalies relative to a base climate.
 For small changes in the variables, one might expect that the relationship between the anomalies is linear, and this is the usual second approximation [e.g., Mitchell et al., 1999; Mitchell, 2003]. The method then estimates v by using the simple linear relationship

v = x y,    (1)

where x is the standardized (or normalized) variable, which is taken to be constant. For convenience, x can also be termed the change per degree (of global warming), and v the net change.
 In writing equation (1), we assume that x can be determined from model simulations such that any error in v for the quantity and period of interest is small enough to be ignored. We also assume that the relationships and resulting v apply usefully to the real-world climate.
2.2. Probabilistic Formulation
 Under these assumptions, the net change, v, is uncertain because of uncertainty in each of the components on the right side of equation (1). In the language of mathematical probability theory [e.g., Hogg and Craig, 1970], we represent the uncertainty in each variable using a probability density function (PDF), which is nonzero (and positive) over the range of values considered possible. Since these PDFs are intended to represent the real world, continuous forms are appropriate.
 In performing numerical calculations, a discretization of a PDF over a finite range is necessary, but this can be done with no significant inaccuracy. For example, if the PDF for the standardized variable x is the function f(x), and we can approximate it to be zero outside the range xs to xe (with the subscripts meaning start and end), then we have a cumulative distribution function (CDF) denoted F(x), with

F(x) = \int_{x_s}^{x} f(x') \, dx'.    (2)
We ensure that F(xe) = 1 in the discretized integration, over a suitable number of steps or bins of x values. Several examples of PDFs in this form, to be discussed later, are given in Figure 1 (for which the dependent variable is the global warming y) and Figure 2 (for x). Examples of CDFs can be seen later.
 The expected value E of some other function of x, X, is given by

E(X) = \int_{x_s}^{x_e} X(x) f(x) \, dx.    (3)
 The mean (μ) of the distribution is E(x), and the standard deviation (SD, or σ) is the square root of E((x − μ)²). In characterizing a PDF it is often useful to give the x value at which F = p/100, the pth percentile or Pp, with P50 being the median. Another statistic to consider is the probability that x exceeds a threshold, xt, given by 1 − F(xt).
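These discretized statistics can be sketched numerically as follows. This is a minimal Python illustration; the truncated normal density and its parameters (mean 1.14, SD 0.19, range 0.55 to 1.74) are stand-ins for a fitted PDF, not values prescribed by the method.

```python
import numpy as np

# Discretize a PDF f(x) on a finite range [xs, xe], as in section 2.2.
# The density is an assumed truncated normal, renormalized so that F(xe) = 1.
xs, xe, nsteps = 0.55, 1.74, 1000
x = np.linspace(xs, xe, nsteps)
dx = x[1] - x[0]

f = np.exp(-0.5 * ((x - 1.14) / 0.19) ** 2)
f /= f.sum() * dx                     # enforce F(xe) = 1

F = np.cumsum(f) * dx                 # CDF, the discretized equation (2)

mean = np.sum(x * f) * dx             # E(x), via equation (3)
sd = np.sqrt(np.sum((x - mean) ** 2 * f) * dx)

p50 = x[np.searchsorted(F, 0.5)]      # median: the x value at which F = 0.5
x_t = 1.4                             # an arbitrary threshold
p_exceed = 1.0 - F[np.searchsorted(x, x_t)]   # probability that x exceeds x_t
```

The same recipe applies to any of the factor PDFs discussed below, since only the sampled density values enter the calculation.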
2.3. Calculation of Probability Density Function (PDF) for Net Change
 Following Dessai et al. , we allow x and y to vary arbitrarily (although later tests suggest some limitation to this). In our context, the variables are assumed to be stochastically independent. This means that the joint probability distribution over the two-dimensional domain of the pair (x, y) is given simply by the product f(x) g(y), where g is the PDF for y. An example of such a joint distribution is depicted in Figure 3a, with the domain extended for later purposes. The notation allows the axes to represent their customary variables.
 Evaluation of some statistic of v can be performed by integrating, over the domain, the appropriate function of xy multiplied by fg, using the two-dimensional form of equation (3). The example in Figure 3b shows the value of v, itself, over the extended domain. Each contour is a hyperbola. It is worth noting that the mean of v is equal to the product of the means of x and y. However, this match does not hold for the various percentile statistics, as will be shown in section 4.3.
 It would be very useful, of course, to have the PDF h or CDF H for v, as well. Luo et al. , Dessai et al. , and others have used Monte Carlo software to estimate these distributions of the net change. Having constructed a mathematical framework for the problem, however, we can evaluate these distributions numerically to the precision required.
 To evaluate H(v1), at a particular net change v1, we need to simply integrate fg over the part of the 2D domain where v < v1. For instance, for v1 = 4 K in the example in Figure 3, the integration is over all but the upper right portion in 3b. Given discretization of x and y, the domain can be treated as a large number of points, each of which has a value v and a probability fg. After choosing discrete bins of v, calculation of h involves summing the probabilities of the points contained within each bin, then dividing by the bin spacing. H follows from equation (2).
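The first, binning, method can be sketched as follows. The factor PDFs are illustrative truncated normals, and the grid resolution and bin edges are arbitrary choices, not those of the study.

```python
import numpy as np

# First method of section 2.3: treat the discretized (x, y) domain as points,
# each with value v = x*y and probability f(x) g(y) dx dy, then histogram v.
x = np.linspace(0.55, 1.74, 400)
y = np.linspace(0.97, 4.62, 400)
dx, dy = x[1] - x[0], y[1] - y[0]

f = np.exp(-0.5 * ((x - 1.14) / 0.19) ** 2)
f /= f.sum() * dx
g = np.exp(-0.5 * ((y - 2.8) / 0.59) ** 2)
g /= g.sum() * dy

X, Y = np.meshgrid(x, y)                 # the 2D (x, y) domain
V = X * Y                                # net change v = x*y at each point
P = np.outer(g, f) * dx * dy             # probability f(x) g(y) dx dy of each point

edges = np.linspace(0.0, 8.5, 201)       # discrete bins of v
h, edges = np.histogram(V.ravel(), bins=edges, weights=P.ravel())
h /= np.diff(edges)                      # divide by the bin spacing to get the PDF
```

Summing the weighted probabilities within each bin and dividing by the bin spacing is exactly the procedure described above; the CDF then follows by cumulative summation, as in equation (2).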
 We can make use of the separability of the joint distribution fg in an alternative approach which calculates H directly. For each v1 and each y, the points where v < v1 fall along a line segment with one end at v1/y, assuming this is larger than xs. In the usual case that y is positive, the other end is xs, so the integral along the line segment yields F(v1/y) g(y) (which is simply g(y) if v1/y > xe). The integration can then be written as

H(v_1) = \int_{y_s}^{y_e} F(v_1/y) g(y) \, dy.    (4)
Negative values of v1 and x can be accommodated in this form, and would lie in the top left quadrant of Figure 3b. For a case where the scaling variable is negative, the CDF factor in equation (4) becomes 1 − F(v1/y). The domain in Figure 3b has been extended to show v values for these in the lower quadrants. In practice, this integration needs to be discretized, by stepping through the y values for each of the span of v values. Converting the resulting CDF to a PDF requires differentiation of equation (2).
 The relative efficiency of the two methods will depend on the choice of discretization for the variables and numerical techniques. This second method was used here, where the number of y steps was 100 and the number in x and v was 1000, and all steps are assumed equally spaced in each case. Examples of the calculated CDFs are given in Figure 4a. The resulting discrete PDFs h can be noisy. This has a negligible effect on our other statistics, but in any case a five-point smoother produces presentable results, as seen from those in Figure 4b.
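The second, direct, method can be sketched as follows, assuming y > 0 throughout. The 100 y steps and 1000 x and v steps match the text, but the factor PDFs are again illustrative truncated normals rather than the fitted ones.

```python
import numpy as np

# Second method of section 2.3: for each net change v1, integrate F(v1/y) g(y)
# over y, as in equation (4), then difference the CDF to recover the PDF h.
nx, ny, nv = 1000, 100, 1000

x = np.linspace(0.55, 1.74, nx)
dx = x[1] - x[0]
f = np.exp(-0.5 * ((x - 1.14) / 0.19) ** 2)
f /= f.sum() * dx
F = np.cumsum(f) * dx                    # CDF of x

y = np.linspace(0.97, 4.62, ny)
dy = y[1] - y[0]
g = np.exp(-0.5 * ((y - 2.8) / 0.59) ** 2)
g /= g.sum() * dy

v = np.linspace(0.0, 8.5, nv)
ratio = v[:, None] / y[None, :]          # v1 / y for every (v1, y) pair
# F(v1/y) is 0 below xs and 1 above xe; interpolate between the discrete x values.
Fvals = np.interp(ratio, x, F, left=0.0, right=1.0)
H = (Fvals * g).sum(axis=1) * dy         # equation (4), discretized over y
h = np.gradient(H, v)                    # net-change PDF from the CDF
```

The resulting discrete h can be noisy, as noted above; a five-point running mean applied to h would smooth it for presentation without affecting the CDF-based statistics.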
 Before applying this approach, it should be emphasized that the intention is to determine PDFs for forced local change, which can ideally be determined for a specific year under a forcing scenario. In reality, as discussed by Giorgi , climate from a single year, or even averaged over a decade, can be strongly affected by internal variability. A similar joint distribution approach can be used to illustrate how the net change PDF would be broadened by such variability. CSIRO 2007 provides an example of this.
 We use the linear assumption (equation (1)) in providing PDFs for local temperature change, as described in section 4. For precipitation, a modification to the approach appears useful, as discussed in section 5. We consider first the representation of the global mean warming as the scaling variable.
3. Global Warming PDFs
 For their application of pattern scaling, Dessai et al.  used a large number of plausible global warming values, simulated using a simple energy balance model. These can be represented as a PDF. Ultimately, CSIRO 2007 relies on ranges of warming from AR4, but no suitable PDF was recommended. It is useful, for this study, to use a similar approach to generate PDFs for warming and scaled local change, both based on CMIP3 simulations. The application to the global warming case is presented first, but using the variable x (and f) rather than the y of section 2, or a more usual name, ΔTg.
 For the A1B scenario, Meehl et al.  provide a best estimate of the real-world global warming over the 21st century (strictly, between the period 1980–1999 and the period 2090–2099) of 2.8 K. This was calculated directly from the multimodel mean of results from 21 of the GCMs listed in Table 1. (There was no A1B run for model bcr, using the code given, while runs from 3_5 were not yet available). On the basis of a variety of studies, the authors assessed the “likely” range for this warming to be 1.7 K to 4.4 K, values that are asymmetric relative to the mean. By the AR4 definitions, this means that the chance of the warming being within this range is greater than 0.66. However, no probability was actually mentioned or justified by Meehl et al. .
Table 1. Models Considered in the Study Using the CMIP3 I.D., a Three-Letter Code Which Provides the Order, a Simple Skill Score Out of 1000 Used for Weighting the Models, and the Inferred Global Mean Warming From 2000 to 2100 Under A1B (Not Available for bcr)a
The calculation of the score, which measures model quality for the Australian climate, and its uncertainties are described in Appendix A.
[Table 1 rows not fully recovered from the source; only the contributing-institution column survives: Geophys. Fluid. Dyn. Lab. (USA, two models), CSIRO Marine and Atm. Res., NASA Goddard Institute (USA, three models), Bjerknes Centre (Norway), UK Meteorological Office (two models), Inst. Numer. Math. (Russia), CSIRO Atmospheric Research, and MPI, Hamburg (Germany). The model I.D.s, codes, skill scores, and warming values are not recoverable here.]
 Global mean temperature from a single simulation of a GCM includes unforced variability, which affects the differences between two period averages. To reduce the uncertainty in the forced response, we can use the trend calculated from annual values over the whole 21st century using linear regression. The near-linearity of the multimodel mean warming series for A1B shown by Meehl et al.  suggests that much of the variation from the resulting trend line is due to internal variability. This can be reduced for models with more than one simulation by using the mean series. Multiplying the regression coefficient for a single GCM by the 100 y period produces a representation of the model's response over the century, nominally 2000 to 2100. The results for each available GCM are given in Table 1. The mean of these 22 values is 2.80 K, matching the AR4 best estimate (for a slightly longer period; the inclusion of 3_5 here helps this match). From standard regression theory [e.g., Yamane, 1967, p. 408], assuming the deviations in variable Y from the regression line Yc are independent and normally distributed, the trend coefficient has a small statistical uncertainty. This can be represented by a normal distribution n(μi, σi²), where the mean μi is the calculated value for model i and

\sigma_i^2 = \frac{\sum_j (Y_j - Y_{c,j})^2}{(n - 2) \sum_j (X_j - \bar{X})^2},    (5)
where n is the number of samples (100), X is the independent variable (year), and overbar denotes the mean. The analysis package Grace (available from http://plasma-gate.weizmann.ac.il/Grace) gives this “standard error” (σ) for the trend.
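The trend and its standard error can be computed directly, without an analysis package, as the following sketch shows. The annual series here is synthetic (an assumed trend of 0.028 K/y, i.e., 2.8 K per century, plus normal noise of SD 0.1 K standing in for internal variability), not an actual GCM run.

```python
import numpy as np

# Fit a linear trend to 100 annual values and compute the "standard error" of
# the trend coefficient from the residual scatter, as in equation (5).
rng = np.random.default_rng(0)
year = np.arange(2001, 2101)                 # n = 100 annual samples
Tg = 0.028 * (year - 2000) + rng.normal(0.0, 0.1, year.size)

n = year.size
Xbar = year.mean()
b = np.sum((year - Xbar) * (Tg - Tg.mean())) / np.sum((year - Xbar) ** 2)
a = Tg.mean() - b * Xbar
resid = Tg - (a + b * year)                  # deviations from the trend line

# Equation (5): the standard error of the trend coefficient.
sigma_b = np.sqrt(np.sum(resid ** 2) / ((n - 2) * np.sum((year - Xbar) ** 2)))

warming_2000_2100 = 100.0 * b                # the century warming, as in Table 1
```

Note how small sigma_b is relative to b for a century of annual values, consistent with the text's description of the trend uncertainty as small.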
 A PDF for the ensemble of model results can be constructed by simply summing the individual normal distributions:

f_S(x) = \sum_{i=1}^{N} w_i \, n(x; \mu_i, \sigma_i^2).    (6)
Here the subscript S is for Sum, N is the number of models, and wi the weighting assigned to them (uniformly N−1, in this case). The result is depicted in Figure 1. The approach of summing individual CDFs is used by Harris et al. [2006, equation (13)]. The Sum PDF could be described as one that should apply if one were to randomly select one of the models, perform a new simulation, then calculate the warming trend from this.
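The Sum construction described above can be sketched as a weighted mixture of per-model normals. The five model trends and uncertainties below are hypothetical, chosen only to illustrate the shape of such a mixture.

```python
import numpy as np

# Sum PDF: a weighted mixture of the per-model normal distributions n(mu_i, sigma_i^2).
mu = np.array([2.1, 2.4, 2.8, 3.1, 3.5])     # hypothetical model trends (K)
sigma = np.full(mu.size, 0.05)               # their regression uncertainties (K)
w = np.full(mu.size, 1.0 / mu.size)          # uniform weights w_i = 1/N

x = np.linspace(1.0, 4.6, 1000)
dx = x[1] - x[0]
f_sum = np.zeros_like(x)
for wi, m, s in zip(w, mu, sigma):
    f_sum += wi * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
```

Because each component is narrow, the mixture shows a peak for each model value, with gaps between them when the trends are well separated, which is the motivation for the smooth fits that follow.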
 If results from a new model of similar type became available, one might expect the trend to lie within or near the range of the existing results, but not, surely, avoiding any gaps that appear in Sum. A smooth PDF for this new result would be reasonable, particularly since it is intended to represent the real-world value. Two simple methods for constructing or fitting such a PDF are illustrated in Figure 1. One is a normal distribution with the same mean and SD as the Sum case. These statistics can be determined analytically from

\mu_S = \sum_i w_i \mu_i,    (7)

\sigma_S^2 = \sum_i w_i (\sigma_i^2 + \mu_i^2) - \mu_S^2.    (8)
For the 22 warming trends, σS = 0.59 K. The curves have been discretized on 1000 x values, evenly spaced between end points given by the 0.1 and 99.9 percentiles of this normal distribution, 0.97 K and 4.62 K.
 The third curve in Figure 1 is the beta distribution

f(x) = \frac{(x - a)^{p-1} (b - x)^{q-1}}{B(p, q) (b - a)^{p+q-1}},  a \le x \le b,    (9)

(and zero elsewhere), with the beta function B given by

B(p, q) = \frac{\Gamma(p) \Gamma(q)}{\Gamma(p + q)},    (10)

and Γ the usual gamma function.
 The beta distribution was recently used in atmospheric modeling by Tompkins  and it has a role in statistical theory [e.g., Hogg and Craig, 1970, p. 134]. Here we simply specify the parameters by choosing the end points a and b and then solving for p and q, such that the distribution's moments

\mu = a + \frac{p}{p + q}(b - a),  \sigma^2 = \frac{p q (b - a)^2}{(p + q)^2 (p + q + 1)},    (11)
match those of Sum. For a curve that spans the model range, we let the end points be the extremes of the model values, extended by the individual uncertainty 1 × σi. (This modest factor of σi was chosen to avoid extremes being greatly extended by chance, in some cases.) If necessary we extend this further by incrementing a and b along the discretized values to ensure that the range spans the 1 and 99 percentiles of the Sum fit. To avoid a distribution with tails that approach zero sharply, we also ensure that p and q are both at least 2.5, by similar increments.
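The moment-matching step has a closed-form solution once the end points are fixed, as the following sketch shows. The subsequent adjustments described above (incrementing a and b, and enforcing the p, q ≥ 2.5 floor) are omitted.

```python
# Moment matching for the four-parameter beta distribution: given end points
# a and b and a target mean and SD, solve for the shape parameters p and q.
def fit_beta(a, b, mean, sd):
    m = (mean - a) / (b - a)           # mean of the standard beta on [0, 1]
    s2 = (sd / (b - a)) ** 2           # its variance
    nu = m * (1.0 - m) / s2 - 1.0      # this equals p + q
    return m * nu, (1.0 - m) * nu      # p, q

# The global warming case: a = 1.44 K, b = 4.50 K, Sum mean 2.8 K, SD 0.59 K.
p, q = fit_beta(1.44, 4.50, 2.80, 0.59)
```

With the stated end points and Sum statistics, this recovers p ≈ 2.5 and q ≈ 3.1, close to the quoted Beta parameters for the warming PDF.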
 The “Beta” distribution (Figure 1) for global warming has a = 1.44 K, b = 4.50 K, p = 2.50, and q = 3.12. With p ≠ q, this form is asymmetric or skewed, relative to the mean. Indeed, the end points just span the range for this case stated in AR4. None of the three curves, Sum, Normal and Beta, exceed this range as much as would be consistent with the “likely” description, however.
 The remaining curve plotted in Figure 1 is a broader beta distribution “Beta-wide,” with parameters a = 0.67 K, b = 10.78 K, p = 2.00, and q = 7.51. This was constructed to have a net probability between 1.7 K and 4.4 K of 0.67, the smallest probability that would be consistent with “likely,” while retaining the mean of 2.8 K. The relaxed condition on p aided this fitting. The increased skewness, compared to Beta, coincides with a smaller median (2.58 K, compared to 2.77 K).
 In the examples that follow, the focus will be on the Beta PDF as a simple representation of global mean warming over the century under A1B. While idealized, it is consistent with the mean, SD and extremes of warming from the CMIP3 GCMs. With respect to the mean and skewness, at least, it is also consistent with the assessment of change (from 1980–1999 to 2090–2099) in the real world from AR4. The other PDFs will be used to test the sensitivity of the results to the form and broadness of the warming PDF. Aside from Sum, the PDFs constructed here have a unimodal shape. If there is a case for which this was not desirable (perhaps for a local scaled change), other forms could be considered.
 The continuous nature of the PDFs, including the Sum case that reflects individual model values, is convenient for their use in the mathematical framework of section 2. It is also worth noting that Santer et al.  used a similar normal distribution applied to net changes to form a PDF. They ignored the uncertainties σi, but setting these to zero in equation (8) will often make little difference to σS.
CSIRO 2007 used the Beta form as a standard shape for all periods and scenarios, simply scaling a and b by the ratio of the best estimate of warming for each case from AR4, to 2.8 K. While the Beta form is convenient, there is no reason to believe that the underlying distribution of possible GCM predictions for future climate change is indeed of this form. The form is certainly not reliable with regard to the largest (or smallest) possible warming into the future, for which there is no specific bound [Meehl et al., 2007]. It seems unlikely that more extreme ΔTg should be used in our linear pattern scaling approach, but it is clear that the bounded extremes of net change here are a product of the method and choices. Furthermore, they depend on the outlying values of the CMIP3 results.
4. Application to Temperature
4.1. Determination of Standardized Data
 For our first set of examples, the mathematical framework is applied to temperature changes, ΔT, over Australia simulated by the CMIP3 GCMs. Following Whetton et al.  and others, the basic standardized data are determined by regression of values from the A1B simulations (or the A2 simulation in the case of model bcr). For each model grid point, seasonal means of temperature were calculated for each year of the 21st century. Consistent with the linear form of equation (1), these were regressed against the yearly Tg series from the same model. This produces both a “trend” coefficient of best fit and an estimated uncertainty (from equation (5)). Suppiah et al.  used coefficients calculated using this approach. Note that with respect to equation (1), it is assumed that the relevant base climate values from which anomalies are taken fall on the linear trend line (with respect to Tg).
 The use of regression of data from the whole century, as opposed to simple differencing of time slice means, should minimize uncertainty generated by internal variability, although it will obscure any variation in the pattern of forced change. Mitchell et al.  suggest an advantage in filtering out periodic variability, such as ENSO, by using averages over multiple years in the regression. For the standard statistical model described in section 3, there should be little difference from such averaging in either the coefficient or its uncertainty. An example for precipitation where this model is less valid is noted in section 5.
 As for CSIRO 2007, the fields of coefficients and uncertainties (restricted to model land points in this case) were interpolated to points on a common grid with a spacing of 1° in both latitude and longitude. For comparison with other studies, we consider results for the case of temperatures in southern summer (December–February (DJF)) and average the coefficients over the southern Australia SAU region defined as Australian land south of 28° S. The 23 values of μi, or xi, are depicted in Figure 2. These range from 0.81 to 1.45 (with unit K per K), and have a median of 1.17. The uncertainties σi are typically only 0.1 (and might be even smaller if the spatial averaging was done on yearly values, before the regression).
4.2. PDFs of Change per Degree
 For the purpose of constructing local change PDFs, weights have been assigned to the individual GCMs, based on their skill in simulating the overall Australian climate, in the spirit of Whetton et al. . The weights are proportional to the skill scores given in Table 1. These vary by a factor of barely 2, so they do not have a major impact on the results presented. Their calculation is described in Appendix A, along with discussion of alternatives.
 With these weights, and with the SAU regression coefficients, we can now use the procedure of section 3 to form the Sum, Normal, and Beta PDFs for the standardized temperature x, which are depicted in Figure 2. These have, by construction, the same mean 1.14 and SD 0.19. The end points of x are set at xs = 0.55 and xe = 1.74. In this case, these three forms are quite similar. The Normal fit produces longer tails, although each represents little probability. The Beta form can again be described as a PDF that matches the mean and SD of the GCM values, and spans their range.
 The fourth PDF depicted in Figure 2 is a simple “Uniform” PDF between the extreme xi values from the models. The previous PDFs extend somewhat beyond these. A fifth PDF “Narrow” is inspired by the statistical theory of Tebaldi et al. [2005, equations (7) and (8)]. It is constructed as Normal, but with the SD reduced by the factor Neff^(−1/2), where Neff is the inverse of the sum of the squared weights. (This “effective” number of models, which is here 22.5, is reduced a little from N = 23 because of uneven weighting.) The Narrow PDF may approximate that for the real-world value, if one regarded (perhaps unreasonably) the model results to be independent samples from a normal distribution centered on this unknown value. As discussed in the introduction, this form is presented not as a recommended alternative, but simply to provide an indication that PDFs could be narrower if different assumptions are made, and to see the effects of this.
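The width reduction of the Narrow case can be sketched as follows. The weights below are hypothetical, chosen only to give uneven weighting; they are not the Table 1 scores.

```python
# Effective number of models for the Narrow PDF: Neff is the inverse of the
# sum of the squared (normalized) weights, and the SD is reduced by Neff^-0.5.
weights = [1.8] * 3 + [1.0] * 20            # 23 models, a few weighted more
total = sum(weights)
w = [wi / total for wi in weights]          # normalize so the weights sum to 1

n_eff = 1.0 / sum(wi ** 2 for wi in w)      # equals N = 23 only for uniform weights

sd_sum = 0.19                               # SD of the x PDF from section 4.2
sd_narrow = sd_sum / n_eff ** 0.5           # the reduced SD of the Narrow PDF
```

With uniform weights n_eff would equal N exactly; any unevenness reduces it, which is why the study's value of 22.5 falls slightly below 23.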
4.3. PDFs of Net Change
 The joint distribution formed using the Beta distribution for global warming (as y) and the Sum distribution for summer, SAU temperature change per degree (as x) is shown in Figure 3a (the range extensions should be ignored). The second method described in section 2.3 has been used in calculating the CDF for this case, and for each of the other four PDFs in Figure 2. The resulting five curves are shown in Figure 4a. The PDFs calculated from these are in Figure 4b.
 Consider first the Narrow case, which has only a small range of x. The PDF has a similar shape to the global warming PDF, with slightly extended tails. The range of the product, v, or ΔT here, is roughly that of y but amplified by the x mean. The Uniform case has a greater range, but still a smooth shape, despite the corners in the x PDF. The other three results are a little broader again, due to the tails in x, but are barely distinguishable from each other.
 The means and SDs of ΔT, as calculated using each v PDF, are given in Table 2 (cases 1 to 5). The means of all but the Uniform case match the product of the multimodel component means, as expected. The SD of the Narrow case is smaller. This is reflected in the percentile (P10, P50, P90) and threshold statistics, which are determined directly from the CDFs (Figure 4a). The similarity of the CDFs and the percentiles for cases 1 to 4 are partly the result of the relatively broad global warming range, which limits the effect of the small differences between the x PDFs. Similarly, given the relatively broad Beta x PDF, the statistics do not depend much on the details of the global warming PDF (for the same mean and SD). This is shown by comparing case 3 with cases 6 and 7, in which the Sum and Normal ΔTg PDFs of Figure 2 are used.
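The percentile and threshold statistics of the product can also be approximated by Monte Carlo sampling of the two independent factors, rather than the discrete integration of section 2.3. The sketch below assumes independence and uses an illustrative beta for the global warming factor (mean 2.8 K, but the range and shape parameters are placeholders, not the AR4-consistent fit of Figure 1).

```python
import random

random.seed(1)

def beta4_sample(alpha, beta, a, b):
    """Draw from a four-parameter beta PDF on [a, b]."""
    return a + (b - a) * random.betavariate(alpha, beta)

# x: scaled SAU summer temperature change per degree (section 4.2 Beta fit).
# y: global mean warming; beta(2, 2) on [1.0, 4.6] K has mean 2.8 K but is
#    only a placeholder for the AR4-consistent PDF.
n = 200_000
v = sorted(beta4_sample(4.37, 4.44, 0.55, 1.74) *
           beta4_sample(2.0, 2.0, 1.0, 4.6)
           for _ in range(n))

p10, p50, p90 = (v[int(q * n)] for q in (0.10, 0.50, 0.90))
prob_gt_2k = sum(1 for vi in v if vi > 2.0) / n  # chance of exceeding 2 K
```

As in Table 2, the sample mean of the product matches the product of the factor means under independence.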
Table 2. Statistics Derived From the Probability Density Function for Net Change of SAU T in December–February From 2000 to 2100 Under A1B, Using Various Fitting Methods on Results From 23 Global Climate Models
Here “Fit” is the form of PDF for regional change per degree, as in Figure 2; “ΔTg Fit” is the form of probability density function (PDF) for global mean warming, as in Figure 1; the fit mean (case 8) is the result based on the weighted mean of the global climate model (GCM) scaled change values (with zero range); likewise, the ΔTg fit mean (case 9) is the single value 2.8 K. For case 11, “raw” means the sum form of fitting the individual net regional changes from the GCMs. The first two statistics are the mean and standard deviation of the PDF. The others are the change at the 10th, 50th, and 90th percentiles and the probability of exceeding two thresholds (2 K and 4 K), both determined from the corresponding cumulative distribution function (CDF).
 Other alternatives that do affect the statistics are also given in Table 2. If we follow the example of Dessai et al. and simply use the mean x value as a trivial, zero-range PDF, we obtain statistics (case 8) determined by the shape of the ΔTg PDF. The values given are barely different from the Narrow case. Using the Beta x PDF but a single value for ΔTg, the mean, gives case 9. The P10 and P90 values for both case 8 and case 9 are not as extreme as those for case 3, because of the narrowing due to the use of a single value for one of the factors. Note, though, that taking the product of the P10 values from each factor of case 3 gives a value (1.8 K) that is more extreme, close to the P03 value of case 3. Likewise, the product of the P90 values (5.1 K) is close to the P97 value of case 3. This comparison indicates that the joint distribution approach can produce a representative uncertainty range (e.g., P10 to P90) that is less broad than would be obtained by simply multiplying near-extremes of the two factors, as was typically done by Whetton et al.
 The use of the broader global warming PDF (from Figure 1), with the Beta x, produces a larger SD and P90, and a larger chance of warming greater than 4 K (comparing case 10 with case 3). The P10 value is smaller than before. Interestingly, the median is also smaller, consistent with a smaller median for the ΔTg PDF.
 Finally, we consider case 11, fitting the Sum form to the net change data directly from the models, denoted Raw in Table 2. By not using pattern scaling, this case is comparable to the approach of Giorgi and Mearns, although the weighting method and the construction of a continuous PDF are different. For comparison with the other results, it is useful to produce the model results by multiplying each model's scaled change xi (and its uncertainty) by its own warming yi. Since the 100-year global warming is not available for the bcr model (Table 1), the average over the other 22 models is used. The resulting 100-year regional warmings range from 2.1 K to 4.5 K, with a median of 3.1 K. These values are depicted in Figure 4b. The P50 value from the resulting Sum form (Table 2) matches this, while P10 and P90 span a smaller range. The range is not as large as that from case 1, which factored in a range of warmings.
 The comparison of net change ranges will depend, among other things, on the correlation between the two factors xi, yi across the ensemble. In this case, the correlation for the 23 model results is r = −0.26. This is a further reason the range for Raw is a little smaller than that for case 1, for example. However, the magnitude of r is not statistically significant, with respect to the assumption of independence of the factors.
 The probability of exceeding the threshold value 2 K, 0.93 for the Beta-Beta case, is larger than the corresponding value 0.55 in Dessai et al. [2005, Table 5], despite being for a smaller mean warming. A smaller value would apply here if the small oceanic warming of the Australian Bight was allowed to contribute to the regional means. The result here is more like that of Giorgi and Mearns (as quoted by Dessai et al. ), namely, 0.88.
5. Application to Precipitation
5.1. Linear Assumption
 As a second example, the pattern scaling technique is applied to precipitation (here rainfall). We use the same weights and the Beta global warming PDF, and, initially, the same assumption of linearity. As in CSIRO 2007, we consider values at a single location of the 1 degree common grid, rather than averaged over the SAU region. Note that while this grid is only a little finer than that of the model hir, a typical GCM resolution is 2° or 3°. Naturally, the PDFs can only represent changes at this spatial scale.
 It is of interest to focus on a location with a small rainfall, and in the subtropics, where the projections are for a likely decrease, particularly during winter. The grid point at 134° E, 24° S, close to the town of Alice Springs in central Australia, is used here. The weighted multimodel mean rainfall (rate) for the June–August (JJA) season during 1961–1990, from simulated years that precede those under scenario A1B, is 0.70 mm d−1. Regressing the 2000–2100 data against global warming produces the individual trend values shown in Figure 5a. All but 5 of the 23 models simulate a decrease. The weighted mean of the values is −0.075 mm d−1 per K. It is helpful to express this relative to the 1961–1990 value: −10.7% per K. In fact, the observational mean for the grid point (see Appendix A) is only 0.36 mm d−1. Overall, the models simulate too much winter rain in the Australian interior and too little near the southern coasts. This is one reason for presenting % change, if one believes that the relative real-world change will be projected more accurately than the absolute change.
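The trend values here come from regressing the regional series against global warming; a minimal sketch of that scaled-change calculation (the function name and the example series are illustrative, not the CMIP3 data):

```python
def scaled_change(tg, regional):
    """Least-squares slope of a regional series against the global mean
    warming series Tg, i.e. the scaled change per degree of warming."""
    n = len(tg)
    mt = sum(tg) / n
    mr = sum(regional) / n
    num = sum((t - mt) * (r - mr) for t, r in zip(tg, regional))
    den = sum((t - mt) ** 2 for t in tg)
    return num / den

# A perfectly linear series recovers its slope exactly, e.g. a rainfall
# series falling 0.075 mm/d per K of global warming:
slope = scaled_change([0.0, 1.0, 2.0, 3.0], [0.700, 0.625, 0.550, 0.475])
```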
 The three forms of PDF fitted to the model data are shown in Figure 5a (again, shown as a percentage of 0.70 mm d−1). While the PDFs have the same mean and SD (18.1% per K), their shapes are rather different, partly because the largest decrease from the 23 models is a clear outlier. One model (Mk3) simulates the large decrease −0.50 mm d−1 per K, with uncertainty, σ = 0.18 mm d−1 per K. The starting point a for the Beta case is set at −0.68 mm d−1 K−1, or −97% K−1, although the Beta and Normal PDFs are negligible beyond −70% K−1 (as seen).
 Calculating the probabilistic net rainfall change v, or ΔP here, produces results that depend somewhat on the PDF for x. However, we focus on the Beta fit used by CSIRO 2007, with the resulting CDF for ΔP shown in Figure 6a (labeled mm d−1) and the corresponding PDF in Figure 6b. As seen from Table 3, the mean change, after some 3 K of warming, is the expected, and plausible, −30%. However, the range is large. The CDF is nonzero at −100%, and even the P10 change exceeds a decrease of 100% (Table 3).
Table 3. Statistics Derived From the PDF for Net Percentage Change of Central Australian Rainfall in June–August From 2000 to 2100 Under A1B, Using Various Approaches on Results From 23 GCMs
The beta PDF for ΔTg is applied. Results for four methods of estimating rainfall change PDFs are given, as in Figure 6. The result from using raw percentage changes is also given. The statistics are the mean and SD of the PDF, and the change at the 10th, 50th, and 90th percentiles of the PDF, all in percent of the present climate value. The remaining statistics are the probability (in percent) of rainfall change being less than two thresholds (0% and –20%), as determined from the corresponding CDF.
 One reason for this unphysical result is that the large absolute decrease from Mk3 is being amplified by warming larger than this model simulates (Table 1), an extrapolation beyond the range of the regression. In addition, the Mk3 change is from an unrealistic base of 1.26 mm d−1, so that in percentage terms, its scaled change is a more plausible −40% K−1 (as shown, or −54% K−1, allowing −1 σ). The ranges can be ameliorated a little by fitting such individual percentage changes to form the x PDFs. To indicate that a different variable “relative (or percentage) change in precipitation” is now being considered, this is denoted as ΔP′ (with P′ being P divided by a base rainfall). The results, shown in Figure 5b, have a similar mean, −10.8% K−1, but are less broad, with SD 15% K−1. However, all three PDFs have considerable probability beyond −40% K−1. The resulting net change CDFs and PDFs still extend beyond −100%, with the Beta case again shown in Figure 6 (as % linear). Despite some contraction of the CDF (Figure 6a), the overall chance of a decrease in rain (Table 3) is a little larger. It is worth comparing this with the Sum PDF constructed from the raw data for ΔP′, derived using the method of the previous section and plotted in Figure 6b. The range is reduced a little (see %-raw in Table 3), but the CDF (not shown) is still positive at −100%.
 Further examination of the Mk3 series over the 21st century shows that the seasonal rainfall data may not fit the statistical regression model well. Values much higher than the mean occur in the early years, but low values are, of course, bounded. If the regression with Tg is done on decadal averages, the trend coefficient is less extreme, at −26% K−1, although the standard error is 12% K−1. Nevertheless, this will still cause a problem.
5.2. Exponential Approach
 It is evident from these unphysical results that the linear approach, involving products of a rainfall change from a model with low sensitivity and a global warming consistent with high sensitivity, can readily break down toward the end of the 21st century. A breakdown could also occur for rainfall from a single model, beyond this time. For instance, if a local decrease of 20% is simulated by a model for a 1 K warming, the decrease when the model is forced to a warming of 5 K will not be as great as 100%, that is, no rain at all. More conceivably, the rainfall may decrease in a compounding fashion for each 1 K of warming, giving a net decrease of about 67% (since 0.8^5 ≈ 0.33). This would be a nonlinear, indeed exponential, relationship between the actual rainfall P and warming, y, that can be expressed as

P = P0 exp(αy),    (13)

where P0 is the base climate rainfall, when y = 0, and α is assumed constant.
 Exponential relationships are commonly considered in statistics, of course [e.g., Yamane, 1967], and do feature in climate studies [e.g., Ferro et al., 2005]. Without comprehensive assessment of alternatives, the justification for the form here is limited to the avoidance of unphysical results. In any case, the relationship with warming still satisfies the first assumption made in section 2.1. Further, if we take the logarithm of equation (13), we get

ln(P/P0) = αy,

which is in the linear form of equation (1), if v is now the variable “change in log rain,” and x is the coefficient α.
 For small warming, the exponential relationship approximates that used in the previous analysis of the variable ΔP′. For a single model, the coefficient α could be evaluated by a regression of ln P/P0 against y. Assuming that changes in individual models are not far out of the linear regime, we can simply use the previous x values, allowing for the factor 100 used in the conversion to a percentage. (In fact, applying the Grace software for the exponential form on the Mk3 example produces similar rates of change as before, for both yearly and decadal series.) The calculation of net change PDFs then proceeds as for the previous case. We merely need to convert the discrete v values (in % change) to

ΔP′ = 100[exp(v/100) − 1].

Note that this variable cannot be less than −100%. The PDF h(v) converts into the new form h′ using a change of variable:

h′(ΔP′) = h(v) dv/dΔP′ = h(v) × 100/(100 + ΔP′).

Forming the CDF, using discrete integration, must allow for the uneven spacing of the new ΔP′ points.
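The conversion and the change of variable can be sketched as follows (function names are illustrative); using expm1 keeps the −100% bound numerically exact for large decreases.

```python
import math

def pct_exponential(v):
    """Map a linear percentage change v (in %) to the exponential-form
    change 100*(exp(v/100) - 1), which cannot fall below -100%."""
    return 100.0 * math.expm1(v / 100.0)

def transformed_density(h_v, v):
    """Change of variable for the PDF: h'(dP') = h(v) * 100/(100 + dP'),
    where dP' = pct_exponential(v)."""
    return h_v * 100.0 / (100.0 + pct_exponential(v))
```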
 Under this exponential approach, the PDFs for x are the same as those plotted in Figure 5b. Using the Beta form, we obtain the PDF and CDF for net percentage change shown as the third curves in Figure 6.
 The new approach puts much less weight on changes that leave the actual rainfall near zero (and none on negative rainfall), and more weight on modest decreases. P10 is less extreme than even the %-raw case (Table 3). The PDF for positive changes is somewhat broader than the %-linear one. From the CDFs, and Table 3, we see that each approach gives a median value around −25%. In fact, the chance of a decrease (the 0% threshold) is identical to the %-linear result. With only positive values in the warming PDF, this chance is the same as the fraction of negative x values (hence unchanged).
 The mean change (Table 3) is smaller than the previous results. It is worth noting that this is no longer proportional to the mean warming, as positive changes are amplified more than negative ones as warming increases. To illustrate this, consider using just a single warming value, ΔTg, as a factor. When this is very small the mean ΔP′ decreases at the −10.8% per K linear rate. However, for a warming of 4 K it reaches only −23.6%. In fact, for a less negative x PDF, the mean net rain change can even be positive, of the opposite sign of the initial rate, as occurs at some Australian locations using the same warming PDF.
5.3. Mixed Approach
 Not only do the CDFs of the linear and exponential results meet at 0%, the PDFs do also, due to the near linearity of equation (13) at small change. This makes it straightforward to consider an alternative approach that combines the two forms. For negative changes we use the exponential relation, while for positive change we use the linear relation.
 The resulting PDF and CDF are already available in Figure 6, if we follow the exponential curve up to 0%, then switch to the linear curve. Incorporating the “mixed” approach into the computational programs produces the results plotted in Figure 6 and the statistics given in Table 3, line 5. The new method produces the smallest SD of all.
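The mixed transform itself is a one-line piecewise rule; a minimal sketch (function name illustrative), with the two forms agreeing in value and slope at zero change:

```python
import math

def mixed_change(v):
    """Mixed transform of a linear percentage change v: exponential form
    for decreases (bounded below by -100%), linear form for increases."""
    if v < 0.0:
        return 100.0 * math.expm1(v / 100.0)
    return v
```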
 The exponential approach is surely more valid for larger decreases, as it avoids unphysical net rain rates. Whether it is better for the larger positive changes may depend on the location and season. There is little indication of global mean rainfall increasing at a nonlinear rate relative to ΔTg in analyses for the 21st century by Meehl et al. Likewise, Mitchell supports the usual linear assumption, in general. For practicality, CSIRO 2007 uses the mixed approach for all Australian rainfall cases; however, further assessment of alternatives is warranted.
6. Summary and Conclusions
 An application of the pattern scaling approximation to the development of probabilistic projections of regional climate change associated with global warming is outlined, borrowing from the work of Whetton et al. and Dessai et al., in particular. This assumes that the global mean warming and the standardized (or scaled) regional change per degree of warming can vary independently over the ranges that are considered possible from GCM simulations. A mathematical framework for the problem is developed, using a joint probability distribution for the product of the two factors. Two ways of calculating the probability density function for the net change are described, given the assumptions. Three simple approaches to constructing a continuous PDF for each factor are presented: Sum, based on adding weighted results from individual GCMs, and the Normal and Beta distributions fitted to Sum. The Beta distribution provides a smooth PDF that matches the mean and standard deviation of the GCM results and also their range, allowing skewness.
 The method is applied to changes in Australia over the 21st century under the A1B forcing scenario, using results from 23 CMIP3 GCMs. The weighting of models is based on their skill in simulating the observed climate. A beta distribution representation of the global warming factor, consistent with AR4, is used.
 Results for southern Australian summer warming over the 21st century under the A1B scenario are presented. Statistics for the net change, including threshold exceedences, are given for various choices. Because of the range of global warming considered possible, the net regional warming is rather insensitive to the choice of PDF for standardized change.
 For the case of central Australian winter rainfall, the GCM changes range from moderate increases to large decreases. The usual linear approach produces a net change PDF that extends to unphysically large decreases, even if percentage changes from individual models are used. An alternative approach based on an exponential relation between rainfall and warming is proposed. This is readily implemented within the same framework, using a logarithmic transformation, and it provides more plausible decreases. Given that linearity appears satisfactory for increases in precipitation, a mixed approach, combining the exponential and linear results, may be best for routine application to precipitation.
 The method has recently been applied to generate PDFs of change at all Australian locations, for the four seasons and the annual case, and for several other variables. Statistics from these are presented in map form by CSIRO 2007 and are also available online (http://www.climatechangeinaustralia.gov.au).
 It should be reiterated that there are various caveats to the pattern scaling method and to the choices made here in weighting models and constructing the sample-dependent PDFs. Further assessment of these is recommended. Approaches more strongly based on theory have recently been developed [e.g., Furrer et al., 2007; Murphy et al., 2007], although these also depend on assumptions. Subject to its caveats, the present method can be offered as one that estimates PDFs for change through relatively simple calculations that depend on GCM results in a straightforward way.
Appendix A: Calculation of Plausible Weights
 The forms of PDF used here for standardized change allow for a weighting of models, based on the expectation that some are more likely to be accurate over a region than others. A standardized field does not directly depend on the global sensitivity of the GCM, but rather on the ability of a model to represent patterns of change over the region. Given this, we follow Whetton et al. and consider the overall skill of each model in simulating the observed climate, based on three climatological fields, temperature (T), precipitation rate (P), and sea level pressure (SLP), for the four usual seasons.
 A convenient nondimensional skill score is the “arcsin-Mielke” measure M from Watterson and Meehl et al., applied to quantify the similarity of the model fields to the gridded observational fields, over the domain of all Australian land grid points. With mse being the mean square error between the model field X and observed field Y,

M = (2/π) arcsin[1 − mse/(VX + VY + (GX − GY)²)],

where V is the variance and G the mean, with all statistics calculated over the domain.
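With M = (2/π) arcsin[1 − mse/(VX + VY + (GX − GY)²)], the score can be sketched for fields given as flat lists of grid-point values (function name illustrative):

```python
import math

def mielke_m(model, obs):
    """Arcsin-Mielke skill score M: 1 for a perfect match, 0 for no skill,
    negative for anticorrelated fields."""
    n = len(model)
    gx = sum(model) / n                              # field means G
    gy = sum(obs) / n
    vx = sum((x - gx) ** 2 for x in model) / n       # field variances V
    vy = sum((y - gy) ** 2 for y in obs) / n
    mse = sum((x - y) ** 2 for x, y in zip(model, obs)) / n
    return (2.0 / math.pi) * math.asin(1.0 - mse / (vx + vy + (gx - gy) ** 2))
```

A model field identical to the observations gives M = 1, since mse vanishes.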
 The model fields are averages over years 1961–1990, using multiple simulations where available. The observational data for T and P are from the Australian Bureau of Meteorology, and for SLP, the ERA40 reanalyses [Källberg et al., 2004]. The averaging period was extended to that of ERA40, 1958–2001, to reduce unwanted effects of variability, as supported by a small mean reduction in the difference with the model results for rainfall.
 The average M value (×1000) for the 12 comparisons is given for each model in Table 1. With M = 0 indicating no skill and M = 1 perfect skill, it is evident that most models show considerable skill over Australia, although biases remain, particularly for precipitation. The weights wi used here are simply these M averages, after normalization. The range of weights is not great, with the best model being favored only by a factor 2.3 over the lowest scoring model. CSIRO 2007 used this set of weights for all cases, and it is used in the examples here. Consistent with various assessments (see AR4), the overall score for the multimodel mean fields (with uniform weights) is 0.75, better than the score for each individual model.
 With regard to any ranking of models, it should be noted that statistical uncertainty in both model and observed fields and systematic biases in the latter still influence these scores. In the absence of a thorough assessment of possible errors, it is of interest to consider the replacement of the observed fields by the multimodel mean, which may reduce some biases. All but one of the overall model scores increase, with the average score rising from 0.58 to 0.64. The ranking of the models is largely unchanged, except that the top model in Table 1 falls to third. Differences between scores less than the size of this sensitivity to the testing field, 0.06, are perhaps not significant.
 The use of weights based on such M values in constructing PDFs is subjective, but receives some support from Whetton et al. . They show that similarity in such scores between pairs of models tends to result in similarity of the fields of trend coefficients, and they hypothesize that this would occur if one of the pair were replaced by real-world fields. The best form of weight is unclear however, and several alternatives could readily be used in the present framework. Dessai et al.  also apply a regionally based skill score, using the REA approach of Giorgi and Mearns . That approach also weights more strongly models whose change fields “converge” to the mean change. The effect may be to narrow the spread of the resulting scaled change PDF somewhat. Nevertheless, for the ΔT case in section 4, the relative spread of weights is comparable to that in Dessai et al.  (for a different set of models). For further discussion of weighting of models, see Murphy et al. .
 This work contributes to the Australian Climate Change Science Program. The author is grateful to the many modelers who contributed their climate simulations to the CMIP3 database and to PCMDI for making this available. Janice Bathols produced the climate means and regression results used here. The work benefitted from extensive discussions with Penny Whetton and other colleagues of the CSIRO Climate Impact Group and Bureau of Meteorology as well as with Claudia Tebaldi and Reinhard Furrer, in particular. Major improvements in the presentation followed from the advice of three reviewers.