Incorporating model uncertainty into attribution of observed temperature change



[1] Optimal detection analyses have been used to determine the causes of past global warming, leading to the conclusion by the Third Assessment Report of the IPCC that “most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations”. To date, however, these analyses have not taken full account of uncertainty in the modelled patterns of climate response arising from differences in basic model formulation. To relax this “perfect model” assumption, we extend the optimal detection method to include, simultaneously, output from more than one GCM by introducing inter-model variance as an extra source of uncertainty. Applying the new analysis to three climate models, we find that the effects of both anthropogenic and natural factors are detected. We find that greenhouse gas forcing would very likely have resulted in greater warming than observed during the past half century had there not been an offsetting cooling from aerosols and other forcings.

1. Introduction

[2] An important goal of climate research is to determine whether climate has changed as a result of human activity since the beginning of the industrial revolution. To address this question, climate models are used to explain signals emerging in the observations and to attribute such changes to particular forcings of the climate system. Such model-data fusion allows human influence on the Earth system to be distinguished from natural variation; characterising this correctly is crucial to assessing the future climate change expected as a consequence of raised atmospheric greenhouse gas concentrations. Good performance of climate simulations, verified through data-model intercomparisons, implies a strong predictive capability, and is a pre-requisite for policymakers to define more accurately what might constitute “dangerous” future emission profiles of greenhouse gases.

[3] There are three main groups of climatological forcings that we consider in this paper. These are the influence of raised greenhouse gas concentrations due to human activity (predominantly carbon dioxide, but also others such as methane, nitrous oxide and CFCs), the cooling effect of increased atmospheric aerosols (mainly sulphates) and natural factors (including changes in solar irradiance and stratospheric aerosols following volcanic eruptions). These forcings, which we refer to henceforth as GHG, SUL and NAT respectively, have distinct spatial and temporal “fingerprints” on surface climate, which allow their differentiation. Optimal detection methods [Hasselmann, 1997] utilise these contrasting responses to isolate the strength of influence of different forcings by comparison to sets of (atmosphere-ocean) General Circulation Model (GCM) simulations driven by individual modelled forcings. Implicit in the method is that the climate signals are additive [Gillett et al., 2004], and so the optimal detection method (also sometimes referred to as “optimal fingerprinting” or “climate detection and attribution”) is based directly on multivariate linear regression. It is these methods that led the Third Assessment Report of the IPCC [Intergovernmental Panel on Climate Change, 2001; Mitchell et al., 2001] to conclude that “most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations”.

[4] Typically, to understand human influence on climate, output from single GCMs has been compared against observations in order to determine whether a significant anthropogenic climate change has been detected and to produce estimates of the contributions from different forcing agents to observed temperature changes. Single model analyses have been applied to an increasingly wide range of GCMs being developed by research groups around the world. The formulations of these models differ, particularly for components of the Earth System where understanding is still in its infancy.

[5] Stott et al. [2006] compared optimal detection analyses applied to three different climate models, the HadCM3, PCM and GFDL R30 models. They found that global mean warming over the 20th century attributable to anthropogenic greenhouse gas emissions is well constrained by (i.e. apparent in) the observational record. Models are brought into better agreement through differential scaling of the components from different models; for example, the scaling of the greenhouse response of the lower climate sensitivity PCM model is generally larger than that of the higher sensitivity models. Stott et al. [2006] combined the results from the three models by averaging the attributable warming from each model analysis to provide a probability density function of transient climate response. However, they did not take full account of inter-model differences and the associated uncertainty. Here, we provide an extension of the optimal detection method (based on a method called “error-in-variables”) to include the effects of inter-model differences in the projected response to each forcing, thereby achieving single estimates of attributable causes of “global warming”.

2. Methodology

[6] Allen and Tett [1999] and Allen and Stott [2003] describe in detail the application of optimal fingerprinting methods to climate detection and attribution, using simulations from a single GCM. For each modelled signal, regression co-efficients are calculated that give the best fit (by minimising the “total least squares” residual) to the linear model:

y = \sum_{i=1}^{I} \beta_i \,(x_i - \nu_i) + \nu_0 \qquad (1)

Here y are the measurements, x_i are the I model-derived signals from GCM output (I = 3 for GHG, SUL and NAT), ν_0 and ν_i represent climate “noise” present in the observations and the model respectively (assumed to have similar structure), and β_i are regression co-efficients. The modelled signals may be based upon means of ensemble members, and so ν_i is scaled accordingly (for just one simulation, ν_i = ν_0). In the absence of temperature records long enough (i.e. longer than century timescale) to evaluate climate “noise”, this is instead estimated from long GCM control simulations with fixed atmospheric forcing representative of the pre-industrial period. The regression co-efficients β_i provide information on whether individual signals are present: the regression analysis returns uncertainty bounds at a given confidence level, and if these do not include zero, then that signal has been detected in the observed record and some of the measurement record can be attributed to that particular signal. If unity is included in the uncertainty bounds, then the GCM projections are said to be “consistent” with the observations.
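The total least squares fit of equation (1) can be sketched with the standard SVD-based solution. This is a simplified illustration rather than the published analysis: the function name is ours, and we assume unit noise covariance in the observations and in every signal (the real analysis pre-whitens using the control-run covariance first).

```python
import numpy as np

def tls_fit(X, y):
    """Total least squares estimate of beta in y ~ X @ beta,
    allowing for noise in both the observations y and the model
    signals X (columns x_i). A simplified sketch of equation (1),
    assuming unit noise covariance in all variables."""
    n_modes, I = X.shape
    Z = np.column_stack([X, y])  # augmented data matrix [X | y]
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                   # right singular vector of the smallest
                                 # singular value of Z
    return -v[:I] / v[I]         # regression co-efficients beta_i
```

With noise-free synthetic data this recovers the prescribed scaling factors exactly; in practice, uncertainty bounds on each β_i would be obtained from the noise estimated from control-simulation segments.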

[7] The actual quantities used in equation (1) are the co-efficients of the leading n spatio-temporal Empirical Orthogonal Functions (EOFs) derived from the long control series, which represent the main modes of behaviour of the climate system under fixed forcings. Here y and the x_i (for each signal i) are vectors with n components, being the EOF co-efficients calculated for the measurement record and for the I model-derived responses to the different climatic forcings respectively. The regression analysis then shows whether the individual signals are emerging from the noise by exciting the modes of variability depicted by the control simulations. The estimates of “noise” in the EOF co-efficients are also derived from the control simulations.
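The EOF pre-processing described above can be sketched as follows. The helper names are illustrative only, and a real analysis would use area weighting and spatio-temporal (not purely spatial) EOFs of the control run.

```python
import numpy as np

def eof_basis(control, n):
    """Leading n EOFs of a long control simulation.
    control: (time, space) matrix of control-run fields."""
    anom = control - control.mean(axis=0)   # remove the time mean
    _, _, Vt = np.linalg.svd(anom, full_matrices=False)
    return Vt[:n]                           # (n, space) EOF patterns

def eof_coefficients(field, eofs):
    """Co-efficients of a field in the truncated EOF basis; applied
    alike to the observations y and the modelled signals x_i."""
    return eofs @ field
```

Projecting both observations and model responses into the same truncated basis is what puts y and the x_i of equation (1) on a common footing.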

[8] Here we extend the optimal detection method given by equation (1) to include information from simulations by multiple GCMs:

y = \sum_{i=1}^{I} \beta_i \,(\bar{x}_i - \nu_i - \mu_i) + \nu_0 \qquad (2)

The methodology contains two new components: first, a calculation across models of a mean set of EOF co-efficients, x̄_i (with an associated value of ν_i), and second, the capture of the uncertainty in model projections introduced by the consideration of multiple GCMs. The latter is given by μ_i for each signal.

[9] If the co-variance structure of μ_i is the same as that of ν_i, then we can capture model error by simply scaling the noise, as done by Gillett et al. [2002]. However, typically the noise structure will be different, and so here we use a method called Error In Variables (EIV), following the development of a similar regression algorithm by Nounou et al. [2002] for use in chemical analysis. We assume that inter-model differences are not just random fluctuations about a common mean signal that all the models share, but instead recognise that different models provide alternative but at present equally plausible representations of reality, and thereby introduce an extra component of uncertainty.

[10] We use the overall mean across models, x̄_i, as the best estimate of each signal i and assess this against inter-model uncertainty. The individual responses for each signal i and each model j are regressed to fit the model mean for each signal, x̄_i. These new co-efficients are called x̃_{i,j}, and the departures of these reformed EOF co-efficients from the model mean x̄_i are used to estimate inter-model spread; we regress the response patterns to the mean because here we are interested in pattern uncertainty and not overall amplitude uncertainty. The inter-model co-variance for each signal, μ_i, is calculated as

\mu_i(n_1, n_2) = \frac{1}{J-1} \sum_{j=1}^{J} \left[\tilde{x}_{i,j}(n_1) - \bar{x}_i(n_1)\right]\left[\tilde{x}_{i,j}(n_2) - \bar{x}_i(n_2)\right] \qquad (3)

for 1 ≤ n_1, n_2 ≤ n.
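The inter-model co-variance above amounts to a sample covariance of the regressed model patterns about the multi-model mean. A minimal sketch follows; the amplitude-removing regression step is our reading of “regressed to fit the model mean”, and the function names are illustrative.

```python
import numpy as np

def regress_to_mean(x_j, x_mean):
    """Scale one model's signal pattern to best fit the multi-model
    mean, removing amplitude differences so that only pattern
    differences remain (our reading of the regression step)."""
    b = (x_j @ x_mean) / (x_j @ x_j)   # least-squares scaling factor
    return b * x_j                     # reformed co-efficients x~_{i,j}

def intermodel_covariance(patterns, x_mean):
    """Covariance over the J models of the reformed EOF co-efficients
    about the multi-model mean. patterns: (J, n); x_mean: (n,)."""
    J = patterns.shape[0]
    d = patterns - x_mean              # departures from the mean
    return d.T @ d / (J - 1)           # (n, n) matrix mu_i(n1, n2)
```

With only three models (J = 3) this covariance is estimated from just two degrees of freedom per signal, which is the crude sampling of inter-model uncertainty acknowledged in the conclusions.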

[11] In all calculations each model is given equal weighting (an alternative would be to simply average across all ensemble members, but this would weight the analysis towards those GCMs with higher numbers of available simulations). Hence the variance ν_i is given by μν_0, where μ = (Σ_{j=1}^{J} K_j^{-1})/J² and K_j is the number of ensemble members for each model j (4, 4 and 3 for the HadCM3, PCM and GFDL R30 models respectively, for all three signals).
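The noise scaling for the equally weighted multi-model mean can be checked in a couple of lines (the function name is ours):

```python
def noise_scaling(ensemble_sizes):
    """mu such that nu_i = mu * nu_0 for the equally weighted mean
    over J models with K_j ensemble members each."""
    J = len(ensemble_sizes)
    return sum(1.0 / k for k in ensemble_sizes) / J ** 2

# HadCM3, PCM and GFDL R30 ensemble sizes used here:
mu = noise_scaling([4, 4, 3])   # = (1/4 + 1/4 + 1/3) / 9, about 0.093
```

Note that for a single model with one simulation, noise_scaling([1]) returns 1, recovering ν_i = ν_0 as in the single-model case of equation (1).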

[12] A further decision is required on what constitutes the basis set of EOFs derived from the control GCM simulations. The optimal detection methodology requires that the individual signals i are mapped onto a range of n dominant EOFs. The value of n (called the truncation) is chosen both to satisfy a consistency test on the residuals [Allen and Tett, 1999] and to include sufficient EOFs that the derived regression co-efficients β_i become relatively insensitive to the truncation value.
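The second criterion, stability of the scaling factors with truncation, might be automated along the following lines. The tolerance and stopping rule here are illustrative choices of ours, not taken from the analysis.

```python
def stable_truncation(betas_by_n, tol=0.05):
    """Return the smallest truncation at which the estimated scaling
    factor changes by less than tol relative to the previous
    truncation; betas_by_n[k] is beta at truncation k + 1."""
    for k in range(1, len(betas_by_n)):
        if abs(betas_by_n[k] - betas_by_n[k - 1]) < tol:
            return k + 1               # convert index to truncation
    return len(betas_by_n)             # no stable point found
```

In practice one would inspect the β_i against n visually as well, alongside the residual consistency test.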

3. Results and Discussion

[13] Stott et al. [2006] used the optimal detection method to search for attributable GHG, SUL and NAT signals in each of the HadCM3, PCM and GFDL R30 models separately. Each model has a different pattern of response to a particular forcing, although qualitatively the patterns share many similarities. First, we repeat their analysis, but project modelled patterns of response onto a common basis set derived from the PCM control simulation. Further details of the models used and the forcings applied are given by Stott et al. [2006] and references therein.

[14] Results are shown as the red (HadCM3), green (PCM) and blue (GFDL R30) bars in Figures 1 and 2, and are presented as the β_i regression co-efficients with uncertainty bounds at the 10% two-tailed confidence level (5 to 95 percentiles; Figure 1) and as attributable warmings over both the last century and the last 50 years (Figure 2). We find that the results are relatively insensitive to a range of truncation values when the PCM control simulation is used to derive the basis set, but not when the HadCM3, GFDL R30 or a combination of control simulations is used. We note that we are attempting to estimate spatio-temporal patterns with a relatively small number of corresponding EOFs, suggesting that the signals do not project well onto the EOF patterns from the other models. Here we select a value of n = 15, as this represents the start of such insensitivity for increasing truncation using the PCM control.

Figure 1.

Scaling factors for individual GCMs considered and for the combined “EIV” method. Values are presented for the greenhouse gas, sulphate and natural signals.

Figure 2.

Attributable temperature trends associated with the scaling factors shown in Figure 1 for (top) the last century and (bottom) the last 50 years. Also presented are the overall temperature trends from the combination of all three forcings.

[15] We then calculate results from the EIV analysis, projecting simulations from each of the three climate models onto the same PCM-derived basis set (black bars in Figure 1 and Figure 2). At present few models have undertaken simulations for individual forcings (from GHG, SUL and NAT), and we estimate the covariance structure of “pattern uncertainty” from just three models. A better estimate of inter-model uncertainty in future will require analysis of a larger number of models.

[16] Figure 1 shows that, according to the EIV method, a significant influence of each of GHG, SUL and NAT is detected at the 5% confidence level. The black bars in Figure 2 show cross-GCM estimates of the temperature change attributable to each of these three forcing factors over the 20th century as a whole and over the last 50 years of the 20th century. For the latter, warming trends, in units of °C per 50 years, are 0.76 to 1.03 (best estimate 0.89) for GHG, −0.29 to −0.11 (best estimate −0.19) for SUL, and −0.27 to −0.10 (best estimate −0.18) for NAT, which combine to give an overall warming of between 0.44 and 0.59 with a best estimate of 0.51. These results take account of inter-model uncertainty and show that it is likely that greenhouse gas forcing during the past half century caused greater warming than observed. The confidence intervals for the EIV method are in general smaller than those from total least squares applied to the individual models; the reduction in uncertainty from the EIV method in effect accessing a larger ensemble outweighs any increase due to the inter-model covariance structure.

4. Conclusions

[17] We have examined the near-surface temperature responses from ensembles of simulations of three coupled climate models when they are forced with increasing greenhouse gas concentrations, with changes in sulphate aerosols, and with natural variations in atmospheric radiative forcing resulting from changing solar irradiance and explosive volcanic eruptions. We have extended previous optimal detection analyses applied to individual models to include inter-model uncertainty, thereby allowing single uncertainty bounds to be placed on our results across output from different global climate modelling centres. The main element of this is the introduction of an additional co-variance matrix that captures inter-model differences as a further source of uncertainty.

[18] For the three GCMs considered, the EIV method synthesizes their output such that, when compared with the temperature record, the effects of greenhouse gases, aerosols and natural forcings are each detected at the 5% confidence level. Taking account of inter-model uncertainty, it is likely that greenhouse gas forcing during the past half century caused greater warming than observed.

[19] These estimates rest on a relatively crude sampling of inter-model uncertainty, being drawn from only three models. It is expected that in future more GCM modelling centres will provide simulations driven by individual forcings, including the separate responses to greenhouse gases, aerosols and natural factors. By deriving single uncertainty bounds for each climatological forcing, the EIV methodology described here is a potential tool to aid policymakers planning to mitigate and adapt to future climate change. The EIV method explicitly incorporates “structural” uncertainty and, as large multi-model ensembles of simulations for separate forcings become available, offers potential future benefit in refining estimates of the relative contributions of greenhouse gases and other anthropogenic and natural forcings to observed climate change at global and regional scales.


[20] This work was funded by UK Department for Environment, Food and Rural Affairs under contract PECD 7/12/37 with additional funding from the Centre for Ecology and Hydrology “Science Budget” (CH) and the NOAA/DoE International Detection and Attribution Group (MRA).