
[1] Using a linear stochastic dynamical system, we further develop a recently proposed criterion for measuring variations in the predictability of ENSO. It is found that model predictability is intrinsically related to how the initial signal variance (ISV) projects onto its eigenmode space. When the ISV is large, the corresponding prediction is found to be reliable, whereas when the ISV is small, the prediction is likely to be less reliable. This finding was validated by results from a more realistic model prediction system for the period 1964–1998. A comparison of model skill and ISV for predictions made with and without data assimilation reveals that the role of data assimilation in improving model predictability may be due mainly to a further increase of ISV. Furthermore, model skill may result mainly from a few successful predictions associated with large ISV.


[2] An important issue in predictability studies is how to measure the reliability of predictions. The primary technique has been ensemble prediction, with the ensemble spread serving as a major measure of prediction reliability. However, the relationship between ensemble spread and model predictability can be model and norm dependent.

[3] Other issues concerning ensemble prediction are its complexity and computational expense. A great deal of effort and computational cost are often required to generate optimal error growth modes and carry out ensemble runs [e.g., Moore and Kleeman, 1998]. Therefore, developing simple but effective methods to measure prediction reliability is of great interest in the study of climate predictability. Kleeman and Moore [1999] (hereinafter referred to as KM99) proposed an alternative criterion for determining the variations in predictability. They found that the reliability of predictions is greatly influenced by the signal amplitude in the initial fields. More specifically, when the signal size characterized by system modes is large, the modes are able to “resist” dissipation by the more chaotic components of the system, leading to a reliable prediction. KM99 validated this criterion using an intermediate coupled model of ENSO. In this note, we further develop KM99's criterion using a linear stochastic dynamical system, and then examine it using a different and independent model of ENSO. This note is structured as follows: Sections 2 and 3 briefly describe the theoretical framework and the coupled model used here. Sections 4 and 5 present the analyses of model predictability, followed by a summary and discussion in Section 6.

2. A Preliminary Framework for Linear Predictability

[4] A commonly used measure of skill for ENSO predictions is the correlation between the predicted and observed values of various ENSO indices (e.g., Niño3). Considering a collection of initial conditions from which predictions are made, and further assuming that an ensemble of possible predictions arises from each of these initial states due to unavoidable (but small) uncertainty in their precise values, KM99 demonstrated that the maximum correlation skill that can be obtained with respect to a particular ENSO index is given by

r(k) = [\overline{μ_{j}^{2}(k)}/(\overline{μ_{j}^{2}(k)} + \overline{σ_{j}^{2}(k)})]^{1/2}  (1)
where the index j refers to each initial condition from which the ensemble is produced; k is the lead time of the prediction; μ_{j}(k) is the mean value of the ENSO index for the ensemble, while σ_{j}(k) is the standard deviation of the same ensemble. An overbar denotes an average over the different initial conditions (i.e., over all values of j).

[5] KM99 argued that the ensemble mean μ_{j}(k) may be viewed as the “signal” in the predictions, while the ensemble spread σ_{j}(k) can be viewed as the “noise,” as it measures the spread of possible predictions. The prediction skill therefore depends on the signal-to-noise ratio, which is very reminiscent of the traditional concept of potential predictability [e.g., Madden, 1976; Charney and Shukla, 1981; Zwiers, 1987].
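The signal-to-noise skill measure discussed above can be sketched numerically. The sketch below assumes the standard signal-to-total-variance form sqrt(S/(S + N)) implied by the discussion (see KM99 for the precise derivation); the function name and input arrays are hypothetical.

```python
import numpy as np

def max_correlation_skill(mu, sigma):
    """KM99-style upper bound on correlation skill at one lead time.

    mu    : ensemble-mean ENSO index, one entry per initial condition j
    sigma : ensemble spread (std. dev.), one entry per initial condition j

    Assumes the signal-to-total-variance form sqrt(S / (S + N)), where
    S is the mean squared signal and N the mean noise variance.
    """
    s = np.mean(np.asarray(mu) ** 2)     # mean "signal" over initial conditions
    n = np.mean(np.asarray(sigma) ** 2)  # mean "noise" (ensemble variance)
    return np.sqrt(s / (s + n))
```

With zero ensemble spread the bound is 1 (a perfectly reproducible prediction); as the spread grows relative to the signal, the attainable correlation falls.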

[6] To evaluate equation (1), KM99 developed a restrictive theory for the signal term, which assumes that the dynamical propagator matrix, denoted A, is a weak function of the initial conditions, and that the phases of the eigenmodes of A are uncorrelated. Here we evaluate equation (1) using a different approach that avoids these assumptions, based on a linear stochastic dynamical system. It is well recognized that ENSO can be described as a linear, damped, stochastically forced system [e.g., KM99; Kleeman and Moore, 1997]. Therefore, ENSO can be modeled by the following discrete, linear, stochastic equation,

X_{k} = Φ_{k−1}X_{k−1} + w_{k−1}  (2)
where X_{k−1} is the system state vector at time k − 1, Φ_{k−1} is the state transition matrix of the system, and w_{k−1} is white noise, so that E〈w_{k}〉 = 0 and E〈w_{t_{1}}w_{t_{2}}^{T}〉 = δ(t_{1} − t_{2})Q, where Q is a time-invariant matrix.

[7] Equation (2) can be understood as the linear tangent model of any realistic ENSO dynamical prediction [e.g., Chen et al., 2004; Tang and Hsieh, 2002]. The state transition matrix Φ_{k−1} is usually a real, asymmetric matrix. Denoting the eigenvectors and eigenvalues of Φ_{k−1} as P_{k−1} and Λ_{k−1}, we have P_{k−1}^{−1}Φ_{k−1}P_{k−1} = Λ_{k−1}. Here Λ_{k−1} and P_{k−1} may be complex, and Λ_{k−1} is a diagonal matrix.
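The system in equation (2) and its eigendecomposition can be illustrated with a short numerical sketch. The 2 × 2 transition matrix below is purely hypothetical (a damped rotation standing in for Φ, which in practice comes from the linear tangent model of the coupled system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical damped, rotational transition matrix (a toy stand-in for Phi)
phi = 0.98 * np.array([[0.9, -0.3],
                       [0.3,  0.9]])

# Eigendecomposition P^{-1} Phi P = Lambda; generally complex for asymmetric Phi
lam, p = np.linalg.eig(phi)

# Integrate X_k = Phi X_{k-1} + w_{k-1} with white-noise forcing
n_steps, q = 500, 0.1
x = np.zeros((n_steps, 2))
for k in range(1, n_steps):
    x[k] = phi @ x[k - 1] + rng.normal(0.0, np.sqrt(q), size=2)

# Project the trajectory onto eigenmode space: U = P^{-1} X
u = np.linalg.inv(p) @ x.T
```

Because all eigenvalues of this toy Φ have magnitude below one, the system is damped and the trajectory remains bounded, with the noise continually re-exciting the modes.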

[8] Multiplying both sides of equation (2) by P_{k−1}^{−1}, and denoting the new white noise ŵ_{k−1} = P_{k−1}^{−1}w_{k−1}, which has zero mean and white covariance δ(t_{1} − t_{2})Q̂, we have:

P_{k−1}^{−1}X_{k} = Λ_{k−1}U_{k−1} + ŵ_{k−1}  (3)
where U_{k−1} = P_{k−1}^{−1}X_{k−1}. For a large-scale, slowly varying climate system such as ENSO, the difference between the transition matrices Φ at two adjacent times k − 1 and k (in particular, the difference between their few leading eigenmodes, P_{k−1} and P_{k}) will be small and is assumed to be negligible. Equation (3) thus simplifies to:

U_{k} = Λ_{k−1}U_{k−1} + ŵ_{k−1}  (4)
[9] Equation (4) describes the trajectory of the dynamical system of equation (2) in terms of its leading eigenmodes (U = P^{−1}X). According to equation (4), the element u_{k}^{1} of the vector U_{k} corresponding to the first eigenvalue λ_{k−1}^{1} will satisfy

u_{k}^{1} = λ_{k−1}^{1}u_{k−1}^{1} + ŵ_{k−1}^{1}  (5)
u_{k}^{1} is the system state X projected onto the first eigenmode, which explains the largest share of the signal variance and represents the spatial pattern onto which model uncertainties must project in order to maximize error growth over a given time interval. If we use u_{k}^{1} to evaluate μ_{j}(k) of equation (1), we have

E〈u_{k}^{2}〉 = |λ_{k−1}|^{2}E〈u_{k−1}^{2}〉 + q  (6)
Here q denotes the contribution of the stochastic variance projected onto the first eigenmode; the superscript 1 has been dropped in equation (6) for conciseness. Equation (6) also holds for the other eigenmodes.
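A minimal sketch of the variance recursion in equation (6), with hypothetical values for the eigenvalue magnitude and the stochastic variance:

```python
def signal_variance(v0, lam_abs, q, k):
    """Iterate E<u_k^2> = |lambda|^2 * E<u_{k-1}^2> + q for k steps.

    v0      : initial signal variance (the ISV) on one eigenmode
    lam_abs : magnitude of the corresponding eigenvalue (< 1 for a damped mode)
    q       : stochastic variance projected on the same eigenmode
    """
    v = v0
    for _ in range(k):
        v = lam_abs ** 2 * v + q
    return v
```

Because the deterministic part decays like |λ|^{2k}, a larger initial variance v0 maintains a larger signal at every lead time, which is the essence of the ISV argument developed in the following sections.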

[10] The calculation of σ_{j}(k) in equation (1) is more complex (KM99). However, it has been found that σ_{j}(k) typically depends very little on the initial conditions [e.g., KM99; Kleeman and Moore, 1997; Eckert and Latif, 1997]. Our interest here is to measure the variations in predictability associated with uncertain initial conditions. Therefore, in this note we explore only the predictability resulting from the signal μ_{j}(k), as in KM99.

[11] Equation (6) indicates that the signal variance E〈u_{k}^{2}〉 is determined by its value at the previous step (k − 1) and by the eigenvalues of the transition matrix Φ_{k−1} of system (2). For a realistic ENSO dynamical prediction model, Φ_{k−1} and its eigenvalues can be obtained from the linear tangent and adjoint operators of the nonlinear system [e.g., Moore and Kleeman, 1998].

[12] Applying equation (6) iteratively back to the initial time (t = 0), we find that the signal variance at lead time k is proportional to E〈u_{0}^{2}〉, the amplitude of the leading eigenmodes projected onto the initial field. Therefore, the predictability at lead time k can be evaluated by exploring E〈u_{0}^{2}〉, i.e., the amplitude of the leading eigenmodes present in the initial states.

[13] For simplicity, we use the leading POP (Principal Oscillation Pattern) modes instead of the eigenmodes of system (2) to evaluate the signal variance, as in KM99. POP modes are a special case of the eigenmodes of the dynamical system (2) when the system is assumed to be stationary. This is a reasonable approximation for the ENSO system because ENSO evolves slowly. One can show that under this approximation equations (2)–(5) constitute the POP analysis [von Storch and Zwiers, 1999].
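Under the stationarity assumption, the POP propagator can be estimated from lagged covariances of the model output. The sketch below is a generic POP estimate; the data array is a hypothetical (time, space) anomaly matrix standing in for the combined model fields.

```python
import numpy as np

def pop_modes(data):
    """Estimate POP modes from a (time, space) anomaly data set.

    For a stationary system (2), the transition matrix is estimated as
    Phi = C1 @ inv(C0), where C0 and C1 are the lag-0 and lag-1
    covariance matrices; the POPs are the (generally complex)
    eigenvectors of Phi.
    """
    x0, x1 = data[:-1], data[1:]
    c0 = x0.T @ x0 / len(x0)               # lag-0 covariance
    c1 = x1.T @ x0 / len(x0)               # lag-1 covariance
    phi = c1 @ np.linalg.inv(c0)           # estimated propagator
    eigvals, pops = np.linalg.eig(phi)
    order = np.argsort(-np.abs(eigvals))   # leading (least damped) mode first
    return eigvals[order], pops[:, order]
```

Applied to data generated by a known stationary linear system, this estimate recovers the system's eigenvalues to within sampling error.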

[14] With equations (6) and (1), we conclude that model prediction skill depends on the amplitudes of the leading eigenmodes present in the initial conditions, referred to as the initial signal variance (ISV). According to equation (1), the larger the ISV, the higher the skill. We next examine this conclusion using a realistic model of ENSO.

3. The Coupled Model and Its Eigenmodes

[15] The hybrid coupled model used here has quite different oceanic and atmospheric components from those used in KM99. It has been run routinely to produce ENSO forecasts and has demonstrated a reasonable level of skill [Tang and Hsieh, 2002].

[16] POP modes were estimated using the combined fields of SST, upper-ocean heat content (HC), and zonal wind stress from a 100-year run of the coupled model. The leading POP mode describes a typical delayed-action oscillatory mechanism and displays ENSO characteristics similar to those found in KM99 and other work [e.g., Latif and Graham, 1992]. A detailed POP analysis of this coupled model can be found in Tang [2002].

[17] As the propagator matrix of the POP model is usually asymmetric, the POPs do not form a set of orthogonal patterns, so the POP coefficients (u_{k}) are not given by the dot product of the patterns with the original field. This complicates their calculation. One effective method is to compute the adjoint POPs (APOPs), which are the eigenmodes of the adjoint of the POP propagator [Xu and von Storch, 1990]. By definition, the APOPs and the conventional POPs form a biorthogonal set, so the POP coefficients can be obtained from the dot product of the APOPs with the original field.
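The biorthogonality relation can be sketched as follows; the propagator here is a hypothetical 2 × 2 stand-in for the estimated POP propagator:

```python
import numpy as np

# Hypothetical asymmetric POP propagator
phi = np.array([[0.8, -0.4],
                [0.2,  0.7]])

_, pops = np.linalg.eig(phi)      # POPs: right eigenvectors of phi

# APOPs: eigenvectors of the adjoint (transpose) of phi, scaled so that
# they are biorthonormal to the POPs. Taking the rows of inv(pops) as
# columns yields exactly this set: apops.T @ pops = identity.
apops = np.linalg.inv(pops).T

# POP coefficients of a state x then follow from plain dot products
x = np.array([1.0, 2.0])
u = apops.T @ x                   # identical to inv(pops) @ x
```

The biorthogonality trick avoids explicitly inverting the pattern matrix for every state vector: once the APOPs are stored, each projection is a simple dot product.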

4. Model Predictability and ISV

[18] The ocean model was forced with the observed FSU wind stress from 1961 to 1998. From April 1964 onward, at three-month intervals (1 January, 1 April, 1 July, 1 October), the states of this so-called control run were selected as the initial conditions for predictions. A detailed discussion of the predictive skill of this model is presented in Tang and Hsieh [2002].

[19] Figure 1a shows the ISV present in the initial conditions. The heat content anomaly (HCA) of the upper 250 m was chosen as the measure of ISV because it is the primary source of memory for the coupled system and is important for ENSO dynamics. As can be seen, large values of ISV mainly reside in a few predictions, such as those of the 1973/74, 1983, and 1997 ENSO events. For many other initial conditions, the ISV is small.

[20] To explore the relation between the ISV and prediction skill, we examine the contribution (denoted C) of each prediction to the correlation skill r,

C_{j}(t) = T_{j}^{p}(t)T_{j}^{o}(t)/N  (7)
where T denotes the normalized Niño3 SSTA index with zero mean; superscript p denotes prediction and o observation; t denotes the lead time of the prediction; and N is the number of samples used to calculate r.
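For normalized, zero-mean series, the correlation is just the average product of predicted and observed values, so each prediction's contribution can be read off directly. A minimal sketch (the input arrays are hypothetical):

```python
import numpy as np

def skill_contributions(t_pred, t_obs):
    """Contribution C_j of each prediction to the correlation skill r.

    t_pred, t_obs : normalized (zero-mean, unit-variance) Nino3 SSTA
    index values at one lead time, one entry per prediction j.
    Since r = (1/N) * sum_j t_pred[j] * t_obs[j] for normalized series,
    each prediction contributes C_j = t_pred[j] * t_obs[j] / N.
    """
    n = len(t_pred)
    return t_pred * t_obs / n      # the contributions sum to r
```

Summing the returned contributions recovers the Pearson correlation between the two normalized series.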

[21] Figure 2 shows the variation of C with lead time and initial time. A striking feature of Figure 2 is the large variation of C with initial conditions. While some initial conditions lead to good predictions that contribute significantly to r, most initial conditions correspond to a very small C. On the other hand, the variation of C with lead time is small, although C increases slightly with lead time in some cases; this is reminiscent of the fact that the initial conditions play a critical role in ENSO prediction skill at all lead times.

[22] Comparing Figure 1a with Figure 2 reveals that a large ISV generally corresponds to a large C. This is particularly true for several typical ENSO events. For example, the ISV is far larger in the predictions initialized in 1973, 1983, and 1997/1998 than at other times. Correspondingly, the accumulated contributions C to r(t) from these predictions exceed 35% for a lead time of 6 months. The good relationship between ISV and C can also be demonstrated by their correlation coefficients, which are all over 0.6 for lead times of 1–6 months (not shown).

[23] Figure 3a compares the correlation skills for Niño3 SSTA between two groups of predictions: the first with initial conditions of ISV greater than 0.1 (dotted line), and the second with ISV less than 0.1 (solid line). There are 63 predictions in the first group and 76 in the second during the period 1964–1998. It is apparent that the prediction skill with initial conditions of large ISV is significantly higher than that with initial conditions of small ISV, in particular for lead times under 6 months. To further explore the contribution of large ISV to overall prediction skill, we also recalculated the prediction skill with the predictions of ISV ≥ 0.4 (27 in total) removed. The overall model prediction skill in this case decreases significantly (not shown).
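The grouping used above can be sketched as follows, with synthetic arrays standing in for the real predictions; the 0.1 threshold is the one quoted in the text:

```python
import numpy as np

def skill_by_isv(isv, t_pred, t_obs, threshold=0.1):
    """Correlation skill computed separately for large- and small-ISV groups.

    isv           : initial signal variance of each prediction
    t_pred, t_obs : predicted and observed index values at one lead time
    threshold     : ISV cutoff separating the two groups
    """
    large = np.asarray(isv) > threshold
    r_large = np.corrcoef(t_pred[large], t_obs[large])[0, 1]
    r_small = np.corrcoef(t_pred[~large], t_obs[~large])[0, 1]
    return r_large, r_small
```

On synthetic data in which the large-ISV predictions track the observations closely while the small-ISV predictions are essentially noise, the large-ISV group shows markedly higher correlation skill, mirroring the behavior reported for Figure 3a.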

[24] Figure 3b shows the RMSE (root-mean-square error) of each prediction during the period 1964–1998. The RMSE was calculated against both the observed Niño3 SSTA and the Niño3 SSTA simulated by a control run forced with observed wind stress; the latter measures the model's perfect predictability. As can be seen, when the ISV is large (small), the RMSE is often small (large). This is particularly obvious for several typical ENSO events, such as 1973, 1982/83, 1989, and 1997.

5. Data Assimilation and ISV

[25] It is of interest to explore the relationship between data assimilation and ISV. As argued above, ISV dominates model prediction skill; therefore, data assimilation should increase the value of the ISV. To examine this, Figure 1b shows the ISV present in the initial conditions that were generated with the assimilation of HCA for the period 1981–1998. The assimilation scheme used here is a three-dimensional variational (3DVar) algorithm. With this scheme, HCA assimilation can significantly improve the predictive skill of this model [Tang and Hsieh, 2003].

[26] Comparing Figure 1a with Figure 1b reveals that the ISV from the initial fields with data assimilation is significantly larger than its counterpart without data assimilation (note the different ISV scale). This is consistent with the difference in the model prediction skills with and without HCA assimilation as shown in Figure 4a, indicating the key role of ISV in determining the model predictability. This also suggests that an important contribution of data assimilation might be to increase the value of the ISV of the initial fields.

[27] Another interesting feature of Figure 1b is that the ISV increases significantly only for a few initial conditions, such as 1982/83 and 1997/98. For many other initial conditions, the ISV changes little with data assimilation. This implies that the contribution of data assimilation to model skill may be due mainly to a few cases. To examine this, we recomputed the model prediction skill after removing the 9 predictions with large ISV, as shown in Figure 4b. Clearly, there are no significant differences in prediction skill between the cases with and without data assimilation once the selected predictions are removed. This also confirms the conclusion drawn in the previous sections, i.e., model prediction skill may be due mainly to a few prediction cases.

6. Discussion and Summary

[28] A key issue in ENSO predictability studies is to measure the reliability of predictions. By applying a linear theoretical framework, we have examined this issue using a realistic hybrid coupled model. It was found that there is a good relationship between prediction skill and the initial signal variance (ISV) in eigenmode space. Overall, when the ISV is large, the corresponding prediction was found to be reliable, whereas when the ISV is small, the prediction is less likely to be reliable. This finding has practical significance, since it suggests that the reliability of ENSO predictions can be estimated very cheaply a priori using the ISV, without the need for expensive ensembles of forecasts. This work also explains the recent results of Chen et al. [2004], who found that the periods with the highest overall scores are dominated by strong and regular ENSO events, whereas lower skill usually corresponds to periods with fewer and weaker events to predict.

[29] An interesting question is why there is such a good relationship between ISV and model predictability. One possible explanation is that the ISV is the amplitude of the leading POP mode present in the initial conditions, which describes the delayed-action oscillator mechanism [Tang, 2002]. A large ISV therefore corresponds to a strong delayed-action oscillator signal residing in the initial fields, leading to a reliable prediction. It has been found in the literature that delayed-action oscillator signals dominate model predictability for many ENSO models.

[30] Another finding is that model prediction skill may be due mainly to a few successful predictions with large ISV. The role of data assimilation in improving model skill may result mainly from increasing the ISV present in the initial conditions, especially for those initial conditions that already have large ISV.

[31] It should be noted that the theoretical framework developed in this study is based on a linear stochastic dynamical system that has dominant and persistent eigenmodes. The ENSO system is approximately such a case. For more general systems, a generalized framework for climate predictability studies should be applied, as in Kleeman [2002].