Recent developments in statistical time series analysis: Examples of use in climate research

Abstract

[1] In this paper we present some recently developed time series analysis methods and apply them to a suite of climatological and synthetic time series. We show what information (and statistical significance) can be drawn from such time series that would otherwise, i.e. by simpler methods, be difficult to extract. We conclude by recommending the use of advanced statistical time series analysis for a wide range of applications connected to studies of climate variability and climate change.

1. Introduction

[2] In climate research, time series analyses are important for quantifying changes that can be interpreted as being ‘significant’ over specific time scales. In some cases the climate signal, as for example represented by proxy temperature through a glacial-interglacial transition, is obvious [Petit et al., 1999]. However, climate-related time series usually contain a complicated combination of information at different time scales, some of the signals being much more difficult to detect [Mann et al., 1999]. The techniques presented in this paper are developed in the time domain, whereas Ghil et al. [2002] and Torrence and Compo [1998] focus on spectral domain and wavelet techniques, respectively. Our approach is a scale-space technique based on the method developed by Chaudhuri and Marron [1999], which addresses detection of significant features in one-dimensional signals disturbed by independent, random noise. Their kernel-based smoothing approach is entitled SiZer (Significant Zero crossings of derivatives). A two-dimensional version of SiZer, entitled Significance in Scale-Space (SSS), is developed in Godtliebsen et al. [2002]. For methods using kernel-based smoothing techniques on climate data, see Rajagopalan and Lall [1995].

[3] A common factor for these applications is that they attempt to find the significant features present in the observed data. The features found typically depend on the level of detail at which the time series is considered. An example is the potential global warming effect observed during the last few decades: on a much longer time scale, this recently observed warming would barely be detected, if at all.

[4] A major problem in the practical use of smoothing methods is: Which features observed in a smooth are really there? Smoothing experts can usually answer this, but even for them grey areas exist where quantification would be helpful. The main purpose of our approach is to speed up the process for experienced smoothers in deciding which features are really present.

[5] Scale-space approaches use a very wide range of bandwidths, thereby avoiding the need to choose a single bandwidth. This is clearly an advantage when studying climate records (i.e. time series) that exhibit variations at different time scales.

[6] Scale-space approaches using wavelets represent an interesting alternative to our approach. The concept of scale in our approach is the same as in wavelet analysis in the sense that the scale can easily be related to physical scale in the time domain for both methods [Chaudhuri and Marron, 1999; Percival and Walden, 2000]. Readers interested in more details about the wavelet scale-space technique are referred to Torrence and Compo [1998] and Percival and Walden [2000].

[7] In this short paper, we present a small selection of climate-related parameters that cover different time scales. Further, we apply different advanced statistical methods to these time series to show the strength of using objective tools for a more accurate interpretation of climate variability.

2. Description of Methodology

2.1. SiZer

[8] For the reader's convenience, we illustrate the use of scale-space ideas by means of SiZer. The data points are given as dots in the upper panel of Figure 1 and show the annual maximum snow depth in Tromsø, Norway, from 1920 to 2000. Since our main aim with this example is to illustrate the SiZer methodology, we shall assume that the observed annual maximum snow depths are independent random variables (although this assumption may be debatable). The relevant nonparametric regression problem is then to use data of the form

$$y_i = m(x_i) + \epsilon_i, \qquad i = 1, \ldots, n, \qquad (1)$$

where m(x) is the target curve. Here, we assume that the x_i are equally spaced on the range of x, that m is smooth, and that the ϵ_i are independent Gaussian variables with mean 0 (which makes m the regression curve of y_i on x_i) and variance Var(ϵ_i) = σ_i². At each location in time, a local linear kernel estimator is used to produce smooths of the observed signal. More precisely, at point x_j, the smoothed estimate m̂_h(x_j) equals the fitted value of α_0, where (α_0, α_1) are chosen to minimize

$$\sum_{i=1}^{n} \left\{ y_i - \left[ \alpha_0 + \alpha_1 (x_i - x_j) \right] \right\}^2 K_h(x_i - x_j). \qquad (2)$$

In (2), K_h(·) = (1/h) K(·/h), where K is a kernel function chosen as a unimodal probability density function that is symmetric around zero. The parameter h, frequently called the bandwidth, controls the degree of smoothness in the estimate m̂_h. Smooths obtained by the local linear kernel estimator are given as the thin solid curves in the top graph of Figure 1. The thick solid smooth is the result obtained by using the Sheather and Jones [1991] estimate of h in the local linear kernel estimator. Since the top panel of Figure 1 shows a family of smooths, it is denoted a family plot. In SiZer, the notion of scale is controlled through the bandwidth in the kernel estimator. For each scale and location of the signal, a test is performed to see whether the smooth has a derivative significantly different from zero. In the local linear kernel estimator this means testing whether α_1 ≠ 0 in (2) for each (x, h) location. The main idea of SiZer is that significant features are found at different scales, i.e. at different levels of smoothing. In the lower part of Figure 1 a SiZer map is given as a function of location and scale. A significantly positive derivative is flagged as black, while a significantly negative derivative is flagged as light grey. Grey is used at locations where the derivative is not found to be significantly different from zero. Dark grey is used to indicate that too few data are available to do inference. Typically the dark grey colour occurs at very small scales for SiZer, as in our example where areas with log10(h) less than 0.4 are coloured dark grey. This corresponds to a time scale of approximately 7 years. For intermediate scales, i.e. log10(h) values in the interval [0.4, 0.9], significant features are found. More precisely, a significant decrease in the curve is found from 1920 to around 1927 and a significant increase is found from 1935 to 1940. At these intermediate levels of resolution, these are the only features found, since grey, indicating a derivative of zero, is shown for all other years. For large scales, i.e. log10(h) larger than 1.5, all locations are classified as having a significantly positive derivative. If the assumption of independent observations is accepted for this data set, the following interpretations can be made. For intermediate levels it appears that the maximum snow depth was significantly lower in the period 1925 to 1935 than in the rest of the record. At large scales, the analysis shows an increasing amount of snow, indicating a change in climate, here manifested as higher precipitation in winter. Over the time period from 1920 to 2000, the maximum snow depth in Tromsø has increased by 70%.
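To make the procedure concrete, the following Python sketch implements a pointwise version of this idea: a local linear fit with a Gaussian kernel at each (location, bandwidth) pair, flagging points where the estimated local slope exceeds roughly two standard errors. The function names are hypothetical, the noise variance is estimated crudely from first differences, and the simultaneous-inference adjustment used in the published SiZer is omitted, so this should be read as an illustration rather than an implementation of the method.

```python
import numpy as np

def local_linear_slope(x, y, x0, h, sigma2):
    """Local linear fit of y on x around x0 with Gaussian kernel weights of
    bandwidth h. Returns the estimated local slope (alpha_1 in (2)) and a
    rough standard error assuming independent noise with variance sigma2."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2) / h          # K_h(x_i - x0)
    X = np.column_stack([np.ones_like(x), x - x0])      # local design matrix
    A = X.T @ (w[:, None] * X)                          # X' W X
    coef = np.linalg.solve(A, X.T @ (w * y))            # (alpha_0, alpha_1) estimates
    B = X.T @ (w[:, None] ** 2 * X)                     # X' W^2 X
    cov = np.linalg.solve(A, B) @ np.linalg.inv(A) * sigma2
    return coef[1], np.sqrt(cov[1, 1])

def sizer_map(x, y, bandwidths, z=1.96):
    """Crude SiZer-style map: +1 / -1 where the local slope is significantly
    positive / negative (pointwise, roughly 5% level), 0 otherwise."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sigma2 = np.var(np.diff(y)) / 2.0                   # difference-based noise estimate
    flags = np.zeros((len(bandwidths), len(x)), dtype=int)
    for i, h in enumerate(bandwidths):
        for j, x0 in enumerate(x):
            slope, se = local_linear_slope(x, y, x0, h, sigma2)
            if se > 0 and abs(slope) > z * se:
                flags[i, j] = int(np.sign(slope))
    return flags
```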

Figure 1.

SiZer analysis of annual maximum snow depth in Tromsø.

2.2. SiNos

[9] One serious limitation of the SiZer methodology is that it is designed to handle independent observations, an assumption frequently violated for time series. It is therefore important to have scale-space methods that can handle dependent data. Recall from Section 2.1 that SiZer assumes the model described in (1), where there is no stochastic dependence between different points of the time series. Our new methodology is entitled Significant Non-stationarities (SiNos). It is designed to handle time series where there is stochastic dependence between different data points, and it explores potential non-stationarities in a stochastic process. SiNos simultaneously looks for significant changes in the mean, the variance, and the first lag auto-correlation of the observed time series, with the null hypothesis being that the process is stationary.

[10] For a given scale (window width) and location in the time series, a change in, e.g., the mean is claimed to be present if the means computed over the windows to the right and to the left of the location are significantly different. Tests for changes in the variance and the first lag auto-correlation are performed in a similar way.
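A minimal sketch of such a window test for the mean is given below, using a simple two-sample z-statistic between the left and right windows. It assumes the observations within each window are roughly independent; the actual SiNos test additionally accounts for serial dependence, so the sketch is only meant to convey the structure of the procedure.

```python
import numpy as np

def mean_change_flags(y, width, z=1.96):
    """For each interior time point t, compare the mean of the 'width' values
    to the left of t with the mean of the 'width' values to the right.
    Returns +1 for a significant increase, -1 for a significant decrease,
    and 0 otherwise; repeating this over several widths gives one row of a
    SiNos-style map for the mean."""
    y = np.asarray(y, dtype=float)
    flags = np.zeros(len(y), dtype=int)
    for t in range(width, len(y) - width):
        left, right = y[t - width:t], y[t:t + width]
        diff = right.mean() - left.mean()
        se = np.sqrt(left.var(ddof=1) / width + right.var(ddof=1) / width)
        if se > 0 and abs(diff) > z * se:
            flags[t] = int(np.sign(diff))
    return flags
```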

[11] The underlying model of the observed time series y_t in the top panel of Figure 2 is an auto-regressive model of first order (i.e. AR(1)) with varying mean and first lag auto-correlation. More precisely, the model is

$$y_t - \mu_y = \phi \, (y_{t-1} - \mu_y) + \epsilon_t, \qquad t = 1, \ldots, n, \qquad (3)$$

where y_t is the observed process at n different times, μ_y is the mean of the observed process, ϕ is a parameter in the interval (−1, 1), and ϵ_t is a white noise process with mean zero and variance σ². In (3), we use the notational convention y_0 = 0. The first and last 1500 observations of the time series in Figure 2 have μ_y equal to 0 and 0.2, respectively. The parameter σ² equals 0.1 for all values of t. For the first, middle, and last 1000 observations, ϕ equals 0.6, 0.1, and 0.8, respectively. In the lower panel of Figure 2, a SiZer plot of this situation is shown. Clearly, the change in the mean at t = 1500 is detected. Note, however, that several features are incorrectly flagged as significant at small scales. For this example it is therefore clear that SiZer fails to give a good description of the underlying stochastic process. A SiNos plot of the mean of the data in Figure 2 is given in Figure 3. The plots in Figure 3 should be interpreted in a way analogous to that described for SiZer in Section 2.1. In the lower part of Figure 3, black is used to indicate a significant increase in the mean, while white indicates a significant decrease. Areas where there is no significant change in the mean are depicted in grey. For light grey areas, no inference is performed. Note that the change in the mean at t = 1500 is detected on all scales. There are a few spurious significant features between t = 2200 and t = 2600. Spurious features of this type typically appear in both SiNos and SiZer plots. In a practical situation, one should of course consider carefully whether features detected only at one scale and one point are really there. Note that many more significant features were found in the SiZer plot than in the SiNos plot of the mean. The reason for this is that SiZer treats the data as independent and, hence, assumes that the data set contains much more information than it actually does. A good recommendation is therefore to be critical of features found by SiZer in dependent data. In Figure 4, a SiNos plot of the auto-correlation is shown. From this plot, the changes in ϕ are clearly detected at a wide variety of scales. When the window width is 246, a spurious significance is detected around t = 1750. If this were an observed data set, one would typically be reluctant to claim that this feature really exists. A SiNos plot of the variance also detects the changes in statistical properties at t = 1000 and t = 2000, but this plot is not shown here to save space.
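The synthetic series of Figure 2 can be generated along the following lines. This is a sketch under the parameter values stated above; exactly how the mean and ϕ are handled at the change points is our assumption and may differ in detail from the original experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000
mu  = np.where(np.arange(n) < 1500, 0.0, 0.2)   # mean: 0 for the first 1500 points, 0.2 afterwards
phi = np.repeat([0.6, 0.1, 0.8], 1000)          # AR(1) coefficient per 1000-point segment
eps = rng.normal(0.0, np.sqrt(0.1), n)          # white noise with variance 0.1

y = np.zeros(n)
y_prev, mu_prev = 0.0, 0.0                      # convention y_0 = 0
for t in range(n):
    # (3): y_t - mu_t = phi_t * (y_{t-1} - mu_{t-1}) + eps_t
    y[t] = mu[t] + phi[t] * (y_prev - mu_prev) + eps[t]
    y_prev, mu_prev = y[t], mu[t]
```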

Figure 2.

SiZer analysis of the synthetic data example.

Figure 3.

SiNos analysis of the mean for the synthetic data example.

Figure 4.

SiNos analysis of the first lag auto-correlation for the synthetic data example.

[12] In our final example, we study the reconstructed Northern Hemisphere temperatures for the past millennium [Mann et al., 1999]. These observations are given as dots in the upper panel of Figure 5.

Figure 5.

SiZer analysis of Northern Hemisphere temperatures.

[13] By visual inspection of these data it appears that there has been a decrease in temperature over the period from year 1000 to approximately 1900. At the end of the period, i.e. from 1900 until today, there seems to be a clear increase in temperature. A crucial question is whether these changes are statistically significant. SiNos is designed to answer questions of this type; in particular, changes on different scales can be detected. Family and SiNos plots of the mean were examined first. These are shown in the upper and lower panels of Figure 6, respectively. From Figure 6 it is clear that the above-mentioned potential decrease in temperature is indeed significant on time scales of 300 years and more. In addition, the much more recent potential increase is detected as significant on time scales of 30 to 50 years. From the SiNos analysis of the first lag auto-correlation, we found that the observed data are positively correlated; over large parts of the 1000-year period, the data have a first lag auto-correlation above 0.5. Given the lessons learnt from the synthetic dependent-data example described above, it is clear that SiZer may flag spurious details as significant at small scales when we are dealing with dependent data. Because of this, it is generally not recommended to apply SiZer to a data set as strongly correlated as the present one. We have nevertheless used SiZer to illustrate its behaviour. The outcome of the SiZer analysis of the Northern Hemisphere temperature data is presented in the lower panel of Figure 5. In this particular example it turns out that all significant features identified in Figure 5 match corresponding climate fluctuations reported by Mann et al. [1999]. In general, however, SiZer typically detects too many features for dependent data, whereas for independent data SiZer is superior to SiNos. Finally, it should be pointed out that SiNos can detect types of non-stationarity (e.g. changes in the first lag auto-correlation) that SiZer cannot. This is an important advantage of SiNos compared to SiZer.
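As a simple diagnostic of this kind of dependence, one can estimate the lag-1 autocorrelation in sliding windows before deciding whether an independence-based method such as SiZer is appropriate. The sketch below is our own illustration of such a check, not the SiNos estimator itself.

```python
import numpy as np

def windowed_lag1_autocorr(y, half_width):
    """Lag-1 sample autocorrelation computed in a sliding window of
    2*half_width observations centred at each time point; values well
    above zero indicate serial dependence that SiZer does not account for."""
    y = np.asarray(y, dtype=float)
    acf = np.full(len(y), np.nan)
    for t in range(half_width, len(y) - half_width):
        w = y[t - half_width:t + half_width]
        d = w - w.mean()
        denom = np.sum(d * d)
        if denom > 0:
            acf[t] = np.sum(d[1:] * d[:-1]) / denom
    return acf
```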

Figure 6.

SiNos analysis of the mean for Northern Hemisphere temperatures.

3. Conclusion

[14] We have applied advanced statistical time series analysis to some selected climatological time series. These examples emphasize the need for objective tools to identify real (i.e. statistically significant) features in data sets containing complicated climate variability. By using such methods we are able to identify the time scales at which the significant features appear. We suggest that climate researchers make more frequent use of such methodology in their studies.
