
Keywords:

  • autocorrelation;
  • autoregressive model;
  • stationary;
  • trend;
  • Yule-Walker equation;
  • autoregressive neural network;
  • modular neural network

Abstract


The present paper adopts an autoregressive approach to inspect the time series of monthly maximum temperature (Tmax) over northeast India. Through autocorrelation analysis the Tmax time series of northeast India is identified as non-stationary, with a seasonality of 12 months, and it is also found to show an increasing trend by both parametric and non-parametric methods. Autoregressive models of the reduced Tmax time series, which becomes stationary on removal of the seasonal and trend components from the original series, were generated through the Yule–Walker equations. The sixth-order autoregressive model, AR(6), is identified as a suitable representative of the Tmax time series based on the Akaike information criterion, and the prediction potential of AR(6) is established statistically through Willmott's indices. Subsequently, autoregressive neural network models were generated as a multilayer perceptron, a generalized feed-forward neural network and a modular neural network. An autoregressive neural network model of order four in the form of a modular neural network, AR-NN(4)-MNN, performed comparably to AR(6), with high values of Willmott's indices and low prediction errors. AR-NN(4)-MNN is therefore a better option than AR(6) for forecasting the monthly Tmax time series of northeast India, because it requires fewer predictors for a comparable or superior forecast. Copyright © 2010 Royal Meteorological Society


1. Introduction


The observed and the projected global warming in the 20th and 21st centuries has affected and will continue to affect agriculture, the hydrological cycle, environmental conditions and ecological systems (Lianchun et al., 2007). The average air temperature at the surface of the Earth is the most frequently used parameter for sensing the state of a climatic system (Ceschia et al., 1994). Different forcing actions of external factors, such as the carbon dioxide concentration (greenhouse effect), the amount of solar radiation and the particulates reaching the stratosphere due to volcanic eruptions, significantly influence the surface temperature (Cracknell and Varotsos, 2007; IPCC, 2007). In addition, advective processes exerted by atmospheric circulation are crucial factors that control regional air temperature changes (Xoplaki et al., 2003). The forecasting of surface temperature has long been an area of interest to the scientific community in general, and to meteorologists in particular. Since the late 1930s, different statistical methodologies have been attempted to forecast the surface temperature on hourly (Spreen, 1956), daily (Mantis and Dickey, 1945; Gilbert, 1953), monthly (Kangieser, 1959), and seasonal (Van Loon and Jenne, 1975; Kumar et al., 1997) time scales. Namias (1948) was among the first to state that the mean monthly geopotential height fields for mid-tropospheric levels determine monthly air temperature anomalies.

The non-linear nature of the relationship between changes in mean temperature and the corresponding changes in extreme temperature events is well documented in the literature (Mearns et al., 1984; Meehl et al., 2000), and the major impact of relatively small alterations in the mean state on the probabilities of extreme events has been identified (Griffiths et al., 2005). The studies mentioned above were based on simple linear statistical approaches. Some improved methodologies in this direction include Klein and Lewis (1970), Klein et al. (1971), Klein and Marshall (1973), whilst authors such as Calvo and Gregory (1994), Massie and Rose (1997), and Vil'fand et al. (2007) adopted the regression approach in various forms to forecast surface air temperature.

Studies on long-term variations in surface air temperature have shown a rising trend during the last few decades (Hingane et al., 1985; Willmott and Matsuura, 1995; Shrestha et al., 1999), and various authors have emphasized the need for proper forecasts of the surface temperature (e.g. Hussain, 1984; Rehman et al., 1990; Said, 1992). Tasadduq et al. (2002) mentioned the importance of acquiring the knowledge of the variability of surface ambient temperature in weather forecasting, surface budget studies, total solar radiation estimation, cooling and heating degree-days calculations, micrometeorological studies, initialization of planetary boundary models, calculation of thermal load on buildings, air pollution studies and upper air heating rate calculations. Luterbacher et al. (2004) discussed the evolution of European winter, summer and annual mean temperatures for more than 500 years in the context of estimated uncertainties, emphasizing the trends, spatial patterns for extreme summers and winters, and changes in both extreme and mean conditions. Elliott and Angell (1987) studied the relation between Indian monsoon rainfall, the southern oscillation and the hemispheric air and sea surface temperatures, and revealed that the correlations between sea surface temperature and the monsoon rainfall are higher than the correlations between monsoon rainfall and the various pressure indices over northeast India. Kumar et al. (1999) suggested that the inverse relationship between the El Niño-Southern Oscillation (ENSO) and the Indian summer monsoon (weak monsoon arising from warm ENSO event) has broken down in recent decades after analysing a 140 year historical record. Frias et al. (2005) discussed the limitations of general circulation models and subsequently applied a multi-model ensemble system to investigate the predictability of monthly average maximum temperature, and they found that the results were not dependent upon model formulation. Mandal et al. (2007) discussed the association of sea surface temperature with genesis of a severe storm over India and concluded that the sea surface temperature and its gradient have a significant impact on modulating the intensity of the storm, and the peak intensity of the storm reached over the warmest sea surface.

The present paper is concerned with the monthly maximum temperature over northeast India. Most of northeast India and much of north India are subject to a humid sub-tropical climate. Northeast India refers to the easternmost region of India consisting of the states of Arunachal Pradesh, Assam, Meghalaya, Manipur, Mizoram, Nagaland, Tripura, Sikkim and parts of North Bengal (districts of Darjeeling, Jalpaiguri and Koch Bihar). Weston (1972) studied the flow pattern over northeast India and revealed that solar heating has a positive impact upon cumulus and cumulonimbus convection over northeast India during the pre-monsoon season, March, April and May. The early cloud observations over the Indian landmass reported that the premonsoon convection in northeast India is more intense than the monsoonal convection (Ludlam, 1980; Chaudhuri and Chattopadhyay, 2001; Zuidema, 2003; Yamane and Hayashi, 2006). The convective intensity may be aided by the presence of mid-tropospheric dry air, which increases downdraft evaporation and, thereby, the intensity of the cold pool. The convective activities over northeastern India have been studied by Pattanaik (2007), and several authors (e.g. Peterson and Mehta, 1995; Yamane and Hayashi, 2006) have documented that severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur in northeastern India during the pre-monsoon season. The role of the maximum surface temperature in the genesis of pre-monsoon thunderstorms has been established by Chaudhuri and Chattopadhyay (2005). It can, therefore, be surmised that a forecast model for maximum temperature may be a good contribution to the forecasting of thunderstorms. Moreover, the univariate modelling approach adopted in the present research depends upon the past values of the same time series, i.e. the monthly maximum temperature. 
Therefore, without the help of any other climatic predictor, the present modelling can be used for forecasting the future values of temperature over northeast India.

Northeast India has a great economic dependence on crops such as paddy, tea and forest products. Rainfall, evaporation, transpiration and evapotranspiration are vital components of the hydrological cycle and are significant for irrigation processes and agricultural practices. Temperature, along with other climatic parameters such as sunshine duration, wind speed and humidity, affects the processes of evaporation and evapotranspiration. No significant trend in rainfall has been observed in the northeast region of India as a whole (Das and Goswami, 2003; Das, 2004). However, a significant decreasing trend in seasonal rainfall at a rate of 11 mm decade−1 was reported in the South Assam Meteorological Sub-division, covering the hilly states of Nagaland, Manipur, Mizoram, Tripura and parts of the Barail Hills in southern Assam, during the last century (Mirza et al., 1998; Das, 2004). Jhajharia et al. (2009) examined trends in total rainfall by using the Mann–Kendall non-parametric test for 11 sites in northeast India over different durations (yearly, winter, pre-monsoon, monsoon and post-monsoon seasons), and reported that, except for increasing trends at Agartala in winter and for the yearly and pre-monsoon season data at Chuapara, and decreasing trends at Nagrakata in yearly and monsoon rainfall, no statistically significant trends in yearly or seasonal rainfall were observed at most of the sites of northeast India. Das (2004) observed that the mean maximum temperature is rising at a rate of 0.11 °C decade−1. The annual mean temperature is also reported to be rising at a rate of 0.04 °C decade−1 in the northeast region of the country (Pant and Rupa Kumar, 1997; Das, 2004). Jhajharia et al. (2009) also reported that five and six sites of northeast India observed statistically significant increasing trends in maximum temperature in the monsoon and post-monsoon seasons, respectively, obtained through the Mann–Kendall (MK) test at the 5% significance level. However, 10 out of the 11 sites of northeast India observed no significant trends in maximum temperature in the winter and pre-monsoon seasons, and 10 and 9 sites, respectively, witnessed no significant trend in minimum temperature in the winter and pre-monsoon seasons through the MK test at the 5% level of significance. Nevertheless, almost half of the sites analysed witnessed increasing trends in minimum temperature in the monsoon season over the northeast region of India (Jhajharia et al., 2009). Tea is one of the main cash crops of northeast India, and the relationship between temperature and tea yield is discussed in Wijeratne (1992) and Ghosh Hajra and Kumar (1999). The present study, therefore, may also help in agrometeorological modelling of tea.

The organization of the paper is as follows. In Section 2 the autocorrelation structure of the monthly maximum temperature time series is investigated and the trend in the time series is tested using parametric as well as non-parametric approaches. Subsequently, the time series is deseasonalized and detrended. In Section 3, autoregressive models are generated using the Yule–Walker equations. Development of the autoregressive neural network models is described in Section 4, and a comparative statistical analysis of all of the models is presented in Section 5. Conclusions are presented in Section 6.

2. Statistical features of the data


2.1. Analysis of the autocorrelation structure

The present paper has explored the monthly maximum temperature data over northeast India (Figure 1) prepared by the Indian Institute of Tropical Meteorology (IITM), Pune, India. The data period is 1901–2003, which contains 1236 months. Details of the data are available on the website of IITM, Pune, and in Kothawale and Rupa Kumar (2005). The region pertaining to the dataset lies in 22–29°N and 84–97°E. The autocorrelation function (ACF) is a very useful tool in analysing the structure of a climatological time series (e.g. Delleur and Kavvas, 1978; Zwiers and Storch, 1995); details of the ACF are available in Box et al. (2007). The memory effect has been established in time series of various geophysical quantities such as CO2, solar flux and atmospheric ozone; some examples in this area can be found in Varotsos and Cracknell (2004), Varotsos (2004) and Varotsos et al. (2007). The ACF has been computed up to lag 96 for the monthly maximum temperature time series over northeast India during the period 1901–2003. The ACF is presented in Figure 2, which shows significant positive and negative spikes at regular intervals. Furthermore, the ACF regains its pattern in every 12-month interval: thus, a periodicity of 12 months in the time series is observed. The ACF exhibits a sinusoidal characteristic but does not decay to 0 with increasing lag. This means that the time series repeats its pattern every 12 months; its statistical properties vary with time and thus it is non-stationary. The high positive lag-1 autocorrelation indicates that the values depend statistically on their own past or future values. In the terminology of the atmospheric sciences, this dependence through time is usually known as persistence. Persistence can be defined as the existence of (positive) statistical dependence among successive values of the same variable, or among successive occurrences of a given event (Wilks, 2006). This type of periodicity is not always present in meteorological/geophysical time series. Chattopadhyay (2007) and Chattopadhyay and Chattopadhyay (2008) found that there is no periodicity or persistence in the summer monsoon rainfall time series over India. Chattopadhyay and Chattopadhyay (2009a, 2009b) found a periodicity in the monthly total ozone time series over a region of India; however, the ACF gradually tends to 0. Chattopadhyay et al. (2009) found that no periodicity exists in the potential evapotranspiration time series over Gangetic West Bengal in India.
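The ACF diagnostic described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: since the IITM Tmax data are not reproduced here, an assumed synthetic monthly series with a 12-month sinusoidal cycle and a weak linear trend stands in for them.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag (the biased
    Box-Jenkins estimator)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)  # lag-0 autocovariance
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / (len(x) * c0)
                             for k in range(1, max_lag + 1)])

# Synthetic stand-in for a monthly Tmax series: annual cycle plus weak trend.
t = np.arange(360)
series = 28.0 + 3.5 * np.sin(2 * np.pi * t / 12) + 0.001 * t

r = acf(series, 24)
# A 12-month periodicity shows up as a strong positive spike at lag 12 and
# a strong negative spike at lag 6, and the ACF does not die out with lag.
```

A non-stationary seasonal series like this one keeps regaining high positive autocorrelation at multiples of 12 months, which is exactly the pattern seen in Figure 2.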

Figure 1. Location map of northeast India

Figure 2. Autocorrelation function (ACF) of the monthly maximum temperature time series for the period 1901–2003 over northeast India. Confidence limits are shown by the horizontal lines

2.2. Testing trend in the time series

The purpose of trend testing is to determine if the values of a random variable generally increase (or decrease) over some period in statistical terms (Onoz and Bayazit, 2003; Wu et al., 2007). Parametric or non-parametric statistical tests can be used to decide whether there is a statistically significant trend. Any test for trend is based on testing the null hypothesis H0, that there is no trend against the alternative hypothesis H1, that there is a trend.

2.2.1. Parametric test for trend

A basic approach in testing for trends is the regression approach (Woodward and Gray, 1993, 1995), and its suitability for analysing climatological time series is discussed by Visser and Molenaar (1995). In the present problem, the months (t) constitute the independent variable and the maximum temperature (Yt) the dependent variable. Thus, the model is Yt = a + bt + Et, where Et are the residuals, a is the regression constant, b is the regression parameter, and t varies from 1 to n. Following the equations available in El-Fandy et al. (1994), the least-squares estimates for the maximum temperature time series under consideration (n = 1236) come out to be b̂ = 0.0087 and â = 23.656. Under the assumption that the residuals are independent and normally distributed with zero mean, the estimated standard error of b̂ (Woodward and Gray, 1993, 1995) is SE(b̂) = 0.0004. The null hypothesis H0: b = 0 is tested with the statistic t = b̂/SE(b̂), which is distributed as Student's t with (n − 2) degrees of freedom; here t = 0.0087/0.0004 ≈ 21.8. From the table, t0.01 = 2.326 and t0.05 = 1.658 as the degrees of freedom tend to ∞. The null hypothesis can therefore be rejected, and the conclusion that follows is that there exists a trend in the time series (Woodward and Gray, 1993). For the yearly averaged data (n = 103), b̂ = 0.1036 and â = 281.17, and the computed t-statistic again exceeds the critical values; thus, the time series of yearly averaged maximum temperature has an increasing trend. A similar test was carried out for all the seasons. In each case the null hypothesis of no trend was rejected at either the 1% or the 5% level of significance and, consequently, an increasing trend is revealed at all scales. The results are presented in Table I.
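The parametric test just described amounts to an ordinary least-squares fit followed by a t-test on the slope. Below is a minimal sketch (illustrative only; the noise level and random seed are assumptions, with the slope chosen to echo the b̂ = 0.0087 reported above):

```python
import numpy as np

def trend_t_test(y):
    """Fit y_t = a + b*t + e_t by least squares and return
    (b_hat, SE(b_hat), t-statistic) for H0: b = 0."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(1, n + 1, dtype=float)
    b_hat = np.cov(t, y, bias=True)[0, 1] / np.var(t)   # OLS slope
    a_hat = y.mean() - b_hat * t.mean()
    resid = y - (a_hat + b_hat * t)
    s2 = np.sum(resid ** 2) / (n - 2)                   # residual variance
    se_b = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))    # SE of the slope
    return b_hat, se_b, b_hat / se_b

# Synthetic series with a small warming trend buried in noise.
rng = np.random.default_rng(0)
y = 23.6 + 0.0087 * np.arange(1, 1237) + rng.normal(0.0, 2.0, 1236)
b_hat, se_b, t_stat = trend_t_test(y)
# With n = 1236 the t-statistic far exceeds t_0.01 = 2.326,
# so H0: b = 0 is rejected.
```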

Table I. Trend analysis of monthly maximum temperature time series by parametric and non-parametric (Mann–Kendall) tests over northeast India

Time scale                           Parametric test   Non-parametric test
January                              b*                b*
February                             b*                b
March                                b*                c
April                                b                 b
May                                  b*                b**
June                                 b                 b
July                                 b                 b
August                               b                 b
September                            b                 b
October                              b                 b
November                             b                 b
December                             b                 b
Annual                               b                 b
Winter (January to February)         b                 b
Pre-monsoon (March to May)           b                 b
Monsoon (June to September)          b                 b
Post-monsoon (October to December)   b                 b

b, statistically significant increasing trend at the 1% level of significance; b* and b**, statistically significant increasing trends at the 5% and 10% levels of significance, respectively; c, no trend.
2.2.2. Non-parametric test for trend

It has already been observed that the given time series is characterized by non-stationarity. In recent studies (e.g. Onoz and Bayazit, 2003; Wu et al., 2007; Jhajharia et al., 2009), it has been discussed that a non-parametric test for trend is more feasible for non-stationary time series than parametric methods. Parametric methods require the data to be independent and normally distributed, and even for small departures from normality, non-parametric methods are sometimes better than parametric methods (Hirsch et al., 1991). Non-parametric methods use the ranks of observations rather than their actual values, which relaxes the requirements concerning the distribution of the data. The MK test is a very popular tool for identifying the existence of an increasing or decreasing trend within a time series; detailed descriptions are available in Yue et al. (2002), Onoz and Bayazit (2003) and Jhajharia et al. (2009). The MK test is a rank-based non-parametric test for assessing the significance of a trend, and has been widely used in climatological trend detection studies; examples include Lettenmaier et al. (1994), Onate and Pou (1996), Hanssen-Bauer and Førland (1998), Shrestha et al. (1999) and Domonkos et al. (2003). The null hypothesis H0 is that a sample of data {Yt: t = 1, 2, …, n} is independent and identically distributed; the alternative hypothesis H1 is that a monotonic trend exists in {Yt}. Each pair of observed values (Yi, Yj), where i > j, is inspected to determine whether Yi > Yj (first type) or Yi < Yj (second type), with a correction for the case Yi = Yj. If the numbers of first-type and second-type observations are P and M respectively, then the statistic S is defined as S = P − M. A standard normal variate Z is then constructed following Yue et al. (2002). In a two-sided test for trend, the null hypothesis of no trend is rejected if |Z| > Zα/2, where α is the significance level. Though the present paper deals with the monthly maximum temperature time series, the MK test is executed on monthly, seasonal and annual time scales, at the 1, 5 and 10% levels of significance. The results of the trend analysis performed on the maximum temperature series of northeast India are presented in Table I. It is observed from Table I that in the yearly and seasonal time series the values of the test statistic Z are positive and greater than Zα/2 (Zα/2 = 1.96 at the 5% significance level), and therefore the null hypothesis of no trend is rejected in every case. Thus, the existence of an increasing trend in the time series is identified through the non-parametric MK test. Furthermore, the trend has also been tested for each month (e.g. January of each year). From the application of the MK test (see Table I), it is observed that there are increasing trends in the monthly time series of maximum temperature for each month, except for March. These additional scales have been tested for trend to obtain a clear picture of the entire data set from different points of view. For example, seasonal and yearly data derived from the entire time series have been tested to see whether the maximum temperature has any seasonal variation in its trend, and twelve time series have been formed, one for each month and each containing 103 data points, to see whether the monthly maximum temperature has any specific trend patterns for different months. However, these investigations are supplementary to the actual experiment, that is, the autoregressive modelling of the monthly maximum temperature time series having 1236 entries.
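The MK statistic S = P − M and the variate Z can be computed directly from their definitions. Below is a minimal sketch using the no-ties variance formula, with the continuity correction following Yue et al. (2002); the function name is illustrative:

```python
import math

def mann_kendall_z(y):
    """Mann-Kendall trend test: S = P - M over all pairs (i < j),
    converted to a standard normal variate Z (no-ties variance)."""
    n = len(y)
    # (y[j] > y[i]) - (y[j] < y[i]) is +1, -1, or 0 for each pair.
    s = sum((y[j] > y[i]) - (y[j] < y[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

# A monotonically increasing sample: |Z| > 1.96 rejects H0 at the 5% level.
z = mann_kendall_z([float(i) for i in range(40)])
```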

From the autocorrelation function of the monthly maximum temperature time series (n = 1236) it has been observed that there is a significant positive lag-1 autocorrelation in the time series. In a handful of studies (e.g. Yue et al., 2002 and references therein), it has been demonstrated that the existence of significant lag-1 autocorrelation may lead to a false rejection of the null hypothesis of no trend in the MK test, i.e. it may increase the possibility of a type-I error. Keeping this in mind, a modified MK test has been applied to the time series. Following Yue et al. (2002), a data pre-whitening method has been used to remove the effect of lag-1 autocorrelation from the time series {Xt} as follows:

  Yt = Xt − r1Xt−1    (1)

where r1 denotes the lag-1 autocorrelation and Yt denotes the pre-whitened data. The data under consideration have been pre-whitened using this equation. Applying the MK test to the modified time series gives a test statistic of 2.898, which indicates that the increasing trend in the time series remains significant at the 1% level even after the lag-1 autocorrelation is removed.
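Equation (1) is straightforward to apply. The sketch below pre-whitens an assumed persistent (AR(1)-like) synthetic series and is illustrative only:

```python
import numpy as np

def prewhiten(x):
    """Apply Y_t = X_t - r1 * X_(t-1), with r1 the sample lag-1
    autocorrelation (pre-whitening as in Equation (1))."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    r1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)  # lag-1 autocorrelation
    return x[1:] - r1 * x[:-1], r1

# Persistent synthetic series: x_t = 0.5*x_(t-1) + noise.
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + rng.normal()
y, r1 = prewhiten(x)
# The MK test would then be applied to y rather than to x.
```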

2.2.3. Deseasonalizing and detrending the time series

Since a seasonality of 12 months has been identified in the time series, seasonal indices are calculated for each month and presented in Table II. The seasonal components have been computed under the assumption of a multiplicative model, and consequently the data are deseasonalized by dividing each monthly observation by the corresponding seasonal index and then multiplying by 100. The method of computation of seasonal indices is explained in Goon et al. (1994). Among the various methods of calculating seasonal indices, the ratio-to-moving-average method has been used here with a periodicity of 12 months. A discussion is required regarding the choice between multiplicative and additive models. Both models have been tested and the seasonal decomposition has been carried out in both cases. The seasonal indices have been computed using SPSS software and the seasonally adjusted time series have been examined. The seasonally adjusted series obtained from the additive and multiplicative models are very close to each other (Figure 3); thus, both models have almost equal performance. However, considering the increasing trend in the time series, the multiplicative model has finally been chosen to deseasonalize the data. The deseasonalization smooths the time series that is later exposed to artificial neural network training, and it has already been established in the literature (Zhang and Min, 2005; Zhang and Kline, 2007) that removing the seasonal components and trends from a time series has a positive impact upon artificial neural network modelling. Earlier in this section it was shown that there is an increasing trend in the time series. To remove the trend from the deseasonalized time series, a trend equation is fitted to the deseasonalized values (zt) using the same least-squares method explained earlier in this section. The trend equation is:

  ẑt = â + b̂t    (2)

After fitting this trend equation, the deseasonalized values zt are divided by the fitted values ẑt and the detrended time series is obtained. In this way, the deseasonalized and detrended monthly maximum temperature time series is obtained. Before the deseasonalization and detrending, the mean and standard deviation of the data were 28.66 and 3.51, respectively. The histogram of these data is presented in Figure 4(a) and shows that the data are far from normal. After the deseasonalization and detrending, however, the data become approximately normal (Figure 4(b)), with mean 0.03 and standard deviation 0.982, i.e. approximately normal with mean 0 and standard deviation 1. This normalized time series is useful when implementing the neurocomputing procedure.
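The ratio-to-moving-average deseasonalization described above (multiplicative model, 12-month period) can be sketched as follows. This is an illustrative reimplementation, not the SPSS procedure used in the paper, applied to an assumed synthetic series with a multiplicative annual cycle:

```python
import numpy as np

def seasonal_indices(x, period=12):
    """Ratio-to-moving-average seasonal indices for a multiplicative
    model, normalized so that the indices average 100."""
    x = np.asarray(x, dtype=float)
    # Centred moving average: half weights on the two end months.
    kernel = np.ones(period + 1)
    kernel[0] = kernel[-1] = 0.5
    ma = np.convolve(x, kernel / period, mode="valid")
    # Ratio of each observation to the moving average centred on it.
    ratios = x[period // 2: period // 2 + len(ma)] / ma
    idx = np.array([ratios[k::period].mean() for k in range(period)])
    idx = np.roll(idx, period // 2)  # re-align index 0 with month 0
    return idx * (100.0 * period / idx.sum())

# Synthetic monthly series: slow trend times a 12-month seasonal factor.
t = np.arange(240)
monthly = (28.0 + 0.001 * t) * (1.0 + 0.12 * np.sin(2 * np.pi * t / 12))
idx = seasonal_indices(monthly)
# Deseasonalize: divide by the index and multiply by 100 (as in the text).
deseasonalized = monthly / idx[t % 12] * 100.0
```

After the division, only the smooth trend remains in the series, which is then removed with the fitted trend equation.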

Figure 3. Additive and multiplicative decomposition models of seasonally adjusted time series. The dotted and the continuous lines represent multiplicative and additive models, respectively

Figure 4. Histograms of (a) the monthly maximum temperature time series (in degrees Celsius) over northeast India, and (b) the corresponding deseasonalized and detrended values

Table II. The seasonal indices for different months

Month       Index 1   Index 2
January      78.22    −6.25
February     85.13    −4.27
March       101.88     0.54
April       111.71     3.36
May         114.26     4.03
June        111.57     3.32
July        107.03     2.01
August      106.51     1.86
September   105.94     1.70
October     102.83     0.81
November     93.11    −1.97
December     81.82    −5.21

Index 1, multiplicative model; Index 2, additive model.

3. Autoregressive modelling using the Yule–Walker equations


After deseasonalizing and detrending the time series, the ACF and the partial autocorrelation function (PACF) have been computed for the reduced time series. The ACF (Figure 5(a)) shows that after the first few lags the autocorrelations are very close to 0 and, consequently, an autoregressive (AR) process can be attempted. This gradual decay of the ACF to 0 indicates that the reduced time series is stationary. The PACF is presented in Figure 5(b); as described in Box et al. (2007), the PACF of an AR process of order p has a cut-off after lag p. From the plot of the PACF in Figure 5(b) it is seen that the PACF becomes effectively 0 at lag 7; thus, AR(6) can be a suitable autoregressive model for the given time series. However, since all the partial autocorrelations before lag 6, except that at lag 1, are very close to 0, AR(p) processes are examined for p varying from 1 to 6 and their performance is finally judged by the Akaike information criterion (AIC). The AIC is given by (Storch and Zwiers, 1999):

  AIC(p) = n ln(σ̂p²) + 2p    (3)

Details of the symbols used in the above expression are given on page 167 of Storch and Zwiers (1999). Some examples of the application of AR modelling to climatological time series include Davies and Milionis (1994), Koscielny and Duchon (1984), Leite and Peixoto (1996) and Besse et al. (2000).

Figure 5. (a) Autocorrelation function (ACF) of the deseasonalized and detrended time series of the monthly maximum temperature for the period 1901–2003. (b) Partial autocorrelation function (PACF) of the deseasonalized and detrended time series of the monthly maximum temperature for the period 1901–2003

The set of adjustable parameters ϕ1, ϕ2, …, ϕp of an autoregressive process of order p, i.e. the AR(p) process:

  z̃t = ϕ1z̃t−1 + ϕ2z̃t−2 + … + ϕpz̃t−p + at    (4)

satisfies certain conditions for the process to be stationary. Here, z̃t = zt − µ, where µ is the mean of the process and at is a white-noise term. The parameter ϕ1 of an AR(1) process must satisfy the condition |ϕ1| < 1 for the time series to be stationary. It can be shown that the autocorrelation function satisfies the equation:

  ρk = ϕ1ρk−1 + ϕ2ρk−2 + … + ϕpρk−p,  k > 0    (5)

Substituting k = 1, 2, …, p in Equation (5) we get the system of Yule–Walker equations (Box et al., 2007):

  ρ1 = ϕ1 + ϕ2ρ1 + … + ϕpρp−1
  ρ2 = ϕ1ρ1 + ϕ2 + … + ϕpρp−2
  ⋮
  ρp = ϕ1ρp−1 + ϕ2ρp−2 + … + ϕp    (6)

The Yule–Walker estimates of the autoregressive parameters ϕ1, ϕ2, …, ϕp are obtained by replacing the theoretical autocorrelations ρk by the estimated autocorrelations rk. Thus, in matrix notation, the autoregression parameters can be written as:

  ϕ̂ = R−1r

where ϕ̂ = (ϕ̂1, ϕ̂2, …, ϕ̂p)′, r = (r1, r2, …, rp)′, and R is the p × p matrix whose (i, j)th element is the estimated autocorrelation r|i−j| (with r0 = 1).

The six autoregressive models, AR(1), AR(2), AR(3), AR(4), AR(5) and AR(6), have been generated for the normalized time series, which has already been identified as stationary using the method explained above. The autoregressive parameters for the six models are presented in Table III. For each model the AIC has been computed, and the magnitudes of the AIC corresponding to the models are presented in Figure 6. This figure shows that the minimum magnitude of the AIC occurs for AR(6), and it is therefore established that AR(6) is the best autoregressive model for the time series under consideration. The autoregressive model fitting is based on the methodology explained in Box et al. (2007).
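The fitting procedure of this section, i.e. solve the Yule–Walker system for each candidate order and pick the order minimizing the AIC, can be sketched as below. This is an illustrative reconstruction on an assumed synthetic AR(2) series, with AIC(p) = n·ln(σ̂p²) + 2p taken as the selection criterion (any variant differing by an additive constant selects the same order):

```python
import numpy as np

def yule_walker(x, p):
    """Solve the Yule-Walker equations R*phi = r with sample
    autocorrelations; returns (phi, noise-variance estimate)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / np.dot(x, x)
                  for k in range(p + 1)])                    # r[0] = 1
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:p + 1])
    sigma2 = np.var(x) * (1.0 - np.dot(phi, r[1:p + 1]))
    return phi, sigma2

# Synthetic AR(2) series: z_t = 0.5*z_(t-1) - 0.3*z_(t-2) + noise.
rng = np.random.default_rng(2)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

# Fit AR(1)..AR(6) and pick the AIC-minimizing order.
aic = {p: 2000 * np.log(yule_walker(x, p)[1]) + 2 * p for p in range(1, 7)}
best = min(aic, key=aic.get)
```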

Figure 6. Magnitudes of the Akaike information criterion (AIC) for the six autoregressive AR(p) models

Table III. Autoregressive parameters corresponding to the six AR(p) models

AR(p)    ϕ1      ϕ2      ϕ3      ϕ4      ϕ5      ϕ6
AR(1)    0.517
AR(2)    0.062   0.421
AR(3)    0.128  −0.193   0.699
AR(4)    0.044   0.072  −0.099   0.604
AR(5)    0.111  −0.100   0.237  −0.365   0.831
AR(6)    0.047   0.057   0.015   0.048  −0.011   0.474

4. Development of autoregressive neural network model


4.1. Description of the models

Several statistical approaches to predict surface temperatures have been discussed in the literature; a few examples are Barnett and Preisendorfer (1987), Tang et al. (2000) and Colman and Davey (2003). In recent years, various non-linear methodologies such as detrended fluctuation analysis (Khan et al., 2005; Varotsos et al., 2008), chaos theory (Sivakumar, 2005) and wavelet analysis (Chou, 2007) have been adopted to model complex geophysical time series. Several authors have proposed artificial neural networks (ANNs) for time series decomposition and prediction (Khan et al., 2005), and their applicability in atmospheric modelling has been thoroughly discussed in Hsieh and Tang (1998) and Gardner and Dorling (1998). The present section describes the application of three types of artificial neural network model to forecast the maximum temperature time series over northeast India. The advantages of ANNs in forecasting a time series are well documented in the literature. In an extensive review, Tim et al. (1994) discussed the advantages of ANNs, which include the following:

  • ANNs can automatically approximate whatever functional form best characterizes the data;

  • Not only do ANNs estimate non-linear functions well, but they can also extract any residual non-linear elements from the data after linear terms are removed; and

  • ANNs have a modest capability for building piece-wise non-linear models.

Some examples of the application of ANNs in surface temperature forecasting are Snell et al. (2000), Tang et al. (2000) and Ustaoglu et al. (2008). The types of ANN adopted in the present study are the Multilayer Perceptron (MLP), the Generalized Feed Forward Neural Network (GFFNN) and the Modular Neural Network (MNN). ANNs are mathematical models developed to mimic certain information storing and processing capabilities of the brains of higher animals. The MLP is a feed-forward ANN consisting of a number of units or neurons. The neurons, connected by weighted links, are each described by two entities: an activation function and a bias or threshold. The units are organized in several layers, namely the input layer, one or more hidden layers and the output layer. The advent of the backpropagation (BP) algorithm opened avenues for the application of the MLP to many problems of practical interest (Kamarthi and Pittner, 1999). The sigmoidal function is the most commonly used activation function; using this function, the output of neuron $j$ in layer $l+1$ is given by (Sarkar, 1995):

  • $\mathrm{out}_j^{l+1} = \dfrac{1}{1 + \exp\!\left(-\beta\,\mathrm{net}_j^{l+1}\right)}$ (8)

In Equation (8),

  • $\mathrm{net}_j^{l+1} = \sum_i w_{ij}^{l}\,\mathrm{out}_i^{l} + \theta_j^{l+1}$ (9)

where $\mathrm{out}_i^{l}$ is the output of neuron $i$ of layer $l$, $\theta_j^{l+1}$ is the bias of neuron $j$ of layer $l+1$, and $w_{ij}^{l}$ is the weight of the link between neuron $i$ in layer $l$ and neuron $j$ in layer $l+1$. In Equation (8), the term $\beta$ determines the steepness of the activation function; in the present paper $\beta$ is assumed equal to 1. Further details of the MLP can be found in surveys such as Sarkar (1995) and Widrow and Lehr (1990).
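Equations (8) and (9), a weighted sum with bias passed through a sigmoid of steepness β, amount to a few lines of numpy (a minimal sketch; the weights and inputs below are illustrative, not taken from the paper):

```python
import numpy as np

def neuron_output(out_prev, w, theta, beta=1.0):
    """Equations (8)-(9): weighted sum of the previous layer's outputs
    plus a bias, passed through a sigmoid whose steepness is set by beta
    (beta = 1 in the paper)."""
    net = np.dot(w, out_prev) + theta            # Equation (9)
    return 1.0 / (1.0 + np.exp(-beta * net))     # Equation (8)

# Illustrative previous-layer outputs, link weights and bias
y = neuron_output(np.array([0.3, 0.7, 0.1]),
                  np.array([0.5, -0.2, 0.8]),
                  theta=0.1)
```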

GFFNNs are a generalization of the MLP such that connections can jump over one or more layers (Bouzerdoum and Mueller, 2003; Chattopadhyay and Chattopadhyay, 2008).

Modularity is defined as the subdivision of a complex object into simpler objects. The subdivision is determined either by the structure or by the function of the object and its subparts. Modular neural networks (MNNs) (Ortín et al., 2005) are a particular class of MLP. These networks process their input using several parallel MLPs and then recombine the results. This tends to create some structure within the topology, which fosters specialization of function in each sub-module. Unlike the MLP, modular networks do not have full interconnectivity between their layers; therefore, a smaller number of weights is required for the same network size. This tends to speed up training and reduce the number of required training patterns. A detailed discussion of MNNs is available in Melin and Castillo (2005). Equation (4), which represents a linear autoregressive process, can be expressed in the following form:

  • $x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \varepsilon_t$ (10)

In Equation (10), a linear function $F_L$ can be introduced, in which case the equivalent form of Equation (10) would be (Dorffner, 1996):

  • $x_t = F_L\!\left(x_{t-1}, x_{t-2}, \ldots, x_{t-p}\right) + \varepsilon_t$ (11)

Replacing $F_L$ by an ANN model leads to the autoregressive neural network (AR-NN) model. A few examples of AR-NN studies include Temizel and Caset (2005) and Trapletti et al. (2000). In the present case, with $F_{NN}$ denoting the neural network mapping, the AR-NN models of orders two, three and four would be

  • $x_t = F_{NN}\!\left(x_{t-1}, x_{t-2}\right) + \varepsilon_t$ (12)
  • $x_t = F_{NN}\!\left(x_{t-1}, x_{t-2}, x_{t-3}\right) + \varepsilon_t$ (13)
  • $x_t = F_{NN}\!\left(x_{t-1}, x_{t-2}, x_{t-3}, x_{t-4}\right) + \varepsilon_t$ (14)
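The modular architecture described in Section 4.1 (several parallel MLPs whose outputs are recombined) can be sketched as a forward pass; the weight shapes, the two-expert layout and the weighted-sum recombination below are illustrative assumptions, not the paper's exact topology:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W, b, V, c):
    """One hidden-layer MLP: a single expert module."""
    return sigmoid(V @ sigmoid(W @ x + b) + c)

def modular_forward(x, modules, combine_w):
    """Modular network: run the parallel MLP experts on the same input,
    then recombine their outputs with a weighted sum (one simple choice)."""
    outs = np.array([mlp_forward(x, *m).item() for m in modules])
    return float(combine_w @ outs)

rng = np.random.default_rng(1)

def make_module(n_in, n_hid):
    """Random illustrative weights for one expert MLP."""
    return (rng.standard_normal((n_hid, n_in)), rng.standard_normal(n_hid),
            rng.standard_normal((1, n_hid)), rng.standard_normal(1))

modules = [make_module(4, 3), make_module(4, 3)]   # two parallel experts
y = modular_forward(np.ones(4), modules, np.array([0.5, 0.5]))
```

Because each expert is small and there is no full interconnectivity between the experts, the total weight count stays below that of a single fully connected MLP of the same width, which is the training-speed argument made above.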

4.2. Architecture of the models

In the present paper, each type of AR-NN model has been developed for three orders of the autoregressive process. For example, the MLP-based AR-NN model has been developed for second, third and fourth order autoregressive processes. Similarly, GFFNN- and MNN-based AR-NN models have also been developed for second, third and fourth order autoregressive processes. Thus, nine AR-NN models have been generated for the maximum temperature time series.

Before implementing the various ANN models, the deseasonalized and detrended maximum temperature data x are scaled to y belonging to the closed interval [0.2, 0.8] using the equation (Comrie, 1997):

  • $y = 0.2 + 0.6\left(\dfrac{x - x_{\min}}{x_{\max} - x_{\min}}\right)$ (15)

This transformation is performed to remove the asymptotic effect arising from the sigmoid activation function used in the ANN models. A thorough discussion of the usefulness of scaling data prior to ANN model generation is presented in Section 5 of Maier and Dandy (2000). In the case of a sigmoid activation function ($f(x) = (1 + e^{-x})^{-1}$), the data are mapped to between 0 and 1. However, if the values are scaled to the extreme limits of the transfer function, the weight updates become extremely small and flat spots in training are likely to occur. To avoid this problem, the scaling of Equation (15) has been used.
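Equation (15) and its inverse can be implemented directly (a minimal numpy sketch; function names are illustrative):

```python
import numpy as np

def scale_to_band(x):
    """Equation (15): map data linearly into [0.2, 0.8] to avoid the
    flat tails of the sigmoid during training."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    return 0.2 + 0.6 * (x - xmin) / (xmax - xmin)

def unscale(y, xmin, xmax):
    """Inverse transform back to the original temperature scale."""
    return xmin + (np.asarray(y, dtype=float) - 0.2) * (xmax - xmin) / 0.6

# Illustrative monthly maxima (degrees C), not the actual record
x = np.array([24.0, 28.5, 31.2, 26.7])
y = scale_to_band(x)
```

The inverse is needed after forecasting, when the network outputs must be brought back to the original temperature scale.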

Implementation of the AR-NN models consists of two steps. First, the AR-NN is trained in on-line mode. Second, the trained AR-NN is used to forecast the mean monthly maximum temperature using data other than those used for training. The former step is called training and the latter validation. After validation, the AR-NN is ready to forecast the mean monthly maximum temperature for the location it is trained for.

For developing the AR-NN models, the mean monthly maximum temperature data for the period 1901–2003 over northeast India have been used; consequently, there are (103 × 12 = 1236) data points. The orders of the proposed AR-NN models determine the number of inputs to the neural network. For example, an AR-NN model of order 3 has the data of three consecutive months as the inputs and the data of the fourth month as the desired output. Thus, for the AR-NN model of order 3, the input matrix has order ((1236 − 3) × 4 = 1233 × 4); the first three columns correspond to the input variables and the fourth column to the desired output. The first 50% of the rows are taken as training cases and the last 50% as test cases. In all cases, the networks are trained for up to 1000 epochs with the root mean squared error as the stopping criterion. The number of nodes in the hidden layer is constrained by the number of adjustable parameters, [n0 + nh(ni + n0 + 1)], which should not exceed the number of training patterns; here ni denotes the number of input units, n0 the number of output units and nh the number of hidden units. For the AR-NN model of order 3, where ni = 3, n0 = 1 and there are 300 training patterns, nh cannot be greater than 59. A similar approach has been adopted for all of the AR-NN models. It should further be mentioned that all of the ANN models have been trained through momentum learning.
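The data arrangement just described (lagged inputs, a 50/50 train/test split, and the parameter-count bound on hidden units) can be sketched as follows; the series here is a stand-in for the actual temperature record:

```python
import numpy as np

def make_lag_matrix(series, order):
    """Build the AR-NN design matrix: `order` consecutive values as
    inputs, the following value as the desired output."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    rows = [series[i:i + order + 1] for i in range(n - order)]
    return np.array(rows)            # shape: (n - order, order + 1)

series = np.arange(1236, dtype=float)   # stand-in for the 1901-2003 monthly data
M = make_lag_matrix(series, 3)          # order-3 AR-NN: 1233 x 4, as in the text
X, y = M[:, :3], M[:, 3]
n_train = len(M) // 2                   # first 50% for training, rest for testing
X_train, y_train = X[:n_train], y[:n_train]

# Upper bound on hidden units: parameters n0 + nh*(ni + n0 + 1) must not
# exceed the number of training patterns (300 in the paper's example)
ni, n0, patterns = 3, 1, 300
nh_max = (patterns - n0) // (ni + n0 + 1)
```

With ni = 3, n0 = 1 and 300 patterns, the bound gives nh_max = 59, matching the text.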
The outcomes of the training and testing procedures are presented in the subsequent sections.

5. Comparative study


Before training the AR-NN models, the prediction capacity of the AR(6) model is judged by computing the prediction error (PE), and Willmott's indices of order 1 (WI1) and 2 (WI2). These statistics are given by:

  • $PE = \dfrac{1}{n}\sum_{i=1}^{n}\dfrac{\left|P_i - O_i\right|}{O_i}$ (16)
  • $WI_1 = 1 - \dfrac{\sum_{i=1}^{n}\left|P_i - O_i\right|}{\sum_{i=1}^{n}\left(\left|P_i - \bar{O}\right| + \left|O_i - \bar{O}\right|\right)}$ (17)
  • $WI_2 = 1 - \dfrac{\sum_{i=1}^{n}\left(P_i - O_i\right)^2}{\sum_{i=1}^{n}\left(\left|P_i - \bar{O}\right| + \left|O_i - \bar{O}\right|\right)^2}$ (18)

where $P_i$ and $O_i$ denote the predicted and observed values, respectively, and $\bar{O}$ is the mean of the observations.
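These statistics can be sketched in numpy. WI1 and WI2 below follow Willmott's standard order-k index of agreement; the exact form of PE in Equation (16) was not recoverable from the source, so the normalized mean absolute error used here is only an assumption:

```python
import numpy as np

def willmott(obs, pred, k):
    """Willmott's index of agreement of order k (1 = perfect agreement)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    obar = obs.mean()
    num = np.sum(np.abs(pred - obs) ** k)
    den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** k)
    return 1.0 - num / den

def prediction_error(obs, pred):
    """One plausible PE: mean absolute error scaled by the mean absolute
    observation (an assumption; the paper's Equation (16) was lost)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean(np.abs(pred - obs)) / np.mean(np.abs(obs))

# Illustrative observed and predicted maxima (degrees C)
obs = np.array([30.1, 31.4, 29.8, 32.0, 30.7])
pred = np.array([30.0, 31.0, 30.2, 31.6, 30.9])
wi1, wi2 = willmott(obs, pred, 1), willmott(obs, pred, 2)
```

Values of WI1 and WI2 close to 1, together with a small PE, indicate a good fit, which is the criterion applied to Table IV.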

The predicted values of the deseasonalized and detrended time series generated by the AR(6) model are brought back to the original scale by reversing the deseasonalizing and detrending operations. Equations (16)–(18) are then used to compute the statistical measures of goodness of fit of the AR(6) model. The results are presented in Table IV, where it is seen that the values of WI1 and WI2 are very close to 1, indicating a very good prediction by the AR(6) model.

Table IV. Values of various statistics measuring the goodness of fit of the various models^a

Model               PE     WI1    WI2
AR(6)               0.03   0.87   0.98
AR-NN(2) (MLP)      0.11   0.45   0.70
AR-NN(3) (MLP)      0.09   0.56   0.76
AR-NN(4) (MLP)      0.08   0.50   0.76
AR-NN(2) (GFFNN)    0.04   0.77   0.94
AR-NN(3) (GFFNN)    0.06   0.73   0.96
AR-NN(4) (GFFNN)    0.18   0.45   0.56
AR-NN(2) (MNN)      0.06   0.70   0.89
AR-NN(3) (MNN)      0.03   0.83   0.83
AR-NN(4) (MNN)      0.03   0.83   0.97

^a MLP, Multilayer Perceptron; GFFNN, Generalized Feed Forward Neural Network; MNN, Modular Neural Network; AR, Autoregressive; AR-NN, Autoregressive Neural Network.

The purpose of implementing an AR-NN is to examine whether the same accuracy as AR(6) can be achieved with fewer predictors. As explained earlier, AR-NNs have been developed for three types of ANN, and the statistics PE, PCC (Pearson correlation coefficient), WI1 and WI2 have been computed for each model. The results are available in Table IV. It is apparent from the table that AR(6) and AR-NN(4), with MNN chosen as the form of the neural network, produce almost identical values of the statistics under consideration. Thus, the AR(6) model, which requires the past six values of the time series, can be replaced by the AR-NN(4) model with a modular neural network. It is further found that the AR-NN(4) model produces much lower values of the same statistics when it is trained in the form of an MLP or a GFFNN; thus, AR(6) outperforms AR-NN(4) when MLP or GFFNN is used as the ANN. The PE values are equal for AR(6) and AR-NN(4) when MNN is used.

It should further be noted that AR-NN(3) with MNN performs very well, but considering all the statistics, AR-NN(4) with MNN is more acceptable. The other two models that perform well are AR-NN(2) and AR-NN(3) with GFFNN; these two models produce very high values of WI2. The worst prediction comes from the AR-NN(4) model with GFFNN: in this case the prediction error attains its highest value and WI1 and WI2 attain their lowest values, far from one. Finally, AR-NN(4) trained as a modular neural network (MNN) is identified as the most acceptable autoregressive neural network. While developing this particular AR-NN model, the network has been trained three times and minimization of the mean squared error (MSE) has been taken as the stopping criterion.
The evolution of MSE with the increase in the number of epochs is presented in Figure 7.


Figure 7. The evolution of MSE with the increase in the number of epochs in the case of AR-NN(4)


6. Conclusions


In the first phase of the present study, the relevance of studying the monthly maximum surface temperature over northeast India has been discussed. Subsequently, the structure of the time series has been analysed, and autocorrelation analysis shows that the monthly maximum temperature time series of northeast India is characterized by significant non-stationarity and a seasonality of 12 months. Increasing trends in the maximum temperature time series were observed through both the non-parametric Mann–Kendall test and a parametric test in almost all months and seasons over northeast India. The seasonal and trend components were removed from the original, non-stationary time series in order to produce a stationary series, and the reduced time series is found to be normally distributed. The autoregressive models have been generated through the Yule–Walker equations, and the autoregressive process of order six (AR(6)) is found to be the best representative autoregressive model for the maximum temperature time series of northeast India, on the basis of both the partial autocorrelation function of the reduced time series and the Akaike information criterion. The prediction potential of the AR(6) model is also established through the high values of the Pearson correlation and Willmott's indices and the low value of the prediction error.

In the next phase, autoregressive neural network models have been attempted in order to reduce the number of predictors from the six required by AR(6). Three types of neural network (multilayer perceptron, generalized feed forward neural network and modular neural network) have been attempted with variable orders of autoregression. An autoregressive neural network with four predictors, denoted AR-NN(4), with a modular neural network as the form of the network, has prediction performance comparable to that of the AR(6) process, as indicated by its high values of Willmott's indices and its low value of the prediction error. Thus, it is finally concluded that implementing a modular neural network within the autoregressive structure can reduce the number of predictors from six to four while still predicting the monthly maximum temperature over the northeastern region of India. In the present paper, the maximum temperature time series has been modelled univariately using both the artificial neural network and the autoregressive approach. A similar univariate approach can be used in future studies in applied hydrology and meteorology to scrutinize the time series of monsoon rainfall over different regions of India and over India as a whole.

References

  • Barnett TP, Preisendorfer R. 1987. Origins and levels of monthly and seasonal forecast skill for United States surface air temperatures determined by canonical correlation analysis. Monthly Weather Review 115: 1825–1850.
  • Besse PC, Cardot H, Stephenson DB. 2000. Autoregressive forecasting of some functional climatic variations. Scandinavian Journal of Statistics 27: 673–687.
  • Bouzerdoum A, Mueller R. 2003. Genetic and Evolutionary Computation—GECCO 2003. Springer: Berlin and Heidelberg, Germany.
  • Box GEP, Jenkins GM, Reinsel GC. 2007. Time Series Analysis: Forecasting and Control, 3rd edn. Dorling Kindersley (India) Pvt. Ltd.: New Delhi, India.
  • Calvo JC, Gregory JD. 1994. Predicting monthly and annual air temperature characteristics in North Carolina. Journal of Applied Meteorology 33: 490–499.
  • Ceschia M, Linussio A, Micheletti S. 1994. Trend analysis of mean monthly maximum and minimum surface temperatures of the 1951–1990 period in Friuli-Venezia Giulia. Il Nuovo Cimento 17: 511–521.
  • Chattopadhyay S. 2007. Feed forward Artificial Neural Network model to predict the average summer-monsoon rainfall in India. Acta Geophysica 55: 369–382.
  • Chattopadhyay S, Chattopadhyay G. 2008. Comparative study among different neural net learning algorithms applied to rainfall time series. Meteorological Applications 15: 273–280.
  • Chattopadhyay G, Chattopadhyay S. 2009a. Predicting daily total ozone over Kolkata, India: skill assessment of different neural network models. Meteorological Applications 16: 179–190.
  • Chattopadhyay G, Chattopadhyay S. 2009b. Autoregressive forecast of monthly total ozone concentration: a neurocomputing approach. Computers and Geosciences 35: 1925–1932.
  • Chattopadhyay S, Jain R, Chattopadhyay G. 2009. Estimating potential evapotranspiration from limited weather data over Gangetic West Bengal, India: a neurocomputing approach. Meteorological Applications 16: 403–411.
  • Chaudhuri S, Chattopadhyay S. 2001. Measure of CINE - a relevant parameter for forecasting pre-monsoon thunderstorms over GWB. Mausam 52: 679–684.
  • Chaudhuri S, Chattopadhyay S. 2005. Neuro-computing based short range prediction of some meteorological parameters during the pre-monsoon season. Soft Computing 9: 349–354.
  • Chou C-M. 2007. Efficient nonlinear modeling of rainfall-runoff process using wavelet compression. Journal of Hydrology 332: 442–455.
  • Colman AW, Davey MK. 2003. Statistical prediction of global sea-surface temperature anomalies. International Journal of Climatology 23: 1677–1697.
  • Comrie AC. 1997. Comparing neural networks and regression models for ozone forecasting. Journal of Air and Waste Management Association 47: 653–663.
  • Cracknell AP, Varotsos CA. 2007. The IPCC fourth assessment report and the fiftieth anniversary of Sputnik. Environmental Science and Pollution Research 14: 384–387.
  • Das PJ. 2004. Rainfall regime of northeast India: a hydrometeorological study with special emphasis on the Brahmaputra basin. Unpublished PhD thesis, Gauhati University, Assam, India.
  • Das PJ, Goswami DC. 2003. Long-term variability of rainfall over northeast India. Indian Journal of Landscape Systems and Ecological Studies 26(1): 1–20.
  • Davies TD, Milionis AE. 1994. Box-Jenkins univariate modelling for climatological time series analysis: an application to the monthly activity of temperature inversions. International Journal of Climatology 14: 569–579.
  • Delleur JW, Kavvas ML. 1978. Stochastic models for monthly rainfall forecasting and synthetic generation. Journal of Applied Meteorology 17: 1528–1536.
  • Domonkos P, Kysel JY, Piotrowicz K, Petrovic P, Likso T. 2003. Variability of extreme temperature events in south-central Europe during the 20th century and its relationship with large-scale circulation. International Journal of Climatology 23: 978–1010.
  • Dorffner G. 1996. Neural networks for time series processing. Neural Network World 6: 447–468.
  • El-Fandy MG, Ashour ZH, Taiel SMM. 1994. Time series models adoptable for forecasting Nile floods and Ethiopian rainfalls. Bulletin of the American Meteorological Society 75: 83–94.
  • Elliott WP, Angell JK. 1987. The relation between Indian monsoon rainfall, the southern oscillation, and hemispheric air and sea temperature: 1884–1984. Journal of Climate and Applied Meteorology 26: 943–948.
  • Frias MD, Fernandez J, Saenz J, Rodriguez-Puebla C. 2005. Operational predictability of monthly average maximum temperature over the Iberian Peninsula using DEMETER simulations and downscaling. Tellus 57A: 448–463.
  • Gardner MW, Dorling SR. 1998. Artificial neural networks (the multilayer perceptron): a review of applications in the atmospheric sciences. Atmospheric Environment 32: 2627–2636.
  • Ghosh Hajra N, Kumar R. 1999. Seasonal variation in photosynthesis and productivity of young tea. Experimental Agriculture 35: 71–85.
  • Gilbert CG. 1953. An aid for forecasting the minimum temperature at Denver, Colo. Monthly Weather Review 81: 233–245.
  • Goon AM, Gupta MK, Dasgupta B. 1994. Fundamentals of Statistics, Vol. 2. The World Press Private Limited: Kolkata, India.
  • Griffiths GM, Chambers LE, Haylock MR, Manton CMJ, Nicholls N, Beak BH, Choi DY, Della-Marta PM, Gosai FA, Iga AN, Lata GR, Laurent HV, Maitrepierre IL, Nakamigawa JH, Ouprasitwong KN, Solofa LD, Tahani ML, Thuy NDT, Tibig OL, Trewin PB, Vediapanq BK, Zhir P. 2005. Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region. International Journal of Climatology 25: 1301–1330.
  • Hanssen-Bauer I, Førland EJ. 1998. Long-term trends in precipitation and temperature in the Norwegian Arctic: can they be explained by changes in atmospheric circulation patterns? Climate Research 10: 143–153.
  • Hingane LS, Kumar KR, Murty BVR. 1985. Long term trends of surface air temperature in India. International Journal of Climatology 5: 521–528.
  • Hirsch RM, Alexander RB, Smith RA. 1991. Selection of methods for the detection and estimation of trends in water quality. Water Resources Research 27: 803–814.
  • Hsieh WW, Tang B. 1998. Applying neural network models to prediction and data analysis in meteorology and oceanography. Bulletin of the American Meteorological Society 79: 1855–1870.
  • Hussain M. 1984. Estimation of global and diffuse irradiation from sunshine duration and atmospheric water vapour content. Solar Energy 33: 217–220.
  • Intergovernmental Panel on Climate Change (IPCC). 2007. In Climate Change 2007: Mitigation of Climate Change, Metz B, Davidson O, Bosch P, Dave R, Meyer L (eds). Cambridge University Press: New York, NY.
  • Jhajharia D, Shrivastava SK, Sarkar D, Sarkar S. 2009. Temporal characteristics of pan evaporation trends under the humid conditions of northeast India. Agricultural and Forest Meteorology 149: 763–770.
  • Kamarthi SV, Pittner S. 1999. Accelerating neural network training using weight extrapolations. Neural Networks 12: 1285–1299.
  • Kangieser PC. 1959. Forecasting minimum temperatures on clear winter nights in an arid region. Monthly Weather Review 87: 19–28.
  • Khan S, Ganguly AR, Saigal S. 2005. Detection and predictive modeling of chaos in finite hydrological time series. Nonlinear Processes in Geophysics 12: 41–53.
  • Klein WH, Lewis F. 1970. Computer forecasts of maximum and minimum temperatures. Journal of Applied Meteorology 9: 350–359.
  • Klein WH, Lewis F, Hammons GA. 1971. Recent developments in automated max/min temperature forecasting. Journal of Applied Meteorology 10: 916–920.
  • Klein WH, Marshall F. 1973. Screening improved predictors for automated max/min temperature forecasting. Preprints of 3rd Conference on Probability and Statistics. American Meteorological Society: Boulder, CO; 36–43.
  • Koscielny AJ, Duchon CE. 1984. Autoregressive modelling of the tropical stratospheric quasi-biennial oscillation. International Journal of Climatology 4: 347–363.
  • Kothawale DR, Rupa Kumar K. 2005. On the recent changes in surface temperature trends over India. Geophysical Research Letters 32: L18714.
  • Kumar KK, Kumar KR, Pant GB. 1997. Pre-monsoon maximum and minimum temperatures over India in relation to the summer monsoon rainfall. International Journal of Climatology 17: 1115–1127.
  • Kumar KK, Rajagopalan B, Cane MA. 1999. On the weakening relationship between the Indian Monsoon and ENSO. Science 284: 2156–2159.
  • Leite SM, Peixoto JP. 1996. The autoregressive model of climatological time series: an application to the longest time series in Portugal. International Journal of Climatology 16: 1165–1173.
  • Lettenmaier DP, Wood EF, Wallis JR. 1994. Hydro-climatological trends in the continental United States, 1948–1988. Journal of Climate 7: 586–607.
  • Lianchun S, Cannon AJ, Whitfield PH. 2007. Changes in seasonal patterns of temperature and precipitation in China during 1971–2000. Advances in Atmospheric Sciences 24: 459–473.
  • Ludlam F. 1980. Clouds and Storms. Pennsylvania State University Press: Pennsylvania.
  • Luterbacher J, Dietrich D, Xoplaki E, Grosjean M, Wanner H. 2004. European seasonal and annual temperature variability, trends, and extremes since 1500. Science 303: 1499–1503.
  • Maier HR, Dandy GC. 2000. Neural networks for the prediction and forecasting of water resources variables: a review of modelling issues and applications. Environmental Modelling and Software 15: 101–124.
  • Mandal M, Mohanty UC, Sinha P, Ali MM. 2007. Impact of sea surface temperature in modulating movement and intensity of tropical cyclones. Natural Hazards 41: 413–427.
  • Mantis HT, Dickey WW. 1945. Objective methods of forecasting the daily minimum and maximum temperature. Report Number 4, U.S. Army Air Force, Weather Station, New York University.
  • Massie DR, Rose MA. 1997. Predicting daily maximum temperatures using linear regression and Eta geopotential thickness forecasts. Weather and Forecasting 12: 799–807.
  • Mearns LO, Katz RW, Schneider SH. 1984. Extreme high-temperature events: changes in their probabilities with changes in mean temperature. Journal of Applied Meteorology 23: 1601–1613.
  • Meehl GA, Karl T, Easterling DR, Changnon S, Pielke R Jr, Changnon D, Evans J, Groisman PY, Knutson TR, Kunkel KE, Mearns LO, Parmesan C, Pulwarty R, Root T, Sylves RT, Whetton P, Zwiers F. 2000. An introduction to trends in extreme weather and climate events: observations, socio-economic impacts, terrestrial ecological impacts, and model projections. Bulletin of the American Meteorological Society 81(3): 413–416.
  • Melin P, Castillo O. 2005. Hybrid Intelligent Systems for Pattern Recognition Using Soft Computing: An Evolutionary Approach for Neural Networks and Fuzzy Systems (Studies in Fuzziness and Soft Computing). Springer-Verlag, New York, Inc.: Secaucus, NJ.
  • Mirza MMQ, Warrick RA, Ericksen NJ, Kenny GJ. 1998. Trends and persistence in precipitation in the Ganges, Brahmaputra and Meghna basins in South Asia. Hydrological Sciences Journal 43: 845–858.
  • Namias J. 1948. Evolution of monthly mean circulation and weather patterns. Transactions of the American Geophysical Union 29: 777–788.
  • Onate JJ, Pou A. 1996. Temperature variations in Spain since 1901: a preliminary analysis. International Journal of Climatology 16: 805–815.
  • Onoz B, Bayazit M. 2003. The power of statistical tests for trend detection. Turkish Journal of Engineering and Environmental Science 27: 247–251.
  • Ortín S, Gutiérrez JM, Pesquera L, Vasquez H. 2005. Nonlinear dynamics extraction for time-delay systems using modular neural networks synchronization and prediction. Physica A 351: 133–141.
  • Pant GB, Rupa Kumar K. 1997. Climates of South Asia. John Wiley and Sons: New York, NY; 320.
  • Pattanaik DR. 2007. Variability of convective activity over the North Indian Ocean and its associations with monsoon rainfall over India. Pure and Applied Geophysics 164: 1527–1545.
  • Perez P, Trier A, Reyes J. 2000. Prediction of PM2.5 concentrations several hours in advance using neural networks in Santiago, Chile. Atmospheric Environment 34: 1189–1196.
  • Peterson RE, Mehta KC. 1995. Tornadoes of the Indian subcontinent. Paper presented at 9th International Conference on Wind Engineering, International Association of Wind Engineering, New Delhi, India.
  • Rehman S, Husain T, Halawani TO. 1990. Application of one-dimensional planetary boundary layer model to the regional transport of pollutants—a case study. Atmospheric Research 25: 521–538.
  • Said SAM. 1992. Degree-day base temperature for residential building energy prediction in Saudi Arabia. ASHRAE Transactions 98: 346–353.
  • Sarkar D. 1995. Methods to speed up error back-propagation learning algorithm. ACM Computing Surveys 27: 519–544.
  • Shrestha AB, Wake CP, Mayewski PA, Dibb JE. 1999. Maximum temperature trends in the Himalaya and its vicinity: an analysis based on temperature records from Nepal for the period 1971–1994. Journal of Climate 12: 2775–2786.
  • Sivakumar B. 2005. Chaos in rainfall: variability, temporal scale and zeros. Journal of Hydroinformatics 7: 175–184.
  • Snell SE, Gopal S, Kaufmann RK. 2000. Spatial interpolation of surface air temperatures using artificial neural networks: evaluating their use for downscaling GCMs. Journal of Climate 13: 886–895.
  • Spreen WC. 1956. Empirically determined distributions of hourly temperatures. Journal of Atmospheric Sciences 13: 351–355.
  • Storch HV, Zwiers FW. 1999. Statistical Analysis in Climate Research. Cambridge University Press: Cambridge, UK.
  • Tang B, Hsieh WW, Monahan AH, Tangang FT. 2000. Skill comparisons between neural networks and canonical correlation analysis in predicting the Equatorial Pacific Sea surface temperatures. Journal of Climate 13: 287–293.
  • Tasadduq I, Rehman S, Bubshait K. 2002. Application of neural networks for the prediction of hourly mean surface temperatures in Saudi Arabia. Renewable Energy 25: 545–554.
  • Temizel TT, Caset MC. 2005. A comparative study of autoregressive neural network hybrids. Neural Networks 5–6: 781–789.
  • Tim H, Leorey M, Marcus O, William R. 1994. Artificial neural network models for forecasting and decision making. International Journal of Forecasting 10: 5–15.
  • Trapletti A, Leisch F, Hornick K. 2000. Stationary and integrated autoregressive neural network processes. Neural Computation 10: 2427–2450.
  • Ustaoglu B, Cigizoglu HK, Karaca M. 2008. Forecast of daily mean, maximum and minimum temperature time series by three artificial neural network methods. Meteorological Applications 15: 431–445.
  • Van Loon H, Jenne RL. 1975. Estimates of seasonal mean temperature, using persistence between seasons. Monthly Weather Review 103: 1121–1128.
  • Varotsos C. 2004. Atmospheric pollution and remote sensing: implications for the southern hemisphere ozone hole split in 2002 and the northern mid-latitude ozone trend. Monitoring of Changes Related to Natural and Manmade Hazards Using Space Technology 33: 249–253.
  • Varotsos C, Assimakopoulos MN, Efstathiou M. 2007. Technical note: Long-term memory effect in the atmospheric CO2 concentration at Mauna Loa. Atmospheric Chemistry and Physics 7: 629–634.
  • Varotsos CA, Cracknell AP. 2004. New features observed in the 11-year solar cycle. International Journal of Remote Sensing 25: 2141–2157.
  • Varotsos CA, Milinevsky G, Grytsai A, Efstathiou M, Tzanis C. 2008. Scaling effect in planetary waves over Antarctica. International Journal of Remote Sensing 29: 2697–2704.
  • Vil'fand RM, Tishchenko VA, Khan VM. 2007. Statistical forecast of temperature dynamics within month on the basis of hydrodynamic model outputs. Russian Meteorology and Hydrology 32: 147–153.
  • Visser H, Molenaar J. 1995. Trend estimation and regression analysis in climatological time series: an application of structural time series models and the Kalman filter. Journal of Climate 8: 969–979.
  • Weston KJ. 1972. The dry-line of Northern India and its role in cumulonimbus convection. Quarterly Journal of the Royal Meteorological Society 98: 469–471.
  • Widrow B, Lehr MA. 1990. 30 years of adaptive neural networks: perceptron, Madaline, and backpropagation. Proceedings of the IEEE 78: 1415–1442.
  • Wijeratne MA. 1992. Vulnerability of Sri Lanka tea production to global climate change. Water, Air, and Soil Pollution 92: 87–94.
  • Wilks DS. 2006. Statistical Methods in Atmospheric Sciences, 2nd edn. Academic Press: San Diego, California.
  • Willmott CJ, Matsuura K. 1995. Smart interpolation of the annually averaged air temperature in the United States. Journal of Applied Meteorology 34: 2577–2586.
  • Woodward WA, Gray HL. 1993. Global warming and the problem of testing for trend in time series data. Journal of Climate 6: 953–962.
  • Woodward WA, Gray HL. 1995. Selecting a model for detecting the presence of a trend. Journal of Climate 8: 1929–1937.
  • Wu Z, Huang N, Long SR, Peng CK. 2007. On the trend, detrending, and variability of nonlinear and nonstationary time series. Proceedings of the National Academy of Sciences 104: 14889–14894.
  • Xoplaki E, Gonzalez-Rouco JF, Luterbacher J, Wanner H. 2003. Mediterranean summer air temperature variability and its connection to the large-scale atmospheric circulation and SSTs. Climate Dynamics 20: 723–739.
  • Yamane Y, Hayashi T. 2006. Evaluation of environmental conditions for the formation of severe local storms across the Indian subcontinent. Geophysical Research Letters 33: L17806, DOI: 10.1029/2006GL026823.
  • Yue S, Pilon P, Phinney B, Cavadias G. 2002. The influence of autocorrelation on the ability to detect trend in hydrological series. Hydrological Processes 16: 1807–1829.
  • Zhang GP, Kline DM. 2007. Quarterly time-series forecasting with neural networks. IEEE Transactions on Neural Networks 18: 1800–1814.
  • Zhang GP, Min Q. 2005. Neural network forecasting for seasonal and trend time series. European Journal of Operations Research 160: 501–514.
  • Zuidema P. 2003. Convective clouds over the Bay of Bengal. Monthly Weather Review 131: 780–798.
  • Zwiers FW, Storch HV. 1995. Taking serial correlation into account in tests of mean. Journal of Climate 8: 336–350.