The parameters of integer autoregressive models with Poisson or negative binomial innovations can be estimated by maximum likelihood, where the prediction error decomposition, together with convolution methods, is used to write down the likelihood function. When a moving average component is introduced, this is no longer possible. To address this problem, an efficient method-of-moments estimator is proposed, with estimated standard errors for the parameters obtained using subsampling methods. The small-sample properties of the estimator are investigated using Monte Carlo methods, and the approach is demonstrated on two well-known examples from the time series literature.
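As a toy illustration of moment-based estimation for an integer autoregressive model (a minimal sketch, assuming an INAR(1) with binomial thinning and Poisson innovations; the function names and parameter values are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(alpha, lam, n):
    """Simulate an INAR(1) process X_t = alpha ∘ X_{t-1} + eps_t,
    where ∘ is binomial thinning and eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def moment_estimates(x):
    """Method-of-moments estimates: alpha from the lag-1 autocorrelation
    (which equals alpha for INAR(1)), lambda from the stationary mean
    mu = lambda / (1 - alpha)."""
    xc = x - x.mean()
    alpha_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)
    lam_hat = x.mean() * (1 - alpha_hat)
    return alpha_hat, lam_hat

x = simulate_inar1(alpha=0.5, lam=2.0, n=5000)
alpha_hat, lam_hat = moment_estimates(x)
```

Standard errors for such estimators would then be obtained by subsampling, as the abstract describes; that step is omitted here.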

Indirect estimators usually emerge from two-step optimization procedures. Each step in such a procedure may induce complexities in the asymptotic theory of the estimator. In this note, we are concerned with a simple example in which the estimator defined by the inversion of the binding function has a ‘discontinuous’ limit theory even in cases where the auxiliary one does not. This example lives in the framework of estimation of the MA(1) parameter. The ‘discontinuities’ involve the dependence of the rate of convergence on the parameter, the non-continuity of the limit distribution with respect to the parameter and the estimator's *non-regularity*. We also consider a more complex example where the discontinuities occur because of complexities induced at any step of the defining procedure. We present some Monte Carlo evidence on the quality of the approximations from the limit distributions. Copyright © 2014 Wiley Publishing Ltd
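To make the "inversion of the binding function" concrete, here is a minimal sketch for the MA(1) case with the lag-1 autocorrelation as auxiliary statistic: for x_t = e_t + θe_{t-1}, the binding function is ρ = θ/(1+θ²), invertible on |θ| ≤ 1 with |ρ| ≤ 1/2 (all names and values below are illustrative; the boundary θ = ±1, where the derivative of the binding function vanishes, is where the abstract's discontinuities arise):

```python
import numpy as np

rng = np.random.default_rng(1)

def ma1(theta, n):
    """Simulate x_t = e_t + theta * e_{t-1} with Gaussian innovations."""
    e = rng.standard_normal(n + 1)
    return e[1:] + theta * e[:-1]

def acf1(x):
    """Sample lag-1 autocorrelation (the auxiliary estimator)."""
    xc = x - x.mean()
    return (xc[1:] @ xc[:-1]) / (xc @ xc)

def invert_binding(rho):
    """Invert rho = theta / (1 + theta^2) on the invertible region |theta| <= 1.
    Sample autocorrelations outside [-1/2, 1/2] are clipped to the boundary."""
    rho = np.clip(rho, -0.5, 0.5)
    if rho == 0.0:
        return 0.0
    return (1 - np.sqrt(1 - 4 * rho**2)) / (2 * rho)

x = ma1(theta=0.5, n=10000)
theta_hat = invert_binding(acf1(x))
```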

A time-series model in which the signal is buried in noise that is non-Gaussian may throw up observations that, when judged by the Gaussian yardstick, are outliers. We describe an observation-driven model, based on an exponential generalized beta distribution of the second kind (EGB2), in which the signal is a linear function of past values of the score of the conditional distribution. This specification produces a model that is not only easy to implement but which also facilitates the development of a comprehensive and relatively straightforward theory for the asymptotic distribution of the maximum-likelihood (ML) estimator. Score-driven models of this kind can also be based on conditional *t* distributions, but whereas these models carry out what, in the robustness literature, is called a soft form of trimming, the EGB2 distribution leads to a soft form of Winsorizing. An exponential general autoregressive conditional heteroscedastic (EGARCH) model based on the EGB2 distribution is also developed. This model complements the score-driven EGARCH model with a conditional *t* distribution. Finally, dynamic location and scale models are combined and applied to data on the UK rate of inflation.
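A score-driven location filter of the general kind described above can be sketched in a few lines. This version uses a conditional Student-*t* score (the soft-trimming case mentioned in the abstract) rather than the EGB2; all parameter values and names are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(8)

def t_score_filter(y, omega, phi, kappa, nu, scale):
    """Score-driven location filter: mu_{t+1} = omega + phi*mu_t + kappa*u_t,
    where u_t is the conditional score of a Student-t with nu degrees of
    freedom. The score is bounded in the prediction error, so gross
    outliers are softly trimmed rather than followed."""
    mu = np.empty(y.size)
    mu[0] = y[0]
    for t in range(y.size - 1):
        v = y[t] - mu[t]
        u = (1 + 1 / nu) * v / (1 + v**2 / (nu * scale**2))
        mu[t + 1] = omega + phi * mu[t] + kappa * u
    return mu

# level-5 series with heavy-tailed noise and one gross outlier
y = 5.0 + rng.standard_t(df=3, size=500)
y[250] = 60.0  # outlier; a Gaussian filter would jump, the score filter barely moves
mu = t_score_filter(y, omega=0.5, phi=0.9, kappa=0.3, nu=3.0, scale=1.0)
```

Note that omega/(1 - phi) = 5 matches the true level, so the filter fluctuates around it; the bounded score keeps the response to the outlier at t = 250 negligible.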

We discuss robust M-estimation of INARCH models for count time series. These models assume the observation at each point in time to follow a Poisson distribution conditionally on the past, with the conditional mean being a linear function of previous observations. This simple linear structure allows us to transfer M-estimators for autoregressive models to this situation, with some simplifications being possible because the conditional variance given the past equals the conditional mean. We investigate the performance of the resulting generalized M-estimators using simulations. The usefulness of the proposed methods is illustrated by real data examples.
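A rough sketch of the idea, not the authors' estimator: for an INARCH(1) with conditional mean λ_t = β₀ + β₁X_{t-1} (and conditional variance equal to λ_t), a generalized M-estimate can be computed by iteratively reweighted least squares with Huber weights on Pearson residuals. Everything below (starting values, tuning constant, iteration count) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_inarch1(beta0, beta1, n):
    """INARCH(1): X_t | past ~ Poisson(beta0 + beta1 * X_{t-1})."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(beta0 / (1 - beta1))
    for t in range(1, n):
        x[t] = rng.poisson(beta0 + beta1 * x[t - 1])
    return x

def m_estimate_inarch1(x, k=1.5, n_iter=25):
    """M-estimation sketch via iteratively reweighted least squares:
    Pearson residuals are downweighted with Huber's psi, and the
    conditional variance (= conditional mean) supplies the base weights."""
    y = x[1:].astype(float)
    z = np.column_stack([np.ones(x.size - 1), x[:-1]])
    beta = np.array([y.mean() * 0.5, 0.25])  # crude starting values
    for _ in range(n_iter):
        lam = np.clip(z @ beta, 1e-6, None)
        r = (y - lam) / np.sqrt(lam)                            # Pearson residuals
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r)) / lam  # Huber weight / variance
        wz = z * w[:, None]
        beta = np.linalg.solve(z.T @ wz, wz.T @ y)              # weighted normal equations
    return beta

x = simulate_inarch1(beta0=2.0, beta1=0.4, n=4000)
beta_hat = m_estimate_inarch1(x)
```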

This is a revision of a paper that I presented at the John Nankervis Memorial Conference in July 2013. The purposes are to describe the research produced jointly by John and me and to offer some personal comments.

In a recent paper, Harvey *et al.* (2013) (HLT) propose a new unit root test that allows for the possibility of multiple breaks in trend. Their proposed test is based on the infimum of the sequence (across all candidate break points) of local GLS detrended augmented Dickey–Fuller-type statistics. HLT show that the power of their unit root test is robust to the magnitude of any trend breaks. In contrast, HLT show that the power of the only available alternative procedure, that of Carrion-i-Silvestre *et al.* (2009), which employs a pretest-based approach, can be very low indeed (even zero) for the magnitudes of trend breaks typically observed in practice. Both HLT and Carrion-i-Silvestre *et al.* (2009) base their approaches on the assumption of homoskedastic shocks. In this article, we analyse the impact of non-stationary volatility (for example, single and multiple abrupt variance breaks, smooth transition variance breaks and trending variances) on the tests proposed in HLT. We show that the limiting null distribution of the HLT unit root test statistic is not pivotal under non-stationary volatility. A solution to the problem, which does not require the practitioner to specify a parametric model for volatility, is provided using the wild bootstrap and is shown to perform well in practice. A number of different possible implementations of the bootstrap algorithm are discussed.
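The core mechanism of the wild bootstrap, which underlies the fix described above, can be sketched very simply: each bootstrap replicate multiplies the residuals pointwise by i.i.d. random signs, so any non-stationary volatility profile is reproduced in the resamples without being modelled. (This is only the resampling step, not the full test algorithm; names and sizes are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(2)

def wild_bootstrap_samples(resid, n_boot):
    """Generate wild bootstrap residual series: each replicate multiplies
    the original residuals pointwise by i.i.d. Rademacher draws, so any
    (possibly non-stationary) volatility pattern is preserved."""
    n = resid.size
    signs = rng.choice([-1.0, 1.0], size=(n_boot, n))
    return signs * resid

# toy residual series with an abrupt variance break at mid-sample
n = 400
eps = rng.standard_normal(n) * np.where(np.arange(n) < n // 2, 1.0, 3.0)
boot = wild_bootstrap_samples(eps, n_boot=200)

# the replicates inherit the variance profile of the original residuals
late_to_early = boot[:, n // 2:].var() / boot[:, :n // 2].var()
```

In a full implementation, the test statistic would be recomputed on each bootstrap series and the original statistic compared with the bootstrap distribution.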

This article explores the problem of estimating stationary autoregressive models from observed data using the Bayesian least absolute shrinkage and selection operator (LASSO). By characterizing the model in terms of partial autocorrelations, rather than coefficients, it becomes straightforward to guarantee that the estimated models are stationary. The form of the negative log-likelihood is exploited to derive simple expressions for the conditional likelihood functions, leading to efficient algorithms for computing the posterior mode by coordinate-wise descent and exploring the posterior distribution by Gibbs sampling. Both empirical Bayes and Bayesian methods are proposed for the estimation of the LASSO hyper-parameter from the data. Simulations demonstrate that the Bayesian LASSO performs well in terms of prediction when compared with a standard autoregressive order selection method.
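The stationarity guarantee from the partial-autocorrelation parameterization rests on a standard fact: any vector of partial autocorrelations in (-1, 1) maps, via the Durbin-Levinson recursion, to the coefficients of a stationary AR model. A minimal sketch of that map (the numerical values are illustrative):

```python
import numpy as np

def pacf_to_ar(pacf):
    """Map partial autocorrelations in (-1, 1) to AR coefficients via the
    Durbin-Levinson recursion; every such vector yields a stationary model."""
    phi = np.array([], dtype=float)
    for pk in pacf:
        # phi_{k,j} = phi_{k-1,j} - pk * phi_{k-1,k-j}, then phi_{k,k} = pk
        phi = np.concatenate([phi - pk * phi[::-1], [pk]])
    return phi

phi = pacf_to_ar([0.8, -0.5, 0.3])

# stationarity check: all roots of 1 - phi_1 z - ... - phi_p z^p
# must lie outside the unit circle
roots = np.roots(np.concatenate([-phi[::-1], [1.0]]))
```

Placing the LASSO prior on the partial autocorrelations rather than the coefficients therefore keeps every posterior draw inside the stationary region by construction.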

The article reviews methods of inference for single and multiple change-points in time series, when data are of retrospective (off-line) type. The inferential methods reviewed for a single change-point in time series include likelihood, Bayes, Bayes-type and some relevant non-parametric methods. Inference for multiple change-points requires methods that can handle large data sets and can be implemented efficiently for estimating the number of change-points as well as their locations. Our review in this important area focuses on some of the recent advances in this direction. Greater emphasis is placed on multivariate data while reviewing inferential methods for a single change-point in time series. Throughout the article, more attention is paid to estimation of unknown change-point(s) in time series, and this is especially true in the case of multiple change-points. Some specific data sets for which change-point modelling has been carried out in the literature are provided as illustrative examples under both single and multiple change-point scenarios.
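Among the single change-point methods such a review covers, the CUSUM statistic is perhaps the simplest; a minimal sketch for a mean shift (illustrative only, with an informal normalization):

```python
import numpy as np

rng = np.random.default_rng(3)

def cusum_changepoint(x):
    """Locate a single mean change-point as the maximizer of the absolute
    CUSUM |S_k| = |sum_{t<=k} (x_t - xbar)| over k = 1..n-1; also return
    a crudely normalized magnitude |S_k| / (sd * sqrt(n))."""
    s = np.cumsum(x - x.mean())
    k = np.argmax(np.abs(s[:-1])) + 1
    return k, np.abs(s[k - 1]) / (x.std() * np.sqrt(x.size))

# toy series with a mean shift from 0 to 1.5 after observation 150
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(1.5, 1.0, 100)])
k_hat, stat = cusum_changepoint(x)
```

Multiple change-point procedures (binary segmentation, dynamic programming, penalized approaches) typically build on statistics of this type applied recursively or within an optimization over segmentations.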

We consider a heteroscedastic nonparametric regression model with an autoregressive error process of finite known order *p*. The heteroscedasticity is incorporated using a scaling function defined at uniformly spaced design points on an interval [0,1]. We provide an innovative nonparametric estimator of the variance function and establish its consistency and asymptotic normality. We also propose a semiparametric estimator for the vector of autoregressive error process coefficients that is consistent and asymptotically normal as the sample size *T* increases. An explicit asymptotic variance-covariance matrix is obtained as well. Finally, the finite sample performance of the proposed method is tested in simulations.
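One standard route to a nonparametric variance-function estimate (not necessarily the authors' construction) is difference-based: squared first differences of the observations are approximately unbiased for twice the local variance when the mean is smooth, and can be kernel-smoothed. A sketch, simplified to independent errors rather than the AR(*p*) errors of the abstract (all names and the bandwidth are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def variance_function_estimate(y, x, grid, h):
    """Difference-based kernel estimate of the variance function:
    (y_{i+1} - y_i)^2 / 2 is approximately unbiased for sigma^2(x_i)
    when the mean is smooth; smooth these with a Gaussian kernel of
    bandwidth h, evaluated at the points in `grid`."""
    d2 = 0.5 * np.diff(y) ** 2
    xm = 0.5 * (x[1:] + x[:-1])
    w = np.exp(-0.5 * ((grid[:, None] - xm[None, :]) / h) ** 2)
    return (w @ d2) / w.sum(axis=1)

# design on [0, 1] with scale function sigma(x) = 0.5 + x
n = 2000
x = np.linspace(0.0, 1.0, n)
sigma = 0.5 + x
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)
v_hat = variance_function_estimate(y, x, grid=np.array([0.25, 0.75]), h=0.05)
```

With AR(*p*) errors, the differences would no longer be (conditionally) uncorrelated, which is precisely why a dedicated estimator and its asymptotic theory are needed.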

This article proposes a hybrid bootstrap approach to approximate the augmented Dickey–Fuller test by perturbing both the residual sequence and the minimand of the objective function. Since innovations can be dependent, this allows the inclusion of conditional heteroscedasticity models. The new bootstrap method is also applied to least absolute deviation-based unit root test statistics, which are efficient in handling heavy-tailed time-series data. The asymptotic distributions of resulting bootstrap tests are presented, and Monte Carlo studies demonstrate the usefulness of the proposed tests.

This article investigates the statistical properties of the recently introduced quantile periodogram for time series with time-dependent variance. The asymptotic distribution of the quantile periodogram is derived in the case where the time series consists of i.i.d. random variables multiplied by a time-dependent scale parameter. It is shown that the time-dependent variance is represented approximately additively in the mean of the asymptotic distribution of the quantile periodogram. It is also shown that the strength of the representation is proportional to the squared quantile of the i.i.d. random variables, so that a stronger characterization is expected at upper and lower quantile levels if the time series is centred at zero. These properties are further demonstrated by simulation results. The series of daily returns from the Dow Jones Industrial Average, which is known to exhibit heteroscedastic volatility, serves to motivate the investigation.

We consider a model for the discrete nonboundary wavelet coefficients of autoregressive fractionally integrated moving average (ARFIMA) processes in each scale. Because the utility of the wavelet transform for long-range dependent processes, as many authors have explained in the semiparametric literature, lies in its approximately whitening the transformed processes in each scale, there have been few studies in a parametric setting. In this article, we propose the model from the forms of the (generalized) spectral density functions (SDFs) of these coefficients. Since the discrete wavelet transform has the property of downsampling, we cannot represent these (generalized) SDFs directly. To overcome this problem, we define the discrete non-decimated nonboundary wavelet coefficients and compute their (generalized) SDFs. Using these functions, and restricting the wavelet filters to the Daubechies wavelets and least asymmetric filters, we obtain explicit forms for the (generalized) SDFs of the discrete nonboundary wavelet coefficients of ARFIMA processes in each scale. Additionally, we propose a model for the discrete nonboundary scaling coefficients in each scale.

A frequency domain methodology is proposed for estimating the parameters of covariance functions of stationary spatio-temporal processes. Finite Fourier transforms of the processes are defined at each location. Based on the joint distribution of these complex-valued random variables, an approximate likelihood function is constructed. The sampling properties of the estimators are investigated. It is observed that the expectation of these transforms can be considered a frequency domain analogue of the classical variogram; we call this measure the frequency variogram. The method is applied to simulated data and also to the Pacific wind speed data considered earlier by Cressie and Huang (1999). The proposed method does not depend on distributional assumptions about the process.
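The basic ingredients can be sketched as follows. The definition of the frequency variogram used here, the mean squared modulus of the difference of two locations' finite Fourier transforms, is an assumed simplification for illustration, not necessarily the paper's exact measure, and the normalization is one common convention:

```python
import numpy as np

rng = np.random.default_rng(5)

def fourier_transforms(x):
    """Finite Fourier transforms of the series at each location:
    x has shape (n_locations, n_time); returns (n_locations, n_freq)."""
    n = x.shape[1]
    return np.fft.rfft(x, axis=1) / np.sqrt(2 * np.pi * n)

def frequency_variogram(x):
    """Sample analogue of a 'frequency variogram' between location pairs:
    mean squared modulus of the difference of their Fourier transforms
    (an assumed simplification of the paper's measure)."""
    j = fourier_transforms(x)
    n_loc = x.shape[0]
    fv = np.zeros((n_loc, n_loc))
    for a in range(n_loc):
        for b in range(n_loc):
            fv[a, b] = np.mean(np.abs(j[a] - j[b]) ** 2)
    return fv

# three locations: the first two highly correlated, the third independent
n = 1024
common = rng.standard_normal(n)
x = np.stack([common + 0.1 * rng.standard_normal(n),
              common + 0.1 * rng.standard_normal(n),
              rng.standard_normal(n)])
fv = frequency_variogram(x)
```

As with the classical variogram, strongly dependent location pairs produce small values and weakly dependent pairs produce large ones; an approximate likelihood in the parameters of the covariance function would then be built from the joint distribution of these transforms.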

A two-step estimation method is proposed for the periodic autoregressive parameters, computed via residuals, when the observations consist of a trend plus a periodic autoregressive time series. The oracle efficiency of the proposed Yule–Walker-type estimator is established. The performance is illustrated by simulation studies and real data analysis.
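The two-step idea can be sketched for a PAR(1) with a linear trend: first remove the trend, then form a Yule–Walker-type regression within each season on the residuals. (The trend estimator, period, and coefficient values here are illustrative assumptions; the paper's trend estimation and efficiency theory are more general.)

```python
import numpy as np

rng = np.random.default_rng(6)

def two_step_par1(y, period):
    """Two-step estimator sketch: (1) remove a linear trend by OLS,
    (2) estimate periodic AR(1) coefficients from the residuals by a
    Yule-Walker-type regression within each season."""
    t = np.arange(y.size)
    beta = np.polyfit(t, y, deg=1)
    r = y - np.polyval(beta, t)
    phi = np.empty(period)
    for s in range(period):
        idx = np.arange(1, y.size)
        idx = idx[idx % period == s]
        phi[s] = (r[idx] @ r[idx - 1]) / (r[idx - 1] @ r[idx - 1])
    return phi

# simulate trend + PAR(1) with seasonal coefficients (0.6, -0.3)
n, period = 4000, 2
phi_true = np.array([0.6, -0.3])
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true[t % period] * x[t - 1] + rng.standard_normal()
y = 0.5 + 0.01 * np.arange(n) + x
phi_hat = two_step_par1(y, period)
```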
