We consider the amount of available information about an arbitrary future state of a Gaussian stochastic process. We derive an infinite series for the marginal mutual information in terms of the autocorrelation function. We derive an infinite series for the newly available information for prediction, the conditional mutual information, in terms of the moving average parameters, and directly characterize predictability in terms of sensitivity to random shocks. We apply our results to long memory, or more generally, hyperbolic decay models, and give information-theoretic characterizations of the transition from persistence to anti-persistence, stationary long memory to nonstationarity, and a stationary regime where the mutual information is not summable.

Time series data collected from arrays of seismometers are traditionally used to solve the core problems of detecting and estimating the waveform of a nuclear explosion or earthquake signal that propagates across the array. We consider here a parametric exponentially modulated autoregressive model, in which the signal is assumed to be convolved with random amplitudes following a Bernoulli normal mixture. The approach is shown to be potentially superior to the usual combination of narrow-band filtering and beam forming. It is applied to analyzing series observed from an earthquake in Yunnan Province in China received by a seismic array in Kazakhstan.

The traditional and most used measure of serial dependence in a time series is the autocorrelation function. This measure gives a complete characterization of dependence for a Gaussian time series, but it often fails for nonlinear time series models such as, for instance, the generalized autoregressive conditional heteroskedasticity (GARCH) model, where it is zero for all lags. The autocorrelation function is an example of a *global* measure of dependence. The purpose of this article is to apply to time series a well-defined *local* measure of serial dependence called the local Gaussian autocorrelation. It generally also works well for nonlinear models, and it can distinguish between positive and negative dependence. We use this measure to construct a test of independence based on the bootstrap technique. This procedure requires the choice of a bandwidth parameter, which is calculated using a cross-validation algorithm. To ensure the validity of the test, asymptotic properties are derived for the test functional and for the bootstrap procedure, together with a study of its power for different models. We compare the proposed test with one based on the ordinary autocorrelation and with one based on the Brownian distance correlation. The new test performs well. Finally, two empirical examples are presented.
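The ordinary-autocorrelation comparator mentioned above can be sketched as a simple randomization test of serial independence. This is a hedged illustration only: it uses a permutation scheme as a stand-in for the article's bootstrap procedure, and all function names are ours, not the article's.

```python
import numpy as np

def lag1_autocorr(x):
    """Ordinary lag-1 sample autocorrelation."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return float(np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc))

def permutation_independence_test(x, n_perm=999, seed=0):
    """p-value of a permutation test of serial independence based on the
    absolute lag-1 autocorrelation; permuting the series destroys any
    serial dependence, yielding a null reference distribution."""
    rng = np.random.default_rng(seed)
    stat = abs(lag1_autocorr(x))
    null = [abs(lag1_autocorr(rng.permutation(x))) for _ in range(n_perm)]
    # +1 correction keeps the p-value exact under the permutation null
    return (1 + sum(s >= stat for s in null)) / (n_perm + 1)

# An AR(1) series with positive dependence should be flagged clearly.
rng = np.random.default_rng(1)
ar = np.zeros(300)
for t in range(1, 300):
    ar[t] = 0.6 * ar[t - 1] + rng.normal()
print(permutation_independence_test(ar))
```

Unlike this global statistic, the local Gaussian autocorrelation of the article can detect dependence even when the ordinary autocorrelation is zero at all lags, as in GARCH models.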

Let *X* and *Y* be random variables such that *P*(*X* > *x*) ≤ *P*(*Y* > *x*) for all *x*. Then the random variable *X* is said to be smaller than the random variable *Y* in the usual stochastic order (denoted by *X* ≤_{st} *Y*). Stochastic orders and inequalities have been used in many areas where probability and statistics play a major role, such as reliability theory, survival analysis, economics, insurance and actuarial science. This book is an introduction to the topic of stochastic orders. In the preface, the authors state that ‘the aim of this work is to provide a general background on this topic for students and researchers who want to use stochastic orders as a tool for their research’. Chapter 1 is devoted to the introduction of several concepts for univariate and multivariate distributions. The usual stochastic order, increasing convex order, hazard rate and mean residual life orders, dispersive order, concentration order and the total time on test transform order, along with their applications to the comparison of coherent systems and collective risk models, are discussed in Chapter 2. Some characterization results based on these stochastic orders, sufficient conditions for the stochastic orders to hold, and the preservation of stochastic orders under convergence, mixtures, transformations and convolutions are presented. Different types of multivariate stochastic orders, such as the multivariate usual stochastic order, multivariate increasing convex order, multivariate residual life order, multivariate likelihood ratio order and multivariate dispersive order, along with a few applications, are presented in Chapter 3. A section on comparisons of mixtures of conditionally independent models and comparisons of ordered data is included in this chapter. The book is a concise introduction to the theory of stochastic orders and their applications for students at the master's level.
The authors succeed well in this aim, and a strong point of the book is its up-to-date, extensive list of references on stochastic orders and their applications. For students and researchers who would like to know more about stochastic orders at an advanced level, I suggest the book ‘Stochastic Orders’ by M. Shaked and J.G. Shanthikumar, Springer, 2007. With the advent of e-commerce and applications of functional data analysis to e-commerce, it would have been useful if the authors had included a short chapter, or some basic ideas, on extending stochastic orders to the infinite-dimensional setting, for the comparison of queueing models, of time series such as financial time series, and more generally of different types of stochastic processes. The book would also have benefited from careful copy-editing of its English.
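The defining inequality *P*(*X* > *x*) ≤ *P*(*Y* > *x*) can be checked on samples by comparing empirical survival functions on a common grid. The following is a minimal finite-sample sketch, not a formal hypothesis test, and the function name is illustrative:

```python
import numpy as np

def empirically_st_leq(x_sample, y_sample, grid=None):
    """Check X <=_st Y empirically: P-hat(X > t) <= P-hat(Y > t) on a grid.

    This is a finite-sample check of the defining inequality of the usual
    stochastic order, not a statistical test with controlled error rates.
    """
    if grid is None:
        grid = np.union1d(x_sample, y_sample)
    surv_x = np.array([np.mean(x_sample > t) for t in grid])
    surv_y = np.array([np.mean(y_sample > t) for t in grid])
    return bool(np.all(surv_x <= surv_y))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)
y = x + 1.0  # a location shift upward gives X <=_st Y
print(empirically_st_leq(x, y))  # True: every threshold is exceeded more often by y
```

Because `y` here is a pointwise shift of the same sample, the inequality holds at every grid point by construction; with independent samples, sampling noise can produce spurious violations near the tails.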

Given a stationary sequence, we are interested in the rate of convergence in the central limit theorem of the empirical quantiles and the empirical distribution function. Under a general notion of weak dependence, we show a Berry–Esseen result with optimal rate *n*^{−1/2}. The setup includes many prominent time series models, such as functions of ARMA or (augmented) GARCH processes. In this context, optimal Berry–Esseen rates for empirical quantiles appear to be novel.

Heteroskedasticity is a common feature of financial time series and is commonly addressed in the model building process through the use of autoregressive conditional heteroskedastic and generalized autoregressive conditional heteroskedastic (GARCH) processes. More recently, multivariate variants of these processes have been the focus of research, with attention given to methods seeking an efficient and economic estimation of a large number of model parameters. Because of the need to estimate many parameters, however, these models may not be suitable for modelling the now-prevalent high-frequency volatility data. One potentially useful way to bypass these issues is to take a functional approach. In this article, theory is developed for a new functional version of the GARCH process, termed fGARCH. The main results are concerned with the structure of the fGARCH(1,1) process, providing criteria for the existence of strictly stationary solutions in both the space of square-integrable functions and the space of continuous functions. An estimation procedure is introduced, and its consistency and asymptotic normality are verified. A small empirical study highlights potential applications to intraday volatility estimation.

In this article, an exact factor model is considered, and a Lagrange multiplier-type test is derived for a homogeneous unit root in the idiosyncratic component. It is shown that under sequential asymptotics, its null limiting distribution is standard normal, regardless of whether the factors are integrated, cointegrated or stationary. In a simulation study, the size and local power of the Lagrange multiplier-type test and some popular non-likelihood-based tests are compared. The simulation results show that the Lagrange multiplier-type test has the highest local power as the panel dimensions tend to infinity, with the actual size tending to the nominal size.

A corrected statement and proof of Theorem 4 of Jach, McElroy, and Politis (2012) are provided.

We show how different data types (stocks and flows) and temporal aggregation affect the size and power of the dynamic ordinary least squares residual-based Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test of the null of cointegration. Size may be more effectively controlled by setting the minimum number of leads equal to one – as opposed to zero – when selecting the lag/lead order of the dynamic ordinary least squares regression using aggregated data, but at a cost to power. If high-frequency data for one or more series are available – that is, the model has mixed sampling frequencies – we show how to effectively utilize the high-frequency data to increase power while controlling size.

This paper considers estimation and inference in semiparametric smooth coefficient dynamic panel data models. It proposes a class of local estimators that can be given an interesting information-theoretic interpretation, together with a number of test statistics that can be used to test for the (local) correct specification of the model and for the constancy of the smooth coefficients. The results of the paper are rather general, as they allow for the three cases of ‘large *N*, small *T*’, ‘small *N*, large *T*’ and ‘large *N*, large *T*’, for the possibility that some of the regressors might be correlated with the unobservable errors, and for the possibility that some of the variables used in the estimation might not be directly observable. Simulations show that the proposed method has competitive finite sample properties.

We propose a testing procedure based on the Wilcoxon two-sample test statistic in order to test for change-points in the mean of long-range dependent data. We show that the corresponding self-normalized test statistic converges in distribution to a non-degenerate limit under the hypothesis that no change occurred and that it diverges to infinity under the alternative of a change-point with constant height. Furthermore, we derive the asymptotic distribution of the self-normalized Wilcoxon test statistic under local alternatives, that is, under the assumption that the height of the level shift decreases as the sample size increases. Regarding the finite sample performance, simulation results confirm that the self-normalized Wilcoxon test yields a consistent discrimination between hypothesis and alternative and that its empirical size is already close to the significance level for moderate sample sizes.
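The core of the procedure above can be sketched as a scan of the two-sample Wilcoxon statistic over candidate split points. The sketch below is deliberately simplified: it uses the raw statistic and the maximizing split as a change-point estimate, whereas the article's test further self-normalizes the statistic to cope with long-range dependence; function names are illustrative.

```python
import numpy as np

def wilcoxon_changepoint(x):
    """Scan the (non-self-normalized) Wilcoxon-type statistic
    W(k) = sum_{i<=k} sum_{j>k} ( 1{x_i <= x_j} - 1/2 )
    over candidate split points k; return the k maximizing |W(k)|."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_abs, best_w = 1, -1.0, 0.0
    for k in range(1, n):
        # count pairs across the split where the later value is larger,
        # centred at its null expectation k*(n-k)/2
        w = np.sum(x[:k, None] <= x[None, k:]) - 0.5 * k * (n - k)
        if abs(w) > best_abs:
            best_k, best_abs, best_w = k, abs(w), w
    return best_k, best_w

# A level shift of two standard deviations at t = 100.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
k_hat, w = wilcoxon_changepoint(x)
print(k_hat)  # close to the true change-point at 100
```

Because the statistic uses only the ranks of pairwise comparisons, it is robust to heavy-tailed observations, which is the motivation for the Wilcoxon approach over CUSUM-type statistics based on means.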

We propose an outlier-robust and distribution-free test for the explosive AR(1) model with intercept, based on simplicial depth. In this model, simplicial depth reduces to counting the cases where three residuals have alternating signs. The asymptotic distribution of the test statistic is given by a specific Gaussian process. Conditions for consistency are given, and the power of the test in finite samples is compared with that of alternative tests. The new test outperforms these tests in the case of skewed errors and outliers. Finally, we apply the method to crack growth data and compare the results with an OLS approach.

This article consists of two parts. The first contains a brief review of global and local dependence measures, including the local Gaussian correlation. For the latter, the local correlation at a point *x* is obtained by approximating the given bivariate density *f* at *x* by a bivariate Gaussian and then taking the correlation of that bivariate Gaussian as the definition of the local Gaussian correlation. The second part consists of a bias study of the local Gaussian correlation. A small simulation experiment is performed. In the Appendix, the possibility of a neighbourhood-free definition of local Gaussian correlation is explored.

This article proposes an exactly/nearly unbiased estimator of the autocovariance function of a univariate time series with unknown mean. The estimator is a linear function of the usual sample autocovariances computed from the observed demeaned data. The idea is to stack the usual sample autocovariances into a vector and show that the expectation of this vector is a linear combination of population autocovariances. A matrix, which we label **A**, collects the weights in these linear combinations. When the population autocovariances at high lags are zero (small), exactly (nearly) unbiased estimators of the remaining autocovariances can be obtained using the inverse of upper blocks of the **A** matrix. The **A**-matrix estimators are shown to be asymptotically equivalent to the usual sample autocovariance estimators. The **A**-matrix estimators can be used to construct estimators of the autocorrelation function that have less bias than the usual estimators. Simulations show that the **A**-matrix estimators can substantially reduce bias while not necessarily increasing mean square error. More powerful tests for the null hypothesis of white noise are obtained using the **A**-matrix estimators.

This article investigates approximation and supremum approaches for testing linearity in smooth transition autoregressive (STAR) models. We show that since the approximation of STAR models by Taylor series expansions may not accurately describe the specific transition dynamic when the process is away from the null, LM-type tests may fail to detect the form of nonlinearity for which they are designed. Investigating a supremum approach, the article provides the asymptotic distribution of a SupWald test obtained by taking the supremum of a Wald statistic over the Cartesian product of the spaces of the transition and threshold parameters. Simulated asymptotic critical values for the resulting tests are provided for a wide range of autoregressive orders and are shown to differ across exponential and logistic STAR (ESTAR and LSTAR) models. Monte Carlo experiments show that SupWald tests for ESTAR and LSTAR models outperform LM-type tests, compare well with the recently developed score-based tests, and that each SupWald statistic performs best against the true alternative for which it is formed. SupWald tests also provide results that are consistent with the findings from (independently) estimating and diagnostically testing STAR models on real exchange rate data.

We introduce a wavelet characterization of continuous-time periodically correlated processes based on a linear combination of infinite-dimensional stationary processes. The finite version of this linear combination converges to the main process. First-order and second-order estimators based on the wavelets are presented. Via a simple algorithm, the periodically correlated process is simulated for a given autocovariance function. The proposed algorithm has two main advantages: first, it is fast, and second, it is distribution free. We indicate through four examples that the simulated data are periodically correlated with the desired period.

In this article, we propose a Bayesian non-parametric model for the analysis of multiple time series. We consider an autoregressive structure of order *p* for each of the series and borrow strength across the series by considering a common error population that is also evolving in time. The error populations (distributions) are assumed non-parametric, with laws based on a series of dependent Polya trees with zero median. This dependence is of order *q* and is achieved via a dependent beta process that links the branching probabilities of the trees. We study the prior properties and show how to obtain posterior inference. The model is tested in a simulation study and is illustrated with an analysis of the economic activity index of the 32 states of Mexico.

Of interest is comparing the out-of-sample forecasting performance of two competing models in the presence of possible instabilities. To that effect, we suggest using simple structural change tests, sup-Wald and *UDmax*, for changes in the mean of the loss differences. It is shown that the Giacomini and Rossi () tests have undesirable power properties: power that can be low and non-increasing as the alternative moves further from the null hypothesis. In contrast, our statistics are shown to have higher and monotonic power, especially the *UDmax* version. We use their empirical examples to show the practical relevance of the issues raised.

We propose a variance ratio-type unit root test where the nuisance parameter cancels asymptotically under both the null of a unit root and a local-to-unity alternative. Critical values and asymptotic power curves can be computed using standard numerical techniques. Our test exhibits higher power compared with tests that share the virtue of being free of tuning parameters. In fact, the local asymptotic power curves of our procedure get close to the power functions of the point optimal test, where the latter suffers from the drawback of having to correct for a nuisance parameter consistently.

For discrete panel data, the dynamic relationship between successive observations is often of interest. We consider a dynamic probit model for short panel data. A problem with estimating the dynamic parameter of interest is that the model contains a large number of nuisance parameters, one for each individual. Heckman proposed to use maximum likelihood estimation of the dynamic parameter, which, however, does not perform well if the individual effects are large. We suggest new estimators for the dynamic parameter, based on the assumption that the individual parameters are random and possibly large. Theoretical properties of our estimators are derived, and a simulation study shows they have some advantages compared with Heckman's estimator and the modified profile likelihood estimator for fixed effects.

The aim of this article is to estimate the probability distribution of power threshold generalized autoregressive conditional heteroskedasticity processes by establishing bounds for their finite-dimensional laws. These bounds depend only on the parameters of the model and on the distribution function of its independent generating process. The application of this study to some particular models allows us to conjecture that this procedure is an adequate alternative to the corresponding estimation using empirical distribution functions, and is particularly useful in the development of control charts for this kind of model.

Quantile autoregression (QAR) is particularly attractive for censored data. However, unlike the standard regression models, the autoregressive models must take account of censoring on both response and regressors. In this article, we show that the existing censored quantile regression methods produce consistent estimators for QAR models when using only the fully observed regressors. A new algorithm is proposed to provide a censored QAR estimator by adopting imputation methods. The algorithm redistributes probability mass of censored points appropriately and iterates towards self-consistent solutions. Monte Carlo simulations and empirical applications are conducted to demonstrate merits of the proposed method.

Bartlett correction, which improves the coverage accuracies of confidence regions, is one of the desirable features of empirical likelihood. For empirical likelihood with dependent data, previous studies on the Bartlett correction are mainly concerned with Gaussian processes. By establishing the validity of Edgeworth expansion for the signed root empirical log-likelihood ratio statistics, we show that the Bartlett correction is applicable to empirical likelihood for short-memory time series with possibly non-Gaussian innovations. The Bartlett correction is established under the assumptions that the variance of the innovation is known and the mean of the underlying process is zero for a single parameter model. In particular, the order of the coverage errors of Bartlett-corrected confidence regions can be reduced from *O*(*n*^{−1}) to *O*(*n*^{−2}).

Perron and Zhu (2005) established the consistency, convergence rate and limiting distributions of parameter estimates in time trends with a change in slope with or without a concurrent level change for the cases with I(1) or I(0) errors. We extend their analysis to the general case of fractionally integrated errors with memory parameter d^{∗}. Our results uncover interesting features; e.g., with a level shift allowed, the convergence rate for the break date estimate is the same for all d^{∗}∈(−0.5,0.5). In other cases, it is decreasing as d^{∗} increases. We also provide results about the so-called spurious break issue.

The existing estimation methods for the model parameters of the unified GARCH–Itô model (Kim and Wang, ) require long-period observations to obtain consistency. In practice, however, it is hard to believe that the structure of a stock price is stable over such a long period. In this article, we introduce an estimation method for the model parameters based on high-frequency financial data with a finite observation period. In particular, we establish a quasi-likelihood function for daily integrated volatilities, and realized volatility estimators are adopted to estimate the integrated volatilities. The model parameters are estimated by maximizing the quasi-likelihood function. We establish asymptotic theories for the proposed estimator. A simulation study is conducted to check the finite sample performance of the proposed estimator. We apply the proposed estimation approach to Bank of America stock price data.

This article examines asymptotically point optimal tests for parameter instability in realistic circumstances, when little information about the unstable parameter process and the error distribution is available. We first show that, under a correctly specified error distribution, if the unstable parameter processes converge weakly to a Wiener process, then any asymptotically optimal tests for structural breaks and for time-varying parameters are asymptotically equivalent. Our finding is then extended to a semi-parametric set-up in which the error distribution is treated as an unknown infinite-dimensional nuisance parameter. We find that semi-parametric tests can be adaptive without further restrictive conditions on the error distribution.

Multivariate processes with long-range dependent properties are found in a large number of applications including finance, geophysics and neuroscience. For real-data applications, the correlation between time series is crucial. Usual estimations of correlation can be highly biased owing to phase shifts caused by the differences in the properties of autocorrelation in the processes. To address this issue, we introduce a semiparametric estimation of multivariate long-range dependent processes. The parameters of interest in the model are the vector of the long-range dependence parameters and the long-run covariance matrix, also called functional connectivity in neuroscience. This matrix characterizes coupling between time series. The proposed multivariate wavelet-based Whittle estimation is shown to be consistent for the estimation of both the long-range dependence and the covariance matrix and to encompass both stationary and nonstationary processes. A simulation study and a real-data example are presented to illustrate the finite-sample behaviour.

Many studies record replicated time series epochs from different groups with the goal of using frequency domain properties to discriminate between the groups. In many applications, there exists variation in cyclical patterns from time series in the same group. Although a number of frequency domain methods for the discriminant analysis of time series have been explored, there is a dearth of models and methods that account for within-group spectral variability. This article proposes a model for groups of time series in which transfer functions are modelled as stochastic variables that can account for both between-group and within-group differences in spectra that are identified from individual replicates. An ensuing discriminant analysis of stochastic cepstra under this model is developed to obtain parsimonious measures of relative power that optimally separate groups in the presence of within-group spectral variability. The approach possesses favourable properties in classifying new observations and can be consistently estimated through a simple discriminant analysis of a finite number of estimated cepstral coefficients. Benefits in accounting for within-group spectral variability are empirically illustrated in a simulation study and through an analysis of gait variability.

This article explores the problem of estimating stationary autoregressive models from observed data using the Bayesian least absolute shrinkage and selection operator (LASSO). By characterizing the model in terms of partial autocorrelations, rather than coefficients, it becomes straightforward to guarantee that the estimated models are stationary. The form of the negative log-likelihood is exploited to derive simple expressions for the conditional likelihood functions, leading to efficient algorithms for computing the posterior mode by coordinate-wise descent and exploring the posterior distribution by Gibbs sampling. Both empirical Bayes and Bayesian methods are proposed for the estimation of the LASSO hyper-parameter from the data. Simulations demonstrate that the Bayesian LASSO performs well in terms of prediction when compared with a standard autoregressive order selection method.
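The reason the partial-autocorrelation characterization guarantees stationarity is that the standard Levinson–Durbin-type recursion maps any point of (−1, 1)^p to the coefficients of a causal, stationary AR(p) model. A minimal sketch of that map (the Bayesian LASSO machinery itself is not reproduced here):

```python
import numpy as np

def pacf_to_ar(pacf):
    """Map partial autocorrelations r_1, ..., r_p in (-1, 1) to AR(p)
    coefficients via the step-up recursion
        phi_j^{(k)} = phi_j^{(k-1)} - r_k * phi_{k-j}^{(k-1)},
        phi_k^{(k)} = r_k.
    Any such input yields a stationary model."""
    phi = np.array([], dtype=float)
    for r in pacf:
        phi = np.append(phi - r * phi[::-1], r)
    return phi

phi = pacf_to_ar([0.5, -0.3, 0.2])
# Stationarity check: all roots of 1 - phi_1 z - ... - phi_p z^p must
# lie outside the unit circle.
roots = np.roots(np.concatenate(([1.0], -phi))[::-1])
print(np.all(np.abs(roots) > 1.0))  # True
```

Placing the LASSO prior on the partial autocorrelations rather than on the coefficients therefore lets posterior exploration move freely in a box, with stationarity enforced automatically by the transformation.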

The article reviews methods of inference for single and multiple change-points in time series, when data are of retrospective (off-line) type. The inferential methods reviewed for a single change-point in time series include likelihood, Bayes, Bayes-type and some relevant non-parametric methods. Inference for multiple change-points requires methods that can handle large data sets and can be implemented efficiently for estimating the number of change-points as well as their locations. Our review in this important area focuses on some of the recent advances in this direction. Greater emphasis is placed on multivariate data while reviewing inferential methods for a single change-point in time series. Throughout the article, more attention is paid to estimation of unknown change-point(s) in time series, and this is especially true in the case of multiple change-points. Some specific data sets for which change-point modelling has been carried out in the literature are provided as illustrative examples under both single and multiple change-point scenarios.

We consider a heteroscedastic nonparametric regression model with an autoregressive error process of finite known order *p*. The heteroscedasticity is incorporated using a scaling function defined at uniformly spaced design points on the interval [0,1]. We provide an innovative nonparametric estimator of the variance function and establish its consistency and asymptotic normality. We also propose a semiparametric estimator for the vector of autoregressive error process coefficients that is consistent and asymptotically normal for a sample of size *T*. An explicit asymptotic variance-covariance matrix is obtained as well. Finally, the finite sample performance of the proposed method is tested in simulations.

No abstract is available for this article.

No abstract is available for this article.

In this article, we propose a nonparametric procedure for validating the assumption of stationarity in multivariate locally stationary time series models. We develop a bootstrap-assisted test based on a Kolmogorov–Smirnov-type statistic, which tracks the deviation of the time-varying spectral density from its best stationary approximation. In contrast to all other nonparametric approaches, which have been proposed in the literature so far, the test statistic does not depend on any regularization parameters like smoothing bandwidths or a window length, which is usually required in a segmentation of the data. We additionally show how our new procedure can be used to identify the components where non-stationarities occur and indicate possible extensions of this innovative approach. We conclude with an extensive simulation study, which shows finite-sample properties of the new method and contains a comparison with existing approaches.

Long-memory effects can be found in many data sets from finance to hydrology. Therefore, models that can reflect these properties have become more popular in recent years. Mandelbrot–Van Ness fractional Lévy processes allow for such stationary long-memory effects in their increments and have been used in different settings ranging from fractionally integrated continuous-time ARMA–GARCH-type setups to general stochastic differential equations. However, their conditional distributions have not yet been considered in detail. In this article, we provide a closed formula for their conditional characteristic functions and suggest several applications to continuous-time ARMA–GARCH-type models with long memory.

A two-step approach to conditional value at risk estimation is considered. First, a generalized quasi-maximum likelihood estimator is employed to estimate the volatility parameter; then the empirical quantile of the residuals serves to estimate the theoretical quantile of the innovations. When the instrumental density *h* of the generalized quasi-maximum likelihood estimator is not the Gaussian density, both the estimation of the volatility and that of the quantile are generally asymptotically biased. However, the two errors counterbalance each other and lead to a consistent estimator of the value at risk. We obtain the asymptotic behavior of this estimator and show how to choose *h* optimally.
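The two-step construction can be sketched in a few lines, assuming the first-step volatility estimates are already available (here the QMLE fit is skipped and a known volatility path is used; function and variable names are illustrative, not the article's):

```python
import numpy as np

def two_step_var(returns, sigma_hat, sigma_next, alpha=0.05):
    """Two-step conditional VaR: (i) rescale returns by the estimated
    volatilities (taken as given, e.g. from a generalized QMLE fit),
    (ii) take the empirical alpha-quantile of the residuals and scale
    it by the one-step-ahead volatility forecast."""
    residuals = np.asarray(returns) / np.asarray(sigma_hat)
    q = np.quantile(residuals, alpha)
    return float(-sigma_next * q)  # reported as a positive loss level

# Toy check with a known deterministic volatility path and Gaussian
# innovations: the 5% VaR should approach sigma_next * 1.645.
rng = np.random.default_rng(5)
n = 20000
sigma = 1.0 + 0.5 * np.sin(np.arange(n) / 50.0)
returns = sigma * rng.normal(size=n)
var_5 = two_step_var(returns, sigma, sigma_next=1.0)
print(var_5)
```

The article's point is subtler than this sketch: with a non-Gaussian instrumental density, each step alone is asymptotically biased, yet the product of the two estimates remains consistent for the value at risk.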

For autoregressive count data time series, a goodness-of-fit test based on the empirical joint probability generating function is considered. The underlying process is contained in a general class of Markovian models satisfying a drift condition. Asymptotic theory for the test statistic is provided, including a functional central limit theorem for the non-parametric estimation of the stationary distribution and a parametric bootstrap method. Connections between the new approach and existing tests for count data time series based on moment estimators appear in limiting scenarios. Finally, the test is applied to a real data set.
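The building block of such a test, the empirical joint probability generating function, is straightforward to compute. The sketch below shows it at lag 1, together with the factorized pgf it would be compared against under an iid Poisson null; the article's actual test statistic, its limit theory and the parametric bootstrap are beyond this illustration, and the names are ours.

```python
import numpy as np

def empirical_joint_pgf(x, u, v):
    """Empirical joint pgf of (X_t, X_{t-1}):
    (1/(n-1)) * sum_t u**X_t * v**X_{t-1},  for u, v in [0, 1]."""
    x = np.asarray(x)
    return float(np.mean(u ** x[1:] * v ** x[:-1]))

# Under an iid Poisson(lam) null the joint pgf factorizes:
# E[u^X v^Y] = exp(lam*(u-1)) * exp(lam*(v-1)); a goodness-of-fit
# statistic can compare the empirical joint pgf with this product
# over a grid of (u, v) points.
rng = np.random.default_rng(4)
lam = 2.0
x = rng.poisson(lam, 20000)
u = v = 0.5
emp = empirical_joint_pgf(x, u, v)
theo = np.exp(lam * (u - 1)) * np.exp(lam * (v - 1))
print(abs(emp - theo))  # small under independence
```

Because the pgf is a bounded functional of the data, such statistics remain well behaved for heavy-tailed count distributions where moment-based tests can struggle.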

This work develops maximum likelihood-based unit root tests in the noncausal autoregressive (NCAR) model with a non-Gaussian error term formulated by Lanne and Saikkonen (2011, *Journal of Time Series Econometrics* 3, Issue 3, Article 2). Finite-sample properties of the tests are examined via Monte Carlo simulations. The results show that the size properties of the tests are satisfactory and that clear power gains against stationary NCAR alternatives can be achieved in comparison with available alternative tests. In an empirical application to a Finnish interest rate series, evidence in favour of an NCAR model with leptokurtic errors is found.

Stationary processes are a natural choice as statistical models for time series data, owing to their good estimating properties. In practice, however, alternative models are often proposed that sacrifice stationarity in favour of the greater modelling flexibility required by many real-life applications. We present a family of time-homogeneous Markov processes with nonparametric stationary densities, which retain the desirable statistical properties for inference, while achieving substantial modelling flexibility, matching those achievable with certain non-stationary models. A latent extension of the model enables exact inference through a trans-dimensional Markov chain Monte Carlo method. Numerical illustrations are presented.
