This article extends and generalizes the variance-ratio (VR) statistic by employing an estimator of the asymptotic covariance matrix of the sample autocorrelations. The estimator is consistent under the null for general classes of innovations exhibiting statistical dependence, including exponential generalized autoregressive conditional heteroskedasticity and non-martingale difference sequence processes. Monte Carlo experiments show that our generalized test statistics have good finite-sample size properties and power superior to other recently developed VR versions. In an application to two major US stock indices, our new generalized VR tests provide stronger rejections of the null than do competing VR tests.
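For illustration (not from the article), the classical unadjusted VR statistic that such tests generalize can be computed in a few lines; the function name and the simple biased normalisation below are choices of this sketch, not the authors':

```python
import numpy as np

def variance_ratio(returns, k):
    """Classical VR(k): variance of overlapping k-period sums divided by
    k times the one-period variance (simple biased normalisation)."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    var1 = np.mean((r - mu) ** 2)
    rk = np.convolve(r, np.ones(k), mode="valid")  # overlapping k-period sums
    vark = np.mean((rk - k * mu) ** 2)
    return vark / (k * var1)

rng = np.random.default_rng(0)
iid = rng.standard_normal(5000)
print(variance_ratio(iid, 5))  # close to 1 under the i.i.d. null
```

Values far from 1 indicate serial dependence; the testing problem is then how to approximate the statistic's sampling variability, which is where the covariance-matrix estimator above enters.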

Many unit root test statistics are based on detrended data, with the method of generalized least squares (GLS) detrending being popular in the setting of a near-integrated model. This article determines the properties of some associated limiting distributions when the GLS detrending is based on a linear time trend. A fundamental result for the moment generating function of two key functionals of the relevant stochastic process is provided and used to compute probability density functions and cumulative distribution functions, as well as means and variances, of the limiting distributions of some statistics of interest. The exact moments and percentiles of some of these distributions are compared with those obtained by simulations, and it is found that, even with a large number of replications and a large sample size, the errors resulting from the simulation methods are not negligible. Some further applications, including a comparison of limiting power functions of different unit root test statistics and the consideration of a more complicated statistic, are also provided.

This article applies the mildly explosive/multiple bubbles testing methodology developed by Phillips *et al.* (2015a, International Economic Review, forthcoming) to examine the recent time series behaviour of the six main London Metal Exchange non-ferrous metals prices. We detect periods of mild explosivity in the cash and 3-month futures price series in each of copper, nickel, lead, zinc and tin, but not in aluminium. We argue that convenience yield, although the formal counterpart to dividend yield in commodity markets, is not a useful basis on which to assess whether observed explosivity is indicative of bubbles. We construct other measures that suggest the observed explosivity in the non-ferrous metals market can be associated with tight physical markets.

Contextual factors usually assume an important role in determining firms' productive efficiencies. Nevertheless, identifying them in a regression framework might be complicated. The problem arises from the efficiencies being correlated with each other when estimated by Data Envelopment Analysis, rendering standard inference methods invalid. Simar and Wilson (2007) suggest the use of bootstrap algorithms that allow for valid statistical inference in this context. This article extends their work by proposing a double bootstrap algorithm for obtaining confidence intervals with improved coverage probabilities. Moreover, acknowledging the computational burden associated with iterated bootstrap procedures, we provide an algorithm based on deterministic stopping rules, which is less computationally demanding. Monte Carlo evidence shows considerable improvement in the coverage probabilities after iterating the bootstrap procedure. The results also suggest that percentile confidence intervals perform better than their basic counterpart.

This article examines the ability of recently developed statistical learning procedures, such as random forests and support vector machines, to forecast the first two moments of daily stock market returns. These tools have the advantage of flexible nonlinear regression functions even in the presence of many potential predictors. We consider two cases: one where the agent's information set includes only the past of the return series, and one where it also includes past values of relevant economic series, such as interest rates, commodity prices or exchange rates. Even though these procedures seem to be of little use for predicting returns, there is real potential for some of them, especially support vector machines, to improve on the out-of-sample forecasting ability of the standard GARCH(1,1) model for squared returns. The researcher has to be cautious about the number of predictors employed and about the specific implementation of the procedures, since using many predictors with the default settings of standard computing packages leads to overfitted models and larger standard errors.
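As a minimal sketch of the GARCH(1,1) benchmark mentioned above, the variance recursion can be run with fixed (not estimated) parameters on simulated data; in practice the parameters would be obtained by quasi-maximum likelihood:

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """Conditional variances h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
    initialised at the sample variance; parameters are taken as given."""
    r = np.asarray(r, float)
    h = np.empty(len(r))
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

# simulate a GARCH(1,1) path and compare forecasts of squared returns
rng = np.random.default_rng(6)
n, omega, alpha, beta = 4000, 0.05, 0.1, 0.85
r = np.empty(n)
h_true = np.empty(n)
h_true[0] = omega / (1 - alpha - beta)
r[0] = np.sqrt(h_true[0]) * rng.standard_normal()
for t in range(1, n):
    h_true[t] = omega + alpha * r[t - 1] ** 2 + beta * h_true[t - 1]
    r[t] = np.sqrt(h_true[t]) * rng.standard_normal()

h_hat = garch11_filter(r, omega, alpha, beta)
mse_garch = np.mean((r ** 2 - h_hat) ** 2)
mse_const = np.mean((r ** 2 - (r ** 2).mean()) ** 2)
print(mse_garch, mse_const)  # the GARCH forecasts attain the smaller MSE here
```

The competing statistical-learning procedures would replace `h_hat` with a fitted nonlinear regression of squared returns on the chosen predictors.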

In this article, we show that in time series models with in-mean and level effects, persistence will be transmitted from the conditional variance to the conditional mean and vice versa. Hence, by studying the conditional mean/variance independently, one will obtain a biased estimate of the true degree of persistence. For the specific example of an AR(1)-APARCH(1,1)-in-mean-level process, we derive the autocorrelation function, the impulse response function and the optimal predictor. Under reasonable assumptions, the AR(1)-APARCH(1,1)-in-mean-level process will be observationally equivalent to an autoregressive moving average (ARMA)(2,1) process with the largest autoregressive root being close to one. We illustrate the empirical relevance of our results with applications to S&P 500 return and US inflation data.

Unlike with independent data, smoothed bootstraps have received little consideration for time series, although data smoothing within resampling can improve bootstrap approximations, especially when target distributions depend on smooth population quantities (e.g., marginal densities). For approximating a broad class of statistics formulated through statistical functionals (e.g., *L*-estimators and sample quantiles), we propose a smooth bootstrap by modifying a state-of-the-art (extended) tapered block bootstrap (TBB). Our treatment shows that the smooth TBB applies to time series inference cases not formally established with other TBB versions. Simulations also indicate that smoothing enhances the block bootstrap.

This article investigates the accuracy of bootstrap-based bias correction of persistence measures for long-memory fractionally integrated processes. The bootstrap method is based on the semi-parametric sieve approach, with the dynamics in the long-memory process captured by an autoregressive approximation. With a view to improving accuracy, the sieve method is also applied to data prefiltered by a semi-parametric estimate of the long-memory parameter. Both versions of the bootstrap technique are used to estimate the finite-sample distributions of the sample autocorrelation coefficients and the impulse response coefficients and, in turn, to bias adjust these statistics. The accuracy of the resultant estimators in the case of the autocorrelation coefficients is also compared with that yielded by analytical bias adjustment methods when available. The basic sieve technique is seen to yield a reduction in the bias of both persistence measures. The prefiltered sieve produces a substantial further reduction in the bias of the estimated impulse response function, whilst the extra improvement yielded by prefiltering in the case of the sample autocorrelation function is shown to depend heavily on the accuracy of the prefilter.
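A rough sketch of the basic sieve step (an AR(p) approximation fitted by least squares, with residuals resampled i.i.d.); the semiparametric prefiltering discussed above is omitted, and the order p is fixed rather than chosen by a rule:

```python
import numpy as np

def ar_sieve_bootstrap(x, p, rng):
    """One AR-sieve pseudo-series: fit an AR(p) approximation by least
    squares, then rebuild the series from i.i.d.-resampled residuals."""
    x = np.asarray(x, float)
    n = len(x)
    y = x[p:]
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - k:n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    resid -= resid.mean()                    # centre the residuals
    xb = x[:p].tolist()                      # condition on the first p values
    for eps in rng.choice(resid, size=n - p, replace=True):
        lags = xb[-1:-p - 1:-1]              # newest value first, matching coef[1:]
        xb.append(coef[0] + float(np.dot(coef[1:], lags)) + eps)
    return np.asarray(xb)

rng = np.random.default_rng(8)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]

xb = ar_sieve_bootstrap(x, p=6, rng=rng)
print(len(xb))
```

Repeating this over many replicates gives finite-sample distributions of statistics such as sample autocorrelations, from which bias adjustments can be read off.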

In a standard cointegrating framework, Phillips (1991) introduced the weighted covariance (WC) estimator of cointegrating parameters. Later, Marinucci (2000) applied this estimator to fractional circumstances and, like Phillips (1991), analysed the so-called small-*b* asymptotic approximation to its sampling distribution. Recently, an alternative limiting theory (fixed-*b* asymptotics) has been successfully employed to approximate sampling distributions. With the purpose of comparing both approaches, we derive here the fixed-*b* limit of WC estimators in a fractional setting, filling also some gaps in the traditional (small-*b*) theory. We also provide some Monte Carlo evidence that suggests that the fixed-*b* limit is more accurate.

In this paper, we propose a model-free bootstrap method for the empirical process under absolute regularity. More precisely, consistency of an adapted version of the so-called dependent wild bootstrap, which was introduced by Shao and is very easy to implement, is proved under minimal conditions on the tuning parameter of the procedure. We show how our results can be applied to construct confidence intervals for unknown parameters and to approximate critical values for statistical tests. In a simulation study, we investigate the size properties of a bootstrap-aided Kolmogorov-Smirnov test and show that our method is competitive with standard block bootstrap methods in finite samples.
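The core idea of a dependent wild bootstrap, multiplying centred observations by a dependent Gaussian weight series, can be sketched as follows; the Bartlett-kernel covariance and the bandwidth choice are assumptions of this sketch, not the paper's adapted version:

```python
import numpy as np

def dwb_sample(x, l, rng):
    """One dependent-wild-bootstrap pseudo-sample. The weights are a
    mean-zero Gaussian process with covariance K((i-j)/l), where
    K(u) = max(0, 1 - |u|) is the Bartlett kernel and l is the bandwidth."""
    x = np.asarray(x, float)
    n = len(x)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = np.clip(1.0 - lags / l, 0.0, None)
    # draw dependent weights via eigendecomposition (robust to semidefiniteness)
    vals, vecs = np.linalg.eigh(cov)
    root = vecs * np.sqrt(np.clip(vals, 0.0, None))
    w = root @ rng.standard_normal(n)
    return x.mean() + (x - x.mean()) * w

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
boot_means = np.array([dwb_sample(x, l=5, rng=rng).mean() for _ in range(500)])
print(boot_means.std())  # bootstrap estimate of the sd of the sample mean
```

Because the weights are serially dependent, the pseudo-samples mimic the serial dependence of the data without requiring any blocking of observations.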

In a recent paper, *Cavaliere et al.* develop bootstrap implementations of the popular likelihood-based co-integration rank tests and associated sequential rank determination procedures of *Johansen*. By using estimates of the parameters of the underlying co-integrated VAR model obtained under the restriction of the null hypothesis, they show that consistent bootstrap inference can be obtained for processes whose deterministic component is either zero, a restricted constant or a restricted trend. In this article, we extend their bootstrap approach to allow the deterministic component to follow the practically relevant cases of either an unrestricted constant or an unrestricted trend from *Johansen*. A full asymptotic theory is provided for these two cases, establishing the asymptotic validity of the resulting bootstrap tests. Our results, taken together with those in *Cavaliere et al.*, therefore show that the bootstrap approach based on imposing the reduced rank null hypothesis is valid for all five of these deterministic settings. Monte Carlo evidence demonstrates the improvements that the proposed bootstrap methods can deliver over the corresponding asymptotic procedures.

Let {*N*(*t*),*t*≥0} be a homogeneous Poisson process with jump moments {*T*_{k},*k*≥1}, independent of an almost periodically correlated process of interest. The process is not observed continuously but only at the time moments {*T*_{k},*k*≥1}. In this paper, we focus on the estimation of its cyclic means. The asymptotic normality of the rescaled estimation error is shown. Additionally, a bootstrap method based on the circular block bootstrap is proposed. The consistency of the bootstrap technique is proved, and bootstrap pointwise and simultaneous confidence intervals for the cyclic means are constructed. The results are illustrated by a simulated data example.

We propose an integer-valued stochastic process with conditional marginal distribution belonging to the class of infinitely divisible discrete probability laws. With this proposal, we introduce a wide class of models for count time series that includes the Poisson integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) model (Ferland et al., 2006) and the negative binomial and generalized Poisson INGARCH models (Zhu, 2011, 2012a). The main probabilistic analysis of this process is developed, stating in particular first-order and second-order stationarity conditions. The existence of a strictly stationary and ergodic solution is established in a subclass including the Poisson and generalized Poisson INGARCH models.
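To make the setup concrete, the Poisson INGARCH(1,1) special case mentioned above is easy to simulate (parameter values here are arbitrary):

```python
import numpy as np

def simulate_ingarch(n, omega, alpha, beta, rng):
    """Poisson INGARCH(1,1): X_t | past ~ Poisson(lam_t), with
    lam_t = omega + alpha * X_{t-1} + beta * lam_{t-1}.
    First-order stationarity requires alpha + beta < 1."""
    lam = np.empty(n)
    x = np.empty(n, dtype=int)
    lam[0] = omega / (1.0 - alpha - beta)   # start at the stationary mean
    x[0] = rng.poisson(lam[0])
    for t in range(1, n):
        lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]
        x[t] = rng.poisson(lam[t])
    return x, lam

rng = np.random.default_rng(2)
x, lam = simulate_ingarch(20000, omega=1.0, alpha=0.3, beta=0.4, rng=rng)
print(x.mean())  # near the stationary mean omega / (1 - alpha - beta) = 10/3
```

Replacing the Poisson draw with another infinitely divisible discrete law (e.g., negative binomial) gives the wider class described above.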

This article considers bootstrap inference in a factor-augmented regression context where the errors could potentially be serially correlated. This generalizes results in Gonçalves & Perron (2014) and makes the bootstrap applicable to forecasting contexts where the forecast horizon is greater than one. We propose and justify two residual-based approaches, a block wild bootstrap and a dependent wild bootstrap. Our simulations document improvement in coverage rates of confidence intervals for the coefficients when using the block wild bootstrap or the dependent wild bootstrap relative to both asymptotic theory and the wild bootstrap when serial correlation is present in the regression errors.

The paper introduces a *functional* time series (lagged) regression model. The impulse-response coefficients in such a model are operators acting on a separable Hilbert space, which is the function space *L*^{2} in applications. A spectral approach to the estimation of these coefficients is proposed and asymptotically justified under a general nonparametric condition on the temporal dependence of the input series. Since the data are infinite-dimensional, the estimation involves a spectral-domain dimension-reduction technique. Consistency of the estimators is established under general data-dependent assumptions on the rate of the dimension-reduction parameter. Their finite-sample performance is evaluated by a simulation study that compares two ad hoc approaches to dimension reduction with an alternative, asymptotically justified method.

This paper makes two contributions in relation to the use of information criteria for inference on structural breaks when the coefficients of a linear model with endogenous regressors may experience multiple changes. First, we show that suitably defined information criteria yield consistent estimators of the number of breaks, when employed in the second stage of a two-stage least squares (2SLS) procedure with breaks in the reduced form taken into account in the first stage. Second, a Monte Carlo analysis investigates the finite sample performance of a range of criteria based on Bayesian information criterion (BIC), Hannan–Quinn information criterion (HQIC) and Akaike information criterion (AIC) for equations estimated by 2SLS. Versions of the consistent criteria BIC and HQIC perform well overall when the penalty term weights estimation of each break point more heavily than estimation of each coefficient, while AIC is inconsistent and badly over-estimates the number of true breaks.

This article is devoted to extending the notion of robustness to the context of Markovian data, based on their (pseudo-)regenerative properties, and to studying its impact on the regenerative block-bootstrap (RBB). Precisely, it is shown how to define the ‘influence function’ in this framework, so as to measure the impact of (pseudo-)regeneration data blocks on the statistic of interest. We also define the concepts of regeneration-based signed linear rank statistic and *L*-statistic, as specific functionals of the regeneration blocks, which can be made robust against outliers in this sense. The asymptotic validity of the approximate RBB (ARBB) is established here, when applied to such statistics. For illustration purposes, we compare (A)RBB confidence intervals for the mean, the median and some *L*-statistics related to the (supposedly existing) stationary probability distribution *μ*(d*x*) of the observed chain, as well as for their robustified versions. Copyright © 2015 Wiley Publishing Ltd

We propose an approach to investigate the unit root properties of individual units in a time series panel or large multivariate time series, based on testing user-defined increasing proportions of hypothesized I(0) units sequentially. Asymptotically valid critical values are obtained using the block bootstrap. This sequential approach has an advantage over multiple testing approaches as it can exploit the (cross-sectional) dimension of the system, which the multiple testing approaches cannot do effectively. A simulation study and an empirical application are conducted to analyse the relative performance of the approach in comparison with multiple testing approaches. These demonstrate the usefulness of our method, in particular in systems with a relatively small time dimension.

We propose a new resampling method, the dependent random weighting, for both time series and random fields. The method is a generalization of the traditional random weighting in that the weights are made to be temporally or spatially dependent and are adaptive to the configuration of the data. Unlike the block-based bootstrap or subsampling methods, the dependent random weighting can be used for irregularly spaced time series and spatial data without any implementational difficulty. Consistency of the distribution approximation is shown for both equally and unequally spaced time series. Simulation studies illustrate the finite sample performance of the dependent random weighting in comparison with the existing counterparts for both one-dimensional and two-dimensional irregularly spaced data.

This article examines tests for a unit root in skip-sampled data. We propose a generalization of the usual discrete-time framework that allows for a continuous-time detrending procedure prior to estimation of the resulting discrete-time dynamic model, which embodies exactly the restrictions imposed by the process of temporal aggregation. A simulation study reveals that taking these restrictions into account can yield improved size and power properties compared with a statistic based on a model that ignores the temporal aggregation, and an empirical illustration of the methods using monthly producer price data for the UK and the USA is provided. Further avenues for investigation in future work are also highlighted.

Many statistical applications require the forecast of a random variable of interest over several periods into the future. The sequence of individual forecasts, one period at a time, is called a path forecast, where the term *path* refers to the sequence of individual future realizations of the random variable. The problem of constructing a corresponding joint prediction region has been rather neglected in the literature so far: such a region is supposed to contain the entire future path with a prespecified probability. We develop bootstrap methods to construct joint prediction regions. The resulting regions are proven to be asymptotically consistent under a mild high-level assumption. We compare the finite-sample performance of our joint prediction regions with some previous proposals via Monte Carlo simulations. An empirical application to a real data set is also provided.
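The sup-t idea behind joint prediction regions, widening pointwise bands until the whole path is covered with the nominal probability, can be sketched on simulated paths from a known AR(1); a genuine bootstrap would instead resample paths from an estimated model:

```python
import numpy as np

rng = np.random.default_rng(3)
# sketch with a known AR(1): y_{t+1} = phi * y_t + e, e ~ N(0, 1)
phi, y_last, H, B = 0.7, 1.5, 8, 2000

# simulate B future paths of length H from the last observed value
paths = np.empty((B, H))
for b in range(B):
    y = y_last
    for h in range(H):
        y = phi * y + rng.standard_normal()
        paths[b, h] = y

center = paths.mean(axis=0)
scale = paths.std(axis=0)
# sup-t calibration: one common multiplier so the WHOLE path is covered jointly
sup_t = np.quantile(np.abs((paths - center) / scale).max(axis=1), 0.90)
lower, upper = center - sup_t * scale, center + sup_t * scale
coverage = np.mean(np.all((paths >= lower) & (paths <= upper), axis=1))
print(coverage)  # close to the nominal 0.90 on these calibration paths
```

Pointwise 90% intervals at each horizon would jointly cover the full path far less than 90% of the time; the single `sup_t` multiplier restores joint coverage.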

This paper is based on one presented to the John Nankervis Memorial Conference at the University of Essex in July 2013. It outlines John's pioneering contribution to the econometric modelling of the demand for mail.

The concept of the autoregressive sieve bootstrap is investigated for the case of vector autoregressive (VAR) time series. This procedure fits a finite-order VAR model to the given data and generates residual-based bootstrap replicates of the time series. The paper explores the range of validity of this resampling procedure and provides a general check criterion, which allows one to decide whether the VAR sieve bootstrap asymptotically works for a specific statistic or not. In the latter case, we point out the exact reason that causes the bootstrap to fail. The developed check criterion is then applied to some particularly interesting statistics.

We develop some asymptotic theory for applications of block bootstrap resampling schemes to multivariate integrated and cointegrated time series. It is proved that a multivariate, continuous-path block bootstrap scheme applied to a full rank integrated process succeeds in estimating consistently the distribution of the least squares estimators in both the regression and the spurious regression case. Furthermore, it is shown that the same block resampling scheme does not succeed in estimating the distribution of the parameter estimators in the case of cointegrated time series. For this situation, a modified block resampling scheme, the so-called residual-based block bootstrap, is investigated, and its validity for approximating the distribution of the regression parameters is established. The performance of the proposed block bootstrap procedures is illustrated in a short simulation study. Copyright © 2014 Wiley Publishing Ltd
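A minimal sketch of the (moving) block resampling scheme underlying such procedures: overlapping blocks are drawn with replacement and concatenated, so short-range dependence within blocks is preserved. The residual-based modification for cointegrated series is not reproduced here.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """One moving-block-bootstrap pseudo-series: overlapping blocks of
    length block_len are drawn with replacement and concatenated."""
    x = np.asarray(x, float)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    pieces = [x[s:s + block_len] for s in starts]
    return np.concatenate(pieces)[:n]

rng = np.random.default_rng(4)
# AR(1) data, so nearby observations must be resampled together
e = rng.standard_normal(1000)
x = np.empty(1000)
x[0] = e[0]
for t in range(1, 1000):
    x[t] = 0.6 * x[t - 1] + e[t]

xb = moving_block_bootstrap(x, block_len=20, rng=rng)
print(len(xb), xb.mean())
```

For integrated series, the "continuous-path" variant discussed above additionally adjusts block levels so the resampled path has no artificial jumps at block joins.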

This is a revision of a paper that I presented at the John Nankervis Memorial Conference in July 2013. The purposes are to describe the research produced jointly by John and me and to give some personal comments.

In a recent paper, Harvey *et al.* (2013) (HLT) propose a new unit root test that allows for the possibility of multiple breaks in trend. Their proposed test is based on the infimum of the sequence (across all candidate break points) of local GLS detrended augmented Dickey–Fuller-type statistics. HLT show that the power of their unit root test is robust to the magnitude of any trend breaks. In contrast, HLT show that the power of the only alternative available procedure of Carrion-i-Silvestre *et al.* (2009), which employs a pretest-based approach, can be very low indeed (even zero) for the magnitudes of trend breaks typically observed in practice. Both HLT and Carrion-i-Silvestre *et al.* (2009) base their approaches on the assumption of homoskedastic shocks. In this article, we analyse the impact of non-stationary volatility (for example, single and multiple abrupt variance breaks, smooth transition variance breaks and trending variances) on the tests proposed in HLT. We show that the limiting null distribution of the HLT unit root test statistic is not pivotal under non-stationary volatility. A solution to the problem, which does not require the practitioner to specify a parametric model for volatility, is provided using the wild bootstrap and is shown to perform well in practice. A number of different possible implementations of the bootstrap algorithm are discussed.

This article explores the problem of estimating stationary autoregressive models from observed data using the Bayesian least absolute shrinkage and selection operator (LASSO). By characterizing the model in terms of partial autocorrelations, rather than coefficients, it becomes straightforward to guarantee that the estimated models are stationary. The form of the negative log-likelihood is exploited to derive simple expressions for the conditional likelihood functions, leading to efficient algorithms for computing the posterior mode by coordinate-wise descent and exploring the posterior distribution by Gibbs sampling. Both empirical Bayes and Bayesian methods are proposed for the estimation of the LASSO hyper-parameter from the data. Simulations demonstrate that the Bayesian LASSO performs well in terms of prediction when compared with a standard autoregressive order selection method.
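The parametrization by partial autocorrelations exploits the Durbin-Levinson recursion, which maps any partial autocorrelations in (-1, 1) to the coefficients of a stationary AR model; a compact sketch (estimation itself, via coordinate descent or Gibbs sampling, is not reproduced):

```python
import numpy as np

def pacf_to_ar(pacf):
    """Durbin-Levinson step: map partial autocorrelations (each in (-1, 1))
    to AR coefficients; the resulting model is automatically stationary."""
    phi = np.array([], dtype=float)
    for pk in pacf:
        # phi_{k,j} = phi_{k-1,j} - p_k * phi_{k-1,k-j};  phi_{k,k} = p_k
        phi = np.append(phi - pk * phi[::-1], pk)
    return phi

# any pacf values inside (-1, 1) give a stationary AR model
phi = pacf_to_ar([0.5, -0.3, 0.2])
roots = np.roots(np.concatenate(([1.0], -phi)))
print(phi, np.abs(roots).max())  # characteristic roots lie inside the unit circle
```

Placing the LASSO prior on the partial autocorrelations rather than on `phi` directly therefore keeps every posterior draw inside the stationary region by construction.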

The article reviews methods of inference for single and multiple change-points in time series, when data are of retrospective (off-line) type. The inferential methods reviewed for a single change-point in time series include likelihood, Bayes, Bayes-type and some relevant non-parametric methods. Inference for multiple change-points requires methods that can handle large data sets and can be implemented efficiently for estimating the number of change-points as well as their locations. Our review in this important area focuses on some of the recent advances in this direction. Greater emphasis is placed on multivariate data while reviewing inferential methods for a single change-point in time series. Throughout the article, more attention is paid to estimation of unknown change-point(s) in time series, and this is especially true in the case of multiple change-points. Some specific data sets for which change-point modelling has been carried out in the literature are provided as illustrative examples under both single and multiple change-point scenarios.

We consider a heteroscedastic nonparametric regression model with an autoregressive error process of finite known order *p*. The heteroscedasticity is incorporated using a scaling function defined at uniformly spaced design points on an interval [0,1]. We provide an innovative nonparametric estimator of the variance function and establish its consistency and asymptotic normality. We also propose a semiparametric estimator for the vector of autoregressive error process coefficients that is consistent and asymptotically normal as the sample size *T* grows. An explicit asymptotic variance-covariance matrix is obtained as well. Finally, the finite-sample performance of the proposed method is tested in simulations.

Many empirical findings show that volatility in financial time series exhibits high persistence. Some researchers argue that such persistence is due to volatility shifts in the market, while others believe that it is a natural fluctuation explained by stationary long-range dependence models. These two approaches confuse many practitioners, and forecasts of future volatility differ dramatically depending on which model is used. In this article, therefore, we consider a statistical testing procedure to distinguish volatility shifts in a generalized autoregressive conditional heteroscedasticity (GARCH) model from long-range dependence. Our testing procedure is based on the residual-based cumulative sum test, which is designed to correct the size distortion observed for GARCH models. We examine the validity of our method by providing the asymptotic distribution of the test statistic. A Monte Carlo simulation study also shows that our proposed method achieves good size while providing reasonable power against long-range dependence. The test is also observed to be robust to misspecified GARCH models.
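A residual-based CUSUM-of-squares statistic of the general type described above can be sketched as follows; this plain version ignores the GARCH-specific size correction the article develops, and the inputs are raw rather than standardized residuals:

```python
import numpy as np

def cusum_stat(u):
    """CUSUM-of-squares statistic:
    max_k |sum_{t<=k} (u_t^2 - mean(u^2))| / (sqrt(n) * sd(u^2)).
    Large values signal an unstable (e.g., shifting) variance."""
    u2 = np.asarray(u, float) ** 2
    n = len(u2)
    s = np.cumsum(u2 - u2.mean())
    return np.abs(s).max() / (np.sqrt(n) * u2.std())

rng = np.random.default_rng(5)
stable = rng.standard_normal(2000)
shifted = np.concatenate([rng.standard_normal(1000),
                          3.0 * rng.standard_normal(1000)])  # variance shift
print(cusum_stat(stable), cusum_stat(shifted))
```

Under a stable-variance null the statistic behaves like the supremum of a Brownian bridge, so a mid-sample variance shift produces a clearly larger value than the stable series.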

This article proves consistency and asymptotic normality for the conditional-sum-of-squares estimator, which is equivalent to the conditional maximum likelihood estimator, in multivariate fractional time-series models. The model is parametric and quite general and, in particular, encompasses the multivariate non-cointegrated fractional autoregressive integrated moving average (ARIMA) model. The novelty of the consistency result, in particular, is that it applies to a multivariate model and to an arbitrarily large set of admissible parameter values, for which the objective function does not converge uniformly in probability, thus making the proof much more challenging than usual. The neighbourhood around the critical point where uniform convergence fails is handled using a truncation argument.

Test procedures for assessing whether two stationary and independent time series with unequal lengths have the same spectral density (or the same auto-covariance function) are investigated. A new test statistic is proposed based on the wavelet transform. It relies on the empirical wavelet coefficients of the logarithm of the ratio of the two spectral densities. Under the null hypothesis that the two spectral densities are the same, the asymptotic normal distribution of the empirical wavelet coefficients is derived. Furthermore, these empirical wavelet coefficients are asymptotically uncorrelated. A test statistic is proposed based on these results. The performance of the new test statistic is compared with that of several recent test statistics, with respect to their exact levels and powers. Simulation studies show that our proposed test performs comparably to the current test statistics in most cases. Its main advantage is that it is constructed very simply and is easy to implement.

This article advances the theory and methodology of signal extraction by developing the optimal treatment of difference stationary multivariate time-series models. Using a flexible time-series structure that includes co-integrated processes, we derive and prove formulas for minimum mean square error estimation of signal vectors in multiple series, from both a finite sample and a bi-infinite sample. As an illustration, we present econometric measures of the trend in total inflation that make optimal use of the signal content in core inflation.

Vine copulae provide a graphical framework in which multiple bivariate copulae may be combined in a consistent fashion to yield a more complex multivariate copula. In this article, we discuss the use of vine copulae to build flexible semiparametric models for stationary multivariate higher-order Markov chains. We propose a new vine structure, the M-vine, that is particularly well suited to this purpose. Stationarity may be imposed by requiring the equality of certain copulae in the M-vine, while the Markov property may be imposed by requiring certain copulae to be independence copulae.

The Gaussian mixture autoregressive model studied in this article belongs to the family of mixture autoregressive models, but it differs from its previous alternatives in several advantageous ways. A major theoretical advantage is that, by the definition of the model, conditions for stationarity and ergodicity are always met and these properties are much more straightforward to establish than is common in nonlinear autoregressive models. Another major advantage is that, for a *p*th-order model, explicit expressions of the stationary distributions of dimension *p* + 1 or smaller are known and given by mixtures of Gaussian distributions with constant mixing weights. In contrast, the conditional distribution given the past observations is a Gaussian mixture with time-varying mixing weights that depend on *p* lagged values of the series in a natural and parsimonious way. Because of the known stationary distribution, exact maximum likelihood estimation is feasible and one can assess the applicability of the model in advance by using a non-parametric estimate of the stationary density. An empirical example with interest rate series illustrates the practical usefulness and flexibility of the model, particularly in allowing for level shifts and temporary changes in variance. Copyright © 2014 Wiley Publishing Ltd
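A simplified two-component first-order version of such a model can be simulated as follows; the specific weighting rule (weights proportional to the component stationary densities at the lagged value) and the parameter values are assumptions of this sketch:

```python
import numpy as np
from math import sqrt, pi, exp

def gmar_simulate(n, params, weights, rng):
    """Sketch of a two-component first-order Gaussian mixture AR model:
    mixing weights at time t are proportional to weights[j] times the
    stationary N(mu_j, gamma_j) density of component j at y_{t-1}.
    params: list of (intercept nu, AR coefficient phi, innovation sd sigma)."""
    mu = [nu / (1 - phi) for nu, phi, sig in params]          # component means
    gamma = [sig ** 2 / (1 - phi ** 2) for nu, phi, sig in params]  # variances

    def norm_pdf(z, mean, var):
        return exp(-(z - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

    y = np.empty(n)
    y[0] = mu[0]
    for t in range(1, n):
        dens = [w * norm_pdf(y[t - 1], mu[j], gamma[j])
                for j, w in enumerate(weights)]
        probs = np.array(dens) / sum(dens)       # time-varying mixing weights
        j = rng.choice(len(weights), p=probs)
        nu, phi, sig = params[j]
        y[t] = nu + phi * y[t - 1] + sig * rng.standard_normal()
    return y

rng = np.random.default_rng(7)
y = gmar_simulate(5000, params=[(0.0, 0.5, 1.0), (2.0, 0.7, 1.5)],
                  weights=[0.6, 0.4], rng=rng)
print(y.mean(), y.std())
```

Because each component satisfies |phi| < 1, the simulated process wanders between the two regimes without exploding, illustrating the built-in stationarity the abstract emphasizes.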