Subsampling inference for the autocovariances and autocorrelations of long-memory heavy-tailed linear time series


  • Disclaimer: This report is released to inform interested parties of research and to encourage discussion. The views expressed on statistical issues are those of the authors and not necessarily those of the U.S. Census Bureau.

Correspondence to: Tucker McElroy, U.S. Census Bureau, 4600 Silver Hill Road, Washington, D.C. 20233.


We provide a self-normalization for the sample autocovariances and autocorrelations of a linear, long-memory time series with innovations that have either finite fourth moment or are heavy-tailed with tail index 2 < α < 4. In the asymptotic distribution of the sample autocovariance there are three rates of convergence that depend on the interplay between the memory parameter d and α, and which consequently lead to three different limit distributions; for the sample autocorrelation the limit distribution only depends on d. We introduce a self-normalized sample autocovariance statistic, which is computable without knowledge of α or d (or their relationship), and which converges to a non-degenerate distribution. We also treat self-normalization of the autocorrelations. The sampling distributions can then be approximated non-parametrically by subsampling, as the corresponding asymptotic distribution is still parameter-dependent. The subsampling-based confidence intervals for the process autocovariances and autocorrelations are shown to have satisfactory empirical coverage rates in a simulation study. The impact of subsampling block size on the coverage is assessed. The methodology is further applied to the log-squared returns of Merck stock.
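The subsampling recipe described above can be illustrated with a minimal sketch. This is not the paper's self-normalized statistic: for simplicity it uses a conventional sqrt(b) block normalization, whereas the paper's construction replaces this rate by a data-driven normalization so that neither d nor alpha need be known. The function names (`sample_acf`, `subsample_ci`) and the AR(1) usage example are illustrative assumptions, not from the source.

```python
import numpy as np

def sample_acf(x, h):
    """Sample autocorrelation at lag h (mean-corrected, biased normalization)."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return np.dot(xc[:-h], xc[h:]) / np.dot(xc, xc)

def subsample_ci(x, h, b, level=0.95):
    """Equal-tailed subsampling confidence interval for the lag-h autocorrelation.

    Illustrative sketch only: uses a sqrt(b) rate, while the paper's
    self-normalized statistic is computable without knowing d or alpha.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    rho_hat = sample_acf(x, h)
    # Centered, normalized statistic on each overlapping block of length b.
    stats = np.array([np.sqrt(b) * (sample_acf(x[i:i + b], h) - rho_hat)
                      for i in range(n - b + 1)])
    lo_q, hi_q = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    # Invert the subsampling distribution at the full-sample rate sqrt(n).
    return rho_hat - hi_q / np.sqrt(n), rho_hat - lo_q / np.sqrt(n)
```

As the abstract notes, the coverage of such intervals is sensitive to the block size b, which is why the paper assesses its impact empirically.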