Subsampling inference for the mean of heavy-tailed long-memory time series



This article is corrected by:

  1. Errata: Corrigendum to ‘Subsampling Inference for the Mean of Heavy-Tailed Long-Memory Time Series’ by A. Jach, T. S. McElroy and D. N. Politis Volume 37, Issue 5, 713–720, Article first published online: 23 March 2016

  • This report is released to inform interested parties of research and to encourage discussion. The views expressed on statistical issues are those of the authors and not necessarily those of the US Census Bureau.

Dimitris N. Politis, Department of Mathematics, University of California, San Diego, 9500 Gilman Drive, Mail Code 0112, La Jolla, CA 92093-0112, USA.



In this article, we revisit a time series model introduced by McElroy and Politis (2007a) and generalize it in several ways to encompass a wider class of stationary, nonlinear, heavy-tailed time series with long memory. The joint asymptotic distribution of the sample mean and sample variance under the extended model is derived; the associated convergence rates are found to depend crucially on the tail thickness and the long-memory parameter. A self-normalized sample mean that concurrently captures the tail and memory behaviour is defined. Its asymptotic distribution is approximated by subsampling without knowledge of the tail and/or memory parameters; a result of independent interest on subsampling consistency for certain long-range dependent processes is provided. The subsampling-based confidence intervals for the process mean are shown to have good empirical coverage rates in a simulation study. The influence of block size on coverage, and the performance of a data-driven rule for block size selection, are assessed. The methodology is further applied to a series of packet counts from Ethernet traffic traces.
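The abstract's core idea, a self-normalized statistic whose distribution is approximated by subsampling, can be illustrated with a minimal sketch. This is not the authors' exact procedure (the paper's normalization is tailored to the heavy-tail/long-memory setting, and the toy data below are i.i.d. rather than long-memory); the function names and the equal-tailed construction are illustrative assumptions.

```python
import numpy as np

def self_normalized_stat(x, center):
    # (sample mean - center) / sample standard deviation;
    # self-normalization removes the need to know the
    # unknown rate of convergence (tail/memory parameters)
    return (x.mean() - center) / x.std(ddof=1)

def subsampling_ci(x, b, alpha=0.05):
    """Equal-tailed subsampling confidence interval for the mean
    (illustrative sketch, not the paper's exact method).

    x : 1-D array of observations
    b : block (subsample) size
    """
    n = len(x)
    xbar = x.mean()
    s = x.std(ddof=1)
    # compute the self-normalized statistic on each of the
    # n - b + 1 overlapping blocks, centered at the full-sample mean
    stats = np.array([
        self_normalized_stat(x[i:i + b], xbar)
        for i in range(n - b + 1)
    ])
    lo_q, hi_q = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    # invert the subsample quantiles to get an interval for the mean
    return xbar - hi_q * s, xbar - lo_q * s

# toy heavy-tailed data: Student-t with 3 degrees of freedom
rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=2000)
lo, hi = subsampling_ci(x, b=100)
```

The choice of the block size `b` matters in practice, which is why the article assesses its influence on coverage and evaluates a data-driven selection rule.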