Keywords:

  • double robustness;
  • estimating equations;
  • Grassmann manifold optimization;
  • missing at random;
  • sliced inverse regression

Abstract

Sufficient dimension reduction (SDR) is effective in high-dimensional data analysis because it mitigates the curse of dimensionality while retaining full regression information. Missing predictors are common in high-dimensional data, yet they are only occasionally discussed in the SDR context. In this paper, an inverse probability weighted sliced inverse regression (SIR) method is studied for predictors missing at random. We cast SIR into the estimating-equation framework to avoid inverting a large-scale covariance matrix. This strategy handles large dimensionality and strong collinearity among the predictors more efficiently than the spectral decomposition of classical SIR. Numerical studies confirm the superiority of the proposed procedure over existing methods. © 2011 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2011
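For context, the classical spectral-decomposition form of SIR that the abstract contrasts against can be sketched as follows. This is a minimal illustration of standard SIR (slice the response, average the standardized predictors within slices, and eigen-decompose the resulting matrix), not the paper's estimating-equation or inverse-probability-weighted procedure; the function name, slicing scheme, and toy model are our own choices for illustration.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_directions=1):
    """Classical sliced inverse regression via spectral decomposition.

    Estimates SDR directions from the eigenvectors of the covariance
    matrix of slice means of the standardized predictors.
    """
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # Inverse square root of the covariance matrix -- the costly,
    # collinearity-sensitive inversion step that an estimating-equation
    # formulation is designed to avoid
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sigma_inv_sqrt

    # Slice on y and accumulate weighted outer products of slice means
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Leading eigenvectors of M span the standardized directions;
    # map back to the original predictor scale and normalize
    _, vecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    B = Sigma_inv_sqrt @ vecs[:, -n_directions:]
    return B / np.linalg.norm(B, axis=0)

# Toy single-index model: y depends on X only through x1 + x2
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=500)
B = sir_directions(X, y, n_slices=10)
```

With complete data and well-conditioned predictors this recovers a direction close to (1, 1, 0, 0) up to scale; with missing predictors or strong collinearity, the explicit computation of `Sigma_inv_sqrt` is exactly where this approach degrades.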