Robust inference on the average treatment effect using the outcome highly adaptive lasso
Cheng Ju and David Benkeser contributed equally to this manuscript.
Abstract
Many estimators of the average effect of a treatment on an outcome require estimation of the propensity score, the outcome regression, or both. It is often beneficial to utilize flexible techniques, such as semiparametric regression or machine learning, to estimate these quantities. However, optimal estimation of these regressions does not necessarily lead to optimal estimation of the average treatment effect, particularly in settings with strong instrumental variables. A recent proposal addressed these issues via the outcome-adaptive lasso, a penalized regression technique for estimating the propensity score that seeks to minimize the impact of instrumental variables on treatment effect estimators. However, a notable limitation of this approach is that its application is restricted to parametric models. We propose a more flexible alternative that we call the outcome highly adaptive lasso. We discuss the large-sample theory for this estimator and propose closed-form confidence intervals based on it. We show via simulation that our method offers benefits over several popular approaches.
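For readers unfamiliar with the setup the abstract describes, the sketch below illustrates a standard doubly robust (AIPW) estimator of the average treatment effect, in which estimates of the outcome regression and the propensity score are plugged into an influence-function-based formula. It is background only: the nuisance estimators used here are off-the-shelf cross-validated lasso fits from scikit-learn, chosen for illustration, and it does not implement the outcome highly adaptive lasso proposed in the paper.

```python
# Minimal background sketch of a doubly robust (AIPW) estimator of the
# average treatment effect E[Y(1)] - E[Y(0)].
# The lasso nuisance estimators below are illustrative placeholders; the
# paper's proposal replaces the propensity score step with an
# outcome-informed highly adaptive lasso, which is not implemented here.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV


def aipw_ate(y, a, x):
    """AIPW point estimate of the average treatment effect.

    y : (n,) outcome, a : (n,) binary treatment, x : (n, p) covariates.
    """
    # Outcome regressions E[Y | A = a, X], fit separately in each arm.
    mu1 = LassoCV(cv=5).fit(x[a == 1], y[a == 1])
    mu0 = LassoCV(cv=5).fit(x[a == 0], y[a == 0])
    m1, m0 = mu1.predict(x), mu0.predict(x)

    # Propensity score P(A = 1 | X); this is the step the paper targets,
    # aiming to limit the influence of instrumental variables.
    ps = LogisticRegressionCV(cv=5, penalty="l1", solver="saga", max_iter=5000)
    g = ps.fit(x, a).predict_proba(x)[:, 1]

    # Doubly robust (augmented IPW) estimating function, averaged over units.
    psi = (a / g) * (y - m1) + m1 - ((1 - a) / (1 - g)) * (y - m0) - m0
    return psi.mean()
```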