Discussions with Clive Granger, Graham Elliott, and Andrew Patton were essential to the paper. Useful comments from a co-editor and three anonymous referees led to a considerably improved version of the paper. We also thank Lutz Kilian for insightful suggestions and Farshid Vahid, Matteo Iacoviello, Mike McCracken, and seminar participants at UCSD, Nuffield College, LSE, University of Exeter, University of Warwick, University of Manchester, Cass Business School, North Carolina State University, Boston College, Texas A&M, University of Chicago GSB, the International Finance Division of the Federal Reserve Board, University of Houston, UCLA, Harvard/MIT, and the 2002 EC2 conference in Bologna, Italy for helpful comments. We thank Vince Crawford for the use of the UCSD Experimental and Computational Lab.
Tests of Conditional Predictive Ability
Article first published online: 1 NOV 2006
Volume 74, Issue 6, pages 1545–1578, November 2006
How to Cite
Giacomini, R. and White, H. (2006), Tests of Conditional Predictive Ability. Econometrica, 74: 1545–1578. doi: 10.1111/j.1468-0262.2006.00718.x
- Issue published online: 1 NOV 2006
- Manuscript received April, 2003; final revision received April, 2006.
- Keywords: Forecast evaluation; hypothesis test
We propose a framework for out-of-sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these. Relative to the existing literature (Diebold and Mariano (1995) and West (1996)), we introduce two main innovations: (i) We derive our tests in an environment where the finite sample properties of the estimators on which the forecasts may depend are preserved asymptotically. (ii) We accommodate conditional evaluation objectives (can we predict which forecast will be more accurate at a future date?), which nest the unconditional objectives (which forecast was more accurate on average?) that have been the sole focus of the previous literature. As a result of (i), our tests have several advantages: they capture the effect of estimation uncertainty on relative forecast performance, they can handle forecasts based on both nested and nonnested models, they allow the forecasts to be produced by general estimation methods, and they are easy to compute. Although both unconditional and conditional approaches are informative, conditioning can help fine-tune the forecast selection to current economic conditions. To this end, we propose a two-step decision rule that uses current information to select the best forecast for the future date of interest. We illustrate the usefulness of our approach by comparing forecasts from leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors.
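The one-step-ahead version of the conditional test can be sketched as follows: form Z_t as the product of the loss difference dL_t with instruments h_{t-1} known one period earlier, and reject equal conditional predictive ability when a Wald-type quadratic form in the sample mean of Z_t is large. This is a minimal illustration, not the authors' code; the instrument choice h_{t-1} = (1, dL_{t-1}) and the simulated loss differences are assumptions made for the example.

```python
import numpy as np

def conditional_pa_test(loss_diff, instruments):
    """Wald-type conditional predictive ability statistic (one-step-ahead case).

    loss_diff   : (n,) out-of-sample loss differences dL_t = L(e1_t) - L(e2_t).
    instruments : (n, q) test functions h_{t-1}, each row known one period
                  before the corresponding loss difference is realized.
    Under the null of equal conditional predictive ability, the statistic is
    asymptotically chi-square with q degrees of freedom.
    """
    Z = instruments * loss_diff[:, None]   # Z_t = h_{t-1} * dL_t
    n, q = Z.shape
    Zbar = Z.mean(axis=0)
    Omega = Z.T @ Z / n                    # at horizon 1, no HAC correction is needed
    return n * Zbar @ np.linalg.solve(Omega, Zbar)

# Usage: simulated loss differences satisfying the null (mean zero, unpredictable),
# with a constant and the lagged loss difference as instruments (q = 2).
rng = np.random.default_rng(0)
dL = rng.standard_normal(200)
h = np.column_stack([np.ones(199), dL[:-1]])   # h_{t-1} = (1, dL_{t-1})
stat = conditional_pa_test(dL[1:], h)
# compare stat to the chi-square(2) critical value (5.99 at the 5% level)
```

With a constant in the instrument set, the test nests the unconditional (average-accuracy) comparison; adding lagged loss differences or other current-information variables is what lets the test ask whether relative accuracy is predictable.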