Abstract

Evaluations should be issues driven, not methods driven. The starting point should be the priority programs to be evaluated or policies to be tested. From this starting point, a list of evaluation questions is identified. For each evaluation question, the task is to identify the best available method for answering that question. Hence it is likely that any one study will contain a mix of methods. A crucial question for an impact evaluation is that of attribution: What difference did the intervention make to the state of the world? (framed in any specific evaluation as the difference a clearly specified intervention, or set of interventions, made to indicators of interest). For interventions with a large number of units of assignment, this question is best answered with a quantitative experimental or quasi-experimental design. For prospective, or ex ante, evaluation designs, a randomized controlled trial (RCT) is very likely the best available method for addressing this attribution question, if one is feasible. But an RCT answers only the attribution question. A high-quality impact evaluation will answer a broader range of evaluation questions, many of a process nature, both to inform the design and implementation of the program being evaluated and to support external validity. Mixed methods combine the counterfactual analysis from an RCT with factual analysis, drawing on quantitative and qualitative data and on approaches from a range of disciplines to analyze the causal chain. The factual analysis addresses such issues as the quality of implementation, targeting, barriers to participation, and adoption by intended beneficiaries. © Wiley Periodicals, Inc., and the American Evaluation Association.