Education research methods
Kirkpatrick’s levels and education ‘evidence’
Article first published online: 13 DEC 2011
© Blackwell Publishing Ltd 2012
Volume 46, Issue 1, pages 97–106, January 2012
How to Cite
Yardley, S. and Dornan, T. (2012), Kirkpatrick’s levels and education ‘evidence’. Medical Education, 46: 97–106. doi: 10.1111/j.1365-2923.2011.04076.x
- Issue published online: 13 DEC 2011
- Received 22 October 2010; editorial comments to authors 14 December 2010, 26 April 2011; accepted for publication 1 June 2011
Medical Education 2012: 46: 97–106
Objectives This study aims to critically review the suitability of Kirkpatrick’s levels for appraising interventions in medical education, to review empirical evidence of their application in this context, and to explore alternative ways of appraising research evidence.
Methods The mixed methods used in this research included a narrative literature review, a critical review of theory and qualitative empirical analysis, conducted within a process of cooperative inquiry.
Results Kirkpatrick’s levels, introduced to evaluate training in industry, involve so many implicit assumptions that they are suitable only for evaluations with relatively simple instructional designs, short-term endpoints, and beneficiaries other than learners. Such conditions are met by perhaps one-fifth of medical education evidence reviews. Under other conditions, the hierarchical application of the levels as a critical appraisal tool adds little value and leaves reviewers to make global judgements of the trustworthiness of the data.
Conclusions Far from defining a reference standard critical appraisal tool, this research shows that ‘quality’ is defined as much by the purpose to which evidence is to be put as by any invariant and objectively measurable property. Pending further research, we offer a simple way of deciding how to appraise the quality of medical education research.