Assessing professional competence: from methods to programmes
Article first published online: 23 FEB 2005
Volume 39, Issue 3, pages 309–317, March 2005
Van Der Vleuten, C. P. M. and Schuwirth, L. W. T. (2005), Assessing professional competence: from methods to programmes. Medical Education, 39: 309–317. doi: 10.1111/j.1365-2929.2005.02094.x
- Issue published online: 23 FEB 2005
- Received 1 March 2004; editorial comments to authors 4 May 2004; accepted for publication 19 July 2004
- Keywords: educational measurement/methods; professional competence/standards
Introduction
We use a utility model to illustrate, firstly, that selecting an assessment method involves context-dependent compromises and, secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are weighted differently depending on the purpose and context of the assessment.
Empirical and theoretical developments
Of the characteristics in the model, we focus on reliability, validity and educational impact and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence.
Assessment as instructional design
If assessment is to stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to shift from individual methods to an integral programme, intertwined with the education programme. This calls for an instructional design perspective.
Implications for development and research
Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but instead provide evidence of the utility of the assessment programme as a whole.