Different written assessment methods: what can be said about their strengths and weaknesses?

Authors


Dr L W T Schuwirth, Department of Educational Development and Research, University of Maastricht, PO Box 616, 6200 MD Maastricht, The Netherlands. Tel: 00 31 43 388 1129; E-mail: l.schuwirth@educ.unimaas.nl.

Abstract

Introduction

Written assessment techniques can be subdivided according to their stimulus format – what the question asks – and their response format – how the answer is recorded. The former is more important than the latter in determining the type of competence being assessed. It is nevertheless important to consider both when selecting the most appropriate format. Major elements to consider in making such a selection are cueing effect, reliability, validity, educational impact and resource-intensiveness.

Response formats

Open-ended questions should be used solely to test aspects that cannot be tested with multiple-choice questions. In all other cases, multiple-choice questions are no less valid than open-ended questions, and the lower reliability and higher resource-intensiveness of open-ended formats represent a significant downside.

Stimulus format

The key distinction here is whether the question is embedded within a relevant case or context – such that it cannot be answered without the case – or is context-free. How essential context is depends on what the question is intended to test: context-rich questions test different cognitive skills than context-free questions do. If knowledge alone is the purpose of the test, context-free questions may be useful, but if the application of knowledge, or knowledge as part of problem solving, is being tested, then context is indispensable.

Conclusion

Every format has its advantages and disadvantages, and a combination of formats based on rational selection is more useful than trying to find or develop a panacea. The response format is less important in this respect than the stimulus format.
