Commentary: Getting help to mark multiple choice questions

Authors

  • Graham R. Parslow (corresponding author)

    Russel Grimwade School of Biochemistry and Molecular Biology, The University of Melbourne, Victoria 3010, Australia

The task of marking questions with predictable answer sets seems an easy one to solve with technology. Everyone wins when there is rapid and supportive feedback to both students and examiners, but this is hard to achieve in practice. The rhetoric has been conveniently summarized by Richard James et al. [1] in a comprehensive document addressing learning in universities. The paper by James et al. states in the prologue “there is considerable scope to make assessment in higher education more sophisticated and more educationally effective. Assessment is often treated merely as the end point of the teaching and learning process. There remains a strong culture of ‘testing’ and an enduring emphasis on the final examination, leaving the focus predominantly on the judgmental role of assessment rather than its potential to shape student development. In all, we believe assessment can be more fully and firmly integrated with teaching and learning processes. Assessment should not only measure student learning but also make a contribution to it.” In reality, very few teachers make time to “reverse engineer” their teaching so that the instructional mode is determined by how best to prepare students for the assessment (or as a response to assessment). We find it too easy to be content-driven. Assessment is a chore that many would gladly hand over to question banks and automated marking technologies, including on-line assessment. My observation is that technology applied to examinations has been of only marginal help in reducing the time and labor of preparation and marking.

It did not surprise me that there were a number of keen positive responses when my university faculty (Medicine and Health) asked department heads for an expression of interest in purchasing an expensive card scanner to automate multiple-choice question (MCQ)1 marking. Some responses were annotated with comments to the effect that “it seems like something we should have.” More wary respondents were aware that this type of system is not an entirely positive fix. I had used such a system for some years and had been particularly annoyed that the scanning technology is defeated by poor pen choice, ambiguous markings, and attempts to erase and correct. When I used such forms, it was a chore to inspect all of the student forms, and to correct many of them, before submitting them for machine assessment. I have never been positively moved by the esthetics of commercial forms, and I feel that their use depersonalizes the student experience of a course. Nevertheless, the work of Andrew Booth, at the University of Leeds and in the Caribbean, provides an example of this technology being sustained successfully over a 20-year period [2]. Professor Booth has been mindful of enhancing feedback to students as they progress through their course and of using the technology in a cost-effective manner. Through item analysis of the machine-marked MCQs, Professor Booth has assessed students' progress, identified topics that students find difficult to understand, and adjusted tutorial support accordingly. The degree to which the tests are integrated with the teaching as a whole is reflected in how the course is managed. Every week, all first-year students in the biochemistry course at Leeds complete a short MCQ test, penning their answers on specially designed cards. The students undertake the test in the first 20 min of a 3-h practical, and staff can have the results within an hour for a class of 380 students. More management details have been provided on the web by Professor Booth [2]. You may counter that the result would be instantaneous if the test were done on-line. One of my colleagues was highly motivated to do just this for the mid-semester examination of a similarly sized class, but could not make it practicable by any simultaneous or sequential use of the “large” computer laboratories at my university (let alone by using our 50 departmental student computers).
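
To make the idea of item analysis concrete, the following minimal sketch (my own illustration in Python, not Professor Booth's software; the function and variable names are hypothetical) computes the two statistics most commonly reported for each MCQ item: the difficulty index (the proportion of the class answering correctly) and the discrimination index (the difference in correct-answer rates between the top- and bottom-scoring groups of students).

    # Minimal, hypothetical item-analysis sketch (not the Leeds system).
    # `responses`: one list of answers per student, e.g. ["A", "C", "B", ...];
    # `key`: the correct option for each item.

    def item_analysis(responses, key, group_fraction=0.27):
        """Return (difficulty, discrimination) lists, one value per item."""
        n_items = len(key)
        # Score each student: 1 for a correct answer, 0 otherwise.
        scores = [[int(ans == k) for ans, k in zip(student, key)]
                  for student in responses]
        totals = [sum(s) for s in scores]
        n = len(scores)

        # Difficulty index: proportion of the class answering the item correctly.
        difficulty = [sum(s[i] for s in scores) / n for i in range(n_items)]

        # Discrimination index: correct rate in the top group minus the bottom
        # group, using the conventional top/bottom 27% of students by total score.
        order = sorted(range(n), key=lambda j: totals[j])
        g = max(1, int(round(group_fraction * n)))
        bottom, top = order[:g], order[-g:]
        discrimination = [
            sum(scores[j][i] for j in top) / g - sum(scores[j][i] for j in bottom) / g
            for i in range(n_items)
        ]
        return difficulty, discrimination


    if __name__ == "__main__":
        key = ["A", "B", "C", "D"]
        responses = [
            ["A", "B", "C", "D"],   # strong student
            ["A", "B", "C", "A"],
            ["A", "D", "B", "A"],
            ["C", "D", "B", "A"],   # weak student
        ]
        diff, disc = item_analysis(responses, key)
        for i, (p, d) in enumerate(zip(diff, disc), start=1):
            print(f"Item {i}: difficulty {p:.2f}, discrimination {d:+.2f}")

An item with low difficulty and low discrimination flags either a topic the class found hard or a question that may be ambiguous, which is the kind of signal that can guide adjustments to tutorial support.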

A data-logging service offers a third way, between card scanning and on-line assessment, of marking MCQs, and it is my current choice for cost-effective help. Our mark-sheets are prepared within the same examination document that is sent for printing, so the answer sheets cannot be overlooked (or found to be unavailable) when the examination is assembled. I have used a commercial data-logging service for many years and find the approach commended by the following attributes. First, it is accurate. Two operatives create separate files and compare the output to ensure consistency. The human interpreters of the data are far more sophisticated than any electronic scanner and easily accommodate crossed-out corrections, and they draw our attention to papers with ambiguous markings. Second, it is rapid. With MCQ-only mid-semester tests, we have often returned results to students the next day. Third, it is confidential. The data-logging service does not do the marking; it merely generates a record of responses. Fourth, it is relatively cheap (less than $0.50 per student).
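
As a rough illustration of the double-entry checking described above, the sketch below (a hypothetical Python example of my own, not the commercial service's software) compares two independently keyed sets of responses, refers any paper on which the two transcriptions disagree for manual inspection, and scores the remainder against the answer key.

    # Hypothetical sketch of double-entry verification and MCQ scoring.
    # Each entry maps a student ID to the string of recorded answers, e.g. "ABCDA".

    def reconcile_and_score(entry_a, entry_b, key):
        """Compare two independent transcriptions; score only papers that agree."""
        discrepancies, marks = [], {}
        for student_id in sorted(set(entry_a) | set(entry_b)):
            a, b = entry_a.get(student_id), entry_b.get(student_id)
            if a != b:
                # The two operatives disagree: refer the paper back for inspection.
                discrepancies.append(student_id)
                continue
            marks[student_id] = sum(ans == k for ans, k in zip(a, key))
        return marks, discrepancies


    if __name__ == "__main__":
        key = "ABCDA"
        first_pass = {"s001": "ABCDA", "s002": "ABCCA", "s003": "ABDDA"}
        second_pass = {"s001": "ABCDA", "s002": "ABCCA", "s003": "ABDDB"}  # s003 mis-keyed
        marks, to_check = reconcile_and_score(first_pass, second_pass, key)
        for student_id, mark in sorted(marks.items()):
            print(f"{student_id}: {mark}/{len(key)}")
        print("Refer for manual inspection:", ", ".join(to_check) or "none")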

At present, on-line examinations for large classes have not proved practical, for many reasons. The problems will doubtless be solved, leading to greater efficiency and more effective learning. I am particularly hopeful that I will be able to apply some of the enhancements envisaged by James et al. [1], who describe presenting complex scenarios with images, sound, and simulation to better test students' skills and learning.

Footnotes

  1. The abbreviation used is: MCQ, multiple-choice question.
