Commentary: When can we claim to have made teaching better using multimedia?

Authors

  • Graham R. Parslow (corresponding author)
    Russel Grimwade School of Biochemistry and Molecular Biology, The University of Melbourne, Victoria 3010, Australia

Unfortunately there is no “magic bullet” that can tell us simply and easily whether a certain application of information and communications technology has had an effect on student learning. Indeed, we cannot evaluate any multimedia intervention in isolation: we have to look at student learning in the teaching and learning environment as a whole [1]. In my local environment a number of bodies make grant money available for multimedia projects, and there is always an excess of proposals over the pool of money to be shared. Each application is no doubt achievable and reflects the applicants' enthusiasm for their subject and their undoubted technical ability to create a multimedia product. Having written a number of these grant proposals myself, I know that the natural tendency is to spend most of the time lavishing detail on the effect that an animated insulin receptor (for example) will have in uplifting our teaching. The grant committees, of course, have negligible interest in an animated insulin receptor and settle down to look at the budget, which is typically drawn up with little appreciation of true costs, being either laughably greedy or unrealistically low. So, ignoring all the academic inspiration, the committees can make a first cull by looking at the paragraph on the budget. A second round of culling can be made by looking at the approach to evaluation.

Evaluation is such an after-the-event chore for most of us that we do not have a mindset that acknowledges its importance. Evaluation in our planning mirrors the tension in motivation that bedevils much of our teaching (we want to inspire and promote learning; students want a good grade). We want to teach, but others want to be convinced that we have taught. We know our multimedia application will work, and it is an annoying distraction to be obliged to contemplate objective evaluation. I still think this way emotionally, although I know objectively that evaluation is central to a professional approach and, ultimately, to being able to share the fruits of development work with peers via refereed journal articles. If you feel a need to take evaluation more seriously, then you may be disappointed by the lack of agreed standards [2]. Faced with the need to adopt some form of approach, I commend to you the following article by Zimitat and McAlpine, titled “Student use of computer-assisted learning (CAL) and effects on learning outcomes” [3]. In that article you will find that a standardized Study Process Questionnaire can be adapted to produce analyses of deep and shallow learning by students using computer tutorials. The article by Zimitat and McAlpine also points out another problem with our evaluation mindset: we want to prove that our teaching or multimedia product is sound, but the students are a variable lot, and our best effort can be thwarted by factors seemingly beyond our control. Of course, if we know what factors are contributing to poor outcomes, then we can do something about them.
