Assessment not only drives learning, it may also help learning

Author

Timothy J Wood, 2283 St Laurent Boulevard, Ottawa, Ontario K1G 3H7, Canada. Tel: 00 1 613 521 6012 (ext 2231); Fax: 00 1 613 521 8059;
E-mail: twood@mcc.ca

There is considerable research in cognitive psychology that could provide guidance to medical education researchers and teachers, but that is often ignored because it can be difficult to generalise results from the material or tasks commonly used in psychology laboratories to medical education. It was with great interest, therefore, that I read an article on test-enhanced learning, by Larsen, Butler and Roediger, in the October issue of Medical Education.1 I was then asked to comment on a related article in this issue of the journal that demonstrates the phenomenon in a clinical teaching setting. Larsen et al. described a line of research from cognitive psychology called the ‘testing effect’, which had been applied to both laboratory and education-based contexts. Because little of that research had been conducted within medical education, they were able to conclude only with a discussion of its potential implications for the field. The Kromann et al. paper, in this issue of the journal, suggests that some of that potential can be realised.2

The testing effect refers to improved memory and learning following a test, relative to conditions in which students are exposed to the same material during learning without a corresponding test. Larsen et al. argue that including tests as learning activities for students, rather than only as end-of-course assessments, as usually happens, increases the amount of information that students will learn and remember. To support this claim, the authors describe a number of studies from cognitive psychology. In particular, they focus on three sets of findings:

  1. repeated testing during learning was shown to promote better memory for content than a single test at learning, which in turn produced better memory than no test at all;1,3
  2. tests that require students to construct an answer, such as a short-answer examination, appear to produce better memory than tests that require students to recognise an answer, such as a multiple-choice question (MCQ) examination;1,3 and
  3. delivering feedback on a test at study was reported to promote better memory for content than testing without feedback, although such tests were still found to produce better memory than no tests at all.1,3

Based on these findings, medical educators should integrate multiple tests into their teaching activities; these tests should require constructed responses, and educators should ensure that students are given feedback on their performance.

Larsen et al. conclude by citing several unanswered questions about the testing effect that might be explored by researchers, including determining whether teaching style (lecture, problem-based learning, etc.) facilitates the testing effect, how long the effect lasts and what impact it has on patient care.

The types of tests cited by Larsen et al. are general knowledge tests (e.g. MCQs, short-answer questions). It seems to me imperative to determine whether the effects generalise to medical education beyond the realm of simple, general knowledge-based tests (e.g. whether testing effects are present when skills, rather than content, are being learned). One such study is described in the paper by Kromann et al. in this issue of Medical Education.2 As part of a course on cardiac arrest, one group of students was given a set of scenarios and was tested on a cardiac arrest simulator as part of a learning exercise at the end of the course. A second group received the course and the same set of scenarios but, rather than being tested, was asked to discuss the scenarios as part of a learning exercise. Two weeks after the course, all students were tested on one of the studied scenarios. The students who had been tested during learning scored higher than those who had only studied the scenarios, indicating that the testing effect can occur in performance-based examinations.

As always with interesting research, studies should raise more questions than they answer, and both of these papers do exactly that. For example, in medical education, especially in high-stakes testing, there is an emphasis on MCQs that test the application of knowledge rather than rote memorisation.4 Because these questions require the application of knowledge, does their use on a test lead to additional gains in memory? And how does the learning that occurs with these types of MCQs compare with the learning that occurs when short-answer questions are used? Kromann et al.2 were able to integrate simulations into a learning activity, but this may be too expensive an exercise for most educators. An interesting question, therefore, is whether a testing effect can occur across formats: if written questions, which are cheaper to create, are used during learning, will the testing effect transfer to a performance-based examination such as an objective structured clinical examination?

Another issue that might affect how medical educators incorporate tests into their teaching is whether the content tested during learning should be mixed or blocked. That is, should a test include only topics that were discussed during a particular lecture (i.e. blocked), or should it include content from that lecture and from previous lectures (i.e. mixed)? Because end-of-course assessments cover everything in a course, the use of mixed tests during learning would presumably produce better results.

There are also potential negative effects of testing during learning that need to be explored. Certainly, there may be a backlash from students or faculty staff who think there is enough testing already. Another concern is what might happen if students were to learn erroneous information during the testing process. For example, MCQ examinations use false information to generate the incorrect response options; students who convince themselves of the accuracy of an incorrect answer may have difficulty unlearning that information. This negative effect of testing was demonstrated in a study by Roediger and Marsh.5 Participants read a set of passages and then took an MCQ test covering material both from the passages they had read and from passages they had not read. They then completed a short-answer test in which they were presented with the question stems from the MCQ examination but had to generate their own responses. Roediger and Marsh found a testing effect: material that had been tested at study was answered more accurately than material that had not been tested. However, they also found that participants would occasionally give one of the incorrect MCQ response options as an answer to a short-answer question. This negative effect of testing would be of particular concern if it produced errors in later clinical performance.

More generally, however, the intersection between cognitive psychology and medical education research displayed in the Larsen et al. and Kromann et al. papers is exciting. I look forward to seeing what impact this line of investigation will have on medical education as research into the phenomenon evolves.
