In our field of educational psychology, we are often asked to review papers reporting educational intervention studies that are, unfortunately, disheartening: hypotheses are not well justified, methods are weak, sample sizes are too small to provide sufficient power, and analyses are sloppy. We were therefore pleased, and even impressed, by the quality of the two studies discussed in this commentary.[1, 2] Both are good examples of well-conducted educational intervention studies that provide opportunities for practitioners in our field to learn from those in medical education. At the same time, both studies may profit from insights derived from educational psychology, particularly from productive theories and proper guidelines.

Bjerrum and colleagues[1] investigated the effects of modelling examples on learning the complex procedural skill of bronchoscopy. The main finding – that adding modelling examples improves case-based learning – is robust and of high practical value. As we read the article,[1] we were intrigued to explore the theoretical rationale provided for the findings.

The authors used cognitive load theory as their guiding theoretical framework. Although cognitive load theory has offered a fruitful research framework over the past 30 years, giving rise to many instructional guidelines, the theory has recently been subject to severe criticism.[3, 4] The point of greatest concern is that it is difficult or even impossible to falsify the central claim of cognitive load theory, which is that working memory load can have both positive and negative effects on learning. The main reason for this difficulty is that existing measures of cognitive load cannot validly distinguish among the different aspects of load (intrinsic, extraneous and germane) specified by the theory. Consequently, cognitive load theory can explain every possible outcome of a comparison between educational interventions after the fact.

For example, in their introduction, Bjerrum et al.[1] argue that modelling examples will reduce the extraneous (= ineffective) load during learning and thus improve learning. However, had the authors not found a difference between conditions, they could have argued that the intrinsic (= task-based) load was not high enough for a reduction in extraneous load to show a benefit, or that the learners did not use the freed working memory capacity to increase germane (= effective) load.[1] In sum, there is no a priori way to predict the specific load that instructions will impose on specific learners. Moreover, the authors of the current paper[1] did not measure cognitive load, which makes any explanation in terms of cognitive load highly speculative.

So what explains the benefit of the modelling examples demonstrated by Bjerrum et al.?[1] We would propose three alternative explanations. Firstly, whereas most studies of worked examples demonstrate their benefit by comparing worked examples with conventional problem-solving tasks that include the same content,[5, 6] in the current study[1] the modelling examples did not ‘replace’ any conventional problem-solving task in the control condition, but were ‘added’ to the training. Consequently, the difference in learning performance may have derived from the knowledge conveyed by the examples themselves, rather than from the modelling per se. Interestingly, the inclusion of these examples did not improve performance simply by increasing the time spent on instruction, because no overall difference in training time was found. In fact, seeing the modelling examples led learners to process the training cases much more efficiently, as was evident from the study times.

Secondly, in a recent overview of factors relevant to the design of modelling examples, Wulf et al.[7] noted that alternating the roles of observer and performer is highly effective for skill learning. This form of alternation is exactly what happened in the study by Bjerrum and colleagues[1] and may provide a further explanation for the observed benefit of the modelling examples. Finally, it is plausible that the modelling worked so well because the examples were presented in a spaced fashion (with two cases between examples) rather than consecutively. The cognitive psychology literature demonstrates that spaced learning benefits memory more than massed learning.[8]

In sum, when it comes to further refining and improving modelling techniques for training in complex procedural skills, it might be more fruitful to move away from cognitive load theory and to focus more on specific design factors that make modelling examples more or less effective in case-based learning.[7]

This brings us to the work conducted by Boutis et al., in which the authors directly investigated the effect of one such design factor.[2] In this study, novice medical learners studying ankle radiographs were provided with a feedback hint intended to increase the time they spent reviewing their interpretations of the radiographs. The feedback always consisted of an on-screen text indicating that the interpretation might be incorrect, together with an invitation to reconsider the interpretation. The two feedback groups outperformed the control group on the 50-case learning set, but this effect failed to transfer to either an immediate or a delayed post-test. The authors concluded that more guidance might be necessary to make the hint an effective tool in case-based learning.[2]

But why was the feedback hint not effective? When we read the paper,[2] it occurred to us that the procedure in the two feedback conditions might actually have been very challenging for the students. Firstly, because the feedback was ambiguous (it told learners only that their interpretation might be incorrect), students still needed to judge for themselves whether their initial interpretations were correct. This type of metacognitive reasoning is likely to be demanding, particularly for novice learners, and substantial empirical evidence suggests that learners are actually quite poor at making accurate judgements of their own learning.[9] Secondly, even if learners correctly decided to reconsider their initial interpretation, it was not clear what they could do to arrive at a correct one. This might lead to what we would like to call ‘reflection in vain’.

The next question, then, is what can be done to make the feedback hints more effective. In terms of practical advice, and perhaps also with reference to future research, it might be worthwhile to apply guidelines developed in research on effective tutoring.[10] Specifically, the effectiveness of the hinting procedure might be enhanced not only by providing unambiguous feedback (i.e. stating that the answer is either correct or incorrect), but also by scaffolding learners in the reconsideration of their original interpretation. This scaffolding might take the form of having an expert explain how he or she arrived at the correct interpretation; indeed, Bjerrum et al.[1] clearly demonstrated that such modelling can be very effective. These expert explanations might be further strengthened by visual hints that guide the learner's attention to the crucial elements of the radiographs.[11] Moreover, the expert explanations might be elaborate at the beginning of the 50-case sequence and become gradually more succinct as the learner progresses through the set. Additionally, cases that are correctly interpreted might be dropped from the 50-case set, and those that are not might be repeated until the correct interpretations are reached.[12] To sum up, the study by Boutis and colleagues[2] offers a prime example of a potentially effective hinting strategy that may benefit from further design guidelines.

We hope our discussion has provided ideas about theories and guidelines that may inspire future research, as well as further improvements in complex (skill) training in medical education.
