You are biased.

It is a simple enough assertion that is as close to fact as anything one might express in three words or less.

I am biased.

This is neither absolutely good nor absolutely bad. It is merely an unavoidable reality that we all bring our own past experiences to bear on our understanding and interpretation of new information. Sometimes we intentionally bias ourselves when we do not consider another's perspective or, less offensively, when we decide to eliminate particular sources of data from consideration. To some extent, our bias is a product of our environment in that we are more likely to be aware of data and theory emanating from domains in which we have training and experience than of data and theory from other domains. My training as an experimental psychologist would lead me to believe that to some extent our biases are biological (i.e. that we are wired in such a way that new material is more understandable and more memorable when we can relate it to memories we already maintain).1 Still, being biased runs counter to the archetype of the good scientist, who is generally perceived as someone who is objective at all times and, therefore, is expected to allow the data to tell the story rather than overlaying his or her own beliefs, suspicions and wishes onto some underlying ‘reality’.

Researchers steeped in qualitative traditions will argue that a great strength of their methodologies is their recognition that the world is a complex place and that researchers themselves inevitably add to that complexity by viewing the world and the data they have collected from it through their own personal lenses. For example, a social scientist working in an interpretive tradition embraces subjectivity and intentionally embeds him- or herself in the research process. It is a caricature to suggest that all researchers working within quantitative research traditions fail to appreciate the role of context and the impact our methods and assumptions have on the data we collect and the ‘answers’ we derive. Still, it is implied in many quantitative methodological and analytic strategies that we are striving to isolate ‘the number’ that most accurately reflects some greater phenomenon. Nowhere is this tendency more apparent than in the relatively recent emphasis on generating systematic reviews as the best evidence one can muster to address a particular research question.

The meta-analysis of psychotherapy outcome studies by Smith and Glass (1977) is usually cited as the first truly systematic review in which the researchers went to great lengths to identify all relevant data and combine them into one over-arching (meta-)analysis that, as a result of their having been combined, would dilute the biases inherent in any of the individual studies.2 The notion, quite simply, is that different studies will be biased in different ways and that if we systematically (i.e. without bias) collect the outcomes of studies that satisfy at least minimal standards of quality and relevance, we will be able to derive the best estimate of the magnitude of the effect of interest (i.e. we can deliver as unbiased an estimate of the extent to which something ‘works’ better or less well than something else). The strength of this argument relies upon the extent to which the sources of bias (and the direction of their impact) are random in nature. If a study of problem-based learning versus conventional curricula erroneously suggests a particular advantage in one direction or the other, then another study (and, at this point, many have been completed) should ‘correct’ for this bias and the aggregation of both studies should show that indeed there is no effect if none actually exists.

That all sounds reasonable enough, but what if the biases are not random? In this issue of Medical Education, Colliver et al. explore the challenges inherent in performing meta-analyses on quasi-experimental research.3 They argue that many sources of bias in the quasi-experimental designs that are prevalent within the education literature are ‘constant’ (i.e. they point the same way – typically in favour of the intervention – and thus preclude the accumulated errors from cancelling one another out). In doing so, they argue that mathematical meta-analytic techniques are often inappropriate given the state of the education literature and that, as a result, ‘the medical education field might be better served in most instances by systematic narrative reviews that describe and critically evaluate individual studies and their results, rather than obscure biases and confounds by averaging’. I would take the argument a step further and place less emphasis on the ‘systematic’ and more on the ‘narrative’.

A good education research literature review, in my opinion, is one that presents a critical synthesis of a variety of literatures, identifies knowledge that is well established, highlights gaps in understanding, and provides some guidance regarding what remains to be understood. The result should give a new perspective on an old problem, rather than simply paraphrasing what all other researchers and scholars in the field have shown or said in the past. The author of the critical review should feel bound by a moral code to try to represent the literature (and the various perspectives therein) fairly, but need not adopt a guise of absolute systematicity. I say ‘guise’ because far too commonly the quality of a review is judged (in part) on the basis of its systematicity, defined by its identification of ‘x-thousand’ abstracts, even though only three papers are ultimately found to have a sufficiently narrow focus to meet all the researchers’ inclusion and exclusion criteria (which themselves will vary in the hands of different individuals).

Indeed, the checklists adopted by systematic reviewers as indicators of the quality of the research under review are themselves a source of bias (and an important one, given Bordage’s findings that checklists do not adequately capture scholars’ opinions of a research paper’s quality).4 It is much more valuable if the researcher considers the literature broadly in order to fundamentally redefine the way the focal question is conceived in a meaningful and insightful manner, rather than going to elaborate lengths to establish that every paper relevant to a very narrow question has been considered. Thinking of reviews in this way will hopefully preclude the tendency of some authors to provide a one-paragraph summary of every related study without offering any critical synthesis or advancement in perspective.

But without systematicity the paper will be biased! Yep. Live with it. Better still, embrace it. Even with systematicity the paper will be biased, as in practice the education literature is so diverse that all interpretation and synthesis will be somewhat idiosyncratic anyway. There are ways of minimising overt bias: collaborate with others (especially those with different backgrounds); submit to peer review; strive to generate and consider other possible interpretations of one’s readings. Different individuals may come to different conclusions, but such differences should serve as the foundation on which we create productive debate. There are some ‘does it work’ questions for which absolute systematicity and mathematical synthesis are perfectly appropriate, but, as Glenn Regehr pointed out recently, rarely do ‘does it work’ questions represent the most important and most interesting issues in education research.5 Understanding why and when something does or does not work is inevitably more informative and can only be addressed by fully representing the complexity of the literature and by enabling individuals to put forward their own biases so that the rest of us can better understand ways of interpreting the world that lie outside our own.
