Tim Dornan, Department of Endocrinology, Hope Hospital, Stott Lane, Salford, Manchester M6 8HD, UK. Tel: 00 44 161 206 1384; Fax: 00 44 161 206 5989; E-mail: firstname.lastname@example.org
‘It is a capital mistake to theorise before you have all the evidence. It biases the judgement.’
Sir Arthur Conan Doyle, A Study in Scarlet
Evidence-based medicine came into existence because doctors too readily made Sherlock Holmes’ ‘capital mistake’ of treating patients before they had all the evidence. Woe betide modern-day doctors who, at interview, do not say they will let their clinical practice be guided by meta-analyses of randomised controlled trials (RCTs) and use online literature sources at the bedside. Is there not, however, an opposite, equally capital mistake? A prevalent belief in the ‘evidence era’ is that concerted research effort can deliver a single best answer to everyday questions, leaving little scope for a best guess based on the current state of knowledge.
The belief that concerted research effort can deliver a single best answer to everyday questions leaves little scope for a best guess in the current state of knowledge
The founders of evidence-based medicine were pilloried for imposing such pseudo-simplicity onto the complex world of medicine, although, in reality, they made it quite clear that the craft of practice lies in applying general evidence to specific patients, each of whom is unique.1 This commentary considers how medical educators and education researchers stand in the evidence era.
Evidence-based practice is not the concern of scientific medicine alone. The Campbell Collaboration website,2 for example, currently lists 5 completed reviews of evidence-based practice in education, 9 in crime and justice, and 16 in social welfare. Medical teaching’s own evidence movement, ‘Best Evidence Medical Education’ (BEME), came into existence in 1999 and was brought to a wider audience by several articles in an issue of Medical Teacher the following year. One article in that issue argued that there was a need for evidence because, when it came to education, scientists often forsook intellectual rigour for seductive but misleading ‘intuition and tradition’.3 Those arguments did not fall on deaf ears: with 6 reviews published and 13 more on the way, BEME is now part of the medical education landscape.
With 6 reviews published and 13 more on the way, Best Evidence Medical Education is now part of the medical education landscape
Todres et al.4 recently suggested that all is not well in medical education research. These authors enumerated its shortcomings: a paucity of externally funded studies; a focus on learners rather than on patient outcomes; a predominance of ‘observational designs’ rather than experiments; a lack of statistical meta-analysis, and poorly conducted RCTs. Any call for research to ‘raise its game’ will never be out of place, and neither will criticism of shoddy work, but critical readers must decide whether Todres et al.’s more fundamental criticisms of the methodologies of medical education research are pushing such research up its learning curve or pulling it back down. Must we accept the standard of quality that Todres et al. apply? Their argument that lack of premier research funding equates with lack of quality hinges on that standard.
Writing 7 years before Todres et al., Professor Geoff Norman, a leading educational methodologist who has declared elsewhere that RCT stands for ‘results confounded and trivial’,5 said: ‘There is little reason to continue with an attitude of despair and hand-wringing about the quality of educational research.’ For Norman, ‘…methodological rigour is not necessarily unidimensional’, and ‘attempts to examine the strength and extent of evidence presuppose a single worldview’.6 To amplify that point, a research paper published at about the same time as that of Todres et al. purports to show a correlation between external funding and methodological quality but completely excludes qualitative research from consideration.7 By their implicit advocacy of RCTs, which require massive external funding to meet requisite standards of rigour, Todres et al. could be accused of imposing a ‘single best answer’ way of thinking. How well, we should ask, do the research methods of scientific medicine fit a discipline like education, where ‘the best answer’ for a particular student in a specific context may be the worst answer for a different student in a different context?
The best answer for a particular student in a specific context may be the worst answer for a different student in a different context
Professor Henk Schmidt gave a helpful perspective when he overviewed the entire research effort directed at problem-based learning (PBL) in a 2005 keynote address. He divided publications into 3 categories, as follows:
- 1 ‘this is what we did’: descriptive accounts that are not truly ‘research’;
- 2 ‘justification research’, which answers questions such as ‘How much better is PBL than conventional education?’, and
- 3 ‘clarification research’, which, for example, tells educators how PBL can benefit people with specified attributes, and how it could most effectively be applied in specified educational contexts.
Todres et al.4 might have been dismayed to hear Professor Schmidt call for more clarification research, but other audience members might have been glad to hear him praise well-conducted observational studies that focus on learners rather than patient outcomes and need no external funds.
Against that background, a paper in this issue of Medical Education discusses empirical data that are as far removed from the impersonal objectivity of clinical trial end-points as it is possible to get: personal, subjective narrative. Sceptics of subjectivity in research might do well to note that the article comes from the pens of authors who have written equally well about ‘objective’ evidence-based practice. Greenhalgh and Wengraf conducted a Delphi exercise to guide researchers, reviewers and ethics committees as to when use of narrative can correctly be described as ‘research’, and what constitutes ‘good’ narrative research. The 20 ‘guiding principles’ resulting from this exercise help readers understand what narratives are, how a collection of them can be assembled, and the important ethical and intellectual issues that surround their collection and interpretation.

Would-be narrative researchers may be disappointed, however, to receive no more guidance on how to analyse narratives than ‘rigorously and transparently’, making ‘logical and coherent links between findings and conclusions’. Thus, Greenhalgh and Wengraf’s very act of constructing a set of principles seems to support Norman’s prediction that ‘the range of methodologies used by educational researchers, and the range of possible findings, many of which can never be reduced to an effect size, will frustrate any attempt to devise objective standards of evidence, whether these are formulated as checklists or ratings’.6 Researchers who have experienced the creative buzz of conducting good qualitative research will understand why Greenhalgh and Wengraf, like the authors of a quality checklist for qualitative research,8 are so non-prescriptive about how to analyse narratives just when novices most feel the need for cut-along-the-dotted-line instructions.
BEME is currently identifying ‘logical and coherent links’ within and between the narratives of its early topic reviewers so their successors can more easily assemble disparate evidence into interpretations that will guide education practice (Hammick et al.; in preparation). History will judge whether, by doing so, BEME is perpetuating what the British Medical Journal characterised as a state of ‘stagnation’ in medical education research (in a banner headline to the article by Todres et al.),4 or whether it is helping a fledgling discipline find its own way forward. But what did Sherlock Holmes mean by ‘all the evidence’, and is evidence collection ever complete? We should not be so naïve as to regard medical education research as a whodunit.
We should select the best from the rich variety of research disciplines that can inform education
To raise the game in a way that is neither confounded nor trivial, we should select the best from the rich variety of research disciplines that can inform education. We should follow Norman’s advice not to use arbiters of rigour that are ‘unidimensional’.6 We should conduct education research that, like Greenhalgh and Wengraf’s narrative research, begins and ends in the real world of practice.
We should conduct education research that begins and ends in the real world of practice