Abstract

Medical Education 2011; 45: 785–791

Context  Medical education research has been an academic pursuit for over 50 years, tracing its roots back to the Office of Medical Education at the State University of New York at Buffalo, New York, with George Miller. As the field has matured, the nature of the questions posed and the disciplinary bases of its practitioners have evolved.

Methods  I identify three chronological ‘generations’ of academics who have contributed to the field, at intervals of roughly 10–15 years.

Results  Members of the first generation came from diverse and unrelated academic backgrounds and essentially learned their craft on the job. A second generation, emerging in the 1980s and 1990s, consisted of individuals with PhD-level training in relevant fields such as psychology, psychometrics and sociology, who actively chose a career in health sciences education, often during graduate work. These individuals brought a strong disciplinary orientation to their research. Finally, the proliferation of graduate programmes in medical education means that we are now seeing the evolution of a new type of academic, often a health professional, whose only discipline is medical education.

Conclusions  I propose that we should strike a balance between seeking to create a separate specialty of medical education and continuing to actively recruit from other academic disciplines. I believe that the strong disciplinary roots of these individuals are a critical element in the continuing growth and progress of medical education research.


Introduction

The field we now call medical education research (MER) was born in the 1950s, with George Miller at the State University of New York at Buffalo, New York.1 He gathered around him the first generation of educators, including Hilliard Jason and Steve Abrahamson. As Hitchcock1 states, the project in medical education at Buffalo was the first funded, sustained effort to improve teaching and learning in medical schools.

In the 1960s these individuals moved on to other institutions – Miller to Illinois, Abrahamson to Los Angeles and Jason to Michigan State – where they founded the first research-intensive (and PhD-intensive) offices of medical education and began to employ new faculty members. It is that generation of faculty, myself included, who can be seen as first-generation immigrants – people who migrated into MER more or less through a process of random wandering. Our backgrounds were diverse: I had a PhD in physics; Dave Irby held a degree in divinity, and Christine McGuire was formerly an economist and English major. All of us made the decision to emigrate after our formal education, and many of us came to the field ill equipped with the necessary skills. Training in statistics, psychometrics, psychology or qualitative methods was conspicuous by its absence. We muddled ahead using common sense, as that was about the only relevant tool we possessed. As McGuire2 said: ‘I just made it up… Evaluation is probably the most logical field in the world and if you use a little bit of logic it just fits together and jumps out at you… It’s very common sense.’ We were like first-generation immigrants to another country in that we were unable to use our often highly developed, but largely irrelevant, skills from our host disciplines. (The European situation was somewhat different; MER came a little later to Europe and many of its first-generation researchers in the 1970s – individuals such as Chris McManus and Colin Coles in the UK, and Wijnand Wijnen, Henk Schmidt and Cees van Boven in the Netherlands – had undergone formal postgraduate training in psychology.)

The 1990s saw a new generation of researchers enter the field in North America. Although they may not have been trained in MER, they were, nevertheless, like the Europeans, actively recruited from relevant fields, including those of cognitive psychology, anthropology and epidemiology. Their theses had frequently been researched in a medical environment. They often had specific graduate experience of working with clinicians. This generation can be likened to the children of immigrants. They belong to both the old country – their host discipline – and the new – MER. They can converse in the new language of MER, but they also retain the mother tongue of their host discipline. They remain attuned to the culture of their host discipline and see the world through its lens.

Unlike my generation, second-generation MER academics did not need extensive on-the-job training. This transition is reflected in the far greater sophistication of the research methods they have brought to the field. They use multivariate statistical methods like item response theory (IRT) and structural equation modelling (SEM). They do complex and clever experiments. They can speak about and understand the differences between grounded theory and ethnomethodology.3 Critically, when I say ‘they’ I do not mean that each and every one of this generation can move from IRT to ethnomethodology, but, rather, that each individual in this second generation has undergone in-depth training in a particular discipline in the social or behavioural sciences and that collectively they enrich the field with a wide variety of sophisticated skills rooted in their host disciplines.

One index of the difference between the two generations is initial productivity. I conducted a straw poll of four of the most highly regarded members of the ‘old guard’ of academics who joined MER in the 1970s. Individuals in this group published a mean of 5.5 MER articles in peer-reviewed publications over the first 5 years, ranging from one (me) to 13 (by a well-trained individual with a PhD in educational psychology). By contrast, individuals in another non-random sample of second-generation folks from the 1990s published a mean of 25 papers each in peer-reviewed journals (range: 12–36).

We are now into a third generation. Graduate programmes in MER are proliferating and many offer both Masters degrees and PhDs. Their students are typically health professionals who have actively chosen this career path. They are mentored and taught by medical education researchers of the prior two generations, but they belong entirely to the new discipline. They speak the languages of other disciplines only haltingly; their disciplinary mother tongue is that of MER. They cannot claim any of the culture of another host discipline. Although they may be passingly familiar with psychology, psychometrics or ethnomethodology, their familiarity is, at best, based on a single course or a thesis. If they visit another discipline, they do so as tourists, not as returning prodigal sons.

Of course, the boundaries are not as distinct in space or time as I make them out to be. There were some people with appropriate degrees in my generation and there are people with PhDs from social and behavioural sciences entering MER currently. It is the characteristics of the three cohorts, not their temporal sequence, that matter.

This movement toward a stand-alone discipline may be viewed by some as a sign of the maturation of MER, which is now able to stand on its own two feet. No longer need we import specialists from other lands; we are producing sufficient of our own number to be independent. Perhaps, but my central thesis is that this assimilation and maturation carries with it some potential dangers. I begin with the point made by Lee Shulman4 many years ago in a paper called ‘Disciplines of Inquiry in Education’, in which he states:

‘A major reason why research methodology in education is such an exciting area is that education is not itself a discipline. Indeed, education is a field of study [his italics], a locus containing phenomena, events, institutions, problems, persons, and processes, which themselves constitute the raw material for inquiries of many kinds. The perspectives and procedures of many disciplines can be brought to bear on the questions arising from and inherent in education as a field of study. As each of these disciplinary perspectives is brought to bear on the field of education, it brings with it its own set of concepts, methods and procedures, often modify in them [sic] to fit the phenomena or problems of education. Such modifications, however, can rarely violate the principles defining these disciplines from which the methods were drawn.’4

The strength of medical education, I believe, resides in the bringing together of various disciplinary perspectives. If we move from a disciplinary mosaic, in which diversity is recognised and celebrated, to a melting pot, in which practitioners become homogenised and immigration from other disciplines is reduced, we may lose a major asset, however broad the range of theoretical and methodological knowledge remains.

What, specifically, do these diverse disciplines bring to MER? One contribution is methodology. But my sense is that these disciplines have more commonality than diversity in methodology. I cannot speak authoritatively about qualitative methods, but within the quantitative domain, experimental and quasi-experimental approaches are similar in education, psychology, epidemiology and clinical research. Similarly, the psychometric tradition cuts across many disciplines related to health research. It is not simply their methods that enrich MER; it is their perspectives, their programmes, their theories and their epistemologies. Bordage5 has described these eloquently as ‘conceptual frameworks’ which:

‘…represent ways of thinking about a problem or a study, or ways of representing how complex things work. They can come from theories, models or best practices. Conceptual frameworks illuminate and magnify one’s work. Different frameworks will emphasise different variables and outcomes, and their inter-relatedness.’5

In a similar manner, Regehr6 speaks of the ‘metaphors’ that research provides to explain local phenomena. However, my point is to make explicit the fact that different disciplines enrich a field not because of differing methodologies (although this is sometimes the case) or different theories (as theories rarely emerge), but precisely because they provide access to a diversity of conceptual frameworks. Such specialised knowledge frequently leads to new and counter-intuitive findings that, in turn, advance understanding. To substantiate this claim, I will first describe a number of issues in medical education in which common sense has led us, or is leading us, seriously astray. In each case, I will then show how the specialised knowledge of particular disciplines has led to significant advances, with conclusions that often run directly counter to those that might be predicted according to common sense.

The failure of ‘common sense’ and the success of multiple disciplines

Learning

Much research in medical education is driven by commonsense notions about the nature of learning: (i) learners have different needs, motivations, aptitudes and abilities, and instruction should accommodate these; (ii) different instructional methods and formats lead to different outcomes, and (iii) the more closely the instructional method encourages activities resembling those carried out in the real world, the better the learning. The first assumption is codified in ‘learning styles’ under various rubrics: spatial versus verbal learning; surface versus deep learning, and Kolb’s quadrants of learning.7 The second leads to comparisons across formats, such as e-learning versus some alternative approach, or high-fidelity versus low-fidelity simulation. The third, that methods which encourage behaviours resembling the intended learning outcomes will enhance learning and transfer, appears under various guises, including those of ‘situated cognition’, ‘authentic learning’ and ‘context-based learning’, and has become a rationale in advocacy for methods like problem-based learning (PBL), in which students are encouraged to solve problems, and for high-fidelity simulation.

The failures

Unfortunately, despite multiple studies, none of these commonsense axioms finds much support in the literature. Matching learning styles has never been demonstrated to lead to superior outcomes, in either general8 or medical9,10 education. The idea that different instructional formats lead to consistently different outcomes has been challenged by several systematic reviews of e-learning11 and PBL.4

The successes

Conversely, theories derived from cognitive psychology point to some general instructional strategies that lead to predictable and consistent benefits.12 Problem-based learning curricula that are directed at the activation of prior knowledge lead to better transfer.13 The use of multiple examples and commonsense analogies that direct students to the deep conceptual structure of a problem results in significant gains in transfer.14 Mixed practice, in which examples from different categories are mixed together, and distributed learning, in which learning is spread over several occasions, are both effective strategies directly related to cognitive theories of learning.15,16 Cognitive load theory17 provides a strong theoretical basis for effective instructional design, generally favouring simpler presentations, and has consistent and strong effects.

Clinical problem-solving skills and clinical reasoning

The failure

Perhaps the best example of a failure of common sense derives from McGuire’s own work.2 In the 1960s and 1970s, there was a strong movement to release education from the constraints imposed by the straitjacket of behavioural psychology and behavioural objectives. New medical schools pioneered radical approaches such as PBL and, concurrently, assessment gurus tried to develop methods that moved away from the right-or-wrong knowledge-driven approach of multiple-choice questions (MCQs) and toward measures of ‘problem-solving skills’. The term had commonsense appeal; it was proposed that the differences between experts and novices derived from the fact that experts had good problem-solving skills and novices had still to acquire them.

The best-developed of these was McGuire and Babbott’s patient management problem (PMP),18 a multiple-page problem that asked the candidate to select options for the history, physical examination, etc. by exposing each chosen alternative with a special pen. The selected items were then weighted and summed to provide measures of ‘proficiency’, ‘efficiency’, ‘competence’, etc. Patient management problems found their way onto licensing examinations in Canada and the USA, and many schools used them as part of final examinations.

Both psychometric study of PMPs and fundamental research on ‘clinical problem solving’, however, revealed a disquieting finding: however it was measured, the correlation in performance across problems was only 0.1–0.3.19,20 Further, careful studies showed that students performed very differently on PMPs and on parallel examinations with simulated patients.21 By the 1980s, PMPs had virtually disappeared from examinations.

The success

The consistent finding of content specificity led inexorably to the conclusion that general skills are evanescent and that knowledge plays a central role in expertise; this represented a counter-intuitive result at a time when proclamations that knowledge was irrelevant and fundamentally changed every 5 years were rife. However, studies of expertise in many other disciplines came to the same conclusion.22,23

This led to a small paradigm shift within the domain and the birth of a new generation of research strongly driven by theories of knowledge. Patel et al.24 used proposition analysis, both a method and a theory, derived from research in reading.25 Bordage initially pursued prototype theory from psychology26 in his work with Zacks,27 and then semantic axes, derived from semiotics.28 My own work29 was based on exemplar theories of categorisation.30 Although each theory provides at best only a partial representation of the phenomenon of clinical reasoning, the collective provides a far richer, and more powerful, explanation of the phenomenon that is frequently at variance with common sense.

Assessment

The failure

Parallel to commonsense views of learning, assessment research has been driven by some plausible but incorrect assumptions. The first is that different assessment formats necessarily assess different abilities; thus, for example, a long-standing debate continues between advocates of short-answer questions and proponents of MCQs, on the grounds that the former measure recall and the latter recognition, or that short-answer questions are more likely to test ‘higher-order’ skills. The second is that the closer the assessment resembles reality (and the higher up the Miller31 pyramid it goes), the better the assessment. As Schuwirth and van der Vleuten state:

‘Authenticity should have high priority when programmes for the assessment of medical competence are being designed. This means that the situations in which a candidate’s competence is assessed should resemble the situation in which the competence will actually have to be used.’32

Regrettably, again, the research evidence supports neither assumption. Several studies have shown that correlations between MCQs and short-answer questions range from 0.95 to 1.0.33 Moreover, an accumulation of studies has shown that written, usually MCQ-based, tests have predictive validity for actual clinical performance in practice that equals, and often exceeds, that of performance tests such as the objective structured clinical examination.34–36

The success

Quite early in its history, medical education began to adopt state-of-the-art psychometric methods. These changes were primarily driven by the national licensing bodies in the USA and Canada. The consequence was sweeping change in approaches to summative assessment, with the adoption of contemporary methods such as generalisability theory and IRT for analysis, to the extent that medical education overtook general education in its methodological rigour.37 As a result, current approaches to licensure and certification remain centred on proven standard formats such as the ‘one best answer’ MCQ, supplemented by other formats. Not only has the efficiency of testing been improved by the application of methods like IRT, but their adoption has also been accompanied by consistent evidence of high predictive validity.34,38,39

The usefulness of multiple disciplines

In these examples I have highlighted theories from cognitive psychology and psychometrics, not because these have privileged status among relevant social science disciplines, but simply because these are the areas with which I am most familiar. I have no doubt that others can find equally powerful examples from other disciplines in the social and behavioural sciences.

As these examples illustrate, one great strength of MER derives from its ability, like the English language, to effortlessly absorb other disciplinary languages. Effective strategies to enhance learning derived from strong theories of cognition are gradually entering our curricular innovations.40 Our understanding of clinical reasoning has been enhanced by adapting theories of categorisation from cognitive psychology, sociology and other domains. Our assessment methods have shown dramatic improvements in effectiveness since the adoption of psychometric methods.

What makes this possible? Firstly, medical education is a rich proving ground for different perspectives, whether the issue of concern relates to factors in interprofessional communication in the operating theatre or a study of pattern recognition and perception in radiology. Secondly, and most importantly, from its humble origins in Buffalo, MER has always attracted a coterie of dedicated health professionals with an interest in enriching the teaching environment. Finally and, in my view, most critically, MER has attracted and continues to attract scholars from multiple disciplines in the behavioural and social sciences. It is the specialised knowledge drawn from different domains that permits interesting and potentially illuminating predictions, not McGuire’s ‘little bit of logic’ and ‘just common sense’.2 Indeed, what makes education science, like all science, such an exhilarating voyage of discovery is that findings so often turn out to be not predictable by common sense, but instead appear to be counter-intuitive, at least until one reconstructs one’s intuitions.

A second insight to emerge from this review is that the examples I have chosen provide, I believe, a strong antidote to somewhat nihilistic recent commentaries,6,41 the thesis of which appears to be that education is so complex and ill defined that any attempt to find generalisable truths is bound to fail. Many of the studies I have cited on the ‘success’ side are based on realistic tasks and environments – such as diagnosis of electrocardiograms or learning from resident half-days – yet show large effects. The world of learning may appear complex simply because, in the absence of a theory, it is difficult to identify the aspects on which to focus. The nature of theory is, of course, to simplify by identifying relevant variables, but there remains a difference between simple and simplistic representations.

These observations also have bearing on the historical theory versus application debate. As the examples indicate, one need not choose one over the other. Good theory informs successful application; well-designed applications lead to elaborations of theory. It takes two to tango.

The future of MER

It should be evident that my perspective on this field remains optimistic. The field is expanding, both in terms of the number of people drawn to it and the number of products (papers, books, etc.) they create. Although MER may indeed represent a poor cousin in terms of research funding, it is no longer an illegitimate child. Indeed, we might point with pride to the output–input relationship in the field, which indicates how relatively small sums of research funding spent on MER can result in significant additions to knowledge.

At the same time, I believe its strength derives in part from the fact that MER remains a field of study of interest to multiple disciplines and any attempt to make it into a unitary discipline and to exclude new waves of immigrants should be resisted. Schuwirth and van der Vleuten said:

‘Rigorous and relevant research requires a combination of well-trained educationalists and researchers with good practical knowledge of medicine and teaching. One conclusion from all of these is that a close collaboration between doctors and educationalists is indispensable for good medical education and development of better education. Any monodisciplinary endeavour will lead to a suboptimal result.’32

I fully agree. I would only add that, in my view, the value added from disciplinary diversity derives from the diversity of the systematic bodies of knowledge practitioners bring to the field. Their methods may vary, their theories are probably local and ‘ungrand’ like most theories in social sciences, but their metaphors are essential tools.

Funding:  none.

Conflicts of interest:  none.

Ethical approval:  not required.

References
