Abstract


Context  The health professional education community is struggling with a number of issues regarding the place and value of research in the field, including: the role of theory-building versus applied research; the relative value of generalisable versus contextually rich, localised solutions; and the relative value of local versus multi-institutional research. In part, these debates are limited by the fact that the health professional education community has become deeply entrenched in the notion of the physical sciences as presenting a model for ‘ideal’ research. The resulting emphasis on an ‘imperative of proof’ in our dominant research approaches has translated poorly to the domain of education, with a resulting denigration of the domain as ‘soft’ and ‘unscientific’ and a devaluing of knowledge acquired to date. Similarly, our adoption of the physical sciences’ ‘imperative of generalisable simplicity’ has created difficulties for our ability to represent well the complexity of the social interactions that shape education and learning at a local level.

Methods  Using references to the scientific paradigms associated with the physical sciences, this paper will reconsider the place of our current goals for education research in the production and evolution of knowledge within our community, and will explore the implications for enhancing the value of research in health professional education.

Conclusions  Reorienting education research from its alignment with the imperative of proof to one with an imperative of understanding, and from the imperative of simplicity to an imperative of representing complexity well may enable a shift in research focus away from a problematic search for proofs of simple generalisable solutions to our collective problems, towards the generation of rich understandings of the complex environments in which our collective problems are uniquely embedded.

Medical Education 2010: 44: 31–39


Introduction


In the January 2008 issue of Educational Researcher, Robert Slavin began an article with the claim: ‘Throughout the history of education, the adoption of instructional programmes and practices has been driven more by ideology, faddism, politics, and marketing than by evidence.’1 He went on to argue, ‘A key requirement for evidence-based policy is the existence of scientifically valid and readily interpretable syntheses of research on practical, replicable education programmes.’1 Although his claims and entreaties are recent, they are not new. In fact, they are remarkably reminiscent of the appeal made over 40 years earlier by Campbell and Stanley for an approach to education research that was capable of ‘…settling disputes regarding educational practice, …verifying educational improvements, and … establishing a cumulative tradition in which improvements can be introduced without the danger of a faddish discard of old wisdom in favour of inferior novelties’.2

The calls for evidence-based practice in education and the determination of what constitutes evidence are long standing and powerful. Yet those who seek out such evidence in a systematic way are often left with little to show. Tom Russell wrote in his 1999 book, The No Significant Difference Phenomenon: ‘More than 10 years ago I decided to identify studies that would “document” the “fact” that technology improved instruction… A startling finding was that there were/are an enormous number of studies – by far the vast majority of them – that showed no significant difference…’3

Thus, the education field has created for itself a conundrum. Either we must conclude that little we do makes a difference, or we must raise doubts about the value of the research we are doing to address the goals we have set for ourselves.

The field of medical education is also struggling with these issues. It has become apparent to many that something is amiss with the practice of research in the medical education field, and several of our key leaders have expressed concerns. However, we still seem to be struggling to identify exactly what the problem is. We have scientific models of research to emulate, but, despite our efforts to follow these models, we are not sure that we are getting where we want to go, and we are not sure where we are going wrong. The problem has been articulated from several perspectives. From a methodology perspective, some have expressed concerns that our studies have failed to effectively tie curricular interventions to relevant long-term outcomes.4,5 Others have suggested that such grand educational experimentation is probably useless and that advances are most likely to come ‘from many small, tightly controlled studies, occurring in many labs, with many replications and with systematic variation of the factors in the interventions, driven by theories of the process’.6 In another line of argument, some have suggested that the problem with education research and practice lies, in part, in its failure to draw on relevant education theory,7 whereas others have worried that the theories expressed in education are too weak to bear such weight because they have not evolved sufficiently to allow specific and valuable predictions.8

In this paper, I would like to take a different perspective on the issue. Rather than address why we are having such a hard time accomplishing our goals, I would like to take a step back and question the goals themselves. I believe that, among the many disagreements regarding how the problems of medical education research might be solved, there are certain assumptions that are implicitly agreed upon by those most actively taking part in these public discussions. I believe that embedded within these public debates is a common metaphor of ‘legitimate’ science, and of goals for science, that is shaping the conceptualisation of what constitutes ‘good’ research in education: namely, the metaphor of the physical sciences. There are two ways in which this implicit metaphor is shaping the construction of ‘good’ research in our discussions and in our work. Firstly, the hypothesis-testing framework that is the hallmark of the traditional physical sciences has been adopted and adapted into an implicit ‘imperative of proof’ that, as Eva9 suggests, has manifested as a (narrowly defined) search for ‘evidence’. Secondly, the ‘ultimate search for simplicity’, which, as Bill Bryson10 proposes, has been the reference goal of the traditional physical sciences, has been adopted as an implicit ‘imperative of simplicity’ that has manifested as a desire for simple, generalisable findings that can be applied broadly across a variety of domains and contexts.

The combination of these imperatives in the context of the applied world of education has led to a construction of the ‘science’ of medical education as having the goal of demonstrating ‘evidence’ or proof for a set of simple, generalisable ‘truths’, which ultimately should be relevant to applied education practices. Thus, these adaptations of the physical sciences’ imperatives as constituting appropriate goals for research have implications for our thinking about the research agenda in medical education, and these implications may be getting in the way of moving the conversation forward. In some ways these implications are very explicit, as in the increasing calls for ‘standards’ in medical education research.11 In other ways, the implications are more subtle, as in the determination of reasonable word lengths for articles in the field, with the most liberal of our medical education journals publishing papers that are substantially shorter than those seen in, for example, the social sciences. However, regardless of whether they are explicit or subtle, these borrowed and adapted goals exert a kind of gravitational pull on our collective work.12

It is important, therefore, that we examine our community’s interpretation of the imperatives underpinning this model of science and the goal of science that these interpretations engender. In this paper, I will explore both the imperative of ‘proof’ and the imperative of ‘simplicity’, as well as the resulting goal of producing simple generalisable findings that are broadly relevant to the applied education context. Having elaborated and questioned these values, I will offer an alternative framework for thinking about the ‘science’ of education and a framework for imagining how we might profitably evolve our community practices as a result.

As a final note before elaborating these points, however, it is important to acknowledge that my own professional background is embedded in a strongly quantitative experimental paradigm, and it is from this experience and perspective that I draw many of my examples and on which I focus much of my discussion. This is not to say that I believe this set of concerns is limited to quantitative studies in medical education. As Lingard has suggested, even qualitative studies within health professions education are ‘subject to the pull of this domain’s dominant epistemological forces’,12 and she has elaborated, in particular, on the pressure towards simplification and its impact on her own programme of research. Thus, it is not my intention to critique the quantitative paradigm per se. Rather, I will use examples from this paradigm to highlight the impact of this particular set of dominant epistemological forces on the work of our field, and I will leave it to those who are better positioned to address how these forces might be impacting research from other paradigms.

The imperative of proof


To frame the discussion regarding the imperative of proof, I would like to return to the quotation from Campbell and Stanley and their celebration of the experiment in education research ‘as the only means for settling disputes regarding educational practice, as the only way of verifying educational improvements, and as the only way of establishing a cumulative tradition in which improvements can be introduced without the danger of a faddish discard of old wisdom in favour of inferior novelties’.2  This entreaty is compelling and likely captures many of our community’s intuitions about science, not necessarily in its celebration of the experiment’s uncompromised ability to uniquely deliver on this promise, but rather in its articulation of the nature of the promise itself: the desire to supply incontrovertible proof. Whether the proof is for the efficacy of education interventions or for the affirmation of a theory, proof is at the core of this implicitly held belief about the purpose of education science: ‘Did it work? Was I right?’

The roots of this approach are grounded in a rich tradition of the physical sciences, in which individuals such as Descartes (1596–1650) used the ‘scientific method’ as early as the 17th century to establish the ‘laws of nature’. In this tradition, experimental hypotheses were functional extensions of theoretical predictions and, by demonstrating ‘experimental control’ over phenomena of interest, researchers were empirically evaluating their theories, and extending or limiting their generalisability and applicability. Yet, in the hands of education researchers, the hypothesis-testing paradigm developed in a highly practical direction,13 and the approach came to be defined less by the epistemology (the elaboration of causal theories through hypothesis testing) and more by the methodology by which the epistemology was enacted (the randomised controlled trial).14 With this shift, the goal of research in this tradition was reinvented as a much more practical desire ‘to establish credible links between exposures and outcomes’.15 Particularly in medical education, which has been functioning for some time in the larger context of the evidence-based medicine movement, this goal evolved further into a demand for evidence of the efficacy of education interventions, which, again as Eva has suggested, has been equated with ‘proof that something works’.9 This focus on evidence and the attendant implicit adoption of proof as the goal of research has had important repercussions for practices in the field of medical education.

For many programme developers, this imperative of proof has led to summative models of programme evaluation and to an almost exclusive focus on the question of whether the intervention ‘worked’16 (i.e. whether those who received the intervention improved on a predetermined outcome measure). One implication of this summative approach to programme evaluation is that a typical process of education innovation looks something like this:

1 identify a content area that needs to be taught;
2 develop a teaching module to match the content and implement the module;
3 test to see if it ‘works’;
4 if it does not, try to figure out what went wrong;
5 tweak the design and delivery;
6 test to see if it works now (if it does not, go back to step 4 or, eventually, give up), and
7 publish the success as demonstrating that the content area can be taught in this way.

The result of such an approach to programme development is that any iterative modifications made to the programme are based on post hoc guesses rather than on systematically collected data. Thus, programmes are, at best, improved in a suboptimal way and, more often, are abandoned before they have a chance to mature effectively through systematic and sustained innovation.17 More importantly for the science of education, regardless of whether the programme ultimately ‘works’ or is abandoned, we don’t learn anything meaningful from these efforts because we are more focused on whether a programme works than on why it does or doesn’t and the implications of its ‘success’ or ‘failure’ both for our understanding of learning and, through this understanding, for future education practices.

Interestingly, the limitations of having adopted an imperative of proof are not restricted to the developers. Even for those who focus on the study of theory in the hypothesis-testing tradition, there has been some gravitational pull towards producing proof of a theory rather than a test of its applicability and generalisability. Thus, for many who attempt to engage in theoretical work in the context of this imperative, a distressingly similar road to theory ‘development’ can be travelled:

1 identify a data pattern expected from the theory;
2 develop a study to demonstrate that result;
3 run the pilot subjects to see if the experimental manipulation ‘works’;
4 if it does not, try to figure out what went wrong;
5 tweak the materials and instructions;
6 test to see if it works now (if not, go back to step 4 or, eventually, give up), and
7 publish the success as demonstrating that the theory is plausible.

Here, the implication of having given precedence to the imperative of proof is that our expectations are usually confirmed by the data we ultimately use to represent our understanding of a phenomenon. In the process, we often fall prey to the confirmation bias (i.e. if our results are predicted by our theory, then our theory must be right)13 and spend little time looking deeper into what else might explain our data or what else might be going on in our dataset. But again, importantly for the science of education, we don’t learn anything meaningful from the ultimate result because we are more focused on finding the conditions that prove our theories than we are on when they do or don’t work and the implications of the ‘success’ or ‘failure’ of our studies for our broader understanding of the theories we are exploring.

Thus, by having adopted too completely the imperative of proof for determining the quality and value of our research efforts, we have brought about a general movement towards a ‘decision-oriented’ model of education research.17 Rather than dwelling on the questions of what is going on, we jump straight to the issue of whether it worked. We keep tweaking when the answer is ‘No’, but are satisfied as soon as the answer is ‘Yes’. We celebrate and publish our positive results as proof of our rightness and treat the negative results as ‘failures’ to be ignored or even buried. As a result, the information we share with the larger education research community through the talks we give and the studies we publish tends to feel more like a ‘show-and-tell’ exercise than an engaging and challenging contribution to the community’s understanding of learning processes and education practices.

As educators, we are well aware that little improvement in future performance arises from the two-state summative feedback mechanism of ‘pass/fail’. We are bound to learn more from our own work and that of others if we systematically examine and document our struggles than if we loudly proclaim our successes. From the perspective of improving our education science, therefore, there is always a better question than ‘Did it work?’ and there is always a better answer than ‘Yes’. Yet the imperative of proof that underlies much of this work implicitly and explicitly values these dichotomous pass/fail questions as representing the highest form of ‘evidence’, creating one of the gravitational pulls that shape much of the work carried out in the field. I would join others,9,18,19 therefore, in calling for such work to be placed in its rightful position as a potentially valuable tool in a cycle of programmatic research that seeks to improve our understanding of education-related phenomena and not as the pinnacle and primary goal of research activity.14

The imperative of simplicity


A second, equally important imperative that our community has implicitly adopted in its model of science is the imperative of simplicity. Here, the analogy to the physical sciences and, more specifically, to physics is at its most powerful in our determination of the quality and value of research outcomes. It appears to be a natural human tendency to seek out simple generalisable rules for how the world functions (and how we function in it). This process is nowhere more deeply systematised than it is in the early models of science in physics. As Bryson commented: ‘…physics is really nothing more than the search for ultimate simplicity.’10  That is, early physics was driven by the search for a minimal set of simple laws that could be understood in isolation and combined in simple ways. There is a great tradition of success with such an effort in physics, as for example with Newton’s three laws and Einstein’s single equation. Furthermore, such a goal seems quite deeply entrenched in the valuation of science in education, as is clearly articulated in both Slavin’s call for ‘readily interpretable syntheses of research on practical, replicable education programmes’1  and Norman’s call for ‘small, tightly controlled studies … with systematic variation of the factors in the interventions’.6 

It is worth noting, however, that physics itself has not necessarily followed this path of simplicity for some time. During the 20th century, there occurred a major shift in thinking within physics that evolved with the developing field of subatomic physics. It was a shift so fundamental that Gino Segrè dubbed it ‘a struggle for the soul of physics’.20 So radical was the thinking required to interpret and understand this new field that Niels Bohr suggested ‘a person who wasn’t outraged on first hearing about quantum theory didn’t understand what had been said’.10 With reference to its violation of the imperative of simplicity, Leon Lederman stated: ‘It is too complicated… there is a deep feeling that the picture is not beautiful.’10 Yet, as unsettling as it might be, bowing to the ‘complicated’ is the only reasonable way to make sense of what has been learned: light is a series of packets (quanta) that act like waves; electrons in an atom move from orbit to orbit in quantum leaps of non-existence; the parts of a split photon are ‘entangled’ and therefore affect each other instantaneously across light years of distance without any obvious connection to each other. Through the discovery of these and other phenomena, the basic imperative of physics shifted from the construct of simplicity to a construct of uncertainty. This new understanding of physics was perhaps best captured by Richard Feynman, who wrote: ‘A philosopher once said, “It is necessary for the very existence of science that the same conditions always produce the same results.” Well, they do not. You set up the circumstances, with the same conditions every time, and you cannot predict behind which hole you will see the electron.’21

Meanwhile, in the world of macro physics, the concept of chaos theory was being developed to explain how certain dynamic systems can be highly deterministic (i.e. they have a clear and simple set of rules to describe their behaviour), but still be highly unpredictable just a short distance out from their initial state.22 These chaotic systems are inherently unstable because of their extreme sensitivity to initial conditions, which manifests as an ‘exponential growth of perturbations’. In short, for such systems, the simplifying assumption that ‘error’ cancels out simply does not hold. Rather, ‘error’ multiplies, so the most minute deviations at the initial state manifest as massive deviations in a very short time. A simple example can be demonstrated by placing a marble on the top of an inverted bowl. The marble never rolls down the same path twice because it can never be placed in exactly the same position on top of the bowl, and the most minor deviations in initial placement compound on the trip down, leading to very different endpoints. As with the smoke from a lit cigarette, the system can appear highly linear and stable in the short term, but can bloom into unpredictability at a moment’s notice.
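
The ‘exponential growth of perturbations’ can be made concrete with a toy simulation. The sketch below is my own illustration rather than anything drawn from the chaos literature cited here: it iterates the logistic map, a standard example of a fully deterministic yet chaotic system, from two starting points that differ by one part in ten billion. Like the marble on the inverted bowl, the two trajectories part company completely within a few dozen steps.

```python
# Illustrative sketch only: sensitive dependence on initial conditions
# in the logistic map x -> r * x * (1 - x), which is chaotic at r = 4.

def logistic(x, r=4.0):
    """Apply one step of the logistic map."""
    return r * x * (1.0 - x)

x_a = 0.2            # one 'placement of the marble'
x_b = 0.2 + 1e-10    # a second placement, differing by 1 part in 10^10

for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        # The gap roughly doubles each step until it saturates at the
        # full size of the system: 'error' multiplies, it does not cancel.
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.2e}")
```

Note that determinism is never violated here; it is only prediction from imperfectly known starting conditions that fails.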

The discoveries and descriptions found in these more ‘recent’ movements in physics have a ring of familiarity in the education domain. Consider, for example, the quantum construct of ‘entanglement’, which describes a phenomenon whereby two or more objects are linked together so that one object can no longer be adequately described without full mention of its counterpart(s), even though the individual objects may be spatially separated. Or consider the chaotic construct of ‘the exponential growth of perturbations’, which describes situations in which minor initial variations can have major influences on the final outcome. These more complex constructs may be better descriptions of education contexts than the classic models of simplicity and predictability that were the organising principles of 19th century physics. The problem with trying to define everything simply is that when everything interacts, nothing is simple. As a result, the ‘simplifying’ assumptions that drive the classic paradigm for the establishment of causality (hold all else equal and determine the simple effect of a single variable) may be hopelessly flawed. In entangled and chaotic domains, the effect of a single variable is not merely hidden in the ‘noise’ of other variables. Rather, its effect is fundamentally transformed through interactions with those other variables. So the goal of reductionism by way of ‘simplifying assumptions’ may be a chimera. Instead, as Bryson concluded about physics, ‘…so far all we have is a kind of elegant messiness’.10  And perhaps this elegant messiness is the best we can hope for.
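
The claim that interactions transform, rather than merely obscure, the effect of a single variable can be given a minimal formal illustration (mine, not the author’s). In a simple linear model with an interaction term, the ‘effect’ of one variable is not a constant buried in noise; it is itself a function of the context supplied by the other variable:

```latex
% A minimal illustration, assuming a simple two-variable linear model:
% with an interaction term, the marginal effect of x_1 is not a constant.
\[
  y \;=\; \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12}\,x_1 x_2 + \varepsilon,
  \qquad
  \frac{\partial y}{\partial x_1} \;=\; \beta_1 + \beta_{12}\,x_2 .
\]
% When beta_12 is non-negligible, 'holding all else equal' returns a
% different answer for every value of x_2: there is no single, simple
% effect of x_1 to generalise across contexts.
```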

From a practical perspective, this complexity probably indicates that meaningful, simple, generalisable findings that address common problems in education are fundamentally unachievable.14 Environmental differences between one education domain and the next, between one education context and the next, between one student–teacher dyad and the next, and even between one interaction in a student–teacher dyad and the next may involve a set of unique perturbations sufficient to render cross-context predictions meaningless. Of course, this is not news to anyone who has attempted a multi-institutional trial of an education intervention. One site can’t do it at the same time because its curriculum won’t allow it. Another site must modify the instructions because its evaluation system is different. Another site must combine the intervention with a similar intervention being proposed for another purpose. And a fourth site must alter the intervention because of limitations in the number of faculty available to deliver it. Thus, by the time the multi-institutional trial is set up functionally, it becomes obvious that it is not a multi-institutional intervention, but, rather, that a different intervention (or even set of interventions) will be carried out at every single site because, in order to be locally meaningful, the intervention must be localised to the particular constraints, issues, values and structures of the relevant institution (and the particular members of that institution).

Howell suggested that ‘context is the irreducible covariate’.23 This was highlighted wonderfully in the conclusions of a paper by Haidet and colleagues, who wrote: ‘We conducted this study to ask whether patient-centred learning environments at nine schools were substantially similar or different, both in strength and character. Our results indicate a more complex reality than the question would suggest.’24  The world is a complex and complicated place. By adopting the imperative of representing this complex and complicated world simply, in an emulation of 19th century physics, we have failed to represent the beauty and richness of variation and context. And we have missed the opportunity to evolve methods by which we can represent this complexity well, choosing instead to dismiss it as noise in our eagerness to achieve simple, generalisable solutions.

The problem of generalisable solutions


In order to provide another perspective on these issues, I would like to invoke a construct from cognitive psychology: the distinction between ‘weak’ and ‘strong’ problem-solving routines.25 As defined in cognitive psychology, weak problem-solving routines are those processes that are broadly generalisable to many situations, but are of limited applicable value in any particular situation. A good example of a weak problem-solving routine might be represented as follows: identify the relevant parameters of the initial state; identify the relevant parameters of the desired end state, and identify the steps necessary to move from the initial state to the desired end state. Although it is true that this problem-solving routine is generalisable to pretty much every problem-solving situation we might encounter, from the perspective of application, it is of no practical value in helping us decide what to do next in a particular situation. By contrast, strong problem-solving routines are routines that have become highly specialised to a very specific type of problem in a very specific context (as demonstrated by the television repair person, who always checks that the television is plugged in before doing anything else). As such, these highly specialised routines often allow the user to achieve a solution very quickly. However, these routines, because they are inextricably bound to the contextually relevant parameters and conceptualisations of the locally manifested problem, are not particularly generalisable. These strong problem-solving routines are now recognised and celebrated as a hallmark of expertise,25 although they are also identified and denigrated as ‘limits’ to human thinking in the form of ‘heuristics and biases’,26 the ‘restricted solution sets’ of experts,27 and ‘failures of analogical transfer’ from one problem to the next.28
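
The distinction can be caricatured in a few lines of code. The sketch below is a hypothetical illustration (the scenario and function names are invented, echoing the television repair example above): the weak routine applies everywhere and decides nothing, whereas the strong routine decides instantly but only for one narrow class of problem.

```python
# Hypothetical sketch of 'weak' versus 'strong' problem-solving routines.

def weak_routine(initial_state, goal_state):
    """Generalisable to any problem at all, but never names a concrete step."""
    return [
        f"identify the relevant parameters of the initial state: {initial_state}",
        f"identify the relevant parameters of the desired end state: {goal_state}",
        "identify the steps necessary to move from one to the other",  # but which?
    ]

def strong_tv_repair_routine(tv):
    """Immediately actionable, but bound to the context of one device."""
    if not tv.get("plugged_in", False):
        return "plug the television in"     # the expert's first check
    if not tv.get("powered_on", False):
        return "press the power button"
    return "open the casing and test the power supply"

print(weak_routine("broken television", "working television"))  # true, unhelpful
print(strong_tv_repair_routine({"plugged_in": False}))          # helpful, local
```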

These descriptions of weak and strong problem-solving routines have interesting parallels for our discussions of the difficulties we face in the science of education. Earlier, I described some of the editorial debates among our medical education research leaders. One set of debates related to the value of ‘theory’ in our field. Using the analogy of weak versus strong problem solving, we might reinterpret Colliver’s concern about the value of general education theory8 as a concern that current education theory functions as a weak problem-solving routine. That is, our current theories may be generally ‘true’, but they are too generalisable to be of practical local value: even if we choose to acknowledge their ‘truth’, we still do not have a clear sense of what to do with this ‘truth’ in terms of improving the daily practice of teaching and curricular development.

At the same time, relevant to the second set of debates regarding the call for research with practical, clinically valuable outcomes, it may be that the most demonstrably successful interventions can best be understood as strong problem-solving routines in that they are effectively adapted to local constraints and contexts, but, as a result, they are too deeply embedded in the local curricular context to be of much general value. This concern regarding the lack of generalisability of local interventions is reminiscent of the argument put forth by Norman and Schmidt that much of our literature is of such local interest that it is little more than ‘market research’:29 even if your course works for you, it is not clear how that will help me resolve my curricular problems on a day-to-day basis. Thus, local programme descriptions may be of limited value to others and, for reasons described earlier, multi-institutional randomised controlled trials may be of little value to anyone.

Combining these two constructions of the problem of education research has serious implications for the field. If generalisable education theories are too weakly generalisable to be of local practical value, and if localised solutions are too strongly embedded in the local context to be of general practical value, we must conclude that there may be no generalisable solutions to our collective education problems.

The ‘science’ of education


Norcini has been credited with suggesting that context specificity is ‘the one fact of medical education’.8  To the extent that this idea is broadly accepted, every health profession will have to begin grappling with the implications of this ‘fact’ for the definition and evaluation of competence, because it will mean that competence does not exist in the individual, but in the individual’s interaction with the constantly evolving context in which he or she is practising. Perhaps the time has also come to consider the possibility that context specificity is also a core ‘fact’ of medical education research. If so, we as a research community must begin grappling with the definition and construction of ‘science’ for our field. That is, if local context is the ‘irreducible covariate’, if the search for ‘replicable educational programmes’ is an implausible goal, if there are no meaningfully generalisable solutions in health professional education, then what is the purpose and value of science and scientific discourse in the health professional education field?

As a start towards addressing this question, I would like to suggest that the science of education is not about creating and sharing better generalisable solutions to common problems, but about creating and sharing better ways of thinking about the problems we face. It is about exposing our underlying metaphors and assumptions, and examining the relative value of these metaphors and assumptions for interpreting the education issues that we are individually and collectively trying to address in our own local contexts. Thus, the value of our scientific discourse (our talks and papers) will arise not from our ability to create a general solution that will apply to everyone’s problems or even our ability to solve each other’s problems, but rather from our ability to help each other think better about our own versions of the problems. Likewise, the value of reading the literature will not depend on our finding a solution that we can blindly adopt, but, rather, on reflecting on how to incorporate others’ interpretations of a problem into our own context, on what needs to be adapted to make those interpretations relevant to our context, and on why that adaptation is necessary. Through such a discourse, we can learn not only about our education practices, but also about our various contexts, and through this process we can learn something about the explicit and implicit theories that underlie both. To borrow Richard Shillington’s description of data analysis, the education literature should be ‘an aid to thinking, not a replacement for’.30

Thus, we should not construct our papers and talks around the idea that we have an answer to some problem that others can blindly adopt. Our scientific discourse should not focus on the answers at all. Rather, it should focus on expressing a new understanding of the problem or on the flaws in traditional understandings of the problem. Of course, this understanding should arise from the systematic collection, analysis and synthesis of data. And, of course, such an orientation towards understanding should not deny a place for the seeking of evidence of the success of a local solution as part of demonstrating the value of a particular construction of the problem. But these are merely the methods of science, not the definition. Science is about increasing communal understanding and the methods are merely the tools by which this goal is enacted. To use the methods of science without explicit efforts to move towards the goal of greater understanding is to have missed the point.

A focus on increasing our understanding of the problems of health professional education rather than on a particular solution will require a deeper description of the problem we are trying to solve and a more elaborate description of the parameters of the local context accommodated in the implementation. It will involve a more elaborate exploration of the assumptions about learning and competence that underlie a particular education approach. It will include descriptions not only of what worked, but of what did not work, an elaboration of the messy parts of our efforts to intervene, for it is in the messy parts that we will find the important clues to our unhelpful assumptions and constructions regarding learning and competence. Most importantly, it will involve a shift away from demonstrating we were right towards articulating what we learned along the way and how our thinking changed as a function of our experience.

Conclusions


Effective evolution in the science of education has been limited in part because of the underlying model of science we have adopted for determining what constitutes appropriate goals, valuable evidence and legitimate discourse in our research community. This underlying model is so deeply embedded in current discussions about how to improve the science of education that it has gone relatively unchallenged throughout our recent debates. If we are to move beyond the current conundrum, we must excavate the assumptions that are associated with this model of science and explore the extent to which they continue to be valuable in helping to evolve our field.

In this paper I have discussed two ‘imperatives’ that I believe are implicit in the dominant discourse about the science of education: the imperative of proof and the imperative of simplicity. I have suggested the potential of reorienting education research from the imperative of proof to an imperative of understanding, and from the imperative of simplicity to an imperative of representing complexity well. If we adopt these new imperatives, and represent them in both the way we approach our problems and the way we present our studies, we will find opportunities to move away from the current model of ‘show-and-tell’ papers and talks towards a model that will allow us to start building a shared understanding together.

Of course, education is not only a theory-building discipline; it is also a field of practice and, by focusing this discussion on a movement towards understanding, I do not suggest that we ignore the goal of positively affecting education practice. What I do suggest, however, is that in our efforts to achieve this goal, we must be wary of the model of science we adopt and the resulting path we follow. Education research is not rocket science, which is built on a structured, linear system with a straightforward set of factors that we can stick into a well-articulated formula to predict a clearly defined outcome. Rather, if we must make analogies to the physical sciences, we might do better to look to quantum mechanics and chaos theory. Such analogies will lead us away from the search for proofs of simple generalisable solutions to our collective problems, and towards the generation of rich understandings of the complex environments in which our collective problems are uniquely embedded.

Funding:  at the time of writing, GR was supported as the Richard and Elizabeth Currie Chair in Health Professions Education Research at the University of Toronto.

Conflicts of interest:  none.

Ethical approval:  not applicable.

References
