Not a philosophy of clinical medicine: a commentary on ‘The Philosophy of Evidence-Based Medicine’, Howick, J. (2011)

Wiley-Blackwell, Oxford. ISBN 978-1-4051-9667-3, 248 pp.

Author

Mark R Tonelli, University of Washington, 1959 NE Pacific St., Seattle, WA 98195-6522, USA

Introduction

When the birth of evidence-based medicine (EBM) was announced some two decades ago, that nascent concept emerged far from fully formed but with seemingly boundless potential. Promulgated as a way to teach and practise medicine [1], EBM largely ignored the fundamental philosophical assumptions and positions upon which it necessarily relied, choosing instead to focus on teaching the skills deemed necessary to become a bona fide EBM practitioner. Some of the earliest and strongest critiques of EBM, however, were those that took up the underlying philosophical issues, arguing that the premises upon which EBM rested were neither self-evident nor secure [2–8]. While there have been some previous attempts to address these criticisms in a piecemeal fashion [9–12], more full-fledged explications and defences of the philosophy of EBM have come only in the last several years [13]. In The Philosophy of Evidence-Based Medicine, Jeremy Howick, PhD, offers by far the most fully formed defence to date of the epistemology of EBM.

Well-written and as approachable as any work in the philosophy of science might hope to be, The Philosophy of Evidence-Based Medicine is at its core defensive, if not always in tone, clearly in spirit. While one might be quick to dismiss this defensive stance as an example of ‘funding source bias’, given that the author is in the employ of the Centre for Evidence-based Medicine at the University of Oxford, Howick's commitment to the cause of EBM comes through as both thoughtful and genuine. Howick clearly wants to improve EBM by adding rigour and clarity where it has been lacking. A great deal of his attention is devoted to the internal consistency of EBM, specifically with regard to understanding the strengths and limitations of the knowledge produced by clinical research that employs randomization, masking and placebos. Clearly, this epistemology of clinical research methodology is what interests Howick the most, taking up approximately two-thirds of the body of the monograph. His arguments and analyses in this arena are considered and often compelling. Applying the analytical tools of the philosophy of science to the field of medical epidemiology is welcome and long overdue. A philosophy of medical epidemiology, however, is not a philosophy of medicine. Unless, that is, clinical medicine is or ought to be simply applied clinical epidemiology. That, perhaps, is the core epistemic claim of EBM.

Howick defends this core epistemic claim of EBM in the latter third of his book. Here his analysis is much less thorough and convincing. Howick accepts and defends EBM's positivist assertion of the relative value of knowledge derived from clinical research and the corollary asserting a hierarchy of evidence rather than taking on the question directly and without preconceived conclusions. And in asserting a claim for the superior value of knowledge generated from clinical research, the author must devalue medical knowledge derived from other sources, specifically mechanistic reasoning and clinical experience.

The (non-existent) paradox of effectiveness

Embarking upon an analysis of the value and effect of randomization, masking and placebos in generating knowledge, Howick addresses the ‘paradox of effectiveness’ pointed out (though not so named) by numerous authors critical of EBM, some satirically [14]. The paradox is said to arise from the recognition that many interventions common in medical practice appear to be quite effective (moreover, physicians would generally agree that we know them to be beneficial) despite the fact that they have never been subjected to clinical trials of any sort, much less randomized trials. Examples offered by Howick include the Heimlich manoeuvre, general anaesthesia and electrical cardioversion, but the list is certainly much more extensive and not always as dramatic, including antibiotics for pneumonia and anticoagulation for pulmonary thromboemboli. Howick states that ‘the EBM view must be modified to overcome the paradox of effectiveness’ (p. 40), but his modification must still fit within the presumption of EBM that randomized trials offer the ‘best evidence’ of clinical effectiveness, at least generally. After defending randomized trials from various critiques, Howick's solution to the ‘paradox of effectiveness’ comes down to an assertion that we are allowed to claim knowledge about the effectiveness of interventions without randomized controlled trials as long as ‘the effect size is larger than the combined effect of plausible confounders’ (p. 56). This methodological concession leads to the recognition that, despite hierarchies to the contrary, at times ‘a carefully controlled observational study with a large effect could provide stronger evidence than a confounded randomized trial with a small effect’.
Howick clearly hopes to limit the challenges to the supremacy of randomized trials to this narrowly defined (though difficult to operationalize) group of interventions with large effect sizes, at least large enough to overcome the potential effects of all plausible confounders. Unfortunately, it seems there are many examples in medicine where the effect size is insufficient (or unknowable) to overcome the combined effect of plausible confounders, yet still no reasonable clinician would advocate for controlled clinical trials. I would put anticoagulation for pulmonary thromboembolism, vasopressors for septic shock and pancreatic enzyme replacement for cystic fibrosis on such a list. None seems to have a dramatic effect size, at least in terms of mortality, and potential confounders are many. Howick's rule does not seem to explain how we can know these interventions to be beneficial, yet clinicians do. Importantly, the assessment of what constitutes all ‘plausible confounders’ cannot be an evidence-based pursuit. That is, elucidating confounders and deciding whether they are plausible requires incorporation of mechanistic reasoning and clinical experience, two pathways that Howick will later reject as a means to generalizable medical knowledge. That is Howick's actual paradox, that approaches rejected as a means to generalizable knowledge in medicine are required in order to determine when his only acceptable means to knowledge generation, controlled clinical research, is not necessary.

Of course, the simplest way to resolve the ‘paradox of effectiveness’ is to acknowledge that randomized clinical trials represent only one pathway to knowledge in medicine. The paradox ceases to exist when we recognize that sound knowledge in clinical medicine can derive from a variety of sources and that evidence in support of a claim of effectiveness does not belong on a hierarchy, but rather can be additive. Evidence supporting the effectiveness of pancreatic enzymes in cystic fibrosis, for instance, comes from clinical experience and pathophysiological (mechanistic) understanding of the disease, not from some claim about effect size. Likewise, the reduction in mortality associated with heparin use after initial pulmonary embolism will be low, as the vast majority of patients survived these events before the therapy was introduced. Yet the mechanistic imperative to reduce new clot formation to avoid any excess mortality is compelling. However, to acknowledge that mechanistic reasoning and clinical experience might provide compelling evidence to support a claim of effectiveness undermines the founding premise of EBM and renders hierarchies obsolete. Howick is clearly not ready to abandon these shaky tenets of EBM just yet. His defence takes the familiar form of dismissing mechanistic reasoning and clinical experience as a means to generate valuable medical knowledge. As with most defenders of EBM, he focuses his attention on the relative weakness of mechanistic reasoning and experience to provide evidence in support of a claim of effectiveness for a therapeutic intervention. Howick seeks to avoid the mistake of many EBM advocates who assume that demonstrated effectiveness is all that is required for clinical decision making, noting that ‘no particular course of action follows from determining whether a medical intervention works’ (p. 121). 
However, in the end he cannot seem to help conflating the limitations of mechanistic reasoning and clinical experience for supporting a hypothesis with limitations in supporting a clinical judgement. Not fully appreciating the distinction between effectiveness and clinical usefulness, between the goals of scientific inquiry and the goals of the clinician and patient, has been a hallmark of EBM advocacy [5,12].

Mechanistic (pathophysiological) reasoning

Howick begins his analysis of the two alternatives to clinical research for generating medical knowledge by emphasizing that EBM has from (almost) the beginning noted the value of mechanistic reasoning and clinical experience in the actual practice of medicine. However, the roles prescribed by EBM are severely limited, with mechanistic reasoning deemed helpful only for generalizing the results of clinical trials, and clinical expertise only for integrating those results with patients’ goals and values.

In devaluing mechanistic reasoning, Howick takes the standard EBM tack of providing examples where interventions based on pathophysiological reasoning have subsequently been demonstrated to be ineffective or even harmful by clinical studies (I suspect that the Cardiac Arrhythmia Suppression Trial [15] has been cited far more often by defenders of EBM than by cardiologists). Howick even provides an appendix of 19 such interventions (pp. 154–157). Yet this line of argument remains disingenuous, unconvincing and unevenly applied. Using the same logic, we could put together a much longer appendix of clinical research studies where subsequent studies or meta-analyses have demonstrated contradictory results. Even different meta-analyses of the same intervention have reached different conclusions. However, when a randomized trial turns out not to produce reliable knowledge, proponents of EBM (like nearly everybody else, for that matter) do not dismiss randomized trials as a means of generating knowledge. So why should 19 examples of faulty mechanistic reasoning lead us to completely reject mechanistic reasoning as a means for generating medical knowledge? The many examples where pathophysiological reasoning has led to therapies that have been subsequently supported by clinical research are conveniently ignored. What the examples cited demonstrate is that mechanistic reasoning is not without limitations and is fallible, but the same can be said of randomized controlled trials and meta-analyses.

Howick also dismisses the value of mechanistic reasoning as necessary for supporting causal claims from clinical research, going so far as to deny mechanistic reasoning the power to reject the conclusions of clinical research that supports clearly implausible conclusions, such as the value of retroactive intercessory prayer [16]. Homeopathy has provided another example of an intervention where mechanistic reasoning, in the form of biological plausibility, has been used to reject what otherwise might be seen as compelling clinical research supporting effectiveness [17]. Howick believes a careful examination of the clinical studies themselves is sufficient for us to ‘remain skeptical about any hypothesis’ they support, but scepticism does not mean rejection of the hypotheses, which clearly seems appropriate in these cases, based on theoretical rather than empiric arguments [18]. Certainly, the published studies on retroactive intercessory prayer and homeopathy were rigorous enough that they would have been met with less scepticism, and maybe been viewed as convincing, had they involved an intervention that was biologically plausible. I have seen many an intervention become part of clinical practice guidelines or care bundles based upon less rigorous studies. Howick's insistence that scepticism be based only on methodological limitations would lead to a demand for more research into the effects of retroactive intercessory prayer or ultra-high dilutions in human illness, given the promising studies thus far published. (This universal call for more research is the standard final statement in the conclusion of just about every published clinical research paper.) Such a pursuit seems, to the mechanistically inclined, a great waste of time and resources.

Mechanistic reasoning and clinical research are each valuable and complementary in assessing effectiveness. When mechanistic reasoning is simple and strong enough, clinical research supporting it may be unnecessary. For example, exerting positive pressure on the abdomen increases thoracic pressure, forcing air out of the lungs, which in turn exerts a force on a foreign body in the larynx, increasing the likelihood of its expulsion. Decreasing the velocity at which one is falling from the sky increases the likelihood of survival upon meeting the ground [14]. Alternatively, when the body of clinical research supporting an intervention is large, consistent and robust, it may not matter whether or not we can explain mechanistically why the intervention works. Biological plausibility allows us to accept the results of clinical research as compelling earlier than we might otherwise, and biological implausibility should cause us to be more sceptical of the conclusions of a clinical research study. Both mechanistic reasoning and clinical research can be flawed; each can serve as a check or support of the other.

Howick eventually gives some grudging respect to this notion, acknowledging that ‘high-quality mechanistic reasoning can bolster the strength of evidence in favor of claims that treatments are effective . . .’ (p. 153). He is not quite willing, however, to endorse the notion of ‘high-quality mechanistic reasoning as a separate potential source of evidence’ (p. 154). Nor is he willing to allow mechanistic reasoning to be employed in the care of individual patients.

Clinical experience/expertise

Howick begins his examination of the role of clinical expertise, derived from experience, by dismissing it as a source of generalizable medical knowledge. While this position is not particularly contentious, it does ignore the importance of clinical experience in the initial recognition of new manifestations of disease (such as novel H1N1 influenza) or rare adverse reactions to therapeutic interventions.

Focusing most of his attention on devaluing the notion of clinical judgement (based upon experience) in individual cases, Howick invokes the iconic work of Paul Meehl [19] to come down firmly on the side of statistics in the classic debate regarding the relative merits of clinical versus statistical approaches to decision making. Howick cites several studies in the arena of clinical medicine that appear to demonstrate that mechanical rule-following (e.g. algorithms, computer decision aids, clinical prediction rules) outperforms expert clinical judgement. The empirical research offered constitutes an interesting class of studies, one where an algorithm or clinical decision rule is designed (using empirical data and/or expert opinion) to answer a relatively simple diagnostic or therapeutic question. Once the algorithm is created, its performance is compared to that of practitioners, who are generally provided only the information that goes into the algorithm. Not surprisingly, the algorithm almost always outperforms clinicians under such circumstances. Unfortunately for Howick, these artificial and simplistic tasks generally have very little relevance for the actual practice of medicine. More importantly, Howick has neglected the studies that demonstrate the value of clinical judgement.

Even if we were to accept Howick's conclusion that rule-following outperforms expert judgement in artificial head-to-head comparison, the conclusion that clinical judgement has no value does not follow. The value of expert judgement can still be demonstrated when it is considered as a source of additional information, an independent variable in a mathematical model or decision aid. In my own field of practice, incorporating clinician judgement improves the predictive accuracy of diagnostic tests for diagnosing pulmonary embolism [20] and improves prognostic accuracy of actuarial estimates of survival in critically ill patients [21]. Just as we saw with pathophysiological reasoning, clinical judgement based upon experience offers a different set of warrants for clinical decision making, one that may conflict with or be complementary to the relevant clinical research. If conflicting, nothing in Howick's argument demonstrates that the statistically derived conclusion should always trump the clinical judgement. If clinicians intend to benefit individual patients (rather than simply provide for the public health), they must be able to recognize when it is appropriate to ignore the rules and guidelines derived from clinical research [5]. Empirical evidence suggests that clinicians are overwhelmingly correct when they choose to deviate from evidence-based clinical practice guidelines in the care of individual patients. Persell and colleagues recently demonstrated that clinicians can make explicit the reasoning for such deviations and that their judgement constituted appropriate care in over 90% of such instances [22].

Howick concludes his examination of the role of clinical judgement with a grudging acceptance of the current EBM position, slightly broadened, that clinical expertise is necessary ‘for producing and interpreting evidence, performing clinical skills, and integrating the best research evidence with the patient values and circumstances’ (p. 183). He is not any more helpful than other EBM proponents in describing how this integration is to occur, beyond noting that it requires attentive and empathetic listening. This conversion of clinical expertise into a set of skills underlies the continued rejection by EBM of clinical experience as a legitimate source of medical knowledge that can be employed in the care of individual patients.

Effectiveness versus clinical usefulness

As a philosopher of science, Howick is most interested in the epistemology of clinical epidemiology, exploring the nature of the knowledge that can be derived from clinical research. The founders and proponents of EBM, primarily physicians, were ill-suited to such investigation and have largely ignored it. To date, scrutiny of EBM by professional philosophers has generally ended badly for EBM [23–25]. Howick seeks to shore up the epistemic claims of EBM, with a qualified defence of randomized trials, a recognition of the limitations of masking, and an explication of the promise and pitfalls of placebos. In his examination of the epistemic ramifications of various methodological tools of clinical research, Howick occasionally recognizes that methodologically rigorous clinical research does not necessarily produce useful knowledge. He provides welcome support for active control studies (comparative effectiveness trials) rather than placebo control trials, recognizing that the latter generally do not provide information that the clinician and patient need to know, which is how a new treatment ‘compares with the best existing treatment, not how it compares with placebo’ (p. 113).

The sum of his argument, however, is that properly conducted clinical research offers the most reliable path to determining the effectiveness of a therapeutic intervention. Effectiveness, in this analysis, means something very specific. Howick argues for clinical research that seeks to measure the ‘characteristic effects’ of a given intervention, those that are directly and causally related to the intervention and not to other features of provision of care. While he cautions that simply controlling with placebos does not allow us to discern ‘characteristic effects’ from ‘non-characteristic effects’ (which might be related to the confidence of the clinician, time spent with patient, etc.), his goal is to help design better studies that will allow this distinction to be made. Teasing out the characteristic effects of an intervention is what requires some combination of controls, placebos, randomization and masking.

This focus on demonstrating direct or characteristic effects of an intervention is a primary mission of EBM. Only interventions that produce direct effects are to be recommended for clinical use. This claim and approach represent a reductionistic approach to medicine, one that can clearly produce meaningful knowledge regarding direct effects of interventions, but knowledge that turns out not to be particularly useful in clinical medicine.

The end of clinical medicine is neither knowledge nor truth. Most clinicians and all patients are interested in only one end, the benefit of the individual seeking medical care. While patients may be curious about how things work in medicine and physicians generally fancy themselves scientists, the distinction between direct and indirect, characteristic and non-characteristic effects has no meaning to the person seeking relief from pain or incapacitation. The provision of benefit requires much more than the application of the reductionism of EBM, which seems only interested in improving outcomes if we can improve them in some direct and characteristic manner. EBM is not interested in whether patients feel better unless they feel better in a very specific way.

The reductionism of EBM runs counter to the rising focus on providing patient-centred medical care, care aimed at improving the well-being of the whole person [26]. Just as the reductionist science of nutritionism leads to a diet of vitamins and supplements, centring medical care on interventions with demonstrated direct effects leads to health care characterized by multiple interventions targeted at isolated maladies. EBM demands that clinicians start by formulating simple questions that can be answered by appealing to clinical research. Broader questions regarding the overall well-being of patients are not germane. By focusing on direct effects, EBM devalues the indirect effects that lead to better outcomes, indirect effects that Howick acknowledges may be larger in magnitude than many of the direct effects. Ultimately, patients do not care why they feel better, only that they do. Patient-centred medicine therefore represents an existential threat to EBM [27].

The role of clinical research in clinical practice

The practice of clinical medicine, aimed at the benefit of the individual patient, is aided greatly by the availability of clinical research results. However, the clinical research that is most valuable to the clinician, research that will appropriately lead to changes in practice, is not necessarily the most statistically robust. A study designed to search for evidence of a direct effect of some therapeutic intervention may have no relevance to practice at all, regardless of the ‘strength of evidence’ it produces [24]. What makes clinical research compelling to clinicians is much more complex than proponents of EBM have been willing to acknowledge to date. While the GRADE working group [28], to which Howick contributes, has recognized that features such as effect size need to be considered when producing clinical practice guidelines, the proponents of EBM have not broadened their perspective enough. Clinicians will find clinical research more compelling if the results are biologically plausible and consistent with prior knowledge. An intervention will be more likely to be incorporated into practice if it is inexpensive, easy to use and safe. Clinicians and patients alike will more readily accept a therapy that has a rapid and large magnitude effect on an outcome that is highly valued. Clinical research is less compelling when conflicts of interest on the part of the funders, authors and investigators are evident [29]. Valuing these features of clinical research does not represent cognitive bias on the part of clinicians; rather it is a reasoned and reasonable approach to incorporating information that is incomplete and fallible into the care of patients.

More compelling clinical research will be welcomed by clinicians, but clinical research will never be prescriptive [5]. Howick attempts to severely limit the role of clinical experience and mechanistic reasoning for medical decision making in individual cases. EBM's call to ‘integrate’ pathophysiological rationale and clinical experience into clinical decisions employs a very narrow definition of integration. A good clinician, however, must bring all relevant knowledge to bear when making a decision for an individual patient. The results of clinical research have no special priority over mechanistic reasoning and clinical experience [30]. Counter to the view of EBM, mechanistic reasoning and clinical experience do not become irrelevant or minimally useful once some bit of clinical research is produced that bears upon a medical decision. A hierarchy of evidence cannot be defended, at least as far as clinical decision making is concerned. Other proponents of EBM [13], philosophers [31] and policy makers [32] have recognized this fact and Howick would be wise to follow suit.

In the end, The Philosophy of Evidence-Based Medicine represents another stage in the evolution of EBM away from many of its original, poorly conceived epistemic positions [33]. Howick is to be commended for his careful analysis of epistemic ramifications of many of the methodologies of clinical epidemiology. Unfortunately, he stops short of acknowledging that the philosophical pillars supporting EBM, the general priority of clinical research results and the hierarchy of evidence, are unsupportable. To do so, of course, would send EBM crashing back to earth. His exposition of a philosophy of EBM is welcome, but fails to represent a coherent philosophy of clinical medicine.
