The task is not so much to see what no one yet has seen, but to think what no one yet has thought about that which everyone sees. – Arthur Schopenhauer (1788–1860)

In 1799, at the age of 67, George Washington was perhaps the most famous man in the world. One day, he developed a fever with increasing dyspnoea. Martha Washington called for a doctor, and he was eventually treated by three of the finest physicians, all of whom administered the standard of care: bloodletting. He was no better after each treatment. Eventually, with the agreement of all doctors on the treating team, he was bled of about 4 L of blood in total. Although this would be unfathomable today, the observational studies of the time clearly demonstrated that bloodletting was a safe and effective treatment. It took decades for the practice to be discredited.

Medical reversal refers to the phenomenon whereby a new, superior trial contradicts current clinical practice.[1] Perhaps one of the greatest medical reversals of the past 50 years was the discovery that peptic ulcers are caused by Helicobacter pylori. I remember as a medical student being taught by the professor of gastroenterology that peptic ulcers were due to the hyperacidity of the gastric environment, with a possible association with stress. This led to many ‘gold standard’ treatments, such as vagotomy and partial gastrectomy, with all their attendant complications and harms. Further, the concept that bacteria might be the cause was initially ridiculed, because everyone ‘knew’ that it was simply not possible for bacteria to survive in such an acidic environment.

There are in fact multiple examples of doctors being convinced that a treatment is the right thing to do, only to later conclude the opposite. These medical reversals were often originally based on flawed theory and small studies, for example: steroids and prophylactic hyperventilation for traumatic brain injury, military anti-shock trousers (MAST) for hypovolaemic shock, and aggressive volume resuscitation for shock associated with penetrating truncal trauma. Similarly, there are many examples of positive results from trials based on surrogate measures (i.e. disease-oriented outcomes) that were subsequently overturned by trials based on clinically meaningful (i.e. patient-oriented) outcomes, such as high-dose steroids for spinal cord injury, calcium in cardiac arrest, cyclo-oxygenase-2 (COX-2) inhibitors, early decompressive craniectomy in traumatic brain injury, vest CPR for out-of-hospital cardiac arrest, and drotrecogin alfa for sepsis. These are now discredited, but at the time, they seemed like a logical and good idea.

Unfortunately, a logical theory often trumps reality.[2] Many medical reversals involve a standard of care that has been promoted on the basis of our incomplete or flawed understanding of the pathophysiology of the condition.[3] Maybe half of these practices are wrong.[1] Perhaps more than half. In these cases, it is evident that clinicians have been using medications or procedures, such as those outlined above, in vain and causing harm. Yet, these same treatments have been promoted by professional bodies and consensus guidelines.[3] Indeed, exaggerated results in the medical literature have reached epidemic proportions in recent years.[4] Many claimed treatment benefits have turned out not to be real; examples are increasingly being published, and papers reporting this phenomenon are becoming common.[5] Moreover, even when the effects are genuine and confirmed on replication of the study, their true magnitude is typically smaller than originally claimed.[4, 6] This might be due to underlying publication bias, with journals preferring positive data over null results, and to the selective reporting of results. As Richard Palmer states: ‘We cannot escape the troubling conclusion that some – perhaps many – cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a priori beliefs often repeated.’[6] Scientists frequently find ways to confirm their preferred hypothesis, disregarding what they do not want to see.[6]
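
This inflation from selective publication is easy to demonstrate with a toy simulation (our illustration, not taken from any of the cited studies; all parameter values are arbitrary assumptions). If journals preferentially publish studies that reach statistical significance, the published estimates of a modest true effect are systematically exaggerated:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Illustrative (assumed) parameters: a modest true effect measured
    # by many small two-arm studies.
    true_effect = 0.2    # true effect size, in standard-deviation units
    n_per_arm = 50       # participants per arm (small studies)
    n_studies = 10_000   # number of simulated studies

    se = np.sqrt(2.0 / n_per_arm)                        # standard error of the mean difference
    estimates = rng.normal(true_effect, se, n_studies)   # each study's estimated effect
    published = estimates / se > 1.96                    # only 'significant' results get published

    print(f"True effect:                      {true_effect:.2f}")
    print(f"Mean estimate, all studies:       {estimates.mean():.2f}")
    print(f"Mean estimate, published studies: {estimates[published].mean():.2f}")

Under these assumptions, roughly one in six studies is ‘published’, and the published studies report a mean effect of about 0.5, some two and a half times the true value: exactly the pattern of effects shrinking on replication described above.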

In his landmark paper, the most downloaded in the history of the peer-reviewed, open-access journal PLoS Medicine, John Ioannidis highlights that, in reality, most published research findings are false.[7] Using statistical modelling, Ioannidis demonstrates that for most study designs and settings, it is more likely for a research claim to be false than true. Importantly, claimed research findings may often be simply accurate measures of the prevailing bias.[7] He goes on to list the important corollaries that make it less likely for a research finding to be true: the smaller the study; the smaller the effect size; the greater the number and the lesser the selection of tested relationships; the greater the flexibility in design, definitions, outcomes and analytical modes; the greater the financial or other interests and prejudices; and the ‘hotter’ a scientific field (with more scientific teams involved).
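
The core of that modelling can be stated in a single formula from the paper.[7] If R is the pre-study odds that a probed relationship is true, α the type I error rate and β the type II error rate, then the positive predictive value (PPV) of a claimed finding, before any bias is added, is:

\[
\mathrm{PPV} = \frac{(1-\beta)\,R}{R - \beta R + \alpha}
\]

A claimed finding is therefore more likely true than false only when (1 − β)R > α. With the conventional α = 0.05 and power 1 − β = 0.8, this requires pre-study odds better than about 1:16; for a long-shot hypothesis with R = 1:20, PPV = 0.04/0.09 ≈ 0.44, so the claim is indeed more likely false than true. (The numerical values here are illustrative choices, not figures from the editorial.)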

Ioannidis stresses that what matters is the totality of the evidence.[7] Some might consider that meta-analysis overcomes these concerns, but recent reports highlight that we cannot rely on meta-analysis.[8] Increasingly, there are reports of meta-analyses missing over half of all patient data, particularly negative study results. When a meta-analysis is redone with the missing data included, the result can be completely different, as occurred with the antidepressant reboxetine.[9] After publication of a Cochrane review into the effectiveness of oseltamivir in 2009, the reviewers received access to almost 30 000 pages of previously unavailable data, which shook their faith in published reports and changed their approach to systematic reviews.[10] As a result, the British Medical Journal (BMJ) has called for urgent action to restore the integrity of the medical evidence base.[9] And of course, methodological flaws continue to contaminate the literature and are neither rectified nor eliminated by meta-analysis.[8]

Another challenge is that conflicts of interest are common in medical research, leading many to question the validity and integrity of results from industry-sponsored trials. The non-publication of negative results, and the biased or slanted interpretation of results, significantly affect the perception of an intervention's impact.[11] Finally, there are prominent examples, some arising from litigation with documents made publicly available via subpoena, of what appears to be fraudulent and openly misleading reporting of data from industry-sponsored studies.[11, 12] Thus, Fiona Godlee, editor-in-chief of the BMJ, has recently called for all clinical trial data to be routinely available for independent scrutiny once a regulatory decision has been made.[10, 13]

All these issues reflect the prevailing paradigm of our implicit belief in expertise, but experts are often wrong.[14] In his book ‘WRONG: Why Experts* Keep Failing Us – And How to Know When Not to Trust Them’, David H Freedman highlights some of the issues above and outlines the characteristics typical of less trustworthy advice: it is simplistic, it is supported by a single study, it is ground-breaking, and it is pushed by people or organisations that stand to benefit.[14] The characteristics of expert advice we should ignore include: it is mildly resonant, it appears in a prestigious journal, other experts embrace it, and the experts backing it boast impressive credentials.[14] Perhaps more importantly, Freedman also lists the characteristics of more trustworthy expert advice: it is a negative finding, it is heavy on qualifying statements, it is candid about refutational evidence, it provides some context for the research, and it provides perspective, with candid, blunt comments.[14]

We can be sure that much existing so-called ‘expert’ practice, currently considered ‘safe and effective’ or the ‘standard of care’, will turn out to be harmful or ineffective, much like bloodletting. Such an epiphany can produce disequilibrium, as it challenges our long-held beliefs. Pride and subculture often prevent people from acknowledging that those beliefs have been shaken. However, we should not fear the disequilibrium that comes from truth,[2] but embrace it as the path to better patient care.

References

1. Prasad V, Gall V, Cifu A. The frequency of medical reversal. Arch. Intern. Med. 2011; 171: 1675–1676.
2. Newman DH. Hippocrates' Shadow. Secrets from the House of Medicine. New York: Scribner, 2008.
3. Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307: 37–38.
4. Ioannidis JP. An epidemic of false claims. Competition and conflicts of interest distort too many medical findings. Sci. Am. 2011; 304 (6): 16.
5. Scott IA, Glasziou PP. Improving effectiveness of clinical medicine: the need for better translation of science into practice. Med. J. Aust. 2012; 197: 374–378.
6. Lehrer J. The truth wears off. Is there something wrong with the scientific method? The New Yorker, December 13, 2010.
7. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005; 2: e124.
8. Marini JJ. Meta-analysis: convenient assumptions and inconvenient truth. Crit. Care Med. 2008; 36: 328–329.
9. Godlee F, Loder E. Missing clinical trial data: setting the record straight. BMJ 2010; 341: c5641.
10. Doshi P, Jones M, Jefferson T. Rethinking credible evidence synthesis. BMJ 2012; 344: d7898.
11. The NNT Group. Industry sponsored data. In: Newman DH, ed. The Number Needed to Treat (NNT). 2010. [Cited November 2012.] Available from URL: http://www.thennt.com/our-methodology/
12. Outterson K. Punishing health care fraud – is the GSK settlement sufficient? N. Engl. J. Med. 2012; 367: 1082–1085.
13. Godlee F. Clinical trial data for all drugs in current use. BMJ 2012; 345: e7304.
14. Freedman DH. WRONG: Why Experts* Keep Failing Us – And How to Know When Not to Trust Them. New York: Little, Brown and Company, 2010.