I liked the study on pushing techniques in labor by Yildirim and Beji when I first saw the manuscript. When I told Diony (I have reached the age where it is permissible to call even the editor in chief by her first name) how much I liked it, she seemed a bit surprised. She had liked it, too, but wondered at my uncharacteristically enthusiastic response.

I wondered a bit myself. Why did it stand out? It was a fairly routine type of study, an undistinguished, straightforward randomized controlled trial, mounted to compare a woman’s spontaneous pushing in the expulsion stage of labor with the Valsalva pushing that was accepted practice in their community. Nothing special about it. My positive reaction to the paper must just reflect my biases. Yes. I am biased.

I don’t believe that anyone is, or can be, free from bias. Despite the myth of scientific objectivity, clinical research is always biased. As clinical epidemiologists, we are aware of, and can sometimes avert, a few of the major biases that appear during the conduct of a randomized controlled trial. We can overcome allocation bias by meticulous randomization, and diminish ascertainment bias by careful masking (although this is not always possible). We are usually less aware of the biases that appear before the trial is even started, when we are trying to decide what conditions or illnesses to study, what interventions to test, whom to test them on, and who or what should comprise the control group. We tend to forget about the equally important biases that come only after the trial is completed, such as whether, where, when, and how to publish (or suppress) the findings. Perhaps the most important bias of all resides in the (potential) reader, who determines how (or whether) the results will be read and interpreted (1).

So I am not ashamed of being biased. Indeed, I am rather proud of my biases. There are advantages in having them out in the open. They come at least in part from my varied background (family doctor, obstetrician, [honorary] midwife, clinical epidemiologist, researcher, writer, and editor). With these accumulated sources of bias, I am now an intolerant, opinionated old coot with strong feelings about what he likes, and doesn’t like. It is from this biased standpoint that I look on this study by Yildirim and Beji as an exemplar of how clinical research can be done, and should be done more often.

What, then, are my relevant biases that influenced my reaction to this study? High on the list would be my bias against industry-funded studies. Although, of course, studies that are funded in whole or in part by commercial interests can sometimes be methodologically sound, and meticulously carried out and reported, I tend to be wary of them. Their results favor the tested intervention far more often than those of independently funded trials do. I liked this study because it was funded by an independent, rather than a commercial, institution.

I am also prejudiced against large-scale, multicentered trials in which important local differences are submerged and lost in the chaos of amorphous data. The current emphasis on these expensive (and sometimes misleading) megatrials takes resources away from the small-scale studies that are so needed to address locally important issues. What we really want to know is “What are the likely effects, good and bad, of this intervention for our patients, in our setting, with our resources, our skills?” We are less interested in the big question “Does this intervention work on average everywhere?” I appreciated this study because the researchers carried it out in their own setting, and they described that setting clearly. The results are important to them, and are also relevant to (and only to) others with similar populations and settings…as most study results should be.

I give short shrift to studies that ignore important outcomes, like what women think about the interventions compared. In this study the researchers asked the women’s opinion, instead of just focusing on physiological variables. Good! This is, sadly, too often forgotten.

I am biased against ho-hum confirmatory studies that tell us only what we already know (or think that we know). In this study Yildirim and Beji questioned accepted practice in their own community. There is something satisfying about a study that challenges locally accepted dogma. They showed audacity and courage, characteristics that should be encouraged.

Many years ago my co-authors and I firmly stated our (unproved) belief that “the only justification for practices that restrict a woman’s autonomy, her freedom of choice, and her access to her baby would be clear evidence that these restrictive practices do more good than harm; and that any interference with the natural process of pregnancy and childbirth should also be shown to do more good than harm. We believe that the onus of proof rests on those who advocate any intervention that interferes with either of those principles” (2,3). This randomized trial tested the locally accepted intervention of forced Valsalva pushing against spontaneous pushing according to the woman’s own rhythm. The burden of proof was on the intrusive intervention to show itself to be clearly better. As is often (but by no means always) the case, it failed to do so.

Finally, I have restricted the references for this commentary to publications I have co-authored, because they express some of my strongly held beliefs. I have stated these beliefs both recently and long ago, and I welcome this opportunity to state them again. They explain why I appreciated the study by Yildirim and Beji; I am pleased that the authors submitted it to Birth, and that the editor chose to publish it. Those readers who share my biases will be pleased as well.

References

1. Jadad AR, Enkin MW. Randomized Controlled Trials: Questions, Answers and Musings. 2nd ed. Oxford: Blackwell Publishing, 2007.
2. Enkin M, Keirse MJNC, Neilson J, Crowther C, Duley L, Hodnett E, Hofmeyr J. A Guide to Effective Care in Pregnancy and Childbirth. 3rd ed. Oxford: Oxford University Press, 2000.
3. Chalmers I, Enkin MW, Keirse MJNC. Effective Care in Pregnancy and Childbirth. Oxford: Oxford University Press, 1989.