Arcsine test for publication bias in meta-analyses with binary outcomes

Authors

  • Gerta Rücker,

    Corresponding author
    1. Institute of Medical Biometry and Medical Informatics, University Medical Centre, Freiburg, Germany
    2. German Cochrane Centre, University Medical Centre, Freiburg, Germany
    Correspondence: Institute of Medical Biometry and Medical Informatics, University of Freiburg, Stefan-Meier-Straße 26, D-79104 Freiburg, Germany
  • Guido Schwarzer,

    1. Institute of Medical Biometry and Medical Informatics, University Medical Centre, Freiburg, Germany
    2. German Cochrane Centre, University Medical Centre, Freiburg, Germany
  • James Carpenter

    1. Institute of Medical Biometry and Medical Informatics, University Medical Centre, Freiburg, Germany
    2. Freiburg Centre for Data Analysis and Modeling, University of Freiburg, Freiburg, Germany
    3. Medical Statistics Unit, London School of Hygiene & Tropical Medicine, London, U.K.

Abstract

In meta-analyses, it sometimes happens that smaller trials show different, often larger, treatment effects than larger trials. One possible reason for such ‘small study effects’ is publication bias: the chance of a smaller study being published is increased if it shows a stronger effect. Assuming no other small study effects, under the null hypothesis of no publication bias there should be no association between effect size and effect precision (e.g. inverse standard error) among the trials in a meta-analysis.

A number of tests for small study effects/publication bias have been developed. These use either a non-parametric test or a regression test for association between effect size and precision. However, when the outcome is binary, the effect is summarized by the log-risk ratio or log-odds ratio (log OR). Unfortunately, these measures are not independent of their estimated standard error. Consequently, established tests reject the null hypothesis too frequently.
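To make the kind of regression test referred to here concrete, the following is a minimal sketch (not the authors' implementation) of an Egger-type asymmetry test on the log odds ratio scale. The trial counts are invented for illustration, and the statsmodels package is assumed to be available.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical event counts and sample sizes (treatment vs. control arms)
    e_t = np.array([12.0, 5.0, 30.0, 2.0, 8.0])
    n_t = np.array([50.0, 40.0, 200.0, 25.0, 60.0])
    e_c = np.array([20.0, 9.0, 45.0, 6.0, 14.0])
    n_c = np.array([50.0, 40.0, 200.0, 25.0, 60.0])

    # Log odds ratio and its estimated standard error per trial;
    # note that the standard error depends on the observed event counts
    log_or = np.log(e_t * (n_c - e_c) / (e_c * (n_t - e_t)))
    se = np.sqrt(1/e_t + 1/(n_t - e_t) + 1/e_c + 1/(n_c - e_c))

    # Egger-type test: regress the standardized effect on precision (1/se);
    # the intercept estimates funnel-plot asymmetry (small study effects)
    fit = sm.OLS(log_or / se, sm.add_constant(1.0 / se)).fit()
    print("asymmetry intercept:", fit.params[0], "p-value:", fit.pvalues[0])

Because the log OR and its estimated standard error are both functions of the same event counts, they are correlated even in the absence of publication bias, which is the source of the inflated rejection rates mentioned above.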

We propose new tests based on the arcsine transformation, which stabilizes the variance of binomial random variables. We report results of a simulation study under the Copas model (on the log OR scale) for publication bias, which evaluates tests so far proposed in the literature. This shows that: (i) the size of one of the new tests is comparable to those of the best existing tests, including those recently published; and (ii) among such tests it has slightly greater power, especially when the effect size is small and heterogeneity is present. Arcsine tests have the additional advantages that they can include trials with zero events in both arms and that they can easily be performed with existing software for regression tests. Copyright © 2007 John Wiley & Sons, Ltd.
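As an illustration of the idea (a sketch under stated assumptions, not necessarily the exact form of the proposed tests), the same Egger-type regression can be applied on the arcsine-difference scale. The key point is that the variance of the arcsine difference, 1/(4·n_t) + 1/(4·n_c), depends only on the sample sizes, not on the observed event counts, and arcsin(0) = 0 is well defined, so trials with zero events in both arms pose no problem. The counts below are the same hypothetical data as in the earlier sketch, plus one double-zero trial.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical counts, including one trial with zero events in both arms
    e_t = np.array([12.0, 5.0, 30.0, 2.0, 8.0, 0.0])
    n_t = np.array([50.0, 40.0, 200.0, 25.0, 60.0, 15.0])
    e_c = np.array([20.0, 9.0, 45.0, 6.0, 14.0, 0.0])
    n_c = np.array([50.0, 40.0, 200.0, 25.0, 60.0, 15.0])

    # Arcsine difference; its variance depends only on the sample sizes
    asd = np.arcsin(np.sqrt(e_t / n_t)) - np.arcsin(np.sqrt(e_c / n_c))
    se_asd = np.sqrt(1/(4*n_t) + 1/(4*n_c))

    # The same Egger-type regression, now on the arcsine scale
    fit = sm.OLS(asd / se_asd, sm.add_constant(1.0 / se_asd)).fit()
    print("asymmetry intercept:", fit.params[0], "p-value:", fit.pvalues[0])

Because the sketch reuses standard weighted-regression machinery, it also illustrates why such a test can be run with existing software for regression tests.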
