To the Editor:

We read with great interest the recent systematic review and meta-analysis by Trallero-Araguás et al on the relationship between the anti-p155 autoantibody and cancer-associated myositis (1). Using the pooled sensitivity and specificity of the 6 selected studies, the authors reported a diagnostic odds ratio (OR) for anti-p155 of 27.26 (95% confidence interval [95% CI] 6.59–112.82), a positive likelihood ratio of 6.79 (95% CI 4.11–11.23), and a negative likelihood ratio of 0.25 (95% CI 0.08–0.76).
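For reference, the standard relationships among these quantities are LR+ = sensitivity/(1 − specificity), LR− = (1 − sensitivity)/specificity, and DOR = LR+/LR−; the reported LR+ of 6.79 and LR− of 0.25 imply a DOR of approximately 27, consistent with the reported 27.26. A minimal sketch of these relationships is shown below, using purely illustrative sensitivity and specificity values rather than the authors' pooled estimates:

```python
def diagnostic_summaries(sens: float, spec: float):
    """Standard relationships among sensitivity, specificity,
    likelihood ratios, and the diagnostic odds ratio (DOR)."""
    lr_pos = sens / (1 - spec)      # positive likelihood ratio
    lr_neg = (1 - sens) / spec      # negative likelihood ratio
    dor = lr_pos / lr_neg           # equals (sens*spec)/((1-sens)*(1-spec))
    return lr_pos, lr_neg, dor

# Illustrative values only (not the pooled estimates from the meta-analysis):
print(diagnostic_summaries(sens=0.78, spec=0.89))
```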

While these findings are impressive, we believe the results should be interpreted with caution. The validity and usefulness of a meta-analysis depend largely on the appropriate inclusion of small, similar studies to improve statistical power. A major limitation of this methodologic approach is that findings can be greatly compromised by publication bias, the phenomenon whereby small studies are more likely to be published when they identify a significant association. Because small studies require a larger effect size to reach statistical significance, this leads to bias in the estimated effect sizes. In this study, the authors constructed a funnel plot and determined that the slope coefficient was not statistically significant (P = 0.15), thereby concluding that no publication bias existed. However, this deserves further exploration.

We evaluated publication bias according to the recommendations of the study by Deeks et al (2) that is referenced by Trallero-Araguás and colleagues. Deeks et al recommend formally evaluating publication bias and sample size effects using a funnel plot of the log OR against the inverse of the square root of the effective sample size, followed by testing for asymmetry with related regression or rank correlation tests. Notably, statistical power is considered inadequate for these tests when fewer than 10 studies are included in the analysis (3), as is the case for the study by Trallero-Araguás and colleagues. Therefore, the P values should be deemphasized.
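A minimal sketch of how such a funnel plot and asymmetry regression might be set up is given below (Python/NumPy; the function names and the 0.5 continuity correction are our own illustrative choices, and a real analysis would also compute a standard error and P value for the slope, for example with a weighted least squares routine such as statsmodels' WLS):

```python
import numpy as np

def deeks_funnel_data(tp, fp, fn, tn):
    """Per-study log diagnostic OR and 1/sqrt(effective sample size),
    the axes of the funnel plot described by Deeks et al.
    tp, fp, fn, tn: arrays of 2x2 cell counts (0.5 added to avoid zero cells)."""
    tp, fp, fn, tn = (np.asarray(x, float) + 0.5 for x in (tp, fp, fn, tn))
    ln_dor = np.log((tp * tn) / (fp * fn))
    n_dis, n_nondis = tp + fn, fp + tn
    ess = 4 * n_dis * n_nondis / (n_dis + n_nondis)   # effective sample size
    return ln_dor, 1 / np.sqrt(ess), ess

def asymmetry_slope(ln_dor, inv_sqrt_ess, ess):
    """Regression of lnDOR on 1/sqrt(ESS), weighted by ESS; a slope far
    from zero suggests funnel-plot asymmetry (small-study effects)."""
    s = np.sqrt(ess)   # weighted least squares: scale both sides by sqrt(weight)
    X = np.column_stack([np.ones_like(inv_sqrt_ess), inv_sqrt_ess])
    beta, *_ = np.linalg.lstsq(X * s[:, None], ln_dor * s, rcond=None)
    return beta[1]
```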

After graphing the funnel plot (Figure 1), we found suggestions of asymmetry. The results of the recommended regression analysis (R2 = 0.24, P = 0.3) suggest that 24% of the observed between-study variation in the diagnostic OR was attributable to between-study differences in sample size (i.e., smaller studies showing greater effect sizes). This does not seem trivial. In addition, we performed a rank correlation test and found Spearman's rho to be 0.77, with a P value approaching significance (P = 0.07). Although the results of these tests for asymmetry were not statistically significant, the likelihood of a Type II error (failing to detect true publication bias) cannot be ignored given the small number of included studies and the magnitude of the associations.
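The rank correlation step can, in principle, be reproduced in a few lines (a sketch assuming per-study lnDOR values and effective sample sizes are available as arrays; correlating with 1/sqrt(ESS) rather than ESS only changes the sign of rho):

```python
import numpy as np
from scipy.stats import spearmanr

def rank_correlation_asymmetry(ln_dor, ess):
    """Spearman rank correlation between effect size (lnDOR) and study size;
    a strong correlation suggests that smaller studies report larger effects."""
    rho, p_value = spearmanr(ln_dor, 1 / np.sqrt(np.asarray(ess, float)))
    return rho, p_value
```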

Figure 1. Trim-and-fill funnel plot. LnDOR = log diagnostic odds ratio.

Based on the asymmetry seen in the funnel plot, we used the “trim and fill” method to further characterize the possible effect of publication bias (3). By imputing values for the hypothesized missing counterparts on the left side of the funnel plot to create a symmetric funnel shape (Figure 1), we evaluated the effect of potential publication bias. Using this method, the pooled diagnostic OR decreased from 17.6 (95% CI 8.0–37.0) to 13.6 (95% CI 5.8–32.1), suggesting that publication bias could have led to an inflated estimate of the test's properties. Because the effect of publication bias on sensitivity and specificity is likely to be even more pronounced, the test characteristics suggested by this meta-analysis should be interpreted with even greater caution (2).
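The logic of the trim-and-fill adjustment can be sketched as follows (Python/NumPy). This is a rough illustration of the L0 estimator and the “fill” step under a fixed-effect model, assuming studies are missing on the left side of the funnel; it is not a full implementation, and in practice an established routine such as trimfill in the R metafor package would be used:

```python
import numpy as np

def pooled_fixed_effect(y, v):
    """Inverse-variance (fixed-effect) pooled estimate and its variance."""
    w = 1.0 / np.asarray(v, float)
    return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)

def trim_and_fill_left(y, v, max_iter=25):
    """Sketch of trim-and-fill with studies assumed missing on the LEFT
    (small-effect) side. y: per-study lnDORs; v: their variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    n, k0 = len(y), 0
    for _ in range(max_iter):
        # Trim the k0 most extreme right-side studies and re-pool.
        keep = np.argsort(y)[: n - k0]
        mu, _ = pooled_fixed_effect(y[keep], v[keep])
        # L0 estimator of the number of suppressed studies, based on the
        # ranks of |y - mu| for studies lying to the right of mu.
        centered = y - mu
        ranks = np.argsort(np.argsort(np.abs(centered))) + 1
        t_n = ranks[centered > 0].sum()
        k0_new = int(round((4.0 * t_n - n * (n + 1)) / (2.0 * n - 1)))
        k0_new = min(max(0, k0_new), n - 1)
        if k0_new == k0:
            break
        k0 = k0_new
    # Fill: mirror the k0 most extreme right-side studies about mu, re-pool.
    extreme = np.argsort(y)[n - k0:] if k0 > 0 else np.array([], dtype=int)
    y_all = np.concatenate([y, 2 * mu - y[extreme]])
    v_all = np.concatenate([v, v[extreme]])
    mu_adj, _ = pooled_fixed_effect(y_all, v_all)
    return k0, np.exp(mu_adj)   # adjusted pooled DOR on the original scale
```

The adjusted estimate reported above (13.6) reflects this same mirror-imputation logic applied to the studies included in the meta-analysis.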

In conclusion, our analysis highlights the importance of fully assessing for publication bias in meta-analyses. Small studies with large effect sizes are more likely to be published, and this can lead to inflation of pooled estimates. Tests for asymmetry are prone to Type II error when only a few studies are included in a meta-analysis; therefore, presenting P values alone is often inadequate. We recognize that the anti-p155 antibody may indeed have diagnostic utility in predicting cancer-associated myositis. However, the evidence of publication bias may compromise the usefulness and reliability of the test characteristics presented as part of this meta-analysis.

1. Trallero-Araguas E, Rodrigo-Pendas JA, Selva-O'Callaghan A, Martinez-Gomez X, Bosch X, Labrador-Horrillo M, et al. Usefulness of anti-p155 autoantibody for diagnosing cancer-associated dermatomyositis. Arthritis Rheum 2012;64:523–32.
2. Deeks JJ, Macaskill P, Irwig L. The performance of tests of publication bias and other sample size effects in systematic reviews of diagnostic test accuracy was assessed. J Clin Epidemiol 2005;58:882–93.
3. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical assessment of effect of publication bias on meta-analyses. BMJ 2000;320:1574–7.

Rennie L. Rhee, MD
Joshua Baker, MD, MSCE
University of Pennsylvania, Philadelphia, PA