Anti-p155 autoantibodies as a diagnostic marker for cancer-associated dermatomyositis: Comment on the article by Trallero-Araguás et al
Version of Record online: 27 AUG 2012
Copyright © 2012 by the American College of Rheumatology
Arthritis & Rheumatism
Volume 64, Issue 9, page 3059, September 2012
How to Cite
Rhee, R. L. and Baker, J. (2012), Anti-p155 autoantibodies as a diagnostic marker for cancer-associated dermatomyositis: Comment on the article by Trallero-Araguás et al. Arthritis & Rheumatism, 64: 3059. doi: 10.1002/art.34579
- Issue online: 27 AUG 2012
- Accepted manuscript online: 21 JUN 2012 09:26AM EST
To the Editor:
We read with great interest the recent systematic review and meta-analysis by Trallero-Araguás et al on the relationship between the anti-p155 autoantibody and cancer-associated myositis (1). Using the pooled sensitivity and specificity from the 6 included studies, the authors concluded that the diagnostic odds ratio (OR) for anti-p155 was 27.26 (95% confidence interval [95% CI] 6.59–112.82), the positive likelihood ratio was 6.79 (95% CI 4.11–11.23), and the negative likelihood ratio was 0.25 (95% CI 0.08–0.76).
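As a quick illustration of how these summary measures interrelate, the diagnostic OR is the ratio of the two likelihood ratios, so the reported values can be cross-checked in a few lines (a sketch only; the small discrepancy from the reported 27.26 reflects rounding and the bivariate pooling model used by the authors):

```python
# Relationships among diagnostic accuracy summaries:
#   LR+ = sensitivity / (1 - specificity)
#   LR- = (1 - sensitivity) / specificity
#   DOR = LR+ / LR-
# Using the pooled likelihood ratios reported in the meta-analysis:
lr_pos = 6.79
lr_neg = 0.25
dor = lr_pos / lr_neg
print(f"Diagnostic OR implied by the pooled LRs: {dor:.1f}")  # prints 27.2
```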
While these findings are impressive, we believe the results should be interpreted with caution. The validity and usefulness of a meta-analysis depend largely on the appropriate inclusion of small, similar studies to improve statistical power. A major limitation of this methodologic approach is that findings can be greatly compromised by publication bias, i.e., the tendency for small studies to be published only when a significant association is identified. Because small studies require a larger effect size to reach statistical significance, their selective publication biases the pooled effect estimate upward. In this study, the authors constructed a funnel plot and determined that the slope coefficient was not statistically significant (P = 0.15), thereby concluding that no publication bias existed. However, this deserves further exploration.
We evaluated for publication bias according to the recommendations of Deeks et al (2), whose study is referenced by Trallero-Araguás and colleagues. Deeks et al recommend formally evaluating for publication bias and sample size effects using a funnel plot of the log OR against the inverse of the square root of the effective sample size, followed by testing for asymmetry with related regression or rank correlation tests. Notably, statistical power is considered inadequate for these tests when fewer than 10 studies are included in the analysis (3), as is the case for the study by Trallero-Araguás and colleagues. Therefore, the P values should be deemphasized.
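To make the recommended procedure concrete, a Deeks-style asymmetry check can be sketched as follows. The 2×2 counts below are purely hypothetical (not the data from the meta-analysis), and for simplicity the regression is unweighted, whereas Deeks et al weight by effective sample size:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-study 2x2 counts (TP, FP, FN, TN) -- illustrative only.
studies = [
    (18, 4, 5, 60), (12, 3, 6, 45), (9, 2, 4, 30),
    (15, 5, 7, 80), (7, 1, 3, 25), (20, 6, 8, 100),
]

log_dor, inv_sqrt_ess = [], []
for tp, fp, fn, tn in studies:
    # 0.5 continuity correction guards against zero cells
    dor = ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))
    n_dis, n_nondis = tp + fn, fp + tn                # diseased / non-diseased totals
    ess = 4 * n_dis * n_nondis / (n_dis + n_nondis)   # effective sample size
    log_dor.append(np.log(dor))
    inv_sqrt_ess.append(1 / np.sqrt(ess))

# Regress log DOR on 1/sqrt(ESS); a slope significantly different from zero
# suggests small-study effects and possible publication bias.
res = linregress(inv_sqrt_ess, log_dor)
print(f"slope = {res.slope:.2f}, P = {res.pvalue:.2f}")
```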
After graphing the funnel plot (Figure 1), we found suggestions of asymmetry. The results from the recommended regression analysis (R2 = 0.24, P = 0.3) suggest that 24% of the observed between-study differences in the diagnostic OR were due to between-study differences in sample size (i.e., smaller studies showed greater effect sizes). This does not seem trivial. In addition, we performed a rank correlation test and found Spearman's rho to be 0.77, with a P value approaching significance (P = 0.07). Although the results of these tests for asymmetry were not statistically significant, the likelihood of a Type II error cannot be ignored given the small number of included studies and the magnitude of the associations.
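The rank correlation check follows the same logic as the regression test. With hypothetical per-study values (not the data behind our reported rho of 0.77), it reduces to a single call:

```python
from scipy.stats import spearmanr

# Hypothetical per-study log diagnostic ORs and effective sample sizes --
# illustrative only.
log_dor = [3.4, 2.8, 3.1, 2.5, 2.2, 1.9]
ess     = [30,  45,  60,  85, 110, 150]

# Rank-correlating effect size with inverse sample size is the Begg-style
# analogue of the funnel-plot asymmetry test: a strong monotone association
# suggests that smaller studies report larger effects.
rho, p = spearmanr([1 / e for e in ess], log_dor)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
```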
Based on the asymmetry seen in the funnel plot, we used the “trim and fill” method to further characterize the possible effect of publication bias (3). By imputing values for the hypothesized missing counterparts on the left side of the funnel plot to create a symmetric funnel shape, this method estimates what the pooled result would have been in the absence of publication bias (Figure 1). Our pooled diagnostic OR decreased from 17.6 (95% CI 8.0–37.0) to 13.6 (95% CI 5.8–32.1) using this method, suggesting that publication bias could have led to an inflated estimate of the test's properties. Since the effect of publication bias on sensitivity and specificity is likely to be even more pronounced, the test characteristics suggested by this meta-analysis should be interpreted with even greater caution (2).
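For readers unfamiliar with trim and fill, the imputation step can be sketched in a deliberately simplified form. The full Duval and Tweedie procedure estimates the number of missing studies (k0) iteratively from the trimmed data, whereas here k0 and all study values are assumed for illustration:

```python
import numpy as np

# Hypothetical per-study log-ORs and variances -- illustrative only.
log_or = np.array([3.5, 3.2, 2.9, 2.6, 2.3, 2.0])
var    = np.array([0.9, 0.7, 0.5, 0.4, 0.3, 0.2])

def pooled(lor, v):
    """Fixed-effect inverse-variance pooled log-OR."""
    w = 1 / v
    return (w * lor).sum() / w.sum()

mu = pooled(log_or, var)

# Trim and fill, crudely sketched: suppose the asymmetry diagnostics imply
# k0 = 2 missing studies on the left of the funnel; impute them as mirror
# images (about the pooled mean) of the k0 most extreme right-side studies,
# then re-pool including the imputed studies.
k0 = 2
extreme = np.argsort(log_or - mu)[-k0:]            # k0 most extreme right-side studies
filled_lor = np.concatenate([log_or, 2 * mu - log_or[extreme]])
filled_var = np.concatenate([var, var[extreme]])
mu_adj = pooled(filled_lor, filled_var)

print(f"pooled OR before fill: {np.exp(mu):.1f}, after fill: {np.exp(mu_adj):.1f}")
```

As in our analysis, the bias-adjusted pooled OR is smaller than the original: the imputed left-side studies pull the estimate toward the null.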
In conclusion, our analysis highlights the importance of fully assessing for publication bias in meta-analyses. Small studies with large effect sizes are more likely to be published, and this can inflate pooled estimates. Tests for asymmetry are prone to Type II error when only a few studies are included in the meta-analysis; therefore, presentation of P values alone is often inadequate. We recognize that the anti-p155 antibody may indeed have diagnostic utility in predicting cancer-associated myositis. However, the evidence of publication bias may compromise the usefulness and reliability of the test characteristics presented as part of this meta-analysis.
- 1. Trallero-Araguás E, et al. Usefulness of anti-p155 autoantibody for diagnosing cancer-associated dermatomyositis. Arthritis Rheum 2012;64:523–32.
- 2. Deeks JJ, et al. The performance of tests of publication bias and other sample size effects in systematic reviews of diagnostic test accuracy was assessed. J Clin Epidemiol 2005;58:882–93.
- 3. Empirical assessment of effect of publication bias on meta-analyses. BMJ 2000;320:1574–7.
Rennie L. Rhee, MD*
Joshua Baker, MD, MSCE*
*University of Pennsylvania, Philadelphia, PA