Letters to the Editor
Does the intraclass correlation coefficient always reliably express reliability? Comment on the article by Cheung et al
Version of Record online: 2 SEP 2010
Copyright © 2010 by the American College of Rheumatology
Arthritis Care & Research
Volume 62, Issue 9, pages 1357–1358, September 2010
How to Cite
ten Cate, D. F., Luime, J. J., Hazes, J. M. W., Jacobs, J. W. G. and Landewé, R. (2010), Does the intraclass correlation coefficient always reliably express reliability? Comment on the article by Cheung et al. Arthritis Care Res, 62: 1357–1358. doi: 10.1002/acr.20255
- Issue online: 2 SEP 2010
- Accepted manuscript online: 12 MAY 2010 12:00AM EST
The reliability of ultrasonographic examination in rheumatology is a matter of ongoing debate. In an article published recently in Arthritis Care & Research, Cheung et al systematically reviewed the reliability of B-mode and power Doppler (PD) ultrasonography (US) for detecting synovitis in rheumatoid arthritis (RA) across 35 studies comprising 1,415 patients, and reported high interobserver and intraobserver reading reliability, especially for PD (1). However, the “reliability of the measures of reliability” is not beyond question.
Among other measures of reliability, the intraclass correlation coefficient (ICC) has been used to assess reliability. Two challenges exist in the use and interpretation of the ICC. First, the ICC is highly dependent on the heterogeneity of the study sample and, as a consequence, is generalizable only to samples with similar variation (2). The ICC is essentially a signal-to-noise ratio. This may be difficult to comprehend conceptually, but it can be clarified by the following equation:

ICC = Variance(patients) / [Variance(patients) + Variance(observers) + Variance(error)]
This equation states that the heterogeneity of the patients under investigation determines, to a large extent, the value of the ICC. When the variance between patients, Variance(patients), is low, the ICC is likely to be low as well, and vice versa. This also applies to rheumatologic US: when only a few of the joints under investigation show signs of synovitis, which means a low variance between patients, the ICC will probably also be low, largely independent of the level of variance between observers. The ICC therefore does not necessarily express reliability reliably, nor does it reveal the variance between observers or the variance due to error.
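The dependence on sample heterogeneity can be illustrated numerically. Below is a minimal sketch in Python (ours, not part of the letter) that computes the single-rater agreement ICC, ICC(2,1), from the standard two-way ANOVA mean squares. The two data sets are hypothetical, constructed so that observer B deviates from observer A by the same ±4 points in both; only the spread between patients differs:

```python
# Illustration (hypothetical data): the same observer disagreement yields very
# different ICC values depending on how heterogeneous the patient sample is.

def icc_agreement(scores):
    """Single-rater agreement ICC, i.e. ICC(2,1).

    scores: list of rows, one row per patient, one column per observer."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_r = ss_rows / (n - 1)                                 # between patients
    ms_c = ss_cols / (k - 1)                                 # between observers
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # error
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Observer B differs from observer A by exactly +/-4 points in both samples.
heterogeneous = [[10, 14], [30, 26], [50, 54], [70, 66], [90, 94]]
homogeneous   = [[48, 52], [49, 45], [50, 54], [51, 47], [52, 56]]

print(round(icc_agreement(heterogeneous), 2))  # 0.99
print(round(icc_agreement(homogeneous), 2))    # 0.24
```

With identical observer error, the ICC drops from ∼0.99 to ∼0.24 purely because the between-patient variance shrinks.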
The second issue regarding the ICC is that its value depends on which ICC equation is used; several different ICC equations exist for different study designs (3, 4). The most important distinction is between the ICC for agreement and the ICC for consistency, which use different equations (2, 4). Observers may fully agree on ranking patients into low, intermediate, or high levels of pathology according to the assessed scores, resulting in a high ICC for consistency among observers. However, this does not necessarily mean that the observers also agree on the raw values of the scores given, which is the basis of the ICC for agreement. This is illustrated in Table 1, in which two observers are asked to score the disease activity of 3 patients on a scale of 0–100. This situation would yield “almost perfect reliability” when calculating the ICC for consistency, since both observers rank the 3 patients in the same order by disease activity score. However, when calculating the ICC for agreement, the result would be “poor reliability,” since the scores of the two observers clearly differ, by ∼10-fold.
Table 1. Disease activity scores (scale 0–100) of 3 patients assigned by two observers

| Patient | Observer A | Observer B |
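The contrast between the two ICC variants can be made concrete with a small computation. The sketch below is ours, not the letter's: the scores are hypothetical and use a constant 50-point offset between observers (rather than the ∼10-fold difference of Table 1) so that the consistency ICC comes out exactly 1 while the agreement ICC is very low:

```python
# Hypothetical illustration: high ICC for consistency, low ICC for agreement.

def mean_squares(scores):
    """Two-way ANOVA mean squares for a patients-by-observers score table."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    ss_rows = k * sum((sum(row) / k - grand) ** 2 for row in scores)
    ss_cols = n * sum((sum(row[j] for row in scores) / n - grand) ** 2
                      for j in range(k))
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return n, k, ms_r, ms_c, ms_e

def icc_consistency(scores):     # single-rater ICC(C,1): ignores observer offsets
    n, k, ms_r, ms_c, ms_e = mean_squares(scores)
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)

def icc_agreement(scores):       # single-rater ICC(A,1): penalizes observer offsets
    n, k, ms_r, ms_c, ms_e = mean_squares(scores)
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Observer B scores every patient exactly 50 points higher than observer A,
# so the ranking (and even the between-patient differences) are identical.
scores = [[40, 90], [50, 100], [60, 110]]
print(round(icc_consistency(scores), 2))  # 1.0  ("almost perfect")
print(round(icc_agreement(scores), 2))    # 0.07 ("poor")
```

The same score table thus supports two very different conclusions depending on which equation is reported, which is why the choice of formula must be stated.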
In studies using the ICC, the extent of heterogeneity within the study population should be analyzed and described, as heterogeneity clearly influences the ICC. An example of how this can be done is given in an Outcome Measures in Rheumatology Clinical Trials article on magnetic resonance imaging (5). Furthermore, authors using an ICC should report which equation was used to calculate it and the rationale for that choice; readers can then better appraise the reported reliability. Of the articles reviewed by Cheung et al, only some state which ICC formula was used (6), and none reports a measure of heterogeneity as a determinant of the ICC. These issues strike at the root of the robustness of the review by Cheung and colleagues and should, in our opinion, have been acknowledged and discussed in their article.
- 1. Reliability of ultrasonography to detect synovitis in rheumatoid arthritis: a systematic literature review of 35 studies (1,415 patients). Arthritis Care Res (Hoboken) 2010;62:323–34.
- 2. When to use agreement versus reliability measures. J Clin Epidemiol 2006;59:1033–9.
- 3. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979;86:420–8.
- 4. McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods 1996;1:30–46.
- 5. Scoring inflammatory activity of the spine by magnetic resonance imaging in ankylosing spondylitis: a multireader experiment. J Rheumatol 2007;34:862–70.
- 6. Longitudinal power Doppler ultrasonographic assessment of joint inflammatory activity in early rheumatoid arthritis: predictive value in disease activity and radiologic progression. Arthritis Rheum 2007;57:116–24.
D. F. ten Cate BA, MD*, J. J. Luime PhD*, J. M. W. Hazes MD, PhD*, J. W. G. Jacobs MD, PhD, R. Landewé MD, PhD, * Erasmus Medical Center, Rotterdam, The Netherlands, University Medical Center Utrecht, Utrecht, The Netherlands, Maastricht University Medical Center, Maastricht, The Netherlands.