The reliability of ultrasonographic examination in rheumatology is a matter of ongoing debate. In an article recently published in Arthritis Care & Research, Cheung et al systematically reviewed the reliability of B-mode and power Doppler (PD) ultrasonography (US) for detecting synovitis in rheumatoid arthritis (RA) across 35 studies comprising 1,415 patients, and reported high interobserver and intraobserver reading reliability, especially for PD (1). However, the “reliability of the measures of reliability” is not beyond question.

Among other measures of reliability, the intraclass correlation coefficient (ICC) has been used to assess reliability. Two challenges arise in the use and interpretation of the ICC. First, the ICC is highly dependent on the heterogeneity of the study sample and is consequently generalizable only to samples with similar variation (2). The ICC is essentially a signal-to-noise ratio. This may be difficult to grasp conceptually, but the following equation clarifies it:

  ICC = Variance(patients) / [Variance(patients) + Variance(observers) + Variance(error)]

This equation shows that the heterogeneity of the patients under investigation largely determines the value of the ICC. When the variance between patients, Variance(patients), is low, the ICC is likely to be low as well, and vice versa. This also applies to rheumatologic US: when only a few of the joints under investigation show signs of synovitis, meaning a low variance between patients, the ICC will probably be low too, largely independent of the level of variance between observers. The value of the ICC therefore does not reliably express reliability, because it reflects not only the variance between observers and the variance due to error, but also, and substantially, the variance between patients.
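The point can be made concrete with a small simulation. The following is a minimal sketch in Python; the sample sizes, standard deviations, and function name are our own illustrative choices, not values taken from any of the reviewed studies. Two observers score the same patients with identical measurement error, and only the heterogeneity of the patients changes:

```python
import numpy as np

def icc_agreement(x):
    # Two-way ICC for absolute agreement, single measures -- ICC(A,1) in
    # McGraw & Wong (4) -- computed from ANOVA mean squares.
    # x: (n_patients, k_observers) array of scores.
    n, k = x.shape
    grand = x.mean()
    ms_pat = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_obs = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_pat - ms_err) / (ms_pat + (k - 1) * ms_err + k / n * (ms_obs - ms_err))

rng = np.random.default_rng(0)
for patient_sd in (2.0, 20.0):  # low vs. high between-patient heterogeneity
    true_severity = rng.normal(50, patient_sd, size=(200, 1))
    # Both observers rate with the same error (SD 5) in both scenarios.
    scores = true_severity + rng.normal(0, 5, size=(200, 2))
    print(f"between-patient SD {patient_sd:>4}: ICC = {icc_agreement(scores):.2f}")
```

With identical observer precision in both scenarios, the ICC comes out low (roughly 0.15) in the homogeneous sample and high (roughly 0.94) in the heterogeneous one.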

The second issue regarding the ICC is that its value depends on which ICC equation is used: for different study designs, several different ICC equations exist (3, 4). The most important distinction is between the ICC for agreement and the ICC for consistency, which are calculated with different equations (2, 4). Observers may fully agree on ranking patients into low, intermediate, or high levels of pathology according to the assessed scores, yielding a high ICC for consistency, without reaching agreement on the raw values of the scores, which is what the ICC for agreement measures. This is illustrated in Table 1, where 2 observers score the disease activity of 3 patients on a scale of 0–100 (a calculation sketch follows the table). Both observers rank the 3 patients in the same order, and their scores correlate almost perfectly, so the ICC for consistency is far higher than the ICC for agreement; however, because the raw scores differ by roughly 10-fold, the ICC for agreement signals “poor reliability.”

Table 1. The disease activity of 3 patients rated by 2 observers (scale 0–100)

Patient   Observer A   Observer B
1              3           59
2              6           75
3              9           93
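To make the contrast explicit, the following minimal sketch applies the single-measures ICC definitions of Shrout and Fleiss (3) and McGraw and Wong (4) to the Table 1 scores; the variable names are our own:

```python
import numpy as np

# Table 1: rows = patients 1-3, columns = observers A and B.
x = np.array([[3.0, 59.0],
              [6.0, 75.0],
              [9.0, 93.0]])
n, k = x.shape

# Two-way ANOVA mean squares.
grand = x.mean()
ms_pat = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_obs = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))

# ICC for consistency, ICC(C,1), ignores systematic observer differences;
# ICC for agreement, ICC(A,1), penalizes them.
icc_c = (ms_pat - ms_err) / (ms_pat + (k - 1) * ms_err)
icc_a = (ms_pat - ms_err) / (ms_pat + (k - 1) * ms_err + k / n * (ms_obs - ms_err))

print(f"Pearson r = {np.corrcoef(x[:, 0], x[:, 1])[0, 1]:.3f}")  # ~0.999
print(f"ICC for consistency = {icc_c:.2f}")  # ~0.34
print(f"ICC for agreement   = {icc_a:.2f}")  # ~0.02
```

The ICC for agreement collapses to about 0.02 because Observer B scores roughly 10-fold higher. Note that the ICC for consistency, while far higher, also stays well below the near-perfect correlation: consistency is insensitive to a constant offset between observers (if, hypothetically, Observer B had scored exactly 50 points above Observer A throughout, the ICC for consistency would equal 1.0), but not to a 10-fold difference in scale. Which equation is used thus changes the conclusion drastically.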

In studies using the ICC, the extent of heterogeneity within the study population should be analyzed and described, as heterogeneity clearly influences the ICC. An example of how this can be done is given in an Outcome Measures in Rheumatology Clinical Trials article on magnetic resonance imaging (5). Furthermore, authors using an ICC should state which equation was used to calculate it and the rationale for that choice, so that readers can better appreciate the reported reliability. Of the articles reviewed by Cheung et al, only some state which ICC equation was used (6), and none reports a measure of heterogeneity, even though heterogeneity is a determinant of the ICC. These issues strike at the root of the robustness of the review by Cheung and colleagues, and should, in our opinion, have been acknowledged and discussed by Cheung et al in their article.

  • 1
    Cheung P, Dougados M, Gossec L. Reliability of ultrasonography to detect synovitis in rheumatoid arthritis: a systematic literature review of 35 studies (1,415 patients). Arthritis Care Res (Hoboken) 2010; 62: 323–34.
  • 2
    De Vet HC, Terwee CB, Knol DL, Bouter LM. When to use agreement versus reliability measures. J Clin Epidemiol 2006; 59: 1033–9.
  • 3
    Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979; 86: 420–8.
  • 4
    McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods 1996; 1: 30–46.
  • 5
    Lukas C, Braun J, van der Heijde D, Hermann KG, Rudwaleit M, Ostergaard M, et al. Scoring inflammatory activity of the spine by magnetic resonance imaging in ankylosing spondylitis: a multireader experiment. J Rheumatol 2007; 34: 862–70.
  • 6
    Naredo E, Collado P, Cruz A, Palop MJ, Cabero F, Richi P, et al. Longitudinal power Doppler ultrasonography assessment of joint inflammatory activity in early rheumatoid arthritis: predictive value in disease activity and radiologic progression. Arthritis Rheum 2007; 57: 116–24.

D. F. ten Cate BA, MD*, J. J. Luime PhD*, J. M. W. Hazes MD, PhD*, J. W. G. Jacobs MD, PhD†, R. Landewé MD, PhD‡, * Erasmus Medical Center, Rotterdam, The Netherlands, † University Medical Center Utrecht, Utrecht, The Netherlands, ‡ Maastricht University Medical Center, Maastricht, The Netherlands.