Intra- and interobserver agreement when describing adnexal masses using the International Ovarian Tumor Analysis terms and definitions: a study on three-dimensional ultrasound volumes
Article first published online: 4 MAR 2013
Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
Ultrasound in Obstetrics & Gynecology
Volume 41, Issue 3, pages 318–327, March 2013
How to Cite
Sladkevicius, P. and Valentin, L. (2013), Intra- and interobserver agreement when describing adnexal masses using the International Ovarian Tumor Analysis terms and definitions: a study on three-dimensional ultrasound volumes. Ultrasound Obstet Gynecol, 41: 318–327. doi: 10.1002/uog.12289
- Issue published online: 4 MAR 2013
- Accepted manuscript online: 23 AUG 2012 04:06AM EST
- Manuscript Accepted: 1 JUL 2012
Keywords: Doppler ultrasound; ovarian neoplasms; reproducibility of results; 3D imaging
Objectives
To estimate intraobserver repeatability and interobserver agreement in: (1) describing adnexal masses using the International Ovarian Tumor Analysis (IOTA) terms and definitions; (2) the risk of malignancy calculated using IOTA logistic regression model 1 (LR1) and model 2 (LR2); and (3) the diagnosis made on the basis of subjective assessment of ultrasound images.
Methods
One hundred and three adnexal masses were examined by transvaginal gray-scale and power Doppler ultrasound, and a three-dimensional ultrasound volume of each mass was saved. After 12–18 months the volumes were analyzed twice, 1–6 months apart, by each of two independent, experienced sonologists, who used the IOTA terms and definitions to describe the masses. The risk of malignancy was calculated using LR1 and LR2. The sonologists also classified each mass as benign or malignant using subjective assessment.
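LR1 and LR2 are logistic regression models, so each risk estimate is a logistic transform of a weighted sum of ultrasound predictors. A minimal sketch of this kind of calculation follows; the predictor names and coefficients below are invented for illustration and are not the published IOTA coefficients:

```python
import math

# Hypothetical coefficients for illustration only -- NOT the published
# IOTA LR1/LR2 values. A logistic regression model converts a weighted
# sum of ultrasound predictors into a probability of malignancy.
INTERCEPT = -5.0  # assumed value
COEFFS = {
    "age_years": 0.03,               # continuous predictor (assumed)
    "max_solid_diameter_mm": 0.08,   # continuous predictor (assumed)
    "ascites_present": 1.5,          # binary predictor, 0 or 1 (assumed)
}

def risk_of_malignancy(features: dict) -> float:
    """Return P(malignant) from a logistic model: 1 / (1 + exp(-z))."""
    z = INTERCEPT + sum(COEFFS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

risk = risk_of_malignancy(
    {"age_years": 55, "max_solid_diameter_mm": 20, "ascites_present": 1}
)  # about 0.44 with these assumed inputs
```

Because the predictors enter through the exponent z, small measurement differences between observers shift the calculated risk multiplicatively, which is one way observer variability in measurements propagates into variability of the model output.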
Results
Eighty-four masses were benign, eight were borderline and 11 were invasively malignant. There was substantial variability within and between observers in the measurements included in LR1 and LR2, and some variability also in the assessment of the categorical variables included in the models (agreement = 51–100%; kappa = 0.42–1.00). This resulted in substantial variability in the calculated risk of malignancy, the limits of agreement indicating that the calculated risk could vary by a factor of 5–20 within and between observers. The reliability of the calculated risk of malignancy was moderate (LR1) or poor (LR2) when the calculated risk was > 10% (intraclass correlation coefficients, 0.21–0.73). Interobserver agreement when classifying tumors as benign or malignant using the predetermined risk-of-malignancy cut-off of 10% was fair to good (agreement = 85%, kappa = 0.61 for LR1; agreement = 81%, kappa = 0.52 for LR2). Intraobserver agreement (for each of the two observers) and interobserver agreement for subjective assessment were 96%, 96% and 96%, with kappa values of 0.89, 0.87 and 0.88, respectively.
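The agreement statistics quoted above pair raw percentage agreement with Cohen's kappa, which corrects for the agreement expected by chance alone. A minimal sketch of that calculation; the 2 x 2 benign/malignant cross-tabulation below is made up and is not the study data:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-rater agreement table of counts.

    table[i][j] = number of cases rated category i by rater A and
    category j by rater B.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    observed = sum(table[i][i] for i in range(k)) / n  # raw agreement
    expected = sum(                                    # chance agreement
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (observed - expected) / (1.0 - expected)

# Hypothetical table (not the study data): 90% raw agreement,
# but kappa discounts the 50% agreement expected by chance.
kappa = cohens_kappa([[45, 5], [5, 45]])  # -> 0.8
```

This is why a high percentage agreement (e.g. 85%) can coexist with a more modest kappa (e.g. 0.61) when one category is much more common than the other.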
Conclusions
Intra- and interobserver agreement in classifying tumors as benign or malignant using the risk-of-malignancy cut-off of 10% for LR1 and LR2 was fair to good, whilst the reproducibility of subjective assessment was excellent. The reliability of calculated risks > 10% was poor, and such risk estimates cannot be used to discriminate between individuals at different levels of risk. These results cannot be extrapolated to real-time ultrasound examinations.
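The "factor of 5–20" variability in calculated risk reads naturally as Bland-Altman limits of agreement computed on log-transformed risks, whose antilogs are multiplicative factors. A sketch under that assumption, with made-up paired risk estimates (not the study data):

```python
import math
import statistics

def limits_of_agreement_factor(risks_a, risks_b):
    """Bland-Altman 95% limits of agreement on log-transformed risks.

    Returns (lower, upper) multiplicative factors: for ~95% of cases,
    risk_a / risk_b is expected to lie between lower and upper.
    """
    diffs = [math.log(a) - math.log(b) for a, b in zip(risks_a, risks_b)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return math.exp(mean - 1.96 * sd), math.exp(mean + 1.96 * sd)

# Made-up risk estimates from two hypothetical observers:
lower, upper = limits_of_agreement_factor([0.1, 0.2, 0.4], [0.2, 0.1, 0.4])
```

With no systematic bias (mean log-difference of zero), the two limits are reciprocals, i.e. the same multiplicative factor in either direction.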