Real-Time Inter-Rater Reliability of the Council of Emergency Medicine Residency Directors Standardized Direct Observation Assessment Tool
© 2009 by the Society for Academic Emergency Medicine
Academic Emergency Medicine
Special Issue: CORD Educational Advances Supplement
Volume 16, Issue Supplement s2, pages S51–S57, December 2009
How to Cite
LaMantia, J., Kane, B., Yarris, L., Tadros, A., Ward, M. F., Lesser, M., Shayne, P. and The SDOT Study Group II (2009), Real-Time Inter-Rater Reliability of the Council of Emergency Medicine Residency Directors Standardized Direct Observation Assessment Tool. Academic Emergency Medicine, 16: S51–S57. doi: 10.1111/j.1553-2712.2009.00593.x
Funding Sources: None.
Conflicts of Interest: The authors have no financial conflicts of interest to report.
Presented: 2008 National SAEM Annual Meeting, Washington, DC, May 29, 2008; 2008 Southeastern Regional SAEM Meeting, Louisville, KY, March 14–15, 2008; 2008 Western Regional SAEM Meeting, Costa Mesa, CA, March 28–29, 2008; 2008 New York Regional SAEM Conference, New York, NY, April 30, 2008; 2008 New England Regional SAEM Meeting, Shrewsbury, MA, April 30, 2008.
We would like to thank the attending physicians at each site who participated in this study.
- Issue published online: 8 DEC 2009
- Article first published online: 8 DEC 2009
- Received August 5, 2009; accepted August 7, 2009.
Keywords: inter-rater variation
Objectives: Developed by the Council of Emergency Medicine Residency Directors (CORD), the standardized direct observation assessment tool (SDOT) is an evaluation instrument used to assess residents’ clinical skills in the emergency department (ED). In a previous study examining the inter-rater agreement of the tool, faculty scored simulated resident–patient encounters. The objective of the present study was to evaluate the inter-rater agreement of the SDOT in real-time evaluations of residents in the ED.
Methods: This was a multicenter, prospective, observational study in which faculty raters were paired to simultaneously observe and independently evaluate a resident’s clinical performance using the SDOT. Data collected from eight emergency medicine (EM) residency programs comprised 99 unique resident–patient encounters; each evaluation recorded 26 individual behaviors related to specific core competencies, a global evaluation score for each core competency, and an overall clinical competency score. Inter-rater agreement was assessed using percentage agreement analyses under three constructs: exact agreement, liberal agreement, and binary (pass/fail) agreement.
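To make the three agreement constructs concrete, the sketch below shows how each might be computed for one pair of raters. It is a minimal illustration only: the five-point scale, the one-point window for liberal agreement, and the pass/fail cutoff at 3 are assumptions chosen for the example, not the SDOT’s actual anchors or the study’s cutpoints.

```python
# Illustrative computation of the three percentage-agreement constructs.
# The 1-5 scale, the "within one point" liberal window, and the >=3
# pass/fail cut are assumptions for this sketch, not the SDOT's anchors.

def percent_agreement(rater_a, rater_b, match):
    """Share of paired observations on which the two raters agree."""
    pairs = list(zip(rater_a, rater_b))
    return sum(match(a, b) for a, b in pairs) / len(pairs)

exact = lambda a, b: a == b                   # identical scores
liberal = lambda a, b: abs(a - b) <= 1        # within one scale point
binary = lambda a, b: (a >= 3) == (b >= 3)    # same side of a pass/fail cut

# Hypothetical paired scores from one rater pair across six items
a = [3, 4, 2, 5, 3, 4]
b = [3, 5, 2, 4, 2, 4]

print(f"exact:   {percent_agreement(a, b, exact):.2f}")
print(f"liberal: {percent_agreement(a, b, liberal):.2f}")
print(f"binary:  {percent_agreement(a, b, binary):.2f}")
```

On these example scores, exact, liberal, and binary agreement come out to 0.50, 1.00, and 0.83, respectively, which illustrates why the looser constructs tend to yield higher agreement than exact matching.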
Results: Inter-rater agreement between faculty raters varied by measure. Exact agreement was good for the overall competency score, poor to good for the competency scores for the six core competencies, and fair to very good for the individual item scores. Liberal agreement and binary agreement were excellent for the overall competency score and for the competency scores for each of the six core competencies, and very good to excellent for the individual item scores.
Conclusions: The SDOT demonstrated excellent inter-rater agreement when analyzed with liberal agreement or when dichotomized as a pass/fail measure, and fair to good agreement on most measures under exact agreement. The SDOT can be a useful and reliable instrument for evaluating residents’ clinical skills in the ED, particularly for identifying marginal performance.