The process of rater training for observational instruments: Implications for interrater reliability

Authors

  • Alexandria H. Castorr
  • Kathleen O. Thompson
  • Judith W. Ryan
  • Carol Y. Phillips
  • Patricia A. Prescott (corresponding author: University of Maryland School of Nursing, 655 West Lombard Street, Baltimore, MD 21201)
  • Karen L. Soeken

Alexandria H. Castorr, MS, RN; Kathleen O. Thompson, MS, RN; and Carol Y. Phillips, MS, RN, are doctoral candidates at the University of Maryland School of Nursing at Baltimore. Judith W. Ryan, PhD, RN, is an assistant research professor; Patricia A. Prescott, PhD, RN, is a professor; and Karen L. Soeken, PhD, is an associate professor, all at the University of Maryland School of Nursing at Baltimore.


Abstract

Although rater training is important for establishing the interrater reliability of observational instruments, little information is available in the current literature to guide the researcher. This article describes principles and procedures that can be used when rater performance is a critical element of reliability assessment. Three phases of the rater training process are presented: (a) training raters to use the instrument; (b) evaluating rater performance at the end of training; and (c) determining the extent to which rater training is maintained during a reliability study. An example illustrates how these phases were incorporated in a study examining the reliability of a measure of patient intensity called the Patient Intensity for Nursing Index (PINI).
