Reliability of a Perinatal Outcomes Measure: The Optimality Index–US

Authors

  • Julia S. Seng, CNM, PhD (corresponding author)

    University of Michigan Institute for Research on Women and Gender, G120 Lane Hall, Ann Arbor, MI 48109-1290. E-mail: jseng@umich.edu

    Julia S. Seng, CNM, PhD, is a research associate professor at the University of Michigan Institute for Research on Women and Gender and School of Nursing and research assistant professor in the School of Medicine, Department of Obstetrics and Gynecology.

  • Emeline Mugisha

    Emeline Mugisha is completing a self-designed Community Health Sciences major in the College of Literature, Science, and the Arts at the University of Michigan. She is interested in global and women's health.

  • Janis M. Miller, APRN, PhD

    Janis M. Miller, APRN, PhD, is a women's health nurse practitioner, associate research scientist, and assistant professor at the University of Michigan School of Nursing and research assistant professor in the School of Medicine, Department of Obstetrics and Gynecology.


Abstract

The Optimality Index–US, a recently developed perinatal clinimetric index, has been validated with both clinical and research databases. Documentation of the reliability of the instrument for medical record abstraction is needed. This paper reports the outcomes of interrater reliability assessments conducted for two projects. Abstraction was supervised by the same investigator but staffed by different coders with a variety of qualifications (perinatal nurse, nurse-midwife, clinical trial professional, student research assistants). Medical records were entirely paper at one site and partially electronic at the other. Reliability (reproducibility) was assessed via percent agreement between pairs of coders on charts randomly selected for audit. Mean percent agreement was 92.7% in both projects, with ranges of 89.1% to 97.8% in the first project and 88.5% to 96.2% in the second. The sources of error differed between clinician and lay abstractors, but the number of errors did not. The average time per chart was assessed in the first project: once proficiency was achieved, coding took an average of 24 minutes per chart, plus some additional time to order paper charts. These analyses indicate that excellent reproducibility can be achieved with the Optimality Index–US.
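
For readers implementing a similar audit, percent agreement reduces to a simple ratio: the number of items two abstractors coded identically, divided by the number of items compared, times 100. The Python sketch below illustrates that calculation only; the function name, the ten-item chart, and the 1 = optimal / 0 = non-optimal coding are hypothetical illustrations, not the authors' published abstraction form.

    # Minimal sketch of the percent-agreement statistic described in the
    # abstract. The item data below are hypothetical.

    def percent_agreement(coder_a, coder_b):
        """Percentage of items on which two coders assigned the same code."""
        if len(coder_a) != len(coder_b):
            raise ValueError("Both coders must score the same set of items.")
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return 100.0 * matches / len(coder_a)

    # Two coders' codes (1 = optimal, 0 = non-optimal) for one randomly
    # audited chart with ten hypothetical Optimality Index-US items.
    coder_a = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
    coder_b = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1]

    print(f"{percent_agreement(coder_a, coder_b):.1f}% agreement")  # 90.0%

In a study like this one, the same calculation would be repeated for each randomly audited chart and each pair of coders, then averaged to yield the reported mean agreement.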
