Is the CVI an acceptable indicator of content validity? Appraisal and recommendations

Authors

  • Denise F. Polit (corresponding author)

    1. Humanalysis, Inc., 75 Clinton Street, Saratoga Springs, NY 12866 (President)
    2. Griffith University School of Nursing, Gold Coast, Australia (Adjunct Professor)

  • Cheryl Tatano Beck

    1. University of Connecticut School of Nursing, Storrs, CT (Professor)

  • Steven V. Owen

    1. School of Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX (Professor)


Abstract

Nurse researchers typically provide evidence of content validity for instruments by computing a content validity index (CVI), based on experts' ratings of item relevance. We compared the CVI to alternative indexes and concluded that the widely used CVI has advantages with regard to ease of computation, understandability, focus on agreement of relevance rather than agreement per se, focus on consensus rather than consistency, and provision of both item and scale information. One weakness is its failure to adjust for chance agreement. We addressed this weakness by translating item-level CVIs (I-CVIs) into values of a modified kappa statistic. Our translation suggests that items with an I-CVI of .78 or higher for three or more experts could be considered evidence of good content validity. © 2007 Wiley Periodicals, Inc. Res Nurs Health 30:459–467, 2007.
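
For readers who want to see the translation in concrete terms, the sketch below computes an item-level CVI and a chance-corrected, kappa-like value for a single item. It is a minimal illustration, assuming the common 4-point relevance scale dichotomized into relevant (ratings 3 or 4) versus not relevant (ratings 1 or 2) and a binomial model of chance agreement; the function names and example ratings are illustrative and are not taken from the article.

    from math import comb

    def i_cvi(ratings, relevant=(3, 4)):
        """Item-level CVI: proportion of experts who rate the item as
        relevant (e.g., 3 or 4 on a 4-point relevance scale)."""
        agreeing = sum(1 for r in ratings if r in relevant)
        return agreeing / len(ratings)

    def chance_corrected_index(ratings, relevant=(3, 4)):
        """Translate an I-CVI into a kappa-like index adjusted for chance
        agreement. p_chance is the binomial probability that exactly this
        many of the N experts would call the item relevant by chance,
        treating each rating as equally likely to fall on either side of
        the relevant / not-relevant split."""
        n = len(ratings)
        a = sum(1 for r in ratings if r in relevant)
        icvi = a / n
        p_chance = comb(n, a) * 0.5 ** n
        return (icvi - p_chance) / (1 - p_chance)

    # Example: five experts rate one item on a 1-4 relevance scale (hypothetical data).
    ratings = [4, 3, 4, 4, 2]
    print(f"I-CVI = {i_cvi(ratings):.2f}")
    print(f"chance-corrected value = {chance_corrected_index(ratings):.2f}")

With these illustrative ratings, four of five experts agree on relevance, yielding an I-CVI of .80 and a chance-corrected value of roughly .76.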
