Validation of an operating room immersive microlaryngoscopy simulator

Authors

  • Presented at the American Academy of Otolaryngology–Head and Neck Surgery Annual Meeting, San Francisco, California, U.S.A., September 11–14, 2011.

  • The authors have no funding, financial relationships, or conflicts of interest to disclose.

Abstract

Objectives/Hypothesis:

To assess the face and construct validity of two assessment tools for a microlaryngoscopy simulator—a Checklist Assessment for Microlaryngeal Surgery and Global Rating Assessment for Microlaryngeal Surgery.

Study Design:

Blinded experimental simulator-based study.

Methods:

Fifteen candidates were divided into a novice (≤50 procedures performed) or experienced (>50 procedures) group based on their prior microlaryngoscopy experience. Each candidate undertook a 10-minute simulated microlaryngoscopy with excision biopsy, and two blinded experts rated the performance live using each of the two assessment tools. To assess face validity, each candidate then completed a questionnaire about the simulator.

Results:

The model demonstrated good face validity across all levels of experience. The global rating assessment showed excellent interrater reliability (0.9), higher than that of the checklist assessment (0.7). The checklist assessment differentiated experienced from novice candidates, demonstrating construct validity; the global rating tool, however, was unable to differentiate the groups. The two assessment tools correlated significantly (correlation coefficient = 0.624).

Conclusions:

This is the first reported study of a high-fidelity microlaryngoscopy simulator with task-specific rating tools. These tools are recommended for use within otolaryngology training programs: the global rating assessment as a frequently administered feedback tool, and the checklist assessment as a confirmatory evaluation of competency at transitions in professional training.