Keywords:

  • clinical competence/*standards;
  • *internship;
  • *emergency service, hospital;
  • New South Wales

Summary. Fifty-four interns agreed to a study in which their clinical performance in an outpatient unit with standardized patients was recorded on videotape. To examine whether they could distinguish standardized from real patients, the interns were asked to note any patients whom they thought might be simulating their complaints and to report these to the researchers at the end of each 2-day period of study. Thirty-two of the interns were assessed again at the end of their internship, using the same clinical problems presented by different simulators. The consultations took place in the casualty department of a large urban hospital. At the beginning of the year there were 152 consultations with standardized patients and 328 consultations with appropriate genuine patients. Standardized patients were identified definitely as ‘not genuine’ in only 12 of the 152 consultations (sensitivity 7.8%), whereas 320 of the 328 genuine consultations were accepted by the interns as genuine (specificity 97.8%). When the level of confidence required to distinguish the two groups was reduced from ‘definite’ to ‘probable’, the number of correctly identified simulator consultations increased to 36/152 (27%), but the rate of misclassification of genuine patients also increased from 8 to 37 out of 328 consultations (11%). At the end of the year there were 81 consultations with standardized patients and 149 consultations with genuine patients. Identification rates were only slightly changed. We conclude that simulator identification is not a problem when standardized patients are used to evaluate the quality of care provided in a hospital casualty department.
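The sensitivity and specificity figures above can be reproduced from the raw counts in the summary. The sketch below is illustrative only: the function name `detection_rates` is hypothetical, the counts are taken from the summary, and the printed percentages reflect straight division, which may differ slightly from the rounding used in the quoted figures.

```python
# Sketch of the abstract's detection-rate arithmetic (counts from the summary).
# Sensitivity: proportion of standardized-patient consultations flagged as
# 'not genuine'. Specificity: proportion of genuine consultations accepted
# as genuine.

def detection_rates(flagged_simulated, simulated_total,
                    accepted_genuine, genuine_total):
    sensitivity = flagged_simulated / simulated_total
    specificity = accepted_genuine / genuine_total
    return sensitivity, specificity

# 'Definite' threshold, start of year: 12/152 simulators flagged,
# 320/328 genuine consultations accepted.
sens, spec = detection_rates(12, 152, 320, 328)
print(f"definite:  sensitivity = {sens:.1%}, specificity = {spec:.1%}")

# 'Probable' threshold: flagged simulators rise to 36/152, but
# misclassified genuine consultations rise from 8 to 37 of 328.
sens_p, spec_p = detection_rates(36, 152, 328 - 37, 328)
print(f"probable:  sensitivity = {sens_p:.1%}, specificity = {spec_p:.1%}")
```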