• review [publication type];
  • periodicals as topic/*standards;
  • *education, medical;
  • biomedical research;
  • random allocation;
  • patient simulation

Context  In order to assess or replicate the research findings of published reports, authors must provide adequate and transparent descriptions of their methods. We conducted 2 consecutive studies, the first to define reporting standards relating to the use of standardised patients (SPs) in research, and the second to evaluate the current literature according to these standards.

Methods  Standards for reporting SPs in research were established by representatives of the Grants and Research Committee of the Association of Standardized Patient Educators (ASPE). An extensive literature search yielded 177 relevant English-language articles published between 1993 and 2005. Search terms included: ‘standardised patient(s)’; ‘simulated patient(s)’; ‘objective structured clinical examination (OSCE)’, and ‘clinical skills assessment’. Articles were limited to those reporting the use of SPs as an outcome measure and published in 1 of 5 prominent health sciences education journals. Data regarding the SP encounter, SP characteristics, training and behavioural measure(s) were gathered.

Results  A random selection of 121 articles was evaluated according to 29 standards. Reviewers judged that few authors provided sufficient details regarding the encounter (21%, n = 25), SPs (16%, n = 19), training (15%, n = 15), and behavioural measures (38%, n = 44). Authors rarely reported SP gender (27%, n = 33) and age range (22%, n = 26), whether training was provided for the SPs (39%, n = 47) or other raters (24%, n = 29), and psychometric evidence to support the behavioural measure (23%, n = 25).

Conclusions  The findings suggest a need for increased rigour in reporting research involving SPs. To support the validity of research findings, journal editors, reviewers and authors are encouraged to provide adequate detail when describing SP methodology.