Purpose. Three studies examined the degree to which investigative interviewers' adherence to best-practice guidelines is consistent across similar mock interviews.
Method. In each study, two interviews were administered within a period of several hours. Performance was measured by calculating the proportion of open-ended and leading questions and recording the presence of predetermined problem behaviours, and both group and individual stability of interviewer performance were analysed. The studies differed in the interview paradigm employed. In Study 1, interviewer performance was measured in a group context in which participants rotated between the roles of interviewer, child respondent, and observer. In Study 2, an adult played the role of a child recalling abuse, but interviews were conducted in isolation (participants neither observed others nor played the child). Study 3 was similar to Study 2, except that in each interview an unfamiliar child aged 5–7 years recalled an innocuous event.
Results. Interviewer performance was relatively stable across tasks, although the strength of the relationship between measures varied across analyses. Improvement in open-ended question usage occurred in Study 1 but not in Studies 2 and 3. Irrespective of the assessment context, the dichotomous rating scale yielded greater consistency than question tallies. Further, group stability overestimated individual stability. The practical implications of these findings for trainers and researchers are discussed.