The effect of multiple exposures in scenario‐based simulation—A mixed study systematic review

Abstract Aims: To examine the use and effects of multiple simulations in nursing education. Design: A mixed study systematic review. Databases (CINAHL, Medline, PubMed, EMBASE, ERIC, Education Source and ScienceDirect) were searched for studies published until April 2020. Method: Researchers analysed the articles. Risk of bias was evaluated using the Critical Appraisal Skills Programme and the Cochrane Risk of Bias tool. Results: In total, 27 studies were included and four themes were identified. Students participated in multiple simulation sessions, over weeks to years, which included 1–4 scenarios in various nursing contexts. Simulations were used to prepare for, or partly replace, students' clinical practice. Learning was described in terms of knowledge, competence and confidence. Conclusion: Multiple scenario-based simulation is a positive intervention that can be implemented in various courses during every academic year to promote nursing students' learning. Further longitudinal research is required, including randomized studies, with transparency regarding study design and instruments.


| Background
Nursing student competence is a complex concept combining knowledge, skills and performance. Moreover, the time available for developing competence in contact with real patients is increasingly limited. To help students transfer theoretical knowledge and alleviate "transition shock" (Beyea et al., 2010), simulation has become an important part of nursing education (Lavoie & Clarke, 2017).
In a recent review, Cerra et al. (2019) present strong evidence that high-fidelity simulation (HFS) can improve learning compared with other teaching methods. Additionally, a systematic review shows that HFS positively contributes to students' self-confidence. In a literature review, Kim and Yoo (2020) examined the use of debriefing in healthcare simulation and recommended that educators choose appropriate debriefing for learners to achieve maximum learning effects.
Clinical learning in real clinical settings allows nursing students to integrate theory with practice and maximize clinical competencies. Therefore, in all European Union countries, at least half of a nursing bachelor education must be completed through supervised clinical practice (European Parliament Directive 2013/55). Simulations allow students to develop competence related to various medical fields, including paediatrics (Edwards et al., 2018), internal medicine and surgery (Kaddoura et al., 2016) and mental health (Olasoji et al., 2020). Simulation-based learning can be combined with clinical practice to improve competence (Larue et al., 2015), but cannot yet fully substitute for supervised clinical practice.
To implement simulation as a substitute for direct experience with patients, it is important to determine the "dose" of simulation that best promotes learning. Therefore, it is essential to critically assess studies on the effects of multiple simulation sessions during an education programme. Earlier reviews have focused on simulation use, with the aim of identifying optimal strategies related to specific elements of the simulation session, but no reviews specifically summarize multiple simulation sessions. Thus, there is a need to analyse these studies to better understand how several simulation sessions affect nursing students' learning.

| Aim
The specific aim of the review was to identify, describe and summarize evidence related to multiple simulation sessions in nursing education. This review was guided by two questions: "How are multiple simulations used as interventions to develop nursing students' learning?" and "What is the effect of multiple simulations on nursing students' learning?"

| Design
For this mixed study review, we applied a convergent synthesis design (Hong et al., 2017; Pluye & Hong, 2014) for both qualitative and quantitative research. The results are reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (Moher et al., 2009).

| Search methods
The search strategy was based on an initial broad search developed by a research librarian in cooperation with the other authors. The search strategy included various terms relating to multiple simulations and education, with both medical subject headings and entry terms, as follows: "scenario-based simulation" OR "clinical simulation" OR "simulation-based learning" OR "simulation training" AND "simulation series" OR "simulation sessions" OR "multiple simulations" OR "repeated exposure" AND "nursing students" OR "nursing education." Table S1 presents an example of a search string.

| Search outcomes
Broad inclusion criteria were applied to generate a comprehensive sample of studies. Figure 1 illustrates details of the selection process.

| Quality appraisal
To assess the risk of bias in the studies, the Critical Appraisal Skills Programme (CASP) was adapted to systematically appraise the methodological quality of the included articles (CASP, 2013).
Three researchers rated the 27 studies using nine criteria: aim, design, methods, sample, ethical considerations, results, limitations, implications and study sponsor (Table S2). To supplement the CASP evaluation, the articles were also independently assessed for quality by pairs of researchers using the Cochrane Risk of Bias tool. This tool considers bias in terms of selection, performance, detection, attrition, reporting and other bias and rates studies as having a high, low or unclear risk of bias for each domain (Table S3). No studies were excluded due to inadequate rigour or substantial bias.

| Data abstraction and synthesis
A convergent synthesis was adapted to incorporate the integration of qualitative and quantitative data into the results (Pluye & Hong, 2014). Articles were analysed by three researchers, first individually; the researchers then discussed and identified topics for thematic analysis. In the first stage, results were extracted from each article. (Figure 1 lists the reasons for exclusion: wrong population, n = 2; articles that did not include debriefing, n = 2; articles focusing on a one-day session or less than a week, n = 25; articles that did not examine students' competence, n = 23; articles unavailable, n = 3.)

| Time frame

| Context and number of scenarios in each simulation session
Students participated in several simulation sessions during the educational programmes, with varying numbers of scenarios in each session. Table 4 shows that simulation sessions varied between one and four scenarios, implemented in different contexts. In 25 studies, HFS sessions were implemented, and only two studies did not specify fidelity level (Cummings & Connelly, 2016; Moule et al., 2008).

| Contributions of multiple simulations in preparing for and substituting for clinical placements
In most studies evaluating multiple simulations over a semester or more, sessions were not held in conjunction with clinical practice.
Ten other studies evaluated multiple simulations related to students' clinical practice, but each over less than a semester. One study (2017) examined students who participated for seven weeks in the simulation laboratory and then switched roles with students in a medical-surgical practicum. The authors found that the students' competency was not significantly associated with the sequence to which they were assigned.
Seven studies examined the use of multiple sessions as substitutes for traditional clinical placements. In a paediatric course (Meyer et al., 2011), students attended 25% of their clinical practicum in simulation. Throughout an eight-week clinical rotation, students' performance was evaluated every second week. Students who first participated in the simulation had higher performance scores than those who participated later in the course, although this difference was non-significant. This study indicated that early exposure to simulation allowed students to more quickly achieve competence.

| Effect on students' learning
The final theme is the effect of multiple simulations on students' learning. The core outcomes are represented in three sub-categories: knowledge, competence and confidence. The studies used different instruments to analyse students' learning.
| Knowledge
Studies revealed that multiple simulations appeared to have a significant impact on students' knowledge, as measured with a HESI medical-surgical specialty examination (Curl et al., 2016).

| Competence
Different terms were used to describe students' clinical competence: clinical judgement, critical thinking, clinical and patient safety competence, and performance. Bussard (2018) and Schlairet and Fenster (2012) examined students' clinical judgement. Four studies focused on students' critical thinking. Only one study (Chiang & Chan, 2014) found that multiple simulation sessions yielded significant improvement in overall critical thinking, using the California Critical Thinking Disposition Inventory (CCTDI).
Melenovich (2012) also examined students' competence. Performance is the final concept presented to determine students' competence. One study (Hart et al., 2014) used the Emergency Response Performance Tool and Patient Outcome Tool to evaluate students' performance in recognizing and responding to deteriorating patients. Over three sessions, students showed significant increases in performance and time to emergency response. Over a one-year period, Unsworth et al. (2016) implemented three simulation sessions related to recognition and rescue of the deteriorating patient. A "Discrepancy Discovery data collection tool" was developed to allow students to select aspects of their performance to develop before the next simulation session. The results showed significant differences in performance from the first to the last simulation experience. In a multi-site study (Hill, 2014), students were exposed to a scenario three times.

| Confidence
Among the eight studies examining students' confidence, six showed significant improvement in confidence scores over time. Hicks et al. (2009) developed the Self-Confidence Scale for use in analysing how simulation may influence students' confidence levels compared with clinical experience. Four dimensions describe students' ability to recognize, assess, intervene and assess the effectiveness of implemented interventions, all in the respiratory, cardiac and neurological areas. The results indicated significantly increased self-confidence among students with simulation experiences or with combined simulation and traditional clinical experiences, but not among students who only participated in clinical experience. Thomas and Mackey (2012) used the same instrument and reported significant between-group differences in all four dimensions.
Studies with a quasi-experimental design also revealed significantly increased confidence scores. In a study involving two simulation days, three weeks apart, the confidence score significantly improved over time (Lacue, 2017). Zapko et al. (2018) reported that multiple simulation experiences over a two-year period seemed to increase students' confidence. Australian researchers (Mould et al., 2011) developed a questionnaire for their study and found significantly increased self-confidence scores after simulation over nine weeks. Liaw et al. (2014) used the Preparedness for Hospital Practice Questionnaire and reported significantly improved confidence levels after a simulation programme preparing students for their transition to graduate nursing. Cummings and Connelly (2016) used "The Self-Confidence in Learning" instrument and found that repeated simulation can increase student confidence levels, although the results are poorly presented, making it difficult to draw conclusions.
Results from focus group interviews with students indicated confidence development over years of simulation sessions. Students became more comfortable and emphasized that multiple simulations enabled them to predict what would happen during sessions and that simulation was a safe arena in which to learn from mistakes (Najjar et al., 2015).

| Risk of bias
Most studies showed moderate methodological quality. Based on CASP and the Cochrane Risk of Bias assessment (Table S3), the major risk of bias in the experimental studies was due to lack of participant blinding, which is difficult to achieve in educational interventions.
In the quasi-experimental studies, the bias risk was related to non-random sampling. Most studies had a low reporting bias and clearly presented the findings. Studies also reported whether participants had withdrawn. Significant results were obtained both in sessions over two weeks (Schlairet & Pollock, 2010) and over two years (Zapko et al., 2018). The use of a randomized study design (Hansen & Bratt, 2017; Hicks et al., 2009; Melenovich, 2012; Meyer et al., 2011; Schlairet & Fenster, 2012; Schlairet & Pollock, 2010) increased the validity, despite small sample sizes (Chen et al., 2017).
Students in this review generally reported high levels of learning in relation to multiple simulations. However, another review (Cantrell et al., 2017) revealed that simulation affects students' emotions and increases their stress and anxiety levels. Najjar et al. (2015) reported that increased confidence over time improved the students' ability to prepare for and make progress in simulation sessions. Self-confidence is a foundation for learning (Woda et al., 2017), and our present findings show that participating in several simulation sessions can increase students' self-confidence, although only two studies had an experimental design (Hicks et al., 2009; Thomas & Mackey, 2012).
Increased confidence was related to the decision-making process (Hicks et al., 2009; Thomas & Mackey, 2012), confidence in learning with simulation (Lacue, 2017; Zapko et al., 2018) and coping with emotions and stress in the simulation session (Liaw et al., 2014).
These results show that confidence is widely described and not necessarily transferable between sessions.
In non-randomized studies, students seemed to attain greater experience, competence and confidence. Such studies were rated as having a low to high risk of bias (Table S2). Twelve studies in this review lacked a control group, which decreases the validity. All of them used pre-test and post-test designs to examine progression in student learning. One non-randomized study (Shin et al., 2015) included 237 senior students from three schools, a sample size that increases the validity. Another quantitative study (Ironside et al., 2009) lacked an experimental design; however, all 69 students underwent the same intervention of multiple simulations and were evaluated on patient safety competencies. Thomas and Mackey (2012) performed a study of 24 students with a quasi-experimental design. The results carry a high risk of bias, as there were only 14 students in the experimental group and 10 in the control group.
Research evaluating the effects of simulation must apply valid and reliable instruments, as this can influence the results and generalizability of findings. The studies in this review employed a mix of well-validated and less well-validated instruments, and most authors provided some discussion of the reliability and validity of the instruments used. The nursing examinations are considered valid and reliable standardized assessments of students' knowledge. Ten studies described the validity of the instruments used to measure competence. The CCEI, LCJR and CCTDI are well-known and validated instruments for measuring the effectiveness of clinical learning through simulation. Additionally, four instruments were developed specifically for the studies, and their validity was not specified. Instruments measuring confidence all involved students self-reporting their reactions to the simulation. Reliability was tested using Cronbach's alpha, which was between 0.87 and 0.97. One study developed its own instrument for measuring confidence and did not specify its reliability.

| Limitations
This review did not include studies of other healthcare students, thus limiting the generalizability of findings to other student categories. Additionally, some of the included studies did not clearly report all relevant aspects of their methods, context and findings (Table S2), making it difficult to assess the results.

| CONCLUSION
The present review provides support for using multiple simulations.
However, it offers no clear answer to the question of the minimal effective simulation dose, that is, how many scenarios or sessions should be implemented to maximize students' learning. It appears to be beneficial to combine simulation and clinical placement.
Little is known about how multiple simulations experienced over more than a year affect students' learning, and there are few randomized studies. In educational research, both randomized and longitudinal studies can be challenging due to the complexity of educational programmes. Further research should implement simulations from a longitudinal perspective and provide detailed descriptions of the context and the numbers of scenarios and sessions, making it easier to draw conclusions about the effects of simulation.

CONFLICT OF INTEREST
The authors declare no conflict of interest.

AUTHOR CONTRIBUTIONS
All authors have agreed on the final version and meet at least one of the following criteria [recommended by the ICMJE (http://www.icmje.org/recommendations/)]: all authors provided substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data, and to drafting the article or revising it critically for important intellectual content.

DATA AVAILABILITY STATEMENT
The data used to support the findings of this study are available from the corresponding author on reasonable request.