Correspondence: Sarah Yardley, Research Institute for Primary Care and Health Sciences, Keele University, Keele, ST5 5BG, UK Tel: 00 44 1782 734694; E-mail: firstname.lastname@example.org
Concurrent exposure to simulated and authentic experiences during undergraduate medical education is increasing. The impact of gaps or differences between contemporaneous experiences has not been adequately considered. We address two questions. How do new undergraduate medical students understand contemporaneous interactions with simulated and authentic patients? How and why do student perceptions of differences between simulated and authentic patient interactions shape their learning?
We conducted an interpretative thematic secondary analysis of research data comprising individual interviews (n = 23), focus groups (three groups, n = 16), and discussion groups (four groups, n = 26) with participants drawn from two different year cohorts of Year 1 medical students. These methods generated data from 48 different participants, of whom 17 provided longitudinal data. In addition, data from routinely collected written evaluations of three whole Year 1 cohorts (response rates ≥ 88%, n = 378) were incorporated into our secondary analysis dataset. The primary studies and our secondary analysis were conducted in a single UK medical school with an integrated curriculum.
Our analysis identified that students generate knowledge and meaning from their simulated and authentic experiences relative to each other and that the resultant learning differs in quality according to meaning created by comparing and contrasting contemporaneous experiences. Three themes were identified that clarify how and why the contrasting of differences is an important process for learning outcomes. These are preparedness, responsibility for safety, and perceptions of a gap between theory and practice.
We propose a conceptual framework, generated by reframing common metaphors that refer to the concept of the gap, to develop educational strategies that might maximise useful learning from perceived differences. Educators need to ‘mind’ gaps in collaboration with students if synergistic learning is to be constructed from contemporaneous exposure to simulated and authentic patient interactions. These strategies need to be tested by teachers and learners for their utility in practice. Further research is needed to understand gaps in other contexts.
Contemporaneous experience of simulated and authentic patient–student interactions occurs in medical curricula across the world and is required by the General Medical Council in the UK. We define ‘simulated patients’ (SPs) as lay people who are trained to act as patients in medical interviews and give feedback from a patient perspective. Commonly, these people are participating in role-plays based in teaching environments remote from clinical practice. ‘Authentic early experience’ denotes human contact that occurs in clinical or social workplaces for the purpose of learning.
Developing communication skills through SP and authentic patient interactions
The advantages of simulated interactions include a reduction in the risk for harm (including psychological distress) to patients or students, the ability to control ‘patient’ supply and demand, partial control of the content of interactions, and the provision of opportunities for students to ‘practise’ different scenarios and responses in order to develop appropriate knowledge, skills and behaviours,[3-6] with accompanying patient-referenced feedback. By contrast, authentic early experiences are a form of workplace-based learning and are intended to encourage students to contextualise the curriculum and ease the transition into clinical learning during later years.[7-10] Differences are seen by students when good practice ideals promoted in the classroom are not replicated by the health care professionals they see consulting in the workplace[11, 12] and when authentic patients respond differently from SPs. As with other forms of workplace-based (also called ‘experience-based’) learning, students require support to maximise the learning potential of these variable yet authentic experiences.[11, 13, 14]
The importance of physical and psychological fidelity during simulation is debated in the literature,[4, 15] largely in relation to simulator equipment; less attention has been paid to fidelity within SP encounters. It is helpful in this debate to take the social character of simulation into account by acknowledging that participants and organisers enter into a ‘fiction contract’, which allows them to treat the simulation as if it were real in order to practise transferable skills. How critical physical or psychological fidelity is to supporting the fiction contract depends on whether the desired educational goals are to learn psychomotor dexterity, procedural knowledge, decision making, interpersonal skills or team-working norms and values, or a combination of these. The simulation of communication skills is often conceptualised as representing preparation for the workplace, ‘bridging the gap’ between the classroom and clinical practice.[17, 18] This is based on the premise that authentic experience sequential to simulated experience is safer for both students and patients[4, 17-20] and that simulation offers an opportunity to instil understanding of ideal practice prior to experiences of pragmatism in authentic workplaces.
Few studies directly compare the two educational settings; consequently, how new medical students handle contemporaneous experiences of interactions with simulated and real patients has not been adequately considered. Students exposed sequentially to SPs in the early years of training and then to real patients in the later years report that real patients are more focused on students' understanding of medical content than on their ability to communicate. Students can be suspicious that SPs have been told to withhold information by faculty staff, but may still describe simulated interactions as useful preparation for real encounters or for practising skills in worst case scenarios.
There is also evidence that learners commonly struggle to transfer knowledge between contexts.[26-28] Transfer will be impaired if there is a perceived gap between what is taught in medical school and the reality of medicine as it is practised in the workplace. This creates potential for dissonance between student experiences of simulation and authentic practice.[4, 30] Unless there is understanding of how and why students conceptualise their experiences, and particularly of how they handle these differences, we cannot seek to improve patient care through integrated simulation-based and authentic workplace-based education.[4, 31, 32]
This is an exploratory study to clarify student perceptions of contemporaneous interactions with simulated and authentic patients so that we can better understand the consequences for the development of communication skills. In this paper, we present an interpretative thematic secondary analysis of data before developing a conceptual framework for educational strategies in order to make sense of, and learn from, gaps or differences.
Qualitative secondary analysis is attracting increasing interest in social science research disciplines (for examples, see [33-35]) and amongst research funders. Although there is no universally agreed definition of secondary analysis, this term is used to describe situations in which researchers conduct further analysis of one or more datasets for purposes not defined or predicted in the original study design (see Heaton and Thorne for an overview of different types of secondary analysis). The attractions of secondary analysis as a methodology include: (i) the facilitation of data analysis across datasets (e.g. when each individual dataset provides relevant and complementary data with which to explore a particular question), and (ii) the further exploration of unrelated novel questions or unexpected findings generated as byproducts of studies with different foci. It has also been suggested that the secondary analysis of data from different sources may improve the generalisability of qualitative findings. Many of the potential criticisms of secondary analysis (aside from issues that apply to any qualitative approach, whether primary or secondary), such as loss of contextual information, are negated or resolvable when researchers from the original studies are involved in a rigorous process of secondary analysis. We have re-examined data originally generated from students in Years 1 and 2 during research studies conducted by each of the authors (Table 1). All three of the original studies, albeit that they addressed different research questions, were situated within an interpretative constructionist paradigm and so shared commonalities in theoretical perspective.
Table 1. Summary of primary studies from which data sets were drawn
Orientated towards the principles of constructionism, interactionism and interpretivism. Socio-cultural theories were used to interrogate empirical data, and empirical data were used to refine and develop these theories within the field of medical education
Action research to design and evaluate the curriculum during its first year of delivery drawing on adult learning theories within a constructivist approach to learning
Interpretative analysis of evaluation data (free text)
Original research questions
‘How and why do students construct useful knowledge and meaning-making from authentic early experience?’ and ‘How and why do students make authentic early experiences work for them?’
In the communication skills curriculum what curricular content should be taught, by what methods, in what locations, by whom, to achieve which desired learning outcomes with respect to the first year of the curriculum?
The survey asked for 6 open text responses:
Comments on learning activities
Unit 1: Emergencies: What did you enjoy the most?
What aspects of this unit have you found most challenging?
Unit 2: Infections and Immunity: What did you enjoy the most?
List the 2 best things about Semester 1.
List the 2 things that could be improved about Semester 1 and suggestions for how they could be improved.
UK medical school established in 2002 and implementing a new locally designed curriculum from 2007/2008. Students participated in simulated patient interactions (during classroom communication skills training) and in real patient interactions (during authentic early experience placements in workplaces) from the start of their first year. Students were provided with a paper briefing prior to each session in the classroom or workplace outlining intended communication skills learning outcomes.
The same UK medical school as in the Yardley study, during the 2 years surrounding the launch of its new curriculum. The action research team included stakeholders such as communication skills tutors, clinical tutors from workplace settings, a fourth year medical student representing the student perspective, and simulated patients representing informed lay perspectives.
The cohort of Year 1 students is divided into 15 tutor-led small groups for each communication skills teaching session. Five groups run at a time, and each session is followed by a tutor debriefing in which the 5 tutors report back to the lead tutor, who compiles a written lead tutor report.
Recruitment, sampling and participation
Students were recruited from academic years commencing in 2007/2008 and 2008/2009. Participation was voluntary following recruitment via e-mail and lecture announcements. Students were sampled from both Years 1 and 2 of the undergraduate degree as the authentic early experience programme spanned both years.
2007/2008 cohort: n = 4 (individual interviews alone), n = 8 (individual interviews and discussion group), n = 3 (discussion group alone)
2008/2009 cohort: n = 5 (individual interviews alone), n = 6 (individual interviews and discussion group), n = 9 (discussion group alone).
The study population comprised Year 1 students whose undergraduate degree commenced in 2007/2008 and 2008/2009, plus the tutors and simulated patients working with these student cohorts. A subset of students in the 2007/2008 cohort was also recruited to join focus group discussions (3 groups, n = 16 in total). This subset volunteered following announcements in a lecture and online.
All 133 Year 1 students in the cohort commencing in 2010/2011 were sent an electronic evaluation using SurveyMonkey™ software at the end of Semester 1; 100% responded. In addition, all students were invited to give verbal feedback on their first placement experience to their small group tutor in the fourth communication skills session, at the end of Semester 1. The feedback from the 15 tutors, taken over the 3 sets, after this session was summarised into 3 lead tutor reports (students' views were hence subject to reinterpretation).
Interviews and discussion groups (in which results from prior interviews were shared with students, divided by year and previous participation) were used to generate data between January 2009 and March 2010. Students had been on between 2 and 4 placements in their current year of study, and the second year students had completed up to 6 placements in their first year of study. Participants were asked to provide examples of their experiences and encouraged to explain their own interpretations of these during semi-structured interviews.[52] Interviews included discussion of the expectations, processes and consequences of authentic early experiences. The discussion groups[53-55] were designed to allow student participants to comment on developing findings and to enhance understanding of the student perspective through discussion of views amongst peers, identifying areas of consensus or difference, and collective meaning-making.[56] Data were audio-recorded and independently transcribed. Three complementary overarching data types were generated: phenomenological themes, narrative (content and structure/language), and presented meaning. Analysis was conducted in discussion with 3 other researchers using mixed qualitative methods that drew on strategies from thematic analysis, narrative and discourse analysis, and interpretative phenomenological analysis. Further details of the original study, including the full study design and methodology, are available in Ref. 57.
Routine evaluation data, 2007/2008 cohort: 121/137 Year 1 students (88% of cohort). 57% of respondents were female, 11.6% graduates and 9.9% repeating the year.
2008/2009 cohort: 124/133 Year 1 students (93%), 60 (45% of respondents) were female, 14.5% graduates and 5.6% repeating the year.
Analysis of routine evaluation data for the communication skills programme, collected through a student questionnaire containing 16 questions and space for free text. For further details, including the evaluation forms, see Lefroy, Brosnan and Creavin.[3] Students self-identified comparisons between communication skills sessions and their authentic early experiences during the process of evaluation.
16 students took part in 3 focus groups at the end of the first cycle of Year 1 (2007/2008 cohort) – 7 male and 9 female of whom 2 were graduates. Allocation to focus groups was by availability and in order to disperse PBL group members as much as possible. Efforts were also made to obtain an even gender balance within groups.
Student focus groups were analysed using a modified grounded theory approach. Data were transcribed and coded into themes by the author. The moderator and assistant were known to the students as the course evaluators and were not their tutors. The focus groups explored a range of issues relating to the new curriculum, and as part of this broader evaluation each group was asked to discuss its experiences of communication skills sessions and placements.
Focus groups were audio-recorded and transcribed with written consent from participants. Thematic analysis of transcript data was performed using NVivo 2.0 software. Tentative interpretations were developed at the time of data collection and the relevant literature was scanned to widen the interpretation. Assumptions were discussed by the action research group in light of findings, highlighting exceptions and seeking explanations for apparent disagreement. For further details see Lefroy.[39]
Students' open text comments to the 6 survey questions were transcribed verbatim. Thematic analysis identified 72 comments from 54 students of relevance to simulated patient teaching and authentic patient interactions.
Thematic analysis of the elements of the 3 lead tutor reports relevant to comparison and contrast between simulated and authentic patient encounters.
Original data re-used in secondary analysis
Interview and discussion group verbatim transcripts.
Free text from questionnaires. Transcripts from focus groups.
Verbatim transcriptions from survey (tutor reports provided contextual information).
Unique participants contributing research data to the secondary analysis data set
35 unique participants contributed through either an individual interview or discussion groups.
13 unique participants (in addition to those in the Yardley study) contributed through focus groups.
Routine evaluation data included in the secondary analysis data set
n = 245
n = 133
The studies from which our dataset originates were all designed to look at aspects of student interactions with either simulated or authentic patients (Table 1). During the primary analysis of one dataset, an unexpected finding was observed by the first author of this paper: not only did students compare learning in different settings, but this comparison led students to make value judgements about what was valid knowledge. Students were generating knowledge and meaning from their simulated and authentic experiences relative to each other. Although one might reason that expanded learning could emerge from students' comparisons of simulated and authentic experiences, with each offering complementary aspects of learning, it is also possible that learning from either sort of experience might be reduced as students contrasted experiences when making value judgements. The impact of comparison and contrast on learning from concurrent simulated and authentic experiences has not previously been studied in detail. The other two authors of the present paper had also separately identified a similar need to better understand the impact of students' comparing and contrasting of simulation with clinical experience in their own masters' studies[39, 40] and observations during teaching. Secondary analyses of qualitative data look at the data through a different ‘lens’ and with fresh research questions. Our objective, therefore, was to explore and clarify effects of the contemporaneous provision of both types of experience through two research questions. (i) How do new undergraduate medical students understand contemporaneous interactions with simulated and authentic patients? (In the study context, ‘new’ refers to students in the first 2 years of medical school.) (ii) How and why do student perceptions of differences between simulated and authentic patient interactions shape their learning? 
The second of these questions emphasises our objective of developing understanding of learning outcomes or consequences arising from exposure to difference.
All three of the studies from which we drew our dataset were conducted in a single UK medical school with an integrated curriculum for undergraduates. The curriculum uses a hybrid model incorporating problem-based learning, experiential learning within the medical school, laboratory sessions, lectures and authentic early experience placements.
Students interact with simulated and authentic patients from the start of their studies. Authentic patients are encountered predominantly in clinical placements, although patients are also used in classroom teaching. In their first term, students have four classroom-based, tutor-facilitated communication skills teaching sessions. The first explains and explores the use of role-play in teaching and the principles of feedback. The following three sessions use SPs. The first clinical placement occurs between the third and fourth sessions, and is supported by a student briefing at the end of the third session and debriefing at the start of the fourth session, which, respectively, prepare students for and enable them to reflect upon their first authentic patient experiences.
Simulated interactions in the early years involved no simulation of the environment, but only of the ‘patient’ role. The general stated purpose of classroom sessions with SPs was to offer students practice prior to their interactions with authentic patients; each session has its own specific written learning objectives. During authentic early experience placements students were supervised (but not directly observed) by nominated professionals within workplaces. Usually (among other activities), the supervisor would set up an encounter with a patient, whom the students would then interview in pairs.
All three original studies[11, 39, 41] were subject to independent peer review and prospective ethical approval was gained from Keele University School of Medicine Research Ethics Committee for the elements of work in each study that exceeded normal procedures for evaluation of the curriculum (for which ethical approval is not currently required in this setting). All participants gave informed consent for the data contained in this paper to be used in research. The methodological framework of our secondary analysis was also peer reviewed.
We conducted our secondary analysis on data generated from research methods and routine evaluations. The complete dataset comprised research data from individual interviews (n = 23), focus groups (three groups, total participants n = 16), and discussion groups (four groups, total participants n = 26) taken from two sequential year cohorts (entering in the 2007/2008 and 2008/2009 academic years) of undergraduate medical students. The interviews had been audio-recorded with undergraduate students in Years 1 and 2 participating in a study of meaning making and knowledge construction from authentic early experience. Students from these year cohorts later (in Years 2 and 3 of their training) participated in audio-recorded discussion groups, which were transcribed verbatim. The focus groups were also audio-recorded and transcribed verbatim in a study of student experiences with SPs. Overall, these methods generated data from 48 different participants, of whom 17 provided longitudinal data as a result of their sequential participation (a feature of one of the original studies meant 14 students participated in sequential interviews and discussion groups; three of the students who participated in this study also participated in one of the others). We have ensured that no individual students' views are over-represented in our secondary analysis dataset by cross-checking transcriptions. In addition, routine written evaluation data obtained from three Year 1 cohorts of students (n = 378, as described in Table 1) were incorporated into the secondary analysis dataset. Given that response rates for all forms of evaluation data were ≥ 88%, we would expect to find that students who participated in other forms of data generation were also represented in the routine evaluation data, but because of the anonymising of the evaluation data, we cannot confirm this. 
Details of the conduct of each original study from which the datasets were drawn are outlined in Table 1, in which we have summarised the theoretical framework, original research questions, setting, recruitment, sampling and participation, and methods of each study. We have sequentially recorded verbatim quotations (rather than retaining the different original dataset classification systems) for the purposes of this paper in order to support readability. Selected quotations were drawn from different participants.
The data from all three sources were combined before an interpretative thematic analysis addressing the research questions outlined in this paper was conducted. All text was read and coded for type of experience, subject matter, comparison or contrast narratives, and comments on similarities and differences between simulated and authentic patients. Similarities and differences between each dataset were sought. Themes in the data were identified through discussion of these codes by all three authors. Data extracts are presented in the Results section to illustrate specific points within the analysis. Attention was paid to the social construction of the data and the language used. Our interpretation was developed through a rethinking of existing metaphors of ‘gap’. This reframing produced an alternative conceptual model for using difference and contrast to potentiate learning and enabled the development of our proposed educational strategies (see Discussion).
Both forms of learning were well received amongst the student body as evidenced by levels of satisfaction reported in contemporary written routine evaluations. In this section, we present three key cross-cutting themes derived from secondary analysis of data in which students compare or contrast communication differences between simulated and authentic patient interactions:
preparedness for being a student on placement or for becoming a doctor;
responsibility for the safety of the patient and student, and
student perceptions of a gap between theory and practice.
For each theme, the initial analysis is presented and our interpretation is then further developed.
Preparedness for being a student on placement or for becoming a doctor
On evaluating their introductory communication skills course, 118 (99%) Year 1 students in November 2007 and 121 (98%) in November 2008 agreed that communication skills classes prepared them well for placements. However, in the interviews and discussion groups conducted a few months later by SY, although students reported that their expectations of simulation had been met, they also argued that it would not have been possible to fully prepare them for their experiences with authentic patients:
‘…although we were adequately prepared for placements, I didn't feel that prepared because I hadn't actually gone out and spoken to patients yet because… what I mean is the actual development of getting better at talking to patients is by talking to more patients and, so I think I really needed to develop the confidence, really… get out in the real world before I felt adequately prepared for placements.' (S1)
Students participating in focus groups also reported satisfaction with the realism of the SPs, but questioned whether learning arising from these sessions could really be directly considered as ‘preparedness’:
‘I thought it [simulated sessions with student choice of level of patient emotion] was really useful, but a couple of weeks after that I had a placement where a patient did actually start crying. Even though it was useful and I knew more what to expect, you still feel completely overwhelmed when you are sat there in a room with two of you. Maybe because it was a male patient I felt that there was a guy sat there crying and you're there like what do we do, what have we done? …because you know with the simulated patient that you can't offend them if you upset them, it's not actually… it doesn't prepare you that much for real emotion, you are still completely overwhelmed by it.’ (S2)
Both the students quoted imply that simulation is useful in acquiring skills, but is less useful in preparing the learner for how he or she is going to feel when faced with reality. For such students, the fiction contract that is in place during simulation does not extend to consideration of their own feelings. Instead, students in simulation focus on personal performance or the reactions of their peers and tutors.
Students particularly valued the educational role of the SP:
‘…what is more helpful with the simulated patients is the feedback that they give you afterwards, because they've obviously done it plenty of times before, they know what they're looking for, they know what… they know what a good history is all about, so they can give constructive feedback which is invaluable really. Simulated patients are really invaluable in that respect.’ (S3)
However, students in discussion groups framed interactions with SPs as more awkward or antagonistic than with authentic patients as they were felt to require prescribed student behaviours to ‘unlock’ phases of the patient script:
‘Yes, the simulated patients like, it's like they've been primed, they've only been told that they can say certain things if you ask a question in the correct way. If you don't say it in the correct way, they don't give you that bit of information that you need to then ask your next question whereas a normal patient you can just ask them one question and they can go on forever and you can pick up loads of points to then ask them.’ (S4)
The finding that students felt they were participating in a script during simulated sessions is not unique to this setting, as illustrated by general debate, within the field of medical education, surrounding the hidden curriculum in multiple spheres of learning, and about fidelity issues within simulations. It does, however, suggest that students might need more support to engage in the ‘fiction contract’. Nonetheless, the ‘artificial’ aspects of simulated interactions did provide students with learning opportunities that would otherwise perhaps not have occurred. For example, the option to ‘pause’ and seek advice mid-interaction promoted student learning:
‘I think the pause and the rewind… commands were really useful, because you could stop and talk to the group and things like that and that helped a lot rather than carrying on to fail and then talking about how badly you failed. It gave you a chance to correct what you are doing if you were making a mistake.’ (S5)
The use of ‘gospel’ as a metaphor by the following student could suggest a perception that the medical school, unlike the student, believes there is a single correct way to communicate. This is supported by the use of ‘right’ and ‘wrong’ when describing feedback:
‘…if they just gave us communication skills and left it at that, it would just be learning a set of theories or a set of questions… you can't take this rigid structure as gospel anyway, it's meant to be a framework which you work from because not every patient's gonna be the same… But it's… invaluable to have the grounding first… with… simulated patients… with a tutor there to guide you where you're going wrong and to tell you when you're going right… then actually going out and doing it.' (S6)
Tutors, SPs and students are, in fact, instructed to facilitate feedback in terms of clarifying what worked and offering alternatives rather than judgements of what is right or wrong, although we do not know if these instructions were always followed during the study sessions. Taken to a logical conclusion, these findings suggest that students may feel pressure to behave in one way in classrooms and another in workplaces. Their comments indicated that many students conceived the purpose of simulation sessions as being limited to the short-term goal of coping with authentic experiences as students. By contrast, at least some of the students viewed authentic experiences as preparation for future practice. For these students the impact was considerable:
‘Placements … all three were very different, memorable experiences that encouraged me through giving me a vision of what I could be doing in 5 years’ time. They helped me understand the patient experience and communicate with patients.' (S7)
With respect to learning content and practically applicable knowledge for the future, the unpredictable agendas of authentic patients were reported as providing valuable opportunities to learn and derive meaning. Students were able to identify potential learning beyond the faculty-designed objectives when interacting with authentic patients, such as in understanding the patient's life:
‘They might come out with… a lot of things which you don't expect or which you never asked but somehow it came out… they came out with something totally unrelated but still a good insight to their lives.’ (S8)
Responsibility for safety: patient and student
Placements can be disappointing, especially if providers seem unprepared or unwelcoming, and the expected educational opportunities do not materialise. Some students in the 2010/2011 cohort reported, for example, that ‘…the provider didn't even know we were coming!’ (S9), despite there being clear administrative processes to book and confirm placements well in advance of student visits. Some providers seemed unclear about students' intended educational objectives:
‘When I went to placements, I felt I was abandoned sometimes and I didn't know what to do apart from interviewing patients. Someone should be beside me while I was interviewing the patient and I should be given feedback at the end. Therefore, I could learn from mistakes and I could improve my communication and interviewing skills.’ (S10)
This comment also suggested simulation might create student dependence upon a level of supervision that is not always available in clinical practice. The student body had taken to heart concerns of some faculty members about risk and the potential harm that may arise from authentic interactions. The faculty intention as expressed in briefings was to reassure students that they should not be pressured into acting above their competencies. For some students, at least, this resulted in anxieties that limited them:
‘He [a workplace supervisor] just said out of the blue‚ “Would you like to take a history off the patient?” and I just thought right, well, I'd rather not do it terribly and, you know, potentially make the patient worse off because of it …why put her through a history that's not going to be properly taken…' (S6)
‘I'd say with reference to the communication skills, being able to get the practice in with simulated patients before was definitely beneficial rather than just getting straight out and interviewing a patient because the potential for mistakes is quite high.’ (S11)
Some students believed that authentic patients might not detect underperformance (because they expect competency); this created a sense of responsibility by contrast with the ‘safe’ experience afforded by interacting with SPs, which created a sense of performance. The following example shows how a student's self-confidence is affected by the performance she perceives the patient to expect:
‘…you know you can do it and you know that the patient's not going to know if you've done it wrong… when it's a normal patient… well they expect me to know what I'm doing, so… it's easier to have the confidence because there isn't somebody there to scrutinise you.’ (S12)
Other students were more cautious, voicing concerns about upsetting patients and crossing the expected norms of lay interactions, which might produce unpredictable reactions from patients:
‘You can't harm simulated patients… you can't really make them upset… whereas a real patient… they perceive us as doctors.’ (S13)
‘…there's a lot more to think about when you're with a real patient… you really are delving into their personal, private lives… whereas the simulated patients are told to react in a certain way, these patients could act any which way they want to… and you have to… go… a bit more cautious.’ (S14)
For some students, these unknowns are exciting and challenging; for others they are unsettling. A combination of simulation and authenticity was sometimes created by inviting authentic patients into classroom settings. These sessions were valued by the students and appeared to be viewed as less risky:
‘They were really useful. The fact that they were very willing to talk about their experiences and were willing if you asked them anything. Their answer would be fantastic. You didn't feel worried to ask them a question because of the environment we were in; it just felt very open and easy to talk to them.’ (S15)
Student perceptions of a gap between theory and practice
Some comments reveal a substantial gap from student perspectives:
‘…skills acquired in EL [experiential learning] are impossible to be applied on placement. EL and placement are completely different situations.’ (S16)
‘…they don't do it the way you teach us to…’ (S17)
‘…whereas a real patient obviously isn't [primed by the medical school]… so it just feels more like a real conversation… whereas I think with an SP obviously you're doing things to try and tick off the right things… what you learn would be quite different. On simulated patients you are basically practising what you have been taught during that session… what you should do with consent and so on… It's quite rigid.’ (S18)
The clearest example of a student-perceived gap between theory and practice related to the discussion of consent and confidentiality in the two types of interaction. In particular, the perception that the medical school was mistaken about the importance of consent and confidentiality was common amongst students. This was because although these aspects had been identified as important in the classroom, students had not seen placement providers explicitly talk about these issues at the start of every patient encounter in practice. Some concluded that SPs were following the medical school's rules, rather than representing a valid patient perspective:
‘I think simulated patients try to do things a lot more by the book, whereas real patients… they aren't as, you know, sort of straightforward as you might think… you wouldn't normally go through confidentiality with them and then consent and that sort of stuff, 'cause they just… they don't see it as being important, whereas simulated patients will … that's only probably because they've been told to… by the medical school.’(S19)
Despite the differing requirements for consent within clinical and primarily educational encounters (the latter type predominantly refers to early patient interviews involving novice learners), none of the student interviewees described considering such nuances. Very few students appeared to realise that practitioners often had continuing professional understandings with their patients, or that some patients might, in certain circumstances, see consent or confidentiality as of vital importance. A student may spontaneously draw the conclusion that real patients do not see confidentiality and consent as important, rather than considering alternative explanations such as, for example, that real patients believe the observation of good practice in these areas to be a given and, therefore, not to require discussion.
To interpret the meanings of the three themes identified, we have developed a conceptual framework suggesting alternative meanings for metaphors which refer to the notion of the ‘gap’ that teachers, supervising clinicians and learners might find useful in developing educational strategies for making sense of, and learning from, gaps or differences. The three themes identified in our secondary analysis can be conceptualised as contributing to an overarching theory–practice gap between simulated and authentic patient interactions. Our key finding is not that SPs are perceived differently from authentic patients (we suggest that this will be self-evident), but is a clarification of how students actively use their perceptions of difference to compare and contrast and so construct learning from their contemporaneous experiences. Our analysis identifies that students generate knowledge and meaning from their simulated and authentic experiences relative to each other, and that similarities or differences seen in the workplace reinforce or negate classroom learning in complex ways. When difference was identified during interactions with patients, students made meaning about what was ‘real’ in the workplace and what was important to the medical school faculty (identified through SPs and tutors who were perceived as agents of the medical school). Students found it difficult to suspend the sense of giving a performance in the classroom. Authenticity produced a contrasting sense of responsibility towards patients, although many students remained reluctant to be assertive about their learning needs. In authentic situations, students believed patients might not detect underperformance because they would expect competency. This meant that some students were actually more at ease during real patient experiences, but the associated responsibility caused others some discomfort.
We have interpreted student talk of the exemplar differences and resultant meaning making illustrated in the present data as representative of ‘gaps’ that require recognition and explanation if we are to maximise the learning opportunities to be derived from contemporaneous exposure to simulated and authentic early experiences. The ‘gap’ arises commonly in metaphors in both everyday language and the fields of medicine and medical education. For example, we may talk about the gap between theory and practice, that between expectations and achievement, or that between the teaching that is delivered and the learning that is generated. The notion of the ‘gap’ is present in communication skills literature[17, 18] and in clinical and teaching practice. Use of the concept of a physical ‘gap’ is often associated with solutions to remove the gap, or eliminate its effects, illustrated through the common use of phrases such as ‘bridging the gap’ or ‘closing the gap’ in everyday life. These terms suggest that gaps are conceptualised as sources of disconnection or risk, rather than as metaphorical spaces for development. This is by contrast with the work of Vygotsky, who conceptualised learning and meaning as social and cultural rather than individual processes. He describes a metaphorical gap or space (the zone of proximal development) to define the additional potential a learner has to expand understanding, through interaction with other agents and structures, beyond what might be achieved alone. To understand and explain gaps requires a critical approach to the purpose of metaphor, and consideration of whether different meanings could underlie the metaphor. Our interpretation reframes the meaning of metaphors of the gap to develop educational strategies for teachers and learners.
Educational opportunities in the theory–practice gap
Both simulated and authentic patient–student interactions are social practices: they are contextual events, occurring in space and time, in which people interact with one another, with artefacts and with the environment for learning purposes. We have already drawn on the work of Dieckmann et al. by building on their use of the term ‘fiction contract’ to describe how participants who suspend disbelief and conduct simulated interactions as if they are authentic may benefit more in terms of educational value. In addition, we suggest that the educational value of both simulated and authentic interactions may be synergistically increased through explicit attention to, and discussion of, difference. To date, few studies have directly compared the two educational settings. Our findings demonstrate that students continually make comparisons for themselves, and that the spontaneous meanings of difference which students construct can lead to a process of ‘competitive contrast’, in which the student rejects learning constructed from simulation that appears to conflict with the practice he or she observes in authentic workplaces. Exposure to both modes of teaching could be better used to expand overall learning by actively encouraging students to critically appraise their simulated and authentic experiences in comparison with each other, asking why difference occurs and seeking to assimilate and accommodate the resulting understanding into their evolving conceptual frameworks of good clinical practice.
Moving from ‘competitive contrast’ to ‘constructive comparison’ of difference
Theoretical and empirical evidence in other areas of medicine has previously shown that reasoning and meaning making often involve the use of comparison and contrast.[4, 23-25, 43, 44] In spontaneous processes of meaning making, difference is more striking than similarity.[45, 46] Figure 1 summarises our evolving conceptual framework of the two teaching environments and the physical, intellectual and emotional gaps between them. We propose that these gaps, or at least the ‘solutions’ to them, must be reconceptualised to maximise the educational value of students' concurrent engagement in, respectively, simulated and authentic patient interactions. Metaphors can be helpful in conceptualisation, but can also lead to assumptions of common understanding rather than to discussion of what different people perceive the work of the metaphor to be. Rather than seeking to ‘close’ or even to ‘bridge’ the gaps, we suggest, based on our findings, that educators – within medical schools and workplaces alike – in collaboration with their students, need to ‘mind’ these gaps, or to acknowledge them by thinking differently and critically about them.
In order to move students' learning from the ‘competitive contrast’ of ideals with the pragmatic and nuanced realities of workplace learning (which, as the present data show, results in the rejection of these ideals), we need to develop educational strategies which allow students to make ‘constructive comparisons’ and to generate learning from differences. This finding is not dissimilar to that observed in general practice clerkships by van der Zwet et al., who describe how developmental space is needed to learn and develop a professional identity. Space is created when context and interactions with others allow students opportunities to ‘mind their learning’ with educators' support.
Practical implications for educators
We suggest that educators need to be mindful of gaps between student experiences of, respectively, simulated and authentic patient interactions. The educator has a role to play in driving a continual cycling of constructive comparison (indicated by the arrows in Fig. 1 and the panel describing the educator's role). We suggest the following strategies for putting mindedness into practice.
Don't ignore the gap: ignoring a gap risks paradoxical meaning making, the rejection of ideals in the face of contrast in reality, and the creation of dichotomies and misunderstandings. For example, Kneebone, who has written extensively and thoughtfully about the use of simulation in surgery, used the ha-ha wall as a metaphor to elucidate the different perspectives of novices and experts in order to illustrate the dangers of ignoring a gap.
Manage the gap: educators who recognise and understand gaps can work collaboratively with students to discuss perceived differences and make constructive comparisons. This requires explicit expectation of difference, making the educator's role one of facilitating the student's making of meaning, which includes encouraging the student to theorise about how and why identified differences occur. It also requires the educator to acknowledge that placements may require a level of adaptability and self-directedness over and above that which students may have needed in the classroom, and to provide the necessary support for students during the process and debriefing elements of their interactions, without relying solely on preparedness.
Use it – being ‘mindful’ of the gap: the use of Epstein's term is intentional. Mindfulness can be considered an element of students' reflective practice that leads to personal and professional development. Tutors also need to be mindful, however, of how they portray the other side of the gap and of their potent effect as role models (both positive and negative). Regardless of the quality, breadth and depth of the ‘communication skill toolkits’ offered to students in classrooms, simulation cannot achieve the same potency as exposure to the daily professional practice of qualified clinicians.
Strengths and limitations
Qualitative research studies usually produce data that exceed the researchers' original purpose and that generate interesting findings beyond the specific research questions for which the study was designed. Seeking to interpret the data rather than simply to confirm expected findings represents a marker of robust and rigorous qualitative analysis. It is therefore important that any unexpected findings that are identified are given due consideration. Secondary analysis provides a mechanism for this. The differences noted by students impact on their learning in either setting and we found that students actively construct meanings to explain these differences. Our research questions focused not on whether students perceive difference, but, rather, on how students perceive differences and what effects these perceptions have on their learning. The congruence and replication of findings within our data can be considered as representative of a form of triangulation through the process of secondary analysis. However, there are also potential limitations to our work. A secondary analysis (or meta-analysis or systematic review for that matter) will depend, at least in part, on the quality of the original studies, although, as we returned to the original data, our analysis was not dependent on any pre-existing interpretation. Data from all three original studies[11, 39, 41] were derived from the same UK medical school. It is possible that elements of the findings represent the circumstances of that particular school. Yardley and Lefroy sampled the same cohort in 2007/2008 (the first cohort to undertake a new curriculum in the medical school) and therefore the views of this cohort may not be representative of those of other years once the curriculum had become embedded. We also do not know what impact arose from the fact that two of the authors (AWI and JL) taught on the classroom communication skills programme at the time of the research.
We are, however, reassured by the congruence of their findings with those of Yardley, who was then known to students only as an education researcher. It is possible to construe the sequential participation of some students within the data as either a strength or a weakness. Some of the original studies deliberately generated data longitudinally.[11, 39] Within the secondary analysis, it could be argued, for example, that this overlap is a strength as the consistency of views across studies suggests students were not simply trying to please particular researchers or meet particular study expectations. As with any secondary analysis, we cannot know if our participants might have offered different perspectives or different explanations for their handling of perceived gaps had such questions been directly put to them in a primary qualitative study. This area requires further exploration.
‘Minding the gap’ is an interpretative metaphor that we offer on the basis of our analysis. We do so to suggest that students will construct meaning in the gaps they perceive between classroom and authentic practice out of an intrinsic human desire to reconcile or explain lived experiences. Metaphor is defined as understanding one conceptual domain (the target domain) in terms of another conceptual domain (the source domain), which leads to the identification of a conceptual metaphor. The metaphor itself may not be spoken out loud but may be apparent (e.g. in our data, the phrase ‘they don't do it the way you teach us’ (S17) clearly illustrates the presence of a conceptual gap even if this is not explicitly named as such) or interpreted in interactions between people, such as teachers and learners. It is important to pay attention not just to what is said or not said, but to how and why it is said in order to more fully understand the meaning of the utterance for the speaker.
This research shows that learning context is significant, but also that different contexts can be positively contrasted by students to potentiate learner-created meaning. We have generated a conceptual framework that challenges people to think critically about the use of gap metaphors and what they personally mean when invoking the notion of the gap as a metaphorical tool. We hope our suggested educational strategies will be of practical use for teachers and supervisors engaging in simulated and authentic patient experiences with students by providing them with insight into students' perceptions and reasoning. In the medical school at which this work was conducted, Year 1 students are now explicitly briefed to think about the ‘gap’ between simulated and authentic patient interactions and are given guidance on how the recognition of differences may be an opportunity to extend their learning.
It is important to recognise that interactions in both simulated and authentic contexts can be subject to complex interpretations by students. We should neither reject simulation as lacking in reality nor be seduced into expecting it to resolve all the challenges of developing effective communication skills in practice. Instead, we should seek to find ways of minding the gap to increase the learning potential of concurrent simulation and authentic experiences.
In order to clarify how gaps between theory and practice influence learning, and whether more specific discussion of differences is beneficial, further research is required. We hope that the concept of ‘minding the gap’ might be considered more widely and in other contexts in order to explore whether it has the potential to encourage the development of transferable learning. Further studies might also usefully consider how student expectations of contemporaneous interactions with simulated and authentic patients are formed and whether interventions might target this process to further potentiate the development of communication skills. The conceptual framework and educational strategies we suggest need to be tested in practice by teachers and learners for utility, and their outcomes and impact should be evaluated through further research. In addition, research to understand gaps between theory and practice in other contexts might usefully contribute to understanding of the importance of differences for students in shaping their learning.
SY conceived the idea for this paper based on her observations of difference, contrast and comparison in data generated as part of her doctoral thesis on authentic early experience. Her work led to the development of theory using the concept of a ‘gap’ that might be used to potentiate educational value. SY wrote the first draft of the paper, contributed to the analysis and integration of data from all three of the earlier studies for this purpose and finalised the submitted version. AWI developed strategies for ‘minding the gap’ from themes within the background reading for her master's dissertation. She analysed data, supplied references, and contributed to the drafting, redrafting and proof-reading of this manuscript. JL contributed to the collation and analysis of research data, the formulation of the conclusions, and the drafting and revision of this paper. All authors approved the final manuscript for publication.
Conflicts of interest
Prospective ethical approval was gained from Keele University School of Medicine Research Ethics Committee for the elements of work that exceeded normal procedures for evaluation of the curriculum (for which ethical approval is not currently required in this setting).