Abstract


Medical Education 2011: 45: 636–647

Context  Conceptualisations of self-assessment are changing as its role in professional development comes to be viewed more broadly as a process that must be both externally and internally informed, through activities that enable access to, interpretation of and integration of data from external sources. Education programmes use various activities to promote learners’ reflection and self-direction, yet we know little about how effective these activities are in ‘informing’ learners’ self-assessments.

Objectives  This study aimed to increase understanding of the specific ways in which undergraduate and postgraduate learners used learning and assessment activities to inform self-assessments of their clinical performance.

Methods  We conducted an international qualitative study using focus groups and drawing on principles of grounded theory. We recruited volunteer participants from three undergraduate and two postgraduate programmes using structured self-assessment activities (e.g. portfolios). We asked learners to describe their perceptions of and experiences with formal and informal activities intended to inform self-assessment. We conducted analysis as a team using a constant comparative process.

Results  Eighty-five learners (53 undergraduate, 32 postgraduate) participated in 10 focus groups. Two main findings emerged. Firstly, the perceived effectiveness of formal and informal assessment activities in informing self-assessment appeared to be both person- and context-specific. No curricular activities were considered to be generally effective or ineffective. However, the availability of high-quality performance data and standards was thought to increase the effectiveness of an activity in informing self-assessment. Secondly, the fostering and informing of self-assessment was believed to require credible and engaged supervisors.

Conclusions  Several contextual and personal conditions consistently influenced learners’ perceptions of the extent to which assessment activities were useful in informing self-assessments of performance. Although learners are not guaranteed to be accurate in their perceptions of which factors influence their efforts to improve performance, their perceptions must be taken into account; assessment strategies that are perceived as providing untrustworthy information can be anticipated to have negligible impact.


Introduction


Conceptualisations of the role of self-assessment in professional practice are changing. Once seen as an individualised, inward-looking activity, self-assessment in professional development is now more broadly viewed as externally informed through activities that enable access to and integration of data from external sources.1–5

The concept of self-assessment as being informed or guided is not new. Boud described self-assessment in educational settings as an externally informed process: it requires drawing upon both internal and external data about one’s performance and comparing these with a standard, implicit or explicit, in order to judge how one is doing.1 Informed facilitation is also required to guide learners in the process and to ensure that it is a meaningful learning experience rather than a superficial activity.1,6,7 Similarly, Epstein et al.3 defined self-assessment within the context of clinical performance as both an externally and internally informed process of interpreting data about one’s performance and comparing it with an explicit or implicit standard.

Drawing upon these perspectives, we recently used qualitative inquiry to develop a framework of ‘informed’ self-assessment in an effort to better understand how external data influence self-perceptions of clinical (workplace) performance.8 Informed self-assessment emerged as a dynamic, complex process comprising five inter-related elements: (i) accessing, (ii) interpreting and (iii) using internal and external information to inform self-perceptions. These three components were seen to be further influenced by two other elements: (iv) multiple internal and environmental conditions, and (v) tensions created between competing data sources and conditions. Reflection appeared to be integral to informed self-assessment, especially during the interpretation phase.8

Further understanding of informed self-assessment as a complex and dynamic process can be drawn from social psychology and that field’s recognition of the critical interplay between external and internal factors and the influence of this dynamic interaction upon individual behaviour.9 The three basic tenets of social psychology are: (i) situational (external) factors are powerful influences on behaviour; (ii) the individual’s perceptions of situational factors are influential, and (iii) the social system and individual perceptions are in a dynamic state of flux and are constantly changing. Hence, making inferences about individual behaviours is complex and difficult, and we need to better understand the perceptions of individual learners in order to understand which sources of external information are likely to impact upon their efforts to improve performance.

From this conceptual framework, the purpose of this paper was to increase understanding of the specific ways in which undergraduate and postgraduate learners perceive themselves to use specific learning and assessment activities to inform self-assessments of their clinical performance. Building on the previously published framework of informed self-assessment, we explored how a variety of specific activities aimed at stimulating learners’ reflections on their performance are perceived to influence their self-perceptions and efforts to improve performance. These activities included: compiling learning portfolios (collections of learning and assessment experiences and reflections upon them);10,11 designing personal learning plans (plans for one’s learning based upon assessed performance gaps, often developed as part of a portfolio);12 using outcome objectives and competencies (formal assessment of one’s performance against external standards);13 using multi-source feedback (MSF) (formal feedback about behaviours from multiple reviewers used to inform perceptions of performance);14 auditing one’s patient records with feedback and gap analysis,15 and using questionnaire-based self-assessments of clinical performance.16

The specific research questions were:

1. What features of activities intended to promote reflection and self-direction in clinical performance did learners find most useful to inform their self-assessments?
2. What influenced the perceived usefulness of these activities in informing self-assessment?

Methods


We selected five learner programmes (three undergraduate, two postgraduate) in four countries (the UK, the USA, the Netherlands, Belgium) identified by research team members who were familiar with the programmes and based upon the degree of rigour of their formal self-assessment activities. To determine the rigour of these activities, we considered the criteria for well-informed self-assessment, such as the availability of performance data and standards, and the opportunity for learners to reflect upon and interpret their data in comparison with those standards.1,3 Selected activities included portfolios, personal learning plans, outcome objectives and competencies, and formalised practice improvement using audit and feedback of one’s own patient records. Table S1 (online) shows a brief description of each selected programme. Research ethics approval was obtained according to the protocols of participating institutions’ ethics review boards.

We invited learners to attend one of two focus groups for each programme to explore their use of formal and informal activities to inform self-assessment of clinical performance. Programme managers distributed invitations to learners in clinical settings at their major teaching centre; participation was voluntary. No incentives to participate were offered. Focus groups were held at the learners’ respective education sites. All were conducted in English and facilitated by one member of the research team, an experienced qualitative researcher not connected to any of the programmes. The facilitator was assisted by either a member of the research team associated with the learners’ programme (two programmes) or by a research associate from that site familiar with the culture and programme (three programmes).

We asked learners to describe their general perceptions of and experiences in self-assessment, and their use and perceptions of the specific programme activities designed to support reflection and potentially inform self-assessment. We also asked them to describe the extent to which other formal curricular activities, such as workplace assessments, and informal activities, including feedback from supervisors, peers and patients, informed their self-assessment. (Appendix S1 gives a list of initial focus group questions.) Discussions were recorded and transcribed. The transcripts of groups in which English was not the participants’ first language were reviewed to ensure accuracy by the research associate who assisted with the groups and was fluent in both languages.

According to the tradition of grounded theory, we modified questions in response to themes emerging from or potentially overlooked in previous focus groups and conducted the analysis iteratively as a team.17 Once an initial coding framework had been generated, at least five team members individually read each transcript and met through a series of teleconferences to discuss and confirm emerging themes, revise the framework and resolve differences in interpretation. We used NVivo Version 8 (QSR International Pty Ltd, Doncaster, Vic, Australia) to manage data and the emerging coding framework. We held two face-to-face meetings, including an initial brief meeting (2 hours) to confirm emerging themes and form working subgroups to conduct detailed analyses of related categories of data, and a subsequent 2-day meeting to explore the detailed analyses in depth, investigate them across groups and activities, and construct a preliminary conceptual diagram of findings. As a second phase of analysis, we used a comparative approach to compare and contrast participant learners’ descriptions of their experiences and perceptions of informed self-assessment across activities. The purpose was to develop in-depth understanding of three central concepts: the degree to which the activity was perceived as being useful to inform learners’ self-assessments; learners’ perceptions and reasoning as to the conditions influencing its usefulness, and, finally, an overarching comparative perspective of strategies and conditions for informing learner self-assessment. Saturation with respect to these three concepts was reached, across groups and across activities, allowing us to offer insights into the nature of these perceptions as a whole.

Results


A total of 85 learners (53 undergraduate, 32 postgraduate) participated in the 10 focus groups. The three undergraduate programmes were based in the UK, the Netherlands and Belgium (University of Manchester, Maastricht University and University College Arteveldehogeschool, Ghent, respectively). The two postgraduate programmes were based in the UK and the USA (University of Manchester and University of Pennsylvania, Philadelphia, PA, respectively). Groups ranged in size from six to 12 participants. Focus groups varied with regard to the degree to which they were representative of the gender distribution of students or trainees within their programmes; the gap between the proportion of women in our sample and that in the respective programme ranged from 0% to 20% (Table S1). All participants in all groups took part in the discussions. The involvement of research team members associated with the participants’ programmes did not appear to dampen discussions. Learners engaged in critical discussions of their programmes’ self-assessment activities or approaches, and their implementation, contributing both positive and negative views, experiences and emotional reactions.

Despite the diversity of programmes with respect to geographic location, level of learner and the activity used to inform self-assessment, data analyses led to generally consistent findings across groups and activities. Of note, two overarching findings emerged. The first was that the effectiveness of formal and informal assessment and instructional activities in informing self-assessment appeared to be both person- and context-specific. No curricular activities were considered either generally effective or ineffective and the effectiveness of each was perceived to be moderated by personal and contextual influences. That said, the availability of explicit performance standards and credible objective performance data was generally thought to increase the effectiveness of an activity. The second overarching finding was that the informing of learners’ self-assessment required the engaged and informed effort and guidance of supervisors.

In the following sections we discuss these findings under three categories: structured activities for which programmes were selected for this study; other formal assessment activities, and informal activities. Table 1 summarises the findings by highlighting the factors perceived to influence usefulness. It also maps the influences to the overarching findings described above.

Table 1.  Use of curricular activities for self-assessment purposes, factors influencing their effectiveness, and mapping of factors to overarching findings

Formal activities for which programmes were selected for this study

 Portfolios and personal learning plans
  Factors enhancing effectiveness: supervisors’ informed engagement, feedback and mentoring; supervisors’ understanding and valuing of the activities for learner development; clear curricular objectives and standards for self-assessing performance
  Factors limiting effectiveness: personal learning styles inconsistent with portfolio use*; tensions created by the activities’ dual purposes (summative versus formative assessment)*; perceptions of the activity as contrived and superficial*

 Outcome objectives and competencies
  Factors enhancing effectiveness: explicitly and clearly stated; supervisor’s knowledgeable and engaged use; supervisor’s narrative feedback based on direct observation
  Factors limiting effectiveness: (none reported)

 Patient record audit and feedback (PIMs)
  Factors enhancing effectiveness: objective performance data and measures for self-assessment
  Factors limiting effectiveness: time-consuming*

Other formal programme activities

 Workplace clinical assessments: MSF
  Factors enhancing effectiveness: narrative feedback from supervisors observing learner performance
  Factors limiting effectiveness: receiving only numerical scores*

 Workplace clinical assessments: DOPS, mini-CEX, placement evaluation questionnaires
  Factors enhancing effectiveness: (none reported)
  Factors limiting effectiveness: lack of supervisor engagement; superficial assessment*

 Objective structured clinical examinations
  Factors enhancing effectiveness: formative OSCEs with immediate feedback and supervisor modelling of desirable performance
  Factors limiting effectiveness: differing performance standards between the ‘test’ (OSCE) and ‘real’ (clinical workplace) contexts

Informal activities and data sources

 Formative feedback from supervisors
  Factors enhancing effectiveness: specific, timely, direct, respectful feedback based on observation; learner initiative to obtain feedback*
  Factors limiting effectiveness: lack of formative feedback

 Patients as information sources
  Factors enhancing effectiveness: patient insights via questionnaires can add another perspective on performance*; objectivity of patient progress and outcome data for measuring one’s performance (see PIMs, above)
  Factors limiting effectiveness: scepticism regarding the patient’s ability to knowledgeably judge performance*; lack of consistent access to patient outcome data

 Peers as information sources
  Factors enhancing effectiveness: accessible, supportive, useful data obtained from a group of peers*
  Factors limiting effectiveness: questionable validity of data in some instances

Mapping to overarching findings:
* Effectiveness of each activity in informing self-assessment was perceived as moderated by personal and contextual influences
† Explicit performance standards and credible/objective performance data increased the perceived effectiveness of activities
‡ Informing learners’ self-assessment required the engaged and informed effort and guidance of supervisors

PIMs = performance improvement modules; MSF = multi-source feedback; DOPS = direct observation of procedural skills; mini-CEX = mini-clinical examination; OSCE = objective structured clinical examination

Perceptions of formal programme activities designed to enhance reflection and self-directed learning

Portfolios and personal learning plans

The three undergraduate programmes and one postgraduate programme used portfolios. Learners held conflicting views of their usefulness for informing self-assessment, ranging from a ‘make-work’ project having little benefit (e.g. ‘…a lot of the portfolio I do because I feel I have to…’ [Undergraduate B2]) to a useful activity for informing self-assessment and growth. Many suggested that, at the very least, the portfolio was useful as a record of performance over time and a number of learners identified that reviewing and seeing their progress increased their confidence:

‘Until this year when I had my portfolio review, I never really saw the point, it was just an extra piece of work to do, a hassle. And then when I had my meeting with my portfolio tutor and I actually read back some of the things that I wrote in first year and actually saw how much I’d learned and progressed and grown in confidence since then, I realised how important it actually was to do that, so you can see how far you’ve come. It’s quite a good confidence boost.’ (Undergraduate C9)

For some, keeping a portfolio stimulated reflection on specific incidents and progress. Several observed that writing can facilitate a deeper level of self-analysis and benefit:

‘Formalising, writing something down, although it seems like a pain gets you to think more deeply, like, “Well, I was really annoyed in that situation. Why was I annoyed?” And you do actually sit and think, “What have I learned from it? I must have learned something.” It makes you look deeper into the situation. I think that makes me a better doctor because I’ll use that information for the future.’ (Postgraduate E1)

Others found the portfolio less useful, suggesting that ‘one size’ did not fit all. Some described reflecting every day ‘in their heads’ and not needing to write the reflection down, whereas others suggested that reflecting was not ‘their learning style’. Others identified tensions arising from the portfolio’s multiple purposes in contexts in which the portfolio might be used for learning and for formal assessment and, in two programmes, for scrutiny by prospective postgraduate placement coordinators or employers. They reported that formal assessment and external scrutiny diminished the personal reflective and learning value of the portfolio. Perceptions that supervisors evaluated the ‘thickness of the paper’ rather than the nature or quality of its content and the reflections it contained also reduced the portfolio’s perceived helpfulness. Lack of clear performance standards was considered problematic. The availability of specific target tasks, competencies or standards upon which to reflect and with which to compare one’s performance was considered to enable more productive reflection and self-assessment.

Personal learning plans were one component of portfolios. Learners were required to reflect upon their progress, identify overall strengths and weaknesses, and develop a plan for continued learning and improvement. Some learners found this more global self-assessment of progress more helpful than reflection on individual incidents. They observed that documenting personal learning plans could force periodic self-assessment of progress:

‘…kind of forces you to think about this placement, what did I want to get out of it? And you know, maybe I could be a bit more proactive in this area... I’ve only got a month left and I still haven’t done four out of five of the things that I said I was gonna do at the beginning.’ (Postgraduate D3)

For both personal learning plans and portfolios, learners perceived that their effectiveness in informing self-assessment and development was moderated by contextual influences. A sense that supervisors did not understand and value these activities for learning and development decreased their perceived usefulness. More positive influences included mentoring by and the provision of informed feedback from engaged supervisors, the availability of clear curricular objectives and standards, and opportunities for learning experiences provided by the placement.

Outcome objectives and competencies

Two undergraduate programmes were selected because their curricula included specific standards (i.e. detailed outcome objectives or competencies required for completion of specific programme levels). These contrasted with the lengthy formal lists of curriculum objectives undifferentiated by educational level (e.g. for an entire residency programme). One programme used an electronic list of outcome objectives and activities for its first clinical rotation. Students used this to guide their selection of learning experiences and as a checklist to ensure they were accomplishing all they should. The second programme used a framework of 22 outcome competencies levelled and detailed for each clinical placement and programme year. Learners were responsible for ensuring that they completed required activities and were observed and assessed for each competency.

Learners in both programmes agreed that having explicit objectives and competencies against which to measure themselves was helpful to gauge and inform their progress.

Explicit outcomes encouraged self-directed learning by providing standards for self-monitoring in specific clinical practice and over time. Learners used them to identify knowledge gaps in their clinical experiences and problem-based learning activities, and to set personal learning objectives for specific experiences in the immediate future. They appeared to be helpful in guiding reflection on clinical performance and progress in meeting outcomes, and reportedly stimulated deeper reflection and cued some learners to seek additional practice opportunities or learning experiences.

Learners in the second programme were additionally required to seek and obtain signed narrative assessment of their performance from the supervisors observing them. They generally described these assessments as valuable sources of information for self-assessment, particularly if supervisors were knowledgeable of standards and engaged in their learning and assessment:

‘[They] write: “This was good, this was good, this was good,” and not: “They did that, it was ok, bye.” ... [They write] more specific information.’ (Undergraduate G1)

‘Then you also know that they’ve really taken the time.’ (Undergraduate G2)

Patient record audit and feedback

Performance improvement modules (PIMs), which represent a formal audit, feedback and improvement process, were used by one postgraduate programme. For the PIM, residents conducted a retrospective patient record audit of a clinical population (e.g. patients with diabetes) and compared their practice results with published clinical practice guidelines (i.e. a standard). Residents participated as groups in ambulatory care clinics, in which each resident contributed five patient record audits.

Residents reported that having access to patient outcome data was helpful in informing their performance and their assessment of it. Comparing real patient data with clinical practice guidelines identified gaps in their practice and specific improvement needs:

‘And then, retrospectively we tried to find out what we needed to do better. What were we lacking in? We found that smoking cessation was something we weren’t addressing. A pap smear was another, we needed to bring it up more often and make sure it’s done annually. Then we put that prospectively and said, okay, these are the things we need to do better and let’s address them more actively.’ (Postgraduate O2)

The objective quality of the data captured the attention of the recipients, making them aware of deficits in performance compared with evidence-based standards and motivating change:

‘[To] have real numbers, real results in front of you, then you go, “Oh wow, I really do have to work on this.” Because you may not... say I need to work on that. You may not even know how significant it is.’ (Postgraduate Q8)

‘Right, I agree... You need objective data so you can say, “I’m not making the cut.”’ (Postgraduate Q3)

Participating in the PIM process reportedly fostered habits of mindful practice; the focus on guidelines and patient data served as a reminder. Although it was a time-consuming process, most residents saw the value of systematically collecting practice-level patient data to inform performance self-assessment. Some even reported being eager to mine electronic data sources more routinely in order to access larger patient samples and compare results with those of peers.

Perceptions of the role of other formal programme activities in informing self-assessment

Workplace clinical assessments

Learners in one undergraduate and one postgraduate programme used MSF, a formal workplace assessment activity, to inform their progress and self-assessments. They included their MSF reports and reflections in their portfolios. They spoke of valuing the scope of the feedback received over time from different health professionals and regarding different competencies (e.g. clinical, professional, communication-related). Some expressed surprise at the level of detail of feedback received:

‘I was surprised by how much stuff people wrote, and how insightful it was. I just thought people wouldn’t be bothered and they’d just write, oh, “Very good.” But people had... really written stuff that I hadn’t realised that they’d paid that much attention to, to be able to make those comments about my performance.’ (Postgraduate D3)

Although some learners expressed initial disappointment with negative MSF, they noted that narrative comments, not scores, were more effective in stimulating reflection and informing self-assessment as they provided specific information. In fact, learners who did not receive narrative comments expressed disappointment, even with an otherwise positive evaluation. As for other external assessment reports, learners found thoughtful feedback from those who had observed their performance to be most helpful in informing self-assessments.

Two other workplace-based assessments, the direct observation of procedural skills (DOPS) and the mini-clinical examination (mini-CEX)15,16 were described by learners in one postgraduate programme. They reported that these assessments were intended to provide immediate feedback on clinical activities, but that their usefulness in informing self-assessment depended on several external factors. Lack of supervisor engagement and motivation to use the assessments for performance improvement appeared to lead to their trivialisation:

‘But the fact that I just get someone to sign a form that’s watched me do it, to say I’ve done it, it’s just jumping [through hoops], you know, it’s just formalising it.’ (Postgraduate D1)

‘And that’s what these tick-boxes things do actually, isn’t it? As long as you’ve got two DOPS and two mini-CEXs every placement, you’re okay. But actually those DOPS or CEXs might have been on something really simple and you’re not improving your clinical skills, but as long as you’ve ticked the boxes… fine.’ (Postgraduate D6)

Similarly, residents who received numerical ratings on their regular clinical placement evaluation questionnaires queried the sincerity and relevance of their supervisors’ scoring. It appeared that scored checklists risked superficial use by supervisors, rather than the engaged assessment that is intended to help learners improve.

Objective structured clinical examinations

Learners in one undergraduate and one postgraduate programme used objective structured clinical examinations (OSCEs) to inform self-assessments and learning. Students described experiences in which more senior students prepared them for formal OSCEs designed to assess their clinical skills. They observed that although the feedback of student supervisors was helpful in informing their progress and self-assessments, procedures learned and examined in OSCEs represented the reference standard of the theoretical world and were not always readily transferable to busy workplace clinical units:

‘What actually happens on the wards is very different to what we’re taught to do in an OSCE exam.’ (Undergraduate C6)

Residents in one programme compared the usefulness of two types of OSCE for self-assessment and learning. The traditional model was the annual summative evaluation multi-station OSCE. Residents received a score and no other feedback. When they suggested to their supervisors that this format limited opportunity for learning from the OSCE experience, the supervisors responded by designing a series of monthly, single-station formative OSCEs depicting common clinical situations. Each resident was observed and rated by supervisors, who provided immediate feedback for improvement. One supervisor then modelled the patient interaction and clinical management. Residents unanimously found this most helpful in informing their self-assessments and development as they received immediate and specific feedback from the supervisor observing their performance and were also able to observe ‘good’ performance (i.e. a demonstration of what was expected of them).

In summary, formal performance assessment approaches using checklists and scores appeared vulnerable to trivialisation. They tended to become ritualised and superficial and to represent a matter of ‘jumping through hoops’ rather than providing meaningful feedback to inform self-assessment and guide improvement.

Perceptions of the role of informal activities in informing self-assessment

Formative feedback from supervisors

Learners generally spoke of a lack of direct verbal formative feedback from their supervisors on a day-to-day basis. They believed such feedback was necessary to inform their perceptions of how they were doing and expressed concern that, without it, they might remain unaware of any need to improve:

‘But that’s something I find throughout the whole of medicine, I’m continuously unnerved by the lack of feedback that we get all the time... Not really knowing whether you’re sinking or swimming.’ (Postgraduate E4)

Similarly, a student highlighted the lack of formative feedback for improvement by comparing it with the high degree of feedback received when learning another complex task, such as how to drive a car:

‘The difference between learning to drive and learning to perform medicine is, in learning to drive you have always someone sitting next to you who can correct you.’ (Undergraduate I7)

Learners at all levels consistently described the lack of regular formative feedback as a barrier to their informed self-assessment. Without feedback, they did not know if they needed to improve or how to improve. For example, learners described the lack of feedback on the accuracy of their patient admission history and physical examinations as being especially problematic. Junior residents in one programme reported that the only way to determine whether they had done these correctly was to overhear senior residents and staff talking about their or other learners’ work (e.g. ‘...when you’re in the ward and you hear somebody else say “That’s not good” about someone else’s work…’ [Postgraduate E3]). They also reported that such comments were often made in a critical, disrespectful manner, which was unhelpful and embarrassing for the subject. When patients were transferred to another unit, learners reported having to check patient records on the hospital electronic medical record system to determine if their findings had been confirmed by other, more senior practitioners because they had not received direct feedback.

Feedback from supervisors, when received, was often not specific enough to guide the learner’s self-assessment and performance improvement. When they did offer feedback, many supervisors tended to provide general reassurances, such as: ‘I haven’t heard anything, you’re doing fine.’ Learners responded to this in varying ways. For some it was reassuring, but most were cynical of its value as they believed that their supervisors had not observed their performance adequately and were not sufficiently informed about their progress to judge how they were doing. Residents described having to interpret lack of direct feedback from attending doctors (e.g. when an attending doctor added nothing further to a resident’s management plan) as confirmation or validation of their performance.

Alternatively, a number of learners recognised that personal initiative was required to obtain feedback; engaging their supervisors required effort, as some felt feedback was not available unless they actively sought it. When they did seek it, they found that their requests might or might not be received positively. They struggled to find a balance between obtaining feedback and risking their seniors’ annoyance. Further, the clinical subculture influenced learning and the seeking of feedback. Students cognisant of power imbalances in hierarchical clinical settings found asking for feedback from certain individuals difficult; they reported feeling more vulnerable when seeking feedback in some specialties than in others. Learners at all levels described having to learn to ask for feedback. Asking for focused feedback was usually productive and was a way of engaging the supervisor:

‘But you also have to learn how to ask for feedback. Because I found it difficult at first, but no, it’s really easy. I know they don’t mind. Most of them even find it nice or they even do want to give me feedback now.’ (Undergraduate I6)

Patients

Learners described three ways in which patients informed their self-assessments: structured approaches, such as the patient questionnaire components of the PIM and MSF and written reflections in portfolios; informal patient feedback, and access to patient progress data. With regard to structured approaches, residents described receiving surprising feedback from patient questionnaires that identified needs for practice improvement. With reference to portfolios, students and residents spoke of learning through their reflections upon critical patient incidents recorded in their portfolios. That said, some learners also expressed doubt about the capacity of patients to directly provide knowledgeable feedback to inform their performance:

‘I have the feeling that patients judge you on how nice you are.’ (Undergraduate H4)

Participants described informal ways in which patients provided feedback for self-assessment and learning. For example, a resident who reported finding little useful information in his formal clinical evaluations interpreted a family’s response to his discussion about end-of-life care as feedback on his communication skills:

‘If I’m trying to discuss end-of-life care with a family and they get upset – like I probably did something poorly there. That’s sort of the feedback that we get [from patients], I think.’ (Postgraduate Q5)

A student described using intentional learning from patients to inform his self-assessments and improve:

‘If you asked me how I self-assessed myself, I took a lot of histories... I would just pick patients randomly off the ward, go and see them, take a history, maybe do an examination if I thought it was necessary, and see if I could diagnose them without looking at the notes [in the patient record]. If I got it right, then I would say I was getting better; if I was miles off I knew I was horrendous, but that was really the only way I could find to self-assess myself in the skill.’ (Undergraduate C1)

Accessing patient progress and outcome data was the third manner in which learners used patients to inform their self-assessment of the care they provided. Access to these data was variable. In acute care units, particularly in intensive care units, patient progress data were readily available via patient-by-patient debriefings with the health care team and patient records. In out-patient clinics and upon patient discharge, outcome data were not available and hence feedback by which to assess the impact of one’s actions was more limited. Learners lamented the lack of patient outcome data to inform their assessments of the accuracy of their diagnostic and treatment activities.

Peers

For both students and residents, peers were important sources of support, feedback, guidance and performance data. They spoke of informal, frequent meetings with peers to discuss problems encountered, provide and obtain emotional support and reassurance, and give one another feedback. For example, one group met weekly for discussions that revolved around sharing their experiences and, particularly, around how they managed problematic situations when they were unable to reach their supervisor:

‘It’s just talking about what you experienced the whole week. And it’s not even trying to get some solutions, but more “I do it this way, she does it that way”.’ (Undergraduate I9)

Discussions with peers were valuable for reflecting upon, benchmarking, confirming and informing one’s performance. Peers also enabled the processing of emotional reactions to clinical experiences, helped put experiences into perspective and appeared to serve as motivators:

‘But I think a lot of it is learning by doing. And part of that is gauging yourself. That’s one of the reasons you go to your peers. You want to rise to the level of your peers and, and they can challenge you. That’s why it’s nice to be with a strong group of people if you’re weak at something.’ (Postgraduate Q1)

Some learners recognised risks in relying solely on peer feedback, comparing it with ‘the blind leading the blind’ (Postgraduate Q7). However, peers appeared to be significant and sometimes represented the primary sources of feedback.

Discussion


We framed this study within the construct of ‘informed’ self-assessment, a dynamic process in which individuals differentially access, interpret and use internal and external information to inform self-perceptions of their performance, a process influenced by multiple factors and related tensions.8 We also drew upon earlier work in which the processes of interpreting external and internal data about one’s performance and comparing this performance with an explicit or implicit standard have been described as integral to informed self-assessment.1,3 This study aimed to extend understanding of learners’ perceptions of the usefulness of specific activities intended to promote reflection and inform self-assessment, as well as to explore the particular factors perceived to moderate their usefulness.

Overall, participants generally perceived the effectiveness of both formal and informal assessment and feedback activities as being both person- and context-specific. There appeared to be no generally effective or ineffective strategies. The success of each approach, as seen through the learners’ eyes, was moderated predominantly by external factors, mainly by supervisors’ engagement in their learning and improvement, awareness of appropriate performance standards, and skill in facilitating specific approaches and providing feedback. In other words, the value and validity of the approach were not inherent in the approach itself, but in how the approach was used. Within the framework of informed self-assessment, these findings reinforce the influence of external and internal conditions, and the tensions that emerge from competing conditions and data sources, upon accessing, interpreting and using data to inform one’s self-assessment.8

Previous research has demonstrated that multiple factors influence the effective use of structured learning and assessment activities (e.g. portfolios, personal learning plans, workplace assessments) for learning.10,18–21 The results of this study suggest that these factors also influence the extent to which these activities are thought to inform self-assessment. The preparation and engagement of supervisors and staff interested in supporting learning and improvement appeared to be most instrumental. Preparation includes knowledge of curricula and level-specific standards, skill in using assessment approaches and providing feedback, and understanding of learners’ levels and performance. Faculty engagement refers to being interested in learners and fostering activities in which relevant performance information is regularly shared with learners to inform their self-assessment and improvement. Learners discounted faculty members’ opinions when they perceived a lack of supervisory engagement or had other reasons to doubt the assessor’s credibility with respect to his or her knowledge of the learner’s performance. Similarly, the findings suggest that learners can gain by becoming more skilled in asking for feedback.

Participants perceived that regular verbal formative feedback, although valued, was rarely received. Although our participants were selected based on their enrolment in programmes that used formal learning activities intended to help them better understand their progress, many described feeling lost, unclear about how they were performing, and having to self-assess their performance in the absence of knowledgeable feedback. Although self-directedness in learning is desirable, these results suggest the need to balance such self-directedness with regular specific verbal formative feedback on progress.22–24 Self-assessment and self-directed learning do not mean that learners should be left on their own; they require structuring and scaffolding of learning experiences, guidance and feedback.1,6,12,25–28

One particular finding that requires further exploration refers to the contribution of explicit performance standards and objective performance data to increasing the effectiveness of activities for informing self-assessment.1,3 The curricular activities varied with respect to the presence and rigour of objective standards and data. For example, although learners in three programmes recorded their performance data in portfolios and learning plans, they frequently reported feeling unclear about the standards against which they should be interpreting and judging their performance, and hence questioned the usefulness of the process. By contrast, students in the programme that used clearly defined competencies and required signed narrative performance assessments from their supervisors reported more satisfaction with the clarity of data and standards. Similarly, the residents who collected clinical performance data from their patients’ records and compared these with published clinical practice guidelines reported confidence in using objective data and standards for self-assessment. More frequently, however, learners tended to cite the lack of clear standards or objective patient data for self-calibrating as deterrents to well-informed self-assessment. As noted above, the lack of involvement of informed supervisors to guide the interpretation of data and standards also diminished the perceived effectiveness of most activities.

Notably, peer interaction and feedback appeared to be central to informing learners’ self-assessments, as is true for doctors.29 Discussion with peers enabled reflection upon and benchmarking of performance, either confirming current performance or informing improvements. We cannot help but question whether the relative importance of peer feedback may have reflected the lack of supervisory feedback, the relative accessibility of peers or some other factor. These are questions for further study.

The study’s findings are limited by the small number of education programmes studied, as well as by their diversity. That said, we believe that the consistency of findings across settings, education levels and activities indicates their robustness. The analysis was strengthened by the diverse professional and theoretical perspectives of the researchers, who were drawn from medical education and assessment, psychology, anthropology, grounded theory and clinical practice. Our diverse perspectives led us to extensive questioning of one another’s thinking and the rationale for it in order to clarify understanding and to ensure that the emerging themes resonated with each of us. The process frequently required us to return to the data for confirmation.17,30 A further limitation concerns the study’s focus upon learners’ perceptions. We cannot guarantee that building formal activities to inform self-assessment on perceptions alone, even in a way that avoids the pitfalls identified, will necessarily yield better self-assessment and performance. However, we can be fairly confident that assessment activities that are not perceived as valuable can be anticipated to have negligible or negative impacts.

In summary, it appears that several contextual and personal conditions consistently influenced learners’ use of formal and informal learning and assessment activities to inform self-assessment of their performance. Their perceptions of the credibility and authenticity of the performance data and sources appeared central. Further study of the influence of the nature of the data and performance standards upon informed self-assessment is suggested. Knowledgeable supervisor engagement in learning and in learners’ assessment activities also appeared to be pivotal. Faculty development geared towards raising awareness of the importance of informing self-assessment and the related need to structure and scaffold learners’ self-assessment activities could prove helpful. Studies to explore faculty members’ responses to and understandings of these findings, and to design and test interventions for scaffolding activities to inform self-assessment, are logical next avenues of inquiry.

Contributors:  JS, KE, TD, JL, KM and CvdV contributed to the conception and design of the study; JS, KE, HA, BC, TD, EH, JL, EL, KM, CvdV contributed to the acquisition, analysis and interpretation of data; JS, KE, HA, BC, TD, EH, JL, EL, KM, CvdV contributed to the critical revision of the manuscript. All authors approved the final manuscript for publication.

Acknowledgements:  the authors thank Tanya Hill MSc, Dalhousie University, Halifax, Nova Scotia, Canada for providing administrative support and reviewing the manuscript.

Funding:  this research was supported by the Medical Council of Canada, the American Board of Internal Medicine and the Office of Continuing Medical Education, Faculty of Medicine, Dalhousie University.

Conflicts of interest:  none.

Ethical approval:  this study was approved by the Health Sciences Research Ethics Board of Dalhousie University, Halifax, Nova Scotia, Canada, the Conjoint Health Research Ethics Board, University of Calgary, Calgary, Alberta, Canada, the New England Institutional Review Board, Wellesley, Massachusetts, USA, and the Stockport Research Ethics Committee, Stockport, Greater Manchester, UK. Ethical approval was not required in the Netherlands or Belgium.

References

1. Boud D. Enhancing Learning through Self-Assessment. London: Kogan Page 1995;11–35.
2. Eva KW, Regehr G. Self-assessment in the health professions: a reformation and research agenda. Acad Med 2006;80 (10 Suppl):46–54.
3. Epstein RM, Siegel DJ, Silberman J. Self-monitoring in clinical practice: a challenge for medical educators. J Contin Educ Health Prof 2008;28 (1):5–13.
4. Eva KW, Regehr G. ‘I’ll never play professional football’ and other fallacies of self-assessment. J Contin Educ Health Prof 2008;28 (1):14–9.
5. Sargeant J. Toward a common understanding of self-assessment. J Contin Educ Health Prof 2008;28 (1):1–4.
6. Boud D, Keogh R, Walker D. Reflection: Turning Experience into Learning. London: Kogan Page 1985;18–40.
7. Patterson C, Crooks D, Lunyk-Child O. A new perspective on competencies for self-directed learning. J Nurs Educ 2002;41 (1):25–31.
8. Sargeant J, Armson H, Chesluk B, Dornan T, Eva K, Holmboe E, Lockyer J, Loney E, Mann K, van der Vleuten C. Processes and dimensions of informed self-assessment: a conceptual model. Acad Med 2010;85 (7):1212–20.
9. Ross L, Nisbett R. The Person and the Situation: Perspectives of Social Psychology. New York, NY: McGraw-Hill 1991;8–82.
10. Driessen E, van Tartwijk J, van der Vleuten C, Wass V. Portfolios in medical education: why do they meet with mixed success? A systematic review. Med Educ 2007;41 (12):1224–33.
11. Buckley S, Coleman J, Davison I et al. The educational effects of portfolios on undergraduate student learning: a Best Evidence Medical Education (BEME) systematic review. BEME Guide no. 11. Med Teach 2009;31 (4):282–98.
12. Goldman S. The educational kanban: promoting effective self-directed adult learning in medical education. Acad Med 2009;84 (7):927–34.
13. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ 2005;39 (12):1176–7.
14. Lockyer JM, Violato C. An examination of the appropriateness of using a common peer assessment instrument to assess physician skills across specialties. Acad Med 2004;79 (10 Suppl):5–8.
15. Duffy FD, Lynn LA, Didura H, Hess B, Caverzagie K, Grosso L, Lipner R, Holmboe E. Self-assessment of practice performance: development of the ABIM Practice Improvement Module (PIM). J Contin Educ Health Prof 2008;28 (1):38–46.
16. Musolino GM. Fostering reflective practice: self-assessment abilities of physical therapy students and entry-level graduates. J Allied Health 2006;35 (1):30–42.
17. Corbin J, Strauss A. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 3rd edn. Thousand Oaks, CA: Sage Publications 2008;45–64.
18. Davies H, Archer J, Bateman A, Dewar S, Crossley J, Grant J, Southgate L. Specialty-specific multi-source feedback: assuring validity, informing training. Med Educ 2008;42 (10):1014–20.
19. Holmboe ES, Hawkins RE. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, PA: Mosby/Elsevier 2008;226–7.
20. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide no. 31. Med Teach 2007;29 (9):855–71.
21. Grant A, Kinnersley P, Metcalf E, Pill R, Houston H. Students’ views of reflective learning techniques: an efficacy study at a UK medical school. Med Educ 2006;40 (4):379–88.
22. Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. JAMA 2009;302 (12):1330–1.
23. Hattie J, Timperley H. The power of feedback. Rev Educ Res 2007;77 (1):81–112.
24. Archer J. Delivering feedback: state of the science in health professional education: effective feedback. Med Educ 2010;44 (1):101–8.
25. Candy PC. Self-direction for Lifelong Learning: A Comprehensive Guide to Theory and Practice. San Francisco, CA: Jossey-Bass 1991;121–53.
26. Dornan T, Hadfield J, Brown M, Boshuizen H, Scherpbier A. How can medical students learn in a self-directed way in the clinical environment? Design-based research. Med Educ 2005;39 (4):356–64.
27. Eraut M. Informal learning in the workplace. Stud Contin Educ 2005;26 (2):247–73.
28. Tochel C, Haig A, Hesketh A, Cadzow A, Beggs K, Colthart I, Peacock H. The effectiveness of portfolios for postgraduate assessment and education: BEME Guide no. 12. Med Teach 2009;31 (4):299–318.
29. Armson H, Kinzie S, Hawes D, Roder S, Wakefield J, Elmslie T. Translating learning into practice: lessons from the practice-based small-group learning programme. Can Fam Physician 2007;53 (9):1477–85.
30. Liamputtong Rice P, Ezzy D. Qualitative Research Methods: A Health Focus, 2nd edn. Oxford: Oxford University Press 2005;32–44.

Supporting Information


Table S1. Undergraduate and postgraduate programmes and focus group participants.

Appendix S1. Focus group study.
