Abstract

Interpersonal and communication skills (ICS) are a key component of several competency-based schemata and one of the six Accreditation Council for Graduate Medical Education (ACGME) core competencies. With the shift toward a competency-based educational framework, robust learner assessment becomes paramount. The journal Academic Emergency Medicine (AEM) hosted a consensus conference to discuss education research in emergency medicine (EM). This article summarizes the preparatory research conducted to brief consensus conference attendees and reports the results of the consensus conference breakout session on ICS assessment of learners. The goals of this session were twofold: 1) to determine the state of assessment of observable learner performance and 2) to set a research agenda within the ICS field for medical educators. The working group identified six key recommendations for medical educators and researchers.

The Accreditation Council for Graduate Medical Education (ACGME) core competencies were set out over a decade ago as part of the Outcomes Project. These core competencies, like the Scottish Doctor (UK) and CanMEDS (Canada) frameworks, set out to define the skill set required of good physicians. All of these hallmark documents in medical education highlight the need for solid communication and interpersonal skills in medicine. The ACGME has defined interpersonal and communication skills (ICS) as a core competency (Table 1).[1]

Table 1. ACGME Core Competencies Definitions[1]
  1. ICS = interpersonal and communication skills.

Residents must be able to demonstrate ICS that result in effective information exchange and teaming with patients, their patients' families, and professional associates. Residents are expected to:
  • Create and sustain a therapeutic and ethically sound relationship with patients;
  • Use effective listening skills and elicit and provide information using effective nonverbal, explanatory, questioning, and writing skills; and
  • Work effectively with others as a member or leader of a health care team or other professional group.

In 2002, the Council of Emergency Medicine Residency Directors (CORD) convened a consensus conference to address competencies in emergency medicine (EM).[2] The results of that conference, as interpreted by Hobgood et al.[2] for the ICS competencies, are shown in Table 2. At the time, direct observation was viewed as the most practical assessment method because of its immediate availability.

Table 2. ICS Competencies as Defined by CORD
  1. From Hobgood et al.[2]

  2. CORD = Council of Emergency Medicine Residency Directors; ICS = interpersonal and communication skills.[2]

—Demonstrate the ability to respectfully, effectively, and efficiently develop a therapeutic relationship with patients and their families.
—Demonstrate respect for diversity and cultural, ethnic, spiritual, emotional, and age-specific differences in patients and other members of the health care team.
—Demonstrate effective listening skills and be able to elicit and provide information using verbal, nonverbal, written, and technological skills.
—Demonstrate ability to develop flexible communication strategies and be able to adjust them based on the clinical situation.
—Demonstrate effective participation in and leadership of the health care team.
—Demonstrate ability to elicit patient's motivation for seeking health care.
—Demonstrate ability to negotiate as well as resolve conflicts.
—Demonstrate effective written communication skills with other providers and the ability to effectively summarize for the patient upon discharge.
—Demonstrate ability to effectively use the feedback provided by others.
—Demonstrate ability to handle situations unique to EM:
  1. Intoxicated patients
  2. Altered mental status patients
  3. Delivering bad news (death notification, critical illness)
  4. Difficulties with consultants
  5. Do-not-attempt-resuscitation/end-of-life decisions
  6. Patients with communication barriers (non–English-speaking, hearing-impaired)
  7. High-risk refusal-of-care patients
  8. Communication with out-of-hospital personnel as well as nonmedical personnel (police, media, hospital administration)
  9. Acutely psychotic patients
  10. Disaster medicine

In 2004, Chapman et al.[3] considered integrating all ACGME core competencies, including ICS, into the Model of the Clinical Practice of Emergency Medicine. ICS were defined in three broad areas, with more specific examples given for each: 1) the physician–patient relationship, 2) obtaining and providing information, and 3) working with others. Assessment of these skills was not discussed.

The American Board of Emergency Medicine (ABEM) and the ACGME have jointly developed the ACGME Core Competency Milestones for EM.[4] There are two ICS categories: ICS1 (patient-centered communication) and ICS2 (team management). These are shown in greater detail in Tables 3 and 4. With the advent of these new milestones, an opportunity exists to revisit learner assessment in the emergency department (ED) and to evaluate the usefulness and applicability of the current literature assessing ICS in EM learners. We summarize the current state of assessment tools for ICS found in the literature and present recommendations from the Academic Emergency Medicine (AEM) breakout session on the direction of future research.

Table 3. ACGME Competency (ICS1): Patient-centered Communication[4]
Demonstrates ICS that result in the effective exchange of information and collaboration with patients and their families.
  1. ICS = interpersonal and communication skills.

Level 1: Establishes rapport with and demonstrates empathy toward patients and their families. Listens effectively to patients and their families.
Level 2: Elicits patients' reasons for seeking health care and expectations from the ED visit. Negotiates and manages simple patient/family-related conflicts.
Level 3: Manages the expectations of those who receive care in the ED and uses communication methods that minimize the potential for stress, conflict, and misunderstanding. Effectively communicates with vulnerable populations, both patients at risk and their families.
Level 4: Uses flexible communication strategies and adjusts them based on the clinical situation to resolve specific ED challenges, such as drug-seeking behavior, delivering bad news, unexpected outcomes, medical errors, and high-risk refusal-of-care patients.
Level 5: Teaches communication and conflict management skills. Participates in review and counsel of colleagues with communication deficiencies.
Table 4. ACGME Competency (ICS2): Team Management[4]
Leads patient-centered care teams, ensuring effective communication and mutual respect among members of the team.
  1. ACGME = Accreditation Council for Graduate Medical Education; ICS = interpersonal and communication skills.

Level 1: Participates as a member of a patient care team.
Level 2: Participates in team-based care; supports activities of other team members and communicates their value to the patient and family. Communicates pertinent information to emergency physicians and other health care colleagues.
Level 3: Develops working relationships across specialties and systems of care. Ensures transitions of care are accurately and efficiently communicated. Communicates with out-of-hospital and nonmedical personnel, such as police, media, and hospital administrators.
Level 4: Recommends changes in team performance as necessary for optimal efficiency. Ensures clear communication and respect among team members. Uses flexible communication strategies to resolve specific ED challenges, such as difficulties with consultants and other health care providers.
Level 5: Participates in and leads interdepartmental groups in the patient setting and in collaborative meetings outside of the patient care setting. Designs patient care teams and evaluates their performance. Seeks leadership opportunities within professional organizations.

Search Methodology

An initial search of the PubMed database from the National Library of Medicine was performed. The search was limited to English-language studies of human subjects and used "and/or" combinations of variations of the following keywords: transition(s), handoff(s), signoff(s), interpersonal, relations, communication, internship, residency, medical students, emergency medicine (or emergency "service"), and internal medicine. A total of 245 citations were obtained and reviewed. Removing "emergency medicine" and "emergency service" from a repeat search yielded 147 new unique citations, which were also reviewed. Of the 392 studies reviewed, 27 were found to address ICS in residents and students.

A second search, using a method not previously published, was undertaken via Google Scholar. For each combination of keywords listed below, the abstracts of the top 500 search results were reviewed and relevant literature was extracted for further evaluation. The keywords "communication skills assessment residents" yielded over 183,000 results; 105 of the top 500 were found to be relevant. A repeat search with the keywords "communication skills assessment emergency medicine" yielded over 92,700 results; 52 of the first 500 were duplicates from the first search, and 59 additional relevant sources were found. Next, the keywords "house officer assessment UK" were reviewed, with one result repeated from the first two searches and 17 additional relevant items. Finally, the keywords "house officer assessment Australia" yielded 58,300 items, with one repeated from prior searches and three new relevant references.
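For illustration only, the PubMed arm of such a search can be reproduced programmatically. The sketch below uses Biopython's Entrez interface; the query string is an abbreviated stand-in for the keyword combinations listed above, not the authors' exact search strategy, and the contact e-mail is a placeholder.

```python
# Minimal sketch of a PubMed keyword search in the spirit of the one
# described above, using Biopython's Entrez E-utilities wrapper
# (pip install biopython). The query is an illustrative subset of the
# listed keywords, not the authors' exact strategy.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = (
    '(handoff* OR signoff* OR "interpersonal relations" OR communication) '
    'AND (internship OR residency OR "medical students") '
    'AND ("emergency medicine" OR "emergency service") '
    'AND english[lang] AND humans[mh]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(f'{record["Count"]} citations matched; first IDs: {record["IdList"][:5]}')
```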

The Literature Around ICS Assessment

Many of the more significant advances in ICS assessment have occurred since Hobgood et al.[2] published their article in 2002. Assessment of ICS in EM residents primarily uses direct observation in either a simulated or workplace environment (e.g., the standardized direct observation tool [SDOT]), external feedback (360-degree assessment), or a global summative assessment completed by faculty at the end of a specific period. Unfortunately, the reliability and validity of the instruments were not assessed in a majority of studies.

Direct observation in the workplace or simulated setting has been an area of great interest in the literature for assessment of resident ICS. In 2001, Reisdorff et al.[5] introduced an assessment tool focusing on several areas of resident performance, secondarily linked to the six ACGME core competencies, which included ICS. A similar tool was developed within the MetroHealth EM residency for assessment of ICS.[6] The authors retrospectively standardized their residents' ICS scores and identified residents with ICS-related problems by comparing mean scores to a "standardized" resident score. Inter-rater reliability and validity were not determined for either of these tools.
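The standardization step described above can be illustrated with a short sketch. How the MetroHealth group computed and thresholded their standardized scores is not specified here, so the z-score approach and the flagging cutoff below are assumptions for illustration:

```python
# Illustrative z-score standardization of mean ICS ratings against the
# residency-wide distribution. The 1-9 scale, the data, and the
# flagging threshold (z < -1.5) are all hypothetical.
import numpy as np

ics_means = {"res_A": 7.8, "res_B": 6.9, "res_C": 4.2, "res_D": 7.1}

scores = np.array(list(ics_means.values()))
z_scores = (scores - scores.mean()) / scores.std(ddof=1)

for name, z in zip(ics_means, z_scores):
    flag = "  <-- review for ICS-related problems" if z < -1.5 else ""
    print(f"{name}: z = {z:+.2f}{flag}")
```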

In 2006, the CORD SDOT was presented and subsequently assessed for reliability.[7] The SDOT looks at 26 behaviors that cover all six of the ACGME core competencies. Overall, the SDOT had good inter-rater reliability in standardized patient (SP) encounters when utilized by faculty of various academic backgrounds, but the ICS correlations had the poorest inter-rater reliability.[7] The validity of the instrument in the clinical setting was not evaluated in this study.[7] Simulated scenarios were also often studied.[8, 9] The assessment tools in simulated environments range from performance checklists to rating scales, commonly based on some form of a Likert scale.

External feedback as a mechanism to assess resident ICS has also been explored in EM. Rodgers and Manifold[10] described a 360-degree assessment system (a.k.a. multisource feedback [MSF]) for resident assessment. Their work suggests that ICS and professionalism would be the best competencies evaluated by 360-degree assessments.

In 2008, McLaughlin et al.[11] reviewed the use of simulation in EM resident education. They found that all ACGME core competencies had been incorporated into simulator-based instructional programs and noted that simulation allows assessment of communication skills with interdisciplinary teams in a structured clinical environment. Review of the current literature did not reveal any EM-derived global rating scales developed specifically for ICS competencies.

Lessons from Other Disciplines

Lessons From the Operating Room

Simulation scenarios hold great promise as a method to assess residents' abilities and skills, especially as they pertain to application of medical knowledge, teamwork, communication skills, and procedural skills. Many of the anesthesia-based tools are based on crew resource management (CRM) principles, namely the NOTECH and ANTS assessments.[9, 12, 13] Similar approaches have been used in surgery[14] and pediatric intensive care.[15] These are detailed later in this article. Other studies from these disciplines have primarily utilized SPs and objective structured clinical examination (OSCE) stations to test ICS.[16-18]

Lessons From Primary Care

Schirmer et al.[20] formed a consensus group that evaluated multiple available assessment scales against the Kalamazoo consensus statement[19] of essential elements in a patient–physician interaction. The assessment tools developed consist of global assessment scales as well as checklists. These have been used in multiple settings, such as videotape review of patient encounters,[21] direct observation,[22] and SP and OSCE encounters.[23-25] Several studies have successfully examined modifications of the American Board of Internal Medicine (ABIM) patient survey for external feedback.[26, 27] Many scales have been developed for OSCE-type assessments of pediatric[25] and primary care residents, such as the RUCIS[23] and the CIS-OSCE.[24] The SP/OSCE scenario has been used specifically to evaluate resident skills with non–English-speaking patients.[28]

More Lessons From Other Specialties

Faculty in palliative care have developed excellent tools for communicating end-of-life issues and delivering bad news.[29, 30] Physical medicine and rehabilitation has produced several studies on creating MSF tools (360-degree assessments).[31, 32] The use of portfolios has been evaluated in psychiatry[33] and neurology[34] education.

Other Countries

For the purposes of this overview, we concentrate mainly on other English-speaking countries with similar training systems, namely Canada, Australia, and the United Kingdom. The Canadian and Australian systems use the conceptual framework called CanMEDS,[35] while the Scottish Doctor Collaboration is utilized in the United Kingdom.[36] Both of these conceptual frameworks overlap with the ACGME competencies in the areas of communication[35, 36] and collaboration,[35] allowing the sharing of resources in these domains to examine the ACGME competencies.

The Canadian EM academic programs use two predominant types of tools. The first is the daily encounter card.[37-39] Most EM residency programs in Canada require some form of daily encounter card, composed of multiple subjective rating scales along the CanMEDS framework, some element of qualitative feedback, and a listing of patient encounters or procedures. The other tool is the in-training evaluation report,[38, 40] a summative evaluation completed after each rotation, as mandated by the Royal College of Physicians and Surgeons of Canada, that includes a separate score report for communication skills.

The recent development of the Foundation years curriculum in the United Kingdom has yielded many advances in direct observation tools. The British have developed tools that emphasize communication skills such as the SPRAT[41] and the Mini-PAT.[42, 43] The Mini-CEX is often used as an assessment tool during training in the clinical context.[44, 45]

Thematic Strengths in the Literature

Three main themes have been noted in the assessment of resident ICS: 1) "breaking bad news" to patients and family, 2) ICS in situations requiring teamwork, such as medical or trauma codes, and 3) physician-to-physician communication, such as sign-outs at shift change or conversations with consultants.

Breaking Bad News

Breaking bad news and death disclosure have received the most study of any ICS topic among EM residents. Quest et al.[46] used SPs to teach and assess death disclosure skills in EM residents. Residents had two SP encounters after a didactic session and were rated by faculty, by the SP, and through self-evaluation.[46] The assessment tool included a behavior checklist and a final overall three-point global rating of competency.[46] This tool showed modest inter-rater agreement between SPs and faculty assessors. In 2006, Quest et al.[47] added an affective competency score (ACS) to another SP assessment tool that was used with fourth-year medical students in the death notification curriculum. The ACS was found to have high inter-rater agreement between faculty and SPs, and higher ACS scores correlated with higher global rating assessments.[47] Other tools have been developed for direct observation of death notification,[48] breaking bad news with simulated patients,[49] and handling "difficult discussions" involving simulated pediatric cases (e.g., giving family members reassurance).[50]

Teamwork

Examples of teamwork evaluation come primarily from simulation assessments. Shapiro et al.[51] used a behavior-anchored rating scale to assess teamwork performance during a simulation exercise. Rosen et al.[52] recommended using a simple behavioral checklist to assess performance of essential teamwork behaviors. The CAT-T is a direct observational assessment tool completed by patients to assess the overall effectiveness of the team's communication with the patient.[53]

Flin and Maran[9] adapted the NOTECH (nontechnical skills) global assessment form and framework, developed for teaching CRM in the airline industry, into the ANTS (anesthesia nontechnical skills) framework to teach and evaluate anesthesiologists' communication skills. Moorthy et al.[14] also developed assessment tools for nontechnical skills similar to the NOTECH global rating form. When completed by expert human factors researchers, this assessment showed very high internal consistency (0.87 across skill areas) and inter-rater reliability (0.84 to 0.87).[14] The NOTECH form has similarly been adapted for use in neonatal resuscitation.[15]
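For reference, internal consistency figures such as the 0.87 reported above are most commonly computed as Cronbach's alpha (whether Moorthy et al. used this exact statistic is our assumption):

```latex
% Cronbach's alpha for k items (here, skill areas), where \sigma^2_{Y_i}
% is the variance of item i and \sigma^2_X the variance of the total score:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)
```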

Physician-to-Physician Communication

Farnan et al.[54] developed a “hand-off” OSCE (dubbed the “objective structured HANDOVER examination” or OSHE), but did not look at its validity or reliability. Apker and colleagues[55] developed a handoff communication assessment tool to determine what communication skills are used during hand-offs, but did not validate tools to assess those skills.

Very little has been written about assessment of communication with consultants. Sibert and colleagues[56] determined a list of observable skills and a list of (nonobservable) principles and attitudes used in conversations with consultants. No assessment tools have been developed specifically for EM residents, although promising work is under way.[57]

The Tools

Workplace Assessments: Feedback Forms, Daily Encounter Cards, and Direct Observation

Various tools that assess multiple competencies have been developed for use in the workplace setting. A recent systematic review[22] of direct observation assessment tools found that validity and usability vary greatly. Of the tools that have reported inter-rater reliability (~0.70),[22] the Mini-CEX was the most evaluated across several specialties, with improved validity after recent modification.[22] It was also found to have increased sensitivity for detecting poorly performing residents.[58] The ACGME competency-based EM SDOT has shown high inter-rater reliability.[7] The Four Habits Coding Scheme has been validated for assessing practicing physicians' interactions with patients.[59, 60] Daily encounter cards and feedback forms are increasingly used to encapsulate fragmented rotational experiences, like the EM rotation, where residents have different assessors each day. Examples are found throughout the medical education literature.[22, 37-39, 61] These tools range from purely qualitative instruments[38] to more objective, reference-standard-anchored, Likert-based scoring systems.[7, 37, 39] Daily encounter cards allow for global ratings along multiple competencies, although assessor leniency bias persists with these cards.[62]

Few tools have been developed to assess communication skills in isolation. Examples include the Arizona Clinical Interview Rating Scale (ACIR),[63] the Calgary-Cambridge Observational Guide,[64] and the SEGUE.[65] Certain tools are better suited to less experienced faculty (e.g., checklists such as the Kalamazoo and SEGUE), while expert faculty members may find rating scale tools (e.g., Four Habits[59, 60]) more useful.[20]

Interim Assessments

Interim qualitative assessments created by faculty can act as narratives of resident achievement to date. Canadian programs use interim assessments more prominently than current American systems do.

Elements commonly included in in-training evaluation reports (ITERs) are multiple rating scales assessing broad categories of competence (e.g., communicator) and narrative summaries of resident performance (often an amalgam of comments from various daily encounter cards). Difficulties with these techniques include the following: editorialization of resident performance introduces variability[61]; timeliness may be jeopardized because narrative comments take time to collect[61]; and assessors may not "mark" similarly, resulting in poor inter-rater reliability.[61]

External Feedback (e.g., 360-degree Assessments, Mini-PAT, Patient Surveys)

External feedback provides a window into how a resident interacts and communicates with others. This type of assessment is particularly valuable because residents may not interact with attending physicians and allied health colleagues in the same way. The MSF assessment is a concept borrowed from the business sector.[10] In this form of external feedback, one gives and receives input from everyone within one's sphere of influence. In medicine, this may include not only clinical preceptors but also peers, nursing staff, and patients, as well as self-reflection. A common adaptation within medicine has been to convert the traditional bidirectional "360-degree assessment" (i.e., all individuals participate in reciprocal assessments) into MSF about particular individuals (i.e., a learner is rated by multiple assessors ranging from students to colleagues to supervisors).

The ABIM patient survey has been adapted for use in MSF with success.[26, 27] Symons et al.[26] modified the survey for resident self-assessment of communication skills, finding high internal consistency (0.86). Brinkman et al.[27] used the tool as a 360-degree assessment; when combined with coaching, they found improvement in residents' ICS compared with decline in a control group receiving standard feedback. Similarly, in the United Kingdom, peer-review questionnaires such as the SPRAT[41] and Mini-PAT[42, 43] are used.

Few studies have attempted to validate the components of an MSF tool. Massagli et al.[32] evaluated a tool developed specifically to assess the ACGME core competencies, excluding medical knowledge, for physical medicine and rehabilitation. Residents were assessed by nursing staff, allied health professionals, and medical students. The study found strong inter-rater reliability as well as improving scores across training years. The multisource assessment developed by Joshi et al.[66] for obstetrics and gynecology residents was subsequently validated in the pediatrics environment.[67] It remains one of the few tools validated in a cross-discipline manner.
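As a minimal sketch of how MSF ratings of this kind are typically summarized before such comparisons, the snippet below groups hypothetical ratings by rater source; the rater groups echo those named above, but all data and column names are invented:

```python
# Hypothetical MSF data: each row is one rater's ICS rating of a resident.
# Summarizing by rater source mirrors the group-by-group comparisons
# described above; the data are invented for illustration.
import pandas as pd

ratings = pd.DataFrame({
    "resident": ["A", "A", "A", "B", "B", "B"],
    "source":   ["nurse", "allied_health", "student"] * 2,
    "ics":      [7, 6, 8, 5, 5, 7],  # e.g., on a 1-9 global rating scale
})

# Mean ICS rating per resident, broken out by rater source
summary = ratings.pivot_table(index="resident", columns="source",
                              values="ics", aggfunc="mean")
print(summary)
```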

Reflection (Self-reflection, Guided Reflection, or Coached Reflection)

Reflective feedback is another tool used to assess resident performance. Self-reflection and self-evaluation are an important part of adult learning theory.[68] The literature is unclear about the ability of a physician to accurately self-assess.[69, 70] Residents with poor performance tend to rate themselves higher than external raters do, and those with very high performance tend to rate themselves lower than external raters do.[71] In contrast, the cultural competency and communication skills survey of internists showed a very good correlation between self-identification and patients' ratings.[72]

Simulation

As previously noted, there is an increasing use of simulation to teach learners in EM. Simulators may also serve as a medium for assessment. Currently, the literature in EM focuses on using simulators as a teaching tool, and there are few EM-based assessment tools for ICS.[73] The incorporation of simulation-based assessments was called for in 2007 by the AEM consensus panel on simulation.[74]

Faculty in anesthesia and pediatrics have used training and assessment of nontechnical skills (e.g., NOTECH, ANTS) in high-fidelity simulation settings.[12, 13] The literature from other fields suggests that checklists or expert-observer-based qualitative assessments are most effective. Checklists, however, may reward thoroughness as opposed to advanced skill.[75] Moreover, both checklists and global rating scales are fraught with leniency bias.[75]

The use of OSCEs has also been featured in undergraduate medical education, internal medicine, and pediatrics. In programs with larger numbers of learners, OSCE-type assessments pool valuable resources and allow for reproducible direct observation time by assessors. Multiple studies have focused on developing evaluation tools for the SPs to complete regarding the learner's performance,[20, 76-78] as well as for external assessors such as faculty members.[19] The SEGUE, primarily used in an OSCE, was found to have good reliability but did not discriminate between different levels of training.[79] Because of these findings, the 27-item "WyMii" was developed to better assess ICS in the context of a real or standardized ambulatory medical setting.[80]

The Current Toolbox

The ACGME EM milestones document suggests some assessment tools for the overall assessment of the two major domains within ICS (see Tables 3 and 4). However, most assessment tools to date focus on more granular aspects of ICS. Table 5 lists current tools according to the 2002 CORD-EM framework,[2] which outlines key EM ICS skills at a granular level.[2, 5, 7, 9, 10, 14, 20, 22, 24, 27, 28, 32, 39, 40, 48, 55, 65, 66, 72, 81-84]

Table 5. Current Toolbox for ICS Assessment

The table's column groupings are: direct observation tools (workplace based), comprising Tool 1 (checklist), Tool 2 (behavioral anchors), and Tool 3 (relative); external narrative tools, comprising Tool 4 (interim assessments) and Tool 5 (external feedback: 360-degree, Mini-PAT); internal narrative tools, comprising Tool 6 (reflection); and other tools, comprising Tool 7 (global rating scales) and Tool 8 (simulation assessments, with setting noted).

  1. This table builds on the work of Hobgood et al.,[2] utilizing their framework of key EM competencies, and describes assessment tools that have been studied for assessing these skills.

  2. CIS-OSCE = communication and interpersonal skills objective structured clinical examination; ICS = interpersonal and communication skills; IM = internal medicine; ITER = in-training evaluation report; MSF = multisource feedback; obs = observation; PMR = physical medicine and rehabilitation; SDOT = standardized direct observation tool.

—Building a therapeutic relationship: multiple tools (Schirmer et al.[20]); Makoul et al.[65] (IM-based); Joshi et al.[66] (OB/GYN-based); Rodgers et al.[10]; Massagli et al.[32] (PMR-based); Reisdorff et al.[5] (MSF); CIS-OSCE (Yudkowsky et al.[24]).
—Respecting diversity: Zabar et al.[28]; Fernandez et al.[70]; Massagli et al.[32] (PMR-based); Sherbino.[39]
—Listening skills: Makoul et al.[65] (IM-based); Zabar et al.[28]; CIS-OSCE (Yudkowsky et al.[24]); Joshi et al.[66] (OB/GYN-based); Rodgers et al.[10]; Fernandez et al.[70]; Massagli et al.[32]; Reisdorff et al.[5] (MSF); Sherbino.[39]
—Flexibility: Reisdorff et al.[5] (MSF).
—Leadership and participation with team: Joshi et al.[66] (OB/GYN-based); Massagli et al.[32] (PMR-based); Weaver et al.[82]; Crow et al.[83]; Flin et al.[9]; Moorthy et al.[14]; Sherbino[39]; high-fidelity simulation: Flin et al.[9] and Moorthy et al.[14]
—Specific situations unique to EM (a. bad news; b. end of life; c. intoxicated patient; d. altered LOC; e. difficult consultants; f. communication barrier; g. high-risk refusal-of-care patient; h. communication with out-of-hospital personnel; i. psychotic patient; j. disaster medicine): e. Apker et al.[55]; b. Benenson et al.[48]; b. Buss et al.[84]; a, g. OSCE (Yudkowsky et al.[24]); Fernandez et al.[72] (patients with language/cultural barriers); Benenson et al.[48]; CIS-OSCE (Yudkowsky et al.[24]: consent, bad news, domestic violence).
—Written communication: Reisdorff et al.[5] (MSF).
—Demonstrate ability to effectively use the feedback provided by others: Brinkman et al.[27]; Massagli et al.[32] (PMR-based); CIS-OSCE (Yudkowsky et al.[24]).
—Demonstrate ability to elicit patient's motivation for seeking health care: Makoul et al.[65] (IM-based); Fernandez et al.[72] (patients with language/cultural barriers); Epstein et al.[81] (medical students).
—Demonstrate ability to negotiate as well as resolve conflicts: no studied tools identified.
—Global ICS competency (global rating scales only): Turnbull et al.[40] (ITER); Shayne et al.[7] (SDOT).

Validated Tools

Within the medical education literature, there is a paucity of validation studies for direct observational assessment tools. Table 6 summarizes validated ICS tools. Overall, there are very few validated tools ready for use in the ED learning environment.[5, 22, 24, 44, 59, 60, 67, 81]

Table 6. Validated ICS Assessment Tools
  1. MSF = multisource feedback; SP = standardized patient.
  2. a Validated EM study for ICS.

Tool: Mini-CEX
Validation: Widely validated (see Kogan et al.[22] for references).
Comments: In the systematic review by Kogan et al.[22] of 55 different direct assessment tools utilized in undergraduate and graduate medical education, only the Mini-CEX had enough studies to show an accrual of validity evidence (development of a stronger behaviorally anchored scale over time). The Mini-CEX may have only one to two items that can be used to assess ICS. Only 10 other tools had at least two levels of validity evidence. None of the evaluated tools had studies examining their effect on educational outcomes.

Tool: Nine-point global rating tool for six core competencies (61 separate behavioral items)
Validation: Reisdorff et al.[5]a
Comments: This tool had 86 measurement points, many of which applied to more than one competency. Seven to 20 items were merged for each competency score. Principal component analysis was used to determine validity. In this analysis, ICS had a large first factor proportion with small sequential increases in subsequent factor proportions as more points were added. ICS seemed to measure separately from other competencies and was less affected by other factors (e.g., medical knowledge affecting professionalism).

Tool: 360-degree instrument by Joshi et al.[66] (derived in obstetrics and gynecology)
Validation: Chandler et al.[67] in pediatrics
Comments: The tool of Joshi et al.[66] was used by Chandler et al.[67] to assess pediatrics residents. They found a lack of correlation between patient/family evaluators, nurses, and faculty, although results were similar within each group. The authors postulated this may indicate poor validity of the tool itself.

Tool: SP simulation-based stations (ICS assessment tool)
Validation: Yudkowsky et al.[24]
Comments: Internal medicine and family medicine residents were assessed by SPs in areas such as informed consent, treatment refusal, and giving bad news using a five-point Likert scale-based tool. The content was validated by comparing assessed behaviors to the ACGME competency descriptions. The study also examined sex and level-of-training effects on the scores by factor analysis of individual case ratings with global assessment ratings. Although correlation was shown, the global ratings were derived from the case rating scores.

Tool: CIS and RUCIS scales (ICS measured via standardized OSCE patients)
Validation: Iramaneerat et al.[23]
Comments: Initially an 18-item tool using a five-point Likert scale, completed by SPs in the stations of Yudkowsky et al.[24] described above. The authors found that the lowest rating category was underutilized and use of the middle rating categories was inconsistent. They adjusted the CIS to the RUCIS, a 13-item ICS assessment tool using a four-point behaviorally anchored scale. The RUCIS had better validity, with a more uniform distribution of ratings and a better fit with the measurement model.

Tool: Mini-PAT (MSF tool)
Validation: Archer et al.[41]
Comments: A criterion-based six-point Likert scale was used for 16 items covering five content areas. Scores tended to be higher in the two content areas containing ICS behaviors, but the mean score difference with level of training was significant.

Tool: Four Habits Coding Scheme (4HCS), an ICS assessment tool based on Kaiser Permanente's Four Habits Communication Model, which is taught to their clinicians
Validation: Frankel and Stein[59]; validated by Krupat et al.[60]
Comments: The 4HCS uses specific five-point behaviorally anchored scales for 25 items. It was validated by comparing its data with previous measures of ICS behavior, including the Roter Interaction Analysis System (RIAS), back-channel responses, and nonverbal behavior measures. 4HCS scores were consistently and significantly related to these measures. The 4HCS did not have a significant correlation with length of visit or patient postvisit evaluations.

Tool: Rochester Communications Rating Scale (RCRS)
Validation: Epstein et al.[81]
Comments: The RCRS is based on the Kalamazoo essential elements communication checklist and uses a six-point Likert scale. The RCRS used checklists completed by observing physicians along with students' self-assessments. Postencounter and take-home written assessments were also completed by the students. High RCRS scores correlated with higher scores on separate information-gathering and patient-counseling observational assessments, but there were no significant correlations with the written assessments.
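Many of the entries above turn on inter-rater agreement between faculty and SP raters. As a minimal sketch, chance-corrected agreement between two raters can be computed with Cohen's kappa; the ratings below are invented for illustration:

```python
# Chance-corrected agreement between two raters assigning categorical
# ICS ratings (e.g., points on a behaviorally anchored scale).
# All ratings are invented for illustration.
from sklearn.metrics import cohen_kappa_score

faculty_ratings = [3, 4, 2, 5, 3, 3, 4, 2, 4, 3]
sp_ratings      = [3, 4, 3, 5, 3, 2, 4, 2, 4, 4]

kappa = cohen_kappa_score(faculty_ratings, sp_ratings)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```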

Implementation Issues

One might expect that a direct observation tool (e.g., a 20-minute Mini-CEX) would be acceptable on an internal medicine ward after rounds have been completed but would not be feasible in a busy single-coverage ED.

One of the major limitations seen in the literature has been the lack of robust reporting of rater training. Rater training, or the lack thereof, affects the reliability of any assessment tool.[22]

Resource measures and cost analyses for the various tools have not yet been reported. For instance, a direct observation tool may affect earnings for fee-for-service, nonsalaried physicians. Likewise, building a multiroom simulation center carries a significant financial cost for a residency program.

Recommendations from the Working Group

Our recommendations are based on our working group's initial survey of the existing medical education literature in ICS and the fruitful discussions of the May 2012 AEM consensus conference on education research in EM. Below we highlight six key recommendations for medical educators and researchers from conference proceedings.

Match the Right Assessment Tool to the Right Competency

The newly released ACGME milestones for EM provide a guide for summative assessment leading to promotion to higher postgraduate levels. Many of these milestones could potentially be assessed using direct observational tools based in the clinical setting, such as the SDOT or MSF. However, specific situations that must be assessed in the milestones (e.g., delivering bad news) may be more easily assessed in settings such as high-fidelity simulation, oral board simulation, or an OSCE, utilizing tools such as checklists or global rating scales.

Need for Both Summative and Formative Assessment Tools

Our assessment toolbox for ICS needs both a summative section for milestone assessment and a formative section directed at the application of that knowledge through behavior change and remediation. For instance, our consensus panel experts suggested that while qualitative methods may provide useful feedback for formative assessment, the high-stakes nature of milestone assessment calls for more reliable tools, such as global rating scales with behavioral anchors.

Strengthen Evidence Behind the Tools

To guide the further development of assessment tools, specific observable behaviors need to be identified and a consensus or best practices determined. Primary research on the nature and best practices of physician communication specific to the ED is needed. For example, while some research identifying behaviors that are present in accurate and efficient consultation requests from the ED has recently been completed,[85, 86] similar studies are needed for all the milestones within patient-centered care and team management. Most importantly, the greatest challenge is determining validity of assessment tools.[87]

Evaluate for Outcomes at the Education Level and Beyond

Educators need to evaluate their performance in creating educational interventions that improve ICS. Assessment methods should be studied to make certain that applicable milestones are being learned and applied by residents in the course of their training. In accordance with Kirkpatrick's evaluation model,[87] focus should be placed not on the lowest level of the schema, learner satisfaction, but on higher levels such as change in learner behavior.[74, 75] Ultimately, we should aim to improve the care of our patients by improving ICS competencies.

Develop and Validate ICS Tools That Work in EM

To improve the reliability of the assessment of a learner's ICS, the tools used, whether bimodal checklists (i.e., tasks are done or not done) or Likert-type scales, should be based on specific behaviors the instructor directly observes.[58] Direct observation of skills could take place in many settings, such as the simulation lab, video review of a real or SP encounter, or a clinical shift in the ED. Assessment tools developed in other fields could also be adapted for use in EM. There may be assessment instruments, such as an OSCE checklist, already developed for these standardized settings that could be transferred into a context more commonly used in EM, e.g., tools such as NOTECHS or ANTS to assess team management.[9, 12, 13]
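To make the two formats concrete, the sketch below models a bimodal checklist item alongside a Likert item whose points are tied to directly observable behavioral anchors; the structure and item wording are hypothetical, not drawn from any published instrument:

```python
# Hypothetical data structures contrasting the two tool formats discussed:
# a bimodal (done / not done) checklist item and a Likert item whose
# points are anchored to directly observable behaviors.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ChecklistItem:
    behavior: str
    done: bool = False  # bimodal: observed or not observed

@dataclass
class AnchoredLikertItem:
    behavior: str
    anchors: Dict[int, str] = field(default_factory=dict)  # score -> anchor
    score: Optional[int] = None

intro = ChecklistItem("Introduces self and role to the patient")
intro.done = True

listening = AnchoredLikertItem(
    behavior="Listens effectively to patient and family",
    anchors={
        1: "Interrupts repeatedly; does not acknowledge concerns",
        3: "Allows patient to speak; summarizes the chief concern",
        5: "Elicits and verifies all concerns; checks understanding",
    },
)
listening.score = 3  # rater selects the anchor matching observed behavior
```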

Link Our Instructional Strategies to Assessments

Of the few tools that do exist, most focus broadly on the topic of ICS or team management. Tools need to be developed and validated that can assess these skills in specific clinical scenarios. An example would be to extend the work of Park et al.,[88] who developed a simulation to teach breaking bad news using the SPIKES protocol. A possible follow-up study could use an assessment tool, such as a checklist covering the six steps of SPIKES, to assess learner behavior in both simulated and clinical settings; a sketch of such a checklist follows.
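In this minimal sketch, the six step names reflect the published SPIKES mnemonic, but the scoring rule is an assumption for illustration:

```python
# Hypothetical bimodal checklist for the six SPIKES steps, scored as the
# fraction of steps the rater observed. The scoring rule is illustrative.
SPIKES_STEPS = [
    "Setting up the interview",
    "Assessing the patient's Perception",
    "Obtaining the patient's Invitation",
    "Giving Knowledge and information",
    "Addressing Emotions with empathic responses",
    "Strategy and Summary",
]

def score_encounter(observed: set) -> float:
    """Return the fraction of SPIKES steps marked as done."""
    return sum(step in observed for step in SPIKES_STEPS) / len(SPIKES_STEPS)

observed = {
    "Setting up the interview",
    "Giving Knowledge and information",
    "Strategy and Summary",
}
print(f"SPIKES completion: {score_encounter(observed):.0%}")  # 50%
```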

Conclusions

Overall, EM-specific validated tools to assess interpersonal and communication skills are lacking. Accrediting bodies in North America and Europe are now committed to competency-based training, specifically at the level of graduate (or postgraduate) medical education. Assessments should be mapped to the needs, goals, and objectives of the ICS competencies. This article summarizes the ICS assessment tools that have been developed to date. Resources should now be combined and focused on developing evidence-based, comprehensive assessment systems that include multiple modalities, and on validating such tools in learner assessment, whether de novo EM assessment tools or tools adapted from other specialties, countries, or phases of medical education.

References
