The unique communication skills required of competent emergency physicians (EPs) have been previously defined.3 Building on this previous work, we focused on the outcomes anticipated from practitioners who excel in this specific competency. We also identified high-leverage areas for data collection, possible methods for enhancing the face validity of our measures, and practical tips for implementing data collection. A summary of the measures developed is presented in Table 2.11–14
Table 2. Emergency Medicine Relevant Communication Competencies

| Condition | Measure | Data Collection Method or Assessment Technique |
|---|---|---|
| Therapeutic relationship | Establishment of a therapeutic relationship | Validated patient interpersonal skill inventories,11–13 SP, PR, 360 |
| Effective communication of care processes | Press Ganey14 scores; patient satisfaction surveys; physician satisfaction scores; "My physician was excellent at informing me about the outcomes of my care" | Press Ganey survey |
| AMA | Number of AMA departures | CR |
| AMA | Physician invites AMA patients to return for recommended treatment | CR, OSCE, SP, S |
| Death notification | Family satisfaction with resident interpersonal communication skills (may use any validated interpersonal skills inventory) | SP, S |
| Written communication skills—chart documentation | Physician documentation supports correct level of billing | CR |
| Leadership of critical care resuscitation team | Physician leadership inventory | S, DO |
The most critical communication skill required of all EPs is the ability to rapidly develop a therapeutic relationship with their patients. Outcome measures unique to this skill are numerous, primarily focusing on the patient’s perception of the individual physician’s communication skills. The group endorsed the concept of using previously validated patient interpersonal communication inventories to measure the success of individual residents at the outset of the therapeutic relationship. These measures include, but are not limited to, the Calgary Communication Inventory,11 the interpersonal skills and communication instrument of Schnabl et al.,12 and the longitudinal communication skills initiative of Rucker and Morrison.13
Data collection methods for these instruments will vary depending upon individual residency program and departmental logistics. In addition to those listed in Table 2, suggested methods include faculty interview of patients following an assessment using the SDOT, resident-directed patient sampling, and exit interview sampling of a random selection of patients at the conclusion of their emergency department (ED) stay. Regardless of the method chosen, care must be taken to mitigate potential sample bias, which can be introduced in a variety of ways, particularly by resident-directed sampling, patient illiteracy, or patient lack of English language proficiency.
Other communication skills important to measure involve high-risk communications: patients leaving against medical advice (AMA), death notification, and refusal of resuscitation (do not attempt to resuscitate/do not intubate) orders. Although the group easily achieved consensus on the skills that constitute excellence in this competency, difficulty arose in determining practical measurement methods. For example, for patients leaving AMA, some would argue that the best outcome and most desirable communication skill is the ability to effectively convince the person to remain in the ED and continue treatment. Others would counter that this outcome is paternalistic, and that the only valuable measure is whether the patient received an unbiased communication of the risks and benefits of his or her medical decision. In this scenario, sampling difficulty arises for both the numerator and the denominator. For the numerator, if one selects the percentage of patients who originally planned to leave AMA but declined following communication with their provider, one would miss all those patients who ultimately decided to depart but were adequately informed of the risks and benefits of their decision. The construction of the denominator is equally difficult, as most of the AMA discussions providers have with patients who ultimately remain in the department are not captured by standard charting methods. In other words, the number of patients who depart AMA (numerator) is known, but the total number of patients who discussed this option with their providers (denominator) is not.
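The denominator problem described above can be made concrete with a small sketch. The chart records, patient identifiers, and field names below are entirely hypothetical; the point is only that any rate computed from documented AMA discussions is biased whenever discussions go uncharted.

```python
# Hypothetical illustration of the AMA measurement gap: chart review can
# count signed-out-AMA patients (the numerator), but patients who discussed
# leaving AMA and then stayed are often undocumented, so the true
# denominator is unknowable from the chart alone.

charts = [
    # (patient_id, left_ama, ama_discussion_documented)
    ("A", True,  True),
    ("B", False, False),  # discussed AMA but stayed; discussion never charted
    ("C", False, True),   # discussed AMA, stayed, discussion charted
    ("D", True,  True),
    ("E", False, False),  # discussed AMA but stayed; discussion never charted
]

left_ama = sum(1 for _, left, _ in charts if left)
documented_discussions = sum(1 for _, _, doc in charts if doc)

# The apparent rate uses only what the chart captures:
apparent_rate = left_ama / documented_discussions
print(f"AMA departures (numerator): {left_ama}")
print(f"Documented AMA discussions (denominator proxy): {documented_discussions}")
print(f"Apparent AMA rate: {apparent_rate:.0%}")
# The apparent rate overestimates, because undocumented discussions
# (patients B and E) never enter the denominator.
```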
The group discussed another potential measure of best practice in the case of the patient who desires to leave AMA, namely, whether the resident encouraged the patient to return if the condition were to worsen or if the patient were to have a change of mind about seeking treatment. For this measure, written documentation of an invitation to return should be noted on the chart and discoverable by review. It was felt that this measure could easily be collected during standard review of all AMA patient charts.
Death notification is another area of significant risk for all EPs. Assessment tools exist to measure resident competency in this difficult communication encounter.15 Measurement of family member satisfaction with the physician communicating this information can be obtained via a telephone survey call-back after an appropriate time interval, or a mail survey.
Another key communication skill for EPs is the ability to communicate effectively in writing, particularly through chart documentation. Components of a well-documented chart include a clear, concise description of medical decision-making, as well as an adequate number of history, review of systems, and physical examination items to support correct billing levels. Data for these measures are supported by chart review.
Physician leadership and conflict resolution skills should also be measured. No known validated instruments exist to measure specific leadership skills of EPs. Measures may exist in aviation, anesthesia, or crew resource training for components of these skills, but they have yet to be adapted to EM.
Patient Care

Residents need to be evaluated not only for their ability to pick the right intervention for a particular patient complaint, but also for their ability to carry out the appropriate therapeutic intervention. Proposed outcome measures surrounding patient care are highlighted in Table 3.
Table 3. Emergency Medicine Relevant Patient Care Competencies

| Condition | Measure | Data Collection Method or Assessment Technique |
|---|---|---|
| Knowledge of proper procedure as defined by preexisting quality assurance programs (e.g., JCAHO, CMS) | Compliance with medication administration, e.g., aspirin and beta-blockers in patients with ACS | RR, S, DO |
| | Electrocardiogram ordered and interpreted within 30 min of patient arrival | RR |
| Knowledge of critical components of timely, appropriate diagnosis and management as specified, e.g., in The Clinical Practice of Emergency Medicine or national data on chief complaints | Documentation of pulse oximeter reading for patients presenting with shortness of breath | RR |
| | Administration of oxygen for patients with abnormal pulse oximeter readings | RR, S |
| | Chest radiograph ordered and properly interpreted in patients with shortness of breath or symptoms consistent with pneumonia | RR, S, CSR |
| | Urinalysis ordered for patients with pain in the lower abdomen or flanks | RR, S, CSR |
| | Pregnancy test ordered for all women of childbearing age with abdominal pain | RR, S, CSR |
| | Vital signs recorded and addressed/treated if abnormal | RR, S |
| | Serial abdominal exams performed and documented if prolonged ED stay for patients with abdominal pain chief complaint | RR |
| | Pain documented and treated when present | RR |
| | Presence or absence of peritoneal signs documented in patients with abdominal pain | RR |
| | Imaging considered for elderly patients with abdominal pain; if performed, results documented | RR, S, CSR |
| Universally accepted procedural competencies | Endotracheal intubation: documentation that endotracheal tube placement was confirmed by at least two measures; number of attempts and success rate | RR, S, CSR |
Considering the limited time, personnel, and financial support of residency programs, outcome data should parallel or dovetail with the information required for ongoing reporting to regulatory agencies such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and the Centers for Medicare and Medicaid Services (CMS). Individual resident data, as well as collective residency data documenting compliance with accepted therapeutic standards, can be expressed as percentage metrics. For example, if a patient presents with the chief complaint of chest pain, the metrics that could be documented include compliance with administration of aspirin and beta blockers, as well as the rapid ordering and interpretation of electrocardiograms.
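As a rough illustration, per-resident and program-level percentage metrics of the kind described here could be tallied from chart-review results. The resident names, chart records, and the single aspirin metric below are hypothetical; this is a sketch, not a prescribed reporting format.

```python
# Sketch: percentage compliance with one chest-pain metric (aspirin given,
# or a contraindication documented), aggregated per resident and for the
# program as a whole. All data are hypothetical.
from collections import defaultdict

charts = [
    # (resident, metric_met)
    ("Lee",   True), ("Lee",   True), ("Lee",   False),
    ("Patel", True), ("Patel", False),
]

per_resident = defaultdict(lambda: [0, 0])  # resident -> [compliant, total]
for resident, compliant in charts:
    per_resident[resident][1] += 1
    if compliant:
        per_resident[resident][0] += 1

for resident, (ok, total) in sorted(per_resident.items()):
    print(f"{resident}: {ok}/{total} = {ok / total:.0%}")

# Program-level aggregate supports program evaluation and directed
# educational interventions, as described in the text.
program_ok = sum(ok for ok, _ in per_resident.values())
program_total = sum(total for _, total in per_resident.values())
print(f"Program: {program_ok}/{program_total} = {program_ok / program_total:.0%}")
```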
Residency programs should identify common EM chief complaints using sources such as The Clinical Practice of Emergency Medicine,16 published clinical policies,17 or national data on chief complaints, although such national data are unfortunately limited. Steps critical to timely and appropriate diagnosis and management could then be identified as metrics for evaluating individual and program-specific outcomes. Metrics must be objective and universally accepted (i.e., not site-specific). Outcome measures should not be life-or-death dichotomies, but rather should assess whether the resident's patient care was appropriate and within acceptable norms for EM.
An example of a common EM chief complaint would be shortness of breath. Measures of appropriate care would include whether the resident obtained, documented, and properly interpreted a pulse oximeter reading and chest radiograph and whether he or she acted upon abnormal results. Another example of a chief complaint, abdominal pain, and its associated appropriate care metrics are elaborated on in Table 3.
Compliance with these metrics could also be assessed using simulated patient encounters, computer-based simulations, or an oral boards-type setting, many of which already exist in residency programs. Competency in patient care could be assessed retrospectively by having residents perform chart audits using predefined criteria for specific chief complaints. Residents could add this to their portfolios along with self-reflection comments, thus enhancing individual academic growth. The program director could gather the data from residents to assess how well the program as a whole teaches patient care related to various chief complaints and make directed educational interventions to correct deficiencies.
In addition to assessing residents' ability to choose the correct procedure for a particular chief complaint, program directors should also assess their ability to carry out procedures competently. It is not enough to simply attain a count of completed procedures and to document that number in each resident's semiannual evaluation. Instead, metrics for key procedures should be identified and residents should be assessed on compliance and complication rates (an example of metrics related to endotracheal intubation that programs might consider can be found in Table 3). Functionally, assessing resident competency with key procedures can be accomplished through a variety of means. Some programs dedicate a day to procedural competency, during which residents are assessed on their ability to perform procedures in a simulated setting. Other programs use checklists to document competency. Regardless of the method used, it is important that key metrics are identified in advance and communicated to the learner and to the faculty assessing the procedural skills.
Practice-Based Learning and Improvement
Practice-based learning refers to the ability to appropriately modify practice based on new literature and patient outcomes and to teach others current medical knowledge and standards. These skills, along with the workgroup’s proposed outcome measures, are listed in Table 4.
Table 4. Emergency Medicine Relevant Practice-based Learning and Improvement Competency

| Physician Task | Measure | Data Collection Method or Assessment Technique |
|---|---|---|
| Analyze and assess practice experience, perform practice-based improvement | Impact of PI program | Depends on project goal |
| | Learner ability to self-reflect, identify deficits, and improve | RR, CSR, P, SP |
| Locate, appraise, and utilize scientific evidence related to patient health problems | Ability to find a specific piece of information | Appraisal of search strategy |
| | Adherence to evidence-based recommendations from Cochrane Collaboration and Agency for Healthcare Research and Quality | RR, CSR |
| Competency in applying knowledge of study design and statistical methods to appraise medical literature | Adherence to the appraisal process, such as described in JAMA Guides to Medical Literature series | Topic appraisal using EBM techniques |
| Utilize information technology to enhance learning and improve patient care | Number of quantified prescription or order-entry errors | RR |
| Skilled in facilitating the learning of emergency medicine principles and practice by others | Impact of teaching on other practitioners | Teaching evaluations |
Competence in practice-based learning signifies that one is able to analyze and assess practice experience, reflect upon it, and identify and implement means by which to improve that practice.6 Accurate self-assessment is a critical component of this competency and can be measured by determining a learner’s ability to review the care he or she delivered and to identify future improvements for components of care. For instance, through the performance of follow-up to identify missed diagnoses, record review to assess adherence to national and local standards, and self-reflection of individual patient encounters via portfolios, a learner’s ability to identify and correct suboptimal practice patterns can be assessed. Outcome measures include, but are not limited to, improvements in the metrics outlined in other sections of this article.
Current Residency Review Committee for EM requirements stipulate that "Each resident must actively participate in emergency department continuous performance quality improvement (PI) programs."18 A natural extension of this requirement would be the design of outcome measures that evaluate the impact of such a program. Learners at all levels, from medical students to residents, have been found to have an influence on PI initiatives.19 By measuring this influence, one can accurately determine a learner's ability to identify a problem and implement a plan for improvement. One case series describes a cohort of internal medicine residents who identified overuse of intravenous catheters and then developed an intervention that decreased use from 43% to 27%.20 Because PI projects often impact outcomes involving multiple competencies, measures may generate results that can be applied across many domains of resident competency acquisition.
Residents must also be able to locate, appraise, and utilize scientific evidence related to patient health problems and to the larger population from which their patients are drawn. The ability to find pertinent information, to appropriately assess its validity, and to thoughtfully implement it into practice is critical to a practitioner's growth. Outcomes for this skill are tied to the assessment methods used. For example, in assessing one's ability to use tools to find evidence, one could determine whether the practitioner can locate a specific piece of information, or one could appraise the search technique itself. Objective assessment of appraisal and implementation of this evidence is problematic because of the inherent controversies in determining the "gold standard." However, by using objective evidence-based recommendations, such as those collected by the Cochrane Collaboration and the Agency for Healthcare Research and Quality (AHRQ), one can determine the frequency with which a practitioner deviates from the standard of care for specific diagnoses.
Residents must show competency in applying knowledge of study design and statistical methods to critically appraise medical literature. Numerous guides exist for systematically using evidence-based medicine techniques. The inherent subjectivity of the outcomes could be minimized by focusing on the appraisal process rather than on the conclusion. One method of structured appraisal, described in depth, is published in the Users’ Guides to Medical Literature series in the Journal of the American Medical Association (JAMA).21,22 Interpretation of rudimentary statistical tests is included in the board certification process.
Another skill is the ability to utilize information technology to enhance learning and improve patient care. Presumably, the use of information technology should decrease errors. Practitioners must be able to find and use information pertinent to positively impacting patient care. Assessment tools include 360-degree evaluations and practical examinations, which measure the ability to rapidly access pertinent information to guide care.6 Other surrogates for gauging the accuracy of information retrieval could include the examination of errors in prescription writing or order-entry errors, both of which can be quantified.
Finally, practice-based learning and improvement means that residents are skilled in facilitating the learning of EM principles and practice by students, colleagues, and other health care professionals. Standard evaluation forms can be used to assess the ability of a practitioner to teach others. To better assess outcomes, however, one would need to determine the impact of the teaching on the other practitioners of the health care team. This can be done in simulated settings using either global or checklist evaluations. Because of the specific skills required, several different outcome measures are likely needed to determine the efficiency and accuracy with which one can find and appraise information, apply it to one's practice to maintain the highest standard of care, and disseminate the knowledge to other health care providers.
Professionalism

The workgroup segmented model behaviors of professionalism into those considered most important to patients and their families and those deemed most important to employers and colleagues of EPs. Table 5 highlights the consensus group's proposed measures. The skills falling under the category of "sensitivity and respect for patients" are: 1) treating patients and family with respect; 2) demonstrating sensitivity to the patient's pain, emotional state, gender, and ethnicity issues; 3) shaking hands with the patient and introducing oneself to the patient and family; 4) showing unconditional positive regard for the patient and family; and 5) being open and responsive to input or feedback from patients and their families. The group agreed that the best assessment methods to evaluate the skills surrounding sensitivity and respect for patients would be the 360-degree evaluation, the Press Ganey Patient Satisfaction survey,14 the SDOT, and any of a number of means to record patient complaints.
Table 5. Emergency Medicine Relevant Professionalism Competency

| Physician Task | Measure | Data Collection Method or Assessment Technique |
|---|---|---|
| Exhibits professional behaviors toward patients and families | Demonstrates sensitivity to patient's pain, emotional state, and gender/ethnicity issues | 360, patient satisfaction surveys, PR |
| | Shakes hands with patient and introduces himself or herself to patient and family | 360, DO |
| | Shows unconditional positive regard for patients and families | 360, patient satisfaction surveys, PR |
| | Remains open/responsive to input/feedback of patients and families | 360, patient satisfaction surveys |
| Exhibits professional behaviors toward employers and colleagues | Honesty | 360, patient complaint |
| | Arriving to work on time | Time sheets, PR |
| | Willingly seeing patients throughout entire shift | Chart audit of patients seen in last hour of shift, PR |
| | Conducting appropriate sign-outs | PR |
| | Punctually completing medical records ("total instances of delinquent charting") | Chart completion audit |
| | Attending mandatory meetings and conferences | Conference attendance roster audit |
| | Lack of substance abuse | PR |
The following aspects of professionalism were considered to be important by employers and colleagues: honesty, timely compliance with scheduled requirements, and lack of substance abuse. The group decided that with regard to honesty, outcome measures could include the 360-degree evaluation, the SDOT, patient complaints, and any episode of falsification of medical records. The group noted that lying on the part of physicians is often very difficult to measure.
A number of professional skills fall under “compliance with scheduled requirements,” including arriving on time, prepared for work; willingly seeing patients throughout the entire shift; conducting appropriate sign-outs; and punctually completing medical records. The best outcome measures for this skill set are tracking punctuality through time cards or sign-in sheets, reviewing medical records to obtain the number of patients seen per shift or to uncover any instances of delinquent charting, and conducting peer evaluations related to sign-outs. Attendance at mandatory meetings and conferences is also an easy outcome to measure by means of a sign-in sheet or roll.
Appropriate outcome measures regarding “substance abuse” could be any reported violation of the ED’s substance abuse policy and failure to seek treatment when a problem has been identified. Because physician impairment policies vary by state, the standards of each state medical board will dictate specific outcome measures.
The difficulty in measuring certain aspects of professionalism raises the question of whether these aspects should be measured at all. Assessment and outcome measurement of professionalism are fraught with subjectivity and bias. Group discussion was limited not only in determining which elements of professionalism were most important to measure, but also in deciding which were even possible to measure. For example, it was noted that it is extremely difficult, if not impossible, to measure skills such as recognizing the influence of marketing and advertising, using humor and language appropriately, or properly administering symptomatic care.
Systems-Based Practice (SBP)
Emergency medicine educators can incorporate several measures into their curricula to document progressive improvement with respect to the SBP competency. The proposed measures can be found in Table 6.23–30
Table 6. Emergency Medicine Relevant Systems-based Practice Physician Tasks

| Physician Task | Measure | Data Collection Method or Assessment Technique |
|---|---|---|
| Out-of-hospital care | Resident discusses relevant information with out-of-hospital providers | RR |
| | Resident reviews out-of-hospital run sheet | CSR, S, RR |
| | Documentation of out-of-hospital care (i.e., aspirin and nitroglycerin given in the field) | RR |
| Modifying factors | Resource utilization | CSR, S, 360 |
| | Consultation of interpreter for language barrier | RR |
| Legal/professional issues | Explanation of AMA indications, risks, and benefits | RR, DO |
| | Explanation of alternative treatments and options | RR, CL, P, S, CSR |
| | Documentation of patient capacity for decision-making | RR, CL, P, S, CSR |
| | Documentation of invitation to return for recommended treatment | RR, CL, P, S, CSR |
| | Documentation of patient handoff at change of shift | RR |
| Diagnostic studies | Consideration of evidence-based decision rules: NEXUS C-spine rules,27 Ottawa ankle rules,28 Ottawa knee rules,29 Canadian Head CT rules30 | RR, CL, P, S, CSR |
| | Documentation of deviation from decision rules | RR, CL, P, S, CSR |
| | Documentation of procedures | RR, CL, P, S, CSR |
| Consultation and disposition | Timely notification of cardiac catheterization team for AMI | RR, CL, P, S, CSR, CR |
| | Timely notification of stroke team for acute CVA | RR, CL, P, S, CSR, CR |
| | Utilization of PSI31 or PORT32 score in CAP for disposition | RR, CL, P, S, CSR, CR |
| | CIWA33 score for alcohol withdrawal | RR, CL, P, S, CSR, CR |
| Consultant interactions | Appropriateness of consultation | RR, S |
| | Documentation of indications for consultation | RR |
| | Timely disposition (admission or discharge) | RR, S |
| Prevention and education | Appropriate discharge instructions written for understandability at the patient's level | RR, CL, CSR, S, OSCE, 360 |
| | Discharge instructions document a follow-up provider | RR |
| | Discharge instructions provide an explanation of medications | RR, S |
| | Reasons to return for further care | RR, S |
| | Appropriate discharge medications provided for key medical conditions, e.g., steroids/MDI in asthma, antibiotic choice for indication | RR, CL, CSR, S |
| Multitasking and team management | JCAHO ORYX Core measures:34 administration of aspirin and beta-blockers in AMI; PTCA within 90 min of arrival; thrombolysis within 30 min of arrival; oxygen assessment; blood cultures; initial antibiotic administration <4 hr; initial antibiotic choice for ICU and non-ICU patients | RR, CL, CSR, S, 360, WE |
| | Time to administration of pain medications | RR, S |
| | Anticoagulation in atrial fibrillation | RR, S |
| Nursing, staff, housestaff interactions | Appropriate role assignment and direction of team by the resident for a medical or trauma resuscitation | DO, 360, PR, S |
Successful outcomes assessment will require the employment of multiple measurement tools and will necessarily vary by institution depending on the relative strengths of each program. The consensus group chose specific criteria for each physician task based on generalizability across programs, acceptance as performance standards based on current guidelines (e.g., AHRQ standards), reliability, validity, and ease of implementation. The group also identified existing resources that support outcome measures for SBP.
Standards of care are available for more than 1,600 diseases on the AHRQ Web site (http://www.ahrq.gov/). Embedded within the site is a link to the National Guideline Clearinghouse (http://www.guideline.gov/), which provides more than 1,800 listings of practice guidelines based on disease, treatment, or quality assessment tools. The AHRQ also has a Web page entirely focused on outcomes and effectiveness (http://www.ahrq.gov/clinic/outcomix.htm).
The Joint Commission on Accreditation of Healthcare Organizations, recently renamed "The Joint Commission," introduced the ORYX31 initiative in February 1997 to integrate outcomes and other performance measurement data into the accreditation process. In addition, ORYX measurement requirements are intended to support Joint Commission–accredited organizations in their quality improvement efforts. In July 2002, accredited hospitals began to collect data on standardized, or "core," performance measures.31 The Hospital Quality Measures currently utilized by the Joint Commission and CMS are acute myocardial infarction (AMI), heart failure, pneumonia, and surgical infection prevention. With respect to EM, the relevant outcomes to be measured for AMI include administration of aspirin and beta blockers, percutaneous transluminal coronary angioplasty within 90 minutes of arrival, or thrombolysis within 30 minutes of arrival. For pneumonia, they include oxygen assessment, blood cultures, antibiotic administration within 4 hours of arrival, and antibiotic choice for intensive care unit (ICU) and non-ICU patients. One caveat with respect to these measures is that residents cannot control certain aspects of these time-critical events. For instance, time to electrocardiogram (ECG) is institution-dependent, and time to needle from the time of notification is entirely dependent on the invasive cardiologist and the cardiology team framework; therefore, residents can only be assessed on timely notification of cardiology.
Using the 3-hour window for stroke team activation for tissue plasminogen activator administration, or door-to-needle times for AMI as examples, a resident’s records can be reviewed for timing or documentation of notification of the stroke team after interpretation of the initial head computed tomography (CT) or notification of the catheterization team after interpretation of the initial ECG. However, door-to-needle time as a whole encompasses other institutional factors, such as time to initial ECG and time for arrival of the consulting service. Each of these metrics is beyond resident control; however, some would argue that these measures could be used as institutional metrics, providing an indicator of appropriateness of the training environment for graduate medical education.
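The attribution point above can be sketched numerically: from a chart's timestamps, one can separate the resident-controlled interval (ECG interpretation to catheterization-team notification) from the institution-dependent door-to-needle total. The timestamps below are hypothetical.

```python
# Sketch with hypothetical timestamps: the resident metric is the interval
# from ECG interpretation to catheterization-team notification; the
# door-to-needle total also includes institution-dependent intervals
# (time to initial ECG, consultant arrival) beyond resident control.
from datetime import datetime

fmt = "%H:%M"
door        = datetime.strptime("10:00", fmt)
ecg_read    = datetime.strptime("10:12", fmt)  # institution-dependent ECG turnaround
cath_called = datetime.strptime("10:15", fmt)  # resident-controlled step
needle      = datetime.strptime("11:05", fmt)  # dependent on cardiology team

resident_minutes = (cath_called - ecg_read).seconds // 60
total_minutes = (needle - door).seconds // 60
print(f"ECG-to-notification (resident metric): {resident_minutes} min")
print(f"Door-to-needle (institutional metric): {total_minutes} min")
```

On these numbers, the resident accounts for only a small slice of the total, which is why the text argues the remainder is better treated as an institutional metric.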
Other outcomes that easily could be evaluated using record review and checklists in the case of AMI, for example, include documentation of aspirin and beta-blocker administration. Residents can be evaluated based on their documentation of medication administration in the ED or by out-of-hospital caregivers. If medications were not administered, resident evaluation should be based on documentation of appropriate contraindications. The checklist format allows for items to be scored as either binary (“Yes” or “No”) or by level of compliance using a Likert-type measurement (total, partial, or incorrect) for each individual parameter. The individual items can then either be scored as a composite (percentage of items performed) or an all-or-none measurement.32 Missing or incomplete documentation of care is interpreted as not having met the accepted standard.
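The two scoring schemes just described can be sketched as follows. The checklist items and compliance values are hypothetical; the sketch only illustrates composite versus all-or-none scoring, with missing documentation counted as not meeting the standard.

```python
# Sketch of the two checklist scoring schemes described in the text:
# composite (percentage of items met) versus all-or-none. Items and
# values are hypothetical.

checklist = {
    "aspirin_given_or_contraindication": "total",
    "beta_blocker_given_or_contraindication": "partial",
    "ecg_within_30_min": "total",
    "vitals_addressed": None,  # missing documentation
}

def item_met(value):
    # Binary interpretation of the Likert-type levels: only full ("total")
    # compliance counts, and missing documentation counts as unmet,
    # matching the rule that absent documentation fails the standard.
    return value == "total"

met = sum(item_met(v) for v in checklist.values())
composite = met / len(checklist)     # percentage of items performed
all_or_none = met == len(checklist)  # every item must be met

print(f"Composite score: {composite:.0%}")
print(f"All-or-none: {'pass' if all_or_none else 'fail'}")
```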
Chart-stimulated recall oral exam cases can be tailored to assess resident understanding of specific systems-based issues. Areas of assessment might include the resident’s use of clinical decision rules for utilization of diagnostic studies (e.g., NEXUS23 criteria for c-spine clearance) or disposition (e.g., PORT28 score for pneumonia or CIWA29 score for alcohol withdrawal).
One outcome measure for the requisite physician skill of multitasking and team management would be time to administration of pain medications. Core measures for JCAHO and ORYX specify guidelines for performance and outline the way in which quality is to be assessed.31 Using these metrics, a program director also can measure individual resident performance and can determine the aggregate performance of the program. The information will yield formative feedback at both the individual and the program levels. Repeat measurement will allow systematic improvement and will provide ample documentation of a systematic approach to improvement for accreditation agencies.
An EM-specific simulation curriculum has been designed to address SBP topics.33 One case involves a patient with a language barrier who suffers from an AMI and who wishes to leave AMA. Another case involves an intoxicated patient with a Level 1 pelvic trauma requiring transport to a specialized facility. SBP issues pertinent to the case include transport protocols, understanding of the Emergency Medicine Treatment and Active Labor Act (EMTALA), and knowledge of local regulations regarding disclosure of driving under the influence of alcohol. Another innovative assessment method for SBP involves the use of simulation for presenting morbidity and mortality conferences. In this scenario, the resident must confront significant issues with patient advocacy, consultation and disposition, and team management.34 OSCEs may also have a role in assessing items such as modifying factors (cultural issues), legal/professional issues (AMA), prevention, and education.
Portfolios may also provide an opportunity for educators to gather data to measure systems-based practice outcomes. An example of an SBP-specific portfolio entry would be a resident quality assurance project to determine institutional performance with respect to measures such as aspirin and beta-blocker administration in patients with AMI. Outcome measurement would use these results (before and after) to evaluate the impact of the SBP project.
Data collected from 360-degree assessments could also potentially be used for SBP measures. These could include rating a resident’s ability to provide appropriate discharge instructions or to converse with a patient about leaving AMA. For example, was the resident discussing the instructions at the patient’s level of understanding? Did the resident provide a follow-up provider and appropriate time interval for follow-up? Did the resident indicate specific criteria (e.g., worsening signs or symptoms) for which medical attention should be sought immediately? Were appropriate medications provided, and were they explained to the patient or caregiver?