
ACADEMIC EMERGENCY MEDICINE 2011; 18:545–548 © 2011 by the Society for Academic Emergency Medicine

Abstract

Medical residency is an educational enterprise directed toward producing clinicians who recognize and correctly manage disease. While formal graduate medical education provides didactics and bedside teaching to improve knowledge, individual learning efforts are essential to the educational experience. Keeping track of patient outcomes after disposition from the emergency department (ED) is a useful exercise for uncovering gaps in an individual's knowledge and deficiencies in systems-based care. In reviewing the agreement between admission and discharge diagnoses of a single resident over 4 years of residency, significant improvement in diagnostic accuracy was observed. This method of self-correction has the potential to supplement formal residency education in emergency medicine.

To cover the vast field of medicine in four years is an impossible task. We can only instill principles, put the student in the right path, give him methods, teach him how to study, and early to discern between essentials and non-essentials.1

Residency is learning medicine while receiving guidance. Inevitably, there will be mistakes in judgment that result in incorrect diagnosis, treatment, or harm. Securing superior instruction in avoiding errors, so as to emerge a competent physician, weighs heavily in how applicants select a residency; indeed, the top considerations in a recent study were the reputation of the institution, the facilities, and the residency director.2 Fledgling interns want to feel that they have a dedicated institution behind them that will assist in their learning endeavors. But residency is not just about credentials and reputation: residents are attempting to develop habits that will reduce errors, promote patient care, and ultimately help them sleep easily after any given shift.

Residency training has been shown to decrease the rate of malpractice claims in comparison with non–residency-trained practitioners.3 The amount of variance in learning between and within residencies has yet to be explored. While the Accreditation Council for Graduate Medical Education Residency Review Committee in Emergency Medicine mandates a minimum of 5 hours of didactic time per week and a set curriculum of basic knowledge (the Model of Clinical Practice),4 the implementation of this requirement varies between institutions. As witnessed by any casual observer during weekly didactics, individual resident participation within any given conference can vary from intense note-taking to comatose osmosis. Accordingly, recent data suggest that conference attendance does not correlate with in-training examination scores,5 although the generalizability of this observation to clinical outcomes and patient care is less clear.

While individual learning styles and participation may vary, attempts to standardize the educational content and experience of residency serve to ensure that all residents receive adequate training and knowledge that result in improved patient care. The struggle to find the right balance between "service-related activities" and newer, much-lauded "personalized education," such as high-fidelity simulation, audience response systems, personalized learning approaches, and online videos and tutorials, will inevitably continue. While residency directors recognize there must be a balance, no one has yet found a way to make residency all education, without any mind-numbing drudgery.6

Learning from one's mistakes is an essential element of medical education, whether these are addressed publicly or individually. Most residency programs have some forum for review of both individual and systemic mistakes, although these differ in content and efficacy between institutions.7 The nearly ubiquitous morbidity and mortality conference is one such exercise, but triggers for cases and quality review programs vary, and while most program directors and faculty feel these conferences are a worthwhile exercise, rarely do they highlight all individual errors.8 Physician errors routinely occur, but how can we learn from mistakes that are never caught?

Like many beginning their internship, I feared that I would commit mistakes that would slip through the cracks, only for me to repeat them. While I worried that I would need external oversight to ensure that I did no harm, I happened on a timely and appropriate remedy. My solution lay within a speech given by Sir William Osler to medical students over 100 years ago. Referring to the pocket book in which they should record the patients they saw on the wards, he remarked:

Begin early to make a three-fold category—clear cases, doubtful cases, mistakes. And learn to play the game fair, no self-deception, no shrinking from the truth; mercy and consideration for the other man, but none for yourself, upon whom you have to keep an incessant watch. You remember Lincoln’s famous mot about the impossibility of fooling all of the people all of the time. It does not hold good for the individual, who can fool himself to his heart’s content all of the time. If necessary, be cruel; use the knife and the cautery to cure the intumescence and moral necrosis which you find in the posterior parietal region, in Gall and Spurzheim’s area of self-esteem, where you will find a sore spot after you have made a mistake in diagnosis. It is only by getting your cases grouped in this way that you can make any real progress in your post-collegiate education; only in this way can you gain wisdom with experience. It is a common error to think that the more a doctor sees the greater his experience and the more he knows.1

While frequently we look for administrative or educational solutions to resident shortcomings, I realized that my education was largely my responsibility. At our institution, residents had the onerous task of dictating all charts and then signing them a few days later once transcribed—truly part of the "scutwork" that no one felt properly compensated for or that seemed to aid our education in any way. After reading Osler's pronouncement, I realized that this chore could have a silver lining for my education. If I waited a month after seeing patients to sign their charts, the encounters and my clinical impressions would still be fresh in my mind, but I could check my hypotheses against the inpatient admission or a return to the emergency department (ED) in the intervening weeks. I quickly discovered that while I had hitherto been feeling rather secure during internship, I still had a lot to learn.

The next logical step was to quantitate how bad I really was. Realizing that tracking down everyone I discharged would be time-consuming, I began to focus on my track record with the patients I had good follow-up on: those who got admitted to an inpatient service. As I reviewed my charts, I started a simple Excel spreadsheet that Osler would appreciate: a record of every patient admitted to the hospital, my working diagnosis at the time of admission, the final diagnosis at discharge, and evidence supporting the discharge diagnosis. In time this became a compulsive desire to know the outcome for each patient seen—I could not sign a dictation, no matter how routine, without scanning further in the chart to find a potential omission in my workup.
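Such a log needs nothing more elaborate than a flat file. Below is a minimal sketch in Python of how a record like this might be kept and tallied; the column names, file layout, and agreement flag are assumptions for illustration, not the actual spreadsheet.

```python
# A minimal sketch of the follow-up log described above.
# Column names and file layout are hypothetical, not the actual spreadsheet.
import csv
from collections import defaultdict

# One row per admitted patient:
# pgy_year, working_diagnosis, discharge_diagnosis, evidence, agree ("yes"/"no")

def agreement_by_year(path):
    """Return the fraction of ED diagnoses agreeing with the discharge
    diagnosis, grouped by postgraduate year."""
    counts = defaultdict(lambda: [0, 0])  # pgy_year -> [agree, disagree]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            idx = 0 if row["agree"].strip().lower() == "yes" else 1
            counts[row["pgy_year"]][idx] += 1
    return {year: agree / (agree + disagree)
            for year, (agree, disagree) in sorted(counts.items())}

if __name__ == "__main__":
    print(agreement_by_year("admissions_log.csv"))
```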

What I collected over 4 years was an impressive list of near misses and mistakes, as well as some good saves to which I had been oblivious. Most of these would not have been picked up by the triggers for our departmental morbidity and mortality conference, and rarely were they brought to my attention by the inpatient team or anyone else who saw the patients afterward.

I saw the danger around sign-out and rushing a patient at the end of a shift. A man was brought in by police for a minor laceration repair and medical clearance before being seen by psychiatry for a psychotic break. With fifteen minutes left in my shift, I irrigated and repaired the laceration with an alacrity fostered by a desire to get home to a waiting bed after a grueling 12-hour onslaught of patients. I confess I barely spoke with him other than to have him confirm that people were breaking into the abandoned house he called home and persecuting him. Handed off to the next physician while he waited to go to the psychiatry service, he ended up in the medical intensive care unit (ICU). In my haste I had failed to notice the asterixis, ataxia, and pale complexion born of a severe gastrointestinal bleed and hepatic encephalopathy. His psychosis resolved nicely with a few units of blood. There were multiple factors at work: improvements to hand-off, shift change, and communication between the prehospital team and me likely would have lessened the chances of this catastrophe. Based on this experience, I made new resolutions in my personal approach to patients toward the end of my shift and during sign-out.

I learned the dangers of neglecting admitted patients. I received a routine ICU transfer because the hospital was full; he was promptly admitted. I waited for him to ascend to the nether regions of the hospital, sent some labs that I never followed up on, and signed him out. He made the day shift interesting for my fellow resident by having a hyperkalemic arrest (with associated successful precordial thump!) while waiting to go upstairs. While I had technically done nothing wrong, diving into the chart revealed a host of problems: the lab had lost several samples, potassium results were delayed because they were elevated and needed to be rechecked (which was communicated to a nurse, but never to a physician), the upstairs team had not been briefed by the transferring ICU about the patient's most recent labs, and several medications given at the outside hospital prior to transfer may have precipitated the arrest. This, and several similar cases, resulted in a departmental review and an action plan to improve the reporting of critical laboratory results and communication between transferring hospitals, the ED, and the ICU.

Further review of patient charts revealed constant reminders of the shortcomings of my waning physical exam techniques. An elderly woman concerned about some painless rectal bleeding, who I brusquely determined was simply having symptomatic internal hemorrhoids, stubbornly decided to go to surgery clinic for another opinion. A few months later she was battling advanced rectal cancer.

I had no shortage of hubris as a senior resident. I decided to read my own x-rays on a minor trauma patient to get the “meat moving” on a gridlocked day. I remember assertively stating to her and her husband that her chest x-ray looked fine to me as I sent her home. She had to be called back a few hours later when the radiologist asked where the follow-up x-ray was on her moderate pneumothorax. She returned and had a chest tube placed, and I was painfully made aware that my approach to reading radiographs needed refinement.

There could have been an entire spreadsheet of patients with nonspecific complaints, coughs, and atypical chest pain at whom I threw antibiotics, pain medication, anti-inflammatories, cough suppressants, and lectures on narcotic-seeking behavior, and who came back when their pulmonary emboli did not get better on my therapy.

I had to face that I was jaded at times. I sent home supposed pain-seeking misanthropes who came back and saw more compassionate residents who found central cord compression, spinal stenosis, and discitis. I was reminded that losing compassion and ignoring complaints—even from the most emotionally painful patients—results in diminished care and sometimes missing an important diagnosis.

And at one point, during my fourth year as a senior supervising resident, I committed the unpardonable ED sin. I sent home a rupturing abdominal aortic aneurysm as “back pain.” With Percocet. (Her subsequent surgery was, fortunately, uneventful.)

But while the retrospection was informative in finding my errors, I saw some good judgment as well: the patient in the busy ED in whom I noticed a new murmur, whose aortic regurgitation was confirmed on an echo, and who ended up with an aortic valve replacement that is still going strong. A patient with nonclassic chest pain who "just didn't seem right," whom I admitted despite the cardiology team's complaints about yet another bogus chest pain admission; he had his 99% circumflex lesion stented the following morning. I followed up on a healthy 41-year-old woman who collapsed at triage; she received textbook care from a team of nurses and medics and a picture-perfect handoff to the cath lab team, and she walked out of the hospital with her daughter within 48 hours, despite her 100% right coronary artery occlusion.

Residency did, at times, turn into a daily grind from which I felt I gleaned little new knowledge on any given shift. Particularly as one gains experience, it is easy to assume a learning plateau and stop trying to learn from mistakes. Setting up a reliable way to check up on oneself is another step in preventing the profession from becoming a boring occupation that fails to teach or excite the practitioner. Looking back over 4 years, I can clearly say that my dogged routine of checking the chart on each patient I dictated proved more valuable than any other educational effort that residency provided me, simply because I was forced to stare straight into the face of my own deficiencies.

After residency, I went back to examine my track record. The results are shown in Table 1. A total of 932 admissions over the course of 4 years were recorded; 93 were omitted due to lack of follow-up data (the patient left against medical advice, no discharge note was written, or the records were lost). The resulting data points were stratified by whether or not the ED diagnosis agreed with the discharge diagnosis. I then compared my diagnostic accuracy with that of the inpatient team, who in most cases spent more time and had more information at their disposal to make the final diagnosis. Finally, I looked to see whether I improved with time, and whether any improvement was statistically significant.

Table 1. Agreement Between ED and Discharge Diagnosis by Year of Residency Training

PGY | Number of Diagnoses in Agreement | Number of ED Diagnoses Found to Be Incorrect | Agreement Percentage*
1   | 154 | 38 | 80.2
2   | 162 | 24 | 87.1
3   | 367 | 27 | 93.1
4   | 63  | 4  | 94.0

PGY = postgraduate year.
*Chi-square via regression by prevalence (df = 3) = 24.330, p < 0.0001.

My admitting diagnosis agreed with the discharge diagnosis with increasing frequency over the course of 4 years of residency, at rates of 80.2% during the first year, 87.1% during the second year, 93.1% during the third year, and 94.0% during the fourth year. Chi-square analysis by prevalence demonstrates a statistically significant increase in agreement between the ED diagnosis and the discharge diagnosis across years (p < 0.0001).
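For readers who want to check the arithmetic, the reported statistic can be recovered directly from the counts in Table 1. The sketch below, in Python with scipy (an assumed dependency; any statistics package will do), applies a plain chi-square test of independence to the 4 × 2 table of agreements and disagreements and reproduces the reported value of 24.330 with df = 3:

```python
# Reproduce the chi-square statistic from the Table 1 counts.
# A chi-square test of independence on a 4 x 2 table has df = 3; the table
# footnote describes the analysis as "chi-square via regression by
# prevalence," but the same statistic falls out of the contingency table.
from scipy.stats import chi2_contingency

table = [
    [154, 38],  # PGY-1: diagnoses in agreement, diagnoses incorrect
    [162, 24],  # PGY-2
    [367, 27],  # PGY-3
    [63, 4],    # PGY-4
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.2g}")
# -> chi-square = 24.330, df = 3, p = 2.1e-05 (i.e., p < 0.0001)
```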

These data probably make the worst scientific study in the world—a single resident in a single program limited to only those patients admitted to the hospital, with no control group, no blinding, and every bias and confounder your friendly neighborhood statistician could postulate. But they made the best 4-year personal review (and ego check) imaginable. My review of initial impressions against the discharge diagnosis over the course of residency served to demonstrate three things:

First, a resident can improve and learn over time from his or her mistakes. Following up on inpatient admissions provided an invaluable educational experience. Over the course of residency, my ability to make a diagnosis that agreed with the discharge diagnosis improved in a statistically significant fashion. While many limitations prevent generalization of these results to all residents and residency programs, following patient outcomes still appears to be a worthwhile approach to resident education.

Second, despite the many advances in resident education and technology, ours is a profession in which learning should take place on a daily basis, while we engage in the routine of seeing patients and charting their courses. While medical charts today are optimized for billing, their original purpose was to track symptoms and signs to aid practitioners in their recognition of disease. By turning the act of signing a chart into an exercise in determining whether my impression was accurate, I was forced to learn from successes and failures that would otherwise have remained hidden. Nobody else can be expected to do that for me, and by doing it myself I enhanced my clinical education immensely.

Finally, I am far from done. My diagnostic accuracy will never be 100%. Thankfully, as an attending, I still have to sign my charts.

Acknowledgments


The author acknowledges the University of Cincinnati Emergency Medicine Residency for the experiences related herein, the remarkable residents of the Class of 2009 for keeping him afloat, and Chris J. Lindsell, PhD, for statistical assistance.

References
