Assessing Patient Care: Summary of the Breakout Group on Assessment of Observable Learner Performance

Authors


  • The list of breakout session participants can be found in the appendix of a related article on page 1486.
  • The paper reports on a breakout track of the Academic Emergency Medicine consensus conference “Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success” held May 9, 2012, in Chicago, IL.
  • The authors have no relevant financial information or potential conflicts of interest to disclose.

Address for correspondence and reprints: James Kimo Takayesu, MD, MS; e-mail: jtakayesu@partners.org.

Abstract

There is an established expectation that physicians in training demonstrate competence in all aspects of clinical care prior to entering professional practice. Multiple methods have been used to assess competence in patient care, including direct observation, simulation-based assessments, objective structured clinical examinations (OSCEs), global faculty evaluations, 360-degree evaluations, portfolios, self-reflection, clinical performance metrics, and procedure logs. A thorough assessment of competence in patient care requires a mixture of methods, taking into account each method's costs, benefits, and current level of evidence. At the 2012 Academic Emergency Medicine (AEM) consensus conference on educational research, one breakout group reviewed and discussed the evidence supporting various methods of assessing patient care and defined a research agenda for the continued development of specific assessment methods based on current best practices. In this article, the authors review each method's supporting reliability and validity evidence and make specific recommendations for future educational research.
