Keywords:

  • assessment;
  • medical education;
  • GME

Abstract

This discussion provides faculty, or anyone conducting their own assessment, with five key strategies for useful assessment in graduate medical education (GME) and the broader higher education environment, with or without the benefit of pre-existing learning objectives. Higher education faculty, including those in GME, are generally not trained to develop their own assessment strategies (Hutchings, 2010). Consequently, developing assessment “from the ground up” can be an arduous process, particularly when no previous strategies are in place. We introduce the strategies using a free-standing evidence-based medicine curriculum as a case. The five strategies are: (1) know your situation; (2) clarify your purpose; (3) use what you have; (4) fit the instrument to your purpose, not the other way around; and (5) get consistent and critical feedback. These five strategies should prove beneficial in ensuring that effective assessment data are gathered efficiently as medical education moves towards a competency-based model.


Introduction

The impact of the accountability movement of the last decade has reached all areas and levels of education. However, unlike the K–12 environment, higher education assessment oftentimes must rely on individual instructors or department leaders for the development of assessment tools and strategies. Few national standardized instruments are readily available for many higher education applications, including graduate medical education (GME). The task is especially complicated because faculty are traditionally not trained to develop their own assessment instruments or strategies (Hutchings, 2010; Kramer, 2008). Consequently, faculty must develop assessment “from the ground up,” which can be an arduous process, particularly when no current learning objectives or assessment strategies are available to build upon. Any educational environment without a well-crafted, structured set of learning objectives can fall victim to this problem. Accordingly, the following discussion provides faculty, or anyone conducting their own assessment, with five key strategies for useful assessment in GME and the broader higher education environment, with or without the benefit of pre-existing learning objectives.

We begin by introducing general assessment practices and how they relate to integrated course design (Fink, 2013), a central theoretical framework underlying the five key strategies. Next, we explore each of the strategies while establishing a freestanding evidence-based medicine (EBM) curriculum as a case study. Finally, we review how these strategies can cultivate an integrated course and assessment design approach as well as how they relate to upcoming changes to medical education policy.

Background

Assessment Defined

Student learning assessment is an ongoing process of (1) developing clear learning objectives; (2) providing students opportunities to meet the objectives; (3) systematically gathering and interpreting evidence as to how well students are meeting those objectives; and (4) using the findings to improve student learning (Suskie, 2009). The sheer number and diversity of disciplines in the university, medical education included, have led to a variety of assessment approaches as well as multiple languages used to describe these approaches (Suskie, 2009). Although assessment occurs at the institution, program, and curriculum/course levels, throughout this discussion we use the term “assessment” to refer to the assessment of student learning at the course or curriculum level.

Assessment and Teaching Strategies in Graduate Medical Education

Assessment, in some form, has been an expectation of the Accreditation Council for Graduate Medical Education (ACGME) standards since 1999 (Holt, Miller, & Nasca, 2010). Approximately 98–99 percent of accredited pipeline and subspecialty programs self-report using at least one type of assessment to examine resident performance on the ACGME Core Program Standards; however, the overwhelmingly dominant format remains direct observation of resident performance (90.0 percent) (Holt et al., 2010). Direct observation, while useful for clinical skills, becomes more difficult and less useful in assessing non-clinical standards such as Medical Knowledge or Practice-based Learning & Improvement, where the predominant assessment method is still either project-based assessment (8 percent) or in-house written examination (31 percent) (Holt et al., 2010). GME educators therefore face the challenge of designing assessment methods for non-clinical standards that are as rigorous as their clinical counterparts.

Integrated Course Design

The integrated course design approach can be particularly useful for designing rigorous assessment, learning objectives, and teaching and learning activities in any higher education setting. Proposed by Fink (2003), the integrated course design model brings both fluid and relational elements to what could otherwise be viewed as a rigid college course design process. The model comprises four interconnected parts: (1) learning goals, (2) feedback and assessment, (3) teaching and learning activities, and (4) situational factors. The key to this integrated model is that each of these components informs the others in a fluid process rather than a linear “checklist.”

The integrated course design model requires instructors to ask five essential questions:

  1. “What are the important situational factors in a particular course and learning situation?
  2. What should our full set of learning goals be?
  3. What kinds of feedback and assessment should we provide?
  4. What kinds of teaching and learning activities will suffice, in terms of achieving the full set of learning goals we set?
  5. Are all components connected and integrated, that is, are they consistent with and supportive of each other?” (Fink, 2003, p. 63)

Each of these questions must be considered across all elements of the course design process. The benefit of an integrated design lies in the simultaneous consideration of content, learning objectives, teaching and learning activities, and assessment instruments as integral elements when conceptualizing a new course. The model also flips traditional thinking about course design on its head by placing assessment before teaching and learning activities. This backward design has the instructor start by asking, “What do I want students to be able to do after this course?” (learning goals/objectives). This is followed by, “What would the students have to do to convince me that they achieved these goals?” (assessment). And finally, “What do they need to do during the course to be able to succeed at these activities?” (activities) (Fink, 2003, p. 63).

Case Setting

In May 2011, a four-member team of instructors at a Graduate School of Medicine (GSM) at a large, public regional university medical center was asked to redesign the Office of Medical Education and Development. The team inherited the existing EBM curriculum, which dealt exclusively with research and statistical methodology. Until that time, there had been no attempt to generate either learning objectives or assessment instruments to evaluate residents' competency in these areas. Likewise, there had not been any attempt to assess the needs of this population with regard to research and statistical skills. This case is used to illustrate the authors' personal experiences; it is not used to introduce or report on research within the particular setting.

The Five Strategies

As our team reflected on our experiences, the following five strategies crystallized as keys to our success (Figure 1).

Figure 1. The Five Strategies Can Be Visualized as a Recursive Series of Steps: (1) Know Your Situation; (2) Clarify Your Purpose; (3) Use What You Have; (4) Fit the Instrument to Your Purpose; and (5) Get Consistent and Critical Feedback. Curved lines represent feedback loops in the strategies such as continuing to clarify your purpose (Strategy Two) after each subsequent step in the process. Assessing situational factors (Strategy One) is placed alongside the others because it influences all other parts of the heuristic.

The five strategies became a heuristic we visualized as a recursive series of steps. Strategy One is placed alongside the others to illustrate how assessing situational factors influences all other parts of the heuristic. Also, several feedback loops exist within these steps, where the instructor is encouraged to return to previous steps as new information becomes available. For example, as the instructor finds additional information from existing data (i.e., Strategy Three), she is better able to clarify her purpose (Strategy Two). The next section describes each of these strategies in the context of the case study that gave rise to them.

Strategy One: Know Your Situation

The first strategy is closely tied to integrated course design (Fink, 2013). To know your situation is to acknowledge the importance of considering situational factors when designing effective assessment strategies, just as an integrated design acknowledges the same idea for course development. This consideration is especially relevant for GME. Teaching in the GME environment is uniquely challenging compared to both undergraduate medical education and training in other disciplines, due largely to the “situational” realities of being a physician. Properly assessing situational factors using this strategy requires attention to (1) the learning environment, (2) population-specific factors, and (3) the availability of resources. These situational factors were brought into consideration throughout the construction of the assessments in this case, which was necessary because it became clear that the instructors aimed to assess a series of topics no one wanted to learn, in a place where no one had the time to learn them.

First, the learning environment should play an important role in the way an assessment project is approached, designed, and conducted. In an integrated course design, assessment strategies, teaching and learning activities, and learning outcomes are all affected by situational factors, one of which is the learning environment of the course and the students in it (Fink, 2013). With regard to the course itself, some necessary questions to ask are: What is being taught? What is the nature of the course itself? What is the environment in which the course is being taught? Along with these questions come the population-specific factors, which also shape the way an assessment will be structured and conducted. These factors can include demographic characteristics such as student age, as well as learning style, background experience with the topic, and students' external obligations.

In our case, we were teaching statistics and research methods, which are notorious for invoking some degree of fear or anxiety in students (Fink, 2013; Garfield & Gal, 1999). To further complicate the issue, we were teaching non-statisticians with little research experience. Previous research has shown not only an inadequate level of resident competence in research methods and statistics, but also a general anxiety about, or resistance to, learning these topics (Hack, Bakhtiari, & O'Brien, 2009; Novack, Jotkowitz, Knyazer, & Novack, 2006; Windish, Huot, & Green, 2007). The lecture series was also not associated with a grade, which made attendance, and accountability more broadly, a voluntary endeavor. Without mandatory attendance or a grading system, it was necessary to adopt an assessment approach that would not be bound to grading but would instead focus on educative assessment (Fink, 2013). To this end, participants were encouraged to work together on each activity, and specific time was set aside at the start of every workshop to go over the answers as a large group. Both students and instructors received immediate feedback from this process; not only did this formative assessment give the students an opportunity to practice what they had learned, but it also allowed the instructors to learn which concepts students continued to struggle with.

In terms of population-specific factors, our team had to pay attention to the traits of the students and how those traits play into curriculum design and development. Our population was medical professionals who faced an 80-hour work week (with up to an additional 80 hours of moonlighting) and 16- to 24-hour shifts, and who were also expected to engage in scholarly activity (ACGME, 2013). Their most immediate obligation was to their residency work, so we needed to be cognizant of that as we planned both assessment and teaching and learning activities. Along with these factors, we had to meet the needs of residents who were international, who were of differing ages, and who had varied familiarity with the subject matter. Our population required assessments that were understandable to students of differing nationalities, short enough not to be cumbersome, and written in a way that was easy to relate to their practice (Sahai, 1999).

Finally, the availability of resources should be taken into account because even the best assessment plan cannot succeed without the infrastructure to support it. The type of resource that is most restricted will likely vary across situations. For example, large educational departments may not have the same financial barriers as small ones and could therefore conduct an assessment on a larger scale. On the other hand, the size of a larger department may create more of a strain on the time and/or personnel resources available. Some example questions to ask oneself at this point are: What is the budget for the assessment? What is the size of the assessment team? What support resources (e.g., information technology or statistical support offices) are available? How long, and how often, do you have to deliver the material?

In our case, the four instructors made up the assessment team; we had no budget and no additional support resources. The consequence of having no additional support and a small team was that we were limited to a very small-scale, inexpensive assessment plan. The lectures were to be given in whatever time slots the residency departments could fit the topics into their education, which further emphasized the need to make any assessment plan as efficient as possible. It also meant that we needed to be sure our assessments not only gave feedback on resident performance but also could be used as a tool for learning. Simply put, we could not afford to waste residents' time with irrelevant assessment tools. Taking this educative assessment approach (Fink, 2013) allowed us to fit robust assessment instruments into the limited time we had to deliver course content. Embedding assessment in this way maximized the gain for both the student and the instructor because it turned the assessment process into a teaching and learning activity rather than an add-on to instruction (Patton, 2012). Specifically, we embedded the pretest measures as the opening activity for each workshop by saying, “The first activity we will be doing this week is taking a quick self-assessment of your confidence (indirect assessment) and knowledge (direct assessment) of some of the topics we plan to cover this week.” Similarly, the post-workshop assessment was framed as the final activity rather than something “extra” administered after the workshop had concluded.

Strategy Two: Clarify Your Purpose

Properly assessing the situational factors inherent to any environment will aid the instructor in clarifying the purpose for the proposed assessment strategy. The purpose of an assessment strategy should be more global than the outcomes of a single instrument. It can, and dare we say “should,” go beyond simply discovering what residents know about a topic in order to assign them a grade. Knowledge acquisition may be the purpose of a single instrument; however, the purpose of the whole assessment framework may include assessing students' attitudes towards the topic (and course activities) and/or assessing their ability to apply the concepts from the course to tasks in the future.

An assessment's purpose can be drawn from a well-defined set of learning objectives, but doing so is more of a challenge when no structured objectives are in place. In the latter situation, a more global approach to clarifying your purpose can start with the assessment team reflecting on two important questions: How will the instructor(s) benefit from the assessment results? And how will the students benefit from the assessment results? Answers to these questions will either be the genesis of new learning objectives or inform improvements to existing ones. Moreover, beginning with these two global questions provides a more forward-looking view of assessment because it puts the emphasis on how the results will be used rather than merely on what the results will be.

A number of parties stand to benefit from any assessment process. The assessment team must judge how these multiple stakeholders will be affected by the data they plan to gather. The values and needs of each group must be taken into account when identifying the purpose of any assessment instrument. Broader stakeholder groups include the other instructors in the educational program, any office that holds the curriculum accountable, and those who benefit indirectly from the assessment, such as future students. Still, with a primary focus on teaching and learning, two of the most immediate stakeholders are the students and the instructor(s).

In our situation, the primary stakeholders consisted of the assessment team (instructors), the students, the GSM faculty, and the GSM administration. Everyone needed to know what gains were made in the residents' knowledge after participating in the lecture series. More specifically, the instructors needed to know the residents' baseline knowledge and their feedback on the lectures in order to improve the course. In turn, the residents needed feedback to inform their own understanding of the course content. Further, the administration depended on our assessment and learning outcomes for the institution to maintain its ACGME accreditation. Finally, other GSM faculty needed to understand the extent of what had been taught in order to incorporate that knowledge base into their own learning environments.

Strategy Three: Use What You Have

Effectively using the data and resources already available within the organization can not only reduce the time and effort a project requires, but also give a more complete view of what the organization sees as important. A parallel to this approach can be found in program evaluation (Skolits, Morrow, & Burr, 2009). Evaluators are asked to assume numerous roles during the lifetime of any given project, and the same can be said for anyone involved in assessment. The role of the “detective” is particularly key in the early phases of a project, in which “The evaluator asks questions, reviews documents, makes observations, and seeks other clues that will provide insight regarding the program, its context, and its suitability for an evaluation” (Skolits et al., 2009, p. 285). The benefit of this fact-finding mission is to uncover (1) what instructors think the students are learning, (2) what is actually being taught, and consequently (3) where gaps exist in the curriculum.

It was important for our team to play detective and gather the information to which we already had access before taking steps towards a curriculum overhaul. To get started, we gathered information from the following sources: (1) course content, (2) GSM faculty interviews, (3) lecture observations, (4) the medical literature, and (5) staff members' consulting experience at the GSM.

First, our team requested all of the existing lectures on research methods and statistics, and analyzed them for salient topics as well as areas of the material that felt insufficient or ill defined. Sifting through the lectures provided important background information on what was already being taught, and it also illuminated what was not being taught.

Oftentimes faculty have no trouble explaining the overarching constructs they want their students to learn, but they may not take the time to articulate these endpoints as learning objectives (Fink, 2013). So, to delve deeper, our team informally interviewed the faculty who had previously taught the statistics and research methods lectures through a series of meetings. These interviews revealed a more nuanced answer to the “what is being taught” question. Perhaps more importantly, these conversations highlighted what was intended to be taught. Our interviews helped all parties better understand the purpose, goals, and shortcomings of the lecture series as it stood. For comparison, our team attended a number of lectures to observe both the instructor and the audience, which allowed us to assess the degree to which the content that faculty purported to teach was actually being delivered in the classroom.

Finally, each team member reflected on both their prior professional experience consulting at the GSM and the medical literature in order to identify the most commonly used and published research methods and statistical analyses. This reflection let us triangulate what we were encountering in everyday practice and previous research with the curriculum being assessed. Reflection on content expertise and the results of the aforementioned activities gave the team sufficient information regarding where the assessment needed to be directed in order to address the gaps in the curriculum.

Strategy Four: Fit the Instrument to Your Purpose, Not the Other Way Around

Properly aligning the assessment instrument to the purpose should be considered one of the most important and salient elements of the assessment process. This strategy is very similar to the process of designing a research study. In research, the research questions drive the choice of study design rather than the design giving rise to the questions; in assessment, the purpose (usually derived from learning objectives) should govern what type of assessment method the team chooses. One example from epidemiologic research is that certain study designs are stronger for answering particular types of research questions. The incidence of a particular health-related outcome can usually be examined only with a cohort study (Hulley, Cummings, Browner, Grady, & Newman, 2007); therefore, aligning the purpose (studying incidence) with the appropriate method (a cohort design) must be done a priori. Had the researcher put the cart before the horse, so to speak, and chosen a case–control or cross-sectional design before considering the purpose, then the resources spent on the study would have been wasted because incidence cannot be examined with those designs.
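
To ground the analogy, here is a brief notational illustration (ours, not drawn from Hulley et al.) of why the question dictates the design: cumulative incidence counts new cases arising during a follow-up window, which only a design that follows an outcome-free cohort forward in time can supply, whereas point prevalence can be estimated from a single cross-sectional snapshot.

```latex
% Cumulative incidence over a follow-up window [0, t]: requires a cohort
% that is free of the outcome at baseline and is then followed forward.
\[
  \text{Cumulative incidence}_{[0,\,t]} =
  \frac{\text{new cases arising during } [0,\,t]}
       {\text{persons at risk (outcome-free) at baseline}}
\]

% Point prevalence at time t: obtainable from one cross-sectional survey,
% but it says nothing about the rate at which new cases arise.
\[
  \text{Point prevalence}_{t} =
  \frac{\text{existing cases at time } t}
       {\text{total population at time } t}
\]
```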

There are numerous templates, instruments, and guidelines in the assessment literature (e.g., Angelo & Cross, 1993; Case & Swanson, 1998; Suskie, 2009), but it is up to the assessment team to discern which method is the most appropriate for their situation. Fortunately, if the team has taken a comprehensive look at the environmental, population, and resource-specific situational factors (Strategy One), clarified its purpose (Strategy Two), and reviewed what is already being collected (Strategy Three), then this part of the process should be fairly straightforward.

Nevertheless, using this strategy was invaluable to our work with the EBM curriculum. Our purpose for the assessment had become to investigate two primary questions: (1) What are students' background knowledge of, and attitudes towards, statistics and research methods before the course? and (2) What is the change in knowledge and attitudes after the course? We decided to use a background knowledge probe (Angelo & Cross, 1993) as a direct measure of student knowledge and an attitudinal survey as an indirect measure of students' confidence in statistics and research methods. Both approaches were ideal for our situation: the background knowledge probe not only addressed the first question but also made an effective starting point for instruction (Angelo & Cross, 1993). The two methods could also be used as a pre–post assessment of students' change in attitude and knowledge, all without placing a large time demand on students. Finally, since we were moving towards an integrated course design, these two assessment strategies helped inform learning objective development as well as changes to our teaching and learning activities for future courses.
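
As a purely illustrative sketch (the data, variable names, and the choice of a paired t-test are our hypothetical assumptions, not the instruments or analyses used in the case), pre/post scores from a background knowledge probe (direct measure) and a confidence survey (indirect measure) might be summarized as follows:

```python
# Hypothetical pre/post summary for a direct (knowledge) and an indirect
# (confidence) measure; all values below are illustrative only.
from statistics import mean
from scipy.stats import ttest_rel  # paired-samples t-test

# Each position is one resident: scores before and after the workshop.
knowledge_pre   = [4, 5, 3, 6, 5, 4, 7, 5]        # e.g., items correct out of 10
knowledge_post  = [7, 8, 6, 8, 7, 6, 9, 7]
confidence_pre  = [2.0, 2.5, 1.5, 3.0, 2.0, 2.5, 3.5, 2.0]  # 1-5 Likert mean
confidence_post = [3.5, 4.0, 3.0, 4.0, 3.5, 3.5, 4.5, 3.0]

def summarize(label, pre, post):
    """Report mean pre/post scores, mean gain, and a paired t-test."""
    gains = [b - a for a, b in zip(pre, post)]
    t_stat, p_value = ttest_rel(post, pre)
    print(f"{label}: mean pre={mean(pre):.2f}, mean post={mean(post):.2f}, "
          f"mean gain={mean(gains):.2f}, t={t_stat:.2f}, p={p_value:.3f}")

summarize("Knowledge (direct)", knowledge_pre, knowledge_post)
summarize("Confidence (indirect)", confidence_pre, confidence_post)
```

With the small groups typical of a single workshop, the descriptive gains are usually more informative than the p-values; the point is simply that both the direct and indirect measures can be summarized from the same brief pre/post administration.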

Strategy Five: Get Consistent and Critical Feedback

Assessment development and administration must be viewed as a dynamic and iterative process, so the final strategy focuses on eliciting consistent, critical feedback. There are two important types of feedback: feedback on teaching and learning, and feedback on the assessment strategy itself. With regard to the former, recall that assessment was earlier defined as an ongoing process of (1) developing clear learning objectives, (2) providing students opportunities to meet those objectives, (3) systematically gathering and interpreting evidence for how well students are meeting those objectives, and (4) using those findings to improve student learning (Suskie, 2009). While a program may not assess every objective every year (and in most cases we advise against doing so [Suskie, 2009]), a consistent cycle of student feedback should be collected to evaluate both student and instructor performance. Situations change, new accreditation or education requirements emerge, university administration turns over, and student needs evolve; all of these changes will affect the assessment strategy, as reflected in the four strategies discussed thus far.

The second type of important feedback is on the assessment strategy itself. Assessment should not be a solitary endeavor, and having a group of trusted colleagues or team members with whom one can exchange drafts, ideas, and so on is an invaluable resource. Also, after an assessment is carried out, it is important to bring together this same group of individuals to critically reflect on the experience (Schön, 1983). What went well? What did not? What can we improve for next time? Just as feedback on teaching and learning informs changes to the curriculum, an assessment instrument is developed or modified, tested, and then refined as the testing generates feedback that leads to further modifications. Figure 2 provides a visual explanation of how this instrument feedback fits into the larger cycle of assessment.

Figure 2. Feedback Cycle of Assessment Development (Strategy Five) as It Fits Within the Broader Cycle of the Assessment Process.

Overall, this approach incorporates many of the same ideas as the integrated course design model (Fink, 2013) discussed in the introduction to this article; however, it emphasizes the cyclical “feedback loop” of each process in the context of a dynamic learning environment such as GME. This cyclical process portrays assessment as a continuous and ever-changing effort to revise instruction, improve learning outcomes, and, more broadly, keep assessment professionals and instructors “on their toes” in the learning environments in which they work.

Conclusion

In this article, we have shared five straightforward strategies for developing useful student assessment, drawing on our own experience developing an EBM curriculum at a Graduate School of Medicine. As with Fink's (2013) integrated course design model, the situational factors of an assessment setting will have a great impact on the way it is carried out. Considering these environmental, population-specific, and resource factors will help the team clarify the purpose of their assessment. This purpose should be learner-centered and should take stock of all relevant stakeholder perspectives when considering how the proposed assessment results will be used. We next stressed the importance of using what you have and of fitting the assessment instrument to the purpose, not the other way around: taking stock of the data already available and putting the purpose first helps a team avoid collecting irrelevant assessment data instead of answering the important questions. Although these five strategies could be seen as a stepwise approach to developing assessment, it is important to note that changes will occur at all five levels as the learning environment, purpose, and resources shift over time. Remember, assessing teaching and learning is an ongoing cycle of development, data collection, analysis, and interpretation (Suskie, 2009). Similarly, creating a proper assessment tool is an iterative process of development, modification, testing, and eliciting consistent and critical feedback.

As the ACGME and other areas of medical education move towards a competency-based developmental outcome model, the need to develop new assessment strategies will likely be stronger than ever. In GME, the ACGME has been working towards a competency-based set of core outcomes in preparation for the Next Accreditation System. These milestones focus on the progressive development of residents' skills throughout their residency as opposed to performance at a single evaluation. In a recent presentation on this new policy, Swing and Edgar (2013) stated that programs will be developing their own methods for assessing resident competency, but they “highly recommend [them] to align [assessment] tools with milestones.” As programs develop these methods, the five strategies shared in this discussion should prove beneficial in ensuring that effective assessment data are gathered efficiently, even when starting from scratch.

Notes

This discussion is based on the following two previous works:

Barlow, P. B., T. L. Smith, W. Metheny, and R. E. Heidel. (2012). Methods for Developing Assessment Instruments to Generate Useful Data in the Presence of Vague Course Objectives. Oral presentation given at the annual meeting of the American Evaluation Association (AEA), Minneapolis, MN.

Barlow, P. B., and T. L. Smith. (2013, July 5). AHE TIG Week: Pat Barlow and Tiffany Smith on Higher Education Assessment in Graduate Medical Education [Online]. http://bit.ly/1e47BSc. Accessed July 6, 2013.

The authors would also like to acknowledge the following individuals for contributing to this work:

William Metheny, Ph.D., Associate Dean for Graduate Medical and Dental Education, The University of Tennessee Graduate School of Medicine

Robert E. Heidel, Ph.D., Assistant Professor of Biostatistics, The University of Tennessee Graduate School of Medicine.

References

  • Accreditation Council for Graduate Medical Education (ACGME). 2013. The Next Accreditation System [Online]. http://www.acgme-nas.org/. Accessed March 14, 2014.
  • Angelo, T., and K. P. Cross. 1993. Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco, CA: Jossey-Bass.
  • Case, Susan, and David Swanson. 1998. Constructing Written Test Questions for the Basic and Clinical Sciences [Online]. http://ibmi3.mf.uni-lj.si/mf/fakulteta/prenova/stomatologija/mcq.pdf. Accessed March 14, 2014.
  • Fink, L. Dee. 2003. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. San Francisco, CA: Jossey-Bass.
  • Fink, L. Dee. 2013. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses, 2nd ed. San Francisco, CA: Jossey-Bass.
  • Garfield, Joan B., and Iddo Gal. 1999. “Assessment and Statistics Education: Current Challenges and Directions.” International Statistical Review 67 (1): 1. doi: 10.2307/1403562.
  • Hack, Jason B., Poopak Bakhtiari, and Kevin O'Brien. 2009. “Emergency Medicine Residents and Statistics: What Is the Confidence?” The Journal of Emergency Medicine 37 (3): 313–18.
  • Holt, Kathleen D., Rebecca S. Miller, and Thomas J. Nasca. 2010. “Residency Programs' Evaluations of the Competencies: Data Provided to the ACGME About Types of Assessments Used by Programs.” Journal of Graduate Medical Education 2 (4): 649–55. doi: 10.4300/JGME-02-04-30.
  • Hulley, S., S. Cummings, W. Browner, D. Grady, and T. Newman. 2007. Designing Clinical Research, 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins.
  • Hutchings, Pat. 2010. Opening Doors to Faculty Involvement in Assessment. NILOA Occasional Paper. http://atlantic.edu/about/research/documents/OpeningDoorstoFacultyInvolvement.pdf. Accessed March 14, 2014.
  • Kramer, Philip I. 2008. The Art of Making Assessment Anti-Venom: Injecting Assessment in Small Doses to Create a Faculty Culture of Assessment. Paper presented at The Association for Institutional Research Annual Forum, Seattle, WA.
  • Novack, L., A. Jotkowitz, B. Knyazer, and V. Novack. 2006. “Evidence-Based Medicine: Assessment of Knowledge of Basic Epidemiological and Research Methods among Medical Doctors.” Postgraduate Medical Journal 82 (974): 817–22.
  • Patton, Michael Q. 2012. Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: Sage Publications, Inc.
  • Sahai, H. 1999. “Teaching Biostatistics to Medical Students and Professionals: Problems and Solutions.” International Journal of Mathematical Education in Science and Technology 30 (2): 187–96. [Online] http://www.tandfonline.com/doi/full/10.1080/002073999287978.
  • Schön, D. A. 1983. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books.
  • Skolits, Gary J., Jennifer A. Morrow, and Erin M. Burr. 2009. “Reconceptualizing Evaluator Roles.” American Journal of Evaluation 30 (3): 275–95. doi: 10.1177/1098214009338872.
  • Suskie, Linda. 2009. Assessing Student Learning: A Common Sense Guide. San Francisco, CA: Jossey-Bass.
  • Swing, Susan, and Laura Edgar. 2013. Milestones Project Update. Paper presented at the 2013 ACGME Annual Educational Conference, Orlando, FL.
  • Windish, Donna M., Stephen J. Huot, and Michael L. Green. 2007. “Medicine Residents' Understanding of the Biostatistics and Results in the Medical Literature.” JAMA: The Journal of the American Medical Association 298 (9): 1010–22.

Biographies

  • Patrick B. Barlow is a Ph.D. Candidate in Evaluation, Statistics, & Measurement in the Department of Educational Psychology & Research at the University of Tennessee.

  • Tiffany L. Smith is a Ph.D. Candidate in Evaluation, Statistics, & Measurement in the Department of Educational Psychology & Research at the University of Tennessee.

  • Gary Skolits, Ed.D., is Associate Professor in Evaluation, Statistics, & Measurement in the Department of Educational Psychology & Research at the University of Tennessee.