The five strategies became a heuristic we visualized as a recursive series of steps. Strategy One is placed alongside the others to illustrate how assessing situational factors influences all other parts of the heuristic. The heuristic also contains several feedback loops, which encourage the instructor to return to previous steps as new information becomes available. For example, as the instructor finds additional information in existing data (i.e., Strategy Three), she is better able to clarify her purpose (Strategy Two). The next section describes each of these strategies in the context of the case study that gave rise to them.
Strategy One: Know Your Situation
The first strategy is closely tied to integrated course design (Fink, 2013). To know your situation is to acknowledge the importance of considering situational factors when designing effective assessment strategies, just as an integrated design acknowledges the same idea for course development. This consideration is especially relevant for GME. Teaching in the GME environment is uniquely challenging compared to both undergraduate medical education and training in other disciplines, due largely to the “situational” realities of being a physician. Properly assessing situational factors using this strategy requires attention to (1) the learning environment, (2) population-specific factors, and (3) the availability of resources. These situational factors were considered throughout the construction of the assessments in this case, which was necessary because it became clear that the instructors aimed to assess a series of topics no one wanted to learn in a place where no one had the time to learn them.
First, the learning environment should play an important role in the way an assessment project is approached, designed, and conducted. In an integrated course design, assessment strategies, teaching and learning activities, and learning outcomes are all affected by situational factors, one of which is the learning environment of the course and the students in it (Fink, 2013). With regard to the course itself, necessary questions include: What is being taught? What is the nature of the course itself? What is the environment in which the course is being taught? Along with these questions come the population-specific factors, which also shape the way an assessment will be structured and conducted. These factors can include demographic characteristics such as student age, learning style, and background experience with the topic, as well as students' external obligations.
In our case, we were teaching statistics and research methods, which are notorious for invoking some degree of fear or anxiety in students (Fink, 2013; Garfield & Gal, 1999). To further complicate the issue, we were teaching non-statisticians with little research experience. Previous research has shown not only an inadequate level of resident competence in research methods and statistics, but also a general anxiety about, or resistance to, learning these topics (Hack, Bakhtiari, & O'Brien, 2009; Novack, Jotkowitz, Knyazer, & Novack, 2006; Windish, Huot, & Green, 2007). The lecture series was also not associated with a grade, which made attendance a voluntary endeavor. Without mandatory attendance or a grading system, it was necessary to adopt an assessment that would not be bound to grading, but would instead focus on educative assessment (Fink, 2013). To this end, participants were encouraged to work together on each activity, and time was set aside at the start of every workshop to go over the answers as a large group. Both students and instructors received immediate feedback from this process; not only did this formative assessment give the students an opportunity to practice what they had learned, but it also allowed the instructors to learn which concepts students continued to struggle with.
In terms of population-specific factors, our team had to pay attention to the traits of the students and how those traits play into curriculum design and development. Our population was medical professionals who faced an 80-h work week (with up to an additional 80 h of moonlighting) and 16–24-h shifts, and who also had to engage in scholarly activity (ACGME, 2013). Their most immediate obligation was to their residency work, so we needed to be cognizant of that as we planned both assessment and teaching and learning activities. Along with these factors, we had to meet the needs of residents who were international, of differing ages, and of varied familiarity with the subject matter. Our population necessitated making the assessments understandable to students of differing nationalities, short enough not to be cumbersome, and written so that they were easy to relate to practice (Sahai, 1999).
Finally, the availability of resources should be taken into account because even the best assessment plan cannot succeed without the infrastructure to support it. The type of resources that are most restricted will likely vary across situations. For example, large educational departments may not face the same financial barriers as small ones and could therefore conduct an assessment on a larger scale. On the other hand, the size of the larger department may place more strain on the available time and/or personnel. Some example questions to ask at this point include: What is the budget for the assessment? What is the size of the assessment team? What support sources (e.g., information technology or statistical support offices) are available? How long and how often do you have to deliver the material?
In our case, the four instructors made up the assessment team; we had no budget and no additional support resources. The consequence of having no additional support and a small team was that we were limited to a very small-scale, inexpensive assessment plan. The lectures were to be given in varying time slots as the residency departments could fit the topics into their schedules, which further emphasized the need to make any assessment plan as efficient as possible. It also meant that our assessments needed not only to give feedback on resident performance, but also to serve as tools for learning. Simply put, we could not afford to waste the residents' time with irrelevant assessment tools. Taking this educative assessment approach (Fink, 2013) allowed us to fit robust assessment instruments into the small amount of time we had to deliver course content. Embedding assessment in this way maximized the gain for both student and instructor because it turned the assessment process into a teaching and learning activity rather than an add-on to instruction (Patton, 2012). Specifically, we embedded the pretest measures as the opening activity for the workshop by saying, “The first activity we will be doing this week is taking a quick self-assessment of your confidence (indirect assessment) and knowledge (direct assessment) of some of the topics we plan to cover this week.” Similarly, the post-workshop assessment was framed as the final activity rather than something “extra” administered to the students after the workshop had concluded.
Strategy Two: Clarify Your Purpose
Properly assessing the situational factors inherent to any environment will aid the instructor in clarifying the purpose for the proposed assessment strategy. The purpose of an assessment strategy should be more global than the outcomes of a single instrument. It can, and dare we say “should,” go beyond simply discovering what residents know about a topic in order to assign them a grade. Knowledge acquisition may be the purpose of a single instrument; however, the purpose of the whole assessment framework may include assessing students' attitudes towards the topic (and course activities) and/or assessing their ability to apply the concepts from the course to tasks in the future.
An assessment's purpose can be drawn from a well-defined set of learning objectives, but doing so is more of a challenge when no structured objectives are in place. In the latter situation, a more global approach to clarifying your purpose can start with the assessment team reflecting on two important questions: How will the instructor(s) benefit from the assessment results? How will the students benefit from them? Answers to these questions will either be the genesis of new learning objectives or inform improvements to existing ones. Moreover, beginning with these two global questions provides a more forward-looking view of assessment because it emphasizes how the results will be used rather than merely what the results will be.
There are a number of parties that stand to benefit from any assessment process. The assessment team must judge how these multiple stakeholders will be affected by the data they plan to gather. The values and needs of each group must be taken into account when identifying the purpose of any assessment instrument. Broader stakeholder groups include the other instructors in the educational program, any office that holds the curriculum accountable, and those who benefit indirectly from the assessment, such as future students. Still, with a primary focus on teaching and learning, two of the most immediate stakeholders are the students and the instructor(s).
In our situation, the primary stakeholders consisted of the assessment team (the instructors), the students, the GSM faculty, and the GSM administration. Everyone needed to know what gains were made in the residents' knowledge after participating in the lecture series. More specifically, the instructors needed to know the residents' baseline knowledge and their feedback on the lectures in order to improve the course. In turn, the residents needed feedback to inform their own understanding of the course content. Further, the administration depended on our assessment and learning outcomes for the institution to remain ACGME accredited. Finally, other GSM faculty needed to understand the extent of what had been taught in order to incorporate that knowledge base into their own learning environments.
Strategy Three: Use What You Have
Effectively using the data and resources already available within the organization can not only reduce the time and effort devoted to a project, but also give a more complete view of what the organization sees as important. A parallel to this approach can be found in program evaluation (Skolits, Morrow, & Burr, 2009). Evaluators are asked to assume numerous roles during the lifetime of any given project, and the same can be said for anyone involved in assessment. The role of the “detective” is particularly key in the early phases of a project, in which “The evaluator asks questions, reviews documents, makes observations, and seeks other clues that will provide insight regarding the program, its context, and its suitability for an evaluation” (Skolits et al., 2009, p. 285). The benefit of this fact-finding mission is to uncover (1) what instructors think the students are learning, (2) what is actually being taught, and consequently (3) where gaps exist in the curriculum.
It was important for our team to play detective in order to gather the information to which we already had access before taking steps toward a curriculum overhaul. To get started, we gathered information from the following sources: (1) course content, (2) GSM faculty interviews, (3) lecture observations, (4) the medical literature, and (5) staff members' consulting experience at the GSM.
First, our team requested all of the existing lectures on research methods and statistics, and analyzed them for salient topics as well as areas of the material that felt insufficient or ill defined. Sifting through the lectures provided important background information on what was already being taught, and it also illuminated what was not being taught.
Oftentimes faculty have no trouble explaining the overarching constructs they want their students to learn, but they may not take the time to articulate these endpoints as learning objectives (Fink, 2013). So to delve deeper, the faculty who previously taught the statistics and research methods lectures were informally interviewed through a series of meetings with the team. These interviews revealed a more nuanced answer to the “what is being taught” question. Perhaps more importantly, these conversations highlighted what was intended to be taught. Our interviews assisted all parties in better understanding the purpose, goals, and shortcomings of the lecture series as it stood. For comparison, our team attended a number of lectures to observe both the instructor and the audience, which allowed us to assess the degree to which the content that faculty purported to teach was actually being delivered in the classroom.
Finally, each team member reflected on both their prior professional experience consulting at the GSM and the medical literature in order to identify the most commonly used and published research methods and statistical analyses. This reflection let us triangulate what we were encountering in everyday practice and previous research with the curriculum being assessed. Reflection on content expertise and the results of the aforementioned activities gave the team sufficient information regarding where the assessment needed to be directed in order to address the gaps in the curriculum.
Strategy Four: Fit the Instrument to Your Purpose, Not the Other Way Around
Properly aligning the assessment instrument with the purpose should be considered one of the most important elements of the assessment process. This strategy closely resembles the process of designing a research study. In research, the research questions drive the choice of study design rather than the design giving rise to the questions; in assessment, the purpose (usually derived from learning objectives) should govern what type of assessment method the team chooses. One example from epidemiologic research is that certain study designs are stronger for answering particular types of research questions. The incidence of a particular health-related outcome can usually be examined only with a cohort study (Hulley, Cummings, Browner, Grady, & Newman, 2007); therefore, the purpose (studying incidence) must be aligned with the appropriate method (cohort design) a priori. Had the researcher put the cart before the horse, so to speak, and chosen a case–control or cross-sectional design before considering their purpose, the resources spent on the study would have been in vain because incidence cannot be examined with those designs.
There are numerous templates, instruments, and guidelines in the assessment literature (Angelo & Cross, 1993; Case & Swanson, 1998; Suski, 2009, for example), but it is up to the assessment team to discern which method is the most appropriate for their situation. Fortunately, if they have taken a comprehensive look at the environmental, population, and resource-specific situational factors (Strategy One), have clarified their purpose (Strategy Two), and reviewed what was already being collected (Strategy Three), then this part of the process should be fairly straightforward.
Indeed, using this strategy was invaluable to our work with the EBM curriculum. Our purpose for the assessment had become to investigate two primary questions: (1) What were students' background knowledge of, and attitudes towards, statistics and research methods before the course? (2) How did knowledge and attitudes change after the course? We decided to use a background knowledge probe (Angelo & Cross, 1993) as a direct measure of student knowledge, as well as an attitudinal survey as an indirect measure of students' confidence in statistics and research methods. Both approaches were ideal for our situation: the background knowledge probe not only addressed the first question, but also made an effective starting point for instruction (Angelo & Cross, 1993). The two methods could also be used as a pre–post assessment of students' change in attitude and knowledge, all without placing a large time demand on students. Finally, since we were moving towards an integrated course design, these two assessment strategies helped inform learning objective development as well as changes to our teaching and learning activities for future courses.
Strategy Five: Get Consistent and Critical Feedback
Assessment development and administration must be viewed as a dynamic and iterative process, so the final strategy focuses on eliciting consistent, critical feedback. There are two important types of feedback: feedback on teaching and learning, and feedback on the assessment strategy itself. With regard to the former, recall that assessment was earlier defined as an ongoing process of (1) developing clear learning objectives, (2) providing students opportunities to meet those objectives, (3) systematically gathering and interpreting evidence of how well students are meeting those objectives, and (4) using those findings to improve student learning (Suski, 2009). While a program may not assess every objective every year (and we advise against doing so in most cases [Suski, 2009]), a consistent cycle of student feedback should be collected to evaluate both student and instructor performance. Situations change, new accreditation or education requirements emerge, university administration turns over, and student needs evolve over time; all of these changes will affect the assessment strategy, as the four strategies discussed thus far make clear.
The second type of important feedback concerns the assessment strategy itself. Assessment should not be a solitary endeavor, and a group of trusted colleagues or team members with whom one can exchange drafts, ideas, and so on is an invaluable resource to have at hand. Also, after an assessment is carried out, it is important to bring this same group together to critically reflect on the experience (Schön, 1983): What went well? What did not? What can we improve for next time? Just as feedback on teaching and learning informs changes to the curriculum, feedback on an assessment instrument informs its revision: the instrument is developed or modified, tested, and the testing generates feedback that leads to further modifications. Figure 2 provides a visual explanation of how this instrument feedback fits into the larger cycle of assessment.
Figure 2. Feedback Cycle of Assessment Development (Strategy Five) as It Fits Within the Broader Cycle of the Assessment Process.
Overall, this approach incorporates many of the same ideas as the integrated course design model (Fink, 2013) discussed in the introduction to this article; however, it emphasizes the cyclical “feedback loop” of each process in the context of a dynamic learning environment such as GME. This cyclical process portrays assessment as a continuous and ever-changing effort to revise instruction, improve learning outcomes, and, more broadly, keep assessment professionals and instructors “on their toes” in the learning environments in which they work.