Value and Impact of Feedback in an Online Environment
Results from this study highlight the importance of feedback in an online environment and support the assumption that the quality of students’ postings can reach, and be sustained at, a high level through a combination of instructor and peer feedback. In general, students’ postings across 17 discussion questions averaged 1.32 on a 2-point quality scale. Although we expected that the quality of students’ postings might gradually improve over the semester, as was demonstrated in a similar study by Ertmer and Stepich (2004), our results showed no significant improvement in students’ postings from the beginning to the end of the course.
We suspect that a number of factors may have mediated students’ efforts to achieve high-quality postings. First, the online course was structured such that students were required to submit two postings each week: an initial post responding to the weekly discussion question and one response to another student. Additional postings were not required, nor did students expect them to be scored for quality. Therefore, once the initial and follow-up postings were made in a specific forum, students had little motivation to strive for high quality in additional postings. Furthermore, scoring postings with a grading rubric that allowed for only two meaningful levels of quality may not have provided enough room for growth, thus creating a ceiling effect. Because students started out with relatively high scores on their two required posts, there was little opportunity to demonstrate improvement in these scores during the semester. In the future, it might be important to use a scoring rubric that allows for more variation among scores. The disadvantage, however, is that as the scale becomes more finely gradated, it becomes increasingly difficult to differentiate among the various levels of quality.
Another reason students may not have demonstrated increased quality in their postings relates to the discussion questions used. In this course, many of the DQs, especially those developed by student leaders, were not particularly conducive to high-level responses. As Meyer (2004) explains, “Questions created to trigger personal stories [do] so, and questions targeted to elicit information or higher-level analysis [do] so” (p. 112). Specific to this study, student leaders tended to ask their peers to provide examples of current issues they faced in their classrooms or schools (e.g., how to integrate technology, how to cope with security issues, how to apply distance learning opportunities in the classroom). While these types of discussions might be expected to stimulate responses at the application level of Bloom’s taxonomy (score = 1 point), they would not readily engender responses at the analysis, synthesis, or evaluation levels (score = 2 points). As noted earlier, “most online discussion consists of sharing and comparing information, with little evidence of critical analysis or higher order thinking” (Black, 2005, p. 19). Thus, instructors need to be cognizant that the discussion questions themselves must enable students to demonstrate higher-order thinking.
Communication in online courses serves many functions, only some of which are specifically content-focused (Ko & Rossen, 2001). However, in this study, we rated every response posted in 17 different discussion forums, including responses that were intended solely for interpersonal or motivational purposes. While these types of postings serve important roles, they would not be likely to receive a high-quality score, based on Bloom’s taxonomy. Given this, we considered scoring only the required posts in each forum; however, it was difficult to determine, post-hoc, which postings students intended to count as their required two postings. Additionally, this would have reduced the total number of analyzed postings from 778 to 160, which would have greatly limited our ability to measure changes in posting quality. In the future, it will be important to clarify exactly how many postings will be scored in a discussion forum, while also leaving room for students to make additional postings that serve to build a sense of community and trust.
Perceptions of Value: Peer vs. Instructor Feedback
Although the quality of students’ postings was maintained with the use of peer feedback, students still tended to favor instructor feedback over that received from peers. Despite participating in what they themselves described as a valuable process, students began and ended the course believing that instructor feedback was more important to their learning. This perception is similar to that reported by a number of researchers (Ko & Rossen, 2001; Topping, 1998) who have noted that students often believe that their peers are lax in their assessment approaches or lack the skills required to provide valuable feedback. As Topping suggests, if learners perceive peer feedback to be invalid, they may end up devaluing the entire peer feedback process. This suggests the importance of explicitly addressing students’ perceptions up front and taking steps to counter their strong preconceived ideas about the relatively weaker value of peer feedback.
In this study, students expressed concerns about being qualified to give feedback to each other. This may have led, on the one hand, to the perception that they were receiving superficial or low-quality feedback and, on the other hand, to apprehension about being consistent and fair while evaluating peers’ postings. As noted earlier, “the ability to give meaningful feedback … is not a naturally acquired skill” (Palloff & Pratt, 1999, p. 123), and students may experience initial anxiety about the process (Topping, 1998). In this study, these concerns appeared to be related to a more fundamental concern about how peer scores would impact course grades, whether their own or others’. To help the peer feedback process work most effectively, students need to be assured that postings will be fairly and consistently evaluated, with the instructor mediating the process to ensure fairness, and students need to appreciate the additional benefits made possible through the peer feedback process.
One of the potential advantages of using peer feedback, as noted by Topping (1998), is increased timeliness in receiving feedback. However, in this study, students’ feedback was channeled through the instructor, thus causing a delay in delivery, initially taking as long as two weeks. The significantly higher rating, at the end of the course, of the importance of timeliness of feedback may have been a reaction to the perceived delay in receiving peer feedback. This lag time, then, may have negated one of the proposed benefits of the peer feedback process.
Still, despite these logistical problems, the majority of students indicated that peer feedback positively impacted the quality of their discussion postings. They described a number of specific benefits, including recognition of their ideas, access to multiple perspectives, and receiving a greater quantity of feedback than would have been received from the instructor alone. This is similar to what Topping (1998) reports: “Peer feedback might not be of the high quality expected from a professional staff member, [but] its greater immediacy, frequency, and volume compensate for this” (p. 255). Students also noted positive aspects of the peer feedback process, including the ability to provide anonymous feedback and the ability to receive a grade that reflected the average score given by two different peers.
In addition to impacting the quality of their discussion postings, students also described how peer feedback helped them improve the quality of feedback that they, in turn, provided to others. In other words, after receiving initial peer feedback, some students realized they had not been as in-depth or constructive as they could have been in providing feedback to others and, as a result, improved the quality of their own feedback. Ko and Rossen (2001) note that the ability to cross-check one’s understanding is an essential step in the learning process.
Learning by Doing: Benefits to Giving Peer Feedback
Perhaps the greatest potential benefit of the peer feedback process lies in the constructive aspect of forming and justifying peer feedback. For example, in this study many students described how they benefited from providing peer feedback. Through this process, they reflected more critically on the discussion postings for which they were providing feedback, as well as on their own postings and how they could be improved in a similar manner. Many authors have suggested that this type of reflection contributes to the assessor’s comprehension of the topic by forcing him/her to analyze postings reflectively and to think about what constitutes high-quality work (Henderson, Rada, & Chen, 1997; Topping, 1998). According to Dunlap and Grabinger (cited in Dunlap, 2005), “the process of reviewing someone else’s work can help learners reflect on and articulate their own views and ideas, ultimately improving their own work” (p. 20). Furthermore, requiring students to justify their peer ratings by specifying which level of Bloom’s taxonomy was demonstrated in the peer response forced them to engage in activities at a higher level of cognitive skill: providing explanations, making justifications, and drawing conclusions.
Limitations and Suggestions for Future Work
The results of this study are limited by the small sample size, the relatively short duration of the study, and the fairly limited scale used to judge the quality of student postings. Conducting the study over a longer period of time, with a rating scale that allows for greater improvement, could reveal a measurable difference in the quality of student postings. Furthermore, providing more time up front to discuss the benefits of the peer feedback process and to train students to use the rating scale more effectively might impact students’ perceptions of the value of receiving feedback, particularly in relation to the perceived value of instructor feedback. Given that feedback is likely to become an increasingly complex and important part of the online learning process (Mory, 2004), it is important that educational practitioners have access to relevant information regarding how to use peer feedback effectively to increase student learning. While the results of this study suggest that peer feedback is a viable alternative to instructor feedback, specifically with respect to maintaining the quality of student postings, additional research is needed to determine the most effective means of facilitating the process in an online learning context.