Keywords:

  • Faculty Rank and Faculty Performance;
  • Gender Bias;
  • Instructional Innovation;
  • Noninstructional Factors;
  • Norming;
  • Student Evaluation

ABSTRACT

Student Evaluations of Instruction (SEIs) from about 6,000 sections over 4 years, representing over 100,000 students, at the college of business at a large public university are analyzed to study the impact of noninstructional factors on student ratings. Administrative factors such as semester, time of day, and location, as well as instructor attributes such as gender and rank, are studied. The combined impact of all the noninstructional factors studied is statistically significant. Our study has practical implications for administrators who use SEIs to evaluate faculty performance: SEI scores reflect inherent biases due to noninstructional factors, and appropriate norming procedures can compensate for such biases, ensuring fair evaluations.


INTRODUCTION

Student Evaluations of Instruction (SEIs) are now commonplace at universities as a key mechanism for obtaining feedback on teaching practices. According to Seldin (1993), 86% of U.S. colleges and universities use SEIs to make key decisions about faculty. SEIs also form a key component of administrative evaluations of faculty teaching performance, and they affect promotion and tenure decisions. As a result, there is ongoing debate about the validity and appropriate use of these instruments. Brightman (2005) has argued that to be useful, an instrument must first be valid, and norming procedures must be in place to aid comparative interpretation of the data. Norming requires the identification of systematic biases in the ratings of overall instructor effectiveness (OIE) due to noninstructional factors.

A clear understanding of the impact of nonteaching-related factors is necessary to ensure fair evaluation of faculty. For example, if a factor like class size significantly affects an instructor's overall ratings on an SEI, then administrators should use a norming process that compensates for class size differences when evaluating faculty. Researchers have examined the impact of various factors on SEI results, looking for systematic biases in fields from psychology (Greenwald, 1997) to economics (Isley & Singh, 2005) and business (Isley & Singh, 2007; Liaw & Goh, 2003; Peterson, Berenson, Misra, & Radosevich, 2008). The nonteaching-related factors can be classified as student related, instructor related, course related, and administrative or situational (Peterson et al., 2008; Pounder, 2007). Student-related factors include the student's initial motivation for the subject, grade expectation, grade point average, and gender. Instructor-related factors include the instructor's rank and gender, whereas course characteristics include the type of course (qualitative vs. quantitative, core vs. noncore) and course level (graduate vs. undergraduate). Administrative factors influencing SEI ratings include class size, location, classroom and equipment, and time of day.

Some researchers believe that student grade expectations are positively correlated with SEI ratings (Zangenehzadeh, 1988), whereas others argue the opposite (Marsh & Roche, 2000). Centra (2003) analyzed more than 50,000 college courses controlling for class size, teaching method, and student perceived learning outcomes in the course. Learning outcomes turned out to have a large positive effect on SEIs. After controlling for learning outcomes, expected grades did not affect student evaluations.

Studies on teaching innovations demonstrate that a good innovation leads to improved student motivation and engagement, resulting in better student performance (Bergquist & Maggs, 2011; Snider & Eliasson, 2013). Better student performance is in turn positively correlated with higher instructor effectiveness ratings (Davis, 2009). It is therefore plausible that improved teaching results in an increase in grade expectations as well as better student evaluations of teaching effectiveness.

The focus of this article is on the impact of noninstructional factors on student evaluations. We therefore exclude grade expectation from the study, since it is too intertwined with teaching ability to be considered a purely noninstructional factor.

Research Question

While many researchers have examined the impact of nonteaching-related factors on instructor ratings in different disciplines, there is a need for integrative studies that look for consistent patterns across universities and disciplines, or that examine the differences as they appear. The noninstructional factors, especially administrative ones, are likely to differ across institutions, and fair evaluation requires examination of the data at each institution. This study focuses on SEIs from the College of Business at a large research university, spanning 4 years and 10 different departments.

We examine the following key research question:

Do the noninstructional factors (such as course type and level, instructor rank and gender, semester, time of day) have a significant effect on the OIE ratings?

If these factors are significant, and if the impact is large enough, they should be used for norming purposes when comparing faculty performance. The rest of the article is organized into the following sections: literature review, methodology, results, discussion, and conclusion.

LITERATURE REVIEW

There is a debate in the literature about the validity of using SEIs for assessment of teaching. Some researchers argue that since the goal of teaching is to improve student learning, it is the learning that must be measured, not the intervention. However, according to recent surveys of research on SEIs, most variables that correlate with student ratings of instruction are also related to instructional effectiveness and student learning (Benton & Cashin, 2012). Benton, Douchon, and Pallett (2013) found self-ratings of student learning to be positively correlated with student performance. Students who rate instructors higher also perform better on exams, are better able to apply course material, and show greater interest in pursuing the subject in later years (Davis, 2009).

One question goes beyond the validity of the instrument to ask whether there are systematic biases due to factors that are extraneous to the student evaluation instrument. Scriven (2011) argues that an evaluation instrument must be credible as well as valid, with credibility referring to the audience's estimate of the validity. He states:

“… evaluation design must sometimes involve considerations that go beyond validity. This must not be viewed as pandering to prejudice, but as of the essence of certification, of accountability, in a more general sense of the educational and social obligations of the evaluations. (“It is not enough that justice be done, it must also be the case that it must be seen that justice is done.”).”

In the context of higher education, norming of teaching effectiveness scores obtained from SEIs is the way to ensure that justice is done (and seen to be done) in evaluating faculty. If there are factors that bias the teaching effectiveness scores, then such biases must be compensated for. The factors causing such biases can be broadly categorized as course related, instructor related, and administrative (Feldman, 2007; Peterson et al., 2008; Pounder, 2007).

Course-Related Factors

Davies, Hirshberg, Lye, Johnson, and McDonald (2007) studied the impact of several noninstructional factors on instructor ratings in a study of undergraduates in Australia. They found course-related factors, such as the quantitative nature of a subject, to have a significant effect. Costin, Greenough, and Menges (1971) studied ratings by class designation and found that instructors received higher ratings from seniors than from freshmen. This could be because better instructors are selected to teach higher level classes, indicating a selection bias of sorts. It could also be because weaker students drop out in the first couple of years and better students make it to the senior year, which also affects instructor ratings.

Peterson et al. (2008) found that senior-level students gave better ratings than sophomores, and also better ratings than students taking graduate courses. Given that the 400- or senior-level courses are (a) in the discipline concentration, (b) student-selected electives, or (c) the required business capstone, one possible explanation for their significantly better student evaluations is what might be termed a “familiarity effect”: students become more familiar with the professors from whom they have taken earlier classes and therefore have reduced anxiety.

Student ability and initial liking for the subject have an impact on instructor ratings (Aigner & Thum, 1986). Courses aimed at students of high ability get higher ratings, and those aimed at students of low ability get lower ratings. Some of that may translate to noncore classes getting higher ratings, since those courses are selected by students who presumably believe that they have some ability in the subject. Feldman (2007) found that students in major courses rated instructors higher than students in nonmajor courses, and that students in elective courses rated instructors higher than those in required courses. Expecting ratings for graduate courses to be higher than undergraduate, and noncore higher than core, Brightman, Elliott, and Bhada (1993) used four categories based on course level (undergraduate, graduate) and course type (core, noncore) to norm SEI data: undergraduate core (UC), undergraduate noncore (UN), graduate core (GC), and graduate noncore (GN).

Instructor-Related Factors

Gender differences in performance evaluations in various fields have been studied extensively in the literature (Arvey, 1979; Dobbins, Cardy, & Truxillo, 1988; Mobley, 1982). Most studies of gender differences regarding SEIs have focused on the gender of the instructor rather than the student. Positive characteristics of stereotypical men include rationality, competence, and assertiveness, whereas warmth and expressiveness are seen as the main positive traits for women (Del Boca & Ashmore, 1980). Sprague and Massoni (2005) argue that the burden on female instructors is more labor-intensive, since the interpersonal relationship with students cannot be carried over from one semester to the next. Table 1 summarizes the conflicting findings regarding the ratings of male and female instructors.

Table 1. Gender differences in student ratings
Female instructors rated higher than male instructors:
  Centra (2009): attributed to reasons other than bias
  Feldman (1993): rated higher by female students
Female instructors rated lower than male instructors:
  Lackritz (2004)
  Heckert, Latier, Ringwald, & Silvey (2006)
  Tatro (1995)
  Mohan (2011)
No gender difference found:
  Bauer & Baltes (2002)
  Blackhart, Peruche, DeWall, & Joiner (2006)
  Centra & Gaubatz (2000)
  Reid (2010)
  Hancock, Shannon, & Trentham (1993)
  Kohn & Hatfield (2006)

Among the instructors’ attributes that potentially influence the ratings are the instructors’ positions or ranks, how demanding they are perceived to be, as well as experience, training, communication skills, and age (Blackburn & Lawrence, 1986). Isley and Singh (2007) found that while higher expected grades result in more favorable student evaluations, this relationship is significantly different depending upon faculty rank. Adjunct faculty ratings are most affected by student grade expectations, followed by tenured faculty, and lastly by tenure track (TT) faculty. Mohan (2011) also reports that nontenure track (NTT) faculty get higher ratings than TT faculty, although the effect can be altered, she argues, by inflating grades. Peterson et al. (2008) did not find any difference in ratings received by full-time faculty versus ratings received by adjunct faculty. Feldman (2007) reports higher ratings for higher ranked faculty compared with those of lower ranked faculty.

Administrative Factors

Several researchers have documented an absence of relationship between class timing and student ratings of instruction (Aleamoni, 1981; Benton & Cashin, 2012; Feldman, 1978). However, Peterson et al. (2008) found better ratings for daytime classes than for evening classes. They attribute the finding either to higher expectations among students who work during the day and take evening classes, or to those students resenting homework that adds to their many other obligations. They also found no evidence of any difference between spring and fall semester ratings.

Some classes are taught in modern facilities with stadium seating, spacious rooms, ports for student laptops, and Internet connections, while others are still taught in fairly old, cramped rooms where students sit on chairs with a large writing arm. Anecdotal data suggest that there might be a relationship between the quality of classroom facilities and the ratings of instruction, but to our knowledge no research has examined this aspect.

There is some evidence in the literature indicating a relationship between class size and student ratings, with lower class sizes yielding higher ratings (Feldman, 1984, 2007; Isley & Singh, 2007; Liaw & Goh, 2003). For class sizes under 80, there is a relatively steep price to be paid for each additional student in terms of loss of ratings (Bedard & Kuhn, 2008). The difference in ratings per additional student is not so great in larger class sizes (80–150 students). On the other hand, some research finds U-shaped ratings with small and large class sizes yielding higher ratings than class sizes in between, due to a selection bias where teachers known to be good are assigned the really large classes (Marsh, Overall, & Kesler, 1979; Wood, Linsky, & Straus, 1974). In general, instructors believe smaller class sizes are easier to engage, and therefore result in higher ratings.

METHODOLOGY

We collected data on all student evaluations filled out between 2005 and 2009 in the college of business at a large public university. About 6,000 sections of various courses were taught during this period at the undergraduate and graduate levels. Table 2 shows the number of sections taught in each year, segmented into four categories based on course type and course level—GN, GC, UN, and UC.

Table 2. Number of sections taught in the business school by year and by category
Year          GN      GC     Grad Total   UN      UC      UG Total   Grand Total
2005          131     74     205          199     151     350        555
2006          323     225    548          489     406     895        1,443
2007          346     199    545          494     416     910        1,455
2008          303     200    503          516     437     953        1,456
2009          240     124    364          293     258     551        915
Grand total   1,343   822    2,165        1,991   1,668   3,659      5,824

Data from four academic years, starting with 2005–2006 and ending with 2008–2009, were analyzed. Roughly 1,450 sections were offered every year, with about a third of them being graduate classes. PhD classes were eliminated from our analysis, since they tend to be very small and sufficiently different from typical undergraduate or graduate courses. The average enrollment per section was 28.36, and the average number of responses to the SEIs per section was 18.20. The overall response rate for the SEIs across the 4-year span was roughly 64%, which is on par with most universities. Richardson (2005) surveyed the literature on student evaluation instruments and indicates that response rates of around 60% are common and that a 70% response rate would be considered good. Table 3 shows the number of student responses to the SEIs by year and by category.

Table 3. Number of responses to the SEIs by year and by category
Year          GN       GC       Grad Total   UN       UC       UG Total   Grand Total
2005          1,805    1,163    2,968        3,535    3,561    7,096      10,064
2006          4,383    3,374    7,757        8,425    9,613    18,038     25,795
2007          4,290    3,198    7,488        8,828    10,211   19,039     26,527
2008          3,786    3,295    7,081        9,500    10,450   19,950     27,031
2009          2,955    2,042    4,997        5,130    6,430    11,560     16,557
Grand total   17,219   13,072   30,291       35,418   40,265   75,683     105,974

The SEI instrument used at this college is a modified version of one developed and originally validated at UC Berkeley. The modified version was validated at this college over 20 years ago by Brightman, Bhada, Elliott, and Vandenberg (1989). More recently, Nargundkar and Shrikhande (2012) found the instrument to still be valid. The instrument consists of 33 question items pertaining to various teaching-related factors, and question 34 addresses the OIE. In this study, we use the OIE ratings (based on a five-point Likert scale), along with information regarding the noninstructional factors. The noninstructional factors are listed in Table 4 along with the possible values for each.

Table 4. Noninstructional factors used in the study
Factor                  Values
Semester                Fall, Spring, Summer
Time of day             Morning (starting before noon)
                        Afternoon (starting at or after noon, before 4:30 pm)
                        Early Evening (starting at or after 4:30 pm, before 7:15 pm)
                        Evening (starting at or after 7:15 pm)
Course type and level   Graduate noncore (GN)
                        Graduate core (GC)
                        Undergraduate noncore (UN)
                        Undergraduate core (UC)
Instructor gender       Female, Male
Instructor rank         Tenured
                        Nontenure track (NTT)
                        Part-time instructor (PTI)
                        Graduate teaching assistant (GTA)
                        Tenure track (TT)
Class location          Aderhold
                        Brookhaven
                        Alpharetta
                        Classroom South
                        General Classroom Building
                        Sparks Hall
Class size              Numeric variable with the number enrolled

Dummy variables were created to indicate the subgroups for semester, time of day, location, rank, gender, and course type and level, and a regression analysis was performed with the OIE score as the dependent variable and the dummies, along with class size, as the independent variables.
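To make the setup concrete, the sketch below shows how such a dummy-variable regression could be run in Python with statsmodels. The DataFrame sei, its file name, and its column names (oie, semester, time_of_day, location, rank, gender, segment, class_size) are hypothetical stand-ins for our actual variables, not the names used in our data.

```python
# Minimal sketch of the dummy-variable regression described above.
# `sei` is assumed to hold one row per course section; all column names
# here are hypothetical stand-ins for the actual variables.
import pandas as pd
import statsmodels.formula.api as smf

sei = pd.read_csv("sei_sections.csv")  # hypothetical file, one row per section

# C() expands each categorical factor into dummy variables, dropping one
# level as the reference category (e.g., Fall for semester, Evening for time of day).
model = smf.ols(
    "oie ~ C(semester) + C(time_of_day) + C(location) + C(rank)"
    " + C(gender) + C(segment) + class_size",
    data=sei,
).fit()

print(model.summary())  # coefficients, t-statistics, and p-values, as in Table 5
```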

The current norming process at our college uses the four segments initially proposed by Brightman et al. (1993): UC, UN, GC, and GN. The impact of each noninstructional factor was therefore also analyzed individually within each of the four segments. Average OIE scores for each noninstructional factor within each of the four segments were compared using two-sample t-tests and ANOVAs. The variances in the subgroups were not significantly different, making the use of t-tests and ANOVA appropriate. Where ANOVAs were significant, Tukey's pairwise comparisons were used to determine specific differences among subgroups.
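A sketch of how these within-segment comparisons could be carried out, under the same assumptions about the hypothetical sei DataFrame introduced above (instructor gender and faculty rank are used as examples):

```python
# Sketch of the within-segment comparisons: two-sample t-tests, one-way ANOVAs,
# and Tukey pairwise comparisons. Column names are hypothetical, as above.
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

for segment, grp in sei.groupby("segment"):  # UC, UN, GC, GN
    # Two-sample t-test, e.g., female vs. male instructors within the segment
    female = grp.loc[grp["gender"] == "Female", "oie"]
    male = grp.loc[grp["gender"] == "Male", "oie"]
    t_stat, p_val = stats.ttest_ind(female, male)
    print(f"{segment} gender t-test: t = {t_stat:.3f}, p = {p_val:.4f}")

    # One-way ANOVA across faculty ranks, followed by Tukey comparisons if significant
    rank_groups = [g["oie"].values for _, g in grp.groupby("rank")]
    f_stat, p_val = stats.f_oneway(*rank_groups)
    if p_val < 0.05:
        print(pairwise_tukeyhsd(grp["oie"], grp["rank"]))
```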

RESULTS

To examine the impact of all the nonteaching factors taken together on the overall rating of instruction, a regression was performed on the entire data set. The OIE score was the dependent variable; dummy variables representing semester, time of day, location, course level and type, instructor rank, and instructor gender, together with class size, were the independent variables. Table 5 shows the final model with the significant variables.

Table 5. Regression of Q34 (OIE) on noninstructional factors (dummies for a given variable are grouped together)

Adjusted R square   .0390964
Standard error      .5276773
Observations        5,996

Variable          Coefficient   Standard Error   t-Stat      p-Value
Intercept         4.3709        .0253            172.4805    .0000
Spring            .0479         .0155            3.0925      .0020
Summer            .1230         .0184            6.6978      .0000
Morning           −.0568        .0202            −2.8133     .0049
Afternoon         −.1040        .0210            −4.9591     .0000
Early Evening     −.0969        .0176            −5.4925     .0000
UC                −.0478        .0182            −2.6305     .0085
GC                −.0900        .0208            −4.3240     .0000
Tenured           .0433         .0228            1.9046      .0569
NTT               .0752         .0223            3.3723      .0008
PTI               −.0652        .0254            −2.5666     .0103
GTA               −.1268        .0317            −3.9979     .0001
Number enrolled   −.0018        .0004            −4.2404     .0000

As seen above, overall ratings for summer and spring are significantly higher than for fall, with summer ratings the highest. Similarly, time of day matters: each of the three times shown scores lower than evening classes, with afternoon classes scoring lowest. Core classes in general score lower than noncore classes, with GC scoring the lowest. Differences in faculty rank were also significant, with NTT faculty scoring the highest and graduate teaching assistants the lowest.

Given the significance of all these factors in the presence of the others, we examine each noninstructional factor separately, as has been done by various researchers.

Course Type and Level

Table 6 shows the results of a two-sample t-test for the mean OIE scores (Likert scale, 1 = low, 5 = high) for core and noncore classes.

Table 6. OIE ratings by type (Core vs. NC) overall
Course Type        Mean OIE   n
Core classes       4.239      2,490
Noncore classes    4.320      3,334
                   p < .001

Table 7 shows the results of a two-sample t-test for the mean OIE scores (Likert scale, 1 = low, 5 = high) for graduate and undergraduate classes.

Table 7. OIE ratings by Level (Grad vs. UG) overall
Course Level            Mean OIE   n
Graduate classes        4.315      2,165
Undergraduate classes   4.268      3,659
                        p < .01

In both cases, there was a significant difference. Ratings for noncore classes were significantly higher than those for core classes, while graduate classes got higher ratings than undergraduate classes, consistent with expectations. Based on the above findings, as well as the results of Brightman et al. (1993), four segments were created by combining the course level and course type dimensions, rather than looking at each dimension independently. The results are shown in Table 8.

Table 8. OIE ratings by segment—course level and type combined
          Undergrad           Graduate
Core      4.228 (n = 1,668)   4.260 (n = 822)     p > .10
Noncore   4.301 (n = 1,991)   4.349 (n = 1,343)   p < .05
          p < .001            p < .001

Looking at the rows in the table, the ratings are not significantly different for UC and GC classes. Among noncore classes, however, ratings for graduate classes are significantly higher than for undergraduate classes. Looking at the columns in the table, ratings for noncore classes are higher than core classes in both the undergraduate and graduate segments. These findings are a little different from those in the regression analysis, which controls for all other factors.

Instructor Gender and Rank

Table 9 summarizes our findings regarding instructor gender within each of the four segments.

Table 9. OIE ratings by instructor gender by segment
           Undergrad           Graduate
Core
  Female   4.237 (n = 929)     4.285 (n = 217)
  Male     4.217 (n = 719)     4.243 (n = 572)
           p > .10             p > .10
Noncore
  Female   4.355 (n = 688)     4.286 (n = 244)
  Male     4.278 (n = 1,273)   4.365 (n = 1,086)
           p < .01             p < .05

For the core segment, no significant differences were found between male and female instructors. For the noncore segment, the ratings for female instructors were higher than for male instructors among undergraduate students, while the reverse was true among graduate students. There was no difference between the male and female instructor ratings when all four segments were combined.

Table 10 summarizes the results of OIE ratings by faculty rank.

Table 10. OIE ratings by faculty rank within each segment
          Undergrad                         Graduate
Core      1. Tenured   4.32 (n = 134)       1. NTT       4.36 (n = 332)
          2. NTT       4.28 (n = 703)       2. Tenured   4.26 (n = 248)
          3. GTA       4.25 (n = 322)       3. TT        4.14 (n = 55)
          4. PTI       4.19 (n = 381)       4. PTI       4.04 (n = 144)
          5. TT        4.15 (n = 27)
          1,2 > 3,4,5 and 3 > 5; p < .05    1 > 3,4 and 2 > 4; p < .05
Noncore   1. NTT       4.35 (n = 618)       1. NTT       4.41 (n = 362)
          2. PTI       4.31 (n = 341)       2. Tenured   4.38 (n = 628)
          3. TT        4.28 (n = 166)       3. PTI       4.20 (n = 150)
          4. Tenured   4.25 (n = 547)       4. TT        4.13 (n = 144)
          5. GTA       4.15 (n = 149)
          1 > 4,5 and 2 > 5; p < .05        1,2 > 3,4; p < .05

In each of the four segments, the overall ANOVA was significant at p < .001, meaning that the mean scores across faculty rank groups were not all equal. Tukey's pairwise comparisons identified the specific differences reported in Table 10. For instance, for the UC segment, “1,2 > 3,4,5” means that the first two groups (Tenured and NTT) were not different from each other, but each was significantly better than groups 3, 4, and 5 (GTA, PTI, and TT). Furthermore, “3 > 5” means that group 3 (GTA) was significantly better than group 5 (TT).

Semester, Time, and Class Size

Overall ratings in the regression were found to be significantly higher during summer compared to spring, and likewise significantly higher for spring compared to fall. Examining the impact of semester within the four segments, we found the following results (Table 11):

Table 11. OIE ratings by semester for each of the four segments
          Undergrad                          Graduate
Core      Summer   4.337 (n = 345)           Summer   4.326 (n = 184)
          Spring   4.212 (n = 671)           Spring   4.244 (n = 283)
          Fall     4.188 (n = 652)           Fall     4.240 (n = 355)
          Summer > Spring, Fall; p < .05     p < .05
Noncore   Summer   4.397 (n = 464)
          Spring   4.312 (n = 795)
          Fall     4.229 (n = 732)
          Summer > Spring > Fall; p < .05    Summer > Fall; p < .05

Among UC classes, summer ratings were significantly higher than for spring and fall. There was, however, no significant difference in ratings for core graduate classes, perhaps due to the lower sample size in that category. Among UN classes, summer ratings were significantly higher than for spring, which were significantly higher than for fall. For GN classes, summer ratings were significantly higher than for fall, but ratings for spring were not significantly different from either fall or summer.

To test for differences in ratings for sections taught at various times during the day, the day was divided into four time segments. Classes that began before noon were in the “Morning” group; those that began at or after noon but before 4:30 pm were classified as “Afternoon”; those that began at 4:30 pm but before 7:15 pm were classified as “Early Evening,” while those that started at 7:15 pm or later were the “Evening” classes. The results are shown in Table 12.

Table 12. OIE ratings by time of day by segment
          Undergrad                              Graduate
Core      1. Afternoon       4.2260 (n = 338)    1. Morning         4.4117 (n = 184)
          2. Morning         4.2229 (n = 675)    2. Afternoon       4.3332 (n = 31)
          3. Early Evening   4.2123 (n = 300)    3. Evening         4.2305 (n = 291)
          4. Evening         4.2229 (n = 355)    4. Early Evening   4.1844 (n = 303)
          p > .10                                p < .001; pairwise: 1 > 3,4
Noncore   1. Morning         4.3479 (n = 340)    1. Evening         4.3947 (n = 656)
          2. Early Evening   4.3019 (n = 569)    2. Morning         4.3413 (n = 85)
          3. Evening         4.2908 (n = 339)    3. Afternoon       4.3160 (n = 53)
          4. Afternoon       4.2239 (n = 630)    4. Early Evening   4.2992 (n = 549)
          p < .05; pairwise: 1,2 > 4             p < .05; pairwise: 1 > 4

The results are mixed. UC classes show no difference overall, whereas undergraduate noncore classes do better in the morning and early evening. GC classes score better in the mornings, while GN classes (which are mostly taught in the early evening or evening) score better in the evening than in the early evening. There was no difference in overall ratings among the four times of day when all four segments were combined.
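For reference, the time-of-day classification described above can be written as a small helper function; the cutoffs are taken directly from the text, and the function name is ours.

```python
# Illustrative classification of a section's start time into the four
# time-of-day groups defined above (cutoffs from the text; function name ours).
from datetime import time

def time_of_day(start: time) -> str:
    if start < time(12, 0):
        return "Morning"
    elif start < time(16, 30):
        return "Afternoon"
    elif start < time(19, 15):
        return "Early Evening"
    return "Evening"

print(time_of_day(time(10, 0)))   # Morning
print(time_of_day(time(18, 0)))   # Early Evening
print(time_of_day(time(19, 15)))  # Evening
```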

Finally, a scatter plot of OIE ratings versus class size is shown in Figure 1.

Figure 1. OIE rating by class size.

It is difficult to discern a relationship between the two variables from the plot, given the high density of points. The only visible pattern seems to be a slightly downward trend among the very large class sizes (over 100).

The average class size was 28.36. We tested for differences in ratings between class sizes of 30 and below and class sizes over 30. Table 13 shows the results.

Table 13. OIE ratings and class size
                                   Class Size ≤ 30   Class Size > 30
Mean                               4.34              4.24
Standard deviation                 .5515             .5123
Sample size (number of sections)   3,596             2,400
                                                     p < .001

Note: We also compared class sizes of 20 and under, 21–39, and 40 and above with an ANOVA. The results were uniformly in the same direction, with higher overall ratings for smaller class sizes.

The overall ratings for the smaller class sizes were significantly higher than for the larger ones.
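The class-size comparison above, along with the three-group ANOVA mentioned in the note to Table 13, could be run along the following lines, using the same hypothetical sei DataFrame and column names as in the earlier sketches.

```python
# Sketch of the class-size comparisons: two groups (<= 30 vs. > 30) and the
# three-group ANOVA from the note to Table 13. Column names are hypothetical.
import pandas as pd
from scipy import stats

small = sei.loc[sei["class_size"] <= 30, "oie"]
large = sei.loc[sei["class_size"] > 30, "oie"]
print(stats.ttest_ind(small, large))  # two-sample t-test, as in Table 13

# Three class-size groupings: 20 and under, 21-39, and 40 and above
bins = [0, 20, 39, sei["class_size"].max()]
groups = pd.cut(sei["class_size"], bins=bins, labels=["<=20", "21-39", "40+"])
print(stats.f_oneway(*[sei.loc[groups == g, "oie"] for g in ["<=20", "21-39", "40+"]]))
```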

DISCUSSION

Instructor ratings differ significantly with course-related factors such as course level and type. Ratings are higher for noncore classes than for core classes, consistent with our expectations based on the literature. It seems fairly well established that initial liking for a course does in fact affect the ratings of an instructor. Graduate classes overall get better ratings than undergraduate classes; graduate students are generally expected to be better prepared and to have a greater liking for the subject than undergraduates. Among core classes, there is no difference in ratings between undergraduate and graduate classes. Among noncore classes, however, there is a difference between the two.

Among core classes, there is no significant difference in ratings between male and female instructors. However, we see an interesting effect in the noncore classes. Undergraduate students rated female instructors higher than male instructors, whereas graduate students rated male instructors higher than female instructors. Younger students may prefer the nurturing characteristics attributed to female instructors. Similarly, the older graduate students perhaps prefer the perceived stereotypical qualities among male instructors of being forceful and goal driven.

Instructor rank or status also has an impact on overall ratings. In all four segments, NTT instructors consistently show higher ratings than untenured TT faculty. However, tenured faculty performed very well, especially in graduate classes. Among undergraduate classes, part-time instructors (PTIs) have better ratings than untenured TT faculty. In our opinion, this finding is consistent with the incentive structure in place for faculty at research institutions. NTT faculty are evaluated primarily on teaching effectiveness, whereas TT faculty are evaluated primarily on research, with lower emphasis on teaching. Once TT faculty receive tenure, however, the emphasis on research is reduced, giving them time to focus on teaching.

The influence of administrative factors like semester, time of day, and location (classroom quality) on overall ratings of instructors was mixed. Summer semester ratings are consistently higher than the ratings for spring or fall, with GC classes being the only exception. Summer classes on average have around 20–25 students, whereas fall and spring classes average 30 or more students. The regression analysis shows the effect of the semester to be significant even after controlling for the class size effect. An explanation for better summer ratings may be that students take fewer classes during summer, allowing greater focus on those classes. Furthermore, more frequent class meetings during summer may build better rapport with the instructor and better retention of material.

As for time of day, the regression shows a progression of rating differences, with instructors rated highest for evening classes, followed by morning, early evening, and afternoon classes, respectively. When the effect of timing was examined by itself for each of the four segments, we find some differences. Within the GC segment, morning classes receive higher ratings than evening classes, and not many classes are offered in the afternoon. Many of these morning courses are offered on Saturdays, when the graduate students are relatively free from work-related pressures. Within the UN segment, morning and early evening classes scored higher than afternoon classes, consistent with our expectation based on tiredness or sleepiness after lunch. Finally, in the GN segment, evening classes score higher than early evening classes (very few are taught in the morning or afternoon). This is also consistent with our expectations: after a long day at work, students are typically tired for an early evening class, but get a second wind after dinner for an evening class. None of the classroom location variables was significant in the regression. In other words, location (and by proxy, classroom quality) did not affect OIE ratings.

The effect of class size on OIE ratings is consistent with recent literature: smaller classes receive significantly higher ratings than larger ones. We first tested class sizes of 30 and below against those over 30, since 30 is close to the overall average class size of a little over 28. To see if there was a hint of a U-shaped relationship, as indicated by Wood et al. (1974), three groupings of class size (20 and under, 21–39, and 40 and above) were also tested. The results were unidirectional, with larger classes getting lower ratings on average.

CONCLUSION

As Brightman (2005) points out, in order to effectively use SEIs for assessment, the instrument must first be valid. The validity of the instrument used at the College of Business of this large public university was established by Brightman et al. (1989) and the instrument was revalidated in recent times by Nargundkar and Shrikhande (2012). Furthermore, the results of the SEIs should be appropriately normed for fair feedback to faculty. In other words, the impact of noninstructional factors on overall ratings of instruction must be controlled for in evaluating faculty. Noninstructional factors are by definition not relevant to one's teaching ability or effectiveness, and are beyond the instructor's control. However, these factors have the ability to bias an instructor's effectiveness ratings, as shown in this article. This has a major implication for administrators evaluating faculty.

Based on our findings, administrators should take various noninstructional factors into account when assessing faculty performance through student evaluations. At our business school, the four segments currently used for norming (UC, UN, GC, GN) by administrators are appropriate, given the results of this study. However, the study also suggests that they are insufficient, and that several additional factors, namely semester, time of day, instructor gender and rank, and class size, need to be considered. Based on our regression model, an instructor with an average score of 4.37 who happens to hit upon an adverse combination of these factors can in the worst case end up with a score of 4.05, while an instructor who hits upon the best combination can end up with a score of 4.57. In other words, two instructors with identical teaching effectiveness could receive overall student ratings that differ by as much as .52 on a scale of 1–5. Given that most SEI ratings fall between 3.0 and 5.0 (a range of 2.0), a difference of .52 due to extraneous factors can be drastic. An administrator's perception of an instructor's effectiveness can thus be distorted to a significant degree by noninstructional factors beyond the instructor's control.
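As a check, the .52 spread quoted above can be reproduced directly from the Table 5 coefficients; the short sketch below does the arithmetic, holding class size and location at their reference levels, mirroring the calculation in the text.

```python
# Reproducing the worst-case and best-case predictions quoted above from the
# Table 5 coefficients (class size and location effects are not included).
intercept = 4.3709

# Worst case: the largest negative effects (Afternoon, GC, GTA),
# relative to the Fall and Evening baselines
worst = intercept - 0.1040 - 0.0900 - 0.1268

# Best case: the largest positive effects (Summer, NTT),
# with Evening and graduate noncore as the baselines
best = intercept + 0.1230 + 0.0752

print(round(worst, 2), round(best, 2), round(best - worst, 2))  # 4.05 4.57 0.52
```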

For other colleges, the implication of our study is that norming is essential, and administrators at each college must identify the noninstructional factors most relevant to norming in their institutional setting. Such a study is worth doing at every college that uses SEIs to evaluate faculty. The noninstructional factors we identified as significantly impacting student ratings of instruction may be specific to our institution alone.

Recent research (Benton & Cashin, 2012) suggested that it is a misconception to attribute poor overall ratings to such noninstructional factors. Our results suggest that while noninstructional factors cannot entirely explain poor (or good) ratings, they do have the potential to bias the ratings sufficiently to matter in administrative decisions. Peterson et al. (2008) in their study of a single department within a business school suggest the possibility that instructors may try to game the system by using noninstructional factors to improve their ratings without necessarily improving teaching effectiveness. Appropriate norming procedures can eliminate this problem.

Although our study suggests ways to mitigate the distortions caused by noninstructional factors on teaching effectiveness ratings, student evaluations are by no means the only measure of teaching effectiveness and student learning. Many researchers provide ways of guarding against potential bias in SEIs (Baldwin & Blattner, 2003). Using alternative approaches such as portfolios, peer feedback sessions, and informal student surveys in addition to SEIs can further help to combat or circumvent these potential biases. Scriven (2011) suggests three models for teacher evaluation in increasing order of desirability. First, a self-assessment by faculty members; second, student evaluation of instructors reported to administrators (the method most commonly adopted); and third, an external examiner evaluating student achievement and thereby inferring the efficacy of the teacher.

Overall, the debate in the literature tends to either extol the virtues of SEIs or denigrate them as useless. Our research shows that SEIs can be useful instruments as long as they are validated, and the biases that affect them are accounted for in the evaluation process.

REFERENCES

  • Aigner, D. J., & Thum, F. D. (1986). On student evaluation of teaching ability. Journal of Economic Education, 17(Fall), 243–265.
  • Aleamoni, L. M. (1981). Student ratings of instruction. In J. Millman (Ed.), Handbook of teacher evaluation. Beverly Hills, CA: Sage, 110–145.
  • Arvey, R. D. (1979). Unfair discrimination in the employment interview: Legal and psychological aspects. Psychological Bulletin, 86, 736–765.
  • Baldwin, T., & Blattner, N. (2003). Guarding against potential bias in student evaluations: What every faculty member needs to know. College Teaching, 51(1), 27–32.
  • Bauer, C. B., & Baltes, B. B. (2002). Reducing the effects of gender stereotypes on performance evaluations of college professors. Sex Roles: A Journal of Research, 47, 465–476.
  • Bedard, K., & Kuhn, P. (2008). Where class size really matters: Class size and student ratings of instructor effectiveness. Economics of Education Review, 27, 253–265.
  • Benton, S. L., & Cashin, W. E. (2012). Student ratings of teaching: A summary of research and literature. IDEA Paper # 50, IDEA Center, Kansas State University.
  • Benton, S. L., Douchon, D., & Pallett, W. H. (2013). Validity of student self-reported ratings of learning. Assessment and Evaluation in Higher Education, 38(4), 377–388.
  • Bergquist, T. M., & Maggs, A. (2011). A bookstore for Bailey: A novel approach to teaching a small-business management course. Decision Sciences Journal of Innovative Education, 9(2), 269–274.
  • Blackburn, R. T., & Lawrence, J. H. (1986). Aging and the quality of faculty job performance. Review of Educational Research, 56, 265–290.
  • Blackhart, G. C., Peruche, B. M., DeWall, C. N., & Joiner, T. E. J. (2006). Factors influencing teaching evaluations in higher education. Teaching of Psychology, 33, 37–39.
  • Brightman, H., Elliott, M., & Bhada, Y. (1993). Increasing the effectiveness of student evaluation of instructor data through a factor score comparative report. Decision Sciences, 24(1), 192–199.
  • Brightman, H. J. (2005). Mentoring faculty to improve teaching and student learning. Decision Sciences Journal of Innovative Education, 3, 191–203.
  • Brightman, H. J., Bhada, Y., Elliott, M., & Vandenberg, R. (1989). An empirical study to examine the reliability and validity of a student evaluation of instructor instrument. GSU College of Business Administration Internal Working Document, prepared by the Faculty Development Committee (FDC).
  • Centra, J. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education, 44(8), 495–518.
  • Centra, J. A. (2009). Differences in responses to the student instructional report: Is it bias? Princeton, NJ: Educational Testing Service.
  • Centra, J. A., & Gaubatz, N. B. (2000). Is there a gender bias in student evaluations of teaching? Journal of Higher Education, 70, 17–33.
  • Costin, F., Greenough, W. T., & Menges, R. J. (1971). Student ratings of college teaching: Reliability, validity, and usefulness. Review of Educational Research, 41, 511–535.
  • Davies, M., Hirshberg, J., Lye, J., Johnson, C., & McDonald, I. (2007). Systematic influences on teaching evaluations: The case for caution. Australian Economic Papers, 46(1), 18–38.
  • Davis, B. G. (2009). Tools for teaching (2nd ed.). San Francisco: Jossey Bass.
  • Del Boca, F. K., & Ashmore, R. D. (1980). Sex stereotypes and implicit personality theory. II. A trait-inference approach to the assessment of sex stereotypes. Sex Roles, 6(4), 519–535.
  • Dobbins, G. H., Cardy, R. L., & Truxillo, D. M. (1988). The effects of purpose of appraisal and individual differences in stereotypes of women on sex differences in performance ratings: A laboratory and field study. Journal of Applied Psychology, 73, 551–558.
  • Feldman, K. A. (1978). Course characteristics and college students' ratings of their teachers: What we know and what we don't. Research in Higher Education, 9, 199–242.
  • Feldman, K. A. (1984). Class size and college students' evaluations of teachers and courses: A closer look. Research in Higher Education, 22(1), 45–116.
  • Feldman, K. A. (1993). College students' views of male and female faculty college teachers: Part II—Evidence from students' evaluations of their classroom teachers. Research in Higher Education, 34, 151–211.
  • Feldman, K. A. (2007). Identifying exemplary teachers and teaching: Evidence from student ratings. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence based perspective. Dordrecht, The Netherlands: Springer, 93–129.
  • Greenwald, A. G. (1997). Validity concerns and usefulness of student ratings of instruction. American Psychologist, 52(11), 1182–1186.
  • Hancock, G. R., Shannon, D. M., & Trentham, L. L. (1993). Student and teacher gender in ratings of university faculty: Results from five colleges of study. Journal of Personnel Evaluation in Education, 6(3), 235–248.
  • Heckert, T. M., Latier, A., Ringwald, A., & Silvey, B. (2006). Relation of course, instructor, and student characteristics to dimensions of student ratings of teaching effectiveness. College Student Journal, 40, 195–203.
  • Isley, P., & Singh, H. (2005). Do higher grades lead to favorable student evaluations? Journal of Economic Education, 36, 29–42.
  • Isley, P., & Singh, H. (2007). Does faculty rank influence student teaching evaluations? Implications for assessing instructor effectiveness. Business Education Digest, XVI, 47–59.
  • Kohn, J., & Hatfield, L. (2006). The role of gender in teaching effectiveness ratings of faculty. Academy of Educational Leadership Journal, 10(3), 121–137.
  • Lackritz, J. R. (2004). Exploring burnout among university faculty: Incidence, performance, and demographic issues. Teaching and Teacher Education, 20(7), 713–729.
  • Liaw, S.-H., & Goh, K.-L. (2003). Evidence and control of biases in student evaluations of teaching. The International Journal of Educational Management, 17(1), 37–43.
  • Marsh, H. W., Overall, J. U., & Kesler, S. B. (1979). Validity of student evaluations of instructional effectiveness: A comparison of faculty self-evaluations and evaluation by their students. Journal of Educational Psychology, 71, 149–160.
  • Marsh, H. W., & Roche, L. A. (2000). Effectiveness of grading leniency and low workload on students' evaluation of teaching: Popular myths, bias, validity or innocent bystanders? Journal of Educational Psychology, 92(1), 202–228.
  • Mobley, W. H. (1982). Supervisor and employee race and sex effects on performance appraisals: A field study of adverse impact and generalizability. Academy of Management Journal, 25, 598–606.
  • Mohan, N. (2011). On the use of non tenure track faculty and the potential effect on classroom content and student evaluation of teaching. Journal of Financial Education, 37(Spring/Summer), 29–42.
  • Nargundkar, S., & Shrikhande, M. (2012). An empirical investigation of student evaluations of instruction—The relative importance of factors. Decision Sciences Journal of Innovative Education, 10(1), 117–135.
  • Peterson, R. L., Berenson, M. L., Misra, R. B., & Radosevich, D. J. (2008). An evaluation of factors regarding students' assessment of faculty in a business school. Decision Sciences Journal of Innovative Education, 6(2), 375–402.
  • Pounder, J. S. (2007). Is student evaluation of teaching worthwhile? Quality Assurance in Education, 15(2), 178–191.
  • Reid, L. D. (2010). The role of perceived race and gender in the evaluation of college teaching on RateMyProfessors.com. Journal of Diversity in Higher Education, 3(3), 137–152.
  • Richardson, J. T. E. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387–415.
  • Scriven, M. (2011). Evaluation bias and its control. Journal of Multi Disciplinary Evaluation, 7(15), 79–98.
  • Seldin, P. (1993). Successful use of teaching portfolios. Bolton, MA: Anker Pub Co.
  • Snider, B. R., & Eliasson, J. B. (2013). Beat the instructor: An introductory forecasting game. Decision Sciences Journal of Innovative Education, 11(2), 147–157.
  • Sprague, J., & Massoni, K. (2005). Student evaluations and gendered expectations: What we can't count can hurt us. Sex Roles, 53(11–12), 779–793.
  • Tatro, C. N. (1995). Gender effects on student evaluations of faculty. Journal of Research & Development in Education, 28, 169–173.
  • Wood, K., Linsky, A. S., & Straus, M. A. (1974). Class size and student evaluations of faculty. The Journal of Higher Education, 45(7), 524–534.
  • Zangenehzadeh, H. (1988). Grade inflation: A way out. The Journal of Economic Education, 19, 217–226.

Biographies

  • Satish Nargundkar is a Clinical Associate Professor of Managerial Sciences at Georgia State University's J. Mack Robinson College of Business (RCB). He worked for AT&T and Acxiom Corp., and consulted with numerous companies in the area of Marketing and Risk analysis. His research interests are in Supply Chain Management, Strategic Decision Making, Process Improvement, and Teaching Innovation. He has published in several journals including the European Journal of Operations Research, Operations Management Education Review, Journal of Global Strategies, and the Journal of Managerial Issues. He received the RCB award for outstanding contributions to teaching in 2001 and 2012.

  • Milind Shrikhande is a Clinical Professor of Finance at Georgia State University's J. Mack Robinson College of Business (RCB). He has worked at Georgia Tech, Unilever, and the World Bank. He is also a Visiting Scholar at the Federal Reserve Bank of Atlanta. He was awarded the First Harvey Brightman Award for teaching innovation at the RCB in 2006, GSU's Instructional Effectiveness Award in 2007, and the Outstanding Professor Award in the Global Partners MBA Program at RCB in 2010. His research has been published in the Review of Financial Studies Journal.