The Determinants of Students' Perceived Learning Outcomes and Satisfaction in University Online Education: An Empirical Investigation*

* We thank the editor and two anonymous referees for their suggestions, which significantly improved the quality of this article.

ABSTRACT

In this study, structural equation modeling is applied to examine the determinants of students' satisfaction and their perceived learning outcomes in the context of university online courses. The independent variables included in the study are course structure, instructor feedback, self-motivation, learning style, interaction, and instructor facilitation as potential determinants of online learning. A total of 397 valid unduplicated responses from students who had completed at least one online course at a university in the Midwest were used to examine the structural model. The results indicated that all of the antecedent variables significantly affect students' satisfaction. Of the six antecedent variables hypothesized to affect perceived learning outcomes, only instructor feedback and learning style are significant. The structural model results also reveal that user satisfaction is a significant predictor of learning outcomes. The findings suggest that online education can be a superior mode of instruction if it is targeted to learners with specific learning styles (visual and read/write) and delivered with timely, meaningful instructor feedback of various types.

INTRODUCTION

The landscape of distance education is changing. This change is being driven by the growing acceptance and popularity of online course offerings and complete online degree programs at colleges and universities worldwide. The distance learning system can be viewed as having several human/nonhuman entities interacting together via computer-based instructional systems to achieve the goals of education, including perceived learning outcomes and student satisfaction. These two outcomes are widely cited as measures of the effectiveness of online education systems (e.g., Alavi, Wheeler, & Valacich, 1995; Graham & Scarborough, 2001).

The primary objective of this study is to investigate the determinants of students' perceived learning outcomes and satisfaction in university online education using e-learning systems. Using the extant literature, we begin by introducing and discussing a research model illustrating factors affecting e-learning systems outcomes. We follow this with a description of the cross-sectional survey that was used to collect data and the results from a Partial Least Squares (PLS) analysis of the research model. In the final section, we outline the implications of the results for higher educational institutions and acknowledge the limitations of the study and a future research agenda.

THE IMPORTANT FACTORS THAT CONTRIBUTE TO THE SUCCESS OF E-LEARNING SYSTEMS

Our conceptual model illustrating factors potentially affecting e-learning systems outcomes is built on the conceptual frameworks of Piccoli, Ahmad, and Ives (2001). Piccoli et al. (2001) refer to human and design factors as antecedents of learning effectiveness. Human factors are concerned with students and instructors, while design factors characterize such variables as technology, learner control, course content, and interaction. The conceptual framework of online education proposed by Peltier, Drago, and Schibrowsky (2003) consists of instructor support and mentoring, instructor-to-student interaction, student-to-student interaction, course structure, course content, and information delivery technology. Our research model is illustrated in Figure 1.

Figure 1.

Research model.

Student Self-Motivation

Students are the primary participants in e-learning systems. Web-based e-learning systems place more responsibility on learners than traditional face-to-face learning systems do. A different learning strategy, self-regulated learning, is necessary for e-learning systems to be effective. Self-regulated learning requires students to change roles from passive learners to active learners who self-manage the learning process. The core of self-regulated learning is self-motivation (Smith, 2001), defined as the self-generated energy that gives behavior direction toward a particular goal (Zimmerman, 1985, 1994).

The strength of the learner's self-motivation is influenced by self-regulatory attributes and self-regulatory processes. The self-regulatory attributes are the learner's personal learning characteristics including self-efficacy, which is situation-specific self-confidence in one's abilities (Bandura, 1977). Because self-efficacy influences choice, efforts, and volition (Schunk, 1991), a survey question (Moti1 in Appendix A) representing self-efficacy is used to measure the strength of self-motivation. The self-regulatory processes refer to the learner's personal learning processes such as attributions, goals, and monitoring. Attributions are views in regard to the causes of an outcome (Heider, 1958). A survey question (Moti2 in Appendix A) representing a controllable attribution is used to measure the strength of self-motivation.

One of the stark contrasts between successful and less successful students is the apparent ability of the former to motivate themselves, even when they lack a burning desire to complete a given task. Less successful students, on the other hand, tend to have difficulty calling up self-motivation skills such as goal setting, verbal reinforcement, self-rewards, and punishment control techniques (Dembo & Eaton, 2000). The extant literature suggests that students with strong motivation will be more successful and tend to learn more in Web-based courses than those with less motivation (e.g., Frankola, 2001; LaRose & Whitten, 2000). Students' motivation is a major factor affecting attrition and completion rates in Web-based courses, and a lack of motivation is also linked to high dropout rates (Frankola, 2001; Galusha, 1997). Thus, we hypothesized:

  • H1a: Students with a higher level of motivation will experience a higher level of user satisfaction.

  • H1b: Students with a higher level of motivation in online courses will report higher levels of agreement that the learning outcomes are equal to or better than in face-to-face courses.

Students' Learning Styles

Learning is a complex process of acquiring knowledge or skills involving a learner's biological characteristics/senses (physiological dimension); personality characteristics such as attention, emotion, motivation, and curiosity (affective dimension); information processing styles such as logical analysis or gut feelings (cognitive dimension); and psychological/individual differences (psychological dimension) (Dunn, Beaudry, & Klavas, 1989). Due to the multiple dimensions of difference in each learner, there has been continuing research interest in learning styles. Some 21 models of learning styles are cited in the literature (Curry, 1983), including the Kolb learning preference model (Kolb, 1984), Gardner's theory of multiple intelligences (Gardner, 1983), and the Myers-Briggs Personality Type Indicators (Myers & Briggs, 1995). The basic premise of learning style research is that different students learn differently and that students experience higher levels of satisfaction and learning outcomes when there is a fit between a learner's learning style and a teaching style.

This study uses the physiological dimension of learning styles, which focuses on which senses are used for learning. A popular typology for the physiological dimension of learning styles is VARK (Visual, Aural, Read/write, and Kinesthetic) (Drago & Wagner, 2004, p. 2).

  • 1) Visual: visual learners like to be provided demonstrations and can learn through descriptions. They like to use lists to maintain pace and organize their thoughts. They remember faces but often forget names. They are distracted by movement or action but noise usually does not bother them.
  • 2) Aural: aural learners learn by listening. They like to be provided with aural instructions. They enjoy aural discussions and dialogues and prefer to work out problems by talking. They are easily distracted by noise.
  • 3) Read/write: read/write learners are note takers. They do best by taking notes during a lecture or reading difficult material. They often draw things to remember them. They do well with hands-on projects or tasks.
  • 4) Kinesthetic: kinesthetic learners learn best by doing. Their preference is for hands-on experiences. They are often high energy and like to make use of touching, moving, and interacting with their environment. They prefer not to watch or listen and generally do not do well in the classroom.

One can speculate that a different set of learning styles is served in an online course than in a face-to-face course. We assume that online learning systems include fewer sound or oral components than traditional face-to-face course delivery systems and a higher proportion of read/write assignment components. Students with visual and read/write learning styles may therefore do better in online courses than their counterparts in face-to-face courses. Hence, we hypothesized:

  • H2a: Students with visual and read/write learning styles will experience a higher level of user satisfaction.

  • H2b: Students with visual and read/write learning styles will report higher levels of agreement that the learning outcomes of online courses are equal to or better than in face-to-face courses.

Instructor Knowledge and Facilitation

Some widely accepted learning models are objectivism, constructivism, collaborativism, cognitive information processing, and socioculturalism (Leidner & Jarvenpaa, 1995). Traditional face-to-face classes, relying primarily on the lecture method, use the objectivist model of learning, whose goal is the transfer of knowledge from instructor to students. Even in distance learning, transferring knowledge remains a critical role of the instructor, because the knowledge of the instructor is transmitted to students at different locations (Leidner & Jarvenpaa, 1995). Thus, we included a question asking students' perception of the knowledge of the instructor: “The instructor was very knowledgeable about the course.”

Distance learning can easily break a major assumption of objectivism: that the instructor houses all necessary knowledge. For this reason, distance learning systems can utilize many other learning models, such as constructivism, collaborativism, and socioculturalism. Constructivism assumes that individuals learn better when they control the pace of learning; the instructor therefore supports learner-centered active learning. Under collaborativism, student involvement is critical to learning. The basic premise of this model is that students learn through the shared understanding of a group of learners. Instruction therefore becomes communication-oriented, and the instructor becomes a discussion leader. Distance learning systems promote collaborative learning across distances by enabling students to communicate with each other. The socioculturalism model necessitates empowering students with freedom and responsibilities because learning is individualistic.

E-learning environments demand a transition of the roles of students and the instructor. The instructor's role is to become a facilitator who stimulates, guides, and challenges his/her students via empowering students with freedom and responsibility, rather than a lecturer who focuses on the delivery of instruction (Huynh, 2005). The importance of the level of encouragement can be found in the model proposed by Lam (2005). We added two questions to assess the roles of the instructor as the facilitator and stimulator: “The instructor was actively involved in facilitating this course” and “The instructor stimulated students to intellectual effort beyond that required by face-to-face courses.” Therefore, we hypothesized:

  • H3a: A higher level of instructor knowledge and facilitation will lead to a higher level of user satisfaction.

  • H3b: A higher level of instructor knowledge and facilitation will lead to higher levels of student agreement that the learning outcomes of online courses are equal to or better than in face-to-face courses.

Instructor Feedback

Instructor feedback to the learner is defined as information a learner receives about his/her learning process and achievement outcomes (Butler & Winne, 1995), and it is “one of the most powerful components in the learning process” (Dick & Carey, 1990, p. 165). Instructor feedback aims to improve student performance by informing students how well they are doing and by directing their learning efforts. Instructor feedback in a Web-based system ranges from the simplest cognitive feedback (e.g., an examination or assignment with an answer marked wrong), to diagnostic feedback (e.g., instructor comments about why answers are correct or incorrect), to prescriptive feedback (suggestions for how the correct responses can be constructed), delivered via replies to student e-mails, graded work with comments, online grade books, and synchronous and asynchronous commentary.

Instructor feedback to students can improve learner affective responses, increase cognitive skills and knowledge, and activate metacognition. Metacognition refers to the awareness and control of cognition through planning, monitoring, and regulating cognitive activities (Pintrich, Smith, Garcia, & McKeachie, 1991). Metacognitive feedback concerning learner progress directs the learner's attention to learning outcomes (Ley, 1999). When metacognition is activated, students may become self-regulated learners. They can set specific learning outcomes and monitor the effectiveness of their learning methods or strategies (Chen, 2002; Zimmerman, 1989). Therefore, we hypothesized:

  • H4a: A high level of instructor feedback will lead to a high level of user satisfaction.

  • H4b: A higher level of instructor feedback will lead to higher levels of student agreement that the learning outcomes of online courses are equal to or better than in face-to-face courses.

Interaction

The design dimension includes a wide range of constructs that affect the effectiveness of e-learning systems, such as technology, learner control, learning model, course content and structure, and interaction. Of these, the research model focuses only on interaction and course structure.

Among the many frameworks/taxonomies of interaction (Northrup, 2002), this research adopts Moore's (1989) communication framework, which classifies engagement in learning as (a) interaction between participants and learning materials, (b) interaction between participants and tutors/experts, and (c) interaction among participants. These three forms of interaction in online courses are recognized as important and critical constructs determining Web-based course quality. Students who reported higher levels of interaction with the instructor and peers generally reported higher levels of satisfaction and learning (e.g., Swan, 2001). A number of previous studies suggested that an interactive teaching style and high levels of learner-to-instructor interaction are strongly associated with high levels of user satisfaction and learning outcomes (e.g., Arbaugh, 2000; Swan, 2001).

Swan (2001) reported student perceptions of interaction with their peers to be related to four components: actual interactions in the courses, the percentage of the course grade that was based on discussion, required participation in discussions, and the average length of discussion responses. Graham and Scarborough (2001) bolstered Swan's findings as their survey determined that 64% of students claimed that having access to a group of students was important. Furthermore, Picciano (1998) discovered that students perceive learning from online courses to be related to the amount of discussion actually taking place in them. When students actively participate in an intellectual exchange with fellow students and the instructor, students verbalize what they are learning in a course and articulate their current understanding (Chi & VanLehn, 1991). Therefore, we hypothesized:

  • H5a: A high level of perceived interaction between the instructor and students and between students and students will lead to a high level of user satisfaction.

  • H5b: A higher level of perceived interaction between the instructor and students and between students and students will lead to higher levels of student agreement that the learning outcomes of online courses are equal to or better than in face-to-face courses.

Course Structure

Course structure is seen as a crucial variable affecting the success of distance education, along with interaction. According to Moore (1991, p. 3), the course structure “expresses the rigidity or flexibility of the program's educational objectives, teaching strategies, and evaluation methods” and describes “the extent to which an education program can accommodate or be responsive to each learner's individual needs.”

Course structure has two elements—course objectives/expectations and course infrastructure. Course objectives/expectations are to be specified in the course syllabus, including what topical areas are to be learned, the required workload in completing assignments, expected class participation in online conferencing systems, group project assignments, and so on. Course infrastructure is concerned with the overall usability of the course Web site and the organization of the course material into logical and understandable components. These structural elements, needless to say, affect the satisfaction level and learning outcomes of distance learners.

We theorize that course structure will be strongly correlated with user satisfaction and perceived learning outcomes, especially when the course material is organized into logical and understandable components, and that clear communication of course objectives and procedures will lead to high levels of student satisfaction and perceived learning outcomes. Thus, we hypothesized:

  • H6a: A good course structure will lead to a high level of user satisfaction.

  • H6b: A good course structure will lead to higher levels of student agreement that the learning outcomes of online courses are equal to or better than in face-to-face courses.

METHODOLOGY

The six sets of hypotheses were tested using a quantitative survey of the satisfaction and learning outcome perceptions of students who had taken at least one online course at a large Midwestern university in the United States. Structural equation modeling is employed to examine the determinants of perceived learning outcomes and student satisfaction. The challenge in designing the study was to reduce the complexity of the research object without resorting to an unjustifiable simplification or producing an unmanageable research project. Details regarding the design of this research are provided in the following sections. First, the development of the survey instrument is described and the sample subjects are discussed. Next, the specific measures used to assess the variables are identified, and scale reliability and validity data are reported. This is followed by the presentation of the structural model results associated with the survey.

Survey Instrument

After conducting an extensive literature review, we designed a list of questions that we believed were logically associated with the factors in our model (see Appendix A). The survey questionnaire is in part adapted or selected from the commonly administered IDEA (Individual Development & Educational Assessment) student rating systems developed by Kansas State University.

In an effort to survey students using technology-enhanced e-learning systems, we focused on students enrolled in Web-based courses with no on-campus meetings. We collected e-mail addresses from the student data files archived for every online course delivered through the online program of a university in the midwestern United States. From these files, we generated 1,854 valid e-mail addresses. The 42-question survey was created with FrontPage 2000, and the survey URL and instructions were sent to all valid e-mail addresses. We collected 397 valid unduplicated responses. Appendix B summarizes the characteristics of the student sample.

Research Method

The research model (Figure 1) was tested using the structural equation model-based PLS methodology for two reasons. First, PLS is well suited to the early stages of theory building and testing (Chin, 1998). It is particularly applicable in research areas where theory is not as well developed as that demanded by linear structural relationship (LISREL) (Fornell & Bookstein, 1982) as is the case with this research study. Second, PLS is most appropriately used when the researcher is primarily concerned with prediction of the dependent variable (Fornell & Bookstein, 1982; Garthwaite, 1994).

Measurement Model Estimation

The first step in data analysis involved estimating the measurement model, which includes an assessment of the internal consistency and the convergent and discriminant validity of the instrument items. The reliability of each block of indicators measuring a construct was assessed with three measures: the composite reliability measure of internal consistency, Cronbach's alpha, and average variance extracted (AVE). The internal consistency measure is similar to Cronbach's alpha, except that Cronbach's alpha presumes, a priori, that each indicator of a construct contributes equally (i.e., the loadings are set to unity). Cronbach's alpha assumes parallel measures and represents a lower bound of composite reliability (Chin, 1998; Fornell & Larcker, 1981). The internal consistency measure, which is unaffected by scale length, is more general than Cronbach's alpha, but the interpretation of the values obtained is similar and the guidelines offered by Nunnally and Bernstein (1994) can be adopted. All reliability measures were above the recommended level of .70 (Table 1), indicating adequate internal consistency (Fornell & Bookstein, 1982; Nunnally & Bernstein, 1994). The AVE values were also above the minimum threshold of .50 (Chin, 1998; Fornell & Larcker, 1981), ranging from .616 to .783 (see Table 1). When AVE is greater than .50, the variance shared between a construct and its measures is greater than error; this level was achieved for all of the model constructs.
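These reliability statistics can be reproduced directly from standardized indicator loadings. As a minimal sketch using the standard Fornell and Larcker (1981) formulas, applied here to the self-motivation block from Table 1 (the function names are ours, not from any package):

```python
# Composite reliability (internal consistency) and AVE computed from
# standardized indicator loadings (Fornell & Larcker, 1981).
def composite_reliability(loadings):
    # IC = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    sum_l = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

# Self-motivation block loadings from Table 1
moti = [0.5249, 0.9783]
print(round(composite_reliability(moti), 2))       # 0.75, matches IC in Table 1
print(round(average_variance_extracted(moti), 2))  # 0.62, matches AVE in Table 1
```

Applying the same two functions to any other block in Table 1 should recover its reported IC and AVE values up to rounding.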

Table 1.  Convergent and discriminant validity of the model constructs.

Note: IC = internal consistency; AVE = average variance extracted.

Variable                                                      Factor Loading
Course structure (IC = .89, AVE = .73)
  Struc1                                                      .8375
  Struc2                                                      .8681
  Struc3                                                      .8500
Instructor feedback (IC = .93, AVE = .77)
  Feed1                                                       .8739
  Feed2                                                       .8295
  Feed3                                                       .9041
  Feed4                                                       .9017
Self-motivation (IC = .75, AVE = .62)
  Moti1                                                       .5249
  Moti2                                                       .9783
Learning style (IC = .80, AVE = .67)
  Styl1                                                       .8876
  Styl2                                                       .7441
Interaction (IC = .77, AVE = .62)
  Intr1                                                       .8823
  Intr2                                                       .6845
Instructor knowledge and facilitation (IC = .89, AVE = .73)
  Inst1                                                       .8468
  Inst2                                                       .9035
  Inst3                                                       .8055
User satisfaction (IC = .90, AVE = .76)
  Sati1                                                       .8686
  Sati2                                                       .9065
  Sati3                                                       .8301
Learning outcomes (IC = .92, AVE = .78)
  Outc1                                                       .8533
  Outc2                                                       .8991
  Outc3                                                       .9017

Convergent validity is demonstrated when items load highly (loading >.50) on their associated factors. Individual reflective measures are considered to be reliable if they correlate more than .7 with the construct they intend to measure. In the early stages of scale development, loading of .5 or .6 is considered acceptable if there are additional indicators in the block for comparative purposes (Chin, 1998). Table 1 shows most of the loadings were above .7 for the eight constructs.

Discriminant validity was assessed using two methods. First, by examining the cross-loadings of the constructs and the measures and, second, by comparing the square root of the AVE for each construct with the correlation between the construct and other constructs in the model (Chin, 1998; Fornell & Larcker, 1981). All constructs in the estimated model fulfilled the condition of discriminant validity (see Table 2).

Table 2.  Correlation among construct scores (square root of AVE in the diagonal).

Note: The diagonal figures represent the square root of the AVE. They should be higher than the off-diagonal correlation figures.

Construct                                  CS    IF    SM    LS    IN    IKF   US    LO
Course structure (CS)                     .852
Instructor feedback (IF)                  .721  .878
Self-motivation (SM)                      .243  .229  .784
Learning style (LS)                       .293  .214  .265  .819
Interaction (IN)                          .415  .564  .394  .276  .789
Instructor knowledge & facilitation (IKF) .679  .802  .252  .257  .524  .852
User satisfaction (US)                    .740  .695  .393  .406  .531  .708  .868
Learning outcomes (LO)                    .547  .486  .391  .443  .441  .539  .773  .884

Overall, the revised measurement model results provided support for the reliability and convergent and discriminant validities of the measures used in the study.
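The second discriminant validity condition above can be checked mechanically: the square root of each construct's AVE must exceed its correlations with all other constructs. As a small sketch using two constructs from Tables 1 and 2 (differences from the Table 2 diagonal reflect rounding of the AVE values):

```python
import math

# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed its
# correlation with every other construct.  Values taken from Tables 1 and 2.
ave = {"Course structure": 0.73, "Instructor feedback": 0.77}
r = 0.721  # correlation between the two constructs (Table 2)

for name, value in ave.items():
    sqrt_ave = math.sqrt(value)
    # Discriminant validity holds when sqrt(AVE) > correlation
    assert sqrt_ave > r, f"{name} fails the Fornell-Larcker criterion"
    print(f"{name}: sqrt(AVE) = {sqrt_ave:.3f} > r = {r}")
```

In a full check, the loop would run over every pair of constructs in Table 2 rather than a single correlation.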

STRUCTURAL MODEL RESULTS

Because PLS makes no distributional assumptions in its parameter estimation procedure, traditional parameter-based techniques for significance testing and model evaluation are considered inappropriate (Chin, 1998). LISREL and other covariance structure analysis approaches involve parameter estimation procedures that seek to reproduce the observed covariance matrix as closely as possible. In contrast, PLS has as its primary objective the minimization of error (or, equivalently, the maximization of variance explained) in all endogenous constructs. One consequence of this difference in objectives is that no proper overall goodness-of-fit measures exist for PLS.

Consistent with the distribution free, predictive approach of PLS (Wold, 1985), the structural model was evaluated using the R-squared for the dependent constructs, the Stone-Geisser Q2 test (Geisser, 1975; Stone, 1974) for predictive relevance, and the size, t statistics, and significance level of the structural path coefficients. The t statistics were estimated using the bootstrap resampling procedure (100 resamples). The results of the structural model are summarized in Table 3.
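The bootstrap logic behind the t statistics can be sketched as follows. This is an illustration on a synthetic regression only, not the study's data; PLS software re-estimates the full structural model on each resample, but the resampling principle is the same:

```python
import random
random.seed(1)

# Synthetic data: y depends on x plus noise (illustrative only)
n = 100
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]

def slope(xs, ys):
    # Ordinary least-squares slope, standing in for a PLS path coefficient
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

estimate = slope(x, y)

# Bootstrap: resample cases with replacement and re-estimate the path
# (100 resamples, as in the study)
boot = []
for _ in range(100):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))

mean_b = sum(boot) / len(boot)
se = (sum((b - mean_b) ** 2 for b in boot) / (len(boot) - 1)) ** 0.5
print(f"path = {estimate:.3f}, bootstrap SE = {se:.3f}, t = {estimate / se:.2f}")
```

The t value is simply the original path estimate divided by the standard deviation of the resampled estimates, which is why no distributional assumption about the data is required.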

Table 3.  Structural (inner) model results.

                                          Path Coefficient   Observed t Value   Sig. Level
Effects on user satisfaction (R2 = .692)
  Course structure                            +.382              7.4497          ****
  Instructor feedback                         +.119              1.8457          **
  Self-motivation                             +.141              4.1396          ****
  Learning style                              +.147              4.0405          ****
  Interaction                                 +.087              2.0773          ***
  Instructor knowledge and facilitation       +.234              4.2996          ****
Effects on learning outcomes (R2 = .628)
  Course structure                            −.015               .2483          ns
  Instructor feedback                         +.118              1.6713          **
  Self-motivation                             +.075              1.6169          *
  Learning style                              +.135              3.8166          ****
  Interaction                                 +.031               .7687          ns
  Instructor knowledge and facilitation       +.065              1.0805          ns
  User satisfaction                           +.720             12.4127          ****

Note: ****p < .001; ***p < .010; **p < .050; *p < .100; ns = not significant.

R2 for Dependent Constructs

The results show that the structural model explains 69.2 percent of the variance in the user satisfaction construct and 62.8 percent of the variance in the learning outcomes construct. The percentage of variance explained for each of these primary dependent variables was greater than 10 percent, implying satisfactory and substantive predictive power of the PLS model (Falk & Miller, 1992).

The Stone-Geisser Q2 Test

In addition to examining the R2, the PLS model is also evaluated by looking at the Q2 for predictive relevance for the model constructs. Q2 is a measure of how well the observed values are reproduced by the model and its parameter estimates. Q2 is estimated using a blindfolding procedure that omits a part of the data for a particular block of indicators during parameter estimation (Chin, 1998). The omitted part is then estimated using the estimated parameters, and the procedure is repeated until every data point has been omitted and estimated. Two types of Q2 can be estimated. A cross-validated communality Q2 is obtained if prediction of the omitted data points in the blindfolded block of indicators is made by the underlying latent variable. A redundancy Q2 is obtained if prediction of the omitted data points is made by constructs that are predictors of the blindfolded construct in the PLS model (Veniak, Midgley, & Devinney, 1998). Q2 greater than 0 implies that the model has predictive relevance, whereas Q2 less than 0 suggests that the model lacks predictive relevance.
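The omit-and-predict logic of blindfolding can be sketched as follows. This is a deliberately simplified illustration on a synthetic indicator pair (the function name and data are ours); PLS packages apply the same idea block by block across the full model:

```python
# Simplified sketch of the Stone-Geisser blindfolding procedure.
# Every d-th case of the outcome is omitted, predicted from the remaining
# cases via a simple regression, and the prediction error (SSE) is compared
# against a trivial mean prediction (SSO): Q2 = 1 - SSE/SSO.
# Q2 > 0 implies predictive relevance; Q2 < 0 implies a lack of it.
def q2_blindfold(x, y, d):
    sse, sso = 0.0, 0.0
    for start in range(d):
        keep = [i for i in range(len(y)) if i % d != start]
        omit = [i for i in range(len(y)) if i % d == start]
        mx = sum(x[i] for i in keep) / len(keep)
        my = sum(y[i] for i in keep) / len(keep)
        slope = (sum((x[i] - mx) * (y[i] - my) for i in keep)
                 / sum((x[i] - mx) ** 2 for i in keep))
        intercept = my - slope * mx
        sse += sum((y[i] - (intercept + slope * x[i])) ** 2 for i in omit)
        sso += sum((y[i] - my) ** 2 for i in omit)
    return 1 - sse / sso

# Synthetic indicator pair with a strong underlying relationship
x = list(range(20))
y = [2 * xi + ((-1) ** xi) * 0.5 for xi in x]
print(q2_blindfold(x, y, 5) > 0)  # Q2 > 0: predictive relevance
```

Because every case is omitted exactly once across the d omission rounds, the statistic is a form of cross-validation, which is why it requires no distributional assumptions.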

The blindfolding estimates are shown in Table 4. As seen in the table, using omission distances of 10 and 25 produced identical results, indicating that the model estimates are stable. The communality Q2 was greater than 0 for all constructs. Looking at the redundancy Q2, both user satisfaction and learning outcomes have positive redundancy Q2 values. Overall, the estimated model has good communality Q2 for the model measures and good predictive relevance for the two outcome constructs of user satisfaction and learning outcomes.

Table 4.  Blindfolding results.

Note: NA = not applicable.

                                               Omission Distance = 10        Omission Distance = 25
Construct                               R2     Comm. Q2     Redun. Q2        Comm. Q2     Redun. Q2
Course structure                        NA     .7259        NA               .7259        NA
Instructor feedback                     NA     .7706        NA               .7706        NA
Self-motivation                         NA     .6163        NA               .6163        NA
Learning style                          NA     .6708        NA               .6708        NA
Interaction                             NA     .6235        NA               .6235        NA
Instructor knowledge and facilitation   NA     .7274        NA               .7274        NA
User satisfaction                       .692   .7551        .5227            .7551        .5227
Learning outcomes                       .628   .7832        .4920            .7832        .4920

Structural Path Coefficients

As can be seen from the results, all of the antecedent constructs hypothesized to affect user satisfaction are significant, suggesting that course structure, instructor feedback, self-motivation, personality/learning style, interaction, and instructor knowledge and facilitation affect the perceived satisfaction of students who take Web-based courses. Of the same six factors hypothesized to affect the learning outcomes construct, only two were supported at p < .05. These were instructor feedback and personality/learning style. The structural model results also reveal that user satisfaction is a significant predictor of learning outcomes.

DISCUSSION

This study examined the factors that affect the perceived learning outcomes and student satisfaction in asynchronous online learning courses. The research model was tested by using a PLS analysis on the survey data. The hypotheses in this study received partial support. We found that all six factors—course structure, self-motivation, learning styles, instructor knowledge and facilitation, interaction, and instructor feedback—significantly influenced students' satisfaction. This is in accordance with the findings and conclusions discussed in the literature on student satisfaction.

Of the six factors hypothesized to affect perceived learning outcomes, only two (learning styles and instructor feedback) were supported. Contrary to previous research (LaPointe & Gunawardena, 2004), we found no support for a positive relationship between interaction and perceived learning outcomes. One possible explanation for this finding is that the study did not account for the quality or purpose of the interactions. Although a student's perception of interaction with instructors and other students is important to his/her level of satisfaction with the overall online learning experience, when the purpose of online interaction is to create a sense of personalization and customization of learning and to help students overcome feelings of remoteness, it may have little effect on perceived learning outcomes. Furthermore, a well-designed online course delivery system is likely to reduce the need for interaction between instructors and students. The university under study has a very user-friendly online e-learning system and a strong technical support system, and every class Web site follows a similar design structure, which reduces the learning curve.

Another notable result is the statistically insignificant relationship between online course structure and perceived learning outcomes. One possible explanation is that, for students who visited the class Web site on a regular basis, what matters to their learning is not so much the usability of the course site as the quality of their engagement in other learning activities. For instance, meaningful feedback exchanged among students or received from a teacher may have a greater impact on perceived learning outcomes. As long as students receive meaningful feedback about the course content, inadequate Web content design becomes less critical.

Contrary to other research findings, no significant relationship was found between students' self-motivation and perceived learning outcomes. We are unable to explain this aberration. Theoretically, self-motivation can lead students to go beyond the scope and requirements of a course because they seek to learn about the subject, not just to fulfill a limited set of requirements. Self-motivation should also sustain learning even when there is little or no external reinforcement and even in the face of obstacles and setbacks. Additional work is needed to specify the conditions under which self-motivation has a positive, negative, or neutral effect on perceived learning outcomes.

LIMITATIONS AND DIRECTIONS FOR FUTURE RESEARCH

Several limitations of this study can be identified to help drive future research. First, factors examined in other studies (Peltier et al., 2003), such as course delivery technology, warrant investigation. Second, future research should further investigate the nonsignificant relationships between the remaining constructs (course structure, self-motivation, and interaction) and perceived learning outcomes. To resolve this question, future studies should use more sophisticated quantitative or qualitative measures of course structure, self-motivation, interaction, and students' engagement in learning activities. In our study, the learning outcome items asked students whether they perceived the quality of online learning to be better than that of face-to-face courses and whether they learned more in one mode than the other. Although students were generally satisfied with online courses, they did not believe that they learned more in online courses or that the quality of online courses was better than that of face-to-face classes. In future research, it would be interesting to identify the critical success factors for improving the quality of online learning using multilevel hierarchical modeling.

PRACTICAL IMPLICATIONS

Higher educational institutions have invested heavily in constantly updating their online instructional resources, computer labs, and library holdings. Unfortunately, most institutions have not studied the factors that influence online student satisfaction or learning outcomes. This study is one of the first to apply structural equation modeling to student satisfaction and perceived learning outcomes in asynchronous online education courses, and its findings have significant implications for distance educators, students, and administrators. In this study, we asked whether all six factors lead to higher levels of student agreement that the learning outcomes of online courses are equal to or better than those of face-to-face courses. The results indicated that online education is not a universal innovation applicable to all types of instructional situations. Our findings suggest online education can be a superior mode of instruction if it is targeted to learners with specific learning styles (visual and read/write learning styles) and delivered with timely, helpful instructor feedback of various types. Although cognitive and diagnostic feedback are important factors that improve perceived learning outcomes, metacognitive feedback can also induce students to become self-regulated learners.

More specifically, there is a clear relationship between instructor feedback and both student satisfaction and perceived outcomes. Feedback is a motivator for many students and should be incorporated into the design and teaching of online courses. Although students prefer feedback from the instructor, peer feedback can also be a valuable instructional tool. Because this high level of interaction is time consuming, faculty may want to consider efficient teaching and time management strategies. Online quizzes can provide preprogrammed feedback to learners, and instructors may want to develop reusable databases of feedback comments and frequently asked questions. This study may be useful as a pedagogical tool for instructors planning learning ventures or to justify technological expenditures at the administrative level. It is conceivable that, through this type of research, online learning will be enhanced as critical online learning factors become better understood.

Appendices

APPENDIX A: SURVEY QUESTIONS

Instructor

Inst1 = The instructor was very knowledgeable about the course.

Inst2 = The instructor was actively involved in facilitating this course.

Inst3 = The instructor stimulated students to intellectual effort beyond that required by face-to-face courses.

Course Structure

Struc1 = The overall usability of the course Web site was good.

Struc2 = The course objectives and procedures were clearly communicated.

Struc3 = The course material was organized into logical and understandable components.

Feedback

Feed1 = The instructor was responsive to student concerns.

Feed2 = The instructor provided timely feedback on assignments, exams, or projects.

Feed3 = The instructor provided helpful timely feedback on assignments, exams, or projects.

Feed4 = I felt as if the instructor cared about my individual learning in this course.

Self-Motivation

Moti1 = I am goal directed; if I set my sights on a result, I usually can achieve it.

Moti2 = I put forth the same effort in online courses as I would in a face-to-face course.

Learning Style

Styl1 = I prefer to express my ideas and thoughts in writing, as opposed to oral expression.

Styl2 = I understand directions better when I see a map than when I receive oral directions.

Interaction

Intr1 = I frequently interacted with the instructor in this online course.

Intr2 = I frequently interacted with other students in this online course.

OUTPUTS

User Satisfaction

Sati1 = The academic quality was on par with face-to-face courses I've taken.

Sati2 = I would recommend this course to other students.

Sati3 = I would take an online course at Southeast again in the future.

Learning Outcomes

Outc1 = I feel that I learned as much from this course as I might have from a face-to-face version of the course.

Outc2 = I feel that I learn more in online courses than in face-to-face courses.

Outc3 = The quality of the learning experience in online courses is better than in face-to-face courses.

APPENDIX B: STUDENT CHARACTERISTICS

                               Number   Proportion (%)
Age
  <20                              12             3.02
  20–24                           145            36.52
  25–34                           116            29.22
  35–44                            69            17.38
  45–54                            51            12.85
  >54                               4             1.01
  Total                           397           100.00
Gender
  Male                            111            27.96
  Female                          282            71.03
  Not answered                      4             1.01
  Total                           397           100.00
Year in school
  Freshman                         14             3.53
  Sophomore                        39             9.82
  Junior                           60            15.11
  Senior                          115            28.97
  Graduate                        167            42.07
  Not answered                      2             0.50
  Total                           397           100.00
Area of study by college
  Education                       139            35.01
  Business                         93            23.43
  Health and human services        62            15.62
  Science and math                 18             4.53
  Speech communications            39             9.82
  University studies               13             3.27
  Polytechnic studies              11             2.77
  Others                           22             5.54
  Total                           397           100.00

Sean B. Eom is Professor of Management Information Systems (MIS) at Southeast Missouri State University. He received a PhD in Management Science with supporting fields in MIS and Computer Science from the University of Nebraska–Lincoln in 1985. In recognition of his continuing research contributions, he was appointed a Copper Dome Faculty Fellow in Research at the Harrison College of Business of Southeast Missouri State University during the academic years 1994–1996 and 1998–2000. He is the author/editor of five books, including Decision Support Systems Research (1970–1999): A Cumulative Tradition and Reference Disciplines; Author Cocitation Analysis Using Custom Bibliographical Databases: An Introduction to the SAS Systems; Inter-Organizational Information Systems in the Internet Age; and Encyclopedia of Information Systems. He has published more than 50 refereed journal articles and 60 articles in encyclopedias, book chapters, and conference proceedings.

H. Joseph Wen is Chairperson and Associate Professor of Management Information Systems, Department of Accounting and Management Information Systems, Harrison College of Business, Southeast Missouri State University. He holds a PhD from Virginia Commonwealth University. He has published over 100 articles in academic refereed journals, book chapters, encyclopedias, and national conference proceedings. He has received over six million dollars in research grants from various state and federal funding sources. His areas of expertise are Internet research, electronic commerce, transportation information systems, and software development. He has also worked as a senior developer and project manager for various software development contracts since 1988.

Nicholas J. Ashill is Associate Professor in Marketing at Victoria University of Wellington. He has contributed to such journals as the European Journal of Marketing, Journal of Strategic Marketing, Journal of Services Marketing, Journal of Marketing Management, Qualitative Market Research: An International Journal, Journal of Asia-Pacific Business, and the International Journal of Bank Marketing.
