Simulated Computer-Mediated/Video-Interactive Distance Learning: A Test of Motivation, Interaction Satisfaction, Delivery, Learning & Perceived Effectiveness

Authors

  • Ruth Guzley, Associate Professor of Communication Arts and Sciences at California State University, Chico. She teaches and does research in the areas of organizational communication, leadership, and doctor-patient-HMO relationships.
  • Susan Avanzino, Assistant Professor of Communication Arts and Sciences at California State University, Chico. Her primary teaching and research areas include organizational communication research and theory, advanced communication skills, and organizational change & technology.
  • Aaron Bor, Professor of Communication Design at California State University, Chico. He teaches classes in media aesthetics, writing for the media, and advanced video editing.

Abstract

This paper reports on an innovative, computer-mediated, educational technology application in a simulated distance learning environment. As an initial evaluation, real student groups completed an entire university course using this state-of-the-art, two-way synchronous audio/visual communication technology, Distributed Tutored Video Instruction (DTVI). The study reported here explored student perceptions of a simulated distance learning environment using the system. The learning environment was characterized by videotaped lectures by the course instructor, delivered in computer-mediated small group settings. Six separate groups, each made up of six to eight students and a facilitator, were studied. Group members were in separate locations, interacting via synchronous audio and visual computer channels. Our findings indicate an overall high level of perceived effectiveness and satisfaction with the instructional mode. In addition, significant relationships were found between facilitator effectiveness and student satisfaction, between student motivation and class participation, and between student exam grades and perceived amount of group discussion. Findings indicate that innovations in computer-mediated instructional design can achieve the levels of participant interaction considered critical to effective distance education technology.

Introduction

There can be little doubt that distance education is currently one of the hottest topics in higher education (Abrahamson, 1998). It plays an increasingly important role both in education and in the transformation of education delivery systems. According to Beller and Or (1998, par. 13), “the successful implementation of technologies in leading universities has, among other things, increased the status of distance learning and is beginning to blur the distinction between on-campus and distance learners.”

The advantages of distance education demonstrate the enormous potential of this new instructional mode, regardless of subject content. Primary among these advantages is the flexibility it affords (Newlands & McLean, 1996). In addition, Dede (1990) argues that distance learning is paving the way for new educational goals and instructional methods that have the potential to tap a wider range of student skills than has been achieved in traditional classrooms. Others note the increasing demand for educational distance technology in specific disciplines, such as business and management (Gerard & Sleeth, 1996; Meisel & Marx, 1999) and counseling (Lundberg, 2000). Greenhalgh (2001) contends that on-line education is simply becoming “inevitable” in medical education and training.

Communication researchers should be particularly interested in distance learning via technology. Although Kuehn (1994) notes the communication discipline has been less interested than others in researching distance learning modes such as computerized instruction, there are examples of growing interest in this area (Althaus, 1997; Christophel, 1990; Freitas, Myers & Avtgis, 1998; Frymier & Shulman, 1995; Guerrero & Miller, 1998; Scott & Rockwell, 1997). One notable limitation of the existing research is that it tends to study just one element of the distance education experience, one moment or task of the entire process, leaving the larger encounter unexplored. For example, studies have compared face-to-face and computer-mediated communication in terms of one-time-only group decision making (Olaniran, Savage, & Sorenson, 1996), or the effects of a single variable, such as status, on decision quality (Hollingshead, 1996), as opposed to the effects of multiple variables. Guerrero and Miller (1998) related instructor competence and course content to nonverbal behaviors in videotaped lectures, but only in relation to first impressions (one time only). While these studies take important first steps in exploring the relationships between technology, education and communication, they represent a limited view of instructor/student interaction.

Associated with this limited orientation is the fact that little empirical research to date has addressed the effectiveness of technology or its use in distance learning (Neal, Ramsay, & Preece, 1997), despite the recognition that doing so is important (Zhang, 1998). In the minimal attempts thus far, one point of particular interest to communication researchers is clear: interaction between student and instructor and among students is a key factor deserving attention (Benbunan-Fich & Hiltz, 1999; Haythornthwaite, Kazmer & Robbins, 2000; Martinez & Sweger, 1996; Thach & Murphy, 1995).

Most researchers agree that interaction is necessary to the success of distance learning. For example, Bork (1995) argues that a “high level of interaction is critical for individualization of learning” (p. 232). Opinions vary, however, about the most appropriate media for providing a high level of interaction. Modern technology enabling two-way, synchronous, video/audio interaction in distance learning environments seems to hold the most promise for maximizing interaction between students and instructor and among students. Given the current expense of such equipment, however, it is not widely available in colleges and universities. Consequently, little research has been devoted to testing this mode of delivery for effectiveness in a distance learning context. The research that does exist, however, gives it high marks in bridging the gap between traditional classroom interaction and interaction in distance learning classrooms. According to Sipusic, Pannoni, Smith, Dutra, Gibbons, and Sutherland (1999), “video mediated communication can in fact support both the content and relational components of discourse that are necessary for effective collaborative learning” (p. 46).

Instructor/student interaction is but one of many aspects of instruction that have changed with the evolution of distance learning. As technology has increased in sophistication and become a more intricate part of the distance learning environment (Eddy, Burnett, Spaulding, & Murphy, 1997), it has called into question the need for change in both teaching techniques and the role of the instructor, a need articulated by Thach and Murphy (1995):

There is a growing realization that traditional teaching techniques will not work in distance education settings …. faculty and other professionals involved in teaching classes in a distance education environment need assistance in identifying the new roles they must assume to be successful. (pp. 57–58)

Whereas instructors in traditional classrooms have used verbal and nonverbal immediacy to engage students, instructors in video/audio distance learning classrooms engage students through camera angles and close-up shots to imitate on-scene interaction. Likewise, while instructors in traditional classrooms have held the primary responsibility for instruction, their counterparts in distance learning classrooms may maintain a less front-and-center role. They may instead teach students in videotaped lectures or video-facilitated conferences coordinated by other distance learning staff (e.g. technicians, tutors, facilitators), some of whom share the responsibility for student learning.

As educational settings grow more diverse and technology becomes more innovative, research must also strive to match these dynamic qualities. Exploring the overall process of classroom interaction, instruction, and technology use over an extended time period is warranted. Likewise, including perceptions and interpretations of end-users in relation to multiple influences on effectiveness, learning, and satisfaction also seems warranted. Both points are the objective of this research, which represents a step toward enlarging our understanding of the role of interaction in distance learning environments, as well as the evolving role of facilitators in mediated environments. Specifically, this study tests the effectiveness of a combination video-computer based system (synchronous closed system) in a simulated distance learning environment for a mid-size Western university. To establish the foundation for the study, brief overviews are offered of three pertinent topic areas: 1) distance learning media; 2) interaction in distance learning environments; and 3) student satisfaction and motivation with distance learning environments.

Distance Learning Media

Briefly, distance education is “a process that creates and provides access to learning when time and distance separate the source of information and the learners” (Zhang, 1998, p. 1). Similarly, distance learning has been defined as “the quasi permanent separation of the teacher and learner throughout the length of the learning process” (Hodgson, 1993, p. 12). Bork (1995) offers a comprehensive view of various media for learning associated with distance education. His discussion can be consolidated into three broad categories (print, audio and video) and used to situate the technology involved in this study and to highlight the role of interaction.

First, the print medium, or text, represents perhaps the earliest form of distance learning and is more commonly described as the correspondence course model of distance education (Albrektson, 1995). Originally, mail was the method of transmission for correspondence course text. The more contemporary version uses e-mail. Characteristic of this delivery method is a lack of interaction between student and instructor beyond written comments, and students are likely to work in isolation from other students (Albrektson, 1995).

Albrektson (1995) successfully modified the correspondence course model with on-line technology to increase student/instructor interaction as well as student/student interaction. The modification resulted in what he labels an “Online Mentored Seminar,” which incorporated interaction with and among students via a list-server. Though Albrektson reports significant success of this asynchronous medium in terms of both quality and quantity of student interaction and critical thinking, it seems likely the course topic (Church History Survey) was at least partially responsible for the reported vigorous debate and passionate discussion. Newlands and McLean (1996) argue that while computer conferencing has the advantage of being asynchronous (i.e., not required to take place in real time), it still remains limited in terms of the social quality of the interaction it provides.

Newer web-based technologies have evolved in ways that update this original “correspondence” model while retaining a simplistic focus on asynchronous interaction. LaRose, Gregg, and Eastin (1998) refer to current web-based courses (where an existing class is simply replicated on the Internet) as comparable to the “old stand-by, the telecourse” (par. 11). In their study, one additional element was added to the text-based web course: audiotaped classes capturing instructor lectures and classroom interaction. Students could review the material and listen to the recordings at their discretion. “Our approach was to create an audiographic telecourse which used audio captured in a live classroom to augment text-based lecture outlines and graphics published on the Web” (par. 12). The authors assert the potential superiority of web courses in the age of student-centered and self-directed learning preferences, noting the ability of web courses to be more flexible and adaptable to individuals than a traditional classroom setting. They also note the simplicity of the format. Their study demonstrated the audiographic approach to be as successful as the comparable traditional course. “A relatively modest audio-graphic approach to web courses based on the familiar telecourse model proved to be as educationally effective, immediate, and enjoyable to learners as live instruction” (par. 3). This approach also demonstrates a crossover between print and audio media.

Audio learning represents the second broad category of distance learning media (e.g., cassette tape, CD, and phone). Representative of this category would be a lecture that has been audiotaped and is provided to the student for study purposes. Bork (1995) notes that interaction is not generally characteristic of this medium; however, Newlands and McLean (1996) exemplify how the combined use of multiple audio media can in fact enhance student learning and interaction. In their study they complemented audiotaped lectures with telephone conferencing sessions conducted by tutors. Tutors were taught to compensate for the lack of eye contact and body language with “more precise … use of language” (p. 4) and increased verbal fluency. Student opinions of this learning experience were positive:

Tutorials were the single most valued aid to learning, probably reflecting the fact that the audio conferencing sessions are their only live teacher contact. Thus, the results of the survey suggest that distance students undergo a different type of educational experience from campus students, but there is no indication that they view it as an inferior experience (Newlands & McLean, 1996, p. 6).

Despite these findings, other research has acknowledged that two-way audio media coupled with video, enabling student and teacher/tutor to see and talk with one another, allow for more contact than those with merely audio interaction (Martinez & Sweger, 1996). Hedberg and McNamara (1989) argue that in distance learning environments, computer-mediated instruction “must be supplemented by other appropriate mechanisms, such as the use of tele-conferencing, video materials of some form, or direct contact with a human tutor” (p. 79). The need for interaction of some sort appears to be a desirable, if not critical component of distance education success.

Bork (1995) reports that “most formal distance learning institutions developed today depend more heavily on video [the third broad category of distance learning media] than on any other learning mode” (p. 233). For example, Sipusic et al. (1999) argue that the video component of distance learning technology is particularly important:

… video mediated communication can in fact support both the content and relational components of discourse that are necessary for effective collaborative learning … can generate high levels of user satisfaction … higher academic performance and more enjoyment than classroom lecture. Distance learning no longer need be considered a poor cousin to face-to-face instruction (p. 46).

The range of application of video in distance learning, however, is rather wide. For example, a videotaped lecture or broadcast video provides students with both audio and visual access to an instructor but no feedback mechanism. Albrektson (1995) discusses what he calls the “simulated lecture,” which is the equivalent of a satellite-supported conference. This medium can provide one-way video, two-way audio, or two-way audio and two-way video service, but it is typically characterized by limited interaction among participants. Teleconferencing, as another form of two-way video, offers the possibility of increased interaction between parties as long as the group is relatively small (Bork, 1995). The combination of video and computer systems provides the widest range of interaction options: computer-mediated communication systems (Benbunan-Fich & Hiltz, 1999; Meisel & Marx, 1999); computer-assisted learning (Greenhalgh, 2001); computer-aided instruction (Wang & Sleeman, 1993); and intelligent tutors (Coughlin, 1996; Greer & Bull, 2000) are just a few of the innovative fixtures within this learning category.

However, the greatest opportunity for interaction within this realm comes with the newest state-of-the-art two-way synchronous audio/visual technology, such as the Distributed Tutored Video Instruction (DTVI) system developed by Sun Microsystems Laboratories. The DTVI system is unique in that it allows complete audio and visual interactivity among participants who are connected electronically via computers, forming a small group learning environment. All participants can be simultaneously seen on a video monitor configured in a manner similar to a tic-tac-toe board (see Figure 1). Every group member's visual and auditory information can be controlled individually. Participants have complete flexibility in hearing and seeing any combination of the other group members. As a distance learning system, DTVI uses videotaped lectures and group facilitators for each group of students, allowing for the lecture to be delivered to remote group members while still supporting synchronous interaction. (For more detailed information on the features and assessment of this system, see Sipusic et al., 1999; The Kansas Project, 2001). The system is consistent with Newlands and McLean's (1996) discussion of synchronous audio and video conferencing as necessary components of the learning process and as links to understanding student motivation.

Figure 1. DTVI Computer Screen

Interaction in Distance Learning Environments

Given the physical separation of instructor and students in distance learning, at least some scholars have concluded that these classrooms do not have the “interpersonal potential of the conventional classroom” (Freitas, Myers, & Avtgis, 1998, par. 6). The reduced potential for interaction is seen as problematic. For example, Abrahamson (1998) notes, “students often have difficulty when they do not have direct and ongoing contact with their instructor” (p. 2). Newlands and McLean (1996) address these problems in the form of feelings of isolation and withdrawal from courses. Other research (Haynes & Dillon, 1992) has concluded that distance learning students are not as likely to interact with the instructor as on-campus students, although they had more interaction with other students at the distance site than on-campus students.

Abrahamson (1998) reports that without the luxury of a learning site where a number of distance students gather for class, distance learning reduces the social networks students form in school. They do not have the same opportunities as on-campus students to determine their level of proficiency through feedback from other students. Newlands and McLean (1996) also note “the limited degree of interpersonal communication between teachers and students cast [sic] doubt on the quality of learning achieved by distance students” (p. 1). Along these same lines, Bates (1991; as cited in Zhang, 1998) argues that maximizing social interaction is a critical component of distance learning technology. This may be particularly true when learning objectives are tied to the interaction (Zhang, 1998). Within the broad realm of distance learning formats, the level of student/instructor interaction may also be influenced by the specific format used (e.g., teleconferencing vs. computer) (Freitas, Myers, & Avtgis, 1998). Thach and Murphy (1995) identify “promoting interaction” as one of seven competencies critical to distance education and one of the top five competencies for instructors. Across the board, interaction is considered integral to learning.

Research addressing interaction within distance learning environments has tended to go in two directions. The first addresses interaction of the student with the technology (e.g., comfort level of students in using the technology, familiarity with technology used), which is addressed in the next section. The second, and more predominant, direction has examined the influence of student/instructor and student/student interaction on student satisfaction and learning. Studies following this direction have tended to conclude that conventional classrooms offer a greater frequency of student/instructor interaction when compared to distance learning environments (e.g., McHenry & Bozik, 1995). Althaus (1997) acknowledges that “little is yet known about what constitutes an optimal ratio of on-line relative to face-to-face interaction” (p. 172). In contrast, Bork (1995) has argued a high level of interaction is not just desirable but critical for learning to occur. Contrasting views such as these subsequently shape perceptions of the technology's effectiveness and how it is used (Seal & Cann, 2000; Veerman, Andriessen & Kanselaar, 2000).

Though interaction is mentioned (both directly and indirectly) in most distance learning articles as a necessity for effective student learning, little attention has been given to providing a strong conceptual framework for it. Thus, it remains a “fuzzy” notion at best, implicitly used as a synonym for a variety of things. On a broad conceptual level it is probably safe to say that interaction—in conventional classrooms or distance learning environments—is demonstrated as some form of personal contact/communication between students and their instructor, or alternatively, among students.

In the distance learning context, interaction has been addressed most frequently as instructor behaviors that promote instructor/student contact (e.g., immediacy). Freitas, Myers, and Avtgis (1998) provide a recent example of this approach. They compared student perceptions of instructor verbal and nonverbal immediacy behaviors in two types of classrooms: conventional and distributed learning (i.e., a synchronous interactive computer classroom). They found no significant differences between the groups with regard to perceived instructor verbal immediacy; however, the groups did significantly differ with regard to perceptions of instructor nonverbal immediacy behaviors (i.e., the extent to which gestures and eye contact were used, and the extent to which the instructor walked around the classroom). Freitas, Myers, and Avtgis explain the significant difference in perceptions of nonverbal immediacy in this way:

In the conventional classroom, students may place a greater emphasis on instructor use of gestures, eye contact, and movement because these behaviors may stimulate student interest in either the instructor or the subject matter. In the distributed learning classroom, instructor use of gestures, eye contact, and movement may not have the same effect because these behaviors may be perceived as “practiced” or “forced.” (Discussion, par. 4)

They conclude that students in distributed learning classrooms may have lower expectations of instructor nonverbal immediacy than students in conventional classrooms.

Interaction has also been addressed in terms of teaching strategies/activities that promote learning in groups or teams. For example, in their study of an art studio course taught in a distance learning mode, DeVries and Wheeler (1996) found that “the interaction between students and the professor and between the students and their peers commenting on art work was pivotal to the overall success of the course” (p. 3). Sometimes, however, the technology employed in group learning interferes with rather than promotes interaction, leaving students unsatisfied with both interaction and the learning process. For example, Benbunan-Fich and Hiltz (1999) explored student problem solving of case studies using asynchronous learning networks. When compared to groups who met face to face, the on-line groups produced better reports but were less satisfied with the problem-solving process because of difficulties with delayed feedback, coordination, and distribution of work.

Interaction, whether it is in the conventional classroom or a distance learning classroom, may automatically occur, but effective interaction—that which promotes learning—does not. Generally, it must be planned (e.g., through the method of monitoring the class). Albrektson (1995) selected a mentor model in his list-server discussions, choosing to provide the foundation for discussion through open-ended questions but to then step back and allow students to explore the topic among themselves. He monitored the interaction to ensure it met course objectives and stayed focused but did not interfere otherwise in the discussion. Student participation exceeded requirements (three times a week was required but most contributed around four to six times), and student enthusiasm also was greater than expected as students incorporated self-directed research and current event analogies into their responses. Dede (1990) also provides an example of planned interaction when suggesting that displaying photographs of students as they speak provides a more personalized way for students to connect speakers with their faces in distance learning environments where video components are not present.

Mazur (2000) provides an excellent recent example of how planning (as well as a strong theoretical framework) may influence the success of interaction in distance learning classrooms. She proposed that classic film theory and cinematic techniques could be used in a distance learning environment to “create a communicative environment in which dialogue and interaction are supported” (par. 4). She tested her proposition by analyzing videotaped distance education classes of a colleague who not only enjoyed teaching classes via this mode, but also received enthusiastic responses about the class from participating students. Her findings indicate that the camera became the instructor's mechanism for stimulating participation through such techniques as close-ups and medium close shots to create a sense of intimacy, and skilled use of eye contact directly into the camera to give the remote sites a stronger sense of her presence. In addition, the remote sites “rotate[d] responsibility for controlling the camera and on-line tools such as the video or computer” (par. 21) so that the visual space of the class included them, thus giving students at the remote sites a stronger sense of being part of the class rather than merely a satellite of it. The length of this paper prohibits an in-depth discussion of the study but readers are encouraged to review it for a glimpse of innovative ways to create the likeness of face-to-face interaction in distance learning environments through cinematic techniques.

Regardless of whether instructor behaviors or instructor strategies/activities are the focus of promoting interaction, most current distance learning research tends to put the onus of interaction squarely on the shoulders of the instructor. As Abrahamson (1998) notes, the “assumption is that for the telecourse to be meaningful and effective it needs some quality personal instructor contact with students” (p. 2). Creating that contact or interaction is a complicated task, as is demonstrated by the literature reviewed above. A small body of research, however, highlights how other distance learning personnel may work in concert with the instructor and with the technology employed to provide an acceptable and effective level of interaction to enhance student learning (Bork, 1995; Mazur, 2000). The use of student facilitators (similar to the use of graduate assistants to facilitate breakout sessions of large lecture classes) may also provide acceptable levels of interaction in distance learning environments.

Student Motivation and Satisfaction in Distance Learning Environments

The relationship between motivation and learning is well established in traditional educational research and applies equally well in distance learning environments. Bothun (1998) argues “the quality of learning depends on the student's level of motivation” (p. 5). In the traditional classroom, the responsibility for generating and maintaining motivation has generally been associated with the instructor. The process of generating and maintaining motivation in a distance learning environment, however, seems more complex than in traditional classrooms given the reduced direct interaction between students and instructors.

Not all distance learning literature credits the instructor's interaction with students as generating student motivation. Jones-Delcorde (1995) instead argues that a lack of instructor/student interaction can serve as a catalyst for student involvement in learning:

The absence of a live instructor can be viewed as encouraging the student to dig a little deeper into available resources and through this process emerge as a more independent and motivated learner, capable of self-instruction, a trait in which contemporary employers are very interested. (p. 28)

Given the potential reduction in student/instructor interaction in distance learning environments, technology becomes a motivational tool. As Abrahamson (1998) notes, “a primary function of the use of television, computers, and telecommunications in distance learning is to motivate students rather than just to provide information to them” (p. 2). Some research suggests that the medium employed regulates student motivation with regard to distance learning. If the media are challenging, students invest more effort. Alternatively, if they are less challenging, students slack off in their effort (Ksobiech and Salomon as cited in Clark and Salomon, 1986).

It is reasonable to assume that students' motivation is linked to their satisfaction with distance learning as a mode of instruction–that is, the degree to which they perceive it to be an effective and comfortable mode of instruction. For example, according to Hall (1995), students' perceptions influence the overall effectiveness of the learning, making their satisfaction with the learning environment and process critical. Minimal research has directly addressed student satisfaction with distance learning in any organized fashion, particularly that which uses interactive video and synchronous communication as the media of choice. Findings that do exist are contradictory. For example, Dunbar and Selby (1996) surveyed students who had recently completed a video-conferencing class and found their perceptions regarding the experience were quite negative. Students reported they were “less involved and felt less interest in their video-teleconferencing class compared to their traditional classes … they felt the video-teleconferencing class did not enhance the quality of their education” (p. 18).

In contrast, Hiltz (1993) found that when compared to traditional classroom environments, video-conferencing facilitates more course participation and improved access to professors. Similarly, Sipusic et al. (1999) found that students in partially mediated and fully mediated courses performed at statistically significantly higher levels than students who were in the same course but in a traditional lecture mode. Further, students in both mediated modes found their experience satisfactory and “neither group felt the technology was a significant barrier” (p. 18). These results are supported by those of Swan (1995), who found that high school students engaged in a two-way interactive network (all locations could see, hear, and talk to one another) liked the system, felt they learned as much in this form of distance learning as in the traditional classroom, and would take another interactive video network course if it were available. Finally, Martin and Bramble (1996) found adult learners in a two-way interactive video system to be satisfied with their training, and they achieved higher test scores compared to traditional training, even though the lowest-rated aspect of the system was the interaction it facilitated among students.

In summary, distance learning, though a topic of much conversation in education, is in need of continued and expanded attention, specifically in terms of evaluation. However, evaluation should strive to represent the total experience of a given distance learning system, rather than individual aspects in isolation or one step in the larger process. Limited empirical research exists examining student-instructor interaction, in spite of the fact that the quality of this interaction has been identified as critical to the success of distance learning. Of particular interest in this study is the fact that there is only limited knowledge of the role other distance learning personnel (e.g., facilitators) may play in maintaining acceptable levels of interaction. Notably missing from the distance learning research is evaluation of the technical effectiveness of the most current forms of interactive video instruction, media that offer a wide range of possibilities for student-instructor interaction. Finally, though student motivation is frequently discussed in distance learning literature as linked to quality of interaction, scant empirical research exists establishing this relationship. The current study is designed to address these issues by answering the following research questions:

  1. To what degree do students perceive video-interactive distance learning to be technically effective?
  2. How is student/facilitator interaction in a video-interactive distance learning environment related to student satisfaction with learning?
  3. How is student/facilitator interaction in a video-interactive distance learning environment related to student motivation and student grades?

Method

Design

Delivery Technology Description. The delivery system used for the study was a state-of-the-art, two-way synchronous audio/visual technology, Distributed Tutored Video Instruction (DTVI), developed by Sun Microsystems Laboratories. The equipment was acquired through a grant from Sun Microsystems Laboratories to the university; the third author was the project director. Sun provided software and technical expertise in addition to the equipment. The project team was responsible for the experimentation, data collection, and overall administration of the project. The final project report focusing on the technology was written by a team of researchers from Sun (The Kansas Project, 2001). Data for the current study were collected after the project was completed but before the project report was written.

The synchronous closed system is composed of eight computer/video stations with a video camera on each station. Each station is physically separate from the others, although for this study they were located in the same building. Thus, this design is a simulated distance-learning environment. Seven of the stations were reserved for students enrolled in the class; the eighth was used by a student facilitator, who was also responsible for the technical tasks involved in running the system for the group. All eight participants viewed the course on their monitors, with each group member placed in an equal-sized square on the screen. The screen space looked similar to a tic-tac-toe board (see Figure 1). The ninth square was reserved for the video playback of course content. Alternatively, students could change the screen configuration so that any of the nine squares completely filled the screen. The system encouraged complete audio and video interactivity, with any combination being active at any time. Each member also had the ability to send messages to the facilitator. The information was presented live and in real time, with realistic audio and smooth motion video.

Course Content. The course selected for this study was an entry-level media aesthetics class. All of the course content was recorded on videotape and presented through the system during each class session. Each group had complete control of stopping, pausing, and playing the tape at their discretion. The tapes consisted of a combination of lecture material and associated movie clip examples. The instructor's lectures were designed to be “viewer friendly,” with a casual style. The instructor was composed as a medium shot speaking directly into the camera (to the student).

Procedures. Each of the participants enrolled in the course without knowing the delivery system to be used. After a brief orientation, the students were given the option of dropping the course if they did not want to participate in the delivery system used. The study involved six groups of six to eight members. Each group had a student facilitator, and the remainder of the group members were students enrolled in the course. The groups met twice weekly for 90 to 120 minutes, depending on the complexity of the course material and the level of group interaction; each group determined the length of its sessions. Throughout the semester, the students had the option of contacting the instructor by electronic mail (FirstClass) or during office hours. Students' overall grades in the course were determined by performance on three equally weighted multiple-choice exams.

Facilitators. We modeled the mentor approach of Albrektson (1995) in designing the level of monitoring for the six groups of students described above. Four facilitators were selected who varied in: 1) their level of experience in facilitating group interaction; 2) their experience with the delivery system used; and 3) their knowledge of the course content involved.

One of the four facilitators was quite familiar with the delivery system used and had previously facilitated this class via the DTVI delivery system. She worked with three student groups throughout the semester. The three remaining facilitators received approximately two hours of training with one of the authors prior to the first class. The training covered the role of the facilitator in keeping students on track in terms of class objectives, and in encouraging students to discuss the course content conveyed via the video presentation without unduly interfering with that discussion. These facilitators also read the weekly assigned class readings to build familiarity with the topic. Each of them facilitated one student group throughout the semester.

Participants. Participants were 38 undergraduate students: 2 freshmen, 14 sophomores, 10 juniors, and 3 seniors. There were 20 females and 18 males. Majors represented within their college were Media Arts (28), Communication Design (5), Instructional Technology (1), Graphic Arts (1), and Other (3). Computer use varied from 58% who used their computers every day, to 24% who used their computers several times a week, 10% who used them once or twice a week, and 8% who used them every other week. More specifically, computer use consisted of word processing (95%), e-mail (95%), internet/www (87%), games (40%), database searches (29%), electronic chats (24%), CD-ROMs (18%), and other non-specified uses (13%). Regarding e-mail use in particular, 37% reported using e-mail once a day, 26% once or twice a week, 13% more than once a day but no more than 5 times a day, 11% once every couple of weeks, and 5% other (8% did not respond to the question).

Instrumentation

Data to answer the research questions were gathered by a survey questionnaire distributed to students at the end of the semester, during the last week of classes. Five variables were represented in the survey: 1) Perceived Level of Technical Effectiveness of Video-Interactive Distance Learning; 2) Student/Facilitator Interaction; 3) Class Satisfaction; 4) Student Motivation; and 5) Classroom Behavior. A 5-point Likert-type response format was used for all questions (1=strongly disagree, 5=strongly agree, 3=undecided) unless otherwise noted.

Perceived Level of Technical Effectiveness of Video-Interactive Distance Learning. The survey items measuring perceived technical effectiveness of video-interactive distance learning media were developed by the research team and fell into three categories: student comfort with technology use in relation to this class (5 items); perceived quality of instructional videos (1 item); and perceived quality of course content presentation via video (1 item). Reliability (Cronbach's alpha) was computed for the 5-item comfort scale. One item proved unreliable and was removed (“I am comfortable with participating in electronic chats”); the reliability of the remaining four items was .83 (see Table 1 for these items). Reliabilities for this variable and all others appear in Table 2.
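The paper does not report the software used to compute scale reliabilities. As an illustration only, Cronbach's alpha for a k-item scale can be computed from an item-score matrix as follows; the function name and the toy Likert responses are ours, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of summed scale)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: five respondents answering a 4-item Likert scale (1-5);
# these values are illustrative, not the study's responses.
scores = [[4, 5, 4, 5],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 5]]
alpha = cronbach_alpha(scores)
```

The formula compares the sum of the individual item variances with the variance of the summed scale score; the more consistently the items covary, the closer alpha approaches 1.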

Table 1.  Scale Items

*This scale used the following response options: 5=always, 4=frequently, 3=occasionally, 2=rarely, 1=never. (R)=reverse-scored item.

Student Technology Comfort
  It was easy for me to operate the equipment used in this class.
  I am comfortable with the idea of taking some classes via technology.
  I was comfortable with having the instructor present the course material on video as opposed to his being present in class to present the material.
  I was comfortable with interaction for this class (student/facilitator, student/student) taking place on a computer video screen as opposed to face to face.

Facilitator Effectiveness
  The facilitator was good at managing the group's discussions.
  I sometimes wished the facilitator would stop encouraging discussion and just let me listen to the videotaped lecture. (R)
  The facilitator seemed interested in student comments/questions about the course.
  I felt comfortable interacting with the facilitator.
  The facilitator was prepared for every class.
  The facilitator was helpful to the group in learning the course content.
  Our group discussions were frequently unrelated to the course content. (R)
  I wish the facilitator had been more knowledgeable about the topic taught in this course. (R)

Student Classroom Behavior*
  Typically, in this class, did you:
  Ask questions during class
  Ask questions during office hours
  Thoroughly prepare for exams
  Ask the instructor for help with problems in the class
  Ask the facilitator for help with problems in the class
  Ask other students in your class for help with problems in the class
  Get class notes from other students in your class
  Study with other students from the class
  Take notes during class
Table 2.  Variable Means, Range, Reliability

*This item was answered “yes,” “no,” or “don't know”; ninety percent responded yes.

Variable                                  Items   Mean    Range    Reliability
Instruction Effectiveness
  Student Technology Comfort              4       16.9    4–20     .83
  Quality of Instructional Videos         1       4.2     1–5
  Quality of Instructional Presentation   1       4.0     1–5
  Benefit in Future Courses*
Interaction
  Facilitator Effectiveness               8       33      8–40     .78
  Frequency of Discussion                 1       3.8     1–5
Motivation
  Student Classroom Behavior              9       27.5    9–45     .83
  Student Motivation                      11      61.2    11–77    .86
Class Satisfaction                        1       4.45    1–5
Final Grades                                      79.65   65–92

Student/Facilitator Interaction. Student/facilitator interaction was measured by exploring two variables: perceived facilitator effectiveness, and frequency of course discussion about course content. Facilitator effectiveness was measured by 8 items constructed by the research team to address various aspects of facilitator skill and interaction behavior. For example, one item read “The facilitator was good at managing the group's discussions.” See Table 1 for all eight items in the interaction scale. Cronbach's alpha for the 8-item index was .78.

Frequency of discussion about course content was measured with one item. The response scale ranged from “1” (rare discussions about course content) to “5” (discussions in every class about course content).

Class Satisfaction. Class satisfaction was measured by one item that read “How would you describe your overall experience in this class?” Responses ranged from 1=very unsatisfying to 5=very satisfying (3=undecided).

Student Motivation & Classroom Behavior. Student motivation was measured in two ways. First, a measurement was taken of student perceptions of their classroom behaviors. Student classroom behaviors were measured by 9 items that asked students to identify the frequency of their classroom behaviors (e.g., asking questions during class, asking facilitator for help with problems in the class, studying with other students from the class) on a scale of “1” (Never) to “5” (Always). See Table 1 for all nine scale items. Reliability (Cronbach's alpha) for the 9-item classroom behavior index was .83.

The second measurement of motivation was made using an 11-item semantic differential scale (Christophel, 1990), including paired opposites (e.g., interested/uninterested, challenged/unchallenged, useful/useless, and motivated/unmotivated). The response format was a 7-point scale, with 1 being associated with the negative pole and 7 being associated with the positive pole. Cronbach's alpha for the 11-item index was .86.

Student Grades. Final grades for each of the six classes represent student grades in this study. The final grades were the average of the three equally weighted objective exam scores.

Results

With regard to RQ1 (To what degree do students perceive video-interactive distance learning to be technically effective?), the mean score for the 4-item student comfort with technology use index was 16.9 (range 4–20), indicating respondents had a high level of comfort with the technology use associated with this class. Responses to the first question about quality of video instruction (“The instructional videos were professionally prepared”) yielded a mean score of 4.2, indicating that students held a favorable view of the instructional video material. Responses to the second question related to quality of instruction (“The instructor presented the course content in a way that made it interesting”) also indicated a favorable view of the instructor's presentation via video (mean=4.0). Pearson correlation coefficients were calculated to determine the relationship between the three aspects of technical effectiveness measured and student satisfaction with the class (see Table 3). Comfort had a strong and significant correlation with student satisfaction (r=.48, p<.01), and the quality of the instructional video had a moderate and significant correlation with student satisfaction (r=.38, p<.05). However, there was no significant correlation between student satisfaction and the quality of the instructional presentation.
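The coefficients reported here and in Table 3 are Pearson product-moment correlations. As a minimal sketch of the computation (the paired scores below are hypothetical, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation: the covariance of x and y
    normalized by the product of their standard deviations."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

# Hypothetical paired scores: a comfort index (range 4-20) against a
# single 1-5 satisfaction item; illustrative only.
comfort = [16, 18, 12, 20, 15, 17]
satisfaction = [4, 5, 3, 5, 4, 4]
r = pearson_r(comfort, satisfaction)

# Cross-check against NumPy's built-in correlation matrix.
assert abs(r - np.corrcoef(comfort, satisfaction)[0, 1]) < 1e-12
```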

Table 3.  Pearson Correlation Coefficients

*Correlation is significant at the 0.05 level (2-tailed).
**Correlation is significant at the 0.01 level (2-tailed).

                                          1.      2.      3.      4.      5.      6.      7.      8.     9.
1. Satisfaction                          1.00
2. Student Technology Comfort            .481*   1.00
3. Quality of Instructional Video        .380*   .412*   1.00
4. Quality of Instructional Presentation .235    .581**  .591**  1.00
5. Facilitator Effectiveness             .370*   .519*   .459**  .381*   1.00
6. Frequency of Group Discussion         .157    .305    .113    .132    .078    1.00
7. Student Classroom Behavior            .407*   .277    .151    .031    .240    .521**  1.00
8. Student Motivation                    .296    .216    .391*   .262    .326    .278    .405*   1.00
9. Grades                                .094    −.149   .023    .123    −.088   −.650** −.168   −.069  1.00

Analyses of responses with regard to RQ2 (How is student/facilitator interaction in a video-interactive distance learning environment related to student satisfaction with learning?) were conducted using Pearson correlation coefficients (see Table 3) and ANOVA (see Table 4) to determine variance among the six sections of the class. There was a moderate and significant correlation between perceived facilitator effectiveness and student satisfaction with the class (r=.37, p<.05). The mean score for perceived facilitator effectiveness (mean=33, range 8–40) indicates facilitators were perceived as highly effective in managing group interaction. In addition, the mean score for satisfaction with the class (mean=4.45) indicates a high level of satisfaction with the class. Results of ANOVA indicate there were no significant differences among the six facilitated groups with regard to perceptions of facilitator effectiveness (F[5,29]=.613, ns) or student satisfaction with the class (F[5,32]=.076, ns).

Table 4.  Analysis of Variance Between Groups

Variable                                 df      F          Significance
Facilitator Effectiveness                5       .613       .691
Group Discussion of Course Content       5       6.927      .000
Satisfaction with Class                  5       .076       .995
Student Classroom Behavior               5       1.235      .316
Student Motivation                       5       .204       .958
Exam Average                             5       1.55e+31   .000
Student Technology Comfort               5       .420       .831
Quality of Instructional Video           5       .892       .499
Quality of Instructional Presentation    5       .903       .492
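The between-group tests in Table 4 are one-way ANOVAs with five degrees of freedom between groups (six class sections). A minimal sketch of the F statistic on toy data (three hypothetical groups rather than the study's six; the function name and values are ours):

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy example: satisfaction ratings from three hypothetical groups;
# illustrative only, not the study's data.
f_stat = one_way_anova_f([[4, 5, 4, 4], [5, 5, 4, 5], [3, 4, 4, 3]])
```

A large F indicates that the variation between group means is large relative to the variation within groups, which is what drives the significant result for group discussion frequency in Table 4.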

The mean score for frequency of course discussion about course content was 3.76 (range 1–5), indicating that in most classes students had discussions about course content. Results of ANOVA, however, indicate there were significant differences among the groups with regard to frequency of discussion about course content (F[5,32]=6.927, p<.001). There was no significant correlation between frequency of course discussion about course content and class satisfaction.

To answer RQ3 (How is student/facilitator interaction in a video-interactive distance learning environment related to student motivation and student grades?) mean scores were examined and Pearson correlation coefficients were computed (see Table 3). The mean score for the student classroom behaviors index was 27.49 (range 9–45) indicating that students occasionally engaged in all nine classroom behaviors identified. Results of ANOVA indicate there were no significant differences among the six groups with regard to classroom behaviors.

The mean score for motivation (mean=61.2, range 11–77) indicates students had a fairly high level of motivation related to this class. Results of ANOVA indicate there were no significant differences among the six groups with regard to motivation level. Neither student classroom behaviors nor motivation were significantly correlated with perceived facilitator effectiveness. Classroom behaviors were significantly correlated with frequency of discussion about course content (r=.52, p=.001); however, motivation was not significantly related to frequency of discussion about course content. Student class participation behaviors were significantly related to student motivation (r=.41, p<.05).

The mean for student final grade in the course was 79.65 (range 65–92). Grades were not significantly correlated with perceived facilitator effectiveness; however, there was a significant negative correlation (r=−.65, p<.001) between grades and frequency of group discussion about course content. ANOVA results indicate a significant difference among the six groups with respect to final grades.

Discussion

There is every reason to believe that the integration of technology into education will continue to increase with technological advances. For those involved in computer-mediated environments, whether traditional classroom environments or distance learning environments, knowledge of student expectations and satisfaction with such technologies is important. Perhaps even more important is how this technology influences the learning environment and, subsequently, learning outcomes.

Our results reinforce the idea that the implementation of distance education technology into selected courses can be an effective mode of delivery. The results show that when utilized properly–with consideration given to subject matter, student needs and abilities, and instructor course goals–aspects of distance education can benefit the unique needs and limitations of many of our students. Beyond showing such simple effectiveness, this study provides a more in-depth and complex view than earlier research by focusing on the entire learning experience and the dynamic interacting qualities involved.

The results of this study in many ways support earlier findings related to distance learning environments. The participants in this study reported a favorable view of the quality of instructional videos and the instructor's presentation of the course content via video. These findings lend support to Dede's (1990) contention that “successful distance instruction depends on more than classroom management strategies …; creating an intellectually and emotionally attractive ‘telepresence’… is also vital” (p. 5). Participants also indicated a high comfort level with the technology used; comfort seems to have a stronger relationship to their satisfaction than either video quality or the instructor's presentation. Finally, the average grades indicate that this form of distance education can provide a satisfactory alternative to classroom instruction, inasmuch as typical grade distributions were achieved (Sipusic et al., 1999; Martin & Bramble, 1996).

As mentioned earlier, student/instructor interaction frequently has been noted as a critical component in the success of distance learning environments (Dede, 1990). Such interaction is challenging to create, however, requiring planning on the part of the instructor and sometimes the use of on-site facilitators and tutors (Albrektson, 1995; Thach & Murphy, 1995). The interactive capability of the DTVI system used in this study–coupled with the use of a facilitator–appears to have provided this group of students with an acceptable amount of interaction. Participants' favorable evaluation of facilitators' effectiveness indicates that a variety of forms of class interaction can be both acceptable to students and effective. In other words, interaction does not need to come solely from the instructor, nor is there one level that satisfies all student groups.

The significant difference among the groups with regard to the frequency of group discussion, and the negative correlation between students' final grades and frequency of group discussion about course content, however, add an interesting twist. In the classes where group discussion was most frequent, students' average final grades were lowest. One explanation lies in the self-report measure of frequency of group discussion, which may or may not accurately represent actual discussion frequency. For example, Sipusic et al. (1999) and Olaniran, Savage and Sorenson (1996) found that participant self-report measures under-reported actual interaction. Martin and Bramble (1996) also found discrepancies between different on-site facilitators and student interaction outcomes.

An alternative explanation, however, seems more plausible. Perhaps some of these facilitators were more skilled than others at recognizing how much group discussion about course content was necessary to ensure learning/understanding of course concepts. For example, one group had the lowest mean score for frequency of group discussion (2.17), indicating that in approximately one fourth of the class meetings there was discussion about course content. Students in this class had the highest final grades (mean=85.2). Alternatively, two groups had the highest mean scores for frequency of group discussion (4.86, 4.57) indicating in almost every class meeting there was discussion about course content. Students in these classes, however, had the lowest final grades (mean=73.9, 75.7). Across the three classes using the same facilitator, reported frequency of group discussion varied more across groups (4.33, 3.50, 2.83) than did final grades (mean=81.2, 81.1, 82.4). It would seem that more discussion about course content is not necessarily better when it comes to grades in this type of course.

When these findings are considered in light of the varied content knowledge, level of delivery system experience, and facilitation experience of the four group facilitators, they provide an interesting and unexplored direction for distance learning research. For example, the group mentioned above that had the lowest mean score for frequency of group interaction but the highest final grades was led by a facilitator who had no previous knowledge of the course content but considerable experience in facilitating group interaction. While our study lacked the design mechanism to differentiate the effects of various facilitator characteristics on group interaction and student outcomes, this insight nonetheless challenges conventional wisdom about the need for facilitators to be content experts. It also challenges conventional wisdom that high levels of group discussion in CMC learning environments lead to improved student performance (e.g. higher grades). Despite the noted differences in facilitator experience and content knowledge, there were no significant differences among the six student groups in their perceptions of facilitator effectiveness and all facilitators were perceived as effective.

As mentioned earlier, while student motivation has been discussed frequently in the distance learning literature, there is scant empirical study of it in that context. This study employed two measures of student motivation: one addressing students' specific classroom participation behaviors, the other addressing their motivation toward the class in general. Neither classroom behaviors nor general motivation was significantly related to perceptions of facilitator effectiveness (one measure of student/facilitator interaction). This finding may reflect the unique qualities of the facilitator role, which included such elements as being an on-site technician, group discussion leader, and teaching assistant. Possibly students view the facilitator more as an extension of the course than as someone who has a direct impact on their behavior choices, which are seen as more individual and within their control. Whereas they may perceive an instructor as having a direct effect on their motivation, the facilitator is simply seen as working along with them, covering the course content. Abrahamson (1998) noted the multi-faceted role of the on-site instructor but argued this role must be viewed as an extension of the instructor to be an effective part of the distance learning process. Perhaps these findings indicate the potential supportive features of on-site facilitators, as well as the limits of that role when left under-developed.

To give proper context to these findings and discussion, there are three limitations in this study that need to be addressed. First, our measure of class satisfaction–a one-item measure–is an obvious limitation of the study. One-item measures simply cannot address the complex nature of any variable. We believe, however, that the mean score for the one item addressing student satisfaction (4.5 out of a possible range of 1–5) does provide strong evidence that this method of class instruction was a good experience for the students involved. In addition, the mean scores (on a 7-point response scale) for a few particularly relevant items from the motivation index provide a broader-based framework from which to draw this conclusion. For example, students reported being involved in the class (mean=5.4), being stimulated by the class (mean=5.5), and looking forward to the class (mean=5.6). Nonetheless, future studies addressing student satisfaction with technology-mediated classrooms should employ instruments capable of capturing the richness and multi-dimensional nature of student satisfaction.

Second, the features of the technology delivery method limit the generalizability of the findings. Specialized delivery systems and instruction methods used in studies should not be generalized to other systems lacking similar features (Haynes & Dillon, 1992). In this case, specialized features included eight-way synchronous interaction, individual control over screen viewing options, group control over content delivery speed, e-mail and electronic support features, group facilitators, and the ability to meet the instructor personally during office hours.

The third limitation of this research concerns the participants themselves. Although having students complete an entire university course for credit simulated a more realistic distance-learning environment, these particular students may have had a higher degree of comfort with technology due to their majors in media studies. Further, six groups of six to eight students provide a limited sample. While it is true that the DTVI technology used in this study accommodates only small groups, more extensive research designs used in future research should strive for representative student populations so that findings are more generalizable.

Thach and Murphy (1995) argue that with advances in technology comes the need to recognize that instructors' roles must change. One of those changes may involve turning over responsibility for classroom interaction to other helping roles such as facilitators. Our findings pertaining to the unique position of the facilitator in mediated environments need to be pursued to determine with more specificity the essential features of this role. Results depicted the role of the facilitator as a support mechanism that can contribute effectively to sustaining interaction based on student preferences. However, there was only a modest relationship between facilitator effectiveness and satisfaction. Depending upon the extent to which facilitators are involved with the technological features of the distance learning environment, are they viewed as an extension of the system or as the live instruction? Possibly the modest relationship was due to the one-item measure of satisfaction. Alternatively, as mentioned earlier, the facilitator may have been viewed more as an extension of the technology than as the instructor–a more supportive role with less direct effects. Of particular importance in future research exploring the role of the facilitator (as well as other distance learning personnel) is the impact that role has on outcomes such as student grades.

Future research should also seek to clarify the relationship between comfort with technology, quality of mediated content presentation, and satisfaction. In this study, technology comfort had a stronger relationship with satisfaction than did quality of instruction. If an individual's level of comfort with technology is a potential mediating variable, then designers must focus attention on this area when designing distance learning applications. The findings reported here indicate that emphasis should be placed on making applications user-friendly, or on building comfort levels, rather than on excessive investment in high-quality presentation media.

In summary, based on the findings from this study, expanded research on distance education technology should address the following areas: technological student competencies and their relation to distance education effectiveness (Althaus, 1997; Olaniran, Savage & Sorenson, 1996; Scott & Rockwell, 1997); the role of the facilitator/tutor in directing and/or assisting in the learning process; the changing role of the instructor; and the integration of other support media technology such as chat rooms, electronic mail, synchronous features, and interactive video (Hedberg & McNamara, 1989; Sipusic et al., 1999; Thach & Murphy, 1995).

Finally, we must continue to question the role of technology in instruction in relation to the larger communication and meaning interpretation process. Klimczak and Wedman (1997) clearly demonstrated that there were a variety of often-conflicting interpretations regarding successful aspects of instructional development, depending on the stakeholder's perception. With regard to the educational technology end-users, who should be the focus of outcomes and evaluation, this point is clearly illustrated by two contrasting comments from students in response to an open-ended question asking for general comments about the class:

I want the Real World Book (not the TV Show). Computers are fun, but I'm not that bewitched by them that I want to spend a lot of time and the university money on separating us more than we already are. It seems to me this idea ultimately will discourage diversity. (Respondent #13)

I really think I liked the use of DTVI [because] technology is here to stay and it gives you a better taste, I believe, of the real world and how it operates. (Respondent #27)

These two drastically different beliefs about the nature of the “real world” may be a simple reminder that education is a receiver phenomenon. Thus, our research exploring integration of technology into instructional design – and specifically interaction issues – must always consider the perceptions of those who are the beneficiaries of such integration.

Acknowledgments:

Segments of this paper were previously presented at the 1999 Broadcasters Education Association Annual Conference in Las Vegas, NV, and the 1999 National Communication Association Annual Convention in Chicago, IL.
