How students read an e‐textbook in an engineering course
Abstract
Time on task has been recognized as an important variable in academic learning, but self‐report measures of study time are problematic. Therefore, this study employs an automated system for recording time spent reading a course textbook. College students in an introductory engineering course accessed their textbook online. The book contained pages of instructional text, worked examples, homework problems, and answers to homework problems. An instrumented document reader program called “STL Reader” recorded the time each student spent on each page, thus providing detailed measures of reading habits. Across the 10‐week course, students spent an average of 1.9 hr reading instructional text, 1.4 hr on worked examples, 22.1 hr on homework problems, and 0.9 hr on homework answers, indicating a preference for practicing to solve test problems (i.e., self‐testing) rather than being told (i.e., receiving direct instruction). Furthermore, course grade (based largely on solving problems on exams and quizzes) correlated significantly and positively with time viewing homework problems, but not with time viewing either instructional text or worked examples, indicating that achievement was related to time spent practicing for solving test problems but not to time spent being instructed. Results suggest a revision of the time‐on‐task hypothesis to include the value of spending time on tasks aligned to test requirements.
Lay Description
What is currently known?:
- Time‐on‐task theory states that students' time engaged in relevant material is an important factor in learning and achievement.
- How students choose to process presented information is important for academic learning.
- Undergraduate STEM students often read very little of the assigned course textbook.
What this paper adds:
- Technology‐enhanced data collection provides a more accurate measure of students' engagement with an e‐textbook.
- Time spent viewing homework problems is significantly and positively related to achievement in an undergraduate engineering course.
- Student grades were not positively correlated with time spent viewing instructional text or worked examples from the textbook.
Implications:
- Suggests revision of time‐on‐task hypothesis to include the value of spending time on tasks aligned to test requirements.
1 OBJECTIVE
Suppose a college instructor assigned an online textbook in a science, technology, engineering, and mathematics (STEM) course, but no one read it (or barely read the instructional text in it). Although instructors in STEM college courses frequently require the use of textbooks, there is growing evidence that students make little use of them (e.g., Berry, Cook, Hill, & Stevens, 2010; Junco & Clem, 2015; Seaton, Kortemeyer, Bergner, Rayyan, & Pritchard, 2014; Sikorski et al., 2002; Smith & Jacobs, 2003). Additionally, as publishers continue to migrate textbooks from print to electronic form, it is worthwhile to investigate how students read e‐textbooks in STEM courses. The present work investigates these questions.
As summarized in Table 1, in this study, students in a 10‐week introductory engineering course on “Statics” were assigned a digital textbook that contained five types of pages: instructional text, worked examples, homework problems, homework answers, and other. The instructional text and worked examples comprise what can be called learning by being told, whereas the homework problems and homework answers comprise what can be called learning by practicing. The primary goals of this study are to determine how much time students spend on each of these four types of pages and the extent to which time spent on each correlates with course grade.
| Textbook content types | Description |
|---|---|
| Instructional text | Explanatory material, including both text and graphics, presenting the principles of statics. |
| Worked examples | Worked examples illustrating how the principles of statics are applied to solve problems. |
| Homework problems | End‐of‐chapter problems that students solve for homework assignments. |
| Homework answers | Final numerical answers to the end‐of‐chapter homework problems. |
| Other | Other types of content including table of contents, index, and appendices. |
2 THEORY AND PREDICTIONS
2.1 Time on task
This work is motivated by the hypothesis that time on task provides a measure of a student's engagement during learning and that this measure is among the most important factors affecting learning and achievement (van Gog, 2013). The time‐on‐task hypothesis has deep roots in the science of learning, dating back to Ebbinghaus's (1885) classic studies showing a connection between time spent studying a word list and learning outcomes. In educational contexts, student engagement during learning—reflected in time on task—is often measured as the amount of time that students spend on learning (Carroll, 1963; van Gog, 2013). Through the years, the theory has evolved to incorporate the idea that engaged time on task, as opposed to simply time allocated to students for learning, is a more important factor in predicting positive learning outcomes (Karweit, 1984).
Recently, Rawson, Stahovich, and Mayer (2016) presented a basic model of academic learning that considered the fact that “engagement (as indicated by the amount of time that students allocate to a task) is a mechanism affecting learning outcomes (as indicated by achievement)” (p. 2). The researchers used digital smartpens to objectively measure time spent on writing homework assignments and found a significant positive correlation between this measure and final course grade. Our present research also considers the importance of time engaged in learning activities and seeks to collect objective measures of students' engagement with the assigned course e‐textbook.
2.2 Self‐testing as a learning strategy
Research has shown that academic learning depends not only on what is presented—such as a textbook—but on how the learner chooses to process the presented information, which can be called the learning strategy (Fiorella & Mayer, 2015). When students read a book—such as an e‐textbook in this study—they can control the reading process by choosing which pages to view and how long to view them. In regard to the types of pages summarized in Table 1, a student using a learning strategy based on learning by being told would focus heavily on the instructional text pages and the worked example pages, whereas a student using a learning strategy based on learning by practicing would focus heavily on the homework problem and homework answer pages.
Solving problems (i.e., learning by practicing) represents a form of self‐testing in which a learner engages in the kinds of activities that are required on the test, although learning by practicing is not always indicative of self‐testing. There is a considerable body of evidence that self‐testing can be an effective learning strategy (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013; Fiorella & Mayer, 2015; Karpicke & Aue, 2015; Roediger & Karpicke, 2006), although only a fraction of the research base involves educationally relevant material (e.g., Johnson & Mayer, 2009). A common form of self‐testing involves studying a list of words and then trying to recall it. A testing effect occurs when students perform better on a subsequent recall test when they spend learning time trying to recall the words on a list rather than restudying them. The major learning mechanism underlying the testing effect is that learners strengthen their skill in retrieving the targeted material, which is a cognitive process that is also required on a subsequent recall test. In short, the act of taking a test—even a practice test—can be an aid to learning. This study extends research on self‐testing to the domain of learning from an online textbook in an academic setting, although solving practice problems may involve more than self‐testing.
More specifically, asking students to solve problems is consistent with the long‐standing research base on the positive role of practice with feedback in achieving expertise within a domain (Dunlosky et al., 2013; Ericsson, 2016; Hattie, 2009). In particular, certain aspects of practice have been shown to be particularly effective, such as spacing practice activity over time rather than concentrating it all at one time (i.e., spaced practice effect), interleaving different types of problems rather than blocking problems by type (i.e., interleaved practice effect), and practicing on problems at an increasing level of challenge with appropriate feedback (i.e., deliberate practice effect). This study examines how students use online textbooks in the context of an undergraduate engineering course and explores how students manage the way they practice solving problems.
On the basis of the time‐on‐task hypothesis and the self‐testing hypothesis, we focus on two key questions in this study: (a) Which kinds of pages in the course e‐textbook do students view during learning? If students see their goal as being exposed to the material—that is, learning by being told—they should direct their efforts towards the instructional text pages and the worked example pages. If students see their goal as being able to perform well on the homework problems—that is, learning by practicing—they should direct their efforts towards the homework problem pages and homework answer pages. (b) How are the learner's reading strategies (i.e., how much they engage in each of the four types of pages) related to academic achievement in the course? According to an updated version of the time‐on‐task hypothesis and the self‐testing hypothesis, the degree to which students focus on homework problem and homework answer pages should correlate positively with academic performance, such as course grade.
3 RELATED RESEARCH ON TEXTBOOK READING
3.1 Pedagogical considerations
Previous research has found that STEM students identify themselves as "readers" at surprisingly low rates. For example, just over 20% of students in an introductory physics course reported that they used the textbook for more than just referencing problems and equations (Cummings, French, & Cooney, 2002). Similarly, Podolefsky and Finkelstein (2006) conducted a survey of 1,000 participants across multiple introductory physics course offerings and found that only a little more than one third of the students reported reading more than 80% of the assigned readings. The same study also found that students' self‐reports of reading effort were not predictive of course performance or learning outcomes.
Previous research has speculated about possible factors behind students' reported low usage of the textbook. For example, students have limited time to study many different resources and need to optimize study activities (Berry et al., 2010). If textbook reading is not directly tied to course grade, students may choose to study other resources they deem more beneficial to their grade, such as lecture materials (Podolefsky & Finkelstein, 2006). Other studies have found that the quality of the textbook and its content may affect students' willingness to use the resource. In one study, students' use of an e‐textbook in introductory physics courses varied greatly depending on whether or not the resource contained embedded assessments (Seaton et al., 2014). In another study, researchers surveyed students in introductory psychology courses and identified significant positive relationships between the percentage of the textbook read and both the helpfulness of content (e.g., tables, figures, and examples) and the quality of the writing (Landrum, Gurung, & Spann, 2012). When considering differences in course content, course structure, and student demographics, it is difficult to identify just how much time students should be spending with the textbook, as many factors can play a part in students' decisions to study. Further, in many STEM disciplines, a heavy focus is placed on applying concepts to solve problems. Often, the concepts are simple to explain and become clearer once applied to solve increasingly complex problems, and thus, it is possible that only a small amount of reading time is required to sufficiently learn the necessary material.
3.2 Textbook format considerations
The recent increase in use of electronic textbooks and open educational resources (OERs) has created a need to study the efficacy of these materials in relation to student use and learning. Recent research has indicated that even with a growing comfort with electronic media, students still prefer traditional textbooks to their electronic counterparts (Woody, Daniel, & Baker, 2010). Recent evidence suggests that readers interact differently with electronic and printed textbooks, but that media format does not affect student learning (Daniel & Willingham, 2012; Daniel & Woody, 2013; Rockinson‐Szapkiw, Courduff, Carter, & Bennett, 2013). Daniel and Woody (2013) conducted a controlled study of introductory psychology students' use of electronic and print textbooks in both in‐home and in‐lab conditions and found that e‐text users took twice as much time to read the content in both conditions. The authors investigated performance on a quiz based on the content and, for both conditions, found no significant differences based on media type. Similarly, OERs have grown in popularity as a more cost‐effective electronic alternative to traditional textbooks. Little research has been conducted to appropriately assess the effects this choice of resource may have on student study behaviours and performance (Griggs & Jackson, 2017; Gurung, 2017). In one carefully designed study, Gurung (2017) found that introductory psychology students spent significantly less time studying from OERs than from printed text, a finding in contrast to studies examining electronic versus printed textbooks. Additionally, the author reported that OER users scored lower on quiz questions drawn from the standardized advanced placement psychology exam.
The results from existing research clearly indicate that students use electronic resources differently than they use traditional printed text and that it is imperative to study this use to better understand the possible effects on student learning outcomes. The current research investigates students' use of an e‐textbook in an introductory mechanical engineering setting and aims to identify the relationships with students' course performance.
3.3 Methodological considerations
A potential methodological problem in much of the current research investigating students' textbook reading strategies is the reliance on surveys and self‐reported data (e.g., Berry et al., 2010; Cummings et al., 2002; Landrum et al., 2012; Podolefsky & Finkelstein, 2006; Schuman, Walsh, Olson, & Etheridge, 1985; Sikorski et al., 2002; Smith & Jacobs, 2003). This sort of data can be highly subjective and relies on retrospective reporting. For instance, Schuman et al. (1985) found little correlation between students' self‐reported study time and their grade. The authors speculated, “Students may not know how much they study, and there may also be some bias in willingness to report honestly” (p. 961). Additionally, Smith and Jacobs (2003) found a negative correlation between anticipated grade and self‐reported time spent using the textbook each week. In this case the authors concluded, “It is possible that a higher fraction of the weaker students consciously inflated their hours, or included inefficient time in their estimate” (p. 101). In our present research, we aim to overcome these methodological limitations by using computer‐based technology to accurately and objectively measure students' use of the textbook.
In recent years, the emergence of online content delivery systems (e.g., CourseSmart Analytics [https://www.vitalsource.com], OpenDSA [Shaffer, Karavirta, Korhonen, & Naps, 2011], LON‐CAPA [Kashy et al., 1993], and zyBooks [https://zybooks.zyante.com]) has provided one approach for measuring student reading effort. Studies using online content delivery systems to measure reading effort have found that students read at very low rates (Junco & Clem, 2015; Seaton et al., 2014). For example, one study examined the reading logs of thousands of students across multiple offerings of an introductory physics course (Seaton et al., 2014). In this study, reading time was assumed to be the time between subsequent page access events, but as a webpage can be displayed indefinitely, the researchers employed an upper bound on page viewing time. Assuming that meaningful page views had durations of between 10 s and 30 min, they found an average reading time of approximately 10 to 20 hr per student during the 15‐week term. Although weblogs provide a more objective measure of reading than surveys do, in this study, we developed an instrumented document viewing program that provides an even more precise method of measuring reading time.
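The weblog heuristic described above can be sketched in a few lines. The following is one plausible reading of the Seaton et al. (2014) procedure, not their actual code: the gap between consecutive page-access events is counted as reading time only when it falls between 10 s and 30 min. The timestamps are illustrative.

```python
# Sketch of the weblog heuristic described above: reading time is taken as
# the gap between consecutive page accesses, counted only when it falls
# between 10 s and 30 min. Timestamps below are illustrative, not study data.

MIN_GAP_S = 10          # shorter gaps look like navigation, not reading
MAX_GAP_S = 30 * 60     # a webpage can sit open indefinitely, so cap the gap

def estimated_reading_seconds(access_times):
    """Total estimated reading time from sorted page-access timestamps (s)."""
    total = 0
    for prev, curr in zip(access_times, access_times[1:]):
        gap = curr - prev
        if MIN_GAP_S <= gap <= MAX_GAP_S:
            total += gap
    return total

# Gaps are 5 s (too short), 60 s (counted), and 4,935 s (too long).
reading_time = estimated_reading_seconds([0, 5, 65, 5000])  # -> 60
```

An alternative reading of the heuristic would clip over-long gaps to the 30-min cap rather than discard them; the published description does not settle which was used.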
4 METHOD
4.1 Participants and course setting
The participants were 143 undergraduate students enrolled in a course on statics at the University of California, Riverside during the winter quarter of 2015. Every student who enrolled in the course consented to participate in this study. Statics is an introductory mechanical engineering course focused on the equilibrium of bodies subjected to forces. Mechanical engineering undergraduate students who follow the recommended course plan take this course during the winter quarter of the sophomore year. Table 2 presents demographic information for the participants. The vast majority of students were men (87%) and engineering or computer science majors (85%). Sixteen of the participants did not complete a survey soliciting their year in school, but of the remaining students, all but 9% reported being undergraduates within their first 4 years. We followed guidelines for ethical treatment of human subjects and obtained Institutional Review Board approval for the study.
| Variables | N | % |
|---|---|---|
| Gender | ||
| Male | 125 | 87.4 |
| Female | 18 | 12.6 |
| Major | ||
| Bioengineering | 3 | 2.1 |
| Chemical engineering | 2 | 1.4 |
| Computer engineering | 14 | 9.8 |
| Computer science | 12 | 8.4 |
| Electrical engineering | 5 | 3.5 |
| Environmental engineering | 9 | 6.3 |
| Materials science and engineering | 12 | 8.4 |
| Mechanical engineering | 65 | 45.5 |
| Other | 21 | 14.7 |
| Year | ||
| Freshman | 8 | 5.9 |
| Sophomore | 72 | 49.3 |
| Junior | 20 | 14.0 |
| Senior | 15 | 10.3 |
| 5th year and beyond | 12 | 8.8 |
| Unknown | 16 | 11.8 |
4.2 Materials and apparatus
The materials comprised an online textbook and a document viewing program. The measures comprised reading measures and test measures.
4.2.1 Course e‐textbook
The course e‐textbook was Engineering Mechanics: Statics, 8th edition (Meriam, Kraige, & Bolton, 2015). We provided all students with a Windows RT tablet computer containing a copy of the e‐textbook at no cost. As the tablets used a version of Microsoft Windows, most students already had familiarity with the software environment. We did not prevent students from using their own printed version of the textbook, or any other instructional materials, but informed them that all necessary course materials would be provided free of charge with the tablet. The chapters in the book are organized into sections containing instructional text related to a particular topic. As the term suggests, instructional text, which includes both text and graphics, presents the principles of statics. Each section included worked examples, which are problems with annotated solutions illustrating how the principles of statics are used in problem solving. Each section also contained homework problems, which are statics problems that students solve on their own paper. The final numerical answers to the homework problems are listed in the homework answers section at the end of the book.
The course covered material from the following six chapters of the textbook: Introduction to Statics, Force Systems, Equilibrium, Structures, Distributed Forces, and Friction. These chapters span a total of 398 pages of content, of which 133 contain instructional text, 56 contain worked examples, 192 contain homework problems, and 17 contain homework answers. Students were instructed to read 92 of the 133 instructional text pages and 39 of the 56 worked example pages. Additionally, only 42 of the 192 homework problem pages contained problems that were assigned to students during the term. Although homework problems were assigned from the homework problem pages in the textbook, the assignments often contained modifications to the numerical values in the questions so that the answers did not directly match those found in the homework answer pages. For example, the assignment might specify a change to a length, angle, or magnitude of a force. Eight of the 17 homework answer pages contained answers to assigned homework problems.
4.2.2 Instrumented document viewer
To track students' reading habits, we created an instrumented document viewing program called "STL Reader." The program was built using MuPDF (mupdf.com), an open-source PDF rendering library. As illustrated in Figure 1, the viewer was designed for use on a Windows RT tablet and is typically operated in full‐screen mode. STL Reader provides functions for keyword search, bookmarking pages, and writing notes using either a stylus or a finger on the touch screen.

The user interface of the program is designed to enable accurate measurement of reading time. In particular, only a single page can be displayed at any given time so that there is no ambiguity about what page a student is viewing. Additionally, to ensure accurate measurement of engaged reading time, the program dims the display after 10 min of inactivity during which the student provides no input via the touch screen or keyboard. When the screen is dimmed, the student can “wake” the program by touching the screen. We provided students with the e‐textbook and other course documents as encrypted PDFs. The decryption password was built into STL Reader so that it could automatically display the documents. As students did not otherwise have access to the password, they could not view the course documents with other PDF viewers.
STL Reader creates a log file of time‐stamped page viewing events. An event occurs each time there is a change to the displayed document. For example, opening or closing a document, panning or zooming the display, and navigating to a new page are all recorded in the log. The program periodically uploads the log file to a secure file server.
Each student was provided with a Windows RT tablet on the second day of the quarter. The students were then required to register for an account and install STL Reader from the Windows Store. STL Reader then automatically retrieved an encrypted copy of the course e‐textbook from our secure file server.
STL Reader automatically uploads event logs to the server when the tablet is online. However, it is possible for a student to use the tablet without an internet connection. To ensure that all viewing data was collected, we manually extracted the log files from the tablets when students returned their equipment at the end of the quarter.
4.2.3 Reading measures
As described in Table 3, we consider 14 measures of reading. These measures are based on the notion of a page visit, which we define as a time interval of at least 15 s during which a particular page is continuously visible on the tablet. We treat intervals of less than 15 s as page navigation rather than reading. Because of the complexity of the highly technical content in the textbook, reading and understanding a typical page is likely to take up to several minutes. We selected the 15 s threshold to eliminate episodes that comprise something other than careful reading for understanding, although no precise threshold cleanly differentiates reading from navigating. A student may pan or zoom the view of a page without ending the current page visit. Navigating to a new page or exiting the program, however, ends a page visit. Ten minutes of inactivity typically ends the current page visit as well. (As previously noted, the screen dims after 10 min of inactivity.) However, if the student wakes the program within 30 s of the screen dimming, the page visit continues. In computing the duration of a page visit, we subtracted any time the student spent performing keyword search, as the search tool obscured the displayed document. We found that, in some cases, STL Reader did not properly log an exit event when the program was closed. In these cases, we took the page visit duration for the final page to be the student's average page visit duration. We calculated a student's total reading time as the sum of the page visit durations. Thus, the total time excluded page navigation (views of less than 15 s) and keyword search.
| Textbook viewing measures | Description |
|---|---|
| Total viewing time | Total time spent viewing the textbook |
| Instructional text viewing time | Time spent viewing pages containing instructional text |
| Worked example viewing time | Time spent viewing pages containing worked examples |
| Homework problem viewing time | Time spent viewing pages containing homework problems |
| Homework answer viewing time | Time spent viewing pages containing homework answers |
| Total page visits | Total number of visits to pages in the textbook |
| Instructional text page visits | Number of visits to pages containing instructional text |
| Worked example page visits | Number of visits to pages containing worked examples |
| Homework problem page visits | Number of visits to pages containing homework problems |
| Homework answer page visits | Number of visits to pages containing homework answers |
| Relative viewing time for instructional text | The percentage of time spent on pages containing instructional text |
| Relative viewing time for worked examples | The percentage of time spent on pages containing worked examples |
| Relative viewing time for homework problems | The percentage of time spent on pages containing homework problems |
| Relative viewing time for homework answers | The percentage of time spent on pages containing homework answers |
Our measures of reading include both the total number of page visits and the total viewing time for the entire e‐textbook, as well as for each of the four types of pages (i.e., instructional text, worked examples, homework problems, and homework answers). We also characterize viewing time with a relative measure describing the fraction of the total time spent viewing each page type. For example, if a student viewed the textbook for a total of 100 min, and 20 min of that time was spent viewing worked examples, then the relative viewing time for worked examples would be 20%. In computing the various reading measures, we excluded viewing of the table of contents, index, and appendices. On average, students spent only 6.6 min during the quarter on such materials.
To illustrate our reading measures, consider a simplified example in which a student had the following reading activity: page 1 for 3 s, page 2 for 50 s, page 10 for 70 s, page 22 for 12 s, and page 10 for 80 s. In this case, there would be only three page visits: one to page 2 and two to page 10. As the time spent on each of pages 1 and 22 is less than 15 s, these episodes count as navigation and contribute neither to the number of page visits nor to the viewing time. Imagine that page 2 is instructional text, whereas page 10 contains homework problems. The total time for instructional text would then be 50 s, whereas the total time for homework problems would be 150 s. Likewise, there would be no reading time for worked examples or homework answers. Finally, the relative viewing time would be 25% for instructional text, 75% for homework problems, 0% for worked examples, and 0% for homework answers.
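The bookkeeping in this example can be sketched as follows. The episode list and page types for pages 2 and 10 come from the text; the function name and the types assigned to pages 1 and 22 are our own (those episodes are discarded by the threshold in any case).

```python
# Sketch of the page-visit accounting described above. Viewing episodes
# shorter than 15 s count as navigation and are discarded; the rest are
# page visits that accumulate per content type.

VISIT_THRESHOLD_S = 15

def summarize_visits(episodes, page_type):
    """episodes: (page, seconds) pairs in viewing order.
    Returns per-type visit counts, viewing seconds, and relative time (%)."""
    visits, seconds = {}, {}
    for page, dur in episodes:
        if dur < VISIT_THRESHOLD_S:
            continue  # navigation, not reading
        t = page_type[page]
        visits[t] = visits.get(t, 0) + 1
        seconds[t] = seconds.get(t, 0) + dur
    total = sum(seconds.values())
    relative = {t: 100.0 * s / total for t, s in seconds.items()}
    return visits, seconds, relative

# The simplified example from the text; the types of pages 1 and 22 are
# assumed here, since both episodes fall below the 15 s threshold anyway.
episodes = [(1, 3), (2, 50), (10, 70), (22, 12), (10, 80)]
page_type = {1: "text", 2: "text", 10: "problems", 22: "answers"}
visits, seconds, relative = summarize_visits(episodes, page_type)
# seconds -> {"text": 50, "problems": 150}; relative -> text 25%, problems 75%
```

Note that this sketch omits the study's further refinements (subtracting keyword-search time, the 10-min inactivity rule, and the average-duration fallback for missing exit events).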
4.2.4 Test measures
We considered several test measures including homework and quiz score (based on 10 homework assignments and five quizzes), exam score (based on two midterm and one final exam), course grade (excluding class participation), Force Concept Inventory (FCI) score (Hestenes, Wells, & Swackhamer, 1992), and Statics Concept Inventory (SCI) score (Steif & Dantzler, 2005). We also administered a post‐study survey. The FCI measures students' understanding of Newtonian concepts of forces as taught in an introductory physics course, whereas the SCI measures understanding of the concepts taught in a statics course. All assessments (i.e., homework assignments, quizzes, and exams) contained statics problems requiring students to construct free‐form solutions using diagrams and equations and solve these equations to come to a final numerical solution; see Figure 2 for an example of a typical problem. The final exam also included two multiple choice questions and a professional ethics question.

Quiz and exam problems were graded using a rubric that examined the correctness of the major elements of the solution such as free body diagrams, geometric calculations, and equilibrium equations. The credit for the problem was divided over these elements according to their complexity, and points were deducted for errors. One problem on each homework assignment was also graded using this rubric scheme, whereas the remaining problems were graded on the basis of completion and correctness of the final answers. The problems assigned for homework were typically contained in the homework problem pages at the end of chapters in the textbook. Students solved these problems on paper with a digital pen and submitted the work electronically. The homework answer pages of the textbook contained final numerical answers for the problems, which enabled students to determine if they had solved the problems correctly. As noted above, however, the homework assignments typically specified changes to the numerical parameters of the problems so that the answers in the book did not match the problems as assigned.
The course grade was based on the following weighting: 5% for class participation, 10% for the homework score, 10% for the quiz score, 20% for the first midterm exam score, 20% for the second midterm exam score, and 35% for the final exam score. However, in our present analysis, we exclude class participation from the final course grade as this does not directly represent competence with the subject matter. For our short‐term measure of achievement, we combine homework and quiz grades with equal weights. Similarly, when considering the combined exam grade, we weight each midterm by 0.2 and the final exam by 0.35.
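The weighting above can be sketched as a simple weighted average. Renormalizing the remaining weights over 0.95 after dropping the participation component is our assumption; the text states only that participation is excluded.

```python
# Sketch of the course-grade weighting described above, with the 5% class
# participation component excluded. Renormalizing over the remaining 0.95
# of weight is an assumption; the text says only that it is excluded.

WEIGHTS = {
    "homework": 0.10,
    "quiz": 0.10,
    "midterm1": 0.20,
    "midterm2": 0.20,
    "final": 0.35,
}  # sums to 0.95 once the 0.05 participation weight is dropped

def course_grade(scores):
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(w * scores[k] for k, w in WEIGHTS.items()) / sum(WEIGHTS.values())

# Illustrative scores, not data from the study.
grade = course_grade({"homework": 90, "quiz": 85,
                      "midterm1": 80, "midterm2": 75, "final": 88})
```

The same pattern with `{"homework": 0.5, "quiz": 0.5}` gives the equally weighted short-term measure, and with the two 0.20 midterm weights plus the 0.35 final weight gives the combined exam grade.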
The post‐study survey included three questions related to reading activity. These questions, which have a 5‐point Likert scale, are as follows:
How important was the textbook to your learning of statics?
- Very unimportant
- Unimportant
- No opinion
- Important
- Very important
How convenient was the tablet (STL Reader) for reading course documents?
- Inconvenient
- Somewhat inconvenient
- No opinion
- Somewhat convenient
- Convenient
Do you prefer an electronic textbook or a paper textbook?
- Strongly prefer paper
- Prefer paper
- No opinion
- Prefer electronic
- Strongly prefer electronic
4.3 Procedure
At the start of the class, students completed the FCI (Hestenes et al., 1992) to assess their prior knowledge of mechanics concepts. Each week, students attended 3 hr of lecture focusing on the core concepts of statics and 1 hr of discussion focusing on problem‐solving skills. Throughout the course, students were assigned weekly reading assignments and homework problem sets from the course e‐textbook. The reading assignments comprised instructional text and worked example pages from sections relevant to the week's lectures and homework. The homework assignments contained problems from the homework problem pages. In five of the weeks of the 10‐week quarter, the students completed quizzes with problems similar to the most recently submitted homework assignment. At the end of the course, students completed the SCI (Steif & Dantzler, 2005) and the post‐study survey.
5 RESULTS AND DISCUSSION
5.1 Question 1: What do students read?
According to the self‐testing hypothesis, the most effective reading strategy is to practice solving problems by focusing on the homework pages. We quantified the reading behavior with three metrics—viewing time, number of page visits, and relative viewing time. The second column of Table 4 shows the mean viewing time (and standard deviation) for each of the four page types over the 10‐week quarter. Overall, students spent 26.3 hr (SD = 16.5 hr) on average reading the textbook (or 2.6 hr per week), including all types of content (i.e., instructional text, worked examples, homework problems, and homework answers). Due to violations of normality, we assessed the data using a non‐parametric Friedman test, with type of content as a within‐subjects factor and viewing time as the dependent variable, and found a significant difference among the four types of content, χ2(3) = 274.66, p < .001. Students spent the overwhelming majority of their reading time viewing homework problem pages (M = 22.1, SD = 13.4) and very little time reading instructional text (M = 1.9, SD = 3.1) and worked examples (M = 1.4, SD = 1.7).
| Content type | Mean viewing time, hr (SD) | Mean number of page visits (SD) | Mean relative viewing time, % (SD) |
|---|---|---|---|
| All content (excluding "Other") | 26.3 (16.5) | 307.7 (191.3) | n/a |
| Instructional text | 1.9 (3.1) | 48.8 (48.0) | 6.8 (7.6) |
| Worked examples | 1.4 (1.7) | 27.2 (25.7) | 5.1 (4.7) |
| Homework problems | 22.1 (13.4) | 204.9 (122.0) | 84.8 (11.1) |
| Homework answers | 0.9 (1.4) | 23.1 (31.1) | 2.9 (3.5) |
| Other | 0.1 (0.24) | 3.8 (5.3) | 0.4 (0.8) |
- Note. n/a = not applicable.
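The within‐subjects comparisons reported above can be illustrated with SciPy's implementation of the Friedman test. The viewing times below are simulated to loosely mimic the reported means; the sample size and the exponential distributions are assumptions made purely for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_students = 20  # illustrative sample size, not the study's N

# Simulated per-student viewing times (hours) for the four content types;
# the means loosely mimic Table 4 but the data are invented.
text     = rng.exponential(1.9, n_students)
examples = rng.exponential(1.4, n_students)
problems = rng.exponential(22.1, n_students)
answers  = rng.exponential(0.9, n_students)

# Friedman test: a non-parametric repeated-measures comparison across the
# four within-subject conditions (content types).
stat, p = friedmanchisquare(text, examples, problems, answers)
print(f"chi2(3) = {stat:.2f}, p = {p:.4g}")
```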
The third column of Table 4 shows the mean number of page visits (and standard deviation) for each of the four page types over the 10‐week quarter. Due to violations of normality, we again assessed the data using a non‐parametric Friedman test, with type of content as a within‐subjects factor and number of page visits as the dependent variable, and found a significant difference among the four types of content, χ2(3) = 292.76, p < .001. Similar to the pattern for viewing time, there were far more visits to homework problem pages (M = 204.9, SD = 122.0) than for instructional text (M = 48.8, SD = 48.0) and worked examples (M = 27.2, SD = 25.7).
The fourth column of Table 4 shows the relative viewing time (and standard deviation) for each of the four page types over the 10‐week quarter. Due to violations of normality, we again assessed the data using a non‐parametric Friedman test, with type of content as a within‐subjects factor and relative viewing time as the dependent variable, and found a significant difference among the four types of content, χ2(3) = 274.66, p < .001. Students spent the majority of their time in the textbook viewing pages containing homework problems (M = 84.8, SD = 11.1) and a significantly smaller percentage of their time on instructional text (M = 6.8, SD = 7.6) and worked example (M = 5.1, SD = 4.7) pages.
We computed Pearson product moment correlations to investigate the relationships among the 14 reading measures listed in Table 3. Nearly all measures of viewing time and page visits were significantly and positively correlated with one another. Two exceptions to this pattern were found: viewing time and page visits for homework answer pages were positively, but not significantly, related to viewing time for instructional text. Table 5 presents the correlations among the four measures of relative viewing time. Relative viewing time for homework problems is significantly and negatively related to relative viewing times for instructional text and worked examples. Additionally, relative viewing times for instructional text and worked examples are significantly and positively related to one another. These findings illuminate a dichotomy in students' relative use of the textbook and further highlight the effect of students' choice of study strategies when viewing the textbook. Students who make greater relative use of the instructional text pages also tend to make greater use of worked example pages but not the homework problem pages. This pattern of behavior is consistent with a strategy of learning by being told.
| Reading measures | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1. Instructional text relative viewing time | — | |||
| 2. Worked example relative viewing time | .377** | — | ||
| 3. Homework problem relative viewing time | −.865** | −.682** | — | |
| 4. Homework answer relative viewing time | .020 | .006 | −.330** | — |
- * p < .05.
- ** p < .01.
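The negative coupling in Table 5 has a simple structural interpretation: relative viewing times are shares of a fixed budget, so spending a larger share on one content type necessarily shrinks the others. The small simulation below illustrates this with invented data; the share values and noise level are assumptions and do not reproduce the study's figures.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 100  # illustrative cohort size

# Simulated relative viewing times (%): each student's instructional-text
# share trades off against the homework-problem share, so the two are
# negatively coupled by construction.
text_share = rng.uniform(0, 20, n)
problem_share = 90 - text_share + rng.normal(0, 2, n)

# Pearson product moment correlation between the two shares.
r, p = pearsonr(text_share, problem_share)
print(f"r = {r:.3f}, p = {p:.3g}")
```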
5.2 Question 2: How does student achievement correlate with what students read?
According to the time‐on‐task hypothesis, student reading effort (particularly on pages involving homework problems) should correlate positively with course grades. Table 6 shows the correlation between each of the 14 viewing measures and course grade, exam grade, homework and quiz grade, and the SCI score, while controlling for the three measures of prior knowledge (high school grade point average [GPA], SAT score, and FCI score). Although all three types of measures (i.e., viewing time, page visits, and relative viewing time) show similar patterns, we focus on relative viewing time (in the bottom four rows) as an inclusive measure of selective viewing. Focusing on homework problems (measured by relative viewing time for homework problem pages) is significantly and positively correlated not only with homework and quiz grade but also with exam and course grades, whereas focusing on instructional text and worked examples (measured by their respective relative viewing times) is significantly and negatively correlated with all three grades. One should be careful not to interpret these findings as meaning that worked example and instructional text content are harmful for learning the material. In fact, a considerable amount of research speaks to the effectiveness of worked examples under the right conditions (e.g., Paas, Renkl, & Sweller, 2003; Renkl, 2002; Ward & Sweller, 1990). The pattern of results is consistent with the self‐testing hypothesis, which holds that practicing the activity required for the test is an effective study strategy. Similarly, this pattern extends the time‐on‐task hypothesis by highlighting the value of spending time on activities most closely related to the test activities.
| Textbook viewing measures | Homework and quiz grade | Exam grade | Final course grade | Statics Concept Inventory score |
|---|---|---|---|---|
| Total viewing time | .320** | .097 | .158 | −.011 |
| Instructional text viewing time | −.009 | −.186 | −.160 | .057 |
| Worked example viewing time | −.003 | −.193† | −.165 | −.035 |
| Homework problem viewing time | .370** | .168 | .229* | −.012 |
| Homework answer viewing time | .226** | .057 | .101 | −.045 |
| Total page visits | .171 | −.067 | −.017 | −.100 |
| Instructional text page visits | −.045 | −.197† | −.178 | .047 |
| Worked example page visits | −.078 | −.247* | −.228* | −.045 |
| Homework problem page visits | .268* | .026 | .085 | −.159 |
| Homework answer page visits | .209† | .065 | .104 | −.058 |
| Relative viewing time for instructional text pages | −.292* | −.287* | −.312** | .037 |
| Relative viewing time for worked example pages | −.309** | −.358** | −.376** | −.083 |
| Relative viewing time for homework problem pages | .287* | .344** | .359** | .044 |
| Relative viewing time for homework answer pages | .086 | −.007 | .014 | −.046 |
- Note. GPA = grade point average.
- * p < .05.
- ** p < .01.
- † .05 < p < .10.
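The partial correlations in Table 6 control for the three prior‐knowledge measures. A standard way to compute such a partial correlation is to regress both variables on the covariates and correlate the residuals; the sketch below assumes this residual‐based formulation, and all data are simulated for illustration only.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing out the covariates
    (i.e., the correlation of the two sets of OLS residuals)."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
n = 80  # illustrative cohort size

# Simulated stand-ins for the three prior-knowledge controls.
gpa = rng.normal(3.3, 0.4, n)   # high school GPA
sat = rng.normal(1200, 150, n)  # SAT score
fci = rng.normal(15, 5, n)      # FCI score

# Simulated reading effort and course grade, constructed so that grade
# depends on reading effort even after accounting for GPA.
viewing = 5 + 2 * gpa + rng.normal(0, 1, n)
grade = 50 + 5 * gpa + 3 * viewing + rng.normal(0, 2, n)

r = partial_corr(viewing, grade, [gpa, sat, fci])
print(f"partial r = {r:.3f}")
```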
Achievement on the SCI did not correlate significantly with any measure of reading effort (p ≥ .144 in all cases), perhaps because the e‐textbook did not specifically address the kinds of items on the SCI.
5.3 Supplemental questions
5.3.1 How does students' prior knowledge correlate with reading strategies?
One might expect that a student's prior knowledge would affect the amount of time the student needs to spend reading the e‐textbook. Thus, we examined the Pearson correlation (using p < .05 as the significance threshold) between each of the 14 viewing measures and three measures of prior knowledge: SAT score, high school GPA, and performance on the FCI. As presented in Table 7, SAT score is correlated negatively with most of the 14 measures of reading, although only four of the correlations are significant: total viewing time, total page visits, and the viewing time and page visits for homework problems. High school GPA is correlated positively with most of the 14 measures, although only two correlations are significant: total viewing time and homework problem viewing time. Finally, performance on the FCI is negatively correlated with most of the 14 measures. Both the viewing time and page visits for homework problems are significantly and negatively correlated with FCI score, as are total page visits and page visits to worked examples; total viewing time also approaches a significant negative correlation with FCI score. In this study, high SAT and FCI scores appear to signify a high level of cognitive ability that enables students to learn the material quickly, whereas high GPA appears to signify a high level of academic motivation that supports persistence in learning.
| Textbook viewing measures | SAT score | High school GPA | FCI score |
|---|---|---|---|
| Total viewing time | −.239* | .194* | −.144† |
| Instructional text viewing time | −.099 | .046 | .020 |
| Worked example viewing time | −.155 | .066 | −.066 |
| Homework problem viewing time | −.241* | .208* | −.169* |
| Homework answer viewing time | −.082 | .083 | −.043 |
| Total page visits | −.226* | .137 | −.209* |
| Instructional text page visits | −.099 | .113 | −.083 |
| Worked example page visits | −.153 | .060 | −.211* |
| Homework problem page visits | −.249** | .123 | −.229** |
| Homework answer page visits | −.135 | .144 | −.072 |
| Relative viewing time for instructional text pages | .083 | −.115 | .058 |
| Relative viewing time for worked example pages | .052 | −.135 | −.090 |
| Relative viewing time for homework problem pages | −.086 | .116 | −.037 |
| Relative viewing time for homework answer pages | .064 | .046 | .108 |
- Note. FCI = Force Concept Inventory; GPA = grade point average.
- * p < .05.
- ** p < .01.
- † .05 < p < .10.
5.3.2 How do students' reading preferences correlate with reading strategies?
One might expect that a student's personal preference about reading textbooks would influence the way they read their e‐textbook. Thus, we examined the correlation between reading preferences, measured with the three survey questions presented above, and the 14 measures of e‐textbook reading. As shown in Table 8, there were no significant correlations between media preference (i.e., electronic vs. paper) and the reading measures. This finding is consistent with theories of learning with instructional media that hold that instructional methods, not media, cause learning (Clark, 2001). Perceived convenience of the STL Reader correlated positively with most measures of reading, although only four of the correlations were significant: perceived convenience correlated positively and significantly with total page visits as well as page visits to instructional text, worked example, and homework problem pages. Similarly, perceived importance of the textbook correlated positively with most measures of reading; here, eight of the correlations were positive and significant, including viewing time and page visits for all content, instructional text, worked examples, and homework problems. Thus, students who feel the textbook is important to their learning tend to use it more than students who do not. This pattern is consistent with expectancy‐value theories of academic motivation, which hold that learners exert more effort when they value the material they are learning (Wigfield, Tonks, & Klauda, 2009).
| Textbook viewing measures | Perceived importance of textbook (M = 3.1, SD = 1.1) | Perceived convenience of STL Reader (M = 2.9, SD = 1.6) | Media preference (electronic vs. paper; M = 2.8, SD = 1.4) |
|---|---|---|---|
| Total viewing time | .223* | .168† | .011 |
| Instructional text viewing time | .185* | .097 | .044 |
| Worked example viewing time | .226* | .159† | .067 |
| Homework problem viewing time | .199* | .162† | .000 |
| Homework answer viewing time | .030 | .039 | −.038 |
| Total page visits | .252** | .270** | .012 |
| Instructional text page visits | .245** | .179* | .014 |
| Worked example page visits | .307** | .232** | .070 |
| Homework problem page visits | .202* | .256** | .007 |
| Homework answer page visits | .088 | .163† | −.016 |
| Relative viewing time for instructional text pages | .095 | −.024 | .002 |
| Relative viewing time for worked example pages | .119 | .028 | −.014 |
| Relative viewing time for homework problem pages | −.083 | .019 | .005 |
| Relative viewing time for homework answer pages | −.081 | −.014 | .024 |
- * p < .05.
- ** p < .01.
- † .05 < p < .10.
6 GENERAL DISCUSSION
6.1 Empirical contributions
The first major finding of this study is that when college students in an engineering course view their e‐textbook, they tend to focus on homework problem pages (i.e., pages that allow them to practice solving problems), and they tend not to look at pages containing instructional text or worked examples (i.e., pages that tell them information). The second major finding of this study is that the amount of effort that students put into viewing homework problem pages correlates positively with achievement in the course, whereas the amount of effort that students put into viewing pages containing instructional text and worked examples correlates negatively with achievement in the course.
6.2 Theoretical contributions
The pattern of results is consistent with the self‐testing hypothesis, which holds that the activity of taking a practice test—such as solving homework problems—improves learning by providing a form of retrieval practice. The results also help to modify the time‐on‐task hypothesis to include the idea that academic achievement is related to effort on study tasks that are aligned with assessment (such as practicing solving problems when the assessment involves solving problems).
6.3 Practical contributions
Clearly, students in engineering courses do not put much effort into reading the instructional material in their e‐textbooks, skipping over a majority of the pages with instructional text or worked examples. This suggests that e‐textbooks in engineering should not be counted on to provide a rich base of conceptual knowledge for learners. Instead, students focus on the practical issue of completing homework problems, suggesting that e‐textbooks in engineering may best be used for building procedural and strategic knowledge in learners.
6.4 Methodological contributions
The STL Reader built for this project is an effective tool for assessing how students read e‐textbooks in academic courses. It has potential to contribute to future research on the relation between student study strategies and academic learning.
6.5 Limitations and future directions
This study involved one cohort in one course, so it would be useful to compare it with other studies. It is possible, although unlikely, that students used some resource other than the e‐textbook for instructional content, so future research should monitor student use of external sources of information more stringently. However, the large amount of time students spent viewing homework problem pages provides strong evidence that the e‐textbook was their primary source of textbook content. At the time this study was conducted, the textbook had only recently been released, limiting the possibility of pirated versions being available online. Additionally, students were provided the electronic version of the textbook free of charge for use in the instrumented document viewing software. A further way to minimize the possibility of students using a text other than the e‐textbook would be to employ a custom textbook designed specifically for the course. Finally, given the positive correlation between perceived convenience of the STL Reader and the amount of reading, document viewing software should be designed to match the needs of the students.
Additionally, the three self‐report measures should be interpreted in light of the fact that each is based on a single item. We used only one item to tap each of the three self‐report factors because our main goal was not to develop a separate psychometrically tuned instrument for each factor but rather to take a preliminary look at a few potentially interesting factors as part of a supplementary analysis.
There is an abundance of research showing that a learner's interest in the subject matter can affect their engagement with learning activities (Renninger & Hidi, 2016). In an educational context, it is possible that those students who are more interested in the subject matter of the course will tend to read more of the instructional material and that this relationship may play a role in students' subsequent course performance. Although this is not the focus of this study, this is a rich area for further exploration.
7 CONCLUSION
Prior research suggests that students often do not do the required reading for courses. Our study, which used computer‐based technology to obtain detailed measures of reading habits, provides strong evidence of this. We found that college students in an introductory engineering course read surprisingly little of their online course textbook. Across a 10‐week course, the students spent an average of 1.9 hr reading instructional text, 1.4 hr on worked examples, 22.1 hr on homework problems, and 0.9 hr on homework answers, indicating a preference for practicing to solve test problems rather than being told. The students' course grades correlated significantly and positively with time viewing homework problems. However, course grades were not positively correlated with viewing either instructional text or worked examples. These results suggest that achievement was related to time spent practicing for solving test problems rather than time spent being told, suggesting a revision of the time‐on‐task hypothesis to include the value of spending time on tasks aligned to test requirements.
ACKNOWLEDGEMENT
This project was supported by the National Science Foundation under Award 1432820.