Erratum: Correction to Robinson, M.A. (2010). An empirical analysis of engineers' information behaviors. Journal of the American Society for Information Science and Technology, 61(4), 640–658.
Article first published online: 16 JUL 2010
© 2010 ASIS&T
Journal of the American Society for Information Science and Technology
Volume 61, Issue 9, page 1947, September 2010
- Issue published online: 25 AUG 2010
- Manuscript Received: 18 MAY 2010
- Manuscript Accepted: 18 MAY 2010
I recently discovered two errors in the cited paper. As the sole author, I accept full responsibility for these errors and would like to apologize to the readers and issue the following corrections. Note, however, that neither of these errors affects the main analyses reported (i.e., Research Aim 1, Hypotheses 1 to 8, inclusive, and the post hoc analyses), so the fundamental results and messages of the article remain the same. As will be apparent from the article itself, the dataset is a complex, multilevel one, and, while this in no way excuses these errors, it hopefully goes some way to explaining them.
First, on p. 650 of the article, in the section entitled Data Overview, Screening, and Response Rate, some of the confidence and precision levels stated for the 73 overall working time percentage results (reported in Tables 4 and 5) are incorrect. The levels stated apply only to the 29 results that relate to all 78 participants, who, between them, accounted for 11,137 sample points of PDA data. They do not apply to the remaining 44 results, however, because there the participants are divided into the four smaller groups of seniority grades, as shown in Table 4 and as addressed by Hypotheses 3 and 4. This division of participants reduces the numbers of sample points to 2,485 for Grade 1, 2,874 for Grade 2, 2,696 for Grade 3, and 958 for Grade 4 (the remaining 2,124 sample points are from those participants without seniority data). These smaller numbers of sample points, in turn, reduce the confidence and/or precision levels with which these particular 44 results can be stated (for clarification of this issue and the relevant work sampling equation, see Figure 2). However, even when the 958 sample points of Grade 4 are considered, in relation to Hypotheses 3 and 4, the 5.37% of overall working time that these seven participants spent asking questions (see Table 4) can still be stated with a relative precision of ±26.58% at a 95% confidence level (i.e., it can be stated with 95% confidence that the actual figure was between 3.94 and 6.80% of overall working time). Given that this is the least precise of the eight overall working time percentages of relevance to Hypotheses 3 and 4, and that neither of these hypotheses approached significance, an examination of the plotted results in Figure 3 suggests that this issue was unlikely to have affected these null findings.
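The ±26.58% figure follows from the standard work sampling precision formula, e = z·√(p(1−p)/n)/p, expressed relative to the estimated proportion p. A minimal sketch of the arithmetic, assuming the conventional z = 1.96 for a 95% confidence level (the article's Figure 2 is not reproduced here, so the exact form used there is an assumption):

```python
import math

def relative_precision(p, n, z=1.96):
    """Relative precision of a work-sampling proportion p estimated from
    n sample points, at the confidence level implied by z
    (z = 1.96 corresponds to 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n) / p

# Grade 4: 5.37% of overall working time, 958 sample points
p, n = 0.0537, 958
rel = relative_precision(p, n)            # ~0.2658, i.e. +/-26.58%
half_width = rel * p                      # absolute half-width, ~0.0143
lo, hi = p - half_width, p + half_width   # ~(0.0394, 0.0680)
```

Multiplying the relative precision back into the 5.37% estimate gives the 3.94–6.80% interval quoted above.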
The second error concerns Footnote 4, on p. 646 of the article in the section entitled Generation and content of the multilevel task categories. Here, the results of the analyses of the perceptual variables are incorrect, as I had not isolated the correct subset of data from the wider, multilevel dataset prior to conducting the analyses. The correct results are as follows. Using within-participants t tests once more, searching for information from other people was rated as more complex (M=1.88, SD=0.68) than doing so from nonhuman sources (M=1.44, SD=0.62), t(75)=5.74, p<0.001 (Cohen's d=0.66, a medium effect size), more important (M=2.58, SD=0.49) than doing so from nonhuman sources (M=2.25, SD=0.50), t(75)=6.89, p<0.001 (Cohen's d=0.67, a medium effect size), and more satisfying (M=2.08, SD=0.48) than doing so from nonhuman sources (M=1.88, SD=0.57), t(75)=3.53, p<0.001 (Cohen's d=0.39, a small effect size). However, there was no difference between the effectiveness ratings of searching for information from other people (M=2.29, SD=0.38) and nonhuman sources (M=2.25, SD=0.45), t(75)=0.97, n.s. (Cohen's d=0.11, a very small effect size).
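For reference, effect sizes of this kind can be approximated from the reported descriptives. A hedged sketch, assuming Cohen's d is computed as the mean difference divided by the root-mean-square of the two standard deviations (one common convention for repeated-measures designs; the article does not state which formula was used, and values recomputed from the rounded means and SDs can differ from those reported by a point or two in the second decimal):

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d as the mean difference over the RMS of the two SDs.
    This particular convention is an assumption for illustration,
    not confirmed by the article."""
    return (m1 - m2) / math.sqrt((sd1**2 + sd2**2) / 2)

# People vs. nonhuman sources, from the corrected descriptives above
print(cohens_d(1.88, 0.68, 1.44, 0.62))  # complexity:    ~0.68 (reported 0.66)
print(cohens_d(2.58, 0.49, 2.25, 0.50))  # importance:    ~0.67 (reported 0.67)
print(cohens_d(2.08, 0.48, 1.88, 0.57))  # satisfaction:  ~0.38 (reported 0.39)
print(cohens_d(2.29, 0.38, 2.25, 0.45))  # effectiveness: ~0.10 (reported 0.11)
```

The small discrepancies arise from the two-decimal rounding of the published means and standard deviations.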