Measuring ICT Use and Learning Outcomes: evidence from recent econometric studies


  • Federico Biagi

  • Massimo Loi

Federico Biagi, Information Society Unit, European Commission — Joint Research Centre (JRC), Institute for Prospective Technological Studies (IPTS), Edificio EXPO, C/ Inca Garcilaso 3, 41092 Sevilla, Spain

Massimo Loi, Istituto per la Ricerca Valutativa sulle Politiche Pubbliche (IRVAPP), Via S. Croce 77, 38122 Trento, Italy


Based on PISA 2009 data, this article studies the relationship between students’ computer use and their achievement in reading, mathematics and science in 23 countries. After categorising computer use into a set of different activities according to the skills they involve, we correlate students’ PISA test scores with an index capturing the intensity of use for each of these activities and with the total number of activities they perform. Overall, we find that students’ PISA test scores in reading, mathematics and science increase with the intensity of computer use for Gaming activities, while they decrease with the intensity of computer use for activities that are more closely related to school curricula (i.e. Communication and Collaboration activities; Technical Operations/Info Retrieval activities; Creation of Content and Knowledge and Problem Solving activities). However, the number of activities performed (and hence their diversification), irrespective of the intensity of use, is positively correlated with students’ proficiency in all three PISA domains in the vast majority of countries, indicating that breadth of computer use, as opposed to intensity of use in a given activity, has some positive effect on students’ PISA test scores.


Estimating the impact of ICT on learning is a daunting task for many reasons. First, the concepts need to be defined and then they must be properly measured. What do we mean by ICT? Are we referring to ICT infrastructures or to their actual use? Are the location of the infrastructures and what students do when they use ICT relevant? Is the intensity of use an important factor? Similar problems are encountered when trying to define and measure the concept of learning: are we referring to specific skills, competences, domains (e.g. mathematics, science or reading), to the performance in standardised tests at the national level, such as the British General Certificate of Secondary Education (GCSE), or at the international level such as the Programme for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), and the Progress in International Reading Literacy Study (PIRLS), or to some other more holistic concept of learning? Things become even more complex when trying to capture the relationship between ICT use and learning (however defined). The same ICT infrastructures and intensity of use can give rise to different learning outcomes, due to the interplay of many factors (e.g. the degree of ICT confidence of teachers, students and parents, the accessibility of ICT resources at home, school or other relevant environments, peer effects, etc.).

In this article, we try to shed light on some of these questions, exploiting the features of the 2009 PISA ICT familiarity questionnaire that, for the first time, provides information on the type of activities 15-year-old students perform using ICT. Our main question is ‘How are the type and intensity of ICT use related to students' PISA test scores?’ We are able to go beyond the simple dichotomisation between school vs. home (intensive) use of ICT (Spiezia, 2010) and, by looking at intensity in the context of different types of activities — from gaming to problem solving — we are able to better understand the links between ICT and learning (as measured by results in the PISA standardised tests). Moreover, by adopting a cross-country perspective, we can test whether the signs and magnitudes of the correlation coefficients between PISA test scores and intensity of ICT use in the various activities are country-specific or homogeneous across countries, and whether they are sensitive to the domain considered (i.e. language of instruction, mathematics and science). The soundness of our approach is confirmed by the fact that our results are fairly general, both across countries and domains.

This article is organised as follows. Section 2 summarises our view on the main causal relationships between ICT use and learning outcomes. Section 3 reviews the literature that, using econometric methodologies applied to large datasets, has tried to assess the impact of ICT use on students' performance in standardised tests1. Section 4 presents the main results of our recent research on PISA 2009 (Biagi & Loi, 2012) and Section 5 concludes.

Interpretative Framework for the Relationship between ICT Use and Learning Outcomes

Ideally, when running an econometric study, we would like to measure the causal effect of the explanatory variables on the dependent variable. In practice, we seldom have the chance to go beyond measures of association because, even if we have a clear view of the causal relationship between the left-hand and right-hand side variables, we are often unable to identify it for lack of data. For instance, it is very difficult to measure students' innate abilities or effort. Similarly, we normally do not have data on parents' or teachers' attitudes to technology. In general, the more complex the causal model (e.g. due to feedback from the dependent to the explanatory variables, or to complex interactions and loops among the latter), the more difficult the identification exercise becomes.

Evaluating the impact of ICT on learning outcomes is very difficult, as Figure 1, where we have summarised the main complex relationships between ICT use and learning outcomes, confirms. Following Scheuermann and Pedró (2009), we take into account micro (student's and family's characteristics), meso (school's characteristics) and macro (institutional) level factors, as well as their interrelationships 2. On the one hand, we have the role of students' characteristics (e.g. gender3, migration background and grade of enrolment) (Ainley et al., 2008; Broos, 2005; Livingstone & Helsper, 2007; Notten et al., 2009; Tømte & Hatlevik, 2011; Zhong, 2011) and family characteristics (e.g. socio-economic status and family structure4), which influence adolescents' confidence in and use of new technologies. These relationships may also be affected by the interaction with peers (i.e. classmates, out-of-school friends, siblings). But school-level factors are also important in exploiting the potential of ICT in education5. The literature identifies two main channels that are crucial in reaching effective integration of ICT in education. The first concerns school principals' and teachers' behaviour and knowledge (Brummelhuis & Kuiper, 2008; Law & Chow, 2008; Pelgrum, 2008), while the second refers to schools' technological equipment, including software, Internet connectivity and technical and pedagogical support (Eurydice, 2010). Furthermore, school characteristics may not be independent of family characteristics: for example, families with a higher socio-economic background can enrol their children in better equipped schools.

Figure 1.

Factors affecting students' ICT use and school performance

Institutional-level factors are also very important, as they play a role in moderating or accentuating the barriers at school level. Undoubtedly, technological infrastructures (e.g. broadband coverage and speed) affect access to and use of ICT, both at home and at school (European Commission, 2012). Moreover, many countries recommend the use of ICT for teaching, offering support (practical advice and help for lesson planning, effective teaching, classroom management, use of various resources, etc.) for the effective integration of these tools in education (Condie & Munro, 2007). Furthermore, countries play a central role in promoting (national and local) policies aimed at providing teachers with the knowledge and skills to integrate ICT in their teaching activities6. Finally, institutional settings condition the amount of resources available at the school level and schools' budget allocation decisions.

The micro, meso and macro level factors on the left side of Figure 1 influence the way students use ICT at home and at school, which, together with other micro, meso and macro level factors (right side of Figure 1) determine students' learning (however defined).

The practical estimation of this complex model is quite challenging. First, factors affecting the availability and use of ICT (gender, family structure and socio-economic status, school resources and autonomy, teachers' autonomy, etc.) may also affect students' school performance and PISA test scores (Fuchs & Wöβmann, 2007), generating a collinearity problem. Second, students using ICT may be systematically different from those who do not (whether we consider general or specific use, in all its variations of intensity and place of use). We refer to this as selection bias. Third, the relationship between ICT use and learning may be affected by unobservable factors (e.g. attitude to ICT, ability, motivation and aspirations), resulting in omitted variables or measurement errors in the estimates. While in the case of randomised or quasi-natural experiments some of these problems can be dealt with, in general it is almost impossible to obtain results that are immune to all of them.

Previous Findings

Recent technological improvements (such as Tablets), creating new opportunities for ICT use in teaching and learning processes, are leading many observers to emphasise the need to invest in new technologies to improve the learning experience of younger generations. However, despite the many claims by politicians and software/hardware producers and vendors, so far there is no unambiguous evidence of a substantial impact of ICT on students' learning (however defined). The presence of mixed results (revealing insignificant, positive or even negative impacts of ICT on students' learning) is certainly due in part to the complexity of this relationship, but also to the fact that it has been studied within different disciplines (e.g. pedagogy, sociology, computer science and economics) and even within the same discipline, using different methodologies.

Here, we briefly review the findings of some recent econometric studies on ICT and students' achievement in standardised tests. We first present the results of five randomised or natural experiments (the ‘gold standard’ in policy evaluation), then we summarise the findings of three quasi-experimental studies (considered the ‘second best’ in policy evaluation) and, finally, we review six econometric studies based on multivariate analysis (the most common approach).

In the literature on the impact of ICT use on performance in standardised tests there are only a few cases of randomisation or natural experiments. One is by Angrist & Lavy (2002), who analyse the effects of a large-scale computerisation policy in elementary and middle schools in Israel. Here, one can find appropriate control groups because, due to the programme characteristics, not all schools received funding (natural experiment set-up). Their findings reveal that, after controlling for observable characteristics, greater educational use of computers does not have a positive effect on standardised test scores. On the contrary, for 4th grade maths scores, the effect is negative and statistically significant. Results casting doubt on the usefulness of ICT for learning have also been found by Goolsbee & Guryan (2006), who study the impact of a US programme subsidising schools' investment in Internet and communications. While, as a by-product of the programme, the number of Internet connections available at schools increased, there is no evidence of better performance in tests. Positive effects on students' learning outcomes, but limited to the domain of interest (language), have been found by Rouse et al. (2004) who, in a randomisation framework, study the impact of an instructional computer programme designed to improve language and reading skills in the US. Banerjee et al. (2007), who analyse the outcome of a randomised policy carried out in two Indian states to improve the quality of education in urban slums, find that a computer-assisted programme designed to reinforce mathematical skills had a positive impact on scores (but the effects are limited to this domain, as the programme did not affect performance in other subjects). Finally, considering three US school districts, Barrow et al. (2009) report that students randomly assigned to a computer-aided instruction programme obtained higher scores in algebra and pre-algebra tests than those who were not.

Randomised experiments are rarely feasible because of time and budget constraints or ethical implications, and researchers often seek quasi-experiments. Leuven et al. (2004) exploit a discontinuity in a subsidy for the purchase of computers and software given to disadvantaged schools in the Netherlands. Using a difference-in-difference framework, they find that this subsidy had a negative effect on students' learning which was stronger for females. Machin et al. (2006), who analysed the causal impact of ICT investment on educational outcomes in English schools over the period 1999–2003, find evidence of a positive causal impact in primary schools after having controlled for the endogeneity of ICT investment through Instrumental Variable (IV) techniques7. Malamud & Pop-Eleches (2010) use a regression-discontinuity-design to estimate the effect of home computers on students' achievement in Romania. Exploiting a discontinuity in a subsidy to low-income families, they find that students from households that used the subsidy to purchase a home computer had significantly lower grades in mathematics, English and Romanian.
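The instrumental-variable logic invoked here (and explained in note 7) can be illustrated with a toy two-stage least squares (2SLS) example. Everything below is synthetic and invented for illustration; it is not the estimation from any of the studies cited:

```python
import numpy as np

# Toy 2SLS sketch of the IV idea: an unobserved confounder (ability) drives
# both ICT use and test scores, so naive OLS is biased; an instrument that
# shifts use but not scores recovers the true effect. All values are invented.
rng = np.random.default_rng(1)
n = 1000
ability = rng.normal(0, 1, n)          # unobserved confounder
z = rng.normal(0, 1, n)                # instrument: affects use, not scores directly
ict_use = 0.8 * z + 0.5 * ability + rng.normal(0, 1, n)
score = 2.0 * ict_use + 3.0 * ability + rng.normal(0, 1, n)   # true effect = 2.0

def ols(y, X):
    """Least-squares coefficients of y on the columns of X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

const = np.ones(n)
naive = ols(score, np.column_stack([const, ict_use]))[1]   # biased upwards
# Stage 1: project ICT use on the instrument; Stage 2: regress scores on the
# fitted values, using only the instrument-induced variation in use.
ict_hat = np.column_stack([const, z]) @ ols(ict_use, np.column_stack([const, z]))
iv = ols(score, np.column_stack([const, ict_hat]))[1]      # close to 2.0
```

The naive OLS coefficient absorbs the unobserved ability that drives both ICT use and scores, while the second stage uses only the variation in use induced by the instrument. (For simplicity the sketch reports point estimates only; proper 2SLS standard errors require a correction to the second-stage residuals.)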

Most studies, either because they operate in environments in which experimental and quasi-experimental analyses are not feasible or because instrumental variables are not available, tend to focus on correlations. This is the case of Fuchs & Wöβmann (2007), who find a positive and significant correlation between the availability of computers at school and students' performance in PISA tests (but the estimated correlation is reduced when additional variables are brought into the regression as controls). Similarly, Notten & Kraaykamp (2009) report a positive relation between ICT availability at home and PISA test scores in science. A positive correlation between some measures of ICT use (and confidence) and PISA test scores in science is found for Canada, Austria (Luu & Freeman, 2011) and the Czech Republic (Kubiatko & Vlckova, 2010), while no substantial influence of home computer use on students' PISA test scores in mathematics is estimated for Germany (Wittwer & Senkbeil, 2008).

In a study that uses the 2006 PISA ICT familiarity questionnaire, Spiezia (2010) tries to go beyond a simple correlation analysis and, controlling for the potential endogeneity of treatment8, finds that a greater frequency of computer use is positively associated with higher PISA test scores in science in all countries (with large across-country differences in the estimated coefficients). When controlling for where use takes place (home vs. school), he finds that the positive relationship between intensity of use and the PISA science test score is much stronger for those who use computers intensively at home than for those who use them intensively at school9 (the association between test scores and intensity of computer use at school is not significant for many countries). This result10 is interesting because it points to the low efficacy of ICT policies directed solely at schools. However, one drawback is that it is based on a general measure of intensity, since the PISA 2006 survey does not allow researchers to measure the intensity of ICT use for different activities. This is only possible with the 2009 PISA survey.

What emerges from this summary is that it is hard to find univocal and consistent evidence supporting the hypothesis of a positive impact of ICT use on students' performance in standardised tests. This, at least for the studies using PISA data, might be due to the fact that, until the 2009 survey, it was not possible to distinguish by type of ICT use.

Evidence from a Recent Study Based on PISA 2009

In the context of a Joint Research Centre-European Commission research project on ICT and learning, we have recently studied the relationship between ICT use and learning11 using PISA 2009 data (Biagi & Loi, 2012).

The primary source of data for our study is the fourth wave of PISA, administered in 2009. PISA is a cross-national survey that, every three years since 2000, has assessed 15-year-old students' performance in mathematics, reading and science, as well as cross-curricular problem-solving skills. In addition, PISA collects contextual data on students', families' and schools' characteristics. Furthermore, it gives each country the option to administer a 10-minute questionnaire on students' familiarity with ICTs. Students are asked what kinds of technologies are at their disposal at home and at school, whether they use them, how often and for what purposes. They are also asked to self-assess their level of proficiency in performing certain tasks using a computer and to express their attitude to computers (see the OECD website for more information about the PISA survey).

This study only considers the European countries that completed the optional questionnaire on students' familiarity with ICT (plus Iceland, Norway and Turkey) and the student-level observations with no missing values on any variable of interest (list-wise deletion). The full sample is composed of 23 countries and most students have some experience in using ICT (about 97% in the selected dataset declared they had used a computer before the survey). France, Luxembourg, the UK and Romania are not in the dataset because they did not complete the PISA-ICT questionnaire. The Netherlands, although it completed the questionnaire, does not provide any information on the use of ICT at home for entertainment purposes (variables from IC04Q01 to IC04Q09 of the OECD-PISA dataset) and it is therefore not considered here. Similarly, Austria is not considered in the econometric analysis of the study because of data reliability issues.

The regressions are run country-by-country, with clustered standard errors at the school level and using normalised weights calculated following the procedure suggested by the PISA 2009 data analysis manual (OECD, 2009, p. 219).

The econometric estimates are mostly based on questions Q4, Q5 and Q6 of the PISA 2009 ICT familiarity questionnaire, which are meant to capture the use of ICT both at home and at school. Q4 refers mainly to entertainment uses of ICT at home, while Q5 and Q6 capture school-related activities at home and at school, respectively (see Appendix A). There are various ways of reading the information provided by this set of questions. On the one hand, they allow researchers to distinguish between the sites in which the use of ICT takes place: home vs. school. On the other, they also provide information on the purpose of the activities: some are school-related (even if performed at home), while others are mostly entertainment-related. Since these activities involve different skills and competences, and following the approach adopted in a recent study by the JRC-IPTS Information Society Unit on Digital Competences (Ferrari, 2012), we divided them into the four groups presented in Table 1. For each group we created an index of intensity of use, combining information on the number of basic activities performed and the related frequency of use. To generate this index, we first attributed a score ranging from 1 to 4 to the frequency with which a student performs each of the basic activities listed in Q4, Q5 and Q6 (1 corresponds to the lowest frequency of use — never or hardly ever — and 4 to the highest — every day or almost every day). Then, for each student and each group of activities, we computed two indicators: the maximum intensity, obtained by multiplying the number of basic activities the student performs within the group by 4 (the highest frequency score), and the total score, obtained by summing the scores corresponding to the frequencies with which s/he performs those activities. Finally, we obtained the index of intensity of use as the ratio of the student's total score to the maximum intensity.
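As an illustration of the procedure just described, the index can be sketched in a few lines of code. This is a minimal sketch, not the authors' implementation; the function name and the example frequencies are invented, and we assume that 'activities performed' means those for which the student reports a frequency code:

```python
def intensity_index(frequencies):
    """Intensity-of-use index for one student and one activity group.

    frequencies: frequency codes (1 = never/hardly ever ... 4 = every day or
    almost every day) for the basic activities the student performs within
    the group (assumption: 'performed' = a frequency code is reported).
    """
    if not frequencies:
        return 0.0
    total_score = sum(frequencies)        # sum of the frequency scores
    max_intensity = 4 * len(frequencies)  # every activity at the top frequency
    return total_score / max_intensity    # ratio in (0, 1]

# Hypothetical example for the Collaboration and communication group:
# a student who chats daily (4), e-mails about schoolwork weekly (3)
# and never blogs (1).
colcom_int = intensity_index([4, 3, 1])   # (4 + 3 + 1) / (3 * 4) = 8/12
```

By construction the index equals 1 when every reported activity in the group is performed at the highest frequency, so it captures intensity relative to the number of activities rather than their count.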

Table 1. Groups of ICT activities

Group: Gaming activities
Description: play individual or collective games, both online and off-line.
Basic activities (from Q4, Q5, Q6):
  • Play one-player games
  • Play collaborative online games

Group: Collaboration and communication activities
Description: link with others, participate in online networks and communities, interact constructively and responsibly; communicate through online tools, taking into account privacy, safety and etiquette.
Basic activities (from Q4, Q5, Q6):
  • Use e-mail
  • Chat on line
  • Publish and maintain a personal website, weblog or blog
  • Participate in online forums, virtual communities
  • Use e-mail for communication with other students about schoolwork
  • Use e-mail for communication with teachers and submission of homework or other schoolwork
  • Chat on line at school
  • Use e-mail at school
  • Use school computers for group work and communication with other students

Group: Information Management and Technical Operations
Description: identify, locate, access, retrieve, store and organise information; use technology and media, perform tasks through digital tools.
Basic activities (from Q4, Q5, Q6):
  • Browse the Internet for fun
  • Download music, films, games or software from the Internet
  • Browse the Internet for schoolwork (at home)
  • Download, upload or browse material from your school's website (at home)
  • Check the school's website for announcements
  • Browse the Internet for schoolwork (at school)
  • Download, upload or browse material from your school's website (at school)
  • Post your work on the school's website

Group: Creation of Content and Knowledge and Problem Solving activities
Description: integrate and re-elaborate previous knowledge and content, construct new knowledge; define problems to be solved or tasks to be achieved, and the resources and means for achievement.
Basic activities (from Q4, Q5, Q6):
  • Play simulations at school
  • Practice and drilling, such as for foreign language learning or mathematics
  • Do individual homework on a school computer
Following this approach, we generated four explanatory variables corresponding to the four different groups of activities: games_int (measuring the use of ICT for gaming activities); colcom_int (measuring the use of ICT for communication and collaboration activities); techinfo_int (measuring the use of ICT for technical operations and for info retrieval activities); contprob_int (measuring intensity in activities related to creation of content and knowledge and problem solving). We also computed an indicator capturing the total number of activities performed by a given student, irrespective of intensity (totactivities).

In addition to the measures of intensity and breadth of ICT use, the following variables were used as controls in the basic model: grade, gender, household's socio-economic status (an index created by the OECD capturing both income- and education-related household variables), a dummy variable for student's migration background, dummy variables capturing the household's composition (single parents, nuclear and mixed families), dummy variables for the number of books available at home, peer-effects as captured by the average school score in the corresponding test (i.e. Language of Instruction, Mathematics or Science), and school's average socio-economic status.

The econometric model was run for the three PISA domains, one country at a time, allowing for interactions between the key explanatory variables (intensity of ICT use/breadth of ICT use) and the variable capturing the household's socio-economic status. This interaction is meant to tell us whether intensity and breadth of ICT use tend to complement the household's socio-economic status in determining students' performance.
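To make the estimation steps concrete, the following is a hedged sketch (not the authors' code) of a weighted least-squares regression for one country and one domain, with an intensity-by-status interaction and school-clustered (CR0) standard errors. All data are synthetic and all variable names (games_int, escs, school, w) are illustrative assumptions:

```python
import numpy as np

# Synthetic data for one country: intensity index, socio-economic status,
# school ids for clustering, and normalised student weights (all invented).
rng = np.random.default_rng(42)
n = 400
school = rng.integers(0, 25, n)                 # school id, used for clustering
games_int = rng.uniform(0, 1, n)                # intensity index for one group
escs = rng.normal(0, 1, n)                      # household socio-economic status
w = rng.uniform(0.5, 1.5, n)                    # normalised student weights
score = 500 + 10 * games_int + 30 * escs + rng.normal(0, 5, n)

# Design matrix: intercept, intensity, ESCS and their interaction.
X = np.column_stack([np.ones(n), games_int, escs, games_int * escs])

# Weighted least squares via the square-root-of-weights transformation.
sw = np.sqrt(w)
Xw, yw = X * sw[:, None], score * sw
beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
resid = yw - Xw @ beta

# Cluster-robust (CR0) covariance: (X'X)^-1 (sum_g s_g s_g') (X'X)^-1,
# where s_g = X_g' u_g sums the score contributions within each school.
bread = np.linalg.inv(Xw.T @ Xw)
meat = np.zeros((X.shape[1], X.shape[1]))
for g in np.unique(school):
    idx = school == g
    s_g = Xw[idx].T @ resid[idx]
    meat += np.outer(s_g, s_g)
cov = bread @ meat @ bread
se = np.sqrt(np.diag(cov))
```

With real survey data one would instead use the PISA final student weights and replicate weights; a library such as statsmodels provides the same cluster-robust covariance via cov_type='cluster'.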

Table 2 summarises the main results on the relationship between ICT use and PISA test scores (for country-specific estimates, see Biagi & Loi, 2012; Appendix B reports synthetic results). The figures in parentheses indicate the number of countries (out of 23) for which the estimated coefficient is significant with the sign shown to the left of the parenthesis; country codes mark exceptions with the opposite sign.

Table 2. Summary of main results by PISA domain

Variable        Language of instruction   Mathematics       Science
games_int       + (11/23); - TK           + (15/23)         + (13/23)
colcom_int      - (15/23); + PT           - (14/23); + SK   - (15/23)
techinfo_int    - (16/23); + NO           - (17/23); + NO   - (15/23); + NO
contprob_int    - (21/23)                 - (19/23)         - (20/23)
totactivities   + (22/23)                 + (18/23)         + (21/23)

Source: authors' estimates from PISA 2009.

When looking at the relationship between the domain-specific PISA test score and the measures of intensity of ICT use, we find very consistent patterns across countries. First, and contrary to our expectations, where there is evidence of a significant coefficient on our measure of intensity in gaming activity (games_int), this coefficient is positive (in 15 countries out of 23 for mathematics, 13 for science and 11 for language of instruction; the only exception is Turkey, where the relationship is negative). For all the other intensity variables we find that, where significant, the estimated coefficient is negative. For colcom_int this is the case in 15 out of 23 countries for language of instruction (except Portugal) and science (no exceptions), and in 14 countries for mathematics (except the Slovak Republic). For techinfo_int, the negative correlation applies to 16 countries out of 23 for language of instruction, 17 for mathematics and 15 for science (the exception in all three cases being Norway). Finally, for contprob_int the coefficient is negative in 21 countries out of 23 for language of instruction, 19 for mathematics and 20 for science, with no exceptions. However, our results also indicate that the variable capturing the breadth of ICT use (the number of ICT activities performed: totactivities), irrespective of intensity, tends to be positively and significantly associated with the PISA test score (in 22 out of 23 countries for language of instruction, 18 for mathematics and 21 for science). Moreover, we do not find evidence supporting the hypothesis that the use of ICT reinforces or alleviates pre-existing social and economic differences: the interactions between the variable capturing the household's socio-economic status and the variables expressing intensity (or breadth) of ICT use are never significant.

These results could be read as evidence that the generalised expectations on the positive impact of new technologies on learning outcomes are not confirmed, leading us to conclude that investments in ICT are ill placed. However, we think that these conclusions cannot be drawn for at least two reasons. First, our results indicate that breadth of use of ICT (i.e. the number of ICT activities performed) is positively associated with PISA test scores in the three domains and in most countries. This is consistent with a framework in which the different activities are complementary in building competences that are relevant for the PISA tests. Second, ours is not a proper impact assessment based on counterfactual evaluation. For this we should have compared the PISA test scores obtained by students using ICT more intensively with the PISA test scores of an appropriate control group. However, finding such a control group is almost impossible, especially in countries (such as Nordic countries) where most students declare having access to and using computers both at home and at school.

The negative correlations between the results in PISA tests and the use of ICT for Creation of content and knowledge and problem solving activities could be rationalised on the grounds that the PISA tests tend to focus on skills and competences that are mostly affected by the traditional aspects of the teaching-learning process (Bocconi et al., 2012), but not by the use of new technologies (Redecker & Johannessen and Istance & Kools in this issue). The results for Gaming, however, are quite surprising. We cannot exclude the presence of some selection bias (for instance, if brighter students spend more time on gaming), but the consistency of the positive association between intensive use of ICT for gaming and PISA test scores leads us to put forward the hypothesis that gaming might indeed stimulate skills, competences and abilities (such as problem solving, strategic thinking, memory, fantasy, interaction and adaptation) that are well captured by standardised tests such as PISA. On this specific point, Wilson et al. (2009) review the literature on the relationship between gaming and learning and propose an interesting interpretative framework linking game attributes and learning outcomes. We summarise their main conclusions in Table 3.

Table 3. Relationship between learning components and game attributes (synthesis of Wilson et al., 2009)

Learning component: Cognitive abilities (i.e. knowledge creation, organisation and application)
Associated game attributes:
  • games presenting solvable problems
  • games providing progressive difficulty and uncertainty about obtaining a specific outcome
  • games taking into account/adapting to the knowledge and skill level of the learner

Learning component: Skill-based outcomes (e.g. technical or motor skills: perception, readiness to act, adaptation, etc.)
Associated game attributes:
  • physical and psychological representation of the tasks
  • interactive gaming activities/learner engagement
  • games giving the learner the opportunity to navigate through the computer program on the basis of personal preferences

Learning component: Affective outcomes (e.g. motivation and attitudes)
Associated game attributes:
  • specificity of rules/goals (games with specific goals that encourage the user to become personally engaged in achieving the goal)
  • games stimulating learners' fantasy and motivation
  • games giving specific/immediate feedback to the learner

The findings of this study (Biagi & Loi, 2012) show that Gaming is the only activity for which a consistently positive coefficient on intensity of use is found. For the other activities, the measures of intensity tend to be negatively correlated with students' PISA test scores (the exceptions are Norway, the Slovak Republic and Portugal). Moreover, this negative association is particularly strong for the Creation of content and knowledge and problem solving activities, which seem to be closely related to the use of ICT in the school curriculum. While the negative correlations for such activities could be rationalised on the basis that ICTs are still somehow external to the traditional school curricula (Punie et al., 2006) around which the PISA tests are constructed, the positive correlation for Gaming is unexpected. We think that further research is needed on the impact of gaming on skills, and more generally on competences, taking into account that about 20% of time spent on the Internet is devoted to gaming (Nielsen's data).

It is also important to stress that ours is not a counterfactual impact evaluation exercise, as we are not comparing the performance of a treated group with that of a proper control group. Future research — possibly using randomised or quasi-natural experiments — should try to compare the learning performance of different groups of students that are identical in every respect except that only one of them is exposed to a policy intervention in which teachers have been trained in ICT, ICT infrastructures have been improved and the school curriculum has been designed to integrate ICT hardware and software. While impact evaluation of technical advances can be of only limited use when technology is rapidly evolving, we believe that ICT has now reached a sufficiently mature stage to make engaging in policy evaluation worthwhile and, we think, no longer deferrable.


The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.


We thank three anonymous referees and the members of the DG-EAC expert group on ICT indicators in Education for their comments and inspiring discussions.


  1.

    We do not look at learning outcomes that differ from results in standardised tests and we do not consider evidence from case studies.

  2.

    The need for a holistic approach is also stressed in Punie et al. (2006).

  3.

    Female students use the Internet less often than male students (Livingstone & Helsper, 2007; Notten et al., 2009), who tend to use computers and the Internet for entertainment rather than for school-related tasks (Ainley, Enger & Searle, 2008; Tømte & Hatlevik, 2011).

  4.

    Notten et al. (2009) show that students from high socio-economic two-parent households are more likely to have Internet access at home than those from lower-status families and to use the web more frequently to obtain information and extend their social networks.

  5.

    This is confirmed by two recent international studies conducted by the International Association for the Evaluation of Educational Achievement (namely the Second Information Technology in Education Study — SITES 2006 — and the Trends in International Mathematics and Science Study — TIMSS 2007).

  6.

    Most European countries include ICT in initial teacher training, provide ICT-related continuing professional development opportunities and periodically evaluate teachers' ICT skills (Eurydice, 2010).

  7.

    Endogeneity of treatment could arise if some variables influencing the learning outcomes (such as ability) also affect the likelihood of undergoing treatment (intensity of ICT use), so that a positive relationship between treatment and learning outcomes may be spurious, driven by unobserved ability. Randomisation and natural experiments are immune to this problem, as assignment to treatment is due to some external factor that is not controllable by the individual undergoing treatment. Instrumental variables are variables that are correlated with treatment status but uncorrelated with the learning outcome (other than through treatment) and can hence be used to obtain measures of causal impacts.
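    The bias described in this note can be illustrated with a minimal numpy simulation (purely illustrative; all variable names and coefficients are our own assumptions, not estimates from the paper). Unobserved ability drives both the intensity of ICT use and the test score, so a naive regression finds a positive slope even though the true causal effect of use on the score is zero:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Unobserved ability drives both the "treatment" (intensity of ICT use)
    # and the outcome (test score); the true causal effect of treatment is zero.
    ability = rng.normal(size=n)
    treatment = 0.8 * ability + rng.normal(size=n)
    score = ability + rng.normal(size=n)

    # Naive OLS slope of score on treatment: cov(score, treatment) / var(treatment)
    naive_slope = np.cov(score, treatment)[0, 1] / np.var(treatment)
    print(f"naive slope: {naive_slope:.2f}")  # positive, despite a zero causal effect
    ```

    Randomising treatment (i.e. making it independent of ability) would drive this slope towards zero, which is exactly why the note singles out randomisation and natural experiments.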

  8.

    Spiezia is well aware of the issue of endogenous treatment and tries to correct for it using a two-step methodology developed by Gourieroux et al. (1987). First, a selection equation estimates the likelihood of treatment as a function of observable characteristics. Second, an outcome equation regresses the PISA test score on observable characteristics that potentially influence the outcome and on the treatment probabilities estimated in the first step.
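    A heavily simplified numpy sketch of this two-step logic, on simulated data, is given below. It uses a linear probability model as a stand-in for the probit selection equation actually used in the literature, and all variable names and coefficients are illustrative assumptions, not a reproduction of Spiezia's specification:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    # Simulated data: x is an observable characteristic, z shifts treatment only,
    # d is a binary treatment (e.g. intensive ICT use) and score is the outcome.
    x = rng.normal(size=n)
    z = rng.normal(size=n)
    d = (0.5 * x + z + rng.normal(size=n) > 0).astype(float)
    score = 10.0 + 2.0 * x + 1.5 * d + rng.normal(size=n)

    # Step 1: selection equation -- predict the treatment probability from
    # observables (a linear probability model standing in for the probit).
    X1 = np.column_stack([np.ones(n), x, z])
    beta1, *_ = np.linalg.lstsq(X1, d, rcond=None)
    p_hat = X1 @ beta1

    # Step 2: outcome equation -- regress the score on observables and on the
    # estimated treatment probability from step 1.
    X2 = np.column_stack([np.ones(n), x, p_hat])
    beta2, *_ = np.linalg.lstsq(X2, score, rcond=None)
    print("estimated treatment effect:", round(beta2[2], 2))
    ```

    In this simulation the coefficient on the estimated probability recovers the true treatment effect because z shifts treatment without directly affecting the score; with real PISA data the credibility of the second step rests on an analogous exclusion assumption.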

  9.

    Greater computer use at school could be more strictly related to the educational activities of the class, so that students can benefit from teachers' support, while greater computer use at home could indicate additional, self-directed activity by students and hence capture variations in learning attitudes that are not well represented by the other available variables.

  10.

    We suspect that there might be some persistent selection or omitted-variable bias, for instance due to unobserved factors that are positively associated both with high performance in the test and with the likelihood of more intensive use of the PC at home (such as a positive attitude to technology use at the household level).

  11.

    In this context learning means results in PISA tests.

Appendix A — PISA 2009 ICT familiarity questionnaire: questions relevant for our study

Q1 Are any of these devices available for you to use at home? (Please tick one box on each row)
Response options: Yes, and I use it / Yes, but I don't use it / No
a) Desktop computer
b) Portable laptop computer
c) Internet connection
d) <Video games console>, e.g. <Sony PlayStation™>
e) Cell phone
f) Mp3/Mp4 player, iPod or similar
g) Printer
h) USB (memory) stick
Q2 Are any of these devices available for you to use at school? (Please tick one box on each row)
Response options: Yes, and I use it / Yes, but I don't use it / No
a) Desktop computer
b) Portable laptop computer
c) Internet connection
d) Printer
e) USB (memory) stick
Q4 How often do you use a computer for the following activities at home? (Please tick one box on each row)
Response options: Never or hardly ever / Once or twice a month / Once or twice a week / Every day or almost every day
a) Play one-player games
b) Play collaborative online games
c) Doing homework on the computer
d) Use e-mail
e) <Chat online> (e.g. MSN®)
f) Browse the internet for fun (such as watching videos, e.g. <YouTube™>)
g) Download music, films, games or software from the internet
h) Publish and maintain a personal website, weblog or blog
i) Participate in online forums, virtual communities or spaces (e.g. <Second Life® or MySpace™>)
Q5 How often do you do the following at home? (Please tick one box on each row)
Response options: Never or hardly ever / Once or twice a month / Once or twice a week / Every day or almost every day
a) Browse the internet for schoolwork (e.g. preparing an essay or a presentation)
b) Use e-mail for communication with other students about schoolwork
c) Use e-mail for communication with teachers and submission of homework or other schoolwork
d) Download, upload or browse material from your school's website (e.g. time table or course materials)
e) Check school's website for announcements, e.g. absence of teachers
Q6 How often do you use a computer for the following activities at school? (Please tick one box on each row)
Response options: Never or hardly ever / Once or twice a month / Once or twice a week / Every day or almost every day
a) <Chat on line> at school
b) Use e-mail at school
c) Browse the internet for schoolwork
d) Download, upload or browse material from your school's website (e.g. <intranet>)
e) Post your work on the school's website
f) Play simulations at school
g) Practice and drilling, such as for foreign language learning or mathematics
h) Doing individual homework on a school computer
i) Use school computers for group work and communication with other students

Appendix B — Range of variation of the significant coefficients