Abstract

There are considerable differences in school outcomes on the NAPLAN tests in Australia between Government and non-Government schools. This paper documents these differences for four year levels (Years 3, 5, 7 and 9) and for the five NAPLAN domains (Grammar and Punctuation, Numeracy, Reading, Spelling and Writing). It then examines the extent to which the differences are due to characteristics of the schools, such as the socioeconomic status of the school and the gender mix of its total enrolment. It is shown that a greater share of the difference in marks between Government and non-Government schools in Year 9 than in Year 3 is due to the characteristics of the schools. This implies that, consistent with recent press coverage, the better outcomes of the non-Government schools are due, at least in part, to a selection process that favours academically advantaged students.

I. Introduction

Studies of academic performance in primary and secondary school have canvassed a variety of issues. These range from very specific matters, such as the impact of the age of starting school, to more general matters, such as the role of school resources. One of the topics studied that has generated considerable debate concerns differences in student achievement between Government and non-Government schools. As expressed by Coleman et al. (1982, p. 65), ‘The principal issue addressed by empirical data … is that private schools provide better education than that provided by public schools’.1 Coleman et al. (1982, p. 76) conclude ‘… there is, in vocabulary and mathematics, higher achievement for comparable students in Catholic and other-private schools than in public: the results are less consistent in reading’. Most subsequent studies report a similar finding. For example, Dewey et al. (2000), in a meta-analysis based on 46 studies and 127 separate estimating equations, find that public school students have lower test scores than private school students. These differences have also captured the attention of the popular press. For example, each year in Australia attention is drawn in newspapers across the country to the types of school attended by high achievers in the annual tertiary entrance exam.2

The research reported below adds to the knowledge on this topic through a study of schools in Australia using data from the National Assessment Program—Literacy and Numeracy (NAPLAN), which was introduced in 2008. This program provides a common annual national assessment of all students in Years 3, 5, 7 and 9 in Government and non-Government schools across Australia. The tests cover Reading, Writing, Language Conventions (Spelling; Grammar & Punctuation) and Numeracy skills. The advantages of the study are that it covers, practically speaking, all students within a country, and that it provides assessments at four year levels. The data are made available to the public through a website called ‘My School’. This website also provides a range of additional data, including a measure of socioeconomic status for the school, total enrolments at the school, the gender mix at the school and the attendance rate at the school. The data are made available, however, in the form of school averages: data on individual students are not available to the public.3 Nevertheless, the availability of data on all schools makes this a rich source of information for analysis of student outcomes by school sector.

The structure of this paper is as follows. Section II provides a brief overview of the Australian literature on differences in educational outcomes between Government and non-Government schools. Section III describes the data. Section IV presents the results of the statistical analyses for Government schools and for the two types of non-Government schools, Catholic and Independent (largely schools linked to religions other than the Catholic Church). Section V contains a summary and conclusion.

II. Literature Review

Studies of student and school outcomes in Australia can be interpreted from the perspective of the production function framework popularised by Hanushek (1986) and others in the US. These studies propose that student and school characteristics are inputs into the production of academic outcomes by the school, as measured by test scores. While there is a large number of these production function-type studies, only a few cover differences between Government and non-Government schools. The limited coverage of school type appears to be attributable to the reporting requirements of schools: only Government schools report to the state departments of education that are the source of data for a number of empirical studies in Australia. Thus, the studies that analyse the relative academic outcomes of Government and non-Government schools have largely been based on surveys of students rather than on administrative data. One exception is the set of studies of the academic performance of university students according to the type of high school attended, which are based on administrative data from the University of Western Australia (Win & Miller, 2005; Birch & Miller, 2007) and Monash University (Dobson & Skuja, 2005).

There are six studies of note that cover student outcomes in primary and secondary school according to the type of school attended: the study by Silins and Murray-Harvey (2000) of outcomes in secondary schools in South Australia, the nation-wide analyses of student outcomes by Marks et al. (2001) and Lamb et al. (2004), the studies based on the Australian component of the Organisation for Economic Co-operation and Development's 2003 Program for International Student Assessment (PISA) by Marks (2007, 2009), and Thomson et al.'s (2011) analysis of the 2009 PISA test scores. Silins and Murray-Harvey (2000) use only school-average data in their research, whereas Marks et al. (2001), Lamb et al. (2004), Marks (2007, 2009) and Thomson et al. (2011) use both individual-level data and school-average data (in multi-level modelling). Table I describes the datasets used in these studies. These studies consistently suggest that socioeconomic background, prior academic achievement and school location (primarily metropolitan or rural areas) are important determinants of student and school outcomes. The role of gender is inconclusive in the studies under consideration. Overall, the general findings do not seem to be sensitive to the types of data and the measures of academic performance used in the analysis. The remainder of this review will concentrate on the variations in student and school outcomes due to the impacts of school sector.4

Table I. Datasets used in the Australian literature

| Study | Dataset(s) |
| --- | --- |
| Lamb et al. (2004) | (i) Longitudinal Surveys of Australian Youth, student cohort in Year 9 in 1995; (ii) Third International Mathematics and Science Survey; (iii) primary schools achievement data for the state of Victoria, and secondary school Year 12 data provided by the Victorian Curriculum and Accreditation Authority. |
| Marks (2007, 2009) | The 2003 cohort of the Longitudinal Surveys of Australian Youth project, which includes the Australian component of the OECD's 2003 Program for International Student Assessment. |
| Marks et al. (2001) | Longitudinal Surveys of Australian Youth, focusing on the student cohort in Year 9 in 1995. Data collected up to and including 1999 for respondents who completed Year 12 in 1998 with valid information on their academic performance. |
| Silins and Murray-Harvey (2000) | Diverse sample of 30 secondary schools (approximately one fifth of the secondary schools in South Australia): 21 Government schools, six Independent schools and three Catholic schools. |
| Thomson et al. (2011) | The 2009 cohort of the Longitudinal Surveys of Australian Youth project, which includes the Australian component of the OECD's 2009 Program for International Student Assessment. |

Using a latent variable partial least squares path analysis procedure, Silins and Murray-Harvey (2000) modelled the predictors of school outcomes in South Australia. Their path model indicated that Independent schools and the larger Government schools were successful at retaining students to post-compulsory levels and maintaining commitment to completion and academic success. They contended that the larger Government schools are likely to attract more funding than their smaller counterparts, which in turn allowed them to compete on a more equal footing with the Independent schools.5 In contrast, the relatively smaller Independent schools were argued to have the advantage of offering an atmosphere conducive to promoting positive social relations.6 The path model indicated that socioeconomic background was a strong predictor of academic outcomes, and that a major part of the influence of this variable occurred via the school sector. In other words, students from higher socioeconomic backgrounds were more likely to attend non-Government schools. Furthermore, the results indicated that students from higher socioeconomic status backgrounds who did not end up attending Independent schools were more likely to attend the larger Government schools.

Marks et al. (2001) examined the variations in tertiary entrance performance of Year 12 students in 1998 based on data from the Longitudinal Surveys of Australian Youth.7 They established that school sector had a strong influence on the results for the high school examinations used for entrance into tertiary institutions (ENTER). There was a 12-point difference in the mean ENTER scores in favour of Independent school students over Government school students. Catholic school students and Government school students differed by six points, in favour of those attending Catholic schools. The authors point out that these score differences due to sector are comparable to those associated with differences in socioeconomic status. It was also noted that the academic and socioeconomic mix of students accounted for about one-half of the difference in the ENTER scores between Government and Independent schools. That is, the score premium for Independent school students was reduced to around six points once prior academic achievement and socioeconomic background were taken into account. However, statistical control for these factors had a different impact on the comparison between Catholic and Government schools. The differences in ENTER scores between students attending Catholic and Government schools declined by only about 20 per cent after accounting for students' prior academic achievements and socioeconomic mix. Marks et al. (2001) reported that the effect of socioeconomic background was similar across the school sectors. Moreover, the results from the analyses indicated that school-level variables (e.g. a general measure of the academic environment given by the mean of the prior academic achievements of the students) played a role in explaining the score differences between students attending Government and Independent schools.

Lamb et al. (2004), as a whole, concurred with the findings of Marks et al. (2001) reported above. Lamb et al. (2004) contended that the sector organisation into Government and non-Government schools in Australia segregates students along social and academic lines. Independent schools were found to account for a greater proportion of students from the highest quintile of socioeconomic status (SES) and highest general achievement band. Lamb et al. (2004) found that effective schools exist in both the Government and non-Government sectors, as there are numerous Government and non-Government schools performing well above expected levels, after taking into account the effects of student intake.

Marks (2007) focuses on the impact of school sector and school-level factors on early school leaving (not completing Year 12). A rich set of explanatory variables was considered, including information on the family background of the student, and school-level data on the quality of the school's material resources, student and teacher morale, student behaviour, teacher behaviour and teacher shortages. Attendance at an Independent school rather than at a Government school reduced the chances of leaving school early, whereas attendance at a Catholic school did not have a statistically significant effect in this regard. Marks (2007) concluded that schools do not have any major independent influence on school leaving, as only a few schools had substantially higher or lower early school leaving than expected after individual student characteristics, including prior academic achievement as measured by the PISA scores, were taken into account.8

Using similar methods, Marks (2009) examined the extent to which school-sector differences in university entrance performance (ENTER scores) were accounted for by students' socioeconomic background, prior achievement, various aspects of student learning and school-level factors. The ENTER score data were obtained from self-reports in telephone interviews with participants in the 2005 or 2006 waves of interviews for the 2003 cohort of the Longitudinal Surveys of Australian Youth. Marks (2009) reported that differences across school sectors in the socioeconomic background and prior academic achievement of students accounted for almost one-half of the score gap between Independent and Government schools, and 30 per cent of that between Catholic and Government schools. School-level variables were shown to account for about 40 per cent of the remaining differences.9 In other words, the individual-level and school-level variables considered were jointly associated with about 70 per cent of the Independent school effect and 60 per cent of the Catholic school effect observed in the unadjusted data. These school sector effects were argued by Marks to represent ‘moderate valued added effects for school sector’ (p.19).10

Thomson et al. (2011) report on the Australian component of the 2009 PISA. This covers tests of 15 year olds in that year, most of whom were in Year 10, although significant numbers were in Years 9 and 11. Consistent with the findings from other studies, they report that students from Independent schools have better academic outcomes than students from Catholic schools, who in turn have better academic outcomes than students from Government schools. Adjustment for the socioeconomic background of the students was associated with a removal of the Independent schools-Catholic schools score differential, and a reduction of the advantage of students at non-Government schools over their Government schools counterparts (by 43 per cent in the case of Independent schools, and by 20 per cent in the case of Catholic schools). Further control for the school-level socioeconomic background resulted in the complete erosion of the academic advantage of Catholic and Independent schools over Government schools. Thus Thomson et al. (2011, p.62) conclude ‘… students in the Catholic or independent school sectors bring with them an advantage from their socioeconomic background that is not as strongly characteristic of students in the government school sector’. This result is stronger than that reported by Marks (2009), though his analysis was based on ENTER scores from Year 12, with a control for prior academic achievement provided by the PISA scores at age 15.

Thus, the Australian research on student and school outcomes has generally shown that it matters whether the school is a Government or a non-Government school, though the studies differ with respect to how much it matters. However, being based on relatively small numbers of schools, the studies have not explored these school sector differences in depth. Moreover, as many are based on a composite outcome measure (e.g. the ENTER score of Marks et al. (2001) and Marks (2009)), it is unclear if the sector differences apply to all dimensions of learning. The US literature (see, for example, Coleman et al. (1982)) has shown that the school sector effects are subject specific. The current study provides a comprehensive analysis of the school sector differences by examining five separate test outcomes, using contemporary data on (practically) all schools in Australia.11

III. Data

This study is based on the National Assessment Program—Literacy and Numeracy (NAPLAN) data published on the My School website. NAPLAN is a common annual national assessment for all students in Years 3, 5, 7 and 9 in Government and non-Government schools across Australia. The tests cover Reading, Writing, Language Conventions (Spelling; Grammar & Punctuation) and Numeracy skills. Under NAPLAN, students are assessed on the same days across all States and Territories (for example, the test dates in 2009 were 12–14 May). The tests are designed as grade-based rather than age-based instruments. Students with a language background other than English who arrived in Australia less than one year before the tests, and students with significant intellectual disabilities, may be exempted from testing. About one to two per cent of students are exempted from the tests. As well, parents may withdraw their children from the tests. However, the test participation rates (which include exempt students) are very high, at 95 per cent or more for Years 3, 5 and 7, and just several percentage points lower for Year 9. The assessments across Years 3, 5, 7 and 9 are undertaken using a common scale, which ranges from zero to 1000. Consequently, the mean scores for school sectors for Year 9, which typically range between 550 and 600 (see the lower portion of Table II), are higher than those for Year 3, which typically range between 385 and 425 (see the upper portion of Table II).

Table II. Means and standard deviations of school outcomes on NAPLAN domains, Years 3 and 9

| | Grammar | Numeracy | Reading | Spelling | Writing | Sample size |
| --- | --- | --- | --- | --- | --- | --- |
| Year 3 | | | | | | |
| Government schools | 401.513 (51.61) | 386.543 (45.55) | 399.281 (48.98) | 393.067 (43.22) | 401.894 (41.93) | 4,469 |
| Independent schools | 434.548 (47.95) | 411.479 (41.34) | 432.654 (46.17) | 418.411 (40.99) | 424.670 (37.89) | 743 |
| Catholic schools | 426.096 (39.14) | 397.406 (35.89) | 422.785 (38.94) | 413.268 (37.30) | 423.463 (32.89) | 1,267 |
| All schools | 410.109 (50.69) | 391.527 (44.14) | 407.705 (48.48) | 399.923 (43.12) | 408.724 (41.13) | 6,479 |
| Year 9 | | | | | | |
| Government schools | 551.859 (42.60) | 570.752 (42.31) | 560.647 (41.51) | 555.259 (41.89) | 540.790 (53.29) | 1,330 |
| Independent schools | 597.922 (40.04) | 609.141 (37.19) | 598.690 (35.30) | 590.690 (35.24) | 586.719 (46.23) | 604 |
| Catholic schools | 585.630 (28.40) | 596.858 (23.85) | 592.116 (22.86) | 585.833 (25.32) | 581.932 (32.59) | 414 |
| All schools | 569.663 (44.85) | 585.230 (41.94) | 575.982 (41.23) | 569.764 (41.27) | 559.859 (53.12) | 2,348 |

Notes: Standard deviations in parentheses. All pairwise comparisons of means for a particular NAPLAN domain are significantly different at the 10% level, except for the Independent-Catholic comparison for writing in each year.

The My School website contains the NAPLAN results for 2008 and 2009, and a range of statistical and contextual information for about 10,000 Australian schools.12 The statistical and contextual information covers: (i) data on the student population of each school, specifically an Index of Community Socio-Educational Advantage (ICSEA),13 which is a measure of the socio-educational status of the school student body, and the student attendance rate;14 (ii) factual details on the school, which include the school sector, the year range catered for, student and staff numbers; and (iii) a school statement which provides details such as the school's mission, values and any special programs or features of the school. The empirical analyses of this paper focus on the 2009 NAPLAN test. For space reasons, the analyses focus on grammar and numeracy scores, for Years 3 and 9, although comment is provided on other test domains and for Years 5 and 7. Detailed results for these other tests and years are available from the authors.15

Table II presents the means and standard deviations of the NAPLAN test scores in Years 3 and 9 by school sector. The means and standard deviations for the explanatory variables used in the statistical analysis are presented in Appendix A. The average Year 3 test scores for Independent schools are consistently the highest across the three sectors, while the Government schools' average test scores are the lowest in each of the five test domains. The standard deviation, however, is greatest for the Government schools whereas the variation in test performance is the smallest across the Catholic schools.

The Independent-Government school differential for Year 3 is greatest in the case of the reading test, at 33.4 points, and it is smallest in the case of the writing test, at 22.8 points. In comparison, the Catholic-Government school differential in Year 3 is greatest for the grammar test (24.6 points) and smallest for the numeracy test (10.9 points). All these differentials are statistically significant.

The directions of the Government-non-Government school relativities in terms of mean outcomes and standard deviations for Year 3 carry across to the data for Years 5 and 7, though there are differences in size when specific test domains are examined. Thus, the data for both Years 5 and 7 show that the Independent-Government school differential is greatest for the grammar test and smallest for the spelling test. For the Catholic-Government comparison, the maximum and minimum differentials are for the grammar and numeracy tests, respectively.

The lower segment of Table II shows the means and standard deviations of the NAPLAN test scores in Year 9 by school sector. The score differentials are larger for the Independent-Government schools comparisons than for the Catholic-Government schools comparisons across all school years and test domains. The score differentials between the Government and non-Government schools are also observed to increase with the educational levels. The smallest increases in both the Independent-Government school differential and the Catholic-Government school differential are found for reading, at four and seven points, respectively. The greatest increases are found in the writing test, where the Independent-Government schools differential widened by 23 points and the Catholic-Government schools differential widened by 21 points.16 The standard deviations across schools within a sector are, however, smaller for the Year 9 data than they are for the Year 3 data. In other words, the variation in test performance within school sectors is smaller in secondary education compared to primary education.

IV. Statistical Analyses

The reasons for the considerable differences in school outcomes according to the types of schools outlined in Section III can be examined in a limited way through inclusion of different intercept terms for each of the three broad types of schools in an econometric model of the determinants of the NAPLAN test results. Under this approach, the estimating equation can be written as: NAPLAN test scores = f(Independent school, Catholic school, ICSEA, Proportion females, Attendance rate, School size, State, Region, Combined school), where Government school is the benchmark group for capturing the impacts of different school sectors on NAPLAN outcomes.17 While useful, this approach may mask differences across Government and non-Government schools in the structural relationships between school outcomes and the various regressors. For example, the links between the socioeconomic status of the school's student body and school outcomes may differ across Government, Independent and Catholic schools, and such differences may provide information on why there are differences in the mean school outcomes across these types of schools.
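
To make this specification concrete, the following is a minimal sketch of how the pooled equation could be estimated with school-level data. The file and column names (e.g. `naplan_year3.csv`, `prop_female`) are illustrative assumptions, not the authors' actual code or the My School data layout.

```python
import pandas as pd
import statsmodels.formula.api as smf

# School-level data: one row per school, with NAPLAN means and characteristics.
# File and column names are hypothetical placeholders.
schools = pd.read_csv("naplan_year3.csv")

# Pooled specification: Government schools are the omitted benchmark sector,
# so the Independent and Catholic dummies capture the sector intercept shifts.
pooled = smf.ols(
    "grammar_score ~ independent + catholic + icsea + prop_female"
    " + attendance_rate + school_size + C(state) + C(region) + combined_school",
    data=schools,
).fit(cov_type="HC1")  # heteroskedasticity-consistent, as in the paper's tables

print(pooled.summary())
```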

Tests for sector differences

In order to examine the impact of school sector on the NAPLAN results in greater detail, the model of school outcomes is estimated on samples partitioned according to school sector. This general approach allows two broad sets of questions to be answered. First, does the overall structure of the relationship between school outcomes and the various regressors used in the analysis differ across Government schools, Independent schools and Catholic schools? An F-test of overall equality of the coefficients of the model can be used to address this matter. Second, does the impact of any specific regressor, such as the ICSEA or the proportion of the school enrolment that is female, differ between particular school sectors? A t-test of whether two given coefficients are of the same value will be used to address this question.
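
The two tests can be implemented as follows. This is a sketch under the same assumed column names as above, using the standard Chow-style F-statistic for overall coefficient equality and a two-sample t-statistic (treating the sector samples as independent) for individual coefficients; it is not the authors' code.

```python
import numpy as np
import statsmodels.formula.api as smf

FORMULA = (
    "grammar_score ~ icsea + prop_female + attendance_rate + school_size"
    " + C(state) + C(region) + combined_school"
)

def sector_f_test(df, sector_a, sector_b):
    """Chow-style F-test of overall coefficient equality between two sectors."""
    fit_a = smf.ols(FORMULA, data=df[df.sector == sector_a]).fit()
    fit_b = smf.ols(FORMULA, data=df[df.sector == sector_b]).fit()
    both = df[df.sector.isin([sector_a, sector_b])]
    restricted = smf.ols(FORMULA, data=both).fit()  # common coefficients imposed
    k = len(restricted.params)                      # number of restrictions tested
    rss_u = fit_a.ssr + fit_b.ssr                   # unrestricted residual sum of squares
    dof = fit_a.nobs + fit_b.nobs - 2 * k
    return ((restricted.ssr - rss_u) / k) / (rss_u / dof)

def coefficient_t_test(fit_a, fit_b, name):
    """t-statistic for equality of one coefficient across two fitted equations."""
    diff = fit_a.params[name] - fit_b.params[name]
    return diff / np.sqrt(fit_a.bse[name] ** 2 + fit_b.bse[name] ** 2)
```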

The F-test statistics, presented in Table III, indicate that the structure of the relationship between the regressors considered above and school outcomes differs significantly across Government, Independent and Catholic schools. In other words, it is inappropriate to constrain all the parameter values to be the same across the three school sectors.18

Table III. Test statistics for F-tests of coefficient equality across school sectors, Years 3 and 9

| NAPLAN test | Independent vs Government: Year 3 | Independent vs Government: Year 9 | Catholic vs Government: Year 3 | Catholic vs Government: Year 9 | Independent vs Catholic: Year 3 | Independent vs Catholic: Year 9 |
| --- | --- | --- | --- | --- | --- | --- |
| Grammar | 9.76 | 16.25 | 13.36 | 7.28 | 3.50 | 2.36 |
| Numeracy | 6.57 | 7.89 | 4.30 | 1.93 | 3.82 | 2.08 |
| Reading | 11.02 | 11.00 | 12.77 | 7.44 | 3.03 | 2.51 |
| Spelling | 8.69 | 10.62 | 10.34 | 4.24 | 2.03 | 1.94 |
| Writing | 7.99 | 10.20 | 18.10 | 5.70 | 2.10 | 2.05 |

Note: Given the number of observations and number of constraints, the critical value for the F-test at the 5% level is 1.65. Hence all test statistics are statistically significant.

Table IV presents the results from the separate analyses for grammar and numeracy scores for Year 3. Table V lists similar results for Year 9. The asterisks in these tables denote estimated coefficients for non-Government schools that are significantly different from the corresponding estimate for Government schools.

Table IV. Estimates of model of determinants of NAPLAN Year 3 grammar and numeracy scores, by school sector

| Variable | Grammar: Government | Grammar: Independent | Grammar: Catholic | Numeracy: Government | Numeracy: Independent | Numeracy: Catholic |
| --- | --- | --- | --- | --- | --- | --- |
| Constant | −115.442 (5.61) | −41.806 (0.86) | 119.365 (4.78) | −5.635 (0.26) | 108.784 (3.82) | 138.408 (5.46) |
| ICSEA | 0.298 (38.90) | 0.270 (13.46) | 0.274 (25.45) | 0.251 (32.77) | 0.243 (15.06) | 0.226 (19.30) |
| School size | 0.009 (4.41) | 0.009 (2.72) | 0.010 (2.60) | −0.001 (0.47) | 0.010a (3.26) | −0.001 (0.27) |
| Proportion of students female | 66.176 (4.74) | 29.993a (6.01) | 27.930a (2.80) | 10.854 (0.80) | −2.059 (0.41) | −9.473 (0.79) |
| Attendance rate | 2.069 (8.19) | 1.976 (3.84) | 0.213a (0.92) | 1.537 (5.78) | 0.632a (2.14) | 0.471a (1.95) |
| State dummies (omitted group: NSW) | | | | | | |
| Australian Capital Territory | −12.321 (3.87) | 1.812 (0.29) | −15.501 (3.47) | −12.637 (4.43) | −6.761 (0.84) | −18.551 (4.60) |
| Northern Territory | −35.811 (6.73) | −16.253 (0.76) | −44.727 (5.58) | −24.296 (5.07) | −17.119 (1.39) | −39.302 (3.74) |
| Queensland | −21.586 (14.93) | −13.530a (3.93) | −28.910a (13.12) | −18.376 (12.83) | −23.904 (7.96) | −29.326a (13.34) |
| South Australia | −12.773 (7.16) | −9.300 (2.72) | −14.123 (4.86) | −17.415 (9.81) | −23.558 (7.33) | −18.345 (5.46) |
| Tasmania | −7.099 (2.56) | 0.607 (0.07) | −19.466a (3.85) | −4.839 (1.92) | −1.959 (0.30) | −17.500a (3.64) |
| Victoria | 7.188 (5.68) | 0.809 (0.26) | −4.987a (2.77) | 7.201 (5.57) | −1.251a (0.43) | −7.689a (4.27) |
| Western Australia | −15.724 (9.15) | −15.985 (3.66) | −22.017 (7.89) | −13.066 (8.52) | −16.378 (4.13) | −23.433a (8.60) |
| Regions (omitted group: metropolitan) | | | | | | |
| Provincial | 0.836 (0.74) | −4.894 (1.68) | −0.285 (0.16) | 0.928 (0.84) | −4.996a (1.97) | 1.467 (0.84) |
| Remote | 9.672 (2.53) | 4.869 (0.21) | −1.426 (0.23) | 10.349 (3.03) | 23.533 (1.37) | 5.134 (0.84) |
| Very remote | 21.149 (3.69) | −34.025a (0.94) | −19.048a (2.00) | 13.229 (2.33) | −21.275a (0.99) | −15.967a (1.69) |
| School type (omitted group: primary school) | | | | | | |
| Combined school | −1.523 (0.61) | −2.365 (0.70) | −2.229 (0.66) | 0.999 (0.42) | −4.319 (1.42) | 3.458 (0.99) |
| Adjusted R² | 0.626 | 0.567 | 0.581 | 0.542 | 0.531 | 0.500 |
| Sample size | 4,469 | 743 | 1,267 | 4,469 | 743 | 1,267 |

Notes: Heteroskedasticity-consistent t-statistics in parentheses. a = coefficient significantly different from the respective coefficient for Government schools.
Table V. Estimates of model of determinants of NAPLAN Year 9 grammar and numeracy scores, by school sector

| Variable | Grammar: Government | Grammar: Independent | Grammar: Catholic | Numeracy: Government | Numeracy: Independent | Numeracy: Catholic |
| --- | --- | --- | --- | --- | --- | --- |
| Constant | 117.444 (6.66) | 142.951 (5.23) | 149.819 (3.35) | 161.100 (7.70) | 215.415 (10.06) | 204.595 (5.31) |
| ICSEA | 0.253 (17.02) | 0.330a (18.76) | 0.257 (20.01) | 0.227 (15.75) | 0.288a (20.61) | 0.233 (18.94) |
| School size | 0.010 (5.21) | 0.003a (1.05) | 0.008 (2.79) | 0.010 (5.12) | 0.013 (5.55) | 0.006 (2.20) |
| Proportion of students female | 35.109 (3.43) | 23.950 (6.29) | 24.531 (10.05) | −0.275 (0.02) | −4.528 (1.06) | −7.753 (3.26) |
| Attendance rate | 1.836 (6.74) | 0.912a (3.50) | 1.587 (3.27) | 2.628 (7.87) | 0.964a (4.11) | 1.641 (3.88) |
| State dummies (omitted group: NSW) | | | | | | |
| Australian Capital Territory | −0.715 (0.19) | 6.082 (1.22) | 2.004 (0.56) | −12.309 (3.18) | −7.083 (1.03) | −7.881 (2.85) |
| Northern Territory | −17.858 (2.11) | −3.903 (0.47) | 4.942 (0.64) | −10.866 (1.75) | −3.144 (0.62) | −2.195 (0.41) |
| Queensland | −0.477 (0.26) | 10.830a (3.81) | 7.278a (3.15) | −10.555 (5.35) | −3.681 (1.43) | −3.774 (1.69) |
| South Australia | 2.739 (1.36) | 6.312 (2.36) | 0.982 (0.34) | −9.292 (3.94) | −7.249 (2.25) | −9.511 (3.16) |
| Tasmania | −6.989 (2.49) | 9.767a (1.79) | −6.133 (1.70) | −18.927 (6.75) | −10.167 (2.10) | −13.010 (3.76) |
| Victoria | −6.721 (3.72) | 3.638a (1.29) | 0.561a (0.32) | −9.339 (4.11) | −3.508 (1.42) | −2.785 (1.47) |
| Western Australia | −5.671 (2.08) | −7.188 (2.53) | −10.089 (3.33) | −11.459 (4.00) | −13.061 (4.69) | −9.727 (3.91) |
| Regions (omitted group: metropolitan) | | | | | | |
| Provincial | 8.856 (5.71) | 6.629 (2.85) | 8.730 (4.89) | 2.887 (1.77) | 1.414 (0.67) | 4.031 (2.25) |
| Remote | 13.145 (3.24) | −18.122a (0.82) | 7.747 (1.35) | 11.944 (3.47) | 10.930 (1.08) | 1.361 (0.37) |
| Very remote | 14.021 (2.90) | −13.034a (1.09) | −116.249a (8.77) | 26.903 (4.85) | 21.837 (2.23) | 58.430 (5.51) |
| School type (omitted group: secondary school) | | | | | | |
| Combined school | −4.426 (2.60) | 6.540a (1.61) | 6.329a (3.20) | −3.689 (2.02) | 0.530 (0.11) | 3.707a (2.09) |
| Adjusted R² | 0.680 | 0.683 | 0.739 | 0.584 | 0.676 | 0.633 |
| Sample size | 1,330 | 604 | 414 | 1,330 | 604 | 414 |

Notes: See Table IV.

Analyses for Year 3

The first three columns in Table IV list the results from the analysis of the Year 3 NAPLAN grammar outcomes for the three separate school sectors: Government, Independent and Catholic. The last three columns report the analysis of numeracy outcomes for the three sectors.19 The Government schools constitute almost 70 per cent of the total sample in the analysis for Year 3, while Independent schools account for around 10 per cent of the sample and Catholic schools account for 20 per cent. The discussion will start with the determinants of grammar scores for the Government schools.

There are 15 regressors included in the model, 13 of which are statistically significant. The adjusted R-squared indicates that the model explains about 63 per cent of the variation in the school-average grammar test scores. The school's ICSEA, which is expected to be the main explanatory variable, has a coefficient of 0.30. This implies a gain of around 30 points in the grammar test with each one-standard deviation (100 points) increase in the ICSEA.20 To put this figure in perspective, the mean Year 5 grammar score is around 80 points higher than the mean Year 3 grammar score, implying an increase of around 40 points per year of study. The variation in the test outcome associated with a one-standard deviation increase in the school ICSEA is therefore equivalent to three-quarters of a year of study.
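
As a quick check of this arithmetic, the conversion from the estimated coefficient to 'years of study' can be reproduced as a short worked sketch using the figures quoted above:

```python
icsea_coef = 0.298                    # Year 3 grammar, Government schools (Table IV)
one_sd_gain = icsea_coef * 100        # one SD of ICSEA is roughly 100 points -> ~30 marks
points_per_year = 80 / 2              # Year 5 mean exceeds Year 3 by ~80 marks over two years
print(one_sd_gain / points_per_year)  # ~0.75, i.e. three-quarters of a year of study
```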

The school size variable, though highly significant, shows that there is only a minor difference in scores between small and large schools. Thus, with a coefficient of 0.009, an increase in school size of 100 students will be associated with an increase of just under one point in the average grammar score for the school. School size has typically been of little importance in previous studies of academic outcomes in Australia. The female-male composition variable ranges from zero (all-boys school) to one (all-girls school), and it is a statistically significant and materially important determinant of variations in the school-average grammar scores. The coefficient of this variable indicates that the gender difference in the average grammar scores is around 66 marks in favour of female students. Most Australian research has shown that girls have higher school achievements than boys (for example, Marks et al., 2001), although the impact recorded here for Government schools exceeds the impact typically reported in the literature (which is more in line with the estimated effects for non-Government schools in Table IV).

The variable that records students' attendance rates reveals that this measure of student commitment has a positive association with grammar test outcomes. The attendance rate is measured in percentages, and a one standard deviation (4.32 percentage points) increase in the attendance rate is associated with an improvement of around nine marks on the grammar score. When attendance rate is excluded from the estimation, a minor increase is observed in the effect of ICSEA for the Government and Independent schools (by about ten per cent), but no material change can be noted for the Catholic school sector.

The State dummies included in the model show that schools in Victoria have the highest achievement in the Year 3 grammar test, ceteris paribus, followed by New South Wales. There are differences of around 7 to 36 marks between schools in the other States and Territories and schools in New South Wales. The negative effect associated with Queensland could reflect the one year less of formal schooling that Year 3 students in that State have compared to Year 3 students in other States (see ACARA, 2009; Masters, 2009; Miller & Voon, 2011a). Convincing explanations for the other State/Territory effects, however, cannot be developed at this stage.21

The findings for Independent and Catholic schools in Table IV show that most of the main effects, such as that associated with the ICSEA, are broadly the same across the school sectors. The female-male composition of the school is one characteristic where statistically significant differences across the school systems emerge, with both Independent and Catholic schools being associated with much lower positive coefficients, of 29.99 and 27.93, respectively. These are less than one-half of the respective coefficient for Government schools. Thus, the gender gap in grammar scores is less pronounced in the non-Government school systems compared to the Government school system.

In the case of the numeracy score estimation for the Government schools in Table IV, the results differ slightly from those for the grammar scores. There are two main differences that will be commented on. First, the variable for the proportion of female students is insignificant in the case of numeracy test outcomes, while this gender impact is highly significant and positive when school-average grammar scores are considered. In other words, while females are associated with superior academic performance on the NAPLAN grammar tests, this advantage does not carry across to the numeracy tests. Lamb et al. (2004) also reported that the gender of the student was not statistically significant in analyses of achievements in mathematics in junior secondary school. Second, the school size variable is insignificant in the numeracy model, whereas school size is associated with a modest positive impact on test performances in the model for grammar scores.

There are only a few differences across the school sectors in the estimated effects in the models of the determinants of numeracy outcomes. One of these is the impact of school size for Independent schools, where this variable is positive and highly significant, suggesting that larger Independent schools are associated with better numeracy outcomes in the NAPLAN. The magnitude of this impact is, again, only modest, with an increase in school size of 100 students being associated with a one-mark improvement in the average numeracy score for the school. Another difference is that the effect of the attendance rate in both of the non-Government school sectors is noticeably lower than that in the Government schools. In particular, the positive impact of the attendance rate (0.63 for Independent and 0.47 for Catholic schools) is less than half of that reported for the Government schools (1.54). Finally, while there are some differences across school sectors in the effects on the NAPLAN numeracy outcomes associated with the States and regions, these generally follow the patterns evident in the NAPLAN grammar outcomes.

The results from the analyses of the other three test domains—reading, spelling and writing—are broadly similar to those reported for grammar in Table IV. There are, however, two differences worth noting. First, the school ICSEA variable has a smaller partial impact (by around 30 per cent) in the spelling and writing models than it has in the model of the school-average grammar outcomes. This appears to be due to the way the ICSEA variable was constructed (see Miller & Voon, 2011b). Second, the gender effects in the models of spelling, reading and writing outcomes are typically lower than the effect estimated for the model of grammar outcomes. These patterns are observed within each school sector.

Summary of findings for Years 5 and 7

The total number of schools, as well as the sectoral composition, in Year 5 is almost identical to that in Year 3. The Year 7 dataset (3,804 schools) is smaller than those for Years 3 and 5, which contain almost 3,000 more schools; this reflects the movement of many students from smaller primary feeder schools to larger combined and secondary schools. It is possible to undertake a pseudo value-added approach when using the Year 5 data (see Lamb et al., 2004; Miller & Voon, 2011b), whereby the school-average score for Year 3 is included as a crude control for the prior academic achievement of the Year 5 group. However, the estimating equation used here for Year 5 is the same as that used for Year 3. The specification of the estimating equation used in the analyses of the Year 7 NAPLAN test outcomes has an additional school type variable for secondary schools. This reflects differences across school systems: Year 7 is positioned in primary schools in some systems and in secondary schools in others.

The results for these intermediate years are, in general, broadly similar to those described for Year 3. However, there are five points of note. First, the gender gap in favour of female students is smaller in Years 5 and 7 than that documented for Year 3. This echoes the immersion effect put forward by Tinto (1975). However, the insignificant gender effect on numeracy test outcomes persists in most cases into Years 5 and 7. The exceptions in this regard occur for Catholic schools in Years 5 and 7, where a higher percentage of a school's enrolment being male is associated with a statistically significant advantage in numeracy test achievement.

Second, the coefficients on the States and Territories dummies exhibit greater homogeneity in the models estimated for Years 5 and 7 than they do in the model for Year 3. The finding for Queensland for Years 5 and 7, where the test outcomes are closer to those for the other States and Territories, is interesting, given that the large, negative coefficient for the Queensland State dummy in the analysis of the Year 3 data was linked to the smaller number of years of schooling that the typical Year 3 student in that State had completed. This suggests that there is pronounced achievement catch-up.

Third, while the school type variable in Year 3 indicates that there is no difference in test performance between primary and combined schools, the combined school variable is estimated to be associated with a modest negative impact in some test models in Year 5. Fourth, the school size variable is associated with significant and positive, though modest, impacts on numeracy scores in the Government schools for Years 5 and 7. This variable was insignificant in the analysis of the determinants of the numeracy domain in the Government schools sector for Year 3. Lastly, secondary-only schools in the Catholic sector are associated with lower Year 7 test achievements than primary-only schools. For example, on average, Catholic secondary-only schools score 14 points lower on the grammar test than their primary-only counterparts. The estimated impact of the secondary-only school type variable for Catholic schools in the grammar and reading tests is significantly different from that estimated for Government schools.22

Analyses for Year 9

Table V lists results from the analyses of the Year 9 grammar and numeracy scores. The number of schools for this segment of the analyses is 2,348; of these 56 per cent are Government schools (70 per cent for Year 3), 26 per cent are Independent schools (10 per cent for Year 3), and 18 per cent are Catholic schools (20 per cent for Year 3). The adjusted R-squareds indicate that the models in Year 9 explain more of the variation in the school-average scores than the model estimated using the Year 3 data. For instance, 68 per cent of the variation in grammar scores in Government schools is accounted for when the Year 9 data are used, compared to 63 per cent for the Year 3 data. The overall patterns of findings reported for Year 9 are broadly similar to those discussed above for Years 3, 5 and 7. The discussion that follows will therefore focus only on three features of the Year 9 results.

First, there are large gender effects in many of the analyses of the Year 9 data. Pronounced gender effects in favour of females in tertiary entrance examinations and in the first year of university studies have been reported in a number of Australian studies (see, Dobson and Skuja (2005), and the references therein). Dobson and Skuja (2005, p.59) for example, report that ‘In general, however, it is female students who (on average) gain higher marks’.

Second, the coefficients on the States and Territories dummies in Year 9 are typically smaller than those in the analyses of the data for Years 3, 5 and 7. There are also some changes in rankings in Year 9 compared to Years 3, 5 and 7, though the changes are often specific to particular NAPLAN domains and school sectors. The general trend of dissipation of the differences across States and Territories observed in this study suggests that the apparent advantages of some States in the Year 3 NAPLAN tests cannot be maintained into the later years of study.

Third, turning to the underlying structure of the estimated relationships for the NAPLAN Year 9 outcomes in Independent and Catholic schools, several interesting differences across the school sectors are observed. For example, the partial effect of the ICSEA for Independent schools is significantly different from that for Government schools across all test domains, while the partial effect of the ICSEA for Catholic schools is not significantly different from that for Government schools in any test domain. In the analyses of the Year 3 NAPLAN outcomes, the ICSEA variable was shown to have similar impacts in each school sector.

Decomposition analyses

The results presented in Tables IV (Year 3) and V (Year 9) can be used to decompose the differences in overall school outcomes in the Government and non-Government sectors into components due to: (i) the different characteristics of the schools in each sector (e.g. different ICSEA, different female mix); and (ii) the differences in the estimated effects of these characteristics discussed above. This can be achieved using the Blinder/Oaxaca decomposition that has been used extensively in the gender wage decomposition literature (Blinder, 1973; Oaxaca, 1973). As applied to the NAPLAN data, this decomposition entails computation of:

$$\bar{Y}_N - \bar{Y}_G = \left(\bar{X}_N - \bar{X}_G\right)'\hat{\beta}_N + \bar{X}_G'\left(\hat{\beta}_N - \hat{\beta}_G\right)$$

and

$$\bar{Y}_N - \bar{Y}_G = \left(\bar{X}_N - \bar{X}_G\right)'\hat{\beta}_G + \bar{X}_N'\left(\hat{\beta}_N - \hat{\beta}_G\right)$$

where $\bar{Y}_N$ is the mean NAPLAN score for a non-Government schools sector ($N = I$ for Independent, $C$ for Catholic), $\bar{Y}_G$ is the mean NAPLAN score for the Government schools sector, $\hat{\beta}_N$ and $\hat{\beta}_G$ are the estimated coefficients discussed above for the non-Government and Government schools sectors, respectively, and $\bar{X}_N$ and $\bar{X}_G$ are the vectors of the mean values of the variables included in the models reported in Tables IV and V for non-Government and Government schools, respectively. Hence, $\bar{Y}_N - \bar{Y}_G$ refers to the differences discussed in Table II. The first term on the right-hand side is the part of these differences due to different values of the explanatory variables included in the estimating equations. It is sometimes termed the ‘characteristics’ or ‘explained’ component. The second term on the right-hand side is the part of the difference in mean outcomes for the two school sectors that is due to differences between sectors in the ways that the characteristics are linked to school outcomes. This component is sometimes termed the ‘coefficients’ or ‘unexplained’ component. Two decompositions are listed above because the decomposition is not unique. The average of the two alternatives is reported below.
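
A minimal sketch of this two-fold decomposition follows, assuming design matrices that include a constant and sector-specific OLS fits as in Tables IV and V. The function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def oaxaca_decomposition(X_n, y_n, X_g, y_g):
    """Blinder-Oaxaca decomposition of the mean score gap between a
    non-Government sector (n) and Government schools (g).
    X_n, X_g: design matrices including a constant; y_n, y_g: school mean scores."""
    b_n = np.linalg.lstsq(X_n, y_n, rcond=None)[0]  # sector-specific OLS coefficients
    b_g = np.linalg.lstsq(X_g, y_g, rcond=None)[0]
    xbar_n, xbar_g = X_n.mean(axis=0), X_g.mean(axis=0)

    # First weighting: non-Government coefficients price the characteristics gap.
    explained_n = (xbar_n - xbar_g) @ b_n
    unexplained_n = xbar_g @ (b_n - b_g)
    # Second weighting: Government coefficients price the characteristics gap.
    explained_g = (xbar_n - xbar_g) @ b_g
    unexplained_g = xbar_n @ (b_n - b_g)

    # The paper reports the average of the two (non-unique) alternatives.
    return {
        "gap": y_n.mean() - y_g.mean(),
        "characteristics": 0.5 * (explained_n + explained_g),
        "coefficients": 0.5 * (unexplained_n + unexplained_g),
    }
```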

Table VI presents the breakdown of the differentials in NAPLAN test outcomes between the Government and non-Government schools for Years 3 and 9 using this decomposition. The first line of each section of the table gives the differences in overall outcomes, from Table II. The second line lists the part of the difference between the overall test outcomes for the Government and non-Government schools that is linked to the differences in characteristics of the schools (that is, the different ICSEA, school sizes etc). The third line of each section lists the component of the difference in the test outcomes for the Government and non-Government schools that is due to differences in the estimated payoffs to these characteristics. In each instance standard errors for the relevant component are presented in parentheses: it is noted that these are relatively small, indicating that the decomposition components are precisely estimated.

Table VI. Decomposition of Government-non-Government schools differentials in NAPLAN outcomes, Years 3 and 9

| Component | Grammar | Numeracy | Reading | Spelling | Writing |
| --- | --- | --- | --- | --- | --- |
| Year 3 | | | | | |
| $\bar{Y}_I - \bar{Y}_G$ | 33.035 (0.86) | 24.936 (0.74) | 33.373 (0.83) | 25.344 (0.73) | 22.777 (0.68) |
| due to: Characteristics | 16.611 (0.88) | 12.096 (0.75) | 15.309 (0.84) | 14.178 (0.75) | 14.478 (0.71) |
| Coefficients | 16.424 (0.79) | 12.840 (0.72) | 18.063 (0.79) | 11.166 (0.71) | 8.299 (0.63) |
| $\bar{Y}_C - \bar{Y}_G$ | 24.582 (0.60) | 10.862 (0.54) | 23.503 (0.59) | 20.201 (0.55) | 21.570 (0.50) |
| due to: Characteristics | 12.542 (0.48) | 9.678 (0.40) | 11.786 (0.46) | 9.898 (0.42) | 9.831 (0.40) |
| Coefficients | 12.040 (0.40) | 1.185 (0.39) | 11.718 (0.40) | 10.303 (0.38) | 11.739 (0.33) |
| Year 9 | | | | | |
| $\bar{Y}_I - \bar{Y}_G$ | 46.063 (0.90) | 38.389 (0.85) | 38.044 (0.82) | 35.432 (0.82) | 45.929 (1.07) |
| due to: Characteristics | 27.522 (0.99) | 21.885 (0.94) | 22.077 (0.90) | 21.529 (0.93) | 32.023 (1.16) |
| Coefficients | 18.541 (0.85) | 16.504 (0.84) | 15.967 (0.75) | 13.902 (0.85) | 13.906 (1.00) |
| $\bar{Y}_C - \bar{Y}_G$ | 33.771 (0.81) | 26.106 (0.74) | 31.469 (0.72) | 30.575 (0.76) | 41.143 (0.97) |
| due to: Characteristics | 25.088 (0.79) | 19.531 (0.64) | 18.517 (0.64) | 23.231 (0.72) | 28.979 (0.91) |
| Coefficients | 8.683 (0.55) | 6.574 (0.55) | 12.952 (0.50) | 7.343 (0.53) | 12.164 (0.63) |

Note: Standard errors in parentheses.

Decomposition of the school sector differentials in the Year 3 NAPLAN results indicates that the differences in the characteristics of the schools account for around 60 per cent of the score premium in the spelling and writing tests for the Independent schools. In the case of the reading score, however, the unexplained (or coefficients) component is the major reason for the difference in the mean outcomes (54 per cent of the differential). The score premium in the Year 3 grammar and numeracy tests for Independent schools is split roughly equally between the two components.

In the case of the Year 3 test score differentials between Catholic and Government schools, the characteristics and coefficients components are approximately equal across all test domains, other than the numeracy score, where almost 90 per cent of the score differential is due to differences in characteristics. This suggests that the Catholic and Government schools have very similar payoff structures in the numeracy model, or, where differences arise, some offset others. The contrast with the decomposition of the grammar score differential in this regard appears to arise from the stark difference in the gender effect.23

The decomposition of the school sector differentials for Year 9 exhibits similar patterns across all test domains for both the Independent-Government schools comparison and the Catholic-Government schools comparison. There is one distinctive feature of these decompositions, namely that the major part of the score differentials is accounted for by the differences in the characteristics of schools. In the case of the Independent-Government schools comparison, close to 60 per cent of the score differentials is due to differences in the characteristics of the schools. The comparative figure for the Catholic-Government comparison is greater, with differences in the characteristics of schools typically accounting for around 70 per cent of the gap in Year 9 outcomes. In other words, the unexplained component of the differences in mean outcomes between school sectors becomes relatively less important as the school level increases. This feature is more pronounced in the Catholic-Government schools comparison than in the Independent-Government schools comparison. The widening of the ‘explained’ differential between the Government and non-Government sectors across school years could be due to the more intense selection processes at the higher school levels, where students are filtered into more selective schools.

Examination of more detailed decompositions for Years 3 and 9 reveals that the ICSEA and the student attendance rate contribute to the amplified gaps between sectors at the higher levels of schooling. For example, the component ‘explained’ by the ICSEA increased from 15 marks in Year 3 to 21 marks in Year 9 for the Independent-Government schools comparison. To illustrate the relative size of the individual effects, Table VII provides the details on the explained component of the Independent-Government schools decomposition (a sketch of this per-variable calculation follows the table). The results from the general model that includes the school attendance rate variable are provided in the first column; results from the model that excludes this variable are presented in the second column. It is noted that the school attendance rate variable is of greater importance in Year 9 than it is in Year 3, owing to the relatively low mean attendance rate in secondary-only Government schools. Regardless of the specification used, the explained component is largely unaltered.

Table VII. Detailed decomposition of NAPLAN grammar score differential between Independent schools and Government schools, Years 3 and 9

| Variable | Year 3: attendance rate included | Year 3: attendance rate excluded | Year 9: attendance rate included | Year 9: attendance rate excluded |
| --- | --- | --- | --- | --- |
| ICSEA | 14.622 (0.50) | 16.385 (0.55) | 20.759 (0.66) | 23.789 (0.73) |
| School size | 2.384 (0.24) | 2.857 (0.25) | 0.182 (0.07) | 0.216 (0.08) |
| Proportion of students female | 1.153 (0.16) | 1.216 (0.17) | 0.951 (0.13) | 1.010 (0.14) |
| Attendance rate | 0.079 (0.19) | (a) | 5.603 (0.30) | (a) |
| ACT | −0.026 (0.02) | −0.039 (0.02) | 0.019 (0.02) | −0.107 (0.02) |
| NT | 0.162 (0.06) | 0.195 (0.07) | 0.083 (0.04) | 0.100 (0.04) |
| Queensland | −0.114 (0.12) | −0.124 (0.13) | 0.068 (0.05) | 0.052 (0.04) |
| South Australia | −0.181 (0.06) | −0.213 (0.07) | −0.064 (0.03) | −0.026 (0.02) |
| Tasmania | 0.024 (0.02) | 0.032 (0.02) | −0.012 (0.01) | −0.003 (0.01) |
| Victoria | −0.100 (0.03) | −0.049 (0.02) | −0.034 (0.02) | −0.058 (0.03) |
| Western Australia | −0.222 (0.09) | −0.231 (0.10) | −0.022 (0.05) | −0.024 (0.05) |
| Provincial | 0.200 (0.07) | 0.187 (0.07) | −0.827 (0.10) | −0.903 (0.11) |
| Remote | −0.198 (0.09) | 0.154 (0.09) | 0.131 (0.17) | 0.192 (0.17) |
| Very remote | 0.117 (0.06) | 0.469 (0.08) | −0.020 (0.12) | 0.205 (0.12) |
| Combined school | −1.290 (0.56) | −3.410 (0.57) | 0.704 (0.62) | 2.487 (0.63) |
| Total explained | 16.611 (0.88) | 17.430 (0.87) | 27.522 (0.99) | 27.021 (1.00) |

Notes: Standard errors in parentheses. (a) = variable not included.
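
The per-variable entries of the kind shown in Table VII are the characteristic gaps weighted by the corresponding coefficients. A sketch of this breakdown, under the assumption that the two coefficient weightings are averaged in the same way as in the aggregate decomposition above, is:

```python
import numpy as np

def detailed_explained(xbar_n, xbar_g, b_n, b_g, names):
    """Per-variable contributions to the 'explained' component; by construction
    they sum to the total characteristics component reported in Table VI."""
    contributions = 0.5 * (xbar_n - xbar_g) * (b_n + b_g)
    return dict(zip(names, contributions))
```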

To put these results in perspective, recall from Section II that Marks (2009) reported that the individual-level and school-level variables included in his analysis, which included measures of prior academic achievement, were jointly associated with about 70 per cent of the Independent school effect and 60 per cent of the Catholic school effect observed in the unadjusted data. The results in Table VI are broadly similar to Marks' (2009) findings. However, they fall short of the findings in Thomson et al. (2011, p.62), where the differentials in academic outcomes between Government and non-Government schools dissipated following standardisation for the socioeconomic background of the students. Both Thomson et al. (2011) and Marks (2009) used individual-level data in addition to school-level data in their analyses. The use of individual-level data should be associated with a further reduction in the Catholic-Government and Independent-Government differences in academic outcomes. Hence the general conclusion that can be drawn from these analyses is similar to Marks (2009, p.19), who argued that there were only ‘moderate valued added effects for school sector’.

V. Conclusion

Using school achievement data from the National Assessment Program—Literacy and Numeracy (NAPLAN) and related information from the My School website, this study has examined the academic outcomes of schools by school sector (Government, Independent and Catholic) across Australia. Academic achievements in Years 3, 5, 7 and 9 were analysed, with each of the NAPLAN test domains—Grammar & Punctuation, Numeracy, Reading, Spelling, and Writing—being considered separately.

Across all years of study and NAPLAN domains, the average test scores for Independent schools were consistently the highest across the three sectors, while the scores for the Government schools were the lowest. The difference in overall school outcomes in favour of Independent schools in Year 3 was between 23 and 33 points (a six to eight per cent advantage). In the case of Catholic schools, the differences in outcomes compared to Government schools in Year 3 were between 11 and 25 points (a three to six per cent advantage). At the Year 9 level, the mean outcomes for Independent schools were between 35 and 46 marks higher than for Government schools (again a six to eight per cent advantage). The mark differential between Catholic schools and Government schools in Year 9 was between 26 and 41 (a five to eight per cent advantage). The examination of these differences by school sector was based on an educational production function model, where a particular NAPLAN outcome was related to the school ICSEA, the proportion of the school enrolment made up of girls, the school attendance rate, the total enrolment at the school, the State the school was located in, and the region within the State. Preliminary tests revealed that the links between these determinants of school outcomes and the NAPLAN test results differed across the three school sectors. Hence the analysis was presented separately for each school sector.

The ICSEA variable was the main explanatory variable across all school sectors and years of study, though in most test domains in Year 9 the impact of the ICSEA was more pronounced in Independent schools than in Government schools. Female students in Year 3 were found to have, on average, superior academic performance to their male counterparts, except in the area of numeracy. The gender gap was, however, less pronounced in non-Government schools than in Government schools. When later years of study were considered (up to Year 9), the gender variable was found to have similar effects across all school sectors, with substantial gender effects in favour of females usually being observed. There was a favourable effect of the school's attendance rate on test scores, though the impact of this school characteristic was smaller for Independent schools than for Government schools across all test years. There were considerable differences across States and Territories in Year 3 test outcomes, even after account was taken of the school's ICSEA, the proportion of the school enrolment that was female, and the other school characteristics. Differences across States/Territories were found for each school sector. However, the estimated coefficients on the States and Territories dummies exhibited greater homogeneity when later years of study were examined. In other words, there was a marked dissipation of the differences across States and Territories in Year 9 compared to Year 3.

The differences in overall school outcomes between the Government and non-Government sectors were decomposed, using the Blinder/Oaxaca decomposition, into a component due to differences in the mean levels of the schools' characteristics, such as the average ICSEA within each sector (the ‘explained’ component), and a component due to the different relationships these characteristics have with the NAPLAN test outcomes in each sector (the ‘unexplained’ component). These decompositions revealed two main findings. First, the differential in mean outcomes between Government and non-Government schools increased as students progressed from Year 3 to Year 9. Second, the unexplained component was relatively smaller in the later years of study; equivalently, the explained component was proportionately more important. It was hypothesised that the larger ‘explained’ differentials between the Government and non-Government sectors in the higher school years were due to the more intense selection processes at those levels, where students are filtered into more selective schools.24 At face value, this casts some doubt on the contribution of the greater levels of funding associated with Independent schools to academic outcomes. Similarly, Marks (2010, p. 281) concluded ‘There was no support for the proposition that material and educational resources are important to student performance’. Further research on the links between school resources and academic outcomes is clearly warranted.
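To fix notation, the standard two-fold form of the decomposition (Blinder, 1973; Oaxaca, 1973) can be written as below; the use of the Government-sector coefficients as the reference weights here is an illustrative assumption, not a statement of the paper's exact specification.

```latex
\bar{Y}^{NG} - \bar{Y}^{G}
  = \underbrace{\left(\bar{X}^{NG} - \bar{X}^{G}\right)'\hat{\beta}^{G}}_{\text{explained}}
  + \underbrace{\left(\bar{X}^{NG}\right)'\left(\hat{\beta}^{NG} - \hat{\beta}^{G}\right)}_{\text{unexplained}}
```

Here \(\bar{Y}\) is the mean NAPLAN score, \(\bar{X}\) the vector of mean school characteristics and \(\hat{\beta}\) the vector of estimated production-function coefficients in the non-Government (NG) and Government (G) sectors.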

Appendix A

  1. Top of page
  2. Abstract
  3. I. Introduction
  4. II. Literature Review
  5. III. Data
  6. IV. Statistical Analyses
  7. V. Conclusion
  8. Appendix A
  9. References
Table AI. Means and standard deviations of regressors, Years 3 and 9 (standard deviations in parentheses)

| Variables | Government: Yr 3 | Government: Yr 9 | Independent: Yr 3 | Independent: Yr 9 | Catholic: Yr 3 | Catholic: Yr 9 |
|---|---|---|---|---|---|---|
| ICSEA | 992.238 (101.66) | 975.150 (100.75) | 1043.640 (88.07) | 1046.360 (90.226) | 1024.210 (82.96) | 1030.750 (71.10) |
| School size | 312.050 (239.21) | 682.389 (434.11) | 581.196 (490.41) | 710.364 (486.53) | 325.830 (239.05) | 829.491 (353.23) |
| Proportion of students female | 0.483 (0.04) | 0.486 (0.11) | 0.507 (0.19) | 0.519 (0.20) | 0.501 (0.07) | 0.523 (0.32) |
| New South Wales | 0.318 (0.47) | 0.327 (0.47) | 0.315 (0.46) | 0.311 (0.46) | 0.328 (0.47) | 0.357 (0.48) |
| Australian Capital Territory | 0.013 (0.11) | 0.013 (0.11) | 0.017 (0.13) | 0.020 (0.14) | 0.018 (0.13) | 0.010 (0.10) |
| Northern Territory | 0.021 (0.14) | 0.023 (0.15) | 0.015 (0.12) | 0.015 (0.12) | 0.008 (0.09) | 0.012 (0.11) |
| Queensland | 0.182 (0.39) | 0.187 (0.39) | 0.188 (0.39) | 0.200 (0.40) | 0.161 (0.37) | 0.203 (0.40) |
| South Australia | 0.085 (0.28) | 0.090 (0.29) | 0.101 (0.30) | 0.076 (0.27) | 0.068 (0.25) | 0.072 (0.26) |
| Tasmania | 0.034 (0.18) | 0.040 (0.20) | 0.027 (0.16) | 0.031 (0.17) | 0.025 (0.16) | 0.027 (0.16) |
| Victoria | 0.231 (0.42) | 0.205 (0.40) | 0.206 (0.40) | 0.227 (0.42) | 0.293 (0.46) | 0.227 (0.42) |
| Western Australia | 0.117 (0.32) | 0.116 (0.32) | 0.131 (0.34) | 0.119 (0.32) | 0.099 (0.47) | 0.092 (0.29) |
| Metropolitan | 0.561 (0.50) | 0.517 (0.50) | 0.705 (0.46) | 0.717 (0.45) | 0.641 (0.48) | 0.679 (0.47) |
| Provincial | 0.376 (0.48) | 0.377 (0.48) | 0.277 (0.45) | 0.270 (0.44) | 0.320 (0.47) | 0.302 (0.46) |
| Remote | 0.035 (0.18) | 0.059 (0.24) | 0.008 (0.09) | 0.007 (0.08) | 0.024 (0.15) | 0.017 (0.13) |
| Very Remote | 0.028 (0.16) | 0.047 (0.21) | 0.009 (0.10) | 0.007 (0.08) | 0.015 (0.12) | 0.002 (0.05) |
| Primary (Yr 3)/Secondary (Yr 9) school | 0.913 (0.28) | 0.738 (0.44) | 0.248 (0.43) | 0.068 (0.25) | 0.924 (0.26) | 0.720 (0.45) |
| Combined school | 0.087 (0.28) | 0.262 (0.44) | 0.751 (0.43) | 0.929 (0.26) | 0.075 (0.26) | 0.280 (0.45) |
| Attendance rate | 92.445 (4.32) | 88.599 (5.76) | 92.485 (5.33) | 92.677 (4.66) | 93.118 (4.13) | 92.688 (3.207) |
Footnotes

  1. The same authors, however, note ‘This argument can refer to specific outcomes of education. One is achievement in the basic cognitive skills. Another is the area of moral development … Still another is post-high school activity, in particular, college attendance—or going to a more preferred college’ (Coleman et al., 1982, p. 65).
  2. The West Australian on 15 January 2010 announced the latest Western Australia school league tables, based on data published by the Curriculum Council of Western Australia.
  3. The use of school averages rather than individual-level data will not constitute a major problem for the magnitudes of the estimated coefficients if the underlying education production process is linear (see, for example, Hanushek (1979)). This matter has not been investigated using Australian data.
  4. Johnson et al. (2004), Bradley et al. (2004) and Leigh (2010) based their analyses on State administrative datasets (Victoria for the first, Queensland for the latter two) which only contain information on State Government schools.
  5. This type of overlap in school outcomes is often remarked upon in the literature. For example, Coleman et al. (1982, p. 76) state: ‘… achievement is just as high in the public sector when the policies and the resulting student behavior are like those in the Catholic or other-private schools’.
  6. See Lee et al. (1993) for a more detailed discussion.
  7. The Longitudinal Surveys of Australian Youth project surveyed a number of cohorts of young people. Marks et al. (2001) focused on the cohort of students who were in Year 9 in 1995, a sample representative of all Year 9 students at school in Australia in that year. The research in Marks (2007, 2009) is based on the 2003 cohort of this suite of surveys, and that by Thomson et al. (2011) is based on the 2009 cohort from the same set of surveys.
  8. Perry and McConney (2010) examine variation in the Australian PISA scores for 2003, but note (p. 1148) ‘users of the data set are not able to determine whether a given school in the sample is private or public’.
  9. The main school-level factor conducive to higher ENTER scores was a more academic environment, as measured by the school's mean prior academic achievement.
  10. A further study by Marks (2010) does not distinguish schools by sector, but its conclusion (p. 281), ‘that school factors do not have a decisive role in influencing student performance’, is consistent with the findings from Marks (2009).
  11. For confidentiality reasons, test outcomes for very small schools are not available to the public.
  12. All references to the content of the My School website in this paper refer to the situation in 2010.
  13. The Index of Community Socio-Educational Advantage (ICSEA) was developed specifically for the My School website to identify socio-educationally similar schools across Australia, that is, schools that serve similar student populations (see Australian Curriculum, Assessment and Reporting Authority (2010) for more details). The ICSEA for 2008 and 2009 is based on two sets of information: (i) a broad range of variables constructed from the Australian Bureau of Statistics' census data, using the students' home addresses to link with census collection districts; and (ii) school data on remoteness and the percentage of the school's enrolment identifying as Indigenous. For 2010, the ICSEA is based, where possible, on direct measures of the family background provided to schools by families, together with an expanded range of school characteristics.
  14. The attendance rate refers to the number of actual student days attended as a percentage of the maximum possible. The reference point is Years 1 to 10 (where applicable) at the school, not the particular year levels that sat the NAPLAN tests.
  15. As not all schools have students in the particular grades in which the NAPLAN tests are administered (Years 3, 5, 7 and 9), the number of schools varies across the sets of analyses: 6,479 schools for Year 3, 6,518 for Year 5, 3,805 for Year 7 and 2,348 for Year 9.
  16. Expressed in percentage terms, the non-Government/Government score differential is around eight per cent in Year 9, compared with around five per cent in Year 3.
  17. The school's attendance rate is arguably a contentious inclusion in a model aimed at assessing relative school performance, though it is a useful variable if the intent is pure description. Results from models that exclude this variable are discussed in the text.
  18. Similarly, there are different relationships between NAPLAN test outcomes and the regressors for the Government and non-Government schools in Years 5 and 7.
  19. The estimations presented here are unweighted; that is, no attempt is made to assign more weight to larger schools. Estimation with school size as a weight is generally associated with only modest changes in the results. One exception occurs in the Catholic school sector, where the coefficient on the female-male composition variable is reduced by about one-third compared with the unweighted regression (a stylised sketch of this weighting appears after these notes).
  20. The ICSEA ranges from 510 to 1246, a spread of 736 points; applying the estimated ICSEA coefficient (roughly 0.31 marks per ICSEA point) to this spread implies a difference in school-average Grammar scores of 229 points between schools at the top and bottom of the socioeconomic spectrum.
  21. The apparently anomalous regional impacts could arise because remoteness is a component of the ICSEA index, so region of residence has an indirect effect that operates via this composite variable.
  22. This may reflect a conflating of the State/Territory and school-type variables. In 2009, Year 7 was located in primary schools in only three States (Queensland, South Australia and Western Australia), and State Governments operate around 80 per cent of the primary schools in these States.
  23. The broad patterns observed for Year 3 carry across to the decomposition results for Year 5.
  24. On this matter, the newspaper story ‘Top school's secret weapon: 95 per cent of students of migrant heritage’ in the Sydney Morning Herald, 13 September 2010, is quite revealing.
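As flagged in note 19, the following is a minimal sketch of the enrolment-weighted re-estimation, under the same hypothetical file and column names used in the earlier sketch; it illustrates the weighting idea rather than the paper's exact specification.

```python
# Contrast unweighted OLS with enrolment-weighted WLS for one sector
# (hypothetical file and column names, as in the earlier sketch).
import pandas as pd
import statsmodels.formula.api as smf

catholic = pd.read_csv("my_school_year9.csv").query("sector == 'Catholic'")
formula = "reading_score ~ icsea + prop_female + attendance_rate + enrolment"

ols = smf.ols(formula, data=catholic).fit()
wls = smf.wls(formula, data=catholic, weights=catholic["enrolment"]).fit()

# Note 19 reports that weighting mainly moves the gender-composition coefficient.
print(ols.params["prop_female"], wls.params["prop_female"])
```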

References

  • Australian Curriculum, Assessment and Reporting Authority 2009, ‘National Assessment Program—Literacy and Numeracy’, National Report: Achievement in Reading, Writing, Language Conventions and Numeracy, www.naplan.edu.au.
  • Australian Curriculum, Assessment and Reporting Authority 2010, ‘Technical Paper: Index of Community Socio-educational Advantage’, www.myschool.edu.au.
  • Birch, E.R. and Miller, P.W. 2007, ‘The Influence of Type of High School Attended on University Performance’, Australian Economic Papers, vol. 46, pp. 1–17.
  • Blinder, A.S. 1973, ‘Wage Discrimination: Reduced Form and Structural Estimates’, Journal of Human Resources, vol. 8, pp. 436–455.
  • Bradley, S., Draca, M. and Green, C. 2004, ‘School Performance in Australia: Is There a Role for Quasi-markets?’, Australian Economic Review, vol. 37, pp. 271–286.
  • Coleman, J., Hoffer, T. and Kilgore, S. 1982, ‘Cognitive Outcomes in Public and Private Schools’, Sociology of Education, vol. 55, pp. 65–76.
  • Dewey, J., Husted, T.A. and Kenny, L.W. 2000, ‘The Ineffectiveness of School Inputs: A Product of Misspecification’, Economics of Education Review, vol. 19, pp. 27–45.
  • Dobson, I. and Skuja, E. 2005, ‘Secondary Schooling, Tertiary Entrance Ranks and University Performance’, People and Place, vol. 13, pp. 53–62.
  • Hanushek, E.A. 1979, ‘Conceptual and Empirical Issues in the Estimation of Educational Production Functions’, Journal of Human Resources, vol. 14, pp. 351–388.
  • Hanushek, E.A. 1986, ‘The Economics of Schooling: Production and Efficiency in Public Schools’, Journal of Economic Literature, vol. 24, pp. 1141–1177.
  • Johnson, D., Jensen, B., Feeny, S. and Methakullawat, B. 2004, ‘Multivariate Analysis of Performance of Victorian Schools’, paper presented at the Making Schools Better Summit Conference, Melbourne.
  • Lamb, S., Rumberger, R., Jesson, D. and Teese, R. 2004, ‘School Performance in Australia: Results from Analyses of School Effectiveness’, Centre for Post-Compulsory Education and Lifelong Learning, University of Melbourne.
  • Lee, V.E., Bryk, A.S. and Smith, J.B. 1993, ‘The Organisation of Effective Secondary Schools’, Review of Research in Education, vol. 19, pp. 171–267.
  • Leigh, A. 2010, ‘Estimating Teacher Effectiveness from Two-year Changes in Students' Test Scores’, Economics of Education Review, vol. 29, pp. 480–488.
  • Marks, G.N. 2007, ‘Do Schools Matter for Early School Leaving? Individual and School Influences in Australia’, School Effectiveness and School Improvement, vol. 18, pp. 429–450.
  • Marks, G.N. 2009, ‘Accounting for School-sector Differences in University Entrance Performance’, Australian Journal of Education, vol. 53, pp. 19–38.
  • Marks, G.N. 2010, ‘What Aspects of Schooling are Important? School Effects on Tertiary Entrance Performance in Australia’, School Effectiveness and School Improvement, vol. 21, pp. 267–287.
  • Marks, G., McMillan, J. and Hillman, K. 2001, ‘Tertiary Entrance Performance: The Role of Student Background and School Factors’, Longitudinal Surveys of Australian Youth, Australian Council for Educational Research.
  • Masters, G.N. 2009, ‘A Shared Challenge: Improving Literacy, Numeracy and Science Learning in Queensland Primary Schools’, Australian Council for Educational Research.
  • Miller, P.W. and Voon, D. 2011a, ‘School Outcomes in New South Wales and Queensland: A Regression Discontinuity Approach’, Education Economics, iFirst.
  • Miller, P.W. and Voon, D. 2011b, ‘Lessons from My School’, Australian Economic Review, vol. 44, pp. 366–386.
  • Oaxaca, R. 1973, ‘Male-female Wage Differentials in Urban Labor Markets’, International Economic Review, vol. 14, pp. 693–709.
  • Perry, L.B. and McConney, A. 2010, ‘Does the SES of the School Matter? An Examination of Socioeconomic Status and Student Achievement Using PISA 2003’, Teachers College Record, vol. 112, pp. 1137–1162.
  • Silins, H.C. and Murray-Harvey, R. 2000, ‘Students as a Central Concern: School Students and Outcome Measures’, Journal of Educational Administration, vol. 38, pp. 230–246.
  • Thomson, S., De Bortoli, L., Nicholas, M., Hillman, K. and Buckley, S. 2011, ‘Challenges for Australian Education: Results from PISA 2009: The PISA 2009 Assessment of Students' Reading, Mathematical and Scientific Literacy’, Australian Council for Educational Research Press.
  • Tinto, V. 1975, ‘Dropout from Higher Education: A Theoretical Synthesis of Recent Research’, Review of Educational Research, vol. 45, pp. 89–125.
  • Win, R. and Miller, P. 2005, ‘The Effects of Individual and School Factors on University Students' Academic Performance’, Australian Economic Review, vol. 38, pp. 1–18.