The SS Initiative was the centerpiece of Governor Jim Hunt's “crusade” for the future of North Carolina children, referenced in the quotation that begins this paper. Indeed, with the governor's strong and energetic leadership, the state accomplished what no other state had yet done. Specifically, in 1993, North Carolina established a comprehensive, collaborative, and decentralized initiative designed to assure that all the state's children and their families had access to high-quality services that would prepare the children for school. Services were to be available to all children aged 0 to 5, not just those from low-income families.
The initiative combined a top-down comprehensive systems approach designed to address a broad set of challenges facing children in child care and education, health, and family support, with a bottom-up implementation approach designed to maximize local ownership and commitment. The approach was collaborative in that it required representatives from a wide range of groups, including families, to be involved at all levels of the initiative, with the focus throughout on partnership and teamwork. It was decentralized in that the state provided funding to broadly constituted county-level (and in some cases multicounty) partnerships set up specifically to implement the initiative. Not until 1996 did the state impose any specific restrictions on how the funds were to be used. At that time, the legislature required that at least 30 percent of the funding be devoted to child care subsidies. In the following year it mandated that, in the aggregate, at least 70 percent was to be spent on child care-related activities.3 Subsequently, the state set up a performance monitoring system with five performance goals in the areas of access to quality child care, health care, and support for families to raise healthy children who are prepared for school (Ponder, 2011). Because of the intentional strategy to encourage local decisionmaking, the implemented program can best be described as a pool of financial resources with guidelines for spending. By design, that pool varied over time and across counties as the program grew and state resources evolved. Although this somewhat loosely defined policy limits scholars’ ability to identify program mechanisms precisely, it is a realistic model for how state-level funding often operates.
In the spirit of public–private partnership, the legislature established a new nonprofit organization called the North Carolina Partnership for Children, Inc. (NCPC) to support and oversee the local partnerships. The legislature also called for 10 years of evaluation that was contracted out to the University of North Carolina's Frank Porter Graham Child Development Institute (FPG).
The characteristics of top-down guidelines and decentralized implementation make the SS Initiative both difficult to define precisely and even more difficult to evaluate. In an early evaluation, for example, FPG found that the 12 initial partnerships had funded 244 individual local service activities (Bryant et al., 2002, 2003). The evaluation team grouped activities into three program categories: child care quality, family functioning, and children's health—along with a process category that included activities designed to improve interagency collaboration. They then developed a logic model to link each category to expected short-term and long-term goals. Funding for the two main categories—raising the quality of child care and improving the functioning of families—was intended to promote the central goal of school readiness by developing skills and improving behavior; funding for health care promoted the short-term goal of increasing immunizations and expanding access to regular checkups, toward the longer-term goal of better child health; and attention to interagency processes was intended to promote child well-being by providing more coordination, eliminating inefficiencies, and reducing service gaps for children and their families.
Over a 10-year period, evaluators at FPG produced 35 separate studies. Many provided valuable information on how the initiative was being implemented, and others provided evidence related to short-term outcomes.4 FPG studies focused on two main impacts: (1) the impact of SS on the quality of child care services, and (2) the impact of SS-funded center quality on the school readiness of participating children. These impacts were particularly germane to the initiative's goal of ensuring “that all children enter school healthy and prepared to succeed,” primarily through improving the quality of early care and education for children ages 0 to 5 (Bryant et al., 2003).
To examine the impact on the quality of child care services in North Carolina over time, Bryant et al. (2002) collected information from centers receiving funds from the initial 12 SS partnerships in 18 counties, over three waves of onsite observations between 1994 and 1999. They concluded that numerous child care improvements were related to center participation in SS, including better classroom quality, increases in teacher qualifications, and increases in the percentage of centers licensed at higher levels and with national accreditation. Although the authors could not conclude with certainty that SS was the cause of these quality improvements, changes of this type are likely among the mechanisms through which the initiative could affect long-term academic achievement.
Similar issues emerge in FPG studies that examine the benefits for children who participate directly in centers receiving SS funding. Maxwell, Bryant, and Miller-Johnson (1999) compared the language and social skills of children who had attended child care centers participating in six SS partnerships with those of a comparison sample within the same kindergarten classrooms. They found higher cognitive (but not social) skills for children who attended SS-funded child care centers that had received quality improvement, such as onsite technical assistance and higher levels of teacher education. One limit of this study design is that it assumes no spillover effects. That is, it assumes that control children (taken from the same kindergarten classrooms as SS children) were not affected by program funding. If there is positive spillover, then the FPG analyses will have underestimated the positive impact of SS.
A subsequent study combines research questions, evaluating both the change in child care quality across centers in 20 partnerships and the extent to which attending a higher quality program predicts school readiness for individual children (Bryant et al., 2003). This study included child care quality measures, teacher ratings, and direct assessment of cognitive and social skills. The researchers found that the quality of the individual centers in the sample had increased over time, and that children attending higher quality classrooms had higher levels of kindergarten readiness. Although this finding held across the sample, causal claims were not warranted because there was no comparison or control group of children enrolled in centers not directly benefiting from SS funding.
Taken as a group, the FPG studies suggest that SS was associated with positive effects on center quality and child school readiness, but they do not provide a convincing causal link at either the center or the individual student level. Moreover, the breadth of the SS Initiative, combined with variations in the way it was implemented at the local level, made it difficult for the FPG researchers to provide an overall evaluation. Nonetheless, the FPG studies were sufficiently positive to generate ongoing legislative support, to win the initiative a number of national awards (including the prestigious Ford Foundation–Harvard award for Innovations in American Government), to attract attention from other states, and to contribute to a national movement oriented toward a comprehensive approach to the needs of young children. The studies also provide a basis for our hypothesis that the SS Initiative generated medium-term effects on children's academic outcomes, which are the focus of this study.
The initiative was started in 1993 in 12 pilot partnerships that represented 18 of the 100 North Carolina counties. The pilot counties were selected by independent experts to be representative of North Carolina's diversity and geography, with one recipient from each congressional district.5 As shown by the left-hand graph in Figure 1, the program expanded to more than 50 counties by 1997 and to all 100 by the 1998 to 1999 school year (left axis). During that period, SS funding rose to a peak in 2000 of $250 million (in 2009 dollars) and has since fallen.
Figure 1. North Carolina Early Childhood Initiatives.
1. Data Sources
(a) Yearly Smart Start Funding data provided by North Carolina Partnership for Children FY 1993 to 2009, NC Division of Child Development FY 1993 to 2009.
(b) Yearly More at Four Funding data provided by North Carolina Office of Early Learning.
(c) Monthly CPI data provided by Bureau of Labor Statistics, U.S. Department of Labor.
2. Both figures are in June 2009 dollars using the CPI as of July in each year as an inflator.
Funding per 0- to 5-year-old child varied across counties and over time. Per child funding peaked in 2000 at about $400 per 0- to 5-year-old child and has since fallen to about $220 per child in 2009. A child living in a county with SS funding for all five of his or her early childhood years would have access to SS funding equal to about five times these per child amounts. (See online Appendix Figs. A1 and A2 for the variation in funding across counties by year and for maps illustrating the geographic variation in program funding for selected years.6) We use this variation in the timing of the introduction of the program and in its intensity across counties to identify the effects of the initiative.
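As a back-of-the-envelope illustration of the cumulative exposure described above (using the per-year figures from the text; the five-year multiplier follows the authors' own approximation):

```latex
\underbrace{5 \times \$400}_{\text{peak-era (2000) exposure}} \approx \$2{,}000
\text{ per child}, \qquad
\underbrace{5 \times \$220}_{\text{2009-era exposure}} \approx \$1{,}100
\text{ per child}.
```

These cumulative amounts, not the single-year figures, are the relevant scale for a child who spends all of his or her early childhood in a funded county.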
More at Four
When Governor Mike Easley took office in 2001, the state added the MAF Pre-Kindergarten Initiative on top of SS. The logic was that even with community-wide efforts to improve general child care and child health, many low-income children would still be deficient in cognitive skills in the year prior to kindergarten matriculation. A key goal of the program was to make high-quality preschool settings available to all children by ensuring access to disadvantaged children.
Like SS, this initiative provided state funding for a top-down program that was to be administered at the local level. Specifically, MAF provided funding for slots for eligible four-year-old children in pre-k centers. Relative to SS, however, the state was far more prescriptive about how MAF funds were to be used. First, they were to be used only for “at-risk” four-year-olds, with priority given to children not currently served in a formal preschool or child care program. Eligibility was based on the poverty status of the child and specific risk factors defined in the legislation.7 Second, the funds were to pay for slots only in settings operated by high-quality providers. While local administrators were free to allocate the funded slots to a variety of providers, including public schools, private for-profit and nonprofit child care centers, or Head Start programs, any recipient agency of MAF funds was required either to meet state-delineated quality standards or to be on track to meet them.8 These standards refer to staff qualifications, class size, teacher–child ratios, and North Carolina child care licensing requirements. Through this requirement, the state's goal was to promote high-quality preschool not only for the funded children, but also for other children enrolled in the same centers. About one-third of children in MAF classrooms were not directly funded by MAF (Peisner-Feinberg, Elander, & Maris, 2006); hence, roughly 50 percent more children than were directly funded potentially received spillover benefits in the form of higher quality classrooms in MAF-funded preschool settings. Spillover benefits for peers in the kindergarten and later classrooms of MAF recipients are plausible as well.
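The step from “one-third not directly funded” to “roughly 50 percent more children than were directly funded” follows from simple arithmetic on the within-classroom shares:

```latex
\frac{\text{unfunded children}}{\text{funded children}}
\;\approx\; \frac{1/3}{2/3} \;=\; \frac{1}{2}.
```

That is, for every two MAF-funded children in a classroom, about one additional unfunded classmate potentially shared in the quality improvements.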
In recognition of its educational goals, namely to provide a high-quality classroom experience for the state's at-risk four-year-olds, the program was overseen and managed for 10 years by the Office of Early Learning within the North Carolina Department of Public Instruction. To avoid competition or overlap with SS, local committees charged with making decisions about the allocation of MAF slots were jointly convened by the chair of the local SS partnership and the superintendent of schools in each county. Once again, FPG was the official evaluator. MAF began with funding for 32 counties in 2002 and was expanded to all counties within a few years.9
Early FPG studies provided descriptive information on how the program operated. The median class size for MAF-funded students in 2004 to 2005 was 18, with about two-thirds of the students in each classroom funded by the program. Consistent with the program guidelines, about 90 percent of the children qualified for free or reduced-price lunch and more than three-quarters were not receiving services at the time of enrollment. About a third were African American, a third were white, and about 20 percent were Latino. As measured by staff credentials, the quality of the programs was generally high, but higher in the centers operated by public schools than in those run by community for-profit and nonprofit providers. For example, among the lead teachers in the public school settings, 99 percent had a bachelor's degree or higher and 75 percent had an appropriate teaching license, compared to 65 percent and 15 percent, respectively, among the lead teachers in community settings (Peisner-Feinberg, Elander, & Maris, 2006).
Classroom observations, parental surveys, and samples of students provided information on the quality of classroom practices and short-term child outcomes. FPG evaluations of child outcomes focused on measuring gains for MAF-funded children during the year that they participated in the program. The studies show positive gains in measured skills beyond what would be expected for this target population. Furthermore, children who entered the program at higher levels of risk exhibited greater gains during the preschool year on an array of cognitive and behavioral outcomes. At the same time, the absence of a comparison group made it difficult for the researchers to make strong causal claims.
In a subsequent, more ambitious study, the evaluators examined the effects of the program on the longer-term outcomes of third-grade math and reading scores (Peisner-Feinberg & Schaff, 2010). Using two early cohorts of MAF participants, the authors linked the participants to their third-grade test scores, and then compared the test scores of the participants to a demographically similar comparison group of third graders within the same counties in the same years (2007 and 2008). In these analyses, the authors controlled for county-specific levels of state and local education spending. The main finding was that the program participants who were from low-income families exhibited modestly higher third-grade test scores than their equally impoverished same-county counterparts who did not participate, with effect sizes that ranged from about 0.10 to 0.14.10 Although the authors interpreted this finding to mean that the program was serving its primary policy goal of reducing income-related achievement gaps, the results are still at best suggestive. The researchers’ use of a matched comparison group rather than a randomly assigned control group means that they were not able to control for the selection effects that bedevil this type of evaluation. As a result, their impact estimates could be inflated. At the same time, to the extent that positive spillover effects operated as program planners hoped, such that matched-comparison children received indirect benefits, the FPG evaluation would underestimate the true program impact.
In recognition of the potential spillovers, we use the same approach as for SS and define the treatment as the availability and level of MAF funding (per age-appropriate child) in the county. The right graph in Figure 1 shows the rapid rollout of the program across counties and the total funding levels over time. (For more information on the variation across counties in the level of funding and the availability of classroom slots, see the online Appendix Figs. A3–A5.11) Because SS funding was available (at varying levels) in all counties when MAF was rolled out, all our estimates of the effects of MAF are conditional on the presence of SS programs.