Facility standards and the quality of public sector primary care: Evidence from South Africa's "Ideal Clinics" program.

Primary healthcare systems are central to achieving universal health coverage. However, in many low- and middle-income country settings, primary care quality is challenged by inadequate facility infrastructure and equipment, limited human resources, and poor provider processes. We study the effects of a recent large-scale quality improvement policy in South Africa, the Ideal Clinic Realization and Maintenance Program (ICRMP). The ICRMP introduced a set of standards for facilities and a quality improvement process involving manuals, district-based support, and external assessment. Exploiting differential prioritization of facilities for the ICRMP's quality improvement process, we apply difference-in-differences methods to identify the effects of the program's efforts on standards scores and primary care quality indicators over the first 12 months of implementation. We find large and statistically significant increases in standards scores, but mixed effects on care outcomes: a small improvement in early antenatal care usage, null effects on childhood immunization and cervical cancer screening, and a small negative effect on human immunodeficiency virus (HIV) care. While the ICRMP process has led to significant improvements in facilities' satisfaction of the program's standards, we were unable to detect meaningful change in care quality indicators.

Second, the extent of poor quality of care is significant globally, particularly in low- and middle-income country (LMIC) settings, with avertable mortality attributable to poor-quality healthcare estimated at 5.0 million deaths per year (Kruk et al., 2018; WHO et al., 2018). And finally, in LMIC settings, poor quality of care can introduce inefficiencies that exacerbate resource constraints faced by healthcare sectors (Das & Hammer, 2014).
There are a range of approaches to improving quality of care available to policy makers in LMIC settings (Mate et al., 2013; Rowe et al., 2019). Performance-based financing introduces explicit incentives to improve healthcare provider performance but has been subject to controversy and mixed evidence on its effectiveness (Paul et al., 2018; Soucat, Dale, Mathauer, & Kutzin, 2017). Community monitoring interventions appeal to the intrinsic motivations of healthcare providers and administrators (Björkman & Svensson, 2009; de Walque et al., 2015). However, many strategies emphasizing the agency and incentives of individual providers provide limited scope for addressing broader, structural deficiencies and constraints on healthcare quality in many LMICs.
Accreditation systems provide a flexible and holistic approach to quality improvement in low resource settings (Mate et al., 2013;Mate, Rooney, Supachutikul, & Gyani, 2014). The International Society for Quality in Healthcare defines an accreditation system to be: "A public recognition by a healthcare accreditation body of the achievement of accreditation standards by a health care organization, demonstrated through an independent external peer assessment of that organization's level of performance in relation to the standards" (Mate et al., 2014). Underpinning any accreditation system are standards serving as benchmarks against which a facility or provider care structure, process, and outcome quality can be assessed (Donabedian, 1988;Peabody et al., 2017).
While these systems are more common in high-income settings, there is growing adoption of similar standards-based accreditation strategies and interventions at large scale in LMICs, such as Tanzania's five-star assessment system and PharmAccess Foundation's SafeCare model implemented in six sub-Saharan African countries (Johnson et al., 2016; Yahya & Mohamed, 2018). However, while much effort is being invested in these interventions, there is at present little empirical evidence on their effectiveness, and, as with the broader quality improvement literature, the strength of evidence is weak (Rowe et al., 2019).
In South Africa, a 2012 audit revealed the poor state of public primary healthcare, including that 94% of primary care clinics reported not having all essential equipment (Health Systems Trust, 2013). Amid growing concern regarding the health system's readiness for the proposed National Health Insurance (NHI) scheme, the Ideal Clinic Realization and Maintenance Program (ICRMP) was introduced as a holistic approach to improving the quality of healthcare within public clinics (Fryatt & Hunter, 2015; Hunter et al., 2017).
With implementation beginning at scale in 2015, the ICRMP introduced a set of standards against which all public primary care facilities are assessed and an accompanying staggered process for quality improvement. The program vests responsibility for quality improvement with facility managers, supporting their progress by providing them standard operating procedures (SOPs) and the guidance and supervision of a district-level support team. The standards consist of over 150 binary elements, covering ten domains including but not limited to administration, infrastructure, clinical services provision, and community engagement. The program is ambitious, aiming to ready South Africa's public sector primary care clinics for a future accreditation system to be administered by the recently established Office of Health Standards and Compliance and to be implemented under the proposed NHI scheme (NDoH, 2015b). However, much of the value of the envisioned accreditation process would hinge on the extent to which the standards underlying it are associated with improved quality of care.
In this study, we exploit the differential prioritization of facilities during the introduction of the ICRMP to provide quasi-experimental estimates of its quality improvement process's effects, first on facilities' performance in adhering to the program's standards, and second on indicators of the quality of primary care services. In addition to providing evidence on the implementation and effects of this key health system intervention in the South African setting, this study contributes to the understanding of existing approaches to primary care quality improvement in LMICs more generally. We proceed with a detailed description of the ICRMP and a description of our data and empirical approach, before presenting our results and an accompanying discussion.

| Setting

South Africa's public health sector comprises primary care facilities alongside provincial and national referral hospitals providing secondary and tertiary care. Primary care services are largely provided at clinics and community health centers, with the latter being larger facilities providing some in-patient, maternal, and emergency care services (NDoH, 2015b). The public PHC workforce consists largely of nurses, supported by general practitioners, with outreach and home-based care provided by lay community health workers (Schneider, Besada, Sanders, Daviaud, & Rohde, 2018).
The public sector exists amid significant inequalities in the health system: health expenditure in the private sector is approximately equivalent to that of the public sector despite the former serving only approximately 15% of the population (Ataguba, Akazili, & McIntyre, 2011; NDoH, 2015b). Consequently, South Africa is presently restructuring its healthcare system with the goal of implementing a proposed single-payer NHI scheme by 2025 (NDoH, 2015b).
The NHI scheme would introduce a central fund that would contract with only accredited providers, both public and private, and would emphasize primary healthcare as point-of-entry. Accreditation would be the responsibility of an autonomous entity, the Office of Health Standards and Compliance, and there are concerns that, at present, accreditation would require significant improvements in the quality of the care offered at public primary care facilities. As a result, several primary healthcare reform efforts are underway to address challenges faced by the public primary healthcare sector, one of these being the ICRMP.

| The program
Through the ICRMP, the National Department of Health seeks to turn South Africa's primary healthcare facilities into so-called "Ideal Clinics" (Fryatt & Hunter, 2015). This aspirational notion of an "Ideal Clinic" has been conceptualized as: a clinic with good infrastructure (i.e., physical condition and spaces, essential equipment, and information and communication tools), adequate staff, adequate medicines and supplies, good administrative processes, and adequate bulk supplies; such a clinic uses applicable clinical policies, protocols and guidelines, as well as partner and stakeholder support, to ensure the provision of quality health services to the community. (Hunter et al., 2017) This concept is operationalized through a set of more than 150 standards or elements, devised through a multi-stakeholder consultative process, and jointly referred to by program implementers as the "Ideal Clinic Framework" (Fryatt & Hunter, 2015). The standards fall into 10 component categories, including administration, integrated clinical services management, medicines, supplies, and laboratory services, human resources, support services, infrastructure, health information management, communication, and stakeholder engagement. An exhaustive description of the elements and their structure is presented in Table S1 in the supplementary appendix. Alongside the set of standards is an accompanying manual which describes the SOPs facility managers should follow to satisfy each of the elements. This manual was also made available to facility managers as a mobile application (Hunter et al., 2017).
Elements are classified as vital, essential, or important; if clinics meet certain high but arbitrary threshold percentages of elements satisfied within each of these classes, they are classified as "Ideal." Critically, this "Ideal" designation does not entail any further resources or compensation for facility managers or clinicians.
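To make the classification rule concrete, the logic can be sketched as follows. The threshold values and element counts below are hypothetical placeholders, not the program's actual cut-offs:

```python
# Sketch of the "Ideal" classification logic. The THRESHOLDS values and the
# example element counts are hypothetical placeholders, not the program's
# actual cut-offs.
THRESHOLDS = {"vital": 1.00, "essential": 0.90, "important": 0.80}

def share_satisfied(elements):
    """Fraction of binary elements (0/1) that are satisfied."""
    return sum(elements) / len(elements)

def is_ideal(elements_by_class):
    """Classify a clinic as 'Ideal' if, within every class of elements,
    the share satisfied meets that class's threshold."""
    return all(
        share_satisfied(elems) >= THRESHOLDS[cls]
        for cls, elems in elements_by_class.items()
    )

clinic = {
    "vital": [1, 1, 1],                           # 100% satisfied
    "essential": [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],  # 90% satisfied
    "important": [1, 1, 1, 0, 1],                 # 80% satisfied
}
print(is_ideal(clinic))  # True under these hypothetical thresholds
```

Because the rule is conjunctive across classes, a single unmet threshold in any class, however small, denies the "Ideal" designation.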

| Scale-up and prioritization
Following various preparatory efforts, implementation of the ICRMP began in earnest in the 2015/2016 fiscal year (Fryatt & Hunter, 2015; Hunter et al., 2017). A "scale-up" process was adopted whereby it was envisioned that the program would convert approximately 1000 facilities into so-called Ideal Clinics each year until all facilities were Ideal. This involved prioritizing certain facilities for quality improvement in each year. Due to the partial autonomy of provincial governments, one province, the Western Cape, did not participate in the program or its scale-up during the first year.
Facility managers of prioritized facilities were responsible for achieving better performance on the standards. They were expected to make use of the manual describing the SOPs to be followed in order to satisfy each of the standards. They were supported in implementing these changes by a district-based team termed the Perfect Permanent Team for Ideal Clinic Realization and Maintenance (PPTICRM). In instances where equipment or infrastructure was deficient, prioritized facilities were provided with access to supplementary funding (Hunter et al., 2017). Finally, an external monitoring mechanism was in place: progress in prioritized facilities was independently monitored through status determinations conducted by PPTICRMs from other districts at various points through the year.
Facilities that were not prioritized were not identified for quality improvement in the first year. While the facility managers of these facilities would have been exposed to the standards and would have had access to the standard operating procedures for satisfying the elements, they did not necessarily have access to the support and monitoring of a PPTICRM district team, and they would not have had access to the additional funding for infrastructure and equipment. Their satisfaction of the standards would also not have been assessed by an external peer review team.
When fully implemented, the ICRMP was envisioned to operate on an annual plan-do-study-act cycle (Hunter et al., 2017), with all facility managers following the quality improvement process adopted by prioritized facilities in the program's early years.

| Mechanisms
As indicated above, the overarching goal of the ICRMP is to "ensure the provision of high quality services to all" (Hunter et al., 2017). Our interest, beyond the immediate changes the program induces in facilities' meeting its standards, is its impact on quality of care. As a holistic quality improvement effort, satisfaction of the ICRMP's standards could impact care provision through multiple pathways. Some standards directly impact care provision. For example, the standards falling under the "Integrated Clinical Services Management" component include the percentage of nurses trained on clinical guidelines, the availability of guidelines in-facility, and the effective management of client appointments for chronic and maternal services, as well as standards pertaining to patient experience of care and waiting times. Similarly, the "Medicines, Supplies, and Laboratory Services" component specifies standards for the procurement, storage, and availability of equipment and consumables necessary for the provision of care consistent with best practices. 1 Other components, such as infrastructure and support services, are more structural in nature and could impact care provision less directly. Finally, while most of the standards pertain to the facilities themselves, there are also standards for the coordination of services with outreach teams, namely ward-based outreach teams and school health teams. These provide screening and referral services that could detect unmet need for particular services, including but not limited to antenatal care and immunizations.

| Existing literature
At present, the ICRMP and its effects have scarcely been researched. Official analysis suggests that in 2015/2016 only 9.3% of all clinics were classified as "Ideal"; by 2016/2017 this number had risen to 29.9%, and by 2018/2019 it had reached 55.4% (Steinhöbel, Jamaloodien, & Massyn, 2020). A study of facility managers finds that they report having little agency over the program's targets and design, and reports some anecdotal deviations from the prescribed processes (Muthathi, Levin, & Rispel, 2019).

| Overview
We study the impact of the ICRMP's quality improvement processes in two steps: first, we study changes in scores measuring satisfaction of the ICRMP standards, and second, we study changes in indicators of the quality of routine primary healthcare services. 2 For both sets of analyses, we focus on the initial expansion of the program in its first year of at-scale implementation, the 2015/2016 fiscal year. 3 At this point, the standards and tools for assessment were introduced 4 and approximately 1000 facilities were prioritized for quality improvement.

| Data
To study the evolution of ICRMP standards satisfaction, we use data arising from the ICRMP's routine processes. These data consist of measures of satisfaction of the ICRMP's standards collected through "status determinations" (in the language of the program). We draw on the status determinations undertaken during the first quarter of the annual implementation cycle, in which facilities were assessed regardless of their prioritization status. 5 The set of standards is divided into 10 components, and so for each facility's status determination we calculate a score for each component as well as an aggregate score for the complete set of standards, where the scores are effectively the percentage of elements that the facility has satisfied. 6 Compiling these status determinations over time yields a facility-year panel data set containing each facility's aggregate score and component-specific scores from 2015/2016 and 2016/2017 7 , which we combine with a measure of which facilities were prioritized for quality improvement.
We treat the 2015/2016 observations as "pre," as at the time the status determinations took place the ICRMP was just beginning; and we treat the 2016/2017 observations as "post," as these comprise measurements following the first full annual cycle of the ICRMP and any associated quality improvement. Our sample for this analysis is restricted to facilities that conducted status determinations in both the pre- and post-periods and is thus not the universe of primary care facilities, as one province did not participate in the program and status determinations were not completed in some nonprioritized facilities. In a Supplementary Appendix, we provide greater detail on the construction of the aggregate and component scores and the construction of the analytical sample.
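The score construction described above can be sketched as follows. Component names and element values are illustrative, not drawn from the actual checklist:

```python
# Minimal sketch of the score construction: a status determination is a set
# of binary elements grouped into components, and each score is the
# percentage of elements satisfied. Component names and element values
# here are illustrative.
def component_scores(status_determination):
    """Return the percentage of elements satisfied per component, plus an
    aggregate score over all elements."""
    scores = {
        comp: 100.0 * sum(elems) / len(elems)
        for comp, elems in status_determination.items()
    }
    all_elems = [e for elems in status_determination.values() for e in elems]
    scores["aggregate"] = 100.0 * sum(all_elems) / len(all_elems)
    return scores

sd = {"administration": [1, 0, 1, 1], "infrastructure": [0, 0, 1]}
print(component_scores(sd))  # administration 75.0, infrastructure ~33.3, aggregate ~57.1
```

Note that the aggregate score weights each element equally, so components with more elements contribute more to the aggregate than a simple average of component scores would.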

| Econometric specification
For each of the score measures, we follow a standard two-period difference-in-differences regression incorporating facility fixed effects, and fit regressions of the following general form:

Score_ft = β(Post_t × Prioritized_f) + λPost_t + θ′X_ft + γ_f + ε_ft

where Post_t is an indicator variable for 2016/2017 observations, Prioritized_f is an indicator for facilities prioritized for quality improvement in the initial year of the program (absorbed by the fixed effects and entering only through the interaction), X_ft are time-varying facility and regional socio-economic status (SES) controls, γ_f are facility fixed effects, and ε_ft is an idiosyncratic error term. The coefficient of interest is β, on the interaction term. Standard errors are clustered at the facility level. The time-varying regional characteristics are constructed from Statistics South Africa's General Household Survey (GHS) at the Metro-Province-year level, and include medical aid coverage, urban population, education completion, piped water coverage, toilet access, household size, and youth population. These are time-varying population characteristics that could be correlated with demand for healthcare services but are unaffected by prioritization status.
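As a concrete illustration of this two-period estimator, the sketch below simulates facility data and recovers the DiD coefficient by first-differencing, which eliminates the facility fixed effect. All numbers are simulated and the controls X_ft are omitted for brevity:

```python
import numpy as np

# Illustrative two-period DiD on simulated facility data. With facility
# fixed effects and two periods, first-differencing removes gamma_f, and
# the coefficient on Prioritized in the differenced regression is the DiD
# estimate beta. All values are simulated; controls X_ft are omitted.
rng = np.random.default_rng(0)
n = 200
prioritized = (np.arange(n) < 100).astype(float)   # first half prioritized
base = rng.normal(60.0, 5.0, n)                    # facility fixed effect
score_pre = base + rng.normal(0.0, 1.0, n)
score_post = base + 2.0 + 11.0 * prioritized + rng.normal(0.0, 1.0, n)  # true beta = 11

# (score_post - score_pre) = lambda + beta * Prioritized_f + noise
diff = score_post - score_pre
X = np.column_stack([np.ones(n), prioritized])
coef, *_ = np.linalg.lstsq(X, diff, rcond=None)
print(round(coef[1], 1))  # close to the true effect of 11
```

With exactly two periods, this differenced regression is numerically identical to the fixed-effects regression above, which is why the facility-level heterogeneity in `base` does not bias the estimate.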

| Data
To undertake our analysis of the effects of the ICRMP on primary care quality indicators, we combine multiple administrative data sets. The first of these is the National Indicator Data Set (NIDS) as captured in the District Health Information System 2 (DHIS), the health information system used by the South African government for routine monitoring and evaluation. NIDS is typically used for department performance management plans, for audit purposes, and as an input to the provincial equitable share formula for the division of national revenue. For each public healthcare facility, indicator measures are filed on a monthly basis by the facility manager or an alternate designated reporter for various service and facility indicators, subject to a quality control process, and reviewed in the instance of flagged issues. From the NIDS, we draw on a limited selection of indicators. Rather than emphasizing any individual measure of primary care quality, we examine a set of multiple indicators of process quality, clinical outputs, and outcomes for key primary care services in South Africa's public sector (namely antenatal care, immunization, human immunodeficiency virus (HIV)/tuberculosis (TB) care, and women's health). 8 These indicators are: the early antenatal visit rate (% of first antenatal visits before 20 weeks), the complete immunization rate at age 12 months (number of infant patients who have completed the primary course of immunizations), the number of HIV-positive clients initiated on isoniazid preventive therapy (IPT), a TB prevention therapy for HIV-positive individuals, and cervical cancer screening numbers (number of women over 30 years of age screened for cervical cancer).
We study early antenatal care usage, as NDoH guidelines recommend that antenatal care be provided as soon as a woman suspects pregnancy (NDoH, 2015a). For immunizations, the immunization status of every child visiting a primary healthcare facility should be checked on every visit and missed vaccinations administered according to a catch-up dose schedule (NDoH, 2018). Newly diagnosed HIV-positive clients are recommended to be screened for TB and, if negative, initiated on TB preventive therapy (NDoH, 2010). Cervical cancer screening is recommended as a component of routine preventive care for women over 30 years old, with the specific recommendation that women are screened at least three times from the age of 30 (NDoH, 2017).
In our regressions, we include two sets of time-varying controls. The first are facility-specific and include patient headcounts, nurse workdays, and tracer item stockout rates. These could all jointly impact care quality and the indicators we use, and be correlated with changes in standards adherence. We assume that these are themselves not affected by prioritization status. 9 In addition, we include quarterly regional SES controls constructed from Statistics South Africa's GHS, as with the score regressions.

| Econometric specification
For these quality outcomes, we again adopt a difference-in-differences approach. However, in this instance, we have monthly data reported for multiple periods prior to the introduction of the program and for multiple periods after it. We incorporate facility and month fixed effects and estimate regressions of the following general form:

y_ft = β(Post_t × Prioritized_f) + θ′X_ft + γ_f + δ_t + ε_ft

where Post_t is an indicator variable taking the value 1 for periods after the introduction of the program, Prioritized_f is an indicator for facilities prioritized in the initial scale-up of the program, X_ft are time-varying facility and regional SES controls, γ_f are facility fixed effects, δ_t are time fixed effects, and ε_ft is an idiosyncratic error term. Standard errors are clustered at the facility level.
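The two-way fixed-effects version can be illustrated by within-transforming (demeaning by facility and by month) before estimating. The panel below is simulated, and the single-regressor within formula stands in for the full specification with controls:

```python
import numpy as np

# Illustrative two-way fixed-effects DiD on a simulated monthly facility
# panel: demeaning by facility and by month removes gamma_f and delta_t,
# after which the within estimator recovers beta. All values are
# simulated; controls X_ft are omitted.
rng = np.random.default_rng(1)
n_fac, n_t = 100, 24
prioritized = (np.arange(n_fac) < 50).astype(float)
post = (np.arange(n_t) >= 12).astype(float)        # program starts at month 12
treat = prioritized[:, None] * post[None, :]       # Post_t x Prioritized_f
fac_fe = rng.normal(0.0, 3.0, n_fac)[:, None]      # gamma_f
month_fe = rng.normal(0.0, 1.0, n_t)[None, :]      # delta_t
y = fac_fe + month_fe + 1.5 * treat + rng.normal(0.0, 1.0, (n_fac, n_t))  # true beta = 1.5

def demean_two_way(a):
    """Within transformation for a balanced facility-by-month panel."""
    return a - a.mean(axis=0, keepdims=True) - a.mean(axis=1, keepdims=True) + a.mean()

ty, tx = demean_two_way(y), demean_two_way(treat)
beta = (tx * ty).sum() / (tx ** 2).sum()
print(round(beta, 2))  # close to the true effect of 1.5
```

Because the panel here is balanced, a single pass of two-way demeaning fully absorbs both sets of fixed effects; with an unbalanced panel, dummy-variable or iterative demeaning approaches would be needed instead.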
A key assumption underlying the difference-in-differences approach is that prioritized facilities would have exhibited the same trends in the outcome measures as nonprioritized facilities in the absence of the program (Wing, Simon, & Bello-Gomez, 2018). Under this assumption, the impact of the program is the sole contributor to any differential change in ICRMP scores or quality indicators. While it is not formally possible to test for parallel trends, we present a simple visual heuristic by plotting unadjusted mean outcomes for the NIDS indicators over time (Figure 1). 10 These plots suggest that, while there are some level differences in the indicators across prioritized and nonprioritized facilities, the trends followed by these indicators were broadly similar prior to the introduction of the program.
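A minimal numerical version of this heuristic, on simulated pre-period group means, amounts to comparing fitted linear pre-trends across the two groups; a level gap between groups is compatible with parallel trends, whereas differing slopes are not. All values below are simulated:

```python
import numpy as np

# Numerical sketch of the visual parallel-trends heuristic: fit a linear
# trend to each group's unadjusted pre-period means and compare slopes.
# All values are simulated.
rng = np.random.default_rng(2)
months = np.arange(12.0)                                   # pre-period months
mean_prior = 50.0 + 0.5 * months + rng.normal(0, 0.3, 12)  # prioritized group means
mean_nonpr = 45.0 + 0.5 * months + rng.normal(0, 0.3, 12)  # nonprioritized group means

slope_prior = np.polyfit(months, mean_prior, 1)[0]
slope_nonpr = np.polyfit(months, mean_nonpr, 1)[0]
print(round(slope_prior - slope_nonpr, 2))  # near 0 despite the 5-point level gap
```

Similar pre-period slopes support, but cannot prove, the counterfactual assumption, since parallel trends concerns untreated potential outcomes in the post-period as well.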

| Robustness checks
We conduct a series of robustness checks. First, we conduct specification checks which vary the inclusion of our sets of controls. 11 Second, we cluster standard errors at the district level. 12 Third, the DHIS data drawn on for the primary care quality indicator analysis consist of indicators with some nontrivial degree of missingness, as well as some outlier, out-of-range, or implausible values. 13 To assess the extent to which these data issues may influence our findings, we: (i) restrict our sample to only facilities that are not missing any data and to only facilities that do not report any outliers, and (ii) impute missing and outlier values via multiple imputation. Fourth, as implementation of the DHIS was one of the targets of the ICRMP standards, and accordingly observed changes in DHIS indicators by facility prioritization could have been driven by differential improvement in DHIS implementation across prioritized and nonprioritized facilities, we replicate our analysis excluding facilities that did not satisfy the DHIS standard. 14 Fifth, as the standards score sample differs slightly from the care quality indicator sample, we replicate the care quality analysis restricting the sample to only the facilities in the score sample. 15 The results of these analyses are presented in Section 3 of the Supplementary Appendix.
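The outlier screening and imputation step can be sketched as follows. The median-absolute-deviation rule and the single median imputation below are illustrative stand-ins for the actual screening rules and the multiple imputation procedure used in the robustness checks:

```python
import numpy as np

# Simplified sketch of outlier screening and imputation for a facility's
# monthly indicator series. The MAD rule and single median imputation are
# illustrative stand-ins for the actual procedures.
def flag_outliers(series, k=5.0):
    """Flag values more than k median-absolute-deviations from the median."""
    med = np.nanmedian(series)
    mad = np.nanmedian(np.abs(series - med)) or 1.0  # guard against mad == 0
    d = np.nan_to_num(np.abs(series - med), nan=0.0)  # never flag missing values
    return d > k * mad

def impute(series, flags):
    """Replace flagged and missing values with the median of retained values."""
    out = series.copy()
    bad = flags | np.isnan(series)
    out[bad] = np.nanmedian(series[~flags])
    return out

x = np.array([10.0, 12.0, 11.0, 400.0, np.nan, 13.0])
flags = flag_outliers(x)   # flags only the implausible 400.0
print(impute(x, flags))    # the outlier and the gap both become 11.5
```

In practice, multiple imputation would draw several plausible values per gap and pool estimates across imputed data sets, rather than fixing a single replacement value as done here.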

| RESULTS
We construct two samples of facilities for the two respective analyses from the master list of fixed public primary care facilities (n = 3464). 16 We exclude facilities missing data. For the analysis of changes in standards scores, this produces a sample of n = 2350 facilities, of which n = 1017 were prioritized and n = 1333 were not. Some facilities that were not classified as prioritized did not conduct baseline status determinations and thus are not included in this analysis. For our analysis of primary care quality indicators, we restrict our analysis to a pool of n = 3352 facilities for which data are available. 17 In Table 1, we present descriptive statistics for prioritized and nonprioritized facilities in the samples underlying the two analyses. We compare the socio-economic standing of the placement of prioritized and nonprioritized facilities through the South African Index of Multiple Deprivation (SAIMD) and its four constituent deprivation domains (Noble, Zembe, Wright, & Avenell, 2013). The SAIMD draws on ward-level census data and provides a relative measure of deprivation across four domains: material, employment, education, and living environment, as well as a measure combining the domains (Noble et al., 2013). Barring the employment deprivation index for the standards scores sample, we find no statistically significant differences in these indices across prioritization status. However, when we compare the characteristics of the facilities themselves, in both samples prioritized facilities appear to be slightly larger in scale: they are more likely to be community health centers, see more patients per month, and host more nurse clinical workdays per month.
Consequently, we adopt a regression-based approach whereby we control for time-invariant facility fixed effects and time-varying observable facility characteristics (such as patient headcount, nurse workdays, and socio-demographic characteristics) that could otherwise bias inferences regarding the differential effects of prioritization over time.
In Figure 2, we depict mean ICRMP standards scores for prioritized and nonprioritized facilities, before and after the first year of program implementation. We find greater improvements in standards scores among the prioritized facilities relative to the nonprioritized facilities. The mean aggregate score for prioritized facilities increased by 11 percentage points, while the change for nonprioritized facilities was not statistically different from zero. When examining scores by component of the checklist, there is greater heterogeneity, with some scores indicating improvement for the nonprioritized facilities. For two components, Support Services and Infrastructure, there is a decrease in score for both groups, although the reduction is smaller in magnitude for the prioritized facilities. 18 These results suggest the fidelity of the implementation of the ICRMP's quality improvement efforts was consistent with the prioritization of facilities.
In Table 2 we present the results of our regression adjusted difference-in-differences analysis of the impact of prioritization on standards scores. In column (1), we find that with the implementation of the program, the aggregate checklist scores improved by 11.06 percentage points. In columns (2) to (10), we analyze changes for components of the checklist. Across these components, consistent with Figure 2, we find large and significant positive effects of prioritization. For instance, the Integrated Clinical Services Management component of the checklist saw a 12.31 percentage point improvement in that component's score.
In Table 3, we present the results of difference-in-differences specifications assessing the impact of ICRMP prioritization on care quality indicators. Our results here are mixed, and where a positive effect is observed, its magnitude suggests minimal impact of prioritization on care indicators: we find a statistically significant increase of 1.48 percentage points in early antenatal visit coverage in prioritized relative to nonprioritized facilities. This, however, is a small change relative to an underlying improvement of 16.52 percentage points observed across all facilities (see Figure 1 and the "Post" coefficient in Column (1) of Table 3). We find null effects on full immunization coverage among infant patients and no significant effect of the program on cervical cancer screening, and a relative reduction in the rate of initiation of new eligible HIV patients onto IPT.
In addition to the primary regressions, we conducted various robustness analyses. These are presented in the supplementary appendix in Tables S4-S12. While there is some variation in our point estimates, the results are similar quantitatively and qualitatively across the various approaches we adopt to handle missingness and outliers. Moreover, we find little change in the DHIS standard over time for both prioritized and non-prioritized facilities that could otherwise impact inferences on the effect of the ICRMP's prioritization.

| DISCUSSION
While universal health coverage (UHC) has served as a guiding principle for national and international development goals, the quality of the healthcare system to which UHC enables access ultimately constrains UHC's potential population health benefits. In many LMICs, improving quality of care is challenged by limited public financial resources, restricted human resources and capacity for training, poor infrastructure and systems reflecting historical investment, and significant need for healthcare arising from poverty-related disease burdens.

Notes: Excludes facilities which did not complete status determinations in both periods (including Western Cape facilities, which did not participate in 2015/2016). All specifications include facility fixed effects and regional SES controls. SES controls include medical aid coverage, urban population, education completion, piped water coverage, toilet access, household size, and youth population. For ease of reading, we have omitted control coefficients, and as such caution should be taken in interpreting the constant term coefficient. Robust standard errors in parentheses. ***p < 0.01, **p < 0.05, *p < 0.1.

T A B L E 3 Impact of the ICRMP on primary care quality indicators
Notes: All specifications include facility fixed effects and regional SES controls. SES controls include medical aid coverage, urban population, education completion, piped water coverage, toilet access, household size, and youth population. For ease of reading, we have omitted control coefficients, and as such caution should be taken in interpreting the constant term coefficient. Robust standard errors in parentheses.
***p < 0.01, **p < 0.05, *p < 0.1.

The ICRMP, South Africa's chosen approach to primary care quality improvement, provides a set of standards primary care facilities should strive to adhere to, as well as a means to satisfy those standards through SOPs, district-based support, and external assessment. We find that the program's quality improvement efforts significantly improved performance in prioritized facilities, as measured by the ICRMP standards. In contrast, in the absence of quality improvement processes and support, the average aggregate standards score in nonprioritized facilities did not change through the first year of the program's implementation. This suggests two things. First, the mode of the program's implementation, empowering facility managers with a set of standards, the means to enact those standards, and the support of a district-based team for guidance and peer review, is effective in improving checklist performance. Second, exposure to the standards alone produced little to no change: facility managers appear to need additional support to achieve improvements in their facilities' adherence to these standards.

These findings are broadly consistent with studies of other similar interventions in the global North, in particular the European Practice Assessment (EPA; Lester & Roland, 2010). Although developed for a significantly different setting, the principles and structure of the EPA are similar to those of the ICRMP: it is structured around practice managers implementing a checklist-based assessment tool with some external support and auditing (Goetz et al., 2015). Notwithstanding some methodological limitations of this literature, including the absence of control or comparison groups, these findings suggest that implementation of the EPA generates improvements in performance as measured by the assessment tool (Goetz et al., 2015). An adaptation of the EPA model to hospitals and health centers in Kenya suggests similar improvements on the assessment tool's measurements (Marx et al., 2018). These studies, alongside ours, suggest that standards-based quality improvement interventions can improve quality as captured by the set of standards; however, this literature says less about the impact of such interventions on clinical processes or performance.
We were also interested in whether the introduction of the ICRMP affected the quality of primary healthcare services themselves. Here we find that, while the ICRMP generated improvements in standards adherence over the first year, program prioritization had a limited, though in places statistically significant, impact on our measures of clinical activity and output across public primary healthcare service areas. Early antenatal care usage improved in prioritized facilities relative to control facilities, but there was no significant change in immunization coverage or cervical cancer screening, and a marginally significant decrease in prophylactic TB care for newly HIV-diagnosed patients.
This study is subject to some limitations. First, the differences-in-differences identification strategy rests on the parallel trends assumption: conditional on observables, and in the absence of treatment, the outcome of interest would have followed a similar trajectory in treated and control units (Wing et al., 2018). For the clinical indicators, we can assess this only in the preintervention period, where we find no differences in trends across the two groups of facilities. For the ICRMP standards scores, however, we observe scores only once before implementation of the improvement efforts began and are therefore unable to observe or test for parallel trends.
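The two-group, two-period logic underlying this design can be illustrated with a small simulation. The sketch below is purely illustrative, not the paper's actual specification or data: the variable names, the simulated 10-point treatment effect, and the use of statsmodels with facility-clustered standard errors are all assumptions for the example.

```python
# Illustrative difference-in-differences sketch on simulated facility
# panel data (hypothetical variables; not the study's real estimation).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # simulated facilities per arm

# Two observations (pre/post) per facility, half of facilities prioritized.
df = pd.DataFrame({
    "facility": np.tile(np.arange(2 * n), 2),
    "prioritized": np.tile(np.repeat([1, 0], n), 2),
    "post": np.repeat([0, 1], 2 * n),
})
df["score"] = (
    60
    + 5 * df["prioritized"]                 # baseline level difference
    + 2 * df["post"]                        # common time trend (both arms)
    + 10 * df["prioritized"] * df["post"]   # treatment effect of interest
    + rng.normal(0, 3, len(df))
)

# The coefficient on the interaction term is the DiD estimate; standard
# errors are clustered at the facility level.
model = smf.ols("score ~ prioritized * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["facility"]}
)
print(round(model.params["prioritized:post"], 2))
```

Because the common time trend (+2) affects both arms, it is differenced out, and the interaction coefficient recovers the simulated effect of roughly 10 points.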
Other limitations stem from the data available for analyzing the impact of the ICRMP. First, we rely on status determinations undertaken by facility managers, who, particularly in the prioritized group of facilities, may have felt some pressure to inflate their scores. However, we do not expect inflation to be a significant concern, as these facilities were also subject to peer-review status determinations undertaken by teams from other provinces, so inflated self-assessments risked detection (Hunter et al., 2017). Second, we draw on the DHIS, which, while capturing a rich set of clinical outputs and outcomes, does not allow us to observe provider activities directly; instead, we observe monthly indicators of certain activities from which we must infer process adherence. Our study nonetheless provides an example of how data collected through the DHIS platform, used widely in LMICs, can be used for health system policy evaluation.

Our findings suggest that this flagship quality improvement program was implemented with some fidelity. The improvement in satisfaction of the standards was driven by the support and resources provided to the prioritized facilities, not solely by the introduction of the standards themselves. More challenging, however, is that the underlying changes implied by improved performance on facility standards did not translate into improvements in the clinical service provision indicators we study. While promising, the ICRMP may require more than 1 year to show a greater impact. The elements that constitute the current facility standards could perhaps be targeted more closely to clinical process indicators, rather than retaining their largely structural orientation.
However, some quality determinants, including the general workload faced by providers, training, motivation, and financial and non-financial incentives, are beyond the control of facility managers. None of these levers are available to