Evaluating Oregon's occupational public health surveillance system based on the CDC updated guidelines

Abstract

Background: The Oregon Occupational Public Health Program (OOPHP) monitors occupational health indicators (OHIs) to inform occupational safety and health (OSH) surveillance. In 2018, OOPHP evaluated the performance of the OSH surveillance system and identified areas for future improvement.

Methods: Following the Centers for Disease Control and Prevention (CDC) updated guidelines for evaluating public health surveillance systems, the OOPHP evaluation team engaged internal and external stakeholders using a mixed-methods approach. Operational measures for ten surveillance attributes were developed. Multiple data collection methods resulted in credible evidence for evaluation conclusions. Analyses included summary statistics and qualitative analysis of interviews, a focus group, and online surveys.

Results: Twenty stakeholders took part in this evaluation, with an average participation rate of 55%. Results showed the Oregon OSH surveillance system was simple, flexible, and highly accepted by its stakeholders. Funding security presents challenges for stability. A lack of timeliness of OHIs, low relevance of OHIs to local OSH issues, and the system's ineffectual data dissemination all limit the usefulness of the OSH surveillance system. A review of key data sources for the system showed good data quality and predictive value positive, but relatively poor sensitivity and representativeness.

Conclusions: The evaluation team successfully adapted attributes and examples in the CDC guidelines to this Oregon OSH surveillance evaluation. The evaluation findings have informed the development of recommendations for improvements to OOPHP's OSH surveillance. Future research is needed to develop guidance specific to OSH surveillance evaluation.

| INTRODUCTION
Occupational safety and health (OSH) surveillance activities were formalized in the United States in 1970 with the enactment of the Occupational Safety and Health Act.2 The National Institute for Occupational Safety and Health (NIOSH), under the Centers for Disease Control and Prevention (CDC), supports national and state-level OSH surveillance programs.3 Currently, NIOSH funds 26 states to conduct state-level OSH surveillance programs.
In the long term, NIOSH envisions that all states will have the capacity to conduct OSH surveillance and contribute to national, state, and local prevention efforts.3,4 To strengthen states' OSH surveillance capacity, the Council of State and Territorial Epidemiologists (CSTE) occupational health surveillance workgroup, in collaboration with NIOSH, has developed and periodically updated occupational health indicators (OHIs) as the minimum state surveillance capacity since the early 2000s.4-7 The OHIs are a set of measures of prioritized OSH conditions covering work-related injuries and illnesses, exposures, hazards, intervention efforts, and socioeconomic impacts.

| METHODS
The CDC Updated Guidelines provide generic recommendations for the evaluation of public health surveillance systems but lack the detailed information needed to guide the evaluation process.8-10 In particular, they lack specifics pertaining to occupational health surveillance.
As a result, the evaluation team had to develop a detailed methodology for evaluating the Oregon OSH surveillance system based on the general principles in the guidelines, including methods for engaging stakeholders and collecting data.
The overall evaluation approach followed the six tasks recommended in the CDC Updated Guidelines.

Describe the surveillance system and determine the scope of work: Information on the system's work process, surveillance methodology, data sources, organizational structure, and IT infrastructure was collected through a thorough review of the system's working documents, onsite observation, and communication with program leadership and staff. An evaluation team, composed of evaluators from OSU and the program's leadership and staff, determined the scope of work through formal discussions.
Given the limited time and resources for the evaluation, the evaluation team selected for assessment three key OHI data sources over which OOPHP might have influence: the inpatient hospital discharge (HD) data, the disabling workers' compensation (WC) data, and the Adult Blood Lead Epidemiology and Surveillance (ABLES) data. These three key data sources and the corresponding OHIs calculated from them are shown in Table 1.
Identify and engage stakeholders: Based on a thorough understanding of the Oregon OSH surveillance process, the evaluation team identified major internal and external stakeholders from OSH regulatory, academic, public health, and WC organizations. The team grouped representatives into program leadership (including higher-level leaders and the program's management and key personnel), key surveillance staff, external experts, data providers, disseminators, and users. Stakeholders were further ranked into three levels based on their involvement with the system to facilitate the design of the evaluation approach. To inform and engage stakeholders, the evaluation team gave formal presentations and reached out by email to introduce the evaluation project and describe the data collection methodology.
Develop the evaluation approach: The CDC Updated Guidelines recommend 10 surveillance attributes for assessing a surveillance system's data quality and performance. The evaluation team sorted them into three categories: performance (simplicity, flexibility, acceptability, timeliness, and stability), data quality (data quality, sensitivity, predictive value positive [PVP], and representativeness), and overall usefulness. For the performance and overall usefulness attributes, the evaluation focused on the whole OOPHP and its OSH surveillance system, while for the data quality attributes, the evaluation was limited to the three key data sources and associated OHIs (Table 1). A core task in the evaluation was to design a practical evaluation approach for assessing the ten attributes. The evaluation team referred to both the CDC Updated Guidelines and other surveillance evaluation literature to develop a set of operational measures for assessing each attribute and to specify data collection and analysis methods for each measure (Table 2).1,11-14

Five main data collection methods were used in this evaluation: semi-structured interviews, a focus group discussion, online surveys, a comprehensive document/literature review, and onsite observations. The most suitable methods were selected for each measure. For example, we conducted a focus group and interviews among the system's leaders and key personnel to solicit in-depth discussions of the system's flexibility, stability, and usefulness, while soliciting only general perspectives on a few attributes, such as acceptability and usefulness, in an online survey of external experts and other stakeholders with a low level of involvement in the program. Table 3 shows the data collection methods, types of participants, and the corresponding attributes for which evaluation evidence was collected.
Gather credible evaluation evidence: Based on the methods specified above, the evaluation team developed data collection protocols, including interview and focus group guides and survey questionnaires (Supporting Information Appendix). All data collection guides and questionnaires were pretested by more than three evaluators and researchers at OSU.
Semi-structured interviews were conducted by phone or in person, depending on the participants' convenience. The focus group discussion was conducted in person. The online surveys were delivered via Qualtrics. Stakeholders' participation in each data collection activity is summarized in Table 3.

Analyze collected evidence and make conclusions: Interviews and the focus group discussion were audio-recorded, transcribed, and coded for themes. Mixed methods were used for data analysis. Qualitative summaries were reported by reviewing evaluation evidence collected from different sources, with quantitative statistics used whenever possible.
For system performance and the overall usefulness, judgments were reached by consensus of the evaluation team for each attribute.
To assess overall data quality, the evaluation team rated each measure of the data quality attributes on a 5-point scale, with 1 indicating the worst quality and 5 the best quality. Average ratings were calculated for each attribute and each key data source. An overall average score was then calculated to quantify the system's data quality.
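To make this aggregation concrete, the sketch below computes per-attribute, per-source, and overall averages from measure-level ratings. This is a minimal illustration: the data structure, source/attribute names, and rating values are hypothetical assumptions, not the evaluation's actual data; only the averaging scheme follows the description above.

```python
from statistics import mean

# Hypothetical 1-5 ratings of individual data quality measures,
# keyed by (data source, attribute). Values are illustrative only.
ratings = {
    ("HD", "sensitivity"): [3, 4],
    ("HD", "representativeness"): [4, 4],
    ("WC", "sensitivity"): [4, 3],
    ("WC", "representativeness"): [4, 5],
    ("ABLES", "sensitivity"): [4, 4],
    ("ABLES", "representativeness"): [4, 5],
}

# Average rating for each attribute, across all sources and measures.
attr_avg = {
    attr: round(mean(r for (_, a), rs in ratings.items() if a == attr for r in rs), 1)
    for attr in {a for (_, a) in ratings}
}

# Average rating for each key data source, across all attributes and measures.
source_avg = {
    src: round(mean(r for (s, _), rs in ratings.items() if s == src for r in rs), 1)
    for src in {s for (s, _) in ratings}
}

# Overall average score quantifying the system's data quality.
overall = round(mean(r for rs in ratings.values() for r in rs), 1)

print(attr_avg)    # e.g., {'sensitivity': 3.7, 'representativeness': 4.3}
print(source_avg)  # e.g., {'HD': 3.8, 'WC': 4.0, 'ABLES': 4.2}
print(overall)     # e.g., 4.0
```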
Ensure the use of evaluation findings: Evaluation findings were reported to the OOPHP leadership and its advisory committee through a series of meetings, where possible recommendations and feasible action plans were discussed to promote their adoption.
No ethics review or approval was required because the project was classified as program evaluation rather than research.

| RESULTS
Twenty stakeholders took part in 28 data collection sessions, with an average participation rate of 55% (see Table 3).

| The system's performance
A detailed assessment of the five attributes to determine the Oregon OSH surveillance system's performance is shown in Table 4.
Simplicity: The Oregon OSH surveillance system is simple, without complicated designs for data collection, data processing, or case definition. The work process is straightforward.
Flexibility: The OHI methodology guide is regularly updated to add new OHIs or adjust the data sources of existing OHIs to reflect changes in the field. The system has displayed high flexibility in adopting these changes since 2004, when it started to track OHIs.15 We identified past examples that demonstrate the system's flexibility in responding to local OSH needs. For instance, a "Story Map" project in 2018 produced OHIs for local use based on county-level data and a state list of hazardous industries.16

Acceptability: The system was rated as highly accepted. Stakeholders' average willingness to collaborate with the system was rated 4.8 on a 5-point scale, with 1 indicating the least accepted and 5 the most accepted. Stakeholders were actively involved in the system's activities.
Timeliness: Although the Oregon OSH system can produce OHIs in a timely fashion once data are available, there was a 2- to 3-year gap between the occurrence of an occupational health event or case and the generation of the corresponding OHI. For example, the 2015 OHI report was produced in mid-2018.

| Data quality
Four attributes (data quality, sensitivity, PVP, and representativeness) were used to assess data quality. Table 5 summarizes the results for each measure and for each of the three key data sources (the inpatient HD data, disabling WC data, and ABLES data).
Overall, the Oregon OSH surveillance system's data were fairly good in data quality and PVP (ratings: 4.3 each), but had lower scores for sensitivity and representativeness (ratings: 3.6 and 4.2, respectively), due to the under-reporting and undercoverage of these data sources commonly reported in the existing literature.7,17-19,22,23 The ABLES data were rated relatively higher in sensitivity (rating: 4.0), considering the mandatory medical examination requirement for lead-exposed workers and the active case follow-up in the ABLES system, which help identify more true cases.19,25 Among the three data sources, the disabling WC data and ABLES data had relatively higher overall scores (ratings: 4.3 each). The inpatient HD data had the lowest score (rating: 3.7), due largely to concerns about HD data quality issues reported in the existing literature.17,18 The overall average rating for the Oregon OSH surveillance system was 4.1, suggesting relatively good overall data quality.
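As a rough consistency check, the reported overall rating agrees with a simple unweighted mean of the three source-level scores (an assumption about the aggregation, which the text implies but does not state explicitly):

(4.3 + 4.3 + 3.7) / 3 = 12.3 / 3 = 4.1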

| Overall usefulness
Stakeholders' average rating of the relevance of the system's objectives ...

| Factors limiting usefulness
The evaluation identified a few main factors limiting the Oregon OSH surveillance system's usefulness, including the lack of timeliness of OHIs, the lack of active data use and distribution, and the limited usability of OHIs in guiding local OSH practices.

Scale of OHIs: Current OHIs are produced at the state level and do not describe conditions in local locations. As such, they cannot identify local risks and populations at risk. States could work with partners to develop disaggregated OHIs with local-level information.
Data dissemination and data use: As pointed out in other surveillance evaluations, broader data dissemination is an important way to improve surveillance usefulness.28,29 Although OOPHP produces an annual OHI publication, there has been a disincentive to promote OHI data. Stakeholders commented that they did not think "this data is widely published or leveraged." Program leadership and key staff identified a few issues affecting data dissemination and use.
First, they were unsure about how OHIs could be used to guide prevention practices, given the long lag and the lack of substate-level data. As such, OOPHP had difficulty targeting end users who might use the information and recommendations for prevention interventions.

The usefulness of public health surveillance relies on the effective production and use of data to improve health research and practice. Given the OHIs' scale and timeliness limitations and the resulting lack of effective data, the Oregon OSH surveillance system did not demonstrate its usability among end users.

| Study limitations
Limited by the available time and resources, the evaluation team conducted a primarily qualitative assessment of the data quality attributes and limited the evaluation to selected data sources. Quantitative analysis of data quality attributes such as sensitivity and specificity was not performed. The evaluators felt that it was infeasible to include a quantitative data quality assessment in a routine surveillance evaluation, given the time and tools needed. Special studies are required for a more thorough analysis of data quality.
The evaluation team identified a comprehensive list of stakeholders and actively sought their participation. Selection bias might exist among the participating stakeholders, since they may hold a more positive attitude toward the system. We noticed that the online survey had a relatively low response rate and that stakeholders who did not respond tended to participate less actively in the program's routine activities. This indicates a challenge in engaging stakeholders with lower levels of involvement. Few data users were identified or included in this evaluation due to the lack of data usage. However, a strength of the evaluation was the use of multiple information sources to collect evaluation evidence, which helped minimize bias from individual stakeholders.

| CONCLUSION
OOPHP has reported OHIs since 2004 to track trends in major occupational injuries, illnesses, deaths, and hazards at a state-wide level. A comprehensive evaluation conducted in 2018 found that overall the OSH surveillance system has many positive attributes.
The system was very simple and highly accepted by its stakeholders.
It was flexible in accommodating changes related to OHIs and other surveillance activities. The system was stable; however, limited resources and insecure long-term funding present challenges to improving surveillance and sustaining the program. Assessment of three key data sources showed that the surveillance data had fairly good quality but were relatively weak in sensitivity and representativeness.
The lack of timeliness and the limited usability of OHIs in guiding local OSH practices create a disincentive for active data dissemination, reducing the usefulness of the Oregon OSH surveillance system.
OOPHP should enhance the capacity of its surveillance system to use existing and new data sources to produce timely, substate-level information that describes local occupational health burdens and disparities, promote active data dissemination, and foster collaborations to advance occupational public health interventions.