Tradeoffs of using place‐based community science for urban biodiversity monitoring

Community science, the enlisting of volunteers to collect biodiversity data, is now common and widespread. In theory, the benefits of this model are complementary: community science programs produce useful datasets while engaging the public in conservation. However, in practice there may be tradeoffs regarding data quality, economic cost, and public engagement, which are rarely quantified. To assess program structure and relative outputs, we evaluated the tradeoffs associated with (a) employing paid technicians or (b) recruiting community science volunteers to collect bird and butterfly data for an urban biodiversity project. We found mixed results for data quality; the probability of detecting human‐adapted species was similar between technicians and community scientists, but community scientists were less likely to detect human‐sensitive species and overreported the abundance of human‐adapted birds. Habitat use estimates for four out of five species were comparable between the two datasets, although uncertainty was greater for community science data. Community scientists were more efficient in terms of economic cost, producing one and a half times more bird and two times more butterfly surveys and detections per paid work‐hour. Community scientists improved their ability to identify local species (birds by 16%; butterflies by 31%). Lastly, community scientists reported an increased interest in educational or volunteer programs after participating in monitoring but were less likely to report an increased interest in taking conservation action. To improve data quality, we recommend that training focus on identifying human‐sensitive species and tracking multiple individuals of the same species during surveys. To catalyze changes in attitudes, programs should focus on recruiting members of the public with diverse preexisting attitudes toward conservation. 
Although we demonstrate some shortcomings, our findings add to a growing body of literature that suggests community science can increase scientific literacy and efficiently produce data of similar quality to technicians, particularly for common species.


| INTRODUCTION
Community science, a more inclusive and accurate term for "citizen science" (Meehan, Michel, & Rue, 2019), is a model for engaging nonprofessionals in scientific research (Dickinson et al., 2012; Noss, 2020). The scale and focus of engagement differ across programs: some large programs, such as Cornell's eBird (https://ebird.org/home), provide bird occurrence data across the world, while smaller programs, such as Chicago's Wildlife Watch (https://www.zooniverse.org/projects/zooniverse/chicago-wildlife-watch), are implemented at the city level. Similarly, there are different models for how scientists might engage with the public, ranging from contractual programs, in which communities ask professional scientists to conduct research, to cocreated programs, in which community members and scientists work together on research. In general, however, community science programs seek to collect useful data while also engaging the public, and there is evidence that some programs can achieve these goals (Bonney, Phillips, Ballard, & Enck, 2016; Brossard, Lewenstein, & Bonney, 2005).
Yet, community science has many tradeoffs and potential limitations as a tool for collecting rigorous data efficiently while also engaging the public. For instance, the training necessary for collecting high-quality data differs by task, with some protocols requiring more intensive training, such as in-field training sessions (Kremen, Ullman, & Thorp, 2011; Newman, Buesching, & Macdonald, 2003; Noss, 2020), which may limit the number of willing volunteers and a program's capacity to train and manage them. Studies also increasingly acknowledge that volunteers for community science projects are disproportionately white, older, affluent, and well-educated, and hold strong preexisting environmental attitudes, potentially limiting opportunities to engage new communities and expand social networks for conservation (Chase & Levine, 2018).
As a result, organizations implementing community science programs that aim to engage the public and collect useful data may face tradeoffs regarding data quality, the costs of data collection, and effective public engagement. Any entity that manages a place-based ecological monitoring program will need to balance project objectives with budgetary considerations to determine an appropriate level of public participation in research and monitoring.
Previous work does offer strong evidence that achieving these goals individually is possible. For example, given appropriate training, community science data can be as precise (Lewandowski & Specht, 2015) and more cost-effective than data collected by professionals (Gardiner et al., 2012; Goldstein, Lawton, Sheehy, & Butler, 2014; van der Velde et al., 2017). In addition to community scientists' contributions to research, many studies have demonstrated that community science programs can be a highly effective avenue for educating the public on ecological issues (Bonney et al., 2016; Crall et al., 2013; Jordan, Gray, Howe, Brooks, & Ehrenfeld, 2011), and participation in these programs can also reinforce proenvironmental attitudes and strengthen social networks within communities (Chase & Levine, 2018). However, while these components have been evaluated individually, potential tradeoffs between community science and other models of data collection (e.g., by trained technicians) are rarely evaluated in quantitative terms. Further, to our knowledge, the tradeoffs between data quality, the economic cost of collection, and effective public engagement have not been assessed together within one program. This lack of rigorous comparison makes it difficult for organizations implementing smaller, place-based community science programs to decide how to strike a balance between collecting data that can inform land management and engaging the public.
Our objective was to quantitatively compare the tradeoffs of employing paid technicians to managing a team of volunteers through a community science program, with the goal of enhancing conservation for wildlife and people in an urban environment. Using the City of Fort Collins' Nature in the City Biodiversity Project, we compared these two approaches to collecting ecological data by assessing the following outcomes: data quality, economic cost, and public engagement (Supporting Information  Table S1). This framework and our results could be applied to guide place-based, ecological monitoring programs where engaging the public in science-based decisions can help conserve open space in growing communities.

| Study site
Fort Collins is located in Colorado, United States. The city has an estimated population of 167,830 and grew by 15.9% from 2010 to 2018 (U.S. Census Bureau, 2018). Like many rapidly growing cities, Fort Collins faces the challenge of conserving habitat and engaging an increasingly urban population (Spear, Pauly, & Kaiser, 2017).
In response to these challenges, the City of Fort Collins launched the Nature in the City initiative. The initiative was collaboratively produced with Colorado State University and adopted in 2015 by the City of Fort Collins with the vision of creating "a connected open space network accessible to the entire community that provides a variety of experiences and functional habitat for people, plants, and wildlife" (City of Fort Collins, 2018). As part of this initiative, the Nature in the City Biodiversity Project is an ecological monitoring program that recruits teams of community scientists and paid technicians to survey for birds and butterflies in diverse types of urban open space throughout the city. The goals of this project (https://www.fcgov.com/natureinthecity/volunteer-biodiversity-project) are to engage the public in local conservation while also collecting ecological data in urban open spaces (public- or privately-owned green spaces that vary in size from 2 to >200 ha) that can inform local city land management. Specifically, the City of Fort Collins Natural Areas Department's goal is to use the data collected by this monitoring program to identify areas where habitat connectivity can be protected or restored and to inform land acquisition and management. The program's structure and its dual ecological and social goals make the Nature in the City Biodiversity Project an ideal model for assessing the tradeoffs of implementing a community science program versus collecting data with paid technicians.

| Technician surveys
In 2014, paid technicians conducted an initial ecological assessment of 166 urban open spaces in Fort Collins, surveying for all bird and butterfly species (Supporting Information Figure S1). This assessment was repeated in 2018. Technicians surveyed each location three times between May 15 and June 30, when resident birds typically nest and breed and detection probability is relatively high (De Wan et al., 2009), using 5-min point counts (Ralph, Droege, & Sauer, 1995). Technicians conducted point counts at each sampling point between 06:00 and 10:00. The three surveys at each site were conducted by at least two different trained observers. Observers recorded all bird species seen and heard within a 50 m radius.
Technicians also surveyed each site for butterflies three times between July 1 and August 15 in 2014 and 2018. They conducted Pollard walks, a common method for assessing butterfly abundance (Pollard, 1977), along two 50 m transects within each site. Technicians located the start and end of each transect using GPS units. Unlike point counts, Pollard walks were not limited by time. Rather, the observer walked slowly (heel-to-toe) along each 50 m × 12 m belt transect, recording the species and abundance of all butterflies that traversed the transect.

| Community science program
Community scientists were recruited through social media, newsletters, signage at local businesses, and by word-of-mouth throughout April and early May. Recruitment messages were broadly aimed at community members over the age of 18 who were interested in conducting bird and butterfly surveys at local green spaces and included basic information about the Nature in the City program.
From 2015 to 2018, community scientists surveyed a revolving subset of 45 sites out of the total 166 sites. In 2018, technicians conducted a full assessment of all 166 sites. Thus, in 1 year (2018), paid technician and community science monitoring programs were run in parallel. Before each season began, we trained community scientists to identify 15 bird and 10 butterfly species by sight and sound in grouped classroom training events (Supporting Information Table S2). Limiting the number of species that community scientists survey is a common practice for reducing variability in data quality (Freitag, Meyer, & Whiteman, 2016). We selected 15 bird and 10 butterfly species based on the criteria that they were relatively common in the city, easily identifiable to species, and relevant to the Nature in the City initiative's conservation goals. After classroom trainings were completed, community scientists were given in-field training on how to conduct bird and butterfly surveys based on the same survey techniques, described above, used by paid technicians. We maintained communication with participants via email to answer questions about species identification and protocols throughout the program.
We constructed pre- and post-program volunteer surveys adapted from preexisting survey instruments, which assess ecological knowledge, conservation attitudes, and behavioral intentions (Merenlender, Crall, Drill, Prysby, & Ballard, 2016; Phillips, Ferguson, Minarchek, Porticella, & Bonney, 2014; Toomey & Domroese, 2013). The Wildlife Conservation Society's Institutional Review Board reviewed our materials, and we received an exemption because surveys were confidential and posed minimal risk. Our surveys (preprogram: 20 questions; postprogram: 19 questions) assessed community scientists' (a) ability to identify local birds and butterflies, (b) self-efficacy for environmental action, or perception of their ability to address environmental issues, (c) nature relatedness, or an individual's level of connectedness to the natural world (Nisbet, Zelenski, & Murphy, 2009), and (d) behavioral intentions for conservation action, or the degree to which a person plans to engage in conservation-related activities (Ajzen, 1985) (Supporting Information Methods S1). At the beginning of every season, we administered preprogram surveys to community scientists through SurveyGizmo (SurveyGizmo, 2018) from the time they signed up for the program until they began training (April 15-May 1). At the end of every season, we administered postprogram surveys after community scientists had completed their final field surveys and closed the surveys after approximately 1 month (August 15-September 15).

| Economic cost
In order to assess the tradeoffs associated with using paid technicians to collect data themselves compared to using paid technicians to train and work with community scientists to collect data, program coordinators logged their paid work-hours used to implement the technician surveys separately from the paid work-hours necessary to coordinate the community science program in 2018. Paid work-hours for the technician's surveys included preseason tasks (organizational meetings, hiring processes, and technician training time), monitoring season tasks (time spent conducting surveys and driving time) and postseason tasks (data entry and postprogram meetings). Similarly, community science paid work-hours included preprogram tasks (organizational meetings, volunteer recruitment, and volunteer trainings), monitoring season tasks (coordinating volunteers and driving time) and postprogram tasks (data entry, volunteer appreciation events, and postprogram meetings). While there was some overlap in these tasks, coordinators consciously separated paid work-hours as they pertained to technician surveys versus community science surveys on a weekly basis.

| Data quality
We used an unpaired t-test to assess differences in the reported numbers of individual birds and butterflies and a false-positive occupancy modeling framework to assess the probability of a falsely reported species. To ensure appropriate comparisons between technician and community science surveys, we used a subset of the data to assess differences in data quality between these two approaches. Specifically, we limited our analyses to detections of the 15 bird and 10 butterfly species in 2018, when both programs were operating simultaneously, and we included only data collected at the 45 sites surveyed by both technicians and community scientists. As defined and applied by previous studies (Hansen et al., 2005; Odell & Knight, 2001), species were considered human-adapted if the species' abundance, occupancy, or habitat use increased with, or was not significantly affected by, housing density. In contrast, a species was classified as human-sensitive if its abundance, occupancy, or habitat use decreased with housing density (Odell, Pusateri, & White, 2007). Species were categorized as human-sensitive or human-adapted based on how they were classified in previous studies (Supporting Information Table S2). Because classifications are context specific, we used classifications from studies in similar geographic areas and residential settings when available (Farr, Pejchar, & Reed, 2017; Mangan, Piaggio, Hopken, Werner, & Pejchar, 2018; Odell et al., 2007).
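As a minimal sketch of the count comparison, the unpaired t statistic can be computed directly; the per-survey counts below are invented for illustration and are not data from this study:

```python
from math import sqrt

def unpaired_t(x, y):
    """Pooled-variance (Student's) unpaired t statistic for two samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical per-survey counts of one human-adapted species (assumed data)
cs_counts = [6, 8, 7, 9, 10, 8]    # community scientists
tech_counts = [5, 6, 5, 7, 6, 6]   # paid technicians
t_stat = unpaired_t(cs_counts, tech_counts)  # positive t: CS reported more
```

A positive statistic here would correspond to overreporting by community scientists relative to technicians.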
We used a false-positive site occupancy model to estimate the probability that misidentifications occurred based on detection histories at all 166 sites. This technique, introduced by Miller et al. (2011), can be applied when using one "uncertain" and one "certain" survey method. Surveys using the uncertain method may falsely detect a species that is absent (a false positive) or fail to detect a species that is present (a false negative). Surveys using the certain method are assumed to have no false positives, but false negatives may still occur. For this study, we considered the technician surveys to be the certain method and the community scientist surveys to be the uncertain method; accordingly, we fixed the probability of a technician false positive to 0 when running these models. We used the false-positive model to estimate the probability of habitat use (Ψ), the probability of a false positive detection by community scientists (p10), the probability of a true positive detection recorded by a technician (r11), and the probability of a true positive detection recorded by a community scientist (p11). Estimates for species were excluded if their models failed to converge or yielded unrealistic estimates (e.g., occupancy (Ψ) = 1.00). For each species, we compared the estimated probability of true detection by each survey method (p11 vs. r11). We also compared the estimated rates of false positive detections by community scientists (p10) among species.
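The per-site likelihood underlying this model can be sketched as follows; the parameter symbols (Ψ, p11, p10, r11) follow the text, technician false positives are fixed at 0 as described, and the detection history and parameter values are illustrative only:

```python
def site_likelihood(history, methods, psi, p11, p10, r11):
    """Likelihood of one site's detection history under a false-positive
    occupancy model (Miller et al., 2011). history: 1 = species reported,
    0 = not reported, per survey occasion. methods: 'CS' (uncertain,
    community science) or 'T' (certain, technician)."""
    pr_present, pr_absent = 1.0, 1.0
    for det, method in zip(history, methods):
        if method == "CS":
            pr_present *= p11 if det else 1 - p11
            pr_absent *= p10 if det else 1 - p10
        else:  # technician occasion: false-positive rate fixed at 0
            pr_present *= r11 if det else 1 - r11
            pr_absent *= 0.0 if det else 1.0
    return psi * pr_present + (1 - psi) * pr_absent

# Illustrative values only: two community science visits, one technician visit
lik = site_likelihood([1, 0, 1], ["CS", "CS", "T"],
                      psi=0.6, p11=0.7, p10=0.1, r11=0.8)
```

Maximizing the product of these likelihoods over all sites yields the parameter estimates; note that a technician detection forces the "absent" branch to zero, which is what makes the certain method anchor the model.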
For the species that yielded realistic false positive models, we used a single-season occupancy modeling framework to estimate habitat use probabilities from the community science (ΨCS) and the technician (ΨT) detection histories (Mackenzie & Royle, 2005). Bird models were built using all combinations of three site covariates (site area, natural habitat cover within a 300 m buffer, and vegetation cover within a 500 m buffer), which were calculated from spatial landcover data, and two observational covariates (wind level and cloud cover), which were collected during surveys. Butterfly models were built using all combinations of three site covariates (green space cover within 100 and 300 m buffers and shrub cover within a 100 m buffer) and the same two observational covariates. These variables were originally collected to help the City of Fort Collins understand what drives local bird and butterfly habitat use (Supporting Information Methods S2). Before running models, variables were evaluated with a Spearman rank-order correlation (α = .05) and collinear variables were removed. Site covariates and buffer distances were identified as informative predictor variables based on previous analyses (Sushinsky, 2019, unpublished technical report). Models were again excluded if they failed to converge or yielded unrealistic estimates (e.g., Ψ = 1.00). We used Akaike's information criterion (AIC) to select models and report on models with ΔAIC values less than 2 (Burnham & Anderson, 2004). We compared top model estimates from each single-season model set (ΨCS and ΨT) to probabilities estimated using combined detection histories in a false-positive occupancy modeling framework (ΨCS+T).
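The AIC selection step reduces to a small computation; the candidate model names, log-likelihoods, and parameter counts below are invented for illustration:

```python
def aic(log_lik, k):
    """Akaike's information criterion: AIC = -2 ln(L) + 2k."""
    return -2.0 * log_lik + 2.0 * k

# Invented log-likelihoods and parameter counts for three candidate models
candidates = {"area": (-210.4, 3), "area+veg": (-209.9, 4), "null": (-215.0, 2)}
aics = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
best = min(aics.values())
delta_aic = {name: a - best for name, a in aics.items()}
top_models = sorted(name for name, d in delta_aic.items() if d < 2)  # reported set
```

Under these made-up values, the null model falls well outside the ΔAIC < 2 set while the two covariate models are retained and reported together.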

| Economic cost
We summed all paid work-hours invested in the technician program and all paid work-hours invested in community science monitoring of birds and butterflies in 2018 separately. We then divided the total number of surveys and number of bird and butterfly detections by paid work-hours to calculate surveys per paid work-hour and detections per paid work-hour for each monitoring approach.
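This calculation is simple division; the season totals below are made up for illustration and are not the program's actual figures:

```python
def per_work_hour(n_surveys, n_detections, paid_hours):
    """Surveys and detections produced per paid work-hour."""
    return n_surveys / paid_hours, n_detections / paid_hours

# Hypothetical season totals for one monitoring approach (assumed data)
surveys_rate, detections_rate = per_work_hour(200, 750, 100.0)
```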

| Public engagement
To assess the potential effect of year on survey responses, we used log-linear analyses to evaluate the proportion of respondents who reported an increased interest in engaging in conservation activities across years. We did not find an effect of survey year and therefore pooled volunteer survey data from all years (2015-2018) (Supporting Information Table S5).
Our survey analyses were guided by protocols provided by Phillips et al. (2014) and Merenlender et al. (2016). Specifically, we scored questions about respondents' ability to correctly identify photos of local birds and butterflies as the percentage of correct answers out of five. We scored questions concerning respondents' perceptions of their ability to address environmental issues and interest in the natural world on a 7-point Likert scale (Brossard et al., 2005). We used an unpaired t-test to compare the mean score between pre- and post-program surveys (Merenlender et al., 2016).
For the behavioral intention questions, we found no effect of year on the proportion of respondents who expressed an increased interest in any activity (Supporting Information Table S5); therefore, we pooled survey data from all years. We used a two-way chi-squared test to compare the proportions of answers (interest increased, decreased or stayed the same) to each question to a standard proportion provided by a dummy activity ("Reduce my ecological footprint") representing a proenvironmental behavior not related directly to our program (Toomey & Domroese, 2013).
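The comparison of each activity's response proportions to the dummy-activity baseline can be sketched as a Pearson goodness-of-fit statistic; both the response counts and the baseline proportions below are hypothetical:

```python
def chi2_goodness_of_fit(observed, expected_props):
    """Pearson chi-squared statistic comparing observed response counts
    (increased / stayed the same / decreased) to expected proportions."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(observed, expected_props))

# Hypothetical counts for one activity vs. assumed dummy-activity proportions
observed = [70, 25, 5]           # increased, stayed the same, decreased
standard = [0.40, 0.50, 0.10]    # "Reduce my ecological footprint" baseline
stat = chi2_goodness_of_fit(observed, standard)
significant = stat > 5.99        # chi-squared critical value, df = 2, alpha = .05
```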
Our inability to track and recontact program participants across years limited our ability to assess and adjust for nonresponse bias. However, to assess whether there were considerable differences between our pre- and post-program respondent groups, we used a log-linear analysis to compare key demographics (gender, race, education, income, age, and home ownership) between pre- and post-program respondents. This approach allowed us to make this assessment while accounting for differences in survey responses between years (Chambers & Welsh, 1993).
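A simpler two-way analogue of this demographic comparison (the full log-linear analysis also conditioned on year) is a Pearson test of independence; the homeowner/renter counts below are hypothetical:

```python
def chi2_independence(table):
    """Pearson chi-squared statistic for a two-way contingency table
    (rows: pre- vs. post-program respondents; columns: demographic levels)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical pre- (top row) and post-program (bottom row) counts
stat = chi2_independence([[24, 8], [14, 4]])
groups_similar = stat < 3.84  # chi-squared critical value, df = 1, alpha = .05
```

A small statistic, as here, would correspond to the finding that respondent demographics did not differ between the two groups.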
Throughout our frequentist analyses, we considered a p-value <.05 to be statistically significant.
Single-season habitat use models for four bird and one butterfly species converged and yielded realistic estimates (Figure 3). Top model habitat use estimates were comparable, with overlapping confidence intervals, when using community science and technician detection histories for American Robin, Northern Flicker (Colaptes auratus), Western Meadowlark, and Cabbage White (Figure 3). For one species, Red-winged Blackbird (Agelaius phoeniceus), top model habitat use estimates were higher when using community science detection histories, and confidence intervals did not overlap. The covariates included in top model sets differed for only one of the five species (Supporting Information Table S3).

| Economic cost
The community science program was more cost-efficient than paid technicians for collecting bird and butterfly data, in terms of both surveys per paid work-hour (Community scientists = 2.00; Technicians = 1.16) and detections per paid work-hour (Community scientists = 7.53; Technicians = 4.54) (Supporting Information Table S4). Community scientists produced more bird surveys (Community scientists = 2.19; Technicians = 1.46) and more bird detections (Community scientists = 10.82; Technicians = 7.14) per paid work-hour than technicians. This difference was even stronger for butterfly monitoring, as community scientists produced more surveys (Community scientists = 1.85; Technicians = 0.85) and reported more detections (Community scientists = 4.72; Technicians = 1.93) per paid work-hour.

| Public engagement
Out of a mean of 39 (±13) community scientists per year, we received responses from 32 respondents (84% ± 7%) to the preprogram survey and 18 (47% ± 18%) to the postprogram survey. The demographic characteristics of the pre- and post-program survey respondent groups did not differ (Supporting Information Table S6).

FIGURE 1 Estimates (±95% CI) of the probability that a reported detection is true for community scientists (p11) and paid technicians (r11), and that a reported detection is false for community scientists (p10), as estimated by false-positive occupancy models for eight bird species. Human-sensitive species are indicated with asterisks.
Respondents reported increased interest in spending time viewing birds and butterflies (χ²[2] = 49, p < .01), seeking additional information about birds and butterflies (χ²[2] = 43, p < .01), volunteering for another community science program (χ²[2] = 19, p < .01), volunteering for the Nature in the City Biodiversity Project in future years (χ²[2] = 37, p < .01), getting involved with the Nature in the City initiative (χ²[2] = 13, p = .01), sharing knowledge of birds or butterflies with friends or family members (χ²[2] = 29, p < .01), visiting open space areas in Fort Collins (χ²[2] = 17, p < .01), and protecting or restoring wildlife habitat throughout Fort Collins (χ²[2] = 8.52, p = .02) (Figure 5).

FIGURE 2 Estimates (±95% CI) of the probability that a reported detection is true for community scientists (p11) and paid technicians (r11), and that a reported detection is false for community scientists (p10), as estimated by false-positive occupancy models for three butterfly species. Human-sensitive species are indicated with asterisks.

FIGURE 3 Habitat use (Ψ) estimates (±95% CI) for four bird species and one butterfly species using a community science dataset in a single-season occupancy modeling framework, a technician dataset in a single-season occupancy modeling framework, and a combined dataset in a false-positive occupancy modeling framework. Human-sensitive species are indicated with asterisks.
However, we did not observe an effect of participation on interest in protecting or restoring wildlife habitat on the respondent's property (χ²[2] = 5.91, p = .05) or contributing to a wildlife conservation organization (χ²[2] = 2.32, p = .32) (Figure 5).

FIGURE 4 A comparison of respondents' ability to identify local birds (a) and butterflies (b), self-efficacy for environmental action, or their perceived ability to address environmental issues (c), and nature relatedness, or their relationship to the natural world (d), before and after participating in a community science biodiversity monitoring project. An asterisk (*) denotes a significant difference.

FIGURE 5 The mean (±SD) percentage of respondents each year (n = 4) who reported an increased interest in each activity after participating in the community science program. "Reduce my ecological footprint" was treated as a dummy question, and the light gray vertical line represents the "standard" interest increase to which all other proportions were compared. An asterisk (*) indicates that a statistically significant proportion of respondents reported an increased interest in this activity.

| DISCUSSION
Recruiting volunteers to participate in science has the potential to engage the public in conservation while also providing useful datasets for researchers. However, using a community science program instead of a traditional approach to data collection by paid technicians may involve tradeoffs among data quality, economic cost, and public engagement. We quantified those tradeoffs in the context of an urban biodiversity project designed to inform land conservation and development decisions. We found that community scientists had similar probabilities of detecting human-adapted species but may have overcounted individuals and were less likely to detect human-sensitive species. Additionally, we found that the community science model was more efficient in terms of economic cost, completing more surveys per paid work-hour. Finally, community scientists improved in their ability to identify local species and reported increased interest in other educational or volunteer programs, but were less likely to report an increased interest in taking conservation action after participating in monitoring.
Questions about the quality of data collected by community scientists are of growing concern and research interest (Lewandowski & Specht, 2015). Our community science dataset yielded bird and butterfly habitat use estimates that were comparable to those generated by the technician dataset. Yet, there were a number of notable differences between the community science and technician datasets that could have important implications for those planning to use these data to make conservation decisions. For example, habitat use estimates based on technician data were considerably less variable than those based on community science data for some species, suggesting that technician detection histories were more consistent and that habitat use estimates from their data are more certain (Miller et al., 2011). Additionally, while the probability of a true detection was similar between community scientists (p11) and technicians (r11) for human-adapted species, human-sensitive species were less likely to be detected correctly by community scientists. This could be the result of volunteers either failing to correctly identify human-sensitive bird species once detected or failing to detect these species altogether. Consistent with the latter explanation, Kelling et al. (2015) found that eBird participants with less expertise had lower species accumulation and identification rates for particularly cryptic or difficult-to-identify species. Our finding is not surprising, given that community scientists are less likely to have experience observing and identifying human-sensitive species. Further, community scientists tended to overreport the number of individuals of human-adapted species compared to technicians, which could affect estimates of population sizes, community evenness, and species dominance.
Still, despite these differences in detection rates and abundance estimates, community scientists produced datasets that yielded habitat use estimates comparable to, albeit with less certainty than, technician datasets for four of the five species analyzed. Further, as Kelling et al. (2015) point out, as community scientists gain more experience through our program, we would expect their ability to detect species to more closely mirror that of technicians. These results add to a growing body of literature supporting the assertion that community scientists can produce datasets similar to those produced by professional scientists (Lewandowski & Specht, 2015; Meentemeyer, Dorning, Vogler, Schmidt, & Garbelotto, 2015; Theobald et al., 2015; Kosmala, Wiggins, Swanson, & Simmons, 2016).
We found that the community science model was considerably more cost-efficient than hiring technicians for both birds and butterflies. For our program, the use of community scientists would reduce the economic cost of completing bird surveys by 33% and butterfly surveys by 54%. This is consistent with previous studies, which have shown that the community science model is more cost-effective in various contexts (Goldstein et al., 2014; van der Velde et al., 2017). Although false detections are likely inflating the community science detections per paid work-hour, community scientists still produced more surveys per paid work-hour than technicians. We note that economic cost efficiency is likely a function of community science group size and the time spent on training, and thus will vary among programs. However, for this particular program, even if we doubled training time in the field to focus on reducing false positives and improving data quality, the community science model would still produce 10% more bird and 70% more butterfly surveys than the technician model.
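These reductions follow directly from the surveys-per-paid-work-hour rates reported in the results, since cost per survey is proportional to the reciprocal of the rate; a minimal sketch:

```python
def cost_reduction(tech_rate, cs_rate):
    """Fractional reduction in cost per survey when community scientists
    produce cs_rate surveys per paid work-hour vs. tech_rate for technicians
    (cost per survey is proportional to 1 / rate)."""
    return 1 - tech_rate / cs_rate

# Surveys per paid work-hour as reported in the results
birds = cost_reduction(1.46, 2.19)        # bird surveys
butterflies = cost_reduction(0.85, 1.85)  # butterfly surveys
```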
We demonstrated that community science can be a highly effective tool for advancing scientific literacy and conservation education (Brossard et al., 2005;Crall et al., 2013). Our program was successful in increasing both the ability of community scientists to identify birds and butterflies, and the volunteers' intention to engage in some conservation-related behaviors, such as spending more time observing and learning more about wildlife. While these findings are encouraging, they are limited to behavioral intentions, which do not consistently predict lasting behavioral change (Webb & Sheeran, 2006). Moreover, the activities with the strongest increases in interest ("Spend time observing wildlife" and "Seek additional information about birds and butterflies") were more closely related to increasing individual scientific literacy, whereas interest in activities that directly relate to conservation action ("Contribute to a wildlife conservation organization" and "Protect or restore wildlife habitat on my own property") did not increase. We also did not observe changes in nature relatedness or self-efficacy for environmental action (Chase & Levine, 2018). We suspect, and the data support, that this is due to our volunteers starting the program with already high levels of nature relatedness and self-efficacy for environmental action.
Several dimensions of our study may limit our inference and could serve as important areas for future inquiry on this topic. Like many other studies assessing community science data quality in ecological monitoring, we were limited in our ability to compare community scientists' observations to a true value (Lewandowski & Specht, 2015). However, we demonstrate that it is possible to partially overcome this limitation by using a false-positive occupancy modeling framework to better understand how differences between community scientist and paid technician datasets may ultimately affect the information used by decision-makers. It is important to note that our approach to variable selection was not exhaustive, and this may have contributed to model uncertainty. However, we contend that the occupancy modeling framework served as a proof of concept rather than a definitive comparison between datasets collected by technicians and community scientists.
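To illustrate why accounting for false positives matters, the following minimal simulation (all parameter values are illustrative assumptions, not our fitted estimates) shows how naive occupancy, the fraction of sites with at least one reported detection, overestimates true occupancy when surveys at unoccupied sites can yield false-positive records.

```python
# Minimal simulation (not our fitted model) of detection with false
# positives. Parameter values are illustrative assumptions only.
import random

random.seed(42)

PSI = 0.4            # true occupancy probability
P11 = 0.6            # per-survey detection probability at occupied sites
P10 = 0.1            # per-survey false-positive probability at unoccupied sites
N_SITES, N_SURVEYS = 1000, 4

# Simulate true occupancy states, then repeated surveys at each site.
occupied = [random.random() < PSI for _ in range(N_SITES)]
detected = [
    any(random.random() < (P11 if z else P10) for _ in range(N_SURVEYS))
    for z in occupied
]

# Ignoring false positives, the naive estimate exceeds true occupancy,
# because some unoccupied sites accumulate spurious detections.
naive_occupancy = sum(detected) / N_SITES
print(f"True psi = {PSI}, naive estimate = {naive_occupancy:.2f}")
```

With these assumed rates, the expected naive estimate is roughly 0.60 against a true occupancy of 0.40, which is why a model that explicitly estimates the false-positive rate is needed to recover unbiased occupancy from observer data.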
Further, because we focused on only one program, our ability to assess economic cost as a function of program structure was limited. For example, it is important to note that community scientists were not surveying for all species, as technicians were, which likely affects this comparison; this is particularly true for butterfly monitoring, which was not time-bound. Further, models for many of our species did not converge, limiting our ability to make strong comparisons across different species. We suggest future research with larger sample sizes may produce higher rates of model convergence and help us understand whether our results hold true across full suites of species. Additionally, we anticipate that program economic cost tradeoffs may be a function of program size and the intensity of required volunteer training, and we suspect that long-term programs may receive a greater payoff on their initial investment in training community scientists. We suggest that future studies quantify these relationships to better understand the critical points (e.g., mean group size, hours of training) at which one approach becomes more efficient than another. Taken together, our findings do not suggest that community scientists are always a cheaper alternative to hiring paid technicians. Rather, we emphasize that, while community scientists completed more surveys per paid work-hour, there were notable tradeoffs with data quality. This point emphasizes the importance of weighing data quality, economic cost, and community engagement considerations together when making decisions regarding program structure and desired outcomes. In essence, we are not simply making the case that technicians can be replaced by community scientists. Lastly, it was beyond the scope of our study to evaluate how our program affected community scientists' long-term behaviors related to conservation.
Future studies should monitor the activities of community scientists beyond the time scale of the program to better understand whether participation increases proconservation behaviors and whether those behavioral changes persist over time (Toomey & Domroese, 2013).
We offer several recommendations for how to improve community science programs to achieve conservation goals. First, we suggest that trainings focus on anticipated or observed problems with data collection. For example, given that community scientists may have struggled to detect and correctly report human-sensitive species, we suggest that classroom and in-field trainings focus on detecting and identifying these species. Similarly, given the overreporting of human-adapted species abundance that we observed among community scientists, we recommend that training also focus on tracking multiple individuals of the same species during surveys to reduce errors associated with double counting. We additionally suggest that both of these challenges could be addressed by pairing new volunteers with experienced community scientists or by organizing regular wildlife-viewing trips in small groups to practice field methods. We do acknowledge that investing a substantial amount of time and resources to intensify training may offset cost efficiency. However, we contend that this initial investment is likely to pay off, particularly for large-scale, long-term programs that have high community scientist retention rates between seasons. Still, we recognize that the ability to identify human-sensitive species may be a challenge for new volunteers with limited experience. Hiring technicians may be particularly advantageous for short-term projects for which determining the distribution or abundance of human-sensitive species is a priority.
If a major goal of a given program is to affect public attitudes regarding conservation and environmental action, recruitment should focus on reaching members of the public who may not necessarily have preexisting positive attitudes toward conservation. Community science programs are limited in their ability to change public attitudes when they do not recruit a diverse volunteer group that is representative of the broader community (Lukyanenko, Parsons, & Wiersma, 2016). To this end, some have argued that community science recruitment must engage communities that are typically underrepresented in conservation research and decision-making (Chase & Levine, 2018). Accordingly, we suggest that active recruitment extend beyond word of mouth and contacting past volunteers to reach potential volunteers who are not yet part of these networks.
Community science is a promising tool for collecting large datasets to inform major environmental challenges, while also engaging the public in the scientific enterprise. However, recognizing and understanding the tradeoffs associated with community science is critical to evaluating whether this approach to data collection will meet program objectives. Here, we quantified these tradeoffs by comparing the costs and benefits of paid technicians relative to community scientists in collecting data to inform the conservation of urban open space. We found that these two approaches resulted in similar data quality, although community scientists may underdetect human-sensitive species and overreport human-adapted species. Despite this shortcoming, coordinating community scientists was more efficient in terms of economic cost than employing technicians, and participating in the program increased the scientific literacy of volunteers. We hope that our findings and this framework can be used to help other organizations make strategic decisions about when and how to integrate community science into their programs. Despite some tradeoffs in data quality, engaging the public in data collection has strong potential to broaden the constituency for nature and improve organizations' capacity to make evidence-based conservation decisions.

SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of this article.