Pushing boundaries: Advocacy evaluation and real‐time learning in an HIV prevention research advocacy coalition in sub‐Saharan Africa

Evaluation processes that facilitate learning among advocates must be nimble, creative, and meaningful while transcending putative performance and accountability management. This article describes the experience, lessons, and trajectory of one such approach, Simple, Participatory Assessment of Real Change (SPARC), that a transnational HIV prevention research advocacy coalition pilot‐tested in sub‐Saharan Africa. Inspired by the pioneering work of the outcome harvesting (OH) and participatory evaluation community, we recuperate advocates' centrality as storytellers, sense‐makers, and strategists in advocacy evaluation and describe how we recalibrated SPARC to meet their evaluation and learning needs. This article highlights the normative value of deliberative discourse in evaluation as it contributes to the interpretation of OH and the enrichment of the theory and practice of advocacy evaluation.

FIGURE 1: Traditional outcome harvesting process

learning about their work (Chouinard, 2013); self-evaluation that prompts participants to explore, debate, and reach consensus to foster organizational learning (Taut, 2007; Uphoff, 1991); collaborative evaluation that enhances understanding and evaluation use (O'Sullivan, 2004; Patton, 2010); and empowerment evaluation, which uses "evaluation concepts and techniques to foster self-determination, focusing on helping people help themselves" (Fetterman, 1994).
Some evaluators take this rationale for participation further, arguing for a radical shift in the conceptualization of Monitoring, Evaluation, and Learning (MEL) for advocacy (Coe & Schlangen, 2019). They posit that for evaluative processes to be truly useful and to promote dynamic learning, advocates need to be more centrally involved in the evaluation. Evaluators should open up space for advocates to process and question their observations about progress toward the changes they seek and to define the implications for their work.
Further, using accessible evaluation approaches and lowering barriers between evaluators and the evaluand can promote participation. Participatory and collaborative evaluation that centers learning can shift advocates' role from object to subject. Yet translating methods so that they are engaging and accessible to advocates may still rely on training advocates in technical evaluation and formal processes. For instance, Hilgendorf, Moore, Wells, and Stanley (2020) suggested a need for easy-to-use, systems-oriented evaluation tools and processes that respond to the complexity of coalition evaluation planning and implementation, including more focused inquiry on evaluator-practitioner collaboration in developing and applying systems- and equity-focused evaluation.
Informed by these perspectives, this article recounts an international nonprofit's evaluation and learning experience in its HIV prevention research advocacy in sub-Saharan Africa. It describes the development and testing of SPARC, an approach inspired by the concepts of outcome harvesting (OH) (see Figure 1), and how it was used to understand emerging results in the HIV prevention space while demystifying evaluation for non-evaluators and seeding more creativity for expanding the discourse and practice of advocacy evaluation. Drawing from internal reports, surveys, and direct and participant observation, it describes the strides made to make SPARC relevant to the learning needs of advocates.

DESIGNING SPARC
AVAC is an international nongovernmental organization in New York City that works to accelerate the ethical development and global delivery of HIV prevention tools as part of a comprehensive and integrated response to the epidemic. Collaborative advocacy is a central organizational strategy. The organization specializes in projects and programs that facilitate engagement across technical sectors and countries to advance HIV/AIDS-related policy and research. Its animating theory of change relies on the generative effects of these collaborative relationships to foster reciprocal learning, support, and continuous feedback loops that, in turn, lead to more effective advocacy.
In 2016, AVAC received funding from the US Agency for International Development (USAID) to develop and implement the Coalition to Support Prevention Research (CASPR) program. This multiyear commitment sought to develop and sustain a vibrant network of HIV prevention advocates in sub-Saharan Africa that would create an enabling environment for biomedical HIV prevention research and development. It was organized around four interdependent objective areas: advocacy network, research translation, research preparedness, and policy engagement. The CASPR network involves nine partner organizations with longstanding engagement and global leadership in the region, highly experienced in HIV prevention research, policy, advocacy, communications, and community engagement. These organizations had not formally collaborated before CASPR.
Based on the scan, we softened and simplified the technical aspects of OH. We challenged our ideas about what was essential from an evaluative and participatory perspective, prioritizing engagement and participation over intensively structured steps that required training and management. We used the following considerations, which contain certain departures from traditional OH, to design the process.

1. Prioritize accessibility and participation by advocates, researchers, and communicators. Eschewing the idea that evaluation should support the development of advocates' technical evaluation capacity, SPARC focused on critical reflection and learning as a priority skill set and practice. SPARC avoided technical evaluation terms and minimized the need for detailed orientation. It brought the evaluators to the advocacy rather than the advocates to the evaluation.
2. Center and value the experience and insight of advocates and what was considered "valid data." We recognized that advocacy evidence of intermediate progress is seldom visible or tangible. We dispensed with the review of information the advocates had generated themselves (it was tautological) because the most current developments potentially had not even been documented yet.
3. Focus on the significance of the outcomes. We focused on selective harvesting of the outcomes advocates considered most significant, rather than capturing a volume or number of possible outcomes. In classic OH, processes count the volume or number of outcomes harvested (Scheers & Wilson-Grau, 2018, pp. 3-5; Smith, Aziz, & Sutcliffe, 2018). SPARC intentionally did not include counts of outcomes because we were trying to amplify attention to critical reflection and analysis of significance and quality.
4. Stimulate network narratives and learning. SPARC had to create a space where advocates could openly and actively engage with each other within their organizations and across the coalition. The real learning opportunity in partner conversations lies in exploring differences in perspectives rather than alignment (Coe & Schlangen, 2019). Further, it intentionally focused on the development of the network itself as a priority CASPR outcome.
5. Facilitate action, not just results. Advocates are by nature action-oriented. Participating in a process that simply produced findings of outcomes would not be satisfying or a valid use of time and resources. As such, SPARC integrated steps to apply learning to planning.

FIGURE 2: SPARC pilot approach
SPARC was consequently organized in iterative and tiered phases: country teams go through their respective outcome storytelling first and then share their harvested stories with the entire coalition to curate network-level outcomes across its four objective areas (see Figure 2).

ENACTING SPARC
Key AVAC staff provided feedback about the concept and process. Once they were comfortable that the process promised to be useful with minimum burden to partners, they gave the green light to approach partners. We focused on orienting partners through a short introductory text and a set of Frequently Asked Questions detailing the SPARC project's concept, and then hosted briefing calls to respond to questions. These conversations with the coalition were crucial in flagging aspects that we needed to clarify and in promoting buy-in. We repeatedly emphasized SPARC's focus on learning about outcomes beyond accountability. The initial SPARC design involved each partner generating outcome stories and bringing these stories together for a cross-network review, the SPARCfest. We opted to pilot-test the partner-level SPARC process with just two CASPR partners, the New HIV Vaccine and Microbicide Advocacy Society (NHVMAS) and AVAC, 9 months after CASPR's launch. Each organization planned its partner-level SPARC workshop. In preparation, we asked them to consider the following question about their experience with CASPR to that point: Setting aside whether CASPR may have had anything to do with these changes, what do you think have been the most significant coalition results, positive or negative, related to our CASPR goals and objectives?
The partner workshop began with a warm-up exercise called Illustrated Outcomes, which engaged, enlivened, and focused participants. Participants produced an assortment of drawings and pictures that depicted potential positive and negative changes, allowing them to freely brainstorm rather than closely scrutinize outcomes. The visualization of outcomes through colored pens and crayons helped participants provide concrete details and symbolic meanings to their stories.
Next, participants collectively decided on which outcomes were most significant to their CASPR objectives and related to the development of the CASPR network. We asked participants to consider the following question: Of the outcomes listed, which show the most relevance to our CASPR objectives/goal or represent a key milestone in our change process? The selection of the top outcomes used a group consensus exercise.
We knew that developing outcome statements could be overwhelming and time-consuming, so we used a simple three-part worksheet to guide partners. When all the outcomes were roughly drafted, we then posed the question: Recalling across your project plans, were there key areas where you would expect to see progress but did not? This was an intentional prompt to allow the group to see surprises and identify the spheres of the program that may not have been as fruitful.

The next step was to bring the seeds of these organization-level stories to a coalition-wide SPARCfest. It was held during the first day of the 3-day CASPR annual meeting in Cape Town, South Africa. This was when partners were to contribute the outcomes they observed in their work areas and engage in the sifting and analysis of outcomes across the network. We shared orientation materials in advance and asked all partners to consider signs of progress and outcomes related to the particular objectives and goals they were trying to advance through CASPR.
Once gathered, NHVMAS and AVAC presented their organization-level SPARC process and outcomes, which served as an anchor for the conversations that ensued. Participants were then led from the main room to the veranda, which was decked with festive buntings and flipcharts, where they would write their observed outcomes in their respective objective areas. They wrote short result statements, intermingled with others to help frame a statement, and fact-checked each other about their work. It was data collection and data validation happening simultaneously.
The filled-out flipcharts were then brought back to the meeting room, where partners dispersed into small groups and discussed outcomes drawn from the earlier exercise. These outcome deep dives were where participants spoke about the draft results, filled each other in on context and details, and posed clarifying questions to better state an outcome. These outcome statements were brought back to the main floor, which became the agora of CASPR outcomes to be further interrogated and refined.
Rating the most compelling outcomes followed. Participants voted on the top outcomes for the year. With five smiley stickers each, they roamed the room, closely examining the posted outcomes, and attached their stickers next to the strongest statements. At the tea break, we identified the winning outcomes and presented them back to the group.

FIGURE 3: Participants write brief outcome statements and confer with each other during the SPARC launch in Cape Town, South Africa

The discussion that ensued, "State of the Outcomes Report Out," let everyone see the ranking of outcomes and confirm or validate them. We posed the following questions: Do we agree with this prioritization? Have significant outcomes been overlooked? What makes the outcomes particularly strong? What could make them more so (e.g., stronger evidence, clearer explanation of significance)? And what, overall, can we say about the "state" of this outcome? Groups collated their feedback and reactions to the prioritized outcomes and gave a rapid-fire "state-of-the-outcome report."

During this time, we found that participants sometimes transitioned from making observations to planning their work, skipping over the critical reflection and exploration of available evidence that we had planned. They also tended to focus on outputs, such as trainings conducted. This may relate to the early stages of a start-up program but could just as likely indicate a gap in monitoring or an ingrained acculturation to reporting against activities. We thought that asking critical questions about audiences' reactions could help develop more robust outcomes. A participant observed how SPARC was received during the early part of CASPR: "The coalition struggled to identify outcomes from its work (initially). It took time, and more intensive support, to get the group to think about Monitoring & Evaluation as something other than indicators measuring number of meetings or participants."
SPARC story development was also uneven across partners. While some stories were strong and demonstrated outcomes, others offered only reports of activities without evidence of the work's impact.
Following the first SPARC iteration, we developed a plan to ensure that the coalition continuously captured outcome stories between annual SPARCfests by introducing outcome story sharing in the partners' quarterly reports to AVAC. This step replaced organizational-level workshops, which were difficult to sustain given competing priorities.
In early 2019, CASPR conducted its second SPARCfest in Nairobi, Kenya, again during the partners' annual meeting. During the lead-up, we invited a South African CASPR partner to collaborate on developing and co-facilitating the SPARCfest to hasten advocates' ownership of the process. She was not an evaluation expert, but she was a communicator, deft at conveying CASPR stories and creating engaging processes.

FIGURE 4: A participant interprets the SPARC experience through a group visual
In preparation, we cataloged and analyzed the 38 stories partners had shared throughout the previous year. We followed up with some partners to ask clarifying questions that we hoped would reinforce good outcome reporting practices. We then categorized the stories by result level: quite a few stories still focused on activities or outputs (n = 20), and some had elements of intermediate (n = 16) and ultimate outcomes (n = 2).
Given the variety of results gleaned, we thought of adding a creative element to the upcoming SPARCfest that would show the network where it was in its strategic journey based on its story submissions. We used the imaginative concept of an "outcome tree" as a learning device: the roots and trunk were the program inputs; the branches were the varying levels of program results (read: activities/outputs; intermediate outcomes, or signs of positive or negative response to SPARC's efforts; and ultimate outcomes, or changes in policies or practices related to CASPR goals); and the fruits were the effects.
On the day of the SPARCfest, the facilitators plastered tree posters around the hall, each representing a thematic area of CASPR, with story printouts pinned to the branches, creating a gestalt of an outcome orchard. Partners responsible for carrying out a specific thematic track were grouped next to their outcome tree. We posed questions to them that were both general and customized according to their stories: What can you say about the outcome tree? Is it fruitful or not? How so? Are there surprises in the results? Do you agree with their position on the tree? How can the outcome statement be improved? Any new story developments, missing detail, or late-breaking stories? They unpinned the story printouts from the tree, restated them, and re-sorted their revisions back onto the tree.
With the help of the facilitators, participants pruned and polished their outcomes. For activity and output stories that had not borne outcomes quite yet, the facilitators asked the participants to write down things they might want to do differently about the said activity to yield substantive results. For stories with emerging early outcomes, participants were also asked to identify future activities that would help enrich those results. This was the part where a planning mindset was integrated more formally, informed largely by the earlier iteration lessons. Unlike the SPARCfest in Cape Town, South Africa, the workshop in Nairobi, Kenya, sought to document more clearly what those next steps might be that could then be incorporated into plans.
Through this exchange, participants not familiar with other partners' stories were given the opportunity to interrogate the context and significance of the results, which then helped the storyteller critically rethink the narrative. The surfaced stories were scrutinized: when puzzled together to understand the program's overall effects on the HIV prevention research space, what grand narratives or insights emerge? This mutual sense-making enriched the story and allowed network members to learn more deeply from the work of their colleagues (Figure 5).

FIGURE 5: Participants during the second iteration of SPARC in Nairobi, Kenya, use the outcome tree poster to dissect their program results
The final part of SPARCfest was devoted to an orchard walk to view the results, where a representative stood by each outcome tree to answer questions from the touring group. Themes that eventually surfaced became the headlines for the program thus far.

Patton (1996) argues that learning resulting from the evaluation process requires that program members have a common understanding of the program's underlying theory of change. This was made clear from the start of SPARC as individual outcome stories were thematized and then elevated to an overall network narrative linked back to CASPR's goals and objectives: Did the stories cohere with the program aspirations? What meta-story is emerging as a result? These were recurring questions posed to the group to ensure a proper (re)contextualization of outcomes vis-à-vis the program's theory of change. Through SPARC, participants could refine and home in on outcomes relevant to their CASPR remit and the program's overall goals and objectives.

REAL-TIME LEARNING
Real-time learning about program results and how advocates defined them was nurtured when advocates openly conferred with each other about the changes they had witnessed. As a program manager reflected, "[I] better understood how our partners viewed their work. In situations where I saw the impact, I noticed that only activities were being reported. It highlighted that the coalition was under-selling its valuable work by not connecting to real change." The cross-coalition forum allowed advocates to convene uncloistered from their program workstreams and geographic confines to meet as equals. Resonant of Habermas's (1991) theory of the public sphere, evaluation was democratized to produce reasoned opinions, validated facts, unified narratives, and a call to cooperative action. This social construction of the program's reality was largely achieved via iterative oral and written drafting of change stories and careful negotiation of their meanings.

FIGURE 6: With the outcome statements reattached to the outcome tree, participants develop themes or story headlines that summarize their insights and findings

The coalition's partnership chemistry, cultivated from years of working together in the HIV prevention advocacy space, is an attribute that has boded well for SPARC. Tapping into this social capital made it easier to make a case for coalition learning through SPARC.
SPARC also recuperated the role of advocates as storytellers, sense-makers, and strategists (see Figure 6). The advocates developed outcome narratives not to perfect the art of story writing but to winnow outcomes from their observations and experiences. Critical reflection and analysis of results that required attention to changes beyond activities and outputs enabled advocates to identify changes in situ, in combination with other influences, rather than in the isolated context of the program's logic model. As advocacy engineers, they used the results to identify courses of action to promote greater, if not more meaningful, change.
SPARC retreated from the notion of the evaluator as a dispassionate methodologist and armchair analyst. The techniques we deployed (an outcome sketch and collage instead of a survey; an outcome tree and orchard walk instead of a focus group) broke predictability. It democratized evaluation by ensuring that the advocates did the evaluation themselves, with us acting as "evaluator, facilitator, critical friend, organization developer, mediator, and the like" (Cousins, 2003, p. 261, as cited in Fleischer & Christie, 2009). Toggling between empathy and objective detachment, similar to the dialectics of interpretive participatory research (van der Riet, 2008), we embraced a role set that included the co-creation of knowledge about program outcomes as prospects (Rossman & Rallis, 2000, p. 67, as cited in Fleischer & Christie, 2009) to promote the advocates' learning process. In reframing our positionality in the evaluation conversation, we were able to build mutual trust among participants and in the process itself. Our investment in engaging and orienting participants early and repeatedly about the approach reinforced the value evaluation plays within the program, not as an adjunct task but as a critical part of programming and, indeed, learning.

There was also network learning about the evaluation practice itself. SPARC served as a forum that debunked the myth of the evaluation craft as a technocratic exercise solely serving the reporting needs of the funder. Bracketing the funder out of the evaluation equation momentarily, advocates made SPARC about their learning first and foremost. Shifting the evaluation "power" to within the network, rather than exercising it on behalf of a funder, helped augment engagement and ownership. Because SPARC was not funder-mandated and represented a voluntary and innovative evaluation effort, USAID had earnest respect for and appreciation of the advocates' ownership of it.
SPARC challenged participants' early assumptions about evaluation (i.e., that it was something done to them, not with or by them). By removing the artificial divide between evaluator and advocate and between evaluation and advocacy, it shifted advocates' view of the craft of evaluation. As participants were overheard saying, SPARC "was understood by us ordinary people." It reoriented outcome evaluation beyond targets and accountability reporting. Transcending commonly adopted prediction-based logical frameworks, SPARC was attentive to complexity and vigilant about unintended consequences and the program's plausible contribution to the changes.
Most importantly, SPARC helped elevate advocacy evaluation into a discursive experience that redefined the boundaries of evaluation and advocacy and put the framing and use of outcomes in the hands of advocates. In highlighting the normative value of deliberative discourse in evaluation, it gave advocates the language to rationally examine their results, their actions, and what those meant for their advocacies. In these moments of messy yet rational discourse-making, individual and collective learning of and agency for program effects emerged.
Under which conditions does SPARC work best? Our pilot experience suggests the following:
1. Participants are connected through shared goals and are incentivized to invest in developing relationships and trust. This enables SPARC participants to build off a common foundation and use the process to develop a more nuanced and useful analysis of their collective progress.
2. SPARC is integrated into forums that network members use to collaborate and plan their work. This enables SPARC to feed seamlessly into planning, which is actionable learning. By demonstrating immediate application and benefits, SPARC is less likely to be stigmatized as an evaluation process.
3. Program managers intentionally calibrate SPARC to integrate and balance with other demands on network participants' time through exploratory conversations with evaluators.
4. Evaluators take a "work ourselves out of a job" approach to ensure the focus is on support and facilitation of substantive participation. Calibrating the evaluators' role with an eye toward expanding network members' ownership enables evaluators to identify opportunities to support SPARC processes, such as categorizing outcomes, that partners may not have time for or feel well equipped to conduct.

WHERE TO, SPARC?
AVAC's experience showed that SPARC could work if primed according to its networked advocates' evaluation and learning needs. A Zimbabwean partner who had witnessed SPARC in another program astutely observed: "The participatory approach enabled diverse groups to find common ground for reflection on the results of their advocacy." By design, SPARC is not a static approach and could potentially be valuable in disrupting dominant, linear narratives of change. For example, social movements for racial justice, climate action, and human rights provide an opening for SPARC to ensure that subaltern vocalities in programs addressing these issues are heard not as tokens, like poster-child stories, but as subjects imbued with agency. Alternatively, learning organizations could use it to reflect on their strategic plan outcomes in addition to ubiquitous key performance indicators. Depending on culture and context, SPARC can be adjusted to ensure that it remains true to its participatory ideals. While the SPARC chronicle remains unfinished, advocacy groups and evaluators could engage with it to experience its potentials and limits. Its promise, after all, lies not in a formulaic method for conducting advocacy evaluation but in pushing the imagined boundaries among programming, evaluation, and advocates. With unfettered creativity, advocate engagement, and evaluator tenacity to make it work, the SPARC project has just begun.

ACKNOWLEDGMENTS
The SPARC pilot project was made possible through the support of USAID and AVAC, and the engagement of HIV prevention advocates in sub-Saharan Africa, India, and the United States. Special thanks go to Kevin Fisher and Roberta Sutton of AVAC for their help in assembling this article.

AUTHOR BIOGRAPHIES
Jules Dasmariñas is director of Measurement & Evaluation, Global Community Impact at Johnson & Johnson, and previously served as Monitoring, Evaluation, and Learning lead at AVAC and other mission-driven organizations focused on HIV prevention, climate action, human rights/transitional justice, and maternal and child health.
Rhonda Schlangen is an evaluation consultant with over 20 years of global experience in public policy, advocacy, and evaluation, working with philanthropic and nongovernmental organizations that use advocacy as a strategy for social change and learning.