Using a structured decision analysis to evaluate bald eagle vital signs monitoring in Southwest Alaska National Parks

Abstract Monitoring programs can benefit from an adaptive monitoring approach, where key decisions about why, where, what, and how to monitor are revisited periodically to ensure programmatic relevancy. The National Park Service (NPS) monitors status and trends of vital signs to evaluate compliance with the NPS mission. Although abundant, bald eagles are monitored by the Southwest Alaska Network (SWAN) because of their inherent importance to park visitors and their role as an important ecological indicator. Our goal is to identify an optimal monitoring program that may be standardized among participating parks. We gathered an expert panel of scientists and managers and implemented a Delphi Process to gather information about the bald eagle monitoring program. Panelists generated a list of means objectives for the monitoring program: minimizing cost, minimizing effort, maximizing the ability to detect change in bald eagle populations, and maximizing the amount of accurate information collected about bald eagles. We used a swing-weighting technique to assign importance to each objective. Collecting accurate information about bald eagles was considered the most important means objective. Combining panelist-generated information with objective importance, we analyzed the scenarios and identified the optimal decision using linear value modeling. Through our analysis, we found that a "Comprehensive" monitoring scenario, comprised of all feasible monitoring metrics, is the optimal monitoring scenario. Even with greatly increased cost, the Comprehensive monitoring scenario remains the best solution. We suggest further exploration of the cost and effort required for the Comprehensive scenario, to determine whether it is in the parks' best interest to begin monitoring additional metrics.


| INTRODUCTION
The collection of long-term datasets, termed monitoring, is an important part of ecosystem science, management, and conservation worldwide (Janetos & Kenney, 2015). Following the "roadmap" by Reynolds, Knutson, Newman, Silverman, and Thompson (2016) for designing and implementing a monitoring program, an adequate program includes steps that encompass the general phases of framing the problem, designing the monitoring program, implementing and learning, and learning and revising. This type of monitoring fits into the scope of "adaptive monitoring," which is motivated by specifying objectives and answering clear questions through long-term monitoring. In this adaptive monitoring framework, all decisions about monitoring should be iterative (Lindenmayer & Likens, 2009), as values and attitudes may change over the course of an extended period of time (Williams, 2011). Repeatedly revisiting decisions related to monitoring data collection allows a monitoring program to remain relevant with changing agency priorities (Oakley, Thomas, & Fancy, 2003). In practice, however, many programs begin collecting data before laying this groundwork, and the value of the monitoring effort may be diminished (Reynolds et al., 2016). A structured approach to decisions about a monitoring protocol ultimately leads to a more efficient program by identifying the optimal survey design for monitoring (Reynolds, Thompson, & Russell, 2011).
Structured decision-making is defined by Gregory et al. (2012) as "the collaborative and facilitated application of multiple objective decision-making and group deliberation methods to environmental management and public policy problems." It can be compared to and fit into an adaptive framework, as both exhibit the similarities of defining explicit objectives and alternatives. Structured decision-making approaches can serve as decision aids to facilitate monitoring programs that explicitly address the decisions about protocols or implementation, and can help to conserve limited resources by reducing the waste of time and effort (Gregory et al., 2012; Lyons, Runge, Laskowski, & Kendall, 2008; Neckles, Lyons, Guntenspergen, Shriver, & Adamowics, 2015). Ultimately, monitoring programs that spend an adequate amount of time defining objectives and optimizing the program based on factors that are important to the decision-makers are more successful, as their monitoring is focused on important data needs for conservation and wildlife issues (Nichols & Williams, 2006; Oakley et al., 2003). Ideally, structured decision-making is best enacted at the conception of a monitoring program, but can be used to review or revise a monitoring program as needed.
Long-term monitoring programs are collaborative in nature, involving multiple agencies and decision-makers. Although it may be easier to shy away from decisions involving multiple decision-makers, acknowledging the opinions of multiple experts can encourage deeper thinking from individuals (Runge, Converse, & Lyons, 2011). Additionally, a structured process may allow multiple decision-makers to better understand the specifics and reasoning behind alternatives and may foster consensus among a decision team (Mattson et al., 2019; Thorne et al., 2015). Unfortunately, collaborative decisions about monitoring objectives tend to be hindered by logistical constraints (i.e., cost) and a desire to maintain existing survey methods, which can prevent improvements in monitoring (Reynolds et al., 2016). Furthermore, there are often multiple objectives, such as social ideals and the value of collecting scientific information (Grimble & Wellard, 1997), that may be important to consider when evaluating a monitoring protocol. A monitoring decision that makes explicit trade-offs to meet all objectives collectively will enable the data to be put to its best use (Lyons et al., 2008; Nichols & Williams, 2006). It is recommended that an open discourse be created and upheld between field scientists, managers, those analyzing the data, and other stakeholders throughout the decision-making process to maintain support for decisions regarding the monitoring protocol (Reynolds et al., 2011). By highlighting trade-offs, the cost (not just monetarily) of choosing one alternative over another can be examined (Grimble & Wellard, 1997).
For the National Park Service (NPS), vital signs monitoring enacted by the inventory and monitoring division (IMD) is intended to evaluate the health of ecosystems in order to measure the ability of NPS to uphold its mission "…To conserve the scenery and the natural and historic objects and the wild life therein and to provide for the enjoyment of the same in such manner and by such means as will leave them unimpaired for the enjoyment of future generations" (Fancy, Gross, & Carter, 2009). Each network was set up in an adaptive monitoring framework based on conceptual models of ecosystem function relevant to each of the 32 monitoring networks.
Individual vital signs were selected by each network so that they provided information necessary to learn about system dynamics depicted by the conceptual models.
In the process of creating a bald eagle monitoring program for the Southwest Alaska Network (SWAN), decision-makers did not fully explore key portions of framing the problem and designing objectives (Reynolds et al., 2016). As a result, the parks currently collect data on bald eagles slightly differently from one another and are not able to use their data as effectively as possible. In this paper, we present a case study that uses structured decision-making to inform a decision about the future of the long-term bald eagle monitoring program in Southwest Alaska National Parks. By using structured decision-making tools to identify monitoring metrics used for the long-term bald eagle monitoring program in the Southwest Alaska Network of National Parks, we review programmatic goals and examine the trade-offs of monitoring scenarios made up of different monitoring metrics of interest for managers. It should be noted that while parks in the Southwest Alaska Inventory and Monitoring Network monitor bald eagles as part of the Vital Signs Monitoring Plan, bald eagles are not actively managed in the parks, making this a case study of using structured decision-making techniques in an adaptive monitoring framework to evaluate a long-term status and trends monitoring program.

KEYWORDS: bald eagle, long-term monitoring, Southwest Alaska, structured decision, vital signs
Means objectives focus on the manner in which a more basic goal, or fundamental objective, can be achieved (Gregory et al., 2012).
In this decision context, all defined objectives are means objectives to the fundamental objective of optimizing the long-term bald eagle monitoring program for Southwest Alaska National Parks. A multi-agency panel of scientists and managers has already defined means objectives and a suite of potential monitoring metrics to use when evaluating the monitoring decision through a Delphi Process (Kolstrom, Wilson, & Gigliotti, 2020; Linstone & Turoff, 2002). These means objectives were quantified using responses from the Delphi questionnaires (Kolstrom et al., 2020). Now, by considering the means objectives, we identify the optimal decision about monitoring metrics that can be used in the long-term monitoring program by using a linear value modeling approach.
Our main objective is to identify a set of monitoring metrics that is expected to maximize the efficiency of monitoring, while balancing the means objectives of minimizing cost, minimizing effort, maximizing accurate information collected, and maximizing the ability to detect change. Experts chose to base the decision on these four factors because these adequately represent the benefits of and limitations to the long-term bald eagle monitoring program for this particular National Park network.
We developed a decision model, which we used to evaluate the sensitivity of our decision to changes in objective weights. We also explored sensitivity of the optimal decision to experimental increases in cost. We used our model to make suggestions to the Southwest Alaska Inventory and Monitoring Network about how to standardize the long-term bald eagle monitoring program across the five participating parks. The methods we have chosen to select an optimal bald eagle monitoring program provide an example case study that uses structured decision-making techniques to formally and transparently analyze complex problems and make a decision that combines the opinions of many experts. Bald eagles in the parks are monitored annually by SWAN and the Central Alaska Network (CAKN) as part of their Vital Signs Monitoring Plan (Bennett et al., 2006; Wilson et al., 2017).

| PrOACT: Forming the decision context and analyzing the decision problem
Methods for this process were based around the PrOACT concept: Problems, Objectives, Alternatives, Consequences, Trade-offs (Hammond, 2015). This study was conceived to address the problem of how to best standardize long-term bald eagle monitoring in Southwest Alaska National Parks. Objectives were defined by a panel of experts, which included decision-makers. Alternatives consist of realistic monitoring scenarios for this study system.
Consequences were first examined among monitoring metrics to narrow down an extensive list of metrics to a more manageable list of feasible metrics. Consequences of competing objectives were then examined through a swing-weighting process of the selected objectives. Finally, trade-offs were examined through a linear value model that calculates a utility value for each monitoring scenario.
Methods are described in more detail, below.
We convened an expert panel of 17 scientists, managers, and personnel from the National Park Service, US Fish and Wildlife Service, and South Dakota Game, Fish, & Parks to participate in a Delphi Process, where we identified important stressors for bald eagles in Alaska and linked stressors to monitoring metrics (Kolstrom et al., 2020;Linstone & Turoff, 2002). We compiled this expert panel using a snowball process. We selected scientists from all participating parks and other experts who expressed interest in participating in the process. We asked these panel members to suggest other members to be included in the expert panel until we received no more suggestions.
We queried the panel about long-term bald eagle monitoring in Southwest Alaska National Parks and gathered information about the cost, effort, reliability, and sensitivity of monitoring metrics commonly used to monitor bald eagle populations (Kolstrom et al., 2020).
Through an in-person panel meeting, we formed means objectives for bald eagle monitoring program decisions in Southwest Alaska Network (SWAN) parks: minimize cost, minimize effort, maximize ability to detect changes in bald eagle populations, and maximize accurate information about bald eagles. The expert panel chose to separate the objectives regarding cost and effort to ensure that staff time was being considered appropriately. Separating these two objectives allowed staff time to be considered as a necessary resource, beyond the cost of paying for the fieldwork (e.g., aircraft contracts).
This was meant to ensure that the time of salaried employees (whose salaries will not change, regardless of the effort required of a monitoring program) will be considered in the decision as a resource being used. The objective to maximize ability to detect changes in bald eagle populations emphasized the panelists' desire to measure metrics that will indicate changes in bald eagle populations in the parks quickly enough to respond with management action. By assigning an objective of maximizing accurate information, panelists hoped to increase knowledge about bald eagles and bald eagle populations in the parks.
We then evaluated the consequences of individual monitoring metrics based on the four means objectives. A comprehensive list of monitoring metrics was formed through structured expert elicitation, the Delphi Process, that uses surveys to combine expert opinion derived from a panel of Federal managers and eagle experts (Kolstrom et al., 2020). Using information collected through the Delphi Process and a consequence table, the comprehensive list of metrics was narrowed to the six best-performing metrics based on cost, effort, reliability, and sensitivity. The monitoring metrics that remained in consideration after this process are as follows: total number of bald eagle nests, changes in distribution, productivity, proportion of nests used by bald eagles for reproduction, total number of nesting pairs, and adult survival. Methods used to obtain this list of six metrics are described in more detail in Kolstrom et al. (2020).
To form alternative monitoring scenarios, we used combinations of the six best-performing monitoring metrics. Although there are many alternative scenarios that can be formed using subsets of the six selected metrics, we chose six scenarios to represent monitoring options that were considered feasible by a park scientist (Table 1).
The scenario "Status Quo" included the feasible metrics that are currently monitored by the parks during three flight surveys. Two of these surveys investigate nest initiation and the third investigates nest productivity. The "Comprehensive" scenario consisted of all six metrics determined to be feasible by the expert panel. A "No Monitoring" scenario considered the option to discontinue monitoring bald eagles. "New Metrics" considered metrics that are feasible but not currently monitored by the parks (adult survival and changes in distribution). Finally, two scenarios, "Reduced Status Quo 1" and "Reduced Status Quo 2," considered some of the currently monitored metrics with a reduced monitoring effort that results in lower estimator precision and could introduce bias.
The Reduced Status Quo 1 scenario reduced sampling during the second nest initiation survey. Rather than revisiting all previously surveyed nests, a 50% random sample of nests would be revisited.
The Reduced Status Quo 2 scenario would completely remove the second nest initiation survey, but would increase effort of the productivity survey to include all nests found in the first survey. We designed these scenarios to cover a range of reasonable options that are comprised of the feasible metrics identified by the experts.
The methods we used to evaluate and rank alternatives are based on the Simple Multi-Attribute Rating Technique (Edwards, 1971, 1977). We scored scenarios based on the means objectives for the bald eagle monitoring program (minimize cost, minimize effort, maximize accurate information about bald eagles, and maximize ability to detect changes in bald eagle populations). For each objective, we used expert panelist responses from the Delphi Process to quantify scores for cost, effort, reliability, and sensitivity.
We asked panelists to assign a cost value to each metric, for each year of surveying in one park. We gave multiple-choice options for each metric: $0-5,000; $5,000-10,000; $10,000-15,000; $15,000-20,000; $20,000-25,000; and $25,000+. Panelist responses to the multiple-choice question were combined into a weighted average value.
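The weighted average described above can be computed by mapping each multiple-choice bin to a representative value and weighting by the number of panelists who chose it. The sketch below is illustrative only: the bin midpoints, the placeholder value for the open-ended top bin, and the example response counts are our assumptions, not values from the study.

```python
# Representative (midpoint) values for each cost bin; the open-ended
# top bin is assigned an assumed lower-bound value.
BIN_VALUES = {
    "$0-5,000": 2_500,
    "$5,000-10,000": 7_500,
    "$10,000-15,000": 12_500,
    "$15,000-20,000": 17_500,
    "$20,000-25,000": 22_500,
    "$25,000+": 25_000,  # lower bound; the true cost may be higher
}

def weighted_average_cost(responses: dict[str, int]) -> float:
    """Average bin value weighted by the number of panelists per bin."""
    total = sum(responses.values())
    return sum(BIN_VALUES[b] * n for b, n in responses.items()) / total

# Hypothetical responses: 4 panelists chose $0-5,000, 2 chose $5,000-10,000.
estimate = weighted_average_cost({"$0-5,000": 4, "$5,000-10,000": 2})
# estimate ≈ $4,166.67 per park per year
```

Note that capping the top bin at a single representative value understates any metric whose true cost greatly exceeds $25,000, a limitation discussed later for the adult survival metric.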
To calculate an effort score for each scenario, we asked experts to estimate annual person days required for each individual metric and calculated the mean across panelists for each metric. For each scenario, we summed the mean annual effort values for individual metrics that comprise the scenario.
As a measure of the amount of accurate information collected about bald eagles, we asked panelists to assign a reliability score to each monitoring metric. This was based on the premise that more reliable metrics will increase the amount of accurate information collected. A reliability score for each metric was generated from panelist responses to the Delphi questionnaires.

TABLE 1: Metrics included in each monitoring scenario considered in the decision about the long-term bald eagle monitoring program for SWAN parks

The ability to detect change was measured using a sensitivity score. To create this sensitivity score, experts were asked to select metrics that are responsive to important stressors to bald eagles.
The sensitivity score for each metric is a count of the stressors to which that metric is responsive. For each scenario, we added the sensitivity scores of the metrics that comprise the scenario. By linking monitoring metrics to important stressors, we were asking panelists to indirectly evaluate how sensitive each monitoring scenario is to important changes in the system. By framing the survey questions in this manner, we were also able to craft a conceptual model of the system.
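The sensitivity scoring described above amounts to counting stressor links per metric and summing those counts per scenario. A minimal sketch follows; the metric-to-stressor links shown are purely hypothetical stand-ins, not the panel's actual elicitation.

```python
# Hypothetical elicitation result: each metric maps to the set of
# stressors the panel judged it responsive to.
METRIC_STRESSORS = {
    "productivity": {"prey decline", "disturbance", "contaminants"},
    "adult_survival": {"prey decline", "contaminants"},
    "total_nests": {"disturbance"},
}

def sensitivity_score(metric: str) -> int:
    """Count of stressors to which a metric is responsive."""
    return len(METRIC_STRESSORS[metric])

def scenario_sensitivity(metrics: list[str]) -> int:
    """Sum the sensitivity scores of the metrics comprising a scenario."""
    return sum(sensitivity_score(m) for m in metrics)

score = scenario_sensitivity(["productivity", "total_nests"])  # 3 + 1 = 4
```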
For the two reduced effort Status Quo scenarios, we did not collect information about the means objectives directly from the expert panel. Aided by a park scientist, we assigned values to these scenarios based on their relative performance to the Status Quo scenario.
Since the Status Quo scenario is comprised of three annual flight surveys, we estimated that one third of each score is attributable to each annual survey. Using these approximations and current park budgets, we calculated scores for the reduced scenarios by eliminating 50% of the second initiation flight (Reduced SQ1) or by eliminating the second initiation flight entirely and adding 50% to the productivity survey (Reduced SQ2). We normalized values for each scenario on a 0-1 scale, and those normalized values are used in the decision model (Table 2).
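The 0-1 normalization can be sketched as a min-max rescaling across scenarios. This is one common choice, assumed here for illustration; the raw cost values below are hypothetical, not the parks' budgets.

```python
def normalize(scores: dict[str, float]) -> dict[str, float]:
    """Min-max rescale raw scenario scores onto a 0-1 range."""
    lo, hi = min(scores.values()), max(scores.values())
    return {name: (s - lo) / (hi - lo) for name, s in scores.items()}

# Hypothetical annual costs per scenario (USD).
raw_cost = {"No Monitoring": 0, "Status Quo": 30_000, "Comprehensive": 60_000}
norm = normalize(raw_cost)  # lowest cost -> 0.0, highest -> 1.0
```

For objectives to be minimized (cost, effort), the model later uses 1 − normalized value so that higher always means better.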
We determined the weight of means objectives based on importance. These weights were determined by the panel of experts using a swing-weighting technique, adapted from Gregory et al. (2012). All panel members who were willing to participate in this task completed a swing-weighting form. We distributed a form to each panelist using Google Sheets. In this Google Sheet, we listed each means objective along with corresponding performance metrics, and whether our aim is to maximize or minimize that attribute. We displayed a range of values, including the worst and best possible values for each attribute. The worst and best values are generated from the range of score responses from the Delphi Process questionnaires. We also displayed five hypothetical situations. A "Benchmark" situation is comprised of the worst possible values for all four means objective attributes. In the remaining four hypothetical situations, all attributes were set to their worst values except for one attribute in each situation, which was set to its best value.
We asked panelists to rank the four hypothetical situations from 1 to 4 (1 is best). The Benchmark situation was automatically assigned the worst rank of 5. By doing this, we were asking the panelists which attribute they would swing to its best level, if they could only pick one. That situation received the rank of 1. The next most important swing was ranked 2, etc. We then asked panelists to score each situation based on its priority. The Rank 1 situation automatically received a score of 100. Panelists assigned scores in decreasing amounts to the remaining hypothetical situations based on importance in achieving each measure swing. We provided the example to panelists that if they score their Rank 2 situation at 50, they are insinuating that it is half as important to achieve that measure swing as the measure swing in their Rank 1 situation, which has a score of 100. Using Equation 1, we assigned a weight to each means objective for each individual panelist and created box and whisker plots for each objective.
To combine panelist responses for cumulative objective weights that will be used in the decision model, we averaged individual panelist weight (normalized) values for each means objective (Equation 1).
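Equation 1 (weight = score / sum of scores × 100, computed per panelist) and the averaging step can be sketched as follows. The two panelists' swing-weighting scores below are hypothetical examples, not responses from the study.

```python
def normalized_weights(scores: dict[str, float]) -> dict[str, float]:
    """Equation 1: weight = score / sum(scores) * 100, for one panelist."""
    total = sum(scores.values())
    return {obj: s / total * 100 for obj, s in scores.items()}

def mean_weights(panelists: list[dict[str, float]]) -> dict[str, float]:
    """Average each objective's normalized weight across panelists."""
    per_panelist = [normalized_weights(p) for p in panelists]
    return {obj: sum(w[obj] for w in per_panelist) / len(per_panelist)
            for obj in per_panelist[0]}

# Hypothetical swing-weighting scores (the Rank 1 situation scores 100).
p1 = {"cost": 20, "effort": 30, "accurate_info": 100, "detect_change": 75}
p2 = {"cost": 40, "effort": 25, "accurate_info": 100, "detect_change": 90}
weights = mean_weights([p1, p2])
```

Because each panelist's normalized weights sum to 100, the averaged weights do as well.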
To examine trade-offs of each monitoring scenario, we then combined normalized scenario scores and means objective weights to create our decision model. This decision model uses a technique called linear value modeling, also known as linear additive modeling.
A utility score is calculated for each monitoring scenario by multiplying that scenario's score for a particular objective by that objective's weight. The products are then summed across all objectives to create the utility score for each scenario, as demonstrated by the following equation: Utility = Σ W_i X_i, where W_i is the weight of means objective i and X_i is the performance score for means objective i (Gregory et al., 2012).
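This linear additive model can be sketched in a few lines of Python. The weights and normalized scores below are hypothetical illustrations, not the values elicited from the panel.

```python
def utility(weights: dict[str, float], scores: dict[str, float]) -> float:
    """Linear value model: Utility = sum over objectives i of W_i * X_i."""
    return sum(weights[obj] * scores[obj] for obj in weights)

# Hypothetical objective weights (fractions summing to 1) and normalized
# scenario scores; cost and effort are entered as 1 - normalized value,
# so higher is better for every objective.
weights = {"cost": 0.15, "effort": 0.15, "accurate_info": 0.40, "detect_change": 0.30}
scenarios = {
    "Comprehensive": {"cost": 0.0, "effort": 0.0, "accurate_info": 1.0, "detect_change": 1.0},
    "No Monitoring": {"cost": 1.0, "effort": 1.0, "accurate_info": 0.0, "detect_change": 0.0},
}
best = max(scenarios, key=lambda s: utility(weights, scenarios[s]))
```

With these hypothetical inputs, the heavy weighting of the information objectives makes the information-rich scenario optimal despite worst-case cost and effort scores, mirroring the pattern reported in our results.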
We displayed the decision model using program Netica from Norsys Software Corp. The decision net uses three types of nodes: a decision node, nature nodes, and a utility node. The decision node allows the user to select a scenario alternative and displays the utility value of each scenario. The decision node connects to four nature nodes, which correspond to each means objective. These nature nodes are thus named "Cost," "Effort," "Accurate_Info," and "Detect_Change." Using the normalized score values for each objective, we populated the model in Netica. Since this model is not probabilistic in nature, we did not assign distributions to the nature nodes.
Rather, we used program Netica to provide a visual representation of the decision model. Weights were normalized for each panelist as

weight (normalized) = (score / sum of scores) × 100.    (1)

We used the 1 − normalized values for "Cost" and "Effort," since our goal is to minimize these attributes, and the normalized values for "Accurate_Info" and "Detect_Change," since our goal is to maximize these attributes. The utility values are displayed in the decision node (Figure 1). The scenario with the highest utility value is considered the optimal decision.
We performed a sensitivity analysis to determine the change in objective weights needed to alter the outcome of the decision model. To examine sensitivity, we graphed the percent total utility for each monitoring scenario across objective weights ranging from 0 to 100. Percent total utility is a measure of an individual scenario's utility score compared to the utility scores of all scenarios combined at a particular objective weight. By examining intersections in the sensitivity graphs, we determined at which weight one scenario began to outcompete another.
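The sensitivity analysis can be sketched by sweeping one objective's weight from 0 to 100, rescaling the remaining weights proportionally so they still sum to 100 (an assumption on our part about the rescaling), and recording each scenario's percent of total utility at each step. All input values below are hypothetical.

```python
def percent_total_utility(weights, scenarios):
    """Each scenario's utility as a percentage of all scenarios' combined utility."""
    utils = {s: sum(weights[o] * x[o] for o in weights) for s, x in scenarios.items()}
    total = sum(utils.values())
    return {s: 100 * u / total for s, u in utils.items()}

def sweep(objective, base_weights, scenarios):
    """Vary one objective's weight from 0 to 100, rescaling the other
    weights proportionally so all weights still sum to 100."""
    others = {o: v for o, v in base_weights.items() if o != objective}
    rest = sum(others.values())
    curve = []
    for w in range(0, 101):
        ws = {o: v * (100 - w) / rest for o, v in others.items()}
        ws[objective] = w
        curve.append((w, percent_total_utility(ws, scenarios)))
    return curve

# Hypothetical inputs: weights on a 0-100 scale; scores normalized 0-1
# (cost and effort already entered as 1 - normalized value).
base_weights = {"cost": 15, "effort": 15, "accurate_info": 40, "detect_change": 30}
scenarios = {
    "Comprehensive": {"cost": 0.0, "effort": 0.0, "accurate_info": 1.0, "detect_change": 1.0},
    "Status Quo":    {"cost": 0.5, "effort": 0.5, "accurate_info": 0.6, "detect_change": 0.6},
    "No Monitoring": {"cost": 1.0, "effort": 1.0, "accurate_info": 0.0, "detect_change": 0.0},
}
curve = sweep("cost", base_weights, scenarios)
```

A crossing point in the resulting curves marks the weight at which one scenario begins to outcompete another, which is exactly what the graphs in Figure 5 display.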
| RESULTS

Panelists ranked the means objectives of ability to detect change and accurate information about bald eagles higher than the cost and effort objectives (Figure 3). The best-performing solution according to this model, the Comprehensive scenario (Figure 4), was comprised of high scores for detecting change and collecting accurate information. Although it received scores of zero for cost and effort, it still outranked all other scenarios.

FIGURE 2: Normalized scores for each means objective defined in the decision about bald eagle monitoring in Southwest Alaska National Parks. For the cost and effort objectives, this chart displays 1 − normalized values so that higher scores on this chart represent better-performing scenarios for each objective. Points farther from the origin on each axis are considered "better-performing" with regard to that axis's means objective.

FIGURE 3: Boxplots showing the distribution of panelist weights (n = 10) for the four means objectives of the long-term bald eagle monitoring program in SWAN. These weights were collected through a swing-weighting procedure, and average weights are used in the final decision model. A boxplot is shown for each objective, and the median is displayed on each plot. Outliers, represented by dots, are defined as any points that lie beyond 1.5 × the interquartile range from the hinge. The range of each boxplot represents the range of individual panelist responses.
The sensitivity analysis revealed the points at which changing objective weights would change the optimal decision (Figure 5). Experimentally increasing the cost of the Comprehensive survey to 500 times the value of the Status Quo scenario did not change the optimal decision, as shown by comparing the proportional utility of the Comprehensive and Status Quo scenarios to the proportional cost of these scenarios (Figure 6).

| DISCUSSION
An "adaptive monitoring" approach should be based on clearly defined questions and should adopt an iterative approach to developing these questions, collecting data, and interpreting data (Lindenmayer & Likens, 2009). Successful monitoring programs should also focus on initial planning and collaborative learning (Reynolds et al., 2016). In this decision context of long-term bald eagle monitoring in Southwest Alaska National Parks, we queried a team of decision-makers about the study system and the long-term monitoring program to provide an initial analysis of the decision problem. By doing so, we have set the stage to continue making better monitoring decisions in an adaptive monitoring context. A structured approach to the decision about long-term bald eagle monitoring for this park system may help park managers consider alternatives that might otherwise be dismissed out of concern for losing consistency by changing a long-standing monitoring program. By using a structured decision model, we illustrate the importance of carefully evaluating the resources required of a project before changing the monitoring protocol.
Although any number of monitoring scenarios could have been analyzed using these methods, we chose six scenarios that we felt adequately represented the range of options that vary in the cost and effort they require as well as the information they provide.
Following guidelines from Gregory et al. (2012), the alternatives that are considered when making a decision should be able to provide a complete and meaningful resolution to the problem at hand. We chose not to include all possible combinations of monitoring metrics as monitoring scenarios, as some of these would have been unreasonable. For example, some metrics such as "total number of bald eagle nests" can be measured concurrently while surveying additional metrics. Therefore, eliminating this metric would not reduce cost or effort. In addition to the obvious combinations of currently monitored and newly proposed metrics, we explored two scenarios that monitored current metrics with reduced cost and effort.
Although these scenarios would still provide some information about currently monitored metrics, they would introduce noise and, in the case of Reduced SQ2, bias. This would affect our ability to detect changes in the bald eagle population. We calculated the cost and effort for these reduced scenarios based on a previous budget, and the values entered for amount of accurate information and ability to detect change were estimated with the help of an expert. While we feel these values adequately represent the reduced scenarios, we suggest that if the Southwest Alaska Network chooses to pursue these options in the future, it thoroughly examine the costs and effort days required at that time. However, we suggest that these options not be pursued, as the Status Quo monitoring program outcompeted the reduced programs across a wide range of objective weights.
Competing means objectives make it necessary to consider trade-offs, which are inevitable in natural resource decisions. The results of our decision model relied on the weights assigned by panelists. Since cost and effort were both valued at relatively low weights, the calculated expected utility of each scenario was largely based on its ability to generate accurate information about bald eagles and detect changes in populations, without substantial regard to overall program cost. In fact, the monitoring scenario that ultimately had the largest expected utility was the worst performer in terms of cost and effort (it received scores of 0 for these objectives), but performed well enough in its ability to generate accurate information and detect change that it outcompeted all other monitoring scenarios (cf. Gende, Hendrix, & Schmidt, 2018). Cost and effort were weighted very similarly, as were detecting change and collecting accurate information. As the discussion of an optimal monitoring program continues, it may be beneficial to reduce the decision to a simpler cost-benefit analysis, with cost and effort combined into a "resources required" objective and accurate information and detecting change combined into an "information obtained" objective.
While simplifying the problem to two means objectives would still be considered a multi-criteria decision analysis, the decision can be improved with greater simplicity (Mendoza & Martins, 2006). The scores assigned to each objective and used as inputs in the decision models were generated using expert opinion, with the panel of experts including decision-makers. This is a valid method of collecting information to supplement empirical data (Eycott, Marzano, & Watts, 2011; MacMillan & Marshall, 2006) and evaluating the desired outcomes involved in a monitoring decision (Gregory et al., 2012). Expert panel selection can influence model outcome (Krueger, Page, Hubacek, Smith, & Hiscock, 2012; Runge et al., 2011). As more information is obtained, the decision framework should be re-evaluated and changes should be expected (Neckles et al., 2015).
Additionally, a limitation of our study was ambiguity surrounding the monitoring metric "Adult Survival." Since survival is not currently monitored, we asked panelists to make educated guesses about the values that were entered in the model. However, since methods to measure adult survival were not specified, panelists were likely considering differing methods when estimating performance measures.
As an example, one panelist suggested measuring survival by collecting feathers from tree bases, and thus had lower estimates of cost and effort. An additional limitation was the way in which the question about cost was presented (panelists chose cost from a list of options, the greatest of which was $25,000+). This limited the cost of monitoring adult survival to a value that was likely much less than the realistic cost. As this is an initial analysis of the decision problem, we suggest future iterations of this decision analysis explore more thoroughly the exact needs of the parks to monitor adult survival and changes in distribution. A more specific cost estimate to monitor these metrics should be generated, as well as a more specific statement of how this information will be used.
By doing so, decision-makers can further analyze the trade-offs involved in taking on this more intensive monitoring effort.
Low response rates to questions about estimating cost and effort of various monitoring metrics limited our response data. Some panelists felt unqualified to make those estimates, and these questions were asked late in the Delphi Process, when response rates were lower and survey fatigue may have set in (Porter, Whitcomb, & Weitzer, 2004).

This process provides an example of using structured decision techniques to inform practical conservation decisions by addressing a unique decision problem about long-term monitoring without associated management action. As future information is collected and priorities of decision-makers change, iterative analysis of this decision problem can help to provide the basis for a successful and efficient monitoring program. We believe that examining the decision problem through a documented and structured process allowed our team of decision-makers to focus on the specifics of the monitoring program and fostered consensus surrounding the monitoring decision.

CONFLICT OF INTEREST
The authors declare no conflicts of interest.