Optimal spatiotemporal effort allocation for invasive species removal incorporating a removal handling time and budget

Improving strategies for the control and eradication of invasive species is an important aspect of nature conservation, an aspect where mathematical modeling and optimization play an important role. In this paper, we introduce a reaction‐diffusion partial differential equation to model the spatiotemporal dynamics of an invasive species, and we use optimal control theory to solve for optimal management, while implementing a budget constraint. We perform an analytical study of the model properties, including the well‐posedness of the problem. We apply this to two hypothetical but realistic problems involving plant and animal invasive species. This allows us to determine the optimal space and time allocation of the efforts, as well as the final length of the removal program so as to reach the local extinction of the species.

Invasive species are a major conservation concern because of the ecological and economic (Pimentel, Zuniga, & Morrison, 2005) damage that they cause. Hence, targeting and removing invasive species from ecosystems can have important benefits. However, determining how to allocate limited efforts effectively is a challenging problem. When we consider control strategies and control effort, we are generally talking about how money should be spent through time and space; however, efforts can also constitute other things, such as the amount of time people spend searching for and removing invasive species. Model-driven approaches are a useful tool for invasive species management because they allow the adaptation of control strategies to a variety of scenarios (Baker, Armsworth, & Lenhart, 2017; Baker & Bode, 2016; Baker, 2016; Marangi et al., 2018). Among the modeling approaches, partial differential equation models exhibit enough flexibility to account for spatial features in a continually changing environment under pressure from external drivers, as well as ensuring the transferability of the models from one species to another through suitable parameter selections. Furthermore, models allow us to test the effectiveness of different control strategies and to use methods such as optimal control theory, together with numerical techniques (Chyba, Hairer, & Vilmart, 2009), to optimize management. The introduction of space into the model makes the solution of the optimal control problem less straightforward because there is no complete generalization of Pontryagin's Maximum Principle to partial differential equations. The basic ideas of optimal control of partial differential equations (PDEs) and some "maximum principle" results can be found in Lions (1970) and Li and Yong (2001), respectively. An informal treatment and some examples of PDE optimal control problems of ecological interest are illustrated in Lenhart and Workman (2007).
In this paper, the formulation of the optimal control problem is based on a reaction-diffusion PDE model with logistic growth, introduced in Baker (2016). We modify the model to describe a more complex and realistic situation by including a control term with Holling type II behavior and a budget constraint. We perform an analytical study of the model properties, including the well-posedness of the problem and the existence and uniqueness of the optimal solution, and conduct numerical tests to show the effectiveness of the approach.
There is a range of ways that invasive species removal has been modeled in the past, from mass action interactions (Neubert, 2003), where the rate of removal is proportional to the control effort multiplied by abundance, to a constant rate of removal (Lampert, Hastings, Grosholz, Jardine, & Sanchirico, 2014), where the rate of removal is simply proportional to the control effort. Both of these approaches have weaknesses. Mass action is unrealistic at high abundances, as the removal rate can become arbitrarily large for a very large invasive population. In reality, the removal rate is limited by aspects such as the number of baits dropped (where the maximum number of removals is the number of baits) or removal time, where each invasive individual requires a fixed amount of time to remove (where the maximum number of removals is the time elapsed divided by the time to remove one individual). Constant removal is the opposite extreme and is not appropriate for small abundances, as the removal rate does not decrease as the population declines, meaning it does not model the increased search time needed to find an individual as the species becomes rare. A Holling type II functional response seeks to get the best of both worlds, behaving like mass action at low abundances but saturating to constant removal at high abundances. This has been introduced previously in an invasive species context (Baker et al., 2017) but has never been considered in the context of spatial control of an invasive species.
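The contrast between the three removal models can be made concrete in a short numerical sketch (illustrative parameter values; `mass_action`, `constant_removal`, and `holling_II` are hypothetical helper names, not code from the paper):

```python
def mass_action(n, E, mu):
    # removal rate proportional to effort times abundance (unbounded in n)
    return E * mu * n

def constant_removal(n, E, h):
    # removal rate set by effort alone, independent of abundance
    return E / h

def holling_II(n, E, mu, h):
    # interpolates: ~ E*mu*n for small n, saturating to E/h as n grows
    return E * mu * n / (1.0 + h * mu * n)

E, mu, h = 1.0, 2.0, 0.5
# at low abundance the Holling type II rate matches mass action
assert abs(holling_II(1e-4, E, mu, h) - mass_action(1e-4, E, mu)) < 1e-6
# at high abundance it saturates near the constant rate E/h
assert abs(holling_II(1e4, E, mu, h) - constant_removal(1e4, E, h)) < 1e-3
```

The saturation level E/h is exactly the "baits dropped" or "handling time" ceiling described above.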
Improving invasive species management modeling to better incorporate handling times is important because it makes modeling results relevant to more situations, including, for example, the control of Ailanthus altissima in Italy (Casella & Vurro, 2013). Ailanthus is a prolific weed species, invading countries worldwide and even capable of invading established forest (Knapp & Canham, 2000). Easily reaching 20 m in height, its trees are not trivial to remove. The method most commonly used to eliminate Ailanthus altissima is mechanical removal of the complete aerial biomass; however, due to the plant's tendency to resprout, on its own it is an unsuccessful control method. Even if mechanical treatments continuously applied during the growing period could improve long-term success, recent research in temperate environments has found that the joint application of mechanical and chemical (herbicide) treatments is more effective. The mid- and long-term effects of repeated application of these treatments on this resprouting species are still unknown, and studies assessing the long-term prognosis for control are needed to test the usefulness of these techniques (Constán-Nava, Bonet, Pastor, & Lledó, 2010). Hence, incorporating into the optimization the considerable time effort required to remove established individuals is key.
The paper is organized as follows. In Section 2, we describe the mathematical framework and formulate the optimal control problem. In Section 3, we provide two a priori estimates for the state variable when the control is fixed. In Section 4, we demonstrate the existence of the optimal control. In Section 5, we provide the sensitivity equation and the adjoint system involved in the first-order optimality conditions, and then we prove the uniqueness of the optimal solution. The proofs of these results need the boundedness of the adjoint problem solution, which is discussed in Appendix A. Finally, in Section 6 we simulate two hypothetical but realistic applications on spatial geometries with different shapes; precisely, we analyze both the eradication of an invasive plant species in a parking area and the control of an alien fish species population in a lake. The conclusions follow in Section 7.

2 | MODEL FORMULATION
We study the spatiotemporal dynamics of an invasive species population living in an open bounded domain Ω, with a smooth boundary ∂Ω. Let n(t, x) represent the population density at (vector) position x ∈ Ω and time t ≥ 0. A reaction-diffusion model is used to describe the spatial and temporal dynamics of the system. We add a logistic density-dependent term, with growth rate r and carrying capacity k, to model population growth. The control term E(t, x) represents the effort allocated to the removal of the invasive species; we impose a constraint on the budget with a fixed bound B > 0, and we assume 0 ≤ E(t, x) ≤ B for any x and t. In addition, the control action on n is modeled by a Holling type II function μn/(1 + hμn), where μ > 0 is the removal rate per population density unit due to control, while h > 0 represents the average time spent for the removal of one population item. A zero-flux boundary condition is adopted and the initial density n₀ is given. Thus, the model consists of the following equations:

$$\partial_t n = D\,\Delta n + r\,n\left(\rho(x) - \frac{n}{k}\right) - E(t,x)\,\frac{\mu n}{1 + h\mu n} \quad \text{in } \Omega \times (0, \infty), \tag{1}$$

$$\frac{\partial n}{\partial \hat{n}} = 0 \ \text{ on } \partial\Omega \times (0, \infty), \qquad n(0, x) = n_0(x) \ \text{ in } \Omega, \tag{2}$$

where D > 0 represents the constant diffusion coefficient and $\hat{n}$ is the outward normal vector on ∂Ω. In the reaction part, the term ρ(x) represents the habitat suitability function; due to its meaning, we assume that ρ(x) is bounded and non-negative. With the aim of analyzing the well-posedness of the previous model, we state the following result.
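As a purely illustrative companion to the model, the sketch below integrates a one-dimensional analogue of the state equation with an explicit finite-difference scheme, assuming the suitability scales the effective carrying capacity through a growth term of the form r n(ρ(x) − n/k) (our reading of the garbled source) and using made-up parameter values with a fixed, uniform effort:

```python
import numpy as np

# Hypothetical 1D discretization of
#   n_t = D n_xx + r n (rho(x) - n/k) - E mu n / (1 + h mu n)
# with zero-flux boundaries; all values below are illustrative only.
D, r, k, mu, h = 0.005, 1.9, 1.0, 9.0, 1.0
L, Nx, T, Nt = 1.0, 101, 1.0, 2000
x = np.linspace(0.0, L, Nx); dx = x[1] - x[0]; dt = T / Nt
rho = np.where(x < 0.5, 0.7, 0.9)     # piecewise suitability (illustrative)
E = np.full(Nx, 0.5)                  # fixed uniform removal effort
n = 0.2 * np.ones(Nx)                 # initial density

for _ in range(Nt):
    # zero-flux (Neumann) boundaries via reflected ghost nodes
    n_ext = np.concatenate(([n[1]], n, [n[-2]]))
    lap = (n_ext[2:] - 2 * n + n_ext[:-2]) / dx**2
    removal = E * mu * n / (1.0 + h * mu * n)   # Holling type II control
    n = n + dt * (D * lap + r * n * (rho - n / k) - removal)

assert n.min() >= 0.0   # density stays non-negative
assert n.max() <= k     # and below the carrying capacity
```

The explicit step size obeys the usual stability restriction D Δt/Δx² < 1/2; an implicit or finite-element scheme, as used later in the paper, relaxes this.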
Proposition 2.1 Assume that the initial value n₀ is non-negative and bounded. Then, there exists a unique non-negative classical solution of problem (1) and (2) for all t ≥ 0.

Proof We refer to Hollis, Martin, and Pierre (1987), where global existence and uniform boundedness are discussed for a class of reaction-diffusion systems involving two unknowns. Our proof can be obtained by adapting the arguments developed in Proposition 1 and Theorem 1 in Hollis et al. (1987) to the case when one unknown is dropped and the differential system reduces to a single equation. More precisely, we consider the forcing term F related to the reaction part of (1) and notice that, for any t ≥ 0, it satisfies the required growth bounds for all t, w ≥ 0 with w ≤ R. In this respect, the assumptions of Proposition 1 in Hollis et al. (1987) hold; then local existence can be achieved with a similar approach, by exploiting well-known semigroup theory (see Henry, 1981; Pazy, 1983). Actually, due to Theorem 3.3.3 in Henry (1981), we may argue that model (1) and (2) has a unique, noncontinuable (classical) solution n on Ω × [0, T*). We stress that n(t, ·) has non-negative values on Ω: Theorem 3.4.1 in Henry (1981) assures continuous dependence of the solutions on the forcing term F for our problem; therefore, Theorem 14.11 in Smoller (2012) can be applied to prove that the set Σ = {n : −n ≤ 0} = {n : n ≥ 0} represents an invariant region for (1). Moreover (see Ball, 1977; Hollis et al., 1987), there exist two continuous functions bounding the reaction term for all t, w ≥ 0 with w ≤ R; they may be defined by the constant values μ₀(t, R) = rR and μ₁(t) = r, respectively. In this way, on each bounded time interval, n(t, ·) remains uniformly bounded so long as the solution of (1) exists (see Remark 1 in Hollis et al., 1987). Thus, as for Theorem 1 in Hollis et al. (1987), this implies T* = ∞, and the global existence of non-negative, classical solutions is assured. □
In the following, we focus on the dynamics over a given finite time horizon [0, T], with T > 0. Therefore, we assume that the effort E(t, x) allocated for the invasive species removal lies in the bounded set of admissible controls (3), consisting of the functions E ∈ L∞(Ω × [0, T]) with 0 ≤ E(t, x) ≤ B. Under the assumptions of Proposition 2.1, the classical solution n also satisfies the weak formulation of the given model in [0, T]; in other words, it represents the unique non-negative function n(t, x) such that n ∈ L²(0, T; H¹(Ω)) and ∂n/∂t ∈ L²(0, T; H¹(Ω)*), with the state equations holding in the duality between H¹(Ω) and H¹(Ω)*. The goal that we consider is to minimize the environmental damage over time at the minimum cost, in terms of the efforts allocated to the species removal; moreover, a budget constraint can be imposed for the implementation of the eradication program. That translates into solving a constrained optimal control problem. Assuming that the environmental damage has a cost which increases with the presence of the invasive species, we also search for the minimum of the final population density n(T, x). A penalty term is added to take the budget constraint into account for any (t, x) ∈ Ω × [0, T], when the budget bound B > 0 is fixed. As a result, we build the penalized objective function J(E) (see Baker, 2016), where ω represents the cost due to the environmental damage, ν is a weight for the final population density, δ ∈ (0, 1) is the discount factor, and q ∈ ℕ, q ≥ 2, and c ≥ 0 are suitable constant values. In the following, we assume ν ∈ L∞(Ω) and ω bounded; this implies that all integrals in J(E) are well defined. We notice that the penalty term represents the budget constraint and may be neglected by setting c = 0. To handle both problems, with and without the penalty term, we have introduced the constant c ≥ 0; however, when c is strictly positive, the penalty term enforces the budget bound on the optimal solution if and only if c is chosen sufficiently large.
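The displayed functional is lost in the extraction; a plausible reconstruction, consistent with the ingredients listed above (discounting by δ, damage weight ω, effort cost, terminal weight ν, and a pointwise penalty built from c, q, and B) but not to be read as the paper's exact formula, is:

```latex
J(E) = \int_0^T \!\! \int_\Omega e^{-\delta t}
       \left[\,\omega(t,x)\,n(t,x) \;+\; E(t,x)
       \;+\; c\left(\frac{E(t,x)}{B}\right)^{2q}\right] dx\,dt
       \;+\; \int_\Omega \nu(x)\,n(T,x)\,dx .
```

Setting c = 0 removes the penalty term, matching the remark above.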
Given the arbitrariness of the parameter c, we also bound the controls by B in the admissible set, taking into account that this upper bound can be redundant when the penalty term is enforced.
In this framework, we search for an admissible control E* minimizing J(E), subject to the state equations (4) and (5).

3 | SOME A PRIORI ESTIMATES FOR THE STATE SOLUTION
In this section, we focus on some features of the state solution when the control E is given. We establish two a priori estimates for the state variable, which may be exploited for proving the existence of optimal solutions and for characterizing the optimal control. The estimates can be proved by following approaches that are well known in the literature; in particular, the arguments provided in Finotti, Lenhart, and VanPhan (2012) can be used. For the sake of brevity, the proofs are omitted or briefly sketched. The first proposition provides a simple energy estimate.
Proposition 3.1 There exists a constant C > 0, depending on D, r, ‖ρ‖_{L∞(Ω)}, and T, such that the energy estimate (7) holds for any solution n(t, x). In particular, an explicit expression for the constant C in (7) may be derived.

The next result shows that any solution n may be bounded by a value which depends neither on the control E nor on the detailed structure of the habitat suitability ρ, but only on its greatest value ‖ρ‖_{L∞(Ω)} and on the time horizon length T; in particular, this bound increases as T grows.
Proposition 3.2 We assume that the initial value n₀ is non-negative and bounded. Then there exists a constant C̄ > 0, depending on ‖ρ‖_{L∞(Ω)} and T, such that estimate (8) holds for any state solution n(t, x).

Concerning the previous result, we remark that it is sufficient to prove that n is bounded from above on Ω × [0, T], since n ≥ 0 over Ω due to Proposition 2.1. With this aim, it is possible to adapt the proof of Lemma 3.4 in Finotti et al. (2012), which is based on an iteration technique described in Ladyzenskaja, Solonnikov, and Ural'ceva (1968): an integer K is fixed, a partition of the time interval is introduced, and, by the same arguments as in Finotti et al. (2012) and Ladyzenskaja et al. (1968), the bound on n(t, x) can be proved. It is also possible to prove that both constants increase when the time horizon becomes wider and T increases (see Finotti et al., 2012). The previous results are summarized in the following proposition.

Proposition 3.3 For any admissible control E, the state problem (1) and (2) admits a unique weak solution n = n(E), which satisfies estimates (7) and (8), and it is non-negatively valued, that is, n(t, x) ≥ 0 almost everywhere in Ω × [0, T].

4 | EXISTENCE OF AN OPTIMAL CONTROL
Recall that the set of admissible controls is defined in (3): it includes all bounded functions E(t, x) with non-negative values not exceeding B > 0 over Ω × [0, T]. For each admissible control E, the state variable n = n(E) satisfies (4) and (5) and represents the weak solution of the state equations (1) and (2); moreover, n(E) has non-negative values.
In this framework, we provide the following result which states the optimal control existence.
Proposition 4.1 There exists an admissible optimal control E* minimizing the objective function J(E).
Proof We notice that, since J(E) has non-negative values over the admissible set, it is bounded from below. Then, we choose a minimizing sequence {E_i}_i of admissible controls and denote by n_i = n(E_i) the corresponding states. By passing to a subsequence, we may assume that E_i converges weakly to some admissible E* and that the corresponding convergence relationships hold for any test function. Due to the features of the function f described in Section 2, the nonlinear terms pass to the limit as well. Starting from estimates (10), (11), and (13), using the results in Simon (1987), and passing to a further subsequence, we obtain strong convergence of the states; we then focus on equation (12). The Lipschitz continuity of f and the uniform boundedness of the sequence {n_i}_i can be used in the previous relationship to prove the existence of a constant C > 0 bounding the remainder terms. Applying all the above convergence relationships to equation (12), we obtain that n* = n(E*), that is, n* is the solution of (4) and (5) with respect to the control E*.
Finally, we recall the convergence result n_i → n* and notice that the continuity and convexity of the objective functional J(E) imply its weak sequential lower semi-continuity; then, J(E*) does not exceed the infimum of J over the admissible set. Thus, E* is an optimal control and the proof is completed. □

5 | FIRST-ORDER NECESSARY CONDITIONS AND UNIQUENESS OF THE OPTIMAL CONTROL
With the aim of analyzing the features of the optimal control, we focus on differentiating the objective functional J(E) with respect to the control function E. In the following, we provide the sensitivity equation and the adjoint problem, and we obtain a relationship which characterizes the optimal control. We also prove the uniqueness of the optimal solution.

5.1 | Sensitivity equation
The unique solution n = n(E) of (4) and (5) is involved in the evaluation of J(E); therefore, we start by analyzing the map E ↦ n(E). Here, we prove that this map is differentiable and we characterize its derivative, which represents the sensitivity of the model.
Proposition 5.1 Consider any admissible control E, an arbitrary bounded function l(t, x), that is, l ∈ L∞(Ω × [0, T]), and a real number ε small enough that E + εl is admissible. Denote by n(E) and n(E + εl) the corresponding state variables. Then, the difference quotients (n(E + εl) − n(E))/ε converge to the sensitivity ψ as ε → 0. Moreover, the sensitivity ψ is the unique weak solution of the linearized problem (14) and (15).

Proof Fix the admissible controls E and E + εl; then denote n = n(E) and n_ε = n(E + εl), respectively. We notice that n satisfies (4) and (5); on the other hand, n_ε solves the analogous problem (16) and (17) in the weak sense. It follows that both n and n_ε satisfy the estimates in Proposition 3.3. Using these relationships, together with the uniqueness of the solution n = n(E) of (4) and (5), it is possible to argue as in the proof of Proposition 4.1 to obtain convergence results for n_ε as ε → 0. Then, subtracting (1) and (2) from (16) and (17) and dividing by ε, we get that ψ_ε = (n_ε − n)/ε solves a problem of the form (18) and (19) in the weak sense. We remark that the coefficients appearing there are bounded; it follows that (18) and (19) represent a linear parabolic problem with bounded coefficients. Due to parabolic regularity theory (see Ladyzenskaja et al., 1968, Theorem 9.1, pages 341-342), we may argue that problem (18) and (19) admits a unique weak solution, which satisfies an a priori estimate with a suitable constant C̃ > 0. On the one hand, we consider the weak formulation of problem (18) with homogeneous initial condition ψ_ε(0, ·) = 0 on Ω; on the other hand, by passing to a subsequence, we may assume that ψ_ε converges weakly to ψ in L²(0, T; H¹(Ω)) as ε → 0. These results can be applied to the previous weak formulation to pass to the limit. Due to the boundedness of the functions n, E, and l, it is not difficult to verify that there exist two constants C* > 0 and C** > 0 controlling the remainder terms, so that ψ is a weak solution of (14) and (15).
As concerns uniqueness, we suppose that ψ₁ and ψ₂ are two weak solutions of (14) and (15) corresponding to the same E, n = n(E), and l. After defining ζ = ψ₁ − ψ₂, it is easy to verify that ζ satisfies the corresponding homogeneous problem. We multiply the equation by ζ and integrate by parts; then, we apply Gronwall's inequality to get ∫_Ω ζ²(t, x) dx ≤ 0 for all t. It follows that ζ(t, x) = 0 almost everywhere in Ω × [0, T]; then ψ₁ = ψ₂. □
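The defining limit ψ = lim_{ε→0} (n(E + εl) − n(E))/ε can be checked numerically on a spatially homogeneous (ODE) analogue of the state equation; the sketch below compares a difference quotient with the solution of the linearized equation (illustrative parameters and helper names, not from the paper):

```python
import numpy as np

# ODE analogue of the state equation, used to illustrate the sensitivity
# psi = d n(E)/dE in the direction l, via its linearized equation.
r, rho, k, mu, h = 1.9, 0.9, 1.0, 9.0, 1.0
T, Nt = 1.0, 20000
dt = T / Nt

def solve_state(E):
    n = 0.5
    for _ in range(Nt):
        n += dt * (r * n * (rho - n / k) - E * mu * n / (1 + h * mu * n))
    return n

def solve_sensitivity(E, l):
    # forward Euler on the coupled state + linearized (sensitivity) system
    n, psi = 0.5, 0.0
    for _ in range(Nt):
        fn = r * (rho - 2 * n / k) - E * mu / (1 + h * mu * n) ** 2
        fE = -mu * n / (1 + h * mu * n)
        psi += dt * (fn * psi + fE * l)
        n += dt * (r * n * (rho - n / k) - E * mu * n / (1 + h * mu * n))
    return psi

E0, l, eps = 0.4, 1.0, 1e-5
fd = (solve_state(E0 + eps * l) - solve_state(E0)) / eps  # difference quotient
psi = solve_sensitivity(E0, l)
assert psi < 0                      # more effort means less final population
assert abs(fd - psi) < 1e-3 * abs(psi)
```

Because the discrete sensitivity recursion is the exact derivative of the discrete state map, the two quantities agree up to O(ε).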

5.2 | Adjoint equation and optimality condition
We consider an admissible control E* and its corresponding state n* = n(E*); then, we fix a variation function l ∈ L∞(Ω × [0, T]). We have proved that the sensitivity ψ = ψ(E*, l) satisfies equation (20), with the corresponding sensitivity operator. We notice that the sensitivity equation represents a linearized version of the state problem (1) and (2). In the following, we set a constant α > 0, depending on c, q, and B, and define the function φ_α(s, z), which maps ℝ₊² into ℝ₊, for each s, z ≥ 0. In the next result, we apply the sensitivity equation to find the adjoint system; moreover, we use φ_α to characterize the optimal control.
Proposition 5.2 Assume that E* is an optimal control and n* = n(E*) represents its corresponding state.
Then there exists a unique weak solution λ* ∈ L²(0, T; H¹(Ω)) of the adjoint equations (24) and (25). Moreover, the optimal control E* is characterized by the relationship

E*(t, x) = min{φ_α(n*(t, x), λ*(t, x)), B},  (26)

for each x ∈ Ω and for all t ∈ [0, T].
Proof Consider an optimal control E* and denote by n* = n(E*) its corresponding state. The proof is divided into two parts: the first deals with the analysis of the adjoint equation, and the second characterizes the optimal control E* by exploiting the previous results.
Thus, as a first step, we discuss the well-posedness of the adjoint equation, proving that its solution has non-negative values over the whole domain; in addition, we mention its boundedness. In this respect, we reverse the time variable and define λ̃(t, x) = λ(T − t, x); from (24) and (25), we argue that λ̃ satisfies problem (27) and (28). We notice that, in equation (27), the terms n*, E*, and ω are reversed in time, that is, they are evaluated at (T − t, x). Moreover, we remark that the functions ρ(x) and ω(t, x) are assumed to be bounded. Under these conditions, problem (27) and (28) is a linear parabolic equation with bounded coefficients. According to the classical regularity theory for parabolic models (see Ladyzenskaja et al., 1968, Theorem 9.1, pages 341-342), we may state the existence of a unique weak solution λ̃ with bounded norm. We also claim that λ̃ is non-negative over the whole domain Ω × [0, T]. It is possible to prove this statement by applying Theorem 14.7 in Smoller (2012). Indeed, we may consider a smooth real-valued function such as G(λ̃) = −λ̃, whose gradient G′(λ̃) = −1 never vanishes; then we may define the set Σ = {λ̃ : λ̃ ≥ 0}. The required inequality holds at the boundary ∂Σ; then, all the assumptions of Theorem 14.7 in Smoller (2012) are verified. It follows that Σ is an invariant region for equations (27) and (28). Therefore, starting from λ̃(0, x) = ν(x) ∈ Σ for each x ∈ Ω, we get λ̃(t, x) ∈ Σ for each x ∈ Ω and t ≥ 0. That ensures the non-negativity of the solution λ̃.
In addition, it is possible to prove that the solution λ̃(t, x) is bounded from above by a suitable constant C̃ > 0, which depends on the model parameters; for the sake of brevity, we moved the proof of this statement to Appendix A. We remark that the features of system (27) and (28) are inherited by the original problem. Then, we may state that equations (24) and (25) have a unique weak solution λ*, which is bounded and non-negative; more precisely, 0 ≤ λ*(t, x) ≤ C̃ for each (t, x) ∈ Ω × [0, T], where C̃ > 0 represents a suitable constant. In this way, the first part of Proposition 5.2 is proved.
As a second step, we characterize the optimal control E* with respect to n* and λ*.
Fix an arbitrary variation l ∈ L∞(Ω × [0, T]) and ε > 0 sufficiently small; then, denote by n_ε = n(E* + εl) the unique solution of (1) and (2) when the control is set equal to E* + εl. Due to Proposition 5.1, (n_ε − n*)/ε converges, as ε → 0, to the sensitivity ψ, which is the unique weak solution of (20). The optimality of E* implies J(E* + εl) ≥ J(E*), which leads to inequality (33). The first two integrals in (33) may be written in an equivalent and alternative way by using the adjoint equation: we consider the solution λ* of (24) and (25), replace (31) into (33), and obtain an inequality involving the quantity g(E*, n*, λ*). Since n*, λ*, and E* are bounded, there exists a constant C̃ > 0 such that |g(E*, n*, λ*)| ≤ C̃ over the whole domain Ω × [0, T]. This bound is used to find a characterization of E* with respect to both the state and the adjoint variables. Actually, in the limit process related to (33), the variation l can be selected under the following three arguments.

1. We choose an arbitrary real number σ with 0 < σ < B/(1 + B) and restrict attention to the set of points where E*(t, x) stays away from the bounds 0 and B by a margin depending on σ, defining a suitable variation l supported on this set. Selecting ε small enough that ε ≤ min(σ, σB)/C̃ assures the admissibility of E* + εl, and inequality (33) holds over this set. We remark that σ is arbitrary; thus, letting σ → 0⁺ (so that ε → 0⁺ in the limit process), we conclude that, at the points where 0 < E* < B, the optimal control is the positive root of g(·, n*, λ*) = 0, which is equivalent to E*(t, x) = φ_α(n*(t, x), λ*(t, x)) < B. As a result, the optimality condition (26) holds at all such points.

2. We consider the set Γ₀ of the points where E*(t, x) = 0, assuming it is not empty, and take any admissible variation l such that l(t, x) > 0 on Γ₀ and l(t, x) = 0 otherwise. In this case, the limit process yields an inequality for g(0, n*, λ*) over Γ₀. Since l(t, x) is arbitrary and non-negative over Γ₀, and since n* and λ* have non-negative values over the whole domain, the previous relationship gives E*(t, x) = φ_α(n*(t, x), λ*(t, x)) = 0 < B over Γ₀; as a consequence, the optimality condition (26) also holds at the points where E* vanishes.

3. Finally, we consider the set Γ_B of the points where E*(t, x) = B, assuming it is not empty, and take any l ∈ L∞(Ω × [0, T]) with l(t, x) ≤ 0 on Γ_B and l(t, x) = 0 otherwise. We also notice that g(φ_α(n*, λ*), n*, λ*) = 0 over the whole domain; therefore, it holds that g(B, n*, λ*) ≤ g(φ_α(n*, λ*), n*, λ*) on Γ_B. It is straightforward to verify that the function g(·, n*, λ*) is strictly increasing over ℝ₊; then, we may state B ≤ φ_α(n*, λ*) over Γ_B. As a conclusion, E*(t, x) = min{φ_α(n*(t, x), λ*(t, x)), B} holds over Γ_B, and the optimality condition (26) is also verified at these points.

In this way, Proposition 5.2 is completely proved. □
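Numerically, first-order conditions of this kind are often exploited through a forward-backward sweep: integrate the state forward, the adjoint backward, then update the control from the optimality condition. The sketch below does this for the spatially homogeneous (ODE) analogue, with an illustrative quadratic effort cost so that the control update has a closed form; in the paper's setting the update would instead be E* = min{φ_α(n*, λ*), B}, and all parameter values here are made up:

```python
import numpy as np

# Damped forward-backward sweep on an ODE analogue of the control problem.
r, rho, k, mu, h = 1.9, 0.9, 1.0, 9.0, 1.0
omega, nu, c, B = 1.0, 1.0, 1.0, 1.0
T, Nt = 1.0, 2000
dt = T / Nt
n0 = 0.5

def forward(E):
    n = np.empty(Nt + 1); n[0] = n0
    for j in range(Nt):
        f = r * n[j] * (rho - n[j] / k) - E[j] * mu * n[j] / (1 + h * mu * n[j])
        n[j + 1] = n[j] + dt * f
    return n

def backward(n, E):
    lam = np.empty(Nt + 1); lam[-1] = nu      # terminal condition lam(T) = nu
    for j in range(Nt, 0, -1):
        fn = r * (rho - 2 * n[j] / k) - E[j - 1] * mu / (1 + h * mu * n[j]) ** 2
        lam[j - 1] = lam[j] + dt * (omega + lam[j] * fn)
    return lam

def objective(E):
    n = forward(E)
    running = np.sum(omega * n[:-1] + 0.5 * c * E ** 2) * dt
    return running + nu * n[-1]

E = np.zeros(Nt)
for _ in range(30):                            # damped sweep iterations
    n = forward(E)
    lam = backward(n, E)
    # stationarity of the (illustrative) Hamiltonian, clipped to [0, B]
    E_new = np.clip(lam[:-1] * mu * n[:-1] / (c * (1 + h * mu * n[:-1])), 0.0, B)
    E = 0.5 * E + 0.5 * E_new

assert np.all((E >= 0.0) & (E <= B))           # control stays admissible
assert objective(E) < objective(np.zeros(Nt))  # sweep improves on doing nothing
```

The clipping step is the discrete counterpart of taking the minimum with B in (26).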

5.3 | Uniqueness of the optimal solution
The results in Proposition 5.2 yield a boundary value system, the optimality system, which couples the state equations (1) and (2), the adjoint equations (24) and (25), and the characterization (26), with φ_α(s, z) defined for all s, z ≥ 0. In the next proposition, we show the uniqueness of solutions for the optimality system under a restrictive condition on T, which gives the uniqueness of the optimal control.

Proposition 5.3 There exists a positive threshold T₀, depending on the model parameters r, k, δ, μ, h, B and on |Ω|, such that the optimality system has a unique solution, under the assumption that 0 < T ≤ T₀.
Proof The results proved in Propositions 4.1, 3.3, and 5.2 assure that there exist an optimal control and corresponding state and adjoint variables which satisfy the optimality system; therefore, we only need to prove uniqueness. With this purpose, we assume that two controls E₁ and E₂ solve the same optimality system. We denote by n₁ the state corresponding to E₁ and by λ₁ the corresponding adjoint solution; in the same way, we denote by n₂ and λ₂ the state solution and the adjoint variable related to E₂. We notice that there are two suitable constants C > 0 and C̃ > 0 such that 0 ≤ nᵢ(t, x) ≤ C and 0 ≤ λᵢ(t, x) ≤ C̃, for i = 1, 2. The existence of C is given in Proposition 3.2. Here, we state that C̃ depends on D, r, |Ω|, and T, but is independent of the structure of E. As concerns the dependence of C̃ on T, we claim that C̃ = C̃(T) is continuous, increases with respect to T, and satisfies C̃(0) = C̃₀ > 0. This statement is discussed and proved in Appendix A; here, it is exploited to localize a time horizon where the optimality system solution is unique.
Since Eᵢ = min{φ_α(nᵢ, λᵢ), B} for i = 1, 2, the difference E₁ − E₂ can be controlled by the differences of the states and of the adjoints: we set a suitable Lipschitz constant L, depending on μ, C̃, and h, such that the corresponding bound holds over the whole domain. Young's inequality is then applied and, integrating with respect to time, we obtain estimate (39). By the same argument applied to equation (37), and again using Young's inequality for any τ, we get estimate (40). Estimates (39) and (40) yield a bound of the form (41) on the suprema of the squared differences, in which the involved quantities define a function g(σ) with a unique minimum at a point σ₀. Both C₁ and C₂ increase with C̃, and in Appendix A we prove that the constant C̃ = C̃(T) can be considered as a function of T; as such, C̃(T) is strictly increasing and continuous. Hence, we may define the threshold T₀; we remark that T₀ depends on r, ‖ρ‖_{L∞(Ω)}, C₁, C₂, and C̃, and thus on the optimality system parameters r, k, δ, μ, h, B, and |Ω|. In this respect, under the assumption 0 < T ≤ T₀, we have g(σ₀) ≤ 0, and we choose σ = σ₀ in the definition of both ξ and ζ. Thus, replacing σ by σ₀ in relationship (41), it follows that the two solutions coincide, which completes the proof. □

6 | NUMERICAL SIMULATIONS

6.1 | Case study 1: Eradication of an invasive plant species in a parking area

In the first example, we consider the eradication of an invasive plant species in a parking area. From the images of the real domain (Figure 1), we can observe the advancement of the front of vegetation spreading along edges and corners, which correspond to the parts of the parking area that are not easily reachable by cars. We assume that the owners have a budget constraint to be used to keep the area clean.
The spatial area is simplified as a triangular domain Ω, with measure |Ω| = 1/2. We exploit a mesh of 512 triangles and 289 nodes, built by means of the mesh2d Matlab function and shown in Figure 2 (left). We suppose that the parking area is represented by an ellipse inside the whole of Ω. As concerns the suitability function, we set a lower value ρ(x, y) = 0.7 in the zone occupied by cars, that is, inside the ellipse, while we set a higher value ρ(x, y) = 0.9 otherwise, where the soil is sealed by gravel. The scale for the suitability values is shown in the picture on the right side of Figure 2.
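The piecewise suitability map can be sketched as follows (the ellipse center and semi-axes are made-up placeholders; the actual geometry is only shown in Figure 2):

```python
import numpy as np

# Hypothetical reconstruction of the piecewise suitability map on the
# triangular parking domain: rho = 0.7 inside the elliptical parking strip
# (occupied by cars), 0.9 on the surrounding gravel.
def suitability(x, y, cx=0.4, cy=0.3, ax=0.25, ay=0.12):
    # cx, cy, ax, ay: placeholder ellipse center and semi-axes
    inside = ((x - cx) / ax) ** 2 + ((y - cy) / ay) ** 2 <= 1.0
    return np.where(inside, 0.7, 0.9)

# sample points on the triangle {x >= 0, y >= 0, x + y <= 1}, |Omega| = 1/2
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
mask = xs + ys <= 1.0
rho = suitability(xs[mask], ys[mask])
assert rho.min() == 0.7 and rho.max() == 0.9   # both zones are present
```

Any nodal field of this kind can be interpolated directly onto the 289 mesh nodes used in the simulation.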
In addition, the picture on the left side of Figure 3 shows the initial distribution n₀(x, y), which simulates the real situation in 2013. As a first step, we simulate the dynamics in the case when no control action is considered, that is, when the control E is set to zero everywhere. As expected, we find that the population reaches its equilibrium at the maximum carrying capacity scaled by the suitability function; the long-run result is shown on the right side of Figure 3.

F I G U R E 1 The real domain is shown. The pictures on the top and in the middle date back to years 2013 and 2015, respectively; the picture on the bottom is dated 2017. Images taken from Map data: Google, SPOT Image
As a further step, we suppose that the control action is planned over a time horizon up to T = 1 and we advance with a temporal step size Δt = 1/36. Then, we compare two different situations. First, we consider the case when the parking owners face no penalty term for the budget constraint, which means setting c = 0 in the simulation; next, we simulate the dynamics assuming a penalty term for the budget constraint by choosing c ≠ 0. We set c = 0 and B = 1 in the first simulation, and c = 1 and B = 1 in the second one. In Figure 4, the results of both simulations are shown: the population density and the control effort are evaluated at T = 1 and their distributions are plotted in the left and right columns, respectively. The pictures in the first row correspond to c = 0, while the second row shows the results for c = 1.
Concerning the results in the absence of the penalty term for the budget constraint (first row of Figure 4), it is evident that, at the end of the dynamics, the vegetation reaches its minimum value and eradication is achieved by an optimal effort E*(x, y, 1) that attains the value B = 1 at several points of the spatial domain. As concerns the results of the second example of dynamics, with the given penalty on the budget constraint (second row of Figure 4), a small quantity of vegetation still persists in the lower-left corner of the parking area Ω.
In addition, we evaluate the mean value of variable n x y ( , , 1) at T = 1, which is defined as the averaged value of the function over Ω: , ρ x y ( , ) = 0.9 otherwise. Vegetation parameters are set at the values suggested by the local analysis, that is, r = 1.92, k = 1, ω = 1, μ = 9, ν e = 5 0.1 , h = 1, δ = 0.1, q = 2; moreover, spatial diffusion coefficient is chosen to be D = 0.0052 F I G U R E 3 On the left side: values of initial distribution defined as n x y  Figure 5, where it is evident that vegetation is still gathered at different areas of the spatial domain, at the end of the temporal dynamics.
6.2 | Case study 2: Dynamics of an alien fish population in a hypothetical lake

In the second example, we simulate the diffusion and control of an alien fish population inside a hypothetical lake, following the previous literature (Garvie, Burkardt, & Morgan, 2015). In this case, we also face a very complex geometry representing a realistic domain, where it may be reasonable to suppose very large spatial gradients for ρ(x, y). To perform the simulations of the growth and diffusion of the fish population, we apply the mesh consisting of 692 triangles and 427 nodes used in Garvie et al. (2015) (left side of Figure 6). The discretization in time corresponds to a step size Δt = 1/36. We randomly generate the spatial distribution of the habitat suitability: in particular, we define ρ(x, y) as the function in Figure 6 (middle), generated using the built-in Matlab function rand. A given initial density of population is introduced into the lake (Figure 6, right). We plan a control action over a time horizon of length T = 4, and we set a penalty term weighted by the coefficient c = 0.22 for the budget constraint 0 ≤ E ≤ B = 0.5. We start from a uniform condition for the population distribution, corresponding to the maximum density n₀(x) = 1; then, we evaluate the evolution of the dynamics at t = 1, 2, 3, 4. The results are shown in Figure 7. We find that the fish population tends to a final distribution n(x, y, 4) at T = 4 with mean value 0.0012 over the whole spatial domain. Moreover, Figure 7 shows that the maximum allowed value E = B = 0.5 is reached in the areas where the sensitivity function attains its largest values. We also notice that the optimal control tends to nullify the fish population almost everywhere, except for some small areas where the control attains its maximum allowed value.
TABLE 1 First example: mean value of the density distribution at T = 1, defined as the average over the spatial domain Ω, together with the maximum and averaged values of the optimal control effort E*(x, y, t) in space and time, for each value of the penalty coefficient c.

FIGURE [caption fragment] The population density distribution at T = 1 (on the left side) and the uniform level of control effort (on the right side) are shown.

FIGURE 6 Second example: the hypothetical lake and the mesh already provided in Garvie et al. (2015) (on the left side); scale for the suitability values generated by means of the Matlab built-in function rand (in the middle); initial distribution (on the right side). The parameters for the fish population are set to r = k = 1, μ = 20, h = 1, with diffusivity coefficient D = 0.05; the parameters related to the control dynamics are set as in the previous example, that is, ν = 5, e = 0.4, δ = 0.1, ω = 1, q = 2.

FIGURE 7 Second example: density of the fish population n(x, y, t) (left column) and optimal control E(x, y, t) (right column). The dynamics is evaluated at times t = 1, 2, 3, 4; for each time t, the results are shown on the corresponding row.

7 | CONCLUSIONS

The environmental and economic damage caused by invasive species carries a significant cost, which should be reduced at minimum expense. In protected areas, the problem often has to be faced under tight constraints on the budget allocated to species removal programs. A model-driven approach to the problem allows for the optimization of the available efforts and for scenario analyses under environmental conditions that may vary due to external drivers (climate change, land use change, etc.). In this paper, we formulate a model where the control action is performed through a Holling type II functional response, which better describes the effect of species removal from an ecological point of view, although it makes the analysis of the model more cumbersome.
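The saturating effect of a Holling type II removal term can be illustrated directly. A minimal sketch, assuming the standard form qEn/(1 + qhn) with attack rate q, handling time h, effort E, and density n; the paper's exact removal term may differ in detail:

```python
def holling_ii_removal(n, E, q=2.0, h=1.0):
    """Holling type II removal rate: linear in density n at low n,
    saturating toward E/h as n grows (handling time limits removal)."""
    return q * E * n / (1.0 + q * h * n)
```

The saturation toward E/h is what distinguishes this from a simple mass-action removal term En: even unlimited density cannot be removed faster than the handling time allows.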
Moreover, we assume that the budget is limited and translate the problem into a constrained optimal control problem. We carry out the theoretical analysis of the weak formulation of the state equation and prove the well-posedness of the problem, as well as the existence and uniqueness of the optimal solution. The results are obtained by using the boundedness of the weak solution of the adjoint problem. Depending on the parameter selection, there may exist an optimal effort allocation within the budget constraint. In either case, that is, whether the optimal effort coincides with the maximum allowed budget or lies below that amount, the existence of a unique solution requires that the removal program length exceed a given threshold. A suitable numerical discretization has then been applied to two case studies: the eradication of an invasive plant species in a parking area and the removal of an alien fish species in a lake. We show that the optimal distribution of the effort in space and time ensures the local extinction of the invasive species, whereas an equal overall amount of budget, if uniformly allocated, leads to the persistence of the alien species. As a further development, we are currently working on the calibration of the model parameters for a concrete control problem in a protected area, where biodiversity conservation is threatened by an invasive plant species. In-field data and remote sensing imagery will be integrated and assimilated into the model to provide both qualitative and quantitative suggestions for an optimal control strategy. Moreover, for invasive plant species we will enrich the model with an additional convective term, to account for the effect of wind on seed dispersal. Finally, taking into account the age structure of some invasive populations may further enrich the model.
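In numerical schemes for such box-constrained problems, the pointwise constraint 0 ≤ E ≤ B is commonly enforced by projecting each candidate control update onto the admissible interval. A minimal sketch of that projection step, as one would use inside a forward-backward sweep; the paper's actual update formula is not reproduced here:

```python
import numpy as np

def project_control(E_candidate, B=0.5):
    """Project a candidate effort field onto the admissible set [0, B],
    clipping values below 0 up to 0 and values above B down to B."""
    return np.clip(E_candidate, 0.0, B)
```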
The age-structured fish model presented in Marinoschi and Martiradonna (2016) might be the proper choice, for example, to treat the case of the brook trout in some alpine lakes of Gran Paradiso National Park (Magnea, Sciascia, Paparella, Tiberti, & Provenzale, 2013).

ACKNOWLEDGMENTS
This study has been carried out within the H2020 project "ECOPOTENTIAL: Improving Future Ecosystem Benefits Through Earth Observations," coordinated by CNR-IGG (http://www.ecopotential-project.eu). The project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 641762. Christopher M. Baker is the recipient of a John Stocker Fellowship from the Science and Industry Endowment Fund.