Exact solution approaches for the discrete α‐neighbor p‐center problem

The discrete α-neighbor p-center problem (d-α-pCP) is an emerging variant of the classical p-center problem which has recently received attention in the literature. In this problem, we are given a discrete set of points and we need to locate p facilities on these points in such a way that the maximum distance between each point where no facility is located and its α-closest facility is minimized. The only existing algorithms in the literature for solving the d-α-pCP are approximation algorithms and two recently proposed heuristics. In this work, we present two integer programming formulations for the d-α-pCP, together with lifting of inequalities, valid inequalities, inequalities that do not change the optimal objective function value, and variable fixing procedures. We provide theoretical results on the strength of the formulations and convergence results for the lower bounds obtained after applying the lifting procedures or the variable fixing procedures in an iterative fashion. Based on our formulations and theoretical results, we develop branch-and-cut (B&C) algorithms, which are further enhanced with a starting heuristic and a primal heuristic. We evaluate the effectiveness of our B&C algorithms using instances from the literature. Our algorithms are able to solve 116 out of 194 instances from the literature to proven optimality, with a runtime of under a minute for most of them. By doing so, we also provide improved solution values for 116 instances.


Introduction
The α-neighbor p-center problem (α-pCP), proposed by Krumke (1995), is an emerging variant of the classical p-center problem (pCP) (Hakimi, 1965) which has recently received attention in the literature (Chen and Chen, 2013; Callaghan et al., 2019; Sánchez-Oro et al., 2022). In this problem, we are given a set of points and we need to locate p facilities. The goal is to locate the facilities in such a way that the maximum distance between each point and its α-closest facility is minimized. We note that both a continuous and a discrete version of the α-pCP exist. In the continuous version, the facilities can be located anywhere on the plane, while in the discrete version the given points are also the potential facility locations. In the discrete version, points where a facility is located are not considered in the objective function. The α-pCP can be seen as a robust variant of the pCP, where customers do not need to go to their closest facility, but also have α − 1 additional facilities nearby. Thus, the α-pCP can be a useful modeling approach for applications which are traditionally modeled as pCP, such as emergency service locations and relief actions in humanitarian crises (Çalık and Tansel, 2013; Lu and Sheu, 2013; Jia et al., 2007), where robust solutions are highly relevant.
A formal definition of the discrete α-pCP (d-α-pCP) is as follows (Krumke, 1995; Sánchez-Oro et al., 2022; Mousavi, 2023): We are given a set of points N, a positive integer p < |N|, and a positive integer α ≤ p. For each pair of points i, j ∈ N we are given a distance d_ij ≥ 0. A feasible solution consists of a subset P ⊆ N of |P| = p facilities, indicating which facilities are opened. Given a feasible solution P, i.e., a set of open facilities P, the set of demand points is defined as N \ P, i.e., the set of demand points depends on the chosen feasible solution and consists of all points that are not opened. The α-distance d_α(P, i) for a feasible solution P and a given demand point i ∈ N \ P is defined as the α-smallest value among the distances {d_ij : j ∈ P}, so the α-distance d_α(P, i) gives the distance of i to the α-nearest open facility for the feasible solution P.
The objective function value f_α(P) of a feasible solution P is defined as f_α(P) = max_{i∈N\P} d_α(P, i). Using these definitions, the d-α-pCP can be formulated as min_{P⊆N, |P|=p} f_α(P).
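To make these definitions concrete, the following sketch evaluates f_α(P) and solves tiny instances by complete enumeration. The four-point instance in the accompanying check is purely illustrative and not taken from the literature; real instances are far too large for enumeration, which is why integer programming formulations are needed.

```python
from itertools import combinations

def f_alpha(P, N, d, alpha):
    """Objective value f_alpha(P): the maximum, over all demand points
    i in N \\ P, of the distance from i to its alpha-closest open facility."""
    worst = 0
    for i in N:
        if i in P:
            continue  # opened points are not demand points in the d-alpha-pCP
        dists = sorted(d[i][j] for j in P)
        worst = max(worst, dists[alpha - 1])  # alpha-smallest distance to P
    return worst

def solve_by_enumeration(N, d, p, alpha):
    """Solve min over P with |P| = p of f_alpha(P) by brute force."""
    best = min(combinations(N, p), key=lambda P: f_alpha(set(P), N, d, alpha))
    return set(best), f_alpha(set(best), N, d, alpha)
```
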
We note that the discrete pCP (d-pCP, also known as vertex pCP) is obtained by setting α = 1, if we assume that the distances d_ii = 0 for all i ∈ N, i.e., if we assume that each demand point i is covered if the facility i is opened. Moreover, instead of assuming that distances between all pairs of points i, j are given, the problem can also be defined on a (non-complete) graph, with the distances defined as the shortest-path distances on this graph. With respect to this, Kariv and Hakimi (1979) show that the d-pCP is NP-hard in general, but there are some classes of graphs, such as trees, where the problem can be solved in polynomial time (Jeger and Kariv, 1985).
In this work, we present exact solution approaches for solving the d-α-pCP. So far, solution approaches for the α-pCP have focused mostly on the continuous version of the problem. For this version, an iterative exact algorithm based on the connection to a version of the set cover problem is proposed in Chen and Chen (2013). We note that for the classical pCP such set cover-based approaches are well established (for both the continuous and discrete version of the problem), going back to the seminal work of Minieka (1970). Recent set cover-based approaches for the classical pCP include Chen and Chen (2009); Contardo et al. (2019).
In Callaghan et al. (2019) such a set cover-based approach is used for the continuous version of both the pCP and the α-pCP. For the d-α-pCP the only existing solution approaches in the literature are approximation algorithms (Chaudhuri et al., 1998; Khuller et al., 2000; Krumke, 1995) and heuristics (Sánchez-Oro et al., 2022; Mousavi, 2023). More details on these approaches and on the pCP and other related problems are given in Section 1.2.

Contribution and outline
In this work, we present two different integer programming formulations for the d-α-pCP. We also present valid inequalities, (iterative) lifting procedures for some of the inequalities, inequalities that do not change the optimal objective function value, and (iterative) variable fixing procedures. We denote the inequalities that do not change the optimal objective function value as optimality-preserving inequalities. The lifting procedures are based on lower bounds to the problem and can be viewed as an extension of previous results for the d-pCP in Gaar and Sinnl (2022). We show that the lower bounds converge to a certain fractional set cover solution when applying the lifting procedure or the variable fixing procedure in an iterative fashion. We also show that we can obtain the optimal objective function value of the semi-relaxation of our second formulation in polynomial time using iterative variable fixing (in this semi-relaxation, one set of binary variables of our formulation is kept binary and the other set of binary variables is relaxed). This can be seen as an extension of a result obtained by Elloumi et al. (2004) for the d-pCP and a fault-tolerant version of the pCP. Moreover, we provide polyhedral comparisons between the formulations.
Based on these formulations and our theoretical results, we develop branch-and-cut (B&C) algorithms to solve the d-α-pCP. These algorithms also contain a starting heuristic and a primal heuristic. We evaluate the effectiveness of our B&C algorithms using instances also used in Sánchez-Oro et al. (2022) and Mousavi (2023). Our algorithms are able to solve 116 out of 194 instances from the literature to proven optimality. We also provide improved solution values for 116 out of these 194 instances. Note that these instances are not all the same as the instances for which we manage to prove optimality, as for some instances, the heuristics from the literature already found the optimal solution (but of course could not prove optimality, as they are heuristics).
The paper is structured as follows: In the remainder of this section, we discuss previous and related work in more detail. Section 2 presents our first integer programming formulation together with valid inequalities, lifted versions of inequalities, optimality-preserving inequalities and variable fixings. Section 3 contains the same for our second formulation. In Section 4, we provide a polyhedral comparison of the formulations and convergence results for the lower bounds after applying the lifting procedure or the variable fixing procedure in an iterative fashion. In Section 5 we describe the implementation details of our B&C algorithms, including the starting heuristic, the primal heuristic and the separation routines. In Section 6 the computational study is presented. Finally, Section 7 concludes the paper.

Previous and related work
The pCP is a fundamental problem in location science, dating back to 1965 (Hakimi, 1965), which has spawned many variations over the years; see, e.g., the book chapter by Çalık et al. (2019).
The seminal work of Minieka (1970) presented the first exact approach for the pCP and also created a blueprint of a solution algorithm which, over the years, many other algorithms for the pCP and its variants, including the continuous α-pCP, used as a starting point. Minieka (1970) showed that the question whether there exists a feasible solution to the pCP with a given objective function value can be posed as a certain set cover problem. As a consequence, the pCP can be solved by iteratively solving such set cover problems. Over the years, many authors (Garfinkel et al., 1977; Ilhan and Pinar, 2001; Ilhan et al., 2002; Al-Khedhairi and Salhi, 2005; Caruso et al., 2003; Chen and Chen, 2009; Contardo et al., 2019) have expanded on this idea to present algorithms to solve the pCP.
Aside from these set cover-based approaches, there also exist several integer programming formulations for solving the d-pCP to proven optimality. The classical textbook formulation of the problem (see, e.g., Daskin (2013)) uses facility opening variables and assignment variables and is known to have a bad linear programming relaxation (see, e.g., Snyder and Shen (2011)). In Elloumi et al. (2004) an alternative integer programming formulation is presented and the authors show that there are instances where the linear relaxation bounds are provably better than the bounds obtained by the classical textbook formulation. In Ales and Elloumi (2018) a modification of this formulation is presented. Regarding our second formulation, which we present in Section 3, we note that there exists a variant of the d-α-pCP in which every point i must be covered α times, even if there is a facility opened at i. This variant is sometimes called the fault-tolerant pCP (see, e.g., Section 6 of Elloumi et al. (2004)), although this name has also been used for other problems in the literature, including the d-α-pCP. In Section 6 of Elloumi et al. (2004) a formulation for the fault-tolerant pCP extending their formulation for the d-pCP is sketched. Our second formulation can be seen as an adaptation of this formulation, taking into account also the modification proposed in Ales and Elloumi (2018). In Elloumi et al. (2004) it is also shown that a so-called semi-relaxation of their formulation, where one of the two sets of binary variables is relaxed, can be solved in polynomial time. They also briefly discuss such a result for their formulation of the fault-tolerant pCP. We prove a similar result for our second formulation for the d-α-pCP in Section 4.3. In Çalık and Tansel (2013) another formulation for the d-pCP is presented and the authors show that its linear programming relaxation has the same strength as the relaxation of the formulation of Elloumi et al. (2004).
In Gaar and Sinnl (2022) the classical textbook formulation was used as a starting point for a projection-based approach, which projected out the assignment variables to obtain a new integer programming formulation for the d-pCP. Moreover, an iterative lifting scheme for the inequalities in the new formulation was presented. This lifting scheme is based on the lower bound obtained from solving the linear programming relaxation; the lifted inequalities are then included and everything is resolved in an iterative fashion. Gaar and Sinnl (2022) showed that this procedure converges and that the lower bound at convergence is the same as the one of the semi-relaxation considered in Elloumi et al. (2004). Furthermore, Gaar and Sinnl (2022) also showed that the solution at convergence solves a certain fractional set cover problem. Our first formulation for the d-α-pCP, which we present in Section 2, is based on the classical textbook formulation for the d-pCP and is also suitable for the ideas of Gaar and Sinnl (2022) regarding lifting.
For the d-α-pCP the only existing algorithms with computational results are the GRASP proposed by Sánchez-Oro et al. (2022) and the local search by Mousavi (2023). Aside from these heuristics, there are also works on approximation algorithms (Chaudhuri et al., 1998; Khuller et al., 2000; Krumke, 1995) which do not contain computations. The best possible approximation factor of two is obtained by the algorithms presented in Chaudhuri et al. (1998); Khuller et al. (2000) under the condition that the distances fulfill the triangle inequality. We note that in principle set cover-based approaches such as the one of Chen and Chen (2013) also work for the d-α-pCP, but Chen and Chen (2013) focuses on the continuous α-pCP and presents no computations for the d-α-pCP.

Our first formulation
In this section we present our first integer programming formulation for the d-α-pCP. First, we describe the formulation in Section 2.1. Then we derive valid inequalities, valid inequalities that are based on lower bounds, and optimality-preserving inequalities in Section 2.2. Next, we detail conditions which allow fixing some of the variables in the linear relaxation in Section 2.3. Finally, we provide some insight on what happens if we relax one set of binary variables of our formulation in Section 2.4.

Formulation
Our first integer programming formulation of the d-α-pCP can be viewed as an extension of a classical formulation of the d-pCP (see, e.g., Daskin (2000) and Gaar and Sinnl (2022)). We refer to this classical formulation of the d-pCP as (PC1), following the notation of Gaar and Sinnl (2022). The formulation (PC1) as well as any other formulations of the d-pCP which are mentioned in the remainder of this work can be found in Appendix A.
Let the binary variables y_j for all j ∈ N indicate whether a facility is opened at point j. Let the binary variables x_ij for all i, j ∈ N with i ≠ j indicate whether the point i is assigned to the open facility j. Let the continuous variable z measure the distance in the objective function. Then the d-α-pCP can be formulated as

(APC1)  min z  (1a)
s.t.  Σ_{j∈N} y_j = p  (1b)
      Σ_{j∈N\{i}} x_ij = α(1 − y_i)  for all i ∈ N  (1c)
      x_ij ≤ y_j  for all i, j ∈ N with i ≠ j  (1d)
      d_ij x_ij ≤ z  for all i, j ∈ N with i ≠ j  (1e)
      x_ij ∈ {0, 1}  for all i, j ∈ N with i ≠ j  (1f)
      y_j ∈ {0, 1}  for all j ∈ N  (1g)

The constraints (1b) ensure that exactly p facilities are opened. The constraints (1c) make sure that for each point i ∈ N, the point is either used for opening a facility, or it is assigned to α other open facilities. The constraints (1d) ensure that if a point i is assigned to a facility at point j, then the facility at point j is opened. The constraints (1e) ensure that z takes at least the value of the distance from i to j if i is assigned to j. Thus, z will take at least the maximum distance for assigning i to α facilities, since the constraints (1c) ensure the assignment of i to α facilities in case it is not opened. The objective function (1a) minimizes z, i.e., it minimizes the maximum assignment distance. The formulation (APC1) has O(|N|²) variables and O(|N|²) constraints. Note that in the formulation (PC1) for the classical d-pCP, the constraint (1e) is included in an aggregated fashion as Σ_{j∈N} d_ij x_ij ≤ z for all i ∈ N. Furthermore, in the classical d-pCP also open facilities are included in the demand points. Thus, in (PC1) the variables x_ij are required also for i = j, and the right hand-side in (1c) is α and not α(1 − y_i).
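The role of the constraints can be checked on a small instance: the sketch below builds, for a given facility set P, the natural solution of (APC1) in which every demand point is assigned to its α closest open facilities, and verifies the constraints (1b)-(1e) as described above. The instance is illustrative, and reading (1c) with right-hand side α(1 − y_i) as an equality is our interpretation of the text.

```python
def solution_from_P(P, N, d, alpha):
    """Build (x, y, z) for (APC1) from a facility set P: each demand point i
    is assigned to its alpha closest open facilities, and z is the maximum
    assignment distance. x is a dict keyed by (i, j) with i != j."""
    y = {j: int(j in P) for j in N}
    x = {(i, j): 0 for i in N for j in N if i != j}
    z = 0
    for i in N:
        if i in P:
            continue
        for j in sorted(P, key=lambda j: d[i][j])[:alpha]:
            x[i, j] = 1
            z = max(z, d[i][j])
    return x, y, z

def check_apc1(x, y, z, N, d, p, alpha):
    """Verify the constraints (1b)-(1e) of (APC1) as described in the text."""
    assert sum(y.values()) == p                                         # (1b)
    for i in N:
        assert sum(x[i, j] for j in N if j != i) == alpha * (1 - y[i])  # (1c)
        for j in N:
            if i != j:
                assert x[i, j] <= y[j]                                  # (1d)
                assert d[i][j] * x[i, j] <= z                           # (1e)
    return True
```
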

Strengthening inequalities
Due to the fact that (PC1) is typically considered to have bad linear programming bounds for the d-pCP (see, e.g., Snyder and Shen (2011)), it could be expected that (APC1) also has a linear relaxation that provides a poor bound. In fact, we confirmed this in preliminary computations; see also Section 6.2. In Section 4.2 we present some theoretical results on the effect of adding the inequalities described in this section to (APC1).

Valid inequalities
The next theorem presents two sets of valid inequalities for (APC1).

Theorem 1. The inequalities

Σ_{j∈N\{i}} d_ij x_ij ≤ α z  for all i ∈ N  (2a)
x_ij ≤ 1 − y_i  for all i, j ∈ N with i ≠ j  (2b)

are valid inequalities for the formulation (APC1) for the d-α-pCP, i.e., when adding (2a) and (2b) to (APC1), the set of feasible solutions does not change.
Proof. Clearly (2a) holds for any feasible solution of (APC1), as in this case Σ_{j∈N\{i}} d_ij x_ij is either zero (in case i is opened) or the sum of the distances to the closest, second-closest, . . ., α-closest facility of point i, which is at most α times the distance to the α-closest facility, which is measured by z. Furthermore, it is obvious that (2b) holds for any feasible solution of (APC1), as i cannot be assigned to any point j ∈ N \ {i} if i is opened.
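The argument can be checked numerically. The sketch below reads (2a) as Σ_{j∈N\{i}} d_ij x_ij ≤ α z and (2b) as requiring that an opened point has no outgoing assignments; both readings follow the proof. It verifies both on every facility set of a small illustrative instance, using the natural assignment of each demand point to its α closest open facilities.

```python
from itertools import combinations

def check_theorem1(N, d, p, alpha):
    """For every facility set P of size p, build the natural (APC1) solution
    and verify the two inequalities of Theorem 1 as read from the proof:
    (2a) sum_j d_ij * x_ij <= alpha * z, and (2b) an opened point i has no
    outgoing assignments."""
    for P in combinations(N, p):
        P = set(P)
        assign, z = {}, 0
        for i in N:
            assign[i] = [] if i in P else sorted(P, key=lambda j: d[i][j])[:alpha]
            if assign[i]:
                z = max(z, max(d[i][j] for j in assign[i]))
        for i in N:
            assert sum(d[i][j] for j in assign[i]) <= alpha * z  # (2a)
            assert not (i in P and assign[i])                    # (2b)
    return True
```
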

Valid inequalities based on lower bounds
Given a lower bound on the optimal objective function value of the d-α-pCP, the inequalities (1e) and (2a) can be lifted, as we show next. The lifting is based on a similar idea recently proposed in Gaar and Sinnl (2022) for the d-pCP.
Theorem 2. Let LB be a lower bound on the optimal objective function value of the d-α-pCP. Then the inequalities (3a), (3b) and (3c) are valid inequalities for the formulation (APC1) for the d-α-pCP, i.e., when adding (3a), (3b) and (3c) to (APC1), the set of feasible solutions does not change.
Proof. For the inequalities (3a), we note that due to constraints (1c) at most one of y_i and x_ij can take the value one in any feasible solution of (APC1). Thus, the left hand-side can be at most max{LB, d_ij}, which is clearly a valid lower bound for z.
Clearly the inequalities (3b) are just a relaxation of (3a) and therefore also valid. The validity of the inequalities (3c) follows from combining the arguments from the proof of the validity of the inequalities (2a) with the proof for the validity of the inequalities (3a).
Theorem 2 allows us to add new valid inequalities to the linear relaxation of (APC1) as soon as we have a lower bound LB. We present an iterative scheme exploiting this fact in Section 4.2, where we also analyze the convergence behavior of this scheme.

Optimality-preserving inequalities
Next we consider optimality-preserving inequalities. These inequalities may cut off some feasible solutions of (APC1), but they do not change the optimal objective function value. In other words, there exists at least one optimal solution of (APC1) which fulfills all these inequalities.
To present the inequalities, let S_ij := {j′ ∈ N \ {i, j} : d_ij′ < d_ij, or d_ij′ = d_ij and j′ < j}, i.e., S_ij is the set of points j′ such that j′ is closer to i than j, or such that j′ and j are at the same distance to i and j′ has a smaller index than j. Thus, for any point i, the sets S_ij induce an ordering of all points according to their distance to i and their index. We denote this order by σ_i.
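The sets S_ij and the order σ_i are easy to compute; the following sketch does so on an illustrative instance and also checks that sorting by |S_ij| reproduces σ_i.

```python
def S(i, j, N, d):
    """S_ij: points j' (other than i and j) that are strictly closer to i
    than j, or at the same distance from i but with a smaller index than j."""
    return {jp for jp in N if jp not in (i, j)
            and (d[i][jp] < d[i][j] or (d[i][jp] == d[i][j] and jp < j))}

def sigma(i, N, d):
    """The order sigma_i of all points j != i by (distance to i, index)."""
    return sorted((j for j in N if j != i), key=lambda j: (d[i][j], j))
```
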

Theorem 3. The inequalities (4a) and, if UB is the objective function value of a feasible solution of the d-α-pCP, the inequalities (4b) are optimality-preserving inequalities for the formulation (APC1) for the d-α-pCP, i.e., when adding (4a) and (4b) to (APC1), the optimal objective function value does not change.
Proof. Note that the set N \ ∪_{j′∈N_α}(S_ij′ ∪ {i, j′}) that appears in (4a) is the set of facilities j that are further away from i than the furthest facility in N_α according to σ_i. The inequality (4a) ensures that if a certain number β of facilities that are at most as far away from i as the furthest facility in N_α are opened (and thus, i can be assigned to these β facilities), then the point i is assigned at most α − β times to facilities that are further away from i than the furthest facility in N_α. Clearly this is fulfilled for any optimal solution of (APC1) in which every demand point i is assigned to those α opened facilities that are the α closest facilities to i according to σ_i. Thus, adding (4a) to (APC1) does not change the optimal objective function value.
Next consider the inequalities (4b). If y_i is one in an optimal solution of (APC1), the inequality (4b) is clearly satisfied and thus it does not cut off any optimal solution. Now suppose y_i is zero, so location i is not opened. As we know that a feasible solution with objective function value UB exists, it follows that i must be assigned to α facilities at distance at most UB from location i, and these α facilities must be opened. Therefore, also in this case (4b) is fulfilled.
Note that the inequalities (4a) from Theorem 3 force an assignment of any location i to those α opened facilities that are the α closest opened facilities to i according to σ_i. They do so even when other assignments would not change the objective function value. Thus, in a sense, (4a) are symmetry breaking constraints that forbid certain similar solutions.

Variable fixing
Next, we present a variable fixing condition which can be utilized whenever a feasible solution of the d-α-pCP is known. This fixing of variables cuts off feasible solutions, but it does not cut off any optimal solution, i.e., it is optimality-preserving.
Theorem 4. Let UB be the objective function value of a feasible solution of the d-α-pCP. Then, when adding the constraints

x_ij = 0  for all i, j ∈ N with i ≠ j and d_ij > UB  (5a)

to (APC1), no optimal solution is cut off.
Proof. Clearly, in an optimal solution no point i can be assigned to a point j that is further away than UB; thus (5a) is satisfied for any optimal solution.
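As a small illustration (the instance and the bound are ours), the fixing of Theorem 4 reads: with an upper bound UB, every assignment variable x_ij with d_ij > UB can be set to zero.

```python
def fixable_assignment_vars(N, d, UB):
    """Theorem 4: pairs (i, j) whose variable x_ij can be fixed to zero
    because d_ij exceeds the known upper bound UB."""
    return [(i, j) for i in N for j in N if i != j and d[i][j] > UB]
```
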

Relaxing the assignment variables
We now turn our attention to an interesting aspect of (APC1). The classical formulation (PC1) of the d-pCP has the following nice property: When relaxing the x-variables in (PC1), i.e., replacing x_ij ∈ {0, 1} with 0 ≤ x_ij for all i, j ∈ N, the optimal objective function value does not change. Hence, it is not necessary to force the x-variables to be binary in order to obtain the optimal objective function value of (PC1). This is for example exploited by Gaar and Sinnl (2022). Interestingly, this is not the case anymore for the d-α-pCP. To investigate this in detail, let (APC1-Rx) be the formulation (APC1) with relaxed x-variables, i.e., (APC1-Rx) is (APC1) without (1f) and with the constraints 0 ≤ x_ij for all i, j ∈ N with i ≠ j. We first consider an example to get some insight. The example is illustrated in Figure 1, where the values at the points are the values of the y-variables in the optimal solution, and the values near the arcs are the values of the x-variables in the optimal solution. If an arc is not drawn in a solution, the corresponding x-variable takes value zero.
In this example, it is easy to see that one optimal solution (x*, y*, z*) for the formulation (APC1) of the d-α-pCP is given as shown in Figure 1; all other values of x*_ij are equal to 0, and z* = 4. Thus, the optimal objective function value of (APC1) is z* = 4.
It is easy to see that the solution (x′, y′, z′) is not feasible anymore for (APC1-Rx) when the inequalities (2a) are added, as (x′, y′, z′) does not fulfill (2a) for i = 1. However, the solution (x′′, y′′, z′′) with x′′ = x′, y′′ = y′ and z′′ = 3 is feasible. So the optimal objective function value of (APC1-Rx) with (2a) is at most 3, and indeed it is exactly 3. Thus, it is again not equal to the optimal objective function value of (APC1).
Example 5 shows that (APC1-Rx) does not necessarily give the same optimal objective function value as (APC1), but there exist instances where, after adding (4a), the optimal objective function values coincide. The next result shows that this behavior is not a coincidence.

Theorem 6. (APC1-Rx) with (4a) has the same optimal objective function value as (APC1).
Proof. Let (x*, y*, z*) be an optimal solution of (APC1-Rx) with (4a). Because of Theorem 3, the optimal objective function value of (APC1) is at least z*, so it is enough to show that z* is at least the optimal objective function value of (APC1).
To do so, we construct a solution (x•, y•, z•) of (APC1) with objective function value z*. For each i ∈ N with y*_i = 0 and each k ∈ {1, . . ., α}, let j_{i,k} be chosen according to the order σ_i among the points j with x*_ij > 0. Clearly such j_{i,k} exist because of (1c) and y*_i = 0. Let N^i_α = {j_{i,k} : k ∈ {1, . . ., α}}. Due to the fact that x*_ij ≤ 1 for all j ∈ N \ {i}, all j_{i,k} are distinct for different values of k by construction, so |N^i_α| = α. Furthermore, for such j we have x*_ij > 0 by construction and y*_j ≥ x*_ij because of (1d). As a consequence, (4a) applies for this choice. This, together with (1c) and the fact that j_{i,α} is the facility in N^i_α furthest away from i according to σ_i, implies that Σ_{j∈(S_{ij_{i,α}} ∪ {j_{i,α}})\{i}} x*_ij = α. Due to the definition of j_{i,α} this implies that x*_{ij_{i,α}} = 1. Thus d_{ij_{i,α}} ≤ z* because of (1e). Finally, let y• = y*, z• = z*, and let x•_ij = 1 if y*_i = 0 and j ∈ N^i_α, and x•_ij = 0 otherwise. Then x• and y• are binary and y• satisfies (1b). Furthermore, by construction of N^i_α, also (1c) and, in particular because of (6), (1d) are satisfied. Furthermore, (1e) is fulfilled because of (7). As a consequence, (x•, y•, z•) is a feasible solution of (APC1) with objective function value z*.

As a consequence of Theorem 6, relaxing the x-variables in (APC1) without changing the optimal objective function value is possible whenever the inequalities (4a) are added. Example 5 shows that sometimes these inequalities are indeed necessary to preserve the optimal objective function value.
Note that additionally including (2a) and (2b) into (APC1-Rx) with (4a) does not change the optimal objective function value either, as these inequalities are valid for (APC1).

Our second formulation
In this section we detail our second integer programming formulation of the d-α-pCP. First, we present the formulation in Section 3.1. Then we derive a set of valid inequalities in Section 3.2. Finally, we present conditions which allow fixing some of the variables in the linear relaxation in Section 3.3.

Formulation
Our second formulation can be viewed as an extension of the formulation for the d-pCP proposed by Ales and Elloumi (2018), which in turn is a refinement of a formulation of Elloumi et al. (2004) with fewer constraints and the same linear relaxation bound. We denote the formulation of the d-pCP by Elloumi et al. (2004) as (PCE), in the same fashion as Gaar and Sinnl (2022). Moreover, we denote the formulation of the d-pCP by Ales and Elloumi (2018) as (PCA). Both (PCE) and (PCA) can be found in Appendix A.
Let d_1 < d_2 < · · · < d_K be the different distance values occurring among the d_ij for i, j ∈ N, and let D_i := {d_ij : j ∈ N \ {i}} \ {d_1}, so D_i is the set of all distances that are relevant for point i, except for the smallest overall distance.
In this formulation, we have a binary variable u_k for each k = 2, . . ., K. This variable indicates whether the optimal objective function value of the d-α-pCP is greater than or equal to d_k, i.e., u_k is one if and only if the optimal objective function value of the d-α-pCP is at least d_k. Aside from the u-variables, we also have the binary variables y_j for all j ∈ N to indicate whether a facility is opened at point j, similar to the previous formulation. The formulation is denoted as (APC2) and reads as

(APC2)  min d_1 + Σ_{k=2}^{K} (d_k − d_{k−1}) u_k  (8a)
s.t.  Σ_{j∈N} y_j = p  (8b)
      u_{k−1} ≥ u_k  for all k = 3, . . ., K  (8c)
      α u_k + Σ_{j∈N\{i}: d_ij < d_k} y_j ≥ α(1 − y_i)  for all i ∈ N and all k with d_k ∈ D_i  (8d)
      u_k ∈ {0, 1}  for all k = 2, . . ., K  (8e)
      y_j ∈ {0, 1}  for all j ∈ N  (8f)

The constraints (8b) ensure that exactly p facilities are opened. The constraints (8c) make sure that if the variable u_k is one, indicating that the optimal objective function value is at least d_k, then also all variables with smaller index are one. These constraints ensure that the objective function (8a) measures the objective function value correctly: in (8a), the coefficient of u_k is always the distance-increment from d_{k−1} to d_k. Thus, we need that all u_{k′} with k′ ≤ k are set to one in order to get a value of d_k in the objective function. Finally, the constraints (8d) model that for each i ∈ N, the u-variables are set in such a way that u_k is one if i is not opened and the α-nearest open facility to i has distance at least d_k: In case a facility is opened at point i, i.e., y_i is one, the constraints are trivially fulfilled. In case no facility is opened at point i, i.e., y_i is zero, the constraints force u_k to be one, or that at least α facilities closer than distance d_k are opened.

In comparison to the formulation (PCA) for the d-pCP, we have several modifications in (APC2) for the d-α-pCP. First, we have the right hand-side 1 − y_i instead of just 1 and the sum over all j ∈ N \ {i} instead of over all j ∈ N in (8d), as a consequence of the fact that in the d-α-pCP opened facilities do not serve as demand points. Furthermore, we have a coefficient α for u_k and for 1 − y_i in (8d). Finally, we do not include d_K in the set D_i, independent of whether there is a facility j with distance d_ij = d_K or not. This does not influence the correctness of the model: if there is no facility j with d_ij = d_K for some i, then in (8d) for i and k = K the sum Σ_{j∈N\{i}: d_ij < d_K} y_j is equal to p − y_i. This implies that the constraint becomes α u_K ≥ α(1 − y_i) − (p − y_i), which is always satisfied because 1 ≤ α ≤ p holds. Therefore the constraint does not impose a restriction on u_K, and d_K can be omitted when defining the set D_i.
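The interplay between the u-variables and the objective (8a) can be illustrated as follows: if the objective function value equals some d_m, then u_k = 1 exactly for k ≤ m, and the increments telescope to d_m. The sketch below encodes this bookkeeping (the list indexing convention, with d_1 stored at position 0, is ours).

```python
def u_from_value(value, dist_values):
    """u_k = 1 iff the objective value is at least d_k, for k = 2..K,
    where dist_values = [d_1, ..., d_K] with d_1 < d_2 < ... < d_K."""
    return {k: int(value >= dist_values[k - 1])
            for k in range(2, len(dist_values) + 1)}

def apc2_objective(u, dist_values):
    """(8a): d_1 plus the increments d_k - d_{k-1} of the u_k set to one."""
    d = dist_values
    return d[0] + sum((d[k - 1] - d[k - 2]) * u[k] for k in range(2, len(d) + 1))
```

For any value d_m in the list, the telescoping sum returns d_m itself, and the resulting u-vector is nonincreasing in k, which is exactly what (8c) demands.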

Strengthening inequalities
We have the following valid inequalities.
Theorem 7. The inequalities

u_k + y_i ≥ 1  for all i ∈ N and all k with d_k ∈ D_i such that there are fewer than α points j ∈ N \ {i} with d_ij < d_k  (9)

are valid inequalities for the formulation (APC2) for the d-α-pCP, i.e., when adding (9) to (APC2), the set of feasible solutions does not change.
Proof. Consider any feasible solution of (APC2). If a point i does not have α locations at distance smaller than d_k, then either the solution has objective function value at least d_k (so u_k = 1) or i is opened (so y_i = 1).
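Under the reading of (9) suggested by the proof, namely u_k + y_i ≥ 1 whenever fewer than α points j ≠ i satisfy d_ij < d_k, validity can be verified exhaustively on a small illustrative instance:

```python
from itertools import combinations

def check_theorem7(N, d, p, alpha):
    """For every facility set P, derive (u, y) from the objective value and
    verify u_k + y_i >= 1 whenever point i has fewer than alpha points
    j != i at distance smaller than d_k."""
    dv = sorted({d[i][j] for i in N for j in N if i != j})  # d_1 < ... < d_K
    for P in combinations(N, p):
        P = set(P)
        val = max(sorted(d[i][j] for j in P)[alpha - 1]
                  for i in N if i not in P)
        y = {i: int(i in P) for i in N}
        u = {k: int(val >= dv[k - 1]) for k in range(2, len(dv) + 1)}
        for i in N:
            for k in range(2, len(dv) + 1):
                close = sum(1 for j in N if j != i and d[i][j] < dv[k - 1])
                if close < alpha:
                    assert u[k] + y[i] >= 1  # inequality (9)
    return True
```
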
We observe the following for the inequalities of Theorem 7.
Observation 8. For any i ∈ N and any d_k ∈ D_i such that there are fewer than α points j ∈ N \ {i} with d_ij < d_k, the constraints (8d) are dominated by the inequalities (9), because the former are the latter multiplied by α with additional nonnegative terms in the sum on the left hand-side. Thus, it is not necessary to include (8d) for any such i and d_k if (9) is included.

Variable fixing
Next we present some conditions which allow the fixing of variables. In contrast to (APC1), for which we only have a condition based on an upper bound on the optimal objective function value, for (APC2) we also have a condition which can be utilized with any known lower bound on the optimal objective function value of the d-α-pCP.
Theorem 9. Let LB be a lower bound on the optimal objective function value of the d-α-pCP. Then the equalities

u_k = 1  for all k ∈ {2, . . ., K} with d_k ≤ LB  (10)

are valid equalities for the formulation (APC2) for the d-α-pCP, i.e., when adding (10) to (APC2), the set of feasible solutions does not change.
Proof. Consider any optimal solution of (APC2). If LB is a lower bound on the optimal objective function value of the d-α-pCP, then this optimal value is at least d_k for any k such that d_k ≤ LB. Therefore, u_k = 1 in this case.
In Section 4.3 we present an iterative scheme for variable fixing based on the optimal solution of the linear programming relaxation of (APC2), which can be seen as an extension of Theorem 9.

Theorem 10. Let UB be the objective function value of a feasible solution of the d-α-pCP. Then, when adding

u_k = 0  for all k ∈ {2, . . ., K} with d_k > UB

to (APC2), no optimal solution is cut off.
Proof. Consider any optimal solution of (APC2). If UB is an upper bound on the optimal objective function value of the d-α-pCP, then this optimal value is at most UB. As a consequence, this optimal value is not greater than or equal to any d_k > UB, and hence u_k = 0 in this case.
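Theorems 9 and 10 together leave only the u_k with LB < d_k ≤ UB undecided. A small sketch (the bounds and distance values in the check are illustrative):

```python
def fix_u_variables(dist_values, LB, UB):
    """Fix u_k = 1 for d_k <= LB (Theorem 9) and u_k = 0 for d_k > UB
    (Theorem 10); return (fixed to one, fixed to zero, still free),
    with dist_values = [d_1, ..., d_K]."""
    K = len(dist_values)
    ones = [k for k in range(2, K + 1) if dist_values[k - 1] <= LB]
    zeros = [k for k in range(2, K + 1) if dist_values[k - 1] > UB]
    free = [k for k in range(2, K + 1) if k not in ones and k not in zeros]
    return ones, zeros, free
```
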

Polyhedral study
In this section we provide a polyhedral study of our two integer programming formulations for the d-α-pCP. We start by comparing the basic linear relaxations of the two formulations in Section 4.1. Next, we detail how to obtain the best lower bound based on (APC1), which can be computed in polynomial time, in Section 4.2, and also give a combinatorial interpretation of this best bound. In Section 4.3 we do the same for (APC2). Finally, we compare the two best lower bounds in Section 4.4.

Comparison of basic linear relaxations
Whenever several integer programming formulations of a problem are available, it is an interesting question to compare the corresponding linear relaxations. We note that for the d-pCP, Ales and Elloumi (2018) proved that the objective function values of the linear relaxations of (PCE) and (PCA) coincide. Furthermore, Elloumi et al. (2004) showed that the objective function value of the linear relaxation of (PCE) is always at least as good as the one of the linear relaxation of (PC1), and they demonstrated that the dominance might be strict by providing an instance where this is the case. Thus, in the case of the d-pCP, both (PCA) and (PCE) dominate (PC1). Let (APC1-R) be the linear relaxation of (APC1), i.e., (APC1-R) is (APC1) without (1f) and (1g) and with the constraints 0 ≤ x_ij for all i, j ∈ N with i ≠ j and 0 ≤ y_j ≤ 1 for all j ∈ N. Let (APC2-R) be the linear relaxation of (APC2), i.e., (APC2-R) is (APC2) without (8e) and (8f) and with the constraints 0 ≤ u_k ≤ 1 for all k ∈ {2, . . ., K} and 0 ≤ y_j ≤ 1 for all j ∈ N. To study (APC1-R) and (APC2-R), we start by considering the following examples, which are illustrated in Figures 2 and 3.
Furthermore, Example 11 also demonstrates the existence of an instance of the d-α-pCP where including (9) in (APC2-R) yields a strictly better bound than that of (APC2-R) alone. Moreover, Example 12 shows that there exists an instance where adding (2b) to (APC1-R) improves the linear relaxation bound.

Best lower bound based on (APC1)
The aim of this section is to derive the best possible bound for (APC1) when utilizing all inequalities derived in Section 2.2. To do so, we investigate Theorem 2 in more detail. In particular, it allows us to add new valid inequalities to the linear relaxation of (APC1) as soon as we have a lower bound LB. Our hope is that including the new valid inequalities for the lower bound LB will give us a new, even better lower bound, with which we can include new, stronger valid inequalities. This leads to an iterative approach to improve the lower bound on the optimal objective function value of the d-α-pCP, which is analogous to the approach Gaar and Sinnl (2022) developed for the d-pCP. They proved that their approach for the d-pCP converges (i.e., including the valid inequalities for a given lower bound LB does not give a better lower bound, but only LB again) if and only if there is a fractional set cover solution with radius LB that uses at most p sets. In the following, we investigate a similar iterative approach for the d-α-pCP by iteratively adding the inequalities from Theorem 2. Let LB be a lower bound on the optimal objective function value of the d-α-pCP and let L_α(LB) denote the optimal objective function value of (APCLB). It follows from Theorems 1, 2 and 3 that L_α(LB) is again a lower bound on the optimal objective function value of the d-α-pCP. Furthermore, it is easy to see that L_α(LB) ≥ LB holds. We now want to establish a condition for the case that adding inequalities from Theorem 2 for a lower bound LB to (APCLB) does not improve the obtained bound L_α(LB) anymore, i.e., a convergence condition. It turns out that the following holds.
Theorem 13. Let LB be a lower bound on the optimal objective function value of the d-α-pCP. Then L_α(LB) = LB holds if and only if there is a feasible solution for (13) with objective function value at most p.
Proof. We prove each of the two sides of the equivalence in a separate part for the sake of clarity.
Part 1: Assume LB is such that L_α(LB) = LB holds. Let (x*, y*, z*) be an optimal solution of (APCLB) in this case, so L_α(LB) = z* = LB. We will finish this part of the proof by showing that y* is a feasible solution for (13) with objective function value at most p.
It is easy to see that (13d) is satisfied because of (12e), and that the objective function value (13a) of y* is p because of (1b).
In order to show that y* fulfills (13b), we can exploit (3c); splitting the x*_ij according to their distance d_ij and then using (1c) for the first sum, the resulting inequality can be simplified so that its left-hand side is a sum of non-negative terms, because x*_ij ≥ 0 due to (12d) and (d_ij − LB) > 0 holds for each term in the sum. Thus, the only way that this can be satisfied is that x*_ij = 0 for all j ∈ N \ {i} such that d_ij > LB. This, together with (1c) and (1d), implies (14). What is left to show is that y* satisfies (13c). Towards this end, we can use (1d), (14) and (2b), so y* fulfills (13c). Thus, y* is a feasible solution for (13) with objective function value at most p. Part 2: Assume LB is such that there is a feasible solution y• for (13) with objective function value of at most p. We will finish this part of the proof in four steps. In the first step we utilize y• to construct y*, which is feasible for (13) and has an objective function value of p. In the second step we use y* to construct y•,i for each i ∈ N and show that y•,i has a particular property. In the third step we use y•,i to construct x*. In the fourth step we show that (x*, y*, z*) with z* = LB is a feasible solution for (APCLB), which implies that L_α(LB) = LB holds.
We start with the first step by constructing y*. Let p• be the objective function value (13a) of y•, so p• = Σ_{j∈N} y•_j. It follows that p• ≤ p, as y• has objective function value at most p. We now construct y* from y•. Thus, y* is feasible for (13) and has objective function value p.
We proceed with the second step by constructing y•,i. For all i ∈ N we define y•,i in such a way that y•,i_j = min{y*_j, 1 − y*_i} for all j ∈ N, so in particular y•,i is the component-wise minimum of y* and (1 − y*_i), and y•,i ≤ y* holds. We will now show that y•,i fulfills (15). To do so, consider the set of indices j that appear in the sum on the left-hand side of (15). If y•,i_j = y*_j holds for each term in the sum, then (15) is satisfied because y* fulfills (13b); in the remaining case a direct computation shows that (15) is fulfilled as well. As a consequence, y•,i satisfies (15) in all cases, for all i ∈ N. We continue with the third step, i.e., we now construct x*. To do so, we first fix a point i ∈ N. Then let j_i be chosen accordingly; clearly such a j_i exists, and d_{ij_i} ≤ LB because of (15). Then we set the x*_ij for the locations j at least as close to i as j_i based on y•,i, and we set x*_ij = 0 otherwise. Note that this construction implies that x*_ij = 0 for all j such that d_ij > LB. Finally, we are able to do the fourth step, i.e., we show that (x*, y*, z*) with z* = LB is feasible for (APCLB). By construction, x*_ij ≤ y•,i_j for all i, j ∈ N with j ≠ i, so (x*, y*, z*) fulfills (12d), (1d) and (2b). Also Σ_{j∈N\{i}} x*_ij = α(1 − y*_i) by construction, so (1c) holds. Moreover, by construction y* is a feasible solution of (13) with objective function value p, so it fulfills (12e) and (1b). Furthermore z* = LB, so clearly (12f) is satisfied.
The inequality (3a) is fulfilled if d_ij > LB, because then x*_ij = 0 and thus LB·y*_i ≤ LB = z* is satisfied, as we have already shown that (12e) holds. If d_ij ≤ LB, then the inequality is LB·(y*_i + x*_ij) ≤ LB = z*, which is fulfilled because we already know that (2b) is satisfied. Thus, in any case (x*, y*, z*) fulfills (3a).
When comparing Theorem 13 to the corresponding result for the d-pCP, it becomes obvious that (13) is closely related to a fractional set cover problem in which every point has to be covered α times.
We note that the right-hand side of (13b) is α(1 − y_i) instead of α, which would be the direct generalization of the result of Gaar and Sinnl (2022). This is caused by the fact that i does not need to be covered if it is opened in the d-α-pCP, while in the d-pCP each point needs to be covered. Moreover, the inequalities (13c) are completely new. They make sure that a set cover property is fulfilled not only for all points at most LB away, but also for subsets of these points when removing at most α points. Note that (13b) can be interpreted as (13c) for β = 0.
Interestingly, we can pinpoint which of the inequalities of (APCLB) are responsible for the existence of (13c). To do so, let (AFSC)_δ denote the fractional set cover problem for the d-α-pCP for a given δ ∈ R, in which point i can only be covered by the locations j ∈ N \ {i} with d_ij ≤ δ. Note that (AFSC)_LB is a relaxation of (13). Furthermore, let LB be a lower bound on the optimal objective function value of the d-α-pCP, and let L'_α(LB) be the optimal objective function value of (APCLB'), a weakened variant of (APCLB) that still contains (12d), (12e) and (12f).
Note that when in (APCLB) the constraint (3a) is relaxed to (3b) and (2b) is removed, one obtains (APCLB'), so (APCLB') is a relaxation of (APCLB). We are also able to give an interpretation of when the new lower bound L'_α(LB) does not improve the previous lower bound LB, in the following theorem.
Theorem 14. Let LB be a lower bound on the optimal objective function value of the d-α-pCP. Then L'_α(LB) = LB holds if and only if there is a feasible solution for (AFSC)_LB with objective function value at most p.
Proof. The proof of Theorem 14 is a straightforward simplified version of the proof of Theorem 13, where the construction of y•,i in the second part is replaced by using y•,i = y* for all i ∈ N. Thus, we omit the proof for the sake of brevity.
If we combine the knowledge of Theorems 13 and 14, then we can deduce that the inequalities (2b) and (3a) in (APCLB) (instead of the weaker version (3b) in (APCLB')) are responsible for the existence of (13c) in (13). Thus, the inequalities (13c), which are not present in a straightforward generalization of the results of Gaar and Sinnl (2022) for the d-pCP to the d-α-pCP, are caused by the inequalities (2b) and (3a).
Furthermore, with the help of Theorems 13 and 14 it is easy to see that whenever L_α(LB) = LB and L'_α(LB) = LB hold for some lower bound LB, then also L_α(LB') = LB' and L'_α(LB') = LB' hold for any LB' > LB. That is, if the lower bound LB cannot be improved by adding the valid inequalities from Theorem 2, then no larger lower bound can be improved this way either. Thus, it makes sense to define the largest possible lower bounds one can obtain by iteratively adding the valid inequalities from Theorem 2.
Our results imply the following relationship.

Corollary 15. It holds that LB#_α ≥ LB#_α'.
Proof. This is a consequence of Theorems 13 and 14.
Next, we point out that both LB#_α and LB#_α' can be computed efficiently.
Theorem 16. LB#_α and LB#_α' can be computed in polynomial time.
Proof. A trivial lower bound LB on the optimal objective function value of the d-α-pCP is given by d_1, the smallest element of D. For any given lower bound LB, the computation of L_α(LB) requires solving a linear program with a polynomial number of variables and constraints, and thus can be done in polynomial time. Furthermore, there is only a polynomial number of potential values for LB#_α, as clearly LB#_α ∈ D holds, because only for values in D do the variables included in the sums on the left-hand sides of (13b) and (13c) change. Thus, whenever we have obtained some new lower bound LB, we know that also min{d ∈ D : d ≥ LB} is a lower bound, so at most a polynomial number of iterations is needed. Thus, not only for the d-pCP, but also for the d-α-pCP, the iterative improvement of the lower bound leads to an ultimate lower bound LB#_α, which can be computed in polynomial time. Finally, we want to discuss another interesting aspect of LB#_α and LB#_α'. We have seen in Example 5 that adding the optimality-preserving inequalities (4a) to a relaxed version of (APC1) improved the bound obtained from the relaxation. Thus, it is a natural question whether the bounds LB#_α and LB#_α' could be further improved by adding (4a) to (APCLB) and (APCLB'), respectively. It turns out that this is not the case.
Theorem 17. Let LB be a lower bound on the optimal objective function value of the d-α-pCP and let UB be the objective function value of a feasible solution of the d-α-pCP. Let (APCLB•) be (APCLB) with (4a) and (4b), and denote its optimal objective function value by L•_α(LB). Let (APCLB'•) be (APCLB') with (4a) and (4b), and denote its optimal objective function value by L'•_α(LB). Then (a) L•_α(LB) = LB holds if and only if L_α(LB) = LB holds, and (b) L'•_α(LB) = LB holds if and only if L'_α(LB) = LB holds.
Proof. To prove (a), it is enough to show that L•_α(LB) = LB if and only if there is a feasible solution for (13) with objective function value at most p, because of Theorem 13. To do so, we can follow the proof of Theorem 13. In particular, Part 1 can be used without modifications. Also steps one, two and three of Part 2 can be used without changes. Only in step four do we have to additionally show that (x*, y*, z*) fulfills (4a) and (4b). Clearly (x*, y*, z*) satisfies (4b) due to (13b) and the fact that LB ≤ UB.
To show that also (4a) is fulfilled, we fix some i ∈ N and some N_α ⊆ N with |N_α| = α. Let j_α ∈ N_α be the maximum entry of N_α according to σ_i, i.e., such that N_α ⊆ (S_{ij_α} ∪ {j_α}). Then (4a) can be reformulated as (17), so it is enough to show that (17) holds for (x*, y*, z*). If j_i ∈ (S_{ij_α} ∪ {j_α}), i.e., if j_i is before j_α according to the order σ_i and thus j_i is closer to i than j_α or at the same distance, then we can deduce that x*_ij = 0 for all j ∈ N \ (S_{ij_α} ∪ {i, j_α}) by construction, because all of these j are further away from i than j_i. This implies that (17) is fulfilled in this case, as |N_α| = α and y*_j ≤ 1 for all j ∈ N. If j_i ∉ (S_{ij_α} ∪ {j_α}), i.e., if j_i is further away from i than j_α, then x*_ij = y•,i_j = min{y*_j, 1 − y*_i} holds for all j ∈ N_α by construction. Thus, we can define ε_j such that y*_j = x*_ij + ε_j for each j ∈ N_α, because either y*_j = x*_ij and ε_j = 0, or y*_j > x*_ij = 1 − y*_i and ε_j = y*_j − (1 − y*_i). In any case, 0 ≤ ε_j and ε_j ≤ y*_i, as y*_j ≤ 1. This, together with the already shown (1c), implies that (17) holds also in this case. Thus (x*, y*, z*) fulfills (4a), which finishes the proof of (a). The proof of (b) can be done analogously with the help of Theorem 14 and is therefore skipped.
Theorem 17 shows that adding the optimality-preserving inequalities (4a) and (4b) to the iterative lifting does not improve the best obtained lower bounds LB#_α and LB#_α'.

Best lower bound based on (APC2)
Next, we analyze (APC2) for the d-α-pCP in a similar way as Elloumi et al. (2004) and Ales and Elloumi (2018) have done with (PCE) and (PCA) for the d-pCP. To do so, we introduce a semi-relaxation (APC2-Ry) of (APC2), which is defined as (APC2) with relaxed y-variables, i.e., (APC2-Ry) is (APC2) without (8f) and with the constraints 0 ≤ y_j ≤ 1 for all j ∈ N instead. In the same fashion, let (PCE-Ry) be the formulation (PCE) without the constraints y_j ∈ {0, 1} and with the constraints 0 ≤ y_j ≤ 1 for all j ∈ N. In the case of the d-pCP, the semi-relaxation (PCE-Ry) of Elloumi et al. (2004) has several interesting properties, which we now investigate in analogous form for the d-α-pCP.

Computation in polynomial time
First, for the d-pCP the optimal objective function value of the semi-relaxation (PCE-Ry) can be computed in polynomial time, as shown by Elloumi et al. (2004). Our next aim is to present a procedure for the d-α-pCP to also compute the optimal value of the semi-relaxation (APC2-Ry) in polynomial time. To do so, we first need the following result.
Proof. If u* is binary, k* is chosen in such a way that the optimal objective function value of (APC2-R) with (18) is d_{k*}. If u* is not binary, then k* is chosen in such a way that the optimal objective function value of (APC2-R) with (18) is larger than d_{k*−1}. Thus, in any case, the optimal objective function value of (APC2-R) with (18) is larger than d_{k*−1}. Assume that (19) is not a valid equality for (APC2). Then there is a feasible solution (u•, y•) of (APC2) and a k for which (19) is violated. Thus, the objective function value of (u•, y•) for (APC2-R) with (18) is at most d_{k*−1}, and it is at least the optimal objective function value of (APC2-R) with (18), because (APC2-R) with (18) is a relaxation of (APC2). Thus, the optimal objective function value of (APC2-R) with (18) is at most d_{k*−1}, a contradiction. Therefore, the assumption was wrong and (19) is a valid equality for (APC2).
The fact that (19) is a valid equality for (APC2-Ry) can be shown analogously.
As a consequence, by applying Lemma 18 in an iterative fashion, we can solve (APC2-Ry) in polynomial time, as the next result shows.
Theorem 19.An optimal solution of the semi-relaxation (APC2-Ry) can be computed in polynomial time.
Proof. We can compute an optimal solution of (APC2-Ry) as follows. First, we set k' = 1. Then we solve (APC2-R) with (18), which is equivalent to (APC2-R) in the case that k' = 1 holds. Let (y*, u*) be the obtained optimal solution. If u* is binary, it is an optimal solution of (APC2-Ry). Otherwise, we can apply Lemma 18 to obtain k*, update k' = k*, and solve (APC2-R) with (18) again. We repeat this until we obtain a binary u*.
Note that k' increases by at least one in each iteration, and there are O(|N|^2) potential values of k'. Furthermore, in each iteration a linear program with a polynomial number of variables and constraints has to be solved. Thus, this procedure computes an optimal solution of (APC2-Ry) in polynomial time.
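The loop in this proof can be sketched as follows. Here `solve_relaxation` is a placeholder for solving (APC2-R) with (18) for the current k' (and returning the k* of Lemma 18 when u is fractional); both the name and the return convention are our assumptions, not the paper's interface:

```python
def solve_apc2_ry(solve_relaxation, K):
    """Schematic version of the iterative scheme from Theorem 19.

    solve_relaxation(k_prime) -> (u, k_star): u is the list of u-values
    of an optimal solution of (APC2-R) with (18); k_star is the index
    delivered by Lemma 18 (only needed when u is fractional).
    """
    k_prime = 1
    while k_prime <= K:
        u, k_star = solve_relaxation(k_prime)
        if all(v in (0.0, 1.0) for v in u):
            return u              # binary u: optimal for (APC2-Ry)
        k_prime = k_star          # Lemma 18: fix more u-variables to one
    raise RuntimeError("no binary u encountered")
```

Since k' strictly increases, at most K ∈ O(|N|^2) linear programs are solved.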

Combinatorial interpretation
A second interesting property of the semi-relaxation (PCE-Ry) for the d-pCP is that, as Gaar and Sinnl (2022) proved, it is connected to the optimal solution of a set cover problem. In particular, the optimal objective function value of (PCE-Ry) is equal to d* ∈ D if and only if there is a fractional set cover solution with radius d* that uses at most p sets. It turns out that the following analogous result is true for (APC2) for the d-α-pCP.
Theorem 20. Let d* ∈ D. Then the optimal objective function value of (APC2-Ry) is equal to d* if and only if d* is the smallest possible value of δ such that there is a feasible solution for (AFSC)_δ with objective function value at most p.
Proof. As (APC2-Ry) requires the u-variables to be binary and (8c) has to hold, it is clear that the optimal objective function value of (APC2-Ry) is a value from D. Furthermore, it is clear that the smallest possible value of δ such that there is a feasible solution for (AFSC)_δ with objective function value at most p is a value from D, because only for such values does the problem (AFSC)_δ change. Thus, in order to prove the result it is enough to show that for any δ ∈ D there is a feasible solution for (APC2-Ry) with objective function value δ if and only if there is a feasible solution for (AFSC)_δ with objective function value at most p. We will finish the proof by showing each side of this equivalence in a separate part.
Part 1: Let (u*, y*) be a feasible solution of (APC2-Ry) with objective function value δ ∈ D. We will finish this part of the proof by showing that y* is a feasible solution for (AFSC)_δ with objective function value at most p.
Towards this end, let ℓ be such that δ = d_ℓ. Then (8c) together with (8e) imply that u*_k = 1 for all k ∈ {1, . . . , ℓ} and u*_k = 0 for all k ∈ {ℓ+1, . . . , K}. Now we fix some i ∈ N and distinguish two cases. If there is an element in D_i that is larger than d_ℓ, then one can use the corresponding inequality (8d) to conclude that (16b) is satisfied for i in this case.
If there is no element in D_i that is larger than d_ℓ, then d_ij ≤ d_ℓ for all j ∈ N \ {i}, and with (8b) and 1 ≤ α ≤ p this implies that (16b) is satisfied for i also in this case.
As a result, the inequality (16b) is satisfied by y* in any case. Furthermore, y* fulfills (16c) because it satisfies the relaxation of (8f). The objective function value (16a) of y* is equal to p because of (8b). Thus, y* is feasible for (AFSC)_δ with objective function value at most p.
Part 2: Assume δ ∈ D is such that there is a feasible solution y• for (AFSC)_δ with objective function value at most p. We will finish this part of the proof by constructing a feasible solution (u*, y*) for (APC2-Ry) with objective function value δ.
Towards this end, let u*_k = 1 if d_k ≤ δ and let u*_k = 0 otherwise. Furthermore, we construct y* from y• in the same fashion as in Part 2 of the proof of Theorem 13. In particular, let p• be the objective function value (16a) of y•, so p• = Σ_{j∈N} y•_j, and construct y* from y• accordingly. With the same arguments as in the proof of Theorem 13 it follows that y* is feasible for (AFSC)_δ and has objective function value p.
By construction, u* fulfills (8c) and (8e). Furthermore, y* fulfills the relaxation of (8f) because of (16c), and it satisfies (8b) because it has objective function value p for (AFSC)_δ. Next we consider the inequalities (8d) for some i ∈ N. One can check that (8d) holds in any case. As a consequence, (u*, y*) is feasible for (APC2-Ry). By construction, and because δ ∈ D, it follows that the objective function value of (u*, y*) for (APC2-Ry) is δ, which closes this part of the proof.

Comparison of the best lower bounds
Finally, we compare the best lower bounds obtainable with the two formulations. For the d-pCP, Gaar and Sinnl (2022) proved that iteratively using the lower bound information for (PC1) yields a bound which coincides with the bound obtained by the semi-relaxation (PCE-Ry).
It turns out that this may no longer be the case for the d-α-pCP. Towards this end, let LB*_α be the optimal objective function value of (APC2-Ry). Then we can deduce the following result.

Theorem 21. It holds that LB#_α ≥ LB#_α' = LB*_α.
Proof. This is a consequence of Corollary 15 and Theorems 14 and 20.
As a consequence, for the d-α-pCP, when all our valid inequalities are included, the model (APC1) produces bounds at least as good as the semi-relaxation of (APC2), and might produce better bounds.
do not start the solution process with all the inequalities of the formulation, but add some of them on-the-fly when needed using separation procedures.
We first describe the starting heuristic and the primal heuristic, which are used by both B&C algorithms, in Section 5.1. Then we give a description of our B&C based on (APC1) in Section 5.2, and a description of our B&C based on (APC2) in Section 5.3. We evaluate the effects of the different ingredients of the B&C algorithms on the performance in Section 6.2.

Starting heuristic and primal heuristic
Our starting heuristic is a greedy heuristic. We initialize the (partial) solution P by randomly picking a location j ∈ N at which to open a facility. We then grow P by iteratively adding additional locations to P in a greedy fashion until |P| = p. As the greedy criterion for choosing the location to add to P, we take the location j ∈ N \ P which has the largest α-distance to P. We note that if |P| < α this criterion is not well-defined, and thus in this case we use the |P|-distance. We run this heuristic startHeur times before we start the B&C and initialize the B&C with the best solution found.
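A minimal sketch of this greedy criterion (the function name and the option to fix the first facility, added here for reproducibility, are ours, not the paper's):

```python
import random

def greedy_start(dist, p, alpha, first=None, seed=0):
    """Greedy starting heuristic (sketch): grow P until |P| = p by
    always adding the location with the largest alpha-distance to P.

    dist: symmetric distance matrix dist[i][j]; the first facility is
    chosen at random unless `first` is given.
    """
    n = len(dist)
    P = [random.Random(seed).randrange(n) if first is None else first]
    while len(P) < p:
        # alpha-distance of j to P: distance to its alpha-closest open
        # facility; for |P| < alpha we fall back to the |P|-distance.
        k = min(alpha, len(P))
        def crit(j):
            return sorted(dist[j][f] for f in P)[k - 1]
        P.append(max((j for j in range(n) if j not in P), key=crit))
    return P
```

For four points on a line at coordinates 0, 1, 10, 11 with p = 2, α = 2 and the first facility at point 0, the heuristic opens the second facility at the farthest point.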
Our primal heuristic is a greedy heuristic driven by the values y* of the y-variables of the linear relaxation at the nodes of the B&C tree. The heuristic simply sorts the locations j ∈ N in descending order according to y*_j and picks the p largest ones as a solution. The primal heuristic is implemented within the HeuristicCallback of CPLEX, which is the mixed-integer programming solver we are using.
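This rounding step can be sketched in a few lines (names are ours):

```python
def round_y(y_star, p):
    """Primal heuristic (sketch): open facilities at the p locations
    with the largest fractional y-values of the node LP relaxation."""
    ranked = sorted(range(len(y_star)), key=lambda j: y_star[j], reverse=True)
    return sorted(ranked[:p])
```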

Variable fixing
We use the solution value UB of the solution obtained by the starting heuristic to fix x-variables to zero, as described in Theorem 4, at initialization. During the B&C we continue with this variable fixing procedure by adding these fixings in the UserCutCallback of CPLEX whenever an improved primal solution found during the B&C allows additional fixings. This callback gets called by CPLEX whenever the solver encounters a fractional solution during the solution process.

Overall separation scheme
We separate the following inequalities in the branch-and-cut, where the order below gives the order in which we do the separation.

• optimality-based inequalities (4a)
The inequalities listed above are separated within the UserCutCallback. Inequalities (1e) and (1d) from the formulation, which are needed for the correctness of our algorithm, are also separated within the LazyConstraintCallback, which gets called by CPLEX for each integer solution (i.e., each potential new feasible solution). We perform at most maxSepRoot separation rounds at the root node and at most maxSepTree separation rounds at the other nodes of the B&C tree. In the root node, we add at most maxIneqsRoot violated inequalities per separation round, and at the other nodes we add at most maxIneqsTree violated inequalities. The parameter values we used in our computations are given in Section 6. Note that, depending on the selected setting, not all the inequalities above are actually used in the computational study. For more details see Section 6.2.

Details about the separation procedures
All inequalities except (4a) are separated by enumeration. We note that the lifted inequalities (3c) and (3a) depend on the current lower bound LB, and the inequalities (4b) depend on the current upper bound UB. Thus, these inequalities can potentially be added again in a stronger version for fixed i and j, or for a fixed i, when an improved bound becomes available. For this reason, we add them with the CPLEX option purgeable, which allows CPLEX to remove added inequalities if they are deemed no longer useful. Moreover, during the B&C, we can use the local lower bounds from the nodes of the B&C tree as LB for the inequalities (3a) and (3c). Naturally, the inequalities are then only valid for the subtree rooted at this node. CPLEX allows to add such locally valid inequalities with the method addLocal. When separating the inequalities (1d) and (1e), for each point i we add the ones corresponding to the numInitAPC1 nearest locations j at initialization.
The separation routine for the inequalities (4a) is a heuristic. For a given location i ∈ N, our goal is to find a set X ⊆ N \ {i} and a set N_α such that the implied inequality (20) is a relaxation of the valid inequality (4a), and hence (20) is a valid inequality as well. Thus, we want to heuristically find such sets X and N_α which maximize Σ_{j∈N_α} y*_j + Σ_{j∈X} x*_ij, where (x*, y*) is the solution of the linear programming relaxation at the nodes of the B&C tree. Then, if Σ_{j∈N_α} y*_j + Σ_{j∈X} x*_ij > α, we have obtained a violated inequality (20) and thus also a violated inequality (4a). The heuristic proceeds as follows: Let N_i be the locations j ∈ N \ {i} sorted in descending order according to d_ij. We initialize X with the first entry of N_i. Based on X, the potential candidates for N_α are all j ∈ N \ (X ∪ {i}) with d_ij < min_{j'∈X} d_ij'. To obtain N_α, we sort all candidates j according to their y*_j-value in descending order and take the α largest ones. If the inequality (20) implied by X and N_α is violated, we stop and add (20) for this N_α and X; if not, we continue by adding the next entry of N_i to X and repeat the procedure.
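The procedure can be sketched as follows (a simplified stand-alone version; the data layout and names are ours, and the inequality is checked in the aggregated form Σ_{j∈N_α} y*_j + Σ_{j∈X} x*_ij > α stated above):

```python
def separate_4a(i, d, x_star, y_star, alpha):
    """Heuristic separation of (20) for location i (sketch).

    Returns (X, N_alpha) defining a violated inequality, or None.
    d: distance matrix; x_star, y_star: node LP solution.
    """
    n = len(d)
    # locations sorted by descending distance from i
    N_i = sorted((j for j in range(n) if j != i),
                 key=lambda j: d[i][j], reverse=True)
    X = []
    for nxt in N_i:
        X.append(nxt)
        cutoff = min(d[i][j] for j in X)
        cand = [j for j in range(n)
                if j != i and j not in X and d[i][j] < cutoff]
        # the alpha candidates with the largest y*-values
        N_alpha = sorted(cand, key=lambda j: y_star[j], reverse=True)[:alpha]
        lhs = sum(y_star[j] for j in N_alpha) + sum(x_star[i][j] for j in X)
        if len(N_alpha) == alpha and lhs > alpha:
            return X, N_alpha
    return None
```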

Branching priorities
CPLEX allows to set branching priorities on the variables, which it then takes into account during the B&C. We set the priorities of the y-variables to 100, while the priorities of the x-variables are left at the default value of zero, in order to force CPLEX to branch on the y-variables first. This is done because fixing y-variables is likely to have more structural impact on the linear programming relaxations than fixing x-variables.

Variable fixings
Similar to our approach for (APC1), we use the solution value UB of the solution obtained by the starting heuristic for variable fixing, i.e., we fix u-variables to zero as described in Theorem 10 at initialization. Moreover, we also continue these fixings in the UserCutCallback whenever an improved incumbent is found.
Furthermore, we also fix u-variables to one in the UserCutCallback, using the available (local) lower bound LB at the current branch-and-cut node and the theory provided in Lemma 18. Note that Lemma 18 allows us to fix one fractional u-variable in each separation round. Thus, to speed up the fixing, we first check if there are k such that u_k is fractional and d_k ≤ LB, i.e., we check if there are u-variables that we can fix according to Theorem 9. If yes, among all the u-variables fulfilling these conditions, we fix the one corresponding to the largest distance. By constraints (8c), this also sets all variables corresponding to smaller distances to one. If there is no variable fulfilling this condition, then we use Lemma 18 for fixing. As we use the local lower bound for fixing, we add the fixing with the method addLocal.
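This selection rule can be sketched as follows (names are ours; the condition of Theorem 9 is taken to be "u_k fractional and d_k ≤ LB" as described above):

```python
def pick_u_to_fix_to_one(u_star, distances, LB):
    """Among the fractional u_k with d_k <= LB, return the (1-based)
    index with the largest distance, or None if there is none (in which
    case Lemma 18 would be used instead). Fixing this u_k to one also
    forces all smaller-distance u-variables to one via (8c).

    distances is assumed sorted ascending, so the largest candidate
    index corresponds to the largest distance.
    """
    cands = [k for k, (u, d) in enumerate(zip(u_star, distances), start=1)
             if 0.0 < u < 1.0 and d <= LB]
    return max(cands) if cands else None
```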

Details about the separation scheme
We have implemented a separation routine for the inequalities (8d). This allows us to dynamically add them when needed, instead of adding all of them at initialization. This is an attractive option due to the structure of the formulation (in particular constraints (8d)) in combination with Lemma 18: as this lemma provides results for fixing u-variables to one, we may not need all inequalities (8d) to correctly measure the objective function value.
Our separation routine is based on enumeration. However, we add at most one violated inequality (8d) per location i ∈ N in each round of separation. In order to determine which inequality to add if there is more than one violated inequality (8d) for a location i, we compute a violation measure violation(u*, y*, i, k), where (u*, y*) is the solution of the linear programming relaxation at the nodes of the B&C tree. All inequalities with violation(u*, y*, i, k) < 0 are violated. We then calculate the score s = −violation(u*, y*, i, k) · d_ik. With the score, we try to find a k which gives a good balance between violation and effect on the objective function value. When we apply the separation approach, we initialize our B&C with all the inequalities (8d) corresponding to the numInitAPC2 smallest distances of the instance. Since the inequalities (8d) are needed for the correctness of the formulation, we call the separation routine both in the UserCutCallback and in the LazyConstraintCallback.
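The score-based selection can be sketched as follows; the paper defines the score but not the final selection rule, so picking the k with the largest score is our assumption, and the `violations` input stands for the precomputed values violation(u*, y*, i, k):

```python
def select_cut_for_location(violations, d_i):
    """Pick the index k of the single inequality (8d) to add for one
    location i (sketch). violations[k] < 0 means inequality k is
    violated; d_i[k] is the distance d_ik entering the score
    s = -violation * d_ik."""
    scored = [((-v) * w, k) for k, (v, w) in enumerate(zip(violations, d_i))
              if v < 0]
    return max(scored)[1] if scored else None
```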
Regarding the number of separation rounds and the number of added violated inequalities, we use the same strategy as described in Section 5.2.

Branching priorities
Similar to the B&C for (APC1), we set the branching priorities of the y-variables to 100, while the priorities of the u-variables are left at the default value of zero.

Computational results
We implemented our B&C algorithms in C++ using CPLEX 20.1. The runs were made on a single core of an Intel Xeon E5-2670v2 machine with 2.5 GHz and 6 GB of RAM, and all CPLEX settings were left at their default values, except the branching priorities, which we set as described in Section 5. We set a time limit of 1800 seconds.

Instances
We considered two sets of instances from the literature in our computational study.The details of these sets are given below.
• TSPLIB: This instance set is based on the TSP library (Reinelt, 1991) and was used in Sánchez-Oro et al. (2022) with α = 2, 3. In particular, the instances att48, eil101, ch150, pr439, rat575, rat783, pr1002 and rl1323 were used with p ∈ {10, 20, . . . , 140}. The number in the instance name gives the number of locations |N|. In these instances all locations are given as two-dimensional coordinates, and the Euclidean distance is used as the distance function. The instance set contains 154 instances.
We note that Sánchez-Oro et al. (2022) did not use all values of p for all instances. In our computational study we considered the same combinations of instances and p as Sánchez-Oro et al. (2022). For the values of p used for each instance see, e.g., Tables 1 and 2.
• pmedian: This instance set is based on the OR-library (Beasley, 1990). It was used in Mousavi (2023) with α = 2. Each instance is given as a graph, and to obtain the distances between all the locations N (the nodes of the graph), an all-pairs shortest-path computation needs to be done. In these instances, all distances are integer. The number of locations |N| is between 100 and 900, and p is between 5 and 200. Each of these instances has a value of p encoded in the instance. For the concrete values of |N| and p for each instance see Table 5. The instance set contains 40 instances.
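The required all-pairs shortest-path step can be done, for example, with the Floyd-Warshall algorithm; a self-contained sketch (names and the undirected edge-list input format are ours):

```python
def all_pairs_shortest_paths(n, edges):
    """Floyd-Warshall on an undirected weighted graph given as an edge
    list [(i, j, w), ...]; returns the n x n distance matrix needed to
    turn a pmedian graph instance into a d-alpha-pCP distance matrix."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in edges:
        d[i][j] = min(d[i][j], w)
        d[j][i] = min(d[j][i], w)
    for k in range(n):            # standard O(n^3) relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

On a path graph 0-1-2 with edge weights 2 and 3, the resulting distance between nodes 0 and 2 is 5.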

Analysis of the ingredients of our branch-and-cut algorithms
To analyze the effect of the ingredients of our B&C algorithms, we performed a computational study on a subset of the instances, namely the instances att48, eil101 and ch150. We compare the following settings for the B&C based on (APC1):

• 1: directly solving (APC1) without any additional ingredients
• 1H: setting 1 with the starting heuristic, the primal heuristic and the variable fixing based on the upper bound according to Theorem 4
• 1HS: setting 1H with separation of the inequalities (1d) and the inequalities (1e) (instead of adding them at initialization) as described in Section 5.2
• 1HSV: setting 1HS together with the valid inequalities (2a) and (2b)
• 1HSVL: setting 1HSV together with the lifted version (3a) of the inequalities (1e) and also the lifted version (3c) of the inequalities (2a)
• 1HSVLO: setting 1HSVL together with the optimality-preserving inequalities (4a) and (4b)

For the B&C based on (APC2) the following settings are considered:

• 2: directly solving (APC2) without any additional ingredients
• 2H: setting 2 with the starting heuristic, the primal heuristic and the variable fixing based on the upper bound according to Theorem 10
• 2HV: setting 2H with the valid inequalities (9) replacing the corresponding inequalities (8d) according to Observation 8
• 2HVS: setting 2HV with separation of the inequalities (8c) (instead of adding them at initialization) as described in Section 5.3
• 2HVSL: setting 2HVS with the variable fixing based on the lower bound according to Theorem 9 and Lemma 18 as described in Section 5.3

The following parameter values, determined in preliminary computations, were used for the B&C algorithms: startHeur: 10, maxIneqsRoot: 50, maxIneqsTree: 20, maxSepRoot: 100, maxSepTree: 1, numInitAPC1: 10, and numInitAPC2: 100 for the instance set TSPLIB and 10 for the instance set pmedian. We used a different value of numInitAPC2 depending on the instance set, as the distance structure of the instances is quite different. In particular, for TSPLIB the distances are essentially unique (as they are Euclidean distances), while for pmedian many distances coincide (as they are shortest-path distances on a graph). Thus, for pmedian a parameter value of 100 would often add all inequalities (8d) at initialization, as there are usually fewer than 100 different distances in an instance.
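For reference, the parameter values above can be collected in a small configuration mapping. This is purely illustrative: the parameter names mirror the text, while BC_PARAMS and num_init_apc2 are hypothetical helpers of ours (including the nested handling of the instance-set-dependent numInitAPC2).

```python
# Parameter values for the B&C algorithms as reported in the text.
# numInitAPC2 differs between the TSPLIB and pmedian instance sets.
BC_PARAMS = {
    "startHeur": 10,
    "maxIneqsRoot": 50,
    "maxIneqsTree": 20,
    "maxSepRoot": 100,
    "maxSepTree": 1,
    "numInitAPC1": 10,
    "numInitAPC2": {"TSPLIB": 100, "pmedian": 10},
}

def num_init_apc2(instance_set: str) -> int:
    """Return the instance-set-dependent value of numInitAPC2."""
    return BC_PARAMS["numInitAPC2"][instance_set]
```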
In Figures 4a and 4b we show the runtimes. We see that for both formulations the largest positive effect is achieved by adding the heuristics with the associated variable fixing based on the upper bound. This can be explained by the fact that the variable fixing makes the linear programs that need to be solved much smaller. Moreover, the lifting procedures for (APC1) and the variable fixing based on the lower bound for (APC2) also have a discernible (incremental) effect. This is in line with both the computational results in Gaar and Sinnl (2022) for a similar lifting procedure for the d-pCP and the theoretical results provided in Section 4.
Separating the inequalities that are needed in the formulations (i.e., settings 1HS and 2HVS), instead of adding all of them at initialization, has a rather neutral effect on the selected instances. This can be explained by the fact that these instances are quite small; for the larger instances in our sets, preliminary computations showed that without separation we cannot even solve the root relaxation (for both (APC1) and (APC2)), due to either running into the time limit or exceeding the available memory.
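The separation scheme discussed above (adding inequalities only once they are violated, capped by the maxIneqs/maxSep parameters) can be sketched generically. This is a minimal illustration of the mechanism, not the paper's implementation: inequalities of the form a·x ≥ b are stored abstractly as (coefficient-map, right-hand-side) pairs, and the hypothetical helper find_violated returns the most violated ones up to a cap.

```python
from typing import Dict, List, Tuple

# An inequality a.x >= b is stored as (coeffs, rhs), where coeffs maps
# variable names to coefficients. This sketches the generic "separate
# instead of adding everything up front" idea, not the concrete
# inequalities (1d)/(1e)/(8c) of the paper.
Ineq = Tuple[Dict[str, float], float]

def violation(ineq: Ineq, x: Dict[str, float]) -> float:
    """Amount by which the fractional point x violates a.x >= b (positive = violated)."""
    coeffs, rhs = ineq
    return rhs - sum(c * x.get(v, 0.0) for v, c in coeffs.items())

def find_violated(pool: List[Ineq], x: Dict[str, float],
                  max_add: int, tol: float = 1e-6) -> List[Ineq]:
    """Return at most max_add inequalities violated by x, most violated first."""
    scored = [(violation(q, x), q) for q in pool]
    scored = [(v, q) for v, q in scored if v > tol]
    scored.sort(key=lambda t: -t[0])
    return [q for _, q in scored[:max_add]]
```

In a B&C loop, such a routine would be called in each separation round with the current LP solution, with max_add set to maxIneqsRoot in the root node and maxIneqsTree elsewhere.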
The valid inequalities (9) also have no visible effect. A potential explanation is that modern mixed-integer programming solvers like CPLEX are quite effective in strengthening given inequalities and may already transform (8c) into (9) automatically whenever possible. Finally, adding the optimality-preserving inequalities for (APC1) has a negative effect. This is consistent with Theorem 17, which shows that at convergence the inequalities (4a) and (4b) do not further improve the bound.

Comparison with approaches from the literature
In this section we provide a detailed comparison with the existing approaches from the literature, namely the GRASP of Sánchez-Oro et al. (2022) and the local search of Mousavi (2023), on the instances used in the respective works. We compare these existing approaches with the best settings of both of our B&C algorithms, i.e., 1HSVL for the one based on (APC1) and 2HVSL for the one based on (APC2).
In Tables 1-4 we give the comparison with Sánchez-Oro et al. (2022). For our approaches we report the runtime (columns t[s], where the entry TL indicates that the time limit of 1800 seconds was reached), the obtained upper bound (i.e., the objective function value of the best obtained solution, columns UB), the lower bound (columns LB) and the number of nodes in the B&C tree (columns nBC). Since the approach of Sánchez-Oro et al. (2022) is a heuristic, only upper bounds and runtimes can be reported for it. We note that the runs in Sánchez-Oro et al. (2022) were made on an AMD Ryzen 5 3600 with 2.2 GHz and 16 GB RAM. The best values for UB, LB and runtime are indicated in bold in the tables. For the runtime, we consider only our branch-and-cut approaches, while for the UB we consider all three approaches when determining these best values.
The tables show that for 114 out of 154 instances our approaches improve on the best solution value obtained in Sánchez-Oro et al. (2022), and for an additional 7 instances we match the best solution value. Our approaches manage to solve 76 instances to proven optimality. For some of the instances, our approaches are more than two orders of magnitude faster than the GRASP (e.g., instance pr439 with p = 30 and α = 2). Comparing 1HSVL with 2HVSL, we see that 2HVSL performs better overall, in particular on larger instances. This can be explained by the fact that, due to the structure of the formulations, the variable fixing procedures can fix many more variables for (APC2) than for (APC1). We can also see that the problem is harder for α = 3 than for α = 2. In Table 5 we provide a comparison with the local search of Mousavi (2023). The runs in Mousavi (2023) were made on an Intel Core i5-6200 with a 2.3 GHz CPU and 8 GB of RAM. We note that Mousavi (2023) presents runtimes for different versions of their heuristics; in the table we show the fastest runtime and the best objective function value found by them. The results show that 2HVSL solves all these instances to optimality in under one minute, while for two of the instances the heuristics of Mousavi (2023) do not manage to find the optimal solution. As for the instance set TSPLIB, the setting 1HSVL performs worse than 2HVSL.

Conclusions
In this work, we present two integer programming formulations for the discrete version of the α-neighbor p-center problem (d-α-pCP), an emerging variant of the classical discrete p-center problem (d-pCP) which recently received attention in the literature. We also present lifting procedures for inequalities in the formulations, valid inequalities, optimality-preserving inequalities and variable fixing procedures. We provide theoretical results on the strength of the formulations and convergence results for the lower bounds obtained after applying the lifting procedures or the variable fixing procedures in an iterative fashion. These results extend results obtained by Elloumi et al. (2004) and Gaar and Sinnl (2022) for the d-pCP. Based on these results we provide two branch-and-cut algorithms, one based on each of the two formulations.
We assess the efficacy of our branch-and-cut algorithms in a computational study on instances from the literature. The results show that our exact algorithms outperform the existing algorithms for the d-α-pCP, which are heuristics, namely a GRASP by Sánchez-Oro et al. (2022) and a local search by Mousavi (2023). Our algorithms manage to solve 116 of 194 instances from the literature to proven optimality within a time limit of 1800 seconds; in fact, many of them are solved to optimality within 60 seconds. They also provide improved best solution values for 116 instances from the literature. Note that these 116 instances are not the same instances as those where optimality is proven, as for some of the latter instances the existing heuristics already manage to find the optimal solution (but of course cannot prove optimality, as they are heuristics).
There are various directions for further work. One direction is to derive further valid inequalities. In particular, it would be interesting to investigate whether there are inequalities which ensure that the best possible bounds of both formulations coincide, i.e., whether the second formulation can be further strengthened, as our current results show that the best bound of the first formulation can be better for some instances. Another interesting avenue is the development of a projection-based approach similar to the one of Gaar and Sinnl (2022) for the d-pCP, in which a smaller number of variables suffices to model the problem and which is therefore better suited for large-scale instances.
Furthermore, extending the approaches, including the lifting schemes, to other variants of the d-pCP such as robust versions (see, e.g., Lu and Sheu (2013)), capacitated versions (see, e.g., Scaparra et al. (2004)) or the p-next center problem (see, e.g., López-Sánchez et al. (2019)) could be fruitful. Moreover, while we managed to improve many of the best known solution values for the instances from the literature, there are also some instances where the existing heuristics work better. Thus, further development of heuristics can also be interesting, including matheuristics such as local branching (see, e.g., Fischetti and Lodi (2003)), which could exploit our formulations.

A Formulations for the d-pCP from the literature
A formal definition of the d-pCP is as follows. Given an integer p, a set of customer demand points I with cardinality |I| = n, a set of potential facility locations J of cardinality |J| = m ≥ p, and a distance d_ij from customer demand point i to potential facility location j for every i ∈ I and j ∈ J, find a subset S ⊆ J of cardinality |S| = p of facilities to open such that the maximum distance between a customer demand point and its closest open facility is minimized, i.e., such that max_{i∈I} min_{j∈S} d_ij is minimized. We note that in the d-α-pCP we have N = I = J by definition of the problem. This is necessary because the set of demand points (i.e., customers) in the d-α-pCP depends on the given feasible solution and consists of all points where no facility is opened in that solution. Due to this difference, slightly modified definitions of D and D_i are needed for the d-pCP below compared to the ones for the d-α-pCP above.
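To make the two objectives concrete, here is a small brute-force sketch (our own illustration; the helper names d_pcp_obj, d_alpha_pcp_obj and solve_brute_force are hypothetical). For the d-pCP the objective of an open set S is max_{i∈I} min_{j∈S} d_ij; for the d-α-pCP every point not in S is a customer and is served by its α-closest open facility.

```python
from itertools import combinations

def d_pcp_obj(dist, I, S):
    """d-pCP objective: max over customers of the distance to the closest open facility."""
    return max(min(dist[i][j] for j in S) for i in I)

def d_alpha_pcp_obj(dist, N, S, alpha):
    """d-alpha-pCP objective: points in S are not customers; every remaining
    point is served by its alpha-closest open facility."""
    return max(sorted(dist[i][j] for j in S)[alpha - 1]
               for i in N if i not in S)

def solve_brute_force(dist, N, p, alpha):
    """Enumerate all p-subsets of N (feasible for tiny instances only)."""
    best = min(combinations(N, p),
               key=lambda S: d_alpha_pcp_obj(dist, N, set(S), alpha))
    return set(best), d_alpha_pcp_obj(dist, N, set(best), alpha)
```

Such enumeration is of course only viable for very small instances; it is useful for sanity-checking formulations and heuristics.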
Let the binary variables y_j for all j ∈ J indicate whether a facility is opened at location j, let the binary variables x_ij for all i ∈ I, j ∈ J indicate whether customer i ∈ I is assigned to the open facility j, and let the continuous variable z measure the distance in the objective function. The classical textbook formulation of the d-pCP (see for example Daskin (2013)) is as follows.

  min z  (21a)
  s.t.  Σ_{j∈J} x_ij = 1  ∀i ∈ I  (21b)
        Σ_{j∈J} y_j = p  (21c)
        x_ij ≤ y_j  ∀i ∈ I, ∀j ∈ J  (21d)
        z ≥ Σ_{j∈J} d_ij x_ij  ∀i ∈ I  (21e)
        x_ij ∈ {0, 1}  ∀i ∈ I, ∀j ∈ J  (21f)
        y_j ∈ {0, 1}  ∀j ∈ J  (21g)

In Elloumi et al. (2004) another formulation was introduced: let D = {d_ij : i ∈ I, j ∈ J} denote the set of all possible distances and let d_1 < d_2 < · · · < d_K be the values contained in D, so D = {d_1, . . ., d_K}. Furthermore, there is a binary variable for each value in D that indicates whether the optimal value of the d-pCP is less than or equal to this value. Towards this end, let u_k = 0 if all customers have an open facility at distance at most d_{k−1} and u_k = 1 otherwise, for all k ∈ {2, . . ., K}. Then the formulation reads as follows.

  min d_1 + Σ_{k=2}^{K} (d_k − d_{k−1}) u_k
  s.t.  Σ_{j∈J} y_j = p
        u_k + Σ_{j∈J : d_ij ≤ d_{k−1}} y_j ≥ 1  ∀i ∈ I, ∀k ∈ {2, . . ., K}
        y_j ∈ {0, 1}  ∀j ∈ J
        u_k ∈ {0, 1}  ∀k ∈ {2, . . ., K}

Figure 1: Illustration of Example 5, in which p = 3 and α = 2. The value in the nodes in Figure 1a is the index of the node, and the values near the arcs are the distances. The values in the nodes in Figures 1b and 1c are the values of the y-variables in the optimal solution, and the values near the arcs are the values of the x-variables in the optimal solution. If an arc is not drawn in a solution, the corresponding x-variable takes value zero.
denote the set of all possible distances and let d_1, . . ., d_K be the values in D, i.e., D = {d_1, . . ., d_K}. It is easy to see that the optimal objective function value of the d-α-pCP is in D and that there are at most (|N| − 1)|N| potential optimal values. Furthermore, let D_i =

Figure 2: Illustration of Example 11, in which p = 2 and α = 2. The value in the nodes in Figure 2a is the index of the node, and the values near the arcs are the distances. The values in the nodes in Figures 2b, 2c and 2d are the values of the y-variables in the optimal solution, and the values near the arcs in Figure 2d are the values of the x-variables in the optimal solution.

Figure 3: Illustration of Example 12, in which p = 2 and α = 2. The value in the nodes in Figure 3a is the index of the node, and the values near the arcs are the distances. The values in the nodes in Figures 3b, 3c and 3d are the values of the y-variables in the optimal solution, and the values near the arcs in Figures 3b and 3c are the values of the x-variables in the optimal solution.
Therefore, it is possible to compute LB^#_α in polynomial time. By the same arguments, LB^#_{α′} ∈ D also holds and LB^#_{α′} can be computed in polynomial time.

Figure 4: Runtimes for different settings of our B&C algorithms on a subset of the instances.
D_i = {d_ij : j ∈ J} \ {d_1} for i ∈ I. The modified variant of (PCE) proposed by Ales and Elloumi
Then L_α(LB) = LB holds if and only if there is a feasible solution for {j ∈ N \ {i} : d_ij ≤ LB}

Table 1: Detailed results for instance set TSPLIB with α = 2, part one.

Table 2: Detailed results for instance set TSPLIB with α = 2, part two.

Table 3: Detailed results for instance set TSPLIB with α = 3, part one.

Table 4: Detailed results for instance set TSPLIB with α = 3, part two.

Table 5: Detailed results for instance set pmed with α = 2.