Extensions of labeling algorithms for multi‐objective uncertain shortest path problems

We consider multi‐objective shortest path problems in which the edge lengths are uncertain. Different concepts for finding so‐called robust efficient solutions for multi‐objective robust optimization exist. In this article, we consider multi‐scenario efficiency, flimsily and highly robust efficiency, and point‐based and set‐based minmax robust efficiency. Labeling algorithms are an important class of algorithms for multi‐objective (deterministic) shortest path problems. We analyze why it is, for most of the considered concepts, not straightforward to use labeling algorithms to find robust efficient solutions. We then show two approaches to extend a generic multi‐objective label correcting algorithm for these cases. We finally present extensive numerical results on the performance of the proposed algorithms.

A solution that minimizes all objectives simultaneously usually does not exist. We hence have to explain what "min" means: We use the well-known concept of efficient solutions.
In the following, we only use the symbols < (strictly less than) and ≤ (less than or equal to) to compare scalars.

Definition 2.
A path q′ dominates another path q with the same start and end node if z(q′) ≤ z(q) and z(q′) ≠ z(q). We also say that z(q′) dominates z(q). A path q ∈ Q_{s,t} is an efficient path for (MOSP) if there is no q′ ∈ Q_{s,t} such that z(q′) dominates z(q). Then z(q) is called non-dominated. A complete set of efficient paths is a set Q ⊆ Q_{s,t} such that for each efficient path q ∈ Q_{s,t} there exists an equivalent path q′ ∈ Q.
Solving (MOSP) means to find a complete set of efficient paths. Often the costs of the edges are not known exactly; they depend on the scenario that occurs. For example, travel times can depend on the time of day, on special events, on the weather, etc. Here, we consider multi-objective uncertain shortest path problems with a finite set of scenarios U := {ξ_1, . . . , ξ_r}. In multi-objective uncertain optimization, the cost vectors depend on the scenario which occurs, that is, for every scenario we may get a different cost vector. Hence, c is a function that assigns a cost vector c(e, ξ) = (c_1(e, ξ), . . . , c_k(e, ξ))^T ∈ R^k to each edge e ∈ E for each scenario ξ ∈ U. We hence obtain a cost matrix

c(e) := ( c_1(e, ξ_1) · · · c_1(e, ξ_r) )
        (      ⋮                ⋮      )          (1)
        ( c_k(e, ξ_1) · · · c_k(e, ξ_r) )

for every edge e. The cost of a path q is the sum of the costs of the edges it traverses, that is, for a simple path we have z(q, ξ) = Σ_{e∈q} c(e, ξ) and its cost matrix is z(q) = Σ_{e∈q} c(e). In this setting, two paths q, q′ are called equivalent if they have the same start and end node and z(q) = z(q′). For a matrix Y we denote by Y(i,·) its i-th row and by Y(·,j) its j-th column, that is, c(e)(·,j) = c(e, ξ_j).
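As an illustrative aside (not part of the article), the accumulation of edge cost matrices along a path and the componentwise comparison can be sketched in Python. The helper names `path_cost` and `dominates` are our own, and we assume dominance means componentwise ≤ with at least one strict inequality.

```python
# Cost matrices are k x r nested lists: rows = objectives, columns = scenarios.

def dominates(za, zb):
    """za dominates zb: componentwise <= and not equal (our convention)."""
    flat_a = [x for row in za for x in row]
    flat_b = [x for row in zb for x in row]
    return all(a <= b for a, b in zip(flat_a, flat_b)) and flat_a != flat_b

def path_cost(edge_costs):
    """Sum the k x r cost matrices c(e) over the edges of a path."""
    k, r = len(edge_costs[0]), len(edge_costs[0][0])
    total = [[0] * r for _ in range(k)]
    for c in edge_costs:
        for i in range(k):
            for j in range(r):
                total[i][j] += c[i][j]
    return total
```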
The multi-objective uncertain shortest path problem (MOUSP) is the family of multi-objective optimization problems

min_{q ∈ Q_{s,t}} z(q, ξ),  ξ ∈ U,

given by an instance (G, U, c, s, t) with graph G, scenario set U, s the start and t the end node, and c denoting the objective function, which assigns for each scenario a cost vector c(e, ξ) to each edge. The notion of what is a good solution to a multi-objective uncertain problem is not trivial. In multi-objective robust optimization one searches for so-called robust efficient solutions. We now present some concepts to define robust efficient solutions proposed in the literature. The concept of multi-scenario efficiency [7,18] applies the idea of efficiency to several scenarios and multiple objective functions at the same time: A solution is multi-scenario efficient if there is no other solution which dominates it in one scenario and is at least as good in all other scenarios.

(MORSP)
Given a concept of robust efficiency, find a complete set of robust efficient solutions for (MOUSP).
That is, find a complete set of multi-scenario efficient, flimsily robust efficient, highly robust efficient, point-based minmax robust efficient or set-based minmax robust efficient solutions. For |U| = 1, (MOUSP) reduces to (MOSP). In this case, the robust efficient solutions w.r.t. any of the concepts defined in this section are exactly the efficient solutions of (MOSP).

GENERAL LABEL CORRECTING ALGORITHM
Labeling algorithms are a standard method for solving shortest path problems, in the single-objective as well as in the multi-objective case. Label setting algorithms can be used for instances with positive edge costs, whereas label correcting algorithms also work for negative edge costs, as long as there are no negative cycles. They can be based on node selection or label selection. We consider a generic label selection method as given in [20] for the multi-objective shortest path problem (see also [9] for the bi-objective problem).
A label is a tuple l = (v, z(l), l′) consisting of
• a node v ∈ V (we say that l is a label at v),
• a cost z(l), and
• a predecessor label l′ (or 0 if l is the start label with cost 0 at s).
Every label l at a node v ≠ s represents a path q from s to v. That means that z(l) = z(q) and l's predecessor label l′ represents the subpath of q from s to v′, with (v′, v) being the last edge of q. Given the label l, its corresponding path q can be constructed by backtracking along the predecessor labels. These labels are called ancestors of l. The labels are constructed iteratively from their predecessor labels. We store them in two label sets: A newly created label is first added to the set of temporary labels T. As soon as a label l ∈ T at a node v is chosen in the label selection step, it is stored in the label set L instead, and at the end nodes of all outgoing edges of v, new labels with predecessor label l are created. The cost of a label can efficiently be computed by adding the cost of the predecessor label and the edge cost. We say that a label l is dominated by a label l′ if z(l) is dominated by z(l′).
Algorithm 1 is a generic label correcting algorithm with label selection as given in [20], but with an adjustment: We look for a complete set and not for the whole set of efficient solutions as done in [20]. This is why we only keep newly created labels if there is not yet any other label at the same node with the same cost. That is, we only keep track of a new path if it is not equivalent to an already existing path. Label correcting algorithms are widely used for solving multi-objective shortest path problems. The goal of this article is to make use of labeling algorithms also for solving uncertain multi-objective shortest path problems, that is, to compute robust efficient shortest paths.
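To make the generic scheme concrete, the following self-contained Python sketch (our own simplified code, not the article's Algorithm 1) implements label selection in FIFO order with vector costs as tuples and the dominance handling described above: a new label is kept only if no existing label at the same node dominates it or has the same cost, and labels dominated by the new one are removed. The adjacency-dict graph encoding is an assumption for illustration; termination is assumed via the absence of negative cycles.

```python
from collections import namedtuple

Label = namedtuple("Label", "node cost pred")

def dominates(a, b):
    """Componentwise <= with at least one strict inequality."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def label_correcting(edges, s, t):
    """edges: dict node -> list of (neighbor, cost_tuple). Returns labels at t."""
    k = len(next(iter(edges.values()))[0][1])       # number of objectives
    temp, perm = [Label(s, (0,) * k, None)], []
    while temp:
        l = temp.pop(0)                             # label selection (FIFO)
        perm.append(l)
        for v, c in edges.get(l.node, []):
            new = Label(v, tuple(x + y for x, y in zip(l.cost, c)), l)
            at_v = [m for m in temp + perm if m.node == v]
            # keep only one label per distinct cost, drop dominated newcomers
            if any(dominates(m.cost, new.cost) or m.cost == new.cost for m in at_v):
                continue
            temp = [m for m in temp if not (m.node == v and dominates(new.cost, m.cost))]
            perm = [m for m in perm if not (m.node == v and dominates(new.cost, m.cost))]
            temp.append(new)
    return [l for l in perm if l.node == t]
```

On a small bi-objective instance with two incomparable s-t routes, both efficient cost vectors survive.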
We now discuss how we can transfer Algorithm 1 to a solution algorithm for the multi-objective robust shortest path problem. The first difference is that in the concepts of robust efficiency given in Section 2, the set of optimal solutions, that is, the set of robust efficient paths in Q_{s,t}, is defined explicitly and not implicitly via a dominance relation. However, in order to compare label costs we need a suitable definition of dominance. To decide whether a path dominates another one, all data of the uncertain problem has to be available, that is, we need the cost matrices c(e) given in (1) on every edge e ∈ E. Finally, Algorithm 1 can only work if Bellman's principle of optimality [2] holds for the given concept of robust efficiency.
We summarize these conditions below.
1. Principle of optimality: For every instance (G, U, c, s, t) of (MOUSP) we require: If q ∈ Q_{s,t} is a robust efficient path for (G, U, c, s, t), then for every node v in q the subpath q_{s,v} is robust efficient for the instance (G, U, c, s, v).

2. Dominance relation: There exists a binary relation R ⊆ R^{k×r} × R^{k×r} with the following properties. We say that q′ dominates q if (z(q′), z(q)) ∈ R.
(a) The relation is consistent with the concept of robust efficiency: For all instances with k objectives and |U| = r: q ∈ Q_{s,t} is robust efficient ⇔ there is no q′ ∈ Q_{s,t} with (z(q′), z(q)) ∈ R.
(b) Domination property (see [3]): For all instances with k objectives and |U| = r: every path q ∈ Q_{s,t} that is not robust efficient is dominated by a robust efficient path q′ ∈ Q_{s,t}.
(c) The relation R is transitive.

For reference, the dominance-handling steps of Algorithm 1 (Lines 6-11) read:
6: if there is no label l̂ ∈ T ∪ L at node v dominating l or with z(l̂) = z(l) then
7:   T := T ∪ {l}
8:   for all labels l̂ ∈ T at v dominated by l do
9:     T := T \ {l̂}
10:  for all labels l̂ ∈ L at v dominated by l do
11:    L := L \ {l̂}
With these conditions satisfied, all structural requirements that ensured correctness of Algorithm 1 for the deterministic case are guaranteed and we easily transfer Algorithm 1 to a solution algorithm for solving the multi-objective robust shortest path problem, which we call Algorithm 1'. As input it takes an instance (G, U, c, s, t) of (MORSP) with edge costs c(e) ∈ R k×r . It executes the same steps as Algorithm 1, but using the definition of dominance given in Condition 2.
To ensure that Algorithm 1′ terminates we use the common requirement that the instance is conservative w.r.t. R, that is, for all cycles C in G either z(C) = 0 or ∀Y ∈ R^{k×r} : (Y, Y + z(C)) ∈ R. Note that in single-objective deterministic optimization, conservativeness requires that no cycles of negative cost exist.

Theorem 8. If the concept of robust efficiency satisfies Conditions 1 and 2 and the instance is conservative w.r.t. R, Algorithm 1' finds a complete set of robust efficient solutions.
Proof. We now check that Conditions 1 and 2 and the requirement of conservativeness are indeed enough to guarantee finiteness and correctness of Algorithm 1′, proceeding analogously to a proof of correctness of Algorithm 1 in the deterministic case: We first remark that a label representing a non-simple path p will never be added to T in Line 6: Whenever the algorithm considers adding the label corresponding to p, this label will be dominated by or have the same cost as the label l′ of the corresponding simple path p′. Since p′ is a subpath of p and R is transitive, either l′ or a label that has the same cost as l′ or dominates l′ will already be contained in T ∪ L. Since in each iteration of Line 3 at least one label is removed from T and there are only finitely many simple paths in G, Algorithm 1′ stops after finitely many iterations.
To see that for each robust efficient path p there will be a label l with z(l) = z(p) in L when the algorithm terminates, note that Lines 1-5 and 7 describe a routine which iteratively constructs all paths from the source. This routine is complemented by Lines 6 and 8-11, in which dominated labels are removed. This also prevents paths with dominated subpaths from being constructed. However, Condition 2(a) guarantees that a label corresponding to a subpath of a robust efficient path is only removed during the dominance check if there already exists a label with the same cost. Hence, for every robust efficient path p a label l with z(l) = z(p) will be found. On the other hand, any label corresponding to a path which is not robust efficient will be sorted out due to Condition 2(b), so that we obtain a complete set of robust efficient paths. ■

Conditions similar to Conditions 1 and 2 are used in [35] for a labeling approach in cycle-free graphs and (partly) in earlier dynamic programming literature (e.g., [8,21,29,31]). The main conceptual difference is that they start with a given dominance relation and define optimality and a counterpart to Condition 1 based on this relation. We chose to state the principle of optimality in a way which does not presuppose the existence of a suitable dominance relation, since the concepts for robust efficiency studied in this article are not defined via a dominance relation, and it is not immediately obvious for which of the concepts a suitable dominance relation exists (see Section 4 for the corresponding analysis).
Further, instead of requiring Property 2(b), they often require asymmetry of the considered relation. Although these properties are not equivalent on their own, they are equivalent if Properties 2(a) and 2(c) hold, as we show in the following lemma.

Lemma 9. Let R be a binary relation with Properties 2(a) and 2(c). Then Property 2(b) is equivalent to asymmetry of R.
Proof. We first show by contradiction that asymmetry of R follows from Property 2(b). Let R have Property 2(b) and assume that R is not asymmetric, that is, there exist Y, Y′ ∈ R^{k×r} with (Y, Y′) ∈ R and (Y′, Y) ∈ R. We construct an instance with only two (distinct) paths q, q′ from s to t with z(q) = Y and z(q′) = Y′. Then q dominates q′ and vice versa. Hence, by Property 2(a), q is not robust efficient, but there exists no robust efficient path from s to t dominating q. This is a contradiction to Property 2(b). On the other hand, Property 2(b) follows from asymmetry of R due to the finiteness of the set Q_{s,t}. This has been shown, for example, in [35, Lemma 17] for relations on the solution set, which we can define from the given relation in the objective space. ■

LABELING FOR THE MULTI-OBJECTIVE ROBUST SHORTEST PATH PROBLEM
In the following we discuss whether the concepts of robust efficiency presented in Section 2 satisfy the conditions given in Section 3 for using Algorithm 1'. If a concept does not satisfy the conditions, we investigate whether and how the idea of label correcting algorithms can nevertheless be used to find robust efficient solutions.

Multi-scenario efficiency
Recall that a solution is multi-scenario efficient if it is efficient w.r.t. the deterministic multi-objective edge costs c(e) = (c_1(e, ξ_1), . . . , c_k(e, ξ_1), c_1(e, ξ_2), . . . , c_k(e, ξ_r))^T. We can hence reduce (MORSP) to a deterministic multi-objective problem and directly use Algorithm 1 to solve it. Note that the set of multi-scenario efficient solutions contains all highly robust efficient solutions as well as all so-called strictly flimsily robust efficient, strictly point-based and strictly set-based minmax robust efficient solutions [7].
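The reduction can be sketched as follows (our own helper, for illustration): stack the scenario columns of the k × r cost matrix into one deterministic kr-dimensional cost vector, which a standard multi-objective labeling algorithm can then process.

```python
def flatten_cost(c):
    """c: k x r cost matrix (list of rows; rows = objectives, cols = scenarios).
    Returns (c_1(ξ1), ..., c_k(ξ1), c_1(ξ2), ..., c_k(ξr)) as a tuple."""
    k, r = len(c), len(c[0])
    return tuple(c[i][j] for j in range(r) for i in range(k))
```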

Flimsily robust efficiency
Recall that a solution is flimsily robust efficient if it is efficient for at least one scenario in U. We show that for flimsily robust efficiency, Condition 1 for using Algorithm 1' is satisfied, but not Condition 2. We then extend Algorithm 1' by storing some additional information for each label, such that we can find a complete set of flimsily robust efficient solutions. We also introduce an alternative solution approach which finds a complete set of flimsily robust efficient solutions by applying Algorithm 1' once for each scenario and taking the union of the solution sets.
Lemma 10. Let q be a flimsily robust efficient path for an instance (G, U, c, s, t) of (MOUSP). Then no subpath q_{s,v} of q is dominated in every scenario, that is, Condition 1 is satisfied.

Proof. Let v be any node in q and assume that for each ξ ∈ U there exists a path from s to v dominating q_{s,v} in scenario ξ. Appending the subpath q_{v,t} to each of these paths, we conclude that for each ξ ∈ U there exists a path from s to t dominating q in scenario ξ. This contradicts q being flimsily robust efficient.

■
The following lemma shows that for flimsily robust efficiency there does not exist a binary relation as required in Condition 2, even for only two objectives and two scenarios.

Lemma 11. For flimsily robust efficiency there exists no binary relation R with Property 2(a), even for k = r = 2.

Proof. Assume that for k = r = 2 there exists a binary relation R ⊆ R^{k×r} × R^{k×r} with Property 2(a). Consider an instance of (MOUSP) with three disjoint paths as feasible set with the following cost matrices (rows are objectives, columns are scenarios):

z(Q_{s,t}) = { Y_1 := (0 5; 0 5), Y_2 := (1 4; 1 4), Y_3 := (2 3; 2 3) },

for example, as in Figure 1. For such an instance, a path q with z(q) = Y_2 is not flimsily robust efficient, because we have Y_1(·,1) ≤ Y_2(·,1) and Y_3(·,2) ≤ Y_2(·,2). It follows that (Y_1, Y_2) ∈ R or (Y_3, Y_2) ∈ R because of Property 2(a). However, for an instance with z(Q_{s,t}) = {Y_1, Y_2} the path with cost matrix Y_2 is flimsily robust efficient (it is efficient in scenario ξ_2), and for an instance with z(Q_{s,t}) = {Y_3, Y_2} it is flimsily robust efficient as well (it is efficient in scenario ξ_1). By Property 2(a) this implies (Y_1, Y_2) ∉ R and (Y_3, Y_2) ∉ R, which is a contradiction. ■

From Lemma 11 it follows that for finding flimsily robust efficient solutions there is no suitable binary dominance relation to be used in Algorithm 1′. It is not sufficient to compare the cost matrices of the paths pairwise without considering additional information in Lines 6-11 of Algorithm 1′. However, if we store the information from previous comparisons, we can eliminate labels representing paths which are not flimsily robust efficient by pairwise comparisons. Using this idea, we extend Algorithm 1′ to Algorithm 2. For each label l we use a binary vector x(l) ∈ {0, 1}^{|U|} to indicate under which scenarios its path has been shown to be dominated. With q being the path represented by l we define z(l, ξ) := z(q, ξ). Algorithm 2 finds a complete set of flimsily robust efficient solutions for instances where each cycle has either cost 0 for each scenario or cost ≥ 0 for each scenario.
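The counterexample with Y_1, Y_2, Y_3 can be checked numerically with the following sketch (our own code): `flimsily_efficient` tests whether a cost matrix is efficient in at least one scenario against a set of competitors.

```python
def dominates_vec(a, b):
    """Componentwise <= with at least one strict inequality."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def flimsily_efficient(Y, others):
    """Y, others: k x r matrices; column j is the cost vector in scenario ξ_j.
    True iff Y's column is non-dominated in at least one scenario."""
    r = len(Y[0])
    for j in range(r):
        col = [row[j] for row in Y]
        if not any(dominates_vec([row[j] for row in O], col) for O in others):
            return True        # efficient in scenario ξ_j
    return False

Y1 = [[0, 5], [0, 5]]
Y2 = [[1, 4], [1, 4]]
Y3 = [[2, 3], [2, 3]]
```

Against {Y1, Y3} the middle matrix Y2 is dominated in every scenario, but against either competitor alone it is flimsily robust efficient, so no pairwise relation can decide its fate.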
Correctness of this algorithm can be proven similarly to the proof of Theorem 8, which leads to the following theorem.

Theorem 12.
If for each cycle C in G either ∀ξ ∈ U : z(C, ξ) = 0 or ∀ξ ∈ U : 0 ≤ z(C, ξ), then the output label set of Algorithm 2 represents a complete set of flimsily robust efficient solutions of (G, U, c, s, t).
Proof. Since each cycle has either cost 0 for each scenario or cost ≥ 0 for each scenario, the cost of a non-simple path is either equal to the cost of the corresponding simple path or dominated by it in each scenario. Analogously to the proof of Theorem 8, whenever the algorithm considers adding the label l of a non-simple path to T, there either exists a label with the same cost or, for each scenario ξ, there exists a label l_ξ dominating l in ξ. Hence, the algorithm stops after finitely many iterations and finds only simple paths.
In Algorithm 2, for every path p from the source a label l with z(p) = z(l) is constructed, if p does not contain a subpath that is dominated in every scenario. From Lemma 10 it follows that no subpath of a flimsily robust efficient path is dominated in every scenario. Analogously to the proof of Theorem 8, we conclude that for each flimsily robust efficient path p a label l with z(p) = z(l) is found, whereas all labels representing paths which are not flimsily robust efficient are deleted during the algorithm. ■

An alternative approach to finding a complete set of flimsily robust efficient solutions is presented in Algorithm 3. For each scenario, we use Algorithm 1′ to find solutions which are efficient w.r.t. this scenario. Note that the dominance relation used when applying Algorithm 1′ to the subproblems only depends on one scenario. However, when comparing the costs of two labels in Line 6 we only consider them equal if they are equal for each scenario. Therefore, the union of the obtained solution sets is a complete set of flimsily robust efficient solutions. To ensure that Algorithm 3 terminates, we use the same requirement as in Theorem 12: Each cycle C in G has to satisfy either ∀ξ ∈ U : z(C, ξ) = 0 or ∀ξ ∈ U : 0 ≤ z(C, ξ).
Algorithm 3 Repeated label correcting algorithm to find flimsily robust efficient solutions
Input: an instance I = (G, U, c, s, t) of the multi-objective uncertain shortest path problem
Output: label set L, of which the labels at t represent a complete set of flimsily robust efficient solutions of (MOUSP)
1: L := ∅
2: for all i = 1, . . . , r do
3:   L_{ξ_i} := output of Algorithm 1′ with the relation
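The union idea behind Algorithm 3 can be sketched as follows (our own simplified code, working directly on path cost matrices instead of labels): determine the non-dominated candidates per scenario, then take the union over all scenarios.

```python
def efficient_in_scenario(cost_matrices, j):
    """Return the indices of matrices whose column j is non-dominated."""
    def col(Y):
        return [row[j] for row in Y]
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return {i for i, Y in enumerate(cost_matrices)
            if not any(dom(col(O), col(Y)) for O in cost_matrices)}

def flimsily_union(cost_matrices):
    """Union over scenarios: flimsily robust efficient candidate indices."""
    r = len(cost_matrices[0][0])
    out = set()
    for j in range(r):
        out |= efficient_in_scenario(cost_matrices, j)
    return out
```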

Highly robust efficiency
Recall that a solution is highly robust efficient if it is efficient for each scenario. We show that for highly robust efficiency Condition 1 is satisfied, but that there exists no binary relation with Property 2(b) as required in Condition 2. However, every highly robust efficient solution is flimsily robust efficient as well. We give an algorithm to find a complete set of highly robust efficient solutions which filters the labels obtained by Algorithm 2. Afterwards, we describe an alternative approach in which we apply Algorithm 1' r times and intersect the obtained solution sets.
Lemma 13. Let q be a highly robust efficient path for an instance (G, U, c, s, t) of (MOUSP). Then for every node v in q the subpath q s,v is highly robust efficient for the instance (G, U, c, s, v), that is, Condition 1 is satisfied.
Proof. Let v be any node in q. Assume that q_{s,v} is not highly robust efficient for (G, U, c, s, v). Then there exists a path q′ from s to v which dominates q_{s,v} under at least one scenario ξ ∈ U. It follows that the path obtained by appending q_{v,t} to q′ dominates q under ξ. This contradicts q being highly robust efficient. ■

Lemma 14. For highly robust efficiency there exists no binary relation R with Property 2(b).

Proof. Consider an instance of (MOUSP) for k = r = 2 with two paths q_1 and q_2 whose cost matrices Y_1 and Y_2 satisfy Y_1(·,1) ≤ Y_2(·,1) and Y_2(·,2) ≤ Y_1(·,2), with strict inequality in at least one component each. Then none of the two paths in Q_{s,t} is highly robust efficient, but both are not dominated by any highly robust efficient path. We conclude that the concept of highly robust efficiency does not have the domination property, hence Property 2(b) cannot hold for any binary relation.

■
We remark that also Properties 2(a) and 2(c) cannot hold at the same time for the concept of highly robust efficiency. Without a suitable dominance relation, we cannot use Algorithm 1′ to find highly robust efficient solutions. However, since every highly robust efficient solution is also flimsily robust efficient, we can instead compute a complete set of flimsily robust efficient solutions and filter out the highly robust efficient solutions. This can be done efficiently with the help of the additional vectors x(l), which we already introduced for Algorithm 2: At the end of Algorithm 2, a label l represents a highly robust efficient path if x(l) = (0, . . . , 0). This leads to Algorithm 4.

Theorem 15. If for each cycle C in G either ∀ξ ∈ U : z(C, ξ) = 0 or ∀ξ ∈ U : 0 ≤ z(C, ξ), then the output label set of Algorithm 4 represents a complete set of highly robust efficient solutions.

Proof. The statement follows directly from Theorem 12 and the fact that every highly robust efficient solution is flimsily robust efficient. ■

Similarly to Algorithm 3 for finding flimsily robust efficient solutions, an alternative approach for finding highly robust efficient solutions is given in Algorithm 5: For each scenario, we use Algorithm 1′ to find efficient solutions w.r.t. this scenario. Then we intersect the obtained solution sets. Here again, when applying Algorithm 1′ to the subproblems, the dominance relation only depends on one scenario. However, when comparing the costs of two labels in Line 6 of Algorithm 1′, we only consider them equal if they are equal for each scenario, in order to obtain a complete set of highly robust efficient solutions in the end. Hence, Algorithm 5 finds a complete set of highly robust efficient solutions for instances where each cycle C in G satisfies either ∀ξ ∈ U : z(C, ξ) = 0 or ∀ξ ∈ U : 0 ≤ z(C, ξ).
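The filtering step of Algorithm 4 can be sketched as follows (our own code): given, for each candidate, the binary vector x recording in which scenarios it was shown to be dominated, keep exactly those candidates with x = (0, . . . , 0).

```python
def filter_highly(candidates):
    """candidates: list of (label, x) pairs, x a 0/1 list with one entry per
    scenario (1 = shown dominated in that scenario). Returns the labels that
    were dominated in no scenario, i.e. the highly robust efficient ones."""
    return [lab for lab, x in candidates if all(v == 0 for v in x)]
```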
Algorithm 5 Repeated label correcting algorithm to find highly robust efficient solutions
Input: an instance I = (G, U, c, s, t) of the multi-objective uncertain shortest path problem
Output: label set L, of which the labels at t represent a complete set of highly robust efficient solutions of (MOUSP)
1: L := ∅
2: for all ξ ∈ U do
3:

Point-based and set-based minmax robust efficiency
We show that point-based and set-based minmax robust efficiency both satisfy Condition 2 for using Algorithm 1′, but not Condition 1. To be able to nevertheless use a label correcting approach, we propose to use several label sets at each node. This idea was first introduced for single-objective minmax robust shortest path problems in [42]. We first show that both concepts for robust efficiency satisfy Condition 2 by defining a relation for each of the concepts with Properties 2(a)-2(c). Recall that a solution is point-based minmax robust efficient if it is efficient for the deterministic multi-objective problem (SP_max) with objectives z̃_i(q) := max_{ξ∈U} z_i(q, ξ), i = 1, . . . , k.
(SP_max) is not a classical multi-objective shortest path problem, because suitable deterministic edge costs are not known in advance. Therefore, it cannot simply be solved with a deterministic multi-objective labeling algorithm. However, by identifying z(q) with z̃(q), the ≤-relation on R^k induces a binary relation R_point ⊆ R^{k×r} × R^{k×r}, defined by (Y′, Y) ∈ R_point if and only if z̃(Y′) dominates z̃(Y), where z̃_i(Y) := max_{j=1,...,r} Y_{i,j}. It is easy to check that this relation has the properties required in Condition 2. Now, we consider set-based minmax robust efficiency. Its definition directly leads to a suitable binary relation on R^{k×r}: Given k, r ∈ N, we construct the binary relation R_set ⊆ R^{k×r} × R^{k×r}. Again, it can be checked that this relation fulfills Condition 2. For k = 1, both point-based and set-based minmax robust efficiency reduce to the single-objective concept of minmax robustness. The single-objective minmax robust shortest path problem is already NP-hard [32,42]. Efficient labeling and dynamic programming algorithms cannot be used directly, because Bellman's principle of optimality is not satisfied.
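Assuming, as the identification with z̃ suggests, that R_point compares the objective-wise worst cases over all scenarios, the relation can be sketched as follows (our own code):

```python
def worst_case(Y):
    """z̃(Y): per objective (row), the maximum cost over all scenarios (columns)."""
    return tuple(max(row) for row in Y)

def r_point(Ya, Yb):
    """(Ya, Yb) in R_point iff worst_case(Ya) dominates worst_case(Yb)."""
    a, b = worst_case(Ya), worst_case(Yb)
    return all(x <= y for x, y in zip(a, b)) and a != b
```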
In [42] a pseudo-polynomial algorithm for the single-objective minmax robust shortest path problem with positive integer edge lengths is given. Instead of a single label at each node v, a label is stored at v for each possible cost of the remaining part of the path, which has not been determined yet.
In order to find a complete set of [set-based/point-based] minmax robust efficient solutions, we transfer this idea to our label correcting algorithm by adding a prediction matrix as a fourth component to each label: A label l = (v, z(l), l , A) now consists of a node v, a cost matrix z(l), a predecessor label l as before, and a prediction matrix A ∈ Z k×r . We also define a function a with a(l) := A, assigning the prediction matrix to label l. A path from s to v can be represented by several labels with different prediction matrices. The prediction matrix contains the assumed costs for continuing the path from v to t.
At the beginning of the algorithm, component-wise lower and upper bounds A_min_{i,j} and A_max_{i,j} for the cost of a simple path in G are computed. For example, one obtains suitable bounds by summing the |V| − 1 smallest and the |V| − 1 largest edge costs for each objective i and scenario j, respectively. With A ≤ A′ we denote that matrix A is component-wise smaller than or equal to matrix A′. Algorithm 6 is correct for instances with integer edge costs which are conservative w.r.t. [R_point/R_set]. However, it can easily be adjusted to rational edge costs by allowing A ∈ Q^{k×r} and adjusting the step length by which A_{i,j} is increased in Lines 3 to 6.

12: if A_min ≤ a(l) ≤ A_max and there is no label l̂ = (v, z(l̂), l̂′, a(l)) ∈ T ∪ L with z(l̂) = z(l) or (z(l̂) + a(l), z(l) + a(l)) ∈ R then
13:   T := T ∪ {l}
14:   for all labels l̂ = (v, z(l̂), l̂′, a(l̂)) ∈ T with (z(l) + a(l), z(l̂) + a(l̂)) ∈ R do
15:     T := T \ {l̂}
16:   for all labels l̂ = (v, z(l̂), l̂′, a(l̂)) ∈ L with (z(l) + a(l), z(l̂) + a(l̂)) ∈ R do
17:     L := L \ {l̂}

Proof. We first show that Algorithm 6 stops after finitely many iterations. We then show that q ∈ Q_{s,t} is [point-based/set-based] minmax robust efficient ⇔ at the end of Algorithm 6, there is a label l ∈ L at node t with cost z(l) = z(q) and prediction matrix a(l) = 0.

First note that, in contrast to Algorithm 1 and Algorithm 2, in Algorithm 6 labels corresponding to non-simple paths are not immediately sorted out in Line 12, since Line 12 only compares labels having the same prediction matrix. However, since there are only m̄ := ∏_{i=1,...,k, j=1,...,r} (A_max_{i,j} − A_min_{i,j} + 1) different prediction matrices, no path for which a label is added to L contains a node v more than m̄ times: A path p containing v more than m̄ times has at least m̄ + 1 subpaths ending in v (including p itself). Hence, at least two of the corresponding labels have the same prediction matrix. However, as soon as a label l at v is created with the same prediction matrix as a predecessor label l′ at v, we have z(l) = z(l′) or (z(l′) + a(l′), z(l) + a(l)) ∈ R since the instance is conservative, and l is discarded in Line 12. We now show that q ∈ Q_{s,t} is [point-based/set-based] minmax robust efficient ⇔ at the end of the algorithm, there is a label l ∈ L at node t with cost z(l) = z(q) and prediction matrix a(l) = 0.
⇒: Let q be a [point-based/set-based] minmax robust efficient solution. Without loss of generality we can assume that q is a simple path: Because the instance is conservative w.r.t. R, for any non-simple path q there either exists an equivalent simple path or q is not [point-based/set-based] minmax robust efficient.
Let l be the first label at t added to T with cost z(l) = z(q) and a(l) = 0. Then l ∈ L at the end of the algorithm, because there exists no q′ ∈ Q_{s,t} with (z(q′), z(q)) ∈ R. It remains to show that a label with cost z(l) = z(q) and a(l) = 0 is added to T. We show by induction that for each node v on q, a label with cost z(q_{s,v}) and prediction matrix z(q_{v,t}) is added to T during the algorithm.
In Line 7, a label at node s with cost 0 and prediction matrix z(q) is added to T. Now let l be a label at a node v on q with cost z(q_{s,v}) and prediction matrix z(q_{v,t}). Since z(l) + a(l) = z(q) and q is [point-based/set-based] minmax robust efficient, l is not removed in Lines 14-17. Hence, l is removed from T and added to L in some iteration of Line 9. Then, in Line 11 a label l′ at the next node v′ of q with cost z(q_{s,v′}) and prediction matrix z(q_{v′,t}) is created, and l′ is added to T, unless there already is a label in T ∪ L with the same cost and prediction matrix.
We conclude that for each node v on q, a label with cost z(q s,v ) and prediction matrix z(q v,t ) is added to T during the algorithm, in particular for v = t.
⇐: The dominance checks in Lines 12-17 guarantee that for any two labels l, l′ in L we have (z(l) + a(l), z(l′) + a(l′)) ∉ R, thus in particular for our output labels (with a(l) = a(l′) = 0) no two paths in the output dominate each other.

■
Note that this algorithm also returns only labels representing simple paths: If a non-simple path p is not dominated, the cost of all its cycles is 0 and the label representing the respective simple path with prediction matrix 0 was constructed earlier than the label representing p.

Table 1 summarizes which properties of the two conditions given in Section 3 are satisfied for each of the considered concepts of robust efficiency and which algorithms can be used to find a complete set of robust efficient solutions. All presented algorithms are pseudo-polynomial for a fixed number of objectives and scenarios and integer edge costs: Carefully counting the steps shows that for polynomially bounded integer edge costs the algorithms run in polynomial time. This cannot be expected if the number of scenarios is unbounded, since the single-objective minmax robust shortest path problem with integer edge costs is then already strongly NP-hard [42].

EXPERIMENTS
In the previous section we developed several algorithms for finding robust efficient solutions. These can be classified into two groups:
• Extended labeling algorithms: algorithms that use an extension of Algorithm 1′ based on Conditions 1 and 2 introduced in Section 3. These are Algorithms 2, 4, and 6 for flimsily, highly, and point-based/set-based minmax robust efficiency.
• Repeated labeling algorithms: algorithms that rely on repeated application, for every scenario, of Algorithm 1′. These are Algorithms 3 and 5 for flimsily and highly robust efficiency.
The main goal of this section is to compare these two classes of algorithms. Since we have algorithms from both classes for the two concepts of flimsily and highly robust efficiency, we take these as the basis for our experiments. That is, the following four algorithms presented in this article are tested and compared in detail:
• EL-Flimsily is Algorithm 2, the extended label correcting algorithm to find flimsily robust efficient paths.
• RL-Flimsily is Algorithm 3, where Algorithm 1' is applied r times to find flimsily robust efficient paths.
• EL-Highly is Algorithm 4, which applies Algorithm 2 (EL-Flimsily) and identifies highly robust efficient solutions from the output.
• RL-Highly is Algorithm 5, where Algorithm 1' is applied r times to find highly robust efficient paths.
In addition, we also present some results showing particularities of the extended labeling algorithms for finding point-based and set-based minmax robust efficient solutions:
• EL-PB is Algorithm 6 with dominance relation R_point, the extended label correcting algorithm to find point-based minmax robust efficient paths.
• EL-SB is Algorithm 6 with dominance relation R set , the extended label correcting algorithm to find set-based minmax robust efficient paths.
Since our test instances have positive edge lengths, we set the lower bounds A_min_{i,j} needed for EL-PB and EL-SB to 0. Further, we calculate the upper bounds A_max_{i,j} as the sum of the |V| − 1 largest costs for each objective i and scenario j. All algorithms were implemented in C++, compiled with gcc version 5.4.0, and run under Ubuntu 16.04.2 on a laptop with a 3 GHz processor and 16 GB RAM. Results are analyzed, and plots and tables are generated, in the statistical computing environment R [36].
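The bound computation just described can be sketched as follows (our own code, assuming nonnegative edge costs as in our test instances):

```python
def path_cost_bounds(edge_costs, n_nodes):
    """edge_costs: list of k x r cost matrices c(e). Returns (A_min, A_max):
    A_min = 0 (nonnegative costs assumed) and A_max = sum of the n_nodes - 1
    largest edge costs, per objective i and scenario j."""
    k, r = len(edge_costs[0]), len(edge_costs[0][0])
    m = n_nodes - 1
    a_min = [[0] * r for _ in range(k)]
    a_max = [[sum(sorted(c[i][j] for c in edge_costs)[-m:]) for j in range(r)]
             for i in range(k)]
    return a_min, a_max
```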

Test instances
We test the presented algorithms based on two types of network instances, grid networks and so-called NetMaker networks.

Grid networks
Grid networks are introduced in [11,37], where nodes are arranged in a rectangular grid of height h and width w. The start node s and end node t are outside the grid, namely on the left and right, with edges connecting them to all left-most and right-most nodes, respectively, as shown in Figure 2 (structure of grid networks). The (integer) edge cost components are randomly chosen from a discrete uniform distribution between 1 and a given upper bound c. We construct one set of random grid network instances where the costs for all scenarios are chosen randomly. For the other set of correlated grid network instances the cost vector of scenario ξ_1 is randomly generated, and the other cost vectors c(e, ξ) are generated based on c(e, ξ_1), such that c(e, ξ) ∈ {max{1, c(e, ξ_1) − 3}, . . . , c(e, ξ_1) + 3}.
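The correlated cost generation can be sketched as follows (our own code; the ±3 band follows the construction described for the correlated instances, with costs kept ≥ 1):

```python
import random

def correlated_costs(base_cost, n_scenarios, rng=random):
    """Scenario 1 gets base_cost; each further scenario cost is drawn
    uniformly from {max(1, base_cost - 3), ..., base_cost + 3}."""
    costs = [base_cost]
    for _ in range(n_scenarios - 1):
        costs.append(rng.randint(max(1, base_cost - 3), base_cost + 3))
    return costs
```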

NetMaker networks
So-called NetMaker networks were first introduced by [40] for testing a bi-objective shortest path algorithm, and have also been used by others [37]. A random Hamiltonian cycle ensures that the network is connected. Further edges (v, v′) are randomly generated for each node v: the number of edges with tail node v is drawn with equal probability from {e min , e min + 1, . . . , e max }. NetMaker also limits how far these edges can reach: assuming all nodes are numbered {1, 2, 3, . . . , |V |}, with 1 being the start node and |V | the target node, the head node of any edge may only lie a limited distance ahead of its tail in this numbering, controlled by a parameter I. This prevents paths from s to |V | with very few edges, which would otherwise easily dominate all other paths. In the following, I = 10 is chosen.
In the original bi-objective instances [37,40], for each edge it is first randomly determined which interval the edge costs fall into. We generate NetMaker network instances with random scenarios: for any edge e and cost component k, the costs c k (e, ξ) of all scenarios are randomly chosen from the same interval associated with k and e. Correlated NetMaker instances are constructed as for grid networks by randomly generating the cost vector c(e, ξ 1 ) for scenario ξ 1 and generating the others such that c(e, ξ) ∈ {max {1, c(e, ξ 1 ) − 3} , . . . , c(e, ξ 1 ) + 3}. The costs of edges on the Hamiltonian cycle, for all cost components and scenarios, are chosen like all other edge costs and then multiplied by a factor of 10 to penalize their use. In this aspect our instance generation may differ from [40].

Finding flimsily and highly robust efficient solutions
This section analyses solution numbers, difficulty of problem instances and runtimes of the different algorithms introduced for finding flimsily and highly robust efficient solutions.
(For NetMaker instances, the edge generation parameters are chosen such that, on average, there are 4-5 outgoing edges per node; this leads to similar network density in grid and NetMaker networks.)
Tables A1-A16 in the appendix list |V |, |E| and the choice of parameters h, w, r, c for each grid instance; NetMaker instance parameters are listed analogously in Tables A17-A24. Runtime (in seconds) is recorded for each algorithm in the tables. Runs were aborted after 1 hour; their runtime is shown in the tables as > 3600.00. The tables also list the number of solutions found for each instance, where the column "sols" refers to the number of flimsily or highly robust efficient solutions obtained, respectively.
For most experiments a single instance was generated for each set of parameters. Since costs in grid networks, as well as edges and costs in NetMaker networks, are randomly chosen, instances for the same set of parameters can vary. For NetMaker instances we analyze the results over repeated runs (20 for k = 2 and 10 for k = 3) for each set of problem parameters. Hence, minimum, maximum and averages are reported in Tables A17-A24, and plots in the following subsections show average results and error bars (one standard deviation), where applicable. For grid networks, where only the edge costs, not the network structure itself, are variable, we investigate the variability of runtimes and numbers of solutions for k = 2 objectives on 20 instances for each parameter set (see Section 5.2.2). Results for k = 2 in Tables A1-A4 and A9-A12 also report minimum, maximum and average, and plots are based on average results, with error bars where applicable.

Comparison of extended and repeated labeling algorithms
Tables A1-A24 show that the runtimes of the extended labeling algorithms EL-Flimsily and EL-Highly are in general similar for each instance, which is expected as both apply Algorithm 2. Similarly, runtimes of the repeated labeling algorithms RL-Flimsily and RL-Highly are similar as both apply Algorithm 1' r times. When runtimes differ, this is due to the complexity of the filtering process that identifies all flimsily or highly robust efficient solutions. Hence, we illustrate all runtime results only for flimsily robust efficiency; the same trends can be observed for highly robust efficiency, unless stated otherwise. Figures 3 and 4 show runtimes of both classes of algorithms for finding flimsily robust efficient solutions in grid networks with correlated and random edge costs. The horizontal axis shows network height and width of the instances, and the two types of algorithms are shown as circles and triangles, slightly offset to make them easier to compare. A white background indicates results for c = 10, a gray background results for c = 100. For k = 2, average runtimes are shown by the marker, with error bars indicating one standard deviation. Furthermore, the number of scenarios in an instance is color-coded.
We observe that it is faster to run Algorithm 1' r times, as in the repeated labeling algorithms RL-Flimsily and RL-Highly, than to tackle the full problem with the extended labeling algorithms EL-Flimsily and EL-Highly, respectively. This is due to the increased complexity of the extended algorithms, as the additional vector x has to be maintained to correctly determine dominance of flimsily robust efficient labels. Discarding a label because it is dominated may only be possible late in the algorithm, since a label can only be discarded once it is dominated in all scenarios, that is, when x = (1, 1, . . . , 1). Hence, before it is confirmed that a label cannot be flimsily robust efficient, it may have been extended to many other nodes. The advantage of running Algorithm 1' r times, as in RL-Flimsily and RL-Highly, is that the subproblems have fewer labels at the nodes, as dominance can be established earlier, namely as soon as a label is dominated in the current scenario. This means that labels are less often unnecessarily carried forward by the algorithms.
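The role of the vector x described above can be sketched as follows. This is our own simplified illustration, not the paper's exact label data structure: each label carries a k × r cost matrix together with one flag per scenario, and the flag for scenario j is set once some other label dominates it in that scenario.

```python
def dominates(a, b):
    """Componentwise a <= b with at least one strict inequality."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class Label:
    """A label carrying a k x r cost matrix and the scenario flags x."""
    def __init__(self, cost):
        self.cost = cost                # cost[i][j]: objective i, scenario j
        self.x = [0] * len(cost[0])     # x[j] = 1 once dominated in scenario j

    def note_dominance(self, other):
        """Record the scenarios in which `other` dominates this label."""
        for j in range(len(self.x)):
            mine = [row[j] for row in self.cost]
            theirs = [row[j] for row in other.cost]
            if dominates(theirs, mine):
                self.x[j] = 1

    def discardable(self):
        # A label may be discarded only once it is dominated in all
        # scenarios, that is, once x = (1, 1, ..., 1).
        return all(v == 1 for v in self.x)
```

Note that the flags may be set by different dominating labels over the course of the algorithm, which is exactly why dominance is established later than in the single-scenario subproblems.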
For random instances (Figure 3) runtimes of RL-Flimsily and EL-Flimsily increase when the maximum cost increases from c = 10 to c = 100. For correlated instances, runtimes of RL-Flimsily increase, whereas runtimes of EL-Flimsily decrease when the maximum cost increases. The number of flimsily robust efficient solutions tends to decrease (see corresponding tables), and the runtime of EL-Flimsily benefits from this. Finally, the repeated runs for the same set of instance parameters with k = 2 show that instances are of varying difficulty in terms of number of solutions and runtime, as expected. For instances with random scenarios the effect was minor; that is, runtimes for one set of parameters generally do not overlap with those for a different set of parameters. While runtimes for similar sets of instance parameters can overlap for correlated instances, for example, for 6 and 8 scenarios, this does not tend to occur for parameter values that differ more, for example, 2 and 8 scenarios. Therefore we conclude that the general trends observed for grid networks in this section are valid even though experiments were only run for one instance per set of parameters when k = 3, 4. Figure 5 shows the corresponding runtimes for NetMaker networks; the number of scenarios is color-coded and the subfigures show instances with k = 2 or k = 3 objectives. We observe that for this network type the extended algorithm is sometimes faster than the repeated algorithm, in particular for correlated scenarios and k = 3 (Figure 5). This is illustrated in Figure 6, where runtimes of the EL-Flimsily and RL-Flimsily algorithms are plotted for the same sets of parameters, and the straight line indicates where runtimes would be equal. It again confirms that for some correlated instances with k = 3, EL-Flimsily is faster than RL-Flimsily. This can be explained by the fact that for correlated scenarios a path is more likely to dominate another in every scenario than for random scenarios.
As explained above, in EL-Flimsily a label that does not represent a flimsily robust efficient path may produce many successor labels until a dominating label is found for each scenario. In instances with correlated scenarios, however, a label is often dominated in all scenarios as soon as it is dominated in one scenario; hence one dominating label suffices to discard it. The runtime of EL-Flimsily benefits from this, whereas RL-Flimsily needs to repeat the whole labeling procedure r times, even if the costs are identical for all scenarios. This effect can also be observed for grid networks when comparing the runtimes of random and correlated instances in Figures 3 and 4 (in particular for c = 100); however, RL-Flimsily is still faster even for correlated grid instances. The difference between runtimes for random and correlated instances is discussed in more detail in Section 5.2.4.

Runtime with respect to network size and number of scenarios and objectives for both classes
For grid networks, Figures 3 and 4 show how instances become more challenging as the height or width of the problem instance increases. This increase is more significant for increasing width than for increasing height, which is explained in Section 5.2.5. Comparing the plots for k = 2, 3, 4, which all use the same scale for runtime, it is apparent that increasing k significantly increases the runtime. Further, for higher numbers of objectives, the parameters h and w influence runtime more, as can be seen by comparing the difference between runtimes for different network sizes in each of the plots.
In addition, the number of scenarios is color-coded in the figures and illustrates that the runtime of both classes of algorithms mostly increases as the number of scenarios increases. However, this trend is not as clear as for increasing size of networks and number of objectives, as can, for example, be observed for several instances with 6 or 8 scenarios, in particular for correlated instances.
Similarly, Figure 5 shows that also for NetMaker instances increasing the number of nodes, objectives, and scenarios generally leads to increasing runtimes.

Differences between correlated and random scenarios
We also analyze differences in runtime and number of solutions for random and correlated instances with the same parameters. For grid instances, by comparing Figures 3 and 4 (and corresponding tables), one can observe that runtimes for EL-Flimsily tend to be lower for correlated instances than for random instances, in particular for c = 100. An explanation for this is given in Section 5.2.2. In Figure 7 we analyze differences in the runtime of RL-Flimsily and the number of solutions found for random and correlated scenarios. Every point in Figure 7(a) represents the number of solutions of a grid instance with parameters k, r, h, w with correlated scenarios (horizontal axis) and random scenarios (vertical axis). For k = 2 it shows the average number of solutions of all 20 instances with the same set of parameters. It should be noted that, for all instances contributing to the same point in the figure, the instance parameters k, r, h, w are identical, but instances have different randomly generated costs associated with the edges. The straight line indicates where the number of solutions for random and correlated instances is identical. The figures distinguish instances with c = 10 (circles) and c = 100 (triangles). Figure 7(a) shows that the number of flimsily robust efficient solutions found for instances with correlated and random scenarios is often similar, but some random instances with c = 10 tend to have more solutions than their correlated counterparts (there are more points further above the line than below). For c = 100 a clear trend toward more solutions for random scenarios can be seen. Runtimes (or average runtimes, for k = 2) of RL-Flimsily in Figure 7(b) tend to be similar for correlated and random scenarios (points are close to the line), despite more solutions for random scenarios.
This is likely due to similar numbers of efficient solutions found for each scenario, which, in the correlated case, are often the same solution, whereas they are more likely to be distinct solutions in the random case.
Similarly, comparing the plots in Figure 5, it is apparent that the runtime of EL-Flimsily tends to be much lower for correlated NetMaker instances than for random NetMaker instances, as explained in Section 5.2.2. We do not observe this for RL-Flimsily. In Figure 7(c,d) this is investigated in more detail, similar to Figure 7(a,b) for grid instances. Again, the average results over all instances with the same parameters are shown. Figure 7(d) shows runtimes of RL-Flimsily, which tend to be similar for random and correlated instances, even though the number of solutions tends to be higher for random instances, as shown in Figure 7(c).

Number of robust efficient solutions
There are generally many flimsily robust efficient solutions and fewer highly robust efficient solutions. We note that grid network instances with random scenarios in our experiments generally do not have any highly robust efficient solutions for k = 2, 3 (see Tables A11-A14), whereas instances with k = 4 tend to have a few, mainly for r = 2 (Tables A15, A16). For grid network instances with correlated scenarios more highly robust efficient solutions are found (see Tables A3-A9), as a solution that is efficient in one scenario is more likely to also be efficient in another (correlated) scenario. This effect is stronger for c = 100 than for c = 10, leading to more highly robust efficient solutions when c = 100. In addition, it can be observed that the number of highly robust efficient solutions increases as the number of objectives increases, and that it tends to be higher for fewer scenarios.
As instance size, number of scenarios, and number of objectives increase, the number of flimsily robust efficient solutions found also increases. Figure 8(a) shows that instances with the same parameters h, w, r with k = 2 objectives (horizontal axis) and k = 3 objectives (vertical axis) clearly have more solutions for k = 3. Figure 8(b) illustrates the corresponding increase in highly robust efficient solutions found for k = 3, again compared to k = 2.
Our results also show that problem instances become more challenging as their size increases, that is as h and w increase. On closer inspection wider networks are more challenging than higher networks. For example, instances with h = 20, w = 30 have more flimsily robust efficient solutions and longer runtimes than instances with h = 30, w = 20. Narrow and high networks tend to have shorter paths and fewer flimsily robust efficient paths as paths tend to dominate each other more. Wide networks, on the other hand, have longer and more flimsily robust efficient paths as there are more possible ways of traversing the network on paths that do not dominate each other.
Random NetMaker instances also tend to have few highly robust efficient solutions, in particular for only two objectives, where often no highly robust efficient solution exists (see Tables A23, A24). Correlated instances, however, tend to have more highly robust efficient solutions, since an efficient path w.r.t. one scenario is much more likely to be efficient w.r.t. the other scenarios as well, if the edge costs in all scenarios are similar. As for grid network instances, NetMaker instances with three objectives generally have more flimsily and highly robust efficient solutions than instances with only two objectives.

Finding point-based and set-based minmax robust efficient solutions
EL-PB and EL-SB are, already for small instances, demanding in terms of runtime and memory usage. A RAM limit of 14 GB allowed only the solution of instances with very small networks. Memory usage and runtime increase rapidly with an increasing number of scenarios and/or objectives.
In addition, the memory usage and runtimes for instances with the same number of objectives and scenarios and the same network structure differ greatly. An important factor is the number of prediction matrices used in the algorithm, since the number of constructed labels depends heavily on it. We demonstrate this based on a grid network with h = w = 2, k = 2, and r = 2. The integer edge cost components are randomly chosen from a discrete uniform distribution between 1 and c, where c lies between 2 and 12. For each c ∈ {2, . . . , 12}, ten random instances are created. Figure 9 shows the runtime of EL-PB in relation to the number of prediction matrices for this 2 × 2 grid network with r = 2 and k = 2. The runtimes of EL-SB show the same trend and are omitted here.
From the theory we know that the number of considered prediction matrices depends on the number of objectives and scenarios and on the lower and upper bounds A^min_{i,j} and A^max_{i,j}. Hence, in addition to the number of objectives and scenarios, the lower and upper bounds also play a critical role for the runtime (and memory usage) of EL-PB and EL-SB. Since we look for the labels with prediction matrix 0 at node t, the lower bounds cannot be chosen higher than 0. The upper bounds, however, depend on the |V | − 1 maximal edge costs, and thus on the maximal possible edge cost c.
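To illustrate how quickly the bounds drive the search space, the number of integer matrices lying between the bounds can be counted as follows. This is our own back-of-the-envelope sketch, under the assumption that every integer k × r matrix between A^min and A^max is a candidate prediction matrix:

```python
def count_prediction_matrices(A_min, A_max):
    """Number of integer k x r matrices A with
    A_min[i][j] <= A[i][j] <= A_max[i][j] for all i, j."""
    total = 1
    for row_lo, row_hi in zip(A_min, A_max):
        for lo, hi in zip(row_lo, row_hi):
            total *= hi - lo + 1
    return total
```

For example, with k = 1, r = 2, bounds 0 and (2, 3), there are 3 · 4 = 12 candidate matrices; the count is a product over all k · r entries, which explains the rapid growth in the number of objectives, scenarios, and the maximal edge cost c.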
As a consequence, to be able to compare networks of different sizes, we consider k = 2 objectives, r = 2 scenarios, and edge costs chosen uniformly at random from {1, 2}. Figure 10 shows the average runtime over ten random instances each for different sizes of grid networks with width and height between 2 and 4. The average runtimes of EL-PB and EL-SB increase significantly with the size of the network, even for the small networks considered here. Instances with grid networks of width and height larger than 4 could not be solved due to memory limitations. In comparison, the time needed to find flimsily or highly robust efficient solutions increases much more slowly. Further, the runtime of EL-SB is higher and increases faster than the runtime of EL-PB. This can be explained by the complexity of the comparison procedure: checking whether a pair of label costs is in R set takes more time than checking whether it is in R point .
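The cost difference between the two comparison procedures can be made concrete with one common formalization of the minmax concepts. This is our own sketch and an assumption: the paper's relations R point and R set are defined on label costs and may differ in detail. The point-based check compares two worst-case vectors (one maximum per objective), roughly O(k · r) work, whereas a set-based check compares whole sets of scenario columns against each other, roughly O(k · r²) work:

```python
def columns(m):
    """Scenario columns of a k x r cost matrix."""
    return [[row[j] for row in m] for j in range(len(m[0]))]

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def point_leq(m1, m2):
    """Point-based: compare componentwise worst cases (max over scenarios)."""
    w1 = [max(row) for row in m1]
    w2 = [max(row) for row in m2]
    return leq(w1, w2)                 # one vector comparison, O(k * r) overall

def set_leq(m1, m2):
    """Set-based: every scenario column of m2 is weakly dominated by
    some column of m1 -- column-vs-column comparisons, O(k * r^2)."""
    return all(any(leq(c1, c2) for c1 in columns(m1)) for c2 in columns(m2))
```

The nested quantifier in the set-based check is what makes it the more expensive of the two, consistent with the observed runtime gap between EL-SB and EL-PB.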

Summary
In summary, it is challenging to identify robust efficient solutions even for small to medium sized problem instances, in particular point-based and set-based minmax robust efficient solutions. An increase in the number of scenarios and objectives considered, as well as the size of the network, is associated with an increase in runtime. In case of the algorithms EL-SB and EL-PB, the values of the edge cost components also influence the runtime significantly. The experiments on instances with flimsily and highly robust efficiency show that it is preferable to use the class of repeated labeling algorithms for grid and many NetMaker instances, while extended labeling algorithms sometimes perform better for NetMaker networks with correlated scenarios.

Conclusion
In this article, we have investigated whether and how a generic label correcting algorithm for the multi-objective shortest path problem can be extended to find robust efficient solutions for the multi-objective uncertain shortest path problem. We have introduced algorithms to find robust efficient solutions for several popular concepts of robust efficiency, which can be classified into extended and repeated labeling algorithms. We compared their performance experimentally on several instances of grid networks and NetMaker networks and observed that the repeated labeling algorithms are often, but not always, faster than the extended labeling algorithms. We observed that in particular finding minmax robust efficient solutions is challenging even for small networks and few scenarios and objectives.
Therefore, investigating possible accelerations of the algorithms seems worthwhile, for example, by reducing the number of prediction matrices with the help of better upper bounds on the longest paths. More efficient ways to store and evaluate the information about the prediction matrices, than constructing one label per matrix, are also of interest.
There exists a great number of further concepts of robust efficiency, for example, lightly robust efficiency [24,26,39] and hull-based minmax robust efficiency [6]. The conditions for using the generic label correcting algorithm and the methods to extend it, as presented in this article, can also be useful when other concepts are considered.
The algorithm for the multi-objective problem that we have extended to the multi-objective uncertain case is a generic algorithm with label selection, and our extended algorithms still include the label selection step. It would be of interest to investigate which label selection methods are best suited for the algorithms introduced in this article. In addition, the ideas presented to extend the label correcting algorithm with label selection might also be applicable to other labeling algorithms. Further research could also include possible extensions of other methods for solving the multi-objective or the robust shortest path problem.
In robust optimization, PRO (Pareto robust optimal) solutions are of interest (see [22] for single-objective, [26] for bi-objective, and [7] for general multi-objective problems). PRO robust efficient solutions are solutions that are multi-scenario efficient and, at the same time, robust efficient w.r.t. some other concept. To find PRO robust efficient solutions, the approach given in [26] for bi-objective problems with uncertainty in only one objective can be extended to several uncertain objectives: first, one finds a complete set of multi-scenario efficient solutions; then these solutions are filtered to obtain the PRO robust efficient solutions. Compared to the filtering procedure given in [26], filtering is much more time-consuming for several uncertain objectives. Therefore, efficient filtering methods are of interest. In addition, pruning techniques would be useful, for example, as proposed in [32], where a multi-objective label correcting algorithm is used to find solutions of the single-objective minmax problem.