Minimization of Constraint Violation Probability in Model Predictive Control

While Robust Model Predictive Control considers the worst-case system uncertainty, Stochastic Model Predictive Control, using chance constraints, provides less conservative solutions by allowing a certain constraint violation probability depending on a predefined risk parameter. However, for safety-critical systems it is not only important to bound the constraint violation probability but to reduce this probability as much as possible. Therefore, an approach is necessary that minimizes the constraint violation probability while ensuring that the Model Predictive Control optimization problem remains feasible. We propose a novel Model Predictive Control scheme that yields a solution with minimal constraint violation probability for a norm constraint in an uncertain environment. Once minimal constraint violation probability is guaranteed, the solution is also optimized with respect to other control objectives. Further, it is possible to account for changes of the uncertainty support over time. We first present a general method and then provide an approach for uncertainties with symmetric, unimodal probability density function. Recursive feasibility and convergence of the method are proved. A simulation example demonstrates the effectiveness of the proposed method.

the chance constraint must be satisfied for all drawn scenarios. A low risk parameter yields a large number of necessary scenarios to be considered.
However, the benefits of using chance constraints come with the disadvantage of constraint violations if unlikely uncertainty realizations occur. This also leads to the problem of ensuring recursive feasibility, i.e., guaranteeing that the MPC optimization problem remains solvable at every step. In case of an uncertainty realization with low probability, there might not be an admissible control input that can satisfy the chance constraints. Recursive feasibility also becomes an issue if the maximal uncertainty value is not constant, i.e., the support of the uncertainty probability density function changes over time.
Recursive feasibility of SMPC for bounded disturbances is addressed by Korda et al. [19]. A further approach addressing recursive feasibility in SMPC is the use of stochastic tube methods [20], [21] with constraint tightening. Lorenzen et al. [22] suggested an approach that combines the works of Korda et al. [19] and Kouvaritakis et al. [20], where a tuning parameter is introduced that allows for shifting priority between performance and an increased feasible region. Recursive feasibility in SMPC for probabilistically constrained Markovian jump linear systems is addressed by Lu et al. [23].
Due to its ability to efficiently cope with environments subject to uncertainty, SMPC has become increasingly popular in applications such as process control [14], [24], energy control [25] and power systems [26], [27], finance [28], general automotive applications [29], as well as more specifically safety-critical applications, e.g., path planning [15] and autonomous driving [30]–[35]. However, the possible constraint violation and the resulting infeasibility of the optimization problem are limiting factors when designing an efficient SMPC algorithm in practice, especially in safety-critical applications.
A further drawback of chance constraints in SMPC appears if the optimal solution is 'on the chance constraint' even though other solutions are possible with no or only minimal effect on the cost function. In other words, a solution of the SMPC optimization problem minimizes the cost function and satisfies the required probability for the chance constraint. There might be other solutions with low cost whose chance constraint violation probability is lower than required by the risk parameter, or even zero. However, the SMPC optimization terminates once a solution with minimal cost is found that satisfies the chance constraint for the given risk parameter. This means that a solution with a lower constraint violation probability is not found. Additionally, choosing a suitable risk parameter is challenging, as high values increase risk while low values reduce efficiency.
These issues are especially relevant in safety-critical systems. One example is an autonomous vehicle that plans to avoid collisions in an uncertain environment, e.g., a car avoiding a bicycle with uncertain behavior. If the support of the uncertainty is not known a priori, RMPC algorithms are either not applicable or require that the vehicle does not move until all surrounding vehicles are far enough away. This, however, is not practical. Therefore, the collision constraint, realized as a norm constraint, could be transformed into a chance constraint in an SMPC approach, allowing a small collision probability. While this yields a more efficient solution than RMPC, it might result in a collision. However, an autonomous vehicle must always choose the safest sensible trajectory, even if it comes at the cost of increased energy consumption or longer travel time. Further, if the chance constraint in SMPC cannot be satisfied anymore because an unlikely scenario occurred or the uncertainty support changed, the optimization problem becomes infeasible. Alternative control laws, e.g., full braking, and recovery strategies can then be used to regain a feasible controller. However, in such scenarios where the chance constraint cannot be satisfied, the controller should ideally yield the safest solution possible, which is not guaranteed with standard recovery strategies. In the example of the autonomous vehicle this is the solution with the lowest collision probability.
In this paper we propose a novel MPC strategy for linear, discrete-time systems which not only satisfies general hard constraints over the entire prediction horizon, but additionally minimizes the probability of violating a norm constraint in the next predicted step, while also optimizing for other control objectives. This is achieved by first calculating a set that constrains the system inputs such that only those inputs are allowed which minimize the constraint violation probability. This is then followed by an optimization problem which optimizes further required objectives such as fast reference tracking or energy consumption. In this subsequent optimization problem only those inputs are admissible which minimize the norm constraint violation probability. The proposed method can handle time-varying bounds for the support of the system uncertainty and considers hard constraints on the state and input for the entire prediction horizon, e.g., due to actuator limitations.
We will first present the general method to minimize constraint violation probability. For the general method it can be difficult to determine a tightened set of admissible inputs which guarantee minimal constraint violation probability. Therefore, we suggest an approach which allows the computation of this tightened input set, given uncertainties with symmetric, unimodal probability density function, i.e., the relative likelihood of uncertainty realizations decreases with increased distance to the mean. This tractable approach yields a convex set of inputs which minimize the constraint violation probability. Guarantees are provided for recursive feasibility and convergence of the proposed MPC algorithm. In the following we will refer to the proposed method as CVPM-MPC, i.e., MPC with constraint violation probability minimization (CVPM). A simulation for a vehicle collision avoidance scenario is shown to display the effectiveness of the proposed method and highlight its advantages compared to SMPC and RMPC.
In summary, the contribution is as follows:
• Derivation of a CVPM-MPC approach for uncertainties with symmetric, unimodal probability density function
• Guarantee of recursive feasibility and convergence
The proposed CVPM-MPC method can be beneficial to multiple applications, especially to safety-critical applications such as autonomous driving or human-robot interaction where the risk measure regarding collision is norm-based [30], [36], [37]. In these safety-critical applications there is a clear priority on maximizing safety, i.e., the constraint violation probability of safety constraints needs to be minimal, before optimizing other objectives, e.g., energy consumption. While in general it is possible to minimize the constraint violation probability not only for the first step but for multiple steps, this significantly increases conservatism, resulting in solutions which are more similar to RMPC solutions. Minimizing the first step probability iteratively yields the advantage of safer solutions than SMPC and less conservatism compared to RMPC. We therefore focus on iteratively minimizing the constraint violation probability for the next step, i.e., the first predicted MPC step; however, a solution approach for a multi-step CVPM-MPC method is also provided.
The remainder of the paper is structured as follows. Section II introduces the considered system, the uncertainty, and the control objective. The proposed method is introduced in Section III, first focusing on minimizing the constraint violation probability, then introducing the resulting MPC algorithm. Section IV analyzes the properties of the suggested method, providing guarantees on recursive feasibility and convergence, while the CVPM-MPC method is discussed further in Section V. An example of the applied method is given in Section VI, simulating a vehicle collision avoidance scenario. Section VII provides concluding remarks.
Notation: Regular letters indicate scalars, bold lowercase letters denote vectors, and bold uppercase letters are used for matrices, e.g., a, a, A, respectively. Random variables are represented by bold uppercase letters. The Euclidean norm is denoted by ‖·‖_2. The probability of an event is given by P(·). A probability distribution is denoted by p and is described by the probability density function f if a probability density function exists. The probability distribution and density function have support supp(p) and supp(f), respectively.
Step k of a state or parameter is represented by a subscript k, e.g., x_k for state x. The integers in the interval between a and b, including the boundaries, are denoted by I_{a:b}.

II. PROBLEM SETUP
In this section we define the system class and the general MPC algorithm. Additionally, a probabilistic norm constraint is introduced.

A. System Dynamics and Control Objective
Consider the controlled linear, time-invariant, discrete-time system

x_{k+1} = A x_k + B u_k,  y_k = C x_k,    (1)

with time step k, states x_k ∈ R^n, control input u_k ∈ R^m, output y_k ∈ R^q, and matrices A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{q×n}. Furthermore, we consider an uncertain system

y_{r,k+1} = y_{r,k} + u_{r,k} + w_k    (2a)
          = ŷ_{r,k+1} + w_k    (2b)

depending on the output y_{r,k} ∈ R^q at step k, a deterministic, known input u_{r,k} ∈ R^q, and a stochastic part w_k ∈ R^q which is the realization of a random variable W_k. The nominal prediction of y_{r,k+1} is indicated by ŷ_{r,k+1} = y_{r,k} + u_{r,k}, consisting of the previous output y_{r,k} and the deterministic, known input u_{r,k}.
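As a minimal numerical sketch, one step of both systems can be simulated as below; the double-integrator matrices, the inputs, and the uncertainty realization w_k are illustrative assumptions, not values from the paper.

```python
import numpy as np

# One step of the controlled system (1) and the uncertain system (2).
# All numerical values are illustrative assumptions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])

def step_controlled(x_k, u_k):
    """x_{k+1} = A x_k + B u_k, y_{k+1} = C x_{k+1}."""
    x_next = A @ x_k + B @ u_k
    return x_next, C @ x_next

def step_uncertain(y_r_k, u_r_k, w_k):
    """y_{r,k+1} = y_{r,k} + u_{r,k} + w_k."""
    return y_r_k + u_r_k + w_k

x1, y1 = step_controlled(np.array([1.0, 0.0]), np.array([0.5]))
y_r1 = step_uncertain(np.array([0.0]), np.array([0.2]), np.array([0.05]))
```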
Assumption 1 (Uncertainty Properties). The random variables W_k ∼ f_{W_k} with probability distribution p_{W_k} and density function f_{W_k} have zero mean and are truncated with the initially known, convex, and bounded support supp(f_{W_k}).
The support of f_{W_k} is given by

supp(f_{W_k}) = {w ∈ R^q : ‖w‖_2 ≤ w_{max,k}},    (3)

where w_{max,k} ∈ R_{≥0}. The controller for (1) is designed to (approximately) optimize an infinite horizon objective function, while accounting for input and state constraints. For an initial state x_k at time step k the objective function to be minimized is

J(x_k) = Σ_{j=0}^{∞} (x_j^⊤ Q x_j + u_j^⊤ R u_j),    (4)

with Q ∈ R^{n×n}, R ∈ R^{m×m} and Q ⪰ 0, R ≻ 0. In the following the index k represents regular time steps while the index j indicates prediction steps within an MPC optimization problem, similar to (4).
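The quadratic stage cost underlying the objective can be evaluated as in the following sketch; the weights Q (positive semidefinite) and R (positive definite) are illustrative assumptions.

```python
import numpy as np

# Quadratic stage cost x^T Q x + u^T R u of the infinite horizon objective;
# the weights below are illustrative assumptions.
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

def stage_cost(x, u):
    """Cost contribution of one (regular or predicted) time step."""
    return float(x @ Q @ x + u @ R @ u)

cost_example = stage_cost(np.array([2.0, 0.0]), np.array([1.0]))
```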

B. Model Predictive Control
We consider an MPC algorithm to solve the control problem (4) with a finite horizon objective function J_N. MPC repeatedly solves an optimization problem on a finite horizon. After the optimization only the first optimized control input is applied. Then the horizon is shifted by one step and a new optimization is performed. Without loss of generality it is assumed that an MPC iteration starts with x_0. The finite horizon cost is then given by

J_N(x_0, U_0) = Σ_{j=0}^{N−1} (x_j^⊤ Q x_j + u_j^⊤ R u_j) + V_f(x_N),    (5)

with the input sequence U_0 = (u_0, ..., u_{N−1}) and a terminal cost V_f. We first formulate the MPC optimization problem with input constraints U and state constraints X that are independent of the uncertain system (2), resulting in

min_{U_0} J_N(x_0, U_0)    (6a)
s.t. x_{j+1} = A x_j + B u_j,    (6b)
     u_j ∈ U,    (6c)
     x_j ∈ X,    (6d)
     x_N ∈ X_f.    (6e)

The input u_j is bounded by the non-empty input value space U ⊆ R^m, i.e., the input constraint (6c). The convex state constraint and terminal constraint set are given by X and X_f, respectively.
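The receding-horizon idea can be sketched as follows. This is a deliberately crude stand-in for the optimization: instead of a QP solver, the input sequence is gridded and the finite horizon cost is minimized by enumeration; the matrices, weights, horizon, and input bounds are illustrative assumptions.

```python
import itertools
import numpy as np

# Crude receding-horizon sketch: gridded inputs, cost minimized by enumeration.
# All numerical values are illustrative assumptions, not from the paper.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.01]])
U_GRID = np.linspace(-1.0, 1.0, 9)   # gridded input constraint set U = [-1, 1]
N = 3                                # prediction horizon

def rollout_cost(x0, u_seq):
    """Finite horizon cost: quadratic stage costs plus a quadratic terminal cost."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += float(x @ Q @ x + u * R[0, 0] * u)
        x = A @ x + B @ np.array([u])
    return cost + float(x @ Q @ x)

def mpc_input(x0):
    """Receding horizon: optimize over the horizon, apply only the first input."""
    best = min(itertools.product(U_GRID, repeat=N),
               key=lambda seq: rollout_cost(x0, seq))
    return best[0]

u0 = mpc_input(np.array([1.0, 0.0]))
```

A practical implementation would replace the enumeration with a convex QP solve; the receding-horizon structure is unchanged.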

Assumption 2 (Control Invariant Terminal Set).
For all x j ∈ X f , there exists an admissible u j such that x j+1 ∈ X f .
Assumption 3 (Lyapunov Functions). The finite horizon cost and the terminal cost are Lyapunov functions in X and X_f, respectively.
We denote by U_{x,j} the set of admissible inputs u_j such that all constraints of (6) are satisfied at step j, i.e.,

U_{x,j} = {u_j ∈ U : A x_j + B u_j ∈ X},    (7)

with X replaced by X_f at the terminal step.

Remark 1. Instead of steering x_k to the origin as in (5), specific references x_{ref,k} can also be tracked.

C. Model Predictive Control with Norm Constraint
In the following the uncertain system (2) is considered.
Assumption 4 (Initially known Uncertainty). The initial state y r,0 and deterministic input u r,0 are known at the beginning of each MPC optimization.
Here, we consider an additional constraint for the MPC problem (6), the norm constraint

‖y_k − y_{r,k}‖_2 ≥ c_k,    (8)

representing a constraint on the 2-norm, e.g., the distance between two points must not be smaller than a minimal value c_k. While (8) is a hard constraint, we will first transform (8) into a chance constraint and later, in Section III, we will minimize the probability that this norm constraint is violated.
Remark 2. It is also possible to consider a p-norm constraint with ‖y_k − y_{r,k}‖_p instead of the 2-norm. Similar to the 2-norm, all p-norms are convex. Without loss of generality we will consider the 2-norm, as most applications require a 2-norm to represent the Euclidean distance.
As y_{r,k} is subject to uncertainty, the norm constraint (8) is difficult, potentially impossible, to fulfill, or it might lead to overly conservative control inputs. The hard norm constraint (8) can be relaxed if substituted by the chance constraint

P(‖y_k − y_{r,k}‖_2 < c_k) ≤ β_k    (9)

with

p_{cv,k}(u_{k−1}) := P(‖y_k − y_{r,k}‖_2 < c_k),

where β_k is a risk parameter and p_{cv,k} denotes the constraint violation probability for the norm constraint (8). The constraint violation probability p_{cv,k} for step k is evaluated at step k − 1, i.e., at the previous step. Therefore, the probability p_{cv,k} depends on the input u_{k−1}, yielding y_k according to (1). In the following, the dependence of p_{cv,k} on u_{k−1} is omitted if it reduces notation complexity.
The following example illustrates the idea of the chance constraint. We consider a controlled object with position y_k and a dynamic obstacle with position y_{r,k}, where ‖y_k − y_{r,k}‖_2 is the distance between both objects. The objects collide if ‖y_k − y_{r,k}‖_2 < c_k. An interpretation of (9) is that p_{cv,k} represents the probability of a collision and this constraint violation probability is bounded by the predefined risk parameter β_k. A similar example is analyzed in a simulation in Section VI.
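The collision probability in this example can be estimated by Monte Carlo sampling, as in the sketch below; the uniform-disc uncertainty with bounded support ‖w‖_2 ≤ w_max, the positions, and the parameters c and w_max are illustrative assumptions.

```python
import numpy as np

# Monte Carlo estimate of the collision probability P(||y - y_r||_2 < c).
# The uncertainty model and all numerical values are illustrative assumptions.
rng = np.random.default_rng(0)

def estimate_p_cv(y_next, y_r_nominal, c, w_max, n_samples=100_000):
    # Sample w uniformly from the disc ||w||_2 <= w_max (bounded support).
    angles = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    radii = w_max * np.sqrt(rng.uniform(0.0, 1.0, n_samples))
    w = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
    dist = np.linalg.norm(y_next - (y_r_nominal + w), axis=1)
    return float(np.mean(dist < c))

# Nominal gap of 3 with c = 1 and w_max = 1: a collision is impossible.
p_safe = estimate_p_cv(np.array([3.0, 0.0]), np.zeros(2), c=1.0, w_max=1.0)
# Nominal gap of 1.5: some uncertainty realizations violate the constraint.
p_risky = estimate_p_cv(np.array([1.5, 0.0]), np.zeros(2), c=1.0, w_max=1.0)
```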
The support of p_{cv,k} is bounded due to the bounded support of the uncertainty, resulting in p_{cv,k} = 0 if the maximal uncertainty value w_{max,k−1} cannot cause ‖y_k − y_{r,k}‖_2 < c_k. While it is possible to consider the norm constraint (8) over multiple steps, we will only consider the norm constraint for the next predicted step j = 1 with a horizon N ≥ 1. Applying (8) over the entire horizon N results in a conservative control law similar to RMPC.
Adding the hard norm constraint (8) for the first predicted step to the MPC problem (6) yields

min_{U_0} J_N(x_0, U_0)    (12a)
s.t. x_{j+1} = A x_j + B u_j,    (12b)
     u_j ∈ U,    (12c)
     x_j ∈ X,    (12d)
     x_N ∈ X_f,    (12e)
     ‖y_1 − y_{r,1}‖_2 ≥ c_1.    (12f)

Only the general MPC problem (6) is addressed in Assumptions 2 and 3; the norm constraint (12f) is not considered there, as (12f) is specifically addressed in the method presented in Sections III and IV.
Remark 3. In (12) the norm constraint (8) is only considered in the first step, i.e., at step j = 1, as we later minimize the probability of constraint violation for the first step. However, if this norm constraint is required at future steps j ∈ I_{2:N}, this can be achieved by treating (8) as a chance constraint, similar to (13), resulting in

P(‖y_j − y_{r,j}‖_2 < c_j) ≤ β_j,  j ∈ I_{2:N}.    (15)
This chance constraint (15) is then added to (6) and subsequently needs to be considered in (7). Assumptions 2 and 3 still need to be fulfilled if chance constraints are included for j ∈ I 2:N in the optimization problem.

D. Problem Statement
Instead of only bounding the chance constraint (13) by the risk parameter β_1, we aim at minimizing the constraint violation probability p_{cv,1} within the MPC optimization problem. The challenge is to solve the MPC problem (12a)–(12e), while it needs to be guaranteed that

p_{cv,1} = min_{u_0 ∈ U_{x,0}} P(‖y_1 − y_{r,1}‖_2 < c_1)    (16)

and that the MPC problem remains recursively feasible.
Multiple issues arise when implementing chance constraints. As (9) is a probabilistic constraint, it cannot directly be handled by an optimization solver and needs to be transformed into a tractable substitute.
If chance constraints are used in SMPC, two further problems occur. First, recursive feasibility of the SMPC optimization problem needs to be ensured. If the SMPC optimization problem is solvable at step k, it must also be solvable at step k + 1 to guarantee recursive feasibility. This is a challenge for various SMPC methods as the risk parameter β_k allows a certain probability of constraint violation, causing infeasibility of the optimization problem for uncertainty realizations with low probability. Additionally, in both SMPC and RMPC recursive feasibility is not ensured in case of an unexpectedly increasing uncertainty support. Robustness in RMPC or satisfaction of the chance constraint in SMPC is typically only ensured for the initially considered uncertainty support.

Fig. 1: Visualization of the CVPM-MPC method: Given an input set and state constraints, as well as the current system state and uncertain system state, an updated input set is determined. This updated input set minimizes the norm constraint violation probability for the next step. Then an MPC optimization problem is solved. The updated input set ensures constraint violation probability minimization while optimizing for other objectives.
Second, in safety-critical systems a further aspect reduces the usability of chance constraints in SMPC. A solution is valid as long as the probability of violating the safety constraint satisfies the risk parameter. Assuming there exists a solution with lower, or even zero, constraint violation probability, the optimization solution will still be 'on the chance constraint' if this results in lower objective cost, i.e., it allows constraint violations up to the risk parameter.
We consider again the example from the introduction of a car overtaking a bicycle. Using a chance constraint with β_k > 0, the car will pass the bicycle but will choose a trajectory around the bicycle that admits a small collision probability due to β_k > 0. Given a finite bicycle uncertainty support, passing the bicycle with slightly more distance yields zero collision probability at only a small increase in cost. In practice, this slightly increased cost is acceptable if safety is thereby guaranteed.
In this paper we propose a novel MPC approach, CVPM-MPC, that first ensures the minimal constraint violation probability p cv,1 , but then still optimizes the objective function J N (x 0 , U 0 ). This approach yields a control input resulting in the lowest possible constraint violation probability, given input and state constraints, while still optimizing further objectives. The CVPM-MPC method guarantees recursive feasibility, also for a changing uncertainty support, and ensures convergence of the MPC algorithm.

III. METHOD
In this section we derive the CVPM-MPC method to minimize the constraint violation probability p cv,j for the first predicted step j = 1 in an MPC problem. First, a general approach is presented to find a tightened admissible input set that minimizes the first step constraint violation probability. In the following part it is shown how this approach can be incorporated into MPC. A visualization of the method is displayed in Figure 1. As determining the tightened input set within the CVPM-MPC method is difficult in general, we then provide an alternative, computable approach, assuming an uncertainty with symmetric, unimodal probability density function (PDF). A solution approach for a multi-step CVPM-MPC is described in Appendix I.

A. General Method to Minimize Constraint Violation Probability for One-Step Problem
When minimizing p cv,1 over u 0 within the MPC algorithm, three different cases need to be considered. In each case a set U cvpm,0 is determined which consists of inputs u 0 that minimize the constraint violation probability. Ideally, even considering the bounded uncertainty, satisfaction of the constraint can be guaranteed in the next step, for all choices of u 0 ∈ U x,0 , which will be referred to as case 1. However, for stochastic systems we potentially have the situation that case 1 cannot be guaranteed. Here, two cases need to be distinguished. First, given the uncertainty, there is no choice for u 0 that guarantees constraint satisfaction (case 2). Second, some choices for u 0 guarantee constraint satisfaction, while other choices do not lead to such a guarantee (case 3). Depending on the case, U cvpm,0 is determined differently as described in the following.

Case 1 (Guaranteed Constraint Satisfaction):
The probability of violating the norm constraint is zero independent of the choice of u_0, i.e.,

p_{cv,1}(u_0) = 0 for all u_0 ∈ U_{x,0}.    (17)

Therefore, every u_0 ∈ U_{x,0} is a valid input, resulting in

U_{cvpm,0} = U_{x,0}.    (18)

Case 2 (Impossible Constraint Satisfaction Guarantee): There is no choice of u_0 such that constraint satisfaction can be guaranteed in the presence of uncertainty, i.e.,

p_{cv,1}(u_0) > 0 for all u_0 ∈ U_{x,0}.    (19)

As it is impossible to guarantee p_{cv,1} = 0, the aim is to minimize p_{cv,1}. Selecting

U_{cvpm,0} = argmin_{u_0 ∈ U_{x,0}} p_{cv,1}(u_0)    (20)

yields the set U_{cvpm,0}, which only consists of inputs u_0 that minimize p_{cv,1}.

Case 3 (Possible Constraint Satisfaction Guarantee):
If only some inputs u_0 guarantee satisfaction of the constraint (12f), i.e.,

p_{cv,1}(u_0) = 0 for some, but not all, u_0 ∈ U_{x,0},    (21)

then the set

U_{cvpm,0} = {u_0 ∈ U_{x,0} : p_{cv,1}(u_0) = 0}    (22)

consists of those inputs which yield guaranteed constraint satisfaction.
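Over a finite set of candidate inputs, the three cases reduce to a simple selection rule, sketched below; the candidates, the probability evaluators, and the tolerance used to decide p_cv,1 = 0 are illustrative assumptions, and a practical implementation works with input sets rather than samples.

```python
import numpy as np

# Three-case selection of U_cvpm,0 over finitely many candidate inputs, given
# any evaluator of the violation probability p_cv,1.  All inputs, evaluators,
# and the tolerance are illustrative assumptions.
def select_u_cvpm(candidates, p_cv_of, tol=1e-12):
    p = np.array([p_cv_of(u) for u in candidates])
    if np.all(p <= tol):            # case 1: every input guarantees p_cv,1 = 0
        return list(candidates)
    if np.all(p > tol):             # case 2: no safe input, keep a minimizer
        return [candidates[int(np.argmin(p))]]
    # case 3: keep exactly the inputs that guarantee p_cv,1 = 0
    return [u for u, pi in zip(candidates, p) if pi <= tol]

case1 = select_u_cvpm([0.0, 1.0], lambda u: 0.0)
case2 = select_u_cvpm([0.0, 1.0], lambda u: 0.3 - 0.1 * u)
case3 = select_u_cvpm([0.0, 1.0], lambda u: 0.0 if u > 0.5 else 0.3)
```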
In all three cases U cvpm,0 needs to be found, leading to the following strong assumption.
Assumption 5. The set U cvpm,0 can be determined for all cases 1-3.
While it is possible to approximate U cvpm,0 by sampling, finding an analytic solution for U cvpm,0 highly depends on the probability distribution. However, if U cvpm,0 can be determined, the CVPM-MPC method guarantees minimal constraint violation probability for p cv,1 .
Theorem 1. If Assumption 5 holds, minimization of the constraint violation probability of p cv,1 is guaranteed by selecting U cvpm,0 according to cases 1-3.
Proof. The proof follows directly from the definition of the three cases. All possibilities are covered regarding the guarantee of constraint satisfaction, i.e., guaranteed constraint satisfaction (case 1), impossible constraint satisfaction guarantee (case 2), and the case where constraint satisfaction is only guaranteed for some but not all u_0 ∈ U_{x,0} (case 3). If p_{cv,1} = 0 is possible, i.e., case 1 or 3, (18) and (22) guarantee that U_{cvpm,0} consists only of inputs u_0 ∈ U_{x,0} which yield p_{cv,1} = 0. If no u_0 ∈ U_{x,0} guarantees p_{cv,1} = 0, minimal constraint violation is guaranteed by only allowing inputs u_0 ∈ U_{x,0} which minimize p_{cv,1} according to (20).
In dynamic environments the worst-case uncertainty w_{max,k} can change over time, which influences the probability of constraint violations. If the support changes, the CVPM-MPC approach still minimizes this constraint violation probability.

Corollary 1. If the uncertainty support supp(f_{W_k}) changes between steps k and k + 1, the CVPM-MPC problem solved at step k + 1 guarantees that the constraint violation probability p_{cv,k+2} is minimized.
Proof. The proof follows directly from the problem definition. First, the CVPM-MPC approach ensures that the constraint violation probability is minimized for each step, which allows p cv,k+2 > p cv,k+1 if the uncertainty support increases. Second, minimizing p cv,k+2 is independent of minimizing p cv,k+1 .
The MPC problem (12) is now adapted given the set U cvpm,0 to guarantee minimal constraint violation probability of (12f) while still optimizing for further objectives.

B. Model Predictive Control with Minimal First Step Constraint Violation Probability
Applying the previously determined U_{cvpm,0} yields the CVPM-MPC problem

min_{U_0} J_N(x_0, U_0)    (23a)
s.t. (12b)–(12e),    (23b)
     u_0 ∈ U*_0.    (23c)

The set U*_0 defines the admissible inputs which yield minimal constraint violation probability while keeping the inputs and states within the input and state constraint sets. The set U*_0 is given by

U*_0 = U_{x,0} ∩ U_{cvpm,0},    (24)

where U_{x,j} is defined in (7) and U_{cvpm,0} is obtained according to Section III-A. The complete CVPM-MPC problem (23) allows optimizing a cost function and satisfying state and input constraints, while minimization of the constraint violation probability p_{cv,1} is ensured.

C. Minimal Constraint Violation Probability for One-Step Problem with Symmetric Unimodal PDF
The proposed CVPM-MPC method in Section III-A only guarantees minimal constraint violation probability if Assumption 5 is fulfilled. Therefore, it must be possible to always determine U_{cvpm,0}, which is a strong assumption. In the following we provide an adapted approach of the CVPM-MPC method which guarantees minimal constraint violation probability if the PDF of the uncertainty is symmetric and unimodal.
In the following we first give a definition for symmetric, unimodal PDFs. Further, we introduce a substitute for the constraint violation probability p cv,k . Then, the three cases from Section III-A are adapted in order to minimize p cv,1 for the PDF addressed in the following. For each case a convex set of admissible inputs U cvpm,0 is determined.
1) Symmetric Unimodal PDF: We first define the class of symmetric, unimodal probability distributions.

Definition 1 (Symmetric Unimodal Distribution).
A probability distribution is symmetric and unimodal if its PDF f has a single mode which coincides with its mean µ and

f(w_1) = f(w_2) if ‖w_1 − µ‖_2 = ‖w_2 − µ‖_2,  f(w_1) ≥ f(w_2) if ‖w_1 − µ‖_2 ≤ ‖w_2 − µ‖_2.

With Definition 1 it is ensured that the PDF has its peak at the mean µ. As the probability distribution is symmetric, all realizations with equal distance to µ have the same relative likelihood. Since there is only one global maximum of the PDF at µ, realizations with increasing distance to µ have a lower relative likelihood.
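Definition 1 can be checked numerically for a concrete PDF; the scalar Gaussian below is an illustrative choice.

```python
import numpy as np

# Numerical check of Definition 1 for a scalar Gaussian PDF: symmetric about
# its mean and monotonically decaying with the distance to the mean.
def gauss_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

deltas = np.linspace(0.0, 4.0, 50)
f_right = gauss_pdf(0.0 + deltas)        # realizations right of the mode
f_left = gauss_pdf(0.0 - deltas)         # mirrored realizations
symmetric = bool(np.allclose(f_right, f_left))
unimodal = bool(np.all(np.diff(f_right) < 0))   # density decays away from mu
```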
The constraint violation probability p cv,k is a probabilistic expression and cannot directly be used in the optimization problem. The following assumption will allow to find a deterministic substitute for p cv,k .
Assumption 6 (Uncertainty with Symmetric Unimodal PDF). The PDF f W k for W k in (2) is symmetric and unimodal with mean µ = 0.
An example of an admissible probability distribution p_{W_k} with symmetric, unimodal PDF is a truncated isotropic bivariate normal distribution N(0, Σ) with covariance matrix Σ = σ²I, with variance σ² and identity matrix I. The support in each direction is required to be equal, which can be achieved by over-approximation. Distributions with σ_1 ≠ σ_2 can be over-approximated by choosing σ = max(σ_1, σ_2). We now address the relation between p_{cv,k} and f_{W_k} considering Assumption 6. The following lemma shows that the constraint violation probability p_{cv,k} can be decreased by choosing u_{k−1} such that the distance is increased between the next system output y_k and the next known, nominal random system output ŷ_{r,k}.

Lemma 1. If Assumption 6 holds, the probability p_{cv,k} is decreasing for an increasing norm ‖y_k − ŷ_{r,k}‖_2.
Proof. According to Assumption 6, f_{W_k} is symmetric and unimodal, and therefore f_{W_k} is decreasing with increasing ‖w_k‖_2, i.e., the larger the value ‖w_k‖_2, the lower its relative likelihood. The realization with the highest relative likelihood is the mode of f_{W_k} at w_k = 0, yielding the most likely random output y_{r,k+1} = ŷ_{r,k+1}. It follows that

f_{W_k}(w̃_k) < f_{W_k}(w_k),    (28)

where ỹ_{r,k+1} = ŷ_{r,k+1} + w̃_k is less likely than y_{r,k+1} = ŷ_{r,k+1} + w_k and

‖ŷ_{r,k+1} − ỹ_{r,k+1}‖_2 > ‖ŷ_{r,k+1} − y_{r,k+1}‖_2    (29)

due to ‖w̃_k‖_2 > ‖w_k‖_2. It follows that the larger the value ‖y_{k+1} − ŷ_{r,k+1}‖_2, the higher the probability of a large value ‖y_{k+1} − y_{r,k+1}‖_2 due to (28). Therefore, the larger ‖y_{k+1} − ŷ_{r,k+1}‖_2, the higher the probability of ‖y_{k+1} − y_{r,k+1}‖_2 ≥ c_k. This results in

p_{cv,k+1}(ũ_k) ≤ p_{cv,k+1}(u_k) for ‖ỹ_{k+1} − ŷ_{r,k+1}‖_2 ≥ ‖y_{k+1} − ŷ_{r,k+1}‖_2

with ỹ_{k+1} = C(Ax_k + Bũ_k) according to (1). The same holds for ‖y_k − ŷ_{r,k}‖_2 instead of ‖y_{k+1} − ŷ_{r,k+1}‖_2, showing that p_{cv,k} is decreasing with an increasing ‖y_k − ŷ_{r,k}‖_2.
The lemma shows that the probability of violating the norm constraint (12f) decreases if the distance between y_k and ŷ_{r,k} increases. Lemma 1 now makes it possible to find a substitute function for p_{cv,k}.
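Lemma 1 can be illustrated numerically: for a truncated (hence bounded-support) isotropic Gaussian uncertainty, a Monte Carlo estimate of P(‖y − y_r‖_2 < c) decreases as the nominal distance grows; all parameters below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo illustration of Lemma 1 with a truncated isotropic Gaussian
# uncertainty.  All numerical values are illustrative assumptions.
rng = np.random.default_rng(1)

def p_cv(distance, c=1.0, sigma=0.5, w_max=1.5, n=200_000):
    w = rng.normal(0.0, sigma, size=(4 * n, 2))
    w = w[np.linalg.norm(w, axis=1) <= w_max][:n]     # truncate the support
    gap = np.array([distance, 0.0]) - w               # y - y_r per sample
    return float(np.mean(np.linalg.norm(gap, axis=1) < c))

probs = [p_cv(d) for d in (0.5, 1.0, 1.5, 2.0, 3.0)]
decreasing = all(a >= b for a, b in zip(probs, probs[1:]))
```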
2) Substitute Probability Function: In order to provide a tractable substitute for p_{cv,j} in the CVPM-MPC problem, we introduce a scalar, twice differentiable, strictly monotonically increasing function h, where h(‖y_j − ŷ_{r,j}‖_2) is used as a substitute for the constraint violation probability p_{cv,j}, as p_{cv,j} is decreasing in the norm ‖y_j − ŷ_{r,j}‖_2 according to Lemma 1, while h(‖y_j − ŷ_{r,j}‖_2) is increasing in ‖y_j − ŷ_{r,j}‖_2. This substitution has the benefit that an increasing ‖y_j − ŷ_{r,j}‖_2 is equivalent to a decreasing constraint violation probability. Considering the constraint violation probability for the first predicted step j = 1, the probability p_{cv,1} is minimized for a maximal h(‖y_1 − ŷ_{r,1}‖_2). However, since f_{W_k} is truncated and supp(p_{cv,k}) is bounded, there are potentially multiple admissible inputs which result in an equal constraint violation probability. The aim is now to find the convex set U_{cvpm,0} including all inputs u_{cvpm,0} ∈ U_{cvpm,0} which result in a minimal p_{cv,1}. As u_{r,0} is deterministic and known according to Assumption 4, h(‖y_1 − ŷ_{r,1}‖_2) is a deterministic expression that can be evaluated.
The set U_{cvpm,0} can then be found by comparing the worst-case uncertainty w_{max,0} with the minimum and maximum possible values of h(‖y_1 − ŷ_{r,1}‖_2), i.e., h_{min,1} and h_{max,1}, respectively. The maximal value h_{max,1} is determined by

h_{max,1} = max_{u_0 ∈ U_{x,0}} h(‖y_1 − ŷ_{r,1}‖_2),

corresponding to the largest distance between y_1 and ŷ_{r,1}. Analogously, h_{min,1} can be found by

h_{min,1} = min_{u_0 ∈ U_{x,0}} h(‖y_1 − ŷ_{r,1}‖_2).

The result for h_{min,1} can be obtained by determining the minimum value of ‖y_1 − ŷ_{r,1}‖_2, as the objective function h(‖y_1 − ŷ_{r,1}‖_2) and U_{x,0} are convex. The following lemma provides a strategy to find h_{max,1}.
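For a box-shaped input set, h_min,1 and h_max,1 can be computed as sketched below with the illustrative choice h(s) = s²; the system matrices, initial state, nominal reference prediction, and input bounds are assumptions, not values from the paper.

```python
import numpy as np

# Sketch of computing h_min,1 and h_max,1 over a box input set with h(s) = s^2.
# All numerical values are illustrative assumptions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = 0.05 * np.eye(2)
C = np.eye(2)
x0 = np.array([1.0, 0.0])
y_r1_nom = np.array([0.8, 0.0])          # nominal prediction of y_r,1

h = lambda s: s ** 2                     # strictly increasing substitute

def dist(u0):
    """Norm ||y_1 - y_r,1|| of the nominal one-step prediction for input u0."""
    return np.linalg.norm(C @ (A @ x0 + B @ u0) - y_r1_nom)

# h_min,1: minimize the convex norm over a grid of the box U_x,0 = [-1, 1]^2.
grid = np.linspace(-1.0, 1.0, 41)
h_min1 = min(h(dist(np.array([a, b]))) for a in grid for b in grid)
# h_max,1: the maximum of a convex function over a polytope lies on the
# boundary (cf. Lemma 2), so for a box it suffices to check the four vertices.
corners = [np.array([a, b]) for a in (-1.0, 1.0) for b in (-1.0, 1.0)]
h_max1 = max(h(dist(u)) for u in corners)
```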

Lemma 2.
Let the non-empty convex polytope V ⊂ R^g, g ∈ N, be bounded by a finite set of hyperplanes, such that V has a finite number of vertices, and let z: V → R be a convex function. Then a global maximum of z is obtained by searching for the maximum value of z on the boundary ∂V of its domain V.
Proof. This proof is based on Bauer's maximum principle [38]. We consider any two points v_1, v_2 ∈ ∂V on the boundary of V. Any point on the line between v_1 and v_2 can be described by b = λv_1 + (1 − λ)v_2 with λ ∈ [0, 1], using the definition of convexity. Due to the convexity of z it holds that

z(b) = z(λv_1 + (1 − λ)v_2) ≤ λz(v_1) + (1 − λ)z(v_2) ≤ max{z(v_1), z(v_2)}.

Any point on the line between v_1 and v_2 can be reached by a convex combination. Since v_1, v_2 can be chosen arbitrarily, every point b in the interior of V can be reached. Therefore, a global maximum z_max is found on the boundary ∂V.
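Lemma 2 can be illustrated numerically: a convex function over the unit square attains its maximum at a vertex, and random interior points never exceed the vertex maximum; the quadratic z is an illustrative choice.

```python
import numpy as np

# Numerical illustration of Lemma 2 (Bauer's maximum principle) on the unit
# square with the illustrative convex function z(v) = ||v||^2.
z = lambda v: float(v @ v)               # convex function on R^2

vertices = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (0, 1), (1, 1)]]
z_vertex_max = max(z(v) for v in vertices)

rng = np.random.default_rng(2)
interior = rng.uniform(0.05, 0.95, size=(10_000, 2))   # strictly inside V
z_interior_max = max(z(v) for v in interior)
```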
3) Determination of the Updated Admissible Input Set: Similar to Section III-A we regard three cases. The resulting set U_{cvpm,0}, depending on the three cases, is then used in the CVPM-MPC problem (23) to guarantee minimal constraint violation probability of (12f). In order to distinguish between the cases, we will consider the relation

p_{cv,1} = 0 ⇔ h(‖y_1 − ŷ_{r,1}‖_2) ≥ h(c_1 + w_{max,0}),    (36)

where a detailed derivation of (36) is shown in Appendix III. Here, c_1 + w_{max,0} represents the necessary distance between y_1 and ŷ_{r,1}, consisting of the required minimal distance c_1 at step j = 1 and the maximal random system step w_{max,0} at j = 0, such that ‖y_1 − y_{r,1}‖_2 ≥ c_1 for all ‖w_0‖_2 ≤ w_{max,0}.

Case 1 (Guaranteed Constraint Satisfaction): For any u_0 ∈ U_{x,0} constraint satisfaction is guaranteed, i.e., p_{cv,1} = 0, for

h_{min,1} ≥ h(c_1 + w_{max,0}).

The initial state configuration of the controlled and the stochastic system is such that the minimum possible value of h(‖y_1 − ŷ_{r,1}‖_2), h_{min,1}, is still at least as large as the value obtained by inserting c_1 combined with the worst case w_{max,0} into h, which moves y_{r,1} closest to y_1. This results in guaranteed constraint satisfaction, p_{cv,1} = 0. Therefore, every u_0 ∈ U_{x,0} is an admissible input, i.e.,

U_{cvpm,0} = U_{x,0}.

Case 2 (Impossible Constraint Satisfaction Guarantee): There is no input u_0 ∈ U_{x,0} which can guarantee p_{cv,1} = 0, i.e.,

h_{max,1} < h(c_1 + w_{max,0}).

The largest value of h(‖y_1 − ŷ_{r,1}‖_2) that can be achieved with u_0 ∈ U_{x,0} is h_{max,1}, corresponding to the lowest possible p_{cv,1}. However, to guarantee constraint satisfaction of (12f), h_{max,1} would be required to be larger than or equal to h(c_1 + w_{max,0}), with the worst-case absolute value w_{max,0} for the realization of w_0. Constraint satisfaction can therefore not be guaranteed here.
The solution corresponding to h max,1 is denoted by u cvpm,0 . Minimal p cv,1 is achieved with u cvpm,0 , as p cv,1 decreases with an increasing norm y 1 − y r,1 2 . Therefore, u cvpm,0 is selected, since this input choice guarantees the lowest constraint violation probability when p cv,1 > 0.
While some u 0 ∈ U x,0 cannot guarantee zero constraint violation probability, it is possible to find u 0 such that p cv,1 = 0. Therefore, for some u 0 constraint satisfaction can be guaranteed in the presence of uncertainty. Hence, the task is to find a set which consists of all inputs u 0 ∈ U x,0 that yield constraint satisfaction and therefore p cv,1 = 0. The first part of the set in (44) describes a super-level set, including only inputs u 0 which lead to p cv,1 = 0. This super-level set is generally non-convex. In order to obtain a convex set U cvpm,0 for the optimization problem, an approximation is performed, based on the boundary ∂U mode3,0 .

Proposition 1. An approximated, convex solution of (44) in case 3 is given by (48), using the gradient operator ∇ u * 0 and a point u * 0 ∈ ∂U mode3,0 ∩ U x,0 , which is an admissible input.

Remark 5. While it was previously not explicitly stated that y 1 depends on u 0 , in Proposition 1 the dependence of y 1 on u * 0 is stated for clarity.

Proof. The set U mode3,0 is non-empty and non-convex, with the boundary point u * 0 ∈ ∂U mode3,0 . There exists a supporting hyperplane to the convex complement of U mode3,0 at u * 0 [39]. This supporting hyperplane is used to approximate the non-convex set U mode3,0 . The gradient ∇ u * 0 h( y 1 (u * 0 ) − y r,1 2 ) is a vector orthogonal to the hyperplane at u * 0 , pointing into U mode3,0 . The scalar product of ∇ u * 0 h( y 1 (u * 0 ) − y r,1 2 ) and the vector from u * 0 to any point u 0 on the hyperplane is zero, while the scalar product with the vector to any point in the half-space on the side of U mode3,0 is positive. Therefore, (48) approximates U mode3,0 . As the intersection of two convex sets yields a convex set [39], the resulting approximated set Û cvpm,0 is convex as well.
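The half-space approximation of Proposition 1 can be sketched numerically; the output map y 1 (u 0 ) = u 0 , the reference y r,1 , and all numbers below are illustrative stand-ins, with h(ξ) = ξ 2 as chosen later in Section VI.

```python
import numpy as np

# Sketch of the half-space approximation in Proposition 1: the non-convex
# super-level set {u0 : h(||y1(u0) - y_r,1||_2) >= h(c1 + w_max,0)} is
# replaced by a half-space at a boundary point u0_star. The output map
# y1(u0) = u0, the reference y_r,1, and all numbers are illustrative.
yr1 = np.array([2.0, 0.0])
level = 1.0 ** 2                        # h(c1 + w_max,0) with h(xi) = xi**2

def g(u0):                              # g(u0) = h(||y1(u0) - y_r,1||_2)
    d = u0 - yr1                        # hypothetical output map y1(u0) = u0
    return float(d @ d)

def grad(f, u, eps=1e-6):               # central finite differences
    e = np.eye(len(u))
    return np.array([(f(u + eps * e[i]) - f(u - eps * e[i])) / (2 * eps)
                     for i in range(len(u))])

u0_star = np.array([1.0, 0.0])          # boundary point: g(u0_star) == level
normal = grad(g, u0_star)               # normal of the supporting hyperplane

def in_halfspace(u0):                   # convex (linearized) membership test
    return normal @ (u0 - u0_star) >= 0.0

# Here the half-space is an inner approximation: every point it contains
# also satisfies the original super-level constraint.
assert abs(g(u0_star) - level) < 1e-6
assert in_halfspace(np.array([0.0, 0.0])) and g(np.array([0.0, 0.0])) >= level
assert not in_halfspace(np.array([1.5, 0.0]))
```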
An approach to finding u * 0 is to solve the system (47) with u * 0 ∈ U x,0 . The choice of u * 0 is not unique. It is possible that Û cvpm,0 is empty due to the approximation, even though case 3 applies.
Remark 6. If Û cvpm,0 = ∅ in case 3, then u 0 can be determined by following the procedure of case 2.
Following the approach in Remark 6 still provides a solution that minimizes p cv,1 . However, in case 2 only a single option U cvpm,0 = {u cvpm,0 } is given, while case 3 has the advantage of providing a set U cvpm,0 with multiple possible inputs u 0 . Case 3 therefore offers the possibility to optimize further objectives, given the set of admissible inputs U cvpm,0 .
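The three-case distinction of this subsection can be sketched as a simple selector; the function h and all numbers are illustrative stand-ins for the quantities h min,1 , h max,1 , and h(c 1 + w max,0 ) from the MPC problem data.

```python
# Sketch of the three-case selector for the admissible input set U_cvpm,0.
# h, h_min1, h_max1, c1, and w_max0 are illustrative stand-ins for the
# quantities in the text; the real values come from the MPC problem data.
def h(xi):
    return xi ** 2                  # strictly increasing h, as in Section VI

def select_case(h_min1, h_max1, c1, w_max0):
    thresh = h(c1 + w_max0)         # worst-case requirement h(c1 + w_max,0)
    if h_min1 >= thresh:
        return 1                    # guaranteed satisfaction: all of U_x,0
    if h_max1 < thresh:
        return 2                    # no input guarantees p_cv,1 = 0
    return 3                        # some inputs guarantee p_cv,1 = 0

assert select_case(10.0, 12.0, 1.0, 0.5) == 1
assert select_case(0.5, 1.0, 1.0, 0.5) == 2
assert select_case(1.0, 9.0, 1.0, 0.5) == 3
```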

IV. PROPERTIES
In the following two important properties are analyzed. First, recursive feasibility of the proposed method is shown. This is followed by a proof of convergence.

A. Recursive Feasibility
Recursive feasibility guarantees that if the MPC optimization problem is solvable at step k, it is also solvable at step k + 1. This needs to hold as MPC requires the solution of an optimal control problem at every time step.

Definition 2. (Recursive Feasibility) Recursive feasibility of an MPC algorithm is guaranteed if
where U N k is the set of admissible input sequences U k fulfilling the constraint (23e) from step k to step k + N . In the following recursive feasibility will be established for the proposed method. Without loss of generality the MPC optimization problem starts at x 0 with k = 0. The proof is divided into two parts. First it is shown that U cvpm,0 ≠ ∅ at any step, then recursive feasibility of the optimization problem (23) is shown.
Proof. As shown in the proof of Theorem 1 the three cases (17), (19), and (21) cover all possibilities with individual, nonempty sets U cvpm,0 . This yields that there always exists a u 0 ∈ U cvpm,0 .
As U x,j is a non-empty set due to Assumption 2, there exist solutions u j ∈ U x,j for j ∈ I 1:N −1 . The first condition in (24) considers the first input u 0 , while the second condition covers the following inputs u j with j ∈ I 1:N −1 . Therefore, the two conditions are independent and U * 0 ≠ ∅ for any MPC optimization. The MPC algorithm (23) is guaranteed to be recursively feasible.
The proof for the general CVPM-MPC method can be extended to the CVPM-MPC approach for uncertainties with symmetric, unimodal PDFs in Section III-C.

Corollary 2. The MPC algorithm (23) is recursively feasible with the CVPM approach of Section III-C.
Theorem 2 and Corollary 2 show that if the MPC problem (6) is designed to be recursively feasible, the CVPM-MPC algorithm (23), based on (6), remains recursively feasible. According to Corollary 1, minimizing p cv,k is independent of the uncertainty support; therefore, recursive feasibility is guaranteed even if the uncertainty support changes.

B. Convergence
In the following convergence of the proposed method is shown. In this section the MPC optimization starts at x k . Considering Remark 1 it is possible to track a reference differing from the origin; however, without loss of generality we will only consider regulation to the origin here.
The uncertain output y r,k can potentially lie close to the origin or even directly in the origin. In order to minimize p cv,k , an area around y r,k is then inadmissible for the system output y k . This can lead to the case where the origin is inadmissible for the controlled system, i.e., 0 ∈ X cv,k , where X cv,k , defined in (51), denotes the bounded and open set of states x k with p cv,k > 0, i.e., constraint violation is possible for all x k ∈ X cv,k . An inadmissible origin, of course, is an issue when investigating the stability of the proposed algorithm. However, we will provide a convergence guarantee under the following assumptions concerning the stochastic nature of y r,k .
Assumption 7 (Admissible Origin). (a) There exists a k 0 < ∞ such that for all k ≥ k 0 it holds that 0 ∉ X cv,k . (b) There exists a k y0 < ∞ and a finite sequence of inputs u k such that y k = 0 for all k ≥ k y0 ≥ k 0 . (c) There exists a k case1,3 < ∞ such that for all k ≥ k case1,3 ≥ k 0 there exists u k−1 with p cv,k (u k−1 ) = 0 (53) and U cvpm,k ≠ ∅.
Assumption 7 (a) ensures that even if y r,k occupies the space around the origin for some time, eventually y r,k will be sufficiently distant such that the origin becomes admissible for the controlled system, as the boundedness of the stochastic system state yields a closed admissible space for the controlled system. Assumption 7 (b) ensures that it is possible for the controlled system to reach the origin.
With Assumption 7 (c) it is guaranteed that either case 1 or case 3 is applicable if Assumption 7 (a) holds. This ensures that p cv,k = 0 at some time after the origin becomes admissible for the controlled system.

Lemma 3. If Assumption 7 holds, there exists a closed, control invariant set X̃ k = X \ X cv,k for k ≥ k case1,3 which contains the origin.
Proof. As case 1 or 3 applies, the space blocked by X cv,k around y r,k with non-zero constraint violation probability can be regarded as a hard constraint. This yields x k ∉ X cv,k for all k ≥ k case1,3 . As X is closed and X cv,k is open, the resulting set X̃ k is closed. As x k ∈ X̃ k ⊆ X , there exists a u k such that x k+1 ∈ X according to Theorem 2. Assumption 7 (c) ensures that U cvpm,k is not empty, therefore x k+1 ∈ X̃ k and X̃ k is control invariant.
The set X̃ k consists of the states which ensure constraint satisfaction of X and yield p cv,k = 0 for k ≥ k case1,3 .
Assumption 8 (Terminal Constraint Set). The terminal constraint set X f is a subset of X̃ k , i.e., X f ⊂ X̃ k .
In the following convergence of the proposed method is addressed.
Theorem 3. If Assumptions 3 and 7 hold, the proposed CVPM-MPC method in Section III-A satisfies that x k converges to 0 for k → ∞.
Proof. First, the MPC algorithm in (6) will be considered without the norm constraint (12f). As V N (x 0 , U 0 ) is a Lyapunov function in X , given Assumption 3, the MPC algorithm of (23) without (12f) is asymptotically stable, following the MPC stability proof of Rawlings et al. [40, Chap. 2.4]. Now the CVPM-MPC method is considered. According to Theorem 2, for all k and x k ∈ X there exists a feasible U k such that x k+1 remains in X . Lemma 3 ensures that x k* ∈ X̃ k for k* ≥ k case1,3 , where X̃ k replaces X to ensure satisfaction of the norm constraint. The set X̃ k is closed, control invariant, contains the origin according to Assumption 7, and X f ⊆ X̃ k , given Assumption 8. Therefore, the system (1), controlled by the CVPM-MPC algorithm in (23), is asymptotically stable and converges to 0 for k > k* and k → ∞, similar to the MPC algorithm in (6).
In Theorem 3 it is only shown that the system converges to the origin once the random system fulfills Assumption 7. However, every time the stochastic output allows the system to reach the origin, the system will move towards the origin. The system state x k remains at 0 until y r,k moves in such a way that the origin has non-zero constraint violation probability. As the main goal is to ensure minimum constraint violation probability of (9), y k will move away from the origin to minimize p cv,k if y r,k behaves in such a way that it causes p cv,k > 0 in the origin.
Corollary 3. If Assumption 7 holds, the proposed CVPM-MPC method in Section III-C for uncertainties with symmetric, unimodal PDFs satisfies that x k ∈ X for all k and that x k converges to 0 for k → ∞.
Proof. The proof is similar to the proof of Theorem 3. The set X cv,k in (51) can be expressed accordingly for the CVPM-MPC method in Section III-C, such that (52) is satisfied. Similar to Lemma 3, given the open and constant set X cv,k , the set X̃ k is closed, constant, control invariant, and contains the origin given Assumption 7. With the MPC algorithm (6) and k > k*, k → ∞, the system (1) is asymptotically stable and therefore converges to 0.
Therefore, if the origin is admissible, the controlled system will converge. However, satisfying the norm constraint has priority over converging to the origin.

V. DISCUSSION OF THE PROPOSED CVPM-MPC METHOD
One could argue that the proposed algorithm is a combination of RMPC in the first step and, potentially, SMPC in the following steps. While there are some similarities to this combination, we solve a different problem. The most important difference is that the constraint violation probability is minimized in the first predicted step and the initial constraint violation probability is not required to be 0. RMPC approaches require constraint satisfaction initially and ensure that constraints are satisfied throughout the prediction horizon.
Our proposed CVPM-MPC method is more closely related to SMPC than RMPC, as constraint violations are possible. Nevertheless, the suggested method can be interpreted as lying between SMPC and RMPC. The results are more conservative than SMPC, as a constraint violation probability of zero is enforced whenever possible, i.e., p cv,k = 0 in (9), but less conservative than RMPC. An advantage over both SMPC and RMPC is the ability to minimize the constraint violation probability and to successfully cope with sudden uncertainty support changes, as recursive feasibility can still be guaranteed. The uncertainty support can change due to unexpected events or modeling inaccuracies.
In SMPC with chance constraints recursive feasibility is a major issue. For example, an unexpected realization of the uncertainty at step k, whose probability lies below the chance constraint risk parameter at step k, leads to a state at step k +1 with no solution to the optimization problem if the required risk parameter of the chance constraint cannot be met. An option to regain feasibility is to solve an alternative optimization problem or apply an input that was previously defined. However, these alternatives do not necessarily lead to a solution that yields the lowest constraint violation probability. Furthermore, it is possible to soften chance constraints by using slack variables in the cost function. However, this approach is not acceptable in applications where the chance constraint represents a safety constraint. If a slack variable is introduced, it competes with other objectives within the cost function and does not ensure constraint satisfaction. The proposed CVPM-MPC method always finds the optimal input that results in the lowest constraint violation probability while remaining recursively feasible.
RMPC guarantees recursive feasibility but at the cost of reduced efficiency, as worst-case scenarios need to be taken into account. Additionally, if the support of the uncertainty can suddenly change over time, e.g., the future motion of an object becomes more uncertain due to a changing environment, RMPC can become too conservative to be applicable. A robust solution can only be obtained by always considering the largest possible uncertainty support. The proposed method deals with this by adjusting to changing uncertainty supports at every step, as will be illustrated in Section VI. A suddenly or unexpectedly increasing uncertainty support, e.g., due to an inaccurate prediction model, may lead to increased constraint violation probability for a limited time after the support changes. Before the support changes, the optimized inputs of the proposed algorithm lead to a less conservative result than RMPC, while ensuring that the constraint violation probability is kept at a minimal level immediately after the change.
In the proposed method we only consider minimizing the constraint violation probability for the first predicted step. It is possible to consider multiple steps by increasing the uncertainty support for each considered step, as described in Appendix I; however, this leads to a more conservative solution. For every extra predicted step in which the constraint violation probability is minimized, the maximal possible uncertainty value must be considered. This yields a highly restrictive set of admissible inputs which minimize the constraint violation probability over multiple predicted steps. As it is assumed that the support of the uncertainty PDF can change over time, considering multiple steps with the initially known support does not guarantee lower constraint violation probability for multiple steps. If the support increases, the previously obtained multi-step CVPM-MPC solution becomes invalid. Therefore, given an updated uncertainty support at each step, it is a reasonable approach to only minimize the constraint violation probability for the first predicted step, resulting in the safest solution at the current step. It is possible to consider the norm constraint for collision avoidance in multiple predicted steps by either formulating a chance constraint, as mentioned in Remark 3, or a robust constraint. However, this can result in infeasibility of the optimization problem, particularly if the uncertainty support varies over time. Despite only considering the norm constraint for the next predicted step, it is still beneficial to use an MPC horizon N > 1. Other objectives are optimized over the entire horizon, given that the first input is included in the set U cvpm,0 , which potentially consists of multiple admissible inputs that all minimize the constraint violation for the next step.
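The growing conservatism of the multi-step extension can be illustrated with a few lines; c and the constant w max,i below are illustrative numbers, not the paper's data.

```python
# Illustration of the conservatism of the multi-step extension (Appendix I):
# guaranteeing zero violation probability over l predicted steps requires the
# distance threshold c + sum_{i=0}^{l-1} w_max,i, which grows with every
# extra step. c and the constant w_max,i are illustrative numbers.
c, w_max = 1.0, 0.15

def multi_step_threshold(l):
    # required distance for an l-step guarantee with constant support
    return c + sum(w_max for _ in range(l))

thresholds = [multi_step_threshold(l) for l in range(1, 6)]
assert abs(thresholds[0] - 1.15) < 1e-9   # one step: c + w_max,0
assert abs(thresholds[4] - 1.75) < 1e-9   # five steps: noticeably larger
assert all(b > a for a, b in zip(thresholds, thresholds[1:]))
```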
Applying the CVPM-MPC approach possibly results in oscillating behavior. As long as case 1 is valid, the proposed method does not affect the optimization, as U cvpm,0 = U x,0 . Once case 2 is active, a solution is found which minimizes the probability of constraint violation, ignoring the reference and potentially moving away from the reference, as only one input is admissible. When case 1 is valid again, the reference is tracked again until, possibly, case 2 becomes active once more. This behavior can be mitigated by considering the norm constraint as a chance constraint for multiple predicted steps; however, recursive feasibility is then not guaranteed, as mentioned before.
The main focus of the suggested method is to minimize the constraint violation probability. It is clear that stability cannot always be guaranteed, as the origin can be excluded from the admissible state set. Consider, for example, a narrow road where a bicycle is between the controlled vehicle and the vehicle reference point. If the road is too narrow for the vehicle to pass, it will remain behind the bicycle and never reach the reference point, i.e., Assumption 7 (b) is violated. However, Assumption 7 implies that the origin is not inadmissible at all times, and once the origin is admissible, the controlled system converges.
It is also important to note that minimizing the constraint violation probability has priority over other optimization objectives. Especially in safety-critical applications this can be of major interest, e.g., an autonomous car must ensure that the collision probability is always minimal, prior to reducing energy or increasing passenger comfort. If SMPC were to be applied in such scenarios, the question would arise of how to choose the SMPC risk parameter β k . A large β k yields efficient behavior but might be unacceptable due to an insufficient safety level. Finding a reduced value for β k in SMPC is challenging, as even very small risk parameters allow for constraint violations, while β k = 0 does not yield a chance constraint and the advantages of SMPC are lost. In the proposed CVPM-MPC the task of appropriately choosing the risk parameter is not required.
For the approach in Section III-C the PDF f W k does not need to be known exactly as long as it fulfills Assumption 6. If f W k is symmetric and unimodal, it is ensured that increasing y k − y r,k 2 results in a lower constraint violation probability p cv,k .
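The monotonicity property can be checked by Monte Carlo sampling; the truncated Gaussian noise model below is an assumed example of a symmetric, unimodal PDF with bounded support in the spirit of Assumption 6, not the paper's distribution.

```python
import numpy as np

# Monte Carlo check of the monotonicity used throughout (cf. Lemma 1): for a
# symmetric, unimodal noise PDF truncated at w_max, the violation probability
# p_cv(d) = P(||y - (y_r + w)||_2 < c) is non-increasing in the distance
# d = ||y - y_r||_2. The truncated Gaussian below is an assumed example of
# such a PDF, not the paper's distribution.
rng = np.random.default_rng(1)
w_max, c, n = 0.9, 0.5, 100_000

def sample_truncated_gauss(n, sigma, w_max, rng):
    # Rejection sampling: 2D zero-mean Gaussian truncated to ||w|| <= w_max.
    out = np.empty((0, 2))
    while len(out) < n:
        cand = rng.normal(0.0, sigma, size=(n, 2))
        out = np.vstack([out, cand[np.linalg.norm(cand, axis=1) <= w_max]])
    return out[:n]

w = sample_truncated_gauss(n, 0.45, w_max, rng)

def p_cv(dist):
    y = np.array([dist, 0.0])                 # place y at distance `dist`
    return float(np.mean(np.linalg.norm(y - w, axis=1) < c))

ps = [p_cv(d) for d in (0.0, 0.5, 1.0)]
assert ps[0] > ps[1] > ps[2]                  # larger distance, lower p_cv
assert p_cv(w_max + c) == 0.0                 # beyond the support: p_cv = 0
```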
The proposed method is especially useful in collision avoidance applications, which are either in 2- or 3-dimensional space. While applying the proposed method in 2D is straightforward, 3-dimensional applications can be more challenging to solve, especially finding u * 0 in (47).
VI. SIMULATION RESULTS

In the following a simulation is presented and discussed to further explain the general idea and its application. This collision avoidance scenario with two vehicles illustrates an application where the proposed method is beneficial. The simulations were run in MATLAB on a standard desktop computer using MPT3 [41] and YALMIP [42]. Solving a single optimization of the MPC algorithm takes 54 ms on average. All quantities are given in SI units.

A. Collision Avoidance Simulation
A collision occurs if the distance between two objects becomes too small. This distance can be represented by a norm constraint. The priority is then enforcing the norm constraint, or if not feasible, minimizing the probability of violating the norm constraint.
We consider the example mentioned in Sections I and II where a controlled vehicle avoids collision with a bicycle, referred to as obstacle in the following. The controlled vehicle is approximated by the radius r c = 2.0 and the obstacle is approximated by the radius r r = 0.8 and is subject to stochastic motion in a bounded area, e.g., a road. The circles are chosen to fully cover the individual shapes of the controlled vehicle and obstacle. The scenario setup is shown in Figure 2.
The continuous system dynamics of the controlled vehicle in x- and y-direction are given by a linear model, where x = [x, y] and [v x , v y ] are the position and velocity in a 2D environment, respectively. The inputs are given by [u 1 , u 2 ] . Using zero-order hold with sampling time ∆t = 0.1 yields the discretized system given by (1). We consider input constraints on [u 1 , u 2 ] . In x-direction there exists a minimum velocity v x,min = 1 to ensure that the controlled vehicle is always moving forward, which also limits the potential oscillating behavior due to the CVPM-MPC approach. We also consider the state constraint y lb ≤ y ≤ y ub , where y lb = 2.0 and y ub = 8.0 are the boundaries of the road minus the radius r c . The motion of the obstacle is given by (2), depending on the initial output y r,0 , the input u r,k , and the realization w k of the random variable W k ∼ f W k , with y r = [x r , y r ] . We assume f W k to be symmetric, unimodal, and truncated, resulting in the support supp (f W k ) = { w : w 2 ≤ w max,k }, where w max,k is the radius of the support boundary of W k . The physical interpretation of w max,k is that it is the maximal uncertain distance the obstacle can move in one step, in addition to the deterministic distance u r,k . At step k the controlled vehicle knows the obstacle position y r,k and the deterministic input u r,k , but w k is unknown.
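The zero-order-hold discretization can be sketched as follows, assuming double-integrator dynamics with state ordering [x, y, v x , v y ] and acceleration inputs; the paper's exact model matrices may differ.

```python
import numpy as np

# Hedged sketch of the zero-order-hold discretization with dt = 0.1, assuming
# double-integrator dynamics with state [x, y, vx, vy] and acceleration
# inputs [u1, u2]; the paper's exact model matrices may differ.
dt = 0.1
# Continuous-time model xdot = Ac x + Bc u
Ac = np.block([[np.zeros((2, 2)), np.eye(2)],
               [np.zeros((2, 2)), np.zeros((2, 2))]])
Bc = np.vstack([np.zeros((2, 2)), np.eye(2)])

# Exact ZOH discretization via the augmented matrix exponential
# exp([[Ac, Bc], [0, 0]] * dt); the augmented matrix is nilpotent here,
# so the truncated Taylor series below is exact (scipy.linalg.expm would
# give the same result for general Ac, Bc).
M = np.zeros((6, 6))
M[:4, :4] = Ac * dt
M[:4, 4:] = Bc * dt
expM = np.eye(6) + M + M @ M / 2
A, B = expM[:4, :4], expM[:4, 4:]

assert np.allclose(A, [[1, 0, dt, 0], [0, 1, 0, dt],
                       [0, 0, 1, 0], [0, 0, 0, 1]])
assert np.allclose(B, [[dt**2 / 2, 0], [0, dt**2 / 2], [dt, 0], [0, dt]])
```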
As the main aim of this simulation is to minimize the collision probability, an expression for this probability is necessary in order to analyze the simulation results. The collision probability at step k between the two vehicles will be denoted by p col,k ; it has finite support as f W k is truncated. In this example a norm constraint is used to avoid a collision, i.e., the norm constraint violation probability is minimized. Therefore, the probability of a collision p col,k is defined analogously to p cv,k in Section II. The derivation and expression for the collision probability p col,k are omitted here for readability. Details can be found in Appendix II.
The collision probability p col,k depends on the Euclidean distance d k between the controlled vehicle and the obstacle. Similar to (8) a norm constraint can be formulated, where c k = d safe,k can be interpreted as the minimal distance between the controlled vehicle and the obstacle such that a collision is avoided. The support of p col,k results from adding the radii of the controlled vehicle and the obstacle to supp (f W k ), i.e., d safe,k = w max,k−1 + r r + r c is the safety distance required to avoid a collision between the controlled vehicle and the obstacle, taking into account the radii of both vehicles, r r and r c , and the maximal obstacle step w max,k−1 . Similar to Lemma 1 for p cv,k , p col,k is decreasing for increasing d k .
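The safety distance and the resulting collision condition follow directly from the given radii; the w max values below are taken from the simulation scenarios.

```python
# Safety distance from the simulation setup: d_safe,k = w_max,k-1 + r_r + r_c,
# with the covering radii given in Section VI and the scenario w_max values.
r_c, r_r = 2.0, 0.8                      # covering radii of vehicle, obstacle

def d_safe(w_max_prev, r_r=r_r, r_c=r_c):
    return w_max_prev + r_r + r_c

def collision_possible(d_k, w_max_prev):
    # p_col,k > 0 exactly when the current distance is below d_safe,k
    return d_k < d_safe(w_max_prev)

assert abs(d_safe(0.15) - 2.95) < 1e-9   # nominal support
assert not collision_possible(3.0, 0.15) # safe at distance 3.0
assert collision_possible(3.0, 0.9)      # support jump raises d_safe to 3.7
```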
We choose h(ξ) = ξ 2 , which is strictly monotonically increasing for ξ ≥ 0. This yields a function that can be considered a substitution of the probability function p col,k . The controlled vehicle uses the CVPM-MPC algorithm (23) with N = 10. The x-position references for the controlled vehicle are obtained from the reference velocity. In the following two scenarios will be analyzed. In the first scenario the controlled vehicle is located close to its state boundary, i.e., the road boundary, showing that the norm constraint violation probability can be minimized in the presence of state constraints. In the second scenario the obstacle uncertainty support will suddenly increase. The orientation φ of the controlled vehicle is shown in the figures.

The vehicle configurations at different time steps are shown in Figure 3 and the results of the simulation are displayed in Figure 4. Initially, the controlled vehicle and obstacle have the same x-position. Starting at t = 3.9 s the controlled vehicle needs to slow down to maintain a safe distance to the obstacle. As the maximal obstacle uncertainty is known by the controlled vehicle, the collision probability is kept at zero. After t = 4.5 s the obstacle moves away from the controlled vehicle, resulting in an increased input u 1 in order to get closer to the x-position reference. At t = 5.0 s the controlled vehicle catches up with its x-position reference, which is then followed by constant inputs. Between t = 9.0 s and t = 11.0 s similar behavior can be observed. It can be seen that the CVPM-MPC ensures p cv,k = 0 with active state constraints. As mentioned in Section IV, the motion of the obstacle can result in an inadmissible origin, i.e., Assumption 7 (c) is violated and the controlled vehicle cannot keep its reference velocity. However, as shown in Theorem 3, once the obstacle moves away the velocity of the controlled vehicle again reaches the reference velocity.
2) Change of Uncertainty Support: In the second simulation we show that the proposed method is capable of dealing with varying uncertainty support of the obstacle. The controlled vehicle aims to maintain the reference velocity v x,ref at y ref = 3.0. We consider here that the obstacle uncertainty support suddenly changes, for example due to a changing environment. At first the expected uncertainty support is w max,k = 0.15; at t = 3.0 s it changes to w max,k = 0.9, before returning to w max,k = 0.15 at t = 5.0 s. In this simulation the obstacle does not move randomly, which helps to better understand the action of the controlled vehicle once the uncertainty support changes. At each time step the controlled vehicle knows the current uncertainty support of the obstacle.

The vehicle configurations at different time steps are shown in Figure 5 and the results of the simulation are displayed in Figure 6. As the controlled vehicle has a higher velocity it will eventually pass the obstacle; therefore, the distance ∆x = x − x r turns positive. At t = 1.8 s the controlled vehicle gets close enough to the obstacle that it moves away from y ref to maintain v x,ref while ensuring that the distance d k = y k − y r,k 2 ≥ d safe,k . As w max,k increases at t = 3.0 s, so does the required distance between the controlled vehicle and obstacle, causing the controlled vehicle to move further away from y ref . Due to input limitations the controlled vehicle cannot move fast enough. This results in d k < d safe,k , i.e., p col,k > 0 at t = 3.0 s, i.e., there is a probability of collision for the next time step. However, d k is increased to a maximal level, given u k ∈ U x,k , resulting in a minimal constraint violation probability p col,k . Once the distance satisfies d k ≥ d safe,k at t = 3.2 s, p col,k becomes zero, and the controlled vehicle moves along the obstacle boundary for the next step, as seen for t = 3.3 s.
At t = 5.0 s, w max,k decreases, and the controlled vehicle converges to y ref at t = 5.8 s.
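A Monte Carlo estimate of the collision probability at a given distance can be sketched as follows; the geometry and the uniform-disk noise model are illustrative assumptions, so the estimate does not reproduce the reported value of 0.072.

```python
import numpy as np

# Monte Carlo sketch of the collision-probability validation: draw the
# obstacle step w repeatedly and count events ||y - (y_r + w)||_2 < r_r + r_c.
# The distance and the uniform-disk noise model are illustrative assumptions,
# so p_hat does not reproduce the reported 0.072.
rng = np.random.default_rng(2)
r_sum, w_max, runs = 2.8, 0.9, 2000       # r_r + r_c and increased support
y, y_r = np.array([0.0, 3.0]), np.array([0.0, 0.0])

r = w_max * np.sqrt(rng.uniform(size=runs))   # uniform sampling on a disk
phi = rng.uniform(0.0, 2 * np.pi, size=runs)
w = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

collisions = np.linalg.norm(y - (y_r + w), axis=1) < r_sum
p_hat = float(collisions.mean())              # fraction of colliding draws
assert 0.0 < p_hat < 1.0
```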
In order to validate the probability of constraint violation, the simulation was run 2000 times with an arbitrary random obstacle step at t = 3.0 s, which is the first step with the increased uncertainty bound w max,k = 0.9. The vehicles collided in 144 simulations, yielding a collision probability of 0.072, compared to the calculated collision probability of 0.0723, as described in Appendix III.

3) Comparison to RMPC and SMPC: If RMPC and SMPC were applied in the simulations, certain problems would arise, mainly due to infeasibility of the optimization problem. This could be solved by providing rigorous alternative optimization problems, predefined alternative inputs, or highly conservative worst-case considerations. However, there is no ideal RMPC or SMPC approach to deal with the scenario in the simulation. Therefore, we will compare the simulation results of the proposed method to RMPC and SMPC only qualitatively.
We will first consider the behavior with RMPC applied to the controlled vehicle. In the first simulation RMPC would deliver safe results similar to the CVPM-MPC method, while remaining behind the obstacle in order to account for the worst-case obstacle behavior. In the second simulation two cases can be distinguished. If the initially considered uncertainty support is w max,k = 0.15, the behavior would be similar to the proposed method until the support changes. As it is impossible to find a state with zero collision probability after the uncertainty support is altered, the RMPC optimization problem becomes infeasible. If the considered uncertainty support is initially chosen such that the larger support after t = 3.0 s is covered, RMPC yields a safe solution; however, it passes the obstacle at a larger distance than initially required, yielding a higher cost compared to the proposed CVPM-MPC method. In many applications it is also difficult to choose the worst-case uncertainty support a priori, as larger supports might occur later, resulting in even more conservative RMPC solutions.
It is now assumed that the controlled vehicle is controlled using SMPC with a chance constraint with risk parameter β k > 0 for collision avoidance. In the second simulation, before the uncertainty support changes, the controlled vehicle would pass the obstacle slightly closer than with the proposed CVPM-MPC method, as the chance constraint allows for small constraint violations. The larger β k is chosen, the smaller this distance becomes. However, while the proposed CVPM-MPC method ensures safety while passing the vehicle with only slightly more distance, the SMPC approach would pass the obstacle 'on the chance constraint', i.e., as close as β k allows, sacrificing guaranteed safety for small cost improvements. In other words, leaving slightly more space between the controlled vehicle and the obstacle would result in p cv,k = 0 at only slightly higher cost.
When the uncertainty support changes, the SMPC solution is as close to the obstacle as β k allowed in the previous step. The chance constraint cannot be met anymore because the uncertainty support increased, resulting in a constraint violation probability larger than allowed by β k . The SMPC optimization problem then becomes infeasible, requiring an alternative optimization problem to be defined beforehand. In the first simulation a similar situation would occur. If the chance constraint allows the controlled vehicle to be in a position which will yield p col,k > β k due to the unconsidered worst-case obstacle motion, this leads to infeasibility of the optimization problem.
Considering the qualitative comparison, we can see that the proposed method offers certain advantages over RMPC and SMPC, especially guaranteeing recursive feasibility of the optimization problem in the presence of a changing uncertainty support.

VII. CONCLUSION
The proposed CVPM-MPC algorithm yields a minimal violation probability for a norm constraint for the next step, while also optimizing further objectives and satisfying state and input constraints. Recursive feasibility and, under certain assumptions, convergence to the origin are guaranteed. While the suggested method is inspired by RMPC and SMPC, it provides feasible and efficient solutions in scenarios where RMPC and SMPC encounter difficulties or are not applicable.
As norm constraints are especially useful in collision avoidance applications, the advantages of the presented CVPM-MPC method can be exploited in applications such as autonomous vehicles or robots that work in shared environments with humans. A brief example is introduced where a controlled vehicle overtakes a bicycle while minimizing the collision probability. Here, we focus on minimizing the violation probability of a norm constraint. However, depending on the application, a multi-step CVPM-MPC could be beneficial. Especially for collision avoidance it is also of interest not only to focus on the collision probability but to consider the severity of a collision if a collision is inevitable.

APPENDIX I MINIMAL CONSTRAINT VIOLATION PROBABILITY FOR THE MULTI-STEP PROBLEM
The method presented in Section III minimizes the constraint violation probability for the next step. In the following a possible extension of the one-step CVPM-MPC method is shown. Considering multiple steps l > 1 yields a method more closely related to RMPC, as it provides advantages with respect to robustness, but conservatism is increased.
Considering the stochastic process (W k ) k∈I 0:j−1 , its realization, a sequence (w k ) k∈I 0:j−1 with j ∈ N ≥0 , and the initially known output y r,0 yields the constraint violation probability p cv,j . While in the one-step method p cv,j only needs to be minimized for the next step j = 1, in the l-step approach p cv,j needs to be minimized for 1 ≤ j ≤ l.
Similar to Section III we first address the general method and then provide a solution for f W k satisfying Assumption 6.

A. General Method to Minimize Constraint Violation Probability for Multi-Step Problem
It is necessary to find the set U cvpm,0:l−1 , which represents the set of admissible input sequences U 0:l−1 = [u 0 , ..., u l−1 ] that minimize p cv,j for 1 ≤ j ≤ l. In the following three cases are again considered. The constraint violation probability p cv,j+1 for step j + 1 depends on the previous output y j , the input u j , and the uncertain output y r,j .
Case 2 (Impossible Constraint Satisfaction Guarantee): For a $j$ with $0 \leq j \leq l-1$, potentially at multiple steps $j$, constraint satisfaction cannot be guaranteed by any input $u_j \in U_{x,j}$, i.e., $p_{\mathrm{cv},j+1} > 0$. This can be expressed by
$$ \nexists\, u_j \in U_{x,j}: \; p_{\mathrm{cv},j+1}(u_j) = 0. $$
The set of admissible inputs which minimize the constraint violation probability is then given by
$$ U_{\mathrm{cvpm},0:l-1} = \left\{ U_{0:l-1} \;\middle|\; u_j = \operatorname*{arg\,min}_{u_j \in U_{x,j}} p_{\mathrm{cv},j+1}(u_j), \; j \in I_{0:l-1} \right\}. $$
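The arg-min selection in Case 2 can be illustrated numerically. The following sketch is not the paper's implementation: it assumes hypothetical single-integrator output dynamics ($y_{j+1} = y_j + u_j$), a fixed required distance $c$, and a bounded uniform disturbance, and it estimates $p_{\mathrm{cv},j+1}$ by Monte-Carlo sampling before picking the candidate input with minimal estimated violation probability.

```python
import numpy as np

def violation_probability(u, y, ybar_r, samples, c=1.0):
    """Monte-Carlo estimate of p_cv for the next step.

    Illustrative setup (not the paper's model): next output y_next = y + u,
    uncertain output ybar_r + w, and the norm constraint
    ||y_next - (ybar_r + w)||_2 >= c with required distance c."""
    y_next = y + u
    hits = [np.linalg.norm(y_next - (ybar_r + w)) < c for w in samples]
    return float(np.mean(hits))

def argmin_violation(candidates, y, ybar_r, samples):
    """Case 2 selection: pick the admissible input minimizing estimated p_cv."""
    probs = [violation_probability(u, y, ybar_r, samples) for u in candidates]
    i = int(np.argmin(probs))
    return candidates[i], probs[i]

rng = np.random.default_rng(0)
samples = rng.uniform(-0.3, 0.3, size=(500, 2))       # bounded disturbance draws
candidates = [np.array([0.5, 0.0]), np.array([-0.5, 0.0])]
u_best, p_best = argmin_violation(candidates, np.array([1.0, 0.0]),
                                  np.array([0.0, 0.0]), samples)
```

Here the first candidate moves the output away from the uncertain obstacle, so its estimated violation probability drops to zero and it is selected.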

B. Minimal Constraint Violation Probability for Multi-Step Problem with Radially Decreasing PDF
After defining the general case, we now address the multi-step CVPM-MPC method for a symmetric, unimodal PDF. We make the following assumptions.

Assumption 10 (Known Deterministic Input): The deterministic input $u_{r,j}$ is known for $j \in I_{0:l-1}$.
A simple approach to find $U_{\mathrm{cvpm},0:l-1}$ is to maximize $\|y_l - \bar{y}_{r,l}\|_2$ with
$$ \bar{y}_{r,l} = y_{r,0} + \sum_{i=0}^{l-1} u_{r,i}, $$
as this automatically results in a maximization of $\|y_j - \bar{y}_{r,j}\|_2$ for $j \in I_{1:l}$, because $p_{\mathrm{cv},j}$ is decreasing with increasing $\|y_j - \bar{y}_{r,j}\|_2$. Therefore, if $p_{\mathrm{cv},l}$ is minimized, $p_{\mathrm{cv},j}$ is also minimized for $j \in I_{1:l}$. Similar to (33) and (34), we define
$$ h_{\max,l} = \max_{u_i \in U_{x,i},\, i \in I_{0:l-1}} \|y_l - \bar{y}_{r,l}\|_2, \qquad h_{\min,l} = \min_{u_i \in U_{x,i},\, i \in I_{0:l-1}} \|y_l - \bar{y}_{r,l}\|_2. $$
Case 3 (Possible Constraint Satisfaction Guarantee): There are possible input sequences $U_{0:l-1}$ such that $p_{\mathrm{cv},l} = 0$. Similar to Case 3 for the one-step CVPM-MPC, we again need to find a set $U_{\mathrm{cvpm},0:l-1}$ which only allows input sequences $U_{0:l-1}$ that result in constraint satisfaction of (8) for $j \in I_{1:l}$. This is achieved by choosing
$$ U_{\mathrm{cvpm},0:l-1} = \left\{ U_{0:l-1} \;\middle|\; \|y_l - \bar{y}_{r,l}\|_2 \geq c_l + \sum_{i=0}^{l-1} w_{\max,i} \right\} \cap \left\{ U_{0:l-1} \;\middle|\; u_j \in U_{x,j},\, j \in I_{0:l-1} \right\} \tag{81} $$
where the approximation $\hat{U}_{\mathrm{cvpm},0:l-1}$ can be found analogously to Proposition 1.
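The tightened terminal constraint of Case 3 amounts to a simple membership test on the terminal output. The sketch below is illustrative only: the numeric values and the two-dimensional setting are assumptions, with $c$ playing the role of the required distance.

```python
import numpy as np

def satisfies_tightened_constraint(y_l, ybar_r_l, c, w_max_seq):
    """Membership test for the tightened terminal norm constraint:
    ||y_l - ybar_r_l||_2 >= c + sum of the per-step disturbance bounds.

    All numeric values used below are illustrative, not from the paper."""
    margin = c + float(np.sum(w_max_seq))
    return bool(np.linalg.norm(np.asarray(y_l) - np.asarray(ybar_r_l)) >= margin)

# terminal output far enough away even under worst-case disturbances: admissible
ok = satisfies_tightened_constraint([4.0, 0.0], [0.0, 0.0], 1.0, [0.5, 0.5, 0.5])
# too close once the accumulated disturbance bound is added: rejected
bad = satisfies_tightened_constraint([2.0, 0.0], [0.0, 0.0], 1.0, [0.5, 0.5, 0.5])
```

Summing the per-step bounds $w_{\max,i}$ is what makes the multi-step variant more conservative than the one-step tightening in (36).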

APPENDIX II INEQUALITY DERIVATION
In the following it is shown that $\|y_j - y_{r,j}\|_2 \geq c_j$ holds if $\|y_j - \bar{y}_{r,j}\|_2 \geq c_j + w_{\max,j-1}$. From (2) it follows that
$$ \|y_j - y_{r,j}\|_2 = \|y_j - \bar{y}_{r,j} - w_{j-1}\|_2. \tag{83} $$
Using the reverse triangle inequality yields
$$ \|y_j - \bar{y}_{r,j} - w_{j-1}\|_2 \geq \|y_j - \bar{y}_{r,j}\|_2 - \|w_{j-1}\|_2 \geq c_j $$
with $\|w_{j-1}\|_2 \leq w_{\max,j-1}$, since $\|y_j - \bar{y}_{r,j}\|_2 - \|w_{j-1}\|_2 \geq \|y_j - \bar{y}_{r,j}\|_2 - w_{\max,j-1} \geq c_j$.
Equation (36) in Section III-C is obtained for j = 1.
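The bound above can be checked numerically. The sketch below uses illustrative values (three-dimensional outputs, $c = 1$, $w_{\max} = 0.4$ are assumptions): for an output satisfying the tightened constraint, no disturbance with $\|w\|_2 \leq w_{\max}$ can push the true distance below $c$.

```python
import numpy as np

# Numerical check of the Appendix II bound (illustration, not the method):
# if ||y - ybar_r||_2 >= c + w_max, then for every disturbance w with
# ||w||_2 <= w_max the true distance satisfies ||y - (ybar_r + w)||_2 >= c.
rng = np.random.default_rng(1)
c, w_max = 1.0, 0.4
ybar_r = np.zeros(3)
y = np.array([c + w_max + 0.05, 0.0, 0.0])   # tightened constraint holds

ok = True
for _ in range(1000):
    w = rng.normal(size=3)
    w *= rng.uniform(0.0, w_max) / np.linalg.norm(w)   # rescale: ||w||_2 <= w_max
    ok = ok and np.linalg.norm(y - (ybar_r + w)) >= c
```

Since the reverse triangle inequality guarantees the bound deterministically, `ok` remains true for every sampled disturbance.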

APPENDIX III COLLISION PROBABILITY FUNCTION
Here the collision probability $p_{\mathrm{col},k}$ is described in detail; it is only needed for the evaluation of the simulation, not for the proposed method. The PDF $f_{W_k}$ is chosen as a Gaussian in the radius $r_k$, which is used instead of $w_k$, with variance $\sigma = 1$ and $\Phi(r) = 0.5 \left( 1 + \mathrm{erf}\left( r / \sqrt{2} \right) \right)$.

Fig. 7: Collision probability calculation. The blue circle combines the radius of the controlled vehicle and the obstacle; the dashed orange circle represents the area potentially covered by the uncertainty; the striped area represents one half of the intersection between the two circles.
As the main aim of this simulation is to minimize the constraint violation probability, i.e., the collision probability, an expression for this probability is necessary in order to analyze the simulation results. The controlled vehicle and the obstacle collide if their bodies overlap, i.e., $r_{\mathrm{comb}} > \|y_k - y_{r,k}\|_2$ with the combined radius $r_{\mathrm{comb}} = r_c + r_r$. A collision at step $k$ is inevitable if $\|y_k - \bar{y}_{r,k}\|_2 + w_{\max,k-1} < r_{\mathrm{comb}}$, i.e., even for the best-case realization the objects will collide at step $k$. For $\|y_k - \bar{y}_{r,k}\|_2 - w_{\max,k-1} \geq r_{\mathrm{comb}}$ it follows that $p_{\mathrm{col},k} = 0$.

The collision probability is calculated according to the following procedure. We consider a circle whose radius is the required distance $r_{\mathrm{comb}}$ and a circle with radius $w_{\max,k-1}$ covering the support of the uncertainty. The intersection of both circles can be interpreted as the collision probability, obtained by integrating over the intersection area of both circles, weighted with $f_{W_k}(r_k)$. This is illustrated in Figure 7. If there is no intersection area, then $p_{\mathrm{col},k} = 0$. If an intersection exists, there are two intersection points. The intersection area is therefore bounded on one side by the arc with radius $r_{\mathrm{comb}}$ and on the other side by the arc of the boundary of the uncertainty. As the intersection area is symmetric, it is sufficient to derive the calculation for one half, i.e., the area between the line connecting $\bar{y}_{r,k}$ and $y_k$ and the intersection point $p_{\mathrm{int},1}$, as depicted by the striped area in Figure 7. This yields an angle $\theta_{\mathrm{int},1} \in [0, 0.5\pi]$ between the two lines connecting $\bar{y}_{r,k}$ and $y_k$ as well as $\bar{y}_{r,k}$ and $p_{\mathrm{int},1}$. The distance $r_{\mathrm{int}}(\theta)$ between $\bar{y}_{r,k}$ and the controlled vehicle boundary between the two intersection points follows from the law of cosines with $d_k = \|y_k - \bar{y}_{r,k}\|_2$ and $\theta \in [0, \theta_{\mathrm{int},1}]$, where
$$ \theta_{\mathrm{int},1} = \sin^{-1}\left( \frac{a}{2 w_{\max,k-1}} \right), \quad a = \frac{1}{d_k} \sqrt{ \left( d_k + r_{\mathrm{comb}} + w_{\max,k-1} \right) \left( -d_k + r_{\mathrm{comb}} + w_{\max,k-1} \right) \left( d_k - r_{\mathrm{comb}} + w_{\max,k-1} \right) \left( d_k + r_{\mathrm{comb}} - w_{\max,k-1} \right) } $$
denotes the distance between the two intersection points. This yields
$$ r_{\mathrm{int}}(\theta) = 0.5 \left( 2 d_k \cos(\theta) - \sqrt{ \left( 2 d_k \cos(\theta) \right)^2 - 4 \left( d_k^2 - r_{\mathrm{comb}}^2 \right) } \right). $$
The intersection area on both sides of the line between $y_k$ and $\bar{y}_{r,k}$, weighted with the PDF $f_{W_k,\mathrm{pol}}$, yields the collision probability
$$ p_{\mathrm{col},k} = 2 \int_0^{\theta_{\mathrm{int},1}} \int_{r_{\mathrm{int}}(\theta)}^{w_{\max,k-1}} \frac{1}{2\pi} \, f_{W_k,\mathrm{pol}}(r) \, \mathrm{d}r \, \mathrm{d}\theta \tag{98} $$
for $d_k + w_{\max,k-1} \geq r_{\mathrm{comb}}$, depending on the angle $\theta_{\mathrm{int},1}$.
This yields the overall collision probability
$$ p_{\mathrm{col},k} = \begin{cases} 0 & \text{if } d_k - w_{\max,k-1} \geq r_{\mathrm{comb}}, \\ \text{(98)} & \text{otherwise}. \end{cases} \tag{99} $$
For reasons of readability, the dependence on $k$ of $p_{\mathrm{int}}$, $\theta_{\mathrm{int},1}$, $r_{\mathrm{int}}$, and $a$ is omitted.
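The limiting cases of (99) can be cross-checked with a simple Monte-Carlo estimate. The sketch below is illustrative only and replaces the radial Gaussian PDF with a uniform distribution on the uncertainty disc (an assumption); it reproduces the cases $p_{\mathrm{col},k} = 0$ (no possible overlap) and $p_{\mathrm{col},k} = 1$ (inevitable collision).

```python
import numpy as np

def collision_probability_mc(d, r_comb, w_max, n=50_000, seed=2):
    """Monte-Carlo estimate of the collision probability: fraction of
    sampled obstacle positions within distance r_comb of the vehicle.

    Assumption (for illustration only): the obstacle position is uniform
    on the disc of radius w_max around ybar_r, instead of the Gaussian
    radial PDF used in the simulation."""
    rng = np.random.default_rng(seed)
    r = w_max * np.sqrt(rng.uniform(size=n))       # uniform sampling on a disc
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    obstacle = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    vehicle = np.array([d, 0.0])                   # vehicle at distance d
    return float(np.mean(np.linalg.norm(obstacle - vehicle, axis=1) < r_comb))

# limiting case p_col = 0: the circles cannot intersect (d - w_max >= r_comb)
p_zero = collision_probability_mc(d=3.0, r_comb=1.0, w_max=1.5)
# limiting case p_col = 1: collision is inevitable (d + w_max < r_comb)
p_one = collision_probability_mc(d=0.1, r_comb=2.0, w_max=1.0)
```

With the radial Gaussian weighting of (98), intermediate configurations would instead yield values strictly between these two extremes.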