An Amplitude‐ and Rate‐Saturated Controller for Linear Plants

This paper proposes a new nonlinear controller applicable to single‐input linear systems under bounded disturbance. The controller provides control signals satisfying specified amplitude and rate‐of‐change limitations. This feature is realized by its sliding mode‐like structure comprising a set‐valued function. The controller also employs a state‐dependent parameter to broaden the region of attraction and to shrink the terminal attractor. In addition, this paper provides a discrete‐time implementation of the proposed controller based on a model‐based implicit discretization scheme. Numerical examples show the validity of the proposed controller.


I. INTRODUCTION
Some applications need special control techniques that suit specific limitations of their hardware and actuators. Neglecting these limitations in the controller design may cause performance degradation or instability. The need for this type of controller appears in aircraft [1,2] and ship steering autopilot systems [3]. Moreover, limiting both the amplitude and the rate-of-change of the control signal is needed for safety reasons and to protect the controlled system from drastic commands and physical wear, such as in wind turbine systems [4,5].
Many studies have examined the behavior of closed-loop systems under these limitations. Some of the previous studies [6][7][8] focus on the stability problems raised by the limitation of the amplitude of the control signal. Imposing constraints on the control signal's rate-of-change further complicates the stability problem, as has been studied in [9][10][11][12][13][14][15].
The main interests of the previous studies on systems under amplitude- and rate-saturated controllers are to avoid instability and to realize smooth behavior. There have been two approaches to such systems. The first approach [9][10][11] is to model the actuator dynamics as a nested saturation system, in which the amplitude and the rate-of-change of the control signal are saturated. Most of these studies employed linear matrix inequality (LMI)-based conditions to select suitable linear feedback gains that maintain stability. The second approach [12][13][14][15] is to construct a nonlinear controller providing control signals that already satisfy the amplitude and rate limitations imposed by the actuators.
Regarding the first approach, Gomes da Silva et al. [10] proposed a state feedback controller for linear systems with such actuator limitations. They clarified the trade-off between the closed-loop performance and the size of the region of attraction, and they proposed an optimization-based algorithm to obtain the controller parameters. Palmeira et al. [11] also introduced a state feedback controller, in which the control signal is sampled with a non-periodic sampling interval. To obtain the controller parameters, they introduced two optimization problems based on two scenarios: one aims to maximize the region of attraction, and the other aims to maximize the sampling interval permissible for stability. It should be noted that the existence of external disturbance was not taken into account in [10,11].
Regarding the second approach, Stoorvogel and Saberi [12] presented a nonlinear state feedback controller that produces a control signal with limited amplitude and limited rate-of-change. They employed an observer-based measurement feedback to reject the disturbance effects. Gomes da Silva et al. [13] introduced a controller with amplitude and rate-of-change limitations involving two anti-windup loops, which requires additional parameters to be tuned appropriately. This controller has been improved by Bender and Gomes da Silva [14] to take the existence of disturbance into account, where the disturbance tolerance and the system output magnitude are treated in a framework of an optimization problem under LMI constraints.
In this paper, we follow the second approach and propose a new controller that provides a control signal limited in both its amplitude and its rate-of-change. This controller has a structure in which the saturation function and the signum function are used in a nested way. This structure is similar to the one called an ideal rate limiter [9,10,16], and it does not include anti-windup loops. One of the main features of the new controller is that it involves a state-dependent parameter to suppress the effect of disturbance without affecting its convergence behavior. More specifically, this state-dependent parameter imposes a low gain when the state is far from the origin and a high gain when the state is near the origin. Another feature of our controller lies in its discrete-time implementation, which is derived based on the implicit Euler method to avoid the chattering caused by the discontinuous (or, more strictly, set-valued) function.
The remainder of this paper is organized as follows. Section II presents the problem formulation in continuous time and reviews two previous approaches for linear systems subjected to amplitude- and rate-saturated control signals. In Section III, we analyze the idea of using the signum function to produce rate-saturated control signals. Section IV introduces a new control scheme with a designed state-dependent parameter to improve its convergence behavior and its insensitivity to the disturbance near the origin. A discrete-time algorithm of the proposed control scheme is also presented in Section IV. Section V shows illustrative numerical examples of the proposed controller. Finally, concluding remarks are provided in Section VI.

II. PRELIMINARIES
In the rest of this paper, R denotes the set of all real numbers. The symbol 0 denotes the zero vector or the zero matrix of appropriate dimensions. For a set S, the symbols cl(S) and S c denote the closure and the complementary set of S, respectively, while Int(S) and 𝜕S denote its interior and its boundary. The set of all subsets of S is denoted by 2 S . For brevity, Eig(X) denotes the set of all eigenvalues of a matrix X.
In the following, we extensively use the saturation function sat(⋅), defined by sat(x) = x for |x| ≤ 1 and sat(x) = x∕|x| otherwise. We also use the set-valued signum function sgn(⋅), defined by sgn(x) = {x∕|x|} for x ≠ 0 and sgn(0) = [−1, 1]. It should be noted that the function sgn(⋅) can be seen as an almost-everywhere pointwise limit of sat(⋅) with its argument scaled up, which can be easily proven by comparing the definitions of sgn(x) and sat(x) for x ≠ 0.
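As a concrete restatement of these definitions, a minimal Python sketch (the tuple representation of the set value of sgn at zero is ours):

```python
def sat(x):
    # unit saturation: identity on [-1, 1], clipped outside
    return max(-1.0, min(1.0, x))

def sgn(x):
    # set-valued signum: a singleton for x != 0, the whole interval [-1, 1] at 0
    return (x / abs(x),) if x != 0 else (-1.0, 1.0)

# For x != 0, sat(x / eps) tends to the single element of sgn(x) as eps -> 0:
for x in (-2.0, -0.3, 0.7, 5.0):
    assert abs(sat(x / 1e-9) - sgn(x)[0]) < 1e-6
assert sgn(0) == (-1.0, 1.0)
```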

Problem setting
This paper considers linear controlled plants of the following form: where x ∈ R n is the state vector, u ∈ R is the control input, and the plant is subject to an unknown scalar perturbation. We assume that there exists L m > 0 such that the magnitude of the perturbation is bounded by L m for all t > 0. Throughout this paper, the matrices A ∈ R n×n and b ∈ R n are constant, and the pair (A, b) is controllable.
Here, we assume that the input signal u needs to satisfy |u| ≤ F and |u̇| ≤ R, where F and R are positive constant scalars. We also assume that L m < F. For the convenience of further derivation, we use another positive constant L that satisfies L m < L < F.

Modeling of amplitude and rate limitation
To model the control input restriction of the form of (6), many researchers employ the following differential equation: Here, the right-hand side involves a scalar function (x) of the state vector x and two positive constant scalars, one of which, the time constant T 1 , characterizes a first-order lag. This differential equation has been used as a model of the hardware limitation of actuators [9,11] and for controllers that have amplitude and rate-of-change limitations [15]. When (7) is viewed as an actuator model, as shown in Fig. 1(a), the constant T 1 can be seen as a model parameter determined by the hardware, serving as the time constant of the first-order lag. In this case, the scaled function (x) and u are the input and the output of the actuator model, respectively. Bateman and Lin [9] employed this actuator model and derived conditions on the controller parameters to achieve stability in the presence and in the absence of disturbances. Other research works [11,16] focused on the enlargement of the domain of attraction.
When (7) is viewed as a controller, as shown in Fig. 1(b), the actuator is considered as a part of the linear plant that accepts a control input satisfying (6). The extreme case where T 1 ↘ 0 is considered by Stoorvogel and Saberi [12], in which (7) reduces to (8), as shown in Fig. 1(c). Note that the controller of this extreme case is effective only if the time constant T 1 of the actuator is sufficiently close to zero. As is formally pointed out in [9,10,16], (8) can be seen as an ideal amplitude and rate limitation operator.
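An ideal amplitude and rate limitation operator of this kind can be sketched in discrete time as follows; the bound names F (amplitude) and R (rate) and the forward-Euler stepping are our own illustrative choices, not the paper's notation:

```python
def rate_amp_limit(u_prev, v, F, R, h):
    """One Euler step of an ideal amplitude/rate limiter:
    move u toward the amplitude-saturated command sat_F(v),
    by at most R*h per step of length h."""
    target = max(-F, min(F, v))                       # amplitude saturation
    step = max(-R * h, min(R * h, target - u_prev))   # rate saturation
    return u_prev + step

# Track a step command v = 2 with F = 1, R = 2.5, h = 0.01:
u, h, F, R = 0.0, 0.01, 1.0, 2.5
for _ in range(100):
    u = rate_amp_limit(u, 2.0, F, R, h)
# u ramps up at slope R and settles at the amplitude bound F = 1.
```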

III. ANALYSIS OF THE SGN-SAT TYPE CONTROLLER
By combining the controlled plant (5) with the control law (8), we obtain the system (9) and (10). Then, we can consider a subsystem of this system, (14) and (15), where (x, u, ) is a linear function of {x, u, }. The subsystem (14) and (15) is obtained by projecting the system (9) and (10) onto the subspace R 3 with the following operation: where For the convenience of further discussion, let us define the following subsets of the sub-state space R 3 , which are illustrated in Fig. 2. We also define operators that make the correspondence between a subset of the total state space R n+1 and a subset of the sub-state space R 3 . Throughout this paper, a calligraphic symbol with or without a hat denotes a subset of R n+1 or R 3 , respectively. Now, let us show that the sliding mode takes place at a portion of . Theorem 1. Let us consider the system (9) and (10). Then, the sliding mode takes place at the portion ( C ∪ ( L ∩  )) of the surface (), on which s( ) = 0 is satisfied.
Proof. Let us consider the following function: One can see that, for a point s ∈ , if there exists a positive constant with which (28) is satisfied in the intersection of an open neighborhood of s and the subset where s( ) ≠ 0, then the sliding mode takes place at s . The following proof shows that such a positive constant exists for every point of  C ∪ ( L ∩  ).
From (17), we can obtain the following: This means that, if | | > and s( ) ≠ 0, (28) is satisfied with = and thus the sliding mode takes place on the set  C , which is the portion of  that lies in the region | | > .
Meanwhile, if | | < , s( ) ≠ 0, and ∈  are satisfied, (29) implies that the following is satisfied: where we used the following fact: This means that (28) is also satisfied in this case, and thus the sliding mode also takes place on the set  L ∩  . Therefore, we can see that the subsystem (14) and (15) is in the sliding mode on the patch  C ∪ ( L ∩  ) of the switching surface . This implies that the total system (9) and (10) is in the sliding mode on the patch ( C ∪ ( L ∩  )) of the surface (). This completes the proof.
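The reaching behavior behind Theorem 1 can be illustrated on a scalar sliding variable obeying ṡ ∈ −γ sgn(s) + d with γ > L m ≥ |d|, for which the surface s = 0 is reached no later than |s(0)|∕(γ − L m ). A minimal sketch (symbols and numbers are ours):

```python
import math

# Scalar reaching dynamics: s' in -gamma*sgn(s) + d, with gamma > Lm >= |d|.
gamma, Lm, h = 1.0, 0.3, 1e-3
s, t = 2.0, 0.0
while abs(s) > gamma * h:              # stop within one step of the surface
    d = Lm * math.sin(5.0 * t)         # any disturbance with |d| <= Lm
    s += h * (-gamma * (1.0 if s > 0.0 else -1.0) + d)
    t += h
# The elapsed time t stays below the worst-case bound |s(0)|/(gamma - Lm).
```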
This theorem indicates that, once the state reaches the manifold , the state may escape from  only through the portion  L ∩ . Considering that u = − ∕ is satisfied on  L , the subset  L ∩  can be written as follows: After reaching the set  C , the state moves toward  L as long as it stays in  C ∩ . The following theorem formally states this fact.

Theorem 2. Let us consider the system (9) and (10). Then, every state in the set  C ∩  leaves this set within a finite time t m , moving only into cl( L ) or cl( C ∩ ).
Proof. When ∈  C ∩ , the following is satisfied: which leads to the following: With the use of the comparison lemma [17, Lemma 3.4], one can obtain the following from (34): In  C ∩ , | | is lower-bounded by a positive constant. Therefore, if we set t m as in (37), we have the following: This means that, at the time instant t m , the state is outside the set  C ∩ . Because the sliding mode takes place on  C ∩ , the state does not move directly from  C ∩  into . Therefore, the possible transitions at the time t m are only into cl( L ) and cl( C ∩ ). This completes the proof.
After the state reaches the set  L (i.e., the state reaches the set( L )), the system is linear. As long as the system state stays in( L ∩  ) (i.e., the condition (31) is satisfied), we can prove that the state is attracted to a neighborhood of the origin through the following Theorem.
Theorem 3. With the system (9)(10), there exists a set ⊂ ( L ∩  ) that includes the origin and is asymptotically stable if L m is small enough and if A − bc T ∕ is Hurwitz.
Proof. When ∈  L , the system (9) and (10) reduces to the following linear system: If A − bc T ∕ is Hurwitz, for every positive definite matrix Q ∈ R n×n , there exists a positive definite matrix P ∈ R n×n that satisfies With such Q and P, let us define the following function: Then, we obtain where the coefficient denotes the minimum eigenvalue of Q. This means that V̇ q (x) < 0 is satisfied if Thus, we can define the following quantity: which does not depend on L m . Based on this, let us define the following: Let us assume that ⊂ Int(( L ∩  )). Then, there exists an open neighborhood of that is small enough to satisfy ∩ ( L ∖ ) = ∅. With such a neighborhood, we can see where a 1 and a 2 are positive scalars. Therefore, if one sets we can see that V a ( ) = 0 is satisfied for all ∈ and V̇ a ( ) < 0 is satisfied for all ∈ ∖. Thus, is asymptotically stable in a local sense if ⊂ Int(( L ∩  )). The definition (45) implies that ⊂ Int(( L ∩  )) is satisfied if L m is small enough. This completes the proof.
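The construction in the proof, picking a positive definite Q and solving the Lyapunov equation for P, can be reproduced numerically. A minimal sketch with an illustrative Hurwitz matrix standing in for A − bc T ∕ (the numbers are our assumptions, not the paper's):

```python
import numpy as np

# Illustrative Hurwitz closed-loop matrix (eigenvalues -1 and -4).
A_cl = np.array([[0.0, 1.0], [-4.0, -5.0]])
Q = np.eye(2)

# Solve the Lyapunov equation A_cl^T P + P A_cl = -Q via vectorization:
# (I (x) A_cl^T + A_cl^T (x) I) vec(P) = -vec(Q), with column-major vec.
n = A_cl.shape[0]
K = np.kron(np.eye(n), A_cl.T) + np.kron(A_cl.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten(order="F")).reshape(n, n, order="F")

# P is symmetric positive definite, so V_q(x) = x^T P x decreases along the flow.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(A_cl.T @ P + P @ A_cl, -Q)
```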
With respect to the terminal attractor, a subset of the region of attraction can be given as follows: If the set shares its boundary with the set( L ∩  ), then the state reaches the terminal attractor, as long as it stays in( L ∩  ).
In conclusion, as long as the state is in the portion ( C ∩ ), it is attracted to the subset ( L ). Once the state reaches , which is a subset of ( L ∩  ), it asymptotically approaches the terminal attractor.
It should be noted that a smaller value of the parameter results in a smaller terminal attractor because the attractor is a subset of ( L ∩  ), whose width is determined by the parameter. However, it also results in a smaller linear sliding patch ( L ∩  ), which includes a subset of the region of attraction. Therefore, one can conclude that the parameter should be large when the state is far from the origin and should be small when the state is close to the origin.

IV. PROPOSED CONTROLLER
Now, we propose a new controller algorithm based on the discussion in Section III. The proposed controller is built on the controller (8) but the parameter is determined by a particular function of the state variables x and u. This state-dependent parameter is chosen to decrease the size of the terminal attractor and to increase the size of the linear sliding patch( L ∩  ).

Continuous-time representation
For the application to the controlled plant (5), we present a new controller as follows: where Here, the first constant is the maximum control signal amplitude, c is a positive constant representing an upper bound of (x, u), L is a positive constant that is greater than the expected disturbance magnitude, and c ∈ R n is a vector chosen so that it satisfies c T b > 0. It is assumed that the maximum control signal amplitude is greater than L, i.e., This choice of the state-dependent parameter (x, u) is motivated by the proof of Theorem 1, which shows the condition that needs to hold true to realize the sliding mode. To satisfy this condition, one choice is to set the parameter as follows: Here, we need to prevent the parameter from becoming excessively large because a very large value means a very low control gain. Considering these points, we can see that the definition (52) of (x, u) is a natural choice, in which an upper bound c is set.
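Since the exact definition (52) is not restated here, the following is only a qualitative sketch of its shape, a state-driven quantity clipped at an upper bound; the proportional form inside min(⋅) and all names (rho_c, F, L) are our illustrative assumptions, not the paper's definition:

```python
import numpy as np

def rho(x, u, A, b, c, rho_c, F, L):
    # State-driven term: large far from the origin (low gain), small near it.
    drive = abs(c @ (A @ x + b * u))
    # Clip at the upper bound rho_c; (F - L) is the margin between the
    # amplitude bound F and the disturbance-rejection level L (assumed form).
    return min(rho_c, drive / (F - L))

# Illustrative double-integrator data (assumed):
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([0.0, 1.0])
c = np.array([4.0, 1.0])
far = rho(np.array([1.0, 1.0]), 0.0, A, b, c, rho_c=0.17, F=1.0, L=0.125)
near = rho(np.array([1e-3, 0.0]), 0.0, A, b, c, rho_c=0.17, F=1.0, L=0.125)
# far hits the upper bound rho_c; near shrinks toward zero (high gain).
```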
With the state-dependent parameter (x, u), the stability proofs in Section III do not strictly hold because it injects additional terms, proportional to the time derivative of the parameter, into the derivatives of V s (s( )) and V q (x). They remain valid if this derivative is small enough, although it is still unclear in which regions of the state space it can be said to be small enough. One approach to this problem might be to use the fact that an upper bound of this derivative can be given as a function of the state vector. Leaving this as an open problem, we attempt to support the usefulness of the controller through some numerical examples in Section V.

Parameter tuning guideline
This section shows an approach to designing the vector c ∈ R n and the parameter c > 0, which comprise all parameters that need to be designed. Here, we consider the problem of choosing c and c so that the eigenvalues of the matrix A − bc T ∕ are within a given region  in the complex plane for all ∈ (0, c ]. The region  can be given according to the response characteristics, such as the damping ratio and the settling time, required by the application.
Let us define Then, one can see that, as ↘ 0, one of the eigenvalues of the matrix A d ( ) goes to −∞, while the others remain finite because we have assumed that c T b > 0 in the condition (12). Thus, we need to design the vector c so that lim ↘0 Eig(A d ( )) ⊂ . For this purpose, the following theorem is useful.
With this theorem, a pole placement problem for an n × n system with one infinitely fast pole can be reduced to a pole placement problem for an (n − 1) × (n − 1) system. This theorem can be used to set the finite eigenvalues of lim ↘0 A d ( ) to specified locations {q 1 , · · · , q n−1 }, which should be located in . To apply this theorem to the system (5), we define a matrix T ∈ R n×n so that Tb = [0 T n−1 , 1] T is satisfied. Once we obtain the vector c̄ 1 ∈ R n−1 using Theorem 4, the vector c can be chosen as follows: Now, we discuss the choice of c , which is the upper bound of the state-dependent parameter (x, u). Recalling that Eig(A d ( )) ⊂  should be satisfied for all ∈ (0, c ], the maximum of such values can be found by drawing the loci of Eig(A d ( )) with the parameter increasing from zero and by searching for the critical value at which at least one of Eig(A d ( )) crosses the boundary of . Fig. 3 is an illustrative graph showing Eig(A d ( )) as a function of the parameter and the selection of c according to the region .
In conclusion, we propose the following procedure for choosing the vector c and the upper bound c :
1. Set a region  in the complex plane and the desired eigenvalue locations {q 1 , · · · , q n−1 } in  according to the required response characteristics of the application.
2. Find an invertible matrix T ∈ R n×n with which Tb = [0 T n−1 , 1] T is satisfied.
3. Calculate the matrix Ā 11 ∈ R (n−1)×(n−1) and the vector ā 12 ∈ R n−1 as follows:
4. Solve the pole placement problem to put Eig(Ā 11 − ā 12 c̄ T 1 ) at {q 1 , · · · , q n−1 }, and set c accordingly.
5. Draw the loci of Eig(A d ( )) with the parameter increasing from zero, and choose c as the critical value at which one of the eigenvalues reaches the boundary of .
Note that step 4 adjusts Eig(A d ( )) in the limit ↘ 0 and that step 5 adjusts Eig(A d ( )) for ∈ (0, c ].
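Step 5 can be carried out numerically. The sketch below assumes a double-integrator plant (consistent with Tb = [0, 1] T under T = I) together with the region ℜ(s) < −3 and the vector c = [4, 1] T that appear in Example 3:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator plant
b = np.array([[0.0], [1.0]])
c = np.array([[4.0], [1.0]])             # from placing q1 = -4 in step 4

# Step 5: increase the parameter from zero until an eigenvalue of
# A_d = A - b c^T / rho crosses the region boundary Re(s) = -3.
rho_c = 0.0
for rho in np.arange(0.001, 0.5, 0.001):
    if np.linalg.eigvals(A - (b @ c.T) / rho).real.max() >= -3.0:
        break
    rho_c = rho
# rho_c lands near 1/6, consistent with the value 0.17 reported in Example 3.
```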

Discrete-time implementation
This section presents a discrete-time algorithm of the proposed controller for its implementation on digital controllers. Since the proposed continuous-time controller (51) and (52) involves the set-valued function sgn(⋅), an inappropriate discretization prevents the exact sliding mode and causes chattering. Here, we employ the approach called the implicit method [19,20]. The idea of the implicit method is to resolve the set-valuedness of the controller's equation by viewing the mutual dependence between the control input and the system state as an algebraic constraint. This approach utilizes the model of the controlled plant as a predictor of the system state that would be achieved by a given control input.
Let us start from the implicit Euler discretization of the proposed controller (51) and (52) as follows: Here, h denotes the sampling interval. The system state x at time step k needs to be predicted by the nominal model of the controlled plant, which is This 'predictor' equation is obtained by neglecting the perturbation in the system model (5). Substituting x k in (64) by the predicted value x̂ k , we obtain the predicted value as follows: Let us define w k−1 ≜ c T (I + hA)x k−1 so that this predicted value is rewritten as follows: Then, substituting the corresponding term in (63) by (68) yields the following: in which u k appears on both the right- and left-hand sides. By using Theorem 5 presented in the Appendix, one can solve (69) with respect to u k as follows: Here, one can see that the set-valuedness in (69) is resolved in (70). In conclusion, the discrete-time controller, which realizes the controller (51) and (52), can be given as follows:
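The role of Lemma 1 (y ∈ a sgn(z − y) ⟺ y = sat a (z)) in resolving the set-valuedness can be seen on the scalar toy inclusion u̇ ∈ −γ sgn(u), which is our own minimal example rather than the full controller: its implicit Euler step u k ∈ u k−1 − hγ sgn(u k ) solves to u k = u k−1 − sat hγ (u k−1 ), which settles exactly at zero, while the explicit step chatters:

```python
def sat_a(x, a):
    return max(-a, min(a, x))

def sign(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

gamma, h = 1.0, 0.1
ue = ui = 0.55
for _ in range(20):
    ue -= h * gamma * sign(ue)    # explicit Euler: oscillates around zero
    ui -= sat_a(ui, h * gamma)    # implicit Euler: lands on zero and stays
# ue keeps chattering with amplitude h*gamma/2; ui reaches exactly 0.
```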
Remark 1. In cases where the rate-of-change of the control signal is not limited (i.e., the rate bound is infinite), the state-dependent parameter k holds at zero. In this case, the controller (71)-(73) reduces to the following simpler controller, which is an implicit implementation of the conventional sliding mode controller u = − sgn(c T x) combined with the nominal plant model (66).

Remark 2.
By keeping the rate bound finite and holding the parameter at zero, the controller (51) and (52) reduces to a nested-signum form for all x satisfying c T x ≠ 0. In order to deal with the case of c T x = 0, such a nested signum structure needs a rigorous re-definition of the set-valued mapping sgn. A similar nested signum structure appears in the work of Miranda-Villatoro et al. [21], who also used the implicit discretization scheme [19,20]. This paper does not discuss this extreme case because the parameter is kept strictly positive in our controller.

V. NUMERICAL EXAMPLES
In this section, we apply the proposed controller to some numerical examples. The simulations were performed in discrete time with MATLAB.

Example 1
In this example, we use an example reported in [9,16], which employs the controlled plant (5) with the following matrices: Here, note that the pair (A, b) is controllable. Let us consider that this controlled plant is subjected to disturbance and parameter uncertainty as in the following form: The control input u is under the restriction of |u| ≤ 1 and |u̇| ≤ 2.5, and the initial states are x 0 = [0, 0.4] T and u 0 = 0. The requirement is that the 2% settling time be less than or equal to 8 s and that the damping ratio be greater than or equal to 0.7. We apply the discrete-time algorithm (71)-(73) with the sampling interval h = 0.01. We here set L = 0.125, and we obtain c and c based on the design procedure in Section IV, as follows: 1. We set  as shown in Fig. 4, i.e.,  = {s ∈ C | ℜ(s) ≤ − ln(0.02)∕8 ∧ cos(arg(s)) ≤ −0.7}. We also set q 1 = −0.5 so that it resides in . Fig. 5 shows the results of simulation under the disturbance 0.1 sin(t) and ΔA = 0. Here, one can see that the use of the small constant ≡ 0.6 c results in instability. In contrast, the use of the state-dependent (x, u) provides the best convergence behavior in spite of the fact that the value of (x, u) eventually becomes smaller than 0.6 c . It should be noted that the decrease of (x, u) takes place only after the state comes close to the origin. This decrease leads to better disturbance rejection than in the case with the larger constant ≡ c . Fig. 6 shows the results of simulation under the disturbance and ΔA indicated in (79). The state still smoothly converges to the neighborhood of the origin with the state-dependent (x, u), although the constant values produce larger errors and overshoots. It should be noted that the state-dependent (x, u) results in a smaller terminal error than the larger constant ≡ c and smaller overshoots than the smaller constant ≡ 0.9 c .
It is also interesting to see that, although the constant ≡ 0.8 c results in instability, the proposed (x, u), which eventually falls far below 0.8 c , maintains stability and good convergence.
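The region used in step 1 can be encoded directly from the stated requirements (2% settling time ≤ 8 s, damping ratio ≥ 0.7); the function name is ours:

```python
import math
import cmath

def in_region(s, Ts=8.0, zeta=0.7):
    # 2% settling time bound: Re(s) <= -ln(0.02)/Ts = ln(0.02)/Ts (negative),
    # damping ratio bound: cos(arg(s)) <= -zeta.
    return s.real <= math.log(0.02) / Ts and math.cos(cmath.phase(s)) <= -zeta

# The chosen target q1 = -0.5 resides in the region:
assert in_region(-0.5 + 0.0j)
# A pole decaying too slowly, or one that is too oscillatory, does not:
assert not in_region(-0.3 + 0.0j)
assert not in_region(-1.0 + 2.0j)
```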

Example 2
As an example of a third-order system, we consider the controlled plant (5) with the following controllable pair of matrices: Let us consider that this controlled plant is subjected to disturbance and parameter uncertainty as in the following form: where the disturbance is equal to 0.0184 sin(t) from 0 to 7 s and, after that, changes into 0.0184 sin(2t). The control signal amplitude and its rate-of-change are set to be subject to the following limitations: |u| ≤ 1 and a bound on |u̇|. By drawing the loci of the eigenvalues as shown in Fig. 7, we obtain c = 1∕1.44 ≈ 0.69. Fig. 8 shows the simulation results of Example 2 under the existence of the disturbance. It is clearly seen that the convergence is much faster with (x, u) than with the smaller constant ≡ 0.2 c in spite of the fact that (x, u) goes below 0.2 c . Fig. 8 also shows that the system with (x, u) is less sensitive to the disturbance than that with the larger constant ≡ c . Fig. 9 shows the simulation results of Example 2 under the existence of both the disturbance and ΔA. The states with (x, u) converge faster than those with ≡ 0.2 c , and they are less sensitive to the disturbance than those with ≡ c .

Example 3
In this example, the proposed controller is compared with a previous discrete-time controller introduced by Palmeira et al. [11]. The plant considered in [11] is as follows: Here, (85) is regarded as an actuator having dynamics with the time constant T 1 as in Fig. 1(a). The actuator provides the input u a to the plant (84), and the control command u needs to be provided by the controller. The initial states are set as x 0 = [0.01, −0.24] T and u a0 = 1.2.
The actuator parameters are set as T 1 = 0.05 with the amplitude and rate bounds being 1 and 10, respectively. Palmeira et al.'s [11] controller is obtained by solving an optimization problem that maximizes the region of attraction with a given sampling interval h ∈ [0.01, 0.07], without considering the existence of the disturbance. The obtained controller is written as follows: We apply our proposed controller to this example while neglecting the actuator dynamics (85). The requirements are assumed to place the eigenvalues of the overall system in the region ℜ(s) < −3 with {q 1 } = {−4}. We set h = 0.01 and L = 0.11 so that the disturbance magnitude stays below L. By using the proposed tuning guideline, the vector c and c are obtained by the following procedure: 1. We set  as shown in Fig. 10, i.e.,  = {s ∈ C | ℜ(s) < −3}, and we set {q 1 } = {−4} so that it resides in . 2. To realize Tb = [0, 1] T , we set T = I. 3. From (62), we find that Ā 11 = 0 and ā 12 = 1. 4. By solving the pole placement problem to put Eig(Ā 11 − ā 12 c̄ T 1 ) at {q 1 }, we find that c̄ 1 = 4, and thus we set c = [4, 1] T . 5. By drawing the loci of Eig(A − bc T ∕ ) as shown in Fig. 10, we obtain c = 0.17. In Fig. 11 and Fig. 12, we compare the proposed controller and the previous controller [11]. Fig. 11 shows the results under no disturbance, where both controllers realize smooth convergence. Fig. 12 shows the results with the non-vanishing disturbance 0.1 sin(t), where the proposed controller shows much better performance against the disturbance than the previous one. The graphs in Fig. 11 and Fig. 12 show that there exists a significant lag between the actuator signals {u a , u̇ a } and the controller signals {u, u̇}, which is caused by the actuator dynamics (85). We can see that the proposed controller provides smooth and accurate convergence even under this lag.

Remark 3.
The controlled plants in Examples 1 and 3 are unstable systems. As can be seen in these examples, the proposed controller is applicable also to unstable plants.

VI. CONCLUSIONS
This paper has proposed a sliding mode-like controller that produces a control signal with limitations on both its amplitude and its rate-of-change. This paper is motivated by the idea of the ideal rate limiter, which involves the nested signum (sgn) function, while we used a saturation (sat) function to limit the control signal amplitude. We have analyzed this sgn-sat type controller and have shown that the total closed-loop system reduces to a linear system in the sliding mode. Based on the analysis, we designed our controller using a state-dependent parameter. As the state approaches the origin, the value of this parameter is reduced, and it contributes to the reduction of the size of the terminal attractor. A tuning guideline for the other controller parameters has also been presented, which places the poles of the system in a given region in the complex plane. We have also presented a discrete-time implementation of the proposed controller, which is based on a model-based implicit discretization. This implementation does not produce the chattering that could be caused by other discretization schemes.
Future study should address a more in-depth analysis to obtain the region of attraction. In addition, optimal placement of the poles in the complex plane should be clarified.

VII. APPENDIX

Lemma 1. For any y, z ∈ R and a > 0, the following is satisfied: y ∈ a sgn(z − y) ⇐⇒ y = sat a (z). (A1)

Proof. See [22, Sec. II].
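The equivalence (A1) can be checked numerically over a grid of (z, a); the membership test in_a_sgn is our own encoding of the inclusion y ∈ a sgn(z − y):

```python
def sat_a(z, a):
    return max(-a, min(a, z))

# y in a*sgn(z - y) means: y == a*sign(z - y) if z != y, or |y| <= a if z == y.
def in_a_sgn(y, z, a):
    return abs(y) <= a if z == y else y == a * (1.0 if z > y else -1.0)

for a in (0.5, 1.0, 2.0):
    for z in (-3.0, -0.4, 0.0, 0.4, 3.0):
        y = sat_a(z, a)
        assert in_a_sgn(y, z, a)   # forward direction of (A1)
```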

Lemma 2.
For any y ∈ R and a, b, c, d > 0, the statement (A2) is satisfied. Proof. Let us define the function (y) appearing on the left-hand side of (A2). It is obvious that this function is strictly monotonically increasing and unbounded. Thus, there is a unique real value Y that satisfies (Y) = 0, and such a Y satisfies sgn( (y)) = sgn(y − Y). The value Y can be obtained by solving (Y) = 0 as follows. First, if |Y + c|∕d ≤ b, (Y) = 0 reduces to Y + a + (Y + c)∕d = 0, which is equivalent to (A4). Second, if (Y + c)∕d > b, (Y) = 0 reduces to Y + a + b = 0, which is equivalent to (A5). Third, if (Y + c)∕d < −b, (Y) = 0 reduces to Y + a − b = 0, which is equivalent to (A6). By combining these three cases (A4), (A5), and (A6), we can obtain the solution Y as in (A7). Therefore, the left-hand side of (A2) is equal to sgn(y − Y) with Y defined in (A7), which is equal to the right-hand side of (A2). This completes the proof.