Determining input‐to‐state and incremental input‐to‐state stability of nonpolynomial systems

In this study, we propose constructive ways to determine input-to-state stability (ISS) as well as incremental ISS (δISS) of nonpolynomial dynamical systems. The developed procedures are based on sum-of-squares (SOS) decomposition, a tool that is only applicable to polynomial systems. Therefore, a rational recast of the nonpolynomial system description is used. This recast generally increases the system order and introduces additional constraints, which must be respected in the resulting formulations. The proposed approach yields a constructive and systematic procedure to determine the ISS and δISS properties, a task that is normally nontrivial and requires a good understanding of the system's dynamics. The proposed approaches are illustrated on several examples.


INTRODUCTION
Stability is one of the fundamental properties in control engineering. In the theory of nonlinear systems, Lyapunov's direct method is the standard technique for stability analysis. Despite the diversity of possible applications, Lyapunov's method is in its original form only applicable to autonomous systems. This is not sufficient for many applications, especially if inputs or disturbances are considered, which leads to the concept of input-to-state stability (ISS).1 Due to the importance of that concept, it has been studied intensively during the last decades, for example, in References 1-5. Nevertheless, the determination of that property is not an easy task, and generally a deep insight into the system is necessary. This particularly holds if ISS is not sufficient because disturbances on a desired input signal have to be considered. In this case, incremental input-to-state stability (δISS)6,7 is the concept of choice. This concept extends the idea of stability with respect to an equilibrium point to the behavior of trajectories with respect to each other. Hence, ISS describes how trajectories behave near an equilibrium under disturbances, while δISS states how a disturbed input signal influences the resulting trajectory in comparison to the desired one. So δISS is of particular interest in a stabilizing context where robustness against disturbances should be achieved: δISS guarantees a bound on the resulting tracking error under uncertainties of the initial value and disturbances of the input signal.
In a Lyapunov context, a stability property often leads to a definiteness requirement, which is generally a complex problem. More precisely, determining the definiteness of fourth-order polynomials is NP-hard, see, for example, Reference 8. To overcome that problem and organize the calculation in an automatic manner, sum-of-squares (SOS) decomposition is used. This framework is based on the idea that a polynomial that can be decomposed into a sum of squares is positive semidefinite. Although not every positive semidefinite polynomial can be decomposed as an SOS,9 the question whether there exists a suitable decomposition can be decided with a semidefinite programming procedure.10 Thus, the computation is based on a convex optimization framework. As mentioned before, this concept is only applicable to polynomials and thus to polynomial system descriptions, but there are many examples containing trigonometric or exponential functions (eg, in mechanical or chemical systems 11). A priori, such systems cannot be analyzed using SOS decomposition. A rational recast 12,13 is briefly introduced to avoid these limitations. The results given here were partly published in the PhD thesis of the author.14 This article is organized as follows: The recasting process is explained in the next section. This is followed by an introduction of the Lyapunov-based stability analysis of recast systems. In Sections 4 and 5, these ideas are extended to ISS as well as δISS, respectively. The article ends with a concluding summary in Section 6.
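To make the SOS idea concrete, the following sketch (our own toy illustration using SymPy, not the semidefinite program the article relies on) verifies a classic decomposition by hand: Parrilo's polynomial 2x⁴ + 2x³y − x²y² + 5y⁴ is globally nonnegative because it is a sum of squares.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Classic example of a polynomial that is nonnegative because it is SOS.
p = 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4

# A known decomposition p = 1/2 * (q1**2 + q2**2); in practice such
# decompositions are found by semidefinite programming, not by hand.
q1 = 2*x**2 - 3*y**2 + x*y
q2 = y**2 + 3*x*y
sos = sp.Rational(1, 2) * (q1**2 + q2**2)

# Symbolic verification proves p(x, y) >= 0 for all real x, y.
assert sp.expand(p - sos) == 0
```

Searching for such a decomposition (together with suitable constraint multipliers) is exactly the feasibility problem that the SDP solver behind the SOS framework addresses.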

POLYNOMIAL RECAST
Consider a nonlinear system ż = f(z) (1) with the vector field f: Rⁿ → Rⁿ. We assume that (1) has an equilibrium point at the origin, that is, f(0) = 0. If the map f is nonpolynomial but consists of sums and products of nested elementary functions, such as exponentials, logarithms, trigonometric, or hyperbolic functions, it can be transformed into a rational equivalent of higher dimension using an algorithmic procedure. This was already shown in the 1980s.12,13 An equivalent approach is used in algorithmic differentiation.15,16 Thus, systems of the form ż_i = Σ_j Π_k f_ijk(z), where the f_ijk(z) are nested elementary functions, are examined in the following considerations. Algorithm 1 shows how the rational recast is carried out: for each nonpolynomial term, a new state is introduced and afterward its time derivative is calculated. This is repeated until the system description contains only rational terms.
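One pass of this recasting step can be sketched symbolically. The scalar system ż = −z + e^(−z) below is our own toy example (not from the article); introducing x₂ = e^(−z) and differentiating it along the dynamics yields a purely polynomial right-hand side.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

# Toy system z' = -z + exp(-z). Step 1: substitute the nonpolynomial
# term by a new state, x2 = exp(-z), so with x1 = z the first equation
# becomes x1' = -x1 + x2.
x1dot = -x1 + x2

# Step 2: differentiate the new state along the dynamics (chain rule):
#   x2' = d/dt exp(-z) = -z' * exp(-z) = -x1dot * x2
x2dot = sp.expand(-x1dot * x2)

# Both right-hand sides are now polynomial in (x1, x2).
assert sp.Poly(x1dot, x1, x2) is not None
assert sp.simplify(x2dot - (x1*x2 - x2**2)) == 0
```

The recast system carries the implicit constraints x₂ = e^(−x₁) and x₂ > 0, which is precisely the kind of side condition the article's stability formulations must respect.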

TABLE 1 Common nonlinearities and their related ODEs 15,26

Following Algorithm 1, we get x₁ = z and x₂ = √(x₁² + 1). This leads to the polynomial system (3). Thus, only one additional state is needed to polynomialize the system, although Table 1 shows that in general two (k = 2) states are necessary. The cancellation of the denominator leads to this simplification. Furthermore, two additional constraints arise. The first is x₂ − 1 ≥ 0 and the second is, by definition, x₂ = √(x₁² + 1). This second constraint is not polynomial and therefore not manageable with SOS, but we can reformulate it to the polynomial expression x₂² − x₁² − 1 = 0.
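A quick numerical sanity check (a sketch, not part of the article's procedure) confirms that both constraints are consistent with the definition x₂ = √(x₁² + 1) over the whole state space:

```python
import math
import random

# For x2 = sqrt(x1**2 + 1), both the domain constraint x2 - 1 >= 0 and
# the polynomial reformulation x2**2 - x1**2 - 1 = 0 hold everywhere.
random.seed(0)
for _ in range(1000):
    x1 = random.uniform(-10.0, 10.0)
    x2 = math.sqrt(x1**2 + 1.0)
    assert x2 - 1.0 >= 0.0                   # inequality constraint
    assert abs(x2**2 - x1**2 - 1.0) < 1e-9   # polynomial equality constraint
```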
The proposed recasting process is the basis for the upcoming stability considerations and leads to a three-step approach. First, the investigated system is checked for whether it is polynomial. If it is not, the illustrated recasting process is used to generate a polynomial system with constraints. Finally, the resulting system is analyzed with the stability formulations presented in the following sections.

STABILITY ANALYSIS OF THE RECASTED SYSTEM
In this section, the recasted system ẋ = f(x) is analyzed. The following statements are according to Reference 18 and are briefly reviewed as a basis for the upcoming considerations. As mentioned, the recasting process leads to additional constraints, which have several causes. The constraints directly induced by the recasting process are given by x₂ = F(x₁), and the indirect ones by G₁(x₁, x₂) = 0 and G₂(x₁, x₂) ≥ 0, where F, G₁, and G₂ are column vectors of functions and the equalities as well as inequalities hold element-wise. Furthermore, let g(x₁, x₂) be the collective denominator of (6) and (7), which means that gf₁ and gf₂ are polynomials. Moreover, we assume that g(x₁, x₂) ≥ 0, since otherwise the system is not well-posed.18 Furthermore, there are constraints resulting from the domains of the original nonpolynomial functions (eg, the square root is only defined for non-negative values). These constraints are described by the semi-algebraic set D = {(x₁, x₂) : G_D(x₁, x₂) ≥ 0}, where G_D(x₁, x₂) is a column vector of polynomials that fulfill the inequality entry-wise. The set Σ denotes the set of all SOS polynomials, while P is the set of all polynomials. Based on those assumptions, we can formulate the following proposition.
Remark 1. By substituting x₁ = z and x₂ = F(z) in the Lyapunov function V*(x₁, x₂), a corresponding Lyapunov function V(z) for the original system (1) results. Further note that the equilibrium z = 0 can be determined to be globally asymptotically stable by adding a positive definite term in (x₁, x₂) to (15). This guarantees the negative definiteness of V̇* and V̇, respectively.
With Proposition 1, we can verify the stability of system (3).
Remark 2. Please note that the Lyapunov candidate functions in the transformed coordinates do not necessarily fulfill the Lyapunov requirements (V positive definite and V̇ negative (semi-)definite). Nevertheless, the additional constraints lead to a valid candidate in the original coordinates; Equation (22) is an instance of this.

INPUT-TO-STATE STABILITY
The ISS property is an extension of asymptotic stability to systems of the form ż = f(z, w) with an input w. Based on these classes of comparison functions, we can define the ISS property.1

Definition 3 (Input-to-state stability). A system (25) is called ISS if there exist two functions β ∈ KL and γ ∈ K∞ such that for every initial value z₀ = z(0) and each measurable, essentially bounded input function w, the corresponding solution z(t, z₀, w) exists on the entire real axis and the inequality

|z(t, z₀, w)| ≤ β(|z₀|, t) + γ(||w||∞)    (26)

holds for t ≥ 0, where |·| denotes the Euclidean norm and ||·||∞ denotes the norm of the Lebesgue space L∞, respectively.
The ISS condition (26) means that every trajectory of an ISS system remains in a ball with radius β(|z₀|, t) + γ(||w||∞). For t → ∞, this shrinks to the smaller ball with radius γ(||w||∞). If the input is identically zero, we see that ISS implies global asymptotic stability. An equivalent characterization of this behavior can be formulated using ISS-Lyapunov functions.2,20

Definition 4 (ISS-Lyapunov function). A smooth function V: Rⁿ → [0, ∞) is called an ISS-Lyapunov function of (25) if the following conditions hold for all x and w, with α₁, α₂ ∈ K∞ and α₃, χ ∈ K.
Alternatively to condition (28), the ISS-Lyapunov function can be defined in a so-called "dissipation" form of characterization (see Remark 2.4 of Reference 2).
Equations (27) and (29) ensure the positive definiteness as well as the radial unboundedness of the function V, while (28) and (30) guarantee the negative definiteness of V̇ for all input magnitudes if |x| is large enough. Furthermore, an ISS-Lyapunov function V(z) of the system ż = f(z, w) is a Lyapunov function for the autonomous system ż = f(z, 0) and thus guarantees its stability.
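The ISS estimate can be illustrated numerically. The scalar system ż = −z + w below is our own toy example (not from the article); for it, the bound (26) holds with β(r, t) = r·e^(−t) and γ(r) = r, which a forward-Euler simulation confirms along the whole trajectory.

```python
import math

# Toy ISS system z' = -z + w with the estimate
#   |z(t)| <= |z0|*exp(-t) + ||w||_inf,
# checked pointwise along a forward-Euler simulation.
dt, T = 1e-3, 10.0
z0 = 2.0
z = z0
w_sup = 0.5            # ||w||_inf of the chosen disturbance
t = 0.0
while t < T:
    w = w_sup * math.sin(3.0 * t)   # bounded disturbance
    z += dt * (-z + w)
    t += dt
    assert abs(z) <= abs(z0) * math.exp(-t) + w_sup + 1e-6
```

For t → ∞ the β-term vanishes, so the trajectory ends up in the ball of radius γ(||w||∞) = 0.5, as the discussion above describes.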

Verification of ISS for polynomial systems
In this subsection, we consider a polynomial vector field f(x, w). The ISS analysis for such systems has been carried out by Ichihara 3 and is briefly reviewed here. The first task is to find SOS-compatible formulations for the comparison functions, because every ISS condition is based on them. The following lemma 3 yields a possible SOS condition for K∞ functions.

Lemma 1. A univariate real even polynomial without constant term, with at least one coefficient c_2i ≠ 0, belongs to class K∞ if and only if condition (32) holds for all s ∈ R.
Remark 4. It may seem unusual that condition (32) has to be fulfilled for all s, since functions of norms are considered. This requirement arises from the SOS procedure: (32) is checked to be a sum of squares and thus must hold for all s.
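As a sketch of what such a comparison-function candidate looks like, consider the hypothetical even polynomial α(s) = s² + 2s⁴ (our own example). The defining K∞ properties can be verified symbolically:

```python
import sympy as sp

s = sp.symbols("s", real=True)

# Hypothetical candidate: an even polynomial without constant term.
alpha = s**2 + 2*s**4

# K-infinity properties: alpha(0) = 0, alpha strictly increasing on
# (0, inf), and alpha unbounded.
assert alpha.subs(s, 0) == 0
dalpha = sp.diff(alpha, s)                    # 2*s + 8*s**3
assert sp.cancel(dalpha / s) == 2 + 8*s**2    # > 0, so dalpha > 0 for s > 0
assert sp.limit(alpha, s, sp.oo) == sp.oo
```

Note that α(s) is defined for all s ∈ R, even though only its restriction to s ≥ 0 (arguments are norms) is used; this mirrors Remark 4.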
Using the above lemma and the reformulation of Equations (29) and (30) by

TABLE 2 Common nonlinearities and their upper bounds
with  being the set of all SOS polynomials, the ISS property of a polynomial system can be determined with the software package SOSTOOLS. 3

ISS for nonpolynomial systems
The approach introduced in Section 4.1 cannot directly be applied to the recasted system (6)-(7). If a function α*(s) is a K∞ function in the transformed coordinates, then α*(s) is in general no K∞ function in the original coordinates z. The first requirement that needs to be ensured is that a K∞ function must be zero at z = 0. This can be achieved with (36). Furthermore, the resulting function is not a function of |z|: the function (36) guarantees monotonicity and radial unboundedness in |(z, F(z))ᵀ|, but not in |z|. To overcome that problem, the terms depending on F(z) need to be upper- or lower-bounded by terms depending on |z|. In condition (33), we need to determine a lower bound of the function α, which is tackled by the following proposition.

Proof. If V*(x₁, x₂) − α(x₁, x₂) ≥ 0 holds, then V(z) − α(z, F(z)) ≥ 0 holds as well. With |(z, F(z))ᵀ| = √(|z|² + |F(z)|²) ≥ √(|z|²) = |z|, it follows from the monotonicity of α that α(z, F(z)) ≥ α(z, 0) = α(|z|). Thus V(z) − α(|z|) ≥ 0 holds as well, with the K∞ function α(|z|). ▪

The same argumentation holds for the function in (35). For the function in (34), we need an upper bound instead. This estimation is not as generalizable as the lower bound. Nevertheless, the determination of an upper bound is unproblematic for common nonpolynomial terms, see Table 2. With the estimations of Table 2 and Equation (36), both the monotonicity condition and the condition at z = 0 hold.
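Table 2 itself is not reproduced here, but bounds of the kind it collects can be spot-checked numerically. The following sketch verifies two standard estimates, |sin(z)| ≤ |z| and √(z² + 1) ≤ |z| + 1 (the latter matching the recast term of the running example); such bounds replace F(z)-dependent terms by terms in |z| alone.

```python
import math
import random

# Spot-check of two standard upper bounds on common nonpolynomial terms:
#   |sin(z)|        <= |z|
#   sqrt(z**2 + 1)  <= |z| + 1   (square both sides: z**2+1 <= z**2+2|z|+1)
random.seed(1)
for _ in range(1000):
    z = random.uniform(-50.0, 50.0)
    assert abs(math.sin(z)) <= abs(z) + 1e-12
    assert math.sqrt(z**2 + 1.0) <= abs(z) + 1.0 + 1e-12
```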
Using these results, a theorem for the ISS analysis of the recasted system (6)-(7) can be formulated.

that (39) implies (35). Thus, (33)-(35) hold and system (25) is ISS. ▪
To illustrate the result, let us analyze system (3). For that purpose, the input or disturbance w is added.

Example 3 (ISS). Consider the system
and the candidate functions. Applying Theorem 1 results in (49). Note that, in (49), the coefficient 0.2439 is not exactly half of 0.4877; this is due to the rounded values displayed here. Using all decimals, it is exactly half, so that V(z = 0) = 0. In this case, we do not need to estimate the comparison functions, because they automatically fulfill the conditions of Definition 1.
In the following section, these results are extended to the incremental ISS property.

VERIFICATION OF INCREMENTAL ISS
The classical stability concepts like stability in the sense of Lyapunov, asymptotic stability, and ISS describe the behavior of trajectories with respect to an equilibrium or a special trajectory. The generalization of that idea leads to the concept of incremental stability and, if inputs are considered, incremental ISS (δISS). This concept characterizes the behavior of trajectories with respect to each other. It gives a framework to analyze the influence of errors or disturbances in the input signal as well as in the initial conditions. Based on those considerations, we can define δISS.7

Definition 5 (Incremental input-to-state stability). A system (25) is called δISS if there exist two functions β ∈ KL and γ ∈ K∞ such that for all initial values z₁₀, z₂₀ and all measurable, essentially bounded input functions w₁, w₂ ∈ M_w, with M_w being closed and convex, the corresponding solutions z(t, z₁₀, w₁), z(t, z₂₀, w₂) exist on the entire real axis and the inequality

|z(t, z₁₀, w₁) − z(t, z₂₀, w₂)| ≤ β(|z₁₀ − z₂₀|, t) + γ(||w₁ − w₂||∞)    (67)

holds for t ≥ 0.
The inequality (67) describes the following behavior. For t → ∞, the KL term depending on the initial values vanishes, such that the influence of an error in the initial value disappears for large t. This means that for t → ∞ only the term γ(||w₁ − w₂||∞) remains, and the resulting distance of the trajectories depends only on ||w₁ − w₂||∞. If we set w₂ and z₂₀ identically zero and assume that f(0, 0) = 0, the condition for ISS results. Thus, ISS is a special case of δISS, and therefore every δISS system is ISS as well.
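The δISS estimate can again be illustrated on the toy system ż = −z + w (our own example, not from the article): the difference e = z₁ − z₂ obeys ė = −e + (w₁ − w₂), hence |e(t)| ≤ |e(0)|·e^(−t) + ||w₁ − w₂||∞, which a simulation of two trajectories confirms.

```python
import math

# Two copies of the toy system z' = -z + w, driven by a nominal and a
# disturbed input. The delta-ISS bound
#   |z1(t) - z2(t)| <= |z1(0) - z2(0)|*exp(-t) + ||w1 - w2||_inf
# is checked pointwise along a forward-Euler simulation.
dt, T = 1e-3, 10.0
z1, z2 = 1.5, -0.5
e0, dw_sup = abs(z1 - z2), 0.2
t = 0.0
while t < T:
    w1 = math.sin(t)
    w2 = math.sin(t) + dw_sup * math.cos(5.0 * t)   # disturbed input
    z1 += dt * (-z1 + w1)
    z2 += dt * (-z2 + w2)
    t += dt
    assert abs(z1 - z2) <= e0 * math.exp(-t) + dw_sup + 1e-6
```

As t grows, only the γ-term survives, so the remaining distance of the trajectories is governed by ||w₁ − w₂||∞ alone, exactly as described above.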
Another useful property is given in the following proposition.7

Proposition 3. For every δISS system (25) with arbitrary z₁₀, z₂₀ ∈ Rⁿ and inputs satisfying w₁(t) − w₂(t) → 0 for t → ∞, it holds that z(t, z₁₀, w₁) − z(t, z₂₀, w₂) → 0.

For a proof of that proposition, see Reference 7. In other words, the states of a δISS system converge to each other if the inputs do. Using this property, we can show with the following example that not every ISS system is δISS.

Example 4 (Counterexample). The system

ż = −z + u³    (68)

is ISS and incrementally globally asymptotically stable, but not δISS. To show that the system is ISS, we choose V(z) = z², which automatically fulfills condition (29). The time derivative of the candidate function is V̇ = 2z(−z + u³) = −2z² + 2zu³. Hence, condition (30) holds as well and the system is ISS. Following the ideas presented in Reference 7, we can show that the system is not δISS. With the inputs u₁ = (z(t, z₁₀, u₁) + 1)^(1/3) and u₂ = (z(t, z₂₀, u₂) + 1)^(1/3), the system (68) turns into ż = 1, with the trajectories z(t, z₁₀, u₁) = t + z₁₀ and z(t, z₂₀, u₂) = t + z₂₀. The difference of the two trajectories is then constant, z(t, z₁₀, u₁) − z(t, z₂₀, u₂) = z₁₀ − z₂₀ for all t, while the inputs converge to each other, u₁(t) − u₂(t) → 0 for t → ∞. This contradicts Proposition 3, so system (68) is not δISS.
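The counterexample can be reproduced numerically. We assume the dynamics in (68) read ż = −z + u³, which is our reading of the garbled source but is consistent with the computations in the example (u = (z + 1)^(1/3) then gives ż = 1 and z(t) = t + z₀).

```python
z10, z20 = 0.0, 2.0

def u(t, z0):
    # Open-loop input from the example; it produces z(t) = t + z0
    # under the assumed dynamics z' = -z + u**3.
    return (t + z0 + 1.0) ** (1.0 / 3.0)

# The trajectory difference stays constant for all t ...
for t in (0.0, 1.0, 10.0, 1000.0):
    z1, z2 = t + z10, t + z20          # exact solutions under u1, u2
    assert abs((z1 - z2) - (z10 - z20)) < 1e-12

# ... while the input difference vanishes as t grows, contradicting
# the convergence property of Proposition 3.
assert abs(u(0.0, z10) - u(0.0, z20)) > 0.2
assert abs(u(1000.0, z10) - u(1000.0, z20)) < 1e-2
```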
As in the case of ISS, there exists an equivalent Lyapunov condition to characterize the δISS property.7

Definition 6 (δISS-Lyapunov function). A smooth function V: Rⁿ × Rⁿ → [0, ∞) is called a δISS-Lyapunov function of (25) if the following conditions hold, with α₁, α₂, γ ∈ K∞ and a constant c > 0.
The term V(z₁, z₂) can, without loss of generality, be replaced with a K∞ function α(|z₁ − z₂|).21 Using this relation and the conditions of Definition 6, Equations (73)-(75) arise as sufficient SOS conditions, which are applied in the following example.
Example 5 (Tank system). This example is adapted from Reference 14. To illustrate the procedure, we analyze the system shown in Figure 1. Using Torricelli's law,22 the system can be described by (76). The roots in (76) are nonpolynomial and thus render SOS inapplicable. We therefore approximate these roots by appropriate Taylor polynomials, developed at 0.5 m. The resulting polynomials of different orders can be seen in Figure 2. The third-order polynomial gives a good trade-off between accuracy and complexity and is thus used here. The problem (73)-(75) is not solvable if we just use all monomials up to order 2 as comparison and Lyapunov functions; with suitably extended function candidates, however, it becomes solvable.

CONCLUDING SUMMARY

In this article, constructive procedures to determine ISS and δISS based on SOS decomposition were presented and illustrated on several examples. To overcome the restriction to polynomial systems, a rational recast process is used. This generally results in additional constraints that need to be considered in the analysis; thus, extensions of the Lyapunov, ISS, and δISS conditions were presented. The additional constraints F, G₁, G₂ are respected by means of multiplier polynomials λᵢ and σᵢ. This approach may seem complicated, but it is deeply rooted in the SOS procedure: we are only able to verify whether a given polynomial is decomposable into a sum of squares, so a direct consideration of the constraints is not possible. Alternatively, quantifier elimination (QE) techniques can be used to include the given constraints.23,24 In that approach, uncertain or design parameters can also be considered. The main disadvantage is the enormous computational effort needed to solve such problems. Nevertheless, significant progress on QE algorithms and implementations has been made,25 so this direction seems fruitful for further applications.
Approximation offers another possibility to analyze nonpolynomial systems. Higher-order terms can be included to capture certain phenomena; using a second-order approximation, for instance, Coriolis and centrifugal terms can be taken into account. However, these approximations are not exact and may introduce errors.
With each of these options, the most challenging issues are the choice of the initial Lyapunov candidate and comparison functions, as well as the inherent computational barriers of the underlying semidefinite program or QE method. Nevertheless, the proposed procedure can easily be transferred to other properties or control techniques.