Normal forms and internal regularization of nonlinear differential‐algebraic control systems

In this article, we propose two normal forms for nonlinear differential-algebraic control systems (DACSs) under external feedback equivalence, using a notion called the maximal controlled invariant submanifold. The two normal forms simplify the system structures and facilitate understanding of the various roles of the variables of nonlinear DACSs. Moreover, we study when a given nonlinear DACS is internally regularizable, that is, when there exists a state feedback transforming the DACS into a differential-algebraic equation (DAE) with internal regularity; the latter notion is closely related to the existence and uniqueness of solutions of DAEs. We also revisit a commonly used method in DAE solution theory, called the geometric reduction method. We apply this method to DACSs and formulate it as an algorithm, which is used to construct maximal controlled invariant submanifolds and to find internal regularization feedbacks. Two examples of mechanical systems are used to illustrate the proposed normal forms and to show how to internally regularize DACSs.


INTRODUCTION
Consider a nonlinear differential-algebraic control system (DACS) of the form

E(x)ẋ = F(x) + G(x)u, (1)

where x ∈ X is the generalized state, with X an n-dimensional differentiable manifold (or an open subset of R^n), and u ∈ R^m is the control vector. For the differentiable manifold X, we denote by TX the tangent bundle of X and by T_xX the tangent space of X at x ∈ X. The maps E : TX → R^l, F : X → R^l and G : X → R^{l×m} are smooth; the word "smooth" will always mean C^∞-smooth throughout the article. For each x ∈ X, we have E(x) : T_xX → R^l, which is the linear map ẋ ↦ E(x)ẋ. In particular, if X is an open subset of R^n, then for each x ∈ X, we have E(x) : R^n → R^l, that is, E(x) ∈ R^{l×n}. A DACS of the form (1) will be denoted by Ξ^u_{l,n,m} = (E, F, G) or, simply, Ξ^u. A particular case of (1) is a linear DACS of the form

Eẋ = Hx + Lu, (2)

where E ∈ R^{l×n}, H ∈ R^{l×n}, L ∈ R^{l×m}, denoted by Δ^u_{l,n,m} = (E, H, L). If G(x) = 0, that is, the control u is absent (L = 0 in the case of (2)), then we will speak about differential-algebraic equations (DAEs). DAEs/DACSs are also called implicit, singular, generalized or descriptor systems. There are many practical applications of DAEs/DACSs; for surveys and books using DAEs/DACSs to model physical systems, the reader may consult, for example, References 1-3 and chapter 1 of Reference 4. In particular, DAEs/DACSs are suitable tools to describe constrained mechanics,1,5 electrical circuits,2,6 and chemical processes.7 The necessity of using DAEs/DACSs instead of ordinary differential equations (ODEs) to model physical systems is justified by the presence of constraints (e.g., nonholonomic and holonomic constraints for mechanical systems, see (3a)-(3c), and the algebraic constraints resulting from Kirchhoff's laws and the characteristics of nonlinear components for electric circuits).
These constraints result in an implicit differential equation for which it is impossible to express the derivative ẋ explicitly as a function of the state variables x, that is, as an ODE ẋ = f(x).
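As a minimal numerical illustration (our own, not from the article), the following sketch encodes a two-dimensional DAE whose E matrix is singular, so ẋ = E^{-1}F(x) is unavailable and the second row acts as an algebraic constraint:

```python
import numpy as np

# Hypothetical DAE  E x' = F(x)  on R^2:
#   x1' = x2        (differential part)
#   0   = x1 - 1    (algebraic constraint)
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])

def F(x):
    return np.array([x[1], x[0] - 1.0])

# E is square but singular, so x' cannot be expressed as E^{-1} F(x):
print(np.linalg.matrix_rank(E))  # prints 1 (< n = 2)
```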
For a DACS of the form (1), the map E is not necessarily square (i.e., in general, l ≠ n) nor invertible (even if l = n) and, as a consequence, the variables of the generalized state x play different roles in the system. More specifically, the noninvertibility of E may imply the existence of algebraic constraints, and some variables of x (also some u-variables) are constrained by those algebraic constraints. On the other hand, because of the nonsquareness of E, some other variables of x may enter the system statically (since there may not exist differential equations defining their evolutions) and we may call them free variables. One of the results of this article (see Theorem 1) will reveal (under a suitable coordinate transformation) four different types of generalized state variables: the unconstrained state variables z_1, the unconstrained free variables z_2, the constrained state variables z_3 and the constrained free variables (or the algebraic variables) z_4. Note that although the free variables of x may behave "like" inputs of the system, throughout we will distinguish them from the original control inputs u. The variables u are predefined control inputs, such as external forces, and we can change them in order to act on the system. However, the free variables of x are predefined states which cannot be changed actively and arbitrarily. Such free variables may come from unknown constraint forces or from redundancies of the mathematical model. In the behavioral approach to systems theory (see Reference 8), there is also a distinction between the latent/internal variables (i.e., x) and the manifest/external variables (i.e., the inputs and outputs (u, y)).
A typical example illustrating that the control variables u and the free variables of x are different is the following DACS, which represents the dynamics of a mechanical system under both nonholonomic and holonomic constraints (see, e.g., Reference 1 for the definitions of nonholonomic and holonomic constraints):

M(q)q̈ + V(q, q̇)^T = H^T(q)λ_n + N^T(q)λ_h + τ, (3a)
H(q)q̇ = 0, (3b)
C(q) = 0, (3c)

where q is the vector of position (configuration) variables, M(q) is a matrix-valued function associated with masses (or inertia), V(q, q̇) is a row-vector function which characterizes the Coriolis, centrifugal and gravity forces, τ is a vector of external torques, C(q) is a vector of scalar functions c_i(q), i = 1, …, k, N(q) = ∂C(q)/∂q, and H(q) is a matrix-valued function of appropriate size. Clearly, Equation (3b) defines nonholonomic constraints, which depend on both velocities and positions, while Equation (3c) defines holonomic constraints, which depend on positions only. The variables λ_n and λ_h are the Lagrange multipliers with respect to the nonholonomic and holonomic constraints, respectively. We can regard system (3) as a DACS of the form (1), with the generalized state x = (q, q̇, λ_n, λ_h) and the control input u = τ.
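To make the structure of (3) concrete as a DACS (1), the following sketch assembles E(x), F(x), G(x) for a hypothetical planar pendulum in Cartesian coordinates (one holonomic constraint |q|² = 1, no nonholonomic one); the names and numerical values are illustrative assumptions, not taken from the article:

```python
import numpy as np

m_mass, g = 1.0, 9.81  # assumed mass and gravity

def dacs_matrices(x):
    """E(x), F(x), G(x) for a pendulum written as the DACS (1):
       q' = v,  M v' = N(q)^T lam + gravity + u,  0 = C(q)."""
    q, v, lam = x[0:2], x[2:4], x[4]
    C = q @ q - 1.0            # holonomic constraint C(q) = |q|^2 - 1
    N = 2.0 * q                # N(q) = dC/dq (row vector)
    E = np.zeros((5, 5))
    E[0:2, 0:2] = np.eye(2)            # q' rows
    E[2:4, 2:4] = m_mass * np.eye(2)   # M(q) v' rows
    # last row of E is zero: purely algebraic equation (3c)
    F = np.concatenate([v, N * lam + np.array([0.0, -m_mass * g]), [C]])
    G = np.zeros((5, 2))
    G[2:4, :] = np.eye(2)              # the external force u enters the dynamics only
    return E, F, G

E, F, G = dacs_matrices(np.array([0.0, -1.0, 0.3, 0.0, 0.0]))
print(np.linalg.matrix_rank(E))  # prints 4 (< 5)
```

The zero block row and zero column of E(x) show, respectively, the purely algebraic equation and the absence of a differential equation for the multiplier λ_h.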
Observe that the variables λ_h and λ_n are free variables, since there are no equations for λ̇_n and λ̇_h, but they are not active control inputs, contrary to the external torque τ. The latter can be realized by actuators (e.g., electric and hydraulic motors), while λ_h and λ_n are variables related to unknown constraint forces and are sometimes called constrained input variables.9 One purpose of this article is to find normal forms under external feedback equivalence (see Definition 3). We will construct our normal forms using a notion called the maximal controlled invariant submanifold, which is, roughly speaking, the locus where the solutions of the DACS exist and is defined by the constraints which the system should respect (for the precise definition, see Definition 2). For linear DACSs of the form (2), a canonical form consisting of six independent subsystems was proposed in Reference 10. One can easily read off the roles of the variables (e.g., which variables are free and which are constrained) from the canonical structure of each subsystem. For nonlinear DACSs, although it is hard to find a fully decoupled normal form, we intend to simplify the system structures as much as possible, so that the various roles of the variables mentioned above can be seen explicitly and easily from our proposed forms. The authors of Reference 11 offered a nonlinear generalization of the Kronecker canonical form using an algebraic inversion algorithm for differential-algebraic equations (DAEs) of the general form F(ẋ, x, t) = 0, while we intend to find normal forms for nonlinear DACSs using geometric methods. A zero dynamics form for DACSs with outputs was proposed in Reference 12 using the notion of maximal output zeroing submanifold introduced in Reference 13. Note that our system Ξ^u differs in two ways from the DACSs studied in References 12,13. First, in References 12,13, the distribution ker E(x) is assumed to be involutive, while we consider arbitrary E(x).
Second, the systems in References 12,13 are equipped with outputs. Calculating the zero dynamics of a DACS Ξ^u with zero output y = h(x) = 0 can be seen as studying an extended DACS Ξ^u_ext, because the maximal output zeroing submanifold of Ξ^u (with the output y = h(x)) coincides with the maximal controlled invariant submanifold of Ξ^u_ext. Some differences between our proposed normal forms and the zero dynamics form of Reference 12 are explained in Remark 3(vii).
We also investigate the internal regularizability of DACSs, that is, given a DACS Ξ^u, when there exists a feedback u = α(x) such that the resulting DAE E(x)ẋ = F(x) + G(x)α(x) is internally regular. The latter notion characterizes the existence and uniqueness of solutions of DAEs; its formal definition will be given in Definition 4. Regularization problems of nonlinear DAEs and DACSs have been studied in References 14-18, where both numerical and geometric methods have appeared. The second aim of our article is to give a geometric characterization of the internal regularizability of nonlinear DACSs. For linear DACSs, some equivalent characterizations of internal regularizability are given in theorem 3.5 of Reference 19 using a geometric notion named the augmented Wong sequences (see Remark 1(iv)). Note that internal regularizability is called autonomizability in Reference 19; the reason for which we insist on using the word "internal" is to stress the difference between two cases. One case is to consider a DAE "internally" on its maximal invariant submanifold (i.e., on the set where the solutions exist). The other is to consider a DAE "externally" on a whole neighborhood; even though there exist no solutions for any initial point outside the maximal invariant submanifold, it is still meaningful to study how to steer the initial point toward the constraints via, for example, jumps and impulses. The reader may consult References 4,20-22 for the details of the differences between the internal and external analysis of DAEs.
The article is organized as follows. In Section 2, we recall the notion of maximal controlled invariant submanifold and discuss its relations with the solutions of DACSs. In Section 3, we define the external feedback equivalence of two DACSs and propose two normal forms. In Section 4, we discuss the internal regularization problem. In Section 5, we illustrate the results of Sections 3 and 4 by two examples of mechanical systems. In Section 6, we give the conclusions of the article. The Appendix contains an algorithm with which we can construct the maximal controlled invariant submanifold and the feedback needed to internally regularize a DACS. We use the following notation. We use R^{n×m} to denote the set of real-valued matrices with n rows and m columns, GL(n, R) to denote the group of nonsingular matrices of R^{n×n}, and I_n to denote the n × n identity matrix. We denote by C^k the class of k-times differentiable functions. For a smooth map f : X → R, we denote its differential by df. For two column vectors v_1 ∈ R^m and v_2 ∈ R^n, we write (v_1, v_2) = (v_1^T, v_2^T)^T ∈ R^{m+n}. We assume the reader is familiar with basic notions from differential geometry, such as smooth manifolds, embedded submanifolds, tangent bundles and distributions; the reader may consult, for example, the book in Reference 23 for the definitions of those notions.

PRELIMINARIES ON SOLUTIONS OF DIFFERENTIAL-ALGEBRAIC CONTROL SYSTEMS
We define a solution of a DACS as follows.
We call a point x_0 ∈ X an admissible point of Ξ^u if there exists at least one solution (x(·), u(·)) satisfying x(t_0) = x_0 for a certain t_0 ∈ I. We will denote admissible points by x_a and the set of all admissible points by S_a. Note that for any DACS Ξ^u, there may exist free variables among the components of x. As a consequence, even for a fixed u(·) defined on R, a solution (x, u) defined on I need not have a unique prolongation to a maximal solution. For this reason, we will not use the concept of maximal solutions (although they can be defined, see, e.g., Reference 12) except in Section 4, where we can deal with maximal solutions due to an identification of the free (algebraic) variables.

Definition 2 (controlled invariant submanifold). Consider a DACS Ξ^u_{l,n,m} = (E, F, G). A smooth connected embedded submanifold M is called a controlled invariant submanifold of Ξ^u if for any point x_0 ∈ M there exists a solution (x, u) : I → X × R^m such that x(t_0) = x_0 for a certain t_0 ∈ I and x(t) ∈ M for all t ∈ I.
We fix a point x_p ∈ X; a smooth embedded submanifold M containing x_p is locally controlled invariant (around x_p) if there exists a neighborhood U of x_p in X such that M ∩ U is controlled invariant (and thus, by definition, connected). Consider a DACS Ξ^u_{l,n,m} = (E, F, G), let N ⊆ X and fix a point x_p ∈ N; we introduce the following constant rank assumption: (CR) there exists a neighborhood U in X of x_p such that N ∩ U is a smooth connected embedded submanifold, and such that dim E(x)T_xN = const. and dim(E(x)T_xN + Im G(x)) = const. for x ∈ N ∩ U.
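As a small illustration of why (CR) is a genuine restriction (our own example, not from the article), take N to be the unit circle in R² and E(x) = [1 0]; the dimension of E(x)T_xN drops from 1 to 0 exactly at the points (±1, 0), where the tangent line is vertical, so (CR) holds only away from those points:

```python
import numpy as np

E = np.array([[1.0, 0.0]])  # constant 1x2 matrix E(x)

def tangent_basis(theta):
    # N = unit circle; T_xN at x = (cos t, sin t) is spanned by (-sin t, cos t)
    return np.array([[-np.sin(theta)], [np.cos(theta)]])

def dim_ETN(theta):
    """dim E(x) T_x N at the circle point with angle theta."""
    return int(np.linalg.matrix_rank(E @ tangent_basis(theta)))

# Rank drops at theta = 0 (the point (1, 0)):
print([dim_ETN(t) for t in (0.0, np.pi / 4, np.pi / 2)])  # [0, 1, 1]
```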
The following characterization of local controlled invariance, under the constant rank assumption (CR) satisfied for M, was given as theorem 9 in Reference 13 for DACSs whose ker E(x) is an involutive distribution. The DACSs in Reference 13 are of the form de(x(t))/dt = f(x(t)) + g(x(t))u(t). Note that e(x), denoted by E(x) in Reference 13, is an R^l-valued function, while E(x) of our article is a matrix-valued function, whose rows, in the case of de(x(t))/dt of Reference 13, are the differentials de_i(x).

Then either x_p ∉ M_k or x_p ∈ M_k, and in the latter case, assume that there exists a neighborhood U_k of x_p such that M^c_k = M_k ∩ U_k is a smooth embedded submanifold (which can always be assumed connected by taking U_k sufficiently small).

Proposition 2.
In the above recursive procedure, there always exists k* ≤ n such that either k* is the smallest integer for which x_p ∉ M_{k*+1} (and then there is a neighborhood of x_p in which there does not exist any controlled invariant submanifold), or k* is the smallest integer such that x_p ∈ M^c_{k*+1} and M^c_{k*+1} ∩ U_{k*+1} = M^c_{k*} ∩ U_{k*+1}. In the latter case, we assume that M* = M^c_{k*+1} satisfies the constant rank condition (CR) in a neighborhood U* ⊆ U_{k*+1} of x_p in X, and then (i) x_p is an admissible point and M* is a locally maximal controlled invariant submanifold on U* (taking a smaller U*, if necessary); (ii) M* coincides locally with the admissible set S_a, that is, M* ∩ U* = S_a ∩ U*.
Proof. Let k be the largest integer such that M^c_0 ⊋ M^c_1 ⊋ … ⊋ M^c_k and x_p ∈ M^c_k, where M^c_i, 0 ≤ i ≤ k, are connected embedded submanifolds; then either x_p ∉ M_{k+1}, or x_p ∈ M_{k+1} and M^c_{k+1} = M_{k+1} ∩ U_{k+1} is a submanifold (by the assumptions of the recursive procedure) such that dim M^c_k = dim M^c_{k+1}. Then k* = k is the integer whose existence is claimed. The condition k* ≤ n follows from dim M^c_{i−1} > dim M^c_i, 1 ≤ i ≤ k*. Claim. If an admissible point x_a ∈ S_a ∩ U_{k*}, then x_a ∈ M_{k*+1}. To prove the Claim, notice that if x_a is admissible, there exist a solution (x(t), u(t)) and t_0 ∈ I such that x(t_0) = x_a. It follows that F(x(t)) ∈ Im E(x(t)) + Im G(x(t)) for all t ∈ I. Thus, by Equation (5), we have x(t) ∈ M_1 for all t ∈ I. Suppose that for a certain i > 1 we have x(t) ∈ M_{i−1} for all t ∈ I. We then have ẋ(t) ∈ T_{x(t)}M_{i−1} for all t ∈ I (note that, when restricted to U_{i−1}, the set M_{i−1} is a submanifold). It follows that x(t) ∈ M_i ∩ U_{i−1} for any t ∈ I, due to (5). By an induction argument, we conclude that x(t) ∈ M_{k*+1} ∩ U_{k*} and, in particular, x_a = x(t_0) ∈ M_{k*+1} ∩ U_{k*}, which proves the Claim.
(i) If x_p ∈ M_{k*+1}, we have dim M^c_{k*+1} = dim M^c_{k*} and, since M^c_{k*+1} ⊆ M^c_{k*}, it follows that there exists an open neighborhood U_{k*+1} such that M^c_{k*+1} ∩ U_{k*+1} = M^c_{k*} ∩ U_{k*+1}. By assumption, M* = M^c_{k*+1} ∩ U* satisfies (CR) in U* ⊆ U_{k*+1}. So, using Proposition 1, we conclude that M* is a locally controlled invariant submanifold on U*. To prove that M* is maximal in U*, let M′ be any controlled invariant submanifold; then any point x_0 ∈ M′ ∩ U* is admissible, so x_0 ∈ S_a ∩ U* and thus, by the above Claim, x_0 ∈ M_{k*+1} ∩ U* = M* ∩ U*, which shows M′ ∩ U* ⊆ M*. (ii) We now prove that M* coincides with the admissible set S_a on U*. Since M* ∩ U* is locally controlled invariant, for any point x_0 ∈ M* ∩ U* there exist at least one solution (x(·), u(·)) on I and t_0 ∈ I such that x(t_0) = x_0 (by Definition 2), which implies that x_0 is admissible, that is, x_0 ∈ S_a. It follows that M* ∩ U* ⊆ S_a ∩ U*. Conversely, consider any point x_0 ∈ S_a ∩ U*; using again the above Claim, we conclude that x_0 ∈ M_{k*+1} ∩ U* = M* ∩ U*. Note that the proof of Proposition 2(i) can be performed in a similar way as that of theorem 12(ii) of Reference 13 for proving that M* is a maximal output zeroing submanifold with the output function taken as zero. However, in order to show item (ii), we need the Claim, which implicitly contains the maximality property of item (i), and therefore we provide a proof of both (i) and (ii) of Proposition 2.
(i) Proposition 2 gives a geometric method to construct the locally maximal controlled invariant submanifold M*. Such an iterative way of identifying the admissible set of a DAE is called the geometric reduction method and has appeared frequently in the geometric analysis of nonlinear DAEs (see, e.g., References 6,24-26 and the recent papers 13,22). We state a practical implementation of this geometric method as Algorithm 1 of the Appendix, where we also compare our Algorithm 1 with an existing geometric reduction method of section 3.4 of Reference 6. A preliminary version of Algorithm 1 for DAEs (without control u) can be found in Reference 27. (ii) Item (ii) of Proposition 2 asserts that, in the neighborhood U* of an admissible point x_a = x_p, the solutions of Ξ^u exist on M* only, which implies that for any point x_0 ∈ U* \ M* there are no solutions passing through x_0. For practical systems, the initialization x_0 of the state x could be any point of the state space X. If x_0 ∈ U* \ M* (i.e., x_0 is not admissible), we need an instantaneous change of x_0, that is, a jump at t = t_0, to steer the inadmissible point x_0 into an admissible one. The jump of x_0 at t = t_0 will cause a distributional term, the Dirac impulse, to be present in ẋ. For linear DAEs/DACSs, a distributional solution theory has been established to deal with the discontinuity caused by inadmissible initial points, see, e.g., References 28,29. We will not discuss distributional solutions of nonlinear DAEs/DACSs, since the purpose of the present article is to propose normal forms that simplify system structures. Note, however, that the normal forms studied in Section 3 are external forms that hold on a whole neighborhood (not just on M*) of a nominal point x_p. This is a useful tool for studying jumps and distributional solutions of DAEs/DACSs and is the subject of ongoing research; cf. our recently submitted conference contribution.30
(iii) If, for a fixed x_p, we drop the requirements that x_p ∈ M_k and that the M^c_k be connected, then Proposition 2 allows us to detect all admissible points x_a in U*, which form the union ⋃M*_i of all locally maximal controlled invariant submanifolds in U*. Notice, first, that the union ⋃M*_i may have more than one connected component (each of them being a locally maximal controlled invariant submanifold); second, that x_p may not be in ⋃M*_i (implying that x_p is not admissible); and, third, that ⋃M*_i can be empty (implying that there are no admissible points in U*). (iv) The recursive procedure of Proposition 2 leads to a sequence of nested submanifolds. At each step, we construct a submanifold M^c_{k+1} of smaller dimension than M^c_k, except for the last step, where M_{k*+1}, defined by Equation (5), coincides with M^c_{k*}, although not on U_{k*} but on a smaller neighborhood U_{k*+1}; indeed, M^c_{k*+1} is M^c_{k*} restricted to U_{k*+1}. The need to take a smaller neighborhood U_{k*+1} ⊆ U_{k*} is a purely nonlinear phenomenon. Take, for example, the following nonlinear DACS: (v) If we apply the above procedure of constructing M_k to a linear DACS Δ^u = (E, H, L), then we get a sequence of subspaces 𝒱_k. The sequence 𝒱_k is one of the augmented Wong sequences (see Reference 31), which play an important role in the geometric analysis of linear DACSs (see, e.g., Reference 32). In particular, it is shown in References 10,19 that the indices of the feedback canonical form of linear DACSs are closely related to these sequences. In the linear case, the submanifold M* is the largest subspace such that Hℳ* ⊆ Eℳ* + Im L, which we denote by ℳ*.
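For item (v), the linear sequence can be computed numerically. The sketch below (an illustration under our own conventions: subspaces are stored as column-span matrices, and each step takes the preimage under H of E𝒱_k + Im L) iterates until the dimension stabilizes, here for a hypothetical index-2 pair with no control:

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Orthonormal basis of ker A via SVD."""
    _, s, vh = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vh[rank:].T

def wong_sequence(E, H, L):
    """V_0 = R^n, V_{k+1} = H^{-1}(E V_k + Im L); returns the dimensions."""
    n = E.shape[1]
    V = np.eye(n)
    dims = [n]
    while True:
        W = np.hstack([E @ V, L])                       # spans E V_k + Im L
        P = np.eye(E.shape[0]) - W @ np.linalg.pinv(W)  # projector onto its complement
        V = nullspace(P @ H)                            # all x with H x in E V_k + Im L
        dims.append(V.shape[1])
        if dims[-1] == dims[-2]:
            return dims

# E x' = H x:  x1' = x2,  0 = x1  (no control: L = 0)
E = np.array([[1.0, 0.0], [0.0, 0.0]])
H = np.array([[0.0, 1.0], [1.0, 0.0]])
L = np.zeros((2, 1))
print(wong_sequence(E, H, L))  # [2, 1, 0, 0] -> the limit subspace is {0}
```

Here the sequence stabilizes at the zero subspace, consistent with the fact that the only solution of ẋ_1 = x_2, 0 = x_1 is x ≡ 0.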

TWO NORMAL FORMS UNDER EXTERNAL FEEDBACK EQUIVALENCE
The canonical form of linear DACSs in Reference 10 is defined with respect to an equivalence relation in which Q, P, T are invertible real matrices and F_u defines a static state feedback. In the following definition, we generalize this equivalence relation to the nonlinear case.

Definition 3 (external feedback equivalence). Two DACSs
The ex-fb-equivalence of two DACSs is denoted by Ξ^u ∼_{ex-fb} Ξ̃^ũ. If ψ : U → Ũ is a local diffeomorphism between neighborhoods U of x_0 and Ũ of x̃_0, and Q(x), α(x), β(x) are defined on U, we will speak of local ex-fb-equivalence.
Remark 2. If two DACSs are ex-fb-equivalent, the diffeomorphism x̃ = ψ(x) and the feedback transformation u = α(x) + β(x)ũ establish a one-to-one correspondence between the solutions (x(·), u(·)) and (x̃(·), ũ(·)) of the DACSs. On the other hand, if the solutions of two DACSs correspond to each other via a diffeomorphism and a feedback transformation, then the two DACSs are not necessarily ex-fb-equivalent (since the diffeomorphism is defined on the whole neighborhood U but the solutions exist on the maximal controlled invariant submanifold M* only), which is the main reason for us to distinguish the "external" and "internal" analysis of DACSs. As a simple example, consider the following two DAEs Ξ^u_{2,1,1}. It is clear that (x, u) = (1, e^{-1}) and (x̃, ũ) = (0, 0) are the unique solutions of the two DACSs, and the diffeomorphism x̃ = ψ(x) = x − 1 and the feedback transformation ũ = −x² + eˣu map (x, u) to (x̃, ũ). However, the two DACSs cannot be ex-fb-equivalent, since E and Ẽ are not of the same rank (two ex-fb-equivalent DACSs must have E-matrices of the same pointwise rank).
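The rank obstruction invoked at the end of Remark 2 is elementary: multiplying E(x) on the left by an invertible Q(x) and on the right by the (invertible) Jacobian of a diffeomorphism cannot change its pointwise rank. A quick numerical check, with arbitrary illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # rank 1
Q = rng.random((2, 2)) + 2 * np.eye(2)     # invertible (diagonally dominant)
P = rng.random((2, 2)) + 2 * np.eye(2)     # invertible, plays the Jacobian's role
E_tilde = Q @ E @ np.linalg.inv(P)
print(np.linalg.matrix_rank(E), np.linalg.matrix_rank(E_tilde))  # 1 1
```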
Theorem 1 (normal forms). Consider a DACS Ξ^u_{l,n,m} = (E, F, G) and fix a point x_p ∈ X. Let M* ⊆ X be a smooth connected embedded submanifold containing x_p. Assume that M* is a locally maximal controlled invariant submanifold around x_p and that there exists a neighborhood V of x_p on which assumptions (A1) and (A2) hold. Then there exists a neighborhood U ⊆ V of x_p such that Ξ^u is locally ex-fb-equivalent to a DACS represented in the following normal form

Furthermore, if the above (A2) is replaced by assumption (A3), namely that there exist a neighborhood V of x_p and an involutive distribution satisfying the stated conditions, then there exists a neighborhood U ⊆ V of x_p such that Ξ^u is locally ex-fb-equivalent to Equation (9) with, additionally, E^2_2(z) ≡ 0 and rank G_2(z) = m_1 for all z ∈ U, which we call the special normal form (SNF).

Proof. Since M* is a smooth connected embedded submanifold, there exist a neighborhood U_1 of x_p and local coordinates ξ = (ξ_1, ξ_2) on U_1 such that M* ∩ U_1 = {x | ξ_2(x) = 0}. Thus, by Dolezal's theorem (see Reference 33), there exists a smooth map Q_1 : U_2 → GL(l, R) such that the matrices Ē_1(ξ) and G_2(ξ) above are of full row rank.
By dim E(x)T_xM* = const. = r_1 of assumption (A2), it is immediate to see that rank Ē_1(ξ) = r_1 for ξ ∈ M*. It follows from the smoothness of Ē_1(ξ) that, by taking a smaller U_2 if necessary, there exist r_1 columns of Ē_1(ξ) that are linearly independent on U_2. Now we write the matrix in block form, with E^1_1 : U_2 → R^{r_1×r_1} and E^3_2 : U_2 → R^{r_2×r_2}, where r_2 = r − r_1 (and the other blocks are of suitable dimensions). We can always permute the rows (by a constant Q-transformation) and the columns (by permuting the components of ξ_1) of the above matrix so that Ē_2 is of full row rank r_2 for ξ ∈ U_2. Then we can permute the columns (by permuting the components of ξ_2) of Ē_2 such that E^3_2 is invertible. On the other hand, we write G in block form, where G^2_2(ξ) is an m_2 × m_2 matrix (and the other blocks are of suitable dimensions). Since G_2(ξ) is of full row rank m_2 for ξ ∈ U_2, we can permute the components of u (by a feedback transformation) such that G^2_2(ξ) is invertible. Since both E^3_2(ξ) and G^2_2(ξ) are invertible, we can define the maps F_2, F_3, F_4 accordingly. Then by the feedback transformation
Consider Equation (9); then the condition dim E(x)T_xM* = r_1 for all z ∈ M*, of assumption (A2), implies E^2_2(z) = 0 for all z ∈ M*, and the condition on dim(E(x)T_xM* + Im G(x)) of assumption (A2) implies rank G_2(z) = m_1 for all z ∈ M*. Moreover, by the fact that M* is a controlled invariant submanifold and due to condition (4) of Proposition 1, it follows that F_4(z) = 0 for all z ∈ M*. Now we prove that, under assumptions (A1) and (A3), Ξ^u is locally ex-fb-equivalent to the special normal form (SNF), given by (10). The construction of the (SNF) is similar to the above construction of the (NF) given by (9), but we choose the coordinates ξ = (ξ_1, ξ_2) differently. By the involutivity assumption of (A3) and the Frobenius theorem (see, e.g., Reference 23), there exist a neighborhood U_1 of x_p and two vector-valued functions ξ_1 : U_1 → R^{n_1} and ξ_2 : U_1 → R^{n_2} whose differentials are linearly independent. Since the distribution coincides with T_xM* locally for x ∈ M*, we still have M* ∩ U_1 = {x | ξ_2(x) = 0}. Observe that assumption (A3) implies (A2), so we may transform Ξ^u into the (NF), given by (9), using the construction described above. But now, by assumption (A3), we have, for all z ∈ U_2, relations which respectively imply E^2_2(z) ≡ 0 and rank G_2(z) = m_1 on U_2. Therefore, under assumptions (A1) and (A3), the DACS Ξ^u is locally ex-fb-equivalent to the (SNF) given by (10). ▪ The following observations are crucial.
(i) If the submanifold M* exists and Ξ^u satisfies the constant rank assumptions (A1) and (A2), which are regularity assumptions, then Ξ^u is locally ex-fb-equivalent to the (NF), given by (9). If Ξ^u satisfies the constant rank and involutivity assumptions (A1) and (A3), then it is locally ex-fb-equivalent to the (SNF), given by (10), in which, additionally compared to (9), we have E^2_2(z) ≡ 0 and rank G_2(z) = m_1 for all z ∈ U. Note that if M* is replaced by any controlled invariant submanifold M (not necessarily maximal) satisfying (A1) and (A2), or (A1) and (A3), we may still transform Ξ^u into form (9) or (10), since we do not use the maximality of M* to construct the two normal forms, as shown in the proof above. However, if M* is not locally maximal, we can neither conclude that M* = {z | z_3 = z_4 = 0} nor that (z_1, z_2) are local coordinates on the admissible set S_a = M*. (ii) By a suitable feedback transformation introducing new controls (u^1_1, u^2_1) (possibly also by a permutation of the z_3-variables), the second equation of (10), involving F_2(z) + G_2(z)u_1, can be further simplified so that u^1_1 ∈ R^{r_2−m_1}, u^2_1 ∈ R^{m_1} and F_2(z) = 0 for z ∈ M*. (iii) The forms (NF) and (SNF) are two normal forms under external feedback equivalence, meaning that both hold locally everywhere around x_p, not just on the maximal controlled invariant submanifold M* passing through x_p. For any point x_0 ∉ M* around x_p, the system does not have solutions passing through x_0 (see item (ii) of Remark 1), but the system still admits the above normal forms, which can be useful if we want to steer x_0 toward M*. (iv) Note that M* = {z | z_3 = 0, z_4 = 0}.
If we consider Ξ^u "internally," that is, locally on M*, by setting z_3 and z_4 to zero, we obtain from Equation (9) the following system (and we may do the same for Equation (10)). Since rank G_2(z) = m_1 for z ∈ M*, via a suitable feedback transformation introducing new controls (u^1_1, u^2_1) and a Q(x)-transformation (defined on M*, but extendable to U*, which is open in X), the above DACS can be transformed into a form involving some maps F̃_1 and G̃^1_1. It can be seen from item (i) of Theorem 2 that Ξ^u has solutions isomorphic with those of the first subsystem ż_1 + E^1_2(z_1, z_2)ż_2 = F̃_1(z_1, z_2) + G̃^1_1(z_1, z_2)u^1_1, which we denote by Ξ^u|_{M*} and call the restriction of Ξ^u to M*; the latter can be regarded as an ODE control system with controls w = ż_2 and u^1_1. (This is a particular case of a general procedure proposed in Reference 22 under the name of (Q, w)-explicitation.) From the above analysis, it is seen that, for a fixed control u, the original Ξ^u has a unique maximal solution (see the definition of a maximal solution in Section 4) if and only if n_1 = r_1 (since in this case the z_2-variables are absent).
(v) The above two normal forms (NF) and (SNF) facilitate understanding of the actual roles of the variables in the nonlinear DACS Ξ^u. Some generalized states, namely (z_1, z_3), behave like state variables of differential equations, and some generalized states, namely (z_2, z_4), are free variables, since their derivatives (ż_2, ż_4) are not constrained and can be seen as extra inputs, different from u. Moreover, some generalized states, namely (z_3, z_4), are constrained, and some controls, namely u^2_1 and u_2, are also not free to be chosen (since they are forced to be 0 by the constraints) when the DACS is considered internally on M*. The generalized state z_2 and the control u^1_1 are the truly free variables and are not constrained. (vi) It is worth mentioning that the behavioral approach to systems theory (see Reference 8) does not a priori distinguish the roles of the variables (which is also the case for our variables x, consisting of all components of the generalized state; we distinguish, however, the control u from the generalized state x) and only the analysis of the system reveals the nature of those variables. The observations of item (v) above can be regarded as an instruction for reinterpreting the meaning of those variables; the latter has already been addressed in References 14,19 for regularization problems. For instance, the free generalized states z_2 could be reinterpreted as new inputs (but they should be distinguished from the true controls u, considering the physical meanings of the generalized state variables, see Section 1), and the constrained generalized states z_3 and z_4 could be redefined as zeroing outputs of the system. Consider the DACS Ξ (which describes a 3-link manipulator with a free end-joint) and its (SNF) of Example 1.
It is seen that F_f (the friction force at the end-joint) is a free generalized state, which, however, should be distinguished from the real active input u = (F_x, F_y), since regarding F_f as a new active control would physically mean adding a motor/actuator to the free joint and considering instead a fully actuated manipulator. (vii) Our forms (NF) and (SNF) differ from the zero dynamics form (ZDF) proposed in Reference 12 in several ways. First, the feedback transformations, which play important roles for our normal forms, are not used for the (ZDF). Second, the (ZDF) requires assumption (11), while we only assume that dim(E(x)T_xM* + Im G(x)) = const., which is more general, since assumption (11) excludes the existence of free generalized states and control inputs in the internal dynamics. Third, the use of the involutive distribution of (A3), not present in the (ZDF), shows a possibility to further simplify the structure of the matrix E(x) in the (SNF).

INTERNAL REGULARIZATION OF NONLINEAR DACSS
In this section, we first consider the uncontrolled case of (1), that is, nonlinear DAEs, which are of the form E(x)ẋ = F(x) and are denoted by Ξ_{l,n} = (E, F) or, equivalently, by Ξ^u_{l,n,0} = (E, F, 0). If we apply Definition 2 to a DAE Ξ, then M* is called a locally maximal invariant submanifold. It is well known (see, e.g., References 6,20,22,25,26) that the solutions of a DAE Ξ exist locally only on its maximal invariant submanifold M* and that the uniqueness of solutions can be characterized by the notion of local internal regularity, defined below. We will say that a solution x : I → M* satisfying x(t_0) = x_0, where t_0 ∈ I and x_0 ∈ M*, is maximal if, for any solution x̃ : Ĩ → M* such that t_0 ∈ Ĩ, x̃(t_0) = x_0 and x(t) = x̃(t) for all t ∈ I ∩ Ĩ, we have Ĩ ⊆ I. Now we use Algorithm 1 of the Appendix to study the problems of when a DACS is locally internally regularizable and how to design internal regularization feedback laws. Note that Algorithm 1 is a practical implementation of the recursive procedure of Proposition 2 (see Remark 1(i)) with additional Assumptions 1 and 2. At every step of Algorithm 1, we construct a submanifold M^c_k and a local form, given by (A1), under external feedback equivalence, based on which we give an explicit expression of the restricted/reduced system defined by Equation (A2). Moreover, at every step k, we show in detail how to construct the coordinate transformations and the feedback transformations (u_k, ū_k) = a_k + b_k u_{k−1}, which lead to the local form. In the statement of Theorem 2, we refer to the submanifold M* = M^c_{k*+1} and to the open neighborhood U* = U_{k*+1} (in X) of Step k*+1 of Algorithm 1.

Proof. (i) At the general Step k of Algorithm 1, consider the DACSs Ξ̃^{ũ_k} = Ξ^{u_{k−1}} and Ξ̂^{û_k}, the latter given by (A1). Then we show that the following items are equivalent.
Since Ξ̃^{ũ_k} = Ξ^{u_{k−1}} is locally ex-fb-equivalent to Ξ̂^{û_k} via Q_k, the coordinate transformation and the feedback transformation (a_k, b_k), we have that items (a) and (b) above are equivalent (see Remark 2). The equivalence of items (b) and (c) follows from the fact that the solutions exist on M_k only and should respect the constraints z_k = 0 and ū_k = 0.
Observe that Ξ^u is internally regularizable, that is, there exists a feedback u = η(x) such that Ξ_η = (E, F + Gη) is internally regular, if and only if Ξ^u together with the algebraic constraint u = η(x) has a unique maximal solution (x(·), u(·)) satisfying x(t_0) = x_0 and u(t_0) = η(x_0) for any x_0 ∈ M* ∩ U, where M* is a locally maximal invariant submanifold of Ξ_η and U is a neighborhood of x_p. By item (i) of Theorem 2, there is a one-to-one correspondence, given by a local diffeomorphism ẑ = Ψ(x) and a feedback transformation u = α(x) + β(x)û, between the solutions of Ξ^u and those of Ξ̂^û. As a consequence, Ξ^u is internally regularizable if and only if there exists η : M* → R^m such that Ξ̂^û with the constraint û = η̂(ẑ), where η̂(ẑ) = β^{-1}(η(Ψ^{-1}(ẑ)) − α(Ψ^{-1}(ẑ))), has a unique maximal solution (ẑ(·), û(·)) satisfying ẑ(t_0) = ẑ_0, where ẑ_0 = Ψ(x_0), and û(t_0) = β^{-1}(x_0)(η(x_0) − α(x_0)), for any x_0 ∈ M* ∩ U; that is, Ξ̂^û is internally regularizable. Now we show that Ξ̂^û is internally regularizable if and only if (13) holds. Since E*(z*) of (12) has full row rank, we may view the first equation of (12) as an ODE control system with extra free variables. More precisely, assume that the first r* columns of E*(z*) are linearly independent (if not, we can always permute the components of z*); then we can rewrite E*(z*)ż* as E*_1(z*)ż*_1 + E*_2(z*)ż*_2, where z* = (z*_1, z*_2) and E*_1 : M* → R^{r*×r*} is invertible. Thus we can rewrite the first equation of (12) as ż*_1 = (E*_1(z*))^{-1}(F*(z*) + G*(z*)u* − E*_2(z*)ż*_2). It follows that Ξ̂^û is internally regularizable if and only if the free variables z*_2 can be fixed via the constraints ū = 0, which is equivalent to the number of constrained inputs ū (there are m̄ = m − m* of them) being not less than the number of components of z*_2 (which is n* − r*), and thus equivalent to (13). (iii) If m̄ = m − m* ≥ n* − r*, then there are enough components of the constrained inputs ū = 0 to fix the free variables z*_2.
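The column splitting described in this argument can be written out explicitly; with z* = (z*_1, z*_2) and E* = [E*_1  E*_2], where E*_1 is an invertible r* × r* block, the first equation of (12) becomes:

```latex
E_1^*(z^*)\,\dot z_1^* + E_2^*(z^*)\,\dot z_2^*
  = F^*(z^*) + G^*(z^*)\,u^*
\quad\Longleftrightarrow\quad
\dot z_1^* = \big(E_1^*(z^*)\big)^{-1}
  \Big(F^*(z^*) + G^*(z^*)\,u^* - E_2^*(z^*)\,\dot z_2^*\Big).
```

Thus z*_2 enters only through its derivative, acting as n* − r* extra free inputs; internal regularizability amounts to having at least that many constrained inputs available to fix them, that is, m − m* ≥ n* − r*, which is condition (13).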
Namely, denote û = (u*, ū′, ū″); then we impose z*_2 = 0 by setting ū′ = z*_2 = 0 and the remaining components ū″ = 0 to construct a controlled invariant submanifold. We can choose u* = η*(z*) arbitrarily, and the resulting feedback law is thus (14). Now using the diffeomorphism x = Ψ^{-1}(z*, z) and the invertible feedback u = α(x) + β(x)û that transform solutions of Ξ^u into those of Ξ̂^û (see item (i)), we conclude that the feedback law u = η(x) = α(x) + β(x)η̂(Ψ(x)), where η̂ is given by (18) and x = Ψ^{-1}(z*, 0) on M*, internally regularizes the original system Ξ^u, which completes the proof. ▪

Remark 5.
(i) Note that we perform k* + 1 steps of Algorithm 1. Actually, we obtain M* already at Step k*; however, we need to perform one more step not only to know that Algorithm 1 stops (because n_{k*+1} = n_{k*}) but also to normalize the system Ξ^{u_{k*}} and obtain Ξ^{u*} = Ξ^{u_{k*+1}} = (E*, F*, G*). (ii) The first equation of (12), that is, E*(z*)ż* = F*(z*) + G*(z*)u*, which we denote by Ξ^u|_{M*}, has solutions isomorphic with those of Ξ^u and can be seen as the "internal" dynamics of Ξ^u. Since E*(z*) has full row rank, we may view Ξ^u|_{M*} as an ODE control system (given by the first equation of (17); see also item (iv) of Remark 3) with two kinds of inputs, namely u* and w. (iii) The procedure of internal regularization leading to Theorem 2(iii) that we propose is not unique, at two stages.
First, by setting ū′ = φ(z*) for any φ such that ∂φ(z*)/∂z*_2 is invertible, we can find z*_2 = γ(z*_1) satisfying φ(z*_1, γ(z*_1)) = 0, and thus we constrain the z*_2-variables via ū′ = φ(z*) = 0. Second, we can choose u* = η*(z*) arbitrarily, and that choice affects neither the internal regularity of Ξ^u nor the invariant submanifold M*, since the feedback law u* = η*(z*) does not influence the constraints ū′ = φ(z*) = 0. (iv) A linear DACS Δ = (E, H, L), given by (2), is internally regularizable/autonomizable (see theorem 3.5 of Reference 19) if and only if a condition analogous to (13) holds for 𝒱*, the limit of the augmented Wong sequence 𝒱_k of (7); 𝒱* is, clearly, a linear counterpart of M* (denoted ℳ*; compare Remark 1(v)). Thus item (ii) of Theorem 2 is a nonlinear generalization of the above result for linear DACSs. (v) Combining the results of Theorems 1 and 2, it is seen that if a DACS Ξ^u is ex-fb-equivalent to the (NF) or the (SNF), then Ξ^u is internally regularizable if and only if r_1 + m_1 + m_2 ≥ n_1.
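For the linear case discussed in Remark 5(iv), the limit of an augmented Wong-type sequence can be computed numerically. The sketch below uses one common recursion, 𝒱_0 = R^n, 𝒱_{k+1} = {x : Hx ∈ E𝒱_k + Im L}; the exact sequence (7) of the article is not reproduced in this chunk, so this recursion and all function names are illustrative assumptions, not the article's definitions.

```python
import numpy as np

def orth_basis(M, tol=1e-9):
    """Orthonormal basis of the column space of M (possibly empty)."""
    if M.shape[1] == 0:
        return np.zeros((M.shape[0], 0))
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def preimage(H, W, tol=1e-9):
    """Basis of {x : H x in Im W}, i.e. the kernel of (I - P_W) H."""
    Q = orth_basis(W, tol)
    R = H - Q @ (Q.T @ H)          # residual of H after projecting onto Im W
    _, s, Vt = np.linalg.svd(R)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T             # kernel basis, shape (n, n - rank)

def augmented_wong_limit(E, H, L, tol=1e-9):
    """Iterate V_{k+1} = {x : H x in E V_k + Im L} from V_0 = R^n until
    the dimension stabilizes; returns a basis of the limit subspace V*."""
    n = E.shape[1]
    V = np.eye(n)
    while True:
        V_next = preimage(H, np.hstack([E @ V, L]), tol)
        if V_next.shape[1] == V.shape[1]:
            return V_next
        V = V_next

# Toy linear DAE  E xdot = H x (+ L u):
E = np.array([[0., 1.], [0., 0.]])
H = np.eye(2)
print(augmented_wong_limit(E, H, np.zeros((2, 1))).shape[1])      # no control: dim V* = 0
print(augmented_wong_limit(E, H, np.array([[1.], [0.]])).shape[1]) # with control: dim V* = 1
```

Since the 𝒱_k are nested and decreasing, equality of dimensions at one step certifies the fixed point, mirroring the stopping test n_{k*+1} = n_{k*} of Algorithm 1.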

EXAMPLES
In this section, we give two examples to illustrate the results of Theorems 1 and 2. In particular, Example 1 shows how to use Algorithm 1 to find a feedback to internally regularize a given DACS, while Example 2 puts emphasis on finding a normal form and demonstrates that an internal regularization feedback can be constructed based on the obtained normal form.

Example 1.
Consider the model of a 3-link manipulator taken from Reference 34, as shown in Figure 1, where joint 1 and joint 2 are active and joint 3 is passive, called a free joint. The dynamic equations of the system are given by (19), where m and l are constants representing the mass and the half-length of the free link, respectively, x and y are the position variables of the free joint, θ is the angle between the base frame (attached to joint 1) and the link frame, F_x and F_y are the translational forces at the free joint, τ is the torque applied to joint 3 (we take τ = 0, implying that joint 3 is free), and F_f is the friction force caused by the rotation of the free link. We regard F_x and F_y as the active control inputs to the system. Note that the friction F_f is a generalized state variable rather than a control input, since it cannot be changed actively. We require the trajectories of system (19) to respect the constraint (20). Denote x_1 = x, x_2 = ẋ, y_1 = y, y_2 = ẏ, θ_1 = θ, θ_2 = θ̇, and choose the generalized state z_0 = (x_1, x_2, y_1, y_2, θ_1, θ_2, F_f). Rewrite (19) and (20) together as a DACS Ξ^u_{7,7,2} = (E_1, F_1, G_1). We assume θ_1 ≠ ±π/2, so we do not work on X = S^1 × R^6 but on U_0 = (−π/2, π/2) × R^6. We now apply Algorithm 1 to Ξ^u.
Step 1: We have rank E_1(z_0) = 5, and computing rank [E_1(z_0) G_1(z_0)] we get M_1. Clearly, z_{0p} ∈ M_1. Then choose a new coordinate z̄_1 = x_1 − y_1, keep the remaining coordinates z_1 = (x_2, y_1, y_2, θ_1, θ_2, F_f) unchanged, and set the transformations accordingly. It is seen that the DACS Ξ^u is ex-fb-equivalent to Ξ^{u_1}.
Step 2: We have the analogous rank conditions and get M_2; clearly, z_{0p} ∈ M_2. Set z̄_2 = x_2 − y_2 and keep the remaining coordinates unchanged; then the system Ξ^{u_2} is ex-fb-equivalent to a reduced form, which gives Ξ^u represented in the new coordinates and restricted to M^c_2.

Step 3: We have that the ranks stabilize; thus z_{0p} ∈ M* is an admissible point. Hence we get z* = z_2 = (y_1, y_2, θ_1, θ_2, F_f), so r* = 4 and m* = 1; by item (ii) of Theorem 2, r* + (m − m*) = 4 + 1 = 5 = n* implies that our system Ξ^u is internally regularizable. A feedback that internally regularizes Ξ^u can be deduced, by item (iii) of Theorem 2, from the corresponding constraint equation. The above equation has an infinite number of solutions. The control u_1 = u* = η*(z*) can be chosen arbitrarily (see the proof of Theorem 2(iii)), so we can choose η*(z*), for instance, to stabilize Ξ^u|_{M*}, that is, Ξ^{u*} = (E*, F*, G*) on M* (which can be viewed as an ODE since E* has full row rank). Set α = −b^{-1}a and β = b^{-1}, where a, b are given in (21), and define η̂ accordingly. Then the feedback which internally regularizes and stabilizes Ξ^u can be uniquely solved; the solution is u = η(z_0) = α(z_0) + β(z_0)η̂(z_0).

Note that our system Ξ^u satisfies assumptions (A1) and (A3) of Theorem 1, since rank E(z_0) = 6 and the corresponding rank in (A3) equals 6, and the distribution 𝒟 satisfies 𝒟(z_0) = T_{z_0}M* locally for all z_0 ∈ M*, with dim E(z_0)𝒟(z_0) = 4 and dim(E(z_0)𝒟(z_0) + Im G(z_0)) = 5. In fact, Ξ^u is locally ex-fb-equivalent to the (SNF), where ζ = (ζ_1, ζ_2, ζ_3, ζ_4) (we use ζ for the (SNF) since z is already used for the coordinates of the system obtained via Algorithm 1) and ζ_1 = (y_1, y_2, θ_1, θ_2). Note that for the system (SNF) represented in the ζ-coordinates, we have M* = {ζ | ζ_3 = ζ_4 = 0}. The variables ζ_1 = (y_1, y_2, θ_1, θ_2) and ζ_3 perform as the states of differential equations (there are differential equations for ζ̇_1 and ζ̇_3), but ζ_3 is constrained and equal to 0. Moreover, ζ_2 = F_f is a truly free variable, ζ_4 is a constrained free variable, and u_1 is a constrained control input.

FIGURE 2 A rolling disk on an inclined plane.
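The rank computations of Steps 1-3 can be mechanized with a computer algebra system. The sketch below runs one reduction step on a small hypothetical DACS (not the manipulator system, whose matrices E_1, F_1, G_1 are not reproduced here; the toy system and all names are illustrative only): it checks the ranks of E and [E G] and extracts the algebraic constraint defining M_1 from the left kernel of [E G].

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')

# Hypothetical toy DACS  E(z) zdot = F(z) + G(z) u  with one hidden constraint.
E = sp.Matrix([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])
G = sp.Matrix([0, 1, 0])
F = sp.Matrix([z2, -z1, z1 - z3])

EG = E.row_join(G)
print(E.rank(), EG.rank())   # 2 2: the third row of the system is purely algebraic

# Left kernel of [E G]: each q with q^T [E G] = 0 yields a constraint q^T F = 0.
left_kernel = EG.T.nullspace()
constraints = [sp.simplify((q.T * F)[0, 0]) for q in left_kernel]
print(constraints)           # [z1 - z3]  ->  M_1 = {z : z1 = z3}
```

On M_1 one would then introduce the new coordinate z1 − z3, as with z̄_1 = x_1 − y_1 in Step 1, and iterate until the dimensions stabilize.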

CONCLUSIONS
In this article, we proposed two normal forms for nonlinear DACSs under external feedback equivalence in order to simplify the structure of the systems and to clarify the different roles of the variables, which is our first main result. One normal form requires only the existence of a maximal controlled invariant submanifold and some constant rank assumptions on the system matrices, while the other additionally requires the involutivity of a certain distribution. Moreover, we gave a necessary and sufficient geometric condition for a nonlinear DACS to be internally regularizable (our second main result), and we formulated an algorithm to calculate the maximal controlled invariant submanifold and a feedback which internally regularizes the system. Two examples of mechanical systems were given to illustrate the proposed normal forms and the internal regularization algorithm.

ACKNOWLEDGMENT
This work was supported by Vidi-grant 639.032.733.

CONFLICT OF INTEREST
The author(s) declared that there is no conflict of interest.

DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.