Fault accommodation controller under Markovian jump linear systems with asynchronous modes

We tackle the fault accommodation control (FAC) problem in the Markovian jump linear system (MJLS) framework in the discrete-time domain, under the assumption that the Markov chain mode cannot be accessed directly. This premise brings challenges, since the controllers are no longer allowed to depend on the Markov chain, meaning that there is asynchronism between the system and controller modes. To tackle this issue, a hidden Markov chain (θ(k), θ̂(k)) is used, where θ(k) denotes the Markov chain mode and θ̂(k) denotes the estimated mode. The main novelty of this work is the design of ℋ∞ and ℋ2 FAC under the MJLS framework considering partial observation of the Markov chain. Both designs are obtained via bilinear matrix inequality optimization problems, which are solved using a coordinate descent algorithm. As secondary results, we present simulations using a two-degree-of-freedom serial flexible joint robot to illustrate the viability of the proposed approach.


INTRODUCTION
Health monitoring of complex control systems has received increased attention in recent years, as it can play an important role in the predictive maintenance of such systems. It allows for autonomous detection and prediction of the occurrence of faults, so that these potential problems can be mitigated in a timely manner. Among the multitude of structures that tackle this issue, fault tolerant control (FTC) is one of the leading frameworks. [1][2][3][4][5] The FTC framework can be classified into two main methodologies, passive FTC and active FTC. 6 In passive FTC, the occasional fault occurrence is considered in the controller design, and the controller itself does not change its dynamics. 3 In active FTC (AFTC), the controller is reactive by design and can adapt itself whenever faults are present, as in the gain-scheduled framework. 7 This article focuses on the AFTC methodology, more specifically on the situation where there is no parallel actuator, the so-called fault accommodation control (FAC). The main purpose of this method is to mitigate the fault effect in the system until a definitive repair can be made. 8 There are several ways to design an FAC scheme, for instance, the data-driven approach 5 or the model-based approach. 3 In this article, we tackle the problem using the model-based approach.
In networked systems, one should not neglect the faults related to the communication channels, since packet loss may impact the overall system performance. One possible way to consider such behavior is to use the so-called Markovian jump linear system (MJLS) as a tool to model the network behavior. 9 Regarding more recent works, we can, for instance, encounter methods that use an augmented system as a descriptor system associated with a sliding mode observer for MJLS. 10 Sliding mode ℋ∞ finite-time boundedness in the discrete-time domain under the Markovian jump system framework has also been used for fault accommodation. 11 When Markovian jump systems with mode-dependent interval time-varying delay and Lipschitz nonlinearities are considered, an FTC approach has also been well studied. 12 For multiagent systems with a switching topology, a consensus-algorithm-based FAC has been investigated. 13 Tariverdi et al 14 provide an FTC design implementing an adaptive sliding mode control method applied to multiuser telerehabilitation systems. Zhang et al 15 present a solution based on an interval sliding mode observer combined with nonminimum-phase linear-parameter-varying systems. Jiang et al 16 compile several works on fault detection and fault accommodation in the context of switching systems applied to spacecraft. Li et al 17 tackle the state estimation problem under the assumption that the system is subject to delay and the transition probabilities are uncertain. One similarity of the aforementioned works is the premise that the Markov chain modes are instantly accessible, which is not realistic in most practical applications.
Regarding works that do not assume that the Markov modes are directly accessible but, instead, that there is a detector providing information about this parameter, we can refer to References 18-21, which deal with the control and filtering problems of such systems considering a hidden Markov setup for the Markov and detector parameters. Along similar lines, the paper by Ogura et al 22 deals with state-feedback control for MJLS considering hidden Markov mode observations, while that by Cheng et al 23 presents an event-based asynchronous approach for MJS under a hidden mode detection formulation and missing measurements. However, the aforementioned works do not tackle the FAC problem. Therefore, our motivation is the development of a new procedure to design FAC for MJLS that does not rely on this particular premise (the Markov chain being directly accessible), that is, we consider the eventual asynchronism between the actual network mode and the mode used by the sensors or actuators (obtained by means of estimation).
This article aims to provide an FAC under the discrete-time MJLS framework with partial information on the jump parameter. The partial-information setting is inspired by the work of Todorov et al, 18 which allows us to consider the eventual asynchronism between the actual network mode and the mode implemented in the controllers. This framework yields a control design that mitigates the fault effect in the MJLS when the Markov mode is not instantly accessible, under two performance criteria: the ℋ∞ and ℋ2 norms. The main novelties of this article are summarized as follows: • Analysis of the ℋ∞ FAC problem in the discrete-time domain for the MJLS framework with partial information on the Markov mode, based on bilinear matrix inequalities (BMIs).
• Analysis of the ℋ2 FAC problem in the discrete-time domain for the MJLS framework with partial information on the Markov mode, based on BMIs.
• Analysis of the mixed ℋ∞/ℋ2 FAC problem in the discrete-time domain for the MJLS framework with partial information on the Markov mode, based on BMIs.
The main advantages of the proposed approach compared with other fault accommodation control designs can be listed as follows: (i) the proposed approach considers the eventual asynchronism between the actual network mode and the mode used by the sensors or actuators; (ii) the proposed approach only acts to mitigate the fault effect, without the need to alter the nominal control performance.
Hereafter, this article is organized as follows. Section 2 presents the theoretical background. Section 3 formulates the FAC problem. Section 4 introduces the main novelties of this article. Section 5 presents the simulation and experimental results. The final comments are provided in Section 6.

PRELIMINARIES
In this section, we provide a basic theoretical background to understand the concepts presented herein.

Notation
The notation throughout this article is standard. The n-dimensional real Euclidean space is denoted by R^n, and R^{n×m} represents the space of real n × m matrices, so that, for example, A ∈ R^{n×m}. The symbol (⋅)′ denotes the transpose of a matrix, and I indicates the identity matrix. The operator Her(⋅) represents the symmetric sum Her(X) = X + X′. A diagonal matrix is represented by the operator diag(⋅). The symbol • represents a symmetric block in a partitioned symmetric matrix. On a probability space (Ω, ℱ, P) with filtration {ℱ_k}, the expected value operator is represented by E(⋅) and the conditional expectation operator by E(⋅|⋅). The space of all ℱ_k-adapted discrete-time processes z = {z(k)} of dimension r such that ||z||₂² ≜ ∑_{k=0}^∞ E(||z(k)||²) < ∞ is denoted by ℓ₂^r.

Markovian jump linear systems
Let us define a generic Markovian jump system as in (1), where x(k) ∈ R^n denotes the state process, w(k) ∈ R^r is a stochastic disturbance with finite energy (w ∈ ℓ₂), and z(k) ∈ R^p represents the output signal. The index θ(k) is a Markov chain taking values in the set N = {1, 2, …, N}, and its jump behavior is described by the transition matrix P = [p_ij], which is assumed to be nondegenerate, meaning that no column is identically zero; see Reference 24. For a set of matrices Q_1, …, Q_N, we define E_i(Q) = ∑_{j=1}^N p_ij Q_j. An important hypothesis in this article is that the Markov chain mode, denoted by θ(k), is not instantly accessible; instead, there is a finite set M that contains all the possible estimated values for θ(k), with the estimate represented by θ̂(k). ℱ_0 represents the σ-field generated by {x(0), θ(0)}, and ℱ_k is the σ-field generated by {x(0), θ(0), θ̂(0), …, x(k), θ(k)}. It is supposed that θ̂(k) ∈ M = {1, …, M} is related to θ(k) through the conditional probabilities α_il = Prob(θ̂(k) = l | θ(k) = i); consequently, α_il represents the probability that the detector emits the signal l ∈ M given θ(k) = i. The set M_i collects the signals the detector can emit in mode i, that is, M_i = {l ∈ M : α_il > 0}. Consider 𝒢_k as the σ-field generated by {x(t), θ(t), θ̂(t); t = 0, …, k}. We assume that, conditioned on θ(k) = i, the detector output is independent of the past, that is, Prob(θ̂(k) = l | 𝒢_{k−1}, θ(k) = i) = α_il. Summing up, system (1) depends on two indexes: the first, θ(k), represents the Markov chain mode, which we assume is not directly accessible; the second, θ̂(k), represents an observable estimate of θ(k). This joint dependency is based on hidden Markov models. 24 This particular dependency of system (1) is an important aspect that will be useful later in this article.
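To make the detector mechanism concrete, the following minimal sketch samples the pair (θ(k), θ̂(k)): the chain θ follows the transition matrix P, and at each step the detector emits θ̂ according to the conditional probabilities α_il. The numerical values of P and α below are illustrative placeholders, not data from this article.

```python
import random

def sample_chain(P, alpha, T, theta0=0, seed=0):
    """Sample the hidden pair (theta, theta_hat) for T steps:
    theta follows the transition matrix P, and the detector emits
    theta_hat = l with probability alpha[theta][l]."""
    rng = random.Random(seed)
    theta, path = theta0, []
    for _ in range(T):
        # detector emission conditioned on the current (hidden) mode
        theta_hat = rng.choices(range(len(alpha[theta])),
                                weights=alpha[theta])[0]
        path.append((theta, theta_hat))
        # Markov transition to the next mode
        theta = rng.choices(range(len(P[theta])), weights=P[theta])[0]
    return path

# illustrative 2-mode example: the detector is right 90%/80% of the time
P = [[0.9, 0.1], [0.3, 0.7]]
alpha = [[0.9, 0.1], [0.2, 0.8]]
path = sample_chain(P, alpha, T=1000)
match = sum(t == th for t, th in path) / len(path)  # empirical sync rate
```

The fraction `match` below 1 is exactly the asynchronism the FAC design must tolerate: the controller sees only the second component of each pair.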
We present next the definition of stochastic stability that will be used throughout this article.

ℋ∞ norm
Before introducing the definition of the ℋ∞ norm, it is necessary to define the admissible disturbance set w ∈ ℓ₂. Provided that (1) is stochastically stable, as in Definition 1, the ℋ∞ norm of (1) is well defined. Considering the matrices that compose system (1), the bounded real lemma presented in Reference 18 proposes inequalities to obtain an upper bound γ > 0 for the ℋ∞ norm: one must find matrices P_i, i ∈ N, such that inequalities (5) and (6) hold for all i ∈ N and all l ∈ M_i.
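For completeness, the ℋ∞ norm of a stochastically stable MJLS is commonly defined in the literature as the worst-case ℓ₂ gain from the disturbance to the output; we assume the definition intended here takes this standard form:

```latex
\[
  \|\mathcal{G}\|_{\infty}
  \;=\;
  \sup_{\,0 \neq w \in \ell_2}
  \frac{\|z\|_{2}}{\|w\|_{2}},
\]
```

so that feasibility of inequalities (5)-(6) certifies the bound ‖𝒢‖∞ < γ.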

ℋ2 norm
Definition 3 (ℋ2 norm). Suppose that (1) is stochastically stable, as in Definition 1. For x(0) = 0, define z_{s,i} as the output of (1) for the initial condition θ(0) = i and the input w(k) = 0 for k ≥ 1, with w(0) = e_s, where e_s is the sth vector of the standard basis of R^r. The ℋ2 norm of (1) with respect to the outputs z_{s,i} is then given in terms of the initial distribution, where Prob(θ(0) = i) = μ_i ≥ 0 for all i ∈ N represents the initial Markov chain state distribution.
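Again for completeness, the standard form of this norm in the MJLS literature, which we assume matches the display omitted above, weights the impulse-response outputs by the initial mode distribution:

```latex
\[
  \|\mathcal{G}\|_{2}^{2}
  \;=\;
  \sum_{i \in \mathbf{N}} \sum_{s=1}^{r} \mu_i \,\|z_{s,i}\|_{2}^{2}.
\]
```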
Considering (1), writing the corresponding inequalities (8)-(11), and defining the associated augmented variables, we have the following result (see the proof in References 19 and 20). We then define the following optimization problem related to the ℋ2 norm.

PROBLEM FORMULATION
The FAC problem is a particular class of FTC that uses a different approach compared with the usual FTC schemes in the literature. The majority of FTC approaches consider the occurrence of faults during the design process of a single static controller. In the case of FAC, there are two controllers working alongside each other: the first is designed for nominal conditions, while the other becomes active when a fault occurs.
For the FAC problem, we consider the MJLS formulation in (14), where the vectors x(k) ∈ R^n, y(k) ∈ R^p, d(k) ∈ R^r, f(k) ∈ R^q, and u_total(k) ∈ R^m are, respectively, the system state, the output, the exogenous input, the fault signal, and the control input, and θ(k) denotes the mode of a Markov chain initialized at θ_0. The nominal control is provided by the state-feedback controller (15), where x(k) ∈ R^n represents the state of system (14). Figure 1 depicts the overall block diagram of the MJLS along with the controllers: K for the nominal conditions and K_c for the faulty ones.
As shown in Figure 1, the signal u total is the sum of the nominal control signal u(k) and the fault compensation control signal h(k). Consequently, in nominal conditions, the signal h(k) is close to zero. In other words, the fault compensation control signal only acts in the presence of a fault as expected.
The FAC controller K_c is assumed to have the structure in (16), where the FAC state vector lies in R^q, and u(k) and y(k) are, respectively, the control signal from the nominal controller and the measured signal from the system. It is of utmost importance to note that the FAC does not depend on the index θ(k); instead, it depends solely on the index θ̂(k), which is one of the novelties of the present work.
As presented in Figure 1, the closed-loop interconnection of system (14), the state-feedback control law (15), and the proposed FAC (16) can be compactly written as in (17), with the corresponding augmented matrices. As previously stated, the main purpose of this work is to provide an FAC design, as in (16), in which the supplementary control signal accommodates the fault signal. For the ℋ∞ case, this accommodation is described by a difference signal that we desire to be close to zero. From the above, the optimization problem for the ℋ∞ case can be posed with suitably augmented matrices C_i and D_i. The use of the ℋ2 norm as a performance criterion is due to its similarity to LQR control, which is known in the literature for its good performance and reliability; the optimization problem for the ℋ2 case is posed analogously, with its own augmented matrices. It is important to point out that the nominal controller K is obtained beforehand, for instance, the controller in Reference 18, but any other controller that guarantees stability under the same conditions can be implemented.
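Although the exact structure of (16) is fixed by the article, a dynamic compensator of the following general shape, driven by u(k) and y(k) and indexed only by the estimated mode θ̂(k), is consistent with the description above; the matrix names here are placeholders, not the article's definitions:

```latex
\[
  x_c(k+1) = \mathfrak{A}_{\hat{\theta}(k)}\, x_c(k)
           + \mathfrak{B}_{\hat{\theta}(k)}
             \begin{bmatrix} u(k) \\ y(k) \end{bmatrix},
  \qquad
  h(k) = \mathfrak{C}_{\hat{\theta}(k)}\, x_c(k),
\]
```

with x_c(k) ∈ R^q the compensator state and h(k) the accommodation signal added to u(k).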

MAIN CONTRIBUTION
In this section, we present the main novelty of this article: the design of an FAC under the MJLS framework considering partial knowledge of the Markov modes, for both the ℋ∞ and ℋ2 norm cases.

ℋ∞ case
Our first main result, the procedure to design the FAC for the ℋ∞ norm case, is presented in Theorem 1 below.

Theorem 1.
There exists a mode-dependent FAC as described in (16), with guaranteed ℋ∞ performance level γ, provided that inequalities (22) and (23) hold for all i ∈ N and for all l ∈ M_i.
Proof. The proof is based on the results presented in References 21 and 25. We impose a particular structure on the matrices P_i and P_i^{-1} of (5)-(6), and define the auxiliary matrices Π_i accordingly. Observing that (23) is block diagonal, we can also write that Her(R) > E_i(X − Z) > 0, and, as a by-product, R is nonsingular. Setting U_i = −X_i allows us to rewrite the matrices P_i and P_i^{-1}, so that Equation (25) follows.
As R is nonsingular, and using the results presented in References 25 and 26, we get that R E_i(X − Z)^{-1} R′ ≥ Her(R) + E_i(Z − X), so that constraint (23) still holds if the diagonal term Her(R) is replaced accordingly. Pre- and post-multiplying (29) by diag(I, I, Π_i, I) and its transpose, respectively, we obtain (30). Pre- and post-multiplying (31) by a suitable block-diagonal matrix and then applying the Schur complement to the resulting constraint, we obtain that (6) holds. At last, observing that (22) can be rewritten as (32), we get, after pre- and post-multiplying (32) by a suitable block-diagonal matrix, that constraint (5) holds. Since (5)-(6) hold for the closed-loop system (17), we get from Lemma 1 that ||G_aug||_∞ < γ, and the claim follows. ▪ Remark: Notice that the matrices of the FAC controller in (16) satisfying (19) are directly obtained from the solution of inequalities (22) and (23).

ℋ2 case
We now present the design of an FAC for the ℋ2 norm case.

Theorem 2.
There exists a mode-dependent FAC K_c as in (16), with guaranteed ℋ2 performance level, provided that inequalities (33)-(36) hold for all i ∈ N and for all l ∈ M_i.
Proof. The proof follows a scheme similar to that of Theorem 1. Consider Q_i in (8)-(11) with a particular block structure, and define the auxiliary matrices accordingly. It follows from (35)-(36) that R is nonsingular. By imposing Ū_i = −Ô_i and recalling that Q_i Q_i^{-1} = I, we can rewrite (37) as (39), where Υ_1i = T_i^{-1} − (O_i − T_i)^{-1}, and we can also rewrite (38) accordingly. Using the same idea as in the proof of Theorem 1, we get that R E_i(O − T)^{-1} R′ ≥ Her(R) + E_i(T − O). Let us rewrite (35)-(36) as (41)-(42). Pre- and post-multiplying (41) by diag(I, I, Π_i) and (42) by diag(I, I, Π_i, I), we get (43)-(44). Pre- and post-multiplying (43) and (44) by suitable block-diagonal matrices, we get that (9) and (11) hold with the closed-loop matrices of system (17). Consequently, we can rewrite (33) as (45). Since (33) and (8) are therefore equivalent, we can see that (10) is also satisfied by pre- and post-multiplying (45) by the corresponding transformation. From Lemma 2, ||G_aug||_2 < γ, and the claim follows. ▪ The proof of Theorem 3 is a direct consequence of Theorems 1 and 2. ◼ Remark: It is important to mention that the level of conservatism in Theorem 3 is higher than that of Theorems 1 and 2, since Theorem 3 considers the BMI constraints (22)-(23) from Theorem 1 and (33)-(36) from Theorem 2 simultaneously. It is noteworthy that the number of variables in Theorem 3 is not the direct sum of the numbers of variables in Theorems 1 and 2, because the matrix R and the matrices that compose the FAC (16) are present in the BMI constraints of both theorems. Regarding the number of BMI constraints, Theorem 1 has 2 × i_max × l_max BMIs, Theorem 2 has 4 × i_max × l_max BMIs, and the number of BMIs in Theorem 3 is the sum of those in Theorems 1 and 2, that is, 6 × i_max × l_max.
Hence, the region of feasible solutions of Theorem 3 is smaller than those of Theorems 1 and 2, consequently increasing the computational effort necessary to solve Theorem 3.

Coordinate descent algorithm
As stated previously, the constraints in Theorems 1 and 2 are BMIs. For solving optimization problems with BMI constraints, there are a number of approaches in the literature, for instance, References 27 and 28. In this article, we use the coordinate descent algorithm (CDA), which is also used and presented in References 21 and 29. The CDA is presented below.

Algorithm 1. Coordinate Descent Algorithm
Input: the nominal controller K, the stopping tolerance ε, and the maximum number of iterations t_max. Output: γ and the FAC matrices of (16), including ℭ. Initialization. While the stopping criterion is not met: Step 1: Solve the constraints in Theorem 1 or 2 considering ℭ as a constant, which can be obtained using Reference 18. Obtain the values of R and Z_i for Theorem 1, or R and T_i for Theorem 2.
Step 2: Solve the constraints in Theorem 1 or 2, this time using the values of R and Z_i (or R and T_i) obtained in Step 1 and treating ℭ as a variable. Obtain the value of γ.
In the above algorithm, the input ε is the stopping criterion and t_max is the maximum number of iterations allowed. Remark: The nominal controller used in the CDA can be obtained using any design approach, but it is recommended to use a controller that is also designed under the MJLS framework. If the first iteration is feasible, the algorithm will at least keep the result obtained, or improve upon it.
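The alternation in Steps 1 and 2 can be sketched as follows. In the article each step solves an LMI problem (the BMI with one block of variables frozen); to keep the sketch self-contained and runnable, each step below instead has a closed-form minimizer over a toy biconvex objective, which is an assumption of this illustration, not the article's setup.

```python
def cda(f, argmin_x, argmin_y, x0, y0, t_max=100, tol=1e-9):
    """Coordinate descent: alternate exact minimization over two blocks."""
    x, y = x0, y0
    prev = f(x, y)
    for _ in range(t_max):
        x = argmin_x(y)          # Step 1: freeze y, solve for x
        y = argmin_y(x)          # Step 2: freeze x, solve for y
        cur = f(x, y)
        if prev - cur < tol:     # stopping criterion: negligible progress
            return x, y, cur
        prev = cur
    return x, y, cur

# toy biconvex objective with minimum value 0 attained at (x, y) = (2, 1);
# the per-block minimizers are obtained by setting partial derivatives to zero
f = lambda x, y: (x + y - 3) ** 2 + (x - 2 * y) ** 2
x, y, val = cda(f, lambda y: (y + 3) / 2, lambda x: (x + 3) / 5, 0.0, 0.0)
```

As in the CDA of the article, a feasible first iteration guarantees monotone improvement: the objective never increases between sweeps.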

RESULTS
In this section, we provide the simulation and experimental results using a two-degree-of-freedom SFJR (Quanser model 2DSFJ). First, we present the mathematical model of the SFJR; next, we provide the method used to model a particular fault in the SFJR; finally, the results obtained via MATLAB simulation are presented.

SFJR modeling
The SFJR system on which the simulations were based is presented in the force diagram in Figure 2; the description and value of each parameter can be seen in Table 1. Using the variables defined in Table 1, we define the state vector of the SFJR system. Let the control input be the electrical current of the first motor, so that u1 = Im1. In this case, the state-space matrices A and B of the SFJR system (see also Reference 30) follow from the model; they were obtained using the parameters provided by the manufacturer and shown in Table 1. The discretization procedure implemented was a zero-order hold with a sampling time of 0.05 second.
Therefore, the matrices that describe the system in the discrete-time domain follow. Observe that the matrix that represents the exogenous input, J, is proportional to the input matrix B. Another relevant observation is that the matrix F represents a possible fault in the positioning/acceleration of the load, that is, the second part of the joint.
Hereafter, we present the MJLS modeling of the system, which is responsible for representing the network communication loss. This modeling is widely used in the networked control systems field and, for the sake of simplicity, we consider only the sensor communication. The communication loss is modeled by assigning a specific Markov chain mode to each network state. In this case, there are two modes: the first represents nominal communication and the second represents communication loss. The transition matrix and the detector matrix used are given in (52). Using the above-mentioned parameters and according to Theorem 1, the FAC as in (16) is then obtained.

Fault modeling
For the numerical simulation, the input signal and the fault signal were generated as presented in Figure 3, where the black curve represents the input signal and the dashed one represents the fault signal, which models an abnormal decrease in the input signal.

Simulation results
In this section, we present the results of a Monte Carlo study in which we compare the performance of the closed-loop system with the ℋ∞ and ℋ2 FAC approaches and without FAC, that is, using only the nominal controller. Each Monte Carlo analysis is based on 200 simulations where the noise and the Markov transitions are randomized. Moreover, we also compare these results with two distinct FACs: the first does not consider the eventual asynchronism, which means the controller and FAC depend directly on the Markov chain mode; the second completely ignores the presence of jumps. Figures 4 to 7 present each state of the plant, and Figure 8 presents the control signal. In Figures 4 to 8, the black curve represents the results obtained using Theorem 1, the blue curve denotes the results obtained using Theorem 2, the red curve represents the FAC approach that does not consider the asynchronism, the green curve denotes the FAC approach that considers neither the asynchronism nor the loss of communication, and the dashed black curve represents the situation without any FAC. It can be observed from Figure 4 that both FAC approaches are able to accommodate the fault, as expected. The figure shows that while the ℋ∞ case provides a more aggressive behavior and better fault compensation than the ℋ2 one, it influences the control signal even when the plant is in its nominal state. The FAC approaches that do not consider the asynchronism give the worst performance compared with the other ones, which is expected.
Similar observations can be made in Figure 5, where both FAC approaches minimize the effect of the fault. It can be seen in this figure that the black and blue curves reach the nominal value even in the faulty situation at instant 5.5 seconds, which does not occur for the case without the FAC. The FACs that do not consider the asynchronism once again present the worst performance. Figure 6 shows the system velocity, where the ℋ∞ FAC approach gives performance close to the nominal case while the ℋ2 FAC case exhibits noisy behavior. None of the FAC approaches surpasses the performance of the nominal controller in faultless conditions. The FACs that do not consider the asynchronism give the worst performance, with pronounced chattering.
In Figure 7, the compared situations do not present meaningful differences, although both FAC approaches present slightly lower values throughout the entire simulation.
As can be observed in Figure 8, which shows the control signal u_total(k) response for all cases with and without fault, the FAC controller designed using Theorem 2 is more susceptible to noise, which is expected since this solution is based on the ℋ2 norm. However, its maximum values barely surpass the control signal of the system without FAC. For the FACs that do not consider the asynchronism, the chattering is even more pronounced, which is not desirable.
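The Monte Carlo procedure described above (randomized noise and Markov transitions, trajectories averaged over 200 runs) can be sketched as follows. The scalar two-mode jump dynamics and all numerical values here are placeholders standing in for the SFJR closed loop, which is too large to reproduce in a snippet.

```python
import random

def monte_carlo(T=100, runs=200, seed=1):
    """Average T-step state trajectories of a scalar two-mode jump
    system over `runs` realizations of the noise and the Markov chain."""
    rng = random.Random(seed)
    a = {1: 0.9, 2: 0.97}   # placeholder closed-loop dynamics per mode
    p = {1: 0.95, 2: 0.60}  # probability of staying in the current mode
    avg = [0.0] * T
    for _ in range(runs):
        x, mode = 1.0, 1
        for k in range(T):
            w = rng.gauss(0.0, 0.01)        # randomized disturbance
            x = a[mode] * x + w
            avg[k] += x / runs              # accumulate the ensemble mean
            if rng.random() > p[mode]:      # randomized Markov transition
                mode = 2 if mode == 1 else 1
    return avg

traj = monte_carlo()
```

Since both modes are stable (|a| < 1), the averaged trajectory decays toward zero, mirroring the fault-free behavior expected from the nominal closed loop.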

CONCLUSION
In the present work, we solve the fault accommodation problem for MJLSs with partial observation of the Markov parameter in the discrete-time domain. Our main contributions, presented in Section 4, are the designs of ℋ∞ and ℋ2 FAC for MJLS with partial observation, based on BMIs. We also present the design of a mixed ℋ2/ℋ∞ FAC for MJLS with partial observation, likewise based on BMIs. The assumption of partial observation of the Markov chain imposes some challenges, which were tackled using hidden Markov chains. We describe the CDA as a tool to solve the proposed BMI formulations. A possible way to extend this line of research would be to consider control saturation and parallel actuators in the formulation.