Stabilization of hybrid systems by intermittent feedback controls based on discrete-time observations with a time delay

This paper investigates the stabilization of hybrid stochastic differential equations (SDEs) via periodically intermittent feedback controls based on discrete-time state observations with a time delay. First, using M-matrix theory and an intermittent control strategy, we establish sufficient conditions for the stability of hybrid SDEs. We then prove, by a comparison theorem, that a given unstable non-linear hybrid SDE can be stabilized intermittently. Two numerical examples are discussed to support our theoretical analysis.


INTRODUCTION
Stochastic systems have been applied to model practical problems in many fields, such as science and technology, information engineering, and social economy. As an important class of stochastic systems, hybrid stochastic differential equations (SDEs; also known as SDEs with Markovian switching) can well describe actual systems whose structures and parameters change abruptly. Therefore, hybrid SDEs have been studied by many researchers (see, e.g., [1][2][3][4][5]).
Stabilization is one of the hot topics in the research of hybrid stochastic systems (see, e.g., [6,7]): that is, to design a feedback control in the drift part so that the given system becomes stable. Regular feedback controls are designed based on continuous-time observations of the current state x(t) (see, e.g., [1,3,4,8]). To reduce the high cost of continuous-time state observations, Mao [9] introduced feedback controls based on discrete-time state observations to stabilize the given system, and [10] improved this method to study the discrete-time state feedback control system and stabilize a given hybrid stochastic system in the sense of mean-square exponential stability. You et al. [11] discussed not only the stability of controlled systems in the sense of mean-square exponential stability (as Mao did), but also H∞ stability, asymptotic stability in mean square, and other senses. Dong [12] discussed almost sure exponential stabilization by stochastic feedback control based on discrete-time observations. In real life, however, there is a time lag between the state observations and the true current state of the system. Chen et al. [13] studied stabilization of hybrid neutral stochastic differential delay equations by delay feedback control. Mao et al. [14] and Hu et al. [15] investigated stabilization of hybrid SDEs by delay feedback control. Li et al. [16] discussed highly non-linear hybrid stochastic delay differential equations via the Lyapunov function method. Qiu et al. [17] and Zhu et al. [18] took both discrete-time observation and delay into account when designing the controller; they studied the exponential stability of hybrid SDEs under feedback control based on discrete-time state observations with a time delay. Song et al. [19] studied stabilization based on discrete-time observations of both state and mode.
In order to reduce the continuous working time of the controller, intermittent control is an efficient strategy for stabilizing unstable systems (see, e.g., [20,21]). Intermittent control divides time into two parts: working time and rest time. The controller runs during the working time and is switched off during the rest time; in other words, the controlled system can be regarded as alternating between a closed-loop system and an open-loop system. Obviously, this reduces the control cost in practical applications; at the same time, it improves control efficiency and operability. Zhang et al. [22] applied intermittent stochastic noise to stabilize non-linear differential equations and established a theory of stabilization by intermittent stochastic disturbance. Ren et al. [23] showed the quasi-sure exponential stabilization of non-linear differential equations via intermittent G-Brownian motion. Liu et al. [24,25] investigated stochastic stabilization based on the intermittent control strategy with discrete-time feedback or time-delay feedback. Mao et al. [26] studied stabilization by intermittent control for hybrid stochastic differential delay equations. Yin et al. [27] discussed the almost sure exponential stabilization of non-linear differential equations by intermittent stochastic perturbation with jumps. Recently, intermittent control has been applied in many fields, such as complex networks (see, e.g., [28][29][30]), multi-agent systems (see, e.g., [31]), and synchronization of memory neural networks (see, e.g., [32]).
In order to obtain a better control effect, more and more scholars have studied stabilization using hybrid strategies (i.e., two or more control strategies applied at the same time; see, e.g., [29,31]). Here, we take both discrete-time state observation with delay and the intermittent control strategy into account when designing the controller; literature on this topic is rare.
This hybrid design strategy for the controller is novel and has not yet been applied to stabilize an unstable non-linear hybrid stochastic system. We design the feedback controller based on discrete-time state observations, a time delay, and the intermittent control strategy. Taking both the observation time lag and the observation frequency into account, we can decide how to design the controller. Under different assumptions, we prove that the controlled hybrid system is exponentially stable by means of stochastic analysis techniques and dynamical properties.
The rest of the paper is organized as follows. In Section 2, we introduce some preliminaries. In Section 3, we investigate the intermittent stabilization for hybrid stochastic systems by feedback controls based on discrete-time state observations with time delay. While in Section 4 we give two examples to illustrate our theory.

PRELIMINARIES
Here, let (Ω, ℱ, {ℱ_t}_{t≥0}, ℙ) be a complete probability space with a filtration {ℱ_t}_{t≥0} satisfying the usual conditions. Let B(t) = (B_1(t), …, B_m(t))^T be an m-dimensional Brownian motion defined on this probability space. Let r(t), t ≥ 0, denote a right-continuous Markov chain on the probability space taking values in a finite state space S = {1, 2, …, N} with generator Γ = (γ_{ij})_{N×N} given by

ℙ{r(t + Δ) = j | r(t) = i} = γ_{ij}Δ + o(Δ) if i ≠ j, and 1 + γ_{ii}Δ + o(Δ) if i = j,

where Δ > 0, γ_{ij} ≥ 0 is the transition rate from i to j when i ≠ j, and γ_{ii} = −∑_{j≠i} γ_{ij}. We assume that the Markov chain r(·) is independent of the Brownian motion B(·).
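The generator Γ determines r(t) through exponential holding times and jump probabilities. A minimal simulation sketch (hypothetical helper name, states indexed from 0, Python standard library only):

```python
import random

_rng = random.Random(1)

def simulate_markov_chain(gamma, r0, t_end, rng=_rng):
    """Simulate a right-continuous Markov chain on S = {0, ..., N-1} with
    generator `gamma` (off-diagonal entries are transition rates; each
    diagonal entry is minus its row's off-diagonal sum).  Returns the
    jump skeleton as a list of (jump_time, state) pairs starting at
    (0.0, r0)."""
    path, t, state = [(0.0, r0)], 0.0, r0
    while True:
        rate = -gamma[state][state]            # total jump intensity
        if rate <= 0.0:                        # absorbing state: stop
            break
        t += rng.expovariate(rate)             # exponential holding time
        if t >= t_end:
            break
        u = rng.random() * rate                # pick the target state
        candidates = [j for j in range(len(gamma)) if j != state]
        acc = 0.0
        for j in candidates:
            acc += gamma[state][j]
            if u <= acc:
                state = j
                break
        else:
            state = candidates[-1]             # guard against rounding
        path.append((t, state))
    return path
```

Between the recorded jump times the chain is constant, which is exactly the right-continuity used throughout the paper.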
Consider an unstable hybrid SDE

dx(t) = f(x(t), r(t), t) dt + g(x(t), r(t), t) dB(t)   (2.1)

on t ≥ 0 with initial values x(0) = x_0 and r(0) = r_0, where f : R^n × S × R_+ → R^n and g : R^n × S × R_+ → R^{n×m}. For the unstable hybrid SDE (2.1), we aim to design a control function u with some time delay τ_0 > 0 and gap τ > 0 between the discrete-time state observations so as to make it stable. Moreover, we combine this stabilization technique with the intermittent control strategy. Thus, the controlled system is

dx(t) = [f(x(t), r(t), t) + c(t) u(x(δ_t), r(t), t)] dt + g(x(t), r(t), t) dB(t),   (2.2)

where δ_t = [t/τ]τ − τ_0 is the latest observation time shifted back by the delay, and c(t) is the intermittent switch, equal to 1 on the working intervals and 0 on the rest intervals. Noting that δ_t < 0 for small t, we naturally impose the initial data x(t) = x_0 for t ∈ [−τ_0, 0].   (2.3)

Here, the coefficients are assumed to satisfy the following assumption.
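The controlled system (2.2) can be explored numerically with an Euler-Maruyama scheme. The sketch below is a scalar illustration only: the function name, the way the switching signal, observation gap tau, delay tau0, and working-time test are passed in are all assumptions for illustration, not the paper's scheme.

```python
import math
import random

def em_controlled(f, g, u, mode, x0, t_end, dt, tau, tau0, working,
                  rng=random.Random(2)):
    """Euler-Maruyama sketch for the controlled hybrid SDE (2.2):
        dx = [f(x(t), r(t)) + c(t) u(x(delta_t), r(t))] dt + g(x(t), r(t)) dB,
    with delta_t = floor(t / tau) * tau - tau0 (last observation minus
    the delay) and c(t) = 1 exactly when working(t) is True.
    Observations dated before time 0 fall back to the initial value x0."""
    xs = [x0]
    n = int(round(t_end / dt))
    for k in range(n):
        t = k * dt
        i = mode(t)                                   # current Markov state
        delta_t = math.floor(t / tau) * tau - tau0    # delayed observation time
        x_obs = x0 if delta_t <= 0.0 else xs[min(int(round(delta_t / dt)), k)]
        drift = f(xs[-1], i) + (u(x_obs, i) if working(t) else 0.0)
        dB = rng.gauss(0.0, math.sqrt(dt))            # Brownian increment
        xs.append(xs[-1] + drift * dt + g(xs[-1], i) * dB)
    return xs
```

For instance, with an unstable drift f(x, i) = x, small noise g(x, i) = 0.1x, gain u(x, i) = −3x, and a controller that works on 90% of each period of length 0.5, the simulated path decays despite the observation gap and delay (all coefficients hypothetical).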
Assumption 2.1. There exist three positive constants K_1, K_2, and K_3 such that

|f(x, i, t) − f(y, i, t)| ≤ K_1|x − y|, |g(x, i, t) − g(y, i, t)| ≤ K_2|x − y|, |u(x, i, t) − u(y, i, t)| ≤ K_3|x − y|

for all x, y ∈ R^n, i ∈ S, and t ≥ 0. We also assume that f(0, i, t) = 0, u(0, i, t) = 0, and g(0, i, t) = 0 for all t ≥ 0. This assumption implies the linear growth conditions

|f(x, i, t)| ≤ K_1|x|, |g(x, i, t)| ≤ K_2|x|, |u(x, i, t)| ≤ K_3|x|.

MAIN RESULTS
The stabilization problem by intermittent feedback controls based on discrete-time observations with a time delay can be transferred to the classical stabilization problem by intermittent feedback controls without discrete-time state observations and delay; the corresponding auxiliary system is

dy(t) = [f(y(t), r(t), t) + c(t) u(y(t), r(t), t)] dt + g(y(t), r(t), t) dB(t),   (3.1)

where c(t) denotes the intermittent switch (equal to 1 on working intervals and 0 on rest intervals). In [15], Hu et al. used this approach to build up the connection between the delay feedback control and the control function without delay. We require that the matrix defined by (3.2) be a non-singular M-matrix. For a non-zero initial state, almost all trajectories of system (3.1) never reach the origin; thus, Lyapunov functions can be chosen in a variety of ways.
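The non-singular M-matrix condition can be checked computationally: a Z-matrix (non-positive off-diagonal entries) is a non-singular M-matrix if and only if all its leading principal minors are positive. A small pure-Python sketch (no pivoting, which is adequate for the well-conditioned low-dimensional matrices arising here):

```python
def leading_minors(a):
    """Leading principal minors of a square matrix via Gaussian
    elimination without pivoting: the k-th minor is the product of the
    first k pivots."""
    n = len(a)
    m = [row[:] for row in a]          # work on a copy
    minors, det = [], 1.0
    for k in range(n):
        det *= m[k][k]
        minors.append(det)
        if m[k][k] == 0.0:             # zero pivot: remaining minors undefined
            break
        for i in range(k + 1, n):
            factor = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= factor * m[k][j]
    return minors

def is_nonsingular_m_matrix(a):
    """True iff `a` is a Z-matrix with all leading principal minors
    positive, i.e. a non-singular M-matrix."""
    n = len(a)
    if any(a[i][j] > 0 for i in range(n) for j in range(n) if i != j):
        return False                   # not even a Z-matrix
    return all(d > 0 for d in leading_minors(a))
```

For example, [[2, −1], [−1, 2]] passes (minors 2 and 3), while [[1, −2], [−2, 1]] fails because its determinant is negative.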
Proof. Applying the Itô formula to |y(t)|^p and using Assumption 2.1 gives, for some constant λ > 0,

|y(t)|^p ≤ |y(0)|^p + λp ∫_0^t |y(s)|^p ds.

The well-known Gronwall inequality then yields

sup_{0≤s≤t} |y(s)|^p ≤ |y(0)|^p exp{λpt}.

Let k be any non-negative integer. We first prove the claim for p = 2: by Assumption 2.1 we obtain (3.10), and combining this with (3.9) and repeating the above procedure yields the moment estimate along the grid. By the Chebyshev inequality, the tail probabilities are summable, and the Borel-Cantelli lemma shows that, for almost all ω ∈ Ω, there is a positive integer i_0 = i_0(ω) such that the estimate holds for all i ≥ i_0. Hence the assertion follows for almost all ω ∈ Ω. To prove Theorem 3.8, we present some lemmas.
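The passage from the pth-moment estimate to the almost sure statement follows a standard pattern; schematically (with a generic decay rate γ > 0 and constant C standing in for the constants of the lemma):

```latex
% pth-moment decay along the grid points:
\mathbb{E}\,|y(k)|^{p} \le C e^{-\gamma k}, \qquad k = 0, 1, 2, \dots
% Chebyshev's inequality with the threshold e^{-\gamma k/2}:
\mathbb{P}\bigl\{ |y(k)|^{p} > e^{-\gamma k/2} \bigr\}
  \le e^{\gamma k/2}\, \mathbb{E}\,|y(k)|^{p} \le C e^{-\gamma k/2}.
% The right-hand side is summable, so Borel--Cantelli gives, a.s.,
% |y(k)|^{p} \le e^{-\gamma k/2} for all sufficiently large k, whence
\limsup_{k\to\infty} \frac{1}{k} \log |y(k)| \le -\frac{\gamma}{2p}
  \quad \text{a.s.}
```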
Proof. We will simply write H_3(p, τ_1, T) = H_3 and H_4(p, τ_1, T) = H_4. Fix τ_1 ∈ (0, τ*) and the initial data (2.3). For simplicity, we write x(t; x_0, r_0, 0) = x(t) and r(t; r_0, 0) = r(t) for t ≥ 0. Likewise, we write y(τ_1 + T; τ_1, x(τ_1), r(τ_1)) = y(τ_1 + T). By Lemmas 3.4 and 3.6, letting p_1 = 2 ∨ p, p_2 = 2 ∧ p, C_1 = min_{i∈S} θ_i, and C_2 = max_{i∈S} θ_i, we obtain (3.17). Moreover, by the elementary inequality, (3.17), and Lemma 3.7, we have the corresponding estimate. On the other hand, by Lemma 3.6, we have the bound for some ε > 0, and we see from (3.20) that the estimate holds on the first interval. Let us proceed to consider the solution x(t) on t ≥ 2τ_1 + T. There exist Δ_1 and a positive integer N such that T + 2τ_1 = NΔ_1. By the flow property, this can be regarded as the solution of Equation (2.2) with the initial data x(NΔ_1) and r(NΔ_1) at t = NΔ_1. In the same way as above, we can show the analogous estimate; by (3.21), this implies the bound on the next interval. Repeating this procedure, we obtain the bound for all k = 1, 2, …. Now, by Lemma 3.6, we have (3.22) for all k = 0, 1, 2, …. Hence, for t ∈ [kNΔ_1, (k + 1)NΔ_1], the solution satisfies the corresponding estimate. Using the Markov inequality and (3.22), we get a summable tail bound for all k ≥ 0. By the Borel-Cantelli lemma, for almost all ω ∈ Ω there exists an integer k_0 = k_0(ω) such that the bound holds for any k > k_0(ω). This implies the almost sure exponential decay for almost all ω ∈ Ω. The proof is therefore complete. □

SIMULATIONS
We present two numerical examples in this section to support our theoretical results.

Example 4.1. Consider a hybrid SDE
Let r(t) be a Markov chain with state space S = {1, 2} and a given generator Γ. It is obvious that the hybrid SDE is unstable (see Figure 1). In this example, we design a control function u : R × S → R for which it is straightforward to show that Assumption 2.1 is satisfied with K_1 = 0.2, K_2 = 0.5, K_3 = 0.5. Then, choosing p = 0.99, the constants of Assumption 3.1 are easily computed, and the matrix defined by (3.2) is a non-singular M-matrix. By (3.3), we have θ_1 = 6.9425 and θ_2 = 6.6298, so that C_2 = 6.9425 and C_1 = 6.6298. With the intermittent parameter θ = 0.4, the resulting equation has the unique positive root τ* = 1.1031 × 10^{−4} (about an hour if the time unit is one year). By Theorem 3.8, we can conclude that the controlled system (2.2) is almost surely exponentially stable provided τ_1 < 1.1031 × 10^{−4}; we let (τ, τ_0) = (10^{−4}, 10^{−5}) and Δ = 10^{−5} (see Figure 3). The computer simulation clearly supports this theoretical result. For θ = 0.85, 0.9, 0.95, 0.99, the calculation shows that the admissible delay changes with the intermittent parameter θ: the larger θ is, the longer the controller works in each period, and the larger τ* is, the lower the required control frequency. Different values of θ can be chosen according to the actual situation (see Table 1).
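The exact equation defining τ* depends on (3.3) and the lemmas and did not survive extraction here; the sketch below only illustrates how the unique positive root of a monotone bound function of this kind can be located by bisection. The function phi and its constants are purely illustrative stand-ins, not the paper's actual bound.

```python
import math

def positive_root(phi, hi=1.0, tol=1e-12):
    """Bisection for the unique positive root of an increasing function
    phi with phi(0) < 0: grow the bracket until phi(hi) > 0, then halve
    until the bracket width is below tol."""
    lo = 0.0
    while phi(hi) < 0.0:        # expand the bracket to contain the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative bound in the spirit of the example: linear-plus-square-root
# growth in tau, offset by a small stability margin (all numbers made up).
phi = lambda tau: 6.9425 * tau + 6.6298 * math.sqrt(tau) - 0.01
```

Evaluating the root of such a bound for each intermittent parameter value is how a table like Table 1 can be produced.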

CONCLUSION
Here, we have discussed the stabilization of continuous-time hybrid SDEs by intermittent feedback controls based on discrete-time state observations with a time delay. The stabilities considered are exponential stability in pth moment and almost sure exponential stability. We point out that the problem becomes harder when discrete-time state observation, delayed feedback, and the intermittent control strategy are taken into consideration at the same time. Finally, we obtained an upper bound on (τ, τ_0) and the intermittent parameter θ. Two examples with computer simulations are given to support our theory.