Vision-based tip position tracking control of two-link flexible manipulator

Abstract: Owing to the non-collocation of actuators and sensors in a flexible-link manipulator (FLM), it becomes difficult to achieve accurate tip position tracking. To resolve this issue, a vision sensor is used for direct measurement of the tip position instead of the traditional mechanical sensors. Among the different visual servoing (VS) control schemes, image-based VS (IBVS) is the most effective. However, the IBVS scheme faces challenges such as singularities in the interaction matrix and local minima in the trajectories, which affect system performance in real-time applications. In this study, a new moment-based visual feature is selected to address these issues. Furthermore, a new two-time scale IBVS controller is developed for the tip-tracking control problem of the two-link flexible manipulator (TLFM). In the proposed control scheme, the dynamics of the FLM is decomposed into two-time scale models, namely a slow subsystem and a fast subsystem. The performance and robustness of the proposed new two-time scale IBVS controller for the TLFM are verified through simulation studies. The obtained results show that the proposed controller effectively stabilises the oscillatory dynamics and tracks the reference trajectory accurately.


Introduction
In a flexible-link manipulator (FLM), sensors and actuators are not placed at the same location, i.e. they are non-collocated. In most works, tip position information is measured by a traditional mechanical sensor such as a strain gauge, accelerometer, or encoder. However, these sensors sometimes exhibit poor performance in critical environments due to electromagnetic interference and give a noisy response. A vision sensor, which gives a direct measure of tip-point deflection, provides a solution to these difficulties. In recent years, there has been increasing interest in high-performance control of flexible manipulators using visual servoing (VS) [1].
There are four VS schemes based on the error definition: (i) position-based VS [2], (ii) image-based VS (IBVS) [3], (iii) hybrid VS [4], and (iv) motion-based VS [5]. IBVS is found to be the preferred scheme for the control of FLMs, as it is more efficient than the other schemes. Also, IBVS eliminates errors due to sensor modelling and is robust to camera calibration errors. In this work, the eye-in-hand configuration (camera mounted at the tip, observing only the target object) is considered, as it excludes the effect of kinematics on positioning accuracy.
However, referring to recent research, IBVS has two challenges: (i) the selection of visual features so as to avoid singularities in the interaction matrix and (ii) the design of a control scheme using the selected visual features such that the FLM tracks the reference trajectory with minimum tracking error. In IBVS, the design and selection of suitable visual features are difficult tasks. IBVS based on moments exploits global image features, eliminating the extraction, matching, and tracking processes in the image-processing step.
Recently, image moments have been widely utilised in VS. Moments were initially applied to visual pattern recognition in computer vision applications in [6]. In [7], image moments are used to select six visual features in an IBVS application to control a six-degree-of-freedom (6-DOF) system. An IBVS scheme using image moments is adopted to design six features from solid and discrete objects to decouple the DOFs of the system in [8]. From the application perspective, a shape moment-based VS architecture is proposed for a redundant manipulator in [9]. In [10], an image moment-based predictive IBVS control architecture is recommended to control a 6-DOF manipulator modelled as a virtual Cartesian motion device. In [11], an IBVS scheme is presented that uses a virtual spring approach with image moments for controlling the position and orientation of an unmanned aerial vehicle. In [12], an IBVS scheme based on image moments is presented for a 7-DOF robot manipulator. In [13], the image moment-based VS technique is used to find the features (centroid and major axis of the region) for control of a planar flexible robot manipulator. In view of the successful use of image moment-based visual servoing control schemes in different robotic applications, in this work an attempt has been made to extend the approach to design an image moment-based new IBVS controller for tip-tracking control of a two-link flexible manipulator (TLFM).
FLMs are under-actuated systems whose dynamics exhibit non-minimum phase characteristics; designing a control scheme for tip-tracking is therefore very challenging due to the unstable internal dynamics [14]. To deal with these issues and effectively control the internal states in different frequency ranges, modal transformation methods, i.e. output redefinition or singular perturbation (SP), can be used before controller design. In the first method, the reflected tip position, or a combination of tip rate and joint rate, is chosen as a redefined output to obtain minimum phase characteristics. In the second method, the overall dynamics of the FLM is decomposed on a two-time scale, i.e. slow and fast time scales. The speed of joint motion is relatively slow compared to the flexible modes. Therefore, the tip position related to joint motion is considered as a slow subsystem, while tip deflection related to the flexible modes acts as a fast subsystem. The slow subsystem, corresponding to the slow time scale, tracks the desired trajectory, and the fast subsystem, corresponding to the fast time scale, minimises the vibration of the links. The SP method is less complex compared to output redefinition, as it needs fewer measurement data (joint position, tip position, and joint velocity) and also excludes derivative signals of the flexible vibration modes. Following the two-time scale composite control technique [15][16][17], a slow subsystem controller is designed first, and then a fast feedback control is added to stabilise the fast subsystem along its equilibrium trajectory. Based on [15][16][17], a new vision-based tip-tracking control of the TLFM is proposed here.
The last decade has seen a substantial amount of research interest in VS-based control of FLMs. The advantages of image moment-based features over other local features also motivate their use in many vision-based robotic applications. In [16], the computed torque method (IBVS controller) is used for controlling the slow subsystem. Therefore, in this study, the moment-based IBVS approach is utilised to design a high-performance slow subsystem controller for tip position control. To handle the model uncertainties and disturbances associated with the TLFM, a linear quadratic regulator (LQR) controller has been used in [18] for the flexible dynamics. However, a state observer is needed to estimate the unmeasurable elastic/modal coordinates [19]. Therefore, a Kalman filter-based LQR controller is designed as the fast subsystem controller to damp out the deflection while handling the model uncertainty. It also provides robustness towards measurement noise and time delays.
The objective of this work is to design an image Jacobian (interaction) matrix with a minimal set of visual features using image moments for visual control of the TLFM, so that it can track the reference trajectory with minimum tracking error. Then, a suitable controller is designed such that, when applied to the manipulator with coupled rigid and flexible dynamics, the reference trajectory is tracked while simultaneously controlling the link vibrations.
The contributions of the study are as follows:
• Moment-based visual features have been designed to address the singularity and local minima issues of IBVS.
• A new two-time scale IBVS controller is developed for tip-tracking control of the TLFM, whose dynamics is decomposed into two-time scale models, namely slow and fast models. Subsequently, the moment-based IBVS controller is designed for the slow subsystem, and a Kalman filter-based LQR controller is designed for the fast subsystem, for tip-tracking control of the FLM.
The rest of the paper is organised as follows. In Section 2, the dynamics of the TLFM, VS, and the camera model are described. The selection of visual features for tip position control of the TLFM is presented in Section 3. The control problem is formulated in Section 4, and the two-time scale model decomposition is described in Section 5. The design of the new two-time scale IBVS controller is presented in Section 6. The stability and robustness of the proposed control scheme are investigated in Section 7. Results and discussion are presented in Section 8. The conclusion is given in Section 9.

TLFM dynamics
Owing to distributed link flexure, positioning and tracking of the tip of a TLFM are very difficult. The link flexure makes the dynamics of the TLFM a distributed parameter system governed by partial differential equations, i.e. an infinite number of flexible modes is needed for exact modelling. However, for the realisation of the controller, a finite-dimensional model is necessary [20]; hence the higher-order flexible modes must be truncated. Therefore, the dynamics of the TLFM is derived using the Euler-Lagrange formulation technique together with the assumed mode method (AMM) [21]. In this work, it is assumed that the motion of the TLFM is in the horizontal plane and that the links have uniform material properties and a constant cross-sectional area [22]. The schematic diagram of the TLFM with a tip-mounted camera is shown in Fig. 1, where $X_0OY_0$ is the fixed coordinate frame with the joint of link-1 located at its origin, $X_iO_iY_i$ is the rigid-body moving coordinate frame of the ith link, fixed at the joint of link i, and $\hat{X}_i\hat{O}_i\hat{Y}_i$ is the flexible-body moving coordinate frame, fixed at the end of link i. $\tau_i$ represents the applied torque of the ith link, $\theta_i$ the joint angle of the ith joint, and $y_i(l_i, t)$ the deflection along the ith link.
The complete system behaves as a non-minimum phase system when the tip position is taken as the output. The actual output vector $y_{pi}$ is considered as the output for the ith link. Hence, the redefined output can be written as

$$y_{pi} = \theta_i + \frac{y_i(l_i, t)}{l_i} \tag{1}$$

where $l_i$ is the length of the ith link. The dynamics of the flexible links are derived by treating them as Euler-Bernoulli beams, with the deformation $y_i(x_i, t)$ of the ith link satisfying the partial differential equation

$$(EI)_i \frac{\partial^4 y_i(x_i, t)}{\partial x_i^4} + \rho_i \frac{\partial^2 y_i(x_i, t)}{\partial t^2} = 0 \tag{2}$$

where $\rho_i$ and $(EI)_i$ represent the density and flexural rigidity of the ith link, respectively. The finite-dimensional expression for $y_i(l_i, t)$ can be presented using the AMM [21] as

$$y_i(x_i, t) = \sum_{j=1}^{n} \varphi_{ij}(x_i)\,\delta_{ij}(t) \tag{3}$$

where $\varphi_{ij}$ and $\delta_{ij}$ denote the jth mode shape and modal coordinate of the ith link, respectively, and n is the number of assumed modes. The dynamics of the TLFM is derived by using the energy principle and the Lagrangian formulation technique along with the AMM. With the total Lagrangian L defined as the difference between the total kinetic energy and the total potential energy, the Euler-Lagrange equations are

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = \tau_i \tag{4}$$

where $q_i$ is the ith generalised coordinate vector, i.e. $q_i = [\theta_i\ \dot{\theta}_i\ \delta_i\ \dot{\delta}_i]$. Substituting the total Lagrangian L in (4) and solving for the generalised coordinates $q_i$, the dynamics of the TLFM can be expressed as

$$M\ddot{q} + c_1(q, \dot{q}) + c_2(q, \dot{q}) + D\dot{q} + Kq = \tau \tag{5}$$

In (5), M is the positive-definite symmetric inertia matrix, $c_1$ and $c_2$ are the Coriolis and centrifugal force vectors, respectively, K is the stiffness matrix, and D is the damping matrix. The details of the matrices and vectors of (5) are given in [22]. The state-space formulation of (5) can be rewritten as

$$\dot{x}_i = Ax_i + Bu_i, \qquad y = Cx_i \tag{6}$$

where $x_i \in \Re^{2n}$, $y \in \Re^m$, and $u_i \in \Re^n$ are the TLFM state, tip position, and input, respectively, and A, B, and C are matrices of appropriate dimensions.

Camera modelling
Camera modelling is necessary to capture the geometric aspects of image formation. To control the motion of the TLFM, the camera is modelled as a perspective projection [23]. Fig. 2 shows the perspective camera model, where $(x_0, y_0)$ are the coordinates of the principal point and $(x_p, y_p)$ is the 2D projection on the image plane of a 3D point P with coordinates $(X_p, Y_p, Z_p)$:

$$x_p = \alpha\frac{X_p}{Z_p} + x_0, \qquad y_p = \alpha\frac{Y_p}{Z_p} + y_0 \tag{7}$$

where f is the focal length of the camera, $k_x = k_y = k$ is the pixel size, and $\alpha = f/k$ is the amplification factor. The image Jacobian matrix $L_\theta$ with reference to the 2D projection coordinates $(x_p, y_p)$ can be written as in (8), where the elements of $L_\theta$, given in (9), depend on $(x_p, y_p)$, $\alpha$, and the depth $Z_p$.
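As an illustration, the perspective projection above can be sketched in a few lines of Python; the focal length, pixel size, and principal-point values below are illustrative assumptions, not the camera parameters used later in the simulations.

```python
def project(X, Y, Z, f=0.008, k=0.0001, x0=320.0, y0=240.0):
    """Perspective projection of a 3D point (X, Y, Z), Z > 0, onto the
    image plane: x_p = alpha*X/Z + x0, y_p = alpha*Y/Z + y0."""
    alpha = f / k  # amplification factor alpha = f/k
    return alpha * X / Z + x0, alpha * Y / Z + y0
```

A point on the optical axis maps to the principal point, and image displacement shrinks as the depth Z grows, as expected from (7).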

Feature selection
In the IBVS scheme, the selection of visual features is a difficult task. Two types of features are used in a VS application, namely local and global features. In practical situations the object can be of any shape, matching each point accurately is difficult, and with an insufficient number of image feature points a singularity or local minimum may occur. Therefore, the use of image points (local features) is inadequate in IBVS. Hence, to improve the performance, robustness, and stability of the robotic control system, a global feature can be selected as the image feature instead of simple points. Global features also avoid the extraction, tracking, and matching steps. Global feature extraction methods based on optical flow and luminance [24] have been developed to avoid tracking and matching in IBVS, but these have a limited convergence domain due to their non-linear nature. Another efficient global feature for VS is the image moment. In moment-based approaches, independent features of the object are chosen such that the corresponding interaction (Jacobian) matrix is full rank and has a maximally decoupled structure. Recently, the performances of global (moment-based) and local (point-based) features in IBVS were compared, verifying that moment-based features perform better [25]. Therefore, the singularity and local minima problems of IBVS can be avoided by selecting proper moment-based features, which also simplifies the controller design.
However, as reported in recent research [26], in moment-based VS the sensitivity to data noise increases with the moment order. To deal with this issue, low-order shifted moments are proposed in [26] to reduce the effect of data noise on the control performance. The shifted moments have been used to select features that efficiently control the rotational DOFs. Nevertheless, the selection of visual features remains a key issue in solving the singularity problem of IBVS [27]. In this section, two combinations of image moment-based features are selected from previous theoretical results to control the 2-DOF of the TLFM.

Image moments
Assume $f(x, y) \geqslant 0$ is a real bounded function over a region R. Then the moment of $f(x, y)$ of order $i + j$ can be given as

$$m_{ij} = \iint_R x^i y^j f(x, y)\,dx\,dy \tag{10}$$

The central moment $\mu_{ij}$ is computed with respect to the object centroid $(x_g, y_g)$ and can be defined as

$$\mu_{ij} = \iint_R (x - x_g)^i (y - y_g)^j f(x, y)\,dx\,dy \tag{11}$$

where $x_g = m_{10}/m_{00}$ and $y_g = m_{01}/m_{00}$ are computed from the first-order moments $(m_{10}, m_{01})$.
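A discrete version of these moment definitions, for a binary image given as a list of rows, can be sketched as follows (pure Python, with the usual convention that pixel (x, y) contributes $x^i y^j f(x, y)$):

```python
def raw_moment(img, i, j):
    """Raw moment m_ij of a 2D image (list of rows of pixel values)."""
    return sum((x ** i) * (y ** j) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def centroid(img):
    """Object centroid (x_g, y_g) = (m10/m00, m01/m00)."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

def central_moment(img, i, j):
    """Central moment mu_ij, computed about the centroid."""
    xg, yg = centroid(img)
    return sum(((x - xg) ** i) * ((y - yg) ** j) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))
```

By construction the first-order central moments vanish, which is a quick sanity check of any implementation.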

Interaction matrix of image moment features
The interaction matrix, or image Jacobian matrix, describes the time variation of the moments with respect to the relative kinematic screw $\dot{X}_c$. The interaction matrix $L_{m_{ij}}$ related to $m_{ij}$, such that $\dot{m}_{ij} = L_{m_{ij}} v$, has been derived in [7]. It is obtained from the following equation:

$$\dot{m}_{ij} = \iint_R \left[\frac{\partial f}{\partial x}\dot{x} + \frac{\partial f}{\partial y}\dot{y} + f(x, y)\left(\frac{\partial \dot{x}}{\partial x} + \frac{\partial \dot{y}}{\partial y}\right)\right] dx\,dy \tag{12}$$

where $f(x, y) = x^i y^j$. A planar object is considered, which excludes the degenerate case where the camera optical centre lies in the object plane, so that

$$\frac{1}{d} = Xx + Yy + Z \tag{13}$$

where d is the depth of the point, and X, Y, and Z are the parameters of the plane.

Interaction matrix of shifted moment features
In [26], new visual features based on low-order shifted moment invariants have been introduced. VS based on shifted moments is utilised to reduce the effect of measurement noise on the control performance. To control the two rotational motions of the TLFM, two visual features are selected from the shifted moments. These features are selected from three polynomials computed from low-order shifted moments, which reduces the sensitivity to data noise. The shifted central moment with respect to a shifted point $(x_0, y_0)$ can be defined as

$$\mu_{ij}^s = \iint_R (x - x_g - x_0)^i (y - y_g - y_0)^j f(x, y)\,dx\,dy \tag{14}$$

From (14), it can be noted that if the shifted point is zero, the shifted moment is the same as the classical central moment defined in (11).
Consider $P_s = [x_0\ y_0]^T$ as the coordinates of the shifted point. The time variation of the moments with respect to $(x_0, y_0)$ involves $\dot{x}_0$ and $\dot{y}_0$, which are linked to the camera screw through the interaction matrix related to the shifted point. After a simple development, the interaction matrix $L_{\mu_{ij}^s}$ can be derived; the same is given in [26].

Visual features for tip position control of TLFM
Two visual features are needed to control the 2-DOF of the TLFM [7]. In the TLFM, the tip-mounted camera position is a function of the joint angle $\theta_i$ of the ith joint. Therefore, two shifted moment-based visual features must be selected to control the two rotational motions of the TLFM. Thus, to implement visual tip position control of the TLFM, an interaction matrix has to be developed for the selected shifted moment-based visual features that relates the visual features to the joint angular velocities.
Low-order shifted moment-based visual features are used to control the 2-DOF of the TLFM and to reduce the sensitivity to data noise. These are selected from three polynomials computed from shifted moments. The polynomials computed from the shifted moments of orders 2 and 3 are given in (16) and (17) [26]. For the shifted points, $P_1$ and $P_2$ are to be selected in the direction of the major principal axis of the object, as shown in Fig. 3. The two rotational motions are computed from (17) using $P_1$ and $P_2$ as shifted points, where $x_{s1} = a\cos\theta$, $y_{s1} = a\sin\theta$, $x_{s2} = a\cos(\theta + \pi/2)$, $y_{s2} = a\sin(\theta + \pi/2)$, and $a = (\mu_{20} + \mu_{02})^{1/4}$.
The interaction matrix corresponding to the shifted points is calculated from (18), where $\mu_{20}$ and $\mu_{02}$ are computed using (11). Using points selected from the object in this way preserves the invariance to translation, rotation, and scale obtained with functions of the central moments. The invariance properties of the selected features are validated in Section 8.1. Two shifted moment-based visual features are selected from the two invariants in (16) and (17), combining three kinds of moment invariants (invariant to translation, to 2D rotation, and to scale). The interaction matrix $L_{\theta}^s$ related to the two shifted moment-based visual features, used to control the 2-DOF of the TLFM, can be written as in (20). The analytical form of the interaction matrix related to any moment can be computed from a binary or a segmented image.
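The construction of the shifted points from the second-order central moments can be sketched as below. The principal-axis angle formula $\theta = \tfrac{1}{2}\,\mathrm{atan2}(2\mu_{11}, \mu_{20} - \mu_{02})$ is the standard one for image moments and is an assumption here, since the text does not state it explicitly.

```python
import math

def shifted_points(mu20, mu02, mu11):
    """Shifted points P1 (along the major principal axis) and P2
    (perpendicular to it), with a = (mu20 + mu02)**(1/4)."""
    theta = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)  # principal-axis angle
    a = (mu20 + mu02) ** 0.25
    p1 = (a * math.cos(theta), a * math.sin(theta))
    p2 = (a * math.cos(theta + math.pi / 2), a * math.sin(theta + math.pi / 2))
    return p1, p2
```

Because a is built from second-order central moments, the points scale with the apparent object size, which is what gives the resulting features their invariance properties.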

Problem formulation
A composite control scheme for the considered TLFM with a tip-mounted camera is presented to solve the tip-tracking problem. It uses the SP approach to decompose the manipulator dynamics into two time scales: a slow subsystem based on visual feedback and a fast subsystem based on strain measurements. The controller is designed for these two separate time-scale subsystems such that, when it is applied to the manipulator with coupled rigid and flexible dynamics, the reference trajectory is tracked with simultaneous control of link vibrations. However, the design of the observer to estimate the fast states is challenging due to the measurement noise present in the strain gauge signals. Also, the selection of noise-free visual features from the vision feedback is difficult for accurate tip position tracking of the TLFM.
An SP-based composite controller for this manipulator using reduced-order models is designed, in which a Kalman filter-based LQR controller is implemented for the fast subsystem and a moment-based IBVS controller is designed for the slow subsystem for tip-tracking control of the FLM.
The fast subsystem performs a real-time operation, as fast as required for stability and quality of control, whereas the slow subsystem carries out a non-real-time operation and handles the image acquisition. Fig. 4 shows the structure of the proposed new two-time scale control scheme, in which τ is the input of the TLFM, $y_1$ is the output of the strain gauge attached to the flexible link, and $y_2$ is the output of the vision system. $\bar{\tau}_s$ is the visual output of the slow subsystem controller and $\tau_f$ is the output of the fast subsystem controller; τ, the combined output of both controllers, is used as the control input of the TLFM.

Model decomposition by two-time scale perturbation technique
Owing to the distributed link flexure, the dynamics of the flexible manipulator become those of a distributed parameter system. The dynamics of the TLFM comprise rigid and flexible dynamics. A popular approach to decompose the complex dynamics into two time scales is the SP technique. In the SP method, the design of a feedback control system for the under-actuated system can be decomposed into two subsystems, i.e. a slow subsystem (for tip position measurement and control) and a fast subsystem (for compensating tip deflection/vibration). Using SP theory, the state variables of the TLFM dynamic model (5) can be partitioned as in (21) and (22), where $\xi = 1/k$ is the SP parameter with a common scale factor of the stiffness coefficients, the slow part of each variable is denoted by an overbar, and $\eta_1$ and $\eta_2$ are the fast parts of the variables $z_1$ and $z_2$, respectively.
The slow subsystem is defined in (23) and the fast subsystem in (24); in terms of $\eta_1$ and $\eta_2$, the fast subsystem can be written as in (25), where $T = t/\xi$ is the fast time scale, $H = M^{-1}$, and $\tau_s$ and $\tau_f$ are the slow and fast control signals, respectively. The slow and fast parts of the tip position variables and of the deflection variables evolve according to (23) and (25), respectively. So, as per composite control theory, the control input of the TLFM can be expressed as

$$\tau = \bar{\tau}_s + \tau_f \tag{26}$$

where $\bar{\tau}_s$ and $\tau_f$ are the slow and fast control inputs, respectively, with $\tau_f(x_1, 0, 0) = 0$, i.e. the fast control signal is not needed during trajectory tracking with the slow subsystem (23).
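The two-time scale idea can be illustrated with a toy first-order example (deliberately not the TLFM dynamics): a slow state with unit time constant and a fast state whose time constant is the perturbation parameter ξ. All numerical values below are illustrative assumptions.

```python
def simulate(xi=0.01, dt=1e-4, t_end=1.0):
    """Euler-integrate a toy singularly perturbed pair:
    slow:  dx/dt    = -(x - x_ref)  (unit time constant)
    fast:  d(eta)/dt = -eta / xi    (time constant xi << 1)."""
    x, eta, x_ref = 0.0, 1.0, 1.0
    steps = int(t_end / dt)
    for _ in range(steps):
        x += dt * (-(x - x_ref))   # slow subsystem tracks the setpoint
        eta += dt * (-eta / xi)    # fast subsystem decays on scale xi
        t_unused = None
    return x, eta
```

After one unit of slow time the fast state has long since died out, which is exactly why the composite law can treat the fast dynamics as a boundary-layer correction around the slow trajectory.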

Design of new two-time scale IBVS controller
The tip-tracking problem of the TLFM can be divided into two subproblems: (i) tracking of the tip motion and (ii) suppression of the oscillations of the flexible beams. To deal with these issues, a new two-time scale IBVS controller is proposed. The proposed controller is a composite controller composed of a fast subsystem controller, based on strain measurements, that damps the elastic vibration, and a slow subsystem controller, based on vision measurements, that achieves tracking of the reference tip trajectory. The slow and fast subsystem controllers are designed next.

Design of slow subsystem controller
The slow controller design is based on the rigid model (23) of the TLFM. In this section, the shifted moment-based IBVS controller is designed for tip position control of the slow subsystem of a TLFM.
Here, the shifted moment-based visual features are measured from a binary or segmented image of the object in the static environment, projected onto the image plane. The mathematical background of moment-based features and the selection of shifted moment-based features to control the 2-DOF of the TLFM were presented in Section 3.
The feature error ε(t) can be defined as

$$\varepsilon(t) = s(t) - s^* \tag{27}$$

where s is the vector of visual features and $s^*$ is the desired value of the visual features. The goal of the shifted moment-based IBVS controller is to ensure that the actual visual features asymptotically reach the desired visual features, i.e.

$$\lim_{t \to \infty} \varepsilon(t) = 0 \tag{28}$$

In general, the feature velocity $\dot{\varepsilon}(t)$ can be related to the tip/camera velocity $\dot{X}_c$ as

$$\dot{\varepsilon}(t) = L_s \dot{X}_c \tag{29}$$

In the slow subsystem of the TLFM, it is assumed that the velocity of the tip depends only on the rigid motion of the links, i.e.

$$\dot{X}_c = L_\theta \dot{\bar{x}}_1 \tag{30}$$

From (29) and (30), we can write

$$\dot{\varepsilon}(t) = L_s L_\theta \dot{\bar{x}}_1 \tag{31}$$

where $L_\theta$ is the image Jacobian matrix derived in (8) and $L_s = L_{\mu_{ij}^s}$ is the interaction matrix related to the shifted moments (20) of the tip with respect to the position variables. Given the non-linear TLFM system, the objective is to determine a hub velocity that drives the system to the desired image feature position.
It is necessary to design an IBVS-based control scheme for the closed-loop system (23) such that the output trajectory tracks the reference output trajectory as closely as possible.
The slow control input is designed as given in [16]:

$$\bar{\tau}_s(\bar{x}_1, \bar{x}_2) = \bar{c}_1 + \bar{M}v \tag{32}$$

If the non-linear term $\bar{c}_1$ is compensated, then (23) reduces to the double integrator system $\ddot{\bar{x}}_1 = v$. Considering the feature error defined in (27) and the feature velocity $\dot{\varepsilon}(t)$ defined in (29), and assuming that the desired visual feature vector remains constant, the time derivative of (29) can be written as

$$\ddot{\varepsilon}(t) = L_s L_\theta \ddot{\bar{x}}_1 \tag{33}$$

Then

$$v = -(L_s L_\theta)^{-1}(K_P \varepsilon + K_D \dot{\varepsilon}) \tag{34}$$

is applied. In terms of the visual feature error, the integrator system gives the following equation:

$$\ddot{\varepsilon} + K_D \dot{\varepsilon} + K_P \varepsilon = 0 \tag{35}$$

where $K_P$ and $K_D$ are controller gains, selected as positive-definite matrices, that govern the convergence of the image feature velocity.
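For the 2-DOF case, the control law above reduces to inverting a 2×2 matrix. A minimal sketch follows; the interaction matrix L (standing in for the product $L_s L_\theta$) and the scalar gains kp, kd are illustrative assumptions.

```python
def ibvs_command(L, e, de, kp=1.0, kd=2.0):
    """Compute v = -inv(L) @ (kp*e + kd*de) for a 2x2 interaction
    matrix L, feature error e, and feature error rate de."""
    (a, b), (c, d) = L
    det = a * d - b * c
    assert abs(det) > 1e-9, "interaction matrix close to singular"
    rhs = [-(kp * e[0] + kd * de[0]), -(kp * e[1] + kd * de[1])]
    # solve L v = rhs with the explicit 2x2 inverse
    v1 = (d * rhs[0] - b * rhs[1]) / det
    v2 = (-c * rhs[0] + a * rhs[1]) / det
    return v1, v2
```

The singularity guard makes explicit why the moment-based feature selection matters: the law is only well-posed while the interaction matrix stays well-conditioned.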

Design of the fast subsystem controller
Here, the LQR controller is utilised to control the fast subsystem of the TLFM. The control problem is to determine the fast control input $\tau_f$ such that the tip deflection converges to zero as fast as possible. In a fast controller, a state observer is generally needed to estimate the unmeasurable elastic/modal coordinates. Measurement noise plays an important role in the design of the state observer; in fact, strain gauges are inherently affected by very high noise due to electromagnetic interference. The scenario just depicted raises a problem of delayed signal estimation, where a Kalman filter, rather than a deterministic observer, can be effectively used. A Kalman filter based on a fast model that includes the first three modes, together with a fast feedback that damps the first mode only, is the best choice with respect to closed-loop system stability and robustness towards time delays [16]. The state-space representation of the TLFM dynamics given in (6) includes both the rigid and flexible dynamics. For the fast subsystem state-space model (36), the feedback gain $K = [K_1, K_2]$ of the control law $\tau_f = -Kx(t)$ is determined by minimising the performance index

$$J = \int_0^\infty \left(x^T Q x + \tau_f^T R \tau_f\right) dt \tag{37}$$

where Q and R are positive-definite symmetric matrices. Following the standard completing-the-square development (38)-(40), with $R = T^T T$ for a non-singular matrix T, minimising the cost function (37) yields

$$K = R^{-1} B^T P \tag{41}$$

The fast subsystem control input is therefore given by

$$\tau_f = -R^{-1} B^T P x \tag{42}$$

where P satisfies the following matrix Riccati equation:

$$A^T P + P A - P B R^{-1} B^T P + Q = 0 \tag{43}$$

From (32) and (42), the new two-time scale IBVS control law is obtained as

$$\tau = \bar{\tau}_s + \tau_f \tag{44}$$
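As a sanity check of the LQR machinery, the scalar case admits a closed-form solution of the Riccati equation. The system and weights below are illustrative assumptions, not the TLFM fast-model values.

```python
import math

def lqr_scalar(a, b, q, r):
    """For xdot = a*x + b*u and J = integral of (q*x**2 + r*u**2) dt,
    solve the scalar Riccati equation 2*a*p - (b**2/r)*p**2 + q = 0
    for the positive root p, then return gain k = b*p/r (u = -k*x)."""
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r, p
```

The returned gain always makes the closed loop a - b*k stable, mirroring the guaranteed stability of the matrix LQR solution used for the fast subsystem.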

Stability and robustness analysis
The study of the robustness of the closed-loop system is very important, as robustness affects both performance and stability. Furthermore, due to model uncertainty and its non-collocated behaviour, the FLM behaves as a non-minimum phase system, which further motivates studying robustness. Here, the stability and robustness of the proposed method are analysed theoretically for disturbances and un-modelled dynamics of the TLFM. Equation (22) can be written as in (45), where ΔF and ΔW represent the disturbance and un-modelled dynamics terms, and $i \leqslant 2$. Assumption 1: The terms F, W, ΔF, and ΔW of (45) have the following properties.
iii. F(·) and W(·) are Lipschitz continuous with respect to all $x_i(t)$ and $z_i(t)$, and (45) is controllable.

Equation (45) can be rewritten as (46), in which $w_3(x_i, z_i)$ is also considered an uncertain parameter. Therefore, with respect to the uncertainty, a quasi-manifold is defined in (47), a quasi-boundary layer model in (48), and a quasi-reduced model in (49); equations (47)-(49) are the counterparts of the manifold, boundary layer model, and reduced model in (50).
Now, the difference between the quasi-manifold (50) and the real manifold (45) is determined. Initially, the bounds $k_1$, $k_2$, and $k_3$ are defined as in (51), where $B_x \in \Re^{2n}$ and $B_z \in \Re^m$ are compact sets. The norms of continuous functions on these sets are bounded, so $k_1$, $k_2$, and $k_3$ exist. Also, it is assumed that W, ΔW, and $w_2^{-1}$ are continuous functions such that the condition (52), involving $k_1(k_2 + k_3)$, holds.
Lemma 1: If the zero-state equilibrium of $dy/dt = w_2(x_i)y$ (quasi-boundary layer model) is uniformly exponentially stable in $x_i \in B_x$, then there is a Lyapunov function $V(x, y) = y^T P(x) y$ that satisfies

$$\lambda_{\min}(P(x))\|y\|^2 \leq V(x, y) \leq \lambda_{\max}(P(x))\|y\|^2, \qquad \dot{V} \leq -b\|y\|^2$$

where b is a positive constant, P(x) is the solution of $P(x)w_2(x) + w_2^T(x)P(x) = -I$, and $\lambda_{\min}(P(x))$ and $\lambda_{\max}(P(x))$ are the minimum and maximum eigenvalues of P(x), respectively [28].
Theorem 1: Let the zero-state equilibrium of $dy/dt = w_2(x_i)y$ (quasi-boundary layer model) be uniformly exponentially stable in $x_i \in B_x$, and let Assumption 1 hold for all $(x_i, z_i) \in B_x \times B_z$. Then the zero-state equilibrium of system (45) is uniformly exponentially stable if (52) is satisfied. Proof: See Theorem 1 of [28].
Theorem 2: Consider system (45) under Assumption 1. Let the zero-state equilibrium of (48) be uniformly exponentially stable in $x_i(t)$ and the zero-state equilibrium of (49) be exponentially stable. If (52) is satisfied, then there exists $\xi^* > 0$ such that for all $\xi < \xi^*$, the zero-state equilibrium of (45) is exponentially stable.
Proof: If Assumption 1 is satisfied, then the manifold exists. Furthermore, if (52) is satisfied, then the zero-state equilibrium of the boundary layer model is uniformly exponentially stable in $x_i(t)$ and the zero-state equilibrium of the reduced model is exponentially stable. Thus, the zero-state equilibrium of the overall system is exponentially stable for a small value of ξ. □
The robustness of the proposed shifted moment-based new two-time scale IBVS controller is also numerically investigated in the presence of modelling error and field-of-view (FOV) constraints, model uncertainty, and image noise in Section 8.4.

Results and discussion
In this section, the tip-tracking performance of the TLFM is analysed by simulation studies. Initially, the theoretical results on feature selection are validated. Then, the performance and robustness of the proposed slow subsystem controller are evaluated. Finally, the tip-tracking performance (the performance of the slow and fast subsystem controllers together) is analysed.
The physical parameters of the TLFM considered for the simulation studies are given in [22]. In the simulation, the focal length of the camera and the scale factor are taken as 0.008 m and 0.2 pixel/m, respectively. The proportional derivative (PD) and LQR controller parameters used in the simulation are given in Table 1. The mean absolute error (MAE) and root mean square error (RMSE) are used as quantitative measures for comparing the tip-tracking performance of the proposed scheme [22]. It is assumed that the target always remains inside the camera FOV.
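The MAE and RMSE metrics used for the quantitative comparison can be computed as follows, for sampled reference and actual trajectories of equal length:

```python
import math

def mae(ref, act):
    """Mean absolute error between reference and actual trajectories."""
    return sum(abs(r - a) for r, a in zip(ref, act)) / len(ref)

def rmse(ref, act):
    """Root mean square error between reference and actual trajectories."""
    return math.sqrt(sum((r - a) ** 2 for r, a in zip(ref, act)) / len(ref))
```

RMSE weights large excursions more heavily than MAE, so reporting both (as in Tables 9 and 10) separates average tracking quality from peak-error behaviour.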

Feature validation
The theoretical results on the selection of the shifted points to control the two rotational motions of the TLFM using low-order shifted moments are validated here. It is assumed that the object plane is parallel to the image plane, i.e. X = Y = 0. Simulation results for two different object shapes, symmetrical and non-symmetrical, are presented. Initially, a symmetrical object (rectangle) is considered, shown in Fig. 5. The second object is a non-symmetrical object (whale), shown in Fig. 6. From Tables 2-7, it is observed that for both the symmetrical and non-symmetrical objects, the results obtained after applying translation and rotation, respectively, are identical to the original ones. This validates the invariance of the selected features $I_{s1}$, $I_{s2}$, $I_{s3}$, $r_{s1}$, $r_{s2}$, and $r_{s3}$ to translation and rotation. The third row of each table shows that a scale change in the image changes $I_{s1}$, $I_{s2}$, and $I_{s3}$ only. Hence, the invariance properties of the selected features proposed in Section 3.4 for control of the rotational motion of the TLFM are validated by the results.

Slow controller performance
In this section, the performance of the slow controller (moment-based IBVS controller) is evaluated. The invariance property of the selected shifted moment-based features for controlling the rotational motion of the TLFM was validated in Section 8.1. The performance of the proposed controller is evaluated for two different object shapes (symmetrical and non-symmetrical). Here, tip positioning with the symmetrical object (rectangle) and the non-symmetrical object (whale) are termed task-1 and task-2, respectively. The condition number is used as a performance index that represents the well-conditionedness of the matrices used for approximating the interaction matrix. It gives a global measure of the visibility of motion. It also measures the stability of the control scheme, i.e. it should be as low as possible to improve the robustness and numerical stability of the system. Fig. 7 shows the initial and desired positions of task-1. The interaction matrix given in (20) is calculated for the desired position of task-1 with the invariants ($r_{s4}$ and $r_{s6}$) obtained from (16) and (17). The condition number is 3.78, which is satisfactory. The initial and desired values of the selected image features of task-1 are listed in Table 8. Fig. 8 shows the image feature errors of task-1, which are also given in Table 8. It is observed from Fig. 8 that the feature error reaches zero in 45 s. Fig. 9a shows the initial position and Fig. 9b the desired position of task-2. The interaction matrix (20) is calculated for the desired position of task-2 with the invariants ($r_{s5}$ and $r_{s6}$) obtained from (16) and (17). The condition number is 2.38, which is satisfactory. The initial and desired values of the selected image features of task-2 are given in Table 8. Fig. 10 shows the image feature errors of task-2. It is observed from Fig. 10 that the feature errors converge to zero in 53 s.
It is observed from the obtained results that the selected image features converge to zero, i.e. VS using the proposed moment-based IBVS controller (slow subsystem controller) is successfully achieved. The tip positioning performance of link-1 and link-2 is shown in Figs. 11 and 12, respectively. It is observed that the tip position estimated from the encoder exhibits more overshoot with respect to the reference position, whereas the tip position obtained with the vision sensor exhibits less overshoot. Figs. 13 and 14 show the tip positioning error profiles for link-1 and link-2, respectively. From these error profiles, it can be observed that the error trajectory obtained with encoder feedback yields the maximum overshoot compared with the controller using camera feedback.
Quantitative metrics (MAE and RMSE) are calculated from the hub angle response curves obtained in the simulation studies of the proposed controller and are listed in Table 9. Table 9 compares the tip positioning performance of the proposed controller with encoder feedback against the same controller with camera feedback. It is observed from Table 9 that the MAE and RMSE for the controller with camera feedback are lower than those obtained with encoder feedback.
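The two metrics in Table 9 follow their standard definitions. A short sketch with illustrative hub-angle responses (the signal shapes below are assumptions for illustration, not the simulated TLFM data):

```python
import numpy as np

def mae(ref, actual):
    # mean absolute error between reference and actual response
    return np.mean(np.abs(ref - actual))

def rmse(ref, actual):
    # root-mean-square error between reference and actual response
    return np.sqrt(np.mean((ref - actual) ** 2))

# illustrative hub-angle samples (rad): a constant reference and two
# hypothetical responses, the camera-feedback one with a smaller overshoot
t = np.linspace(0.0, 10.0, 200)
theta_ref = 0.5 * np.ones_like(t)
theta_cam = 0.5 + 0.01 * np.exp(-t) * np.sin(5.0 * t)
theta_enc = 0.5 + 0.04 * np.exp(-0.5 * t) * np.sin(5.0 * t)

# the smaller, faster-damped error yields lower MAE and RMSE
assert mae(theta_ref, theta_cam) < mae(theta_ref, theta_enc)
assert rmse(theta_ref, theta_cam) < rmse(theta_ref, theta_enc)
```

Both metrics penalise deviation from the reference over the whole run; RMSE weights large transient errors (overshoot) more heavily than MAE.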

Tip deflection performance:
The tip deflection performance of link-1 and link-2 with the Kalman filter based on the fast model (Kalman filter-based LQR controller) and with the LQR controller alone is shown in Figs. 15 and 16, respectively. It can be seen from the tip deflection performance that the Kalman filter-based LQR controller damps the vibration/deflection. It also provides optimal performance with respect to closed-loop system stability and robustness towards measurement noise and time delays.
Figs. 17 and 18 show the tip deflection error profiles for link-1 and link-2, respectively. From Fig. 17, it can be seen that the tip acceleration error of link-1 is 47.7 m/s² for the LQR controller and reduces to a minimum of 36.8 m/s² for the LQR with Kalman filter. The tip acceleration error of link-2 is 20.68 m/s² for the LQR controller and reduces to a minimum of 13.37 m/s² for the Kalman filter-based LQR controller. The tip deflection performance comparison between the LQR controller and the Kalman filter-based LQR controller is presented in Table 10. It can be seen that the MAE and RMSE for the Kalman filter-based LQR controller are lower than those obtained with the LQR controller, i.e. link vibration is reduced with the Kalman filter-based LQR controller compared with the LQR controller alone.
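The fast-subsystem design can be sketched with standard discrete-time LQR and steady-state Kalman machinery. The model matrices below are illustrative stand-ins for a single lightly damped flexible mode, not the identified TLFM dynamics:

```python
import numpy as np

# illustrative discretised fast-subsystem model (one flexible mode); A, B, C
# and the weights are stand-ins, not the identified TLFM matrices
dt = 0.01
A = np.array([[1.0, dt], [-4.0 * dt, 1.0 - 0.02 * dt]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.1]])          # LQR weights
W, V = 1e-4 * np.eye(2), np.array([[1e-2]])  # process / measurement noise cov.

def dare_gain(A, B, Q, R, iters=20000, tol=1e-12):
    # value iteration on the discrete algebraic Riccati equation
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < tol:
            break
        P = P_next
    return K

K = dare_gain(A, B, Q, R)        # LQR state feedback: u = -K @ x_hat
Kd = dare_gain(A.T, C.T, W, V)   # the dual Riccati problem yields the
L = Kd.T                         # steady-state Kalman (estimator) gain

# both the regulator and the estimator error dynamics must be stable
assert max(abs(np.linalg.eigvals(A - B @ K))) < 1.0
assert max(abs(np.linalg.eigvals(A - L @ C))) < 1.0
```

The separation structure is the relevant point: the LQR gain damps the flexible mode while the Kalman gain filters the noisy deflection measurement, which is why the combined controller tolerates measurement noise better than state feedback on raw measurements.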

Modelling error and FOV constraint:
Here, the effect of camera modelling error and the FOV constraint is investigated to test the robustness of the proposed shifted moment-based IBVS controller. Errors in the camera parameters, i.e. 10 pixels on the coordinates of the principal point and 20% in the focal length, are considered as the modelling error. Furthermore, it is assumed that the object is partially occluded, lying partly outside the camera FOV. Figs. 19 and 20 show the results with modelling errors and the FOV constraint for task-1 and task-2, respectively. The desired position of task-1 in this case is identical to that in Fig. 7b. The image feature errors of task-1 are shown in Fig. 19b; they converge to zero in 51 s. The initial and desired values of the selected image features of task-1 and task-2 with modelling error and FOV constraint are given in Table 11. Similarly, the desired position of task-2 is identical to that in Fig. 9b. The image feature errors of task-2 are shown in Fig. 20b; they converge to zero in 44 s.
Model uncertainty is introduced in the pose of the camera with respect to the tip frame, where α_c is the orientation of the camera optical axis with respect to the z-axis of the tip frame. In this case, the initial and desired positions of task-1 and task-2 are identical to those in Figs. 7 and 9. The image feature errors in the presence of model uncertainty for task-1 and task-2 are shown in Fig. 21. The initial and desired values of the image features of task-1 and task-2 with model uncertainty are given in Table 12. It is observed from Fig. 21 that, despite the model uncertainty, the feature errors of task-1 and task-2 converge to zero in 55 and 47 s, respectively.
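The qualitative robustness to intrinsic-parameter errors can be illustrated with a minimal one-feature servo loop: the velocity command is built from an interaction-matrix estimate using a 20% erroneous focal length, yet the feature error still converges because the estimate keeps the correct sign. This is a generic scalar sketch, not the TLFM model:

```python
# true vs. assumed focal length (m): a 20% modelling error, as in the test above
f_true, f_hat = 8e-3, 1.2 * 8e-3
L_true, L_hat = -f_true, -f_hat    # scalar interaction "matrices" (illustrative)

lam, dt = 0.8, 0.05                # control gain and integration step
e = 0.3                            # initial feature error
for _ in range(500):
    v = -lam * e / L_hat           # velocity command from the erroneous model
    e += L_true * v * dt           # true feature dynamics

# convergence survives the modelling error because L_hat keeps the right sign
assert abs(e) < 1e-6
```

The effect of the error is only a changed convergence rate (the effective gain becomes lam * L_true / L_hat), which matches the slower and faster settling times observed for the two tasks.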

Image noise:
Moment-based features are considered reliable because the values of moment invariants are insensitive to image noise. Therefore, in this section, the robustness of the proposed shifted moment-based new two-time scale IBVS controller is investigated under image noise uncertainty. White Gaussian noise is introduced into the images of the initial and desired positions of task-1 and task-2. The initial and desired positions of task-1 and task-2 are the same as in Figs. 7 and 9, respectively.
The interaction matrix given in (20) is calculated for the desired positions of task-1 and task-2 with the invariants (r_s3 and r_s4, and r_s3 and r_s5, respectively). The condition numbers for task-1 and task-2 are 7.09 and 2.82, respectively, which are satisfactory. The initial and desired values of the selected image features of task-1 and task-2 with image noise are listed in Table 13. The image feature errors in the presence of image noise for task-1 and task-2 are shown in Fig. 22. It is observed from Fig. 22 that, despite the image noise, the feature errors of task-1 and task-2 converge to zero in 50 and 54 s, respectively. Thus, the robustness of the proposed controller against the image noise uncertainty is verified.
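The noise insensitivity exploited here can be illustrated numerically with ordinary image moments. This is a generic sketch, thresholded binary segmentation followed by second-order central moments, not the paper's exact feature set:

```python
import numpy as np

rng = np.random.default_rng(0)

# binary image of a rectangular object (stand-in for the task-1 target)
img = np.zeros((120, 160))
img[40:80, 50:110] = 1.0

# add white Gaussian noise, then re-segment by thresholding, as a practical
# VS pipeline would before computing moments
noisy = img + rng.normal(0.0, 0.1, img.shape)
seg = (noisy > 0.5).astype(float)

def second_moment_feature(I):
    # rotation-invariant feature mu_20 + mu_02 of the (binary) image
    y, x = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    m00 = I.sum()
    xb, yb = (x * I).sum() / m00, (y * I).sum() / m00
    mu20 = ((x - xb) ** 2 * I).sum() / m00
    mu02 = ((y - yb) ** 2 * I).sum() / m00
    return mu20 + mu02

clean, corrupted = second_moment_feature(img), second_moment_feature(seg)
rel_change = abs(corrupted - clean) / clean
assert rel_change < 0.05  # the moment feature barely moves despite the noise
```

Because moments integrate over the whole segmented region, zero-mean pixel noise that survives segmentation largely averages out, which is the property the robustness test relies on.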
Remark 1: It is observed from the obtained results that, despite the modelling error and FOV constraint, model uncertainty, and image noise uncertainty, the selected image features converge to zero, i.e. the performance of the proposed shifted moment-based new two-time scale IBVS controller is similar to that of the nominal case (Section 8.2). This validates the robustness of the proposed controller with respect to modelling errors, the FOV constraint, model uncertainty, and image noise.

Performance comparison:
A performance comparison of the shifted moment-based new two-time scale IBVS controller with other moment-based IBVS controllers is presented in Table 14. It is observed from Table 14 that the IBVS controller with shifted moment-based features yields better performance.

Conclusions
In this study, the shifted moment-based visual feature is exploited to deal with singularity in the interaction matrix and local minima in trajectories, two issues of the IBVS approach. Also, an image Jacobian matrix is designed with a minimal set of shifted moment-based visual features that can track the reference trajectory. The complete dynamics of the TLFM is separated into fast and slow subsystems, describing the flexible and rigid dynamics, using a two-time scale SP approach. A new two-time scale IBVS controller based on the shifted moment is developed for tracking the reference trajectory and suppressing tip vibration. A Kalman filter-based LQR controller is designed for the fast subsystem, and the moment-based IBVS controller is employed for the slow subsystem for tip position tracking control of the TLFM. It is observed from the results that the proposed controller effectively stabilises the oscillatory dynamics, tracks the reference trajectory with the smallest settling time, and achieves better tip-tracking performance. Also, the robustness of the proposed controller is verified in the presence of modelling error and FOV constraint, model uncertainty, and image noise uncertainty. Future studies will focus on the implementation and adaptation of the proposed control scheme on a real-time flexible manipulator.