Lateral control for autonomous wheeled vehicles: A technical review

Autonomous driving has the potential to reshape mobility and transportation by reducing road accidents, traffic congestion, and air pollution. It can also improve energy efficiency, convenience, and productivity, since significant driving time will be freed for other activities. Autonomous vehicles are complex systems consisting of several modules that perform perception, decision-making, planning, and control. Control is essential for achieving automatic driving; it is commonly divided into longitudinal control, which handles speed tracking, and lateral control, which ensures accurate steering. The latter is central to path tracking applications, and recent research has witnessed a huge leap in this field. The aim of this paper is to provide a technical survey of the latest research on the lateral control of autonomous vehicles, as well as to highlight technical challenges and limits for further developments.


INTRODUCTION
Self-driving technology has been a major trend for the last decades; it is one of the technologies that will completely change people's lifestyle. The shift towards autonomous driving is encouraged by the recent developments in artificial intelligence, big data, and information processing techniques. Autonomous driving will considerably reduce road accidents and traffic congestion, mitigate air pollution, and optimize energy consumption. Consequently, researchers are working towards full autonomy to reduce human error and mitigate the risks and dangers of manual driving, which accounts for almost 96% of all car accidents in the United States [1]. Research in self-driving vehicles has been inspired by the DARPA challenge [2] organized by the United States, a leader in the field. Waymo [3] has achieved great advancements in the field, with over 10 million miles of road testing and 7 billion miles of virtual testing.
Automatic driving systems comprise many modules that work in a coordinated way. The perception module consists of multiple sensors such as cameras, GPS, LIDAR, RADAR, and IMU. A variety of algorithms are used by the perception module to sense the environment and obtain relevant information. The planning and decision-making module uses sensor data to make decisions and plan the speed and trajectory profiles. These decisions are executed by the control module. Control is one of the most important tasks in achieving autonomous driving. Generally, the control module is divided into longitudinal and lateral control; longitudinal control handles speed tracking, and lateral control ensures accurate steering. This is achieved by commanding the different actuators such as the accelerator, the brakes, and the steering wheel. Self-driving cars are classified into six levels of automation; level zero, with no automation, represents conventional cars. In level one, simple automatic driving systems like adaptive cruise control and electronic stability control are deployed in the vehicle. Advanced systems like combined speed and steering control or emergency braking are introduced in level two. In level three, the vehicle is capable of sensing the environment through multiple sensors, and it is able to drive autonomously while requiring driver supervision and intervention in infeasible situations. The vehicle is autonomous in level four, with only occasional driver intervention for certain driving modes. Level five vehicles are completely autonomous; they can handle all driving modes without any driver supervision [4].
Several studies have been conducted to develop models that best describe the behavior of vehicles. The vehicle model is essential for developing controllers and simulating vehicle systems in general. Simulations are useful for tuning controllers and observing their performance. Generally, realistic vehicle models are highly nonlinear and complex, but linearized variants are usually used for controller design. The state space representation of the model is the most commonly used form for developing controllers since it is much more convenient; it is derived from the model's mathematical equations after some manipulation.
Existing vehicle modeling techniques in the literature range from simple methods, like the geometric model used in pure pursuit [5] and Stanley [6] controllers, to complicated multibody models as in Chebly et al. [7, 8]. The geometric models are well suited for path tracking; they are simple and require few parameters since they are mainly based on the geometry of the vehicle (position/dimensions). Although they are computationally cheap, they lack performance since they do not consider the motion of the vehicle and the forces applied on it. Complicated multibody models treat the vehicle as a multi-articulated system consisting of multiple bodies; this robotics formalism considers the vehicle chassis as the movable base with the wheels being the terminals. When it comes to controller design, two models are distinguished: the kinematic model and the dynamic model. These are the most widespread models in the literature, since they accurately depict vehicle behavior while still being relatively simple.
This article presents a technical review of lateral control methods; it provides an overview of the most widely used steering controllers and illustrates their design methods. Model-free controllers are also reviewed; the advantages and disadvantages of the lateral controllers are detailed in a summarized comparison. Section 2 of this article presents the most widely used models for controller design. The different model-based control methods and their design approach are discussed in Section 3; Section 3 also presents model-free control techniques. Finally, Section 4 concludes the article.

Kinematic bicycle model
The kinematic model takes into consideration the kinematics of the vehicle, hence the name. The model describes the motion of the vehicle in terms of position, velocity, and acceleration and disregards the forces acting on the vehicle. Although the kinematic model is more sophisticated than simple geometric models, it is restricted and describes the vehicle motion only under certain assumptions. The model works well when there are no slip angles, that is to say, when the velocity vector and the orientation of the wheels are aligned. Such an assumption holds true for low speeds, typically below 5 m/s [9].
The simplified bicycle kinematic model is well-developed and very common in the literature [10-12], yet it is mostly limited to low-speed applications and cannot be relied on for highly dynamic maneuvers. Rajamani [9] provided a full description of the model in its simplest form; it has been used in the design of numerous controllers in earlier studies [13-15]. The simplicity of the bicycle model lies in reducing the two front and the two rear wheels to a single front wheel and a single rear wheel, which resembles a bicycle. As described in Rajamani [9], the bicycle kinematic model can be summarized by the following equations expressed in the inertial frame:

$$\dot{X} = v\cos(\psi + \beta) \quad (1)$$
$$\dot{Y} = v\sin(\psi + \beta) \quad (2)$$
$$\dot{\psi} = \frac{v\cos\beta}{l_f + l_r}\left(\tan\delta_f - \tan\delta_r\right) \quad (3)$$
$$\dot{v} = a \quad (4)$$
$$\beta = \tan^{-1}\left(\frac{l_f\tan\delta_r + l_r\tan\delta_f}{l_f + l_r}\right) \quad (5)$$

As can be seen in Figure 1, X and Y represent the coordinates of the vehicle's center of gravity (CG) in the inertial frame (OXY). The heading is described by ψ, while v represents the velocity of the vehicle. The front and rear wheel axles are distant from the CG by l_f and l_r, respectively. β denotes the side-slip angle (the angle between the velocity vector and the vehicle longitudinal axis), and a is the vehicle longitudinal acceleration. The inputs to the model are the steering angles δ_f and δ_r of the front and rear wheels, but most vehicles are only steerable through the front wheels, so in most cases δ_r is null. The kinematic model requires the identification of only two parameters, the distances from the wheel axles to the CG (l_f and l_r), hence applying the same controller to other vehicles with different wheelbases is straightforward.
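As an illustration, the kinematic equations above can be integrated numerically with a simple Euler scheme. The sketch below assumes illustrative wheelbase values and time step (not taken from the paper):

```python
import math

def kinematic_bicycle_step(x, y, psi, v, a, delta_f,
                           lf=1.2, lr=1.4, dt=0.01, delta_r=0.0):
    """One Euler-integration step of the kinematic bicycle model.

    States: global position (x, y), heading psi, speed v.
    Inputs: longitudinal acceleration a, front/rear steering angles
    delta_f, delta_r. The wheelbase split lf/lr are the only vehicle
    parameters to identify (values here are illustrative).
    """
    # Side-slip angle: angle between velocity vector and longitudinal axis
    beta = math.atan((lf * math.tan(delta_r) + lr * math.tan(delta_f)) / (lf + lr))
    x += v * math.cos(psi + beta) * dt                                   # Eq. (1)
    y += v * math.sin(psi + beta) * dt                                   # Eq. (2)
    psi += (v * math.cos(beta) / (lf + lr)) \
           * (math.tan(delta_f) - math.tan(delta_r)) * dt               # Eq. (3)
    v += a * dt                                                          # Eq. (4)
    return x, y, psi, v
```

With zero steering and zero acceleration the vehicle travels in a straight line, which makes the model easy to sanity-check before using it in a controller.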

Dynamic bicycle model
A more accurate representation of the vehicle requires modeling its dynamics, which captures behaviors like side-slipping, oversteering, and friction. The dynamic model is more accurate than the kinematic model in the sense that it includes the forces applied on the vehicle, especially the tire forces. Newton's second law of motion or the Euler-Lagrange method is applied to the vehicle system to obtain the dynamic model. The complete dynamic model is very complex and nonlinear, accounting for translational and rotational motion in 3D space and considering a full vehicle with four wheels [16]. Such models are used mainly for validation purposes and are too complicated for controller design (see Section 2.3); simplified two-wheel dynamic models are used instead. The dynamic bicycle model accounts for 2D planar motion: translation along the X and Y axes (longitudinal/lateral) and rotation around the Z axis (yaw) of the reference frame. This yields a 3 degrees of freedom (DoF) vehicle model [17]; in some cases, the longitudinal dynamics are disregarded, such as in path tracking where the task is reduced to controlling lateral dynamics and yaw motion, resulting in a 2DoF model. Considering the simple bicycle model in Figure 2 and applying Newton's laws results in the following model equations:

$$m a_x = F_{xf} + F_{xr} \quad (6)$$
$$m a_y = F_{yf} + F_{yr} \quad (7)$$
$$I_z \ddot{\psi} = M_z = l_f F_{yf} - l_r F_{yr} \quad (8)$$

where I_z represents the moment of inertia, M_z the rotation moment around the Z axis, and F_x and F_y are the longitudinal and lateral forces, respectively, applied on the front and rear wheels. a_x and a_y are the longitudinal and lateral inertial accelerations and are expressed in terms of the longitudinal/lateral accelerations (ẍ, ÿ), the velocities (ẋ, ẏ), and the yaw rate ψ̇ as follows:

$$a_x = \ddot{x} - \dot{y}\dot{\psi} \quad (9)$$
$$a_y = \ddot{y} + \dot{x}\dot{\psi} \quad (10)$$

From Equations (6) to (10) and considering the nonholonomic constraints, the full model can be derived as follows [18]:

$$\ddot{x} = \dot{y}\dot{\psi} + \frac{1}{m}\left(F_{xf} + F_{xr}\right) \quad (11)$$
$$\ddot{y} = -\dot{x}\dot{\psi} + \frac{1}{m}\left(F_{yf} + F_{yr}\right) \quad (12)$$
$$\ddot{\psi} = \frac{1}{I_z}\left(l_f F_{yf} - l_r F_{yr}\right) \quad (13)$$
$$\dot{X} = \dot{x}\cos\psi - \dot{y}\sin\psi \quad (14)$$
$$\dot{Y} = \dot{x}\sin\psi + \dot{y}\cos\psi \quad (15)$$
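The dynamic bicycle model can be evaluated numerically for simulation. The sketch below computes the state derivatives for given tire forces; the mass, inertia, and axle distances are illustrative assumed values, not parameters from the paper:

```python
import numpy as np

def dynamic_bicycle_derivatives(state, Fxf, Fxr, Fyf, Fyr,
                                m=1500.0, Iz=2500.0, lf=1.2, lr=1.4):
    """Time derivatives of the 3-DoF dynamic bicycle model.

    state = [X, Y, psi, xdot, ydot, psidot] with (X, Y) the global
    position, (xdot, ydot) the body-frame velocities, and psidot the
    yaw rate. Tire forces F* are expressed in the body frame.
    """
    X, Y, psi, xdot, ydot, psidot = state
    xddot = ydot * psidot + (Fxf + Fxr) / m    # longitudinal dynamics
    yddot = -xdot * psidot + (Fyf + Fyr) / m   # lateral dynamics
    psiddot = (lf * Fyf - lr * Fyr) / Iz       # yaw dynamics
    Xdot = xdot * np.cos(psi) - ydot * np.sin(psi)   # global kinematics
    Ydot = xdot * np.sin(psi) + ydot * np.cos(psi)
    return np.array([Xdot, Ydot, psidot, xddot, yddot, psiddot])
```

Feeding this function to any ODE integrator (Euler, Runge-Kutta) yields a simulation of the planar vehicle motion.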
From Figure 2, the relationship between the lateral and longitudinal forces acting on the tires (expressed in the wheel frame) and those acting on the vehicle's CG is as follows:

$$F_{xf} = F_{lf}\cos\delta_f - F_{cf}\sin\delta_f \quad (16)$$
$$F_{yf} = F_{lf}\sin\delta_f + F_{cf}\cos\delta_f \quad (17)$$
$$F_{xr} = F_{lr}\cos\delta_r - F_{cr}\sin\delta_r \quad (18)$$
$$F_{yr} = F_{lr}\sin\delta_r + F_{cr}\cos\delta_r \quad (19)$$

where F_l and F_c denote the longitudinal and lateral (cornering) tire forces in the wheel frame. The tires of the vehicle interact with the road surface and undergo deformations during different maneuvers, resulting in tire forces in the longitudinal and lateral directions.

FIGURE 2 Dynamic bicycle model
Although these forces are nonlinear functions, they are often linearized to be used in the simple bicycle model. The linearization of tire forces is possible under the assumption of small slip angles, and it is done by considering that slip angles and tire forces are proportional. Typically, the linear region holds for longitudinal accelerations under roughly 1/2 g and for lateral slip angles under 5° [19]. In such cases, the lateral tire forces can be expressed as follows:

$$F_{cf} = C_f\,\alpha_f \quad (20)$$
$$F_{cr} = C_r\,\alpha_r \quad (21)$$

The parameters C_f and C_r represent the lateral tire cornering stiffness coefficients for the front and rear wheels, respectively, which can be seen as the slope of the tangent line of the curve of tire lateral force with respect to the slip angle. The terms α_f and α_r are the respective front and rear wheel side-slip angles; they represent the deviation between the velocity vector and the direction of the wheel; see Figure 2. In general, both the vehicle model and the tire forces are nonlinear; the Pacejka model, known as the magic formula, works well with nonlinear models and has been extensively used in previous studies [19]. The formula is given by the following:

$$F = d\sin\left(c\tan^{-1}\left(b(1 - e)(k + S_h) + e\tan^{-1}\left(b(k + S_h)\right)\right)\right) + S_v \quad (22)$$

where F can be the longitudinal or the lateral tire force, and k is the slip ratio when F represents the longitudinal force and the slip angle when F represents the lateral force. The slip ratio quantifies the difference between the theoretical and the actual vehicle speed, which is mainly due to the tire slipping phenomenon. The parameters b, c, d, e, S_h, and S_v are fit from experimental data and define certain characteristics like the stiffness factor, the shape factor, and the peak value. In lateral control applications, the slip angle is important; it is defined as the angle between the orientation and the velocity vector of the wheel. Under the assumption that only the front wheel is steerable, the front and rear wheel slip angles are given as α_f = δ_f − θ_vf and α_r = −θ_vr, with the rear steering angle set to zero (δ_r = 0) and θ_vf, θ_vr being
the angles between the wheel longitudinal axis and the longitudinal velocity, which are obtained from the ratio of the lateral to the longitudinal velocity as follows:

$$\theta_{vf} = \frac{\dot{y} + l_f\dot{\psi}}{\dot{x}} \quad (24)$$
$$\theta_{vr} = \frac{\dot{y} - l_r\dot{\psi}}{\dot{x}} \quad (25)$$

Using small angle approximations for linearization and replacing Equations (24) and (25) in (20) and (21) results in the following tire forces:

$$F_{cf} = C_f\left(\delta_f - \frac{\dot{y} + l_f\dot{\psi}}{\dot{x}}\right) \quad (26)$$
$$F_{cr} = -C_r\,\frac{\dot{y} - l_r\dot{\psi}}{\dot{x}} \quad (27)$$
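The magic formula is straightforward to evaluate once its coefficients are fitted. A minimal sketch follows; the coefficient values used in any call are purely illustrative, since real values come from experimental fitting:

```python
import math

def pacejka(k, b, c, d, e, Sh=0.0, Sv=0.0):
    """Pacejka 'magic formula' tire model.

    k is the slip ratio (for longitudinal force) or slip angle (for
    lateral force); b, c, d, e are the fitted stiffness, shape, peak,
    and curvature factors; Sh, Sv are horizontal/vertical shifts.
    """
    x = k + Sh
    return d * math.sin(c * math.atan(b * (1 - e) * x
                                      + e * math.atan(b * x))) + Sv
```

With zero shifts the force vanishes at zero slip, and its slope at the origin corresponds to the cornering (or longitudinal) stiffness, which is how the linear model of Equations (20) and (21) is recovered for small slip.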
By manipulating the dynamic model equations stated previously and the obtained tire forces, a state space model for the lateral dynamics can be derived as follows:

$$\dot{x} = Ax + Bu \quad (28)$$
$$Y = Cx \quad (29)$$

where the control u = δ_f, the state vector is x = [y, ẏ, ψ, ψ̇]^T with y being the vehicle's lateral position, and the output vector is Y = [y, ψ]^T. The matrices A, B, and C are defined as follows:

$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & -\frac{C_f + C_r}{m\dot{x}} & 0 & -\dot{x} - \frac{C_f l_f - C_r l_r}{m\dot{x}} \\ 0 & 0 & 0 & 1 \\ 0 & -\frac{C_f l_f - C_r l_r}{I_z\dot{x}} & 0 & -\frac{C_f l_f^2 + C_r l_r^2}{I_z\dot{x}} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \frac{C_f}{m} \\ 0 \\ \frac{C_f l_f}{I_z} \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
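For a fixed longitudinal speed, the lateral-dynamics state-space matrices can be assembled directly. A minimal sketch with illustrative (assumed) vehicle parameters, not values from the paper:

```python
import numpy as np

def lateral_state_space(vx, m=1500.0, Iz=2500.0, lf=1.2, lr=1.4,
                        Cf=80000.0, Cr=80000.0):
    """State-space matrices of the linear lateral dynamics model.

    State x = [y, ydot, psi, psidot], input u = delta_f (front steering),
    outputs Y = [y, psi]. vx is the (assumed constant) longitudinal speed;
    m, Iz, lf, lr, Cf, Cr are illustrative vehicle parameters.
    """
    A = np.array([
        [0.0, 1.0, 0.0, 0.0],
        [0.0, -(Cf + Cr) / (m * vx), 0.0, -vx - (Cf * lf - Cr * lr) / (m * vx)],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, -(Cf * lf - Cr * lr) / (Iz * vx), 0.0,
         -(Cf * lf**2 + Cr * lr**2) / (Iz * vx)],
    ])
    B = np.array([[0.0], [Cf / m], [0.0], [Cf * lf / Iz]])
    C = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])   # outputs: lateral position, yaw
    return A, B, C
```

Because the matrices depend on the longitudinal speed vx, gain-scheduled or LPV controllers (discussed in Section 3) simply re-evaluate this function as the speed varies.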

Full dynamic model
Although the single-track bicycle dynamic model is fit for control design, it lacks precision due to the simplifications and assumptions made to build it. As discussed earlier, full vehicle dynamic models are a more realistic representation of the vehicle dynamics. Such models take into account all four wheels of the vehicle and do not neglect the vertical dynamics, as presented in earlier research [16, 20]. Figure 3 shows a 3DoF dual-track full vehicle model that is defined by Equations (30)-(36), whose global position kinematics are:

$$\dot{X} = \dot{x}\cos\psi - \dot{y}\sin\psi \quad (30)$$
$$\dot{Y} = \dot{x}\sin\psi + \dot{y}\cos\psi \quad (31)$$

where F_x and F_y are the longitudinal and lateral tire forces, respectively. F_z, given in Equation (37), is the vertical force derived from the suspension displacement; it models the vertical dynamics and the load transfer between the four tires. The indices (F, R) and (R, L) refer to the front/rear and right/left wheels, respectively. I_z, I_y, and I_x represent the respective yaw, pitch, and roll inertias of the vehicle, with Δz(φ, θ) being the displacement of the suspension for the given roll (φ) and pitch (θ) angles, while k_s and d_s define the suspension stiffness and damping. The longitudinal and lateral tire forces are functions of the slip ratio and side-slip angle, as discussed in Section 2.2; they are usually modeled by the Pacejka combined slip formula [21], which takes into account the coupled interaction between longitudinal and lateral slipping. Note that this model only represents the chassis dynamics; more elaborate models may incorporate engine and brake dynamics as well. In addition, it is assumed that the road remains flat, which means that the effects of slope and banking angle are neglected.

VEHICLE CONTROL
Automatic control is an essential task for autonomous driving systems; it solves the problem of following a set reference, defined as a trajectory or a speed profile depending on the type of control. Control is a complex task for autonomous vehicles because it must ensure the stability of the vehicle and certain levels of performance. In general, the control task accounts for three aspects [22]:
• The type of control, which can be lateral control, longitudinal control, or both at the same time.
• The vehicle model used for implementing the control, which can be kinematic or dynamic and linear or nonlinear.
• The control strategy, which can be coupled, coordinated, in cascade, and so on.
Lateral control deals with the yaw motion of the vehicle by acting on the steering angle of the front wheels; the goal is to follow a trajectory by reducing the angular and lateral position errors to zero. The trajectory can be predefined and generated offline, or it can be computed online, such as for obstacle avoidance and lane change maneuvers [13]. Lane keeping is completely dependent on lateral control, where lanes can be detected from camera footage using image processing techniques or advanced deep learning methods.

Model-based controllers
Model-based lateral controllers require a model that describes the lateral motion of the vehicle. These models are usually kinematic or dynamic depending on the type of controller. Numerous studies have been carried out on lateral control; the following subsections present the most common model-based lateral controllers and the corresponding latest research.

Model predictive control (MPC)
MPC is a well-established strategy also known as receding horizon control. MPC is based on the formulation of an optimization problem in the form of a finite horizon open-loop optimal control problem. The main idea of MPC is to use the discrete model of the system to predict its behavior into the future up to a certain time step called the prediction horizon. A control input sequence is generated along several discrete time steps known as the control horizon. The control sequence solves the optimization problem by minimizing a cost function while obeying certain constraints. At each time step, the finite horizon window (prediction horizon) is shifted, and new measurements of the system and the environment are used to solve the optimization problem, resulting in a new control sequence. The whole process is repeated iteratively, and only the first control input is applied to the system. The advantage of MPC is its ability to handle multi-input multi-output (MIMO) systems with constraints that represent actuator limitations or physical and safety constraints. More details about MPC, its variants, and its applications can be found in the surveys [23-26].
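The receding-horizon idea can be sketched for an unconstrained linear system. This is a simplified illustration, not any specific controller from the cited papers: it builds the prediction matrices, minimizes the quadratic cost in closed form (least squares), and returns only the first input; a full MPC would instead pass the inequality constraints to a QP solver at each step.

```python
import numpy as np

def mpc_step(A, B, C, x0, y_ref, Np=10, Nc=3, qy=1.0, ru=0.1):
    """One receding-horizon step of an unconstrained linear MPC (sketch).

    For x+ = A x + B u, y = C x, stacks the predictions over Np steps
    with Nc free control moves (inputs are zero after the first Nc moves,
    a simplification), then minimizes
    J = ||Y - Y_ref||^2_Qy + ||U||^2_Ru in closed form.
    """
    n, m = A.shape[0], B.shape[1]
    p = C.shape[0]
    # Free response: Y = F x0 + Phi U
    F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
    Phi = np.zeros((Np * p, Nc * m))
    for i in range(Np):
        for j in range(min(i + 1, Nc)):
            Phi[i*p:(i+1)*p, j*m:(j+1)*m] = (
                C @ np.linalg.matrix_power(A, i - j) @ B)
    Qy = qy * np.eye(Np * p)
    Ru = ru * np.eye(Nc * m)
    err = y_ref - F @ x0
    # Closed-form minimizer of the quadratic cost
    U = np.linalg.solve(Phi.T @ Qy @ Phi + Ru, Phi.T @ Qy @ err)
    return U[:m]   # receding horizon: apply only the first control input
```

At every sampling instant the plant state is re-measured and this function is called again, which is exactly the iterative shifting of the horizon described above.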
In autonomous vehicle control, the MPC controller is generally composed of an optimizer and a vehicle model (see Figure 4). The optimizer oversees finding the optimal control sequence. The formulation of MPC requires a descriptive system model that captures the significant dynamics while still being simple enough for solving the optimization problem in real time. Consider the system dynamics presented in Equations (28) and (29) (the vehicle lateral dynamics model in the case of lateral control) with X and u being the state and input vectors. Since MPC requires a discrete model, the system can be discretized with a sampling time T_s, giving:

$$X_{k+1} = A_k X_k + B_k u_k \quad (38)$$
$$Y_k = C_k X_k \quad (39)$$

Given the model above with A_k ∈ R^{n×n}, B_k ∈ R^{n×m}, and C_k ∈ R^{m×n}, the simple MPC controller is obtained from the following optimization problem:

$$\min_{U} \; J = \sum_{i=1}^{N_p}\left\|Y_{k+i|k} - Y_{ref,k+i}\right\|^2_{Q_y} + \sum_{i=0}^{N_c-1}\left\|\Delta u_{k+i}\right\|^2_{R_u} \quad (40)$$

subject to:

$$X_{k+i+1|k} = A_k X_{k+i|k} + B_k u_{k+i} \quad (41)$$
$$u_{min} \leqslant u_{k+i} \leqslant u_{max} \quad (42)$$
$$\Delta u_{min} \leqslant \Delta u_{k+i} \leqslant \Delta u_{max} \quad (43)$$
$$Y_{min} \leqslant Y_{k+i|k} \leqslant Y_{max} \quad (44)$$
$$\Delta Y_{min} \leqslant \Delta Y_{k+i|k} \leqslant \Delta Y_{max} \quad (45)$$
$$X_{min} \leqslant X_{k+i|k} \leqslant X_{max} \quad (46)$$

U ∈ R^{m·N_c} is the optimized variable, which is the control input sequence; Q_y ∈ R^{N_p×N_p} and R_u ∈ R^{N_c×N_c} are weighting matrices for the outputs and the inputs. J is the cost function to be minimized, in the form of a quadratic sum of weighted norms of the error and the input. The error is the difference between the predicted output Y_p ∈ R^{m×N_p} and the reference Y_ref along the prediction horizon N_p, and the input vector U is optimized along the control horizon N_c. Equations (42)-(46) specify the constraints, which can be equality or inequality constraints and can be imposed on the outputs, the inputs, and their rates. The weighted norms are (Y_p − Y_ref)^T Q_y (Y_p − Y_ref) and ΔU^T R_u ΔU, respectively, with Q_y and R_u being diagonal matrices whose weights penalize the corresponding variables. Therefore, the cost function J can be reformulated in a more convenient matrix form:

$$J = (Y_p - Y_{ref})^T Q_y (Y_p - Y_{ref}) + \Delta U^T R_u \Delta U \quad (47)$$

Solving the MPC problem then amounts to finding the control sequence U that minimizes the cost function J and obeys the constraints listed in Equations (41)-(46). MPC is very suitable for the lateral control of autonomous vehicles since it can systematically handle constraints; several studies have been
published recently that show the good performance achieved by MPC and its ability to overcome model imprecision and external disturbances. In paper [27], Yao et al. proposed an MPC path tracking controller with longitudinal speed compensation to overcome the assumption of constant longitudinal speed along the control horizon. The speed compensation aims to reduce the control deviation due to rapid speed and acceleration variations, which affect the path tracking accuracy. The authors used the single-track bicycle model with the Fiala brush tire model [9] in the design of the MPC controller, with constraints set on the tire cornering capabilities, the vehicle side-slip angle, the input, and the input rate. The compensation for the longitudinal speed is ensured by varying the velocity with constant acceleration. To check the performance of the designed controller, the authors used MATLAB and Carsim to simulate the system and compared it with an MPC controller without speed compensation. The simulations showed similar performance during gentle curve variations, but the designed controller performed much better in sudden curvature variations, where lateral and course deviations were significant for the normal MPC, leading to greater tracking errors.
Kebbati et al. [28] developed an adaptive and optimized MPC controller for path tracking. The controller was based on the linear bicycle dynamics model; the authors used Laguerre functions to reduce the computation load and allow long control and prediction horizons. The authors proposed an improved particle swarm optimization (PSO) algorithm to optimize the parameters of the MPC controller, including the prediction and control horizons and the cost function weighting matrices (see Figure 5); the PSO algorithm is further explained and used with PI controllers in paper [29]. Their proposed controller adapts its parameters to varying longitudinal velocity, trajectory, and external disturbances through a lookup table approach. The authors tested their controller against classic MPC and pure pursuit with different velocity profiles and trajectories and under varying external disturbances. The results showed that the proposed MPC is indeed adaptive and has superior performance and robustness. The same authors further enhanced their controller design in Kebbati et al. [30] by using an adaptive neuro-fuzzy inference system (ANFIS) and fully connected neural networks to learn the optimal MPC parameters and perform online controller adaptation. The optimization was carried out for variable velocity profiles, wind disturbance profiles, and road adhesion coefficients. The goal is to readjust the MPC controller to these varying working conditions and external disturbances. Significant tracking improvements were achieved.

FIGURE 5 Adaptive model predictive control (MPC) control scheme [28]
Alcala et al. [31] developed a solution for mixed lateral and longitudinal control for trajectory tracking. They divided the control strategy into a cascade control scheme with an internal layer that controls the vehicle dynamics (velocity) and an external layer that controls the vehicle kinematics (position and orientation), as can be seen in Figure 6. The controller of the internal loop is a linear quadratic regulator (LQR) formulated using a linear parameter varying (LPV) approach as a dynamic (LPV-LQR) design; it was derived from the nonlinear model presented previously in Alcala et al. [22]. The controller in the external loop was formulated as a quadratic optimization problem using the nonlinear kinematic model in an LPV form (kinematic LPV-MPC); it controls the position and orientation of the vehicle. The designed control strategy was simulated in MATLAB for a curved circuit using the dynamics of an electric rear wheel drive car. The performance was compared with an NMPC controller, and the results showed that the LPV-MPC had similar performance to the NMPC except at handling external disturbances. On the other hand, the LPV-MPC proved to be 50 times faster than the NMPC, which is very important for real-time applications. The authors of [32] developed a coordinated lateral and longitudinal control strategy. They addressed the lateral control by designing an LPV-MPC controller that takes into account variable cornering stiffness coefficients; these were adapted online using recursive least squares estimators. The longitudinal control was ensured by a PSO-optimized proportional integral derivative (PID) controller (PSO-PID) that guarantees accurate speed tracking. Speed and lateral dynamics stability conditions were satisfied with regard to road curvatures. Furthermore, the authors enhanced the MPC cost function by adding an exponential weight, which improved trajectory tracking precision.
The NMPC has been used in combination with multisensor fusion in Rick et al. [33]; the authors aimed at exploring a parking lot autonomously and performing a parking maneuver. Three modules were employed: a sensor fusion module for localization, mapping, and obstacle detection; a decision-making module for detecting parking spaces and for guidance; and a control module based on NMPC. The kinematic single-track model was used in this study as it is sufficient for the low speeds and small external forces encountered in a parking maneuver. The different maneuvers were planned as an optimal control problem that can be generalized as follows:

$$\min_{x(\cdot),\,u(\cdot),\,T} \; J\left(x(\cdot), u(\cdot), T\right)$$
$$\text{s.t.} \quad \dot{x}(t) = f\left(x(t), u(t)\right), \quad \Psi_1\left(x(0)\right) = 0, \quad \Psi_2\left(x(T)\right) = 0, \quad C\left(x(t), u(t)\right) \leqslant 0$$

The solution to this problem is the set of states x, controls u, and time T that minimize the objective function J. The solutions obey the dynamics f and the boundary constraints set by Ψ_1 and Ψ_2. Path constraints such as obstacle avoidance are defined by C. The authors applied a direct method using the software framework TransWORHP [34] to transform the optimal control problem into a nonlinear optimization problem (NLP) and solve it with NLP solvers. Tests on a real autonomous vehicle (a modified Volkswagen Passat GTE) showed that the computational load is heavy, but the implemented NMPC was able to solve the task of autonomously exploring the parking lot of the University of Bremen with high precision thanks to the consideration of the nonlinear dynamics of the vehicle.

FIGURE 6 Proposed cascade control scheme [31]

Li et al.
[35] proposed an NMPC for trajectory tracking; the controller was based on nonlinear vehicle dynamics and the Pacejka tire model [21], and it tracks the yaw angle and lateral position. The authors used the sigmoid function to generate a smoothly curved reference. The NMPC is formulated as a standard optimization problem by discretizing the system dynamic equations; constraints were imposed on the control input and the incremental control to prevent actuator saturation. The performance evaluation was done through simulations in Carsim and MATLAB by comparing the NMPC to the linear time-varying (LTV) version of MPC (LTV-MPC). The simulation results showed better performance for the NMPC against the LTV-MPC, which exhibited frequent oscillations with higher overshoots and less precision. However, the NMPC required much longer computations compared with the LTV-MPC controller. LPV-MPC was used by Alcala et al. [36] as a novel approach to perform autonomous racing, where an offline nonlinear model predictive planner was used to compute the optimal racing trajectory. The LPV-MPC approach was then used online to track the generated trajectory; it uses the standard nonlinear dynamic bicycle model to capture vehicle dynamics. The proposed control approach was evaluated through simulations and real experiments on a proposed circuit, where the trajectory planner aims to minimize the lap time and the LPV-MPC aims to track the optimal trajectory. The results show fast lap times and good trajectory tracking with certain errors attributed to non-modeled dynamics. A similar approach was proposed by Kebbati et al.
[37] with an adaptive prediction model for the LPV-MPC controller; the authors used neural networks to adapt the cornering stiffness coefficients of the vehicle in real time. Furthermore, they proposed an enhanced genetic algorithm (GA) to fine-tune the cost function of the LPV-MPC and improve its tracking precision. However, the main disadvantage of the proposed strategy is the required initialization of the LPV model, since the state space matrices of the LPV model change with the scheduling variables. Wang et al. [38] proposed an improved MPC control strategy that includes an adaptive fuzzy controller to change the weights of the cost function. The aim of this design is to eliminate the problem of ride discomfort caused by fixed weights in classical MPC. The fuzzy controller uses five fuzzy sets; it takes as inputs the errors of the lateral position and the heading and generates the weight matrices for the cost function. The designed improved MPC was compared with pure pursuit and classical MPC to evaluate its performance and ability to handle the ride discomfort problem. The comparison was done through simulations in Carsim and MATLAB, and the results proved that it is more accurate than the pure pursuit controller while providing smooth steering angle changes compared with classical MPC, especially with high errors. Xu et al. [39] designed a lane keeping system based on the LTV-MPC controller. In their work, the reference trajectory was generated by fitting five preview points to reduce the number of sensors needed. The LTV-MPC was based on the linearized bicycle model. The trajectory tracking was solved as a quadratic programming (QP) problem, where the solution is a vector of optimal steering angles. The LTV-MPC controller was evaluated through co-simulation in MATLAB and Carsim, and the results showed good tracking performance at low and high speeds. However, some acceptable overshoots in steering angles were produced at high speeds. Sun et al.
[40] proposed an MPC with switched tracking error to tackle the tracking deviation and overshoot at high speeds. A fuzzy logic classifier was used to determine the switching phase and instant, and the controller used the velocity heading deviation as the tracking error instead of the vehicle heading deviation. The fuzzy classifier switches between the real-time side-slip and steady-state side-slip so that the MPC computes the velocity heading deviation in the transient and steady-state conditions, respectively. The MPC controller was based on the single-track bicycle model with the Fiala tire model to capture vehicle lateral dynamics. The latter was compared with a regular MPC controller and MPC controllers with fixed side-slip (steady state or real time). The designed controller achieved very high tracking accuracy. It scored the lowest tracking deviation RMS value, but failed to reject disturbances and model uncertainties. Guo et al. [41] proposed an MPC path following controller that considers road regions delimited by curves and measurable disturbances accounting for model mismatch and small angle assumptions. The MPC was based on the bicycle dynamic model that includes measurable disturbances. The authors introduced a differential evolution algorithm to ensure faster online optimization. The proposed path following controller was tested on an experimental platform (HQ430 autonomous car), where it outperformed PID and regular MPC, showing promising computation and control performance. Kabzan et al.
[42] developed an online learning MPC controller for autonomous racing. They adopted a simple bicycle vehicle model and improved it by learning the model errors online using Gaussian process regression [43]. The control problem was formulated as a contouring-based MPC with a learning-based extension [44] and solved using the FORCES Pro solver [45]. The authors used the AMZ autonomous car [46] to test the proposed learning-based controller, where it achieved improved performance and 10% shorter lap times compared with the normal controller without the learning extension. Tȃtulea-Codrean et al. [47] also dealt with autonomous race driving using NMPC for the F1/10 platform; to keep the vehicle inside the track, the authors interpolated the circuit boundaries using third-order polynomials and implemented them as inequality constraints on the lateral position. A feed-forward neural network was introduced in the paper to learn to mimic the NMPC using input-output data. In the same racing context, Verschueren et al. [48] handled autonomous racing as a time-optimal problem, in which a slip-free bicycle model is used to build the NMPC controller. The key objective is minimizing lap time, which is achieved by a spatial reformulation of the model to include time as an optimization variable. Simulation and experimentation were conducted using ACADO for solving the NLP. In Verschueren et al. [49], the same authors extended their work by including a full nonlinear dynamic bicycle model and using the same time-optimal control approach with spatial reformulation. In a similar approach, Kloeser et al. [50] addressed autonomous racing for a 1:43 scale race car using a singularity-free path parametric model for NMPC predictions. Contrary to Verschueren et al.
[48], they used a partial spatial reformulation of the model to exclude singularities. The authors implemented obstacle avoidance in the optimization problem as constraints, with the objective of maximizing progress on the path. The designed controller was validated both in simulation and experimentation, leading to good racing performance. Betz et al. [51] provide a thorough survey of the field of autonomous racing, where vehicles are driven at the limits of their performance with high speeds and accelerations.

Sliding mode controller
Sliding mode control (SMC) is a robust control technique suitable for nonlinear systems with parametric uncertainties that are subject to external disturbances. In SMC, the sliding motion is insensitive to matched disturbances and uncertainties thanks to the discontinuous switching control. This technique forces the system to slide (converge) on a sliding surface (Figure 7, where x is the controlled variable) by applying a fast switching control signal. The design of SMC controllers is a two-part task: in the first part, the desired dynamics are defined by the sliding surface such that the design specifications are satisfied by the sliding motion; the second part is the design of a switching function that ensures the system slides on the desired sliding surface. The theory of SMC is reviewed in detail in [52][53][54].
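As a toy illustration of this two-part design, the sketch below applies a first-order SMC to a double-integrator error model; the surface slope, switching gain, and plant are illustrative assumptions, not taken from the surveyed papers:

```python
import numpy as np

def smc_step(e, e_dot, lam=2.0, k=5.0):
    """One SMC evaluation for a double-integrator error model.

    Part 1: sliding surface s = e_dot + lam*e encodes the desired
    dynamics (on the surface, e decays exponentially to zero).
    Part 2: switching law u = -k*sign(s) drives the state onto s = 0.
    """
    s = e_dot + lam * e
    u = -k * np.sign(s)
    return s, u

# Simulate e_ddot = u (double integrator) from an initial error.
dt, e, e_dot = 0.001, 1.0, 0.0
for _ in range(5000):
    _, u = smc_step(e, e_dot)
    e_dot += u * dt
    e += e_dot * dt

print(abs(e))  # error driven close to zero (with residual chattering)
```

The fast switching that enforces the sliding motion is also the source of the chattering discussed below: in discrete time the control sign flips at every step once the surface is reached.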
The problem of lateral control with sliding mode has been studied in the literature [55,56]. For instance, Talj et al. [57] addressed the lateral control of autonomous vehicles using a higher-order sliding mode controller; the authors minimized the lateral displacement by using the super-twisting algorithm. The control problem can be formulated by considering Equations (12), (13), (26), and (27) of the dynamic model given previously. From these equations, the model can be summarized in terms of $y$ and $\psi$, the lateral position and the yaw angle of the vehicle. The super-twisting algorithm ensures robust stability and reduces the chattering phenomenon (Figure 7 illustrates the SMC principle). Let us consider a system of the form:

$$\dot{x} = f(t, x) + g(t, x)u(t) \quad (50)$$

with $f$ and $g$ being continuous functions and $x$ and $u$ the state vector and the control input, respectively. Let us now define a sliding variable $s$ whose derivative is expressed as follows:

$$\dot{s} = \varphi(t, x) + b(t, x)u(t) \quad (51)$$
The sliding mode controller aims for the system to converge to the sliding surface, defined as $s = 0$. To achieve this, we assume that there exist positive constants $s_0$, $b_{min}$, $b_{max}$, and $C_0$ such that, for $|s| < s_0$, the system satisfies $|\varphi(t,x)| \le C_0$ and $b_{min} \le b(t,x) \le b_{max}$. Thus, the super-twisting SMC can be given as follows [57]:

$$u = -\lambda |s|^{1/2}\,\mathrm{sign}(s) + u_1, \qquad \dot{u}_1 = -\alpha\,\mathrm{sign}(s) \quad (54)$$

where $\lambda$ and $\alpha$ are positive constants. Convergence in finite time is very important, and it is assured when the gains satisfy the standard super-twisting conditions:

$$\alpha > \frac{C_0}{b_{min}}, \qquad \lambda^2 \ge \frac{4 C_0\, b_{max} (\alpha + C_0)}{b_{min}^2 (b_{min}\alpha - C_0)}$$

To apply this to the lateral control of autonomous vehicles, we consider the lateral error dynamics ($\ddot{e} = \ddot{y} - \ddot{y}_{re}$), where the desired lateral acceleration is expressed as $\dot{x}^2/R$, with $R$ being the radius of the road curvature. Using Equation (10) and replacing $\ddot{y}$ by Equation (49) yields the error dynamics as a function of the steering angle $\delta$, which is the control input; the objective is to cancel the displacement error. Therefore, the sliding surface can be chosen as $s = \dot{e} + c\,e$, whose derivative is $\dot{s} = \ddot{e} + c\,\dot{e}$. Replacing $\ddot{e}$ by its expression in Equation (56) and identifying with Equation (51) provides the expressions of $\varphi$ and $b$; the super-twisting algorithm (Equation (54)) then yields the control signal. The downside of the SMC technique is its limited applicability due to the fast switching, which results in the chattering phenomenon. In reality, actuators suffer from imperfections and delays; the chattering of the SMC method may therefore damage the actuators and result in discomfort or energy losses, which in turn create additional disturbances [19]. Researchers have adopted several methods to deal with this phenomenon, including higher-order sliding modes [58], replacing the discontinuous sign function with smooth functions, and including observers [57].
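The super-twisting law can be sketched on a simple first-order sliding dynamics $\dot{s} = u + d$; the gains and the constant disturbance below are illustrative assumptions, not the values of [57]:

```python
import numpy as np

def super_twisting(s, u1, lam=1.5, alpha=1.1, dt=0.001):
    """One step of the super-twisting algorithm:
    u = -lam*|s|^(1/2)*sign(s) + u1,  u1_dot = -alpha*sign(s).
    The discontinuity acts on the derivative of u1, so the applied
    control u is continuous, which attenuates chattering."""
    u = -lam * np.sqrt(abs(s)) * np.sign(s) + u1
    u1 = u1 - alpha * np.sign(s) * dt
    return u, u1

# Drive the sliding variable of s_dot = u + d to zero despite a
# constant unknown disturbance d (values are illustrative).
dt, d = 0.001, 0.3
s, u1 = 1.0, 0.0
for _ in range(10000):
    u, u1 = super_twisting(s, u1, dt=dt)
    s += (u + d) * dt

print(abs(s), u1)  # s near zero; the integral term u1 cancels d
```

In contrast to the plain sign-function SMC, the applied control here is continuous: the integral term learns to cancel the disturbance, so the residual switching amplitude shrinks with the sampling step.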
Many researchers have adopted the SMC technique to perform lateral control of autonomous vehicles. For instance, Wu et al. [59] designed a path following strategy based on SMC using the nonsingular terminal sliding mode technique. The control problem was simplified into a yaw tracking problem to ensure that the displacement deviation converges to zero. The linearized dynamic bicycle model was used in the design of the SMC controller. Figure 8 shows the path following algorithm, where a linear extended state observer (linear-ESO) is used to estimate the state and disturbance of the simplified yaw tracking system, ensuring that the control algorithm works well even without an accurate mathematical model. The NTSM block in Figure 8 is the SMC controller based on the nonsingular terminal sliding surface. Moreover, the SMC controller was coupled with an exponential convergence law to improve the convergence speed. Simulations in Carsim showed solid performance of the designed SMC controller and proved its ability to overcome uncertainties, reject disturbances, and achieve very small tracking errors.
Kada et al. [60] proposed a robust SMC that eliminates the drawbacks of regular SMC controllers. The authors used a disturbance observer to enhance the controller; a neural network was used to estimate model uncertainties, and a fuzzy logic system was added to handle parameter variations. The disturbance observer, detailed in Yang et al. [61], estimates mismatched disturbances such as wind and unmodeled dynamics. The model uncertainties are estimated through a radial basis function neural network (RBFNN) formed by three layers; this network approximates the different functions of the model. The fuzzy system was used for scheduling the gains of the sign function in the SMC controller [62]. The design of the controller is detailed in [60], with the chosen sliding surface and the stability analysis. For performance evaluation, the authors compared their controller to traditional SMC, observer-based SMC, and back-stepping SMC. The proposed controller ensured very fast convergence thanks to the gain scheduling with the fuzzy system. In addition, the controller provided very precise tracking that easily surpassed the other controllers even in sharp turns, while eliminating the chattering.
Gao et al. [63] designed an adaptive distributed sliding mode controller for vehicular platooning. A distributed sliding surface for all the followers was used to control the full platoon. To ensure ride comfort, the authors employed an adaptive law instead of the switching function to compensate for model errors and avoid jerks. They proposed a structural decomposition method to decouple the sliding motion dynamics of the platoon. Moreover, they ensured the smoothness and quickness of the control by placing the poles of the sliding motion in predefined regions using the linear matrix inequality (LMI) approach. The performance of the designed controller was found to be satisfactory even when the platoon is composed of vehicles with uncertain parameters. However, the control performance was heavily affected by the interaction topology of the platoon. Norouzi et al. [64] designed an adaptive sliding mode controller for the task of autonomous lane change; the authors used a fuzzy boundary layer to address the chattering issues of SMC. The path planning for the lane change was performed using a quintic polynomial and vehicle boundary conditions, and the vehicle lateral dynamics were modeled using the bicycle model. The authors tested their controller in Carsim under different road conditions and compared it to regular SMC. The results showed a significant improvement: the adaptive rules removed the need to know the range of uncertainties, and the fuzzy boundary layer achieved smaller tracking errors. Wang et al. 
[65] also worked on the path tracking problem, proposing an automatic steering control strategy based on back-stepping SMC. For higher modeling precision, the authors used both kinematic and dynamic vehicle models as well as a vehicle-road system model. To address the chattering problem, they replaced the sign function of the SMC with a saturation function and introduced a sliding mode term that overcomes disturbances and ensures that the system state always remains on the sliding surface, which in turn guarantees robustness. The proposed strategy was evaluated in Carsim and on a real vehicle at low speed under closed road conditions; the results showed that the proposed strategy significantly improved the steering performance. In the same context, Alika et al. [66] improved the work presented in Talj et al. [67], which addresses the lateral control of autonomous vehicles using a higher-order SMC with the super-twisting algorithm. The authors enhanced the SMC by optimizing the controller performance using the PSO algorithm. They performed the optimization for two cases: first the sliding surface parameters and then the controller parameters. The conducted simulations showed that the proposed controller outperformed the SMC developed in Talj et al. [67] and guarantees much better trajectory tracking and more robustness.

H ∞ controller
H ∞ is a robust control method that readily handles modeling uncertainties and external disturbances. The objective is to minimize the H ∞ norm of the system to be controlled by solving an optimization problem that involves the Riccati equation. The control problem formulation usually accounts for noise, modeling uncertainties, and disturbances and is solved through LMI methods. Although the H ∞ approach is known for robust stability and performance, it can become complex to realize in practice. More details about H ∞ control theory and design can be found in Chang et al. [68]. To perform the lateral control task based on H ∞ , a vehicle-road system model is required (Figure 9). The vehicle dynamics model given in Equations (11)-(15) can be adopted along with external disturbances, where the terms $a_i$ and $b_i$ are the corresponding model coefficients. To eliminate either the lateral error ($e$) or the heading error ($\Delta\Phi$), the state variables relevant to the reference path are obtained from the state variables of the vehicle dynamics, with $s$ being the distance along the path and $v_y$ and $v_x$ being the lateral and longitudinal velocities, respectively. He et al. [69] adopted the projected error $e_p$ to combine both lateral and heading errors at once. By using the small-angle approximation and differentiating $e_p$, the error dynamics are obtained, where $d_3$ and $d_4$ denote uncertain external disturbances and $k$ represents the path curvature. Combining Equations (60) and (64) yields a state-space system with external disturbances:

$$\dot{x} = Ax + Bu + Bd_e, \qquad u = Kx \quad (66)$$

with $K$ being a feedback gain matrix. For a positive symmetric matrix $P$, the Lyapunov function can then be selected as:

$$L = x^T P x \quad (67)$$

which, when differentiated, yields:

$$\dot{L} = \dot{x}^T P x + x^T P \dot{x} = (Ax + BKx + Bd_e)^T P x + x^T P (Ax + BKx + Bd_e) = x^T (Q + Q^T) x + d_e^T B^T P x + x^T P B d_e \quad (68)$$

where $Q = P(A + BK)$. Then, by setting $\xi = [x^T \; d_e^T]^T$, one can rewrite (68) in the quadratic form:

$$\dot{L} = \xi^T \begin{bmatrix} Q + Q^T & PB \\ B^T P & 0 \end{bmatrix} \xi \quad (71)$$

At this point, the performance index of the H ∞ design can be selected based on an attenuation level $\gamma > 0$:

$$J = \int_0^T \left( x^T x - \gamma^2 d_e^T d_e \right) dt = \int_0^T \xi^T \begin{bmatrix} I & 0 \\ 0 & -\gamma^2 I \end{bmatrix} \xi \, dt \quad (73)$$

Adding the integrand of (73) to (71) yields:

$$\dot{L} + x^T x - \gamma^2 d_e^T d_e = \xi^T \Theta \xi, \qquad \Theta = \begin{bmatrix} Q + Q^T + I & PB \\ B^T P & -\gamma^2 I \end{bmatrix} \quad (74)$$

Requiring $\Theta < 0$ and integrating (74), we obtain $L(T) \le L(0) + \gamma^2 \int_0^T d_e^T d_e\, dt$; thus, by Equation (67), the system states are bounded under bounded disturbance over a finite time horizon. Using the Schur complement theorem, $\Theta < 0$ can be written as:

$$Q + Q^T + I + \gamma^{-2} P B B^T P < 0 \quad (78)$$

Setting $R = P^{-1}$ and $M = KR$, pre- and post-multiplying Equation (78) by $R$ and applying the Schur complement once more gives the first LMI (79), while the requirement that $P$ be a positive symmetric matrix yields the second LMI (80). The H ∞ state feedback gain is obtained by solving these LMIs. Several research papers dealing with H ∞ for the lateral control task of autonomous vehicles exist in the literature. For instance, Sun et al. [70] developed a fuzzy model-based H ∞ controller with feedforward for the lateral control of autonomous vehicles. The authors considered the desired yaw rate as an external disturbance of the vehicle's dynamics and used the feedforward loop to improve the performance of the closed-loop system. They used the bicycle kinematic model for the feedforward loop and the bicycle dynamic model for the feedback loop. Fuzzy modeling was used to express the variation of vehicle velocity and mass, which can change with the number of passengers onboard. The effectiveness of the proposed controller was demonstrated by Simulink-Carsim co-simulations. He et al. 
[69] proposed a robust H ∞ coordination control strategy to improve lateral motion control at handling limits. The approach was based on the LMI method and coordinates front and rear wheel steering. The proposed controller was verified on dry asphalt through Simulink-Carsim co-simulations, where external disturbances on the front and rear vehicle axles were included. The results showed high path-tracking capability close to the driving limits and proved that the proposed approach is robust to uncertain external disturbances. Hu et al. [71] proposed a robust H ∞ control strategy for path following in which the vehicle lateral velocity is not required to be known. The developed output feedback controller was based on a mixed approach involving GA and LMI to obtain the optimal feedback gain, and it accounts for model and road uncertainties as well as external disturbances. The designed controller was tested for a lane change maneuver on a low-adherence road and for a J-turn maneuver on a high-adherence road under tough conditions, and the results showed good overall performance and robustness. An LPV H ∞ controller for high-speed driving and evasive maneuvers was designed in Corno et al. [72]. Contrary to the feedforward approach, the design was based on the lateral error and the look-ahead distance of the vehicle to ensure better robustness; moreover, the actuator nonlinearities at low speeds were accounted for. The proposed controller was experimentally validated on an instrumented vehicle for different bends and double lane change maneuvers, where sufficient and adequate performance for autonomous driving was achieved. Chaib et al. 
[73] compared H ∞ to self-adaptive, fuzzy, and PID controllers for the lateral control task. The H ∞ controller was designed based on the loop-shaping procedure and tested against the others under variable road adhesion and lateral wind disturbances. The simulation results favored the self-adaptive controller; however, the H ∞ and fuzzy controllers performed almost as well, while PID was found to be the worst. Yang et al. [74] compared H ∞ with MPC for trajectory tracking control using a 3DoF vehicle dynamics model. The two controllers were tested on curves and double lane change maneuvers in MATLAB-Carsim co-simulations. The results showed that MPC outperformed H ∞ in terms of accuracy and response time, but when tough conditions were introduced, H ∞ was able to achieve higher longitudinal speeds than its rival. Wang et al. [75] developed a robust H ∞ controller that accounts for estimation and measurement delays and data dropouts. The authors formulated a general representation to include measurement and transmission delays and data dropouts. The design of the proposed controller was enhanced by including varying external disturbances; in addition, the uncertainties due to the tire cornering stiffness coefficients were included. Simulation results verified the effectiveness of the proposed approach, with good tracking achieved, although the authors focused mainly on time-invariant delays and data dropouts, which can be time varying and unknown in real life.
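The attenuation level γ bounds the disturbance-to-state gain of the closed loop. The sketch below estimates that gain (the H ∞ norm) by a frequency sweep for an illustrative 2-state lateral-error model with an assumed stabilizing feedback; the matrices and gains are not taken from the surveyed papers:

```python
import numpy as np

def hinf_norm(A_cl, B_d, C, freqs):
    """Estimate the H-infinity norm of G(s) = C (sI - A_cl)^(-1) B_d
    by gridding the imaginary axis and taking the peak singular value."""
    n = A_cl.shape[0]
    peak = 0.0
    for w in freqs:
        G = C @ np.linalg.inv(1j * w * np.eye(n) - A_cl) @ B_d
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

# Illustrative 2-state model x = [e, e_dot] with a stabilizing
# state feedback u = Kx (gains assumed, not from the literature).
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-4.0, -3.0]])          # u = Kx
A_cl = A + B @ K
B_d = B                               # disturbance enters like the input
C = np.eye(2)                         # performance output: full state

gamma = hinf_norm(A_cl, B_d, C, np.logspace(-2, 2, 400))
print(gamma)  # disturbance-to-state attenuation level, about 0.29 here
```

Any γ larger than this peak makes the H ∞ inequality feasible for this fixed K; the LMI synthesis discussed above instead searches over K and P to make γ as small as possible.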

LQR
The LQR approach is an optimal control technique that achieves accurate tracking at minimum control cost. The idea is similar to MPC in that a quadratic cost function is minimized, resulting in an optimal feedback gain; this is achieved by solving the Riccati equation, which is formulated based on the plant's state-space model.
In lateral control applications, designing the LQR controller is a straightforward task. Considering the vehicle dynamics model given in Equations (28) and (29), an error-based dynamic model can be deduced. Assuming that $e_1$ is the lateral error (between the trajectory and the vehicle lateral position) and $e_2$ is the heading error (between the vehicle and reference heading angles), the following relations hold true:

$$\ddot{e}_1 = \ddot{y} + v_x(\dot{\psi} - \dot{\psi}_{re}), \qquad e_2 = \psi - \psi_{re} \quad (81)$$

Combining Equations (28) and (81) while considering $e_1, \dot{e}_1, e_2, \dot{e}_2$ as the new states, with the steering angle $\delta$ as the input, the error-based model can be written as:

$$\dot{x} = Ax + B\delta, \qquad x = [e_1 \; \dot{e}_1 \; e_2 \; \dot{e}_2]^T$$

where $\psi_{re}$ is the reference heading and the new matrices $A$ and $B$ follow from the vehicle parameters. The role of the LQR optimal controller is to minimize the lateral error and the vehicle lateral acceleration. Similar to MPC, LQR solves an optimization problem constructed with the following objective function:

$$J = \int_0^{\infty} \left( x^T Q x + u^T R u \right) dt$$

with $Q$ being a positive semi-definite diagonal weight matrix and $R$ a positive definite matrix penalizing the control effort.
As LQR is a state feedback control, the control signal is given in (66), where $K$ is the feedback gain obtained by the variational method:

$$K = R^{-1} B^T P$$

with the matrix $P$ being the solution of the Riccati equation:

$$A^T P + PA - PBR^{-1}B^T P + Q = 0$$

The authors of [76] developed an LQR-based lateral controller with road disturbance compensation; the road was modeled using clothoid cubic polynomials, and a dynamic lateral model with look-ahead distance was adopted. The authors redesigned the state reference by adding a term that compensates for the winding road disturbance. Experimental tests showed superior performance of the LQR with disturbance compensation compared with a regular LQR controller. Riofrio et al. [77] designed an LQR that handles lateral stability and rollover control, taking into account the road bank angle, which is estimated using a Kalman filter. Several tests were carried out in Trucksim, where the proposed LQR controller was compared with a fuzzy controller. The results showed significant improvements in load transfer, yaw rate, roll, and side-slip angles. Good vehicle behavior was achieved on banked surfaces, and the designed controller was able to prevent the oversteering effects often caused by rollover controllers. Vivek et al.
[78] conducted a comparative study between LQR, MPC, and the Stanley controller for path tracking applications. The kinematic bicycle model was used, and the controllers were tested in the CarMaker simulator. LQR and MPC provided optimal results compared with Stanley, and MPC outperformed LQR at the expense of higher computation loads. Articulated heavy-duty vehicles have large sizes and experience important mass variations, which result in poor maneuverability. In this regard, Barbosa et al. [79] improved a robust LQR (RLQR) previously developed in earlier research [80,81] to take parametric variations into account. The proposed controller does not depend on offline parameter tuning but rather uses a penalty parameter that vanishes over time to ensure smoothness and robustness while maintaining optimality. The proposed controller was evaluated against a robust H ∞ controller with the vehicle subject to variable loads. The results showed smoother and more robust performance of the RLQR, especially for high loads with mass uncertainties. Kim et al.
[82] also worked on the lateral control of articulated vehicles; the authors designed an active steering controller based on LQR theory. The articulated vehicle was modeled by a 3DoF model whose key parameters were identified with simulated annealing particle swarm optimization (SAPSO). The proposed LQR controller aims at minimizing the yaw tracking error and the side-slip angle simultaneously for both tractor and trailer. The performance of the proposed control strategy was validated through Trucksim simulations, and the results were satisfactory for low- and high-speed maneuvers. Alcala et al. [83] addressed the lateral control of autonomous vehicles using a Lyapunov-based method with an LQR-LMI tuning approach. The controller was developed with an LPV kinematic bicycle model, and the resolution of the LQR-LMI problem determined the optimal control parameters. To further complement the tuning process, the authors added a pole placement constraint to guarantee maximum achievable performance. The proposed control technique achieved good autonomous guidance in virtual reality and real scenario tests.
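The LQR synthesis described above reduces to solving the Riccati equation; a minimal sketch using SciPy, with an assumed illustrative error-based state-space model rather than the exact matrices of the surveyed papers:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative error-based lateral model x = [e1, e1_dot, e2, e2_dot]
# linearised at a fixed speed; the numeric entries are assumptions.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, -4.0, 4.0, -0.5],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, -2.0, 2.0, -3.0]])
B = np.array([[0.0], [2.0], [0.0], [5.0]])
Q = np.diag([10.0, 1.0, 5.0, 1.0])   # penalise lateral/heading errors
R = np.array([[1.0]])                # penalise steering effort

# Solve A'P + PA - PBR^(-1)B'P + Q = 0, then K = R^(-1) B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

A_cl = A - B @ K                     # closed loop with u = -Kx
print(np.linalg.eigvals(A_cl).real)  # all negative: stabilising gain
```

Retuning the controller for different operating speeds amounts to rebuilding A and B at each speed and re-solving the Riccati equation, which is why gain-scheduled LQR designs are common in practice.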

Other control methods
Other controllers include PID control, which is often enhanced with neural networks or fuzzy logic systems to adapt to external disturbances and varying parameters [84-87]. They also include back-stepping control, a recursive method that associates a Lyapunov function with feedback control [88-91]. There are also controllers based on the vehicle's geometry, such as the Pure Pursuit [92][93][94] and Stanley controllers [95][96][97]. Several other techniques have been developed in the literature. For instance, Chen et al. [98] worked on autonomous parking; they introduced an adaptive pseudo-spectral method for solving time- and energy-optimal control for parking applications, handled as a nonlinear programming problem. The approach was compared with interior point and piecewise Gauss pseudo-spectral resolution methods; simulations showed higher computation efficiency and better control accuracy. Li et al. [99] used a hierarchical control method to reduce tire slip energy and improve vehicle stability with time-varying tire cornering stiffness and tire slip energy torque distribution. The authors handled tire slip energy with a semi-empirical tire slip energy model used with the aforementioned hierarchical control, which ensures active steering and yaw control. Simulation results proved that considering time-varying cornering stiffness and tire slip energy has significant advantages for the control performance. Lv et al. [100] proposed a disturbance observer-based state-error port-controlled Hamiltonian (DOBC-SEPCH) control strategy to optimize energy consumption and enhance the tracking performance of unmanned surface vehicles. A disturbance observer is introduced to estimate environmental perturbations, coupled with an energy-based controller designed using the SEPCH technique. The performance of the proposed approach was validated in simulations with stability guarantees. Several other geometric, kinematic, and dynamic controllers are reported in Amer et al. 
[18].
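Among the geometric controllers mentioned above, Pure Pursuit admits a particularly compact formulation; the sketch below uses illustrative wheelbase and look-ahead values, not parameters from the surveyed papers:

```python
import numpy as np

def pure_pursuit_steer(x, y, yaw, gx, gy, wheelbase=2.7, lookahead=5.0):
    """Classic pure-pursuit steering law (geometric controller).

    alpha is the angle from the vehicle heading to a goal point on
    the path at the look-ahead distance; the front steering angle
    follows from the bicycle geometry:
        delta = atan(2 L sin(alpha) / ld)
    Wheelbase L and look-ahead ld values here are illustrative.
    """
    alpha = np.arctan2(gy - y, gx - x) - yaw
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)

# Goal straight ahead -> zero steering; goal to the left -> positive angle.
straight = pure_pursuit_steer(0.0, 0.0, 0.0, 5.0, 0.0)
left = pure_pursuit_steer(0.0, 0.0, 0.0, 5.0, 2.0)
print(straight, left)
```

The look-ahead distance is the main tuning knob: a short look-ahead tracks tightly but oscillates, while a long one smooths the response at the cost of corner cutting.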

Controllers based on artificial intelligence
Besides model-based controllers, model-free controllers provide an alternative solution when modeling becomes complex or inaccurate. Such control strategies are mostly data-driven: data coming from different sensors such as cameras, LIDAR, and IMU are exploited to predict control signals for the steering, acceleration, or braking actions.

Supervised learning control
Neural networks have seen huge advancements recently [101], especially in the field of control. Autonomous driving applications are mostly based on fully connected neural networks [102], convolutional neural networks (CNN) [103], and recurrent neural networks (RNN) [104].
Fully connected networks are composed of multiple layers of neurons interconnected by synaptic weights; they are inspired by biological neural networks. Generally, they consist of one input layer, one output layer, and one or more hidden layers, which define the depth of the network. The training process of neural networks is mostly based on the gradient descent algorithm [105]. The forward pass propagates the inputs throughout the network layers, where each neuron normalizes its output by an activation function such as the Sigmoid [106] or the ReLU [107]. The prediction error at the output layer is backpropagated through the network to readjust its weights. CNNs differ from fully connected networks in that they contain convolution layers whose neurons are connected to only some neurons of the next layer. Weights are shared between multiple connections, which greatly reduces network complexity in terms of trainable parameters, leading to faster training. On the other hand, RNNs are able to retain memory; they contain feedback loops where layer outputs are not only sent to the next layer but also fed back to the previous layers. Common training problems in these types of neural networks include vanishing and exploding gradients [108], making the task of hyperparameter tuning more difficult. A typical example of each network is illustrated in Figure 10.
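The forward pass and backpropagation steps described above can be sketched for a tiny fully connected network; the layer sizes, learning rate, and toy steering-style regression target are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal fully connected net (one hidden layer) trained by gradient
# descent on a toy "steering" regression task (illustrative data).
X = rng.uniform(-1, 1, (200, 2))          # e.g. [lateral err, heading err]
y = 0.7 * X[:, :1] + 0.3 * X[:, 1:]       # toy steering target
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.5

for _ in range(2000):
    # forward pass: each layer applies its weights then an activation
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2                     # linear output layer
    err = out - y                         # prediction error
    # backward pass: propagate the error to readjust the weights
    dW2 = h.T @ err / len(X)
    db2 = err.mean(0)
    dh = err @ W2.T * h * (1 - h)         # sigmoid derivative
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(mse)  # small after training
```

The same forward/backward mechanics underlie the CNN-based steering controllers surveyed next; convolution layers only change how the weights are connected and shared, not the training principle.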
For instance, Sharma et al. [109] used deep learning to control vehicle lateral and longitudinal motion. The authors used CNNs in two different models; the first model tries to handle both lateral and longitudinal control tasks at once, and the second model consists of two CNN-based controllers for steering and speed control. Although different architectures with varying complexities were tested in the first model, none of them was able to handle the task and find patterns between input images and output control signals. However, using two separate CNN networks achieved acceptable results, with full autonomy obtained on two different racing tracks. The authors used the TORCS simulator for data collection and control tests. A complex CNN was required for the steering task, while the speed control network was much simpler. Nevertheless, data collection and training are tedious tasks in such strategies. Jhung et al. [110] proposed end-to-end steering control with CNN-based closed-loop feedback. The authors divided the training process into supervised pretraining and reinforced closed-loop posttraining, where the latter improves the control performance. The proposed controller was tested in simulators and on a real autonomous car and was able to perform lateral control with acceptable errors. Devineau et al. [111] used CNNs and fully connected networks to develop control strategies for coupled longitudinal and lateral control. The authors used a high-fidelity vehicle model to generate a data set for training; the inputs to the network are the trajectory and vehicle states, and the outputs are the steering angle and the applied torque on the wheels. Evaluation tests showed that both control strategies were able to handle the combined longitudinal and lateral control tasks, but the CNN-based controller performed better than the one based on a multilayer perceptron (MLP). Rausch et al. 
[112] also used CNNs to develop a steering controller; the network comprised three convolution layers and one fully connected layer. Steering angle data and front-facing camera footage, generated from simulations, were used to train the network to learn appropriate steering controls. In Eraqi et al. [113], an RNN was combined with a CNN to predict steering controls from input images; the RNN adds temporal dependencies, which enhanced the predictions and control performance by 35% compared with using only CNNs. These methods are known as supervised learning, since the data used to teach the network are labeled by an expert.

Reinforcement learning control
Reinforcement learning is a learning approach in which the network learns from trial and error rather than from labeled data; this is usually modeled as a Markov decision process (MDP), illustrated in Figure 11. In general, an MDP is defined by a state space S, an action space A, a state transition probability P, and a reward function R. The goal is to learn a policy π(s_t, a_t) that links rewards and actions such that at each iteration, the agent observes the set of states s_t and takes an action a_t from the action space A. The environment then transitions according to the probability model P. The agent observes a new set of states s_t+1 and receives a reward r_t for the performed action a_t. The policy π seeks the actions that maximize the rewards, enabling the agent to learn by applying actions and receiving rewards or penalties.
One of the most widely used reinforcement learning algorithms is Q-learning [114]; the latter is a value-based approach where the reward is quantified by a value function V(s), and the objective of the algorithm is to map each action to a value for each state. Another type of algorithm is based on the policy gradient; it parameterizes the policy instead of the value function to obtain the optimal policy. This is achieved through a loss function whose gradient is estimated with respect to the network parameters; the parameters are then adjusted according to the policy gradient. There are actor-critic algorithms as well; these use a hybrid approach combining value-based and policy gradient algorithms: a critic network estimates the value of each action, while an actor network estimates the optimal policy. Li et al. [115] combined deep learning and reinforcement learning to steer an autonomous vehicle; the authors used a multitask learning CNN to learn track features and localize the vehicle from camera images, and they employed policy gradient reinforcement learning to control the steering wheel in the virtual TORCS environment [116]. In Manuelli and Florence [117], LIDAR data were used to develop and test reinforcement learning controllers based on policy gradient and Q-learning for obstacle avoidance maneuvers.
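The Q-learning update can be illustrated on a toy lane-keeping MDP; the environment, reward shaping, and hyperparameters below are illustrative assumptions, not taken from the surveyed papers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy MDP for lane keeping: states are 5 lateral offsets, actions are
# steer left / keep / steer right; the agent is rewarded for staying
# centred (state 2). Everything here is illustrative.
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.2, 0.9, 0.1

def step(s, a):
    s_next = int(np.clip(s + (a - 1), 0, n_states - 1))  # a-1 in {-1,0,1}
    r = 1.0 if s_next == 2 else -abs(s_next - 2)         # centre = state 2
    return s_next, r

for _ in range(3000):
    s = int(rng.integers(n_states))                      # random start
    for _ in range(20):                                  # short episode
        # epsilon-greedy exploration over the current value estimates
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Q-learning update: bootstrap on the best next action
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)  # greedy policy per state
print(policy)              # steers toward the centre lane
```

The learned greedy policy steers right in the left states, stays put in the centre, and steers left in the right states, which is the discrete analogue of the lateral controllers learned in the works below.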
Wang et al. [118] used reinforcement learning to perform lane change maneuvers; the authors designed a Q-function approximator with a closed-loop greedy policy to improve the computation efficiency. Evaluation tests showed that smooth and efficient control policies were achieved for lane change maneuvers. The same authors developed a quadratic Q-network for autonomous driving in Wang et al. [119]; the quadratic form achieved more optimal control actions, which were tested in lane change maneuvers. In [120], deep Q-learning and actor-critic algorithms were used to learn optimal policies for lateral control. The developed controllers were tested in TORCS while considering interactions with other vehicles. The actor-critic algorithm, being a continuous deterministic approach, performed better by producing continuous control actions, especially in curves, where the Q-learning algorithm suffered from discontinuous control actions. Yu et al. [121] also designed deep Q-learning algorithms to steer the vehicle; the authors investigated the effect of several reward functions and experimented with different hyperparameters such as gradient update rules and double Q-learning.

Fuzzy logic control
Fuzzy logic systems seek to model human expertise by using linguistic variables; these variables conceptualize the fuzziness in human decision-making. Such systems do not require high computational costs, unlike conventional control methods, and they do not need any mathematical model, which makes them suitable for systems with complex models. Human operational experience is conceptualized into a fuzzy controller that consists of (IF/THEN) rules. These rules simulate the expert's "know-how" as an algorithm governed by fuzzy sets and membership functions [122]. An example of a fuzzy rule for steering control would be: if "position error" is negative, then "steering angle" is positive, where the position error is the input and the steering angle is the output. The downside of this control approach is the expertise needed to set the fuzzy rules. Fuzzy logic systems consist of four parts, as shown in Figure 12:
• The fuzzifier, which converts the crisp inputs into fuzzy sets (fuzzification).
• The rule base, which contains the fuzzy rules in the form of (IF/THEN) conditions.
• The inference module, which determines the matching degree between fuzzy inputs and rules.
• The defuzzifier, which extracts output values from the fuzzy sets (defuzzification).
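The four-part pipeline above can be illustrated with a minimal single-input fuzzy steering controller; the membership functions, rule base, and output angles are illustrative assumptions:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_steer(pos_err):
    """Minimal fuzzy steering controller (illustrative rules and sets).

    Fuzzifier: map the crisp position error to three fuzzy sets.
    Rule base: IF error is negative THEN steering is positive, etc.
    Inference: each rule fires with its membership degree.
    Defuzzifier: weighted average of the rule outputs.
    """
    mu_neg = tri(pos_err, -2.0, -1.0, 0.0)   # error negative
    mu_zero = tri(pos_err, -1.0, 0.0, 1.0)   # error zero
    mu_pos = tri(pos_err, 0.0, 1.0, 2.0)     # error positive
    w = np.array([mu_neg, mu_zero, mu_pos])
    out = np.array([15.0, 0.0, -15.0])       # consequents in degrees
    return float(w @ out / w.sum()) if w.sum() > 0 else 0.0

print(fuzzy_steer(-0.5), fuzzy_steer(0.0), fuzzy_steer(0.5))
```

Overlapping membership functions make the crisp output vary smoothly with the error, even though only three discrete rules are defined; practical controllers simply use more inputs and a larger rule base.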
Elsayed et al. [123] used fuzzy logic control for obstacle avoidance maneuvers; the control task was divided into collision detection and decision-making, each performed by a fuzzy controller. The obstacle distance and angle were chosen as inputs to the obstacle detection module, and the outputs were the steering maneuver and a safe cruising speed. The inputs to the second module were the tracking error and the distance to the obstacle, and the output defined the magnitude of the control signal.
The results showed smooth performance but were limited to straight roads only. Allou et al. [124] dealt with path tracking for vehicles with electromechanical wheel systems; they developed a fuzzy logic controller that handles both steering and speed control. The kinematic bicycle model was used, and the controller was simplified to only three fuzzy sets. However, the starting point and initial orientation were found to have a significant impact on controller performance. Ngo and Tran [125] designed a fuzzy inference system to guide an autonomous vehicle under varying loads. The controller receives the tracking error and load as inputs and computes the speeds of the left and right wheels, the latter being subject to saturation. The controller was implemented on a microprocessor and tested in several scenarios. Shui et al. [126] presented a novel data-driven predictive control; the authors based their approach on type-2 T-S fuzzy neural networks to achieve trajectory tracking of car-like mobile robots. The main advantage of such an approach is that it is completely independent of mathematical models, and simulation results showed that it can deal with the influence of uncertain factors and achieve higher tracking accuracy. Korkmaz et al. [127] designed an autonomous vehicle control system based on fuzzy logic and deployed it in a JavaScript racing game. The authors used a simple sliding-histogram-window approach to detect road lanes and used this information in fuzzy logic reference position and velocity generators. The fuzzy logic generators were based on a 7×7 rule base with triangular membership functions. Speed and position control were ensured by a PD controller given the reference values obtained from the fuzzy logic generators. Stoian [128] introduced a fuzzy control algorithm for vehicles moving close to boundaries; the authors used a four-motion-cycle program according to five collision proximity levels, which were obtained from sensor data. Five linguistic variables with 25 fuzzy rules were used to develop the fuzzy controller. Evaluation tests showed that the developed fuzzy controller performed well and was able to overcome sensor imperfections. Van et al. [129] proposed a fuzzy logic controller combined with a convolutional neural network for steering control. PilotNet [130] was modified and used to predict steering angles, which in turn were used together with the vehicle and steering angular velocities as inputs to the fuzzy logic system. The latter recommends optimal steering and velocity controls based on fuzzy rules inspired by human experience. Similarly, Chen et al. [131] used a CMAC neural network [132] with fuzzy logic to control an autonomous vehicle. The neural network provides the self-learning ability of the controller, which eliminates errors, while the fuzzy logic module improves the control quality by suppressing disturbances and increasing robustness. The latter comprises 56 fuzzy rules and takes the position and speed errors as inputs. The combined CMAC-fuzzy controller was tested on straight and curved roads, where its performance was superior to PID control. Several research papers have also used fuzzy logic to improve classical controllers, especially PID control in earlier studies [133-135], MPC control [38,136,137], and SMC [138-140].
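One common way fuzzy logic improves a classical controller is gain scheduling: fuzzy rules adapt the controller gains to the operating condition. The sketch below illustrates this for a PD steering law whose proportional gain is scaled by the error magnitude; the membership breakpoints, scaling factors, and gains are hypothetical values for illustration, not taken from the cited studies:

```python
# Fuzzy gain scheduling for a PD steering controller: a large tracking
# error fires the "large" rule and raises the proportional gain, a small
# error fires the "small" rule and lowers it; intermediate errors blend.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_gain(error, kp_base=1.0):
    """Blend the two gain rules by their membership degrees."""
    e = min(abs(error), 2.0)
    mu_small = tri(e, -2.0, 0.0, 2.0)   # |error| near 0 -> 'small'
    mu_large = tri(e,  0.0, 2.0, 4.0)   # |error| near 2 m -> 'large'
    # Rules: IF error small THEN gain 0.5*kp; IF error large THEN gain 2*kp.
    return kp_base * (0.5 * mu_small + 2.0 * mu_large) / (mu_small + mu_large)

def fuzzy_pid_steer(error, error_rate, kd=0.2):
    """PD steering command with the fuzzy-scheduled proportional gain."""
    return -(fuzzy_gain(error) * error + kd * error_rate)
```

The same pattern extends to scheduling the integral and derivative gains, or to adapting MPC weights and SMC switching gains, as in the works cited above.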
Both model-based and model-free control methods have advantages and drawbacks. In many situations, they can complement each other for a better overall solution; a comparative summary is provided in Table 1.

CONCLUSIONS
This article provided a technical review of the latest research on path tracking and lateral control techniques for autonomous driving. Several control formulation concepts were presented and discussed; the limits and strengths of each control approach were highlighted and compared. However, this review does not claim to provide an exhaustive and thorough discussion of the subject. Based on the reviewed research, it has been found that vehicle steering systems are rarely considered in the control design, which makes this an attractive area for future developments. Furthermore, faulty sensors and sensing limits need to be considered in control design, which is rarely the case in the literature. The trend seems to shift more and more towards data-based methods and black-box techniques over classical control approaches. However, the majority of published works tend to focus on control performance optimization and disregard constraints and safety conditions. Moreover, stability assurance is still a major barrier to the deployment of such approaches despite their good performance.
Model predictive control has become one of the best control methods for the path tracking task due to its ability to enforce state and input constraints. However, stability analysis for this type of control is still a challenge. H∞, SMC, and backstepping control are a great choice for dealing with nonlinearities, parameter uncertainties, and disturbances, at the expense of complicated theoretical derivations and design approaches. LQR, PID, and geometry-based controllers are suitable for low-speed applications and simplified models with negligible disturbances and uncertainties. Except for MPC, most model-based controllers cannot impose constraints on system inputs, outputs, and states. This is a very important quality for guaranteeing safety and comfort in automated guidance, especially for preserving stability conditions and enforcing actuator limits.
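The constraint-handling property of MPC can be illustrated with a deliberately simplified receding-horizon sketch: steering is chosen only from an admissible candidate set, so the actuator limit holds by construction. Real MPC implementations solve a quadratic program instead of the brute-force search used here, and the kinematic bicycle model, cost weights, and limits below are illustrative assumptions:

```python
import itertools
import math

DELTA_MAX = 0.3                              # steering limit [rad]
CANDIDATES = [-0.3, -0.15, 0.0, 0.15, 0.3]   # every candidate is admissible
L, V, DT = 2.7, 5.0, 0.1                     # wheelbase [m], speed [m/s], step [s]

def step(state, delta):
    """Kinematic bicycle model; state = (lateral error, heading error)."""
    e, psi = state
    return (e + V * math.sin(psi) * DT,
            psi + V / L * math.tan(delta) * DT)

def mpc_steer(state, horizon=3):
    """Search all steering sequences over the horizon, accumulate a
    quadratic tracking cost, and apply only the first input of the best
    sequence (the receding-horizon principle)."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(CANDIDATES, repeat=horizon):
        s, cost = state, 0.0
        for delta in seq:
            s = step(s, delta)
            cost += s[0] ** 2 + 0.1 * s[1] ** 2 + 0.01 * delta ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0
```

At each sampling instant the optimization is repeated from the newly measured state, which is what gives MPC its feedback character; the difficulty raised above is that proving closed-loop stability of this repeated finite-horizon optimization is nontrivial.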
On the other hand, model-free controllers are a key solution when modeling becomes a daunting task. These methods become a very attractive approach when sufficient data are available; they can handle multiple tasks without being affected by the nonlinear characteristics of the system. AI-based methods can also be used to enhance and improve model-based controllers by learning complex models and control algorithms; the main downside of these methods is the lack of interpretability and a black-box nature that cannot guarantee stability and other physical properties.
In general, despite the huge advancements seen lately in the field of autonomous driving, several challenges, such as unmodeled steering systems, discontinuous data, and immeasurable parameters, still persist and require extensive efforts to be resolved.

FIGURE 3 Full dynamic model

FIGURE 4 Model predictive control (MPC) block diagram


FIGURE 10 Examples of neural networks
FIGURE 11
FIGURE 12