Unstructured road parameter cognition for ICVs using multi‐frame 3D point clouds

School of Vehicle and Mobility, State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing, China State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, Hunan, China Department of Automation, University of Science and Technology of China, Anhui, China Institute of Advanced Technology, University of Science and Technology of China, Hefei, China


| INTRODUCTION
Intelligent and connected vehicles (ICVs) have been studied extensively. They can reduce fuel consumption, pollution, and collisions, and improve traffic flow [1]. Although ICVs are expected to improve transportation systems, their comprehensive deployment on public roads is still challenging owing to the immaturity of perception and decision techniques. Therefore, researchers are applying ICVs in simple scenarios such as ports, industrial parks, farms, and construction sites. In these scenarios, there are few social vehicles and road users (e.g. pedestrians, cyclists, and scooters), and operating vehicles mainly work in a circulation pattern. The application of ICVs in such scenarios is therefore feasible; it can improve personnel safety and operational efficiency while reducing economic costs and energy consumption [2].
In some of these areas, ICVs are faced with the challenges of dusty, rainy, snowy, and unstructured road conditions, which may greatly affect the performance of the perception and motion control of ICVs. In particular, unstructured roads often have uneven slope changes and high roughness, and these road conditions sometimes gradually vary, which makes it impossible to integrate this information into digital maps. To solve these problems, unstructured road parameters need to be recognized in real time using onboard sensors, which can be applied to the optimization of path planning and control strategies for ICVs. For example, vehicles can bypass uneven roads or accelerate in advance on steep slopes.
Typical onboard sensors for road parameter cognition include radio detection and ranging (RADAR), cameras, and light detection and ranging (LIDAR). RADAR can only provide the position and speed of moving obstacles, without specific contour information, and it cannot obtain sufficient information about the ground. Cameras can capture detailed information but are influenced by illumination and limited in scale. For example, ICVs operated in ports or at construction sites often need to work at night, where cameras cannot function. Compared with RADAR and cameras, LIDAR provides rich point cloud data that can describe the details of the ground and objects, and it is unaffected by illumination. Therefore, LIDAR is an ideal sensor for road parameter cognition.
There are some studies on road parameter cognition. Rychkov et al. used centroid thinning to build a piecewise linear ground model based on high-resolution terrestrial laser scanning point clouds, and proposed the detrended standard deviation of relative elevations as a measure of surface roughness [3]. Guo proposed an unstructured road information recognition method by efficiently combining colour information with the grey feature of roads [4]. Yan et al. proposed a road parameter cognition method for unstructured roads using single-frame three-dimensional (3D) point clouds [5]. Dawkin proposed methods of characterizing terrain roughness by generating terrains based on the Weierstrass-Mandelbrot fractal function, and developed a seven-degree-of-freedom suspension model to evaluate the response of the vehicle on the generated terrains [6]. Kumar and Mills used 3D point clouds to fit a surface grid of the road surface and estimated roughness using the elevation difference between the points and their surface grid equivalents [7]. To rate the level of a highway, Wang et al. proposed a method to estimate road slopes and superelevations (the amount by which the outer edge of a curve on the road or railroad is banked above the inner edge) using hierarchical segmentation and plane cognition [8]. Regarding segmentation, Zhou et al. demonstrated that the scan line-based method could extract slopes effectively from point clouds [9], and Narksri et al. proposed a slope-robust cascaded ground segmentation method in which the effect of slope on segmentation was considered [10]. Generally, most previous studies mainly focused on road-level rating and 3D road reconstruction rather than cognition of the parameters of unstructured roads. Also, most studies considered only a single road parameter, which cannot meet the requirements of ICVs on unstructured roads and is applicable only to roads without obstacles.
Furthermore, most existing research did not study correlations among multiple frame parameters.
In this study, a parameter cognition algorithm is proposed for ICVs to recognize unstructured road parameters using multi-frame 3D point clouds. First, a region of interest (ROI) extraction and division method based on multiple features is proposed, which is used to exclude obstacles on the roads such as pedestrians and other vehicles; it also divides roads into different regions with different slopes. Longitudinal and lateral slopes are then recognized by calculating the angles between two reference planes fitted using multi-region random sample consensus (RANSAC) and least squares, and an index is proposed to evaluate the roughness of roads. To improve cognition accuracy, a multi-frame parameter fusion method is proposed.
The main contributions of this research are as follows: First, the ROI extraction and division method based on multiple features can make this algorithm applicable in the presence of obstacles or multiple slopes on roads, which did not receive enough attention in previous studies. Second, compared with existing studies on paved roads that considered only longitudinal slopes and roughness, this algorithm can recognize slopes and roughness along the lateral direction by dividing 3D point cloud data into multiple subspaces in lateral directions, which may facilitate path planning and the control strategy design of ICVs operating on unstructured roads. Finally, the multi-frame fusion method can fuse recognized parameters in adjacent frames, which is helpful for improving accuracy and robustness.
The rest of this work is organized as follows: Section 2 introduces the ROI extraction and division method. Section 3 details the parameter cognition algorithm, including the selection and cognition of road parameters. Section 4 demonstrates the multi-frame fusion method. Section 5 presents experimental test results on various road sections. Finally, conclusions are presented in Section 6.

| EXTRACTION AND DIVISION OF REGIONS OF INTEREST
To extract the ground points and divide them into different regions with different slopes, ROI extraction and division based on multi-features is proposed in this section. As shown in Figure 1, the proposed method consists of four main steps. It takes as input the disordered point clouds acquired from multiple LIDARs and outputs multi-region ROIs with different road parameters.

| Indexing with two-dimensional grids
Point clouds obtained from the multiple LIDARs are disordered, which greatly increases computational complexity and affects the real-time performance and robustness of the algorithm. To facilitate the sorting, indexing, and searching of point clouds, they are rasterized and stored in ordered 2D grids. It is necessary to ensure that the point clouds in the same grid belong to the same type of object for the benefit of the next step. Relative to the size of obstacles on the roads, the point clouds in a single grid can be considered to represent the same object if the unit grid is small enough; most point clouds in the same grid then have similar or identical characteristics. Considering that the data precision represents the maximum area over which the same characteristics can be maintained, this algorithm selects the precision of the LIDAR as the size of the grids, whose effect is verified in the experimental section. Each grid is then the basic unit of data processing: each unordered point is placed into its corresponding grid, the point clouds stored in each grid are processed uniformly, and each grid is marked with its type (such as ground grid or non-ground grid).
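As a minimal sketch, the rasterization step can be implemented as follows; the 0.05 m cell size and the (x, y, z, intensity) tuple layout are illustrative placeholders rather than the paper's exact settings:

```python
import math
from collections import defaultdict

def rasterize(points, cell=0.05):
    """Index unordered (x, y, z, intensity) points into ordered 2D grid cells.

    `cell` is the grid edge length in metres; the paper ties it to the
    LIDAR's range precision, so 0.05 m here is only a placeholder.
    """
    grid = defaultdict(list)
    for p in points:
        # Quantize the (x, y) coordinates to obtain an integer cell key.
        key = (math.floor(p[0] / cell), math.floor(p[1] / cell))
        grid[key].append(p)  # every point in a cell is treated as one object
    return grid

# Two nearby points fall into one cell; the third lands elsewhere.
points = [(0.01, 0.02, 0.00, 10.0),
          (0.03, 0.01, 0.10, 12.0),
          (1.07, 1.02, 0.50, 80.0)]
grid = rasterize(points)
```

Each cell then becomes the basic processing unit for the extraction step that follows.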

| Extraction using height and intensity
According to the characteristics of ground points and non-ground points, the height differences and reflection intensities of the point clouds in each grid are used to perform the initial extraction of ground points and non-ground points. Non-ground object point clouds (such as vehicles and pedestrians) often have large height differences, whereas the height differences of ground point clouds are small. Moreover, as shown in Figure 2, the reflection intensities of different materials are distinguishable, and ground points often have lower intensities, which can be used to discriminate them. Because the point clouds in each grid belong to the same type of object, this research divides grids into ground grids and non-ground grids. Therefore, the height difference and reflection intensity of the point clouds in a single grid can be used as criteria for ground extraction. The maximum and minimum heights of points in a single grid are denoted by high_max and high_min, and the average height and average reflection intensity of the point clouds in a single grid are denoted by high_avg and ri_avg, respectively. Then, the criteria for ground extraction are given as

high_max − high_min ≤ ε_high, high_avg ≤ α_high, ri_avg ≤ α_ri,

where ε_high is the predefined height-difference threshold, and α_high and α_ri are the predefined thresholds of height and intensity, respectively. A grid is marked as a ground grid when all the criteria are satisfied. Points in the ground grids extracted using the height and intensity features are put into set S_1. Through this initial extraction, not only ground objects (such as vehicles and pedestrians) but also hanging objects (such as tree branches and billboards) can be approximately filtered out.
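Under the assumption that the three criteria take the thresholded form described above, a minimal sketch of the per-grid ground test might look like this (all threshold values are illustrative):

```python
def is_ground_cell(cell_points, eps_high=0.15, alpha_high=0.3, alpha_ri=40.0):
    """Mark a grid cell as ground when all three criteria hold.

    cell_points: list of (x, y, z, intensity) tuples for one grid cell.
    eps_high, alpha_high, alpha_ri are illustrative thresholds only.
    """
    zs = [p[2] for p in cell_points]
    ris = [p[3] for p in cell_points]
    height_diff = max(zs) - min(zs)   # small for ground cells
    high_avg = sum(zs) / len(zs)      # ground sits near the sensor's z = 0
    ri_avg = sum(ris) / len(ris)      # ground tends to reflect weakly
    return (height_diff <= eps_high
            and high_avg <= alpha_high
            and ri_avg <= alpha_ri)
```

Cells passing the test contribute their points to set S_1.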

| Extraction using neighbouring-ring distances
Multilayer LIDAR equipped on ICVs usually consists of different scan lines in the vertical direction, especially rotating LIDAR. These scan lines are distributed in the vertical direction with an absolute angle difference and are reflected by objects at different pitch angles. (Figure 2 shows point clouds with different reflection intensities; the brighter the points are, the higher the intensity is.) As the LIDAR rotates, the point cloud forms different concentric rings, and points in the same ring are obtained by the same scan line. If there is only flat ground in the scan range, these rings are distributed at distances determined by the pitch angles when the position and installation direction of the LIDAR are fixed. However, in practical applications, the neighbouring-ring distances are influenced by many factors, especially the slope of the scanned surface. As shown in Figure 3, the neighbouring-ring distances on the ground are long and increase as the distance from the LIDAR increases, whereas the neighbouring-ring distances are close to zero on obstacles and other non-ground objects. Therefore, this characteristic can be used for further ROI extraction from point set S_1.
As shown in Figure 4, the LIDAR is installed at a known height H from the ground. The pitch angles of the neighbouring scans are known to be θ_i and θ_{i+1}. When both irradiate flat ground, the neighbouring-ring distance d_flat is determined by H, θ_i, and θ_{i+1} (for depression angles measured from the horizontal, d_flat = H(cot θ_{i+1} − cot θ_i)), while d_slope is used to express the distance when the neighbouring scans irradiate a surface with slope β. Therefore, the ROIs can be extracted by comparing the distances between the points in set S_1 with a distance threshold.
Considering that ROIs often have slopes, distance d_flat cannot be used directly as the threshold. The neighbouring-ring distance on a selected slope β_th is used as the threshold d_th, where the pitch angles θ_i and θ_{i+1} are known from the point cloud data. The details of extracting ROIs using neighbouring-ring distances are presented in Algorithm 1. Points in set S_1 are expressed as p_{i,j}, where i is the scan number, which is known from the data, and j denotes points at the same horizontal angle. If the neighbouring point p_{i+1,j} does not exist, point p_{i,j} is put into set S_2. Otherwise, the distance between p_{i,j} and p_{i+1,j} is compared with the calculated threshold d_th; if the distance is larger, point p_{i,j} is put into set S_2. Points with smaller distances are marked as object points. Therefore, point set S_2 is the final result of the ROI extraction method based on the multi-features.
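Algorithm 1 can be sketched roughly as follows; for brevity, the slope-aware threshold d_th is passed in as a single constant rather than computed per ring pair from β_th and the pitch angles:

```python
import math

def ring_distance_filter(S1, d_th):
    """Algorithm 1 sketch: keep points whose neighbouring-ring distance
    exceeds the threshold d_th (ground), or that have no neighbour on the
    next ring; points with small ring spacing are treated as object points.

    S1: dict mapping (scan index i, azimuth index j) -> (x, y, z).
    """
    S2 = []
    for (i, j), p in S1.items():
        q = S1.get((i + 1, j))
        if q is None:                  # no neighbour on the next ring
            S2.append(p)
            continue
        d = math.dist(p[:2], q[:2])    # horizontal neighbouring-ring spacing
        if d > d_th:                   # wide spacing -> likely ground
            S2.append(p)
    return S2

# Rings 0 and 1 are widely spaced (ground); ring 1 -> 2 spacing is tiny
# (an object); ring 2 has no neighbour above it.
S1 = {(0, 0): (1.0, 0.0, 0.0), (1, 0): (3.0, 0.0, 0.0), (2, 0): (3.1, 0.0, 0.0)}
S2 = ring_distance_filter(S1, d_th=0.5)
```

Here the point on ring 1 is dropped because its neighbouring-ring distance to ring 2 is below the threshold.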

| Division based on multi-features
The coverage of a frame of point clouds can exceed 100 m. Because the ground may have different slopes and other variations, the ROIs extracted over this coverage cannot be regarded as a single plane. Considering that the ground can be treated as planar in parts, the ROIs are divided into local blocks along the x-axis direction (the vehicle's forward direction) based on the neighbouring-ring distances and the characteristics of the LIDAR data.
As shown in Figure 5, when the scans irradiate flat ground, the distance of neighbouring rings is equal to the expected value d_flat. When they irradiate a slope, the distance decreases as the angle increases. This feature can be expressed as the difference between the measured neighbouring-ring distance and the expected value. For given vertical scan angles θ_i and θ_{i+1}, when the slope is β and the average height of the ith ring's ground points is h, the difference is

Δd = d_flat − d_slope,

where d_flat and d_slope are as defined above. With θ_i, θ_{i+1}, H, and h known, the value of Δd depends only on the angle β. Therefore, slopes with different angles can be distinguished by the difference Δd, and the points in set S_2 are divided into multiple regions with different slopes.
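A rough sketch of the Δd-based labelling, assuming depression angles measured from the horizontal (so the flat-ground ring radius is H/tan θ) and purely illustrative bin boundaries:

```python
import math

def flat_ring_spacing(H, th_near, th_far):
    """Expected neighbouring-ring spacing on flat ground.

    Assumes depression angles measured from the horizontal
    (th_near > th_far), with the LIDAR mounted at height H, so that the
    ring radius of a scan with angle theta is H / tan(theta).
    """
    return H * (1.0 / math.tan(th_far) - 1.0 / math.tan(th_near))

def slope_label(measured, expected, bins=(0.05, 0.15, 0.30)):
    """Map the shortfall dd = expected - measured to a coarse slope class."""
    dd = expected - measured
    for k, b in enumerate(bins):
        if dd < b:
            return k          # class 0 = flat, larger class = steeper
    return len(bins)
```

Consecutive ring pairs sharing a slope class are then merged into one region.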
The data of LIDAR are dense close to the origin of the coordinates and sparse away from the origin. To ensure that each region has enough data and that the region close to the vehicle's centre has enough detail, regions are further divided according to density. As shown in Figure 6, the region close to the vehicle occupies most of the area in the frame; to ensure that it has enough detail, it is further divided by calculating the number of point clouds in different sections away from the origin (divided by blue lines). In the actual process, the division width close to the origin is small, whereas the width away from the origin is large; the width of the farthest region can even reach half the total area.

| ROAD PARAMETER COGNITION
This section presents the cognition method of road parameters. To make up for deficiencies in previous work, this research first discusses which parameters are needed and then defines them. Moreover, reference planes are fitted using multi-region RANSAC and least squares, which are used to represent the reference ground of each extracted region. Then, the detailed parameter calculation process and formulas are presented.

| Parameter selection
This part discusses information that ICVs need and which can be estimated using the data of 3D point clouds. When ICVs operate on unstructured roads, the vibration shock caused by uneven roads may lead to metal fatigue and may be harmful to humans and goods. As a result, it could shorten the life of the vehicle and increase fuel consumption. Roads with high roughness are often associated with accidents; vehicles may go out of control or run off the roads, and the rougher the road surface, the higher the accident rate [11,12]. With estimated information about the roughness of some areas, vehicles can easily bypass parts with high values of roughness. Under this condition, vehicles can run on unstructured roads safely and comfortably. Thus, roughness is a critical parameter and needs to be estimated [13,14].
As shown in Figure 7, slopes can be divided into longitudinal, θ_x, and lateral, θ_y; the former is along the vehicle's direction of travel and the latter is perpendicular to it. The longitudinal slope mainly affects the acceleration of vehicles, whereas the lateral slope mainly affects the speed.

| Multi-region reference plane fitting
After the point clouds are divided into independent regions, it is necessary to fit the plane represented by the ground of each region. The point clouds of set S_2 in each region are used as the points for fitting the plane of the road. The process of ROI extraction is rough, and some noise is caused by measuring instruments and other factors, so filtering the marked point clouds is necessary. The RANSAC method is used here to denoise the point clouds obtained by the initial extraction and obtain the inner points (ground points) required to fit the ground planes. Then, the reference planes are fitted with these inner points using the least squares method.
The RANSAC method has the feature of judging whether the data satisfy the fitted model [15]. The RANSAC noise reduction process includes four steps: (1) Three points are randomly selected as a random point set, because three points are needed to determine a plane equation model. (2) Calculate the plane equation Ax + By + Cz + 1 = 0 represented by the three selected points. (3) Calculate the error of all point clouds relative to the plane obtained in the second step, find the point clouds whose error is smaller than the set threshold, and put them into the consistency point set. (4) Repeat the first to third steps and select the consistent point set with the largest number of point clouds until the iterator reaches its maximum value. The maximum number of iterations is calculated from the set accuracy. The probability that a point randomly selected from the data set is an interior point is

ω = n_inliers / (n_inliers + n_outliers),

where n_inliers is the number of point clouds belonging to the roads, and n_outliers is the number of outlier points. Inspired by Fischler and Bolles [16], three non-collinear points are needed to represent the plane in this problem. Thus, the number of iterations k required to achieve the set correct probability P can be calculated as

k = log(1 − P) / log(1 − ω³).

The inner points obtained by RANSAC are presented as point set S_3, which can be approximated as the ground points. However, only three non-collinear points are selected to calculate the reference plane, which is not enough. Therefore, the reference planes are further fitted to these inner points using the least squares method. The plane equation is set as Ax + By + Cz + D = 0, where A, B, and C are the elements of the unit normal vector (A, B, C) of the plane. The least-squares plane is obtained by minimizing the sum of squared distances from the n points of S_3, with coordinates (x_n, y_n, z_n), to the plane, from which the normal vector (A, B, C) is obtained.
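A compact sketch of the two-stage fit; the SVD-based least-squares refit and the fixed inlier-ratio guess w are assumptions standing in for the paper's elided closed-form solution:

```python
import random
import numpy as np

def ransac_plane(pts, err_th=0.05, P=0.99, w=0.5):
    """RANSAC inlier selection followed by a least-squares plane refit.

    pts: (n, 3) array. The iteration count follows k = log(1-P)/log(1-w^3),
    with w an assumed inlier ratio (a guess here, not estimated online).
    """
    k = int(np.ceil(np.log(1 - P) / np.log(1 - w ** 3)))
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(k):
        i, j, m = random.sample(range(len(pts)), 3)
        n = np.cross(pts[j] - pts[i], pts[m] - pts[i])
        nn = np.linalg.norm(n)
        if nn < 1e-9:                     # collinear sample, resample
            continue
        n = n / nn
        d = np.abs((pts - pts[i]) @ n)    # point-to-plane distances
        inliers = d < err_th
        if inliers.sum() > best.sum():
            best = inliers
    S3 = pts[best]
    # Least-squares refit: plane through the centroid; the normal is the
    # singular vector of the centred inlier cloud with smallest singular value.
    c = S3.mean(axis=0)
    _, _, vt = np.linalg.svd(S3 - c)
    return vt[-1], c, S3

random.seed(0)
# 20 ground points on z = 0 plus 3 elevated outliers.
pts = np.array([[x, y, 0.0] for x in range(5) for y in range(4)]
               + [[0.0, 0.0, 5.0], [1.0, 1.0, 6.0], [2.0, 2.0, 7.0]])
normal, centroid, S3 = ransac_plane(pts)
```

The recovered normal should be close to (0, 0, ±1) and the inlier set should contain the 20 ground points.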

| Road slope cognition
The road slopes that are needed are the slopes relative to the ground the ICV is on, which provide the vehicle with information about when and where to control its speed. The absolute slope can be calculated with an inertial measurement unit and is not estimated here. The relative slopes between two regions of the ground are determined by the angles between two reference planes along the X-axis direction (the direction of vehicle travel) and the Y-axis direction (perpendicular to the X-axis); these angles represent the longitudinal and lateral slopes. Thus, the longitudinal slope can be calculated as the angle between the vectors (A_i, 0, C_i′) and (A_{i+1}, 0, C_{i+1}′), where C_i′ is the X-axis component of C_i, given as

C_i′ = C_i · A_i / √(A_i² + B_i²).

Thus, the longitudinal slope θ_x can be expressed as

θ_x = arccos( |A_i·A_{i+1} + C_i′·C_{i+1}′| / (√(A_i² + C_i′²) · √(A_{i+1}² + C_{i+1}′²)) ).

The lateral slope can be calculated as the angle between the vectors (0, B_i, C_i″) and (0, B_{i+1}, C_{i+1}″). Therefore, the lateral slope θ_y can be similarly expressed as

θ_y = arccos( |B_i·B_{i+1} + C_i″·C_{i+1}″| / (√(B_i² + C_i″²) · √(B_{i+1}² + C_{i+1}″²)) ),

where C_i″ = C_i · B_i / √(A_i² + B_i²). Note that θ_x and θ_y are both relative slopes between two adjacent reference planes rather than absolute slopes relative to the horizontal plane.
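A simplified reading of the slope computation, taking the projections of the two unit normals onto the xOz and yOz planes (the C′, C″ scaling above is folded into the normalization here, which is an assumption of this sketch):

```python
import math

def relative_slopes(n1, n2):
    """Longitudinal/lateral relative slope between two unit plane normals.

    n1, n2: normals (A, B, C) of two adjacent reference planes. theta_x is
    the angle between the x-z projections, theta_y between the y-z ones.
    """
    def angle(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        nu = math.hypot(*u)
        nv = math.hypot(*v)
        # Clamp to guard against floating-point overshoot in acos.
        return math.acos(min(1.0, abs(dot) / (nu * nv)))

    A1, B1, C1 = n1
    A2, B2, C2 = n2
    theta_x = angle((A1, C1), (A2, C2))   # x-z projection -> longitudinal
    theta_y = angle((B1, C1), (B2, C2))   # y-z projection -> lateral
    return theta_x, theta_y
```

For a flat reference plane against a plane tilted 30° about the y-axis, this yields a 30° longitudinal slope and a zero lateral slope.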

| Road roughness cognition
In previous research, to approximate the International Roughness Index, elevation values are often used to compute an estimated roughness index; for instance, the standard deviation of the relative elevation of each point has been selected as the estimation index. Such indexes represent the dispersion of points within each subset of the ground rather than the height deviation from a reference ground.
Moreover, most can provide information about roughness only along the longitudinal direction, which is not enough for an intelligent and connected vehicle. The root mean square error is used to measure the deviation between observed values and true values.
Here, the reference plane can be viewed as the plane on which the ground points should lie, so the number of points off the plane and their distances from the reference plane can represent the roughness of the road. The index of roughness is defined as

δ = √( (1/n) · Σ_{i=1}^{n} d_i² ),

where d_i is the distance between point i and its corresponding reference plane, which can be expressed as

d_i = |A·x_i + B·y_i + C·z_i + D| / √(A² + B² + C²).
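Assuming the root-mean-square form discussed above, the roughness index can be sketched as:

```python
import math

def roughness(points, plane):
    """RMS point-to-plane distance, read here as the roughness index delta.

    plane = (A, B, C, D) with Ax + By + Cz + D = 0. The RMS form is an
    assumption guided by the RMSE discussion; the paper's exact index is
    not reproduced here.
    """
    A, B, C, D = plane
    norm = math.sqrt(A * A + B * B + C * C)
    ds = [abs(A * x + B * y + C * z + D) / norm for x, y, z in points]
    return math.sqrt(sum(d * d for d in ds) / len(ds))
```

Points scattered ±0.1 m about the plane z = 0, for example, give an index of 0.1.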

| MULTI-FRAME PARAMETER FUSION
In single-frame point clouds, the cognition of parameters is usually influenced by the measurement error of the LIDAR, and the bumps of the vehicle can significantly decrease the estimation accuracy. To improve the accuracy of cognition, a multi-frame parameter fusion method is proposed. The time interval between two adjacent frames of the LIDAR is short, and the same area can be found in adjacent frames. Therefore, the parameters of the last two frames are used to determine the reliability and error of the parameters recognized in the current frame. The parameters of area j at the kth frame are expressed as the matrix

Θ_{k,j} = (θ_jx^(k), θ_jy^(k), δ_j^(k)),

where θ_jx^(k) and θ_jy^(k) are the longitudinal and lateral slopes of area j at the kth frame, and δ_j^(k) is the value of roughness using the proposed index.
Therefore, the parameters of the last two frames are presented as Θ_{k−1,j} and Θ_{k−2,j}. However, the relative position of area j in each frame varies as the vehicle travels. Therefore, to determine the position of area j in the (k − 1)th and (k − 2)th frames, the iterative closest point (ICP) method is used to register the background points of the (k − 1)th and (k − 2)th frames to the kth [17]. Through the ICP method, the relative positions of area j in the last two frames can be calculated as

P_{k−1,j} = R_{k−1}·P_{k,j} + T_{k−1}, P_{k−2,j} = R_{k−2}·P_{k,j} + T_{k−2},

where P_{k,j}, P_{k−1,j}, and P_{k−2,j} are the relative positions of area j in the three consecutive frames, R_{k−1} and R_{k−2} are the rotation matrices calculated by ICP, and T_{k−1} and T_{k−2} are the translation matrices.
If the cognition accuracy is high, the parameter difference between consecutive frames should be small. Therefore, the relative error of the parameters between the current frame and the last two frames is used to determine the accuracy of the current-frame parameters. Taking roughness as an example, the relative error of area j at the kth frame is expressed as

ϵ_δj^(k) = λ_1·|δ_j^(k) − δ_j^(k−1)| / δ_j^(k−1) + λ_2·|δ_j^(k) − δ_j^(k−2)| / δ_j^(k−2),

where λ_1 and λ_2 are the correlation coefficients of the error, which represent the correlation of the consecutive frames. When ϵ_δj^(k) exceeds the predefined error threshold ϵ_δ^th, the roughness at the current frame is deemed invalid and is updated from the values of the last two frames; otherwise, the recognized value δ_j^(k) is kept. The final parameters of area j at the kth frame are expressed as the matrix Θ̂_{k,j}.
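A sketch of the validity check; the fallback value (the mean of the two history frames) and the coefficient values are assumptions, since the paper's exact update equations are not reproduced here:

```python
def fuse_roughness(d_k, d_k1, d_k2, lam1=0.6, lam2=0.4, eps_th=0.3):
    """Check the current-frame roughness d_k against two history frames.

    d_k1, d_k2: roughness of the same (ICP-registered) area in the last
    two frames. lam1/lam2/eps_th and the mean-based fallback are
    illustrative choices, not the paper's exact values.
    """
    eps = (lam1 * abs(d_k - d_k1) / d_k1
           + lam2 * abs(d_k - d_k2) / d_k2)
    if eps > eps_th:                 # current frame deemed invalid
        return 0.5 * (d_k1 + d_k2)   # fall back on the history frames
    return d_k
```

A value consistent with its history passes through unchanged; an outlier is replaced by the history-based estimate.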

| EXPERIMENTS AND RESULT ANALYSES
To test the parameter cognition algorithm for unstructured roads, real-world experiments were conducted on a construction site. The results are presented following the steps of the algorithm, including extraction results, division results, slope-cognition results, and roughness-cognition results.

| Experiment conditions
The road experiments were conducted on a construction site whose environment includes unstructured roads, different slopes, and uneven roads. The experimental road is shown in Figure 8a. As shown in Figure 8b, the ICV is equipped with two 32-line LIDAR sensors and four 16-line LIDAR sensors. The four 16-line LIDAR sensors are deployed on the lower part of the vehicle, whereas the two 32-line LIDAR sensors are on the top.

| Extraction results
To verify the results of extracting ROIs, the results of different methods are compared in Figure 9, where (d) is the result of extraction using the proposed algorithm with an angle threshold of β_th = π/6. The orange points represent the extracted ground, the white points represent obstacles on the ground, and the yellow and green points represent suspended objects such as trees or billboards. (a) is the result of using only the height difference; as shown in the blue circle, points near cars and road edges cannot be extracted, so this method may become invalid if traffic is heavy. (b) and (c) explain the reason for selecting the precision of the LIDAR as the grid size. (b) is the result of using bigger grids; there should be ground points in the blue circles, but they are not extracted (under-division) because the ground points in those grids are marked as cars or road edges. (c) is the result of using smaller grids, which extracts the cars and other obstacles as ground because the grid is so small that the same objects are recognized as different ones (over-division).
For further analysis, the true positive rate (TPR) and false positive rate (FPR) [18] are selected as indicators to evaluate the results of (b)-(d). The quantitative values are presented in Table 1. The indicators are defined as TPR = TP / (TP + FN) and FPR = FP / (FP + TN),
where TP is the number of ground point clouds correctly marked, FN is the number of ground point clouds incorrectly marked, FP is the number of non-ground point clouds incorrectly marked, and TN is the number of non-ground point clouds correctly marked. The larger the TPR value, the larger the proportion of ground point clouds correctly segmented; the smaller the FPR value, the smaller the proportion of non-ground point clouds misclassified.
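The two indicators reduce to two ratios:

```python
def tpr_fpr(tp, fn, fp, tn):
    """TPR = TP / (TP + FN); FPR = FP / (FP + TN)."""
    return tp / (tp + fn), fp / (fp + tn)
```

For example, 90 of 100 ground points correctly marked and 5 of 100 non-ground points misclassified give TPR = 0.9 and FPR = 0.05.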
As shown in Figure 9 and Table 1, the proposed algorithm outperforms the method using only the height difference. The grid size selected by the proposed method is suitable, whereas a larger size causes more ground points to be missed, and a smaller size causes many non-ground points to be mistaken for ground.

| Division results
To test the division method based on the multi-features, an experiment was conducted on an unstructured road with different slopes. As shown in Figure 10, each area in the different yellow frames represents the ground with different slopes. By observing the figure, these slopes can be estimated through the distance of neighbouring rings: slope 1 has the smallest angle, slope 2 has a slightly larger angle, slope 3 has the maximum angle, and slope 4 has a smaller angle than slope 2.
In the division experiment, the unstructured road was divided into four parts based on the neighbouring-ring distances; then, the first part was divided into two parts based on density. As shown in Figure 11, the red lines are the dividing lines obtained using the neighbouring-ring distance feature, whereas the green line is obtained using the density feature. Comparing the results with the frames in Figure 10, this division method performs excellently in dividing the unstructured road into regions with different slopes.

| Slope-cognition results
The slope-estimation algorithm was tested to estimate the slopes in the construction site and the non-sloped road in the urban environment (as shown in Figure 12a,c). The original point clouds of these scenes are shown in Figure 12b,d.
The diagrams of slopes according to their estimated values are shown in Figures 13 and 14. Figure 13 is a diagram of the road with slopes corresponding to Figure 12a, and Figure 14 is a diagram of the road without slopes corresponding to Figure 12c. The shapes of the diagrams are similar to the terrain of the sloping road and the flat road, which demonstrates that the slope-cognition method performs well.
For further analysis, Tables 2 and 3 present the quantitative results of the longitudinal and lateral slopes. Distance is the range of the regions from the ICV. Fra3 is the parameter recognized at the current frame, and Fra1 and Fra2 are the parameters recognized in the last two frames. Fus is the final estimation result obtained by multi-frame parameter fusion. Gradient is the value of the slope measured by other sensors such as a gradienter. Error is the relative error between Fus and Gradient.
Tables 2 and 3 indicate that estimated slopes are similar to the values measured by other sensors, and the relative errors are generally less than 6%. Thus, the slopes estimated by this algorithm are accurate and can be used to control ICVs.

| Roughness-cognition results
The scene for estimating roughness is shown in Figure 15a. There is a pit with high roughness on the road. In this algorithm, the pit needs to be identified so that ICVs can bypass it.
The results of recognized roughness mainly comprise the values of the index in different grids: the larger the values, the more uneven the road or area. These values are shown in Figure 16, the map of roughness. The more violently the lines undulate, the higher the roughness in that area. Compared with Figure 15, the area with violently undulating lines approximately represents the pit. Therefore, the roughness-cognition method performs well in recognizing areas with high roughness.

| CONCLUSION

An unstructured road parameter cognition algorithm for ICVs using multi-frame 3D point clouds is proposed. The algorithm aims to recognize road parameters, including slopes and roughness, which is helpful for optimizing path planning and control strategies for ICVs. First, an ROI extraction and division method based on multi-features is proposed, which is used to exclude obstacles on the roads such as pedestrians and other vehicles. It also divides roads into different regions with different slopes. Longitudinal and lateral slopes are then estimated by calculating the angles between two reference planes fitted using multi-region RANSAC and least squares, and an index is proposed to evaluate the roughness of roads. To improve the accuracy of estimation, a multi-frame parameter fusion method is proposed. Experiments were carried out on unstructured roads, and the results demonstrate that the proposed method satisfactorily estimates the parameters of unstructured roads.
In future work, the ROI extraction and division method will be improved. The index of roughness also deserves further study.