Point-LIO: Robust High-Bandwidth Light Detection and Ranging Inertial Odometry

Herein, Point-LIO is presented: a robust, high-bandwidth light detection and ranging (LiDAR) inertial odometry (LIO) with the capability to estimate extremely aggressive robotic motions. Point-LIO has two key novelties. The first is a point-by-point LIO framework that updates the state at each LiDAR point measurement. This framework allows an extremely high-frequency odometry output, significantly increases the odometry bandwidth, and fundamentally removes the artificial in-frame motion distortion. The second is a stochastic-process-augmented kinematic model that treats the inertial measurement unit (IMU) measurement as a system output. This new modeling method enables accurate localization and reliable mapping for aggressive motions even with IMU measurements saturated in the middle of the motion. Various real-world experiments are conducted for performance evaluation. Overall, Point-LIO is capable of providing accurate, high-frequency odometry (4–8 kHz) and reliable mapping under severe vibrations and aggressive motions with high angular velocities (75 rad s−1) beyond the IMU measuring range. Furthermore, an exhaustive benchmark comparison is conducted, in which Point-LIO achieves consistently comparable accuracy and time consumption. Finally, two example applications of Point-LIO are demonstrated: a racing drone and a self-rotating unmanned aerial vehicle, both of which undergo aggressive motions.


Introduction
Over the past decades, due to the direct, dense, active, and accurate measurements of depth, 3D light detection and ranging (LiDAR) sensors have been playing an increasingly important role in autonomous applications, such as view-based simultaneous localization and mapping (SLAM), [1,2] robotic exploration and inspection, [3,4] and autonomous driving. [5][6][7][8] Recent developments [9,10] in LiDAR technologies have enabled the commercialization and mass production of lightweight, cost-efficient, and high-performance LiDAR sensors, with the potential to benefit a range of existing and emerging applications such as autonomous navigation [11] and object detection. [12,13] A fundamental requirement of LiDAR sensors applied in navigation tasks is to provide accurate position estimations for robot control and consistent, high-rate mapping for timely perception of the environment. By measuring points at an extremely high rate (e.g., millions per second), LiDAR sensors could enable considerably high-rate odometry and mapping, which allows the tracking of extremely high-speed motions. However, existing approaches are all based on a frame architecture similar to vision-based methods, where the points in a frame are processed periodically at a certain frame rate (e.g., 10 Hz). In reality, the LiDAR points are sampled sequentially at different time instants; accumulating these points into a frame introduces artificial motion distortion and adversely affects the mapping result and odometry accuracy. The low frame rate also increases latency in the odometry and limits the attainable bandwidth, where the odometry bandwidth is defined in analogy to the bandwidth of a dynamic system: the frequency at which the system gain drops below 0.707. The odometry bandwidth represents how fast a motion can be such that the odometry can still estimate it satisfactorily.
In this work, we address these issues by two key novel techniques: point-by-point state update and stochastic-process-augmented kinematic model. More specifically, our contributions are as follows: 1) We propose a point-wise LiDAR-inertial odometry (LIO) framework, which fuses a LiDAR point at its actual sampling time without accumulating points into a frame. The elimination of point accumulation removes the in-frame motion distortion and allows high-rate odometry output and mapping update at nearly the point sampling rate, which further enables the system to track very fast motions; 2) To further advance the system bandwidth beyond the inertial measurement unit (IMU) measuring range, we use a stochastic process model [14] to model the IMU measurements. Then, we augment this model into the system kinematics and treat the IMU measurements as system output. The stochastic-process-augmented kinematic model allows the smooth estimation of the system state, including angular velocity and linear acceleration, even when the IMU saturates; 3) We integrate these two key techniques into a full tightly coupled LIO system, termed Point-LIO. The system uses an on-manifold extended Kalman filter (EKF) to update the system state by fusing each LiDAR point or IMU datum at its respective sampling time. By exploiting the system sparsity and linearity, the developed system achieves real-time state estimation even on low-power Advanced RISC (reduced instruction set computer) Machines (ARM)-based computers onboard a microaerial vehicle; and 4) The developed system is tested on various challenging real-world data collected by an emerging solid-state LiDAR with a very small FoV. The results show the ability of Point-LIO in motion distortion compensation, high odometry output rate (4–8 kHz), and high bandwidth (>150 Hz). The system is also able to estimate states under extremely aggressive motions (with angular velocity of more than 75 rad s−1) with saturated IMU measurements after the initial stage.
Furthermore, an exhaustive benchmark comparison on 12 sequences from various open LiDAR datasets shows that Point-LIO achieves accuracy consistently comparable to its counterparts while consuming fewer computational resources. Real-world applications on actual unmanned aerial vehicles (UAVs) are finally demonstrated.
The remainder of this article is organized as follows. In Section 2, we discuss relevant research works. We give an overview of the complete system pipeline in Section 3. Section 4 presents the system formulation, an EKF-based state estimator, and a summary of the algorithm. The evaluations of the system are presented in Section 5, and a benchmark comparison on open datasets is reported in Section 6. Finally, applications of the system to real-world UAVs are shown in Section 7, followed by conclusions in Section 8.

LiDAR(-inertial) Odometry
Many recent works on 3D LiDAR odometry and mapping are based on the LiDAR odometry and mapping (LOAM) structure, [15] where raw LiDAR points are accumulated into a frame (also called a scan) to extract feature points (e.g., edge and plane). The extracted feature points are then registered to the previous scan to produce an odometry output at the scan rate (i.e., 10 Hz), and a few recent scans are accumulated into a small submap which is finally registered and merged into the global map at a lower rate (i.e., 1 Hz) to refine the LiDAR pose with respect to the map. The separate structure of scan-to-scan and scan-to-map matching in LOAM has been adopted in many follow-up works, such as Lego-LOAM, [16] which considers the constraints arising from the ground during the scan-to-scan match to improve the odometry accuracy, LINS, [17] which fuses the IMU data with scan registration, and others such as in refs. [18,19], which focus on improvements in computation efficiency or accuracy.
While the separation between scan-to-scan and scan-to-map matching can significantly alleviate the computation load required for the odometry, the scan-to-scan registration in odometry often leads to quick drift accumulation. Moreover, scan-to-scan registration requires large overlaps between consecutive scans, which may not be available in small-FoV solid-state LiDARs. [20] To address these problems, direct scan-to-map (or scan-to-local-map) registration has been widely adopted, such as those based on point maps, [11,21] (G-)ICP, [22] NDT, [22] Surfel maps, [23][24][25][26] or voxel maps. [27] In particular, ref. [20] proposes a parallel scan-to-map method to deal with the small-FoV problems of solid-state LiDARs. Ref. [28] fuses the IMU measurements into the scan-to-map registration in an efficient iterated Kalman filter framework. A key problem in the scan-to-map framework is how to maintain the map structure such that it can incrementally add points from new scans while allowing efficient queries. To address this problem, ref. [29] proposes an incremental k-d tree, ikd-Tree, as the map structure. Benefiting from this efficient incremental mapping structure, the system FAST-LIO2 is able to perform odometry and mapping in real time at 10 Hz for spinning LiDARs and 100 Hz for solid-state LiDARs, even on low-power ARM-based computers.
One major drawback of scan-to-map registration is that the state is estimated at the rate of the scan (or frame), limiting the odometry output frequency to the frame rate. The limited output frequency causes a delay in the odometry equal to the scan duration. Furthermore, the limited state estimation rate places an unnecessary upper bound on the odometry bandwidth due to the Nyquist–Shannon sampling theorem. This problem has been partially addressed in Lola-SLAM [30] and LLOL, [31] which propose to slice a scan into multiple subscans and register each to the map once it is received, achieving odometry at 160 and 80 Hz, respectively. Compared with these methods (i.e., scan-to-scan, [15][16][17][18][19] scan-to-map, [11,[21][22][23][24][25][26] and sub-scan-to-map [30,31] ), our proposed system is a point-to-map framework, which registers each individual point to the map once it is received. This point-to-map framework allows odometry at the point sampling rate in theory and 4–8 kHz in practice. The unprecedented high-frequency state update reduces the latency down to microseconds while significantly increasing the odometry bandwidth.

Motion Distortion Compensation
As mentioned above, existing works on LiDAR(-inertial) odometry and mapping are almost all based on scans (i.e., frames), which suffer from in-frame motion distortion resulting from the continuous LiDAR motion during a frame. To correct such distortion, compensation methods are often necessary. Most efforts assume a constant-velocity motion within the frame to compensate for the motion distortion, such as in refs. [15,[17][18][19][32][33][34][35][36][37][38][39]. The constant-velocity assumption is valid when the scan duration is short or the motion is gentle. But for very aggressive motions where the velocity may change during a scan, for example, in drone aerobatics, the constant-velocity method often causes large drift or even failures in odometry.
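To make the constant-velocity de-skewing concrete, the following is a minimal planar toy sketch (function names and the 2D simplification are ours, not code from any of the cited systems): each point, sampled at its own time, is re-expressed in the sensor frame at scan end under an assumed constant linear velocity and yaw rate.

```python
import numpy as np

def rot2(th):
    """2D rotation matrix for yaw angle th."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

def undistort_constant_velocity(points, times, v, w, t_end):
    """De-skew one scan under a constant-velocity assumption (planar case
    for brevity). `points` (N,2) are in the sensor frame at their sample
    `times` (N,); v (2,) and yaw rate w are assumed constant over the
    scan, with the sensor at the world origin at t = 0. Returns the
    points expressed in the sensor frame at scan end t_end."""
    out = np.empty_like(points)
    for i, (p, ti) in enumerate(zip(points, times)):
        p_world = rot2(w * ti) @ p + v * ti               # world-frame point
        out[i] = rot2(w * t_end).T @ (p_world - v * t_end)  # scan-end frame
    return out
```

A static world point observed at different sample times maps to one consistent location after de-skewing, which is exactly what the compensation is meant to achieve (and what breaks when velocity actually varies within the scan).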
Another popular method for motion compensation is based on continuous-time trajectory optimization, such as those based on B-Splines [26,[40][41][42] and Gaussian process models. [43][44][45] Continuous-time trajectories allow the evaluation of the pose at any time instant and hence can compensate for the distortion of each individual point. However, continuous-time trajectory optimization is very time-consuming and often implemented offline. [41,42,44,45] Although there are some online implementations, [26,40,43] the odometry rate is often low (e.g., 10 Hz) to ensure real-time optimization. Moreover, they must accumulate sufficient points for reliable trajectory parameter optimization, which introduces considerable odometry latency. The inherent smoothness of continuous-time trajectories also prevents the description of highly aggressive motions experienced by the robot.
Leveraging IMU measurements is another effective method for motion compensation. [29,46] These methods integrate the LiDAR pose using the IMU data within a frame to undistort the contained points. Due to the high frequency of IMU measurements (e.g., 200 Hz), IMU-based motion compensation is quite effective for usual robot motions and even for fast-rolling drone maneuvers [29] (up to 1242° s−1). However, the method is still limited by the IMU frequency and also suffers from IMU measurement noise and bias estimation errors.
The ad-hoc motion compensation reviewed above is ultimately due to the frame-based odometry framework in existing methods. In our system, we fuse each individual LiDAR point at its true sampling time instead of accumulating points into a frame. The elimination of frame accumulation fundamentally removes the motion distortion from the very beginning, and hence suffers from none of the drawbacks mentioned earlier.

Formulation of Inertial Measurements
To fuse IMU measurements with LiDAR point registration, two mainstream methods are often used, that is, loosely coupled and tightly coupled. Loosely coupled methods integrate the IMU measurements to obtain a pose prior estimate and use this prior estimate as an initial pose for the subsequent scan registration. [15,38,[47][48][49] On the other hand, tightly coupled approaches fuse IMU measurements and LiDAR points in a joint optimization. Two implementations have been proposed for tightly coupled approaches: EKF based [17,29,50,51] and optimization based. [18,52,53] EKF-based methods [17,29,50,51] integrate the IMU measurements in the EKF's propagation procedure to obtain pose estimates, which are subsequently fused with the LiDAR measurements in an EKF update step. In contrast, optimization-based methods preintegrate the IMU measurements to obtain relative pose constraints and then fuse these preintegrated relative pose constraints with point registration errors. [18,52,53] Tightly coupled methods often have higher robustness and accuracy than loosely coupled methods. Yet, in all the above tightly coupled methods, IMU data are used as an input of a kinematic model, so they can be propagated in the EKF propagation or preintegrated over one frame duration. [54] Such EKF propagation or preintegration would suffer from saturation problems if the robot motion exceeds the IMU measuring range. In other studies, [26,44,45] IMU data are used to provide measurements of the angular velocity and linear acceleration predicted from the continuous-time trajectory, based on which the trajectory parameters are optimized along with the LiDAR scan registration factors. Viewing IMU data as measurements of the model output could naturally deal with IMU saturation caused by aggressive motions, although this capability is limited by the continuous-time model as reviewed earlier.
Our method is similar to other studies [26,44,45] in viewing IMU data as measurements of the model output, but it models the robot motion as a stochastic process, which is then augmented into the kinematics. Such a stochastic-process-augmented kinematic model allows the IMU to update the state along with LiDAR points in an EKF framework. The developed system is able to deal with saturated IMU measurements in extremely aggressive motions, such as vibration and high-speed motion. To the best of our knowledge, no prior work has demonstrated that LiDAR-inertial systems can work with saturated IMU measurements.

System Overview
Our design philosophy is to truthfully recognize that 1) the LiDAR points are sequentially sampled, each at its respective time, instead of as a frame sampled at the same time, and 2) IMU data are measurements, instead of the input, of the system. We fuse these two measurements in an on-manifold EKF framework [55] once the respective measurement (a LiDAR point or an IMU datum) is received.
The overview of our designed system is shown in Figure 1; the sequentially sampled LiDAR points and IMU data are both used to update the state at their respective timestamps, leading to an extremely high-rate odometry output, that is, 4–8 kHz in practice. In particular, for each LiDAR point received, a corresponding plane in the map is searched for. If the point is matched with a plane fit from the points in the map, a residual is computed to update the system state using an on-manifold Kalman filter. The optimized pose finally registers the LiDAR point into the global frame and merges it into the map, and the system then proceeds to the next measurement (LiDAR point or IMU datum). Otherwise, if the point has no matched plane, it is directly added to the map using the Kalman filter-predicted pose. To enable fast plane correspondence search while admitting newly registered points, we use an incremental k-d tree structure, ikd-Tree, originally developed in FAST-LIO2. [29] For each IMU measurement, a saturation check is conducted separately for each channel of the IMU; channels with saturated values are not used for the state update.
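The per-measurement workflow above can be sketched as the following control-flow toy (all names, the 1D "points", and the stub propagation/update are illustrative stand-ins, not the actual Point-LIO API; the real system uses an on-manifold EKF and an ikd-Tree):

```python
GYRO_RANGE = 34.9  # rad/s; illustrative rated range of a BMI088-class gyro

def saturated(value, rated=GYRO_RANGE, margin=0.02):
    """Per-channel saturation check: a reading too close to the rated
    measuring range is treated as saturated and discarded."""
    return abs(value) >= (1.0 - margin) * rated

def find_plane(point_map, p, radius=0.5):
    """Stub correspondence search standing in for the ikd-Tree query:
    returns a nearby map point if one exists, else None."""
    for q in point_map:
        if abs(q - p) < radius:
            return q
    return None

def process_measurement(m, state, point_map):
    """Control-flow sketch of the per-measurement loop described above.
    The Kalman update is replaced by a counter for illustration."""
    if m["kind"] == "lidar":
        p_global = state["pose"] + m["point"]      # stub global projection
        if find_plane(point_map, p_global) is not None:
            state["updates"] += 1                  # stub Kalman update
        point_map.append(p_global)                 # register into the map
    else:  # IMU sample: keep only the unsaturated channels
        ok = [c for c in m["channels"] if not saturated(c)]
        if ok:
            state["updates"] += 1                  # stub Kalman update
    return state
```

Note the asymmetry that the text describes: an unmatched LiDAR point still enters the map (with the predicted pose), while a fully saturated IMU sample triggers no update at all.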

State Estimation
The state estimation of Point-LIO is a tightly coupled on-manifold Kalman filter. Here, we briefly explain the essential formulations and workflow of the filter and refer readers to ref. [55] for more detailed theoretical explanations of the on-manifold Kalman filter.

Notations
To ease the explanation, we adopt notations as follows.

Symbol — Meaning
$\mathbf{x}_k$ — State $\mathbf{x}$ at the k-th measurement sampling time.
$\mathbf{x}$ — Ground-truth value of the state $\mathbf{x}$.
$\hat{\mathbf{x}}$, $\bar{\mathbf{x}}$ — Propagated and updated values of the state $\mathbf{x}$, respectively.
$\delta\mathbf{x}$ — Error between the ground-truth state $\mathbf{x}$ and its estimate $\hat{\mathbf{x}}$.
Furthermore, we introduce two encapsulated operations, ⊞ ("boxplus") and its inverse ⊟ ("boxminus"), defined in ref. [55], to describe the system on a manifold ℳ of dimension n and to parameterize the state error in the Euclidean space $\mathbb{R}^n$. These operations also describe the discrete-time system state-space model more compactly. We refer readers to ref. [55] for more detailed definitions and derivations; in this article, we are only concerned with the manifolds $SO(3)$ and $\mathbb{R}^n$, where

$\mathrm{Exp}(\mathbf{r}) = \mathbf{I} + \frac{\sin(\lVert\mathbf{r}\rVert)}{\lVert\mathbf{r}\rVert}\lfloor\mathbf{r}\rfloor + \frac{1-\cos(\lVert\mathbf{r}\rVert)}{\lVert\mathbf{r}\rVert^{2}}\lfloor\mathbf{r}\rfloor^{2}$

is the exponential map on $SO(3)$ and $\mathrm{Log}$ is its inverse map. For a compound manifold $\mathcal{M} = SO(3)\times\mathbb{R}^n$, the Cartesian product of the two submanifolds $SO(3)$ and $\mathbb{R}^n$, we have

$\begin{bmatrix}\mathbf{R}\\ \mathbf{a}\end{bmatrix} \boxplus \begin{bmatrix}\mathbf{r}\\ \mathbf{b}\end{bmatrix} = \begin{bmatrix}\mathbf{R}\,\mathrm{Exp}(\mathbf{r})\\ \mathbf{a}+\mathbf{b}\end{bmatrix}, \quad \begin{bmatrix}\mathbf{R}_1\\ \mathbf{a}_1\end{bmatrix} \boxminus \begin{bmatrix}\mathbf{R}_2\\ \mathbf{a}_2\end{bmatrix} = \begin{bmatrix}\mathrm{Log}(\mathbf{R}_2^{T}\mathbf{R}_1)\\ \mathbf{a}_1-\mathbf{a}_2\end{bmatrix}$
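As a concrete illustration, these manifold operations can be sketched in a few lines of numpy (a minimal sketch with our own function names; Point-LIO's actual implementation uses the IKFoM toolbox of ref. [55]):

```python
import numpy as np

def skew(r):
    """Skew-symmetric cross-product matrix of r in R^3."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def Exp(r):
    """Exponential map on SO(3) (Rodrigues' formula)."""
    th = np.linalg.norm(r)
    if th < 1e-10:
        return np.eye(3)
    K = skew(r)
    return np.eye(3) + np.sin(th) / th * K + (1.0 - np.cos(th)) / th**2 * K @ K

def Log(R):
    """Inverse of Exp: rotation matrix -> rotation vector."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if th < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return th / (2.0 * np.sin(th)) * w

# Boxplus / boxminus on the compound manifold M = SO(3) x R^n,
# with a state represented as a (R, v) pair, v in R^n.
def boxplus(x, delta):
    R, v = x
    return (R @ Exp(delta[:3]), v + delta[3:])

def boxminus(x1, x2):
    R1, v1 = x1
    R2, v2 = x2
    return np.concatenate([Log(R2.T @ R1), v1 - v2])
```

By construction, `boxminus(boxplus(x, d), x)` recovers `d`, which is what lets the filter parameterize the on-manifold state error in a flat Euclidean space.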

Kinematic Model
We first derive the system model, which consists of a state transition model and a measurement model.

State Transition Model
Taking the IMU frame (denoted I) as the body frame and the first IMU frame as the global frame (denoted G), the continuous kinematic model is

${}^G\dot{\mathbf{R}}_I = {}^G\mathbf{R}_I\lfloor{}^I\boldsymbol{\omega}\rfloor,\quad {}^G\dot{\mathbf{p}}_I = {}^G\mathbf{v}_I,\quad {}^G\dot{\mathbf{v}}_I = {}^G\mathbf{R}_I\,{}^I\mathbf{a} + {}^G\mathbf{g}$
$\dot{\mathbf{b}}_g = \mathbf{n}_{b_g},\quad \dot{\mathbf{b}}_a = \mathbf{n}_{b_a},\quad {}^G\dot{\mathbf{g}} = \mathbf{0},\quad {}^I\dot{\boldsymbol{\omega}} = \mathbf{w}_g,\quad {}^I\dot{\mathbf{a}} = \mathbf{w}_a \qquad (2)$

where ${}^G\mathbf{R}_I$, ${}^G\mathbf{p}_I$, and ${}^G\mathbf{v}_I$ represent the IMU attitude, position, and velocity in the global frame, and ${}^G\mathbf{g}$ is the gravity vector in the global frame. $\mathbf{b}_g$ and $\mathbf{b}_a$ are random-walk IMU biases driven by Gaussian noises $\mathbf{n}_{b_g}\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_{b_g})$ and $\mathbf{n}_{b_a}\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_{b_a})$, respectively. The notation $\lfloor\mathbf{a}\rfloor$ is the skew-symmetric cross-product matrix of $\mathbf{a}\in\mathbb{R}^3$. ${}^I\boldsymbol{\omega}$ and ${}^I\mathbf{a}$ denote the angular velocity and acceleration of the IMU in the body (i.e., IMU) frame. As proposed in ref. [14], a robot motion (the angular velocity ${}^I\boldsymbol{\omega}$ and linear acceleration ${}^I\mathbf{a}$) can always be viewed as a sample of a collection or ensemble of signals, which enables us to describe the robot motion statistically by a random process. Moreover, as suggested in ref. [14], since the motion of robotic systems usually possesses a certain smoothness (e.g., due to actuator delay), quick changes in angular velocity and acceleration are relatively unlikely, and an N-th-order integrator random process often suffices in actual use.
In particular, we choose first-order integrator models driven by Gaussian noises $\mathbf{w}_g\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_g)$ and $\mathbf{w}_a\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_a)$ to model the angular velocity ${}^I\boldsymbol{\omega}$ and linear acceleration ${}^I\mathbf{a}$, respectively. The continuous model (2) is then discretized at each measurement step k. Denote by $\Delta t_k$ the current measurement interval, which is the time difference between the previous measurement (an IMU datum or LiDAR point) and the current measurement (an IMU datum or LiDAR point). Assuming the input holds constant over the interval $\Delta t_k$, the discretization leads to

$\mathbf{x}_{k+1} = \mathbf{x}_k \boxplus \left(\Delta t_k\, \mathbf{f}(\mathbf{x}_k, \mathbf{w}_k)\right) \qquad (3)$

where the manifold $\mathcal{M}$, function $\mathbf{f}$, state $\mathbf{x}$, and process noise $\mathbf{w}$ are defined as

$\mathcal{M} = SO(3)\times\mathbb{R}^{21}$
$\mathbf{x} = \left({}^G\mathbf{R}_I,\ {}^G\mathbf{p}_I,\ {}^G\mathbf{v}_I,\ \mathbf{b}_g,\ \mathbf{b}_a,\ {}^G\mathbf{g},\ {}^I\boldsymbol{\omega},\ {}^I\mathbf{a}\right),\quad \mathbf{w} = \left(\mathbf{n}_{b_g},\ \mathbf{n}_{b_a},\ \mathbf{w}_g,\ \mathbf{w}_a\right)$
$\mathbf{f}(\mathbf{x},\mathbf{w}) = \left({}^I\boldsymbol{\omega},\ {}^G\mathbf{v}_I,\ {}^G\mathbf{R}_I\,{}^I\mathbf{a} + {}^G\mathbf{g},\ \mathbf{n}_{b_g},\ \mathbf{n}_{b_a},\ \mathbf{0}_{3\times 1},\ \mathbf{w}_g,\ \mathbf{w}_a\right)$

and $\mathbf{Q} = \mathrm{diag}\left(\mathbf{Q}_{b_g}, \mathbf{Q}_{b_a}, \mathbf{Q}_g, \mathbf{Q}_a\right)$ is the covariance matrix of the process noise $\mathbf{w}$.
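A minimal sketch of one discrete propagation step under this stochastic-process-augmented model follows (our own reduced state: biases and the gravity state are omitted for brevity, and a fixed gravity constant is assumed; names are illustrative):

```python
import numpy as np

def skew(r):
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def Exp(r):
    """SO(3) exponential map (Rodrigues' formula)."""
    th = np.linalg.norm(r)
    if th < 1e-10:
        return np.eye(3)
    K = skew(r)
    return np.eye(3) + np.sin(th) / th * K + (1 - np.cos(th)) / th**2 * K @ K

GRAV = np.array([0.0, 0.0, -9.81])  # assumed global gravity vector

def propagate_state(x, dt, w_g=np.zeros(3), w_a=np.zeros(3)):
    """One discrete step x_{k+1} = x_k [+] (dt * f(x_k, w_k)). State: dict
    with attitude R, position p, velocity v, plus the augmented angular
    velocity `omega` and body acceleration `acc`, each a first-order
    integrator driven by white noise w_g / w_a."""
    return {
        "R": x["R"] @ Exp(dt * x["omega"]),          # attitude on SO(3)
        "p": x["p"] + dt * x["v"],
        "v": x["v"] + dt * (x["R"] @ x["acc"] + GRAV),
        "omega": x["omega"] + dt * w_g,              # first-order integrator
        "acc": x["acc"] + dt * w_a,                  # first-order integrator
    }
```

Note that, unlike an IMU-as-input model, `omega` and `acc` are part of the state here, so they can later be corrected by IMU (and LiDAR) measurements rather than fed in directly.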

Measurement Model
The system has two kinds of measurements: a LiDAR point and an IMU datum (consisting of angular velocity and acceleration measurements). These two measurements are usually sampled and received by the system at different times, so we model them separately. Assuming the LiDAR frame coincides with the body (i.e., IMU) frame or has a precalibrated extrinsic, a LiDAR point ${}^I\mathbf{p}_{m_k}$ is equal to the true point position in the local IMU frame, ${}^I\mathbf{p}_{gt_k}$, which is unknown, contaminated by an additive Gaussian noise $\mathbf{n}_{L_k}$:

${}^I\mathbf{p}_{m_k} = {}^I\mathbf{p}_{gt_k} + \mathbf{n}_{L_k} \qquad (5)$

This true point, after being projected into the global frame using the true (yet unknown) IMU pose ${}^G\mathbf{T}_{I_k} = \left({}^G\mathbf{R}_{I_k}, {}^G\mathbf{p}_{I_k}\right)$, should lie exactly on a local small plane patch in the map (see Figure 2), that is,

${}^G\mathbf{u}_k^T\left({}^G\mathbf{R}_{I_k}\left({}^I\mathbf{p}_{m_k} - \mathbf{n}_{L_k}\right) + {}^G\mathbf{p}_{I_k} - {}^G\mathbf{q}_k\right) = 0 \qquad (6)$

where ${}^G\mathbf{u}_k$ is the normal vector of the corresponding plane and ${}^G\mathbf{q}_k$ is any point lying on the plane. Note that ${}^G\mathbf{T}_{I_k}$ is contained in the state vector $\mathbf{x}_k$; hence, (6) imposes an implicit measurement model for the state vector $\mathbf{x}_k$. The IMU measurement consists of an angular velocity measurement ${}^I\boldsymbol{\omega}_m$ and an acceleration measurement ${}^I\mathbf{a}_m$:

${}^I\boldsymbol{\omega}_{m_k} = {}^I\boldsymbol{\omega}_k + \mathbf{b}_{g_k} + \mathbf{n}_g, \quad {}^I\mathbf{a}_{m_k} = {}^I\mathbf{a}_k + \mathbf{b}_{a_k} + \mathbf{n}_a \qquad (7)$

where $\mathbf{n}_g\sim\mathcal{N}(\mathbf{0},\mathcal{R}_g)$ and $\mathbf{n}_a\sim\mathcal{N}(\mathbf{0},\mathcal{R}_a)$ are both Gaussian noises. Collectively, $\mathbf{n}_I = \left[\mathbf{n}_g^T\ \mathbf{n}_a^T\right]^T \sim \mathcal{N}(\mathbf{0},\mathcal{R}_I) = \mathcal{N}\left(\mathbf{0},\mathrm{diag}(\mathcal{R}_g,\mathcal{R}_a)\right)$ is the measurement noise of the IMU. As shown, the two states $\boldsymbol{\omega}$ and $\mathbf{b}_g$ (and similarly $\mathbf{a}$ and $\mathbf{b}_a$), which are separate in the state Equation (2), are now correlated in the angular velocity measurement $\boldsymbol{\omega}_m$ (or the acceleration measurement $\mathbf{a}_m$).
To sum up, the measurement model of the system can be presented in the following compact form:

$\mathbf{0} = \mathbf{h}_L(\mathbf{x}_k, \mathbf{n}_{L_k}), \quad \mathbf{y}_{I_k} = \mathbf{h}_I(\mathbf{x}_k) + \mathbf{n}_{I_k} \qquad (8)$

where $\mathbf{h}_L$ denotes the implicit LiDAR point measurement model (6) and $\mathbf{h}_I$ the IMU measurement model (7), with the IMU measurement vector $\mathbf{y}_{I_k} = \left[{}^I\boldsymbol{\omega}_{m_k}^T\ {}^I\mathbf{a}_{m_k}^T\right]^T$.

Extended Kalman Filter
A tightly coupled EKF is used for the state estimation of Point-LIO. The workflow of the EKF is presented in this section.

State Propagation
Assume that we have received measurements up to step k, with the updated state $\bar{\mathbf{x}}_k$ and the updated covariance matrix $\bar{\mathbf{P}}_k$. The state propagation from step k to the next measurement step k+1 follows directly from the state transition model in Equation (3) by setting $\mathbf{w}_k = \mathbf{0}$:

$\hat{\mathbf{x}}_{k+1} = \bar{\mathbf{x}}_k \boxplus \left(\Delta t_k\, \mathbf{f}(\bar{\mathbf{x}}_k, \mathbf{0})\right) \qquad (9)$

and the covariance is propagated as

$\hat{\mathbf{P}}_{k+1} = \mathbf{F}_{\mathbf{x}_k}\bar{\mathbf{P}}_k\mathbf{F}_{\mathbf{x}_k}^T + \mathbf{F}_{\mathbf{w}_k}\mathbf{Q}_k\mathbf{F}_{\mathbf{w}_k}^T \qquad (10)$

where $\mathbf{Q}_k$ is the covariance of the process noise $\mathbf{w}_k$, and the matrices $\mathbf{F}_{\mathbf{x}_k}$, $\mathbf{F}_{\mathbf{w}_k}$ are computed as

$\mathbf{F}_{\mathbf{x}_k} = \left.\frac{\partial\left(\mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}\right)}{\partial\,\delta\mathbf{x}_k}\right|_{\delta\mathbf{x}_k=\mathbf{0},\,\mathbf{w}_k=\mathbf{0}}, \quad \mathbf{F}_{\mathbf{w}_k} = \left.\frac{\partial\left(\mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}\right)}{\partial\,\mathbf{w}_k}\right|_{\delta\mathbf{x}_k=\mathbf{0},\,\mathbf{w}_k=\mathbf{0}} \qquad (11)$

where $\mathbf{x}_{k+1}$ is the true value of the state vector at time step k+1.
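The covariance propagation step is the standard EKF recursion and can be sketched generically (our own helper; the Jacobians would come from the linearization above):

```python
import numpy as np

def propagate_covariance(P, F_x, F_w, Q):
    """EKF covariance propagation: P_pred = F_x P F_x^T + F_w Q F_w^T.
    F_x (n,n) and F_w (n,m) are the Jacobians of the discrete transition
    with respect to the state error and the process noise; Q (m,m) is
    the process-noise covariance."""
    P_pred = F_x @ P @ F_x.T + F_w @ Q @ F_w.T
    return 0.5 * (P_pred + P_pred.T)  # symmetrize against round-off
```

Because each propagation covers only the short interval to the next point or IMU sample, $\Delta t_k$ is tiny, which is what keeps the linearization error of this step small at high point rates.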

LiDAR Measurement: With the predicted pose ${}^G\hat{\mathbf{T}}_{I_{k+1}} = \left({}^G\hat{\mathbf{R}}_{I_{k+1}}, {}^G\hat{\mathbf{p}}_{I_{k+1}}\right)$ from the Kalman propagation (9), we project the measured LiDAR point ${}^I\mathbf{p}_{m_{k+1}}$ into the global frame as ${}^G\hat{\mathbf{p}}_{k+1} = {}^G\hat{\mathbf{R}}_{I_{k+1}}{}^I\mathbf{p}_{m_{k+1}} + {}^G\hat{\mathbf{p}}_{I_{k+1}}$ and search for its nearest neighbors in the map maintained by the ikd-Tree. The found nearest-neighboring points are then used to fit a local small plane patch with normal vector ${}^G\mathbf{u}_{k+1}$ and centroid ${}^G\mathbf{q}_{k+1}$, as in the measurement model (see Equation (6) and also Figure 2). If the five nearest points do not lie on the fitted plane patch (i.e., the distance of any point to the plane is larger than 0.1 m), the current LiDAR point ${}^G\hat{\mathbf{p}}_{k+1}$ is directly merged into the map without residual computation or state update. Otherwise, if the local plane fit succeeds, a residual $r_{L_{k+1}}$ is calculated according to Equation (8) as

$r_{L_{k+1}} = {}^G\mathbf{u}_{k+1}^T\left({}^G\hat{\mathbf{p}}_{k+1} - {}^G\mathbf{q}_{k+1}\right) \approx \mathbf{H}_{L_{k+1}}\delta\mathbf{x}_{k+1} + \mathbf{D}_{L_{k+1}}\mathbf{n}_{L_{k+1}} \qquad (12)$

where $\delta\mathbf{x}_{k+1} = \mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}$, with $\mathbf{x}_{k+1}$ being the true value of the state vector at time step k+1, and

$\mathbf{H}_{L_{k+1}} = \left.\frac{\partial\, r_{L_{k+1}}}{\partial\,\delta\mathbf{x}_{k+1}}\right|_{\delta\mathbf{x}_{k+1}=\mathbf{0}}, \quad \mathbf{D}_{L_{k+1}} = \left.\frac{\partial\, r_{L_{k+1}}}{\partial\,\mathbf{n}_{L_{k+1}}}\right|_{\mathbf{n}_{L_{k+1}}=\mathbf{0}} \qquad (13)$

IMU Measurement: For an IMU measurement, we first assess whether any channel of the IMU is saturated by checking the gap between the current measurement and the rated measuring range. If the gap is too small, that channel of the IMU measurement is discarded without updating the state. Then, the acceleration and angular velocity measurements from the unsaturated IMU channels are collected to calculate the IMU residual $\mathbf{r}_{I_{k+1}}$ according to Equation (7) (to simplify the notation, we use all six channel measurements here):

$\mathbf{r}_{I_{k+1}} = \mathbf{y}_{I_{k+1}} - \begin{bmatrix}{}^I\hat{\boldsymbol{\omega}}_{k+1} + \hat{\mathbf{b}}_{g_{k+1}}\\ {}^I\hat{\mathbf{a}}_{k+1} + \hat{\mathbf{b}}_{a_{k+1}}\end{bmatrix} \approx \mathbf{H}_{I_{k+1}}\delta\mathbf{x}_{k+1} + \mathbf{n}_{I_{k+1}} \qquad (14)$

where $\delta\mathbf{x}_{k+1} = \mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}$ as before, and

$\mathbf{H}_{I_{k+1}} = \left.\frac{\partial\,\mathbf{r}_{I_{k+1}}}{\partial\,\delta\mathbf{x}_{k+1}}\right|_{\delta\mathbf{x}_{k+1}=\mathbf{0}} \qquad (15)$

To sum up, the residual, from either the LiDAR point measurement (12) or the IMU measurement (14), is related to the state $\mathbf{x}_{k+1}$ and the respective measurement noise by

$\mathbf{r}_{k+1} = \mathbf{H}_{k+1}\delta\mathbf{x}_{k+1} + \mathbf{D}_{k+1}\mathbf{n}_{k+1} \qquad (16)$

where, for a LiDAR point measurement, $\mathbf{H}_{k+1} = \mathbf{H}_{L_{k+1}}$, $\mathbf{D}_{k+1} = \mathbf{D}_{L_{k+1}}$, and $\mathbf{n}_{k+1} = \mathbf{n}_{L_{k+1}}$; and for an IMU measurement, $\mathbf{H}_{k+1} = \mathbf{H}_{I_{k+1}}$, $\mathbf{D}_{k+1} = \mathbf{I}$, and $\mathbf{n}_{k+1} = \mathbf{n}_{I_{k+1}}$.
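The plane fitting and point-to-plane residual can be sketched as follows (a least-squares plane via SVD is our choice of fitting method, not necessarily the exact one used in Point-LIO; the 0.1 m validity check follows the text above):

```python
import numpy as np

def fit_plane(neighbors):
    """Least-squares plane through neighbor points (N,3): returns a unit
    normal u (sign-ambiguous) and the centroid q. The normal is the
    right-singular vector of the centered points with smallest singular
    value."""
    q = neighbors.mean(axis=0)
    _, _, Vt = np.linalg.svd(neighbors - q)
    return Vt[-1], q

def point_to_plane_residual(p_global, neighbors, max_dist=0.1):
    """Residual r = u^T (p - q) for a registered point. Returns None when
    any neighbor lies farther than `max_dist` from the fitted plane, in
    which case no state update is performed and the point is simply
    merged into the map."""
    u, q = fit_plane(neighbors)
    if np.max(np.abs((neighbors - q) @ u)) > max_dist:
        return None
    return float(u @ (p_global - q))
```

The scalar return value is what makes the LiDAR update so cheap: each point contributes a one-dimensional residual, so the per-point Kalman update involves only tiny matrices.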

State Update
The propagated state $\hat{\mathbf{x}}_{k+1}$ in (9) and covariance $\hat{\mathbf{P}}_{k+1}$ in (10) impose a prior Gaussian distribution on the unknown state $\mathbf{x}_{k+1}$, as follows:

$\mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1} = \delta\mathbf{x}_{k+1} \sim \mathcal{N}\left(\mathbf{0}, \hat{\mathbf{P}}_{k+1}\right) \qquad (17)$

The observation model (16) gives another Gaussian distribution for $\delta\mathbf{x}_{k+1}$:

$\mathbf{r}_{k+1} - \mathbf{H}_{k+1}\delta\mathbf{x}_{k+1} = \mathbf{D}_{k+1}\mathbf{n}_{k+1} \sim \mathcal{N}\left(\mathbf{0}, \bar{\mathcal{R}}_{k+1}\right), \quad \bar{\mathcal{R}}_{k+1} = \mathbf{D}_{k+1}\mathcal{R}_{k+1}\mathbf{D}_{k+1}^T \qquad (18)$

Combining the prior distribution in (17) with the measurement distribution in (18) yields the posterior distribution of the state $\mathbf{x}_{k+1}$ (represented equivalently by $\delta\mathbf{x}_{k+1}$), whose maximum a posteriori estimate is

$\delta\mathbf{x}_{k+1}^{o} = \arg\min_{\delta\mathbf{x}_{k+1}} \left( \lVert\delta\mathbf{x}_{k+1}\rVert^{2}_{\hat{\mathbf{P}}_{k+1}^{-1}} + \lVert\mathbf{r}_{k+1} - \mathbf{H}_{k+1}\delta\mathbf{x}_{k+1}\rVert^{2}_{\bar{\mathcal{R}}_{k+1}^{-1}} \right) \qquad (19)$
The optimization problem in (19) is a standard quadratic program, and the optimal solution $\delta\mathbf{x}_{k+1}^{o}$ is easily obtained; it is essentially the Kalman update [56]:

$\mathbf{K}_{k+1} = \left(\mathbf{H}_{k+1}^T\bar{\mathcal{R}}_{k+1}^{-1}\mathbf{H}_{k+1} + \hat{\mathbf{P}}_{k+1}^{-1}\right)^{-1}\mathbf{H}_{k+1}^T\bar{\mathcal{R}}_{k+1}^{-1}, \quad \delta\mathbf{x}_{k+1}^{o} = \mathbf{K}_{k+1}\mathbf{r}_{k+1}, \quad \mathbf{P}_{k+1} = \left(\mathbf{I} - \mathbf{K}_{k+1}\mathbf{H}_{k+1}\right)\hat{\mathbf{P}}_{k+1} \qquad (20)$

Then the update of $\mathbf{x}_{k+1}$ is

$\bar{\mathbf{x}}_{k+1} = \hat{\mathbf{x}}_{k+1} \boxplus \delta\mathbf{x}_{k+1}^{o} \qquad (21)$

The updated state will be used in the propagation of the next step. To do so, we also need to estimate the covariance, denoted $\bar{\mathbf{P}}_{k+1}$, of the error between the state estimate $\bar{\mathbf{x}}_{k+1}$ and the ground truth:

$\mathbf{x}_{k+1}\boxminus\bar{\mathbf{x}}_{k+1} \approx \mathbf{J}_{k+1}\left(\delta\mathbf{x}_{k+1} - \delta\mathbf{x}_{k+1}^{o}\right) \qquad (22)$

where $\mathbf{J}_{k+1}$ is the projection matrix, and the covariance of $\left(\delta\mathbf{x}_{k+1} - \delta\mathbf{x}_{k+1}^{o}\right)$ is the inverse of the Hessian matrix in (19), which is also the matrix $\mathbf{P}_{k+1}$ in (20). As a result, the covariance of $\mathbf{x}_{k+1}\boxminus\bar{\mathbf{x}}_{k+1}$, according to (22), is

$\bar{\mathbf{P}}_{k+1} = \mathbf{J}_{k+1}\mathbf{P}_{k+1}\mathbf{J}_{k+1}^T \qquad (24)$

The updated state $\bar{\mathbf{x}}_{k+1}$ along with the covariance matrix $\bar{\mathbf{P}}_{k+1}$ are used in the propagation of the next measurement. The overall procedure of our state estimation is summarized in Algorithm 1.
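The information-form update can be sketched in a Euclidean toy (a minimal sketch: the on-manifold ⊞ reduces to plain addition and the projection matrix $\mathbf{J}$ is omitted, so this is not the full on-manifold update):

```python
import numpy as np

def kalman_update(x_pred, P_pred, r, H, R):
    """Information-form Kalman update for a residual r ~ H dx + noise:
    K = (H^T R^-1 H + P^-1)^-1 H^T R^-1,  dx = K r,
    x_new = x_pred + dx (boxplus reduces to + in this Euclidean sketch),
    P_new = (I - K H) P_pred (manifold projection omitted)."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    H = np.atleast_2d(H)
    R = np.atleast_2d(R)
    S = H.T @ np.linalg.inv(R) @ H + np.linalg.inv(P_pred)
    K = np.linalg.inv(S) @ H.T @ np.linalg.inv(R)
    dx = K @ r
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_pred + dx, P_new
```

The information form used here is algebraically equivalent to the familiar gain $\mathbf{K} = \hat{\mathbf{P}}\mathbf{H}^T(\mathbf{H}\hat{\mathbf{P}}\mathbf{H}^T + \bar{\mathcal{R}})^{-1}$; it is convenient when, as in (19), the update is derived from the quadratic MAP objective.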

Analysis
In our proposed LIO framework, the state is updated by consuming each LiDAR point at its reception, leading to a point-wise odometry. This point-wise architecture is completely different from the existing frame-based LOAM frameworks [11,[15][16][17]21,[29][30][31] and enables an extremely high-rate odometry output that is ideally equal to the point rate of the LiDAR sensor (from hundreds of thousands to millions of points per second). The high-rate state update provides timely correction of the state estimate before the estimation error grows too large in the forward propagation step, leading to a potentially high-bandwidth odometry that could even survive extremely high-speed movements.
Another benefit of the point-wise LIO framework is the fundamental removal of in-frame motion distortion. The existing frame-based frameworks [11,[15][16][17]21,[29][30][31] accumulate the sequentially sampled points into a frame and use it to update the state by assuming the frame is sampled at a single time. Since the points in a frame are actually sampled at different times, this accumulation causes motion distortion for points in a frame. To compensate for such distortion, ad-hoc methods, such as IMU integration, [17,18,21,29,38,44,53,57] constant-velocity assumption, [15,[32][33][34][35][36][37]58] or continuous-time trajectory, [26,45,52] must be used. In contrast, our point-wise LIO framework updates the state at each point's true sampling time, suffering from no such motion distortion.
The second main difference of our LIO framework is how we model the IMU measurements. Unlike existing methods, [17,29,50,51] where the IMU measurements are modeled as the input to a kinematic model, we use a colored stochastic process to describe the robot's dynamic behavior, that is, the angular velocity and acceleration in (2), and model the IMU measurements as the model output (7). Modeling the IMU measurements as the output provides an elegant way to cope with saturated IMU measurements, which allows the estimation of motion beyond the IMU measuring range.
In our system, we adopt an EKF instead of the iterated Kalman filter used in frame-based methods such as FAST-LIO2 [29] and LINS. [17] This is because, for the LiDAR measurement, the update rate is very high, so we do not need to iterate the state update exhaustively at each time step, while for the IMU measurement, the measurement Equation (7) is essentially linear. The elimination of iteration effectively reduces the time spent on state updates. Moreover, since the state is updated by the LiDAR measurements on a point-by-point basis, the measurement model of LiDAR points is one-dimensional, as shown in Equation (6). The low-dimensional measurement equation, along with the great sparsity of the system matrices (e.g., $\mathbf{F}_{\mathbf{x}_k}$, $\mathbf{F}_{\mathbf{w}_k}$ in (11), $\mathbf{H}_{L_{k+1}}$ in (13), and $\mathbf{H}_{I_{k+1}}$ in (15)), also effectively lowers the computation time.

Evaluation
In this section, we evaluate the performance of our developed system in three aspects: 1) removal of motion distortion; 2) high odometry frequency with high bandwidth; and 3) state estimation with saturated IMU measurements in the middle.

Implementation
The proposed Point-LIO is implemented in C++ and the Robot Operating System (ROS). The EKF is implemented based on the IKFoM toolbox developed in our previous work. [55] We use the incremental k-d tree, ikd-Tree, developed in FAST-LIO2, [29] as our mapping structure with its default parameters: local map size L = 2000 m, spatial downsample resolution l = 0.25 m, rebalancing thresholds of the ikd-Tree α_bal = 0.6 and α_del = 0.5, and subtree size threshold for parallel rebuilding (in a second thread) N_max = 1500.
Although our system is designed to perform state estimation after each LiDAR point reception, in practice, limited by the available drivers provided by LiDAR manufacturers, LiDAR points are packaged after accumulating a complete scan and then sent to the LIO system. To cater to this practical limitation, Point-LIO sorts all LiDAR points and IMU data contained in a received package according to their respective timestamps. Then, the sorted data are processed one by one by Point-LIO.
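The timestamp sorting described above can be sketched as follows (the (timestamp, kind, data) layout is ours for illustration, not the actual driver message format):

```python
def sort_package(lidar_points, imu_samples):
    """Merge one received package's LiDAR points and IMU samples into a
    single stream ordered by timestamp, so the filter can consume each
    measurement at its own sampling time. Each input item is a
    (timestamp, data) pair."""
    stream = [(t, "lidar", p) for t, p in lidar_points]
    stream += [(t, "imu", s) for t, s in imu_samples]
    stream.sort(key=lambda item: item[0])  # ascending sampling time
    return stream
```

After this merge, the downstream filter never sees a "frame", only a time-ordered sequence of individual measurements, which is what makes the point-by-point update possible despite the packaged driver output.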
In all the evaluations, we compare Point-LIO to a state-of-the-art frame-based odometry, FAST-LIO2. [29] All the experiment results for FAST-LIO2 are collected using the public version of FAST-LIO2 with its default parameter values (which are essentially also the values of our parameters as detailed earlier). Since FAST-LIO2 performs a spatial downsampling with resolution 0.3 for each received LiDAR scan, for a fair comparison, we also perform such spatial downsampling in Point-LIO before the data sorting and point-by-point update.

Algorithm 1: State estimation.
Input: last odometry output $\bar{\mathbf{x}}_k$ and $\bar{\mathbf{P}}_k$; a LiDAR point or an IMU measurement.
Output: new odometry output $\bar{\mathbf{x}}_{k+1}$ and $\bar{\mathbf{P}}_{k+1}$.
Workflow:
1: Propagate the state from the last time step k to the current time step k+1 via (9) and (10) to obtain the state prediction $\hat{\mathbf{x}}_{k+1}$ and its covariance $\hat{\mathbf{P}}_{k+1}$.
2: if a LiDAR point then
3: if PlaneCorrespondenceExist(${}^G\hat{\mathbf{p}}_{k+1}$) then
5: Compute $r_{L_{k+1}}$, $\mathbf{H}_{L_{k+1}}$, $\mathbf{D}_{L_{k+1}}$ via (12)

In addition, to study the effect of the point-by-point update, we provide, as an ablation study, a system that integrates only this scheme but does not apply the colored stochastic model. The ablation-study system is the same as FAST-LIO2, which models the IMU measurements as the input to the system kinematic model, but differs in that each individual LiDAR point in a scan is used to update the system sequentially, as our proposed system does. This leads to a state update frequency similar to ours. In the following sections, we denote our developed system as "Point-LIO" and the above ablation-study system as "Point-LIO-input" to distinguish the role of IMU measurements in the two systems.

Platforms
To collect real-world data, we develop a sensor suite, shown in Figure 3a, which consists of a solid-state 3D LiDAR, Livox Avia, a first-person-view (FPV) camera, and five Vicon markers for ground-truth measurement. With a 70.4° (horizontal) × 77.2° (vertical) circular FoV and an unconventional nonrepetitive scanning pattern, the Livox Avia LiDAR produces point measurements at a rate of 230,000 points per second, and its built-in IMU (model BMI088) produces 200 Hz IMU data. The points and IMU data are packaged at a frequency adjustable from 10 to 100 Hz. To produce different types of motion, three different platforms are constructed to carry the sensor suite: a robot car (see Figure 3b), the RoboMaster 2019 AI developed by DJI Shenzhen; a rotating platform driven by a stepper motor (see Figure 3c), Nimotion STM4260A; and a pendulum (see Figure 3d).

Resolving Motion Distortion
To verify the effectiveness of the proposed point-by-point update scheme in addressing motion distortion, we collect several sequences using the robot car shown in Figure 3b. Three different scenes are tested, that is, the Belcher Bay Park (an unstructured scene), the Centennial Small Square of HKU (a semistructured scene), and a corridor in the Haking Wong building of HKU (a structured scene). The three sequences are denoted as "Park", "Square", and "Corridor", respectively. In all sequences, the robot car returns to the starting point, which enables the drift computation. The LiDAR package frequency is 10 Hz.
One challenge of this experiment is the strong vibration when the robot car moves on the ground. Since the sensor suite is attached to the chassis without any vibration absorber, the vibration passes directly to the sensor and causes severe jerky motions, as indicated by the built-in IMU data shown in Figure 4. When the robot car is stationary at the beginning, the IMU measurements are stable and small. As the car starts moving, the IMU measurements change rapidly.

Mapping Results
The mapping results of Park, Square, and Corridor are shown in Figures 5, 6, and 7, respectively. In each figure, we first present a global view of the final mapping result (i.e., subfigure (a)) and then focus on certain local areas (subfigure (b)) containing large planes (e.g., a wall). We investigate the consistency of points on the wall (subfigure (c)), from which we can compare the mapping accuracy of FAST-LIO2 (i.e., (b1), (c1)), Point-LIO-input (i.e., (b2), (c2)), and Point-LIO (i.e., (b3), (c3)). Finally, to show the in-frame motion distortion, points of one scan (an accumulation over one scan period of 0.1 s) in the above-selected local areas are shown (subfigure (d)), together with further zoom-ins (subfigure (e)), for FAST-LIO2 (i.e., (d1), (e1)), Point-LIO-input (i.e., (d2), (e2)), and Point-LIO (i.e., (d3), (e3)), where the red points are registered LiDAR points in the current scan and the white points are map points accumulated up to the current scan.
As shown in subfigures (c) of Figures 5, 6, and 7, the overall map of FAST-LIO2 (c1) is noticeably thicker than that of Point-LIO-input (c2), and Point-LIO (c3) produces an even thinner wall than Point-LIO-input. The reason for this phenomenon lies in the quality of the in-frame motion compensation within each individual scan, as shown in subfigure (e) of Figures 5, 6, and 7. All red points around the selected wall are supposed to belong to the same plane, but they scatter off the wall for FAST-LIO2 (e1) due to the in-frame motion distortion. This distortion is much alleviated for Point-LIO-input and Point-LIO ((e2) and (e3)).
FAST-LIO2 uses a backward propagation based on IMU measurements to project all points of a scan to the scan-end pose. This process is easily disturbed by IMU measurement noises, bias estimation errors, and the limited IMU sampling rate. In particular, due to the sensor vibration, the acceleration and angular velocity change at a high rate even within one IMU sampling interval, leading to large errors in the IMU propagation, which assumes the angular velocity and acceleration to be constant during one sampling interval. Moreover, the low frame rate (i.e., 10 Hz) also requires long (i.e., 100 ms) IMU propagation, which accumulates pose errors and results in large in-frame distortions, as shown in subfigures (b1) and (c1). In contrast, Point-LIO fuses each LiDAR point at its true sampling time without any point accumulation, which fundamentally eliminates the motion distortion, as shown in subfigure (c2) for Point-LIO-input and subfigure (c3) for Point-LIO. Furthermore, when comparing Point-LIO-input (c2) with Point-LIO (c3), both of which use the point-by-point update scheme, Point-LIO performs slightly better. This is because Point-LIO-input still uses the IMU measurements to propagate the state (although only over one LiDAR point interval), hence still suffering from IMU measurement noise and bias estimation errors. In contrast, Point-LIO uses the filtered rather than the raw IMU measurements to propagate the state, which slightly reduces the motion distortion, as observed in (e3). A video of an example sequence, the Park, is available online: https://youtu.be/oS83xUs42Uw.
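To see numerically why long constant-rate propagation hurts under vibration, consider a hypothetical sketch: integrate a sinusoidal angular velocity (made-up amplitude and frequency, standing in for vibration) while holding the rate constant over each propagation interval. A 100 ms frame-length interval accumulates a far larger angle error than a sub-millisecond point-style interval.

```python
import math

def angle_error(dt_prop, t_end=0.1, amp=5.0, freq=75.0):
    """Integrate w(t) = amp*sin(2*pi*freq*t) assuming w is constant over
    each interval dt_prop (sampled at the interval start); return the
    absolute final-angle error versus the closed-form integral."""
    theta, t = 0.0, 0.0
    while t < t_end - 1e-12:
        w = amp * math.sin(2.0 * math.pi * freq * t)  # rate at interval start
        theta += w * dt_prop                          # constant-rate assumption
        t += dt_prop
    # exact integral of amp*sin(2*pi*freq*t) over [0, t_end]
    exact = amp * (1.0 - math.cos(2.0 * math.pi * freq * t_end)) / (2.0 * math.pi * freq)
    return abs(theta - exact)

# one 100 ms frame-style step vs 0.25 ms point-style steps
err_frame = angle_error(dt_prop=0.1)      # large: the vibration is averaged away
err_point = angle_error(dt_prop=0.00025)  # small: rate is nearly constant per step
```

With these illustrative numbers, the frame-length step misses the vibration entirely, while the point-style step tracks the true angle closely; the real propagation error additionally compounds through attitude and velocity.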

Drift Results
The drifts of FAST-LIO2, Point-LIO-input, and Point-LIO are summarized in Table 1. Due to the imperfect operation of the robot car, the distance between the starting and ending positions is not exactly zero, but it is less than 10 cm. As shown, FAST-LIO2, Point-LIO-input, and Point-LIO all have comparable drifts for the Square and Corridor sequences, while for the Park sequence, FAST-LIO2 fails to return to the starting point. This is because the Park is an unstructured environment, which makes the effect of motion distortion on odometry accuracy more evident.
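The drift metric in this table reduces to a simple computation: a sketch, assuming an (N, 3) array of estimated positions from a sequence that physically returns to its start (the function names are ours, not from the released code).

```python
import numpy as np

def end_to_end_drift(positions):
    """Distance between the first and last estimated positions of a
    trajectory that physically starts and ends at the same place."""
    p = np.asarray(positions, dtype=float)
    return float(np.linalg.norm(p[-1] - p[0]))

def relative_drift(positions):
    """End-to-end drift as a fraction of the total traveled path length."""
    p = np.asarray(positions, dtype=float)
    length = float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))
    return end_to_end_drift(p) / length
```

Since the car's true return error is below 10 cm, any estimated end-to-end distance much larger than that is attributable to odometry drift.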

High Odometry Output Frequency and High Bandwidth
We test the odometry output frequency of FAST-LIO2, Point-LIO-input, and Point-LIO on an indoor dataset (denoted as "Odo") collected using the rotating platform (see Figure 3c) by giving quickly varying speed commands to the step motor; the LiDAR package rate is 100 Hz in this experiment. Even though the dataset is collected at a LiDAR rate of 100 Hz, the framework of FAST-LIO2 is naturally extendable to a higher state update frequency by splitting one frame into multiple frames (but below the IMU rate of 200 Hz). Thus, we divide one LiDAR frame into two for FAST-LIO2. Figure 8 shows the distribution of the number of odometry outputs per second. The output frequency for FAST-LIO2 is 200 Hz, which is the frame rate. In comparison, the output frequencies for Point-LIO-input and Point-LIO are in the range of 4–8 kHz, corresponding to the number of points that pass the plane correspondence check.
To enable the bandwidth analysis, the above experiment is reconducted with ground-truth measurements from the Vicon Tracker recorded at its highest frequency of 300 Hz. Dividing the system output (the yaw angle estimated by the odometry) by the system input (the ground-truth yaw angle measured by the Vicon system), we obtain the magnitude responses (dB) of FAST-LIO2, Point-LIO-input, and Point-LIO at different input frequencies, as shown in Figure 9. As can be seen, the magnitude response of FAST-LIO2 starts dropping when the input frequency is near 100 Hz, suggesting a 100 Hz bandwidth (see Table 2). 100 Hz is also the highest attainable bandwidth when the odometry output frequency is 200 Hz, according to the Nyquist–Shannon sampling theorem. In contrast, both Point-LIO-input and Point-LIO have bandwidths larger than 150 Hz, which is beyond the measuring capability of the Vicon system.
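The per-frequency gain underlying such a magnitude-response plot can be estimated, for example, by least-squares fitting sinusoid amplitudes to the input and output signals at the known excitation frequency. The sketch below assumes time-aligned, uniformly sampled yaw signals; the function name and signals are illustrative, not from the paper's pipeline.

```python
import numpy as np

def gain_db_at(freq_hz, t, input_sig, output_sig):
    """Magnitude response (dB) at a known excitation frequency: fit
    A*sin + B*cos to both signals by least squares and compare the
    fitted amplitudes (phase-insensitive)."""
    basis = np.column_stack([np.sin(2 * np.pi * freq_hz * t),
                             np.cos(2 * np.pi * freq_hz * t)])
    amp_in = np.linalg.norm(np.linalg.lstsq(basis, input_sig, rcond=None)[0])
    amp_out = np.linalg.norm(np.linalg.lstsq(basis, output_sig, rcond=None)[0])
    return 20.0 * np.log10(amp_out / amp_in)
```

Sweeping `freq_hz` over the excitation frequencies and plotting the returned gains yields a curve like Figure 9; the bandwidth is then read off where the gain crosses −3 dB (a gain of 0.707).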

Extremely Aggressive Motion with Saturated IMU Measurements
While Point-LIO-input and Point-LIO have shown comparable performance so far, in this section, we show that Point-LIO is able to track extremely aggressive motions even beyond the IMU measuring ranges. Two types of motion are produced in the experiments: one is a spinning motion (denoted as "Satu-1") and the other is circling in space (denoted as "Satu-2"). Both experiments suffer from IMU saturation after the initial stage, caused by either the high spinning rate or the large centrifugal forces. To the best of our knowledge, no prior SLAM system could cope with such aggressive motions or the saturated IMU measurements.

Spinning Motion
This experiment is conducted using the rotating platform placed in a cluttered laboratory environment (see Figure 10a1–a4). During the experiment, the sensor suite is rotated by the step motor under step angular speed commands, which increase from zero to a peak value step by step and then decrease to zero at the end. The resultant peak angular velocity is 75 rad s−1 (in yaw), which far exceeds the IMU measuring range, that is, 35 rad s−1. The high angular velocity also causes a peak acceleration of around 80 m s−2, also far beyond the IMU measuring range, that is, around 30 m s−2. The onboard FPV images shown in Figure 10c1 and c2 give an illustration of the rotation in process.
The mapping result of Point-LIO is shown in Figure 10b1, which shows a fairly consistent map of the environment, and the estimated ending position coincides with the starting position very well, as shown in Figure 10b2. The estimated kinematic states, including rotation in Euler angles and position, are compared with the ground truth in Figure 11a. The slightly large RMSE of translation is mostly caused by the y-direction, in which the constraints are insufficient from the beginning. Considering the extreme motions in the experiment, this translation error is acceptable. Another benefit of our Point-LIO is the capability of estimating the angular velocity and acceleration (which are states of our system and can hence be estimated by the Kalman filter) where the IMU is saturated. The estimates versus the IMU measurements are plotted in Figure 11b. As shown, during the time period 50–106 s, the IMU saturates (the z-axis of the gyroscope and the y-axis of the accelerometer), while our Point-LIO can still give a reasonable estimate. Outside this region, the estimates from our Point-LIO are in good agreement with the IMU measurements, albeit with some high-frequency components filtered out.
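The key idea of estimating rates through saturation can be shown with a deliberately simplified 1-D sketch: the angular velocity is a filter state, the gyro reading is modeled as an output and simply rejected while it sits at its saturation limit, and an unsaturated aiding measurement (standing in here for the pose constraints that the real filter derives from point-to-plane residuals) keeps updating the state. All noise values and the `track_rate` helper are made up for illustration.

```python
def track_rate(true_rates, aiding_rates, sat=35.0, q=1.0, r_gyro=0.1, r_aid=0.5):
    """Scalar Kalman filter: angular velocity is a state (random walk),
    the gyro is an *output*. Gyro samples at the saturation limit are
    skipped; the aiding measurements (a stand-in for LiDAR constraints)
    still update the state, so the estimate can exceed the gyro range."""
    x, P = 0.0, 10.0
    for w_true, w_aid in zip(true_rates, aiding_rates):
        P += q                               # random-walk propagation
        z = max(-sat, min(sat, w_true))      # gyro clips at +/- sat
        if abs(z) < sat:                     # reject saturated samples
            K = P / (P + r_gyro)
            x, P = x + K * (z - x), (1 - K) * P
        K = P / (P + r_aid)                  # aiding update always applies
        x, P = x + K * (w_aid - x), (1 - K) * P
    return x
```

For a constant true rate of 75 rad s−1 (beyond the ±35 rad s−1 gyro range), the estimate converges toward 75 because only the aiding measurements are trusted; below the range, both measurement types contribute.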
We further challenge Point-LIO by starting it at different initial angular speeds of the motor. As shown by the map results in Figure 12 and the RMSE of rotation in Table 3, when the initial angular velocity is below the IMU saturation value, that is, 35 rad s−1, Point-LIO is able to survive, constructing a reasonable map and state estimate. The quality of the state estimation is slightly degraded compared to the above case, where the sensor starts from a stationary pose. This degradation is reasonable since the fast initial angular speed causes Point-LIO to build a biased map at the very beginning, which further misleads the subsequent state estimation. This also causes the RMSE of rotation to increase with the initial angular velocity, as shown in Table 3. When the initial angular velocity is beyond the IMU measuring range, Point-LIO fails due to the drastically erroneous initial state estimate (e.g., the initial angular velocity estimate is set to zero while the actual value is above 35 rad s−1).
Finally, as a comparison, we run FAST-LIO2 and Point-LIO-input on the same dataset, and the error comparisons of rotation and position are presented in Figure 13. As shown, the estimates of FAST-LIO2, Point-LIO-input, and Point-LIO have comparable rotation errors during the first 50 s, where the IMU works normally. From time 50 s, where the IMU starts saturating, the rotation estimates of FAST-LIO2 and Point-LIO-input immediately start to diverge, and their position estimates also start to drift and then diverge. In conclusion, FAST-LIO2 and Point-LIO-input fail to work under saturated IMU measurements, while our proposed Point-LIO can survive as long as the saturation does not last from the very beginning.

Circling Motion
This experiment is conducted using the pendulum in the same laboratory environment (see Figure 14d). In this experiment, the sensor suite is tied to one end of a rope that swings into a circling trajectory in the vertical plane (see Figure 14b). This motion causes an acceleration at the bottom of the circle of up to 40 m s−2, which exceeds the IMU measuring range of 30 m s−2. The FPV images shown in Figure 14e give a visual illustration of the motion. More details are shown in a video available online: https://youtu.be/oS83xUs42Uw.
Qualitatively, Figure 14c1 and c2 show the mapping results of our proposed Point-LIO. Since the LiDAR faces forward with a 70.4° × 77.2° circular FoV, only one side of the laboratory is mapped. The estimated trajectory is shown in Figure 14a, which is in close agreement with the actual sensor path shown in Figure 14b.
Quantitatively, the estimated rotation in Euler angles and position are compared with the ground truth measured by Vicon in Figure 15a, where both the Euler angle estimation and the position estimation successfully capture the 12 sequential circles. The RMSEs of the average rotation and translation errors are 4.42° and 0.0990 m, respectively. Figure 15b shows the angular velocity and acceleration estimated by Point-LIO versus the IMU measurements. As shown, the estimates agree with the IMU measurements very well when the dynamic states are within the IMU measuring ranges. Moreover, our Point-LIO can give reasonable acceleration estimates even when the IMU saturates. Figure 16 further shows the error comparison among Point-LIO, Point-LIO-input, and the frame-based odometry FAST-LIO2. Similar to the last experiment, our Point-LIO achieves consistently lower estimation errors than the other two due to its ability to cope with IMU saturation.

Real-Time Performance
The average time cost of each step of Point-LIO for processing one scan of LiDAR points is shown in Figure 17, tested on an Intel i7-based micro-UAV onboard computer, a DJI Manifold 2-C with a 1.8 GHz quad-core Intel i7-8550U CPU and 8 GB RAM. The mapping step includes the search for nearest points and the addition of points to the map, and takes up the largest part of the time consumption. Even though the system state is updated at each LiDAR point, the time for EKF filtering, including state propagation and update, is less than 10 ms for the 10 Hz sequences and less than 1 ms for the 100 Hz sequences. The average total time consumption for one scan is compared among FAST-LIO2, Point-LIO-input, and Point-LIO in Table 4, where × indicates that the LIO fails in those sequences. As can be seen, our system, with either the input model (i.e., Point-LIO-input) or the output model (i.e., Point-LIO), has a time consumption comparable to FAST-LIO2, and all achieve real-time performance, that is, within 100 ms for the 10 Hz sequences and within 10 ms for the 100 Hz sequences. Finally, the average number of points processed per second (including those with and without plane correspondences) over all sequences is 33 710, and the average processing time per point is 9 μs.

Benchmark Results
In this section, we test Point-LIO on various public sequences, which have gentler motions without IMU saturation, and compare it with other state-of-the-art LIO methods, including FAST-LIO2, [29] LILI-OM, [11] LIO-SAM, [21] and LINS. [17] The computation platform for the benchmark comparison is the same lightweight UAV onboard computer as used in FAST-LIO2, [29] which is a DJI Manifold 2-C with a 1.8 GHz quad-core Intel i7-8550U CPU and 8 GB RAM; hence, the results for FAST-LIO2, LILI-OM, LIO-SAM, and LINS can be directly obtained from the original paper. [29] Since Point-LIO uses the same mapping structure as FAST-LIO2, for a fair comparison, we set the mapping parameters of Point-LIO to the default values of FAST-LIO2 used in ref. [29]: the local map size is L = 1000 m; the raw LiDAR points are directly fed into the state estimation after a 1:4 (one out of four LiDAR points) temporal downsampling; the spatial downsampling resolution is l = 0.5 m; and the rebalancing thresholds of the ikd-Tree are α_bal = 0.6, α_del = 0.5, and N_max = 1500.
For the EKF part of Point-LIO, the LiDAR measurement noise of the Kalman filter is set as ℛ_k = 10−2 · I (see (18)). These parameter values are kept the same for all sequences. We evaluate our method on the same 12 sequences used in FAST-LIO2, [29] which are drawn from 4 different public datasets: "lili" from LILI-OM, [11] "utbm", [59] "ulhk", [60] and "liosam" from the work LIO-SAM. [21] Among them, "lili" uses a solid-state 3D LiDAR, the Livox Horizon, while the other three datasets use spinning LiDARs: the Velodyne HDL-32E for "utbm" and "ulhk" and the VLP-16 for "liosam". We refer the readers to ref. [29] for more detailed information about the datasets and the selected sequences.

Accuracy Evaluation
Similar to ref. [29], two criteria are used for the accuracy evaluation: the RMSE of the average translation error (for sequences with a good ground-truth trajectory) and the end-to-end error (for sequences starting and ending at the same location).
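For reference, the first criterion reduces to the following computation once the estimated and ground-truth trajectories are time-associated (and, where needed, aligned); a sketch with hypothetical names, not the benchmark's evaluation script.

```python
import numpy as np

def translation_rmse(est_positions, gt_positions):
    """Root-mean-square of the per-pose translation error norm between
    time-associated estimated and ground-truth positions, both (N, 3)."""
    err = np.asarray(est_positions, float) - np.asarray(gt_positions, float)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```

The second criterion, the end-to-end error, needs no ground-truth trajectory at all: it is simply the distance between the estimated start and end positions of a physically closed loop.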

RMSE Benchmark
The RMSEs are computed using the same method as in ref. [29] and are reported in Table 5. Compared with the other LIO methods, our Point-LIO achieves the best performance in 4 out of 5 sequences, with an especially obvious improvement on utbm_9, while it has a slightly higher RMSE on liosam_1. The overall accuracy of our method is on par with (and in most cases better than) the counterparts.

Drift Benchmark
The end-to-end errors are reported in Table 6. The overall trend is similar to the RMSE benchmark results: our Point-LIO achieves the lowest drifts in 5 out of 7 sequences. The result for sequence lili_8 is worse than LILI-OM and FAST-LIO2 because LILI-OM tunes parameters for each of its own "lili" sequences, while the parameters of Point-LIO are kept the same across all sequences. Since lili_8 has a much longer trajectory than the other two "lili" sequences, the drift caused by the inappropriate parameters accumulates along the trajectory, resulting in a drift more than 10 m worse than FAST-LIO2. Also, Point-LIO shows a slightly larger error on sequence ulhk_6 relative to FAST-LIO2, albeit by a very small margin. As can be seen from the above benchmark results, Point-LIO achieves higher accuracy in most sequences, while for the rest, the margin from the best method is not significant. Considering the various types of LiDARs, environments, and moving platforms across all datasets and sequences, this effectively shows the accuracy and robustness of our method on real-world data.

Processing Time Evaluation
Table 7 shows the processing times of Point-LIO, FAST-LIO2, LILI-OM, LIO-SAM, and LINS in all the sequences. Both Point-LIO and FAST-LIO2 integrate odometry and mapping together, where the map is updated immediately at each step the odometry is updated. Therefore, the total time ("Total" in Table 7) counts all procedures occurring in the odometry, including point-to-map matching, state estimation, and mapping. On the other hand, LILI-OM, LIO-SAM, and LINS are all based on a separate architecture of odometry (including feature extraction and rough pose estimation) and mapping (such as back-end fusion in LILI-OM, [11] incremental smoothing and mapping in LIO-SAM, [21] and map refining in LINS [17]), so their average processing time per LiDAR scan is the sum of those two parts ("Odo." and "Map.", respectively, in Table 7) when ranking the computation time.
As shown, our proposed method and FAST-LIO2 achieve the least computation time when compared with the other methods.
When compared to FAST-LIO2, our method takes less time on 7 out of 12 sequences, and the average computation times of the two methods are very close. Note that, due to its frame-based architecture, FAST-LIO2 uses four threads to parallelize the nearest-point search, while our method, due to its point-wise architecture, has to perform such operations sequentially. Still, our method takes a comparable average computation time, suggesting a lower use of computation resources, which could be reserved for other modules (e.g., planning, control). This computational efficiency is attributed to the great sparsity of the system and the elimination of iteration in the Kalman filter, as explained in Section 4.4.
In summary, Point-LIO has accuracy and computational efficiency comparable to FAST-LIO2, while costing fewer computation resources. Meanwhile, Point-LIO is significantly faster than the other state-of-the-art LIO algorithms, that is, LILI-OM, LIO-SAM, and LINS, while achieving highly competitive or better accuracy.

Applications
In this section, we apply our proposed Point-LIO to the state estimation of two UAVs. One is a racing quadrotor drone, shown in Figure 18a, which has a thrust-to-weight ratio of up to 5.4; this high thrust-to-weight ratio enables the drone to perform extremely agile motions. The other is an agile, single-actuated aircraft called a self-rotating UAV, shown in Figure 18b. Its propeller blades are attached to the motor shaft through two passive hinges, [61,62] which, by modulating momentary acceleration and deceleration of the propeller rotation speed, can produce the roll and pitch moments necessary to stabilize the UAV's attitude. Due to the uncompensated moment produced by the motor, the UAV undergoes a high-rate continuous yaw rotation.

Racing Drone
As shown in Figure 18a, the racing drone is mounted with a Livox Avia LiDAR and an FPV camera. The LiDAR FoV is aligned to that of the FPV camera, based on which an expert human pilot manually controls the drone to perform extremely agile flight maneuvers. The flights are conducted above farmland with vegetation, a pond, and buildings (see Figure 19b,c). Several aggressive maneuvers are performed during the flight, including extremely fast rolling flips (see Figure 19e1–e3), cliff diving, and lateral accelerations. During the flips, the angular velocity reaches 59.37 rad s−1, which exceeds the IMU measuring range of 35 rad s−1. Readers can refer to the accompanying video (https://youtu.be/oS83xUs42Uw) for a better visual illustration of the experiment. We conducted two flights, and our method succeeded in both of them. Due to the space limit, we present the results of one flight only. The mapping results are shown in Figure 19d1–d3; the constructed map is consistent, with easily distinguishable fine structures on the ground, such as trees and buildings. The estimated rotation in Euler angles, position, and velocity are shown in Figure 20a, and the estimated angular velocities versus the IMU measurements are shown in Figure 20b. From these results, we can see that our method is able to estimate the drone's states even under extremely aggressive motions. Notably, the maximum velocity on one axis reaches 14.63 m s−1, the maximum acceleration reaches 30 m s−2, and the angular velocity reaches 59 rad s−1. Besides the extremely agile motion, the LiDAR also occasionally faces the sky during maneuvering, which yields no LiDAR measurements. Still, our method is able to estimate the state stably.
Self-Rotating UAV
Figure 18b presents the self-rotating UAV. As shown in this figure, the Livox Avia LiDAR faces forward, leading to a fast FoV change when the UAV undergoes continuous yaw rotation. We use the LiDAR built-in IMU and set the measuring range to 17.5 rad s−1, while the average yaw rate of the UAV is around 25 rad s−1. The UAV is also equipped with a low-power, ARM-based computer, a Khadas VIM3 Pro, which has a 2.2 GHz quad-core Cortex-A73 CPU and 4 GB RAM. The onboard computer runs our Point-LIO to estimate the UAV state in real time. The estimated state is then fed to the flight controller, a Pixhawk 4 Mini, for the real-time control task. Two experiments are conducted using this UAV: an outdoor experiment outside the Haking Wong building of the University of Hong Kong, shown in Figure 21a1 and a2, and an indoor experiment in a cluttered laboratory, shown in Figure 21b1 and b2. The real-time mapping results of the two experiments are shown in Figure 21, that is, (c) for the outdoor experiment and (d) for the indoor experiment. It is seen that our method manages to build maps of both environments without noticeable mismatches. Figure 22a,b further show the estimates of the kinematic state (i.e., rotation, position, and velocity) and of the angular velocity and acceleration, respectively. The plots are enlarged during the time interval 53–55 s to better show the estimation results. As shown, our method produces a stable state estimate that is in line with the IMU measurements, despite the IMU saturation in the middle and the quick FoV changes caused by the fast spinning. The average processing time per LiDAR package of our method is 14.63 ms, while the package rate is 50 Hz, ensuring real-time performance. This is also confirmed by the experiments, where the controller is able to perform stable controlled flight with the state feedback from our method.

Conclusion
This article proposes Point-LIO, a robust and high-bandwidth LIO framework. The framework is based on a novel point-by-point update scheme, which updates the system state at the true sampling time of each point without accumulating points into a frame. The elimination of point accumulation removes the long-standing in-frame motion distortion and allows high-rate odometry output at nearly the point sampling rate (4–8 kHz), which further enables the system to track very fast motions. To further boost the system bandwidth beyond the IMU measuring range, a colored stochastic process is augmented into the kinematic model, treating the IMU measurements as system outputs. The bandwidth, robustness, accuracy, and computational efficiency of the developed system are exhaustively tested in real-world experiments with extremely aggressive motions and on public datasets with diversified LiDAR types, environments, and motion patterns. In all tests, Point-LIO achieves computational efficiency and odometry accuracy comparable to other state-of-the-art LIO algorithms while significantly boosting the system bandwidth.
As an odometry, Point-LIO could be used in various autonomous tasks, such as trajectory planning, control, and perception, especially in cases involving very fast ego-motions (e.g., in the presence of severe vibration and high angular or linear velocity) or requiring high-rate odometry output and mapping (e.g., for high-rate feedback control and perception).