Consumer‐grade UAV solid‐state LiDAR accurately quantifies topography in a vegetated fluvial environment

Unoccupied aerial vehicles (UAVs) with passive optical sensors have become popular for reconstructing topography using Structure from Motion (SfM) photogrammetry. Advances in UAV payloads and the advent of solid-state LiDAR have made consumer-grade active remote sensing equipment more widely available, potentially providing opportunities to overcome some challenges associated with SfM photogrammetry, such as vegetation penetration and shadowing, that can occur when processing UAV-acquired images. We evaluate the application of a DJI Zenmuse L1 solid-state LiDAR sensor on a Matrice 300 RTK UAV to generate digital elevation models (DEMs). To assess the effect of flying height (60–80 m) and speed (5–10 m s⁻¹) on accuracy, four point clouds were acquired at a test site. These point clouds were used to develop a processing workflow to georeference, filter and classify the point clouds to produce a raster DEM product. A dense control network showed no significant difference in georeferencing between differing flying heights or speeds. Building on the test results, a 3 km reach of the River Feshie was surveyed, collecting over 755 million UAV LiDAR points. The Multiscale Curvature Classification algorithm was found to be the most suitable classifier of ground topography. GNSS check points showed a mean vertical residual of −0.015 m on unvegetated gravel bars. Multiscale Model to Model Cloud Comparison (M3C2) residuals between UAV LiDAR and Terrestrial Laser Scanner point clouds at seven sample sites demonstrated a close match, with residuals close to zero. Solid-state LiDAR was effective at penetrating sparse canopy-type vegetation but penetrated dense ground-hugging vegetation (e.g. heather, thick grass) poorly. Whilst UAV solid-state LiDAR needs to be supplemented with bathymetric mapping to produce wet–dry DEMs, by itself it offers advantages over comparable geomatics technologies for kilometre-scale surveys.
Ten best practice recommendations will assist users of UAV solid‐state LiDAR to produce bare earth DEMs.


| INTRODUCTION
Unoccupied aerial vehicles (UAVs; Joyce, Anderson, & Bartolo, 2021) have been transformative in providing a platform to deploy sensors to quantify the topography of the Earth's surface, for investigations from the spatial scale of individual landform features upwards (Piégay et al., 2020; Tomsett & Leyland, 2019). Where logistical or legislative constraints allow flying, and spatial coverage can be achieved timeously, UAV-mounted sensors have largely superseded alternative approaches to surveying, including terrestrial laser scanning (TLS; Alho et al., 2011; Brasington, Vericat, & Rychkov, 2012; Williams et al., 2014). Sensors that have been mounted onto UAVs to acquire data to map topography can be grouped into two remote sensing categories: passive and active (Lillesand, Kiefer, & Chipman, 2015). To date, the former category has dominated geomorphological applications, but technological developments in LiDAR technology herald the potential for the return of more active remote sensing methods for topographic reconstruction.
Passive sensors include digital cameras that acquire images for subsequent use in Structure from Motion (SfM) photogrammetry. Although SfM photogrammetry has enabled a plethora of geomorphic investigations (e.g. Bakker & Lane, 2017; Cucchiaro et al., 2018; Eschbach et al., 2021; Llena et al., 2020; Marteau et al., 2017), there are aspects of SfM photogrammetry that limit what can be achieved to reconstruct topography. The passive nature of the technology poses particular problems for reconstructing bare earth topography; imagery cannot penetrate vegetation cover, and vegetated areas are typically associated with poorer processing quality due to weaker image matching (Eltner et al., 2016; Iglhaut et al., 2019; Resop, Lehmann, & Cully Hession, 2019).
Shadows caused by vegetation and/or topographic features also reduce and sometimes eliminate the effectiveness of SfM photogrammetry in what are often key areas of a survey, such as steep and geomorphologically dynamic river banks (Kasvi et al., 2019; Resop, Lehmann, & Cully Hession, 2019). Whilst workflows to minimise potential systematic errors, such as large forward and lateral overlap of imagery, as well as double grid flying patterns (James & Robson, 2014; Wackrow & Chandler, 2011), have been established, these do not overcome localised errors that arise from image quality, and in many situations they significantly add to UAV flight time.
In contrast to SfM photogrammetry, active remote sensing offers a direct survey of topography. Airborne Light Detection and Ranging (LiDAR) surveys (Glennie et al., 2013) that have been acquired using sensors mounted on crewed planes or helicopters have been transformative in enabling the construction of digital elevation models (DEMs) at spatial scales >1 km². Such datasets have been widely used for a variety of geomorphological investigations (Clubb et al., 2017; Jones et al., 2007; Sofia, Fontana, & Tarolli, 2014). Although the importance of these sensors cannot be overstated (Tarolli & Mudd, 2020), the cost of the instruments and associated deployment logistics have limited most geomorphologists to using archival airborne LiDAR datasets (Crosby, Arrowsmith, & Nandigam, 2020). Early integration of LiDAR sensors on UAV platforms was demonstrated in forestry applications (Jaakkola et al., 2010; Lin, Hyyppä, & Jaakkola, 2011; Wallace et al., 2012). More recently, UAV LiDAR, including topographic-bathymetric systems, has been demonstrated across several fluvial environments and applications (e.g. Islam et al., 2021; Mandlburger et al., 2020; Resop, Lehmann, & Cully Hession, 2019; Resop, Lehmann, & Hession, 2021). Despite these pertinent examples, the growth trajectory of UAV LiDAR surveys remains significantly slower than that of UAV SfM photogrammetry when it was in its geomorphic application infancy (Babbel et al., 2019; Pereira et al., 2021), due to the relatively high entry cost of LiDAR sensors and the associated large-payload UAV platforms required. However, a new generation of cheaper, solid-state LiDAR sensors (Štroner, Urban, & Línková, 2021) offers potential for a return to active remote sensing of dry topography, now using UAV platforms. However, this technology has not yet been applied and assessed in geomorphic environments.
LiDAR measurements in their traditional form consist of a pulse or wave being emitted from a laser sensor, which is steered across an area of interest using moving components (i.e. mirrors) that are precisely aligned and regularly calibrated. Either the time-of-flight between the emission of the laser and its subsequent reflection or variability in the reflected laser frequency is then used to determine range. Many LiDAR sensors can also detect multiple returns (Resop, Lehmann, & Cully Hession, 2019; Wallace et al., 2012), usually based on the intensity of the return. In contrast to traditional LiDAR, solid-state LiDAR systems feature few or no moving parts, being composed of modern electronic components instead. They use an array of aligned sensors, which when combined enable significantly increased scanning rates (Velodyne LiDAR, 2022). The development of solid-state LiDAR can be traced back to obstacle avoidance and navigation for autonomous vehicle development in the mid-2000s, when the limited scanning rate of mechanical LiDAR systems was deemed insufficient for these tasks (Pereira et al., 2021; Raj et al., 2020). The difference between mirror-based mechanical and solid-state LiDAR systems parallels the difference between traditional whisk-broom and newer push-broom scanning systems found on space-based satellites (Abbasi-Moghadam & Abolghasemi, 2015). The change in internal components from mechanical to electronic resolves limitations in mounting LiDAR units on UAVs that arise from the relatively large size, fragility and cost of mirror-based sensors. Indeed, the escalating demand for solid-state LiDAR units from the automotive, robotic production line and autonomous delivery industries (Kim et al., 2019) has necessitated scalable manufacture of these units and a subsequent reduction in unit cost.
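The time-of-flight principle described above reduces to a simple calculation: range is half the round-trip distance travelled by the pulse at the speed of light. The following Python sketch is purely illustrative (the function name and example timing are ours, not drawn from any sensor specification):

```python
# Illustrative time-of-flight ranging calculation (not sensor firmware).
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_range(round_trip_time_s: float) -> float:
    """Range to the target: half the round-trip path length of the pulse."""
    return C * round_trip_time_s / 2.0

# A return detected ~0.5 microseconds after emission implies a range of ~75 m,
# i.e. within the flying heights used in this study.
range_m = tof_range(0.5e-6)
```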
Moreover, automotive specifications for this technology have demanded a wide field-of-view (FOV) and fine angular resolution to enable higher detail at longer range, meaning solid-state instruments are often of comparable or better quality than their traditional mechanical counterparts.
The aim of this paper is to evaluate the performance of a consumer-grade solid-state LiDAR sensor mounted on a UAV to reconstruct the topography of a vegetated fluvial environment. Our first objective is to acquire and process LiDAR point clouds using a variety of UAV flight heights and speeds and assess their associated horizontal and vertical errors, for a test site, an artificial grass football pitch. Our second objective is to acquire and assess a LiDAR survey of a 3-km-long reach of the braided River Feshie to quantify dry topography. In the discussion, we (i) reflect upon the advantages of consumer-grade LiDAR compared with the existing set of geomatics technologies that are available for geomorphologists to quantify the form of the Earth's surface, (ii) discuss errors in vegetated areas and approaches that could be used to quantify topography in wet areas and (iii) offer recommendations for acquiring airborne LiDAR surveys with UAVs.

| LIDAR SENSOR AND FIELD SETTING
We focus upon testing a DJI Zenmuse L1 solid-state LiDAR sensor, which integrates a Livox AVIA solid-state LiDAR module, a high-accuracy Inertial Measurement Unit (IMU), and a camera with a 1-inch Complementary Metal Oxide Semiconductor (CMOS) sensor on a 3-axis stabilized gimbal. The DJI L1 solid-state LiDAR sensor was mounted on a DJI Matrice 300 Real-Time Kinematic (RTK) UAV platform, which is capable of undertaking mapping flights of around 35 min with the sensor payload. The aircraft and sensor were linked to a D-RTK 2 GNSS base station by radio to enable the receipt of accurate RTK-GNSS position data.
Testing of the DJI L1 solid-state LiDAR system was undertaken at the University of Glasgow Garscube Sports Campus (Figure 1b) to assess the positional accuracy of the system. An artificial sports pitch was chosen as the initial test site, given the relative flatness of the football pitch, the abundance of pitch markings for check points and the ability to easily distribute and position a further dense grid of ground control targets.
A braided reach of the River Feshie, Scotland, was chosen to assess the LiDAR system in a natural vegetated fluvial environment (Figure 1c). This reach is iconic as a site to assess geomatics technologies for the quantification of topography, including RTK-GNSS (Brasington, Rumsby, & McVey, 2000), aerial blimps (Vericat et al., 2008), TLS (Brasington, Vericat, & Rychkov, 2012), wearable LiDAR (Williams, Lamy, et al., 2020) and RTK-GNSS-positioned UAV imagery for SfM photogrammetry (Stott, Williams, & Hoey, 2020), as well as geomorphological application to quantify sediment budgets (Wheaton et al., 2010) and to shed light on the mechanisms of channel change (Wheaton et al., 2013). This history of innovation, and the low vertical amplitude of topographic variation, made this both an ideal and a challenging site to test the use of the LiDAR in a natural environment. The Feshie reach is characterised by a D50 surface grain size of 50–110 mm (Brasington, Vericat, & Rychkov, 2012). At the time of survey, the reach featured a network of shallow anabranches, which were up to c. 1 m in depth and occupied c. 15% of the active width. The active reach features a number of vegetated bars, colonised with grasses, sedges and heather, as well as Scots pine (Pinus sylvestris), silver birch (Betula pendula) and common/grey alder (Alnus glutinosa/Alnus incana). Across the River Feshie riverscape, woody vegetation densities are generally increasing across the valley bottom, including within and on the banks of the active channel, due to an active and ongoing approach to manage deer numbers (Ballantyne et al., 2021). The presence of a variety of vegetation, with different heights and densities, presents a useful applied context for evaluating the ability of the LiDAR system to detect ground returns.

| UAV LiDAR data collection
Flights were planned directly in the DJI Pilot app on the aircraft controller, using imported KML polygon areas. Automated IMU calibration was activated; LiDAR scan side overlap was set to 50%; and triple returns were recorded, with a sampling rate of 160 kHz. The flight path pattern was aligned at both sites to remain within UK CAA Visual Line-of-Sight recommendations for flying UAVs. Moreover, the flight path patterns ensured that sufficiently frequent sharp turning (every 100 s or every 1000 m with a flight speed of 10 m/s) was undertaken for IMU calibration purposes, in line with the manufacturer recommendations. The LiDAR data were stored on an SD card within the DJI L1 solid-state LiDAR sensor.
This initial testing at Garscube consisted of four flights over a synthetic football pitch and surrounds, each with different flying heights (60 and 80 m) and speed variables (5 and 10 m/s; Table 1).
At the River Feshie site, the required flight path pattern resulted in the reach being split into six flight blocks (Table 1).

| GNSS data collection
Twenty-six chessboard pattern ground control targets were laid in a semi-regular pattern across the Garscube sports pitch (Figure 1b).

| UAV LiDAR data processing
The Garscube datasets were used to develop a data processing workflow from the point cloud through to an output digital terrain model (DTM; Figure 2); this workflow was subsequently applied to process the River Feshie data. The data were first processed in DJI Terra software to create an initial LAS point cloud file and flight path trajectory files. In this step, processing involved the initial georeferencing of the point cloud, based on the RTK-GNSS onboard the aircraft (direct georeferencing; Dreier et al., 2021), using the Optimise Point Cloud Accuracy setting. The point cloud was then exported in WGS84 latitude and longitude coordinates with ellipsoidal heights.
Next, the data were imported into TerraSolid software and processed. The point cloud data were thinned (Resop, Lehmann, & Cully Hession, 2019) using two processes to reduce and balance the point density, so that processing over larger areas (e.g. the Feshie study area, c. 1.5 km²) did not become computationally cumbersome because of the high point densities (Table 1).
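The thinning step can be conveyed with a minimal voxel-based decimation in Python. This is a generic sketch of point density reduction using invented data, not the specific TerraSolid routines applied in the study:

```python
import numpy as np

def voxel_thin(points: np.ndarray, cell: float) -> np.ndarray:
    """Keep one point per cubic voxel of side `cell` (metres),
    reducing and balancing point density across the survey area."""
    keys = np.floor(points / cell).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]

# Synthetic cloud: 10,000 points over a 10 m x 10 m x 10 m volume.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(10_000, 3))
thinned = voxel_thin(pts, cell=1.0)  # at most 1000 occupied voxels remain
```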

| XYZ residual analysis: GCPs
Two methods were used to select LiDAR points from each pre-processed point cloud.

| Z residual analysis: GCPs and check points
Upon initial inspection of some of the orthometric height results from the point-to-point methods described above, some significantly larger residuals were identified. Further investigation determined that these occurred when the selected LiDAR point was not representative of the local sample of points and their recorded orthometric heights (Figure 3d). Therefore, an additional method of residual analysis was devised that used a sample of the LiDAR points located within a set radius of each target. The Multiscale Curvature Classification (MCC) algorithm of Evans and Hudak (2007), with a scale (λ or s) of 1.5 and a curvature tolerance (t) of 0.3, was used based on the findings of these tests. Due to the intensity of computational processing, each of the six River Feshie point clouds was processed separately to extract a subset of ground-classified points.
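A local-sample residual of this kind can be sketched as follows; the radius value and the choice of a median as the sample statistic are our illustrative assumptions, not necessarily those used in the study:

```python
import numpy as np

def local_z_residual(cloud: np.ndarray, target, radius: float = 0.25) -> float:
    """Vertical residual at a GNSS target: median height of the LiDAR points
    within `radius` m (horizontal) of the target, minus the GNSS height.
    Sampling locally damps the influence of a single unrepresentative point."""
    d = np.hypot(cloud[:, 0] - target[0], cloud[:, 1] - target[1])
    sample = cloud[d <= radius, 2]
    return float(np.median(sample) - target[2]) if sample.size else float("nan")

# Five nearby returns sit 0.02 m above a 100.00 m target; one distant blunder
# at 90.00 m falls outside the radius and cannot skew the residual.
cloud = np.array([
    [0.00, 0.00, 100.02],
    [0.05, 0.00, 100.02],
    [0.00, 0.05, 100.02],
    [-0.05, 0.00, 100.02],
    [0.00, -0.05, 100.02],
    [5.00, 5.00, 90.00],
])
residual = local_z_residual(cloud, (0.0, 0.0, 100.00))  # ~ +0.02 m
```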
The ground-classified point clouds (four at Garscube, six at Feshie) were then interpolated into a raster DTM of 0.2-m resolution using the Topo to Raster tool in ArcGIS Pro (Hutchinson, 1989; Smith, Holland, & Longley, 2003). Three flight blocks at the Feshie were merged into each single interpolation, meaning only two halves then needed to be merged, using the centre of the overlap zone between Flight 3 and Flight 4. The Feshie and Garscube DTMs were then also assessed for vertical accuracy against the known GNSS heights, using data from all the various surface and target types.
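The gridding step can be illustrated generically. The sketch below uses SciPy's triangulation-based linear interpolation on invented points; it is not the ANUDEM spline algorithm behind the Topo to Raster tool, only a demonstration of interpolating scattered ground returns onto a 0.2 m raster:

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic ground-classified points on a gently sloping surface.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(2000, 2))
z = 0.05 * xy[:, 0] + 0.02 * xy[:, 1]

# 0.2 m raster over the extent (cell centres), matching the DTM resolution used.
gx, gy = np.meshgrid(np.arange(0.1, 10.0, 0.2), np.arange(0.1, 10.0, 0.2))
dtm = griddata(xy, z, (gx, gy), method="linear")  # NaN outside the convex hull
```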

| TLS comparison-River Feshie
TLS data collected at seven sample sites across the River Feshie were used to quantify the M3C2 differences (Lague, Brodu, & Leroux, 2013) between the UAV LiDAR and the TLS point clouds (see also Table 2). Firstly, the planimetric and vertical correspondence between the two point clouds was assessed.

| Ground classification and DTM creation
Ground classification is a key step to produce a realistic terrain product for further use. Therefore, particular attention was paid to selecting the best algorithm and parameters for the variety of features seen in vegetated fluvial environments.
Three different ground classification algorithms, with a range of associated parameters, were tested on Garscube Flight 1 and a test area within the River Feshie site. This resulted in 146 test point clouds and nearly 2500 residual calculations. These residuals were tested for statistically significant differences between the algorithms across all parameter settings. Figure 7 shows the distribution of residuals for each algorithm; almost no difference can be seen between them. All three algorithms converge around minimal to no elevation residual when compared against the GNSS measurements, and their performance could not be statistically separated. The MCC algorithm (with λ = 1.5 and t = 0.3 as input parameters) was chosen for ground classification for two reasons. First, it gave the best qualitative result, removing non-ground features such as buildings and trees from the test sites. Second, it did not remove too much data, thereby avoiding the large holes in the point cloud that were associated with alternative algorithms and parameter settings.
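The curvature-threshold idea behind MCC can be conveyed with a deliberately simplified one-dimensional pass; this is our illustration of the principle, not the published multiscale, iterative algorithm of Evans and Hudak (2007). Points rising more than a tolerance t above a locally interpolated surface are flagged as non-ground:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def curvature_threshold_pass(z: np.ndarray, window: int, t: float = 0.3) -> np.ndarray:
    """Single simplified pass: smooth the elevation profile to approximate a
    local surface, then retain points no more than `t` above it as ground.
    MCC itself repeats this test iteratively across multiple scales."""
    surface = uniform_filter1d(z, size=window)
    return (z - surface) <= t  # True = ground, False = non-ground

# Flat ground with a single 2 m 'shrub' return at the profile centre.
z = np.zeros(21)
z[10] = 2.0
ground = curvature_threshold_pass(z, window=5, t=0.3)  # only z[10] rejected
```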
Converting point cloud data into continuous gridded raster products required an appropriate interpolation method. Further analysis was undertaken with all four Garscube flights, evaluating the Topo to Raster interpolation available in ESRI ArcGIS products (Hutchinson, 1989; Smith, Holland, & Longley, 2003).

| M3C2 differences
FIGURE 7 Boxplots for each of the three ground classification algorithms trialled using the lidR coding package (Roussel et al., 2020).
The local M3C2 calculations for the seven sample sites, which compared the UAV LiDAR and TLS point clouds, showed residuals close to zero on unvegetated surfaces. The inherent noise in the point cloud data (Figure 6) will, however, likely occlude opportunities for grain size mapping from elevation distributions, as demonstrated in a range of investigations that have developed empirical relationships between detrended surface roughness and grain size (e.g. Brasington, Vericat, & Rychkov, 2012; Pearson et al., 2017; Reid et al., 2019).
The UAV solid-state LiDAR to TLS point cloud comparison clearly indicates residuals close to zero in unvegetated areas. Thus, future geomorphic applications of the DJI L1 solid-state LiDAR sensor need not conduct error analysis to the degree that has been undertaken here to quantify horizontal and vertical residuals. Table 2 summarises the errors from this investigation relative to those from alternative geomatics technologies. The errors reported here for the River Feshie, using UAV solid-state LiDAR, are comparable with those from the other geomatics technologies detailed. However, the UAV solid-state LiDAR system also enables a larger extent to be covered at a much higher survey density. Although the workflow is not fully streamlined into one software application, it is both reproducible and modifiable. Indeed, since the collection and processing of the Garscube and River Feshie datasets, updates to DJI Terra software could further streamline the processing workflow with respect to coordinate conversions, datums and point cloud densities.

| Vegetation and bathymetry
An advantage of using active remote sensing techniques, such as LiDAR, is their penetration of vegetation and thus the ability to derive a bare earth DTM instead of a vegetated digital surface model (DSM). In this paper, we demonstrate that the error in vegetated areas varies (from −0.007 m upwards) depending on the vegetation present. To delineate wet areas, we conducted a post-survey digitisation to map water extent from the orthomosaic image produced by the camera in the L1 solid-state LiDAR sensor, further supported by measured RTK-GNSS positions along the channel edge. However, several other semi-automated approaches could also be considered to identify the extent of wet areas, such as the use of spectral information from the orthomosaic image to colour the LiDAR point cloud (e.g. Islam et al., 2021), waveform feature statistics and neighbourhood analysis (Guo et al., 2023) or a more advanced geometric approach (e.g. Passalacqua et al., 2010). All these suggested semi-automated approaches currently utilise raster data formats (i.e. orthoimagery or a digital elevation model), but there may be potential to explore the use of the original LiDAR point cloud data.
Once the wet area extent has been established, there are three broad approaches that could be applied to reconstruct the topography of wet areas, which could subsequently be fused (Williams et al., 2014) into the dry bare earth DTM. First, wet topography could be directly surveyed using a robotic total station, RTK-GNSS or echo-sounding (e.g. Williams et al., 2014; Williams, Bangen, et al., 2020). Second, RGB images that are acquired as part of the DJI L1 solid-state LiDAR survey, to colourise the point cloud, could be used to produce an orthomosaic image, and depth could then be reconstructed using spectrally based optimal band ratio analysis (OBRA; Legleiter, Roberts, & Lawrence, 2009), a technique that has been operationalised by Legleiter (2021) in the Optical River Bathymetry Toolkit (ORByT). This approach requires glint-free images, or images with glint removed (Overstreet & Legleiter, 2017), and independent depth observations to select the band ratio that yields the strongest correlation between depth and the image-derived quantity. Third, a set of RGB images could be acquired from the UAV platform, processed using SfM photogrammetry and then corrected for light refraction through the water column using either a constant refractive index (Woodget et al., 2015) or refraction correction equations derived for every point and camera combination in an SfM photogrammetry point cloud (Dietrich, 2017). All three approaches require water surface elevation to be reconstructed before bed levels are calculated; this requires diligence and can be a source of significant error (Williams et al., 2014; Woodget, Dietrich, & Wilson, 2019). Of these three approaches, optical empirical bathymetric reconstruction requires the least additional data collection and processing; direct survey involves time-consuming ground-based sampling, whereas bathymetric correction techniques require images and the computational overheads associated with SfM photogrammetry.
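The OBRA approach described above regresses field-measured depths against the log ratio of two image bands and selects the band pair yielding the strongest correlation. A minimal single-pair sketch follows, using synthetic reflectances and depths; real applications iterate over all band pairs and require glint-free imagery:

```python
import numpy as np

def obra_fit(band1, band2, depths):
    """Fit depth = a * ln(band1 / band2) + b and report the fit strength (R^2),
    following the optimal band ratio analysis idea of Legleiter et al. (2009)."""
    x = np.log(np.asarray(band1, float) / np.asarray(band2, float))
    slope, intercept = np.polyfit(x, depths, 1)
    r2 = float(np.corrcoef(x, depths)[0, 1] ** 2)
    return slope, intercept, r2

# Synthetic calibration data constructed so that depth = 2.0 * x + 0.1,
# where x is the log band ratio; the fit should recover those coefficients.
x_true = np.linspace(0.1, 1.0, 10)
band1, band2 = np.exp(x_true), np.ones(10)
depths = 2.0 * x_true + 0.1
slope, intercept, r2 = obra_fit(band1, band2, depths)
```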
All these techniques are widely established and have been applied to a range of rivers; it is thus beyond the scope of our investigation to demonstrate them here for the Feshie. Guidance on ground classification parameters was drawn from Evans and Hudak (2007) and other lidR package documentation. Sinkhole-type artefacts, seen in some of our early test results with other, anthropogenically focused algorithms (e.g. in TerraSolid), were elucidated by Evans and Hudak (2007) as negative blunders resulting from scattering of the LiDAR pulses. The sinkhole artefacts tended to be most obvious on harder surfaces such as roads and gravel bars, because of the uniformity of these surfaces. These sinkholes appeared to result from commission errors (classifying a non-ground point as ground, a false positive) involving erroneous points that lay below the actual ground, and they caused significant artefacts in the first tests of gridded raster terrain model outputs. These sinkhole artefacts did not appear with the more 'natural' algorithms like MCC, which was used in the final product, although anthropogenic areas (e.g. farm buildings, Figure 10b) did have artefacts that were of less concern given the topographic context.

| Best practice recommendations
Item 8 considers the choice of algorithm used to interpolate the classified point cloud to a raster.
Item 9 focuses on accuracy assessment. At the same stage as flight and independent survey data planning, the accuracy assessment requirements need to be considered. It is recommended that these are split into three stages: pre-processing, to assess the survey; post-processing, to assess the ground classification; and raster interpolation, to assess the gridded product. Finally, the approach for reconstructing wet areas, if required, needs to be determined. Options are discussed above, in Section 5.2, and may influence flight planning and the need to acquire depth data.

| CONCLUSION
This investigation has evaluated a new consumer-grade UAV solid-state LiDAR sensor for topographic surveying and geomorphic characterisation of fluvial systems. Given that this new type of LiDAR technology has mainly been used outwith topographic surveying until very recently (Kim et al., 2019; Raj et al., 2020; Štroner, Urban, & Línková, 2021), the importance of our investigation lies in the extensive geolocation error evaluation across study areas with different degrees of topographic complexity.
Our results suggest that, in unvegetated areas, the accuracy of the DJI Zenmuse L1 solid-state UAV LiDAR system is comparable with other current UAV or aerial-based methods such as SfM photogrammetry, and statistically indistinguishable from detailed ground-based TLS surveys. It is possible to produce DEMs that achieve sub-decimetre scale (<0.1 m) geolocation accuracy from the RTK aircraft position alone, even when surveying in fluvial environments that are characterised by 'noise' from surface roughness associated with sediment and sparse canopy-type vegetation. However, the solid-state LiDAR sensor was unable to penetrate dense ground-hugging vegetation like heather or thick grass, resulting in elevation bias in areas characterised by these types of vegetation.
Our investigation provides an initial processing workflow for UAV solid-state LiDAR data, when applied to vegetated parts of the Earth's surface. Although the workflow is currently discontinuous, using a variety of different software to process and assess the dense point clouds that are acquired using these sensors, further software development will likely improve processing efficiency. This will enable the characterisation of the topography, and objects such as vegetation, using the increased density of data that UAV solid-state LiDAR provides, and the increasingly large areas that can be surveyed with contemporary UAV platforms.

ACKNOWLEDGEMENTS
We thank Glenfeshie Estate for their ongoing support of our fieldwork. University of Glasgow Sport is thanked for allowing access to the sports field facilities.

CONFLICT OF INTEREST STATEMENT
The authors certify that they have no conflict of interest in the subject matter or materials discussed in this manuscript.