Cooperative 3D tunnel measurement based on 2D–3D registration of omnidirectional laser light

In this study, we propose a high-density three-dimensional (3D) tunnel measurement method that estimates the pose changes of cameras with a point set registration algorithm applied to 2D and 3D point clouds. High-density 3D measurements are necessary at tunnel construction sites to detect small deformations and defects. The line-structured light method uses an omnidirectional laser to measure a high-density cross-section point cloud from camera images. To estimate the pose changes of cameras in tunnels, which have few textures and distinctive shapes, cooperative robots are useful because they estimate poses by aggregating the relative poses of the other robots. However, previous studies mounted several sensors for both the 3D measurement and the pose estimation, increasing the size of the measurement system. Furthermore, the lack of 3D features makes it difficult to match point clouds obtained from different robots. The proposed measurement system consists of a cross-section measurement unit and a pose estimation unit, with one camera mounted on each unit. To estimate the relative pose of the two cameras, we designed a 2D-3D registration algorithm for the omnidirectional laser light and implemented hand-truck and unmanned aerial vehicle (UAV) systems. In the measurement of a tunnel 8.8 m wide and 6.4 m high, the errors of the point clouds measured by the hand-truck and UAV systems were 162.8 and 575.3 mm, respectively, along 27 m. In a hallway measurement, the proposed method generated smaller errors on straight sections with few distinctive shapes than a 3D point set registration algorithm using Light Detection and Ranging (LiDAR).

Three-dimensional (3D) measurement, which is applied in various on-site fields including autonomous driving (Abbasi et al., 2022; Arnold et al., 2019), agriculture (Lin et al., 2021; Schneider et al., 2019), and construction (Chen et al., 2019; Puri & Turkan, 2020), is also an essential process at tunnel construction sites. The measured 3D point clouds demonstrate how the shapes and installed equipment of a constructed tunnel differ from the blueprints. Additionally, changes in the cross-section shapes help detect structural defects in the lining concrete and deformations of the bedrock, preventing tunnel accidents during construction (Luo et al., 2016; Seo et al., 2008; Zhou et al., 2020).
In tunnels, Light Detection and Ranging (LiDAR) (Sankey et al., 2018; Wallace et al., 2012) and terrestrial laser scanners (Gikas, 2012; Seo et al., 2008; Xie & Lu, 2017; Zhao et al., 2023) are widely used 3D measurement instruments. These instruments measure distances based on the time-of-flight method, which calculates the distance from the time light takes to travel to the target object through a medium. Although these sensors measure large objects by rotating a single-point 3D measurement, the densities of the resulting point clouds tend to be low because every direction from the origin of the scanner is measured point by point. In tunnels, denser 3D measurement instruments are desirable because defects on even small objects can cause serious accidents (Elbaz et al., 2016).
Vision-based 3D measurement methods using cameras, such as structure from motion (Liu et al., 2018; Mancini et al., 2013), stereo camera (Fan et al., 2019; Hu et al., 2017; Tang et al., 2022), multicamera (Chen et al., 2019; Tang et al., 2019), neural network (Tang et al., 2023), and line-structured light (Wang et al., 2017) methods, can improve the point cloud density. Because cameras obtain multiple ray vectors of a target object from a single image, they can compute the 3D coordinates of multiple points simultaneously.
In particular, the line-structured light method calculates multiple 3D coordinates of the line laser light captured in an image by repeating the triangulation between the ray vectors of the light obtained from the camera and the corresponding laser direction vector, which is calibrated before measurement. Moreover, an omnidirectional laser, a line laser that emits beams in a 360° direction, can simultaneously measure a cross-section 3D point cloud of the target object (Figure 1) (Higuchi et al., 2019; Kawata et al., 2021; Yamashita et al., 2011).
One of the most challenging aspects of obtaining 3D measurements of tunnels is integrating the coordinates of the measured cross-section point clouds. Tunnels are long structures, and attempting to measure distant areas from one position causes blind spots. Furthermore, if the angle between the wall and the laser beam is sharp, the projected laser light becomes an area rather than a point, reducing the accuracy of the measurement. Therefore, cross-section point clouds should be measured at various camera positions by updating the location of the cross-section measurement instrument. To integrate the coordinates of the measured point clouds of all the locations, the poses of the cross-section measurement system are applied to the point clouds.
One approach to estimating the poses of a camera calculates the movement of the measurement instrument from the rotation speed of its wheels. Several studies have estimated the poses of 3D measurement systems using the line-structured light method based on the wheel rotation speed in railroad tunnels (Farahani et al., 2019; Zhan et al., 2015). Although these methods estimate precise poses of the system on smooth and flat rails, the floating and slipping of wheels in under-construction tunnels with bumpy and often wet ground reduce the pose estimation accuracy. To eliminate the effects of mechanical floating and slips, Xue et al. (2021) used environmental textures for pose estimation: a camera used for 3D measurement computed its relative pose from reference images taken at various spots under different ground conditions. However, such methods only work in environments with rich color textures and are unsuitable for tunnels with monotone colors.
Point cloud registration is an alternative method that uses the 3D features of the measured point clouds. In Thomson et al. (2013), Kaijaluoto et al. (2015), and Zhao et al. (2023), a terrestrial laser scanner was used for local 3D measurements and for the estimation of the relative pose of each point cloud by matching distinctive 3D patterns. However, the monotonous shape of a tunnel, which maintains nearly the same horseshoe cross-section from entrance to exit, causes uncertainty when matching 3D features.
Unlike the previous studies, pose estimation using cooperating robots requires no strict 2D/3D environmental features. In Jeong et al. (2012) and Kurazume et al. (1994), the authors used three robots for 3D measurements and pose estimations. In this system, a parent robot moved while two child robots stayed in the tunnel.
Although an individual robot does not estimate its own pose, the parent robot is able to estimate the poses of the child robots from the previous measurement position by aggregating the relative distances and angles of the child robots. The parent robot consisted of a laser scanner for 3D measurement, a total station for pose estimation, and an autoleveling system for tilt angle estimation, resulting in a large measurement system. To avoid disturbing the construction process, the measurement system should have minimal sensors and a camera for use in the line-structured light method.
Figure 1. One-shot cross-section measurement with the line-structured light method.
Although it is possible to use a single LiDAR or laser scanner mounted on a robot to estimate the relative pose of other robots, the lack of a distinctive 3D shape makes it difficult to match 3D point clouds directly obtained from different robots.
The objective of this study is to measure high-density 3D point clouds using the line-structured light method with cooperative robots equipped with cameras. We especially target tunnels with typical horseshoe shapes.
In the proposed method, we implement a cross-section measurement unit based on the line-structured light method and use an additional camera unit for pose estimation. To estimate the relative pose of the measurement camera from the additional camera, we perform a 2D-3D registration between the 2D ray vectors and the 3D cross-section point cloud of the laser light projected on the cross-sections. Note that this 2D-3D registration differs from conventional 2D/3D registration in that the pose parameters of the 3D cross-sections are optimized in the 3D space while the objective likelihood function is computed in the 2D space. After integrating the relative poses of each position, we obtain a high-density 3D point cloud of the tunnel area.
The contributions of this study are as follows:

1. The proposed pose estimation method, based on a 2D-3D registration algorithm using two cameras, integrates the measured cross-section point clouds into one coordinate system.
2. We also propose a robust method for acquiring the 2D points of the laser light in the image by masking the current laser image using the estimated pose of the previous cross-section.
3. We implement two cross-section 3D measurement systems based on the line-structured light method: one mounted on a hand truck and the other on an unmanned aerial vehicle (UAV). These systems measure cross-section point clouds at 1/3 of the camera frame rate. The UAV system can measure tunnels regardless of ground conditions by flying during the measurement.

2 | PROPOSED METHOD
2.1 | Overview of the proposed 3D measurement system

2.1.1 | Cooperative measurement with two alternately moving cameras
An overview of the proposed method is shown in Figure 2. To integrate the coordinate systems of the cross-sections measured by the line-structured light method into a single coordinate system, it is necessary to estimate the pose of each point cloud relative to the environment. The previous method by Higuchi et al. (2019) leveraged environmental textures in images to estimate the pose of a camera using referencing points in the environment. In the proposed method, to estimate the camera poses in an environment without textures, an additional camera is employed as the anchoring point of movement.
We alternately moved the two cameras and continuously obtained the relative poses between them. Given that the relative poses of the cameras were known at each time, we could display all cross-section point clouds in the integrated coordinate system.
Regarding the problem of estimating the pose of a 3D point cloud based on camera images, Li et al. (2019) used 2D feature points in the environment to calculate the pose of the point cloud measured by a 3D measurement device. In contrast, we estimate the pose of the 3D point clouds without environmental features. To estimate the relative pose from the fixed camera to the moving camera without using environmental textures, the proposed method exploits the visible 3D shape of the cross-section based on a point set registration algorithm. Rigid 2D-3D registration optimizes the rotation matrix and translation vector applied to the 3D point cloud by minimizing the 2D loss when projecting the 3D point cloud onto the 2D plane. When one of the cameras observes the visible 3D shape of the cross-section, the relative pose of the measured cross-section is estimated as a rotation matrix and translation vector.

2.1.2 | Measurement process
We refer to the two cameras as the measurement and registration cameras, with the following roles:

Measurement camera: This camera unit consists of one fisheye camera and one omnidirectional laser. It performs 3D measurements based on the line-structured light method, which can measure high-density cross-section point clouds in a few video frames.

Registration camera: This camera unit consists of a single camera. It observes the omnidirectional laser light projected on the tunnel lining to estimate the relative pose of the measurement camera.
If the laser light projected on a tunnel cross-section is captured by both cameras, the measurement camera computes the 3D point cloud of the cross-section and the registration camera obtains the 2D ray vectors of the cross-section. The relative pose of the cameras can then be calculated by the 2D-3D registration algorithm.
Figure 2. Measurement procedure of the proposed method.
To obtain the 2D ray vectors and the 3D point cloud of the cross-section for each time period, we alternately moved the two cameras for $C$ cycles to measure the long sections in the tunnel. The 3D measurement of cycle $c$ ($c = 1, 2, \ldots, C$) consisted of the following steps:

1. The measurement camera moved, whereas the registration camera remained fixed in the environment. A computer attached to the measurement camera toggled the omnidirectional laser ON and OFF to obtain an image at laser ON and an image at laser OFF.
2. The measurement camera was fixed in the environment, and the position of the registration camera was updated from cycle $c$ to cycle $c+1$ by approaching the measurement camera to keep the laser light visible from the registration camera.

The entire workflow of the proposed method is shown in Figure 3. After the measurement step in the tunnel, we subtracted the laser OFF image from the laser ON image to eliminate the background illumination. Thereafter, we extracted the 2D ray vectors of the laser light on the images to calculate the 3D coordinates of the cross-section based on triangulation. The relative poses of the cameras were estimated by the 2D-3D registration algorithm. Finally, the measured cross-section point clouds were transformed to integrate the coordinate systems.

2.2 | Line-structured light-based cross-section measurement

2.2.1 | Extract ray vectors of omnidirectional laser light
To reduce the weight of the robot, we used only one camera as a sensor mounted on each robot. To obtain the 3D point cloud of a cross-section from a single camera image, we adopted the line-structured light method with an omnidirectional laser, which calculates the 3D coordinates of the laser light illuminated on the cross-section based on triangulation.
The proposed method measures the 3D coordinates of the cross-section at each position (cross-section $s$ in cycle $c$). In the first step of the line-structured light method, the pixels of the omnidirectional laser light were extracted from the measurement camera image (Figure 4). In pitch-black areas, the only objects that appear in the measurement camera image are those illuminated by the laser light. However, we should consider that, in addition to the omnidirectional laser, environmental lamps also illuminate the tunnels for the construction workers. To distinguish the laser light from other objects by exploiting the brightness of the laser, a small computer attached to the cross-section measurement unit toggled the omnidirectional laser continuously, and the measurement camera obtained images with the laser ON and OFF. The camera positions of the laser ON and OFF images were slightly different because the cross-section measurement unit moved during measurement, resulting in erroneous bright pixels in the subtraction image. Therefore, we applied a maximization filter, which overwrites the central pixel with the brightest pixel in the image window, to the laser OFF image before subtraction.

Figure 4. Calculation of omnidirectional laser pixels with ON and OFF laser images.
After subtracting the images, we computed the pixel coordinates of the laser light for each of the $M$ angular divisions of 360°. For angle $\theta_m$, the pixel coordinate of the laser was determined as the center of gravity when the image was scanned from the image center to its periphery.
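For illustration, the following Python sketch shows one way to implement this extraction step with OpenCV. The window size, noise threshold, and the use of the image center as the scan center are illustrative assumptions rather than the authors' exact parameters, and the maximization filter is realized here as a grayscale dilation.

```python
import cv2
import numpy as np

def extract_laser_pixels(img_on, img_off, M=1000, window=15, thresh=30):
    """Extract one laser pixel coordinate per angular division (sketch)."""
    # Maximization (dilation) filter on the laser OFF image absorbs the small
    # camera motion between the ON and OFF frames before subtraction.
    off_max = cv2.dilate(img_off, np.ones((window, window), np.uint8))
    diff = cv2.subtract(img_on, off_max)      # background illumination removed
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY) if diff.ndim == 3 else diff

    h, w = gray.shape
    cx, cy = w / 2.0, h / 2.0                 # scan center (image center here)
    r_max = int(np.hypot(cx, cy))
    pixels = []
    for m in range(M):                        # one radial scan per angle theta_m
        theta = 2.0 * np.pi * m / M
        rs = np.arange(0, r_max)
        us = (cx + rs * np.cos(theta)).astype(int)
        vs = (cy + rs * np.sin(theta)).astype(int)
        ok = (us >= 0) & (us < w) & (vs >= 0) & (vs < h)
        vals = gray[vs[ok], us[ok]].astype(float)
        vals[vals < thresh] = 0.0             # suppress residual sensor noise
        if vals.sum() == 0:
            continue                          # blind spot: no laser at this angle
        r_g = (rs[ok] * vals).sum() / vals.sum()   # center of gravity on the ray
        pixels.append((cx + r_g * np.cos(theta), cy + r_g * np.sin(theta)))
    return pixels
```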

2.2.2 | Triangulation using omnidirectional laser
The line-structured light method calculates the 3D coordinates of the laser light by triangulating the ray vectors and the plane of the laser beam. Unlike the line-structured light method in Higuchi et al. (2019), we used a fisheye camera to decrease the size of the measurement unit. To undistort the images from the fisheye camera, we used the camera model of Mei and Rives (2007). From the undistorted image, the $m$th ray vector $\mathbf{x}_m$ of the laser light in a cross-section was calculated as

$$\mathbf{x}_m = K^{-1}(u_m, v_m, 1)^T,$$

where $K$ is the camera intrinsic matrix and $(u_m, v_m)$ is the undistorted pixel coordinate of the laser light. The 3D coordinates of the laser light were calculated as the intersection of the camera ray vector and the plane formed by the omnidirectional laser.

The following equations show the relation between the $m$th 3D coordinate $\mathbf{y}_m$ in a cross-section point cloud and the ray vector $\mathbf{x}_m$:

$$\mathbf{y}_m = s_m \mathbf{x}_m, \qquad (2)$$
$$\mathbf{n}^T \mathbf{y}_m = 1,$$

where $\mathbf{n}$ is a vector perpendicular to the laser plane, expressed in the measurement camera coordinates. Substituting (2) into the plane constraint determines the scale $s_m$:

$$s_m = \frac{1}{\mathbf{n}^T \mathbf{x}_m}.$$

This scale was applied to the ray vectors according to (2) to compute the 3D coordinates of the laser light.
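Under the plane parameterization $\mathbf{n}^T\mathbf{y} = 1$ reconstructed above, the triangulation reduces to a few lines. The sketch below assumes pixel coordinates that have already been undistorted with the fisheye model; the function and variable names are ours.

```python
import numpy as np

def triangulate_cross_section(pixels, K, n):
    """Triangulate laser pixels into a 3D cross-section point cloud (sketch).

    pixels : list of undistorted pixel coordinates (u_m, v_m) of the laser
    K      : (3, 3) camera intrinsic matrix
    n      : (3,)  laser-plane vector satisfying n^T y = 1 (from calibration)
    """
    K_inv = np.linalg.inv(K)
    points = []
    for u, v in pixels:
        x = K_inv @ np.array([u, v, 1.0])    # ray vector x_m
        denom = float(n @ x)
        if abs(denom) < 1e-9:
            continue                         # ray nearly parallel to laser plane
        s = 1.0 / denom                      # scale s_m
        points.append(s * x)                 # y_m = s_m x_m, Eq. (2)
    return np.array(points)
```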

2.2.3 | Parameter calibration of cross-section measurement unit
To calibrate the position of the omnidirectional laser, we calculated some of the 3D coordinates of the laser points of the cross-section using several augmented reality (AR) markers with known origin coordinates. To match the ray vectors of the laser light in the measurement camera coordinate system with the corresponding 3D coordinates in another coordinate system, we physically overlapped the omnidirectional laser light on the origins of the AR markers. We chose the pixel coordinate of the laser light on an AR marker as the nearest point from the origin of the AR marker and used the AR marker coordinates as the 3D coordinates of the laser lights. To compute the 3D coordinates of the laser light in the measurement camera coordinates, we solved the perspective-n-point problem for the 2D ray vectors and 3D coordinates of the laser light.
The perpendicular vector $\hat{\mathbf{n}}$ was estimated as the vector normal to the plane formed by the several laser points on the AR markers using the least-squares method:

$$\hat{\mathbf{n}} = (Y^T Y)^{-1} Y^T \mathbf{1}, \qquad (7)$$

where $Y = (\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_I)^T$, $\mathbf{1}$ is the $I$-dimensional vector of ones, and $I$ is the number of AR markers.
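A compact sketch of this calibration is shown below, assuming the laser points on the AR markers are available both as undistorted pixels and as 3D coordinates in a common marker frame. The function `calibrate_laser_plane` and its arguments are hypothetical names, and OpenCV's generic `solvePnP` stands in for whichever PnP solver was actually used.

```python
import cv2
import numpy as np

def calibrate_laser_plane(marker_points_3d, marker_pixels, K, dist):
    """Estimate the laser-plane vector n_hat from AR-marker laser points (sketch)."""
    # Perspective-n-point: pose of the marker/world frame in the camera frame.
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, dtype=np.float64),
        np.asarray(marker_pixels, dtype=np.float64), K, dist)
    assert ok
    R, _ = cv2.Rodrigues(rvec)
    # Express the laser points in the measurement camera coordinates.
    Y = (R @ np.asarray(marker_points_3d, dtype=np.float64).T + tvec).T  # (I, 3)
    # Least-squares plane fit Y n = 1, corresponding to Eq. (7).
    n_hat, *_ = np.linalg.lstsq(Y, np.ones(len(Y)), rcond=None)
    return n_hat
```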
2.3 | Coordinate integration of cross-sections
2.3.1 | Extract laser pixels with previously estimated transformation
When the relative pose between the measurement and registration cameras increases, the observed laser light from the registration camera becomes weak. As shown in the top-left illustration of Figure 5, the obtained images from the registration camera are noisy, making it difficult to distinguish the laser light from the sensor noise. To reduce the sensor noise in the subtracted image, we created mask images of the cross-section using the estimated rotation and translation of the previous cross-section (top illustration of Figure 5). Let the calculated point cloud of cross-section $s$ in cycle $c$ be $\mathcal{Y}_{c,s}$, and let the 2D-3D registration estimate of the previous cross-section be $\mathbf{p}_{c,s-1}$. Because the previous pose $\mathbf{p}_{c,s-1}$ is considerably similar to the current pose $\mathbf{p}_{c,s}$, we applied $\mathbf{p}_{c,s-1}$ to the current cross-section $\mathcal{Y}_{c,s}$ and created a mask image of the projected point cloud of the current cross-section. Moreover, in the registration camera image, the center point of the scanning laser light varies because the cross-section measurement unit moves. To update the center point of the scan, the projected origin of the previous 3D cross-section point cloud was used as the new center point. Using this low-noise image with a proper rotation center, the laser pixels were extracted using the same method as in Section 2.2.1. Once the laser points were extracted, the ray vectors of the registration camera were calculated using the camera models (Mei & Rives, 2007; Zhang, 2000).

Figure 5. Noise reduction of the laser extraction of subtraction images using a masking process.
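The masking step can be sketched as follows, assuming a simple pinhole projection for the registration camera (the actual system uses the Mei-Rives and Zhang models). The dilation radius is an illustrative parameter, and the function name is ours.

```python
import cv2
import numpy as np

def make_laser_mask(points_prev_pose, K, img_shape, radius=20):
    """Mask for the registration camera image from the previous pose estimate.

    points_prev_pose : (M, 3) current cross-section point cloud transformed by
                       the previous pose estimate p_{c,s-1} (camera coordinates)
    radius           : dilation radius in pixels around the projected contour
                       (hypothetical value)
    """
    mask = np.zeros(img_shape[:2], dtype=np.uint8)
    pts = points_prev_pose[points_prev_pose[:, 2] > 0]   # keep points in front
    uv = (K @ pts.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)            # perspective division
    h, w = mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    mask[uv[ok, 1], uv[ok, 0]] = 255
    # Grow the projected contour so small pose changes stay inside the mask.
    kernel = np.ones((2 * radius + 1, 2 * radius + 1), np.uint8)
    return cv2.dilate(mask, kernel)
```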

2.3.2 | Estimating transformation matrix with 2D-3D registration
In the proposed measurement process, the measurement camera moved and measured the cross-sections in various coordinate systems ($S_c$ sections in cycle $c$), whereas the registration camera observed the omnidirectional laser light in one coordinate system. To integrate the coordinate systems of all the cross-sections, we converted every point $\tilde{\mathbf{y}}_{c,s,m}^{P_{c,s}}$ in the coordinate system of the measurement camera into the coordinate system $\Sigma_{Q_1}$ of the registration camera of cycle 1:

$$\tilde{\mathbf{y}}_{c,s,m}^{Q_1} = T_{Q_1}^{Q_2} T_{Q_2}^{Q_3} \cdots T_{Q_{c-1}}^{Q_c} T_{Q_c}^{P_{c,s}}\, \tilde{\mathbf{y}}_{c,s,m}^{P_{c,s}}, \qquad (8)$$

where $T_{B}^{A} \in \mathbb{R}^{4 \times 4}$ is the transformation matrix from coordinate system $\Sigma_A$ to coordinate system $\Sigma_B$ and the tilde denotes homogeneous coordinates. The first cross-section in cycle $c$ [cycle $c$; section 1] and the last cross-section in cycle $c-1$ [cycle $c-1$; section $S_{c-1}$] were at the same absolute pose; therefore, once the transformations in (8) were known, all the cross-sections could be integrated into $\Sigma_{Q_1}$. This means that the registration camera poses were computed using the poses of the measurement camera.
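As a small illustration of the chaining in (8), the following sketch composes homogeneous transformations to map a cross-section into the cycle-1 registration frame; the names and list-based interface are ours, not the authors'.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transformation from R (3,3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def integrate_cross_section(Y_local, T_cycle_chain, T_P_to_Qc):
    """Map a cross-section point cloud into the cycle-1 registration frame.

    Y_local       : (M, 3) points in the measurement camera frame P_{c,s}
    T_cycle_chain : [T_{Q_1<-Q_2}, ..., T_{Q_{c-1}<-Q_c}] cycle-to-cycle poses
    T_P_to_Qc     : T_{Q_c<-P_{c,s}} obtained from the 2D-3D registration
    """
    T = np.eye(4)
    for T_k in T_cycle_chain:        # compose transformations across cycles
        T = T @ T_k
    T = T @ T_P_to_Qc
    Y_h = np.hstack([Y_local, np.ones((len(Y_local), 1))])  # homogeneous coords
    return (Y_h @ T.T)[:, :3]
```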
To estimate the relative poses at a real object scale, we designed a 2D-3D registration algorithm for the 2D ray vectors and 3D point cloud of the cross-section. Although the probabilistic point cloud registration algorithm in Myronenko and Song (2010) assumes a Gaussian mixture model for robustness against outlier points, that method is designed for point sets of the same dimension. To conduct the 2D-3D registration in the proposed measurement system, we propose the following algorithm for the 2D and 3D point clouds of a cross-section illuminated by the omnidirectional laser. An overview of the proposed 2D-3D registration algorithm is shown in Figure 6. While the measurement camera measured the cross-section 3D point clouds, which were made visible by the omnidirectional laser light, the registration camera obtained the 2D ray vectors of the laser light. We performed the 2D-3D registration on the image plane of the registration camera. The proposed method assumes that the correspondence of the 2D and 3D points can be expressed as a Gaussian mixture model: one of the observed ray vectors $\mathbf{x}_n$ matches a cross-section 3D point $\mathbf{y}_m$ with probability $p(\mathbf{x}_n \mid \mathbf{y}_m)$.

Figure 6. 2D-3D registration of the laser light illuminated by the omnidirectional laser projected on a cross-section.

To project the cross-section 3D point cloud onto the image plane of the registration camera, we applied the rotation matrix $R^{(c,s)}$ and the translation vector $\mathbf{t}^{(c,s)} = (t_1, t_2, t_3)^T$. The projective transformation of the cross-section point cloud to the image plane of the registration camera is described as follows:

$$s_h \tilde{\mathbf{x}}_m = K \left( R^{(c,s)} \mathbf{y}_m + \mathbf{t}^{(c,s)} \right),$$

where $s_h$ is the scaling parameter, $R^{(c,s)}$ is the rotation matrix, and $\mathbf{t}^{(c,s)}$ is the translation vector of [cycle $c$; section $s$]. We set the objective variable as the following vector, which consists of the decomposed Euler angles of $R^{(c,s)}$ and the translation vector:

$$\mathbf{p} = (\alpha, \beta, \gamma, t_1, t_2, t_3)^T.$$

A negative log-likelihood was optimized with respect to the objective variable based on the expectation-maximization (EM) algorithm:
$$E(\mathbf{p}, \sigma_i^2) = -\sum_{n=1}^{N} \log \left( \frac{1-w}{M} \sum_{m=1}^{M} \exp\!\left( -\frac{\|\mathbf{x}_n - \pi(\mathbf{y}_m; \mathbf{p})\|^2}{2\sigma_i^2} \right) + \frac{w}{M+1} \right),$$

where $\pi(\mathbf{y}_m; \mathbf{p})$ denotes the projection of $\mathbf{y}_m$ onto the image plane with pose $\mathbf{p}$, the $(M+1)$th component is a uniform distribution with weight $w$, and $\sigma_i^2$ is the variance at iteration $i$.
In the E-step, we computed the posterior probability $p_{mn}$ that a sample $\mathbf{x}_n$ of the 2D ray vectors was generated from the $m$th component of the Gaussian mixture distribution. In the M-step, the $Q$ function was minimized with respect to the objective variable:

$$Q(\mathbf{p}) = \sum_{n=1}^{N} \sum_{m=1}^{M} p_{mn} \frac{\|\mathbf{x}_n - \pi(\mathbf{y}_m; \mathbf{p})\|^2}{2\sigma_i^2}.$$

To solve this minimization, we used Newton's method because it converges quadratically. Describing one iteration of Newton's method as $\mathbf{p} \leftarrow \mathbf{p} + \Delta\mathbf{p}$, the update direction $\Delta\mathbf{p}$ is computed as

$$\Delta\mathbf{p} = -H^{-1}\mathbf{g},$$

where $H$ is the Hessian and $\mathbf{g}$ is the gradient of $Q$. Both the EM algorithm and Newton's method are iterative algorithms; we updated the objective variable with Newton's method at every iteration of the EM algorithm. The variance $\sigma^2$ was updated based on the posterior probabilities of the latent variables as follows:

$$\sigma_{i+1}^2 = \frac{\sum_{n=1}^{N}\sum_{m=1}^{M} p_{mn}\, \|\mathbf{x}_n - \pi(\mathbf{y}_m; \mathbf{p})\|^2}{2\sum_{n=1}^{N}\sum_{m=1}^{M} p_{mn}}. \qquad (20)$$

In the 2D-3D registration algorithm, the parameter $\mathbf{p}$ was initialized with the estimated result of the previous cross-section. The pose parameters were obtained at a real object scale using the 2D ray vectors and 3D point clouds computed from the calibrated registration and measurement cameras, respectively.
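A condensed sketch of this EM loop with a Newton inner step is given below. The finite-difference gradient and Hessian, the damping term, and the simplified uniform-component constant are conveniences of the sketch, not the authors' implementation (which is in C++ with OpenCV and OpenMP); an actual implementation would use analytic derivatives. The defaults mirror the parameters reported later (w = 0.5, M = 1000, N = 400).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def project(Y, p, K):
    """Project 3D points Y (M,3) with pose p = (alpha, beta, gamma, t1, t2, t3)."""
    R = Rotation.from_euler("xyz", p[:3]).as_matrix()
    cam = Y @ R.T + p[3:]                    # assumes points in front of camera
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]            # perspective division

def q_func(p, X, Y, K, P, sigma2):
    """Q function of the M-step: posterior-weighted reprojection error."""
    D = X[:, None, :] - project(Y, p, K)[None, :, :]     # (N, M, 2) residuals
    return float((P.T * (D ** 2).sum(-1)).sum() / (2.0 * sigma2))

def grad_hess(f, p, eps=1e-5):
    """Central finite-difference gradient and Hessian (for brevity)."""
    k = len(p)
    g, H = np.zeros(k), np.zeros((k, k))
    for i in range(k):
        ei = np.zeros(k); ei[i] = eps
        g[i] = (f(p + ei) - f(p - ei)) / (2 * eps)
        for j in range(i, k):
            ej = np.zeros(k); ej[j] = eps
            H[i, j] = H[j, i] = (f(p + ei + ej) - f(p + ei - ej)
                                 - f(p - ei + ej) + f(p - ei - ej)) / (4 * eps**2)
    return g, H

def register_2d3d(X, Y, K, p0, w=0.5, iters=50, sigma2=1e4):
    """EM loop with one damped Newton step per iteration (illustrative sketch)."""
    p, M = p0.astype(float).copy(), len(Y)
    for _ in range(iters):
        # E-step: posterior that ray x_n was generated by projected point y_m.
        D2 = ((X[:, None, :] - project(Y, p, K)[None, :, :]) ** 2).sum(-1)
        G = np.exp(-D2 / (2 * sigma2))                        # (N, M)
        c = (2 * np.pi * sigma2) * w / (1 - w) * M / (M + 1)  # uniform component
        P = (G / (G.sum(1, keepdims=True) + c)).T             # (M, N) posteriors
        # M-step: one Newton update of the pose parameters.
        g, H = grad_hess(lambda q: q_func(q, X, Y, K, P, sigma2), p)
        p += np.linalg.solve(H + 1e-6 * np.eye(6), -g)        # damped for stability
        # Variance update from the posterior-weighted residuals (2D case).
        Np = P.sum()
        if Np > 0:
            sigma2 = float((P * D2.T).sum() / (2 * Np))
    return p, sigma2
```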

2.4 | System implementations on hand truck and UAV
To validate the proposed method, we conducted 3D measurement experiments in an actual-sized model tunnel and in a hallway with straight sections and turns. The proposed method was implemented in two patterns: a hand-truck measurement system with a high load weight limitation, and a UAV measurement system that can fly over rough or wet ground. The detailed specifications of the two measurement systems are as follows:

1. System with a hand truck (Figure 7a): Nikon Z7II cameras (60 fps, 3840 × 1920 pixels) with fisheye lenses (AF-S Fisheye NIKKOR 8-15mm f/3.5-4.5E ED) were used as the measurement and registration cameras. The cameras were installed on tripods for stability. The observation camera was moved manually by hand. The laser was switched 20 times per second, and 10 cross-sections were measured per second.
2. System with UAVs (Figure 7b): An action camera (Kodak PIXPRO SP360 4K, 30 fps, 2880 × 2880 pixels) was used as the measurement camera and was installed on a DJI Matrice 300 RTK. For the registration camera, the camera originally mounted on a DJI Mavic Mini (30 fps, 2720 × 1530 pixels), which has small propellers, was used to reduce the shaking of the UAV as much as possible. Here, we assume that the UAV attempts to maintain its pose at a certain position in the air. The laser was switched 10 times per second, and five cross-sections were measured per second. The measurement camera and omnidirectional laser were attached to the UAV.

Figure 7. Cross-section measurement unit with (a) the hand-truck system and (b) the UAV system.
We created a fixture using a 550-mm carbon main frame and several 3D-printed parts made of polylactic acid (printed on an Ultimaker S5 Pro Bundle with the Ultimaker Cura software) to mount a microcomputer and a battery on the UAV while maintaining high reproducibility. A perspective camera without fisheye distortion was used as the registration camera on the Mavic Mini, removing the undistortion process from the computation of the ray vectors.
To evaluate the performance at the maximum specification of each measurement system with different load limitations, the camera fps was set to the highest value that maintained the highest image quality.
We used omnidirectional lasers in which a cone mirror was attached in front of the laser-beam irradiation port to form a plane laser beam.
Both cross-section measurement units were equipped with a Raspberry Pi 4 Model B to switch the omnidirectional laser ON and OFF. The omnidirectional lasers were switched at 1/3 of the camera frame rate, considering the rolling shutter effect. Both measurement systems were designed to be moved manually, and the UAV was moved by two operators using flight controllers at a speed of approximately 0.1 m/s.
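As an illustration of the switching timing, a minimal Raspberry Pi sketch is shown below. The GPIO pin number is hypothetical, and this free-running timer is a simplification: a real system would synchronize with the camera shutter. With a 60-fps camera, a period of three frames yields the 20 switches per second reported above.

```python
import time
import RPi.GPIO as GPIO

LASER_PIN = 18          # hypothetical GPIO pin driving the laser module
CAMERA_FPS = 60         # hand-truck system; 30 for the UAV system

GPIO.setmode(GPIO.BCM)
GPIO.setup(LASER_PIN, GPIO.OUT)

# Toggle at 1/3 of the camera frame rate: each ON (or OFF) phase spans
# three camera frames, which tolerates the rolling shutter readout.
period = 3.0 / CAMERA_FPS
state = False
try:
    while True:
        state = not state
        GPIO.output(LASER_PIN, state)
        time.sleep(period)
finally:
    GPIO.cleanup()
```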
The 3D point clouds of the measurement objects were generated offline from the videos using a desktop PC (OS: Windows 10; CPU: AMD Ryzen Threadripper 3970X 32-core processor, 3.69 GHz; RAM: 64 GB). The computations were implemented using the OpenCV module in C++ with parallel calculation via OpenMP.
To reduce the computation time, pose estimation was skipped while the registration camera was moving. The initial pose of the first cross-section of each cycle was set to p = [0; 0; 0; 0; 0; 5000] for the hand-truck system and p = [0; 0; 0; 0; 0; 10,000] for the UAV system, based on the approximate real distance.

3.1 | Conditions
To evaluate the proposed method in a tunnel, it was implemented in a model tunnel. The measurement conditions are shown in Figure 8. The tunnel was approximately 8.8 m wide and 6.4 m high, and the measurements were performed along 30 m. To confirm that the proposed method can extract laser pixels even in an illuminated environment, such as a construction site, the lamps along the tunnel were ON during the measurements. Because the laser beam attenuated when the measurement and registration cameras were far apart, the registration camera was moved closer to the measurement camera every 3.6 m. The measurement systems were operated by a controller for the UAV and pushed manually for the hand truck.
The camera intrinsic parameters were estimated using a chessboard with 9 × 6 corners formed by squares of 23.9 mm. A total of 12 AR markers were used for the laser position calibration according to the method described in Section 2.2.3. To evaluate the accuracy of the proposed method, four AR markers were installed on each of six cross-sections. For the ground truth of the AR marker coordinates, a Nikon NST-C1r total station in non-prism mode measured all the AR marker positions over several hours. To measure the coordinates of the AR markers with the proposed method, we extracted the laser pixels nearest to the AR markers in the measurement camera image, and the 3D point cloud was obtained by triangulation based on the line-structured light method. The AR markers were detected automatically from the measurement camera image. The initial frames of the videos of the measurement and registration cameras were chosen visually to match the shutter timing. We used ArUco markers (Garrido-Jurado et al., 2016; Romero-Ramirez et al., 2018) as the AR markers.
The number of cross-section measurement points M and the number of registration points N were set to 1000 and 400, respectively, for the 2D-3D registration. The weight w of the uniform distribution was set to 0.5 as a parameter of the 2D-3D registration. Data processing was performed offline after the measurement data were acquired in the tunnel. After subtraction of the OFF images, we computed the 3D measurement based on the triangulation of the ray vectors and the laser plane according to the method in Section 2.2.2. To evaluate the accuracy of the proposed method, the deviations of the 3D coordinates of the AR markers from the total station measurements were evaluated.

3.2 | Results and discussions
Cross-section measurement: During measurement, the laser beam was toggled continuously to obtain alternating ON and OFF laser images. The acquired laser ON images and the results of the pixel coordinate extraction are shown in Figure 9. In the images acquired from both units, the laser light appeared in the peripheral area of the measurement camera image. If the laser pixels had been extracted only from the ON images, the fluorescent lamps, emergency exit lights, and bright gray linings would have been detected as laser light. In contrast, the proposed method, which subtracts the laser OFF image from the laser ON image, suppressed these wrongly detected laser pixels by eliminating continuously bright pixels in the images.

The measurement errors of the six cross-sections between the total station and the proposed method are presented in Table 1. The measurement error was higher for the UAV unit than for the hand-truck unit because the distance between the measurement camera and the laser mounted on the UAV was shorter and the pixel changes, shown in Figure 9, were smaller. Both cross-section measurement units measured the tunnel with errors of a few centimeters, capturing the centimeter-order deformations of the tunnel. Moreover, local deformations were clearly visible because, at a distance of 4.4 m (the most distant position from the tunnel wall), the minimum distance between points of the cross-section point clouds was approximately 5.5 mm for M = 1000.

Cross-section integration: To integrate the measurement camera coordinates of the cross-sections into the first registration camera coordinates, the proposed method performed probabilistic 2D-3D registrations for each measured 3D cross-section point cloud. Some registration results are shown in Figure 10. Figure 10a,c shows the registration results at the beginning of one measurement cycle. The 2D point cloud observed using the registration camera contained several outliers because no previous pose estimation results could be utilized to mask the detection area. The proposed probabilistic 2D-3D registration algorithm converged appropriately in such noisy cases. However, the probabilistic 2D-3D registration failed to converge without masking because too many outliers were included in the 2D point clouds of the registration camera. As shown in Figure 10b,d, the proposed method extracted numerous 2D points, even at longer distances, which converged in the probabilistic 2D-3D cross-section registrations. Therefore, the masking process in Section 2.3.1 was suitable for noise reduction in the pose estimation algorithm.

Figure 10. Results of 2D-3D registration at different measurement points in the tunnel measurement. (a, c) Registration results of the first cross-sections in a measurement cycle. (b, d) Results when the measurement cameras were moving. (a, b) The hand-truck system and (c, d) the UAV system.

As a result of the 2D-3D registration, the relative poses between the measurement and registration cameras were obtained. The proposed method applied the poses to the measured cross-sections and integrated all the coordinate systems of the cross-sections. The line-structured light method extracts cross-sections as distinctive 3D shapes from the monotonous 3D shape of the tunnel. Thus, the 2D and 3D point clouds obtained from different cameras could be matched, even with a monotonous tunnel shape.
To examine the accuracy of the integrated 3D point cloud of the tunnel, we calculated the deviations of the measured point cloud from the total station measurements at the AR markers. The 3D coordinates of the markers in the proposed method were calculated by extracting the nearest 2D points from the detected AR markers in each frame and setting the AR marker absolute position as the nearest position in the image plane. The accuracy of the cross-section integrations is presented in Table 2. The accumulation error was evaluated by comparing various measurement distances. When integrating 5.4 m from the initial point, the integration error was 80.5 mm for the hand-truck system, which differed only slightly from the cross-section measurement accuracy (Table 1). Therefore, each local tunnel shape was precisely measured using the proposed method. Although accumulation errors did not occur while the registration camera was fixed, they were observed when the position of the registration camera was changed. As a result, errors of 162.8 and 575.3 mm were observed for the hand-truck and UAV systems, respectively, when integrating 27.0 m from the initial point.
The point clouds integrated by each measurement system are shown in Figure 11. These point clouds were measured using the hand-truck and UAV systems in 5 and 4 min, respectively, for a tunnel approximately 30 m long. The hand-truck system acquired point clouds that were twice as dense as those of the UAV because the density is a function of the camera frame rate. Therefore, the hand-truck system could visualize a 280 × 280-mm power box and a 250 × 250-mm AR marker pattern whose squares were 40 mm.
Inconsistent errors occurred in the point cloud measured by the hand-truck system at the beginning of each cycle. This initialization error was the result of sensitive changes in the ray vectors as the two units approached each other. Consequently, several cross-sections are needed to accurately estimate the poses. To suppress this effect, continuous pose estimation while the registration camera is moving is required. In the point cloud of the UAV system, the contour contained more wavy results owing to the pose-shift problem of the registration camera. The gradual pose changes were mainly contained in the translation vector of the cross-section point clouds because the direction of the Mavic Mini was more stable. Landmarks that can be detected by the registration camera, such as grooves in the concrete and construction machines in the tunnel, will be explored in the future as a potential means of diminishing this pose-shift effect.

Table 2. Error of the cross-section integration of the proposed 2D-3D registration algorithm in the tunnel measurement.

4.1 | Conditions
To evaluate the proposed method for a variety of cross-sections and for a longer structure, we measured a 154-m-long hallway using the hand-truck system. The hallway had five straight sections and four turning points, and was 2.3 m wide and 2.7 m high. The lamps on the ceilings were ON during measurements.
The hand-truck system moved straight along the straight sections, and the registration camera position was updated every 4 m to prevent attenuation of the laser light imaged on the camera. While the system was at a corner of the hallway, we moved the system in the following manner to decrease the blind spots of the laser light from the viewpoint of the registration camera:

1. The yaw angle of the measurement camera was changed from 0° to approximately 45° from the direction of the registration camera, where the laser light was as visible as possible.
2. The position of the registration camera was updated to where the laser light on the entire cross-section appeared in the camera image. This step resulted in an angle of approximately −45° between the two cameras.
3. The yaw angle of the measurement camera was changed from approximately −45° to 0°.
The camera intrinsic parameters were estimated using a chessboard with 9 × 6 corners formed by squares of 23.9 mm. For the laser calibration according to Section 2.2.3, we used a chessboard with 4 × 3 corners.

The measured point clouds are shown in Figure 12. The overall 2D-3D registration results show that the proposed method can work in a square-shaped cross-section environment and with various cross-section sizes, as shown in Figure 12a-c. The proposed method could estimate poses when one of the edges was lacking, when the direction changed in yaw angle, and at the handrail spot. These results show that the proposed 2D-3D registration scheme, which leverages the cross-section shapes for pose estimation and initializes the poses with the previous cross-section, is effective under various cross-section conditions.
The nearest-point distances for the proposed method and for LiDAR with the iterative closest point (ICP) algorithm are shown in Table 3, and the corresponding measured point clouds are shown in Figure 13. As shown in Table 3, the proposed method, with an error of 0.823 m, achieved a better registration result than LiDAR with ICP, with an error of 4.867 m. In detail, the point cloud measured by LiDAR, shown in Figure 13b, was incorrectly merged along the straight sections, whereas the proposed method, shown in Figure 13c, produced a more precise measurement.

Figure 13. Top view of measured point clouds in the hallway measurement at the same scale. Colored points (black/blue/orange) were manually selected to compute errors. (a) LiDAR aligned using ICP with manual initialization, (b) LiDAR aligned using ICP with poses initialized as the pose of the previous measurement position, and (c) the proposed method.
This shows that the proposed method is effective in an environment with few distinctive 3D shapes.
In Figure 12c, for the measurements performed by the proposed hand-truck system, there is a region where the point clouds are joined with pose estimation errors. This occurred because the laser has a blind spot at one or more of the edges of the cross-sections. The proposed method can detect weak laser light owing to the masking algorithm in Section 2.3.1, but there were a few points at which the proposed algorithm generated pixels that were not true laser light. To decrease this misdetection of laser pixels, a judging algorithm that can discriminate blind spots or weak laser light should be developed.
In this experiment, we set the laser perpendicular to the ground to reproduce the condition in tunnels, where the cross-section shape does not create many blind spots. However, hallways with more vertices in their cross-section shapes need not necessarily be measured this way; in such cases, we can tilt the laser beam to create fewer blind spots. We evaluated the proposed method in a tunnel 8.8 m wide and 6.4 m high and measured the cross-sections with errors of a few centimeters. For accurate cross-section measurements, even in larger tunnels, the mechanical parameters of the implemented system must be taken into account. For example, the distance between the measurement camera and the omnidirectional laser affects the distortion of the fisheye camera.
The proposed method with vertical laser beams, which is the simplest implementation, was inferior in direction estimation to LiDAR, which uses horizontal laser beams.
This means that horizontal lasers are more effective for precise direction estimation of the measurement system. Therefore, in the future, we will implement a more complex structured-light pattern that combines the omnidirectional laser light with horizontal lasers and develop a corresponding laser pixel detection algorithm. The horizontal lasers will also help increase the visible laser light even under sudden diameter changes.
In this paper, we targeted horseshoe-shaped tunnels. However, tunnels with more circular cross-sections also exist, such as shield tunnels. To achieve pose estimation in circular-shaped tunnels, we will consider adding lasers to the proposed method in connection with the improvement of direction estimation accuracy.
To estimate the roll angle of a circular cross-section, we can also use the gravity direction obtained from accelerometers. Therefore, sensor fusion will be considered in future work.
As a limitation of the proposed method, the drift of the registration camera on UAVs has to be considered. Although the registration camera of the implemented UAV system employed a small-propeller UAV to minimize shaking, the translation vectors of the measured cross-sections can be erroneous owing to the gradual drift of the UAV. The translation vectors can be corrected using landmarks, such as grooves in the concrete, construction machines, or electronic apparatus in tunnels. This means that the position of the registration camera has to be chosen carefully, considering the positions of the landmarks. Therefore, a measurement plan design for real construction sites must be considered in future work.
Our method can be implemented as an automatic measurement system because it can estimate the poses of each camera unit. Automatic measurement also has the advantage that the proposed method can be evaluated more quantitatively by precisely varying the measurement speed and direction. Moreover, our dense point clouds can be combined with other applications, such as object recognition tasks, which will provide a more thorough understanding of the measured point clouds of the proposed method.

5 | CONCLUSIONS
In this study, we proposed a high-density 3D tunnel measurement system based on a 2D-3D point set registration algorithm for the laser light illuminated by an omnidirectional laser. In the proposed method, an omnidirectional laser and a fisheye camera were fixed on a solid frame to measure the cross-section point clouds based on triangulation. To estimate the camera poses of the cross-section measurement unit at various measurement points without using environmental 2D/3D textures, which are scarce in tunnels, an additional camera was used to observe the shape of the laser light illuminated on the cross-section. The 2D-3D registration of the projected omnidirectional laser light on the tunnel wall estimated the relative poses between the two cameras. By integrating the cross-section point clouds with the estimated relative poses, the proposed method obtained a high-density point cloud of the tunnel.


In the experiments on tunnel 3D measurement, we implemented hand-truck and UAV measurement systems. The two cameras comprising each measurement system were moved alternately, and the hand-truck and UAV systems measured a 30-m-long tunnel in 5 and 4 min, respectively. The line-structured light method with background subtraction obtained a cross-section point cloud from two images, and both measurement systems obtained dense point clouds at 1/3 of the camera frame rate. The hand-truck system measured more precise point clouds than the UAV system.