In‐fleet structural health monitoring of roadway bridges using connected and autonomous vehicles’ data

Drive-by structural health monitoring (SHM) is a cost-efficient alternative to the direct SHM of short- to medium-span bridges, requiring no sensors to be installed on the structure. However, drive-by SHM is generally regarded as a short-term monitoring technique because of the challenges associated with using multiple passages of instrumented vehicles over a long period. This paper proposes combining the potential of connected and autonomous vehicles (CAVs) with drive-by damage detection by introducing In-Fleet SHM. To the authors' knowledge, this is the first study that proposes using CAVs for the SHM of civil engineering structures. Each In-Fleet CAV can automatically collect the vehicle's persistent and temporal data through its embedded sensors and transmit them to edge computing systems for analysis. The persistent data include vehicle type and model, while the temporal parameters encompass the position, speed, heading, and vertical acceleration of the CAVs. Knowing the persistent and temporal data of vehicles passing over transportation infrastructure enables, on the one hand, identification of the dynamic parameters of the bridge from the vehicles' vertical acceleration response using drive-by techniques and, on the other hand, near real-time reconstruction of the finite element model of the vehicles passing over the supporting bridges. In contrast to drive-by SHM, In-Fleet monitoring has expanded spatial and temporal coverage, enabling continuous near real-time monitoring of the highway bridges of a transportation network. The accuracy and resolution of the identified modal components in In-Fleet SHM are enhanced by the crowdsensing nature of the collected data. Furthermore, by offering a unique set of characteristics, this method fills a crucial gap in implementing Industry 4.0 technologies and digital twins for the SHM of bridges.

component level (to detect local damages) and the system level (to detect global damages) but not at the network level (to detect damaged bridges on the transportation network-wide scale).
Direct vibration-based damage detection methods require sensors, storage and communication electronics, data acquisition, and power sources to be installed on target bridges (Eltouny & Liang, 2023). However, the installation, operation, and maintenance costs of these components are relatively high and not economical for most medium- and short-span bridges (Brownjohn et al., 2016; Shokravi et al., 2020).
Operational modal analysis is the most common method for identifying the modal parameters of bridges, using ambient excitation (e.g., traffic-induced vibrations, wind-induced vibrations, temperature variations) as the input signal of the system. The basic assumption in operational modal analysis is that the bridge input excitation is a white noise sequence (Cui et al., 2019). Although this assumption might be correct in long-span bridges with a dense traffic flow, its validity is disputable for short- and medium-span bridges, for which the span accommodates only a small number of vehicles at a time (Ditlevsen, 1994; Hou et al., 2020). Parameters such as the dynamic characteristics, speed, and mass of the passing vehicles influence the bridge vibration by altering the bridge's effective mass (Khan et al., 2016; Yang et al., 2004). Kim et al. (2001) showed that the natural frequencies of short-span bridges with relatively small masses were changed by 5.4%. Meanwhile, the dynamic amplification of the static load in short- and medium-span bridges is the dominant traffic loading scenario, placing greater importance on acquiring vehicular loading data (Cooper, 2011). Therefore, direct monitoring methods, which do not consider vehicle characteristics (e.g., loading, vehicle size) and operational traffic conditions (e.g., vehicle position, traffic speed), are prone to false diagnoses (Khan et al., 2016).
Drive-by monitoring methods have shown great potential for short- and medium-span bridges due to their low operational cost (Mokalled et al., 2022). These techniques do not require sensors to be installed on the bridge, since they can perform under operating conditions without disrupting traffic flow (Li et al., 2022). The bridge response obtained by a drive-by monitoring system corresponds to the excitation of the bridge at different spatial points (Yang et al., 2021). In contrast to direct methods used to continuously monitor bridges, drive-by SHM has generally been considered a short-term monitoring system due to the challenges associated with using a single instrumented vehicle's passage over an extended period (McGetrick et al., 2017).
Several parameters influence the vibration parameters of bridges in drive-by SHM. Lin and Yang (2005) studied the effect of different vehicular and road parameters on bridge natural frequency. They found that the vehicle speed, road surface conditions, and the dynamic parameters of the moving vehicle or test cart must be known to accurately determine the fundamental frequency of bridge vibration. Table 1 summarizes the key parameters in drive-by SHM.
The vehicular data needed for drive-by SHM can be partitioned into persistent and temporal vehicle parameters.
The methods for extracting on-road vehicle parameters during travel fall under the heading of vehicle classification. Vehicle classification methods relying on fixed-location sensors (e.g., pneumatic, piezoelectric, fiber optic, and strain gauge sensors) can provide valuable vehicular information; however, they collect temporal information only at the point where they are installed (Zhao et al., 2019; Zhu et al., 2015).
Vision-based monitoring systems are more cost-effective vehicle classification techniques and can provide information on persistent vehicle parameters such as model and brand (Hyun & Jin, 2018; Sen et al., 2019). Moreover, vision-based systems can retrieve temporal vehicle parameters, such as speed, acceleration, and heading, in addition to the persistent data. However, vehicle classification using vision-based sensors can be performed only within the confined coverage area of a camera (Buch et al., 2011; Rožić & Rožić, 2005).
Smartphone-based global positioning systems (GPSs) or portable GPSs can extract location and movement data; however, they cannot provide vehicles' persistent parameters (Shokravi et al., 2020). Table 2 summarizes available vehicle classification methods that can be used to extract the parameters of moving vehicles on bridges. A review of these methods reveals that they cannot exploit all the data required for drive-by SHM in a real-time and network-wide manner.
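The persistent/temporal split introduced above can be pictured as a simple data record per vehicle message (an illustrative sketch; the field names are our assumptions, not a standardized CAV message format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PersistentParams:
    """Fixed per-vehicle properties (known from type approval / registration)."""
    vehicle_type: str
    model: str
    mass_kg: float
    axle_spacing_m: float

@dataclass
class TemporalParams:
    """Time-varying state sampled during operation."""
    timestamp: float
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    vertical_accel_mps2: float

# One persistent record per vehicle; many temporal samples per trip
persistent = PersistentParams("sedan", "example-model", 1200.0, 2.7)
sample = TemporalParams(0.0, 1.5592, 103.6385, 10.0, 90.0, -0.12)
print(persistent.mass_kg, sample.speed_mps)
```

The point of the split is that the persistent record needs to be transmitted only once per vehicle, while the temporal samples stream continuously.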
Connected and autonomous vehicles (CAVs) are intelligent road vehicles that autonomously collect a vast amount of data from the vehicle and the environment in short intervals to make decisions and ensure safe navigation (Gouda et al., 2021). The temporal data of each CAV are automatically collected by perception sensors (Du et al., 2023). These data, along with the persistent parameters, are disseminated to a cloud computing system for processing (Chen et al., 2021). The vehicles' spatiotemporal data, such as position (i.e., longitudinal and transverse) and speed, are obtained from the global navigation satellite system (GNSS) and odometer sensors. Light detection and ranging (LiDAR) collects the lane distribution of the vehicles at each timestamp. The drive-by method needs multiple passages of single or several instrumented vehicles, while scanning is carried out only when the specialist vehicle crosses the supporting bridges. Therefore, the drive-by method is considered impractical for many real-world scenarios. The In-Fleet framework is proposed here to deal with these challenges. This framework combines the capabilities of CAVs with drive-by damage detection. In contrast to drive-by monitoring, In-Fleet monitoring uses CAV-based crowdsensing with expanded spatial and temporal coverage, which enables continuous transportation network-wide monitoring of bridges. While on-site SHM systems can detect local and global damage only at the component and system levels, the In-Fleet method offers a novel fleet-wide monitoring capability across the transportation network, known as "network-level" monitoring, that did not exist before. Hence, the unique contributions of In-Fleet SHM with respect to other methods can be summarized as follows.
1. Particularly suitable for short- and medium-span bridges.
2. Enables drive-by SHM with no requirement for specialist test vehicles.
3. Facilitates continuous near real-time drive-by SHM.
4. Enables network-level SHM of bridges in the transportation network.
5. Provides higher accuracy and resolution of the identified modal components compared to conventional drive-by SHM methods due to incorporating crowdsourcing.
6. Enhances sustainability by using available resources without the need to install in situ sensors on bridges and incur maintenance costs.
Using CAVs to indirectly monitor bridges could extend health monitoring coverage to short-, medium-, and large-span bridges. This practice could lead to a new paradigm in monitoring transportation infrastructure, namely "network-level SHM," which has not existed before. Hence, using CAVs for the SHM of road bridges could provide opportunities and directions for developing more sustainable asset management. In-Fleet SHM addresses the limitations of conventional drive-by SHM methods by facilitating continuous near real-time monitoring and providing improved accuracy. Furthermore, by offering a unique set of characteristics, this method fills a crucial gap in implementing Industry 4.0 technologies and digital twin applications for the SHM of bridges. Consequently, the successful implementation of the In-Fleet approach would pave the way for transformative advancements in SHM practices within the transportation network.

CAVs
CAVs are intelligent road vehicles that can navigate themselves to a predetermined destination by utilizing information shared among vehicles in cooperative vehicular networks (Zhou et al., 2023). The DARPA Grand Challenge conducted by the US Defense Advanced Research Projects Agency was a turning point in pushing autonomous vehicles closer to reality (Gandia et al., 2019). However, the 2004 Grand Challenge had no winner, as no team was able to navigate the course completely. In the 2007 DARPA Urban Challenge, "Boss" from Carnegie Mellon University won first place, finishing the route in the fastest time (Benbarka, 2023). Boss used a 2007 Chevy Tahoe equipped with a combination of 17 different sensors, including LiDAR, radar, and GPS (Urmson et al., 2009). The competing vehicles in the Grand Challenges were human-driven vehicles (HDVs) modified for autonomous driving. The knowledge and expertise gained from the DARPA Challenges played a crucial role in shaping the future of CAVs. Currently, the CAVs on the market fall into two main classes: (1) autonomous vehicles developed from scratch, such as Zoox, Cruise Origin, Nuro, T-Log, and R2; and (2) retrofitted existing HDVs carrying a comprehensive self-driving technology suite, such as Waymo, G2 and G3 Cruise AV, Aptive, Pony, P2, and Kodiak.
CAVs are still in the development stage, and there is no mandatory standardized set of sensors for autonomous driving, leading to a variety of perception and localization sensor configurations and technologies across different CAV prototypes and manufacturers. Nonetheless, stringent competency and functionality requirements are set by standardization bodies that must be met before a specific CAV technology is rolled out into the market (Connected and Automated Driving Europe, 2020).
LiDAR, radars, and cameras are the primary perception sensors in CAVs for collecting the data needed for autonomous driving. LiDAR sensors exploit point clouds to identify distance and size. Radar imaging systems complement cameras and LiDARs by instantly tracking the trajectories and speeds of vehicles, pedestrians, and cyclists, particularly in challenging weather conditions. LiDAR sensors actively scan their 360° surroundings, frequently capturing huge amounts of data on objects within scanning range (Zhu et al., 2014). These relatively expensive sensors have high-range coverage and large detection distances (Berrio et al., 2018). The images produced by LiDAR are in 3D and must be converted to 2D after removing noise and filtering out unwanted points. Moving objects are separated from the original LiDAR data, and vehicles are classified using clustering techniques (Zhang et al., 2020). The obtained data can be used to extract vehicle patterns and intervehicle distance.
In-vehicle cameras are essential components of the perception modules of autonomous vehicles. Stereo cameras are the dominant in-vehicle cameras in CAVs due to their higher performance (Kato et al., 2006). They are composed of two cameras placed apart, simulating human binocular vision, which enables depth perception and the creation of 3D point clouds (Wang et al., 2022). Depth perception in stereo cameras can help detect the distance to objects, understand their size and shape, and estimate the vehicle's position relative to the surroundings. Image processing technologies transform the images captured by the wide-angle cameras around the vehicle into a 360-degree panoramic view (Wang et al., 2022).
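The depth perception mentioned above follows from simple stereo geometry: for a rectified stereo pair, depth is Z = fB/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the pixel disparity. A minimal sketch (the numeric values are illustrative assumptions, not from any particular camera):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centers (m)
    disparity_px: horizontal pixel offset of the same point in the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Assumed example: 700 px focal length, 0.12 m baseline, 8.4 px disparity
depth = stereo_depth(700.0, 0.12, 8.4)
print(round(depth, 2))  # depth to the object in meters
```

The inverse relationship between disparity and depth is why stereo accuracy degrades for distant objects, where the disparity shrinks toward sub-pixel values.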
Accurate localization and navigation of CAVs can be obtained from the integration of an inertial measurement unit (IMU), GNSS, and high-definition (HD) maps, which supports dependable decision-making in determining the best routes, actions, or commands to be executed (Van Brummelen et al., 2018; Wu et al., 2019). IMUs play a crucial role in providing accurate and stable estimation of the position and orientation of CAVs using three-axis accelerometers, gyroscopes, and magnetometers (Dudek & Jenkin, 2016; Prayudi et al., 2011).
The methods used for verification of research studies on CAVs generally rely on experimental and/or simulation-based models, which are further discussed in the following.

Experimental models
Early autonomous vehicles were open-source, allowing unrestricted source code modification and customization; however, modern commercial CAVs have shifted away from open-source models, restricting access to and modification of proprietary source code, which makes them less suitable for research and development. Small-scale to full-scale vehicle prototypes are used as experimental models to assess and refine navigation performance, safety, and adaptability to real-world scenarios.
The number of full-scale open-source CAVs developed from scratch for research and educational purposes is quite limited due to their complexity, high development costs, and proprietary considerations (Li et al., 2022; Maurer et al., 2016; Pérez et al., 2010). ISEAUTO was designed and manufactured at Tallinn University of Technology, Estonia, in cooperation with TalTech and the AuVe Tech companies (Bellone et al., 2021). Its main middleware is the robot operating system (ROS), and software component integration is based on the Autoware stack to keep the development open (Sell et al., 2018).
Using scaled-down CAV models has proven to be an agile approach for the experimentation and development of autonomous vehicles. Arduino and Raspberry Pi are two computing platforms widely used for building mobile robot models with limited applicability (Krauss & IEEE, 2016). Raspberry Pi is a more compelling choice than Arduino for building small autonomous robots due to its high compatibility with ROS. The TurtleBot3 Burger and the TurtleBot3 Waffle Pi are two popular testing platforms for autonomous driving that take advantage of Raspberry Pi 3 and 4, respectively. They are equipped with multiple sensors, such as LiDAR, a camera, and an IMU, and use the ROS framework for their software stack (Martínez, 2021). For instance, Nonomura et al. (2023) used a Waffle Pi equipped with LiDAR, a camera, an IMU, and wheel encoders as the test autonomous vehicle to study the optimized delivery plan of a car-sharing system in smart mobility platforms. The Donkeycar open-source self-driving platform developed by DIY Robocars is another mobile robot that uses the Raspberry Pi computing platform, supporting ROS and high-level Python libraries such as tornado, keras, tensorflow, and the open source computer vision library (OpenCV) for autonomous driving (Roscoe, 2019). Yun and Park (2021) used Donkeycar to verify their proposed digital twin framework for an autonomous vehicle. For the implementation of autonomous driving in industry, with the capability of development and testing, several companies have provided robots with greater computing resources capable of supporting higher-precision perception and localization sensors. AgileX Robotics (2023) developed the AutoWare Kit software and hardware platform stack with an 8-core ASUS VivoMini computer, a Robotsense RS16 LiDAR, and support for stereo camera, laser, GPS, and IMU extensions (AgileX_Robotics, 2022). The AutoKit robots use the ROS-Gazebo simulation environment as the middleware suite. Ginerica et al. (2021) utilized an AgileX Scout 2.0, a 1:4-scale car equipped with a Hesai Pandar40 LiDAR, 4x e-130A cameras, a VESC IMU, GPS, and an NVIDIA AGX Xavier, to develop the test experiment for their own algorithm for the predictive control of autonomous vehicles.

Simulation tools
Automotive testing standards for HDVs before market entry mainly rely on metrics that can be assessed with a limited number of standardized tests, such as crash testing, emissions testing, and performance testing. However, the assessment of CAVs requires a more complex set of tests to ensure safe and robust autonomous navigation capabilities, resulting in exponential growth in the required number of tests. Thus, virtual testing using simulation tools has become an essential component of the assessment framework, enabling cost-effective, safer, and highly efficient testing of CAVs in a wide range of scenarios (Aparow et al., 2019).
The Willow Garage robotics research lab developed the ROS open-source software suite in 2010, which rapidly became the standard tool among robotics researchers due to its modular design (Raju et al., 2019). ROS is a distributed computing framework with multiple nodes communicating with each other by delivering messages with well-defined formats (Tang et al., 2017). The latest version of ROS is Noetic, which is supported until 2025. ROS2 is the successor of ROS, addressing its limitations and introducing new features to meet evolving requirements in the robotics field (Bonci et al., 2023). The advantage of ROS2 over its predecessor can be attributed to its use of the data distribution service to enhance efficiency, reliability, latency, and scalability in building robotic systems (Kronauer et al., 2021). Iron and Humble are the active versions of ROS2, supported until 2024 and 2027, respectively. Gazebo is a powerful robot simulation environment for ROS and ROS2 that is widely used for simulating autonomous vehicles. Plugins expand the capabilities of Gazebo to use stereo cameras, LiDAR, GPS, IMU, and radar sensors (Rosique et al., 2019).
Several software platforms have been developed in recent years to provide a complete set of self-driving modules. Autoware is the world's first "all-in-one" open-source platform to simulate autonomous driving in urban areas, highways, freeways, hilly regions, and geo-fenced areas (Park et al., 2022; Raju et al., 2019). Autoware is based on ROS, with pre-built software libraries for self-driving modeling. In the Autoware software, the point cloud library is used to filter and interpret point clouds to manage LiDAR scans and 3D mapping data, as well as for visualization functions; the compute unified device architecture handles the computation-intensive tasks involved in self-driving; the convolutional architecture for fast feature embedding is a deep learning framework; and OpenCV is used for image processing (Xin et al., 2022). Apollo, created by Baidu, is another open-source, Unity-based platform that is widely used in simulating autonomous vehicles (Almanee et al., 2021). Apollo encompasses a set of perception, routing, planning, localization, prediction, control, CanBus, HD-Map, human machine interface (HMI), monitor, and guardian modules, and it supports the glog, gflags, and ROS libraries (Alcon et al., 2020). Autoware and Apollo are the two most popular open-source software stacks for the simulation of autonomous driving worldwide (Kanakagiri, 2021).
Car learning to act (CARLA) is another well-known open-source platform for simulating autonomous driving. CARLA offers complete control of environments and objects to simulate a variety of driving features, such as lane incursion and collision (Deschaud, 2021). CARLA is constructed on Unreal Engine 4 and provides a high-fidelity visualization of urban driving environments for testing in a controlled and repeatable manner (Dosovitskiy et al., 2017). It also offers a simple Python application programming interface (API) for collecting data from the built-in sensors of the simulated vehicle (Cao & Ramezani, 2023). LG Silicon Valley Lab (LGSVL) is a simulator developed by the LG Electronics America R&D Center based on the multi-robot Unity game engine (Jiao et al., 2021; Rong et al., 2020). It offers a variety of cars and pedestrians, and supports the creation of maps with RoadRunner or access to ready-to-use maps capable of simulating complex driving scenarios.
LGSVL supports flexible configurations of sensors like GPS, radar, depth camera, and LiDAR (Xu & Liu, 2022). Further details regarding other simulation platforms for autonomous driving can be found in the review by Meftah and Braham (2022).

DRIVE-BY SHM
The central concept underlying drive-by SHM is that the acceleration response of traveling vehicles contains bridge vibration parameters (Yang & Lin, 2005). Yang et al. (2004) analytically confirmed the feasibility of the drive-by method for extracting bridge frequencies from a passing vehicle using a moving sprung mass and a one-dimensional beam model. The analyzed simply supported beam has a span length $L$, mass per unit length $\bar{m}$, moment of inertia $I$, Poisson's ratio $\nu$, Young's modulus $E$, and frequency $\omega_b$, and was subjected to a sprung mass moving at constant speed $v$ with mass $m_v$, stiffness $k_v$, and frequency $\omega_v$. The deflection of the beam at the mid-span is $u$, and the displacement of the sprung mass is $q_v$ (Yang et al., 2004). $f_c(t)$ denotes the contact force between the sprung mass and the beam, and $\delta$ is the delta function. The derivatives of the beam response with respect to the coordinate $x$ and time $t$ are denoted by prime and dot superscripts, respectively. The beam displacement in the first mode is approximated as $u(x, t) = q_b(t)\sin(\pi x/L)$, considering the contact force $f_c(t) = k_v\left(q_v - u|_{x=vt}\right) + m_v g$. These approximations simplify the derivation of the formula that proves the applicability of the drive-by method.
The natural frequencies (rad/s) of the vehicle and the bridge are denoted as $\omega_v = \sqrt{k_v/m_v}$ and $\omega_b = (\pi^2/L^2)\sqrt{EI/\bar{m}}$, respectively. The derivation assumes that the bridge mass is much higher than the vehicle load, while defining the frequency ratio as $\mu = \omega_v/\omega_b$ and the speed parameter as $S = \pi v/(L\omega_b)$. The vehicle displacement $q_v$, velocity $\dot{q}_v$, and acceleration $\ddot{q}_v$ can then be written as follows (Malekjafarian et al., 2022; Yang et al., 2019); in particular, the acceleration takes the form

$$\ddot{q}_v(t) = \frac{\Delta_{st}\,\omega_b^2}{2(1 - S^2)}\left[A_d\cos\frac{2\pi v t}{L} + A_v\cos\omega_v t + A_{b1}\cos\left(\omega_b - \frac{\pi v}{L}\right)t + A_{b2}\cos\left(\omega_b + \frac{\pi v}{L}\right)t\right] \quad (3)$$

where $\Delta_{st} = 2m_v g L^3/(\pi^4 EI)$ is the static mid-span deflection and $A_d$, $A_v$, $A_{b1}$, and $A_{b2}$ are dimensionless amplitude coefficients depending on $\mu$ and $S$; the four terms correspond to the driving frequency, the vehicle frequency, and the two shifted bridge frequencies, respectively. Shi and Uddin (2020) presented the theoretical model of a sprung mass moving on a beam with different boundary conditions. Both the vehicle and bridge damping were considered in the proposed model, extracting five bridge mode shapes from the vehicle response. The data and code were also presented (Shi & Uddin, 2020).
The analytical model of a sprung mass with the following parameters is constructed to illustrate the concept and to find the frequency response function (FRF) of the vertical acceleration time series of the sprung mass using the above equations. The model utilized the following data: $m_v = 1200$ kg and $k_v = 500$ kN/m, passing over a beam of $L = 25$ m, cross-sectional area $A = 2.0$ m$^2$ ($2.35702 \times 0.8485281$ m), $I = 0.12$ m$^4$, density $\rho = 2400$ kg/m$^3$, and $E = 27.5$ GN/m$^2$ (Yang et al., 2019). The fundamental vibration frequencies of the bridge and the vehicle, $f_b$ and $f_v$, were 2.08 and 3.25 Hz, respectively. The vehicle speed over the beam was $v = 10$ m/s. The displacement, velocity, and acceleration responses of the passing vehicle and the beam midpoint are shown in Figure 2. Figure 3 shows the FRF of the 2D sprung mass model.
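As a quick sanity check on the numerical example above, the two fundamental frequencies can be recomputed directly from the listed beam and vehicle parameters (a minimal sketch; the variable names are ours):

```python
import math

# Beam and vehicle parameters from the example (Yang et al., 2019)
L = 25.0       # span length (m)
A = 2.0        # cross-sectional area (m^2)
rho = 2400.0   # material density (kg/m^3)
I = 0.12       # moment of inertia (m^4)
E = 27.5e9     # Young's modulus (N/m^2)
m_v = 1200.0   # sprung mass (kg)
k_v = 500e3    # spring stiffness (N/m)

m_bar = rho * A  # mass per unit length (kg/m)

# Fundamental circular frequencies (rad/s) of bridge and vehicle
omega_b = (math.pi**2 / L**2) * math.sqrt(E * I / m_bar)
omega_v = math.sqrt(k_v / m_v)

f_b = omega_b / (2 * math.pi)  # ≈ 2.08 Hz, matching the stated bridge frequency
f_v = omega_v / (2 * math.pi)  # ≈ 3.25 Hz, matching the stated vehicle frequency
print(round(f_b, 2), round(f_v, 2))
```

Both values reproduce the frequencies quoted in the example, confirming that the listed parameters are internally consistent.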
In both figures, the first mode of the bridge at 2.08 Hz is observable, while the sprung mass's first mode (3.25 Hz) is also detected in the FRF obtained from the vehicle acceleration response. A frequency shift ($\omega_{dn} = n\pi v/L$) appears in the FRF of the vehicle response, where $\omega_{dn}$ denotes the $n$th disturbance frequency produced by the movement of the load. Ma et al. (2019) indicated that when the combination of $\omega_{dn}$ and the vehicle frequency $\omega_v$ (i.e., $\omega_{dn} + \omega_v$ or $-\omega_{dn} + \omega_v$) approaches the $n$th natural frequency $\omega_{bn}$ of the beam, the amplitude of the dynamic response is enlarged, resulting in bridge resonance. Cantieni (1992) indicated that the added mass due to vehicles' passage is an essential parameter in the frequency shift of short- and medium-span bridges with high vehicle-to-bridge mass ratios. Specifically, the author reported that such bridge frequency shifts reach up to 23% of the natural frequency. Meanwhile, Kim et al. (2003) indicated that the frequency shift is negligible in large-span bridges, while the change can be up to 5.4% of the measured natural frequencies in short-span bridges. Cantero et al. (2019) found that the natural frequencies in indirect monitoring were influenced by the number and mass of vehicles on the bridge, their mechanical parameters (i.e., natural frequency and damping), and the positions of the vehicles on the target bridge. The results were similar for vehicle and bridge midpoint responses, while they varied when approaching the boundary conditions.
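For the numerical example above, the first driving frequency and the resulting sidebands around the bridge peak in the vehicle spectrum can be computed directly (a sketch under the first-mode, n = 1 assumption):

```python
# Example values: v = 10 m/s, L = 25 m, bridge frequency f_b ≈ 2.08 Hz.
# In Hz, the first driving frequency is v / (2L), and the bridge peak in the
# vehicle spectrum splits into f_b - v/(2L) and f_b + v/(2L).
v, L, f_b = 10.0, 25.0, 2.08

f_drive = v / (2 * L)  # first driving frequency (Hz)
sidebands = (f_b - f_drive, f_b + f_drive)
print(f_drive, sidebands)  # 0.2 Hz drive; peaks near 1.88 and 2.28 Hz
```

At higher vehicle speeds the sidebands spread further apart, which is one reason slow passages are preferred when resolving the bridge frequency from a vehicle's FRF.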
The results obtained from drive-by damage detection inevitably entail uncertainties due to poor road surfaces, which may cause variations in the characteristics of the structural vibration. Traditional methods of data identification often rely on limited or controlled datasets, which may not capture the full range of variations and complexities present in real-world scenarios. Crowdsourcing is a collaborative approach to improving the quality of the identified modal parameters using a pool of data. Miyamoto and Yabe (2012) proposed crowdsourcing data in drive-by SHM by installing acceleration sensors on public transit buses. The repeated passages of these public city buses along the same bridge ensured monitoring continuity. The study used single and multiple bus passages over the KW Bridge in Japan at two different speeds (30 and 40 km/h). The authors found that the bridge deflection was different for the undamaged and severely damaged scenarios.
Smartphone devices have been widely used for crowdsensing, as they are equipped with many sensors and are carried everywhere; however, they cannot provide the persistent data of vehicles passing over bridges. McGetrick et al. (2017) studied the potential of using smartphones as drive-by SHM sensors in ordinary vehicles to collect vehicle acceleration and position data when crossing bridges. The acceleration and location data were exploited by the smartphones' triaxial gyroscopes, accelerometers, and GNSS sensors. Better accuracy, cost efficiency, and ease of use were achieved with smartphones than with wired in-car accelerometers and professional GNSS devices (survey-grade Leica Geosystems Viva GS14). However, the authors did not record the variability of the identified modal parameters that could have arisen due to speed variation and road surface irregularities. Matarazzo et al. (2018) used smartphones as sensors to collect acceleration responses from vehicles crossing a bridge. The precision of the results increased when datasets from several smartphones were combined to detect several modal frequencies of the bridge. Quqa et al. (2022) were the first to study light vehicles, such as bicycles and electric kick scooters, using the drive-by technique. Standardized shared micromobility vehicles with temporarily installed smartphones were used to extract the dynamic parameters of a real footbridge in Bologna, Italy. Mei (2021) proposed monitoring transportation infrastructure using moving vehicles. The researchers used smartphone data for crowdsensing the acceleration data of bridges, utilizing a simplified laboratory-scale model car carrying a smartphone device. Matarazzo et al. (2022) determined the frequencies of highway bridges from in-vehicle smartphones during vehicles' crossings over the target bridge. It was suggested that crowdsourced smartphone datasets can provide more valuable information for monitoring bridges. Shokravi et al. (2020) were the first to suggest using smart vehicles for SHM. Gkoumas et al. (2021) replaced the term smart vehicles with CAVs, with the justification that vehicles with autonomy and connectivity could be a better source of data for SHM. However, none of these works introduced a framework for implementing the proposed systems.
The existing crowdsourcing methods in the literature primarily rely on smartphones. However, it is important to note that CAVs are capable of operating without human intervention, corresponding to the highest automation levels of 4 and 5. In contrast, smartphones require human operators, with a maximum automation level of 3. Furthermore, utilizing smartphones may not be practical in real-life scenarios.

IN-FLEET SHM
In-Fleet monitoring uses the principles of drive-by SHM, where the acceleration response is extracted from a vehicle moving over the target bridge. The physical parameters of the specialist vehicle, such as vehicle weight, axle track width, axle spacing, and axle stiffness, are generally measured before operation. On the other hand, temporal parameters such as speed, heading, and vehicle position are regularly recorded during operation to calibrate the obtained vehicle response against the real state of the test vehicle.
In In-Fleet monitoring, each CAV operates as a specialist test vehicle that automatically collects the bridge acceleration response. The IMU is the motion-based sensor in CAVs that provides a six-degrees-of-freedom estimate of in-motion position, velocity, and acceleration using magnetometers, gyroscopes, and accelerometers (Dudek & Jenkin, 2016). The vertical acceleration obtained by the IMU is used in the In-Fleet method to extract the bridge modal parameters. Data from the IMU, together with the GNSS and the HD map, enable retrieval of the spatiotemporal vehicle parameters, such as speed, direction, and the longitudinal and transverse position of the vehicle. The physical parameters of a CAV are retrievable from the disseminated messages. The data obtained by GPS, IMU, LiDAR, and cameras are fused to integrate multiple information sources and compensate for the limitations of individual sensors. The data fusion is based on the timestamp defined in the message packet. The timestamp for each vehicle is derived from the vehicle's internal clock, which may vary within the network, so the time reference used to synchronize all vehicles comes from the GNSS time reference, based on the IEEE 802.11p recommendation (Peixoto et al., 2023). Figure 4 shows the schematic of In-Fleet SHM, where the CAV disseminates the data package to the edge computing system and, after the data are processed, the condition of the bridge is evaluated.
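The timestamp-based fusion described above can be sketched as a nearest-timestamp alignment of two sensor streams, assuming both streams already carry GNSS-synchronized clocks (an illustrative sketch only; the stream contents and field layout are our assumptions, not the paper's message format):

```python
import bisect

def fuse_by_timestamp(imu_samples, gnss_samples):
    """Pair each IMU sample with the GNSS fix nearest in time.

    imu_samples:  list of (t, vertical_accel) tuples, sorted by t
    gnss_samples: list of (t, position) tuples, sorted by t
    Returns a list of (t, vertical_accel, position) tuples.
    """
    gnss_times = [t for t, _ in gnss_samples]
    fused = []
    for t, accel in imu_samples:
        i = bisect.bisect_left(gnss_times, t)
        # Compare the two neighboring fixes and keep the closer one
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gnss_times)]
        j = min(candidates, key=lambda k: abs(gnss_times[k] - t))
        fused.append((t, accel, gnss_samples[j][1]))
    return fused

# Toy data: IMU sampled faster than GNSS (e.g., 100 Hz vs. 10 Hz)
imu = [(0.00, 0.01), (0.01, 0.02), (0.09, -0.01), (0.11, 0.00)]
gnss = [(0.0, (1.5592, 103.6385)), (0.1, (1.5593, 103.6386))]
fused = fuse_by_timestamp(imu, gnss)
print(fused)
```

In practice, interpolating the GNSS positions rather than snapping to the nearest fix would reduce the position error for fast-moving vehicles, but nearest-neighbor matching shows the core idea.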
In order to verify the concept, a simulation model of an autonomous vehicle equipped with camera, LiDAR, IMU, and GPS sensors, navigating throughout the highway transportation network of an area within a radius of 3 km from Universiti Teknologi Malaysia (UTM) (1°33′32.9″N 103°38′18.7″E), is introduced (the small area is selected due to the limitation of 50,000 nodes in OpenStreetMap; for larger area coverage, planet.osm can be used). The model of the CAV is simulated in the Humble Hawksbill edition of ROS2. The bridges and roads within the target monitoring zone are shown in Figure 5.
In this study, we have assumed that the IMU publishes the vertical acceleration data to the subscribers throughout its operation, regardless of whether the vehicle is passing over a bridge. For more realistic scenarios, a limiting condition could be added to the algorithm to publish the IMU vertical acceleration only when the vehicle is at the geographic position of a target bridge. The wheeled chassis-based Ackermann steering simulation model by Winston and Thomas (2022) is adopted.
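The gating condition suggested above can be sketched as a simple bounding-box geofence check on the vehicle's GNSS position. The bridge coordinates and helper names below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical geofences for monitored bridges: (lat_min, lat_max, lon_min, lon_max).
BRIDGE_GEOFENCES = {
    "bridge_A": (1.5760, 1.5790, 103.6560, 103.6600),
}

def on_monitored_bridge(lat, lon, geofences=BRIDGE_GEOFENCES):
    """Return the bridge id if (lat, lon) falls inside a geofence, else None."""
    for bridge_id, (lat_min, lat_max, lon_min, lon_max) in geofences.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return bridge_id
    return None

def maybe_publish(lat, lon, vertical_accel, publish):
    """Forward the IMU z-acceleration sample only while the vehicle is on a bridge."""
    bridge_id = on_monitored_bridge(lat, lon)
    if bridge_id is not None:
        publish(bridge_id, vertical_accel)
        return True
    return False
```

In practice the geofences would come from the HD map rather than a hard-coded table, and a polygon test (or a buffer around the bridge centerline) would replace the axis-aligned box.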

CONCEPT VERIFICATION
A simulation model of an autonomous vehicle equipped with LiDAR, camera, GPS, and IMU is generated in ROS2 Humble to verify the proposed In-Fleet concept. The model is simulated on a system running Linux Ubuntu 20.04 LTS (Focal Fossa), compatible with ROS2 and the Gazebo 11 simulation environment. The model encompasses a complex integration of sensors, processing modules, and control algorithms to enable the simulation of real-world autonomous navigation. The nodes in the architecture communicate over topics to publish or subscribe to information.
Gazebo is used to simulate the visual aspects of the physical environment in which the CAV operates. The data for autonomous navigation are obtained from various simulated sensors, which include: 'gps_simulator' to simulate the GPS data, providing the vehicle's position; '/gazebo_ros_head_hokuyo_controller' to simulate the LiDAR sensor for obstacle detection and environment mapping; 'camera_node' to simulate the camera and provide visual input; and 'imu_subscriber' to subscribe to the IMU data, essential for understanding the vehicle's movement and orientation. In turn, the 'imu_vertical_accel_publisher' publishes the vertical acceleration data to be used for In-Fleet SHM of the bridge.
Specialized packages facilitate the simulation of the ROS2 environment of the CAV model. Each package is designed to serve a specific function in simulating the components of an autonomous vehicle system. The packages in the workspace are presented in Figure 6. (For brevity and due to page limitations, only the important files and folders are shown.) The workspace of the simulated CAV for verification of the In-Fleet SHM can be found in Shokravi (2024).
The 'launches' package initiates the simulation environment and serves as the entry point for starting the simulation.By running the launch file, necessary dependencies are imported and the paths to various

F I G U R E 6 The structure of the workspace of the connected and autonomous vehicle (CAV) simulation model.
packages and their associated files are defined. Then the Gazebo simulation environment is activated, and the CAV model is defined via a unified robot description format (URDF). Subsequently, the navigation nodes are launched, including:

1. Localisation: to estimate the robot's position;
2. GlobalPlanner: to compute the route from the robot's current position to the defined destination;
3. LocalPlanner: to coordinate obstacle-avoidance maneuvers;
4. GPSSimulator: to provide geographical positioning data;
5. PathTracker: to ensure the robot stays on course;
6. AMCL: to refine the robot's pose estimate based on sensor measurements; and
7. IMUSubscriber and CameraNode: to handle inertial and visual input and processing.

The simulated CAV autonomously navigates in Gazebo through the map defined in 'cav_map.pgm'.
Lastly, to publish the vertical acceleration data, the 'imu_vertical_accel_publisher' subscribes to the raw IMU data and extracts the z-axis acceleration, which is especially pertinent for analyzing the dynamics of the supporting bridge using drive-by processing techniques.
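Framework plumbing aside, the core of such a publisher is a callback that reads the z-component of the IMU's linear acceleration from each raw message. A minimal, ROS-free sketch of that callback logic (the field layout mirrors the standard sensor_msgs/Imu message; the dataclasses here are stand-ins, not the actual ROS types):

```python
from dataclasses import dataclass

@dataclass
class Vector3:
    x: float
    y: float
    z: float

@dataclass
class ImuMsg:
    # Mirrors the one field of sensor_msgs/Imu that matters here.
    linear_acceleration: Vector3

def extract_vertical_accel(msg: ImuMsg) -> float:
    """Callback body: pull the z-axis (vertical) acceleration out of a raw
    IMU message, ready to be re-published on a dedicated topic."""
    return msg.linear_acceleration.z
```

In the actual node, this function would sit inside an rclpy subscription callback on '/cav/imu/data_raw', with the returned value republished on the vertical-acceleration topic.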
The 'cav_description' package, with 'cav.xacro' as its core component, defines the physical characteristics of the robot, including its chassis, sensors, and actuator configurations, in an extensible markup language (XML)-based, machine-readable URDF format. Wheels are represented by stereolithography (STL) meshes; 'hokuyo_link' defines the LiDAR sensor, 'imu_link' defines the IMU sensor, and 'camera_link' is for the camera. In essence, the 'cav_description' package is the virtual incarnation of the CAV's physical entity.
The 'cav_gazebo' package is designed to bridge the URDF and the Gazebo environment to simulate dynamic behaviors and real-world physics. The URDF model of the CAV in Gazebo is shown in Figure 7. This package configures the sensor simulations to provide the required data for the navigation and control algorithms, and, by employing the 'ackermann_drive' plugin, navigational commands are translated into realistic vehicular motion to mimic actual operation scenarios.
The 'cav_map' package serves as the spatial cognition module of the CAV, ensuring its awareness while navigating the environment. The 'map.yaml' file is the component of 'cav_map' that acts as the key descriptor for the map, linking to the 'cav_map.pgm' file. The YAML file defines the resolution, origin, scale, and reference point of the map, along with the 'occupied_thresh' and 'free_thresh' parameters.
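A minimal 'map.yaml' of this kind might look as follows; the values are illustrative, while the field names follow the standard ROS map_server descriptor format.

```yaml
image: cav_map.pgm        # the occupancy-grid raster linked by this descriptor
resolution: 0.05          # metres per pixel
origin: [0.0, 0.0, 0.0]   # [x, y, yaw] of the lower-left pixel in the map frame
negate: 0                 # 0: darker pixels are interpreted as more occupied
occupied_thresh: 0.65     # pixels above this occupancy probability are obstacles
free_thresh: 0.196        # pixels below this are free space
```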
To create the 'cav_map.pgm' file, the map of the target area is exported in the form of OpenStreetMap XML data. The exported map contains detailed information about the roads, paths, and various features of the environment that are essential for autonomous navigation. Once the OpenStreetMap (OSM) data are acquired, they are converted into PGM format using the open-source QGIS application. Before conversion, the OSM data are edited within QGIS by selecting the desired area and removing unnecessary details to improve processing efficiency. After refining the map in QGIS, it is converted into a grayscale image, where the varying shades represent different features and their occupancy status. The OSM and PGM format maps of the target area are shown in Figure 8.
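The way a grayscale PGM pixel maps to an occupancy state via the 'occupied_thresh' and 'free_thresh' parameters can be sketched as below; this mirrors the standard ROS map_server interpretation for a non-negated map.

```python
def pixel_to_occupancy(pixel, occupied_thresh=0.65, free_thresh=0.196):
    """Classify one 8-bit grayscale pixel of a PGM map.

    For a non-negated map, occupancy probability p = (255 - pixel) / 255,
    so darker pixels are more likely to be obstacles.
    Returns 'occupied', 'free', or 'unknown'.
    """
    p = (255 - pixel) / 255.0
    if p > occupied_thresh:
        return "occupied"
    if p < free_thresh:
        return "free"
    return "unknown"
```

For example, a black pixel (0) classifies as an obstacle, a white pixel (255) as free space, and mid-gray values as unknown, which is why removing visual clutter from the OSM export before rasterization matters.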
The 'cav_msgs' package is designed to ensure seamless information sharing among the various software components, enabling the complex interplay between sensing, decision-making, and action in autonomous systems.
The 'cav_nav' package is the navigational brain in a simulated autonomous vehicle that enables robust and intelligent navigation.
Central to the 'cav_nav' package is the 'GlobalPathPlanner' class, which is responsible for generating a global path for the CAV to follow. This module publishes a Path2D message that defines the global waypoints the vehicle must traverse. The planner adjusts the path in real time based on the vehicle's current state, ensuring that the CAV adheres to a dynamically updated and optimal route. The Transform ('tf') frame diagram of the workspace is shown in Figure 9.
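The waypoint-generation step of such a planner can be sketched, without the ROS plumbing, as interpolation between the current pose and the goal; in the real node these points would be packed into the (custom) Path2D message and re-published as the vehicle state changes. The straight-line interpolation below is an illustrative stand-in; the actual planner would follow the road graph.

```python
from math import hypot, ceil

def interpolate_waypoints(start, goal, spacing):
    """Generate evenly spaced (x, y) waypoints from start to goal.

    A stand-in for a global planner's output: `spacing` is the maximum
    distance between consecutive waypoints, and both endpoints are included.
    """
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    dist = hypot(dx, dy)
    n = max(1, ceil(dist / spacing))  # number of segments
    return [(start[0] + dx * i / n, start[1] + dy * i / n) for i in range(n + 1)]
```

Re-running this whenever localisation reports a new pose gives the "dynamically updated route" behavior: the path is regenerated from the latest position rather than patched in place.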
The ROS Qt GUI toolkit ('rqt') is a critical tool in ROS2 for developing and troubleshooting robotic systems, enabling real-time inspection, monitoring, parameter tuning, and debugging (Thale et al., 2020). It is a plugin-based tool that depicts the ROS2 architecture of the simulated model, all running nodes and processes, and the communication between them via graph visualization and data plotting.
The 'rqt' diagram of the simulated CAV is shown in Figure 10.
As shown in Figure 10, the 'imu_subscriber' node in the CAV subscribes to raw IMU data from '/cav/imu/data_raw', and these data are further processed by the 'imu_vertical_accel_publisher' to extract the vertical acceleration, the key metric for drive-by health monitoring of bridges. The 'global_planner' node in the navigation stack computes a route from the current location to the destination, and the 'local_planner' node refines the path to accommodate immediate obstacles and local terrain, publishing to '/cav/path'. The 'path_tracker' node subscribes to '/cav/path' and takes on the responsibility of following the planned trajectory by issuing velocity commands to '/cav/cmd_vel'; this node translates planned paths into actionable commands for the control mechanism of the vehicle. The 'localisation' node, receiving data from '/cav/gps' and '/cav/odom', maintains a real-time estimate of the vehicle's position and orientation, enabling situational awareness and the coherent function of the other components. The 'gps_simulator' simulates geospatial coordinates, mocking the GPS data of a real vehicle, and the 'robot_camera' publishes to the 'robot_camera/image_raw' topic to capture the visual input needed for object detection.

FUTURE WORKS
The adoption of the In-Fleet method has the potential to revolutionize the field of SHM in transportation infrastructures. So far, the concept of network-level In-Fleet SHM has been described, and the algorithm for implementing the proposed method for monitoring roadway bridges has been presented. The authors are working on an experimental implementation of In-Fleet SHM; however, they believe that several important research areas remain open, as follows.
1. It is necessary to implement research studies using CAVs to verify the potential challenges in real-life applications of In-Fleet SHM.
2. Full deployment of autonomous driving throughout transportation networks may still be a long way off, so further research on In-Fleet systems is crucial, especially during the transition period when HDV and CAV fleets coexist within the transportation network.
3. In-Fleet SHM offers unique characteristics for implementing Industry 4.0 technologies in the SHM of roadway bridges. Therefore, additional studies are needed to define the most appropriate methodologies for implementing Industry 4.0 in this context.
4. Future research on In-Fleet SHM should explore alternative damage features that are more suitable for this purpose, considering the advancements in computing power and the possibility of using multiple features.
5. In-Fleet monitoring relies on factory-released information about the vehicles, such as kerb weight or suspension parameters. Further studies should address the reliability of using these data and the potential variations in the results.
Consequently, the successful implementation of this method would pave the way for transformative advancements in SHM practices within the transportation network.

CONCLUSION
Indirect SHM utilizes the dynamic responses of instrumented passing vehicles to identify damage to bridges without requiring costly instrumentation. Drive-by SHM is generally known as a short-term monitoring technique due to the challenges associated with using multiple passages of instrumented vehicles over a long time. This study addressed this problem by considering the potential use of CAVs to identify moving vehicle parameters on bridges. ROS is the most popular framework for autonomous and robotic research; therefore, to verify the applicability of autonomous vehicles for In-Fleet SHM, a simulation model of a CAV with LiDAR, GPS, IMU, and camera sensors was generated by defining a workspace on a Linux Ubuntu system with ROS2 Humble.
The results show that the workspace can be implemented in real-life applications and that the method can provide a wide range of information, including persistent parameters, such as vehicle manufacturer, vehicle type and brand, axle track width, axle spacing, and axle stiffness, and temporal parameters, such as speed, acceleration, heading, and vehicle position. Incorporating vehicles' persistent and temporal parameters (obtained in real time from CAVs) is expected to increase the accuracy and reliability of indirect damage detection results.

A C K N O W L E D G M E N T S
Hoofar Shokravi is a researcher at Universiti Teknologi Malaysia under the Postdoctoral Fellowship Scheme for the project "Smart-Vehicle-Assisted Structural Health Monitoring." We thank the Research Management Centre, Universiti Teknologi Malaysia, for their financial support through Grant Q.J130000.21A2.06E79.
Open access publishing facilitated by Western Sydney University, as part of the Wiley–Western Sydney University agreement via the Council of Australian University Librarians.

TA B L E 2 Abbreviation: LiDAR, light detection and ranging.

F I G U R E 1 Moving sprung mass over a beam.

In the sprung mass model, rigid masses are connected by springs and dampers with different degrees of freedom (de Almeida & da Silva, 2006; Yang & Lin, 2005). Sprung mass simulation models consist of the quarter-, half-, and full-vehicle models of articulated or real-world conventional cars and trucks. A moving sprung mass model is shown in Figure 1. Damping is neglected in the figure and the following equations to simplify the derivation.
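For reference, the undamped sprung-mass/bridge interaction sketched in Figure 1 is commonly written as the coupled equations below (a standard formulation, cf. Yang & Lin, 2005; the symbols are assumed here: $m_v$ and $k_v$ are the sprung mass and spring stiffness, $z(t)$ its vertical displacement, $w(x,t)$ the beam deflection, $\bar{m}$ and $EI$ the beam's mass per unit length and flexural rigidity, and $v$ the constant vehicle speed, so the contact point is at $x = vt$):

```latex
% Vehicle (sprung mass) equation of motion:
m_v \, \ddot{z}(t) + k_v \left[ z(t) - w(vt, t) \right] = 0

% Bridge (simply supported Euler--Bernoulli beam) under the moving contact force:
\bar{m} \, \ddot{w}(x,t) + EI \, w''''(x,t)
  = \left[ k_v \left( z(t) - w(vt,t) \right) - m_v g \right] \delta(x - vt)
```

The coupling term $k_v\left(z - w\right)$ appears in both equations with opposite roles, which is what lets the vehicle's vertical acceleration response carry the bridge's modal content in drive-by identification.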

F I G U R E 2 The vertical displacement, velocity, and acceleration response of the sprung mass and the bridge midpoint.

F I G U R E 3 The frequency response function (FRF) of the vertical acceleration response for the bridge midpoint and the sprung mass.

F I G U R E 4 Schematic of the proposed In-Fleet structural health monitoring system.

F I G U R E 5 The bridges and the roads of the target monitoring zone for the In-Fleet monitoring simulation.

F I G U R E 8 The (a) OSM map and (b) PGM format map of the target area, with the latitude and longitude coordinates of 1.577221 and 103.657648 at the center. In the PGM format, the unnecessary information is removed.

F I G U R E 9 The Transform ('tf') frame diagram of the workspace.

F I G U R E 10 The 'rqt' diagram of the simulated CAV.

TA B L E 1 The influential parameters in drive-by structural health monitoring.
Columns: Vehicle; Damage feature; Road segment; Road profile; Speed; Position; Mechanical parameters; Mass and number; Numerous runs; Mass ratio; Coupling effect; Author/Ref.