Accelerated Augmented Reality Holographic 4K Video Projections Based on LiDAR Point Clouds for Automotive Head-Up Displays

Identifying road obstacles hidden from the driver's field of view can improve road safety in transportation. Current driver assistance systems such as 2D head-up displays are limited to the projection area on the windshield of the car. An augmented reality holographic point cloud video projection system is developed to display objects aligned with real-life objects in size and distance within the driver's field of view. Light Detection and Ranging (LiDAR) point cloud data collected with a 3D laser scanner are transformed into layered 3D replay field objects consisting of 400 k points. GPU-accelerated computing generated real-time holograms 16.6 times faster than CPU processing. The holographic projections are obtained with a Spatial Light Modulator (SLM) (3840×2160 px) and virtual Fresnel lenses, which enlarge the driver's eye box to 25 mm × 36 mm. Real-time scanned road obstacles rendered from different perspectives provide the driver with a full view of risk factors, including generated depth in 3D mode and the ability to project any scanned object from different angles across 360°. The 3D holographic projection technology allows the driver's focus to remain on the road instead of the windshield and enables assistance by projecting road obstacles hidden from the driver's field of view.


Introduction
Around 16 000 people lose their lives every day in traffic accidents due to human error worldwide.[1] The connected and autonomous vehicles market value is expected to grow from $1.6 billion in 2022 to $11.0 billion in 2028.[2] Connected vehicle technologies based on point clouds can enhance obstacle and traffic detection mechanisms due to real-time traffic data availability in different transportation scenarios.[4] In 2017, Germany introduced the Act on Automated Driving, and in 2018, the UK and California followed with regulations that allow for deploying autonomous vehicles on public roads for testing purposes.[5] The vehicle-to-everything (V2X) technologies such as vehicle-to-vehicle and vehicle-to-roadside infrastructure become increasingly significant in terms of accuracy to ensure safety and security on public roads.[6] LiDAR constitutes a pulsed light source that illuminates a chosen 3D object; the reflected light pulses are then measured via the return time, Time of Flight (TOF), to calculate the 3D object distance.[7] Microelectromechanical Systems (MEMS)-based Light Detection and Ranging (LiDAR) platforms were developed to promote safety and security during transportation.[8] This technology is classified into different categories such as ground-based, spaceborne, and airborne LiDAR.[9] Autonomous vehicles (Level 4 of Driving Automation, SAE International Standard J3016) rely on vision systems that combine sensing and data processing techniques. Currently, Uber, Waymo, BMW, and Toyota predominantly utilize MEMS LiDAR for generating an accurate 3D map of the surroundings within 100 m.[10] Large-scale MEMS-scanning mirrors have been developed for long-range (100 m) LiDAR systems. Due to resolution limitations, current LiDAR sensors do not meet the safety standards (ISO/TS 19159) for Level 4 autonomous vehicles on public roads.[11] Augmented reality optical systems can be utilized in a plethora of real-world contexts, including education, infotainment, and surgical operations, by providing an augmented view with virtual objects.[12] Self-adapting holographic devices can automatically adjust to their augmented reality environment settings, such as changing the focal distance and size of the hologram to align with real-life objects.[13] Such self-adapting holographic devices comprise a display device with an associated processor and memory. A synthesized panorama augmented reality Head-up Display (HUD) based on community contributions such as media data has been developed to promote the efficiency of in-car navigation.[14] Social media data such as photographs time-tagged with geo-referenced locations have been incorporated into the HUD system.[15] A mobile device displayed a real scene on the HUD with a generated mixed-reality experience to improve navigation on roads. Recently, point cloud data have been utilized in 3D surface reconstruction using Cylindrical Millimeter-Wave (MMW) holography based on an active array-based radar imaging mechanism.[16] Compared to conventional synthetic aperture radar (SAR) systems, MMW system antennas span a wide beamwidth, leading to a relatively large aperture range of 60°. Hence, a wideband signal can be achieved, together with Ultra-High Definition (UHD) 3D image projection.[17]
Compared to optical metrology systems, the MMW method is independent of ambient light conditions and reasonably penetrates through non-metallic materials and objects such as people on the street and trees. Frequency interferometry techniques (32.5-37.5 GHz) were introduced to reconstruct the depth map of the target objects under planar geometry.[17] The applications of AR HUDs for safety purposes include: (i) hidden road obstacle warning systems, (ii) collision warning systems, and (iii) communication with other vehicles and roadside infrastructure to prevent accidents and improve road traffic. V2X based on LiDAR point cloud data storage and transfer relies on sharing accurate emergency road obstacle information, which in turn rests on non-trivial computational processes. The computational algorithms include encompassing the 3D LiDAR object, processing these data to create an AR hologram from the point cloud points, and projecting the hologram into the driver's eyes. Hence, it is highly desirable to integrate computing features into the roadside infrastructure and improve V2V communications for a reliable propagation of emergency information.
Time-accelerating algorithms have been proposed for creating Computer-Generated Holograms (CGHs).[18] Point cloud and polygon methods can be used to characterize 3D object reconstruction. The point cloud method consists of a summation of self-illuminated points, which assemble into a 3D object.[18a,19] In the first step, the superposition of light waves emitted from a point cloud within a virtual plane is calculated; in the second step, a diffraction calculation is elaborated to proceed from the virtual plane to the hologram plane.[20] Look-up table methods have been developed to collect the light waves, which are stored and summed to generate the resulting CGH. The memory needed for look-up tables presents a challenge and has been tackled by introducing a radial symmetric interpolation to compress the tables by 5-10% relative to the original tables.[21] Another method is the recurrence relation, which uses relations such as the Taylor expansion to reduce the number of cosine computations in the CGH equation, and hence the overall computation time.[22] This method does not directly calculate the optical path but compares the CGHs with conventional equations. The recurrence relation method calculates a CGH up to 8 times faster than conventional CGH formulae on a CPU and GPU. Other methods include the polygon-based method consisting of small facets, which introduces novel ways of calculating tilted polygons with reference to a CGH.[23] A computational method was developed to achieve 3D transformation from an arbitrary triangle through 3D rotation and 3D affine transformation, followed by the overall transformation as the product of a rotation matrix and a 2D affine matrix.[24] Other methods constitute Fresnel diffraction and the angular spectrum method, which can be further accelerated to generate the CGH by Fast Fourier Transform (FFT)-implemented convolution.[25] There are limitations to these diffraction calculation methods, as the sampling rate of the source plane must equal the destination plane sampling rate due to the FFT method. Other studies proposed double-step Fresnel diffraction to extend the scale of the calculation for generating large CGHs with 8K × 4K px.[26] Others achieved shifted-Fresnel diffraction, which divides layers to the CGH plane with variable sampling rates. This method provides scaling of the replay field results without the necessity of optical zoom.[27]
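As an illustration of the FFT-based diffraction calculations discussed above, the following is a minimal MATLAB sketch of angular-spectrum propagation between parallel planes; the parameter values mirror the setup described later in this work, and the source field is a placeholder rather than any published implementation.

```matlab
% Minimal angular-spectrum (FFT-convolution) propagation sketch.
% Assumed parameters; U0 is a placeholder source field, not real data.
lambda = 632.8e-9;              % He-Ne wavelength [m]
p      = 3.74e-6;               % SLM pixel pitch [m]
z      = 0.5;                   % propagation distance [m]
Nx = 3840; Ny = 2160;           % 4K sampling grid

fx = (-Nx/2:Nx/2-1) / (Nx*p);   % spatial frequency axes [1/m]
fy = (-Ny/2:Ny/2-1) / (Ny*p);
[FX, FY] = meshgrid(fx, fy);

% Transfer function of free space (evanescent components suppressed)
H = exp(1i*2*pi*z*sqrt(max(0, 1/lambda^2 - FX.^2 - FY.^2)));

U0 = ones(Ny, Nx);                                  % placeholder source plane
Uz = ifft2(ifftshift(fftshift(fft2(U0)) .* H));     % field after distance z
```

In this formulation the source and destination planes share the same sampling rate, which is the limitation noted above that double-step and shifted-Fresnel approaches were designed to relax.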
Here, a 4K holographic setup was developed to display LiDAR-derived point-cloud objects in 360° video mode for accurate obstacle detection and driver alerting. The concept of projecting a 360° obstacle assessment for drivers stemmed from meticulous data processing, ensuring clear visibility of each object's depth. While more data collection from diverse locations enhances accuracy, our study's unique contribution lies in enabling a 360° view by judiciously choosing data points from single scans of specific objects (e.g., trucks, buildings). This approach, enabling a comprehensive assessment of road hazards, is an innovative addition to head-up display research, addressing a critical need for driver safety. Furthermore, this work demonstrates the GPU-augmented acceleration of a 3D point cloud method capable of integrating chosen LiDAR 4K 3D objects into the driver's field of view in real time. A terrestrial LiDAR scanner was used to enhance the driver's vision of hidden obstacles on public roads. The work addresses the current challenge for drivers utilizing head-up displays for infotainment purposes projected with a 2D projector onto a small area of the windshield. This forces the driver to shift the gaze from the road onto the windshield, causing possible distractions from the main task of driving. The methods and results described in this work focus on current driver assistance systems to support navigation and decrease road accidents. The novelty of this work is threefold: (1) the augmented reality aspect, whereby the road obstacles are holographic projections with depth, aligned with real-life objects in size and distance, to render the driver's field of view as natural as possible relative to the road environment; (2) the 360° LiDAR obstacle extraction from the collected point cloud data sets, with accelerated GPU parallel processing allowing a processing time of 4.8 s for 25 points, and a full rotation of the obstacle so that the driver can assess the full width and size of the obstacle and react accordingly; and (3) the point cloud collection and extraction of obstacles from a vast point cloud scenery, together with storage and vehicle-to-vehicle sharing concepts, so that drivers in the hazard area would be automatically alerted.

Results
A 3D coordinate terrestrial laser scanner was utilized to collect the LiDAR data. The laser scanner had a reference beam wavelength of 1550 nm, beam divergence of 0.35 mrad, a measurement range of 600 m, a measurement rate of 122 kHz, and a range accuracy of 5 mm with a repeatability of 3 mm. A public road (Malet Street, London, UK) was surveyed at 11 different locations with the LiDAR scanner in upright and tilted positions. The gathered point cloud data sets contained a manifold of hidden obstacles that could be displayed to alert the driver on the head-up display. Five different major obstacles were chosen to demonstrate the concept of extracting hidden road objects. The criteria used to choose these objects were that they were located at different places, measured different sizes, and contained different numbers of points. The algorithm could extract other hidden road obstacles; however, demonstrating them would have been redundant because they had similar sizes or numbers of points. Figure 1a illustrates the bird's-eye view of Malet Street where the objects were scanned. The data were converted into x, y, and z point cloud data, which were post-processed to produce a co-registered point cloud in an arbitrary coordinate system. An open-source Python library was used to perform material separation, the identification of geometric features, and structural analysis. In the data set, each object was evaluated to determine visible 3D road obstacles and objects hidden from the scanning point of view. 3D images were selected for different data visualization scenarios to evaluate the advantages of LiDAR 3D point cloud data for drivers. Figure 1b-f illustrates LiDAR reflectance images of (i) a tree in front of a road object, (ii) bicycle racks with pedestrians standing nearby, (iii-iv) a truck parked on the side of the road, and (v) a building. Each point cloud object was scaled to a specific size to match the real-life objects for projection.
The display technology used in this work is Liquid Crystal on Silicon (LCoS). The working principle of the SLM is to control the alignment of the birefringent liquid crystal molecules by changing the direction and strength of the applied electric field, thereby setting the phase shift and retardance of each pixel (Figure 2a). The device consists of a liquid crystal layer that is sandwiched between a conductive ITO layer and a reflective coating. The pixels to be displayed are organized as individual Al electrodes. An electric field is formed through the application of voltage to the system, which aligns the liquid crystal molecules according to the strength of the electric field. The retardance or phase shift of each pixel depends on the alignment pattern of the liquid crystals. Figure 2b shows the layout and configuration of the SLM consisting of HDMI/USB inputs, a driver board, and a 4K LCoS panel (inset). A CGH setup was devised to project floating 3D replay field images of the LiDAR point clouds (Figure 2c). The optical setup employed a 4K SLM (3840×2160 px) and a He-Ne laser (λ = 632.8 nm, 5 mW). The laser beam propagates through an aspheric lens (L1, f = 3.30 mm, NA = 0.47), an achromatic doublet lens (L2, f = 75 mm), two linear polarizers (P1/2, 38% transmission) set at 45°, a polymer zero-order half-wave plate (HW), and a non-polarizing beam splitter (BS, 50:50 split, 30 mm), and is subsequently projected onto the panel of the 4K SLM (Figure 2d). Image properties such as precision, accuracy, and sharpness are characterized by the modulation transfer function (MTF) based on the spatial frequency response of the optical system to a given illumination. The higher the spatial frequencies, the more detailed the images appear in the replay field. The effective optical system resolution was determined by utilizing defined test charts. The resolution was determined by the distance between the smallest group of discriminable details. To ensure the accuracy of the UHD replay field results, calibration tests were carried out by projecting test targets: lines, checkerboard, Secchi disk, yin yang, and Siemens star within the field of view and measuring the intensity at each position (Figure 2e-h). Figure 2e illustrates the original resolution charts of the targets, where the dashed lines represent the analyzed projection areas. The targets consist of Lines, Checkerboard, Secchi Disk, Yin Yang, and Siemens Star. Each resolution pattern has a specific purpose in assessing the optical system resolution: the Lines target contains a positive set of five vertical lines with a frequency ranging from 1-10 line pairs per cm. The resolution of the optical system was determined by identifying the highest resolvable frequency line set, which was 5 line pairs per cm. The spacing between the 5 lines in the resolution target is equal to the thickness of the lines. The clarity of the vertical lines determined the resolution of the optical system. This resolution target was utilized to evaluate the field distortion, the contrast, the parfocal stability, and the overall resolution of the optical system. This test target and the Checkerboard allowed for an evaluation of the contrast and resolution between lines by plotting the intensity over distance. All three masks were evaluated by superposition, and the results show little noise, clear overlaps, and peaks, demonstrating high contrast and accuracy. The Secchi Disk, Yin Yang, and Siemens Star allowed for testing focus errors, astigmatism, and other aberrations. The Secchi Disk and Siemens Star both show a clear focusing point of the optical system in the middle of the target without astigmatism. The Siemens Star reached a resolution of 16 bars over 360° with an outer star diameter of 30 mm and an inner diameter of 600 μm.
Figure 2f shows the CGHs, which were run through the generated 3D algorithm to be projected in the optical system. The obtained replay field results represent the final calibration results from the generated optical system (Figure 2g). The intensity data obtained (dashed lines) showed that the test targets provided high accuracy and contrast of the 3D projected images (Figure 2h), as the lines match.
CGHs can be calculated from point cloud data using approaches such as wavefront recording or direct calculation methods. The CGHs were calculated from 3D object data points collected with the terrestrial LiDAR scanner. The 3D virtual LiDAR object is constituted of points 0 to N of emitted light from the scanner. A global coordinate system was arranged such that the k-th point of emitted light is located at (x_k, y_k, z_k). The distribution of the 3D object light was defined as O(x_o, y_o). The calculation process to achieve the CGH was a summation of the propagating light from all points of emitted light within the defined system boundaries. This summation can be expressed as

O(x_o, y_o) = Σ_{k=0}^{N} A_k exp(i 2π r_k / λ)    (1)

where λ is the wavelength of the He-Ne laser beam (633 nm, 1.2 mW), A_k is the amplitude of the k-th point of emitted light, and r_k is the distance between the k-th object point and the hologram-plane coordinate (x_o, y_o) given by Equation (2).[28] To create the CGH, a phase-only SLM based on the kinoform technique was utilized.[29] The SLM had a phase range of 2π. The Liquid Crystal on Silicon (LCoS) SLM used a reflective coating and a panel resolution of 3840×2160 px. Equation (2) was approximated according to Fresnel approximation theory.[30] As the SLM used was phase-only, the CGH was calculated by taking the phase of the complex amplitude from Equation (1):

φ(x_o, y_o) = arctan(ℑ{O(x_o, y_o)} / ℜ{O(x_o, y_o)})    (3)

where ℜ{O(x_o, y_o)} and ℑ{O(x_o, y_o)} are, respectively, the real and imaginary parts of the complex expression in Equation (3). The amplitude of the CGH was calculated from the interference pattern between the object beam and the reference beam,
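A minimal MATLAB sketch of this point-source summation under the Fresnel approximation is given below; the point coordinates (xk, yk, zk) and amplitudes Ak are assumed to be vectors already extracted from the LiDAR object, and the loop is written for clarity rather than speed.

```matlab
% Sketch of the point-cloud CGH summation (Fresnel approximation).
% xk, yk, zk, Ak are assumed N-element vectors from the extracted object.
lambda = 632.8e-9;                     % He-Ne wavelength [m]
p      = 3.74e-6;                      % SLM pixel pitch [m]
Nx = 3840; Ny = 2160;
xo = ((1:Nx) - Nx/2) * p;              % hologram-plane coordinates [m]
yo = ((1:Ny) - Ny/2) * p;
[XO, YO] = meshgrid(xo, yo);

O = zeros(Ny, Nx);                     % complex object field on the SLM
for k = 1:numel(xk)
    % Fresnel phase contributed by the k-th self-luminous point
    rk = ((XO - xk(k)).^2 + (YO - yk(k)).^2) / (2*zk(k));
    O  = O + Ak(k) * exp(1i * 2*pi/lambda * rk);
end

phaseCGH = mod(atan2(imag(O), real(O)), 2*pi);   % phase-only (kinoform) CGH
```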
where the reference light is defined by its amplitude A_k and phase ϕ_k(x_o, y_o). The intensity of the resulting interference pattern can be calculated as

I(x_o, y_o) = (O + R)(O + R)*

where * represents the complex conjugate term; this expression can be further simplified into the sum of the individual intensity terms and the interference cross terms, |O|² + |R|² + OR* + O*R. The CGH algorithm was run on both the CPU and GPU on six different machines. The computation speeds of the CPUs and GPUs in generating replay field results were compared. Traditionally, the GPU has been applied to computer graphics processes.[31] A previously introduced method of General-Purpose Computing on Graphics Processing Units (GPGPU) enabled shifting numerical computations usually carried out by the CPU to the GPU.[32] The parallel architecture of the GPU outperforms the CPU in general-purpose computation in terms of speed.[33] To run the algorithm on the GPU, a MATLAB toolbox for parallel computing was integrated into the algorithm. This toolbox operates on the Nvidia Compute Unified Device Architecture (CUDA) parallel computing platform and model.[34] The programming model of CUDA involves the CPU in the role of the host and several GPUs as devices. A kernel function was created elaborating a single thread, which was later invoked with a larger number of threads from the host on one of the devices in the CUDA model. All threads were processed in parallel on several cores of the GPU.
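A hedged sketch of how the same summation can be shifted onto the GPU with MATLAB's parallel computing support is shown below; element-wise operations on gpuArray data are executed as kernels on the CUDA device, and gather copies the result back to the host. Variable names follow the sketch above and are assumptions, not the authors' code.

```matlab
% Sketch of the per-pixel CGH summation evaluated on the GPU.
% XO, YO, xk, yk, zk, Ak are assumed from the previous sketch.
XOg = gpuArray(XO);  YOg = gpuArray(YO);
Og  = zeros(Ny, Nx, 'gpuArray');           % accumulate on the device

for k = 1:numel(xk)
    Og = Og + Ak(k) * exp(1i * (pi/(lambda*zk(k))) ...
                           * ((XOg - xk(k)).^2 + (YOg - yk(k)).^2));
end

phaseCGH = gather(mod(angle(Og), 2*pi));   % copy the kinoform back to the host
```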
In the first step, the hologram data based on the LiDAR point clouds were sent from the host to the device by invoking a kernel to calculate the transfer functions. For reconstruction through optimization, a Fresnel CGH algorithm was developed to directly view the replay field results focused at infinity. This approach was based on an advanced Gerchberg-Saxton algorithm for phase retrieval. The replay field results created floating replay field reconstructions. The Gerchberg-Saxton approach was based on a chosen phase map. The phase of the SLM was determined to be between 0 and 2π. This limited the number of gratings within the bandwidth of the optical system. The phase map was applied to the original image in an iterative approach using the square root of the intensity map of the pixels. This method was chosen due to its simplicity and high scalability. After the hologram generation step, the solution was obtained with a transfer from the device memory back to the host system. The kernels for the transfer function and the optimization step were carried out in parallel, as the kernel operations are vector operations. In vector operations, each element is processed independently by a single thread.[35] Figure 3a illustrates the 3D CGH process that allows for the generation of the virtual Fresnel lens to recreate each point from the extracted object in the replay field, matching the distance and size of the real-life object. To map the boundaries of the produced field of view, the panel resolution of the SLM is shown in a schematic with virtual Fresnel lenses of different focal lengths (f = 50 mm, f = 75 mm) in Figure 3b. The optical setup was reduced in size by two optical lenses due to the virtual Fresnel lenses, and the field of view was enlarged with holographic 360° video projections. Figure 3c shows the 3D object rotations performed on the 3D LiDAR-extracted tree at 0° and at 30° to show the full depth of each obstacle for the driver to fully assess the situation. Figure 3d illustrates the 3D point cloud object extraction and the collected object depth information with the intensity profile. The intensity was integrated into the algorithm to be able to project 3D object information with depth and brightness control for personalized HUD layouts. Each point is assigned a different blue intensity, which incorporates details of the x, y, and z coordinates in the point cloud. This information is utilized to recreate a 3D object with depth and to rotate the object around any axis. For example, the truck was rotated by 30° with the depth information maintained in the replay field result.
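The following is a minimal sketch of a Gerchberg-Saxton phase-retrieval loop of the kind described above, written for a Fourier (Fraunhofer) replay geometry; 'target' is an assumed intensity image, and the iteration count and random initial phase map are illustrative choices rather than the authors' settings.

```matlab
% Minimal Gerchberg-Saxton sketch: iterate between the hologram plane
% (phase-only constraint) and the replay field (target amplitude).
amp   = sqrt(double(target));             % target amplitude = sqrt(intensity)
phase = 2*pi*rand(size(target));          % chosen (random) initial phase map

for it = 1:50
    field  = amp .* exp(1i*phase);        % enforce the target amplitude
    holo   = ifft2(ifftshift(field));     % back-propagate to the hologram plane
    holo   = exp(1i*angle(holo));         % keep phase only (unit-amplitude SLM)
    replay = fftshift(fft2(holo));        % propagate forward to the replay field
    phase  = angle(replay);               % retain the retrieved phase and repeat
end

cgh = mod(angle(holo), 2*pi);             % final kinoform displayed on the SLM
```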
A point-addition model was developed to assess the accuracy, speed, brightness, and position of the chosen 3D LiDAR object for HUD applications. A GPU-accelerated point-based method can be utilized to display the replay field results. The silhouette of the chosen objects can be visualized with as few as 100 points, so it might not be necessary to compute 400 k points to achieve 4K resolution for alerting the driver of an upcoming obstacle (Figure 4). This approach allows for rapid computation so that obstacles can be projected in real time to alert the driver. Hence, a trade-off between speed and image accuracy can be made to judge the number of points needed for assessing the obstacle. 100 points showed the silhouette of the obstacles, such as the tree and the back of a truck displayed in Figure 4, and a maximum of 400 k points achieved 4K image resolution accuracy in the replay field results. 10 k points generated an adequate replay field result for the application of displaying hazards in the driver's field of view in real time. Figure 4 shows the replay field results for two chosen hidden obstacles, a tree and a truck viewed from the back, rendered with 100 points up to 400 k points.
To assess a situation of danger while driving, a 360° view of the obstacle can be of importance to the driver. The advantage of the LiDAR point cloud data is not only hidden road obstacle scanning and detection but also a 360° view of all scanned data, including the potential road obstacles. General 3D rotation matrices about the x-, y-, and z-axes can be expressed as[36]

R_x(θ) = [1 0 0; 0 cos θ −sin θ; 0 sin θ cos θ],
R_y(θ) = [cos θ 0 sin θ; 0 1 0; −sin θ 0 cos θ],
R_z(θ) = [cos θ −sin θ 0; sin θ cos θ 0; 0 0 1]

These 3D rotation matrices were integrated into the algorithm to perform a 360° rotation of a LiDAR object. A general 3D rotation was performed with the generated algorithm and the replay field results. The rotations were performed in radians, which were later converted into degrees. Only the rotation of the coordinate frame around the y-axis by θ was performed to create a natural perspective toward the viewer. A 360° rotation was performed to recreate the full depth and perception for the driver to estimate any obstacle on the road with its exact dimensions. Figure 5 illustrates the LiDAR object rotation concept for all-round obstacle viewing. Figure 5a shows the rotations of the LiDAR point object with 300 k points extracted from location d in Figure 1a. The computational rotations of a truck in 3D are shown. Figure 5b shows the holographic computational results of the respective rotations obtained in Figure 5a. Figure 5c shows the replay field results that will be displayed in the HUD to the driver as 360° all-round viewing obstacle identifications. Figure 5d illustrates the rotations of the LiDAR point cloud object with 400 k points extracted from location b in Figure 1a. The computational rotations of a detailed tree are shown. Figure 5e shows the holographic computational results of the respective rotations obtained in Figure 5d. In Figure 5f, the replay field results of the rotated 3D object are shown. This method can aid in understanding the full dimensions of any obstacle for the driver under any weather condition. The depth perception of the simulated MATLAB images is depicted within the generated holograms. The UHD replay field projections with 300 k and 400 k points were successfully obtained with GPU acceleration on an NVIDIA GeForce GTX 1650 Max-Q 4GB GDDR5 graphics card.
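A short sketch of how such a rotation can be applied to an extracted obstacle before hologram generation is shown below; pts is an assumed N-by-3 array of [x y z] coordinates, and only the y-axis rotation used in this work is shown.

```matlab
% Rotate an N-by-3 LiDAR point cloud about the y-axis by theta (radians).
% pts is an assumed array of [x y z] coordinates of the extracted obstacle.
theta = deg2rad(30);                       % e.g., the 30 degree truck rotation
Ry = [ cos(theta) 0 sin(theta);
       0          1 0;
      -sin(theta) 0 cos(theta)];
ptsRot = (Ry * pts.').';                   % apply the rotation to every point
```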
LiDAR point clouds (400 k points) were analyzed to create a replay field result. The analysis of the point clouds was accelerated via parallel computing on the GPU and compared to sequential code run on the CPU. Due to the parallel architecture of the CUDA model, the average time for processing one point should decrease with an increasing number of points (Figure 6a-d). The highest number of cores used in this work was 16, and this 16-core machine achieved the highest processing speed (Figure 6a-c). Processing became more efficient as the number of points increased. Consequently, the longest processing times were associated with the 4-core machines (i7-8665U and i7-1185G7). The speed performance was evaluated with the tic and toc commands to obtain the time elapsed in generating the holograms. The holographic LiDAR projections offer system operability with high-luminance images in the replay field for automotive head-up displays. The CPU processors used to generate Figure 6a,b were the i7-8665U, i9-10885H, i7-1185G7, Ryzen 7 5800H (RTX 3070 system), i7-1280P, and i9-12900H. For Figure 6c,d, the GPUs were compared according to CUDA cores and processing speed. The following GPUs were utilized: Quadro P520, Quadro RTX 5000, Quadro T500, RTX A5000, and RTX A5500. All running times were compared equally between the different machines. The approximated curves show behavior resembling the logistic equation. Figure 6c,d shows that the GPUs outperform the CPUs by approximately a factor of 2.
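A hedged sketch of the tic/toc timing comparison is given below; generateCGH is a hypothetical stand-in for the point-cloud hologram routine sketched earlier, and wait(gpuDevice) is used so that asynchronous GPU kernels are included in the measured time.

```matlab
% Benchmarking sketch comparing the CPU and gpuArray execution paths.
% generateCGH is a hypothetical wrapper around the summation sketched above.
tCPU    = tic;
cghCPU  = generateCGH(xk, yk, zk, Ak, Nx, Ny);           % sequential CPU run
timeCPU = toc(tCPU);

tGPU    = tic;
cghGPU  = generateCGH(gpuArray(xk), gpuArray(yk), ...
                      gpuArray(zk), gpuArray(Ak), Nx, Ny);
wait(gpuDevice);                                         % ensure kernels finished
timeGPU = toc(tGPU);

fprintf('CPU: %.2f s, GPU: %.2f s, speed-up: %.1fx\n', ...
        timeCPU, timeGPU, timeCPU/timeGPU);
```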
By incorporating virtual Fresnel lenses, our system gained real-time flexibility. The focal length adjustments, achievable via MATLAB algorithms, facilitated dynamic display distances. This operational adaptability reduced the system size and enhanced real-time functionality, making virtual lenses a practical choice for our head-up display. The method of holographic projection utilizing a Spatial Light Modulator (SLM) (3840×2160 px) and virtual Fresnel lenses to achieve an eye box size of 25 mm × 36 mm was introduced to display multiple layers of holographic 3D projections. This was achieved by adding virtual concave Fresnel lenses at varying focal lengths. The method focused on the augmented reality part of the head-up display and was able to align the virtual objects in size and distance with real-life objects on the road. With one or two objects, the eye box was enlarged with virtual concave Fresnel lenses to 52 mm × 75 mm. Projecting more than two layers of hidden obstacles resulted in a smaller eye box of 25 mm × 36 mm.
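A minimal sketch of adding such a virtual Fresnel lens to a computed phase-only hologram is shown below; the quadratic thin-lens phase is a standard assumption, the sign convention for the concave lens is illustrative, and phaseCGH, XO, YO, and lambda follow the earlier sketches.

```matlab
% Add a virtual Fresnel (thin) lens phase to an existing kinoform so the
% replay layer appears at a chosen depth; f and the sign are illustrative.
f = 75e-3;                                          % virtual focal length [m]
lensPhase     = -pi/(lambda*f) * (XO.^2 + YO.^2);   % quadratic thin-lens phase
phaseWithLens = mod(phaseCGH + lensPhase, 2*pi);    % combined phase-only CGH
```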
The wavelength of the He-Ne laser (632.8 nm) and the pixel pitch (3.74 μm) of the SLM determine the maximum FOV of 6.856°. However, the 360° rotation of the LiDAR obstacle from a single scan was introduced to overcome the narrow field of view and allow the driver to fully assess the obstacle from all angles.

Discussion
The present study focused on a 3D point cloud method capable of extracting chosen 3D objects from a vast point cloud and projecting those objects in 4K and 360° into the driver's field of view in real time. In addition, this work presented a method of monitoring the brightness of each pixel in the replay field result to accurately reproduce the shadows and occlusion of the 3D object. The work demonstrated real-time 3D LiDAR object projection with GPU parallel processing, accurately rendering 100 points in 12 s. When the number of points was increased, the parallel processing algorithm became more efficient and the time per point decreased compared to single-point processing. Hence, the algorithm processed 1000 points in 26.02 s and 10 000 points in 243.24 s. The number of points needed to explicitly visualize and assess a hidden obstacle, as presented in Figure 4, is 1000 points. For 4K resolution, 400 000 points were needed. Additionally, the density of point cloud points was controlled at all times and could be reduced to project a silhouette of the obstacle in real time for the driver. However, to alert the driver in real time, a silhouette of the hidden obstacle could be shown first, containing just 25 points rendered in 4.8 s from a distance; when coming closer to the obstacle, the full 3D holographic projection rotated through 360° would be shown in 26.02 s, consisting of 1000 points. Other researchers have proposed adaptive point cloud scanning methods to improve the resolution of the objects of interest and decrease the resolution of the background while scanning the scene.[37] The deployment of LiDAR point clouds enables continuous improvement in accuracy and 3D object assessment in navigation systems. This approach allows for the real-time implementation of LiDAR point cloud objects in AR mode for HUD enhancement. Such point cloud methods could be integrated with the interactive urban environment to monitor traffic security and provide a basis for autonomous navigation.[38] Previous studies have explored holographic reconstruction from point cloud data by generating an algorithm to display 3D replay field images at a chosen position of up to 10 cm.[39] This has been achieved through a holographic optical element with an off-axis concave mirror function. Such near-eye display techniques were achieved by introducing an interactive holographic system capable of drawing and erasing 3D images in real time.[40] Additionally, holograms could be generated from point clouds with occlusion by integrating the Phong illumination model.[41] Hence, reflections and shadows were generated within the replay field results. Full-color holographic replay field results have been achieved by utilizing GPU acceleration and relocating point cloud grids, with a replay field resolution of 1080×1080 px.[42] The point cloud data can be stored and used for traffic warning scenarios within smart cities. Through the LiDAR point cloud method, hidden road obstacles, such as a cyclist behind a truck or a tree covering a street sign, can be projected into the driver's field of view in 360° to reduce traffic accidents. The computational methods of controlling every single pixel's brightness, the density control of points, and the full object rotation were introduced in this work. However, the LiDAR scanner could be improved in future research with a two-axis scanner using liquid crystal control, in which the scanning ranges are 360° in the horizontal direction and 10° in the vertical direction.[43]
The beam spread angle of such a scanner is 0.3° × 0.8°, operating vertically at up to 100 Hz and rotationally between 0.8° and 3.5°. The truck and the building objects from Figure 1 were collected with a single scan only, which makes the 360° view possible on demand within a car setting and a valuable addition to the assessment of hazards on the road. LiDAR point cloud 3D scene recognition has been explored by various research groups,[44] and others have focused on the rotation-invariant aspect with neural network integration.[45] Future research based on LiDAR point clouds could explore environmental 3D geometric information recognition and the integration of rotation-invariant neural networks with GPU acceleration for accurate scene recognition and sharing options.
The full 3D 4K accuracy in the replay field was obtained when projecting an obstacle with 400 k points. The adjustment from 100 points to 400 k points is critical for different applications such as driver safety. Fewer than 400 k points are needed for the driver to fully assess an obstacle. In terms of point cloud sampling applications, other researchers have introduced non-uniformly sampled 2D images that were processed into 3D color holographic images based on hologram generation, wavefront recording planes, and depth grid generation. For an object made from 1 162 890 points with 6000 depth grids, a total running time of 366.838 s was required on a CPU. The hologram resolution achieved was 1080 × 1080 px at a pixel size of 7.4 μm.[46] Other researchers focused on the acceleration aspect of processing point cloud points by utilizing oriented-separable convolution with the wavefront-recording plane (WRP) method and recurrence formulae on FPGAs instead of CPU and GPU methods.[47] This proposed method improved time efficiency, but only for static applications, as the running times exceed real-time requirements; e.g., a butterfly of 4 110 805 points was processed with the proposed method in 271 s compared to 313 s with the established layer-based method. However, for other road safety applications such as dynamic navigation systems, the 4K resolution is critical. Future studies could incorporate the redistribution of luminous flux to individual points from point clouds. In other studies, an eye-safe replay field luminance of 300 k cd m−2 has been achieved.[48] This approach will lead to mimicking real-world scenery for a natural immersive experience. Another important aspect for future work is replay field results that are free of visual fatigue for the user and free from the accommodation-vergence conflict.[49] Future research could involve driver gesture recognition offering a flexible design of the FOV and point cloud point density in the Fourier region based on single-layer metasurfaces.[50] A further acceleration of the algorithm could be beneficial for real-time traffic monitoring. This could be achieved by improving the logical architecture of the algorithm. Another area of focus could be point cloud data storage and on-demand sharing with the interactive urban environment to inform drivers of different traffic scenarios in advance. This work demonstrated real-time 4K holographic augmented reality video HUD projections extracted from scanned LiDAR point cloud data.

Experimental Section
LiDAR Data Acquisition: A RIEGL VZ-400 (RIEGL Laser Measurement Systems GmbH, Austria) was utilized for LiDAR data collection. The scanner has a wavelength of 1550 nm, a beam divergence of 0.35 mrad, and a measuring range of around 600 m. The LiDAR data were obtained by scanning Malet Street in London. Data were post-processed in RiSCAN Pro (RIEGL Laser Measurement Systems GmbH) to produce a co-registered point cloud in an arbitrary coordinate system. The objects on the sidewalks of the scanned street were processed using separation algorithms containing four different filters.
Algorithms and Hologram Generation: The obtained augmented reality holographic image projections required the capability to map each point of the point cloud data in x, y, and z coordinates. The number of points and the brightness of each pixel were controlled to recreate the real-life LiDAR objects. Each extracted point cloud object identified as a potential hazard varied in its number of points. The objects with the maximum number of points were the tree and the truck, with 400 k points each. An algorithm was generated to extract the point cloud points of the chosen hazardous object. The algorithm extracted a pre-programmed number of points from the total amount of point cloud points collected from the position utilized to collect the LiDAR data. After the 3D object was extracted from the overall data, it was mapped in MATLAB (R2022b, MathWorks) in x, y, and z coordinates with a controlled number of points (nk = 100, 1000, 10 000, 100 000, and 400 000). The algorithm was run at each number of points to obtain the processing time with the CPU and with GPU parallel processing.
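A short sketch of controlling the number of projected points by random down-sampling is shown below; objectPts is an assumed N-by-3 array of the extracted obstacle, and the random selection is one simple choice rather than the authors' extraction algorithm.

```matlab
% Down-sample an extracted obstacle to a controlled number of points (nk).
% objectPts is an assumed N-by-3 array of [x y z] coordinates.
nk        = 10000;                                       % e.g., 100, 1e3, ..., 4e5
idx       = randperm(size(objectPts, 1), min(nk, size(objectPts, 1)));
subsetPts = objectPts(idx, :);                           % reduced point set to project
```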
The processors used were the Intel Core i7-8665U @ 1.90 GHz (2.11 GHz) with 48 GB RAM, the Intel Core i9-10885H @ 2.40 GHz with 64 GB RAM, the Intel Core i7-1185G7 @ 3.00 GHz with 48 GB RAM, the RTX 3070 system with an AMD Ryzen 7 5800H processor @ 3.20 GHz (up to 4.40 GHz Max Boost, 8 cores), an Nvidia RTX A5000 24 GB GDDR6 graphics card, the Intel Core i7-1280P @ 1.8 GHz with 48 GB RAM, and the Intel Core i9-12900HX @ 5 GHz. All running times were compared equally between the different machines. The machine with the highest number of cores (16) performed most efficiently, and the machines with the lowest number of cores (4) performed least efficiently.
The use of GPU cores in parallel with CUDA allowed all calculations performed to bypass the queue in the graphics pipeline. The GPUs used in this study all had more cores than the CPUs; hence, the parallel performance of the GPUs provided improved computation times. Every GPU possesses its own memory chips with dedicated memory interfaces, allowing for the simultaneous reading and processing of data sets on different GPUs. In this work, only global GPU memory was utilized; however, this could be expanded to shared memory use for further enhancement of the computational power. This work aimed to introduce the GPU approach into head-up display research as an alternative to CPU use, as the latter has a single memory interface for the CPU cores, creating queuing and longer computation times. Even the expected disadvantages of GPUs, such as the error rate, proved insignificant for this specific application in holographic HUDs, as the results meet the accuracy and precision requirements. In future research, the GPU-accelerated processing times could be improved further. However, at this point, a silhouette of 25 points could be projected in real time to the driver, and a fully rotated 4K obstacle in 26.02 s.
The reason for the extensive comparison of running times between the CPUs and GPUs was to test the hypotheses that (1) the greater the processing power, the faster the running times, and (2) with an increased number of points, the running time per single point decreases. All CPU and GPU machines were from Lenovo to compare purely the system architectures. After the time comparison of the GPU against the CPU, the brightness of each pixel was controlled by an algorithm that extracted the depth information of the LiDAR point cloud data and recreated a matching brightness for each pixel representing that depth information. This method was implemented to generate depth cues such as accommodation and occlusion in the replay field results. Additionally, a method was developed to match the real-life objects in size and distance in the replay field by introducing a virtual Gabor lens based on the multichannel imaging approach. First, the hologram was computationally created by using the previously described algorithm to create a layer in the replay field in the Fraunhofer region. Second, the hologram was placed in the middle of the replay field at a different depth compared to the first CGH to match the size of the replay field result to the original object. The virtual Gabor lens was placed at a calculated distance d away from the input, a complex transmission function U(x, y) illuminated by collimated monochromatic He-Ne laser light (λ = 633 nm) with amplitude 1. The 360° rotations (20°, 80°, 120°, 160°, 180°, 200°, 250°, and 310°) were generated from the LiDAR point cloud data imported into x, y, and z coordinates in MATLAB. A 3D rotation was performed based on the 3D rotation matrices to obtain the gradual 3D object rotations. This process was carried out to demonstrate that the depth information was present in the replay field results and to allow the full assessment of the 3D object as a hazard to the driver.
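As one possible illustration of the depth-to-brightness mapping described above, the sketch below assigns larger amplitudes to nearer points; the linear mapping and the visibility floor are assumptions, not the authors' exact brightness model.

```matlab
% Map point depth (z) to a per-point amplitude so nearer points appear brighter.
% subsetPts is an assumed N-by-3 array; the linear mapping is illustrative.
z  = subsetPts(:, 3);
Ak = 1 - (z - min(z)) ./ (max(z) - min(z) + eps);   % nearest point -> amplitude 1
Ak = 0.2 + 0.8*Ak;                                   % keep a minimum visibility floor
```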
Modulation of Linear Polarizers: In the optical setup, two linear polarizers (Ø1, N-BK7, 38% transmission) were placed to control the polarization of the He-Ne light source, as the laser light was not polarized along any axis. The polarizers were utilized to calibrate the optical setup by generating a grayscale map to find the liquid crystal switching angle. A single polarizer was rotated at a time. The most accurate 3D replay field results were obtained when both polarizers were set at 45°. Hence, the liquid crystal switching angle of the SLM was 45°.

Figure 1. LiDAR-scanned objects on a public road. a) The bird's-eye view of Malet Street in London, UK, where the objects were scanned; the LiDAR object positioning is shown on the map in blue (Google Earth 2023). Scanned objects chosen to be projected include: b) a tree, c) pedestrians next to a bicycle rack, d,e) a truck, and f) a building.

Figure 2. Optical setup for the CGH generation from LiDAR point cloud data. a) The principle of LCoS is based on the alignment of liquid crystals. b) The assembly of the SLM consists of optoelectronic components. Scale bar: 1 cm. The inset shows the LCoS panel. Inset scale bar: 5 mm. c) Optical projection system showing a 4K SLM, a He-Ne laser, an aspheric lens (L1), a focusing lens (L2), polarizers (P1-2), a half-wave plate (HW), and a beam splitter (BS). d) Developed holographic projection setup to display 3D floating images in 4K. e) Resolution chart targets: lines, checkerboard, Secchi disk, yin yang, and Siemens star to calibrate the device. f) Computer-Generated Holograms of the resolution targets. Dashed lines show analyzed regions. g) Replay field results of the resolution targets. h) Intensity-to-distance plots of the original targets, the CGH-generated targets, and the replay field results, comparing the selected projected regions (dashed regions).

Figure 3. Depth information of the holography setup to recreate 360° floating 3D objects in the replay field. a) Point cloud data extracted with a separation algorithm, post-processed into a CGH and an intensity profile in the replay field result. Each point had an intensity value assigned. b) The optical focusing lenses were reduced to virtual Fresnel lenses as part of the post-processing algorithm. The virtual Fresnel lenses were introduced at focal lengths of f = 50 mm and f = 75 mm. c) LiDAR object rotations with depth information as pixel intensity, showing the object to be displayed to the driver as a 360°-rotated, fully assessable obstacle. The rotation process is shown at 0° and 30°. d) 3D object rotation of the extracted LiDAR object and its corresponding replay field result: LiDAR truck presented at 0°, intensity map of the LiDAR object, replay field result; 3D object rotation around the y-axis, LiDAR image of the truck rotated at 30°, and replay field result.

Figure 4. Holographic replay field results of 3D LiDAR-processed data sets. LiDAR truck and tree objects are displayed. The number of points: (i) 10², (ii) 10³, (iii) 10⁴, (iv) 10⁵, and (v) 4×10⁵. The results compare the LiDAR data processing and hologram generation speeds between the CPU and GPU MATLAB parallel processing modules.