Article

Time Synchronization and Space Registration of Roadside LiDAR and Camera

by Chuan Wang, Shijie Liu, Xiaoyan Wang and Xiaowei Lan
1 Shandong High-Speed Group Co., Ltd., Jinan 250098, China
2 School of Microelectronics, Shandong University, Jinan 250101, China
3 Suzhou Research Institute, Shandong University, Suzhou 215000, China
4 Shandong Academy of Transportation Science, Jinan 250357, China
5 School of Transportation, Lanzhou Jiaotong University, Lanzhou 730070, China
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(3), 537; https://doi.org/10.3390/electronics12030537
Submission received: 29 November 2022 / Revised: 12 January 2023 / Accepted: 17 January 2023 / Published: 20 January 2023
(This article belongs to the Special Issue Deep Perception in Autonomous Driving)

Abstract

The sensing system consisting of Light Detection and Ranging (LiDAR) and a camera provides complementary information about the surrounding environment. To take full advantage of the multi-source data provided by different sensors, the multi-source sensor information must be fused accurately. Time synchronization and space registration are the key technologies that determine the fusion accuracy of multi-source sensors. Because LiDAR and the camera differ in data acquisition frequency and startup time, their data are easily acquired asynchronously, which has a significant influence on subsequent data fusion. Therefore, a time synchronization method for multi-source sensors based on frequency self-matching is developed in this paper. Without changing the sensor frequencies, the sensor data are processed to obtain the same number of data frames, and the same ID number is assigned to each matched pair, so that the LiDAR and camera data correspond one to one. Finally, the data frames are merged into new data packets to realize time synchronization between the LiDAR and the camera. Building on this time synchronization, spatial synchronization is achieved with a nonlinear optimization algorithm of the joint calibration parameters, which effectively reduces the reprojection error in the process of sensor space registration. The accuracy of the proposed time synchronization method is 99.86% and the space registration accuracy is 99.79%, which is better than the calibration method of the Matlab calibration toolbox.

1. Introduction

Sensors are widely used in the traffic field. Light Detection and Ranging (LiDAR) and cameras can provide real-time perception of the surrounding environment [1,2] and thus powerful data support for vehicle–road cooperation technology. To improve the target fusion effect [3,4,5,6,7,8], time synchronization and space registration of the sensor data are needed so that multi-source sensors can perceive information robustly in different environments.
As the main component of the visual perception system, the camera can provide abundant color and image information at a low cost and has become an indispensable hardware base for the comprehensive perception of complex road conditions [9]. However, the camera is greatly affected by illumination changes, and its performance degrades in dark conditions. With the rapid development of the 3D laser industry, LiDAR can provide high angular and speed resolution and rich 3D data that are unaffected by lighting conditions, and it has been widely studied and applied in the field of traffic [10]. However, LiDAR cannot provide color information about a target; it can only rely on three-dimensional size information to track the target, which is prone to target-matching errors.
Fusion technology can therefore exploit the complementary advantages of LiDAR and the camera [11,12] and effectively improve the performance and efficiency of target detection [13,14]. However, because LiDAR and the camera acquire data at different frequencies and from different spatial locations [15,16,17], time synchronization and space registration must be addressed; otherwise, data fusion deviations arise easily [18,19,20]. For example, a deviation in data acquisition time between LiDAR and the camera can lead to an obvious position deviation of the same target, affecting object calibration and detection accuracy [21,22]. Solving the problems of time synchronization and space registration between LiDAR and the camera is therefore a critical technical link for improving the accuracy of data fusion [23,24,25].
To solve these problems, this paper proposes a simple and efficient sensor time synchronization method and adopts a space registration method based on a nonlinear optimization algorithm. Together, these two methods solve the problems of time synchronization and space registration between LiDAR and camera data and effectively reduce the amount of data remaining after time synchronization of the multi-source sensors. Even when the LiDAR and camera are installed at different positions, space registration of the multi-source sensor data is realized and an ideal data fusion effect is obtained. Finally, the robustness of the proposed method is verified by comparing it with related spatial synchronization methods.
The remaining parts of the paper are structured as follows: Section 2 presents a brief review of relevant work. Section 3 introduces the data acquisition as well as the principle of time synchronization and the space registration of sensors. Section 4 introduces the experiment and data processing. Section 5 analyzes the comparison between this study and other methods. Section 6 shows the conclusion and related future works.

2. Related Work

Due to the frequent application of LiDAR and camera in recent years, fusion technology has been continuously developed. As a prerequisite for data fusion of multi-source sensors, time synchronization and space registration have been studied by some researchers [26,27,28,29] (such as hardware synchronization, clock synchronization, network synchronization, etc.) to solve the problem of extrinsic parameter calibration in different sensor modes. Meanwhile, the necessity of sensor time synchronization and space registration is reflected in the research and development of many intelligent systems and devices. Therefore, time synchronization and space registration between multi-source sensors have attracted much attention and research.
Firstly, time synchronization methods are mainly divided into hardware synchronization and software synchronization. For example, the internal clocks of different sensors can be synchronized to the same GPS reference clock, so that the data from multiple sensors can be matched and the relevant data processing can be performed [30,31,32]. The hardware synchronization method based on a GPS reference clock is mainly applicable to sensors that provide the corresponding interfaces. At present, most projects in the field of intelligent transportation are based on the Ubuntu robot operating system (ROS). ROS provides the simplest form of soft time synchronization: sensors connected to the ROS system are time-matched when their data arrive at the computer, which requires subscribing to the topics of the different sensors. By inspecting the timestamps in the headers of the messages published on these topics [33,34], the subscribed topics are time synchronized and a synchronized callback finally processes the data. However, this method has a large time deviation and low efficiency, and the system crashes easily when processing data from multiple sensors. Therefore, Furgale et al. proposed a new framework to jointly estimate the time deviation between different sensor measurements and the spatial displacement between the sensors. It is realized through continuous-time batch estimation, and the time deviation is seamlessly incorporated into the rigorous theoretical framework of maximum likelihood estimation [35]. Although this method can accurately calculate and eliminate the time deviation, it involves a large amount of data and takes a long time. Zhang et al. proposed a simple self-calibration method for the internal time synchronization of MEMS (micro-electro-mechanical system) LiDAR. The method automatically determines whether the sensor has performed time synchronization, without any manual intervention, and an experiment on an actual MEMS LiDAR verified its effectiveness [36]. However, this method is only applicable to the internal calibration of LiDAR and cannot achieve time synchronization across multiple sensors, such as a camera and LiDAR.
At present, there are many methods for calibrating the extrinsic parameters of LiDAR and camera [37,38,39]. These methods fall mainly into two categories: dynamic calibration and calibration based on points, lines, and planes. Dynamic calibration is mainly used to calibrate the trajectories of the LiDAR and camera sensors, and the relationship between image attitude estimation, point cloud attitude estimation, and vehicle attitude information is judged mainly by evaluating the similarity between tracks. In the point, line, and plane method, 3D point clouds and 2D images are matched directly for calibration.
Zhang et al. proposed an early extrinsic calibration theory for a system consisting of a camera and a two-dimensional laser rangefinder, pointing out that the angle between the calibration plate plane and the laser scanning plane can affect the calibration accuracy [40]. Xiang et al. proposed a joint calibration method based on the correspondence of the distances from the sensor origins to the calibration plane [41]; however, the influence of the calibration plate attitude on the calibration result was not studied, which limits the achievable calibration accuracy. Chai et al. proposed a 3D–2D corresponding-feature method for LiDAR and camera calibration [42] and then performed rigid-body transformation calculations to obtain more stable calibration results. Lyu et al. used an interactive LiDAR–camera calibration toolbox to calculate the intrinsic and extrinsic parameter transformations; the corners of the board, here a two-dimensional code calibration board, are automatically detected in the LiDAR data frames, and the toolbox uses genetic algorithms to estimate and support multiple transformations [43]. Pusztai et al. studied the calibration of a vision system consisting of LiDAR and RGB camera sensors and proposed a new calibration method using cardboard boxes of known size [44]. At present, time synchronization and space registration between sensors are mostly applied in automatic driving, and there is still insufficient research on the static detection of urban road vehicles and pedestrians by roadside sensors [45,46]. Moreover, the dynamic calibration method cannot achieve high stability [47], places very high demands on sensor time synchronization, and requires the introduction of a time offset, which reduces the calibration accuracy [48].
However, research on time synchronization and space registration of sensors has mainly focused on autonomous driving, and studies on the time synchronization and spatial matching of roadside LiDAR and cameras lack generalizability. In particular, for time synchronization there are few detailed theoretical derivations or objective evaluation methods. At present, the Matlab calibration toolbox is the most commonly used and relatively stable calibration method [49]. In this paper, we propose a frequency self-matching time synchronization method for roadside sensors. Building on this time synchronization and accounting for the relative position differences between the roadside LiDAR and camera, a nonlinear optimization algorithm is introduced to effectively reduce the reprojection error during spatial synchronization. Comparison with the existing Matlab toolbox calibration method demonstrates the feasibility of the proposed method.

3. Data Collection and Methods

3.1. Data Collection

In this experiment, an RS-LiDAR-32 and a camera were used for data acquisition (as shown in Figure 1); the camera was a single-lens Jieruiweitong camera with a resolution of 1080P. The RS-LiDAR-32 has 32 laser transceiver modules for 360-degree scanning, with selectable scanning rates of 5 Hz, 10 Hz, and 20 Hz. Table 1 lists the parameters of the RS-LiDAR-32; the data are taken from the 32-line mechanical LiDAR technical manual on the official website of Shenzhen Sagitar Juchuang Technology Co., Ltd.

3.2. Time Synchronization

Due to the different frequencies of data acquisition, the roadside LiDAR and camera acquire data asynchronously. To fuse the collected data effectively, time synchronization is required, and there are two processing options: (1) time synchronization based on the LiDAR data, and (2) time synchronization based on the camera data. Because the frequency of the LiDAR is lower than that of the camera, and to reduce the computational load of the proposed method and prevent over-fitting in subsequent neural network training, we choose the LiDAR data as the reference for time synchronization; that is, the LiDAR is used as the core sensor. Each time a frame of LiDAR data is received, its timestamp is used as the starting point for time interpolation, and the two camera frames immediately before and after this time point are obtained.
It is assumed that the interval between consecutive LiDAR point cloud frames is $T_1$ seconds and the interval between consecutive camera image frames is $T_2$ seconds. Because the data collection frequency of the LiDAR is lower than that of the camera, $T_2 < T_1$, and the matching threshold is therefore set to $T_2/2$. The two sensors are assumed to start working at time zero, i.e., the camera and the LiDAR use the same device for data acquisition. After the same elapsed time, the LiDAR has acquired $m$ point cloud frames and the camera has acquired $n_m$ ($n_m = [mT_1/T_2]$) image frames.
The next image frame is numbered $n_m + 1$ ($n_m + 1 = [mT_1/T_2] + 1 = [mT_1/T_2 + 1]$), where the square brackets in this section denote rounding down to an integer. When $mT_1/T_2 - [mT_1/T_2] < T_2/2$, the point cloud of frame $m$ and the image of frame $n_m$ form a point pair; here $mT_1/T_2 - [mT_1/T_2]$ represents the difference between the time of the $n_m$-th image frame and the time of the corresponding $m$-th LiDAR frame. If this difference is less than the threshold, the image frame is retained; otherwise, it is eliminated. Similarly, when $[mT_1/T_2 + 1] - mT_1/T_2 < T_2/2$, the point cloud of frame $m$ and the image of frame $n_m + 1$ form a point pair; here $[mT_1/T_2 + 1] - mT_1/T_2$ represents the difference between the time of the $(n_m+1)$-th image frame and the time of the corresponding $m$-th LiDAR frame, and the image frame is retained only if this difference is below the threshold. Image frames that satisfy neither inequality are eliminated. Finally, the matched LiDAR and camera frames are combined into new data packets, realizing time synchronization of the information collected by the two sensors.
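For concreteness, the following Python sketch implements the frame-pairing rule described above under the stated assumptions (both sensors start at time zero, $T_2 < T_1$, threshold $T_2/2$); the function and variable names are illustrative and are not taken from the authors' implementation.

```python
def pair_lidar_with_camera(num_lidar_frames, T1, T2):
    """Pair LiDAR frame m with the camera frame whose capture time differs
    from m*T1 by less than the threshold T2/2; unmatched frames are dropped."""
    pairs = []
    threshold = T2 / 2.0
    for m in range(1, num_lidar_frames + 1):
        t_lidar = m * T1                  # capture time of LiDAR frame m
        n_m = int(t_lidar / T2)           # camera frame index just before t_lidar
        for n in (n_m, n_m + 1):          # candidates before and after t_lidar
            if abs(n * T2 - t_lidar) < threshold:
                pairs.append((m, n))      # both frames receive the same ID number
                break
    return pairs

# Example with a 10 Hz LiDAR (T1 = 0.1 s) and a 30 Hz camera (T2 = 1/30 s):
print(pair_lidar_with_camera(5, 0.1, 1.0 / 30.0))
```

Because $T_2 < T_1$, at least one of the two candidate camera frames always falls within the $T_2/2$ threshold, so every LiDAR frame receives exactly one image partner and the surplus camera frames are discarded, which matches the data reduction described above.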

3.3. Space Registration

To optimize the reprojection error of the camera and LiDAR during spatial synchronization, a nonlinear optimization algorithm with joint calibration parameters is adopted. However, nonlinear joint calibration requires the LiDAR and camera data to be aligned, so the space synchronization between the LiDAR and camera is studied on the basis of the time synchronization described above. Calibrating the camera's intrinsic parameters is the basic prerequisite for joint calibration. The main process is the conversion between the pixel coordinate system, image coordinate system, LiDAR coordinate system, camera coordinate system, and world coordinate system. The camera's intrinsic parameters are calibrated with Zhang's calibration method [50].
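As an illustration of this intrinsic calibration step, the following sketch uses OpenCV's implementation of Zhang's method; the checkerboard size (9 × 6 inner corners), square size, and image folder are placeholder assumptions rather than the settings used in the paper.

```python
import glob
import cv2
import numpy as np

# Hypothetical checkerboard geometry: 9x6 inner corners, 25 mm squares.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):          # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]
# and the lens distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```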
To determine the calibration plate plane, the point cloud coordinates on the calibration plate must be extracted first. The calibration plane, namely the calibration plate, is the reference plane for spatial synchronization between the camera and LiDAR. It helps relate a point in the three-dimensional space of the LiDAR to the corresponding geometric position in the image, making the spatial synchronization between the camera and the LiDAR more accurate. A KD-tree is a data structure used to represent a set of points in k-dimensional space [51]; the KD-tree method in the PCL library is used to select the point cloud around a specified point. Due to the complexity of the test environment, the point cloud data are not pre-processed. Instead, the random sample consensus (RANSAC) algorithm is applied to the selected point cloud data to calculate the plane equation of the calibration plate. Let the fitted plane equation be $ax + by + cz + s = 0$, where $a$, $b$, $c$, and $s$ are the unknown fitting parameters, and four points near the center of the plate are selected to compute the plane equation. The other 3D point cloud coordinates are substituted into Formula (1) to calculate the distance between each 3D point and the fitted plane:
$$D = \frac{|ax + by + cz + s|}{\sqrt{a^2 + b^2 + c^2}}, \tag{1}$$
The threshold is set to $U$. If the distance from a 3D point to the plane is greater than $U$, the point is deleted; the 3D points within the threshold are kept. The number of iterations is set to 2000, and the number of points within $U$ is compared across iterations; the fitted plane with the most inliers is taken as the real plane.
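A minimal sketch of this plane-extraction step is given below, assuming the point cloud is available as an N × 3 NumPy array. Unlike the description above, it samples three random points per iteration instead of four points near the plate center, and the search radius, seed point, and threshold U are placeholders for values chosen on site.

```python
import numpy as np
from scipy.spatial import cKDTree

def crop_around_point(points, seed, radius=1.0):
    """Select the point cloud around a specified point with a KD-tree,
    analogous to the PCL KD-tree selection used in the paper."""
    idx = cKDTree(points).query_ball_point(seed, r=radius)
    return points[idx]

def ransac_plane(points, U=0.02, iterations=2000, rng=np.random.default_rng(0)):
    """Return (a, b, c, s) of the plane ax + by + cz + s = 0 with most inliers."""
    best_params, best_inliers = None, 0
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # plane normal from the three sampled points
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        a, b, c = n / norm
        s = -np.dot((a, b, c), sample[0])
        # Formula (1): point-to-plane distance (denominator is 1 for a unit normal)
        D = np.abs(points @ (a, b, c) + s)
        inliers = int((D < U).sum())
        if inliers > best_inliers:
            best_params, best_inliers = (a, b, c, s), inliers
    return best_params
```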
After finding the optimal plane, calibration optimization is needed, and the main purpose of optimization is to reduce the reprojection error in the 3D–2D point pair projection process. Finally, the space synchronization of LiDAR and camera is realized. The process of projecting the 3D point cloud onto the image is as follows:
$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \tag{2}$$
where $l = [X, Y, Z]^T$ represents the 3D point cloud coordinates of the LiDAR, $R$ is the rotation matrix $R = (r_{11}, r_{12}, r_{13}, \ldots, r_{31}, r_{32}, r_{33})$, $T$ is the translation matrix $T = (t_x, t_y, t_z)$, $K = (f_x, f_y, u_0, v_0)$ represents the camera's intrinsic parameters, $Z$ represents the depth (distance) information, and $(u, v)$ represents the pixel coordinates of the actual projection. The parameters to be optimized are $R$ and $T$. In the optimization process, nonlinear function optimization is introduced to minimize the reprojection error and obtain more accurate values of $R$ and $T$. Based on the time synchronization between the camera and LiDAR data, the parameters are optimized with the Levenberg–Marquardt (LM) method, a recursive optimization method that combines gradient descent and the Gauss–Newton method. The main advantage of the LM algorithm is that it ensures fast convergence while guaranteeing that the cost decreases. Since the positions of the camera and LiDAR relative to the calibration plate are unknown, the camera and LiDAR are calibrated under different attitudes by moving the calibration plate during space synchronization, and a least-squares problem is constructed using the inclination and azimuth angles of the camera and LiDAR relative to the calibration plate as the error source. The global loss function between the LiDAR and camera can therefore be defined as:
$$W_{lf\_lc} = \sum_{i=1}^{n} \left( \left\| \varphi_l^i - \varphi_c^i \right\|^2 + \left\| \theta_l^i - \theta_c^i \right\|^2 \right), \tag{3}$$
In Formula (3), $\varphi_l^i$ and $\varphi_c^i$ represent the inclination angles of the LiDAR and camera with respect to the calibration plate, respectively, and $\theta_l^i$ and $\theta_c^i$ represent the corresponding azimuth angles. Based on the above nonlinear optimization, the reprojection error between the LiDAR and the camera is calculated. According to the principle of reprojection error, let $T_{lc} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}$, let $P_l^i$ be the point projected into the camera coordinate system from the 3D point cloud, and let $P_c^i$ be the 3D point projected into the image coordinate system by the camera's intrinsic matrix $K$. According to Formula (2), the reprojection error of the checkerboard calibration board can be expressed as:
$$\varepsilon = \sum_{i=1}^{n} \left\| P_c^i - \frac{1}{Z} K T_{lc} P_l^i \right\|^2, \tag{4}$$
From Formula (4), the reprojection error in the space synchronization process can be obtained, where $\varepsilon$ represents the reprojection error in the sensor calibration process.
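The following sketch shows how the projection of Formula (2) and the reprojection error of Formula (4) could be evaluated, with the extrinsics refined by SciPy's Levenberg–Marquardt solver as described above. The axis-angle parameterization of $R$, the zero initial guess, and all function names are illustrative assumptions, not the authors' code; in practice a reasonable initial estimate of the extrinsics would be supplied.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_lidar, rvec, tvec, K):
    """Project Nx3 LiDAR points to pixels via Z*[u, v, 1]^T = K [R | T] P."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * Kx @ Kx  # Rodrigues
    cam = points_lidar @ R.T + tvec          # LiDAR frame -> camera frame
    uv = cam @ K.T                           # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]            # divide by the depth Z

def reprojection_residuals(params, points_lidar, pixels, K):
    """Residuals whose squared sum is the reprojection error of Formula (4)."""
    rvec, tvec = params[:3], params[3:]
    return (project(points_lidar, rvec, tvec, K) - pixels).ravel()

def refine_extrinsics(points_lidar, pixels, K, x0=np.zeros(6)):
    # method="lm" selects the Levenberg-Marquardt algorithm, as in the paper.
    result = least_squares(reprojection_residuals, x0, method="lm",
                           args=(points_lidar, pixels, K))
    return result.x[:3], result.x[3:]        # refined axis-angle rotation, translation
```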

4. Experimental Analysis

4.1. Time Synchronization Verification Method

Based on the time synchronization principle described above, a flat road section was selected for the multi-source sensor time synchronization verification test, with a passenger car of a certain brand as the test vehicle. Three traffic cones were placed in a straight line along the road, 5 m apart, and numbered successively; the first cone was placed at the center of the left front wheel of the car, with this center point taken as the starting point, as shown in Figure 2. The car was then driven at a constant speed of 40 km/h; the driver engaged cruise control to ensure a uniform speed. The car passed the second and third cones in turn. When the center of the front wheel coincided with the second and third cones (as shown in Figures 3 and 4), the point cloud data at each cone were obtained by the LiDAR, as shown in Figure 5. According to Formula (5), the coordinates $(\bar{x}_i, \bar{y}_i, \bar{z}_i)$ of each point cloud center were obtained, and the distance $L_{i1}$ between the point cloud center at any cone and the point cloud center at the starting point was then calculated according to Formula (6).
$$\bar{x}_i = \frac{1}{n}\sum_{j=1}^{n} x_{ij}; \quad \bar{y}_i = \frac{1}{n}\sum_{j=1}^{n} y_{ij}; \quad \bar{z}_i = \frac{1}{n}\sum_{j=1}^{n} z_{ij}, \qquad i = 1, 2, 3; \; j = 1, 2, \ldots, n \tag{5}$$
where $\bar{x}_i$, $\bar{y}_i$, and $\bar{z}_i$ represent the coordinates of the LiDAR point cloud center at the front wheel of the vehicle.
$$L_{i1} = \sqrt{(\bar{x}_i - \bar{x}_1)^2 + (\bar{y}_i - \bar{y}_1)^2 + (\bar{z}_i - \bar{z}_1)^2} \tag{6}$$
where $L_{i1}$ represents the distance between the LiDAR point cloud center at the front wheel of the vehicle and the starting point.
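Formulas (5) and (6) translate directly into the short sketch below; the cone_points arrays are hypothetical containers for the LiDAR points collected when the front wheel reaches each cone.

```python
import numpy as np

def cloud_center(points):
    """Formula (5): mean of the point cloud coordinates (x_bar, y_bar, z_bar)."""
    return points.mean(axis=0)

def distance_to_start(points_i, points_1):
    """Formula (6): distance between the cloud center at cone i and at cone 1."""
    return float(np.linalg.norm(cloud_center(points_i) - cloud_center(points_1)))

# Hypothetical check against the known 5 m spacing of the cones:
# error_5m = abs(distance_to_start(cone_points_2, cone_points_1) - 5.0)
```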
With the left front wheel as the target, the vehicle was driven from the first cone to the third while the LiDAR and camera recorded data; this was repeated for a total of 10 tests. Ten groups of time-synchronized data were obtained and compared with the vehicle position distances measured before time synchronization.
The results are shown in Figures 6 and 7. Figure 6 shows the position deviation when the vehicle moved 5 m, while Figure 7 shows the position deviation when the vehicle moved 10 m. As can be seen from the two figures, the data deviation after time synchronization is smaller than without time synchronization. After time synchronization, the accuracy for the 5 m and 10 m movements reaches 99.86% and 99.49%, respectively.

4.2. Space Registration Verification Method

Based on the above two methods, a road on the authors' campus was selected as the test site. During the experiment, the corner points inside the calibration board were identified as the board's position changed, as shown in Figure 8, and the point cloud was then projected onto the calibration board. The black points represent the background points where the LiDAR scans the surrounding environment, and the red points represent the point cloud scanned on the calibration board. While the calibration board was being moved, its inclination and azimuth at different times were recorded, as shown in Figure 9.
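The paper does not give explicit formulas for the recorded inclination and azimuth, so the sketch below shows one plausible way to derive them from a fitted plane normal $(a, b, c)$; the axis conventions are assumptions.

```python
import numpy as np

def board_angles(normal):
    """Return (inclination, azimuth) in degrees for a plane normal (a, b, c)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    inclination = np.degrees(np.arccos(abs(n[2])))   # tilt away from the z-axis
    azimuth = np.degrees(np.arctan2(n[1], n[0]))     # heading in the x-y plane
    return inclination, azimuth

# The LiDAR-side angles would come from the RANSAC plane normal, and the
# camera-side angles from the checkerboard pose estimated in the image.
```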
Through the calibration of the camera's intrinsic parameters and of the extrinsic parameters between the LiDAR and the camera, the rotation parameter matrix and translation parameter matrix are obtained as follows:
$$R = \begin{bmatrix} 6.5577526781824158 \times 10^{-2} & 4.9700067859074659 \times 10^{-2} & 9.9660899616448506 \times 10^{-1} \\ 9.9778758283539681 \times 10^{-1} & 1.4208891757146347 \times 10^{-2} & 6.4946492857811400 \times 10^{-2} \\ 1.0932864248457241 \times 10^{-2} & 9.9866311168974153 \times 10^{-1} & 5.0521894555605629 \times 10^{-2} \end{bmatrix}$$
$$T_1 = \begin{bmatrix} 1.8226658771695058 \times 10^{-2} \\ 8.1717580020885579 \times 10^{-3} \\ 2.4798673724301969 \times 10^{-2} \end{bmatrix}$$
where $R$ is the rotation matrix between the LiDAR and the camera, which can be interpreted as projecting the LiDAR coordinate system into the camera coordinate system through the transformation $R$; $T_1$ is the transformation matrix from the camera to the LiDAR, composed of the rotation matrix $R$ and the translation vector $T$.

5. Results and Discussions

After applying the proposed time synchronization and space registration methods to the LiDAR and camera data, a fusion experiment was carried out with the relative position of the LiDAR and camera kept unchanged, and an ideal fusion of the LiDAR and camera data was obtained, as shown in Figure 10. During the joint calibration of the LiDAR and camera, space registration experiments were also performed with the two sensors placed at different relative positions within a common field of view; the resulting comparison errors are shown in Table 2. Table 2 shows how the reprojection error changes when the LiDAR and the camera are at different relative positions: when the height difference between the LiDAR and the camera is less than 20 cm and the horizontal distance is less than 150 cm, the fusion effect is best.
As the Matlab calibration toolbox is an advanced and convenient calibration technology, the proposed LiDAR and camera time synchronization and space registration method was compared with the Matlab toolbox method based on Zhou et al. [52], and field experiments were carried out to verify the comparison results, as shown in Figure 11. This provides a comparison and reference scheme for the time synchronization, space registration, and data fusion of multi-source sensors. The comparison test used the same instruments and equipment for data acquisition and was performed in the same experimental environment.

6. Conclusions

To address the cumbersome calibration process between LiDAR and camera, this study proposes a time synchronization method based on frequency self-matching and a space synchronization method based on a nonlinear optimization algorithm, which together effectively realize the time synchronization between LiDAR and camera data. Building on the time synchronization, a nonlinear optimization algorithm of the joint calibration parameters is proposed, which effectively reduces the reprojection error in the sensor space registration process. Compared with existing work, the experimental scenarios for time synchronization and space registration in this paper are varied and complex, so the method can be applied to most environments and provides accurate results.
The main innovations of this study are as follows: 1. Visualization experiments show that the time synchronization accuracy of the LiDAR and camera data reaches 99.86%; compared with no time synchronization, the accuracy of the data after time synchronization is improved by 9.63%, while the time synchronization process is simplified and its efficiency is improved. 2. Based on the time synchronization of the roadside sensor data, the method can accurately and efficiently realize sensor space registration and adapt to different complex environments.
This method provides a practical and effective solution for the data fusion of vehicle–road cooperative multi-source sensor equipment. For example, it can be used to mitigate the influence of relative position on data fusion when different roadside sensors are installed. The method is also suitable for traffic scenes in complicated urban environments.
This paper also has some limitations. For instance, only one type of LiDAR was used for data collection, and no experiments were carried out under rain, snow, or other adverse weather conditions; these gaps will be addressed in future work. We have already planned experiments with different types of LiDAR, and the next step will be to synchronize LiDAR, millimeter-wave radar, and cameras in both time and space.

Author Contributions

Conceptualization, S.L. and C.W.; methodology, S.L.; software, X.L.; validation, X.W., S.L. and C.W.; formal analysis, S.L.; investigation, C.W.; resources, X.L.; data curation, X.W.; writing—original draft preparation, S.L. and C.W.; writing—review and editing, X.L.; visualization, C.W.; supervision, C.W.; project administration, X.W.; funding acquisition, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded partly by the National Natural Science Foundation of China, grant number 52002224, partly by the National Natural Science Foundation of Jiangsu Province, grant number BK20200226, partly by the Program of Science and Technology of Suzhou, grant number SYG202033, and partly by the Key Research and Development Program of Shandong Province, grant number 2020CXG010118.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beltrán, J.; Guindel, C.; García, F. Automatic extrinsic calibration method for lidar and camera sensor setups. arXiv 2021, arXiv:2101.04431. [Google Scholar] [CrossRef]
  2. Raj, T.; Hashim, F.H.; Huddin, A.B.; Ibrahim, M.F.; Hussain, A. A Survey on LiDAR Scanning Mechanisms. Electronics 2020, 9, 741. [Google Scholar] [CrossRef]
  3. Wu, J.; Xiong, Z. A soft time synchronization framework for multi-sensors in autonomous localization and navigation. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Auckland, New Zealand, 9–12 July 2018; pp. 694–699. [Google Scholar]
  4. Wu, J.; Zhang, Y.; Xu, H. A novel skateboarder-related near-crash identification method with roadside LiDAR data. Accid. Anal. Prev. 2020, 137, 105438. [Google Scholar] [CrossRef] [PubMed]
  5. Guan, L.; Chen, Y.; Wang, G.; Lei, X. Real-Time Vehicle Detection Framework Based on the Fusion of LiDAR and Camera. Electronics 2020, 9, 451. [Google Scholar] [CrossRef] [Green Version]
  6. Wei, P.; Cagle, L.; Reza, T.; Ball, J.; Gafford, J. LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System. Electronics 2018, 7, 84. [Google Scholar] [CrossRef] [Green Version]
  7. Lin, J.; Zhang, F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3126–3131. [Google Scholar]
  8. Wu, J.; Xu, H.; Tian, Y.; Zhang, Y.; Zhao, J.; Lv, B. An automatic lane identification method for the roadside light detection and ranging sensor. J. Intell. Transp. Syst. 2020, 24, 467–479. [Google Scholar] [CrossRef]
  9. Franke, U.; Pfeiffer, D.; Rabe, C.; Knoeppel, C.; Enzweiler, M.; Stein, F.; Herrtwich, R. Making bertha see. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013; pp. 214–221. [Google Scholar]
  10. Wang, Z.; Wang, L.; Xiao, L.; Dai, B. Unsupervised Subcategory Domain Adaptive Network for 3D Object Detection in LiDAR. Electronics 2021, 10, 927. [Google Scholar] [CrossRef]
  11. Guo, Y.P.; Zou, K.; Chen, S.D. Road Side Perception Simulation System for Vehicle-Road Cooperation. Comput. Syst. Appl. 2021, 30, 92–98. [Google Scholar]
  12. Chen, J.; Tian, S.; Xu, H.; Yue, R.; Sun, Y.; Cui, Y. Architecture of Vehicle Trajectories Extraction With Roadside LiDAR Serving Connected Vehicles. IEEE Access 2019, 7, 100406–100415. [Google Scholar] [CrossRef]
  13. Song, H.; Choi, W.; Kim, H. Robust Vision-Based Relative-Localization Approach Using an RGB-Depth Camera and LiDAR Sensor Fusion. IEEE Trans. Ind. Electron. 2016, 63, 3725–3736. [Google Scholar] [CrossRef]
  14. Yoo, J.H.; Kim, Y.; Kim, J.; Choi, J.W. 3d-cvf: Generating joint camera and lidar features using cross-view spatial feature fusion for 3d object detection. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 720–736. [Google Scholar]
  15. Bruscato, L.T.; Heimfarth, T.; de Freitas, E.P. Enhancing Time Synchronization Support in Wireless Sensor Networks. Sensors 2017, 17, 2956. [Google Scholar] [CrossRef]
  16. Li, J.; Mechitov, K.A.; Kim, R.E.; Spencer, B.F., Jr. Efficient time synchronization for structural health monitoring using wireless smart sensor networks. Struct. Control Health Monit. 2016, 23, 470–486. [Google Scholar] [CrossRef]
  17. Liu, S.; Yu, B.; Liu, Y.; Zhang, K.; Qiao, Y.; Li, T.Y.; Tang, J.; Zhu, Y. Brief industry paper: The matter of time—A general and efficient system for precise sensor synchronization in robotic computing. In Proceedings of the IEEE 27th Real-Time and Embedded Technology and Applications Symposium, Nashville, TN, USA, 18–21 May 2021; pp. 413–416. [Google Scholar]
  18. Li, J.; Zhang, X.; Li, J.; Liu, Y.; Wang, J. Building and optimization of 3D semantic map based on Lidar and camera fusion. Neurocomputing 2020, 409, 394–407. [Google Scholar] [CrossRef]
  19. Yu, B.; Hu, W.; Xu, L.; Tang, J.; Liu, S.; Zhu, Y. Building the computing system for autonomous micromobility vehicles: Design constraints and architectural optimizations. In Proceedings of the Annual IEEE/ACM International Symposium on Microarchitecture, Athens, Greece, 17–21 October 2020; pp. 1067–1081. [Google Scholar] [CrossRef]
  20. Zhao, L.; Zhou, H.; Zhu, X.; Song, X.; Li, H.; Tao, W. Lif-seg: Lidar and camera image fusion for 3d lidar semantic segmentation. arXiv 2021, arXiv:2108.07511. [Google Scholar]
  21. Zheng, J.; Li, S.; Li, N.; Fu, Q.; Liu, S.; Yan, G. A LiDAR-Aided Inertial Positioning Approach for a Longwall Shearer in Underground Coal Mining. Math. Probl. Eng. 2021, 2021, 6616090. [Google Scholar] [CrossRef]
  22. Moleski, T.W.; Wilhelm, J. Trilateration Positioning Using Hybrid Camera-LiDAR System; AIAA Scitech 2020 Forum: Orlando, FL, USA, 2020; p. 0393. [Google Scholar]
  23. Chang, X.; Chen, X.D.; Zhang, J.C. Target detection and tracking based on Lidar and camera information fusion. Opto-Electron. Eng. 2019, 46, 1–11. [Google Scholar]
  24. Liu, Z. Research on Spatio-Temporal Consistency and Information Fusion Technology of Multi-Sensor. Ph.D. Thesis, National University of Defense Technology, Changsha, China, 2008. [Google Scholar]
  25. Pusztai, Z.; Eichhardt, I.; Hajder, L. Accurate calibration of multi-lidar-multi-camera systems. Sensors 2018, 18, 2139. [Google Scholar] [CrossRef] [Green Version]
  26. Faizullin, M.; Kornilova, A.; Ferrer, G. Open-Source LiDAR Time Synchronization System by Mimicking GPS-clock. arXiv 2021, arXiv:2107.02625. [Google Scholar]
  27. Nikolic, J.; Rehder, J.; Burri, M.; Gohl, P.; Leutenegger, S.; Furgale, P.T.; Siegwart, R. A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 431–437. [Google Scholar]
  28. Wu, J.; Xu, H.; Zheng, Y.; Zhang, Y.; Lv, B.; Tian, Z. Automatic Vehicle Classification using Roadside LiDAR Data. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 153–164. [Google Scholar] [CrossRef]
  29. Anderton, D.C. Synchronized Line-Scan LIDAR/EO Imager for Creating 3D Images of Dynamic Scenes: Prototype II. Master’s Thesis, Utah State University, Logan, UT, USA, 2005. [Google Scholar] [CrossRef]
  30. Kim, R.; Nagayama, T.; Jo, H.; Spencer, J.B.F. Preliminary study of low-cost GPS receivers for time synchronization of wireless sensors. In Proceedings of the Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems, San Diego, CA, USA, 26–30 April 2012; Volume 8345, p. 83451A. [Google Scholar]
  31. Koo, K.Y.; Hester, D.; Kim, S. Time Synchronization for Wireless Sensors Using Low-Cost GPS Module and Arduino. Front. Built Environ. 2019, 4, 82. [Google Scholar] [CrossRef] [Green Version]
  32. Skog, I.; Handel, P. Time Synchronization Errors in Loosely Coupled GPS-Aided Inertial Navigation Systems. IEEE Trans. Intell. Transp. Syst. 2011, 12, 1014–1023. [Google Scholar] [CrossRef]
  33. Zofka, M.R.; Tottel, L.; Zipfl, M.; Heinrich, M.; Fleck, T.; Schulz, P.; Zollner, J.M. Pushing ROS towards the Dark Side: A ROS-based Co-Simulation Architecture for Mixed-Reality Test Systems for Autonomous Vehicles. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Karlsruhe, Germany, 14–16 September 2020; pp. 204–211. [Google Scholar]
  34. Anwar, K.; Wibowo, I.K.; Dewantara, B.S.B.; Bachtiar, M.M.; Haq, M.A. ROS Based Multi-Data Sensors Synchronization for Robot Soccer ERSOW. In Proceedings of the International Electronics Symposium, Surabaya, Indonesia, 29–30 September 2021; pp. 167–172. [Google Scholar]
  35. Furgale, P.; Rehder, J.; Siegwart, R. Unified temporal and spatial calibration for multi-sensor systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1280–1286. [Google Scholar]
  36. Zhang, Y.; Di, X.; Yan, S.; Zhang, B.; Qi, B.; Wang, C. A Simple Self-calibration Method for The Internal Time Synchronization of MEMS LiDAR. arXiv 2021, arXiv:2109.12506. [Google Scholar]
  37. Zheng, B.; Huang, X.; Ishikawa, R.; Oishi, T.; Ikeuchi, K. A new flying range sensor: Aerial scan in omni-directions. In Proceedings of the International Conference on 3D Vision, Lyon, France, 19–22 October 2015; pp. 623–631. [Google Scholar]
  38. Galilea, J.L.L.; Lavest, J.-M.; Vazquez, C.A.L.; Vicente, A.G.; Munoz, I.B. Calibration of a High-Accuracy 3-D Coordinate Measurement Sensor Based on Laser Beam and CMOS Camera. IEEE Trans. Instrum. Meas. 2009, 58, 3341–3346. [Google Scholar] [CrossRef]
  39. Zhang, J.; Kaess, M.; Singh, S. A real-time method for depth enhanced visual odometry. Auton. Robot. 2015, 41, 31–43. [Google Scholar] [CrossRef]
  40. Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2301–2306. [Google Scholar]
  41. Xiang, Z.Y.; Zheng, L. Novel joint calibration method of camera and 3D laser range finder. J. Zhejiang Univ. 2009, 43, 1401–1405. [Google Scholar]
  42. Chai, Z.; Sun, Y.; Xiong, Z. A novel method for LiDAR camera calibration by plane fitting. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Auckland, New Zealand, 9–12 July 2018; pp. 286–291. [Google Scholar]
  43. Lyu, Y.; Bai, L.; Elhousni, M.; Huang, X. An interactive lidar to camera calibration. In Proceedings of the IEEE High Performance Extreme Computing Conference, Waltham, MA, USA, 24–26 September 2019; pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
  44. Pusztai, Z.; Hajder, L. Accurate calibration of LiDAR-camera systems using ordinary boxes. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 6–9 June 2017; pp. 394–402. [Google Scholar]
  45. Taylor, Z.; Nieto, J. Motion-Based Calibration of Multimodal Sensor Extrinsics and Timing Offset Estimation. IEEE Trans. Robot. 2016, 32, 1215–1229. [Google Scholar] [CrossRef]
  46. Wu, J.; Xu, H.; Zheng, J.; Zhao, J. Automatic Vehicle Detection With Roadside LiDAR Data Under Rainy and Snowy Conditions. IEEE Intell. Transp. Syst. Mag. 2020, 13, 197–209. [Google Scholar] [CrossRef]
  47. Cui, Y.; Xu, H.; Wu, J.; Sun, Y.; Zhao, J. Automatic Vehicle Tracking With Roadside LiDAR Data for the Connected-Vehicles System. IEEE Intell. Syst. 2019, 34, 44–51. [Google Scholar] [CrossRef]
  48. Yiğitler, H.; Badihi, B.; Jäntti, R. Overview of time synchronization for IoT deployments: Clock discipline algorithms and protocols. Sensors 2020, 20, 5928. [Google Scholar] [CrossRef]
  49. Kolar, P.; Benavidez, P.; Jamshidi, M. Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation. Sensors 2020, 20, 2180. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Piscataway, NJ, USA, 20–27 September 1999; Volume 1. [Google Scholar]
  51. Zhou, K.; Hou, Q.; Wang, R.; Guo, B. Real-time KD-tree construction on graphics hardware. ACM Trans. Graph. 2008, 27, 1–11. [Google Scholar] [CrossRef] [Green Version]
  52. Zhou, L.; Li, Z.; Kaess, M. Automatic extrinsic calibration of a camera and a 3d lidar using line and plane correspondences. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5562–5569. [Google Scholar] [CrossRef]
Figure 1. LiDAR and camera data acquisition system.
Figure 2. The starting point of the test.
Figure 3. Vehicle driving distance of 5 m.
Figure 4. Vehicle driving distance of 10 m.
Figure 5. LiDAR point cloud diagram.
Figure 6. Comparison of distance errors before and after synchronization when the vehicle drives a distance of 5 m.
Figure 7. Comparison of distance errors before and after synchronization when the vehicle drives a distance of 10 m.
Figure 8. Identification of the corner points of the calibration board.
Figure 9. Point cloud calibration plate reprojection.
Figure 10. Effect of LiDAR and camera data fusion.
Figure 11. Reprojection error comparison.
Table 1. The LiDAR parameters.
Indicator | Value
Laser beams | 32
Scan FOV | 40° × 360°
Vertical angle resolution | 0.33°
Rotation rate | 300/600/1200 r/min
Laser wavelength | 905 nm
Vertical field of view | −16°~+15°
Operating temperature | −20~60 °C
Single echo data rate | 650,000 points/s
Measuring range | 100 m~200 m
Communication interface | PPS/UDP
Table 2. Reprojection errors of LiDAR and camera at different distances.
Height (cm) | Horizontal Distance (cm) | Reprojection Error (Pixel)
10 | 50 | 0.159981
10 | 100 | 0.166632
10 | 150 | 0.263361
10 | 200 | 0.339633
20 | 50 | 0.169532
20 | 100 | 0.176923
20 | 150 | 0.294632
20 | 200 | 0.369654
30 | 50 | 0.219987
30 | 100 | 0.321463
30 | 150 | 0.322134
30 | 200 | 0.329786