Sensors
  • Article
  • Open Access

29 November 2022

An Entropy Analysis-Based Window Size Optimization Scheme for Merging LiDAR Data Frames

1 Department of Civil Engineering, Hongik University, Seoul 04066, Republic of Korea
2 Department of Computer Engineering, Inha University, Incheon 22212, Republic of Korea
3 School of Computing, Gachon University, Seongnam 13120, Republic of Korea
4 Department of Civil Engineering, Kyung Hee University, Yongin 17104, Republic of Korea

Abstract

LiDAR is a useful technology for gathering point cloud data from its environment and has been adapted to many applications. We use a cost-efficient LiDAR system attached to a moving object to estimate the location of the moving object using referenced linear structures. In the stationary state, the accuracy of extracting linear structures is low given the low-cost LiDAR. We propose a merging scheme for the LiDAR data frames to improve the accuracy by using the movement of the moving object. The proposed scheme tries to find the optimal window size by means of an entropy analysis. The optimal window size is determined by finding the minimum point between the entropy indicator of the ideal result and the entropy indicator of the actual result of each window size. The proposed indicator can describe the accuracy of the entire path of the moving object at each window size using a simple single value. The experimental results show that the proposed scheme can improve the linear structure extraction accuracy.

1. Introduction

Light Detection and Ranging (LiDAR) technologies have been rapidly developed and there are various applications based on LiDAR. LiDAR systems are classified into three types: spatial, spectral, and temporal information capturing [1]. Spatial schemes obtain point cloud data based on Time of Flight (ToF) measurements and spectral schemes measure the information of a material using what is termed Laser Return Intensity (LRI). Temporal schemes gather additional information based on spatial and spectral information using a repeated LiDAR technique. Application developers must select an optimal product because each LiDAR type has different functionalities and specifications such as range, Field of View (FoV), precision, and accuracy.
In mobile environments, LiDAR has extended its application areas, and most autonomous vehicles are equipped with LiDAR. LiDAR technology is essential for Advanced Driver-Assistance Systems (ADAS), which automatically handle steering, accelerating, and braking under the driver’s supervision [2]. Autonomous driving vehicles use LiDAR sensors, which provide high-resolution, real-time 3D representation data, to detect the surrounding environment and obstacles. Odometry is essential for accurate self-localization for path planning and environment perception, which are key features related to driving safety [3]. LiDAR-based odometry can handle environmental variations by taking advantage of its active sensor emitting laser beams. The main functionality of LiDAR odometry is registration between the current scan data and the reference point cloud data, which is typically solved by the Iterative Closest Point (ICP) algorithm.
Simultaneous Localization and Mapping (SLAM) is also a promising field related to mobile LiDAR. It is designed to build or update a map of an unknown environment while simultaneously keeping track of an agent’s location. LiDAR is a more popular mechanism in SLAM than alternatives such as radar and ultra-wideband positioning due to LiDAR’s high precision, wide coverage, and longevity [4]. Traditional LiDAR-based SLAM algorithms mainly leverage geometric features from the scene context, while the intensity information from LiDAR is ignored. A SLAM framework that uses both geometry and intensity information for odometry estimation provides reliable and accurate localization in multiple environments and outperforms geometry-only methods [5]. LiDAR-based SLAM can also be used in the indoor navigation systems of autonomous vehicles because it provides more robust localization than image-based SLAM in poorly textured environments [6].
In this paper, we use a LiDAR system mounted on a moving object to monitor an excavation site in an urban area. We extract linear structures with referenced location information to calibrate the accuracy of a satellite-based navigation system. The location of the moving object is measured through trilateration using the extracted linear structures. The Velodyne Puck used in our system has relatively low specifications, as shown in Table 1. As the distance between the Velodyne Puck and the linear structures increases, the extraction accuracy rapidly decreases due to its low vertical angular resolution. Therefore, we propose a sliding window mechanism to improve the accuracy of the data collected from the low-spec LiDAR system. The proposed sliding window mechanism merges consecutive frames to acquire more point cloud data and improves the likelihood of linear structure extraction by considering the movement of the mobile object.
Table 1. Velodyne Puck and Ultra Puck specifications.
We also propose an optimal window size decision algorithm based on Shannon entropy analysis. We calculate the entropy of the desired extracted linear structures at each point as a reference structure to find the optimal window size. Upon a change in the location of the moving object, our algorithm recalculates the entropy of each point with a change in the window size. The optimal window size of each point is set to the minimum difference between the reference entropy and the recalculated entropy. Our experimental results show that the proposed sliding window mechanism and the optimal window size decision algorithm perform well.
The rest of this paper is organized as follows. Section 2 describes related works. Section 3 presents the proposed entropy analysis-based window size optimization scheme. Section 4 presents the evaluation results, and Section 5 concludes the paper.

3. Entropy Analysis-Based Window Size Optimization Scheme

3.1. Application Scenario

Figure 1 shows an overview of the proposed mobile position system based on linear structure extraction from point cloud data collected using 3D-LiDAR. A mobile object equipped with LiDAR collects point cloud data while moving through the monitoring space. If there are obstacles such as tall buildings or street trees near the monitoring space, the accuracy of the satellite-based positioning system is reduced. The proposed system extracts a linear structure that serves as a reference from the collected point cloud data to correct the position of the mobile object. When three or more linear structures are extracted, the distance between the linear structure and the mobile object can be measured. The position of the mobile object can be estimated through trilateration.
Figure 1. Mobile Positioning System based on Linear Structures Extraction.
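The trilateration step described above can be sketched as a small computation. The following is a minimal 2D sketch, not the paper's implementation: the reference structure coordinates and measured distances are hypothetical, and the circle equations are linearized by subtracting the first one and then solved with Cramer's rule.

```python
import math

def trilaterate(anchors, distances):
    """Estimate a 2D position from three reference points with known
    coordinates and measured distances. Subtracting the first circle
    equation from the other two yields a 2x2 linear system, solved
    here with Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = distances
    # Linearized equations: 2(xi - x0)x + 2(yi - y0)y = bi
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Hypothetical reference linear structures at known positions.
refs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist((3.0, 4.0), r) for r in refs]
print(trilaterate(refs, dists))  # recovers (3.0, 4.0)
```

With noisy distances or more than three references, a least-squares solve over the same linearized system would replace the exact 2x2 solution.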
Figure 2 shows the hardware components of our system. There are three main components: a Jetson Xavier as the processing module, a Velodyne Puck as the LiDAR module, and an LPMS-USBAL2 unit as the IMU (Inertial Measurement Unit). The Jetson Xavier has an 8-core ARM v8.2 64-bit CPU, a 512-core NVIDIA Volta architecture-based GPU, 16 GB of memory, 32 GB of storage space, and an additional 1 TB of removable storage space to save data. The aforementioned Velodyne Puck is connected to the Jetson Xavier via a 100 Mbps LAN connection. The LPMS-USBAL2 unit, connected to the Jetson Xavier via USB, has roll and yaw ranges of ±180°, a pitch range of ±90°, and a resolution of 0.01°. The accuracy of the IMU is 0.5° and 2° in static and dynamic environments, respectively.
Figure 2. Hardware Components of the Proposed System.
Figure 3 shows the workflow of the proposed positioning system. First, point cloud data are recorded by the LiDAR in the Robot Operating System (ROS) BAG file format at regular intervals. The recorded BAG file is then converted to a PCD (Point Cloud Data) file containing one frame, and calibration is conducted by referring to the IMU data. Linear structures are extracted from the calibrated PCD file; if the number of extracted linear structures is three or fewer, the PCD file is merged with the next consecutive frame file to increase the amount of point cloud data, after which the linear structures are extracted again. When four or more linear structures are extracted, the distances are calculated through a comparison with the reference linear structures, and the position of the mobile object is measured based on the calculated result.
Figure 3. Workflow of the Proposed System.
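The merge-and-retry loop of this workflow can be sketched in Python. This is a minimal sketch under stated assumptions: extract_linear_structures is a toy stand-in that counts repeated observations per structure ID (not the paper's actual extraction method), and each frame is assumed to be a list of (structure_id, point) tuples.

```python
from collections import Counter

def extract_linear_structures(points, min_hits=3):
    """Toy stand-in for the extraction step: a structure ID counts as
    'extracted' once enough of its points appear in the merged cloud."""
    hits = Counter(pid for pid, _ in points)
    return {pid for pid, n in hits.items() if n >= min_hits}

def merge_until_extracted(frames, min_structures=4, max_window=10):
    """Merge consecutive frames (the sliding window) until at least
    min_structures linear structures are extracted, then return the
    structures and the window size that was needed."""
    merged, structures = [], set()
    for w, frame in enumerate(frames[:max_window], start=1):
        merged.extend(frame)  # merge the next consecutive frame
        structures = extract_linear_structures(merged)
        if len(structures) >= min_structures:
            break
    return structures, w
```

With these toy settings, four structures that each contribute one point per frame become extractable only after three frames are merged, mirroring how the window grows until enough references are found.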

3.2. System Modeling

In our model, we assume that the location information of all reference linear structures is known and that the mobile object repeatedly traverses the monitored area along the driving route without stopping. Points of interest requiring accurate location information can be located anywhere; they mainly target areas where satellite-based positioning signals are not received, or where large errors occur even when signals are received. Figure 4 shows the process of extracting linear structures at each point of interest from a mobile object equipped with a LiDAR system with a limited detection range. When the mobile object arrives at a point of interest while moving through the monitored area along the detection path, the linear structures are extracted and the IDs of the extracted linear structures are acquired.
Figure 4. Linear Structure Extraction at Points of Interest.
Figure 5a shows the ID list of the ideal linear structure extracted from the point of interest along the moving path of the mobile object. All of the linear structures that should be detected are extracted from the results, and there are no false detection results. However, if a linear structure is extracted using the actual point cloud data measured while the mobile object is moving, it may contain results that were not extracted or that are false positives (F/P), as shown in Figure 5b. Therefore, an indicator is needed to determine how accurate the actual result is compared to the ideal result.
Figure 5. Extracted References at Point of Interest. (a) Ideal result (b) Actual result (F/P is False Positive).
We applied the Shannon entropy analysis-based window size optimization scheme proposed by Wu et al. [25] to our system to evaluate the actual accuracy of the reference linear structure extraction results at the points of interest.
Table 2 describes the notations used in this paper.
Table 2. Notations.
According to the Shannon entropy definition, the entropy indicator of the system, E(X), is defined as Equation (1).

$$E(X) = -\sum_{i=1}^{n} P(x_i) \log P(x_i), \quad \text{where } n \text{ is the number of points of interest} \tag{1}$$
We define P(xi) as the probability of a correctly detected linear structure at point i, derived from the relationship between the total number of extracted linear structures over all points of interest in the ideal result and the number of correctly detected linear structures in the actual result at point i. This relationship is defined as Equation (2).
$$P(x_i) = \frac{C_i - F_i}{N}, \quad \text{where } N = \sum_{i=1}^{n} C_i \text{ in the ideal result} \tag{2}$$
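Equations (1) and (2) can be combined into a short computation. The sketch below assumes hypothetical per-point counts: C_i correctly detected structures, F_i false positives, and N the total number of structures extracted in the ideal result.

```python
import math

def entropy_indicator(C, F, N):
    """Entropy indicator E(X) over the points of interest, following
    Eqs. (1)-(2): P(x_i) = (C_i - F_i) / N, with N the total number of
    structures extracted in the ideal result. Zero probabilities are
    skipped, since p * log(p) -> 0 as p -> 0."""
    E = 0.0
    for c, f in zip(C, F):
        p = (c - f) / N
        if p > 0:
            E -= p * math.log(p)
    return E

# Hypothetical counts at four points of interest (assumed values).
C = [3, 2, 4, 3]        # correctly detected linear structures
F = [0, 1, 0, 2]        # false positives
N = sum([3, 3, 4, 3])   # ideal-result totals per point of interest
print(entropy_indicator(C, F, N))
```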
Our goal function, described in Equation (3), is to minimize the difference between the entropy indicator of the ideal result and the entropy indicator of the actual result with a window size of W.
$$\min \left| E_R(X) - E_W(X) \right|, \quad \text{where } 2 \le W \le 10 \tag{3}$$

3.3. Window Size Optimization Algorithm

Algorithm 1 shows the pseudocode of the proposed window size optimization algorithm based on the entropy analysis. The input parameters are the set of points of interest (I), the total number of points of interest (TI), the set of referenced linear structures’ IDs (R), and the total number of extracted linear structures at each point of interest in the ideal result (N). First, the entropy indicator of the ideal result (ER(X)) is calculated using N and Ci, and the minimum difference between ER(X) and the entropy indicator of the actual result at window size i (Ei(X)) is initialized. Next, the entropy indicator contributions of the points of interest are accumulated over all TI points. When the absolute difference between ER(X) and Ei(X) is less than Vmin, Vmin and the optimal window size (ω) are updated. Finally, when the loop ends, the algorithm returns the optimal window size ω.
Algorithm 1 Finding Optimal Window Size
Input: I, TI, R, N
Output: Optimal window size ω
Calculate ER(X)
Vmin = INF
for i = 2 to 10 do
    Ei(X) = 0
    for j = 1 to TI do
        Ei(X) += P(xj) · log(1/P(xj)), where xj ∈ I
    end for
    if |ER(X) − Ei(X)| < Vmin then
        Vmin = |ER(X) − Ei(X)|
        ω = i
    end if
end for
return ω
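Algorithm 1 can be rendered as runnable Python. This is a sketch, not the paper's implementation: the per-window detection counts passed in are hypothetical stand-ins for the extraction results at each window size, with probabilities formed per Equation (2).

```python
import math

def entropy(probs):
    """E(X) = sum_i P(x_i) * log(1 / P(x_i)) over nonzero probabilities."""
    return sum(p * math.log(1.0 / p) for p in probs if p > 0)

def optimal_window_size(actual, ideal_C, w_min=2, w_max=10):
    """Sketch of Algorithm 1: return the window size whose entropy
    indicator is closest to that of the ideal result.

    actual[w][j] is the assumed correct-detection count (C_j - F_j) at
    point of interest j when merging with window size w; ideal_C[j] is
    C_j in the ideal result."""
    N = sum(ideal_C)                      # total structures, ideal result
    E_ref = entropy(c / N for c in ideal_C)
    v_min, omega = math.inf, w_min
    for w in range(w_min, w_max + 1):     # for i = 2 to 10
        E_w = entropy(x / N for x in actual[w])
        if abs(E_ref - E_w) < v_min:      # update Vmin and omega
            v_min, omega = abs(E_ref - E_w), w
    return omega

# Hypothetical counts: window size 4 exactly matches the ideal result.
actual = {w: [1, 1, 1, 1] for w in range(2, 11)}
actual[4] = [3, 3, 4, 3]
print(optimal_window_size(actual, [3, 3, 4, 3]))  # 4
```

Because the minimization is over the absolute difference, a window size whose actual extraction counts coincide with the ideal result drives the goal function of Equation (3) to zero and is selected.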

4. Evaluation Results

4.1. Experimental Environment

Figure 6 depicts the moving vehicle equipped with a LiDAR device (Velodyne Puck), an edge computing platform, and an IMU. We ran the test vehicle on a test field to estimate the location of the vehicle using the proposed scheme.
Figure 6. Test Vehicle Appearance and Test Field Run.
Figure 7 shows the preparation process including the location measurements of the referenced linear structures and points of interest using a high-accuracy GNSS (Global Navigation Satellite System) device, in this case, a Trimble R10. The Trimble R10 is a high-accuracy position measurement device that has a horizontal error of less than 2 cm and a vertical error of 5 cm during stationary measurements.
Figure 7. Trimble R10 Measurements.

4.2. Effects of the Proposed Window Mechanism

When we extract linear structures from a single frame (i.e., without the proposed window mechanism), the accuracy of linear structure extraction is low due to the low point density of the Velodyne Puck. When the proposed window mechanism is applied, the accuracy of linear structure extraction increases. As shown in Figure 8, when we set the window size to 3, the contours of trees, traffic signs, and streetlamps are clear because consecutive point cloud data frames are merged. The merged point cloud data, which reflect the vertical movements of the moving vehicle, can improve the accuracy of the linear structure extraction process.
Figure 8. Comparison between (a) Without and (b) With Window Mechanism. The red box areas show noticeably improved shapes of linear structures.
However, the proposed window mechanism has a side effect, as shown in Figure 9. As the number of merged point cloud data frames increases, the error also increases. This accumulated error makes it difficult to extract linear structures. Therefore, it is important to measure the effect of the proposed window mechanism and to determine the optimal window size.
Figure 9. Side Effect of the Proposed Window Mechanism.

4.3. Entropy Indicator Comparison

We analyzed the entropy indicators of a single frame, a static window size (W = 3), and the optimal window size on the same route for vehicles moving at about 2.5 km/h (Normal) and about 5 km/h (Fast). Along the moving path, there are 12 reference linear structures and five points of interest. Figure 10 shows the entropy indicators of the three schemes at different vehicle speeds.
Figure 10. Entropy Indicator Values Comparison among the Three Schemes at Different Speeds. (a) Entropy indicator of single frame; (b) Entropy indicator of static window size; (c) Entropy Indicator of optimal window size; (d) Entropy indicator comparison of each scheme.
For the single-frame case (a), the entropy indicator is high at each point of interest because faulty linear structures are detected and referenced linear structures are missed due to the low density of the point cloud data. For the static window size (b), the entropy indicator fluctuates depending on whether the static window size matches the optimal window size. When the static window size is close to the optimal window size, the entropy indicator approaches zero; when it is far from the optimal window size, the entropy indicator is higher. The optimal window size (c) shows the lowest entropy indicator value at every point of interest. In the comparison of the three schemes in terms of the total entropy indicator values (d), the optimal window size scheme shows the lowest value. The vehicle speed does not have a significant effect on the entropy indicator values of the three schemes.
Figure 11 shows the entropy indicator values of the three schemes for different movements of the vehicle. We control the moving vehicle by having it follow a set path (Normal) and by having it drive in a zigzag pattern (Zigzag). When the vehicle swerves severely from side to side, the entropy indicator values of all schemes increase dramatically because the point cloud data are distorted. Accumulating frames with heavy noise makes it difficult to extract linear structures.
Figure 11. Entropy Indicator Values Comparison among the Three Schemes during Movement. (a) Entropy indicator of single frame; (b) Entropy indicator of static window size; (c) Entropy Indicator of optimal window size; (d) Entropy indicator comparison of each scheme.
Figure 12 shows the overhead of the proposed scheme in terms of the PCD file size, the merging time, and the linear structure extraction time. The merging time remains stable as the number of frames increases. However, the merged PCD file size increases linearly, and the linear structure extraction time increases exponentially. The weak point of the proposed scheme is thus the high overhead of processing the merged frames.
Figure 12. Proposed Scheme Overhead in Terms of File Size and Execution Time.

5. Conclusions

We used a low-cost LiDAR system attached to a moving object to estimate the location of the moving object. The movement of the moving object creates differences in each collected point cloud data instance at every moment. We applied a data frame merging scheme to improve the accuracy of linear structure extraction. The proposed window scheme calculates a single indicator that describes the effect of the window size over the entire path of the moving object using entropy analysis. We also presented various experimental results to verify the accuracy improvement of the proposed scheme. Our future work will include the development of a dynamic optimization algorithm that determines the optimal result at each point of interest and a technique to mitigate scattered point cloud data when merging data frames during fast movements. We will also attempt to apply IMU-based calibration methods to reduce noise in the raw point cloud data.

Author Contributions

Conceptualization, Y.-H.J. and H.M.; Methodology, T.K.; Software, J.J.; Validation, T.K., J.J. and H.M.; Formal Analysis, Y.-H.J.; Investigation, J.J.; Resources, H.M.; Data Curation, T.K.; Writing—Original Draft Preparation, J.J. and H.M.; Writing—Review & Editing, Y.-H.J.; Visualization, T.K.; Supervision, Y.-H.J. and H.M.; Project Administration, Y.-H.J. and H.M.; Funding Acquisition, Y.-H.J. and H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant (code: 22SCIP-C151582-04) from the Construction Technologies Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government and the Gachon University research fund of 2021 (GCU-2021-202008450001).

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Raj, T.; Hashim, F.H.; Huddin, A.B.; Ibrahim, M.F.; Hussain, A. A Survey on LiDAR Scanning Mechanisms. Electronics 2020, 9, 741.
  2. Roriz, R.; Cabral, J.; Gomes, T. Automotive LiDAR Technology: A Survey. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6282–6297.
  3. Zheng, X.; Zhu, J. Efficient LiDAR Odometry for Autonomous Driving. IEEE Robot. Autom. Lett. 2021, 6, 8458–8465.
  4. Khan, M.U.; Zaidi, S.A.; Ishtiaq, A.; Bukhari, S.U.; Samer, S.; Farman, A. A Comparative Survey of LiDAR-SLAM and LiDAR based Sensor Technologies. In Proceedings of the 2021 Mohammad Ali Jinnah University International Conference on Computing (MAJICC), Karachi, Pakistan, 15–17 July 2021; pp. 1–8.
  5. Wang, H.; Wang, C.; Xie, L. Intensity-SLAM: Intensity Assisted Localization and Mapping for Large Scale Environment. IEEE Robot. Autom. Lett. 2021, 6, 1715–1721.
  6. Zou, Q.; Sun, Q.; Chen, L.; Nie, B.; Li, Q. A Comparative Analysis of LiDAR SLAM-Based Indoor Navigation for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6907–6921.
  7. Zhang, Y.; Wang, J.; Wang, X.; Dolan, J.M. Road-Segmentation-Based Curb Detection Method for Self-Driving via a 3D-LiDAR Sensor. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3981–3991.
  8. Zhang, Y.; Wang, L.; Jiang, X.; Zeng, Y.; Dai, Y. An efficient LiDAR-based localization method for self-driving cars in dynamic environments. Robotica 2021, 40, 38–55.
  9. Zhao, L.; Wang, M.; Su, S.; Liu, T.; Yang, Y. Dynamic Object Tracking for Self-Driving Cars Using Monocular Camera and LIDAR. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 10865–10872.
  10. Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather Influence and Classification with Automotive Lidar Sensors. In Proceedings of the IEEE Intelligent Vehicles Symposium, Paris, France, 9–12 June 2019; pp. 1527–1534.
  11. Cheng, X.; Hu, X.; Tan, K.; Wang, L.; Yang, L. Automatic Detection of Shield Tunnel Leakages Based on Terrestrial Mobile LiDAR Intensity Images Using Deep Learning. IEEE Access 2021, 9, 55300–55310.
  12. Luo, C.; Sha, H.; Ling, C.; Li, J. Intelligent Detection for Tunnel Shotcrete Spray Using Deep Learning and LiDAR. IEEE Access 2020, 8, 1755–1766.
  13. Ma, L.; Li, Y.; Li, J.; Yu, Y.; Junior, J.M.; Goncalves, W.N.; Chapman, M.A. Capsule-Based Networks for Road Marking Extraction and Classification From Mobile LiDAR Point Clouds. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1981–1995.
  14. De Silva, V.; Roche, J.; Kondoz, A. Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots. Sensors 2018, 18, 2730.
  15. Hu, T.; Sun, X.; Su, Y.; Guan, H.; Sun, Q.; Kelly, M.; Guo, Q. Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications. Remote Sens. 2021, 13, 77.
  16. Park, C.; Moghadam, P.; Williams, J.L.; Kim, S.; Sridharan, S.; Fookes, C. Elasticity Meets Continuous-Time: Map-Centric Dense 3D LiDAR SLAM. IEEE Trans. Robot. 2021, 38, 978–997.
  17. Karimi, M.; Oelsch, M.; Stengel, O.; Babaians, E.; Steinbach, E. LoLa-SLAM: Low-Latency LiDAR SLAM Using Continuous Scan Slicing. IEEE Robot. Autom. Lett. 2021, 6, 2248–2255.
  18. Zhou, L.; Koppel, D.; Kaess, M. LiDAR SLAM With Plane Adjustment for Indoor Environment. IEEE Robot. Autom. Lett. 2021, 6, 7073–7080.
  19. Chen, Y.; Hao, C.; Wu, W.; Wu, E. Robust dense reconstruction by range merging based on confidence estimation. Sci. China Inf. Sci. 2016, 59, 092103.
  20. Morita, K.; Hashimoto, M.; Takahashi, K. Point-Cloud Mapping and Merging Using Mobile Laser Scanner. In Proceedings of the 3rd IEEE International Conference on Robotic Computing, Naples, Italy, 25–27 February 2019; pp. 417–418.
  21. Gao, X.; Shen, S.; Zhou, Y.; Cui, H.; Zhu, L.; Hu, Z. Ancient Chinese architecture 3D preservation by merging ground and aerial point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 72–84.
  22. Serafin, J.; Grisetti, G. Using extended measurements and scene merging for efficient and robust point cloud registration. Robot. Auton. Syst. 2017, 92, 91–106.
  23. Wang, D.; Brunner, J.; Ma, Z.; Lu, H.; Hollaus, M.; Pang, Y.; Pfeifer, N. Separating Tree Photosynthetic and Non-Photosynthetic Components from Point Cloud Data Using Dynamic Segment Merging. Forests 2018, 9, 252.
  24. Kwon, S.; Park, J.-W.; Moon, D.; Jung, S.; Park, H. Smart Merging Method for Hybrid Point Cloud Data using UAV and LIDAR in Earthwork Construction. Procedia Eng. 2017, 196, 21–28.
  25. Wu, W.; Huang, Y.; Kurachi, R.; Zeng, G.; Xie, G.; Li, R.; Li, K. Sliding Window Optimized Information Entropy Analysis Method for Intrusion Detection on In-Vehicle Networks. IEEE Access 2018, 6, 45233–45245.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
