Development of a High-Precision Lidar System and Improvement of Key Steps for Railway Obstacle Detection Algorithm
Abstract
1. Introduction
- (a) We have developed a region-adjustable 3D mechanical lidar suitable for railway obstacle detection, covering the entire process from hardware system selection to software implementation, and we performed accurate corrections on the lidar point clouds.
- (b) We have developed a novel set of obstacle detection algorithms for lidar point clouds that enables high-precision monitoring of obstacles in the track area.
- (c) We have improved the Lo-RANSAC and Robust ICP algorithms and verified them experimentally, enhancing their robustness and operational efficiency and making them applicable in most cases.
2. Overall System Structure
Precise Coordinate Error Correction
3. Point Cloud Obstacle Detection Algorithm
- (a) Large-scale ground point cloud segmentation and railway model extraction using RS-Lo-RANSAC;
- (b) Fine registration of the background point cloud (BP) and the foreground point cloud (FP, which contains the obstacles) using an improved Robust ICP;
- (c) Obstacle recognition based on the VFOR algorithm.
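At a high level, the three stages can be sketched as a minimal pipeline. Every function below is a simplified stand-in for the corresponding step, not the authors' implementation: the ground filter, the registration, and the voxel-based detector are reduced to one-line approximations, and the array sizes, thresholds, and voxel size are illustrative assumptions.

```python
import numpy as np

def segment_ground(points: np.ndarray) -> np.ndarray:
    """Stand-in for RS-Lo-RANSAC ground segmentation: keep points close
    to an assumed ground plane z = 0 (threshold is illustrative)."""
    return points[np.abs(points[:, 2]) < 0.2]

def register(fp: np.ndarray, bp: np.ndarray) -> np.ndarray:
    """Stand-in for the improved Robust ICP: align FP to BP.
    Here only the centroid offset is removed as a crude approximation."""
    return fp - fp.mean(axis=0) + bp.mean(axis=0)

def detect_obstacles(fused: np.ndarray, voxel: float = 0.5) -> set:
    """Stand-in for VFOR: report occupied voxels as obstacle candidates."""
    return set(map(tuple, np.floor(fused / voxel).astype(int)))

# Illustrative run on synthetic data
rng = np.random.default_rng(0)
bp = rng.uniform(-5, 5, (1000, 3))              # background scan
fp = bp + np.array([0.3, -0.2, 0.0])            # foreground scan, slightly shifted
ground = segment_ground(bp)                     # (a) ground/track extraction
aligned = register(fp, bp)                      # (b) BP-FP fine registration
obstacles = detect_obstacles(np.vstack([bp, aligned]))  # (c) voxel-based recognition
```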
3.1. Large-Scale Ground Point Cloud Segmentation and Railway Track Extraction
- (a) Randomly sample points within an adaptively determined range capable of fitting the model, and use these points for least-squares fitting.
- (b) Iterate in a loop and perform local optimization whenever the best model so far is identified, using the Total Least Squares (TLS) method to refit the model on its inlier points.
- (c) Terminate the iterations early on probabilistic grounds: once certain conditions derived from probability statistics are satisfied, the model can be considered stable and the iteration process stopped.
Algorithm 1: RS-Lo-RANSAC algorithm used for ground segmentation and track extraction

Input: point cloud P; inlier distance threshold; maximum number of iterations
Output: best plane model and its inlier set
1: for each iteration do
2:     S ← randomly sampled points from the adaptively determined range
3:     calculate the current model from S by least squares
4:     calculate the inlier points of the plane
5:     if the current model is the best so far then
6:         // run Local Optimization
7:         refit the model estimated by TLS on the inliers
8:         recalculate the inliers and update the best model
9:     end if
10:    if the probabilistic early-termination condition holds then break
11: end for
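The loop in Algorithm 1 can be sketched in a simplified form. This is an illustrative approximation rather than the authors' implementation: the adaptive sampling range is reduced to a fixed 3-point sample, the early-termination test to a plain inlier-ratio check, and the local-optimization refit uses an SVD-based orthogonal (total least-squares) plane fit.

```python
import numpy as np

def fit_plane(pts):
    """Orthogonal (total least-squares) plane fit: returns a unit normal n
    and offset d such that n·x + d = 0 for points x on the plane."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]                       # direction of smallest variance
    return n, -n @ c

def ransac_plane(points, thresh=0.05, max_iters=200, stop_ratio=0.95, seed=0):
    """RANSAC plane fit with a local-optimization refit on new best models."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(max_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            # Local optimization: refit on the full inlier set (TLS in the paper)
            n, d = fit_plane(points[inliers])
            inliers = np.abs(points @ n + d) < thresh
            best_inliers = inliers
        if best_inliers.mean() > stop_ratio:   # simplified early termination
            break
    return best_inliers
```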
3.2. Fine Registration of BP and FP
- (a) Given an initial source point cloud P and an input set Q, find the subset X_k in P that corresponds to a set Y_k in Q, where k represents the number of ICP iterations and is initialized to zero.
- (b) Calculate the mapping matrix T_k from the point-set correspondence between X_k and Y_k (Equation (18)).
- (c) Apply T_k to the input source point cloud for pose transformation, P_{k+1} = T_k P_k, use P_{k+1} as the input source for the next round, and calculate the mean squared point-matching error d_k.
- (d) If d_k is less than a given threshold, exit the loop and take the current mapping matrix T as the optimal solution. Otherwise, continue iterating until the maximum number of iterations is reached.
Algorithm 2: Improved point-to-point Robust ICP using Welsch's function

Input: initial transformation; maximum iteration count; convergence threshold of the iterative function; convergence threshold of the previous iteration; number of iterations; energy threshold
Output: optimal transformation
1: while the outer loop has not converged do
2:     update the Welsch parameter;
3:     while the inner iteration count is below the maximum do
4:         find correspondences and compute the Welsch-weighted energy
5:         if the energy is below the energy threshold then
6:             accept the step;
7:         else break;
8:         end if
9:         if the energy change falls below the convergence threshold then break; end if
10:        update the transformation // using Anderson acceleration [33,34]
11:    end while
12: end while
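A minimal sketch of Welsch-weighted point-to-point ICP follows. The nearest-neighbour search and the closed-form weighted rigid fit are standard components; the brute-force correspondence step, the fixed Welsch parameter nu, and the flat loop structure are simplifying assumptions, and the Anderson acceleration used in Algorithm 2 is omitted here.

```python
import numpy as np

def welsch_weight(r, nu):
    """Welsch's function weight: w(r) = exp(-r^2 / (2 nu^2))."""
    return np.exp(-r**2 / (2 * nu**2))

def weighted_rigid_fit(X, Y, w):
    """Closed-form weighted least-squares rigid transform (R, t) mapping X onto Y
    (weighted Kabsch algorithm)."""
    w = w / w.sum()
    cx, cy = w @ X, w @ Y
    H = (X - cx).T @ ((Y - cy) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    return R, cy - R @ cx

def robust_icp(P, Q, nu=0.5, iters=20):
    """Point-to-point ICP with Welsch weights; brute-force correspondences."""
    P = P.copy()
    for _ in range(iters):
        d2 = ((P[:, None, :] - Q[None, :, :])**2).sum(-1)
        idx = d2.argmin(axis=1)                       # nearest neighbour in Q
        r = np.sqrt(d2[np.arange(len(P)), idx])
        R, t = weighted_rigid_fit(P, Q[idx], welsch_weight(r, nu))
        P = P @ R.T + t                               # apply the rigid update
    return P
```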
3.3. Extracting Voxel Features from Fused Point Clouds
4. Experiment
4.1. Point Cloud Coordinate Correction
4.2. RS-Lo-RANSAC
- (a) The visual quality of the regions extracted along the track in the point cloud.
- (b) The computational efficiency of the algorithm, measured by its execution time.
- (c) The accuracy of ground segmentation, assessed by calculating the Root Mean Square Error (RMSE).
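The RMSE metric in (c) can be computed as the root mean squared point-to-plane distance over the segmented ground points. The sketch below is illustrative: the plane parameters, noise level, and millimetre units are assumptions, not the paper's experimental setup.

```python
import numpy as np

def plane_rmse(points, n, d):
    """RMSE of point-to-plane distances for the plane n·x + d = 0
    (n must be a unit normal; units follow the input coordinates)."""
    dist = points @ n + d
    return float(np.sqrt(np.mean(dist**2)))

# Illustrative: points near the plane z = 0 with 10 mm Gaussian noise
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, (5000, 2)) * 1000,  # x, y in mm
                       rng.normal(0, 10, 5000)])              # z noise, mm
rmse = plane_rmse(pts, np.array([0.0, 0.0, 1.0]), 0.0)
```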
4.3. Improved Robust ICP
- (a) Convergence ability, measured by the variation of the Gaussian weights during the correspondence step.
- (b) The Mean Square Error (MSE) at each iteration.
- (c) The runtime t of the algorithm.
- (d) The interval between fixed marker points in the foreground and background point clouds, which indicates the level of registration: the smaller this interval, the better the registration effect.
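The MSE in (b) and a simple weight-variation convergence test for (a) can be sketched as follows; the convergence threshold is an illustrative assumption, and the weight comparison stands in for the paper's exact variation formula.

```python
import numpy as np

def mse(P, Q):
    """Mean squared error between corresponded point sets
    (mm^2 when the inputs are given in mm)."""
    return float(np.mean(((P - Q)**2).sum(axis=1)))

def weights_converged(w_prev, w_curr, eps=1e-3):
    """Declare convergence when the robust weights change little
    between consecutive correspondence steps."""
    return float(np.abs(w_curr - w_prev).mean()) < eps
```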
4.4. VFOR
5. Conclusions
- (a) Ground filtering and track extraction based on RS-Lo-RANSAC;
- (b) Fusion of the BP and the FP using an improved Robust ICP;
- (c) Obstacle recognition based on voxel features extracted from the fused point cloud map.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ye, T.; Zhang, X.; Zhang, Y.; Liu, J. Railway Traffic Object Detection Using Differential Feature Fusion Convolution Neural Network. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1375–1387.
- Pan, H.; Li, Y.; Wang, H.; Tian, X. Railway Obstacle Intrusion Detection Based on Convolution Neural Network Multitask Learning. Electronics 2022, 11, 2697.
- Kapoor, R.; Goel, R.; Sharma, A. An intelligent railway surveillance framework based on recognition of object and railway track using deep learning. Multimed. Tools Appl. 2022, 81, 21083–21109.
- Ristic-Durrant, D.; Franke, M.; Michels, K. A Review of Vision-Based On-Board Obstacle Detection and Distance Estimation in Railways. Sensors 2021, 21, 3452.
- Dodge, D.; Yilmaz, M. Convex Vision-Based Negative Obstacle Detection Framework for Autonomous Vehicles. IEEE Trans. Intell. Veh. 2023, 8, 778–789.
- Perić, S.; Milojković, M.; Stan, S.-D.; Banić, M.; Antić, D. Dealing with Low Quality Images in Railway Obstacle Detection System. Appl. Sci. 2022, 12, 3041.
- Kyatsandra, A.K.; Saket, R.K.; Kumar, S.; Sarita, K.; Vardhan, A.S.S.; Vardhan, A.S.S. Development of TRINETRA: A Sensor Based Vision Enhancement System for Obstacle Detection on Railway Tracks. IEEE Sens. J. 2022, 22, 3147–3156.
- Shao, W.; Zhang, H.; Wu, Y.; Sheng, N.; Zhou, S. Application of fusion 2D lidar and binocular vision in robot locating obstacles. J. Intell. Fuzzy Syst. 2021, 41, 4387–4394.
- Miao, Y.; Tang, Y.; Alzahrani, B.A.; Barnawi, A.; Alafif, T.; Hu, L. Airborne Lidar Assisted Obstacle Recognition and Intrusion Detection Towards Unmanned Aerial Vehicle: Architecture, Modeling and Evaluation. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4531–4540.
- Wang, D.; Watkins, C.; Xie, H. MEMS Mirrors for Lidar: A Review. Micromachines 2020, 11, 456.
- Zheng, L.; Zhang, P.; Tan, J.; Li, F. The Obstacle Detection Method of UAV Based on 2D Lidar. IEEE Access 2019, 7, 163437–163448.
- Amaral, V.; Marques, F.; Lourenço, A.; Barata, J.; Santana, P. Laser-Based Obstacle Detection at Railway Level Crossings. J. Sens. 2016, 2016, 1719230.
- Xie, D.; Xu, Y.; Wang, R. Obstacle detection and tracking method for autonomous vehicle based on three-dimensional LiDAR. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419831587.
- Sun, Y.; Zuo, W.; Huang, H.; Cai, P.; Liu, M. PointMoSeg: Sparse Tensor-Based End-to-End Moving-Obstacle Segmentation in 3-D Lidar Point Clouds for Autonomous Driving. IEEE Robot. Autom. Lett. 2021, 6, 510–517.
- Gao, F.; Li, C.; Zhang, B. A Dynamic Clustering Algorithm for Lidar Obstacle Detection of Autonomous Driving System. IEEE Sens. J. 2021, 21, 25922–25930.
- Anand, B.; Senapati, M.; Barsaiyan, V.; Rajalakshmi, P. LiDAR-INS/GNSS-Based Real-Time Ground Removal, Segmentation, and Georeferencing Framework for Smart Transportation. IEEE Trans. Instrum. Meas. 2021, 70, 1–11.
- Fang, J.; Zhou, D.; Yan, F.; Zhao, T.; Zhang, F.; Ma, Y.; Wang, L.; Yang, R. Augmented Lidar Simulator for Autonomous Driving. IEEE Robot. Autom. Lett. 2020, 5, 1931–1938.
- Chen, J.; Kira, Z.; Cho, Y.K. LRGNet: Learnable Region Growing for Class-Agnostic Point Cloud Segmentation. IEEE Robot. Autom. Lett. 2021, 6, 2799–2806.
- Ci, W.Y.; Xu, T.; Lin, R.Z.; Lu, S.; Wu, X.L.; Xuan, J.Y. A Novel Method for Obstacle Detection in Front of Vehicles Based on the Local Spatial Features of Point Cloud. Remote Sens. 2023, 15, 1044.
- Li, J.; Li, R.; Wang, J.Z.; Yan, M. Obstacle information detection method based on multiframe three-dimensional lidar point cloud fusion. Opt. Eng. 2019, 58, 116102.
- Fischler, M.A.; Bolles, R.C. Random sample consensus. Commun. ACM 1981, 24, 381–395.
- Sangappa, H.K.; Ramakrishnan, K.R. A probabilistic analysis of a common RANSAC heuristic. Mach. Vis. Appl. 2018, 30, 71–89.
- Chum, O.; Matas, J.; Kittler, J. Locally optimized RANSAC. In Proceedings of the Pattern Recognition: 25th DAGM Symposium, Magdeburg, Germany, 10–12 September 2003; pp. 236–243.
- Huang, X.; Walker, I.; Birchfield, S. Occlusion-aware multi-view reconstruction of articulated objects for manipulation. Robot. Auton. Syst. 2014, 62, 497–505.
- Jiang, S.; Jiang, W.; Li, L.; Wang, L.; Huang, W. Reliable and Efficient UAV Image Matching via Geometric Constraints Structured by Delaunay Triangulation. Remote Sens. 2020, 12, 3390.
- Estépar, R.; Brun, A.; Westin, C.-F. Robust Generalized Total Least Squares Iterative Closest Point Registration. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2004: 7th International Conference, Saint-Malo, France, 26–29 September 2004; pp. 234–241.
- Zhong, S.; Guo, M.; Lv, R.; Chen, J.; Xie, Z.; Liu, Z. A Robust Rigid Registration Framework of 3D Indoor Scene Point Clouds Based on RGB-D Information. Remote Sens. 2021, 13, 4755.
- Senin, N.; Colosimo, B.M.; Pacella, M. Point set augmentation through fitting for enhanced ICP registration of point clouds in multisensor coordinate metrology. Robot. Comput.-Integr. Manuf. 2013, 29, 39–52.
- He, Y.; Yang, J.; Hou, X.; Pang, S.; Chen, J. ICP registration with DCA descriptor for 3D point clouds. Opt. Express 2021, 29, 20423–20439.
- Li, W.; Song, P. A modified ICP algorithm based on dynamic adjustment factor for registration of point cloud and CAD model. Pattern Recognit. Lett. 2015, 65, 88–94.
- Chetverikov, D.; Stepanov, D.; Krsek, P. Robust Euclidean alignment of 3D point sets: The trimmed iterative closest point algorithm. Image Vis. Comput. 2005, 23, 299–309.
- Bouaziz, S.; Tagliasacchi, A.; Pauly, M. Sparse Iterative Closest Point. Comput. Graph. Forum 2013, 32, 113–123.
- Zhang, J.; Yao, Y.; Deng, B. Fast and Robust Iterative Closest Point. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3450–3466.
- Walker, H.F.; Ni, P. Anderson Acceleration for Fixed-Point Iterations. SIAM J. Numer. Anal. 2011, 49, 1715–1735.
- Kong, S.; Shi, F.; Wang, C.; Xu, C. Point Cloud Generation from Multiple Angles of Voxel Grids. IEEE Access 2019, 7, 160436–160448.
- Shi, G.; Wang, K.; Li, R.; Ma, C. Real-Time Point Cloud Object Detection via Voxel-Point Geometry Abstraction. IEEE Trans. Intell. Transp. Syst. 2023, 24, 5971–5982.
- Tzamarias, D.E.O.; Chow, K.; Blanes, I.; Serra-Sagrista, J. Fast Run-Length Compression of Point Cloud Geometry. IEEE Trans. Image Process. 2022, 31, 4490–4501.
- Koh, N.; Jayaraman, P.K.; Zheng, J. Truncated octree and its applications. Vis. Comput. 2021, 38, 1167–1179.
- Young, M.; Pretty, C.; McCulloch, J.; Green, R. Sparse point cloud registration and aggregation with mesh-based generalized iterative closest point. J. Field Robot. 2021, 38, 1078–1091.
- Das, A.; Waslander, S.L. Scan registration using segmented region growing NDT. Int. J. Robot. Res. 2014, 33, 1645–1663.
| Scene | RANSAC RMSE (mm) | RANSAC Time (s) | Lo-RANSAC RMSE (mm) | Lo-RANSAC Time (s) | RS-Lo-RANSAC RMSE (mm) | RS-Lo-RANSAC Time (s) |
|---|---|---|---|---|---|---|
| Scene A | 23.4768 | 0.0115 | 23.5502 | 0.02186 | 23.5212 | 0.0207 |
| | 24.2188 | 0.0112 | 29.5557 | 0.02675 | 26.8629 | 0.0237 |
| | 25.0101 | 0.0108 | 30.2094 | 0.02251 | 25.4805 | 0.0211 |
| Scene B | 22.8415 | 0.0112 | 36.2378 | 0.0237 | 28.5658 | 0.0212 |
| | 23.7015 | 0.0133 | 37.8211 | 0.0283 | 27.0003 | 0.0266 |
| | 24.6473 | 0.0110 | 32.8467 | 0.0255 | 29.3878 | 0.0231 |
| Scene C | 45.4117 | 0.0124 | 26.5277 | 0.0347 | 28.0463 | 0.0314 |
| | 48.2419 | 0.0131 | 37.8211 | 0.00301 | 37.9345 | 0.0219 |
| | 51.1454 | 0.0143 | 39.2738 | 0.0308 | 39.2418 | 0.0254 |
| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| RANSAC (mm) | 26.22 | 36.47 | 29.35 | 46.25 | 26.09 | 26.44 | 28.57 |
| Lo-RANSAC (%) | −7.28 | −28.58 | −6.71 | 3.89 | −4.10 | −6.77 | −11.34 |
| RS-Lo-RANSAC (%) | −14.26 | −32.60 | −8.72 | −4.97 | −5.67 | −6.88 | −10.12 |
| Method | MSE (mm²) | Time Cost (s) | Marker Interval (cm) |
|---|---|---|---|
| ICP | 395.87 | 11.56 | 3.407 |
| GICP | 383.11 | 1.02 | 3.502 |
| NDT | 383.83 | 0.75 | 3.398 |
| RICP | 144.53 | 5.82 | 3.378 |
| Ours | 171.12 | 0.38 | 3.342 |
| Obstacles | Size (cm) | ±5 m | ±10 m | ±15 m | ±20 m | ±25 m |
|---|---|---|---|---|---|---|
| bin | 35 × 20 × 18 | 100% | 100% | 100% | 100% | 100% |
| person | 50 × 50 × 170 | 100% | 100% | 100% | 100% | 100% |
| train | 500 × 250 × 350 | 100% | 100% | 100% | 100% | 100% |
| box | 30 × 30 × 30 | 100% | 100% | 100% | 100% | 100% |
| box | 20 × 20 × 20 | 100% | 100% | 100% | 100% | 100% |
| box | 15 × 15 × 15 | 100% | 100% | 100% | 96% | 91% |
Share and Cite
Nan, Z.; Zhu, G.; Zhang, X.; Lin, X.; Yang, Y. Development of a High-Precision Lidar System and Improvement of Key Steps for Railway Obstacle Detection Algorithm. Remote Sens. 2024, 16, 1761. https://doi.org/10.3390/rs16101761