Online Extrinsic Calibration on LiDAR-Camera System with LiDAR Intensity Attention and Structural Consistency Loss
Abstract
1. Introduction
- (i) Considering that objects with higher LiDAR intensity provide more salient co-observed constraints, LIA-Net is proposed to use the LiDAR intensity as an attention feature map and generate salient CoC features;
- (ii) To reduce the risk of vanishing gradients in the later training stage, the SC loss is presented, which approximately reduces the alignment error between the LiDAR point cloud and the RGB image by minimizing the difference between the LiDAR image projected with the estimated extrinsics and the one projected with the GT extrinsics (a sketch of both components follows this list);
- (iii) Taking advantage of both LIA-Net and the SC loss, the deep learning based extrinsic calibration method LIA-SC-Net is presented to estimate accurate extrinsic parameters of a LiDAR-camera system.
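To make contributions (i) and (ii) concrete, here is a minimal PyTorch sketch, not the paper's exact architecture: it assumes the intensity attention is a sigmoid-gated 1×1 convolution over the projected intensity image, and that the SC loss is an L1 gap over pixels covered by both projected LiDAR images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntensityAttention(nn.Module):
    """Hypothetical attention layer: gate fused features with the projected
    LiDAR intensity image so that high-intensity (more salient) regions
    contribute more to the CoC features."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 convolution expands the single-channel intensity map
        # into one gate per feature channel.
        self.to_gate = nn.Conv2d(1, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor, intensity: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) fused image/LiDAR features
        # intensity: (B, 1, H, W) LiDAR intensity projected onto the image plane
        gate = torch.sigmoid(self.to_gate(intensity))  # salience weights in [0, 1]
        return feat * gate


def structural_consistency_loss(proj_pred: torch.Tensor,
                                proj_gt: torch.Tensor) -> torch.Tensor:
    """Assumed form of the SC loss: L1 discrepancy between the LiDAR image
    projected with the predicted extrinsics and the one projected with the
    GT extrinsics, compared only on pixels hit by LiDAR points in both."""
    valid = (proj_pred > 0) & (proj_gt > 0)
    return F.l1_loss(proj_pred[valid], proj_gt[valid])
```

A multiplicative sigmoid gate of this kind keeps gradients flowing through low-intensity regions while still emphasizing the salient ones.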
2. Related Works
2.1. Calibration Object Based Extrinsic Calibration Method
2.2. Information Fusion Based Extrinsic Calibration Method
2.3. Deep Learning Based Extrinsic Calibration Method
2.4. Analysis of the Deep Learning Based Extrinsic Calibration Method
3. Proposed Method
3.1. Problem Statement and Method Overview
3.2. LiDAR Intensity Attention Based Backbone Network
3.3. Parameters Regression
3.4. Structural Consistency Loss
3.5. Iterative Inference Scheme
4. Experimental Results
4.1. Experiment Configuration
4.1.1. Dataset and Preparations
4.1.2. Implementations
4.1.3. Evaluation Metrics
4.2. Verification of the Proposed Method
4.2.1. Verification on Dense Operation
4.2.2. Verification on LiDAR Intensity Attention
4.2.3. Verification on Structural Consistency Loss
4.2.4. Verification on Iterative Inference and Time-Consuming Test
4.3. Comparison Results
4.3.1. Comparisons with State-of-the-Art Methods
4.3.2. Calibration Visualization
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
The following abbreviations are used in this manuscript:

Abbreviation | Meaning
---|---
LiDAR | Light detection and ranging
MI | Mutual information
CNN | Convolutional neural network
LIA-Net | LiDAR intensity attention based backbone network
SC | Structural consistency
FOV | Field of view
FPS | Frames per second
GT | Ground truth
Net | Network
CoC | Co-observed calibration
PnP | Perspective-n-point
DLT | Direct linear transformation
BA | Bundle adjustment
ICP | Iterative closest point
BLSMI | Bagged least-squares mutual information
VO | Visual odometry
FC | Fully connected
CVC | Cost volume computation
Appendix A
Appendix A.1. Projection Details
Parameter | Value
---|---
Focal length f_x | 721.537 pixel
Focal length f_y | 721.537 pixel
Skew s | 0.0 pixel
Principal point c_x | 609.559 pixel
Principal point c_y | 216.379 pixel
LiDAR horizontal angular resolution | 0.08 deg
LiDAR vertical angular resolution | 0.40 deg
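These quantities enter the standard pinhole projection of a LiDAR point P_L into the image plane; the symbols below follow the parameter labels reconstructed above, and the formula is the usual camera model rather than anything specific to this paper.

```latex
z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = \mathbf{K}\left(\mathbf{R}\,\mathbf{P}_L + \mathbf{t}\right),
\qquad
\mathbf{K} = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},
```

where (R, t) are the extrinsic rotation and translation from the LiDAR frame to the camera frame and z_c is the depth of the point in the camera frame.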
Appendix A.2. Selection of k in the Dense Operation
Appendix A.3. Details of the Cost Volume Computation Module
Appendix A.4. Relation of the Quaternion and Rotation Matrix
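The appendix body is not reproduced here; for completeness, the standard (Hamilton-convention, assumed here) relation between a unit quaternion q = (q_w, q_x, q_y, q_z) and its rotation matrix is:

```latex
\mathbf{R}(\mathbf{q}) =
\begin{bmatrix}
1 - 2(q_y^2 + q_z^2) & 2(q_x q_y - q_w q_z) & 2(q_x q_z + q_w q_y) \\
2(q_x q_y + q_w q_z) & 1 - 2(q_x^2 + q_z^2) & 2(q_y q_z - q_w q_x) \\
2(q_x q_z - q_w q_y) & 2(q_y q_z + q_w q_x) & 1 - 2(q_x^2 + q_y^2)
\end{bmatrix}.
```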
Levels | 0 | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---|---
R/deg | 0.0 | 4.0 | 8.0 | 12.0 | 16.0 | 20.0
D/m | 0.0 | 0.30 | 0.60 | 0.90 | 1.20 | 1.50
Training | × | × | × | × | × |
Testing | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
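A perturbation within a given level's bounds can be sampled as a random rigid miscalibration whose rotation magnitude and translation norm stay below R and D for that level. The sketch below is one plausible way to do this; the sampling scheme (uniform axis, uniform magnitude) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def sample_miscalibration(max_rot_deg: float, max_trans_m: float, rng=np.random):
    """Sample a random perturbation (R, t) within a level's bounds."""
    # Random rotation: uniform axis, magnitude up to max_rot_deg (axis-angle).
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(0.0, max_rot_deg))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])        # cross-product matrix of the axis
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
    # Random translation with norm up to max_trans_m.
    t = rng.normal(size=3)
    t = t / np.linalg.norm(t) * rng.uniform(0.0, max_trans_m)
    return R, t

# Example: a level-5 perturbation (up to 20 deg and 1.5 m).
R, t = sample_miscalibration(20.0, 1.5)
```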
Methods | Dense Operation | Encoder | LIA-Net | Lreg | LSC
---|---|---|---|---|---
Baseline-1 | × | ✓ | × | ✓ | ×
Baseline-2 | ✓ | ✓ | × | ✓ | ×
Baseline-3 (LIA+reg) | ✓ | × | ✓ | ✓ | ×
Baseline-4 (LIA+SC) | ✓ | × | ✓ | × | ✓
Baseline-5 (LIA-SC-Net) | ✓ | × | ✓ | ✓ | ✓
Methods | Rotation Error/deg | Translation Error/m
---|---|---
Baseline-1 | 0.869/0.062 | 0.0693/0.0087
Baseline-2 | 0.812/0.054 | 0.0648/0.0074
Baseline-3 | 0.786/0.052 | 0.0601/0.0068
Baseline-4 | 0.768/0.047 | 0.0538/0.0064
Baseline-5 | 0.751/0.045 | 0.0447/0.0061
Gain | 15.71% | 55.03%
M | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
Er/deg | 0.751 | 0.654 | 0.538 | 0.525 | 0.525
Et/m | 0.0447 | 0.0402 | 0.0396 | 0.0396 | 0.0396
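A plausible reading of the iterative inference scheme behind this table: the network's predicted transform is composed onto the current extrinsic estimate, the LiDAR image is re-projected, and the network runs again. The sketch below illustrates this; `net`, `project_fn`, and the left-composition convention are assumptions, not the paper's exact interfaces.

```python
def iterative_inference(net, project_fn, image, points, T_init, num_iters=3):
    """Sketch of iterative refinement. `net` and `project_fn` are assumed
    user-supplied callables: `project_fn(points, T)` renders the LiDAR
    depth/intensity image under extrinsics T, and `net(image, lidar_image)`
    returns a 4x4 correction transform."""
    T = T_init.copy()                         # 4x4 homogeneous extrinsic matrix
    for _ in range(num_iters):
        lidar_image = project_fn(points, T)   # re-project with current estimate
        delta = net(image, lidar_image)       # predicted correction
        T = delta @ T                         # left-compose the correction
    return T
```

The table suggests the errors saturate after about M = 3 iterations, so a small fixed iteration count suffices in practice.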
Module | Preparation | LIA-Net | Regression | Total
---|---|---|---|---
Time/ms | 7.6 | 13.5 | 8.1 | 29.2
Methods | Rotation Error/deg | Translation Error/m
---|---|---
CalibNet | 0.621 | 0.0495
CMRNet | 0.557 | 0.0437
LIA-SC-Net (Ours) | 0.525 | 0.0396
Methods | CalibNet | CMRNet | LIA-SC-Net
---|---|---|---
Time/ms | 36.8 | 14.7 | 29.2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
An, P.; Gao, Y.; Wang, L.; Chen, Y.; Ma, J. Online Extrinsic Calibration on LiDAR-Camera System with LiDAR Intensity Attention and Structural Consistency Loss. Remote Sens. 2022, 14, 2525. https://doi.org/10.3390/rs14112525