**6. Conclusions and Future Outlook**

The development of SLAM technology based on 3D LIDAR has been rapid in recent years, and excellent multi-sensor fusion works have emerged in a steady stream. Throughout the development of fusion SLAM, the field has moved from filter-based probabilistic methods to information-based optimization methods; from raw-data-assisted front-end fusion to error-coupling-based back-end optimization; from single-sensor systems to complex systems with multiple coupled subsystems; and from independent error models to tightly coupled complete graphical models. Diverse application scenarios and demands have promoted the diversity of SLAM technology, while continuous advances in sensor technology provide its foundation and impetus.

This paper classifies and summarizes recent papers and systems according to the data-coupling method of the SLAM system. The main innovations of each work are mentioned alongside the details of its data association, and the strengths and weaknesses of each work are assessed through qualitative analysis of the system composition. However, our work is far from exhaustive: this paper does not list all related works, and only a portion of representative works are shown. There are also many deep-learning-based works on multi-sensor fusion, mostly used in environment perception, object detection, and semantic segmentation, which may play auxiliary roles in SLAM systems.

Multi-sensor fusion is key to building robust SLAM systems. Complex multi-sensor systems need to be lightweight, accurate, scalable, and versatile. From the experimental part, we know that dynamic environments, object occlusion, and long-corridor environments are the key challenges for feature-based SLAM methods. Combining the sensors with the control model of the robot or vehicle can effectively alleviate odometry degradation in such special cases. As the number of sensors, the amount of data, and the range of application scenarios continue to grow, it becomes difficult for SLAM systems to further improve the accuracy of positioning and mapping within a specified computing time. SLAM therefore still has large development space across various scenes. Distributed multi-robot collaboration, land–air collaboration, and sea–air collaboration systems can effectively address the problems faced in large scenes. In addition, hardware acceleration and parallel processing of feature extraction and pose optimization can effectively relieve the computational pressure imposed by multi-sensor data fusion.

On the other hand, deep learning is undoubtedly one of the hottest directions at present, and there have been many efforts to combine it with SLAM systems. Deep learning can be seen in almost all key steps, such as feature extraction, depth estimation, environment perception, pose estimation, and semantic mapping. In current works, however, deep learning has replaced only limited parts of the SLAM system: for example, optimizing monocular depth estimation to obtain landmark points, directly estimating pose without feature extraction, perceiving the environment to distinguish moving objects, and building high-precision semantic maps. These remain research directions with great potential. The application of deep learning will further improve and expand the performance and functionality of SLAM. In future work, combining multi-sensor data fusion with deep learning to optimize and improve SLAM algorithms will receive more attention.

**Author Contributions:** Collection and organization of references, C.C. and W.W.; summarization and classification of references, X.X. and L.Z.; writing—original draft preparation, L.Z.; writing—review and editing, L.Z., X.X. and J.Y.; supervision, Y.R., Z.T. and M.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Changzhou Sci&Tech Program (Grant No. CE20215041), the Fundamental Research Funds for the Central Universities (Grant No. B220202023), and the Jiangsu Key R&D Program (Grant No. BE2020082-1).

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

