Learning and SLAM Based Decision Support Platform for Sewer Inspection
Abstract
1. Introduction
2. Concepts and Methodology
2.1. Positioning in a Pipeline
2.2. Internal Defect Identification
2.3. Flatness and Slope Analysis of a Pipeline
3. Validation and Analysis
4. Discussions
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
| Structure | Functionality |
|---|---|
| rupture, deformation, dislocation, disjoint, leakage, corrosion, branch pipeline insertion, crack | deposition, scaling, obstacles, roots, ponding, dam head, scum |
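The two defect families above lend themselves to a simple lookup for downstream labeling. The sketch below encodes the taxonomy exactly as tabulated; the dictionary layout and the helper function are illustrative assumptions, not part of the original platform.

```python
# Sewer defect taxonomy from the table above, split into the two
# families used by the paper: structural vs. functional defects.
DEFECT_TAXONOMY = {
    "structure": [
        "rupture", "deformation", "dislocation", "disjoint",
        "leakage", "corrosion", "branch pipeline insertion", "crack",
    ],
    "functionality": [
        "deposition", "scaling", "obstacles", "roots",
        "ponding", "dam head", "scum",
    ],
}

def defect_category(label: str) -> str:
    """Return 'structure' or 'functionality' for a defect label
    (hypothetical helper, not from the original work)."""
    for category, labels in DEFECT_TAXONOMY.items():
        if label in labels:
            return category
    raise KeyError(f"unknown defect label: {label}")
```

For example, `defect_category("crack")` maps to `"structure"`, while `defect_category("roots")` maps to `"functionality"`.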
[Sensor data samples (images removed in extraction): Front IR, Right IR, and Left IR infrared imagery (512 × 424), depth imagery (512 × 424), and derived point clouds.]
| Front IR Imagery | | |
|---|---|---|
| Avg. depth | Bag: 2.70 m; Box: 4.09 m; Ball: 7.08 m | Box: 2.12 m; Ball: 5.02 m |
| GSD | Bag: 1.4 mm; Box: 2.4 mm; Ball: 4.4 mm | Box: 1.2 mm; Ball: 2.6 mm |
| Sectional area | Bag: 830.28; Box: 909.67; Ball: 414.11 | Box: 837.23; Ball: 452.71 |
| **Left IR Imagery** | | |
| Depth | 0.9 m | 1.8 m |
| GSD | 1.1 mm | 2.2 mm |
| Length of crack | 755 mm | N/A |
| Width of crack | Max: 17 mm; Min: 13 mm | N/A |
| Area of flaking | 347.45 | 1949.16 |
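The GSD values above grow roughly linearly with depth, as the pinhole model predicts: ground sample distance is the object distance divided by the focal length expressed in pixels. A minimal sketch follows; the focal length value is a placeholder assumption, not the sensor's calibrated parameter.

```python
def ground_sample_distance(depth_m: float, focal_length_px: float) -> float:
    """Pinhole-model GSD in metres per pixel: object distance divided
    by the focal length in pixels. focal_length_px is a placeholder."""
    return depth_m / focal_length_px

# With a hypothetical focal length of 800 px, a surface 0.9 m away maps
# to roughly 1.1 mm per pixel, the same order as the Left IR row above.
gsd = ground_sample_distance(0.9, 800.0)
```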
| | Object (a) | Object (b) | Object (c) | Object (d) |
|---|---|---|---|---|
| Distance | 162 mm | 138 mm | 153 mm | 171 mm |
| GSD | 0.047 mm | 0.040 mm | 0.044 mm | 0.049 mm |
| Sectional area of pipeline | 113.04 | 113.04 | 113.04 | 113.04 |
| Sectional area of obstacle | 11.14 | 18.79 | 9.37 | 11.71 |
| Block ratio | 9.85% | 16.62% | 8.29% | 10.36% |
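The block ratio in the last row is simply the obstacle's sectional area divided by the pipeline's sectional area (the constant 113.04 is consistent with a circular section of radius 6 under π ≈ 3.14, since 3.14 × 6² = 113.04, though the paper's unit is not recoverable here). The sketch below reproduces the tabulated percentages; the function name and structure are illustrative.

```python
PIPE_AREA = 113.04  # sectional area of the pipeline, as tabulated above

def block_ratio(obstacle_area: float, pipe_area: float = PIPE_AREA) -> float:
    """Fraction of the pipe cross-section blocked by an obstacle."""
    return obstacle_area / pipe_area

# Reproducing the table row for objects (a)-(d):
ratios = [block_ratio(a) for a in (11.14, 18.79, 9.37, 11.71)]
percentages = [f"{r:.2%}" for r in ratios]
# percentages == ['9.85%', '16.62%', '8.29%', '10.36%']
```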
| | Crack (a) | Crack (b) | Crack (c) | Crack (d) |
|---|---|---|---|---|
| Estimated length of tiny crack | 12.56 cm | 27.91 cm | 21.27 cm | 35.83 cm |
| Detection rate | 32.3% | 58.2% | 54.7% | 62.6% |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chuang, T.-Y.; Sung, C.-C. Learning and SLAM Based Decision Support Platform for Sewer Inspection. Remote Sens. 2020, 12, 968. https://doi.org/10.3390/rs12060968