A Comprehensive Survey on Visual Perception Methods for Intelligent Inspection of High Dam Hubs
Abstract
1. Introduction
2. High Dam Safety Intelligent Inspection Based on Visual Perception
2.1. High Dam Safety Inspection
2.2. The Application of Visual Perception in Intelligent Inspections
3. Visual Perception and Processing of Defects in High Dams
3.1. Image Enhancement Methods
3.1.1. Histogram Equalization Methods for Image Enhancement
3.1.2. Retinex Methods for Image Enhancement
3.1.3. Deep Learning Methods for Image Enhancement
3.2. Visual Perception Methods for Concrete Defects
3.2.1. Visual Perception Methods for Concrete Cracks
3.2.2. Visual Perception Methods for Other Defects
3.3. Visual Perception Methods for Defect Quantification
4. Environmental Visual Perception Methods for High Dam Safety Inspection
4.1. Obstacle Perception Methods Based on Traditional Image Detection
4.2. Obstacle Perception Methods Based on Deep Learning
5. Conclusions
5.1. Summary
5.2. Outlook
- Conducting intelligent inspection tasks with visual collection systems is challenging owing to complex inspection objects, harsh environments, heavy inspection workloads, and frequent emergency inspection tasks. The data captured by visual sensors on inspection equipment may suffer from unclear images, jitter, and distortion. Systematically analyzing and semantically interpreting the complex inspection environment, and developing a highly adaptive collection system along with appropriate data preprocessing methods, are directions worthy of research.
- Long-term inspections generate a large volume of image data, presenting challenges regarding data management, analysis, optimization, and visualization. Thus, there is an urgent need for research into the 2D and 3D visualization of inspection data, intelligent analysis, and high dam health assessment and risk prediction based on visual perception.
- For the identification and quantification of structural defects, issues such as efficient and precise parallel identification, localization, quantification, and 3D perception urgently need to be addressed. Existing methods suffer from limited defect sample data and low robustness. Future research can focus on: visual perception methods based on few-shot/zero-shot learning, to mitigate the scarcity of defect data; multi-task learning methods that extract valuable information from multiple related tasks, to enhance the generalization ability of the designed algorithms; fusion of multi-source heterogeneous data that combines radar, vibration, laser, and other sensors with visual sensors, to improve the accuracy and efficiency of identification; and methods combining visual perception with structural modal analysis, to increase confidence in safety evaluations.
- In intelligent patrol inspections for high dam safety, efficiently and accurately perceiving the complex inspection environment remains a challenge. Given the semantically rich on-site environment, multi-sensor fusion perception methods that combine the advantages of existing approaches are a viable technical direction.
Author Contributions
Funding
Conflicts of Interest
References
- Ye, X.W.; Dong, C.Z.; Liu, T. A review of machine vision-based structural health monitoring: Methodologies and applications. J. Sens. 2016, 2016, 7103039. [Google Scholar] [CrossRef]
- Feng, D.; Feng, M.Q. Computer vision for SHM of civil infrastructure: From dynamic response measurement to damage detection—A review. Eng. Struct. 2018, 156, 105–117. [Google Scholar] [CrossRef]
- Xu, Y.; Brownjohn, J.M.W. Review of machine-vision based methodologies for displacement measurement in civil structures. J. Civ. Struct. Health Monit. 2018, 8, 91–110. [Google Scholar] [CrossRef]
- Spencer, B.F., Jr.; Hoskere, V.; Narazaki, Y. Advances in computer vision-based civil infrastructure inspection and monitoring. Engineering 2019, 5, 199–222. [Google Scholar] [CrossRef]
- Dong, C.Z.; Catbas, F.N. A review of computer vision–based structural health monitoring at local and global levels. Struct. Health Monit. 2021, 20, 692–743. [Google Scholar] [CrossRef]
- Dam Safety Management: Pre-Operational Phases of the Dam Life Cycle; International Commission on Large Dams: Paris, France, 2021.
- Regulations on Reservoir Dam Safety Management; State Council of the People’s Republic of China: Beijing, China, 1991.
- Xiang, Y.; Jing, M.T. Guidelines for Safety Inspection of Reservoir Dams; China Water Resources and Hydropower Press: Beijing, China, 2021. [Google Scholar]
- Federal Guidelines for Dam Safety; FEMA P-93; U.S. Department of Homeland Security: Washington, DC, USA, 2023.
- Guo, J.; Ma, J.; García-Fernández, F.; Zhang, Y.; Liang, H. A survey on image enhancement for low-light images. Heliyon 2023, 9, e14558. [Google Scholar] [CrossRef]
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
- Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75. [Google Scholar] [CrossRef]
- Wang, Q.; Ward, R.K. Fast image/video contrast enhancement based on weighted thresholded histogram equalization. IEEE Trans. Consum. Electron. 2007, 53, 757–764. [Google Scholar] [CrossRef]
- Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384. [Google Scholar] [CrossRef]
- Li, C.Y.; Guo, J.C.; Cong, R.M.; Pang, Y.W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef] [PubMed]
- Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–129. [Google Scholar] [CrossRef] [PubMed]
- Jobson, D.J.; Rahman, Z.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
- Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
- Si, L.; Wang, Z.; Xu, R.; Tan, C.; Liu, X.; Xu, J. Image enhancement for surveillance video of coal mining face based on single-scale retinex algorithm combined with bilateral filtering. Symmetry 2017, 9, 93. [Google Scholar] [CrossRef]
- Xiao, J.; Peng, H.; Zhang, Y.; Tu, C.; Li, Q. Fast image enhancement based on color space fusion. Color Res. Appl. 2016, 41, 22–31. [Google Scholar] [CrossRef]
- Tao, F.; Yang, X.; Wu, W.; Liu, K.; Zhou, Z.; Liu, Y. Retinex-based image enhancement framework by using region covariance filter. Soft Comput. 2018, 22, 1399–1420. [Google Scholar] [CrossRef]
- Gu, Z.; Li, F.; Fang, F.; Zhang, G. A novel retinex-based fractional-order variational model for images with severely low light. IEEE Trans. Image Process. 2019, 29, 3239–3253. [Google Scholar] [CrossRef]
- Hao, S.; Han, X.; Guo, Y.; Xu, X.; Wang, M. Low-light image enhancement with semi-decoupled decomposition. IEEE Trans. Multimed. 2020, 22, 3025–3038. [Google Scholar] [CrossRef]
- Zhang, Q.; Nie, Y.; Zhu, L.; Xiao, C.; Zheng, W.-S. Enhancing underexposed photos using perceptually bidirectional similarity. IEEE Trans. Multimed. 2020, 23, 189–202. [Google Scholar] [CrossRef]
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNET: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
- Ren, W.; Liu, S.; Ma, L.; Xu, Q.; Xu, X.; Cao, X.; Du, J.; Yang, M.-H. Low-light image enhancement via a deep hybrid network. IEEE Trans. Image Process. 2019, 28, 4364–4375. [Google Scholar] [CrossRef] [PubMed]
- Tao, L.; Zhu, C.; Song, J.; Lu, T.; Jia, H.; Xie, X. Low-light image enhancement using CNN and bright channel prior. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 3215–3219. [Google Scholar]
- Li, X.; Shang, J.; Song, W.; Chen, J.; Zhang, G.; Pan, J. Low-Light Image Enhancement Based on Constraint Low-Rank Approximation Retinex Model. Sensors 2022, 22, 6126. [Google Scholar] [CrossRef] [PubMed]
- Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062. [Google Scholar] [CrossRef] [PubMed]
- Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A convolutional neural network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
- Shi, Y.; Wu, X.; Zhu, M. Low-light image enhancement algorithm based on retinex and generative adversarial network. arXiv 2019, arXiv:1906.06027. [Google Scholar]
- Yang, Q.; Wu, Y.; Cao, D.; Luo, M.; Wei, T. A lowlight image enhancement method learning from both paired and unpaired data by adversarial training. Neurocomputing 2021, 433, 83–95. [Google Scholar] [CrossRef]
- Chen, Y.S.; Wang, Y.C.; Kao, M.H.; Chuang, Y.Y. Deep photo enhancer: Unpaired learning for image enhancement from photographs with gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6306–6314. [Google Scholar]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
- Wang, W.; Chen, Z.; Yuan, X. Simple low-light image enhancement based on Weber-Fechner law in logarithmic space. Signal Process. Image Commun. 2022, 106, 116742. [Google Scholar] [CrossRef]
- Lu, Y.; Gao, Y.; Guo, Y.; Xu, W.; Hu, X. Low-Light Image Enhancement via Gradient Prior-Aided Network. IEEE Access 2022, 10, 92583–92596. [Google Scholar] [CrossRef]
- Rasheed, M.T.; Shi, D. LSR: Lightening super-resolution deep network for low-light image enhancement. Neurocomputing 2022, 505, 263–275. [Google Scholar] [CrossRef]
- Zhou, J.; Sun, J.; Zhang, W.; Lin, Z. Multi-view underwater image enhancement method via embedded fusion mechanism. Eng. Appl. Artif. Intell. 2023, 121, 105946. [Google Scholar] [CrossRef]
- Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 2654–2662. [Google Scholar]
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 12504–12513. [Google Scholar]
- Fujita, Y.; Mitani, Y.; Hamamoto, Y. A method for crack detection on a concrete structure. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 3, pp. 901–904. [Google Scholar]
- Fujita, Y.; Hamamoto, Y. A robust automatic crack detection method from noisy concrete surfaces. Mach. Vis. Appl. 2011, 22, 245–254. [Google Scholar] [CrossRef]
- Talab, A.M.A.; Huang, Z.; Xi, F.; HaiMing, L. Detection crack in image using Otsu method and multiple filtering in image processing techniques. Optik 2016, 127, 1030–1033. [Google Scholar] [CrossRef]
- Asdrubali, F.; Baldinelli, G.; Bianchi, F.; Costarelli, D.; Rotili, A.; Seracini, M.; Vinti, G. Detection of thermal bridges from thermographic images by means of image processing approximation algorithms. Appl. Math. Comput. 2018, 317, 160–171. [Google Scholar] [CrossRef]
- Chen, B.; Zhang, X.; Wang, R.; Li, Z.; Deng, W. Detect concrete cracks based on OTSU algorithm with differential image. J. Eng. 2019, 2019, 9088–9091. [Google Scholar] [CrossRef]
- Liu, Z.; Suandi, S.A.; Ohashi, T.; Ejima, T. Tunnel crack detection and classification system based on image processing. In Machine Vision Applications in Industrial Inspection X; SPIE: Bellingham, WA, USA, 2002; Volume 4664, pp. 145–152. [Google Scholar]
- Luo, Q.; Ge, B.; Tian, Q. A fast adaptive crack detection algorithm based on a double-edge extraction operator of FSM. Constr. Build. Mater. 2019, 204, 244–254. [Google Scholar] [CrossRef]
- Fisher, W.D.; Camp, T.K.; Krzhizhanovskaya, V.V. Crack detection in earth dam and levee passive seismic data using support vector machines. Procedia Comput. Sci. 2016, 80, 577–586. [Google Scholar] [CrossRef]
- Fan, X.; Wu, J.; Shi, P.; Zhang, X.; Xie, Y. A novel automatic dam crack detection algorithm based on local-global clustering. Multimed. Tools Appl. 2018, 77, 26581–26599. [Google Scholar] [CrossRef]
- Nishikawa, T.; Yoshida, J.; Sugiyama, T.; Fujino, Y. Concrete crack detection by multiple sequential image filtering. Comput.-Aided Civ. Infrastruct. Eng. 2012, 27, 29–47. [Google Scholar] [CrossRef]
- Gordan, M.; Georgakis, A. A novel fuzzy edge detection and classification scheme to aid hydro-dams surface examination. In Proceedings of the Swedish Society for Automated Image Analysis (SSBA’06), Uppsala, Sweden, 16–17 March 2006; pp. 121–124. [Google Scholar]
- Dung, C.V. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019, 99, 52–58. [Google Scholar] [CrossRef]
- Ni, F.T.; Zhang, J.; Chen, Z.Q. Pixel-level crack delineation in images with convolutional feature fusion. Struct. Control Health Monit. 2019, 26, e2286. [Google Scholar] [CrossRef]
- Feng, C.; Zhang, H.; Wang, S.; Li, Y.; Wang, H.; Yan, F. Structural damage detection using deep convolutional neural network and transfer learning. KSCE J. Civ. Eng. 2019, 23, 4493–4502. [Google Scholar] [CrossRef]
- Feng, C.; Zhang, H.; Wang, H.; Wang, S.; Li, Y. Automatic pixel-level crack detection on dam surface using deep convolutional network. Sensors 2020, 20, 2069. [Google Scholar] [CrossRef] [PubMed]
- Feng, C.; Zhang, H.; Li, Y.; Wang, S.; Wang, H. Efficient real-time defect detection for spillway tunnel using deep learning. J. Real-Time Image Process. 2021, 18, 2377–2387. [Google Scholar] [CrossRef]
- Modarres, C.; Astorga, N.; Droguett, E.L.; Meruane, V. Convolutional neural networks for automated damage recognition and damage type identification. Struct. Control Health Monit. 2018, 25, e2230. [Google Scholar] [CrossRef]
- Pang, J.; Zhang, H.; Feng, C.; Li, L. Research on crack segmentation method of hydro-junction project based on target detection network. KSCE J. Civ. Eng. 2020, 24, 2731–2741. [Google Scholar] [CrossRef]
- Li, R.; Yuan, Y.; Zhang, W.; Yuan, Y. Unified vision-based methodology for simultaneous concrete defect detection and geolocalization. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 527–544. [Google Scholar] [CrossRef]
- Deng, Y.X.; Luo, X.J.; Li, H.L. Research on dam surface crack detection of hydropower station based on unmanned aerial vehicle tilt photogrammetry technology. Technol. Innov. Appl. 2021, 5, 158–161. [Google Scholar]
- Cheng, B.; Zhang, H.; Wang, S. Research on dam surface crack detection method based on full convolution neural network. J. Hydroelectr. Eng. 2020, 39, 52–60. [Google Scholar]
- Zou, Q.; Zhang, Z.; Li, Q.; Qi, X.; Wang, Q.; Wang, S. Deepcrack: Learning hierarchical convolutional features for crack detection. IEEE Trans. Image Process. 2018, 28, 1498–1512. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Bao, T.; Xu, B.; Shu, X.; Zhou, Y.; Du, Y.; Wang, R.; Zhang, K. A deep residual neural network framework with transfer learning for concrete dams patch-level crack classification and weakly-supervised localization. Measurement 2022, 188, 110641. [Google Scholar] [CrossRef]
- Zhu, Y.; Tang, H. Automatic damage detection and diagnosis for hydraulic structures using drones and artificial intelligence techniques. Remote Sens. 2023, 15, 615. [Google Scholar] [CrossRef]
- Wu, Y.; Han, Q.; Jin, Q.; Li, J.; Zhang, Y. LCA-YOLOv8-Seg: An Improved Lightweight YOLOv8-Seg for Real-Time Pixel-Level Crack Detection of Dams and Bridges. Appl. Sci. 2023, 13, 10583. [Google Scholar] [CrossRef]
- Zhang, E.; Shao, L.; Wang, Y. Unifying transformer and convolution for dam crack detection. Autom. Constr. 2023, 147, 104712. [Google Scholar] [CrossRef]
- Xiang, C.; Guo, J.; Cao, R.; Deng, L. A crack-segmentation algorithm fusing transformers and convolutional neural networks for complex detection scenarios. Autom. Constr. 2023, 152, 104894. [Google Scholar] [CrossRef]
- German, S.; Brilakis, I.; DesRoches, R. Rapid entropy-based detection and properties measurement of concrete spalling with machine vision for post-earthquake safety assessments. Adv. Eng. Inform. 2012, 26, 846–858. [Google Scholar] [CrossRef]
- Dawood, T.; Zhu, Z.; Zayed, T. Machine vision-based model for spalling detection and quantification in subway networks. Autom. Constr. 2017, 81, 149–160. [Google Scholar] [CrossRef]
- Gao, Y.; Mosalam, K.M. Deep transfer learning for image-based structural damage recognition. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 748–768. [Google Scholar] [CrossRef]
- Hoang, N.D.; Huynh, T.C.; Tran, V.D. Concrete spalling severity classification using image texture analysis and a novel jellyfish search optimized machine learning approach. Adv. Civ. Eng. 2021, 2021, 5551555. [Google Scholar] [CrossRef]
- Nguyen, H.; Hoang, N.D. Computer vision-based classification of concrete spall severity using metaheuristic-optimized Extreme Gradient Boosting Machine and Deep Convolutional Neural Network. Autom. Constr. 2022, 140, 104371. [Google Scholar] [CrossRef]
- Huang, B.; Zhao, S.; Kang, F. Image-based automatic multiple-damage detection of concrete dams using region-based convolutional neural networks. J. Civ. Struct. Health Monit. 2023, 13, 413–429. [Google Scholar] [CrossRef]
- Zhao, S.; Kang, F.; Li, J. Concrete dam damage detection and localisation based on YOLOv5s-HSC and photogrammetric 3D reconstruction. Autom. Constr. 2022, 143, 104555. [Google Scholar] [CrossRef]
- Li, Y.; Bao, T. A real-time multi-defect automatic identification framework for concrete dams via improved YOLOv5 and knowledge distillation. J. Civ. Struct. Health Monit. 2023, 13, 1333–1349. [Google Scholar] [CrossRef]
- Dang, M.; Wang, H.; Nguyen, T.-H.; Tightiz, L.; Tien, L.D.; Nguyen, T.N.; Nguyen, N.P. CDD-TR: Automated concrete defect investigation using an improved deformable transformers. J. Build. Eng. 2023, 75, 106976. [Google Scholar] [CrossRef]
- Hong, K.; Wang, H.; Yuan, B.; Wang, T. Multiple Defects Inspection of Dam Spillway Surface Using Deep Learning and 3D Reconstruction Techniques. Buildings 2023, 13, 285. [Google Scholar] [CrossRef]
- Chen, D.; Huang, B.; Kang, F. A review of detection technologies for underwater cracks on concrete dam surfaces. Appl. Sci. 2023, 13, 3564. [Google Scholar] [CrossRef]
- Chen, C.P.; Wang, J.; Zou, L.; Zhang, F.J. Underwater Dam Image Crack Segmentation Based on Mathematical Morphology. Appl. Mech. Mater. 2012, 220–223, 1315–1319. [Google Scholar] [CrossRef]
- Fan, X.N.; Cao, P.F.; Shi, P.F.; Chen, X.Y.; Zhou, X.; Gong, Q. An Underwater Dam Crack Image Segmentation Method Based on Multi-Level Adversarial Transfer Learning. Neurocomputing 2022, 505, 19–29. [Google Scholar] [CrossRef]
- Li, Y.; Bao, T.; Huang, X.; Chen, H.; Xu, B.; Shu, X.; Zhou, Y.; Cao, Q.; Tu, J.; Wang, R.; et al. Underwater crack pixel-wise identification and quantification for dams via lightweight semantic segmentation and transfer learning. Autom. Constr. 2022, 144, 104600. [Google Scholar] [CrossRef]
- Qi, Z.; Liu, D.; Zhang, J.; Chen, J. Micro-concrete crack detection of underwater structures based on convolutional neural network. Mach. Vis. Appl. 2022, 33, 74. [Google Scholar] [CrossRef]
- Xin, G.; Fan, X.; Shi, P.; Luo, C.; Ni, J.; Cao, Y. A fine extraction algorithm for image-based surface cracks in underwater dams. Meas. Sci. Technol. 2022, 34, 035402. [Google Scholar] [CrossRef]
- Ni, F.T.; Zhang, J.; Chen, Z.Q. Zernike-moment measurement of thin-crack width in images enabled by dual-scale deep learning. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 367–384. [Google Scholar] [CrossRef]
- Wang, W.; Zhang, A.; Wang, K.C.; Braham, A.F.; Qiu, S. Pavement Crack Width Measurement Based on Laplace’s Equation for Continuity and Unambiguity. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 110–123. [Google Scholar] [CrossRef]
- Rezaiee-Pajand, M.; Tavakoli, F.H. Crack detection in concrete gravity dams using a genetic algorithm. Proc. Inst. Civ. Eng.-Struct. Build. 2015, 168, 192–209. [Google Scholar] [CrossRef]
- Peng, X.; Zhong, X.; Zhao, C.; Chen, A.; Zhang, T. A UAV-based machine vision method for bridge crack recognition and width quantification through hybrid feature learning. Constr. Build. Mater. 2021, 299, 123896. [Google Scholar] [CrossRef]
- Zhang, C.; Jamshidi, M.; Chang, C.-C.; Liang, X.; Chen, Z.; Gui, W. Concrete Crack Quantification using Voxel-Based Reconstruction and Bayesian Data Fusion. IEEE Trans. Ind. Inform. 2022, 18, 7512–7524. [Google Scholar] [CrossRef]
- Chen, B.; Zhang, H.; Li, Y.; Wang, S.; Zhou, H.; Lin, H. Quantify pixel-level detection of dam surface crack using deep learning. Meas. Sci. Technol. 2022, 33, 065402. [Google Scholar] [CrossRef]
- Ding, W.; Yang, H.; Yu, K.; Shu, J. Crack detection and quantification for concrete structures using UAV and transformer. Autom. Constr. 2023, 152, 104929. [Google Scholar] [CrossRef]
- Mukojima, H.; Deguchi, D.; Kawanishi, Y.; Ide, I.; Murase, H.; Ukai, M.; Nagamine, N.; Nakasone, R. Moving camera background-subtraction for obstacle detection on railway tracks. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 3967–3971. [Google Scholar]
- Tastimur, C.; Karakose, M.; Akin, E. Image processing based level crossing detection and foreign objects recognition approach in railways. Int. J. Appl. Math. Electron. Comput. 2017, 1, 19–23. [Google Scholar] [CrossRef]
- Selver, M.A.; Er, E.; Belenlioglu, B.; Soyaslan, Y. Camera based driver support system for rail extraction using 2-D Gabor wavelet decompositions and morphological analysis. In Proceedings of the 2016 IEEE International Conference on Intelligent Rail Transportation (ICIRT), Birmingham, UK, 23–25 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 270–275. [Google Scholar]
- Teng, Z.; Liu, F.; Zhang, B.; Kang, D.-J. An approach for security problems in visual surveillance systems by combining multiple sensors and obstacle detection. J. Electr. Eng. Technol. 2015, 10, 1284–1292. [Google Scholar] [CrossRef]
- Felzenszwalb, P.; McAllester, D.; Ramanan, D. A discriminatively trained, multiscale, deformable part model. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8. [Google Scholar]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- He, D.; Zou, Z.; Chen, Y.; Liu, B.; Miao, J. Rail transit obstacle detection based on improved CNN. IEEE Trans. Instrum. Meas. 2021, 70, 1–14. [Google Scholar] [CrossRef]
- Li, C.-J.; Qu, Z.; Wang, S.-Y.; Liu, L. A method of cross-layer fusion multi-object detection and recognition based on improved faster R-CNN model in complex traffic environment. Pattern Recognit. Lett. 2021, 145, 127–134. [Google Scholar] [CrossRef]
- He, D.; Qiu, Y.; Miao, J.; Zou, Z.; Li, K.; Ren, C.; Shen, G. Improved Mask R-CNN for obstacle detection of rail transit. Measurement 2022, 190, 110728. [Google Scholar] [CrossRef]
- Xu, H.; Li, S.; Ji, Y.; Cao, R.; Zhang, M. Dynamic obstacle detection based on panoramic vision in the moving state of agricultural machineries. Comput. Electron. Agric. 2021, 184, 106104. [Google Scholar] [CrossRef]
- Xue, J.; Cheng, F.; Li, Y.; Song, Y.; Mao, T. Detection of Farmland Obstacles Based on an Improved YOLOv5s Algorithm by Using CIoU and Anchor Box Scale Clustering. Sensors 2022, 22, 1790. [Google Scholar] [CrossRef]
- Yasin, J.N.; Mohamed, S.A.; Haghbayan, M.H.; Heikkonen, J.; Tenhunen, H.; Yasin, M.M.; Plosila, J. Night vision obstacle detection and avoidance based on Bio-Inspired Vision Sensors. In Proceedings of the 2020 IEEE SENSORS, Rotterdam, The Netherlands, 25–28 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
- Qiu, Z.; Zhao, N.; Zhou, L.; Wang, M.; Yang, L.; Fang, H.; He, Y.; Liu, Y. Vision-based moving obstacle detection and tracking in paddy field using improved yolov3 and deep SORT. Sensors 2020, 20, 4082. [Google Scholar] [CrossRef]
- She, X.; Huang, D.; Song, C.; Qin, N.; Zhou, T. Multi-obstacle detection based on monocular vision for UAV. In Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1067–1072. [Google Scholar]
- Chen, H.; Lu, P. Real-time identification and avoidance of simultaneous static and dynamic obstacles on point cloud for UAVs navigation. Robot. Auton. Syst. 2022, 154, 104124. [Google Scholar] [CrossRef]
- Chang, S.; Zhang, Y.; Zhang, F.; Zhao, X.; Huang, S.; Feng, Z.; Wei, Z. Spatial attention fusion for obstacle detection using mmwave radar and vision sensor. Sensors 2020, 20, 956. [Google Scholar] [CrossRef] [PubMed]
| Category | Method | Purpose | Merits |
| --- | --- | --- | --- |
| Histogram Equalization | AHE [11] | Adaptive histogram equalization computed from local windows | Enhances the local contrast and details of the image |
| | DSIHE [12] | Decompose the image into two equal-area sub-images and equalize each histogram separately | Enhances image contrast while preserving a significant amount of detail |
| | WTHE [13] | Fast image/video contrast enhancement via weighted thresholded histogram equalization | Effectively avoids over-enhancement and level-saturation artifacts |
| | LDR-HE [14] | Contrast enhancement based on the layered difference representation (LDR) of 2D histograms | Enhances images efficiently in terms of both objective and subjective quality |
| | Underwater HE [15] | Enhance underwater images by dehazing with minimum information loss and a histogram distribution prior | Produces a pair of output versions |
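All of these histogram-equalization variants share one core operation: remapping pixel intensities through the (normalized) cumulative histogram. As orientation, here is a minimal NumPy sketch of global histogram equalization; the `histogram_equalize` helper is illustrative and not taken from any cited work:

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image.

    Intensities are remapped through the cumulative histogram so that
    the output levels spread over the full 0-255 range.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]   # CDF at the darkest occupied bin
    denom = max(img.size - cdf_min, 1)      # guard against constant images
    lut = np.round((cdf - cdf_min) / denom * 255.0)
    return lut.clip(0, 255).astype(np.uint8)[img]
```

Adaptive variants such as AHE [11] apply the same remapping per local window (with contrast limiting in CLAHE-style methods), which is what yields the improved local detail noted in the table.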
| Category | Method | Purpose | Merits |
| --- | --- | --- | --- |
| Retinex | Retinex [16] | Remove or reduce the effects of illumination while preserving the essential characteristics of the object | Offers a robust and flexible framework for image enhancement tasks |
| | SSR [17] | Decompose an image into two components: a reflectance component and an illumination component | Reduces the effects of illumination and preserves the essential features of objects |
| | MSR [18] | Estimate the illumination component by combining several scales of a center/surround function | Balances local and global dynamic range compression |
| | SSRBF [19] | Address low and uneven lighting | Merges SSR with a bilateral filter |
| | Retinex + HSV [20] | Eliminate halo artifacts | Improves visibility and eliminates color distortion in HSV space |
| | RCF + CFAHE + NLF + GF [21] | Estimate illumination in the presence of spatially variant phenomena | Increases contrast, eliminates noise, and enhances details simultaneously |
| | Fractional-order variational Retinex [22] | Enhance images with severely low light | Controls the extent of regularization more flexibly |
| | Robust Retinex Model [23] | Address the tendency of Retinex models to fail in the presence of noise | Adaptable to a variety of tasks |
| | Concrete image enhancement [24] | Enhance the poor image quality of underwater concrete | Provides enhanced images balanced in color, contrast, and brightness |
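The Retinex family rests on one decomposition: an observed image is the product of reflectance and illumination, so subtracting a smoothed surround estimate in the log domain recovers reflectance. A minimal sketch of single-scale Retinex follows; to keep it dependency-free, the Gaussian surround of SSR [17] is approximated with a separable box filter, and the `single_scale_retinex` helper is illustrative rather than any cited implementation:

```python
import numpy as np

def single_scale_retinex(img: np.ndarray, k: int = 15) -> np.ndarray:
    """Single-scale Retinex: reflectance = log(image) - log(illumination).

    The illumination (surround) is estimated by local smoothing; SSR uses a
    Gaussian surround, approximated here with a separable box filter.
    """
    f = img.astype(np.float64) + 1.0            # avoid log(0)
    kernel = np.ones(k) / k
    # separable box filtering as a stand-in for the Gaussian surround
    blur = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, f)
    blur = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blur)
    r = np.log(f) - np.log(blur)
    # stretch reflectance back into a displayable 0-255 range
    span = r.max() - r.min()
    if span == 0:
        return np.zeros_like(img, dtype=np.uint8)
    return np.round((r - r.min()) / span * 255.0).astype(np.uint8)
```

MSR [18] averages several such outputs computed with surrounds of different widths, which is how it balances local and global dynamic range compression.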
| Method | Network | Dataset | PSNR | SSIM |
| --- | --- | --- | --- | --- |
| LLNet [25] | Deep autoencoder | Synthetic images | 19.81 | 0.67 |
| Hybrid network [26] | RNN | Synthetic images | 28.43 | 0.96 |
| Denoise [27] | CNN | Real images | - | 0.85 |
| CLAR [28] | Retinex + CNN | VV | 19.65 | 0.61 |
| DSICE [29] | CNN | Real under-exposed images | 20.27 | 0.94 |
| LightenNet [30] | CNN | Self | 21.71 | 0.93 |
| Retinex-Net [31] | CNN | LOL | - | - |
| Retinex-GAN [32] | GAN | LOL | 31.33 | 0.88 |
| Global-SRA-U-net [33] | GAN | LOL | 19.46 | 0.75 |
| Deep Photo Enhancer [34] | GAN | MIT-Adobe 5K | 23.80 | 0.90 |
| EnlightenGAN [35] | GAN | LOL | - | - |
| Weber–Fechner law in log space [36] | CNN | LOL | 20.508 | 0.952 |
| GPANet [37] | CNN | LOL | 20.862 | 0.7842 |
| LSR [38] | CNN | LOL | 20.712 | 0.821 |
| MFEF [39] | CNN | UIEB | 23.352 | 0.910 |
| LLFormer [40] | Transformer | LOL | 23.6491 | 0.8163 |
| Retinexformer [41] | Transformer | LOL | 25.16 | 0.845 |
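The PSNR and SSIM columns above are full-reference quality metrics computed against a ground-truth normal-light image. PSNR reduces to a log-scaled mean squared error; the following `psnr` helper is a minimal illustrative sketch, not code from any cited method:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and an enhanced image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM additionally compares local luminance, contrast, and structure statistics, which is why PSNR and SSIM rankings in the table do not always agree.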
| Reference | Method | Merits | Limitations |
| --- | --- | --- | --- |
| Fujita et al. [42] | Threshold | Handles irregular lighting conditions, shadows, and imperfections | Sensitivity and adaptability are easily disrupted |
| Fujita et al. [43] | Adaptive thresholds | Robust automatic crack detection from noisy concrete surface images | Suitable adaptive parameters and ranges must be selected |
| Talab et al. [44] | Otsu | Classifies background and foreground | Limited adaptation to images with complex backgrounds |
| Asdrubali et al. [45] | Threshold | Thermal image enhancement with the Kantorovich operator | Limited to infrared images |
| Chen et al. [46] | Otsu | Difference-image adaptation for defect detection against complex backgrounds | Accuracy on noisy images needs further study |
| Reference | Method | Merits | Limitations |
| --- | --- | --- | --- |
| Liu et al. [47] | SVM | Utilizes the balanced local image | Limited accuracy |
| Luo et al. [48] | SVM | Adaptive binarization procedure | Thin, elongated dark writing and dirt are recognized as cracks |
| Fisher et al. [49] | SVM | A novel data-driven approach | Generalization ability needs improvement |
| Fan et al. [50] | Clustering | Self-adaptive threshold for image binarization | Limited environmental adaptability |
| Nishikawa et al. [51] | Genetic encoding | Wavelet transforms at different scales | No scale adaptation |
| Gordan et al. [52] | Clustering | Fuzzy C-means clustering edge detection operator | High false alarm rate |
Reference | Method | Merits | Dataset | Performance (%) |
---|---|---|---|---|
Dung et al. [53] | FCN | Reasonably detected and crack density is also accurately evaluated | Concrete Crack Images for Classification | AP = 89.3% |
Ni et al. [54] | CNN | Delineates cracks accurately and rapidly | Own collection | Precision = 79.28% |
Feng et al. [55,56,57] | CNN | Provides accuracy considerably higher than that of a support vector machine | Own collection | Precision = 93.48% |
Modarres et al. [58] | CNN | Outperforms several other machine learning algorithms | Own collection | Accuracy = 99.6% Precision = 97.5% |
Pang et al. [59] | RCNN | Crack segmentation method of hydro-junction project | Own collection | Iou = 52.7% |
Li et al. [60] | CNN | Ideal for integration within intelligent autonomous inspection systems | Own collection | Accuracy = 80.7% |
Deng et al. [61] | CNN | Defect detection on dam surface by using UAV tilt photogrammetry technology combined with machine vision | Own collection | Accuracy = 76.39% |
Chen et al. [62] | FCN | Accurate identification and quantification of cracks on the dam surface | Own collection | Accuracy = 75.13% |
Zou et al. [63] | U-Net | A novel end-to-end trainable convolutional network—DeepCrack | CRKWH100 | AP = 93.15% |
Li et al. [64] | Transfer Learning | Realized high-precision crack identification | Own collection | Precision = 91.23% |
Zhu et al. [65] | Deeplab V3+ | The fusion of a lightweight backbone network and attention mechanism | Own collection | Precision = 91.23% |
Wu et al. [66] | LCA-YOLOv8-Seg | Suitable for low-performance devices | Concrete Crack Images for Classification | = 93.30% |
Zhang et al. [67] | UTCD-Net | Demonstrated superior generalizability with respect to complex scenes | CFD dataset | Precision = 62.85% |
Xiang et al. [68] | DTrC-Net | More adaptable and robust to crack images captured under complex conditions | Crack3238 | Precision = 75.60% |
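The scores reported above (Precision, AP, IoU) are standard pixel-level segmentation metrics. As a hedged sketch of the definitions only, independent of any cited implementation, the toy masks below illustrate how they are computed from true/false positives and false negatives:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-level precision, recall, and IoU for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # correctly flagged crack pixels
    fp = np.logical_and(pred, ~gt).sum()     # background flagged as crack
    fn = np.logical_and(~pred, gt).sum()     # missed crack pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "iou": iou}

# Toy example: the prediction covers 10 of 12 ground-truth crack pixels.
gt = np.zeros((10, 10), bool); gt[2:8, 4:6] = True     # 12 crack pixels
pred = np.zeros((10, 10), bool); pred[3:8, 4:6] = True # 10 pixels, all correct
m = segmentation_metrics(pred, gt)
# precision = 1.0, recall = IoU = 10/12
```

AP additionally averages precision over a sweep of detection thresholds, which this sketch omits.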
Reference | Defect Type | Merits | Dataset | Performance (%) |
---|---|---|---|---|
German et al. [69] | Spalling | Automatically detecting spalled regions on a concrete structural element and retrieving significant properties of these spalled regions in terms of their length and depth | Own collection | AP = 80.90% |
Dawood et al. [70] | Spalling | An integrated framework to detect and quantify spalling distress based on image data processing and machine learning | Own collection | Precision = 94.80% |
Gao et al. [71] | Spalling | Spalling classifier based on deep transfer learning + VGGNet | Own collection | Accuracy = 91.50% |
Hoang et al. [72] | Spalling | Image texture analysis and novel jellyfish search optimization method | Own collection | Precision = 93.20% |
Nguyen et al. [73] | Spalling | Combination of meta-heuristic optimized extreme gradient boosting | Own collection | Precision = 99.03% |
Huang et al. [74] | Multiple Types | An automatic multiple-damage detection method for concrete dams based on faster region-based CNN | Own collection | mAP = 88.77%
Zhao et al. [75] | Multiple Types | A system for detecting damage in concrete dams that combines the proposed YOLOv5s-HSC algorithm with a three-dimensional (3D) photogrammetric reconstruction method to accurately identify and locate defects | Own collection | mAP = 79.80%
Minh Dang et al. [76] | Multiple Types | Accurately distinguishes different types of structural defects in concrete dams under the interference of environmental noise | Own collection | = 89.40%
Hong et al. [77] | Multiple Types | An end-to-end transformer-based model | Large-scale dataset | = 63.80%
Chen et al. [80] | Underwater | This method enables accurate and efficient detection and classification of underwater dam cracks in complex underwater environments | Own collection | / |
Fan et al. [81] | Underwater | The proposed method achieves accurate segmentation of underwater dam crack images | Own collection | Precision = 47.74% |
Li et al. [82] | Underwater | Lightweight semantic segmentation and transfer learning | Own collection | Precision = 91.51% |
Qi et al. [83] | Underwater | Combination of traditional approaches and deep learning techniques | Own collection | Accuracy = 93.9% |
Xin et al. [84] | Underwater | Edge detection model based on artificial bee colony algorithm | Own collection | Precision = 90.10% |
Reference | Defect Type | Merits | Dataset | Performance (%) |
---|---|---|---|---|
Ni et al. [85] | Crack | The Zernike moment operator (ZMO) for achieving subpixel accuracy in measuring thin-crack width | Own collection | Precision = 88.65% |
Wang et al. [86] | Crack | A new crack width definition and formulates it using Laplace’s Equation | Own collection | / |
Rezaiee-Pajand et al. [87] | Crack | Concrete crack detection based on genetic algorithm | Own collection | Precision = 94.80% |
Peng et al. [88] | Crack | Crack recognition and width quantification through hybrid feature learning | Own collection | Precision = 92.00%
Zhang et al. [89] | Crack | Combination of voxel reconstruction and Bayesian data fusion | Own collection | AP = 87.3% |
Chen et al. [90] | Crack | Combining semantic segmentation and morphology | Own collection | Precision = 90.81% |
Ding et al. [91] | Crack | Based on IBR-Former, it can quantify cracks with widths less than 0.2 mm | Own collection | Precision = 85.32% |
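A common baseline behind pixel-based width quantification, such as the morphology-based approach of Chen et al. [90], is to count crack pixels along a direction roughly perpendicular to the crack and convert the count to physical units via a calibration factor. The sketch below illustrates only this baseline under stated assumptions (a roughly vertical crack and a hypothetical `mm_per_pixel` calibration); it is not a reproduction of any cited algorithm:

```python
import numpy as np

def crack_width_profile(mask: np.ndarray, mm_per_pixel: float) -> np.ndarray:
    """Per-row crack width (mm) for a roughly vertical crack mask.

    Simple pixel-counting baseline: for each image row, width is the number
    of crack pixels in that row times the spatial resolution.
    mm_per_pixel is a hypothetical calibration value from camera geometry.
    """
    return mask.sum(axis=1) * mm_per_pixel

# Synthetic vertical crack, 3 px wide at the top tapering to 1 px.
mask = np.zeros((6, 20), dtype=bool)
mask[0:2, 8:11] = True   # 3 px wide
mask[2:4, 9:11] = True   # 2 px wide
mask[4:6, 9:10] = True   # 1 px wide
widths = crack_width_profile(mask, mm_per_pixel=0.1)
# widths ≈ [0.3, 0.3, 0.2, 0.2, 0.1, 0.1] mm
```

For arbitrarily oriented cracks, the cited methods measure width along the local normal of a skeletonized centerline (e.g., via `skimage.morphology.skeletonize`), which this row-wise sketch does not capture.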
Perception Type | Reference | Method | Merits | Limitations |
---|---|---|---|---|
Traditional Image Detection | Mukojima et al. [92] | Background subtraction algorithm | Moving-camera background subtraction for forward obstacle detection | Limited light adaptation
Traditional Image Detection | Tastimur et al. [93] | Background subtraction algorithm | HSV color transformation, image difference extraction, gradient computation, filtering, and feature extraction | Limited coverage of foreign objects at level crossings
Traditional Image Detection | Selver et al. [94] | Gabor wavelets | Trajectory edge enhancement and noise filtering | Limited detection range
Traditional Image Detection | Teng et al. [95] | Background subtraction algorithm | Combination of multiple sensors and a vision-based snag detection algorithm | Limited detection range
Deep Learning | He et al. [99] | FE-YOLO | A flexible and efficient multiscale one-stage object detector | Limited real-time performance |
Deep Learning | Li et al. [100] | Faster R-CNN | Multi-object detection and recognition | Limited generalization ability
Deep Learning | He et al. [101] | Mask R-CNN | High precision in small-target detection | Limited generalization ability
Deep Learning | Xu et al. [102] | Optical flow algorithm | Dynamic obstacle detection based on panoramic vision | Single direction of motion
Deep Learning | Xue et al. [103] | Improved YOLOv5s | Small weight file | Limited generalization ability
Deep Learning | Yasin et al. [104] | Hough transform | Adaptive slicing algorithm based on an accumulating number of events | Limited to specific scenarios
Deep Learning | Qiu et al. [105] | YOLOv3 | Vision-based moving obstacle detection and tracking | No center-point detection
Deep Learning | She et al. [106] | YOLOv3 + SURF | Multi-obstacle detection | Limited generalization ability
Deep Learning | Chen et al. [107] | Forbidden pyramids | Real-time identification and avoidance of simultaneous static and dynamic obstacles | Limited robustness
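The background-subtraction detectors above (e.g., Mukojima et al. [92], Teng et al. [95]) build on frame differencing: pixels that deviate strongly from a background model are flagged as potential obstacles. The following is a minimal numpy sketch of that core idea only, with illustrative threshold values, not a reconstruction of any cited pipeline:

```python
import numpy as np

def detect_obstacle(background: np.ndarray, frame: np.ndarray,
                    diff_thresh: int = 30, min_pixels: int = 20) -> bool:
    """Flag an obstacle when enough pixels differ from the background model.

    diff_thresh and min_pixels are illustrative values; real systems tune
    them and maintain an adaptive background model for lighting changes.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    changed = diff > diff_thresh               # per-pixel change mask
    return int(changed.sum()) >= min_pixels    # area-based decision

# Static background and a frame with a bright intruding object.
bg = np.full((48, 64), 90, dtype=np.uint8)
frame = bg.copy()
frame[10:20, 30:40] = 200                      # 100-pixel obstacle region
flag = detect_obstacle(bg, frame)
```

The fixed background here is the simplest possible model; the moving-camera setting of Mukojima et al. [92] first registers frames before differencing, which this sketch omits.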
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Peng, Z.; Li, L.; Liu, D.; Zhou, S.; Liu, Z. A Comprehensive Survey on Visual Perception Methods for Intelligent Inspection of High Dam Hubs. Sensors 2024, 24, 5246. https://doi.org/10.3390/s24165246