Review

A Comprehensive Survey on Visual Perception Methods for Intelligent Inspection of High Dam Hubs

1. School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
2. School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang 621010, China
3. Sichuan Engineering Technology Research Center of Industrial Self-Supporting and Artificial Intelligence, Mianyang 621010, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(16), 5246; https://doi.org/10.3390/s24165246
Submission received: 2 July 2024 / Revised: 9 August 2024 / Accepted: 11 August 2024 / Published: 14 August 2024
(This article belongs to the Section Fault Diagnosis & Sensors)

Abstract

There are many high dam hubs in the world, and the regular inspection of high dams is a critical task for ensuring their safe operation. Traditional manual inspection methods pose challenges related to the complexity of the on-site environment, the heavy inspection workload, and the difficulty in manually observing inspection points, which often result in low efficiency and errors arising from subjective factors. Therefore, the introduction of intelligent inspection technology in this context is urgently necessary. With the development of UAVs, computer vision, artificial intelligence, and other technologies, the intelligent inspection of high dams based on visual perception has become possible, and related research has received extensive attention. This article summarizes the contents of high dam safety inspections and reviews recent studies on visual perception techniques in the context of intelligent inspections. First, this article categorizes image enhancement methods into those based on histogram equalization, Retinex, and deep learning. Representative methods and their characteristics are elaborated for each category, and the associated development trends are analyzed. Second, this article systematically enumerates the principal achievements of defect and obstacle perception methods, focusing on those based on traditional image processing and machine learning approaches, and outlines the main techniques and characteristics. Additionally, this article analyzes the principal methods for damage quantification based on visual perception. Finally, the major issues related to applying visual perception techniques for the intelligent safety inspection of high dams are summarized and future research directions are proposed.

1. Introduction

High dams serve as vital strategic infrastructure, bolstering the efficient development of hydropower and the comprehensive utilization of water resources. They continually and stably fuel economic and social development by providing energy supply and flood protection, thus generating substantial social and economic benefits. The safety of high dams is a critical component of national public safety, with the assurance of their secure operation and the full realization of their functions being a strategic necessity. The routine inspection of high dams has become an essential measure to guarantee their safe operation. In recent years, with the rapid advancement of unmanned equipment, computer vision, machine learning, and intelligent perception technologies, among others, widespread research and applications have been conducted in the field of construction safety monitoring and inspection techniques based on visual perception.
In the domain of construction safety monitoring, Ye et al. [1] briefly summarized the application of image processing techniques in safety inspections and reviewed the potential error sources in vision perception methods. Meanwhile, Feng et al. [2] outlined the displacement measurement methods based on visual perception from both hardware and software perspectives, and reviewed the extended applications related to construction safety patrol tasks that utilize the obtained displacement data. Furthermore, Xu et al. [3] conducted a detailed review of the development of displacement measurement algorithms based on visual perception, summarizing the applications developed on the basis of vision-based displacement measurements. Spencer et al. [4] summarized advancements in the field of structural inspection and monitoring based on visual perception, presenting methods for incorporating deep learning-based visual perception techniques in structural engineering applications. They also discussed the applications of static and dynamic displacement measurements based on visual perception in laboratory and field scenarios. Dong et al. [5] analyzed and summarized the application and development of computer vision technology in the context of local and global structural health monitoring.
The above literature outlines both the mainstream methods and recent advancements in visual perception technology for construction safety monitoring and inspection. However, comprehensive reviews of visual perception techniques and methods for the intelligent safety inspection of high dams remain scarce. In recent years, researchers have introduced numerous visual perception methods tailored to the inspection of specific building structures and, despite the scarcity of studies focusing on high dams, methods designed for structures with similar characteristics hold significant reference value. This article therefore reviews visual perception methodologies for the intelligent inspection of high dams and analogous structures, synthesizes the latest achievements in visual perception methods and key application technologies for intelligent safety inspections, and anticipates future technological development directions in this domain.

2. High Dam Safety Intelligent Inspection Based on Visual Perception

2.1. High Dam Safety Inspection

In accordance with the ‘Dam Safety Management: Pre-operational Phases of the Dam Life Cycle’ (International Commission on Large Dams, Bulletin 175, 2021) [6], the ‘Regulation on the Administration of Reservoir Dam Safety’ (Decree No.77 of the State Council of the People’s Republic of China, 1991) [7], and other provisions, regular safety monitoring and inspections need to be carried out on the hydraulic buildings of hydropower hubs during the operation period.
Based on the safety patrol inspection analysis detailed in the ‘Guidelines on the Patrol and Inspection of Reservoir Dam Safety’ (Dam Safety Management Center of the Ministry of Water Resources, China, ISBN: 9787517094777, 2021) [8] and ‘Federal Guidelines for Dam Safety’ (Department of Homeland Security, USA, FEMA P-93, 2023) [9], the primary goal of safety inspections for high dam hubs is to identify potential safety hazards in the structures of the high dam hubs. Typical safety hazards in the structures of high dam hubs include defects such as concrete cracks, cavitation, erosion, spalling, weathering, hollow drumming, falling off, corrosion-induced coarse aggregate exposure, pits, troughs, exposure of reinforcing steel, metal corrosion, deformation, displacement, seepage, and foreign objects. During the inspection process, it is necessary to identify the location, scale, and shape of the defects, as detailed in Figure 1. Most of these safety hazards and their characteristics can be identified through human visual inspection during the safety patrol inspection process. Therefore, visual perception methods can be utilized to achieve intelligent inspections.

2.2. The Application of Visual Perception in Intelligent Inspections

In intelligent safety inspections, visual technologies such as image classification, measurement, detection, recognition, and segmentation based on visual perception are utilized. These technologies can extract local structural health safety state indicators from visible images of a structure’s surface, including cracks, spalling, corrosion, delamination, and voids. Using methods such as optical flow and visual tracking, global health safety state indicators of high dam structures, such as structural responses including displacement, vibration, deformation, and misalignment, can be obtained. While the tasks and routes of high dam intelligent safety inspections are planned according to prior models and maps, unforeseen situations and temporary tasks outside of the planned factors may occur during the inspection process. Autonomous obstacle recognition and local adaptive inspection route planning based on visual perception are vital means to ensure the reliable completion of intelligent inspections. Visual perception methods can allow for long-distance, non-contact, low-cost, high-efficiency, and automated inspections to be successfully carried out. Figure 2 demonstrates the relevant visual perception technologies that may be applied in high dam intelligent safety inspections, mainly including image enhancement, target recognition, image classification, image segmentation, image measurement, image generation, visual tracking, three-dimensional reconstruction, position estimation, and visual navigation.

3. Visual Perception and Processing of Defects in High Dam

The safety inspection of a high dam requires a thorough analysis of its structural conditions and safety levels through an assessment of defects such as cracks, spalling, delamination, and corrosion. The routine procedure for inspectors is to first visually inspect the structure and then manually mark locations on structural drawings. This process is labor-intensive and time-consuming, requiring a significant workforce. Thus, utilizing new sensors and information processing techniques for defect characteristic perception in high dam safety inspections has attracted widespread attention in both industry and academia. In recent years, significant advancements have been achieved in this area through the use of computer vision and machine learning methodologies.
As illustrated in Figure 3, the procedure for visual defect perception during high dam safety inspections comprises several stages. Initially, image acquisition is conducted, where cameras mounted on intelligent inspection devices capture images of the inspection area. Due to variations in the capture environment, the quality of the collected images may not be uniform. To facilitate efficient recognition, image filtering and enhancement procedures must be carried out. Then, the processed high-quality images can be used for defect identification and classification tasks and, ultimately, defect parameters are quantified to furnish supporting data for subsequent safety evaluations.

3.1. Image Enhancement Methods

The quality of images and videos greatly influences the accuracy of visual perception systems. Many high dams are situated in mountainous regions with complex environments and intricate structural objects to inspect, which are easily affected by lighting, rain, snow, and other fluid–solid environmental factors. This leads to issues such as substantial background noise, loss of detail, uneven illumination, low light, and decreased resolution of images, as illustrated in Figure 4. Therefore, image enhancement is required before performing high-dam structural defect perception.
The main purposes of high dam image enhancement are to expand the difference between the features of different objects in the image, suppress the uninteresting features, improve the image quality, enrich the information, strengthen the image interpretation and recognition effect, and meet the requirements of defect and environmental sensing [10]. Image enhancement algorithms can improve the quality of collected inspection images, thus reducing the processing time and increasing recognition accuracy. We analyze the enhancement methods that can be used for high dam images in three categories: histogram equalization (HE) methods, Retinex methods, and deep learning methods.

3.1.1. Histogram Equalization Methods for Image Enhancement

The principle of histogram equalization involves evenly distributing the grayscale values in an original image from a relatively concentrated range across the entire grayscale space. This process achieves nonlinear stretching of the image and redistributes its pixel values, thereby enhancing the image’s contrast and dynamic range through the analysis and redistribution of pixel grayscale values based on histogram distributions [11]. For example, Wang et al. [12] proposed the utilization of the adaptive histogram equalization (AHE) method to compute local histograms, subsequently redistributing brightness levels to enhance an image’s contrast while preserving a significant amount of detail. However, this approach inherently increases the time complexity by segmenting the image into multiple distinct sections. Additionally, Wang et al. [13] presented a Weighted Threshold Histogram Equalization (WTHE) algorithm, which is suitable for video enhancement, effectively mitigating issues related to over-enhancement and average saturation artifacts, despite the algorithm’s susceptibility to high noise levels and incomplete detail preservation. Lee et al. [14] employed the Local Difference Representation (LDR) method to amplify the differential representation of adjacent pixels in grayscale within a two-dimensional histogram, demonstrating not only superior speed but also enhanced efficacy. Moreover, Li et al. [15] introduced an underwater image enhancement technique based on the underwater dark channel model, which minimizes information loss and relies on prior knowledge of the histogram distribution for image de-fogging. Nonetheless, the method tended to induce over-enhancement, potentially resulting in a loss of image detail. The purpose and merits of each method are listed in Table 1.
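The basic transfer function that these methods build upon can be sketched in a few lines of NumPy. The following is a minimal, illustrative implementation of classic global histogram equalization (not any specific cited variant); the low-contrast test image is synthetic and hypothetical:

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image.

    Maps each gray level through the normalized cumulative histogram
    so that pixel values spread across the full 0-255 range.
    Assumes a non-constant input image.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # CDF value of the lowest occurring level
    # Classic HE transfer function, rescaled to 0..255
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]

# Synthetic low-contrast image: values concentrated in 100..120
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
```

After equalization, the concentrated gray levels are stretched over the whole dynamic range, which is exactly the contrast-expanding behavior described above; adaptive variants such as AHE apply the same mapping per local region.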

3.1.2. Retinex Methods for Image Enhancement

Retinex theory, proposed by Land et al. [16] in 1977, involves the decomposition of images into illumination and reflectance components, offering a robust and flexible framework for image enhancement tasks. The core of Retinex theory is that the color of an object is determined by its reflective capacity towards red, green, and blue light, not by the absolute intensity of the reflected light. The color of an object remains consistent regardless of variations in illumination, underscoring the foundation of Retinex theory in color constancy. Based on Land's theory, a given image S(x,y) can be decomposed into two distinct images, the reflected object image R(x,y) and the incident light image L(x,y), such that S(x,y) = R(x,y) · L(x,y). A schematic diagram illustrating this principle is shown in Figure 5.
Stemming from this theory, a range of related methods have been developed, such as the single-scale Retinex (SSR) and multi-scale Retinex (MSR) low-light image enhancement techniques, developed by Jobson et al. [17,18]. The SSR method is sensitive to high-frequency components and enhances edge information in images well, although it might over-enhance the image and lose real information. Compared to SSR, MSR can accomplish color enhancement, color constancy, and local and global dynamic range compression; however, it falls short in terms of smooth edges and improvement of high-frequency detail. Furthermore, Si et al. [19] presented the SSRBF algorithm, which merges SSR and a bilateral filter (BF) to address low and uneven lighting issues, thereby improving video image quality. Xiao et al. [20] presented scaled Retinex with fast mean filtering applied to the luminance component in hue–saturation–value (HSV) color space and proposed a rapid image enhancement method based on color space fusion. Tao et al. [21] presented a fusion framework based on MSR, which integrates Range Covariance Filtering (RCF), Contrast-Limited Adaptive Histogram Equalization (CLAHE), Non-Local Means Filtering (NLF), and Guided Filtering (GF) to correct for uneven illumination and remove noise in images. Gu et al. [22] suggested an enhancement for severely underlit images based on a Retinex model with fractional-order variation, while Hao et al. [23] advanced the field by introducing a semi-decoupled decomposition-based method for low-light image enhancement. Moreover, Zhang et al. [24] utilized a bidirectional perceptual similarity method to enhance underexposed images. These approaches transform the Retinex approach into a statistical inference problem, solving image enhancement challenges through the application of various constraints with good results. 
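The single-scale Retinex estimate underlying SSR is r(x,y) = log S(x,y) − log[G(x,y) * S(x,y)], where G is a Gaussian surround and * denotes convolution. A minimal NumPy sketch follows; the separable Gaussian blur, edge padding, and min–max stretch for display are illustrative choices of this sketch, not those of any cited method:

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1-D Gaussian kernel, truncated at ~3 sigma and normalized."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur: filter rows, then columns (edge-padded)."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    padded = np.pad(img, pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def ssr(img: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Single-scale Retinex: log(S) - log(Gaussian * S)."""
    s = img.astype(np.float64) + 1.0  # avoid log(0)
    r = np.log(s) - np.log(blur(s, sigma))
    # Stretch the reflectance estimate back to [0, 255] for display
    r = (r - r.min()) / max(r.max() - r.min(), 1e-12) * 255.0
    return np.round(r).astype(np.uint8)

# Synthetic test: a checkerboard pattern under a strong illumination ramp
grad = np.linspace(30, 220, 64)[None, :].repeat(64, 0)
pattern = (np.indices((64, 64)).sum(0) % 2) * 20
img = np.clip(grad + pattern, 0, 255).astype(np.uint8)
out = ssr(img, sigma=10)
```

Because the Gaussian surround approximates the slowly varying illumination L(x,y), subtracting its log suppresses the ramp and emphasizes local reflectance detail, which is why SSR is sensitive to edges but can over-enhance, as noted above.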
However, when considering images characterized by a complex environment, such as those commonly observed in high dam inspection scenarios, these algorithms still suffer from distortion and limited detail restoration. The purpose and merits of each method are listed in Table 2.

3.1.3. Deep Learning Methods for Image Enhancement

Traditional image enhancement methods often face problems after adjusting the color, brightness, and contrast of the image, such as amplified noise, loss of details, and color distortion. With the advancement of deep learning approaches, they have recently been widely applied in image enhancement tasks by many scholars. Image enhancement methods for low-light images based on deep learning are data-driven approaches in which the model autonomously learns the features of images under standard lighting conditions. This method aims to diminish the impact of light on the image, enhancing its overall quality.
The prevalent structure for image enhancement algorithms based on deep learning is the encoder–decoder architecture. LLNet [25] stands out as the pioneering algorithm for enhancing low-light images through deep learning, achieving remarkable outcomes. In LLNet, a modified version of the stacked-sparse denoising autoencoder is utilized to learn from synthetically darkened and noise-injected training samples, effectively enhancing images captured in natural low-light settings or those affected by hardware degradation. Ren et al. [26] aimed to improve the visibility of low-light images using a trainable hybrid network, which incorporates an encoder–decoder architecture to estimate global content, while introducing a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details. Tao et al. [27] proposed a convolutional neural network (CNN)-based model to denoise low-light images, using a bright channel prior to estimate the transmission parameter. Some scholars have combined the concept of Retinex with deep learning algorithms such as CNNs to conduct research on their image enhancement effect. For example, Li et al. [28] introduced a method for low-light image enhancement based on a constrained low-rank approximation Retinex model. Meanwhile, Cai et al. [29] introduced an enhancement algorithm based on Retinex and deep learning, replacing the CNN with a weighted least squares method for image decomposition. Li et al. [30] presented LightenNet, which employs a CNN to generate illumination maps and traditional methods for enhancement. Retinex-Net [31] has been presented by Wei et al., which consists of a Decom-Net for splitting the input image into lighting-independent reflectance and structure-aware smooth illumination and an Enhance-Net for illumination adjustment. These methods fully combine the advantages of Retinex and CNN, obtaining better enhancement effects for specific scenes.
Some scholars have paid attention to the role of Generative Adversarial Networks (GANs) in image enhancement. Shi et al. [32] proposed Retinex-GAN, an early attempt at combining Retinex with GANs. Yang et al. [33] proposed a method for enhancing low-light images through adversarial training using both paired and unpaired data. Notably, all of the above methods were trained only on paired data, while not utilizing the vast amount of available unpaired data. Addressing this gap, Chen et al. [34] and Jiang et al. [35] subsequently introduced the Deep Photo and EnlightenGAN methods, respectively, capitalizing on unpaired data for training to achieve image enhancement. Meanwhile, there have been many recent studies on deep learning-based image enhancement. For instance, Wang et al. [36] introduced a simple low-light image enhancement model based on the Weber–Fechner law in log space. Lu et al. [37] used gradient prior-assisted networks for low-light image enhancement, while Rasheed et al. [38] focused on a brightness super-resolution deep network using super-resolution methods. Zhou et al. [39] presented a novel multi-feature underwater image enhancement method with an embedded fusion mechanism (MFEF). Considering that underwater images usually have the characteristics of low contrast, color distortion, and blurred details, white balance (WB) and CLAHE input processing methods were utilized to obtain superior quality inputs, and the MFF module was used to allow the multiple forms of features to interact fully while retaining a high level of detail. This allows valuable features to be highlighted while the unhelpful features are inhibited using PCAM. Overall, methods considering multiple colors have great practical value for high-dam image enhancement.
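The log-space idea behind Weber–Fechner-based enhancement can be illustrated with a simple logarithmic tone curve; this is a simplified sketch of the general principle, not the exact model of [36], and the `strength` parameter and the under-exposed test patch are hypothetical:

```python
import numpy as np

def log_enhance(gray: np.ndarray, strength: float = 50.0) -> np.ndarray:
    """Logarithmic brightening inspired by the Weber-Fechner law.

    Perceived brightness grows roughly logarithmically with luminance,
    so a log curve lifts dark regions while compressing highlights.
    """
    x = gray.astype(np.float64) / 255.0
    y = np.log1p(strength * x) / np.log1p(strength)  # maps [0,1] -> [0,1]
    return np.round(y * 255.0).astype(np.uint8)

# Hypothetical under-exposed patch with uniform low intensity
dark = np.full((8, 8), 20, dtype=np.uint8)
bright = log_enhance(dark)
```

Dark pixels are lifted strongly while the endpoints 0 and 255 are preserved, mimicking the perceptual compression that such models exploit.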
Some researchers have recently paid attention to the value of transformers for image enhancement. Wang et al. [40] presented LLFormer, a transformer-based low-light enhancement method. The core components of LLFormer are the axis-based multi-head self-attention and cross-layer attention fusion block, which reduce the model's computational complexity to linear. Cai et al. [41] proposed a novel transformer-based method, Retinexformer, for low-light image enhancement. By analyzing the corruptions hidden in under-exposed scenes and those introduced by the brightening process, perturbation terms were introduced into the original Retinex model to formulate the one-stage Retinex-based framework (ORF). Then, they designed an Illumination-Guided Transformer (IGT) that utilizes the illumination information captured by the ORF to direct the modeling of long-range dependencies and interactions between regions with different lighting conditions. The Retinexformer was derived by integrating the IGT and ORF. In general, image enhancement methods combined with transformers provide better performance. The performance metrics for the deep learning methods used for image enhancement detailed above are listed in Table 3.
From the analysis of the aforementioned image enhancement methods, the trends in image enhancement research are depicted in Figure 6.

3.2. Visual Perception Methods for Concrete Defects

Researchers have recently introduced a plethora of vision-based methods for the perception of specific structural defects in concrete. However, there is a noticeable gap in research concerning concrete structures in the high dam setting, underscoring the significance of similar methodologies for reference. In this section, we delve into research findings pertaining to concrete crack detection, the identification of other types of defects, and defect quantification.

3.2.1. Visual Perception Methods for Concrete Cracks

Most structural defects evolve from cracks, with many types of internal damage manifesting in the form of cracks. An image of cracks on the concrete surface of a high dam is shown in Figure 7. The original images collected during inspections have complex backgrounds, making detection and quantification challenging. There have been numerous recent studies on visual perception methods for crack detection, targeting structures such as dams, bridges, roads, tunnels, and so on. At present, visual perception methods for concrete crack detection predominantly hinge on traditional image processing, machine learning, and deep learning methodologies.
(1) Crack Detection Methods based on Traditional Image Processing
In traditional image processing approaches for crack detection, threshold-based methods are used to differentiate between non-crack pixels and pixels within crack areas through the comparison of threshold values, identifying pixels with higher values as cracks. Fujita et al. [42] introduced a threshold-based method for concrete crack detection, which can handle irregular lighting conditions, shadows, and imperfections. However, its sensitivity and adaptability are contingent upon the threshold values. A later improvement by Fujita et al. [43] adopted a probabilistic statistical method for coarse crack localization and a local adaptive threshold technique for fine crack detection, enhancing the effectiveness of identification. Moreover, Talab et al. [44] proposed a floor crack detection method based on the Otsu algorithm, utilizing grayscale images as inputs and applying appropriate threshold values for background and foreground classification. This method leverages Sobel filtering to eliminate residual noise and the Otsu algorithm for crack detection, thereby improving the identification accuracy. Asdrubali et al. [45] developed an algorithm to detect thermal bridge contours using infrared thermal imagery, incorporating the Kantorovich operator to enhance the thermal images. This method analyzes histograms and employs suitable threshold values for the effective detection of contours. Chen et al. [46] developed a method for the detection of concrete cracks based on the Otsu algorithm and differential images. Their experimental results showed that cracks can be discerned from complex backgrounds using their method. These methods primarily focus on utilizing preprocessing, filtering, thresholding, edge detection, and other techniques to enhance the capability of the model to detect and recognize cracks and defects. They have been shown to be effective in specific scenarios, reducing the costs associated with manual labor.
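The Otsu step that several of these methods build on selects the threshold maximizing the between-class variance of the gray-level histogram. A minimal NumPy implementation is sketched below; the synthetic "crack" image (a dark streak on lighter concrete) is hypothetical:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Otsu's method: choose the gray level maximizing the
    between-class variance of foreground/background pixels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # cumulative mean up to each level
    mu_t = mu[-1]                         # global mean
    # Between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # undefined at omega = 0 or 1
    return int(np.argmax(sigma_b))

# Synthetic image: dark crack pixels (~40) on lighter concrete (~200)
rng = np.random.default_rng(1)
img = np.clip(rng.normal(200, 10, (64, 64)), 0, 255).astype(np.uint8)
img[30:34, :] = np.clip(rng.normal(40, 5, (4, 64)), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
crack_mask = img <= t  # pixels darker than the threshold -> crack candidates
```

On such bimodal data the threshold falls between the two intensity modes, separating crack candidates from the background; the limitations noted below arise precisely when uneven lighting makes the histogram far less bimodal than this synthetic example.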
Threshold-based methods detect cracks by leveraging the difference in grayscale values between the crack area and its surroundings. However, in high dam inspection images, uneven lighting, noise blur, and shadows make the contrast between crack grayscale values and background regions less distinct, leading to difficulties in crack identification. The merits and limitations of each method are listed in Table 4.
(2) Crack Detection Methods based on Machine Learning
As machine learning algorithms continue to advance, their applications are becoming increasingly widespread across various domains. In recent years, researchers have started to explore the use of machine learning algorithms to address the challenge of perceiving surface cracks in concrete structures, achieving some remarkable results. Machine learning algorithms possess robust data processing and pattern recognition capabilities, enabling them to automatically extract valuable information from large datasets and discover the intrinsic patterns and relationships within the data. This unique capability provides machine learning algorithms with distinct advantages for the perception of surface cracks in concrete structures, enhancing both their accuracy and robustness in perception. For example, Liu et al. [47] employed a support vector machine (SVM) as a classifier to detect cracks in images. This process entailed preprocessing the images to extract features based on pixel intensity, with the SVM training process consistently aiming for global optimization to prevent overfitting. Luo et al. [48] introduced a crack detection method that utilizes a random classifier to distinguish between crack and non-crack images. Fisher et al. [49] presented an SVM-based approach for detecting cracks in embankments using passive seismic data. Fan et al. [50] devised an automated crack detection technique for surfaces of hydropower station dams, leveraging local–global clustering. Nishikawa et al. [51] developed image filters through genetic encoding to eliminate residual noise, proposing a highly resilient method for the identification of surface cracks on concrete structures. Gordan et al. [52] assessed the efficacy of classical edge detection operators in crack detection, proposing a fuzzy C-means clustering edge detection algorithm to aid in crack identification in infrared images.
These methods leverage the powerful capabilities of machine learning algorithms, achieving significant progress in enhancing both their detection accuracy and robustness. Feature extraction methods based on machine learning, similar to those based on traditional image processing, require the manual design of features. Different categories of defects require different types of features for description. Further research is warranted to explore the robustness and adaptability of feature extraction and identification methods for diverse defects in high concrete dam structures. The merits and limitations of each method are listed in Table 5.
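The general pipeline shared by these approaches, manually designed features followed by a trained classifier, can be sketched on synthetic data. In this illustrative sketch a simple perceptron stands in for the SVM and clustering classifiers cited above; all patches, features, and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def patch_features(patch: np.ndarray) -> np.ndarray:
    """Hand-crafted features, as in classical ML crack detection:
    mean intensity, intensity spread, and horizontal-gradient energy."""
    g = patch.astype(np.float64)
    grad = np.abs(np.diff(g, axis=1)).mean()
    return np.array([g.mean(), g.std(), grad])

def make_patch(crack: bool) -> np.ndarray:
    """Synthetic 16x16 concrete patch, optionally with a dark crack streak."""
    p = rng.normal(200, 8, (16, 16))
    if crack:
        p[:, 7:9] = rng.normal(50, 5, (16, 2))
    return np.clip(p, 0, 255)

# Tiny labelled training set (labels: 1 = crack, 0 = intact)
X = np.array([patch_features(make_patch(i % 2 == 0)) for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])

# Normalize features, then train a linear classifier with the perceptron rule
Xn = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(3), 0.0
for _ in range(50):
    for xi, yi in zip(Xn, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

def predict(patch: np.ndarray) -> int:
    f = (patch_features(patch) - X.mean(0)) / X.std(0)
    return int(f @ w + b > 0)
```

The key point mirrored from the literature is the division of labor: the discriminative power comes from the hand-designed features (crack patches have lower mean intensity and higher gradient energy), while the classifier only learns a decision boundary over them, which is why such methods must redesign features for each new defect category.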
(3) Crack Detection Methods based on Deep Learning
With the rapid development of deep learning methods in fields such as object classification, recognition, and segmentation, many researchers have begun exploring the development of crack detection methods based on deep learning techniques. Dung et al. [53] trained a Fully Convolutional Network (FCN) model for semantic segmentation to extract cracks from images, providing detailed crack maps with shapes and distributions. Ni et al. [54] combined GoogLeNet-based classification with a crack delineation network (CDN) to achieve pixel-level crack detection. In crack detection methods based on deep learning, users only need to input the image into the end-to-end framework to obtain a crack map with detailed information as an output. As shown in Figure 8, a concrete image is input into a pre-trained neural network. After passing through three sets of convolutional layers (encoders) and three sets of deconvolutional layers (decoders), the crack shapes are segmented and displayed as a crack map.
Numerous crack detection methods grounded in deep learning have recently emerged. Feng and Zhang et al. [55,56,57] carried out a series of deep learning-centric investigations concerning defect identification and detection in hydropower station spillway dams. Their research introduced diverse methodologies, including a classification recognition approach for prominent defects, such as cracks in spillway dam structures, leveraging transfer learning and convolutional neural networks. This method effectively identifies defects such as cracks, seepage, and spalling. Additionally, they proposed an automated perception method for pixel-level crack detection on dam surfaces, employing deep convolutional networks, along with a real-time inspection method for flood discharge tunnel defects based on deep learning techniques. Through employing various deep learning networks and optimizing them for spillway dam defect detection, significant advancements were achieved.
Modarres et al. [58] proposed a CNN methodology for crack recognition and classification, utilizing the ReLU activation function to mitigate the gradient explosion problem. Pang et al. [59] developed a technique for detecting visible cracks in hydropower hub facilities by integrating ResNetv2 and Faster-RCNN as a feature extraction network for deep feature extraction. Li et al. [60] suggested a synchronized localization method for concrete crack detection, incorporating crack and geographical information to detect and categorize cracks under challenging conditions without external location data. Deng et al. [61] utilized UAVs to collect images of hydropower station dam surfaces and established a CNN for discerning dam surface cracks. Cheng et al. [62] proposed an enhanced FCN for dam surface crack detection and performed a quantitative crack analysis based on relevant imaging principles.
Zou et al. [63] presented DeepCrack, an end-to-end crack detection network architecture that enables automatic detection through the learning of high-level crack features and shows adaptability to samples with noisy labels. Li et al. [64] proposed an automatic crack classification and localization method for concrete dam structures using deep residual neural networks and transfer learning. Zhu et al. [65] devised an automatic detection and diagnosis method for damage in hydraulic structures, employing the streamlined parameters of the Xception backbone network for efficient crack feature extraction. To address the loss of detail in microcrack images and the limited target information in hydraulic concrete structures, they developed an image semantic segmentation algorithm based on DeepLab V3+ and an adaptive attention mechanism. When trained on a diverse dataset of concrete crack images from hydraulic structures, this method could effectively identify various crack types across complex background scenarios.
Novel deep learning networks have recently been applied for defect detection on concrete surfaces, achieving good performance. For example, Wu et al. [66] introduced a pixel-level, real-time crack segmentation method based on the LCA-YOLOv8-seg model, proposing a lightweight LCANet backbone network and a lighter prototype mask branch to reduce model complexity. Zhang et al. [67] presented UTCD-Net, a pixel-level crack detection network integrating Transformer and CNN models to capture both local and global crack features, facilitated by an attention fusion module to refine the segmentation crack morphology results. Xiang et al. [68] introduced DTrC-Net, a novel dual-encoder network structure combining the transformer and CNN to simultaneously capture local information and long-range dependencies for crack image segmentation, enhancing the extraction of both global contextual information and local crack features. The merits and selected performance metrics for each method are listed in Table 6.

3.2.2. Visual Perception Methods for Other Defects

There is a diverse range of surface defects of concrete on high dams. This section delves into the analysis of relevant methods and achievements, focusing separately on the detection of multi-type defects and underwater concrete defects.
(1) Detection of Multiple Types of Concrete Defects
In concrete structures, spalling is a serious issue that causes the exposure and potential corrosion of the reinforcing steel due to the loss of its protective concrete layer. Additionally, spalling is regarded as a significant indicator of severe damage to structural components during earthquakes. An example of spalling is illustrated in Figure 9. A limited number of studies have focused on the visual perception of spalling. German et al. [69] developed a spalling identification method that combines a global adaptive threshold algorithm with template matching and morphological operations, which is capable of measuring the depth and length of concrete column spalling. Dawood et al. [70] proposed a spalling detection method based on regression analysis and a hybrid algorithm, incorporating techniques such as image smoothing, thresholding, histogram equalization, Gaussian blur, color transformation, intelligent filtering, and image scaling. Furthermore, Gao et al. [71] developed a spalling classifier by retraining the VGG-16 network through deep transfer learning, aimed at identifying and categorizing spalling in concrete structures. Hoang et al. [72] introduced a machine learning method for classifying the severity of concrete spalling based on image texture analysis and a novel jellyfish search optimization algorithm. Nguyen et al. [73] suggested a method for classifying the severity of concrete spalling utilizing a meta-heuristic-optimized extreme gradient boosting machine and a deep CNN.
Some scholars have recently researched methods for the perception of multiple types of defects. Huang et al. [74] proposed an automatic multi-damage detection method based on an improved Faster R-CNN, which is used to identify and locate different types of concrete dam damage; this method performed well in detecting cracks, spalling, and sedimentation. Zhao et al. [75] proposed a system combining the You Only Look Once v5s-HSC (YOLOv5s-HSC) algorithm and 3D photogrammetry reconstruction technology for the detection of concrete dam damages. This system incorporates Swin Transformer blocks and coordinate attention modules to enhance its feature extraction capabilities and employs a projection method to precisely locate and map the detected damages. Li et al. [76] utilized the lightweight YOLOv5 and Adaptive Spatial Feature Fusion (ASFF) technology to integrate input data enhancement, feature extraction, fusion, and multi-scale training processes, proposing a real-time automatic identification and quantification method for multiple defects in concrete dams. This method can effectively deal with defects such as dam cross-section cracks, water curtains, microcracks, and concrete spalling damages. Minh Dang et al. [77] proposed a transformer-based concrete defect identification model for the perception of four types of dam concrete defects. Hong et al. [78] introduced a robotic solution for the automated detection, counting, and reconstruction of surface defects in dam spillways. This approach merges deep learning methodologies with visual 3D reconstruction techniques, tackling challenges arising from limited real-world dam defect datasets and the incomplete registration of minor defects during 3D mesh model fusion. It can be seen from these studies that interest in multi-type defect perception methods for high dam inspections has been increasing among scholars.
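The box-level detectors cited above (Faster R-CNN and YOLO variants) all score localization quality with intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero when boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333333333333333
```

During training, a predicted defect box is typically counted as a true positive when its IoU with a labeled box exceeds a threshold (commonly 0.5), which is how the detection accuracies reported by these studies are computed.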
(2) Underwater Concrete Defect Detection
For submerged concrete structures, underwater concrete testing is imperative in high dam inspections. Underwater concrete structures are vulnerable to environmental influences such as water quality and hydrostatic pressure, potentially resulting in diminished concrete integrity, crack propagation, and reinforcement corrosion. Significant progress has been made in the detection of defects on the surface of concrete dams using image processing techniques [79]. Chen et al. [80] proposed an adaptive underwater dam surface edge detection algorithm based on multi-structure and multi-scale elements. Based on the differences between cracks and background regions in pre-processed images, crack detection algorithms based on local or global features have also been used to detect underwater cracks on dam surfaces. Fan et al. [81] applied prior knowledge obtained from the source domain to underwater crack image segmentation by using a multi-level adversarial transfer network to reduce the data labeling effort and integrated an attention mechanism into the segmentation network to achieve higher segmentation accuracy. Li et al. [82] researched a lightweight semantic segmentation and transfer learning method for the pixel-level identification and quantification of underwater dam cracks and developed visualization software with offline and online functionalities. Furthermore, Qi et al. [83] introduced a three-step method for the automatic detection of micro-cracks in concrete during underwater structure operations, combining traditional approaches with deep learning techniques to accurately locate the cracks. Xin et al. [84] proposed a precise algorithm for identifying underwater surface cracks in dams from collected images. It employs adaptive histogram equalization to address the problem of uneven illumination, cluster analysis to extract crack areas, and Gaussian mixture models for classification. Their experimental results demonstrated a 90.1% extraction accuracy with low error rates. The merits and performance of each method are listed in Table 7.
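The kind of pre-processing pipeline used in [84] can be approximated in a few lines of numpy: global histogram equalization stands in for the adaptive variant, and Otsu thresholding stands in for the cited cluster analysis (both substitutions are simplifications for illustration only).

```python
import numpy as np

def equalize(gray):
    """Global histogram equalization for an 8-bit grayscale image
    (a simple stand-in for the adaptive variant)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    return (cdf[gray] * 255).astype(np.uint8)

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance,
    used here in place of cluster analysis to split dark crack pixels
    from the lighter background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0
        m1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# synthetic bimodal image: 500 dark (crack-like) and 500 bright pixels
img = np.uint8(np.r_[np.full(500, 60), np.full(500, 180)].reshape(25, 40))
eq = equalize(img)                      # contrast-stretched image
mask = img < otsu_threshold(img)        # dark pixels flagged as crack candidates
print(mask.sum())  # → 500
```

Real underwater imagery adds turbidity, color cast, and uneven illumination, which is why the cited work uses adaptive equalization and a learned Gaussian mixture classifier rather than a single global threshold.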

3.3. Visual Perception Methods for Defect Quantification

To assess the potential risks of structural defects, it is generally necessary to quantify characteristics such as the size, shape, and depth of the defects. Some existing defect quantification methods are detailed below. Ni et al. [85] proposed a crack width estimation method based on Zernike moment operators. Wang et al. [86] developed a method to automatically measure the crack width using binary crack images. Rezaiee-Pajand et al. [87] introduced a concrete crack detection approach utilizing genetic algorithms and finite element modeling to optimize crack detection performance and conduct a non-linear analysis of two-dimensional crack features, ultimately determining the location and size of the cracks. Ref. [88] investigated a UAV-based machine vision method for bridge crack recognition and width quantification through hybrid feature learning, which achieved over 90% accuracy in quantifying real bridge crack widths. Zhang et al. [89] proposed a crack quantification framework based on voxel reconstruction and Bayesian data fusion, which can identify the geometric properties of entire cracks using a set of unordered inspection images. Chen et al. [90] introduced a dam surface crack detection network based on deep learning, primarily addressing the semantic segmentation of dam surface cracks. Ding et al. [91] developed an improved calibration method that establishes the full-field scale of UAV gimbal cameras, allowing the scale factor to be conveniently indexed at various measurement attitudes without the need for recalibration. Additionally, an independent boundary refinement transformer (IBR-Former) was proposed for crack segmentation from UAV-captured images, wherein the IBR scheme can further refine the crack boundary in a model-agnostic manner. The proposed framework can quantify cracks with widths less than 0.2 mm.
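As a minimal illustration of width measurement from a binary crack image in the spirit of [86] (not the cited algorithm), the per-column pixel count of a roughly horizontal crack approximates its local width; the image scale `pixel_mm` is an assumed calibration value that, in UAV workflows such as [91], comes from camera calibration.

```python
import numpy as np

def crack_width_stats(mask, pixel_mm=0.1):
    """Estimate crack width from a binary mask of a roughly horizontal crack:
    the count of crack pixels in each column approximates the local width.
    `pixel_mm` is an assumed image scale (millimetres per pixel)."""
    widths = mask.sum(axis=0)          # crack pixels per column
    widths = widths[widths > 0]        # ignore columns the crack misses
    return widths.mean() * pixel_mm, widths.max() * pixel_mm

mask = np.zeros((20, 50), dtype=np.uint8)
mask[9:12, 5:45] = 1                   # synthetic 3-pixel-wide crack
mean_w, max_w = crack_width_stats(mask)
print(mean_w, max_w)  # both ≈ 0.3 mm
```

Real methods measure width perpendicular to the crack skeleton rather than along image columns, which matters for inclined or curved cracks; the sketch only conveys the pixel-counting principle and the role of the metric scale factor.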
Defect quantification provides vital data support for safety evaluations, and further research is warranted to enhance the precision and adaptability of these methods. The merits and performance of each method are listed in Table 8.
The classification of methods for the visual perception of concrete defects is depicted in Figure 10. The developmental trajectory has progressed from methods based on image processing and identification to those involving machine learning, deep learning, and object recognition. Due to insufficient research on the mechanisms and development patterns of structural damage and defects in high dams, along with a lack of in-depth analysis of damage characteristics at different stages, the designed perception methods often have limitations. Additionally, due to complex environmental factors such as low light, distortion, and sedimentation, the accuracy of defect localization is insufficient. The degree of automation in perception remains low, and the detection accuracy requires further improvement. Therefore, there are still many tasks to be undertaken to enable the effective identification and quantification of structural damage and defects in high dams.

4. Environmental Visual Perception Methods for High Dam Safety Inspection

In the task of intelligent safety inspections of high dams, methods allowing for the perception and understanding of unknown target entities in the complex inspection environment are key technologies ensuring the completion of adaptive inspection tasks. Due to the reliable and excellent perception capabilities of visual perception methods, they can obtain intuitive obstacle information from the inspection environment, providing data for the adaptive path planning of inspection equipment and ensuring the completion of inspection tasks.
This section reviews the development of, and recent research on, obstacle visual perception methods for use in complex inspection environments. In the environment of high dam safety inspections, obstacle visual perception is performed against a static background. Methods for obstacle perception against a static background can be divided into traditional image detection methods and deep learning-based target detection methods.

4.1. Obstacle Perception Methods Based on Traditional Image Detection

The traditional methods for image detection are classified into frame difference methods and texture feature methods. Mukojima et al. [92] proposed a background subtraction method that is suitable for mobile cameras, which calculates the frame correspondence between the current and reference image sequences, detecting obstacles through image subtraction of corresponding frames. Regarding texture feature-based detection methods, Tastimur et al. [93] applied HSV color transformation, image difference extraction, gradient computation, filtering, and feature extraction for object detection. This method employs a single camera to calculate the distance between the obstacle and the camera. Selver et al. [94] presented a robust approach that divides video frames into four regions, with each region being processed through wavelet filtering at different scales to enhance trajectory edges and filter noise. Teng et al. [95] utilized gradient histograms as extracted features and employed SVM as the learning algorithm to extract features from the superpixels of obstacles. Classic methods for image feature extraction include the Deformable Part Model (DPM) [96], Scale-Invariant Feature Transform (SIFT) [97], and Speeded-Up Robust Features (SURF) [98]. These methods extract image gradients and edge features for post-processing and offer stable detection performance; however, this comes at the cost of high computation time and limited robustness, so they fail to meet real-time requirements.
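The frame-difference idea underlying [92] can be sketched as follows (a simplified numpy version for aligned frames; the cited work additionally computes frame correspondence to handle a moving camera):

```python
import numpy as np

def frame_difference_mask(prev, curr, diff_t=25, min_pixels=30):
    """Classic frame-difference detection: threshold the absolute grayscale
    difference between two aligned frames, and report whether enough changed
    pixels exist to flag a potential obstacle. Thresholds are illustrative."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = diff > diff_t                      # per-pixel change map
    return mask, mask.sum() >= min_pixels     # detection decision

background = np.full((48, 64), 100, dtype=np.uint8)   # static reference frame
frame = background.copy()
frame[10:20, 20:30] = 200                             # synthetic intruding object
mask, detected = frame_difference_mask(background, frame)
print(detected, mask.sum())  # → True 100
```

The cast to `int` before subtraction avoids unsigned-integer wraparound; in practice the changed-pixel mask is further cleaned with morphological filtering and grouped into connected regions before a detection is declared.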

4.2. Obstacle Perception Methods Based on Deep Learning

In recent years, obstacle detection methods based on deep learning have been extensively applied. He et al. [99] proposed a rail traffic obstacle detection method based on an improved CNN. Addressing small object detection and multi-viewpoint scenarios, Li et al. [100] presented a cross-layer fusion multi-object detection and recognition algorithm based on Faster R-CNN, using a five-layer VGG16 structure to gather more feature information. As network structures have deepened and new frameworks have emerged, several new obstacle perception methods have been proposed. He et al. [101] enhanced the Mask R-CNN model to improve the augmentation and fusion of information at different scales, enhancing feature extraction and fusion performance. Xu et al. [102] explored a method for the rapid detection of dynamic obstacles around moving agricultural machinery using a panoramic camera, employing optical flow algorithms to detect moving obstacles in panoramic images. Furthermore, Xue et al. [103] developed a farmland obstacle detection approach based on an improved YOLOv5s algorithm with CIoU and anchor box scale clustering. Yasin et al. [104] investigated a night vision obstacle detection and avoidance method based on bio-inspired visual sensors. Qiu et al. [105] proposed a method for the visual detection and tracking of moving obstacles in paddy fields based on improved YOLOv3 and Deep SORT, achieving obstacle detection and localization. In addition, Lalak et al. [106] studied multi-obstacle detection for UAVs based on monocular vision. Chen et al. [107] introduced a real-time recognition and avoidance method for both static and dynamic obstacles in UAV navigation point clouds, demonstrating certain advantages in tracking robustness, energy cost, and computation time. Chang et al. [108] suggested a spatial attention fusion obstacle detection method based on millimeter-wave radar and visual sensors, offering a novel approach that integrates visual methods with other sensors for obstacle recognition. The merits and limitations of each method are listed in Table 9.
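The anchor box scale clustering mentioned for [103] is typically a k-means over bounding-box widths and heights from the training set; the sketch below uses Euclidean distance and a deterministic initialization for brevity, whereas YOLO-style pipelines usually cluster with an IoU-based distance.

```python
import numpy as np

def anchor_kmeans(wh, k=3, iters=50):
    """Plain k-means on bounding-box (width, height) pairs, the usual way
    anchor scales are chosen for YOLO-style detectors. Initial centers are
    picked at area quantiles to keep the demo deterministic."""
    order = np.argsort(wh.prod(axis=1))
    centers = wh[order[np.linspace(0, len(wh) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # assign each box to its nearest center, then recompute the means
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = wh[labels == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]  # sorted by box area

# synthetic boxes scattered around three true scales
rng = np.random.default_rng(1)
wh = np.concatenate([s + rng.normal(0, 1, (40, 2))
                     for s in ([10, 10], [30, 20], [60, 50])])
print(anchor_kmeans(wh, k=3))  # three centers near the true scales
```

Matching anchor scales to the dataset's actual box-size distribution is what lets a fixed-anchor detector such as YOLOv5s converge faster and localize small obstacles more reliably.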
In summary, obstacle identification methods are gradually evolving towards the use of artificial intelligence techniques such as neural networks for object recognition. However, these methods still require comprehensive improvements to adapt to the complex tasks associated with safety inspections at high dams, considering the specific tasks and inspection carrier control methods. As shown in Figure 11, developing highly adaptable obstacle perception methods for inspection environments, based on the analysis of intelligent inspection theories and technologies in line with the safety requirements of high dams, is a worthwhile avenue for further research.

5. Conclusions

5.1. Summary

In this article, we provided a literature review on visual perception technologies for the intelligent inspection of high dam hubs. We first outlined the context and requirements of intelligent inspections of high dams, clarified the modes of application of visual perception technologies, and analyzed the process of visual defect perception during high dam inspections. We also analyzed the characteristics of inspection images captured at high dams and detailed the research outcomes relating to image enhancement methods from the perspectives of traditional histogram-based, Retinex theory-based, and deep learning-based approaches, outlining future directions for the development of image enhancement technologies tailored to the needs of high dam inspections.
We conducted a detailed analysis of the current state of research regarding visual perception methods for concrete surface defects in high dams. Traditional image processing methods primarily rely on preprocessing techniques, filtering methods, and thresholding to improve defect detection and recognition capabilities. While these methods have shown some effectiveness in specific scenarios, they are sensitive to threshold selection and have limited adaptability. Researchers have shifted their focus towards machine learning-based methods, which utilize support vector machines, clustering, image filtering, and preprocessing techniques to provide improved detection accuracy and robustness. However, these methods still require manual feature design and heavily depend on the empirical rules of the machine learning algorithms, posing challenges in extracting and recognizing diverse defect features. With the rapid development of deep learning, researchers have dedicated efforts to developing deep learning-based concrete surface defect perception methods. Convolutional Neural Networks (CNNs), VGG16, Faster R-CNN, the Single Shot Detector (SSD), ResNet, and other deep learning network models, along with their improved variants, have been applied to study the perception of concrete surface defect types. The recent literature has focused on utilizing state-of-the-art network models such as the YOLO series and Vision Transformer models, which offer strong feature recognition and contextual attention capabilities, thereby achieving higher accuracy compared to existing models. Deep learning-based semantic segmentation algorithms have been widely used for concrete defect morphology perception, with various deep learning architectures being employed to improve the precision and efficiency in this context. Some researchers have also attempted to characterize and quantify crack defects from a three-dimensional perspective.
Existing research has predominantly focused on concrete surface crack defects, with increasing attention being paid to the detection of various types of defects.
In terms of environmental perception, multiple deep learning networks are being used for obstacle detection, and some scholars have begun to focus on the recognition and boundary differentiation of high dam structures.
In summary, the field of visual perception for high dam inspections has seen significant advancements—particularly in the areas of image enhancement, defect detection, and environmental perception—driven by the integration of traditional image processing techniques, machine learning approaches, and deep learning methodologies. Future research trends may include further improvements to model robustness, the reduction in computational resources required, and the development of automated inspection systems that are suitable for real-world deployment.

5.2. Outlook

From the above analysis, visual perception methods can address some of the issues encountered in high dam safety inspections. However, due to the significant limitations of traditional image processing methods and the high data and computational demands of machine learning-based approaches, many challenges and opportunities remain in utilizing visual perception technology more effectively for intelligent safety inspections, in both research and industrial applications.
  • Conducting intelligent inspection tasks through visual collection systems poses challenges related to factors such as complex objects, harsh environments, heavy inspection workloads, and a high incidence of emergent inspection tasks. The data captured by visual sensors on the inspection equipment may exhibit issues such as unclear images, jitter, and distortion. Systematically analyzing and semantically interpreting the complex inspection environment, and developing a highly adaptive collection system along with appropriate data preprocessing methods, are directions worthy of research.
  • Long-term inspections generate a large volume of image data, presenting challenges regarding data management, analysis, optimization, and visualization. Thus, there is an urgent need for research into the 2D and 3D visualization of inspection data, intelligent analysis, and high dam health assessment and risk prediction based on visual perception.
  • For the identification and quantification of structural defects, it is urgent to address issues such as efficient and precise parallel identification, localization, quantification, and 3D perception. The existing methods have limited defect sample data and low robustness. Future research can focus on developing visual perception methods based on small-sample/zero-sample learning, in order to solve the problem of limited defect data; undertaking multi-task learning method research to extract valuable information from multiple related tasks, in order to enhance the generalization ability of the designed algorithms; carrying out studies on multi-source heterogeneous data fusion combining radar, vibration, laser, and other sensors with visual sensors, in order to improve identification accuracy and efficiency of inspections; and exploring methods combining visual perception with structural modal analysis, in order to enhance the confidence level in safety evaluations.
  • In the task of intelligent patrol inspections for the safety of high dams, efficiently and accurately perceiving the complex inspection environment remains a challenge. To combine the advantages of existing methods with semantically enriched on-site environments, exploring multi-sensor fusion perception methods is a viable technical approach.

Author Contributions

Conceptualization, Z.P., Z.L. and L.L.; methodology, Z.P.; investigation, Z.P., D.L. and S.Z.; writing—original draft preparation, Z.P. and L.L.; writing—review and editing, Z.P., D.L. and S.Z.; visualization, S.Z.; supervision, L.L.; project administration, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number U21A20157.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ye, X.W.; Dong, C.Z.; Liu, T. A review of machine vision-based structural health monitoring: Methodologies and applications. J. Sens. 2016, 2016, 7103039. [Google Scholar] [CrossRef]
  2. Feng, D.; Feng, M.Q. Computer vision for SHM of civil infrastructure: From dynamic response measurement to damage detection—A review. Eng. Struct. 2018, 156, 105–117. [Google Scholar] [CrossRef]
  3. Xu, Y.; Brownjohn, J.M.W. Review of machine-vision based methodologies for displacement measurement in civil structures. J. Civ. Struct. Health Monit. 2018, 8, 91–110. [Google Scholar] [CrossRef]
  4. Spencer, B.F., Jr.; Hoskere, V.; Narazaki, Y. Advances in computer vision-based civil infrastructure inspection and monitoring. Engineering 2019, 5, 199–222. [Google Scholar] [CrossRef]
  5. Dong, C.Z.; Catbas, F.N. A review of computer vision–based structural health monitoring at local and global levels. Struct. Health Monit. 2021, 20, 692–743. [Google Scholar] [CrossRef]
  6. Dam Safety Management: Pre-Operational Phases of the Dam Life Cycle; International Commission on Large Dams: Paris, France, 2021.
  7. Regulations on Reservoir Dam Safety Management; State Council of the People’s Republic of China: Beijing, China, 1991.
  8. Xiang, Y.; Jing, M.T. Guidelines for Safety Inspection of Reservoir Dams; China Water Resources and Hydropower Press: Beijing, China, 2021. [Google Scholar]
  9. Federal Guidelines for Dam Safety; FEMA P-93; U.S. Department of Homeland Security: Washington, DC, USA, 2023.
  10. Guo, J.; Ma, J.; García-Fernández, F.; Zhang, Y.; Liang, H. A survey on image enhancement for Low-light images. Heliyon 2023, 9, e14558. [Google Scholar] [CrossRef]
  11. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  12. Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75. [Google Scholar] [CrossRef]
  13. Wang, Q.; Ward, R.K. Fast image/video contrast enhancement based on weighted thresholded histogram equalization. IEEE Trans. Consum. Electron. 2007, 53, 757–764. [Google Scholar] [CrossRef]
  14. Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384. [Google Scholar] [CrossRef]
  15. Li, C.Y.; Guo, J.C.; Cong, R.M.; Pang, Y.W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef] [PubMed]
  16. Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–129. [Google Scholar] [CrossRef] [PubMed]
  17. Jobson, D.J.; Rahman, Z.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
  18. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
  19. Si, L.; Wang, Z.; Xu, R.; Tan, C.; Liu, X.; Xu, J. Image enhancement for surveillance video of coal mining face based on single-scale retinex algorithm combined with bilateral filtering. Symmetry 2017, 9, 93. [Google Scholar] [CrossRef]
  20. Xiao, J.; Peng, H.; Zhang, Y.; Tu, C.; Li, Q. Fast image enhancement based on color space fusion. Color Res. Appl. 2016, 41, 22–31. [Google Scholar] [CrossRef]
  21. Tao, F.; Yang, X.; Wu, W.; Liu, K.; Zhou, Z.; Liu, Y. Retinex-based image enhancement framework by using region covariance filter. Soft Comput. 2018, 22, 1399–1420. [Google Scholar] [CrossRef]
  22. Gu, Z.; Li, F.; Fang, F.; Zhang, G. A novel retinex-based fractional-order variational model for images with severely low light. IEEE Trans. Image Process. 2019, 29, 3239–3253. [Google Scholar] [CrossRef]
  23. Hao, S.; Han, X.; Guo, Y.; Xu, X.; Wang, M. Low-light image enhancement with semi-decoupled decomposition. IEEE Trans. Multimed. 2020, 22, 3025–3038. [Google Scholar] [CrossRef]
  24. Zhang, Q.; Nie, Y.; Zhu, L.; Xiao, C.; Zheng, W.-S. Enhancing underexposed photos using perceptually bidirectional similarity. IEEE Trans. Multimed. 2020, 23, 189–202. [Google Scholar] [CrossRef]
  25. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNET: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
  26. Ren, W.; Liu, S.; Ma, L.; Xu, Q.; Xu, X.; Cao, X.; Du, J.; Yang, M.-H. Low-light image enhancement via a deep hybrid network. IEEE Trans. Image Process. 2019, 28, 4364–4375. [Google Scholar] [CrossRef] [PubMed]
  27. Tao, L.; Zhu, C.; Song, J.; Lu, T.; Jia, H.; Xie, X. Low-light image enhancement using CNN and bright channel prior. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 3215–3219. [Google Scholar]
  28. Li, X.; Shang, J.; Song, W.; Chen, J.; Zhang, G.; Pan, J. Low-Light Image Enhancement Based on Constraint Low-Rank Approximation Retinex Model. Sensors 2022, 22, 6126. [Google Scholar] [CrossRef] [PubMed]
  29. Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062. [Google Scholar] [CrossRef] [PubMed]
  30. Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A convolutional neural network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
  31. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
  32. Shi, Y.; Wu, X.; Zhu, M. Low-light image enhancement algorithm based on retinex and generative adversarial network. arXiv 2019, arXiv:1906.06027. [Google Scholar]
  33. Yang, Q.; Wu, Y.; Cao, D.; Luo, M.; Wei, T. A lowlight image enhancement method learning from both paired and unpaired data by adversarial training. Neurocomputing 2021, 433, 83–95. [Google Scholar] [CrossRef]
  34. Chen, Y.S.; Wang, Y.C.; Kao, M.H.; Chuang, Y.Y. Deep photo enhancer: Unpaired learning for image enhancement from photographs with gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6306–6314. [Google Scholar]
  35. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
  36. Wang, W.; Chen, Z.; Yuan, X. Simple low-light image enhancement based on Weber-Fechner law in logarithmic space. Signal Process. Image Commun. 2022, 106, 116742. [Google Scholar] [CrossRef]
  37. Lu, Y.; Gao, Y.; Guo, Y.; Xu, W.; Hu, X. Low-Light Image Enhancement via Gradient Prior-Aided Network. IEEE Access 2022, 10, 92583–92596. [Google Scholar] [CrossRef]
  38. Rasheed, M.T.; Shi, D. LSR: Lightening super-resolution deep network for low-light image enhancement. Neurocomputing 2022, 505, 263–275. [Google Scholar] [CrossRef]
  39. Zhou, J.; Sun, J.; Zhang, W.; Lin, Z. Multi-view underwater image enhancement method via embedded fusion mechanism. Eng. Appl. Artif. Intell. 2023, 121, 105946. [Google Scholar] [CrossRef]
  40. Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 2654–2662. [Google Scholar]
  41. Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 12504–12513. [Google Scholar]
  42. Fujita, Y.; Mitani, Y.; Hamamoto, Y. A method for crack detection on a concrete structure. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 3, pp. 901–904. [Google Scholar]
  43. Fujita, Y.; Hamamoto, Y. A robust automatic crack detection method from noisy concrete surfaces. Mach. Vis. Appl. 2011, 22, 245–254. [Google Scholar] [CrossRef]
  44. Talab, A.M.A.; Huang, Z.; Xi, F.; HaiMing, L. Detection crack in image using Otsu method and multiple filtering in image processing techniques. Optik 2016, 127, 1030–1033. [Google Scholar] [CrossRef]
  45. Asdrubali, F.; Baldinelli, G.; Bianchi, F.; Costarelli, D.; Rotili, A.; Seracini, M.; Vinti, G. Detection of thermal bridges from thermographic images by means of image processing approximation algorithms. Appl. Math. Comput. 2018, 317, 160–171. [Google Scholar] [CrossRef]
  46. Chen, B.; Zhang, X.; Wang, R.; Li, Z.; Deng, W. Detect concrete cracks based on OTSU algorithm with differential image. J. Eng. 2019, 2019, 9088–9091. [Google Scholar] [CrossRef]
  47. Liu, Z.; Suandi, S.A.; Ohashi, T.; Ejima, T. Tunnel crack detection and classification system based on image processing. In Machine Vision Applications in Industrial Inspection X; SPIE: Bellingham, WA, USA, 2002; Volume 4664, pp. 145–152. [Google Scholar]
  48. Luo, Q.; Ge, B.; Tian, Q. A fast adaptive crack detection algorithm based on a double-edge extraction operator of FSM. Constr. Build. Mater. 2019, 204, 244–254. [Google Scholar] [CrossRef]
  49. Fisher, W.D.; Camp, T.K.; Krzhizhanovskaya, V.V. Crack detection in earth dam and levee passive seismic data using support vector machines. Procedia Comput. Sci. 2016, 80, 577–586. [Google Scholar] [CrossRef]
  50. Fan, X.; Wu, J.; Shi, P.; Zhang, X.; Xie, Y. A novel automatic dam crack detection algorithm based on local-global clustering. Multimed. Tools Appl. 2018, 77, 26581–26599. [Google Scholar] [CrossRef]
  51. Nishikawa, T.; Yoshida, J.; Sugiyama, T.; Fujino, Y. Concrete crack detection by multiple sequential image filtering. Comput.-Aided Civ. Infrastruct. Eng. 2012, 27, 29–47. [Google Scholar] [CrossRef]
  52. Gordan, M.; Georgakis, A. A novel fuzzy edge detection and classification scheme to aid hydro-dams surface examination. In Proceedings of the Swedish Society for Automated Image Analysis (SSBA’06), Uppsala, Sweden, 16–17 March 2006; pp. 121–124. [Google Scholar]
  53. Dung, C.V. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019, 99, 52–58. [Google Scholar] [CrossRef]
  54. Ni, F.T.; Zhang, J.; Chen, Z.Q. Pixel-level crack delineation in images with convolutional feature fusion. Struct. Control Health Monit. 2019, 26, e2286. [Google Scholar] [CrossRef]
  55. Feng, C.; Zhang, H.; Wang, S.; Li, Y.; Wang, H.; Yan, F. Structural damage detection using deep convolutional neural network and transfer learning. KSCE J. Civ. Eng. 2019, 23, 4493–4502. [Google Scholar] [CrossRef]
  56. Feng, C.; Zhang, H.; Wang, H.; Wang, S.; Li, Y. Automatic pixel-level crack detection on dam surface using deep convolutional network. Sensors 2020, 20, 2069. [Google Scholar] [CrossRef] [PubMed]
  57. Feng, C.; Zhang, H.; Li, Y.; Wang, S.; Wang, H. Efficient real-time defect detection for spillway tunnel using deep learning. J. Real-Time Image Process. 2021, 18, 2377–2387. [Google Scholar] [CrossRef]
  58. Modarres, C.; Astorga, N.; Droguett, E.L.; Meruane, V. Convolutional neural networks for automated damage recognition and damage type identification. Struct. Control Health Monit. 2018, 25, e2230. [Google Scholar] [CrossRef]
  59. Pang, J.; Zhang, H.; Feng, C.; Li, L. Research on crack segmentation method of hydro-junction project based on target detection network. KSCE J. Civ. Eng. 2020, 24, 2731–2741. [Google Scholar] [CrossRef]
  60. Li, R.; Yuan, Y.; Zhang, W.; Yuan, Y. Unified vision-based methodology for simultaneous concrete defect detection and geolocalization. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 527–544. [Google Scholar] [CrossRef]
  61. Deng, Y.X.; Luo, X.J.; Li, H.L. Research on dam surface crack detection of hydropower station based on unmanned aerial vehicle tilt photogrammetry technology. Technol. Innov. Appl. 2021, 5, 158–161. [Google Scholar]
  62. Cheng, B.; Zhang, H.; Wang, S. Research on dam surface crack detection method based on full convolution neural network. J. Hydroelectr. Eng. 2020, 39, 52–60. [Google Scholar]
  63. Zou, Q.; Zhang, Z.; Li, Q.; Qi, X.; Wang, Q.; Wang, S. Deepcrack: Learning hierarchical convolutional features for crack detection. IEEE Trans. Image Process. 2018, 28, 1498–1512. [Google Scholar] [CrossRef] [PubMed]
  64. Li, Y.; Bao, T.; Xu, B.; Shu, X.; Zhou, Y.; Du, Y.; Wang, R.; Zhang, K. A deep residual neural network framework with transfer learning for concrete dams patch-level crack classification and weakly-supervised localization. Measurement 2022, 188, 110641. [Google Scholar] [CrossRef]
  65. Zhu, Y.; Tang, H. Automatic damage detection and diagnosis for hydraulic structures using drones and artificial intelligence techniques. Remote Sens. 2023, 15, 615. [Google Scholar] [CrossRef]
  66. Wu, Y.; Han, Q.; Jin, Q.; Li, J.; Zhang, Y. LCA-YOLOv8-Seg: An Improved Lightweight YOLOv8-Seg for Real-Time Pixel-Level Crack Detection of Dams and Bridges. Appl. Sci. 2023, 13, 10583. [Google Scholar] [CrossRef]
  67. Zhang, E.; Shao, L.; Wang, Y. Unifying transformer and convolution for dam crack detection. Autom. Constr. 2023, 147, 104712. [Google Scholar] [CrossRef]
  68. Xiang, C.; Guo, J.; Cao, R.; Deng, L. A crack-segmentation algorithm fusing transformers and convolutional neural networks for complex detection scenarios. Autom. Constr. 2023, 152, 104894. [Google Scholar] [CrossRef]
  69. German, S.; Brilakis, I.; DesRoches, R. Rapid entropy-based detection and properties measurement of concrete spalling with machine vision for post-earthquake safety assessments. Adv. Eng. Inform. 2012, 26, 846–858. [Google Scholar] [CrossRef]
  70. Dawood, T.; Zhu, Z.; Zayed, T. Machine vision-based model for spalling detection and quantification in subway networks. Autom. Constr. 2017, 81, 149–160. [Google Scholar] [CrossRef]
  71. Gao, Y.; Mosalam, K.M. Deep transfer learning for image-based structural damage recognition. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 748–768. [Google Scholar] [CrossRef]
  72. Hoang, N.D.; Huynh, T.C.; Tran, V.D. Concrete spalling severity classification using image texture analysis and a novel jellyfish search optimized machine learning approach. Adv. Civ. Eng. 2021, 2021, 5551555. [Google Scholar] [CrossRef]
  73. Nguyen, H.; Hoang, N.D. Computer vision-based classification of concrete spall severity using metaheuristic-optimized Extreme Gradient Boosting Machine and Deep Convolutional Neural Network. Autom. Constr. 2022, 140, 104371. [Google Scholar] [CrossRef]
  74. Huang, B.; Zhao, S.; Kang, F. Image-based automatic multiple-damage detection of concrete dams using region-based convolutional neural networks. J. Civ. Struct. Health Monit. 2023, 13, 413–429. [Google Scholar] [CrossRef]
  75. Zhao, S.; Kang, F.; Li, J. Concrete dam damage detection and localisation based on YOLOv5s-HSC and photogrammetric 3D reconstruction. Autom. Constr. 2022, 143, 104555. [Google Scholar] [CrossRef]
  76. Li, Y.; Bao, T. A real-time multi-defect automatic identification framework for concrete dams via improved YOLOv5 and knowledge distillation. J. Civ. Struct. Health Monit. 2023, 13, 1333–1349. [Google Scholar] [CrossRef]
  77. Dang, M.; Wang, H.; Nguyen, T.-H.; Tightiz, L.; Tien, L.D.; Nguyen, T.N.; Nguyen, N.P. CDD-TR: Automated concrete defect investigation using an improved deformable transformers. J. Build. Eng. 2023, 75, 106976. [Google Scholar] [CrossRef]
  78. Hong, K.; Wang, H.; Yuan, B.; Wang, T. Multiple Defects Inspection of Dam Spillway Surface Using Deep Learning and 3D Reconstruction Techniques. Buildings 2023, 13, 285. [Google Scholar] [CrossRef]
  79. Chen, D.; Huang, B.; Kang, F. A review of detection technologies for underwater cracks on concrete dam surfaces. Appl. Sci. 2023, 13, 3564. [Google Scholar] [CrossRef]
  80. Chen, C.P.; Wang, J.; Zou, L.; Zhang, F.J. Underwater Dam Image Crack Segmentation Based on Mathematical Morphology. Appl. Mech. Mater. 2012, 220–223, 1315–1319. [Google Scholar] [CrossRef]
  81. Fan, X.N.; Cao, P.F.; Shi, P.F.; Chen, X.Y.; Zhou, X.; Gong, Q. An Underwater Dam Crack Image Segmentation Method Based on Multi-Level Adversarial Transfer Learning. Neurocomputing 2022, 505, 19–29. [Google Scholar] [CrossRef]
  82. Li, Y.; Bao, T.; Huang, X.; Chen, H.; Xu, B.; Shu, X.; Zhou, Y.; Cao, Q.; Tu, J.; Wang, R.; et al. Underwater crack pixel-wise identification and quantification for dams via lightweight semantic segmentation and transfer learning. Autom. Constr. 2022, 144, 104600. [Google Scholar] [CrossRef]
  83. Qi, Z.; Liu, D.; Zhang, J.; Chen, J. Micro-concrete crack detection of underwater structures based on convolutional neural network. Mach. Vis. Appl. 2022, 33, 74. [Google Scholar] [CrossRef]
  84. Xin, G.; Fan, X.; Shi, P.; Luo, C.; Ni, J.; Cao, Y. A fine extraction algorithm for image-based surface cracks in underwater dams. Meas. Sci. Technol. 2022, 34, 035402. [Google Scholar] [CrossRef]
  85. Ni, F.T.; Zhang, J.; Chen, Z.Q. Zernike-moment measurement of thin-crack width in images enabled by dual-scale deep learning. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 367–384. [Google Scholar] [CrossRef]
  86. Wang, W.; Zhang, A.; Wang, K.C.; Braham, A.F.; Qiu, S. Pavement Crack Width Measurement Based on Laplace’s Equation for Continuity and Unambiguity. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 110–123. [Google Scholar] [CrossRef]
  87. Rezaiee-Pajand, M.; Tavakoli, F.H. Crack detection in concrete gravity dams using a genetic algorithm. Proc. Inst. Civ. Eng.-Struct. Build. 2015, 168, 192–209. [Google Scholar] [CrossRef]
  88. Peng, X.; Zhong, X.; Zhao, C.; Chen, A.; Zhang, T. A UAV-based machine vision method for bridge crack recognition and width quantification through hybrid feature learning. Constr. Build. Mater. 2021, 299, 123896. [Google Scholar] [CrossRef]
  89. Zhang, C.; Jamshidi, M.; Chang, C.-C.; Liang, X.; Chen, Z.; Gui, W. Concrete Crack Quantification using Voxel-Based Reconstruction and Bayesian Data Fusion. IEEE Trans. Ind. Inform. 2022, 18, 7512–7524. [Google Scholar] [CrossRef]
  90. Chen, B.; Zhang, H.; Li, Y.; Wang, S.; Zhou, H.; Lin, H. Quantify pixel-level detection of dam surface crack using deep learning. Meas. Sci. Technol. 2022, 33, 065402. [Google Scholar] [CrossRef]
  91. Ding, W.; Yang, H.; Yu, K.; Shu, J. Crack detection and quantification for concrete structures using UAV and transformer. Autom. Constr. 2023, 152, 104929. [Google Scholar] [CrossRef]
  92. Mukojima, H.; Deguchi, D.; Kawanishi, Y.; Ide, I.; Murase, H.; Ukai, M.; Nagamine, N.; Nakasone, R. Moving camera background-subtraction for obstacle detection on railway tracks. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 3967–3971. [Google Scholar]
  93. Tastimur, C.; Karakose, M.; Akin, E. Image processing based level crossing detection and foreign objects recognition approach in railways. Int. J. Appl. Math. Electron. Comput. 2017, 1, 19–23. [Google Scholar] [CrossRef]
  94. Selver, M.A.; Er, E.; Belenlioglu, B.; Soyaslan, Y. Camera based driver support system for rail extraction using 2-D Gabor wavelet decompositions and morphological analysis. In Proceedings of the 2016 IEEE International Conference on Intelligent Rail Transportation (ICIRT), Birmingham, UK, 23–25 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 270–275. [Google Scholar]
  95. Teng, Z.; Liu, F.; Zhang, B.; Kang, D.-J. An approach for security problems in visual surveillance systems by combining multiple sensors and obstacle detection. J. Electr. Eng. Technol. 2015, 10, 1284–1292. [Google Scholar] [CrossRef]
  96. Felzenszwalb, P.; McAllester, D.; Ramanan, D. A discriminatively trained, multiscale, deformable part model. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8. [Google Scholar]
  97. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  98. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  99. He, D.; Zou, Z.; Chen, Y.; Liu, B.; Miao, J. Rail transit obstacle detection based on improved CNN. IEEE Trans. Instrum. Meas. 2021, 70, 1–14. [Google Scholar] [CrossRef]
  100. Li, C.-J.; Qu, Z.; Wang, S.-Y.; Liu, L. A method of cross-layer fusion multi-object detection and recognition based on improved faster R-CNN model in complex traffic environment. Pattern Recognit. Lett. 2021, 145, 127–134. [Google Scholar] [CrossRef]
  101. He, D.; Qiu, Y.; Miao, J.; Zou, Z.; Li, K.; Ren, C.; Shen, G. Improved Mask R-CNN for obstacle detection of rail transit. Measurement 2022, 190, 110728. [Google Scholar] [CrossRef]
  102. Xu, H.; Li, S.; Ji, Y.; Cao, R.; Zhang, M. Dynamic obstacle detection based on panoramic vision in the moving state of agricultural machineries. Comput. Electron. Agric. 2021, 184, 106104. [Google Scholar] [CrossRef]
  103. Xue, J.; Cheng, F.; Li, Y.; Song, Y.; Mao, T. Detection of Farmland Obstacles Based on an Improved YOLOv5s Algorithm by Using CIoU and Anchor Box Scale Clustering. Sensors 2022, 22, 1790. [Google Scholar] [CrossRef]
  104. Yasin, J.N.; Mohamed, S.A.; Haghbayan, M.H.; Heikkonen, J.; Tenhunen, H.; Yasin, M.M.; Plosila, J. Night vision obstacle detection and avoidance based on Bio-Inspired Vision Sensors. In Proceedings of the 2020 IEEE SENSORS, Rotterdam, The Netherlands, 25–28 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
  105. Qiu, Z.; Zhao, N.; Zhou, L.; Wang, M.; Yang, L.; Fang, H.; He, Y.; Liu, Y. Vision-based moving obstacle detection and tracking in paddy field using improved yolov3 and deep SORT. Sensors 2020, 20, 4082. [Google Scholar] [CrossRef]
  106. She, X.; Huang, D.; Song, C.; Qin, N.; Zhou, T. Multi-obstacle detection based on monocular vision for UAV. In Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1067–1072. [Google Scholar]
  107. Chen, H.; Lu, P. Real-time identification and avoidance of simultaneous static and dynamic obstacles on point cloud for UAVs navigation. Robot. Auton. Syst. 2022, 154, 104124. [Google Scholar] [CrossRef]
  108. Chang, S.; Zhang, Y.; Zhang, F.; Zhao, X.; Huang, S.; Feng, Z.; Wei, Z. Spatial attention fusion for obstacle detection using mmwave radar and vision sensor. Sensors 2020, 20, 956. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overview of high dam intelligent safety inspection.
Figure 2. Visual perception methods for high dam intelligent safety inspection.
Figure 3. Procedure of the defect perception method.
Figure 4. Image under the influence of the high dam environment: (a) uneven illumination image of the high dam; and (b) low-light image of the high dam.
Figure 5. The basic principle of Retinex theory.
Figure 6. The research trends for image enhancement methods.
Figure 7. Picture of cracks in the concrete surface of a high dam.
Figure 8. An example of pixel-based crack detection based on FCN.
Figure 9. An example of concrete structure spalling.
Figure 10. Classification of visual defect perception methods.
Figure 11. The technology path of intelligent inspection obstacle perception.
Table 1. Comparison of histogram equalization methods for image enhancement.

| Category | Method | Purpose | Merits |
| --- | --- | --- | --- |
| Histogram Equalization | AHE [11] | Adaptive histogram equalization algorithm | Enhances the local contrast and details of the image |
| | AHE [12] | Calculates local histograms based on the AHE method | Enhances image contrast while preserving a significant amount of detail |
| | WTHE [13] | Video enhancement based on the WTHE algorithm | Effectively avoids over-enhancement and level-saturation artifacts |
| | LDR-HE [14] | A novel contrast enhancement algorithm using the LDR of Lee et al. | Enhances images efficiently in terms of both objective and subjective quality |
| | Underwater HE [15] | Enhances images efficiently in terms of both objective and subjective quality | Produces a pair of output versions |
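All of the variants in Table 1 build on the same core operation: remapping gray levels through the image's cumulative distribution function so that the output histogram is approximately uniform. As an illustrative sketch (the global baseline that the adaptive variants refine per local window, not any specific cited method), in pure Python:

```python
def equalize_histogram(gray, levels=256):
    """Global histogram equalization for a 2-D grayscale image (list of lists)."""
    flat = [p for row in gray for p in row]
    n = len(flat)
    # Build the histogram and its cumulative distribution function (CDF).
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Look-up table: stretch the occupied part of the CDF over the full range.
    lut = [round((cdf[v] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for v in range(levels)]
    return [[lut[p] for p in row] for row in gray]
```

For example, a low-contrast patch with levels 100–103 is stretched to span the full 0–255 range, which is why HE recovers detail in dim dam-surface imagery but can also over-amplify noise — the motivation for the weighted (WTHE) and layered (LDR-HE) refinements above.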
Table 2. Comparison of Retinex methods for image enhancement.

| Category | Method | Purpose | Merits |
| --- | --- | --- | --- |
| Retinex | Retinex [16] | Remove or reduce the effects of illumination while preserving the essential characteristics of the object | Offers a robust and flexible framework for image enhancement tasks |
| | SSR [17] | Decompose an image into two components: reflection and illumination | Reduces the effects of illumination and preserves the essential features of objects |
| | MSR [18] | Estimate the illumination component by combining several scales using a center-surround function | Balances local and global dynamic range compression |
| | SSRBF [19] | Address low and uneven lighting issues | Merges SSR with a bilateral filter |
| | Retinex + HSV [20] | Eliminate halo artifacts | Improves visibility and eliminates color distortion in HSV space |
| | RCF + CFAHE + NLF + GF [21] | Estimate illumination in the presence of spatially variant phenomena | Increases contrast, eliminates noise, and enhances details simultaneously |
| | Fractional-order variational Retinex [22] | Enhance images with severely low light | Controls the extent of regularization more flexibly |
| | Robust Retinex Model [23] | Address the tendency of Retinex models to fail in the presence of noise | Adaptable to a variety of tasks |
| | Concrete image enhancement [24] | Enhance poor-quality images of underwater concrete | Produces enhanced images balanced in color, contrast, and brightness |
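The methods in Table 2 all start from the Retinex decomposition: an observed image is the product of reflectance and illumination, so subtracting a log-domain estimate of the illumination recovers illumination-invariant reflectance. A minimal SSR sketch, with the usual Gaussian surround replaced by a box filter purely to keep the example dependency-free:

```python
import math

def box_blur(img, radius=1):
    """Crude illumination estimate: mean filter with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-radius, radius + 1)
                    for dx in range(-radius, radius + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def single_scale_retinex(img):
    """SSR: reflectance = log(image) - log(estimated illumination)."""
    illum = box_blur(img)
    # +1 avoids log(0) on dark dam-surface pixels.
    return [[math.log(p + 1.0) - math.log(l + 1.0)
             for p, l in zip(row, irow)]
            for row, irow in zip(img, illum)]
```

A uniformly lit region maps to zero reflectance, while pixels brighter than their surround come out positive — which is exactly why MSR repeats this at several surround scales and then averages the results.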
Table 3. Summary of performance metrics of deep learning methods for image enhancement.

| Method | Network | Dataset | PSNR | SSIM |
| --- | --- | --- | --- | --- |
| LLnet [25] | Deep Autoencoder | Synthetic images | 19.81 | 0.67 |
| Hybrid network [26] | RNN | Synthetic images | 28.43 | 0.96 |
| Denoise [27] | CNN | Real images | / | 0.85 |
| CLAR [28] | Retinex + CNN | VV | 19.65 | 0.61 |
| DSICE [29] | CNN | Real under-exposed images | 20.27 | 0.94 |
| LightenNet [30] | CNN | Self | 21.71 | 0.93 |
| Retinex-Net [31] | CNN | LOL | / | / |
| Retinex-GAN [32] | GAN | LOL | 31.33 | 0.88 |
| Global-SRA-U-net [33] | GAN | LOL | 19.46 | 0.75 |
| Deep Photo Enhancer [34] | GAN | MIT-Adobe 5K | 23.80 | 0.90 |
| EnlightenGAN [35] | GAN | LOL | / | / |
| Weber-Fechner law in log space [36] | CNN | LOL | 20.508 | 0.952 |
| GPANet [37] | CNN | LOL | 20.862 | 0.7842 |
| LSR [38] | CNN | LOL | 20.712 | 0.821 |
| MFEF [39] | CNN | UIEB | 23.352 | 0.910 |
| LLFormer [40] | Transformer | LOL | 23.6491 | 0.8163 |
| Retinexformer [41] | Transformer | LOL | 25.16 | 0.845 |
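The PSNR and SSIM columns in Table 3 can be reproduced from a reference/enhanced image pair. The sketch below computes PSNR exactly and a simplified single-window SSIM (the standard metric averages SSIM over sliding windows; a global window is enough to show the structure of the formula):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two equal-size grayscale images."""
    diffs = [(r - t) ** 2 for rrow, trow in zip(ref, test)
             for r, t in zip(rrow, trow)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float('inf')  # identical images
    return 10 * math.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM: luminance, contrast, and structure terms combined."""
    xf = [v for row in x for v in row]
    yf = [v for row in y for v in row]
    n = len(xf)
    mx, my = sum(xf) / n, sum(yf) / n
    vx = sum((v - mx) ** 2 for v in xf) / n
    vy = sum((v - my) ** 2 for v in yf) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xf, yf)) / n
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

PSNR rewards low pixel-wise error, while SSIM rewards preserved local structure — which is why the two columns in Table 3 do not always rank methods identically.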
Table 4. The merits and limitations of crack detection methods based on traditional image processing approaches.

| Reference | Method | Merits | Limitations |
| --- | --- | --- | --- |
| Fujita et al. [42] | Threshold | Handles irregular lighting conditions, shadows, and imperfections | Sensitivity and adaptability can be easily disrupted |
| Fujita et al. [43] | Adaptive thresholds | Robust automatic crack detection from noisy concrete surface images | Suitable adaptive parameters and ranges must be selected |
| Talab et al. [44] | Otsu | Classifies background and foreground | Limited adaptation to images with complex backgrounds |
| Asdrubali et al. [45] | Threshold | Thermal image enhancement with the Kantorovich operator | Only adapted to infrared images |
| Chen et al. [46] | Otsu | Difference-image adaptation for defect detection against complex backgrounds | The accuracy of noisy image processing needs further study |
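The Otsu-based methods in Table 4 [44,46] select a global threshold automatically by maximizing the between-class variance of the gray-level histogram; dark crack pixels then fall below the threshold and the rest become background. A self-contained sketch of the standard algorithm:

```python
def otsu_threshold(gray, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    flat = [p for row in gray for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    total = len(flat)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_b += hist[t]            # background weight (pixels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b         # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                     # background mean
        m_f = (sum_all - sum_b) / w_f         # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a dam-surface image this works well when the histogram is genuinely bimodal; the limitations listed above (complex backgrounds, noise) arise exactly when it is not, which is what motivates the difference-image preprocessing of Chen et al. [46].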
Table 5. The merits and limitations of crack detection methods based on machine learning.

| Reference | Method | Merits | Limitations |
| --- | --- | --- | --- |
| Liu et al. [47] | SVM | Utilizes the balanced local image | Limited accuracy |
| Luo et al. [48] | SVM | Adaptive binarization procedure | Thin, dark writing and dirt are misrecognized as cracks |
| Fisher et al. [49] | SVM | A novel data-driven approach | Generalization ability needs improvement |
| Fan et al. [50] | Clustering | Self-adaptive threshold for image binarization | Limited environmental adaptability |
| Nishikawa et al. [51] | Genetic Encoding | Wavelet transforms at different scales | No scale adaptation |
| Gordan et al. [52] | Clustering | Fuzzy C-means clustering edge detection operator | High false alarm rate |
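The clustering entries in Table 5 derive the binarization threshold from the data rather than fixing it. As a deliberately simplified illustration of that self-adaptive idea (a 1-D two-means on intensities — not the actual local-global clustering algorithm of Fan et al. [50]):

```python
def two_means_binarize(gray, iters=20):
    """Cluster intensities into dark/bright groups; threshold at the midpoint."""
    flat = [p for row in gray for p in row]
    c0, c1 = min(flat), max(flat)  # initial cluster centers
    for _ in range(iters):
        g0 = [p for p in flat if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in flat if abs(p - c0) > abs(p - c1)]
        if not g0 or not g1:
            break
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    thr = (c0 + c1) / 2
    # Dark pixels (candidate cracks) become 1, background becomes 0.
    return [[1 if p <= thr else 0 for p in row] for row in gray]
```

Because the threshold follows the cluster centers, the same code adapts to globally brighter or darker imagery — but, as the Limitations column notes, a single global split still struggles with unevenly lit dam surfaces.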
Table 6. The merits and performance of crack detection methods based on deep learning.

| Reference | Method | Merits | Dataset | Performance (%) |
| --- | --- | --- | --- | --- |
| Dung et al. [53] | FCN | Cracks are reasonably detected and crack density is accurately evaluated | Concrete Crack Images for Classification | AP = 89.3% |
| Ni et al. [54] | CNN | Delineates cracks accurately and rapidly | Own collection | Precision = 79.28% |
| Feng et al. [55,56,57] | CNN | Accuracy considerably higher than that of a support vector machine | Own collection | Precision = 93.48% |
| Modarres et al. [58] | CNN | Outperforms several other machine learning algorithms | Own collection | Accuracy = 99.6%; Precision = 97.5% |
| Pang et al. [59] | RCNN | Crack segmentation method for a hydro-junction project | Own collection | IoU = 52.7% |
| Li et al. [60] | CNN | Ideal for integration within intelligent autonomous inspection systems | Own collection | Accuracy = 80.7% |
| Deng et al. [61] | CNN | Dam-surface defect detection using UAV tilt photogrammetry combined with machine vision | Own collection | Accuracy = 76.39% |
| Chen et al. [62] | FCN | Accurate identification and quantification of cracks on the dam surface | Own collection | Accuracy = 75.13% |
| Zou et al. [63] | U-Net | A novel end-to-end trainable convolutional network (DeepCrack) | CRKWH100 | AP = 93.15% |
| Li et al. [64] | Transfer Learning | Realizes high-precision crack identification | Own collection | Precision = 91.23% |
| Zhu et al. [65] | DeepLab V3+ | Fusion of a lightweight backbone network and an attention mechanism | Own collection | Precision = 91.23% |
| Wu et al. [66] | LCA-YOLOv8-Seg | Suitable for low-performance devices | Concrete Crack Images for Classification | mAP = 93.30% |
| Zhang et al. [67] | UTCD-Net | Superior generalizability in complex scenes | CFD dataset | Precision = 62.85% |
| Xiang et al. [68] | DTrC-Net | More adaptable and robust to crack images captured under complex conditions | Crack3238 | Precision = 75.60% |
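The Performance column in Table 6 mixes precision, accuracy, AP, and IoU, which are not interchangeable. For pixel-level segmentation, precision and IoU are computed from the predicted and ground-truth binary masks as follows (a minimal sketch):

```python
def mask_metrics(pred, gt):
    """Pixel-level precision and IoU between binary crack masks."""
    tp = fp = fn = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            if p and g:
                tp += 1          # crack predicted and present
            elif p:
                fp += 1          # false alarm
            elif g:
                fn += 1          # missed crack pixel
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return precision, iou
```

Note that IoU additionally penalizes missed pixels (false negatives), so an IoU of 52.7% [59] and a precision of 91.23% [64] cannot be compared directly.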
Table 7. The merits and performance of other defect detection methods.

| Reference | Defect Type | Merits | Dataset | Performance (%) |
| --- | --- | --- | --- | --- |
| German et al. [69] | Spalling | Automatically detects spalled regions on a concrete structural element and retrieves their significant properties in terms of length and depth | Own collection | AP = 80.90% |
| Dawood et al. [70] | Spalling | An integrated framework to detect and quantify spalling distress based on image data processing and machine learning | Own collection | Precision = 94.80% |
| Gao et al. [71] | Spalling | Spalling classifier based on deep transfer learning with VGGNet | Own collection | Accuracy = 91.50% |
| Hoang et al. [72] | Spalling | Image texture analysis and a novel jellyfish search optimization method | Own collection | Precision = 93.20% |
| Nguyen et al. [73] | Spalling | Combination of meta-heuristic-optimized extreme gradient boosting | Own collection | Precision = 99.03% |
| Huang et al. [74] | Multiple Types | An automatic multiple-damage detection method for concrete dams based on a faster region-based CNN | Own collection | mAP = 88.77% |
| Zhao et al. [75] | Multiple Types | Combines the proposed YOLOv5s-HSC algorithm with three-dimensional (3D) photogrammetric reconstruction to accurately identify and locate damage in concrete dams | Own collection | mAP = 79.80% |
| Dang et al. [77] | Multiple Types | Accurately distinguishes different types of structural defects in concrete dams under the interference of environmental noise | Own collection | mAP = 89.40% |
| Hong et al. [78] | Multiple Types | An end-to-end transformer-based model | Large-scale dataset | mAP = 63.80% |
| Chen et al. [80] | Underwater | Accurate and efficient detection and classification of underwater dam cracks in complex underwater environments | Own collection | / |
| Fan et al. [81] | Underwater | Accurate segmentation of underwater dam crack images | Own collection | Precision = 47.74% |
| Li et al. [82] | Underwater | Lightweight semantic segmentation and transfer learning | Own collection | Precision = 91.51% |
| Qi et al. [83] | Underwater | Combination of traditional approaches and deep learning techniques | Own collection | Accuracy = 93.9% |
| Xin et al. [84] | Underwater | Edge detection model based on the artificial bee colony algorithm | Own collection | Precision = 90.10% |
Table 8. The merits and performance of defect quantification methods.

| Reference | Defect Type | Merits | Dataset | Performance (%) |
| --- | --- | --- | --- | --- |
| Ni et al. [85] | Crack | The Zernike moment operator (ZMO) for subpixel accuracy in measuring thin-crack width | Own collection | Precision = 88.65% |
| Wang et al. [86] | Crack | A new crack width definition formulated using Laplace's equation | Own collection | / |
| Rezaiee-Pajand et al. [87] | Crack | Concrete crack detection based on a genetic algorithm | Own collection | Precision = 94.80% |
| Peng et al. [88] | Crack | Crack recognition and width quantification through hybrid feature learning | Own collection | Precision = 92.00% |
| Zhang et al. [89] | Crack | Combination of voxel-based reconstruction and Bayesian data fusion | Own collection | AP = 87.3% |
| Chen et al. [90] | Crack | Combines semantic segmentation and morphology | Own collection | Precision = 90.81% |
| Ding et al. [91] | Crack | Based on IBR-Former; quantifies cracks with widths below 0.2 mm | Own collection | Precision = 85.32% |
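The quantification methods in Table 8 all turn a segmented crack mask into a width measurement. The cited approaches use subpixel operators (Zernike moments [85]) or Laplace's equation [86]; as a baseline illustration only, the crudest pixel-level estimate divides crack area by crack length, with length approximated by the longer side of the crack's bounding box (reasonable for thin, roughly straight cracks):

```python
def mean_crack_width(mask):
    """Crude mean-width estimate for a binary crack mask: area / length."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return 0.0
    area = len(xs)  # number of crack pixels
    # Bounding-box extent along the crack's dominant direction.
    length = max(max(ys) - min(ys), max(xs) - min(xs)) + 1
    return area / length
```

Multiplying the pixel width by the ground sampling distance of the UAV camera converts it to millimeters; the subpixel methods above exist precisely because thin cracks are often narrower than one pixel at inspection distances.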
Table 9. The merits and limitations of obstacle perception methods.

| Perception Type | Reference | Method | Merits | Limitations |
| --- | --- | --- | --- | --- |
| Traditional Image Detection | Mukojima et al. [92] | Background subtraction | Moving-camera background subtraction for forward obstacle detection | Limited light adaptation |
| | Tastimur et al. [93] | Background subtraction | HSV color transformation, image difference extraction, gradient computation, filtering, and feature extraction | Insufficient detection of foreign objects at level crossings |
| | Selver et al. [94] | Gabor wavelets | Trajectory edge enhancement and noise filtering | Limited detection range |
| | Teng et al. [95] | Background subtraction | Combination of multiple sensors and a vision-based snag detection algorithm | Limited detection range |
| Deep Learning | He et al. [99] | FE-YOLO | A flexible and efficient multiscale one-stage object detector | Limited real-time performance |
| | Li et al. [100] | Faster R-CNN | Multi-object detection and recognition | Limited generalization ability |
| | He et al. [101] | Mask R-CNN | High precision in small-target detection | Limited generalization ability |
| | Xu et al. [102] | Optical flow | Dynamic obstacle detection based on panoramic vision | Single direction of motion |
| | Xue et al. [103] | Improved YOLOv5s | Small weight file | Limited generalization ability |
| | Yasin et al. [104] | Hough transform | Adaptive slicing algorithm based on the accumulated number of events | Limited to specific scenarios |
| | Qiu et al. [105] | YOLOv3 | Vision-based moving obstacle detection and tracking | No center-point detection |
| | She et al. [106] | YOLOv3 + SURF | Multi-obstacle detection | Limited generalization ability |
| | Chen et al. [107] | Forbidden pyramids | Real-time identification and avoidance of simultaneous static and dynamic obstacles | Limited robustness |
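The background subtraction entries in Table 9 reduce, in their simplest static-camera form, to frame differencing: pixels whose intensity changes beyond a threshold between consecutive frames are flagged as potential moving obstacles. A minimal sketch (moving-camera variants such as Mukojima et al. [92] must first register the frames to compensate for platform motion):

```python
def detect_obstacles(prev, curr, thresh=25):
    """Frame differencing: mark pixels that changed by more than `thresh`."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

In practice the raw change mask is cleaned with morphological filtering and connected-component grouping before an obstacle region is reported — which is also where the "limited light adaptation" weakness in the table arises, since global illumination changes trigger the same pixel differences as real motion.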
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Peng, Z.; Li, L.; Liu, D.; Zhou, S.; Liu, Z. A Comprehensive Survey on Visual Perception Methods for Intelligent Inspection of High Dam Hubs. Sensors 2024, 24, 5246. https://doi.org/10.3390/s24165246


