Article

Image Enhancement for Inspection of Cable Images Based on Retinex Theory and Fuzzy Enhancement Method in Wavelet Domain

1 School of Power and Mechanical Engineering, Wuhan University, Wuhan 430000, China
2 Guangdong Keystar Intelligence Robot Co., Ltd., Guangzhou 510000, China
* Authors to whom correspondence should be addressed.
Symmetry 2018, 10(11), 570; https://doi.org/10.3390/sym10110570
Submission received: 24 September 2018 / Revised: 27 October 2018 / Accepted: 29 October 2018 / Published: 1 November 2018

Abstract
Inspection images of power transmission lines provide visual interaction for the operator and environmental perception for the cable inspection robot (CIR). However, inspection images are often degraded by severe outdoor working conditions such as uneven illumination, low contrast, and speckle noise. This paper therefore proposes a novel method based on Retinex theory and fuzzy enhancement to improve the quality of inspection images. A modified multi-scale Retinex (MSR) is proposed to compensate for uneven illumination by processing the low-frequency component after wavelet decomposition. In addition, a fuzzy enhancement method is proposed to sharpen edge information and improve contrast by processing the high-frequency components. A noise reduction procedure based on soft thresholding is used to avoid noise amplification. Experiments on a self-built standard test dataset show that the algorithm can improve image quality scores by a factor of three to four. Compared with several other methods, the experimental results demonstrate that the proposed method achieves better enhancement performance, with more homogeneous illumination and higher contrast. Further research will focus on improving the real-time performance and parameter adaptation of the algorithm.

1. Introduction

To improve the efficiency and safety of the inspection and maintenance of power transmission lines, especially those distributed in complex environments (e.g., mountains and forests), several live-line inspection robots [1,2,3,4] running along cables have been developed. The detection efficiency and accuracy for the multiple targets attached to the power line largely depend on the quality of the images and videos captured by the cameras mounted on the robot [5]. However, due to the uncontrollable lighting conditions in the wild, the illumination across inspection images is always uneven, especially in the case of backlighting or partial lighting. This also leads to low global and local image contrast as well as relatively weak details in the dark regions. Moreover, small, low-cost CCD or CMOS cameras are used in view of the volume, weight and cost constraints of the inspection robot [6,7]. These cameras introduce speckle noise and have a limited dynamic range, which makes the situation worse. Therefore, there is an urgent need for an illumination correction method that provides better visual effects, facilitating monitoring for users and increasing the detection or recognition performance of the subsequent vision processing procedures.
During the last few decades, many image enhancement approaches have been reported, which can generally be classified into two categories: spatial domain methods and transform domain methods. Histogram equalization (HE) is one of the traditional image enhancement methods in the spatial domain. It can enhance the overall contrast of an image by transforming the histogram of the given image into a uniform distribution and fully utilizing the dynamic range [8,9], but it tends to over-enhance the illumination and magnify noise because of its uniform-histogram characteristic. Contrast Limited Adaptive Histogram Equalization (CLAHE) [10,11] was proposed to suppress over-enhancement and noise amplification by altering the threshold value of the histogram. An improved time-domain recursive filtering algorithm with three-dimensional filtering coefficients was proposed in [12], which effectively removed the noise in low-light-level video and improved its quality.
The Retinex method, proposed by Land and McCann [13], is another image enhancement method in the spatial domain. Its basic idea is to treat the given image as the product of two components: the illumination image and the reflection image. Si et al. [14] proposed a hybrid algorithm (SSRBF), based on the single-scale Retinex (SSR) and the bilateral filter (BF), to handle the low and uneven illumination problems and enhance the quality of mining face videos. Xie et al. [15] introduced an enhancement method based on the guided filter (GF) and SSR for finger vein images under uneven light, which is caused by differences in the capacities and percentages of finger tissues between people. A fast mean filtering technique combined with SSR was used to compensate for uneven illumination by Xiao et al. [16]. Tao et al. [17] used the MSR framework combined with a region covariance filter, CLAHE, a non-local means filter and GF to correct illumination, eliminate noise and enhance details simultaneously. These methods have their own characteristics and limitations in practice. For example, the SSRBF method can eliminate noise and enhance details to a certain extent by using the BF, which simultaneously considers spatial proximity and intensity similarity between pixels. However, it fails to correct the illumination properly, which leads to relatively low global contrast. The SSRGF method can achieve better visual perception but cannot eliminate noise in the image. The method proposed in [17] can obtain images with high quality, little noise and enhanced details, but it consumes vast computation resources and degrades the real-time performance.
There are many image enhancement studies based on the transform domain, which can roughly be divided into two categories: homomorphic filtering (HF) methods and wavelet methods. Fan et al. [18] proposed an image enhancement method based on a homomorphic filter for face recognition under varying illumination, where the illumination correction is obtained by reducing the low frequency part while magnifying the high frequency part through the parameters of the selected high pass filter. Faraji et al. [19] introduced an adaptive homomorphic filter to improve face recognition accuracy under dynamic illumination, where the adaptive enhancement factor is obtained from the statistics of the frequency magnitudes. However, the resulting illumination uniformity is unsatisfactory because the homomorphic filter cannot separate the illumination and reflection accurately. To address this problem, wavelet based methods have been proposed to achieve better visual effects because of their joint spatial-frequency characteristics and multi-resolution properties. Łoza et al. [20] proposed an automatic contrast enhancement method based on local statistics of wavelet coefficients for low-light and unevenly illuminated images; combined with a shrinkage function in the wavelet domain, the uneven illumination is compensated and noise amplification is avoided for single images and videos. Shen et al. [21] introduced an uneven illumination correction (UIC) algorithm that improves image quality by changing the coefficients of the low and high frequency components of each channel for remote sensing images covered with thin clouds. Kim et al. [22] proposed an image contrast enhancement algorithm that maximizes the entropy defined in the wavelet domain, which can correct uneven illumination and increase image quality; no post-processing noise reduction is needed because of the characteristics of the entropy scaling. For cephalometric images with poor quality, Kaur et al. [23] introduced a contrast enhancement algorithm based on wavelet-based modified adaptive histogram equalization, which produces a uniform illumination distribution. Jung et al. [24] developed a method based on the dual-tree complex wavelet transform (DT-CWT) for low-light and unevenly illuminated image enhancement. These methods preserve edge details well and handle the low-light case thoroughly, but their compensation of uneven illumination is not sufficient.
Based on the observations above, none of these methods can compensate for uneven illumination, remove noise and enhance details simultaneously to increase the quality of the inspection images. Hence, we propose a hybrid method that integrates a modified MSR and fuzzy enhancement to meet these three demands at the same time. The input image is decomposed into one low frequency component and three high frequency components by the DWT. An improved MSR framework is applied to the low frequency component to compensate for the uneven illumination, and a combination technique merges the three results processed by the improved SSRs into a single enhanced low frequency component. For the three high frequency components, a soft threshold based noise reduction is applied first to eliminate noise. Then, the image details are classified into high contrast edges and low contrast edges according to the proposed contrast measurement defined in the wavelet domain. After that, a modified membership function and a fuzzy enhancement function are applied to each high frequency component to enhance the edges and increase the contrast properly. Finally, an inverse DWT is applied to the four processed components to produce the final image, which compensates for the uneven illumination, removes noise and enhances the contrast at the same time.
The rest of the article is arranged as follows. Section 2 shows the inspection image acquisition and its spectrum analysis. Section 3 shows the details of the proposed method including low and high frequency components enhancement. The effectiveness of the proposed method is validated in Section 4 by conducting several experiments and qualitative and quantitative analysis. The conclusion and further work are presented in Section 5.

2. Inspection Image Acquisition and Spectrum Analysis

2.1. Inspection Image Acquisition

As shown in Figure 1a, the vision system of the CIR consists of two hand–eye cameras and two Pan–Tilt–Zoom (PTZ) network cameras. The hand–eye cameras are mainly used to recognize the overhead ground wire (OGW) from a top view when crossing obstacles such as dampers and the tower head. The PTZ cameras are mainly used to capture images and videos of towers, transmission lines and other targets of interest. Because of the particular installation position of the hand–eye camera with its limited field of view (FOV), the distribution of the illumination is always uneven, as the arm blocks the natural light (as shown in the first column of Figure 1b), and detecting the OGW becomes noticeably harder when it lies in the dark region. Moreover, due to the limited light sensitivity of the cameras, noise is generated in the shadow region and the contrast is relatively low. The PTZ cameras are installed on both sides of the robot body with a broader FOV. However, uneven illumination appears in the frequently occurring backlight or partial-lighting scenes, as shown in the second column of Figure 1b. The local contrast of the dark region is low, and, unfortunately, important image details such as bolts, dampers and insulators are often located in this region. Thus, it is a challenge to recognize and locate these valuable targets in the inspection image. In addition, different lighting intensities and lighting angles cause a large dynamic range of image lighting, which results in even more severe lighting conditions. Hence, to obtain high quality images that facilitate the subsequent detection and identification, it is necessary to correct the illumination, remove noise and increase the local contrast so as to enhance the image detail information.

2.2. Spectrum Analysis

The changing intensity distribution of the gray levels can be observed in the spectrum image after a 2D Fourier transformation. To correct the illumination, remove the noise and enhance the image details, it is necessary to conduct a spectral analysis of the image in advance. In general, image details such as the edge of the ground wire are considered to have large variations in grayscale and should be located in the high-frequency region of the spectrum image. Noise also generally exists in the high frequency range, whereas the illumination information of the image generally corresponds to the lower frequency range. To visualize the distribution of the illumination, noise and edges of inspection images under different brightness, the spectrum distribution maps of four sampled inspection images of the same scene with varying illuminance are shown in Figure 2.
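As a side note, spectrum images of the kind shown in Figure 2 can be produced with a standard shifted 2D FFT and a log-magnitude display. The short sketch below illustrates this step; the NumPy calls and the log scaling are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def log_spectrum(gray):
    """Centered log-magnitude spectrum of a grayscale image, as visualized in Figure 2."""
    F = np.fft.fftshift(np.fft.fft2(np.asarray(gray, dtype=np.float64)))
    return np.log1p(np.abs(F))  # low frequencies appear at the centre of the map
```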
Figure 2a,c shows the spectrum of an image with sufficient illuminance. Two clear, closely spaced vertical lines appear in the second and fourth quadrants, corresponding to the edges of the OGW and the robot body in Figure 2a. When the illumination of the image becomes less uniform and darker, as shown in Figure 2b–d, the two bright straight lines become blurrier and the distribution of the central spectrum becomes wider. In addition, more signal appears in the middle frequency band. These characteristics indicate that, as the illumination unevenness strengthens and the local contrast of the image decreases, the noise information is mostly distributed in the intermediate frequencies.
Therefore, a method based on Retinex theory is proposed to compensate for the uneven illumination in the low-frequency domain, while a fuzzy enhancement method is proposed to enhance the local contrast in the high frequency domain. A soft threshold based method is used to remove noise in the high frequency domain at the same time. However, since the noise and edge information lie almost in the same frequency band, it is difficult for conventional low pass and high pass filters to retain the complete edge information while removing the noise. Hence, the DWT is adopted to obtain a multi-scale image representation for the enhancement.

3. Inspection Image Enhancement Principle

We propose a new method to compensate the uneven illumination, remove noise and enhance the contrast of the inspection images, as illustrated in Figure 3, which mainly includes two parts: low frequency component enhancement and high frequency component enhancement.

3.1. Discrete Wavelet Transformation (DWT)

The discrete wavelet transformation (DWT) has proven useful in image processing [25,26], as it provides fine-to-coarse representations of the input image according to the different scales of the independent decomposition [27]. Moreover, its advantage in joint spatial–frequency representation [28] over the Fourier transformation makes it a better choice for analyzing images in the frequency domain for tasks such as filtering and noise reduction. Hence, the DWT is used to transform the inspection images into the frequency domain.
In the 2D discrete wavelet decomposition, an image is decomposed into four sub-bands, denoted LL, HL, LH and HH (Figure 3). The LL sub-band, also called the low frequency component, can be decomposed into another four parts to obtain a higher level coefficient representation. The other three sub-bands are called the high frequency components. The illumination part of an image corresponds to the low frequency component, while the intrinsic features correspond to the high frequency components. Hence, illumination compensation can be performed in the LL sub-band, and local contrast enhancement can be achieved in the HL, LH and HH sub-bands. The one-level Haar wavelet DWT is applied in this paper for its speed and simplicity.
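The following sketch shows one way this one-level Haar decomposition and its inverse might be carried out; the PyWavelets package is a library choice assumed here for illustration, not one named by the paper.

```python
import numpy as np
import pywt

def haar_decompose(gray_image):
    """One-level Haar DWT: returns the LL approximation and the three detail sub-bands."""
    LL, (LH, HL, HH) = pywt.dwt2(np.asarray(gray_image, dtype=np.float64), 'haar')
    return LL, LH, HL, HH  # detail naming follows the PyWavelets (horizontal, vertical, diagonal) order

def haar_reconstruct(LL, LH, HL, HH):
    """Inverse one-level Haar DWT, applied after the sub-bands have been enhanced."""
    return pywt.idwt2((LL, (LH, HL, HH)), 'haar')
```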

3.2. Low Frequency Component Enhancement Based on Improved MSR

3.2.1. Basic Principle of Image Enhancement Based on Retinex Model

Retinex assumes the color constancy of a color image under varying illumination conditions; for grayscale images, color constancy corresponds to gray intensity constancy. Theoretically, the source image I(x,y) is the product of the illumination image L(x,y) and the reflection image R(x,y), as shown in the following equation:
$$I(x,y) = L(x,y) \cdot R(x,y) \qquad (1)$$
where x and y are the coordinate indexes of a pixel in the image. The illumination image L(x,y) represents the overall light intensity around the foreground objects, which has nothing to do with the objects themselves. The reflection image R(x,y) describes the intrinsic characteristics of the image, such as the edge details, which in turn have nothing to do with the surrounding lighting conditions. Hence, if the illumination image and reflection image can be separated, image enhancement can be achieved by varying the proportions of the illumination part and the reflection part in the original image. To separate them, a logarithmic transformation is applied to both sides of Equation (1), giving
$$\log I(x,y) = \log L(x,y) + \log R(x,y) \qquad (2)$$
Based on the above analysis, Jobson et al. proposed the single-scale Retinex (SSR) algorithm, as shown in Figure 4, and it has been shown that a Gaussian function can accurately estimate the illumination image. The SSR can be represented as
$$\log R(x,y) = \log I(x,y) - \log\left[G(x,y) * I(x,y)\right], \qquad G(x,y) = \frac{1}{2\pi\sigma^{2}}\exp\left[-\left(x^{2}+y^{2}\right)/\left(2\sigma^{2}\right)\right] \qquad (3)$$
where G(x,y) is the Gaussian (surround) function with scale parameter σ, and "*" is the convolution operation.
Retinex based methods are widely used to solve the problems of illumination compensation and image enhancement, and the enhancement effect largely depends on the standard deviation. Since the value of the parameter σ affects the dynamic range compression capability and the gray intensity constancy, it is usually set between 50 and 100 empirically. However, a single SSR cannot provide sufficient dynamic range compression and accurate gray intensity constancy at the same time, so the MSR was proposed to solve this problem; it can be represented as
$$\log R(x,y) = \sum_{k=1}^{K} W_{k}\left\{\log I(x,y) - \log\left[G_{k}(x,y) * I(x,y)\right]\right\} \qquad (4)$$
where K denotes the number of surround functions, $G_k$ is the kth scale surround function and $W_k$ is the corresponding weight factor. Excellent enhancement performance can be achieved by adopting an MSR with three different scales (K = 3), according to the studies in [29,30].
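To make Equations (3) and (4) concrete, the following sketch computes the SSR with a Gaussian surround and combines three scales into an MSR output. The SciPy Gaussian filter, the equal default weights and the small epsilon that guards the logarithm are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

EPS = 1e-6  # guards the logarithm against zero-valued pixels (an assumption)

def single_scale_retinex(image, sigma):
    """Equation (3): log R = log I - log(G * I) with a Gaussian surround of scale sigma."""
    image = np.asarray(image, dtype=np.float64)
    blurred = gaussian_filter(image, sigma)
    return np.log(image + EPS) - np.log(blurred + EPS)

def multi_scale_retinex(image, sigmas=(25, 80, 100), weights=None):
    """Equation (4): weighted sum of SSR outputs over K scales (equal weights by default)."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    return sum(w * single_scale_retinex(image, s) for w, s in zip(weights, sigmas))
```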

3.2.2. Implementation of the Low Frequency Component Enhancement Algorithm Based on Improved MSR

The enhancement effect of the traditional MSR algorithm largely depends on the scale parameter σ. Unreasonable deviations result in under- or over-enhancement, especially in the brightest and darkest areas. To solve this problem, Xue [29] and Fu and Liu [31] proposed two improved MSR algorithms that use different Gaussian functions to process the source image and then combine the filtered images to obtain good enhancement effects. However, since these two methods apply the MSR algorithm to the original gray image, they cannot perform denoising or detail enhancement, and therefore cannot satisfy the enhancement requirements of inspection images acquired under uneven illumination, low contrast and similar conditions.
Considering the above imperfections, this paper proposes an improved MSR image enhancement algorithm for the low frequency component in the wavelet domain. Gaussian functions with three different scales (K = 3) are used to filter the low frequency image, and the detailed steps are shown in Figure 5.
Step 1: Decompose the given gray image I with the Haar wavelet functions into four components: the low frequency component $I_{LL1}(i,j)$ and the three high frequency components $I_{HL1}$, $I_{LH1}$ and $I_{HH1}$.
Step 2: Convert the coefficients to the grayscale range. Because the wavelet coefficients are floating-point numbers with positive and negative signs, the SSR algorithm designed for grayscale images with integer intensity values in the range (0, 255) cannot be applied to $I_{LL1}(i,j)$ directly. The coefficients of $I_{LL1}(i,j)$ are therefore scaled to (0, 255) before Retinex by the following equation:
$$I'_{LL1}(i,j) = \frac{I_{LL1}(i,j) - I_{LL1}(i,j)_{\min}}{I_{LL1}(i,j)_{\max} - I_{LL1}(i,j)_{\min}} \times 255 \qquad (5)$$
Step 3: Apply the modified MSR. Apply the three SSR algorithms shown in Figure 5 to $I'_{LL1}(i,j)$, using Gaussian functions with three different standard deviations σ to obtain the reflection part of the image. The results of these three modified SSRs are $R^{1}_{LL1}(i,j)$, $R^{2}_{LL1}(i,j)$ and $R^{3}_{LL1}(i,j)$, as shown in Figure 5.
Step 4: Normalize each result image $R^{k}_{LL1}(i,j)$ by using the following equation:
$$R'_{LL1}(i,j) = \frac{R_{LL1}(i,j) - R_{LL1}(i,j)_{\min}}{R_{LL1}(i,j)_{\max} - R_{LL1}(i,j)_{\min}} \qquad (6)$$
where $R_{LL1}(i,j)_{\max}$ and $R_{LL1}(i,j)_{\min}$ are the maximum and minimum values of the result of the previous step, and $R'_{LL1}(i,j)$ is the normalized result. Then, as in Step 1, the Haar wavelet decomposition is applied to the three normalized results $R'_{LL1}(i,j)$ to obtain the corresponding low frequency and high frequency components.
Step 5: Conduct the combination procedure. The high frequency components reflect the image details, so the max operation is performed on the three results obtained in the previous step to retain more details:
$$R_{HL2}(i,j) = \max\left\{R^{1}_{HL2}(i,j),\, R^{2}_{HL2}(i,j),\, R^{3}_{HL2}(i,j)\right\}$$
$$R_{LH2}(i,j) = \max\left\{R^{1}_{LH2}(i,j),\, R^{2}_{LH2}(i,j),\, R^{3}_{LH2}(i,j)\right\}$$
$$R_{HH2}(i,j) = \max\left\{R^{1}_{HH2}(i,j),\, R^{2}_{HH2}(i,j),\, R^{3}_{HH2}(i,j)\right\} \qquad (7)$$
The low frequency components are combined as follows:
$$R_{LL2}(i,j) = \left(R^{1}_{LL2} + R^{2}_{LL2} + R^{3}_{LL2}\right) / 3 \qquad (8)$$
Step 6: Apply the wavelet reconstruction procedure to $R_{LL2}$, $R_{HL2}$, $R_{LH2}$ and $R_{HH2}$ to obtain the final result of the low frequency component enhancement.
To get the optimal effect, the scale factors σ of the MSR algorithm are set as 25, 80, and 100, respectively, by conducting intensive experiments.
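A hedged sketch of Steps 1–6 for the low-frequency branch is given below. It reuses the single_scale_retinex helper from the earlier MSR sketch; the rescale helper and the PyWavelets calls are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import pywt

def rescale(a, low=0.0, high=255.0):
    """Linear mapping onto [low, high], in the spirit of Equations (5) and (6)."""
    a = np.asarray(a, dtype=np.float64)
    return (a - a.min()) / (a.max() - a.min() + 1e-12) * (high - low) + low

def enhance_low_frequency(I_LL1, sigmas=(25, 80, 100)):
    """Steps 1-6 of the low frequency branch in Figure 5 (sketch only)."""
    I_gray = rescale(I_LL1)                           # Step 2: scale coefficients to (0, 255)
    decomps = []
    for sigma in sigmas:                              # Step 3: three modified SSRs
        R = single_scale_retinex(I_gray, sigma)       # helper from the earlier MSR sketch
        R = rescale(R, 0.0, 1.0)                      # Step 4: normalize each reflectance
        LL2, (LH2, HL2, HH2) = pywt.dwt2(R, 'haar')   # second-level Haar decomposition
        decomps.append((LL2, HL2, LH2, HH2))
    LLs, HLs, LHs, HHs = zip(*decomps)
    R_LL2 = sum(LLs) / 3.0                            # Equation (8): average the low frequency parts
    R_HL2 = np.maximum.reduce(HLs)                    # Equation (7): max-combine the detail parts
    R_LH2 = np.maximum.reduce(LHs)
    R_HH2 = np.maximum.reduce(HHs)
    # Step 6: reconstruct the enhanced low frequency component
    return pywt.idwt2((R_LL2, (R_LH2, R_HL2, R_HH2)), 'haar')
```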

3.3. High Frequency Components Enhancement Based on Fuzzy Enhancement

After the wavelet decomposition, the detailed edge information and the noise are mainly located in the high frequency components. Hence, the purpose of the enhancement is to increase the local contrast, highlighting edge details while reducing noise in this frequency range. Moreover, according to the characteristics of the wavelet decomposition, the coefficients of the detailed information are generally larger than those of the noise, so it is quite possible to separate these two parts and handle them separately. Thus, the noise reduction of the high frequency components is carried out first, and then an improved fuzzy enhancement algorithm, motivated by the works in [32,33], is applied to extract the useful edge information.

3.3.1. Noise Reduction

The noise reduction methods in the wavelet domain can generally be divided into two categories: hard thresholding methods and soft thresholding methods. With hard thresholding, the image edge information is preserved well but the noise reduction performance is unsatisfactory, whereas soft thresholding obtains a relatively smooth visual effect and handles the noise well. The shrink threshold estimation method proposed by Bichao et al. [34], which considers the prior information of the coefficients of the noise-free original image, is used to calculate the threshold as follows:
$$T_{l,k} = \frac{\sigma_{n}^{2}(l)}{\sigma_{l,k}} \qquad (9)$$
where l represents the scale, k represents the index of the high frequency component in each scale, and k = 1, 2, 3 denotes the LH, HL and HH components, respectively. The noise variance $\sigma_n$ can be obtained from this component using the Median Absolute Deviation (MAD) method. The variance $\sigma_{l,k}$ of the noise-free original image at scale l and component k can be calculated with the maximum likelihood estimation method in [34].
Three soft threshold values corresponding to the three high frequency components are obtained from Equation (9) and applied to reduce the noise in the high frequency components.
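The sketch below illustrates this soft-thresholding step. The MAD-based noise estimate and the use of max(var − σ_n², 0) for the signal variance are the usual BayesShrink-style choices and are assumptions here, not details given in the paper.

```python
import numpy as np

def soft_threshold(coeffs, T):
    """Shrink wavelet coefficients toward zero by T while keeping their signs."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - T, 0.0)

def denoise_subband(coeffs, HH):
    """Soft-threshold one high frequency sub-band using the threshold of Equation (9)."""
    coeffs = np.asarray(coeffs, dtype=np.float64)
    sigma_n = np.median(np.abs(HH)) / 0.6745                    # MAD noise estimate from the HH band
    sigma_x = np.sqrt(max(np.var(coeffs) - sigma_n ** 2, 0.0))  # signal std (BayesShrink-style assumption)
    T = sigma_n ** 2 / (sigma_x + 1e-12)                        # Equation (9)
    return soft_threshold(coeffs, T)
```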

3.3.2. Implementation of Local Contrast Enhancement in the High Frequency Components Based on Fuzzy Enhancement

After the previous step, some of the noise contained in the high frequency components has been removed, so the high frequency components now mainly contain image edges together with some residual noise. Moreover, large values in a high frequency component represent detailed information of high contrast, while small values often correspond to noise and low contrast details. To improve visual quality and target detection accuracy, the high contrast edge details, such as the texture of the OGW and its two border lines, should be enhanced, while the edge details with lower contrast, such as the texture of the background, should be weakened.
The high frequency components after the first wavelet decomposition have large values and a wide distribution range. To analyze the distribution of the low-contrast and high-contrast edge information, the three high frequency wavelet components mentioned previously are first compressed into the original gray-scale range of the image, i.e., (0, 255). Figure 6 shows the histograms of the normalized horizontal, vertical and diagonal wavelet components, in which the horizontal axis represents the gray level of the compressed wavelet coefficients and the vertical axis represents the occurrence frequency of that gray value in the whole image. When these three histograms are enlarged, some signals can still be found in the gray levels from 60 to 255, although they can hardly be distinguished because they are too weak relative to the data in the range (0, 60). Interestingly, these signals correspond to the high contrast detailed information that we want to enlarge, whereas the low contrast details and some of the noise are mainly distributed in the range (0, 60). Based on this analysis, we propose an improved fuzzy enhancement algorithm to enhance the high frequency components. The detailed steps are shown in Figure 7.
Step 1: Construct the fuzzy set. The three high frequency components are converted into fuzzy sets by using the proposed improved membership function. Because each component can be roughly divided into a noise part and a detailed information part in terms of coefficient intensity, the intensity level of each component can be used as the fuzzy feature. Moreover, considering the distribution of the coefficients, a power function is selected as the membership function for its simplicity and effectiveness in separating the two parts. The function is as follows:
$$\mu_{ij} = F(C_{ij}) = C_{ij}^{\,\beta} \qquad (10)$$
where $C_{ij}$ is the coefficient intensity value in each high frequency component, and β is the critical parameter for the shape of the membership function, which lies in the range (0, 1).
Step 2: Fuzzy set transformation. To increase the separability of the fuzzy set, a power function is applied to alter the fuzzy set for its flexibility. The optimal image enhancement effects can be obtained by varying the power. The enhancement function is as follows.
$$\mu'_{ij} = \begin{cases} 2^{\gamma-1}\mu_{ij}^{\gamma}, & 0 \le \mu_{ij} \le 0.5 \\ 1 - 2^{\gamma-1}\left(1-\mu_{ij}\right)^{\gamma}, & 0.5 < \mu_{ij} \le 1 \end{cases} \qquad (11)$$
Step 3: Transform the fuzzy set back to the image high frequency component. In contrast to Step 1, the features in the fuzzy set domain are inversely transformed back to the high frequency component domain by using Equation (12). The results are the enhanced high frequency coefficients:
$$e_{ij} = F^{-1}(\mu'_{ij}) = \left(\mu'_{ij}\right)^{1/\beta} \qquad (12)$$
The improved fuzzy enhancement algorithm discussed above is applied to the three high frequency components $I_{LH1}$, $I_{HL1}$ and $I_{HH1}$. The enhanced components are denoted $I_{HL}$, $I_{LH}$ and $I_{HH}$.
The enhancement effect is controlled by two parameters β and γ which determine the shape of the fuzzy enhancement curve. To increase the enhancement effect as much as possible, the values of β and γ should be determined reasonably. We discuss how to choose the values in detail in the next section.
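The following sketch puts Equations (10)–(12) together for one high frequency component. The normalization into [0, 1] and the handling of the coefficient signs are assumptions made so the example runs end to end; the default β and γ take the values selected later in Section 4.1.2.

```python
import numpy as np

def fuzzy_enhance_component(C, beta=0.24, gamma=3.0):
    """Fuzzy enhancement of one high frequency component via Equations (10)-(12)."""
    C = np.asarray(C, dtype=np.float64)
    scale = np.abs(C).max() + 1e-12
    C_norm = np.abs(C) / scale                         # compress coefficients into [0, 1]
    mu = C_norm ** beta                                # Step 1: membership function, Equation (10)
    low = mu <= 0.5
    mu_enh = np.empty_like(mu)                         # Step 2: fuzzy enhancement, Equation (11)
    mu_enh[low] = 2.0 ** (gamma - 1) * mu[low] ** gamma
    mu_enh[~low] = 1.0 - 2.0 ** (gamma - 1) * (1.0 - mu[~low]) ** gamma
    e = mu_enh ** (1.0 / beta)                         # Step 3: inverse transform, Equation (12)
    return np.sign(C) * e * scale                      # restore the sign and the original scale
```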

4. Experimental Results and Analysis

4.1. Parameters Selection

To obtain optimal performance, the parameters of the proposed method should be chosen reasonably. The parameter selection is discussed in detail in the following subsections.

4.1.1. The Parameters for Improved MSR

The single scale Retinex (SSR) cannot provide good dynamic range compression (detailed information such as edges) and fine tonal rendition (brightness information) at the same time. In fact, there is a distinct trade-off between dynamic range compression and tonal rendition, controlled by the scale of the surround function: one can only be improved at the cost of the other. It has been verified by Rahman et al. [35,36] that a combination of multiple surrounds, namely the multi-scale Retinex (MSR), is necessary to alleviate this trade-off. Of course, the number of scales used for the MSR depends on the application. It has been found empirically, however, that a combination of three scales representing narrow, medium and wide surrounds is sufficient to provide both dynamic range compression and tonal rendition [35,36]; more detail information is provided by the narrow scale and more brightness information by the wide scale. Figure 8a shows a representative input inspection image; Figure 8b–d shows the results when the different surround functions are applied to the original image, and Figure 8e shows the effect of the MSR. As shown in Figure 8, the single scale Retinex cannot obtain an ideal enhancement result. As shown in Figure 8b,c, the imperfection of the narrow and medium surround cases is self-evident. The wide surround case produces a better output image, but the edges are not clear enough, as shown in Figure 8d. The MSR image contains features from all three scales simultaneously, providing fine edge details and uniform brightness, as shown in Figure 8e.
The mechanism of the MSR is the combination of three SSRs with narrow, medium and wide surround functions. Of course, the optimal values of these parameters depend on the application. However, we found that there was no significant difference between the cases when the three parameters remained within limited ranges: the narrow, medium and wide surrounds lie in the ranges (5, 30), (30, 90) and (90, 150), respectively. That is why there are only minor differences in the selection of the three parameters among many studies using the MSR algorithm [36,37,38]. Based on this practical experience, the three parameters in our work are set to 25, 80 and 100. The effectiveness of these parameters is verified in the following section.

4.1.2. The Parameters for Fuzzy Enhancement

There are two parameters in the high-frequency component enhancement: the membership function coefficient β and the image enhancement factor γ.
The parameter γ affects the shape of the enhancement curve. For the fuzzy features µ obtained by Equation (10), different values of γ lead to different enhancement results, and the curves corresponding to Equation (11) are shown in Figure 9. Any value of γ greater than 1 tends to strengthen the higher fuzzy features and suppress the lower ones, thereby amplifying the high contrast details and restraining the low contrast details. Specifically, when γ = 1, the transformation is linear and has no effect on the given fuzzy features. As the value of γ increases, the enhancement effect on the image becomes more obvious. However, when γ ≥ 4, most of the fuzzy values lower than 0.3 or higher than 0.7 are pushed to 0 or 1, respectively, which leads to over-enhancement. Hence, to obtain relatively optimal enhancement results, γ is set to 3 in our work.
The parameter β of modified membership function determines the local detail contrast enhancement performance. The optimal value of β is obtained in (0, 1) by simulation experiments. The enhanced results of the given original image based on the proposed method with five different β are shown in Figure 10. To see the details clearly, the selected regions in red boxes are enlarged.
In Figure 10, noise remains and is amplified in the enhanced image when β < 0.2, because a lower β tends to enhance the low contrast regions, which mainly consist of noise. The images are over-enhanced with a loss of image details when β > 0.3. We can conclude that the best enhancement effects are obtained with β in the range (0.2, 0.3). Image entropy is an indicator of the richness of the detail information of a given image: an image with high local contrast and clear details has a higher entropy value, and vice versa. The image entropy can be calculated as follows:
$$E = -\sum_{i=0}^{255} P_{i} \log_{2} P_{i} \qquad (13)$$
where $P_i$ represents the ratio of the number of pixels with intensity i to the total number of pixels in the image, and the number of intensity levels is assumed to be 256. Therefore, to evaluate the enhancement performance objectively, we conducted another experiment to evaluate the image entropy of the selected enlarged images using 21 values of β in the range (0.15, 0.35) with an interval of 0.01. The results are plotted in Figure 11, which shows that the maximum entropy is obtained when β = 0.24. Hence, β was set to 0.24 for the following experiments.
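A small sketch of Equation (13) and the β sweep behind Figure 11 is given below; the 256-bin histogram and the hypothetical enhance_with_beta helper (standing in for the full enhancement pipeline run with a given β) are assumptions for illustration.

```python
import numpy as np

def image_entropy(gray_uint8):
    """Equation (13): Shannon entropy of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray_uint8, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical sweep corresponding to Figure 11; enhance_with_beta is a stand-in name.
# betas = np.arange(0.15, 0.36, 0.01)
# best_beta = max(betas, key=lambda b: image_entropy(enhance_with_beta(image, b)))
```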

4.2. Validation of the Effectiveness of the Proposed Method

To verify the effectiveness of the proposed algorithm, four categories of the most representative inspection images were selected as test samples. The first category consists of single overhead ground wire images captured by the hand–eye camera, which provide the basis for the robot to automatically cross obstacles. The other three categories were captured by the Pan–Tilt–Zoom (PTZ) camera. The first of these contains images of the tower head and suspension clamp, which provide important visual navigation information for the robot's intelligent behavior planning. The second contains spacer images on multi-split conductors, which can be used to check for safety hazards such as shedding or loose bolts. The third contains insulator images; insulators provide electrical insulation, and by analyzing these images, potential safety hazards such as insulator shedding, cracks and pollution can be observed.
First, considering different lighting environments, noise and other factors, the three most common images from each category were selected as test samples. After processing with the algorithm proposed in this paper, the effectiveness of the algorithm is verified by observing the final enhancement results.

4.2.1. Visual Analysis

With the above parameters, the effects of the algorithm proposed in this paper on four types of images are shown in the following figures.
Figure 12 shows three overhead ground wire (OGW) images under different illumination conditions and their corresponding enhancement results. The image in Figure 12a is characterized by low overall brightness, uneven illumination distribution and low image contrast. After processing with the algorithm and the above parameters, the overall brightness of the image is raised and the brightness distribution is more uniform than before. Moreover, the image contrast is enhanced and the texture of the OGW is clearly visible, which provides a guarantee for the detection and recognition of the OGW. The brightness distribution of Figure 12b is uneven and the OGW is in a darker area, while the OGW in Figure 12c is just the opposite, lying in a brighter area. From the enhanced images (Figure 12e,f), the light on the surface of the OGW becomes more uniform in both cases, and the contrast between the OGW and the background is enhanced. Therefore, the proposed method can achieve uniform illumination and enhanced image contrast for OGW images under different lighting conditions.
Figure 13 shows three tower head and suspension clamp images under different lighting conditions and their corresponding enhancement results. Figure 13a,b shows images taken against the backlight with a brighter background, resulting in a lack of clarity in the details of the tower head such as the shock hammer and the suspension clamp. After the image enhancement, the uneven illumination is reduced, and the contours of the hammer and the suspension clamp become visible. Figure 13c shows an image taken under normal light. From the enhancement results, the image details are enhanced while the illumination of the image does not change much. Therefore, the proposed algorithm can enhance inspection images captured under uneven light, without degrading images that already have uniform light and clear content.
Figure 14 shows three spacer images under cloudy conditions and their corresponding enhancement results. Inspection images under cloudy conditions are generally dark, resulting in low image contrast, and some details, such as the bolts at the junction between the spacer and the wire, are not clear enough. As seen from the enhancement results (Figure 14d–f), the proposed algorithm can effectively improve the image contrast under cloudy conditions and make the details clearer.
Figure 15 shows three insulator images under different lighting conditions and their corresponding enhancement results. The image in Figure 15a was taken on a cloudy day. In its enhanced result (Figure 15d), the overall brightness of the image is increased, the illumination distribution is more uniform, and details such as the contour of the insulator plates are clearer. Figure 15b shows an insulator image taken under strong illumination, resulting in unclear details. In Figure 15e, although the illumination uniformity of the image is not improved, the details of the image are enhanced. Figure 15c is an insulator image taken from a distance, characterized by a complex background and low contrast with noise. In Figure 15f, the contrast of the image is enhanced. Hence, the proposed algorithm can not only enhance images with low illumination and non-uniformity, but also enhance images with high illumination and noise.

4.2.2. Image Quality Analysis

From the visual analysis presented above, it can be seen that the enhanced images are clearer, the illumination is more uniform, and the image quality is definitely improved. To evaluate the effectiveness of the proposed method more objectively, an image quality evaluation of the inspection images before and after enhancement was conducted and compared. Since the robot has carried out many inspection tests, many inspection images have been accumulated. According to the four categories mentioned above, 50 images at each of five light levels were selected for each category to form the image quality test dataset. The five light levels are too low, low, normal, strong and too strong, denoted by levels 1–5. Due to the uncontrolled illumination conditions, it is often difficult to obtain reference images. Thus, two classical no-reference image quality evaluation methods are used in this paper, namely the Natural Image Quality Evaluator (NIQE) [39] and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [40]. Both methods rely on the statistics of mean subtracted contrast normalized (MSCN) coefficients; the former fits a statistical (Gaussian) model of these features, while the latter trains a regression model through the support vector regression (SVR) method on a training dataset, and the obtained models are used to predict the image quality. In this article, some of the images in the LIVE database are used as training samples. As different methods score image quality in different ways and on different orders of magnitude, the image quality ratio R is adopted to verify the effectiveness of the proposed method, which is represented by
$$R = \frac{Q_{\mathrm{After}}}{Q_{\mathrm{Before}}} \qquad (14)$$
where $Q_{\mathrm{Before}}$ and $Q_{\mathrm{After}}$ represent the image quality scores before and after enhancement under the same image quality assessment method, respectively. The results are shown in the following figures.
Figure 16 shows that the proposed method can effectively improve the image quality. When the illumination of the image is low, the image quality is improved by the largest factor, indicating that the method has the best enhancement effect on dark, non-uniform images. When the light is strong, the image quality is improved by a relatively small factor, which indicates that the enhancement effect on uneven images under strong light is not as good as under low light. When the image is under normal light, although the proposed method can still enhance the image contrast to a certain extent and improve the image quality, the improvement factor is the smallest because the quality of the original image is already high.

4.3. Comparison with Other Methods

To further verify the advantages of the proposed method, the performance of the proposed method was compared with two histogram based techniques, HE [8] and CLAHE [11]; three Retinex theory based techniques, SSR [13], MSR [17] and SSR + BF [14]; and two transform domain based techniques, HF [19] and UIC [21]. The most representative image was selected from each type of image (four types) as the test image for further analysis.

4.3.1. Visual Comparison

Figure 17 shows the comparison on a low-light and unevenly illuminated image captured by the hand–eye camera. There is an OGW in the right part of the image, and the background contains the slideway, the pinch roller and the ground. The image is dim and the brightness of the robot surface is much higher than that of the OGW, which is the detection target; hence, the details of the OGW can hardly be observed. All eight methods corrected the uneven illumination and enhanced the contrast, but their visual effects differ. The HE method enhances the overall contrast of the image and the details of the OGW texture. However, it over-enhances the image, leaving distinct dark and bright regions that may obscure important details, and it amplifies noise and generates some artifacts. The CLAHE method solves the over-enhancement problem to some extent by restricting the contrast enhancement, and it enhances the details of the OGW and the pinch roller. However, it does not sufficiently enhance the brightness of the image, and the noise is still magnified because the technique is still based on histogram equalization. The enhancement results of the SSR and MSR methods tend to give the image a daylight-like appearance and do not enhance the local contrast. SSR produces some halo artifacts by reducing the intensity of pixels positioned near brighter pixels; MSR reduces the halo effects to some degree by considering the illumination information at different scales. As shown in the sixth row of the first column, the BFR method can partially eliminate image noise. However, the brightness is not enhanced as much as by the CLAHE method, and the details of the OGW are blurry and not clear enough. The HF method enhances the details successfully, but the overall illumination is not bright enough. The UIC method corrects the uneven illumination effectively, but the details are not clear enough and some noise remains. The proposed method achieves the best enhancement results: the enhanced image is natural looking with uniform illumination, clear details and higher contrast, similar to the UIC result except that the noise is reduced more completely and the local contrast is higher.
Figure 18 shows the comparison of an uneven illumination image with an OGW, a tower head, and a suspension clamp in it. The PTZ network camera, carried by the robot, captures the image when the robot is approaching the tower in the backlighting scene. HE expands the dynamic range of the image, but the region of the tower head becomes darker and the structure details are almost invisible. CLAHE obtains a uniform illumination appearance, but some regions are relatively blurred such as the bolts in the tower head and the shock hammers. SSR and MSR tend to make the image excessively white and there are still some halo artifacts in the enhancement image. Some significant details such as the bolts bounder are removed by the BFR method. For HF, the uneven illumination is not compensated completely though the edges of the tower, shock hammer and the OGW are clear enough. The UIC method obtains relatively better visual effects with uniform illumination and sharp local edges. However, the overall contrast is relatively low and the suspension clamp is blurry. The proposed method obtains the optimal result with clear bolts, suspension clamp and OGW. The overall illumination is homogeneous and the details of the image are enhanced.
Figure 19 and Figure 20 show two images captured by the PTZ network camera on a cloudy day. The third column shows a four-bundle conductor with a spacer and four clamps; the clamps are darker than the other parts and their details cannot be observed. The fourth column shows an electrical insulator and a part of the pylon; the surface of the insulator is dim and obscure because of the backlighting. HE can effectively enhance the contrast of the two images. However, the illumination is more uneven than in the given images. For example, the regions of the clamps and the electric conductor are darker than the other regions, and it is hard to observe their details. The illumination of the image obtained by CLAHE is more uniform than that of HE, but there are still some blurry effects around the clamps. The images processed by the SSR and MSR methods show a daylight-like appearance, which makes the images fuzzy. The BFR method effectively eliminates noise, but the details around the clamps and the electric conductor are not clear enough. HF fails to improve the image brightness although it enhances the details. The UIC method successfully compensates for the uneven illumination and reveals the clamps and the electric conductor in the dark regions, but there are still some noise and blurry regions in the image. The proposed method achieves the best enhancement performance with uniform illumination and clear edges.

4.3.2. Quantitative Comparison

To objectively evaluate the enhancement performance and the advancement of the proposed method, seven image enhancement methods were used for comparison. Five measurement criteria were used: mean value (Mean), standard deviation (STD), peak signal-to-noise ratio (PSNR), image entropy (Entropy) and average gradient (AG). The calculations for these five measurements are given below.
(a)
Average intensity value represents the overall brightness of the image:
$$\mu = \frac{1}{M \times N}\sum_{x=1}^{M}\sum_{y=1}^{N} f(x,y) \qquad (15)$$
(b)
Standard deviation represents the overall contrast of the given image. Higher standard deviation value means more contrast information in the image:
$$\sigma = \sqrt{\frac{1}{M \times N}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f(x,y)-\mu\right]^{2}} \qquad (16)$$
(c)
Mean square error (MSE) and peak signal-to-noise ratio (PSNR) represent the de-noising performance of the method:
$$\mathrm{MSE} = \frac{1}{M \times N}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left(I(i,j)-K(i,j)\right)^{2}, \qquad \mathrm{PSNR} = 10 \times \log_{10}\left(\mathrm{MAX}_{I}^{2}/\mathrm{MSE}\right) \qquad (17)$$
where $\mathrm{MAX}_I$ is the maximum intensity value in image I. A higher PSNR value means less noise in the image.
(d)
Image entropy is an important factor that represents the richness of the information of an image. Higher entropy value means more details in the image, and the entropy of the enhanced image should be larger than that of given image:
$$E = -\sum_{i=0}^{K} P_{i} \log_{2} P_{i} \qquad (18)$$
where $P_i$ is the ratio of the number of pixels with intensity i to the total number of pixels.
(e)
Average gradient represents the local contrast of the details by considering the intensity difference of pixels. The larger the average gradient value is, the higher the local contrast among the details in the image will be:
$$\mathrm{AG} = \frac{1}{(M-1)(N-1)}\sum_{x=1}^{M-1}\sum_{y=1}^{N-1}\sqrt{\frac{\left[f(x+1,y)-f(x,y)\right]^{2} + \left[f(x,y+1)-f(x,y)\right]^{2}}{2}} \qquad (19)$$
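For reference, the five criteria in Equations (15)–(19) translate directly into the following NumPy sketch; only minor details, such as the float casting and the guard against division by zero, are assumptions.

```python
import numpy as np

def mean_intensity(f):                                  # Equation (15)
    return float(np.mean(f))

def std_contrast(f):                                    # Equation (16)
    return float(np.std(f))

def psnr(I, K):                                         # Equation (17)
    I = np.asarray(I, dtype=np.float64)
    K = np.asarray(K, dtype=np.float64)
    mse = np.mean((I - K) ** 2)
    max_i = I.max()                                     # MAX_I: maximum intensity in image I
    return float(10.0 * np.log10(max_i ** 2 / (mse + 1e-12)))

def entropy(f):                                         # Equation (18), 256 gray levels assumed
    hist, _ = np.histogram(f, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(f):                                # Equation (19)
    f = np.asarray(f, dtype=np.float64)
    dx = f[1:, :-1] - f[:-1, :-1]                       # forward difference along rows
    dy = f[:-1, 1:] - f[:-1, :-1]                       # forward difference along columns
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```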
Table 1 shows the quantitative results of the enhanced images obtained by the different methods on four images, A, B, C and D, which correspond to the original images in Figure 17, Figure 18, Figure 19 and Figure 20, respectively. The statistical analysis results of Table 1 are shown in Figure 21 as a block diagram.
In terms of the mean intensity value of the image, most of the methods increase this value, which brightens the overall image, since the illumination of the four given images is relatively low. However, the HF method does not improve the brightness since it suppresses the illumination, which is treated as the low-frequency component of the image. To objectively evaluate the illumination correction performance, the image histograms of the different methods are shown in Figure 21. In the original images, the gray levels are limited to (0, 200) and many details are drowned in darkness. Most methods extend the dynamic range except the HF method, although it may enhance the details in the dark to a certain degree. The mean values of SSR and MSR are larger than those of the other methods, which gives the enhanced images a daylight-like appearance. The proposed method obtains an approximately uniform illumination distribution, which proves that it can effectively compensate for the uneven illumination of the image.
In terms of the standard deviation of the image, the HE method achieves the largest value since it over-enhances the image with strong overall contrast, giving the image an unnatural appearance. The proposed method achieves the next highest standard deviation, which proves that it can enhance the local contrast while correcting the uneven illumination.
In terms of peak signal-to-noise ratio, the BFR method achieves the best results since it uses a bilateral filter to reduce noise. HE, SSR and MSR obtain relatively low PSNR values since they amplify noise while compensating for the uneven illumination. The proposed method obtains the second highest PSNR, which proves that it can eliminate noise effectively.
In terms of image entropy, which stands for the richness of details in the image, the proposed method achieves the maximum value since the fuzzy enhancement procedure enhances the high contrast region emphatically. The results show that the enhancement factor selected by the parameter selection procedure is optimal. The image entropy of HE is relatively high due to its over-enhanced characteristic for low-light and uneven illumination image. The CLAHE method also obtains a higher value, but still lower than that of HE, since it alleviates the over-enhancement tendency and can always reveal more details. The image entropy of SSR and MSR methods is relatively low since the illumination component of the image is hard to estimate accurately. The BFR shares the same characteristic with SSR. The image comparisons with different methods prove that the proposed method can enhance the local contrast and the image details effectively.
In terms of the average gradient of the image, which stands for the local contrast, the HE method gets the highest value because it over-enhances the local contrast; thus, the HE method is left out of the average gradient analysis. The proposed method gets the next highest value after the HE method, which again demonstrates the contrast and detail enhancement capability of the proposed method.
The above results validate the effectiveness and superiority of the proposed method, but its computational efficiency should also be examined. The computational complexity was analyzed in terms of the MATLAB execution time for enhancement on a 2.5 GHz Intel Core i7 CPU with 4 GB RAM running the Windows 10 OS. Table 2 indicates that the time cost of the transform domain methods is larger than that of the spatial domain methods. Because the proposed method operates in the frequency domain and involves conversions between the spatial and frequency domains, it is time consuming; therefore, the algorithm can currently only be used in offline or non-real-time scenarios. However, to further improve the intelligence of the CIR, real-time processing is an engineering problem that must be solved. Given that the time was measured using unoptimized MATLAB code, a more efficient compiled C/C++ implementation of the proposed method would significantly improve its applicability.

5. Conclusions

In this paper, a novel image enhancement method based on Retinex and a fuzzy enhancement algorithm in the wavelet domain is proposed for the inspection images obtained by the cable inspection robot. It can compensate for uneven illumination, reduce noise and enhance local contrast simultaneously. The effectiveness of the proposed method is validated by visual analysis and image quality assessment. It achieves the best enhancement effect on unevenly illuminated images under low light, and the effect decreases as the illumination becomes stronger; the algorithm can also enhance the contrast to some extent even for well-lit images. In general, this method is suitable for outdoor lighting conditions and can improve image quality. Compared with seven other effective methods using five measurements, the proposed method achieves the best illumination compensation and contrast enhancement performance: it obtains uniform illumination for the four given images and reveals abundant image details, such as the texture of the OGW, the shape of the shock damper and the bundle conductor, and the texture of the insulator surface under backlighting. The results show that the proposed method can be applied to the CIR to enhance image quality, providing high quality inspection images for the operator and improving the detection performance for the targets on the inspected cable lines.
For the proposed method, the key parameters such as the three scale factors and the parameter β of modified membership function are set to fixed values, which are broadly feasible. However, since the inhomogeneous characteristic of each image is slightly different, the method with fixed parameters may not enhance the image effectively. Thus, our future work is to investigate an adaptive parameter selection method for obtaining better enhancement results. Considering that the algorithm is time-consuming at present and cannot meet the requirement of real-time processing, improving the real-time performance of the algorithm is also one of the key points for further research.

Author Contributions

In this research, Conceptualization, X.Y. and G.W.; Methodology, X.Y.; Software, X.Y. and L.H.; Validation, L.H. and Y.Z.; Formal Analysis, F.F.; Investigation, G.W.; Resources, X.Y. and G.W.; Writing-Original Draft Preparation, X.Y.; Writing-Review & Editing, X.Y.; Visualization, X.Y.; Supervision, G.W.; Project Administration, G.W.; Funding Acquisition, G.W.

Acknowledgments

This work was supported by Guangdong Robot Special Project (2015B090922007), Foshan Technical Innovation Team Project (2015IT100143) and South Wisdom Valley Innovative Research Team Program.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wei, W.; Gongping, G.; Yucheng, B. Hand-eye-vision based control for an inspection robot’s autonomous line grasping. J. Cent. South Univ. 2014, 21, 2216–2227.
2. Richard, P.L.; Pouliot, N.; Montambault, S. Introduction of a LIDAR-based obstacle detection system on the LineScout power line robot. J. Endourol. 2014, 28, 330–334.
3. Song, Y.; Wang, H.; Zhang, J. A vision-based broken strand detection method for a power-line maintenance robot. IEEE Trans. Power Deliv. 2014, 29, 2154–2161.
4. Qin, X.; Wu, G.; Ye, X.; Huang, L.; Lei, J. A Novel Method to Reconstruct Overhead High-Voltage Power Lines Using Cable Inspection Robot LiDAR Data. Remote Sens. 2017, 9, 753.
5. Xuhui, Y.; Gongping, W.; Fei, F.; XiangYang, P.; Ke, W. Overhead ground wire detection by fusion global and local features and supervised learning method for a cable inspection robot. Sens. Rev. 2018, 38, 376–386.
6. Wenming, C.; Yaonan, W. Research on obstacle recognition based on vision for deicing robot on high voltage transmission line. Chin. J. Sci. 2011, 32, 2049–2056.
7. Maini, R.; Aggarwal, H. A Comprehensive Review of Image Enhancement Techniques. arXiv 2010, arXiv:1003.4053.
8. Kim, Y.T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8.
9. Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75.
10. Wang, W.; He, C.; Tang, L.; Ren, Z. Total variation based variational model for the uneven illumination correction. Neurocomputing 2018, 281, 106–120.
11. Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization; Academic Press Professional, Inc.: San Diego, CA, USA, 1994.
12. Fu, R.G.; Feng, S.; Shen, T.Y.; Luo, H.; Wei, Y.F.; Yang, Q. A Low-Light-Level Video Recursive Filtering Technology Based on the Three Dimensional Coefficients. In Optics and Photonics for Information Processing XI; International Society for Optics and Photonics: Bellingham, WA, USA, 2017.
13. Land, E.H.; McCann, J.J. Lightness and Retinex Theory. J. Opt. Soc. Am. 1971, 61, 1–11.
14. Si, L.; Wang, Z.; Xu, R.; Tan, C.; Liu, X.; Xu, J. Image Enhancement for Surveillance Video of Coal Mining Face Based on Single-Scale Retinex Algorithm Combined with Bilateral Filtering. Symmetry 2017, 9, 93.
15. Xie, S.J.; Lu, Y.; Yoon, S.; Yang, J.; Park, D.S. Intensity Variation Normalization for Finger Vein Recognition Using Guided Filter Based Singe Scale Retinex. Sensors 2015, 15, 17089–17105.
16. Xiao, J.; Peng, H.; Zhang, Y.; Tu, C.; Li, Q. Fast image enhancement based on color space fusion. Color Res. Appl. 2016, 41, 22–31.
17. Tao, F.; Yang, X.; Wu, W.; Liu, K.; Zhou, Z.; Liu, Y. Retinex-based image enhancement framework by using region covariance filter. Soft Comput. 2018, 22, 1399–1420.
18. Fan, C.N.; Zhang, F.Y. Homomorphic filtering based illumination normalization method for face recognition. Pattern Recognit. Lett. 2011, 32, 1468–1479.
19. Faraji, M.R.; Qi, X. Face recognition under varying illumination based on adaptive homomorphic eight local directional patterns. IET Comput. Vis. 2014, 9, 390–399.
20. Łoza, A.; Bull, D.R.; Hill, P.R.; Achim, M.A. Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients. Digit. Signal Process. 2013, 23, 1856–1866.
21. Shen, X.; Li, Q.; Tian, Y.; Shen, L. An Uneven Illumination Correction Algorithm for Optical Remote Sensing Images Covered with Thin Clouds. Remote Sens. 2015, 7, 11848–11862.
22. Kim, S.E.; Jeon, J.J.; Eom, I.K. Image contrast enhancement using entropy scaling in wavelet domain. Signal Process. 2016, 127, 1–11.
23. Kaur, A.; Singh, C. Contrast enhancement for cephalometric images using wavelet-based modified adaptive histogram equalization. Appl. Soft Comput. 2017, 51, 180–191.
24. Jung, C.; Yang, Q.; Sun, T.; Fu, Q.; Song, H. Low light image enhancement with dual-tree complex wavelet transform. J. Vis. Commun. Image Represent. 2017, 42, 28–36.
25. Wang, C.L.; Tang, Y.C.; Zou, X.J.; SiTu, W.M.; Feng, W.X. A robust fruit image segmentation algorithm against varying illumination for vision system of fruit harvesting robot. Optik 2017, 131, 626–631.
26. Tu, G.J.; Karstoft, H.; Pedersen, L.J.; Jørgensen, E. Illumination and Reflectance Estimation with its Application in Foreground Detection. Sensors 2015, 15, 21407–21426.
27. Mallat, S.G. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693.
28. Zhao, X.; Lin, Y.; Ou, B.; Yang, J. A wavelet-based image preprocessing method for illumination insensitive face recognition. J. Inf. Sci. Eng. 2015, 31, 182–189.
29. Xue, G.; Xue, P.; Liu, Q. A Method to Improve the Retinex Image Enhancement Algorithm Based on Wavelet Theory. Int. Symp. Comput. Intell. Des. 2010, 1, 182–185.
30. Gao, T. Face recognition based on multi-scale Retinex in discrete wavelet transform model under difficult lighting condition. Video Eng. 2012, 36, 122–125.
31. Fu, F.; Liu, F. Wavelet-Based Retinex Algorithm for Unmanned Aerial Vehicle Image Defogging. Int. Symp. Comput. Intell. Des. 2016, 1, 426–430.
32. Du, Y.; Tong, M.; Zhou, L.; Dong, H. Edge detection based on Retinex theory and wavelet multiscale product for mine images. Appl. Opt. 2016, 55, 9625–9637.
33. Pal, S.K.; King, R.A. Image Enhancement Using Smoothing with Fuzzy Sets. IEEE Trans. Syst. Man Cybern. 1981, 11, 494–501.
34. Bichao, Z.; Yiquan, W. Infrared Image Enhancement Method Based on Stationary Wavelet Transformation and Retinex. Acta Opt. Sin. 2010, 30, 2788–2793.
35. Rahman, Z.U.; Jobson, D.J.; Woodell, G.A. Multi-scale retinex for color image enhancement. Int. Conf. Image Process. 2002, 1003, 1003–1006.
36. Rahman, Z.U.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. Hum. Vis. Electron. Imaging 2004, 7, 100–110.
37. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
38. Wang, W.; Liu, W.; Lang, F.; Zhang, G.; Gao, T.; Cao, T.; Wang, F.; Liu, S. Froth Image Acquisition and Enhancement on Optical Correction and Retinex Compensation. Minerals 2018, 8, 103.
39. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
40. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
Figure 1. The cameras and the captured inspection image under uneven illumination. (a) Camera installation; (b) inspection image under uneven illumination.
Figure 2. (a–d) Sample inspection images under different illumination heterogeneity in the same scene; (e–h) corresponding spectrum maps of the images in (a–d).
Figure 3. The flowchart of the proposed inspection image enhancement method.
Figure 4. Schematic diagram of the single-scale Retinex (SSR) algorithm.
Figure 5. Improved MSR enhancement method for the low-frequency component.
Figure 6. Gray-scale histograms of the (a) horizontal, (b) vertical, and (c) diagonal high-frequency wavelet components.
Figure 7. The flowchart of the fuzzy enhancement algorithm.
Figure 8. (a) The original input; (b) narrow surround, σ = 25; (c) medium surround, σ = 80; (d) wide surround, σ = 100; and (e) MSR output with Wk = 1/3, k = 1, 2, 3. The narrow surround acts as a high-pass filter that preserves edge details but loses some brightness information. The wide surround captures the correct brightness information but loses some detail. The MSR output is the average of the three renditions.
Figure 9. The effect of parameter γ.
Figure 10. Local contrast enhancement effects under different β: (a) the original image and the outputs of the proposed method with different parameters; (b) enlarged views of the regions marked with red boxes in (a).
Figure 11. Image entropy under different β, corresponding to Figure 10.
Figure 12. Overhead ground wire (OGW) images under different illumination conditions and the corresponding enhancement results: (a) low light; (b) uneven light with the OGW in the dark area; (c) uneven light with the OGW in the bright area; (d–f) enhancement results corresponding to (a–c).
Figure 13. Tower head and suspension clamp images under different lighting conditions and the corresponding enhancement results: (a) uneven illumination caused by backlighting; (b) backlighting; (c) uniform light; (d–f) enhancement results corresponding to (a–c).
Figure 14. Spacer images and enhancement results: (a–c) three spacer images under cloudy conditions; (d–f) enhancement results corresponding to (a–c).
Figure 15. Insulator images under different lighting conditions and the corresponding enhancement results: (a) cloudy condition; (b) high-light condition; (c) low contrast with noise; (d–f) enhancement results corresponding to (a–c).
Figure 16. Image quality assessment based on two methods: (a) OGW; (b) tower head; (c) spacer; (d) insulator.
Figure 17. The OGW image and its enhancement results obtained with the proposed method and seven other methods.
Figure 18. Tower head image and its enhancement results obtained with the proposed method and seven other methods.
Figure 19. Spacer image and its enhancement results obtained with the proposed method and seven other methods.
Figure 20. Insulator image and its enhancement results obtained with the proposed method and seven other methods.
Figure 21. Box plots of the evaluation results in Table 1.
Table 1. Performance comparison between the proposed method and seven other methods (the best and the next best results are marked in bold for clarity).

Image | Metric | Original | HE | CLAHE | SSR | MSR | BFR | HF | UIC | Proposed
A | μ | 29.1978 | 127.9185 | 59.9373 | 109.7300 | 109.2672 | 52.2943 | 41.2363 | 85.3292 | 78.5231
A | σ | 45.6244 | 73.4726 | 62.7341 | 70.0999 | 72.858 | 61.6117 | 33.4113 | 72.4624 | 72.6589
A | PSNR | 0 | 7.3783 | 16.1613 | 9.1124 | 9.0593 | 18.5812 | 19.7599 | 16.5293 | 17.3363
A | E | 5.5765 | 7.1255 | 6.8520 | 7.2563 | 7.1274 | 6.7632 | 5.9347 | 7.3687 | 8.52
A | AG | 11.2459 | 25.6000 | 19.4770 | 24.4279 | 24.7139 | 17.7189 | 10.3822 | 23.5873 | 24.9759
B | μ | 114.2131 | 127.2792 | 118.5694 | 217.3490 | 217.3276 | 173.2557 | 89.3493 | 170.9758 | 170.0026
B | σ | 35.5155 | 75.0662 | 44.8043 | 27.2032 | 27.2031 | 39.5885 | 28.8699 | 48.2586 | 50.2931
B | PSNR | 0 | 14.8789 | 17.1565 | 7.8022 | 7.8040 | 12.4479 | 17.2273 | 16.7657 | 17.5285
B | E | 6.7910 | 7.8054 | 7.4964 | 5.9883 | 5.9853 | 6.5478 | 6.7040 | 7.5647 | 7.9810
B | AG | 3.0168 | 6.5515 | 4.3512 | 2.5193 | 2.5117 | 3.2190 | 3.2947 | 5.4863 | 5.6898
C | μ | 95.9608 | 126.6853 | 114.8822 | 204.8801 | 204.6748 | 166.8592 | 81.0755 | 175.3968 | 164.3482
C | σ | 16.9639 | 76.2246 | 37.6639 | 14.3400 | 14.3057 | 23.0463 | 21.9640 | 36.3856 | 38.5258
C | PSNR | 0 | 11.1655 | 18.2749 | 7.3770 | 7.3933 | 10.9895 | 21.1130 | 17.1076 | 18.3802
C | E | 5.6879 | 7.9420 | 7.2475 | 4.9972 | 4.9958 | 5.5791 | 6.2132 | 6.7278 | 8.8295
C | AG | 5.7872 | 23.8295 | 18.1844 | 4.1298 | 4.1102 | 4.5419 | 6.3697 | 16.7515 | 19.5665
D | μ | 71.4139 | 127.0236 | 107.1109 | 194.9818 | 195.0240 | 143.9296 | 76.5124 | 157.2113 | 148.1063
D | σ | 16.2501 | 75.7446 | 42.4883 | 26.4332 | 26.4856 | 26.7658 | 22.2173 | 44.8245 | 45.2240
D | PSNR | 0 | 9.6807 | 14.7705 | 6.2544 | 6.2511 | 10.7697 | 24.4966 | 14.4342 | 15.3605
D | E | 5.7003 | 7.9541 | 7.4366 | 5.9742 | 5.9773 | 5.8995 | 6.3785 | 7.8035 | 8.3831
D | AG | 6.7750 | 39.9595 | 23.0930 | 8.8554 | 8.8670 | 5.2698 | 8.7855 | 23.0940 | 25.1184
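For reference, the five measurements in Table 1 can be computed roughly as follows. The definitions of PSNR, entropy (E), and average gradient (AG) below are the standard textbook forms, so minor normalization details may differ from the exact expressions used by the authors; note also that this PSNR function returns infinity when an image is compared with itself, whereas Table 1 lists the Original column as 0.

```python
import numpy as np


def psnr(reference, test):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)


def entropy(img):
    """Shannon entropy (E) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def average_gradient(img):
    """Average gradient (AG), a common sharpness/detail measure."""
    f = img.astype(np.float64)
    dx = np.diff(f, axis=1)[:-1, :]   # horizontal differences, trimmed to a common shape
    dy = np.diff(f, axis=0)[:, :-1]   # vertical differences, trimmed to a common shape
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))


def evaluate(original, enhanced):
    """mu, sigma, PSNR, E and AG for an enhanced image, as reported in Table 1."""
    e = enhanced.astype(np.float64)
    return {"mu": float(e.mean()), "sigma": float(e.std()),
            "PSNR": psnr(original, enhanced), "E": entropy(enhanced),
            "AG": average_gradient(enhanced)}
```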
Table 2. Execution time in seconds per image (320 × 240 for Time 1; 1080 × 1920 for Time 2).

Algorithm | HE | CLAHE | SSR | MSR | BFR | HF | UIC | Proposed
Time 1 (s) | 0.015 | 0.052 | 0.032 | 0.092 | 0.061 | 0.122 | 0.150 | 0.132
Time 2 (s) | 0.120 | 0.424 | 0.220 | 0.842 | 0.660 | 1.241 | 1.526 | 1.250
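A simple way to obtain this kind of per-image timing for any of the compared methods is sketched below; the `enhance` callable and the image paths are placeholders, and the exact measurement protocol used for Table 2 is not specified in the paper.

```python
import time

import cv2


def time_per_image(enhance, paths, repeats=5):
    """Average wall-clock seconds per image for a given enhancement function."""
    total, count = 0.0, 0
    for path in paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        for _ in range(repeats):
            t0 = time.perf_counter()
            enhance(gray)
            total += time.perf_counter() - t0
            count += 1
    return total / max(count, 1)
```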
