Article

Hybrid Dark Channel Prior for Image Dehazing Based on Transmittance Estimation by Variant Genetic Algorithm

Long Wu, Jie Chen, Shuyu Chen, Xu Yang, Lu Xu, Yong Zhang and Jianlong Zhang
1 School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
2 School of Information and Control, Keyi College of Zhejiang Sci-Tech University, Shaoxing 312369, China
3 Institute of Optical Target Simulation and Test Technology, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 4825; https://doi.org/10.3390/app13084825
Submission received: 17 February 2023 / Revised: 3 April 2023 / Accepted: 10 April 2023 / Published: 12 April 2023
(This article belongs to the Special Issue Advances in Digital Image Processing)

Abstract:
Image dehazing has always been one of the main areas of research in image processing. The traditional dark channel prior algorithm (DCP) has some shortcomings, such as incomplete fog removal and excessively dark images. In order to obtain haze-free images with high quality, a hybrid dark channel prior (HDCP) algorithm is proposed in this paper. HDCP first employs Retinex to remove the interference of the illumination component. The variant genetic algorithm (VGA) is then used to obtain the guidance image required by the guided filter to optimize the atmospheric transmittance. Finally, the modified dark channel prior algorithm is used to obtain the dehazed image. Compared with three other modified DCP algorithms, HDCP has the best subjective visual effects of haze removal and color fidelity. HDCP also shows superior objective indexes in the mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and information entropy (E) for different haze degrees.

1. Introduction

In recent years, with the growing demands of photography, transportation, the military, aerospace, and other fields, image dehazing has gradually become a popular research area in image processing.
Researchers generally approach haze removal from the perspectives of image enhancement and image restoration. The former focuses on enhancing low-level features of images, such as contrast, sharpness, and edges, through techniques such as low-light stretching, histogram equalization, and homomorphic filtering [1,2,3,4,5]. The latter builds on the physical degradation model of foggy images and recovers the image information lost in the fogging process by optimally estimating the haze-free image [6,7,8]. However, estimating the parameters of the degradation model is challenging, as it is an ill-posed problem when only a single hazy image is given. Therefore, some image restoration methods require additional information or assumptions [9,10,11].
Dong [12] used a haziness flag to measure the degree of haziness, obtaining an adaptive initial transmission value by establishing the relationship between image contrast and the haziness flag. The method has superior haze removal and color balancing capabilities for images with different haze densities. In addition, Fattal [13] estimated the irradiance of the scene and deduced the transmittance image based on the assumption that the transmittance and the surface shading in the scene are locally uncorrelated. To deal with outdoor images in sand-dust environments, Park [14] used successive color balance with a coincident chromatic histogram to adjust the pixels of each color component based on the statistical characteristics of the green component. Yan et al. [15] improved the dark channel prior theory and applied contrast-limited adaptive histogram equalization (CLAHE) to enhance the optimized transmittance image. This method improves significantly on the original DCP method and can be applied to the defogging of dense-fog infrared images.
Han et al. [16] analyzed the causes of underwater image degradation and surveyed state-of-the-art intelligent algorithms, such as deep learning, for underwater image dehazing and restoration, comparing the dehazing and color-restoration performance of different methods. The paper also introduced an underwater image color evaluation metric and provided an overview of the major underwater image applications, which greatly facilitates follow-up research. Zhang et al. [17] designed a fully end-to-end dehazing network for single image dehazing named the dense residual and dilated dehazing network (DRDDN). A dilated, densely connected block was designed to fully exploit multi-scale features through an adaptive learning process, and deep residuals were used to propagate low-level features to high-level layers. Li et al. [18] used an adversarial game between a pair of neural networks to accomplish end-to-end photorealistic dehazing. The generator learned to simultaneously restore the haze-free image and capture the non-uniformity of haze to avoid uniform contrast enhancement, and a task-driven training strategy was proposed to optimize object detection on dehazed images without updating the parameters of the object detector. Qin et al. [19] proposed an end-to-end feature fusion attention network (FFA-Net) to directly restore the haze-free image. The network is composed of three parts: (1) a novel feature attention (FA) module, (2) a basic block consisting of local residual learning and feature attention, and (3) an attention-based different-level feature fusion structure. The feature weights are adaptively learned by the FA module, giving more weight to important features. The experimental results demonstrated strong progress in both indoor and outdoor defogging.
All of the above deep-learning-based defogging methods have good defogging effects. However, they also have certain drawbacks. These models may perform badly under certain lighting conditions, such as strong sunlight and shadows. They require a large amount of training data, and a reduced training set may lead to overfitting and poor generalization. Meanwhile, because of the large amount of computing resources and data required for training, deployment costs can be high. Most importantly, fog removal remains incomplete across varied haze densities, especially at high haze density, as will be shown specifically in Section 3. Traditional defogging algorithms do not have such problems.
The dark channel prior (DCP) method proposed by He et al. [20] is one of the most famous single-image dehazing methods. DCP rests on the assumption that, in every local non-sky patch, there exists an extremely dark pixel in at least one color channel of the image [21]. Because implementing DCP with the soft-matting method [22] is computationally expensive, some studies [23,24,25] have employed guided filtering, bilateral filtering, and edge substitution to replace the soft-matting process, which significantly improves the efficiency of DCP. Salazar-Colores et al. [26] proposed a novel methodology based on depth approximations through DCP, local Shannon entropy, and a fast guided filter to reduce artifacts and improve image recovery in sky regions, with a significant decrease in computation time. Peng et al. [27] used depth-dependent color variation, scene ambient light differences, and an adaptive color-corrected image formation model (IFM) to better restore degraded images. This method produces satisfying restored and enhanced results, and the approach has been shown to unify and generalize a wide variety of DCP-based methods for underwater, nighttime, haze, and sandstorm images. Singh et al. [28] proposed a new haze removal technique that integrates the dark channel prior with CLAHE to remove haze from color images; a bilateral filter was used to reduce noise, proving quite effective in noise reduction and in correcting uneven illumination.
In this paper, in order to overcome the problems of overly dark results and incomplete defogging caused by inaccurate estimation of the atmospheric transmittance, the hybrid dark channel prior (HDCP) is proposed. In HDCP, Retinex [29] is first utilized to remove the interference of the illumination component and improve the brightness of the image. The atmospheric light intensity is then refined iteratively, and a tolerance-improved DCP is introduced to obtain the dehazed image. In the algorithm, a variant genetic algorithm (VGA) [30] is proposed to enhance the grayscale image of the original image, which is used as the guidance image of the guided filter to optimize the transmittance. To verify the algorithm, the public datasets O-HAZE [31] and NYU2 [32] are used as the experimental images. Compared with other DCP-based algorithms, the average MSE of the proposed method decreases by 26.98%, the average SSIM increases by 10.298%, and the average entropy increases by 7.780%. Compared with the conventional DCP, the result of the proposed algorithm has higher brightness and a more complete degree of fog removal, without serious image or color distortions.

2. Materials and Methods

Images with haze are characterized by uneven illumination, low visibility, and low contrast. The atmospheric scattering model describes the degradation process of foggy images [33] and is expressed as:
I(x) = J(x) \, t(x) + A \left( 1 - t(x) \right)    (1)
where I(x) represents the original image, and x represents the pixels. J(x) is the clear image restored by dehazing. A is the atmospheric luminance intensity, and t(x) is the transmittance.
The unknowns A and t(x), the keys to obtaining a clear image J(x), are generally estimated in algorithms such as DCP.
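As a concrete illustration, Equation (1) maps directly to a few lines of NumPy. The sketch below is illustrative only: the function name and the assumption that images are normalized to [0, 1] are ours, not the paper's.

```python
import numpy as np

def apply_haze(J: np.ndarray, t: np.ndarray, A: float) -> np.ndarray:
    """Forward atmospheric scattering model of Equation (1):
    I(x) = J(x) t(x) + A (1 - t(x)).
    J: clear image in [0, 1]; t: per-pixel transmittance in [0, 1];
    A: global atmospheric light intensity (assumed scalar here)."""
    if J.ndim == 3 and t.ndim == 2:
        t = t[..., np.newaxis]  # broadcast one t(x) map over the RGB channels
    return J * t + A * (1.0 - t)
```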
DCP is designed based on a basic assumption, which is that, in most non-sky local areas, there are some pixels with very low values in at least one color channel, approaching 0. Therefore, the relevant parameters are estimated as follows.
The atmospheric luminance intensity A is normally estimated from the pixels with the highest fog concentration and highest brightness in the image [34]. The calculation of the transmittance t(x) is as below [20]:
t(x) = 1 - \omega \min_{c \in \{R, G, B\}} \left( \min_{y \in \Omega(x)} \frac{I^c(y)}{A^c} \right)
where Ω(x) is a local patch centered at x, I^c is a color channel of I, and ω = 0.95 is a constant that preserves a natural image perception by retaining a small amount of haze in the result, especially for distant objects. In fact, the dual min operation yields the dark channel of the normalized hazy image, which directly provides the estimate of the transmission.
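The dual-min computation translates directly into array operations. Below is a minimal sketch, assuming images normalized to [0, 1] and a 15 × 15 patch for Ω(x) (a common choice in the DCP literature, not specified here).

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum over the RGB channels, followed by a
    minimum filter over the local patch Omega(x)."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmittance(img: np.ndarray, A: np.ndarray,
                           omega: float = 0.95, patch: int = 15) -> np.ndarray:
    """t(x) = 1 - omega * dark_channel(I / A), the estimate above.
    img in [0, 1]; A is the per-channel atmospheric light."""
    normalized = img / A.reshape(1, 1, 3)  # divide each channel by its airlight
    return 1.0 - omega * dark_channel(normalized, patch)
```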
DCP has good defogging effects for landscape photos. However, the disadvantage of DCP is color distortion in bright areas or areas with massive gray and white colors, which results in dark dehazed images [35].
In order to solve the above problems, this paper proposes a hybrid dark channel prior (HDCP) algorithm.
In Figure 1, Retinex is used in the preprocessing to remove the interference of the illumination component. In the modified DCP, the atmospheric light intensity Ai of each of the RGB channels is estimated iteratively after the dark channel is determined. Then, the transmittance t(x) is optimized by guided filtering, with the grayscale image of R(x) enhanced by the variant genetic algorithm (VGA) serving as the guidance image in the filter. Finally, the fog-free image J(x) is obtained according to the atmospheric scattering model.

2.1. Retinex Algorithm

Based on Retinex, the original image I is expressed as:
I(x) = R(x) \cdot L(x)
where I(x) is the original image, R(x) is the reflectance component, and L(x) is the illumination component.
The purpose of image enhancement based on Retinex is to estimate L(x) from I(x) and thereby decompose R(x). In the meantime, the effect of uneven brightness is eliminated, and the visual effect of the image improves.
Conventional Retinex has halo effects in areas with large brightness differences. In this paper, McCann’s Retinex [29] is employed, as it is suitable for image enhancement for images with shallow shading or uneven illumination. The reflectance component for the center pixel in the window is expressed as:
R_c = R_0 + \sum_{i=1}^{m} \frac{I_c - I_{m_{i+1}}(x)}{2^i}
where Rc is the final reflectance component estimation of the center pixel, and R0 is the largest pixel value. Ic and Im are the logarithmic values of the center point and the selected point, respectively. i indexes the different selected points, and m is the total number of selected pixels.
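McCann's Retinex is an iterative, multi-resolution scheme, so a faithful implementation is longer than is useful here. As a rough stand-in that illustrates only the decomposition of I(x) = R(x) · L(x), the sketch below estimates L(x) with a single large Gaussian blur in the log domain; the smoothing scale sigma is an assumed value, and this is not McCann's algorithm itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(I: np.ndarray, sigma: float = 40.0):
    """Estimate the illumination L(x) with a large Gaussian blur and
    recover the reflectance R(x) = I(x) / L(x) in the log domain."""
    eps = 1e-6  # avoids log(0) on dark pixels
    sig = (sigma, sigma, 0) if I.ndim == 3 else sigma
    L = gaussian_filter(I, sigma=sig)
    R = np.exp(np.log(I + eps) - np.log(L + eps))
    return R / R.max(), L  # normalized reflectance, illumination
```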

2.2. Modified DCP

To compute the dark channel, the minimum of the RGB components is first calculated for each pixel and saved in a grayscale image of the same size as the original image. This grayscale image is then processed by a minimum filter.
In order to improve the estimation accuracy of atmospheric light intensity A, an iterative method is introduced to distinguish the RGB channels. The scheme of the Ai is shown in Figure 2.
Ai is specified as:
A_i^{(n+1)} = \frac{1}{2} \left[ A_i^{(n)} + R(x_{n+1}) \right], \quad x_{n+1} \in \tilde{R}, \quad i \in \{R, G, B\}
where R̃ is the set of the brightest 0.2% of points in the dark channel of R(x), taken in descending order of brightness. The value of Ai is obtained by repeatedly averaging the current estimate with the corresponding pixel values in R̃, so each iteration compares the running estimate against the next candidate point. Through this iteration, points whose dark-channel brightness is less prominent are also taken into account.
Through the above method, the atmospheric light intensity Ai corresponding to the RGB channels can be obtained.
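A sketch of the iterative estimate follows, assuming R(x) is the RGB reflectance image and its dark channel has already been computed as described above.

```python
import numpy as np

def estimate_airlight(R: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Iteratively refine the per-channel airlight over the brightest
    0.2% of dark-channel pixels, taken in descending order:
    A^(n+1) = (A^(n) + R(x_{n+1})) / 2."""
    n_top = max(1, int(0.002 * dark.size))
    flat = np.argsort(dark.ravel())[::-1][:n_top]   # brightest first
    ys, xs = np.unravel_index(flat, dark.shape)
    A = R[ys[0], xs[0]].astype(np.float64)          # one value per RGB channel
    for y, x in zip(ys[1:], xs[1:]):
        A = 0.5 * (A + R[y, x])                     # running average update
    return A
```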
Instead of optimizing the transmittance with the soft-matting method [22], guided filtering is employed: the grayscale image of the original image serves as the guidance image, and the guided filter smooths the transmittance map while preserving its edges. The fog-free image J(x) is finally obtained according to Equation (1).
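Guided filtering itself reduces to a handful of box filters (He et al. [23]). A minimal grayscale version is sketched below; the radius and regularization eps are assumed defaults, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 30, eps: float = 1e-3) -> np.ndarray:
    """Smooth `src` (here, the transmittance t(x)) while preserving
    the edges of `guide` (the grayscale guidance image)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gs / (var_g + eps)        # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```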

2.3. Variant Genetic Algorithm (VGA)

The accuracy of the grayscale image as a guidance image affects the transmittance optimization. In this paper, a variant genetic algorithm (VGA) is used to obtain the transfer function. The role of the transfer function is to map the original image to its corresponding high-contrast image. The guidance image Rg(x) for guided filtering is thus obtained. The scheme of the VGA is shown in Figure 3.
The reflectance component R(x) from Retinex is first converted into a grayscale image, which, after enhancement, serves as the guidance image. VGA updates the transfer function by varying the parameter set. The feedback function is also updated in every round for the subsequent update of the transfer function. The fitness function verifies the quality of the current transfer function, and a new parameter set is generated through crossover and mutation to obtain a new transfer function.
The low-contrast image (pixel values ranging from Iin-min to Iin-max) is converted into a high-contrast image (0–255) by the mapping of the transfer function. The generated transfer function must remain monotonically increasing. All points less than Iin-min are set to 0, and points greater than Iin-max are set to 255. The transfer function is generated by moving an exploration point from the lower-left point (Iin-min, 0) to the upper-right point (Iin-max, 255), with three possible directions at each step (up, right, and upper-right); the whole process effectively draws a curve from the bottom left to the top right. The next point is selected by the roulette-wheel technique, and the selection probability P(i) is calculated over the neighborhood points as:
P(i) = \frac{(1 + \tau_i)^{\alpha} \left( 1 + k_i^{\gamma}/10 \right) \eta_i^{\beta}}{\sum_{j \in G(i)} (1 + \tau_j)^{\alpha} \left( 1 + k_j^{\gamma}/10 \right) \eta_j^{\beta}}, \quad i = 1, 2, 3
where G(i) is the set of neighborhood points around the exploration point, and i, with values of 1, 2, and 3, represents the upper, right, and upper-right neighbors, respectively. τi is the magnitude of the feedback function, determined by the last iteration; it controls the corresponding probability, and the larger the feedback value, the greater the probability of moving in that direction. ηi is the heuristic value. ki records the distance the exploration point has traveled in the horizontal or vertical direction. γ, α, and β are constants that can be changed by the VGA; among these, α and β control the weights of the feedback function and the heuristic value, and the combination of γ and ki controls the probabilities of moving up or right.
The purpose of the heuristic value is to keep the transfer function monotonically increasing. The specific settings are η1 = Cup, η2 = Cright, and η3 = 1, with 0 for any other neighbor. The settings of ki are k1 = Iin − Iin-min, k2 = Iout, and k3 = constant, with 0 for any other neighbor. Thus, for P(1) in the upper direction, k1 is the distance the exploration point has moved to the right; similarly, for P(2) in the right direction, k2 is the distance it has moved upward. The parameters α, β, γ, Cup, and Cright are determined by the variant genetic algorithm.
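Under the reconstruction of P(i) above, the roulette-wheel step can be sketched as follows; the array layout and the numeric parameter values in the usage example are purely illustrative.

```python
import numpy as np

def select_direction(tau, k, eta, alpha, beta, gamma, rng):
    """Roulette-wheel choice among the three neighbors
    (0: up, 1: right, 2: upper-right) using the weights
    (1 + tau_i)^alpha * (1 + k_i^gamma / 10) * eta_i^beta."""
    w = (1.0 + tau) ** alpha * (1.0 + k ** gamma / 10.0) * eta ** beta
    return rng.choice(3, p=w / w.sum())

rng = np.random.default_rng(0)
direction = select_direction(np.array([0.5, 0.2, 0.1]),   # feedback tau_i
                             np.array([3.0, 5.0, 1.0]),   # distances k_i
                             np.array([1.2, 0.8, 1.0]),   # heuristics eta_i
                             alpha=1.0, beta=2.0, gamma=1.5, rng=rng)
```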
After selecting 20 exploration points for updating the transfer function, the feedback function is updated as below:
\tau_{ij}(t+1) = (1 - \rho) \, \tau_{ij}(t) + \sum_{l=1}^{20} \Delta \tau_{ij}^{l}(t)
where ρ is the reduction rate of the feedback function, set to 0.4. Δτij^l is the feedback contribution of the l-th exploration point between points i and j, which equals F/(30 × BF), where F is the fitness value of the l-th exploration point and BF is the best fitness value, used to normalize the feedback function. The fitness F is defined as:
F = \sqrt[3]{STD \times ENTROPY \times SOBEL}
where STD and ENTROPY are the global standard deviation and information entropy of the grayscale image enhanced by the transfer function, respectively. The specific expression is as follows:
STD = \sqrt{ \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left( R_g(i,j) - \bar{R}_g \right)^2 }
ENTROPY = - \sum_{i=0}^{255} p_i \log_2 p_i
where R̄g is the average value of Rg over all pixels, and pi is the proportion of pixels whose gray value is i. SOBEL is the average intensity of the gradient images obtained by applying the vertical and horizontal Sobel operators [36]. SOBEL is defined as:
SOBEL = \mathrm{mean}(sobel_v + sobel_h)
where sobel_v and sobel_h are the images obtained by applying the vertical and horizontal Sobel operators, respectively, and mean(·) denotes averaging over all pixels.
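Putting the three terms together, the fitness evaluation for a uint8 grayscale image might look like the sketch below (using the cube-root reading of the fitness formula reconstructed above).

```python
import numpy as np
from scipy.ndimage import sobel

def fitness(gray: np.ndarray) -> float:
    """F = cbrt(STD * ENTROPY * SOBEL) for an enhanced grayscale image."""
    g = gray.astype(np.float64)
    std = g.std()                                      # global standard deviation
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                       # skip empty bins in the log
    entropy = -(p * np.log2(p)).sum()
    grad = np.abs(sobel(g, axis=0)) + np.abs(sobel(g, axis=1))
    return float(np.cbrt(std * entropy * grad.mean()))
```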
In VGA, the reproduction stage is carried out by crossover and mutation, with a population size of 20. Reproduction encodes the parent parameter set (i.e., α, β, γ, Cup, and Cright) as a binary sequence (a chromosome). This paper adopts uniform crossover with a probability of 0.85. Mutation perturbs the code with a probability of 0.05; each mutation changes only one of the 5 parameters in the set and is limited to 10% of the original value.
VGA controls the generation process of the transfer function. In the initial stage, VGA needs crossovers and mutations in each iteration to achieve fast optimization, while in subsequent iterations the numbers of crossovers and mutations should be reduced. Experimentally, setting the number of VGA iterations to 10 proved most appropriate, considering the final effect and processing speed, with the GA participating in iterations 1, 2, 4, 6, and 9.
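A sketch of this reproduction stage is given below. For brevity it operates directly on the real-valued parameter vector (α, β, γ, Cup, Cright) rather than on the paper's binary chromosome encoding, so the crossover and mutation act on parameters instead of bits.

```python
import numpy as np

def reproduce(parent_a: np.ndarray, parent_b: np.ndarray, rng,
              p_cross: float = 0.85, p_mut: float = 0.05) -> np.ndarray:
    """Uniform crossover with probability 0.85; mutation perturbs a
    single parameter by at most 10% with probability 0.05."""
    child = parent_a.copy()
    if rng.random() < p_cross:
        mask = rng.random(child.size) < 0.5   # uniform crossover mask
        child[mask] = parent_b[mask]
    if rng.random() < p_mut:
        i = rng.integers(child.size)          # mutate one of the 5 parameters
        child[i] *= 1.0 + rng.uniform(-0.10, 0.10)
    return child
```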
After the transfer function is obtained, the grayscale image of R(x) can be enhanced to obtain the guidance image Rg(x), and guided filtering can be used to optimize t(x).
The problem of image color distortion is often caused by inaccurate estimation of t(x) [37]. In this paper, the tolerance K is divided by the difference between the pixel value Ri(x) and the atmospheric light intensity Ai to further ensure that the color of the restoration result is not distorted:
J_i(x) = \frac{I(x) - A_i}{\min \left( 1, \; t(x) \times \max \left( K / \left| R_i(x) - A_i \right|, \; 1 \right) \right)} + A_i, \quad i = R, G, B
The tolerance K is a constant between 0 and 1. Ri(x) and Ai are normalized values; K/|Ri(x) − Ai| multiplies the transmittance t(x) to amplify it.
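A per-channel sketch of this tolerance-corrected recovery follows; the value K = 0.25 and the small numerical floors are assumptions added to keep the example self-contained.

```python
import numpy as np

def recover(I: np.ndarray, R: np.ndarray, t: np.ndarray,
            A: np.ndarray, K: float = 0.25) -> np.ndarray:
    """Tolerance-corrected recovery: amplify t(x) by
    max(K / |R_i(x) - A_i|, 1), clip the denominator at 1, then invert
    Equation (1) channel by channel. I, R, A normalized to [0, 1]."""
    J = np.empty_like(I, dtype=np.float64)
    for c in range(3):                         # i = R, G, B
        gain = np.maximum(K / (np.abs(R[..., c] - A[c]) + 1e-6), 1.0)
        denom = np.minimum(1.0, t * gain)
        J[..., c] = (I[..., c] - A[c]) / np.maximum(denom, 1e-6) + A[c]
    return np.clip(J, 0.0, 1.0)
```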

3. Experimental Results and Analysis

In order to verify the effectiveness of the proposed algorithm, the public dataset O-HAZE, with varied haze densities, is selected for comparison. In O-HAZE, the fog is real haze generated by a professional haze machine, and the same visual contents are recorded in both hazy and haze-free conditions under the same lighting. The dehazing methods of Salazar [26], Peng [27], Yan [15], and Qin [19] are cited as comparisons. The results are shown in Figure 4, where DCP is the conventional dark channel prior dehazing algorithm, FDCP is the method of Salazar [26], GDCP is the method of Peng [27], MDCP is the method of Yan [15], FFA is the method of Qin [19], and HDCP is the proposed method.
Group (a) has abundant colors, and the results differ markedly. The restored pictures of DCP and FDCP are too dark. The restored picture of GDCP is overexposed, which leads to the loss of the original colors; meanwhile, the fog in the distant woods in the upper left of the picture is not completely removed. To show the line and color details of the restored images, enlarged views of the color cards placed in group (a) are added at the bottom of the figure. From these, it can be seen that the pictures processed by DCP and FDCP are too dark with low color contrast, falling short of the ideal. The result of GDCP is seriously blurred, with markedly lost details. The result of MDCP is greatly improved, but the fog is not completely removed. The FFA result is relatively complete overall, but the image is a bit dark. The quality of the image restored by the proposed method is significantly improved: the colors are richer and more realistic, the contrast and clarity are better, and the texture details are clear.
In group (b), the white of the chairs and the grey of the ground occupy a large part of the picture. The fog concentration of the original picture is relatively high, which makes the implementation of the dehazing algorithms more difficult. Additionally, the whole picture is blurred, and the textures are notably missing. The result of DCP directly suffers from severe color distortion. The result of FDCP has been improved, but the problems of dark picture brightness and insufficient texture details still exist. The results of GDCP still have the same problem. The brightness is too high, and the fog removal is incomplete. The result of MDCP is more realistic, but the problem of incomplete mist removal still exists. The result of FFA is generally grayish so that the entire photo appears unnatural. The results of the proposed algorithm are better. The fog removal is more complete. The contrast is higher. The picture is more realistic, where the contrast color card on the chair is more clearly visible.
Group (c) mixes chairs, columns, and woods with rich texture details behind them. The colors of the columns and chairs are not uniform, and the details of the woods are missing. The results of DCP are still dark and distorted, with an overall bluish cast, especially on the pillars on both sides. The results of FDCP are similar to DCP, although the color tone of the columns is more realistic and reasonable. GDCP still suffers from excessive brightness and incomplete edge details. The result of MDCP is biased toward a grayscale-like appearance, probably due to its modification of the dark channel prior theory. The images of FFA are improved significantly, but the contrast is slightly insufficient and the detail on the ground is incomplete. The result of the proposed method is closest to reality, with significantly improved contrast and clarity; the colors of the columns and chairs are more uniform and closer to the original image.
In group (d), the concentration of the fog is not uniform, with more fog in the upper middle of the picture; this group also has the richest texture details. The picture of DCP is still dull, and there are black blocks in the grass, as indicated by the red box. The result of FDCP is improved to a certain extent, but the texture details in the grass are still insufficient. The result of GDCP is completely degraded, and no valid information can be obtained. MDCP improves the darkening of the image, but the green and yellow parts of the bushes are missing, and the overall color is lost. In the result of FFA, the fog removal is incomplete, although the texture details are relatively complete. The result of HDCP is most in accord with the real scene: the fog removal is more complete, the texture details are rich, and there are fewer black blocks as interference.
It can be seen that the method proposed in this paper has a wide range of applicability, and it can also play a good role in defogging in the case of complex environments and uneven fog concentration.
In order to further verify the performance of the proposed algorithm and objectively evaluate the enhancement quality, this paper adopts the mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and information entropy (E) of the image as evaluation criteria.
(1) Mean squared error (MSE):
MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i,j) - J(i,j) \right]^2
where I(i,j) and J(i,j) are the original and restored images with sizes of m by n.
(2) Peak signal-to-noise ratio (PSNR):
PSNR = 10 \log_{10} \frac{MAX_I^2}{MSE}
where MAXI is the maximum possible pixel value of the picture. Generally, for uint8 data, the maximum pixel value is 255.
(3) Structural similarity index (SSIM):
SSIM(x, y) = \frac{(2 \mu_x \mu_y + c_1)(2 \sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}
where x and y are the two images; μx and μy are the pixel means of x and y; σx² and σy² are the variances of x and y; σxy is the covariance of x and y; and c1 and c2 are two constants used to maintain stability and avoid division by zero.
(4) Information entropy (E):
E = - \sum_{i=0}^{255} \sum_{j=0}^{255} \sum_{k=1}^{3} p_{ijk} \log_2 p_{ijk}
where i and j run over gray levels, k runs over the three color channels, and pijk is the probability of occurrence of pij in channel k.
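For reference, MSE, PSNR, and the entropy can be computed directly with NumPy as sketched below; SSIM is available off the shelf, e.g., as skimage.metrics.structural_similarity. Reading the entropy formula as a per-channel sum over gray levels is our assumption.

```python
import numpy as np

def mse(I: np.ndarray, J: np.ndarray) -> float:
    """Mean squared error between the reference and restored images."""
    return float(np.mean((I.astype(np.float64) - J.astype(np.float64)) ** 2))

def psnr(I: np.ndarray, J: np.ndarray, max_i: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for uint8 images."""
    return float(10.0 * np.log10(max_i ** 2 / mse(I, J)))

def entropy(img: np.ndarray) -> float:
    """Information entropy summed over the three channels of a uint8 image."""
    e = 0.0
    for c in range(img.shape[2]):
        hist, _ = np.histogram(img[..., c], bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        e += float(-(p * np.log2(p)).sum())
    return e
```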
The comparisons of the different algorithms are as follows. The best-performing indexes are shown in bold.
In Table 1, the proposed method has the smallest MSE, indicating that its results deviate least from the reference haze-free images.
In Table 2, there is no significant difference, either statistically or in numerical value, among DCP, FDCP, and GDCP: the normal DCP and the other two improved DCP algorithms have almost the same PSNRs. The PSNRs of MDCP and FFA are relatively close. The proposed HDCP has the highest PSNR, indicating that its results have the least distortion.
As shown in Table 3, in terms of SSIM, FFA has the best processing results, and the gap between HDCP and FFA is not significant, indicating that the processed images are more similar to the original images.
For the entropy in Table 4, the method in this paper has a clear advantage; the larger the value, the richer the information contained in the restored image.
For generality, this paper processed all the images in the O-HAZE dataset, which contains 45 groups of foggy and fog-free photos. The average MSE, PSNR, SSIM, and entropy are listed in Table 5.
Table 5 shows that the average PSNRs of all 6 methods are relatively close. The average SSIM of HDCP is also in second place, and the gap between the first two algorithms is not huge. Compared with other DCP algorithms, the average MSE of the proposed method decreases by 26.98%. The average SSIM increases by 10.298%. The average entropy increases by 7.780%.
In order to show defogging abilities for different haze densities, the public dataset of NYU2 was then selected as the experimental object. The results are as follows:
In Figure 5, the first row shows images with continuously increasing fog densities relative to the original image; the following rows show the corresponding restored images. In the results of FDCP, as the haze density increases, the fog is not removed cleanly. The results of GDCP are unstable. The results of MDCP are relatively stable, except for the last image. FFA behaves similarly to FDCP, but its objective evaluation indicators deteriorate seriously. The proposed method can still completely remove the influence of fog, and its results remain basically stable as the haze density increases.
For generality, all 1750 photos of the public dataset of NYU2 were processed. Among them, there were 250 individual scenes with 7 levels of haze densities. The evaluation criteria were calculated, and the statistics of the results are as follows.
Table 6 shows that the proposed method in this paper still has a great advantage even with different haze densities. The performances of HDCP are all higher than the values of the compared algorithms. Among them, the average MSE decreases by 49.29%. The average information entropy increases by 3.029%.
In summary, the proposed algorithm performs better at improving the quality of hazy images, with improvements in contrast and sharpness and more abundant details. Moreover, the colors of the enhanced image are closer to reality, with higher color fidelity. Finally, when faced with images of different fog densities, the method exhibits strong stability in terms of both subjective visual quality and objective evaluation indicators.

4. Conclusions

In order to improve the contrast, sharpness, color fidelity, and visual effect of images under hazy weather conditions, this paper proposes a hybrid dark channel prior. (1) The original image was first processed by Retinex in order to remove the interference of the illumination component and make the enhanced image more natural. (2) In order to improve the DCP, the iterative method was employed instead of the conventional method to calculate the atmospheric light intensity. (3) The variant genetic algorithm was employed to enhance the guidance image to optimize the transmittance. (4) Tolerance was introduced to prevent color distortion during the process.
This paper conducts subjective analysis and objective evaluation on different foggy images. The subjective analysis shows that the new algorithm improves image enhancement and detail preservation: sharpness and contrast are significantly improved, and the visual effect is excellent. On the O-HAZE dataset, compared with other DCP algorithms, the average MSE of the proposed method decreases by 26.98%, the average SSIM increases by 10.298%, and the average entropy increases by 7.780%. At the same time, the color fidelity of the enhanced image is high and closer to reality. The proposed algorithm can effectively enhance foggy images, improving their visual effect, visibility, contrast, and clarity, yielding images that are more realistic and natural.

Author Contributions

Conceptualization, L.W. and J.C.; methodology, J.C.; software, J.C.; validation, L.W., J.C., Y.Z. and J.Z.; formal analysis, L.W.; investigation, X.Y.; resources, X.Y.; data curation, L.X.; writing—original draft preparation, J.C.; writing—review and editing, L.W.; visualization, S.C.; supervision, S.C.; project administration, X.Y.; funding acquisition, L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61801429 and the Natural Science Foundation of Zhejiang Province under Grant LY20F010001 and LQ20F050010. This work was also supported by the Fundamental Research Funds of Zhejiang Sci-Tech University under Grant 2021Q030.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Deng, G. A generalized unsharp masking algorithm. IEEE Trans. Image Process. 2011, 20, 1249–1261.
  2. Ngo, D.; Kang, B. Image detail enhancement via constant-time unsharp masking. In Proceedings of the 2019 IEEE 21st Electronics Packaging Technology Conference (EPTC), Singapore, 4–6 December 2019; pp. 743–746.
  3. Ngo, D.; Lee, S.; Kang, B. Nonlinear unsharp masking algorithm. In Proceedings of the 2020 International Conference on Electronics, Information, and Communication (ICEIC), Barcelona, Spain, 19–22 January 2020; pp. 1–6.
  4. Chang, Y.; Jung, C.; Ke, P.; Song, H.; Hwang, J. Automatic contrast-limited adaptive histogram equalization with dual gamma correction. IEEE Access 2018, 6, 11782–11792.
  5. Mustafa, W.A.; Khairunizam, W.; Yazid, H.; Ibrahim, Z.; Shahriman, A.B.; Razlan, Z.M. Image correction based on homomorphic filtering approaches: A study. In Proceedings of the 2018 International Conference on Computational Approach in Smart Systems Design and Applications (ICASSDA), Kuching, Malaysia, 15–17 August 2018; pp. 1–5.
  6. Chen, X.Q.; Yan, X.P.; Chu, X. Visibility estimated in foggy road traffic based on atmospheric scattering model. In Proceedings of the 2010 Second International Conference on Computational Intelligence and Natural Computing, Wuhan, China, 13–14 September 2010; pp. 325–328.
  7. Guo, J.; Yang, J.; Yue, H.; Hou, C.; Li, K. Landsat-8 OLI multispectral image dehazing based on optimized atmospheric scattering model. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10255–10265.
  8. Ju, M.; Ding, C.; Ren, W.; Yang, Y.; Zhang, D.; Guo, Y.J. IDE: Image dehazing and exposure using an enhanced atmospheric scattering model. IEEE Trans. Image Process. 2021, 30, 2180–2192.
  9. Xiao, J.S.; Gao, W.; Zou, B.Y.; Yao, Y.; Zhang, Y.Q. Image dehazing based on sky-constrained dark channel prior. Acta Electron. Sin. 2017, 45, 346–352.
  10. Chen, D.D.; Chen, L.; Zhang, Y.X.; Yan, H. Single-image dehazing algorithm to correct atmosphere veil. J. Image Graph. 2017, 22, 787–796.
  11. Qian, J.; Li, J.; Wang, Y.; Liu, J.; Wang, J.; Zheng, D. Underwater image recovery method based on hyperspectral polarization imaging. Opt. Commun. 2021, 484, 126691.
  12. Dong, T.; Zhao, G.; Wu, J.; Ye, Y.; Shen, Y. Efficient traffic video dehazing using adaptive dark channel prior and spatial-temporal correlations. Sensors 2019, 19, 1593.
  13. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 721–729.
  14. Park, T.H.; Eom, I.K. Sand-dust image enhancement using successive color balance with coincident chromatic histogram. IEEE Access 2021, 9, 19749–19760.
  15. Yan, S.; Zhu, J.; Yun, K.; Wang, Y.; Xu, C. An infrared image dehazing method based on modified dark channel prior. In Proceedings of the 2022 International Conference on Biometrics, Microelectronic Sensors, and Artificial Intelligence, The International Society for Optical Engineering, Guangzhou, China, 9 May 2022; Volume 12252, pp. 1–7.
  16. Han, M.; Lyu, Z.; Qiu, T.; Xu, M. A review on intelligence dehazing and color restoration for underwater images. IEEE Trans. Syst. Man Cybern. 2020, 50, 1820–1832.
  17. Zhang, S.D.; Zhang, J.T.; He, F.Z.; Hou, N. DRDDN: Dense residual and dilated dehazing network. Vis. Comput. 2023, 39, 953–969.
  18. Li, Y.; Liu, Y.; Yan, Q.; Zhang, K. Deep dehazing network with latent ensembling architecture and adversarial learning. IEEE Trans. Image Process. 2021, 30, 1354–1368.
  19. Qin, X.; Wang, Z.L.; Bai, Y.C. FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11908–11915.
  20. He, K.M.; Sun, J.; Tang, X.O. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
  21. Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C. D-HAZY: A dataset to evaluate quantitatively dehazing algorithms. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2226–2230.
  22. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 228–242.
  23. He, K.M.; Sun, J.; Tang, X.O. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
  24. Pang, C.Y.; Ji, X.Q.; Sun, L.N.; Lang, X.L. An improved method of image fast defogging. Acta Photonica Sin. 2013, 42, 872–877.
  25. Zhang, D.Y.; Ju, M.Y.; Wang, X.M. A fast image haze removal algorithm using dark channel prior. Acta Electron. Sin. 2015, 43, 1437–1443.
  26. Salazar-Colores, S.; Moya-Sanchez, E.U.; Ramos-Arreguin, J.M.; Cabal-Yepez, E.; Flores, G.; Cortes, U. Fast single image defogging with robust sky detection. IEEE Access 2020, 8, 149176–149189.
  27. Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868.
  28. Singh, Y.; Goyal, E.R. Haze removal in color images using hybrid dark channel prior and bilateral filter. Int. J. Recent Innov. Trends Comput. Commun. 2014, 2, 4165–4171.
  29. Funt, B.; Ciurea, F.; McCann, J. Retinex in Matlab. In Proceedings of the IS&T/SID Eighth Color Imaging Conference: Color Science, Systems and Applications, Scottsdale, AZ, USA, 7–10 November 2000; pp. 112–121.
  30. Alaoui, N.; Adamou-Mitiche, A.B.H.; Mitiche, L. Effective hybrid genetic algorithm for removing salt and pepper noise. IET Image Process. 2020, 14, 289–296.
  31. Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 867–8678.
  32. Silberman, N.; Hoiem, D.; Kohli, P.; Fergus, R. Indoor segmentation and support inference from RGBD images. In Proceedings of the 12th European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; pp. 746–760.
  33. Hell, J.; Freeman, F. Optics of the Atmosphere (Book Review). Phys. Today 2008, 30, 76.
  34. Lee, Y.; Han, C.; Park, J.; Park, S.; Nguyen, T.Q. Efficient airlight estimation for defogging. In Proceedings of the 2014 International SoC Design Conference (ISOCC), Jeju, Republic of Korea, 3–6 November 2014; pp. 154–155.
  35. Yang, Y.; Wang, Z.W. Haze removal: Push DCP at the edge. IEEE Signal Process. Lett. 2020, 27, 1405–1409.
  36. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008.
  37. Sun, W.; Zhang, W.; Wang, J.T. Fusion algorithm for foggy image enhancement based on transmittance weight factor. In Proceedings of the 2021 4th International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 28–31 May 2021; pp. 419–422.
Figure 1. The hybrid dark channel prior algorithm.
Figure 2. Estimation of atmospheric light intensity Ai.
Figure 3. Transfer function optimization based on VGA.
Figure 4. Comparison of example pictures in the O-HAZE dataset. The processing results are, respectively, from: (Ⅰ): DCP [20], (Ⅱ): FDCP [26], (Ⅲ): GDCP [27], (Ⅳ): MDCP [15], (Ⅴ): FFA [19], and (Ⅵ): HDCP. The last row is the enlarged result of group (a)'s color card.
Figure 5. Each algorithm's results for the different-haze-density comparison groups. The processing results are, respectively, from: (Ⅰ): FDCP [26], (Ⅱ): GDCP [27], (Ⅲ): MDCP [15], (Ⅳ): FFA [19], and (Ⅴ): HDCP.
Table 1. MSE of different algorithms.

Group   DCP      FDCP     GDCP     MDCP     FFA      HDCP
a       253.79   251.86   254.90   211.89   118.87   102.52
b       255.00   254.94   255.00   202.09   167.91   147.05
c       254.92   254.74   254.98   204.36   195.47   155.55
d       254.50   254.54   229.28   196.64   196.83   195.81

Table 2. PSNR of different algorithms.

Group   DCP      FDCP     GDCP     MDCP     FFA      HDCP
a       24.09    24.12    24.07    24.87    27.38    28.02
b       24.07    24.06    24.06    25.08    25.88    26.46
c       24.06    24.07    24.06    25.02    25.22    26.21
d       24.07    24.07    24.53    25.19    25.19    25.21

Table 3. SSIM of different algorithms.

Group   DCP      FDCP     GDCP     MDCP     FFA      HDCP
a       0.6208   0.6534   0.4957   0.6719   0.7780   0.6795
b       0.4630   0.5876   0.3354   0.5447   0.6065   0.5466
c       0.5331   0.5813   0.5442   0.5344   0.6781   0.5881
d       0.5662   0.5710   0.4399   0.5075   0.4675   0.4737

Table 4. Entropy of different algorithms.

Group   DCP      FDCP     GDCP     MDCP     FFA      HDCP
a       6.7481   6.8539   7.6196   7.4707   7.0780   7.6467
b       7.3246   7.2725   6.8576   7.6142   6.9880   7.8542
c       6.8611   6.8523   7.0751   7.4678   7.0364   7.4951
d       7.1376   7.4918   6.8495   7.7187   7.3524   7.7435

Table 5. Comparison of the overall objective evaluation results of different algorithms on the enhancement of the O-HAZE dataset.

Method   Average MSE   Average PSNR   Average SSIM   Average Entropy
DCP      254.81        24.07          0.3508         7.1909
FDCP     252.95        24.10          0.4179         7.0646
GDCP     253.29        24.09          0.4920         7.2142
MDCP     187.34        25.41          0.4648         7.2110
FFA      178.79        25.61          0.4723         7.0621
HDCP     173.14        25.75          0.4758         7.7280

Table 6. Comparison of the overall objective evaluation results of different algorithms on the enhancement of the NYU2 dataset.

Method   Average MSE   Average PSNR   Average SSIM   Average Entropy
DCP      234.36        24.43          0.7026         7.4116
FDCP     230.58        24.50          0.8503         7.4222
GDCP     152.22        26.31          0.6649         7.2812
MDCP     179.09        25.60          0.7251         7.5161
FFA      154.55        26.24          0.7845         7.4289
HDCP     96.43         28.29          0.7780         7.6365