Article

A Single Image Enhancement Technique Using Dark Channel Prior

Cong Wang, Mingli Ding, Yongqiang Zhang and Lina Wang
1 School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
2 Shanghai Institute of Satellite Engineering, Shanghai 200240, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2021, 11(6), 2712; https://doi.org/10.3390/app11062712
Submission received: 16 December 2020 / Revised: 11 March 2021 / Accepted: 15 March 2021 / Published: 18 March 2021
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract: In this paper, we propose a novel single image enhancement technique for defogging using the dark channel prior. Traditional dark channel prior defogging methods suffer from high time complexity, edge effects, and failure of the prior in bright regions. To overcome the first two problems, a four-point weighting algorithm is first proposed to estimate the atmospheric light value accurately, and the dark channel prior is used to estimate a coarse transmittance. The gray-scale version of the input image is then used as the guide image to refine the transmittance, and an atmospheric scattering model is used to restore the fog-free image. To solve the problem that the dark channel prior cannot handle high-brightness areas, a combination of edge detection and the maximum inter-class variance method is used to segment the image into sky and non-sky regions. Finally, the improved defogging method is applied to the non-sky region, and an enhancement algorithm based on sequential decomposition is applied to the sky region. Extensive experiments show that the improved algorithm not only reduces the time complexity and effectively mitigates the edge effect, but also solves the failure of the dark channel prior.

1. Introduction

In foggy and hazy conditions, the propagation of light is affected by scattering from suspended particles [1], which attenuates features such as the contrast and color of outdoor scenes captured by imaging equipment. As a result, image quality is severely degraded and visibility is reduced. An image enhancement technique for processing foggy images is therefore a very practical requirement.
The techniques of removing fog from degraded images have a wide range of applications [2,3,4]. For this reason, many scholars have conducted long-term theoretical research and analysis on this direction, and many defogging algorithms have been proposed. From the perspective of image processing, existing mainstream defogging algorithms can be mainly divided into two categories: image enhancement technology-based methods and image restoration technology-based methods.
Image enhancement algorithms ignore the physical causes of degradation in foggy images. They only exploit the low brightness and low contrast of foggy images and directly enhance the information of interest. Common image enhancement algorithms include global or local histogram equalization [5] and defogging algorithms based on Retinex theory [6]. Global histogram equalization suffers from poor detail enhancement and color distortion. Local histogram equalization overcomes these shortcomings, but it is computationally expensive and therefore unsuitable for real-time processing systems. Retinex-based defogging works well for thin fog, but when the fog is dense, local areas of the image are easily over-enhanced and the overall visual effect after defogging looks unnatural.
Defogging methods based on image restoration can overcome the above disadvantages of enhancement-based methods. Their authors studied the formation of foggy images in depth and proposed a series of defogging algorithms based on atmospheric scattering models. According to the additional information required about the imaging scene, image restoration methods can be roughly divided into two types. Methods in the first category collect images of the same scene under different weather conditions and use them as inputs [7]; restoration then relies on the differences between the input images. These methods achieve a satisfactory defogging effect, but acquiring images under different weather conditions limits their real-time applicability. Methods in the second category take a single foggy image as input and use prior knowledge or assumptions to perform the restoration. The work in [8] proposes a defogging algorithm based on a prior condition that is simple and fast, but local areas of the restored images are over-saturated. The method in [9] estimates the transmittance by assuming that it is locally uncorrelated with the surface shading in the scene; it works well on thin fog, but distortion occurs in dense fog areas. Tarel [10] assumes that the atmospheric dissipation function tends to its maximum value within a local area and estimates it with a median filter, but the median filter removes fog poorly in areas with abrupt depth changes.
To overcome the shortcomings of the above defogging algorithms, He [11] proposed a defogging algorithm based on the dark channel prior. It estimates the transmittance from the dark channel prior and uses soft matting to refine the transmittance, achieving a good defogging effect for most natural images. However, soft matting greatly increases the time complexity, and the algorithm is prone to distortion when processing sky areas. To address these shortcomings, many researchers proposed improved methods based on the dark channel prior. For example, to speed up defogging, bilateral filtering has been used instead of soft matting to refine the transmittance; for defogging of the sky area, a tolerance mechanism [12] has been proposed to correct the transmittance. Although these methods make up for the shortcomings of He's algorithm to some extent, there is still room for improvement.
Specifically, traditional dark channel prior defogging methods suffer from high time complexity, edge effects, and failure of the dark channel prior. In this paper, we propose an improved image defogging method based on the dark channel prior. For the first two disadvantages, the atmospheric light is accurately estimated with a four-point weighting algorithm, guided filtering is then used to refine the coarse transmittance obtained from the dark channel prior, and finally a fog-free image is restored through the atmospheric scattering model. Experimental results show that the improved algorithm shortens the processing time and reduces the edge effect. To address the failure of the dark channel prior in large areas with high brightness, the sky and non-sky areas are first separated by edge detection; the sky areas are then enhanced with an algorithm based on sequential decomposition, while the non-sky areas are handled with the optimized dark channel prior method. The resulting defogging method effectively solves the problem of the failure of the dark channel prior.

2. Dark Channel Prior Algorithm

2.1. Atmospheric Scattering Model

McCartney [1] simplified the atmospheric scattering model based on Mie scattering theory and obtained the physical model of foggy images. The model consists of two parts: the first part is called the incident light attenuation model, and the second part is called the atmospheric light enhancement model. As light from an object travels to the observation device, atmospheric light from other directions also enters the device, so the atmospheric light received by the device is enhanced.
In the incident light attenuation model, the intensity of light reaching the observation device decreases exponentially as the depth of the scene increases, and the attenuation term of the incident light can be formulated as:
E_D(d, λ) = E_0(λ) e^{−β(λ)d}        (1)
where E_D(d, λ) represents the intensity of the object light after traveling a distance d from the target, E_0(λ) denotes the intensity of the object light reflected by the target, β(λ) represents the atmospheric scattering coefficient, and d denotes the scene depth.
In the atmospheric light enhancement model, the intensity of atmospheric light reaching the observation device increases with the scene depth, and the atmospheric light enhancement term can be formulated as:
E_A(d, λ) = E_∞(λ)(1 − e^{−β(λ)d})        (2)
where E_A(d, λ) represents the intensity of the atmospheric light at distance d from the target, E_∞(λ) denotes the total intensity of the unscattered atmospheric light, β(λ) represents the atmospheric scattering coefficient, and d denotes the scene depth.
From the above analyses, the physical model of foggy images can be formulated as:
E(d, λ) = E_D(d, λ) + E_A(d, λ)        (3)
Substituting Equations (1) and (2) into Equation (3), we can get:
E(d, λ) = E_0(λ) e^{−β(λ)d} + E_∞(λ)(1 − e^{−β(λ)d})        (4)
If I(x) = E(d, λ), J(x) = E_0(λ), t(x) = e^{−β(λ)d}, and A = E_∞(λ), the model of foggy images can be simplified as:
I(x) = J(x) t(x) + A(1 − t(x))        (5)
In the model, x represents the two-dimensional coordinates of a pixel, and I(x) denotes the intensity of the degraded (foggy) image, which is the input of image defogging algorithms. J(x) denotes the intensity of the image before degradation and is the output of image defogging algorithms. A represents the atmospheric light intensity, and t(x) denotes the transmittance, i.e., the proportion of reflected light that reaches the image acquisition device without being scattered on the way from the surface of objects. Therefore, to obtain the defogged image J(x), both t(x) and A need to be estimated from I(x).
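To make the roles of t(x) and A concrete, the following minimal sketch (our own illustration, not the authors' code) inverts Equation (5) with NumPy, assuming the foggy image I, a per-pixel transmittance map t, and the atmospheric light A have already been estimated; the lower bound t0 on the transmittance is a common safeguard rather than part of the model.

```python
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Invert Equation (5), I = J*t + A*(1 - t), for the fog-free image J.
    I: (H, W, 3) float array in [0, 1]; t: (H, W) transmittance; A: (3,) atmospheric light."""
    t = np.clip(t, t0, 1.0)[..., None]   # clamp t from below to avoid amplifying noise
    J = (I - A) / t + A                  # rearranged atmospheric scattering model
    return np.clip(J, 0.0, 1.0)
```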

2.2. Dark Channel Prior Theory

Fog usually appears in outdoor and urban scenes, so He [11] selected outdoor and urban scenes from a database, randomly chose 5000 images, and manually removed the sky areas. It should also be noted that He only considered daytime scenes. The images were resized to 500 × 500 pixels, and a 15 × 15 window was used to calculate the dark channel values.
For images excluding sky areas, the dark channel prior states that, in most local patches, at least one of the three color channels R, G, and B has a very low intensity, even close to zero. For an input image J, the dark channel J^{dark} is defined as:
J^{dark}(x) = min_{c∈{r,g,b}} ( min_{y∈ω(x)} J^c(y) )        (6)
where J^c(y) is the intensity of color channel c at location y, and x is the center of the window ω(x).
In Equation (6), after two minimization operations, the dark channel value is obtained. The first step is to store the minimum value of the three color channels R, G, and B into a gray-scale image, which has the same size as the original image. In the second step, the window ω ( x ) is used to perform the minimum filtering operation on the gray-scale image obtained in the first step. The radius of the filter is determined by the window size.
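As an illustration of these two minimum operations, here is a brief sketch (our own, not from the paper) using SciPy's minimum filter; the 15 × 15 window matches the setting described above.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(J, window=15):
    """Dark channel of Equation (6): per-pixel minimum over the R, G, B channels,
    followed by a window x window minimum filter."""
    min_rgb = J.min(axis=2)                      # step 1: channel-wise minimum
    return minimum_filter(min_rgb, size=window)  # step 2: local minimum filtering
```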
Three factors produce low dark channel values: (1) shadows, such as the shadows of cars, buildings, and windows in urban images, or the shadows of large trees and leaves; (2) colorful objects, such as green grass, forests, leaves, red flowers, and blue water, where the color is concentrated in one channel and the values of the other channels are relatively low; (3) black objects, such as black stones and tree trunks. In the real world, most objects are colorful or contain shadowed areas, which ensures the generality of the dark channel prior.

2.3. Disadvantages of Defogging Using Dark Channel Prior

In the soft matting algorithm, the matting model can be formulated as:
I = F a + B(1 − a)        (7)
where F and B represent the foreground and background colors, respectively, and a is the transparency (alpha) of the foreground.
Let t̃(x) denote the coarse transmittance estimated by the dark channel prior and t(x) the refined transmittance; writing them in vector form as t̃ and t, the objective function of soft matting can be formulated as:
E(t) = t^T L t + λ (t − t̃)^T (t − t̃)        (8)
where t^T L t and λ(t − t̃)^T(t − t̃) are the smoothing term and the data term, respectively, λ is a weighting factor, L is the matting Laplacian matrix, and its element (i, j) is defined as:
L(i, j) = Σ_{k | (i,j) ∈ ω_k} ( θ_{ij} − (1/|ω_k|) ( 1 + (I_i − u_k)^T (Σ_k + (γ/|ω_k|) U_3)^{−1} (I_j − u_k) ) )        (9)
where u_k and Σ_k are the mean and covariance matrix of the colors in the window ω_k, I_i and I_j denote the colors of the input image I at pixels i and j, respectively, θ_{ij} is the Kronecker delta, U_3 is a 3 × 3 identity matrix, γ is a regularization parameter, and |ω_k| denotes the number of pixels in the window ω_k.
We can get the optimal t by solving the sparse linear system of Equation (10):
(L + λU) t = λ t̃        (10)
where U is an identity matrix of the same size as L. Using this matting algorithm to refine the estimated transmittance achieves satisfactory results, but it requires solving a very large sparse linear system, which takes up more than 95% of the running time of the defogging algorithm. Since soft matting results in high time complexity, we use guided filtering instead of soft matting to refine the transmittance, which not only preserves edge information but also shortens the defogging time.
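For reference, a sketch of the refinement step in Equation (10), assuming the matting Laplacian L has already been assembled as a SciPy sparse matrix; solving this N × N system, with N the number of pixels, is precisely what dominates the running time and motivates replacing soft matting with guided filtering. The weight lam here is illustrative, not a value prescribed by the paper.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def refine_transmittance_soft_matting(L, t_coarse, lam=1e-4):
    """Solve (L + lam*U) t = lam * t_coarse (Equation (10)) for the refined transmittance.
    L: (N, N) sparse matting Laplacian; t_coarse: coarse transmittance flattened to shape (N,)."""
    N = t_coarse.size
    U = sp.identity(N, format="csr")
    t = spla.spsolve((L + lam * U).tocsr(), lam * t_coarse)  # very costly for megapixel images
    return t
```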
Dark channel prior defogging algorithms use a minimum filtering strategy to obtain the transmittance. A local block ω(x) is used as a template for point-by-point calculation, and the minimum value of the R, G, and B channels within the block is taken as the dark channel value of the central pixel; the transmittance is then estimated from the dark channel prior theory. Because of this local filtering, the transmittance of each pixel is estimated from the pixels in its neighborhood rather than being its true value. Figure 1 shows local filtering for a smooth depth of field and for depth-changing edges. In the figure, the 3 × 3 local block is the filter window ω(x), the dark box is its central pixel, region A is a distant area with low pixel intensity, and region B is a near area with high pixel intensity.
As shown in Figure 1a (left side), when the filter window ω(x) covers a scene with a smooth depth of field (the highlighted area of the close scene B in the figure), the dark channel value of the central pixel can be obtained accurately through minimum filtering; the transmittance of each pixel in the window takes approximately the same value, and the desired defogging effect can be achieved. As shown in Figure 1b (right side), when the depth within the filter window ω(x) changes (at the boundary of regions A and B), the dark channel value of the central pixel obtained by minimum filtering becomes small, and the transmittance values of the pixels in the window are no longer similar. A dark transition zone (region C in Figure 1b) then appears between regions A and B after filtering, which blurs depth-changing edges in the defogged images and causes obvious edge effects. To solve these problems, we first use a four-point weighting algorithm to accurately estimate the atmospheric light value; with an accurate atmospheric light value, the dark channel prior yields an accurate transmittance. At the same time, we use the gray-scale image of the input image as the guide image, which better preserves image edges. Thus, the edge effect of the defogged images is reduced.
Furthermore, the dark channel prior generally fails in areas where the scene intensity is close to the atmospheric light intensity, such as gray and white scenes, the sky, reflective water surfaces, and other large bright areas. The true transmittance of such areas is relatively high, and their appearance is essentially the same in foggy and fog-free weather. However, because such areas contain no low pixel values, their dark channel values are very high, and the transmittance estimated from the transmittance formula is very small, far from the true value. As shown in Figure 2a (left side), the color of large regions such as the sky in foggy images is similar to the atmospheric light. As shown in Figure 2b (right side), after defogging based on the dark channel prior, the fog in the sky area is not clearly removed and the visual effect is poor. To address this failure of the dark channel prior in large bright areas, we first use a combination of edge detection and the maximum inter-class variance method to separate the sky and non-sky regions, and then apply an enhancement algorithm based on sequential decomposition to the sky area, so that the fog in large bright areas can be removed thoroughly.

3. Improved Image Defogging Method Based on Dark Channel Prior

3.1. Improvement of High Time Complexity and Edge Effects of Dark Channel Prior Algorithm

To solve the problems of high time complexity and edge effects in traditional dark channel prior algorithms, we propose an improved method. First, a four-point weighting algorithm is used to estimate the atmospheric light value accurately. Then, the coarse transmittance is estimated using the dark channel prior. To avoid edge effects and preserve edges after defogging, a guided filter is introduced to refine the coarse transmittance map. Finally, the fog-free image is restored by the atmospheric scattering model. The detailed improvements are described below.

3.1.1. Atmospheric Light Value Estimation Based on Four-Point Weighting

In He's algorithm [11], the atmospheric light value is estimated from the brightest 0.1% of pixels in the global dark channel image, which gives poor results when the image contains a large bright area. To estimate the atmospheric light value A accurately, this paper proposes a four-point weighting algorithm to find the optimal atmospheric light value.
The atmospheric light value should be taken from the area of the foggy image with the highest fog concentration, generally defined as a rectangular area. Within such an area, the higher the fog concentration, the higher the pixel values and the smaller the differences between pixels, so the difference between the mean and the standard deviation of the pixels becomes larger.
To estimate the atmospheric light value A of an input image, as shown in Algorithm 1, we divide the input image I into four regions i_n (n = 1, 2, 3, 4) of the same size, and the difference S(n) of each region is calculated by Equation (11):
S(n) = M(n) − D(n)        (11)
where M(n) and D(n) represent the mean and the standard deviation of region n, respectively, for n = 1, 2, 3, 4.
We select the region with the largest difference in four regions. Then, the selected region is used as the new input image to repeat the above processing until the preset requirements P r are met, and the final selected region is defined as Y ( x ) . In this paper, the preset requirements P r are defined as: (1) the size of the segmented area is less than 1/16 of the original input image; (2) there are two regions with the largest difference and the average of two regions is the same; (3) there are three regions with the maximum difference in four regions.
To obtain the atmospheric light value A in the region Y(x), we first calculate the average pixel value M(n) of Y(x). The pixels in Y(x) are then divided into two parts: pixels larger than M(n) are called bright pixels and pixels smaller than M(n) are called dark pixels, and their numbers are denoted N_b and N_d, respectively. Finally, we find the maximum dark channel values A_b and A_d in the bright and dark parts, respectively, and denote the pixel values at which they are obtained by Y(n_1) and Y(n_2). The atmospheric light value A is calculated as follows:
A = W_b Y(n_1) + W_d Y(n_2),  W_b = N_b / L_W,  W_d = N_d / L_W,  W_b + W_d = 1        (12)
where L_W denotes the number of pixels in the region Y(x), so that A is estimated by weighting within Y(x). When N_b > N_d, Y(n_1) dominates the atmospheric light value A; when N_b < N_d, Y(n_2) dominates. Because Y(n_1) and Y(n_2) jointly influence A, the obtained atmospheric light value is reasonable, and using it to remove fog produces a defogged image that is consistent with the human visual system.
Algorithm 1 Four-Point Weighting Algorithm
Require: input image I, preset requirements Pr
  while !Pr do
      i_n = divide(I), n = 1, 2, 3, 4
      for n = 1 to 4 do
          S(n) = M(n) − D(n)
      end for
      Sel = max(S(n))
      Sel_index = index(max(S(n)))
      I = i(Sel_index)
  end while
  Y(x) = I
  M(n) = mean(Y(x))
  N_b = num(pixel > M(n))
  N_d = num(pixel < M(n))
  W_b = N_b / L_W
  W_d = N_d / L_W
  A = W_b Y(n_1) + W_d Y(n_2)
Output: atmospheric light value A
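The sketch below shows one possible reading of Algorithm 1 in Python (not the authors' implementation): the quadrant with the largest mean-minus-standard-deviation score is kept recursively, and the atmospheric light is the count-weighted mix of the input pixel values at the dark channel maxima of the bright and dark halves of the final region; the helper names and the exact stopping test are our assumptions.

```python
import numpy as np

def quadrants(arr):
    """Split an array into its four equal quadrants."""
    h, w = arr.shape[:2]
    return [arr[:h // 2, :w // 2], arr[:h // 2, w // 2:],
            arr[h // 2:, :w // 2], arr[h // 2:, w // 2:]]

def four_point_atmospheric_light(I, dark, min_frac=1.0 / 16):
    """Estimate A by four-point weighting (our reading of Algorithm 1).
    I: (H, W, 3) foggy image; dark: (H, W) dark channel of I; both floats in [0, 1]."""
    gray = I.mean(axis=2)
    region_I, region_d, region_g = I, dark, gray
    target = min_frac * gray.size
    while region_g.size > target:                             # preset requirement on region size
        parts = list(zip(quadrants(region_I), quadrants(region_d), quadrants(region_g)))
        scores = [g.mean() - g.std() for _, _, g in parts]    # S(n) = M(n) - D(n)
        region_I, region_d, region_g = parts[int(np.argmax(scores))]
    m = region_g.mean()
    bright = region_g.ravel() > m
    Nb, Nd = int(bright.sum()), int((~bright).sum())
    pixels = region_I.reshape(-1, 3)
    d = region_d.ravel()
    y1 = pixels[np.where(bright, d, -np.inf).argmax()]    # pixel at dark channel maximum, bright half
    y2 = pixels[np.where(~bright, d, -np.inf).argmax()]   # pixel at dark channel maximum, dark half
    return (Nb * y1 + Nd * y2) / float(Nb + Nd)           # A = W_b*Y(n1) + W_d*Y(n2)
```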

3.1.2. Refinement of Coarse Transmittance Based on Guided Filtering

After obtaining the coarse transmittance from the dark channel prior, we use guided filtering instead of soft matting to refine it. Guided filtering is an edge-preserving smoothing filter based on a local linear model, and it can be used to repair the coarse transmittance obtained from the dark channel prior. The idea is to filter the input image under the guidance of a guide image: the output retains the overall characteristics of the input image while following the detailed variations of the guide image. Compared with soft matting, guided filtering greatly improves the efficiency of the algorithm without affecting the visual effect.
Guided filtering assumes a local linear relationship between the guide image I and the output image q, that is:
q_i = a_k I_i + b_k,  ∀ i ∈ w_k        (13)
where w_k is a square window centered at pixel k, and a_k and b_k are linear coefficients assumed constant within the window. Equation (13) is a linear transformation of the guide image I centered on k, which ensures that the output image has an edge wherever the guide image has one. Linear regression is used to find the coefficients a_k and b_k that minimize the difference between the output and the input image p within the window:
E(a_k, b_k) = Σ_{i∈w_k} ( (a_k I_i + b_k − p_i)^2 + θ a_k^2 )        (14)
where θ is a regularization parameter that prevents a_k from becoming too large. The window coefficients can be obtained by linear regression analysis [14]:
a_k = ( (1/|w|) Σ_{i∈w_k} I_i p_i − u_k p̄_k ) / (σ_k^2 + θ)        (15)
b_k = p̄_k − a_k u_k        (16)
In Equations (15) and (16), |w| is the number of pixels in the window w_k, u_k and σ_k^2 are the mean and variance of the guide image in the window w_k, and p̄_k is the mean of the input image in the window w_k. Because a pixel i is contained in several windows w_k with different values of a_k and b_k, the coefficients are averaged over all windows that contain i, giving the final expression of the guided filter:
q_i = (1/|w|) Σ_{k: i∈w_k} (a_k I_i + b_k) = ā_i I_i + b̄_i        (17)
Based on the above analysis, guided filtering is used to refine the transmittance as follows: the coarse transmittance map estimated by the dark channel prior is used as the input image p, and, because the guide image must provide edge information reflecting depth changes, the gray-scale image of the original image is used as the guide image I.
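The guided filter of Equations (13)–(17) reduces to a handful of box filters. The sketch below is our own, using SciPy's uniform filter as the box filter, with the gray-scale original as the guide and the coarse transmittance as the input; the window radius and θ value are illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=30, theta=1e-3):
    """Gray-scale guided filter: I is the guide (gray-scale original image),
    p is the input to be refined (the coarse transmittance); both float arrays in [0, 1]."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)   # mean over the window w_k
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + theta)          # Equation (15)
    b = mean_p - a * mean_I               # Equation (16)
    return box(a) * I + box(b)            # Equation (17): averaged coefficients

# e.g., t_refined = guided_filter(gray_image, t_coarse)
```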
After obtaining the improved atmospheric light value and transmittance, the atmospheric scattering model is used as a physical model for foggy image degradation, and then this model is used to obtain a defogged image. A comparative analysis is performed by using an unimproved dark channel prior algorithm and the improved method proposed in this paper, as shown in Figure 3.
As can be seen from Figure 3a, after defogging with the original dark channel prior, there are obvious edge effects around the trees, the car outline, the signboard, and the road side, as well as an obvious color cast. After defogging with the improved method of this paper, as shown in Figure 3b, the edge effects around the trees, cars, signs, and roadsides are greatly reduced and the colors of the image look natural.

3.2. Treatment of Dark Channel Prior Failure Areas

The sky region does not conform to the dark channel prior, so its transmittance is mis-estimated, causing halo distortion in sky regions. As a result, various defogging algorithms based on sky region segmentation have emerged. For example, Wang [15] uses a gradient threshold and a region-growing algorithm to obtain connected regions, and then identifies the sky region from the pixel brightness threshold of the connected region. However, this approach suffers from missed detections of the sky region, and computing the connected regions is time-consuming. To solve these problems, we propose the following improved method. First, an algorithm combining edge detection and the maximum inter-class variance method is used to separate the sky and non-sky areas. Then, the enhancement algorithm based on sequential decomposition is used for the sky area, and the optimized method of Section 3.1 is used for the non-sky area. Finally, the enhanced sky area and the defogged non-sky area are fused to obtain a clear defogged image.

3.2.1. Sky Region Segmentation Based on Edge Detection and Maximum Inter-Class Variance

In this section, we propose a method based on edge detection and the maximum inter-class variance criterion to optimize the sky region segmentation. The specific steps are as follows: (1) convert the RGB image into a gray-scale image so as to retain more edge information; (2) use the Sobel operator to compute the gradient of the gray-scale image; (3) classify pixels according to a gradient threshold and a brightness threshold. The gradient threshold in this paper is set to 0.83, and the brightness threshold is obtained by the maximum inter-class variance (Otsu) method. A segmentation example is shown in Figure 4.
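A compact sketch of this segmentation step (our own reading, not released code) is given below: the Sobel gradient magnitude is thresholded and combined with an Otsu brightness threshold. How the two thresholds are combined, and the intensity scale on which the 0.83 gradient threshold applies, are assumptions on our part.

```python
import numpy as np
from scipy.ndimage import sobel
from skimage.filters import threshold_otsu   # maximum inter-class variance

def segment_sky(rgb, grad_thresh=0.83):
    """Rough sky mask: pixels with a low Sobel gradient magnitude whose brightness
    exceeds the Otsu threshold of the gray-scale image."""
    gray = rgb.mean(axis=2)
    grad = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))   # gradient magnitude
    bright = gray > threshold_otsu(gray)
    return (grad < grad_thresh) & bright
```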

3.2.2. Sky Enhancement Algorithm Based on Sequence Decomposition

In this section, a sky enhancement algorithm based on sequential decomposition is applied to the sky area. The algorithm decomposes the image according to the Retinex model in a continuous sequence, iteratively estimating a smooth illumination layer and a reflectance layer. After the illumination and reflectance are obtained, the illumination layer is adjusted to produce the enhanced result. The specific steps are as follows:
Step 1: Estimate the initial illumination L̂: convert the RGB image to the YUV color space and set the initial illumination L̂ to the Y channel of the input image.
Step 2: Use Equation (18) to estimate the optimal illumination L:
argmin_L ||L − L̂||_F^2 + a ||Θ L||_1        (18)
where ||L − L̂||_F^2 enforces fidelity between the initial illumination L̂ and the optimal illumination L, and Θ represents the first-order differential operator.
Step 3: Calculate the correlation weight matrix W of the input image and the adjusted gradient G of the input image:
W = 1 / (|Θ S| + ω)        (19)
where S is the observed input image and ω is a threshold used to suppress small gradients.
Step 4: Use Equation (20) to estimate the optimal reflectance R:
argmin_R ||R − S/L||_F^2 + β ||W ∘ Θ R||_F^2 + ω ||Θ R − G||_F^2        (20)
where ||R − S/L||_F^2 enforces fidelity between R and S/L, ||W ∘ Θ R||_F^2 imposes spatial smoothness on the enhanced reflectance R, and ||Θ R − G||_F^2 keeps the gradient of the reflectance R close to the adjusted gradient G of the observed image S.
Step 5: Use Equation (21) to obtain the enhanced image:
S′ = R ∘ L^{1/r}        (21)
where r is the gamma-correction coefficient applied to the illumination layer.
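To make the data flow of Steps 1–5 visible, here is a heavily simplified stand-in (ours, not the authors' solver): the optimizations of Equations (18) and (20) are replaced by a Gaussian smoothing and a plain division, and the parameter values r, sigma, and eps are placeholders; only the overall decomposition-then-gamma-adjustment structure is retained.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_sky(rgb, r=2.2, sigma=5.0, eps=1e-3):
    """Simplified sequential-decomposition-style enhancement of a sky region.
    rgb: (H, W, 3) float array in [0, 1]."""
    # Step 1: initial illumination = luminance (Y channel of YUV)
    L0 = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Step 2 (surrogate): smooth illumination instead of solving Equation (18)
    L = np.clip(gaussian_filter(L0, sigma), eps, 1.0)
    # Step 4 (surrogate): reflectance by direct division instead of Equation (20)
    R = np.clip(rgb / L[..., None], 0.0, 1.0)
    # Step 5: recombine with the gamma-adjusted illumination, as in Equation (21)
    return np.clip(R * L[..., None] ** (1.0 / r), 0.0, 1.0)
```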
The improved algorithm is compared with the sky defogging algorithm based on a cost function [16], the sky defogging algorithm based on a tolerance mechanism [12], and the defogging algorithm based on sky segmentation [15]; Figure 5 shows the comparison results.

4. Experimental Results and Analysis

We verify the effectiveness and performance of the improved algorithm through experiments, analyzing both the subjective visual quality and objective quantitative results of the defogged images. Following [11], 5000 images captured in foggy outdoor and urban scenes are used in our experiments. All experiments are run in Matlab (2017a) on a 3.20 GHz Intel Core i5-4460 CPU with 4 GB of memory under 64-bit Windows 10. In Equation (18), we set a = 0.001; in Equation (20), we set β = 0.007 and ω = 0.016. Under normal circumstances, these parameter settings give satisfactory defogging results.

4.1. Subjective Visual Evaluation

In order to verify that the improved algorithm proposed in this paper has greatly improved the edge effect, an experimental simulation is performed. The obtained defogged restoration results are compared with defogged results of He’s algorithm. Figure 6 illustrates the defogged images obtained by two defogging algorithms in different foggy scenes.
The first and second images in Figure 6b are the defogged images obtained with He's algorithm [11]. The overall color of the images shows a large chromatic aberration, and obvious edge effects appear around the tree outlines, the cars, the signboards, and the roads. This is mainly because the atmospheric light value in He's algorithm is not estimated appropriately: when the image contains a large white object, the method incorrectly estimates the atmospheric light value, which leads to an inaccurate transmittance estimation and eventually to edge effects.
The first and second images in Figure 6c are the defogged images obtained with our improved method. The defogged images not only have very natural overall colors, but also show greatly reduced edge effects around the trees, cars, signs, and roads. This is mainly because our improved method estimates the atmospheric light value with the four-point weighting algorithm, which avoids an inaccurate estimation of the atmospheric light value.
In summary, the improved algorithm subjectively reduces the edge effect. The experiments also verify that, in large bright areas where the dark channel prior fails, the improved method achieves better defogging results than He's method. Comparing our defogged images with those of current algorithms shows that our method performs better in large bright areas than the defogging method based on a cost function [16], the defogging method based on a tolerance mechanism [12], and the defogging method based on sky segmentation [15]. Figure 7 shows the defogged images of the four algorithms.
The two scenes in Figure 7 contain a large sky area and a large reflective water surface, respectively. The first and second images in Figure 7b are the defogged images obtained with the defogging algorithm based on the cost function [16]. The defogged images become darker and contain less detailed information. This is mainly because the cost function-based method does not emphasize the refinement of the transmittance.
The first and second images in Figure 7c are the defogged images obtained with the defogging algorithm based on the tolerance mechanism [12]. The fog in the sky area of the first image is still not removed, and the water surface and the lake in the second image show large distortions. This is mainly because the tolerance-based algorithm essentially relies on a controllable parameter to distinguish the sky area from the non-sky area.
The first and second images in Figure 7d are the defogged images obtained with the sky segmentation algorithm [15]. Most of the fog has been removed, but the thick fog remains. This is mainly because the algorithm segments the sky and non-sky regions and then processes the transmittance of the two parts differently, setting the transmittance of the sky region to a constant. This makes the transmittance estimation of the sky area inaccurate and leads to unsatisfactory defogging in the sky area.
The first and second images in Figure 7e are the defogged images obtained with our improved method. In the first image, not only is the dense fog removed, but the small villages next to the mountains also become visible; in the second image, the fog on the reflective water surface is removed. This is mainly because, for large bright areas, we do not use the dark channel prior but a new procedure: the sky segmentation algorithm first separates the sky region from the non-sky region, the optimized dark channel prior algorithm is then applied to the non-sky region, and the enhancement algorithm based on sequential decomposition is applied to the sky region. Our method does not perform denoising and sky enhancement separately but processes them at the same time: during the sequential decomposition, each component is spatially smoothed and the weighting matrix is used to suppress noise.
Based on the above subjective analysis, the sky enhancement algorithm based on sequential decomposition effectively overcomes the failure of defogging in large areas with high brightness. Our method not only removes the fog subjectively, but also enriches the details of the foggy image.

4.2. Quantitative Comparisons

In order to fully evaluate the defogging effect, PSNR and SSIM are used to quantify the defogged results of different algorithms. Taking Figure 6 and Figure 7 as examples, the evaluation results are shown in Table 1 and Table 2, respectively. Moreover, we also report the evaluation results on the test set in Table 3.
Peak Signal-to-Noise Ratio (PSNR) measures image quality by computing the pixel-wise error between the image to be evaluated and a reference image. The larger the PSNR value, the smaller the distortion between the two images and the better the image quality. PSNR is defined as:
PSNR = 10 log_{10} ( MAX_I^2 / MSE )        (22)
where MAX_I denotes the maximum possible pixel value of the image I and MSE is the mean square error between the two images.
Structural SIMilarity (SSIM) is an objective image quality criterion that matches the characteristics of the human visual system; the larger the SSIM value, the better the result reflects subjective perception. SSIM is defined as:
SSIM(x, y) = ((2 u_x u_y + c_1)(2 δ_{xy} + c_2)) / ((u_x^2 + u_y^2 + c_1)(δ_x^2 + δ_y^2 + c_2))        (23)
where x and y denote the two images being compared, u_x and u_y are their means, δ_x^2 and δ_y^2 are their variances, δ_{xy} is their covariance, and c_1 and c_2 are constants.
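Both metrics can be computed directly with scikit-image, as in the sketch below; `channel_axis` requires a recent scikit-image release (older versions use `multichannel=True`), and fog-free reference images are assumed to be available.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(defogged, reference):
    """PSNR (Equation (22)) and SSIM (Equation (23)) of a defogged image against its
    fog-free reference; both (H, W, 3) float arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, defogged, data_range=1.0)
    ssim = structural_similarity(reference, defogged, channel_axis=2, data_range=1.0)
    return psnr, ssim
```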
As can be seen from Table 1, for scenario 1, the SSIM value obtained by our algorithm is 0.19 higher than that of He's defogging algorithm, which means the defogged image obtained by our algorithm better reflects subjective perception. The PSNR value obtained by our algorithm is 4.50 higher than that of He's algorithm, i.e., the defogged image obtained by the improved algorithm has smaller pixel error and better image quality. The running time of our algorithm is 0.3 s shorter than that of He's algorithm. He's algorithm refines the transmittance by soft matting; although this achieves satisfactory results, it involves solving very large linear systems, resulting in high time complexity. In this paper, guided filtering replaces soft matting, which not only refines the transmittance but also shortens the processing time. A similar improvement can be observed for scenario 2.
In summary, in terms of SSIM, PSNR, and running time, the objective results verify that the proposed method reduces the edge effect and shortens the image processing time.
As can be seen from Table 2, the algorithm in this paper is quantitatively compared with three classic sky defogging algorithms. For scenario 1, our improved algorithm is 5.45 higher in PSNR than the defogging algorithm based on the cost function [16], 5.72 higher than the defogging algorithm based on the tolerance mechanism [12], and 3.58 higher than the defogging algorithm based on sky segmentation [15]. Both the subjective analysis and the objective indices therefore show that the defogged images obtained by our improved method have less distortion and better image quality. For scenario 2, our improved algorithm is 0.28 higher in SSIM than the cost function-based algorithm [16], 0.28 higher than the tolerance-based algorithm [12], and 0.11 higher than the sky segmentation-based algorithm [15]. The objective indicators thus verify again that the defogged images obtained by our improved algorithm better reflect human subjective perception.
Moreover, following [11], 5000 images from the dataset are used in our experiments: 4000 images are used to select the hyper-parameters of the proposed method, and the remaining 1000 images are used to verify its effectiveness. The evaluation results are shown in Table 3. Our method surpasses the other methods by a large margin in both PSNR and SSIM, which further demonstrates its effectiveness.

5. Conclusions

Based on the traditional dark channel prior model, this paper proposes several improvements targeting three shortcomings of He's algorithm: high time complexity, edge effects, and failure of the dark channel prior. First, a four-point weighting algorithm is used to accurately estimate the atmospheric light value, and guided filtering is introduced instead of the more complex soft matting algorithm to refine the coarse transmittance map; this both speeds up the algorithm and effectively reduces the edge effect. Second, since the sky area does not conform to the dark channel prior and causes halo distortion, an algorithm combining edge detection and the maximum inter-class variance method is proposed to separate the sky area from the non-sky area, and an enhancement algorithm based on sequential decomposition is applied to the sky area. Experimental results show that the improved method not only improves the subjective visual quality of the defogged images, but also, according to the objective evaluation, removes fog more thoroughly and recovers more abundant details than the compared methods.

Author Contributions

Methodology, C.W., Y.Z. and L.W.; Supervision, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the China Postdoctoral Science Foundation (Grant No. 259822), the National Postdoctoral program for Innovative Talents (Grant No. BX20200108), the National Science Foundation of China (Grant No. 61976070), and the Science Foundation of Heilongjiang Province (YQ2020F005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare that no conflict of interest exists in the submission of this manuscript, and the manuscript has been approved by all authors for publication.

References

  1. McCartney, E.J. Optics of the Atmosphere—Scattering by Molecules and Particles. Phys. Bull. 1977, 28, 521.
  2. Chen, Z.; Li, X.; Ding, M. An Atomic Force Acoustic Microscopy Image Fusion Method Based on Grayscale Inversion and Selection of Best-Fit Intensity. Appl. Sci. 2020, 10, 8645.
  3. Wang, L.; Zhang, D.; Guo, J.; Han, Y. Image Anomaly Detection Using Normal Data Only by Latent Space Resampling. Appl. Sci. 2020, 10, 8660.
  4. Sultan, W.; Anjum, N.; Stansfield, M.; Ramzan, N. Hybrid Local and Global Deep-Learning Architecture for Salient-Object Detection. Appl. Sci. 2020, 10, 8754.
  5. Chen, X. Fog Removal from Video Sequences Using Contrast Limited Adaptive Histogram Equalization. In Proceedings of the International Conference on Computational Intelligence & Software Engineering, Wuhan, China, 10–12 December 2010.
  6. Ren, X.; Li, M.; Cheng, W.H.; Liu, J. Joint enhancement and denoising method via sequential decomposition. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5.
  7. Narasimhan, S.G.; Nayar, S.K. Interactive (de)weathering of an image using physical models. In Proceedings of the IEEE Workshop on Color and Photometric Methods in Computer Vision, Nice, France, 13–16 October 2003.
  8. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008.
  9. Fattal, R. Single Image Dehazing. ACM Trans. Graph. 2008, 27, 547–555.
  10. Hautière, N. Fast Visibility Restoration from a Single Color or Gray Level Image. In Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV 2009), Kyoto, Japan, 27 September–4 October 2009.
  11. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
  12. Jiang, J.G.; Hou, T.F.; Qi, M.B. Improved algorithm on image haze removal using dark channel prior. J. Circuits Syst. 2011, 16, 7–12.
  13. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 228–242.
  14. Gibson, K.B.; Nguyen, T.Q. Fast single image fog removal using the adaptive Wiener filter. In Proceedings of the IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013.
  15. Wang, G.; Ren, G.; Jiang, L.; Quan, T. Single Image Dehazing Algorithm Based on Sky Region Segmentation. Inf. Technol. J. 2013, 12, 1168–1175.
  16. Kim, J.H.; Jang, W.D.; Sim, J.Y.; Kim, C.S. Optimized contrast enhancement for real-time image and video dehazing. J. Vis. Commun. Image Represent. 2013, 24, 410–425.
Figure 1. Local filtering for flat depth-of-field areas and depth-of-field-changing edges.
Figure 2. Comparisons of dark channel failure examples.
Figure 3. Comparison of the defogging effect of two methods.
Figure 4. The sky segmentation example.
Figure 5. Comparison results of different defogging algorithms, including cost function defogging algorithm [16], tolerance mechanism defogging algorithm [12], sky segmentation algorithm [15], and our proposed method.
Figure 6. Comparison with He’s algorithms [11] in different scenarios.
Figure 7. Comparison of the effect of different defogging algorithms, including cost function defogging algorithm [16], tolerance mechanism defogging algorithm [12], and sky segmentation algorithm [15], on defogging in bright areas.
Table 1. Quantitative results of He's algorithm [11] and our proposed method on the images of Figure 6.

Methods      | Scenario 1                    | Scenario 2
             | PSNR   SSIM   Run Time (s)    | PSNR   SSIM   Run Time (s)
He's [11]    | 10.65  0.44   1.8             | 16.50  0.79   1.7
Ours         | 15.15  0.63   1.5             | 17.19  0.81   1.4
Table 2. Quantitative results of other defogging methods and our proposed method on the images of Figure 7.

Methods                                     | Scenario 1       | Scenario 2
                                            | PSNR    SSIM     | PSNR    SSIM
Cost function defogging method [16]         | 13.25   0.75     | 8.79    0.46
Tolerance mechanism defogging method [12]   | 12.98   0.77     | 9.15    0.46
Sky segmentation defogging method [15]      | 15.12   0.90     | 11.37   0.63
Ours                                        | 18.70   0.92     | 13.32   0.74
Table 3. Quantitative results of other defogging methods and our proposed method on the test set.

Method                                      | PSNR    | SSIM
Cost function defogging method [16]         | 11.58   | 0.66
Tolerance mechanism defogging method [12]   | 13.16   | 0.72
Sky segmentation defogging method [15]      | 15.06   | 0.88
Ours                                        | 19.20   | 0.91

