Article

Eximious Sandstorm Image Improvement Using Image Adaptive Ratio and Brightness-Adaptive Dark Channel Prior

Department of Digital Media Engineering, Tongmyong University, Busan 48520, Korea
Symmetry 2022, 14(7), 1334; https://doi.org/10.3390/sym14071334
Submission received: 5 June 2022 / Revised: 21 June 2022 / Accepted: 25 June 2022 / Published: 28 June 2022

Abstract

Sandstorm images suffer from a color cast caused by sand particles. Hazy images share many characteristics with sandstorm images because both are acquired through a similar imaging process. Various dehazing methods have been studied to improve hazy images. However, not all of them are appropriate for enhancing sandstorm images, whose color channels are unevenly attenuated and whose color degradation is spread across the whole image. Therefore, this paper proposes a two-step enhancement method. The first step is color balancing using the ratio of the mean values of the red channel and the other color channels. Because the color channels of a sandstorm image are attenuated to different degrees, their average values differ: the red channel has the highest average and the blue channel the lowest. Using this property, the proposed method balances the color of the image via the ratio of the channel means. Even after balancing, if the red channel remains dominant, the enhanced image may still look reddish. Therefore, to enhance the image naturally, the red channel is also adjusted by the average ratio of the color channels; these measures, based on the average ratios of the color channels, are called the image adaptive ratio (IAR). Because the color-balanced sandstorm image has the same characteristics as a hazy image, a dehazing method is then applied. Ordinary dehazing methods often use the dark channel prior (DCP). Although the DCP estimates the dark region of an image, when the image contains very bright regions the estimated dark channel is not sufficiently dark, and the DCP can introduce an artificial color shift in the enhanced image. To compensate for this, this paper proposes a brightness-adaptive dark channel prior (BADCP) based on a normalized color channel. The image improved using the proposed method shows no color distortion or artificial color. The experimental results show the superior performance of the proposed method in comparison with state-of-the-art dehazing methods, both subjectively and objectively.

1. Introduction

An image is obtained by a camera through a medium, and its features reflect the condition of that medium. A hazy image is dimmed by haze particles. Sandstorm images share features with hazy images, such as reduced visibility, but they additionally suffer from color distortion, appearing reddish or yellowish because of sand or dust particles. These particles hinder the propagation of light and attenuate the color channels unevenly, producing a color-shifted image. The reddish or yellowish color cast arises because the green and blue channels are attenuated more strongly than the red channel. Because this color degradation is distributed symmetrically across the whole image, it is difficult to use color-distorted sandstorm images directly in computer vision and image-recognition applications. When objects are detected in a sandstorm image, the degraded and attenuated color channels reduce detection performance and lead to low precision. Therefore, distorted sandstorm images should be enhanced before being used in computer vision and image-recognition tasks. Because sandstorm images and hazy images are acquired through similar processes, dehazing methods are often applied to enhance sandstorm images. However, images improved with dehazing methods alone sometimes show artificial colors because the color channels of sandstorm images are imbalanced. Therefore, a color-balancing step is needed to enhance a degraded sandstorm image.

2. Related Work

To enhance hazy images, various methods have been studied. He et al. [1] proposed the dark channel prior (DCP) method, which estimates the darkest region of the image and uses it to enhance the image. Although the DCP method [1] estimates the darkest region, if the image contains a sky region, the estimated dark region is bright because the sky is too bright, which creates a halo effect in the improved image. Meng et al. improved hazy images using boundary constraints on the transmission map [2]. Their method [2] estimates the transmission map well whether or not the image contains a sky region, which is an advantage over the DCP method [1]. Zhu et al. suggested a haze-removal method using the color attenuation prior [3]. Their method models the scene depth of the image and learns the model parameters to estimate the transmission map easily [3]. It removes the hazy regions of an image effectively, but it does not estimate an adaptive atmospheric back-scattering coefficient under various imaging conditions [3]. Schechner et al. proposed a dehazing method using polarization [4]. Their method exploits the fact that light scattered by atmospheric particles, which appears as haze, is partially polarized [4], and therefore uses the polarization effect to enhance the hazy image [4]. Naseeba et al. improved hazy images using a visibility-restoration method [5]. Their method consists of three modules: a depth estimation module (DEM) that uses a median filter and gamma correction, a color analysis module (CAM) based on the gray world assumption to estimate the color characteristics of the hazy image, and a visibility-restoration module (VRM) that uses an adaptive transmission map [5]. Narasimhan suggested an effective dehazing method using a simple color model for atmospheric scattering and a new geometric framework for analyzing the chromatic effects of atmospheric scattering [6]. Nayar et al. suggested an image depth map to improve the image regardless of the scene points and the atmospheric conditions [7].
Because existing dehazing methods have no color-balancing step, applying them directly to sandstorm images produces artificial colors. To enhance sandstorm images appropriately, various methods have been studied. Al Ameen [8] enhanced sandstorm images using tri-threshold fuzzy intensification operators; this method can enhance some sandstorm images, but because the thresholds are constant, it is not suitable for sandstorm images in all circumstances [8]. Cheng et al. enhanced sandstorm images using white balance [9] with a blue channel prior and fusion [10,11]. Shi et al. proposed a sandstorm image enhancement method using gamma correction [12] and the gray world algorithm with a mean shift of the color components [13]. This method can correct the color distortion, but the mean shift may introduce artificial colors. Gao et al. developed a sandstorm image enhancement method using a blue channel prior [14]. This method corrects the color of images well; however, artificial colors may appear because the red channel is not adjusted. Although the red channel of a sandstorm image is abundant, its intensity is too high, so a color shift may appear in the enhanced image. Shi et al. corrected sandstorm images using a mean shift of the color components with the gray world assumption [15,16]. Their method is effective for the color correction of sandstorm images, but a new color shift may occur because of the mean shift of the color components. Cheng et al. improved sandstorm images using blue channel compensation with white balance [9] and guided image filtering [17,18]. This method balances the color channels by correcting the blue channel; however, a reddish or yellowish artificial color may remain because the green and red channels are not corrected. Because the red channel of a sandstorm image has abundant intensity, its intensity should also be adjusted to balance the image naturally; otherwise, a new color cast may appear.
Recently, dehazing methods based on machine learning have been studied. Wang et al. proposed a dehazing method using a convolutional neural network (CNN) [19]. This method estimates the atmospheric illumination of a hazy image from the luminance channel, because atmospheric illumination influences the luminance channel more than the chrominance channels [19]. Ren et al. improved hazy images using a CNN [20]. Their method consists of two networks: a coarse-scale network that predicts the transmission map of the input image, and a fine-scale network that refines the transmission map [20].
Many methods use mean values to process images. Zhang et al.'s method clusters the image using deviation-sparse fuzzy c-means [21]. Tang et al. segment the image using fuzzy c-means clustering based on local patch information [22].
A sandstorm image has a color shift caused by attenuated color channels, and this color shift hinders a direct application of the dehazing procedure. This paper aims to enhance sandstorm images naturally and proposes two steps. The first is a color-compensation step using the image adaptive ratio (IAR). A color-degraded sandstorm image has a reddish or yellowish color cast caused by imbalanced color channels: an abundant red channel and a rare blue channel. The abundant red channel has high intensity values, so its mean value is also high; the rare blue channel has low intensity values, so its mean value is also low. Therefore, this paper uses the ratios of the mean intensities of the color channels, so that the relatively attenuated channels can be balanced. Moreover, because the red channel of a sandstorm image is more abundant than the other channels, if no adjustment is performed on the red channel, the enhanced image may still look reddish. Therefore, this paper also adjusts the red channel; these ratios, based on the average ratios of the color channels, are called image adaptive ratios (IAR). The image enhanced using the proposed method looks natural. Because the balanced image has features similar to a hazy image, a dehazing method is applied to enhance it. Ordinary dehazing methods use the dark channel prior (DCP). However, the DCP is weak in bright regions and creates halo effects and artificial colors in the enhanced image. To compensate for this, the second step of this paper proposes a brightness-adaptive dark channel prior (BADCP) for enhancing sandstorm images. The experimental results show the competitiveness of the proposed method in comparison with state-of-the-art methods, both objectively and subjectively.

3. Sandstorm Image Enhancement Using IAR and BADCP

3.1. Color-Channel Compensation Using IAR

Sandstorm images suffer from color degradation, appearing reddish or yellowish because sand particles attenuate the color channels. Without a color-compensation step, artificial colors may occur. Enhancing a sandstorm image requires dehazing, because hazy and sandstorm images are similar; however, the dehazing step does not consider color distortion, so the distortion remains in the enhanced sandstorm image. The distorted sandstorm image has imbalanced color channels: an abundant red channel and a rare blue channel. Figure 1 shows a non-degraded sandstorm image (Figure 1a) and a degraded sandstorm image (Figure 1b); the tables below the figure list the mean value of each color channel. As shown in Figure 1b, the red channel of the sandstorm image is more abundant than the other channels and its mean value is the highest, which produces a yellowish color degradation. The mean value of the green channel is about half that of the red channel, and the mean value of the blue channel is about one-third that of the red channel. In contrast, in the non-degraded sandstorm image of Figure 1a, the mean values of the color channels are nearly uniform; the highest and lowest means are similar. Therefore, to enhance a color-casted sandstorm image, the attenuated color channels should be compensated so that all color channels become uniform. As shown in Figure 1, a channel's mean value reflects its condition: if the channel is attenuated, its mean value is low, and if it is less attenuated, its mean value is high. Thus, by using the mean values to balance the color channels, the distorted color channels can be compensated.
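As a concrete illustration of this diagnostic (not part of the proposed method itself), the per-channel means can be computed directly; the snippet below is a small NumPy sketch assuming an RGB image with values in [0, 1], and the function name is illustrative only.

```python
import numpy as np

def channel_means(img):
    """Per-channel means of an RGB image in [0, 1] (the statistic shown in Figure 1)."""
    means = img.reshape(-1, 3).mean(axis=0)
    return {"r": means[0], "g": means[1], "b": means[2]}

# A degraded sandstorm image typically yields a red mean far above the blue mean,
# whereas a non-degraded image yields nearly uniform channel means.
```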
Therefore, this paper proposes an image adaptive ratio (IAR) based on the ratios of the channel means. In a degraded sandstorm image, certain color channels are attenuated while the remaining channel is relatively preserved; which channels are attenuated, and how strongly, depends on the color and condition of the sand particles. The white balance (WB) method [23] is often used to compensate for a rare color channel. It builds a mask image from the reversed channel and the abundant channel of the image. The more a channel is attenuated, the brighter its reversed channel; and by using the abundant gray-scale information, the mask image is not completely white even when the channel is rare. The mask image can compensate for the degraded color channel, but because its intensity is high, a measure is needed to apply it adaptively. This paper uses a measure that reflects the image's condition through the mean values of the color channels. The mean value of the red channel is higher than that of the other channels because the red channel is less attenuated; the more a channel is attenuated, the lower its mean value, and the higher the mean value of its reversed channel. Using this relationship, the distorted image can be balanced suitably: the attenuated color channel is compensated using the ratio between the mean value of the reversed distorted channel and the mean value of the non-distorted channel, and this ratio is adopted when the mean value of the red channel is larger than that of the green channel. The compensated channel is described as:
$I_M^c(x) = \left(1 - I^c(x)\right) \cdot GI(x)$,  (1)
$\delta^c = \dfrac{1 - m(I^c)}{\delta_0}$,  (2)
$I_B^c(x) = I^c(x) + \delta^c \cdot I_M^c(x)$,  (3)
where $I_B^c(x)$ is the compensated image, $GI(x)$ is the gray-scale image, whose role is to compensate the rare channel component because it contains the features of all color channels, $I^c(x)$ denotes the initial green and blue channels, $x$ is the pixel location, $I_M^c(x)$ is the mask image, $c \in \{g, b\}$, $m(I^c)$ is the average value of the initial green or blue channel, $\delta^c$ is the image adaptive ratio (IAR) for the green and blue channels and acts as a scale factor that controls how much of the mask image is added (the more a color channel is attenuated, the higher the mean of its reversed channel, and vice versa), and $\delta_0$ is a control factor that reflects whether the image is dark or bright. The brightness of the image is judged from the sum of the mean intensity values of the mask images: if this sum is lower than the mean of the red channel, the image is considered bright. Even when a sandstorm image is degraded, if it is captured in bright conditions, the red channel is still well preserved because the image has a reddish color cast. Because the mask image reflects the condition of the degraded image through the reversed channels, in a bright image the preserved red channel is brighter than the mask image, and vice versa (in general, the mean intensity of the mask is lower than the mean intensity of the red channel because sandstorm images are captured in bright environments, so the brightness level of the image is determined by comparing the mean values of the mask image and the red channel). In this case, $\delta_0$ is set to $m(I^r)$; otherwise, $\delta_0$ is set to 1. The red channel of a degraded sandstorm image is preserved better than the other channels: if the image is degraded, its attenuated color channels are dark and their reversed channels are bright, so the ratio of the red channel to the reversed channels is close to 1 or higher. In low-light (dark) images, because the mean value of the mask image built from the reversed channels is higher than that of the red channel, $\delta_0$ is set to 1. Equations (1)–(3) are similar to the WB method [23]. The difference is that the WB method [23] uses only the mean difference with respect to the abundant color channel and does not compensate the green channel, whereas in sandstorm images both the green and blue channels are attenuated and should be compensated. In addition, the WB method [23] has no measure that reflects the condition of each color channel, whereas Equation (2) reflects it by comparing the mean of the reversed channel with the mean of the red channel: the more attenuated a channel is, the more it is compensated, and the less attenuated, the less it is compensated. Through Equations (1)–(3), the attenuated color channels are compensated by the reversed channels and the mean ratios. However, even after this compensation, if the intensity of the red channel is excessively high, artificial colors may still appear.
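The following NumPy sketch illustrates one way Equations (1)–(3) could be implemented. The function name and the clipping to [0, 1] are additions of this sketch, and Equation (2) is read here as the reversed channel mean divided by $\delta_0$, following the reconstruction above.

```python
import numpy as np

def compensate_green_blue(img):
    """Sketch of the green/blue compensation of Equations (1)-(3); img is RGB in [0, 1]."""
    r = img[..., 0]
    gray = img.mean(axis=2)                                    # GI(x): gray-scale image
    masks = {c: (1.0 - img[..., c]) * gray for c in (1, 2)}    # Eq. (1): reversed-channel masks

    # delta_0: m(I^r) when the image is judged bright (sum of mask means below the
    # red-channel mean, as described in the text), otherwise 1.
    delta0 = r.mean() if sum(m.mean() for m in masks.values()) < r.mean() else 1.0

    out = img.copy()
    for c, mask in masks.items():
        delta_c = (1.0 - img[..., c].mean()) / delta0          # Eq. (2): IAR of this channel
        out[..., c] = np.clip(img[..., c] + delta_c * mask, 0.0, 1.0)   # Eq. (3)
    return out
```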
Figure 2a–c shows the degraded input images and Figure 2d–f shows the balanced images without control of the red channel. As shown, although the images are balanced, the red channel remains abundant, so the balanced images still have a reddish color cast.
To solve this, the intensity of the red channel should also be controlled, just as the other color channels are varied. To obtain a naturally balanced sandstorm image, this paper controls the intensity of the red channel using the mean ratios of the color channels, which differ between imbalanced and balanced images. As shown in Figure 1, a balanced image has nearly uniform mean values across the color channels, whereas a degraded sandstorm image has non-uniform means: red is the highest and blue is the lowest. Therefore, four measures are used to adjust the red channel. The first is the ratio between the mean of all initial color channels and the mean of the red channel. If the image is severely degraded, the red channel is relatively preserved while the other channels are distorted, which is reflected in the means: the more degraded the image, the farther the total mean is from the red-channel mean, and the less degraded, the closer they are; this term therefore reflects the image's condition. The second and third measures reflect the differences between the balanced green and blue channels and the initial red channel. If the initial image is severely degraded, the mean of the red channel is higher than those of the other channels, so comparing the balanced channels with the initial red channel reflects how strongly the image is degraded; because the red channel is abundant, these terms are smaller and act to reduce the red channel. The fourth measure indicates how abundant the red channel is within the initial color channels, using the mean of all color channels. With these terms, the abundant red channel can be adjusted. The combination of the four measures is described as:
$\delta^r = \dfrac{M}{m(I^r)} + \left\{ m(I_B^g) - m(I^r) \right\} + \left\{ m(I_B^b) - m(I^r) \right\} + \left\{ m(I^r) - M \right\}$,  (4)
$M = \dfrac{m(I^r) + m(I^g) + m(I^b)}{3}$,  (5)
$I_B^r(x) = \dfrac{\delta^r \cdot I^r(x) + I^r(x)}{2}$,  (6)
where $\delta^r$ is the scale factor of the red channel (named the image adaptive ratio for the red channel), $m(I^c)$ is the average value of each initial color channel, $c \in \{r, g, b\}$, and $I_B^r(x)$ is the adjusted red channel. Because the red channel is adjusted by $\delta^r$, the adjusted red channel and the initial red channel are averaged to obtain a natural color channel. $M$ is the mean over all initial color channels. Through Equations (1)–(6), the distorted sandstorm image is balanced. Figure 3 shows the input distorted sandstorm images and the balanced images. As shown in Figure 3, the image balanced with the proposed method looks natural, and the mean values of its channels are similar, in contrast to the input image.
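A matching sketch of the red-channel adjustment in Equations (4)–(6) is given below. It takes the original image and the green/blue-compensated image from the previous sketch; the clipping is again an addition, and the sign conventions follow the reconstruction of Equation (4) above.

```python
import numpy as np

def adjust_red(img, balanced):
    """Sketch of the red-channel IAR adjustment, Equations (4)-(6)."""
    r = img[..., 0]
    M = (img[..., 0].mean() + img[..., 1].mean() + img[..., 2].mean()) / 3.0   # Eq. (5)
    delta_r = (M / r.mean()
               + (balanced[..., 1].mean() - r.mean())
               + (balanced[..., 2].mean() - r.mean())
               + (r.mean() - M))                               # Eq. (4), as reconstructed above
    out = balanced.copy()
    out[..., 0] = np.clip((delta_r * r + r) / 2.0, 0.0, 1.0)   # Eq. (6)
    return out
```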
Therefore, the proposed color-balancing method is well suited to compensating the degraded color channels.

3.2. Dehazing Using the BADCP

The color-balanced sandstorm image has features similar to a hazy image. Therefore, to enhance the balanced sandstorm image, this paper uses a dehazing method. The DCP method [1] is often used for dehazing; it estimates the darkest region of an image. However, the ordinary DCP method [1] is weak in the sky region of an image: because the sky region generally has high intensity values, the estimated dark channel is bright rather than sufficiently dark, which creates halo effects and color shifts in the enhanced image. To compensate for this, this paper proposes the brightness-adaptive DCP (BADCP). If each color channel is normalized, its intensity becomes uniform, independent of brightness or darkness. Therefore, this paper also uses a normalized dark channel when estimating the DCP. The DCP is estimated as [1]:
$I_D(x) = \min_{y \in \Omega(x)} \left( \min_{c} \left( \dfrac{I_B^c(y)}{A_B^c} \right) \right)$,  (7)
and the normalized dark channel is described as:
$I_N(x) = \min_{y \in \Omega(x)} \left( \min_{c} \left( \dfrac{I_B^c(y)}{\mathrm{sum}\left(I_B^c(y)\right)} \right) \right)$,  (8)
where $I_N(x)$ is the normalized dark channel, $c \in \{r, g, b\}$, $\Omega(x)$ is the patch region, whose size is set to at most 1.25% of the larger of the image's row and column dimensions, $A_B^c$ is the back-scattered light (the component scattered by all light) of the balanced image, estimated as in [1], $\mathrm{sum}(\cdot)$ is the sum operation, and $I_D(x)$ is the DCP image. Equations (7) and (8) are combined as:
$I_{PD}(x) = I_N(x) \cdot \left( \alpha + I_D(x) \right)$,  (9)
where $I_{PD}(x)$ is the proposed brightness-adaptive DCP (BADCP), which estimates the dark region of the image adaptively regardless of whether bright regions exist, and $\alpha$ controls the intensity of the BADCP according to the image's condition. If the image has a sky region, the DCP is dark; if it has no sky region, the DCP is bright. However, the intensity of the normalized channel is maintained regardless of the presence or absence of a bright region. Therefore, to reflect whether a sky region exists, $\alpha$ is built from the mean values of the DCP and the normalized channel: it is the sum or the difference of the mean values of the upper 0.2% (by image size) of $I_N(x)$ and $I_D(x)$. If the image has a sky region, $\alpha$ is set to the sum of these means, so that the intensity of $I_N(x)$ contributes more than $I_D(x)$. Otherwise, $\alpha$ is set to their difference, so that $I_N(x)$ contributes less than $I_D(x)$. In this way, the BADCP estimates a brightness-adaptive dark channel even when the image has a sky region. The transmission, which describes the propagation of light, is estimated from the reverse of the DCP [1]. It is described as:
$t(x) = 1 - w \cdot I_{PD}(x)$,  (10)
where $t(x)$ is the transmission map and $w$ reflects the 'aerial perspective' of the image [1,24,25]; it is set to 0.95. Because Equations (7) and (8) use patch minima, the estimated BADCP has a blocky (square) appearance, which produces halo effects in the estimated transmission map. A guided image filter [17] could be used to overcome this, but it is not sufficient on its own. Therefore, this paper refines the transmission map using a bi-dark channel. The dark channel estimates the dark region of an image, but because it uses a fixed mask, it produces block-like halo effects in the transmission map. To compensate for this, the dark channel operation is applied once more, with the same mask size, to the transmission map obtained by reversing the DCP, which is sometimes bright; as a result, the transmission map becomes suitably and sufficiently dark. Therefore, this paper proposes refining the transmission map with a bi-dark channel. The refined transmission map, which indicates the path of light propagation, is obtained by applying the dark channel once again:
$t_r(x) = \min_{y \in \Omega(x)} \left( t(y) \right)$,  (11)
where $t_r(x)$ is the refined transmission map. Through Equation (11), the proposed transmission map has no halo effects, such as the square (block) effect.
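The sketch below outlines Equations (7)–(11). The patch size, read here as 1.25% of the larger image dimension, the upper-0.2% mean, and the `has_sky` flag (the sky test itself is not spelled out above, so it is left to the caller) are assumptions of this sketch; `A` denotes the per-channel back-scattered light $A_B^c$ estimated as in [1].

```python
import numpy as np
from scipy.ndimage import minimum_filter

def _upper_mean(x, frac=0.002):
    """Mean of the brightest fraction of pixels (upper 0.2% of the image size)."""
    k = max(1, int(frac * x.size))
    return np.sort(x.ravel())[-k:].mean()

def badcp_and_transmission(balanced, A, has_sky, w=0.95):
    """Sketch of the BADCP (Eqs. (7)-(9)) and refined transmission map (Eqs. (10)-(11))."""
    h, wd = balanced.shape[:2]
    patch = max(3, int(round(0.0125 * max(h, wd))))            # patch ~1.25% of max(row, col)

    I_D = minimum_filter((balanced / A).min(axis=2), size=patch)          # Eq. (7): DCP
    norm = balanced / (balanced.sum(axis=2, keepdims=True) + 1e-6)
    I_N = minimum_filter(norm.min(axis=2), size=patch)                    # Eq. (8): normalized dark channel

    # alpha: sum of the upper-0.2% means when a sky region is present, difference otherwise.
    alpha = _upper_mean(I_N) + _upper_mean(I_D) if has_sky else _upper_mean(I_N) - _upper_mean(I_D)
    I_PD = I_N * (alpha + I_D)                                            # Eq. (9): BADCP

    t = 1.0 - w * I_PD                                                    # Eq. (10): transmission map
    t_r = minimum_filter(t, size=patch)                                   # Eq. (11): bi-dark refinement
    return I_PD, t_r
```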
Figure 4 and Figure 5 show the proposed transmission map and the existing transmission map [1], together with each method's DCP. As shown in Figure 4 and Figure 5, the image enhanced using the ordinary DCP [1] has halo effects, with ringing and color shifts, whereas the image enhanced using the proposed transmission map has no halo effect (the blue dotted line indicates the proposed method). From Figure 4 and Figure 5, we can see that refining the transmission map by applying the bi-dark channel is a reasonable and effective approach, as there is no ringing effect in the enhanced image.

3.3. Improving the Sandstorm Image

The distorted sandstorm image is corrected using the proposed balancing method. Because the balanced image has features similar to a hazy image, it is then enhanced using the proposed dehazing method, i.e., the BADCP and the refined transmission map. The improvement procedure is described as [1,26,27,28]:
$J^c(x) = \dfrac{I_B^c(x) - A_B^c}{\max\left(t_r(x), t_0\right)}$,  (12)
where $J^c(x)$ is the improved sandstorm image, $A_B^c$ is the back-scattered light of the balanced image estimated using the DCP method [1], and $t_0$ is set to 0.1 to prevent division by zero. As shown in Equation (12), the improved image is obtained from the color-balanced image and the refined transmission map of the proposed method. In addition, a guided image filter [17] is used to refine the image. The guided image filter [17] and its application are described as:
$J_G^c(x) = \mathrm{GF}\left\{ J^c(x), K, eps \right\}$,  (13)
$J_E^c(x) = \left( J^c(x) - J_G^c(x) \right) \cdot \gamma + J_G^c(x)$,  (14)
where $\mathrm{GF}\{\cdot\}$ is the guided image filter, $J_G^c(x)$ is the filtered image, $\gamma$ is the control factor for refining the edge components, set to 5 as in [17], $K$ is the local kernel size, set to 2 empirically, $eps$ is set to $0.4^2$, and $J_E^c(x)$ is the refined, enhanced image.
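The final recovery and refinement steps, Equations (12)–(14), might look as follows. The guided filter call assumes opencv-contrib-python (`cv2.ximgproc.guidedFilter`); the clipping and the float32 cast are additions of this sketch.

```python
import numpy as np
import cv2  # cv2.ximgproc.guidedFilter requires opencv-contrib-python

def recover_and_refine(balanced, A, t_r, t0=0.1, K=2, eps=0.4 ** 2, gamma=5.0):
    """Sketch of scene recovery (Eq. (12)) and guided-filter edge refinement (Eqs. (13)-(14))."""
    t = np.maximum(t_r, t0)[..., None]                          # lower-bound the transmission by t0
    J = np.clip((balanced - A) / t, 0.0, 1.0).astype(np.float32)   # Eq. (12)

    J_G = cv2.ximgproc.guidedFilter(J, J, K, eps)               # Eq. (13): guided image filter [17]
    J_E = (J - J_G) * gamma + J_G                               # Eq. (14): amplify the edge residual
    return np.clip(J_E, 0.0, 1.0)
```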
As shown through Equations (1)–(14) and Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5, the proposed method performs well in enhancing degraded sandstorm images using the image's statistical features, namely the ratios of the average values of the color channels. Moreover, the proposed method is applied adaptively because it uses the image's own features for enhancement, which is its main strength.

3.4. Summary of Proposed Method

Algorithm 1 shows the pseudo-code of the proposed method. The color correction consists of stages (1)–(2): stage (1) corrects the green and blue channels, and stage (2) corrects the red channel. The normalized channel (NC) is obtained in stage (3), and the BADCP is obtained from it in stage (4). The transmission map is obtained in stage (5), and the refined transmission map, using the bi-dark channel, in stage (6). The improved image is obtained in stage (7), and the refined image in stage (8).
Algorithm 1 The pseudo-code of the proposed method
Input: Sandstorm image I
Output: improved image J
(1):   Color compensation using (3). Obtain $I_B^c$
(2):   Red channel compensation using (6). Obtain $I_B^r$
(3):   Estimate the NC using (8). Obtain $I_N$
(4):   Estimate the BADCP using (9). Obtain $I_{PD}$
(5):   Estimate the transmission map using (10). Obtain $t$
(6):   Estimate the refined transmission map using (11). Obtain $t_r$
(7):   Obtain the improved sandstorm image using (12). Obtain $J$
(8):   Refine the image using (14). Obtain $J_E$
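Putting the stages together, a hypothetical driver for Algorithm 1 could look like the sketch below, chaining the sketches given earlier in this section; `estimate_backscatter` (for $A_B^c$, estimated as in [1]) and `detect_sky` are placeholder helpers that are not defined in the paper.

```python
def enhance_sandstorm(img):
    """Sketch of Algorithm 1, chaining the sketches given in Section 3."""
    balanced = compensate_green_blue(img)                      # stage (1): Eq. (3)
    balanced = adjust_red(img, balanced)                       # stage (2): Eq. (6)
    A = estimate_backscatter(balanced)                         # back-scattered light, as in [1] (placeholder)
    I_PD, t_r = badcp_and_transmission(balanced, A, detect_sky(balanced))   # stages (3)-(6)
    return recover_and_refine(balanced, A, t_r)                # stages (7)-(8): Eqs. (12)-(14)
```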
This paper proposes a sandstorm image enhancement method using the image adaptive ratio together with the image's statistical features, namely the average values of the color channels. A degraded sandstorm image has different contrast in each color channel, so the channel averages also differ: the red channel's average is the highest and the blue channel's is the lowest. Using this relationship, this paper applies the ratios of the channel means to enhance the degraded sandstorm image naturally, and calls this measure the image adaptive ratio (IAR) of the green and blue channels. Moreover, because the intensity of the red channel is higher than that of the other channels, a reddish color may remain even after the green and blue channels are enhanced. Therefore, this paper also adjusts the red channel using the average ratios of the color channels, a measure called the image adaptive ratio (IAR) for the red channel. The image enhanced using the proposed method has no reddish cast and looks natural. Because the color-balanced image still looks hazy or dusty, a dehazing procedure is needed. Unlike existing methods, the proposed method uses a normalized dark channel, which is brightness-adaptive, so that the image is enhanced naturally even when bright regions exist. The estimated BADCP has no ringing effect thanks to the normalized dark channel and is not affected by bright regions. The image enhanced using the IAR and BADCP methods looks natural.

4. Experimental Results and Discussion

Because the compensated sandstorm image has features similar to a hazy image, it is enhanced by applying a dehazing method based on the proposed BADCP and refined transmission map. This section evaluates the performance of the proposed method against state-of-the-art methods, both objectively and subjectively. Because the sandstorm image has channels degraded by attenuation, the enhanced images are assessed in two ways: a subjective comparison and an objective comparison. In the subjective comparison, we observe how naturally the images are color-balanced and dehazed, paying attention to colorfulness and sharpness. For the objective comparison, two metrics frequently used to assess degraded images are applied.
For the subjective comparison, we present two steps: first a comparison of the color corrections and then a comparison of the improved images. Each step uses 10 sandstorm images from the DAWN dataset [29], which contains a variety of sandstorm images. The following section then presents an objective comparison of the enhanced images using several measures.

4.1. Subjective Comparison

4.1.1. Comparison of Color Corrections

The sandstorm image has a color shift toward red or yellow. To correct the color cast, this paper uses the ratios between the mean of the red channel and the means of the other channels. This section compares the performance of the proposed color-correction method with state-of-the-art methods, namely Shi et al. [16], Shi et al. [13], and Al Ameen [8]. Shi et al.'s method [16] corrects the color cast using a mean shift of the color components with the gray world assumption [10]. Shi et al. [13] compensate the color-distorted image using a mean shift of the color components with gamma correction [12]. Al Ameen [8] enhances the color-shifted image using constant values. Figure 6, Figure 7, Figure 8 and Figure 9 show the corrected versions of variously degraded images.
Figure 6 and Figure 7 show variously degraded images and the images corrected using the proposed method and state-of-the-art methods. Shi et al.'s method [16] enhances lightly degraded sandstorm images, and Shi et al.'s method [13] also corrects the degraded sandstorm images. Al Ameen's method [8] corrects the sandstorm image, but the enhanced image shows a color shift because the constant values do not reflect the condition of the image, even for lightly degraded images. The proposed method corrects the images naturally without any color shift, and the corrected images appear merely hazy or dusty, without any bluish, yellowish, or reddish color cast. Therefore, the proposed method corrects the color cast sufficiently and is competitive with the other methods, because it includes an image-adaptive color-correction step that compensates the rare components and adjusts the abundant components using image-adaptive measures.
Figure 8 and Figure 9 show further variously degraded sandstorm images and their enhanced versions. The image enhanced using Shi et al.'s method [16] has a color shift due to the mean shift of the color components. Shi et al.'s method [13] enhances the sandstorm image, but the improved image also shows a color cast caused by the mean shift of the color components. Although Shi et al. [13] and Shi et al. [16] correct the color distortion using mean shifts of the color components, the color cast remains because the mean shift alone does not reflect the image's condition. Al Ameen's method [8] enhances only lightly degraded sandstorm images; in severely degraded sandstorm images a color shift appears because the method has no image-adaptive measures. In contrast, the image enhanced using the proposed method shows no color shift in either lightly or severely degraded images, because the proposed method uses an image-adaptive color-correction step in which rare components are compensated and abundant components are adjusted.

4.1.2. Improved Image Comparison

The image color-corrected using the proposed method has no color cast, as shown in Figure 6, Figure 7, Figure 8 and Figure 9. The color-balanced image looks like a hazy image, so various dehazing methods are applied to remove the haze component. This section compares the images enhanced using state-of-the-art methods and the proposed method. Because the color-corrected sandstorm image resembles a hazy image, state-of-the-art dehazing methods are included in the comparison. He et al.'s method [1] enhances hazy images using the DCP. Meng et al.'s method [2] improves hazy images using a refined transmission map. Ren et al. [20] enhance hazy images using a CNN. Shi et al.'s method [16] improves sandstorm images using an image-adaptive transmission map and a mean shift of the color components. Gao et al. [14] enhance sandstorm images using the reversed blue channel prior (RBCP) and a color-compensation method. Al Ameen's method [8] improves sandstorm images using various constant measures.
Figure 10, Figure 11, Figure 12 and Figure 13 show the enhanced images using the proposed method and state-of-the-art methods.
Figure 10 and Figure 11 show variously degraded sandstorm images and the images enhanced using the proposed method and state-of-the-art methods. The dehazing methods of He et al. [1], Meng et al. [2], and Ren et al. [20] have no color-correction step, so the images enhanced with these methods still show a color shift; images derived from low-light inputs remain dark and color-distorted. For lightly degraded sandstorm images, however, these dehazing methods do improve the image. Shi et al.'s method [16] includes a color-compensation step that uses a mean shift of the color components and the gray world assumption [10], but this leads to color shifts in some images because the dehazing step is applied to the corrected image; even when the image is improved, a color shift remains, and in low-light images the color is not corrected suitably. Gao et al. enhance the sandstorm image using the RBCP [14]; this method improves the sandstorm image, but in some images haze remains because the transmission map is not image-adaptive. Al Ameen's method [8] improves the sandstorm image using constant values, so the enhanced image shows a color shift in some cases because the method is not image-adaptive. The proposed method improves sandstorm images naturally without color shift, and even for low-light images the enhanced result shows no color distortion. As shown in Figure 10 and Figure 11, improving lightly degraded sandstorm images is not a difficult task for image-enhancement methods.
Figure 12 and Figure 13 show variously degraded sandstorm images and the improved results. The dehazing methods of He et al. [1], Meng et al. [2], and Ren et al. [20] produce color shifts because they have no color-correction step and the dehazing is applied to color-casted images. Shi et al. [16] enhance sandstorm images with a color-correction step, but this method also causes color shifts in some images because the enhancement step is applied to still color-distorted images. Gao et al. [14] enhance sandstorm images using only the channels' mean ratios, which causes color shifts in some images: severely degraded sandstorm images have very weak color-channel components, so a color shift occurs. Al Ameen's method [8] uses a color-correction step with constant values and also causes color shifts in some images, because it has no image-adaptive measures. The proposed method enhances sandstorm images naturally without any color cast.

4.2. Objective Comparison

As seen in Figure 10, Figure 11, Figure 12 and Figure 13, the performance of the proposed method is subjectively superior to that of the other state-of-the-art methods. To assess the results objectively, two measures are used: the natural image quality evaluator (NIQE) [30] and the underwater image quality measure (UIQM) [31]. The NIQE measure [30] indicates how natural the improved image appears; it is based on constructing quality-aware natural scene statistic (NSS) features, fitting them to a multivariate Gaussian (MVG) model, and expressing the distance between the MVG model and the NSS features of the distorted image [30]. The UIQM measure [31] indicates how well the image is enhanced and originates from the underwater image enhancement area. The sandstorm image and the underwater image have attenuated color channels in common: the underwater image has attenuated red and blue channels because of insufficient light, while the sandstorm image has attenuated blue and green channels, which causes a reddish or yellowish color cast. Because underwater and sandstorm images share these features, the UIQM score is a suitable measure for rating improved sandstorm images. The UIQM [31] combines image components such as sharpness, colorfulness, and contrast. If the image is well enhanced, the NIQE [30] score is low and the UIQM [31] score is high, and vice versa. Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 report the UIQM [31] and NIQE [30] scores for Figure 10, Figure 11, Figure 12 and Figure 13. Although Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 all show improved sandstorm images, Figure 6, Figure 7, Figure 8 and Figure 9 show only color-corrected (not fully enhanced) images, so the objective comparison covers only Figure 10, Figure 11, Figure 12 and Figure 13, which contain the completely enhanced sandstorm images.
Table 1 and Table 2 show the NIQE [30] scores in Figure 10, Figure 11, Figure 12 and Figure 13.
Table 1 shows the NIQE scores for Figure 10 and Figure 11 (10 degraded and enhanced images). As seen in Figure 10 and Figure 11, the images enhanced using the dehazing methods show a color shift, yet their NIQE [30] scores are lower than those of Gao et al.'s method [14] for some images. Meng et al.'s method [2] has the lowest NIQE [30] values among the comparison methods, although its enhanced images show a color shift in some cases. Gao et al.'s method [14] has higher NIQE [30] scores than the other methods, even though its improved images show less color casting in some cases. The images enhanced using the proposed method have low NIQE scores because they have no color cast and the hazy components are suitably removed. Overall, the image enhanced using the proposed method looks natural under various conditions.
Table 2 shows the NIQE [30] scores for Figure 12 and Figure 13 (10 distorted and improved images). The He et al. [1], Meng et al. [2], and Ren et al. [20] methods have higher NIQE [30] scores than the sandstorm image enhancement methods. Al Ameen's method [8] has the higher NIQE [30] score among the sandstorm image enhancement methods, but it still performs better than the dehazing methods. The proposed method has lower NIQE [30] scores than the comparison methods because, even for lightly degraded images, it corrects the color suitably and applies the dehazing procedure adaptively.
Table 3 shows the average NIQE [30] scores for Figure 10, Figure 11, Figure 12 and Figure 13 (color-casted and improved images) and the DAWN dataset [29]. The dehazing methods have higher NIQE [30] scores than Shi et al.'s method [16] because they have no color-correction step. Al Ameen's [8] and Gao et al.'s [14] methods have higher NIQE scores than Meng et al.'s method [2], even though they include a color-correction step. The proposed method has a lower NIQE [30] score than the other methods because it performs image-adaptive color correction and dehazing.
Table 4 and Table 5 show the UIQM [31] scores for Figure 10, Figure 11, Figure 12 and Figure 13 (10 degraded and enhanced images each); the better the image is improved, the higher the UIQM score.
Table 4 shows the UIQM [31] scores for Figure 10 and Figure 11 (10 distorted and improved images). The dehazing methods have lower UIQM scores than Gao et al.'s [14] method for some images. Meng et al.'s method [2] has higher UIQM scores than the other comparison methods, even though its enhanced images show a color shift in some cases; for lightly degraded images the UIQM score mainly reflects sharpness, so although Meng et al.'s method [2] causes a color shift, its UIQM score is higher than that of the other methods because it performs well purely as a dehazing method, and a lightly degraded sandstorm image resembles a hazy image. Al Ameen's method [8] has a higher UIQM score than the other sandstorm image enhancement methods, although its enhanced images show a color shift in some cases. The proposed method has higher UIQM [31] scores than the other methods because it includes a color-correction step, its enhanced images have no color cast, and its dehazing step reflects the image's condition.
Table 5 shows the UIQM [31] scores for Figure 12 and Figure 13 (10 color-casted and improved images). The dehazing methods have lower UIQM scores than the sandstorm image enhancement methods, even though the images enhanced by the latter also show some color distortion. Al Ameen's method [8] has a lower UIQM score than the other sandstorm image enhancement methods for some images. The proposed method has higher UIQM scores than the comparison methods because its enhanced images have no color cast and the haze components are removed naturally.
Table 6 shows the average UIQM [31] scores for Figure 10, Figure 11, Figure 12 and Figure 13 (30 degraded and enhanced images) and the DAWN dataset [29]. The methods of He et al. [1], Meng et al. [2], and Ren et al. [20] have lower UIQM [31] scores than the Al Ameen [8] and Shi et al. [16] methods. Gao et al.'s method [14] has a lower UIQM score than the dehazing methods even though it includes a color-correction step. The proposed method has a higher UIQM score than the other methods because it includes both an image-adaptive color-correction step and a dehazing step; moreover, the guided image filter [17] is applied to refine the dimmed components, such as the edge components. Therefore, the image improved using the proposed method looks natural and its edge components are well preserved, which is reflected in the NIQE and UIQM scores. The images enhanced using the existing methods look dimmed in the edge areas and therefore have higher NIQE scores and lower UIQM scores than the proposed method. As shown in Figure 10, Figure 11, Figure 12 and Figure 13 and Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, both an image-adaptive color-correction step and a dehazing step are needed to enhance sandstorm images naturally.
Table 7 summarizes the DAWN dataset [29]. As shown in Table 7, the DAWN dataset [29] contains images of various weather conditions.
This paper proposes a sandstorm image enhancement method using the image adaptive ratio (IAR) together with the image's statistical features, namely the average values of the color channels. A degraded sandstorm image has different contrast in each color channel, so the channel averages also differ. Using this relationship, this paper applies the IAR of the color channels to enhance the degraded sandstorm image naturally; the image enhanced with the proposed method has no color degradation and looks natural. Because the balanced image still looks hazy or dusty, a dehazing procedure is needed, so this paper uses the normalized dark channel to obtain a brightness-adaptive DCP. Existing DCP methods produce artificial and ringing effects, whereas the proposed BADCP has no ringing effect because the normalized dark channel is not hindered by bright regions. The image enhanced using the proposed IAR and BADCP method looks natural. The contribution of this paper is the use of the image's statistical features to enhance degraded sandstorm images naturally; as a result, the performance of the proposed method is superior to that of the state-of-the-art methods, and the enhanced images look natural compared with the degraded inputs.

5. Conclusions

Sandstorm images suffer from color distortion, appearing reddish or yellowish, because sand particles attenuate the green and blue color channels and create an imbalanced image. Therefore, a color-correction step is needed to correct distorted sandstorm images. In a sandstorm image the red channel is abundant and the blue channel is rare: the intensity of the red channel is high and so is its mean value, while the intensity of the blue channel is low and so is its mean value. The balanced sandstorm image has features similar to a hazy image, so a dehazing algorithm such as the DCP is applied to enhance it. The DCP algorithm performs well at dehazing but is weak in the sky regions of images. To compensate for this weakness, this paper proposes the brightness-adaptive DCP (BADCP) based on a normalized color channel. The strength of the proposed method is that the imbalanced image is corrected using only the image's own features through mean ratios, and the dehazing procedure also reflects the image's natural features, so the enhanced image looks natural. A weakness is that when the image has a thick dusty layer or heavy haze, the enhanced image still looks somewhat dimmed. Nevertheless, the images enhanced using the proposed method are superior to those of the other state-of-the-art methods, both subjectively and objectively. Future work will focus on more suitable color-correction methods and on estimating a DCP that is not hindered by bright regions. Moreover, estimating an adaptive DCP that incorporates distance would indicate the image's depth and make the image clear even when objects are far away.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

NIQE: Natural image quality evaluator
UIQM: Underwater image quality measure
DCP: Dark channel prior
DAWN: Vehicle detection in adverse weather nature dataset

References

  1. He, K.; Jian, S.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar] [PubMed]
  2. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient image dehazing with boundary constraint and contextual regularization. In Proceedings of the IEEE international conference on computer vision, Sydney, Australia, 1–8 December 2013. [Google Scholar]
  3. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar]
  4. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 1. [Google Scholar]
  5. Naseeba, T.; Binu, K.P.H. Visibility Restoration of Single Hazy Images Captured in Real-World Weather Conditions. Int. Res. J. Eng. Technol. 2016, 3, 135–139. [Google Scholar]
  6. Narasimhan, S.G.; Nayar, S.K. Chromatic framework for vision in bad weather. In Proceedings of the IEEE in Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No. PR00662), Hilton Head Island, SC, USA, 15 June 2000; Volume 1. [Google Scholar]
  7. Nayar, S.K.; Narasimhan, S.G. Vision in bad weather. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2. [Google Scholar]
  8. Al Ameen, Z. Visibility enhancement for images captured in dusty weather via tuned tri-threshold fuzzy intensification operators. Int. J. Intell. Syst. Appl. 2016, 8, 10. [Google Scholar] [CrossRef] [Green Version]
  9. Huo, J.Y.; Chang, Y.L.; Wang, J.; Wei, X.X. Robust automatic white balance algorithm using gray color points in images. IEEE Trans. Consum. Electron. 2003, 52, 541–546. [Google Scholar]
  10. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012. [Google Scholar]
  11. Cheng, Y.; Jia, Z.; Lai, H.; Yang, J.; Kasabov, N.K. Blue channel and fusion for sandstorm image enhancement. IEEE Access 2020, 8, 66931–66940. [Google Scholar] [CrossRef]
  12. Thu, Q.H.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar]
  13. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Normalised gamma transformation-based contrast-limited adaptive histogram equalisation with colour correction for sand-dust image enhancement. IET Image Process. 2020, 14, 747–756. [Google Scholar] [CrossRef]
  14. Gao, G.; Lai, H.; Jia, Z.; Liu, Y.; Wang, Y. Sand-dust image restoration based on reversing the blue channel prior. IEEE Photonics J. 2020, 12, 1–16. [Google Scholar] [CrossRef]
  15. Wang, J.; Pang, Y.; He, Y.; Liu, C. Enhancement for dust-sand storm images. In Proceedings of the International Conference on Multimedia Modeling, Miami, FL, USA, 4–6 January 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
  16. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Let you see in sand dust weather: A method based on halo-reduced dark channel prior dehazing for sand-dust image enhancement. IEEE Access 2019, 7, 116722–116733. [Google Scholar] [CrossRef]
  17. He, K.; Jian, S.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  18. Cheng, Y.; Jia, Z.; Lai, H.; Yang, J.; Kasabov, N.K. A fast sand-dust image enhancement algorithm by blue channel compensation and guided image filtering. IEEE Access 2020, 8, 196690–196699. [Google Scholar] [CrossRef]
  19. Wang, A.; Wang, W.; Liu, J.; Gu, N. AIPNet: Image-to-image single image dehazing with atmospheric illumination prior. IEEE Trans. Image Process. 2018, 28, 381–393. [Google Scholar] [CrossRef] [PubMed]
  20. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.H. Single image dehazing via multi-scale convolutional neural networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
  21. Zhang, Y.; Bai, X.; Fan, R.; Wang, Z. Deviation-sparse fuzzy c-means with neighbor information constraint. IEEE Trans. Fuzzy Syst. 2018, 27, 185–199. [Google Scholar] [CrossRef]
  22. Tang, Y.; Ren, F.; Pedrycz, W. Fuzzy C-means clustering through SSIM and patch for image segmentation. Appl. Soft Comput. 2020, 87, 105928. [Google Scholar] [CrossRef]
  23. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2017, 27, 379–393. [Google Scholar] [CrossRef] [Green Version]
  24. Preetham, A.J.; Shirley, P.; Smits, B. A practical analytic model for daylight. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 8–13 August 1999. [Google Scholar]
  25. Goldstein, E.B. Sensation and Perception; Wadsworth: Belmont, CA, USA, 1980. [Google Scholar]
  26. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9. [Google Scholar] [CrossRef]
  27. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
  28. Nayar, S.K.; Narasimhan, S.G. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254. [Google Scholar]
  29. Kenk, M.A.; Hassaballah, M. DAWN: Vehicle detection in adverse weather nature dataset. arXiv 2020, arXiv:2008.05402. [Google Scholar]
  30. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a completely blind image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
  31. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2018, 41, 541–551. [Google Scholar] [CrossRef]
Figure 1. The comparison of various conditions of sandstorm images and mean value of each color channel: (a) non-degraded sandstorm image and mean value of each color channel; (b) degraded sandstorm image and mean value of each color channel.
Figure 2. The comparison of the input images (a–c) and the color-balanced images (d–f) with each channel’s mean value (the tables below the images indicate the average value of each color channel): (a–c) input degraded sandstorm images; (d–f) initial color-balanced images without control of the red channel.
Figure 3. The comparison of input image and color-balanced image with each channel’s mean value: (a,d) Input sandstorm image with mean value of each color channel; (b,e) Initial color-balanced image without control of the red channel with mean value of each color channel; (c,f) Proposed color-balanced image with mean value of each color channel.
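As a concrete illustration of the per-channel statistics reported under Figures 1–3, the following Python sketch computes the mean of each color channel and applies a simplified ratio-based rebalancing that scales each channel by the ratio of the red mean to that channel's own mean. This is only a minimal sketch of the general idea, not the paper's exact IAR formulation (which additionally adjusts the red channel by the average channel ratio); the function names and the file name in the usage comment are illustrative.

```python
import numpy as np

def channel_means(img_rgb: np.ndarray) -> dict:
    """Mean value of each color channel, as reported under Figures 1-3."""
    return {name: float(img_rgb[..., i].mean()) for i, name in enumerate(("R", "G", "B"))}

def simple_ratio_balance(img_rgb: np.ndarray) -> np.ndarray:
    """Illustrative rebalancing: scale each channel by mean(R) / mean(channel).
    The red channel is left unchanged (its ratio is 1); the paper's IAR step
    also adjusts the red channel, which is deliberately omitted here."""
    img = img_rgb.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)        # [mean_R, mean_G, mean_B]
    scale = means[0] / np.maximum(means, 1e-6)     # red-referenced ratios
    balanced = img * scale                         # broadcasts over the channel axis
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Usage with a hypothetical file name (requires an RGB loader, e.g. OpenCV):
#   import cv2
#   rgb = cv2.cvtColor(cv2.imread("sandstorm.jpg"), cv2.COLOR_BGR2RGB)
#   print(channel_means(rgb)); balanced = simple_ratio_balance(rgb)
```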
Figure 4. Example 1: The comparison of the proposed transmission map and the transmission map obtained with the existing DCP [1] (the blue dotted line indicates the proposed method).
Figure 5. Example 2: The comparison of the proposed transmission map and the transmission map obtained with the existing DCP [1] (the blue dotted line indicates the proposed method).
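For reference, the baseline in Figures 4 and 5 is the transmission map produced by the conventional DCP of He et al. [1]. A minimal sketch of that baseline is given below, assuming the atmospheric light A has already been estimated; it is not the proposed BADCP, which replaces the prior with a brightness-adaptive, normalized color channel. The patch size and omega follow the commonly used defaults.

```python
import numpy as np
import cv2

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel of He et al. [1]: per-pixel minimum over R, G, B,
    followed by a minimum filter (erosion) over a patch x patch window."""
    min_rgb = np.min(img, axis=2).astype(np.float32)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def transmission_dcp(img: np.ndarray, A: np.ndarray, omega: float = 0.95,
                     patch: int = 15) -> np.ndarray:
    """Conventional DCP transmission estimate: t = 1 - omega * dark(I / A),
    where A holds the per-channel atmospheric light (shape (3,))."""
    normalized = img.astype(np.float32) / np.maximum(A.astype(np.float32), 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)
```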
Figure 6. Example 1: The comparison of variously degraded sandstorm images color-balanced using state-of-the-art methods and the proposed method: (a) Input image; (b) Shi et al. [16]; (c) Shi et al. [13]; (d) Al Ameen [8]; (e) Proposed method.
Figure 7. Example 2: The comparison of variously degraded sandstorm images color-balanced using state-of-the-art methods and the proposed method: (a) Input image; (b) Shi et al. [16]; (c) Shi et al. [13]; (d) Al Ameen [8]; (e) Proposed method.
Figure 8. Example 3: The comparison of variously degraded sandstorm images color-balanced using state-of-the-art methods and the proposed method: (a) Input image; (b) Shi et al. [16]; (c) Shi et al. [13]; (d) Al Ameen [8]; (e) Proposed method.
Figure 9. Example 4: The comparison of variously degraded sandstorm images color-balanced using state-of-the-art methods and the proposed method: (a) Input image; (b) Shi et al. [16]; (c) Shi et al. [13]; (d) Al Ameen [8]; (e) Proposed method.
Figure 10. Example 1: The comparison of variously degraded sandstorm images enhanced using state-of-the-art methods and the proposed method: (a) Input image; (b) He et al. [1]; (c) Meng et al. [2]; (d) Ren et al. [20]; (e) Shi et al. [16]; (f) Gao et al. [14]; (g) Al Ameen [8]; (h) Proposed method.
Figure 11. Example 2: The comparison of variously degraded sandstorm images enhanced using state-of-the-art methods and the proposed method: (a) Input image; (b) He et al. [1]; (c) Meng et al. [2]; (d) Ren et al. [20]; (e) Shi et al. [16]; (f) Gao et al. [14]; (g) Al Ameen [8]; (h) Proposed method.
Figure 12. Example 3: The comparison of variously degraded sandstorm images enhanced using state-of-the-art methods and the proposed method: (a) Input image; (b) He et al. [1]; (c) Meng et al. [2]; (d) Ren et al. [20]; (e) Shi et al. [16]; (f) Gao et al. [14]; (g) Al Ameen [8]; (h) Proposed method.
Figure 13. Example 4: The comparison of variously degraded sandstorm images enhanced using state-of-the-art methods and the proposed method: (a) Input image; (b) He et al. [1]; (c) Meng et al. [2]; (d) Ren et al. [20]; (e) Shi et al. [16]; (f) Gao et al. [14]; (g) Al Ameen [8]; (h) Proposed method.
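All of the dehazing-style methods compared in Figures 10–13 ultimately invert the atmospheric scattering model I = J·t + A·(1 − t) [1,28]. The sketch below shows only this generic recovery step, assuming the atmospheric light A and the transmission map t have already been estimated; it is not the proposed pipeline, whose transmission comes from the BADCP.

```python
import numpy as np

def recover_scene(img: np.ndarray, A: np.ndarray, t: np.ndarray,
                  t0: float = 0.1) -> np.ndarray:
    """Invert the scattering model I = J*t + A*(1 - t):
    J = (I - A) / max(t, t0) + A, with t0 guarding against tiny transmissions."""
    img = img.astype(np.float32)
    t = np.maximum(t, t0)[..., None]   # add a channel axis for broadcasting
    J = (img - A) / t + A
    return np.clip(J, 0, 255).astype(np.uint8)
```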
Table 1. The comparison of NIQE [30] scores for the images in Figure 10 and Figure 11, in order (a lower score indicates a better-enhanced image).

He et al. [1] | Meng et al. [2] | Ren et al. [20] | Shi et al. [16] | Gao et al. [14] | Al Ameen [8] | Proposed Method
19.797 | 18.729 | 19.701 | 19.652 | 19.700 | 19.607 | 17.901
19.658 | 19.564 | 19.785 | 19.597 | 19.814 | 19.674 | 15.948
21.064 | 20.739 | 20.951 | 21.400 | 21.855 | 21.975 | 16.728
19.376 | 19.413 | 19.416 | 19.158 | 19.458 | 19.213 | 16.850
18.558 | 16.992 | 18.074 | 18.163 | 18.954 | 18.193 | 15.920
21.053 | 21.832 | 20.829 | 20.246 | 20.354 | 20.425 | 17.980
19.414 | 19.165 | 19.424 | 19.238 | 19.659 | 19.586 | 18.135
19.779 | 19.750 | 19.738 | 19.699 | 19.856 | 19.619 | 18.355
19.631 | 19.633 | 19.642 | 19.521 | 19.605 | 19.615 | 19.107
20.611 | 20.683 | 20.641 | 20.328 | 20.513 | 20.943 | 19.355
AVG | 19.894 | 19.650 | 19.820 | 19.700 | 19.977 | 19.885 | 17.628
Table 2. The comparison of NIQE [30] scores for the images in Figure 12 and Figure 13, in order (a lower score indicates a better-enhanced image).

He et al. [1] | Meng et al. [2] | Ren et al. [20] | Shi et al. [16] | Gao et al. [14] | Al Ameen [8] | Proposed Method
20.302 | 20.167 | 20.248 | 20.133 | 20.093 | 19.902 | 15.424
20.202 | 20.102 | 20.130 | 19.812 | 19.901 | 19.515 | 18.621
20.243 | 20.250 | 20.212 | 20.142 | 20.183 | 20.118 | 19.383
19.045 | 18.976 | 19.038 | 19.219 | 18.994 | 18.811 | 15.752
19.768 | 19.798 | 19.791 | 19.750 | 19.674 | 19.649 | 17.911
19.419 | 19.446 | 19.448 | 19.233 | 19.390 | 19.223 | 16.836
20.101 | 19.603 | 19.955 | 19.918 | 19.296 | 19.580 | 17.920
19.378 | 19.756 | 19.331 | 18.787 | 18.565 | 19.207 | 16.970
19.704 | 19.192 | 19.529 | 19.577 | 19.774 | 19.730 | 18.594
19.914 | 19.936 | 19.950 | 19.218 | 18.943 | 19.677 | 16.411
AVG | 19.808 | 19.723 | 19.763 | 19.579 | 19.481 | 19.541 | 17.382
Table 3. The comparison of average NIQE [30] scores: AVG(20) is the average over the 20 images in Figure 10, Figure 11, Figure 12 and Figure 13, and AVG(323) is the average over the 323 sand images of the DAWN dataset [29] (a lower score indicates a better-enhanced image).

He et al. [1] | Meng et al. [2] | Ren et al. [20] | Shi et al. [16] | Gao et al. [14] | Al Ameen [8] | Proposed Method
AVG(20) | 19.851 | 19.686 | 19.792 | 19.639 | 19.729 | 19.713 | 17.505
AVG(323) | 19.863 | 19.698 | 19.892 | 19.714 | 19.931 | 19.803 | 17.839
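The AVG rows in Tables 1–3 (and likewise in Tables 4–6 below) are plain per-method means over the per-image scores. A trivial sketch of that aggregation is given below; the score lists are hypothetical placeholders, not values copied from the tables.

```python
import numpy as np

# Hypothetical per-image NIQE scores (placeholder values); one entry per enhanced test image.
scores = {
    "Proposed Method": [17.9, 15.9, 16.7],
    "He et al. [1]": [19.8, 19.7, 21.1],
}

def per_method_average(scores: dict) -> dict:
    """Per-method mean over all test images, as in the AVG rows of Tables 1-6."""
    return {method: float(np.mean(values)) for method, values in scores.items()}

print(per_method_average(scores))
```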
Table 4. The comparison of UIQM [31] scores for the images in Figure 10 and Figure 11, in order (a higher score indicates a better-enhanced image).

He et al. [1] | Meng et al. [2] | Ren et al. [20] | Shi et al. [16] | Gao et al. [14] | Al Ameen [8] | Proposed Method
0.579 | 0.908 | 0.779 | 0.702 | 0.603 | 0.582 | 1.469
0.835 | 0.948 | 0.827 | 0.652 | 0.608 | 0.756 | 1.898
1.159 | 1.358 | 1.194 | 1.120 | 0.949 | 1.209 | 1.737
0.655 | 0.794 | 0.637 | 0.799 | 0.533 | 0.765 | 1.228
1.216 | 1.478 | 1.337 | 1.293 | 0.961 | 1.313 | 1.597
0.552 | 0.605 | 0.871 | 0.577 | 0.695 | 0.828 | 1.179
0.748 | 1.134 | 0.817 | 0.959 | 0.440 | 0.900 | 1.578
0.829 | 0.886 | 0.961 | 0.708 | 0.595 | 0.850 | 1.481
0.566 | 0.547 | 0.549 | 0.808 | 0.485 | 0.930 | 1.151
0.719 | 0.629 | 0.761 | 0.944 | 0.653 | 1.076 | 1.173
AVG | 0.786 | 0.929 | 0.873 | 0.856 | 0.652 | 0.921 | 1.449
Table 5. The comparison of UIQM [31] scores for the images in Figure 12 and Figure 13, in order (a higher score indicates a better-enhanced image).

He et al. [1] | Meng et al. [2] | Ren et al. [20] | Shi et al. [16] | Gao et al. [14] | Al Ameen [8] | Proposed Method
0.888 | 0.910 | 0.951 | 1.070 | 0.927 | 1.205 | 1.936
0.493 | 0.468 | 0.630 | 0.924 | 0.739 | 1.003 | 1.339
0.418 | 0.434 | 0.553 | 0.693 | 0.502 | 0.823 | 1.107
0.961 | 0.982 | 0.977 | 0.896 | 0.848 | 0.929 | 1.867
0.664 | 0.630 | 0.574 | 0.611 | 0.572 | 0.699 | 1.504
0.790 | 0.735 | 0.837 | 0.832 | 0.715 | 0.876 | 1.627
0.778 | 1.240 | 1.018 | 0.925 | 1.127 | 0.963 | 1.575
0.910 | 0.939 | 1.067 | 1.029 | 1.060 | 1.182 | 1.533
0.574 | 0.755 | 0.737 | 0.940 | 0.690 | 0.866 | 1.669
0.663 | 0.662 | 0.821 | 0.907 | 0.881 | 0.958 | 1.622
AVG | 0.714 | 0.776 | 0.817 | 0.883 | 0.806 | 0.950 | 1.578
Table 6. The comparison of average UIQM [31] scores: AVG(20) is the average over the 20 images in Figure 10, Figure 11, Figure 12 and Figure 13, and AVG(323) is the average over the 323 sand images of the DAWN dataset [29] (a higher score indicates a better-enhanced image).

He et al. [1] | Meng et al. [2] | Ren et al. [20] | Shi et al. [16] | Gao et al. [14] | Al Ameen [8] | Proposed Method
AVG(20) | 0.750 | 0.852 | 0.845 | 0.869 | 0.729 | 0.936 | 1.514
AVG(323) | 0.806 | 0.928 | 0.840 | 0.870 | 0.671 | 0.938 | 1.524
Table 7. Summary of the DAWN dataset [29], which contains real-world images collected under various adverse weather conditions.

Category | Rain | Fog | Snow | Sand
Images | 200 | 300 | 204 | 323