Article

Sandstorm Image Enhancement Using Image-Adaptive Eigenvalue and Brightness-Adaptive Dark Channel Network

Department of Electronics Engineering, Pukyong National University, 45 Yongso-ro, Nam-gu, Busan 48513, Korea
Symmetry 2022, 14(11), 2310; https://doi.org/10.3390/sym14112310
Submission received: 14 September 2022 / Revised: 19 October 2022 / Accepted: 27 October 2022 / Published: 3 November 2022

Abstract

Sandstorm images suffer from dust particles and the attenuation of light. Because the dust particles have a certain color, degraded sandstorm images carry a reddish or yellowish color cast, which leaves the red and blue channels imbalanced. Existing sandstorm enhancement methods focus only on dehazing, since sandstorm images share features with hazy or dusty images; however, the images they produce retain a color cast. Therefore, to enhance a sandstorm image naturally, the color components must first be balanced. This paper proposes a color-balancing method using image-adaptive eigenvalues. Because eigenvalues describe image characteristics, the balanced images have well-distributed color components. The balanced images still look dusty, however, so a dehazing procedure is then applied using a multiscale convolutional neural network that generates a transmission map with brightness-adaptive features. Images enhanced using the proposed method compare well with those of state-of-the-art methods in subjective and objective measures of quality.

1. Introduction

An image captured under sandstorm conditions has distorted colors because attenuation leaves the color channels imbalanced, which causes problems in computer vision, robot vision, and image recognition. The color channels of a degraded sandstorm image exhibit asymmetry: the red channel is abundant, while the blue channel is sparse due to attenuation and scattering. If the particles had no color, the sandstorm image would only look dusty or hazy; because sand particles are colored, the captured image carries a reddish or yellowish color cast. If no color-balancing procedure is applied, the enhanced image has artificial colors. Therefore, enhancement of the degraded image is needed.
Sandstorm image improvement methods can be classified into model-based methods [1,2,3,4,5], model-free methods, and machine learning-based methods.
Because sandstorm images and hazy images are formed similarly, model-based algorithms [1,2,3,4,5] have been studied to enhance them. He et al. improved hazy images using the dark channel prior (DCP) to estimate the darkest region in the image [5]. However, the enhanced images had artificial color components because this method [5] cannot sufficiently estimate the dark region in bright images. Meng et al. [6] enhanced images using a refined transmission map with a boundary constraint, which estimated the transmission map better than the method of He et al. [5]; however, it also had a weak point in bright regions. Lee's [7] method improved sandstorm images using a normalized eigenvalue and a brightness-adaptive dark channel prior, enhancing degraded sandstorm images despite their color distortion. Gao et al. [8] restored degraded sandstorm images by reversing the blue channel prior, exploiting the sparsity of the blue channel in distorted sandstorm images. This method [8] produced less color shift, but a hazy effect remained. Shi et al. [9] enhanced sandstorm images using the mean shift of color components and an image-adaptive transmission map; however, shifting the color components regardless of channel abundance introduced an artificial color shift in some images. Wang et al. enhanced hazy images using a linear transformation [10]. This method estimates the atmospheric light via quad-tree subdivision, but its weakness is that the output darkens as the dehazing level increases [10].
Model-free sandstorm image enhancement algorithms use image processing techniques such as histogram equalization and gamma correction. Al Ameen [11] enhanced sandstorm images using gamma correction, which works for lightly degraded images; for greatly distorted images, however, the results had artificial color components because a constant gamma value was used, and a constant gamma cannot sufficiently reflect an image's characteristics. Shi et al. proposed a sandstorm image enhancement method using the mean shift of color components and contrast-limited adaptive histogram equalization (CLAHE) [12]. This method could enhance degraded sandstorm images, but for images with a sparse blue channel, the results had artificial color components. Cheng et al. improved sandstorm images using a robust gray-world assumption and guided image filtering [13]. Zhu et al. improved hazy images using gamma correction and multiexposure image fusion [14]. This method enhances hazy images well, although a color shift appears in some images.
Denoising algorithms are sometimes adopted in parts of the dehazing literature [15,16], since haze is itself composed of particles. Mahdaoui et al. enhanced noisy images using augmented Lagrangian methods [15].
Recently, machine learning-based sandstorm enhancement algorithms have been studied. Ren et al. [17] enhanced hazy images using a multiscale convolutional neural network (CNN). Li et al. [18] improved hazy images by recombining the haze model to directly generate a haze-free image. Santra et al. successfully enhanced hazy images using a multiscale CNN that jointly estimates the environmental illumination and the transmission map of the input image [19]. However, in some cases such as nighttime images, the results had artificial effects because the synthetic training images were made from daytime images [19]. Wang et al. enhanced hazy images using the atmospheric illumination prior [20], applying a multiscale CNN to the luminance channel. Zhang et al. also improved hazy images using a multiscale CNN whose encoders comprise three scaled convolutional layers and pooling layers forming a fusion module [21].
Machine learning-based methods have advantages in some image processing areas. However, because datasets are limited and often insufficient, they struggle to enhance images naturally under all conditions. Meanwhile, if a synthetic dataset is generated to reflect natural conditions and used for training, good performance can be achieved even from a limited original dataset. Sandstorm images have degraded color components owing to scattered, attenuated, and imbalanced color channels. Therefore, to naturally enhance distorted sandstorm images, the color channels must be balanced. This paper proposes a color-balancing method using image-adaptive eigenvalues; because the eigenvalues of an image reflect its features [7], the degraded color channels can be balanced to enhance the image naturally. Since the balanced images resemble hazy or dusty images, a further dehazing procedure is needed. Existing dehazing methods frequently use the DCP [5]. However, the DCP struggles in bright regions and relies on a constant kernel size, so the enhanced image can have artificial color components when the dark channel is improperly estimated. Therefore, this paper dehazes images using a dark channel estimated by a multiscale CNN composed of various layers. Accordingly, no constant block region is needed, and the enhanced image has no artificial effect.
Images enhanced using the proposed method compare well with state-of-the-art methods in subjective and objective measures of quality.

2. Proposed Method

Sandstorm images and dusty images are formed similarly; however, sandstorm images have shifted color components because the sand particles are colored, and their characteristics vary with the attenuation and scattering that the particles cause. Figure 1 shows examples. Figure 1a,b show a nondegraded dusty image and its color-channel histogram, while Figure 1c,d show a degraded sandstorm image and its color-channel histogram. As can be seen, the color channels of a color-cast sandstorm image must be balanced before the image can be enhanced.
Therefore, this section proposes a method for balancing color channels using image-adaptive eigenvalues.

2.1. Image-Adaptive Color Balance

When light propagates through a sand/dust medium, scattering and attenuation give the captured image a reddish or yellowish color cast. Therefore, to naturally enhance color-cast sandstorm images, a color-balancing procedure is needed. Lee [7] enhanced degraded color channels using eigenvalues, and this method performed well on distorted sandstorm images. However, it normalizes the eigenvalue of each color channel by the maximum value, which sometimes produces an overflow region because the red and blue channels are asymmetrically distributed. Therefore, this paper proposes image-adaptive eigenvalues to balance the degraded color channels.
The eigenvalues of an image are obtained by [7,22,23]:

$I \cdot V = \lambda \cdot V$   (1)

where $I$ is the image matrix, $V$ is a non-zero vector, and $\lambda$ is an eigenvalue. Equation (1) can be rewritten as:

$(I - \lambda \cdot U) \cdot V = 0$   (2)

where $U$ is the identity matrix. Because $V$ is a non-zero vector, the eigenvalues of the image are obtained by

$\det(I - \lambda \cdot U) = 0$   (3)

where $\det(\cdot)$ is the determinant operation. Computing the eigenvalues of an image requires a square matrix, whereas natural images come in diverse sizes; therefore, each image is fitted to a square whose side equals the minimum of its row and column dimensions. The behavior of the eigenvalues with respect to the color channels is shown in Lee's method [7]: if the image has a color cast, e.g., a reddish one, the eigenvalue of the red channel is higher than those of the green and blue channels. Using this property, the color-cast image can be balanced. The cue point of Lee's method [7] for balancing color is described as [7]:

$\lambda_n^c = \dfrac{\lambda_{max}^c}{\max_c(\lambda_{max}^c)}$   (4)
where $\lambda_n^c$ is the normalized eigenvalue of each color channel with respect to the maximum value, $\lambda_{max}^c$ is the maximum eigenvalue of each color channel, and $c \in \{r, g, b\}$. This method balances the color-cast image well. However, it balances the image using only the eigenvalues normalized between the greatly attenuated channel and the abundant channel, so the balanced image can have an overflowed region. Therefore, this paper proposes an image-adaptive color-balancing method that reflects the characteristics of each color channel. The balancing procedure can be expressed as follows:

$I_b^c(x) = I^c(x) \cdot \lambda_\alpha^c + \beta^c \cdot (1 - I^c(x)) \cdot \mu_I(x)$   (5)

$\lambda_\alpha^c = \log_2(1 + \mu_{\lambda_n} \cdot \lambda_n^c)$   (6)

$\beta^c = \dfrac{\mu(\mu_I(x) - I^c(x))}{\mu_{I^c}}$   (7)

where $\mu$ is the average operation and $\mu_I(x)$ is the gray channel, i.e., the per-pixel average of the color channels. In Equation (6), $\lambda_n^c$ is the normalized eigenvalue proposed by Lee [7] as a function of the maximum eigenvalue of each color channel; used alone, however, it leads to the drawbacks mentioned above. To compensate, this paper applies the image-adaptive eigenvalue $\lambda_\alpha^c$ of Equation (6): multiplying by the average of $\lambda_n^c$ suppresses excess values, leading to a more naturally balanced image. Moreover, Equation (5) compensates for sparse color components. Equation (5) is somewhat similar to that proposed by Ancuti et al. [24]; however, whereas Ancuti et al. [24] compensated sparse color components using the green channel, the proposed method uses the gray channel, the average of all color channels, to reflect the channel conditions. In this way, the relatively abundant red channel is reduced while sparser components are compensated, and $\beta^c$ steers the balance toward grayscale when the red channel is abundant. The balanced image obtained using Equations (5)–(7) has no color cast and appears natural. A sketch of this procedure follows.
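For illustration, the following is a minimal NumPy sketch of the color-balancing step of Equations (4)–(7), assuming an RGB image scaled to [0, 1]. The use of the maximum absolute eigenvalue, the square crop, the reconstructed minus sign in Equation (7), and the final clipping are assumptions of this sketch rather than details fixed by the text.

```python
import numpy as np

def balance_sandstorm_image(img):
    """Sketch of the image-adaptive color balance, Eqs. (4)-(7).

    img: RGB image of shape (H, W, 3), values in [0, 1].
    """
    h, w, _ = img.shape
    s = min(h, w)                          # eigenvalues need a square matrix
    crop = img[:s, :s, :]

    # Maximum (absolute) eigenvalue of each color channel.
    lam_max = np.array([np.abs(np.linalg.eigvals(crop[:, :, c])).max()
                        for c in range(3)])
    lam_n = lam_max / lam_max.max()        # normalized eigenvalue, Eq. (4)

    # Image-adaptive eigenvalue, Eq. (6): the mean of lam_n damps excess values.
    lam_alpha = np.log2(1.0 + lam_n.mean() * lam_n)

    gray = img.mean(axis=2)                # gray channel mu_I(x)
    out = np.empty_like(img)
    for c in range(3):
        ch = img[:, :, c]
        beta = (gray - ch).mean() / ch.mean()            # Eq. (7), reconstructed
        out[:, :, c] = ch * lam_alpha[c] + beta * (1.0 - ch) * gray   # Eq. (5)
    return np.clip(out, 0.0, 1.0)
```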
Figure 2 compares the balanced images obtained using Lee's method [7] and the proposed method. As shown in Figure 2, the image enhanced using the proposed method has no color shift or overflow region and exhibits a uniformly distributed histogram. Therefore, the proposed color-balancing method is promising for future applications.

2.2. Dehazing Using Brightness-Adaptive Dark Channel Network

Images balanced using the proposed method appear hazy or dusty but have no color shift. Therefore, a dehazing procedure is proposed. The dark channel prior (DCP) [5] is frequently used to enhance hazy images. However, this method [5] has difficulty with bright regions, resulting in an artificial color shift in the enhanced image. To compensate, Lee proposed a brightness-adaptive transmission map using a reversed DCP [7]. However, because the dark channel is computed over a constant block region, the estimated image has a block effect. He et al. [5] therefore used a refinement step with a guided image filter [25], and bilateral filters [26,27] serve the same purpose, to mitigate the artificial effect of the DCP. Thus, the DCP method [5] and Lee's method [7] can enhance hazy images, but an artificial color shift occurs. The existing transmission map procedure uses a constant kernel size to estimate the dark region of each image:
$I^{DC}(x) = \min_c \left( \min_{y \in \Omega(x)} \left( \dfrac{I^c(y)}{A^c} \right) \right)$   (8)
where $I^{DC}(x)$ is the estimated dark channel, $\Omega(x)$ is the patch (set to 15 × 15 in He et al.'s [5] method), and $A^c$ is the backscattered light of each color channel according to He et al.'s [5] method. As expressed in Equation (8), a constant patch size is used to estimate the image's dark region. However, because the kernel size is constant, the estimated dark image has a block effect, which persists in the transmission map. Various refinement procedures can be applied to address this; a CNN, however, can generate the image's transmission map without a fixed kernel size. A minimal sketch of the constant-patch estimate follows for reference.
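The constant-patch dark channel of Equation (8) can be sketched in a few lines of NumPy/SciPy; the 15 × 15 patch follows He et al. [5], and the [0, 1] image range is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, A, patch=15):
    """Dark channel of Eq. (8): per-pixel minimum over the color channels
    of I/A, followed by a minimum filter over a constant patch.

    img: (H, W, 3) image in [0, 1]; A: per-channel backscattered light."""
    per_pixel_min = (img / A).min(axis=2)      # min over c of I^c(y) / A^c
    return minimum_filter(per_pixel_min, size=patch)
```

The fixed `patch` argument is precisely the constant kernel that the CNN-based approach below avoids.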
Therefore, this paper proposes a brightness-adaptive dark channel using multiscale CNN, thereby eliminating the block effect.
The basic dataset used to train the brightness-adaptive dark channel network is the D-Hazy dataset [28], which provides clear scene radiance, a synthetic hazy image, and a depth map. Because the dehazing procedure uses a transmission map, target transmission maps can be generated from the depth maps. The transmission map can be expressed as follows [1,2,3,4,5]:
$t(x) = \exp(-\beta \cdot d(x))$   (9)
where $\beta$ is the scattering parameter and $d(x)$ is the depth map of the image. The scattering parameter $\beta$ controls the transmission map's brightness, as with the 'aerial perspective' [5,29,30], which describes haze as a function of distance; greater values indicate thicker haze [5,29,30]. A natural image has various depth features. Therefore, by exploiting the scattering parameter, the training set contains transmission maps of various depths, producing a proper transmission map. The variation of the transmission map with the scattering parameter is displayed in Figure 3. This paper uses the transmission maps generated from the depth maps via the scattering parameter as the target images for training. To generate various target transmission maps, the scattering parameter $\beta$ was set in the range [1, 0.95, …, 0.75, 0.7] with a 0.05 interval, as sketched below.
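The following is a minimal sketch of this target generation, assuming depth maps normalized to [0, 1]; returning a dictionary keyed by β is an implementation choice of the sketch.

```python
import numpy as np

def target_transmission_maps(depth):
    """Target maps t(x) = exp(-beta * d(x)) of Eq. (9) for the scattering
    parameters beta = 1, 0.95, ..., 0.7 used to build the training targets."""
    betas = np.arange(1.0, 0.65, -0.05)        # 1.0 down to 0.7 inclusive
    return {round(b, 2): np.exp(-b * depth) for b in betas}
```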
The network design follows a hybrid of DCP theory [5] and the brightness-adaptive DCP [7]:

$l_{mp}(x) \approx \min_c \left( \min_{y \in \Omega(x)} \left( \dfrac{I^c(y)}{A^c} \right) \right)$   (10)

where $A^c$ is the backscattered light according to He et al.'s method [5], $\Omega(x)$ is the patch region, and $l_{mp}(x)$ is the output of the minimum pooling layer. Equation (10) expresses what minimum pooling and the DCP [5] have in common: both estimate the dark region of an image. In contrast to minimum pooling, however, the DCP method [5] needs a fixed patch size, which leads to artificial effects in the enhanced image. Therefore, this paper uses minimum pooling to estimate the image's dark region.
To design the multiscale network, U-Net [31] was applied. The concept of the proposed transmission network can be expressed as follows [7]:

$t_p(x) = \mathrm{ReLU}\left( 1 - \dfrac{l_d(x)}{\max(\mu_{l_d},\, \mu_{1-l_d}) + \max(l_d(x),\, 1 - l_d(x))} \right)$   (11)

where $l_d$ is the multiscale dark channel layer, $\mu$ is the average operation, $x$ is the pixel location, $1 - l_d$ is the reversed layer, $\mathrm{ReLU}(\cdot)$ is the rectified linear unit (ReLU) [32] operation, $\max(\cdot)$ is the maximum operation, and $t_p(x)$ is the proposed transmission map.
The network design is depicted in Figure 4. To generate the transmission map at multiple scales, four minimum pooling layers and three upsampling layers are used, and five concatenation layers combine the diverse feature characteristics. The colored arrows in Figure 4 indicate each layer's operation (convolution, pooling, upsampling, and concatenation), while the numbers below each block and along its width indicate the layer's channel size. Each convolution layer is followed by a ReLU activation function [32]. The transmission map ($l_t$) generated after training on the input image has one channel. The blue dotted rectangle indicates the unit layers used in the network, and the green dotted rectangle indicates the basic module of the proposed brightness-adaptive dark channel network. As Equation (10) shows, minimum pooling and the dark channel prior are similar; hence, minimum pooling and a convolution layer form the basic module that generates the multiscale dark image, and a concatenation layer then gathers the image's diverse features. The network generates the trained image $l_d(x)$ from the color-balanced image $I_b(x)$; from $l_d(x)$, the transmission map $l_t(x)$ is generated. A sketch of the basic module follows.
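The following is a minimal PyTorch sketch of such a basic module, with the layer widths left as parameters since Figure 4 fixes them per stage; PyTorch has no minimum pooling layer, so it is expressed through max pooling of the negated features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DarkBlock(nn.Module):
    """Sketch of the basic module of Figure 4: a 3x3 convolution with
    ReLU [32] followed by minimum pooling (cf. Eq. (10)); the exact layer
    sizes are assumptions of this sketch."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return -F.max_pool2d(-x, kernel_size=2)    # min(x) = -max(-x)
```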

2.3. Loss Function

The loss function combines an SSIM term [33] with the MSE loss. When the generated data and the ground-truth data are similar, the SSIM term is large while the MSE loss is small:
$Loss_p = \varphi \cdot Loss_{mse}$   (12)

$\varphi = (1 + Loss_{ssim})$   (13)

$Loss_{mse} = \dfrac{\sum_{i=1}^{N} e(t_p(x_i), G(x_i))^2}{N}$   (14)

$Loss_{ssim} = \dfrac{(2\mu_{t_p}\mu_G + c_1)(2\sigma_{t_p G} + c_2)}{((\mu_{t_p})^2 + (\mu_G)^2 + c_1)((\sigma_{t_p})^2 + (\sigma_G)^2 + c_2)}$   (15)

$\sigma_{t_p G} = \dfrac{1}{N-1} \sum_{i=1}^{N} (t_p(x_i) - \mu_{t_p})(G(x_i) - \mu_G)$   (16)
where $Loss_p$ is the proposed loss function, $Loss_{mse}$ is the MSE loss, and $Loss_{ssim}$ is the SSIM term. By scaling $Loss_{mse}$ with $\varphi$, training reflects the characteristics of both $Loss_{ssim}$ and $Loss_{mse}$. $\mu_{t_p}$ is the average of $t_p$ (the proposed transmission map), $\mu_G$ is the average of the ground truth $G$, $\sigma_{t_p G}$ is the covariance of $t_p(x_i)$ and $G(x_i)$ [33], $N$ is the number of elements, $e(\cdot)$ is the error, $c_1$ and $c_2$ are variables that stabilize the division [33], and $\varphi$ acts as a controlling factor.
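A minimal PyTorch sketch of this loss follows. The global (whole-map) form of the SSIM term and the standard SSIM constants $c_1 = 0.01^2$ and $c_2 = 0.03^2$ for a unit dynamic range are assumptions of the sketch.

```python
import torch

def proposed_loss(t_p, g, c1=0.01 ** 2, c2=0.03 ** 2):
    """Eqs. (12)-(16): MSE loss scaled by phi = (1 + Loss_ssim)."""
    loss_mse = torch.mean((t_p - g) ** 2)                       # Eq. (14)
    mu_t, mu_g = t_p.mean(), g.mean()
    var_t, var_g = t_p.var(), g.var()                           # (sigma)^2 terms
    n = t_p.numel()
    cov = ((t_p - mu_t) * (g - mu_g)).sum() / (n - 1)           # Eq. (16)
    loss_ssim = ((2 * mu_t * mu_g + c1) * (2 * cov + c2)) / (
        (mu_t ** 2 + mu_g ** 2 + c1) * (var_t + var_g + c2))    # Eq. (15)
    return (1 + loss_ssim) * loss_mse                           # Eqs. (12)-(13)
```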

2.4. Comparison of Transmission Maps

This section compares the generated transmission map with existing transmission maps (He et al. [5], Santra et al. [19], Ren et al. [17], and Lee [7]) using the D-Hazy dataset [28]. As expressed in Equation (9), the transmission map can be estimated using the scattering parameter; here, the transmission maps were obtained over the scattering parameter set [1, 0.95, …, 0.75, 0.7]. Figure 5 compares the existing methods and the proposed method. The ground-truth image was made from the depth map with the scattering parameter set to 0.75. Compared with this ground truth (β = 0.75), the transmission maps generated using He et al.'s [5] and Santra et al.'s [19] methods were poorly estimated, whereas the transmission map estimated using the proposed method was well generated.

2.5. Training Environment

To learn the transmission map adaptively, this study used diverse scattering parameters. The training parameters were as follows: learning rate = 0.0001, batch size = 8, validation size = 8, number of workers = 4, gradient clipping norm = 0.1, number of epochs = 20, and optimizer = Adam [34]. The hardware specification was as follows: GeForce RTX 2060 12 GB, GeForce GTX 1660 Super 6 GB, Intel® Core™ i7-8700 CPU @ 3.20 GHz, and 32 GB RAM. The training dataset was D-Hazy [28], consisting of 1449 synthetic hazy images, ground-truth images, and depth maps; 1305 images were used for training and 144 for validation. To test the trained transmission map, WEAPD [35], which contains 692 real-world sandstorm images, was used. To estimate the training accuracy, the structural similarity index measure (SSIM) was applied [33]. Figure 6 plots the proposed loss and the accuracy score against the epoch. Successful training yields an SSIM score [33] close to one; as shown in Figure 6, the loss and accuracy converged smoothly during training, with only small fluctuations.
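Under these settings, the training loop can be sketched as follows. `TransmissionNet` stands in for the network of Figure 4 and `train_loader` for a D-Hazy [28] loader with batch size 8; both names are assumptions of this sketch, and `proposed_loss` is the sketch given above.

```python
import torch

model = TransmissionNet()            # assumed: the network of Figure 4
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam [34]

for epoch in range(20):
    for balanced_img, target_t in train_loader:   # assumed D-Hazy loader
        optimizer.zero_grad()
        loss = proposed_loss(model(balanced_img), target_t)
        loss.backward()
        # Gradient clipping at the stated norm of 0.1.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
        optimizer.step()
```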

2.6. The Recombined Image

The degraded sandstorm image was corrected using the proposed balancing method. As the balanced image has features similar to a hazy image, a dehazing procedure was applied using the multiscale CNN. The recombined image can be expressed as [1,2,3,4,5]:

$J^c(x) = \dfrac{I_b^c(x) - A_b^c}{\max(t_p(x), t_0)} + A_b^c$   (17)

where $J^c(x)$ is the recombined image, $A_b^c$ is the backscattered light according to He et al.'s method [5], $I_b^c(x)$ is the color-balanced image, $t_p(x)$ is the proposed transmission map, and $t_0 = 0.1$. To refine the enhanced image, guided image filtering was applied, as expressed below [13,25]:

$I_g^c(x) = G_f(J^c(x),\, k,\, eps)$   (18)

$J_{ref}^c(x) = (J^c(x) - I_g^c(x)) \cdot \gamma + I_g^c(x)$   (19)

where $I_g^c(x)$ is the guided-filtered image, $G_f(\cdot)$ is the guided filter, $k = 16$, $eps = 0.1^2$, $\gamma = 9$, and $J_{ref}^c(x)$ is the refined image.
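A sketch of the recovery and refinement steps follows, using OpenCV's ximgproc guided filter as the $G_f$ operator; the choice of that particular implementation and the final clipping are assumptions of the sketch.

```python
import numpy as np
import cv2

def recover_and_refine(i_b, t_p, A_b, t0=0.1, k=16, eps=0.1 ** 2, gamma=9):
    """Eqs. (17)-(19): scene recovery from the color-balanced image i_b
    (H, W, 3) and the learned transmission map t_p, then a guided-filter
    detail boost with the stated k, eps, and gamma."""
    t = np.maximum(t_p, t0)[..., None]                     # clamp by t_0
    j = np.clip((i_b - A_b) / t + A_b, 0.0, 1.0).astype(np.float32)   # Eq. (17)
    j_g = cv2.ximgproc.guidedFilter(guide=j, src=j, radius=k, eps=eps)  # Eq. (18)
    return np.clip((j - j_g) * gamma + j_g, 0.0, 1.0)      # Eq. (19)
```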
Figure 7 compares He et al.'s method [5], Lee's method [7], and the proposed method. As shown in Figure 7, the image enhanced using He et al.'s [5] method had ringing and block effects in some areas, whereas the proposed method produced no such artifacts, highlighting its good performance in sandstorm image enhancement.

3. Summary

This section summarizes the proposed algorithm for naturally enhancing color-cast sandstorm images.
Table 1 shows the procedure of the proposed method. As shown in Table 1, to enhance the color-cast sandstorm image naturally, a color-balanced image is first obtained using Equations (5)–(7). Because the color-balanced image still has hazy or dusty features, a dehazing procedure is applied using the CNN with Equations (11)–(13). The image is then recombined from the color-balanced image and the generated transmission map using Equation (17), and finally refined using Equations (18) and (19).

4. Experimental Result

This paper proposes a degraded sandstorm image enhancement method using image-adaptive eigenvalues and brightness-adaptive dark channel networks. As shown in Figure 2 and Figure 7, the proposed method performed well in sandstorm image enhancement.
This section compares existing state-of-the-art methods and the proposed method according to subjective and objective measures. The subjective comparison covers the color balancing and the enhanced images using the weather phenomenon database (WEAPD) [35].

4.1. Color Balance Comparison

This section compares the color-balancing performance of state-of-the-art methods and the proposed method. Figure 8 and Figure 9 show degraded sandstorm images and the enhanced images under various circumstances. Shi et al. [9,12] improved degraded sandstorm images using the mean shift of color components; these methods balance color well but introduce artificial color components for greatly degraded sandstorm images. Al Ameen [11] balanced the color channels using a constant gamma value, which introduces artificial colors because it cannot reflect the image's features adaptively. Lee [7] improved the degraded sandstorm image's color components; however, in some images, an overflow region appeared due to the large difference in abundance between the red and blue channels. In contrast, the images enhanced using the proposed method had no color shift or artificial color components. As shown in Figure 8 and Figure 9, the proposed method adequately enhanced distorted sandstorm images in various circumstances.

4.2. Comparison of Enhanced Images

This section compares the images enhanced using various state-of-the-art methods and the proposed method. Because sandstorm images and hazy or dusty images are formed similarly, state-of-the-art dehazing methods are included in the comparison. Figure 10 and Figure 11 show degraded sandstorm images in diverse circumstances enhanced using state-of-the-art methods and the proposed method. He et al.'s [5] method is frequently used in dehazing; however, in some cases, bright regions such as the sky exhibited artificial color components due to the improperly estimated transmission map. Meng et al.'s [6] method has no color-balancing procedure; thus, its enhanced images had a color shift. Al Ameen's [11] method also produced an artificial color shift due to the lack of an image-adaptive balancing procedure. Lee's [7] method resulted in overflow regions because of the large difference between the red and blue channel distributions. Shi et al.'s [9,12] methods produced an artificial color shift for greatly distorted sandstorm images. Ren et al.'s [17], Santra et al.'s [19], and Li et al.'s [18] methods have no color correction procedure, so their enhanced images had shifted color components. Gao et al.'s [8] method produced less color shift, but its enhanced images remained dim. In contrast, images enhanced using the proposed method had no color shift or artificial effect.
As shown in Figure 10 and Figure 11, images enhanced using the proposed method had no artificial effect, highlighting its good performance in sandstorm image enhancement.

4.3. Objective Comparison

Distorted sandstorm images were balanced using the proposed method, including a dehazing procedure based on multiscale CNN. Figure 5 shows the transmission map generated using the proposed method and state-of-the-art methods. As shown in Figure 5, the proposed method could properly generate an image-adaptive transmission map. Therefore, to compare the transmission map generated using the proposed method and other methods, the structural similarity index measure (SSIM) [33] and mean-squared error (MSE) measure were used. The reference transmission maps were generated using a depth map with various scattering parameters. The range of scattering parameters was [1, 0.7] with a 0.05 interval. If the generated data are similar to the reference data, the SSIM score [33] is high and the MSE score is low.
Table 2 and Table 3 show the average MSE and average SSIM [33] scores on the D-Hazy dataset [28]. As shown in Table 2, the transmission map generated using He et al.'s [5] method had a lower MSE than that generated using Santra et al.'s [19] method; Lee's [7] method had a lower MSE score than both, and Ren et al.'s [17] method had a higher MSE score than Lee's [7]. The transmission map generated using the proposed method had the lowest MSE score. Table 3 shows the average SSIM scores [33]. The transmission maps generated using Ren et al.'s [17] and Lee's [7] methods had higher SSIM scores than those of He et al.'s [5] and Santra et al.'s [19] methods, while He et al.'s [5] method scored lower than Lee's [7] and Ren et al.'s [17]. The transmission map generated using the proposed method had a higher SSIM score than those of the He et al. [5], Lee [7], and Santra et al. [19] methods.
As shown by the comparison of MSE and SSIM [33] scores, the transmission map generated using the proposed method performed well in various circumstances.
The differences between the generated and ground-truth images are shown in Table 4, Table 5, Table 6, Table 7 and Table 8 in terms of normalized cross-correlation (NK), average difference (AD), structural content (SC), maximum difference (MD), and normalized absolute error (NAE) [36]. Low AD, SC, MD, and NAE scores indicate similarity to the ground-truth image, while a higher NK score indicates similarity. He et al.'s [5] and Santra et al.'s [19] methods scored higher than the other methods on the difference measures, indicating low similarity to the ground truth. Ren et al.'s [17] method scored lower than the other methods in terms of AD, while the proposed method scored lowest in terms of NAE.
The enhanced sandstorm images were compared subjectively in Figure 10 and Figure 11, and the learned transmission map was compared in Figure 5 and Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. As can be seen, the proposed method enhanced the distorted sandstorm images naturally. This section analyzes the results objectively using two metrics: the underwater image quality measure (UIQM) [37] and the natural image quality evaluator (NIQE) [38]. The UIQM [37] is used in underwater image assessment; because sandstorm images and underwater images share features such as attenuated color channels and a color veil, this measure is frequently adopted in sandstorm image assessment. A high UIQM score [37] indicates a well-enhanced image in terms of colorfulness, sharpness, and contrast. A low NIQE score [38] indicates a well-enhanced, natural-looking image.
Table 9, Table 10, Table 11 and Table 12 show the UIQM [37] scores and NIQE [38] scores for Figure 10 and Figure 11.
Table 9 and Table 10 show the UIQM scores [37] for Figure 10 and Figure 11. He et al.'s [5] method scored higher than Gao et al.'s [8] method despite its color shift in some images. Gao et al.'s [8] method scored lower than Meng et al.'s [6] method despite its color-balancing procedure. Al Ameen's method [11] scored higher than Gao et al.'s [8] method despite its color shift. Shi et al.'s [9] method scored higher than He et al.'s [5] and Gao et al.'s [8] methods owing to its image sharpness, colorfulness, and contrast, while Shi et al.'s [12] method scored lower than Shi et al.'s [9] method in some images. Ren et al.'s [17] method scored higher than Gao et al.'s method [8] in some images despite its lack of a color correction procedure, and Lee's [7] method scored higher than Gao et al.'s [8] method in some images. Santra et al.'s method [19] scored lower than Al Ameen's [11] method despite the color shift in the latter. Therefore, the UIQM score is not an absolute measure for assessing a degraded image. Nevertheless, the image enhanced using the proposed method had a higher UIQM score than the other methods.
Table 11 and Table 12 show the NIQE scores [38] for Figure 10 and Figure 11. He et al.'s [5] method scored lower than Gao et al.'s [8] method despite its color shift. Al Ameen's [11] method scored lower than He et al.'s [5] owing to its color-balancing procedure. Li et al.'s [18] method scored higher than Gao et al.'s [8] method owing to its shifted color components. UIQM and NIQE scores do not always line up with subjective impressions of artificial effects and color shifts; hence, these measures are not absolute in assessing the enhanced image's quality. Meng et al.'s [6] method scored lower than Gao et al.'s method [8] despite its shifted color components. Shi et al.'s method [9] scored higher than Meng et al.'s [6] despite its lower color shift. Shi et al.'s [12] method scored lower than Al Ameen's [11] method owing to its better image-adaptive color balancing. Ren et al.'s method [17] scored higher than Shi et al.'s [12] method owing to its lack of a color correction procedure, and Santra et al.'s [19] method scored higher than Shi et al.'s [12] method for the same reason. Lee's [7] method scored lower than Al Ameen's [11] method owing to its image-adaptive color-balancing procedure. The image enhanced using the proposed method had a lower NIQE score than the other methods owing to its lack of shifted color components and artificial effects.
Table 9, Table 10, Table 11 and Table 12 confirm that image-adaptive color balancing and dehazing are needed to naturally enhance distorted sandstorm images.
Table 13 and Table 14 show the average UIQM [37] and NIQE [38] scores on the WEAPD dataset [35]. Existing dehazing methods have no color-balancing procedure; their enhanced sandstorm images therefore have shifted color components, leading to low UIQM scores and high NIQE scores because they do not sufficiently reflect the image. Thus, to enhance sandstorm images distorted by attenuation and scattering, image-adaptive color correction and dehazing are needed. Accordingly, the proposed method achieved a lower NIQE score and a higher UIQM score than the other methods, highlighting its good performance in distorted sandstorm image enhancement.
Moreover, if the proposed method is applied to sandstorm images from a computer graphics (CG) environment [39], enhancement can also be achieved because CG images consist of pixel values. Furthermore, because the training images of the proposed algorithm are obtained via data augmentation, as in [40], the method can reflect the fluctuations of real-world circumstances. Therefore, the proposed method can be applied to both real-world and CG images.

5. Conclusions

Sandstorm images have color distortion due to scattering and attenuation by colored sand particles. To enhance a color-cast sandstorm image, a color-balancing procedure is needed; therefore, this paper proposed a color-balancing procedure using image-adaptive eigenvalues that reflect the image conditions. Because the balanced image has hazy characteristics, a dehazing procedure based on a multiscale CNN was then applied. Moreover, because image datasets offer insufficient transmission maps, this paper generated various transmission maps from depth maps with various scattering parameters; the learned transmission map properly reflected the various image conditions without artificial effects. The contributions of this paper are twofold: by applying data augmentation to obtain transmission maps, the proposed machine learning-based algorithm can adaptively generate transmission maps, and the eigenvalues of the color channels can be used universally in a color-balancing method. However, for images with very sparse color components, the enhanced images presented a few artificial color shifts, which is the weak point of this work. Sandstorm images suffer from reddish and yellowish color degradation; an improved image-adaptive color-balancing procedure, along with an image-adaptive transmission network, will therefore be addressed in future work.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Tan, R.T. Visibility in Bad Weather from a Single Image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 24–26 June 2008.
2. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9.
3. Narasimhan, S.G.; Shree, K.N. Chromatic Framework for Vision in Bad Weather. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head, SC, USA, 13–15 June 2000; Volume 1.
4. Narasimhan, S.G.; Shree, K.N. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254.
5. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
6. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient image dehazing with boundary constraint and contextual regularization. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 2–8 December 2013.
7. Lee, H.S. Efficient Sandstorm Image Enhancement Using the Normalized Eigenvalue and Adaptive Dark Channel Prior. Technologies 2021, 9, 101.
8. Gao, G.; Lai, H.; Jia, Z.; Liu, Y.Q.; Wang, Y. Sand-dust image restoration based on reversing the blue channel prior. IEEE Photonics J. 2020, 12, 1–16.
9. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Let you see in sand dust weather: A method based on halo-reduced dark channel prior dehazing for sand-dust image enhancement. IEEE Access 2019, 7, 116722–116733.
10. Wang, W.; Yuan, X.; Wu, X.; Liu, Y. Fast image dehazing method based on linear transformation. IEEE Trans. Multimedia 2017, 19, 1142–1155.
11. Al-Ameen, Z. Visibility enhancement for images captured in dusty weather via tuned tri-threshold fuzzy intensification operators. Int. J. Intell. Syst. Appl. 2016, 8, 10.
12. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Normalised gamma transformation-based contrast-limited adaptive histogram equalisation with colour correction for sand–dust image enhancement. IET Image Process. 2020, 14, 747–756.
13. Cheng, Y.; Jia, Z.; Lai, H.; Yang, J.; Kasabov, N.K. A fast sand-dust image enhancement algorithm by blue channel compensation and guided image filtering. IEEE Access 2020, 8, 196690–196699.
14. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A novel fast single image dehazing algorithm based on artificial multiexposure image fusion. IEEE Trans. Instrum. Meas. 2020, 70, 5001523.
15. El Mahdaoui, A.; Ouahabi, A.; Moulay, M.S. Image denoising using a compressive sensing approach based on regularization constraints. Sensors 2022, 22, 2199.
16. Ouahabi, A. A Review of Wavelet Denoising in Medical Imaging. In Proceedings of the 2013 8th International Workshop on Systems, Signal Processing and Their Applications (WoSSPA), Algiers, Algeria, 12–15 May 2013.
17. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.-H. Single Image Dehazing via Multi-Scale Convolutional Neural Networks. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016.
18. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-one dehazing network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
19. Santra, S.; Mondal, R.; Panda, P.; Mohanty, N.; Bhuyan, S. Image Dehazing via Joint Estimation of Transmittance Map and Environmental Illumination. In Proceedings of the 2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR), Bangalore, India, 27–30 December 2017.
20. Wang, A.; Wang, W.; Liu, J.; Gu, N. AIPNet: Image-to-image single image dehazing with atmospheric illumination prior. IEEE Trans. Image Process. 2018, 28, 381–393.
21. Zhang, J.; Tao, D. FAMED-Net: A fast and accurate multi-scale end-to-end dehazing network. IEEE Trans. Image Process. 2019, 29, 72–84.
22. Chang, C.-I.; Du, Q. Interference and noise-adjusted principal components analysis. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2387–2396.
23. Tripathi, P.; Garg, R.D. Comparative Analysis of Singular Value Decomposition and Eigen Value Decomposition Based Principal Component Analysis for Earth and Lunar Hyperspectral Image. In Proceedings of the 2021 11th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 24–26 March 2021.
24. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2017, 27, 379–393.
25. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409.
26. Ghosh, S.; Nair, P.; Chaudhury, K.N. Optimized Fourier bilateral filtering. IEEE Signal Process. Lett. 2018, 25, 1555–1559.
27. Ghosh, S.; Chaudhury, K.N. On fast bilateral filtering using Fourier kernels. IEEE Signal Process. Lett. 2016, 23, 570–573.
28. Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C. D-HAZY: A dataset to evaluate quantitatively dehazing algorithms. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016.
29. Goldstein, E.B. Sensation and Perception; Wadsworth, Cengage Learning: Boston, MA, USA, 1980.
30. Preetham, A.J.; Shirley, P.; Smits, B. A Practical Analytic Model for Daylight. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 8–13 August 1999.
31. Ronneberger, O.; Philipp, F.; Thomas, B. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015.
32. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010.
33. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
34. Kingma, D.P.; Jimmy, B. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
35. Xiao, H.; Zhang, F.; Shen, Z.; Wu, K.; Zhang, J. Classification of weather phenomenon from images by using deep convolutional neural network. Earth Space Sci. 2021, 8, e2020EA001604.
36. Memon, F.; Unar, M.A.; Memon, S. Image quality assessment for performance evaluation of focus measure operators. Mehran Univ. Res. J. Eng. Technol. 2015, 34, 379–386.
37. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551.
38. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
39. Staniszewski, M.; Foszner, P.; Kostorz, K.; Michalczuk, A.; Wereszczyński, K.; Cogiel, M.; Golba, D.; Wojciechowski, K.; Polański, A. Application of Crowd Simulations in the Evaluation of Tracking Algorithms. Sensors 2020, 20, 4960.
40. Ciampi, L.; Messina, N.; Falchi, F.; Gennaro, C.; Amato, G. Virtual to real adaptation of pedestrian detectors. Sensors 2020, 20, 5250.
Figure 1. Comparison of a dusty image and a color-cast sandstorm image: (a,b) dusty image and its color-channel histogram; (c,d) color-cast sandstorm image and its color-channel histogram.
Figure 2. Comparison of color-balancing performance between Lee's [7] method and the proposed method: (a,b) input images and their color-channel histograms; (c,d) balanced images using Lee's [7] method and their color-channel histograms; (e,f) balanced images using the proposed method and their color-channel histograms.
Figure 3. Variation of transmission maps with scattering parameter.
Figure 4. Design of brightness-adaptive transmission network.
Figure 5. Comparison of transmission maps obtained on the basis of depth map with a fixed scattering parameter ( β : 0.75 ) using existing methods and the proposed method (the blue dotted line indicates the proposed transmission map): (a) ground truth transmission map using depth map and fixed scattering parameter ( β : 0.75 ) ; (b) He et al. [5]; (c) Lee [7]; (d) Santra et al. [19]; (e) Ren et al. [17]; (f) proposed method.
Figure 6. The variation of loss and accuracy according to epoch: (a) accuracy graph; (b) loss graph.
Figure 7. The comparison of transmission maps and enhanced images with He et al. [5], Lee [7] methods, and the proposed method (the blue dotted line indicates the transmission map and enhanced image using the proposed method): (a,d) estimated transmission map and enhanced image by He et al. method [5]; (b,e) estimated transmission map and enhanced image by Lee method [7]; (c,f) generated transmission map and enhanced image by the proposed method.
Figure 8. Comparison of color-balanced images using state-of-the-art methods and the proposed method in various conditions: (a) input image; (b) Shi et al. [9]; (c) Shi et al. [12]; (d) Al Ameen [11]; (e) Lee [7]; (f) proposed method.
Figure 9. Comparison of color-balanced images using state-of-the-art methods and the proposed method in various conditions: (a) input image; (b) Shi et al. [9]; (c) Shi et al. [12]; (d) Al Ameen [11]; (e) Lee [7]; (f) proposed method.
Figure 10. Comparison of enhanced images using state-of-the-art methods and the proposed method: (a) input image; (b) He et al. [5]; (c) Meng et al. [6]; (d) Al Ameen [11]; (e) Lee [7]; (f) Gao et al. [8]; (g) Shi et al. [12]; (h) Shi et al. [9]; (i) Ren et al. [17]; (j) Santra et al. [19]; (k) Li et al. [18]; (l) proposed method.
Figure 11. Comparison of enhanced images using state-of-the-art methods and the proposed method: (a) input image; (b) He et al. [5]; (c) Meng et al. [6]; (d) Al Ameen [11]; (e) Lee [7]; (f) Gao et al. [8]; (g) Shi et al. [12]; (h) Shi et al. [9]; (i) Ren et al. [17]; (j) Santra et al. [19]; (k) Li et al. [18]; (l) proposed method.
Table 1. Summary of the proposed algorithm.
Input: Sandstorm image, $I(x)$
1. Calculate the image-adaptive eigenvalue using Equation (6), obtaining $\lambda_\alpha^c$.
2. Calculate the image-adaptive color-balancing parameter using Equation (7), obtaining $\beta^c$.
3. Make the color-balanced image using Equation (5), obtaining $I_b^c(x)$.
4. Train the transmission map based on Equation (11), obtaining $t_p(x)$.
5. Calculate the loss function using Equations (12) and (13), obtaining $Loss_p$.
6. Recombine the enhanced sandstorm image using Equation (17), obtaining $J^c(x)$.
7. Refine the improved sandstorm image using Equations (18) and (19), obtaining $J_{ref}^c(x)$.
Output: Enhanced image, $J_{ref}(x)$
Table 2. Comparison of transmission maps according to MSE scores (a lower score indicates greater similarity).

| MSE (AVG) | β = 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | AVG | STD |
|---|---|---|---|---|---|---|---|---|---|
| He et al. [5] | 0.071 | 0.077 | 0.083 | 0.091 | 0.098 | 0.107 | 0.116 | 0.091 | 0.015 |
| Lee [7] | 0.019 | 0.021 | 0.023 | 0.026 | 0.029 | 0.032 | 0.037 | 0.027 | 0.006 |
| Ren et al. [17] | 0.049 | 0.044 | 0.039 | 0.033 | 0.029 | 0.024 | 0.020 | 0.034 | 0.010 |
| Santra et al. [19] | 0.105 | 0.112 | 0.120 | 0.127 | 0.136 | 0.145 | 0.155 | 0.129 | 0.017 |
| PM | 0.010 | 0.010 | 0.011 | 0.012 | 0.013 | 0.015 | 0.018 | 0.013 | 0.003 |
Table 3. Comparison of transmission maps according to SSIM [33] scores (a score closer to one indicates greater similarity).

| SSIM (AVG) | β = 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | AVG | STD |
|---|---|---|---|---|---|---|---|---|---|
| He et al. [5] | 0.765 | 0.758 | 0.751 | 0.744 | 0.738 | 0.731 | 0.724 | 0.744 | 0.013 |
| Lee [7] | 0.921 | 0.920 | 0.918 | 0.916 | 0.914 | 0.911 | 0.908 | 0.915 | 0.004 |
| Ren et al. [17] | 0.906 | 0.913 | 0.921 | 0.928 | 0.934 | 0.941 | 0.947 | 0.927 | 0.014 |
| Santra et al. [19] | 0.751 | 0.743 | 0.736 | 0.728 | 0.720 | 0.712 | 0.704 | 0.728 | 0.016 |
| PM | 0.916 | 0.917 | 0.918 | 0.918 | 0.917 | 0.916 | 0.915 | 0.917 | 0.001 |
Table 4. Comparison of transmission maps according to AD [36] scores (a lower score indicates greater similarity).

| AD (AVG) | β = 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | AVG |
|---|---|---|---|---|---|---|---|---|
| He et al. [5] | 0.226 | 0.238 | 0.250 | 0.263 | 0.276 | 0.289 | 0.303 | 0.264 |
| Lee [7] | 0.083 | 0.095 | 0.107 | 0.120 | 0.133 | 0.147 | 0.161 | 0.121 |
| Santra et al. [19] | 0.301 | 0.313 | 0.325 | 0.338 | 0.351 | 0.364 | 0.378 | 0.339 |
| Ren et al. [17] | −0.168 | −0.156 | −0.143 | −0.131 | −0.118 | −0.104 | −0.090 | −0.130 |
| PM | 0.026 | 0.038 | 0.051 | 0.063 | 0.076 | 0.090 | 0.104 | 0.064 |
Table 5. Comparison of transmission maps according to SC [36] scores (a lower score indicates greater similarity).

| SC (AVG) | β = 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | AVG |
|---|---|---|---|---|---|---|---|---|
| He et al. [5] | 1.948 | 2.012 | 2.080 | 2.152 | 2.227 | 2.307 | 2.390 | 2.159 |
| Lee [7] | 1.284 | 1.324 | 1.366 | 1.410 | 1.457 | 1.505 | 1.556 | 1.415 |
| Santra et al. [19] | 3.073 | 3.171 | 3.275 | 3.383 | 3.497 | 3.617 | 3.743 | 3.394 |
| Ren et al. [17] | 0.681 | 0.701 | 0.723 | 0.746 | 0.769 | 0.794 | 0.820 | 0.748 |
| PM | 1.104 | 1.139 | 1.176 | 1.214 | 1.254 | 1.297 | 1.342 | 1.218 |
Table 6. Comparison of transmission maps according to NK [36] scores (a higher score indicates greater similarity).

| NK (AVG) | β = 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | AVG |
|---|---|---|---|---|---|---|---|---|
| He et al. [5] | 0.695 | 0.683 | 0.671 | 0.658 | 0.646 | 0.634 | 0.622 | 0.658 |
| Lee [7] | 0.879 | 0.865 | 0.851 | 0.837 | 0.823 | 0.809 | 0.795 | 0.837 |
| Santra et al. [19] | 0.563 | 0.554 | 0.545 | 0.536 | 0.526 | 0.517 | 0.508 | 0.536 |
| Ren et al. [17] | 1.203 | 1.186 | 1.169 | 1.152 | 1.135 | 1.118 | 1.101 | 1.152 |
| PM | 0.950 | 0.936 | 0.921 | 0.907 | 0.892 | 0.878 | 0.863 | 0.907 |
Table 7. Comparison of transmission maps according to NAE [36] scores (a lower score indicates greater similarity).

| NAE (AVG) | β = 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | AVG |
|---|---|---|---|---|---|---|---|---|
| He et al. [5] | 0.342 | 0.353 | 0.363 | 0.374 | 0.385 | 0.396 | 0.406 | 0.374 |
| Lee [7] | 0.143 | 0.150 | 0.160 | 0.172 | 0.186 | 0.200 | 0.214 | 0.175 |
| Santra et al. [19] | 0.442 | 0.452 | 0.461 | 0.471 | 0.481 | 0.490 | 0.500 | 0.471 |
| Ren et al. [17] | 0.286 | 0.264 | 0.242 | 0.221 | 0.200 | 0.180 | 0.160 | 0.222 |
| PM | 0.104 | 0.100 | 0.101 | 0.106 | 0.114 | 0.127 | 0.140 | 0.113 |
Table 8. Comparison of transmission maps according to MD [36] scores (a lower score indicates greater similarity).

| MD (AVG) | β = 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | AVG |
|---|---|---|---|---|---|---|---|---|
| He et al. [5] | 0.597 | 0.606 | 0.615 | 0.624 | 0.634 | 0.644 | 0.654 | 0.625 |
| Lee [7] | 0.386 | 0.392 | 0.397 | 0.402 | 0.408 | 0.414 | 0.420 | 0.403 |
| Santra et al. [19] | 0.642 | 0.647 | 0.652 | 0.658 | 0.663 | 0.670 | 0.676 | 0.658 |
| Ren et al. [17] | 0.130 | 0.131 | 0.132 | 0.133 | 0.134 | 0.136 | 0.137 | 0.133 |
| PM | 0.314 | 0.319 | 0.324 | 0.329 | 0.335 | 0.340 | 0.346 | 0.330 |
Table 9. Comparison of UIQM scores [37] for Figure 10 (a higher score indicates a better enhancement).

| Image | [5] | [18] | [11] | [8] | [6] | [9] | [12] | [17] | [7] | [19] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.765 | 0.721 | 0.975 | 0.511 | 0.827 | 0.997 | 0.729 | 0.713 | 1.368 | 0.690 | 1.611 |
| 2 | 0.868 | 0.908 | 0.941 | 0.657 | 1.109 | 0.816 | 0.917 | 0.983 | 1.502 | 0.862 | 1.716 |
| 3 | 1.014 | 0.980 | 0.908 | 0.746 | 1.002 | 0.984 | 0.909 | 0.888 | 1.638 | 0.888 | 1.686 |
| 4 | 0.842 | 0.938 | 0.995 | 0.735 | 0.969 | 0.786 | 0.943 | 0.918 | 1.650 | 0.875 | 1.685 |
| 5 | 0.568 | 0.458 | 0.855 | 0.417 | 0.626 | 0.782 | 0.538 | 0.536 | 0.871 | 0.472 | 1.166 |
| 6 | 0.838 | 0.923 | 0.985 | 0.723 | 0.935 | 0.698 | 1.035 | 0.946 | 1.865 | 0.943 | 1.731 |
| 7 | 1.020 | 1.031 | 1.151 | 0.668 | 1.349 | 1.250 | 1.004 | 0.987 | 1.611 | 1.011 | 1.646 |
| 8 | 0.769 | 0.885 | 0.996 | 0.651 | 0.841 | 0.882 | 0.841 | 0.849 | 1.418 | 0.822 | 1.539 |
| 9 | 0.689 | 0.577 | 0.640 | 0.423 | 0.780 | 0.764 | 0.618 | 0.561 | 1.429 | 0.660 | 1.524 |
| 10 | 0.750 | 0.747 | 0.994 | 0.561 | 0.946 | 1.118 | 0.810 | 0.709 | 1.435 | 0.787 | 1.714 |
| AVG | 0.812 | 0.817 | 0.944 | 0.609 | 0.938 | 0.908 | 0.834 | 0.809 | 1.479 | 0.801 | 1.602 |
Table 10. Comparison of UIQM scores [37] for Figure 11 (a higher score indicates a better enhancement).

| Image | [5] | [18] | [11] | [8] | [6] | [9] | [12] | [17] | [7] | [19] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.165 | 0.932 | 1.008 | 0.745 | 0.999 | 0.987 | 0.948 | 0.873 | 1.653 | 0.925 | 1.586 |
| 2 | 0.961 | 1.079 | 1.050 | 0.765 | 1.345 | 0.780 | 1.025 | 1.015 | 1.614 | 1.088 | 1.513 |
| 3 | 0.513 | 0.735 | 0.805 | 0.505 | 0.760 | 0.736 | 0.687 | 0.794 | 1.253 | 0.661 | 1.473 |
| 4 | 1.085 | 0.905 | 1.057 | 0.741 | 0.911 | 1.059 | 0.963 | 0.874 | 1.600 | 0.887 | 1.588 |
| 5 | 0.590 | 0.516 | 0.967 | 0.746 | 0.577 | 0.789 | 0.769 | 0.762 | 1.447 | 0.602 | 1.474 |
| 6 | 0.615 | 0.578 | 0.861 | 0.755 | 0.689 | 0.581 | 0.748 | 0.851 | 1.327 | 0.616 | 1.379 |
| 7 | 0.734 | 0.854 | 0.777 | 0.694 | 1.023 | 0.593 | 0.795 | 0.945 | 1.489 | 0.755 | 1.392 |
| 8 | 1.081 | 1.070 | 1.251 | 1.028 | 1.081 | 0.909 | 1.159 | 1.174 | 1.651 | 1.055 | 1.575 |
| 9 | 0.718 | 0.823 | 0.893 | 0.644 | 1.063 | 0.754 | 0.822 | 1.367 | 1.429 | 0.785 | 1.449 |
| 10 | 0.622 | 0.450 | 0.930 | 0.537 | 0.511 | 0.936 | 0.634 | 0.601 | 1.497 | 0.437 | 1.537 |
| AVG | 0.808 | 0.794 | 0.960 | 0.716 | 0.896 | 0.812 | 0.855 | 0.926 | 1.496 | 0.781 | 1.497 |
Table 11. Comparison of NIQE scores [38] for Figure 10 (a lower score indicates a better enhancement).

| Image | [5] | [18] | [11] | [8] | [6] | [9] | [12] | [17] | [7] | [19] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 19.471 | 19.939 | 19.199 | 19.622 | 19.453 | 19.139 | 19.227 | 19.509 | 17.991 | 19.569 | 17.031 |
| 2 | 21.172 | 20.302 | 20.406 | 21.284 | 20.344 | 20.800 | 20.298 | 21.204 | 18.552 | 21.178 | 16.098 |
| 3 | 20.039 | 20.699 | 19.924 | 20.085 | 20.066 | 19.724 | 19.588 | 20.121 | 18.018 | 20.363 | 17.212 |
| 4 | 19.105 | 19.427 | 19.281 | 19.123 | 19.080 | 19.102 | 18.242 | 19.131 | 16.289 | 19.094 | 16.417 |
| 5 | 20.280 | 20.477 | 19.741 | 20.350 | 20.082 | 20.108 | 19.941 | 20.337 | 20.664 | 20.168 | 19.408 |
| 6 | 20.286 | 20.521 | 19.985 | 20.094 | 20.093 | 20.060 | 19.470 | 20.215 | 15.264 | 20.115 | 16.084 |
| 7 | 19.493 | 19.515 | 18.664 | 19.617 | 19.160 | 19.304 | 18.929 | 19.434 | 17.069 | 19.219 | 15.723 |
| 8 | 19.610 | 19.732 | 19.579 | 19.652 | 19.370 | 19.538 | 19.512 | 19.579 | 18.900 | 19.674 | 18.577 |
| 9 | 19.417 | 19.670 | 19.632 | 19.557 | 19.265 | 19.389 | 19.561 | 19.538 | 19.005 | 19.532 | 18.440 |
| 10 | 19.600 | 19.657 | 19.353 | 19.749 | 19.511 | 19.367 | 19.381 | 19.646 | 17.805 | 19.634 | 15.206 |
| AVG | 19.847 | 19.994 | 19.576 | 19.913 | 19.642 | 19.653 | 19.415 | 19.871 | 17.956 | 19.855 | 17.020 |
Table 12. Comparison of NIQE scores [38] for Figure 11 (a lower score indicates a better enhancement).

| Image | [5] | [18] | [11] | [8] | [6] | [9] | [12] | [17] | [7] | [19] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 19.934 | 21.203 | 20.323 | 20.286 | 20.129 | 19.696 | 19.737 | 20.042 | 17.866 | 20.006 | 17.180 |
| 2 | 18.449 | 18.351 | 17.984 | 18.759 | 17.956 | 18.767 | 17.930 | 18.486 | 15.620 | 18.366 | 15.599 |
| 3 | 19.674 | 19.663 | 19.607 | 19.669 | 19.555 | 19.609 | 19.823 | 19.596 | 19.317 | 19.710 | 18.496 |
| 4 | 19.796 | 20.722 | 19.682 | 19.859 | 19.937 | 19.296 | 19.456 | 19.916 | 17.848 | 19.954 | 16.849 |
| 5 | 20.168 | 20.862 | 19.905 | 19.638 | 20.341 | 19.838 | 19.883 | 20.127 | 17.614 | 20.143 | 17.543 |
| 6 | 19.455 | 19.699 | 19.438 | 19.425 | 19.501 | 19.329 | 19.163 | 19.461 | 17.411 | 19.634 | 16.879 |
| 7 | 20.983 | 20.533 | 20.339 | 20.926 | 19.697 | 20.834 | 19.418 | 20.694 | 17.617 | 20.828 | 16.630 |
| 8 | 19.185 | 19.369 | 18.751 | 19.081 | 19.152 | 19.242 | 18.270 | 19.198 | 15.331 | 19.199 | 15.722 |
| 9 | 20.398 | 20.437 | 20.228 | 20.505 | 20.191 | 20.317 | 20.063 | 19.980 | 17.736 | 20.525 | 16.524 |
| 10 | 19.542 | 19.725 | 19.641 | 19.664 | 19.678 | 19.325 | 19.642 | 19.567 | 18.259 | 19.696 | 18.575 |
| AVG | 19.758 | 20.056 | 19.590 | 19.781 | 19.614 | 19.625 | 19.339 | 19.707 | 17.462 | 19.806 | 17.000 |
Table 13. Comparison over the images of Figure 10 and Figure 11 (20 images) and the WEAPD dataset [35] (692 images) according to UIQM [37] scores (a higher score indicates a better enhancement).

| | [5] | [18] | [11] | [8] | [6] | [9] | [12] | [17] | [7] | [19] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AVG (20) | 0.810 | 0.806 | 0.952 | 0.663 | 0.917 | 0.860 | 0.845 | 0.867 | 1.487 | 0.791 | 1.549 |
| AVG (692) | 0.954 | 0.924 | 1.018 | 0.788 | 0.991 | 0.910 | 0.950 | 0.955 | 1.618 | 0.956 | 1.648 |
Table 14. Comparison over the images of Figure 10 and Figure 11 (20 images) and the WEAPD dataset [35] (692 images) according to NIQE [38] scores (a lower score indicates a better enhancement).

| | [5] | [18] | [11] | [8] | [6] | [9] | [12] | [17] | [7] | [19] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AVG (20) | 19.803 | 20.025 | 19.583 | 19.847 | 19.628 | 19.639 | 19.377 | 19.789 | 17.709 | 19.830 | 17.010 |
| AVG (692) | 19.826 | 20.190 | 19.715 | 19.859 | 19.732 | 19.707 | 19.328 | 19.876 | 17.504 | 19.836 | 17.129 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
