Article

Block-Greedy and CNN Based Underwater Image Dehazing for Novel Depth Estimation and Optimal Ambient Light

by Fayadh Alenezi 1,*,†, Ammar Armghan 2, Sachi Nandan Mohanty 3, Rutvij H. Jhaveri 4,† and Prayag Tiwari 5,*,†
1 Department of Electrical Engineering, Faculty of Engineering, Jouf University, Sakakah 72388, Saudi Arabia
2 Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72345, Saudi Arabia
3 Department of Computer Science & Engineering, Vardhaman College of Engineering (Autonomous), Hyderabad 501218, India
4 Department of Computer Science & Engineering, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
5 Department of Computer Science, Aalto University, 02150 Espoo, Finland
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Water 2021, 13(23), 3470; https://doi.org/10.3390/w13233470
Submission received: 20 October 2021 / Revised: 28 November 2021 / Accepted: 29 November 2021 / Published: 6 December 2021
(This article belongs to the Special Issue AI and Deep Learning Applications for Water Management)

Abstract: A lack of adequate consideration of underwater image enhancement leaves room for more research in the field. In particular, the global background light has not been adequately addressed in the presence of backscattering. This paper presents a technique based on pixel differences between global and local patches for scene depth estimation. The pixel variance is based on the green and red, green and blue, and red and blue channels, in addition to the absolute mean intensity functions. The global background light is extracted based on a moving average of the impact of suspended light and the brightest pixels within the image color channels. We introduce the block-greedy algorithm in a novel Convolutional Neural Network (CNN) proposed to normalize the attenuation ratios of the different color channels and select the regions with the lowest variance. We address the discontinuity associated with underwater images by transforming both local and global pixel values. We minimize energy in the proposed CNN via a novel Markov random field to smooth edges and improve the final underwater image features. A comparison of the performance of the proposed technique against existing state-of-the-art algorithms using entropy, Underwater Color Image Quality Evaluation (UCIQE), Underwater Image Quality Measure (UIQM), Underwater Image Colorfulness Measure (UICM), and Underwater Image Sharpness Measure (UISM) indicates better performance of the proposed approach in terms of average and consistency. On average, UICM is higher with the proposed technique than with the reference methods, which explains its better color balance. The μ values of UCIQE, UISM, and UICM of the proposed method supersede those of the existing techniques. The proposed method achieved improvements of 0.4%, 4.8%, 9.7%, 5.1% and 7.2% in entropy, UCIQE, UIQM, UICM and UISM, respectively, compared to the best existing techniques. Consequently, the dehazed images have sharp, colorful, and clear features in most images when compared to those resulting from the existing state-of-the-art methods. Stable σ values explain the consistency in visual analysis in terms of sharpness of color and clarity of features in most of the proposed image results when compared with the reference methods. Our own assessment shows that the only weakness of the proposed technique is that it applies only to underwater images. Future research could seek to establish edge strengthening without color saturation enhancement.

1. Introduction

The rise of the digital world has driven up the consumer market for various applications [1,2,3,4,5,6,7,8,9,10,11,12,13]. Recently, haze removal has gained increased attention [14]. Image dehazing has been extended from outdoor images to underwater images [15]. This is due to the rise of visual data analysis and comprehension to aid various applications. The human brain has a cortical area, which aids in analyzing the visual data [16]. The improved image clarity helps image processing tasks vital for significant applications such as those used for surveillance and in environmental studies.
The need for image dehazing arises from light scattering caused by suspended particles or aerosols in the atmosphere before reaching the camera [17,18]. The scattering of the particles limits the ability of the camera to capture clear images [18]. As a result, the images captured by cameras or sensors have degraded quality. This is because the particles lead to substantial loss or gain of contrast and color in the images. The images thus lack visual perceptibility, which hinders the image processing task.
Image dehazing aims to improve the visual and perceptual quality of hazed images, making them suitable for image processing applications [19,20,21]. Dehazing is therefore critical in many computer vision applications such as underwater image processing. However, the existing state-of-the-art methods used for underwater image dehazing or enhancement have various shortcomings. For this reason, better methods of image dehazing are needed.
We propose an underwater image dehazing technique implemented via a novel CNN and the block-greedy algorithm. The technique intelligently selects pixels from local and global patches to estimate the optimal ambient light for underwater image dehazing. The cross-layer connections in the depth network of the CNN preserve feature details, such as edges, in the dehazed underwater images.
In addition, the use of a Markov random field-based minimum cost function smooths the edges and features of the underwater dehazed images based on the pixels in the local and global neighboring patches. The technique seeks to solve one problem with existing underwater dehazing algorithms, which rest on a misleading assumption that there is no difference between pixels in patches, leading to blurred images and overshadowing in some areas. In the proposed algorithm, we introduce constraint terms to enhance the smooth connection between local and global pixels. The minimum energy in the Markov random field (MRF) is computed via graph cut, which prevents the proposed CNN from over-smoothing far-apart pixels.
The paper offers four objectives:
  • To estimate the scene depth based on the pixel differences between the color channels, helping strengthen the scene artifacts of the final dehazed underwater images.
  • To transform the local and global pixels for the purpose of reducing discontinuity, which increases the accuracy of the dehazed images and preserves and improves the color hue.
  • To correct the discontinuity often exhibited in underwater images by continuous splitting invariance of the image pixels drawn from local and global pixels.
  • To estimate the ambient light based on the brightest pixels from all color channels.
The rest of the paper is organized as follows:
Section 2 reviews recent underwater image dehazing techniques and the shortcomings that lay the groundwork for our proposed technique. Section 3 presents the proposed work in detail. Section 4 presents the experimental setup and the results obtained, and discusses the performance of the proposed method. Section 5 summarizes the strengths and weaknesses of the proposed method.

2. Literature Review

Many scholars have presented various underwater image dehazing techniques, and these techniques have a few shortcomings. For example, a recent method proposed by Xiong et al. [22] offered an efficient underwater image enhancement model that makes extensive use of the Beer-Lambert law (the linear relationship between light radiation and light absorption by the image pixels during transmittance). Xiong et al. [22] used the mean and variance of natural images as a reference to correct color cast in underwater images. The method recovered better details of underwater images in two steps: establishing a linear model associated with the mean and variance, and presenting a nonlinear adaptive weight scheme using location information to recover image details and prevent partial over-enhancement. The resultant images yielded better structural restoration and more natural color correction in less time. However, this dehazing was only partially successful: the images had over-saturated colors in some regions, specifically those near dark areas. This resulted in the loss of some image properties.
Similarly, Park and Sim [23] proposed underwater image restoration that uses geodesic color distance under a complete image formation model. In addition to the direct transmission and backward-scattering components that have been well established in many existing methods, their method also considers the forward-scattering component to refine the transmission map via the geodesic color distance. Furthermore, the scene radiance is estimated using the forward-scattering term of the point spread function. The resultant images show improvement in the quality of the estimated transmission maps, and the restored scene radiance of the method was better when compared with the existing state-of-the-art methods. However, the two versions of the results indicate problems with color saturation and with darkening of the horizons, making the results unsuitable for non-water image dehazing tasks.
Li et al. [24] proposed underwater image enhancement based on dehazing and color correction. Their first step entailed obtaining the dehazed image via fusion based on calculating the difference between the maximum green-blue dark channels and the maximum red channels. Their second step entailed obtaining enhanced images via a human-based visual system for color restoration. Their final stage entailed using a simple weight fusion strategy for efficient and simple incorporation of dehazing and enhancement to obtain a high-quality image. Visual analysis indicated that their results outperformed the existing state-of-the-art methods. However, the method had one major defect: excess hue and grain with strong, sharp edges make the method unsuitable. The final images exhibit blurriness and over-saturation of colors toward the horizon, making it unfit for a number of applications.
Deng et al. [25] proposed a novel underwater image enhancement method in which the light source color is removed to attain a dehazed image. This method is known as Removing Light Source Color and Dehazing (RLSCD). The technique explored a new approach to scene depth based on a strong correlation between attenuations and different light conditions. The method estimated the background light based on the gray open operation, which helped avoid wrong estimation of the pixels in the backgrounds of white objects. Deng et al. [25] further used the Lambertian model to estimate the light-source disturbance in the dehazed image. The removal of these disturbances led to effective correction of color distortion and light overcompensation. Yet although the experimental results outperformed state-of-the-art methods in terms of providing relatively natural color, increased contrast, and brightness, the blurriness of some regions remains problematic.
Wang et al. [26] proposed a deep CNN method for underwater image enhancement. Their method used an end-to-end framework where a CNN-based network called UIE Net was trained for color correction and haze removal. UIE Net enabled strong simultaneous learning of image features. Wang et al. [26] also used a pixel disruptive strategy to exploit local features. However, visual analysis of the resultant enhanced underwater images indicates over-saturation of colors in most of the regions. This defect was due to the method’s neglect of global pixel patches while exploiting local patches.
Seeking to avoid the weaknesses of these state-of-the-art methods, the proposed technique considers the pixel difference between the global and local patches in scene depth estimation. The pixel difference is based on the absolute mean intensity functions of the green and red channels, the green and blue channels, and the red and blue channels. The use of the absolute mean intensity function helps in the extraction of image details and the strengthening of artifacts. The global background light is assumed to be based on the moving average of the impact of suspended light and the brightest pixels within the image. This contrasts with the existing research that uses the suggested 0.1% of the brightest pixels; instead, we use the brightest pixels from all the color channels. This is achieved by selecting the top 0.01% least blurry and brightest pixels in all the local and global patches. The image blurriness map is then used to select the regions with the lowest variance, based on the normalized attenuation ratios of the different color channels in terms of the absolute mean intensity function, with the help of a block-greedy algorithm. Underwater images are prone to discontinuous pixels, which lowers their quality. We transform both local and global pixel values to make them continuous by splitting the invariance of the image pixels. This increases the continuity of the pixels, increasing the accuracy and preserving and improving the color hue of the results. This is visible in the improvement of color in the results presented in the figures later in the paper.

3. Proposed Work

3.1. Underwater Image Formation Model

Underwater image hazing is due to light absorption and scattering (see Figure 1). The sunlight illuminating underwater scenes is attenuated by water molecules [27]. In addition, the radiance is reflected and refracted as light travels from the source towards the camera, causing further attenuation. The transmitted scene radiance is regarded as direct transmission, and the ratio of scene radiance to direct transmission is the transmission [28]. Light attenuation underwater varies based on the wavelength of the light. Red is the most attenuated, compared to blue and green [29,30,31,32]. Consequently, red channels disappear or are absorbed more rapidly than green and blue. Thus, underwater images tend to appear bluish or greenish, hazing the images.
Figure 1 shows how underwater scenes are degraded by light absorption and by scattering caused by particles suspended in the water. The water and suspended particles attenuate both the scene radiance and the reflected light.
Existing attempts to solve the underwater image enhancement problem presented in Figure 1 have used the physical model described by Equation (1). For example, Jordt [33] tried to solve the underwater image enhancement problem by explicitly modeling the refraction in the water. Jaffe [34] included water properties, such as the attenuation of the medium, in the model to improve the results. Chiang and Chen [31], and Jaffe [34], created models analogous to the traditional image dehazing process. In addition, Park and Sim [23] developed a complete underwater image restoration model based on direct transmission together with forward- and backward-scattering components (see Equation (1)) [23].
$$\Gamma_c(x) = \Lambda_c(x)\,\tau_c(x) + \Lambda_c(x)\,\tau_c(x)\,\eta_c(x) + \Theta_c\,\big(1 - \tau_c(x)\big) \qquad (1)$$
where $\Gamma_c(x)$ and $\Lambda_c(x)$ denote the intensities of the $c \in \{\text{red, green, blue}\}$ color channel at pixel $x$ in the input underwater image and the scene radiance image, respectively. $\Theta_c$ is the ambient light, and $\Lambda_c(x)\,\tau_c(x)$ is the direct transmission, representing the scene radiance attenuated by the transmission in Equation (2) [23].
$$\tau_c(x) = e^{-\Pi_c\, d(x)}, \qquad (2)$$
where $d(x)$ denotes the distance at $x$ between the target scene and the camera, and $\Pi_c$ denotes the attenuation coefficient of the $c$ color channel.
Suppose we assume that the attenuated light is isotropic and the water is homogeneous. In that case, we can express the total attenuation coefficient $\Pi_c$ as the sum of an absorption coefficient $\xi_c$ and a scattering coefficient $\zeta_c$, so that $\Pi_c = \xi_c + \zeta_c$. $\zeta_c$ describes the superposition of all scattering events at all angles and can be obtained by integrating the volume scattering function $\gamma_c(\theta)$ over all solid angles, as in Equation (3) [35,36].
$$\zeta_c = \int_{0}^{\pi} \gamma_c(\theta)\, d\nu = 2\pi \int_{0}^{\pi} \gamma_c(\theta)\, \sin\theta\, d\theta. \qquad (3)$$
The parameters $\Pi_c$, $\xi_c$, $\zeta_c$, and $\gamma_c(\theta)$ are inherent optical properties of the ocean or water body. However, real-time measurement of these parameters has proved a complex, costly, and time-consuming task; hence, we assume $\Pi_c \approx 0.025$ for every color channel, as guided by Figure 2, extracted from Ruben et al. [37].
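As a minimal illustration (not the authors' implementation), the following NumPy sketch evaluates the transmission of Equation (2) under the assumption $\Pi_c \approx 0.025$ stated above; the depth map here is a synthetic placeholder for the estimate later produced by Equation (6).

```python
import numpy as np

def transmission_map(depth, attenuation=0.025):
    """Transmission tau_c(x) = exp(-Pi_c * d(x)) from Equation (2)."""
    return np.exp(-attenuation * depth)

# Synthetic 256x256 depth map; a real depth map would come from Equation (6).
depth = np.linspace(0.0, 10.0, 256 * 256).reshape(256, 256)
tau = transmission_map(depth)  # values in (0, 1]; farther scene points transmit less
```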
$\Lambda_c(x)\,\tau_c(x)\,\eta_c(x)$ in Equation (1) is the forward transmission, where $\eta_c(x)$ is the point spread function of pixel $x$, defined in Equation (4) [23].
$$\eta_c(x) = \Big(e^{-\Omega_c\, d(x)} - e^{-\Pi_c\, d(x)}\Big)\, \mathcal{F}^{-1}\Big\{ e^{-\delta_c\, d(x)\, \varpi} \Big\} \qquad (4)$$
$\Omega_c$ and $\delta_c$ are empirical coefficients of the $c$ color channel related to the hazed image scene, such that $|\Omega_c| < |\Pi_c|$. $\mathcal{F}^{-1}$ denotes the inverse Fourier transform, and $\varpi$ is the radial frequency. The term $\Theta_c\,(1-\tau_c(x))$ in Equation (1) is the backward-scattering term, with $\Theta_c$ being the background light of the $c$ color channel.

3.2. Scene Depth Estimation and Global Background Light

3.2.1. Scene Depth Estimation

We define the scene depth as [38]:
$$d(x) = \phi_0 + \phi_1\,\varepsilon(x) + \phi_2\,\varrho(x) + \phi_3\,\epsilon(x), \qquad (5)$$
where $d(x)$ is the underwater scene depth at pixel $x\{i,j\}$, and the $\phi_i$ are linear coefficients depending on the pixel-difference plots between the global and local patches. $\varepsilon(x)$ is the mean intensity function giving the absolute difference between the pixels of the green and red channels, $\varrho(x)$ that of the green and blue channels, and $\epsilon(x)$ that of the red and blue channels. In the proposed method, we design the scene depth to extract the pixel intensity difference. The difference helps strengthen the scene artifacts, a component that is lacking in existing underwater dehazing techniques [23,24,25,30,38,39]. We rewrite Equation (5) as
$$\begin{aligned} d(x) = \phi_0 &+ \phi_1\left|\operatorname*{arg\,max}\Big(\min_{x(i,j)}\big(\min_{(i,j)\in\{R,G\}}\varepsilon(x)\big)\Big) - \operatorname*{arg\,min}\Big(\max_{x(i,j)}\big(\max_{(i,j)\in\{R,G\}}\varepsilon(x)\big)\Big)\right| \\ &+ \phi_2\left|\operatorname*{arg\,max}\Big(\min_{x(i,j)}\big(\min_{(i,j)\in\{G,B\}}\varrho(x)\big)\Big) - \operatorname*{arg\,min}\Big(\max_{x(i,j)}\big(\max_{(i,j)\in\{G,B\}}\varrho(x)\big)\Big)\right| \\ &+ \phi_3\left|\operatorname*{arg\,max}\Big(\min_{x(i,j)}\big(\min_{(i,j)\in\{R,B\}}\epsilon(x)\big)\Big) - \operatorname*{arg\,min}\Big(\max_{x(i,j)}\big(\max_{(i,j)\in\{R,B\}}\epsilon(x)\big)\Big)\right|, \end{aligned} \qquad (6)$$
Equation (6) suggests that the scene depth value increases as the difference between the maximum and minimum values of $R-G$, $R-B$, and $G-B$ increases. These differences exist in the pixels of hazed underwater images. Accurate estimation of the scene depth and global background light helps increase the accuracy of underwater image dehazing.
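For illustration only, the sketch below follows the structure of Equations (5) and (6) in a simplified, whole-image form: the inter-channel absolute mean intensity differences are combined with linear coefficients. The coefficient values phi and the per-image (rather than per-patch, argmax/argmin) evaluation are placeholder assumptions.

```python
import numpy as np

def channel_mean_abs_diff(img, c1, c2):
    """Absolute difference of the mean intensities of two color channels,
    a simplified stand-in for the difference terms in Equation (5)."""
    return abs(img[..., c1].mean() - img[..., c2].mean())

def scene_depth_estimate(img, phi=(0.0, 1.0, 1.0, 1.0)):
    """Rough scene-depth score: larger inter-channel differences (G-R, G-B, R-B)
    map to a larger depth value, as Equation (6) suggests."""
    r, g, b = 0, 1, 2
    eps = channel_mean_abs_diff(img, g, r)    # green vs. red
    rho = channel_mean_abs_diff(img, g, b)    # green vs. blue
    eps2 = channel_mean_abs_diff(img, r, b)   # red vs. blue
    return phi[0] + phi[1] * eps + phi[2] * rho + phi[3] * eps2

img = np.random.rand(256, 256, 3)             # placeholder RGB image in [0, 1]
print(scene_depth_estimate(img))
```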

3.2.2. Global Background Light

The coefficient of sunlight, or the global background light $\Theta_c$, is a function of the light wavelength $c$. Prior studies such as that of He et al. [40] proposed using 0.1% of the brightest pixels of the dark channel as $\Theta_c$. Li et al. [39] used graph-based segmentation to estimate $\Theta_c$ and the transmission map via the minimum information loss principle. Ancuti et al. [15] applied local maxima in the dark channel to estimate $\Theta_c$. Galdran et al. [19] used the brightest pixels in the red channel to estimate $\Theta_c$. Peng and Li [41] used the average of 0.1% of the brightest pixels as $\Theta_c$. Carlevaris-Bianco et al. [28] used the minimum values of the maximum difference between the blue-green and red channels as $\Theta_c$. The estimation of ambient light in the proposed and existing methods points to the difference between underwater and general image dehazing. Underwater haze is characterized by light absorption by water particles, whereas in atmospheric haze the light is scattered by particles in the atmosphere. Thus, underwater image dehazing is based on wavelength correction arising from color absorption due to refraction and reflection by water molecules and other particles suspended in water, while traditional dehazing is based on correcting the reflection of light due to suspended particles in the atmosphere.
Suppose we assume that the radiance of the scattered light toward the camera is proportional to the volume scattering function $\gamma_c(\theta)$. If $\int_{\vartheta} \gamma_c(\psi)\, d\psi$ represents all scattering events toward the camera's line of sight from all directions, then we can say that
$$\zeta_c = \int_{\vartheta} \gamma_c(\psi)\, d\psi. \qquad (7)$$
Thus, we can deduce that
$$\zeta_c \propto \int_{\vartheta} \gamma_c(\theta)\, d\psi \qquad (8)$$
Equation (8) is used to show that $\Theta_c \propto \zeta_c / \Pi_c$, which means that the global background light is proportional to the scattering coefficient and inversely proportional to the total attenuation coefficient. We know the value $\Pi_c \approx 0.025$, but the value of $\zeta_c$ is not clearly stated. Existing models have attempted to model $\zeta_c$ for specific water mediums. Smith and Baker [42] noted that the absorption coefficient varies irregularly with wavelength in the visible-light band. Barnard et al. [43] and Gould et al. [44] observed that the scattering coefficient has an approximately linear relationship with the wavelength of light in all test cases conducted during their experiments.
In this paper, we present a novel background light based on a moving average of the impact of suspended light and the brightest pixels within the image. Suppose the brightest pixel, located at the brightest point in the image, is estimated by
$$\Theta_c = \Gamma_c\Big(\operatorname*{arg\,max}\big(\min_{y\in\Omega}\big(\min_c \Gamma_c(y)\big)\big)\Big) \qquad (9)$$
We eliminate the effect of the red channel by obtaining the brightest pixel from the green and blue channels only, i.e., by restricting $\Gamma_c(y)$ in Equation (9) to $c \in \{G, B\}$. $\Theta_c$ is selected as the brightest pixel or the average value of the top 0.1% brightest pixels in the hazed image. We also consider the maximum difference between the red channel and the green-blue channels in the hazed image. Since the red channel attenuates much faster than the green and blue channels in underwater hazed images, $\Theta_c$ can be interpreted as
$$\Theta_c = \Gamma_c\Big(\operatorname*{arg\,max}\big|\max_{y\in\Omega}\Gamma_r(y) - \max_{y\in\Omega}\Gamma_c(y)\big|\Big). \qquad (10)$$
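As a rough, whole-image reading of Equations (9) and (10) (the patch-wise search, the 0.1% averaging, and the red-channel exclusion are omitted), the following NumPy sketch picks two background-light candidates; an RGB channel order is assumed.

```python
import numpy as np

def ambient_from_dark_channel(img):
    """Equation (9): the pixel whose per-pixel channel minimum is brightest."""
    dark = img.min(axis=2)                       # min over color channels at each pixel
    y, x = np.unravel_index(dark.argmax(), dark.shape)
    return img[y, x, :]                          # one Theta_c candidate per channel

def ambient_from_red_difference(img):
    """Spirit of Equation (10): exploit the faster attenuation of red by locating
    the pixel with the largest gap between red and the other channels."""
    gap = np.abs(img[..., 1:].max(axis=2) - img[..., 0])
    y, x = np.unravel_index(gap.argmax(), gap.shape)
    return img[y, x, :]

img = np.random.rand(256, 256, 3)
print(ambient_from_dark_channel(img), ambient_from_red_difference(img))
```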
The input image was first segmented into local and global patches to estimate $\Theta_c$ in all the patches. The estimation entailed selecting the top 0.01% least blurry and brightest pixels in all the patches. The region with the lowest variance was then selected with the help of the image blurriness map $\Phi_{\mathrm{init}}$ in Equation (11). The average pixel of the $\Theta_c$ candidates for the local and global patches was compared, and the standard deviation was obtained for all the patches.
$$\Phi_{\mathrm{init}}(x) = \frac{1}{n}\sum_{i=1}^{n}\big|I_g(x) - \mathrm{Gau}^{\varsigma_i, n}(x)\big|, \qquad (11)$$
where $I_g$ is the grayscale of the underwater hazed image $I_c$, $\mathrm{Gau}^{\varsigma_i, n}(x)$ is the underwater hazed image filtered by a $\varsigma_i \times \varsigma_i$ spatial Gaussian filter with variance $\varsigma_i^2$, $\varsigma_i = 2^i n + 1$, and $n$ is set to 16. We then use the max filter to calculate the rough blurriness map $\Phi_\varsigma$ as:
$$\Phi_\varsigma = \max_{y\in\Psi}\Phi_{\mathrm{init}}(y) \qquad (12)$$
where $\Psi$ is set to $16\times16$ pixels for a test image of $256\times256$ pixels. $\Phi_\varsigma$ helps fill the holes through pixel stretching, which may introduce artifacts. The additional artifacts may add noise to the image; hence, they must be managed. Thus, the refined blurriness map $\Phi_{\mathrm{refined}}^{\varsigma}$ is given as
$$\Phi_{\mathrm{refined}}^{\varsigma}(x) = \Lambda_c\big\{\gamma_\varsigma\big[\Phi_\varsigma(x)\big]\big\} \qquad (13)$$
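The sketch below approximates the blurriness map of Equations (11) and (12) with SciPy filters. The paper sets $n = 16$; the default here is smaller purely to keep the example fast, and the refinement step of Equation (13) is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def blurriness_map(gray, n=4, window=16):
    """Rough blurriness map: average absolute difference between the grayscale image
    and progressively stronger Gaussian blurs (Equation (11)), then a local max
    filter over a window (Equation (12), Psi = 16x16 for a 256x256 image)."""
    phi_init = np.zeros_like(gray, dtype=float)
    for i in range(1, n + 1):
        sigma = 2 ** i * n + 1                   # varsigma_i = 2^i * n + 1, as in the text
        phi_init += np.abs(gray - gaussian_filter(gray, sigma=sigma))
    phi_init /= n
    return maximum_filter(phi_init, size=window)

gray = np.random.rand(256, 256)
phi = blurriness_map(gray)
```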
Images under sufficient light have a higher $\Theta_c$, and we use a weighted combination of the maximum and minimum candidates to estimate the final $\Theta_c$ as
$$\Theta_c = \rho\,\Theta_{\max}^{c} + (1-\rho)\,\Theta_{\min}^{c} + \rho\,\Theta_{GMIN}^{c} + \rho\,\Theta_{GMAX}^{c}, \qquad c\in(R,G,B)\ \big|\ R = 620,\ G = 540\ \&\ B = 450 \qquad (14)$$
where $\rho$ is a selective coefficient, and $\Theta_{\max}^{c}$ and $\Theta_{\min}^{c}$ are the maximum and minimum candidate $\Theta_c$, respectively. We simultaneously normalize Equation (14) by using the ratios of the attenuation coefficients between different color channels as follows
$$\frac{c_G}{c_R} = \frac{\zeta_G\,\Theta_R}{\zeta_R\,\Theta_G}, \qquad \frac{c_B}{c_R} = \frac{\zeta_B\,\Theta_R}{\zeta_R\,\Theta_B}, \qquad (15)$$
and
$$\frac{\Pi_R}{\Pi_B} = \frac{(-0.00113\,R + 1.62517)\,\Theta_B}{(-0.00113\,B + 1.62517)\,\Theta_R}, \qquad \frac{\Pi_G}{\Pi_B} = \frac{(-0.00113\,G + 1.62517)\,\Theta_B}{(-0.00113\,B + 1.62517)\,\Theta_G}, \qquad (16)$$
where $\Pi_R/\Pi_B$ and $\Pi_G/\Pi_B$ are the red-blue and green-blue attenuation coefficient ratios, respectively; $\Theta_R$, $\Theta_B$, and $\Theta_G$ are the brightest pixels in the red, blue, and green channels, respectively; and $R$, $G$, $B$ are the wavelength values defined in Equation (14). Equations (9), (10) and (14) form an optimization problem whose solution is attained by the block-greedy algorithm, with Equations (15) and (16) serving as constraints. The solution estimates the background light used in the algorithm. The details of the block-greedy algorithm are discussed in [45,46]. A summary of the greedy algorithm for the proposed solution $\Theta_c$ is presented in Algorithm 1.
Algorithm 1: Algorithm for (14).
Input: Underwater image in
Output: Ambient light Θ c out
  Initialisation:
 1: Let
   
$$\Theta_c = \begin{cases} \rho\,\Theta_{\max}^{c} + (1-\rho)\,\Theta_{\min}^{c} + \rho\,\Theta_{GMIN}^{c} + \rho\,\Theta_{GMAX}^{c} \\[4pt] \Gamma_c\Big(\operatorname*{arg\,max}\big(\min_{y\in\Omega}\big(\min_c \Gamma_c(y)\big)\big)\Big) \\[4pt] \Gamma_c\Big(\operatorname*{arg\,max}\big|\max_{y\in\Omega}\Gamma_r(y) - \max_{y\in\Omega}\Gamma_c(y)\big|\Big) \end{cases}$$
 2: $c\in(R,G,B)\ \big|\ R = 620,\ G = 540\ \&\ B = 450$
 3: Subject to
   $\frac{c_G}{c_R} = \frac{\zeta_G\,\Theta_R}{\zeta_R\,\Theta_G}$,
   $\frac{c_B}{c_R} = \frac{\zeta_B\,\Theta_R}{\zeta_R\,\Theta_B}$,
   $\frac{\Pi_R}{\Pi_B} = \frac{(-0.00113\,R + 1.62517)\,\Theta_B}{(-0.00113\,B + 1.62517)\,\Theta_R}$
   $\frac{\Pi_G}{\Pi_B} = \frac{(-0.00113\,G + 1.62517)\,\Theta_B}{(-0.00113\,B + 1.62517)\,\Theta_G}$
 4: Compute Θ c
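To make the combination in Equation (14) concrete, the sketch below blends per-patch background-light candidates. It is a simplified stand-in only: the block-greedy search and the attenuation-ratio constraints of Equations (15) and (16) are not enforced here, and the final clipping step is our own assumption.

```python
import numpy as np

def combine_ambient_candidates(local_candidates, global_candidates, rho=0.3):
    """Weighted blend in the spirit of Equation (14). Each input is an array of
    shape (k, 3): one Theta_c candidate per local / global patch, e.g., obtained
    from Equations (9) and (10)."""
    theta_max = local_candidates.max(axis=0)     # Theta_max^c
    theta_min = local_candidates.min(axis=0)     # Theta_min^c
    theta_gmin = global_candidates.min(axis=0)   # Theta_GMIN^c
    theta_gmax = global_candidates.max(axis=0)   # Theta_GMAX^c
    theta = rho * theta_max + (1 - rho) * theta_min + rho * theta_gmin + rho * theta_gmax
    return np.clip(theta, 0.0, 1.0)              # keep the estimate in the valid intensity range

local = np.random.rand(64, 3)                    # e.g., 64 local-patch candidates
glob = np.random.rand(16, 3)                     # e.g., 16 global-patch candidates
print(combine_ambient_candidates(local, glob))
```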

3.3. Discontinuity of Pixels

Brandou et al. [47] and Queiroz-Neto et al. [48] noted that underwater images exhibit low-quality features due to discontinuities between pixels, and proposed using the Lie group to correct the discontinuity between pixels in underwater images. The discontinuities in the pixels arise from the reflectivity of light caused by water particles. One of the major hurdles in underwater image dehazing is this constant discontinuity in the pixels. Thus, we characterize the discontinuity of the image features via the image pixels. If we assume two pixel functions denoted by $f(i)$ and $g(i)$ are continuous, then their product $fg$, sum $f+g$, and composition $f\circ g$ are continuous. Suppose $x(i)$ is a positive function of underwater hazed pixels. Then the following statements are true: if $x(i)$ is continuous at $i_0$, then $\sqrt{x}$ is continuous at $i_0$. This is because
$$\lim_{i\to i_0}\sqrt{x(i)} = \sqrt{\lim_{i\to i_0} x(i)} = \sqrt{x(i_0)} \qquad (17)$$
If $x(i)$ is differentiable at $i_0$, then $\sqrt{x(i)}$ is differentiable at $i_0$, and the derivative of $\sqrt{x(i)}$ is $\frac{x'(i)}{2\sqrt{x(i)}}$. We show that the pixel of the hazed image is continuous at $i_0$, such that
$$\lim_{i\to i_0}\frac{x'(i)}{2\sqrt{x(i)}} = \frac{\lim_{i\to i_0} x'(i)}{\lim_{i\to i_0} 2\sqrt{x(i)}} = \frac{x'(i_0)}{2\sqrt{x(i_0)}} \qquad (18)$$
Thus, $\sqrt{x(i)}$ is differentiable at $i_0$. Suppose $\varphi: \mathbb{R}^2 \to \mathbb{R}^2$ is a smooth function on $\mathbb{R}^2$. If the distance between the neighborhood pixels is given by
$$E_{\mathrm{pixel}}(t) = \big(I\big(\varphi^{-1}(t)\big) - J\big)^2, \qquad (19)$$
and is discontinuous at $t_0$, then there exist $(i_0, j_0) \in \mathbb{R}^2$ such that $I(i,j)$, where $i$ and $j$ are the local and global neighborhood pixels, respectively, is discontinuous at $(i_0, j_0) \in \mathbb{R}^2$.
There is a need to transform both local and global pixel values continuously to manage haze in the image. We have defined the discontinuity points in Equation (19). Burger and Burge [49] noted that interpolation (the process of estimating intermediate values of the signal at continuous positions in an attempt to reconstruct the original set of discrete signals) maps the discrete pixel positions. This is achieved via geometric transformation. Suppose we define the linear interpolation as
$$\mathrm{lerp}(f, g, t) = \exp\big((1-t)\,\log f + t\,\log g\big). \qquad (20)$$
Equation (20) satisfies a subdivision identity analogous to
$$f^{\,n}\, g^{\,n} = (f\, g)^{\,n} \qquad (21)$$
where $n$ is the image patch size in the local $f$ and global $g$ pixel neighborhoods. Equation (21) is called splitting invariance, which is a correctness criterion for computations on pixel values. The logic governing Equation (21) suggests that the result of the computation depends on the image being dehazed. We increase $n$ to increase the pixel sampling rate, increase the accuracy, and preserve and improve the color hue. To make interpolation links between local and global pixels, we define
$$f^{\,i}\, f^{\,j} = f^{\,i+j}, \qquad (22)$$
such that
$$f\, g = \exp(\log f + \log g). \qquad (23)$$
Equations (20)–(23) indicate that combining $f(i,j)$ and $g(i,j)$ yields qualitative pixel orientations, which extract image features while minimizing defects arising from the reflectance of light in water.
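A small numerical check of the log-space interpolation of Equation (20) and the splitting invariance of Equation (21) is shown below for positive pixel values (the toy values are our own):

```python
import numpy as np

def lerp(f, g, t):
    """Log-space linear interpolation of Equation (20); f and g must be positive."""
    return np.exp((1.0 - t) * np.log(f) + t * np.log(g))

f, g, n = 0.4, 0.9, 3
# Splitting invariance (Equation (21)): combining n-th powers equals the n-th power
# of the combination, since f * g = exp(log f + log g) (Equation (23)).
assert np.isclose((f ** n) * (g ** n), (f * g) ** n)
print(lerp(f, g, 0.5))   # geometric mean of the local and global pixel values at t = 0.5
```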
We limit the detail information in the images in order to improve the image quality. Thus, we set $[1, 0]\,[0, (i,j)] = [1, (1,1)]$. This renders the scene's transmission medium along the lines of human visual perception, which reduces saturation of color and opacity on the image surface. It is achieved by moderating the scene depth values such that, for all $\lambda(i,j)$, $0 \le (i,j) \le 1$, if the interpolated scene transmission medium defined by $S_{i,j}$ is $T_{i,j}$. Suppose $T_{i+1,j+1}$ is divided into two parts: $S_{i,j}$ (i.e., $F_{i,j}$) and the split variance between $S_{i,j}$ and $S_{i+1,j+1}$. We compute the dehazed pixel based on the local and global pixel-patch rendering, where $T_{i,j}$ is passed over $(i,j)$ and, similarly, $T_{i+1,j+1} = T_{i,j}$ over $(i+1,j+1)$. Passing local patches over global ones, and vice versa, helps capture image features, thus improving the quality of the dehazed underwater images. The effect of the continuity of pixels in the proposed technique is illustrated in Figure 3.

3.4. Underwater Image Restoration

We have presented a method that examines the scene depth and background light and offers a solution to correct pixel discontinuities within local and global image neighborhoods in Sections 3.2 and 3.3. The proposed technique is implemented via a novel Convolutional Neural Network (CNN). The details and literature on CNNs and their many applications in image processing are presented in numerous studies [50,51,52,53]. The architecture of our proposed CNN for underwater image dehazing is presented in Figure 4.

3.4.1. Global Light and Local Light Network

The existing underwater image dehazing techniques focus on global ambient light without focusing on the local ambient light [23,24,25,30]. The proposed technique uses image pixels via Equations (9), (10) and (14) to optimally estimate both the global and local ambient light. Unlike existing studies, the estimation of local ambient light helps in the extraction of finer details in the final dehazed images (see Figures 7–11). The inclusion of Equations (9), (10) and (14) improves the accuracy of the detail extraction based on the architecture of the GL and LL network in CNN, presented in Figure 5. The network achieves its objective by learning to map pixels between input underwater images and their corresponding surrounding light.
Figure 5 indicates that the proposed architecture for the global-light and local-light network consists of three convolution layers and two max-pooling layers. The convolution layers extract features. The max-pooling layers help to overcome the local sensitivity arising from pixel correlations and reduce the resolution of the feature maps. The global pixel correlation may be linear; thus, the last convolution layer constrains the nonlinear regression. This ensures that linear relationships do not prevent the mapping of similar pixels. CNN mapping of scene radiance is known to have slow convergence [54]. To remedy this, we add the widely used ReLU layer after every convolution layer. The addition of ReLU also helps the architecture avoid settling at local minima during the training phase.
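As a rough PyTorch sketch (not the authors' trained network), the block below mirrors the layer count described above: three convolutions, each followed by ReLU, interleaved with two max-pooling layers. The channel widths, kernel sizes, and the final pooling to one ambient value per color channel are our own assumptions.

```python
import torch
import torch.nn as nn

class AmbientLightNet(nn.Module):
    """Illustrative global/local light network: 3 convolutions + ReLU, 2 max-pools."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)      # collapse to one value per color channel

    def forward(self, x):
        return self.pool(self.features(x)).flatten(1)   # (batch, 3) ambient-light estimate

net = AmbientLightNet()
print(net(torch.rand(1, 3, 256, 256)).shape)     # torch.Size([1, 3])
```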
Unlike the existing algorithms [55,56], the proposed algorithm estimates the ambient light based on local and global pixel intensities. In addition, the proposed algorithm eliminates the effects of the red, green, and blue channels separately. The ambient light is the optimal value obtained by solving Equations (9), (10) and (14) via the block-greedy algorithm, which helps in the smart selection of pixels between local and global patches. The training procedure also entails downsampling the input images to increase the accuracy of the results. The patches used during the experiment include both local and global patches, further improving the accuracy of the results (see samples of the results in Figures 3–11).

3.4.2. Depth Estimation Network

The depth estimation network takes the hazed underwater image as input and requires the input image to be RGB. The depth information describes the distance of the objects in the image from the viewpoint, $d(x)$. Existing depth estimation networks, such as that of Luo et al. [1], have relied on an initial estimate of the global light based on reflectivity and geometry. In the proposed technique, the architecture is instead based on the approximate pixel differences of Equation (6). Here, the pixel differences depend on the augmented optimal difference between the three color channels, unlike that of Luo et al. [1].
The cross-layer connection in Figure 6 helps preserve detail features. This is ensured by the first connection, that is, the connection between the convolution layer and ReLU, which compensates for information that may be lost during depth estimation. The information that is most visibly preserved and improved by this connection is image edges. The multi-level pyramid pooling connection helps preserve features during the transformation from the depth estimate to the final dehazed image. The upsampling at the convolution and pooling end helps ensure that local and global features are accurately incorporated into the final image, even across different resolutions, with the help of the pixel discontinuity correction.
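The cross-layer idea can be illustrated with a minimal residual-style block (an assumption of ours, not the exact connection in Figure 6): the convolution output is added back to its input so that fine detail such as edges survives the depth-estimation stage.

```python
import torch
import torch.nn as nn

class CrossLayerBlock(nn.Module):
    """Toy cross-layer (skip) connection used for illustration only."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x)) + x       # skip connection preserves input detail

block = CrossLayerBlock()
print(block(torch.rand(1, 32, 64, 64)).shape)
```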

3.4.3. Minimum Energy

The scene depth changes gradually and entails variations in the local and global neighborhood pixels. Thus, accurate depth-variation estimation depends on features from both the local and global neighborhood pixels. These are attainable via a novel energy function in the depth estimation network. The energy function is based on a novel global-local Markov random field already discussed in detail in [12]. The resultant energy function is optimized by the graph cut, as discussed in Alenezi and Ganesan [12]. However, in this model, we use the color channel features as representative of both the global and local color moments proposed by [57]. This contrasts the super-pixels in the global and local neighborhoods, as presented in [12]. Thus, the ambient light used represents the relationship between global and local pixels and super-pixels. This approach extends global and local consistency, which protects the proposed convolutional neural network from the problem of over-smoothing far-apart pixels. It also serves to avoid over-saturation of color and to enhance sharper boundaries.

3.4.4. Λ -Estimator

The depth estimate $d(x)$ leads to the transmission estimate $\tau_c(x)$, and the optimal ambient light $\Theta_c$ has already been estimated. Rearranging Equation (1), the scene radiance $\Lambda_c$ can be restored by Equation (24):
$$\Lambda_c = \frac{\Gamma_c - \Theta_c\,\big(1 - \tau_c(x)\big)}{\big(1 + \eta_c(x)\big)\,\tau_c(x)} \qquad (24)$$
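A direct NumPy reading of Equation (24) is sketched below; the transmission floor and the zero forward-scatter map are our own simplifications to keep the example self-contained.

```python
import numpy as np

def restore_scene_radiance(hazed, theta, tau, eta, tau_floor=0.1):
    """Equation (24): invert the image formation model per color channel.
    hazed: (H, W, 3) image; theta: (3,) ambient light; tau, eta: (H, W) maps."""
    tau = np.maximum(tau, tau_floor)             # avoid blow-up where transmission is thin
    direct = hazed - theta[None, None, :] * (1.0 - tau)[..., None]
    return direct / ((1.0 + eta) * tau)[..., None]

hazed = np.random.rand(256, 256, 3)
theta = np.array([0.6, 0.8, 0.9])
tau = np.exp(-0.025 * np.full((256, 256), 5.0))  # transmission from Equation (2)
eta = np.zeros((256, 256))                       # forward-scatter term set to zero here
out = np.clip(restore_scene_radiance(hazed, theta, tau, eta), 0.0, 1.0)
```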

4. Experiments

4.1. Data and Implementation

We demonstrate the effectiveness of the proposed method by comparing the simulation results (see Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11) with the existing leading state-of-the-art underwater dehazing approaches. These approaches are presented in Table 1.
The training process for the proposed dehazing technique was performed on a BIZON X5000 G2 with 16 GB RAM. The image databases used were obtained from the existing state-of-the-art papers. These images were partitioned into 65,536 local and 16,384 global patches. As many patches as possible were used to increase the pixel classification accuracy, which increases the information content of the final image. The estimated parameter values and essential items used during the experiment are summarized in Table 2.

4.2. Evaluation Metrics

We base our performance evaluation on objective measures; thus, we used entropy [63], Underwater Color Image Quality Evaluation (UCIQE) [64], Underwater Image Quality Measure (UIQM) [65], Underwater Image Colorfulness Measure (UICM) [65] and Underwater Image Sharpness Measure (UISM) [65].
Table 3 presents a summary of the average and standard deviation values of these metrics for 32 underwater images used during evaluation. However, for the visual presentation, a sample of the results of the images is presented in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.
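Of these metrics, entropy [63] is the simplest to state; a minimal sketch is given below (the UCIQE, UIQM, UICM, and UISM scores follow their cited definitions [64,65] and are not reproduced here).

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of the grayscale histogram; gray is assumed in [0, 1]."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                                 # skip empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

print(image_entropy(np.random.rand(256, 256)))   # near 8 bits for a uniform image
```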

4.3. Results Analysis and Comparison

The results in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 indicate that the proposed results offer images with better visibility than those produced by existing state-of-the-art methods. This observation is backed by the quantitative data presented in Table 3 and Figure 12 and Figure 13.
Figure 7 compares the proposed results with recent underwater dehazing methods, RLSCD [25] and HUIE [61]. A visual analysis of our method compared to those presented in [25,61] shows that our results are clearer, with sharper details. Our results are perceptibly better than the existing state-of-the-art methods, which resonates with the performance evaluation metrics presented in Table 3.
Figure 8, as in Figure 7, compares the proposed results with Galdran et al. [58], Guo et al. [59], Li et al. [60], and Deng et al.’s [25] results. The visual analysis suggests that the proposed results supersede the existing results in all the performance evaluation metrics presented in Table 3.
Figure 9 indicates the strength of the proposed method compared to the benchmark algorithm. The novel architecture proposed for implementation accompanied by a greedy algorithm helps estimate optimal ambient light. When this is added to a color channel-based depth estimate, it increases the extraction accuracy. This accuracy is visible in clearly extracted features of the proposed underwater dehazed images compared to the benchmark algorithm.
Figure 10 also shows the strength of the proposed method in terms of finer detail extraction and color balance. A visual analysis indicates that the edges of fish and rocks are more easily distinguishable than with the benchmark algorithms. More features are also visible in the proposed results than in the benchmarks in all the examples presented. The color balance explains why the UICM measure is significantly higher in the proposed method than in the existing methods. The sharpness of color and clarity of the features are also visible in Figure 8e compared to its counterpart, the proposed result in Figure 8f. The proposed results are more transparent in both cases and have better visual perceptibility than any other results. The colors and edges are more prominent than in the existing results. This explains why the UCIQE, UISM, and UICM of the proposed method supersede those of the existing results.
Figure 11 shows the versatility of the proposed method in the presence of highly varied colors. While the existing benchmark algorithms show precise details even amid minor variations, the proposed results are also excellent. This can be attributed to the pixel consistency in the overlapping regions due to the novel cost function used in the algorithm. The estimation of the optimal ambient light is also assumed to contribute significantly to this success, as local and global pixels are considered in the proposed algorithm, unlike in the existing techniques. The handling of pixel discontinuities by limiting details and information has also helped improve the image quality. The passing of local patch details over global ones, and vice versa, has also helped capture image features. Thus, there is an improvement in image quality, as depicted by the higher average values, μ, and consistency (standard deviation values), σ, of entropy, UCIQE, UIQM, UICM, and UISM in Table 3. The last image in Figure 9 shows one of the weaknesses of the proposed method, which is the over-saturation of the blue region. The proposed technique relies on the mean intensity function of the absolute difference between the color channels: red and blue, blue and green, and green and red. Blue is a dominant color, and a lack of clear distinction at the boundary of blue and green makes the blue-green mean absolute difference exaggerated, hence over-saturating the excessively bluish regions.
Figure 12 shows the effectiveness of the proposed transmission map compared to the existing technique [23]. The effectiveness arises from the proposed absolute difference between the color channels. A visual analysis of the results also suggests that the proposed technique yields clearer edges than the comparison technique.
Figure 13 shows the effectiveness of the proposed ambient light estimation. A visual analysis of the results without ambient light suggests poor color compared to the proposed results with the novel ambient light estimation. This observation further strengthens the contribution of the proposed technique.

5. Conclusions

We present a technique based on the pixel difference between global and local patches in scene depth estimation. Specifically, the pixel difference is based on the absolute mean intensity functions of the green and red channels, the green and blue channels, and the red and blue channels. This arrangement helps in the extraction of image details and the strengthening of artifacts. The global background light used is based on an assumed moving average of the impact of suspended light and the brightest pixels within the image. We normalized the attenuation ratios of the different color channels with the help of a block-greedy algorithm to select the region with the lowest variance. The discontinuity associated with underwater images is corrected by a transformation of both local and global pixel values. The increase in pixel continuity increases the accuracy and preserves and improves the color hue, which is visible in the simulated results presented in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. We implement the proposed algorithm using a novel CNN and a block-greedy algorithm. This combination smartly selects pixels from local and global patches to estimate the optimal ambient light. The unique connections of the CNN tend to yield images with preserved features such as edges, which are visible in the results presented in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. The novel Markov random field-based minimum cost function smoothing leads to smooth edges and improved features, giving the resulting dehazed images higher perceptual quality than existing benchmark algorithms. The smooth connection between local and global pixels within the training patches enhances the resultant underwater dehazed images. The performance of the proposed technique against existing state-of-the-art algorithms using entropy, UCIQE, UIQM, UICM, and UISM, as presented in Table 3, indicates that the proposed technique performs well in terms of average and consistency. One significant weakness of the proposed technique is that it is applicable only to underwater images. Future research could also seek to establish edge strengthening amid color saturation during depth estimation. The effect of the proposed technique on natural hazed images could also be investigated, with minor amendments to the algorithm to remove the elements of the optical properties of water.

Author Contributions

The authors contributed equally in this paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at Jouf University for funding this work through research grant No DSR2020-06-3662.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Luo, Y.; Jiao, H.; Qi, L.; Dong, J.; Zhang, S.; Yu, H. Augmenting depth estimation from deep convolutional neural network using multi-spectral photometric stereo. In Proceedings of the 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), San Francisco, CA, USA, 4–8 August 2017; pp. 1–6. [Google Scholar]
  2. Wang, H.; Li, X.; Jhaveri, R.H.; Gadekallu, T.R.; Zhu, M.; Ahanger, T.A.; Khowaja, S.A. Sparse Bayesian learning based channel estimation in FBMC/OQAM industrial IoT networks. Comput. Commun. 2021, 176, 40–45. [Google Scholar] [CrossRef]
  3. Tiwari, P.; Zhu, H.; Pandey, H.M. DAPath: Distance-aware knowledge graph reasoning based on deep reinforcement learning. Neural Netw. 2021, 135, 1–12. [Google Scholar] [CrossRef]
  4. Jhaveri, R.; Sagar, R.; Srivastava, G.; Gadekallu, T.R.; Aggarwal, V. Fault-resilience for bandwidth management in industrial software-defined networks. IEEE Trans. Netw. Sci. Eng. 2021. [Google Scholar] [CrossRef]
  5. Dhanamjayulu, C.; Nizhal, U.; Maddikunta, P.K.R.; Gadekallu, T.R.; Iwendi, C.; Wei, C.; Xin, Q. Identification of malnutrition and prediction of BMI from facial images using real-time image processing and machine learning. IET Image Process. 2021. [Google Scholar] [CrossRef]
  6. Patel, H.; Singh Rajput, D.; Thippa Reddy, G.; Iwendi, C.; Kashif Bashir, A.; Jo, O. A review on classification of imbalanced data for wireless sensor networks. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720916404. [Google Scholar] [CrossRef]
  7. Tiwari, P.; Uprety, S.; Dehdashti, S.; Hossain, M.S. TermInformer: Unsupervised term mining and analysis in biomedical literature. Neural Comput. Appl. 2020, 1–14. [Google Scholar] [CrossRef] [PubMed]
  8. Iwendi, C.; Rehman, S.U.; Javed, A.R.; Khan, S.; Srivastava, G. Sustainable Security for the Internet of Things Using Artificial Intelligence Architectures. ACM Trans. Internet Technol. (TOIT) 2021, 21, 1–22. [Google Scholar] [CrossRef]
  9. Latif, S.A.; Wen, F.B.X.; Iwendi, C.; Li-li, F.W.; Mohsin, S.M.; Han, Z.; Band, S.S. AI-empowered, blockchain and SDN integrated security architecture for IoT network of cyber physical systems. Comput. Commun. 2021, 181, 274–283. [Google Scholar] [CrossRef]
  10. Alenezi, F.; Salari, E. A Fuzzy-Based Medical Image Fusion Using a Combination of Maximum Selection And Gabor Filters. Int. J. Eng. Sci. 2018, 9, 118–129. [Google Scholar]
  11. Alenezi, F.; Salari, E. Novel Technique for Improved Texture and Information Content of Fused Medical Images. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Louisville, KY, USA, 6–8 December 2018; pp. 348–353. [Google Scholar]
  12. Alenezi, F.S.; Ganesan, S. Geometric-Pixel Guided Single-Pass Convolution Neural Network With Graph Cut for Image Dehazing. IEEE Access 2021, 9, 29380–29391. [Google Scholar] [CrossRef]
  13. Alenezi, F.; Salari, E.; Verma, A. A Novel Image Fusion Method Which Combines Wiener Filtering, Pulsed Chain Neural Networks and Discrete Wavelet Transforms for Medical Imaging Applications. Int. J. Comput. Sci. Technol. 2018, 9, 9–16. [Google Scholar]
  14. Gu, K.; Tao, D.; Qiao, J.F.; Lin, W. Learning a no-reference quality assessment model of enhanced images with big data. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 1301–1313. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-haze: A dehazing benchmark with real hazy and haze-free outdoor images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 754–762. [Google Scholar]
  16. Downing, P.E.; Jiang, Y.; Shuman, M.; Kanwisher, N. A cortical area selective for visual processing of the human body. Science 2001, 293, 2470–2473. [Google Scholar] [CrossRef]
  17. Gu, Z.; Ju, M.; Zhang, D. A single image dehazing method using average saturation prior. Math. Probl. Eng. 2017, 2017. [Google Scholar] [CrossRef]
  18. Nishino, K.; Kratz, L.; Lombardi, S. Bayesian defogging. Int. J. Comput. Vis. 2012, 98, 263–278. [Google Scholar] [CrossRef]
  19. Galdran, A.; Alvarez-Gila, A.; Bria, A.; Vazquez-Corral, J.; Bertalmío, M. On the duality between retinex and image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8212–8221. [Google Scholar]
  20. Khoond, R.; Goyal, B.; Dogra, A. Image Enhancement Using Nonlocal Prior and Gradient Residual Minimization for Improved 64 Visualization of Deep Underwater Image. In Computational Intelligence Methods for Super-Resolution in Image Processing Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 261–278. [Google Scholar]
  21. Talebi, H.; Milanfar, P. Learned perceptual image enhancement. In Proceedings of the 2018 IEEE International Conference on Computational Photography (ICCP), Pittsburgh, PA, USA, 4–6 May 2018; pp. 1–13. [Google Scholar]
  22. Xiong, J.; Zhuang, P.; Zhang, Y. An Efficient Underwater Image Enhancement Model With Extensive Beer-Lambert Law. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2021; pp. 893–897. [Google Scholar]
  23. Park, E.; Sim, J.Y. Underwater image restoration using geodesic color distance and complete image formation model. IEEE Access 2020, 8, 157918–157930. [Google Scholar] [CrossRef]
  24. Li, H.; Zhuang, P.; Wei, W.; Li, J. Underwater Image Enhancement Based on Dehazing and Color Correction. In Proceedings of the 2019 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), Xiamen, China, 16–18 December 2019; pp. 1365–1370. [Google Scholar]
  25. Deng, X.; Wang, H.; Liu, X. Underwater image enhancement based on removing light source color and dehazing. IEEE Access 2019, 7, 114297–114309. [Google Scholar] [CrossRef]
  26. Wang, Y.; Zhang, J.; Cao, Y.; Wang, Z. A deep CNN method for underwater image enhancement. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1382–1386. [Google Scholar]
  27. Wang, N.; Zheng, H.; Zheng, B. Underwater image restoration via maximum attenuation identification. IEEE Access 2017, 5, 18941–18952. [Google Scholar] [CrossRef]
  28. Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the Oceans 2010 Mts/IEEE Seattle, Seattle, WA, USA, 20–23 September 2010; pp. 1–8. [Google Scholar]
  29. Berman, D.; Treibitz, T.; Avidan, S. Diving into haze-lines: Color restoration of underwater images. In Proceedings of the British Machine Vision Conference (BMVC), London, UK, 4–7 September 2017; Volume 1. [Google Scholar]
  30. Berman, D.; Levy, D.; Avidan, S.; Treibitz, T. Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2822–2837. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Chiang, J.Y.; Chen, Y.C. Underwater image enhancement by wavelength compensation and dehazing. IEEE Trans. Image Process. 2011, 21, 1756–1769. [Google Scholar] [CrossRef] [PubMed]
  32. Wen, H.; Tian, Y.; Huang, T.; Gao, W. Single underwater image enhancement with a new optical model. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 753–756. [Google Scholar]
  33. Jordt, A. Underwater 3D Reconstruction Based on Physical Models for Refraction and Underwater Light Propagation. Ph.D. Thesis, Christian-Albrechts-Universität zu Kiel, Kiel, Germany, 2013. [Google Scholar]
  34. Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  35. Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean Eng. 2015, 94, 163–172. [Google Scholar] [CrossRef]
  36. Spinrad, R.W.; Carder, K.L.; Perry, M.J. Ocean Optics; Oxford University Press: Oxford, UK, 1994; Volume 25. [Google Scholar]
  37. Ruben, L.D.; Szupiany, R.N.; Latosinski, F.; Weibel, C.L.; Wood, M.; Boldt, J. Acoustic Sediment Estimation Toolbox (ASET): A software package for calibrating and processing TRDI ADCP data to compute suspended-sediment transport in sandy rivers. Comput. Geosci. 2020, 140, 104499. [Google Scholar] [CrossRef]
  38. Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Pacific Rim Conference on Multimedia; Springer: Berlin/Heidelberg, Germany, 2018; pp. 678–688. [Google Scholar]
  39. Li, C.; Guo, J.; Chen, S.; Tang, Y.; Pang, Y.; Wang, J. Underwater image restoration based on minimum information loss principle and optical properties of underwater imaging. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1993–1997. [Google Scholar]
  40. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar]
  41. Peng, L.; Li, B. Single Image Dehazing Based on Improved Dark Channel Prior and Unsharp Masking Algorithm. In International Conference on Intelligent Computing; Springer: Berlin/Heidelberg, Germany, 2018; pp. 347–358. [Google Scholar]
  42. Smith, R.C.; Baker, K.S. Optical properties of the clearest natural waters (200–800 nm). Appl. Opt. 1981, 20, 177–184. [Google Scholar] [CrossRef] [PubMed]
  43. Barnard, A.H.; Pegau, W.S.; Zaneveld, J.R.V. Global relationships of the inherent optical properties of the oceans. J. Geophys. Res. Ocean 1998, 103, 24955–24968. [Google Scholar] [CrossRef]
  44. Gould, R.W.; Arnone, R.A.; Martinolich, P.M. Spectral dependence of the scattering coefficient in case 1 and case 2 waters. Appl. Opt. 1999, 38, 2377–2383. [Google Scholar] [CrossRef]
  45. Eden, A.; Feige, U.; Feldman, M. Max-min greedy matching. In Proceedings of the 14th Workshop on the Economics of Networks, Systems and Computation, Phoenix, AZ, USA, 28 June 2019; p. 1. [Google Scholar]
  46. Chen, J.; Guan, B.; Wang, H.; Zhang, X.; Tang, Y.; Hu, W. Image thresholding segmentation based on two dimensional histogram using gray level and local entropy information. IEEE Access 2017, 6, 5269–5275. [Google Scholar] [CrossRef]
  47. Brandou, V.; Allais, A.G.; Perrier, M.; Malis, E.; Rives, P.; Sarrazin, J.; Sarradin, P.M. 3D reconstruction of natural underwater scenes using the stereovision system IRIS. In Proceedings of the OCEANS 2007-Europe, Aberdeen, UK, 18–21 June 2007; pp. 1–6. [Google Scholar]
  48. Queiroz-Neto, J.P.; Carceroni, R.; Barros, W.; Campos, M. Underwater stereo. In Proceedings of the 17th Brazilian Symposium on Computer Graphics and Image Processing, Curitiba, Brazil, 20 October 2004; pp. 170–177. [Google Scholar]
  49. Burger, W.; Burge, M.J. Digital Image Processing: An Algorithmic Introduction Using Java; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  50. Monti, F.; Boscaini, D.; Masci, J.; Rodola, E.; Svoboda, J.; Bronstein, M.M. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5115–5124. [Google Scholar]
  51. Garcia, V.; Bruna, J. Few-shot learning with graph neural networks. arXiv 2017, arXiv:1711.04043. [Google Scholar]
  52. Narasimhan, M.; Lazebnik, S.; Schwing, A. Out of the box: Reasoning with graph convolution nets for factual visual question answering. In Advances in Neural Information Processing Systems; 2018; pp. 2654–2665. Available online: https://arxiv.org/abs/1811.00538 (accessed on 1 November 2018).
  53. Cui, Z.; Xu, C.; Zheng, W.; Yang, J. Context-dependent diffusion network for visual relationship detection. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Korea, 22–26 October 2018; pp. 1475–1482. [Google Scholar]
  54. Al-Barazanchi, H.A.; Qassim, H.; Verma, A. Novel CNN architecture with residual learning and deep supervision for large-scale scene image categorization. In Proceedings of the 2016 IEEE 7th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 20–22 October 2016; pp. 1–7. [Google Scholar]
  55. Yang, D.; Peltoketo, V.T.; Kamarainen, J.K. CNN-Based Cross-Dataset No-Reference Image Quality Assessment. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
  56. Cheng, P.; He, S.; Cheng, J.; Luan, X.; Liu, F. Asynchronous output feedback control for a class of conic-type nonlinear hidden Markov jump systems within a finite-time interval. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 7644–7651. [Google Scholar] [CrossRef]
  57. Park, S.H.; Lee, S.; Yun, I.D.; Lee, S.U. Hierarchical MRF of globally consistent localized classifiers for 3D medical image segmentation. Pattern Recognit. 2013, 46, 2408–2419. [Google Scholar] [CrossRef]
  58. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef] [Green Version]
  59. Guo, Y.; Li, H.; Zhuang, P. Underwater image enhancement using a multiscale dense generative adversarial network. IEEE J. Ocean. Eng. 2019, 45, 862–870. [Google Scholar] [CrossRef]
  60. Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE Signal Process. Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef] [Green Version]
  61. Li, X.; Hou, G.; Tan, L.; Liu, W. A Hybrid Framework for Underwater Image Enhancement. IEEE Access 2020, 8, 197448–197462. [Google Scholar] [CrossRef]
  62. Song, Y.; Li, J.; Wang, X.; Chen, X. Single image dehazing using ranking convolutional neural network. IEEE Trans. Multimed. 2017, 20, 1548–1560. [Google Scholar] [CrossRef] [Green Version]
  63. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  64. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef]
  65. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
Figure 1. Underwater hazing problem model showing back, forward, and direct scattering. Light absorption due to these three types of scattering results in in-camera hazing of scenes that are assumed to be haze-free.
Figure 2. Theoretical attenuation coefficient showing different regions in water. The light gray shaded region corresponds to a grain-size distribution (GSD) of 0.4–4 μm and the darker gray shaded region to a GSD of 4–300 μm. The vertical dotted line represents the mean particle radius (10.5 μm) [37].
Figure 3. The effect of Lie-grouping during training in the proposed technique is visible in the more saturated color of the surf plot of the dehazed underwater image compared with that of the hazed image. The most notable difference is in the red and blue colors of the hazed and dehazed surf plots.
Figure 4. The proposed CNN architecture of our technique consists of three modules: a Global light and Local light network, a depth estimation network, and a Λ-Estimator. The Global light and Local light network estimates the global and local ambient light from the underwater pixels. The depth estimation network estimates the depth of transmission of the underwater image. The Λ-Estimator restores the dehazed image.
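The Λ-Estimator in Figure 4 is the restoration stage. For reference only, the sketch below inverts the standard haze formation model I = J·t + A·(1 − t); the function name, the clipping bounds, and the lower transmission limit t_min are illustrative assumptions and not the paper's exact formulation.

```python
import numpy as np

def restore(hazed: np.ndarray, ambient: np.ndarray,
            transmission: np.ndarray, t_min: float = 0.1) -> np.ndarray:
    """Recover scene radiance from I = J*t + A*(1 - t),
    i.e. J = (I - A) / max(t, t_min) + A, clipped to [0, 1].
    Shown only as a stand-in for the paper's Lambda-Estimator."""
    # hazed: (H, W, 3) in [0, 1]; ambient: (3,); transmission: (H, W)
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over channels
    restored = (hazed - ambient) / t + ambient
    return np.clip(restored, 0.0, 1.0)
```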
Figure 5. The proposed architecture of our Global light and Local light network consists of two operations: convolution and max-pooling. The input of the network is the downsampled underwater image, and the output is the approximated scene light based on global and local ambient light. The scene light depends on global and local pixel relationships and intensities.
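To make the layer pattern in Figure 5 concrete, a minimal PyTorch sketch is given below; the filter counts, kernel sizes, and the 0.5 downsampling factor are illustrative choices, not the network reported in the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class AmbientLightNet(nn.Module):
    """Illustrative global/local ambient-light estimator: a small
    convolution + max-pooling stack on a downsampled RGB image
    (layer sizes are assumptions)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Collapse features to one ambient-light estimate per color channel.
        self.head = nn.Conv2d(32, 3, kernel_size=1)

    def forward(self, img):
        # Downsample the input before feature extraction, as in Figure 5.
        x = F.interpolate(img, scale_factor=0.5, mode='bilinear',
                          align_corners=False)
        x = self.head(self.features(x))
        # Global average over the remaining spatial grid -> per-channel light.
        return x.mean(dim=(2, 3))
```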
Figure 6. The architecture of the depth estimation network consists of three layers. Each layer contains two operations, convolution and pooling. The loss function is evaluated after these operations, before upsampling.
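A minimal sketch of a three-layer depth (transmission) estimator matching the operation order in Figure 6 is shown below, again with assumed channel counts; an MSE loss appears only as a stand-in for the paper's loss function.

```python
import torch.nn as nn
import torch.nn.functional as F

class DepthEstimationNet(nn.Module):
    """Illustrative three-layer depth/transmission estimator:
    convolution + pooling, loss at the coarse output, then
    upsampling back to the input resolution (sizes assumed)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # depth in [0, 1]
        )

    def forward(self, img):
        coarse = self.layers(img)                 # coarse depth map
        return F.interpolate(coarse, size=img.shape[2:],
                             mode='bilinear', align_corners=False)

# Training-step sketch: the loss is evaluated on the coarse prediction
# before upsampling, mirroring the order described in Figure 6, e.g.
# loss = nn.MSELoss()(net.layers(img), coarse_ground_truth)
```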
Figure 7. A subjective comparison of different methods with the proposed results. From left to right: the input (hazed) image, and the results of (a) RLSCD [25], (b) HUIE [61], and the proposed method.
Figure 8. A subjective comparison of different methods with the proposed results. From left to right: (a) the input (hazed) image, and the results of (b) ARC [58], (c) UWGAN [59], (d) WSCT [60], (e) RLSCD [25], and (f) the proposed method.
Figure 9. A subjective comparison of different methods with the proposed results. From top to bottom: (a) the input (hazed) image, and the results of (b) UEBLL [22] and (c) the proposed method.
Figure 10. A subjective comparison of different methods with the proposed results. From left to right: the input (hazed) image, and the results of (a) HUIE [61] and the proposed method.
Figure 11. A subjective comparison of different methods with the proposed results. From left to right: the input (hazed) image, and the results of (a) HUIE [61] and the proposed method.
Figure 12. A subjective comparison of the transmission map of an existing method [23] with the proposed results.
Figure 13. A subjective comparison demonstrating the effectiveness of the proposed ambient light estimation in the proposed technique.
Table 1. List of the existing state-of-the-art techniques used to compare the proposed technique.

Technique Name | Abbreviation | Reference
Automatic Red-Channel method | ARC | [58]
Underwater image enhancement using a Multi-scale dense Generative Adversarial Network | UWGAN | [59]
Weakly Supervised Color Transfer | WSCT | [60]
Underwater image enhancement model with Extensive Beer–Lambert Law | UEBLL | [22]
Underwater image enhancement based on Removal of Light Source Color and Dehazing | RLSCD | [25]
Hybrid framework for Underwater Image Enhancement | HUIE | [61]
Table 2. Values obtained and used during the experiment for the proposed underwater dehazing algorithm.

Item | Experimental Value Range
Average Training Time | (41 min 55 s)–(20 min 53 s)
Learning Rate | 0.095–0.015
Validation Frequency | 1000–4000
Iterations | 33,000–132,000
Estimated λ_g | 98 [62]
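For readers setting up a comparable experiment, the snippet below records the hyperparameter ranges of Table 2 as a training configuration; the optimiser choice (SGD with momentum) and the placeholder model are assumptions made purely for illustration.

```python
import torch

# Illustrative hyperparameters taken from the ranges reported in Table 2.
learning_rate = 0.015          # reported range: 0.015-0.095
validation_frequency = 1000    # reported range: every 1000-4000 iterations
max_iterations = 132_000       # reported range: 33,000-132,000
lambda_g = 98                  # estimated lambda_g [62]

model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)
```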
Table 3. Comparison of the mean and standard deviation of the performance evaluation metrics of the proposed and existing state-of-the-art algorithms for the examples presented in Figures 7–11. Higher values of μ indicate better methods, while lower values of σ show consistency of the results.

Algorithm | μ and σ | Entropy | UCIQE | UIQM | UICM | UISM
ARC [58] | μ | 7.60 | 0.52 | 2.85 | −32.85 | 5.22
 | σ | ±1.57 | ±0.29 | ±0.81 | ±2.73 | ±0.71
UWGAN [59] | μ | 7.63 | 0.59 | 4.27 | 5.64 | 5.58
 | σ | ±1.36 | ±1.17 | ±1.64 | ±1.45 | ±0.85
WSCT [60] | μ | 7.56 | 0.54 | 2.31 | −57.29 | 5.31
 | σ | ±0.95 | ±1.43 | ±0.69 | ±2.01 | ±1.05
UEBLL [22] | μ | 7.75 | 0.51 | 3.72 | 10.05 | 5.08
 | σ | ±0.19 | ±1.95 | ±0.71 | ±1.12 | ±0.97
RLSCD [25] | μ | 7.89 | 0.62 | 5.49 | 2.85 | 11.74
 | σ | ±1.06 | ±0.91 | ±2.91 | ±2.05 | ±3.27
HUIE [61] | μ | 7.68 | 0.58 | 4.19 | 3.72 | 8.59
 | σ | ±0.50 | ±1.06 | ±1.39 | ±1.03 | ±0.75
Proposed | μ | 7.92 | 0.65 | 6.02 | 10.56 | 12.58
 | σ | ±0.03 | ±0.21 | ±0.25 | ±0.92 | ±0.68
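Among the metrics in Table 3, entropy [63] is the simplest to reproduce. A minimal sketch computing Shannon entropy over an 8-bit grayscale histogram is given below; whether images are scored channel-wise or after grayscale conversion is an assumption here.

```python
import numpy as np

def shannon_entropy(gray_image: np.ndarray) -> float:
    """Shannon entropy [63] of an 8-bit grayscale image, in bits.
    Higher values indicate more retained detail after dehazing."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

# Example: a uniform-noise image approaches the maximum of 8 bits.
rng = np.random.default_rng(0)
print(shannon_entropy(rng.integers(0, 256, size=(256, 256))))
```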