Article

Enhancing Underwater Images via Color Correction and Multiscale Fusion

School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10176; https://doi.org/10.3390/app131810176
Submission received: 7 August 2023 / Revised: 5 September 2023 / Accepted: 5 September 2023 / Published: 10 September 2023

Abstract

Color distortion, low contrast, and blurred details are the main defects of underwater images and can severely degrade their quality. To address these issues, a novel enhancement method based on color correction and multiscale fusion is proposed, achieving color correction, contrast enhancement, and detail sharpening at different stages. The method consists of three main steps. First, a simple and effective histogram equalization-based method corrects color distortion. Second, a guided filter decomposes the V channel of the color-corrected image into low- and high-frequency components; the low-frequency component is enhanced with both a dual-interval histogram based on a benign separation threshold strategy and a complementary pair of gamma functions, and the two enhanced versions are fused to improve image contrast. Finally, an enhancement function is designed to highlight image details. Comparative analysis with existing methods demonstrates that the proposed method produces high-quality underwater images with favorable qualitative and quantitative evaluations. Compared with the best-scoring competing method, the average UIQM score of our method is more than 6% higher, and the average UCIQE score is more than 2% higher.

1. Introduction

Underwater images are a crucial means of understanding the underwater world, and they play an essential role in underwater exploration, subaquatic operations, and underwater target recognition. However, due to the selective absorption of light by water, underwater images lose their original color and exhibit a blue or green hue, which reduces image clarity [1]. Moreover, due to the complex scattering of the subaquatic medium, images take on a foggy appearance that significantly reduces scene clarity [2].
To tackle these challenges, scholars have devised a range of restoration and enhancement algorithms aimed at improving the overall quality of underwater images.
Image-restoration algorithms are built upon underwater imaging models and can effectively handle degraded images. However, their effectiveness heavily depends on parameter estimation, and some methods require specialized underwater equipment, which increases costs. In recent years, some scholars have combined deep learning with underwater imaging, but such approaches rely on synthesized pairs of degraded underwater images and corresponding high-quality land images and often involve complex network architectures [3,4].
In contrast, underwater image-enhancement algorithms require less underwater-specific prior knowledge and aim to improve pixel quality by adjusting pixel intensities. Enhanced images exhibit higher contrast and richer detail, resulting in better visual effects. Common enhancement algorithms are predominantly based on spatial-domain- or fusion-based approaches.
Spatial-domain-based underwater image-enhancement algorithms effectively enhance image contrast. However, they may introduce red color casts and noise since color correction and noise handling are not explicitly considered. Fusion-based methods can improve image quality by reducing noise and enhancing details. However, these methods require the acquisition of multiple fusion images and the design of suitable fusion weights; otherwise, they may not yield positive results [3].
To address these issues, this paper introduces an innovative and reliable underwater image-enhancement algorithm. First, a color-correction approach based on histogram equalization removes color distortion. Then, guided filtering decomposes the V channel of the color-corrected image into low-frequency and high-frequency components. The low-frequency component is enhanced using a dual-interval histogram based on the benign separation threshold and a pair of complementary gamma functions, and the two enhanced outputs are fused to improve image contrast; an enhancement function is then designed to highlight image details. We summarize the innovations of this article as follows:
  • Color compensation is performed by combining the local and mean differences between the attenuated and non-attenuated channels of degraded underwater images. On this basis, a correction technique based on histogram equalization further corrects the color of the underwater image.
  • Low-frequency components with different contrasts are generated using dual-interval histograms, and the two versions are fused to enhance image contrast. A function is also proposed to highlight the high-frequency component of the V channel.

2. Related Works

In this section, we present a comprehensive review of relevant research, focusing on three main aspects: techniques for restoring underwater images, approaches for enhancing underwater images, and methodologies based on deep learning.
Underwater image-restoration technology estimates the parameters of an underwater imaging model to recover the image before degradation. Methods based on underwater optical imaging [5,6,7] and on polarization characteristics [8,9] account for the complexity of the underwater environment by constructing a specialized underwater imaging system, which yields results close to the ground truth. However, their limitations must also be considered: underwater optical imaging systems are complex and costly, since they require specialized hardware and capture equipment to image murky underwater scenes. He et al. [10] proposed the dark channel prior (DCP) for outdoor image dehazing. The DCP is also applicable to underwater image processing, since degradation in foggy and underwater environments alike is caused by the irregular propagation of light in the medium. Methods based on the DCP have therefore been widely applied to underwater images. Galdran et al. [11] applied the idea of the DCP to the red channel to improve the visibility of underwater images. Li et al. [12] utilized the histogram characteristics of images to process underwater images. Drews et al. [13] considered the degradation mode of underwater images and proposed a variant of the DCP (UDCP); however, the method is not effective in the presence of white objects or artificial light. Peng et al. [14] used image blurriness and underwater light attenuation to propose an underwater image-restoration method. Zhou et al. [15] used a color-line model to restore underwater images. However, since these restoration-based methods depend on subaquatic prior knowledge and specific model parameters, they may not be effective in certain water conditions. Peng et al. [16] proposed a universal single-image-restoration algorithm by generalizing the dark channel prior (GDCP) to restore different types of images; the method also performs well on underwater images.
Enhancement algorithms improve image quality by adjusting pixel intensities. Iqbal et al. [17] proposed an adaptive color-correction method for underwater images, in which clustering algorithms estimate the illuminant chromaticity of the image and a linear color transformation then corrects the color distortion. Ancuti et al. [18] fused two versions of an image to enhance underwater images and videos. Fu et al. [19] proposed a Retinex-based method that corrects color and enhances contrast in underwater images by processing the two image components with different approaches. They also proposed a two-step enhancement method [20] for single underwater images, using the DCP to remove the color cast and then histogram equalization and the Retinex algorithm to enhance image details. Zhang et al. [21] utilized dual-interval histograms, built on color correction, to improve underwater image quality. Zhang et al. [22] proposed a minimum-color-loss principle that combines underwater image attenuation maps to adjust image color and contrast. Zhang et al. [23] designed a color-correction matrix and then used histogram techniques to enhance underwater image contrast. These enhancement-based methods are generally more stable and effective at improving the contrast and clarity of underwater images.
Deep learning-based [24] underwater image restoration and enhancement methods have made significant progress in recent years, providing new possibilities for solving image-quality problems in underwater environments. Methods based on generative adversarial networks (GANs) perform well: by having generators and discriminators learn from each other, they can produce clearer and more realistic underwater images. Yang et al. [25] improved adversarial generation with multiscale generators to produce high-quality images. Sun et al. [26] achieved underwater image enhancement in multiple scenarios with a GAN (UMGAN) that performs unpaired image-to-image translation between turbid and clear underwater domains. Deep learning methods can also improve image quality by learning the characteristics of underwater environments, thereby removing fog and scattering effects from images. Li et al. [27] introduced underwater scene priors into a convolutional neural network and constructed a synthetic underwater image dataset. Fu et al. [28] constructed a dual-branch network to address the global distortion and contrast reduction of underwater images. Because deep learning methods rely on synthesized degraded underwater images paired with corresponding high-quality land images, researchers have focused on unsupervised learning, using the information in the images themselves for training and avoiding the need for large datasets. Saleh et al. [29] achieved unsupervised underwater image enhancement by introducing adaptive uncertainty distributions into deep learning models.

3. Method of This Article

This article proposes a hybrid strategy to improve the quality of underwater images. The strategy consists of three main steps: removing color distortion; decomposing the V channel of the color-corrected image into low- and high-frequency components and enhancing the contrast of the low-frequency component; and sharpening the details of the high-frequency component (Figure 1).

3.1. Color Correction

Light propagates differently underwater: blue and red light attenuate more severely than green light, giving underwater images a green (or blue) tint. If the unique characteristics of color degradation in underwater scenes are not considered, however, correction may introduce additional color distortion, such as red artifacts. To tackle this issue, our method uses an adaptive local compensation strategy to remove color distortion, providing significant compensation for severely attenuated pixels and limited compensation for the rest. The mathematical definition of this method is presented below.
$I_r^c(x, y) = I_r(x, y) + \left( I_g(x, y) - I_r(x, y) \right) \times \left( \bar{I}_g - \bar{I}_r \right)$  (1)

$I_b^c(x, y) = I_b(x, y) + \left( I_g(x, y) - I_b(x, y) \right) \times \left( \bar{I}_g - \bar{I}_b \right)$  (2)
where $I_r(x, y)$, $I_g(x, y)$, and $I_b(x, y)$ represent the pixel values of the red, green, and blue channels at location $(x, y)$; $I_r^c(x, y)$ and $I_b^c(x, y)$ represent the compensated red and blue pixel values at $(x, y)$; and $\bar{I}_r$, $\bar{I}_g$, and $\bar{I}_b$ represent the mean values of the red, green, and blue channels, respectively.
The mean difference $(\bar{I}_g - \bar{I}_r)$ scales the local difference $(I_g(x, y) - I_r(x, y))$ and describes the attenuation of the red channel relative to the green channel, so pixels with severe red-channel attenuation receive correspondingly more compensation. The same principle applies to the blue channel.
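As an illustration, the compensation in Equations (1) and (2) fits in a few lines of NumPy. This is a minimal sketch under our own assumptions (an RGB image normalized to [0, 1]; the function name is ours), not the authors' code:

```python
import numpy as np

def compensate_red_blue(img):
    """Minimal sketch of Eqs. (1)-(2): push the attenuated red and blue
    channels toward green, scaling the local difference (Ig - Ic) by the
    global mean difference (mean(Ig) - mean(Ic)). `img` is RGB in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    r_c = r + (g - r) * (g.mean() - r.mean())  # Eq. (1)
    b_c = b + (g - b) * (g.mean() - b.mean())  # Eq. (2)
    return np.clip(np.stack([r_c, g, b_c], axis=-1), 0.0, 1.0)
```

Because the mean difference grows with the severity of channel attenuation, heavily degraded pixels automatically receive the largest correction.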
After compensation, the gray-value distribution of each color channel of the degraded underwater image remains uneven, so the pixel-value range must be adjusted to improve the visual quality and readability of the image. Linear stretching can spread the pixel values more uniformly, making detail that was compressed into a narrow value range more visible. In some cases, however, linear stretching can blur or destroy detail that originally lay in the low pixel-value range.
This paper therefore uses the image histogram to further correct the gray-value distribution of each color channel of the degraded underwater image after color compensation. The following describes the process using the red channel as an example; the same operation is performed on the blue and green channels. The histogram of the red channel is defined as follows:
$H(i) = n_i, \quad i \in [0, 255]$  (3)
where $n_i$ is the number of pixels in the compensated red channel with pixel value $i$. The histogram is then clipped as follows:
$H_C(i) = \begin{cases} th, & \text{if } H(i) > th \\ H(i), & \text{otherwise} \end{cases}$  (4)
In Equation (4), $H_C(i)$ represents the clipped histogram, while $th$ is the clipping threshold set in this paper, expressed mathematically as follows:
$th = \mathrm{mean}(H(i)) + \mathrm{std}(H(i))$  (5)
In Equation (5), $\mathrm{mean}(H(i))$ and $\mathrm{std}(H(i))$ are the mean and standard deviation of $H(i)$, respectively. The next step is to redistribute the clipped pixel counts over the histogram, defined mathematically as follows:
$H_F(i) = H_C(i) + \left( \sum_{j=0}^{255} val(j) \right) / 255$  (6)

$val(j) = \begin{cases} H(j) - th, & \text{if } H(j) > th \\ 0, & \text{otherwise} \end{cases}$  (7)
After the redistributed histogram is obtained, its probability density function (PDF) is expressed in Equation (8): the count of each pixel value is divided by the total number of pixels, yielding a probability for each pixel value.
$PDF(i) = H_F(i) / sum$  (8)
Here, $sum$ denotes the total number of pixels in the red channel. Equation (9) then computes the cumulative distribution function (CDF) of the histogram, which defines the histogram-equalization transformation used to equalize the gray values of the image.
$CDF(i) = \sum_{j=0}^{i} PDF(j)$  (9)
Finally, a color-corrected image is generated using Equation (10).
$f = 255 \times CDF(i)$  (10)
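Taken together, Equations (3)-(10) amount to a clipped histogram equalization per channel. Below is a hedged NumPy sketch of the pipeline for one 8-bit channel (the helper name is ours):

```python
import numpy as np

def clipped_hist_equalize(channel):
    """Sketch of Eqs. (3)-(10) for one uint8 channel: clip the histogram at
    th = mean + std, spread the clipped excess evenly over all bins, then
    equalize with the CDF of the redistributed histogram."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)  # Eq. (3)
    th = hist.mean() + hist.std()                                          # Eq. (5)
    excess = np.where(hist > th, hist - th, 0.0)                           # Eq. (7)
    hist_f = np.minimum(hist, th) + excess.sum() / 255.0                   # Eqs. (4) and (6)
    pdf = hist_f / hist_f.sum()                                            # Eq. (8)
    cdf = np.cumsum(pdf)                                                   # Eq. (9)
    lut = np.round(255.0 * cdf).astype(np.uint8)                           # Eq. (10)
    return lut[channel]
```

Clipping before equalization bounds the slope of the transformation, so the stretch is milder than plain histogram equalization and less prone to over-enhancement.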

3.2. Contrast Enhancement

Guided filtering [30] is an image-filtering technique that smooths images while preserving details and edge information. Compared with traditional filtering methods, guided filtering offers better noise reduction and smoothing while keeping edges intact.
Guided filtering works as follows: a guidance image steers the filtering operation. The guidance image is typically an auxiliary image correlated with the input, such as a grayscale image, a gradient image, or another feature image derived from the original. It supplies information about image structure and features, allowing the filter to better respect the image content and edges.
HSV (Hue, Saturation, Value) is a commonly used color space that matches human perception of color more closely. It decomposes color into three components: hue, saturation, and value. Hue is the basic attribute of a color, the name we perceive, such as red, green, or blue. Saturation is the purity or depth of a color: highly saturated colors appear vivid, while weakly saturated colors appear washed out. Value represents the brightness of a color.
In image processing, the V channel of the HSV color space represents image brightness. This paper enhances contrast in the V channel by decomposing it into low- and high-frequency components using guided filtering: the low-frequency component preserves the main illumination changes, while the high-frequency component contains the image details.
$V = V_s + V_t$  (11)
In Equation (11), $V$ represents the brightness channel of the image, and $V_s$ and $V_t$ represent the low-frequency and high-frequency components of the brightness channel, respectively.
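One possible realization of this decomposition uses the guided filter from opencv-contrib-python with the V channel as its own guide; this is our illustrative sketch, and the radius and regularization values are assumptions rather than the paper's settings:

```python
import cv2
import numpy as np

def decompose_v(bgr, radius=16, eps=1e-2):
    """Sketch of Eq. (11): self-guided filtering of the V channel yields the
    low-frequency layer Vs; the residual Vt = V - Vs carries the details.
    Requires opencv-contrib-python (cv2.ximgproc)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    v = hsv[..., 2].astype(np.float32) / 255.0
    v_s = cv2.ximgproc.guidedFilter(v, v, radius, eps)  # low-frequency layer
    v_t = v - v_s                                       # high-frequency layer
    return v_s, v_t
```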
Foreground and background regions in underwater images exhibit distinct characteristics and require different contrast-enhancement treatments. In our approach, we first use a benign separation threshold strategy to segment the underwater image into foreground and background sub-images. We then enhance the contrast of this pair of sub-images and use Gaussian and Laplacian pyramids to fuse the two contrast-enhanced versions, which produces a smooth transition between the enhanced images and eliminates discontinuities between them.
Through these steps, the visibility of underwater images is significantly improved while preserving the details and textures of the foreground and background areas.

3.2.1. Local Contrast Enhancement of Sub-Images

Most underwater images exhibit a bright foreground and a dark background due to the characteristics of the underwater imaging system: areas closer to the light source appear brighter than areas farther away [31]. This paper uses Equation (12) to determine the separation threshold.
$h = p_1 (u_1 - u)^2 + p_2 (u_2 - u)^2$  (12)
In Equation (12), $p_1$ and $p_2$ are the probabilities of the foreground and background pixels of the structural layer $V_s$, $u_1$ and $u_2$ are the mean values of the foreground and background pixels, and $u$ is the mean value of all pixels in $V_s$. The objective is to find the threshold at which the difference between foreground and background is maximized: the larger the between-class variance $h$, the greater the difference between the two parts of the image.
We select an initial threshold $T$ to divide the image into two classes and use Equation (12) as the fitness function to update $h$; when $h$ reaches its maximum, $T$ has reached the optimal threshold. The input image can then be divided into a background sub-image $V_s^D$ and a foreground sub-image $V_s^U$, as shown below:
$V_s^D = \{ V_s(x, y) \mid V_s(x, y) \le T, \ V_s(x, y) \in V_s \}$  (13)

$V_s^U = \{ V_s(x, y) \mid V_s(x, y) > T, \ V_s(x, y) \in V_s \}$  (14)
where $V_s(x, y)$ represents the pixel value of the structural layer $V_s$ at location $(x, y)$. The probabilities of the gray levels of $V_s^D$ and $V_s^U$ in the low- and high-interval sub-histograms are defined as $PD(V_s^D)$ and $PD(V_s^U)$, respectively, and can be expressed as follows:
$PD(V_s^D) = H(V_s^D) / N_D$  (15)

$PD(V_s^U) = H(V_s^U) / N_U$  (16)
where $H(V_s^D)$ and $H(V_s^U)$ denote the gray-level frequencies of $V_s^D$ and $V_s^U$, and $N_D$ and $N_U$ are the total numbers of pixels in the background and foreground sub-images, respectively. The CDFs of the background and foreground sub-images, $CD(V_s^D)$ and $CD(V_s^U)$, are expressed as follows:
$CD(V_s^D) = \sum_{V_s^D = 0}^{T} PD(V_s^D)$  (17)

$CD(V_s^U) = \sum_{V_s^U = T+1}^{255} PD(V_s^U)$  (18)
Finally, to address the low contrast of the image, the background and foreground sub-images are equalized using Equation (19), where $V_1$ denotes the structural layer $V_s$ after local contrast enhancement:
$V_1 = \begin{cases} T \times CD(V_s^D), & V_s^D \in [0, T] \\ (T+1) + [255 - (T+1)] \times CD(V_s^U), & V_s^U \in [T+1, 255] \end{cases}$  (19)
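For reference, the threshold search of Equation (12) and the dual-interval equalization of Equations (13)-(19) can be sketched as follows. This is our illustrative NumPy rendering, assuming an 8-bit version of $V_s$:

```python
import numpy as np

def dual_interval_equalize(v_s):
    """Sketch of Eqs. (12)-(19): pick the threshold T maximizing the
    between-class variance h, then equalize the background sub-image onto
    [0, T] and the foreground sub-image onto [T+1, 255]."""
    hist = np.bincount(v_s.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    u = np.sum(np.arange(256) * p)
    best_h, T = -1.0, 127
    for t in range(1, 255):                        # scan candidate thresholds
        p1, p2 = p[:t + 1].sum(), p[t + 1:].sum()
        if p1 == 0 or p2 == 0:
            continue
        u1 = np.sum(np.arange(t + 1) * p[:t + 1]) / p1
        u2 = np.sum(np.arange(t + 1, 256) * p[t + 1:]) / p2
        h = p1 * (u1 - u) ** 2 + p2 * (u2 - u) ** 2        # Eq. (12)
        if h > best_h:
            best_h, T = h, t
    cdf_d = np.cumsum(p[:T + 1]) / max(p[:T + 1].sum(), 1e-12)  # Eq. (17)
    cdf_u = np.cumsum(p[T + 1:]) / max(p[T + 1:].sum(), 1e-12)  # Eq. (18)
    lut = np.empty(256, dtype=np.uint8)
    lut[:T + 1] = np.round(T * cdf_d)                           # Eq. (19), low interval
    lut[T + 1:] = np.round((T + 1) + (255 - (T + 1)) * cdf_u)   # Eq. (19), high interval
    return lut[v_s]
```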

3.2.2. Global Contrast Enhancement of Sub-Images

Li et al. [32] proposed the complementary pair of functions in Equation (20) for enhancing the contrast of low-light images. The function curves are illustrated in Figure 2, which shows that function $y_2$ provides a more pronounced enhancement of pixels than function $y_1$.
$y_1 = 1 - (1 - x)^{\gamma}, \quad y_2 = \left( 1 - (1 - x)^{1/\gamma} \right)^{\gamma}, \quad \gamma < 1$  (20)
Therefore, we have designed a new global contrast-enhancement method for underwater images in Equation (21).
$V_2(x, y) = \begin{cases} 1 - (1 - V_s(x, y))^{\gamma}, & V_s(x, y) > T \\ \left( 1 - (1 - V_s(x, y))^{1/\gamma} \right)^{\gamma}, & \text{otherwise} \end{cases}$  (21)
In Equation (21), $V_s(x, y)$ is the pixel value of $V_s$ at $(x, y)$, and $V_2(x, y)$ is the pixel value at $(x, y)$ after contrast enhancement. This step reuses the threshold obtained during local contrast enhancement to partition the input image for processing.
However, in this step every pixel of a sub-image shares the same contrast-enhancement function, which raises the global brightness of the image. Figure 3 compares the enhancement effects of this step and of the local method in Section 3.2.1. To visualize the results, Figure 3a,b show the images converted from the HSV space back to the RGB space.
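A compact sketch of Equations (20) and (21) is given below; the value of $\gamma$ is illustrative, since the paper does not restate it here, and the function name is ours:

```python
import numpy as np

def global_gamma_enhance(v_s, T, gamma=0.7):
    """Sketch of Eq. (21): apply the complementary gamma pair of Eq. (20),
    with v_s and T scaled to [0, 1] and gamma < 1 (illustrative value)."""
    y1 = 1.0 - (1.0 - v_s) ** gamma                     # mild curve for bright pixels
    y2 = (1.0 - (1.0 - v_s) ** (1.0 / gamma)) ** gamma  # strong brightening for dark pixels
    return np.where(v_s > T, y1, y2)
```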

3.3. Fusion

Multiscale fusion algorithms are widely used in image processing and computer vision. They merge meaningful information from different scales, reinforcing the features of the input images and thereby improving the accuracy and efficiency of downstream tasks. Multiscale fusion has shown excellent performance in applications such as remote-sensing image processing [33], defogging and rain removal [34], and high-resolution imaging. In this article, we construct a stable fusion framework that increases image visibility and emphasizes image details. The framework is built on images with different contrasts generated from a single original image.
The processing above yields two enhanced versions, $V_1$ and $V_2$. Because exposure significantly affects image quality, we favor well-exposed image regions through exposure weight maps:
$w_i(x, y) = \exp\left( -\frac{(V_i(x, y) - 0.5)^2}{2 \times 0.25^2} \right)$  (22)
The formula for calculating the normalized weight map for fusion is as follows:
$\bar{w}_i(x, y) = \frac{w_i(x, y)}{\sum_i w_i(x, y)}$  (23)
where $V_i(x, y)$ represents the $i$th input image at $(x, y)$. We use the pyramid fusion method, and the resulting enhanced image is obtained as:
$V_{es}(x, y) = \sum_{l} \sum_{i=1}^{2} G_l\left( \bar{w}_i(x, y) \right) \, L_l\left( V_i(x, y) \right)$  (24)
where $l$ indexes the layers of the pyramid, and $G_l(\cdot)$ and $L_l(\cdot)$ denote the Gaussian and Laplacian pyramid operators, respectively.
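The following is a hedged OpenCV/NumPy sketch of this fusion (Equations (22)-(24)); the pyramid depth of 5 is an assumption, the function names are ours, and inputs are assumed to be float32 images in [0, 1]:

```python
import cv2
import numpy as np

def fuse_exposure(v1, v2, levels=5):
    """Sketch of Eqs. (22)-(24): fuse the two contrast-enhanced versions by
    blending Gaussian pyramids of the normalized exposure weights with
    Laplacian pyramids of the inputs, then collapsing the result."""
    def gauss_pyr(img, n):
        pyr = [img]
        for _ in range(n - 1):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def lap_pyr(img, n):
        g = gauss_pyr(img, n)
        pyr = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[::-1]) for i in range(n - 1)]
        pyr.append(g[-1])
        return pyr

    w = [np.exp(-((v - 0.5) ** 2) / (2 * 0.25 ** 2)) for v in (v1, v2)]  # Eq. (22)
    w_sum = w[0] + w[1] + 1e-12
    w = [wi / w_sum for wi in w]                                         # Eq. (23)

    fused = None
    for vi, wi in zip((v1, v2), w):
        gp_w, lp_v = gauss_pyr(wi, levels), lap_pyr(vi, levels)
        blended = [g * l for g, l in zip(gp_w, lp_v)]                    # Eq. (24)
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]

    out = fused[-1]
    for lvl in reversed(fused[:-1]):                                     # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=lvl.shape[::-1]) + lvl
    return np.clip(out, 0.0, 1.0)
```

Blending in the pyramid domain, rather than per pixel, is what yields the smooth transitions between the two enhanced versions noted in Section 3.2.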

3.4. Detail Enhancement

This paper presents a simple approach for enhancing the details of underwater images: texture details are adaptively amplified, and blurred details are markedly improved, by an enhancement coefficient $K$. Based on extensive experiments and the influence of the parameters on visibility and the human visual system, $\lambda$ and $\sigma$ in Equation (25) are set to 8 and 1, respectively.
$K = 1 + \lambda \exp\left( -\frac{V_t(x, y)}{\sigma} \right)$  (25)
$V_t(x, y)$ represents the pixel value of $V_t$ at location $(x, y)$, and $V_{et}(x, y)$ represents the pixel value of the texture layer at $(x, y)$ after enhancement. $\varepsilon$ is a parameter used to avoid amplifying dense noise in the underwater image; in our experiments, $\varepsilon = 0.005$. Equation (26) is used to enhance the details:
$V_{et}(x, y) = \begin{cases} 0, & \text{if } V_t(x, y) < \varepsilon \\ K \times V_t(x, y), & \text{otherwise} \end{cases}$  (26)
Finally, the enhanced V channel is obtained as follows:
$V_{final} = V_{es} + V_{et}$  (27)
The final step converts the enhanced V channel, together with the original H and S channels, from the HSV color space back to the RGB color space to obtain the improved underwater image.
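The detail stage (Equations (25) and (26)) and the final recombination (Equation (27)) reduce to a few array operations. In this sketch we test $|V_t|$ against $\varepsilon$, since the texture layer can be negative; treat that as our assumption rather than the paper's exact condition:

```python
import numpy as np

def enhance_details(v_t, lam=8.0, sigma=1.0, eps=0.005):
    """Sketch of Eqs. (25)-(26) with the stated values lambda = 8,
    sigma = 1, and eps = 0.005."""
    K = 1.0 + lam * np.exp(-v_t / sigma)              # Eq. (25)
    return np.where(np.abs(v_t) < eps, 0.0, K * v_t)  # Eq. (26); |v_t| is our assumption

# Final V channel (Eq. (27)) before converting HSV back to RGB:
# v_final = np.clip(v_es + enhance_details(v_t), 0.0, 1.0)
```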

4. Results

In this section, we perform qualitative and quantitative evaluations to assess the effectiveness of the proposed algorithm, comparing it with several existing advanced underwater image-enhancement methods: ARC [11], Fusion [18], GDCP [16], IBLA [14], TS [20], UDCP [13], MLLE [22], and ACCDO [23]. We used the UIEB [35,36] dataset of 950 real-world underwater images for evaluation; the 890 images with corresponding reference images are referred to as UIEBR, and the remaining 60 underwater images as UIEBC. Table 1 lists the compared methods and their abbreviations.

4.1. Qualitative Evaluation

For the qualitative evaluation, we chose two types of underwater images from UIEBR: blue and green. Due to space limitations, only a selection of images is presented. The enhanced images produced by the various algorithms are shown in Figure 4 and Figure 5.
For the blue images presented in Figure 4, the GDCP, IBLA, and UDCP algorithms significantly improve saturation, visual effect, and visibility, but their color correction is unsatisfactory. TS produces prominent local features but does not effectively remove the color deviation, as in image B2. The results of the Fusion method are closest to the reference images. ARC employs effective color correction, but its enhancement of the darker areas of the underwater images is not prominent.
For the green images shown in Figure 5, UDCP falls short of the requirements. IBLA achieves excellent visibility and color correction, although its treatment of G1 and G2 is unsatisfactory. Among the compared algorithms, ARC, Fusion, and TS perform better at improving visibility and correcting color for green images, but their contrast enhancement is somewhat limited, especially on image G2. GDCP significantly enhances underwater image contrast, although its color correction is not ideal. The images enhanced by our method show good results in terms of color, contrast, and detail.

4.2. Quantitative Evaluation

In the quantitative evaluation, we selected three indicators to assess the quality of the underwater images shown in the figures: edge intensity (AG) [3], information entropy (IE) [3], and the patch-based contrast quality index (PCQI) [35]. Furthermore, we employed two no-reference indicators, the underwater image quality measure (UIQM) [37] and the underwater color image quality evaluation index (UCIQE) [38], to evaluate the quality of the underwater images.
AG mainly reflects image clarity, while IE describes the average information content of an underwater image. PCQI evaluates the local contrast of underwater images. The UIQM is a comprehensive indicator covering color, clarity, and contrast, and the UCIQE is a linear combination of chroma, saturation, and contrast. Through these indicators, we aim to evaluate the existing methods and the method presented in this article comprehensively.
To further demonstrate the effectiveness of our algorithm, we evaluated the entire UIEB dataset, including UIEBR and UIEBC. Table 2 and Table 3 present the performance indicators of our method and the others. Our algorithm achieves the highest scores in IE, UIQM, and UCIQE on both UIEBR and UIEBC, while its AG and PCQI scores are lower than those of the MLLE algorithm. Nonetheless, our algorithm remains competitive; compared with mainstream methods, MLLE handles the details and local contrast of degraded images better, which is an area for improvement in our future work.

4.3. Running Time

Runtime is an important criterion for determining whether an algorithm can run in real time. We chose underwater images of different resolutions to measure the runtime of the algorithms. All algorithms discussed in this article were executed on the same PC in MATLAB (2018b), configured with an Intel i7-11800H CPU at 2.3 GHz and 16 GB of RAM. The runtimes of the different algorithms are shown in Table 4. As image resolution increases, UDCP and IBLA require much longer processing times. The TS, GDCP, and MLLE algorithms perform excellently in terms of runtime, and the algorithm proposed in this article is highly competitive.

4.4. Detail Analysis

The information hidden in the details of an image is often critical, and clear image details are vital for practical applications such as underwater target detection [39] and tracking. An excellent underwater image-enhancement algorithm should therefore correct color, improve contrast, and enhance image details. Figure 6 shows the enhanced images and detail crops produced by the different algorithms. The MLLE and ACCDO algorithms achieve satisfactory results. The ARC, Fusion, and TS algorithms enhance image details to a certain extent: the pattern of the small fish in Figure 6 is clearer than in the original image, but contrast is somewhat lacking. GDCP and IBLA improve contrast but do not enhance details. The algorithm in this article accounts for both detail and contrast.

4.5. Application Test

We conducted experiments on two visual tasks—low-light image enhancement and local feature-point matching—to demonstrate the positive role of our method in vision.
The visibility and contrast of the image are often reduced in low-light environments, making it difficult to extract valuable information or make further use of the image [40]. Here, we apply our method to images captured in low-light environments, as shown in Figure 7. Our approach significantly improves visibility and contrast in low-light images.
To demonstrate that images processed by our method can support visual matching tasks, we used the scale-invariant feature transform (SIFT) to establish correspondences between two similar scenes, applying it to pairs of real underwater images before and after enhancement. Figure 8 shows the local feature-point matching results. In Figure 8a,b, the original image pairs yield 4 and 87 correctly matched feature points, respectively; in Figure 8c,d, the pairs processed by our algorithm yield 26 and 140. This indicates that our method plays a positive role in image preprocessing.

5. Conclusions

We have proposed a method for enhancing underwater images via color correction and multiscale fusion. Test results on mainstream underwater image datasets show that the enhanced images obtained by this method are of high quality.
Although our method performs well, it has some limitations. On the one hand, its enhancement of delicate image textures is less pronounced, which may be due to the neglect of detail processing in the fusion framework. On the other hand, we chose simple threshold filtering for underwater noise, which may fail in complex underwater scenes and can also eliminate small image details. We will therefore address these issues in future work, seeking a fusion strategy that better balances contrast and detail improvement. The denoising of underwater images is also a direction worth pursuing.

Author Contributions

Conceptualization, N.T. and L.C.; methodology, N.T. and L.C.; software, N.T.; validation, L.C. and Y.L.; formal analysis, N.X.; investigation, N.T.; resources, L.C.; writing—original draft preparation, N.T.; writing—review and editing, L.C.; project administration, L.C.; funding acquisition, X.L. and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hubei Province of China under Grant (No. 2022CFB776), and the Foundation of Wuhan Institute of Technology under Grant (No. CX2022160).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

These data can be found here: https://li-chongyi.github.io/proj_benchmark.html (accessed on 1 June 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schettini, R.; Corchs, S. Underwater Image Processing: State of the Art of Restoration and Image Enhancement Methods. EURASIP J. Adv. Signal Process. 2010, 2010, 746052. [Google Scholar] [CrossRef]
  2. Chiang, J.Y.; Chen, Y.-C. Underwater Image Enhancement by Wavelength Compensation and Dehazing. IEEE Trans. Image Process. 2011, 21, 1756–1769. [Google Scholar] [CrossRef]
  3. Zhang, W.; Dong, L.; Pan, X.; Zou, P.; Qin, L.; Xu, W. A survey of restoration and enhancement for underwater images. IEEE Access 2019, 7, 182259–182279. [Google Scholar] [CrossRef]
  4. Akkaynak, D.; Treibitz, T. A revised underwater image formation model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6723–6732. [Google Scholar]
  5. Kocak, D.M.; Dalgleish, F.R.; Caimi, F.M.; Schechner, Y.Y. A Focus on Recent Developments and Trends in Underwater Imaging. Mar. Technol. Soc. J. 2008, 42, 52–67. [Google Scholar] [CrossRef]
  6. Jaffe, J.S.; Moore, K.D.; McLean, J.; Strand, M.P. Underwater Optical Imaging: Status and Prospects. Oceanography 2001, 14, 64–75. [Google Scholar] [CrossRef]
  7. Jaffe, J.S. Underwater optical imaging: The past, the present, and the prospects. IEEE J. Ocean Eng. 2014, 40, 683–700. [Google Scholar] [CrossRef]
  8. Liu, F.; Wei, Y.; Han, P.; Yang, K.; Bai, L.; Shao, X. Polarization-based exploration for clear underwater vision in natural illumination. Opt. Express 2019, 27, 3629–3641. [Google Scholar] [CrossRef]
  9. Hu, H.; Zhao, L.; Li, X.; Wang, H.; Liu, T. Underwater Image Recovery Under the Nonuniform Optical Field Based on Polarimetric Imaging. IEEE Photon-J. 2018, 10, 1–9. [Google Scholar] [CrossRef]
  10. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [CrossRef]
  11. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic Red-Channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef]
  12. Li, C.-Y.; Guo, J.-C.; Cong, R.-M.; Pang, Y.-W.; Wang, B. Underwater Image Enhancement by Dehazing with Minimum Information Loss and Histogram Distribution Prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef]
  13. Drews, P.; Nascimento, E.R.; Botelho, S.S.C.; Campos, M.F.M. Underwater Depth Estimation and Image Restoration Based on Single Images. IEEE Comput. Graph. Appl. 2016, 36, 24–35. [Google Scholar] [CrossRef]
  14. Peng, Y.-T.; Cosman, P.C. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef]
  15. Zhou, Y.; Wu, Q.; Yan, K.; Feng, L.; Xiang, W. Underwater Image Restoration Using Color-Line Model. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 907–911. [Google Scholar] [CrossRef]
  16. Peng, Y.-T.; Cao, K.; Cosman, P.C. Generalization of the Dark Channel Prior for Single Image Restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868. [Google Scholar] [CrossRef]
  17. Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A.; Talib, A.Z.H. Enhancing the low-quality images using unsupervised color correction method. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 10–13 October 2010; pp. 1703–1709. [Google Scholar]
  18. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2017, 27, 379–393. [Google Scholar] [CrossRef]
  19. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.P.; Ding, X. A retinex-based enhancing approach for single underwater image. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; Volume 2014, pp. 4572–4576. [Google Scholar]
  20. Fu, X.; Fan, Z.; Ling, M.; Huang, Y.; Ding, X. Two-step approach for single underwater image enhancement. In Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 6–9 November 2017; pp. 789–794. [Google Scholar]
  21. Zhang, W.; Dong, L.; Zhang, T.; Xu, W. Enhancing underwater image via color correction and bi-interval contrast enhancement. Signal Process Image Commun. 2021, 90, 116030. [Google Scholar] [CrossRef]
  22. Zhang, W.; Zhuang, P.; Sun, H.-H.; Li, G.; Kwong, S.; Li, C. Underwater Image Enhancement via Minimal Color Loss and Locally Adaptive Contrast Enhancement. IEEE Trans. Image Process. 2022, 31, 3997–4010. [Google Scholar] [CrossRef]
  23. Zhang, W.; Wang, Y.; Li, C. Underwater Image Enhancement by Attenuated Color Channel Correction and Detail Preserved Contrast Enhancement. IEEE J. Ocean. Eng. 2022, 47, 718–735. [Google Scholar] [CrossRef]
  24. Jiang, K.; Wang, Z.; Wang, Z.; Chen, C.; Yi, P.; Lu, T.; Lin, C.W. Degrade is upgrade: Learning degradation for low-light image enhancement. In Proceedings of the AAAI Conference on Artificial Intelligence 2022, Virtual, 22 February–1 March 2022; Volume 36, pp. 1078–1086. [Google Scholar]
  25. Liu, R.; Jiang, Z.; Yang, S.; Fan, X. Twin adversarial contrastive learning for underwater image enhancement and beyond. IEEE Trans. Image Process. 2022, 31, 4922–4936. [Google Scholar] [CrossRef]
  26. Sun, B.; Mei, Y.; Yan, N.; Chen, Y. UMGAN: Underwater Image Enhancement Network for Unpaired Image-to-Image Translation. J. Mar. Sci. Eng. 2023, 11, 447. [Google Scholar] [CrossRef]
  27. Li, C.; Guo, J.; Guo, C. Emerging from Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer. IEEE Signal Process. Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef]
  28. Fu, X.; Cao, X. Underwater image enhancement with global–local networks and compressed-histogram equalization. Signal Process. Image Commun. 2020, 86, 115892. [Google Scholar] [CrossRef]
  29. Saleh, A.; Sheaves, M.; Jerry, D.; Azghadi, M.R. Adaptive uncertainty distribution in deep learning for unsupervised underwater image enhancement. arXiv 2022, arXiv:2212.08983. [Google Scholar]
  30. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef]
  31. Zhang, W.; Pan, X.; Xie, X.; Li, L.; Wang, Z.; Han, C. Color correction and adaptive contrast enhancement for underwater image enhancement. Comput. Electr. Eng. 2021, 91, 106981. [Google Scholar] [CrossRef]
  32. Li, C.; Tang, S.; Yan, J.; Zhou, T. Low-Light Image Enhancement via Pair of Complementary Gamma Functions by Fusion. IEEE Access 2020, 8, 169887–169896. [Google Scholar] [CrossRef]
  33. Xiao, Y.; Su, X.; Yuan, Q.; Liu, D.; Shen, H.; Zhang, L. Satellite video super-resolution via multiscale deformable convolution alignment and temporal grouping projection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–19. [Google Scholar] [CrossRef]
  34. Jiang, K.; Wang, Z.; Yi, P.; Chen, C.; Huang, B.; Luo, Y.; Ma, J.; Jiang, J. Multi-scale progressive fusion network for single image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020, Seattle, WA, USA, 13–19 June 2020; pp. 8346–8355. [Google Scholar]
  35. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar] [CrossRef]
  36. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  37. Wang, S.; Ma, K.; Yeganeh, H.; Wang, Z.; Lin, W. A Patch-Structure Representation Method for Quality Assessment of Contrast Changed Images. IEEE Signal Process. Lett. 2015, 22, 2387–2390. [Google Scholar] [CrossRef]
  38. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
  39. Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef] [PubMed]
  40. Dubreuil, M.; Delrot, P.; Leonard, I.; Alfalou, A.; Brosseau, C.; Dogariu, A. Exploring underwater target detection by imaging polarimetry and correlation techniques. Appl. Opt. 2013, 52, 997–1005. [Google Scholar] [CrossRef] [PubMed]
  41. Singh, K.; Parihar, A.S. Illumination estimation for nature preserving low-light image enhancement. Vis. Comput. 2023, 1–16. [Google Scholar] [CrossRef]
Figure 1. The framework of the proposed method. R, G, and B are the color channels of the RGB-format image.
Figure 2. The function curves.
Figure 3. (a) is the local contrast-enhancement image and (b) is the global contrast-enhancement image.
Figure 4. Enhancement effects of different methods on blue images, including ARC, Fusion, GDCP, IBLA, TS, UDCP, MLLE, ACCDO, and our method.
Figure 5. Enhancement effects of different methods on green images, including ARC, Fusion, GDCP, IBLA, TS, UDCP, MLLE, ACCDO, and our method.
Figure 6. Enhancement results of different methods; the red box marks the corresponding locally enlarged region. From left to right: the raw underwater image and the results of ARC, Fusion, GDCP, IBLA, TS, UDCP, MLLE, ACCDO, and the proposed method.
Figure 7. The upper row shows the original images, and the lower row shows the images processed by our method.
Figure 8. Local feature-point matching using SIFT [41]. (a,b) are the matching results for the original image pairs; (c,d) are the matching results for the image pairs enhanced by our method.
Table 1. These methods are referred to by the following abbreviations in the remainder of this article.

| Method | Abbreviation |
| --- | --- |
| Automatic red-channel underwater image restoration [11] | ARC |
| Underwater depth estimation and image restoration based on single images [13] | UDCP |
| Underwater image restoration based on image blurriness and light absorption [14] | IBLA |
| Generalization of the dark channel prior for single image restoration [16] | GDCP |
| Color balance and fusion for underwater image enhancement [18] | Fusion |
| Two-step approach for single underwater image enhancement [20] | TS |
| Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement [22] | MLLE |
| Underwater image enhancement by attenuated color channel correction and detail preserved contrast enhancement [23] | ACCDO |
Table 2. Comparison of indicators for different methods on UIEBR. The red value represents the best, while the blue value represents the second best.

| UIEBR | ARC | Fusion | GDCP | IBLA | TS | UDCP | MLLE | ACCDO | OUR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AG | 4.835 | 6.319 | 7.221 | 5.989 | 7.194 | 5.207 | 12.913 | 9.522 | 12.663 |
| IE | 7.187 | 7.413 | 7.316 | 7.267 | 7.253 | 6.557 | 7.580 | 7.662 | 7.727 |
| PCQI | 1.013 | 1.066 | 1.045 | 1.074 | 1.148 | 0.814 | 1.221 | 1.191 | 1.044 |
| UIQM | 3.214 | 3.516 | 2.568 | 2.560 | 3.245 | 2.405 | 2.607 | 3.520 | 3.745 |
| UCIQE | 0.563 | 0.588 | 0.610 | 0.604 | 0.601 | 0.584 | 0.605 | 0.555 | 0.620 |
Table 3. Comparison of indicators for different methods on UIEBC. The red value represents the best, while the blue value represents the second best.

| UIEBC | ARC | Fusion | GDCP | IBLA | TS | UDCP | MLLE | ACCDO | OUR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AG | 3.139 | 4.512 | 4.848 | 4.302 | 5.070 | 3.055 | 7.734 | 6.512 | 6.347 |
| IE | 7.057 | 7.253 | 7.122 | 6.998 | 7.215 | 5.638 | 7.312 | 7.519 | 7.624 |
| PCQI | 0.992 | 0.998 | 0.954 | 1.011 | 1.052 | 0.801 | 1.086 | 1.071 | 0.914 |
| UIQM | 2.149 | 2.175 | 1.882 | 1.841 | 2.386 | 1.621 | 1.648 | 1.952 | 2.214 |
| UCIQE | 0.536 | 0.572 | 0.565 | 0.591 | 0.574 | 0.520 | 0.579 | 0.549 | 0.596 |
Table 4. Comparison of the runtime (in seconds) of different methods. The red value represents the best, while the blue value represents the second best.

| Resolution | ARC | Fusion | GDCP | IBLA | TS | UDCP | MLLE | ACCDO | OUR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 500 × 375 | 1.027 | 1.499 | 0.276 | 11.169 | 0.128 | 6.664 | 0.159 | 0.488 | 0.121 |
| 640 × 480 | 1.678 | 2.476 | 0.285 | 21.696 | 0.178 | 12.630 | 0.226 | 0.811 | 0.175 |
| 850 × 564 | 2.645 | 3.833 | 0.373 | 35.711 | 0.254 | 18.946 | 0.362 | 1.251 | 0.297 |
| 1280 × 720 | 5.060 | 7.576 | 0.551 | 62.228 | 0.474 | 40.256 | 0.823 | 2.195 | 0.558 |
| Average | 2.603 | 3.846 | 0.396 | 32.701 | 0.259 | 19.624 | 0.393 | 1.186 | 0.288 |