Article

A New Biorthogonal Spline Wavelet-Based K-Layer Network for Underwater Image Enhancement

1 School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China
2 School of Applied Science and Civil Engineering, Beijing Institute of Technology, Zhuhai 519088, China
3 School of Artificial Intelligence, Dongguan City University, Dongguan 523109, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1366; https://doi.org/10.3390/math12091366
Submission received: 1 April 2024 / Revised: 24 April 2024 / Accepted: 28 April 2024 / Published: 30 April 2024
(This article belongs to the Special Issue Advanced Machine Vision with Mathematics)

Abstract

Wavelet decomposition is pivotal for underwater image processing, known for its ability to analyse multi-scale image features in the frequency and spatial domains. In this paper, we propose a new biorthogonal cubic special spline wavelet (BCS-SW), based on the Cohen–Daubechies–Feauveau (CDF) wavelet construction method and the cubic special spline algorithm. BCS-SW has desirable properties of compact support, symmetry, and good frequency-domain characteristics. In addition, we propose a K-layer network (KLN) based on the BCS-SW for underwater image enhancement. The KLN performs a K-layer wavelet decomposition on underwater images to extract frequency-domain features at multiple frequencies, and each decomposition layer has a convolution layer matching its spatial size. This design allows the KLN to understand the spatial and frequency-domain features of the image simultaneously, providing richer features for reconstructing the enhanced image. The experimental results show that the proposed BCS-SW and KLN algorithms achieve better image enhancement than several existing algorithms.

1. Introduction

Underwater imaging introduces unique challenges that necessitate the exploration of more advanced wavelet-based solutions. Underwater images are often compromised by complex environmental factors, including lighting conditions, water quality, and scattering, leading to blurred images and color distortion. Notably, the attenuation rates of the different wavelengths of light in water differ significantly: longer wavelengths, such as red light, attenuate more rapidly than shorter wavelengths, such as blue or green light. This discrepancy results in a pronounced color cast in underwater images, which typically exhibit a blue or green hue.
The underwater environment’s lighting, combined with the scattering effects of ambient light on plankton and suspended particles, exacerbates the blurring of underwater images. Hence, there is a pressing need for techniques capable of analysing the depth, details, and texture information of images to discern their detailed features accurately. Moreover, underwater images often feature complex backgrounds and biological elements. Adjacent regions within an image may exhibit significantly different feature information due to variations in the structure and physical location, while non-adjacent regions might share similar features [1,2]. This complexity demands a nuanced approach to underwater image analysis, highlighting the critical role of sophisticated wavelet-based methodologies in addressing these challenges.
Physics-based methods construct models using the physical and optical characteristics of images captured underwater [3]. These approaches examine the physical processes responsible for degradation, such as color distortion or scattering, and aim to correct them to enhance underwater images. However, a singular physics-based model may not encompass the diverse array of intricate physical and optical factors inherent in underwater environments. This limitation results in inadequate generalization and may yield outcomes characterized by either excessive or insufficient enhancement.
Wavelet analysis is a potent tool for signal processing, playing an instrumental role in underwater image enhancement. Wavelet decomposition is known for its ability to process image signals across multiple scales and to analyse signals in the frequency domain [4,5]. Traditional wavelets, such as Haar, have been applied across a variety of vision tasks for decomposing visual signals into single or multiple layers, aiming to reduce noise and enhance the contrast of these signals. However, when faced with the complexities inherent to underwater images, particularly in terms of multi-level decomposition and the capture of high-frequency details, the limitations of these traditional wavelets become apparent. This has led to a reevaluation of the trajectory of wavelet theory, with particular emphasis on the contributions of Mallat’s multi-resolution analysis [6]. This reassessment serves as a catalyst for rethinking the design of wavelet functions, thereby more effectively addressing the specific enhancement needs of underwater images. By refining wavelet functions to cater to the unique challenges posed by underwater environments, such as blurring, color cast, and detail loss, this approach seeks to develop more sophisticated and effective methods for underwater image enhancement.
Additionally, spline wavelets usually exhibit less of a ringing effect, which helps to prevent the introduction of unnecessary artifacts in underwater image enhancement tasks. This provides important inspiration for us to explore the combination of wavelet decomposition with deep learning-based image enhancement tasks. As shown in Figure 1, this is our decomposition of underwater images based on BCS-SW. After the underwater image is decomposed by BCS-SW, four sub-band images, ca, cd, ch, and cv, are obtained by conducting convolution and downsampling. Most deep neural network methods that incorporate wavelet analysis tend to introduce Haar wavelets into the network; we aim to explore the integration of more complex wavelets with deep neural networks to better combine the advantages of deep neural networks with wavelet analysis. Our contributions to this work can be summarized as follows:
1. We propose a new BCS-SW based on the CDF wavelet construction method and the cubic special spline algorithm. BCS-SW is a compactly supported, symmetric spline wavelet. Both BCS-SW and its corresponding dual wavelet are constructed for image decomposition and reconstruction, respectively, based on a multi-resolution analysis and two-scale equations. BCS-SW demonstrates superior performance in the frequency domain signal extraction, particularly for complex signals, providing a more versatile and efficient approach for interpreting local features and color degeneration in underwater images;
2. We propose a K-layer network (KLN) based on the BCS-SW for underwater image enhancement. Specifically, the KLN utilizes K-layer wavelet decomposition on underwater images to extract features across multiple frequency domains. Each layer of decomposition is paired with convolution-based layers, each tuned to a distinct scale. These convolution-based layers enhance the network’s comprehension of the images’ spatial characteristics, facilitating more stable frequency domain signals;
3. We devise qualitative and quantitative experiments on multiple underwater image datasets, showcasing the effectiveness of the BCS-SW and KLN in underwater image enhancement. Our work serves as an important reference for the integration of complex wavelet transforms into deep neural networks, demonstrating the potential of this approach.
Figure 1. Visualization of the decomposed underwater original and ground truth images by the biorthogonal cubic special spline wavelet (BCS-SW). After the underwater image is decomposed by BCS-SW, ca is the approximate information of the original image, cd is the diagonal information of the original image, ch is the horizontal information of the original image, and cv is the vertical information of the original image.

2. Related Work

For spline wavelet algorithms, Khan et al. [7] pioneered the formulation of B-spline wavelet packets, alongside the creation of their corresponding dual wavelet packets, delving into the inherent properties of B-spline wavelet packets. Cohen et al. [8] proposed the Cohen–Daubechies–Feauveau (CDF) wavelet family, which uses compactly supported spline functions as mother wavelets and has the properties of compact support and orthogonality; it became an important milestone in the study of spline wavelets. Olkkonen et al. [9] introduced the shift-invariant gamma spline wavelet transform for the tree-structured subscale analysis of asymmetric signal waveforms and systems with asymmetric impulse responses. Building on this work, Tavakoli and Esmaeili [10] subsequently developed biorthogonal multiple knot B-spline (MKBS) scaling functions, along with multiple knot B-spline wavelet (MKBSW) basis functions, thereby enriching the repository of tools available in wavelet analysis. Compared to Haar and Daubechies wavelets, MKBSW offers superior smoothness and continuity, which makes it perform better in accurately approximating and analysing continuous signals.
For wavelet analysis applied to image processing, the advantages include its ability to provide multi-resolution analysis, efficient compression, and effective denoising capabilities. However, the disadvantages include higher computational complexity, issues with boundary effects, and difficulties in choosing the appropriate wavelet basis. Spline wavelet transforms have been applied to various image processing tasks, such as the image super-resolution and denoising proposed by Huang et al. [11] and Kang et al. [12]. Banham and Katsaggelos [13] proposed a new spatially adaptive method for recovering noisy blurred images, which is particularly effective in producing crisp deconvolution while suppressing noise in the flat regions of the image. In the realm of underwater image enhancement, several physically based methods utilize the discrete wavelet transform (DWT) to decompose the image and process it in the frequency domain. For instance, Sree Sharmila et al. [14] proposed a novel image resolution enhancement method based on the combination of DWT and stabilized wavelet transform (SWT), employing the histogram shift shaping method for wavelet decomposition to enhance the contrast and resolution of the image. Singh et al. [15] employed a discrete wavelet transform-based interpolation technique for resolution enhancement. Ma and Oh [16] introduced a dual-stream network based on Haar wavelets [17], which effectively tackles color cast and enhances blurry details in underwater images. Huang et al. [11] introduced a convolutional neural network (CNN) approach based on wavelets, capable of ultra-resolving very low-resolution face images, as small as 16 × 16 pixels, to larger versions at multiple scaling factors (2×, 4×, 8×, and even 16×) within a unified framework.
For deep neural networks applied to underwater image processing, their capability primarily depends on the quality of the training dataset and the capacity of the network. Introducing wavelet analysis into deep neural networks can effectively accelerate the network’s processing of image features and enhance the network’s understanding of images. Due to the robustness and generalization capabilities of deep neural networks in underwater image enhancement tasks, a significant body of work has emerged focusing on leveraging these networks. Perez et al. [18] proposed employing CNN for underwater image enhancement. They trained the CNN using image restoration techniques to achieve an end-to-end transformation model between a hazy image and its corresponding clear image. Wang et al. [19] introduced a CNN-based method named UIE-Net, which is trained on two tasks: color correction and haze removal. This unified training approach enables the network to learn powerful feature representations for both tasks simultaneously. To enhance the extraction of intrinsic features within local blocks, their learning framework incorporates a pixel perturbation strategy, significantly improving convergence speed and accuracy. Goodfellow et al. [20] introduced generative adversarial nets (GANs), presenting a novel methodology for training generative models. As research into GANs deepens, underwater image enhancement increasingly becomes a task of transforming between the underwater domain and the enhancement domain. Li et al. [21] proposed Water-GAN to generate a large training dataset comprising corresponding depth, in-air color images, and realistic underwater images. Water-GAN’s generator is responsible for synthesizing real and depth images into an underwater image, while Water-GAN’s discriminator classifies the real images from the synthesized ones. Moreover, Fabbri et al. [22] proposed UGAN, GANs specifically designed to enhance the quality of underwater images. 
The model’s design objective is to improve the visibility and clarity of underwater images through adversarial training, addressing the challenges posed by the absorption and scattering of light in underwater environments.
Although deep neural networks (DNNs) have demonstrated strong capabilities in underwater image processing, especially in image enhancement, object detection, and classification, there are some obvious limitations in their application. Underwater images are often heavily affected by noise and perturbations, and deep neural networks may exhibit some vulnerability when processing such data, especially if the network has not been trained on data containing high noise. We propose a new deep neural network, the K-layer network (KLN), for underwater image processing based on BCS-SW. In the experimental section, we also compare it with other cutting-edge underwater image processing techniques to show the advantage of the KLN over other methods.

3. Methodology

Most current research combining wavelet analysis with deep neural networks uses Haar wavelets because of their orthogonality. Although Haar wavelets have the advantages of simple computation and easy implementation, they also have limitations: a lack of smoothness, resulting in a loss of low-frequency information, and sensitivity to noise, so they may not match spline wavelet transforms in processing complex images and continuous signals. Hence, we propose a novel spline wavelet combined with deep neural networks to process underwater images.
Inspired by compactly supported spline wavelets [23], which approximate various images well and whose basis functions can be flexibly adjusted according to the needs of applications, we propose a new biorthogonal cubic special spline wavelet (BCS-SW) for underwater image processing in this work. Furthermore, we also propose a new deep neural network, the K-layer network (KLN), for underwater image enhancement based on BCS-SW.

3.1. A New Biorthogonal Cubic Special Spline Wavelet (BCS-SW)

3.1.1. Cubic Special Spline Algorithm

In our work, BCS-SW is based on the cubic special spline algorithm proposed by Chen and Cai [24]. They proposed a novel spline algorithm and provided various representations of cubic splines with different compact supports. In this paper, we selected one of the cubic splines with the smallest compact support, as shown in Equation (1), and on the basis of this spline, we derived a new class of spline wavelet algorithms following the CDF method of wavelet construction.
$$ S(t) = \frac{451}{3}\,\beta^{3}(t) - \frac{256}{3}\left(\beta^{3}\!\left(t-\tfrac{1}{16}\right)+\beta^{3}\!\left(t+\tfrac{1}{16}\right)\right) + \frac{64}{3}\left(\beta^{3}\!\left(t-\tfrac{1}{8}\right)+\beta^{3}\!\left(t+\tfrac{1}{8}\right)\right), \tag{1} $$
and β 3 ( t ) is the cubic B-spline:
$$ \beta^{3}(t) = \sum_{i=0}^{4} \frac{(-1)^{i}}{3!}\binom{4}{i}(t+2-i)^{3}\,\varpi(t+2-i), \quad t \in \mathbb{R}, \tag{2} $$
where ϖ ( t ) is the unit step function
$$ \varpi(t) = \begin{cases} 0, & t < 0, \\ 1, & t \geq 0. \end{cases} \tag{3} $$
S(t) is a linear combination of normalized and shifted B-splines of the same order. Consequently, S(t) inherits nearly all the favourable properties of β³(t), including analyticity, central symmetry, local support, and high-order smoothness. Moreover, S(t) can directly interpolate the provided data without the need to solve coefficient equations, a capability that B-splines lack.
The Fourier transform of S(t) is:
$$ \hat{S}(\omega) = \left(\frac{451}{3} - \frac{512}{3}\cos\frac{\omega}{16} + \frac{64}{3}\cos\frac{\omega}{8}\right)\left(\frac{\sin(\omega/2)}{\omega/2}\right)^{4}. \tag{4} $$
The spline S ( t ) and the Fourier transform S ^ ( ω ) are separately plotted in Figure 2.
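As a concreteness check, the closed forms above can be evaluated directly. The following Python sketch (illustrative code, not the authors' implementation) implements β³(t) from Equation (2) and S(t) from Equation (1), and verifies their central symmetry and compact support:

```python
from math import comb

def unit_step(t):
    # Unit step function from Equation (3)
    return 1.0 if t >= 0 else 0.0

def beta3(t):
    # Centered cubic B-spline, Equation (2):
    # beta^3(t) = sum_{i=0}^{4} (-1)^i / 3! * C(4, i) * (t + 2 - i)^3 * step(t + 2 - i)
    return sum(((-1) ** i / 6.0) * comb(4, i) * (t + 2 - i) ** 3 * unit_step(t + 2 - i)
               for i in range(5))

def S(t):
    # Cubic special spline, Equation (1): a linear combination of shifted
    # cubic B-splines (coefficients as given in the paper)
    return (451 / 3 * beta3(t)
            - 256 / 3 * (beta3(t - 1 / 16) + beta3(t + 1 / 16))
            + 64 / 3 * (beta3(t - 1 / 8) + beta3(t + 1 / 8)))

# beta^3 is centrally symmetric with beta^3(0) = 2/3 and support [-2, 2]
assert abs(beta3(0.0) - 2 / 3) < 1e-9
assert abs(beta3(1.3) - beta3(-1.3)) < 1e-9
assert abs(beta3(2.5)) < 1e-9
# S(t) inherits the central symmetry of beta^3
assert abs(S(0.7) - S(-0.7)) < 1e-9
```

The asserts pass because every piece of β³ is a shifted cubic truncated power, so symmetry and support follow directly from the construction.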

3.1.2. Constructing Biorthogonal Cubic Special Spline Wavelet (BCS-SW)

Chui [25] and Graps [26] have proved that the B-spline β^m(t) is the scaling function of the corresponding multi-resolution analysis. S(t) is formed by a linear combination of translations and dilations of the B-spline β³(t). Therefore, we can naturally deduce the following conclusion.
The subspaces V_j³ are generated by binary dilation and integer translation of S(t), as follows:
$$ V_{j}^{3} = \overline{\operatorname{span}}\left\{2^{j/2}\,S\!\left(2^{j}t-k\right)\right\}, \quad k \in \mathbb{Z},\ j \in \mathbb{Z}, \tag{5} $$
where {V_j³}_{j∈Z} forms a general multi-resolution analysis (GMRA) in L²(R), called a spline multi-resolution analysis, and S(t) is the corresponding scaling function. According to the theory of wavelet construction, S(t), as a scaling function, can be used to construct a new wavelet ψ(t). Let S*(t) be the dual scaling function of S(t) and ψ*(t) be the dual wavelet of ψ(t); then their corresponding low-pass filters are:
$$ H(\omega) = \frac{1}{\sqrt{2}}\sum_{n=N_{1}}^{N_{2}} h_{n}\,e^{-in\omega}, \qquad H^{*}(\omega) = \frac{1}{\sqrt{2}}\sum_{n=L_{1}}^{L_{2}} h_{n}^{*}\,e^{-in\omega}. \tag{6} $$
And the high-pass filters are:
$$ G(\omega) = \frac{1}{\sqrt{2}}\sum_{k=1-L_{2}}^{1-L_{1}} g_{k}\,e^{-ik\omega}, \qquad G^{*}(\omega) = \frac{1}{\sqrt{2}}\sum_{k=1-N_{2}}^{1-N_{1}} g_{k}^{*}\,e^{-ik\omega}, \tag{7} $$
where N_1, N_2, L_1, L_2 are all integers, N_2 − N_1 + 1 and L_2 − L_1 + 1 are the lengths of H(ω) and H*(ω), respectively, and g_k = (−1)^k h*_{1−k}, g*_k = (−1)^k h_{1−k}. All coefficients are real.
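The filter-bank relations above can be sanity-checked numerically. The sketch below uses the orthogonal Haar filter as a stand-in for the BCS-SW coefficients of Table 1 (so h* = h), and assumes the 1/√2 normalization of Equations (6) and (7):

```python
import cmath, math

# Haar low-pass coefficients h_0 = h_1 = 1/sqrt(2), used purely as a stand-in
# for the BCS-SW coefficients of Table 1
h = {0: 1 / math.sqrt(2), 1: 1 / math.sqrt(2)}

def H(w, coeffs):
    # Low-pass filter in the frequency domain, Equation (6):
    # H(w) = (1/sqrt(2)) * sum_n h_n * exp(-i*n*w)
    return sum(c * cmath.exp(-1j * n * w) for n, c in coeffs.items()) / math.sqrt(2)

# High-pass coefficients from the low-pass ones via g_k = (-1)^k h_{1-k}
# (Haar is self-dual, so the dual filter h* equals h here)
g = {1 - n: (-1) ** (1 - n) * c for n, c in h.items()}

# Perfect reconstruction for an orthonormal pair: |H(w)|^2 + |H(w+pi)|^2 = 1
for w in [0.3, 1.1, 2.5]:
    pr = abs(H(w, h)) ** 2 + abs(H(w + math.pi, h)) ** 2
    assert abs(pr - 1.0) < 1e-12
# The high-pass filter has a vanishing moment: G(0) = 0
assert abs(H(0.0, g)) < 1e-12
```

For the biorthogonal BCS-SW pair the analogous identity involves H and the conjugate of H*, but the alternating-sign construction of the high-pass filters is the same.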
We also construct a new class of compactly supported wavelets based on the CDF method. Wavelets with compact support exist as long as the two-scale sequence of the related scaling function is finite. In this paper, we set H(ω) and H*(ω) to be of odd length with support symmetric about 0. The vanishing moment orders of H(ω) and H*(ω) are N and N*, respectively, and they can also be represented as follows:
$$ H(\omega) = \left(\cos\frac{\omega}{2}\right)^{2N} Q(\cos\omega), \tag{8} $$
$$ H^{*}(\omega) = \left(\cos\frac{\omega}{2}\right)^{2N^{*}} Q^{*}(\cos\omega), \tag{9} $$
where Q(cos ω) and Q*(cos ω) are polynomials in cos ω.
Let
$$ P\left(\sin^{2}\frac{\omega}{2}\right) = Q(\cos\omega)\,\overline{Q^{*}(\cos\omega)}, \tag{10} $$
and, writing y = sin²(ω/2), we also have:
$$ P(y) = \sum_{n=0}^{L-1}\binom{L-1+n}{n}\,y^{n}, \tag{11} $$
where L = N + N*.
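Equation (11) is the standard half-band (Daubechies/CDF) polynomial: it is the minimal-degree solution of the Bezout identity (1 − y)^L P(y) + y^L P(1 − y) = 1. A short illustrative check for L = 7, the setting used later in this paper:

```python
from math import comb

def P(y, L):
    # Equation (11): P(y) = sum_{n=0}^{L-1} C(L-1+n, n) * y^n
    return sum(comb(L - 1 + n, n) * y ** n for n in range(L))

L = 7  # L = N + N*, with N = 2 and N* = 5 in the paper's setting
coeffs = [comb(L - 1 + n, n) for n in range(L)]
assert coeffs == [1, 7, 28, 84, 210, 462, 924]

# Bezout identity satisfied by the half-band polynomial:
# (1 - y)^L * P(y) + y^L * P(1 - y) = 1 for all y
for y in [0.0, 0.25, 0.5, 0.9]:
    assert abs((1 - y) ** L * P(y, L) + y ** L * P(1 - y, L) - 1.0) < 1e-9
```

This identity is what guarantees that the factorization of P into Q and Q* in Equation (10) yields a perfect-reconstruction biorthogonal filter pair.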
From the time-domain expression of the two-scale equation corresponding to S(t), the low-pass filter of ψ(t) in the frequency domain can be obtained as follows:
$$ H(\omega) = \frac{\hat{S}(2\omega)}{\hat{S}(\omega)} = \frac{451 - 512\cos\frac{\omega}{8} + 64\cos\frac{\omega}{4}}{451 - 512\cos\frac{\omega}{16} + 64\cos\frac{\omega}{8}}\,\cos^{4}\frac{\omega}{2}. \tag{12} $$
From Equations (8), (10) and (12), we obtain N = 2, and
$$ Q(\cos\omega) = \frac{451 - 512\cos\frac{\omega}{8} + 64\cos\frac{\omega}{4}}{451 - 512\cos\frac{\omega}{16} + 64\cos\frac{\omega}{8}}, \tag{13} $$
$$ \overline{Q^{*}(\cos\omega)} = \frac{P\left(\sin^{2}\frac{\omega}{2}\right)}{Q(\cos\omega)}. \tag{14} $$
When N = 2 and L takes different values, for example L = 4, 5, 6, 7, we obtain multiple corresponding N*. Substituting these values into Equations (11), (9), and (14) and taking the inverse Fourier transform as in Algorithm 1, we obtain multiple groups h_n, h_n* of low-pass filter coefficients of the new biorthogonal spline wavelet, listed in Table 1. Considering the symmetry of the coefficients, we only give n = 0, 1, 2, 3, …, as shown in Table 1. In practical applications, the corresponding odd-length coefficients can be selected symmetrically for image processing.
Algorithm 1 BCS-SW Filter Algorithm
Input:
L: sum of the vanishing moment orders of H(ω) and H*(ω) in Equations (8) and (9); here L = 7;
P(sin²(·)): defined by Equation (11);
ω: axis angle (frequency variable);
n: integer, the subscript of the low-pass filter coefficient.
Output:
h_n: the set of low-pass filter coefficients of H(ω);
h_n*: the set of low-pass filter coefficients of H*(ω).
 1: function inverse_fourier_transform(f)
 2:   nm = 0:1:size(f,2)−1; j = √−1; ω = −π:0.01:π;
 3:   for i = 1:1:size(nm,2) do
 4:     ff = f .* exp(j·nm(i)·ω);
 5:     h_n(i) = (1/(2π))·real(trapz(ω, ff)); // trapz() computes the trapezoid-rule integral
 6:   end for
 7:   return h_n
 8: end function
 9: for k = 1:1:size(f,2)−1 do
10:   f(k,:) = Q(cos(ω)); // Q(cos(ω)) is defined by Equation (13)
11:   f*(k,:) = Q*(cos(ω)); // Q*(cos(ω)) is defined by Equation (14)
12:   h_n = inverse_fourier_transform(f);
13:   h_n* = inverse_fourier_transform(f*);
14: end for
15: return h_n and h_n*
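The core step of Algorithm 1 is recovering filter taps from a frequency response via a trapezoid-rule integral over [−π, π]. The following self-contained Python sketch reproduces that step on a toy frequency response with known coefficients (standing in for the actual Q and Q* of Equations (13) and (14)):

```python
import numpy as np

def inverse_fourier_transform(f_of_w, n_max):
    # h_n = (1 / (2*pi)) * integral_{-pi}^{pi} f(w) * exp(i*n*w) dw,
    # evaluated with the trapezoid rule, as in Algorithm 1
    w = np.linspace(-np.pi, np.pi, 4001)
    taps = []
    for n in range(n_max + 1):
        ff = f_of_w(w) * np.exp(1j * n * w)
        integral = np.sum((ff[1:] + ff[:-1]) / 2 * np.diff(w))
        taps.append(integral.real / (2 * np.pi))
    return taps

# Toy response with known coefficients h_0 = 0.5, h_{+-1} = 0.25,
# standing in for Q(cos w) / Q*(cos w) of the actual algorithm
f = lambda w: 0.5 + 0.25 * np.exp(-1j * w) + 0.25 * np.exp(1j * w)

h = inverse_fourier_transform(f, 1)
assert abs(h[0] - 0.5) < 1e-6
assert abs(h[1] - 0.25) < 1e-6
```

Because the integrand is periodic and smooth, the trapezoid rule on a full period is very accurate, which is why a simple uniform grid suffices in Algorithm 1.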
Due to g_n = (−1)^n h*_{1−n} and g*_n = (−1)^n h_{1−n}, from the data in Table 1 we can calculate the corresponding g_n, g*_n, the high-pass filter coefficients of ψ(t) and ψ*(t). The filter bank in the frequency domain is {H(ω), G(ω), H*(ω), G*(ω)}; the decomposition and reconstruction processes use two different sets of filters. Decomposition is performed with h_n* and g_n*, while reconstruction uses the other pair, h_n and g_n. For this reason, we call h_n* and g_n* the wavelet decomposition filters, and h_n and g_n the wavelet synthesis filters.

3.2. K-Layer Network

Compared to traditional wavelets, the BCS-SW combines the smoothness of the spline function with the localized accuracy of wavelet analysis [27]. In underwater image processing, due to the influence of the water body and the scattering of light, images are often affected by noise and blur. The BCS-SW is able to better capture the smooth parts of the image, helping to reduce noise and blur, and it provides better frequency localization, meaning that the spline wavelet can better capture the local details and structural information of the image. The BCS-SW generally has better time–frequency localization characteristics and can better adapt to the different scale and frequency features present in underwater images. Moreover, its gradients are more accurate and less prone to vanishing, which is beneficial for deep neural networks.
In this section, we propose the K-layer network (KLN) based on the BCS-SW. BCS-SW decomposition offers significant advantages for underwater image enhancement tasks: it separates high- and low-frequency information in images and can effectively remove or reduce noise caused by suspended particles in underwater images. In the KLN, the input image undergoes a K-layer decomposition. The encoder has two branches: one decomposes features with wavelets, and the other with convolutions. In the decoder, intermediate tensors of matching shape are concatenated during upsampling, and the output of each decoding step serves as the input of the next, which speeds up the network’s processing of features.
Specifically, the decomposition and reconstruction filters of the BCS-SW are truncated: let L = 7, N = 2, N* = 5; h_n keeps 7 coefficients symmetric about 0, and likewise g_n has length 7; h_n* keeps 21 coefficients symmetric about 0, and likewise g_n* has length 21.
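The symmetric truncation described above can be sketched as follows; the filter values here are hypothetical placeholders, since the actual BCS-SW coefficients come from Table 1:

```python
def truncate_symmetric(h, length):
    # h: dict mapping integer index n -> coefficient, symmetric about n = 0
    # length: odd number of taps to keep, centered on n = 0
    assert length % 2 == 1
    half = length // 2
    return [h.get(n, 0.0) for n in range(-half, half + 1)]

# Hypothetical symmetric filter h_n = 1 / (1 + |n|) for |n| <= 12
h = {n: 1.0 / (1 + abs(n)) for n in range(-12, 13)}

taps7 = truncate_symmetric(h, 7)    # as for h_n and g_n (L = 7, N = 2, N* = 5)
taps21 = truncate_symmetric(h, 21)  # as for h_n* and g_n*
assert len(taps7) == 7 and len(taps21) == 21
assert taps7 == taps7[::-1]  # symmetry about n = 0 is preserved
```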
As shown in Figure 3, the KLN decomposes the underwater image X based on the BCS-SW to obtain four wavelet coefficient sub-bands in each layer, for example, c_1, d_1^h, d_1^v, and d_1^d in the first layer, where c_1 is the approximate information, d_1^d the diagonal information, d_1^h the horizontal information, and d_1^v the vertical information of the original image. In subsequent layers, the low-frequency coefficient c_j of the previous layer is decomposed to obtain new wavelet coefficients following Equation (15):
$$ \begin{aligned} c_{j+1}(n_{1},n_{2}) &= \sum_{k_{1}}\sum_{k_{2}} h^{*}(2n_{1}-k_{1})\,h^{*}(2n_{2}-k_{2})\,c_{j}(k_{1},k_{2}), \\ d_{j+1}^{h}(n_{1},n_{2}) &= \sum_{k_{1}}\sum_{k_{2}} h^{*}(2n_{1}-k_{1})\,g^{*}(2n_{2}-k_{2})\,c_{j}(k_{1},k_{2}), \\ d_{j+1}^{v}(n_{1},n_{2}) &= \sum_{k_{1}}\sum_{k_{2}} g^{*}(2n_{1}-k_{1})\,h^{*}(2n_{2}-k_{2})\,c_{j}(k_{1},k_{2}), \\ d_{j+1}^{d}(n_{1},n_{2}) &= \sum_{k_{1}}\sum_{k_{2}} g^{*}(2n_{1}-k_{1})\,g^{*}(2n_{2}-k_{2})\,c_{j}(k_{1},k_{2}), \end{aligned} \tag{15} $$
where c_j(k_1, k_2) represents the low-frequency coefficient obtained at the jth layer; the (j+1)th wavelet decomposition is carried out on the jth-layer low-frequency coefficient. Specifically, the low-pass decomposition filter (lod) and the high-pass decomposition filter (hid) are applied to each row of c_j with downsampling at intervals, and then to each column in the same way, to obtain the (j+1)th-layer wavelet coefficients c_{j+1}(n_1, n_2), d_{j+1}^h(n_1, n_2), d_{j+1}^v(n_1, n_2), and d_{j+1}^d(n_1, n_2). We also use Equation (16) to represent this process.
$$ \left(c_{k}, d_{k}^{h}, d_{k}^{v}, d_{k}^{d}\right) = \begin{cases} W(X), & k = 1, \\ W(c_{k-1}), & k > 1, \end{cases} \tag{16} $$
where W(·) represents the wavelet decomposition based on the BCS-SW. At the same time, the KLN also performs K convolutions on the input underwater image X.
$$ x_{k} = F_{\mathrm{conv}}(x_{k-1}), \quad k > 0, \tag{17} $$
where x_0 = X and x_k represents the features obtained at the kth convolution layer; Equation (17) states that x_k is obtained by convolving x_{k−1}, the output of the previous layer.
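One level of the separable decomposition in Equation (15) can be sketched as follows. Orthonormal Haar filters are used as stand-ins for the BCS-SW analysis filters (whose coefficients are given in Table 1), and periodic boundary extension is assumed:

```python
import numpy as np

def analyze_1d(x, f):
    # One 1-D analysis step of Equation (15): y[n] = sum_m f[m] * x[(2n - m) mod N],
    # i.e. filtering followed by dyadic downsampling, with periodic extension
    N = len(x)
    return np.array([sum(f[m] * x[(2 * n - m) % N] for m in range(len(f)))
                     for n in range(N // 2)])

def filter_cols(a, f):
    # Apply analyze_1d down each column of a 2-D array
    return np.array([analyze_1d(col, f) for col in a.T]).T

def dwt2_level(c, lo, hi):
    # One level of the separable 2-D decomposition of Equation (15)
    rows_lo = np.array([analyze_1d(r, lo) for r in c])  # low-pass along n2
    rows_hi = np.array([analyze_1d(r, hi) for r in c])  # high-pass along n2
    ca = filter_cols(rows_lo, lo)  # approximation: h on n1, h on n2
    dh = filter_cols(rows_hi, lo)  # horizontal:    h on n1, g on n2
    dv = filter_cols(rows_lo, hi)  # vertical:      g on n1, h on n2
    dd = filter_cols(rows_hi, hi)  # diagonal:      g on n1, g on n2
    return ca, dh, dv, dd

# Orthonormal Haar filters as stand-ins for the BCS-SW analysis filters
s = 1 / np.sqrt(2)
lo, hi = [s, s], [s, -s]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
ca, dh, dv, dd = dwt2_level(x, lo, hi)

# Each sub-band halves the spatial size; the orthonormal stand-in
# filters preserve the total energy of the input
assert ca.shape == dh.shape == dv.shape == dd.shape == (4, 4)
energy = sum(float((b ** 2).sum()) for b in (ca, dh, dv, dd))
assert abs(energy - float((x ** 2).sum())) < 1e-9
```

With the biorthogonal BCS-SW filters, energy is not exactly preserved, but the same row-then-column structure applies; subsequent levels simply re-apply `dwt2_level` to `ca`, matching Equation (16).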
The KLN reconstructs the enhanced underwater images following Equation (18):
$$ r_{i} = \begin{cases} F_{\mathrm{upconv}}\left(x_{k}, c_{k}, d_{k}^{h}, d_{k}^{v}, d_{k}^{d}\right), & i = 1, \\ F_{\mathrm{upconv}}\left(r_{i-1}, x_{k+1-i}, c_{k+1-i}, d_{k+1-i}^{h}, d_{k+1-i}^{v}, d_{k+1-i}^{d}\right), & i > 1, \end{cases} \tag{18} $$
where F_upconv(·) represents reconstructing the enhanced image based on upsampling and convolution, and r_k is the enhanced underwater image, which is also the output of the KLN. We use the L1 loss, L2 loss, SSIM loss, and perceptual loss for reconstructing enhanced images. Although using more loss functions lengthened training, it also helped us obtain good experimental results, as shown:
$$ \mathrm{Loss}_{\mathrm{reconstruct}} = \mathrm{Loss}_{\mathrm{ssim}}(r_{k}, gt) + \mathrm{Loss}_{\mathrm{perceptual}}(r_{k}, gt) + \left\|r_{k}-gt\right\|_{1} + \left\|r_{k}-gt\right\|_{2}, \tag{19} $$
where gt is the ground-truth enhanced image.
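The combined reconstruction objective above (SSIM + perceptual + L1 + L2 terms) can be sketched in NumPy as follows. This is an illustration only: the SSIM term is simplified to a single global window, and the perceptual term, which in practice uses deep features, is replaced by a fixed random projection labeled as a stand-in:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    # Single-window SSIM (a simplification of the windowed SSIM used in practice)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def perceptual_stub(x, w):
    # Stand-in for deep perceptual features: a fixed random linear projection.
    # The paper's perceptual loss uses a pretrained network instead.
    return w @ x.ravel()

def reconstruction_loss(r, gt, w):
    # SSIM loss + perceptual loss + L1 + L2 terms, as in the objective above
    loss_ssim = 1.0 - ssim_global(r, gt)
    loss_perc = float(np.abs(perceptual_stub(r, w) - perceptual_stub(gt, w)).mean())
    l1 = float(np.abs(r - gt).mean())
    l2 = float(np.sqrt(((r - gt) ** 2).mean()))
    return loss_ssim + loss_perc + l1 + l2

rng = np.random.default_rng(0)
gt = rng.random((16, 16))
w = rng.standard_normal((8, gt.size))

assert abs(reconstruction_loss(gt, gt, w)) < 1e-12  # zero at the ground truth
assert reconstruction_loss(gt + 0.1, gt, w) > 0.0   # positive otherwise
```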
The KLN leverages the Adam optimization method, which is renowned for its efficiency and effectiveness in handling large datasets with high-dimensional parameter spaces. This method significantly enhances the convergence rates by adapting the learning rates based on the estimations of the first and second moments of the gradients, making it ideal for complex models like the KLN. In our implementation, the ‘concat’ command plays a crucial role, as it is employed to perform the concatenation step during the model’s layer fusion process. Specifically, this command facilitates the merging of features from different layers, which is vital for preserving and integrating diverse spatial and contextual information across the network. This technique not only enriches the model’s feature representation but also boosts its overall performance by enabling more comprehensive learning from multiple perspectives within the data.
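The 'concat' fusion described above amounts to channel-wise concatenation of convolutional features with wavelet sub-bands of matching spatial size; the shapes below are illustrative assumptions, not the KLN's actual channel counts:

```python
import numpy as np

# Hypothetical shapes: 32 convolutional feature channels and 4 wavelet
# sub-bands (ca, ch, cv, cd, one channel each) at the same 64x64 spatial size
x_conv = np.zeros((32, 64, 64))
subbands = np.zeros((4, 64, 64))

# Channel-wise concatenation merges the two branches into one tensor
fused = np.concatenate([x_conv, subbands], axis=0)
assert fused.shape == (36, 64, 64)
```

The requirement is only that the spatial dimensions match, which is why each decomposition layer is paired with a convolution layer of the corresponding spatial size.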

4. Experiments

4.1. Implementation Details

In this section, we introduce the quantitative and qualitative experiments on the BCS-SW and KLN. We verify the effectiveness of the BCS-SW and KLN on various underwater datasets. In particular, the KLN is trained in PyTorch. The training is performed on an NVIDIA A100-PCIE-40GB GPU; we use the Adam optimizer with a batch size of 25 and a learning rate of 1 × 10^−4. The training set, which includes 5000 images, is the combination of 800 images from UIEBD proposed by Li et al. [28], 3200 images from LSUI proposed by Peng et al. [29], and 1000 images from UIQS proposed by Liu et al. [30]. The test set includes UIEBD90 and UIQS. To evaluate the ability of the BCS-SW and KLN, we introduce the non-reference underwater image metrics UCIQE proposed by Yang and Sowmya [31] and UIQM proposed by Panetta et al. [32], and the full-reference metric PCQI proposed by Wang et al. [33]. UCIQE evaluates the quality of underwater images by calculating their color intensity, saturation, and contrast; the larger the value, the better the quality of the underwater image. UIQM evaluates the quality of underwater images by calculating their brightness, contrast, and saturation; the larger the value, the better the quality of the underwater image. PCQI evaluates the quality of the enhanced image by calculating the contrast difference between the enhanced image and the ground truth image in localized areas; the larger the value, the more similar the enhanced image is to the ground truth image.
We first compare the performance of the BCS-SW, Haar, Bior3.5, and DB2 wavelets on underwater image denoising and enhancement tasks on the UIEBD and LSUI underwater image datasets, as shown in Section 4.2. We then compare the KLN with other underwater image enhancement methods, including UDCP proposed by Drews et al. [34], GDCP proposed by Peng et al. [35], Ucolor proposed by Li et al. [36], MLLE proposed by Zhang et al. [37], TOPAL proposed by Jiang et al. [38], WWPF proposed by Zhang et al. [39], U-shape proposed by Peng et al. [29], and UIEBD and UIQS proposed by Liu et al. [30], as shown in Section 4.3.

4.2. BCS-SW vs. Other Wavelets in Underwater Image Related Tasks

Existing wavelet-based deep learning network models predominantly utilize Haar wavelets. The newly proposed BCS-SW has better local properties in the frequency domain than Haar and can better capture local features in images. It can represent complex image structures and features more accurately and maintain image details and contours better during image reconstruction, with continuity and smoothness. This means that more continuous and smoother results can be produced during image reconstruction, in contrast to the discontinuity of the Haar transform results, which may produce reconstructions with jagged edges.
The reconstruction of image approximation information after a one-layer decomposition based on the BCS-SW proceeds as follows: wavelet decomposition of the image yields the low-frequency information together with the horizontal, vertical, and diagonal high-frequency information; the low-frequency information of the first decomposition layer is then reconstructed to obtain the general appearance of the original image, and these appearances are compared across wavelets. As shown in Figure 4 and Table 2, the experimental outcomes demonstrate that the BCS-SW surpasses the Haar wavelet in terms of PSNR proposed by Korhonen and You [40] and SSIM proposed by Hore and Ziou [41], indicating a better performance in preserving the original image’s quality and structural integrity.
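The PSNR comparison above uses the standard definition PSNR = 10 log₁₀(MAX²/MSE) in decibels; a minimal sketch:

```python
import math

def psnr(img1, img2, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE), in dB; img1 and img2 are equal-size
    # sequences of pixel values on a [0, max_val] scale
    mse = sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)
    return 10.0 * math.log10(max_val ** 2 / mse)

# Two 4-pixel "images" differing by 16 gray levels everywhere: MSE = 256,
# giving PSNR = 10 * log10(255^2 / 256), roughly 24 dB
a = [100, 120, 140, 160]
b = [116, 136, 156, 176]
assert 24.0 < psnr(a, b) < 24.1
```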
As shown in Figure 5 and Table 3, we add Gaussian noise to the underwater images and adopt the wavelet threshold denoising method. The noisy image is first decomposed into two layers by the wavelet transform; the wavelet coefficients are then thresholded, i.e., coefficients whose magnitude falls below a chosen threshold are suppressed; and the image is finally reconstructed from the processed coefficients using Haar, Bior3.5, DB2, and BCS-SW, respectively. Because of the BCS-SW's representation capabilities, it usually produces better denoising results: its transform coefficients separate signal from noise more readily and are therefore better suited to removing noise from images. In Table 3, the comparison of PSNR and SSIM indicates that, compared to the other wavelets, the images processed by the BCS-SW are the closest to their corresponding unaltered images captured on land.
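A minimal sketch of this two-layer threshold denoising pipeline follows, again with the Haar filter pair standing in for the BCS-SW. Soft thresholding is assumed here for concreteness (the text does not specify hard vs. soft thresholding), and the helper names are illustrative:

```python
import numpy as np

def dwt2(x):
    """One level of the 2-D orthonormal Haar transform (stand-in filters)."""
    s = np.sqrt(2.0)
    lo, hi = (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s
    return ((lo[:, 0::2] + lo[:, 1::2]) / s, (lo[:, 0::2] - lo[:, 1::2]) / s,
            (hi[:, 0::2] + hi[:, 1::2]) / s, (hi[:, 0::2] - hi[:, 1::2]) / s)

def idwt2(ll, lh, hl, hh):
    s = np.sqrt(2.0)
    lo = np.empty((ll.shape[0], 2 * ll.shape[1])); hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = (ll + lh) / s, (ll - lh) / s
    hi[:, 0::2], hi[:, 1::2] = (hl + hh) / s, (hl - hh) / s
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2], x[1::2] = (lo + hi) / s, (lo - hi) / s
    return x

def soft(c, t):
    """Soft threshold: shrink coefficients toward zero, kill the small ones."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(img, t, levels=2):
    """Decompose `levels` times, threshold the detail bands, reconstruct."""
    ll, lh, hl, hh = dwt2(img)
    if levels > 1:
        ll = wavelet_denoise(ll, t, levels - 1)
    return idwt2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

np.random.seed(0)
clean = np.tile(np.linspace(0, 1, 16), (16, 1))        # smooth test "image"
noisy = clean + np.random.normal(0, 0.1, clean.shape)  # additive Gaussian noise
denoised = wavelet_denoise(noisy, t=0.2)
```

Noise spreads thinly across many detail coefficients while image structure concentrates in a few large ones, so thresholding the details removes most of the noise energy.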
In this section, we also show that when the underwater image is decomposed by a multi-layer wavelet transform, the decomposed signal based on the spline wavelet has lower noise, and the partially reconstructed image is closer to the original underwater image.

4.3. KLN vs. Other Underwater Image Enhancement Algorithms

As shown in Figure 6, the test results on UIEBD90 demonstrate significant improvements in the background quality of the underwater images processed by the KLN. In the images in the first and third rows in particular, a clearly visible change in background color can be observed when compared with the other processing methods. More importantly, this enhancement of the background color not only reveals more details in the background but also makes the subjects in the images more prominent, with color restoration becoming more natural and realistic. The second row of images shows scenes abundant with fish. After the background improvement by the KLN, the contrast between the background and the fish becomes more pronounced, making the target objects more discernible, and the texture of the fish is displayed more clearly. As shown in Table 4, although the scores in PCQI, UIQM, and UCIQE are not always the highest, the gap from the highest value is minimal. Other methods may perform best on certain metrics; for example, the Ucolor method achieves the highest score in UIQM, but its UCIQE and PCQI scores are lower than our method's. Furthermore, compared with the other methods, the images processed by our method achieve the highest PSNR and SSIM values. Considering all factors, our model presents the best overall performance.
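The full-reference PSNR and SSIM scores cited above follow their standard definitions; a minimal sketch for images scaled to [0, 1] is shown below. Note that the SSIM here is a simplified single-window variant (the standard metric averages the same quantity over local Gaussian windows), so its values are indicative only:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(x, y, peak=1.0):
    """SSIM computed over the whole image in a single window.

    The standard metric averages this quantity over local windows;
    the single-window form is used here for brevity."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, an image offset from its reference by 0.1 everywhere has an MSE of 0.01 and hence a PSNR of exactly 20 dB, while identical images score an SSIM of 1.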
As shown in Figure 7, a detailed examination of the images in the first and second rows of the collection, especially the starfish at their centres, reveals a significant color difference. These starfish should display a full orange-red hue in their natural state, but after being processed by different methods, their color rendition varies. Compared to the more subdued and greyish-orange appearance of the starfish processed by other methods, those processed by the KLN exhibit a more vivid, rich, and natural orange-red color. This stark contrast not only illustrates the KLN’s remarkable capability in restoring and enhancing the red spectrum colors of underwater images but also highlights its superiority in color authenticity. Further analysis of the data in Table 5 shows that the KLN ranks just below WWPF in the UCIQE metric, demonstrating its strong performance in improving the overall color quality of the underwater images. Notably, the KLN’s performance in UIQM and PCQI is also very close to the best, validating its efficacy in maintaining image color saturation, contrast, and brightness, while also emphasizing its exceptional ability to preserve image details and texture. These comprehensive performances make the KLN one of the best choices among various underwater image enhancement technologies for overall effectiveness.
When comparing our model with two other deep learning models, we observe significant differences in the number of parameters and computational complexity (FLOPs), as shown in Table 6. Our model has the highest number of parameters (57.24 M) and FLOPs (207.8 G), indicating that its network structure is the most complex and suited to tasks that require synthesizing a large amount of global and local information. In contrast, the U-shape model performs its task with fewer parameters (31.6 M) and FLOPs (26.11 G), which may be more effective in simple or real-time applications. The Semi-UIR model operates with an extremely low number of parameters (1.68 M) but relatively high FLOPs (36.44 G), reflecting its efficient use of parameters and making it suitable for environments with limited computational resources that nevertheless require high processing capability.
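For context, figures like those in Table 6 can be estimated from layer shapes alone. The back-of-the-envelope sketch below covers a single convolution layer and counts one multiply-accumulate as two FLOPs, a common but not universal convention; the function name and the example shapes are illustrative, not taken from the KLN:

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out, bias=True):
    """Parameter and FLOP estimate for a single k x k convolution layer
    producing a c_out x h_out x w_out feature map."""
    params = c_out * c_in * k * k + (c_out if bias else 0)
    macs = c_out * c_in * k * k * h_out * w_out  # multiply-accumulates
    flops = 2 * macs                             # 1 MAC = 2 FLOPs convention
    return params, flops

# Example: a 3x3 conv mapping a 3-channel 256x256 image to 64 channels.
params, flops = conv2d_cost(3, 64, 3, 256, 256)
print(params, flops / 1e9)  # 1792 parameters, ~0.226 GFLOPs
```

Summing such per-layer estimates over a network (plus the cost of any wavelet transforms) gives totals comparable to the parameter and FLOP counts reported in Table 6.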

5. Conclusions

In this work, we propose the BCS-SW and introduce the KLN model based on it. We substantiate the effectiveness of both the BCS-SW and the KLN through the theoretical analysis in Section 3. Further, in Section 4, we demonstrate that the BCS-SW surpasses other wavelets in terms of image decomposition, denoising, and reconstruction capabilities. Additionally, we validate the effectiveness of the KLN for underwater image processing tasks. Our approach provides a practical methodology that serves as a benchmark for integrating wavelet and deep learning techniques in underwater image enhancement and related efforts. However, integrating wavelet processing into the network incurs a higher computational demand than other methods, which limits our ability to further expand the depth and breadth of the KLN to improve its performance. In future research, we will focus on how to incorporate wavelet analysis into deep neural networks more conveniently, and we aim to continue exploring strategies that address image processing challenges through the fusion of wavelet and deep learning technologies.

Author Contributions

Conceptualization, D.Z. and Z.C.; methodology, D.Z. and Z.C.; software, D.Z.; validation, D.Z., Z.C., and D.H.; formal analysis, D.Z.; investigation, D.Z. and Z.C.; resources, D.Z.; data curation, D.Z. and D.H.; writing—original draft preparation, D.Z.; writing—review and editing, D.Z. and D.H.; visualization, D.Z. and D.H.; supervision, D.Z.; project administration, Z.C.; funding acquisition, D.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Philosophy and Social Sciences Planning Project of Zhuhai, 2023, grant number 2022ZDZX4061, and in part by the Undergraduate Universities Online Open Course Steering Project of Guangdong, 2022, grant number 2022ZXKC558.

Data Availability Statement

Data set in our manuscript can be obtained as follows: UIEB: https://li-chongyi.github.io/proj_benchmark.html (accessed on 28 November 2019); UIQS: https://github.com/dlut-dimt/Realworld-Underwater-Image-Enhancement-RUIE-Benchmark (accessed on 3 January 2020); LSUI: https://github.com/LintaoPeng/U-shape_Transformer_for_Underwater_Image_Enhancement (accessed on 18 May 2023).

Acknowledgments

The authors thank Zhuang Zhou and Fangli Sun for their careful review and advice.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Song, H.; Wang, R. Underwater Image Enhancement Based on Multi-Scale Fusion and Global Stretching of Dual-Model. Mathematics 2021, 9, 595. [Google Scholar] [CrossRef]
  2. Zhu, D. Underwater image enhancement based on the improved algorithm of dark channel. Mathematics 2023, 11, 1382. [Google Scholar] [CrossRef]
  3. Peng, Y.T.; Cosman, P.C. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, S.; Zhang, H.; Zhang, X.; Su, Y.; Wang, Z. Voiceprint Recognition under Cross-Scenario Conditions Using Perceptual Wavelet Packet Entropy-Guided Efficient-Channel-Attention–Res2Net–Time-Delay-Neural-Network Model. Mathematics 2023, 11, 4205. [Google Scholar] [CrossRef]
  5. Garai, S.; Paul, R.K.; Rakshit, D.; Yeasin, M.; Emam, W.; Tashkandy, Y.; Chesneau, C. Wavelets in combination with stochastic and machine learning models to predict agricultural prices. Mathematics 2023, 11, 2896. [Google Scholar] [CrossRef]
  6. Mallat, S.G. Multiresolution approximations and wavelet orthonormal bases of L2(R). Trans. Am. Math. Soc. 1989, 315, 69–87. [Google Scholar]
  7. Khan, S.; Ahmad, M.K. A study on B-spline wavelets and wavelet packets. Appl. Math. 2014, 5, 3001. [Google Scholar] [CrossRef]
  8. Cohen, A.; Daubechies, I.; Feauveau, J.C. Biorthogonal bases of compactly supported wavelets. Commun. Pure Appl. Math. 1992, 45, 485–560. [Google Scholar] [CrossRef]
  9. Olkkonen, H.; Olkkonen, J.T. Gamma splines and wavelets. J. Eng. 2013, 2013, 625364. [Google Scholar]
  10. Tavakoli, A.; Esmaeili, M. Construction of Dual Multiple Knot B-Spline Wavelets on Interval. Bull. Iran. Math. Soc. 2019, 45, 843–864. [Google Scholar] [CrossRef]
  11. Huang, H.; He, R.; Sun, Z.; Tan, T. Wavelet-srnet: A wavelet-based cnn for multi-scale face super resolution. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1689–1697. [Google Scholar]
  12. Kang, E.; Chang, W.; Yoo, J.; Ye, J.C. Deep convolutional framelet denosing for low-dose CT via wavelet residual network. IEEE Trans. Med. Imaging 2018, 37, 1358–1369. [Google Scholar] [CrossRef] [PubMed]
  13. Banham, M.R.; Katsaggelos, A.K. Spatially adaptive wavelet-based multiscale image restoration. IEEE Trans. Image Process. 1996, 5, 619–634. [Google Scholar] [CrossRef] [PubMed]
  14. Sree Sharmila, T.; Ramar, K.; Sree Renga Raja, T. Impact of applying pre-processing techniques for improving classification accuracy. Signal Image Video Process. 2014, 8, 149–157. [Google Scholar] [CrossRef]
  15. Singh, S.R. Enhancement of contrast and resolution of gray scale and color images by wavelet decomposition and histogram shaping and shifting. In Proceedings of the 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom), Greater Noida, India, 7–8 November 2014; pp. 300–305. [Google Scholar]
  16. Ma, Z.; Oh, C. A Wavelet-Based Dual-Stream Network for Underwater Image Enhancement. In Proceedings of the ICASSP 2022—2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; pp. 2769–2773. [Google Scholar] [CrossRef]
  17. Haar, A. Zur Theorie der orthogonalen Funktionensysteme; Georg-August-Universität: Göttingen, Germany, 1909. [Google Scholar]
  18. Perez, J.; Attanasio, A.C.; Nechyporenko, N.; Sanz, P.J. A deep learning approach for underwater image enhancement. In Proceedings of the Biomedical Applications Based on Natural and Artificial Computing: International Work-Conference on the Interplay between Natural and Artificial Computation, IWINAC 2017, Corunna, Spain, 19–23 June 2017; pp. 183–192. [Google Scholar]
  19. Wang, Y.; Zhang, J.; Cao, Y.; Wang, Z. A deep CNN method for underwater image enhancement. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1382–1386. [Google Scholar]
  20. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar]
  21. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2017, 3, 387–394. [Google Scholar] [CrossRef]
  22. Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 7159–7165. [Google Scholar]
  23. Chui, C.K.; Wang, J.Z. On compactly supported spline wavelets and a duality principle. Trans. Am. Math. Soc. 1992, 330, 903–915. [Google Scholar] [CrossRef]
  24. Chen, J.; Cai, Z. A New Class of Explicit Interpolatory Splines and Related Measurement Estimation. IEEE Trans. Signal Process. 2020, 68, 2799–2813. [Google Scholar] [CrossRef]
  25. Chui, C.K. An Introduction to Wavelets; Academic Press: Cambridge, MA, USA, 1992; Volume 1. [Google Scholar]
  26. Graps, A. An introduction to wavelets. IEEE Comput. Sci. Eng. 1995, 2, 50–61. [Google Scholar]
  27. Lamnii, A.; Nour, M.Y.; Zidna, A. A reverse non-stationary generalized B-splines subdivision scheme. Mathematics 2021, 9, 2628. [Google Scholar]
  28. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar]
  29. Peng, L.; Zhu, C.; Bian, L. U-shape transformer for underwater image enhancement. IEEE Trans. Image Process. 2023, 32, 3066–3079. [Google Scholar] [CrossRef] [PubMed]
  30. Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875. [Google Scholar] [CrossRef]
  31. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef] [PubMed]
  32. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
  33. Wang, S.; Ma, K.; Yeganeh, H.; Wang, Z.; Lin, W. A patch-structure representation method for quality assessment of contrast changed images. IEEE Signal Process. Lett. 2015, 22, 2387–2390. [Google Scholar]
  34. Drews, P.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 1–8 December 2013; pp. 825–830. [Google Scholar]
  35. Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868. [Google Scholar] [CrossRef] [PubMed]
  36. Li, C.; Anwar, S.; Hou, J.; Cong, R.; Guo, C.; Ren, W. Underwater image enhancement via medium transmission-guided multi-color space embedding. IEEE Trans. Image Process. 2021, 30, 4985–5000. [Google Scholar] [CrossRef] [PubMed]
  37. Zhang, W.; Zhuang, P.; Sun, H.H.; Li, G.; Kwong, S.; Li, C. Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement. IEEE Trans. Image Process. 2022, 31, 3997–4010. [Google Scholar] [CrossRef] [PubMed]
  38. Jiang, Z.; Li, Z.; Yang, S.; Fan, X.; Liu, R. Target oriented perceptual adversarial fusion network for underwater image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6584–6598. [Google Scholar] [CrossRef]
  39. Zhang, W.; Zhou, L.; Zhuang, P.; Li, G.; Pan, X.; Zhao, W.; Li, C. Underwater image enhancement via weighted wavelet visual perception fusion. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 2469–2483. [Google Scholar] [CrossRef]
  40. Korhonen, J.; You, J. Peak signal-to-noise ratio revisited: Is simple beautiful? In Proceedings of the 2012 Fourth International Workshop on Quality of Multimedia Experience, Melbourne, Australia, 5–7 July 2012; pp. 37–38. [Google Scholar]
  41. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
Figure 2. Analysis of the cubic special spline S ( t ) . (a) The graph of the S ( t ) . (b) The graph of the Fourier transform S ^ ( ω ) .
Figure 3. The structure of the K-layer network (KLN) based on BCS-SW. Our method is trained on a synthetic dataset. We obtain sub-band images in multiple frequency bands through the discrete wavelet transform (DWT); this process effectively decouples color cast and blurry details in underwater images.
Figure 4. The reconstructed approximation information after one-layer decomposition using the BCS-SW and Haar wavelet transforms. (a–d) are four underwater images before and after processing. The first column shows the original underwater images; the second column shows the reconstructed images after the Haar wavelet transform; the third column shows the reconstructed images after the BCS-SW transform.
Figure 5. Denoising of four underwater images using the Haar, Bior3.5, DB2, and BCS-SW wavelets. (a–d) are the four underwater images. The first column shows the original images, the second column the images with added Gaussian noise, and columns 3–6 the denoised images using Haar, Bior3.5, DB2, and BCS-SW, respectively.
Figure 6. The KLN vs. other underwater image enhancement methods on UIEBD. The PCQI of the KLN is the best, meaning that the processing method has achieved good results in improving the perceived color and quality of the underwater images.
Figure 7. The KLN vs. other underwater image enhancement methods on UIQS. The UCIQE of the KLN is the 2nd best, showing that the processing method has a good effect on improving the color quality of the underwater images.
Table 1. The low-pass filter coefficients h n , h n * , ( n = 0 , 1 , 2 . . . ) of the BCS-SW and its dual wavelet.
N | h_n | N* | h_n*
21417/1969, 737/1491, 389/2760,
5/907, −1/1415, 1/6154, −1/19,642,
1/51,496…
2813/731, 63/298, −419/1066, 91/1301,
81/992, −8/273, 3/689, −1/647,
1/1482, −1/3011, 1/5619…
3330/317, 38/141, −157/398, −3/488,
137/972, −13/669, −17/1051, 7/1205,
−1/1437, 1/4657, −1/12,080 …
4645/643, 96/319, −231/593, −45/733,
143/838, 3/610, −3/83, 7/1030,
5/1491, −1/819, 1/8087…
5483/493, 319/994, −589/1535, −65/634,
93/502, 23/749, −25/496, 1/369,
3/308, −1/479, −1/1397…
Table 2. The PSNR and SSIM of approximation information reconstruction after 1 layer decomposition using Haar and BCS-SW. The 1st best results are in bold. ↑: The higher, the better.

| Image | Haar PSNR | Haar SSIM | BCS-SW PSNR ↑ | BCS-SW SSIM ↑ |
| Figure 4a | 31.7752 | 0.9922 | 32.7565 | 0.9938 |
| Figure 4b | 36.6892 | 0.9820 | 40.8069 | 0.9928 |
| Figure 4c | 40.3920 | 0.9979 | 42.1529 | 0.9986 |
| Figure 4d | 27.8036 | 0.9121 | 30.3055 | 0.9477 |
Table 3. The PSNR and SSIM of denoising results of four underwater images with Gaussian noise intensity of 0.02 using different wavelets. The 1st best results are in bold. ↑: The higher, the better.

| Image | Haar PSNR | Haar SSIM | Bior3.5 PSNR | Bior3.5 SSIM | DB2 PSNR | DB2 SSIM | BCS-SW PSNR ↑ | BCS-SW SSIM ↑ |
| Figure 5a | 22.00 | 0.6166 | 21.80 | 0.5366 | 22.89 | 0.6664 | 23.36 | 0.6911 |
| Figure 5b | 22.11 | 0.5597 | 21.89 | 0.5521 | 23.12 | 0.6232 | 23.76 | 0.6547 |
| Figure 5c | 24.62 | 0.6159 | 23.53 | 0.5824 | 25.92 | 0.7137 | 27.00 | 0.7484 |
| Figure 5d | 22.31 | 0.5626 | 21.83 | 0.5335 | 23.19 | 0.6205 | 23.85 | 0.6528 |
Table 4. The mean UIQM, UCIQE, PCQI, PSNR, and SSIM scores of different methods on UIEBD90. The best results are in bold. ↑: The higher, the better. –: no score listed for the original images.

| Method | UIQM ↑ | UCIQE ↑ | PCQI ↑ | PSNR ↑ | SSIM ↑ |
| Original | 2.4745 | 0.5031 | – | – | – |
| CLAHE (1994) | 2.7409 | 0.5527 | 1.2036 | 23.9048 | 0.9114 |
| UCDP (2013) | 2.0180 | 0.5860 | 0.9324 | 14.0771 | 0.6379 |
| GDCP (2018) | 2.0995 | 0.6141 | 1.0161 | 15.5725 | 0.7581 |
| Ucolor (2021) | 3.0305 | 0.5709 | 1.1033 | 21.5026 | 0.8984 |
| MLLE (2022) | 1.9561 | 0.6216 | 1.2242 | 15.3689 | 0.5848 |
| TOPAL (2022) | 2.8994 | 0.5726 | 1.1377 | 22.2745 | 0.9028 |
| U-Shape (2023) | 3.0141 | 0.5748 | 1.0866 | 21.9905 | 0.8528 |
| WWPF (2023) | 2.3900 | 0.6341 | 1.2187 | 15.8602 | 0.6345 |
| Semi-UIR (2023) | 2.9503 | 0.6188 | 1.1704 | 19.3214 | 0.8005 |
| OURS | 2.8546 | 0.6089 | 1.2516 | 25.6162 | 0.9538 |
Table 5. The mean UIQM, UCIQE, and PCQI scores of different methods on UIQS. The 1st and 2nd best results are in bold and underline, respectively. ↑: The higher, the better. –: no score listed for the original images.

| Method | UIQM ↑ | UCIQE ↑ | PCQI ↑ |
| Original | 2.4641 | 0.4321 | – |
| CLAHE (1994) | 2.6795 | 0.4761 | 1.2268 |
| UCDP (2013) | 2.1315 | 0.5128 | 1.1041 |
| GDCP (2018) | 2.4002 | 0.5467 | 1.2127 |
| Ucolor (2021) | 2.9963 | 0.5317 | 1.2121 |
| MLLE (2022) | 2.4112 | 0.5829 | 1.3304 |
| TOPAL (2022) | 2.8644 | 0.5004 | 1.0899 |
| U-Shape (2023) | 2.9705 | 0.5456 | 1.2035 |
| WWPF (2023) | 2.6591 | 0.5958 | 1.3002 |
| Semi-UIR (2023) | 2.9582 | 0.5667 | 1.2911 |
| OURS | 2.7684 | 0.5897 | 1.2616 |
Table 6. The FLOPs (G) and total parameters of our method compared with others.

| | U-Shape | Semi-UIR | OURS |
| FLOPs (G) | 26.11 | 36.44 | 207.8 |
| Total parameters (M) | 31.6 | 1.68 | 57.24 |

Share and Cite

MDPI and ACS Style

Zhou, D.; Cai, Z.; He, D. A New Biorthogonal Spline Wavelet-Based K-Layer Network for Underwater Image Enhancement. Mathematics 2024, 12, 1366. https://doi.org/10.3390/math12091366
