Article

Image Denoising Method Relying on Iterative Adaptive Weight-Mean Filtering

School of Intelligent Manufacturing and Information, Jiangsu Shipping College, Nantong 226010, China
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(6), 1181; https://doi.org/10.3390/sym15061181
Submission received: 20 April 2023 / Revised: 18 May 2023 / Accepted: 26 May 2023 / Published: 1 June 2023

Abstract

Salt-and-pepper noise (SPN) is a common type of image noise that appears as randomly distributed white and black pixels in an image. It is also known as impulse noise or random noise. This paper introduces a new weighted average based on the Atangana–Baleanu fractional integral operator, a well-known idea in fractional calculus. Our proposed method also incorporates the concept of symmetry in the window mask structures, resulting in efficient and easily implementable filters for real-time applications. The distinguishing point of these techniques compared with similar methods is that we employ a novel idea for calculating the mean of regular pixels, rather than the mean formula used in existing methods along with the median. An iterative procedure is also provided to strengthen the removal of high-density noise. Moreover, we explore different approaches to image denoising and their effectiveness in removing noise from images. The symmetrical structure of this tool contributes to the ease and efficiency of these techniques. The outputs are compared in terms of peak signal-to-noise ratio, mean-square error, and structural similarity values. It was found that our proposed methodologies outperform several well-known compared methods. Moreover, they offer several advantages over alternative denoising techniques, including computational efficiency, the ability to eliminate noise while preserving image features, and real-time applicability.

1. Introduction

Symmetry is a fundamental concept in mathematics that pertains to the behavior of functions when subjected to specific transformations or operations. If a functional equation exhibits symmetry, applying any element from the group will result in a valid solution to the problem. This property can be advantageous in problem solving because finding one solution enables us to derive several other solutions by repeatedly applying the same transformation or operation. Recent decades have seen the widespread use of symmetry in the mathematical description of important practical problems such as adaptive control [1,2], machine learning [3], pattern recognition [4], finding analytical solutions for partial differential equations [5], signal advancement [6], passivity control [7], time series analysis [8], telecommunication networks [9], 3D imaging [10], nonlinear system identification [11], stochastic processes [12], optical fiber acoustics [13], UAV-based multiple oblique image flows [14], and mathematical modeling and prediction in infectious disease epidemiology [15,16]. For more applications, see [17,18,19,20,21,22,23].
Image processing involves various processes such as image denoising [24], image mosaic [25], image stitching [26], edge detecting [27], medical image registration [28], endoscopic imaging technology [29], depth estimation [30,31], feature extraction [32,33], classifying underwater images [34], image matching [35], and image inpainting [36]. It is generally acknowledged that symmetry is also a reliable tool in image processing for various purposes such as image compression, object recognition, shape detection, and image restoration. In image compression, the symmetric properties of images are utilized to reduce storage or transmission requirements. By exploiting symmetrical patterns, only a fraction of the original image data needs to be stored while the rest can be reconstructed using mirroring or other techniques. In addition, in object recognition and shape detection, the identification and analysis of symmetric patterns and shapes in images are essential for the accurate classification and detection of objects. For example, symmetry-based algorithms can detect deviations from expected symmetric patterns, which can help in identifying potential defects or abnormalities in an image. Symmetry can also be employed to improve the visual quality of images by removing distortions or artifacts. Symmetric image processing methods can help in restoring distorted images to their original forms by performing operations such as bilinear interpolation or mirror image reflections.
There have been notable advancements in the development of image-denoising algorithms in recent years. Noise in an image can stem from various factors, with poor lighting conditions being a frequent culprit that results in low contrast and a lack of detail. Camera settings, including high ISO values or long exposure times, can also cause noise and lead to grainy or blurry images. Furthermore, noise may be introduced during image transmission or storage. Image denoising has been a topic of interest in the field of image processing for many years [37]. With the increasing use of digital images in various fields, the need for high-quality images has become more important than ever before. However, images captured in real-world scenarios are often affected by noise, which can reduce the quality of the image and make it difficult to extract useful information [38]. In [39], a median-based filter was designed to remove SPN from digital images. The authors of [40] proposed an improved image-denoising algorithm based on the TV model. The concept of local fractional entropy was applied in [41] to design an efficient fractional-based mask in image denoising. Recently, deep learning-based methods have shown great promise in image denoising [42,43]. These methods use neural networks to learn the underlying structure of the noise and to remove it from the image. One popular approach is the use of convolutional neural networks, which are effective in removing various types of noise from images [44]. Another approach is the use of generative adversarial networks for image denoising [45]. These techniques consist of two neural networks: a generator network that generates fake images and a discriminator network that tries to distinguish between real and fake images. By training these networks together, these networks can learn to generate high-quality images that are free from noise. One of the most traditional methods for image denoising includes filters such as median filters [46] and mean filters [47] and using symmetric window masks in image processing. These filters can also be customized to target specific types of noise, such as Gaussian or salt-and-pepper noise. However, these methods have limitations when it comes to preserving important image details and textures [48,49,50,51]. Notably, the context of symmetry is often present in image processing filter masks used for image denoising. For example, the popular Gaussian filter mask [52] is rotationally symmetric, meaning that it produces the same result when rotated around its center point. This symmetry helps to ensure that the filter produces consistent results across the image and reduces the computational complexity of the denoising operation. Other denoising filters, such as median filters, may not have rotational symmetry but may have reflectional symmetry, which also helps to ensure consistent results and to reduce computational complexity.
The primary objective of this paper is to introduce and to elucidate a series of original techniques that leverage the utilization of Atangana–Baleanu fractional operators with noninteger orders for the purposes of mitigating salt-and-pepper noise from digital images. Through our study, we aim to offer a comprehensive analysis of the efficacy and potential applications of these novel techniques in the domain of image denoising. To the best of our knowledge, the approach proposed in this study has not been previously explored in the existing literature. Based on the findings of our experimental analysis, we contend that this novel method holds significant promise as a viable solution for creating effective filters in the domain of image denoising. The general structure of this article is as follows. In the next section, we will review some basic definitions related to fractional differential calculus. Different structures for filters in image denoising that are used in this article are designed in the third section of the article. Moreover, an overview of some discretizations for the fractional integral operator of the Atangana–Baleanu type will be discussed in Section 4. The main algorithm of the paper is presented in Section 5. Numerical simulations and comparisons of results are given in Section 6. In conclusion, a summary of key findings and insights gleaned from our investigation is presented in the final section of this article. These conclusions serve to encapsulate the key takeaways from our study and offer valuable insights for future research efforts in the field of image denoising.

2. A Summary of Some Well-Known Fractional Operators

This section includes a short overview of some basic definitions presented in fractional calculus, which are widely used in the literature.
The Liouville–Caputo derivative [53]:
$${}^{\mathrm{LC}}D^{\wp}H(\tau)=\frac{1}{\Gamma(1-\wp)}\int_{0}^{\tau}(\tau-\phi)^{-\wp}\,\dot{H}(\phi)\,d\phi,\qquad 0<\wp\le 1.$$
The Caputo–Fabrizio derivative [54]:
$${}^{\mathrm{CF}}D^{\wp}H(\tau)=\frac{(2-\wp)\,S(\wp)}{2(1-\wp)}\int_{0}^{\tau}\exp\!\left(-\frac{\wp(\tau-\phi)}{1-\wp}\right)\dot{H}(\phi)\,d\phi,\qquad 0<\wp<1,$$
where S(℘) = 2/(2 − ℘).
The Atangana–Baleanu fractional derivative in the Caputo sense [55]:
$${}^{\mathrm{ABC}}D^{\wp}H(\tau)=\frac{J(\wp)}{1-\wp}\int_{0}^{\tau}\mathrm{ML}_{\wp}\!\left(-\frac{\wp(\tau-\phi)^{\wp}}{1-\wp}\right)\dot{H}(\phi)\,d\phi,\qquad 0<\wp\le 1,$$
where ML_℘(·) stands for the well-known Mittag–Leffler function given by ML_℘(τ) = Σ_{k=0}^{∞} τ^k / Γ(℘k + 1).
The Atangana–Baleanu fractional integral in the Caputo sense [55]:
$${}^{\mathrm{AB}}I^{\wp}H(\tau)=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{\Gamma(\wp)\,J(\wp)}\int_{0}^{\tau}H(\phi)(\tau-\phi)^{\wp-1}\,d\phi,\qquad 0<\wp\le 1,$$
where J(·) is a function defined by J(℘) = 1 − ℘ + ℘/Γ(℘).
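For readers who want to experiment with these operators numerically, the normalization function J(℘) and a truncated Mittag–Leffler series are straightforward to evaluate. The following minimal Python sketch is our own illustration (not part of the original paper); the truncation at 50 terms and the sample arguments are arbitrary choices.

```python
# Minimal sketch (ours): evaluating J(p) = 1 - p + p/Gamma(p) and a truncated
# Mittag-Leffler series ML_p(tau) = sum_k tau^k / Gamma(p*k + 1).
import math

def J(p: float) -> float:
    return 1.0 - p + p / math.gamma(p)

def mittag_leffler(p: float, tau: float, terms: int = 50) -> float:
    return sum(tau ** k / math.gamma(p * k + 1) for k in range(terms))

print(J(0.95))                     # normalization at the order used in Section 6
print(mittag_leffler(0.95, -0.5))  # truncated ML value at a sample argument
```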

3. An Overview of the Atangana–Baleanu Fractional Masks

Let us assume that a given fractional integral can be approximated at a point, using unit time-step length, in the following discrete form
$$I^{\wp}H(\tau)\approx\rho_{0}H(\tau)+\rho_{1}H(\tau-1)+\rho_{2}H(\tau-2)+\rho_{3}H(\tau-3)+\rho_{4}H(\tau-4)+\rho_{5}H(\tau-5)+\cdots,$$
where ρ_0, ρ_1, …, ρ_5 are the first few coefficients of the corresponding expansion of the fractional operator. Further, this idea can also be utilized in a multivariate case such as
$$\begin{aligned}{}_{x}I^{\wp}H(x,y)&\approx\rho_{0}H(x,y)+\rho_{1}H(x-1,y)+\rho_{2}H(x-2,y)+\rho_{3}H(x-3,y)+\rho_{4}H(x-4,y)+\rho_{5}H(x-5,y)+\cdots,\\{}_{y}I^{\wp}H(x,y)&\approx\rho_{0}H(x,y)+\rho_{1}H(x,y-1)+\rho_{2}H(x,y-2)+\rho_{3}H(x,y-3)+\rho_{4}H(x,y-4)+\rho_{5}H(x,y-5)+\cdots.\end{aligned}$$
The obtained symmetric coefficients can be used in the design of masks in various applications of image processing. One possible arrangement of these masks, for different window dimensions, is the following:
  • For a 3 × 3 fractional integral mask, we introduce the following symmetric window mask
$$\Omega_{3}=\big[\omega^{3}_{i,j}\big]:=\begin{pmatrix}
\rho_{1} & \rho_{1} & \rho_{1}\\
\rho_{1} & 8\rho_{0} & \rho_{1}\\
\rho_{1} & \rho_{1} & \rho_{1}
\end{pmatrix}.$$
  • For a 5 × 5 fractional integral mask, we construct the following symmetric integral mask
$$\Omega_{5}=\big[\omega^{5}_{i,j}\big]:=\begin{pmatrix}
\rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2}\\
\rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2}\\
\rho_{2} & \rho_{1} & 8\rho_{0} & \rho_{1} & \rho_{2}\\
\rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2}\\
\rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2}
\end{pmatrix}.$$
  • In addition, for a 7 × 7 fractional mask, the following symmetric structure is considered
$$\Omega_{7}=\big[\omega^{7}_{i,j}\big]:=\begin{pmatrix}
\rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3}\\
\rho_{3} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{3}\\
\rho_{3} & \rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2} & \rho_{3}\\
\rho_{3} & \rho_{2} & \rho_{1} & 8\rho_{0} & \rho_{1} & \rho_{2} & \rho_{3}\\
\rho_{3} & \rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2} & \rho_{3}\\
\rho_{3} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{3}\\
\rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3}
\end{pmatrix}.$$
  • For a 9 × 9 fractional mask, the following symmetric window mask is proposed
$$\Omega_{9}=\big[\omega^{9}_{i,j}\big]:=\begin{pmatrix}
\rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4}\\
\rho_{4} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{4}\\
\rho_{4} & \rho_{3} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{3} & \rho_{4}\\
\rho_{4} & \rho_{3} & \rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2} & \rho_{3} & \rho_{4}\\
\rho_{4} & \rho_{3} & \rho_{2} & \rho_{1} & 8\rho_{0} & \rho_{1} & \rho_{2} & \rho_{3} & \rho_{4}\\
\rho_{4} & \rho_{3} & \rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2} & \rho_{3} & \rho_{4}\\
\rho_{4} & \rho_{3} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{3} & \rho_{4}\\
\rho_{4} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{4}\\
\rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4}
\end{pmatrix}.$$
  • Moreover, an 11 × 11 fractional mask can be constructed similarly in a symmetric form as
$$\Omega_{11}=\big[\omega^{11}_{i,j}\big]:=\begin{pmatrix}
\rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{3} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{3} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{3} & \rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2} & \rho_{3} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{3} & \rho_{2} & \rho_{1} & 8\rho_{0} & \rho_{1} & \rho_{2} & \rho_{3} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{3} & \rho_{2} & \rho_{1} & \rho_{1} & \rho_{1} & \rho_{2} & \rho_{3} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{3} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{2} & \rho_{3} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{3} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{4} & \rho_{5}\\
\rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5} & \rho_{5}
\end{pmatrix}.$$
The coefficients ρ in these filters will all be determined according to the results of the next section. In addition, the proposed higher-order fractional filters are used in the rest of the article, especially in the case of high noise in the images.
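As a concrete illustration of the ring structure of these masks, the short Python sketch below (our own, not code from the paper) assembles Ω_{2r+1} from a list of coefficients ρ_0, …, ρ_r: entries at Chebyshev distance d ≥ 1 from the center receive ρ_d, and the center receives 8ρ_0, as in the layouts above. The sample coefficients used in the example call are hypothetical placeholders.

```python
# Sketch (ours): building the symmetric window mask Omega_{2r+1} of Section 3.
import numpy as np

def build_mask(rho):
    """Build the (2r+1)x(2r+1) symmetric mask from coefficients rho[0..r]."""
    r = len(rho) - 1
    size = 2 * r + 1
    omega = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            d = max(abs(i - r), abs(j - r))     # Chebyshev distance to the center
            omega[i, j] = 8 * rho[0] if d == 0 else rho[d]
    return omega

# Example: a 5x5 mask from hypothetical coefficients rho_0, rho_1, rho_2.
print(build_mask([0.5, 0.3, 0.1]))
```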

4. Some Discretizations in Determining the Approximation of the AB Integral Operator

4.1. Fractional Mask Based on the Grunwald–Letnikov Idea (AB1)

Definition 1. 
One of the most common discrete forms for derivatives of fractional order, which has many different applications, is the Grunwald–Letnikov (GL) derivative, with the following definition [56]
$${}^{\mathrm{GL}}D^{\alpha}H(\tau)=\frac{1}{\Gamma(-\alpha)}\int_{0}^{\tau}\frac{H(\phi)}{(\tau-\phi)^{1+\alpha}}\,d\phi=\lim_{\delta\to 0}\delta^{-\alpha}\Big[H(\tau)+(-\alpha)H(\tau-\delta)+\frac{(-\alpha)(-\alpha+1)}{2}H(\tau-2\delta)+\cdots+\frac{(-1)^{N}\Gamma(\alpha+1)}{N!\,\Gamma(\alpha-N+1)}H(\tau-N\delta)\Big],$$
where Γ(z) = ∫₀^∞ exp(−ν) ν^(z−1) dν is the well-known Gamma function, α ∈ ℝ⁺, and N = [τ/δ].
  • Using Equation (7) with ℘ = −α > 0, the corresponding integral definition of Grunwald–Letnikov is obtained as
$${}^{\mathrm{GL}}I^{\wp}H(\tau)=\frac{1}{\Gamma(\wp)}\int_{0}^{\tau}H(\phi)(\tau-\phi)^{\wp-1}\,d\phi\approx\lim_{\delta\to 0}\delta^{\wp}\Big[H(\tau)+\wp\,H(\tau-\delta)+\frac{\wp(\wp+1)}{2}H(\tau-2\delta)+\cdots+\frac{\Gamma(\wp+N)}{N!\,\Gamma(\wp)}H(\tau-N\delta)\Big].$$
Now, reconsider the definition of the AB fractional integral defined in Equation (4), as
$${}^{\mathrm{AB}}I^{\wp}H(\tau)=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{\Gamma(\wp)\,J(\wp)}\int_{0}^{\tau}H(\phi)(\tau-\phi)^{\wp-1}\,d\phi.$$
A closer look at Equation (8) reveals that
$${}^{\mathrm{AB}}I^{\wp}H(\tau)=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{J(\wp)}\,{}^{\mathrm{GL}}I^{\wp}H(\tau).$$
Thus, we can write
$$\begin{aligned}{}^{\mathrm{AB}}I^{\wp}H(\tau)&=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{\Gamma(\wp)\,J(\wp)}\int_{0}^{\tau}H(\phi)(\tau-\phi)^{\wp-1}\,d\phi\\&\approx\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp\,\delta^{\wp}}{J(\wp)}\Big[H(\tau)+\wp\,H(\tau-\delta)+\frac{\wp(\wp+1)}{2}H(\tau-2\delta)+\cdots\Big].\end{aligned}$$
In this way, it reads
$$\begin{aligned}{}_{x}I^{\wp}_{\mathrm{AB1}}H(x,y)\approx{}&\frac{1}{J(\wp)}H(x,y)+\frac{\wp^{2}}{J(\wp)}H(x-1,y)+\frac{\wp^{3}+\wp^{2}}{2J(\wp)}H(x-2,y)+\frac{\wp^{4}+3\wp^{3}+2\wp^{2}}{6J(\wp)}H(x-3,y)\\&+\frac{\wp^{5}+6\wp^{4}+11\wp^{3}+6\wp^{2}}{24J(\wp)}H(x-4,y)+\frac{\wp^{6}+10\wp^{5}+35\wp^{4}+50\wp^{3}+24\wp^{2}}{120J(\wp)}H(x-5,y)+\cdots,\\{}_{y}I^{\wp}_{\mathrm{AB1}}H(x,y)\approx{}&\frac{1}{J(\wp)}H(x,y)+\frac{\wp^{2}}{J(\wp)}H(x,y-1)+\frac{\wp^{3}+\wp^{2}}{2J(\wp)}H(x,y-2)+\frac{\wp^{4}+3\wp^{3}+2\wp^{2}}{6J(\wp)}H(x,y-3)\\&+\frac{\wp^{5}+6\wp^{4}+11\wp^{3}+6\wp^{2}}{24J(\wp)}H(x,y-4)+\frac{\wp^{6}+10\wp^{5}+35\wp^{4}+50\wp^{3}+24\wp^{2}}{120J(\wp)}H(x,y-5)+\cdots.\end{aligned}$$
Hence, the coefficients of the Atangana–Baleanu fractional integral expansion are determined as follows
$$\rho_{0}=\frac{1}{J(\wp)},\quad\rho_{1}=\frac{\wp^{2}}{J(\wp)},\quad\rho_{2}=\frac{\wp^{3}+\wp^{2}}{2J(\wp)},\quad\rho_{3}=\frac{\wp^{4}+3\wp^{3}+2\wp^{2}}{6J(\wp)},\quad\rho_{4}=\frac{\wp^{5}+6\wp^{4}+11\wp^{3}+6\wp^{2}}{24J(\wp)},\quad\rho_{5}=\frac{\wp^{6}+10\wp^{5}+35\wp^{4}+50\wp^{3}+24\wp^{2}}{120J(\wp)}.$$
Using the coefficients in Equation (13), the so-called fractional AB1 masks of different sizes, namely Ω₃, Ω₅, Ω₇, Ω₉, and Ω₁₁, can be characterized.
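As a hedged illustration, the AB1 coefficients above (unit step) can be generated from the Grunwald–Letnikov weights as ρ_0 = 1/J(℘) and ρ_k = ℘ Γ(℘ + k)/(k! Γ(℘) J(℘)) for k ≥ 1, which expands to the polynomials in Equation (13). A minimal Python sketch (ours, not the authors' code):

```python
# Sketch (ours): AB1 coefficients rho_0..rho_n with unit step length.
import math

def J(p):
    return 1.0 - p + p / math.gamma(p)

def ab1_coefficients(p, n=5):
    rho = [1.0 / J(p)]
    for k in range(1, n + 1):
        rho.append(p * math.gamma(p + k) / (math.factorial(k) * math.gamma(p) * J(p)))
    return rho

print(ab1_coefficients(0.95))   # rho_0 ... rho_5 for the order used in Section 6
```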

4.2. Fractional Mask Based on the Toufik–Atangana Idea (AB2)

The following iterative scheme for approximating the AB integral operator is suggested [57]
$${}^{\mathrm{AB}}I^{\wp}H(\tau_{n})=\frac{1-\wp}{J(\wp)}H(\tau_{n})+\frac{\wp}{J(\wp)}\sum_{s=0}^{n}\bigg(\frac{h^{\wp}H(\tau_{s})}{\Gamma(\wp+2)}\Big[(n+1-s)^{\wp}(n-s+2+\wp)-(n-s)^{\wp}(n-s+2+2\wp)\Big]-\frac{h^{\wp}H(\tau_{s-1})}{\Gamma(\wp+2)}\Big[(n+1-s)^{\wp+1}-(n-s)^{\wp}(n-s+1+\wp)\Big]\bigg).$$
Hence, Equation (14) can be rewritten as
$$\begin{aligned}{}^{\mathrm{AB}}I^{\wp}H(\tau_{n})={}&\frac{(1-\wp)\Gamma(\wp+2)+\wp h^{\wp}(\wp+2)}{J(\wp)\Gamma(\wp+2)}H(\tau_{n})+\frac{\wp h^{\wp}\big(2^{\wp}\wp+3\cdot 2^{\wp}-2\wp-4\big)}{J(\wp)\Gamma(\wp+2)}H(\tau_{n-1})\\&+\frac{\wp h^{\wp}\big(3^{\wp}\wp+4\cdot 3^{\wp}-2^{\wp+1}\wp-6\cdot 2^{\wp}+\wp+2\big)}{J(\wp)\Gamma(\wp+2)}H(\tau_{n-2})+\cdots.\end{aligned}$$
Thus, we have the following forms
$$\begin{aligned}{}_{x}I^{\wp}_{\mathrm{AB2}}H(x,y)\approx{}&\frac{(1-\wp)\Gamma(\wp+2)+\wp(\wp+2)}{J(\wp)\Gamma(\wp+2)}H(x,y)+\frac{2^{\wp}\wp^{2}+3\cdot 2^{\wp}\wp-2\wp^{2}-4\wp}{J(\wp)\Gamma(\wp+2)}H(x-1,y)\\&+\frac{3^{\wp}\wp^{2}+4\cdot 3^{\wp}\wp-2^{\wp+1}\wp^{2}-6\cdot 2^{\wp}\wp+\wp^{2}+2\wp}{J(\wp)\Gamma(\wp+2)}H(x-2,y)\\&+\frac{4^{\wp}\wp^{2}+5\cdot 4^{\wp}\wp-2\cdot 3^{\wp}\wp^{2}-8\cdot 3^{\wp}\wp+2^{\wp}\wp^{2}+3\cdot 2^{\wp}\wp}{J(\wp)\Gamma(\wp+2)}H(x-3,y)\\&+\frac{5^{\wp}\wp^{2}+6\cdot 5^{\wp}\wp-2\cdot 4^{\wp}\wp^{2}-10\cdot 4^{\wp}\wp+3^{\wp}\wp^{2}+4\cdot 3^{\wp}\wp}{J(\wp)\Gamma(\wp+2)}H(x-4,y)\\&+\frac{6^{\wp}\wp^{2}+7\cdot 6^{\wp}\wp-2\cdot 5^{\wp}\wp^{2}-12\cdot 5^{\wp}\wp+4^{\wp}\wp^{2}+5\cdot 4^{\wp}\wp}{J(\wp)\Gamma(\wp+2)}H(x-5,y)+\cdots,\end{aligned}$$
and the analogous expansion holds for the y direction, with H(x, y−1), …, H(x, y−5) in place of H(x−1, y), …, H(x−5, y).
Therefore, the coefficients used in the so-called AB2 masks of different sizes will be determined as follows
$$\begin{aligned}\rho_{0}&=\frac{(1-\wp)\Gamma(\wp+2)+\wp(\wp+2)}{J(\wp)\Gamma(\wp+2)}, &\rho_{1}&=\frac{2^{\wp}\wp^{2}+3\cdot 2^{\wp}\wp-2\wp^{2}-4\wp}{J(\wp)\Gamma(\wp+2)},\\\rho_{2}&=\frac{3^{\wp}\wp^{2}+4\cdot 3^{\wp}\wp-2^{\wp+1}\wp^{2}-6\cdot 2^{\wp}\wp+\wp^{2}+2\wp}{J(\wp)\Gamma(\wp+2)}, &\rho_{3}&=\frac{4^{\wp}\wp^{2}+5\cdot 4^{\wp}\wp-2\cdot 3^{\wp}\wp^{2}-8\cdot 3^{\wp}\wp+2^{\wp}\wp^{2}+3\cdot 2^{\wp}\wp}{J(\wp)\Gamma(\wp+2)},\\\rho_{4}&=\frac{5^{\wp}\wp^{2}+6\cdot 5^{\wp}\wp-2\cdot 4^{\wp}\wp^{2}-10\cdot 4^{\wp}\wp+3^{\wp}\wp^{2}+4\cdot 3^{\wp}\wp}{J(\wp)\Gamma(\wp+2)}, &\rho_{5}&=\frac{6^{\wp}\wp^{2}+7\cdot 6^{\wp}\wp-2\cdot 5^{\wp}\wp^{2}-12\cdot 5^{\wp}\wp+4^{\wp}\wp^{2}+5\cdot 4^{\wp}\wp}{J(\wp)\Gamma(\wp+2)}.\end{aligned}$$
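The following Python sketch is our own reading of Equation (14) with h = 1 (not the authors' code): it collects the weight of H(τ_{n−k}) directly from the Toufik–Atangana scheme, and expanding these expressions reproduces the closed forms above.

```python
# Sketch (ours): AB2 coefficients obtained by grouping the terms of Eq. (14), h = 1.
import math

def J(p):
    return 1.0 - p + p / math.gamma(p)

def ab2_coefficients(p, n=5):
    g = math.gamma(p + 2)
    rho = [(1 - p) / J(p) + p * (p + 2) / (J(p) * g)]
    for k in range(1, n + 1):
        num = ((k + 1) ** p * (k + p + 2) - k ** p * (k + 2 * p + 2)
               - k ** (p + 1) + (k - 1) ** p * (k + p))
        rho.append(p * num / (J(p) * g))
    return rho

print(ab2_coefficients(0.95))
```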

4.3. Fractional Mask Based on Euler’s Method Idea (AB3)

Another possible approximation for the AB fractional integral is derived from Euler's method in the form of the following iterative scheme [58]
$${}^{\mathrm{AB}}I^{\wp}H(\tau_{n})=\frac{1-\wp}{J(\wp)}H(\tau_{n})+\frac{\wp\,h^{\wp}}{J(\wp)\Gamma(\wp+1)}\sum_{s=0}^{n-1}\theta_{n,s}\,H(\tau_{s}),$$
where
$$\theta_{n,s}=(n-s)^{\wp}-(n-s-1)^{\wp}.$$
Equation (18) can be reformulated in the following equivalent manner
$${}^{\mathrm{AB}}I^{\wp}H(\tau_{n})=\frac{1-\wp}{J(\wp)}H(\tau_{n})+\frac{\wp}{J(\wp)\Gamma(\wp+1)}H(\tau_{n-1})+\frac{\wp\,(2^{\wp}-1)}{J(\wp)\Gamma(\wp+1)}H(\tau_{n-2})+\cdots.$$
Thus, it reads
$$\begin{aligned}{}_{x}I^{\wp}_{\mathrm{AB3}}H(x,y)\approx{}&\frac{1-\wp}{J(\wp)}H(x,y)+\frac{\wp}{J(\wp)\Gamma(\wp+1)}H(x-1,y)+\frac{\wp\,(2^{\wp}-1)}{J(\wp)\Gamma(\wp+1)}H(x-2,y)+\frac{\wp\,(3^{\wp}-2^{\wp})}{J(\wp)\Gamma(\wp+1)}H(x-3,y)\\&+\frac{\wp\,(4^{\wp}-3^{\wp})}{J(\wp)\Gamma(\wp+1)}H(x-4,y)+\frac{\wp\,(5^{\wp}-4^{\wp})}{J(\wp)\Gamma(\wp+1)}H(x-5,y)+\cdots,\\{}_{y}I^{\wp}_{\mathrm{AB3}}H(x,y)\approx{}&\frac{1-\wp}{J(\wp)}H(x,y)+\frac{\wp}{J(\wp)\Gamma(\wp+1)}H(x,y-1)+\frac{\wp\,(2^{\wp}-1)}{J(\wp)\Gamma(\wp+1)}H(x,y-2)+\frac{\wp\,(3^{\wp}-2^{\wp})}{J(\wp)\Gamma(\wp+1)}H(x,y-3)\\&+\frac{\wp\,(4^{\wp}-3^{\wp})}{J(\wp)\Gamma(\wp+1)}H(x,y-4)+\frac{\wp\,(5^{\wp}-4^{\wp})}{J(\wp)\Gamma(\wp+1)}H(x,y-5)+\cdots.\end{aligned}$$
Therefore, the coefficients used in the so-called AB3 masks of different sizes will be determined as follows
$$\rho_{0}=\frac{1-\wp}{J(\wp)},\quad\rho_{1}=\frac{\wp}{J(\wp)\Gamma(\wp+1)},\quad\rho_{2}=\frac{\wp\,(2^{\wp}-1)}{J(\wp)\Gamma(\wp+1)},\quad\rho_{3}=\frac{\wp\,(3^{\wp}-2^{\wp})}{J(\wp)\Gamma(\wp+1)},\quad\rho_{4}=\frac{\wp\,(4^{\wp}-3^{\wp})}{J(\wp)\Gamma(\wp+1)},\quad\rho_{5}=\frac{\wp\,(5^{\wp}-4^{\wp})}{J(\wp)\Gamma(\wp+1)}.$$
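A minimal Python sketch (ours, not the authors' code) of the AB3 coefficients above:

```python
# Sketch (ours): AB3 coefficients from the Euler-type scheme with unit step.
import math

def J(p):
    return 1.0 - p + p / math.gamma(p)

def ab3_coefficients(p, n=5):
    rho = [(1.0 - p) / J(p)]
    for k in range(1, n + 1):
        rho.append(p * (k ** p - (k - 1) ** p) / (J(p) * math.gamma(p + 1)))
    return rho

print(ab3_coefficients(0.95))
```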

4.4. Fractional Mask Based on the Middle Point Idea (AB4)

Let us reconsider the definition of the AB fractional integral defined in Equation (4), as
$${}^{\mathrm{AB}}I^{\wp}H(\tau)=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{\Gamma(\wp)\,J(\wp)}\int_{0}^{\tau}H(\phi)(\tau-\phi)^{\wp-1}\,d\phi.$$
Applying the change of variable ϕ ↦ τ − ϕ in the integral in Equation (23) yields
$${}^{\mathrm{AB}}I^{\wp}H(\tau)=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{\Gamma(\wp)\,J(\wp)}\int_{0}^{\tau}H(\tau-\phi)\,\phi^{\wp-1}\,d\phi.$$
Now, by partitioning the integration interval in Equation (24), we have
$${}^{\mathrm{AB}}I^{\wp}H(\tau)=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{\Gamma(\wp)\,J(\wp)}\sum_{k=0}^{n-1}\int_{\tau_{k}}^{\tau_{k+1}}H(\tau-\phi)\,\phi^{\wp-1}\,d\phi.$$
Then, applying an approximation formula gives
$$\int_{\tau_{k}}^{\tau_{k+1}}\frac{H(\phi)}{\phi^{1-\wp}}\,d\phi\approx\frac{H(\tau_{k})+H(\tau_{k+1})}{2}\int_{\tau_{k}}^{\tau_{k+1}}\frac{d\phi}{\phi^{1-\wp}},$$
in Equation (25), one obtains
$$\begin{aligned}{}^{\mathrm{AB}}I^{\wp}H(\tau)&=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{\wp}{\Gamma(\wp)\,J(\wp)}\sum_{k=0}^{n-1}\frac{H(\tau-\tau_{k})+H(\tau-\tau_{k+1})}{2}\int_{\tau_{k}}^{\tau_{k+1}}\frac{d\phi}{\phi^{1-\wp}}\\&=\frac{1-\wp}{J(\wp)}H(\tau)+\frac{1}{\Gamma(\wp)\,J(\wp)}\sum_{k=0}^{n-1}\frac{H(\tau-\tau_{k})+H(\tau-\tau_{k+1})}{2}\big(\tau_{k+1}^{\wp}-\tau_{k}^{\wp}\big).\end{aligned}$$
Thus, we have
$$\begin{aligned}{}^{\mathrm{AB}}I^{\wp}H(\tau_{n})&=\frac{1-\wp}{J(\wp)}H(\tau_{n})+\frac{1}{\Gamma(\wp)\,J(\wp)}\sum_{k=0}^{n-1}\frac{H(\tau_{n}-\tau_{k})+H(\tau_{n}-\tau_{k+1})}{2}\big(((k+1)h)^{\wp}-(kh)^{\wp}\big)\\&=\frac{1-\wp}{J(\wp)}H(\tau_{n})+\frac{1}{\Gamma(\wp)\,J(\wp)}\sum_{k=0}^{n-1}\frac{H(\tau_{n-k})+H(\tau_{n-k-1})}{2}\big(((k+1)h)^{\wp}-(kh)^{\wp}\big).\end{aligned}$$
Upon consolidating the aforementioned findings, we can assert that [59]
$${}^{\mathrm{AB}}I^{\wp}H(\tau_{n})=\frac{2\Gamma(\wp)-2\wp\Gamma(\wp)+1}{2J(\wp)\Gamma(\wp)}H(\tau_{n})+\frac{3^{\wp}-1}{2J(\wp)\Gamma(\wp)}H(\tau_{n-1})+\frac{4^{\wp}-2^{\wp}}{2J(\wp)\Gamma(\wp)}H(\tau_{n-2})+\cdots.$$
The equivalent form for Equation (29) in the x and y directions will be as follows
$$\begin{aligned}{}_{x}I^{\wp}_{\mathrm{AB4}}H(x,y)\approx{}&\frac{2\Gamma(\wp)-2\wp\Gamma(\wp)+1}{2J(\wp)\Gamma(\wp)}H(x,y)+\frac{3^{\wp}-1}{2J(\wp)\Gamma(\wp)}H(x-1,y)+\frac{4^{\wp}-2^{\wp}}{2J(\wp)\Gamma(\wp)}H(x-2,y)\\&+\frac{5^{\wp}-3^{\wp}}{2J(\wp)\Gamma(\wp)}H(x-3,y)+\frac{6^{\wp}-4^{\wp}}{2J(\wp)\Gamma(\wp)}H(x-4,y)+\frac{7^{\wp}-5^{\wp}}{2J(\wp)\Gamma(\wp)}H(x-5,y)+\cdots,\\{}_{y}I^{\wp}_{\mathrm{AB4}}H(x,y)\approx{}&\frac{2\Gamma(\wp)-2\wp\Gamma(\wp)+1}{2J(\wp)\Gamma(\wp)}H(x,y)+\frac{3^{\wp}-1}{2J(\wp)\Gamma(\wp)}H(x,y-1)+\frac{4^{\wp}-2^{\wp}}{2J(\wp)\Gamma(\wp)}H(x,y-2)\\&+\frac{5^{\wp}-3^{\wp}}{2J(\wp)\Gamma(\wp)}H(x,y-3)+\frac{6^{\wp}-4^{\wp}}{2J(\wp)\Gamma(\wp)}H(x,y-4)+\frac{7^{\wp}-5^{\wp}}{2J(\wp)\Gamma(\wp)}H(x,y-5)+\cdots.\end{aligned}$$
Therefore, the coefficients used in the so-called AB4 masks of different sizes will be determined as follows
$$\rho_{0}=\frac{2\Gamma(\wp)-2\wp\Gamma(\wp)+1}{2J(\wp)\Gamma(\wp)},\quad\rho_{1}=\frac{3^{\wp}-1}{2J(\wp)\Gamma(\wp)},\quad\rho_{2}=\frac{4^{\wp}-2^{\wp}}{2J(\wp)\Gamma(\wp)},\quad\rho_{3}=\frac{5^{\wp}-3^{\wp}}{2J(\wp)\Gamma(\wp)},\quad\rho_{4}=\frac{6^{\wp}-4^{\wp}}{2J(\wp)\Gamma(\wp)},\quad\rho_{5}=\frac{7^{\wp}-5^{\wp}}{2J(\wp)\Gamma(\wp)}.$$
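A minimal Python sketch (ours, written directly from the pattern of the coefficients above, i.e., ρ_k = ((k + 2)^℘ − k^℘)/(2J(℘)Γ(℘)) for k ≥ 1):

```python
# Sketch (ours): AB4 coefficients from the endpoint-average (middle point) scheme.
import math

def J(p):
    return 1.0 - p + p / math.gamma(p)

def ab4_coefficients(p, n=5):
    g = math.gamma(p)
    rho = [(2 * g - 2 * p * g + 1) / (2 * J(p) * g)]
    for k in range(1, n + 1):
        rho.append(((k + 2) ** p - k ** p) / (2 * J(p) * g))
    return rho

print(ab4_coefficients(0.95))
```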

5. The Main Algorithm of the Paper

The main algorithm of the article is presented in this section. First, we assume that C := [c_{ij}]_{m×n} is a matrix whose values are non-negative integers less than or equal to 255. This matrix is called an image matrix.
Definition 2. 
If C := [c_{ij}]_{m×n} is an image matrix, we call the entries with values of 0 or 255 noise pixels and the other entries regular pixels of the image.
Definition 3. 
If the entries of an image matrix include noise components, then the matrix is called a noise image.
Definition 4. 
If C is the matrix corresponding to an image, then the binary matrix of C is defined as E := [e_{ij}]_{m×n}, where
$$e_{ij}=\begin{cases}0, & c_{ij}\in\{0,255\},\\ 1, & c_{ij}\notin\{0,255\}.\end{cases}$$
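In NumPy terms, Definition 4 is essentially a one-liner; the sketch below is our illustration, not code from the paper.

```python
# Sketch (ours): the binary matrix of Definition 4 (1 = regular pixel, 0 = noise).
import numpy as np

def binary_matrix(C: np.ndarray) -> np.ndarray:
    return ((C != 0) & (C != 255)).astype(np.uint8)

C = np.array([[63, 5, 255], [0, 255, 173], [84, 23, 0]])
print(binary_matrix(C))
```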
Definition 5. 
Let C := [c_{ij}]_{m×n} and 1 ≤ p ≤ min{m, n}. Then, the p symmetric padding matrix of C is a matrix of size (m + 2p) × (n + 2p) defined in the following manner
$$\widetilde{C}_{p}=\begin{pmatrix}
c_{pp} & \cdots & c_{p1} & c_{p1} & c_{p2} & \cdots & c_{pn} & c_{pn} & \cdots & c_{p(n-p+1)}\\
\vdots &        & \vdots & \vdots & \vdots &        & \vdots & \vdots &        & \vdots\\
c_{1p} & \cdots & c_{11} & c_{11} & c_{12} & \cdots & c_{1n} & c_{1n} & \cdots & c_{1(n-p+1)}\\
c_{1p} & \cdots & c_{11} & c_{11} & c_{12} & \cdots & c_{1n} & c_{1n} & \cdots & c_{1(n-p+1)}\\
c_{2p} & \cdots & c_{21} & c_{21} & c_{22} & \cdots & c_{2n} & c_{2n} & \cdots & c_{2(n-p+1)}\\
c_{3p} & \cdots & c_{31} & c_{31} & c_{32} & \cdots & c_{3n} & c_{3n} & \cdots & c_{3(n-p+1)}\\
\vdots &        & \vdots & \vdots & \vdots &        & \vdots & \vdots &        & \vdots\\
c_{mp} & \cdots & c_{m1} & c_{m1} & c_{m2} & \cdots & c_{mn} & c_{mn} & \cdots & c_{m(n-p+1)}\\
c_{mp} & \cdots & c_{m1} & c_{m1} & c_{m2} & \cdots & c_{mn} & c_{mn} & \cdots & c_{m(n-p+1)}\\
\vdots &        & \vdots & \vdots & \vdots &        & \vdots & \vdots &        & \vdots\\
c_{(m-p+1)p} & \cdots & c_{(m-p+1)1} & c_{(m-p+1)1} & c_{(m-p+1)2} & \cdots & c_{(m-p+1)n} & c_{(m-p+1)n} & \cdots & c_{(m-p+1)(n-p+1)}
\end{pmatrix}.$$
Example 1. 
For
$$C=\begin{pmatrix}63&5&255\\ 0&255&173\\ 84&23&0\end{pmatrix},\qquad\text{we have}\qquad
\widetilde{C}_{2}=\begin{pmatrix}
255&0&0&255&173&173&255\\
5&63&63&5&255&255&5\\
5&63&63&5&255&255&5\\
255&0&0&255&173&173&255\\
23&84&84&23&0&0&23\\
23&84&84&23&0&0&23\\
255&0&0&255&173&173&255
\end{pmatrix}.$$
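The p symmetric padding of Definition 5 matches NumPy's 'symmetric' padding mode, so Example 1 can be reproduced as follows (our sketch, assuming the mirror-with-edge convention shown above):

```python
# Sketch (ours): reproducing the symmetric padding of Example 1 with NumPy.
import numpy as np

C = np.array([[63, 5, 255], [0, 255, 173], [84, 23, 0]])
C_tilde_2 = np.pad(C, pad_width=2, mode='symmetric')
print(C_tilde_2)   # reproduces the 7x7 matrix of Example 1
```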
Definition 6. 
Let C := [c_{ij}]_{m×n} and 1 ≤ r ≤ p. Then, the r-approximate matrix of c_{ij} in C̃_p is denoted by C^r_{ij} and is given as follows:
$$C^{r}_{ij}=\big[c^{r}_{ij}\big]_{(2r+1)\times(2r+1)}=\begin{pmatrix}
\tilde{c}_{(i+p-r)(j+p-r)} & \cdots & \tilde{c}_{(i+p-r)(j+p+r)}\\
\vdots & \tilde{c}_{(i+p)(j+p)} & \vdots\\
\tilde{c}_{(i+p+r)(j+p-r)} & \cdots & \tilde{c}_{(i+p+r)(j+p+r)}
\end{pmatrix}.$$
Example 2. 
Under the assumptions of Example 1, we have
$$C^{1}_{13}=\begin{pmatrix}0&255&173\\ 63&5&255\\ 63&5&255\end{pmatrix}.$$
Definition 7. 
Let us define the matrix C̄^r_{ij} := [c̄^r_{ij}]_{(2r+1)×(2r+1)} from C^r_{ij} as
$$\bar{c}^{\,r}_{ij}=\begin{cases}0, & c^{r}_{ij}\in\{0,255\},\\ c^{r}_{ij}, & c^{r}_{ij}\notin\{0,255\}.\end{cases}$$
In other words, this matrix consists of all regular entries of C i j r , and zero elsewhere.
Example 3. 
Under the assumptions of Example 2, we have
$$\bar{C}^{1}_{13}=\begin{pmatrix}0&0&173\\ 63&5&0\\ 63&5&0\end{pmatrix}.$$
Definition 8. 
Let C = [c_{ij}]_{(2r+1)×(2r+1)} for r = 1, 2, …, 5. Then, the Atangana–Baleanu mean of C is defined as follows
$$\mathrm{AB_m}(C):=\frac{\displaystyle\sum_{(i,j)\in\Lambda}c_{i,j}\,\omega^{2r+1}_{i,j}}{\displaystyle\sum_{(i,j)\in\Lambda}\omega^{2r+1}_{i,j}},$$
where Λ = {(i, j) | c_{ij} ∉ {0, 255}}, and the Ω_{2r+1}'s for r = 1, 2, …, 5 are the filters introduced in Section 3.
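A short Python sketch of the Atangana–Baleanu mean (our illustration, not code from the paper; the 3 × 3 mask values in the example call are hypothetical placeholders rather than the AB coefficients):

```python
# Sketch (ours): weighted average of the regular (non-noise) pixels of a window.
import numpy as np

def ab_mean(window: np.ndarray, omega: np.ndarray) -> float:
    """window and omega are (2r+1)x(2r+1); pixels equal to 0 or 255 are noise.
    Assumes the window contains at least one regular pixel."""
    regular = (window != 0) & (window != 255)
    weights = omega[regular]
    return float((window[regular] * weights).sum() / weights.sum())

window = np.array([[0, 255, 173], [63, 5, 255], [63, 5, 255]])   # Example 2
omega3 = np.array([[0.3, 0.3, 0.3], [0.3, 4.0, 0.3], [0.3, 0.3, 0.3]])
print(ab_mean(window, omega3))
```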
Definition 9. 
Let C = [c_{ij}]_{m×n} and D = [d_{ij}]_{m×n} be two given matrices. Their ℓ₁-distance is calculated as
$$\|C-D\|_{1}:=\sum_{i=1}^{m}\sum_{j=1}^{n}\big|c_{ij}-d_{ij}\big|.$$
  • Considering the above symbols and definitions, the main denoising algorithm in this paper (Algorithm 1) is presented as follows
    Algorithm 1 The algorithm of the Atangana–Baleanu iterative adaptive mean filter.
    Input: Obtain C as a noisy image C = [c_{ij}]_{m×n}
    Output: Obtain D as a denoised image D = [d_{ij}]_{m×n}
    Step 1. Obtain a noisy image matrix C := [c_{ij}]_{m×n} where min{m, n} ≥ 5.
    Step 2. Change the format of matrix C from uint8 to double if needed.
    Repeat
    Step 3. Set D := C.
    Step 4. For p from 5 to 1
       Construct the binary matrix E := [e_{ij}]_{m×n} of C.
       Construct the padded matrices C̃_p and Ẽ_p.
    For i = 1 : m
       For j = 1 : n
          If e_{ij} = 0
             For r from 1 to p
                If E^r_{ij} ≠ [0] (i.e., the window contains at least one regular pixel)
                   Construct C^r_{ij}.
                   Construct C̄^r_{ij}.
                   c_{ij} ← AB_m(C̄^r_{ij})
                   Break
                End If
             End For
          End If
       End For
    End For
    Until ||C − D||₁ ≤ ε.
    Step 5. D is the denoised image matrix.
    Step 6. Change the format of matrix D from double to uint8.
The flowchart of the algorithm is also presented in Figure 1.
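To make the overall flow concrete, the following Python sketch is our own interpretation of Algorithm 1 and Figure 1, not the authors' reference implementation. It combines the AB1 coefficients and the symmetric masks from the earlier sketches with the iterative adaptive replacement of noise pixels; the stopping tolerance eps, the iteration cap max_iter, the fixed maximum window radius p = 5, and the final rounding to uint8 are our assumptions.

```python
# Sketch (ours): iterative adaptive AB-weighted-mean filter for salt-and-pepper noise.
import math
import numpy as np

def J(p):
    return 1.0 - p + p / math.gamma(p)

def ab1_coefficients(wp, n=5):
    rho = [1.0 / J(wp)]
    for k in range(1, n + 1):
        rho.append(wp * math.gamma(wp + k) / (math.factorial(k) * math.gamma(wp) * J(wp)))
    return rho

def build_mask(rho):
    r = len(rho) - 1
    size = 2 * r + 1
    omega = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            d = max(abs(i - r), abs(j - r))
            omega[i, j] = 8 * rho[0] if d == 0 else rho[d]
    return omega

def ab_iterative_adaptive_filter(noisy, wp=0.95, p=5, eps=1.0, max_iter=50):
    rho = ab1_coefficients(wp, p)
    masks = {r: build_mask(rho[:r + 1]) for r in range(1, p + 1)}
    C = noisy.astype(np.float64)
    for _ in range(max_iter):
        D = C.copy()
        E = (C != 0) & (C != 255)                    # binary matrix of regular pixels
        C_pad = np.pad(C, p, mode='symmetric')
        E_pad = np.pad(E, p, mode='symmetric')
        for i in range(C.shape[0]):
            for j in range(C.shape[1]):
                if E[i, j]:
                    continue                          # regular pixel: keep it
                for r in range(1, p + 1):             # grow the window adaptively
                    win = C_pad[i + p - r:i + p + r + 1, j + p - r:j + p + r + 1]
                    reg = E_pad[i + p - r:i + p + r + 1, j + p - r:j + p + r + 1]
                    if reg.any():                     # at least one regular pixel
                        w = masks[r][reg]
                        C[i, j] = (win[reg] * w).sum() / w.sum()
                        break
        if np.abs(C - D).sum() <= eps:                # l1 stopping criterion
            break
    return np.clip(np.rint(C), 0, 255).astype(np.uint8)
```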
Remark 1. 
The main difference between the proposed algorithm in this paper and the one in [60] is that instead of the Cesáro mean in STEP 3, we used the Atangana–Baleanu fractional mean.

6. Discussion

The quality of images resulting from different algorithms is measured using various criteria. One of these criteria is the calculation of the peak signal-to-noise ratio (PSNR), which can be measured by the following formula
$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{255\times 255}{\mathrm{MSE}}\right),$$
where
$$\mathrm{MSE}=\frac{1}{m\times n}\sum_{j=1}^{n}\sum_{i=1}^{m}\big(I^{*}(i,j)-I(i,j)\big)^{2}.$$
PSNR measures the difference between the original image and the denoised image in terms of their peak signal power and noise power. Higher PSNR values indicate better image quality. Moreover, MSE measures the average squared difference between the original and denoised images. Lower MSE values indicate better image quality.
The next known index that can be used to measure the similarity of two images is the structural similarity index measurement, which can be calculated with the following formula
$$\mathrm{SSIM}(I_{1},I_{2})=\frac{(2\mu_{1}\mu_{2}+c_{1})(2\sigma_{12}+c_{2})}{(\mu_{1}^{2}+\mu_{2}^{2}+c_{1})(\sigma_{1}^{2}+\sigma_{2}^{2}+c_{2})}.$$
SSIM compares the structural information of the original and denoised images. It measures the similarity in terms of luminance, contrast, and structure. Higher SSIM values indicate better image quality.
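For completeness, the three quality measures can be computed directly from the formulas above. In the sketch below (ours, not the authors' evaluation code), the stabilizing constants c1 and c2 follow the common (0.01·255)² and (0.03·255)² convention, which is an assumption, and the SSIM is evaluated from global image statistics exactly as written in the formula.

```python
# Sketch (ours): MSE, PSNR, and a global-statistics SSIM from the formulas above.
import numpy as np

def mse(ref, img):
    ref, img = ref.astype(np.float64), img.astype(np.float64)
    return np.mean((ref - img) ** 2)

def psnr(ref, img):
    return 10.0 * np.log10(255.0 ** 2 / mse(ref, img))

def ssim_global(ref, img, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    x, y = ref.astype(np.float64), img.astype(np.float64)
    mu1, mu2 = x.mean(), y.mean()
    s1, s2 = x.var(), y.var()
    s12 = ((x - mu1) * (y - mu2)).mean()
    return ((2 * mu1 * mu2 + c1) * (2 * s12 + c2)) / ((mu1 ** 2 + mu2 ** 2 + c1) * (s1 + s2 + c2))
```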
We compare the results of our proposed methods AB1–AB4 in terms of PSNR and SSIM with those of TSF, NAFSM, ASWMF, ACmF, NASNLM, and BPDF. Each of these algorithms has been used to denoise the Elaine, peppers, and goldhill images contaminated with salt-and-pepper noise at densities of 10, 30, 50, 70, and 90 percent, as shown in Figures 2–16. As is evident, salt-and-pepper noise can significantly degrade the visual quality of an image and make it difficult to extract useful information from it. Further, this kind of noise can occur in any part of an image, but it tends to be more prevalent in areas of low contrast or in regions with sharp edges.
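For reproducibility, salt-and-pepper noise of a prescribed density can be injected as in the following sketch (ours, not the authors' test harness); the 50/50 split between salt and pepper is an assumption.

```python
# Sketch (ours): contaminate an 8-bit grayscale image with salt-and-pepper noise.
import numpy as np

def add_salt_and_pepper(img: np.ndarray, density: float, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    corrupt = rng.random(img.shape) < density       # pixels to corrupt
    salt = rng.random(img.shape) < 0.5              # half salt, half pepper
    noisy[corrupt & salt] = 255
    noisy[corrupt & ~salt] = 0
    return noisy
```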
Further, Tables 1–6 report the MSE and SSIM values obtained by the different algorithms for the Elaine, peppers, and goldhill images. The results in these tables confirm that the algorithms proposed in this article perform very impressively and obtain the best results among the compared methods in most tests. The fractional order parameter was set to 0.95 in all our proposed methods while performing the experiments.
Our approach seems to be useful in applications where image quality is critical, such as medical imaging or surveillance. In medical imaging, for example, noise reduction in images is crucial for accurate diagnosis and treatment planning. Our method’s ability to preserve important image details while removing noise makes it an excellent candidate for these types of applications.

7. Conclusions

Removing noise from digital images while preserving important details is a difficult task, as the noise can be complex and its effect on individual pixels can be unpredictable. In this paper, we proposed a novel method for denoising digital images contaminated with salt-and-pepper noise. This type of noise appears as randomly distributed white and black pixels, forcing the affected pixels to take either the highest intensity value (white pixel) or the lowest intensity value (black pixel). The paper presented a set of algorithms that remove salt-and-pepper noise from images with high efficiency. The basic idea of these methods is to introduce a new weighted average based on a well-known idea in fractional calculus, namely the Atangana–Baleanu fractional integral operator. Moreover, the concept of symmetry is clearly used in the proposed window mask structures. Furthermore, our proposed method has been extensively tested on various datasets to assess its effectiveness in denoising images. We compared our method with other state-of-the-art denoising techniques and found that it outperformed them in terms of the peak signal-to-noise ratio (PSNR) metric and visually as well. Our proposed methods are advantageous in image denoising because they are computationally efficient, can be easily implemented, and remove noise from an image while preserving its important features. The ease of implementation also makes them practical for real-time applications such as video processing, where images are captured in rapid succession and quick processing times are necessary to avoid delays. In conclusion, our proposed approach to image denoising is a significant step forward in the field of digital image processing. It offers several advantages over existing techniques, including its ability to remove salt-and-pepper noise while preserving important image features, computational efficiency, and ease of implementation. Overall, we believe that our method will have a significant impact on applications that require high-quality image processing.

Author Contributions

M.W., S.W., X.J. and Y.W. contributed equally and significantly to writing this article. All authors have read and approved the final manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. Song, F.; Liu, Y.; Shen, D.; Li, L.; Tan, J. Learning Control for Motion Coordination in Wafer Scanners: Toward Gain Adaptation. IEEE Trans. Ind. Electron. 2022, 69, 13428–13438. [Google Scholar] [CrossRef]
  2. Meng, Q.; Lai, X.; Yan, Z.; Su, C.Y.; Wu, M. Motion planning and adaptive neural tracking control of an uncertain two-link rigid–flexible manipulator with vibration amplitude constraint. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3814–3828. [Google Scholar] [CrossRef]
  3. Wang, H.; Zhang, X.; Jiang, S. A Laboratory and Field Universal Estimation Method for Tire–Pavement Interaction Noise (TPIN) Based on 3D Image Technology. Sustainability 2022, 14, 12066. [Google Scholar] [CrossRef]
  4. Zhao, C.; Cheung, C.F.; Xu, P. High-efficiency sub-microscale uncertainty measurement method using pattern recognition. ISA Trans. 2020, 101, 503–514. [Google Scholar] [CrossRef] [PubMed]
  5. Ghanbari, B.; Baleanu, D. Applications of two novel techniques in finding optical soliton solutions of modified nonlinear Schrödinger equations. Results Phys. 2022, 44, 106171. [Google Scholar] [CrossRef]
  6. Zeng, Q.; Bie, B.; Guo, Q.; Yuan, Y.; Han, Q.; Han, X.; Chen, M.; Zhang, X.; Yang, Y.; Liu, M.; et al. Hyperpolarized Xe NMR signal advancement by metal-organic framework entrapment in aqueous solution. Proc. Natl. Acad. Sci. USA 2020, 117, 17558–17563. [Google Scholar] [CrossRef] [PubMed]
  7. Lin, X.; Wen, Y.; Yu, R.; Yu, J.; Wen, H. Improved Weak Grids Synchronization Unit for Passivity Enhancement of Grid-Connected Inverter. IEEE J. Emerg. Sel. Top. Power Electron. 2022, 10, 7084–7097. [Google Scholar] [CrossRef]
  8. Wang, F.; Wang, H.; Zhou, X.; Fu, R. A Driving Fatigue Feature Detection Method Based on Multifractal Theory. IEEE Sens. J. 2022, 22, 19046–19059. [Google Scholar] [CrossRef]
  9. Cao, K.; Wang, B.; Ding, H.; Lv, L.; Dong, R.; Cheng, T.; Gong, F. Improving physical layer security of uplink NOMA via energy harvesting jammers. IEEE Trans. Inf. Forensics Secur. 2020, 16, 786–799. [Google Scholar] [CrossRef]
  10. Zhuo, Z.; Du, L.; Lu, X.; Chen, J.; Cao, Z. Smoothed Lv distribution based three-dimensional imaging for spinning space debris. IEEE Trans. Geosci. Remote. Sens. 2022, 60, 1–3. [Google Scholar] [CrossRef]
  11. Sun, L.; Hou, J.; Xing, C.; Fang, Z. A robust hammerstein-wiener model identification method for highly nonlinear systems. Processes 2022, 10, 2664. [Google Scholar] [CrossRef]
  12. Hu, J.; Wu, Y.; Li, T.; Ghosh, B.K. Consensus control of general linear multiagent systems with antagonistic interactions and communication noises. IEEE Trans. Autom. Control 2018, 64, 2122–2127. [Google Scholar] [CrossRef]
  13. Zhong, T.; Wang, W.; Lu, S.; Dong, X.; Yang, B. RMCHN: A Residual Modular Cascaded Heterogeneous Network for Noise Suppression in DAS-VSP Records. IEEE D 2023, 20, 7500205. [Google Scholar] [CrossRef]
  14. Zhou, G.; Yang, F.; Xiao, J. Study on pixel entanglement theory for imagery classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5409518. [Google Scholar] [CrossRef]
  15. Huppert, A.; Katriel, G. Mathematical modelling and prediction in infectious disease epidemiology. Clin. Microbiol. Infect. 2022, 19, 999–1005. [Google Scholar] [CrossRef] [PubMed]
  16. Djilali, S.; Ghanbari, B. Dynamical behavior of two predators–one prey model with generalized functional response and time-fractional derivative. Adv. Differ. Equ. 2021, 2021, 235. [Google Scholar] [CrossRef]
  17. Fan, X.; Wei, G.; Lin, X.; Wang, X.; Si, Z.; Zhang, X.; Shao, Q.; Mangin, S.; Fullerton, E.; Jiang, L.; et al. Reversible switching of interlayer exchange coupling through atomically thin VO2 via electronic state modulation. Matter 2020, 2, 1582–1593. [Google Scholar] [CrossRef]
  18. Baleanu, D.; Jajarmi, A.; Mohammadi, H.; Rezapour, S. A new study on the mathematical modelling of human liver with Caputo–Fabrizio fractional derivative. Chaos Solitons Fractals 2020, 134, 109705. [Google Scholar] [CrossRef]
  19. Wang, G.; Zhao, B.; Wu, B.; Wang, M.; Liu, W.; Zhou, H.; Zhang, C.; Wang, Y.; Han, Y.; Xu, X. Research on the macro-mesoscopic response mechanism of multisphere approximated heteromorphic tailing particles. Lithosphere 2022, 2022, 1977890. [Google Scholar] [CrossRef]
  20. Defterli, O.; Baleanu, D.; Jajarmi, A.; Sajjadi, S.S.; Alshaikh, N.; Asad, J.H. Fractional treatment: An accelerated mass-spring system. Rom. Rep. Phys. 2022, 74, 122. [Google Scholar]
  21. Xu, S.; Dai, H.; Feng, L.; Chen, H.; Chai, Y.; Zheng, W.X. Fault Estimation for Switched Interconnected Nonlinear Systems with External Disturbances via Variable Weighted Iterative Learning. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 2011–2015. [Google Scholar] [CrossRef]
  22. Bai, X.; Shi, H.; Zhang, K.; Zhang, X.; Wu, Y. Effect of the fit clearance between ceramic outer ring and steel pedestal on the sound radiation of full ceramic ball bearing system. J. Sound Vib. 2022, 529, 116967. [Google Scholar] [CrossRef]
  23. Huang, N.; Chen, Q.; Cai, G.; Xu, D.; Zhang, L.; Zhao, W. Fault diagnosis of bearing in wind turbine gearbox under actual operating conditions driven by limited data with noise labels. IEEE Trans. Instrum. Meas. 2021, 70, 3502510. [Google Scholar] [CrossRef]
  24. Ghanbari, B.; Atangana, A. A new application of fractional Atangana–Baleanu derivatives: Designing ABC-fractional masks in image processing. Phys. A Stat. Mech. Appl. 2020, 542, 123516. [Google Scholar] [CrossRef]
  25. Zhang, Z.; Wang, L.; Zheng, W.; Yin, L.; Hu, R.; Yang, B. Endoscope image mosaic based on pyramid ORB. Biomed. Signal Process. Control 2022, 71, 103261. [Google Scholar] [CrossRef]
  26. Liu, Y.; Tian, J.; Hu, R.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Improved feature point pair purification algorithm based on SIFT during endoscope image stitching. Front. Neurorobotics 2022, 16, 840594. [Google Scholar] [CrossRef] [PubMed]
  27. Ghanbari, B.; Atangana, A. Some new edge detecting techniques based on fractional derivatives with non-local and non-singular kernels. Adv. Differ. Equ. 2020, 2020, 435. [Google Scholar] [CrossRef]
  28. Liu, S.; Yang, B.; Wang, Y.; Tian, J.; Yin, L.; Zheng, W. 2D/3D multimode medical image registration based on normalized cross-correlation. Appl. Sci. 2022, 12, 2828. [Google Scholar] [CrossRef]
  29. Cao, Z.; Wang, Y.; Zheng, W.; Yin, L.; Tang, Y.; Miao, W.; Liu, S.; Yang, B. The algorithm of stereo vision and shape from shading based on endoscope imaging. Biomed. Signal Process. Control 2022, 76, 103658. [Google Scholar] [CrossRef]
  30. Ban, Y.; Liu, M.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Depth estimation method for monocular camera defocus images in microscopic scenes. Electronics 2022, 11, 2012. [Google Scholar] [CrossRef]
  31. Wang, S.; Sheng, H.; Zhang, Y.; Yang, D.; Shen, J.; Chen, R. Blockchain-Empowered Distributed Multi-Camera Multi-Target Tracking in Edge Computing. IEEE Trans. Ind. Inform. 2023. [Google Scholar] [CrossRef]
  32. Zhou, W.; Lv, Y.; Lei, J.; Yu, L. Global and local-contrast guides content-aware fusion for RGB-D saliency prediction. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3641–3649. [Google Scholar] [CrossRef]
  33. Zhou, W.; Yu, L.; Zhou, Y.; Qiu, W.; Wu, M.W.; Luo, T. Local and global feature learning for blind quality evaluation of screen content and natural scene images. IEEE Trans. Image Process. 2018, 27, 2086–2095. [Google Scholar] [CrossRef]
  34. Yang, M.; Wang, H.; Hu, K.; Yin, G.; Wei, Z. IA-Net: An Inception–Attention-Module-Based Network for Classifying Underwater Images from Others. IEEE J. Ocean. Eng. 2022, 47, 704–717. [Google Scholar] [CrossRef]
  35. Zhou, G.; Bao, X.; Ye, S.; Wang, H.; Yan, H. Selection of optimal building facade texture images from UAV-based multiple oblique image flows. IEEE Trans. Geosci. Remote. Sens. 2020, 59, 1534–1552. [Google Scholar] [CrossRef]
  36. Liu, R.; Wang, X.; Lu, H.; Wu, Z.; Fan, Q.; Li, S.; Jin, X. SCCGAN: Style and characters inpainting based on CGAN. Mob. Netw. Appl. 2021, 26, 3–12. [Google Scholar] [CrossRef]
  37. Buades, A.; Coll, B.; Morel, J.M. A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4, 490–530. [Google Scholar] [CrossRef]
  38. Huang, J.; Zhao, Z.; Ren, C.; Teng, Q.; He, X. A prior-guided deep network for real image denoising and its applications. Knowl. Based Syst. 2022, 255, 109776. [Google Scholar] [CrossRef]
  39. Liang, H.; Li, N.; Zhao, S. Salt and Pepper Noise Removal Method Based on a Detail-Aware Filter. Symmetry 2021, 13, 515. [Google Scholar] [CrossRef]
  40. Li, M.; Cai, G.; Bi, S.; Zhang, X. Improved TV Image Denoising over Inverse Gradient. Symmetry 2023, 15, 678. [Google Scholar] [CrossRef]
  41. Al-Shamasneh, A.R.; Ibrahim, R.W. Image Denoising Based on Quantum Calculus of Local Fractional Entropy. Symmetry 2023, 15, 396. [Google Scholar] [CrossRef]
  42. Zhou, G.; Song, B.; Liang, P.; Xu, J.; Yue, T. Voids filling of DEM with multiattention generative adversarial network model. Remote Sens. 2022, 14, 1206. [Google Scholar] [CrossRef]
  43. Tian, C.; Fei, L.; Zheng, W.; Xu, Y.; Zuo, W.; Lin, C.W. Deep learning on image denoising: An overview. Neural Netw. 2020, 131, 251–275. [Google Scholar] [CrossRef] [PubMed]
  44. Ilesanmi, A.E.; Ilesanmi, T.O. Methods for image denoising using convolutional neural network: A review. Complex Intell. Syst. 2021, 7, 2179–2198. [Google Scholar] [CrossRef]
  45. Zhong, Y.; Liu, L.; Zhao, D.; Li, H. A generative adversarial network for image denoising. Multimedia Tools and Applications. Multimed. Tools Appl. 2020, 79, 16517–16529. [Google Scholar] [CrossRef]
  46. Rahman, S.M.; Hasan, M.K. Wavelet-domain iterative center weighted median filter for image denoising. Signal Process. 2003, 83, 1001–1012. [Google Scholar] [CrossRef]
  47. Thanh, D.N.; Engínoğlu, S. An iterative mean filter for image denoising. IEEE Access 2019, 7, 167847–167859. [Google Scholar]
  48. Feng, Y.; Zhang, B.; Liu, Y.; Niu, Z.; Fan, Y.; Chen, X. A D-band manifold triplexer with high isolation utilizing novel waveguide dual-mode filters. IEEE Trans. Terahertz Sci. Technol. 2022, 12, 678–688. [Google Scholar] [CrossRef]
  49. Xu, K.D.; Guo, Y.J.; Liu, Y.; Deng, X.; Chen, Q.; Ma, Z. 60-GHz compact dual-mode on-chip bandpass filter using GaAs technology. IEEE Electron Device Lett. 2021, 42, 1120–1123. [Google Scholar] [CrossRef]
  50. Xu, B.; Guo, Y. A novel DVL calibration method based on robust invariant extended Kalman filter. IEEE Trans. Veh. Technol. 2022, 71, 9422–9434. [Google Scholar] [CrossRef]
  51. Xu, B.; Wang, X.; Zhang, J.; Guo, Y.; Razzaqi, A.A. A novel adaptive filtering for cooperative localization under compass failure and non-gaussian noise. IEEE Trans. Veh. Technol. 2022, 71, 3737–3749. [Google Scholar] [CrossRef]
  52. Chen, Z.; Zhou, Z.; Adnan, S. Joint low-rank prior and difference of Gaussian filter for magnetic resonance image denoising. Med Biol. Eng. Comput. 2021, 59, 607–620. [Google Scholar] [CrossRef]
  53. Samko, G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach: Yverdon, Switzerland, 1993. [Google Scholar]
  54. Caputo, M.; Fabrizio, M. A new definition of fractional derivative without singular kernel. Prog. Fract. Differ. Appl. 2015, 1, 73–85. [Google Scholar]
  55. Atangana, A.; Baleanu, D. New fractional derivatives with non-local and non-singular kernel: Theory and application to heat transfer model. Therm. Sci. 2016, 20, 763–769. [Google Scholar] [CrossRef]
  56. Huading, J.; Pu, Y. Fractional calculus method for enhancing digital image of bank slip. Proc. Congr. Image Signal Process. 2008, 3, 326–330. [Google Scholar]
  57. Toufik, M.; Atangana, A. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models. Eur. Phys. J. Plus 2017, 132, 444. [Google Scholar] [CrossRef]
  58. Li, C.; Zeng, F. The finite difference methods for fractional ordinary differential equations. Num. Funct. Anal. Opt. 2013, 34, 149–179. [Google Scholar] [CrossRef]
  59. Pu, Y.F.; Zhou, J.L.; Yuan, X. Fractional differential mask: A fractional differential-based approach for multiscale texture enhancement. IEEE Trans. Image Process 2010, 19, 491–511. [Google Scholar]
  60. Pu, Y.F.; Zhou, J.L. Adaptive cesáro mean filter for salt-and-pepper noise removal. El-Cezeri 2020, 7, 304–314. [Google Scholar]
Figure 1. Flowchart of the algorithm.
Figure 2. Comparison of the performance of different methods for salt-and-pepper noise ratio of 10% for Elaine.
Figure 3. Comparison of the performance of different methods for salt-and-pepper noise ratio of 30% for Elaine.
Figure 4. Comparison of the performance of different methods for salt-and-pepper noise ratio of 50% for Elaine.
Figure 5. Comparison of the performance of different methods for salt-and-pepper noise ratio of 70% for Elaine.
Figure 6. Comparison of the performance of different methods for salt-and-pepper noise ratio of 90% for Elaine.
Figure 7. Comparison of the performance of different methods for salt-and-pepper noise ratio of 10% for peppers.
Figure 8. Comparison of the performance of different methods for salt-and-pepper noise ratio of 30% for peppers.
Figure 9. Comparison of the performance of different methods for salt-and-pepper noise ratio of 50% for peppers.
Figure 10. Comparison of the performance of different methods for salt-and-pepper noise ratio of 70% for peppers.
Figure 11. Comparison of the performance of different methods for salt-and-pepper noise ratio of 90% for peppers.
Figure 12. Comparison of the performance of different methods for salt-and-pepper noise ratio of 10%.
Figure 13. Comparison of the performance of different methods for salt-and-pepper noise ratio of 30%.
Figure 14. Comparison of the performance of different methods for salt-and-pepper noise ratio of 50%.
Figure 15. Comparison of the performance of different methods for salt-and-pepper noise ratio of 70%.
Figure 16. Comparison of the performance of different methods for salt-and-pepper noise ratio of 90%.
Table 1. Comparisons of MSE obtained by different masks for Elaine.

| Noise | Noisy | TSF | NAFSM | ASWMF | ACmF | NASNLM | BPDF | AB1 | AB2 | AB3 | AB4 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 1847 | 5.499 | 5.520 | 8.874 | 5.760 | 101.134 | 7.715 | 5.302 | 5.302 | 5.301 | 5.303 |
| 30% | 5521 | 18.480 | 18.506 | 27.718 | 18.409 | 251.650 | 30.209 | 16.946 | 16.946 | 16.948 | 16.948 |
| 50% | 9235 | 36.470 | 36.897 | 55.834 | 35.772 | 261.184 | 88.470 | 33.260 | 33.260 | 33.260 | 33.256 |
| 70% | 12,867 | 64.135 | 67.713 | 112.755 | 63.441 | 123.526 | 292.682 | 61.302 | 61.298 | 61.305 | 61.298 |
| 90% | 16,595 | 126.05 | 231.62 | 380.34 | 129.49 | 109.805 | 203.18 | 127.87 | 127.87 | 127.87 | 127.88 |
Table 2. Comparisons of SSIM obtained by different masks for Elaine.

| Noise | Noisy | TSF | NAFSM | ASWMF | ACmF | NASNLM | BPDF | AB1 | AB2 | AB3 | AB4 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 0.171 | 0.975 | 0.975 | 0.972 | 0.9747 | 0.806 | 0.969 | 0.976 | 0.976 | 0.976 | 0.976 |
| 30% | 0.047 | 0.919 | 0.919 | 0.912 | 0.918 | 0.726 | 0.899 | 0.924 | 0.924 | 0.924 | 0.9248 |
| 50% | 0.022 | 0.848 | 0.848 | 0.837 | 0.847 | 0.684 | 0.804 | 0.857 | 0.857 | 0.857 | 0.857 |
| 70% | 0.011 | 0.757 | 0.753 | 0.735 | 0.756 | 0.699 | 0.641 | 0.763 | 0.763 | 0.763 | 0.763 |
| 90% | 0.005 | 0.640 | 0.580 | 0.558 | 0.630 | 0.692 | 0.243 | 0.632 | 0.632 | 0.632 | 0.632 |
Table 3. Comparisons of MSE obtained by different masks for peppers.

| Noise | Noisy | TSF | NAFSM | ASWMF | ACmF | NASNLM | BPDF | AB1 | AB2 | AB3 | AB4 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 1999 | 4.522 | 4.610 | 14.335 | 5.518 | 47.783 | 9.694 | 5.518 | 5.516 | 5.519 | 5.519 |
| 30% | 6011 | 17.312 | 17.442 | 45.082 | 17.942 | 106.736 | 42.410 | 17.328 | 17.329 | 17.325 | 17.325 |
| 50% | 10,102 | 43.700 | 43.642 | 89.646 | 40.793 | 121.435 | 124.066 | 39.049 | 39.052 | 39.046 | 39.053 |
| 70% | 14,039 | 87.155 | 90.318 | 179.179 | 82.002 | 95.531 | 433.330 | 80.039 | 80.039 | 80.038 | 80.036 |
| 90% | 18,066 | 201.78 | 327.27 | 605.09 | 200.25 | 190.14 | 8084.16 | 197.85 | 197.85 | 197.84 | 197.84 |
Table 4. Comparisons of SSIM obtained by different masks for peppers.

| Noise | Noisy | TSF | NAFSM | ASWMF | ACmF | NASNLM | BPDF | AB1 | AB2 | AB3 | AB4 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 0.173 | 0.987 | 0.986 | 0.977 | 0.987 | 0.882 | 0.981 | 0.987 | 0.987 | 0.987 | 0.987 |
| 30% | 0.058 | 0.942 | 0.941 | 0.899 | 0.945 | 0.831 | 0.909 | 0.945 | 0.945 | 0.945 | 0.945 |
| 50% | 0.028 | 0.886 | 0.886 | 0.816 | 0.893 | 0.772 | 0.806 | 0.895 | 0.895 | 0.895 | 0.895 |
| 70% | 0.012 | 0.856 | 0.852 | 0.764 | 0.858 | 0.827 | 0.687 | 0.861 | 0.861 | 0.861 | 0.861 |
| 90% | 0.005 | 0.751 | 0.682 | 0.572 | 0.746 | 0.784 | 0.190 | 0.748 | 0.748 | 0.748 | 0.748 |
Table 5. Comparisons of MSE obtained by different masks for goldhill.

| Noise | Noisy | TSF | NAFSM | ASWMF | ACmF | NASNLM | BPDF | AB1 | AB2 | AB3 | AB4 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 1865 | 6.176 | 6.206 | 14.002 | 5.953 | 68.955 | 10.217 | 6.391 | 6.389 | 6.391 | 6.390 |
| 30% | 5711 | 24.211 | 24.248 | 45.786 | 21.818 | 190.735 | 42.992 | 22.209 | 22.213 | 22.209 | 22.211 |
| 50% | 9421 | 48.769 | 48.918 | 83.942 | 43.474 | 221.964 | 107.896 | 42.887 | 42.887 | 42.890 | 42.895 |
| 70% | 13,222 | 88.111 | 91.346 | 145.301 | 82.937 | 135.603 | 301.688 | 81.513 | 81.507 | 81.515 | 81.507 |
| 90% | 17,026 | 187.60 | 310.07 | 377.12 | 190.02 | 182.89 | 2695.97 | 188.13 | 188.13 | 188.14 | 188.12 |
Table 6. Comparisons of SSIM obtained by different masks for goldhill.

| Noise | Noisy | TSF | NAFSM | ASWMF | ACmF | NASNLM | BPDF | AB1 | AB2 | AB3 | AB4 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 0.207 | 0.984 | 0.983 | 0.969 | 0.984 | 0.895 | 0.976 | 0.984 | 0.984 | 0.984 | 0.984 |
| 30% | 0.028 | 0.886 | 0.886 | 0.816 | 0.893 | 0.772 | 0.806 | 0.895 | 0.895 | 0.895 | 0.895 |
| 50% | 0.014 | 0.806 | 0.802 | 0.702 | 0.812 | 0.727 | 0.619 | 0.814 | 0.814 | 0.814 | 0.814 |
| 70% | 0.014 | 0.806 | 0.802 | 0.702 | 0.812 | 0.727 | 0.619 | 0.814 | 0.814 | 0.814 | 0.814 |
| 90% | 0.006 | 0.652 | 0.599 | 0.508 | 0.648 | 0.658 | 0.313 | 0.651 | 0.651 | 0.651 | 0.651 |

