Article

Fractional-Order Variational Image Fusion and Denoising Based on Data-Driven Tight Frame

School of Mathematics and Physics, North China Electric Power University, Beijing 102206, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(10), 2260; https://doi.org/10.3390/math11102260
Submission received: 12 April 2023 / Revised: 9 May 2023 / Accepted: 10 May 2023 / Published: 11 May 2023

Abstract

Multi-modal image fusion can provide more image information, which improves the image quality for subsequent image processing tasks. Because images acquired with photon-counting devices always suffer from Poisson noise, this paper proposes a new three-step method based on the fractional-order variational method and a data-driven tight frame to fuse multi-modal images corrupted by Poisson noise. The method produces high-quality fused images while removing Poisson noise. The proposed image fusion model can be solved by the split Bregman algorithm, which has significant stability and fast convergence. Numerical results on various modal images show the excellent performance of the proposed three-step method in terms of numerical evaluation metrics and visual quality. Extensive experiments demonstrate that our method outperforms state-of-the-art methods on image fusion with Poisson noise.

1. Introduction

A single image cannot fully display all the information of the target scene. For example, computed tomography (CT) images mainly show hard bone tissue, while magnetic resonance imaging (MRI) mainly shows soft tissue. Multifocus images arise because imaging equipment cannot achieve focused imaging of all objects in the same scene at once, so different images focus on different objects. Infrared images can capture thermal radiation information but have poor resolution, while visible images have higher resolution. Therefore, image fusion is essential. Current research on image fusion is mainly applied in medical science [1], remote sensing [2], monitoring [3], etc.
Current image fusion algorithms are mainly based on principal component analysis [4], pyramid transforms [5], wavelet transforms [6], etc. The article [7] elaborates the theory behind the different fusion algorithms. The principal-component-analysis-based method is easy to understand, but it is computationally intensive and has poor real-time performance. Pyramid-based methods mainly include the Laplacian pyramid [8], the gradient pyramid [9], etc. However, pyramid decomposition is redundant and lacks direction selectivity, so it cannot effectively present the structural information of the fused image and leads to blurred boundaries. The main wavelet-based methods are the dual-tree complex wavelet transform [10], the contourlet transform [11], the shearlet transform [12], etc., but the number of decomposition levels, fusion quality, and time efficiency must be balanced. The article [13] describes the latest research progress in image fusion.
Because the variational method has a mature theoretical foundation and is easy to design and analyze [14], it is widely used in image recovery [15], image denoising [16], and image fusion [17]. The authors of [15] proposed a variational model for local non-texture inpainting based on the total variational model and the Mumford–Shah model. The authors of [16] proposed a fractional-order total variational model for image denoising. In [17], a three-step method for fusing clean images is proposed, as shown in Figure 1. In the image decomposition step, the images are processed using a variational model based on the data-driven tight frame (DDTF):
$$\min_{v_1, v_2, \{a_i\}_{i=1}^{r^2}} \; \|v_1 - W u_1\|_2^2 + \|v_2 - W u_2\|_2^2 + \lambda_1^2 \|v_1\|_0 + \lambda_2^2 \|v_2\|_0, \quad \text{s.t.}\; W^T W = I, \tag{1}$$
where $\lambda_1 > 0$, $\lambda_2 > 0$ are the balancing parameters, $u_1, u_2$ are the clear source images, and $v_1, v_2$ are the sparse representation coefficients of $u_1, u_2$, respectively. $W := W(a_1, a_2, \ldots, a_{r^2})$ is the analysis operator associated with the tight frame generated by the $r^2$ two-dimensional filters $\{a_i\}_{i=1}^{r^2}$, and $I$ is the identity matrix. Unfortunately, this method can only handle clean images; it does not work for images containing noise.
In addition, noise is an important factor to consider in the image fusion process, since noise interference is inevitable in actual image generation. For example, Poisson noise, which depends mainly on the number of photons, often arises when using photon-counting devices. Common noise types include Gaussian noise, salt-and-pepper noise, and Poisson noise; the first two have been studied by a large number of researchers [18,19,20,21]. Image denoising is performed using artificial neural networks in article [20], and a new hybrid filter technique combining anisotropic diffusion with a Butterworth band-pass filter is proposed in article [21] to overcome over-filtering of the image. Hence, in this paper, we focus on images disturbed by Poisson noise. The Tikhonov regularization model [22], the total variational (TV) model [23], and the higher-order total variational model [24] are common variational methods for removing Poisson noise. The TV-based denoising model is as follows:
$$\min_u \; \|\nabla u\|_1 + \beta \|u - f \ln u\|_1, \tag{2}$$
where β is a positive parameter, f is the source image, and u is the denoised image. This model performs well on piecewise constant images, but it causes staircasing artifacts for piecewise smooth images. High-order total variational models such as the PDE-based model [25] and the Lysaker–Lundervold–Tai (LLT) model [26] introduce speckle artifacts.
Different from other types of models, the fractional-order total variational (FOTV) model has shown that it can suppress staircasing artifacts and speckle artifacts [27]. By considering the intensity of adjacent images, their local geometric features can be maintained [28,29]. The literature [16] proposes a Poisson denoising model based on fractional-order total variation, which can maintain the high-order smoothness of the image. The model is as follows:
$$\min_u \; \|\nabla^\alpha u\|_1 + \beta \|u - f \ln u\|_1, \tag{3}$$
where $\alpha \ge 1$ is the fractional order, $f$ is the noisy image, $u$ is the denoised image, and $\beta$ is the parameter of the fidelity term.
Before fusing noisy images, the images are usually pre-processed to remove noise; however, this extra step reduces the efficiency of image fusion. Note that each term in the fusion model (1) and the denoising model (3) is independent and indispensable, so adding a term to a variational model is feasible and easy to interpret. Many methods have been proposed to simultaneously denoise and fuse images disturbed by Gaussian noise [30,31,32], with excellent results. Inspired by them, we fuse images disturbed by Poisson noise using a variational model.
In this paper, a new three-step method for image denoising and fusion is proposed by combining the two variational models (1) and (3); its workflow is shown in Figure 2. Firstly, in the image decomposition step, an improved variational model is proposed to process the noisy images, and the split Bregman method is used to solve it. Secondly, the coefficients are fused according to the fusion rules. Finally, the fused image is obtained using the variational model in the image reconstruction step, which maintains the image's smoothness and salient features.
Our contributions can be summarized as follows:
  • Motivated by a fractional-order total variational denoising model and a data-driven tight frame variational model for image fusion, a variational fusion model capable of handling noisy images is constructed. The denoised images and analysis operator are obtained by this model.
  • The new three-step method is constructed. The method combines the FOTV and DDTF models to simultaneously denoise and fuse images; it extracts the complementary information from the noisy source images to obtain the final fused image while suppressing noise in the output. This is the first time a fractional-order variational model has been used to denoise and fuse images disturbed by Poisson noise.
  • We evaluate this method on different types of images. The experiments show that the proposed method is more effective.
The rest of this paper is organized as follows. In Section 2, a new three-step method is proposed. In Section 3, the solving procedure using the split Bregman algorithm is described in detail. In Section 4, by numerical experiments, the advantages of the proposed method are illustrated. This paper concludes with a brief summary in Section 5.

2. Materials and Methods

In this section, we focus on the basic theory of fractional-order derivatives and image fusion, and the proposed three-step method is further described.

2.1. Related Materials

Fractional-order derivatives are widely used in image processing due to their extra degrees of freedom. The literature [33] reviews the progress of research on fractional-order derivatives in different image processing areas. Full-reference image quality assessment methods are proposed in the literature [34] by combining the Grünwald–Letnikov derivative and image gradients. The application of fractional-order derivatives in color image edge detection is presented in the literature [35].
Total variation models using fractional-order derivatives are used for Gaussian noise removal [36], Poisson noise removal [16], multiplicative noise removal [37], etc. The theory of fractional-order derivatives is described below. The fractional-order gradient is defined as $\nabla^\alpha u = [D_1^\alpha u, D_2^\alpha u]$, where $D_1^\alpha u$, $D_2^\alpha u$ are the discrete fractional-order derivatives along the x-axis and the y-axis, defined by
$$(D_1^\alpha u)_{i,j} = \sum_{k=0}^{K-1} (-1)^k C_k^\alpha u_{i-k,j}, \qquad (D_2^\alpha u)_{i,j} = \sum_{k=0}^{K-1} (-1)^k C_k^\alpha u_{i,j-k},$$
where $K$ is the number of adjacent pixels used to calculate the fractional-order derivative at each pixel. The image $u$ is expressed as a matrix $u_{i,j}$, $1 \le i \le N$, $1 \le j \le M$. The coefficients $\{C_k^\alpha\}_{k=0}^{K-1}$ are determined by the Gamma function $\Gamma(x)$: $C_k^\alpha = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha+1-k)}$. Then, the FOTV of $u$ is defined as
$$\|\nabla^\alpha u\|_1 := \sum_{i,j} \left( |(D_1^\alpha u)_{i,j}| + |(D_2^\alpha u)_{i,j}| \right).$$
Notice that when $\alpha = 1$, $(D_1^1 u)_{i,j} = u_{i,j} - u_{i-1,j}$ and $(D_2^1 u)_{i,j} = u_{i,j} - u_{i,j-1}$, so FOTV reduces to the classical TV.
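As a concrete illustration, the discrete fractional-order gradient above can be computed with the stable recursion for the binomial coefficients $C_k^\alpha$. The following is a minimal NumPy sketch; treating pixels outside the image as zero is our simplifying assumption (the paper's solver uses periodic boundary conditions):

```python
import numpy as np

def gl_coeffs(alpha, K):
    """Grünwald–Letnikov binomial coefficients C_k^alpha = binom(alpha, k),
    computed with the stable recursion C_k = C_{k-1} * (alpha - k + 1) / k."""
    c = np.empty(K)
    c[0] = 1.0
    for k in range(1, K):
        c[k] = c[k - 1] * (alpha - k + 1) / k
    return c

def frac_grad(u, alpha, K):
    """Discrete fractional-order gradient [D1^alpha u, D2^alpha u].
    Out-of-image pixels are treated as zero (a simplifying assumption)."""
    w = ((-1.0) ** np.arange(K)) * gl_coeffs(alpha, K)  # weights (-1)^k C_k^alpha
    D1 = np.zeros_like(u, dtype=float)
    D2 = np.zeros_like(u, dtype=float)
    for k in range(K):
        D1[k:, :] += w[k] * u[:u.shape[0] - k, :]  # sum_k w_k * u_{i-k, j}
        D2[:, k:] += w[k] * u[:, :u.shape[1] - k]  # sum_k w_k * u_{i, j-k}
    return D1, D2
```

For $\alpha = 1$ the weights reduce to $(1, -1, 0, \ldots)$, recovering the standard backward difference, consistent with the TV limit noted above.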
The basic idea of image fusion using the variational method is as follows: the actual imaging problem is first transformed into an energy functional model, which is then solved by the variational method. The general structure of image fusion is divided into three steps: image decomposition, coefficient fusion according to certain rules, and image reconstruction. In other words, the source images are input, and the fused image is output after the whole process. Both the first and third steps use a variational model to process the images, as shown in Figure 2.

2.2. The Proposed New Three-Step Method

For the proposed new three-step method, the specific model corresponding to each step will be introduced in detail. In the step of image decomposition, the model that can process images with Poisson noise is proposed as follows:
$$\begin{aligned} \min_{u_1, u_2, v_1, v_2, \{a_i\}_{i=1}^{r^2}} \; & \gamma_1 \|u_1 - f_1 \ln u_1\|_1 + \gamma_2 \|u_2 - f_2 \ln u_2\|_1 + \beta_1 \|\nabla^\alpha u_1\|_1 + \beta_2 \|\nabla^\alpha u_2\|_1 \\ & + \|v_1 - W u_1\|_2^2 + \|v_2 - W u_2\|_2^2 + \lambda_1^2 \|v_1\|_0 + \lambda_2^2 \|v_2\|_0, \qquad \text{s.t.}\; W^T W = I, \end{aligned} \tag{4}$$
where $\gamma_1, \gamma_2, \beta_1, \beta_2, \lambda_1, \lambda_2$ are positive parameters. The last two terms use $\lambda_1^2, \lambda_2^2$ so that they can be solved conveniently with the hard-threshold operator. $f_1, f_2$ are the noisy images, $u_1, u_2$ are the denoised images, $v_1, v_2$ are the sparse representation coefficients of $u_1, u_2$, and $W$ is the analysis operator.
By solving the model (4), the denoised images u 1 , u 2 and the analysis operator W are obtained. We use c i = W u i to denote the coefficients of the image u i , i = 1 , 2 . The magnitude of the coefficients at each pixel can indicate the presence or absence of features in the neighborhood of that pixel, and the fusion coefficients are represented by the following fusion rule:
$$c^j(x) = \begin{cases} c_1^j(x), & \sum_j |c_1^j(x)| \ge \sum_j |c_2^j(x)|, \\ c_2^j(x), & \sum_j |c_1^j(x)| < \sum_j |c_2^j(x)|, \end{cases} \tag{5}$$
where c j ( x ) denotes the coefficient of the jth filter acting at pixel x, j = 1 , 2 , , r 2 , x = 1 , 2 , , N . c i j ( x ) denotes the coefficients of the jth filter acting at pixel x of images u i , i = 1 , 2 .
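The fusion rule above amounts to a per-pixel max-activity selection over the coefficient maps. A minimal NumPy sketch, assuming the coefficients are stored as an $(r^2 \times N)$ array with one row per filter and one column per pixel:

```python
import numpy as np

def fuse_coefficients(c1, c2):
    """Max-activity fusion rule: at each pixel, take the whole coefficient
    vector of the image whose total absolute filter response is larger.
    c1, c2 have shape (r^2, N): one row per filter, one column per pixel."""
    activity1 = np.abs(c1).sum(axis=0)   # sum_j |c1^j(x)|
    activity2 = np.abs(c2).sum(axis=0)   # sum_j |c2^j(x)|
    mask = activity1 >= activity2        # True -> keep image 1's coefficients
    return np.where(mask[None, :], c1, c2)
```

Ties are resolved in favor of the first image, matching the $\ge$ in the rule.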
For the image reconstruction step, based on the denoised images u 1 , u 2 and coefficients c obtained in the first two steps, the fused image u is reconstructed using the following variational model:
$$\min_u \; \|W u - c\|_1 + \frac{\mu_1}{2} \left\| u|_{\Omega_1} - u_1|_{\Omega_1} \right\|_2^2 + \frac{\mu_2}{2} \left\| u|_{\Omega_2} - u_2|_{\Omega_2} \right\|_2^2, \tag{6}$$
where $\mu_1, \mu_2$ are parameters, $\Omega_i$ denotes the restricted area of $u_i$ with $\Omega_i = \{ j : |u(j)| < t \}$, and $t$ is a constant.

3. Algorithm

In this section, the solution of models (4) and (6), corresponding to the image decomposition and image reconstruction steps, is described in detail.

3.1. Image Decomposition

The variational model (4) proposed in the previous section is equivalent to the following subproblems.
  • The u-subproblem: for fixed W , v 1 , v 2 , we solve
    $$\min_{u_1} \; \gamma_1 \|u_1 - f_1 \ln u_1\|_1 + \beta_1 \|\nabla^\alpha u_1\|_1 + \|v_1 - W u_1\|_2^2, \tag{7}$$
    $$\min_{u_2} \; \gamma_2 \|u_2 - f_2 \ln u_2\|_1 + \beta_2 \|\nabla^\alpha u_2\|_1 + \|v_2 - W u_2\|_2^2. \tag{8}$$
  • The v-subproblem: for fixed W , u 1 , u 2 , we solve
    $$\min_{v_1, v_2} \; \|v_1 - W u_1\|_2^2 + \|v_2 - W u_2\|_2^2 + \lambda_1^2 \|v_1\|_0 + \lambda_2^2 \|v_2\|_0. \tag{9}$$
  • The W-subproblem: for fixed v 1 , v 2 , u 1 , u 2 , we solve
    $$\min_{W^T W = I} \; \|v_1 - W u_1\|_2^2 + \|v_2 - W u_2\|_2^2. \tag{10}$$
Firstly, the u-subproblem is solved by introducing the variables $Q_1 = \nabla^\alpha u_1$, $Q_2 = \nabla^\alpha u_2$ and applying the split Bregman iteration method to transform Equations (7) and (8) into the following form:
$$(u_1^{k+1}, Q_1^{k+1}) = \arg\min_{u_1, Q_1} \; \gamma_1 \|u_1 - f_1 \ln u_1\|_1 + \beta_1 \|Q_1\|_1 + \|v_1^{k+1} - W^{k+1} u_1\|_2^2 + \frac{\delta_1}{2} \|Q_1 - \nabla^\alpha u_1 - \theta_1^k\|_2^2, \tag{11}$$
$$\theta_1^{k+1} = \theta_1^k + (\nabla^\alpha u_1^{k+1} - Q_1^{k+1}), \tag{12}$$
$$(u_2^{k+1}, Q_2^{k+1}) = \arg\min_{u_2, Q_2} \; \gamma_2 \|u_2 - f_2 \ln u_2\|_1 + \beta_2 \|Q_2\|_1 + \|v_2^{k+1} - W^{k+1} u_2\|_2^2 + \frac{\delta_2}{2} \|Q_2 - \nabla^\alpha u_2 - \theta_2^k\|_2^2, \tag{13}$$
$$\theta_2^{k+1} = \theta_2^k + (\nabla^\alpha u_2^{k+1} - Q_2^{k+1}), \tag{14}$$
where the parameters δ 1 , δ 2 are positive numbers. It is natural that Equations (11) and (13) can be further rewritten as subproblems for u 1 , Q 1 , u 2 , Q 2 .
$$u_1^{k+1} = \arg\min_{u_1} \; \gamma_1 \|u_1 - f_1 \ln u_1\|_1 + \|v_1^{k+1} - W^{k+1} u_1\|_2^2 + \frac{\delta_1}{2} \|Q_1^k - \nabla^\alpha u_1 - \theta_1^k\|_2^2, \tag{15}$$
$$Q_1^{k+1} = \arg\min_{Q_1} \; \beta_1 \|Q_1\|_1 + \frac{\delta_1}{2} \|Q_1 - \nabla^\alpha u_1^{k+1} - \theta_1^k\|_2^2, \tag{16}$$
$$u_2^{k+1} = \arg\min_{u_2} \; \gamma_2 \|u_2 - f_2 \ln u_2\|_1 + \|v_2^{k+1} - W^{k+1} u_2\|_2^2 + \frac{\delta_2}{2} \|Q_2^k - \nabla^\alpha u_2 - \theta_2^k\|_2^2, \tag{17}$$
$$Q_2^{k+1} = \arg\min_{Q_2} \; \beta_2 \|Q_2\|_1 + \frac{\delta_2}{2} \|Q_2 - \nabla^\alpha u_2^{k+1} - \theta_2^k\|_2^2. \tag{18}$$
The Euler–Lagrange equations of Equations (15) and (17) are as follows:
$$\gamma_1 (u_1 - f_1)/u_1 - 2 (W^{k+1})^T (v_1^{k+1} - W^{k+1} u_1) - \delta_1 (\nabla^\alpha)^T (Q_1^k - \nabla^\alpha u_1 - \theta_1^k) = 0, \tag{19}$$
$$\gamma_2 (u_2 - f_2)/u_2 - 2 (W^{k+1})^T (v_2^{k+1} - W^{k+1} u_2) - \delta_2 (\nabla^\alpha)^T (Q_2^k - \nabla^\alpha u_2 - \theta_2^k) = 0. \tag{20}$$
To efficiently solve the nonlinear Equations (19) and (20), $u$ in the denominator is replaced by the previous iterate $u^k$, and $u^{k+1}$ is then obtained via the fast Fourier transform (FFT) under periodic boundary conditions:
$$u_1^{k+1} = \mathcal{F}^{-1}\!\left[\frac{\mathcal{F}\!\left(2 (W^{k+1})^T u_1^k v_1^{k+1} + \gamma_1 f_1 + \delta_1 (\nabla^\alpha)^T u_1^k (Q_1^k - \theta_1^k)\right)}{\gamma_1 + \mathcal{F}\!\left(2 (W^{k+1})^T W^{k+1} u_1^k + \delta_1 (\nabla^\alpha)^T \nabla^\alpha u_1^k\right)}\right], \tag{21}$$
$$u_2^{k+1} = \mathcal{F}^{-1}\!\left[\frac{\mathcal{F}\!\left(2 (W^{k+1})^T u_2^k v_2^{k+1} + \gamma_2 f_2 + \delta_2 (\nabla^\alpha)^T u_2^k (Q_2^k - \theta_2^k)\right)}{\gamma_2 + \mathcal{F}\!\left(2 (W^{k+1})^T W^{k+1} u_2^k + \delta_2 (\nabla^\alpha)^T \nabla^\alpha u_2^k\right)}\right], \tag{22}$$
where $\mathcal{F}$ denotes the FFT and $\mathcal{F}^{-1}$ its inverse.
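The FFT-based updates rely on the fact that, under periodic boundary conditions, convolution operators are diagonalized by the Fourier transform, so the linear solve becomes a pointwise division. The toy sketch below illustrates this mechanism only (solving $(\gamma I + \delta K)u = b$ for a circular-convolution operator $K$), not the paper's exact update:

```python
import numpy as np

def solve_periodic(b, kernel, gamma, delta):
    """Solve (gamma*I + delta*K) u = b, where K is circular convolution with
    `kernel` (periodic boundary conditions), by pointwise division in the
    Fourier domain. A toy illustration of the FFT trick, not the full update."""
    denom = gamma + delta * np.fft.fft2(kernel, s=b.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(b) / denom))
```

With `kernel` set to a unit impulse, $K$ is the identity and the solver reduces to dividing by $\gamma + \delta$, which gives a quick sanity check.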
One can readily verify that the solutions of the $Q_1$, $Q_2$ subproblems can be expressed explicitly as
$$Q_1^{k+1} = (\nabla^\alpha u_1^{k+1} + \theta_1^k)\,\max\!\left\{0,\; 1 - \frac{\beta_1}{\delta_1\,|\nabla^\alpha u_1^{k+1} + \theta_1^k|}\right\}, \tag{23}$$
$$Q_2^{k+1} = (\nabla^\alpha u_2^{k+1} + \theta_2^k)\,\max\!\left\{0,\; 1 - \frac{\beta_2}{\delta_2\,|\nabla^\alpha u_2^{k+1} + \theta_2^k|}\right\}. \tag{24}$$
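These closed-form updates are instances of the standard soft-shrinkage operator with threshold $\beta_i/\delta_i$. A minimal NumPy version; the small epsilon guarding against division by zero is our assumption:

```python
import numpy as np

def shrink(q, tau):
    """Soft-shrinkage operator q * max(0, 1 - tau/|q|), applied elementwise;
    it is the closed-form minimizer of tau*||Q||_1 + 0.5*||Q - q||_2^2."""
    mag = np.abs(q)
    return q * np.maximum(0.0, 1.0 - tau / np.maximum(mag, 1e-12))
```

Entries with magnitude below the threshold are set to zero; larger entries are shrunk toward zero by `tau`.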
By Equations (11)–(24), the solution of the u-subproblem is found. The solution process for the v-subproblem is described below.
Equation (9) can be solved by the hard threshold operator T:
$$v_i^{k+1} = T_{\lambda_i}(W^k u_i^k) = \begin{cases} W^k u_i^k, & |W^k u_i^k| > \lambda_i, \\ 0, & |W^k u_i^k| \le \lambda_i, \end{cases} \tag{25}$$
where $\lambda_i > 0$ are given constants, $i = 1, 2$.
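The hard-threshold operator $T_\lambda$ has a one-line implementation: keep a coefficient only when its magnitude exceeds $\lambda$, which is the closed-form minimizer of $\|v - w\|_2^2 + \lambda^2 \|v\|_0$:

```python
import numpy as np

def hard_threshold(w, lam):
    """Hard-threshold operator T_lambda: keep coefficients with |w| > lambda,
    zero out the rest."""
    return np.where(np.abs(w) > lam, w, 0.0)
```

Unlike soft shrinkage, surviving coefficients are kept unchanged rather than shrunk, which is why the $\ell_0$ penalty appears with $\lambda^2$ in models (4) and (9).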
At the end of the image decomposition step, the W-subproblem is solved. Because the subproblem (10) is a minimization problem with quadratic constraints, it can be solved by singular value decomposition (SVD) as follows:
$$W = \frac{1}{r} R L^T, \tag{26}$$
where $L$, $R$ are obtained from the SVD of $V_1 U_1^T + V_2 U_2^T$, i.e., $V_1 U_1^T + V_2 U_2^T = L D R^T$, and $V_i$, $U_i$ denote the matrices that stack the $r \times r$ image blocks of $v_i$, $u_i$ as column vectors.
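The filter update (26) can be sketched directly with NumPy's SVD; here `V1, U1, V2, U2` are assumed to already hold the vectorized $r \times r$ patches as columns:

```python
import numpy as np

def update_filters(V1, U1, V2, U2, r):
    """DDTF analysis-operator update W = (1/r) R L^T, where L D R^T is the
    SVD of M = V1 U1^T + V2 U2^T. V_i, U_i stack r x r patches as columns,
    so M is an r^2 x r^2 matrix."""
    M = V1 @ U1.T + V2 @ U2.T
    L, _, RT = np.linalg.svd(M, full_matrices=False)  # M = L diag(s) RT
    return (RT.T @ L.T) / r                           # W = (1/r) R L^T
```

Since $L$ and $R$ are orthogonal, $W^T W = I / r^2$, so the tight-frame property $W^T W \propto I$ can be checked numerically after each update.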
Combining the above solving process, the image decomposition step algorithm is as follows (Algorithm 1):
Algorithm 1: The image decomposition step algorithm

3.2. Image Reconstruction

By introducing the variables $y, z$ and the parameter $\delta > 0$, Equation (6) can be solved using the split Bregman iteration method, rewritten as
$$u^{k+1} = \arg\min_u \; \frac{\mu_1}{2} \left\| u|_{\Omega_1} - u_1|_{\Omega_1} \right\|_2^2 + \frac{\mu_2}{2} \left\| u|_{\Omega_2} - u_2|_{\Omega_2} \right\|_2^2 + \frac{\delta}{2} \|W u - c - z^k + y^k\|_2^2, \tag{27}$$
$$z^{k+1} = \arg\min_z \; \|z\|_1 + \frac{\delta}{2} \|z - W u^{k+1} + c - y^k\|_2^2, \tag{28}$$
$$y^{k+1} = y^k + (W u^{k+1} - c - z^{k+1}). \tag{29}$$
Similarly, the solution of the above subproblems can be obtained:
$$u^{k+1} = (\mu_1 D_1 + \mu_2 D_2 + \delta I)^{-1} \left( \mu_1 D_1 u_1 + \mu_2 D_2 u_2 + \delta W^T (c + z^k - y^k) \right), \tag{30}$$
$$z^{k+1} = (W u^{k+1} - c + y^k)\,\max\!\left\{0,\; 1 - \frac{1}{\delta\,|W u^{k+1} - c + y^k|}\right\}, \tag{31}$$
where $D_i$ denotes the diagonal indicator matrix of the region $\Omega_i$.
Therefore, the algorithm for image reconstruction is as follows (Algorithm 2):
Algorithm 2: The algorithm for image reconstruction

4. Numerical Experiments

In this section, numerical experiments on the proposed three-step method are performed to verify its effectiveness. All experimental results were obtained using Matlab (R2016b) on a laptop with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz, 16 GB RAM, and Windows 11. Different types of images are selected, including synthesized images [17], medical images [32], multifocus images [38], and infrared and visible images [39], all of which are widely used in image processing.

4.1. Evaluation Metrics

It is important to choose appropriate metrics to evaluate the quality of the denoised and fused images. In general, denoising and fusion can be assessed in two ways: subjective visual inspection and objective quantitative evaluation. Subjective evaluation relies on human visual observation and is therefore imprecise, so objective metrics are usually reported in the literature. Many metrics for image quality exist, such as the intelligent model proposed in article [40] to evaluate noise in ultrasound images. In this paper, the peak signal-to-noise ratio (PSNR), mutual information (MI) [41], edge strength ($Q^{abf}$) [42], and structural similarity ($Q_e$) [43] are chosen to evaluate image quality. The PSNR measures the denoising effect. MI measures how much information of one image is contained in another. $Q^{abf}$ measures the relative amount of edge information transferred from the input images into the fused image. $Q_e$ measures the structural similarity between two images. For all four metrics, larger values indicate better results.
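Of these metrics, the PSNR has the simplest closed form; a minimal sketch, where the peak value 255 is the conventional default for 8-bit images:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a constant error of one tenth of the peak yields a PSNR of 20 dB.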

4.2. Selection of Parameters

The selection of parameters is crucial in the implementation of the algorithm. In the image decomposition step, the stopping criterion is $\|u^{k+1} - u^k\|_2 / \|u^k\|_2 < 10^{-5}$, and in the image reconstruction step, the number of iterations is 1000. The method proposed in this paper mainly involves the following parameters, for which we give basic guidance. $\gamma_1, \gamma_2$ are the coefficients of the fidelity term: the larger they are, the closer the solution is to the exact one. $\beta_1, \beta_2$ are the coefficients of the regularization term: larger values yield smoother images, while smaller values weaken the denoising effect. $\delta_1$ and $\delta_2$ control the penalty terms of $u_1, u_2$, and the ratios $\beta_1/\delta_1$ and $\beta_2/\delta_2$ control the updates of $Q_1, Q_2$.

4.3. Numerical Experimental Results and Analysis

At present, most researchers have studied the simultaneous denoising and fusion of images disturbed by Gaussian noise [36,44,45], and some others have studied Poisson denoising alone; there are few studies on the simultaneous denoising and fusion of images corrupted by Poisson noise. We found one paper [46] on this problem, which proposes an online convolutional coding model trained on noisy images; it fuses multifocus images with a PSNR of 29.46. Because its authors ran the model on a GPU, we do not compare against it.
Because Poisson noise depends on the pixel intensity, the noise level can be controlled through the peak intensity of the original image: the original image is scaled to a preset peak value before Poisson noise is added. Specifically, we consider three peak values: 55, 155, and 255.
It is obvious from Figure 3 that the image with peak value 55 is noisier than the images with peak values 155 and 255.
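The peak-scaling procedure can be sketched as follows; rescaling the noisy counts back to the original intensity range is our assumption about the preprocessing, and the function name is illustrative:

```python
import numpy as np

def add_poisson_noise(img, peak, seed=0):
    """Scale the image so its maximum equals `peak`, draw Poisson counts,
    and rescale back. Smaller peaks give relatively stronger noise, since
    the Poisson signal-to-noise ratio grows with the count level."""
    rng = np.random.default_rng(seed)
    scaled = img.astype(float) / img.max() * peak
    noisy = rng.poisson(scaled).astype(float)
    return noisy / peak * img.max()
```

With peak 55 each pixel is drawn from a low-count Poisson distribution, so the relative fluctuations are visibly larger than at peaks 155 or 255, consistent with Figure 3.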
Next, the validity of fractional-order α is verified. In Figure 4, we present the denoising results for the image with peak 55 using different fractional orders α = 1 , 1.6, and 2.4. It is obvious that when α = 1 , i.e., TV, the first column still contains some noise. When α = 2.4 , the last column image is too smooth and blurs the boundary. This means that the fractional-order α has an effect on the final result.
In the following, the numerical experiments on the proposed method using different types of images are performed, i.e., Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.
In all the experiments, the most appropriate parameters are chosen from the sets $\gamma_i \in \{10, 10^2, 5 \times 10^2, 8 \times 10^2, 10^3\}$, $\beta_i \in \{3 \times 10^1, 5 \times 10^1, 8 \times 10^1, 10^2, 5 \times 10^2, 8 \times 10^2, 10^3\}$, and $\delta_i \in \{10^1, 10^2, 2 \times 10^2, 10^3, 5 \times 10^3\}$, $i = 1, 2$. Firstly, we process the two medical images at peaks 55 and 155 and set $\alpha = 1.6$, $K = 16$; the results are shown in Figure 6.
Figure 6 shows that the proposed method is very effective for medical images: the Poisson noise is effectively removed, and most of the feature information is retained without edge blurring. The objective evaluation metrics, i.e., the values of PSNR, MI, $Q^{abf}$, and $Q_e$ in Table 1, show that the proposed method is more effective in most cases. Note that 'Noisy' and 'PSNR' in Table 1, Table 2, Table 3 and Table 4 represent the PSNR values of the two noisy images and of the denoised images, respectively.
Then, the effect of different values of the parameters on the fusion results are discussed, as shown in Figure 7. It is observed that the selection of γ 1 , γ 2 , β 1 , β 2 will affect the final fusion effect, and these parameters are sensitive. The parameters δ 1 , δ 2 will affect the time of the whole calculation process.
The denoising results and the final fusion results of the multifocus images are described in Figure 8 and Figure 9. Figure 8 shows the multifocus image at peaks of 155 and 255. Figure 9 shows the denoising results and the final fusion results of the multifocus image with peaks of 155 and 255, respectively.
For (a,e) and (b,f) in Figure 9, although some of the noise is removed, some still remains, and the fusion results show that tiny texture features are blurred, which is consistent with the results in Table 2.
Medical images and multifocal images are used to compare the proposed method with the method in the literature [17], and the results are shown below. In Figure 10, the first row presents the fused images using the proposed method, and the second row presents the fused images using the DDTF method. By looking at these images, it is easy to see that our proposed method has better performance with respect to denoising, which is also illustrated by the values in Table 3.
At the end of the experiment, we show the denoising and fusion results of the infrared and visible images. Figure 11 shows the images at peaks of 155 and 255. Figure 12 shows the denoising results and fusion results for the infrared and visible images at peaks of 155 and 255, respectively.
As can be seen from Figure 12, the proposed method is less effective at denoising the infrared and visible images, and the fused image still contains some noise, which is consistent with the results in Table 4.

5. Conclusions

In this paper, motivated by the respective advantages of the variational image fusion method and the fractional-order variational denoising method, a new three-step method is proposed for the denoising and fusion of images disturbed by Poisson noise. The proposed method is solved by the split Bregman iterative algorithm. The validity of the fractional order in the variational model is examined using synthesized images. In addition, numerical experiments are performed on three different types of images to demonstrate the effectiveness of the method.

Author Contributions

Methodology, R.Z.; software, R.Z. and J.L.; writing—original draft preparation, R.Z.; writing—review and editing, J.L. and R.Z.; visualization, R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Beijing Natural Science Foundation of China (No. Z200001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, Y.; Cao, S.; Wan, W.; Huang, S. Multi-modal medical image super-resolution fusion based on detail enhancement and weighted local energy deviation. Biomed. Signal Process. Control 2023, 80, 104387. [Google Scholar] [CrossRef]
  2. Gharbia, R.; Hassanien, A.; El-Baz, A.; Elhoseny, M.; Gunasekaran, M. Multi-spectral and panchromatic image fusion approach using stationary wavelet transform and swarm flower pollination optimization for remote sensing applications. Future Gener. Comput. Syst. 2018, 88, 501–511. [Google Scholar] [CrossRef]
  3. Paramanandham, N.; Rajendiran, K. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications. Infrared Phys. Technol. 2018, 88, 13–22. [Google Scholar] [CrossRef]
  4. Nandi, D.; Ashour, A.; Samanta, S.; Chakraborty, S.; Salem, M.; Dey, N. Principal component analysis in medical image processing: A study. Int. J. Image Min. 2015, 1, 65–86. [Google Scholar] [CrossRef]
  5. Du, J.; Li, W.; Xiao, B. Anatomical-functional image fusion by information of interest in local laplacian filtering domain. IEEE Trans. Image Process. 2017, 26, 5855–5866. [Google Scholar] [CrossRef] [PubMed]
  6. Prakash, O.; Park, C.; Khare, A.; Jeon, M.; Gwak, J. Multiscale fusion of multimodal medical images using lifting scheme based biorthogonal wavelet transform. Optik 2019, 182, 995–1014. [Google Scholar] [CrossRef]
  7. Hermessi, H.; Mourali, O.; Zagrouba, E. Multimodal medical image fusion review: Theoretical background and recent advances. Signal Process 2021, 183, 108036. [Google Scholar] [CrossRef]
  8. Burt, P.; Adelson, E. The Laplacian pyramid as a compact image code. Readings Comput. Vision. 1983, 31, 532–540. [Google Scholar] [CrossRef]
  9. Petrovic, V.; Xydeas, C. Gradient-based multiresolution image fusion. IEEE Trans. Image Process. 2004, 13, 228–237. [Google Scholar] [CrossRef]
  10. Ioannidou, S.; Karathanassi, V. Investigation of the dual-tree complex and shift-invariant discrete wavelet transforms on quickbird image fusion. IEEE Geosci. Remote. Sens. Lett. 2007, 1, 166–170. [Google Scholar] [CrossRef]
  11. Yang, L.; Guo, B.; Ni, W. Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing 2008, 72, 203–211. [Google Scholar] [CrossRef]
  12. Wang, L.; Li, B.; Tian, L. Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients. Inf. Fusion 2014, 19, 20–28. [Google Scholar] [CrossRef]
  13. Singh, S.; Singh, H.; Bueno, G.; Deniz, O.; Singh, S.; Monga, H.; Hrisheekesha, P.N.; Pedraza, A. A review of image fusion: Methods, applications and performance metrics. Digit. Signal Process. 2023, 137, 104020. [Google Scholar] [CrossRef]
  14. Chan, T.; Shen, J.; Vese, L. Variational PDE models in image processing. Not. Am. Math. Soc. 2003, 50, 14–26. [Google Scholar]
  15. Chan, T.; Shen, J. Mathematical models for local non-texture inpainting. SIAM J. Appl. Math. 2002, 62, 1019–1043. [Google Scholar]
  16. Rahman, M.; Zhang, J.; Qin, J.; Lou, Y. Poisson image denoising based on fractional-order total variation. Inverse Probl. Imaging 2020, 14, 77–96. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Zhang, X. Variational bimodal image fusion with data-driven tight frame. Inf. Fusion 2020, 55, 164–172. [Google Scholar] [CrossRef]
  18. Thakur, R.K.; Maji, S.K. Multi scale pixel attention and feature extraction based neural network for image denoising. Pattern Recognit. 2023, 141, 109603. [Google Scholar] [CrossRef]
  19. Zhang, Q.; Huang, C.; Yang, L.; Yang, Z. Salt and pepper noise removal method based on graph signal reconstruction. Digit. Signal Process. 2023, 135, 103941. [Google Scholar] [CrossRef]
  20. Singh, A.; Kushwaha, S.; Alarfaj, M.; Singh, M. Comprehensive overview of backpropagation algorithm for digital image denoising. Electronics 2022, 11, 1590. [Google Scholar] [CrossRef]
  21. Kushwaha, S.; Singh, R.K. Optimization of the proposed hybrid denoising technique to overcome over-filtering issue. Biomed. Eng. Biomed. Tech. 2019, 64, 601–618. [Google Scholar]
  22. Tikhonov, A.; Goncharsky, A.; Stepanov, V.; Yagola, A. Numerical Methods for the Solution of Ill-Posed Problems; Mathematics and its Applications; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1995; 328p. [Google Scholar]
  23. Rudin, L.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  24. Chan, T.; Marquina, A.; Mulet, P. High-order total variation-based image restoration. SIAM J. Sci. Comput. 2000, 22, 503–516. [Google Scholar] [CrossRef]
  25. Zhang, J.; Ma, M.; Wu, Z.; Deng, C. High-order total bounded variation model and its fast algorithm for Poissonian image restoration. Math. Probl. Eng. 2019, 2019, 1–11. [Google Scholar] [CrossRef]
  26. Lysaker, M.; Lundervold, A.; Tai, X. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans. Image Process. 2003, 12, 1579–1590. [Google Scholar] [CrossRef] [PubMed]
  27. Bai, J.; Feng, X. Fractional-order anisotropic diffusion for image denoising. IEEE Trans. Image Process. 2007, 16, 2492–2502. [Google Scholar] [CrossRef] [PubMed]
  28. Pu, Y. Fractional differential analysis for texture of digital image. J. Algorithms Comput. Technol. 2007, 1, 357–380. [Google Scholar]
  29. Zhang, J.; Chen, K. A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 2015, 8, 2487–2518. [Google Scholar] [CrossRef]
  30. Liu, L.; Xu, L.; Fang, H. Infrared and visible image fusion and denoising via l2-lp norm minimization. Signal Process. 2020, 172, 107546. [Google Scholar] [CrossRef]
  31. Goyal, S.; Singh, V.; Rani, A.; Yadav, N. Multimodal image fusion and denoising in NSCT domain using CNN and FOTGV. Biomed. Signal Process. Control. 2022, 71, 103214. [Google Scholar] [CrossRef]
  32. Li, X.; Zhou, F.; Tan, H. Joint image fusion and denoising via three-layer decomposition and sparse representation. Knowl.-Based Syst. 2021, 224, 107087. [Google Scholar] [CrossRef]
  33. Yang, Q.; Chen, D.; Zhao, T.; Chen, Y. Fractional calculus in image processing: A review. Fract. Calc. Appl. Anal. 2016, 19, 1222–1249. [Google Scholar] [CrossRef]
  34. Varga, D. Full-Reference image quality assessment based on Grünwald–Letnikov derivative, image gradients, and visual saliency. Electronics 2022, 11, 559. [Google Scholar] [CrossRef]
  35. Henriques, M.; Valério, D.; Gordo, P.; Melicio, R. Fractional-order colour image processing. Mathematics 2021, 9, 457. [Google Scholar] [CrossRef]
  36. Mei, J.; Dong, Y.; Huang, T. Simultaneous image fusion and denoising by using fractional-order gradient information. J. Comput. Appl. Math. 2019, 351, 212–227. [Google Scholar] [CrossRef]
  37. Ullah, A.; Chen, W.; Khan, M.A. A new variational approach for restoring images with multiplicative noise. Comput. Math. Appl. 2016, 71, 2034–2050. [Google Scholar] [CrossRef]
  38. Jiang, Q.; Jin, X.; Chen, G.; Lee, S.; Cui, X.; Yao, S.; Wu, L. Two-scale decomposition-based multifocus image fusion framework combined with image morphology and fuzzy set theory. Inf. Sci. 2020, 541, 442–474. [Google Scholar] [CrossRef]
  39. Li, H.; Wu, X.; Kittler, J. Infrared and visible image fusion using a deep learning framework. In Proceedings of the International Conference on Pattern Recognition, Beijing, China, 20–24 August 2018; pp. 2705–2710. [Google Scholar]
  40. Hossain, M.M.; Hasan, M.M.; Rahim, M.A.; Rahman, M.M.; Yousuf, M.A.; Al-Ashhab, S.; Akhdar, H.F.; Alyami, S.A.; Azad, A.; Moni, M.A. Particle swarm optimized fuzzy CNN with quantitative feature fusion for ultrasound image quality identification. IEEE J. Transl. Eng. Health Med. 2022, 10, 1–12. [Google Scholar]
  41. Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 313–315. [Google Scholar] [CrossRef]
  42. Piella, G.; Heijmans, H. A new quality metric for image fusion. In Proceedings of the IEEE International Conference on Image Processing, Barcelona, Spain, 14–17 September 2003; Volume 3, p. III-173. [Google Scholar]
  43. Xydeas, C.; Petrovi, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309. [Google Scholar] [CrossRef]
  44. Zhao, W.; Lu, H. Medical image fusion and denoising with alternating sequential filter and adaptive fractional order total variation. IEEE Trans. Instrum. Meas. 2017, 66, 2283–2294. [Google Scholar] [CrossRef]
  45. Wang, G.; Li, W.; Du, J.; Xiao, B.; Gao, X. Medical image fusion and denoising algorithm based on a decomposition model of hybrid variation-sparse representation. IEEE J. Biomed. Health Inform. 2022, 26, 5584–5595. [Google Scholar] [CrossRef] [PubMed]
  46. Wang, W.; Xia, X.; He, C.; Ren, Z.; Wang, T.; Lei, B. A noise-robust online convolutional coding model and its applications to Poisson denoising and image fusion. Appl. Math. Model. 2021, 95, 644–666. [Google Scholar] [CrossRef]
Figure 1. The three-step method.
Figure 2. The proposed new three-step method.
Figure 3. The four columns (a–h) of images are the synthesized images and the noisy images at peak values 55, 155, and 255.
Figure 4. The three columns (a–f) of images are the denoised images with peak value 55 at fractional orders 1, 1.6, and 2.4, respectively.
Figure 5. (a) CT, (b) MRI, (c) MR-T1, (d) MR-T2. (e–h) and (i–l) are the noisy images at peaks of 55 and 155, respectively.
Figure 6. The four columns (a–l) are the denoised images and fused images obtained by processing the medical images at peaks of 55 and 155, respectively.
Figure 7. Comparison of the fusion effect of different parameter values for the noisy image with a peak of 55. Column 1: γ1 = γ2 = 10^3, 5 × 10^2. Column 2: β1 = β2 = 30, 100. Column 3: δ1 = δ2 = 10^3, 10^1.
Figure 8. (a) Multifocus1a, (b) Multifocus1b, (c) Multifocus2a, (d) Multifocus2b. (e–h) and (i–l) are the noisy images at peaks of 155 and 255, respectively.
Figure 9. The four columns (a–l) are the denoised images and fused images obtained by processing the multifocus images at peaks of 155 and 255, respectively.
Figure 10. The two columns (a–h) show the fused images obtained by the DDTF method and the proposed method for the images with a peak of 155, respectively.
Figure 11. (a) IR1, (b) VIS1, (c) IR2, (d) VIS2. (e–h) and (i–l) are the noisy images at peaks of 155 and 255, respectively.
Figure 12. The four columns (a–l) are the denoised images and fused images obtained by processing the infrared and visible images at peaks of 155 and 255, respectively.
Table 1. Indexes of the proposed method for medical images at peak values 55 and 155, respectively.

Test Images    Peak  Noisy        PSNR         MI    Qabf  Qe
CT/MRI         55    31.17/24.39  31.70/26.79  1.83  0.50  0.56
CT/MRI         155   35.70/28.85  36.06/28.95  2.55  0.59  0.48
MR-T1/MR-T2    55    27.98/23.90  29.91/26.15  2.61  0.40  0.53
MR-T1/MR-T2    155   32.41/28.40  32.86/28.29  3.16  0.47  0.41
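Two of the metrics reported in the tables, PSNR and mutual information (MI), are straightforward to compute from image arrays. The sketch below shows one possible implementation, assuming 8-bit grayscale inputs and a 256-bin joint histogram; it is an illustration of the standard definitions, not the authors' exact code.

```python
# PSNR between a reference image and its denoised estimate, and MI between
# a source image and the fused result, as commonly defined for 8-bit images.
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(estimate, dtype=np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mutual_information(a, b, bins=256):
    """Mutual information (in bits) from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    p_ab = joint / joint.sum()                 # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of a (column vector)
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of b (row vector)
    mask = p_ab > 0                            # avoid log(0)
    return np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask]))
```

For a fused image, MI is usually reported as the sum of the mutual information between the fused result and each source image, which can be obtained by calling `mutual_information` twice and adding the results.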
Table 2. Indexes of the proposed method for multifocus images at peak values 155 and 255, respectively.

Test Images    Peak  Noisy        PSNR         MI    Qabf  Qe
Multifocus1    155   25.97/25.72  28.02/27.50  4.00  0.40  0.39
Multifocus1    255   28.17/27.87  29.13/28.35  4.58  0.39  0.41
Multifocus2    155   25.39/25.41  29.72/30.16  4.71  0.54  0.70
Multifocus2    255   27.57/27.56  30.43/30.55  4.77  0.60  0.66
Table 3. Indexes of the proposed method and the DDTF method for the test images with a peak of 155.

Method    Index  CT/MRI  MR-T1/MR-T2  Multifocus1  Multifocus2
proposed  MI     2.51    3.11         4.32         4.69
          Qabf   0.59    0.39         0.39         0.54
          Qe     0.44    0.37         0.48         0.69
DDTF      MI     2.63    2.72         3.49         3.54
          Qabf   0.46    0.40         0.37         0.47
          Qe     0.33    0.28         0.25         0.46
Table 4. Indexes of the proposed method for infrared and visible images at peak values 155 and 255, respectively.

Test Images    Peak  Noisy        PSNR         MI    Qabf  Qe
IR1/VIS1       155   26.71/26.70  27.72/28.89  2.18  0.23  0.40
IR1/VIS1       255   28.90/29.21  29.29/30.47  2.62  0.24  0.41
IR2/VIS2       155   25.01/25.80  27.63/28.30  1.67  0.29  0.24
IR2/VIS2       255   27.18/28.00  28.61/29.57  2.12  0.31  0.23
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Zhao, R.; Liu, J. Fractional-Order Variational Image Fusion and Denoising Based on Data-Driven Tight Frame. Mathematics 2023, 11, 2260. https://doi.org/10.3390/math11102260
