Article

Fractional Derivatives Application to Image Fusion Problems

Institute of Control and Industrial Electronics, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(3), 1049; https://doi.org/10.3390/s22031049
Submission received: 9 November 2021 / Revised: 14 January 2022 / Accepted: 25 January 2022 / Published: 28 January 2022
(This article belongs to the Special Issue Fractional Sensor Fusion and Its Applications)

Abstract

In this paper, an analysis of a method that uses fractional order calculus for multispectral image fusion is presented. We analyze some correct basic definitions of the fractional order derivatives that are used in the image processing context. Several methods of determining fractional derivatives of digital images are tested, and the influence of the fractional order change on the quality of fusion is presented. The achieved results are compared with the results obtained for methods where integer order derivatives were used.

1. Introduction

Over the last few decades, significant development of image fusion theory can be observed. The main task of image fusion is to combine relevant data from different source images to generate a single image that contains richer information [1]. The most important part of this process is the effective extraction of image features and the use of appropriate fusion principles, which allow extracting useful information from the source images and integrating it into the fused image without introducing any artifacts [2]. For example, fusing a panchromatic (grayscale) photo presented in Figure 1 with a multispectral (color) photo (Figure 2) is useful when both images show the same view. If the panchromatic image shows more details than the multispectral one, it is worth combining them to obtain one color photo of good quality (more details will be visible). Figure 3 shows the effect of this kind of operation.
The presented example is only one of many image fusion problems. Among the fusion methods, we can highlight sparse representation, multi-scale transformation, subspace, variational, neural network, saliency detection, and mixed models [4]. Combining useful information from source images is very beneficial for subsequent applications and is widely used in such fields as photography visualization [5,6,7,8], object tracking [9,10], medical diagnosis [11,12], and remote sensing monitoring [13,14].
Recently, among image fusion methods, the emergence of algorithms based on fractional differential calculus can be observed [15,16,17,18,19]. They are applied in such problems as fingerprint detection [20], exposing essential elements in medical images [21], brain image analysis [22], elimination of noise in images (improving image quality) [23], or contrast enhancement [24]. As the fractional derivative has come into use relatively recently, new applications are still being found.
The adaptation of fractional calculus to image processing problems forced researchers to develop discrete 2D approximations of fractional derivative operators. In the literature, we can find many different definitions, both for continuous functions and for their discrete approximations. However, some discrete approximations were not correctly derived. In effect, operators that only imitate fractional operators, but are not actually fractional, have been proposed. The literature also shows deficiencies in the analysis of the individual approximations; thus, answers to the questions of which of them should be used, for which problem, and which order of the derivative should be applied are essential.
In this paper, we present an analysis of the method that uses a fractional order derivative for multispectral image fusion [15]. We analyze the correct basic definitions of the fractional order derivative. Original and corrected versions of the masks used in [21] are presented. On the experimental side, various methods of determining derivatives of digital images are tested on a different dataset than the one used in [15], and the influence of the fractional order change on the quality of fusion is presented. Additionally, the achieved results are compared with the results obtained for methods where integer order derivatives have been used. This novel comparison shows the pros and cons of the application of fractional calculus in the context of image fusion, as stated in Section 5, and motivates further studies.

2. Fractional Order Derivative

The non-integer (fractional) order derivative is a generalization of the traditional integer order derivative of the $n$-th order ($n \in \mathbb{N}_0$, where $\mathbb{N}_0$ is the set of natural numbers including zero) to a real or even complex order. This generalization can be approached in mathematical analysis in different ways. Fractional order calculus is quite a rapidly developing field nowadays, finding applications in more and more new areas.
Although this calculus was discovered over three centuries ago, it remained only a purely mathematical notion that had few or no applications for a long time. However, in the second half of the 20th century and at the beginning of the 21st century, it began to be recognized that it could be used to solve real problems. New applications for these tools were found, and more and more different models of fractional order derivatives began to emerge.
Today, fractional calculus plays an essential role in control theory, viscoelasticity, heating processes, heat conduction, biotechnology, and particle physics. The application of this calculus in image processing is also an issue that has been intensively researched in recent years. The use of some models has shown that satisfactory results have been obtained, for example, in medical image processing [21] or photo-based fingerprint recognition [20]. This calculus is used in problems where there is a memory effect since having memory is a feature of fractional order operators [21,25].
One of the challenges with the non-integer derivatives remains their physical interpretation. Contrary to integer order, the physical meaning of fractional or even real order is not entirely clear.
Another problem is that fractional order derivatives lack a local character: the value of the derivative of a function at a point depends on the entire function. This causes a significant increase in the number of necessary calculations compared to the integer order derivatives and, consequently, in the time needed to determine this type of operator.
Numerical algorithms designed to derive non-integer order derivatives contain critical, nested loops, and their complexity increases with the increasing iteration number. Programming languages such as Matlab and Python are very often used in image processing. However, these are interpreted languages and are not efficient at handling nested loops.
There are methods to speed up the calculations. One of them is the “short memory principle” [26], which allows for obtaining an approximate result. Its use makes it possible not to consider the distant values of the function when calculating the derivative at a given point. In exchange for speeding up the operation, the accuracy of the final result is reduced.
Another method that can speed up the computation of non-integer derivatives is Oustaloup’s [27] method. This method allows for obtaining a correct result in a predefined frequency band. The precision of the approximation is also limited to a narrow frequency spectrum. If this band is extended, the uncertainty of the obtained result increases.
Contrary to integer order derivatives, there is no single definition for non-integer order. Many definitions have been developed, but the most frequently used are three of them: Riemann–Liouville, Caputo, and Grünwald–Letnikov. Each of these definitions has advantages and disadvantages that will be discussed. For a broad class of functions, the definitions of Riemann–Liouville and Grünwald–Letnikov are equivalent. This makes it possible to use the Riemann–Liouville derivative definition at the beginning when defining the problem and then apply the Grünwald–Letnikov definition to obtain a solution.

2.1. Grünwald–Letnikov Definition

This definition was proposed in 1867 [28]. For order $\alpha > 0$, the fractional derivative is defined as follows [29]:
$$ {}^{GL}D_{0,t}^{\alpha} f(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t^{\alpha}} \sum_{k=0}^{N} \omega_k^{\alpha}\, f(t - k\,\Delta t), \qquad (1) $$
where $\omega_k^{\alpha} = (-1)^k \binom{\alpha}{k}$ and $N\,\Delta t = t$.
However, definition (1), based on the limit, is practically useful only for a finite-difference implementation; hence, the definition below is used:
$$ {}^{GL}D_{0,t}^{\alpha} f(t) = \sum_{k=0}^{n-1} \frac{f^{(k)}(0)\, t^{k-\alpha}}{\Gamma(k-\alpha+1)} + \frac{1}{\Gamma(n-\alpha)} \int_0^t (t-\tau)^{n-\alpha-1} f^{(n)}(\tau)\, d\tau, \qquad (2) $$
where $n-1 \le \alpha < n$, $n \in \mathbb{Z}^+$, $\alpha > 0$.
This definition is often used to determine the exact fractional derivative of a function. The achieved result is later used to assess the error made when the derivative is calculated with a discrete approximation. If $f(t)$ is sufficiently smooth, $f(t) \in C^n[0,t]$, then the Grünwald–Letnikov derivative is equivalent to the Riemann–Liouville definition.
In the literature, we can find the GL definition in the following form:
$$ {}^{GL}D^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{n=0}^{\left\lfloor \frac{t}{h} \right\rfloor} (-1)^n \frac{\Gamma(\alpha+1)}{n!\,\Gamma(\alpha-n+1)}\, f(t - nh), \qquad (3) $$
where $n-1 \le \alpha < n$, $n \in \mathbb{Z}^+$, $\alpha > 0$, and $\Gamma(x)$ is the gamma function.
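As an illustration (not part of the original study), the truncated sum in (3) can be evaluated numerically for a uniformly sampled signal. The following Python sketch is ours; the function names and the optional memory argument, which realizes the short memory principle mentioned earlier, are illustrative assumptions.

import numpy as np

def gl_weights(alpha, n):
    """Binomial weights w_k = (-1)^k * C(alpha, k) appearing in the GL sum (3)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1.0 - alpha) / k  # recursive form of (-1)^k C(alpha, k)
    return w

def gl_derivative(f, alpha, h=1.0, memory=None):
    """Truncated Grunwald-Letnikov derivative of a uniformly sampled signal f."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    w = gl_weights(alpha, n)
    out = np.zeros(n)
    for i in range(n):
        k_max = i if memory is None else min(i, memory)  # "short memory principle"
        out[i] = np.dot(w[:k_max + 1], f[i::-1][:k_max + 1]) / h ** alpha
    return out

# example: half-derivative (alpha = 0.5) of f(t) = t on [0, 1];
# the exact value at t = 1 is 2 / sqrt(pi), approximately 1.128
t = np.linspace(0.0, 1.0, 1001)
print(gl_derivative(t, 0.5, h=t[1] - t[0])[-1])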

2.2. Riemann–Liouville Definition

The definition of the Riemann–Liouville derivative for a real order greater than 0 of the function $f(t)$ takes the form [30]:
$$ {}^{RL}D_{0,t}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n} \int_0^t (t-\tau)^{n-\alpha-1} f(\tau)\, d\tau, \qquad (4) $$
where $n-1 < \alpha \le n$, $n \in \mathbb{Z}^+$, $\alpha > 0$, and $\Gamma(x)$ is the gamma function.
The advantage of this definition is that the function under study does not have to be continuous at the origin. It does not have to be differentiable either [31].

2.3. Caputo Definition

This definition was introduced by Michele Caputo in 1967 [32]. Unlike the derivative calculated from the Riemann–Liouville definition, in this case, we do not need to define the fractional initial conditions. For a real order $\alpha \in \mathbb{R}$, Caputo's definition of the non-integer order derivative is as follows [33]:
$$ {}^{C}D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t (t-\tau)^{n-\alpha-1} f^{(n)}(\tau)\, d\tau, \qquad (5) $$
where $t > 0$, $n-1 < \alpha \le n$, $n \in \mathbb{Z}^+$, and $\Gamma(x)$ is the gamma function.
The value of the derivative of the non-integer order based on Caputo's definition satisfies the important relationship:
$$ {}^{C}D^{\alpha} A = 0, \qquad (6) $$
where $A$ is a constant value. The great advantage of this method is that it includes integer initial and final conditions.
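In numerical practice, the Caputo derivative of order 0 < α < 1 on a uniform grid is commonly approximated with the classical L1 scheme. The sketch below is our illustration (it assumes numpy and a uniform step h, and is not taken from the cited works); it also checks numerically that the Caputo derivative of a constant vanishes.

import numpy as np
from math import gamma

def caputo_l1(f, alpha, h=1.0):
    """Caputo derivative of order 0 < alpha < 1 on a uniform grid (classical L1 scheme)."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    k = np.arange(n)
    b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)  # L1 weights
    out = np.zeros(n)
    for i in range(1, n):
        # first-order differences f(t_{i-j}) - f(t_{i-j-1}) for j = 0, ..., i-1
        df = f[i:0:-1] - f[i - 1::-1]
        out[i] = np.dot(b[:i], df) / (gamma(2.0 - alpha) * h ** alpha)
    return out

# the Caputo derivative of a constant signal is identically zero
t = np.linspace(0.0, 1.0, 101)
print(np.allclose(caputo_l1(np.full_like(t, 3.0), 0.5, h=t[1] - t[0]), 0.0))  # True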
When modeling systems with non-integer order derivatives, it is possible to use more than one definition. A combination of the Riemann–Liouville and Grünwald–Letnikov definitions is often used. In addition to the mentioned definitions, many others have also been created; however, they are not used that often. De Oliveira et al. collected and described many different definitions of non-integer order derivatives [34].

2.4. Derivatives in Image Processing

The derivative determined on an image allows for studying the changes in brightness or color in the image. The natural application of this operator is edge detection. In this case, we need to find the derivative in two orthogonal (perpendicular) directions. They do not necessarily have to be vertical and horizontal; for example, it is possible to calculate diagonal derivatives of an image. The calculation of the first derivative of an image belongs to the group of gradient methods. They are a collection of the simplest edge detection operations, which extract edges and remove the rest of the image.
The magnitude of the gradient is proportional to the rate of increase of the image function value at a given point. It also indicates how expressive the edge is: the higher the value, the more visible the edge. If we want to detect all the edges in the image, we should assume some threshold value. Pixels whose gradient magnitude is greater than or equal to this value are considered edge pixels.
Images are not continuous functions, and it is impossible to change the argument $t$ by an infinitely small amount. Thus, they must be considered as discrete functions, and it is necessary to use an approximate version of the operator that is able to act on a discrete function. The minimum step we can move in the image is one pixel. Thus, the formula for the derivative of an image takes the form:
$$ f'(t) \approx \frac{f(t+1) - f(t)}{1} = f(t+1) - f(t). \qquad (7) $$
Based on this relationship, it is possible to create many different methods of determining the gradient.

2.4.1. Integer Order Derivative Mask

Many methods for determining the integer order derivatives in the images have been proposed. One such operator is Sobel’s mask [35]. It is characterized by the fact that, during averaging, weights are used, giving the analyzed point the highest value. Sobel’s mask has the following form:
$$ \nabla_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad \nabla_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}. \qquad (8) $$
The second order derivative can be approximated by using the Laplace mask:
$$ \nabla^2 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}. \qquad (9) $$
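Both masks are applied to an image by a 2D convolution, followed, for edge detection, by the thresholding described above. A minimal Python sketch (ours, assuming scipy is available; the synthetic test image is only an example):

import numpy as np
from scipy.ndimage import convolve

# Sobel masks from (8) and the Laplace mask from (9)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T
laplace = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]], dtype=float)

def gradient_magnitude(img):
    """First-order gradient magnitude of a grayscale image using the Sobel masks."""
    gx = convolve(img, sobel_x, mode='reflect')
    gy = convolve(img, sobel_y, mode='reflect')
    return np.hypot(gx, gy)

# usage on a synthetic image with a single vertical edge
img = np.zeros((64, 64))
img[:, 32:] = 1.0
edges = gradient_magnitude(img) > 0.5  # simple thresholding of the gradient magnitude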
Based on the selected definitions of fractional order derivatives, various methods have been proposed that can be used in image processing. Some of them make it possible to build an appropriate mask approximating the non-integer order derivative. Other methods transform images directly. Unfortunately, many of the proposed models do not actually determine non-integer order derivatives, despite being described in this way.

2.4.2. Mask Based on Riemann–Liouville Definition

Amoako-Yirenkyi et al. [21] proposed a mask based on a generalization of the Riemann–Liouville definition for any order $\alpha \in [0, \infty)$. However, their derivation apparently contains a mistake. In this paper, we present a correct derivation based on the standard Riemann–Liouville definition presented in Section 2.2, for order $\alpha \in [0, 1)$.
For an analytical function $f(t)$, such that $t \in \mathbb{R}$ and $\alpha \in \mathbb{Q}$, a derivative operator is defined as:
$$ D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t \frac{f(\tau)}{(t-\tau)^{\alpha}}\, d\tau. \qquad (10) $$
By focusing on the integral expression in (10), we can write:
$$ \int_0^t \frac{f(\tau)}{(t-\tau)^{\alpha}}\, d\tau = \int_0^t f(\tau)\,(t-\tau)^{-\alpha}\, d\tau = t^{-\alpha} * f(t), \qquad (11) $$
where $*$ denotes convolution, and
$$ D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \left[ t^{-\alpha} * f(t) \right]. \qquad (12) $$
This equation is for a one-dimensional function, but because an image has two dimensions, we have to transform this formula into a two-dimensional form by putting $t = \sqrt{x^2 + y^2}$. Finally, determining the directional derivatives with respect to $x$ and $y$, we get the formulae for the gradient mask elements by finding the derivative in the horizontal and vertical directions, as follows:
$$ \Theta_x(x_i, y_i) = \frac{-\alpha \cdot x_i}{\Gamma(1-\alpha)} \left( x_i^2 + y_i^2 \right)^{-\frac{\alpha}{2}-1}, \qquad (13) $$
$$ \Theta_y(x_i, y_i) = \frac{-\alpha \cdot y_i}{\Gamma(1-\alpha)} \left( x_i^2 + y_i^2 \right)^{-\frac{\alpha}{2}-1}, \qquad (14) $$
where $-k \le i \le k$ and $-l \le j \le l$, with $(2k+1) \times (2l+1)$ being the size of the mask for every $k, l \ge 1$, and $\alpha$ is a constant parameter.
The determined mask of the size $5 \times 5$, proposed in [21], for the horizontal direction takes the form:
$$ M_x = \frac{1}{\Gamma(1-\alpha)} \begin{bmatrix} \frac{2\alpha\sqrt{8}^{\alpha}}{8} & \frac{\alpha\sqrt{5}^{\alpha}}{5} & 0 & -\frac{\alpha\sqrt{5}^{\alpha}}{5} & -\frac{2\alpha\sqrt{8}^{\alpha}}{8} \\[4pt] \frac{2\alpha\sqrt{5}^{\alpha}}{5} & \frac{\alpha\sqrt{2}^{\alpha}}{2} & 0 & -\frac{\alpha\sqrt{2}^{\alpha}}{2} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5} \\[4pt] \frac{2\alpha\sqrt{4}^{\alpha}}{4} & \alpha & 0 & -\alpha & -\frac{2\alpha\sqrt{4}^{\alpha}}{4} \\[4pt] \frac{2\alpha\sqrt{5}^{\alpha}}{5} & \frac{\alpha\sqrt{2}^{\alpha}}{2} & 0 & -\frac{\alpha\sqrt{2}^{\alpha}}{2} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5} \\[4pt] \frac{2\alpha\sqrt{8}^{\alpha}}{8} & \frac{\alpha\sqrt{5}^{\alpha}}{5} & 0 & -\frac{\alpha\sqrt{5}^{\alpha}}{5} & -\frac{2\alpha\sqrt{8}^{\alpha}}{8} \end{bmatrix}, \qquad (15) $$
while for the vertical direction:
$$ M_y = \frac{1}{\Gamma(1-\alpha)} \begin{bmatrix} \frac{2\alpha\sqrt{8}^{\alpha}}{8} & \frac{2\alpha\sqrt{5}^{\alpha}}{5} & \frac{2\alpha\sqrt{4}^{\alpha}}{4} & \frac{2\alpha\sqrt{5}^{\alpha}}{5} & \frac{2\alpha\sqrt{8}^{\alpha}}{8} \\[4pt] \frac{\alpha\sqrt{5}^{\alpha}}{5} & \frac{\alpha\sqrt{2}^{\alpha}}{2} & \alpha & \frac{\alpha\sqrt{2}^{\alpha}}{2} & \frac{\alpha\sqrt{5}^{\alpha}}{5} \\[4pt] 0 & 0 & 0 & 0 & 0 \\[4pt] -\frac{\alpha\sqrt{5}^{\alpha}}{5} & -\frac{\alpha\sqrt{2}^{\alpha}}{2} & -\alpha & -\frac{\alpha\sqrt{2}^{\alpha}}{2} & -\frac{\alpha\sqrt{5}^{\alpha}}{5} \\[4pt] -\frac{2\alpha\sqrt{8}^{\alpha}}{8} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5} & -\frac{2\alpha\sqrt{4}^{\alpha}}{4} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5} & -\frac{2\alpha\sqrt{8}^{\alpha}}{8} \end{bmatrix}. \qquad (16) $$
Analyzing the masks in Equations (15) and (16), it can be seen that the proposed masks do not follow Equations (13) and (14). This inconsistency makes the final mask formulae contradict the reasoning that follows from the definition of the Riemann–Liouville fractional derivative described in formula (4). The proper masks of the size $5 \times 5$ should have the following forms:
$$ M_x = \frac{1}{\Gamma(1-\alpha)} \begin{bmatrix} \frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} & \frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & 0 & -\frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & -\frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} \\[4pt] \frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & \frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & 0 & -\frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} \\[4pt] \frac{2\alpha\sqrt{4}^{\alpha}}{4^{\alpha+1}} & \alpha & 0 & -\alpha & -\frac{2\alpha\sqrt{4}^{\alpha}}{4^{\alpha+1}} \\[4pt] \frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & \frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & 0 & -\frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} \\[4pt] \frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} & \frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & 0 & -\frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & -\frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} \end{bmatrix}, \qquad (17) $$
while for the vertical direction:
$$ M_y = \frac{1}{\Gamma(1-\alpha)} \begin{bmatrix} \frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} & \frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & \frac{2\alpha\sqrt{4}^{\alpha}}{4^{\alpha+1}} & \frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & \frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} \\[4pt] \frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & \frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & \alpha & \frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & \frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} \\[4pt] 0 & 0 & 0 & 0 & 0 \\[4pt] -\frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & -\frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & -\alpha & -\frac{\alpha\sqrt{2}^{\alpha}}{2^{\alpha+1}} & -\frac{\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} \\[4pt] -\frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & -\frac{2\alpha\sqrt{4}^{\alpha}}{4^{\alpha+1}} & -\frac{2\alpha\sqrt{5}^{\alpha}}{5^{\alpha+1}} & -\frac{2\alpha\sqrt{8}^{\alpha}}{8^{\alpha+1}} \end{bmatrix}. \qquad (18) $$
In the conducted experiments, we compare the results obtained for the correctly determined masks. Still, we also analyze the results obtained for the incorrect masks to assess the impact of this error on the achieved results.
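The corrected masks can be generated directly from (13) and (14) for any mask size (2k + 1) × (2l + 1). The sketch below is our illustration (assuming numpy); it is not code used in the reported experiments.

import numpy as np
from math import gamma

def rl_masks(alpha, k=2, l=2):
    """Corrected Riemann-Liouville gradient masks of size (2k+1) x (2l+1), built from (13)-(14)."""
    y, x = np.mgrid[-k:k + 1, -l:l + 1].astype(float)
    r2 = x ** 2 + y ** 2
    with np.errstate(divide='ignore', invalid='ignore'):
        common = r2 ** (-alpha / 2.0 - 1.0) / gamma(1.0 - alpha)
    common[r2 == 0] = 0.0        # the centre is singular; both masks have a zero element there
    m_x = -alpha * x * common    # horizontal mask, cf. (17)
    m_y = -alpha * y * common    # vertical mask, cf. (18)
    return m_x, m_y

m_x, m_y = rl_masks(alpha=0.5)   # 5 x 5 masks for order 0.5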

2.4.3. Eight-Directions Non-Integer Order Mask

In [15], a method was described that allows for building an eight-direction mask approximating non-integer order derivatives. This model is based on the Grünwald–Letnikov definition presented in (3).
For images, the minimum change of $h$ is equal to one pixel. After differentiating the finite function $f(x)$ with respect to $x$, based on the expression (3), one can get the Formulae (19) and (20):
$$ \frac{\partial^{\alpha} f(x,y)}{\partial x^{\alpha}} \approx f(x,y) + (-\alpha) f(x-1,y) + \frac{(-\alpha)(-\alpha+1)}{2!} f(x-2,y), \qquad (19) $$
$$ \frac{\partial^{\alpha} f(x,y)}{\partial y^{\alpha}} \approx f(x,y) + (-\alpha) f(x,y-1) + \frac{(-\alpha)(-\alpha+1)}{2!} f(x,y-2). \qquad (20) $$
Based on the above expressions, it is possible to build masks determining approximations of non-integer derivatives in eight different directions. Following [15], the final mask composed of all eight directional masks should be more robust to image rotation due to its symmetry. The mask components for eight directions appear as follows:
  • Derivative mask in direction $x^+$:
$$ \begin{bmatrix} 0 & 1 & 0 \\ 0 & -\alpha & 0 \\ 0 & \frac{\alpha^2-\alpha}{2} & 0 \end{bmatrix}. $$
  • Derivative mask in direction $x^-$:
$$ \begin{bmatrix} 0 & \frac{\alpha^2-\alpha}{2} & 0 \\ 0 & -\alpha & 0 \\ 0 & 1 & 0 \end{bmatrix}. $$
  • Derivative mask in direction $y^+$:
$$ \begin{bmatrix} 0 & 0 & 0 \\ 1 & -\alpha & \frac{\alpha^2-\alpha}{2} \\ 0 & 0 & 0 \end{bmatrix}. $$
  • Derivative mask in direction $y^-$:
$$ \begin{bmatrix} 0 & 0 & 0 \\ \frac{\alpha^2-\alpha}{2} & -\alpha & 1 \\ 0 & 0 & 0 \end{bmatrix}. $$
  • Derivative mask in direction left upper diagonal:
$$ \begin{bmatrix} \frac{\alpha^2-\alpha}{2} & 0 & 0 \\ 0 & -\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$
  • Derivative mask in direction left lower diagonal:
$$ \begin{bmatrix} 0 & 0 & 1 \\ 0 & -\alpha & 0 \\ \frac{\alpha^2-\alpha}{2} & 0 & 0 \end{bmatrix}. $$
  • Derivative mask in direction right upper diagonal:
$$ \begin{bmatrix} 0 & 0 & \frac{\alpha^2-\alpha}{2} \\ 0 & -\alpha & 0 \\ 1 & 0 & 0 \end{bmatrix}. $$
  • Derivative mask in direction right lower diagonal:
$$ \begin{bmatrix} 1 & 0 & 0 \\ 0 & -\alpha & 0 \\ 0 & 0 & \frac{\alpha^2-\alpha}{2} \end{bmatrix}. $$
Based on the presented mask components, the resultant eight-direction mask can be obtained:
$$ M = \begin{bmatrix} \frac{\alpha^2-\alpha}{2} & 0 & \frac{\alpha^2-\alpha}{2} & 0 & \frac{\alpha^2-\alpha}{2} \\ 0 & -\alpha & -\alpha & -\alpha & 0 \\ \frac{\alpha^2-\alpha}{2} & -\alpha & 8 & -\alpha & \frac{\alpha^2-\alpha}{2} \\ 0 & -\alpha & -\alpha & -\alpha & 0 \\ \frac{\alpha^2-\alpha}{2} & 0 & \frac{\alpha^2-\alpha}{2} & 0 & \frac{\alpha^2-\alpha}{2} \end{bmatrix}, $$
where $\alpha$ is the derivative order.
This mask appears to approximate fractional derivatives and will be further applied to the image fusion study. Its use is presented in [15] and may be a good reference point for other methods.
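For illustration, the resultant eight-direction mask can also be assembled programmatically from the three Grünwald–Letnikov coefficients appearing in (19) and (20). The sketch below is ours (it is not code from [15]) and assumes that the mask is centred on the processed pixel.

import numpy as np

def eight_direction_mask(alpha):
    """Resultant 5x5 eight-direction fractional mask built from the GL coefficients
    1, -alpha and (alpha**2 - alpha)/2 used in (19) and (20)."""
    c1 = -alpha
    c2 = (alpha ** 2 - alpha) / 2.0
    m = np.zeros((5, 5))
    m[2, 2] = 8.0                          # the coefficient 1 summed over all 8 directions
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1),
                  (-1, -1), (-1, 1), (1, -1), (1, 1)]
    for dy, dx in directions:
        m[2 + dy, 2 + dx] += c1            # first past sample in each direction
        m[2 + 2 * dy, 2 + 2 * dx] += c2    # second past sample in each direction
    return m

print(eight_direction_mask(0.1))           # e.g., order alpha = 0.1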

2.4.4. FFT Approximation of a Fractional Order Derivative

In [36], a method for determining the fractional order derivative was shown. It is based on the Riemann–Liouville definition of the following form:
$$ {}^{RL}D_{a,t}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n} \int_a^t (t-\tau)^{n-\alpha-1} f(\tau)\, d\tau, \qquad (21) $$
where $\alpha \in \mathbb{R}$ is the fractional order of the differ-integral of the function $f(t)$ and, for $n \in \mathbb{N}_0$, we have: $n-1 < \alpha \le n$ for $\alpha > 0$, and $n = 0$ for $\alpha \le 0$.
The Fourier transform of the Riemann–Liouville fractional derivative with the lower bound $a = -\infty$ is equal to:
$$ \mathcal{F}\left( D^{\alpha} f(x) \right) = (j\omega)^{\alpha}\, F(\omega). \qquad (22) $$
For any two-dimensional function $g(x,y)$ absolutely integrable in $(-\infty, \infty) \times (-\infty, \infty)$, the corresponding 2D Fourier transform is as follows [30]:
$$ G(\omega_1, \omega_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x,y)\, e^{-j(\omega_1 x + \omega_2 y)}\, dx\, dy. \qquad (23) $$
Therefore, we can write the formulae for fractional order derivatives as:
$$ D_x^{\alpha} g = \mathcal{F}^{-1}\left[ (j\omega_1)^{\alpha}\, G(\omega_1, \omega_2) \right], \qquad D_y^{\alpha} g = \mathcal{F}^{-1}\left[ (j\omega_2)^{\alpha}\, G(\omega_1, \omega_2) \right], \qquad (24) $$
where $\mathcal{F}^{-1}$ is the inverse 2D continuous Fourier transform operator.
Finally, the result of the image fractional derivative will be determined by the real part of the sum of the inverse transforms:
$$ D^{\alpha} g = \Re\left( D_x^{\alpha} g + D_y^{\alpha} g \right), \qquad (25) $$
where $\Re$ denotes the real part of a complex function.
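A possible discrete realization of this approach replaces the continuous transform with the 2D FFT of the image. The sketch below is our illustration (assuming numpy); the choice of the frequency grids and of the principal branch of (jω)^α are assumptions of this sketch rather than details taken from [36].

import numpy as np

def fft_fractional_gradient(img, alpha):
    """Sum of the fractional x- and y-derivatives of an image computed in the Fourier domain;
    the real part of the sum of the inverse transforms is returned, as in (25)."""
    rows, cols = img.shape
    G = np.fft.fft2(img)
    w1 = 2.0 * np.pi * np.fft.fftfreq(cols)      # angular frequencies along x (columns)
    w2 = 2.0 * np.pi * np.fft.fftfreq(rows)      # angular frequencies along y (rows)
    jw1 = (1j * w1[np.newaxis, :]) ** alpha      # (j*w1)^alpha, principal branch
    jw2 = (1j * w2[:, np.newaxis]) ** alpha      # (j*w2)^alpha, principal branch
    dx = np.fft.ifft2(jw1 * G)
    dy = np.fft.ifft2(jw2 * G)
    return np.real(dx + dy)

# usage on a synthetic image with a single vertical edge
img = np.zeros((64, 64))
img[:, 32:] = 1.0
d = fft_fractional_gradient(img, alpha=0.2)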

3. Methods and Metrics Used

For our experiments, we used the component substitution method proposed in [37]. The basis of this image fusion method is to extract the details of images by determining the difference between a panchromatic image and a linear combination of low-quality multispectral image channels. This method is effective when two combined images contain almost the same information.
The image fusion algorithm without a fractional order derivative was proposed in [38] and can be written as follows:
$$ M_k^H = M_k^L + g_k \left( P - I \right), \qquad (26) $$
where
$$ I = \sum_{i=1}^{N} \omega_i M_i^L, \qquad (27) $$
and $M_k^H$ is the $k$-th band of the fused image, $M_k^L$ is the $k$-th band of the low-resolution multispectral image, $\omega_i$ represents the band weight, and $g_k$ is a constant gain determined according to the relationship from [39]:
$$ g_k = \frac{\mathrm{cov}(M_k^L, I)}{\mathrm{var}(I)}, \qquad (28) $$
for $k = 1, 2, \ldots, N$, where $P$ is the panchromatic image, $I$ is a linear combination of the low-resolution multispectral image bands, $N$ indicates the number of bands covering the spectral signature of the panchromatic image, $\mathrm{cov}(X, Y)$ denotes the covariance between the images $X$ and $Y$, and $\mathrm{var}(X)$ is the variance of the image $X$. The band coefficients are determined in accordance with the AIHS (Adaptive Intensity-Hue-Saturation) approach [40], which achieves good results in this problem.
Azarang et al. in [15] proposed a modification of this method. Using the non-integer order derivative of the difference (P–I) should strengthen the edges and improve the fusion results. The modified form of the algorithm (26) can be written as follows:
$$ M_k^H = M_k^L + g_k \times \left[ m * (P - I) \right], \qquad (29) $$
where $m$ is the proposed mask and $*$ is the convolution operator.
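A compact sketch of the fusion scheme (26)–(29) is given below. It is our illustration (assuming numpy and scipy); for simplicity it uses equal band weights ω_i instead of the adaptive AIHS weights [40], so it should be read as a sketch of the idea rather than a faithful reimplementation of [15].

import numpy as np
from scipy.ndimage import convolve

def fuse(pan, ms, mask=None, weights=None):
    """Component-substitution fusion following (26)-(29).

    pan     -- panchromatic image, shape (H, W)
    ms      -- low-resolution multispectral image, shape (H, W, N)
    mask    -- optional derivative mask applied to (P - I) as in (29); None reproduces (26)
    weights -- band weights w_i; equal weights are used by default (a simplification of
               this sketch, not the AIHS optimisation described above)
    """
    n_bands = ms.shape[2]
    if weights is None:
        weights = np.full(n_bands, 1.0 / n_bands)
    I = np.tensordot(ms, weights, axes=([2], [0]))        # Eq. (27)
    detail = pan - I
    if mask is not None:
        detail = convolve(detail, mask, mode='reflect')   # m * (P - I) in Eq. (29)
    fused = np.empty(ms.shape, dtype=float)
    for k in range(n_bands):
        band = ms[..., k]
        cov_k = np.mean((band - band.mean()) * (I - I.mean()))  # cov(M_k^L, I)
        g_k = cov_k / np.var(I)                                  # Eq. (28)
        fused[..., k] = band + g_k * detail                      # Eq. (26) or (29)
    return fused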

Metrics

A problem with comparing the quality of image fusion results is the lack of an appropriate metric. Our experiments described in Section 4 support the claim that commonly used metrics such as the ones defined below are not always able to clearly determine which approach results in better visual quality of an image. Because of this, in the experiments, a selection of metrics is used.
To evaluate the results, the following metrics were applied (a short computational sketch of selected metrics is given after the list):
  • Root Mean Square Error (RMSE)—measures the changes in pixel values of the input band of the multispectral image R and the sharpened image F. This metric is fundamental if the images contain large, uniform areas. This error is calculated using the following formula [41]:
    $$ RMSE = \sqrt{ \frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} \left| R(i,j) - F(i,j) \right|^2 }. \qquad (30) $$
    Ideal value of this error is 0.
  • Relative dimensionless global error in synthesis (ERGAS) is a global quality factor. This error is sensitive to the change in the average pixel value of the image and the dynamically changing range. Its value tells about the amount of spectral distortion in the image. It is expressed as [41]:
    $$ ERGAS = 100 \cdot \frac{h}{l} \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \frac{RMSE(i)}{\mu(i)} \right)^2 }, \qquad (31) $$
    where $\frac{h}{l}$ is the ratio of the number of pixels of the panchromatic image to the number of pixels of the multispectral image, $\mu(i)$ is the mean of the $i$-th band, and $N$ is the total number of bands. In the case of this metric, we aim for the error rate to be close to zero.
  • Spectral angle mapper (SAM) calculates spectral similarities by finding a spectral angle between two spectral vectors with a common origin. The length of a spectrum vector $L_{\rho}$ is calculated by the following equation:
    $$ L_{\rho} = \sqrt{ \sum_{\lambda=1}^{M} \rho_{\lambda}^2 }, \qquad (32) $$
    and the spectral angle θ is calculated as [42]:
    $$ \theta = \cos^{-1} \left( \frac{ \sum_{\lambda=1}^{M} \rho_{\lambda}\, \hat{\rho}_{\lambda} }{ L_{\rho}\, L_{\hat{\rho}} } \right), \qquad (33) $$
    where $L_{\rho}$ is the length of the endmember vector, $L_{\hat{\rho}}$ is the length of the modeled spectrum vector, and $\rho_{\lambda}$ is the reflectance of the endmember. In the case of this metric, we aim for the error rate to be close to zero.
  • Correlation coefficient (CC)—shows the spectral correlation between two images. The value of this coefficient for the sharpened image F and the input multispectral image R is determined as follows [41]:
    $$ CC(R, F) = \frac{ \sum_{m} \sum_{n} \left( R_{mn} - \bar{R} \right)\left( F_{mn} - \bar{F} \right) }{ \sqrt{ \sum_{m} \sum_{n} \left( R_{mn} - \bar{R} \right)^2 \sum_{m} \sum_{n} \left( F_{mn} - \bar{F} \right)^2 } }. \qquad (34) $$
    $\bar{R}$ and $\bar{F}$ denote the average values of the $R$ and $F$ images, while $m$ and $n$ denote the shape of the images. The optimal value of this coefficient is 1.
  • Universal Image Quality Index (UIQI) is defined as [41]:
    $$ Q(R, F) = \frac{\sigma_{RF}}{\sigma_R \cdot \sigma_F} \times \frac{2 \cdot \bar{R} \cdot \bar{F}}{\bar{R}^2 + \bar{F}^2} \times \frac{2 \cdot \sigma_R \cdot \sigma_F}{\sigma_R^2 + \sigma_F^2}. \qquad (35) $$
    The first factor is the correlation coefficient between the images $R$ and $F$, the second factor measures the luminance distortion, and the last one represents the contrast distortion. $\sigma_{RF}$ denotes the covariance between the images $R$ and $F$, $\bar{R}$ and $\bar{F}$ are the mean values, and $\sigma_R^2$ and $\sigma_F^2$ denote the variances of the $R$ and $F$ images, respectively. The best value of this index is 1. It can be reached if $R$ equals $F$ for all pixels.
  • Relative Average Spectral Error (RASE) is computed from the RMSE values using the following relation [41]:
    $$ RASE = \frac{100}{\mu} \sqrt{ \frac{1}{N} \sum_{i=1}^{N} RMSE^2(B_i) }, \qquad (36) $$
    where μ is the mean radiance of the N spectral bands and B i represents the i -th band of the input multispectral image. An optimal value of this error is equal to 0.
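A short computational sketch of selected metrics (RMSE, ERGAS, and CC), written directly from the formulas above and assuming numpy; the h/l ratio of ERGAS is passed as a parameter:

import numpy as np

def rmse(r, f):
    """Root mean square error between a reference band r and a fused band f."""
    return np.sqrt(np.mean(np.abs(r - f) ** 2))

def ergas(ref, fused, ratio):
    """ERGAS for multi-band images of shape (H, W, N); ratio is the h/l factor."""
    n_bands = ref.shape[2]
    acc = 0.0
    for i in range(n_bands):
        acc += (rmse(ref[..., i], fused[..., i]) / np.mean(ref[..., i])) ** 2
    return 100.0 * ratio * np.sqrt(acc / n_bands)

def correlation_coefficient(r, f):
    """Correlation coefficient CC between two single-band images."""
    rc, fc = r - r.mean(), f - f.mean()
    return np.sum(rc * fc) / np.sqrt(np.sum(rc ** 2) * np.sum(fc ** 2))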

4. Experiments

For the experiments, a set of satellite photos in PNG format from Bing Maps [3] was used. The creator of this collection divided the data into categories such as land and water. Our experiments were focused only on the part with land-based images, containing 1078 images. This collection includes good-quality panchromatic photos. Based on this collection, two image sets were prepared. The first was the result of saving the color photos in grayscale, representing high-quality panchromatic photos. The second contained multispectral color photos with quality reduced by applying a blurring mask.
As the first experiment, the mean error values of the image fusion algorithm in its basic version, without additional derivatives [43], were checked. Its description and an indication of where the examined modification can be applied can be found in Section 3. The result of this experiment is shown in Table 1. Then, for the prepared data set, the method proposed in [15], based on the Grünwald–Letnikov definition and building the eight-direction mask defined in Section 2.4.3 to approximate the non-integer order derivatives, was examined. The results achieved for this method are presented in Table 2. The experiments with the masks presented in Section 2.4.2 are shown in Table 3 (the incorrect mask) and Table 4 (the improved version of this mask). An experiment was also carried out in which the fractional derivative was determined using the Fast Fourier Transform, as described in Section 2.4.4. The results of this experiment are presented in Table 5. For comparison, the experiments with the first derivative using Sobel's filter and the second derivative using the Laplace operator, described in Section 2.4.1, were added. The results of Sobel's and Laplace's filters for image fusion are presented in Table 6.
The results presented in the tables show that the best fusion result for 4 out of 6 metrics was obtained by using the eight-direction mask proposed in [15] with derivative order α = 0.1 (see Table 2). This result is slightly better than the results obtained without fractional order derivatives (see Table 1). It can be observed that increasing the fractional order of the derivative from 0.1 to 0.9 worsens the results; for orders in the range [0.6, 0.9], this mask achieved the worst results of all examined methods.
The best results among all the proposed modifications were obtained for the FFT method with the order α = 0.2, which can be seen in Table 5. Up to the order of 0.4, the error increases only slightly, while from 0.5 the errors grow with the order, but much more slowly than with the original method, for which the errors above 0.5 are already significant.
Although the lowest errors obtained by the FFT-based algorithm are about twice as high as the best results obtained with the eight-direction mask, the visual assessment of the final images calls the determined error values into question. The FFT fusion images look optically similar, just as precise, and sometimes appear sharper, which can be observed in Figure 4. This confirms that, in image processing, better metric values do not always guarantee higher visual quality.
The incorrect mask proposed in [21] achieved the worst results of the tested fractional order operators (see Table 3). The corrected version of this mask improved the results slightly (see Table 4). For these two masks, it can be observed that the maximum errors were obtained around the order of 0.5.
The Sobel and Laplace filters, used to approximate the first and the second order derivatives, gave abysmal results for all metrics (see Table 6).

5. Conclusions

In this paper, improved, corrected versions of the edge detecting masks based on the fractional derivative have been proposed. The masks were implemented with different approximations of this derivative. The analyzed modifications of the different approximations did not improve the quality of image fusion much. In some cases, the introduced changes, while improving the quality metrics of the fused images, worsened their visual quality for the tested set.
The analysis of the achieved metric values shows that the original fractional order solution achieved the best metrics for order α = 0.1; however, the fusion error for orders higher than 0.6 increased dramatically. More stable results were achieved by using the FFT algorithm for computing fractional order derivatives. Despite the error of the FFT algorithm for α = 0.2 being about two times higher, visual verification indicates that this algorithm produced sharper fused images. This confirms the thesis that there are currently no metrics that would unambiguously assess the quality of image fusion.
The integer order derivative methods produce results that completely disrupt the algorithm. Comparing the results of the various methods for determining non-integer order derivatives shows that these methods often have a much greater chance of success in applications where standard integer order methods do not bring positive results. Edge reinforcement is essential in this application. Fractional derivatives accomplish this to varying degrees, creating a greater chance that a particular combination will be better suited to a given application.
The experiments presented show the potential of applying fractional derivatives in the image fusion context. However, there is still a lot of future research required. It would be worth testing the operation of the best-performing filter for different data sets. In addition, new masks based on different definitions of a fractional derivative may lead to better results. A well-known problem of image quality metrics not clearly reflecting the image’s visual quality should be addressed in future work.

Author Contributions

Conceptualization, A.D.; methodology, G.S.; software, S.M.; validation, A.D.; formal analysis, G.S.; data curation, S.M.; writing—original draft preparation, G.S.; writing—review and editing, A.D.; visualization, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, H.; Xu, H.; Tian, X.; Jiang, J.; Ma, J. Image Fusion Meets Deep Learning: A Survey and Perspective. Inf. Fusion 2021, 76, 323–336.
2. Ma, J.; Ma, Y.; Li, C. Infrared and Visible Image Fusion Methods and Applications: A Survey. Inf. Fusion 2019, 45, 153–178.
3. Gallego, A.J.; Pertusa, A.; Gil, P. Automatic Ship Classification from Optical Aerial Images with Convolutional Neural Networks. Remote Sens. 2018, 10, 511.
4. Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A Generative Adversarial Network for Infrared and Visible Image Fusion. Inf. Fusion 2019, 48, 11–26.
5. Zhang, Z.; Blum, R. A Categorization of Multiscale-Decomposition-Based Image Fusion Schemes with a Performance Study for a Digital Camera Application. Proc. IEEE 1999, 87, 1315–1326.
6. Petschnigg, G.; Szeliski, R.; Agrawala, M.; Cohen, M.; Hoppe, H.; Toyama, K. Digital Photography with Flash and No-Flash Image Pairs. ACM Trans. Graph. 2004, 23, 664–672.
7. Li, S.; Kang, X. Fast Multi-Exposure Image Fusion with Median Filter and Recursive Filter. IEEE Trans. Consum. Electron. 2012, 58, 626–632.
8. Bavirisetti, D.P.; Dhuli, R. Multi-Focus Image Fusion Using Maximum Symmetric Surround Saliency Detection. ELCVIA Electron. Lett. Comput. Vis. Image Anal. 2016, 14, 58–73.
9. Schnelle, S.R.; Chan, A.L. Enhanced Target Tracking Through Infrared-Visible Image Fusion. In Proceedings of the 14th International Conference on Information Fusion, Sun City, South Africa, 1–4 November 2011; pp. 1–8.
10. Zhu, Y.; Li, C.; Luo, B.; Tang, J.; Wang, X. Dense Feature Aggregation and Pruning for RGBT Tracking. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 465–472.
11. Qu, G.; Zhang, D.; Yan, P. Medical Image Fusion by Wavelet Transform Modulus Maxima. Opt. Express 2001, 9, 184–190.
12. Bhatnagar, G.; Wu, Q.M.J.; Liu, Z. Directive Contrast Based Multimodal Medical Image Fusion in NSCT Domain. IEEE Trans. Multimed. 2013, 15, 1014–1024.
13. Amarsaikhan, D.; Saandar, M.; Ganzorig, M.; Blotevogel, H.; Egshiglen, E.; Gantuyal, R.; Nergui, B.; Enkhjargal, D. Comparison of Multisource Image Fusion Methods and Land Cover Classification. Int. J. Remote Sens. 2012, 33, 2532–2550.
14. Ghassemian, H. A Review of Remote Sensing Image Fusion Methods. Inf. Fusion 2016, 32, 75–89.
15. Azarang, A.; Ghassemian, H. Application of Fractional-Order Differentiation in Multispectral Image Fusion. Remote Sens. Lett. 2018, 9, 91–100.
16. Li, H.; Yu, Z.; Mao, C. Fractional Differential and Variational Method for Image Fusion and Super-Resolution. Neurocomputing 2016, 171, 138–148.
17. Mei, J.J.; Dong, Y.; Huang, T.Z. Simultaneous Image Fusion and Denoising by Using Fractional-Order Gradient Information. J. Comput. Appl. Math. 2019, 351, 212–227.
18. Li, X.; Nie, X.; Ding, Z.; Huang, H.; Zhang, Y.; Feng, L. Remote Sensing Image Fusion Method Based on Adaptive Fractional Differential. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–5.
19. Li, J.; Yuan, G.; Fan, H. Multispectral Image Fusion Using Fractional-Order Differential and Guided Filtering. IEEE Photonics J. 2019, 11, 1–18.
20. Baloochian, H.; Ghaffary, H.R.; Balochian, S. Enhancing Fingerprint Image Recognition Algorithm Using Fractional Derivative Filters. Open Comput. Sci. 2017, 7, 9–16.
21. Amoako-Yirenkyi, P.; Appati, J.K.; Dontwi, I.K. A New Construction of a Fractional Derivative Mask for Image Edge Analysis Based on Riemann–Liouville Fractional Derivative. Adv. Differ. Equ. 2016, 2016, 238.
22. Xu, C.; Wen, Y.; He, B. A Novel Fractional Order Derivate Based Log-Demons with Driving Force for High Accurate Image Registration. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1997–2001.
23. Shukla, A.K.; Pandey, R.K.; Reddy, P.K. Generalized Fractional Derivative Based Adaptive Algorithm for Image Denoising. Multimed. Tools Appl. 2020, 79, 14201–14224.
24. Khanna, S.; Chandrasekaran, V. Fractional Derivative Filter for Image Contrast Enhancement with Order Prediction. In Proceedings of the IET Conference on Image Processing (IPR 2012), London, UK, 3–4 July 2012; pp. 1–6.
25. Hristov, J. Transient Heat Diffusion with a Non-Singular Fading Memory: From the Cattaneo Constitutive Equation with Jeffrey's Kernel to the Caputo-Fabrizio Time-Fractional Derivative. Therm. Sci. 2016, 20, 757–762.
26. Deng, W. Short Memory Principle and a Predictor–Corrector Approach for Fractional Differential Equations. J. Comput. Appl. Math. 2007, 206, 174–188.
27. Oustaloup, A. La Commande CRONE: Commande Robuste D'Ordre Non Entier; Hermés: Paris, France, 1991.
28. Scherer, R.; Kalla, S.L.; Tang, Y.; Huang, J. The Grünwald–Letnikov Method for Fractional Differential Equations. Comput. Math. Appl. 2011, 62, 902–917.
29. Jacobs, B.A. A New Grünwald–Letnikov Derivative Derived from a Second-Order Scheme. Abstr. Appl. Anal. 2015, 2015, 952057.
30. Podlubny, I. (Ed.) Chapter 10—Survey of Applications of the Fractional Calculus. In Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 1999; Volume 198.
31. Lassoued, A.; Boubaker, O. Chapter 17—Fractional-Order Hybrid Synchronization for Multiple Hyperchaotic Systems. In Recent Advances in Chaotic Systems and Synchronization; Boubaker, O., Jafari, S., Eds.; Emerging Methodologies and Applications in Modelling; Academic Press: Cambridge, MA, USA, 2019; pp. 351–366.
32. Almeida, R. A Caputo Fractional Derivative of a Function with Respect to Another Function. Commun. Nonlinear Sci. Numer. Simul. 2017, 44, 460–481.
33. Owolabi, K.M.; Gómez-Aguilar, J.F.; Fernández-Anaya, G.; Lavín-Delgado, J.E.; Hernández-Castillo, E. Modelling of Chaotic Processes with Caputo Fractional Order Derivative. Entropy 2020, 22, 1027.
34. de Oliveira, E.C.; Tenreiro Machado, J.A. A Review of Definitions for Fractional Derivatives and Integral. Math. Probl. Eng. 2014, 2014, 238459.
35. Gao, W.; Zhang, X.; Yang, L.; Liu, H. An Improved Sobel Edge Detection. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 9–11 July 2010.
36. Sarwas, G.; Skoneczny, S. Half Profile Face Image Clustering Based on Feature Points. In Image Processing and Communications Challenges 10; Choraś, M., Choraś, R.S., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 140–147.
37. Dou, W.; Chen, Y.; Li, X.; Sui, D.Z. A General Framework for Component Substitution Image Fusion: An Implementation Using the Fast Image Fusion Method. Comput. Geosci. 2007, 33, 219–228.
38. Rahmani, S.; Strait, M.; Merkurjev, D.; Moeller, M.; Wittman, T. An Adaptive IHS Pan-Sharpening Method. IEEE Geosci. Remote Sens. Lett. 2010, 7, 746–750.
39. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-Tailored Multiscale Fusion of High-Resolution MS and Pan Imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596.
40. Meng, X.; Li, J.; Shen, H.; Zhang, L.; Zhang, H. Pansharpening with a Guided Filter Based on Three-Layer Decomposition. Sensors 2016, 16, 1068.
41. Panchal, S.; Thakker, R.A. Implementation and Comparative Quantitative Assessment of Different Multispectral Image Pansharpening Approches. arXiv 2015, arXiv:1511.04659.
42. Dennison, P.; Halligan, K.; Roberts, D. A Comparison of Error Metrics and Constraints for Multiple Endmember Spectral Mixture Analysis and Spectral Angle Mapper. Remote Sens. Environ. 2004, 93, 359–367.
43. Aiazzi, B.; Baronti, S.; Selva, M. Improving Component Substitution Pansharpening Through Multivariate Regression of MS +Pan Data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239.
Figure 1. Panchromatic grayscale image. Grayscale version of Figure 3.
Figure 2. Color multispectral image. Blurred version of Figure 3.
Figure 3. Example of panchromatic and multispectral satellite image fusion on an image from MASATI dataset v2 [3], which is available for the scientific community on demand at http://www.iuii.ua.es/datasets/masati (accessed on 8 November 2021).
Figure 4. Comparison of the image fusion of the original algorithm based on an eight-direction approximation mask with an FFT solution. (a) Eight-direction mask for derivative order α = 0.1 with metrics: ERGAS = 0.31914, SAM = 0.17813, RASE = 1.2257, RMSE = 0.99474, UIQI = 0.14063, CC = 0.14238. (b) FFT solution for derivative order α = 0.2 with metrics: ERGAS = 0.60386, SAM = 0.32631, RASE = 2.322, RMSE = 1.8845, UIQI = 0.13603, CC = 0.1355.
Table 1. Results obtained without derivative.

ERGAS    SAM      RASE     RMSE     UIQI      CC
1.7451   1.1342   6.7379   5.909    0.98136   0.99716
Table 2. Results of the original method of determining fractional derivatives proposed in [15].

Order   ERGAS    SAM      RASE     RMSE     UIQI      CC
0.1     1.6453   1.1691   6.3661   5.5738   0.98422   0.99564
0.2     1.7475   1.2770   6.7936   5.9460   0.98373   0.98983
0.3     2.1839   1.5340   8.5277   7.4752   0.97679   0.97738
0.4     3.0660   2.1112   11.997   10.538   0.95787   0.95496
0.5     4.5489   3.3488   17.805   15.660   0.91720   0.91805
0.6     6.9785   5.8739   27.301   24.028   0.83875   0.86105
0.7     11.228   10.824   43.894   38.644   0.70026   0.77827
0.8     19.989   20.423   78.087   68.756   0.48409   0.66629
0.9     46.807   39.958   182.76   160.92   0.21384   0.52810
Table 3. Results for computing derivative using an incorrect mask proposed in [21].

Order   ERGAS    SAM      RASE     RMSE     UIQI      CC
0.1     4.5152   1.3413   17.530   15.473   0.88896   0.88148
0.2     5.1704   1.5590   20.094   17.747   0.86107   0.84455
0.3     5.9243   1.8679   23.043   20.363   0.82702   0.80213
0.4     6.5676   2.2037   25.559   22.594   0.79688   0.76689
0.5     6.9541   2.4420   27.070   23.934   0.77856   0.74642
0.6     6.9822   2.4629   27.180   24.033   0.77734   0.74512
0.7     6.5961   2.2276   25.670   22.695   0.79588   0.76581
0.8     5.8131   1.8263   22.609   19.980   0.83259   0.80885
0.9     4.8188   1.4427   18.719   16.529   0.87657   0.86469
Table 4. Results for computing derivative using a corrected mask.

Order   ERGAS    SAM      RASE     RMSE     UIQI      CC
0.1     4.4760   1.3296   17.377   15.338   0.89117   0.88362
0.2     4.9592   1.4868   19.269   17.015   0.87140   0.85687
0.3     5.4032   1.6495   21.006   18.555   0.85230   0.83232
0.4     5.6730   1.7611   22.061   19.490   0.84035   0.81750
0.5     5.7194   1.7810   22.243   19.651   0.83825   0.81492
0.6     5.5455   1.7059   21.563   19.047   0.84595   0.82434
0.7     5.1971   1.5694   20.200   17.839   0.86116   0.84346
0.8     4.7644   1.4203   18.506   16.338   0.87941   0.86745
0.9     4.3864   1.3010   17.027   15.026   0.89466   0.88845
Table 5. Results for derivative based on FFT.

Order   ERGAS    SAM      RASE     RMSE     UIQI      CC
0.1     3.2273   2.1970   12.622   11.105   0.95322   0.94970
0.2     3.1091   2.0855   12.154   10.690   0.95696   0.95382
0.3     3.1971   2.0222   12.486   10.984   0.95455   0.95083
0.4     3.4573   1.9987   13.489   11.871   0.94665   0.94145
0.5     3.8598   2.0138   15.047   13.250   0.93311   0.92554
0.6     4.3800   2.0704   17.065   15.034   0.91338   0.90253
0.7     4.9985   2.1740   19.469   17.157   0.88683   0.87174
0.8     5.6996   2.3289   22.196   19.564   0.85300   0.83274
0.9     6.4703   2.5401   25.197   22.210   0.81182   0.78561
Table 6. Results for integer order masks.

Mask        ERGAS    SAM      RASE     RMSE     UIQI      CC
Sobel       23.220   20.218   90.583   80.032   0.26922   0.32230
Laplacian   12.742   5.7724   49.64    43.723   0.41921   0.35566
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Motłoch, S.; Sarwas, G.; Dzieliński, A. Fractional Derivatives Application to Image Fusion Problems. Sensors 2022, 22, 1049. https://doi.org/10.3390/s22031049
