Article

Comparison of Heuristic Algorithms in Identification of Parameters of Anomalous Diffusion Model Based on Measurements from Sensors

Rafał Brociek, Agata Wajda and Damian Słota
1 Department of Mathematics Applications and Methods for Artificial Intelligence, Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
2 Institute of Energy and Fuel Processing Technology, 41-803 Zabrze, Poland
* Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1722; https://doi.org/10.3390/s23031722
Submission received: 16 January 2023 / Revised: 30 January 2023 / Accepted: 31 January 2023 / Published: 3 February 2023
(This article belongs to the Special Issue Architectures, Protocols and Algorithms of Sensor Networks)

Abstract

In recent times, fractional calculus has gained popularity in various types of engineering applications. Very often, the mathematical model describing a given phenomenon consists of a differential equation with a fractional derivative. As numerous studies show, using the fractional derivative instead of the classical derivative allows more accurate modeling of some processes. A numerical solution of the anomalous heat conduction equation with the Riemann-Liouville fractional derivative over space is presented in this paper. First, a differential scheme is provided to solve the direct problem. Then, the inverse problem is considered, which consists in identifying the model parameters: the thermal conductivity, the order of the derivative, and the heat transfer coefficient. The data on the basis of which the inverse problem is solved are the temperature values on the right boundary of the considered region. To solve the problem, a functional describing the error of the solution is created. By determining the minimum of this functional, the unknown parameters of the model are identified. In order to find a solution, selected heuristic algorithms are presented and compared. The following meta-heuristic algorithms are described and used in the paper: Ant Colony Optimization (ACO) for continuous functions, the Butterfly Optimization Algorithm (BOA), the Dynamic Butterfly Optimization Algorithm (DBOA), and the Aquila Optimizer (AO). The accuracy of the presented algorithms is illustrated by examples.

1. Introduction

With the increase in the computing power of computers, all kinds of simulations of phenomena occurring, among others, in physics, biology and technology are gaining in importance. The considered mathematical models are more and more complicated and can be used to model various processes in nature, science and engineering. In the case of modeling anomalous diffusion processes (e.g., heat conduction in porous materials) or processes with long memory, fractional derivatives play a special role. There are many different fractional derivatives, of which the following are the most popular: Caputo, Riemann-Liouville and Riesz. The authors of study [1] present a model dedicated to the risk of corporate default, which can be described as a fractional self-exciting model. The model and methods introduced in the study were used to carry out a validation on real market data; as a result, the fractional-derivative model turned out to be the better one. Ming et al. [2] used the Caputo fractional derivative to simulate China's gross domestic product. The fractional model was compared with the model based on the classical derivative. Using the fractional derivative, the authors built a better and more precise model to predict the values of the gross domestic product of China. Other applications of fractional derivatives in modeling processes in biology can be found in article [3]. The authors presented applications of the Atangana-Baleanu fractional derivative to create models of such processes as: Newton's law of cooling, a population growth model and a blood alcohol model. In article [4], the authors used the Caputo fractional derivative to investigate and model the population dynamics between tumor cells and macrophages. The study also estimated unknown model parameters based on samples collected from a hospitalized, chemotherapy-naive patient with non-small cell lung cancer. De Gaetano et al. [5] presented a mathematical model with a fractional derivative for continuous glucose monitoring. The paper also contains the numerical solution of the considered fractional model. Based on experimental data from diabetic patients, the authors determined the order of the fractional derivative for which the model best fits the data. The research shows that the fractional derivative model fits the data better than the integer-order derivative model (both first and second order). More about fractional calculus and its applications can be found in [6,7,8].
In order to implement more and more accurate and faster computer simulations, it is necessary to improve various types of numerical methods and algorithms that solve direct and inverse problems. Solving the inverse problem makes it possible to design the process and select the input parameters of the model so that the desired output state is obtained. Such tasks are considered difficult due to the fact that they are ill-conditioned [9]. Sensor measurements often provide additional information for inverse problems. Based on these measurements, the input parameters of the model are selected and the entire process is designed. In study [10] a variational approach for reconstructing the thermal conductivity coefficient is presented. The authors also cite statements regarding the existence and uniqueness of the solution. Numerical examples are provided as well. In article [11] the solution of the inverse problem consists in identifying the coefficients of the heat conduction model based on temperature measurements from sensors. In addition, several mathematical models were compared, in particular fractional models with the classical model. In that study, parameters such as the order of the fractional derivative, the thermal conductivity and the heat transfer coefficient were identified. Considerations regarding solving the inverse problem are also included in article [12]. The authors present a deep neural network approach to the solution, in which deep-learning methods are used. This allows all free parameters and functions to be learned through training; back-propagation over the training data is one of the methods for training the deep network. More examples of inverse problems in mathematical modeling and simulations can be found in [13,14,15,16,17,18,19,20].
In this article, a mathematical model of heat conduction with the Riemann-Liouville fractional derivative is presented. In the provided model, boundary conditions of the second and third kind are adopted. Then, the solution of the direct problem is briefly described. To solve this problem, a finite difference scheme is derived. The inverse problem posed in this article consists in the reconstruction of the boundary condition of the third kind and the identification of parameters such as the order of the fractional derivative and the thermal conductivity. In the process of developing a procedure that solves the inverse problem, a fitness function is created. It describes the error of the approximate solution. In order to identify the parameters, the minimum of this function should be found. The following algorithms are used and compared to minimize the fitness function: Ant Colony Optimization (ACO), the Butterfly Optimization Algorithm (BOA), the Dynamic Butterfly Optimization Algorithm (DBOA) and the Aquila Optimizer (AO). The presented procedure has been tested on numerical examples.

2. Anomalous Diffusion Model

We consider an anomalous diffusion equation in the form of a differential equation with a fractional derivative with respect to the spatial variable:
$$ c\, \varrho\, \frac{\partial T(x,t)}{\partial t} = \hat{\lambda}\, \frac{\partial^{\beta} T(x,t)}{\partial x^{\beta}}, \quad x \in (x_L, x_R),\ t \in (0, t_{end}). \qquad (1) $$
In this approach, the considered anomalous diffusion equation describes the phenomenon of heat flow in a porous medium [11,21,22]. In Equation (1) we assume the following notations: $T\,[\mathrm{K}]$ — temperature, $x\,[\mathrm{m}]$ — spatial variable, $t\,[\mathrm{s}]$ — time, $c\,\left[\frac{\mathrm{J}}{\mathrm{kg\,K}}\right]$ — specific heat, $\varrho\,\left[\frac{\mathrm{kg}}{\mathrm{m}^3}\right]$ — density, $\beta \in (1,2)$ — order of the derivative, and $\hat{\lambda} = \hat{w}\,\lambda\,\left[\frac{\mathrm{W}}{\mathrm{m}^{3-\beta}\,\mathrm{K}}\right]$ is the scaled heat conduction coefficient, where $\hat{w}$ is a scale parameter. The heat conduction coefficient $\lambda$ had to be scaled to keep the units consistent. To Equation (1) an initial condition is added:
$$ T(x, 0) = \psi(x), \quad x \in [x_L, x_R]. \qquad (2) $$
On the left side of the spatial interval, the homogeneous boundary condition of the second kind (Neumann condition) is taken:
$$ -\lambda\, \left.\frac{\partial T(x,t)}{\partial x}\right|_{x = x_L} = 0, \quad t \in (0, t_{end}], \qquad (3) $$
and for the right boundary of the spatial interval the boundary condition of the third kind (Robin condition) is assumed:
$$ -\lambda\, \left.\frac{\partial T(x,t)}{\partial x}\right|_{x = x_R} = h(t)\,\big( T(x_R, t) - T^{\infty} \big), \quad t \in (0, t_{end}]. \qquad (4) $$
The symbols $T^{\infty}$ and $h$ appearing in Equation (4) denote the ambient temperature and the heat transfer coefficient, respectively.
In Equation (1) there is a fractional derivative with respect to space, which is defined as the Riemann-Liouville derivative [23]:
$$ \frac{\partial^{\beta} T(x,t)}{\partial x^{\beta}} = \frac{1}{\Gamma(2-\beta)}\, \frac{\partial^{2}}{\partial x^{2}} \int_{x_L}^{x} \frac{T(s,t)}{(x-s)^{\beta-1}}\, ds, \quad \beta \in (1, 2). \qquad (5) $$

3. Numerical Solution of Direct Problem

In order to solve the direct problem for the model (1)–(4), a finite difference scheme is used. The considered region is discretized by creating a mesh $S = \{(x_i, t_k):\ x_i = x_L + i\,\Delta x,\ t_k = k\,\Delta t\}$, where $\Delta x = (x_R - x_L)/M$, $\Delta t = t_{end}/K$, $i = 0, \dots, M$ and $k = 0, \dots, K$. Then the Riemann-Liouville derivative has to be approximated [23]:
$$ \frac{\partial^{\beta} T(x_i, t_k)}{\partial x^{\beta}} \approx \frac{1}{(\Delta x)^{\beta}} \sum_{j=0}^{i+1} \frac{\Gamma(j-\beta)}{\Gamma(-\beta)\,\Gamma(j+1)}\, T_{i-j+1}^{k}, \qquad (6) $$
as well as boundary conditions (3) and (4):
$$ -\lambda_{0}\, \frac{-T_{2}^{k+1} + 4\,T_{1}^{k+1} - 3\,T_{0}^{k+1}}{2\,\Delta x} = q^{k+1}, \qquad (7) $$
$$ -\lambda_{M}\, \frac{T_{M-2}^{k+1} - 4\,T_{M-1}^{k+1} + 3\,T_{M}^{k+1}}{2\,\Delta x} = h^{k+1}\,\big( T_{M}^{k+1} - T^{\infty} \big), \qquad (8) $$
where $T^{\infty}$ is the ambient temperature, $T_i^k$ is the approximate value of the function $T$ at the point $(x_i, t_k)$, $h$ is a function describing the heat transfer coefficient (so that $h^{k+1} = h(t_{k+1})$), and $q^{k+1}$ denotes the heat flux on the left boundary, which is equal to zero here due to the homogeneous condition (3). Using Equations (6)–(8), we obtain a difference scheme (a system of equations). By solving this system, the values of the function $T$ are determined at the mesh points.
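To make the discretization more concrete, the sketch below assembles and solves the scheme (6)–(8) in Python. It is a minimal illustration, not the authors' code: it assumes an implicit (backward Euler) time discretization, which the paper does not specify, computes the weights from Equation (6) with the stable recurrence $g_0 = 1$, $g_j = g_{j-1}(j-1-\beta)/j$, uses the same coefficient in the interior operator and in the boundary condition (i.e., it ignores the scale factor $\hat{w}$), and takes the example data of Section 6 as default values. All function and parameter names are hypothetical.

```python
import numpy as np

def gl_weights(beta, n):
    # Weights g_j = Gamma(j - beta) / (Gamma(-beta) * Gamma(j + 1)) from Equation (6),
    # computed with the recurrence g_0 = 1, g_j = g_{j-1} * (j - 1 - beta) / j.
    g = np.empty(n)
    g[0] = 1.0
    for j in range(1, n):
        g[j] = g[j - 1] * (j - 1 - beta) / j
    return g

def solve_direct(lam, beta, h_fun, c=900.0, rho=2106.0, T_inf=298.0, T0=573.15,
                 xL=0.0, xR=3.825, t_end=71.82, M=100, K=1995):
    # Returns the temperature at the right boundary x_R for all time levels.
    dx = (xR - xL) / M
    dt = t_end / K
    g = gl_weights(beta, M + 2)
    T = np.full(M + 1, T0)                      # initial condition (2)
    T_right = np.empty(K + 1)
    T_right[0] = T[-1]
    r = lam * dt / (c * rho * dx**beta)
    for k in range(1, K + 1):
        A = np.eye(M + 1)
        b = T.copy()
        # interior nodes: backward Euler in time, shifted GL operator (6) in space
        for i in range(1, M):
            for j in range(i + 2):
                A[i, i - j + 1] -= r * g[j]
        # left boundary (7): homogeneous Neumann condition, one-sided difference
        A[0, :] = 0.0
        A[0, [0, 1, 2]] = [-3.0, 4.0, -1.0]
        b[0] = 0.0
        # right boundary (8): Robin condition with heat transfer coefficient h(t)
        hk = h_fun(k * dt)
        A[M, :] = 0.0
        A[M, [M - 2, M - 1, M]] = [-lam, 4.0 * lam, -3.0 * lam - 2.0 * dx * hk]
        b[M] = -2.0 * dx * hk * T_inf
        T = np.linalg.solve(A, b)
        T_right[k] = T[-1]
    return T_right
```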

4. Inverse Problem and the Procedure for Its Solution

The problem considered in this article is the inverse problem. It consists in establishing the input parameters of the model in a way that allows the temperature at the boundary to match the measurements from the sensors. The identified parameters are: the thermal conductivity $\hat{\lambda}$, the order of the derivative $\beta$, and the heat transfer function $h$ in the form of a second-degree polynomial. In the presented approach, after solving the direct problem for fixed values of the unknown parameters, we obtain an approximation of $T$ and compare it with the measurement data. In this way, the fitness function is created:
$$ F(\hat{\lambda}, \beta, h) = \sum_{j=1}^{N} \big( T_j(\hat{\lambda}, \beta, h) - T_j^{m} \big)^{2}, \qquad (9) $$
where $N$ is the number of measurements, $T_j(\hat{\lambda}, \beta, h)$ are the temperature values at the measurement point calculated from the model, and $T_j^{m}$ are the measurements from the sensors. To find the minimum of function (9), we use the selected metaheuristic algorithms described in Section 5.
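For illustration, one possible realization of the fitness function (9) is sketched below. It reuses the hypothetical solve_direct routine from Section 3, parameterizes the heat transfer function as the second-degree polynomial $h(t) = a t^2 + b t + c$ used in the inverse problem, and interpolates the model output onto the measurement times; the names and the interpolation step are assumptions, not the authors' implementation.

```python
import numpy as np

def fitness(params, t_meas, T_meas):
    # params = (lambda_hat, beta, a, b, c), with h(t) = a*t**2 + b*t + c
    lam, beta, a, b, c = params
    h_fun = lambda t: a * t**2 + b * t + c
    T_model = solve_direct(lam, beta, h_fun)            # temperature at x_R on the time grid
    t_grid = np.linspace(0.0, 71.82, len(T_model))
    T_at_meas = np.interp(t_meas, t_grid, T_model)      # align model output with sensor times
    return np.sum((T_at_meas - T_meas) ** 2)            # Equation (9)
```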

5. Meta-Heuristic Algorithms

In this section, we present the selected metaheuristic algorithms for finding the minimum of a function. These algorithms are: Ant Colony Optimization (ACO) for continuous function optimization, the Butterfly Optimization Algorithm (BOA) together with its dynamic variant (DBOA), and the Aquila Optimizer (AO).

5.1. ACO for Continuous Function Optimization

The inspiration for the creation of this minimum-search algorithm was the observation of the habits of ants while searching for food. In the first stage, the ants randomly search the area around their nest. In the process of foraging for food, ants secrete a chemical called a pheromone. Thanks to this substance, the ants can communicate with each other. The amount of the secreted substance depends on the amount of food found. If an ant has successfully found a food source, its next step is to return to the nest with a food sample. The animal leaves a pheromone trail that allows other ants to find the food source. This mechanism was adapted to create the ACO algorithm for continuous function optimization [24]. More on the algorithm and its applications can be found, among others, in articles [25,26,27,28].
There are three main parts to the algorithm:
  • Solution (pheromone) representation. Points from the search area $\mathbb{R}^n$ are identified with pheromone patches. In other words, a pheromone spot plays the role of a solution. Thus, the k-th pheromone spot (or approximate solution) can be represented as $x^k = (x_1^k, x_2^k, \dots, x_n^k)$. Each solution (pheromone spot) has its quality calculated on the basis of the fitness function $F(x^k)$. In each iteration of the algorithm, we store a fixed number of pheromone spots in the set of solutions (established at the start of the algorithm).
  • Transformation of the solution by the ant. The procedure of constructing a new solution consists, in the first place, in choosing one of the current solutions (pheromone spots) with a certain probability. The quality of the solution is the factor that determines this probability: as the quality of a solution increases, so does the probability of its selection. In this paper, the following formula is adopted to calculate the (rank-based) probability of the k-th solution:
    $$ p_k = \frac{\omega_k}{\sum_{j=1}^{L} \omega_j}, \qquad (10) $$
    where $L$ denotes the number of all pheromone spots, and $\omega_k$ is expressed by the formula:
    $$ \omega_k = \frac{1}{q L \sqrt{2\pi}}\, e^{-\frac{(\mathrm{rank}(k) - 1)^{2}}{2 (q L)^{2}}}. \qquad (11) $$
    The symbol $\mathrm{rank}(k)$ in Equation (11) denotes the rank of the k-th solution in the set of solutions. The parameter $q$ narrows the search area. For a small value of $q$, the choice of the best solutions is preferred; the greater $q$ is, the closer the probabilities of choosing each of the solutions become. After choosing the k-th solution, Gaussian sampling is performed using the formula:
    $$ g(x, \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}}}, \qquad (12) $$
    where $\mu = x_i^k$ is the i-th coordinate of the k-th solution and $\sigma = \frac{\xi}{L-1} \sum_{j=1}^{L} |x_i^j - x_i^k|$ is the average distance between the chosen k-th solution and all the other solutions, scaled by the parameter $\xi$.
  • Pheromone spots update. In each iteration of the ACO algorithm, $M$ new solutions are created ($M$ denotes the number of ants). These solutions are included in the solution set, so that in total there are $L + M$ pheromone spots in the set. Then the spots (solutions) are sorted by quality and the $M$ worst solutions are removed. Thus, the solution set always has a fixed number of elements equal to $L$.
The pseudocode of the ACO algorithm for continuous function optimization is presented in Algorithm 1.
Algorithm 1 Pseudocode of ACO algorithm.
1:
                Initialization part.
2: Configuration of the ACO algorithm parameters.
3: Initialization of the starting population { x^1, x^2, …, x^L } in a random way.
4: Calculation of the value of the fitness function F for all pheromone spots and sorting them according to their rank (quality).
5:
                Iterative part.
6: for iteration i = 1, 2, …, I do
7:  Assignment of probabilities to the pheromone spots according to Equation (10).
8:  for ant m = 1, 2, …, M do
9:    The ant chooses the k-th (k = 1, 2, …, L) solution with probability p_k.
10:    for coordinate j = 1, 2, …, n do
11:      Using the probability density function (12) in the sampling process, the ant changes the j-th coordinate of the k-th solution.
12:    end for
13:  end for
14:  Calculation of the value of the fitness function F for the M new solutions.
15:  Adding the M new solutions to the archive of old solutions, sorting the archive by quality, and then rejecting the M worst solutions.
16: end for
17: return the best solution x^best.
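A compact Python sketch of the above procedure (archive-based ACO for continuous domains, following Equations (10)–(12)) is given below. It is an illustration only: the parameter values (L, M, q, ξ, the iteration count) and the function name are assumptions, not the settings used by the authors.

```python
import numpy as np

def aco_continuous(F, bounds, L=50, M=40, q=0.1, xi=0.85, iterations=200, rng=None):
    # Archive-based ACO for continuous domains: L pheromone spots, M ants per iteration.
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    n = len(lo)
    archive = rng.uniform(lo, hi, size=(L, n))
    fit = np.apply_along_axis(F, 1, archive)
    for _ in range(iterations):
        order = np.argsort(fit)                      # rank the archive by quality
        archive, fit = archive[order], fit[order]
        ranks = np.arange(1, L + 1)
        w = np.exp(-(ranks - 1) ** 2 / (2 * (q * L) ** 2)) / (q * L * np.sqrt(2 * np.pi))
        p = w / w.sum()                              # selection probabilities, Eqs. (10)-(11)
        ants = np.empty((M, n))
        for m in range(M):
            k = rng.choice(L, p=p)                   # the ant picks a pheromone spot
            for j in range(n):
                sigma = xi * np.abs(archive[:, j] - archive[k, j]).sum() / (L - 1)
                ants[m, j] = rng.normal(archive[k, j], sigma)   # Gaussian sampling, Eq. (12)
        ants = np.clip(ants, lo, hi)
        ant_fit = np.apply_along_axis(F, 1, ants)
        archive = np.vstack([archive, ants])         # merge and keep the best L spots
        fit = np.concatenate([fit, ant_fit])
        keep = np.argsort(fit)[:L]
        archive, fit = archive[keep], fit[keep]
    best = np.argmin(fit)
    return archive[best], fit[best]
```

In the context of Section 4 such a routine would be called, for example, as aco_continuous(lambda p: fitness(p, t_meas, T_meas), bounds), with bounds covering $\hat{\lambda}$, $\beta$ and the three polynomial coefficients of $h$.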

5.2. Dynamic Butterfly Optimization Algorithm

Another of the presented heuristic algorithms is an improved version of the Butterfly Optimization Algorithm (BOA), namely the Dynamic Butterfly Optimization Algorithm (DBOA) [29].
In order to communicate, search for food, find a mate, and escape from predators, butterflies use the senses of smell, taste and touch. The most important of these senses is smell. Thanks to the sense of smell, butterflies locate food sources. Sensory receptors, called chemoreceptors, are scattered all over the body of a butterfly (e.g., on the legs).
Scientists studying the life of butterflies have noticed that these animals locate the source of a fragrance with great precision. In addition, they can distinguish fragrances and recognize their intensity. These observations were the inspiration for the development of the Butterfly Optimization Algorithm (BOA) [30]. Each butterfly emits a specific fragrance of a given intensity. Spreading the fragrance allows other butterflies to recognize it and then communicate with each other. In this way, a "collective knowledge network" is created. The global optimum search algorithm is based on the ability of butterflies to sense the fragrance. If a butterfly cannot sense the fragrance of the environment, its movement is random.
The key concept is the fragrance and the way it is received and processed. The modeling of the sensory modality (fragrance) is based on the following parameters: the stimulus intensity (I), the sensory modality (c) and the power exponent (a). In BOA, the stimulus intensity I is associated with the value of the fitness function. Hence, the more fragrance a butterfly emits (the better the quality of its solution), the easier it is for other butterflies in the environment to sense it and be attracted to it. This relationship is described as follows:
$$ f = c\, I^{a}, \qquad (13) $$
where $f$ denotes the fragrance, $c$ is the sensory modality, $I$ denotes the stimulus intensity, and $a$ is the power exponent, which depends on the modality. In this article, we assume values of the parameters $a$ and $c$ in the range $[0, 1]$. The parameter $a$ is a modality-dependent power exponent; it models the variability of absorption, and its value may decrease in subsequent iterations. Thus, the parameter $a$ can control the behavior and the convergence of the algorithm. The parameter $c$ is also important for the operation of BOA. In theory $c \in [0, \infty)$, while in practice it is assumed that $c \in [0, 1]$. The values of $a$ and $c$ have a significant impact on the speed of the algorithm. Considering this, an important step here is the appropriate selection of these parameters, which should be carried out for the optimization task at hand.
In the BOA we can distinguish the following stages:
  • Butterflies in the considered environment emit fragrances that differ in intensity, which results from the quality of the solution. Communication between these animals takes place through sensing the emitted fragrances.
  • There are two ways of movement of a butterfly, namely: towards a more intense fragrance emitted by another butterfly and in a random direction.
  • Global search is represented by:
    $$ x^{new} = x^{old} + \big( r^{2}\, x^{best} - x^{old} \big)\, f, \qquad (14) $$
    where $x^{old}$ is the position of the butterfly (agent) before the move, $x^{new}$ is the position of the butterfly after the move, $x^{best}$ is the position of the best butterfly in the current population, $f$ is the fragrance of the butterfly $x^{old}$, and $r$ denotes a number from the range $[0, 1]$ selected in a random way.
  • Local search move is formulated by:
    $$ x^{new} = x^{old} + \big( r^{2}\, x^{r_1} - x^{r_2} \big)\, f, \qquad (15) $$
    where $x^{r_1}$, $x^{r_2}$ are randomly selected butterflies from the population.
At the end of each iteration modifying the population of agents (butterflies), a local search algorithm based on the mutation operator (LSAM) is run. This is a significant modification compared with BOA. In this article, the operation of LSAM consists in the selection of several individuals (solutions) and their transformation with the use of the mutation operator. If a better solution is obtained after the mutation, it replaces the old one. The LSAM algorithm is presented as pseudocode in Algorithm 2. More information regarding the applications of the butterfly algorithm can be found in [31,32,33].
Algorithm 2 Pseudocode of LSAM operator.
1: x^r — a random solution among the top half of the best agents in the population (obtained from BOA).
2: Fit_r = F(x^r) — value of the fitness function for x^r.
3: I — number of iterations, ξ — mutation rate.
4:
                  Iterative part.
5: for iteration i = 1, 2, …, I do
6:  Calculate: x^new = Mutate(x^r, ξ), Fit_new = F(x^new).
7:  if Fit_new < Fit_r then
8:    x^r = x^new, Fit_r = Fit_new.
9:  else
10:    Set a random solution x^rnd from the population, but not x^r.
11:    Compute the fitness function Fit_rnd = F(x^rnd).
12:    if Fit_new < Fit_rnd then
13:      x^rnd = x^new
14:    end if
15:  end if
16: end for
Algorithm 2 includes the process of transforming the individual coordinates of the solution $x = (x_1, x_2, \dots, x_n)$ with the use of the mutation operator. The transformation consists in drawing a number from the normal distribution and replacing the old coordinate with the new one. For the j-th coordinate we use the normal distribution:
$$ x_j^{new} \sim \mathcal{N}(x_j, \sigma), \qquad (16) $$
where $x_j$ is the mean and $\sigma = 0.1\,(u_B - l_B)$ is the standard deviation; $l_B$ and $u_B$ denote the lower and upper bounds of the coordinate. Algorithm 3 presents the pseudocode of DBOA.
Algorithm 3 Pseudocode of DBOA.
1:
                 Initialization part.
2: Determine the parameters of the BOA algorithm: N — number of butterflies in the population, n — dimension, c — sensory modality, and the parameters a, ξ, p.
3: Randomly generate the starting population { x^1, x^2, …, x^N }.
4: Calculate the value of the fitness function F (hence the stimulus intensity I = F) for each butterfly x^k (k = 1, 2, …, N) in the population.
5:
             Iterative part.
6: for iteration i = 1, 2, …, I do
7:   for k = 1, 2, …, N do
8:     Calculate the value of the fragrance for x^k with the use of Equation (13).
9:   end for
10:  Set the best agent x^best among the butterflies.
11:  for k = 1, 2, …, N do
12:    Set a random number r from the range [0, 1].
13:    if r < p then
14:      Convert solution x^k in accordance with Equation (14).
15:    else
16:      Convert solution x^k in accordance with Equation (15).
17:    end if
18:  end for
19:  Change the value of the parameter a.
20:  Apply the LSAM operator (Algorithm 2) to the population of agents with mutation rate ξ.
21: end for
22: return x^best.
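A minimal Python sketch of DBOA, i.e., the BOA moves (14)–(15) followed by the LSAM step of Algorithm 2, is given below. It is an illustration under assumed settings: the parameter values, the schedule for the exponent a, and the way the mutation rate ξ is handled are not taken from the paper, and the function names are hypothetical.

```python
import numpy as np

def lsam(pop, fit, F, lo, hi, iters=10, rng=None):
    # LSAM operator (Algorithm 2): mutate a solution drawn from the better half.
    rng = np.random.default_rng(rng)
    N = len(pop)
    r_idx = rng.choice(np.argsort(fit)[: N // 2])
    x_r, fit_r = pop[r_idx].copy(), fit[r_idx]
    sigma = 0.1 * (hi - lo)                          # standard deviation from Eq. (16)
    for _ in range(iters):
        x_new = np.clip(rng.normal(x_r, sigma), lo, hi)   # per-coordinate Gaussian mutation
        fit_new = F(x_new)
        if fit_new < fit_r:
            x_r, fit_r = x_new, fit_new
        else:
            rnd = rng.integers(0, N)                 # another random solution
            if rnd != r_idx and fit_new < fit[rnd]:
                pop[rnd], fit[rnd] = x_new, fit_new
    pop[r_idx], fit[r_idx] = x_r, fit_r
    return pop, fit

def dboa(F, bounds, N=40, iterations=200, c=0.01, a=0.1, p=0.8, rng=None):
    # DBOA sketch: BOA global/local moves plus the LSAM step in every iteration.
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(N, len(lo)))
    fit = np.apply_along_axis(F, 1, pop)
    for _ in range(iterations):
        best = pop[np.argmin(fit)].copy()
        frag = c * np.abs(fit) ** a                  # fragrance, Eq. (13), with I = F
        for k in range(N):
            r = rng.random()
            if r < p:                                # global search move, Eq. (14)
                pop[k] = pop[k] + (r**2 * best - pop[k]) * frag[k]
            else:                                    # local search move, Eq. (15)
                r1, r2 = rng.integers(0, N, size=2)
                pop[k] = pop[k] + (r**2 * pop[r1] - pop[r2]) * frag[k]
            pop[k] = np.clip(pop[k], lo, hi)
        fit = np.apply_along_axis(F, 1, pop)
        pop, fit = lsam(pop, fit, F, lo, hi, rng=rng)
        a = a * 0.99                                 # adjust the exponent a (arbitrary schedule)
    k_best = np.argmin(fit)
    return pop[k_best], fit[k_best]
```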

5.3. Aquila Optimizer

Another of the considered algorithms is the Aquila Optimizer (AO). This algorithm is a mathematical representation of the hunting behavior of the birds of the genus Aquila (eagles belonging to the hawk family Accipitridae). Four main techniques can be distinguished in the way these predators hunt:
  • Expanded exploration. When the predator is high in the air and wants to hunt other birds, it tilts into a vertical dive. After locating the victim from a height, the Aquila begins nosediving with increasing speed. We can express this behavior with the use of the following equation:
    $$ x^{new} = x^{best}\left(1 - \frac{i}{I}\right) + \big( x^{mean} - r_d\, x^{best} \big), \qquad (17) $$
    where $x^{new}$ is the solution after the transformation, $x^{best}$ is the best solution so far and symbolizes the position of the prey, $i$ is the current iteration, $I$ is the maximum number of iterations, and $r_d$ is a random number from $[0, 1]$. In this case, $x^{best}$ can also be regarded as the optimization goal (the approximate solution). The vector $x^{mean}$ is the mean of all solutions in the population:
    $$ x^{mean} = \frac{1}{N} \sum_{k=1}^{N} x^{k}. \qquad (18) $$
  • Narrowed exploration. This technique involves circling the prey in flight and preparing to descend and attack it. It is also known as short stroke contour flight. It is described in the algorithm by the equation:
    $$ x^{new} = \mathrm{Levy}(D)\, x^{best} + x^{random} + r_d\,\big( r \cos\phi - r \sin\phi \big), \qquad (19) $$
    where $x^{new}$ and $x^{best}$ denote the same as in the expanded exploration step, $x^{random}$ is a random solution from the population, and $r_d$ is a random number from the interval $[0, 1]$. The term $r \cos\phi - r \sin\phi$ simulates the spiral flight of the Aquila. The expression $\mathrm{Levy}(D)$ is a random value of the Levy flight distribution:
    $$ \mathrm{Levy}(D) = s\, \frac{u\, \sigma}{|v|^{\frac{1}{\beta}}}, \qquad (20) $$
    where $s$, $\beta$ are constants, $u$, $v$ denote random numbers from the range $[0, 1]$, and $\sigma$ is formulated as follows [34]:
    $$ \sigma = \frac{\Gamma(1+\beta)\, \sin\!\left(\frac{\pi \beta}{2}\right)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\, \beta\, 2^{\frac{\beta - 1}{2}}}. \qquad (21) $$
    In the above equation, $\Gamma$ denotes the gamma function. In order to determine the values of the parameters $r$ and $\phi$, the following formulas are used:
    $$ r = r_1 + V\, D_1, \qquad \phi = -\xi\, D_1 + \frac{3\pi}{2}, \qquad (22) $$
    where $r_1$ is a fixed integer from $\{1, 2, \dots, 30\}$, $V$, $\xi$ are small constants, and $D_1$ is an integer from $\{1, 2, \dots, n\}$.
  • Expanded exploitation. This hunting technique begins with a vertical attack on the prey, whose location is known approximately; this approximation defines the search area. Thanks to this information, the Aquila gets as close to its prey as possible. It can be described as follows:
    $$ x^{new} = \alpha\,\big( x^{best} - x^{mean} \big) - r_d + \delta\,\big( r_d\,( u_B - l_B ) + l_B \big), \qquad (23) $$
    where $x^{new}$ is the solution after the transformation, $x^{best}$ is the best solution found so far, and $x^{mean}$ is the mean solution of the whole population determined with the use of formula (18). As before, $r_d$ denotes a random number from the range $[0, 1]$, while $l_B$, $u_B$ are the lower and upper bounds, and $\alpha$ and $\delta$ are constant parameters regulating the exploitation.
  • Narrowed exploitation. The characteristic feature of this technique is the stochastic movement of the bird, which attacks the prey in close proximity. It can be described by the formula:
    $$ x^{new} = QF\, x^{best} - G_1\, r_d\, x^{mean} - G_2\, \mathrm{Levy}(D) + r_d\, G_1, \qquad (24) $$
    where $x^{new}$ denotes the solution after the transformation and $QF$ is the quality function:
    $$ QF = i^{\frac{2 r_d - 1}{(1 - I)^{2}}}. \qquad (25) $$
    $G_1$ and $G_2$ are described by:
    $$ G_1 = 2 r_d - 1, \qquad G_2 = 2 \left( 1 - \frac{i}{I} \right). \qquad (26) $$
    We can adjust the algorithm with the above parameters.
The Aquila's food-gathering behavior consists of the four hunting techniques described above. The formulas (17)–(26) describing these four transformations constitute the AO algorithm. Algorithm 4 describes the implementation of the AO algorithm. More about the Aquila Optimizer can be found in [34,35].
Algorithm 4 Pseudocode of AO.
1:
                 Initialization part.
2: Set up the parameters of the AO algorithm.
3: Initialize the population { x^1, x^2, …, x^N } in a random way.
4:
                  Iterative part.
5: for iteration i = 1, 2, …, I do
6:  Determine the values of the fitness function F for each agent in the population.
7:  Establish the best solution x^best in the population.
8:  for k = 1, 2, …, N do
9:    Calculate the mean solution x^mean in the population.
10:    Update the parameters G_1, G_2, QF of the algorithm.
11:    if i ≤ (2/3)·I then
12:      if r_d < 0.5 then
13:        Perform the expanded exploration step (17) by updating solution x^k.
14:        As a result, the solution x^{new,k} is obtained.
15:        if F(x^{new,k}) < F(x^k) then make the substitution x^k = x^{new,k}
16:        end if
17:        if F(x^{new,k}) < F(x^best) then make the substitution x^best = x^{new,k}
18:        end if
19:      else
20:        Perform the narrowed exploration step (19) by updating solution x^k.
21:        As a result, the solution x^{new,k} is obtained.
22:        if F(x^{new,k}) < F(x^k) then make the substitution x^k = x^{new,k}.
23:        end if
24:        if F(x^{new,k}) < F(x^best) then make the substitution x^best = x^{new,k}.
25:        end if
26:      end if
27:    else
28:      if r_d < 0.5 then
29:        Perform the expanded exploitation step (23) by updating solution x^k.
30:        As a result, the solution x^{new,k} is obtained.
31:        if F(x^{new,k}) < F(x^k) then make the substitution x^k = x^{new,k}.
32:        end if
33:        if F(x^{new,k}) < F(x^best) then make the substitution x^best = x^{new,k}.
34:        end if
35:      else
36:        Perform the narrowed exploitation step (24) by updating solution x^k.
37:        As a result, the solution x^{new,k} is obtained.
38:        if F(x^{new,k}) < F(x^k) then make the substitution x^k = x^{new,k}.
39:        end if
40:        if F(x^{new,k}) < F(x^best) then make the substitution x^best = x^{new,k}.
41:        end if
42:      end if
43:    end if
44:  end for
45: end for
46: return x^best.
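For completeness, a Python sketch of the AO moves (17), (19), (23) and (24) is given below. It is illustrative only: the constants inside the Levy flight and the spiral (s, the Levy exponent, V, ξ), the parameter values α and δ, and the function name are assumptions taken for the sake of a runnable example, not the authors' settings.

```python
import math
import numpy as np

def aquila_optimizer(F, bounds, N=40, iterations=200, alpha=0.1, delta=0.1, rng=None):
    # Sketch of the Aquila Optimizer with greedy acceptance as in Algorithm 4.
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    n = len(lo)
    pop = rng.uniform(lo, hi, size=(N, n))
    fit = np.apply_along_axis(F, 1, pop)

    def levy(size, beta=1.5, s=0.01):
        # Levy flight value, Eqs. (20)-(21); beta and s are assumed constants
        num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma = num / den
        u, v = rng.random(size), rng.random(size)
        return s * u * sigma / (np.abs(v) ** (1 / beta) + 1e-12)

    for i in range(1, iterations + 1):
        best = pop[np.argmin(fit)].copy()
        mean = pop.mean(axis=0)                      # Eq. (18)
        G1 = 2 * rng.random() - 1                    # Eq. (26)
        G2 = 2 * (1 - i / iterations)
        QF = i ** ((2 * rng.random() - 1) / (1 - iterations) ** 2)   # Eq. (25)
        for k in range(N):
            rd = rng.random()
            if i <= (2 / 3) * iterations:            # exploration phase
                if rd < 0.5:                         # expanded exploration, Eq. (17)
                    new = best * (1 - i / iterations) + (mean - rd * best)
                else:                                # narrowed exploration, Eq. (19)
                    D1 = rng.integers(1, n + 1)
                    r = rng.integers(1, 31) + 0.00565 * D1   # Eq. (22), assumed V
                    phi = -0.005 * D1 + 3 * math.pi / 2      # Eq. (22), assumed xi
                    xr = pop[rng.integers(0, N)]
                    new = levy(n) * best + xr + rd * (r * math.cos(phi) - r * math.sin(phi))
            else:                                    # exploitation phase
                if rd < 0.5:                         # expanded exploitation, Eq. (23)
                    new = alpha * (best - mean) - rd + delta * (rd * (hi - lo) + lo)
                else:                                # narrowed exploitation, Eq. (24)
                    new = QF * best - G1 * rd * mean - G2 * levy(n) + rd * G1
            new = np.clip(new, lo, hi)
            fnew = F(new)
            if fnew < fit[k]:
                pop[k], fit[k] = new, fnew
    k_best = np.argmin(fit)
    return pop[k_best], fit[k_best]
```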

6. Numerical Example and Test of Algorithms

In this section, we present a numerical example illustrating the effectiveness of the algorithms described above. On this basis, the algorithms are compared with each other on the inverse problem for the heat flow model. As described in Section 4, the unknown model parameters that need to be identified are: $\hat{\lambda}$ — the thermal conductivity, $\beta$ — the order of the derivative, and $h$ — the heat transfer function. Temperature measurements on the right boundary of the considered region (Figure 1) are the additional data necessary to solve the inverse problem. The process should be modeled in a way that allows the temperature values obtained from the mathematical model to be adjusted to the measurement data. The calculations in the inverse problem are performed on a grid with $100 \times 1995$ steps in space and time.
In the considered example, the following data are assumed in the model (1)–(4):
$$ x_L = 0, \quad x_R = 3.825, \quad t_{end} = 71.82, \quad c = 900, $$
$$ \varrho = 2106, \quad T^{\infty} = 298, \quad \psi(x) = 573.15. $$
The verification of the heuristic algorithms is carried out by comparing the values of the searched parameters obtained from the solution of the inverse problem with their exact values. The exact values of the searched parameters are presented below:
$$ \hat{\lambda} = 184, \quad \beta = 1.08, \quad h(t) = 2.42\,t^{2} - 5\,t + 78.07. $$
In the case of the heat transfer function, the error between the exact function $h$ and the reconstructed function $\hat{h}$ is defined by the following formula:
$$ \delta(h, \hat{h}) = \int_{0}^{t_{end}} \frac{|h(t) - \hat{h}(t)|}{|h(t)|}\, dt. \qquad (27) $$
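As a quick consistency check of the error measure (27), the short snippet below evaluates it numerically for the exact $h$ and the $\hat{h}$ reconstructed by DBOA (coefficients taken from Table 1); the quadrature grid is an arbitrary choice, and the result is close to the $\delta$ value reported in the table.

```python
import numpy as np

t = np.linspace(0.0, 71.82, 20001)
h_exact = 2.42 * t**2 - 5.0 * t + 78.07         # exact heat transfer function
h_dboa = 2.42 * t**2 - 7.76 * t + 94.46         # DBOA reconstruction from Table 1
f = np.abs(h_exact - h_dboa) / np.abs(h_exact)
delta = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))   # trapezoidal rule for Equation (27)
print(round(delta, 2))                           # approximately 2.4, cf. Table 1
```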
In Table 1 the results obtained for the individual algorithms are presented. Evaluating the tested algorithms according to the criterion of the value of the fitness function F (9), the DBOA algorithm turned out to be the most suitable: the value of the fitness function for this algorithm is significantly lower than in the other cases. Also, the reconstruction errors of the parameters λ and h are the smallest for the DBOA algorithm. Second place belongs to the ACO algorithm. Based on the results, it can be seen that minimizing the fitness function is difficult and the inverse problem is ill-posed. The value of the fitness function (9) depends strongly on changes in the values of the searched parameters.
Figure 2 and Figure 3 present the graphs of the exact function h and of the reconstructed function obtained from solving the inverse problem. For the ACO and DBOA algorithms, the reconstruction of the function h is satisfactory: the reconstructed function matches the exact function well. The reconstruction looks somewhat worse in the case of the AO and BOA algorithms. Especially in the latter case, the green line (reconstructed h) diverges from the blue line (exact function h).
Afterwards, we compare the reconstructed temperature values at the measurement points with the measurement data. Table 2 shows that the best results are obtained for the DBOA algorithm and the worst for the BOA algorithm. In general, these errors are not large. Hence, it can be concluded that the reconstructed temperature is well matched to the measurement data, but also that the posed problems are ill-posed and difficult to minimize.
An important criterion in evaluating the obtained results is the match between the temperature values at the measurement points and the measurement data. Figure 4 and Figure 5 show the graphs of the reconstructed temperature together with the measurement data for each of the algorithms. As can be seen, the reconstructed temperature values are well matched to the measurement data, despite the fact that the reconstructed values of the searched parameters λ and h differ significantly between the considered algorithms. This indicates that the graph of the objective function is flat in the vicinity of the exact solution. Thus, the considered inverse problem is difficult to solve, and the found solution (reconstructed parameter values) may contain significant errors.

7. Conclusions

The paper presents the inverse problem of heat flow, consisting in identifying the parametric data of the model from given temperature measurements. The unknown parameters of the model are: the thermal conductivity, the order of the fractional derivative and the heat transfer function. To solve the inverse problem, the function describing the error of the approximate solution has to be minimized. Four meta-heuristic algorithms were used and compared: ACO, DBOA, AO and BOA. DBOA turned out to be the best in terms of the value of the minimized function. In the case of DBOA, the value of the minimized function was 0.45, which is a satisfactory result. For the other algorithms, these values were much higher: ACO — 273, BOA — 2501 and AO — 482. DBOA also turned out to be the best in terms of the errors of reconstructing the model parameters and of fitting the reconstructed temperature to the measurement data. In the case of DBOA, the mean error of reconstructing the temperature at the measurement points is equal to 0.0131, while for the other algorithms this error was of the order of $10^{-1}$. The considered problems turned out to be difficult to solve. The graph of the fitness function is very flat in the vicinity of the searched solution. Thus, even significant differences in the values of the reconstructed parameters have little impact on the differences in the values of the fitness function.

Author Contributions

Conceptualization, R.B. and D.S.; methodology, R.B. and D.S.; software, R.B.; validation, D.S., A.W. and R.B.; formal analysis, D.S.; investigation, R.B. and A.W.; writing—original draft preparation, A.W. and R.B.; writing—review and editing, A.W.; supervision, D.S.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ketelbuters, J.J.; Hainaut, D. CDS pricing with fractional Hawkes processes. Eur. J. Oper. Res. 2022, 297, 1139–1150. [Google Scholar] [CrossRef]
  2. Ming, H.; Wang, J.; Fečkan, M. The Application of Fractional Calculus in Chinese Economic Growth Models. Appl. Math. Comput. 2019, 7, 665. [Google Scholar] [CrossRef]
  3. Bas, E.; Ozarslan, R. Real world applications of fractional models by Atangana–Baleanu fractional derivative. Chaos Solitons Fractals 2018, 116, 121–125. [Google Scholar] [CrossRef]
  4. Ozkose, F.; Yılmaz, S.; Yavuz, M.; Ozturk, I.; Şenel, M.T.; Bağcı, B.S.; Dogan, M.; Onal, O. A Fractional Modeling of Tumor–Immune System Interaction Related to Lung Cancer with Real Data. Eur. Phys. J. Plus 2022, 137, 1–28. [Google Scholar] [CrossRef]
  5. De Gaetano, A.; Sakulrang, S.; Borri, A.; Pitocco, D.; Sungnul, S.; Moore, E.J. Modeling continuous glucose monitoring with fractional differential equations subject to shocks. J. Theor. Biol. 2021, 526, 110776. [Google Scholar] [CrossRef] [PubMed]
  6. Singh, H.; Kumar, D.; Baleanu, D. Methods of Mathematical Modelling. Fractional Differential Equations; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  7. Meng, R. Application of Fractional Calculus to Modeling the Non-Linear Behaviors of Ferroelectric Polymer Composites: Viscoelasticity and Dielectricity. Membranes 2021, 11, 409. [Google Scholar] [CrossRef] [PubMed]
  8. Maslovskaya, A.; Moroz, L. Time-fractional Landau–Khalatnikov model applied to numerical simulation of polarization switching in ferroelectrics. Nonlinear Dyn. 2023, 111, 4543–4557. [Google Scholar] [CrossRef]
  9. Kaipio, J.; Somersalo, E. Statistical and Computational Inverse Problems; Springer: New York, NY, USA, 2005. [Google Scholar]
  10. Marinov, T.; Marinova, R. An inverse problem solution for thermal conductivity reconstruction. Wseas Trans. Syst. 2021, 20, 187–195. [Google Scholar] [CrossRef]
  11. Brociek, R.; Słota, D.; Król, M.; Matula, G.; Kwaśny, W. Comparison of mathematical models with fractional derivative for the heat conduction inverse problem based on the measurements of temperature in porous aluminum. Int. J. Heat Mass Transf. 2019, 143, 118440. [Google Scholar] [CrossRef]
  12. Liang, D.; Cheng, J.; Ke, Z.; Ying, L. Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks. IEEE Signal Process. Mag. 2020, 37, 141–151. [Google Scholar] [CrossRef] [PubMed]
  13. Drezet, J.M.; Rappaz, M.; Grün, G.U.; Gremaud, M. Determination of thermophysical properties and boundary conditions of direct chill-cast aluminum alloys using inverse methods. Metall. Mater. Trans. A 2000, 31, 1627–1634. [Google Scholar] [CrossRef]
  14. Zielonka, A.; Słota, D.; Hetmaniok, E. Application of the Swarm Intelligence Algorithm for Reconstructing the Cooling Conditions of Steel Ingot Continuous Casting. Energies 2020, 13, 2429. [Google Scholar] [CrossRef]
  15. Okamoto, K.; Li, B. A regularization method for the inverse design of solidification processes with natural convection. Int. J. Heat Mass Transf. 2007, 50, 4409–4423. [Google Scholar] [CrossRef]
  16. Özişik, M.; Orlande, H. Inverse Heat Transfer: Fundamentals and Applications; Taylor & Francis: New York, NY, USA, 2000. [Google Scholar]
  17. Neto, F.M.; Neto, A.S. An Introduction to Inverse Problems with Applications; Springer: Berlin, Germany, 2013. [Google Scholar]
  18. Brociek, R.; Pleszczyński, M.; Zielonka, A.; Wajda, A.; Coco, S.; Sciuto, G.L.; Napoli, C. Application of Heuristic Algorithms in the Tomography Problem for Pre-Mining Anomaly Detection in Coal Seams. Sensors 2022, 22, 7297. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, T.; Yang, D. Modeling and Inversion of Airborne and Semi-Airborne Transient Electromagnetic Data with Inexact Transmitter and Receiver Geometries. Remote Sens. 2022, 14, 915. [Google Scholar] [CrossRef]
  20. Gao, H.; Zahr, M.J.; Wang, J.X. Physics-informed graph neural Galerkin networks: A unified framework for solving PDE-governed forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2022, 390, 114502. [Google Scholar] [CrossRef]
  21. Kukla, S.; Siedlecka, U.; Ciesielski, M. Fractional Order Dual-Phase-Lag Model of Heat Conduction in a Composite Spherical Medium. Materials 2022, 15, 7251. [Google Scholar] [CrossRef]
  22. Žecová, M.; Terpák, J. Heat conduction modeling by using fractional-order derivatives. Appl. Math. Comput. 2015, 257, 365–373. [Google Scholar] [CrossRef]
  23. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  24. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef]
  25. Ojha, V.K.; Abraham, A.; Snášel, V. ACO for continuous function optimization: A performance analysis. In Proceedings of the 14th International Conference on Intelligent Systems Design and Applications, Okinawa, Japan, 28–30 November 2014; pp. 145–150. [Google Scholar] [CrossRef]
  26. Moradi, B.; Kargar, A.; Abazari, S. Transient stability constrained optimal power flow solution using ant colony optimization for continuous domains (ACOR). IET Gener. Transm. Distrib. 2022, 16, 3734–3747. [Google Scholar] [CrossRef]
  27. Omran, M.G.; Al-Sharhan, S. Improved continuous Ant Colony Optimization algorithms for real-world engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 85, 818–829. [Google Scholar] [CrossRef]
  28. Brociek, R.; Chmielowska, A.; Słota, D. Comparison of the Probabilistic Ant Colony Optimization Algorithm and Some Iteration Method in Application for Solving the Inverse Problem on Model With the Caputo Type Fractional Derivative. Entropy 2020, 22, 555. [Google Scholar] [CrossRef]
  29. Tubishat, M.; Alswaitti, M.; Mirjalili, S.; Al-Garadi, M.A.; Alrashdan, M.T.; Rana, T.A. Dynamic Butterfly Optimization Algorithm for Feature Selection. IEEE Access 2020, 8, 194303–194314. [Google Scholar] [CrossRef]
  30. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  31. Chen, S.; Chen, R.; Gao, J. A Monarch Butterfly Optimization for the Dynamic Vehicle Routing Problem. Algorithms 2017, 10, 107. [Google Scholar] [CrossRef]
  32. Xia, Q.; Ding, Y.; Zhang, R.; Liu, M.; Zhang, H.; Dong, X. Blind Source Separation Based on Double-Mutant Butterfly Optimization Algorithm. Sensors 2022, 22, 3979. [Google Scholar] [CrossRef] [PubMed]
  33. Zhang, M.; Wang, D.; Yang, J. Hybrid-Flash Butterfly Optimization Algorithm with Logistic Mapping for Solving the Engineering Constrained Optimization Problems. Entropy 2022, 24, 525. [Google Scholar] [CrossRef]
  34. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  35. Wang, Y.; Xiao, Y.; Guo, Y.; Li, J. Dynamic Chaotic Opposition-Based Learning-Driven Hybrid Aquila Optimizer and Artificial Rabbits Optimization Algorithm: Framework and Applications. Processes 2022, 10, 2703. [Google Scholar] [CrossRef]
Figure 1. Considered area with marked measuring points (a fragment of the boundary with Neumann boundary condition is marked in red, a fragment of the boundary with Robin boundary condition is marked in blue, and a fragment of the boundary with initial condition is marked in green).
Figure 2. The exact heat transfer $h(t)$ (blue line) and the approximate heat transfer (dashed green line) for (a) ACO and (b) DBOA.
Figure 3. The exact heat transfer $h(t)$ (blue line) and the approximate heat transfer (dashed green line) for (a) AO and (b) BOA.
Figure 4. The exact temperature $T(x_R, t)$ at the measurement point (blue line) and the reconstructed temperature (green line) for (a) ACO and (b) DBOA.
Figure 5. The exact temperature $T(x_R, t)$ at the measurement point (blue line) and the reconstructed temperature (green line) for (a) AO and (b) BOA.
Table 1. Results of calculations ($\bar{\lambda}$ — identified value of the thermal conductivity coefficient; $\bar{\beta}$ — identified value of the derivative order; $\bar{h}(t)$ — identified heat transfer function; $\delta$ — the relative error of reconstruction; $F$ — the value of the fitness function).

| Algorithm | $\bar{\lambda}$ | $\delta_{\bar{\lambda}}$ [%] | $\bar{\beta}$ | $\delta_{\bar{\beta}}$ [%] | $\bar{h}(t)$ | $\delta_{\bar{h}}$ | $F$ |
|---|---|---|---|---|---|---|---|
| ACO | 170.86 | 7.14 | 1.0838 | 0.35 | $2.27t^2 + 1.41t + 10.71$ | 5.05 | 272.95 |
| DBOA | 178.83 | 2.81 | 1.0818 | 0.17 | $2.42t^2 - 7.76t + 94.46$ | 2.39 | 0.45 |
| AO | 124.49 | 32.34 | 1.1021 | 2.04 | $2.11t^2 + 1.95t + 20.01$ | 7.76 | 482.39 |
| BOA | 194.27 | 5.58 | 1.0798 | 0.02 | $1.98t^2 - 5.42t + 7.85$ | 21.55 | 2501.21 |
Table 2. Errors of reconstruction of the temperature function T at the measurement points ($\Delta_{max}$ — maximal absolute error; $\Delta_{mean}$ — mean absolute error).

| Algorithm | $\Delta_{max}$ | $\Delta_{mean}$ |
|---|---|---|
| ACO | 0.5111 | 0.3575 |
| DBOA | 0.0261 | 0.0131 |
| AO | 0.7471 | 0.4361 |
| BOA | 2.7409 | 0.9513 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
