4.1. Neural Network Algorithm (NNA)
Artificial neural networks (ANNs) map input data to target data through an iterative update of the network weights that reduces the mean square error between the predicted output and the target output. The neural network algorithm (NNA) builds on the concepts and structure of ANNs to generate new solutions: the best searching agent in the population is treated as the target, and the procedures of the algorithm try to make all searching agents follow that target solution [31].
NNA is a population-based algorithm that starts with an initial population of randomly generated solutions within the search space. Each individual, or searching agent, in the population is called a "pattern solution"; each pattern solution is a vector representing the input data of the NNA.
To start the NNA optimization algorithm, a pattern solution matrix $X$ of size $N_{pop} \times D$ is randomly generated between the lower and upper bounds of the search space. The population of pattern solutions is given by:

$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,D} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,D} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N_{pop},1} & x_{N_{pop},2} & \cdots & x_{N_{pop},D} \end{bmatrix}, \qquad x_{i,j} = L_j + rand \cdot (U_j - L_j),$$

for $i = 1, 2, \ldots, N_{pop}$ and $j = 1, 2, \ldots, D$, where $L = [L_1, L_2, \ldots, L_D]$ and $U = [U_1, U_2, \ldots, U_D]$ are $1 \times D$ vectors representing the lower and upper bounds of the search space.
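As a minimal sketch of this initialization step (population size, dimension, and bound values below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed): N_pop pattern solutions, each D-dimensional.
N_pop, D = 10, 20
L = np.full(D, -1.0)   # lower bounds (assumed values)
U = np.full(D, 1.0)    # upper bounds (assumed values)

# Random initial population inside [L, U]; one pattern solution per row.
X = L + rng.random((N_pop, D)) * (U - L)
```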
Like ANNs, in NNA each pattern solution $x_i$ has a corresponding weight vector $w_i = [w_{1,i}, w_{2,i}, \ldots, w_{N_{pop},i}]^T$, where $i = 1, 2, \ldots, N_{pop}$. The weight array $W$ is given by:

$$W = [w_1, w_2, \ldots, w_{N_{pop}}] = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,N_{pop}} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,N_{pop}} \\ \vdots & \vdots & \ddots & \vdots \\ w_{N_{pop},1} & w_{N_{pop},2} & \cdots & w_{N_{pop},N_{pop}} \end{bmatrix},$$

where $W$ is a square matrix of size $N_{pop} \times N_{pop}$ of uniformly distributed random numbers between 0 and 1. The weight of a pattern solution is involved in the generation of a new candidate solution.
In NNA, the initial weights are random numbers whose values are updated as the iteration number increases, according to the calculated error of the network. The weight values are constrained such that the summation of the weights associated with any pattern solution should not exceed one, defined mathematically as follows:

$$w_{i,j}(k) \sim U(0,1), \quad i, j = 1, 2, \ldots, N_{pop}, \tag{37}$$

$$\sum_{i=1}^{N_{pop}} w_{i,j}(k) = 1, \quad j = 1, 2, \ldots, N_{pop}. \tag{38}$$
These constraints on the weight values control the bias of movement and the generation of new pattern solutions. Without this constraint, the algorithm would become stuck in a local optimum [31].
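A minimal sketch of the weight-matrix initialization under these constraints (the population size is an assumed toy value; here the weights contributing to each pattern solution occupy one column and are normalized to sum to one):

```python
import numpy as np

rng = np.random.default_rng(1)
N_pop = 10   # illustrative population size (assumed)

# Square weight matrix of uniformly distributed random numbers in (0, 1).
W = rng.random((N_pop, N_pop))

# Enforce the summation constraint: the weights associated with each
# pattern solution (one column per solution here) must sum to one.
W = W / W.sum(axis=0, keepdims=True)
```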
The fitness $C_i$ of each pattern solution is computed by evaluating the objective function $f$ at the corresponding pattern solution $x_i$:

$$C_i = f(x_i), \quad i = 1, 2, \ldots, N_{pop},$$

where $f$ is the objective function.
After the fitness calculation for all pattern solutions, the pattern solution with the best fitness is taken as the target solution, with target position $X^{Target}$, target fitness $C^{Target}$, and target weight $W^{Target}$. The NNA thus models an ANN with $N_{pop}$ inputs, each of dimension $D$, and only one target output [31].
Inspired by the weighted summation technique used in ANNs, the new pattern solution is generated as follows:

$$x_j^{New}(k+1) = \sum_{i=1}^{N_{pop}} w_{i,j}(k) \, x_i(k), \quad j = 1, 2, \ldots, N_{pop},$$

$$x_j(k+1) = x_j(k) + x_j^{New}(k+1),$$

where $k$ is the iteration index.
After the new pattern solutions are generated from the previous population, the weight matrix is updated as well using the following equation:

$$w_j^{Updated}(k+1) = w_j(k) + 2 \cdot rand \cdot \left(W^{Target}(k) - w_j(k)\right), \quad j = 1, 2, \ldots, N_{pop},$$

where the constraints (37) and (38) must be satisfied during the optimization process.
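The generation and weight-update steps can be sketched as follows. This is an illustrative implementation under assumed toy sizes; the final clip-and-renormalize step is our own addition to keep the summation constraint satisfied after the update, not a step taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N_pop, D = 10, 20                       # illustrative sizes (assumed)

X = rng.random((N_pop, D))              # current population (toy values)
W = rng.random((N_pop, N_pop))
W /= W.sum(axis=0, keepdims=True)       # weights per solution sum to one

# New pattern solutions as weighted sums over the whole population,
# x_j_new = sum_i w_ij * x_i, then added to the previous positions.
X_new = W.T @ X
X = X + X_new

# Weight update: move every weight vector toward the target weight
# (the weight vector of the best solution; index 0 is assumed here).
best = 0
W_target = W[:, [best]]
W = W + 2.0 * rng.random((1, N_pop)) * (W_target - W)

# Clip and re-normalize to restore the summation constraint; this repair
# step is our assumption, not part of the original algorithm description.
W = np.clip(W, 1e-12, None)
W /= W.sum(axis=0, keepdims=True)
```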
For better exploration of the search space, a bias operator is used in the NNA. The bias operator modifies a certain percentage of the pattern solutions generated in the new population, as well as of the updated weight matrix. It prevents premature convergence by modifying a certain number of individuals in the population so that they explore regions of the search space that have not yet been visited. For more details about the bias strategy, the reader can refer to reference [31].
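A sketch of one plausible bias operator is given below. The exact selection and replacement rules are deferred to [31]; the per-solution selection test, the fraction of components replaced, and the renormalization of the affected weight column are all our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N_pop, D = 10, 20                       # illustrative sizes (assumed)
L, U = -1.0, 1.0                        # assumed scalar bounds
X = rng.uniform(L, U, (N_pop, D))
W = rng.random((N_pop, N_pop))
W /= W.sum(axis=0, keepdims=True)

def bias_operator(X, W, beta, rng):
    """Sketch: for roughly a beta-fraction of the population, re-randomize
    part of the pattern solution and of its weight vector (assumed rules)."""
    X, W = X.copy(), W.copy()
    n_x = max(1, int(np.ceil(beta * D)))       # components to re-randomize
    n_w = max(1, int(np.ceil(beta * N_pop)))   # weights to re-randomize
    for j in range(N_pop):
        if rng.random() < beta:                      # solution j is biased
            idx = rng.choice(D, n_x, replace=False)
            X[j, idx] = rng.uniform(L, U, n_x)       # fresh random components
            widx = rng.choice(N_pop, n_w, replace=False)
            W[widx, j] = rng.random(n_w)             # fresh random weights
            W[:, j] /= W[:, j].sum()                 # restore summation constraint
    return X, W

X_b, W_b = bias_operator(X, W, beta=0.5, rng=rng)
```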
A modification factor $\beta$ is used to determine the percentage of the pattern solutions to be modified by the bias operator. The initial value of $\beta$ is set to 1, meaning that all individuals in the population are biased. The value of $\beta$ is adaptively reduced at each iteration using any suitable reduction rule, for example:

$$\beta(k+1) = \alpha \cdot \beta(k),$$

where $\alpha$ is a positive number smaller than 1, originally selected as 0.99.
The reduction of the modification factor enhances the exploitation of the algorithm as the iterations proceed, allowing it to search for the optimum solution near the target solution, especially in the final iterations.
Unlike ANNs, in NNA a transfer function operator is used to generate better-quality solutions. The transfer function operator (TF) is defined by the following equation:

$$x_i^{*}(k+1) = x_i(k+1) + 2 \cdot rand \cdot \left(X^{Target}(k) - x_i(k+1)\right), \quad i = 1, 2, \ldots, N_{pop}.$$

Using the transfer function operator, the updated pattern solution is moved from its current position to a new position closer to the target pattern solution $X^{Target}$.
In NNA, at early iterations the bias operator has a higher chance of generating a new pattern solution, giving more opportunities to discover unvisited pattern solutions and to use new weight values. As the iteration number increases, the chance of applying the bias operator decreases, while the transfer function (TF) operator is applied more often, enhancing the exploitation of the NNA, especially in the final iterations.
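The per-iteration choice between the two operators, together with the reduction of the modification factor, can be sketched as follows (the mid-run value of $\beta$, the target index, and the simplified bias branch are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N_pop, D = 10, 20                     # illustrative sizes (assumed)
X = rng.random((N_pop, D))
X_target = X[0].copy()                # assumed current target (best) solution
beta, alpha = 0.4, 0.99               # mid-run beta (assumed); alpha = 0.99 from the text

# One illustrative iteration: each solution is either biased (exploration)
# or moved toward the target by the transfer function (exploitation).
for j in range(N_pop):
    if rng.random() <= beta:
        # Bias branch: re-randomize part of solution j (simplified sketch).
        idx = rng.choice(D, max(1, int(beta * D)), replace=False)
        X[j, idx] = rng.random(idx.size)
    else:
        # Transfer function branch: x_j <- x_j + 2*rand*(X_target - x_j).
        X[j] = X[j] + 2.0 * rng.random() * (X_target - X[j])

beta *= alpha                          # beta(k+1) = alpha * beta(k)
```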
NNA is considered a dynamic optimization model because the generation of a new updated solution depends not only on the previous value of that solution but also on the whole population, described mathematically as follows:

$$x_i(k+1) = f\big(x_i(k), X(k), W(k)\big),$$

where $x_i(k+1)$ and $x_i(k)$ are the next and current positions of the $i$th pattern solution, respectively.
4.2. Formulation of FOFPID Controller Design as an Optimization Problem
In this paper, the neural network algorithm (NNA) was used to optimize the fractional order fuzzy PID (FOFPID) controller. NNA was used to obtain optimal or suboptimal values of the four scaling factors, the membership function parameters of the two inputs, and the orders $\lambda$ and $\mu$ of the fractional order operators. Each candidate pattern solution must contain these parameters of the FOFPID controller, concatenated into a single vector.
Gaussian membership functions are used for the inputs of the FOFPID controller. A Gaussian membership function is characterized by two parameters: the center $c$ and the standard deviation $\sigma$. In this paper, a technique for encoding the membership functions using the minimum number of parameters is used, in which the paired positive and negative membership functions have the same mean value but with opposite signs, and share the same standard deviation, as shown in Figure 5. This encoding halves the total number of membership function parameters to be optimized, reducing the dimension of the optimization problem and therefore the computational cost. The total problem dimension is 20. The encoding of the controller parameters into a pattern solution is given in Figure 6.
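A small sketch of this symmetric encoding idea is given below. The flat parameter layout and the decode function are hypothetical illustrations of the scheme, not the paper's actual encoding (which is given in Figure 6):

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function with center c and spread sigma."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def decode_symmetric(params):
    """Hypothetical decoder: each optimized pair (c, sigma) expands into a
    positive MF at +c and a mirrored negative MF at -c with the same sigma,
    halving the number of parameters to optimize.
    params: flat array [c1, s1, c2, s2, ...] -> list of (center, sigma)."""
    mfs = []
    for c, s in params.reshape(-1, 2):
        mfs.append((-c, s))   # negative-side membership function
        mfs.append((+c, s))   # positive-side peer
    return mfs

# Two optimized pairs decode into four membership functions.
mfs = decode_symmetric(np.array([0.5, 0.2, 1.0, 0.3]))
```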
The formulation of the FOFPID controller design as an optimization problem is described as follows:

Minimize

$$J = ITSE = \int_0^{T_{sim}} t \, e^2(t) \, dt,$$

subject to the lower- and upper-bound constraints on the controller parameters, where $ITSE$ is the integral of the time-weighted squared error, $e(t)$ is the error signal, $t$ is the time, and $T_{sim}$ is the simulation time.
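A discrete approximation of the ITSE criterion can be computed as below; the decaying error signal and time horizon are toy assumptions for illustration:

```python
import numpy as np

Ts = 0.001                       # sampling interval (0.001 s, as in the text)
t = np.arange(0.0, 5.0, Ts)      # assumed 5 s simulation horizon
e = np.exp(-2.0 * t)             # hypothetical error signal

# Rectangular-rule approximation of J = integral of t * e(t)^2 dt.
itse = np.sum(t * e**2) * Ts
```

For this toy signal the sum closely approximates the analytic value of the integral, $\int_0^{\infty} t\,e^{-4t}\,dt = 1/16$.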
The detailed procedures for using NNA for the optimization of the FOFPID controller are described in
Figure 7.
The optimized membership functions for both inputs of the FOFPID controller are shown in Figure 8. The optimal values obtained for the fractional orders $\lambda$ and $\mu$ are given below. Using the Al-Alaoui operator, the truncated 5th-order discrete transfer functions approximating $s^{\lambda}$ and $s^{-\mu}$ with a sampling interval $T_s$ = 0.001 s are: