Article

Recursive Identification for MIMO Fractional-Order Hammerstein Model Based on AIAGS

Institute of Automation, Beijing University of Chemical Technology, Beijing 100020, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(2), 212; https://doi.org/10.3390/math10020212
Submission received: 30 November 2021 / Revised: 3 January 2022 / Accepted: 9 January 2022 / Published: 11 January 2022
(This article belongs to the Topic Fractional Calculus: Theory and Applications)

Abstract: In this paper, an adaptive immune algorithm based on a global search strategy (AIAGS) and the auxiliary model recursive least squares method (AMRLS) are used to identify the multiple-input multiple-output (MIMO) fractional-order Hammerstein model. The model's nonlinear parameters, linear parameters, and fractional order are unknown. The identification procedure first uses AIAGS to find initial values for the model coefficients and order, and then feeds these initial values into AMRLS to identify the coefficients and the order of the model in turn. The linear block is expressed as the transfer function of a differential equation. By changing the stimulation function of the original algorithm, applying a global search strategy before the local search strategy in the mutation operation, and adopting a parallel mechanism, AIAGS further strengthens the original algorithm's optimization ability. The experimental results show that the proposed method is effective.

1. Introduction

In recent years, with rapid economic and social development, the complexity of industry has been increasing. In order to understand and control these industrial processes more accurately, it is necessary to study system identification. However, in real life, nonlinear processes are inevitable and widespread, and there is as yet no definitive characterization of them. A block-oriented model is a description of a nonlinear model that results from the interaction between a dynamic linear module and a static nonlinear module. These model components can be connected in series, parallel, or feedback [1]. The Hammerstein model is a typical block-oriented model consisting of a static nonlinear block in cascade with a dynamic linear block [2]. Because the dynamic behavior of the model is contained only in the linear block, while the nonlinear block is static, this structure is conducive to identifying and controlling nonlinear systems constructed with the Hammerstein model [3]. The Hammerstein model is extensively used to identify nonlinear systems [4,5,6,7]. As the model is widely used, its identification methods are also intensively discussed. These methods include neural networks [8,9], piecewise linear models [6], the least squares method [10], support vector machines [11], combined prior information [12], and so on.
In real life, it is evident that the dynamic linear block based on integer order cannot fully simulate the real model [13]. The fractional-order model extends the order of the model from the integer level to the fractional level. Therefore, the study of the fractional-order nonlinear model is essential [14]. At present, fractional-order models have been discussed in many fields, such as molecular materials [15,16], the voltage and current of the drive end impedance [17], industrial battery [18,19,20], and so on.
With the wide application of the fractional-order model, the problem of model identification has also been intensively discussed. However, the current methods have some limitations. The particle swarm optimization algorithm can be used to identify the parameters of the fractional Hammerstein model [21]. This method excessively depends on the optimization ability of the algorithm and does not consider the internal relationship between system parameters. Once the optimization algorithm has problems, it will significantly impact the identification results. The Levenberg–Marquardt algorithm developed by combining the two decomposition principles [22] can only be applied to the theoretical environment. Once the system is affected by noise, the model’s parameters will not be identified exactly. Reference [23] also requires an ideal environment. Some scholars pay attention to the fractional-order Hammerstein model with single-input single-output [24,25,26,27,28]. Some pay attention to the fractional-order Hammerstein model with multiple-input multiple-output, but most use the state space equation as the linear block of the model [29,30]. However, fractional-order calculus is a whole concept [31]. Using the transfer function of differential equation to construct the linear block of the Hammerstein model can better integrate the two concepts.
Based on the above background, this paper discusses a new method to identify the nonlinear coefficients, linear coefficients, and fractional order of the MIMO fractional Hammerstein model. In this method, AIAGS greatly improves optimization ability by improving the immune algorithm's stimulation function and search strategy. The algorithm then estimates the initial values of all MIMO fractional Hammerstein model parameters, including the fractional order. These estimates provide relatively accurate initial values for the subsequent algorithm, which solves the dependence of the two-step method [28], identifying coefficients and order in turn, on the initial values. Then, using AMRLS, a method for accurate parameter identification of the MIMO fractional-order model is proposed. Finally, the effectiveness of the proposed method is verified by numerical simulation.
The main contribution of this paper is to propose an adaptive immune algorithm with a global search strategy to accurately identify the initial parameters of the fractional Hammerstein system. Secondly, a new recursive identification method for coefficients and fractional order of MIMO fractional-order nonlinear system with differential equation transfer function as linear block model is derived using an auxiliary model. Due to the different ways of selecting the optimal solution, the AIAGS algorithm proposed in this paper has higher reliability than the classical immune algorithm. Based on the auxiliary model, the recursive identification algorithm for the MIMO fractional Hammerstein model is given using the recursive least square method. The method in this paper solves the initial value problem of previous methods and provides more accurate initial values. This initial value cooperates with AMRLS, making the result of parameters identification of multi-input and multi-output fractional Hammerstein model closer to reality.
The remainder of this paper is organized as follows. An improved immune algorithm is proposed in Section 2. In Section 3, a new recursive identification method for the MIMO fractional-order Hammerstein model, with a differential-equation transfer function as the linear block, is derived using an auxiliary model. In Section 4, numerical simulations show the effectiveness of the proposed method. Finally, Section 5 gives some conclusions.

2. Adaptive Immune Algorithm Based on Global Search Strategy

2.1. Review of Immune Algorithms

The immune algorithm is an adaptive intelligent system inspired by immunology and simulates the functions and principles of the biological immune system to solve complex problems. It retains several characteristics of the biological immune system, including global search capability, diversity maintenance mechanism, strong robustness, and parallel distributed search mechanism. The immune algorithm automatically generates the initial population by uniform probability distribution. After initialization, the population evolves and improves by the following steps: calculation of stimulation, selection, cloning, mutation, clonal inhibition, etc. [32].

2.2. AIAGS

2.2.1. Stimulation Improvement

Individual stimulation is the evaluation result of individual quality, which must comprehensively consider both individual affinity and concentration. The individual stimulation can usually be obtained by a simple mathematical calculation based on the evaluation results of individual affinity and concentration. In the traditional immune algorithm [33], the stimulation is expressed as
f_{sim}(x_i) = a \cdot f_{aff}(x_i) - b \cdot f_{den}(x_i)
where x_i denotes the ith individual of the population; f_aff(x_i) is the affinity, representing the Euclidean distance between the current individual and the optimal individual; f_den(x_i) is the concentration, indicating the number of other individuals whose Euclidean distance to the current individual is within a certain threshold; f_sim(x_i) is the stimulation; and a and b are calculation parameters. The algorithm sorts the individuals by stimulation and makes the next selection accordingly.
This paper makes the following changes to the coefficients of affinity and concentration. Firstly, the minus sign in Equation (1) is changed to a plus sign. The concentration reflects the quality of population diversity: too high a concentration means there are many very similar individuals in the population, and a key point of the immune algorithm is to suppress individuals with high concentration in order to achieve global optimization. However, in both the original algorithm and various improved immune algorithms today, the coefficient b is non-negative, which leads to only a minor incentive for individuals with low affinity and high concentration [34,35,36,37,38]. This improvement conforms to the core concept of the algorithm.
Secondly, this paper designs a parameter β related to the maximum, minimum, and individual affinity values of the current population. In the original algorithm, a and b are constants. In various improved algorithms [34,35,36,37,38], the adaptive coefficients are related only to the number of current iterations. Because the comparison of stimulations between individuals is carried out within the population of the current iteration, such adaptive coefficients are no different from constants: they do not affect the stimulation ranking of the population. In this paper, because β is quadratic, both individuals with low affinity and individuals with high affinity are considered when selecting individuals based on stimulation, increasing the global search ability. The parameter is expressed as
\beta = \left( \frac{f_{aff}(x_i) - f_{affa}}{f_{affmax} - f_{affa}} \right)^2
where f_{affa} is the average of f_{affmax} and f_{affmin}.
Finally, after a certain number of iterations, the population moves closer to the global optimal individual. If the concentration were still considered at this stage, the algorithm might abandon the optimal range it has found and select new random individuals during selection. Therefore, a monotonically decreasing adaptive operator is designed in this paper, so that in the middle and later iteration stages the effect of concentration is negligible.
To sum up, the stimulation for this paper is expressed as
f_{sim}(x_i) = (1 - \beta) \cdot f_{aff}(x_i) + \left[ 1 - 2\frac{gen}{G} + \left(\frac{gen}{G}\right)^2 \right] \cdot 0.5\beta \cdot f_{den}(x_i)
where gen means the current number of iterations and G is the total number of iterations.
After improvement, the approximate trend of individual stimulations is shown in Figure 1a. The approximate trend of the stimulations of the original or other improved immune algorithm is shown in Figure 1b. The x-axis is 100 individuals sorted from smallest to largest according to affinity, and the y-axis is individual stimulation. It can be seen from Figure 1 that the original algorithm and other improved algorithms generally only select individuals with low affinity. In contrast, the algorithm in this paper can consider individuals with high affinity.
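The improved stimulation of Equations (2) and (3) can be sketched as follows; this is an illustrative reimplementation, not the authors' code, and the toy population values are assumptions.

```python
import numpy as np

def stimulation(f_aff, f_den, gen, G):
    """Improved stimulation of Eq. (3) for a whole population.

    f_aff, f_den: affinity and concentration arrays of the population;
    gen, G: current and total iteration counts.
    Assumes not all affinities are equal (otherwise beta is undefined).
    """
    f_max, f_min = f_aff.max(), f_aff.min()
    f_avg = 0.5 * (f_max + f_min)                    # f_affa: mean of max and min
    beta = ((f_aff - f_avg) / (f_max - f_avg)) ** 2  # quadratic operator, Eq. (2)
    decay = 1.0 - 2.0 * gen / G + (gen / G) ** 2     # equals (1 - gen/G)^2, monotone decreasing
    return (1.0 - beta) * f_aff + decay * 0.5 * beta * f_den
```

Note that the bracketed operator equals (1 - gen/G)^2, so the concentration term vanishes as the iterations approach G, matching the design goal above.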

2.2.2. Mutation Strategy Improvement

The original algorithm has a single strategy in the mutation stage. Algorithms improved by others enrich the mutation strategy and increase the mutation probability of all individuals. However, the mutation strategy is selected only by random numbers, which makes the algorithm inflexible [38].
The algorithm of this paper makes two minor changes in the mutation stage. First, an adaptive operator p_m that varies with the generation number is designed; its value decreases monotonically between 0 and 0.8. The parameter can be expressed as
p_m = 0.8 \cdot \left( 1 - \frac{gen}{G} \right)
Secondly, when setting the global optimization step, a variable s_v is added to the adaptive operator, gradually changing the mutation step. The best of the individuals obtained after several mutations is retained, which greatly enhances the global search ability.
To sum up, the mutation strategy for this paper can be expressed as
x_{i,j} = \begin{cases} x_{best,j} + p_m \cdot (x_{r1,j} - x_{r2,j}), & rand > p_m \\ x_{r1,j} + (p_m + s_v) \cdot (x_{r2,j} - x_{r3,j}), & \text{otherwise} \end{cases}
where i denotes the index of the individual in the population; j denotes the index of the dimension within the individual; and x_{r1}, x_{r2}, and x_{r3} are distinct individuals randomly selected from the population, excluding x_i.
Obviously, in the early stage of the iteration, the mutation strategy will mostly choose the second mutation strategy, edge mutation strategy, which will enhance the global optimization ability of the algorithm. In the middle and later stages of the iteration, the first mutation strategy, the optimal individual mutation strategy, will be selected for local search.
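The mutation rule of Equations (4) and (5) can be sketched as below; the value of s_v and the way random indices are drawn are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, best, gen, G, s_v=0.1):
    """Mutation of Eq. (5). pop: (N, D) population; best: (D,) best individual."""
    N, D = pop.shape
    p_m = 0.8 * (1.0 - gen / G)      # adaptive operator, Eq. (4)
    out = np.empty_like(pop)
    for i in range(N):
        # three distinct random individuals, excluding x_i
        r1, r2, r3 = rng.choice([k for k in range(N) if k != i], 3, replace=False)
        if rng.random() > p_m:       # local search around the current best
            out[i] = best + p_m * (pop[r1] - pop[r2])
        else:                        # global (edge) mutation strategy
            out[i] = pop[r1] + (p_m + s_v) * (pop[r2] - pop[r3])
    return out
```

Early in the run p_m is close to 0.8, so the second (global) branch dominates; late in the run p_m approaches 0 and the search concentrates around the best individual, as described above.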

2.2.3. Simulated Annealing Strategy

The simulated annealing algorithm mimics the annealing process in metallurgy and is classified as a single-solution-based method. After comparing the current optimal solution with the previous optimal solution, if the fitness of the current optimal solution is worse than that of the previous one, the algorithm may abandon the current result and keep the previous result [39].
At the end of the improved algorithm, simulated annealing is added to prevent the algorithm from falling into a local optimum. Similar work exists, but both the initial algorithm and others' improved algorithms use the stimulation to evaluate the optimal solution [33]; this paper uses the affinity. The stimulation of the optimal individual of the previous generation may be somewhat large, so that it is not selected during mutation selection, and the affinity of the optimal individual of the current generation may therefore be worse than that of the previous generation. In this case, the simulated annealing step is likely to jump back to the previous result for further optimization. The replacement in such a case depends on the probability p, defined as
p = e^{-\Delta F}, \qquad \Delta F = \frac{f_{aff}(x_i)}{f_{aff}(x_i')} - 1
where x_i is the current optimal solution and x_i' denotes the previous optimal solution. The solution is replaced if p < rand(0, 1).
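The acceptance step of Equation (6) can be sketched as follows; the ratio form of ΔF follows the reconstruction above and should be read as an assumption about the original notation.

```python
import math
import random

def sa_select(aff_curr, aff_prev, rng=random.random):
    """Simulated-annealing check of Eq. (6).

    aff_curr, aff_prev: affinities (lower is better) of the current and
    previous optimal solutions. Returns True if the previous optimum
    should replace the current one.
    """
    delta_f = aff_curr / aff_prev - 1.0  # > 0 when the current optimum is worse
    p = math.exp(-delta_f)
    return p < rng()                     # replace if p < rand(0, 1)
```

When the current optimum is much worse than the previous one, ΔF is large, p is small, and the replacement is very likely, which matches the intent described above.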

2.2.4. Pseudo Code of AIAGS

To sum up, this paper introduces several innovations over existing immune algorithms. The pseudo code of AIAGS is explained in detail in Algorithm 1, and the flowchart of AIAGS is shown in Figure 2.
Algorithm 1: AIAGS
Step 1: Define the objective function F(x);
Step 2: Initialize the population X;
Step 3: Evaluate all individuals x_i by the objective function F(x);
Step 4: Calculate the affinity f_aff(x_i) and concentration f_den(x_i) of each individual;
Step 5: Initialize the iteration counter m = 1;
Step 6: While m < max number of iterations M
Step 7:  Calculate the stimulation f_sim(x_i) of each individual by Equation (3);
Step 8:  Select individuals from the population by stimulation and clone them;
Step 9:  Mutate the cloned individuals by Equation (5);
Step 10:  If a generated mutation vector exceeds the boundary, randomly generate a new mutation vector until it lies within the boundary;
Step 11:  Apply clonal inhibition and calculate the affinity of each new individual;
Step 12:  Generate the optimal individual by simulated annealing using Equation (6);
Step 13:  End;
Step 14:  m = m + 1;
Step 15: End while;
Step 16: Return the best solution.

2.3. Benchmark Function

Unlike traditional algorithms, which rest on a rigorous mathematical foundation, intelligent optimization algorithms lack strict theoretical guarantees. After improving an optimization algorithm, most researchers therefore use classical benchmark functions to test its effectiveness. This article uses eight classical benchmark functions and four CEC2017 benchmark functions to evaluate AIAGS. The function u(·) appearing in F6 and F7 is expressed as
u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}
These classical functions are divided into three groups: unimodal (F1–F4), multimodal (F5–F7), and fixed-dimension multimodal (F8). A unimodal benchmark function has only one optimal solution, which can verify exploitation and convergence. A multimodal benchmark function has many optima, but only one is the global optimum and the rest are local optima. The fixed-dimension multimodal functions allow the desired number of design variables to be defined and provide different search spaces. The multimodal functions are therefore responsible for testing exploration and the avoidance of entrapment in local optima. Hybrid and composition functions reflect problems closer to reality [40]. Table 1 lists the corresponding properties of these functions, where dim represents the dimension of each function and range indicates the scope of the search space.
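The penalty term u(·) above is the standard one used in the penalized benchmark functions; a direct transcription is:

```python
def u(x, a, k, m):
    """Penalty function of F6/F7: zero inside [-a, a], polynomial outside."""
    if x > a:
        return k * (x - a) ** m
    elif x < -a:
        return k * (-x - a) ** m
    return 0.0
```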

2.3.1. Comparison of AIAGS with Other Algorithms

In order to reflect the improvement of the immune algorithm in this paper, this section compares AIAGS with the original immune algorithm, two improved immune algorithms (the improved artificial immune algorithm (IAIA) [28] and the modified artificial immune algorithm (MAIA) [29]), and two newer algorithms (Harris hawks optimization (HHO) [41] and the Aquila optimizer (AO) [42]). The parameter settings of the compared algorithms are given in Table 2, and the comparison results are shown in Table 3. Because intelligent algorithms are highly stochastic, this paper runs several tests and reports the average value and standard deviation of each test result, to avoid misleading conclusions about the experimental results and the practical application of the algorithm.

2.3.2. Convergence

Convergence is the ability of the algorithm to search for and converge to an acceptable solution within a certain time, and it is an important index for evaluating the performance of an algorithm. High convergence means fast optimization speed and high precision. Generally, the convergence speed can be measured by the number of iterations, and the accuracy by the converged value.
The convergence curves of AIAGS and the other five algorithms in 12 benchmark functions are shown in Figure 3. It can be seen from Table 2 and Figure 3 that the convergence speed and optimization ability of AIAGS are not the strongest in individual benchmark functions. On the whole, AIAGS is far better than other immune algorithms in terms of convergence speed and optimization ability, and it is also better than the other two algorithms.

2.4. Summary

In this section, the immune algorithm's stimulation function and mutation strategy are improved, and simulated annealing is added to the final step to select the optimal solution. The core idea of these improvements is to avoid settling into a local optimum. After improving the algorithm, 12 different types of benchmark functions are used to evaluate its performance. Experiments show that the exploitation and exploration abilities of AIAGS are significantly improved compared with the previous immune algorithms. These conclusions provide substantial support for the following system identification work.

3. Identification Method of MIMO Fractional Order Hammerstein Model

3.1. MIMO Fractional Order Hammerstein Model

3.1.1. Fractional Order Differentiation

At present, three definitions are widely used in the field of fractional calculus: the Grünwald–Letnikov (GL), Riemann–Liouville (RL), and Caputo definitions. Because the GL definition is easy to program [43], this paper adopts it. The GL fractional derivative can be expressed as
D_t^\alpha f(t) = \lim_{h \to 0} \frac{1}{h^\alpha} \sum_{j=0}^{\left[\frac{t - t_0}{h}\right]} (-1)^j \binom{\alpha}{j} f(t - jh)
where α is the fractional order; since this paper deals with differential equations, α > 0. h is the sampling time; [·] denotes the integer part; (-1)^j \binom{\alpha}{j} are the binomial coefficients of (1 - z)^\alpha. Denoting these binomials by w_j^\alpha, we have
w_j^\alpha = (-1)^j \binom{\alpha}{j} = (-1)^j \frac{\Gamma(\alpha + 1)}{\Gamma(j + 1)\Gamma(\alpha - j + 1)}
Finally, when t_0 = 0, the GL fractional derivative can be expressed as
D_t^\alpha f(t) = \frac{1}{h^\alpha} \sum_{j=0}^{[t/h]} w_j^\alpha f(t - jh)
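Equations (8) and (9) translate directly into a numerical routine; the sketch below assumes uniformly sampled f and uses the standard recursion w_0 = 1, w_j = w_{j-1}(1 - (α + 1)/j) for the binomial weights.

```python
import numpy as np

def gl_weights(alpha, n):
    """GL binomial weights w_j^alpha, j = 0..n, via the usual recursion."""
    w = np.ones(n + 1)
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(f, alpha, h):
    """GL fractional derivative of Eq. (9) at each sample.

    f: samples f(0), f(h), ..., f(Nh); h: sampling time.
    """
    n = len(f) - 1
    w = gl_weights(alpha, n)
    d = np.zeros(n + 1)
    for k in range(n + 1):
        # convolution of the weights with the reversed history f(t - jh)
        d[k] = np.dot(w[:k + 1], f[k::-1]) / h ** alpha
    return d
```

As a sanity check, α = 1 reduces to the backward difference (w = [1, -1, 0, ...]) and α = 0 returns the signal itself.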

3.1.2. MIMO Fractional-Order Hammerstein System

The MIMO Hammerstein model of this paper is schematically represented in Figure 4. The Hammerstein model is a typical nonlinear model composed of a static nonlinear block and a dynamic linear block. The dynamic linear block can be expressed as
\begin{bmatrix} y_1^*(t) \\ y_2^*(t) \\ \vdots \\ y_N^*(t) \end{bmatrix} = \begin{bmatrix} G_{1,1} & G_{1,2} & \cdots & G_{1,M} \\ G_{2,1} & G_{2,2} & \cdots & G_{2,M} \\ \vdots & \vdots & \ddots & \vdots \\ G_{N,1} & G_{N,2} & \cdots & G_{N,M} \end{bmatrix} \begin{bmatrix} \bar{u}_1(t) \\ \bar{u}_2(t) \\ \vdots \\ \bar{u}_M(t) \end{bmatrix}
where y_k^*(t) is the kth noise-free system output, and ū_l(t) is generated by the lth system input u_l(t) through the nonlinear block, which can be expressed as
\bar{u}_l(t) = c_{l,1} \cdot f_{l,1}(u_l(t)) + c_{l,2} \cdot f_{l,2}(u_l(t)) + \cdots + c_{l,n_{lc}} \cdot f_{l,n_{lc}}(u_l(t)) = \sum_{m=1}^{n_{lc}} c_{l,m} \cdot f_{l,m}(u_l(t))
where c_{l,·} are coefficients to be identified and f_{l,·}(·) are a series of basis functions. G_{k,l} is a fractional-order transfer function relating ū_l(t) to y_k^*(t); it is defined as
G_{k,l}(s) = \frac{b_{k,l,m} s^{m\alpha} + b_{k,l,m-1} s^{(m-1)\alpha} + \cdots + b_{k,l,0}}{a_{k,l,n} s^{n\alpha} + a_{k,l,n-1} s^{(n-1)\alpha} + \cdots + a_{k,l,0}}
where a_{k,l,·} and b_{k,l,·} are coefficients to be identified, and α is the fractional order to be identified. For convenience of calculation and programming, a_{k,l,0} is assumed to be 1 in this paper. According to Equations (11) and (13), the kth system output can be expressed as
y_k^* = G_{k,1}\bar{u}_1 + G_{k,2}\bar{u}_2 + \cdots + G_{k,M}\bar{u}_M = \frac{b_{k,1,m} s^{m\alpha} + b_{k,1,m-1} s^{(m-1)\alpha} + \cdots + b_{k,1,0}}{a_{k,1,n} s^{n\alpha} + a_{k,1,n-1} s^{(n-1)\alpha} + \cdots + a_{k,1,1} s^{\alpha} + 1}\,\bar{u}_1 + \frac{b_{k,2,m} s^{m\alpha} + b_{k,2,m-1} s^{(m-1)\alpha} + \cdots + b_{k,2,0}}{a_{k,2,n} s^{n\alpha} + a_{k,2,n-1} s^{(n-1)\alpha} + \cdots + a_{k,2,1} s^{\alpha} + 1}\,\bar{u}_2 + \cdots + \frac{b_{k,M,m} s^{m\alpha} + b_{k,M,m-1} s^{(m-1)\alpha} + \cdots + b_{k,M,0}}{a_{k,M,n} s^{n\alpha} + a_{k,M,n-1} s^{(n-1)\alpha} + \cdots + a_{k,M,1} s^{\alpha} + 1}\,\bar{u}_M
By reducing the fractions to a common denominator and simplifying Equation (14), we obtain
\left( A_{k,N_A} s^{N_A\alpha} + A_{k,N_A-1} s^{(N_A-1)\alpha} + \cdots + A_{k,1} s^{\alpha} + 1 \right) y_k^* = \left( B_{k,1,N_B} s^{N_B\alpha} + B_{k,1,N_B-1} s^{(N_B-1)\alpha} + \cdots + B_{k,1,0} \right)\bar{u}_1 + \left( B_{k,2,N_B} s^{N_B\alpha} + B_{k,2,N_B-1} s^{(N_B-1)\alpha} + \cdots + B_{k,2,0} \right)\bar{u}_2 + \cdots + \left( B_{k,M,N_B} s^{N_B\alpha} + B_{k,M,N_B-1} s^{(N_B-1)\alpha} + \cdots + B_{k,M,0} \right)\bar{u}_M
where the A are polynomials in a; the B are polynomials in a and b; and the numbers of fractional-order terms are N_A = Mn and N_B = (M - 1)n + m. To sum up, the MIMO fractional-order Hammerstein system discussed in this paper can be expressed as
\begin{cases} y_k^*(t) + \sum_{i=1}^{N_A} A_{k,i} D^{i\alpha} y_k^*(t) = \sum_{l=1}^{M} \sum_{j=0}^{N_B} B_{k,l,j} D^{j\alpha} \bar{u}_l(t) \\ y_k(t) = y_k^*(t) + v(t) \end{cases}
where v(t) is Gaussian white noise and y_k(t) is the measured output containing noise. According to Equations (10) and (16), the MIMO fractional-order Hammerstein system can be expressed as
y_k(t) = \frac{1}{1 + \sum_{i=1}^{N_A} A_{k,i}/h^{i\alpha}} \cdot \left[ \sum_{l=1}^{M} \sum_{i=0}^{N_B} \sum_{m=1}^{n_{lc}} \frac{B_{k,l,i}}{h^{i\alpha}} c_{l,m} \sum_{j=0}^{[t/h]} w_j^{i\alpha} f_{l,m}\big(u_l(t - jh)\big) - \sum_{i=1}^{N_A} \frac{A_{k,i}}{h^{i\alpha}} \sum_{j=1}^{[t/h]} w_j^{i\alpha} y_k^*(t - jh) \right] + v(t)
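For intuition, a minimal SISO special case of Equation (17), y*(t) + A D^α y*(t) = B ū(t), can be simulated with the GL discretization. The values A = 5, B = 4, α = 0.3, the step size, and the unit-step input below are illustrative assumptions, not the example of Section 4.

```python
import numpy as np

def simulate_fractional(ubar, A, B, alpha, h):
    """Simulate y*(t) + A D^alpha y*(t) = B ubar(t) on a uniform grid."""
    n = len(ubar)
    w = np.ones(n)                    # GL binomial weights w_j^alpha
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    y = np.zeros(n)
    c = A / h ** alpha
    for k in range(n):
        # history term: sum_{j>=1} w_j * y(t - jh)
        hist = np.dot(w[1:k + 1], y[k - 1::-1]) if k > 0 else 0.0
        y[k] = (B * ubar[k] - c * hist) / (1.0 + c)
    return y

# step response of an illustrative first-order fractional block
y = simulate_fractional(np.ones(200), A=5.0, B=4.0, alpha=0.3, h=0.05)
```

Since the GL weights for j ≥ 1 sum to -1, the step response approaches the static gain B from below, which is a useful check on the discretization.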

3.2. Parameter Identification Based on Auxiliary Model Recursive Least Square Method

In the MIMO fractional-order Hammerstein model, all of the coefficients and the fractional order need to be identified. Previous articles usually considered only part of the coefficients, or only the SISO case; the problem addressed here has rarely been studied before. The identification work is divided into coefficient identification and order identification. However, the two results affect each other: the coefficients cannot be identified precisely without a precise fractional order. This paper therefore first uses a series of input and output data to obtain initial values of the coefficients and the fractional order with the AIAGS algorithm described above. These initial values are reasonably accurate. They are then used to obtain the final parameter identification results of the fractional-order Hammerstein model through the auxiliary model recursive least squares (AMRLS) algorithm.

3.2.1. Coefficient Identification

According to the basic knowledge of system identification, the input–output relations can be expressed as
y_k(t) = y_k^*(t) + v(t) = \phi_k(t) \cdot \theta_k^T + v(t)
where \phi_k(t) is the information vector of input–output data, which is expressed as
\phi_k(t) = \left[ \phi_{k,A}(t), \phi_{B_{k,1,0}}(t), \phi_{B_{k,1,1}}(t), \ldots, \phi_{B_{k,1,N_B}}(t), \ldots, \phi_{B_{k,M,0}}(t), \phi_{B_{k,M,1}}(t), \ldots, \phi_{B_{k,M,N_B}}(t) \right]
\phi_{k,A}(t) = \left[ -\sum_{j=1}^{[t/h]} w_j^{\alpha} y_k^*(t - jh), \ldots, -\sum_{j=1}^{[t/h]} w_j^{N_A\alpha} y_k^*(t - jh) \right]
\phi_{B_{k,l,i}}(t) = \left[ \sum_{j=0}^{[t/h]} w_j^{i\alpha} f_{l,1}\big(u_l(t - jh)\big), \ldots, \sum_{j=0}^{[t/h]} w_j^{i\alpha} f_{l,n_{lc}}\big(u_l(t - jh)\big) \right]
According to Equations (16) and (17), the parameter vector \theta_k is found and expressed as
\theta_k = \left[ \theta_{k,A}, \theta_{B_{k,1,0}}, \ldots, \theta_{B_{k,1,N_B}}, \ldots, \theta_{B_{k,M,0}}, \ldots, \theta_{B_{k,M,N_B}} \right]
\theta_{k,A} = \left[ Q_{k,1}, Q_{k,2}, \ldots, Q_{k,N_A} \right]
\theta_{B_{k,l,i}} = \left[ W_{k,1,i} c_{1,1}, \ldots, W_{k,1,i} c_{1,n_{lc}}, \ldots, W_{k,M,i} c_{M,1}, \ldots, W_{k,M,i} c_{M,n_{lc}} \right]
where
Q_{k,i} = \frac{A_{k,i}/h^{i\alpha}}{1 + \sum_{i=1}^{N_A} A_{k,i}/h^{i\alpha}}, \qquad W_{k,l,j} = \frac{B_{k,l,j}/h^{j\alpha}}{1 + \sum_{i=1}^{N_A} A_{k,i}/h^{i\alpha}}
It can be clearly seen that \theta_k contains the coefficients to be identified. It is worth mentioning that y_k^*(t - jh) is unknown, so \theta_{k,A} cannot be identified directly from \phi_{k,A}(t). According to reference [44], an auxiliary model is used to estimate the unknown variable y_k^*(t - jh). The auxiliary model of this paper is schematically represented in Figure 5. The main idea of the auxiliary model is that the unmeasurable noise-free output y_k^*(t) is replaced by the output of the auxiliary model y_{amk}(t). The identification problem then changes from the relationship between y_k^*(t) and ū_l to the relationship between y_{amk}(t) and ū_l.
According to Figure 5, the input–output relations of the auxiliary model can be written as
y_{amk}(t) = \phi_{amk}(t) \cdot \theta_{amk}^T
where
\phi_{amk}(t) = \left[ \phi_{amk,A}(t), \phi_{B_{k,1,0}}(t), \phi_{B_{k,1,1}}(t), \ldots, \phi_{B_{k,1,N_B}}(t), \ldots, \phi_{B_{k,M,0}}(t), \phi_{B_{k,M,1}}(t), \ldots, \phi_{B_{k,M,N_B}}(t) \right]
\phi_{amk,A}(t) = \left[ -\sum_{j=1}^{[t/h]} w_j^{\alpha} y_{amk}(t - jh), \ldots, -\sum_{j=1}^{[t/h]} w_j^{N_A\alpha} y_{amk}(t - jh) \right]
\theta_{amk} = \hat{\theta}_k
The estimate of \phi_k(t) is used as the auxiliary model information vector \phi_{amk}(t), and the parameter estimate of \theta_k is used as the auxiliary model parameter vector \theta_{amk}. Define the criterion function as
J(\hat{\theta}_k^T) = \frac{1}{2} \sum_{i=1}^{t} \left[ y_k(i) - \phi_{amk}(i) \hat{\theta}_k^T \right]^2
By minimizing the criterion function, the value of \phi_{amk}(i)\hat{\theta}_k^T approaches the value of y_k(i), which identifies \hat{\theta}_k. The minimum can be obtained from the following equation:
\frac{\partial J(\hat{\theta}_k^T)}{\partial \hat{\theta}_k^T} = -\sum_{i=1}^{t} \phi_{amk}^T(i) \cdot \left[ y_k(i) - \phi_{amk}(i) \hat{\theta}_k^T \right] = 0
When \sum_{i=1}^{t} \phi_{amk}^T(i) \phi_{amk}(i) is invertible, the value of \hat{\theta}_k can be identified by recursive least squares as follows:
\hat{\theta}_k^T(t) = \left[ \sum_{i=1}^{t} \phi_{amk}^T(i) \phi_{amk}(i) \right]^{-1} \sum_{i=1}^{t} \phi_{amk}^T(i) y_k(i)
\hat{\theta}_k^T(t) = \hat{\theta}_k^T(t-1) + L(t) \left[ y_k(t) - \phi_{amk}(t) \hat{\theta}_k^T(t-1) \right]
L(t) = P(t-1) \phi_{amk}^T(t) \left[ 1 + \phi_{amk}(t) P(t-1) \phi_{amk}^T(t) \right]^{-1}
P(t) = \left[ I - L(t) \phi_{amk}(t) \right] P(t-1)
where P(0) is a diagonal matrix whose main diagonal elements are large and equal.
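The recursion of Equation (25) can be sketched as a single update function; the synthetic two-parameter data below are illustrative, and \phi stands for the auxiliary-model information vector.

```python
import numpy as np

def rls_update(theta, P, phi, y):
    """One recursive least-squares step of Eq. (25) for y ~ phi @ theta."""
    phi = phi.reshape(-1, 1)                          # column vector
    L = P @ phi / (1.0 + float(phi.T @ P @ phi))      # gain vector L(t)
    theta = theta + L.flatten() * (y - float(phi @ theta if phi.ndim == 1 else phi.T @ theta.reshape(-1, 1)))
    P = (np.eye(len(theta)) - L @ phi.T) @ P          # covariance update
    return theta, P

# usage: recover theta_true from noiseless synthetic regressions
theta = np.zeros(2)
P = 1e6 * np.eye(2)                                   # P(0): large diagonal elements
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])
for _ in range(50):
    phi = rng.standard_normal(2)
    theta, P = rls_update(theta, P, phi, float(phi @ theta_true))
```

With noiseless data the estimate converges to the true parameters up to an error of order 1/P(0), which is why the initial P is chosen large.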
According to the above equations, the elements of \hat{\theta}_k are all identified. Without loss of generality, assuming c_{l,1} = 1 facilitates calculation and ensures the uniqueness of the final parameters. Then, the unique values of W_{k,l,i} and c_{l,m} are calculated; they can be expressed as
W_{k,l,i} = \theta_{B_{k,l,i}}\left[ (l-1) n_{lc} + 1 \right], \qquad c_{l,m} = \frac{\theta_{B_{k,l,i}}\left[ (l-1) n_{lc} + m \right]}{W_{k,l,i}}
So far, the estimates of A , B , and c have been obtained.

3.2.2. Order Identification

The previous section discussed the identification of the coefficients. Substituting the accurate estimates of the coefficients into Equation (17), the order can be identified accurately. Define the criterion function as
J(\alpha) = \frac{1}{2} \sum_{i=1}^{t} \left[ y_k(i) - \hat{y}_k(i) \right]^2
By minimizing the criterion function, the value of \hat{y}_k(i) approaches the value of y_k(i). The minimum can be obtained from the following equation:
\frac{\partial J(\alpha)}{\partial \alpha} = -\sum_{i=1}^{t} \frac{\partial \hat{y}_k(i)}{\partial \alpha} \cdot \left[ y_k(i) - \hat{y}_k(i) \right] = 0
where
\frac{\partial \hat{y}_k(t)}{\partial \alpha} = \frac{\partial}{\partial \alpha} \left( \hat{G}_{k,1}(s^\alpha) \bar{u}_1(t) + \hat{G}_{k,2}(s^\alpha) \bar{u}_2(t) + \cdots + \hat{G}_{k,M}(s^\alpha) \bar{u}_M(t) \right) = \sum_{l=1}^{M} \left( \frac{N_B B_{k,l,N_B} s^{N_B\alpha} + \cdots + B_{k,l,1} s^{\alpha}}{A_{k,N_A} s^{N_A\alpha} + \cdots + 1} - \frac{B_{k,l,N_B} s^{N_B\alpha} + \cdots + B_{k,l,0}}{\left( A_{k,N_A} s^{N_A\alpha} + \cdots + 1 \right)^2} \left( N_A A_{k,N_A} s^{N_A\alpha} + \cdots + A_{k,1} s^{\alpha} \right) \right) \ln(s) \, \bar{u}_l(t)
According to reference [24], \ln(s) \cdot \bar{u}_l(t) can be replaced by s^\alpha \cdot (\ln(s)/s^\alpha) \cdot \bar{u}_l(t). The inverse Laplace transform of \ln(s)/s^\alpha involves the digamma function and can be expressed as
L^{-1}\left( \frac{\ln(s)}{s^\alpha} \right) = \frac{t^{\alpha-1}}{\Gamma(\alpha)} \left[ \frac{1}{\Gamma(\alpha)} \frac{d\Gamma(\alpha)}{d\alpha} - \ln(t) \right]
Then, \ln(s) \cdot \bar{u}_l(t) can be expressed as
D^{\alpha} \left[ \frac{1}{\Gamma(\alpha)} \frac{d\Gamma(\alpha)}{d\alpha} D^{-\alpha} \bar{u}_l(t) - \frac{1}{\Gamma(\alpha)} \int_0^t (t - \tau)^{\alpha-1} \ln(t - \tau) \, \bar{u}_l(\tau) \, d\tau \right]
It’s easy to see that α can be calculated by Equations (28)–(32).

3.3. Summary

So far, the estimates of A, B, c, and α have been obtained. Because the A are polynomials in a, and the B are polynomials in a and b, it is feasible to estimate the values of a from the values of A, and then to estimate the values of b from a and B. To sum up, all of the estimation work has been completed.

4. Experimental Results

In this section, two numerical examples will demonstrate the validity of the proposed method.

4.1. Example 1

Consider the following model, which is expressed as
\begin{bmatrix} y_1^*(t) \\ y_2^*(t) \end{bmatrix} = \begin{bmatrix} G_{1,1} & G_{1,2} \\ G_{2,1} & G_{2,2} \end{bmatrix} \begin{bmatrix} \bar{u}_1(t) \\ \bar{u}_2(t) \end{bmatrix}, \qquad y(t) = y^*(t) + v(t)
where
G_{1,1} = \frac{4}{5 s^{0.3} + 1}, \quad G_{1,2} = \frac{3}{3 s^{0.3} + 1}, \quad G_{2,1} = \frac{4}{6 s^{0.3} + 1}, \quad G_{2,2} = \frac{5}{2 s^{0.3} + 1}
\bar{u}_1(t) = u_1(t) + 0.5 u_1^2(t) + 0.3 u_1^3(t) + 0.1 u_1^4(t)
\bar{u}_2(t) = u_2(t) + 0.4 u_2^2(t) + 0.2 u_2^3(t) + 0.1 u_2^4(t)
The inputs u_1 and u_2 are persistently exciting signal sequences with unit variance and zero mean; v(t) is stochastic Gaussian noise with zero mean and variance 0.005. The outputs y(t) are then generated by the respective transfer functions of the MIMO fractional-order Hammerstein model.
According to the model, the parameters θ to be identified are
\theta = \left[ a_{1,1,1}, a_{1,2,1}, b_{1,1,0}, b_{1,2,0}, a_{2,1,1}, a_{2,2,1}, b_{2,1,0}, b_{2,2,0}, c_{1,1}, c_{1,2}, c_{1,3}, c_{2,1}, c_{2,2}, c_{2,3}, \alpha \right] = \left[ 5, 3, 4, 3, 6, 2, 4, 5, 0.5, 0.3, 0.1, 0.4, 0.2, 0.1, 0.3 \right]
The identification steps are described in Section 3. First, the intelligent optimization algorithm identifies the initial values of the model. Then AMRLS is used to identify the model coefficients, with the initial value of the fractional order treated as the model's actual value at this stage. Once the coefficients have been estimated, they are in turn treated as the true values in order to identify the fractional order. Finally, the identification of coefficients and order is repeated until the iterations end or satisfactory results are obtained. The pseudo-code of the identification process is given in Algorithm 2.
Algorithm 2: Identification process
Step 1. Collect the data of all inputs and outputs;
Step 2. Obtain the initial values of the unknown parameters using the intelligent optimization algorithm;
Step 3. While m < max number of iterations M
Step 4.  Estimate the model coefficients according to Equation (25);
Step 5.  Estimate the fractional order according to Equation (29);
Step 6.  If the two criterion function values J are within the required accuracy
Step 7.   Break;
Step 8.  End if;
Step 9.  m = m + 1;
Step 10. End while;
Step 11. Return the best solution.
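The coefficient-update step at the core of AMRLS is a recursive least squares (RLS) iteration. The sketch below shows that update on a toy linear regression; the auxiliary-model regressor construction for the fractional-order Hammerstein model is omitted, and the parameter values and names are illustrative only.

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One recursive least squares update: regressor phi, measurement y."""
    K = P @ phi / (1.0 + phi @ P @ phi)    # gain vector
    theta = theta + K * (y - phi @ theta)  # innovation-driven update
    P = P - np.outer(K, phi @ P)           # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([5.0, 3.0, 4.0])     # toy "true" parameters
theta, P = np.zeros(3), 1e6 * np.eye(3)    # flat prior: large covariance
for _ in range(500):
    phi = rng.standard_normal(3)                        # excitation
    y = phi @ true_theta + 0.005 * rng.standard_normal()  # noisy output
    theta, P = rls_step(theta, P, phi, y)
# theta is now close to true_theta
```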
In order to reflect the importance of the initial value of the fractional order, in this section the initial values are identified by three different optimization algorithms: AIAGS, HHO, and AO. The subsequent identification work is then carried out from the resulting initial values. This section evaluates the final identification results in two respects, RQE and MSE, defined as
$$\mathrm{RQE}=\sqrt{\frac{\left\|\hat{\theta}-\theta\right\|^{2}}{\left\|\theta\right\|^{2}}},\qquad \mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}$$
where θ ^ and y ^ i are estimated values; θ and y i are true values.
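The two criteria can be transcribed directly as functions; this is a plain restatement of the definitions above, not code from the paper.

```python
import numpy as np

def rqe(theta_hat, theta):
    """Relative quadratic error between estimated and true parameters."""
    theta_hat, theta = np.asarray(theta_hat), np.asarray(theta)
    return np.sqrt(np.sum((theta_hat - theta)**2) / np.sum(theta**2))

def mse(y, y_hat):
    """Mean squared error between true and estimated outputs."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.mean((y - y_hat)**2)
```

For example, rqe([5.1, 2.9], [5.0, 3.0]) evaluates to sqrt(0.02/34) ≈ 0.024.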
The final identification results obtained by Algorithm 2 are shown in Table 4, and the RQE and MSE of the results are shown in Table 5. The outputs of the real model and the outputs of the model obtained through identification are shown in Figure 6 and Figure 7. Figure 8 shows the estimated fractional-order convergence curve.

4.2. Example 2

Consider the following model, which is expressed as
$$\begin{bmatrix} y_{1}^{*}(t) \\ y_{2}^{*}(t) \end{bmatrix}=\begin{bmatrix} G_{1,1} & G_{1,2} \\ G_{2,1} & G_{2,2} \end{bmatrix}\begin{bmatrix} \bar{u}_{1}(t) \\ \bar{u}_{2}(t) \end{bmatrix},\qquad y(t)=y^{*}(t)+v(t)$$
where
$$G_{1,1}=\frac{5}{2s^{0.7}+1},\quad G_{1,2}=\frac{1.7s^{0.7}+1.9}{1.5s^{1.4}+1.3s^{0.7}+1},\quad G_{2,1}=\frac{1.8s^{0.7}+1.5}{2.2s^{1.4}+2.1s^{0.7}+1},\quad G_{2,2}=\frac{1}{1.6s^{0.7}+1}$$
$$\bar{u}_{1}(t)=u_{1}(t)+0.5u_{1}^{2}(t)+0.2u_{1}^{3}(t)+0.1u_{1}^{4}(t)$$
$$\bar{u}_{2}(t)=u_{2}(t)+0.4u_{2}^{2}(t)+0.3u_{2}^{3}(t)+0.1u_{2}^{4}(t)$$
The parameter meanings are similar to that of Example 1, so θ can be expressed as
$$\theta=[a_{1,1,1},a_{1,2,2},a_{1,2,1},b_{1,1,0},b_{1,2,1},b_{1,2,0},a_{2,1,2},a_{2,1,1},a_{2,2,1},b_{2,1,1},b_{2,1,0},b_{2,2,0},c_{1,1},c_{1,2},c_{1,3},c_{2,1},c_{2,2},c_{2,3},\alpha]$$
$$=[2,\,1.5,\,1.3,\,5,\,1.7,\,1.9,\,2.2,\,2.1,\,1.6,\,1.8,\,1.5,\,1,\,0.5,\,0.2,\,0.1,\,0.4,\,0.3,\,0.1,\,0.7]$$
By repeating the identification process similar to Example 1, the final identification results are shown in Table 6, and the RQE and MSE of the results are shown in Table 7. The outputs of the real model and the outputs of the model obtained through identification are shown in Figure 9 and Figure 10. Figure 11 shows the estimated fractional-order convergence curve.

5. Conclusions

This paper discusses a new identification method for MIMO fractional-order Hammerstein models. In order to improve the accuracy of the identification results, the identification process needs a heuristic algorithm to provide the initial values. Because the immune algorithm is prone to premature convergence, this paper improves the immune algorithm and proposes AIAGS. In AIAGS, the immune algorithm's stimulation function and mutation strategy are improved, and simulated annealing is added in the final step to select the optimal solution. The core idea of these improvements is to avoid converging to a local optimum. Then, starting from the obtained initial values, the auxiliary model recursive least squares method is used to accurately identify all the parameters of the MIMO fractional-order Hammerstein model. The experimental results show the effectiveness of the proposed algorithm. The methods proposed in this paper can also be applied to the problems studied in other works [45,46,47], such as parameter identification for different systems, engineering applications, fault diagnosis, and so on.

Author Contributions

Conceptualization, Q.J. and B.W.; methodology, Q.J.; software, B.W.; validation, Q.J., B.W. and Z.W.; formal analysis, B.W.; investigation, B.W.; resources, Q.J.; data curation, Q.J. and B.W.; writing—original draft preparation, B.W.; writing—review and editing, Q.J., B.W. and Z.W.; visualization, Q.J.; supervision, Q.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All of the information on data generation is given in detail in the relevant sections of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Billings, S.A.; Fakhouri, S.Y. Identification of systems containing linear dynamic and static nonlinear element. Automatica 1982, 18, 15–26. [Google Scholar] [CrossRef]
  2. Narendra, K.S.; Gallman, P.G. An Iterative Method for the Identification of Nonlinear Systems using the Hammerstein Model. IEEE Trans. Autom. Control 1966, AC11, 546–550. [Google Scholar] [CrossRef]
  3. Moghaddam, M.J.; Mojallali, H.; Teshnehlab, M. Recursive identification of multiple-input single-output fractional-order Hammerstein model with time delay. Appl. Soft Comput. 2018, 70, 486–500. [Google Scholar] [CrossRef]
  4. Chen, H.F. Strong consistency of recursive identification for Hammerstein systems with discontinuous piecewise-linear memoryless block. IEEE Trans. Autom. Control AC 2005, 50, 1612–1617. [Google Scholar] [CrossRef]
  5. Sznaier, M. Computational complexity analysis of set membership identification of Hammerstein and Wiener systems. Automatica 2009, 45, 701–705. [Google Scholar] [CrossRef]
  6. Kung, M.C.; Womack, B. Discrete time adaptive control of linear dynamic systems with a two-segment piecewise-linear asymmetric nonlinearity. IEEE Trans. Autom. Control 2003, 29, 170–172. [Google Scholar] [CrossRef]
  7. Mccannon, T.E.; Gallagher, N.C.; Minoo-Hamedani, D.; Wise, J.L. On the design of nonlinear discrete-time predictors. IEEE Trans. Inf. Theory 1982, 28, 366–371. [Google Scholar] [CrossRef]
  8. Jin, Q.; Wang, H.; Su, Q. A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise. ISA Trans. 2018, 72, 77–91. [Google Scholar] [CrossRef] [PubMed]
  9. Jin, Q.; Xu, Z.; Cai, W. An Improved Whale Optimization Algorithm with Random Evolution and Special Reinforcement Dual-Operation Strategy Collaboration. Symmetry 2021, 13, 238. [Google Scholar] [CrossRef]
  10. Dong, S.; Yu, L.; Zhang, W.A. Robust extended recursive least squares identification algorithm for Hammerstein systems with dynamic disturbances. Digit. Signal Process. 2020, 101, 102716. [Google Scholar] [CrossRef]
  11. Dhaifallah, M.A.; Westwick, D.T. Support Vector Machine Identification of Output Error Hammerstein Models. IFAC Proc. Vol. 2011, 44, 13948–13953. [Google Scholar] [CrossRef]
  12. Schlegel, M.; Čech, M. Fractal System Identification for Robust Control—The Moment Approach. In Proceedings of the 5th International Carpathian Control Conference, Zakopane, Poland, 25–28 May 2004. [Google Scholar]
  13. Torvik, P.J.; Bagley, R.L. On the Appearance of the Fractional Derivative in the Behavior of Real Materials. J. Appl. Mech. 1984, 51, 725–728. [Google Scholar] [CrossRef]
  14. Zhao, C.; Dingy, X. Closed-form solutions to fractional-order linear differential equations. Front. Electr. Electr. Eng. China 2008, 3, 214–217. [Google Scholar] [CrossRef]
  15. Chen, S.; Liu, F.; Turner, I. Numerical inversion of the fractional derivative index and surface thermal flux for an anomalous heat conduction model in a multi-layer medium. Appl. Math. Model. 2018, 59, 514–526. [Google Scholar] [CrossRef] [Green Version]
  16. Deng, J. Higher-order stochastic averaging for a SDOF fractional viscoelastic system under bounded noise excitation. J. Frankl. Inst. 2017, 354, 7917–7945. [Google Scholar] [CrossRef]
  17. Das, S. Functional Fractional Calculus; Springer: Berlin/Heidelberg, Germany, 2011; pp. 110–122. [Google Scholar]
  18. Taleb, M.A.; Béthoux, O.; Godoy, E. Identification of a PEMFC fractional order model. Int. J. Hydrogen Energy 2017, 42, 1499–1509. [Google Scholar]
  19. Kumar, S.; Ghosh, A. Identification of fractional order model for a voltammetric E-tongue system. Measurement 2019, 150, 107064. [Google Scholar] [CrossRef]
  20. Zhang, Q.; Shang, Y.; Li, Y. A novel fractional variable-order equivalent circuit model and parameter identification of electric vehicle Li-ion batteries. ISA Trans. 2019, 97, 448–457. [Google Scholar] [CrossRef]
  21. Hammar, K.; Djamah, T.; Bettayeb, M. Fractional hammerstein system identification using particle swarm optimization. In Proceedings of the 2015 7th International Conference on Modelling, Identification and Control (ICMIC) 2015, Sousse, Tunisia, 18–20 December 2015; pp. 1–6. [Google Scholar]
  22. Hammar, K.; Djamah, T.; Bettayeb, M. Fractional Hammerstein system identification based on two decomposition principles. IFAC-PapersOnLine 2019, 52, 206–210. [Google Scholar] [CrossRef]
  23. Chetoui, M.; Aoun, M. Instrumental variables based methods for linear systems identification with fractional models in the EIV context. In Proceedings of the 2019 16th International Multi-Conference on Systems, Signals & Devices (SSD) 2019, Istanbul, Turkey, 21–24 March 2019; pp. 90–95. [Google Scholar]
  24. Wang, J.; Wei, Y.; Liu, T. Fully parametric identification for continuous time fractional order Hammerstein systems. J. Frankl. Inst. 2020, 357, 651–666. [Google Scholar] [CrossRef]
  25. Cui, R.; Wei, Y.; Chen, Y. An innovative parameter estimation for fractional-order systems in the presence of outliers. Nonlinear Dyn. 2017, 89, 453–463. [Google Scholar] [CrossRef]
  26. Tang, Y.; Liu, H.; Wang, W. Parameter identification of fractional order systems using block pulse functions. Signal Process. 2015, 107, 272–281. [Google Scholar] [CrossRef]
  27. Cui, R.; Wei, Y.; Cheng, S. An innovative parameter estimation for fractional order systems with impulse noise. ISA Trans. 2017, 120–129. [Google Scholar] [CrossRef] [PubMed]
  28. Zhao, Y.; Yan, L.; Chen, Y. Complete parametric identification of fractional order Hammerstein systems. In Proceedings of the ICFDA’14 International Conference on Fractional Differentiation and Its Applications 2014, Catania, Italy, 23–25 June 2014; pp. 1–6. [Google Scholar]
  29. Zeng, L.; Zhu, Z.; Shu, L. Subspace identification for fractional order Hammerstein systems based on instrumental variables. Int. J. Control Autom. Syst. 2012, 10, 947–953. [Google Scholar]
  30. Khanra, M.; Pal, J. Reduced Order Approximation of MIMO Fractional Order Systems. IEEE J. Emerg. Sel. Top. Circuits Syst. 2013, 3, 451–458. [Google Scholar] [CrossRef]
  31. Lakshmikantham, V.; Vatsala, A.S. Basic theory of fractional differential equations. Nonlinear Anal. Theory Methods Appl. 2008, 69, 2677–2682. [Google Scholar] [CrossRef]
  32. Chun, J.S.; Jung, H.K.; Hahn, S.Y. A study on comparison of optimization performances between immune algorithm and other heuristic algorithms. IEEE Trans. Magn. 1998, 34, 2972–2975. [Google Scholar]
  33. Castro, L.N.; Castro, D.L.; Timmis, J. Artificial Immune Systems: A New Computational Intelligence Approach; Springer Science & Business Media: London, UK, 2002. [Google Scholar]
  34. Chen, X.L.; Li, J.Q.; Han, Y.Y. Improved artificial immune algorithm for the flexible job shop problem with transportation time. Meas. Control 2020, 53, 2111–2128. [Google Scholar] [CrossRef]
  35. Lu, L.; Guo, Z.; Wang, Z. Parameter Estimation for a Capacitive Coupling Communication Channel Within a Metal Cabinet Based on a Modified Artificial Immune Algorithm. IEEE Access 2021, 75683–75698. [Google Scholar] [CrossRef]
  36. Samigulina, G.; Samigulina, Z. Diagnostics of industrial equipment and faults prediction based on modified algorithms of artificial immune systems. J. Intell. Manuf. 2021. [Google Scholar] [CrossRef]
  37. Yu, T.; Xie, M.; Li, X. Quantitative method of damage degree of power system network attack based on improved artificial immune algorithm. In Proceedings of the ICAIIS 2021: 2021 2nd International Conference on Artificial Intelligence and Information Systems, Chongqing, China, 28–30 May 2021. [Google Scholar]
  38. Xu, Y.; Zhang, J. Regional Integrated Energy Site Layout Optimization Based on Improved Artificial Immune Algorithm. Energies 2020, 13, 4381. [Google Scholar] [CrossRef]
  39. Eker, E.; Kayri, M.; Ekinci, S. A New Fusion of ASO with SA Algorithm and Its Applications to MLP Training and DC Motor Speed Control. Arab. J. Sci. Eng. 2021, 46, 3889–3911. [Google Scholar] [CrossRef]
  40. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  41. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  42. Abualigah, L.; Yousri, D.; Elaziz, M.A. Matlab Code of Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  43. Dzieliński, A.; Sierociuk, D. Stability of discrete fractional order state-space systems. IFAC Proc. Vol. 2006, 39, 505–510. [Google Scholar] [CrossRef]
  44. Ding, F.; Chen, H.; Li, M. Multi-innovation least squares identification methods based on the auxiliary model for MISO systems. Appl. Math. Comput. 2007, 187, 658–668. [Google Scholar] [CrossRef]
  45. Zhao, X.; Lin, Z.; Bo, F. Research on Automatic Generation Control with Wind Power Participation Based on Predictive Optimal 2-Degree-of-Freedom PID Strategy for Multi-area Interconnected Power System. Energies 2018, 11, 3325. [Google Scholar] [CrossRef] [Green Version]
  46. Ding, J.; Chen, J.; Lin, J. Particle filtering based parameter estimation for systems with output-error type model structures. J. Frankl. Inst. 2019, 356, 5521–5540. [Google Scholar] [CrossRef]
  47. Wang, L.; Liu, H.; Dai, L. Novel Method for Identifying Fault Location of Mixed Lines. Energies 2018, 11, 1529. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Comparison of stimulations. (a) The stimulations of AIAGS. (b) The stimulations of other algorithms.
Figure 2. AIAGS.
Figure 3. The convergence curves of AIAGS and other six algorithms.
Figure 4. MIMO Hammerstein model.
Figure 5. The MIMO Hammerstein model based on the auxiliary model.
Figure 6. The real output 1 and the identified output 1.
Figure 7. The real output 2 and the identified output 2.
Figure 8. The estimated fractional-order convergence curve.
Figure 9. The real output 1 and the identified output 1.
Figure 10. The real output 2 and the identified output 2.
Figure 11. The estimated fractional-order convergence curve.
Table 1. Benchmark functions.
Name | Formula | Range | f_min
Sphere | $F_1(x)=\sum_{i=1}^{D}x_i^2$ | [−20, 20] | 0
Schwefel 1.2 | $F_2(x)=\sum_{i=1}^{D}\left(\sum_{j=1}^{i}x_j\right)^2$ | [−100, 100] | 0
Rosenbrock | $F_3(x)=\sum_{i=1}^{D-1}\left[100\,(x_i^2-x_{i+1})^2+(x_i-1)^2\right]$ | [−30, 30] | 0
Step | $F_4(x)=\sum_{i=1}^{D}\left(\lfloor x_i+0.5\rfloor\right)^2$ | [−100, 100] | 0
Ackley | $F_5(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\left(\tfrac{1}{D}\sum_{i=1}^{D}\cos 2\pi x_i\right)+20+e$ | [−40, 40] | 0
Generalized penalized 1 | $F_6(x)=\tfrac{\pi}{D}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{D-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_D-1)^2\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, $y_i=1+\tfrac{x_i+1}{4}$ | [−50, 50] | 0
Generalized penalized 2 | $F_7(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{D-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_D-1)^2\left[1+\sin^2(2\pi x_D)\right]\right\}+\sum_{i=1}^{D}u(x_i,5,100,4)$ | [−50, 50] | 0
Shekel’s Foxholes | $F_8(x)=\left[\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right]^{-1}$ | [−70, 70] | 1
Hybrid function 4 (N = 4) | $F_9(x)$ | [−100, 100] | 1400
Hybrid function 7 (N = 5) | $F_{10}(x)$ | [−100, 100] | 1700
Composition function 1 (N = 3) | $F_{11}(x)$ | [−100, 100] | 2100
Composition function 4 (N = 4) | $F_{12}(x)$ | [−100, 100] | 2400
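The simpler benchmarks in Table 1 can be written down directly; below are the standard definitions of Sphere and Ackley, both with a global minimum of 0 at the origin. These are the conventional forms, not code from the paper.

```python
import numpy as np

def sphere(x):
    """Sphere: sum of squared components."""
    return float(np.sum(np.square(x)))

def ackley(x):
    """Ackley: global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x**2)))
                 - np.exp(np.mean(np.cos(2.0 * np.pi * x)))
                 + 20.0 + np.e)
```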
Table 2. Parameter settings.
Algorithm | Parameter Settings
AIAGS | δ = 0.1, sv = 0.2
AO | α = 0.5, δ = 0.5
IA | α = 2, β = 1, δ = 0.2, pm = 0.7
IAIA | α = 2, β = 1, δ = 0.613, pm = 0.7
MAIA | δ = 0.8, pm = 0.8, cr = 0.8
HHO | α = 0.5, δ = 0.5
Table 3. Comparison of results obtained for the benchmark functions.
  | AIAGS | AO | IA | IAIA | MAIA | HHO
F1
worst | 0 | 2.86 × 10^−71 | 0.000124 | 0.000145 | 0.030882 | 1.98 × 10^−46
best | 0 | 7.37 × 10^−76 | 7.65 × 10^−5 | 3.54 × 10^−5 | 0.001683 | 2.62 × 10^−58
avg | 0 | 5.74 × 10^−72 | 9.86 × 10^−5 | 7.71 × 10^−5 | 0.012509 | 1.99 × 10^−47
std | 0 | 1 × 10^−71 | 1.46 × 10^−5 | 3.28 × 10^−5 | 0.009914 | 5.95 × 10^−47
F2
worst | 0 | 2.82 × 10^−56 | 0.006578 | 0.022761 | 16.07011 | 1.71 × 10^−42
best | 0 | 1.72 × 10^−73 | 0.002606 | 0.013182 | 0.812125 | 1.15 × 10^−51
avg | 0 | 2.82 × 10^−57 | 0.003962 | 0.017273 | 4.565824 | 3.78 × 10^−43
std | 0 | 8.93 × 10^−57 | 0.001401 | 0.003087 | 4.947259 | 6.69 × 10^−43
F3
worst | 6.39 × 10^−7 | 0.001305 | 433.5283 | 696.2436 | 83.41411 | 0.008889
best | 5.5 × 10^−9 | 5 × 10^−6 | 0.99727 | 0.762353 | 4.4702 | 2.1 × 10^−5
avg | 9.99 × 10^−8 | 0.000319 | 80.76008 | 143.8289 | 29.75245 | 0.002238
std | 1.83 × 10^−7 | 0.000424 | 143.8194 | 240.8117 | 30.68641 | 0.002581
F4
worst | 0 | 6.97 × 10^−5 | 0.004139 | 0.00329 | 0.00329 | 9.33 × 10^−5
best | 0 | 2.3 × 10^−7 | 0.001612 | 0.00174 | 0.00174 | 7.93 × 10^−10
avg | 0 | 1.87 × 10^−5 | 0.003066 | 0.002567 | 0.002567 | 2.05 × 10^−5
std | 0 | 2.32 × 10^−5 | 0.00077 | 0.00053 | 0.00053 | 2.64 × 10^−5
F5
worst | 8.88 × 10^−16 | 8.88 × 10^−16 | 4.663342 | 3.223428 | 1.019824 | 8.88 × 10^−16
best | 8.88 × 10^−16 | 8.88 × 10^−16 | 0.017455 | 0.019081 | 0.137416 | 8.88 × 10^−16
avg | 8.88 × 10^−16 | 8.88 × 10^−16 | 1.139553 | 0.342006 | 0.437464 | 8.88 × 10^−16
std | 0 | 0 | 1.617355 | 1.012431 | 0.323219 | 0
F6
worst | 4.71 × 10^−32 | 3.84 × 10^−5 | 4.772913 | 6.250579 | 0.005788 | 2.07 × 10^−5
best | 4.71 × 10^−32 | 7.83 × 10^−8 | 1.16 × 10^−5 | 0.335882 | 0.000107 | 1.56 × 10^−7
avg | 4.71 × 10^−32 | 7.48 × 10^−6 | 1.984778 | 3.781554 | 0.001743 | 6.34 × 10^−6
std | 0 | 1.16 × 10^−5 | 1.830602 | 2.512286 | 0.002048 | 6.86 × 10^−6
F7
worst | 1.35 × 10^−32 | 0.000281 | 0.000101 | 8.19 × 10^−5 | 0.039677 | 0.000501
best | 1.35 × 10^−32 | 1.31 × 10^−6 | 5.21 × 10^−5 | 3.87 × 10^−5 | 0.002672 | 1.18 × 10^−7
avg | 1.35 × 10^−32 | 4.25 × 10^−5 | 8.01 × 10^−5 | 5.89 × 10^−5 | 0.017996 | 8.5 × 10^−5
std | 2.88 × 10^−48 | 8.69 × 10^−5 | 1.55 × 10^−5 | 1.58 × 10^−5 | 0.01293 | 0.000143
F8
worst | 0.998004 | 2.982105 | 1.992031 | 0.998004 | 0.999027 | 1.992031
best | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004
avg | 0.998004 | 1.593234 | 1.166875 | 0.998004 | 0.998107 | 1.196819
std | 2.34 × 10^−16 | 0.958412 | 0.362935 | 2.01 × 10^−15 | 0.000323 | 0.397606
F9
worst | 1528.366 | 5142.015 | 2215.496 | 2302.871 | 5755.439 | 4349.2
best | 1472.889 | 1557.776 | 1443.205 | 1428.962 | 1488.148 | 1450.039
avg | 1503.786 | 2462.484 | 1580.193 | 1655.434 | 2510.73 | 1833.8
std | 19.26844 | 978.4552 | 223.2629 | 287.2931 | 1264.939 | 843.7423
F10
worst | 1794.68 | 1838.131 | 1763.443 | 1782.14 | 2200.955 | 1840.59
best | 1744.138 | 1731.296 | 1722.813 | 1725.397 | 1766.414 | 1744.772
avg | 1774.579 | 1781.842 | 1738.947 | 1748.674 | 1898.936 | 1781.998
std | 17.10128 | 32.03933 | 10.8574 | 22.15373 | 122.3724 | 30.2191
F11
worst | 2260.104 | 2338.993 | 2264.487 | 2288.434 | 2319.733 | 2388.341
best | 2209.787 | 2204.09 | 2200.005 | 2200.003 | 2201.822 | 2205.34
avg | 2236.802 | 2272.26 | 2211.444 | 2211.249 | 2265.511 | 2272.888
std | 18.17673 | 56.04231 | 18.0651 | 25.8142 | 44.29724 | 71.44948
F12
worst | 2717.367 | 2778.692 | 2772.984 | 2762.261 | 2824.593 | 2857.503
best | 2521.748 | 2746.416 | 2500.074 | 2500.073 | 2505.906 | 2770.847
avg | 2626.946 | 2767.838 | 2669.676 | 2629.372 | 2710.07 | 2799.953
std | 61.88762 | 9.524766 | 114.739 | 111.2591 | 104.6337 | 28.32715
Table 4. The final identification results.
Method (and AMRLS) | a1,1,1 | a1,2,1 | b1,1,0 | b1,2,0 | a2,1,1 | a2,2,1 | b2,1,0 | b2,2,0 | c1,1 | c1,2 | c1,3 | c2,1 | c2,2 | c2,3 | α | α0
AIAGS | 5.127 | 3.120 | 3.976 | 2.959 | 6.118 | 2.017 | 3.996 | 4.970 | 0.501 | 0.289 | 0.095 | 0.400 | 0.200 | 0.100 | 0.299 | 0.333
AO | 4.619 | 3.325 | 3.887 | 3.465 | 5.377 | 1.760 | 4.196 | 5.272 | 0.509 | 0.297 | 0.098 | 0.404 | 0.198 | 0.098 | 0.275 | 0.391
HHO | 4.641 | 3.329 | 3.882 | 3.448 | 5.289 | 1.757 | 4.152 | 5.283 | 0.508 | 0.296 | 0.097 | 0.404 | 0.198 | 0.099 | 0.278 | 0.382
Table 5. The RQE and MSE of the results.
Method (and AMRLS) | AIAGS | AO | HHO
RQE | 0.1360 | 0.2931 | 0.2987
MSE | 0.0144 | 0.0944 | 0.1019
Table 6. The final identification results.
Method (and AMRLS) | a1,1,1 | a1,2,2 | a1,2,1 | b1,1,0 | b1,2,1 | b1,2,0 | a2,1,2 | a2,1,1 | a2,2,1 | b2,1,1 | b2,1,0 | b2,2,0 | c1,1 | c1,2 | c1,3 | c2,1 | c2,2 | c2,3 | α
AIAGS | 2.002 | 1.501 | 1.297 | 5.033 | 1.703 | 1.915 | 2.164 | 2.215 | 1.766 | 1.675 | 1.564 | 1.029 | 0.504 | 0.191 | 0.095 | 0.385 | 0.293 | 0.101 | 0.700
AO | 2.946 | 1.453 | 1.174 | 5.642 | 1.127 | 1.832 | 2.459 | 2.544 | 0.733 | 4.680 | 1.626 | 1.122 | 0.483 | 0.188 | 0.100 | 0.347 | 0.290 | 0.106 | 0.582
HHO | 3.182 | 1.42 | 1.197 | 5.697 | 1.004 | 1.808 | 2.463 | 2.605 | 0.691 | 4.962 | 1.631 | 1.132 | 0.482 | 0.188 | 0.100 | 0.344 | 0.288 | 0.106 | 0.570
Table 7. The RQE and MSE of the results.
Method (and AMRLS) | AIAGS | AO | HHO
RQE | 0.1819 | 0.6579 | 0.6935
MSE | 0.0351 | 0.5133 | 0.6626
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
