Article

Parameter Estimation Algorithms for Hammerstein Finite Impulse Response Moving Average Systems Using the Data Filtering Theory

1 School of Mathematics, Southeast University, Nanjing 210096, China
2 College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
3 Yonsei Frontier Lab, Yonsei University, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(3), 438; https://doi.org/10.3390/math10030438
Submission received: 1 December 2021 / Revised: 27 January 2022 / Accepted: 27 January 2022 / Published: 29 January 2022
(This article belongs to the Special Issue Impulsive Control Systems and Complexity II)

Abstract: This paper considers the parameter estimation problems of Hammerstein finite impulse response moving average (FIR–MA) systems. Based on the matrix transformation and the hierarchical identification principle, the Hammerstein FIR–MA system is recast into two models, and a decomposition-based recursive least-squares algorithm is deduced for estimating the parameters of these two models. To further improve the accuracy of the parameter estimation, a multi-innovation hierarchical least-squares algorithm based on the data filtering theory is proposed. Finally, a simulation example demonstrates the effectiveness of the proposed scheme.

1. Introduction

Identification methods for multivariable linear systems are by now well developed [1,2]. However, many systems in practical applications and industrial control are nonlinear multivariable systems [3,4]. Nonlinear multivariable systems differ from scalar or linear systems in that each output is usually controlled and affected by several inputs at the same time, so the parameter estimation of multivariable systems is more difficult [5,6,7]. In recent years, many scholars have studied the parameter estimation problem of nonlinear multivariable systems and proposed many identification methods. You et al. proposed an iterative algorithm for the identification of multiple-input single-output output-error systems with unknown time-delays, based on the basis pursuit denoising criterion and the auxiliary model identification idea [8]. Wang and Ding presented a novel recursive least-squares algorithm for multiple-input multiple-output (MIMO) systems with autoregressive moving average noise, employing the auxiliary model and the data filtering technique [9]. Lenka studied the asymptotic stability of equilibrium points of multivariable fractional-order systems based on the fractional comparison principle and the Laplace transform [10]. Liu et al. investigated parameter estimation problems for multivariable controlled autoregressive autoregressive moving average systems, and derived a partially coupled generalized extended stochastic gradient algorithm by using the auxiliary model [11].
Least-squares and stochastic gradient algorithms are two common methods for parameter estimation [12,13,14,15]. Compared with the stochastic gradient algorithm, the recursive least-squares algorithm has a faster convergence rate but a higher computational effort [16]. The recursive least-squares algorithm needs to compute the inverse of the covariance matrix during identification; when the dimension of the covariance matrix is high, the computational complexity is very large. To reduce this burden, the decomposition technique can effectively solve the problem of a large computational load in large-scale system identification [17,18,19]. The decomposition technique decomposes the original system into multiple subsystems and identifies the parameters of each subsystem separately [20,21]. For example, Wang et al. presented a hierarchical least-squares iterative algorithm to handle the difficulty that the identification model contains unmeasurable variables and noise terms in the information matrix [22]. Ji et al. derived a hierarchical least-squares algorithm for two-input Hammerstein finite impulse response systems based on the hierarchical identification principle and the multi-innovation theory [23]. Wang et al. transformed a MIMO Hammerstein system with different types of coefficients into an over-parametrisation regression identification model, and then proposed a hierarchical extended stochastic gradient algorithm [24].
In order to reduce the error of parameter estimation, the data filtering technique can be used to get rid of the outliers and weaken the influence of noises to better extract useful information in the data [25,26]. Chen et al. combined the maximum likelihood principle, the decomposition technique and the data filtering technique, to present a maximum likelihood generalized extended gradient algorithm, and a data filtering-based maximum likelihood extended gradient algorithm [27]. Mao et al. presented a data filtering multi-innovation stochastic gradient identification algorithm for Hammerstein output-error autoregressive systems by means of the multi-innovation identification theory [28]. By eliminating the state variables in the systems, Li et al. proposed a filtering-based least-squares iterative algorithm for estimating the parameters of bilinear systems with colored noises [29]. The filtering technique can also be applied to signal processing and communication [30,31,32], and neural networks [33,34].
The parameter estimation of system models is important for control system analysis and design. The parameters of the models can be estimated by using identification methods [35,36,37] such as the hierarchical algorithms [38,39]. This paper studies the parameter estimation problems of two-input two-output (TITO) Hammerstein finite impulse response moving average (FIR–MA) systems. A Hammerstein nonlinear system consists of a nonlinear block followed by a linear block, and its identification faces difficulties such as high computational complexity and parameter identifiability. In this paper, the decomposition technique and the data filtering technique are employed to address these problems. The main contributions of this paper are as follows.
  • Based on the decomposition technique, we decompose the Hammerstein system into two models, each of which is expressed as a regression form in the parameters of either the nonlinear part or the linear part, and we propose a hierarchical least-squares algorithm.
  • By applying the data filtering technique, the input–output data are filtered, and a filtering-based hierarchical least-squares algorithm is presented for Hammerstein finite impulse response moving average systems to improve the accuracy of parameter estimation.
Briefly, the structure of this paper is organized as follows. Section 2 describes a two-input two-output Hammerstein FIR–MA system. Section 3 derives a hierarchical least-squares algorithm for the TITO Hammerstein system. Section 4 derives a filtering-based hierarchical least-squares algorithm for the TITO Hammerstein system. Section 5 provides an illustrative example to show the effectiveness of the proposed algorithms. Finally, we offer some concluding remarks in Section 6.

2. System Description and Identification Model

Let us define some symbols. “$A =: B$” or “$B := A$” stands for “$A$ is defined as $B$”. The superscript $T$ denotes the matrix/vector transpose; the norm of the matrix $X$ is defined by $\|X\|^2 := \mathrm{tr}[XX^T]$. The symbol $I_n$ stands for an identity matrix of size $n \times n$, and $\mathbf{1}_n$ represents an $n$-dimensional column vector whose elements are all 1.
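The norm convention above can be checked numerically; the following Python snippet (ours, not from the paper) verifies that $\mathrm{tr}[XX^T]$ coincides with the squared Frobenius norm for an arbitrary matrix.

```python
import numpy as np

# The paper's matrix norm: ||X||^2 := tr[X X^T], i.e. the squared Frobenius norm.
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

norm_sq_trace = np.trace(X @ X.T)            # tr[X X^T] = 1 + 4 + 9 + 16 = 30
norm_sq_frob = np.linalg.norm(X, "fro")**2   # squared Frobenius norm

assert np.isclose(norm_sq_trace, norm_sq_frob)
```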
Consider the following Hammerstein FIR–MA system, depicted in Figure 1,
$$y(t) = \theta(z)\bar u(t) + \alpha(z)v(t),$$
where $\bar u(t) := [\bar u_1(t), \bar u_2(t)]^T \in \mathbb{R}^2$ is the unmeasurable nonlinear inner input, $y(t) \in \mathbb{R}^2$ is the system output, $v(t) \in \mathbb{R}^2$ is a stochastic white noise vector with zero mean, and $\theta(z) := [\theta_1(z), \theta_2(z)]$ and $\alpha(z)$ are two polynomials in the unit backward shift operator $z^{-1}$ [$z^{-1}y(t) = y(t-1)$], defined by
$$\theta_i(z) := \theta_{i,1}z^{-1} + \theta_{i,2}z^{-2} + \cdots + \theta_{i,n}z^{-n},\quad \theta_{i,j} \in \mathbb{R}^2,\quad i = 1, 2,$$
$$\alpha(z) := 1 + \alpha_1 z^{-1} + \alpha_2 z^{-2} + \cdots + \alpha_n z^{-n}.$$
The unknown nonlinear input $\bar u_i(t)$ is assumed to be a static nonlinear function of the known basis $d_{i,k}$ with $\gamma_{i,k}$ as its parameters:
$$\bar u_i(t) = d_i[u_i(t)] = \sum_{k=1}^{s}\gamma_{i,k}d_{i,k}[u_i(t)].$$
Assume that the orders $n$ and $s$ are known, and that $\bar u(t) = 0$, $y(t) = 0$ and $v(t) = 0$ for $t \leq 0$.
The purpose of this paper is to estimate the parameters $\theta_{i,j}$, $\gamma_{i,k}$ and $\alpha_i$.
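To make the system description concrete, the following Python sketch simulates a TITO Hammerstein FIR–MA system of this form. The basis functions ($u$ and $u^2$), the parameter values, and all variable names are illustrative assumptions for demonstration, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, s = 2, 2          # polynomial order and number of basis functions (assumed known)
T = 5                # number of samples to generate

# Assumed illustrative values: theta_{i,j} in R^2, gamma_i in R^s, alpha_j scalars.
theta = {1: [np.array([0.3, 0.1]), np.array([0.2, 0.4])],
         2: [np.array([0.5, 0.2]), np.array([0.1, 0.3])]}
gamma = {1: np.array([0.35, -0.40]), 2: np.array([0.14, -0.34])}
alpha = np.array([0.5, 0.1])

def d(i, u):
    """Basis functions d_{i,k}: here u and u^2 for both channels (an assumption)."""
    return np.array([u, u**2])

u = rng.standard_normal((T + 1, 2))        # measurable inputs u_1, u_2
v = 0.1 * rng.standard_normal((T + 1, 2))  # zero-mean white noise

ubar = np.zeros((T + 1, 2))                # unmeasurable inner signals ubar_i
y = np.zeros((T + 1, 2))
for t in range(1, T + 1):
    # ubar_i(t) = sum_k gamma_{i,k} d_{i,k}[u_i(t)]
    for i in (1, 2):
        ubar[t, i - 1] = gamma[i] @ d(i, u[t, i - 1])
    # y(t) = sum_i sum_j theta_{i,j} ubar_i(t-j) + v(t) + sum_j alpha_j v(t-j)
    y[t] = v[t]
    for j in range(1, n + 1):
        if t - j >= 0:
            y[t] += alpha[j - 1] * v[t - j]
            for i in (1, 2):
                y[t] += theta[i][j - 1] * ubar[t - j, i - 1]
```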

3. The Hierarchical Least-Squares Algorithm

From Equation (1), we have
$$y(t) = \theta_1(z)\bar u_1(t) + \theta_2(z)\bar u_2(t) + \alpha(z)v(t).$$
The first two terms on the right-hand side can be expressed as
$$\theta_i(z)\bar u_i(t) = [\theta_{i,1}z^{-1} + \theta_{i,2}z^{-2} + \cdots + \theta_{i,n}z^{-n}]\bar u_i(t) = \theta_{i,1}\bar u_i(t-1) + \theta_{i,2}\bar u_i(t-2) + \cdots + \theta_{i,n}\bar u_i(t-n)$$
$$= [\theta_{i,1}, \theta_{i,2}, \ldots, \theta_{i,n}]\begin{bmatrix}\bar u_i(t-1)\\ \bar u_i(t-2)\\ \vdots\\ \bar u_i(t-n)\end{bmatrix} = [\theta_{i,1}, \theta_{i,2}, \ldots, \theta_{i,n}]\begin{bmatrix}\sum_{k=1}^{s}\gamma_{i,k}d_{i,k}[u_i(t-1)]\\ \sum_{k=1}^{s}\gamma_{i,k}d_{i,k}[u_i(t-2)]\\ \vdots\\ \sum_{k=1}^{s}\gamma_{i,k}d_{i,k}[u_i(t-n)]\end{bmatrix}.$$
Define
$$\gamma_i := \begin{bmatrix}\gamma_{i,1}\\ \gamma_{i,2}\\ \vdots\\ \gamma_{i,s}\end{bmatrix} \in \mathbb{R}^s,\quad d_i(t) := [d_{i,1}[u_i(t)], d_{i,2}[u_i(t)], \ldots, d_{i,s}[u_i(t)]] \in \mathbb{R}^{1\times s},\quad i = 1, 2.$$
Then Equation (5) can be rewritten as
$$y(t) = [\theta_{1,1}, \theta_{1,2}, \ldots, \theta_{1,n}]\begin{bmatrix}d_1[u_1(t-1)]\gamma_1\\ d_1[u_1(t-2)]\gamma_1\\ \vdots\\ d_1[u_1(t-n)]\gamma_1\end{bmatrix} + [\theta_{2,1}, \theta_{2,2}, \ldots, \theta_{2,n}]\begin{bmatrix}d_2[u_2(t-1)]\gamma_2\\ d_2[u_2(t-2)]\gamma_2\\ \vdots\\ d_2[u_2(t-n)]\gamma_2\end{bmatrix} + \alpha(z)v(t).$$
Define
$$\theta_i := [\theta_{i,1}, \theta_{i,2}, \ldots, \theta_{i,n}] \in \mathbb{R}^{2\times n},\quad D_i(t) := \begin{bmatrix}d_i[u_i(t-1)]\\ d_i[u_i(t-2)]\\ \vdots\\ d_i[u_i(t-n)]\end{bmatrix} = \begin{bmatrix}d_{i,1}[u_i(t-1)] & d_{i,2}[u_i(t-1)] & \cdots & d_{i,s}[u_i(t-1)]\\ d_{i,1}[u_i(t-2)] & d_{i,2}[u_i(t-2)] & \cdots & d_{i,s}[u_i(t-2)]\\ \vdots & \vdots & & \vdots\\ d_{i,1}[u_i(t-n)] & d_{i,2}[u_i(t-n)] & \cdots & d_{i,s}[u_i(t-n)]\end{bmatrix} \in \mathbb{R}^{n\times s}.$$
Then, Equation (7) can be rewritten in the following matrix form:
$$y(t) = [\theta_{1,1}, \ldots, \theta_{1,n}]D_1(t)\gamma_1 + [\theta_{2,1}, \ldots, \theta_{2,n}]D_2(t)\gamma_2 + \alpha(z)v(t) = \theta_1 D_1(t)\gamma_1 + \theta_2 D_2(t)\gamma_2 + \alpha(z)v(t) = \sum_{i=1}^{2}\theta_i D_i(t)\gamma_i + \alpha(z)v(t).$$
Notice that in the identification model (8), it is difficult to simultaneously estimate the matrix θ i and the vector γ i . Here we use the hierarchical identification principle to transform the Hammerstein identification model into two different forms.
Define
$$D_\gamma(t) := \begin{bmatrix}D_1(t)\gamma_1\\ D_2(t)\gamma_2\end{bmatrix} \in \mathbb{R}^{2n},\quad \theta := [\theta_1, \theta_2] \in \mathbb{R}^{2\times 2n},\quad D_\theta(t) := [\theta_1 D_1(t), \theta_2 D_2(t)] \in \mathbb{R}^{2\times 2s},$$
$$\gamma := \begin{bmatrix}\gamma_1\\ \gamma_2\end{bmatrix} \in \mathbb{R}^{2s},\quad \alpha := [\alpha_1, \alpha_2, \ldots, \alpha_n]^T \in \mathbb{R}^n,\quad \phi_s(t) := [v(t-1), v(t-2), \ldots, v(t-n)]^T \in \mathbb{R}^{n\times 2}.$$
The identification model in (8) can be transformed into the following two forms:
$$S_1:\ y(t) = \theta D_\gamma(t) + \phi_s^T(t)\alpha + v(t),$$
$$S_2:\ y(t) = D_\theta(t)\gamma + \phi_s^T(t)\alpha + v(t).$$
The parameter matrices $\theta$, $\alpha$ and $\gamma$ to be identified are included in these two new models. Then, define two cost functions with respect to the parameter matrices $\theta$, $\alpha$ and $\gamma$:
$$J_1(\theta, \alpha) := \sum_{l=1}^{t}\|y(l) - \theta D_\gamma(l) - \phi_s^T(l)\alpha\|^2,\quad J_2(\gamma) := \sum_{l=1}^{t}\|y(l) - D_\theta(l)\gamma - \phi_s^T(l)\alpha\|^2.$$
There are several difficulties in minimizing these two cost functions to obtain the parameter estimates: the information matrix $D_\theta(t)$ contains the unknown parameter matrix $\theta_i$, the information vector $D_\gamma(t)$ contains the unknown parameter vector $\gamma_i$, and the information matrix $\phi_s(t)$ contains the unmeasurable noise terms $v(t-i)$. These problems are solved by replacing the unknown parameter matrix $\theta_i$ with its estimate $\hat\theta_i(t)$, the unknown parameter vector $\gamma_i$ with its estimate $\hat\gamma_i(t)$, and the unmeasurable $v(t-i)$ with its estimate $\hat v(t-i)$.
The estimates of $\phi_s(t)$, $D_\gamma(t)$ and $D_\theta(t)$ can be written as
$$\hat\phi_s(t) := [\hat v(t-1), \hat v(t-2), \ldots, \hat v(t-n)]^T,\quad \hat D_\gamma(t) := \begin{bmatrix}D_1(t)\hat\gamma_1(t-1)\\ D_2(t)\hat\gamma_2(t-1)\end{bmatrix},\quad \hat D_\theta(t) := [\hat\theta_1(t)D_1(t), \hat\theta_2(t)D_2(t)].$$
Then, the estimate $\hat v(t)$ can be computed by
$$\hat v(t) = y(t) - \hat\theta(t)\hat D_\gamma(t) - \hat\phi_s^T(t)\hat\alpha(t).$$
Then, we obtain the hierarchical least-squares (HLS) parameter estimation algorithm for the TITO Hammerstein FIR–MA models as follows:
$$\hat\theta^T(t) = \hat\theta^T(t-1) + L_\theta(t)[y(t) - \hat\theta(t-1)\hat D_\gamma(t) - \hat\phi_s^T(t)\hat\alpha(t-1)]^T, \tag{11}$$
$$L_\theta(t) = P_\theta(t-1)\hat D_\gamma(t)[1 + \hat D_\gamma^T(t)P_\theta(t-1)\hat D_\gamma(t)]^{-1}, \tag{12}$$
$$P_\theta(t) = [I_{2n} - L_\theta(t)\hat D_\gamma^T(t)]P_\theta(t-1),\quad P_\theta(0) = p_0 I_{2n}, \tag{13}$$
$$\hat\alpha(t) = \hat\alpha(t-1) + L_\alpha(t)[y(t) - \hat\theta(t)\hat D_\gamma(t) - \hat\phi_s^T(t)\hat\alpha(t-1)], \tag{14}$$
$$L_\alpha(t) = P_\alpha(t-1)\hat\phi_s(t)[I_2 + \hat\phi_s^T(t)P_\alpha(t-1)\hat\phi_s(t)]^{-1}, \tag{15}$$
$$P_\alpha(t) = [I_n - L_\alpha(t)\hat\phi_s^T(t)]P_\alpha(t-1),\quad P_\alpha(0) = p_0 I_n, \tag{16}$$
$$\hat\gamma(t) = \hat\gamma(t-1) + L_\gamma(t)[y(t) - \hat D_\theta(t)\hat\gamma(t-1) - \hat\phi_s^T(t)\hat\alpha(t-1)], \tag{17}$$
$$L_\gamma(t) = P_\gamma(t-1)\hat D_\theta^T(t)[I_2 + \hat D_\theta(t)P_\gamma(t-1)\hat D_\theta^T(t)]^{-1}, \tag{18}$$
$$P_\gamma(t) = [I_{2s} - L_\gamma(t)\hat D_\theta(t)]P_\gamma(t-1),\quad P_\gamma(0) = p_0 I_{2s}, \tag{19}$$
$$d_i(t) = [d_{i,1}[u_i(t)], d_{i,2}[u_i(t)], \ldots, d_{i,s}[u_i(t)]] \in \mathbb{R}^{1\times s},\quad i = 1, 2, \tag{20}$$
$$D_i(t) = \begin{bmatrix}d_i[u_i(t-1)]\\ d_i[u_i(t-2)]\\ \vdots\\ d_i[u_i(t-n)]\end{bmatrix} \in \mathbb{R}^{n\times s}, \tag{21}$$
$$\hat D_\gamma(t) = \begin{bmatrix}D_1(t)\hat\gamma_1(t-1)\\ D_2(t)\hat\gamma_2(t-1)\end{bmatrix}, \tag{22}$$
$$\hat D_\theta(t) = [\hat\theta_1(t)D_1(t), \hat\theta_2(t)D_2(t)], \tag{23}$$
$$\hat\phi_s(t) = [\hat v(t-1), \hat v(t-2), \ldots, \hat v(t-n)]^T, \tag{24}$$
$$\hat v(t) = y(t) - \hat\theta(t)\hat D_\gamma(t) - \hat\phi_s^T(t)\hat\alpha(t), \tag{25}$$
$$\hat\theta_i(t) = [\hat\theta_{i,1}(t), \hat\theta_{i,2}(t), \ldots, \hat\theta_{i,n}(t)], \tag{26}$$
$$\hat\gamma_i(t) = [\hat\gamma_{i,1}(t), \hat\gamma_{i,2}(t), \ldots, \hat\gamma_{i,s}(t)]^T, \tag{27}$$
$$\hat\theta(t) = [\hat\theta_1(t), \hat\theta_2(t)], \tag{28}$$
$$\hat\gamma(t) = [\hat\gamma_1^T(t), \hat\gamma_2^T(t)]^T. \tag{29}$$
The steps of computing the parameter estimates $\hat\theta(t)$, $\hat\alpha(t)$ and $\hat\gamma(t)$ by the HLS algorithm in (11)–(29) are summarised as follows.
  • Set the initial values: let $t = 1$, $P_\theta(0) = p_0 I_{2n}$, $P_\alpha(0) = p_0 I_n$, $P_\gamma(0) = p_0 I_{2s}$, $\hat\theta(0) = \mathbf{1}_{2\times 2n}/p_0$, $\hat\alpha(0) = \mathbf{1}_n/p_0$, $\hat\gamma(0) = \mathbf{1}_{2s}/p_0$, $p_0 = 10^6$; $u(t) = 0$ and $y(t) = 0$ for $t \leq 0$; and set a small positive number $\varepsilon$.
  • Collect the input and output data $u(t)$ and $y(t)$, and form $D_i(t)$ by (21) and $\hat\phi_s(t)$ by (24).
  • Compute $\hat D_\gamma(t)$, $L_\theta(t)$ and $P_\theta(t)$ by (22), (12) and (13), and update the parameter estimate $\hat\theta(t)$ by (11).
  • Compute $\hat D_\theta(t)$, $L_\gamma(t)$ and $P_\gamma(t)$ by (23), (18) and (19), and update the parameter estimate $\hat\gamma(t)$ by (17).
  • Compute $L_\alpha(t)$ and $P_\alpha(t)$ by (15) and (16), and update the parameter estimate $\hat\alpha(t)$ by (14).
  • Compute the noise estimate $\hat v(t)$ by (25).
  • Compare $\hat\theta(t)$ with $\hat\theta(t-1)$ and $\hat\gamma(t)$ with $\hat\gamma(t-1)$: if $\|\hat\theta(t) - \hat\theta(t-1)\| < \varepsilon$ and $\|\hat\gamma(t) - \hat\gamma(t-1)\| < \varepsilon$, terminate the recursion and take $\hat\theta(t)$, $\hat\alpha(t)$ and $\hat\gamma(t)$ as the estimates; otherwise, increase $t$ by 1 and go to step 2.
The flowchart of computing the parameter estimates θ ^ ( t ) , α ^ ( t ) and γ ^ ( t ) in the HLS algorithm is shown in Figure 2.
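As an informal illustration (not part of the original paper), the $\theta$-subalgorithm (11)–(13) is a standard recursive least-squares update, and one step of it can be sketched in Python as follows; all variable names and the synthetic regressor values are assumptions for demonstration only.

```python
import numpy as np

# One recursion of the theta-subalgorithm, sketched under the paper's shapes:
# theta_hat in R^{2 x 2n}, D_gamma_hat in R^{2n} (built from D_i(t) and
# gamma_hat(t-1)), phi_hat in R^{n x 2}, alpha_hat in R^n.
n = 2
p0 = 1e6

theta_hat = np.ones((2, 2 * n)) / p0   # initial estimate 1_{2x2n}/p0
P_theta = p0 * np.eye(2 * n)           # initial covariance p0 * I_{2n}
alpha_hat = np.ones(n) / p0

def hls_theta_step(y_t, D_gamma_hat, phi_hat, theta_hat, P_theta, alpha_hat):
    """RLS update of theta_hat given the current regressors (innovation in R^2)."""
    innov = y_t - theta_hat @ D_gamma_hat - phi_hat.T @ alpha_hat   # in R^2
    denom = 1.0 + D_gamma_hat @ P_theta @ D_gamma_hat               # scalar
    L = (P_theta @ D_gamma_hat) / denom                             # gain in R^{2n}
    theta_hat = theta_hat + np.outer(innov, L)                      # rank-1 update
    P_theta = (np.eye(len(D_gamma_hat)) - np.outer(L, D_gamma_hat)) @ P_theta
    return theta_hat, P_theta

# One illustrative step with synthetic regressors.
y_t = np.array([0.5, -0.2])
D_gamma_hat = np.array([0.3, -0.1, 0.2, 0.4])
phi_hat = np.zeros((n, 2))
theta_hat, P_theta = hls_theta_step(y_t, D_gamma_hat, phi_hat,
                                    theta_hat, P_theta, alpha_hat)
```

The $\gamma$- and $\alpha$-subalgorithms follow the same pattern with their own gains and covariance matrices.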

4. The Convergence Analysis of the Hierarchical Least-Squares Algorithm

The convergence analysis of the proposed hierarchical identification principle-based least-squares algorithm for the TITO Hammerstein system proceeds as follows. Assume that the estimated information vectors $\hat D_\gamma(t)$, $\hat D_\theta(t)$ and $\hat\phi_s(t)$ are persistently exciting, i.e., there exist constants $M_i > 0$ and an integer $N$ such that for $t > N$ the following strong persistent excitation conditions hold:
$$(A1)\quad M_1 I \leq \frac{1}{N}\sum_{j=0}^{N-1}\hat D_\gamma(t-j)\hat D_\gamma^T(t-j) \leq M_2 I,\ \text{a.s.},$$
$$(A2)\quad M_3 I \leq \frac{1}{N}\sum_{j=0}^{N-1}\hat D_\theta(t-j)\hat D_\theta^T(t-j) \leq M_4 I,\ \text{a.s.},$$
$$(A3)\quad M_5 I \leq \frac{1}{N}\sum_{j=0}^{N-1}\hat\phi_s(t-j)\hat\phi_s^T(t-j) \leq M_6 I,\ \text{a.s.}$$
Theorem 1.
For the hierarchical least-squares algorithm in (11)–(29), suppose that (A1)–(A3) hold. Additionally, let $\digamma_t = \sigma(v(t), v(t-1), v(t-2), \ldots)$ be the $\sigma$-algebra generated by $v(t)$, and assume that $\{v(t), \digamma_t\}$ is a martingale difference sequence on a probability space $(\Omega, \digamma, P)$ [24]. The sequence $v(t)$ satisfies
$$(B1)\quad \mathrm{E}[v(t)\,|\,\digamma_{t-1}] = 0,\ \text{a.s.},$$
$$(B2)\quad \mathrm{E}[\|v(t)\|^2\,|\,\digamma_{t-1}] = \sigma^2(t) \leq \sigma^2 < \infty,\ \text{a.s.},$$
$$(B3)\quad \limsup_{t\to\infty}\frac{1}{t}\sum_{i=1}^{t}\|v(i)\|^2 \leq \sigma^2 < \infty,\ \text{a.s.}$$
Then, the parameter estimates $\hat\theta_i(t)$, $\hat\gamma(t)$ and $\hat\theta(t)$ converge consistently to the true parameters $\theta_i$, $\gamma$ and $\theta$.

5. The Filtering Based Recursive Least-Squares Algorithm

This section proposes the filtering-based recursive least-squares algorithm for the two-input two-output system with MA noise by using the data filtering technique. The input–output data are filtered through the rational polynomial $\alpha(z)$. Multiplying both sides of Equation (8) by $\frac{1}{\alpha(z)}$ yields
$$\frac{1}{\alpha(z)}y(t) = \frac{1}{\alpha(z)}\theta_1 D_1(t)\gamma_1 + \frac{1}{\alpha(z)}\theta_2 D_2(t)\gamma_2 + v(t).$$
Define the filtered output and the filtered information matrix as
$$y_f(t) := \frac{1}{\alpha(z)}y(t) = y(t) + [1 - \alpha(z)]y_f(t),$$
$$D_{f,i}(t) := \frac{1}{\alpha(z)}D_i(t) = D_i(t) + [1 - \alpha(z)]D_{f,i}(t).$$
Then, Equation (32) can be rewritten as
$$y_f(t) = \theta_1 D_{f,1}(t)\gamma_1 + \theta_2 D_{f,2}(t)\gamma_2 + v(t).$$
Substituting (33) into (35) gives
$$y(t) = \theta_1 D_{f,1}(t)\gamma_1 + \theta_2 D_{f,2}(t)\gamma_2 + [\alpha(z) - 1]y_f(t) + v(t).$$
Define
$$D_{f,\gamma}(t) := \begin{bmatrix}D_{f,1}(t)\gamma_1\\ D_{f,2}(t)\gamma_2\end{bmatrix} \in \mathbb{R}^{2n},\quad D_{f,\theta}(t) := [\theta_1 D_{f,1}(t), \theta_2 D_{f,2}(t)] \in \mathbb{R}^{2\times 2s},\quad \phi_f(t) := [y_f(t-1), y_f(t-2), \ldots, y_f(t-n)]^T \in \mathbb{R}^{n\times 2}.$$
Then, Equation (36) can be transformed into the following two forms:
$$S_{f,1}:\ y(t) = \theta D_{f,\gamma}(t) + \phi_f^T(t)\alpha + v(t),$$
$$S_{f,2}:\ y(t) = D_{f,\theta}(t)\gamma + \phi_f^T(t)\alpha + v(t).$$
The parameter matrices θ , α and γ to be identified are included in these two new models. Then, define two cost functions about the parameter matrices θ , α and γ :
$$J_3(\theta, \alpha) := \sum_{l=1}^{t}\|y(l) - \theta D_{f,\gamma}(l) - \phi_f^T(l)\alpha\|^2,\quad J_4(\gamma) := \sum_{l=1}^{t}\|y(l) - D_{f,\theta}(l)\gamma - \phi_f^T(l)\alpha\|^2.$$
There are several difficulties in minimizing these two cost functions to obtain the parameter estimates: the information vector $D_{f,\gamma}(t)$ contains the unknown parameter vector $\gamma_i$ and the unmeasurable $D_{f,i}(t)$, the information matrix $D_{f,\theta}(t)$ contains the unknown parameter matrix $\theta_i$ and the unmeasurable $D_{f,i}(t)$, and the information matrix $\phi_f(t)$ contains the unmeasurable filtered outputs $y_f(t-i)$. These problems are solved by replacing the unknown parameter matrix $\theta_i$ with its estimate $\hat\theta_i(t)$, the unknown parameter vector $\gamma_i$ with its estimate $\hat\gamma_i(t)$, and the unmeasurable $D_{f,i}(t)$ and $y_f(t-i)$ with their estimates $\hat D_{f,i}(t)$ and $\hat y_f(t-i)$.
The estimate of $D_{f,i}(t)$ can be computed recursively by
$$\hat D_{f,i}(t) := D_i(t) - \sum_{j=1}^{n}\hat\alpha_j(t-1)\hat D_{f,i}(t-j).$$
Then, the estimates of $\phi_f(t)$, $D_{f,\gamma}(t)$ and $D_{f,\theta}(t)$ can be written as
$$\hat\phi_f(t) := [\hat y_f(t-1), \hat y_f(t-2), \ldots, \hat y_f(t-n)]^T,\quad \hat D_{f,\gamma}(t) := \begin{bmatrix}\hat D_{f,1}(t)\hat\gamma_1(t-1)\\ \hat D_{f,2}(t)\hat\gamma_2(t-1)\end{bmatrix},\quad \hat D_{f,\theta}(t) := [\hat\theta_1(t)\hat D_{f,1}(t), \hat\theta_2(t)\hat D_{f,2}(t)].$$
Then, the estimate $\hat y_f(t)$ can be computed by
$$\hat y_f(t) = y(t) - \hat\phi_f^T(t)\hat\alpha(t).$$
Then, we obtain the filtering-based hierarchical least-squares (F–HLS) parameter estimation algorithm for the TITO Hammerstein FIR–MA models as follows:
$$\hat\theta^T(t) = \hat\theta^T(t-1) + L_\theta(t)[y(t) - \hat\theta(t-1)\hat D_{f,\gamma}(t) - \hat\phi_f^T(t)\hat\alpha(t-1)]^T, \tag{39}$$
$$L_\theta(t) = P_\theta(t-1)\hat D_{f,\gamma}(t)[1 + \hat D_{f,\gamma}^T(t)P_\theta(t-1)\hat D_{f,\gamma}(t)]^{-1}, \tag{40}$$
$$P_\theta(t) = [I_{2n} - L_\theta(t)\hat D_{f,\gamma}^T(t)]P_\theta(t-1),\quad P_\theta(0) = p_0 I_{2n}, \tag{41}$$
$$\hat\alpha(t) = \hat\alpha(t-1) + L_\alpha(t)[y(t) - \hat\theta(t)\hat D_{f,\gamma}(t) - \hat\phi_f^T(t)\hat\alpha(t-1)], \tag{42}$$
$$L_\alpha(t) = P_\alpha(t-1)\hat\phi_f(t)[I_2 + \hat\phi_f^T(t)P_\alpha(t-1)\hat\phi_f(t)]^{-1}, \tag{43}$$
$$P_\alpha(t) = [I_n - L_\alpha(t)\hat\phi_f^T(t)]P_\alpha(t-1),\quad P_\alpha(0) = p_0 I_n, \tag{44}$$
$$\hat\gamma(t) = \hat\gamma(t-1) + L_\gamma(t)[y(t) - \hat D_{f,\theta}(t)\hat\gamma(t-1) - \hat\phi_f^T(t)\hat\alpha(t-1)], \tag{45}$$
$$L_\gamma(t) = P_\gamma(t-1)\hat D_{f,\theta}^T(t)[I_2 + \hat D_{f,\theta}(t)P_\gamma(t-1)\hat D_{f,\theta}^T(t)]^{-1}, \tag{46}$$
$$P_\gamma(t) = [I_{2s} - L_\gamma(t)\hat D_{f,\theta}(t)]P_\gamma(t-1),\quad P_\gamma(0) = p_0 I_{2s}, \tag{47}$$
$$d_i(t) = [d_{i,1}[u_i(t)], d_{i,2}[u_i(t)], \ldots, d_{i,s}[u_i(t)]] \in \mathbb{R}^{1\times s},\quad i = 1, 2, \tag{48}$$
$$D_i(t) = \begin{bmatrix}d_i[u_i(t-1)]\\ d_i[u_i(t-2)]\\ \vdots\\ d_i[u_i(t-n)]\end{bmatrix} \in \mathbb{R}^{n\times s}, \tag{49}$$
$$\hat D_{f,i}(t) = D_i(t) - \sum_{j=1}^{n}\hat\alpha_j(t-1)\hat D_{f,i}(t-j), \tag{50}$$
$$\hat D_{f,\gamma}(t) = \begin{bmatrix}\hat D_{f,1}(t)\hat\gamma_1(t-1)\\ \hat D_{f,2}(t)\hat\gamma_2(t-1)\end{bmatrix}, \tag{51}$$
$$\hat D_{f,\theta}(t) = [\hat\theta_1(t)\hat D_{f,1}(t), \hat\theta_2(t)\hat D_{f,2}(t)], \tag{52}$$
$$\hat\phi_f(t) = [\hat y_f(t-1), \hat y_f(t-2), \ldots, \hat y_f(t-n)]^T, \tag{53}$$
$$\hat y_f(t) = y(t) - \hat\phi_f^T(t)\hat\alpha(t), \tag{54}$$
$$\hat\theta_i(t) = [\hat\theta_{i,1}(t), \hat\theta_{i,2}(t), \ldots, \hat\theta_{i,n}(t)], \tag{55}$$
$$\hat\gamma_i(t) = [\hat\gamma_{i,1}(t), \hat\gamma_{i,2}(t), \ldots, \hat\gamma_{i,s}(t)]^T, \tag{56}$$
$$\hat\theta(t) = [\hat\theta_1(t), \hat\theta_2(t)], \tag{57}$$
$$\hat\gamma(t) = [\hat\gamma_1^T(t), \hat\gamma_2^T(t)]^T. \tag{58}$$
The steps of computing the parameter estimates $\hat\theta(t)$, $\hat\alpha(t)$ and $\hat\gamma(t)$ by the F–HLS algorithm in (39)–(58) are summarised as follows.
  • Set the initial values: let $t = 1$, $P_\theta(0) = p_0 I_{2n}$, $P_\alpha(0) = p_0 I_n$, $P_\gamma(0) = p_0 I_{2s}$, $\hat\theta(0) = \mathbf{1}_{2\times 2n}/p_0$, $\hat\alpha(0) = \mathbf{1}_n/p_0$, $\hat\gamma(0) = \mathbf{1}_{2s}/p_0$, $\hat y_f(t-i) = \mathbf{1}_2/p_0$, $p_0 = 10^6$; $u(t) = 0$ and $y(t) = 0$ for $t \leq 0$; and set a small positive number $\varepsilon$.
  • Collect the input and output data $u(t)$ and $y(t)$, form $D_i(t)$ by (49) and $\hat\phi_f(t)$ by (53), and compute $\hat D_{f,i}(t)$ by (50).
  • Compute $\hat D_{f,\gamma}(t)$, $L_\theta(t)$ and $P_\theta(t)$ by (51), (40) and (41), and update the parameter estimate $\hat\theta(t)$ by (39).
  • Compute $\hat D_{f,\theta}(t)$, $L_\gamma(t)$ and $P_\gamma(t)$ by (52), (46) and (47), and update the parameter estimate $\hat\gamma(t)$ by (45).
  • Compute $L_\alpha(t)$ and $P_\alpha(t)$ by (43) and (44), and update the parameter estimate $\hat\alpha(t)$ by (42).
  • Compute the filtered output estimate $\hat y_f(t)$ by (54).
  • Compare $\hat\theta(t)$ with $\hat\theta(t-1)$ and $\hat\gamma(t)$ with $\hat\gamma(t-1)$: if $\|\hat\theta(t) - \hat\theta(t-1)\| < \varepsilon$ and $\|\hat\gamma(t) - \hat\gamma(t-1)\| < \varepsilon$, terminate the recursion and take $\hat\theta(t)$, $\hat\alpha(t)$ and $\hat\gamma(t)$ as the estimates; otherwise, increase $t$ by 1 and go to step 2.
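The data-filtering step that distinguishes the F–HLS algorithm from the HLS algorithm can be sketched informally in Python. This is a minimal illustration, assuming a current noise-polynomial estimate; the variable names and values are ours, not the paper's.

```python
import numpy as np

# Sketch of the filtering recursions: the filtered regressor D_f,i and the
# filtered output y_f are generated recursively from the current MA-noise
# coefficient estimates alpha_hat (illustrative values below).
n, s = 2, 2
alpha_hat = np.array([0.5, 0.1])                # assumed current estimate of alpha

D_hist = [np.zeros((n, s)) for _ in range(n)]   # past filtered-regressor estimates
yf_hist = [np.zeros(2) for _ in range(n)]       # past filtered-output estimates

def filter_D(D_t):
    """D_f,i(t) = D_i(t) - sum_j alpha_j * D_f,i(t-j)."""
    D_f = D_t - sum(a * Dp for a, Dp in zip(alpha_hat, D_hist))
    D_hist.insert(0, D_f); D_hist.pop()         # shift the history window
    return D_f

def filter_y(y_t):
    """y_f(t) = y(t) - phi_f(t)^T alpha = y(t) - sum_j alpha_j * y_f(t-j)."""
    y_f = y_t - sum(a * yp for a, yp in zip(alpha_hat, yf_hist))
    yf_hist.insert(0, y_f); yf_hist.pop()
    return y_f

# First step: with zero initial history the filtered values equal the raw ones.
D_f = filter_D(np.ones((n, s)))
y_f = filter_y(np.array([1.0, -0.5]))
```

The filtered quantities then replace their unfiltered counterparts in the same RLS recursions used by the HLS algorithm.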

6. Examples

Consider the following TITO Hammerstein FIR–MA system:
$$y(t) = \theta(z)\bar u(t) + \alpha(z)v(t),\quad \bar u(t) = \begin{bmatrix}\bar u_1(t)\\ \bar u_2(t)\end{bmatrix} = \begin{bmatrix}0.35u_1(t) - 0.40u_1^2(t)\\ 0.14u_2(t) - 0.34u_2^2(t)\end{bmatrix},\quad y(t) = \begin{bmatrix}y_1(t)\\ y_2(t)\end{bmatrix},\quad v(t) = \begin{bmatrix}v_1(t)\\ v_2(t)\end{bmatrix},$$
$$\alpha(z) = 1 + 1.00z^{-1} + 0.11z^{-2},$$
$$\theta_1(z) = \begin{bmatrix}0.28\\ 0.06\end{bmatrix}z^{-1} + \begin{bmatrix}0.34\\ 0.27\end{bmatrix}z^{-2},\quad \theta_2(z) = \begin{bmatrix}1.10\\ 1.40\end{bmatrix}z^{-1} + \begin{bmatrix}0.41\\ 0.54\end{bmatrix}z^{-2},$$
$$\theta_{1,1} = \begin{bmatrix}0.28\\ 0.06\end{bmatrix},\quad \theta_{1,2} = \begin{bmatrix}0.34\\ 0.27\end{bmatrix},\quad \theta_{2,1} = \begin{bmatrix}1.10\\ 1.40\end{bmatrix},\quad \theta_{2,2} = \begin{bmatrix}0.41\\ 0.54\end{bmatrix},$$
$$\gamma_1 = \begin{bmatrix}0.35\\ -0.40\end{bmatrix},\quad \gamma_2 = \begin{bmatrix}0.14\\ -0.34\end{bmatrix},\quad \gamma = \begin{bmatrix}\gamma_1\\ \gamma_2\end{bmatrix},\quad \alpha = \begin{bmatrix}1.00\\ 0.11\end{bmatrix}.$$
In the simulation, the input $u(t)$ was taken as an uncorrelated persistent excitation signal vector sequence with zero mean and unit variance, and $v(t)$ was taken as a white noise vector sequence with zero mean and variance $\sigma^2 = 1.00^2$; we applied the HLS algorithm and the F–HLS algorithm to identify this example system. The parameter estimates and their errors are shown in Table 1, Table 2, Table 3 and Table 4. The parameter estimation error $\delta := \sqrt{\dfrac{\|\hat\theta(t)-\theta\|^2 + \|\hat\gamma(t)-\gamma\|^2}{\|\theta\|^2 + \|\gamma\|^2}}$ versus $t$ is shown in Figure 3 and Figure 4.
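The relative error measure $\delta$ can be computed as below. This sketch assumes the standard square-root form of the relative error and uses the squared-trace norm defined earlier; the parameter arrays are placeholders shaped like the example system, not simulation results.

```python
import numpy as np

def estimation_error(theta_hat, theta, gamma_hat, gamma):
    """delta = sqrt((||theta_hat-theta||^2 + ||gamma_hat-gamma||^2)
                    / (||theta||^2 + ||gamma||^2))."""
    num = (np.linalg.norm(theta_hat - theta, "fro")**2
           + np.linalg.norm(gamma_hat - gamma)**2)
    den = np.linalg.norm(theta, "fro")**2 + np.linalg.norm(gamma)**2
    return np.sqrt(num / den)

# Placeholder true parameters in the shapes theta in R^{2 x 2n}, gamma in R^{2s}.
theta = np.array([[0.28, 0.34, 1.10, 0.41],
                  [0.06, 0.27, 1.40, 0.54]])
gamma = np.array([0.35, -0.40, 0.14, -0.34])

# A perfect estimate gives delta = 0; any deviation gives delta > 0.
assert estimation_error(theta, theta, gamma, gamma) == 0.0
```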
From Table 1, Table 2, Table 3 and Table 4 and Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8, we can draw the following conclusions.
  • The estimation errors given by the HLS algorithm and the F–HLS algorithm become generally smaller and smaller as t increases.
  • Compared with the HLS algorithm, the F–HLS algorithm has higher parameter estimation accuracy.
  • The predictions of the residuals are close to the true residuals, and the estimated outputs are close to the true outputs.

7. Conclusions

We derived a hierarchical least-squares (HLS) algorithm and a filtering-based hierarchical least-squares (F–HLS) algorithm for two-input two-output Hammerstein finite impulse response moving average systems based on the hierarchical identification principle and the data filtering theory. The proposed F–HLS algorithm effectively identifies the parameters of TITO Hammerstein FIR–MA systems and achieves higher parameter estimation accuracy than the HLS algorithm. The proposed parameter estimation methods can be combined with other mathematical strategies and estimation algorithms to study the parameter identification problems of linear and nonlinear systems with different disturbances, and can be applied to other fields, such as signal processing and engineering application systems.

Author Contributions

Conceptualization, methodology and software, Y.J. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Shandong Province (ZR2020MF160 and SDYKC20090).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this article.

Acknowledgments

This work was supported by the Natural Science Foundation of Shandong Province (ZR2020MF160 and SDYKC20090).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pillonetto, G.; Chiuso, A.; Nicolao, G. Stable spline identification of linear systems under missing data. Automatica 2019, 108, 108493. [Google Scholar] [CrossRef] [Green Version]
  2. Li, H.Y.; Zhang, K. Accurate and fast parameter identification of conditionally Gaussian Markov jump linear system with input control. Automatica 2022, 137, 109928. [Google Scholar] [CrossRef]
  3. Ding, F.; Zhang, X.; Xu, L. The innovation algorithms for multivariable state-space models. Int. J. Adapt. Control Signal Process. 2019, 33, 1601–1608. [Google Scholar] [CrossRef]
  4. Li, X.D.; Li, P. Stability of time-delay systems with impulsive control involving stabilizing delays. Automatica 2021, 124, 109336. [Google Scholar] [CrossRef]
  5. Peng, H.X.; He, W.J.; Zhang, Y.L.; Li, X.; Ding, Y.; Menon, V.G.; Verma, S. Covert non-orthogonal multiple access communication assisted by multi-antenna jamming. Phys. Commun. 2022, 52, 101598. [Google Scholar] [CrossRef]
  6. Ding, F.; Liu, G.; Liu, X.P. Parameter estimation with scarce measurements. Automatica 2011, 47, 1646–1655. [Google Scholar] [CrossRef]
  7. Shu, J.; He, J.C.; Li, L. MSIS: Multispectral instance segmentation method for power equipment. Comput. Intell. Neurosci. 2022, 2022, 2864717. [Google Scholar] [CrossRef]
  8. You, J.Y.; Liu, Y.J. Iterative identification for multivariable systems with time-delays based on basis pursuit de-noising and auxiliary model. Algorithms 2018, 11, 180. [Google Scholar] [CrossRef] [Green Version]
  9. Wang, Y.J.; Ding, F. Novel data filtering based parameter identification for multiple-input multipleoutput systems using the auxiliary model. Automatica 2016, 71, 308–313. [Google Scholar] [CrossRef]
  10. Lenka, B.K. Fractional comparison method and asymptotic stability results for multivariable fractional order systems. Commun. Nonlinear Sci. Numer. Simul. 2019, 69, 398–415. [Google Scholar] [CrossRef]
  11. Liu, Q.Y.; Ding, F.; Yang, E.F. Parameter estimation algorithm for multivariable controlled autoregressive autoregressive moving average systems. Digit. Signal Process. 2018, 83, 323–331. [Google Scholar] [CrossRef]
  12. Pan, J.; Ma, H.; Liu, Q.Y. Recursive coupled projection algorithms for multivariable output-error-like systems with coloured noises. IET Signal Process 2020, 14, 455–466. [Google Scholar] [CrossRef]
  13. Xu, L.; Zhu, Q.M. Decomposition strategy-based hierarchical least mean square algorithm for control systems from the impulse responses. Int. J. Syst. Sci. 2021, 52, 1806–1821. [Google Scholar] [CrossRef]
  14. Ding, F.; Chen, T. Parameter estimation of dual-rate stochastic systems by using an output error method. IEEE Trans. Autom. Control 2005, 50, 1436–1441. [Google Scholar] [CrossRef]
  15. Zhang, X.; Yang, E.F. State estimation for bilinear systems through minimizing the covariance matrix of the state estimation errors. Int. J. Adapt. Control. Signal Process. 2019, 33, 1157–1173. [Google Scholar] [CrossRef]
  16. Zhang, X. Hierarchical parameter and state estimation for bilinear systems. Int. J. Syst. Sci. 2020, 51, 275–290. [Google Scholar] [CrossRef]
  17. Ding, F.; Lv, L.; Pan, J.; Wan, X.; Jin, X.B. Two-stage gradient-based iterative estimation methods for controlled autoregressive systems using the measurement data. Int. J. Control Autom. Syst. 2020, 18, 886–896. [Google Scholar] [CrossRef]
  18. Xu, L.; Chen, F.Y.; Hayat, T. Hierarchical recursive signal modeling for multi-frequency signals based on discrete measured data. Int. J. Adapt. Control Signal Process. 2021, 35, 676–693. [Google Scholar] [CrossRef]
  19. Katahira, K. How hierarchical models improve point estimates of model parameters at the individual level. J. Math. Psychol. 2016, 73, 37–58. [Google Scholar] [CrossRef] [Green Version]
20. Atitallah, A.; Bedoui, S.; Abderrahim, K. Multistage for identification of Wiener time delay systems based on hierarchical gradient approach. Math. Comput. Model. Dyn. Syst. 2017, 23, 222–239.
21. Wang, Y.J.; Wu, M.H. Recursive parameter estimation algorithm for multivariate output-error systems. J. Frankl. Inst. 2018, 355, 5163–5181.
22. Ding, F.; Wang, Y.J. The filtering based iterative identification for multivariable systems. IET Control Theory Appl. 2016, 10, 894–902.
23. Ji, Y.; Jiang, X.K.; Wan, L.J. Hierarchical least squares parameter estimation algorithm for two-input Hammerstein finite impulse response systems. J. Frankl. Inst. 2020, 357, 5019–5032.
24. Wang, D.Q.; Mao, L.; Ding, F. Recasted models-based hierarchical extended stochastic gradient method for MIMO nonlinear systems. IET Control Theory Appl. 2017, 11, 476–485.
25. Li, X.D.; Li, P. Input-to-state stability of nonlinear systems: Event-triggered impulsive control. IEEE Trans. Autom. Control 2021.
26. Li, P.; Li, X.D.; Lu, J.Q. Input-to-state stability of impulsive delay systems with multiple impulses. IEEE Trans. Autom. Control 2021, 66, 362–368.
27. Chen, F.Y.; Xu, L.; Hayat, T. Data filtering based maximum likelihood extended gradient method for multivariable systems with autoregressive moving average noise. J. Frankl. Inst. 2018, 355, 3381–3398.
28. Mao, Y.W.; Ding, F. Parameter estimation for nonlinear systems by using the data filtering and the multi-innovation identification theory. Int. J. Comput. Math. 2016, 93, 1869–1885.
29. Li, M.H.; Liu, X.M. The least squares based iterative algorithms for parameter estimation of a bilinear system with autoregressive noise using the data filtering technique. Signal Process. 2018, 147, 23–34.
30. Zhao, Z.Y.; Zhou, Y.Q.; Wang, X.Y.; Wang, Z.; Bai, Y. Water quality evolution mechanism modeling and health risk assessment based on stochastic hybrid dynamic systems. Expert Syst. Appl. 2022, 193, 116404.
31. Li, W.L.; Jia, Y.M.; Du, J.P. Event-triggered Kalman consensus filter over sensor networks. IET Control Theory Appl. 2015, 10, 103–110.
32. Kruzick, S.; Moura, J.M.F. Optimal filter design for signal processing on random graphs: Accelerated consensus. IEEE Trans. Signal Process. 2018, 66, 1258–1272.
33. Pan, X.; Zhao, J.; Xu, J. An object-based and heterogeneous segment filter convolutional neural network for high-resolution remote sensing image classification. Int. J. Remote Sens. 2019, 40, 5892–5916.
34. Zhou, Y.H.; Zhang, X. Hierarchical estimation approach for RBF-AR models with regression weights based on the increasing data length. IEEE Trans. Circuits Syst. Express Briefs 2021, 68, 3597–3601.
35. Ding, F.; Chen, T. Combined parameter and output estimation of dual-rate systems using an auxiliary model. Automatica 2004, 40, 1739–1748.
36. Ding, F.; Shi, Y.; Chen, Y.T. Auxiliary model-based least-squares identification methods for Hammerstein output-error systems. Syst. Control Lett. 2007, 56, 373–380.
37. Zhou, Y.H. Modeling nonlinear processes using the radial basis function-based state-dependent autoregressive models. IEEE Signal Process. Lett. 2020, 27, 1600–1604.
38. Liu, Y.J.; Shi, Y. An efficient hierarchical identification method for general dual-rate sampled-data systems. Automatica 2014, 50, 962–970.
39. Zhang, X. Optimal adaptive filtering algorithm by using the fractional-order derivative. IEEE Signal Process. Lett. 2022, 29, 399–403.
Figure 1. The two-input two-output Hammerstein FIR–MA system.
Figure 2. The flowchart of the HLS algorithm.
Figure 3. The HLS estimation errors δ versus t with σ² = 0.50² and σ² = 1.00².
Figure 4. The F-HLS estimation errors δ versus t with σ² = 0.50² and σ² = 1.00².
Figure 5. The F-HLS residual v1(t) (black line) and its estimate v̂1(t) (blue line) versus t with σ² = 1.00².
Figure 6. The F-HLS residual v2(t) (black line) and its estimate v̂2(t) (blue line) versus t with σ² = 1.00².
Figure 7. The F-HLS output y1(t) (black line) and its estimate ŷ1(t) (blue line) versus t with σ² = 1.00².
Figure 8. The F-HLS output y2(t) (black line) and its estimate ŷ2(t) (blue line) versus t with σ² = 1.00².
Table 1. The HLS estimates and their errors with σ² = 1.00².

t        100        200        500        1000       2000       3000       True Values
θ1,11    −0.50914   −0.30750   −0.19701   −0.22835   −0.29013   −0.29596   −0.28000
θ1,12     0.97766    0.56617    0.51217    0.32163    0.14490    0.13167    0.06000
θ1,21     0.11384    0.21689    0.03764   −0.11184   −0.19503   −0.25967   −0.34000
θ1,22    −0.80709   −0.63473   −0.25479   −0.17204   −0.24346   −0.25086   −0.27000
θ2,11     0.74403    0.81074    0.74457    0.81258    0.92845    0.91909    1.10000
θ2,12     0.44263    1.10990    0.95009    1.12074    1.14401    1.21750    1.40000
θ2,21    −0.12767    0.00039    0.27178    0.21795    0.28146    0.33896    0.41000
θ2,22     0.28715    0.69936    0.51335    0.40011    0.43399    0.42026    0.54000
γ1,1     −0.52470   −0.44508   −0.39763   −0.37525   −0.37048   −0.39093   −0.35000
γ1,2     −0.20855   −0.21843   −0.24514   −0.25373   −0.27027   −0.27842   −0.40000
γ2,1      0.28346    0.34036    0.27891    0.26296    0.23975    0.23523    0.14000
γ2,2     −0.29712   −0.34020   −0.32389   −0.33012   −0.33797   −0.34912   −0.34000
α1        0.14994    0.26180    0.45948    0.68975    0.90551    1.01653    1.00000
α2        0.50161    0.40490    0.23834    0.11798    0.06374    0.06636    0.11000
δ (%)    83.88885   57.71077   44.53619   30.11516   18.88155   15.33941
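The error index δ in the last row is the relative parameter estimation error, δ = ‖θ̂(t) − θ‖/‖θ‖ × 100%. As a sanity check (a minimal sketch, not the authors' code), the t = 3000 column of Table 1 reproduces the tabulated value to within rounding:

```python
import math

# Parameter estimates at t = 3000 from Table 1 (HLS, sigma^2 = 1.00^2),
# in the order theta_{1,11}..theta_{2,22}, gamma_{1,1}..gamma_{2,2}, alpha_1, alpha_2.
theta_hat = [-0.29596, 0.13167, -0.25967, -0.25086, 0.91909, 1.21750,
             0.33896, 0.42026, -0.39093, -0.27842, 0.23523, -0.34912,
             1.01653, 0.06636]
# True parameter values (last column of Table 1).
theta = [-0.28, 0.06, -0.34, -0.27, 1.10, 1.40,
         0.41, 0.54, -0.35, -0.40, 0.14, -0.34,
         1.00, 0.11]

# Relative estimation error: delta = ||theta_hat - theta|| / ||theta|| * 100%.
num = math.sqrt(sum((a - b) ** 2 for a, b in zip(theta_hat, theta)))
den = math.sqrt(sum(b ** 2 for b in theta))
delta = 100.0 * num / den
print(f"delta = {delta:.5f} %")  # close to the tabulated 15.33941
```

The tiny discrepancy in the last decimal places comes from the five-decimal rounding of the tabulated estimates.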
Table 2. The HLS estimates and their errors with σ² = 0.50².

t        100        200        500        1000       2000       3000       True Values
θ1,11    −0.44432   −0.40006   −0.38992   −0.31198   −0.26693   −0.26510   −0.28000
θ1,12     0.56573   −0.09028    0.00964   −0.06114   −0.06901    0.00239    0.06000
θ1,21    −0.12045   −0.04354   −0.31996   −0.30311   −0.33949   −0.33982   −0.34000
θ1,22    −0.22202   −0.11342    0.01576   −0.12291   −0.19827   −0.23634   −0.27000
θ2,11     1.04323    1.00861    1.21022    1.08627    1.01177    1.03408    1.10000
θ2,12     0.80616    1.08325    1.03559    1.20325    1.32250    1.35832    1.40000
θ2,21     0.41692    0.42105    0.36272    0.43777    0.45083    0.41805    0.41000
θ2,22     1.13348    0.87373    0.73620    0.53233    0.49740    0.50385    0.54000
γ1,1     −0.37277   −0.35597   −0.40370   −0.38316   −0.40989   −0.42941   −0.35000
γ1,2     −0.41897   −0.38966   −0.42279   −0.39275   −0.41758   −0.43019   −0.40000
γ2,1     −0.05560    0.01713    0.03196    0.08860    0.11363    0.12266    0.14000
γ2,2     −0.40705   −0.38088   −0.38310   −0.35594   −0.35451   −0.36378   −0.34000
α1        0.72392    0.83949    1.00022    1.05937    1.10617    1.14589    1.00000
α2       −0.20897   −0.13856   −0.02692    0.09362    0.15665    0.18479    0.11000
δ (%)    48.67846   29.82225   24.42703   12.72200   10.38521    9.39920
Table 3. The F-HLS estimates and their errors with σ² = 1.00².

t        100        200        500        1000       2000       3000       True Values
θ1,11    −0.46534   −0.47768   −0.40413   −0.32596   −0.27511   −0.27812   −0.28000
θ1,12     0.54645    0.51758    0.32302    0.22073    0.12924    0.10861    0.06000
θ1,21     0.34879    0.18063   −0.05504   −0.22830   −0.36081   −0.33090   −0.34000
θ1,22    −1.13416   −1.18888   −0.91111   −0.53083   −0.36501   −0.39249   −0.27000
θ2,11     1.37312    1.07871    0.98702    1.02387    1.06733    1.06065    1.10000
θ2,12     1.20017    1.40717    1.39712    1.26345    1.32548    1.35968    1.40000
θ2,21    −0.50420   −0.17069    0.29524    0.40066    0.39513    0.47011    0.41000
θ2,22     0.46208    0.26937    0.49625    0.60009    0.72538    0.68512    0.54000
γ1,1     −0.28981   −0.29722   −0.34828   −0.35355   −0.38810   −0.38610   −0.35000
γ1,2     −0.46484   −0.45777   −0.42673   −0.41237   −0.41568   −0.42091   −0.40000
γ2,1      0.19233    0.17823    0.17974    0.14425    0.14231    0.12582    0.14000
γ2,2     −0.18200   −0.18516   −0.18579   −0.19230   −0.22195   −0.23310   −0.34000
α1        0.47710    0.58835    0.74277    0.88623    0.97788    1.02303    1.00000
α2       −0.17863   −0.14081   −0.14153   −0.08345    0.01088    0.05732    0.11000
δ (%)    73.00597   61.84658   37.76548   19.80014   12.36094   10.78310
Table 4. The F-HLS estimates and their errors with σ² = 0.50².

t        100        200        500        1000       2000       3000       True Values
θ1,11    −0.37740   −0.41815   −0.37182   −0.31015   −0.25496   −0.26324   −0.28000
θ1,12     0.51706    0.52888    0.34646    0.24585    0.14247    0.12024    0.06000
θ1,21     0.21828    0.05468   −0.08284   −0.18580   −0.31081   −0.27801   −0.34000
θ1,22    −0.88659   −0.96682   −0.73059   −0.42252   −0.26874   −0.30345   −0.27000
θ2,11     1.50800    1.15388    1.02343    1.05184    1.08214    1.07516    1.10000
θ2,12     1.27160    1.47267    1.42432    1.25340    1.32645    1.36304    1.40000
θ2,21    −0.47274   −0.14330    0.25538    0.31426    0.32086    0.38596    0.41000
θ2,22     0.34738    0.16458    0.40585    0.52964    0.61534    0.57783    0.54000
γ1,1     −0.34054   −0.33682   −0.37719   −0.37203   −0.40047   −0.39525   −0.35000
γ1,2     −0.45282   −0.44870   −0.41742   −0.40349   −0.40820   −0.41487   −0.40000
γ2,1      0.21347    0.20244    0.21104    0.17728    0.17622    0.15573    0.14000
γ2,2     −0.22228   −0.22713   −0.22743   −0.22955   −0.26159   −0.27247   −0.34000
α1        0.52496    0.64401    0.80149    0.91977    1.00864    1.05000    1.00000
α2       −0.13503   −0.10317   −0.11367   −0.03595    0.05006    0.09420    0.11000
δ (%)    64.49938   53.52304   31.39408   17.18469    8.81068    6.52457
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Ji, Y.; Cao, J. Parameter Estimation Algorithms for Hammerstein Finite Impulse Response Moving Average Systems Using the Data Filtering Theory. Mathematics 2022, 10, 438. https://doi.org/10.3390/math10030438