Article

\({\mathbb{T}}\)-Proper Hypercomplex Centralized Fusion Estimation for Randomly Multiple Sensor Delays Systems with Correlated Noises

by Rosa M. Fernández-Alcalá *, Jesús Navarro-Moreno and Juan C. Ruiz-Molina
Department of Statistics and Operations Research, University of Jaén, Paraje Las Lagunillas, 23071 Jaén, Spain
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(17), 5729; https://doi.org/10.3390/s21175729
Submission received: 15 July 2021 / Revised: 17 August 2021 / Accepted: 21 August 2021 / Published: 25 August 2021
(This article belongs to the Special Issue Sensor Fusion and Signal Processing)

Abstract
The centralized fusion estimation problem for discrete-time vectorial tessarine signals in multiple sensor stochastic systems with random one-step delays and correlated noises is analyzed under different T-properness conditions. Based on T_k, k = 1, 2, linear processing, new centralized fusion filtering, prediction, and fixed-point smoothing algorithms are devised. These algorithms provide optimal estimators with a significant reduction in computational cost compared to those obtained through a real or a widely linear processing approach. Simulation examples illustrate the effectiveness and applicability of the proposed algorithms, in which the superiority of the T_k linear estimators over their counterparts in the quaternion domain is apparent.


1. Introduction

Multi-sensor systems and related information fusion estimation theory have attracted much attention over the last few decades due to their wide range of applications in many fields, including target tracking, robotics, navigation, big data, and signal processing [1,2,3,4,5,6,7].
In practice, failures during data transmission are unavoidable and lead to uncertain systems. In this regard, a significant problem is the estimation of the state from systems with random sensor delays (see, for example, ref. [8,9,10,11,12,13]). Such delays may be mainly caused by computational load, heavy network traffic, and the limited bandwidth of the communication channel, as well as other limitations, which mean that the measurements are not always up to date [8]. It is commonly assumed that measurement delays can be described by Bernoulli distributed random variables with known conditional probabilities, where the values 1 and 0 of these variables indicate the presence or absence of measurement delays in the corresponding sensor [10].
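As a toy illustration of this Bernoulli delay mechanism, the following Python sketch (our own illustrative code, with a made-up scalar sensor sequence and delay probability) forms each observation from either the current sensor output or the previous one:

```python
import numpy as np

# Illustrative sketch of the one-step random-delay measurement model:
#   y(t) = gamma(t) * z(t) + (1 - gamma(t)) * z(t-1),
# where gamma(t) ~ Bernoulli(p); gamma(t) = 1 means the sample is up to date.
rng = np.random.default_rng(1)
T, p = 50, 0.8                           # time steps, probability of an up-to-date sample

z = np.cumsum(rng.standard_normal(T))    # some sensor output sequence (illustrative)
gamma = rng.binomial(1, p, size=T)       # 1 = current sample, 0 = one-step delay

y = np.empty(T)
y[0] = z[0]                              # no earlier sample to fall back on at t = 0
for t in range(1, T):
    y[t] = gamma[t] * z[t] + (1 - gamma[t]) * z[t - 1]

print(f"fraction of delayed measurements: {np.mean(1 - gamma[1:]):.2f}")
```

On average a fraction 1 − p of the measurements received at the fusion center is one step old, which is exactly the uncertainty the algorithms below account for.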
Traditionally, there have been two basic approaches to processing the information from multiple sensors: centralized and distributed fusion. In the former approach, all the measurement data from each sensor are collected in a fusion center, where they are fused and processed, whereas in the distributed fusion method, the measurements of each sensor are transmitted to a local processor, where they are independently processed before being transmitted to the fusion center. It is well known that centralized fusion methods lead to the best (optimal) solution when all sensors work healthily [14,15,16]. The strength of this approach lies in the fact that it is easy to implement and makes the best use of the available information. Accordingly, with the purpose of optimal estimation, the centralized fusion methodology has received increased attention in the recent literature on multi-sensor fusion estimation (see, for example, ref. [9,17,18,19]). Notwithstanding the foregoing, the main disadvantage of this approach is the high computational load that may be required, especially when the number of sensors is very large. Alternatively, distributed fusion methodologies are developed with the purpose of designing solutions with a reduced computational load. Although the distributed fusion approach presents better robustness, flexibility, and reliability due to its parallel structure, the main handicap of these solutions is that they are suboptimal; hence, it is desirable to explore other alternatives that can alleviate the computational demand. In this respect, the use of hypercomplex algebras may well offer an ideal framework in which to analyze the properness characteristics of the signals, which lead to lower computational costs without losing optimality.
In general, the use of hypercomplex algebras in signal processing problems has expanded rapidly because of their natural ability to model multi-dimensional data, giving rise to better geometrical interpretations. In this connection, quaternions and tessarines are 4D hypercomplex algebras composed of a real part and three imaginary parts, which provides them with the ideal structure for describing three- and four-dimensional signals. Nowadays, they play a fundamental role in a variety of applications such as robotics, avionics, 3D graphics, and virtual reality [20]. In principle, the use of quaternions or tessarines means renouncing some of the usual algebraic properties of the real or complex fields: quaternion algebra is non-commutative, whereas tessarine algebra is not a division algebra. These properties make each algebra more appropriate for specific problems. With this in mind, the application of these two isodimensional algebras is compared in [21,22,23,24] with the objective of showing how the choice of a particular algebra may determine the performance of the proposed method.
In the related literature, quaternion algebra has been widely used as a signal processing tool and is still a trending topic in different areas. In particular, in the area of multi-sensor fusion estimation, ref. [25,26] proposed sensor fusion estimation algorithms based on a quaternion extended Kalman filter, ref. [27,28] provided robust distributed quaternion Kalman filtering algorithms for data fusion over sensor networks dealing with three-dimensional data, and [29] designed a linear quaternion fusion filter from multi-sensor observations. A common characteristic of all the estimation algorithms above is that their methodologies are based on strictly linear (SL) processing. However, in the quaternion domain, the optimal linear processing is widely linear (WL), which requires the consideration of the quaternion signal and its three involutions. In this framework, ref. [30] devised WL filtering, prediction, and smoothing algorithms for multi-sensor systems with mixed uncertainties of sensor delays, packet dropouts, and missing observations. Interestingly, when the signal presents properness properties (cancellation of one or more of the three complementary covariance matrices), the optimal processing is SL (if the signal is Q-proper) or semi-widely linear (if the signal is C-proper), which amounts to operating on a vector of reduced dimension and, hence, to a significant reduction in the computational load of the associated algorithms (see [31,32,33,34] for further details).
On the other hand, the use of tessarines is less common in the signal processing literature and, to the best of the authors' knowledge, they have never been considered in multi-sensor fusion estimation problems. In general, the use of tessarines in estimation problems has been limited by the fact that tessarine algebra is not a normed division algebra. This drawback was successfully overcome in [23] by introducing a metric that guarantees the existence and uniqueness of the optimal estimator. Moreover, although the optimal processing in the tessarine domain is WL processing, under properness conditions it is possible to obtain the optimal solution from estimation algorithms with lower computational costs. In this sense, ref. [23,24] introduced the concepts of T_1 and T_2-properness and provided statistical tests to determine whether a signal presents one of these properness properties. According to the type of properness, the most suitable form of processing is T_1 linear processing, which operates on the signal itself, or T_2 linear processing, based on the augmented vector given by the signal and its conjugate. The application of both T_1 and T_2 linear processing to the estimation problem has provided optimal estimation algorithms of reduced dimension.
Motivated by the above discussion, in this paper we consider a tessarine multiple sensor system in which each sensor may be delayed at any time independently of the others. The occurrence of each delay is modeled by a Bernoulli distribution. Moreover, unlike in most sensor fusion estimation algorithms, the observation noises of different sensors can be correlated. In this context, new centralized fusion filtering, prediction, and fixed-point smoothing algorithms are designed under both T_1 and T_2-properness conditions. The proposed algorithms provide the optimal estimates of the state, while their computational load is reduced with respect to the counterpart tessarine WL (TWL) estimation algorithms. It is important to note that such savings in computational demand cannot be achieved in the real field. The superiority of the algorithms obtained from a T_k linear approach over those derived in the quaternion domain is numerically demonstrated under different properness conditions.
The remainder of the paper is organized as follows. Section 2 introduces the notation used throughout the paper and briefly reviews the main concepts related to the processing of tessarine signals and their implications under T_k-properness. Then, in Section 3, the problem of estimating a tessarine signal in linear discrete stochastic systems with random state delays and multiple sensors is formulated. Specifically, under T_k-properness conditions, a compact state-space model of reduced dimension is proposed. From this model, and based on the T_k-properness properties, T_k centralized fusion filtering, step-ahead prediction, and fixed-point smoothing algorithms are devised in Section 4. Furthermore, the performance of these algorithms is numerically analyzed in Section 5 by means of a simulation example, where the superiority of the T_k estimation algorithms over their counterparts in the quaternion domain is evidenced. The paper ends with a section of conclusions. In order to maintain continuity, all technical proofs have been deferred to Appendices A, B and C.

2. Preliminaries

Throughout this paper, and unless otherwise stated, all the random variables are assumed to have zero mean. Moreover, the notation and terminology are fairly standard; they are summarized in the following two subsections.

2.1. Notation

Matrices are indicated by boldfaced uppercase letters, column vectors as boldfaced lowercase letters, and scalar quantities by lightfaced lowercase letters. Moreover, the following notation is used.
  • I n denotes the identity matrix of dimension n.
  • 0 n × m denotes the n × m zero matrix.
  • 1 n denotes the n-vector of all ones.
  • 0 n denotes the n-vector of all zeros.
  • Superscript * stands for the tessarine conjugate.
  • Superscript T stands for the transpose.
  • Superscript H stands for the Hermitian transpose.
  • Subscript r represents the real part of a tessarine.
  • Subscript ν, for ν = η, η′, η″, represents the corresponding imaginary part of a tessarine.
  • Z stands for the integer field.
  • R stands for the real field. Accordingly, A ∈ R^{n×m} means that A is a real n × m matrix and, similarly, r ∈ R^n means that r is an n-dimensional real vector.
  • T stands for the tessarine field. Accordingly, A ∈ T^{n×m} means that A is a tessarine n × m matrix and, similarly, r ∈ T^n means that r is an n-dimensional tessarine vector.
  • E [ · ] is the expectation operator.
  • Cov ( · ) is the covariance operator.
  • diag ( · ) is a diagonal (or block diagonal) matrix with elements specified on the main diagonal.
  • δ n , l is the Kronecker delta function, which is equal to one if l = n , and zero otherwise.
  • ∘ denotes the Hadamard product.
  • ⊗ denotes the Kronecker product.

2.2. Basic Concepts and Properties

The following property of the Hadamard product will be useful.
Property 1.
If A ∈ R^{n×n} and b ∈ R^n, then
\[ \mathrm{diag}(b)\,A\,\mathrm{diag}(b) = \bigl(b b^T\bigr) \circ A. \]
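Property 1 is straightforward to check numerically; here is a short Python sketch (illustrative dimensions and random data) verifying it with numpy:

```python
import numpy as np

# Numerical check of Property 1: diag(b) A diag(b) = (b b^T) ∘ A,
# where ∘ is the Hadamard (elementwise) product.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

lhs = np.diag(b) @ A @ np.diag(b)   # diag(b) A diag(b)
rhs = np.outer(b, b) * A            # Hadamard product of b b^T with A

assert np.allclose(lhs, rhs)
print("Property 1 verified")
```

This identity is what later turns expectations of the random selection matrices into Hadamard products with covariance matrices of the Bernoulli vector.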
Definition 1.
A tessarine random signal x(t) ∈ T^n is a stochastic process of the form [23]
\[ x(t) = x_r(t) + \eta\,x_{\eta}(t) + \eta'\,x_{\eta'}(t) + \eta''\,x_{\eta''}(t), \quad t \in \mathbb{Z}, \]
where x_ν(t) ∈ R^n, for ν = r, η, η′, η″, are real random signals and {1, η, η′, η″} obeys the following rules:
\[ \eta\eta' = \eta'', \quad \eta'\eta'' = \eta, \quad \eta\eta'' = -\eta', \quad \eta^2 = \eta''^2 = -1, \quad \eta'^2 = 1. \]
The conjugate of a given tessarine random signal x(t) ∈ T^n is
\[ x^{*}(t) = x_r(t) - \eta\,x_{\eta}(t) + \eta'\,x_{\eta'}(t) - \eta''\,x_{\eta''}(t). \]
Moreover, the following two auxiliary tessarine vectors are defined:
\[ x^{\eta'}(t) = x_r(t) + \eta\,x_{\eta}(t) - \eta'\,x_{\eta'}(t) - \eta''\,x_{\eta''}(t), \]
\[ x^{\eta''}(t) = x_r(t) - \eta\,x_{\eta}(t) - \eta'\,x_{\eta'}(t) + \eta''\,x_{\eta''}(t). \]
For a complete description of the second-order statistical properties of x(t), we need to consider the augmented tessarine signal vector x̄(t) = [x^T(t), x^H(t), x^{η′T}(t), x^{η″T}(t)]^T. The following relationship between the augmented vector and the real vector x^r(t) = [x_r^T(t), x_η^T(t), x_{η′}^T(t), x_{η″}^T(t)]^T can be established:
\[ \bar{x}(t) = 2\,T_n\,x^{r}(t), \]
where T_n = (1/2) A ⊗ I_n, with
\[ A = \begin{pmatrix} 1 & \eta & \eta' & \eta'' \\ 1 & -\eta & \eta' & -\eta'' \\ 1 & \eta & -\eta' & -\eta'' \\ 1 & -\eta & -\eta' & \eta'' \end{pmatrix}, \]
and T_n^H T_n = I_{4n}.
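To make the algebra concrete, the following small Python sketch (our own illustrative code, not from the paper) represents a tessarine by its four real components and implements the product implied by the rules above; it also exhibits a zero divisor, which is why tessarines are not a division algebra, as discussed in the Introduction:

```python
import numpy as np

class Tessarine:
    """t = r + eta*a + eta'*b + eta''*c, stored as the real 4-vector (r, a, b, c).

    Multiplication follows eta^2 = eta''^2 = -1, eta'^2 = 1,
    eta*eta' = eta'', eta'*eta'' = eta, eta*eta'' = -eta' (commutative algebra).
    """
    def __init__(self, r, a, b, c):
        self.v = np.array([r, a, b, c], dtype=float)

    def __mul__(self, other):
        r1, a1, b1, c1 = self.v
        r2, a2, b2, c2 = other.v
        return Tessarine(
            r1*r2 - a1*a2 + b1*b2 - c1*c2,   # real part
            r1*a2 + a1*r2 + b1*c2 + c1*b2,   # eta part
            r1*b2 + b1*r2 - a1*c2 - c1*a2,   # eta' part
            r1*c2 + c1*r2 + a1*b2 + b1*a2,   # eta'' part
        )

    def conj(self):
        # conjugation negates the eta and eta'' parts
        r, a, b, c = self.v
        return Tessarine(r, -a, b, -c)

eta = Tessarine(0, 1, 0, 0)
etap = Tessarine(0, 0, 1, 0)   # eta'
# product rule eta * eta' = eta'', and commutativity
assert np.allclose((eta * etap).v, [0, 0, 0, 1])
assert np.allclose((eta * etap).v, (etap * eta).v)
# zero divisor: (1 + eta')(1 - eta') = 1 - eta'^2 = 0, so no division algebra
zd = Tessarine(1, 0, 1, 0) * Tessarine(1, 0, -1, 0)
assert np.allclose(zd.v, 0)
```

The zero-divisor example is precisely the obstacle that [23] circumvents by introducing a suitable metric for the estimation problem.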
Definition 2.
Given two tessarine random signals x(t), y(s) ∈ T^n, the product ★ between them is defined as
\[ x(t) \star y(s) = x_r(t) \circ y_r(s) + \eta\,\bigl(x_{\eta}(t) \circ y_{\eta}(s)\bigr) + \eta'\,\bigl(x_{\eta'}(t) \circ y_{\eta'}(s)\bigr) + \eta''\,\bigl(x_{\eta''}(t) \circ y_{\eta''}(s)\bigr). \tag{2} \]
The following property of the product ★ is easy to check.
Property 2.
The augmented vector of x(t) ★ y(s) is \( \overline{x(t) \star y(s)} = \bar{D}_x(t)\,\bar{y}(s) \), where \( \bar{D}_x(t) = T_n\,\mathrm{diag}(x^r(t))\,T_n^H \).
Definition 3.
The pseudo autocorrelation function of x ( t ) T n is defined as R x ( t , s ) = E [ x ( t ) x H ( s ) ] , t , s Z , and the pseudo cross-correlation function of x ( t ) , y ( t ) T n is defined as R x y ( t , s ) = E [ x ( t ) y H ( s ) ] , t , s Z .
Note that, depending on the vanishing of the different pseudo cross-correlation functions R_{xx^ν}(t, s), ν = ∗, η′, η″, various kinds of tessarine properness can be defined. In particular, the following properness conditions in the tessarine domain have recently been introduced in [23,24].
Definition 4.
A random signal x(t) ∈ T^n is said to be T_1-proper (respectively, T_2-proper) if, and only if, the functions R_{xx^ν}(t, s), with ν = ∗, η′, η″ (respectively, ν = η′, η″), vanish for all t, s ∈ Z.
In a like manner, two random signals x(t) ∈ T^{n_1} and y(t) ∈ T^{n_2} are cross T_1-proper (respectively, cross T_2-proper) if, and only if, the functions R_{xy^ν}(t, s), with ν = ∗, η′, η″ (respectively, ν = η′, η″), vanish for all t, s ∈ Z.
Moreover, x ( t ) and y ( t ) are jointly T 1 -proper (respectively, jointly T 2 -proper) if, and only if, they are T 1 -proper (respectively, T 2 -proper) and cross T 1 -proper (respectively, cross T 2 -proper).
Note that T_1-properness is more restrictive than T_2-properness. Statistical tests to experimentally check whether a signal is T_k-proper, k = 1, 2, or improper have been proposed in [23,24].
It should be highlighted that the different properness properties have direct implications for the optimal linear processing. In general, the optimal linear processing is widely linear processing, which requires operating on the augmented tessarine vector x̄(t). Nevertheless, in the case of joint T_k-properness, k = 1, 2, the optimal linear processing reduces to T_k linear processing, with the corresponding decrease in the dimension of the problem. In particular, T_1 linear processing is based on the tessarine random signal itself, and T_2 linear processing considers the augmented vector given by the signal and its conjugate [24].

3. Problem Formulation

Consider the class of linear discrete stochastic systems with state delays and multiple sensors
\[
\begin{aligned}
x(t+1) &= F_1(t)x(t) + F_2(t)x^{*}(t) + F_3(t)x^{\eta'}(t) + F_4(t)x^{\eta''}(t) + u(t), && t \geq 0,\\
z^{(i)}(t) &= x(t) + v^{(i)}(t), && t \geq 0,\ i = 1, \dots, R,\\
y^{(i)}(t) &= \gamma^{(i)}(t) \star z^{(i)}(t) + \bigl(1_n - \gamma^{(i)}(t)\bigr) \star z^{(i)}(t-1), && t \geq 1,\ i = 1, \dots, R,
\end{aligned}
\tag{3}
\]
where R is the number of sensors, ★ is the product defined in (2), F_j(t) ∈ T^{n×n}, j = 1, 2, 3, 4, are deterministic matrices, x(t) ∈ T^n is the system state to be estimated, u(t) ∈ T^n is a tessarine noise, z^{(i)}(t) ∈ T^n is the ith sensor output with tessarine sensor noise v^{(i)}(t) ∈ T^n, and y^{(i)}(t) ∈ T^n is the observation of the ith sensor. Moreover, γ^{(i)}(t) = [γ_1^{(i)}(t), …, γ_n^{(i)}(t)]^T ∈ T^n is a tessarine random vector with components γ_j^{(i)}(t) = γ_{j,r}^{(i)}(t) + η γ_{j,η}^{(i)}(t) + η′ γ_{j,η′}^{(i)}(t) + η″ γ_{j,η″}^{(i)}(t), for j = 1, …, n, composed of independent Bernoulli random variables γ_{j,ν}^{(i)}(t), j = 1, …, n, ν = r, η, η′, η″, with known probabilities p_{j,ν}^{(i)}(t) and possible outcomes {0, 1}, which indicate whether the ν part of the jth observation component of the ith sensor is up to date (γ_{j,ν}^{(i)}(t) = 1) or suffers a one-step delay (γ_{j,ν}^{(i)}(t) = 0).
The following assumptions for the above system (3) are made.
Assumption 1.
For a given sensor i, the Bernoulli variable vector γ^{(i)}(t) is independent of γ^{(i)}(s), for t ≠ s, and γ^{(i)}(t) is also independent of γ^{(j)}(t), for any two sensors i ≠ j.
Assumption 2.
For a given sensor i, γ ( i ) ( t ) is independent of x ( t ) , u ( t ) and v ( j ) ( t ) , for any i , j = 1 , , R .
Assumption 3.
u ( t ) and v ( i ) ( t ) are correlated white noises with respective pseudo variances Q ( t ) and R ( i ) ( t ) . Moreover, E [ u ( t ) v ( i ) H ( s ) ] = S ( i ) ( t ) δ t , s .
Assumption 4.
v^{(i)}(t) is independent of v^{(j)}(t), for any two sensors i ≠ j.
Assumption 5.
The initial state x(0) is independent of the additive noises u(t) and v^{(i)}(t), for t ≥ 0 and i = 1, …, R.
Remark 1.
From the hypotheses established on the Bernoulli random variables it follows that, for any j_1, j_2 = 1, …, n, ν_1, ν_2 = r, η, η′, η″ and i_1, i_2 = 1, …, R,
\[
E\bigl[\gamma_{j_1,\nu_1}^{(i_1)}(t)\,\gamma_{j_2,\nu_2}^{(i_2)}(t)\bigr] =
\begin{cases}
p_{j_1,\nu_1}^{(i_1)}(t), & \text{if } i_1 = i_2,\ j_1 = j_2,\ \nu_1 = \nu_2,\\
p_{j_1,\nu_1}^{(i_1)}(t)\,p_{j_2,\nu_2}^{(i_2)}(t), & \text{otherwise},
\end{cases}
\]
\[
E\bigl[\bigl(1-\gamma_{j_1,\nu_1}^{(i_1)}(t)\bigr)\bigl(1-\gamma_{j_2,\nu_2}^{(i_2)}(t)\bigr)\bigr] =
\begin{cases}
1-p_{j_1,\nu_1}^{(i_1)}(t), & \text{if } i_1 = i_2,\ j_1 = j_2,\ \nu_1 = \nu_2,\\
\bigl(1-p_{j_1,\nu_1}^{(i_1)}(t)\bigr)\bigl(1-p_{j_2,\nu_2}^{(i_2)}(t)\bigr), & \text{otherwise}.
\end{cases}
\tag{4}
\]
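These second-order moments are easy to confirm by Monte Carlo simulation; the following Python sketch (illustrative probabilities of our own choosing) checks the same-indicator, cross-indicator, and complementary cases:

```python
import numpy as np

# Monte Carlo sanity check of the second-order moments in Remark 1
# for two independent Bernoulli indicators with different probabilities.
rng = np.random.default_rng(2)
p1, p2, N = 0.7, 0.4, 200_000

g1 = rng.binomial(1, p1, N)
g2 = rng.binomial(1, p2, N)

# same indicator: E[g1*g1] = p1 ; different indicators: E[g1*g2] = p1*p2
assert abs(np.mean(g1 * g1) - p1) < 5e-3
assert abs(np.mean(g1 * g2) - p1 * p2) < 5e-3
# complementary indicators: E[(1-g1)(1-g2)] = (1-p1)(1-p2)
assert abs(np.mean((1 - g1) * (1 - g2)) - (1 - p1) * (1 - p2)) < 5e-3
print("Remark 1 moments verified")
```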

3.1. One-State Delay System under T k -Properness

In this section, a TWL one-state delay system, which exploits the full second-order statistical information available, is introduced and analyzed under T_k-properness scenarios, k = 1, 2.
For this purpose, consider the augmented vectors x ¯ ( t ) , z ¯ ( i ) ( t ) , and y ¯ ( i ) ( t ) of x ( t ) , z ( i ) ( t ) , and y ( i ) ( t ) , respectively. Then, by applying Property 2 on system (3), the following TWL one-state delay model can be defined:
\[
\begin{aligned}
\bar{x}(t+1) &= \bar{\Phi}(t)\bar{x}(t) + \bar{u}(t), && t \geq 0,\\
\bar{z}^{(i)}(t) &= \bar{x}(t) + \bar{v}^{(i)}(t), && t \geq 0,\ i = 1, \dots, R,\\
\bar{y}^{(i)}(t) &= \bar{D}_{\gamma^{(i)}}(t)\bar{z}^{(i)}(t) + \bar{D}_{(1-\gamma^{(i)})}(t)\bar{z}^{(i)}(t-1), && t \geq 1,\ i = 1, \dots, R,
\end{aligned}
\tag{5}
\]
where
\[
\bar{\Phi}(t) = \begin{pmatrix}
F_1(t) & F_2(t) & F_3(t) & F_4(t) \\
F_2^{*}(t) & F_1^{*}(t) & F_4^{*}(t) & F_3^{*}(t) \\
F_3^{\eta'}(t) & F_4^{\eta'}(t) & F_1^{\eta'}(t) & F_2^{\eta'}(t) \\
F_4^{\eta''}(t) & F_3^{\eta''}(t) & F_2^{\eta''}(t) & F_1^{\eta''}(t)
\end{pmatrix}.
\]
Moreover, from Assumption 3, the pseudo correlation matrices associated to the augmented noise vectors u ¯ ( t ) and v ¯ ( i ) ( t ) are given by
  • E [ u ¯ ( t ) u ¯ H ( s ) ] = Q ¯ ( t ) δ t , s ;
  • E [ v ¯ ( i ) ( t ) v ¯ ( i ) H ( s ) ] = R ¯ ( i ) ( t ) δ t , s ;
  • E [ u ¯ ( t ) v ¯ ( i ) H ( s ) ] = S ¯ ( i ) ( t ) δ t , s .
The following result establishes conditions on system (5), which lead to T k -properness properties of the processes involved.
Proposition 1.
Consider the TWL one-state delay model (5).
  • If x(0) and u(t) are T_1-proper, and Φ̄(t) is a block diagonal matrix of the form
    \[ \bar{\Phi}(t) = \mathrm{diag}\bigl(F_1(t),\ F_1^{*}(t),\ F_1^{\eta'}(t),\ F_1^{\eta''}(t)\bigr), \]
    then x(t) is T_1-proper.
    If, additionally, p_{j,r}^{(i)}(t) = p_{j,η}^{(i)}(t) = p_{j,η′}^{(i)}(t) = p_{j,η″}^{(i)}(t) ≜ p_j^{(i)}(t), for all t, j, i, v^{(i)}(t) is T_1-proper, and u(t) and v^{(i)}(t) are cross T_1-proper, then x(t) and y^{(i)}(t) are jointly T_1-proper.
  • If x(0) and u(t) are T_2-proper, and Φ̄(t) is a block diagonal matrix of the form
    \[ \bar{\Phi}(t) = \mathrm{diag}\bigl(\Phi_2(t),\ \Phi_2^{\eta'}(t)\bigr), \quad \text{with } \Phi_2(t) = \begin{pmatrix} F_1(t) & F_2(t) \\ F_2^{*}(t) & F_1^{*}(t) \end{pmatrix}, \tag{6} \]
    then x(t) is T_2-proper.
    If, additionally, p_{j,r}^{(i)}(t) = p_{j,η′}^{(i)}(t) and p_{j,η}^{(i)}(t) = p_{j,η″}^{(i)}(t), for all t, j, i, v^{(i)}(t) is T_2-proper, and u(t) and v^{(i)}(t) are cross T_2-proper, then x(t) and y^{(i)}(t) are jointly T_2-proper.
Proof. 
The proof follows immediately from the application of the corresponding conditions on system (5) and the computation of the augmented pseudo correlation matrices R x ¯ ( t , s ) and R x ¯ y ¯ ( i ) ( t , s ) . □
Remark 2.
Note that under T_1-properness conditions, Π̄_{γ^{(i)}}(t) = E[D̄_{γ^{(i)}}(t)], i = 1, …, R, is a diagonal matrix of the form Π̄_{γ^{(i)}}(t) = I_4 ⊗ Π_1^{(i)}(t), with Π_1^{(i)}(t) = diag(p_{1,r}^{(i)}(t), …, p_{n,r}^{(i)}(t)).
Likewise, under T_2-properness conditions, Π̄_{γ^{(i)}}(t) = E[D̄_{γ^{(i)}}(t)], i = 1, …, R, takes the form of a block diagonal matrix as follows:
\[ \bar{\Pi}_{\gamma^{(i)}}(t) = \mathrm{diag}\bigl(\Pi_2^{(i)}(t),\ \Pi_2^{(i)}(t)\bigr), \quad \text{with } \Pi_2^{(i)}(t) = \frac{1}{2}\begin{pmatrix} \Pi_a^{(i)}(t) & \Pi_b^{(i)}(t) \\ \Pi_b^{(i)}(t) & \Pi_a^{(i)}(t) \end{pmatrix}, \]
where Π_a^{(i)}(t) = diag(p_{1,r}^{(i)}(t) + p_{1,η}^{(i)}(t), …, p_{n,r}^{(i)}(t) + p_{n,η}^{(i)}(t)) and Π_b^{(i)}(t) = diag(p_{1,r}^{(i)}(t) − p_{1,η}^{(i)}(t), …, p_{n,r}^{(i)}(t) − p_{n,η}^{(i)}(t)).

3.2. Compact State-Space Model

By stacking the observations at each sensor in a global observation vector z(t) = [z̄^{(1)T}(t), …, z̄^{(R)T}(t)]^T, the TWL one-state delay system (5) can be rewritten in the compact form
\[
\begin{aligned}
\bar{x}(t+1) &= \bar{\Phi}(t)\bar{x}(t) + \bar{u}(t), && t \geq 0,\\
z(t) &= \bar{C}\bar{x}(t) + v(t), && t \geq 0,\\
y(t) &= \bar{D}_{\gamma}(t)z(t) + \bar{D}_{(1-\gamma)}(t)z(t-1), && t \geq 1,
\end{aligned}
\tag{7}
\]
where v(t) and y(t) denote the stacked vectors of v̄^{(i)}(t) and ȳ^{(i)}(t), for i = 1, …, R, respectively. Moreover, C̄ = 1_R ⊗ I_{4n}, D̄_γ(t) = L̄ diag(γ^r(t)) L̄^H and D̄_{(1−γ)}(t) = L̄ diag(1_{4Rn} − γ^r(t)) L̄^H, with L̄ = I_R ⊗ T_n.
In addition, E[v(t)v^H(s)] = R̄(t)δ_{t,s}, with R̄(t) = diag(R̄^{(1)}(t), …, R̄^{(R)}(t)), and E[ū(t)v^H(s)] = S̄(t)δ_{t,s}, with S̄(t) = [S̄^{(1)}(t), …, S̄^{(R)}(t)].
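The stacked quantities of the compact model can be assembled mechanically; here is a small Python sketch (illustrative dimensions, real-valued stand-ins for the tessarine blocks) of building a stacking matrix of the form 1_R ⊗ I and the block-diagonal global noise covariance from per-sensor blocks:

```python
import numpy as np

# Illustrative construction of the stacked measurement structure:
# C = 1_R ⊗ I_m repeats the m-dimensional state equation for each sensor,
# and the global noise covariance is block diagonal in the per-sensor ones.
R, m = 3, 2                                   # sensors, per-sensor dimension

C = np.kron(np.ones((R, 1)), np.eye(m))       # (R*m) x m stacking matrix

rng = np.random.default_rng(3)
blocks = []
for _ in range(R):                            # random SPD per-sensor covariances
    A = rng.standard_normal((m, m))
    blocks.append(A @ A.T + m * np.eye(m))

R_global = np.zeros((R * m, R * m))           # block-diagonal assembly
for i, B in enumerate(blocks):
    R_global[i*m:(i+1)*m, i*m:(i+1)*m] = B

x = rng.standard_normal(m)
z = C @ x                                     # every sensor observes the same state
assert np.allclose(z, np.tile(x, R))
```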
In this paper, our aim is to investigate the centralized fusion estimation problem under T_k-properness conditions, k = 1, 2. In this sense, the T_k-properness properties allow us to consider the observation equation of reduced dimension
\[ y_k(t) = \tilde{D}_{k\gamma}(t)\bar{C}\bar{x}(t) + \tilde{D}_{k(1-\gamma)}(t)\bar{C}\bar{x}(t-1) + \tilde{D}_{k\gamma}(t)v(t) + \tilde{D}_{k(1-\gamma)}(t)v(t-1), \quad t \geq 1, \tag{8} \]
where x̄(t) satisfies the state equation in (7), D̃_{kγ}(t) = L_k diag(γ^r(t)) L̄^H and D̃_{k(1−γ)}(t) = L_k diag(1_{4Rn} − γ^r(t)) L̄^H, with L_k = I_R ⊗ T_k and T_k = (1/2) B_k ⊗ I_n, where
  • T_1-proper scenario:
    \[ B_1 = \begin{pmatrix} 1 & \eta & \eta' & \eta'' \end{pmatrix}; \]
    y_1(t) ≜ [y^{(1)T}(t), …, y^{(R)T}(t)]^T.
  • T_2-proper scenario:
    \[ B_2 = \begin{pmatrix} 1 & \eta & \eta' & \eta'' \\ 1 & -\eta & \eta' & -\eta'' \end{pmatrix}; \]
    y_2(t) ≜ [y^{(1)T}(t), y^{(1)H}(t), …, y^{(R)T}(t), y^{(R)H}(t)]^T.
Remark 3.
Note that under T_k-properness conditions, Π̃_{kγ}(t) = E[D̃_{kγ}(t)] is given by Π̃_{kγ}(t) = diag(Π̃_{kγ}^{(1)}(t), …, Π̃_{kγ}^{(R)}(t)), where Π̃_{kγ}^{(i)}(t) = [Π_k^{(i)}(t), 0_{kn×(4−k)n}], with Π_k^{(i)}(t), i = 1, …, R, given in Remark 2.
Similarly, Π̃_{k(1−γ)}(t) = E[D̃_{k(1−γ)}(t)] is given by the block diagonal matrix Π̃_{k(1−γ)}(t) = diag(Π̃_{k(1−γ)}^{(1)}(t), …, Π̃_{k(1−γ)}^{(R)}(t)), with Π̃_{k(1−γ)}^{(i)}(t) = [I_{kn} − Π_k^{(i)}(t), 0_{kn×(4−k)n}].
Accordingly, whereas the optimal linear processing for the estimation of a tessarine signal x(t) is the TWL processing based on the set of measurements {y(1), …, y(t)}, under T_k-properness conditions the optimal estimator of x(t) ∈ T^n, x̂_{T_k}(t|s), can be computed by projecting onto the set of measurements {y_k(1), …, y_k(s)}, for k = 1, 2. Thereby, T_k estimators are obtained that have the same performance as the TWL estimators, but with a lower computational complexity. More importantly, this saving in computational load cannot be achieved with the real approach.
Note that tessarine algebra is not a Hilbert space and, as a consequence, neither the existence nor the uniqueness of the projection on a set of tessarines is guaranteed. Nevertheless, this drawback has been overcome in [23] by defining a suitable metric, which assures the existence and uniqueness of these projections.
The following property sets the correlations between the noises, u ¯ ( t ) and v ( t ) , and both the augmented state x ¯ ( t ) and the observations y k ( t ) .
Property 3.
Under Assumptions 1–4, the following correlations hold.
1. Correlations between the noises and the augmented state:
  (a) E[x̄(t+1)ū^H(t)] = Q̄(t);
  (b) E[x̄(t)ū^H(s)] = 0_{4n×4n}, for t ≤ s;
  (c) E[x̄(t+1)v^H(t)] = S̄(t);
  (d) E[x̄(t)v^H(s)] = 0_{4n×4Rn}, for t ≤ s.
2. Correlations between the noises and the T_k observations:
  (a) E[y_k(t)ū^H(t)] = Π̃_{kγ}(t)S̄^H(t);
  (b) E[y_k(t+1)ū^H(t)] = Π̃_{kγ}(t+1)C̄Q̄(t) + Π̃_{k(1−γ)}(t+1)S̄^H(t);
  (c) E[y_k(t)ū^H(s)] = 0_{kRn×4n}, for t < s;
  (d) E[y_k(t)v^H(t)] = Π̃_{kγ}(t)R̄(t);
  (e) E[y_k(t+1)v^H(t)] = Π̃_{kγ}(t+1)C̄S̄(t) + Π̃_{k(1−γ)}(t+1)R̄(t);
  (f) E[y_k(t)v^H(s)] = 0_{kRn×4Rn}, for t < s.
Remark 4.
Observe that, under a T k -properness setting, the state equation in (7) is equivalent to the T k state equation
\[ x_k(t+1) = \Phi_k(t)\,x_k(t) + u_k(t), \quad t \geq 0, \tag{9} \]
where,
  • in a T 1 -proper scenario, x 1 ( t ) x ( t ) , u 1 ( t ) u ( t ) , and Φ 1 ( t ) F 1 ( t ) ;
  • in a T 2 -proper scenario, x 2 ( t ) [ x T ( t ) , x H ( t ) ] T , u 2 ( t ) [ u T ( t ) , u H ( t ) ] T and Φ 2 ( t ) is as in (6).
In such cases, Q k ( t ) = E [ u k ( t ) u k H ( t ) ] and S k ( t ) = E [ u k ( t ) v k H ( t ) ] , for k = 1 , 2 , where v 1 ( t ) v ( t ) and v 2 ( t ) [ v T ( t ) , v H ( t ) ] T , with v ( t ) = v ( 1 ) T ( t ) , , v ( R ) T ( t ) T .
Nevertheless, Equation (9) cannot be used together with the observation Equation (8), since the latter involves the augmented state vector x ¯ ( t ) .

4. T k -Proper Centralized Fusion Estimation Algorithms

In this section, the T k centralized fusion filter, prediction, and fixed-point smoothing algorithms are designed on the basis of the set of observations { y k ( 1 ) , , y k ( s ) } , k = 1 , 2 , defined in (8).
With this purpose in mind, the observation Equation (8) is used to devise filtering, prediction, and smoothing algorithms for the augmented state vector x ¯ ( t ) . Then, by applying T k -properness properties, the recursive formulas for the filtering, prediction, and smoothing estimators of x k ( t ) are easily determined. Finally, the desired T k centralized fusion filtering, prediction and fixed-point smoothing estimators are obtained as a subvector of them.
Theorems 1–3 summarize the recursive formulas for the computation of these T k estimators as well as their associated error variances.

4.1. T k Centralized Fusion Filter

Theorem 1.
The optimal T k centralized fusion filter x ^ T k ( t | t ) and one-step predictor x ^ T k ( t + 1 | t ) for the state x ( t ) are obtained by extracting the first n components of the optimal estimator x ^ k ( t | t ) and x ^ k ( t + 1 | t ) , respectively, which are recursively computed from the expressions
\[ \hat{x}_k(t|t) = \hat{x}_k(t|t-1) + L_k(t)\,\varepsilon_k(t), \quad t \geq 1, \tag{10} \]
\[ \hat{x}_k(t+1|t) = \Phi_k(t)\,\hat{x}_k(t|t) + H_k(t)\,\varepsilon_k(t), \quad t \geq 1, \tag{11} \]
with x̂_k(0|0) = 0_{kn} and x̂_k(1|0) = 0_{kn}, and where H_k(t) = S_k(t)Π_k(t)Ω_k^{−1}(t), with Π_k(t) = diag(Π_k^{(1)}(t), …, Π_k^{(R)}(t)) and Π_k^{(i)}(t), i = 1, …, R, defined in Remark 2 for k = 1, 2. Moreover, ε_k(t) are the innovations, calculated as follows:
\[ \varepsilon_k(t) = y_k(t) - \Pi_k(t)C_k\hat{x}_k(t|t-1) - \bigl(I_m - \Pi_k(t)\bigr)C_k\hat{x}_k(t-1|t-1) - \bigl(I_m - \Pi_k(t)\bigr)G_k(t-1)\varepsilon_k(t-1), \quad t \geq 1, \tag{12} \]
with m = kRn, ε_k(0) = 0_m, and where C_k = 1_R ⊗ I_{kn} and G_k(t) = R_k(t)Π_k(t)Ω_k^{−1}(t), with R_k(t) = E[v_k(t)v_k^H(t)].
In addition, L_k(t) = Θ_k(t)Ω_k^{−1}(t), where Θ_k(t) is computed through the equation
\[
\begin{aligned}
\Theta_k(t) ={}& P_k(t|t-1)C_k^T\Pi_k(t) + \Phi_k(t-1)P_k(t-1|t-1)C_k^T\bigl(I_m-\Pi_k(t)\bigr) + S_k(t-1)\bigl(I_m-\Pi_k(t)\bigr) \\
&- H_k(t-1)\Theta_k^H(t-1)C_k^T\bigl(I_m-\Pi_k(t)\bigr) - \Phi_k(t-1)\Theta_k(t-1)G_k^H(t-1)\bigl(I_m-\Pi_k(t)\bigr) \\
&- H_k(t-1)\Omega_k(t-1)G_k^H(t-1)\bigl(I_m-\Pi_k(t)\bigr), \quad t > 1,
\end{aligned}
\tag{13}
\]
with Θ_k(1) = P_k(1|0)C_k^TΠ_k(1) + Φ_k(0)P_k(0|0)C_k^T(I_m − Π_k(1)) + S_k(0)(I_m − Π_k(1)), and the innovation covariance matrix Ω_k(t) is obtained as
\[
\begin{aligned}
\Omega_k(t) ={}& M_k^1(t) - M_k^2(t) - M_k^3(t) + M_k^4(t) + \Pi_k(t)C_k P_k(t|t-1)C_k^T\Pi_k(t) \\
&+ \Pi_k(t)J_k(t-1)\bigl(I_m-\Pi_k(t)\bigr) + \bigl(I_m-\Pi_k(t)\bigr)J_k^H(t-1)\Pi_k(t) \\
&+ \bigl(I_m-\Pi_k(t)\bigr)\bigl[C_k P_k(t-1|t-1)C_k^T - C_k\Theta_k(t-1)G_k^H(t-1) \\
&\quad - G_k(t-1)\Theta_k^H(t-1)C_k^T - G_k(t-1)\Omega_k(t-1)G_k^H(t-1)\bigr]\bigl(I_m-\Pi_k(t)\bigr), \quad t > 1,
\end{aligned}
\]
with
\[
\begin{aligned}
\Omega_k(1) ={}& M_k^1(1) - M_k^2(1) - M_k^3(1) + M_k^4(1) + \Pi_k(1)C_k P_k(1|0)C_k^T\Pi_k(1) \\
&+ \Pi_k(1)J_k(0)\bigl(I_m-\Pi_k(1)\bigr) + \bigl(I_m-\Pi_k(1)\bigr)J_k^H(0)\Pi_k(1) \\
&+ \bigl(I_m-\Pi_k(1)\bigr)C_k P_k(0|0)C_k^T\bigl(I_m-\Pi_k(1)\bigr),
\end{aligned}
\]
where
\[
J_k(t) = C_k\bigl[\Phi_k(t)P_k(t|t)C_k^T - H_k(t)\Theta_k^H(t)C_k^T + S_k(t) - \Phi_k(t)\Theta_k(t)G_k^H(t) - H_k(t)\Omega_k(t)G_k^H(t)\bigr],
\]
with J_k(0) = C_k[Φ_k(0)P_k(0|0)C_k^T + S_k(0)], and
  • M_k^1(t) = L_k [Cov(γ^r(t)) ∘ (L̄^H C̄ Σ̄(t−1) C̄^T L̄)] L_k^H,
  • M_k^2(t) = L_k [Cov(γ^r(t)) ∘ (L̄^H C̄ S̄(t) L̄)] L_k^H,
  • M_k^3(t) = L_k [Cov(γ^r(t)) ∘ (L̄^H S̄^H(t) C̄^T L̄)] L_k^H,
  • M_k^4(t) = L_k [Δ_{p^r}(t) ∘ (L̄^H R̄(t) L̄) + Δ_{1−p^r}(t) ∘ (L̄^H R̄(t−1) L̄)] L_k^H,
where Δ_{p^r}(t) = E[γ^r(t)γ^{rT}(t)] and Δ_{1−p^r}(t) = E[(1_{4Rn} − γ^r(t))(1_{4Rn} − γ^r(t))^T], with entries given in (4), and
\[ \bar{\Sigma}(t) = \bar{\Phi}(t)\bar{D}(t)\bar{\Phi}^H(t) + \bar{Q}(t) - \bar{\Phi}(t)\bar{D}(t) - \bar{D}(t)\bar{\Phi}^H(t) + \bar{D}(t), \]
where D̄(t) = R_{x̄}(t, t) is recursively computed from
\[ \bar{D}(t) = \bar{\Phi}(t-1)\bar{D}(t-1)\bar{\Phi}^H(t-1) + \bar{Q}(t-1). \]
Finally, the T k filtering and prediction error pseudo covariance matrices P T k ( t | t ) and P T k ( t + 1 | t ) , respectively, are obtained from the filtering and prediction error pseudo covariance matrices P k ( t | t ) and P k ( t + 1 | t ) , calculated from the recursive expressions
P k ( t | t ) = P k ( t | t 1 ) Θ k ( t ) Ω k 1 ( t ) Θ k H ( t ) ,
with P k ( 0 | 0 ) = E [ x k ( 0 ) x k H ( 0 ) ] , and
P k ( t + 1 | t ) = Φ k ( t ) P k ( t | t ) Φ k H ( t ) H k ( t ) Θ k H ( t ) Φ k H ( t ) Φ k ( t ) Θ k ( t ) H k H ( t ) H k ( t ) Ω k ( t ) H k H ( t ) + Q k ( t ) ,
with P k ( 1 | 0 ) = Φ k ( 0 ) P k ( 0 | 0 ) Φ k H ( 0 ) + Q k ( 0 ) .
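To make the structure of these recursions concrete, the following Python sketch iterates the filtering and prediction error covariance recursions (17) and (18) in the degenerate case of no delays ( Π k = I ) and uncorrelated noises ( S k = 0 ), where they reduce to the standard Kalman covariance update. The matrices Φ, C, Q, and R are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Minimal sketch of the covariance recursions (17)-(18) in the degenerate
# case Pi_k = I (no delays) and S_k = 0 (uncorrelated noises), where they
# reduce to the standard Kalman form.  All matrices are placeholders.
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition Phi_k
C = np.eye(2)                              # observation matrix C_k
Q = 0.5 * np.eye(2)                        # state noise covariance Q_k
R = 2.0 * np.eye(2)                        # measurement noise covariance

P_pred = np.eye(2)                         # P_k(1|0) initialisation
for t in range(1, 51):
    Theta = P_pred @ C.conj().T            # Theta_k(t), cf. (13) with Pi = I
    Omega = C @ P_pred @ C.conj().T + R    # innovation covariance, cf. (14)
    P_filt = P_pred - Theta @ np.linalg.inv(Omega) @ Theta.conj().T  # (17)
    P_pred = Phi @ P_filt @ Phi.conj().T + Q                         # (18)
```

In the general delayed-observation case, Θ k ( t ) and Ω k ( t ) acquire the additional terms of (13) and (14), but the update pattern is the same.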
Remark 5.
In the implementation of the above algorithm, the particular structure of Σ ¯ ( t ) under T k -properness conditions should be taken into consideration. In this regard, it is not difficult to check that Σ ¯ ( t ) is a block diagonal matrix of the form
  • T 1 -properness: Σ ¯ ( t ) = diag Σ 1 ( t ) , Σ 1 ( t ) , Σ 1 η ( t ) , Σ 1 η ( t ) ;
  • T 2 -properness: Σ ¯ ( t ) = diag Σ 2 ( t ) , Σ 2 η ( t ) ,
with Σ k ( t ) = Φ k ( t ) D k ( t ) Φ k H ( t ) + Q k ( t ) Φ k ( t ) D k ( t ) D k ( t ) Φ k H ( t ) + D k ( t ) , k = 1 , 2 , where D k ( t ) = R x k ( t , t ) is recursively computed from
D k ( t ) = Φ k ( t 1 ) D k ( t 1 ) Φ k H ( t 1 ) + Q k ( t 1 ) .
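A minimal numerical sketch of this recursion, with illustrative time-invariant placeholder matrices Φ k and Q k (not values from the paper), also shows that the four terms of Σ k ( t ) regroup algebraically as ( Φ k − I ) D k ( Φ k − I ) H + Q k :

```python
import numpy as np

# Sketch of Remark 5: iterate D_k(t) = Phi D Phi^H + Q and form
# Sigma_k(t) = Phi D Phi^H + Q - Phi D - D Phi^H + D.
# Phi and Q are illustrative placeholders (Phi stable), not from the paper.
Phi = np.array([[0.7, 0.2], [0.1, 0.6]])
Q = np.eye(2)

D = np.eye(2)  # D_k(0) = R_{x_k}(0, 0), assumed
for _ in range(200):  # converges since the spectral radius of Phi is < 1
    D = Phi @ D @ Phi.conj().T + Q

Sigma = Phi @ D @ Phi.conj().T + Q - Phi @ D - D @ Phi.conj().T + D
# Equivalent grouping: (Phi - I) D (Phi - I)^H + Q
Sigma_alt = (Phi - np.eye(2)) @ D @ (Phi - np.eye(2)).conj().T + Q
```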

4.2. T k Centralized Fusion Predictor

Theorem 2.
The optimal T k centralized fusion predictor x ^ T k ( t + τ | t ) for the state x ( t ) is obtained by extracting the first n components of the optimal estimator x ^ k ( t + τ | t ) , which is recursively computed from the expression
x ^ k ( t + τ | t ) = Φ k ( t + τ 1 ) x ^ k ( t + τ 1 | t ) , τ 2
with initialization given by the one-step predictor x ^ k ( t + 1 | t ) in (11).
Moreover, the T k -proper prediction error pseudo covariance matrix P T k ( t + τ | t ) is obtained from the prediction error pseudo covariance matrix P k ( t + τ | t ) , computed from the recursive expression
P k ( t + τ | t ) = Φ k ( t + τ 1 ) P k ( t + τ 1 | t ) Φ k H ( t + τ 1 ) + Q k ( t + τ 1 ) , τ 2
with initialization given by the one-step prediction error pseudo covariance matrix in (18).
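The τ-step recursions of Theorem 2 amount to propagating the one-step predictor and its error covariance through the state transition. A minimal sketch with placeholder matrices (not values from the paper):

```python
import numpy as np

# tau-step predictor of Theorem 2: propagate the one-step predictor and
# its error covariance through the state equation.  Phi, Q, and the
# one-step quantities below are illustrative placeholders.
Phi = np.array([[0.9, 0.05], [0.0, 0.85]])
Q = 0.3 * np.eye(2)

x_pred = np.array([1.0, -0.5])  # assumed one-step predictor x_k(t+1|t)
P_pred = np.eye(2)              # assumed one-step error covariance (18)
for tau in range(2, 4):         # up to the 3-step predictor of Section 5
    x_pred = Phi @ x_pred                        # recursion (19)
    P_pred = Phi @ P_pred @ Phi.conj().T + Q     # recursion (20)
```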

4.3. T k Centralized Fusion Smoother

Theorem 3.
The optimal T k centralized fusion fixed-point smoother x ^ T k ( t | s ) , for a fixed instant t < s , for the state x ( t ) is obtained by extracting the first n components of the optimal estimator x ^ k ( t | s ) , which is recursively computed from the expressions
x ^ k ( t | s ) = x ^ k ( t | s 1 ) + L k ( t , s ) ε k ( s ) , t 1
with initial condition x ^ k ( t | t ) given by (10), and where the innovations ε k ( s ) are recursively computed from (12) and L k ( t , s ) = Θ k ( t , s ) Ω k 1 ( s ) with Ω k 1 ( s ) obtained from the recursive expression (14) and
Θ k ( t , s ) = E k ( t , s 1 ) Φ k H ( s 1 ) Θ k ( t , s 1 ) H k H ( s 1 ) C k T Π k ( s ) + E k ( t , s 1 ) C k T Θ k ( t , s 1 ) G k H ( s 1 ) I m Π k ( s ) ,
E k ( t , s ) = E k ( t , s 1 ) Φ k H ( s 1 ) Θ k ( t , s 1 ) H k H ( s 1 ) I C k T Π k ( s ) L k H ( s ) E k ( t , s 1 ) C k T Θ k ( t , s 1 ) G k H ( s 1 ) I m Π k ( s ) L k H ( s ) ,
with initialization Θ k ( t , t ) = Θ k ( t ) given by (13) and E k ( t , t ) = P k ( t | t ) .
Furthermore, the T k fixed-point smoothing error pseudo covariance matrix is recursively computed through the expression
P k ( t | s ) = P k ( t | s 1 ) Θ k ( t , s ) Ω k 1 ( s ) Θ k H ( t , s ) ,
with P k ( t | t ) the filtering error pseudo covariance matrix (17).
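The structure of the fixed-point smoother can be sketched as follows: at each new instant s > t, the smoothed estimate of x ( t ) is corrected by the current innovation. The innovations, gains, and cross covariances below are simulated placeholders, not the output of the full algorithm.

```python
import numpy as np

# Structural sketch of the fixed-point smoother of Theorem 3 at the fixed
# point t = 20.  eps, Omega, and Theta are synthetic placeholders; in the
# real algorithm they come from recursions (12), (14), and (22).
rng = np.random.default_rng(0)
n = 2
x_smooth = np.zeros(n)            # assumed filter output x_k(t|t)
P_smooth = np.eye(n)              # assumed P_k(t|t)
for s in range(21, 31):           # s > t
    eps = rng.standard_normal(n)                  # innovation eps_k(s)
    Omega = np.eye(n)                             # innovation covariance
    Theta = 0.5 ** (s - 20) * np.eye(n)           # cross covariance Theta_k(t,s)
    L = Theta @ np.linalg.inv(Omega)              # gain L_k(t,s)
    x_smooth = x_smooth + L @ eps                 # update (21)
    P_smooth = P_smooth - Theta @ np.linalg.inv(Omega) @ Theta.conj().T  # (24)
```

Note how each pass can only reduce the error pseudo covariance, since a positive semidefinite term is subtracted at every s.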
As mentioned above, the main advantage of the proposed T k centralized fusion algorithms is that the resulting T k centralized fusion estimators coincide with their optimal TWL counterparts while providing computational savings with respect to the TWL approach.
Remark 6.
The computational demand of the proposed tessarine estimation algorithms under T k -properness conditions, k = 1 , 2 , is similar to that of their counterparts in the quaternion domain, i.e., the QSL and QSWL estimation algorithms, respectively (see [34] for a comparative analysis of the computational complexity of quaternion estimators). Therefore, the computational load of TWL estimation algorithms is of order O ( 64 R 3 n 3 ) , whereas the T k algorithms, k = 1 , 2 , are of order O ( m 3 ) , with m = k R n .
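These orders can be checked with a quick back-of-the-envelope computation; R and n below are illustrative values:

```python
# Complexity comparison of Remark 6: TWL is O(64 R^3 n^3), while the T_k
# algorithm works in dimension m = k R n and is O(m^3).
# R (number of sensors) and n (state dimension) are illustrative values.
R, n = 3, 4
twl = 64 * R**3 * n**3        # TWL cost (up to a constant factor)
t1 = (1 * R * n) ** 3         # T1-proper: m = R n
t2 = (2 * R * n) ** 3         # T2-proper: m = 2 R n
ratio_t1 = twl // t1          # reduction factor under T1-properness
ratio_t2 = twl // t2          # reduction factor under T2-properness
```

Whatever the values of R and n, the dimension reduction yields constant factors of 64 ( k = 1 ) and 8 ( k = 2 ) in the cubic term.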

5. Simulation Examples

In this section, the effectiveness of the above T k -proper centralized fusion estimation algorithms is experimentally analyzed. With this aim, the following simulation examples have been chosen to reveal the superiority of the proposed T k -proper estimators over their counterparts in the quaternion domain when T k -properness conditions are present.
Let us consider the following tessarine system with three sensors:
x ( t + 1 ) = f 1 x ( t ) + u ( t ) z ( i ) ( t ) = x ( t ) + v ( i ) ( t ) , i = 1 , 2 , 3 y ( i ) ( t ) = γ ( i ) ( t ) z ( i ) ( t ) + ( 1 γ ( i ) ( t ) ) z ( i ) ( t 1 ) , i = 1 , 2 , 3
with f 1 = 0.9 0.3 η + 0.02 η + 0.1 η T . The following assumptions are made on the initial state and additive noises.
1.
The initial state x 0 is a tessarine Gaussian variable determined by the real covariance matrix
E [ x r ( 0 ) x r T ( 0 ) ] = a 0 2.5 0 0 4 0 2.5 2.5 0 a 0 0 2.5 0 4 .
2.
u ( t ) is a tessarine white Gaussian noise with a real covariance matrix
E [ u r ( t ) u r T ( s ) ] = 0.9 0 c 0 0 b 0 c c 0 0.9 0 0 c 0 b δ t , s .
3.
The measurement noises v ( i ) ( t ) of the three sensors are tessarine white Gaussian noises defined as
v ( i ) ( t ) = α i u ( t ) + w ( i ) ( t ) ,
where the coefficients α i are the constant scalars α 1 = 0.5 , α 2 = 0.8 , and α 3 = 0.4 , and w ( i ) ( t ) , i = 1 , 2 , 3 , are zero-mean T 1 -proper tessarine white Gaussian noises with real covariance matrices
E [ w ( i ) r ( t ) w ( i ) r T ( s ) ] = β i 0 0 0 0 β i 0 0 0 0 β i 0 0 0 0 β i δ t , s ,
with β 1 = 4 , β 2 = 8 , and β 3 = 25 , and independent of u ( t ) . Note that, if α i = 0 , then the noises u ( t ) and v ( i ) ( t ) are uncorrelated. Otherwise, the further α i is from 0, the stronger the correlation between u ( t ) and v ( i ) ( t ) .
Moreover, at every sensor i, the Bernoulli random variables γ ν ( i ) ( t ) , ν = r , η , η , η , have the constant probabilities P [ γ ν ( i ) ( t ) = 1 ] = p ν ( i ) , for all t ∈ T .
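To illustrate the mechanics of this observation model, the following Python sketch simulates a single real scalar channel of the three-sensor system with randomly delayed measurements. The tessarine state is collapsed to one real component for illustration; the coefficients α i , β i , the state noise variance, and the delay probabilities are taken from Section 5, while the rest of the tessarine structure is omitted.

```python
import numpy as np

# Scalar sketch of the three-sensor model of Section 5:
#   x(t+1) = 0.9 x(t) + u(t),   z_i(t) = x(t) + alpha_i u(t) + w_i(t),
#   y_i(t) = gamma_i(t) z_i(t) + (1 - gamma_i(t)) z_i(t-1).
# Only one real component of the tessarine signal is simulated.
rng = np.random.default_rng(1)
T = 200
alphas = [0.5, 0.8, 0.4]         # noise correlation coefficients (26)
betas = [4.0, 8.0, 25.0]         # sensor noise variances
probs = [0.9, 0.5, 0.8]          # Bernoulli probabilities p_i (Section 5.1)

x = np.zeros(T)
z = np.zeros((3, T))
y = np.zeros((3, T))
for t in range(T - 1):
    u = rng.normal(scale=np.sqrt(0.9))           # state noise u(t)
    for i in range(3):
        w = rng.normal(scale=np.sqrt(betas[i]))  # sensor-specific noise w_i(t)
        z[i, t] = x[t] + alphas[i] * u + w       # v_i(t) = alpha_i u(t) + w_i(t)
        gamma = rng.random() < probs[i]          # 1 = up to date, 0 = delayed
        y[i, t] = z[i, t] if (gamma or t == 0) else z[i, t - 1]
    x[t + 1] = 0.9 * x[t] + u                    # state transition
```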
In this framework, a comparative study between the tessarine and quaternion approaches is carried out to evaluate the performance of the proposed filtering, prediction, and smoothing algorithms under T 1 and T 2 -properness conditions. Specifically, the filtering, 3-step prediction, and fixed-point smoothing (at t = 20 ) problems are considered in our simulations.

5.1. Study Case 1: T 1 -Proper Systems

Consider the values a = 4 in (25) and b = 0.9 and c = 0.3 in (26), and the Bernoulli probabilities
  • p r ( 1 ) = p η ( 1 ) = p η ( 1 ) = p η ( 1 ) = p 1 ;
  • p r ( 2 ) = p η ( 2 ) = p η ( 2 ) = p η ( 2 ) = p 2 ;
  • p r ( 3 ) = p η ( 3 ) = p η ( 3 ) = p η ( 3 ) = p 3 .
Note that, under these conditions, both x ( t ) and y ( i ) ( t ) , i = 1 , 2 , 3 , are jointly T 1 -proper.
For the purpose of comparison, the error variances of both the T 1 and QSL estimators have been computed for different Bernoulli probabilities p i , i = 1 , 2 , 3 . We denote the QSL error variances by P Q S L ( t | s ) . Then, as a performance measure, we compute the difference between the T 1 and QSL error variances associated with the filter, D E 1 ( t | t ) = P Q S L ( t | t ) P 1 ( t | t ) , the 3-step predictor, D E 1 ( t + 3 | t ) = P Q S L ( t + 3 | t ) P 1 ( t + 3 | t ) , and the fixed-point smoother at t = 20 , D E 1 ( 20 | t ) = P Q S L ( 20 | t ) P 1 ( 20 | t ) , for t > 20 .
Firstly, these differences are displayed in Figure 1 considering different degrees of correlation between the state and measurement noises: independent noises ( α 1 = α 2 = α 3 = 0 ), low correlation ( α 1 = 0.5 , α 2 = 0.8 , α 3 = 0.4 ), and high correlation ( α 1 = 5 , α 2 = 8 , α 3 = 4 ), as well as two levels of uncertainty: high delay probabilities (case p 1 = 0.5 , p 2 = 0.2 , p 3 = 0.4 ) and low delay probabilities (case p 1 = 0.9 , p 2 = 0.5 , p 3 = 0.8 ). As we can see, in all situations these differences are positive, which indicates that the proposed T 1 estimators outperform the QSL estimators. Moreover, this superiority in performance increases as the correlation between the system noises grows. With respect to the levels of uncertainty, a better behavior of the T 1 estimators over their QSL counterparts is generally observed in the scenario of high delay probabilities, i.e., when the Bernoulli probabilities are smaller.
Next, in order to evaluate the performance of the proposed estimators versus the probability of delay, we consider the same Bernoulli probabilities in the three sensors ( p 1 = p 2 = p 3 = p ), and the difference between the T 1 and QSL error variances is computed for different values of p. Figure 2 illustrates these differences for p = 0 , 0.2 , 0.4 , 0.6 , 0.8 , 1 . In these figures, the superiority in performance of the T 1 estimators over the QSL estimators is confirmed, since D E 1 > 0 in every case. Additionally, in the filtering and prediction problems, this superiority is greater for the smallest Bernoulli probabilities, i.e., when the delay probabilities are higher. On the other hand, in the fixed-point smoothing problem, a similar behavior is obtained for Bernoulli probabilities p and 1 − p , with the advantage of the T 1 smoothing algorithm over the QSL one being greatest at intermediate values of p (cases p = 0.4 and p = 0.6 ). These results are examined in detail below.
Our aim now is to analyze the benefits of our T 1 estimation algorithms in terms of the Bernoulli probabilities of the three sensors p. In this analysis, different values of c in (26) are also considered. Then, the means of the difference between the T 1 and QSL filtering, prediction, and fixed-point smoothing error variances have been computed as
  • Filtering problem: M D E 1 p ( t | t ) = 1 100 t = 1 100 D E 1 p ( t | t ) ;
  • 3-step prediction problem: M D E 1 p ( t + 3 | t ) = 1 97 t = 1 97 D E 1 p ( t + 3 | t ) ;
  • Fixed-point smoothing problem: M D E 1 p ( 20 | t ) = 1 80 t = 1 80 D E 1 p ( 20 | t ) ;
for p varying from 0 to 1 and the values c = 0 , 0.3 , 0.6 , and 0.8 , where D E 1 p ( t | t ) , D E 1 p ( t + 3 | t ) , and D E 1 p ( 20 | t ) denote the difference between the T 1 and QSL filtering, 3-step prediction, and fixed-point smoothing error variances, respectively, for a value p of the Bernoulli probability. Note that, in the case c = 0 , the noise u ( t ) is not only T 1 -proper but also Q -proper, and a higher value of c means that the noise u ( t ) moves further away from the Q -properness condition. The results of this analysis are depicted in Figure 3. On the one hand, we can clearly observe that the T 1 filtering and prediction estimators perform best relative to their QSL counterparts for the smallest Bernoulli probabilities. Specifically, except for the case c = 0.8 , the maximum difference between the T 1 and QSL errors is achieved when the Bernoulli probability takes the value 0, i.e., when every measurement arrives with a one-step delay. However, in the fixed-point smoothing problem, the T 1 approach is most advantageous when the Bernoulli probabilities p tend to 0.5 . On the other hand, in every case, the superiority of our T 1 estimation algorithms becomes more evident as the parameter c in (26) grows, i.e., as the noise u ( t ) moves further away from the Q -properness condition.
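The performance measures above are simple time averages of the error variance differences; a schematic computation with synthetic placeholder variance sequences (not the output of the actual algorithms):

```python
import numpy as np

# Schematic computation of the averaged performance measures of Section 5.1.
# The error variance sequences are synthetic placeholders, not produced by
# the actual estimation algorithms.
rng = np.random.default_rng(2)
P_qsl = 1.0 + 0.2 * rng.random(100)   # hypothetical QSL filtering variances
P_t1 = P_qsl - 0.1                    # hypothetical T1 variances (smaller)

DE1 = P_qsl - P_t1                    # DE1^p(t|t), t = 1..100
MDE1_filter = DE1.mean()              # MDE1^p(t|t) = (1/100) sum_t DE1^p(t|t)
MDE1_pred = DE1[:97].mean()           # 3-step prediction: average over 97 terms
MDE1_smooth = DE1[:80].mean()         # fixed-point smoothing: average over 80
```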

5.2. Study Case 2: T 2 -Proper Systems

Consider the values a = 6 in (25), b = c = 0.3 in (26), and the Bernoulli probabilities for the three sensors as in Section 5.1. Note that, under these conditions, both x ( t ) and y ( i ) ( t ) , i = 1 , 2 , 3 , are jointly T 2 -proper.
Thus, we are interested in comparing the behavior of T 2 centralized fusion estimators with their counterparts in the quaternion domain, i.e., the quaternion semi-widely linear (QSWL) estimators. For this purpose, the T 2 and QSWL error variances, P 2 ( t | s ) and P Q S W L ( t | s ) , respectively, have been computed by considering different Bernoulli probabilities for the three sensors.
Specifically, we consider the filtering, the 3-step prediction, and the fixed-point smoothing problems at t = 20 , and, as a measure of comparison, we use the difference between both QSWL and T 2 error variances, which are defined as D E 2 ( t | t ) = P Q S W L ( t | t ) P 2 ( t | t ) (filtering), D E 2 ( t + 3 | t ) = P Q S W L ( t + 3 | t ) P 2 ( t + 3 | t ) (3-step prediction), and D E 2 ( 20 | t ) = P Q S W L ( 20 | t ) P 2 ( 20 | t ) (fixed-point smoothing).
Figure 4 and Figure 5 compare the difference between QSWL and T 2 centralized estimation error variances for different Bernoulli probabilities p 1 , p 2 and p 3 . Specifically, Figure 4 analyzes the filtering and 3-step prediction error variance differences D E 2 ( t | t ) and D E 2 ( t + 3 | t ) for the following cases:
1.
Case 1: for values of p 1 = 0.1 , 0.5 , 0.9 in three situations: p 2 = 0.9 and p 3 = 0.1 , p 2 = 0.1 and p 3 = 0.9 , and p 2 = p 3 = 0.5 ;
2.
Case 2: for values of p 2 = 0.1 , 0.5 , 0.9 in three situations: p 1 = 0.9 and p 3 = 0.1 , p 1 = 0.1 and p 3 = 0.9 , and p 1 = p 3 = 0.5 ;
3.
Case 3: for values of p 3 = 0.1 , 0.5 , 0.9 in three situations: p 1 = 0.9 and p 2 = 0.1 , p 1 = 0.1 and p 2 = 0.9 , and p 1 = p 2 = 0.5 .
It should be highlighted that similar results are obtained with any other combination of Bernoulli probabilities p i , i = 1 , 2 , 3 .
From these figures, we can reaffirm that T 2 processing is a better approach than QSWL processing in terms of performance ( D E 2 > 0 ). Moreover, in the filtering and 3-step prediction problems (Figure 4), this fact is more evident when the probabilities of the Bernoulli variables decrease (that is, when the delay probabilities increase).
The differences between the QSWL and T 2 error variances for the fixed-point smoothing problem are illustrated in Figure 5. Note that, since the behavior of the differences between the QSWL and T 2 fixed-point smoothing errors is similar for Bernoulli probability values p i and 1 − p i , these differences are analyzed in the following cases:
1.
Case 4: for values of p 1 = 0.1 , 0.3 , 0.5 in three situations: p 2 = 0.1 and p 3 = 0.3 , p 2 = 0.3 and p 3 = 0.1 , and p 2 = p 3 = 0.3 .
2.
Case 5: for values of p 2 = 0.1 , 0.3 , 0.5 in three situations: p 1 = 0.1 and p 3 = 0.3 , p 1 = 0.3 and p 3 = 0.1 , and p 1 = p 3 = 0.3 .
3.
Case 6: for values of p 3 = 0.1 , 0.3 , 0.5 in three situations: p 1 = 0.1 and p 2 = 0.3 , p 1 = 0.3 and p 2 = 0.1 , and p 1 = p 2 = 0.3 .
In every situation, the better behavior of T 2 processing over QSWL processing is verified; moreover, this superiority increases as the Bernoulli probabilities tend to 0.5 , i.e., when there is a similar chance of receiving updated and delayed information.

6. Discussion

Among the different sensor fusion methods, centralized fusion techniques are the ones that provide the optimal estimators from the measurements of all sensors. Nevertheless, to avoid the computational load involved in these estimates, especially in systems with a large number of sensors, suboptimal estimation algorithms have traditionally been designed by using a decentralized fusion approach. This paper has overcome these computational difficulties, without giving up the optimal solution, by considering hypercomplex algebras. Quaternions and, more recently, tessarines are the most commonly used 4D hypercomplex algebras in signal processing. Since both quaternions and tessarines are isomorphic to R 4 , they generally involve the same computational complexity. Interestingly, under properness conditions, the dimension of the problem is reduced to a half for QSWL and T 2 -proper methods and to a quarter for QSL and T 1 -proper methods, which leads to a significant reduction in the computational load of our algorithms. It is precisely in this context that hypercomplex algebras become an ideal tool, with computational advantages over the existing methods, to address the centralized fusion estimation problem.
In general, neither of these algebras always performs better than the other, and the choice of the most suitable one is conditioned by the characteristics of the signal. Its commutativity and reduced computational complexity make the tessarine algebra particularly attractive for our purposes. Thus, under T k -properness conditions, filtering, prediction, and fixed-point smoothing algorithms of reduced dimension have been devised for the estimation of a vectorial tessarine signal based on one-step randomly delayed observations coming from multiple-sensor stochastic systems with different delay rates and correlated noises. The reduction of the dimension of the problem under T k -properness scenarios allows these algorithms to compute the optimal estimates at a lower computational cost than the real processing approach; this computational saving cannot be attained in the real field.
The good performance of the proposed algorithms has been experimentally illustrated by means of two simulation examples, which evidence the better behavior of the proposed T k estimators over their quaternion-domain counterparts under T k -properness conditions.
In future research, we will set out to explore the design of decentralized fusion estimation algorithms for hypercomplex signals and investigate the use of new hypercomplex algebras in this field.

Author Contributions

Conceptualization, R.M.F.-A.; Formal analysis, R.M.F.-A., J.N.-M. and J.C.R.-M.; Funding acquisition, R.M.F.-A. and J.N.-M.; Investigation, R.M.F.-A. and J.N.-M.; Methodology, R.M.F.-A.; Project administration, R.M.F.-A. and J.N.-M.; Software, R.M.F.-A.; Supervision, J.N.-M. and J.C.R.-M.; Validation, R.M.F.-A., J.N.-M. and J.C.R.-M.; Visualization, J.N.-M. and J.C.R.-M.; Writing—original draft, R.M.F.-A.; Writing—review & editing, J.N.-M. and J.C.R.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported in part by I+D+i Project with reference number 1256911, under ‘Programa Operativo FEDER Andalucía 2014–2020’, Junta de Andalucía, and Project EI_FQM2_2021 of ‘Plan de Apoyo a la Investigación 2021–2022’ of the University of Jaén.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Proof of Theorem 1

The proof is based on the innovation technique. Consider the one-state delay model:
x ¯ ( t + 1 ) = Φ ¯ ( t ) x ¯ ( t ) + u ¯ ( t ) , t 0 y k ( t ) = D ˜ k γ ( t ) C ¯ x ¯ ( t ) + D ˜ k ( 1 γ ) ( t ) C ¯ x ¯ ( t 1 ) + D ˜ k γ ( t ) v ( t ) + D ˜ k ( 1 γ ) ( t ) v ( t 1 ) , t 1
and define the innovations as ε k ( t ) = y k ( t ) y ^ k ( t | t 1 ) .
In order to simplify the proof of Theorem 1, the following preliminary results are first established.

Appendix A.1. Preliminary Results

The following property, stated without proof, about the correlations between the innovations ε k ( t ) and the augmented state x ¯ ( t ) and the noises u ¯ ( t ) and v k ( t ) , will be useful in the proof of Theorem 1.
Property A1.
Given the system (A1), and under the Assumptions 1-4, the following correlations hold:
(1)
E [ u ¯ ( t ) ε k H ( t ) ] = S ¯ ( t ) Π ˜ k γ H ( t ) .
(2)
E [ u ¯ ( t ) ε k H ( s ) ] = 0 4 n × m , for t > s .
(3)
E [ v ¯ ( t ) ε k H ( t ) ] = R ¯ ( t ) Π ˜ k γ H ( t ) .
(4)
E [ v ¯ ( t ) ε k H ( s ) ] = 0 4 R n × m , for t > s .
Moreover, the following results will be of interest in the derivation of the formulas given in Theorem 1.
Lemma A1.
Denote Δ D ˜ k γ ( t ) = D ˜ k γ ( t ) Π ˜ k γ ( t ) and Δ D ˜ k ( 1 γ ) ( t ) = D ˜ k ( 1 γ ) ( t ) Π ˜ k ( 1 γ ) ( t ) . For any tessarine random vectors α 1 ( t ) , α 2 ( t ) T 4 R n and β ( t ) T q , for any dimension q, the following relations hold:
 1 
E Δ D ˜ k γ ( t ) α 1 ( t ) α 2 H ( s ) Δ D ˜ k γ H ( t ) = L k Cov ( γ r ( t ) ) L ¯ H E [ α 1 ( t ) α 2 ¯ H ( s ) ] L ¯ L k H .
 2 
E Δ D ˜ k γ ( t ) α 1 ( t ) α 2 H ( s ) D ˜ k γ H ( t ) = L k Cov ( γ r ( t ) ) L ¯ H E [ α 1 ( t ) α 2 H ( s ) ] L ¯ L k H .
 3 
E Δ D ˜ k γ ( t ) α 1 ( t ) α 2 H ( s ) D ˜ k ( 1 γ ) H ( t ) = L k Cov ( γ r ( t ) ) L ¯ H E [ α 1 ( t ) α 2 H ( s ) ] L ¯ L k H .
 4 
E Δ D ˜ k γ ( t ) α 1 ( t ) β H ( s ) = 0 m × q .
 5 
E D ˜ k γ ( t ) α 1 ( t ) α 2 H ( s ) D ˜ k γ H ( t ) = L k E γ r ( t ) γ r T ( t ) L ¯ H E [ α 1 ( t ) α 2 H ( s ) ] L ¯ L k H .
 6 
E D ˜ k γ ( t ) α 1 ( t ) α 2 H ( s ) D ˜ k ( 1 γ ) H ( t ) = L k E γ r ( t ) ( 1 4 R n γ r ( t ) ) T L ¯ H E [ α 1 ( t ) α 2 H ( s ) ] L ¯ L k H .
 7 
E D ˜ k ( 1 γ ) ( t ) α 1 ( t ) α 2 H ( s ) D ˜ k ( 1 γ ) H ( t ) = L k E ( I m γ r ( t ) ) ( I m γ r ( t ) ) T L ¯ H E [ α 1 ( t ) α 2 H ( s ) ] L ¯ L k H .
Proof. 
The proof is immediate from (1) and taking into account that D ˜ k γ ( t ) = L k diag ( γ r ( t ) ) L ¯ H and D ˜ k ( 1 γ ) ( t ) = L k diag ( 1 4 R n γ r ( t ) ) L ¯ H . □

Appendix A.2. Expressions in Theorem 1

Although tessarine algebra is not a Hilbert space, the existence and uniqueness of the projection of an element on the set of measurements { y k ( 1 ) , , y k ( s ) } , for k = 1 , 2 , are guaranteed ([23]). Now, from Theorem 3 of [23], we obtain
x ¯ ^ ( t | t ) = x ¯ ^ ( t | t 1 ) + L ˜ k ( t ) ε k ( t ) ,
with L ˜ k ( t ) = Θ ˜ k ( t ) Ω k 1 ( t ) , where Θ ˜ k ( t ) = E [ x ¯ ( t ) ε k H ( t ) ] and Ω k ( t ) = E [ ε k ( t ) ε k H ( t ) ] . Then, by applying T k -properness conditions, (10) is directly devised.
Taking projections on both sides of the state and observation equations in (A1) onto the linear space spanned by { ε k ( 1 ) , , ε k ( t 1 ) } , and using Property A1, we have
x ¯ ^ ( t + 1 | t ) = Φ ¯ ( t ) x ¯ ^ ( t | t ) + H ˜ k ( t ) ε k ( t )
y ^ k ( t | t 1 ) = Π ˜ k γ ( t ) C ¯ x ¯ ^ ( t | t 1 ) + Π ˜ k ( 1 γ ) ( t ) C ¯ x ¯ ^ ( t 1 | t 1 ) + Π ˜ k ( 1 γ ) ( t ) v ¯ ^ ( t 1 | t 1 )
where H ˜ k ( t ) = S ¯ ( t ) Π ˜ k γ H ( t ) Ω k 1 ( t ) and v ¯ ^ ( t | t ) = G ˜ k ( t ) ε k ( t ) , with G ˜ k ( t ) = R ¯ ( t ) Π ˜ k γ H ( t ) Ω k 1 ( t ) .
Then, (11) follows from (A3) and the T k -properness conditions on Φ ¯ ( t ) and Π ˜ k γ ( t ) established in Proposition 1 and Remark 3. Likewise, (12) is easily obtained from (A4).
Consider now the gain matrix L ˜ k ( t ) = Θ ˜ k ( t ) Ω k 1 ( t ) in (A2). Denote the prediction error and its covariance matrix as ϵ ¯ ( t | t 1 ) = x ¯ ( t ) x ¯ ^ ( t | t 1 ) and P ¯ ( t | t 1 ) = E [ ϵ ¯ ( t | t 1 ) ϵ ¯ H ( t | t 1 ) ] , respectively. Then, by applying (A1) and (A4), ϵ ¯ ( t | t 1 ) x ¯ ^ ( t | t 1 ) , Property 3 and Property A1, we have
Θ ˜ k ( t ) = P ¯ ( t | t 1 ) C ¯ T Π ˜ k γ H ( t ) + Φ ¯ ( t 1 ) P ¯ ( t 1 | t 1 ) C ¯ T Π ˜ k ( 1 γ ) H ( t ) + S ¯ ( t 1 ) Π ˜ k ( 1 γ ) H ( t ) H ˜ k ( t 1 ) Θ ˜ k H ( t 1 ) C ¯ T Π ˜ k ( 1 γ ) H ( t ) Φ ¯ ( t 1 ) Θ ˜ k ( t 1 ) G ˜ k H ( t 1 ) Π ˜ k ( 1 γ ) H ( t ) H ˜ k ( t 1 ) Ω k ( t 1 ) G ˜ k H ( t 1 ) Π ˜ k ( 1 γ ) H ( t ) , t > 1
and thus, the recursive expression (13) is directly obtained from (A5), by applying the T k -properness conditions on Φ ¯ ( t ) , and denoting by P k ( t | t 1 ) the first m × m submatrix of P ¯ ( t | t 1 ) .
Next, we devise the expression for the innovation covariance matrix (14). For this purpose, the innovations are rewritten in the following form
ε k ( t ) = Δ D ˜ k γ ( t ) C ¯ x ¯ ( t ) + Π ˜ k γ ( t ) C ¯ ϵ ¯ ( t | t 1 ) Δ D ˜ k γ ( t ) C ¯ x ¯ ( t 1 ) + Π ˜ k ( 1 γ ) ( t ) C ¯ ϵ ¯ ( t 1 | t 1 ) + D ˜ k γ ( t ) v ( t ) + D ˜ k ( 1 γ ) ( t ) v ( t 1 ) Π ˜ k ( 1 γ ) ( t ) G ˜ k ( t 1 ) ε k ( t 1 ) .
From (A3), the prediction error ϵ ¯ ( t + 1 | t ) can be expressed as
ϵ ¯ ( t + 1 | t ) = Φ ¯ ( t ) ϵ ¯ ( t | t ) + u ¯ ( t ) H ˜ k ( t ) ε k ( t ) .
As a consequence, from (A7), using Property 3 and Property A1, and taking into account that ϵ ¯ ( t | t ) ε k ( t ) , we have
E [ u ¯ ( t ) ϵ ¯ H ( t | t ) ] = H ˜ k ( t ) Θ ˜ k H ( t ) ,
E [ ϵ ¯ ( t + 1 | t ) ϵ ¯ H ( t | t ) ] = Φ ¯ ( t ) P ¯ ( t | t ) H ˜ k ( t ) Θ ˜ k H ( t ) ,
E [ ϵ ¯ ( t | t ) v ¯ H ( t ) ] = Θ ˜ k ( t ) G ˜ k H ( t ) ,
E [ ϵ ¯ ( t | t ) v ¯ H ( t + 1 ) ] = 0 4 n × 4 R n ,
E [ ϵ ¯ ( t + 1 | t ) v ¯ H ( t ) ] = S ¯ ( t ) Φ ¯ ( t ) Θ ˜ k ( t ) G ˜ k H ( t ) H ˜ k ( t ) Ω ˜ k ( t ) G ˜ k H ( t ) ,
E [ ϵ ¯ ( t + 1 | t ) v ¯ H ( t + 1 ) ] = 0 4 n × 4 R n .
Then, the expression (14) for the innovation covariance matrix is obtained from (A6), by using Lemma A1, Property 3, Property A1, (A9)–(A13), ϵ ¯ ( t + 1 | t ) ε k ( t ) , and by applying T k properness conditions. Furthermore, the recursion of D ¯ ( t ) = E [ x ¯ ( t ) x ¯ H ( t ) ] given in (16) is a direct consequence of the augmented state equation in system (A1). In a similar way, Equation (15) follows.
In the following step, consider the filtering error covariance matrix P ¯ ( t | t ) = E [ ϵ ¯ ( t | t ) ϵ ¯ H ( t | t ) ] with ϵ ¯ ( t | t ) = x ¯ ( t ) x ¯ ^ ( t | t ) . From (A2), we directly obtain that P ¯ ( t | t ) = P ¯ ( t | t 1 ) Θ ˜ k ( t ) Ω k 1 ( t ) Θ ˜ k H ( t ) and thus (17) holds by virtue of T k properness conditions.
Finally, from (A7), and taking into consideration that ϵ ¯ ( t | t ) ε k ( t ) , (A8), and Property A1, we have
P ¯ ( t + 1 | t ) = Φ ¯ ( t ) P ¯ ( t | t ) Φ ¯ H ( t ) H ˜ k ( t ) Θ ˜ k H ( t ) Φ ¯ H ( t ) Φ ¯ ( t ) Θ ˜ k ( t ) H ˜ k H ( t ) H ˜ k ( t ) Ω k ( t ) H ˜ k H ( t ) + Q ¯ ( t ) .
From T k properness conditions (18) follows.

Appendix B. Proof of Theorem 2

From the projection of x ( t + τ ) onto the linear space spanned by { ε k ( 1 ) , , ε k ( t ) } , we have
x ¯ ^ ( t + τ | t ) = Φ ¯ ( t + τ 1 ) x ¯ ^ ( t + τ 1 | t ) , τ 2
Then, from T k properness conditions, (19) holds.
Finally, from (19), it is clear that the prediction error covariance matrix P k ( t + τ | t ) satisfies the recursive expression (20).

Appendix C. Proof of Theorem 3

By projecting the state x ( t ) onto the linear space spanned by { ε k ( 1 ) , , ε k ( s ) } , we have
x ¯ ^ ( t | s ) = x ¯ ^ ( t | s 1 ) + L ˜ k ( t , s ) ε k ( s ) , t < s
with L ˜ k ( t , s ) = Θ ˜ k ( t , s ) Ω k 1 ( s ) , where Θ ˜ k ( t , s ) = E [ x ¯ ( t ) ε k H ( s ) ] . Then, (21) is directly derived from (A8), by applying T k -properness conditions.
Consider the matrix Θ ˜ k ( t , s ) . From (12) and (8), we have
Θ ˜ k ( t , s ) = E [ x ¯ ( t ) ϵ ¯ H ( s | s 1 ) ] C ¯ T Π ˜ k γ ( s ) + E [ x ¯ ( t ) ϵ ¯ H ( s 1 | s 1 ) ] C ¯ T ( I m Π ˜ k γ ( s ) ) Θ ˜ k ( t , s 1 ) G ˜ k H ( s 1 ) ( I m Π ˜ k γ ( s ) )
Let us define the matrix E ¯ ( t , s ) = E x ¯ ( t ) ϵ ¯ H ( s | s ) . Thus, from (A1) and (A3), it follows that
Θ ˜ k ( t , s ) = E ¯ ( t , s 1 ) Φ ¯ H ( s 1 ) Θ ˜ k ( t , s 1 ) H ˜ k H ( s 1 ) C ¯ T Π ˜ k γ ( s ) + E ¯ ( t , s 1 ) C ¯ T Θ ˜ k ( t , s 1 ) G ˜ k H ( s 1 ) ( I m Π ˜ k γ ( s ) )
Then, (22) follows from T k -properness conditions.
In a similar way, from (A1)–(A3) and (A9), E ¯ ( t , s ) is of the form
E ¯ ( t , s ) = E ¯ ( t , s 1 ) Φ ¯ H ( s 1 ) Θ ˜ k ( t , s 1 ) H ˜ k H ( s 1 ) I m C ¯ T Π ˜ k γ ( s ) L ˜ k H ( s ) E ¯ ( t , s 1 ) C ¯ T Θ ˜ k ( t , s 1 ) G ˜ k H ( s 1 ) ( I m Π ˜ k γ ( s ) ) L ˜ k H ( s )
where E ¯ ( t , t ) = P ¯ ( t | t ) . Then, (23) follows from T k -properness conditions.
Finally, (24) can be easily derived from (21).

References

  1. Li, W.L.; Jia, Y.M.; Du, J.P.; Fu, J.C. State estimation for nonlinearly coupled complex networks with application to multi-target tracking. Neurocomputing 2018, 275, 1884–1892. [Google Scholar] [CrossRef]
  2. Lee, J.S.; McBride, J. Extended object tracking via positive and negative information fusion. IEEE Trans. Signal Process. 2019, 67, 1812–1823. [Google Scholar] [CrossRef]
  3. Kurkin, A.A.; Tyugin, D.Y.; Kuzin, V.D.; Chernov, A.G.; Makarov, V.S.; Beresnev, P.O.; Filatov, V.I.; Zeziulin, D.V. Autonomous mobile robotic system for environment monitoring in a coastal zone. Procedia Comput. Sci. 2017, 103, 459–465. [Google Scholar] [CrossRef]
  4. Gingras, D. An overview of positioning and data fusion techniques applied to land vehicle navigation systems. In Automotive Informatics and Communicative Systems; Information Science Reference; IGI Global: Hershey, PA, USA, 2009; pp. 219–246. [Google Scholar]
  5. Gao, B.; Hu, G.; Gao, S.; Zhong, Y.; Gu, C.; Beresnev, P.O.; Filatov, V.I.; Zeziulin, D.V. Multi-sensor optimal data fusion for INS/GNSS/CNS integration based on unscented Kalman filter. Int. J. Control Autom. Syst. 2018, 16, 129–140. [Google Scholar] [CrossRef]
  6. Din, S.; Ahmad, A.; Paul, A.; Rathore, M.M.U.; Gwanggil, J. A clusterbased data fusion technique to analyze big data in wireless multi-sensor system. IEEE Access 2017, 5, 5069–5083. [Google Scholar] [CrossRef]
  7. Liggins, M.E.; Hall, D.L.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; The Electrical Engineering and Applied Signal Processing Series; CRC Press Inc.: Boca Raton, FL, USA, 2009. [Google Scholar]
  8. Hounkpevi, F.O.; Yaz, E.E. Minimum variance generalized state estimators for multiple sensors with different delay rates. Signal Process. 2007, 87, 602–613. [Google Scholar] [CrossRef]
  9. Ma, J.; Sun, S. Centralized fusion estimators for multisensor systems with random sensor delays, multiple packet dropouts and uncertain observations. IEEE Sens. J. 2013, 13, 1228–1235. [Google Scholar] [CrossRef]
  10. Chen, D.; Xu, L. Optimal filtering with finite-step autocorrelated process noises, random one-step sensor delay and missing measurements. Commun. Nonlinear Sci. Numer. Simul. 2016, 32, 211–224. [Google Scholar] [CrossRef]
  11. Sun, S.; Ma, J. Linear estimation for networked control systems with random transmission delays and packet dropouts. Inf. Sci. 2014, 269, 349–365. [Google Scholar] [CrossRef]
  12. Li, N.; Sun, S.; Ma, J. Multi-sensor distributed fusion filtering for networked systems with different delay and loss rates. Digital Signal Process. 2014, 34, 29–38. [Google Scholar] [CrossRef]
Figure 1. Difference between QSL and T 1 error variances for the problem of (a) filtering, (b) 3-step prediction and (c) fixed-point smoothing.
Figure 2. Difference between QSL and T 1 error variances for the problem of (a) filtering, (b) 3-step prediction and (c) fixed-point smoothing.
Figure 3. Mean of the difference between QSL and T 1 error variances for the problem of (a) filtering, (b) 3-step prediction, and (c) fixed-point smoothing.
Figure 4. Difference between QSWL and T 2 error variances for the problem of filtering (left column) and 3-step prediction (right column) for Cases 1–3.

Figure 5. Difference between QSWL and T 2 error variances for the fixed-point smoothing problem for Cases 4–6.