Article

RKHS Representations for Augmented Quaternion Random Signals: Application to Detection Problems

Antonia Oya
Department of Statistics and Operations Research, University of Jaén, 23071 Jaén, Spain
Mathematics 2022, 10(23), 4432; https://doi.org/10.3390/math10234432
Submission received: 28 October 2022 / Revised: 18 November 2022 / Accepted: 21 November 2022 / Published: 24 November 2022
(This article belongs to the Special Issue Probability Theory and Stochastic Modeling with Applications)

Abstract

The reproducing kernel Hilbert space (RKHS) methodology has proven to be a suitable tool for the resolution of a wide range of problems in statistical signal processing, both in the real and complex domains. It relies on the idea of transforming the original functional data into an infinite series representation by projection onto a specific RKHS, which usually simplifies the statistical treatment without any loss of efficiency. Moreover, the advantages of quaternion algebra over real-valued three- and four-dimensional vector algebra in the modelling of multidimensional data have proven useful in a good deal of recent research. This paper accordingly proposes a generic RKHS framework for the statistical analysis of augmented quaternion random vectors, which provide a complete description of the second-order characteristics of quaternion random signals. This framework allows us to exploit the full advantages of RKHS theory in widely linear processing applications, such as signal detection. In particular, we address the detection of a quaternion signal disturbed by additive Gaussian noise and the discrimination between two quaternion Gaussian signals in continuous time.

1. Introduction

The importance of Hilbert space theory in statistical signal processing applications lies in the advantageous mathematical properties it gathers, namely, the geometry of Hilbert spaces and the structure of function spaces [1]. Some recent and interesting applications of Hilbert space theory can be found in [2,3], to name a few. The characterization of random processes by means of the reproducing kernel Hilbert space (RKHS) approach has proven to be a suitable tool for the resolution of many statistical signal processing problems [4,5]. In the late 1950s, Parzen [6,7] initially suggested using the RKHS methodology in statistical signal processing and time series analysis. More specifically, he provided a functional analysis perspective of random processes defined by second-order statistics and illustrated that the RKHS approach offers an elegant general framework for addressing a wide range of problems that involve inner product computations, for instance, least-squares estimation of random variables and signal detection problems. The underlying idea consisted of transforming the original functional data, via projections in a specific RKHS, into an infinite-dimensional series representation, which usually simplifies the statistical treatment with no loss of efficiency. Afterwards, in the 1970s, Kailath showed the usefulness of the RKHS formulation in the construction of likelihood ratios and in the testing for nonsingularity of several detection problems [8,9,10]. More recently, a numerical evaluation of the inner product in an arbitrary RKHS in the real domain was proposed and then applied to the approximate representation of second-order stochastic processes by means of series expansions, as well as to the signal detection problem [11]. Although the underlying RKHS theory in the complex domain has been developed by mathematicians [12,13], the machine learning and signal processing communities have primarily focused on the case of real kernels [14,15]. However, more recent developments have emphasized extending kernel-based formulations towards more involved settings: the kernel-based approach for treating complex-valued random signals has drawn increasing interest in the area of statistical signal processing [16,17,18]. Likewise, matrix-valued kernels, commonly known as operator-valued kernels, have also been considered in recent studies, such as [19,20,21], where Hilbert spaces of vector-valued functions with operator-valued reproducing kernels for multi-task learning are constructed.
Furthermore, recent higher dimensional kernel algorithms have considered mapping the input samples to quaternion functions, because the quaternion domain facilitates the modelling of three- and four-dimensional signals. Compared to the conventional kernel paradigm, which maps the input sample to a real function, the quaternion model's capacity to manipulate multi-dimensional data has proven beneficial when dealing with quadrivariate signals. This suggests that increasing the dimensionality of the feature space enhances the efficiency of general kernel algorithms [22] and also enables the learning of various nonlinear features contained in the data. In fact, quaternion random signals appear in a variety of fields, such as vector-sensor signals, image processing, and aerospace, to name a few, in order to model physical effects where several random components are involved [23,24,25,26], among others. The great interest in quaternion signal processing is due to the advantages of quaternion algebra over real-valued four-dimensional vector algebra in the modelling of such data [27,28,29]. However, suitable statistical processing for quaternion random signals must include all the necessary second-order statistical information, accounting for a possible improperness (noncircularity) of quaternion processes. To exploit the complete second-order information, the augmented covariance matrix, which also contains the three complementary covariance matrices, has to be considered. This approach is known as quaternion widely linear (WL) processing. The effectiveness of the WL processing method for estimation problems involving complex-valued and quaternion-valued data has been formally demonstrated [30,31]. Although the existence of RKHSs, positive definite kernels and an extension of Mercer's theorem in the quaternion domain are issues addressed in the existing research [22,32,33], the extension to widely linear processing and the availability of an explicit expression of its inner product are, to our knowledge, still not addressed.
The challenge we face with quaternion random signals and the RKHS framework is to extend these ideas to the more general setting of WL processing by considering the augmented quaternion statistics, in order to maximise the use of the available statistical information and to exploit all the advantages of RKHS theory for second-order quaternion random signals. In this paper we present a general framework to obtain a novel RKHS for quaternion-valued signals with complete representation capabilities, since it allows us to represent any quaternion function. To every correlation function there corresponds an RKHS for which this function is the reproducing kernel. As a result, the closed linear span of a random signal and the RKHS specified by its correlation function are very closely related (there is, in fact, an isometric isomorphism). The essential underlying idea is that a natural connection between stochastic and deterministic functional analysis is provided by the RKHS framework. Thus, the RKHS can be seen as the natural Hilbert space associated with a random signal, and its inner product can be used to express the solutions to a number of statistical signal processing problems. Our research focuses on how the WL approach can be used to construct an RKHS for augmented quaternion-valued random signals. The explicit description of a quaternion RKHS allows us to propose general solutions to quaternion-valued signal processing problems in continuous time following a WL processing approach, for instance, detection problems. These solutions generalise those previously introduced in the literature in particular cases, for example, under the assumption of circular (rotation-invariantly distributed) quaternion signals or for mean-square continuous quaternion signals [34,35]. In fact, we use this quaternion RKHS approach to deal with several detection problems, such as the detection of a deterministic signal disturbed by additive Gaussian noise and the discrimination between two quaternion Gaussian signals with unequal covariances in the continuous-time case.
In summary, the major contributions of this paper are three-fold:
  • From the augmented covariance matrix of a quaternion-valued signal, we construct the corresponding RKHS instead of designing quaternion-valued kernels that verify the necessary conditions for the associated quaternion reproducing kernel Hilbert space (QRKHS) to exist, as in [36].
  • First, we develop the properties of the RKHS associated with the correlation matrix of an augmented complex vector process and, second, we obtain an explicit expression of the widely QRKHS inner product that can effectively transform the functional quaternion data into a series representation simplifying their statistical treatment.
  • We show the potential applications of the widely QRKHS for quaternion-valued processes in signal processing problems such as signal detection.
The rest of the paper is organized as follows. Section 2 briefly outlines the basic characteristics of quaternion-valued random signals and, for the sake of completeness, introduces the key results from RKHS theory in the case of vector-valued functions, as this is the main mathematical tool employed in this paper. Section 3 presents a detailed description of the proposed RKHS for quaternion-valued random signals. In Section 4 the QRKHS approach is used to obtain solutions for several Gaussian signal detection problems and a numerical example is shown to illustrate the solution proposed for the detection of a quaternion deterministic signal disturbed by additive quaternion Gaussian noise. Finally, concluding remarks, limitations, and perspectives are given in Section 5.

2. Preliminaries and Motivations

Here, we summarize the notation employed throughout the paper and review some facts about quaternions and RKHS theory that are necessary for the development of the manuscript, in order to make the paper self-contained.
We denote matrices with boldfaced uppercase letters, column vectors with boldfaced lowercase letters, and scalar quantities with lightfaced lowercase letters. Quaternion (or complex) conjugate, transpose, and Hermitian (i.e., transpose and quaternion conjugate) are represented by superscripts ( · ) * , ( · ) T and ( · ) H , respectively. Throughout this paper, all the random variables considered are assumed with zero-mean.

2.1. Quaternion Random Signals

Let { i , j , k } be the imaginary units satisfying:
$$i^2 = j^2 = k^2 = ijk = -1, \qquad ij = k = -ji, \qquad jk = i = -kj, \qquad ki = j = -ik$$
A quaternion q H is defined as
q = a + i b + j c + k d
where $a, b, c, d$ are four real numbers. Quaternions form a noncommutative normed division algebra $\mathbb{H}$, i.e., for $p, q \in \mathbb{H}$, $pq \neq qp$ in general. The conjugate of a quaternion $q$ is defined as $q^* = a - ib - jc - kd$ and the norm of a quaternion is $|q| = \sqrt{qq^*} = \sqrt{q^*q} = \sqrt{a^2 + b^2 + c^2 + d^2}$. The involution of a quaternion $q$ over a pure unit quaternion $\eta$ (that is, $\eta^2 = -1$) is
$$q^{\eta} = \eta\, q\, \eta^{-1} = \eta\, q\, \eta^{*} = -\eta\, q\, \eta$$
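These algebraic rules are easy to verify numerically. The following minimal sketch (ours, not part of the paper) represents a quaternion by its four real components and implements the Hamilton product, the conjugate, the norm and the involutions used throughout the paper.

```python
# Minimal sketch: quaternions as length-4 real arrays [a, b, c, d] standing for
# q = a + ib + jc + kd, used to check the algebra rules quoted above.
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions given as [a, b, c, d]."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qnorm(q):
    return np.sqrt(np.sum(np.asarray(q, dtype=float) ** 2))

def involution(q, eta):
    """q^eta = -eta * q * eta for a pure unit quaternion eta."""
    return -qmul(qmul(eta, q), eta)

i, j, k = np.array([0., 1, 0, 0]), np.array([0., 0, 1, 0]), np.array([0., 0, 0, 1])
q = np.array([1.0, 2.0, -0.5, 3.0])

assert np.allclose(qmul(i, j), k) and np.allclose(qmul(j, k), i)   # ij = k, jk = i
assert np.allclose(qmul(i, i), [-1, 0, 0, 0])                      # i^2 = -1
assert np.allclose(qmul(q, qconj(q))[0], qnorm(q) ** 2)            # |q|^2 = q q^*
print(involution(q, i))   # components of q^i = a + ib - jc - kd
```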
There are three types of Hilbert spaces in H depending on how the vectors are multiplied by the scalars because of the non-commutativity in the quaternion domain: left, right, and two-sided [37]. This fact may entail some drawbacks, for example, the set of linear operators acting on a one-sided Hilbert space does not have a linear structure. However, by fixing an arbitrary Hilbert basis, it is possible to introduce a notion of two-sided multiplication. The definition of a right quaternionic Hilbert space is given as follows [38].
Definition 1.
A right quaternionic Hilbert space is a complete and separable vector space under right multiplication by quaternions, $H_q$, with an inner product $\langle\cdot,\cdot\rangle_{H_q}: H_q \times H_q \to \mathbb{H}$ satisfying the following properties, for $f, g, h \in H_q$ and $v \in \mathbb{H}$:
1. $\langle f, g\rangle_{H_q}^* = \langle g, f\rangle_{H_q}$
2. $\langle f, g + h\rangle_{H_q} = \langle f, g\rangle_{H_q} + \langle f, h\rangle_{H_q}$
3. $\langle f, g v\rangle_{H_q} = \langle f, g\rangle_{H_q}\, v$
4. $\langle f v, g\rangle_{H_q} = v^*\,\langle f, g\rangle_{H_q}$
5. $\|f\|_{H_q}^2 = \langle f, f\rangle_{H_q} > 0$ unless $f = 0$.
Many of the well-known characteristics of complex Hilbert spaces are also present in quaternionic Hilbert spaces, such as the fact that every separable quaternionic Hilbert space has a basis. Once a Hilbert basis is fixed, any right quaternionic Hilbert space becomes a left quaternionic space and vice versa [38].
Definition 2.
A continuous-time quaternion random signal is a stochastic process q ( t ) H of the form
q ( t ) = a ( t ) + i b ( t ) + j c ( t ) + k d ( t ) , t T
with T a real set and a ( t ) , b ( t ) , c ( t ) , d ( t ) real stochastic processes.
Likewise, we can use the following modified Cayley-Dickson representation [29]
q ( t ) = α ( t ) + k β ( t )
where $\alpha(t) = a(t) + i\,b(t) \in \mathbb{C}$ and $\beta(t) = d(t) + i\,c(t) \in \mathbb{C}$ are complex signals in the plane spanned by $\{1, i\}$.
We will denote the correlation function of q ( t ) as R q ( t , s ) = E [ q ( t ) q * ( s ) ] . Moreover, H ( q ) denotes the closed span of all quaternion-linear combinations of finitely many random variables q ( t ) and their limits in quadratic mean (q.m.).
Analogously to the complex case, a complete description of the second-order properties of a quaternion random signal q ( t ) is attained by considering the augmented quaternion random vector as [27]
q ( t ) = [ q ( t ) , q i ( t ) , q j ( t ) , q k ( t ) ] T
This type of quaternion processing that takes into account the quaternion signal and its involutions over the three pure unit quaternions { i , j , k } is known as full-widely linear (FWL) processing. The relationship between the augmented quaternion vector (3) and the real random signals in (1) is given by
$$\mathbf{q}(t) = 2\,T\,[a(t), b(t), c(t), d(t)]^T$$
where
$$T = \frac{1}{2}\begin{pmatrix} 1 & i & j & k \\ 1 & i & -j & -k \\ 1 & -i & j & -k \\ 1 & -i & -j & k \end{pmatrix}$$
is a unitary quaternion operator, i.e., T H T = I 4 . Thus, the augmented correlation matrix R q ( t , s ) = E [ q ( t ) q H ( s ) ] is of the form
$$R_{\mathbf{q}}(t,s) = \begin{pmatrix}
R_q(t,s) & R_{qq^i}(t,s) & R_{qq^j}(t,s) & R_{qq^k}(t,s) \\
R_{qq^i}^{i}(t,s) & R_{q^i}(t,s) & R_{qq^k}^{i}(t,s) & R_{qq^j}^{i}(t,s) \\
R_{qq^j}^{j}(t,s) & R_{qq^k}^{j}(t,s) & R_{q^j}(t,s) & R_{qq^i}^{j}(t,s) \\
R_{qq^k}^{k}(t,s) & R_{qq^j}^{k}(t,s) & R_{qq^i}^{k}(t,s) & R_{q^k}(t,s)
\end{pmatrix}$$
with R q q i ( t , s ) , R q q j ( t , s ) and R q q k ( t , s ) the three complementary correlation functions. Likewise, by using the modified Cayley-Dickson representation (2), the augmented quaternion vector q ( t ) can be expressed as
$$\mathbf{q}(t) = \sqrt{2}\,A\,[\alpha(t), \beta(t), \alpha^*(t), \beta^*(t)]^T$$
with $A$ given by
$$A = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & k & 0 & 0 \\ 1 & -k & 0 & 0 \\ 0 & 0 & 1 & -k \\ 0 & 0 & 1 & k \end{pmatrix}$$
$A$ is a unitary (one-to-one) quaternion operator, i.e., $A^H A = A A^H = I_4$; thus it preserves inner products (it is an isometry) [39]. Then, the augmented correlation matrix $R_{\mathbf{q}}(t,s)$ can be obtained from the correlation matrix of $a(t) = [\alpha(t), \beta(t), \alpha^*(t), \beta^*(t)]^T$ as $R_{\mathbf{q}}(t,s) = 2\,A\,R_a(t,s)\,A^H$, with $R_a$ the correlation matrix of the augmented complex random vector $a(t)$.
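As a practical illustration of the Cayley-Dickson route to the augmented statistics, the sketch below (synthetic data and an arbitrary covariance, both our assumptions) builds the complex vector $a = [\alpha, \beta, \alpha^*, \beta^*]^T$ at a single time instant from samples of the four real components and estimates $R_a = E[a\,a^H]$ by a sample average; the augmented quaternion matrix would then follow from the relation $R_{\mathbf{q}} = 2\,A\,R_a\,A^H$ quoted above.

```python
# Sketch with synthetic data: form the Cayley-Dickson vector a = [alpha, beta,
# alpha*, beta*]^T from the four real components of q and estimate R_a = E[a a^H].
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# correlated zero-mean real components (a, b, c, d) with an arbitrary covariance
C = np.array([[2.0, 0.5, 0.3, 0.0],
              [0.5, 1.0, 0.2, 0.1],
              [0.3, 0.2, 1.5, 0.4],
              [0.0, 0.1, 0.4, 1.0]])
abcd = rng.multivariate_normal(np.zeros(4), C, size=n_samples)

alpha = abcd[:, 0] + 1j * abcd[:, 1]           # alpha = a + i b
beta = abcd[:, 3] + 1j * abcd[:, 2]            # beta  = d + i c
a_vec = np.stack([alpha, beta, alpha.conj(), beta.conj()], axis=1)

R_a = (a_vec[:, :, None] * a_vec[:, None, :].conj()).mean(axis=0)   # E[a a^H]
print(np.round(R_a, 2))
# A non-zero (1,3) entry, i.e. E[alpha * alpha] != 0, signals improperness of q.
```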

2.2. Reproducing Kernel Hilbert Spaces

Let H be an auxiliary Hilbert space of m-variate complex-valued functions defined on T, f ( t ) = [ f ( 1 ) ( t ) , f ( 2 ) ( t ) , , f ( m ) ( t ) ] T , with f ( i ) H , i = 1 , 2 , , m , a complex Hilbert space with a computationally convenient norm (usually a L 2 -space or a RKHS). Then, H is a Hilbert space under the inner product
f , g H = i = 1 m f ( i ) , g ( i ) H
Definition 3.
Let H be a linear space of functions on T. We say that H is a reproducing kernel Hilbert space (RKHS) of functions f : T H , when for any y H and s T the linear functional which maps f H to f ( s ) , y H is continuous on H .
According to the Riesz representation theorem [40], we obtain that, for every $s \in T$ and $y \in H$, there is a linear operator $K_s : H \to H$ that verifies the following reproducing property
f ( s ) , y H = f , K s y H
Moreover, for every t , s T we also introduce the linear operator K ( t , s ) : H H defined as follows
K ( t , s ) f : = ( K s f ) ( t )
for f H . Thus, the kernel K satisfies the following property, for every f , g H
K ( t , s ) g , f H = K s g , K t f H
Alternatively, an RKHS can also be defined by means of its reproducing kernel. To this end, let $L(H)$ be the set of all bounded linear operators from $H$ to $H$.
Definition 4.
An L ( H ) -valued reproducing kernel is a function K : T × T L ( H ) such that K is self-adjoint and nonnegative-definite. For each L ( H ) -valued reproducing kernel K on T, there exists a unique Hilbert space H , called RKHS of K , consisting of H -valued functions on T such that
1. 
K ( · , s ) f H for all s T and f H , and
2. 
f ( s ) , g H = f , K ( · , s ) g H for all f H , s T , and g H
There exists a bijective correspondence between L ( H ) -valued reproducing kernels and H -valued RKHS which is central to the theory of vector-valued RKHS. In fact, for each H -valued RKHS, there exists a unique L ( H ) -valued reproducing kernel K on T that satisfies the above conditions. For this reason, K is called the reproducing kernel of H .
Moreover, the RKHS H can be spanned by the set { K s f | s T , f H } . For f = i = 1 n c i K t i y i and g = j = 1 n d j K s j w j the inner product is of the form
f , g H = i , j = 1 n c i d j * y i , K ( t i , s j ) w j H
According to Mercer's theorem for quaternionic kernels [33] and the quaternion Moore-Aronszajn theorem [32], the existence and uniqueness (up to an isomorphism) of quaternion-valued reproducing kernel Hilbert spaces are guaranteed for any positive definite quaternion-valued kernel, i.e., there exists a unique quaternion Hilbert space of functions for which the positive definite kernel is a reproducing kernel. Furthermore, a Mercer-type series expansion can be extended to represent continuous quaternion-valued kernels. Therefore, we address the construction of a QRKHS associated with the augmented correlation function $R_{\mathbf{q}}(t,s)$, which allows us to exploit all the advantages of RKHS theory in quaternion FWL processing and to obtain unified solutions to quaternion signal detection problems. Based on the RKHS theory of complex random vectors, discussed in the next section, and taking into account the representation of the augmented quaternion vector in terms of the complex vector $a(t)$, we derive an explicit expression for the inner product corresponding to the QRKHS.

3. Quaternion RKHS Representation in WL Processing

3.1. RKHS Representation for Complex Random Vectors

Following the procedure developed in [6] for real stochastic processes, we apply the concepts of RKHS theory described in the previous section in the context of random signal processing by considering the correlation matrix of a complex vector stochastic process as the kernel. To do so, let $x(t) = [x^{(1)}(t), x^{(2)}(t), \ldots, x^{(m)}(t)]^T$, $t \in T$, be an m-variate second-order complex-valued random signal defined on the probability space $(\Omega, \mathcal{A}, P)$, with correlation matrix $R(t,s)$ whose elements are $R^{(l,p)}(t,s) = E[x^{(l)}(t)\,x^{(p)*}(s)]$, $t, s \in T$; $l, p = 1, 2, \ldots, m$.
Theorem 1.
Let H be an auxiliary Hilbert space of m-variate complex-valued functions f defined on T. Assume that R ( t , s ) belongs to the direct product Hilbert space H H and define the correlation operator R on H as
R f ( i ) ( t ) = R ( i ) ( t , · ) , f H = j = 1 m R ( i , j ) ( t , · ) , f ( j ) H , i = 1 , 2 , , m
where R ( i ) ( t , s ) denotes the i-th row of R ( t , s ) , i = 1 , , m . Then R is a linear, self-adjoint, non-negative definite, and completely continuous operator of H into itself.
Proof. 
Note that R is well defined and, for all f ∈ H, Rf ∈ H. Furthermore, R is self-adjoint and non-negative definite since R(t,s) is a correlation matrix and R(t,s) = R^H(s,t). Now, from the fact that $\|R\|_{H \otimes H} = M < \infty$ and the Cauchy-Schwarz inequality, it follows that
$$\big|\langle R f, g\rangle_{H}\big| = \Big|\sum_{j=1}^{m}\big\langle (R f)^{(j)}, g^{(j)}\big\rangle_{H}\Big| \le M\,\|f\|_{H}\,\|g\|_{H} < \infty$$
for all f , g H . Thus, R is a bounded operator. Finally, a bounded linear operator between normed spaces is always continuous [41] [Theorem 4.42]. □
From this theorem, $R(t,s)$ is an $L(H)$-valued reproducing kernel and, therefore, there exists a unique RKHS generated by $R(t,s)$, denoted $H(R)$. Moreover, R is a trace-class operator, i.e.,
$$\sum_{n=1}^{\infty}\big\langle R(t,\cdot)f_n,\, R(\cdot,t)f_n\big\rangle_{H(R)} = \sum_{n=1}^{\infty}\big\langle R(t,t)f_n,\, f_n\big\rangle_{H} < \infty$$
by using the reproducing property (6) in H ( R ) and with { f n } n a basis for H . It follows from the spectral theory of completely continuous operators that the set of eigenvalues of R is an infinite sequence of positive real numbers converging to zero. In order to obtain a concrete structure, let ν n and ρ n be the eigenvalues and orthonormal eigenfunctions of R in H , then the kernel enjoys a representation as follows [40]
$$R(t,s) = \sum_{n=1}^{\infty}\nu_n\,\rho_n(t)\,\rho_n^H(s)$$
which converges in $H \otimes H$ and $\|R\|_{H \otimes H}^2 = \sum_{n=1}^{\infty}\nu_n^2 < \infty$. When the convergence of the series expansion in (7) is pointwise in $t, s \in T$, for instance if $R(t,s)$ is continuous, then the RKHS generated by $R(t,s)$, denoted by $H(R)$, can be spanned by the set $\{\nu_n\rho_n(t)\}$ and the reproducing inner product can be obtained as follows
$$\langle f, g\rangle_{R} = \sum_{n=1}^{\infty}\frac{1}{\nu_n}\langle f, \rho_n\rangle_{H}\,\langle \rho_n, g\rangle_{H}$$
for f ( t ) = n = 1 f , ρ n H ρ n ( t ) and g ( t ) = n = 1 g , ρ n H ρ n ( t ) .
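A hedged numerical sketch of this construction follows (our example, not the paper's): the scalar kernel $R(t,s) = \min(t,s)$ on $[0,1]$ is used because its RKHS inner product has the well-known closed form $\int_0^1 f'(t)g'(t)\,dt$, so the Nyström eigen-expansion of the reproducing inner product (8) can be checked against an independent value.

```python
# Sketch: Nystrom approximation of the Mercer expansion and of the RKHS inner
# product <f, g>_R = sum_n (1/nu_n) <f, rho_n> <rho_n, g> for a scalar kernel.
# The Brownian-motion kernel min(t, s) is assumed here purely for the check.
import numpy as np

N = 800
dt = 1.0 / N
t = dt * np.arange(1, N + 1)                  # grid (t = 0 omitted; kernel vanishes there)

R = np.minimum.outer(t, t)                    # kernel matrix R(t_i, t_j)
lam, V = np.linalg.eigh(R)
nu = lam * dt                                 # Nystrom eigenvalues of the integral operator
rho = V / np.sqrt(dt)                         # eigenfunctions, normalised so dt * sum rho^2 = 1

f = np.sin(0.5 * np.pi * t)                   # test functions with f(0) = g(0) = 0
g = t ** 2

coef_f = dt * rho.T @ f                       # <f, rho_n>_H on the grid
coef_g = dt * rho.T @ g
series = np.sum(coef_f * coef_g / nu)         # RKHS inner product via the eigen-expansion

exact = np.sum(0.5 * np.pi * np.cos(0.5 * np.pi * t) * 2 * t) * dt  # int f'(t) g'(t) dt
print(series, exact)                           # the two values agree up to discretisation error
```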
Let H ( x ) be the Hilbert space spanned by the variables of the complex-valued random vector x ( t ) . Hence, if f H , the notation x , f H is an element of H ( x ) such that E x , f H = 0 and E x , f H x , g H * = R f , g H , with f , g H . Similarly, if f H ( R ) , x , f R denotes a random variable in H ( x ) that can be expressed as follows
$$\langle x, f\rangle_{R} = \sum_{n=1}^{\infty}\frac{1}{\nu_n}\langle x, \rho_n\rangle_{H}\,\langle \rho_n, f\rangle_{H}$$
By denoting $x_n = \langle x, \rho_n\rangle_H$, it can be proved that they are uncorrelated random variables with $E[x_n x_n^*] = \nu_n$. Thus, if the convergence of the series expansion in (7) is absolute in $t, s \in T$, the following series representation for $x(t)$ can be deduced
x ( t ) = n = 1 x n ρ n ( t ) , t T
which is the projection of x ( t ) onto the subspace of H ( x ) spanned by the random variables { x n / ν n } .

3.2. Quaternion RKHS Representation

Based on the results on RKHSs for complex vector processes described above, we derive an RKHS for augmented quaternion random processes (3). To do this, let us consider the augmented complex random vector $a(t) = [\alpha(t), \beta(t), \alpha^*(t), \beta^*(t)]^T$ with correlation matrix $R_a(t,s)$, which is obtained from the Cayley-Dickson representation (2). Assume that $R_a(t,s)$ belongs to the direct product Hilbert space $H \otimes H$ and denote by $\nu_n$ and $\rho_n(t)$ the eigenvalues and orthonormal eigenfunctions of $R_a$ in $H$, respectively. It can easily be proved that the $\rho_n(t)$ are of the form
ρ n ( t ) = [ ρ n ( 1 ) ( t ) , ρ n ( 2 ) ( t ) , ρ n ( 1 ) * ( t ) , ρ n ( 2 ) * ( t ) ] T
and ρ n , ρ m H = 2 R e ρ n ( 1 ) , ρ m ( 1 ) H + 2 R e ρ n ( 2 ) , ρ m ( 2 ) H = δ n m . Then, R a ( t , s ) can be represented by the series expansion of (7) and the RKHS associated can be spanned by { ν n ρ n ( t ) } with the inner product given in (8). Since the augmented correlation matrix (4) is related to R a ( t , s ) by the equality R q ( t , s ) = 2 A R a ( t , s ) A H , the eigenvalues and eigenfunctions of R q are of the form
λ n = 2 ν n , ϕ n ( t ) = A ρ n ( t )
with $A$ given in (5). Let $H_q$ be some coefficient or auxiliary right quaternionic Hilbert space with a computationally convenient norm (e.g., an $L^2$ space) and with $\langle\cdot,\cdot\rangle_{H_q}$ its inner product. Assume that the augmented correlation matrix $R_{\mathbf{q}}$ belongs to the direct product Hilbert space $H_q \otimes H_q$. Let $H^*$ be the subspace of $H_q$ which contains the augmented quaternion functions, i.e., $H^*$ is the image of $H$ under the unitary map with matrix $A$: $A\underline{f} \in H^* \subset H_q$ for $\underline{f} \in H$. It is isomorphic to $H$ and is widely linear, with the multiplication by a scalar given by
f ( t ) q = [ f ( t ) q , f i ( t ) q i , f j ( t ) q j , f k ( t ) q k ] T = [ f ( t ) q , ( f ( t ) q ) i , ( f ( t ) q ) j , ( f ( t ) q ) k ] T H *
for f = [ f , f i , f j , f k ] T H * , q H . This isomorphism allows us to obtain the following representation for the augmented correlation matrix from the series expansion (7) for R a ( t , s )
$$R_{\mathbf{q}}(t,s) = \sum_{n=1}^{\infty}\lambda_n\,\phi_n(t)\,\phi_n^H(s)$$
which converges in H * H * . In particular, the eigenfunctions corresponding to the augmented correlation matrix belong to this subspace, ϕ n H * , and are orthonormal, ϕ n , ϕ m H q = A ρ n , A ρ m H q = ρ n , ρ m H = δ n m .
Thus, an RKHS generated by $R_{\mathbf{q}}(t,s)$, denoted by $H(R_{\mathbf{q}})$, can be defined as the span of the set of eigenfunctions, i.e., it consists of all augmented quaternion functions $f \in H^* \subset H_q$ for which
$$\sum_{n=1}^{\infty}\frac{1}{\lambda_n}\big|\langle f, \phi_n\rangle_{H_q}\big|^2 < \infty$$
Then, the reproducing kernel inner product of two augmented quaternion functions in $H(R_{\mathbf{q}})$ can be expressed as follows
$$\langle f, g\rangle_{R_{\mathbf{q}}} = \sum_{n=1}^{\infty}\frac{1}{\lambda_n}\langle f, \phi_n\rangle_{H_q}\,\langle \phi_n, g\rangle_{H_q}$$
Theorem 2.
The expression given by Equation (10) is well defined, defines an inner product on $H(R_{\mathbf{q}})$, and verifies the reproducing property (6).
Proof. 
First, we prove that (10) indeed defines an inner product on $H(R_{\mathbf{q}})$. In fact, by using the properties of $\langle\cdot,\cdot\rangle_{H_q}$ as an inner product in $H_q$, it is easy to check that (10) satisfies the following properties for $f, g, h \in H(R_{\mathbf{q}})$ and $v \in \mathbb{H}$
(i)
$\langle f, g\rangle_{R_{\mathbf{q}}}^* = \sum_{n=1}^{\infty}\frac{1}{\lambda_n}\langle \phi_n, g\rangle_{H_q}^*\,\langle f, \phi_n\rangle_{H_q}^* = \langle g, f\rangle_{R_{\mathbf{q}}}$
(ii)
$\langle f, g+h\rangle_{R_{\mathbf{q}}} = \sum_{n=1}^{\infty}\frac{1}{\lambda_n}\langle f, \phi_n\rangle_{H_q}\big(\langle \phi_n, g\rangle_{H_q} + \langle \phi_n, h\rangle_{H_q}\big) = \langle f, g\rangle_{R_{\mathbf{q}}} + \langle f, h\rangle_{R_{\mathbf{q}}}$
(iii)
$\langle f, g v\rangle_{R_{\mathbf{q}}} = \sum_{n=1}^{\infty}\frac{1}{\lambda_n}\langle f, \phi_n\rangle_{H_q}\,\langle \phi_n, g\rangle_{H_q}\, v = \langle f, g\rangle_{R_{\mathbf{q}}}\, v$
(iv)
$\langle f v, g\rangle_{R_{\mathbf{q}}} = \langle g, f v\rangle_{R_{\mathbf{q}}}^* = v^*\,\langle f, g\rangle_{R_{\mathbf{q}}}$
(v)
$\|f\|_{R_{\mathbf{q}}}^2 = \langle f, f\rangle_{R_{\mathbf{q}}} = \sum_{n=1}^{\infty}\frac{1}{\lambda_n}\big|\langle f, \phi_n\rangle_{H_q}\big|^2 > 0$ unless $f = 0$.
Now, note that R q ( · , s ) f H ( R q ) , for all s T and f H * , since
n = 1 1 λ n | R q ( · , s ) f , ϕ n H q | 2 = n = 1 1 ν n | R a ( · , s ) f ̲ , ρ n H | 2 <
where f = A f ̲ for f ̲ H and R q ( · , s ) f = 2 A R a ( · , s ) f ̲ H * . Finally, Equation (10) satisfies the reproducing property for f H ( R q ) and g H * as demonstrated below
f , R q ( · , s ) g R q = n = 1 1 λ n f , ϕ n H q ϕ n , R q ( · , s ) g H q = n = 1 1 ν n f ̲ , ρ n H ρ n , R a ( · , s ) g ̲ H = f ̲ , R a ( · , s ) g ̲ R a = f ̲ ( s ) , g ̲ H = f ( s ) , g H q
with f = A f ̲ and g = A g ̲ , f ̲ , g ̲ H , and by using the fact that the inner product in H ( R a ) verifies the reproducing property. □
In a similar way to the complex case, we define the random variables
$$q_n = \langle \mathbf{q}, \phi_n\rangle_{H_q} = \sqrt{2}\,\langle a, \rho_n\rangle_{H}$$
with $\langle a, \rho_n\rangle_H = 2\,\mathrm{Re}\langle \alpha, \rho_n^{(1)}\rangle_H + 2\,\mathrm{Re}\langle \beta, \rho_n^{(2)}\rangle_H$ real-valued uncorrelated random variables. Thus, the $q_n$ are real-valued uncorrelated random variables, $E[q_n q_m] = \lambda_n\delta_{nm}$, and they allow us to obtain the following series representation
q ( t ) = n = 1 q n ϕ n ( t ) , t T
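The sketch below (a synthetic toy eigen-system chosen by us for illustration, not taken from the paper) simulates the augmented complex vector $a(t)$ from an assumed pair of eigenvalue/eigenfunction sequences, recovers the coefficients $q_n = \sqrt{2}\langle a, \rho_n\rangle_H$ on a grid, and checks that their sample variances match $\lambda_n = 2\nu_n$, as stated above.

```python
# Sketch with an assumed eigen-system: simulate a(t) = sum_n (q_n/sqrt(2)) rho_n(t),
# recover q_n = sqrt(2) <a, rho_n>_H by quadrature, and check Var(q_n) = 2 nu_n.
import numpy as np

rng = np.random.default_rng(1)
N, M, S = 256, 4, 20_000                       # grid size, number of modes, realisations
t = np.arange(N) / N
dt = 1.0 / N

nu = np.array([1.0, 0.5, 0.25, 0.125])         # assumed eigenvalues of R_a
rho1 = 0.5 * np.exp(2j * np.pi * np.outer(np.arange(1, M + 1), t))    # rho_n^(1)(t)
rho2 = 0.5 * np.exp(-2j * np.pi * np.outer(np.arange(1, M + 1), t))   # rho_n^(2)(t)
# with this choice 2 Re<rho_n^(1), rho_m^(1)> + 2 Re<rho_n^(2), rho_m^(2)> = delta_nm

q_n = rng.normal(scale=np.sqrt(2 * nu), size=(S, M))    # real coefficients, var = 2 nu_n
alpha = (q_n / np.sqrt(2)) @ rho1                       # alpha(t) component of a(t)
beta = (q_n / np.sqrt(2)) @ rho2                        # beta(t) component of a(t)

# recover q_n = sqrt(2) [ 2 Re<alpha, rho_n^(1)> + 2 Re<beta, rho_n^(2)> ]
inner = 2 * np.real(alpha @ rho1.conj().T * dt) + 2 * np.real(beta @ rho2.conj().T * dt)
q_hat = np.sqrt(2) * inner

print(np.round(np.cov(q_hat.T), 3))       # approximately diag(2 nu_n) = diag(lambda_n)
print(np.max(np.abs(q_hat - q_n)))        # coefficients are recovered (up to grid error)
```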
In the following examples, some interesting representations for quaternion random processes are deduced as particular cases of those obtained in this section by developing the RKHS theory in the field of WL processing.

3.2.1. Example 1: Karhunen-Loève-Type Representation

Firstly, let $q(t)$, $t\in[0,I]$, be a quaternion random signal which is continuous in quadratic mean. Consider $H = L^2[0,I]$, the space of square integrable functions, and let $H$ be the space of vector functions $f = [f^{(1)}, f^{(2)}, f^{(3)}, f^{(4)}]^T$ such that $\|f\|_H^2 = \sum_{i=1}^{4}\|f^{(i)}\|_2^2 < \infty$. Let $\nu_n$ and $\rho_n(t)$ be the eigenvalues and eigenfunctions of the integral operator $R_a$ defined on $H$ as follows
$$R_a f(t) = \int_0^I R_a(t,s)\, f(s)\, ds$$
then, a Karhunen-Loève-type expansion (12) can be deduced for the augmented quaternion vector $\mathbf{q}(t)$ with the random variables
$$q_n = \int_0^I \phi_n^H(t)\,\mathbf{q}(t)\,dt = \sqrt{2}\int_0^I \rho_n^H(t)\,a(t)\,dt$$
This Karhunen-Loève-type representation was proposed in [28] for a quaternion signal in continuous time based on augmented statistics and was applied to the problems of estimation and detection.
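In practice, the eigenvalues and eigenfunctions of the integral operator $R_a$ are rarely available in closed form; a common numerical route is the Nyström method sketched below for an assumed (synthetic) 4×4 kernel, where the integral eigenproblem is discretised on a grid and solved by a dense eigendecomposition.

```python
# Sketch with an assumed matrix kernel: discretise int_0^I R_a(t, s) f(s) ds = nu f(t)
# on a grid (Nystrom method), stacking the four components into one long vector.
import numpy as np

I_len, N = 1.0, 200
dt = I_len / N
t = dt * (np.arange(N) + 0.5)                  # midpoint grid on [0, I]

def R_a(u, v):
    """Assumed 4x4 correlation kernel of the augmented complex vector a(t):
    a separable toy example min(u, v) * C with C symmetric positive semidefinite."""
    C = np.array([[2.0, 0.5, 0.3, 0.0],
                  [0.5, 1.0, 0.0, 0.3],
                  [0.3, 0.0, 2.0, 0.5],
                  [0.0, 0.3, 0.5, 1.0]])
    return np.minimum(u, v) * C

# big (4N x 4N) kernel matrix: block (i, j) is R_a(t_i, t_j)
K = np.zeros((4 * N, 4 * N))
for i in range(N):
    for j in range(N):
        K[4*i:4*i+4, 4*j:4*j+4] = R_a(t[i], t[j])

lam, V = np.linalg.eigh(K)
nu = lam[::-1] * dt                            # eigenvalues of the integral operator
rho = V[:, ::-1] / np.sqrt(dt)                 # stacked eigenfunctions rho_n(t_i)

print(nu[:5])                                  # leading eigenvalues nu_1 >= nu_2 >= ...
# q_n is then approximated by the quadrature sum of phi_n^H(t_i) q(t_i) dt over the grid.
```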

3.2.2. Example 2: Gaussian Quaternion Signal plus Wiener Noise Representation

Now, let us consider the quaternion random process given by
$$q(t) = \int_0^t s(\tau)\,d\tau + w(t), \qquad t\in[0,I]$$
with s ( t ) a Gaussian, continuous in quadratic mean, quaternion signal with correlation function L, and w ( t ) a Q -proper (i.e., the three complementary correlation functions vanish) standard Wiener process, with correlation function R. Moreover, s ( t ) and w ( t ) are independent.
Let $L_a$ and $R_a$ be the correlation matrices corresponding to the complex random vectors obtained from the Cayley-Dickson representation of $s(t)$ and $w(t)$, respectively. In this case, $R_a(t,s) = \min\{t,s\}\,I_4$ and $H = H(R_a)$ is its associated RKHS, which consists of complex vector functions $f = [f^{(1)}, f^{(2)}, f^{(3)}, f^{(4)}]^T$ with first derivative $f'$ satisfying
$$\int_0^I f'^H(t)\,f'(t)\,dt < \infty$$
Denote by
$$K_a(t,s) = \int_0^s\!\int_0^t L_a(u,v)\,du\,dv$$
then, the kernel K a belongs to the direct product space H ( R a ) H ( R a ) . Let ν n and ρ n be its eigenvalues and eigenfunctions, respectively. The following series representation can be obtained for the augmented vector q ( t )
$$\mathbf{q}(t) = \sum_{n=1}^{\infty}\left(\int_0^t A\,\phi_n(u)\,du\right) q_n, \qquad t\in[0,I]$$
where the random coefficients are given by
$$q_n = \sqrt{2}\int_0^I \phi_n^H(t)\,da(t)$$
and $\phi_n(t)$ are the eigenfunctions of $L_a$, i.e., $\rho_n(t) = \int_0^t \phi_n(u)\,du$. A similar series expansion was obtained in [42] for real signals and is especially useful in the problems of estimating and detecting a Gaussian signal in additive white Gaussian noise.
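The only additional computational step in this example is tabulating the kernel $K_a(t,s)$ as a double primitive of $L_a$; the short sketch below (with a scalar stand-in for $L_a$, an assumption made purely for brevity) does this with cumulative quadrature, after which the same Nyström eigen-decomposition as in the previous sketch applies.

```python
# Sketch: tabulate K_a(t, s) = int_0^s int_0^t L_a(u, v) du dv on a grid by
# cumulative sums; L_a is replaced here by an assumed scalar kernel for brevity.
import numpy as np

I_len, N = 1.0, 300
du = I_len / N
u = du * (np.arange(N) + 0.5)

L = np.exp(-np.abs(np.subtract.outer(u, u)))             # assumed L_a(u, v)
K = du * du * np.cumsum(np.cumsum(L, axis=0), axis=1)    # K_a(t_i, s_j)
print(K[-1, -1])   # K_a(I, I), i.e. the double integral of L_a over [0, I]^2
```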

4. Application to Detection Problems in the Quaternion Domain

4.1. Detection of Quaternion Deterministic Signals in Quaternion Gaussian Noise

The first issue we tackle is how to detect a quaternion deterministic signal that has been corrupted by quaternion additive Gaussian noise. A coordinate-free representation of the augmented quaternion noise based on the QRKHS associated with its augmented correlation function will allow us to obtain a log-likelihood ratio expression which unifies a variety of formulas for the optimum detection statistic (for instance, in terms of series expansions, solutions to integral equations, etc.). Specifically, the detection problem is formulated as follows
$$H_0:\; y(t) = q(t), \qquad t\in[0,I]$$
$$H_1:\; y(t) = s(t) + q(t), \qquad t\in[0,I]$$
with $s(t)$ a continuous, completely known quaternion signal and $q(t)$ a quaternion mean-square continuous Gaussian noise. $P_0$ and $P_1$ stand for the probability measures corresponding to the null and alternative hypotheses, respectively. Different signal and noise representations can be used to derive a number of likelihood ratio formulas. In accordance with Grenander's theorem [43], a method for determining likelihood ratios for continuous-time observation models entails first reducing the observation signal to an equivalent sequence and then determining the limit of the likelihood ratio for the truncated sequence. For this purpose, our approach considers the random coefficients obtained from the QRKHS representation of the observed quaternion random signal. Then, using calculations involving RKHS inner products, we compute the log-likelihood ratio to obtain a suitable detector expression.
Theorem 3.
Suppose that $s(t)$ belongs to $H(R_{\mathbf{q}})$. Then the detection problem (13) is not singular ($P_0 \equiv P_1$) and the log-likelihood ratio test is as follows
$$\log\frac{dP_1}{dP_0}(y) = \langle y, s\rangle_{R_{\mathbf{q}}} - \frac{1}{2}\,\|s\|_{R_{\mathbf{q}}}^2$$
Proof. 
From (12) and the fact that s ( t ) H ( R q ) we can replace the continuous-time problem (13) by the following discrete one
$$H_0:\; y_n = q_n, \qquad n = 1, 2, \ldots$$
$$H_1:\; y_n = s_n + q_n, \qquad n = 1, 2, \ldots$$
with $q_n$ given in (11) and $s_n = \langle s, \phi_n\rangle_{H_q}$. Consequently, applying Grenander's theorem [43] to the discrete detection problem above, we obtain that $P_0 \equiv P_1$, since $\sum_{n=1}^{\infty}|s_n|^2/\lambda_n = \|s\|_{R_{\mathbf{q}}}^2 < \infty$, and
$$\log\frac{dP_1}{dP_0}(y) = \sum_{n=1}^{\infty}\frac{y_n s_n}{\lambda_n} - \frac{1}{2}\sum_{n=1}^{\infty}\frac{|s_n|^2}{\lambda_n}$$
Taking into account that $\langle y, s\rangle_{R_{\mathbf{q}}} = \sum_{n=1}^{\infty}\frac{1}{\lambda_n}\langle y, \phi_n\rangle_{H_q}\langle \phi_n, s\rangle_{H_q} = \sum_{n=1}^{\infty}\frac{y_n s_n}{\lambda_n}$, we prove (14). □
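Once the observation has been reduced to its series coefficients, the detector of Theorem 3 is a simple truncated sum. The sketch below (with eigenvalues and signal coefficients invented for illustration, not taken from the paper) evaluates the statistic, sets a Neyman-Pearson threshold, and checks the resulting false-alarm and detection probabilities by Monte Carlo.

```python
# Sketch with assumed lambda_n and s_n: truncated detector of Theorem 3,
# log dP1/dP0 = sum y_n s_n/lambda_n - 0.5 sum s_n^2/lambda_n, with q_n ~ N(0, lambda_n).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_terms = 10
lam = 1.0 / (np.pi ** 2 * (np.arange(1, n_terms + 1) - 0.5) ** 2)   # assumed lambda_n
s_n = 1.0 / np.arange(1, n_terms + 1) ** 2                           # assumed s_n

d2 = np.sum(s_n ** 2 / lam)                     # d^2 = ||s||^2 in the RKHS norm
alpha = 0.05                                    # prescribed false-alarm probability
threshold = np.sqrt(d2) * norm.ppf(1 - alpha) - 0.5 * d2

q = rng.normal(scale=np.sqrt(lam), size=(100_000, n_terms))   # noise coefficients
llr_h0 = q @ (s_n / lam) - 0.5 * d2             # LLR under H0
llr_h1 = (q + s_n) @ (s_n / lam) - 0.5 * d2     # LLR under H1

print("P_FA:", np.mean(llr_h0 > threshold))                          # close to alpha
print("P_D :", np.mean(llr_h1 > threshold))
print("P_D (theory):", 1 - norm.cdf(norm.ppf(1 - alpha) - np.sqrt(d2)))
```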

4.2. Discrimination between Two Quaternion Gaussian Signals

The second detection problem we study is the discrimination problem between two quaternion random signals which is formulated by the following hypotheses pair
$$H_0:\; y(t) = q_1(t), \qquad t\in[0,I]$$
$$H_1:\; y(t) = q_2(t), \qquad t\in[0,I]$$
where $q_i(t)$, $i = 1, 2$, are Gaussian quaternion signals, continuous in quadratic mean, with correlation functions $R_i(t,s)$, respectively. $P_0$ and $P_1$ stand for the probability measures corresponding to the null and alternative hypotheses, respectively, and verify that $P_0 \equiv P_1$, that is, the detection problem (15) is not singular. Let $a_i(t)$, $i = 1, 2$, be the complex random vectors associated with the Cayley-Dickson representation of $q_i(t)$, $i = 1, 2$, respectively. Let $\nu_n$ and $\rho_n$ be the eigenvalues and eigenfunctions of the operator $R_{a_1}$ on the RKHS $H(R_{a_2})$. Then the log-likelihood ratio for the underlying hypothesis testing problem is provided in the following theorem.
Theorem 4.
The log-likelihood ratio test corresponding to (15) can be expressed as follows
$$\log\frac{dP_1}{dP_0}(y) = \frac{1}{2}\sum_{n=1}^{\infty}\log\nu_n + \frac{1}{2}\sum_{n=1}^{\infty}\frac{1-\nu_n}{\nu_n}\,y_n^* y_n$$
where the uncorrelated random variables are given by $y_n = \sqrt{2}\,\langle a_1, \nu_n\rho_n\rangle_{R_{a_1}}$ under $H_0$ and $y_n = \sqrt{2}\,\langle a_2, \rho_n\rangle_{R_{a_2}}$ under $H_1$.
Proof. 
From the required nonsingularity conditions we obtain that $R_{a_2} - R_{a_1} \in H(R_{a_2}) \otimes H(R_{a_2})$ and that $R_{a_2}$ dominates $R_{a_1}$, that is, $R_{a_2} - R_{a_1}$ is also a correlation matrix [1]. On the other hand, $R_{a_2}$ is Hilbert-Schmidt on $L^2[0,I]$ since $R_{a_2}(t,s)$ is a continuous function on $[0,I]\times[0,I]$. Thus, there exists an isomorphism between $H(R_{a_2})$ and $L^2[0,I]$ [44], so $\rho_n = R_{a_2}^{1/2}\psi_n$ with $\psi_n \in L^2[0,I]$. Using the series expansion (12) for the observation quaternion signal $y(t)$ with $H = H(R_{a_2})$,
$$\mathbf{y}(t) = \sum_{n=1}^{\infty} y_n\, A\, R_{a_2}^{1/2}\psi_n(t), \qquad t \in T,$$
we get the following equivalent problem in terms of the random coefficients y n
$$H_0:\; y_n = \sqrt{2}\,\langle a_1, \nu_n\rho_n\rangle_{R_{a_1}} \sim N(0, 2\nu_n), \qquad n = 1, 2, \ldots$$
$$H_1:\; y_n = \sqrt{2}\,\langle a_2, \rho_n\rangle_{R_{a_2}} \sim N(0, 2), \qquad n = 1, 2, \ldots$$
Then [10] the log-likelihood ratio for $y_1, y_2, \ldots$ is given by (16). □
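A hedged sketch of the discrimination test follows (the eigenvalue sequence is assumed): since the coefficients $y_n$ are $N(0, 2\nu_n)$ under $H_0$ and $N(0, 2)$ under $H_1$, the truncated log-likelihood ratio can be evaluated directly as a difference of Gaussian log-densities, without writing out the closed form of (16).

```python
# Sketch with an assumed nu_n sequence: discrimination LLR evaluated directly
# from the two Gaussian densities of the coefficient sequence y_n.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_terms = 8
nu = 1.0 / np.arange(1, n_terms + 1)                     # assumed nu_n

y_h0 = rng.normal(scale=np.sqrt(2.0 * nu), size=(20_000, n_terms))   # y_n ~ N(0, 2 nu_n)
y_h1 = rng.normal(scale=np.sqrt(2.0), size=(20_000, n_terms))        # y_n ~ N(0, 2)

def llr(y):
    """Truncated log dP1/dP0 evaluated coefficient-wise."""
    return np.sum(norm.logpdf(y, scale=np.sqrt(2.0)) -
                  norm.logpdf(y, scale=np.sqrt(2.0 * nu)), axis=-1)

print(llr(y_h0).mean(), llr(y_h1).mean())   # negative under H0, positive under H1
```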

4.3. Numerical Example

We consider the model (13) with the following quaternion signal to show the performance of the proposed detector (14)
s ( t ) = 1 π 2 cos π t + i 1 π 2 cos π t + j 1 π 2 cos π t + k 1 π 2 cos π t , t [ 0 , 1 ]
and the quaternion noise $q(t) = 2x(t)e^{i\theta} + k\,2x(t)e^{i\theta}$, with $x(t)$ a zero-mean real Wiener process and $\theta$ a standard normal random variable independent of $x(t)$. Moreover, we consider $H = L^2[0,1]$, the space of square integrable complex functions. Figure 1 shows the detection probability $P = 1 - \Psi\big(\Psi^{-1}(1-\alpha) - d\big)$ ($\Psi$ denotes the cumulative distribution function of an $N(0,1)$ random variable) versus the false-alarm probability $\alpha$ under the Neyman-Pearson criterion, in terms of the signal-to-noise ratio
$$d^2 = \|s\|_{R_{\mathbf{q}}}^2 = \sum_{n=1}^{\infty}\frac{1}{\lambda_n}\big|\langle s, \phi_n\rangle_{H_q}\big|^2$$
obtained with n = 5 (blue line) and n = 10 (red line) terms, respectively.
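A short sketch reproducing the type of curve shown in Figure 1 is given below; the values of $d$ used are illustrative (the paper obtains $d$ from the truncated sum for $d^2$ above).

```python
# Sketch of the curve behind Figure 1: P_D = 1 - Psi(Psi^{-1}(1 - alpha) - d),
# evaluated with scipy's standard normal CDF and quantile function.
import numpy as np
from scipy.stats import norm

alpha = np.linspace(1e-3, 0.999, 200)          # false-alarm probabilities
for d in (1.5, 2.5):                           # assumed signal-to-noise ratios d
    p_d = 1.0 - norm.cdf(norm.ppf(1.0 - alpha) - d)
    print(f"d = {d}: P_D at alpha = 0.1 is {np.interp(0.1, alpha, p_d):.3f}")
```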

5. Conclusions

A generic RKHS framework for the statistical analysis of augmented quaternion random vectors has been presented. First, we have developed the properties of the RKHS associated with the correlation matrix of an augmented complex vector process and, second, we have obtained an explicit expression of the widely QRKHS inner product that can effectively transform the functional quaternion data into a series representation, simplifying their statistical treatment. This novel QRKHS has allowed us to exploit the full advantages of RKHS theory to propose general solutions to WL processing problems in continuous time, for instance, detection problems. These solutions have been shown to generalise those previously introduced in the literature in particular cases, for example, under the assumption of proper (rotation-invariantly distributed) quaternion signals or for mean-square continuous quaternion signals [34,35]. In particular, the quaternion RKHS approach has been applied to deal with the detection of a deterministic signal disturbed by additive Gaussian noise and the discrimination between two quaternion Gaussian signals with unequal covariances in the continuous-time case. Note that, in practice, the determination of eigenvalues and eigenfunctions can be quite involved. However, it is possible to employ a numerical method of solution, such as the Rayleigh-Ritz method (see [45] for a detailed study of its practical application).
Further research related to other hypercomplex systems, such as the tessarines, will be explored in the future to study possible extensions of the results provided in this work.

Funding

This research was funded by I+D+i Project with reference number 1256911, under «Programa Operativo FEDER Andalucía 2014–2020», Junta de Andalucía, and the Project EI-FQM2-2021 of «Plan de Apoyo a la Investigación» of the University of Jaén, Spain.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
RKHS: Reproducing kernel Hilbert space
QRKHS: Quaternion reproducing kernel Hilbert space
WL: Widely linear
FWL: Full-widely linear

References

1. Aronszajn, N. Theory of reproducing kernels. Trans. Amer. Math. Soc. 1950, 68, 337–404.
2. Liu, R.; Long, J.; Zhang, P.; Lake, R.E.; Gao, H.; Pappas, D.P.; Li, F. Efficient quantum state tomography with auxiliary Hilbert space. arXiv 2019, arXiv:1908.00577.
3. Shafie, K.; Faridrohani, M.R.; Noorbaloochi, S.; Moradi Rekabdarkolaee, H. A global Bayes factor for observations on an infinite-dimensional Hilbert space. Applied to Signal Detection in fMRI. Austrian J. Stat. 2021, 50, 66–76.
4. Small, C.G.; McLeish, D.L. Hilbert Space Methods in Probability and Statistical Inference; John Wiley and Sons: Hoboken, NJ, USA, 2011.
5. Berlinet, A.; Thomas-Agnan, C. Reproducing Kernel Hilbert Spaces in Probability and Statistics; Springer: Boston, MA, USA, 2012.
6. Parzen, E. Extraction and detection problems and reproducing kernel Hilbert space. J. SIAM Control Ser. A 1962, 1, 35–62.
7. Parzen, E. Statistical inference on time series by Hilbert space methods. In Proceedings of a Symposium on Time Series Analysis; Rosenblatt, M., Ed.; Wiley: New York, NY, USA, 1963; pp. 253–382.
8. Kailath, T. An RKHS approach to detection and estimation problems-part I: Deterministic signals in Gaussian noise. IEEE Trans. Inform. Theory 1971, 17, 530–549.
9. Kailath, T.; Duttweiler, D. An RKHS approach to detection and estimation problems-part III: Generalized innovations representations and a likelihood-ratio formula. IEEE Trans. Inform. Theory 1972, 18, 30–45.
10. Kailath, T.; Weinert, H.L. An RKHS approach to detection and estimation problems-part II: Gaussian signal detection. IEEE Trans. Inform. Theory 1975, 21, 15–23.
11. Oya, A.; Navarro-Moreno, J.; Ruiz-Molina, J.C. Numerical evaluation of Reproducing Kernel Hilbert Space inner products. IEEE Trans. Signal Proc. 2009, 57, 1227–1233.
12. Kobayashi, H. Representations of Complex-Valued Vector Processes and Their Application to Estimation and Detection. Ph.D. Thesis, Department of Electrical Engineering, Princeton University, Princeton, NJ, USA, 1967.
13. Paulsen, V.I.; Raghupathi, M. An Introduction to the Theory of Reproducing Kernel Hilbert Spaces; Cambridge University Press: Cambridge, UK, 2016.
14. Hofmann, T.; Scholkopf, B.; Smola, A.J. A Review of Kernel Methods in Machine Learning; Technical Report 156; Max Planck Institute for Biological Cybernetics: Tübingen, Baden-Württemberg, Germany, 2006; Available online: https://www.kyb.tuebingen.mpg.de/ (accessed on 2 February 2022).
15. Rojo-Álvarez, J.L.; Martínez-Ramón, M.; Muñoz-Marí, J.; Camps-Valls, G. Reproducing Kernel Hilbert Space Models for Signal Processing. In Digital Signal Processing with Kernel Methods; Wiley-IEEE Press: Hoboken, NJ, USA, 2018; pp. 241–279.
16. Bouboulis, P.; Slavakis, K.; Theodoridis, S. Adaptive Learning in Complex Reproducing Kernel Hilbert Spaces employing Wirtinger's subgradients. IEEE Trans. Neural Netw. 2012, 22, 425–438.
17. Bouboulis, P.; Theodoridis, S.; Mavroforakis, M. The augmented complex kernel LMS. IEEE Trans. Signal Proc. 2012, 60, 4962–4967.
18. Boloix-Tortosa, R.; Murillo-Fuentes, J.J.; Santos, I.; Pérez-Cruz, F. Widely linear complex-valued kernel methods for regression. IEEE Trans. Signal Proc. 2017, 65, 5240–5248.
19. Micchelli, C.A.; Pontil, M. Learning the kernel function via regularization. J. Mach. Learn. Res. 2005, 6, 1099–1125.
20. Carmeli, C.; De Vito, E.; Toigo, A. Vector valued Reproducing Kernel Hilbert Spaces of integrable functions and Mercer theorem. Anal. Appl. 2006, 4, 377–408.
21. Kadri, H.; Duflos, E.; Preux, P.; Canu, S.; Rakotomamonjy, A.; Audiffren, J. Operator-valued kernels for learning from functional responde data. J. Mach. Learn. Res. 2015, 16, 1–54.
22. Tobar, F.A.; Mandic, D.P. The quaternion kernel least squares. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–30 May 2013; pp. 6128–6132.
23. Chen, B.; Liu, Q.; Sun, X.; Li, X.; Shu, H. Removing gaussian noise for color images by quaternion representation and optimisation of weights in non-local means filter. IET Image Proc. 2013, 8, 591–600.
24. Tao, J.; Chang, W. Adaptive beamforming based on complex quaternion processes. Math. Probl. Eng. 2014, 5, 1–10.
25. Bill, J.; Champagne, L.; Cox, B.; Bihl, T. Meta-Heuristic Optimization Methods for Quaternion-Valued Neural Networks. Mathematics 2021, 9, 938.
26. Xia, Y.; Jahanchahi, C.; Mandic, D.P. Quaternion-valued echo state networks. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 663–673.
27. Cheong Took, C.; Mandic, D.P. Augmented second-order statistics of quaternion random signals. Signal Process 2011, 91, 214–224.
28. Navarro-Moreno, J.; Fernandez-Alcala, R.; Ruiz-Molina, J.C. A quaternion widely linear series expansion and its applications. IEEE Signal Process Lett. 2012, 19, 868–871.
29. Vía, J.; Ramírez, D.; Santamaría, I. Properness and widely linear processing of quaternion random vectors. IEEE Trans. Inform. Theory 2010, 56, 3502–3515.
30. Picinbono, B.; Chevalier, P. Widely linear estimation with complex data. IEEE Trans. Signal Process 1995, 43, 2030–2033.
31. Nitta, T. A theoretical foundation for the widely linear processing of quaternion-valued data. Appl. Math. 2013, 4, 1616–1620.
32. Tobar, F.A.; Mandic, D.P. Quaternion Reproducing Kernel Hilbert Spaces: Existence and uniqueness conditions. IEEE Trans. Inform. Theory 2014, 60, 5736–5749.
33. Shilton, A. Mercer's Theorem for Quaternionic Kernels; Technical Report; Department of Electrical and Electronic Engineering, University of Melbourne: Melbourne, Australia, 2007; Available online: http://www.ee.unimelb.edu.au/pgrad/apsh/publications/ (accessed on 8 December 2021).
34. Le Bihan, N.; Amblard, P.O. Detection and estimation of gaussian proper quaternion valued random processes. In Proceedings of the IMA Conference on Mathematics in Signal Process, The Royal Agricultural College, Cirencester, UK, 18–20 December 2006; pp. 23–26.
35. Navarro-Moreno, J.; Ruiz-Molina, J.C.; Oya, A.; Quesada-Rubio, J.M. Detection of continuous-time quaternion signals in additive noise. EURASIP J. Adv. Signal Process 2012, 234, 1–7.
36. Tobar, F.A.; Mandic, D.P. Design of positive-definite quaternion kernels. IEEE Signal Process Lett. 2015, 22, 2117–2121.
37. Thirulogasanthar, K.; Ali, S.T. General construction of reproducing kernels on a quaternionic Hilbert space. Rev. Math. Phys. 2017, 29, 1750017.
38. Muraleetharan, B.; Thirulogasanthar, K. Berberian Extension and its S-spectra in a Quaternionic Hilbert Space. Adv. Appl. Clifford Algebras. 2020, 30, 1–18.
39. Ghiloni, R.; Moretti, V.; Perotti, A. Continuous slice functional calculus in quaternionic Hilbert Spaces. Rev. Math. Phys. 2013, 25, 1350006.
40. Riesz, F.; Nagy, B.S. Functional Analysis; Dover Publications: New York, NY, USA, 1990.
41. Friedman, A. Foundations of Modern Analysis; Dover Publications: New York, NY, USA, 1982.
42. Shepp, L. Radon-Nikodym derivatives of Gaussian measures. Ann. Math. Stat. 1966, 37, 321–354.
43. Poor, H.V. An Introduction to Signal Detection and Estimation; Springer: New York, NY, USA, 1994.
44. Golosov, Y.I.; Tempelman, A.A. On the equivalence of measures corresponding to Gaussian vector-valued functions. Dokl. Akad. Nauk SSSR 1969, 184, 1271–1274.
45. Oya, A.; Navarro-Moreno, J.; Ruiz-Molina, J.C. Widely linear simulation of continuous-time complex-valued random signals. IEEE Trans. Signal Process Lett. 2011, 18, 513–516.
Figure 1. Detection probability versus the false-alarm probability.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


