*Article* **State Space Modeling with Non-Negativity Constraints Using Quadratic Forms**

**Ourania Theodosiadou \*,† and George Tsaklidis \*,†**

Department of Mathematics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece

**\*** Correspondence: outheod@math.auth.gr (O.T.); tsaklidi@math.auth.gr (G.T.); Tel.: +30-2310-997964 (O.T.)

† These authors contributed equally to this work.

**Abstract:** State space model representation is widely used for the estimation of nonobservable (hidden) random variables when noisy observations of the associated stochastic process are available. In case the state vector is subject to constraints, the standard Kalman filtering algorithm can no longer be used in the estimation procedure, since it assumes the linearity of the model. This kind of issue is considered in what follows for the case of hidden variables that have to be non-negative. This restriction, which is common in many real applications, can be faced by describing the dynamic system of the hidden variables through non-negative definite quadratic forms. Such a model could describe any process where a positive component represents "gain", while the negative one represents "loss"; the observation is derived from the difference between the two components, which stands for the "surplus". Here, a thorough analysis of the conditions that have to be satisfied regarding the existence of non-negative estimations of the hidden variables is presented via the use of the Karush–Kuhn–Tucker conditions.

**Keywords:** state space model; Kalman filter; constrained optimization; two-sided components

#### **1. Introduction**

State space modeling is used for estimating (revealing) the dynamic evolution of hidden-variable processes. In some cases, the state vector, which includes the hidden components, is subject to constraints, which arise either from the physical meaning of the states or from mathematical properties that have to be satisfied. For example, state space models with constraints are used in camera surveillance [1,2], navigation issues [3], and biological systems [4]. In finance especially, the hidden variables are often subject to non-negativity constraints or, in general, have to be bounded. For example, in the Vasicek model [5] and its extension [6], the interest rates are considered to be hidden random variables subject to non-negativity constraints, while in [7,8], the eigenvalues of the VAR process were restricted within the unit circle. Considering the use of state space models in the domain of finance, a discrete state space model could be implemented for the estimation of the hidden jump components of asset returns [9,10]. The use of jumps has been proposed for the description of the dynamics of asset prices, since they can explain some of the empirical characteristics of asset prices, e.g., the lack of a normal distribution or the existence of leptokurtosis (see, for example, [11]).

When dealing with state space models that are subject to constraints, the Kalman filtering algorithm [12] can no longer be used, since it assumes linearity in the model. In the domain of nonlinear filters, the particle filtering approach (see, for example, [13–16]) has wide applicability; it adopts resampling techniques for the estimation of the state vector at every time *t*. However, the use of resampling techniques adds considerable computational cost to the estimation procedure.

**Citation:** Theodosiadou, O.; Tsaklidis, G. State Space Modeling with Non-Negativity Constraints Using Quadratic Forms. *Mathematics* **2021**, *9*, 1908. https://doi.org/10.3390/math9161908

Academic Editors: Panagiotis-Christos Vassiliou and Andreas C. Georgiou

Received: 11 July 2021 Accepted: 5 August 2021 Published: 10 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In this work, the observation is defined as the difference between the two-sided components under noise inclusion. The components are considered to be hidden random variables, and therefore, a state space model is established, where the state equation describes the dynamic evolution of the two hidden components. This equation represents a first-order Markov process, i.e., all the information needed for the estimation of the components at time *t* is derived from the components at time *t* − 1, and no other information from past times is needed. Moreover, the state vector is subject to non-negativity constraints that have to be taken into account for its estimation in time. Such a model could describe, for example, the evolution of a system where the positive component represents "gain", while the negative one represents "loss"; the observation is derived from the difference between the two components, which stands for the "surplus", under noise inclusion. In asset pricing, an asset return can be defined as the difference between the two-sided non-negative return jump components under noise inclusion, where the jump components are considered to be hidden variables. Another example could be the one-dimensional random walk, where a positive jump could represent (the measure of) a move to the right and a negative jump (the measure of) a move to the left, while the observation could be a function of the two jump components given at discrete times. To handle such kinds of problems, non-negative definite quadratic forms are adopted in the state equation for the dynamic evolution of the two-sided components. In this case, the recursive equations of the Kalman filter cannot be used for the estimation of the state vector, since this filter assumes linearity in the measurement and state equations. To this end, this work first derives the recursive equations for the estimation of the state vector, based on the state space model representation with non-negative definite quadratic forms in the state equation and their Taylor expansions. Then, a thorough analysis of the necessary conditions that have to be satisfied in order to obtain non-negative estimations at every time *t* is provided.
In Proposition 1, the stationary points of the optimization problem with the non-negative constraints are given by using the Karush–Kuhn–Tucker conditions, while in Proposition 2, the necessary conditions for the existence of feasible solutions in the constrained optimization problem are provided.

Overall, this work proposes a method in state space modeling representation, which can be used when dealing with hidden components that are subject to non-negativity constraints. The method results in the formulation of a constrained optimization problem, for which the stationary points are derived via Proposition 1, and the necessary conditions for the existence of feasible solutions in this optimization problem are provided via Proposition 2; based on these results, the iterative formulas for the minimum variance a posteriori estimators of the (hidden) state vector are illustrated. Moreover, the proposed method has a low computational burden compared to other nonlinear filtering methods that can be used in state space modeling with inequality constraints and are based on resampling techniques (e.g., particle filtering).

The paper is organized as follows. In Section 2, the state space model proposed for the estimation of the two jump components is established. Two non-negative quadratic forms are adopted to describe the dynamic evolution of the two-sided components subject to their non-negative restrictions. In Section 3, the recursive equations of the second-order Kalman filter are presented, while in Section 4, a thorough analysis of the conditions that have to be fulfilled so as to have non-negative estimations is presented. The results of this analysis are summarized in Propositions 1 and 2. In Section 5, an illustrative example concerning the evolution of positive and negative jumps of asset returns is presented to demonstrate the theoretical results. Finally, Section 6 concludes on the findings and provides suggestions for future work.

#### **2. State Space Model**

In this section, a state space model representation is illustrated considering the case where there are two hidden processes subject to non-negativity constraints. The state equation that describes the dynamic evolution of the hidden components adopts the use of non-negative definite quadratic forms, while the measurement equation is linear.

The state equation is given by:

$$\begin{aligned} X_t &= (\mathbf{z}_{t-1} + \mathbf{w}_{t-1})^\top \mathbf{G}^{(1)}(\mathbf{z}_{t-1} + \mathbf{w}_{t-1}) = f_1(\mathbf{z}_{t-1}, \mathbf{w}_{t-1}), \\ Y_t &= (\mathbf{z}_{t-1} + \mathbf{w}_{t-1})^\top \mathbf{G}^{(2)}(\mathbf{z}_{t-1} + \mathbf{w}_{t-1}) = f_2(\mathbf{z}_{t-1}, \mathbf{w}_{t-1}), \end{aligned} \tag{1}$$

or equivalently:

$$\mathbf{z}_t = \sum_{k=1}^{2} \boldsymbol{\varphi}_k (\mathbf{z}_{t-1} + \mathbf{w}_{t-1})^\top \mathbf{G}^{(k)} (\mathbf{z}_{t-1} + \mathbf{w}_{t-1}), \tag{2}$$

where $\mathbf{z}_t = (X_t, Y_t)^\top$, $\mathbf{w}_t$ denotes the state noise with zero mean and covariance matrix $\mathbf{Q}$, and $\mathbf{G}^{(k)} = (g_{ij}^{(k)})$, $k = 1, 2$, are symmetric positive definite $(2 \times 2)$ matrices, i.e.:

$$g_{11}^{(k)} > 0 \quad \text{and} \quad g_{11}^{(k)} g_{22}^{(k)} - (g_{12}^{(k)})^2 > 0, \quad k = 1, 2.$$

The vector $\boldsymbol{\varphi}_k$ is a $(2 \times 1)$ column vector whose $k$-th element equals 1 and whose other element equals 0. The measurement equation is given by the relation:

$$R_t = \mathbf{H}\mathbf{z}_t + e_t, \tag{3}$$

where $\mathbf{H} = (1, -1)$ and $e_t \sim N(0, V)$. Moreover, it is assumed that $\mathbb{E}(e_k \mathbf{w}_j^\top) = \mathbf{0}$ for every $k, j$.

Clearly, the state Equation (2) describes a (nonobservable) first-order non-negative-valued Markovian process, the evolution and characteristics of which (e.g., periodicity, convergence, etc.) depend on the structure (values) of the associated noisy observation sequence. The aim of our study here was to estimate (reveal) the Markovian process (2) (i.e., the matrices $\mathbf{G}^{(k)}$, $k = 1, 2$, and $\mathbf{Q}$), through the observation Equation (3), if the components of the state vector have to be non-negative. For this purpose, Model (2) and (3) adopts the use of non-negative definite quadratic forms to describe the dynamic evolution of the hidden two-sided components, that is, to ensure that the estimations of the components will be non-negative. To that end, the extended Kalman filter of second order is proposed in order to estimate at every time $t$ the state vector $\mathbf{z}_t$ that incorporates the hidden jump components. Note that the noise component in Relation (2) is multiplicative and not additive.
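For concreteness, Model (2) and (3) can be simulated directly. The sketch below uses illustrative parameter values (the matrices $\mathbf{G}^{(k)}$, $\mathbf{Q}$, the variance $V$, and the initial state are assumptions chosen for demonstration, not values from this paper), with Gaussian state noise assumed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: G^(1), G^(2) symmetric positive definite
G1 = np.array([[0.50, 0.10], [0.10, 0.30]])
G2 = np.array([[0.40, 0.05], [0.05, 0.20]])
Q = 0.01 * np.eye(2)          # covariance of the state noise w_t
V = 0.05                      # variance of the observation noise e_t
H = np.array([1.0, -1.0])     # measurement row vector H = (1, -1)

def simulate_step(z_prev):
    """One transition of state Equation (2) and observation Equation (3)."""
    v = z_prev + rng.multivariate_normal(np.zeros(2), Q)
    z = np.array([v @ G1 @ v, v @ G2 @ v])   # quadratic forms: X_t, Y_t
    R = H @ z + rng.normal(0.0, np.sqrt(V))  # observed "surplus" R_t
    return z, R

z = np.array([0.2, 0.1])
for _ in range(10):
    z, R = simulate_step(z)
# Positive definiteness of G^(1), G^(2) keeps both state components non-negative.
```

The positive definiteness conditions on $\mathbf{G}^{(k)}$ are what guarantee non-negative trajectories here, regardless of the sign of the noise.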

Next, the extended Kalman filter of second order is described and its iterative equations for the estimation of the state vector are presented.

#### **3. Extended Kalman Filter of Second Order**

Model (2) and (3) presented in Section 2 is nonlinear, and consequently, the standard recursive algorithm of the Kalman filter cannot be used for the estimation of the state vector. Aiming to derive the recursive equations for the estimation of the hidden states, taking into consideration that the state Equation (2) is a quadratic form, the following notation is used:


Let $\hat{\mathbf{z}}_t^-$ and $\hat{\mathbf{z}}_t^+$ denote the a priori and a posteriori estimations of $\mathbf{z}_t$, respectively, with associated error covariance matrices:

$$\mathbf{P}_t^- = \mathbb{E}[(\mathbf{z}_t - \hat{\mathbf{z}}_t^-)(\mathbf{z}_t - \hat{\mathbf{z}}_t^-)^\top] \quad \text{and} \quad \mathbf{P}_t^+ = \mathbb{E}[(\mathbf{z}_t - \hat{\mathbf{z}}_t^+)(\mathbf{z}_t - \hat{\mathbf{z}}_t^+)^\top].$$

According to (2), $z_{t,k}$, $k = 1, 2$, is a function of the random variables $\mathbf{z}_{t-1}$ and $\mathbf{w}_{t-1}$, i.e., $z_{t,k} = z_{t,k}(\mathbf{z}_{t-1}, \mathbf{w}_{t-1})$. Then, using the second-order Taylor expansion of $z_{t,k}$ at $(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})$, it is derived that:

$$\begin{aligned} z_{t,k} ={}& f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0}) \\ &+ \Big(\frac{\partial f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1}}\Big)^\top (\mathbf{z}_{t-1} - \hat{\mathbf{z}}_{t-1}^+) + \Big(\frac{\partial f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{w}_{t-1}}\Big)^\top \mathbf{w}_{t-1} \\ &+ \frac{1}{2} (\mathbf{z}_{t-1} - \hat{\mathbf{z}}_{t-1}^+)^\top \frac{\partial^2 f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1}^2} (\mathbf{z}_{t-1} - \hat{\mathbf{z}}_{t-1}^+) \\ &+ \frac{1}{2} \mathbf{w}_{t-1}^\top \frac{\partial^2 f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{w}_{t-1}^2} \mathbf{w}_{t-1} \\ &+ (\mathbf{z}_{t-1} - \hat{\mathbf{z}}_{t-1}^+)^\top \frac{\partial^2 f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1} \partial \mathbf{w}_{t-1}} \mathbf{w}_{t-1}, \quad k = 1, 2, \end{aligned} \tag{4}$$

where the functions $f_k = f_k(\mathbf{z}_{t-1}, \mathbf{w}_{t-1})$, $k = 1, 2$, are given in (1). By equating the mean values in Relation (4), the a priori estimation of $\mathbf{z}_t$ (*prediction stage*) is derived, that is:

$$\begin{aligned} \hat{z}_{t,k}^- ={}& f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0}) + \frac{1}{2} \operatorname{tr}\Big(\frac{\partial^2 f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1}^2} \mathbf{P}_{t-1}^+\Big) \\ &+ \frac{1}{2} \operatorname{tr}\Big(\frac{\partial^2 f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{w}_{t-1}^2} \mathbf{Q}\Big), \quad k = 1, 2, \end{aligned} \tag{5}$$

and the entries of the respective variance–covariance matrix $\mathbf{P}_t^-$ are given by the relation:

$$\begin{aligned} (\mathbf{P}_t^-)_{k,m} ={}& \Big(\frac{\partial f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1}}\Big)^\top \mathbf{P}_{t-1}^+ \frac{\partial f_m(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1}} \\ &+ \Big(\frac{\partial f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{w}_{t-1}}\Big)^\top \mathbf{Q} \frac{\partial f_m(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{w}_{t-1}} \\ &+ \frac{1}{2} \operatorname{tr}\Big(\frac{\partial^2 f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1}^2} \mathbf{P}_{t-1}^+ \frac{\partial^2 f_m(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{z}_{t-1}^2} \mathbf{P}_{t-1}^+\Big) \\ &+ \frac{1}{2} \operatorname{tr}\Big(\frac{\partial^2 f_k(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{w}_{t-1}^2} \mathbf{Q} \frac{\partial^2 f_m(\hat{\mathbf{z}}_{t-1}^+, \mathbf{0})}{\partial \mathbf{w}_{t-1}^2} \mathbf{Q}\Big), \quad k, m = 1, 2, \end{aligned} \tag{6}$$

where $(\mathbf{P}_t^-)_{k,m}$ denotes the $(k, m)$-element of the matrix $\mathbf{P}_t^-$ and $\operatorname{tr}(\cdot)$ denotes the trace of the respective matrix. Taking into consideration the properties of the trace of a matrix, it is derived after some algebraic manipulations on Relations (5) and (6) that:

$$\hat{z}_{t,k}^- = \hat{\mathbf{z}}_{t-1}^{+\top} \mathbf{G}^{(k)} \hat{\mathbf{z}}_{t-1}^+ + \operatorname{tr}(\mathbf{G}^{(k)} \mathbf{P}_{t-1}^+) + \operatorname{tr}(\mathbf{G}^{(k)} \mathbf{Q}), \quad k = 1, 2, \tag{7}$$

$$\begin{aligned} (\mathbf{P}_t^-)_{k,m} ={}& 4\hat{\mathbf{z}}_{t-1}^{+\top} \mathbf{G}^{(k)} \mathbf{P}_{t-1}^+ \mathbf{G}^{(m)} \hat{\mathbf{z}}_{t-1}^+ + 4\hat{\mathbf{z}}_{t-1}^{+\top} \mathbf{G}^{(k)} \mathbf{Q} \mathbf{G}^{(m)} \hat{\mathbf{z}}_{t-1}^+ \\ &+ 2\operatorname{tr}(\mathbf{G}^{(k)} \mathbf{P}_{t-1}^+ \mathbf{G}^{(m)} \mathbf{P}_{t-1}^+) + 2\operatorname{tr}(\mathbf{G}^{(k)} \mathbf{Q} \mathbf{G}^{(m)} \mathbf{Q}), \quad k, m = 1, 2. \end{aligned} \tag{8}$$
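The prediction stage (7) and (8) translates directly into code. The following sketch is a minimal implementation, exercised with illustrative (assumed) values for $\mathbf{G}^{(k)}$, $\mathbf{Q}$, and the previous a posteriori moments:

```python
import numpy as np

def predict(z_post, P_post, G, Q):
    """A priori moments of z_t via Relations (7) and (8)."""
    # Relation (7): quadratic form plus two trace corrections, per component
    z_prior = np.array([
        z_post @ G[k] @ z_post + np.trace(G[k] @ P_post) + np.trace(G[k] @ Q)
        for k in range(2)
    ])
    # Relation (8): entries of the a priori covariance matrix
    P_prior = np.empty((2, 2))
    for k in range(2):
        for m in range(2):
            P_prior[k, m] = (
                4 * z_post @ G[k] @ P_post @ G[m] @ z_post
                + 4 * z_post @ G[k] @ Q @ G[m] @ z_post
                + 2 * np.trace(G[k] @ P_post @ G[m] @ P_post)
                + 2 * np.trace(G[k] @ Q @ G[m] @ Q)
            )
    return z_prior, P_prior

# Illustrative (assumed) inputs:
G = [np.array([[0.50, 0.10], [0.10, 0.30]]),
     np.array([[0.40, 0.05], [0.05, 0.20]])]
Q = 0.01 * np.eye(2)
z_prior, P_prior = predict(np.array([0.2, 0.1]), 0.02 * np.eye(2), G, Q)
```

Since every matrix entering (7) is non-negative definite, each term of $\hat{z}_{t,k}^-$ is non-negative, which is why the a priori estimations automatically satisfy the constraints.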

Regarding the a posteriori estimations of $\mathbf{z}_t$, it is taken into account that the joint distribution of $\mathbf{z}_t$ and $R_t$ is normal, based on the relation:

$$\begin{bmatrix} \mathbf{z}_t \\ R_t \end{bmatrix} \sim N\left(\begin{bmatrix} \hat{\mathbf{z}}_t^- \\ \mathbf{H}\hat{\mathbf{z}}_t^- \end{bmatrix}, \begin{bmatrix} \mathbf{P}_t^- & \mathbf{P}_t^-\mathbf{H}^\top \\ \mathbf{H}\mathbf{P}_t^- & \mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top + V \end{bmatrix}\right).$$

Then, we make use of the following Lemma (see for example [17]):

**Lemma 1.** *Let* **x***,* **y** *be two random variables that are jointly normally distributed with:*

$$\mathbb{E}\left(\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}\right) = \begin{bmatrix} \boldsymbol{\mu}_x \\ \boldsymbol{\mu}_y \end{bmatrix} \quad \text{and} \quad \boldsymbol{\Sigma} = \begin{bmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{21} & \boldsymbol{\Sigma}_{22} \end{bmatrix}.$$

*Then,* $(\mathbf{x} \mid \mathbf{y}) \sim N(\boldsymbol{\mu}', \boldsymbol{\Sigma}')$*, where:*

$$\boldsymbol{\mu}' = \boldsymbol{\mu}_x + \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\mathbf{y} - \boldsymbol{\mu}_y) \quad \text{and} \quad \boldsymbol{\Sigma}' = \boldsymbol{\Sigma}_{11} - \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{\Sigma}_{21}.$$
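As a quick numerical sanity check of Lemma 1 in the scalar case (the mean vector, covariance matrix, and conditioning value below are arbitrary illustrative numbers, unrelated to the model), one can compare the closed-form conditional moments with the empirical moments of joint samples whose second coordinate falls near the conditioning value:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, 2.0])                   # (mu_x, mu_y)
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])  # [[S11, S12], [S21, S22]]
y0 = 2.5                                    # conditioning value for y

# Closed form from Lemma 1 (scalar blocks):
mu_cond = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (y0 - mu[1])
var_cond = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]

# Empirical conditional moments: keep joint samples with y close to y0
xy = rng.multivariate_normal(mu, Sigma, size=1_000_000)
x_sel = xy[np.abs(xy[:, 1] - y0) < 0.01, 0]
# x_sel.mean() and x_sel.var() approximate mu_cond and var_cond
# up to Monte Carlo error.
```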

Based on Lemma 1, the a posteriori estimation of $\mathbf{z}_t$ (*update stage*) and the related variance–covariance matrix $\mathbf{P}_t^+$ are given by:

$$\mathbf{\hat{z}}\_t^+ = \mathbf{\hat{z}}\_t^- + \mathbf{K}\_t (\mathbf{R}\_t - \mathbf{H}\mathbf{\hat{z}}\_t^-) \; , \tag{9}$$

$$\mathbf{P}_t^+ = (\mathbf{I} - \mathbf{K}_t \mathbf{H})\mathbf{P}_t^-, \tag{10}$$

where $\mathbf{K}_t = \mathbf{P}_t^- \mathbf{H}^\top (\mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top + V)^{-1}$. By using the recursive Relations (7)–(10), we can estimate the hidden components at every time $t$.
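The update stage (9) and (10) is standard; a minimal sketch follows, with array shapes chosen for the two-component model and example inputs that are assumptions for demonstration only:

```python
import numpy as np

def update(z_prior, P_prior, R, H, V):
    """A posteriori estimation via Relations (9) and (10) (unconstrained update)."""
    S = H @ P_prior @ H + V               # innovation variance H P- H^T + V
    K = P_prior @ H / S                   # gain K_t = P- H^T (H P- H^T + V)^{-1}
    z_post = z_prior + K * (R - H @ z_prior)          # Relation (9)
    P_post = (np.eye(2) - np.outer(K, H)) @ P_prior   # Relation (10)
    return z_post, P_post

# Illustrative call with assumed inputs:
H = np.array([1.0, -1.0])
z_post, P_post = update(np.array([0.2, 0.1]), 0.02 * np.eye(2), R=0.15, H=H, V=0.05)
```

As Section 4 explains, this unconstrained update need not preserve the non-negativity of the components, which is what motivates the constrained optimization problem studied next.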

Next, a detailed investigation regarding the existence of non-negative solutions (i.e., non-negative a posteriori estimations of **z***t*) derived from (9) is presented.

#### **4. Investigation of the State Space Model**

In what follows, we present an investigation concerning the conditions that have to be satisfied so as to derive non-negative a posteriori estimations of the state vector $\mathbf{z}_t$. Obviously, Relation (7) ensures the existence of non-negative a priori estimations of $\mathbf{z}_t$ at every time $t$. However, the a posteriori estimations of $\mathbf{z}_t$ given by (9) may not fulfil the non-negativity condition. We note that the solutions depend on the term $\mathbf{K}_t(R_t - \mathbf{H}\hat{\mathbf{z}}_t^-)$, the sign of which is not time invariant. To this end, in order to ensure that the a posteriori unbiased estimator $\hat{\mathbf{z}}_t^+$ will be a minimum variance estimator under the non-negativity restrictions that its components must satisfy, the following optimization problem arises:

$$\min_{\hat{\mathbf{z}}_t^+} \Big\{ \operatorname{tr}(\mathbf{P}_t^+) = \mathbb{E}\big[(\mathbf{z}_t - \hat{\mathbf{z}}_t^+)^\top(\mathbf{z}_t - \hat{\mathbf{z}}_t^+)\big] \Big\} \tag{11}$$

subject to $\hat{\mathbf{z}}_t^+ \succeq \mathbf{0}$.

The symbol $\succeq$ (resp. $\succ$) is used for the elementwise inequality, while $\mathbf{z}_t = (X_t, Y_t)^\top$ is given by Equation (1) (or (2)). The following Proposition 1 provides the set of *stationary points* related to the optimization problem (11), subject to the non-negativity restrictions. This set includes the optimal solution, i.e., the unbiased minimum variance estimator $\hat{\mathbf{z}}_t^+$. In what follows, we use the following notations:

$$a_t = R_t - \mathbf{H}\hat{\mathbf{z}}_t^-, \quad b_t = \mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top \quad \text{and} \quad \mathbf{K}_t = (K_{t,1}, K_{t,2})^\top.$$

**Remark 1.** *Notice that, if $a_t = 0$, then Relation (9) leads to $\hat{\mathbf{z}}_t^+ = \hat{\mathbf{z}}_t^- \succeq \mathbf{0}$, and consequently, the solution is acceptable.*

Taking into consideration Remark 1, it is assumed in the sequel that $a_t \neq 0$ for every $t$.

**Proposition 1.** *The weight matrix $\mathbf{K}_t$ and the stationary points related to the optimization problem (11) are given by the relations:*

*(i)* $\mathbf{K}_t = (b_t + V)^{-1}\mathbf{P}_t^-\mathbf{H}^\top$*, which leads to the solution:*

$$\hat{\mathbf{z}}_t^+ = \hat{\mathbf{z}}_t^- + a_t(b_t + V)^{-1}\mathbf{P}_t^-\mathbf{H}^\top;$$

*(ii)* $\mathbf{K}_t = \begin{pmatrix} (b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_1 \\ -a_t^{-1}\hat{z}_{t,2}^- \end{pmatrix}$*, which leads to the solution:*

$$\hat{z}_{t,1}^+ = \hat{z}_{t,1}^- + a_t(b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_1$$

*and:*

$$\hat{z}_{t,2}^+ = 0;$$

*(iii)* $\mathbf{K}_t = \begin{pmatrix} -a_t^{-1}\hat{z}_{t,1}^- \\ (b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_2 \end{pmatrix}$*, which leads to the solution:*

$$\hat{z}_{t,1}^+ = 0$$

*and:*

$$\hat{z}_{t,2}^+ = \hat{z}_{t,2}^- + a_t(b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_2;$$

*(iv)* $\mathbf{K}_t = -a_t^{-1}\hat{\mathbf{z}}_t^-$*, which leads to the solution* $\hat{\mathbf{z}}_t^+ = \mathbf{0}$*,*

*where $(\mathbf{P}_t^-\mathbf{H}^\top)_i$ denotes the $i$-th row of the matrix $\mathbf{P}_t^-\mathbf{H}^\top$, $i = 1, 2$.*

**Proof.** The Lagrangian function related to the optimization problem (11) is defined as:

$$\begin{aligned} \Lambda &= \operatorname{tr}(\mathbf{P}_t^+) + \lambda_1(-\hat{z}_{t,1}^+) + \lambda_2(-\hat{z}_{t,2}^+) \\ &= \operatorname{tr}\big(\mathbb{E}[(\mathbf{z}_t - \hat{\mathbf{z}}_t^+)(\mathbf{z}_t - \hat{\mathbf{z}}_t^+)^\top]\big) + \lambda_1(-\hat{z}_{t,1}^+) + \lambda_2(-\hat{z}_{t,2}^+), \quad \lambda_1, \lambda_2 \geq 0. \end{aligned} \tag{12}$$

Based on (10), it is derived that:

$$\operatorname{tr}(\mathbf{P}_t^+) = \operatorname{tr}\big[(\mathbf{I} - \mathbf{K}_t\mathbf{H})\mathbf{P}_t^-(\mathbf{I} - \mathbf{K}_t\mathbf{H})^\top + \mathbf{K}_t V \mathbf{K}_t^\top\big],$$

while (by assuming the dependence of $\hat{z}_{t,i}^+$ on $R_t$ and $\hat{z}_{t,i}^-$, $i = 1, 2$, as provided in Kalman filtering):

$$\begin{aligned} \hat{z}_{t,1}^+ &= \hat{z}_{t,1}^- + K_{t,1}(R_t - \mathbf{H}\hat{\mathbf{z}}_t^-) = \hat{z}_{t,1}^- + a_t K_{t,1}, \\ \hat{z}_{t,2}^+ &= \hat{z}_{t,2}^- + K_{t,2}(R_t - \mathbf{H}\hat{\mathbf{z}}_t^-) = \hat{z}_{t,2}^- + a_t K_{t,2}. \end{aligned}$$

By calculating the first derivative of the Lagrangian function and equating it to 0, it is derived that:

$$\begin{aligned} \frac{d\Lambda}{d\mathbf{K}_t} &= \frac{d}{d\mathbf{K}_t}\Big[\operatorname{tr}\big[(\mathbf{I} - \mathbf{K}_t\mathbf{H})\mathbf{P}_t^-(\mathbf{I} - \mathbf{K}_t\mathbf{H})^\top + \mathbf{K}_t V \mathbf{K}_t^\top\big] - \lambda_1(\hat{z}_{t,1}^- + a_t K_{t,1}) - \lambda_2(\hat{z}_{t,2}^- + a_t K_{t,2})\Big] \\ &= -2\mathbf{P}_t^{-\top}\mathbf{H}^\top + 2\mathbf{K}_t\mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top + 2\mathbf{K}_t V - a_t\boldsymbol{\lambda} \\ &= \mathbf{0}, \end{aligned} \tag{13}$$

where $\boldsymbol{\lambda} = (\lambda_1, \lambda_2)^\top$. Thus, the matrix $\mathbf{K}_t$ has to satisfy the following condition (by noticing that $\mathbf{P}_t^-$ is symmetric):

$$-2\mathbf{P}_t^-\mathbf{H}^\top + 2\mathbf{K}_t\mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top + 2\mathbf{K}_t V = a_t\boldsymbol{\lambda}, \tag{14}$$

based on the constraints [18]:

$$\begin{aligned} \lambda_1(\hat{z}_{t,1}^- + a_t K_{t,1}) &= 0, \\ \lambda_2(\hat{z}_{t,2}^- + a_t K_{t,2}) &= 0, \\ \lambda_1, \lambda_2 &\geq 0. \end{aligned}$$

The following cases have to be considered:

(i) **The two constraint conditions are inactive**. Then, *λ*<sup>1</sup> = *λ*<sup>2</sup> = 0, and the optimization problem, leading to (14), is transformed into the unconstrained one considered in the case of the Kalman filter. It is derived that:

$$\mathbf{K}\_{t} = \mathbf{P}\_{t}^{-} \mathbf{H}^{T} (\mathbf{H} \mathbf{P}\_{t}^{-} \mathbf{H}^{T} + V)^{-1},\tag{15}$$

which is the well-known *Kalman gain matrix*. The related solution in terms of the a posteriori estimator $\hat{\mathbf{z}}_t^+$ is:

$$\hat{\mathbf{z}}_t^+ = \hat{\mathbf{z}}_t^- + a_t(b_t + V)^{-1}\mathbf{P}_t^-\mathbf{H}^\top. \tag{16}$$

Relation (16) constitutes a possible solution of the optimization problem (11), and it has to satisfy the constraint $\hat{\mathbf{z}}_t^+ \succeq \mathbf{0}$;

(ii) **Only the second constraint condition is active**, i.e., $\lambda_1 = 0$ and $\lambda_2(\hat{z}_{t,2}^- + a_t K_{t,2}) = 0$:
	- (a) If $\lambda_2 = 0$, then we are led to the unconstrained optimization problem presented in Case (i), and the solution must satisfy the non-negative restrictions, i.e., $\hat{\mathbf{z}}_t^+ \succeq \mathbf{0}$;
	- (b) If $\hat{z}_{t,2}^- + a_t K_{t,2} = 0$, it is derived via the active constraint condition that:

$$K\_{t,2} = -a\_t^{-1} \hat{z}\_{t,2}^{-} \,. \tag{17}$$

By using (17), Relation (14) is transformed into:

$$-2\begin{pmatrix} (\mathbf{P}_t^-\mathbf{H}^\top)_1 \\ (\mathbf{P}_t^-\mathbf{H}^\top)_2 \end{pmatrix} + 2\begin{pmatrix} K_{t,1} \\ -a_t^{-1}\hat{z}_{t,2}^- \end{pmatrix}\mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top + 2\begin{pmatrix} K_{t,1} \\ -a_t^{-1}\hat{z}_{t,2}^- \end{pmatrix}V = a_t\boldsymbol{\lambda}.$$

Consequently,

$$K_{t,1} = (b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_1, \tag{18}$$

where $b_t = \mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top \geq 0$. By using (17) and (18), it is derived that:

$$\mathbf{K}_t = \begin{pmatrix} (b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_1 \\ -a_t^{-1}\hat{z}_{t,2}^- \end{pmatrix}.$$

Thus,

$$\hat{z}_{t,1}^+ = \hat{z}_{t,1}^- + a_t(b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_1$$

and:

$$\hat{z}_{t,2}^+ = 0;$$

(iii) **Only the first constraint condition is active**, i.e., $\lambda_2 = 0$ and $\lambda_1(\hat{z}_{t,1}^- + a_t K_{t,1}) = 0$:
	- (a) If $\lambda_1 = 0$, then we obtain the unconstrained optimization problem presented in Case (i), and the solution must fulfil the non-negative restrictions, i.e., $\hat{\mathbf{z}}_t^+ \succeq \mathbf{0}$;
	- (b) If $\hat{z}_{t,1}^- + a_t K_{t,1} = 0$ and $\lambda_1 \neq 0$, then it is derived that:

$$K\_{t,1} = -a\_t^{-1} \hat{z}\_{t,1}^{-} \, . \tag{19}$$

and Relation (14) is transformed into:

$$-2\begin{pmatrix} (\mathbf{P}_t^-\mathbf{H}^\top)_1 \\ (\mathbf{P}_t^-\mathbf{H}^\top)_2 \end{pmatrix} + 2\begin{pmatrix} -a_t^{-1}\hat{z}_{t,1}^- \\ K_{t,2} \end{pmatrix}\mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top + 2\begin{pmatrix} -a_t^{-1}\hat{z}_{t,1}^- \\ K_{t,2} \end{pmatrix}V = a_t\boldsymbol{\lambda}.$$

Then,

$$K_{t,2} = (b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_2, \tag{20}$$

where $b_t = \mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top \geq 0$. By using (19) and (20), it is derived that:

$$\mathbf{K}_t = \begin{pmatrix} -a_t^{-1}\hat{z}_{t,1}^- \\ (b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_2 \end{pmatrix},$$

and consequently:

$$\hat{z}_{t,1}^+ = 0$$

and:

$$\hat{z}_{t,2}^+ = \hat{z}_{t,2}^- + a_t(b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_2;$$

(iv) **The two constraint conditions are active**, i.e., $\hat{z}_{t,1}^- + a_t K_{t,1} = 0$ and $\hat{z}_{t,2}^- + a_t K_{t,2} = 0$. In this case, we have to seek solutions such that $\lambda_1, \lambda_2 \geq 0$.

Based on the active constraint conditions, it is derived that:

$$K_{t,1} = -a_t^{-1}\hat{z}_{t,1}^- \quad \text{and} \quad K_{t,2} = -a_t^{-1}\hat{z}_{t,2}^-,$$

i.e., $\mathbf{K}_t = -a_t^{-1}\hat{\mathbf{z}}_t^-$, resulting in the relation:

$$\begin{split} \mathbf{\hat{z}}\_{t}^{+} &= \mathbf{\hat{z}}\_{t}^{-} + a\_{t} \mathbf{K}\_{t} \\ &= \mathbf{\hat{z}}\_{t}^{-} - a\_{t} a\_{t}^{-1} \mathbf{\hat{z}}\_{t}^{-} \\ &= \mathbf{0}. \end{split}$$

The state vector $\hat{\mathbf{z}}_t^+ = \mathbf{0}$ constitutes a feasible solution, and it has to be checked whether Relation (14) is satisfied with $\lambda_1, \lambda_2 \geq 0$. □
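The four stationary points of Proposition 1 suggest a simple selection procedure: compute the unconstrained (Kalman) candidate and, for each component whose candidate value is negative, switch to the corresponding clipped case; checking the candidate components against zero is equivalent to the necessary conditions formalized in Proposition 2 below. A sketch for the two-component model follows (the function name, the exact-zero test on $a_t$, and the example inputs are implementation choices and assumptions, not part of the propositions):

```python
import numpy as np

def constrained_update(z_prior, P_prior, R, H, V):
    """Select among the stationary points of Proposition 1 so that z_t^+ >= 0."""
    a = R - H @ z_prior                # a_t
    b = H @ P_prior @ H                # b_t = H P- H^T
    PH = P_prior @ H                   # P_t^- H^T as a 2-vector
    if a == 0:                         # Remark 1: the prior is already feasible
        return z_prior.copy()
    cand = z_prior + a / (b + V) * PH  # case (i): unconstrained Kalman update
    ok = cand >= 0                     # componentwise feasibility of case (i)
    if ok.all():
        return cand                        # case (i)
    if ok[0]:
        return np.array([cand[0], 0.0])    # case (ii): second component set to 0
    if ok[1]:
        return np.array([0.0, cand[1]])    # case (iii): first component set to 0
    return np.zeros(2)                 # case (iv): both constraints active

# Illustrative (assumed) inputs:
H = np.array([1.0, -1.0])
P = np.array([[0.02, 0.005], [0.005, 0.02]])
z_plus = constrained_update(np.array([0.05, 0.3]), P, R=-1.0, H=H, V=0.05)
```

With these inputs, the strongly negative observation pushes the first component below zero, so the case (iii) solution is returned with its first component clipped to zero.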

In what follows, Proposition 2 provides the necessary conditions for the existence of feasible solutions regarding the constrained filter.

**Proposition 2.** *The solutions given in Proposition 1 regarding the optimization problem* (11) *are feasible upon the following conditions (necessary conditions):*

*(i)*

$$\hat{\mathbf{z}}_t^+ = \hat{\mathbf{z}}_t^- + a_t(b_t + V)^{-1}\mathbf{P}_t^-\mathbf{H}^\top$$

*constitutes a feasible solution, if:*

$$a_t\mathbf{P}_t^-\mathbf{H}^\top \succeq -(b_t + V)\hat{\mathbf{z}}_t^-;$$

*(ii)*

$$\hat{\mathbf{z}}_t^+ = \begin{pmatrix} \hat{z}_{t,1}^- + a_t(b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_1 \\ 0 \end{pmatrix}$$

*constitutes a feasible solution, if:*

$$a_t(\mathbf{P}_t^-\mathbf{H}^\top)_1 \geq -(b_t + V)\hat{z}_{t,1}^-$$

*and:*

$$a_t(\mathbf{P}_t^-\mathbf{H}^\top)_2 < -(b_t + V)\hat{z}_{t,2}^-;$$

*(iii)*

$$\hat{\mathbf{z}}_t^+ = \begin{pmatrix} 0 \\ \hat{z}_{t,2}^- + a_t(b_t + V)^{-1}(\mathbf{P}_t^-\mathbf{H}^\top)_2 \end{pmatrix}$$

*constitutes a feasible solution, if:*

$$a_t(\mathbf{P}_t^-\mathbf{H}^\top)_1 < -(b_t + V)\hat{z}_{t,1}^-$$

*and:*

$$a_t(\mathbf{P}_t^-\mathbf{H}^\top)_2 \geq -(b_t + V)\hat{z}_{t,2}^-;$$

*(iv)* $\hat{\mathbf{z}}_t^+ = \mathbf{0}$ *constitutes a feasible solution, if:*

$$a_t(\mathbf{P}_t^-\mathbf{H}^\top)_1 < -(b_t + V)\hat{z}_{t,1}^-$$

*and:*

$$a_t(\mathbf{P}_t^-\mathbf{H}^\top)_2 < -(b_t + V)\hat{z}_{t,2}^-.$$

**Proof.** Similar to the proof of Proposition 1, four cases are considered:

(i) **The two constraint conditions are inactive**. Then, $\lambda_1 = \lambda_2 = 0$, and the optimization problem is transformed into the unconstrained one that is met in the case of the Kalman filter. In this case, based on Proposition 1, we obtain that $\mathbf{K}_t = (b_t + V)^{-1}\mathbf{P}_t^-\mathbf{H}^\top$, resulting in the estimation:

$$\hat{\mathbf{z}}_t^+ = \hat{\mathbf{z}}_t^- + a_t(b_t + V)^{-1}\mathbf{P}_t^-\mathbf{H}^\top,$$

where $\hat{\mathbf{z}}_t^+$ is a feasible solution of the optimization problem with the non-negative constraints, if:

$$\hat{\mathbf{z}}_t^- + a_t(b_t + V)^{-1}\mathbf{P}_t^-\mathbf{H}^\top \succeq \mathbf{0}.$$

Consequently, the necessary condition is formulated as follows:

$$a_t\mathbf{P}_t^-\mathbf{H}^\top \succeq -(b_t + V)\hat{\mathbf{z}}_t^-;$$

(ii) **Only the second constraint condition is active**, i.e., $\lambda_1 = 0$:
	- (a) If $\lambda_2 = 0$, then based on (14), the solution is given by (15), which is related to the Kalman filter and the unconstrained optimization problem. This solution is acceptable if it is aligned with the active constraint condition. Otherwise, it is rejected;
	- (b) If $\lambda_2 > 0$, the matrix $\mathbf{K}_t$ has to be in such a form so that $\hat{z}_{t,1}^+ \geq 0$. It is derived via the active constraint condition that $K_{t,2} = -a_t^{-1}\hat{z}_{t,2}^-$, where $a_t \neq 0$ based on Remark 1. Then, (14) results in:

$$\begin{aligned} a_t\boldsymbol{\lambda} &= -2\begin{pmatrix} (\mathbf{P}_t^-\mathbf{H}^\top)_1 \\ (\mathbf{P}_t^-\mathbf{H}^\top)_2 \end{pmatrix} + 2\begin{pmatrix} K_{t,1} \\ -a_t^{-1}\hat{z}_{t,2}^- \end{pmatrix}\mathbf{H}\mathbf{P}_t^-\mathbf{H}^\top + 2\begin{pmatrix} K_{t,1} \\ -a_t^{-1}\hat{z}_{t,2}^- \end{pmatrix}V \\ &= 2\begin{pmatrix} -(\mathbf{P}_t^-\mathbf{H}^\top)_1 + b_t K_{t,1} + K_{t,1}V \\ -(\mathbf{P}_t^-\mathbf{H}^\top)_2 - a_t^{-1}b_t\hat{z}_{t,2}^- - a_t^{-1}\hat{z}_{t,2}^-V \end{pmatrix}. \end{aligned} \tag{21}$$

Since $\lambda_1 = 0$ and $a_t \neq 0$, Relation (21) implies:

$$
\begin{pmatrix} 0 \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} -(\mathbf{P}_t^- \mathbf{H}^T)_1 + b_t K_{t,1} + K_{t,1} V \\ -a_t^{-1} (\mathbf{P}_t^- \mathbf{H}^T)_2 - a_t^{-2} b_t \hat{z}_{t,2}^- - a_t^{-2} \hat{z}_{t,2}^- V \end{pmatrix}. \tag{22}
$$

It is derived via (22) that, if $\lambda_2 > 0$, then:

$$a_t^2 \lambda_2 = -a_t(\mathbf{P}_t^- \mathbf{H}^T)_2 - b_t \hat{z}_{t,2}^- - \hat{z}_{t,2}^- V > 0.$$

Consequently,

$$a_t(\mathbf{P}_t^{-}\mathbf{H}^T)_2 < -(b_t + V)\hat{z}_{t,2}^{-}, \tag{23}$$

resulting in $a_t(\mathbf{P}_t^-\mathbf{H}^T)_2 < 0$, since $\hat{z}_{t,2}^- \geq 0$. Moreover, taking into consideration that:

$$\hat{z}_{t,1}^+ = \hat{z}_{t,1}^- + a_t K_{t,1} = \hat{z}_{t,1}^- + a_t (b_t + V)^{-1}(\mathbf{P}_t^- \mathbf{H}^T)_1$$

and $\hat{z}_{t,1}^+ \geq 0$, it is derived that:

$$a_t(\mathbf{P}_t^-\mathbf{H}^T)_1 \geq -(b_t + V)\hat{z}_{t,1}^-. \tag{24}$$

	- (a) $\lambda_1 = 0$ and $\lambda_2 = 0$;
	- (b) $\lambda_1 > 0$ and $\lambda_2 = 0$.

Similar to Case (ii), the third part of Proposition 2 can be derived;

(iv) **The two constraint conditions are active**, i.e., $\hat{z}_{t,1}^- + a_t K_{t,1} = 0$ and $\hat{z}_{t,2}^- + a_t K_{t,2} = 0$. In this case, we have to search for solutions where $\lambda_1, \lambda_2 \geq 0$. It is derived via Proposition 1 that:

$$\mathbf{K}_t = -a_t^{-1} \hat{\mathbf{z}}_t^- \, , \tag{25}$$

which leads to the solution $\hat{\mathbf{z}}_t^+ = \mathbf{0}$.

The following subcases are considered:

(b) If $\lambda_1 = 0$ and $\lambda_2 > 0$, the solution $\hat{\mathbf{z}}_t^+ = \mathbf{0}$ is accepted if:

$$a_t(\mathbf{P}_t^-\mathbf{H}^T)_2 < -(b_t+V)\hat{z}_{t,2}^-.$$

Otherwise, it is rejected, since it is not aligned with the conditions of the considered case (i.e., $\lambda_1 = 0$, $\lambda_2 > 0$);

(c) If $\lambda_1 > 0$ and $\lambda_2 = 0$, then, similarly to the previous subcase, the solution $\hat{\mathbf{z}}_t^+ = \mathbf{0}$ is accepted if:

$$a_t(\mathbf{P}_t^-\mathbf{H}^T)_1 < -(b_t+V)\hat{z}_{t,1}^-;$$

	- (d) If $\lambda_1 > 0$ and $\lambda_2 > 0$, then, by taking into consideration Relations (23) and (25), it turns out that the necessary condition for $\hat{\mathbf{z}}_t^+ = \mathbf{0}$ to be accepted as a feasible solution (which satisfies the conditions of the considered case, i.e., $\lambda_1 > 0$, $\lambda_2 > 0$) is:

$$a_t(\mathbf{P}_t^-\mathbf{H}^T)_1 < -(b_t + V)\hat{z}_{t,1}^- \quad \text{and} \quad a_t(\mathbf{P}_t^-\mathbf{H}^T)_2 < -(b_t + V)\hat{z}_{t,2}^-.$$

In conclusion, in Case (*iv*), the vector $\hat{\mathbf{z}}_t^+ = \mathbf{0}$ is a possible optimal solution if at least one of the following conditions holds:

$$a_t(\mathbf{P}_t^-\mathbf{H}^T)_1 < -(b_t + V)\hat{z}_{t,1}^- \quad \text{or} \quad a_t(\mathbf{P}_t^-\mathbf{H}^T)_2 < -(b_t + V)\hat{z}_{t,2}^-.$$

**Remark 2.** *Due to the low computational cost, the four possible solutions of the constrained optimization problem* (11) *can be examined one by one in order to find the optimal solution. In any case, the necessary conditions presented in Proposition 2 can be examined simultaneously to obtain a more comprehensive view of the search for the optimal solution.*
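The enumeration described in Remark 2 can be illustrated with a generic sketch. The code below is not the paper's filter update; it solves a small non-negatively constrained quadratic program by checking every active set of the Karush–Kuhn–Tucker conditions, in the spirit of examining the four candidate solutions of (11) one by one (for a two-dimensional state there are $2^2 = 4$ active sets). The names `Q` and `c` are hypothetical stand-ins for the quadratic objective, not symbols from the paper.

```python
import numpy as np
from itertools import combinations

def qp_nonneg_active_set(Q, c):
    """Solve  min 0.5 z'Qz + c'z  subject to z >= 0  (Q positive definite)
    by enumerating every active set and checking the KKT conditions."""
    Q = np.asarray(Q, dtype=float)
    c = np.asarray(c, dtype=float)
    n = c.size
    for r in range(n + 1):                      # r = number of active constraints
        for active in combinations(range(n), r):
            free = [i for i in range(n) if i not in active]
            z = np.zeros(n)
            if free:
                # Stationarity on the free block: Q_ff z_f + c_f = 0.
                z[free] = np.linalg.solve(Q[np.ix_(free, free)], -c[free])
            lam = Q @ z + c                     # multipliers on the constraints
            # Primal feasibility (z >= 0) and dual feasibility (lambda >= 0).
            if np.all(z >= -1e-12) and np.all(lam[list(active)] >= -1e-12):
                return z.clip(min=0.0)
    return None
```

Because the unconstrained case ($r = 0$) is tested first, the routine returns the Kalman-type solution whenever it is already feasible, exactly as in Case (i) of the analysis above.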

Next, an illustrative application of the described methodology is presented regarding the estimation (revelation) of the two-sided jump components of asset returns.

#### **5. Application: Estimation of the Two-Sided Jump Components of the NASDAQ Index**

In this section, an application example of the proposed methodology analyzed in Section 4 is illustrated, concerning the estimation of the hidden two-sided jump components of the NASDAQ index for the three-year period 2006–2008. To estimate the parameters of the model, i.e., the parameter set $\varphi = (\mathbf{G}^{(1)}, \mathbf{G}^{(2)}, \sigma_x^2, \sigma_y^2, V)$, the maximum likelihood estimation method is used, taking into consideration that the distribution of $R_t$ conditioned on $\mathbf{z}_t$ is normal, i.e.,

$$R_t \,|\, \mathbf{z}_t \sim \mathcal{N}\!\left(\mathbf{H}\hat{\mathbf{z}}_t^-,\ \mathbf{H}\mathbf{P}_t^-\mathbf{H}^T + V\right).$$

Therefore, the log-likelihood function, LogL, is of the form:

$$\log L(R_1, \ldots, R_n) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^n \left( \log(\omega_t) + u_t^2\,\omega_t^{-1} \right) \tag{26}$$

where:

$$u_t = R_t - \mathbf{H}\hat{\mathbf{z}}_t^- \quad \text{and} \quad \omega_t = \mathbf{H}\mathbf{P}_t^-\mathbf{H}^T + V.$$
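For concreteness, the prediction-error decomposition (26) can be evaluated as follows. This is a minimal numerical sketch, not the authors' estimation code; the array names are placeholders, and the predictions `z_pred` and `P_pred` are assumed to come from the filtering recursions of Section 4.

```python
import numpy as np

def log_likelihood(R, z_pred, P_pred, H, V):
    """Evaluate LogL of (26) from the one-step predictions of the filter.
    R: (n,) observed returns; z_pred: (n, 2) predicted states z_t^-;
    P_pred: (n, 2, 2) prediction covariances P_t^-; H: (2,); V: scalar."""
    logL = 0.0
    for t in range(len(R)):
        u = R[t] - H @ z_pred[t]          # innovation u_t = R_t - H z_t^-
        w = H @ P_pred[t] @ H + V         # innovation variance omega_t
        logL += -0.5 * (np.log(2.0 * np.pi) + np.log(w) + u * u / w)
    return logL
```

Maximizing this function over the parameter set $\varphi$ (e.g., with a generic numerical optimizer) yields the estimates reported below.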

The estimates derived by maximizing *LogL*, given in (26), are as follows:

$$\mathbf{G}^{(1)} = \begin{bmatrix} 5.4741 & -2.8498 \\ -2.8498 & 7.3474 \end{bmatrix}, \mathbf{G}^{(2)} = \begin{bmatrix} 7.4368 & 1.4909 \\ 1.4909 & 2.8304 \end{bmatrix}$$

and:

$$
\sigma_x^2 = 0.9897 \times 10^{-3}, \quad \sigma_y^2 = 0.86281 \times 10^{-3}, \quad V = 4.961 \times 10^{-11},
$$

with *LogL* = 995.9854. Based on the estimated parameters, the estimated two-sided jump components of the NASDAQ index are shown in Figure 1.

**Figure 1.** (**a**) Estimated positive return jumps of the NASDAQ index during 2006–2008. (**b**) Estimated negative return jumps of the NASDAQ index during 2006–2008.
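As a quick sanity check on the reported estimates, one can verify numerically that both $\mathbf{G}^{(1)}$ and $\mathbf{G}^{(2)}$ are positive definite, so the quadratic forms of the state equation remain non-negative, as the model requires. A short sketch (the values are copied from the estimates above):

```python
import numpy as np

# Maximum likelihood estimates reported above.
G1 = np.array([[5.4741, -2.8498],
               [-2.8498, 7.3474]])
G2 = np.array([[7.4368, 1.4909],
               [1.4909, 2.8304]])

# A symmetric matrix is positive definite iff all its eigenvalues are
# positive, which guarantees that the quadratic forms are non-negative.
for name, G in (("G1", G1), ("G2", G2)):
    eigvals = np.linalg.eigvalsh(G)
    assert np.all(eigvals > 0), f"{name} is not positive definite"
    print(name, "eigenvalues:", np.round(eigvals, 4))
```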

#### **6. Conclusions**

In this work, the topic of state space modeling with non-negativity constraints was considered. For that purpose, a state space model was constructed in which the state equation describing the dynamic evolution of the components of the hidden state vector is expressed via non-negative definite quadratic forms and represents a non-negative valued Markovian stochastic process of order one. Due to the inequality conditions, a constrained optimization problem arises in deriving unbiased, minimum variance estimators for the states. To this end, a thorough analysis was presented via Propositions 1 and 2, concerning the stationary points of the optimization problem along with the special conditions that have to be satisfied in order to derive non-negative estimates of the state vectors at every time step. In particular, in Proposition 2, necessary conditions were derived for a stationary point to constitute a feasible solution. The proposed method constitutes an alternative for handling state space models with non-negativity constraints, and it has a low computational burden compared to resampling methods for the estimation procedure.

Regarding future work, the generalization of the proposed method to the case of an $n$-dimensional non-negative state vector, $n > 2$, could be examined. This is a challenging problem in many applications. For example, in navigation problems, for $n = 3$, state space models with non-negativity constraints are suitable for describing the distance covered during the motion of a vehicle, if we let the three non-negative components of the state vector represent the magnitudes of the velocities (speeds) along the axes of $\mathbb{R}^3$.

**Author Contributions:** Methodology, O.T. and G.T.; Writing—original draft, O.T. and G.T.; Writing—review & editing, O.T. and G.T. Both authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The dataset used to illustrate the applicability of the proposed method is publicly available at www.finance.yahoo.com (accessed on 11 June 2021).

**Conflicts of Interest:** The authors declare no conflict of interest.
