**About the Editor**

#### **Simon Tomažič**

Simon Tomažič received his B.Sc. and Ph.D. degrees in electrical engineering from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia, in 2012 and 2016, respectively. He is currently an Assistant Professor at the Laboratory of Control Systems and Cybernetics, University of Ljubljana. His main research interests include fuzzy model identification, mathematical and computational modelling, soft sensors, machine learning with big data, predictive control of dynamical systems and indoor positioning using computer vision and radiolocation.

### *Editorial* **Intelligent Soft Sensors**

**Simon Tomažič**

Faculty of Electrical Engineering, University of Ljubljana, 1000 Ljubljana, Slovenia; simon.tomazic@fe.uni-lj.si; Tel.: +386-1-4768-760

In this Special Issue, we embark on a journey into the exciting field of intelligent soft sensors, and take a deep dive into the groundbreaking advances and potential that these software algorithms have introduced in various fields. Soft sensors, often referred to as virtual sensors, are fast becoming a cornerstone of the ever-evolving digital age. To appreciate the capabilities of soft sensors, one must acknowledge their fundamental basis. These tools are based on sophisticated mathematical models, state-of-the-art computational techniques, and rigorous algorithmic strategies. Fields as diverse as physics, electrical engineering, biology, mathematics and even economics have influenced their development, making them a true example of interdisciplinary excellence.

Because these algorithms process a large number of measurements simultaneously, they enable the online estimation of complex process variables that was previously impossible. In areas such as control applications, fault detection and data fusion, the importance of soft sensors has increased. In essence, data from numerous sources are fused to extract insight from the apparent clutter.

The broader socio-economic and environmental context must also be considered. With rising costs, increasing concerns about sustainability and the overarching need for efficiency, the importance of soft sensors is clearer than ever. They provide the adaptability necessary for optimised resources, minimised waste and improved performance, meeting the pressing needs of the 21st century. In the rapidly digitising world, the ingenuity and breakthrough capabilities of today's researchers and engineers are undeniably impressive. It is not just about finding solutions to current problems, but also about identifying opportunities that many have not yet recognized. The forward-looking nature of this field becomes even clearer in the context of soft sensors. With this technological development, it is important to also recognise the broader impacts and applications.

Among the many interesting contributions in this Special Issue, one deals with the complexity of setting observer parameters for second-order systems with constant perturbations [1]. A novel algorithm that simplifies the inherent complexity in defining the convergence rate and the range for unknown state errors is the focus of the discussion. By matching the theory with practical scenarios, its usefulness is highlighted using a microalgae bioreactor model, emphasising the effectiveness of the algorithm in ensuring an optimal convergence rate and precision. The challenges posed by unobservable and nonlinear systems are addressed in [2], which presents a unique extended Kalman filter framework. An intelligent projection onto an observable subspace is used to solve previously insurmountable instability problems. The brilliance is particularly evident in sensor selection, which helps stabilise unobservable states and optimise sensor selection without the need for actual measurements. A deeper understanding is fostered by another valuable contribution [3] that re-evaluates conventional prediction and health management techniques. In an era characterised by rapid production cycles and frequent design changes, the urgency of a more dynamic, self-contained test optimisation methodology is highlighted. By merging soft sensing, ensemble learning and extreme learning machines (ELMs), a technique is presented that excels in efficiency and adaptability to the intricacies of modern systems.

However, the applications of soft sensors are not limited to manufacturing or engineering. Their impact on the medical and psychological fields is equally significant. The research [4] presented on ToLA, a computerised adaptive testing version of the Tower of London task, is exemplary. Item response theory was combined with an adaptive algorithm. The result is a diagnostic tool with increased sensitivity that provides a deeper understanding of neurodevelopmental disorders. Similarly, a model for measuring hypnotic depth during intravenous general anaesthesia is presented [5], pointing to a future where anaesthetic treatments are perfectly individualised.

**Citation:** Tomažič, S. Intelligent Soft Sensors. *Sensors* **2023**, *23*, 6895. https://doi.org/10.3390/s23156895

Received: 26 July 2023; Accepted: 2 August 2023; Published: 3 August 2023

**Copyright:** © 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Research into electrodermal activity (EDA) for stress detection [6] highlights the importance of monitoring mental health. The application of frequency spectrum analysis in this research promises a fast, accurate and efficient assessment of stress in real time, which is particularly important given the global increase in mental health problems.

In bioprocessing, where the goal is to improve accuracy and efficiency, soft sensors are becoming increasingly useful, especially when combined with Raman spectroscopy and machine learning. A study on the automated feeding of Chinese hamster ovary (CHO) cells [7] is presented, highlighting the importance of integrating Raman spectroscopy and chemometrics to develop models for monitoring and controlling CHO bioprocesses. It demonstrates the profound impact that careful preprocessing and a judicious selection of multivariate analysis methods have on the accuracy of monitoring important process variables.

To further explore the topic of bioprocesses, the fermentation process of Pichia pastoris is examined and the BDA-IPSO-LSSVM soft sensor modelling method is presented [8]. This model addresses the recurring problem of model errors due to discrepancies in data distribution. By leveraging the combined strengths of transfer learning, fuzzy set concepts and advanced optimisation techniques, exceptional predictive accuracy is achieved, even under changing conditions. While these advances are remarkable, the transformative impact of soft sensors extends even further in the modern technology landscape.

With the increasing integration of technologies such as artificial intelligence, machine learning and IoT (Internet of Things), the potential of soft sensors is growing exponentially. Possibilities that were once considered unattainable are now becoming tangible. It is not just about the sensors, but also about the ecosystems that enable them. Change is on the horizon as the industry is ready to adopt a proactive maintenance culture, predictive quality control and instant adaptive adjustments. In the field of soft sensor research, societal impact and directions are becoming increasingly important. Educational institutions, companies and policy makers play a crucial role in shaping this path. Their joint efforts to create an enabling environment foster innovation and drive progress in this dynamic field.

The following section looks at the versatility and scope of soft sensor applications in industry. The research on self-acting shape memory alloy (SMA) coils with variable stiffness [9] is worth highlighting. The proposed method not only improves the performance and cost efficiency of SMA actuators, but its applicability also extends across various fields, from robotic grippers to biomedical devices.

The next paper [10] deals with the merging of data from different sources in sintering processes and highlights the growing importance of integrating different types of data for better predictions. By cleverly merging visual data with traditional time series process data, the method achieves significant improvements in predicting complicated quality variables such as FeO content. In the area of safety and infrastructure, research on a highly sensitive fire detection and early-warning system [11] highlights the value of fusing multiple fire indicators for timely and effective fire detection in buildings. This hybrid feature-fusion-based strategy represents potential for much safer smart building systems.

The essence of soft sensors lies in the data they interpret. These are not just any data, but a mixture of signals, inputs and variables that all help to describe the underlying processes. When these nuances are unravelled, soft sensors reveal the hidden subtleties of the world. Whether it is the microenvironment of a bioreactor, the status of a complicated machine or the subtle changes in human physiology, these virtual tools enable a comprehensive understanding.

In addition, much attention is being paid to ethics and transparency in relation to these technologies. As these sensors play an increasingly central role in determining outcomes, such as autonomous driving decisions (whether to avoid an obstacle or brake abruptly), the need for transparency, accountability and ethical considerations in their algorithms becomes ever more important. It is our collective responsibility to ensure that the full potential of these technologies is realised while adhering to ethical principles and practices.

In summary, the articles in this Special Issue not only highlight the advancing frontiers of soft sensors in various fields, but also emphasise the importance of multidisciplinary collaboration in these developments. As technologies advance, so will the applications and opportunities for soft sensors. Challenges and opportunities await researchers and professionals in this field, as the future promises even more advanced and integrated solutions.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflicts of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

**Thijs Devos 1,2,\*, Matteo Kirchner 1,2, Jan Croes 1,2, Wim Desmet 1,2 and Frank Naets 1,2**


**Abstract:** To comply with the increasing complexity of new mechatronic systems and stricter safety regulations, advanced estimation algorithms are currently undergoing a transformation towards higher model complexity. However, more complex models often face issues regarding the observability and computational effort needed. Moreover, sensor selection is often still conducted pragmatically based on experience and convenience, whereas a more cost-effective approach would be to evaluate the sensor performance based on its effective estimation performance. In this work, a novel estimation and sensor selection approach is presented that is able to stabilise the estimator Riccati equation for unobservable and non-linear system models. This is possible when estimators only target some specific quantities of interest that do not necessarily depend on all system states. An Extended Kalman Filter-based estimation framework is proposed where the Riccati equation is projected onto an observable subspace based on a Singular Value Decomposition (SVD) of the Kalman observability matrix. Furthermore, a sensor selection methodology is proposed, which ranks the possible sensors according to their estimation performance, as evaluated by the error covariance of the quantities of interest. This allows evaluating the performance of a sensor set without the need for costly test campaigns. Finally, the proposed methods are evaluated on a numerical example, as well as an automotive experimental validation case.

**Keywords:** extended Kalman filter; state estimation; sensor selection; observability; non-linear models

#### **1. Introduction**

Nowadays, many new mechatronic systems become available on the market, designed to perform tasks with increasing complexity while ensuring machine/operator safety. As a result of this increase in complexity, advanced controller schemes are being developed that introduce the need for accurate information on the dynamics of these systems. However, in many applications, this information cannot be directly measured because either no sensor exists that is capable of measuring the quantity of interest or the available sensors are expensive or impractical to implement [1]. This has led to the development of various virtual sensing techniques that aim at obtaining the dynamic information of a mechatronic system indirectly through estimations based on simple measurements [2]. This methodology has been investigated for use in a wide range of applications [3–6].

As a result of increased computational power, these virtual sensing schemes have gained significant traction for the analysis of structural [7] and general mechanical systems [8,9] in recent years. Driven by the need for more accurate system information, model complexity has significantly increased up to flexible multibody-related formulations [10–12]. However, this comes at the cost of increased computational effort and additional issues towards observability and thus estimator stability.

**Citation:** Devos, T.; Kirchner, M.; Croes, J.; Desmet, W.; Naets, F. Sensor Selection and State Estimation for Unobservable and Non-Linear System Models. *Sensors* **2021**, *21*, 7492. https://doi.org/10.3390/ s21227492

Academic Editor: Simon Tomažič

Received: 14 October 2021 Accepted: 6 November 2021 Published: 11 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

When model complexity is increased, observability requirements can become challenging, as extra model degrees of freedom require additional sensor data to ensure full observability [13]. For example, in tire force estimation applications, vehicle position states are typically unobservable when no GPS measurement is present, although these measurements do not contribute significantly to the tire force estimation performance [1,14]. Furthermore, the non-linear governing equations create additional challenges to evaluate observability globally and/or locally [15,16]. Additionally, some virtual sensing applications feature models with a very large number of states (typically more than $10^6$ in meteorology or oceanography [6]), of which only a few significantly contribute to the estimator performance. This indicates that there is a need for an approach to deal with unobservable, and thus unstable, estimators given the interest in only a few specific quantities.

One of the approaches proposed in the literature to deal with large system models is to reduce the order of the estimator to decrease its computational effort and set fewer constraints towards observability. Two main approaches to reduce the order of an estimator have been presented in the literature. The first approach involves reducing the model if this model can be approximated sufficiently well by a subset of modes [17–19]. However, this approach is highly model specific and is therefore not suited for all applications. The second approach involves reducing the order of the Kalman filter by transforming the Riccati equation associated with the filter [1,6]. Yonezawa et al. [20] have presented an approach that renders unobservable system states asymptotically observable, assuming that a dynamic link is present between the unobservable and the observable states. This approach has proven to be very effective when considering unknown parameters or constant bias errors. However, it only works when a dynamic coupling is present between unobservable and observable states, which, in many applications, is rarely the case for all system states, especially when dealing with increasing model complexity.

For application cases with decoupled state dynamics, such as unknown accelerometer biases and unknown parameters, the Schmidt–Kalman filter can be deployed [21]. The covariances corresponding to the augmented states are not taken into account in the update steps as these states are typically not measurable and dynamically completely decoupled from the other states. This makes it impossible to observe these states; hence, they are simply omitted from the update step. When many of these states are present, this technique can drastically improve the computational effort as only the observable state covariances are being evaluated [22]. On the other hand, the Schmidt–Kalman filter approach can only be defined for completely decoupled augmented states.

Finally, sensor selection is a necessary step when deploying any estimation framework. A common approach is trial-and-error, as sensors typically have to be bought and deployed before information can be acquired on their performance. However, previous research has already investigated some more cost-effective approaches that vary widely, from equation-based modeling approaches [23] to machine learning methods [24]. Commonly proposed objective functions for model-based sensor selection algorithms are the Fisher Information Matrix (FIM) [25] or the error covariance matrix of the estimator [26], evaluated using the Algebraic Riccati Equation (ARE) or the Lyapunov Equation [27]. However, all of these methods still rely heavily on available sensor information describing the sensor performance, which usually requires highly expensive test campaigns for which sensors have to be acquired.

As an alternative to the above-described schemes, this work presents a novel approach to stabilise a non-observable, non-linear estimator for both full and partially decoupled system dynamics given that only specific quantities of interest are targeted, and proposes a new sensor selection strategy that is able to evaluate sensor set performance before acquiring the sensors. Firstly, an estimator setup is proposed based on a set of non-linear governing equations for the system model, the measurements and the quantities of interest, described in Section 2. Furthermore, a projection of the estimator covariance equations onto an observable subspace is proposed in Section 3, which makes it possible to stabilise the estimator covariances during operation. The projection explained in Section 3.2 is based on the singular value decomposition of the linear Kalman observability matrix. Furthermore, the projected Kalman filter equations presented in Section 3.3 can stabilise the unobservable state covariances although these states remain unobservable. Hence, their resulting covariances should be interpreted with care. Additionally, a sensor selection methodology is proposed, which ranks the sensors based on their estimation performance such that the user can obtain an overview of the best sensors to choose from, as defined in Section 4. This has the great advantage that a prior analysis of the sensor performance can be carried out, eliminating the need to acquire sensors beforehand and organise costly test campaigns. Finally, the work is validated in two different cases: an academic example, explained in Section 5.1, where the potential of the method is shown, and a full experimental validation case, explained in Section 5.2.

#### **2. Model Definition and Overall Estimator Setup**

In this work, a non-linear model will be considered to govern the dynamics of a physical system, the measurements and some predefined quantities of interest. These non-linear models are furthermore linearised and discretised in an Extended Kalman Filter [13]-based estimation framework. This section describes the implementation of the general estimator equations.

#### *2.1. Definition of Model, Measurement and Quantities of Interest Equations*

The non-linear equations considered in this work can be divided into system, measurements and quantities of interest equations and are introduced as:

$$\text{Model:} \qquad \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, t) \tag{1}$$

$$\text{Measurements:} \qquad \mathbf{y} = \mathbf{h}(\mathbf{x}, t) \tag{2}$$

$$\text{Quantities of Interest:} \qquad \mathbf{y}\_{vs} = \mathbf{g}(\mathbf{x}, t) \tag{3}$$

where **x** is the state vector and *t* is the time. The vector **x˙** describes the time derivative of the states, and the vectors **y** and **y***vs* are the measurements and quantities of interest, respectively.
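As a hypothetical illustration of this structure (the damped-oscillator dynamics, the parameter values and the function names below are assumptions for the sketch, not taken from the paper), the three equation classes can be written as plain functions:

```python
import numpy as np

# Hypothetical damped oscillator used only to illustrate Eqs. (1)-(3);
# the stiffness and damping values are arbitrary assumptions.
def f(x, t):
    """Model, Eq. (1): state derivative x_dot = f(x, t)."""
    k, c = 4.0, 0.3  # assumed stiffness and damping
    return np.array([x[1], -k * x[0] - c * x[1]])

def h(x, t):
    """Measurements, Eq. (2): only the position is measured."""
    return np.array([x[0]])

def g(x, t):
    """Quantities of interest, Eq. (3): here the (unmeasured) velocity."""
    return np.array([x[1]])
```

In this sketch, the quantity of interest is deliberately a state that the measurement equation does not see, mirroring the virtual sensing setting of the paper.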

In this work, an estimation framework is set up based on the discrete extended Kalman Filter for which an explicit relation is necessary. Therefore, these continuous non-linear equations have to be linearised and discretised. This is discussed in Section 2.2.

#### *2.2. Linearisation and Discretisation of the Governing Equations*

To obtain the linear and discrete Jacobian matrices used for the estimator evaluation, the non-linear equations are linearised and discretised. The Jacobians are obtained by performing a linearisation around the a priori estimation point (**x***k*|*k*, *tk*) by:

$$\mathbf{F}\_{c,k} = \left. \frac{\partial \mathbf{f}(\mathbf{x},t)}{\partial \mathbf{x}} \right|\_{\mathbf{x}\_{k|k}, t\_k} \tag{4}$$

Furthermore, the rest of the linearised Jacobians are calculated accordingly:

$$\mathbf{H}\_k = \left. \frac{\partial \mathbf{h}(\mathbf{x}, t)}{\partial \mathbf{x}} \right|\_{\mathbf{x}\_{k|k+1}, t\_k} \tag{5}$$

$$\mathbf{G}\_{\text{vs},k} \quad = \left. \frac{\partial \mathbf{g}(\mathbf{x},t)}{\partial \mathbf{x}} \right|\_{\mathbf{x}\_{k+1|k+1},t\_k} \tag{6}$$

Important to note is that any linearisation scheme can be deployed, which suits the application of the estimator. In this work, the linear continuous time Jacobians are obtained using forward differencing [28]:

$$\mathbf{F}\_{c,k} = \frac{\partial \mathbf{f}(\mathbf{x},t)}{\partial \mathbf{x}}\Big|\_{\mathbf{x}\_{k\mid k},t\_k} \approx \frac{\mathbf{f}(\mathbf{x}\_{k\mid k} + \boldsymbol{\epsilon}, t\_k) - \mathbf{f}(\mathbf{x}\_{k\mid k}, t\_k)}{\boldsymbol{\epsilon}}\tag{7}$$
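A minimal sketch of this forward-differencing step (the helper name and the default step size are illustrative assumptions; each state is perturbed in turn to build one Jacobian column):

```python
import numpy as np

def jacobian_fd(func, x, t, eps=1e-6):
    """Forward-difference Jacobian of func(x, t) w.r.t. x, as in Eq. (7):
    perturb each state by eps and form the difference quotient, which
    yields the corresponding column of the Jacobian."""
    fx = np.asarray(func(x, t))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        x_pert = x.astype(float).copy()
        x_pert[j] += eps
        J[:, j] = (np.asarray(func(x_pert, t)) - fx) / eps
    return J
```

For a linear function the forward difference recovers the exact system matrix up to floating-point error, which makes a convenient sanity check.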

The next step is to derive the discrete system Jacobian used for the integration of the system equations and evaluation of the Kalman filter covariance equations. Again, any type of discretisation scheme could be employed here, whichever best suits the application of the estimator. In this work, we chose to use a different discretisation method for the integration of the system equations and the derivation of the discretised system Jacobian for the Kalman covariance equations. This is done because it is not always practical to use the same discretisation scheme, as this can lead to numerically ill-conditioned systems, for example, when dealing with a set of stiff equations [29].

For the derivation of the discretised system Jacobian used in the Kalman covariance equations, the exponential discretisation scheme [30] is used:

$$\mathbf{F}\_k = e^{\mathbf{F}\_{c,k}\Delta t} \tag{8}$$

where **F***c*,*k* is the continuous-time system Jacobian obtained from Equation (7) and Δ*t* is the timestep. The obtained discrete Jacobian is then used in the EKF covariance equations.
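Equation (8) maps directly onto a matrix exponential routine; a sketch using SciPy (an assumption for illustration, since the paper itself works in Matlab):

```python
import numpy as np
from scipy.linalg import expm

def discretise(F_c, dt):
    """Exponential discretisation of the continuous-time system
    Jacobian, Eq. (8): F_k = exp(F_c * dt)."""
    return expm(F_c * dt)
```

For the nilpotent double-integrator Jacobian, the exponential can be checked by hand against its truncated series.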

For the integration of the system equations, a solver from the Matlab *ode-suite* that is especially suited for stiff equations is employed [31].

#### *2.3. Overview of the Extended Kalman Filter Framework*

In this work, the Extended Kalman Filter [13] is chosen for the estimation of the dynamic system information. This filter is an extension of the regular Kalman filter, making it applicable to non-linear equations. Using the previously calculated linearised Jacobians, the general estimator equations can be defined. In this process, the propagation of the states is conducted using the earlier defined non-linear equations, but the propagation of the covariance equations is conducted using the linearised matrices. The EKF algorithm consists of three main steps: prediction, correction and update.

*Prediction*:

$$\mathbf{x}\_{k|k+1} = \mathbf{f}(\mathbf{x}\_{k|k}, t\_k) \tag{9}$$

$$\mathbf{y}\_{k+1} = \mathbf{h}(\mathbf{x}\_{k|k+1}, t\_k) \tag{10}$$

$$\mathbf{P}\_{k|k+1} = \mathbf{F}\_k \mathbf{P}\_{k|k} \mathbf{F}\_k^T + \mathbf{Q}\_d \tag{11}$$

*Correction*:

$$\mathbf{S}\_{k} = \mathbf{H}\_{k}\mathbf{P}\_{k|k+1}\mathbf{H}\_{k}^{T} + \mathbf{R} \tag{12}$$

$$\mathbf{K}\_k = \mathbf{P}\_{k|k+1} \mathbf{H}\_k^T \mathbf{S}\_k^{-1} \tag{13}$$

*Update*:

$$\mathbf{x}\_{k+1|k+1} = \mathbf{x}\_{k|k+1} + \mathbf{K}\_k (\mathbf{y}\_{m,k+1} - \mathbf{y}\_{k+1}) \tag{14}$$

$$\mathbf{P}\_{k+1|k+1} = (\mathbb{I} - \mathbf{K}\_k \mathbf{H}\_k)\mathbf{P}\_{k|k+1} \tag{15}$$
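The three steps above can be sketched as a single function. This is a hedged illustration, not the paper's implementation: the forward-Euler state propagation stands in for the stiff ODE solver the authors use, and all variable names are assumptions.

```python
import numpy as np

def ekf_step(x, P, y_m, f, h, F_k, H_k, Q_d, R, t, dt):
    """One EKF iteration following Eqs. (9)-(15).
    x, P     : state estimate and covariance from the previous update
    y_m      : new measurement vector
    F_k, H_k : discrete system and measurement Jacobians
    Q_d, R   : process and measurement noise covariances"""
    # Prediction, Eqs. (9)-(11) (Euler stands in for the ODE solver)
    x_pred = x + dt * f(x, t)
    y_pred = h(x_pred, t)
    P_pred = F_k @ P @ F_k.T + Q_d
    # Correction, Eqs. (12)-(13)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    # Update, Eqs. (14)-(15)
    x_upd = x_pred + K @ (y_m - y_pred)
    P_upd = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_upd, P_upd
```

After one step with a position-only measurement, the covariance of the measured state should shrink while the unmeasured state's covariance is barely affected, which is exactly the behaviour the observability discussion below builds on.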

As the quantities of interest are governed by non-linear equations such that **y***vs* = **g**(**x**, *t*) in this work, a first-order Taylor expansion of the quantities of interest is considered around the previously calculated state configuration (**x***k*+1|*k*+1, *tk*) to derive their covariances:

$$\mathbf{y}\_{vs} \approx \mathbf{g}(\mathbf{x}\_{k+1|k+1}, t\_k) + \mathbf{G}\_{vs,k}(\mathbf{x} - \mathbf{x}\_{k+1|k+1}) + \mathbf{K}\_{vs,k}(t - t\_k) \tag{16}$$

where $\mathbf{G}\_{vs,k} = \left. \frac{\partial \mathbf{g}(\mathbf{x},t)}{\partial \mathbf{x}} \right|\_{\mathbf{x}\_{k+1|k+1}}$ and $\mathbf{K}\_{vs,k} = \left. \frac{\partial \mathbf{g}(\mathbf{x},t)}{\partial t} \right|\_{t\_k}$ are the Jacobians of the quantities of interest according to Equation (6). When evaluating Equation (16) around the current configuration point (**x***k*+1|*k*+1, *tk*) while assuming that the function **g** is approximately affine in this region, it can be stated that, up to a first-order approximation of the Taylor series, the quantities of interest will be a stochastic variable with the following mean and covariance [32–34]:

$$\mathbf{y}\_{vs,k+1} = \mathbf{g}(\mathbf{x}\_{k+1|k+1}, t\_k) \tag{17}$$

$$\mathbf{P}\_{\rm vs,k+1} = \mathbf{G}\_{\rm vs,k} \mathbf{P}\_{k+1|k+1} (\mathbf{G}\_{\rm vs,k})^T \tag{18}$$
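Equations (17) and (18) transcribe directly (names are illustrative; `G_vs` is the quantity-of-interest Jacobian of Equation (6)):

```python
import numpy as np

def qoi_mean_cov(x_upd, P_upd, g, G_vs, t):
    """Mean (Eq. 17) and covariance (Eq. 18) of the quantities of
    interest, propagated through the linearised output map G_vs."""
    y_vs = g(x_upd, t)
    P_vs = G_vs @ P_upd @ G_vs.T
    return y_vs, P_vs
```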

Since in most applications only a selected number of quantities of interest are targeted by the estimator, this work aims at focusing the estimator performance on these specific quantities of interest while keeping the estimator stable.

In this work, only a few predefined quantities of interest are estimated, which are evaluated using their error covariance. However, the covariances of the states themselves are important for the stability of the filter, as the estimator is only stable when all states are at least detectable [23]. This requirement becomes problematic when a certain sensor does not improve the estimation quality of the quantities of interest but is nevertheless required to make all estimator states observable; in such a case, one would like to omit this sensor from the estimator. Therefore, this work aims at solving the following major difficulties:


Both of these approaches will be discussed in the next two sections of this work.

#### **3. Stabilisation of the Extended Kalman Filter for Unobservable System States**

When employing estimators, it is well known that observability is an important topic [1,35] as it determines the stability of the filter. Because in this work only some predefined quantities of interest are considered, it might occur that some sensors do not have any effect on the estimation performance for these specific quantities of interest. They can however cause some states to be unobservable, which can lead to stability issues of the estimator algorithm. In this work, a projection is proposed to stabilise the unobservable state covariances given that only some predefined quantities of interest are targeted. Important to note here is that only the stability issues are being tackled. The targeted states remain unobservable, but the estimator error covariances will be stable. The derivation of the observable subspace consists of three main steps:


These steps are discussed in the following subsections.

#### *3.1. Observability Analysis of the Extended Kalman Filter*

The observability investigation starts with the computation of an observability criterion. In this work, the linear Kalman observability matrix [36] is used, which is based on the discrete system Jacobian **F***k* from Equation (8) and the measurement Jacobian **H***k* from Equation (5):

$$\mathcal{O} = \begin{bmatrix} \mathbf{H}\_k \\ \mathbf{H}\_k \mathbf{F}\_k \\ \mathbf{H}\_k \mathbf{F}\_k^2 \\ \vdots \\ \mathbf{H}\_k \mathbf{F}\_k^{n-1} \end{bmatrix} \tag{19}$$

If this matrix is of full rank, all states are observable. This is a sufficient condition for the estimator Riccati equation to converge to a stable solution [37]. However, this only holds for local observability, while for global observability analysis, the Lie derivatives have to be investigated [36,38]. Due to the non-linear nature of the governing equations, alternative methods need to be implemented to perform observability analysis, such as the ones in [15] or [16]. This work proposes to combine the observability matrices of different timesteps into a large observability matrix. This large observability matrix is a combination of matrices evaluated at evenly spaced timesteps of the training data:

$$\mathcal{O}\_{tot} = \begin{bmatrix} \mathcal{O}\_{k=1} \\ \mathcal{O}\_{k=1+p} \\ \mathcal{O}\_{k=1+2p} \\ \vdots \\ \mathcal{O}\_{k=m} \end{bmatrix} \tag{20}$$

where *k* is the timestep of evaluation, *m* is the total number of timesteps in the training data and *p* is an integer stride that determines how many observability matrices are included in the total observability matrix. The number *p* can be chosen between 1 and *m* depending on the desired global observability coverage: the smaller *p*, the better global observability is covered, but the larger the total observability matrix becomes. Alternatively, when no training data are available, the total observability matrix can be calculated for the first timesteps of the simulation, based on which the projection can be defined.
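Equations (19) and (20) can be sketched as follows (illustrative helper names; the `p` argument is the stride between evaluated timesteps described above):

```python
import numpy as np

def kalman_observability(F_k, H_k):
    """Linear Kalman observability matrix, Eq. (19):
    the stacked rows H, HF, HF^2, ..., HF^(n-1)."""
    n = F_k.shape[0]
    blocks, M = [H_k], H_k
    for _ in range(n - 1):
        M = M @ F_k
        blocks.append(M)
    return np.vstack(blocks)

def total_observability(F_seq, H_seq, p):
    """Total observability matrix, Eq. (20): observability matrices
    evaluated every p-th timestep of the training data, stacked."""
    return np.vstack([kalman_observability(F, H)
                      for F, H in zip(F_seq[::p], H_seq[::p])])
```

For a two-state system with a position measurement, the stacked matrix has full rank, confirming local observability at every evaluated timestep.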

Global observability requires the matrix from Equation (20) to be of full rank. However, to reduce costs, the aim should always be to install the minimum number of sensors needed to produce good and reliable estimation results. Therefore, sensors that do not contribute significantly to the estimation performance for the quantities of interest but are necessary to make all system states observable are not wanted. However, omitting them will cause the estimator to become unstable due to the unboundedness of the unobservable state covariances. This work therefore aims at tackling the stability issues related to unobservable system states. A projection of the Kalman filter Riccati equation onto an observable subspace is proposed, defined by the singular modes of the complete observability matrix from Equation (20). This projection is discussed in the next subsection.

#### *3.2. Calculation of an Observable Projection Basis*

It is well known that a singular value decomposition (SVD) can be used to determine the rank of a matrix [15,39]. Furthermore, an SVD can also indicate which states are unobservable by inspecting the modes corresponding to the near-zero singular values. This modeset can also be exploited as a basis for an observable projection.

Let us consider the singular value decomposition of the complete observability matrix. As a result, the full observability matrix (O*tot*) can be written as a product of three matrices:

$$\mathcal{O}\_{\text{tot}} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V} \tag{21}$$

where **Σ** is a square matrix containing the singular values and matrices **U** and **V** contain the corresponding modes that satisfy **VV***<sup>T</sup>* = I and **UU***<sup>T</sup>* = I. When the observability matrix is rank deficient, at least one of the calculated singular values will be close to zero and the matrix **Σ** will have the following structure:

$$
\Sigma = \begin{bmatrix}
\sigma\_1 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & \sigma\_m & 0 & \cdots & 0 \\
0 & \cdots & 0 & \approx 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & \approx 0
\end{bmatrix} = \begin{bmatrix}
\Sigma\_o & \mathbf{0} \\
\mathbf{0} & \Sigma\_u
\end{bmatrix} \approx \begin{bmatrix}
\Sigma\_o & \mathbf{0} \\
\mathbf{0} & \mathbf{0}
\end{bmatrix} \tag{22}
$$

where *m* is the rank of the total observability matrix O*tot*. To determine whether singular values are zero, a threshold value of $10^{-12}$ has been set by trial and error after investigating the singular values for the validation examples. Furthermore, the modes present in the matrix **V** can be partitioned according to the corresponding singular values in Equation (22):

$$\mathbf{V} = \begin{bmatrix} \mathbf{V}\_o \\ \mathbf{V}\_u \end{bmatrix} \tag{23}$$

where **V***u* are the modes that span the kernel of the observability matrix. At the same time, **V***o* is the matrix consisting of the modes corresponding to an observable subspace for this particular sensor set. Using the **VV***<sup>T</sup>* = I property of the SVD matrices, the following expression can be obtained:

$$\mathbf{V}\mathbf{V}^{T} = \begin{bmatrix} \mathbf{V}\_{o} \\ \mathbf{V}\_{u} \end{bmatrix} \begin{bmatrix} \mathbf{V}\_{o}^{T} & \mathbf{V}\_{u}^{T} \end{bmatrix} = \begin{bmatrix} \mathbf{V}\_{o}\mathbf{V}\_{o}^{T} & \mathbf{V}\_{o}\mathbf{V}\_{u}^{T} \\ \mathbf{V}\_{u}\mathbf{V}\_{o}^{T} & \mathbf{V}\_{u}\mathbf{V}\_{u}^{T} \end{bmatrix} = \begin{bmatrix} \mathbb{I} & \mathbf{0} \\ \mathbf{0} & \mathbb{I} \end{bmatrix} \tag{24}$$

where the $\mathbf{V}_o\mathbf{V}_o^T = \mathbb{I}$ and $\mathbf{V}_o\mathbf{V}_u^T = \mathbf{V}_u\mathbf{V}_o^T = \mathbf{0}$ properties can be read off. In this work, a projection of the estimator covariance equations is proposed using the matrix **V***o* containing the observable modeset:

$$\mathbf{x} = \mathbf{V}\_o^T \mathbf{q} \tag{25}$$

The aim of this transformation is to obtain new, observable estimator covariance equations such that the estimator runs stably during operation. Applying Equation (25) to Equations (4) and (5), the new linearised Jacobians become:

$$
\overline{\mathbf{F}}\_k = \mathbf{V}\_o \mathbf{F}\_k \mathbf{V}\_o^T \tag{26}
$$

$$
\overline{\mathbf{H}}\_k = \mathbf{H}\_k \mathbf{V}\_o^T \tag{27}
$$

Due to the transformation, the new estimator observability matrix contains an additional multiplication by the transformation matrix $\mathbf{V}_o^T$. Using the singular value decomposition from Equation (21), Equation (22) and the $\mathbf{V}_u\mathbf{V}_o^T = \mathbf{0}$ and $\mathbf{V}_o\mathbf{V}_o^T = \mathbb{I}$ properties, the following expression is obtained:

$$\mathcal{O}\_{\text{tot}} \mathbf{V}\_o^T = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V} \mathbf{V}\_o^T = \mathbf{U} \begin{bmatrix} \boldsymbol{\Sigma}\_o & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{V}\_o \\ \mathbf{V}\_u \end{bmatrix} \mathbf{V}\_o^T = \mathbf{U} \begin{bmatrix} \boldsymbol{\Sigma}\_o & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{V}\_o \mathbf{V}\_o^T \\ \mathbf{0} \end{bmatrix} = \mathbf{U} \begin{bmatrix} \boldsymbol{\Sigma}\_o \\ \mathbf{0} \end{bmatrix} \tag{28}$$

which proves that the newly obtained observability matrix is of full rank, as all nonzero singular values of the original total observability matrix are retained in **Σ***o*.

The above proposed projection only works if the targeted quantities of interest, expressed in the virtual sensor equations through **G***vs*,*k*, are independent of the unobservable states. Mathematically speaking, this means that the linearised Jacobian of the virtual sensors must have no component along any of the observability matrix kernel basis vectors **V***u*. To check this, the following criterion has to be evaluated:

$$\mathbf{G}\_{vs,m}\mathbf{V}\_u^T = \mathbf{0} \tag{29}$$

where *m* indicates that the linearised Jacobian of the quantities of interest equation is taken at the last timestep of the training data to make sure that initial transient behaviour is no longer present.

If the criterion of Equation (29) is not met, the estimator cannot be set up such that it will deliver valuable and trustworthy information on the quantities of interest. If this is the case, the sensor set connected to the Kalman filter should be changed or extended to a set that is able to capture all relevant information.
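The two building blocks of this subsection, the partition of Equation (23) and the criterion of Equation (29), can be sketched with NumPy as follows. This is a minimal illustration: the function names and the criterion tolerance are assumptions, while the $10^{-12}$ singular value threshold follows the text; rows of the returned matrices are modes, matching the stacked form of Equation (23):

```python
import numpy as np

def observable_partition(O_tot, tol=1e-12):
    """Split the right singular modes of O_tot into an observable basis V_o
    and a kernel basis V_u (Eqs. (21)-(23))."""
    _, s, Vt = np.linalg.svd(O_tot)
    rank = int(np.sum(s > tol))           # singular values below tol count as zero
    return Vt[:rank, :], Vt[rank:, :]     # V_o, V_u

def qoi_estimable(G_vs, V_u, tol=1e-9):
    """Criterion of Eq. (29): the QoI Jacobian must have no component along
    the kernel modes V_u; otherwise the QoI estimates are not trustworthy."""
    return V_u.size == 0 or bool(np.allclose(G_vs @ V_u.T, 0.0, atol=tol))
```

If `qoi_estimable` returns `False`, the sensor set has to be changed or extended, as stated above.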

#### *3.3. Transformation of the Estimator Covariance Equations*

Now that the projection matrix **V***o* has been defined and the new Jacobians are derived, the new projected Kalman filter equations can be deduced. Using the matrices defined in Equations (9)–(13) and expressing the estimator equations in terms of the linearised Jacobians deduced from Equations (4) and (5), the following equations can be derived:

*Prediction*:

$$\mathbf{x}\_{k|k+1} = \mathbf{f}(\mathbf{x}\_k, t\_k) \tag{30}$$

$$\mathbf{y}\_{k+1} = \mathbf{h}(\mathbf{x}\_k, t\_k) \tag{31}$$

$$\mathbf{P}\_{k|k+1} = (\mathbf{V}\_o \mathbf{F}\_k \mathbf{V}\_o^T) \mathbf{P}\_{k|k} (\mathbf{V}\_o \mathbf{F}\_k \mathbf{V}\_o^T)^T + \mathbf{V}\_o \mathbf{Q}\_d \mathbf{V}\_o^T \tag{32}$$

*Correction*:

$$\mathbf{S}\_{k} = (\mathbf{H}\_{k}\mathbf{V}\_{o}^{T})\mathbf{P}\_{k|k+1}(\mathbf{H}\_{k}\mathbf{V}\_{o}^{T})^{T} + \mathbf{R} \tag{33}$$

$$\mathbf{K}\_k = \mathbf{P}\_{k|k+1} (\mathbf{H}\_k \mathbf{V}\_o^T)^T \mathbf{S}\_k^{-1} \tag{34}$$

*Update*:

$$\mathbf{x}\_{k+1|k+1} = \mathbf{x}\_{k|k+1} + \mathbf{V}\_o^T \mathbf{K}\_k (\mathbf{y}\_{m,k+1} - \mathbf{y}\_{k+1}) \tag{35}$$

$$\mathbf{P}\_{k+1|k+1} = (\mathbb{I} - \mathbf{K}\_k(\mathbf{H}\_k \mathbf{V}\_o^T))\mathbf{P}\_{k|k+1} \tag{36}$$

Finally, the quantities of interest and their corresponding covariances expressed by Equations (17) and (18) can be projected according to the transformation matrix **V***o*:

$$\mathbf{y}\_{vs,k+1} = \mathbf{g}(\mathbf{x}\_{k+1|k+1}, t\_k) \tag{37}$$

$$\mathbf{P}\_{vs,k+1} = (\mathbf{G}\_{vs} \mathbf{V}\_o^T) \mathbf{P}\_{k+1|k+1} (\mathbf{G}\_{vs} \mathbf{V}\_o^T)^T \tag{38}$$

To recover the original state covariances, the newly acquired covariances need to be transformed back. This can be done by pre- and postmultiplying the covariance matrix $\mathbf{P}_{k+1|k+1}$ with the observable subspace transformation matrix **V***o*:

$$\mathbf{P}\_{k+1|k+1} = \mathbf{V}\_{o}^{T} \mathbf{P}\_{k+1|k+1} \mathbf{V}\_{o} \tag{39}$$

As a result of the transformation, the covariance matrix $\mathbf{P}_{k+1|k+1}$ of Equation (36) is no longer of its original size but of size *m*, the rank from Equation (22). This means that, due to the negligible contribution of the unobservable states to the transformation matrix **V***o*, these unobservable states are omitted from the covariance equations, ensuring that their covariances remain stable during estimator operation. The drawback is that, when transforming the state covariances back to their original values in Equation (39), the covariances corresponding to the unobservable states are not interpretable, as the transformation matrix **V***o* contains no contributions towards these states. However, if the defined quantities of interest are independent of the unobservable states, their covariances remain interpretable and deliver reliable results. At the same time, some measurements can be excluded from the estimator, which reduces the need to buy additional sensors solely to solve estimator stability issues related to observability.
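One cycle of the projected covariance equations, Equations (32)–(36), together with the back-transformation of Equation (39), could be sketched as follows. The function name and the toy test system are hypothetical, and the state update of Equation (35) is omitted since only the covariance propagation is illustrated:

```python
import numpy as np

def projected_covariance_step(P, F, H, Vo, Qd, R):
    """One covariance cycle of the projected EKF.
    P lives in the reduced observable coordinates (size = rank of O_tot)."""
    Fo = Vo @ F @ Vo.T                                # Eq. (26), projected state Jacobian
    Ho = H @ Vo.T                                     # Eq. (27), projected measurement Jacobian
    P_pred = Fo @ P @ Fo.T + Vo @ Qd @ Vo.T           # Eq. (32), prediction
    S = Ho @ P_pred @ Ho.T + R                        # Eq. (33), innovation covariance
    K = P_pred @ Ho.T @ np.linalg.inv(S)              # Eq. (34), Kalman gain
    P_new = (np.eye(P.shape[0]) - K @ Ho) @ P_pred    # Eq. (36), update
    return P_new, K

# Back-transformation to the original coordinates, Eq. (39):
#   P_full = Vo.T @ P_new @ Vo
```

Because the propagation runs entirely in the reduced coordinates, the unobservable directions never enter the Riccati recursion and the covariance stays bounded.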

#### **4. Sensor Selection Using the Stabilised EKF Estimation Framework**

The second aspect covered in this work is the selection of appropriate sensors. A sensor selection algorithm is proposed that ranks the sensors according to their contribution to the estimation performance for the quantities of interest by evaluating the covariance equations on the training data. The mean of the covariance profile of the quantities of interest is used to evaluate the performance of a given sensor set.

#### *4.1. Generate Training Data by Performing a Forward Simulation*

The training data are generated by performing a forward simulation and saving all relevant matrices. This allows the sensor performance to be evaluated beforehand, eliminating the need for an initial measurement campaign on a physical system and the acquisition of sensors.

During the forward simulation, the relevant information needed for further processing is stored. All sensors to be included in the sensor selection algorithm should be present in the training data, such that the generated linearised Jacobian of the measurement equation $\mathbf{H}_k^{n_s^0} \in \mathbb{R}^{n_s^0 \times n_{st}}$ has the correct dimensions corresponding to the total number of sensors $n_s^0$ in the sensor space.

#### *4.2. Sensor Selection Algorithm*

Very often in state estimation, it is not clear which sensors contribute the most to the estimation performance. Therefore, the proposed sensor selection algorithm aims at ranking the sensors by their estimation performance, which is evaluated using the quantities of interest covariance matrices **P***vs*,*<sup>k</sup>* for each simulation timestep. During the sensor selection procedure, no state update is performed; only the covariance propagation is evaluated, which speeds the algorithm up significantly. The linearised Jacobian matrices are reused from the training data generated according to the previous subsection. The sensor selection algorithm consists of the following steps (where $n_s^0$ is the number of sensors in the entire sensor space):

	- (a) remove a sensor *j* from the current sensor set, obtaining $\mathbf{H}_k^{n_s^0-i-j}$;
	- (b) calculate the transformation matrix **V***o* and the observability matrix kernel **V***u*;
	- (c) check that the quantities of interest do not depend on the kernel of the observability matrix, $\mathbf{G}_{vs,m}\mathbf{V}_u^T = \mathbf{0}$. If they do, the algorithm is stopped, as the remaining sensors are incapable of accurately estimating the quantities of interest;
	- (d) if the previous criterion is fulfilled, propagate the Kalman covariance equations defined by Equations (32)–(38) from Section 3.3 to obtain **P***vs*,*i*,*k*. The resulting mean of the quantity of interest covariance is the sensor set covariance *pi*.

The result of the algorithm is a set of sensors ranked by their respective performance contribution to the overall quantity of interest covariance. The algorithm is summarised in Algorithm 1.
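The greedy backward elimination of steps (a)–(d) can be sketched as follows. The covariance propagation itself is abstracted into a caller-supplied function, and all names are hypothetical:

```python
def rank_sensors(evaluate_mean_qoi_cov, sensor_names):
    """Algorithm 1 sketch: repeatedly drop the sensor whose removal raises
    the mean quantity-of-interest covariance the least.
    `evaluate_mean_qoi_cov(subset)` is assumed to propagate Eqs. (32)-(38)
    for the sensor subset and return the mean QoI covariance p_i."""
    remaining = list(range(len(sensor_names)))
    ranking = []                                 # least important sensors first
    while len(remaining) > 1:
        scores = {j: evaluate_mean_qoi_cov([r for r in remaining if r != j])
                  for j in remaining}
        drop = min(scores, key=scores.get)       # cheapest sensor to lose
        ranking.append((sensor_names[drop], scores[drop]))
        remaining.remove(drop)
    ranking.append((sensor_names[remaining[0]], None))   # last sensor is kept
    return ranking
```

Reading the returned list from right to left gives the sensors in decreasing order of importance, mirroring the left-to-right reduction shown later in Figure 7.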



#### **5. Validation and Discussion**

In this work, two validation cases are presented: a simple academic validation case based on a three-mass system and an experimental validation case based on a vehicle. These validation cases are discussed in the next sections.

#### *5.1. Simple Validation Case*

The simple validation case presented in this work is based on a three-mass system depicted in Figure 1.

**Figure 1.** A visual representation of the academic validation case used consisting of three masses connected with springs.

In this case, the quantity of interest is the position of the left mass *m*1, whereas the measurement used is the position of mass *m*2.

#### 5.1.1. Model Setup

For this simple example, the parameters *k*<sup>1</sup> and *k*<sup>2</sup> are chosen to be identical, such that *k*<sup>1</sup> = *k*<sup>2</sup> = *k*. The same holds for the masses: *m*<sup>1</sup> = *m*<sup>2</sup> = *m*<sup>3</sup> = *m*. This is done to simplify the system and make the results more intuitive. All system parameters are listed in Table 1.

**Table 1.** Parameters of the three-mass academic example system used for validation.


Depending on the variables *k*<sup>1</sup>, *k*<sup>2</sup> and the masses of the blocks, three eigenmodes can be computed for this system. For this example, the three eigenfrequencies are 0, $\sqrt{k/m}$ and $\sqrt{3k/m}$. The system is excited with a sinusoidal force at the exact frequency of the second eigenmode, $\sqrt{k/m} = 100$ Hz. The corresponding eigenmode is of the form:

$$\mathbf{V}\_2 = \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}^T \tag{40}$$

This is the mode where both outer masses move in opposite directions, and the middle mass does not move. This means that the middle mass *m*<sup>2</sup> is located in a node of this eigenmode. Therefore, a position measurement of this mass cannot provide any information on the states of both other masses, and hence, the estimator will be unobservable. Here, the proposed projection can be executed to stabilise the Kalman filter covariances.
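The statement that the middle mass sits in a node of the second eigenmode is easy to verify numerically. The following sketch (with unit mass and stiffness, an assumption for illustration only) computes the eigenmodes of the free-free three-mass chain:

```python
import numpy as np

m, k = 1.0, 1.0                          # illustrative unit values
K = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])   # free-free chain stiffness matrix

# With equal masses the eigenproblem reduces to K v = lambda v, where
# lambda = omega^2 * m; the eigenvalues are 0, k/m and 3k/m.
evals, evecs = np.linalg.eigh(K)
v2 = evecs[:, 1]                         # mode at omega = sqrt(k/m)
# v2 is proportional to [-1, 0, 1]: the middle mass does not move.
```

The middle component of `v2` is (numerically) zero, confirming that a position sensor on the middle mass observes nothing of this mode.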

#### 5.1.2. Estimator Setup

To estimate the quantities of interest, the filter has to be configured with a model noise matrix **Q***d* and a measurement noise matrix **R**. For the model noise matrix **Q***d*, a noise level of $10^{-2}$ m<sup>2</sup> has been set for the acceleration equations, whereas the noise level for the velocity equations is set to 0. This is because the velocity equations are a copy from the previous timestep, and therefore, no error is introduced. This means that the **Q***d*-matrix is defined as follows:

$$Q\_d = \begin{bmatrix} 10^{-2} & 0 & 0 & 0\_{1 \times 3} \\ 0 & 10^{-2} & 0 & 0\_{1 \times 3} \\ 0 & 0 & 10^{-2} & 0\_{1 \times 3} \\ 0\_{3 \times 1} & 0\_{3 \times 1} & 0\_{3 \times 1} & 0\_{3 \times 3} \end{bmatrix} \tag{41}$$

The measurement noise is uncorrelated, with a variance of $2 \times 10^{-1}$ (m/s<sup>2</sup>)<sup>2</sup> for acceleration measurements and/or $10^{-1}$ m<sup>2</sup> for position measurements.

#### 5.1.3. Discussion

This academic example demonstrates that the projection onto the observable subspace can stabilise the state covariances of the Kalman filter. To show this, a special case is selected in which the system is excited exactly at the second eigenmode, and a position measurement of mass *m*<sup>2</sup> is considered. The quantity of interest is the acceleration of mass *m*<sup>1</sup>. Since the system is excited at an eigenfrequency, a position measurement of mass *m*<sup>2</sup> will not deliver any useful data to track the acceleration of mass *m*<sup>1</sup>, as mass *m*<sup>2</sup> lies in a node of the second eigenmode of the system. This special case is chosen to show that the proposed methodology works for any system where the unobservable modes can be isolated using a singular value decomposition.

The results of the estimation are depicted in Figure 2 and are similar to the results obtained from other simple academic example cases [35]. One can observe that the amplitude of the oscillations rises continuously due to the excitation of the system at a resonance frequency and the lack of damping in the model. The estimator is able to track the states of the system well, although larger variations in the position of mass *m*<sup>2</sup> can be observed where the absolute state values reach 0 m. Because this mass barely moves, it becomes harder for the estimator to track.

For this system, a position measurement on mass *m*<sup>2</sup> will cause the states of both the outer masses *m*<sup>1</sup> and *m*<sup>3</sup> to be unobservable. This is visible in Figure 3 due to the unboundedness of the covariances of the states corresponding to masses *m*<sup>1</sup> and *m*<sup>3</sup> shown in grey as they are linked to the kernel of the observability matrix **V***u*.

**Figure 2.** State estimation results for the academic model. The graphs show the difference between the estimation results with and without the covariance transformation.

**Figure 3.** The resulting state covariances for the academic model using a position measurement on the middle mass *m*<sup>2</sup> and a virtual position measurement of mass *m*3.

Using the projection presented in Section 3, these unstable covariances can be stabilised. When the projection is applied, the resulting covariances are bounded, as indicated by the red dashed lines in Figure 3. Figure 3 also shows that, since the states of the middle mass *m*<sup>2</sup> were already fully observable, the projection does not influence these covariances. However, one has to be careful when interpreting the covariances of any quantity of interest that is linked to the unobservable modeset **V***u*, which can be investigated using Equation (29). Because the unobservable system states remain unobservable when applying the projection, the estimation results for these states should be interpreted carefully. The transformation thus introduces a trade-off between the stability of the Kalman filter and the interpretability of the covariances. When this occurs in practice, additional sensors have to be added to the sensor set in order to comply with the observability requirements. Nevertheless, the results show that the proposed methodology is able to stabilise the unobservable covariances, hence ensuring a stable estimator containing unobservable states.

#### *5.2. Experimental Validation Case*

The second validation case for the proposed estimator approach is an experimental case performed on a vehicle, using a 10-DOF vehicle model that has been developed and validated at Flanders Make in previous work [1,40,41] and was graciously provided for use in this work.

#### 5.2.1. Model Setup

The 10-DOF model used in this work is depicted in Figure 4 and is based on the Flanders Make Range Rover Evoque, visible in the same figure. The model has 16 states in total: 12 states for the body (2 for each degree of freedom) and 4 to describe the rotational degrees of freedom of the wheels. Geometric parameters, such as the suspension attachment points, the centre of gravity location and the wheel centre locations, have been measured on the real vehicle. Table 2 gives an overview of the main parameters of the Range Rover Evoque used by the 10-DOF vehicle model.


**Table 2.** Vehicle model parameters for the Range Rover Evoque [1,40].

Next to these general body definitions, the 10-DOF vehicle model features:


The measurements were generated during a test campaign at Ford Lommel [40]. Here, the vehicle was equipped with various sensors to measure the dynamic quantities of the system, among which a low-cost Global Navigation Satellite System receiver, an inertial measurement unit (IMU), a Corrsys Datron optical sensor and Kistler tire force transducers [1]. The IMU measured the accelerations of the centre of gravity in three directions as well as the rotational speeds and angles (roll, pitch and yaw), defined around the axes depicted in Figure 4. The Corrsys Datron optical sensor measured the longitudinal and lateral velocity of the car, from which the slip angle could be deduced. The GPS was able to record the location of the vehicle's centre of gravity during the measurements, and the tire force transducers recorded the tire forces.

**Figure 4.** An overview of the vehicle application case for the validation of this work. (**a**) The 10-DOF model used for experimental validation. (**b**) The Flanders Make Range Rover Evoque represented by the 10-DOF model [1].

For this validation case, several quantities of interest have been investigated: the tire forces, the vehicle side-slip angle and the vehicle longitudinal and lateral position states.

The tire forces were estimated because this is already commonly done, since tire force transducers are very expensive. The side-slip angle is also a subject of current research, as this variable is hard to measure due to its dynamic nature but necessary for advanced control algorithms such as the ESP. Finally, the vehicle longitudinal and lateral position states were included in the quantities of interest to show that the proposed projection and sensor selection methodology also work for independent states.

#### 5.2.2. Estimator Covariance Tuning

A necessary step to set up any estimator is to tune the covariance matrices. The model covariance matrix **Q***<sup>d</sup>* used for this validation case has been derived as:


The matrix defined above has been tuned by trial and error by means of evaluating the covariance values of the estimator.

The measurement noise matrix **R** is defined based on data sheets of the sensor manufacturer and a set of reference measurements [1]. The used noise levels are listed below:

$$R\_{d\_i} = 3.1 \times 10^{-3} \text{ (m/s}^2\text{)}^2, R\_{\text{gyr}} = 2.491 \times 10^{-1} \text{ (m/s)}^2$$

$$R\_{\text{GPS}} = 2 \text{ m}^2, R\_{v\_i} = 4.856 \times 10^{-2} \text{ (m/s)}^2, R\_{\omega\_{ij}} = 2.1 \times 10^{-1} \text{ (m/s)}^2$$

#### 5.2.3. Discussion

Figure 5 shows the results of the estimator compared to the measurements from the Range Rover Evoque. The quantities of interest, as listed in the section above, are compared to the measurement data from the experimental campaign. The sensors used in the Kalman filter were the IMU accelerations in three directions, a gyroscope for the yaw rate measurement, the GPS (vehicle *x* and *y*) and wheel speed sensors.

**Figure 5.** Validation of the quantities of interest (QoI) using real measurements from the Flanders Make Range Rover Evoque.

In general, the results match very well. One thing to note is that the lateral forces are somewhat overestimated in the extreme regions. Similar results were observed in [40] for the accelerations and the yaw rate when a linear tire model was used. This can be explained by the use of a linear tire model that does not take the saturation of the tire into account, which causes the forces generated by the tire model to be larger than the ones actually measured. The use of a more complex, non-linear tire model that takes tire saturation into account would mitigate this issue and can be addressed in future work.

This validation case shows the potential of the covariance stabilisation methodology presented in this work on an engineering case, as the absolute vehicle position states are typically dynamically independent of the rest of the vehicle states when the road is considered flat. Therefore, if the tire forces are targeted by the estimator, the GPS measurements are less relevant for the estimator performance, but omitting them will immediately cause the position states to be unobservable. This is depicted in Figure 6b, where the blue covariance curve rises continuously throughout the simulation. The green covariance curve on the same graph shows that the transformation can stabilise the covariance, as it is now bounded, although orders of magnitude lower than for the observable case (red line). However, Figure 6a shows that the position is not tracked well during the simulation for all unobservable cases, causing the vehicle position results to be poor. This indicates that one should be careful when interpreting the resulting covariance when using the projection for unobservable states. The proposed projection therefore introduces a trade-off between the stability of the estimator and the interpretability of the results. However, if only specific quantities of interest are targeted, this approach can significantly reduce the cost of deploying extra sensors to obtain full observability, especially for applications with many states [6].

**Figure 6.** Comparison of the vehicle lateral position state for an observable case, an unobservable case without projection and an unobservable case with projection. (**a**) The vehicle lateral position state compared to the measured GPS Signal. (**b**) The covariance corresponding to the lateral vehicle position state.

The second part of the validation is the selection of the appropriate sensors for the estimation of the different quantities of interest. This is done by ranking the sensors based on their relative contributions to the covariance of the quantity of interest, similar to the approaches used in [26,27]. Figure 7 shows the results of the approach presented in Section 4: the contribution of each sensor to the quantities of interest covariance (*p*0), relative to a simulation including the complete sensor set (indicated by the blue line), at each iteration of Algorithm 1, where the sensor set has been reduced by exactly one sensor from left to right. Furthermore, the colour of the bars indicates information on observability: a green bar indicates that observability is fulfilled during the entire simulation; a yellow bar indicates that the projection of Section 3 was carried out to stabilise the unobservable state covariances; and a red bar indicates that it is impossible to estimate the quantities of interest with this sensor set.

**Figure 7.** The sensor selection methodology results. For each quantity of interest, the sensor space has been reduced from left to right. The observability is indicated with the colour (green is fully observable, yellow is unobservable but stable and red is unobservable). The normalised metric covariance is indicated by the blue line.

In this validation case, only the last iteration is marked with a red colour, indicating that at least this sensor is needed to provide the necessary data to comply with the observability requirements. Additionally, the blue line indicates that, for most quantities of interest within this validation example, only a few sensors contribute significantly to the overall quantity of interest covariance. The remaining sensors have negligible added value relative to the quantities of interest performance.

Figure 7 shows that, for the tire forces, the acceleration measurements in the respective directions are the most important sensors, as they are directly related to the forces in the equations of motion. Adding these sensors can increase the estimator performance by a factor of 5, as indicated by the blue line in Figure 7. One can also observe that the GPS measurements are of lesser importance here and are situated on the left side of the graph. However, omitting these GPS measurements causes the vehicle position states to be unobservable (as indicated by the yellow bar colour), which can be solved by applying the transformation proposed in Section 3.

On the other hand, when evaluating the position states of the vehicle, the results depicted in Figure 7 show that not much performance can be gained by adding additional sensors, as these states have no dependency on other states because they are obtained directly from integration. This means that GPS measurements are absolutely necessary to track these states. The proposed approach is able to identify the situations where the estimator cannot be set up using the current sensor set by evaluating Equation (29).

In addition to the approaches presented in [25–27], the approach proposed in this work is able to give a sensor performance indication for all types of sensors, while also being able to handle the same type of sensor at different mounting locations as well as different sensor noise levels (cheap versus expensive sensors). Of course, the results of the sensor selection can be significantly different when multiple quantities of interest are considered at the same time. This has not been investigated in this work and can be addressed in future work.

#### **6. Conclusions**

In this work, an algorithm is proposed to stabilise estimators with unobservable system states, given that only specific quantities of interest are targeted by the estimator. Through a projection of the estimator onto an observable subspace, the employed Riccati equations can be stabilised, but care needs to be taken, as this does not make the system inherently observable. To ensure reliable operation, the targeted quantities of interest should not rely on the omitted, non-observable system subspace. This approach has been validated on a numerical academic mass-spring system as well as an industrial automotive estimation problem. Compared to known solutions, this method is shown to consistently handle cases with decoupled states, strongly coupled modes and complex non-linear behaviour.

Next to the estimation stabilisation approach, a sensor selection algorithm was proposed that ranks the sensor performance based on the expected uncertainty on the various quantities of interest. Starting from an exhaustive potential sensor set, the ranking procedure is performed by gradually removing, one at a time, the sensor that induces the lowest covariance rise when omitted from the estimator. On the automotive validation case, the method automatically detects that, for the tire forces, the acceleration measurements coming from the IMU are the most important sensors, whereas for the vehicle position states, the GPS measurements are the most effective, which is consistent with engineering experience. Therefore, compared to known solutions, the approach is able to provide insight into the sensor set performance with respect to the predefined quantities of interest without the need for acquiring sensors and setting up costly test campaigns, and it is able to detect when a particular sensor set cannot bring sufficient data into an estimator context.

**Author Contributions:** Conceptualisation, T.D., M.K. and J.C.; data curation, T.D., M.K. and J.C.; formal analysis, T.D.; funding acquisition, W.D. and F.N.; investigation, T.D.; methodology, T.D., M.K., J.C. and F.N.; project administration, M.K.; resources, J.C. and F.N.; software, T.D.; supervision, J.C., W.D. and F.N.; validation, T.D.; visualisation, T.D.; writing–original draft, T.D. and F.N. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was partially supported by Flanders Make, the strategic research centre for the manufacturing industry. The Flanders Innovation & Entrepreneurship Agency within the IMPROVED and MULTISENSOR project is also gratefully acknowledged for its support. Internal Funds KU Leuven are gratefully acknowledged for their support.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented within this study are resulting from activities within the acknowledged projects and are available therein.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**

