**3. Proposed Methodology**

The three-axis measurements of the accelerometer and gyroscope are acquired from a next-generation IMU in order to compute the angular velocity (*L<sub>ω</sub>*) and linear acceleration (*L<sub>a</sub>*) in the proposed position estimation system. The designed system consists of a position estimation module using sensor fusion and a learning to prediction module for an indoor navigation system. The first module covers the step-by-step working of position estimation using sensor fusion in indoor navigation, and the second module describes the learning to prediction module in detail using an ANN and the Kalman filter algorithm. Next, we briefly explain the step-by-step working of these modules.

### *3.1. Scenario of Position Estimation in Indoor Navigation*

The proposed position estimation system is divided into two modules, that is, position estimation using sensor fusion and the learning to prediction module. The position estimation is further divided into four sub-modules (i.e., sensor fusion based on the Kalman filter algorithm, IMU acceleration, the integrator, and position estimation), as shown in Figure 2.

**Figure 2.** Position Estimation for proposed Indoor Navigation System.

The 3-axis output produced by the IMU in the form of a magnetic field vector, angular velocity, and linear acceleration is passed to the sensor fusion module. In the sensor fusion module, the outputs from the IMU are fused together using the Kalman filter to obtain the drift-free and noise-free orientation of the object in an indoor environment. The Kalman filter in the sensor fusion module works as a prediction algorithm, which comprises two parts, that is, prediction and correction [53,54]. The prediction is calculated by integrating the gyroscope measurement and is then corrected using the accelerometer and magnetometer readings. The fusion that happens inside the Kalman filter is a probabilistic fusion that combines and corrects the inputs based on maximum likelihood. After calculating the drift-free and noise-free orientation from the IMU, the orientation matrix is passed to the IMU acceleration module to remove gravity and the centripetal force. The processed data are further passed to the integrator module, in which the velocity is calculated by integrating the linear acceleration. Once the velocity is obtained, a second integration yields the position of the object. The integrator performs the mathematical operation of integration with respect to time and calculates the velocity and position.
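The processing chain above (orientation from sensor fusion, gravity removal, then double integration) can be sketched in a few lines. The function names and the 100 Hz sample rate below are illustrative assumptions rather than the system's actual implementation:

```python
import numpy as np

DT = 0.01  # assumed 100 Hz IMU sample rate (illustrative)

def remove_gravity(acc_body, R_world_to_body, g=9.81):
    """Subtract gravity, expressed in the body frame, from a raw reading."""
    gravity_body = R_world_to_body @ np.array([0.0, 0.0, g])
    return acc_body - gravity_body

def integrate_position(linear_acc, dt=DT):
    """Double-integrate linear acceleration to velocity and position."""
    velocity = np.cumsum(linear_acc, axis=0) * dt
    position = np.cumsum(velocity, axis=0) * dt
    return velocity, position

# Stationary object with identity orientation: the raw accelerometer
# reads gravity only, so the recovered velocity and position stay zero.
acc_samples = np.tile(np.array([0.0, 0.0, 9.81]), (100, 1))
linear = np.array([remove_gravity(a, np.eye(3)) for a in acc_samples])
vel, pos = integrate_position(linear)
```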

In an indoor environment, the position and orientation of the object are determined by a non-linear model. In past research, the outputs of the accelerometer, magnetometer, and gyroscope have been used for orientation estimation, which helps in finding the object's orientation. In the case of the gyroscope, the complete orientation cannot be computed directly because of the bias in the gyroscope readings, as shown in Equation (1). Integrating the angular velocity gives the orientation (roll, pitch, yaw); however, in this case, we have a drifting error.

$$L_{\omega} = L_{\omega_{\rm true}} + b_g + v_g.\tag{1}$$

Here, *L<sub>ω</sub>* is the measured angular velocity vector in the local sensor frame, *v<sub>g</sub>* is high-frequency noise, *b<sub>g</sub>* is the sensor bias, a low-frequency noise that we consider constant within a small window of time, and *L<sub>ωtrue</sub>* is the true angular velocity of the sensor.
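The bias term in Equation (1) is what produces the drifting error: integrating a biased rate reading accumulates angle error linearly in time. A minimal simulation, with an assumed bias and noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                     # assumed sample period (illustrative)
bias = 0.02                   # rad/s, constant over this window, as in Eq. (1)
true_rate = np.zeros(1000)    # sensor at rest: true angular velocity is zero
noise = rng.normal(0.0, 0.01, 1000)

measured = true_rate + bias + noise    # Eq. (1): L_w = L_w_true + b_g + v_g
angle = np.cumsum(measured) * dt       # integrate to get orientation

# After 10 s the integrated angle has drifted by roughly bias * t = 0.2 rad,
# even though the sensor never moved.
```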

In the case of acceleration, assuming there is no motion of the body, the only acceleration measured is gravity, which is decomposed into its three components; some trigonometry then gives the roll and pitch with respect to the vertical axis. However, in this case, the accelerometer has very high-frequency noise, as shown in Equation (2).

$$L_a = L_{a_{\rm body}} + L_g + b_a + v_a,\tag{2}$$

where *L<sub>a</sub>* is the acceleration vector in the local frame, *L<sub>abody</sub>* represents the true acceleration due to the motion of the person or object, in other words the linear acceleration, *L<sub>g</sub>* is the gravitational acceleration, *v<sub>a</sub>* is high-frequency noise, and *b<sub>a</sub>* is the sensor bias, a low-frequency noise that we consider constant within a small window of time.
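The trigonometry mentioned above can be made concrete. This sketch assumes the common atan2-based tilt formulas for a static reading, not necessarily the paper's exact derivation:

```python
import numpy as np

def roll_pitch_from_accel(ax, ay, az):
    """Tilt angles (rad) from a static accelerometer reading (gravity only)."""
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    return roll, pitch

# Level sensor: gravity lies entirely on the z-axis, so roll = pitch = 0.
roll, pitch = roll_pitch_from_accel(0.0, 0.0, 9.81)
```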

The magnetometer is used to calculate the yaw. Therefore, we combine the accelerometer and magnetometer to calculate the orientation estimate. However, in this case, we still have noise in the estimate, as represented in Equation (3).

$$L_m = L_{m_{\rm true}} + L_{m_{\rm int}} + b_m + v_m,\tag{3}$$

where *b<sub>m</sub>* and *v<sub>m</sub>* represent the bias and noise, *L<sub>mtrue</sub>* represents the true magnetic field, which is the earth's magnetic field used for heading, and *L<sub>mint</sub>* represents magnetic disturbances. In order to solve the issue of bias and noise, the idea is to get the best out of the two kinds of estimation: an estimate that does not have a lot of noise and at the same time does not drift. Therefore, the best way to obtain the drift-free and noise-free orientation is to fuse all the sensors together, as shown in Figure 3. In the proposed system, we use the Kalman filter algorithm as a prediction algorithm in order to apply the sensor fusion. The first step is the prediction step, in which we take the data from the IMU sensor and calculate an estimate through the mathematical model; we then correct the estimate with measurements (i.e., the correction step). The Kalman filter comprises two steps, that is, prediction and correction. In the case of orientation estimation from the IMU, the prediction is calculated by integrating the gyroscope reading and is then corrected using the accelerometer and magnetometer readings. The fusion that happens inside the Kalman filter is a probabilistic fusion that combines and corrects the inputs based on maximum likelihood. In Figure 3, *σ<sub>ω</sub>* is the uncertainty in the gyroscope measurement, and the angular velocity is symbolized as *ω*. Similarly, *a* represents the acceleration and *σ<sub>a</sub>* the uncertainty in the acceleration measurement. Finally, the magnetic field is denoted as *m* and *σ<sub>m</sub>* is the uncertainty in the magnetometer measurement.

**Figure 3.** Orientation estimation using sensor fusion based on the Kalman filter algorithm.

#### 3.1.1. Sensor Fusion Using Kalman Filter

In this study, we used a nonlinear version of the Kalman filter algorithm, which linearizes the mean based on the state space and computes the covariance based on the current state estimate. The Kalman filter is based on a discrete dynamic equation and can be represented in two phases: the first is the prediction phase, and the second is the measurement update phase. In the sensor fusion module of the proposed model, the Kalman filter takes the linear acceleration, angular velocity, and magnetic field vector as inputs from the sensors and provides the corrected predicted parameters as output. The working flow of the Kalman filter is shown in Figure 4. First, the initial state and the initial estimation error covariance are determined and propagated using Equations (4) and (5). Besides these inputs, the measurement values and the process and measurement noise covariances are also considered as inputs. Equation (6) gives the Kalman gain. Both the state estimate and the estimation error covariance are then corrected using Equations (7) and (8).

The time update equation of the Kalman filter is:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_k \tag{4}$$

$$P_k^- = A P_{k-1} A^T + Q.\tag{5}$$

**Figure 4.** Configuration diagram of Kalman Filter in indoor navigation.

The state update equation of the Kalman filter is:

$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1} \tag{6}$$

$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H\hat{x}_k^-) \tag{7}$$

$$P_k = (I - K_k H) P_k^-,\tag{8}$$

where the predicted state vector is represented as *x̂<sub>k</sub><sup>−</sup>*, which includes the 3-axis coordinates along with the heading obtained by the proposed sensor fusion technique, *A* is the state matrix, and *B* is the state-space (control input) matrix. Furthermore, *P<sub>k</sub><sup>−</sup>* is the predicted error covariance matrix, propagated from the previous covariance using the state matrix *A* and the process noise covariance *Q*. The *P<sub>k</sub>* value is modified at every iteration using three parameters: the predicted error covariance matrix (*P<sub>k</sub><sup>−</sup>*), the Kalman gain (*K<sub>k</sub>*), and the measurement matrix (*H*).
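Equations (4)–(8) map directly onto a predict/update pair of functions. The sketch below applies them to a toy one-dimensional problem (estimating a constant from noisy readings); the matrices and noise values are illustrative assumptions, not the system's actual configuration:

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    """Time update, Eqs. (4)-(5)."""
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Measurement update, Eqs. (6)-(8)."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Eq. (6): Kalman gain
    x = x_pred + K @ (z - H @ x_pred)         # Eq. (7): corrected state
    P = (np.eye(len(x)) - K @ H) @ P_pred     # Eq. (8): corrected covariance
    return x, P

# 1-D example: estimate a constant value of 1.0 from noisy measurements.
A = np.eye(1); H = np.eye(1); B = np.zeros((1, 1)); u = np.zeros(1)
Q = np.eye(1) * 1e-5; R = np.eye(1) * 0.1
x, P = np.zeros(1), np.eye(1)
rng = np.random.default_rng(1)
for z in 1.0 + rng.normal(0.0, 0.3, 200):
    x, P = kf_predict(x, P, A, B, u, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
```

The estimate converges near 1.0 while the error covariance shrinks, which is exactly the prediction/correction loop the sensor fusion module relies on.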

#### 3.1.2. IMU Acceleration

The output from the IMU accelerometer is divided into three components, that is, IMU acceleration, gravity correction, and centripetal force correction, as shown in Figure 5.

**Figure 5.** Inertial measurement unit (IMU) accelerometer output.

In the proposed position estimation system, we use the linear acceleration in order to obtain the position in an indoor environment; therefore, it is essential to remove both the centripetal and gravitational forces from the sensor data. The IMU-fixed frame contains the gravity acceleration, which is formulated in Equation (9):

$$
\begin{bmatrix}
acc_{x_{\rm gravity}} \\
acc_{y_{\rm gravity}} \\
acc_{z_{\rm gravity}}
\end{bmatrix}(t) = \text{Orientation}^{-1}(t) \begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix},\tag{9}
$$

where the gravitational acceleration magnitude is symbolized as *g*, the acceleration vector with respect to the earth-fixed frame is denoted by [0, 0, *g*]<sup>T</sup>, and the gravity acceleration vector with respect to the IMU-fixed frame is symbolized as [*acc<sub>xgravity</sub>*, *acc<sub>ygravity</sub>*, *acc<sub>zgravity</sub>*]<sup>T</sup>. The rotation of the object can take place in two ways—(i) rotation of the object around a point in space and (ii) rotation of the object around itself. Therefore, in the proposed system the centripetal acceleration of the object is calculated using Equation (10):

$$
\begin{bmatrix}
acc_{x_{\rm centripetal}} \\
acc_{y_{\rm centripetal}} \\
acc_{z_{\rm centripetal}}
\end{bmatrix} = \begin{bmatrix}
0 & -\omega_z & \omega_y \\
\omega_z & 0 & -\omega_x \\
-\omega_y & \omega_x & 0
\end{bmatrix} \cdot \begin{bmatrix}
V_x \\
V_y \\
V_z
\end{bmatrix}.
\tag{10}
$$

The centripetal acceleration is defined as the cross product of the angular velocity and the linear velocity.
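Equations (9) and (10) together recover the linear acceleration from a raw reading. A minimal sketch, assuming a level sensor and illustrative velocity and rotation values:

```python
import numpy as np

def linear_acceleration(acc_imu, orientation, omega, velocity, g=9.81):
    """Remove gravity (Eq. (9)) and the centripetal term (Eq. (10))
    from a raw IMU accelerometer reading, all in the IMU-fixed frame."""
    gravity = np.linalg.inv(orientation) @ np.array([0.0, 0.0, g])  # Eq. (9)
    centripetal = np.cross(omega, velocity)                         # Eq. (10)
    return acc_imu - gravity - centripetal

# Level sensor (identity orientation) rotating about z while moving along x:
# the raw reading is exactly gravity plus the centripetal term, so the
# recovered linear acceleration is zero.
acc = np.array([0.0, 0.5, 9.81])   # raw reading (illustrative values)
lin = linear_acceleration(acc, np.eye(3),
                          omega=np.array([0.0, 0.0, 0.5]),
                          velocity=np.array([1.0, 0.0, 0.0]))
```

`np.cross(omega, velocity)` is equivalent to multiplying by the skew-symmetric matrix of Equation (10).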

#### 3.1.3. Integrator Module

The integrator performs the mathematical operation of integration with respect to time and calculates the velocity and position from the linear acceleration [55]. The integrator module is based on the following steps. In the first step, we integrate the acceleration measurement to get the velocity of the object in the indoor environment, as shown in Equation (12). In Equation (11), *a* denotes the acceleration measurement taken from the IMU sensor.

$$a = \text{constant} \tag{11}$$

$$v = \int a dt = v\_o + at.\tag{12}$$

In the second step, we integrate the computed velocity to get the position of the object in the indoor environment. Equations (13)–(15) demonstrate the second integration, where *y* denotes the position, *v<sub>0</sub>* the initial velocity, *y<sub>0</sub>* the initial position, *t* the time, and *a* the acceleration.

$$y = \int vdt\tag{13}$$

$$y = \int (v\_o + at)dt\tag{14}$$

$$y = y\_o + v\_o t + \frac{1}{2}at^2.\tag{15}$$
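The double integration of Equations (11)–(15) can be checked numerically: integrating a constant acceleration twice should reproduce the closed form *y = y<sub>0</sub> + v<sub>0</sub>t + at²/2*. The step count below is an arbitrary choice:

```python
import numpy as np

# Constant acceleration (Eq. (11)); integrate twice as in Eqs. (12)-(15).
a, t_end, n = 2.0, 1.0, 10000
t = np.linspace(0.0, t_end, n)
dt = t[1] - t[0]
v = np.zeros(n)                      # v_0 = 0
y = np.zeros(n)                      # y_0 = 0
for k in range(1, n):
    v[k] = v[k-1] + a * dt           # first integral: velocity (Eq. (12))
    y[k] = y[k-1] + v[k] * dt        # second integral: position (Eq. (13))

# Closed form from Eq. (15): y = y_0 + v_0*t + a*t^2/2 = 1.0 at t = 1 s,
# and v = v_0 + a*t = 2.0.
```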

#### *3.2. Learning to Prediction Model*

The proposed learning to prediction model consists of two modules, that is, the prediction algorithm and the learning module. Traditionally, historical data are used to train the prediction algorithm so that the relationships and hidden patterns among the input and output parameters can be learned. Afterwards, the output for any given input is predicted using the trained model. The performance of the prediction algorithm depends on whether the conditions of the training data match those of the application scenario and input data. Nevertheless, none of the existing prediction algorithms adapts its model well enough to dynamic input conditions. Therefore, to overcome the limitations of existing studies, we design a novel learning to prediction model for the proposed indoor navigation system, as illustrated in Figure 6.

In the proposed learning to prediction model, the prediction algorithm is tuned using the learning module in order to improve its accuracy and performance. Furthermore, the learning module continuously evaluates the performance of the prediction algorithm by receiving its output as feedback. The learning module also considers external parameters that may impact the performance of the prediction algorithm. After monitoring the current output and external factors, the learning module updates the tunable parameters of the prediction algorithm or upgrades the complete trained model in the prediction algorithm to improve the prediction accuracy.

The designed learning to prediction model comprises two parts, that is, the learning model and the prediction model. The learning model is based on an ANN, and the prediction model uses the Kalman filter algorithm as the prediction algorithm, as demonstrated in Figures 7 and 8. In Figure 7, the Kalman filter algorithm is used to predict the actual accelerometer readings from the noisy accelerometer sensor readings, which are heavily influenced by the gyro bias. Traditionally, the Kalman filter algorithm does not require historical data for prediction; it only requires the previous state to predict the actual state of the system. Similarly, in the case of the learning module, we have used a feed-forward back-propagation neural network (FFBPNN) to tune the prediction algorithm. The learning module takes three inputs, that is, the accelerometer reading, the gyroscope reading, and the Kalman filter predicted reading as feedback. The accelerometer sensor reading *A<sub>t</sub>* is passed to the Kalman filter as input at time *t*, and the output from the Kalman filter is the predicted accelerometer sensor reading *P<sub>a</sub>* without noise. The noise in the accelerometer sensor readings is due to the gyro bias, which is *G<sub>t</sub>*.

**Figure 6.** Conceptual view of the proposed learning to prediction model in indoor navigation system.

**Figure 7.** Block diagram of Accelerometer prediction using learning to prediction model.

Similarly, in Figure 8, the gyroscope reading *G<sub>t</sub>* is predicted using the Kalman filter algorithm, and the output is the predicted gyroscope sensor reading *P<sub>g</sub>* without noise. The noise in the gyroscope is due to the impact of the accelerometer values.

**Figure 8.** Block diagram of Gyroscope prediction using learning to prediction model.

In the intended learning to prediction model, the tunable parameter that controls the performance of the prediction algorithm is the Kalman gain *K*. To intelligently update the gain *K* after every iteration, the estimated error in the sensors' readings *R* and the covariance matrix *P* are used. The next subsection describes the Kalman filter in detail.
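The feedback loop described above can be sketched as follows. The paper's learning module is an FFBPNN; here a simple moving-average update of the measurement-noise parameter *R* stands in for it, purely to illustrate how feedback from the filter output can retune the filter, and all numeric values are illustrative assumptions:

```python
import numpy as np

def kalman_step_1d(x, P, z, Q, R):
    """One predict/correct cycle of a scalar Kalman filter."""
    P = P + Q                      # predict covariance
    K = P / (P + R)                # Kalman gain (the tunable parameter)
    x = x + K * (z - x)            # correct with measurement z
    P = (1.0 - K) * P
    return x, P, K

def learning_module(residual, R, lr=0.05):
    """Placeholder for the FFBPNN: nudge the measurement-noise
    parameter R toward the observed squared innovation."""
    return max(1e-4, R + lr * (residual**2 - R))

rng = np.random.default_rng(2)
x, P, R, Q = 0.0, 1.0, 1.0, 1e-4
for z in 1.0 + rng.normal(0.0, 0.5, 500):
    x_prev = x                             # prediction for a random-walk state
    x, P, K = kalman_step_1d(x, P, z, Q, R)
    R = learning_module(z - x_prev, R)     # feedback loop retunes the filter
```

Because *K* is computed from *P* and *R*, adjusting *R* from the observed residuals indirectly retunes the gain at every iteration, which is the role the FFBPNN plays in the proposed model.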
