2.2.2. Crossed Integration of Kalman Filter and ANN

Different from the methods above, the Kalman filter and ANN can be combined in a tight pattern, in which the ANN is applied within the internal filtering procedure. Shang et al. [52] predicted the model error in the filter with an FNN, and the error level was used to determine the measurement noise covariance, which was set to 0 or infinity. Li et al. [53] observed that the gain of the EKF is usually corrupted by erroneous measurements, which reduces the gain precision. They used a BPNN to train the gain with the measurement, estimation, and error as inputs, thereby increasing the precision. Deep neural networks [54] have also been studied recently. Pei et al. [55] combined a deep neural network with the Kalman filter for emotion recognition in images and audio. The features extracted by the deep neural network were input into a switching Kalman filter to obtain the final estimation results.

In the research reported in the literature, most works adopt the first, separately distributed mode, in which the Kalman filter and the ANN process the data independently. This mode does not adjust the inner parameters of the Kalman filter. Works on tightly crossed integration are relatively few. They can be improved by exploring the relations among the filter parameters with ANNs. Besides, the category and structure of the neural network can be studied to meet the demands of the filtering calculation procedure.

### **3. Neuron-Based Kalman Filter**

### *3.1. Framework of Neuron-Based Kalman Filter*

The Kalman filter provides a feasible framework to filter out noise and estimate the system state. Its components can be divided into two categories, namely the models and the intermediate variables. The models describe the system dynamics and the measurement process, including the system process equation and the measurement equation. The intermediate variables strongly influence the filtering results, but they are difficult to determine in practice. In this paper, the effect of the intermediate variables on the filtering is explored with neural networks. Neurons are integrated into the Kalman filter, where they help optimize the filtering process with the limited existing information.

The main influencing factors of the Kalman filter are the process equation, process noise, and measurement noise, expressed as the matrices *A*, *Q*, and *R*. Notably, the noise variances are the critical intermediate variables that affect the estimation results. Their effect is expressed in the filtering gain *K*, which determines the estimation result as an important weight. To optimize with neurons, the relation between the filtering results and these variables should be explored. The framework of the integrated Kalman filter is therefore designed first, shown in Figure 1, and the concrete design ideas are interpreted below.
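The standard Kalman recursion that this framework builds on can be illustrated with a one-cycle sketch (scalar case for brevity; the parameter values *a*, *q*, *r* are illustrative and not taken from the paper):

```python
def kf_step(x, p, z, a=1.0, q=1e-3, r=0.1):
    """One predict/update cycle of a scalar Kalman filter."""
    x_pred = a * x                      # state prediction from the process model
    p_pred = a * p * a + q              # covariance prediction with process noise Q
    k = p_pred / (p_pred + r)           # filtering gain K, shaped by Q and R
    x_new = x_pred + k * (z - x_pred)   # final estimation: prediction + weighted residual
    p_new = (1 - k) * p_pred            # updated estimation covariance
    return x_new, p_new, k

# Usage: estimate a constant value 1.0 from noisy readings
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p, k = kf_step(x, p, z)
```

The gain *k* computed in each cycle is exactly the intermediate variable that the neuro units reuse in the following sections.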

**Figure 1.** Framework structure of the neuron-based Kalman filter.

Considering the prediction process of the Kalman filter in Equation (4), the estimation result is affected by the precision of the process equation. The process equation describes how the system changes over time, but the commonly used simplified equation often fails to model the actual system. From a data-driven perspective, the core of the process equation is the time-series relation of the system change, and time series can be modeled well with neural computing. Therefore, a neuro unit is introduced to model the process in a black-box manner. The first neuro unit can be expressed as:

$$\hat{x}(k) = f_1(K(k), K(k-1), K(k-2), \dots, \hat{x}(k-1), \hat{x}(k-2), \dots) \tag{8}$$

The inputs of the first neuro unit are the filtering gain *K* series and the state estimation value *x*ˆ at the previous time points, represented by *k* − 1, *k* − 2, ··· . The output is the estimation *x*ˆ at time *k*. The key to the model is the fitting function *f*<sub>1</sub>, which aims to excavate the time-series features in the estimated variables and the intermediate filtering variables. As a vital variable, the filtering gain *K* is obtained from the process and measurement noise variances *Q* and *R*, so it can represent the noise features to some extent. Therefore, the filtering gain is set as an input to transmit the noise features to the system process model. The output estimation value is related to the states at the previous times and the filtering gain. The new estimation can be regarded as a more accurate predictive value of the system, and it replaces the initial prediction value in Equation (4) to continue the computing process of the Kalman filter.

Neural computing requires training with the existing data. In the training of the first neuro unit, the filtering process data and the final estimation results are collected as the training set. In detail, the series of the filtering gain from time step (*k* − *m*) to *k* and the estimation values from (*k* − *m*) to (*k* − 1) are set as the training input, where *m* is the prediction length set in the neural computing. The estimation value at *k* is set as the training output. In effect, the previous one-step prediction value is optimized toward the final estimation value by the network. The fitting function *f*<sub>1</sub> can be obtained with the training data and the learning algorithm, which is discussed in Section 3.2.
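The sliding-window assembly of this training set can be sketched as follows (pure-Python sketch; `gains` and `estimates` stand for the recorded filtering series, and `m` is the prediction length):

```python
def build_training_set(gains, estimates, m):
    """Build (input, target) pairs for the first neuro unit.

    Inputs: gains K(k-m)..K(k) and estimates x(k-m)..x(k-1).
    Target: the final estimation value x(k).
    """
    samples = []
    for k in range(m, len(estimates)):
        features = gains[k - m:k + 1] + estimates[k - m:k]
        samples.append((features, estimates[k]))
    return samples

# Usage with a toy recorded series (10 steps, window m = 3)
gains = [0.5, 0.45, 0.4, 0.38, 0.36, 0.35, 0.34, 0.33, 0.33, 0.32]
est = [0.0, 0.1, 0.15, 0.18, 0.2, 0.21, 0.22, 0.22, 0.23, 0.23]
pairs = build_training_set(gains, est, m=3)
# each feature vector holds m+1 gains and m past estimates
```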

Considering the final estimation process of the Kalman filter in Equation (3), the estimation result is determined by two parts: one is the prediction value, and the other is the measurement residual error. The second neuro unit is built to discover the mapping relation between the two parts and the final estimation result, expressed as:

$$\hat{x}'(k) = f_2(K(k), K(k-1), \dots, z(k), z(k-1), \dots, \hat{x}(k), \hat{x}(k-1), \dots) \tag{9}$$

The inputs of the second neuro unit are the measurement value *z*, the prediction value *x*ˆ, and the filtering gain *K* at the previous time points, represented by *k* − 1, *k* − 2, ··· . The output is the final estimation value *x*ˆ′ at time *k*. The final estimation synthesizes the measurement and the intermediate filtering variables, such as the noise variance matrices (reflected by the filtering gain) and the prediction variables. The function *f*<sub>2</sub> is trained to realize this synthesis. In the training, the values of *K*, *z*, and *x*ˆ from (*k* − *m*) to *k* are set as the input, and the final estimation value *x*ˆ at *k* is set as the output.

It can be considered that the estimation via the neuro units is an effective supplement and amendment of the estimation in the Kalman filter. The new final estimation value can then be obtained by synthesizing the two estimation values from the neural computing and the Kalman filter:

$$\hat{x}(k|k) = (1 - \alpha)\hat{x}(k|k) + \alpha\hat{x}'(k|k) \tag{10}$$

where *x*ˆ(*k*|*k*) is from the Kalman filter and *x*ˆ′(*k*|*k*) is from the neuro unit. (1 − α) and α are the weights of the two estimation values. α is determined by the validation error of the neuro unit:

$$\alpha = \frac{n}{\sum_{i=1}^{n} \left| (d_i^v - d_i) / d_i \right|} \tag{11}$$

where *d<sub>i</sub>* are the reference values in the validation set used during the neuro unit training, *d<sub>i</sub><sup>v</sup>* are the outputs of the neuro unit for the validation set, and *n* is the number of data in the validation set.
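Equations (10) and (11) can be implemented literally as follows (a minimal sketch; the clipping of α to [0, 1] is an added safeguard, not stated in the text):

```python
def validation_weight(d_ref, d_val):
    """Weight alpha from Eq. (11): n over the summed relative validation errors."""
    n = len(d_ref)
    total = sum(abs((dv - d) / d) for d, dv in zip(d_ref, d_val))
    alpha = n / total
    return min(max(alpha, 0.0), 1.0)   # clip to [0, 1] (added safeguard)

def fuse(x_kf, x_nn, alpha):
    """Fused estimate from Eq. (10)."""
    return (1 - alpha) * x_kf + alpha * x_nn

# Usage: a validation set with a relative error of 4 on each sample
alpha = validation_weight([1.0, 1.0, 1.0, 1.0], [5.0, 5.0, 5.0, 5.0])
x_fused = fuse(2.0, 4.0, alpha)
```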

### *3.2. Neuro Units Based on Nonlinear Autoregressive Model*

In the framework of the neuron-based Kalman filter, the critical components are the neuro units, which analyze the intermediate variables to support the final filtering result. According to the demand analysis of the two units, the two functions in Equations (8) and (9) should be able to fit the nonlinear relations among multiple variables. Moreover, they should excavate the time-series features in the data. For these two demands, the nonlinear autoregressive model with exogenous input (NARX) is an appropriate solution [56,57]. NARX derives from time-series autoregressive analysis, and it is effective in the reconstitution of nonlinear systems. The effectiveness of NARX has been demonstrated in various applications [58–60].

NARX belongs to the class of recurrent neural networks and achieves good learning efficiency with gradient descent. The nonlinear relation between the inputs and outputs in NARX can be expressed as follows:

$$y(t+1) = \phi(y(t), y(t-1), \dots, y(t-n), u(t+1), u(t), \dots, u(t-m))\tag{12}$$

where *y*(*t* + 1) is the output to be predicted, *y*(*t*) to *y*(*t* − *n*) are the historical outputs, and *u*(*t* + 1) to *u*(*t* − *m*) are the related inputs, which extend to the current moment. φ represents the nonlinear relation between inputs and outputs, and it also represents the structure of NARX, shown in Figure 2.

**Figure 2.** Structure of NARX.

NARX consists of the input layer, hidden layer, and output layer. The transfer function of NARX is similar to that of the backpropagation neural network, and a one-hidden-layer network is expressed as follows:

$$f(\cdot) = g\left(\sum \omega_h h(\cdot)\right) \tag{13}$$

$$h(\cdot) = r\left(\sum \omega_i u_i\right) \tag{14}$$

where *g* and *r* are the activation functions of the output layer and the hidden layer, ω<sub>*h*</sub> is the node weight in the hidden layer, *h*(·) is the activation function of the hidden layer, and ω<sub>*i*</sub> is the weight of each input. The vector forms of ω<sub>*h*</sub> and ω<sub>*i*</sub> are represented by *W* in Figure 2.
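Equations (12)–(14) can be condensed into a minimal NARX forward pass (pure-Python sketch with hand-set illustrative weights; a real unit is trained as described in this section):

```python
import math

def narx_predict(y_hist, u_hist, W_in, W_out, b):
    """One-step NARX prediction: y(t+1) = phi(y(t)..y(t-n), u(t+1)..u(t-m)).

    y_hist: recent outputs (newest first); u_hist: recent exogenous inputs.
    """
    x = y_hist + u_hist                       # tapped delay lines as one input vector
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)  # Eq. (14)
              for row, bi in zip(W_in, b)]
    return sum(w * h for w, h in zip(W_out, hidden))                # Eq. (13), linear g

# Usage: 2 delayed outputs + 2 exogenous inputs, 3 hidden nodes
W_in = [[0.2, -0.1, 0.3, 0.05], [0.1, 0.4, -0.2, 0.1], [-0.3, 0.2, 0.1, 0.2]]
W_out = [0.5, -0.4, 0.3]
b = [0.0, 0.1, -0.1]
y_next = narx_predict([0.2, 0.15], [1.0, 0.9], W_in, W_out, b)
```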

Based on the primary NARX structure, the two neuro units in the neuron-based Kalman filter can be designed concretely, which are shown in Figure 3.

**Figure 3.** Concrete structures of the two neuro units in the proposed Kalman filter.

For the neuro units in Figure 3, the concrete scale can be determined with the traditional empirical rules for shallow neural networks. The number of hidden layers is set to one according to the number of input and output variables. The number of hidden nodes can be determined with empirical equations such as *n* = log<sub>2</sub> *p* and *n* = √(*p* + *q*) + *a*, where *n* is the number of hidden nodes, *p* is the number of input nodes, *q* is the number of output nodes, and *a* is a constant between 1 and 10. Besides, the number of hidden nodes can be adjusted during network training according to the network performance.
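The two empirical rules can be evaluated directly (a small sketch; the constant *a* is assumed to be 2 here, and the results are rounded up):

```python
import math

def hidden_nodes(p, q, a=2):
    """Two empirical rules for the hidden node count.

    p: input nodes, q: output nodes, a: constant in [1, 10] (assumed 2).
    """
    rule_log = math.ceil(math.log2(p))            # n = log2(p)
    rule_sqrt = math.ceil(math.sqrt(p + q)) + a   # n = sqrt(p + q) + a
    return rule_log, rule_sqrt

# Usage: e.g. 7 inputs and 1 output
n1, n2 = hidden_nodes(7, 1)
```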

Based on the static construction of the neural network, an appropriate training method should be selected to obtain favorable dynamic performance. The gradient descent method is the core solution in neural network training, and several improved algorithms have been proposed. Levenberg–Marquardt (L–M) [61,62] is a rapid training method that combines the basic gradient descent method and the Gauss–Newton method. Its error target function is:

$$E(w) = \frac{1}{2} \sum_{i=1}^{p} \left\| Y_i - Y_i' \right\|^2 = \frac{1}{2} \sum_{i=1}^{p} e_i^2(w) \tag{15}$$

where *Y<sub>i</sub>* is the expected output, *Y<sub>i</sub>*′ is the actual output, *e<sub>i</sub>*(*w*) is the error, *p* is the number of samples, and *w* is the vector consisting of the network weights and threshold values.

The *k*-th iterative vector of weights and threshold values is *wk*, and the new vector is:

$$w^{k+1} = w^k + \Delta w \tag{16}$$

and the increment in L–M is calculated as:

$$\Delta w = \left[ J^T(w)J(w) + \mu I \right]^{-1} J^T(w)e(w) \tag{17}$$

where *I* is the unit matrix, μ is the learning rate, *J*(*w*) is the Jacobian matrix, and:

$$J(w) = \begin{bmatrix} \frac{\partial e_1(w)}{\partial w_1} & \frac{\partial e_1(w)}{\partial w_2} & \cdots & \frac{\partial e_1(w)}{\partial w_n} \\ \frac{\partial e_2(w)}{\partial w_1} & \frac{\partial e_2(w)}{\partial w_2} & \cdots & \frac{\partial e_2(w)}{\partial w_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial e_p(w)}{\partial w_1} & \frac{\partial e_p(w)}{\partial w_2} & \cdots & \frac{\partial e_p(w)}{\partial w_n} \end{bmatrix} \tag{18}$$
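One L–M iteration following Equations (15)–(18) can be sketched for a single weight (a scalar specialization; the Jacobian is approximated by finite differences, the error is defined as prediction minus target, and μ is held fixed rather than adapted):

```python
def lm_step(residual, w, mu=1e-2, eps=1e-6):
    """One Levenberg-Marquardt update for a single weight w:
    dw = (J^T J + mu)^-1 J^T e, with the Jacobian by finite differences."""
    e = residual(w)                                  # error vector e(w)
    J = [(e2 - e1) / eps for e1, e2 in zip(e, residual(w + eps))]
    jtj = sum(j * j for j in J)                      # J^T J (scalar here)
    jte = sum(j * e1 for j, e1 in zip(J, e))         # J^T e
    return w - jte / (jtj + mu)                      # step reduces E(w) = 0.5 * sum(e^2)

# Usage: fit y = w * x to data generated with w = 2
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
residual = lambda w: [w * x - y for x, y in zip(xs, ys)]
w = 0.5
for _ in range(20):
    w = lm_step(residual, w)
```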

With the training method, the neuro units can be built with the anticipative functions. Then the improved Kalman filter can be established with the functional neuro units.

### *3.3. Adaptive Filtering Algorithm*

In the framework of the neuron-based Kalman filter, two neuro units are introduced into the basic structure of the Kalman filter. The input, output, and inner structure of the neuro units are designed to improve the filtering. Finally, the adaptive filtering algorithm based on the improved Kalman filter is proposed here, in which the neuro units are first trained and then applied to the filter. The flow of the algorithm is presented in Figure 4.

As shown in Figure 4, the algorithm consists of two parts, namely the training process on the left and the filtering process on the right. The concrete flow of the algorithm is as follows:

	- (1) The system and measurement equations are established according to the object. The parameters in the Kalman filter can be initialized with empirical values.
	- (2) The primary calculation of the Kalman filter is conducted iteratively following Equations (1)–(7). The measurement vectors are imported into the filter along with time. The intermediate and final values are recorded, including the one-step prediction value, Kalman gain, measurement, estimation result, etc. The recorded values are all labeled with a time stamp. Meanwhile, the iterative steps should be no less than about 150 for the following neuro unit training. The number of sample steps may be adjusted according to the complexity of signals.
	- (3) The filtering values recorded in step (2) are marked with the step number to form the time-series sets. The prediction value and filter gain are imported into the first neuro unit. The prediction value, filter gain, and measurement are imported into the second neuro unit. The estimation result is set as the reference output of the two units.
	- (4) The two neuro units are trained with the collected time-series data until the preset convergence conditions are met.
	- (5) Based on the model equations and the initialized parameters in the Kalman filter, the initial variable and filter gain are imported into the first neuro unit, and the prediction value is output and set as the basis of the prediction error.
	- (6) The filter gain is updated and used to calculate the estimation value with the measurement. Meanwhile, the prediction value, filter gain, and measurement are imported into the second neuro unit to obtain another estimation value.
	- (7) The two estimation values are fused following Equation (10).
	- (8) The step moves forward to conduct steps (5)–(7) iteratively. In the iteration, the measurement vectors are calculated along with time.

**Figure 4.** Algorithm flow of the adaptive filtering with the neuron-based Kalman filter.
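The filtering steps (5)–(8) above can be sketched as a loop (a scalar KF for brevity; the stand-in "units" are simple averages in place of trained NARX networks, and all parameter values are illustrative):

```python
def run_filter(zs, unit1, unit2, alpha, a=1.0, q=1e-3, r=0.1, m=3):
    """Steps (5)-(8): iterate a scalar KF, letting the neuro units refine
    the prediction (unit1) and supply a second estimate (unit2)."""
    x, p = 0.0, 1.0
    gains, ests, out = [], [], []
    for k, z in enumerate(zs):
        x_pred, p_pred = a * x, a * p * a + q       # KF prediction
        if k >= m:                                  # step (5): refine the prediction
            x_pred = unit1(gains[-m:], ests[-m:])
        kgain = p_pred / (p_pred + r)               # step (6): update the gain
        x_kf = x_pred + kgain * (z - x_pred)
        p = (1 - kgain) * p_pred
        x = x_kf
        if k >= m:                                  # step (6): second estimate
            x_nn = unit2(gains[-m:], zs[k - m:k], ests[-m:])
            x = (1 - alpha) * x_kf + alpha * x_nn   # step (7): fuse via Eq. (10)
        gains.append(kgain)
        ests.append(x)
        out.append(x)                               # step (8): move forward in time
    return out

# Usage with stand-in "units" (simple averages instead of trained NARX nets)
unit1 = lambda gains, ests: sum(ests) / len(ests)
unit2 = lambda gains, zs, ests: sum(zs) / len(zs)
zs = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98]
est = run_filter(zs, unit1, unit2, alpha=0.3)
```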

### **4. Experiment and Result**

Simulation and practical experiments are conducted to test the proposed filter. In the simulation, different noises are generated to simulate the complex noises in sensors. In the practical experiment, a wheeled robot path is measured with a low-cost GPS. All the computing runs in MATLAB 2017a on a PC equipped with an Intel Core i5-6200U CPU@2.30 GHz and 8 GB RAM. The experimental settings and results are presented in this section.

### *4.1. Simulation and Result*

The common noises in sensors are white noise and colored noise. Signals with these two kinds of noise are generated in the simulation. The two sets of signals are:

$$x_1(k) = \sigma(k) f(k) \tag{19}$$

$$x_2(k) = G(z^{-1}) f(k) \tag{20}$$

where *f*(*k*) is Gaussian white noise, σ(*k*) is the standard deviation of *f*(*k*) with σ(*k*) = (*L* + *k*)/*L*, *L* is the number of signal samples, and *k* is the sample number. *G*(*z*<sup>−1</sup>) is the transfer function of a system, which can be of second or third order to simulate the noise change process.
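The two signal sets can be generated as follows (a sketch; the second-order transfer-function coefficients are illustrative, since the paper does not specify *G*(*z*<sup>−1</sup>)):

```python
import random

def simulate_signals(L=2000, seed=0):
    """Generate the two noise sets of Eqs. (19) and (20).

    x1: white noise with a growing deviation sigma(k) = (L + k) / L.
    x2: white noise shaped by a second-order IIR system (assumed coefficients,
        chosen to give a stable, resonant response).
    """
    rng = random.Random(seed)
    f = [rng.gauss(0.0, 1.0) for _ in range(L)]
    x1 = [((L + k) / L) * f[k] for k in range(L)]
    # second-order IIR: x2(k) = b*f(k) + a1*x2(k-1) + a2*x2(k-2)
    a1, a2, b = 1.6, -0.8, 0.1
    x2 = [0.0, 0.0]
    for k in range(2, L):
        x2.append(b * f[k] + a1 * x2[k - 1] + a2 * x2[k - 2])
    return x1, x2

x1, x2 = simulate_signals()
```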

The first set contains approximately linear noises and the second sinusoidal noises, corresponding to the white noise and the colored noise, respectively. In the simulation, the sampling interval is 0.02 s and each signal contains 2000 samples; that is, the sampling time is 40 s. The simulation signals are shown in Figure 5, in which the true values of *x*<sub>1</sub> and *x*<sub>2</sub> are 0.

**Figure 5.** Simulation signals with different noises.

In the filtering of the simulation signals, the system model is established with the classical Jerk model, which can also be replaced with other motion models such as constant velocity, constant acceleration, Singer, interacting multiple model algorithms, and so on. For the Kalman filter, the initial state estimate *x*<sub>0</sub> and covariance *P*<sub>0</sub> are set to *x*<sub>0</sub> = [0 0 0 0]<sup>T</sup> and *P*<sub>0</sub> = 1000·*eye*(4).

Because the neuro units need training, the first 70% of the data are set as the training data, and the rest is used to test the filtering result. In the setting of the two neuro units, the numbers of hidden nodes are set to 3 and 6, respectively. Other settings were also tested to obtain the optimal performance. The training results of NARX are shown in Figure 6.

**Figure 6.** Training results of NARX in integrated filter.

Based on the trained neuro units, the data are imported into the proposed filter to estimate the variable values. To verify the estimation performance, the traditional Kalman filter is set as a contrast, abbreviated as KF. Moreover, since the proposed filter can be regarded as a solution to adaptive filtering, one of the latest improvements of AKF in [38] is also set as a contrast, abbreviated as IAKF. The proposed filter in this paper is abbreviated as NKF. The filtering results are shown in Figure 7. For the quantitative evaluation of errors, the mean absolute error (MAE) and root-mean-square error (RMSE) are calculated and listed in Table 1.

**Figure 7.** Filtering results of simulation signals: (**a**) result of signal *x*<sub>1</sub>; (**b**) result of signal *x*<sub>2</sub>.


**Table 1.** Evaluation of filtering errors.

For the first set of signals, which contain white noise, the results of the three methods are relatively similar in the curve graph, with NKF performing slightly better than the others. The trend is evident in the error evaluation criteria. MAE represents the mean level of errors: the MAE of NKF is reduced by 45.72% compared with KF and by 21.41% compared with IAKF. RMSE shows the fluctuation degree of errors: the RMSE of NKF is reduced by 45.23% compared with KF and by 20.91% compared with IAKF.

For the second set, the filtering results show distinguishable trends. The results of KF fluctuate sharply and diverge in the latter period. IAKF and NKF trace the signals more closely, and NKF is visibly more effective. The MAE of NKF is reduced by 69.16% compared with KF and by 18.80% compared with IAKF. The decrease in RMSE reaches 67.77% and 21.13% for NKF relative to KF and IAKF, respectively. The error reduction for the second set is larger than for the first set.
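The two error criteria used above are computed as follows (a sketch; the usage line shows how a percentage reduction such as those quoted above is derived):

```python
import math

def mae(true_vals, est_vals):
    """Mean absolute error: the average magnitude of the estimation error."""
    return sum(abs(t - e) for t, e in zip(true_vals, est_vals)) / len(true_vals)

def rmse(true_vals, est_vals):
    """Root-mean-square error: penalizes large error fluctuations."""
    return math.sqrt(sum((t - e) ** 2
                         for t, e in zip(true_vals, est_vals)) / len(true_vals))

# Usage: relative improvement of one filter over another (true values are 0)
truth = [0.0] * 4
err_kf, err_nkf = [0.2, -0.2, 0.2, -0.2], [0.1, -0.1, 0.1, -0.1]
drop = 1 - mae(truth, err_nkf) / mae(truth, err_kf)   # fraction of MAE removed
```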

### *4.2. Practical Experiment and Result*

In addition to the simulation, a practical experiment is conducted to verify the proposed method. A trajectory of the wheeled robot (shown in Figure 8) is measured on a playground, and the presupposed trajectory is presented in Figure 9. The robot started from the top right corner and ended at the same point. A low-cost GPS receiver is used to obtain the location information, including the longitude and latitude. The relative coordinates are transformed from the longitude and latitude:

$$d = 111.12 \cdot \cos^{-1}\left[\sin \varphi_{t-1} \sin \varphi_t + \cos \varphi_{t-1} \cos \varphi_t \cos(\lambda_t - \lambda_{t-1})\right] \tag{21}$$

where *d* is the displacement (in kilometers, with 111.12 km per degree of arc), φ is the latitude, and λ is the longitude. The displacement can be decomposed into coordinates on a plane. The measurements and true trajectory in the relative coordinates are shown in Figure 10.
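Equation (21) can be implemented as follows (a sketch; the clamp is an added numerical safeguard, and the arccosine result is converted to degrees so that the 111.12 km-per-degree factor applies):

```python
import math

def displacement_km(lat1, lon1, lat2, lon2):
    """Great-circle displacement via the spherical law of cosines (Eq. (21))."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    # clamp guards against rounding pushing the cosine outside [-1, 1]
    c = max(-1.0, min(1.0, math.sin(p1) * math.sin(p2)
                      + math.cos(p1) * math.cos(p2) * math.cos(dl)))
    return 111.12 * math.degrees(math.acos(c))

# Usage: two points one degree of latitude apart, about 111 km
d = displacement_km(39.0, 116.0, 40.0, 116.0)
```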

**Figure 8.** Wheeled robot used to measure the trajectory, and the robot is developed and assembled by laboratory of system engineering in Beijing Institute of Technology, Beijing, China.

**Figure 9.** Presupposed trajectory in the practical experiment.

**Figure 10.** Relative coordinates transformed from the practical trajectory measurements.

In this part, the data of the whole trajectory is filtered firstly. Then a segment of the trajectory in another measurement is tested again.

### 4.2.1. Result of the Whole Trajectory

Similar to the simulation, the traditional Kalman filter and the improved AKF in [38] are set as the contrast methods. The filtering results are shown in Figure 11, including the distance in the x-axis, y-axis, and x-y plane. The absolute errors are shown in Figure 12, and the evaluation indexes are listed in Table 2.

**Figure 11.** Filtering results of the whole trajectory in the x-axis, y-axis, and x-y plane.


**Figure 12.** Absolute errors of displacement in x and y axes.

The results in Figure 11 intuitively show only small differences among the filtering results, because the differences are on the order of magnitudes below 1 m. The differences are more evident in the absolute errors in Figure 12. The general trends of the filtering results with the three methods are similar to the pattern of the second signal set in the simulation, because the noises in real sensors are mainly colored noises rather than white noises. The basic Kalman filter performs badly in the practical system; its results are unsteady and diverge over time. NKF performs better than the others in all time periods except the beginning. The neuro unit based on NARX reaches stable performance after a drastic fluctuation, which usually occurs at the beginning. Therefore, NKF is superior to the Kalman filter on the whole, excluding the initial period. The performance can be analyzed quantitatively with the error criteria in Table 2.


**Table 2.** Evaluation of filtering errors (whole trajectory).

The general trend is consistent in the *x*-axis and *y*-axis for the three indexes. There is a conspicuous improvement in MAE for NKF, and the RMSE of NKF is about 6% lower than that of the Kalman filter and 8% lower than that of IAKF.

### 4.2.2. Result of Segment in the Trajectory

To test the proposed filter with more data, a segment of the whole trajectory is selected from one of the multiple measurements, which was not collected in the same measurement run as the whole trajectory above.

The data from 180 s to 450 s are selected, including the displacements in the x-axis and y-axis. The contrast methods are the same as in the experiment above. The filtering results are shown in Figure 13, and the evaluation indexes of the errors are listed in Table 3.

**Figure 13.** Filtering results of the selected segment of the trajectory.


**Table 3.** Evaluation of filtering errors (segment of the trajectory).

It can be found from Figure 13 that the general trend is similar to the results of the whole trajectory in Figure 11. The results of NKF and IAKF appear almost identical, but the magnified subfigure shows that the proposed NKF filters the noises closer to the true value than the other methods. In the results of both the x-axis and y-axis, the basic KF performs badly with an obvious deviation.

The filtering performance can be evaluated more accurately with Table 3. For the results in the x-axis, the MAE of NKF is 32.73% of that of KF and 82.01% of that of IAKF. The RMSE of NKF is similar to that of IAKF but clearly lower than that of KF. The trend of the results in the y-axis is consistent with that in the x-axis. Compared with the error indicators in Table 2, the errors of the segment are larger than those of the whole trajectory, which may be due to the fewer data available to train the neural networks.

### **5. Discussion**

In this paper, a novel Kalman filter is designed by introducing neural computing. Simulations and experiments are carried out, and the results are presented and described briefly. Following the results, the methods are discussed in this section.

The filtering results of the simulation signals and practical measurements prove that the proposed filter can eliminate the noises to the anticipated degree. It performs distinctly better than the traditional Kalman filter, especially for complex noises. Besides, the proposed filter matches the latest improvements of AKF. The core idea of the proposed filter is to obtain more knowledge from the existing limited data during the filtering procedure. The process variables in the filtering are reutilized with the neuro units, whereas the reutilization in AKF [36–38] is conducted with statistical methods. Therefore, the proposed filter can be regarded as a new exploration of parameter adjusting, which is similar to the essential idea of AKF.

In the proposed filter, the neuro unit is built based on the nonlinear autoregressive model. The neuro unit specializes in nonlinear time-series feature extraction with a small-scale structure. Although many more complex networks have been proposed, the selection of networks should remain conservative. A complex network may destroy the efficient and straightforward features of the Kalman filter. Besides, a complex network may not be suitable for terminal applications without a high-performance processor.

Beyond the intuitive estimation results, the computational complexity can be analyzed for the proposed and contrast methods. According to the basic evaluation method of computational complexity, the complexity of KF, IAKF, and NKF is *O*(*n*<sup>2</sup>), *O*(*n*<sup>2</sup>), and *O*(*n*<sup>3</sup>), respectively, where *n* is the number of state variables. The complexity of NKF increases because dual matrix multiplication is introduced by the neuro units. The operation time recorded in the experiment is shown in Table 4.


**Table 4.** Operation time of different methods in simulation and experiment (time unit: s).

In terms of computing time, the methods differ only slightly, although the complexity of NKF is higher. However, an important fact that cannot be ignored is the training of the neuro units. The time above covers only the test procedure, while NKF needs prior training. The training time is between 3 s and 7 s, depending on the preset convergence conditions. In this paper, the training requires historical data in the offline mode. The filtering is conducted after the training, which reduces the real-time performance. Realizing online learning alongside the filtering process remains a challenge that can be studied in the future.

The neurons in the proposed method work well according to the experimental results. Although a good filtering performance has been obtained, the inherent mechanism of the proposed method is not yet completely clear. Hence, theoretical analysis should be conducted, and the effect of the neural network on the filter should be deduced from the viewpoint of numerical analysis in the future.

In the proposed method, ANN inspired us to optimize the intermediate factors and the calculating process of the Kalman filter in a black-box manner. The method mainly solves the modeling and parameter-adjusting problems of the traditional filter. It can be a useful tool in target tracking, trajectory estimation, and pedestrian navigation, especially in situations where modeling experience for complex systems and parameter settings is lacking.

## **6. Conclusions**

For the intelligent terminals and objects in the internet of things, sensing the environment and self-status accurately has become a vital task. An improved Kalman filter is proposed with neural computing for accurate sensing. The Kalman filter provides a favorable framework in which the system model can be replaced according to the concrete application. The neuro unit based on NARX is a powerful tool for examining nonlinear and time-series relations. The proposed filter focuses on the data change features and tries to lower the impact of model analysis. In future work, the stability and rapidity of neural computing should be studied more deeply. The neuron-based Kalman filter can develop more fully with smarter and faster online neural computing. Moreover, theoretical derivation should be carried out to support the neuron-based filter. The proposed method can be combined with other identification approaches [63–66] to study the modeling and filtering problems of other dynamic time series and stochastic systems with colored noises [67–70], and it can be applied to other fields [71–74], such as the signal modeling and control systems [75–79] studied in other literature [80].

**Author Contributions:** Conceptualization, Y.-t.B. and X.-b.J.; methodology, Y.-t.B.; writing—original draft preparation, Y.-t.B.; project administration, X.-y.W.; funding acquisition, Z.-y.Z.; supervision, B.-h.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Key Research and Development Program of China No. 2017YFC1600605, National Natural Science Foundation of China No. 61903008, 61673002, Young Teacher Research Foundation Project of BTBU No. QNJJ2020-26.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
