*3.2. Real-Time Analysis*

Figures 10–15 compare the predicted and real values of the different algorithms. The dotted black line shows the real values obtained during the real-time analysis, and the colored lines represent the values predicted by each algorithm. The figures cover 353 obesity samples with 16 columns each; each plotted sample value is the sum of its 16 columns.
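As a minimal sketch of how each plotted point is formed, the snippet below builds a hypothetical 353 × 16 feature matrix (a random stand-in for the obesity data, not the actual dataset) and reduces each sample to the column sum used in the figures:

```python
import numpy as np

# Illustrative stand-in for the 353-sample, 16-column obesity data;
# the real feature values are not reproduced here.
rng = np.random.default_rng(0)
X = rng.random((353, 16))

# Each plotted sample value is the sum of its 16 columns,
# giving one value per sample, as in Figures 10-15.
sample_values = X.sum(axis=1)

print(sample_values.shape)  # one value per sample: (353,)
```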

Figure 10 shows the predicted values of the decision tree; the real values match the predicted values most of the time, giving good results, as already described in Table 1, with an accuracy of 95%. A decision tree can also be validated with statistical tests on the data, which makes it more reliable.

**Table 1.** Performance analysis of decision tree (DT), KNN, and logistic regression (LR).

| Classes | DT Precision | DT Recall | DT F1-Score | KNN Precision | KNN Recall | KNN F1-Score | LR Precision | LR Recall | LR F1-Score |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0.98 | 0.99 | 0.79 | 0.89 | 0.84 | 0.9 | 1 | 0.95 |
| 1 | 0.92 | 0.96 | 0.94 | 0.58 | 0.47 | 0.52 | 0.9 | 0.77 | 0.83 |
| 2 | 0.97 | 0.95 | 0.96 | 0.75 | 0.75 | 0.75 | 0.92 | 0.92 | 0.92 |
| 3 | 0.98 | 0.98 | 0.98 | 0.79 | 0.98 | 0.87 | 0.96 | 0.98 | 0.97 |
| 4 | 0.98 | 1 | 0.99 | 0.96 | 0.98 | 0.97 | 1 | 0.98 | 0.99 |
| 5 | 0.92 | 0.9 | 0.91 | 0.67 | 0.61 | 0.64 | 0.78 | 0.96 | 0.82 |
| 6 | 0.92 | 0.92 | 0.92 | 0.67 | 0.59 | 0.63 | 0.87 | 0.82 | 0.84 |
| Accuracy | 0.95 | | | 0.76 | | | 0.9 | | |

**Figure 10.** Real-time analysis of each testing sample against the predicted values of the decision tree.
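The kind of per-class precision/recall/F1 breakdown reported in Table 1 can be produced with scikit-learn's `classification_report`. The sketch below trains a decision tree on a synthetic 7-class, 16-feature dataset (an assumption standing in for the obesity data; this is not the authors' exact pipeline):

```python
# Hedged sketch: decision tree on synthetic data shaped like the study's
# dataset (16 features, 7 obesity classes). Accuracies will differ from
# the paper's because the data here is randomly generated.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, accuracy_score

X, y = make_classification(n_samples=1500, n_features=16, n_informative=10,
                           n_classes=7, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42)

clf = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

# Per-class precision/recall/F1, as in Table 1, plus overall accuracy.
print(classification_report(y_te, y_pred))
print("accuracy:", accuracy_score(y_te, y_pred))
```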

Figure 11 shows the predicted values of naïve Bayes; the real values did not match the predicted values most of the time, giving very poor results, as shown in Table 2, with an accuracy of 59%. This algorithm assumes that all predictors are independent of one another, an assumption that rarely holds in real life.

**Table 2.** Performance analysis of random forest, naïve Bayes, and support vector machine.


**Figure 11.** Real-time analysis of each testing sample against the predicted values of naïve Bayes.
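A Gaussian naïve Bayes classifier makes the independence assumption explicit: it models each of the 16 features with its own per-class distribution and multiplies them, ignoring any correlation between features. A minimal sketch on synthetic data (not the study's dataset):

```python
# Hedged sketch: GaussianNB fits one mean/variance per feature per class,
# i.e., it treats the 16 features as conditionally independent given the
# class -- the assumption criticized in the text above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1500, n_features=16, n_informative=10,
                           n_classes=7, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42)

nb = GaussianNB().fit(X_tr, y_tr)
nb_acc = nb.score(X_te, y_te)
print("naive Bayes accuracy:", nb_acc)
```

Because the synthetic features here are correlated by construction, the independence assumption is violated in the same way it typically is on real data.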

Figure 12 presents the predicted values of the SVM, which show that only a few predictions failed to match the real values; thus, it provided very good results, as shown in Table 2, with an accuracy of 96%. This algorithm even works with unstructured and semi-structured data, once that data has been encoded as feature vectors.

**Figure 12.** Real-time analysis of each testing sample against the predicted values of SVM.
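A support vector machine with an RBF kernel can be sketched the same way; the kernel choice here is an assumption, since the paper does not specify one:

```python
# Hedged sketch: RBF-kernel SVM on synthetic 16-feature, 7-class data.
# Any non-numeric (unstructured/semi-structured) input would first need
# to be converted into numeric feature vectors before SVC can use it.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=16, n_informative=10,
                           n_classes=7, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
svm_acc = svm.score(X_te, y_te)
print("SVM accuracy:", svm_acc)
```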

Figure 13 presents the predicted values of KNN; the predicted values matched the real values for some samples and missed for others, so it provided only average results, as described in Table 2, with an accuracy of 76%. This algorithm does not perform well on small datasets.

**Figure 13.** Real-time analysis of each testing sample against the predicted values of KNN.
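The KNN rule itself is simple enough to sketch directly: classify a point by majority vote among its k nearest training points. The tiny hand-made dataset below (purely illustrative) also hints at why small datasets are a weakness, since each vote rests on very few neighbors:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Tiny illustrative dataset: two points per class. With so few points,
# a single noisy sample can flip the vote -- KNN's small-data weakness.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.05, 0.1]), k=3))  # -> 0
```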

Figure 14 presents the predicted values of logistic regression, which show that only a few predictions failed to match the real values; thus, it provided good results, as shown in Table 1, with an accuracy of 90%. This algorithm is very fast at classifying unknown records.

**Figure 14.** Real-time analysis of each testing sample against the predicted values of logistic regression.
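The speed claim follows from the model's form: once trained, classifying an unknown record is just one matrix product plus a softmax over the class scores. A hedged sketch on synthetic data (solver settings are assumptions, not taken from the paper):

```python
# Hedged sketch: multinomial logistic regression on synthetic data.
# Prediction cost is a single dot product per class, which is why
# classifying unknown records is fast.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=16, n_informative=10,
                           n_classes=7, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
lr_pred = lr.predict(X_te)
print("logistic regression accuracy:", lr.score(X_te, y_te))
```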

Figure 15 presents the predicted values of the random forest, which show that only a few predictions failed to match the real values; thus, it provided very good results, as shown in Table 2, with an accuracy of 95%. This algorithm can be used to solve both classification and regression problems.

**Figure 15.** Real-time analysis of each testing sample against the predicted values of random forest.
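The dual classification/regression use is reflected directly in scikit-learn's API: the same forest family provides `RandomForestClassifier` and `RandomForestRegressor`. A sketch on synthetic data (not the study's dataset) showing both:

```python
# Hedged sketch: one random-forest family, two task types.
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Classification variant: 16 features, 7 classes, as in the study's setup.
Xc, yc = make_classification(n_samples=500, n_features=16, n_informative=10,
                             n_classes=7, random_state=42)
rf_clf = RandomForestClassifier(random_state=42).fit(Xc, yc)

# Regression variant of the same algorithm on a synthetic regression target.
Xr, yr = make_regression(n_samples=500, n_features=16, random_state=42)
rf_reg = RandomForestRegressor(random_state=42).fit(Xr, yr)

print("classifier train accuracy:", rf_clf.score(Xc, yc))
print("regressor train R^2:", rf_reg.score(Xr, yr))
```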
