**6. Analysis of the Obtained Results**

In order to assess the obtained results, the values of the indicators used to test the quality of the classifiers were compared. Table 17 presents a comparison of the values obtained for the models built with decision trees (DT) and the rough set theory (RST). The indicators from Acc to J should take values as close to 1 as possible, while the remaining ones should be as close to 0 as possible. Green indicates the indicator values that were more favorable for particular classes; yellow indicates that the values obtained for a given indicator were identical.


**Table 17.** Comparison of the obtained indicator values for testing the quality of classifiers for the models obtained with decision trees (DT) and the rough set theory (RST).

When analyzing the results presented in the table, it should be noted that the first indicator (accuracy, Acc) shows that the classifier developed with DT for the 30–50% class is the most likely to assign objects to the class to which they actually belong (Acc = 1). By contrast, the lowest Acc value among the RST classifiers was obtained by the genetic algorithm for the 70–85% class (Acc = 0.58). The sensitivity (TPR) of the classifiers presents a slightly different picture. The ability to detect objects from the highlighted class is highest for the LEM2 algorithm in both highlighted classes and for DT in the 30–50% class; it is lowest for the genetic algorithm in the 30–50% class (TPR = 0.00). Analyzing specificity (TNR) shows that, for the highlighted 30–50% class, the results are highest (TNR = 1.00) for the DT, exhaustive, and genetic algorithms. Precision (PPV) is likewise highest for the highlighted 30–50% class for the DT and exhaustive algorithms (PPV = 1), and lowest, again, for the genetic algorithm in the same class. The negative predictive value (NPV), i.e., the probability that an object recognized by a classifier as non-highlighted actually belongs to the non-highlighted class, is highest for the LEM2 algorithm in both highlighted classes and for DT in the 30–50% class (NPV = 1). It is lowest for the 70–85% class for the covering and genetic algorithms (NPV = 0.67).
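All of the indicators discussed so far can be derived from the one-vs-rest confusion-matrix counts (TP, FP, TN, FN) for a single highlighted class. A minimal sketch in Python, using hypothetical counts rather than the values behind Table 17:

```python
def basic_rates(tp, fp, tn, fn):
    """One-vs-rest quality indicators for a single highlighted class.

    tp/fp/tn/fn are confusion-matrix counts. The counts in the usage
    example below are purely illustrative, not taken from Table 17.
    """
    total = tp + fp + tn + fn
    return {
        "Acc": (tp + tn) / total,   # accuracy
        "TPR": tp / (tp + fn),      # sensitivity (recall)
        "TNR": tn / (tn + fp),      # specificity
        "PPV": tp / (tp + fp),      # precision
        "NPV": tn / (tn + fn),      # negative predictive value
    }

# Hypothetical counts: 4 true positives, 2 false positives,
# 5 true negatives, 1 false negative.
rates = basic_rates(tp=4, fp=2, tn=5, fn=1)
```

A perfect classifier such as DT for the 30–50% class corresponds to FP = FN = 0, which drives all five indicators to 1.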

The comparison of the Matthews correlation coefficient (MCC, the correlation coefficient between the real classes and the classes predicted by the model) indicates that the best result was achieved in the 30–50% class for DT (MCC = 1). The lowest result was obtained by the genetic algorithm in the 70–85% class (MCC = 0.17). Analyzing the results obtained for F1 (the harmonic mean of a model's precision and sensitivity) and J (the sum of sensitivity and specificity reduced by 1) across all the models shows that the best classifier is DT in the 30–50% class, while the worst is the genetic algorithm for the same class.
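The three composite measures follow directly from the same confusion-matrix counts. A sketch under the standard definitions (MCC as the binary correlation coefficient, F1 as the harmonic mean of precision and sensitivity, and J as Youden's statistic), again with illustrative counts rather than the paper's data:

```python
import math

def composite_scores(tp, fp, tn, fn):
    """MCC, F1 and J for one highlighted class (one-vs-rest counts)."""
    tpr = tp / (tp + fn)            # sensitivity
    tnr = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)            # precision
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    f1 = 2 * ppv * tpr / (ppv + tpr)   # harmonic mean of PPV and TPR
    j = tpr + tnr - 1                  # sensitivity + specificity - 1
    return mcc, f1, j

# Hypothetical counts, as before.
mcc, f1, j = composite_scores(tp=4, fp=2, tn=5, fn=1)
```

For an error-free classifier (FP = FN = 0) all three measures reach their maximum of 1, matching the DT result reported for the 30–50% class.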

The remaining indicators should take the lowest possible values. The overall classifier error (Err) is most favorable when the DT classifier is used for the 30–50% class (Err = 0.00), and worst for the genetic algorithm in the 70–85% class. The FPR indicator (the probability of false alarms, i.e., of objects incorrectly assigned to a highlighted class among all objects that are actually non-highlighted) and the FDR indicator (the probability of false alarms among all objects recognized by the classifier as highlighted) achieve their best values for the 30–50% class (FPR = 0 and FDR = 0). Similarly, the best FNR values (the probability of missing highlighted objects, i.e., of the classifier assigning them to the non-highlighted class) were obtained for the 30–50% class (FNR = 0).
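Each of these error-type indicators is the complement of one of the quality indicators (Err = 1 − Acc, FPR = 1 − TNR, FDR = 1 − PPV, FNR = 1 − TPR), which is why a perfect classifier scores 0 on all of them. A sketch with the same hypothetical counts as above:

```python
def error_rates(tp, fp, tn, fn):
    """Error-type indicators for one highlighted class.

    Counts are illustrative; each value is the complement of the
    corresponding quality indicator.
    """
    total = tp + fp + tn + fn
    return {
        "Err": (fp + fn) / total,   # 1 - Acc, overall error
        "FPR": fp / (fp + tn),      # 1 - TNR, false-alarm rate
        "FDR": fp / (fp + tp),      # 1 - PPV, false-discovery rate
        "FNR": fn / (fn + tp),      # 1 - TPR, miss rate
    }

errors = error_rates(tp=4, fp=2, tn=5, fn=1)
```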
