Figure 1.
The processing steps of the proposed model.
Figure 2.
A dense block, in which each layer takes all preceding feature maps as input.
Figure 3.
A block diagram illustrating the pretrained DenseNet-161 model.
Figure 4.
Example images from the PETA, INRIA, ILIDS, and MSMT17 datasets, showing how a person's carrying status varies across the six classes.
Figure 5.
The loss curve and confusion matrix of binary classification on the PETA dataset: (a) The loss curve, (b) the confusion matrix.
Figure 6.
The loss curve and confusion matrix of the first experiment on the INRIA and MSMT17 datasets: (a) The loss curve, (b) the confusion matrix.
Figure 7.
The loss curve and confusion matrix of the second experiment on the INRIA and MSMT17 datasets: (a) The loss curve, (b) the confusion matrix.
Figure 8.
The loss curve and confusion matrix of the first experiment on the INRIA, ILIDS, and MSMT17 datasets: (a) The loss curve, (b) the confusion matrix.
Figure 9.
The loss curve and confusion matrix of the second experiment on the INRIA, ILIDS, and MSMT17 datasets: (a) The loss curve, (b) the confusion matrix.
Figure 10.
The loss curve and confusion matrix of the first experiment on the PETA dataset: (a) The loss curve, (b) the confusion matrix.
Figure 11.
The loss curve and confusion matrix of the second experiment on the PETA dataset: (a) The loss curve, (b) the confusion matrix.
Figure 12.
The loss curve and confusion matrix of multi-class classification on the INRIA and MSMT17 datasets: (a) The loss curve, (b) the confusion matrix.
Figure 13.
The loss curve and confusion matrix of multi-class classification on the INRIA, ILIDS, and MSMT17 datasets: (a) The loss curve, (b) the confusion matrix.
Table 1.
PETA dataset distribution.

| Viewing Direction | Carrying Baggage | Without Baggage |
|---|---|---|
| Front view | 1000 | 1000 |
| Back view | 1000 | 1000 |
| Side view | 1000 | 1000 |
| Total | 3000 | 3000 |
Table 2.
INRIA dataset distribution.

| Viewing Direction | Carrying Baggage | Without Baggage |
|---|---|---|
| Front view | 26 | 19 |
| Back view | 22 | 15 |
| Side view | 19 | 11 |
| Total | 67 | 45 |
Table 3.
ILIDS dataset distribution.

| Viewing Direction | Carrying Baggage | Without Baggage |
|---|---|---|
| Front view | 809 | 773 |
| Back view | 788 | 766 |
| Side view | 781 | 771 |
| Total | 2378 | 2310 |
Table 4.
MSMT17 dataset distribution.

| Viewing Direction | Carrying Baggage | Without Baggage |
|---|---|---|
| Front view | 1264 | 1046 |
| Back view | 1147 | 1101 |
| Side view | 1040 | 1002 |
| Total | 3451 | 3149 |
Table 5.
The evaluation results on the PETA dataset.

| Network | Accuracy | Precision | Recall/Sensitivity | Specificity |
|---|---|---|---|---|
| DenseNet-161 | 98.50% | 97.99% | 98.98% | 98.03% |
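For reference, the binary metrics reported in the tables above follow the standard confusion-matrix definitions. A minimal sketch, using illustrative TP/FP/TN/FN counts rather than the paper's actual confusion matrix:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used in the binary experiments."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # also called sensitivity
    specificity = tn / (tn + fp)
    return accuracy, precision, recall, specificity

# Illustrative counts only (not taken from the paper):
acc, prec, rec, spec = binary_metrics(tp=97, fp=2, tn=98, fn=3)
print(f"{acc:.2%} {prec:.4f} {rec:.4f} {spec:.4f}")
```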
Table 6.
The evaluation results of the first experiment on the INRIA and MSMT17 datasets.

| Network | Accuracy | Precision | Recall/Sensitivity | Specificity |
|---|---|---|---|---|
| DenseNet-161 | 96.00% | 95.83% | 95.83% | 96.15% |
Table 7.
The evaluation results of the second experiment on the INRIA and MSMT17 datasets.

| Network | Accuracy | Precision | Recall/Sensitivity | Specificity |
|---|---|---|---|---|
| DenseNet-161 | 97.25% | 98.99% | 95.63% | 98.97% |
Table 8.
The evaluation results of the first experiment on the INRIA, ILIDS, and MSMT17 datasets.

| Network | Accuracy | Precision | Recall/Sensitivity | Specificity |
|---|---|---|---|---|
| DenseNet-161 | 97.00% | 98.96% | 95.00% | 99.00% |
Table 9.
The evaluation results of the second experiment on the INRIA, ILIDS, and MSMT17 datasets.

| Network | Accuracy | Precision | Recall/Sensitivity | Specificity |
|---|---|---|---|---|
| DenseNet-161 | 98.75% | 98.99% | 98.50% | 99.00% |
Table 10.
PETA dataset distribution for multi-class classification (first experiment).

| Viewing Direction | Carrying Baggage | Without Baggage |
|---|---|---|
| Front view | 330 | 562 |
| Back view | 318 | 605 |
| Side view | 325 | 533 |
| Total | 973 | 1700 |
Table 11.
The evaluation results of the first experiment on the PETA dataset.

| Network | Accuracy | Macro F1 | Micro F1 |
|---|---|---|---|
| DenseNet-161 | 97% | 96.29% | 97% |
Table 12.
Precision, recall, and F1-score for the first experiment on the PETA dataset.

| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| FV-Pos | 1.00 | 0.99 | 1.00 |
| FV-Neg | 0.88 | 0.96 | 0.92 |
| BV-Pos | 0.98 | 0.94 | 0.96 |
| BV-Neg | 0.90 | 0.92 | 0.91 |
| SV-Pos | 0.99 | 1.00 | 1.00 |
| SV-Neg | 1.00 | 1.00 | 1.00 |
| Average | 0.96 | 0.97 | 0.97 |
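Macro F1, reported in the multi-class tables, averages the per-class F1 scores with equal weight, while micro F1 pools the counts across classes and, for single-label multi-class data, equals accuracy. A small self-contained sketch, using illustrative labels over the six classes rather than the paper's predictions:

```python
def macro_micro_f1(y_true, y_pred):
    """Compute macro and micro F1 for single-label multi-class predictions."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
    macro = sum(f1s) / len(f1s)
    # For single-label multi-class data, micro F1 reduces to accuracy.
    micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return macro, micro

# Illustrative labels over the six classes (front/back/side view x pos/neg):
y_true = ["FV-Pos", "FV-Neg", "BV-Pos", "BV-Neg", "SV-Pos", "SV-Neg"] * 4
y_pred = y_true[:-1] + ["SV-Pos"]  # one misclassification
macro, micro = macro_micro_f1(y_true, y_pred)
```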
Table 13.
The evaluation results of the second experiment on the PETA dataset.

| Network | Accuracy | Macro F1 | Micro F1 |
|---|---|---|---|
| DenseNet-161 | 98.25% | 98.28% | 98.25% |
Table 14.
Precision, recall, and F1-score for the second experiment on the PETA dataset.

| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| FV-Pos | 1.00 | 1.00 | 1.00 |
| FV-Neg | 0.99 | 0.94 | 0.97 |
| BV-Pos | 0.98 | 1.00 | 0.99 |
| BV-Neg | 0.98 | 0.99 | 0.98 |
| SV-Pos | 0.98 | 1.00 | 0.99 |
| SV-Neg | 0.97 | 0.97 | 0.97 |
| Average | 0.983 | 0.983 | 0.983 |
Table 15.
The evaluation results on the INRIA and MSMT17 datasets.

| Network | Accuracy | Macro F1 | Micro F1 |
|---|---|---|---|
| DenseNet-161 | 96.67% | 96.69% | 96.67% |
Table 16.
Precision, recall, and F1-score for the INRIA and MSMT17 datasets.

| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| FV-Pos | 0.98 | 0.96 | 0.97 |
| FV-Neg | 0.94 | 0.96 | 0.95 |
| BV-Pos | 0.99 | 0.95 | 0.97 |
| BV-Neg | 0.96 | 0.98 | 0.97 |
| SV-Pos | 0.96 | 0.97 | 0.97 |
| SV-Neg | 0.97 | 0.97 | 0.97 |
| Average | 0.967 | 0.965 | 0.967 |
Table 17.
The evaluation results on the INRIA, ILIDS, and MSMT17 datasets.

| Network | Accuracy | Macro F1 | Micro F1 |
|---|---|---|---|
| DenseNet-161 | 98.33% | 98.26% | 98.33% |
Table 18.
Precision, recall, and F1-score for the INRIA, ILIDS, and MSMT17 datasets.

| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| FV-Pos | 0.97 | 1.00 | 0.99 |
| FV-Neg | 0.97 | 0.98 | 0.98 |
| BV-Pos | 1.00 | 0.95 | 0.98 |
| BV-Neg | 0.99 | 0.99 | 0.99 |
| SV-Pos | 0.98 | 0.99 | 0.99 |
| SV-Neg | 0.99 | 0.98 | 0.99 |
| Average | 0.983 | 0.982 | 0.987 |
Table 19.
Comparison of binary classification.

| Dataset | Number of Images | Method | Accuracy | Precision | Recall/Sensitivity | Specificity |
|---|---|---|---|---|---|---|
| INRIA and MSMT17 | 500 per class | Our method (DenseNet-161) | 96% * | 0.9583 | 0.9583 * | 0.9615 * |
| INRIA and MSMT17 | 1000 per class | Our method (DenseNet-161) | 97.25% | 0.9899 | 0.9563 | 0.9897 |
| INRIA and MSMT17 | 500 per class | Local Tridirectional Pattern [29] | 95% | 0.9889 | 0.9511 | 0.8333 |
| INRIA, ILIDS, and MSMT17 | 500 per class | Our method (DenseNet-161) | 97% * | 0.9896 * | 0.9500 | 0.9900 * |
| INRIA, ILIDS, and MSMT17 | 1000 per class | Our method (DenseNet-161) | 98.75% | 0.9899 | 0.9850 | 0.9900 |
| INRIA, ILIDS, and MSMT17 | 500 per class | Joint Scale LBP [25] | 95.4% | 0.9889 | 0.9511 | 0.8333 |
| PETA | 1000 per class | Our method (DenseNet-161) | 98.50% | 0.9799 | 0.9898 | 0.9803 |
Table 20.
Comparison of multi-class classification.

| Dataset | Number of Images | Method | Accuracy | Av. Precision | Av. Recall | Av. F1-Score |
|---|---|---|---|---|---|---|
| PETA | 2673 images | Our method (DenseNet-161) | 97% | 0.96 | 0.97 | 0.97 |
| PETA | 1000 per class | Our method (DenseNet-161) | 98% | 0.98 | 0.98 | 0.98 |
| PETA | 2673 images | CNNR + DA + TL [34] | − | 0.93 | 0.90 | 0.91 |
| PETA | 2673 images | CNNR [34] | − | 0.94 | 0.64 | 0.76 |
| PETA | 2673 images | CNN + BA + TL [34] | − | 0.95 | 0.68 | 0.79 |
| PETA | 2673 images | CNN + DA + RF [34] | − | 0.90 | 0.69 | 0.78 |
| PETA | 2673 images | CNN + BA + SVM [34] | − | 0.86 | 0.50 | 0.63 |
| PETA | 2673 images | CNN [34] | − | 0.93 | 0.55 | 0.70 |
| INRIA and MSMT17 | 1000 per class | Our method (DenseNet-161) | 96.67% | 0.97 | 0.97 | 0.97 |
| INRIA, ILIDS, and MSMT17 | 1000 per class | Our method (DenseNet-161) | 98.33% | 0.98 | 0.98 | 0.99 |