Author Contributions
Conceptualization, D.I.; formal analysis, D.I.; investigation, D.I.; methodology, T.A., M.A. and S.E.-S.; project administration, T.A. and M.A.; software, D.I. and T.A.; supervision, T.A.; validation, T.A.; visualization, D.I., T.A. and S.E.-S.; writing—original draft, D.I.; writing—review and editing, D.I., T.A., M.A. and S.E.-S.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.
Figure 1.
The architecture of the proposed framework. Abbreviations: National Social Life, Health, and Aging Project (NSHAP); explainable artificial intelligence (XAI).
Figure 4.
Performance comparison of different classical classifiers at the detection layer. Abbreviations: feature selection (fs); hyperparameter optimization (op); critical difference (cd); area under the curve (auc). (a) Performance of classical classifiers with and without feature selection and optimization (detection layer). (b) Comparison of classical classifiers based on the Friedman test (detection layer). (c) AUC scores for classical classifiers with feature selection and hyperparameter optimization (detection layer).
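For reference, a minimal sketch of the Friedman comparison underlying the critical-difference diagram in Figure 4b, assuming scipy and toy per-fold accuracies (placeholder values, not the paper's results):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Toy per-fold accuracies for the six classical classifiers over ten CV folds.
rng = np.random.default_rng(0)
scores = {name: rng.uniform(0.75, 0.85, size=10)
          for name in ["DT", "LR", "NB", "KN", "MLP", "SVC"]}

# The Friedman test ranks classifiers within each fold and tests whether
# the mean ranks differ; a small p-value motivates post hoc CD analysis.
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```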
Figure 5.
Performance comparison of different static ensemble classifiers at the detection layer. (a) Performance of static ensemble classifiers with and without feature selection and optimization (detection layer). (b) Comparison of static ensemble classifiers based on the Friedman test (detection layer). (c) AUC scores for static ensemble classifiers with feature selection and hyperparameter optimization (detection layer).
Figure 6.
Performance comparison between classical and static ensemble classifiers at the detection layer. (a) Performance metric comparison between classical and static ensemble classifiers (detection layer). (b) Comparison of classical and static ensemble classifiers based on the Friedman test (detection layer).
Figure 7.
Performance comparison of different DES classifiers with classical classifiers at the detection layer. (a) Comparison of FIRE-KNOP with a classical classifier pool with different numbers of base classifiers (detection layer). (b) Comparison of DES classifiers with a pool of 6 classical classifiers based on the Friedman test (detection layer). (c) AUC scores for DES classifiers with a pool of 6 classical classifiers (detection layer).
Figure 8.
Performance comparison of DES classifiers with a static ensemble classifier pool at the detection layer. (a) Comparison of FIRE-KNOP with a static ensemble classifier pool with different numbers of base classifiers (detection layer). (b) Comparison of DES classifiers with a pool of 5 static ensemble classifiers based on the Friedman test (detection layer). (c) AUC scores for DES classifiers with a pool of 5 static ensemble classifiers (detection layer).
Figure 9.
Performance comparison of DES classifiers with a mixed classifier pool at the detection layer. (a) Comparison of FIRE-KNOP with a mixed classifier pool with different numbers of base classifiers (detection layer). (b) Comparison of DES classifiers with a pool of 4 mixed classifiers based on the Friedman test (detection layer). (c) AUC scores for DES classifiers with a pool of 4 mixed classifiers (detection layer).
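As a minimal sketch of the FIRE-KNOP configuration named in Figures 7-9, assuming the deslib library (where the FIRE- variants correspond to enabling Dynamic Frienemy Pruning via DFP=True) and toy data in place of the NSHAP sample:

```python
from deslib.des.knop import KNOP
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# DES methods need a held-out dynamic selection (DSEL) set.
X_train, X_dsel, y_train, y_dsel = train_test_split(X_train, y_train, random_state=0)

# Any fitted pool works; a bagged pool is a common deslib default.
pool = BaggingClassifier(n_estimators=10, random_state=0).fit(X_train, y_train)

# DFP=True applies Dynamic Frienemy Pruning, i.e., the FIRE- variant of KNOP.
fire_knop = KNOP(pool_classifiers=pool, DFP=True).fit(X_dsel, y_dsel)
print("test accuracy:", fire_knop.score(X_test, y_test))
```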
Figure 10.
Performance comparison of different classical classifiers at the severity prediction layer. (a) Performance of classical classifiers with and without feature selection and optimization (severity prediction layer). (b) Comparison of classical classifiers based on the Friedman test (severity prediction layer). (c) AUC scores for classical classifiers with feature selection and hyperparameter optimization (severity prediction layer).
Figure 11.
Performance comparison of different static ensemble classifiers at the severity prediction layer. (a) Performance of static ensemble classifiers with and without feature selection and optimization (severity prediction layer). (b) Comparison of static ensemble classifiers based on the Friedman test (severity prediction layer). (c) AUC scores for static ensemble classifiers with feature selection and hyperparameter optimization (severity prediction layer).
Figure 12.
Performance comparison between classical and static ensemble classifiers at the severity prediction layer. (a) Performance metric comparison between classical and static ensemble classifiers (severity prediction layer). (b) Comparison of classical and static ensemble classifiers based on the Friedman test (severity prediction layer).
Figure 13.
Performance comparison of different DES classifiers with classical classifiers at the severity prediction layer. (a) Comparison of KNORAU with a classical classifier pool with different numbers of base classifiers (severity prediction layer). (b) Comparison of DES classifiers with a pool of 5 classical classifiers based on the Friedman test (severity prediction layer). (c) AUC scores for DES classifiers with a pool of 5 classical classifiers (severity prediction layer).
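A hedged sketch of the KNORA-U setup in Figure 13a, assuming deslib over a small heterogeneous pool and a toy three-class problem standing in for the severity labels (the paper's actual pool members and class scheme may differ):

```python
from deslib.des.knora_u import KNORAU
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_train, X_dsel, y_train, y_dsel = train_test_split(X_train, y_train, random_state=0)

# deslib also accepts a plain list of independently fitted classifiers.
pool = [DecisionTreeClassifier(random_state=0).fit(X_train, y_train),
        LogisticRegression(max_iter=1000).fit(X_train, y_train),
        GaussianNB().fit(X_train, y_train),
        KNeighborsClassifier().fit(X_train, y_train)]

# KNORA-U weights each pool member by how many DSEL neighbors it classifies correctly.
knorau = KNORAU(pool_classifiers=pool).fit(X_dsel, y_dsel)
print("test accuracy:", knorau.score(X_test, y_test))
```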
Figure 14.
Performance comparison of DES classifiers with a static ensemble classifier pool at the severity prediction layer. (a) Comparison of FIRE-KNOP with a static ensemble classifier pool with different numbers of base classifiers (severity prediction layer). (b) Comparison of DES classifiers with a pool of 5 static ensemble classifiers based on the Friedman test (severity prediction layer). (c) AUC scores for DES classifiers with a pool of 5 static ensemble classifiers (severity prediction layer).
Figure 15.
Performance comparison of DES classifiers with a mixed classifier pool at the severity prediction layer. (a) Comparison of FIRE-KNOP with a mixed classifier pool with different numbers of base classifiers (severity prediction layer). (b) Comparison of DES classifiers with a pool of ten mixed classifiers based on the Friedman test (severity prediction layer). (c) AUC scores for DES classifiers with a pool of ten mixed classifiers (severity prediction layer).
Figure 16.
Performance comparison of static regressors at the scale prediction layer. (a) Performance of static regressors with and without feature selection and optimization (scale prediction layer). (b) Comparison of static regressors based on the Friedman test (scale prediction layer).
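A minimal sketch of evaluating a static ensemble regressor of the kind compared in Figure 16, assuming scikit-learn and toy regression data in place of the depression-scale targets:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              VotingRegressor)
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A voting regressor averages the member predictions.
voting = VotingRegressor([("rf", RandomForestRegressor(random_state=0)),
                          ("gb", GradientBoostingRegressor(random_state=0))])
voting.fit(X_train, y_train)
pred = voting.predict(X_test)
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("MAE:", mean_absolute_error(y_test, pred))
```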
Figure 17.
SHAP plots for feature importance in the detection model. (a) SHAP summary plot of feature importance for detection. (b) SHAP beeswarm plot of feature importance for detection.
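As a hedged sketch of producing plots like Figure 17 with the shap library (a binary gradient-boosted classifier on toy data, since a binary GBM gives TreeExplainer a single log-odds output that the plot helpers accept directly; the paper's model and features may differ):

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Calling the explainer returns a shap Explanation object.
explanation = shap.TreeExplainer(model)(X)
shap.plots.bar(explanation)       # global importance, as in Figure 17a
shap.plots.beeswarm(explanation)  # per-instance effects, as in Figure 17b
```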
Figure 18.
Decision tree classifier for detection.
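A minimal sketch of rendering a depth-limited decision tree like Figure 18, assuming scikit-learn's plot_tree on toy data (the class-name ordering here is an illustrative assumption):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, plot_tree

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

plt.figure(figsize=(14, 6))
# class_names are assumed to map 0 -> normal, 1 -> depressed.
plot_tree(tree, feature_names=[f"feature_{i}" for i in range(8)],
          class_names=["normal", "depressed"], filled=True)
plt.show()
```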
Figure 19.
Waterfall plots for instances of depressed and normal individuals. (a) Waterfall plot for a predicted depressed individual. (b) Waterfall plot for a predicted normal individual.
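A self-contained sketch of per-instance waterfall plots in the style of Figure 19, again assuming the shap library and toy data rather than the NSHAP sample:

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])
model = GradientBoostingClassifier(random_state=0).fit(X, y)
explanation = shap.TreeExplainer(model)(X)

pred = model.predict(X)
# One local explanation for a predicted-positive and one for a predicted-negative case.
shap.plots.waterfall(explanation[(pred == 1).argmax()])  # cf. Figure 19a
shap.plots.waterfall(explanation[(pred == 0).argmax()])  # cf. Figure 19b
```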
Table 1.
Chi-square test on a selection of categorical features (normal and depressed).
Feature Description | Chi-Square Value | p-Value |
---|---|---|
Interviewer’s rating for interviewee’s posture | 62.7900 | |
Happiness in current/past relationship | 58.1990 | |
Numerical questions performance | 18.9570 | |
Frequency of internet usage | 63.7290 | |
Self-rated general happiness | 330.409 | |
Gender | 26.6790 | |
Difficulty getting out of bed | 126.823 | |
Disabled | 81.7890 | |
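For reference, a minimal sketch of the chi-square test of independence reported in Table 1, assuming scipy and a toy contingency table (counts are placeholders, not NSHAP data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: levels of one categorical feature; columns: (normal, depressed) counts.
table = np.array([[120, 40],
                  [ 90, 70],
                  [ 60, 95]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p = {p:.4g}, dof = {dof}")
```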
Table 2.
Classical classifier results with feature selection and hyperparameter optimization (detection layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
DT | | | | |
LR | | | | |
NB | | | | |
KN | | | | |
MLP | | | | |
SVC | | | | |
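A hedged sketch of the "feature selection + hyperparameter optimization" setup behind Table 2, assuming a SelectKBest step and a grid search over SVC parameters (the paper's actual selector and search grids may differ):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# Feature selection inside the pipeline so it is refit on every CV split.
pipe = Pipeline([("fs", SelectKBest(mutual_info_classif)),
                 ("clf", SVC())])
grid = {"fs__k": [10, 20, 30],
        "clf__C": [0.1, 1, 10],
        "clf__kernel": ["linear", "rbf"]}
search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(X, y)
print(search.best_params_, f"accuracy = {search.best_score_:.4f}")
```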
Table 3.
Static ensemble classifier results with feature selection and hyperparameter optimization (detection layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
RF | | | | |
XGB | | | | |
GB | | | | |
AB | | | | |
CB | | | | |
LGBM | | | | |
Vot | | | | |
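A minimal sketch of the voting ensemble ("Vot" in Table 3) built from a few of the listed learners, assuming soft voting (the paper's exact member set and voting scheme may differ):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)
voting = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")  # averages predicted probabilities across members
print("mean CV accuracy:", cross_val_score(voting, X, y, cv=5).mean())
```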
Table 4.
DES model results with all six base classical classifiers (detection layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
KNORAE | | | | |
KNORAU | | | | |
KNOP | | | | |
DESMI | | | | |
METADES | | | | |
DESKNN | | | | |
DESP | | | | |
FIRE-KNORA-U | | | | |
FIRE-KNORA-E | | | | |
FIRE-METADES | | | | |
FIRE-DESKNN | | | | |
FIRE-DESP | | | | |
FIRE-KNOP | | | | |
Table 5.
DES model results with five base static classifiers (detection layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
KNORAE | | | | |
KNORAU | | | | |
KNOP | | | | |
DESMI | | | | |
METADES | | | | |
DESKNN | | | | |
DESP | | | | |
FIRE-KNORA-U | | | | |
FIRE-KNORA-E | | | | |
FIRE-METADES | | | | |
FIRE-DESKNN | | | | |
FIRE-DESP | | | | |
FIRE-KNOP | | | | |
Table 6.
DES model results with a pool of four mixed classifiers (detection layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
KNORAE | | | | |
KNORAU | | | | |
KNOP | | | | |
DESMI | | | | |
METADES | | | | |
DESKNN | | | | |
DESP | | | | |
FIRE-KNORA-U | | | | |
FIRE-KNORA-E | | | | |
FIRE-METADES | | | | |
FIRE-DESKNN | | | | |
FIRE-DESP | | | | |
FIRE-KNOP | | | | |
Table 7.
Classical classifier results with feature selection and hyperparameter optimization (severity prediction layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
DT | | | | |
LR | | | | |
NB | | | | |
KN | | | | |
MLP | | | | |
SVC | | | | |
Table 8.
Static ensemble classifier results with feature selection and hyperparameter optimization (severity prediction layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
RF | | | | |
XGB | | | | |
GB | | | | |
AB | | | | |
CB | | | | |
LGBM | | | | |
Vot | | | | |
Table 9.
DES model results with five base classical classifiers (severity prediction layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
KNORAE | | | | |
KNORAU | | | | |
KNOP | | | | |
DESMI | | | | |
METADES | | | | |
DESKNN | | | | |
DESP | | | | |
FIRE-KNORA-U | | | | |
FIRE-KNORA-E | | | | |
FIRE-METADES | | | | |
FIRE-DESKNN | | | | |
FIRE-DESP | | | | |
FIRE-KNOP | | | | |
Table 10.
DES model results with five base static classifiers (severity prediction layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
KNORAE | | | | |
KNORAU | | | | |
KNOP | | | | |
DESMI | | | | |
METADES | | | | |
DESKNN | | | | |
DESP | | | | |
FIRE-KNORA-U | | | | |
FIRE-KNORA-E | | | | |
FIRE-METADES | | | | |
FIRE-DESKNN | | | | |
FIRE-DESP | | | | |
FIRE-KNOP | | | | |
Table 11.
DES model results with a pool of ten mixed classifiers (severity prediction layer).
Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
KNORAE | | | | |
KNORAU | | | | |
KNOP | | | | |
DESMI | | | | |
METADES | | | | |
DESKNN | | | | |
DESP | | | | |
FIRE-KNORA-U | | | | |
FIRE-KNORA-E | | | | |
FIRE-METADES | | | | |
FIRE-DESKNN | | | | |
FIRE-DESP | | | | |
FIRE-KNOP | | | | |
Table 12.
Comparison of the best models from all experiments in the detection layer and the severity prediction layer.
Task | Experiments | Best Model | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|---|---|
Detection | Classic | SVC | 0.8147 ± 0.0125 | 0.8158 ± 0.0127 | 0.8147 ± 0.0125 | 0.8145 ± 0.0124 |
Detection | Static | Vot | 0.8708 ± 0.0106 | 0.8712 ± 0.0105 | 0.8708 ± 0.0106 | 0.8708 ± 0.0106 |
Detection | DES w/ Classic | FIRE-KNOP | 0.8328 ± 0.0160 | 0.8335 ± 0.0160 | 0.8328 ± 0.0160 | 0.8327 ± 0.0160 |
Detection | DES w/ Static | FIRE-KNOP | 0.8821 ± 0.0105 | 0.8825 ± 0.0104 | 0.8821 ± 0.0105 | 0.8821 ± 0.0105 |
Detection | DES w/ Mix | FIRE-KNOP | 0.8833 ± 0.0096 | 0.8838 ± 0.0095 | 0.8833 ± 0.0096 | 0.8833 ± 0.0096 |
Severity | Classic | SVC | 0.7926 ± 0.0199 | 0.7980 ± 0.0182 | 0.7926 ± 0.0199 | 0.7916 ± 0.0204 |
Severity | Static | Vot | 0.8294 ± 0.0178 | 0.8431 ± 0.0158 | 0.8294 ± 0.0178 | 0.8276 ± 0.0184 |
Severity | DES w/ Classic | KNORAU | 0.7926 ± 0.0173 | 0.7995 ± 0.0165 | 0.7926 ± 0.0173 | 0.7913 ± 0.0177 |
Severity | DES w/ Static | FIRE-KNOP | 0.8332 ± 0.0183 | 0.8450 ± 0.0177 | 0.8332 ± 0.0183 | 0.8318 ± 0.0186 |
Severity | DES w/ Mix | FIRE-KNOP | 0.8368 ± 0.0149 | 0.8479 ± 0.0137 | 0.8368 ± 0.0149 | 0.8354 ± 0.0153 |
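As a hedged sketch of how mean ± standard deviation entries like those in Table 12 could be produced, assuming repeated stratified cross-validation in scikit-learn (split and repeat counts here are illustrative, not the paper's protocol):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = cross_val_score(SVC(), X, y, cv=cv, scoring="accuracy")
print(f"accuracy = {scores.mean():.4f} ± {scores.std():.4f}")
```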
Table 13.
Static ensemble regression model results without feature selection and hyperparameter optimization (scale prediction layer).
Model | RMSE | MAE | R² |
---|---|---|---|
CBR | | | |
XGBR | | | |
LGBMR | | | |
GBR | | | |
RFR | | | |
ETR | | | |
ABR | | | |
Voting | | | |
Table 14.
Static ensemble regression model results with feature selection and hyperparameter optimization (scale prediction layer).
Model | RMSE | MAE | R² |
---|---|---|---|
CBR | | | |
XGBR | | | |
LGBMR | | | |
GBR | | | |
RFR | | | |
ETR | | | |
ABR | | | |
Voting | | | |
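For reference, a minimal sketch of computing the regression metrics in Tables 13 and 14 with scikit-learn, on placeholder predictions (the third column is assumed to be R², since the extracted header lost its label):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Toy scale values and predictions, standing in for the model outputs.
y_true = np.array([12.0, 7.5, 3.0, 9.0, 15.5])
y_pred = np.array([11.2, 8.1, 2.4, 9.6, 14.9])

print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))
print("MAE:", mean_absolute_error(y_true, y_pred))
print("R^2:", r2_score(y_true, y_pred))  # assumed third metric
```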