C-SHAP: A Hybrid Method for Fast and Efficient Interpretability
Abstract
1. Introduction
- We present C-SHAP, a novel extension of SHAP that integrates K-means clustering to significantly reduce execution time across various machine learning models. For example, on the Diabetes dataset collected by the National Institute of Diabetes and Digestive and Kidney Diseases (https://www.kaggle.com/datasets/antoniofurioso/diabetes-dataset-sklearn, accessed on 14 November 2024), C-SHAP reduced the Random Forest explanation time from 421.29 s to 0.39 s.
- C-SHAP closely preserves the features selected by SHAP while significantly improving computational efficiency across various models. A Venn diagram analysis revealed that for Random Forest, SVC (Support Vector Classifier), K-Nearest Neighbors, and Logistic Regression, the features selected by SHAP and C-SHAP were identical; the only difference was observed for the XGBoost model.
- C-SHAP was benchmarked against SHAP and LIME across multiple machine learning algorithms, demonstrating its versatility and efficiency improvements for both classification and regression tasks. The evaluations included datasets from domains such as healthcare and finance, confirming C-SHAP’s scalability across diverse real-world applications.
2. Related Works
3. How SHAP Interprets Model Decisions
The Shapley value of feature $i$ is defined as:

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \left[ f(S \cup \{i\}) - f(S) \right]$$

where:
- $\phi_i$ is the Shapley value for feature $i$.
- $N$ is the set of all features.
- $S$ is a subset of features that does not include feature $i$.
- $f(S)$ is the prediction based only on the subset $S$ of features.
- $f(S \cup \{i\})$ is the prediction based on the subset $S$ plus feature $i$.
- The fraction $\frac{|S|!\,(|N| - |S| - 1)!}{|N|!}$ is the weighting factor, representing how to fairly distribute the marginal contributions across all possible subsets.
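To make the mechanics concrete, the following is a minimal sketch of this exact computation by brute-force subset enumeration. It is illustrative only: the helper names (`shapley_value`, `value_fn`) are hypothetical, and the exponential loop over subsets is exactly what practical SHAP implementations approximate by sampling.

```python
from itertools import combinations
from math import factorial

def shapley_value(value_fn, n_features, i):
    """Exact Shapley value of feature i, enumerating every subset S of N that excludes i.

    value_fn(S) plays the role of f(S): the prediction obtained when only the
    features in S are available. The cost is exponential in n_features, which
    is why SHAP approximates this sum in practice.
    """
    others = [j for j in range(n_features) if j != i]
    phi = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            S = frozenset(subset)
            # Weighting factor |S|! (|N| - |S| - 1)! / |N|!
            weight = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
            # Marginal contribution f(S u {i}) - f(S)
            phi += weight * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy check with an additive "model": each feature's Shapley value
# recovers exactly its own additive contribution.
x = [1.0, 2.0, 3.0]
f = lambda S: sum(x[j] for j in S)
print([shapley_value(f, 3, i) for i in range(3)])  # [1.0, 2.0, 3.0]
```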
4. How C-SHAP Improves SHAP’s Performance
Algorithm 1: Pseudocode for C-SHAP
Input: Dataset X, Machine Learning Model M, Number of Clusters K
Output: Top-ranked features
1. Apply K-means clustering to X to obtain the K cluster centroids c_1, …, c_K.
2. For each centroid c_k, compute the SHAP values φ_i(c_k) under model M.
3. Aggregate the SHAP values across all centroids to rank the features.
4. Return the top-ranked features.

A runnable sketch of this pipeline is given at the end of this section.
Let:
- $X \in \mathbb{R}^{n \times m}$ be the dataset with $n$ samples and $m$ features.
- $C = \{c_1, c_2, \ldots, c_K\}$ represent the cluster centroids derived from K-means clustering, where $K$ is the number of clusters.
- $c_k$ is the centroid of the $k$-th cluster found by K-means.
- $f(S, c_k)$ represents the model's output when using the feature subset $S$ for the centroid $c_k$.
- $\phi_i(c_k)$ is the SHAP value for feature $i$ in the cluster $c_k$.

The per-centroid Shapley value takes the same form as in Section 3, evaluated at the centroid rather than at every sample:

$$\phi_i(c_k) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \left[ f(S \cup \{i\}, c_k) - f(S, c_k) \right]$$
- SHAP value (second row): This value represents the predicted output or cumulative SHAP contribution at the node. For example, the root node in Figure 3 displays a SHAP value of 6076.398, while the corresponding node in Figure 4 shows 5298.152, reflecting how the clustering step in C-SHAP changes the values being processed.
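The end-to-end pipeline of Algorithm 1 can be sketched in a few lines of Python. This is a hedged illustration rather than the authors' released code: the function name `c_shap_top_features`, the choice of `shap.KernelExplainer`, and the aggregation of per-centroid SHAP values by mean absolute value are assumptions, and a single-output model (e.g., a regressor) is assumed for simplicity.

```python
import numpy as np
import shap  # pip install shap
from sklearn.cluster import KMeans

def c_shap_top_features(model, X, K=10, top_n=5):
    """Rank features by SHAP values computed on K-means centroids only.

    Explaining K centroids instead of all n samples is where C-SHAP's
    speed-up over plain SHAP comes from.
    """
    # Step 1: cluster X and keep the K centroids c_1, ..., c_K.
    centroids = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X).cluster_centers_

    # Step 2: compute phi_i(c_k) for every feature i and centroid c_k,
    # using the centroids themselves as the background distribution.
    explainer = shap.KernelExplainer(model.predict, centroids)
    shap_values = np.asarray(explainer.shap_values(centroids))  # shape (K, m)

    # Step 3: aggregate across clusters and return the top-ranked features.
    importance = np.abs(shap_values).mean(axis=0)
    return np.argsort(importance)[::-1][:top_n]
```

Because the explainer touches only the K centroids regardless of dataset size, the explanation cost stays roughly constant as $n$ grows, which is consistent with the scalability results reported in Section 6.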
5. Experimental Evaluation
5.1. Dataset
5.2. Evaluation Metrics
5.3. Assessing Scalability
6. Results
7. Discussion
Adaptive Clustering in C-SHAP
8. Limitations
9. Conclusions and Future Works
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016.
- Castelvecchi, D. Can we open the black box of AI? Nat. News 2016, 538, 20.
- Villamizar, H.; Kalinowski, M.; Lopes, H.; Mendez, D. Identifying concerns when specifying machine learning-enabled systems: A perspective-based approach. J. Syst. Softw. 2024, 213, 112053.
- Le, T.T.H.; Kim, H.; Kang, H.; Kim, H. Classification and explanation for intrusion detection system based on ensemble trees and SHAP method. Sensors 2022, 22, 1154.
- Luo, Y.; Tseng, H.H.; Cui, S.; Wei, L.; Ten Haken, R.K.; El Naqa, I. Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling. BJR Open 2019, 1, 20190021.
- Stiglic, G.; Kocbek, P.; Fijacko, N.; Zitnik, M.; Verbert, K.; Cilar, L. Interpretability of machine learning-based prediction models in healthcare. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1379.
- Doshi-Velez, F.; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv 2017, arXiv:1702.08608.
- Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
- Carvalho, D.V.; Pereira, E.M.; Cardoso, J.S. Machine learning interpretability: A survey on methods and metrics. Electronics 2019, 8, 832.
- Lipton, Z.C. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 2018, 16, 31–57.
- Mukhtar, A.; Hofer, B.; Jannach, D.; Wotawa, F. Explaining software fault predictions to spreadsheet users. J. Syst. Softw. 2023, 201, 111676.
- Debjit, K.; Islam, M.S.; Rahman, M.A.; Pinki, F.T.; Nath, R.D.; Al-Ahmadi, S.; Hossain, M.S.; Mumenin, K.M.; Awal, M.A. An improved machine-learning approach for COVID-19 prediction using Harris Hawks optimization and feature analysis using SHAP. Diagnostics 2022, 12, 1023.
- ElShawi, R.; Sherif, Y.; Al-Mallah, M.; Sakr, S. Interpretability in healthcare: A comparative study of local machine learning interpretability techniques. Comput. Intell. 2021, 37, 1633–1650.
- Marcílio, W.E.; Eler, D.M. From explanations to feature selection: Assessing SHAP values as feature selection mechanism. In Proceedings of the 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Porto de Galinhas, Brazil, 7–10 November 2020; pp. 340–347.
- Tüzün, E.; Tekinerdogan, B.; Kalender, M.E.; Bilgen, S. Empirical evaluation of a decision support model for adopting software product line engineering. Inf. Softw. Technol. 2015, 60, 77–101.
- Samir, M.; Sherief, N.; Abdelmoez, W. Improving bug assignment and developer allocation in software engineering through interpretable machine learning models. Computers 2023, 12, 128.
- Wang, S.; Huang, L.; Gao, A.; Ge, J.; Zhang, T.; Feng, H.; Satyarth, I.; Li, M.; Zhang, H.; Ng, V. Machine/deep learning for software engineering: A systematic literature review. IEEE Trans. Softw. Eng. 2022, 49, 1188–1231.
- Kamal, M.S.; Nimmy, S.F.; Dey, N. Interpretable code summarization. IEEE Trans. Reliab. 2024, early access.
- Dam, H.K.; Tran, T.; Ghose, A. Explainable software analytics. In Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results, Gothenburg, Sweden, 27 May–3 June 2018; pp. 53–56.
- Tantithamthavorn, C.K.; Jiarpakdee, J. Explainable AI for software engineering. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), Melbourne, Australia, 15–19 November 2021; pp. 1–2.
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. arXiv 2017, arXiv:1705.07874.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
- Molnar, C.; Casalicchio, G.; Bischl, B. Interpretable machine learning—A brief history, state-of-the-art and challenges. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases; Springer: Berlin/Heidelberg, Germany, 2020; pp. 417–431.
- Merrick, L.; Taly, A. The explanation game: Explaining machine learning models using Shapley values. In Proceedings of the Machine Learning and Knowledge Extraction: 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Dublin, Ireland, 25–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 17–38.
- Zhang, Y.; Tiňo, P.; Leonardis, A.; Tang, K. A survey on neural network interpretability. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 5, 726–742.
- Sundararajan, M.; Najmi, A. The many Shapley values for model explanation. In Proceedings of the International Conference on Machine Learning, PMLR, Online, 13–18 July 2020; pp. 9269–9278.
- Meddage, P.; Ekanayake, I.; Perera, U.S.; Azamathulla, H.M.; Md Said, M.A.; Rathnayake, U. Interpretation of machine-learning-based (black-box) wind pressure predictions for low-rise gable-roofed buildings using Shapley additive explanations (SHAP). Buildings 2022, 12, 734.
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
- Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215.
- Mo, B.Y.; Nuannimnoi, S.; Baskoro, A.; Khan, A.; Ariesta Dwi Pratiwi, J.; Huang, C.Y. ClusteredSHAP: Faster GradientExplainer based on K-means clustering and selections of gradients in explaining 12-lead ECG classification model. In Proceedings of the 13th International Conference on Advances in Information Technology, Bangkok, Thailand, 6–9 December 2023; pp. 1–8.
- Gramegna, A.; Giudici, P. SHAP and LIME: An evaluation of discriminative power in credit risk. Front. Artif. Intell. 2021, 4, 752558.
- Garreau, D.; Luxburg, U. Explaining the explainer: A first theoretical analysis of LIME. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Online, 26–28 August 2020; pp. 1287–1296.
- Aldughayfiq, B.; Ashfaq, F.; Jhanjhi, N.; Humayun, M. Explainable AI for retinoblastoma diagnosis: Interpreting deep learning models with LIME and SHAP. Diagnostics 2023, 13, 1932.
- Slack, D.; Hilgard, S.; Jia, E.; Singh, S.; Lakkaraju, H. Fooling LIME and SHAP: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–8 February 2020; pp. 180–186.
- Chen, R.C.; Dewi, C.; Huang, S.W.; Caraka, R.E. Selecting critical features for data classification based on machine learning methods. J. Big Data 2020, 7, 52.
- Ismi, D.P.; Panchoo, S.; Murinto, M. K-means clustering based filter feature selection on high dimensional data. Int. J. Adv. Intell. Inform. 2016, 2, 38–45.
- Roshan, K.; Zafar, A. Using kernel SHAP XAI method to optimize the network anomaly detection model. In Proceedings of the 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 23–25 March 2022; pp. 74–80.
- Zheng, W.; Shen, T.; Chen, X.; Deng, P. Interpretability application of the Just-in-Time software defect prediction model. J. Syst. Softw. 2022, 188, 111245.
- Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182.
- John, G.H.; Kohavi, R.; Pfleger, K. Irrelevant features and the subset selection problem. In Machine Learning Proceedings 1994; Elsevier: Amsterdam, The Netherlands, 1994; pp. 121–129.
- Hakkoum, H.; Idri, A.; Abnane, I. Global and local interpretability techniques of supervised machine learning black box models for numerical medical data. Eng. Appl. Artif. Intell. 2024, 131, 107829.
- Shapley, L.S. Stochastic games. Proc. Natl. Acad. Sci. USA 1953, 39, 1095–1100.
- Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67.
- Zhang, Q.S.; Zhu, S.C. Visual interpretability for deep learning: A survey. Front. Inf. Technol. Electron. Eng. 2018, 19, 27–39.
- Ghosh, A.; Kandasamy, D. Interpretable artificial intelligence: Why and when. Am. J. Roentgenol. 2020, 214, 1137–1138.
- Shapley, L.S. A value for n-person games. Contrib. Theory Games 1953, 2.
- Parisineni, S.R.A.; Pal, M. Enhancing trust and interpretability of complex machine learning models using local interpretable model-agnostic SHAP explanations. Int. J. Data Sci. Anal. 2023, 18, 457–466.
- Liu, S.; Wu, K.; Jiang, C.; Huang, B.; Ma, D. Financial time-series forecasting: Towards synergizing performance and interpretability within a hybrid machine learning approach. arXiv 2023, arXiv:2401.00534.
- Chudasama, Y.; Purohit, D.; Rohde, P.; Gercke, J.; Vidal, M. InterpretME: A tool for interpretations of machine learning models over knowledge graphs. Semant. Web 2023, 1–21.
- Schmidt, P.; Biessmann, F. Quantifying interpretability and trust in machine learning systems. arXiv 2019, arXiv:1901.08558.
- Vishwarupe, V.; Joshi, P.M.; Mathias, N.; Maheshwari, S.; Mhaisalkar, S.; Pawar, V. Explainable AI and interpretable machine learning: A case study in perspective. Procedia Comput. Sci. 2022, 204, 869–876.
- Covert, I.; Lundberg, S.M.; Lee, S.I. Understanding global feature contributions with additive importance measures. Adv. Neural Inf. Process. Syst. 2020, 33, 17212–17223.
- Amoukou, S.I. Trustworthy Machine Learning: Explainability and Distribution-Free Uncertainty Quantification. Ph.D. Thesis, Université Paris-Saclay, Paris, France, 2023.
- Bhatt, U.; Xiang, A.; Sharma, S.; Weller, A.; Taly, A.; Jia, Y.; Ghosh, J.; Puri, R.; Moura, J.M.; Eckersley, P. Explainable machine learning in deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 648–657.
- Liu, C.; Yin, X.; Huang, D.; Xu, Y.; Li, S.; Yu, C.; Zhang, Y.; Deng, Y. An Interpretable Prediction Model for the Risk of Retinopathy of Prematurity Development Based on Machine Learning and SHapley Additive exPlanations (SHAP). Preprint (Version 1), Research Square, 2023. Available online: https://doi.org/10.21203/rs.3.rs-3569382/v1 (accessed on 14 November 2024).
- Bizimana, P.C.; Zhang, Z.; Hounye, A.H.; Asim, M.; Hammad, M.; El-Latif, A.A.A. Automated heart disease prediction using improved explainable learning-based technique. Neural Comput. Appl. 2024, 1–30.
- Zhong, Q.M.; Chen, S.Z.; Sun, Z.; Tian, L.C. Fully automatic operational modal analysis method based on statistical rule enhanced adaptive clustering method. Eng. Struct. 2023, 274, 115216.
- Zheng, M.; Gao, P.; Zhang, R.; Li, K.; Wang, X.; Li, H.; Dong, H. End-to-end object detection with adaptive clustering transformer. arXiv 2020, arXiv:2011.09315.
Ref. | Dataset | Machine Learning Model | Interpretability Techniques | Execution Times of Interpretability Techniques |
---|---|---|---|---|
[41] | Wisconsin Breast Cancer (BC, BCD), Diabetes, Parkinson’s, SPECT, Single-Proton-Emission Computed Tomography (SPECTF) | Multi-layer Perceptron (MLP), Support Vector Machine (SVM), Random Forest, XGB, Naive Bayes (NB) | LIME, SHAP, Model-Agnostic Supervised Local Explanations (MAPLEs), Local Rule-Based Explanations (LOREs), CIU | LIME: 789–9211 s, SHAP: 741–32,566 s, MAPLE: 120–8333 s, LORE: 9411–44,262 s, CIU: 116–1062 s. |
[49] | French Royalty KG, Lung Cancer KG | Random Forest, Decision Trees, AdaBoost Classifier | LIME | LIME: s (French Royalty KG), s (Lung Cancer KG). |
[50] | Book Categorization, IMDb Sentiment | L2-regularized Logistic Regression (unigram Bag-of-Words (BOW) with Term Frequency-Inverse Document Frequency (TF-IDF) normalization) | LIME, Covariance (COVAR) | LIME: s per instance; COVAR: s per instance. |
[51] | Medical Survey Dataset (Predicting Diabetes) | Random Forest | LIME, SHAP, Explain Like I’m 5 (ELI5) | LIME: execution time not explicitly stated but faster than SHAP; SHAP: computationally intensive. |
[13] | Mortality, Diabetes, Drug Review, Side Effects | Random Forest | LIME, SHAP, Anchors, LORE, Influence-Based LIME (ILIME), MAPLE | LIME: – s; SHAP: – s; Anchors: – s; LORE: 1639–1642 s; ILIME: – s; MAPLE: 375–20,135 s. |
[21] | MNIST | Decision Trees, CNNs | SHAP (Kernel, Deep), LIME | Kernel SHAP: computationally efficient; exact time not provided. Deep SHAP: faster than Kernel SHAP. LIME: slower with 50,000 perturbations. |
[22] | Text Classification Dataset | SVM (RBF Kernel), Logistic Regression, Random Forest, Decision Trees, Nearest Neighbors | LIME, Submodular Pick LIME (SP-LIME), Parzen window method, Gradient-based methods | LIME: ∼3 s per instance for 5000 perturbations. |
[14] | Indian Liver Disease, Heart Disease, Wine, Vertebral Column, Breast Cancer, Boston, Diabetes, NHANESI | XGBoost | SHAP (Tree SHAP), Recursive Feature Elimination (RFE), Mutual Information, Analysis of Variance (ANOVA) | TreeSHAP: –10,894.86 ms; RFE: –251,591.22 ms; Mutual Information: – ms; ANOVA: – ms. |
[30] | CPSC2018 (12-lead ECG classification) | ResNet34 | SHAP (GradientExplainer, ClusteredSHAP, RandomGrad Explainer, DeepExplainer) | ClusteredSHAP: s/sample; RandomGrad Explainer: s/sample; GradientExplainer: s/sample; DeepExplainer: s/sample. |
[43] | Mortality, Chronic Kidney Disease, Hospital Procedure Duration | Gradient-Boosted Trees, Random Forests, Decision Trees | TreeExplainer (SHAP values), Kernel SHAP, LIME, MAPLE | Kernel SHAP: exponential complexity ; slower and less scalable for datasets with many features or samples. |
[47] | Mobile Price Classification, MCCB Electrical Dataset | Random Forest Classifier, MLP, Random Forest Regressor | SHAP (Kernel Explainer, Tree Explainer), Local Interpretable Model-Agnostic SHAP Explanations (LIMASEs) | Kernel Explainer: – s (MLP); LIMASE: – s; Tree Explainer: exact times not specified but similar results with faster computations. |
Dataset Name | Data Points | Number of Features | Type |
---|---|---|---|
Indian Liver Disease | 583 | 10 | Classification |
Heart Disease | 303 | 13 | Classification |
Wine | 178 | 13 | Classification |
Vertebral Column | 310 | 6 | Classification |
Breast Cancer | 569 | 30 | Classification |
Diabetes | 442 | 10 | Regression |
Dataset Size (%) | SHAP Execution Time (s) | C-SHAP Execution Time (s) |
---|---|---|
10% | 43.12 | 0.15 |
25% | 126.45 | 0.28 |
50% | 269.87 | 0.40 |
75% | 382.43 | 0.55 |
100% | 439.15 | 0.29 |
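As a rough sketch of how such scalability measurements could be reproduced, the harness below times an arbitrary explanation function on growing fractions of the data. The helper `time_explainer` and the chosen fractions are illustrative assumptions, not the paper's benchmarking code.

```python
import time

def time_explainer(explain_fn, model, X, fractions=(0.10, 0.25, 0.50, 0.75, 1.00)):
    """Wall-clock time of explain_fn(model, subset) on growing subsets of X."""
    timings = {}
    for frac in fractions:
        subset = X[: max(1, int(len(X) * frac))]  # first frac of the samples
        start = time.perf_counter()
        explain_fn(model, subset)  # e.g., the c_shap_top_features sketch above
        timings[frac] = time.perf_counter() - start
    return timings
```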
Model | Accuracy | LIME (s) | SHAP (s) | C-SHAP (s) |
---|---|---|---|---|
Random Forest | 0.73 | 0.14 | 421.29 | 0.39 |
SVC | 0.73 | 0.19 | 1968.09 | 0.21 |
XGBoost | 0.72 | 0.12 | 0.06 | 0.05 |
LR | 0.73 | 0.06 | 165.72 | 0.05 |
K-NNs | 0.68 | 0.09 | 350.84 | 0.07 |
Model | Accuracy | LIME (s) | SHAP (s) | C-SHAP (s) |
---|---|---|---|---|
Random Forest | 0.736 | 0.09 | 439.15 | 0.29 |
SVC | 0.73 | 0.20 | 1984.44 | 0.22 |
XGBoost | 0.72 | 0.11 | 0.06 | 0.05 |
LR | 0.73 | 0.07 | 176.95 | 0.07 |
K-NNs | 0.68 | 0.08 | 363.78 | 0.07 |
Model | Accuracy | LIME (s) | SHAP (s) | C-SHAP (s) |
---|---|---|---|---|
Random Forest | 0.74 | 0.12 | 953.18 | 0.39 |
SVC | 0.73 | 0.20 | 3954.12 | 0.72 |
XGBoost | 0.70 | 0.14 | 0.05 | 0.04 |
LR | 0.72 | 0.12 | 368.42 | 0.17 |
K-NNs | 0.69 | 0.11 | 844.85 | 0.20 |
Model | Accuracy | LIME (s) | SHAP (s) | C-SHAP (s) |
---|---|---|---|---|
Random Forest | 0.98 | 0.12 | 525.33 | 0.43 |
SVC | 0.97 | 0.10 | 1044.86 | 0.15 |
XGBoost | 0.95 | 0.13 | 0.03 | 0.04 |
LR | 0.95 | 0.09 | 245.18 | 0.07 |
K-NNs | 0.97 | 0.10 | 894.58 | 0.15 |
Model | Accuracy | LIME (s) | SHAP (s) | C-SHAP (s) |
---|---|---|---|---|
Random Forest | 1.0 | 0.13 | 132.42 | 0.58 |
SVC | 0.76 | 0.16 | 300.64 | 0.69 |
XGBoost | 0.96 | 0.19 | 0.08 | 0.08 |
LR | 1.0 | 0.12 | 63.48 | 0.29 |
K-NNs | 0.74 | 0.13 | 103.10 | 0.34 |
Model | Accuracy | LIME (s) | SHAP (s) | C-SHAP (s) |
---|---|---|---|---|
Random Forest | 0.84 | 0.09 | 19.07 | 0.36 |
SVC | 0.80 | 0.09 | 41.53 | 0.11 |
XGBoost | 0.83 | 0.08 | 0.09 | 0.09 |
LR | 0.83 | 0.06 | 11.49 | 0.03 |
K-NNs | 0.78 | 0.07 | 16.61 | 0.03 |