Consensus and Divergence in Explainable AI (XAI): Evaluating Global Feature-Ranking Consistency with Empirical Evidence from Solar Energy Forecasting
Abstract
1. Introduction
2. Related Works
2.1. Explainable AI
2.2. LightGBM
2.3. XAI in Solar Energy
3. Materials and Methods
3.1. Data Collection
3.2. Data Pre-Processing
Missing Hour Imputation
3.3. Feature Engineering
3.3.1. Temporal and Seasonal Features
3.3.2. Cyclical Features
3.3.3. Rolling Average
3.3.4. Lag Effects
3.4. LightGBM Predictive Model Implementation
import lightgbm as lgb

# LightGBM regressor configuration used in this study (Section 3.4)
lgbm_model = lgb.LGBMRegressor(
    random_state=42, n_estimators=2000,
    learning_rate=0.03, num_leaves=63,
    max_depth=10, min_data_in_leaf=30,
    feature_fraction=0.8, bagging_fraction=0.8,
    bagging_freq=5)
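As a hedged usage sketch (not the paper's exact pipeline), the RMSE and MAE metrics reported in Section 3.5 and the appendix table can be computed as follows; the names `lgbm_model`, `X_train`, `X_test`, `y_train`, and `y_test` in the trailing comment are assumed placeholders for the engineered feature splits.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(err)))

# Illustrative use after fitting, e.g.:
#     lgbm_model.fit(X_train, y_train)
#     preds = lgbm_model.predict(X_test)
#     print(rmse(y_test, preds), mae(y_test, preds))
```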
3.5. Performance Metrics
3.6. XAI Integration
4. Results
4.1. Trend Analysis
4.2. Model Performance
4.3. XAI Results
4.4. Statistical Tests Results
4.5. Cross-Model Validation Results
5. Discussion
6. Conclusions
- Relying on a single XAI technique can be risky. Future workflows should adopt a triangulated approach, using at least three distinct XAI categories (e.g., SHAP, ALE, and Gain) to ensure explanation stability.
- We suggest that Kendall’s W and Spearman’s rank correlations should be adopted as diagnostic metrics when comparing feature-ranking XAI methods. A low consensus score between XAI methods is a red flag: it indicates disagreement about the relative importance of features and suggests that the feature-ranking explanations may be unstable or method-specific rather than reflecting a robust, model-agnostic pattern.
- Through cross-validation with LightGBM and CatBoost, this research confirms that certain feature hierarchies (such as the dominance of the lag-1 variable) are architecture-independent. This suggests that XAI can uncover universal domain truths, provided that the practitioner accounts for method-specific biases.
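The consensus diagnostics recommended above can be sketched as follows; the three feature rankings (SHAP, Gain, ALE) are hypothetical illustrations, not results from this study. Kendall's W measures agreement across all methods at once, while pairwise Spearman correlations locate which pair diverges.

```python
import numpy as np
from itertools import combinations

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for an (m methods x n
    features) array of untied ranks; W = 1 means perfect agreement."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

def spearman_rho(r1, r2):
    """Spearman rank correlation for two untied rank vectors."""
    d = np.asarray(r1, dtype=float) - np.asarray(r2, dtype=float)
    n = len(d)
    return 1.0 - 6.0 * (d ** 2).sum() / (n * (n ** 2 - 1))

# Hypothetical ranks of five features (1 = most important) per method.
rankings = {
    "SHAP": [1, 2, 3, 4, 5],
    "Gain": [1, 3, 2, 4, 5],
    "ALE":  [2, 1, 3, 5, 4],
}
w = kendalls_w(list(rankings.values()))
print(f"Kendall's W = {w:.3f}")  # 0.844: strong, but not perfect, consensus

for (a, ra), (b, rb) in combinations(rankings.items(), 2):
    print(f"Spearman rho ({a} vs {b}) = {spearman_rho(ra, rb):.2f}")
```

With such diagnostics, a practitioner can set an explicit threshold (e.g., flag any pair with rho below some cutoff) before trusting a feature hierarchy.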
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hokmabad, H.N.; Husev, O.; Belikov, J. Day-ahead Solar Power Forecasting Using LightGBM and Self-Attention Based Encoder-Decoder Networks. IEEE Trans. Sustain. Energy 2024, 16, 866–879. [Google Scholar] [CrossRef]
- Biswal, B.; Deb, S.; Datta, S.; Ustun, T.S.; Cali, U. Review on smart grid load forecasting for smart energy management using machine learning and deep learning techniques. Energy Rep. 2024, 12, 3654–3670. [Google Scholar] [CrossRef]
- Petrosian, O.; Zhang, Y. Solar Power Generation Forecasting in Smart Cities and Explanation Based on Explainable AI. Smart Cities 2024, 7, 3388–3411. [Google Scholar] [CrossRef]
- Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl. Based Syst. 2023, 263, 110273. [Google Scholar] [CrossRef]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. Lightgbm: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2nd ed.; Leanpub: Victoria, BC, Canada, 2022. [Google Scholar]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
- Apley, D.W.; Zhu, J. Visualizing the effects of predictor variables in black box supervised learning models. J. R. Stat. Soc. Ser. B Stat. Methodol. 2020, 82, 1059–1086. [Google Scholar] [CrossRef]
- Munir, S.; Pradhan, M.R.; Abbas, S.; Khan, M.A. Energy consumption prediction based on lightgbm empowered with explainable artificial intelligence. IEEE Access 2024, 12, 91263–91271. [Google Scholar] [CrossRef]
- Agrawal, A.; Sazos, M.; Al Durra, A.; Maniatakos, M. Towards robust power grid attack protection using lightgbm with concept drift detection and retraining. In Proceedings of the 2020 Joint Workshop on CPS&IoT Security and Privacy; Association for Computing Machinery: New York, NY, USA, 2020; pp. 31–36. [Google Scholar] [CrossRef]
- Zhao, R.; Zhang, L.; Li, Z. Identification of Financial Fraud in Listed Companies Based on Bayesian-LightGBM Model. In Proceedings of the 2024 2nd International Conference on Artificial Intelligence, Systems and Network Security, AISNS 2024; Association for Computing Machinery, Inc.: New York, NY, USA, 2025; pp. 339–344. [Google Scholar] [CrossRef]
- Chaibi, M.; Benghoulam, E.M.; Tarik, L.; Berrada, M.; Hmaidi, A.E. An interpretable machine learning model for daily global solar radiation prediction. Energies 2021, 14, 7367. [Google Scholar] [CrossRef]
- Kuzlu, M.; Cali, U.; Sharma, V.; Güler, Ö. Gaining insight into solar photovoltaic power generation forecasting utilizing explainable artificial intelligence tools. IEEE Access 2020, 8, 187814–187823. [Google Scholar] [CrossRef]
- Nallakaruppan, M.K.; Shankar, N.; Bhuvanagiri, P.B.; Padmanaban, S.; Khan, S.B. Advancing solar energy integration: Unveiling XAI insights for enhanced power system management and sustainable future. Ain Shams Eng. J. 2024, 15, 102740. [Google Scholar] [CrossRef]
- Park, J.; Moon, J.; Jung, S.; Hwang, E. Multistep-ahead solar radiation forecasting scheme based on the light gradient boosting machine: A case study of Jeju Island. Remote Sens. 2020, 12, 2271. [Google Scholar] [CrossRef]
- Tang, F.; Ishwaran, H. Random forest missing data algorithms. Stat. Anal. Data Min. ASA Data Sci. J. 2017, 10, 363–377. [Google Scholar] [CrossRef]
- Chakraborty, D.; Elzarka, H. Advanced machine learning techniques for building performance simulation: A comparative analysis. J. Build. Perform. Simul. 2019, 12, 193–207. [Google Scholar] [CrossRef]
- Kwiatkowski, D.; Phillips, P.C.; Schmidt, P.; Shin, Y. Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? J. Econom. 1992, 54, 159–178. [Google Scholar] [CrossRef]
- Dickey, D.A.; Fuller, W.A. Distribution of the estimators for autoregressive time series with a unit root. J. Am. Stat. Assoc. 1979, 74, 427–431. [Google Scholar] [PubMed]
- Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
- Nemenyi, P.B. Distribution-Free Multiple Comparisons; Princeton University: Princeton, NJ, USA, 1963. [Google Scholar]
- Hooker, G.; Mentch, L.; Zhou, S. Unrestricted permutation forces extrapolation: Variable importance requires at least one more model, or there is no free variable importance. Stat. Comput. 2021, 31, 82. [Google Scholar] [CrossRef]
- Strobl, C.; Boulesteix, A.L.; Zeileis, A.; Hothorn, T. Bias in random forest variable importance measures: Illustrations, sources and a solution. BMC Bioinform. 2007, 8, 25. [Google Scholar] [CrossRef] [PubMed]
- Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar]
- Varghese, A.S.; Somasundaram, G.; Nambiar, A. On Developing Explainable AI Evaluation Metrics for Image Classification using Borda Count and Multiple Correlation Techniques. ACM Trans. Intell. Syst. Technol. 2025. [Google Scholar]
| Split | Metric | Southlands Centre | Whitehorn Centre | North Corporate Warehouse | Glenmore Water Treatment Plant |
|---|---|---|---|---|---|
| 1 | RMSE | 15.5179 | 39.7431 | 15.6982 | 9.6990 |
| 1 | MAE | 9.7015 | 23.8118 | 10.2982 | 6.1004 |
| 2 | RMSE | 5.9587 | 64.8073 | 13.1643 | 3.4459 |
| 2 | MAE | 3.6308 | 45.3081 | 8.1971 | 2.0299 |
| 3 | RMSE | 9.6111 | 50.7185 | 14.2117 | 4.6768 |
| 3 | MAE | 5.5653 | 32.6921 | 7.1421 | 2.7397 |
| 4 | RMSE | 11.9866 | 65.4772 | 9.1897 | 5.5152 |
| 4 | MAE | 7.3637 | 42.7564 | 5.8265 | 3.3064 |
| 5 | RMSE | 6.7854 | 46.6546 | 11.0693 | 5.1042 |
| 5 | MAE | 4.1379 | 27.8820 | 6.5250 | 3.4573 |
| 6 | RMSE | 6.0813 | 53.9753 | 7.1280 | 6.3114 |
| 6 | MAE | 4.0014 | 32.9228 | 4.1773 | 4.1725 |
| 7 | RMSE | 10.2782 | 39.5914 | 9.7129 | 5.0726 |
| 7 | MAE | 6.3272 | 23.4742 | 6.8736 | 2.8919 |
| 8 | RMSE | 5.7738 | 42.5065 | 18.3104 | 5.0714 |
| 8 | MAE | 3.4477 | 23.7408 | 8.6614 | 3.3199 |
| 9 | RMSE | 13.9398 | 55.8495 | 19.9058 | 7.2009 |
| 9 | MAE | 8.8078 | 34.4194 | 13.6641 | 4.4056 |
| 10 | RMSE | 10.6346 | 49.4617 | 16.9181 | 5.5551 |
| 10 | MAE | 6.7328 | 32.0890 | 11.6170 | 3.4500 |
| Average | RMSE | 9.6567 | 50.8785 | 13.5308 | 5.7652 |
| Average | MAE | 5.9716 | 31.9097 | 8.2982 | 3.5874 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Thinn, K.T.; Saeed, W. Consensus and Divergence in Explainable AI (XAI): Evaluating Global Feature-Ranking Consistency with Empirical Evidence from Solar Energy Forecasting. Mathematics 2026, 14, 297. https://doi.org/10.3390/math14020297