Contextual Background Estimation for Explainable AI in Temperature Prediction
Abstract
1. Introduction
1.1. Significance of Machine Learning in Power Systems
1. Model-agnostic or model-specific—If the method can be applied to any type of ML model, then the method is model-agnostic. On the other hand, if the method is designed for one type of ML model, then it is model-specific. Examples of model-specific methods include Integrated Gradients [23] and Layer-Wise Relevance Propagation (LRP) [24], the latter of which is designed to explain the decisions made by neural networks. Model-agnostic methods include ELI5 [25], Kernel SHAP [26], and LIME [27].
2. Local or global—Global XAI methods can explain the entire scope of the decisions the model can make, even for non-existing cases. In other words, they explain the behavior of the entire model. In contrast, local XAI methods explain only a particular decision made by the model. Global methods include, for example, Global Surrogate Models [28], Permutation Feature Importance [29], and Partial Dependence Plots [30]. LIME and Kernel SHAP are examples of local methods.
3. Intrinsic (ante hoc) or post hoc—Post hoc methods are those that can be used after model training to explain its behavior. In contrast, a model is intrinsically explainable if it is explainable 'by design'. Simple models such as linear models [31] and decision trees [32] can be explained without any external method [33]. Grad-CAM [34], LIME, and Kernel SHAP are examples of post hoc methods (a short usage sketch of a post hoc, model-agnostic, local method is given after this list).
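To make the taxonomy concrete, the minimal sketch below shows a post hoc, model-agnostic, local explanation produced with Kernel SHAP [26]. The model, data, and feature count are illustrative placeholders and are not taken from this article.

```python
# Post hoc, model-agnostic, local explanation with Kernel SHAP.
# Toy model and data; everything here is illustrative, not the paper's setup.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                       # toy tabular data
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=500)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

background = shap.sample(X, 50)                     # background data defining the base value
explainer = shap.KernelExplainer(model.predict, background)

x_local = X[:1]                                     # a single instance -> a local explanation
phi = explainer.shap_values(x_local)                # per-feature contributions
print("base value:", explainer.expected_value)
print("feature attributions:", phi)
```

Because the explainer only needs `model.predict`, the same code would work for any regressor, which is exactly what makes the method model-agnostic and post hoc.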
1.2. Limits of XAI
1.2.1. Base Value
- If a real-estate agent wanted to understand why a model predicted a property price of 40,000 USD instead of the typical 70,000 USD for the district, a base value of 45,000 USD or 30,000 USD would be unhelpful.
- If someone wanted to know why a model predicted an electrical load of 15 GW for next Wednesday, while it normally predicts 10 GW for Wednesdays at this time of year, a base value of 50 GW or 0 GW would provide no useful insight.
- If a model predicted that tomorrow’s temperature would be 25 °C, whereas the average temperature for the week was 18 °C, a base value of 0 °C would not help explain this difference effectively (a short illustration of the role of the base value follows this list).
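The reason the base value matters follows from the additivity property of Shapley-value explanations: the attributions sum to the difference between the prediction and the base value, and the base value is approximately the mean model output over the chosen background data. The sketch below (toy data and model, assumed purely for illustration) makes this explicit; if the background yields a base value of 0 °C while the user's reference point is 18 °C, the attributions explain the wrong contrast.

```python
# Additivity of SHAP attributions: prediction = base value + sum(attributions).
# Toy regression setup, assumed purely for illustration.
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=300)
model = LinearRegression().fit(X, y)

background = X[:100]                         # whatever background is chosen here...
explainer = shap.KernelExplainer(model.predict, background)

x = X[:1]
phi = explainer.shap_values(x)

prediction = model.predict(x)[0]
base_value = explainer.expected_value        # ...fixes the base value (mean output over it)
print(prediction, base_value + np.sum(phi))  # the two numbers coincide (up to sampling noise)
```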
1.2.2. TimeSHAP and Background
1.3. The Aim of This Article
2. Materials and Methods
2.1. Description of Mean Background Method
- checking which time-series sequences can be used to predict the expected (base) value within some small margin m; in Figure 5, two example base values are shown;
- all time-series sequences selected in step I can be used to create a mean time-series background that resembles the frequency of the target;
- an XAI method is applied to explain the change between the base value and the prediction being explained. In Figure 5, there are two base values, which result in two different explanations for each feature. For example, moving the temperature prediction from one base value to 24 °C might be influenced by different features than moving it from the other base value to 24 °C (a minimal code sketch of the whole procedure follows this list).
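The following is a minimal sketch of the selection-and-averaging idea behind the Mean Background Method, written against an assumed sequence model with a Keras-like `predict` interface; the function and variable names are ours, not the article's, and the exact procedure is the one described above and in Algorithm 1 below.

```python
import numpy as np

def mean_background(model, sequences, base_value, m):
    """Average all sequences whose prediction lies within +/- m of base_value.

    sequences: array of shape (n_sequences, time_steps, n_features)
    Returns one background sequence of shape (time_steps, n_features).
    """
    preds = np.array([float(model.predict(seq[np.newaxis, ...]).ravel()[0])
                      for seq in sequences])
    mask = np.abs(preds - base_value) <= m          # step I: sequences predicting ~base_value
    if not mask.any():
        raise ValueError("No sequences predict within the requested margin.")
    return sequences[mask].mean(axis=0)             # step II: mean time-series background

# Step III (assumed usage): pass the mean background as the baseline of the chosen
# XAI method (e.g., TimeSHAP) to explain the shift from base_value to the prediction.
# background = mean_background(model, train_sequences, base_value=18.0, m=0.5)
```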
2.2. Description of Background Estimation Method
Algorithm 1: Simplified algorithm for the Background Estimation Method that uses TimeSHAP as an XAI method
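The algorithm itself is presented in the original article; as a rough, non-authoritative illustration of the general idea, repeating the same explanation with many candidate backgrounds and checking how stable each feature's attribution remains, one could proceed as in the sketch below. The stability score used here is an assumed placeholder and is not the utility index defined by the authors.

```python
import numpy as np

def attribution_stability(explain_fn, sequence, candidate_backgrounds):
    """Illustrative only: rerun an explanation over many candidate backgrounds
    and summarize how much each feature's attribution varies.

    explain_fn(sequence, background) -> 1-D array of per-feature attributions
    candidate_backgrounds: iterable of background sequences
    """
    runs = np.stack([explain_fn(sequence, bg) for bg in candidate_backgrounds])
    mean_attr = runs.mean(axis=0)       # average importance of each feature
    spread = runs.std(axis=0)           # how strongly it fluctuates across backgrounds
    # Placeholder stability score: high when the attribution barely changes,
    # low when it fluctuates strongly (the article defines its own utility index).
    stability = 1.0 / (1.0 + spread)
    return mean_attr, spread, stability
```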
2.3. Summary of Methods and Scenarios of Use
- The local power supplier predicts power demand for the next day at 8:00, which is equal to 180 MW. The prediction is unusually high and the supplier wants to make sure that it is valid. The user wants to know why the model changed the prediction to 180 MW while the expected level for this hour and time of year is 150 MW. This is a contextualized situation, so it is not possible to simply use an XAI method with a default base value to find answers. The user can apply the Mean Background Method, assuming a base value of 150 MW and a prediction of 180 MW. The mean background is created from the entire dataset and the XAI method is applied. It turns out that the extreme rise in temperature on the previous day is mainly responsible for the shift; it accounts for 80% of it. Humidity is the second most important variable, accounting for 10% of the shift from 150 MW to 180 MW. Based on the historical data and expert knowledge, the supplier can trust this prediction. However, the supplier has doubts about whether the XAI method provides viable results. The background used requires verification, so the Background Estimation Method is applied. The results show that the importance of temperature did not change radically depending on the background used, while the importance of humidity fluctuated strongly between 2% and 17%. The user still decides to trust the prediction, since the utility for temperature is much higher than the utility for humidity. Even though the importance of humidity fluctuates, it remains low, while temperature is much more important and its importance is stable. The power supplier can use a precise and reliable model to steer efficient power production, thus meeting the demand.
- A microgrid is supplied with energy by a local wind farm. It is crucial to predict how much power will be generated by the wind farm and how much should be ordered from a utility distribution network to meet demand. It is predicted that the wind farm will generate 10 MWh, as opposed to 17 MWh, which was generated a day before. The user wants to know whether this is a viable prediction. An XAI method cannot be used for that in the traditional way, since this is a contextualized situation and the user wants to know what caused the shift from 17 MWh to 10 MWh. A new base value is required. It can be obtained with the Mean Background Method, applied with a base value of 17 MWh and a prediction of 10 MWh over the entire dataset. The result indicates which features were important in creating the shift from 17 MWh to 10 MWh: wind speed (50%), air density (20%), temperature (10%), and humidity (5%). However, after applying the Background Estimation Method, all features are assigned low utility, below 25 a.u. After a thorough investigation of the results from the Background Estimation Method, it was noted that there are a few distinct groups of backgrounds. The user decides to use the Background Estimation Method again, this time using only one group of backgrounds, the one that includes the day before the original prediction. This time the utility is high (over 75 a.u.) for each feature. The user decides to trust the results, as the power production can change rapidly due to abrupt changes in the important features. Based on this decision, more energy is ordered from the utility distribution network.
- A new model predicting power production was prepared to work with a SCADA system that controls power usage and production in a microgrid. To make sure that the model works properly and bases its predictions on correct features, it undergoes tests. Calculation of utility indexes for numerous test cases can be one of those tests. Suppose the utility indexes are high for two features that should be used but low for the other two features that should also be used. Even if the accuracy of the model is high, two out of four features that are known to be important cannot be explained viably. The model should be rejected to protect fragile systems.
3. Results
3.1. Evaluation of Mean Background Method
3.2. Evaluation of Background Estimation Method
4. Discussion and Future Work
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Moreno Escobar, J.J.; Morales Matamoros, O.; Tejeida Padilla, R.; Lina Reyes, I.; Quintana Espinosa, H. A comprehensive review on smart grids: Challenges and opportunities. Sensors 2021, 21, 6978.
2. Strielkowski, W.; Civín, L.; Tarkhanova, E.; Tvaronavičienė, M.; Petrenko, Y. Renewable energy in the sustainable development of electrical power sector: A review. Energies 2021, 14, 8240.
3. Almihat, M.G.M.; Kahn, M.; Aboalez, K.; Almaktoof, A.M. Energy and sustainable development in smart cities: An overview. Smart Cities 2022, 5, 1389–1408.
4. Rahman, M.M.; Dadon, S.H.; He, M.; Giesselmann, M.; Hasan, M.M. An Overview of Power System Flexibility: High Renewable Energy Penetration Scenarios. Energies 2024, 17, 6393.
5. Yu, K.; Wei, Q.; Xu, C.; Xiang, X.; Yu, H. Distributed Low-Carbon Energy Management of Urban Campus for Renewable Energy Consumption. Energies 2024, 17, 6182.
6. El Rhatrif, A.; Bouihi, B.; Mestari, M. AI-based solutions for grid stability and efficiency: Challenges, limitations, and opportunities. Int. J. Internet Things Web Serv. 2024, 9, 16–28.
7. Khodayar, M.; Liu, G.; Wang, J.; Khodayar, M.E. Deep learning in power systems research: A review. CSEE J. Power Energy Syst. 2020, 7, 209–220.
8. Ozcanli, A.K.; Yaprakdal, F.; Baysal, M. Deep learning methods and applications for electrical power systems: A comprehensive review. Int. J. Energy Res. 2020, 44, 7136–7157.
9. Forootan, M.M.; Larki, I.; Zahedi, R.; Ahmadi, A. Machine learning and deep learning in energy systems: A review. Sustainability 2022, 14, 4832.
10. Janjua, J.I.; Ahmad, R.; Abbas, S.; Mohammed, A.S.; Khan, M.S.; Daud, A.; Abbas, T.; Khan, M.A. Enhancing smart grid electricity prediction with the fusion of intelligent modeling and XAI integration. Int. J. Adv. Appl. Sci. 2024, 11, 230–248.
11. Park, J.; Kang, D. Artificial Intelligence and Smart Technologies in Safety Management: A Comprehensive Analysis Across Multiple Industries. Appl. Sci. 2024, 14, 11934.
12. Elmousalami, H.; A Alnaser, A.; Kin Peng Hui, F. Advancing Smart Zero-Carbon Cities: High-Resolution Wind Energy Forecasting to 36 Hours Ahead. Appl. Sci. 2024, 14, 11918.
13. Titz, M.; Pütz, S.; Witthaut, D. Identifying drivers and mitigators for congestion and redispatch in the German electric power system with explainable AI. Appl. Energy 2024, 356, 122351.
14. Doroz, R.; Orczyk, T.; Wrobel, K.; Porwik, P. Adaptive classifier ensemble for multibiometric verification. Procedia Comput. Sci. 2024, 246, 4038–4047.
15. Hamrani, A.; Medarametla, A.; John, D.; Agarwal, A. Machine-Learning-Driven Optimization of Cold Spray Process Parameters: Robust Inverse Analysis for Higher Deposition Efficiency. Coatings 2024, 15, 12.
16. Ali, A.; Faheem, Z.B.; Waseem, M.; Draz, U.; Safdar, Z.; Hussain, S.; Yaseen, S. Systematic review: A state of art ML based clustering algorithms for data mining. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6.
17. Orczyk, T.; Porwik, P.; Doroz, R. A preliminary study on the dispersed classification system for recognizing safety of drivers’ maneuvers. Procedia Comput. Sci. 2023, 225, 2604–2613.
18. Masini, R.P.; Medeiros, M.C.; Mendes, E.F. Machine learning advances for time series forecasting. J. Econ. Surv. 2023, 37, 76–111.
19. Wrobel, K.; Doroz, R.; Porwik, P.; Orczyk, T.; Cavalcante, A.B.; Grajzer, M. Features of Hand-Drawn Spirals for Recognition of Parkinson’s Disease. In Proceedings of the Asian Conference on Intelligent Information and Database Systems, Ho Chi Minh City, Vietnam, 28–30 November 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 458–469.
20. Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.Y.; Belikov, J.; Mannor, S.; Levron, Y. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities. Energy AI 2022, 9, 100169.
21. Letzgus, S.; Müller, K.R. An explainable AI framework for robust and transparent data-driven wind turbine power curve models. Energy AI 2024, 15, 100328.
22. Panagoulias, D.P.; Sarmas, E.; Marinakis, V.; Virvou, M.; Tsihrintzis, G.A.; Doukas, H. Intelligent decision support for energy management: A methodology for tailored explainability of artificial intelligence analytics. Electronics 2023, 12, 4430.
23. Davydko, O.; Pavlov, V.; Longo, L. Selecting Textural Characteristics of Chest X-Rays for Pneumonia Lesions Classification with the Integrated Gradients XAI Attribution Method. In Proceedings of the World Conference on Explainable Artificial Intelligence, Lisbon, Portugal, 26–28 July 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 671–687.
24. Mahendran, A.; Vedaldi, A. Salient deconvolutional networks. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part VI; Springer: Berlin/Heidelberg, Germany, 2016; pp. 120–135.
25. Victoria, A.H.; Tiwari, R.S.; Ghulam, A.K. Libraries for Explainable Artificial Intelligence (EXAI): Python. In Explainable AI (XAI) for Sustainable Development; Chapman and Hall/CRC: Boca Raton, FL, USA, 2024; pp. 211–232.
26. Lundberg, S. A unified approach to interpreting model predictions. arXiv 2017, arXiv:1705.07874.
27. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
28. Monteiro, W.R.; Reynoso-Meza, G. On the generation of global surrogate models through unconstrained multi-objective optimization. arXiv 2022.
29. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
30. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232.
31. Chambers, J.M. Linear models. In Statistical Models in S; Routledge: London, UK, 2017; pp. 95–144.
32. De Ville, B. Decision trees. Wiley Interdiscip. Rev. Comput. Stat. 2013, 5, 448–455.
33. Puthanveettil Madathil, A.; Luo, X.; Liu, Q.; Walker, C.; Madarkar, R.; Cai, Y.; Liu, Z.; Chang, W.; Qin, Y. Intrinsic and post-hoc XAI approaches for fingerprint identification and response prediction in smart manufacturing processes. J. Intell. Manuf. 2024, 35, 4159–4180.
34. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
35. Strumbelj, E.; Kononenko, I. An efficient explanation of individual classifications using game theory. J. Mach. Learn. Res. 2010, 11, 1–18.
36. Yaprakdal, F.; Varol Arısoy, M. A multivariate time series analysis of electrical load forecasting based on a hybrid feature selection approach and explainable deep learning. Appl. Sci. 2023, 13, 12946.
37. Powroźnik, P.; Szcześniak, P. Predictive Analytics for Energy Efficiency: Leveraging Machine Learning to Optimize Household Energy Consumption. Energies 2024, 17, 5866.
38. Laitsos, V.; Vontzos, G.; Paraschoudis, P.; Tsampasis, E.; Bargiotas, D.; Tsoukalas, L.H. The State of the Art Electricity Load and Price Forecasting for the Modern Wholesale Electricity Market. Energies 2024, 17, 5797.
39. Gürses-Tran, G.; Körner, T.A.; Monti, A. Introducing explainability in sequence-to-sequence learning for short-term load forecasting. Electr. Power Syst. Res. 2022, 212, 108366.
40. Grzeszczyk, T.A.; Grzeszczyk, M.K. Justifying short-term load forecasts obtained with the use of neural models. Energies 2022, 15, 1852.
41. Sarker, M.A.A.; Shanmugam, B.; Azam, S.; Thennadil, S. Enhancing smart grid load forecasting: An attention-based deep learning model integrated with federated learning and XAI for security and interpretability. Intell. Syst. Appl. 2024, 23, 200422.
42. Abumohsen, M.; Owda, A.Y.; Owda, M. Electrical load forecasting using LSTM, GRU, and RNN algorithms. Energies 2023, 16, 2283.
43. Cordeiro-Costas, M.; Villanueva, D.; Eguía-Oller, P.; Martínez-Comesaña, M.; Ramos, S. Load forecasting with machine learning and deep learning methods. Appl. Sci. 2023, 13, 7933.
44. Hong, T.; Fan, S. Probabilistic electric load forecasting: A tutorial review. Int. J. Forecast. 2016, 32, 914–938.
45. Ungureanu, S.; Topa, V.; Cziker, A.C. Analysis for non-residential short-term load forecasting using machine learning and statistical methods with financial impact on the power market. Energies 2021, 14, 6966.
46. Cassarino, T.G.; Sharp, E.; Barrett, M. The impact of social and weather drivers on the historical electricity demand in Europe. Appl. Energy 2018, 229, 176–185.
47. Baur, L.; Ditschuneit, K.; Schambach, M.; Kaymakci, C.; Wollmann, T.; Sauer, A. Explainability and interpretability in electric load forecasting using machine learning techniques—A review. Energy AI 2024, 16, 100358.
48. Olsen, L.H.B.; Glad, I.K.; Jullum, M.; Aas, K. A comparative study of methods for estimating model-agnostic Shapley value explanations. Data Min. Knowl. Discov. 2024, 38, 1782–1829.
49. Alkhatib, A.; Boström, H.; Johansson, U. Estimating Quality of Approximated Shapley Values Using Conformal Prediction. Proc. Mach. Learn. Res. 2024, 230, 1–17.
50. Yuan, H.; Liu, M.; Kang, L.; Miao, C.; Wu, Y. An empirical study of the effect of background data size on the stability of SHapley Additive exPlanations (SHAP) for deep learning models. arXiv 2022, arXiv:2204.11351.
51. Letzgus, S.; Wagner, P.; Lederer, J.; Samek, W.; Müller, K.R.; Montavon, G. Toward explainable artificial intelligence for regression models: A methodological perspective. IEEE Signal Process. Mag. 2022, 39, 40–58.
52. Errousso, H.; Abdellaoui Alaoui, E.A.; Benhadou, S.; Medromi, H. Exploring how independent variables influence parking occupancy prediction: Toward a model results explanation with SHAP values. Prog. Artif. Intell. 2022, 11, 367–396.
53. Książek, W. Explainable Thyroid Cancer Diagnosis Through Two-Level Machine Learning Optimization with an Improved Naked Mole-Rat Algorithm. Cancers 2024, 16, 4128.
54. Bento, J.; Saleiro, P.; Cruz, A.F.; Figueiredo, M.A.; Bizarro, P. TimeSHAP: Explaining recurrent models through sequence perturbations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 2565–2573.
55. Max Planck Institute for Biogeochemistry. Jena, Germany. 2009–2016. Available online: https://www.bgc-jena.mpg.de/wetter/ (accessed on 1 March 2024).
56. Instytut Meteorologii i Gospodarki Wodnej Państwowy Instytut Badawczy. 2015–2023. Available online: https://danepubliczne.imgw.pl/ (accessed on 1 March 2024).
Metric Name | Jena | IMGW 351180455 | IMGW 352200375 | IMGW 353170235 | IMGW 349190650 | IMGW 351190469 | IMGW 352140310 | IMGW 352150300 | IMGW 351160424 | IMGW 353200272 | IMGW 354160115
---|---|---|---|---|---|---|---|---|---|---|---
Mean Absolute Error | 0.73 | 0.77 | 0.76 | 0.70 | 0.65 | 0.77 | 0.78 | 0.71 | 0.87 | 0.74 | 0.68
Mean Squared Error | 1.04 | 1.11 | 1.12 | 0.94 | 0.81 | 1.15 | 1.12 | 0.93 | 1.39 | 1.06 | 1.01
Root Mean Squared Error | 1.02 | 1.05 | 1.06 | 0.97 | 0.90 | 1.07 | 1.06 | 0.96 | 1.18 | 1.03 | 1.00
R² | 0.99 | 0.98 | 0.98 | 0.99 | 0.99 | 0.98 | 0.98 | 0.99 | 0.98 | 0.98 | 0.98
Feature Name | IMGW 351180455 Mean [%] | IMGW 351180455 Std [%] | IMGW 352200375 Mean [%] | IMGW 352200375 Std [%] | IMGW 353170235 Mean [%] | IMGW 353170235 Std [%] | IMGW 349190650 Mean [%] | IMGW 349190650 Std [%] | IMGW 351190469 Mean [%] | IMGW 351190469 Std [%]
---|---|---|---|---|---|---|---|---|---|---
Air temperature | 31.81 | 0.63 | 26.27 | 0.50 | 25.98 | 0.76 | 36.61 | 0.84 | 41.46 | 0.71
Wind direction | 0.61 | 0.13 | 0.26 | 0.27 | 1.14 | 0.10 | 0.26 | 0.31 | 3.15 | 0.56
Wind speed | 0.73 | 0.38 | 0.37 | 0.39 | 1.50 | 0.31 | 6.57 | 0.78 | 1.46 | 0.58
Wind gust | 0.14 | 0.32 | 0.15 | 0.16 | 0.14 | 0.07 | 0.58 | 0.40 | 0.12 | 0.14
Steam pressure | 45.29 | 0.71 | 32.20 | 1.39 | 36.52 | 0.90 | 57.06 | 3.77 | 25.17 | 0.74
Relative humidity | 28.66 | 0.53 | 17.36 | 1.12 | 22.75 | 0.23 | 27.67 | 0.57 | 8.62 | 1.29
Pressure at station level | 13.44 | 1.35 | 34.82 | 2.33 | 26.55 | 1.11 | 75.39 | 2.51 | 20.44 | 2.99
Precipitation over 6 h | 0.27 | 0.09 | 0.24 | 0.20 | 0.75 | 0.13 | 0.09 | 0.09 | 0.30 | 0.21
Sunshine duration | 0.47 | 0.50 | 0.39 | 0.50 | 2.08 | 0.80 | 1.95 | 1.06 | 4.03 | 1.27
Max wind gust over 12 h | 0.39 | 0.12 | 0.77 | 0.38 | 0.15 | 0.06 | 4.04 | 0.54 | 0.25 | 0.12
Min temperature over 12 h | 0.02 | 0.03 | 0.16 | 0.19 | 0.25 | 0.09 | 0.21 | 0.18 | 0.50 | 0.28
Max temperature over 12 h | 0.24 | 0.07 | 0.22 | 0.08 | 0.58 | 0.14 | 1.01 | 0.38 | 0.46 | 0.37
Feature Name | IMGW 352140310 Mean [%] | IMGW 352140310 Std [%] | IMGW 352150300 Mean [%] | IMGW 352150300 Std [%] | IMGW 351160424 Mean [%] | IMGW 351160424 Std [%] | IMGW 353200272 Mean [%] | IMGW 353200272 Std [%] | IMGW 354160115 Mean [%] | IMGW 354160115 Std [%]
---|---|---|---|---|---|---|---|---|---|---
Air temperature | 53.35 | 1.18 | 33.75 | 2.76 | 26.44 | 0.49 | 43.72 | 0.25 | 36.60 | 0.73
Wind direction | 0.92 | 0.39 | 2.02 | 1.08 | 1.00 | 0.30 | 0.77 | 0.08 | 0.38 | 0.10
Wind speed | 53.35 | 1.18 | 0.52 | 0.85 | 1.22 | 0.42 | 0.30 | 0.13 | 0.58 | 0.22
Wind gust | 0.06 | 0.12 | 0.09 | 0.10 | 0.14 | 0.07 | 1.13 | 0.11 | 0.09 | 0.03
Steam pressure | 52.47 | 0.97 | 40.93 | 5.14 | 41.83 | 0.46 | 23.45 | 0.27 | 34.10 | 0.56
Relative humidity | 21.35 | 1.10 | 19.05 | 1.76 | 16.46 | 0.43 | 14.37 | 0.20 | 13.63 | 0.41
Pressure at station level | 13.07 | 4.42 | 11.27 | 4.88 | 1.36 | 1.23 | 57.02 | 0.57 | 32.86 | 1.80
Precipitation over 6 h | 0.22 | 0.21 | 0.15 | 0.36 | 0.37 | 0.14 | 0.80 | 0.21 | 0.09 | 0.09
Sunshine duration | 1.43 | 1.18 | 1.15 | 1.64 | 1.18 | 1.36 | 0.00 | 0.00 | 1.17 | 1.06
Max wind gust over 12 h | 0.23 | 0.13 | 0.10 | 0.11 | 0.20 | 0.07 | 0.66 | 0.05 | 0.12 | 0.03
Min temperature over 12 h | 1.22 | 0.32 | 0.41 | 0.66 | 0.09 | 0.09 | 0.07 | 0.04 | 0.33 | 0.11
Max temperature over 12 h | 0.72 | 0.34 | 0.49 | 0.55 | 0.51 | 0.25 | 0.09 | 0.04 | 0.76 | 0.10
Feature Name | Mean [%] | Std [%] |
---|---|---
p | 0.40 | 0.52 |
T | 5.60 | 0.06 |
Tpot | 40.24 | 0.40 |
Tdew | 3.04 | 0.07 |
rh | 5.00 | 0.06 |
VPmax | 3.43 | 0.05 |
VPact | 1.67 | 0.06 |
VPdef | 0.00 | 0.00 |
sh | 4.94 | 0.11 |
H2OC | 7.83 | 0.07 |
rho | 10.44 | 0.19 |
wv | 0.00 | 0.00 |
max. wv | 1.23 | 0.33 |
wd | 0.69 | 0.04 |