From Prediction to Explanation: Using Explainable AI to Understand Satellite-Based Riot Forecasting Models
Abstract
1. Introduction
1.1. Explainability in Machine Learning
1.2. Explainability in Satellite Imagery Machine Learning Applications
1.3. LULC Used to Explain Human Behavior and Social Modeling in GIS
1.4. Understanding Conflict Through Satellite Imagery
2. Materials and Methods
2.1. Data and Labeling
2.1.1. Data
2.1.2. Labeling
2.2. Methods
- A convolutional neural network (ResNet18) was fit to the 32,548 images labeled as either “riot” or “no riot” locations, following the procedures outlined in [5]. We recorded the model’s global accuracy and country-level accuracy.
- Class activation mapping (Score-CAM) was applied to every image, generating a heat map that localized the regions of each image important to its classification.
- A threshold value was set to identify the areas Score-CAM flagged as critical to the classification; these areas were removed from each image (their pixel values were set to 0), and each image was then classified a second time. We compared the revised accuracy (after critical areas were removed) with the original accuracy to gauge how effectively Score-CAM localized the features that mattered within each image (a minimal code sketch of this pipeline follows the list).
- Using the same localized regions identified by Score-CAM, we tested whether there were meaningful land cover distinctions between those regions and the remainder of each urban tile, and whether land cover could serve as a useful proximate feature for identifying riot locations.
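The pipeline above can be summarized in code. The following is a minimal sketch, assuming a PyTorch ResNet18 binary classifier: it implements Score-CAM manually against the final convolutional block, then applies the threshold-based masking and re-classification step. The input size (224 × 224), the choice of `model.layer4` as the target layer, the class index for “riot”, and the 0.5 threshold are illustrative assumptions rather than the configuration used in the study.

```python
# Hedged sketch of the Score-CAM + masking pipeline described above.
# Assumptions (not from the paper): 224 x 224 inputs, model.layer4 as the
# target layer, class 1 = "riot", and a 0.5 masking threshold.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2).eval()   # binary classifier: riot vs. no riot
target_layer = model.layer4              # final convolutional block of ResNet18


def score_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a Score-CAM heat map in [0, 1] with the same H x W as `image` (3, H, W)."""
    captured = {}
    hook = target_layer.register_forward_hook(
        lambda module, inputs, output: captured.update(maps=output.detach())
    )
    with torch.no_grad():
        model(image.unsqueeze(0))
    hook.remove()

    # Upsample each activation map to the input resolution and rescale to [0, 1].
    maps = F.interpolate(captured["maps"], size=image.shape[1:],
                         mode="bilinear", align_corners=False)[0]      # (C, H, W)
    flat = maps.flatten(1)
    lo, hi = flat.min(dim=1).values, flat.max(dim=1).values
    masks = (maps - lo[:, None, None]) / (hi - lo + 1e-8)[:, None, None]

    # Weight each map by the target-class softmax score of the masked input.
    with torch.no_grad():
        scores = torch.stack([
            F.softmax(model((image * m).unsqueeze(0)), dim=1)[0, class_idx]
            for m in masks
        ])
    weights = F.softmax(scores, dim=0)
    cam = torch.relu((weights[:, None, None] * masks).sum(dim=0))
    return cam / (cam.max() + 1e-8)


def mask_and_reclassify(image: torch.Tensor, cam: torch.Tensor, threshold: float = 0.5):
    """Zero the regions Score-CAM flags as critical, then classify the image again."""
    keep = (cam < threshold).float()             # 1 where the region is NOT critical
    masked_image = image * keep                  # critical pixels set to 0
    with torch.no_grad():
        logits = model(masked_image.unsqueeze(0))
    return masked_image, logits.softmax(dim=1)   # compare against the unmasked prediction


# Example with a random stand-in for one 224 x 224 RGB urban tile.
tile = torch.rand(3, 224, 224)
heat_map = score_cam(tile, class_idx=1)          # class 1 = "riot" (assumed ordering)
_, masked_probs = mask_and_reclassify(tile, heat_map)
```

In practice, the masked and unmasked predictions would be aggregated over all 32,548 tiles to produce the accuracy comparison reported in the Results.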
2.2.1. Neural Network Training
2.2.2. Score-CAM-Informed Masking
2.2.3. Evaluating Score-CAM’s Effectiveness
2.2.4. Land Use Analysis
3. Results
3.1. Threshold Analysis
3.2. Softmax Analysis
3.3. ULU Data as an Explanatory Variable
3.4. Country-Level Analysis
4. Discussion
4.1. Feature Detection in the Context of Socioeconomic Outcomes
4.2. Limitations of Score-CAM and CAM Approaches
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
ACLED | Armed Conflict Location and Event Data Project
CAM | Class Activation Mapping
CNN | Convolutional Neural Network
DEGURB | Degree of Urbanisation
GIS | Geographic Information System
LIME | Local Interpretable Model-Agnostic Explanations
LULC | Land Use and Land Cover
RGB | Red, Green, and Blue
ROC | Receiver Operating Characteristic
Score-CAM | Score-Weighted Class Activation Mapping
SHAP | Shapley Additive Explanations
ULU | Urban Land Use
XAI | Explainable Artificial Intelligence
Appendix A
Appendix A.1
References
- Rodgers, D.; Gazdar, H.; Goodfellow, T. Cities and Conflict; London School of Economics and Political Science (LSE): London, UK, 2010. [Google Scholar]
- Askarizad, R.; Safari, H. The influence of social interactions on the behavioral patterns of the people in urban spaces (case study: The pedestrian zone of Rasht Municipality Square, Iran). Cities 2020, 101, 102687. [Google Scholar] [CrossRef]
- Snow, D.A.; Vliegenthart, R.; Corrigall-Brown, C. Framing the French riots: A comparative study of frame variation. Soc. Forces 2007, 86, 385–415. [Google Scholar] [CrossRef]
- Davies, T.P.; Fry, H.M.; Wilson, A.G.; Bishop, S.R. A mathematical model of the London riots and their policing. Sci. Rep. 2013, 3, 1303. [Google Scholar] [CrossRef]
- Warnke, S.; Runfola, D. Predicting Protests and Riots in Urban Environments With Satellite Imagery and Deep Learning. Trans. GIS 2024, 28, 2309–2327. [Google Scholar] [CrossRef]
- Jean, N.; Burke, M.; Xie, M.; Davis, W.M.; Lobell, D.B.; Ermon, S. Combining satellite imagery and machine learning to predict poverty. Science 2016, 353, 790–794. [Google Scholar] [CrossRef]
- Runfola, D.; Stefanidis, A.; Lv, Z.; O’Brien, J.; Baier, H. A multi-glimpse deep learning architecture to estimate socioeconomic census metrics in the context of extreme scope variance. Int. J. Geogr. Inf. Sci. 2024, 38, 726–750. [Google Scholar] [CrossRef]
- Runfola, D.; Stefanidis, A.; Baier, H. Using satellite data and deep learning to estimate educational outcomes in data-sparse environments. Remote Sens. Lett. 2022, 13, 87–97. [Google Scholar] [CrossRef]
- Runfola, D.; Baier, H.; Mills, L.; Naughton-Rockwell, M.; Stefanidis, A. Deep learning fusion of satellite and social information to estimate human migratory flows. Trans. GIS 2022, 26, 2495–2518. [Google Scholar] [CrossRef]
- Goodman, S.; BenYishay, A.; Runfola, D. A convolutional neural network approach to predict non-permissive environments from moderate-resolution imagery. Trans. GIS 2021, 25, 674–691. [Google Scholar] [CrossRef]
- Aung, T.S.; Overland, I.; Vakulchuk, R.; Xie, Y. Using satellite data and machine learning to study conflict-induced environmental and socioeconomic destruction in data-poor conflict areas: The case of the Rakhine conflict. Environ. Res. Commun. 2021, 3, 025005. [Google Scholar] [CrossRef]
- Goodman, S.; BenYishay, A.; Runfola, D. Spatiotemporal Prediction of Conflict Fatality Risk Using Convolutional Neural Networks and Satellite Imagery. Remote Sens. 2024, 16, 3411. [Google Scholar] [CrossRef]
- Planet Team. Planet Application Program Interface: In Space for Life on Earth; Digital Globe: San Francisco, CA, USA, 2023. [Google Scholar]
- Obadic, I.; Levering, A.; Pennig, L.; Oliveira, D.; Marcos, D.; Zhu, X. Contrastive Pretraining for Visual Concept Explanations of Socioeconomic Outcomes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 575–584. [Google Scholar]
- Machicao, J.; Specht, A.; Vellenich, D.; Meneguzzi, L.; David, R.; Stall, S.; Ferraz, K.; Mabile, L.; O’brien, M.; Corrêa, P. A deep-learning method for the prediction of socio-economic indicators from street-view imagery using a case study from Brazil. Data Sci. J. 2022, 21, 1929464. [Google Scholar] [CrossRef]
- Bansal, C.; Jain, A.; Barwaria, P.; Choudhary, A.; Singh, A.; Gupta, A.; Seth, A. Temporal prediction of socio-economic indicators using satellite imagery. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD. Association for Computing Machinery, Hyderabad, India, 5–7 January 2020; pp. 73–81. [Google Scholar]
- Hall, O.; Ohlsson, M.; Rögnvaldsson, T. A review of explainable AI in the satellite data, deep machine learning, and human poverty domain. Patterns 2022, 3, 100600. [Google Scholar] [CrossRef]
- Dabkowski, P.; Gal, Y. Real time image saliency for black box classifiers. arXiv 2017, arXiv:1705.07857. [Google Scholar]
- Fong, R.C.; Vedaldi, A. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3429–3437. [Google Scholar]
- Petsiuk, V.; Das, A.; Saenko, K. Rise: Randomized input sampling for explanation of black-box models. arXiv 2018, arXiv:1806.07421. [Google Scholar]
- Chattopadhay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; IEEE: New York, NY, USA, 2018; pp. 839–847. [Google Scholar]
- Naidu, R.; Ghosh, A.; Maurya, Y.; Nayak K, S.R.; Kundu, S.S. IS-CAM: Integrated Score-CAM for axiomatic-based explanations. arXiv 2020, arXiv:2010.03023. [Google Scholar]
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014: Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part I 13; Springer: Berlin/Heidelberg, Germany, 2014; pp. 818–833. [Google Scholar]
- Mahendran, A.; Vedaldi, A. Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5188–5196. [Google Scholar]
- Dosovitskiy, A.; Brox, T. Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4829–4837. [Google Scholar]
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
- Vaswani, A. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Dosovitskiy, A. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef]
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL Tech. 2017, 31, 841. [Google Scholar] [CrossRef]
- Höhl, A.; Obadic, I.; Torres, M.Á.F.; Najjar, H.; Oliveira, D.; Akata, Z.; Dengel, A.; Zhu, X.X. Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing. arXiv 2024, arXiv:2402.13791. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Yamauchi, T.; Ishikawa, M. Spatial sensitive grad-cam: Visual explanations for object detection by incorporating spatial sensitivity. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; IEEE: New York, NY, USA, 2022; pp. 256–260. [Google Scholar]
- Sattarzadeh, S.; Sudhakar, M.; Plataniotis, K.N.; Jang, J.; Jeong, Y.; Kim, H. Integrated grad-cam: Sensitivity-aware visual explanation of deep convolutional networks via integrated gradient-based scoring. In Proceedings of the ICASSP 2021–2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; IEEE: New York, NY, USA, 2021; pp. 1775–1779. [Google Scholar]
- Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; Hu, X. Score-CAM: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 24–25. [Google Scholar]
- Shi, T.; Li, Y.; Liang, H.; Yu, R. Score-CAMpp: Class activation map based on logarithmic transformation. In Proceedings of the 2022 16th IEEE International Conference on Signal Processing (ICSP), Beijing, China, 21–24 October 2022; IEEE: New York, NY, USA, 2022; Volume 1, pp. 256–259. [Google Scholar]
- Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting black-box models: A review on explainable artificial intelligence. Cogn. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
- Vasu, B.; Rahman, F.U.; Savakis, A. Aerial-cam: Salient structures and textures in network class activation maps of aerial imagery. In Proceedings of the 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Aristi Village, Zagori, Greece, 10–12 June 2018; IEEE: New York, NY, USA, 2018; pp. 1–5. [Google Scholar]
- Fu, K.; Dai, W.; Zhang, Y.; Wang, Z.; Yan, M.; Sun, X. Multicam: Multiple class activation mapping for aircraft recognition in remote sensing images. Remote Sens. 2019, 11, 544. [Google Scholar] [CrossRef]
- Simonyan, K. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv 2013, arXiv:1312.6034. [Google Scholar]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
- Lundberg, S. A unified approach to interpreting model predictions. arXiv 2017, arXiv:1705.07874. [Google Scholar]
- Yang, F.; Xu, Q.; Li, B. Ship detection from optical satellite images based on saliency segmentation and structure-LBP feature. IEEE Geosci. Remote Sens. Lett. 2017, 14, 602–606. [Google Scholar] [CrossRef]
- Temenos, A.; Temenos, N.; Kaselimi, M.; Doulamis, A.; Doulamis, N. Interpretable deep learning framework for land use and land cover classification in remote sensing using SHAP. IEEE Geosci. Remote Sens. Lett. 2023, 20, 8500105. [Google Scholar] [CrossRef]
- Khan, M.; Hanan, A.; Kenzhebay, M.; Gazzea, M.; Arghandeh, R. Transformer-based land use and land cover classification with explainability using satellite imagery. Sci. Rep. 2024, 14, 16744. [Google Scholar] [CrossRef]
- Kokhlikyan, N.; Miglani, V.; Martin, M.; Wang, E.; Alsallakh, B.; Reynolds, J.; Melnikov, A.; Kliushkina, N.; Araya, C.; Yan, S.; et al. Captum: A unified and generic model interpretability library for pytorch. arXiv 2020, arXiv:2009.07896. [Google Scholar]
- Tahir, A.; Munawar, H.S.; Akram, J.; Adil, M.; Ali, S.; Kouzani, A.Z.; Mahmud, M.P. Automatic target detection from satellite imagery using machine learning. Sensors 2022, 22, 1147. [Google Scholar] [CrossRef] [PubMed]
- Carleer, A.; Debeir, O.; Wolff, E. Assessment of very high spatial resolution satellite image segmentations. Photogramm. Eng. Remote Sens. 2005, 71, 1285–1294. [Google Scholar] [CrossRef]
- Brewer, E.; Lin, J.; Runfola, D. Susceptibility & defense of satellite image-trained convolutional networks to backdoor attacks. Inf. Sci. 2022, 603, 244–261. [Google Scholar]
- Burka, B.M.; Roro, A.G.; Regasa, D.T. Dynamics of pastoral conflicts in eastern Rift Valley of Ethiopia: Contested boundaries, state projects and small arms. Pastoralism 2023, 13, 5. [Google Scholar] [CrossRef]
- Tan, S.; Hassen, N.A. Examining the choice of land conflict resolution mechanisms: The case between the harshin and yocaale woredas of the Somali region of Ethiopia. J. Environ. Manag. 2023, 342, 118250. [Google Scholar] [CrossRef] [PubMed]
- Kugler, T.A.; Grace, K.; Wrathall, D.J.; de Sherbinin, A.; Van Riper, D.; Aubrecht, C.; Comer, D.; Adamo, S.B.; Cervone, G.; Engstrom, R.; et al. People and Pixels 20 years later: The current data landscape and research trends blending population and environmental data. Popul. Environ. 2019, 41, 209–234. [Google Scholar] [CrossRef]
- National Research Council. People and Pixels: Linking Remote Sensing and Social Science; National Academies Press: Washington, DC, USA, 1998. [Google Scholar]
- Seto, K.C.; Güneralp, B.; Hutyra, L.R. Global forecasts of urban expansion to 2030 and direct impacts on biodiversity and carbon pools. Proc. Natl. Acad. Sci. USA 2012, 109, 16083–16088. [Google Scholar] [CrossRef] [PubMed]
- Walsh, S.J.; Crews-Meyer, K.A.; Crawford, T.W.; Welsh, W.F. Population and environment interactions: Spatial considerations in landscape characterization and modeling. Scale Geogr. Inq. Nature Soc. Method 2004, 41–65. [Google Scholar] [CrossRef]
- Rindfuss, R.R.; Walsh, S.J.; Turner, B.L.; Fox, J.; Mishra, V. Developing a science of land change: Challenges and methodological issues. Proc. Natl. Acad. Sci. USA 2004, 101, 13976–13981. [Google Scholar] [CrossRef] [PubMed]
- Runfola, D.S.M.; Pontius Jr, R.G. Measuring the temporal instability of land change using the Flow matrix. Int. J. Geogr. Inf. Sci. 2013, 27, 1696–1716. [Google Scholar] [CrossRef] [PubMed]
- Fortier, J.; Rogan, J.; Woodcock, C.E.; Runfola, D.M. Utilizing temporally invariant calibration sites to classify multiple dates and types of satellite imagery. Photogramm. Eng. Remote Sens. 2011, 77, 181–189. [Google Scholar] [CrossRef]
- Alo, C.A.; Pontius Jr, R.G. Identifying systematic land-cover transitions using remote sensing and GIS: The fate of forests inside and outside protected areas of Southwestern Ghana. Environ. Plan. B Plan. Des. 2008, 35, 280–295. [Google Scholar] [CrossRef]
- Stow, D.; Hamada, Y.; Coulter, L.; Anguelova, Z. Monitoring shrubland habitat changes through object-based change identification with airborne multispectral imagery. Remote Sens. Environ. 2008, 112, 1051–1061. [Google Scholar] [CrossRef]
- Rogan, J.; Chen, D. Remote sensing technology for mapping and monitoring land-cover and land-use change. Prog. Plan. 2004, 61, 301–325. [Google Scholar] [CrossRef]
- Li, X.; Chen, D.; Duan, Y.; Ji, H.; Zhang, L.; Chai, Q.; Hu, X. Understanding Land use/Land cover dynamics and impacts of human activities in the Mekong Delta over the last 40 years. Glob. Ecol. Conserv. 2020, 22, e00991. [Google Scholar] [CrossRef]
- Murillo-Sandoval, P.J.; Kilbride, J.; Tellman, E.; Wrathall, D.; Van Den Hoek, J.; Kennedy, R.E. The post-conflict expansion of coca farming and illicit cattle ranching in Colombia. Sci. Rep. 2023, 13, 1965. [Google Scholar] [CrossRef] [PubMed]
- Zhang, J.; Niu, J.; Buyantuev, A.; Wu, J. A multilevel analysis of effects of land use policy on land-cover change and local land use decisions. J. Arid. Environ. 2014, 108, 19–28. [Google Scholar] [CrossRef]
- Addae, B.; Oppelt, N. Land-use/land-cover change analysis and urban growth modelling in the Greater Accra Metropolitan Area (GAMA), Ghana. Urban Sci. 2019, 3, 26. [Google Scholar] [CrossRef]
- Wu, Y.; Li, S.; Yu, S. Monitoring urban expansion and its effects on land use and land cover changes in Guangzhou city, China. Environ. Monit. Assess. 2016, 188, 54. [Google Scholar] [CrossRef] [PubMed]
- Mandal, J.; Ghosh, N.; Mukhopadhyay, A. Urban growth dynamics and changing land-use land-cover of megacity Kolkata and its environs. J. Indian Soc. Remote Sens. 2019, 47, 1707–1725. [Google Scholar] [CrossRef]
- Ishtiaque, A.; Shrestha, M.; Chhetri, N. Rapid urban growth in the Kathmandu Valley, Nepal: Monitoring land use land cover dynamics of a himalayan city with landsat imageries. Environments 2017, 4, 72. [Google Scholar] [CrossRef]
- Runfola, D.M.; Hughes, S. What makes green cities unique? Examining the economic and political characteristics of the grey-to-green continuum. Land 2014, 3, 131–147. [Google Scholar] [CrossRef] [PubMed]
- Runfola, D.M.; Polsky, C.; Nicolson, C.; Giner, N.M.; Pontius Jr, R.G.; Krahe, J.; Decatur, A. A growing concern? Examining the influence of lawn size on residential water use in suburban Boston, MA, USA. Landsc. Urban Plan. 2013, 119, 113–123. [Google Scholar] [CrossRef]
- Yin, J.; Dong, J.; Hamm, N.A.; Li, Z.; Wang, J.; Xing, H.; Fu, P. Integrating remote sensing and geospatial big data for urban land use mapping: A review. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102514. [Google Scholar] [CrossRef]
- Chen, B.; Xu, B.; Gong, P. Mapping essential urban land use categories (EULUC) using geospatial big data: Progress, challenges, and opportunities. Big Earth Data 2021, 5, 410–441. [Google Scholar] [CrossRef]
- Duncan, M.J.; Winkler, E.; Sugiyama, T.; Cerin, E.; Dutoit, L.; Leslie, E.; Owen, N. Relationships of land use mix with walking for transport: Do land uses and geographical scale matter? J. Urban Health 2010, 87, 782–795. [Google Scholar] [CrossRef] [PubMed]
- Ewing, R.; Cervero, R. Travel and the built environment: A meta-analysis. J. Am. Plan. Assoc. 2010, 76, 265–294. [Google Scholar] [CrossRef]
- Jacobs-Crisioni, C.; Rietveld, P.; Koomen, E.; Tranos, E. Evaluating the impact of land-use density and mix on spatiotemporal urban activity patterns: An exploratory study using mobile phone data. Environ. Plan. A 2014, 46, 2769–2785. [Google Scholar] [CrossRef]
- Frank, L.D.; Schmid, T.L.; Sallis, J.F.; Chapman, J.; Saelens, B.E. Linking objectively measured physical activity with objectively measured urban form: Findings from SMARTRAQ. Am. J. Prev. Med. 2005, 28, 117–125. [Google Scholar] [CrossRef]
- Jia, Y.; Ge, Y.; Ling, F.; Guo, X.; Wang, J.; Wang, L.; Chen, Y.; Li, X. Urban land use mapping by combining remote sensing imagery and mobile phone positioning data. Remote Sens. 2018, 10, 446. [Google Scholar] [CrossRef]
- Guan, D.; Li, H.; Inohae, T.; Su, W.; Nagaie, T.; Hokao, K. Modeling urban land use change by the integration of cellular automaton and Markov model. Ecol. Model. 2011, 222, 3761–3772. [Google Scholar] [CrossRef]
- Verburg, P.H.; Schot, P.P.; Dijst, M.J.; Veldkamp, A. Land use change modelling: Current practice and research priorities. GeoJournal 2004, 61, 309–324. [Google Scholar] [CrossRef]
- Batty, M. The New Science of Cities; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
- Alberti, M.; Marzluff, J.; Hunt, V.M. Urban driven phenotypic changes: Empirical observations and theoretical implications for eco-evolutionary feedback. Philos. Trans. R. Soc. B Biol. Sci. 2017, 372, 20160029. [Google Scholar] [CrossRef]
- Xu, J.Z.; Lu, W.; Li, Z.; Khaitan, P.; Zaytseva, V. Building damage detection in satellite imagery using convolutional neural networks. arXiv 2019, arXiv:1910.06444. [Google Scholar]
- Mueller, H.; Groeger, A.; Hersh, J.; Matranga, A.; Serrat, J. Monitoring war destruction from space using machine learning. Proc. Natl. Acad. Sci. USA 2021, 118, e2025400118. [Google Scholar] [CrossRef]
- Nabiee, S.; Harding, M.; Hersh, J.; Bagherzadeh, N. Hybrid U-Net: Semantic segmentation of high-resolution satellite images to detect war destruction. Mach. Learn. Appl. 2022, 9, 100381. [Google Scholar] [CrossRef]
- Eklund, L.; Degerald, M.; Brandt, M.; Prishchepov, A.V.; Pilesjö, P. How conflict affects land use: Agricultural activity in areas seized by the Islamic State. Environ. Res. Lett. 2017, 12, 054004. [Google Scholar] [CrossRef]
- Planet Team. PlanetScope: Constellation and Sensor Overview; Digital Globe: San Francisco, CA, USA, 2023. [Google Scholar]
- Raleigh, C.; Kishi, R.; Linke, A. Political instability patterns are obscured by conflict dataset scope conditions, sources, and coding choices. Humanit. Soc. Sci. Commun. 2023, 10, 74. [Google Scholar] [CrossRef]
- Schiavina, M.; Melchiorri, M.; Pesaresi, M. GHS-SMOD R2023A—GHS Settlement Layers, Application of the Degree of Urbanisation Methodology (Stage I) to GHS-POP R2023A and GHS-BUILT-S R2023A, Multitemporal (1975–2030); European Commission, Joint Research Centre (JRC): Brussels, Belgium, 2023. [Google Scholar] [CrossRef]
- European Commission and Statistical Office of the European Union. Applying the Degree of Urbanisation—A Methodological Manual to Define Cities, Towns and Rural Areas for International Comparisons—2021 Edition; Publications Office of the European Union: Brussels, Belgium, 2021; ISBN 978-92-76-20306-3. [Google Scholar] [CrossRef]
- Runfola, D.; Anderson, A.; Baier, H.; Crittenden, M.; Dowker, E.; Fuhrig, S.; Goodman, S.; Grimsley, G.; Layko, R.; Melville, G.; et al. geoBoundaries: A global database of political administrative boundaries. PLoS ONE 2020, 15, e0231866. [Google Scholar] [CrossRef] [PubMed]
- Guzder-Williams, B.; Mackres, E.; Angel, S.; Blei, A.M.; Lamson-Hall, P. Intra-urban land use maps for a global sample of cities from Sentinel-2 satellite imagery and computer vision. Comput. Environ. Urban Syst. 2023, 100, 101917. [Google Scholar] [CrossRef]
- ACLED Codebook. 2023. Available online: https://acleddata.com/acleddatanew/wp-content/uploads/dlm_uploads/2023/06/ACLED_Codebook_2023.pdf (accessed on 14 January 2025).
- Pearce, T.; Brintrup, A.; Zhu, J. Understanding softmax confidence and uncertainty. arXiv 2021, arXiv:2106.04972. [Google Scholar]
- Subramanya, A.; Srinivas, S.; Babu, R.V. Confidence estimation in deep neural networks via density modelling. arXiv 2017, arXiv:1707.07013. [Google Scholar]
- Moon, J.; Kim, J.; Shin, Y.; Hwang, S. Confidence-aware learning for deep neural networks. In International Conference on Machine Learning; PMLR: Birmingham, UK, 2020; pp. 7034–7044. [Google Scholar]
- Hendrycks, D.; Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv 2016, arXiv:1610.02136. [Google Scholar]
- Rozsa, A.; Günther, M.; Boult, T.E. Adversarial robustness: Softmax versus openmax. arXiv 2017, arXiv:1708.01697. [Google Scholar]
- Sen, J.; Sen, A.; Chatterjee, A. Adversarial Attacks on Image Classification Models: Analysis and Defense. arXiv 2023, arXiv:2312.16880. [Google Scholar]
Instrument | Image Area (km × km) | Availability | Red (nm) | Green (nm) | Blue (nm)
---|---|---|---|---|---
Dove Classic | 25 × 11.5 | July 2014–April 2022 | 590–670 | 500–590 | 455–515
Dove-R | 25 × 23 | March 2019–April 2022 | 650–682 | 547–585 | 464–517
SuperDove | 32.5 × 19.6 | March 2020–present | 650–680 | 547–583 | 465–515
Country | Images | Country | Images | Country | Images |
---|---|---|---|---|---|
South Korea | 7494 | Pakistan | 2622 | Iran | 2334 |
Lebanon | 1656 | Israel/Palestine | 1572 | China | 1550 |
South Africa | 1480 | Chile | 1302 | Japan | 1256 |
India | 1148 | Brazil | 1112 | Bangladesh | 1092 |
Ukraine | 924 | Thailand | 890 | Italy | 728 |
Russia | 668 | Indonesia | 678 | Venezuela | 648 |
Greece | 634 | Yemen | 604 | Taiwan | 562 |
U.K. | 566 | Peru | 522 | Iraq | 506 |
Metric | Original Image | Masked Image |
---|---|---|
Accuracy | 89.41% | 68.54% |
Precision | 89.38% | 79.91% |
Recall | 89.46% | 49.52% |
F1 | 89.42% | 61.15% |
True Positives | 2903 | 1607 |
False Positives | 345 | 404 |
True Negatives | 2900 | 2841 |
False Negatives | 342 | 1638 |
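As a consistency check, the summary statistics in the table above follow directly from the reported confusion-matrix counts; the short script below (counts copied from the table) reproduces them to rounding.

```python
# Recompute accuracy, precision, recall, and F1 from the reported counts.
def summarize(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

for label, counts in {"original": (2903, 345, 2900, 342),
                      "masked": (1607, 404, 2841, 1638)}.items():
    acc, prec, rec, f1 = summarize(*counts)
    print(f"{label}: acc={acc:.2%} prec={prec:.2%} rec={rec:.2%} f1={f1:.2%}")
# original: acc=89.41% prec=89.38% rec=89.46% f1=89.42%
# masked: acc=68.54% prec=79.91% rec=49.52% f1=61.15%
```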
Statistic | L1 Similarity | L2 Similarity | Cosine Similarity |
---|---|---|---|
Mean | 0.7004 | 0.4046 | 0.7946 |
Median | 0.6473 | 0.3682 | 0.8753 |
Standard Deviation | 0.4143 | 0.2447 | 0.2157 |
Null Hypothesis | |||
T Statistic | −502.99 | −272.37 | 101.66 |
p Value | 1 | 1 | 1 |
Result | Similar | Similar | Similar |
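For context, the sketch below shows one plausible way such similarity scores could be computed between the softmax outputs of a tile before and after masking. The rescaling of L1 and L2 distances into [0, 1] similarities is an assumption made here for illustration and may differ from the definitions used in the study; cosine similarity is the standard formula.

```python
# Hedged sketch: one way to score similarity between the softmax vectors of a
# tile before and after Score-CAM-informed masking. Rescaling L1/L2 distances
# into similarities is an illustrative assumption, not the paper's definition.
import numpy as np

def softmax_similarities(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    l1 = 1.0 - 0.5 * np.abs(p - q).sum()             # L1 distance of two probability vectors lies in [0, 2]
    l2 = 1.0 - np.linalg.norm(p - q) / np.sqrt(2.0)  # L2 distance lies in [0, sqrt(2)]
    cos = float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))
    return l1, l2, cos

# Hypothetical softmax outputs for one tile: unmasked vs. masked.
print(softmax_similarities([0.92, 0.08], [0.55, 0.45]))
```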
Country | Unmasked Accuracy | Masked Accuracy | Change in Accuracy |
---|---|---|---|
United Kingdom | 0.8393 | 0.5357 | −0.3036 |
Taiwan | 0.8214 | 0.5179 | −0.3036 |
Ukraine | 0.9185 | 0.6467 | −0.2717 |
Israel/Palestine | 0.8949 | 0.6338 | −0.2611 |
Indonesia | 0.8657 | 0.6194 | −0.2463 |
South Africa | 0.8243 | 0.5845 | −0.2399 |
Bangladesh | 0.8899 | 0.6514 | −0.2385 |
Venezuela | 0.8906 | 0.6563 | −0.2344 |
South Korea | 0.9212 | 0.6903 | −0.2310 |
India | 0.8860 | 0.6667 | −0.2193 |
Thailand | 0.8708 | 0.6517 | −0.2191 |
Brazil | 0.8514 | 0.6351 | −0.2162 |
Pakistan | 0.9256 | 0.7099 | −0.2156 |
Italy | 0.8819 | 0.6806 | −0.2014 |
Yemen | 0.7833 | 0.5833 | −0.2000 |
Chile | 0.9115 | 0.7231 | −0.1885 |
Lebanon | 0.9455 | 0.7576 | −0.1879 |
Peru | 0.7788 | 0.6058 | −0.1731 |
China | 0.9000 | 0.7290 | −0.1710 |
Iraq | 0.9100 | 0.7500 | −0.1600 |
Japan | 0.8560 | 0.7200 | −0.1360 |
Greece | 0.8730 | 0.7460 | −0.1270 |
Iran | 0.9442 | 0.8219 | −0.1223 |
Russia | 0.7803 | 0.6667 | −0.1136 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Warnke, S.; Runfola, D. From Prediction to Explanation: Using Explainable AI to Understand Satellite-Based Riot Forecasting Models. Remote Sens. 2025, 17, 313. https://doi.org/10.3390/rs17020313