Evaluating Human Expert Knowledge in Damage Assessment Using Eye Tracking: A Disaster Case Study
Abstract
1. Introduction
2. Related Work
2.1. Eye Tracking and Visual Attention
2.2. Immersive Techniques for Training
2.3. AI and Computer Vision in Infrastructure
2.4. Challenges in Developing Human-Centered AI/ML Decision Support
3. Methodology
- First, to understand the differences in visual attention between expert and novice building inspectors, eye tracking features were explored in depth and analyzed statistically.
- Second, to model visual attention, a task-specific ML model was trained on inspectors’ gaze data to predict saliency maps and, in turn, to distinguish the attention patterns of experts from those of novices (a minimal sketch of how gaze data can be aggregated into saliency maps follows this list).
- Third, a saliency metrics analysis was conducted to evaluate the ML model’s performance in predicting visual attention.
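As a rough illustration of the second step, the sketch below shows one common way gaze fixations can be turned into a continuous ground-truth saliency map: accumulate fixation points into a binary map and smooth it with a Gaussian kernel. The function name, image dimensions, and kernel width are illustrative assumptions, not the exact procedure used in this study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_saliency_map(fixations, height, width, sigma=25):
    """Convert (x, y) fixation coordinates into a normalized saliency map.

    The fixation map marks fixated pixels; Gaussian smoothing spreads each
    fixation over its neighborhood. The sigma value (in pixels) is an
    illustrative choice, roughly one degree of visual angle for a typical
    desktop eye-tracking setup.
    """
    fixation_map = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        col, row = int(round(x)), int(round(y))
        if 0 <= row < height and 0 <= col < width:
            fixation_map[row, col] += 1.0
    saliency = gaussian_filter(fixation_map, sigma=sigma)
    if saliency.max() > 0:
        saliency /= saliency.max()  # scale to [0, 1]
    return fixation_map, saliency

# Example usage with hypothetical fixation coordinates on a 640x480 image:
# binary_map, smooth_map = fixations_to_saliency_map([(320, 240), (400, 260)], 480, 640)
```

Maps built this way can serve both as training targets for a saliency-prediction model and as the ground truth against which predicted maps are scored.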
3.1. Experimental Design Setup
3.1.1. Participants
3.1.2. Apparatus
- You are required to look at the images collected from a disaster site.
- Each image will be shown for 20 s, and you cannot zoom in or out.
- Locate any type of damage or peculiarities that you see on the structure.
- You will not be evaluated on your skills; this study will have no impact on you.
- You may tilt your head slightly, but you should not otherwise move your head.
- Once the experiment has started, I cannot answer any of your questions.
- At the end of the experiment, you will see the “Task is completed” window.
3.2. Experimental Procedure
3.2.1. Data Collection
3.2.2. Data Processing
3.3. Eye Tracking Metrics and Analysis
3.4. Statistical Modeling Methods
3.5. Machine Learning Approach
3.5.1. Datasets
3.5.2. Model Architecture
3.5.3. Implementation Details and Training
3.6. Saliency Metrics Analysis
4. Results
4.1. Eye Tracking Measures
4.1.1. Qualitative Analysis
4.1.2. Quantitative Analysis
4.2. Statistical Modeling
4.3. Saliency Maps Generation
4.4. Saliency Metrics Evaluation
5. Discussion and Limitations
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Image | AUC-Borji | CC | KL-Div | NSS | Similarity |
---|---|---|---|---|---|
‘001.jpg’ | 0.5107 | 0.2985 | 0.9812 | 0.3780 | 0.4246 |
‘002.jpg’ | 0.5083 | 0.6298 | 0.6118 | 0.3629 | 0.5886 |
‘003.jpg’ | 0.5060 | 0.6963 | 0.6139 | 0.3319 | 0.6031 |
‘004.jpg’ | 0.5111 | 0.6817 | 0.6302 | 0.4798 | 0.5826 |
‘005.jpg’ | 0.5558 | 0.4323 | 0.9548 | 0.4666 | 0.4394 |
‘006.jpg’ | 0.5117 | 0.4161 | 0.8816 | 0.3530 | 0.4748 |
‘007.jpg’ | 0.5007 | 0.0194 | 1.1577 | 0.0961 | 0.4214 |
‘008.jpg’ | 0.5022 | 0.0637 | 1.3890 | 0.1144 | 0.3859 |
‘009.jpg’ | 0.5067 | 0.1239 | 0.7674 | 0.1349 | 0.5055 |
‘010.jpg’ | 0.4978 | −0.0172 | 0.8773 | 0.0487 | 0.4656 |
‘011.jpg’ | 0.4976 | −0.0749 | 1.1147 | −0.0285 | 0.4133 |
‘012.jpg’ | 0.5354 | 0.3722 | 0.8758 | 0.2810 | 0.4811 |
‘013.jpg’ | 0.5420 | 0.4435 | 1.1080 | 0.6698 | 0.3795 |
‘014.jpg’ | 0.5131 | 0.3354 | 1.0725 | 0.4300 | 0.3919 |
‘015.jpg’ | 0.5572 | 0.3021 | 1.1867 | 0.5148 | 0.3664 |
‘016.jpg’ | 0.5735 | 0.5193 | 0.8774 | 0.6189 | 0.4504 |
‘017.jpg’ | 0.5034 | 0.7012 | 0.4577 | 0.4238 | 0.6564 |
‘018.jpg’ | 0.5115 | 0.6130 | 0.4120 | 0.3658 | 0.6464 |
‘019.jpg’ | 0.5388 | 0.7089 | 0.6023 | 0.6567 | 0.5988 |
‘020.jpg’ | 0.5124 | 0.2791 | 0.7376 | 0.1814 | 0.5274 |
‘021.jpg’ | 0.5065 | 0.1437 | 1.0380 | 0.2047 | 0.4418 |
‘022.jpg’ | 0.5157 | 0.3005 | 1.1495 | 0.3141 | 0.3846 |
‘023.jpg’ | 0.5040 | 0.3058 | 0.9354 | 0.1943 | 0.4497 |
‘024.jpg’ | 0.5148 | 0.6162 | 0.5539 | 0.4713 | 0.5972 |
‘025.jpg’ | 0.5116 | 0.7192 | 0.5380 | 0.6929 | 0.6460 |
‘026.jpg’ | 0.5018 | 0.5980 | 0.7224 | 0.6270 | 0.5730 |
‘027.jpg’ | 0.5178 | 0.5857 | 0.5507 | 0.5179 | 0.6009 |
‘028.jpg’ | 0.5023 | 0.5821 | 0.5949 | 0.4383 | 0.5858 |
‘029.jpg’ | 0.5445 | 0.3543 | 1.0044 | 0.4851 | 0.4652 |
‘030.jpg’ | 0.5048 | 0.4651 | 1.0066 | 0.5149 | 0.4196 |
‘031.jpg’ | 0.5888 | 0.4427 | 0.8703 | 0.5211 | 0.4601 |
‘032.jpg’ | 0.5456 | 0.3815 | 1.0430 | 0.4713 | 0.3982 |
‘033.jpg’ | 0.5313 | 0.2376 | 1.2683 | 0.5465 | 0.3608 |
Average | 0.5208 | 0.4023 | 0.8662 | 0.3903 | 0.4905 |
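For reference, the per-image scores above follow the standard saliency-evaluation definitions. The sketch below gives minimal implementations of CC, KL divergence, NSS, and Similarity (AUC-Borji additionally requires sampling random non-fixated pixels and is omitted for brevity); array names and normalization choices are assumptions and may differ from the exact evaluation code used by the authors.

```python
import numpy as np

EPS = 1e-12  # small constant to avoid division by zero / log(0)

def cc(pred, gt):
    """Pearson correlation coefficient between predicted and ground-truth maps."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    g = (gt - gt.mean()) / (gt.std() + EPS)
    return float((p * g).mean())

def kl_div(pred, gt):
    """KL divergence of the prediction from the ground truth, both treated as distributions."""
    p = pred / (pred.sum() + EPS)
    g = gt / (gt.sum() + EPS)
    return float(np.sum(g * np.log(g / (p + EPS) + EPS)))

def nss(pred, fixation_map):
    """Normalized Scanpath Saliency: mean z-scored prediction at fixated pixels."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    return float(p[fixation_map > 0].mean())

def sim(pred, gt):
    """Similarity (histogram intersection) between the two normalized maps."""
    p = pred / (pred.sum() + EPS)
    g = gt / (gt.sum() + EPS)
    return float(np.minimum(p, g).sum())
```

Averaging each metric over all 33 test images would yield the summary row reported in the last line of the table.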
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).