Automatic Classification of Eyewitness Messages for Disaster Events Using Linguistic Rules and ML/AI Approaches
Abstract
1. Introduction
- Delta flight’s emergency landing (http://channelnewsasia.com/news/world/delta-flight-middle-of-the-ocean-seattle-beijing-emergency-land-11062706, accessed on 20 July 2021): A tweet posted by one of the passengers about the flight’s emergency landing on a remote island in Alaska, due to a potential engine failure, was the news agency’s source for the incident.
- California Earthquake (https://latimesblogs.latimes.com/technology/2008/07/twitter-earthqu.html, accessed on 20 July 2021): Half a dozen tweets about the earthquake appeared on Twitter a minute before the time recorded by the USGS (https://www.usgs.gov/, accessed on 20 July 2021).
- New York airplane crash in the Hudson River (https://www.telegraph.co.uk/technology/twitter/4269765/New-York-plane-crash-Twitter-breaks-the-news-again.html, accessed on 25 July 2021): An eyewitness tweeted about the crash, and the information became a headline in The Daily Telegraph.
- Boston Bombing (https://en.wikipedia.org/wiki/Boston_Marathon_bombing, accessed on 25 July 2021): Eyewitness tweets about the bombing were available well before any news channel covered the incident.
- Westgate Shopping Mall attack (https://en.wikipedia.org/wiki/Westgate_shopping_mall_attack, accessed on 25 July 2021): News of this attack in Nairobi, Kenya, appeared on Twitter thirty-three minutes before any TV news channel covered it.
- A generalized linguistic rule-based methodology (LR-TED) that is scalable to other domains and automatically extracts eyewitness reports without relying on domain experts or platform-specific metadata such as that provided by Twitter.
- Crafting and evaluating grammar rules to extract linguistic features from the textual content of a message and using them for annotating eyewitness messages.
- Using disaster-related linguistic features to train and evaluate several machine learning and deep learning models to classify the real-world Twitter dataset into “Eyewitness”, “Non-Eyewitness”, and “Unknown” classes.
- Comparative analysis of the proposed methodology with a baseline manually crafted dictionary-based approach using several evaluation metrics such as precision, recall, and F-score.
2. Literature Review
3. Methodology
3.1. Gold-Standard Dataset
3.2. Pre-Processing
3.3. Linguistic Processing
- Tokens: Tokenization of the tweet content.
- POS: Part-of-speech tag for each word.
- NER: Named entity recognition for identifying entities.
- LEMMA: Lemmatization to obtain the base form of each word.
- DepRel: Dependency relations among words (see the parsing sketch after this list).
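To make these annotations concrete, the following minimal sketch (ours, not taken from the paper) sends a tweet to a locally running Stanford CoreNLP server, the parser described in Section 4.1, and prints each token's index, word, lemma, POS tag, NER tag, head index, and dependency relation. The server address, port, and example sentence are illustrative assumptions.

```python
import json
import requests

# Assumption: a CoreNLP server has been started locally, e.g.
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
CORENLP_URL = "http://localhost:9000"

def annotate(text: str) -> dict:
    """Return CoreNLP JSON annotations (tokens, POS, lemma, NER, dependencies)."""
    props = {"annotators": "tokenize,ssplit,pos,lemma,ner,depparse",
             "outputFormat": "json"}
    resp = requests.post(CORENLP_URL,
                         params={"properties": json.dumps(props)},
                         data=text.encode("utf-8"))
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    doc = annotate("The house is shaking ... Its an earthquake")
    for sent in doc["sentences"]:
        # Map each dependent token index to its (head index, dependency relation).
        deps = {d["dependent"]: (d["governor"], d["dep"])
                for d in sent["basicDependencies"]}
        for tok in sent["tokens"]:
            head, rel = deps.get(tok["index"], (0, "ROOT"))
            print(tok["index"], tok["word"], tok["lemma"],
                  tok["pos"], tok["ner"], head, rel)
```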
3.4. Evaluation Strategy
4. Grammar Rule-Based Feature Extraction
4.1. Parsing of Tweets (Stanford CoreNLP)
4.2. Feature Extraction
5. Experiments and Results
5.1. Dataset Description and Pre-Processing
5.2. Parsing and Extracting Features
5.3. Impact of Dropped and New Features
5.4. Results of Proposed LR-TED Approach
5.5. Comparison of Proposed LR-TED Approach with the Static-Dictionary-Based Approach
5.6. Comparison of Proposed LR-TED Approach with Zahra’s Approach (All Experiments)
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Imran, M.; Castillo, C.; Diaz, F.; Vieweg, S. Processing social media messages in mass emergency: A survey. ACM Comput. Surv. (CSUR) 2015, 47, 1–38.
2. Vieweg, S.; Hughes, A.L.; Starbird, K.; Palen, L. Microblogging during Two Natural Hazards Events: What Twitter May Contribute to Situational Awareness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 1079–1088.
3. Kwak, H.; Lee, C.; Park, H.; Moon, S. What is Twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010; pp. 591–600.
4. Atefeh, F.; Khreich, W. A survey of techniques for event detection in twitter. Comput. Intell. 2015, 31, 132–164.
5. Khatoon, S.; Alshamari, M.A.; Asif, A.; Hasan, M.M.; Abdou, S.; Elsayed, K.M.; Rashwan, M. Development of social media analytics system for emergency event detection and crisis management. Comput. Mater. Contin. 2021, 68, 3079–3100.
6. Anandhan, A.; Shuib, L.; Ismail, M.A. Microblogging Hashtag Recommendation Considering Additional Metadata. In Intelligent Computing and Innovation on Data Science; Lecture Notes in Networks and Systems; Springer: Singapore, 2020; Volume 118, pp. 495–505.
7. Jain, D.K.; Kumar, A.; Sharma, V. Tweet recommender model using adaptive neuro-fuzzy inference system. Future Gener. Comput. Syst. 2020, 112, 996–1009.
8. Khatoon, S.; Romman, L.A.; Hasan, M.M. Domain independent automatic labeling system for large-scale social data using Lexicon and web-based augmentation. Inf. Technol. Control 2020, 49, 36–54.
9. AlGhamdi, N.; Khatoon, S.; Alshamari, M. Multi-Aspect Oriented Sentiment Classification: Prior Knowledge Topic Modelling and Ensemble Learning Classifier Approach. Appl. Sci. 2022, 12, 4066.
10. Abu Romman, L.; Syed, S.K.; Alshmari, M.; Hasan, M.M. Improving Sentiment Classification for Large-Scale Social Reviews Using Stack Generalization. In Proceedings of the International Conference on Emerging Technologies and Intelligent Systems; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2021; Volume 322, pp. 117–130.
11. AlAbdulaali, A.; Asif, A.; Khatoon, S.; Alshamari, M. Designing Multimodal Interactive Dashboard of Disaster Management Systems. Sensors 2022, 22, 4292.
12. Khatoon, S.; Asif, A.; Hasan, M.M.; Alshamari, M. Social Media-Based Intelligence for Disaster Response and Management in Smart Cities. In Artificial Intelligence, Machine Learning, and Optimization Tools for Smart Cities: Designing for Sustainability; Pardalos, P.M., Rassia, S.T., Tsokas, A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 211–235.
13. Imran, M.; Castillo, C.; Lucas, J.; Meier, P.; Vieweg, S. AIDR: Artificial intelligence for disaster response. In Proceedings of the 23rd International Conference on World Wide Web, Seoul, Korea, 7–11 April 2014; pp. 159–162.
14. Zahra, K.; Imran, M.; Ostermann, F.O. Automatic identification of eyewitness messages on twitter during disasters. Inf. Process. Manag. 2020, 57, 102107.
15. Haider, S.; Afzal, M.T. Autonomous eyewitness identification by employing linguistic rules for disaster events. CMC-Comput. Mater. Contin. 2021, 66, 481–498.
16. Haworth, B.; Bruce, E. A review of volunteered geographic information for disaster management. Geogr. Compass 2015, 9, 237–250.
17. Landwehr, P.M.; Carley, K.M. Social media in disaster relief. In Data Mining and Knowledge Discovery for Big Data; Springer: Berlin/Heidelberg, Germany, 2014; Volume 1, pp. 225–257.
18. Truelove, M.; Vasardani, M.; Winter, S. Towards credibility of micro-blogs: Characterising witness accounts. GeoJournal 2015, 80, 339–359.
19. Diakopoulos, N.; De Choudhury, M.; Naaman, M. Finding and Assessing Social Media Information Sources in the Context of Journalism. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 2451–2460.
20. Olteanu, A.; Vieweg, S.; Castillo, C. What to expect when the unexpected happens: Social media communications across crises. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 994–1009.
21. Kumar, S.; Morstatter, F.; Zafarani, R.; Liu, H. Whom should I follow? Identifying relevant users during crises. In Proceedings of the 24th ACM Conference on Hypertext and Social Media, Paris, France, 1–3 May 2013; pp. 139–147.
22. Morstatter, F.; Lubold, N.; Pon-Barry, H.; Pfeffer, J.; Liu, H. Finding eyewitness tweets during crises. arXiv 2014, arXiv:1403.1773.
23. Truelove, M.; Vasardani, M.; Winter, S. Testing a model of witness accounts in social media. In Proceedings of the 8th Workshop on Geographic Information Retrieval, Fort Worth, TX, USA, 4–7 November 2014; pp. 1–8.
24. Doggett, E.; Cantarero, A. Identifying eyewitness news-worthy events on twitter. In Proceedings of the Fourth International Workshop on Natural Language Processing for Social Media, Austin, TX, USA, 1 November 2016; pp. 7–13.
25. Fang, R.; Nourbakhsh, A.; Liu, X.; Shah, S.; Li, Q. Witness identification in twitter. In Proceedings of the Fourth International Workshop on Natural Language Processing for Social Media, Austin, TX, USA, 1 November 2016; pp. 65–73.
26. Tanev, H.; Zavarella, V.; Steinberger, J. Monitoring disaster impact: Detecting micro-events and eyewitness reports in mainstream and social media. In Proceedings of the 14th ISCRAM Conference, Albi, France, 21–24 May 2017.
27. Essam, N.; Moussa, A.M.; Elsayed, K.M.; Abdou, S.; Rashwan, M.; Khatoon, S.; Hasan, M.M.; Asif, A.; Alshamari, M.A. Location Analysis for Arabic COVID-19 Twitter Data Using Enhanced Dialect Identification Models. Appl. Sci. 2021, 11, 11328.
28. Zahra, K.; Ostermann, F.O.; Purves, R.S. Geographic variability of Twitter usage characteristics during disaster events. Geo-Spat. Inf. Sci. 2017, 20, 231–240.
29. Kong, L.; Schneider, N.; Swayamdipta, S.; Bhatia, A.; Dyer, C.; Smith, N.A. A dependency parser for tweets. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 26–28 October 2014; pp. 1001–1012.
30. Liu, Y.; Zhu, Y.; Che, W.; Qin, B.; Schneider, N.; Smith, N.A. Parsing tweets into universal dependencies. arXiv 2018, arXiv:1804.08228.
31. Jurafsky, D. Speech & Language Processing; Pearson Education: London, UK, 2000; ISBN 9788131716724.
32. Finkel, J.R.; Grenager, T.; Manning, C.D. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), Ann Arbor, MI, USA, 25–30 June 2005; pp. 363–370.
33. Gui, T.; Zhang, Q.; Huang, H.; Peng, M.; Huang, X.-J. Part-of-speech tagging for twitter with adversarial neural networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September 2017; pp. 2411–2420.
34. CoreNLP. Available online: https://stanfordnlp.github.io/CoreNLP/ (accessed on 15 December 2021).
35. Barua, K.; Chakrabarti, P.; Panwar, A.; Ghosh, A. A Predictive Analytical Model in Education Scenario based on Critical Thinking using WEKA. Int. J. Technol. Res. Manag. 2018, 5. Available online: https://www.academia.edu/36468698/A_Predictive_Analytical_Model_in_Education_Scenario_based_on_Critical_Thinking_using_WEKA (accessed on 15 December 2021).
36. Desai, A.; Sunil, R. Analysis of machine learning algorithms using WEKA. Int. J. Comput. Appl. 2012, 975, 8887.
37. Sharma, P. Comparative analysis of various clustering algorithms using WEKA. Int. Res. J. Eng. Technol. 2015, 2, 107–112.
Sr.# | Eyewitness Feature | Examples |
---|---|---|
1 | Reporting small details of surroundings | “window shaking”, “water in basement” |
2 | Words indicating perceptual senses | “seeing”, “hearing”, “feeling” |
3 | Reporting impact of disaster | “raining”, “school canceled”, “flight delayed” |
4 | Words indicating intensity of disaster | “intense”, “strong”, “dangerous”, “big” |
5 | First person pronouns and adjectives | “i”, “we”, “me” |
6 | Personalized location markers | “my office”, “our area” |
7 | Exclamation and question marks | “!”, “?” |
8 | Expletives | “wtf”, “omg”, “s**t” |
9 | Mention of a routine activity | “sleeping”, “watching a movie” |
10 | Time indicating words | “now”, “at the moment”, “just” |
11 | Short tweet length | “one or two words” |
12 | Caution and advice for others | “watch out”, “be careful” |
13 | Mention of disaster locations | “area and street name”, “directions” |
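For comparison with the grammar-rule approach, the static-dictionary baseline referenced in Section 5.5 can be approximated by plain keyword matching over the tweet text. The sketch below is an illustration only: its keyword lists are seeded from the example phrases in the table above and are far smaller than any manually curated dictionary; none of the names or lists are taken from the paper.

```python
import re

# Illustrative keyword lists seeded from the examples above (assumption:
# real static dictionaries would be far larger and manually curated).
DICTIONARY = {
    "perceptual_senses": {"see", "seeing", "hear", "hearing", "feel", "feeling"},
    "intensity":         {"intense", "strong", "dangerous", "big"},
    "first_person":      {"i", "we", "me", "my", "our"},
    "time_words":        {"now", "just"},
    "expletives":        {"wtf", "omg"},
}

def dictionary_features(tweet: str) -> dict:
    """Return one binary feature per dictionary category via keyword matching."""
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    features = {cat: int(bool(words & terms)) for cat, terms in DICTIONARY.items()}
    # Punctuation-based cue (Feature 7 in the table above).
    features["exclaim_or_question"] = int("!" in tweet or "?" in tweet)
    return features

print(dictionary_features("OMG I can feel my office shaking right now!"))
# {'perceptual_senses': 1, 'intensity': 0, 'first_person': 1,
#  'time_words': 1, 'expletives': 1, 'exclaim_or_question': 1}
```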
State-of-the-Art | Concept | Approach | Findings | Shortcomings |
---|---|---|---|---|
Diakopoulos et al. [19] | Proposed the idea of finding trustworthy information sources. | Adopted a human-centered design approach to develop the SRSR system, which helps journalists assess tweet sources. | Secured a precision of 89% and a recall of 32%. | Relies on manually crafted dictionaries; no features to identify eyewitnesses. |
Olteanu et al. [20] | Studied the role of social media during disaster events. | Proposed a tweet broadcast recommendation system using temporal information and evaluated it on 26 different crises. | Achieved accuracy between 0% and 54%. | Requires user and network information; no features to identify eyewitnesses. |
Morstatter et al. [22] | Presented the idea of identifying non-geotagged tweets originating from the crisis region. | Identified linguistic patterns and proposed an automatic way to classify non-geotagged tweets from crisis regions. | Achieved F-scores of 0.882 and 0.831 for the Hurricane Sandy and Boston bombing incidents. | Distinguishes tweets by identifying the language of the tweet; no features to identify eyewitnesses. |
Truelove et al. [23] | Explored the idea of identifying witness accounts and their related accounts. | Presented a witness and impact account identification model, evaluated on bushfire event tweets. | Correctly identified 77% of smoke-observation accounts. | Requires manual pre-processing, metadata, location, and network information; no features to identify eyewitnesses. |
Doggett and Cantarero [24] | Presented a filtration-based technique to identify eyewitnesses. | Presented linguistic features and applied them as filters to identify eyewitness reports. | Achieved an average accuracy of 62%. | Requires geo-location to identify eyewitness events. |
Fang et al. [25] | Presented a hybrid approach to identify eyewitnesses during emergency events. | Adopted a defined set of metadata and linguistic features, using the LIWC dictionary for keyword identification and OpenCalais for event labeling. | Achieved an average F1 score of 89.7%. | Used static dictionary of terms. Requires language and location information. |
Tanev et al. [26] | Presented a set of syntax and linguistic-based features for event detection. | Applied an unsupervised approach to news articles to detect the events. | Achieved 42% precision and 66% recall. | Domain-specific data of news to detect the events. |
Zahra et al. [14] | Manually investigated eyewitness-related tweet sources and classified them into thirteen features for identification. | Domain experts manually analyzed the tweets to identify the proposed features in the tweet content; feature words were extracted for all thirteen features. | F-score of 0.917. | Manual implementation; failed to implement all characteristics. |
Category | Flood | Earthquake | Hurricane | Total |
---|---|---|---|---|
Eyewitness | 148 | 367 | 296 | 811 |
Non-Eyewitness | 113 | 321 | 100 | 534 |
Unknown | 1739 | 1312 | 1604 | 4655 |
Total Sample | 2000 | 2000 | 2000 | 6000 |
Category | Earthquake | Flood | Hurricane | Wildfire | Total |
---|---|---|---|---|---|
Eyewitness | 1600 | 627 | 465 | 189 | 2881 |
Non-Eyewitness | 200 | 822 | 336 | 432 | 1790 |
Unknown | 200 | 551 | 1199 | 1379 | 3329 |
Total Sample | 2000 | 2000 | 2000 | 2000 | 8000 |
IDX | Word | Lemma | POS | NER | HeadIDX | DepRel |
---|---|---|---|---|---|---|
1 | The | the | DT | O | 2 | det |
2 | house | house | NN | O | 4 | nsubj |
3 | is | is | VBZ | O | 4 | Aux |
4 | shaking | shake | VBG | O | 0 | ROOT |
5 | … | … | : | O | 4 | punct |
6 | Its | its | PRP$ | O | 8 | nmod:poss |
7 | an | a | DT | O | 8 | det |
8 | earthquake | earthquake | NN | CAUSE_OF_DEATH | 6 | comp |
Feature # | Feature Description | Grammar-Rule | Comments |
---|---|---|---|
1 | “Reporting small details of surroundings” | POS in (‘NN’, ‘NNS’) and IDX < HeadIDX and DepRel in (‘nsubj’, ‘dobj’) and NER <> (‘CAUSE_OF_DEATH’, ‘URL’, ‘NUMBER’, ‘TIME’, ‘MONEY’) and OnTarget(POS in (‘VBG’, ‘VBD’) and DepRel in (‘ccomp’, ‘dobj’)) | Feature was dropped by Zahra’s approach. |
2 | “Words indicating perceptual senses” | POS like ‘VB%’ and LEMMA in PerceptualSensesWordList() | The word list is taken from an online source (http://liwc.wpengine.com/, accessed on 25 September 2021). |
3 | “Reporting impact of disaster” | POS like ‘NN%’ and NER<>‘URL’ and OnTarget(NER<>‘URL’ and DisasterImpactWord(LEMMA)) | The list of impact words contains common feature words falling in this category. |
4 | “Words indicating the intensity of the disaster” | POS = ‘JJ’ and IDX < HeadIDX and NER = ‘O’ and OnTarget(POS like ‘NN%’ and NER = ‘CAUSE_OF_DEATH’) | |
5 | “First-person pronouns and adjectives” | LEMMA in FirstPersonPNouns() | This feature can contain a few words only. |
6 | “Personalized location markers” | POS = ‘PRP$’ and IDX < HeadIDX and LEMMA in (‘my’,’we’) and NER = ‘O’ and DepRel = ‘nmod:poss’ and OnTarget(LocativeNoun(WORD) and NER = ‘O’ and POS = ‘NN’) | The feature was dropped by Zahra’s approach. |
7 | “Exclamation and question marks” | WORD like (‘%?%’, ‘%!%’) | We only need to find these characters in the text. |
8 | “Expletives” | WORD in SlangWordsList() | A static list of expletive words from an online source is adopted. |
9 | “Mention of a routine activity” | POS = ‘VBG’ and WORD<>‘GON’ and NER = ‘O’ | The feature contains daily routine activity words. |
10 | “Time indicating words” | POS = ‘RB’ and DepRel = ‘advmod’ and lemma not in (‘!!’,’|’,’a.’) and ((WORD = ‘just’) or (WORD<>‘just’ and NER = ‘DATE’)) and OnTarget(POS in (‘RB’,’VBP’,’JJ’, ‘CD’)) | It contains the temporal information about an emergency or disastrous event. |
11 | “Short tweet-length” | NER<>‘URL’ and TweetWordsCount([Tweet-Content]) | The appropriate word-count threshold remains an open research question. |
12 | “Caution and advice for others” | POS = ‘VB’ and IDX < HeadIDX and DepRel = ‘cop’ and OnTarget (POS = ‘JJ’) | |
13 | “Mention of disaster locations” | LocationType (NER) or WORD is in (‘east’, ‘west’, ‘north’, ‘south’) | Feature was dropped by Zahra’s approach. |
14 | New Feature “No URL” | IF (URL Found in content) THEN “NO” ELSE “YES” | URL in the text means that the information is not firsthand. |
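To illustrate how such rules translate into code, the sketch below encodes three of the simpler rules from the table (Feature 5, first-person pronouns; Feature 7, exclamation and question marks; and the new Feature 14, “No URL”) as predicates over token annotations in the CoreNLP-style JSON layout shown earlier. The function names and the token-dictionary layout are illustrative assumptions, not the paper’s implementation.

```python
def feature_7_exclaim_question(tweet_text: str) -> bool:
    """Feature 7: the tweet contains an exclamation or question mark."""
    return "!" in tweet_text or "?" in tweet_text

def feature_14_no_url(tokens: list[dict]) -> bool:
    """Feature 14 ("No URL"): True when no token is NER-tagged as a URL,
    i.e., the content is more likely to be a firsthand report."""
    return all(tok.get("ner") != "URL" for tok in tokens)

def feature_5_first_person(tokens: list[dict]) -> bool:
    """Feature 5: the tweet contains a first-person pronoun lemma."""
    first_person = {"i", "we", "me", "my", "our", "us"}
    return any(tok.get("lemma", "").lower() in first_person for tok in tokens)

# Example tokens in the CoreNLP-style layout used earlier (hypothetical parse).
tokens = [
    {"word": "My", "lemma": "my", "pos": "PRP$", "ner": "O"},
    {"word": "house", "lemma": "house", "pos": "NN", "ner": "O"},
    {"word": "is", "lemma": "be", "pos": "VBZ", "ner": "O"},
    {"word": "shaking", "lemma": "shake", "pos": "VBG", "ner": "O"},
    {"word": "!", "lemma": "!", "pos": ".", "ner": "O"},
]
print(feature_7_exclaim_question("My house is shaking!"),
      feature_14_no_url(tokens),
      feature_5_first_person(tokens))   # True True True
```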
Feature # | Total Retrieved (Automatically) | Relevant Retrieved (Automatically) | Identified (Manually) | Precision | Recall | F-Score |
---|---|---|---|---|---|---|
Feature-1 | 108 | 107 | 110 | 0.99 | 0.97 | 0.98 |
Feature-2 | 512 | 481 | 496 | 0.94 | 0.97 | 0.95 |
Feature-3 | 2 | 2 | 3 | 1.00 | 0.67 | 0.80 |
Feature-4 | 292 | 269 | 299 | 0.92 | 0.90 | 0.91 |
Feature-5 | 794 | 592 | 595 | 0.75 | 0.99 | 0.85 |
Feature-6 | 33 | 33 | 38 | 1.00 | 0.87 | 0.93 |
Feature-7 | 0 | 0 | 0 | 0.00 | 0.00 | 0.00 |
Feature-8 | 484 | 438 | 450 | 0.90 | 0.97 | 0.94 |
Feature-9 | 275 | 234 | 297 | 0.85 | 0.79 | 0.82 |
Feature-10 | 134 | 127 | 155 | 0.95 | 0.82 | 0.88 |
Feature-11 | 0 | 0 | 0 | 0.00 | 0.00 | 0.00 |
Feature-12 | 7 | 7 | 11 | 1.00 | 0.64 | 0.78 |
Feature-13 | 690 | 357 | 593 | 0.52 | 0.60 | 0.56 |
Feature-14 | 0 | 0 | 0 | 0.00 | 0.00 | 0.00 |
Total | 3331 | 2647 | 3047 |
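The Precision, Recall, and F-Score columns follow the standard definitions: precision is the relevant retrieved count divided by the total retrieved, recall is the relevant retrieved count divided by the manually identified count, and the F-score is their harmonic mean. A minimal sketch, using Feature-1’s counts from the table as a worked example:

```python
def prf(total_retrieved: int, relevant_retrieved: int, manually_identified: int):
    """Precision, recall, and F-score from the per-feature counts."""
    precision = relevant_retrieved / total_retrieved if total_retrieved else 0.0
    recall = relevant_retrieved / manually_identified if manually_identified else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return round(precision, 2), round(recall, 2), round(f_score, 2)

# Feature-1: 108 retrieved automatically, 107 of them relevant,
# 110 identified manually.
print(prf(108, 107, 110))   # (0.99, 0.97, 0.98)
```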
Category | Earthquake Pr | Earthquake Re | Earthquake F1 | Flood Pr | Flood Re | Flood F1 | Hurricane Pr | Hurricane Re | Hurricane F1 | Wildfire Pr | Wildfire Re | Wildfire F1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Eyewitness | 0.92 | 0.70 | 0.79 | 0.46 | 0.43 | 0.44 | 0.51 | 0.48 | 0.50 | 0.23 | 0.39 | 0.29 |
Non-Eyewitness | 0.11 | 0.30 | 0.16 | 0.24 | 0.33 | 0.27 | 0.53 | 0.29 | 0.37 | 0.60 | 0.28 | 0.38 |
Unknown | 0.16 | 0.19 | 0.17 | 0.34 | 0.27 | 0.30 | 0.09 | 0.25 | 0.14 | 0.11 | 0.26 | 0.15 |
Category | Earthquake Pr | Earthquake Re | Earthquake F1 | Flood Pr | Flood Re | Flood F1 | Hurricane Pr | Hurricane Re | Hurricane F1 | Wildfire Pr | Wildfire Re | Wildfire F1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Eyewitness | 0.88 | 0.76 | 0.82 | 0.38 | 0.57 | 0.45 | 0.40 | 0.63 | 0.49 | 0.15 | 0.63 | 0.24 |
Non-Eyewitness | 0.19 | 0.48 | 0.27 | 0.33 | 0.42 | 0.37 | 0.71 | 0.47 | 0.56 | 0.78 | 0.53 | 0.63 |
Unknown | 0.23 | 0.13 | 0.17 | 0.44 | 0.20 | 0.27 | 0.12 | 0.17 | 0.14 | 0.28 | 0.17 | 0.21 |
Category | Earthquake Pr | Earthquake Re | Earthquake F1 | Flood Pr | Flood Re | Flood F1 | Hurricane Pr | Hurricane Re | Hurricane F1 | Wildfire Pr | Wildfire Re | Wildfire F1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Eyewitness | 0.86 | 0.95 | 0.91 | 0.40 | 0.74 | 0.52 | 0.42 | 0.75 | 0.54 | 0.15 | 0.69 | 0.25 |
Non-Eyewitness | 0.47 | 0.47 | 0.47 | 0.39 | 0.42 | 0.40 | 0.71 | 0.47 | 0.57 | 0.77 | 0.51 | 0.61 |
Unknown | 0.28 | 0.06 | 0.09 | 0.38 | 0.11 | 0.17 | 0.12 | 0.14 | 0.13 | 0.27 | 0.14 | 0.19 |
Approach | Earthquake EW | Earthquake Un | Earthquake NEw | Flood EW | Flood Un | Flood NEw | Hurricane EW | Hurricane Un | Hurricane NEw | Wildfire EW | Wildfire Un | Wildfire NEw |
---|---|---|---|---|---|---|---|---|---|---|---|---|
LR-TED | 0.93 | 0.07 | 0.75 | 0.45 | 0.51 | 0.65 | 0.60 | 0.36 | 0.80 | 0.26 | 0.55 | 0.86 |
Zahra’s Approach | 0.92 | 0.39 | 0.71 | 0.53 | 0.73 | 0.63 | 0.51 | 0.64 | 0.81 | 0.38 | 0.73 | 0.90 |
Disaster Type | Manual (F1) | R-F (F1) | N-B (F1) | SVM (F1) | LR-TED (F1) | RNN (F1) | CNN (F1) | Zahra (F1) |
---|---|---|---|---|---|---|---|---|
Earthquake | 0.91 | 0.92 | 0.92 | 0.93 | 0.93 | 0.89 | 0.93 | 0.92 |
Flood | 0.52 | 0.42 | 0.43 | 0.40 | 0.45 | 0.07 | 0.41 | 0.53 |
Hurricane | 0.54 | 0.58 | 0.59 | 0.57 | 0.60 | 0.57 | 0.58 | 0.51 |
Wildfire | 0.25 | 0.24 | 0.30 | 0.00 | 0.26 | 0.00 | 0.21 | 0.38 |
Disaster Type | Manual (F1) | R-F (F1) | N-B (F1) | SVM (F1) | LR-TED (F1) | RNN (F1) | CNN (F1) | Zahra (F1) |
---|---|---|---|---|---|---|---|---|
Earthquake | 0.47 | 0.75 | 0.73 | 0.74 | 0.75 | 0.00 | 0.74 | 0.71 |
Flood | 0.40 | 0.66 | 0.64 | 0.56 | 0.65 | 0.60 | 0.64 | 0.63 |
Hurricane | 0.57 | 0.80 | 0.80 | 0.80 | 0.80 | 0.80 | 0.81 | 0.81 |
Wildfire | 0.61 | 0.85 | 0.85 | 0.86 | 0.86 | 0.86 | 0.86 | 0.90 |
Disaster Type | Manual (F1) | R-F (F1) | N-B (F1) | SVM (F1) | LR-TED (F1) | RNN (F1) | CNN (F1) | Zahra (F1) |
---|---|---|---|---|---|---|---|---|
Earthquake | 0.09 | 0.08 | 0.14 | 0.00 | 0.07 | 0.00 | 0.01 | 0.39 |
Flood | 0.17 | 0.56 | 0.55 | 0.60 | 0.51 | 0.54 | 0.57 | 0.73 |
Hurricane | 0.13 | 0.39 | 0.39 | 0.27 | 0.36 | 0.00 | 0.32 | 0.64 |
Wildfire | 0.19 | 0.56 | 0.55 | 0.57 | 0.55 | 0.53 | 0.54 | 0.73 |
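The R-F, N-B, and SVM columns in the three tables above correspond to classical classifiers trained on the extracted feature vectors. The sketch below shows one possible setup using scikit-learn as an illustrative stand-in; the random feature matrix, labels, split, and hyperparameters are placeholders and do not reflect the paper’s data or tooling.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder data: one row per tweet, one binary column per linguistic
# feature (14 features), labels in {eyewitness, non-eyewitness, unknown}.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(6000, 14))
y = rng.choice(["eyewitness", "non-eyewitness", "unknown"], size=6000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "R-F": RandomForestClassifier(n_estimators=100, random_state=0),
    "N-B": BernoulliNB(),
    "SVM": LinearSVC(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Per-class F1, as reported in the tables above.
    per_class_f1 = f1_score(y_test, pred, average=None, labels=model.classes_)
    print(name, dict(zip(model.classes_, per_class_f1.round(2))))
```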
Category | LR-TED Precision | LR-TED Recall | LR-TED F-Score | Zahra’s Approach Precision | Zahra’s Approach Recall | Zahra’s Approach F-Score |
---|---|---|---|---|---|---|
Eyewitness | 0.86 | 0.95 | 0.91 | 0.40 | 0.74 | 0.52 |
Non-Eyewitness | 0.47 | 0.47 | 0.47 | 0.39 | 0.42 | 0.40 |
Unknown | 0.28 | 0.06 | 0.09 | 0.38 | 0.11 | 0.17 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).