Article

Arabic Spam Tweets Classification: A Comprehensive Machine Learning Approach

by Wafa Hussain Hantom and Atta Rahman *
Department of Computer Science (CS), College of Computer Science and Information Technology (CCSIT), Imam Abdulrahman Bin Faisal University (IAU), P.O. Box 1982, Dammam 31441, Saudi Arabia
* Author to whom correspondence should be addressed.
AI 2024, 5(3), 1049-1065; https://doi.org/10.3390/ai5030052
Submission received: 24 May 2024 / Revised: 21 June 2024 / Accepted: 27 June 2024 / Published: 2 July 2024

Abstract
Nowadays, one of the most common problems faced by Twitter (also known as X) users, including individuals as well as organizations, is dealing with spam tweets. The problem continues to proliferate due to the increasing popularity and number of users of social media platforms. Due to this overwhelming interest, spammers can post texts, images, and videos containing suspicious links that can be used to spread viruses, rumors, negative marketing, and sarcasm, and potentially hack the user’s information. Spam detection is among the hottest research areas in natural language processing (NLP) and cybersecurity. Several studies have been conducted in this regard, but they mainly focus on the English language. However, Arabic tweet spam detection still has a long way to go, especially for the diverse dialects beyond Modern Standard Arabic (MSA), since the standard dialect is seldom used in tweets. The situation demands an automated, robust, and efficient Arabic spam tweet detection approach. To address the issue, in this research, various machine learning and deep learning models have been investigated to detect spam tweets in Arabic, including Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), and Long Short-Term Memory (LSTM). In this regard, we have focused on the words as well as the meaning of the tweet text. Over several experiments, the proposed models have produced promising results in contrast to previous approaches on the same and diverse datasets. The results showed that the RF classifier achieved 96.78% accuracy and the LSTM classifier achieved 94.56%, followed by the SVM classifier, which achieved 82% accuracy. Further, in terms of F1-score, there is an improvement of 21.38%, 19.16%, and 5.2% using the RF, LSTM, and SVM classifiers, respectively, compared to schemes using the same dataset.

1. Introduction

In the present digital era, reviews that are posted on websites, applications, and social media platforms hold great significance. These reviews act as evaluations of various services, products, and places. People rely on these assessments to decide whether to use a particular service, purchase a product, or book a place. Additionally, these reviews have a profound impact on companies, as they shape their product features, services, and marketing campaigns based on customer feedback. Opinion-mining tools have been developed to assist businesses and decision-makers in improving product quality and enhancing sales and revenue. These tools include sentiment classification, feature-based opinion-mining, comparative sentences, and opinion searches [1].
However, reviews can also present a double-edged sword. Companies can limit who can provide reviews on their applications by linking them to serial or reservation numbers. Nevertheless, individuals can write reviews on social media platforms like Twitter and Facebook. Recently, competitors have taken advantage of this situation by employing paid attacks that can negatively impact business development and influence people’s decisions [2]. These paid attacks are often carried out by bots, making it difficult to determine whether the reviews are genuine or spam. Detecting spam in reviews involves two aspects: spam detection and spammer detection. Spam detection mainly focuses on classifying the submitted text as human-generated or bot-generated [3].
Spammer detection is a process that focuses on identifying the source of spam and determining whether it comes from an individual or a group of spammers. There are three techniques that can be utilized to identify spam or spammers. The first two techniques involve natural language processing (NLP) and product feature detection, and they apply to the text. The third technique involves analyzing the behavior of the reviewer, which includes examining their internet protocol (IP) address, the repeatability of their reviews, and the timing of their submissions [4]. Spamming in social media can lead to several issues, such as cluttering the feeds of consumers, making it difficult to find relevant and valuable content. Spam links and comments may also contain harmful information that can be exploited to distribute malware or engage in phishing scams. Moreover, spam content may include hate speech, which can worsen racial tensions and societal problems [5]. Ensemble learning is a powerful technique that involves combining multiple machine learning algorithms to achieve better performance than using these algorithms individually [5]. The study by [6] is considered one of the fundamental studies in ensemble learning; it introduced a technique for dividing the feature space by employing multiple classifiers. The authors in [7] showed that an ensemble of identical ANN classifiers performed considerably better than a single classifier in terms of prediction performance.
Schapire [8] proposed the boosting technique, which transforms a weak classifier into a strong one. Boosting has paved the way for robust algorithms such as AdaBoost, gradient boosting, and extreme gradient boosting (XGBoost) [9]. Ensemble learning is a technique that combines the predictions of multiple individual learners to obtain a more accurate prediction than what a single model can produce. There are two types of ensemble methods: parallel and sequential. Parallel methods involve training different base classifiers independently and combining their predictions using a combiner. Two popular parallel ensemble methods are the Bagging and Random Forest algorithms. These methods encourage diversity among ensemble members by generating base learners in parallel [10]. Sequential ensembles, like boosting algorithms, train models iteratively to correct the errors made by previous models. They do not fit the base models independently. Parallel ensembles are further classified into homogeneous and heterogeneous. Homogeneous ensembles comprise models built using the same machine learning algorithm, while heterogeneous ensembles are made up of models from different algorithms. The success of ensemble learning approaches depends on the accuracy and diversity of the base learners. Accuracy denotes the capability of a model to generalize efficiently based on unseen instances of the input data, while diversity refers to the differences in errors among the base learners [10].
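To make the parallel/sequential distinction concrete, the short sketch below (not part of the original study) contrasts bagging, Random Forest, and boosting using scikit-learn; the toy data and parameter values are illustrative assumptions only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy data standing in for any labelled feature matrix (illustrative only).
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Parallel (homogeneous) ensembles: base learners are built independently of one another.
bagging = BaggingClassifier(n_estimators=50, random_state=42)
forest = RandomForestClassifier(n_estimators=100, random_state=42)

# Sequential ensemble: each new learner concentrates on the errors of its predecessors.
boosting = GradientBoostingClassifier(n_estimators=100, random_state=42)

for name, model in [("Bagging", bagging), ("Random Forest", forest), ("Boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} mean CV accuracy: {scores.mean():.3f}")
```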
Most of the research studies on spam detection are conducted in the English language. Very few studies have been carried out to detect spam in the Arabic language. Arabic is a rich natural language, and its dialects and script variations across the Arabian Peninsula pose several unique linguistic challenges. For example, a range of diacritics that change not only the representation but the entire meaning of a word, contextual and semantic diversity, a blend of native and modern language inflections, and special symbols with their diverse uses are a few among the many characteristics that differentiate Arabic from Western languages. Therefore, there is a need to develop robust approaches to identify spam messages in Arabic, covering different dialects as well as Modern Standard Arabic (MSA). The existing research is mainly confined to email spam detection and other social media, such as Facebook and YouTube, while tweet spam detection is limited. Moreover, the studies on tweet spam detection either use a limited dataset or, in some cases, translate the text into English prior to spam tweet detection. This research focuses on identifying spam tweets in Arabic by considering the impact of text preprocessing on the identification process. The same entities are used for all the models during the training and testing phases. Various classifiers from machine learning (ML) and deep learning (DL) are employed to identify spam tweets, along with different Arabic NLP preprocessing and feature extraction techniques.
The rest of the paper is organized as follows: Section 2 provides an overview of the previous research conducted in the field of spam text detection. Section 3 provides the background of the ML and DL algorithms used in the study. Section 4 presents the proposed scheme. Section 5 presents the results and discussion, while Section 6 concludes the paper.

2. Literature Review

2.1. Related Studies in English Tweet Spam Detection

The study in [11] addresses spam targeting web pages, describing the types of spam and proposing a methodology based on feature extraction and classification algorithms for spam detection, achieving 81.8% recall and 83.1% precision with ADTree using the best features. Bahnsen et al. [12] propose a flexible and intelligent model using classification algorithms and data mining methods to detect phishing websites, achieving an accuracy rate of 98.7% and highlighting the relationship between website features for future detection frameworks. Preethi and Velmayil [13] present a method for analyzing phishing URLs using lexical analysis, employing a pre-phish algorithm and machine learning methods to classify phishing and non-phishing URLs, achieving a 97.83% accuracy and a 1.82% false predictive rate. Nagaraj et al. [14] propose an ensemble machine learning model, combining Random Forest and neural networks, for classifying phishing websites with a prediction accuracy of 93.41%. Ubing et al. [15] present an ensemble learning approach based on feature elicitation and plural voting, achieving an accuracy of 95% for phishing website detection, surpassing existing methods. In recent studies, different models have been proposed for detecting spam and inappropriate content. The authors of [16] proposed a model that uses content-based features, such as word frequency count, sentiment polarity, and review length, to achieve an accuracy of 86.32% with the Naive Bayes classifier.
Jain et al. [17] proposed a multi-instance learning model and a convolutional neural network (CNN) model with Gated Recurrent Units (GRU) for text classification, achieving the highest accuracy of 91.9% with the CNN-GRU model. Mani et al. [18] introduced an ensemble technique that combines the Naive Bayes, Random Forest (RF), and Support Vector Machine (SVM) classifiers using N-gram features and achieves the highest average accuracy of 87.68% for detecting spam reviews. Siddique et al. [19] developed a model for email content discovery and classification, employing CNN, Naïve Bayes (NB), LSTM, and SVM algorithms, with the LSTM model achieving a maximum accuracy of 98.4% for perceiving and categorizing inappropriate and unsolicited spam emails written in Urdu. Lastly, Dewis and Viana [20] proposed “Phish Responder”, a Python-based solution that combines deep learning and NLP techniques to detect spam and phishing emails, achieving the highest average accuracy of 99% using the LSTM model for textual datasets and 94% with the MLP model for numerical datasets. Alzaqebah et al. [21] propose an improved version of the Multi-Verse Optimizer (MVO) algorithm for feature selection in cybercrime classification problems, demonstrating the superiority of the improved algorithm (IMVO) in maintaining solution diversity and improving searchability. AbdulNabi and Yaseen [22] examine machine and deep learning algorithms, including Bidirectional Encoder Representations from Transformers (BERT), for spam and phishing email detection, showing that the BERT model achieves a maximum accuracy and F1-score of 98.67% and 98.66%, respectively, compared to other classifiers.
After a brief review, it is apparent that spam detection, especially tweet spam detection in the English language, has achieved significant improvements in terms of accuracy and other evaluation metrics. This is largely due to the development of well-established models and the consistent use of accents and dialects in the language, which makes NLP more efficient and effective in detecting spam and analyzing other types of social media content.
Table 1 provides a summary of studies conducted on spam detection in the English language. It also includes the methods and classifiers used, type of dataset, and the evaluation results in terms of accuracy, precision, recall and F1-score.

2.2. Related Studies in Arabic Tweet Spam Detection

Al-Kabi et al. [23] propose a system for ranking Arabic web pages and detecting spam based on content and link features. The system utilizes user feedback to improve its performance and demonstrates an improvement compared to other methods in terms of performance and accuracy. Ghourabi et al. [24] employ machine learning techniques to detect spam SMS messages in Arabic and English. They propose a hybrid deep learning model combining LSTM and CNN and evaluate it against various classification algorithms. The CNN-LSTM model achieves superior performance, with an accuracy, precision, recall, F1-score, and AUC of 98.37%, 95.39%, 87.87%, 91.48%, and 93.7%, respectively. Mohammed et al. [25] present an intelligent and adaptive learning approach for detecting spam emails. They propose a visual anti-spam model using a trainable Naive Bayes classifier trained in Arabic, English, and Chinese. The proposed model efficiently detects and filters spam emails, achieving an overall accuracy of 98.4%, a false positive rate of 0.08%, and a false negative rate of 2.90%. Alkadri [26] proposes an integrated Twitter spam detection framework focusing on Arabic content. The framework combines NLP, data augmentation, and supervised ML algorithms. The model achieves a total accuracy of 92% and improves the F1-score from 58% to 89% by increasing the data. It is worth mentioning that this accuracy was obtained for a selected and small subset of the actual dataset. The authors in [27] propose four techniques for identifying spam in Arabic reviews, combining ML techniques with rule-based classifiers and employing content-based features such as N-grams and negation processing. The group approach achieves 95.25% and 99.98% classification accuracies on the DOSC and HARD datasets, respectively, outperforming existing work by 25%. Alzanin and Azmi [28] propose two learning models, semi-supervised learning using the Expectation–Maximization (E-M) algorithm and unsupervised learning using the NB algorithm, for detecting fake Arabic tweets. The semi-supervised learning model performs better, with an accuracy of 78.6%, using features based on tweets and topics. A study [29] conducted a systematic literature review on the use of AI strategies for crime prediction. The review analyzed 120 research papers and identified various crime analysis types, types of crimes studied, prediction techniques, performance metrics, and the strengths, weaknesses, and limitations of the proposed methods. The review describes that supervised machine learning is the most commonly used method and provides guidance for researchers in the field of smart crime prediction.
Alotaibi et al. [30] address improving customer service for the Saudi Telecom Company (STC) in Saudi Arabia. The researchers analyze tweets from the Twitter platform to measure user satisfaction and identify their sentiments and criticisms. They propose a BERT-based model for spam detection and sentiment analysis in imbalanced data from Arabic tweets. The model is trained using a dataset of 24,513 Arabic tweets, and its performance is evaluated using F1-score, accuracy, and recall metrics. The results demonstrate that the MARBERT model performs well in Arabic multi-label sentiment analysis, outperforming existing techniques in the literature with an F1-score of 75%. Alorini and Rawat introduced a dataset in their study [31], which consisted of Gulf Dialectical Arabic (Gulf DA) translated into English. The purpose of this dataset was to build a Gulf Knowledge Base (GulfKB). The researchers then utilized Bayesian inference in the GulfKB model-based reasoning to identify malicious content and suspicious users. Through numerical evaluation, they demonstrated that their approach achieved an accuracy of 91% and surpassed other existing methods described in the current literature.
Alghamdi and Khan introduced an intelligent system in their research [32] that analyzes Arabic tweets to identify suspicious messages. The system uses supervised machine learning algorithms to detect suspicious activities in Arabic tweets and involves collecting a dataset of Arabic tweets and manually labeling them as suspicious or not suspicious. Six supervised machine learning algorithms were evaluated, and the support vector machine algorithm outperformed the others, achieving a mean accuracy of 86.72%. The study has contributed to the field by developing a labeled dataset of Arabic tweets and establishing a statistical benchmark for future research. This system can be an effective tool for law enforcement agencies to identify suspicious messages and prevent crime.
Alhassun and Rassam conducted a study [33] with the aim of assessing the effectiveness of a combined framework of text and metadata in detecting spam from Arabic Twitter accounts. The researchers examined whether account suspensions could serve as an indicator of Arabic spam accounts. The long short-term memory (LSTM)-combined model achieved high precision and recall rates of 94% and 93.8%, respectively, outperforming the logistic regression (LR) and SVM approaches. The proposed framework demonstrated its superiority by achieving the highest accuracy of 94.27% in the combined model. Despite the challenges posed by Arabic tweets and their high sensitivity, the text-based model utilizing convolutional neural networks (CNN) performed well, with an accuracy of 80%. Kaddoura et al. [34] presented a deep learning and classical machine learning approach to Arabic tweet spam classification. In this regard, they have collected a dataset and labelled it manually [35]. N-gram methods were applied for feature extraction and joined with SVM, NN, NB and LR, while Global vector (GloVe) and fastText models were used for the deep learning approaches that outperformed the aforementioned models.
Table 2 summarizes the techniques involving Arabic spam detection in various social media datasets. Techniques in [26,30] used Arabic Twitter datasets similar or close to the current study that also involve an additionally collected dataset.
Based on the comprehensive review of the literature (Table 2), it is apparent that the existing research in the Arabic language is mainly confined to email spam detection and other social media, such as Facebook and YouTube, while tweet spam detection (based on tweet text, not account) is somewhat limited and there is much room for improvement in terms of accuracy and other figures of merit. Moreover, the studies on tweet spam detection either use a limited, self-generated dataset or, in some cases, translate the tweets into English prior to spam tweet detection [31]. Therefore, the situation demands a comprehensive Arabic tweet spam detection approach with a diverse dataset and improved accuracy. The proposed study aims to fill this potential research gap.

3. Materials and Methods

3.1. Ensemble Machine Learning Techniques and Algorithms

Machine learning originates from pattern recognition and artificial intelligence, specifically within the subfield of computer science. It is closely intertwined with computational statistics and primarily revolves around prediction. Over the past few years, significant research efforts in machine learning have been dedicated to various domains, including NLP, computer vision, pattern recognition, cognitive computing, and knowledge representation. These areas represent critical application areas for machine learning techniques, enabling advancements in language understanding, image analysis, pattern detection, cognitive modeling, and the representation of knowledge in computational systems [37]. Ensemble learning is a technique that combines multiple machine learning (ML) algorithms to achieve a better performance compared to using individual algorithms alone. Based on the literature review, we have shortlisted SVM and NB as classical ML techniques, while RF is shortlisted as an ensemble technique [36,37,38,39].
RF models are machine learning techniques that forecast the output by combining the results of a set of decision trees. Each tree is built separately and is based on a random vector sampled from the input data, with the same distribution for all trees in the forest. The NB technique is a simple probabilistic text categorization algorithm that models each attribute within each class. It has been effectively used for various problems and applications, but it excels in NLP. Likewise, SVM is a powerful supervised machine learning model for text and other data classification, regardless of the size of the dataset [36,37,38,39].
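For illustration, the shortlisted classifiers can be instantiated as follows; this is a minimal sketch assuming scikit-learn, and the hyperparameter values shown are common defaults rather than the settings tuned in this study.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

# The shortlisted models; the settings shown are common defaults, not the tuned
# values used in the study (which are not reported in full).
models = {
    "RF":  RandomForestClassifier(n_estimators=100, random_state=42),  # ensemble of decision trees
    "NB":  MultinomialNB(),              # probabilistic model, well suited to word-count/TF-IDF features
    "SVM": SVC(kernel="linear", C=1.0),  # linear kernel is a common choice for sparse text features
}

# Each model would then be fitted on the training feature matrix, e.g. models["RF"].fit(X_train, y_train)
```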

3.2. Deep Learning Techniques

Deep learning is a type of machine learning that focuses on training artificial neural networks with several layers, also known as deep neural networks. These networks are designed to imitate the structure and function of the human brain, with interconnected layers of artificial neurons. One of the main advantages of deep learning is its ability to automatically learn hierarchical representations from raw data. Traditional machine learning approaches often require feature engineering, which involves manually designing and selecting relevant features from the input data. In contrast, deep learning learns these features automatically as part of the model training process, eliminating the need for manual feature engineering and allowing the model to extract complex and abstract representations directly from the data [40]. Long Short-Term Memory (LSTM) is a type of recurrent neural network (RNN) architecture that addresses the challenge of capturing long-term dependencies in sequential data. It introduces a memory unit and gate mechanism, which enables the network to selectively remember or forget information over a sequence of inputs. In the current study, we have employed LSTM as a deep learning model because of its effectiveness in similar problems, as observed in the literature [41]. Though the techniques investigated in the proposed study are classical and exist in the literature, the nature of the dataset, the preprocessing techniques, and the handling of native Arabic NLP are tasks in themselves, and they make the study novel and distinctive.
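As a rough illustration of the LSTM configuration described later in Section 5 (a 64-unit hidden layer trained for 30 epochs), the following Keras sketch builds a binary spam classifier; the vocabulary size, embedding dimension, and sequence length are assumptions, since they are not reported here.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Only the 64-unit hidden layer and 30 training epochs are reported in the paper;
# vocabulary size, embedding dimension, and sequence length below are assumptions.
VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 100, 50

model = keras.Sequential([
    keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),  # word index -> dense vector
    layers.LSTM(64),                        # gated memory cells capture long-range context
    layers.Dense(1, activation="sigmoid"),  # binary spam / non-spam output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Training would follow as: model.fit(X_train_pad, y_train, epochs=30, validation_split=0.1)
```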

3.3. Synthetic Minority Over-Sampling Technique (SMOTE)

SMOTE is a technique used for data augmentation that balances the class distribution by creating synthetic examples of the minority class. Unlike duplicating existing minority class instances, SMOTE generates synthetic samples by interpolating between neighboring instances in the feature space. This process helps in adding more data points to the dataset and better understanding the distribution of the classes in the dataset, and is known as oversampling. Similarly, if it is performed in reverse where instances of one class are reduced to equate with the other, this is known as undersampling [42]. In Figure 1, we can see undersampling on the left and how we reduce samples to balance the classes; on the right, we can see oversampling and how we multiply one class to achieve a balanced dataset.
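The following sketch illustrates the oversampling behaviour of SMOTE using the imbalanced-learn library on synthetic data; the class ratio and parameter values are illustrative assumptions, not the study's exact configuration.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Toy imbalanced data, roughly the 20/80 spam/non-spam ratio reported for the dataset.
X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=42)
print("before oversampling:", Counter(y))

# SMOTE creates new minority samples by interpolating between a minority instance
# and one of its k nearest minority-class neighbours in feature space.
X_res, y_res = SMOTE(k_neighbors=5, random_state=42).fit_resample(X, y)
print("after oversampling: ", Counter(y_res))
```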

3.4. Natural Language Processing (NLP)

NLP is a field of study in artificial intelligence that aims to enable computers to understand, interpret, and process human language for various purposes, such as opinion mining, sentiment analysis, and affect detection in text. In today’s world, NLP is essential due to the vast amount of unstructured data that is available. Basic tasks, such as content rating, subject discovery and modeling, contextual extraction, sentiment analysis, speech-to-text, text-to-speech, automatic document summarization, and machine translation, are often used in high-level NLP capabilities [43].

3.5. Dataset

For the present study, we have investigated a diverse dataset obtained from three different sources. Firstly, the dataset was collected from Twitter using the API. Secondly, it was obtained from a study conducted by Alotaibi et al. [30]. Thirdly, it was sourced from a recent study [26] for additional comparisons. The objective behind aggregating data from various sources was to develop a comprehensive model for detecting tweet spam, with diverse dialects from the Arabic region, including MSA and others.

4. Proposed Approach

This section presents the proposed approach, followed in conducting the research. Figure 2 shows the research methodology flowchart.
Research Steps:
  • Read the data from Twitter using the pandas library of the Python language and extract the data frame.
  • Preprocessing: This step is crucial when applying AI algorithms because raw data is often not directly compatible with them.
  • NLP: This step is essential for converting the data into a form to which AI can be applied. It includes normalizing letters (converting letters with multiple forms into a single form), tokenizing the text (converting each word into a token to prepare the data for the next step), and lemmatizing (converting each word to its root).
  • Feature extraction means converting each word to a number and replacing each word with its number. This step is essential to convert non-numerical data into numerical data suitable for AI.
  • Balancing: when one class has more samples than the other, performance suffers, so balancing means generating samples for the class that has fewer samples until it is balanced with the other class.
  • ML and deep learning: This step builds and trains ensemble ML and DL models on the prepared data.
  • Evaluation: comparing accuracy and other metrics to evaluate two or more models and select the best one.

4.1. Data Preprocessing

In this step, all unwanted words, characters, and URLs are deleted, and clean data are generated. This step is crucial for applying machine learning (ML) and deep learning (DL) algorithms because, in many cases, they cannot work directly with raw, noisy text. In addition, this step is essential for building an accurate and suitable model for the data entered. This step includes normalizing the text so that it contains no URLs. It further includes removing punctuation and diacritics to eliminate all the unwanted and useless characters in Arabic text, because these characters negatively affect the model’s performance. The detailed data preprocessing pipeline is illustrated in Figure 3 and described subsequently. The input text is normalized, then several eliminations take place, such as diacritics, hashtags, punctuation symbols, and stop words. After that, tokenization and lemmatization are performed to make the text ready for the next phase.
(a)
Text Normalization: Normalization is the process of reducing letters to their basic form. As the Arabic language is morphologically rich, it requires normalization. For instance, Tatweel (elongation) is removed (e.g., “كتــــــــــــــــــــــــــــــــــــــــــــــاب” becomes “كتاب”). Table 3 presents the normalized form of certain Arabic letters.
(b)
Removing diacritics, punctuation (e.g., ‘+*/…), and repeated characters: this cleans and standardizes the text data for further analysis. For instance, “أَكَلَ مُحَمَّد تُفَاحَة” becomes “أكل محمد تفاحة”; the Arabic diacritic marks are shown in Table 4, taken from our previous work [44].
(c)
Eliminating hashtags, user references or indications, and URLs.
(d)
Eliminating punctuation symbols, for instance, full stops and commas, because they do not play any significant role in spam detection.
(e)
Eliminating stop words: Stop words contribute to the formation of the language but usually do not contribute to its subject matter. For instance, الذي, هذا, من are a few Arabic stop words. The Arabic stop words are collected from various Arabic sources [44]. A few examples of stop words are given in Table 5.
(f)
Tokenization: Convert the text into tokens, individual words, or meaningful units to facilitate further analysis. After the tokenization step, the data becomes separable and more adequate for the analysis.
(g)
Lemmatization: Convert each word to its base or root form to reduce inflectional variations and ensure consistency. In existing studies, stemming was used for this purpose, though it is vulnerable to the over-stemming and under-stemming phenomena. Though somewhat more computationally expensive, lemmatization is considerably more accurate, since it keeps the context intact while returning the word’s base form, i.e., the lemma, from the dictionary. It handles grammar efficiently and delivers an accurate representation of the language (a minimal sketch of these steps follows the list).
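The sketch below illustrates steps (a)–(f) on a toy example; the regular expressions, stop-word subset, and whitespace tokenizer are simplifying assumptions, and lemmatization is left as a placeholder since the specific Arabic lemmatizer is not named above.

```python
import re

ARABIC_DIACRITICS = re.compile(r"[\u064B-\u0652]")  # tanwin, fatha, damma, kasra, shadda, sukun
TATWEEL = "\u0640"

def normalize_arabic(text: str) -> str:
    """Minimal sketch of the normalization and cleaning rules (illustrative, not exhaustive)."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # remove URLs
    text = re.sub(r"[@#]\S+", " ", text)                  # remove mentions and hashtags
    text = ARABIC_DIACRITICS.sub("", text)                # remove diacritic marks (Table 4)
    text = text.replace(TATWEEL, "")                      # remove elongation (Tatweel)
    text = re.sub("[إأآا]", "ا", text)                     # alif variants -> bare alif (Table 3)
    text = text.replace("ى", "ي").replace("ة", "ه")        # alif maqsura / ta marbuta
    text = re.sub("[ئؤ]", "ء", text)                       # hamza carriers -> hamza
    text = re.sub(r"[^\w\s]", " ", text)                   # remove punctuation symbols
    text = re.sub(r"(.)\1{2,}", r"\1", text)               # collapse repeated characters
    return re.sub(r"\s+", " ", text).strip()

STOP_WORDS = {"الذي", "هذا", "من", "في", "اذا"}  # tiny illustrative subset of the stop-word list

def preprocess(text: str) -> list[str]:
    tokens = normalize_arabic(text).split()  # simple whitespace tokenization
    # Lemmatization of each token would follow here via an Arabic NLP toolkit (not specified above).
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("أَكَلَ مُحَمَّد تُفَاحَة من #التفاح https://example.com"))
```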

4.2. Split, Training and Testing Dataset

Dataset splitting involves dividing the available data into two or more subsets, which are used to create separate training and testing datasets for machine learning models. The purpose of this process is to evaluate the performance of a model on an independent dataset that it has not previously seen during training. Typically, the dataset is split into a training set and a testing set. The training set is used to train the model, while the testing set is used to evaluate the model’s performance. Alternatively, cross-validation can be used, in which the dataset is divided into multiple subsets, each of which is used for training and testing. To split the dataset into train and test sets, we used 80% of the dataset for training and 20% for testing. The training set was used to train the model to recognize patterns and make predictions. The testing set was used to evaluate the trained model’s generalization ability to new, unseen data. It is important to note that the testing dataset is independent/exclusive of the training dataset.
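A minimal sketch of the 80/20 split using scikit-learn is shown below; the stand-in data and the use of stratification are assumptions for illustration.

```python
from sklearn.model_selection import train_test_split

# `texts` and `labels` stand in for the preprocessed tweets and their spam labels.
texts = ["tweet %d" % i for i in range(100)]
labels = [int(i % 5 == 0) for i in range(100)]  # ~20% "spam" for illustration

# 80/20 split; stratify keeps the class ratio identical in both subsets
# (stratification is an assumption; the text above only states the 80/20 ratio).
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.20, random_state=42, stratify=labels
)
print(len(X_train), len(X_test))  # 80 20
```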

4.3. Feature Extraction

In the current study, since we are working with textual data, there is a need to extract important features to improve the accuracy of spam prediction. Feature extraction is the process of converting textual data into numerical data that are suitable for prediction. This is achieved by converting words in comments to numerical symbols, also known as sequences, where each word or letter is assigned a code. For instance, the word “negatives” may be encoded as “1”. Whenever this word appears in any text, email, or tweet, it is replaced with its corresponding symbol “1”. To ensure that all sentences have the same length, the padding sequence method is used. This involves adding zeros to shorter lines to match the length of longer lines, resulting in uniform lengths for the texts. Various techniques, such as term frequency–inverse document frequency (TF-IDF), word embeddings, and bag-of-words, are used for feature extraction, as explained subsequently.
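The word-to-integer encoding and zero padding described above can be sketched with the Keras text utilities as follows; the vocabulary size, maximum sequence length, and toy tweets are assumptions for illustration.

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

tweets = ["محمد أكل التفاحة", "أكل محمد تفاحة حمراء لذيذة"]  # toy preprocessed tweets

# Map each word to an integer code, as described above.
tokenizer = Tokenizer(num_words=20000, oov_token="<OOV>")
tokenizer.fit_on_texts(tweets)
sequences = tokenizer.texts_to_sequences(tweets)

# Zero-pad shorter tweets so every sequence has the same length.
padded = pad_sequences(sequences, maxlen=10, padding="post")
print(tokenizer.word_index)
print(padded)
```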

4.3.1. Term Frequency–Inverse Document Frequency (TF-IDF)

TF-IDF is widely used for text classification and feature extraction. Term frequency is the number of times a word appears within a document, as given in Equation (1). The inverse document frequency reflects how common or rare a word is in the entire record set, as given in Equation (2). So, if the word is ubiquitous and appears in many records, this value will be low; otherwise, it will be high. The TF-IDF weight of a term in a document is the product of the two, as given in Equation (3).
$TF(t,d) = \log\big(1 + \mathrm{freq}(t,d)\big)$ (1)
$IDF(t) = \log\!\left(\frac{n}{DF(t)}\right) + 1$ (2)
$TF\text{-}IDF(t,d) = TF(t,d) \times IDF(t)$ (3)

4.3.2. Sequence of N Words (N-Gram)

N-grams extract the sequence of N words. In the proposed approach, we used Unigram with range (min: 1, max: 1). For example, the sentence “محمد أكل التفاحة” should be divided into {‘محمد,’ ‘أكل,’ ‘التفاحة’}.
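A minimal sketch of unigram TF-IDF extraction with scikit-learn is given below; note that scikit-learn's TF-IDF weighting differs slightly in form from Equations (1)–(3), and the toy tweets are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["محمد أكل التفاحة", "أكل محمد تفاحة"]  # toy examples as in Section 4.3.2

# Unigram TF-IDF features, matching the (min: 1, max: 1) range used above.
vectorizer = TfidfVectorizer(ngram_range=(1, 1))
X_tfidf = vectorizer.fit_transform(tweets)

print(vectorizer.get_feature_names_out())  # the learned unigram vocabulary
print(X_tfidf.toarray().round(3))          # one TF-IDF vector per tweet
```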

4.4. Dataset Balancing

The dataset obtained from the various sources contained an imbalanced number of instances. In general, a ratio of about 20–80 was observed between spam and non-spam tweets, respectively. It is apparent that the dataset is imbalanced, which may lead to unfair analysis. To balance the dataset, the SMOTE technique is applied. This technique not only balances the dataset classes but also promotes fairness in the analysis, since each class takes an equal part in the model training and evaluation.

4.5. Model Evaluation

Machine learning and deep learning models are often evaluated using accuracy and error to determine the relationship between predicted and actual values. To evaluate the performance of a proposed model on a given dataset, four measures are typically used: accuracy, F1-score, recall, and precision, as cited in references [45,46,47]. These formulas are expressed in terms of True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts.
  • Accuracy is the ratio of correctly classified (TP and TN) outcomes to the total number of classified instances (TP, TN, FP and FN). It can be calculated with the following equation:
$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$ (4)
  • The recall is calculated as the percentage of positive tweets (TP) correctly identified by the model in the dataset. It can be calculated using the following equation:
$Recall = \frac{TP}{TP + FN}$ (5)
  • The precision measure represents the proportion of true positive (TP) tweets among all forecasted positive tweets (TP and FP), and is calculated using the following equation:
$Precision = \frac{TP}{TP + FP}$ (6)
  • The F1-score is a measure that combines precision and recall in a harmonic mean. The equation to calculate the F1-score is as follows:
$F1\text{-}score = \frac{2 \times Precision \times Recall}{Precision + Recall}$ (7)
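The sketch below computes the four metrics directly from the confusion-matrix counts and cross-checks them against scikit-learn; the labels and predictions are toy values for illustration.

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, precision_score, recall_score

# Toy labels and predictions; 1 = spam, 0 = non-spam (illustrative values only).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy :", (tp + tn) / (tp + tn + fp + fn), accuracy_score(y_true, y_pred))
print("Recall   :", tp / (tp + fn), recall_score(y_true, y_pred))
print("Precision:", tp / (tp + fp), precision_score(y_true, y_pred))
print("F1-score :", 2 * tp / (2 * tp + fp + fn), f1_score(y_true, y_pred))  # 2PR/(P+R)
```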
It is worth mentioning that the distinction between the proposed and the existing models is mainly based on the diverse preprocessing methods applied to the Arabic text prior to the model building, by incorporating a refined list of diacritics, stop words and others. That has eventually contributed to better feature extraction and improved models’ training and effectiveness, as evident in the next section. That is the main reason why the use of widely known machine learning and deep learning techniques yields such a clear improvement in the results.

5. Results and Discussion

This section presents the results of all classifiers in detail and demonstrates various effects of Unigram and TF-IDF on text classification for each model. After preprocessing the dataset and extracting the features, the dataset is fed to the classifiers to determine whether a tweet is spam. The following sections summarize the results of the two experiments.

5.1. Results

The proposed models, including RF as an ensemble learning model, LSTM as a deep learning model, and SVM and NB as classical machine learning models, have been implemented in the Python programming language using the aforementioned dataset. These models were selected mainly because they have thrived on similar problems in the literature.
The Random Forest classifier was the first model we trained. This algorithm created a set of decision trees, each of which was trained on a different subset of the data using a random selection of features. By combining the predictions of multiple trees, the Random Forest classifier aimed to increase the overall accuracy and robustness of the model. Additionally, we performed hyperparameter tuning to optimize the model’s performance. After evaluation, the model achieved an accuracy of 96.57%, a precision of 95%, a recall of 97.80%, and an F1-score of 96.38%. These results are consistent, significant, and promising when compared to similar models in the literature.
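The paper does not report the tuning procedure or search space; the sketch below shows one common approach (grid search with cross-validation) over a hypothetical parameter grid, with stand-in data, purely to illustrate how such tuning is typically performed.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in features and labels; in practice these would be the TF-IDF training matrix and spam labels.
X_train, y_train = make_classification(n_samples=300, random_state=42)

# Hypothetical search space; the actual tuned parameters are not reported.
param_grid = {
    "n_estimators": [100, 200, 300],
    "max_depth": [None, 20, 40],
    "max_features": ["sqrt", "log2"],
}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)
print(search.best_params_, round(search.best_score_, 3))
```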
Similarly, after hyperparameter tuning, the LSTM model (second experiment) was configured with 64 neurons in the hidden layer and trained for 30 epochs. The algorithm achieved an accuracy of 94.58%, a precision of 91.25%, a recall of 97.28%, and an F1-score of 94.16%. While these results are slightly lower than the RF algorithm’s results, they are still consistent, substantial, and promising when compared to results on a similar dataset in the literature. The RF algorithm’s edge is mainly due to the power of ensemble classification; it outperforms the LSTM model by approximately 2% in accuracy, 3.75% in precision, 0.52% in recall, and 2.22% in F1-score.
It was found that the classical ML algorithms SVM and NB showed a relatively much poorer performance compared to the RF and LSTM models (third experiment). The SVM algorithm achieved accuracy, precision, recall, and F1-score of 82.07%, 74.98%, 86.27%, and 80.2%, respectively. Meanwhile, the NB algorithm exhibited the poorest performance, with accuracy, precision, recall, and F1-score of 66.41%, 67.31%, 65.86%, and 66.3%, respectively. In terms of performance, SVM outperformed NB by margins of 15.66%, 7.67%, 20.41%, and 13.9% for accuracy, precision, recall, and F1-score, respectively. On the other hand, LSTM outperformed SVM by margins of 12.51%, 16.27%, 11.01%, and 13.96% in accuracy, precision, recall, and F1-score, respectively.
Table 6 presents the obtained performance results for the proposed algorithms for all four metrics including accuracy, precision, recall and F1-score, respectively.
Figure 4 presents the results obtained from all four algorithms, providing a comparison in terms of accuracy, precision, recall, and F1-score, respectively, against all the proposed algorithms.

5.2. Comparison with State-of-the-Art Approaches

A comparison with state-of-the-art approaches is conducted against schemes that use the same datasets. In the literature, the F1-score is a commonly used metric for comparison, since it balances other metrics, namely precision and recall [48,49,50]. A comparison between the proposed approach and Alotaibi et al.’s [30] work is conducted as they have a common dataset. The technique in [30] achieved an F1-score of 75%, whereas the proposed approach obtained F1-scores of 96.38% and 94.16% for the ensemble learning and deep learning models, respectively. The proposed SVM model outperformed it by 5.2%, while LSTM and RF outperformed it by 19.16% and 21.38%, respectively. Similarly, another scheme by Alkadri et al. [26] with an identical dataset collected from Saudi Arabia yielded the highest F1-score of 89% using SVC, while the proposed RF and LSTM schemes outperformed it by 7.38% and 5.16%, respectively. However, SVM underperformed by 6.8% in this regard, as shown in Figure 5.

5.3. Discussion

This study proposed classical machine learning and deep learning models for tweet spam detection in Arabic. In this regard, four algorithms were investigated, including SVM, NB, RF, and LSTM. A comparison was made among the four algorithms in terms of accuracy, precision, recall, and F1-score. After training the models on the combined dataset, the results were presented and reviewed in the previous section. By examining and analyzing the practical results of the proposed models, several criteria were adopted for comparing the algorithms. The dataset relies not only on the presence of potentially suspicious URLs, but primarily on the text and its meaning or semantics, as this is the best indicator for determining whether a tweet is spam. Here, we also take into account the number of followers, likes, and retweets. Moreover, in the NLP part, various dialects have been catered for, including Modern Standard Arabic (MSA). Finally, it was observed that Random Forest and LSTM were good choices for classifying Arabic texts, in contrast to SVM and NB. The experimental results demonstrate that Random Forest predicts many labels accurately due to its ensemble nature, and the LSTM obtains good results in terms of accuracy and loss without overfitting.
In contrast to the English language, Arabic language tweet spam detection involves more preprocessing with diverse operations. This makes Arabic tweet spam detection more complicated and vulnerable to classification errors. For instance, the diversity of dialects, diacritic marks, and punctuation symbols, as well as the type and number of grammatical rules, makes it different and complicated compared to the English language. So, Arabic tweet spam detection involves additional effort, starting from dataset collection and including preprocessing and the training and evaluation of the models.
Regarding the limitations of the study, it should be noted that the dataset used for analysis is somewhat restricted. Nonetheless, the tweets were collected from a diverse range of users with different Arabic dialects. To enhance the dataset, it is recommended to employ data augmentation techniques. It is also recommended to use advanced feature extraction techniques and encoders to further fine-tune the results, for example, word-to-vector (word2vec) and global vectors for word representation (GloVe) embeddings, along with Modern Arabic Bidirectional Encoder Representations from Transformers (MARBERT), a large-scale pre-trained masked language model focused on both Dialectal Arabic (DA) and MSA [30,50].

6. Conclusions

The purpose of this study was to identify spam tweets in Arabic by utilizing machine learning and deep learning techniques. Four different models, namely Support Vector Machine, Naïve Bayes, Random Forest, and LSTM, were tested and evaluated using a combined dataset that was collected and combined with existing datasets. The experimental results revealed that the Random Forest classifier achieved the highest accuracy, precision, recall, and F1-score, followed by the LSTM model. There were no signs of overfitting observed. However, the SVM and NB models performed relatively poorly in terms of all metrics, with SVM performing better than NB overall. The proposed models exhibited a promising and improved performance in contrast to closely related state-of-the-art approaches. These findings suggest that ensemble and deep learning models are suitable for classifying Arabic tweets and are superior to other methods. In the future, the authors intend to investigate stacking ensemble models and transfer learning using more enriched and augmented datasets. Moreover, the researchers in the field may investigate other feature extraction methods and preprocessing techniques within the existing problem, such as word to vector (word2vec) and global vectors for word representation (GloVe).

Author Contributions

Conceptualization, A.R.; Data curation, W.H.H.; Methodology, A.R. and W.H.H.; Software, W.H.H.; Supervision, A.R.; Writing—original draft, W.H.H.; Writing—review and editing, A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data can be provided by the corresponding author based on a reasonable request.

Conflicts of Interest

The authors have no conflicts of interest to declare regarding the current study.

References

  1. Atta-ur-Rahman; Dash, S.; Luhach, A.K.; Chilamkurti, N.; Baek, S.; Nam, Y. A Neuro-fuzzy approach for user behaviour classification and prediction. J. Cloud Comp. 2019, 8, 1–15. [Google Scholar] [CrossRef]
  2. Alqahtani, A.; Alhaidari, F.; Rahman, A.; Mahmud, M.; Sultan, K. Decision Support System Assisted E-Recruiting System. J. Comput. Theor. Nanosci. 2019, 16, 335–340. [Google Scholar]
  3. Sajid, N.A.; Rahman, A.; Ahmad, M.; Musleh, D.; Basheer Ahmed, M.I.; Alassaf, R.; Chabani, S.; Ahmed, M.S.; Salam, A.A.; AlKhulaifi, D. Single vs. Multi-Label: The Issues, Challenges and Insights of Contemporary Classification Schemes. Appl. Sci. 2023, 13, 6804. [Google Scholar] [CrossRef]
  4. Rahman, A.; Alrashed, S.A.; Abraham, A. User Behaviour Classification and Prediction Using Fuzzy Rule Based System and Linear Regression. J. Inf. Assur. Secur. 2017, 12, 86–93. [Google Scholar]
  5. Aljabri, M.; Mohammad, R.M.A. Click fraud detection for online advertising using machine learning. Egypt. Inform. J. 2023, 24, 341–350. [Google Scholar] [CrossRef]
  6. Al-Azani, S.; El-Alfy, E.-S.M. Detection of Arabic spam tweets using word embedding and machine learning. In Proceedings of the 2018 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Sakhier, Bahrain, 18–20 November 2018; pp. 1–5. [Google Scholar] [CrossRef]
  7. Dasarathy, B.; Sheela, B. A composite classifier system design: Concepts and methodology. Proc. IEEE 1979, 67, 708–713. [Google Scholar] [CrossRef]
  8. Hansen, L.K.; Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 993–1001. [Google Scholar] [CrossRef]
  9. Schapire, R.E. The strength of weak learnability. Mach. Learn. 1990, 5, 197–227. [Google Scholar] [CrossRef]
  10. Polikar, R. Ensemble Learning in Ensemble Machine Learning: Methods and Applications; Springer: Boston, MA, USA, 2012; pp. 1–34. [Google Scholar]
  11. Modi, J.H. Detection of Web Spam using Different Classification Algorithms. Int. J. Eng. Res. Technol. IJERT 2014, 3, 718–720. [Google Scholar]
  12. Bahnsen, A.C.; Bohorquez, E.C.; Villegas, S.; Vargas, J.; Gonzalez, F.A. Classifying phishing URLs using recurrent neural networks. In Proceedings of the 2017 APWG Symposium on Electronic Crime Research (eCrime), Scottsdale, AZ, USA, 25–27 April 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  13. Preethi, V.; Velmayil, G. Automatic phishing website detection using URL features and machine learning technique. Int. J. Eng. Tech. 2016, 2, 107–115. Available online: http://www.ijetjournal.org (accessed on 1 December 2019).
  14. Nagaraj, K.; Bhattacharjee, B.; Sridhar, A.; Gs, S. Detection of phishing websites using a novel twofold ensemble model. J. Syst. Inf. Technol. 2018, 20, 1328–7265. [Google Scholar] [CrossRef]
  15. Ubing, A.A.; Kamilia, S.; Abdullah, A.; Jhanjhi, N.; Supramaniam, M. Phishing website detection: An improved accuracy through feature selection and ensemble learning. Int. J. Adv. Comput. Sci. Appl. IJACSA 2019, 10, 252–257. [Google Scholar] [CrossRef]
  16. Hassan, R.; Islam, R. Detection of fake online reviews using semi-supervised and supervised learning. In Proceedings of the International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’sBazar, Bangladesh, 7–9 February 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  17. Jain, N.; Kumar, A.; Singh, S.; Singh, C.; Tripathi, S. Deceptive Reviews Detection Using Deep Learning Techniques; Springer Nature: Cham, Switzerland, 2019. [Google Scholar]
  18. Mani, S.; Kumari, S.; Jain, A.; Kumar, P. Spam review detection using ensemble machine learning. In Proceedings of the Machine Learning and Data Mining in Pattern Recognition: 14th International Conference, MLDM 2018, New York, NY, USA, 15–19 July 2018; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  19. Bin Siddique, Z.; Khan, M.A.; Din, I.U.; Almogren, A.; Mohiuddin, I.; Nazir, S. Machine Learning-Based Detection of Spam Emails. Sci. Program. 2021, 2021, 6508784. [Google Scholar] [CrossRef]
  20. Dewis, M.; Viana, T. Cyber and Phish Responder: A Hybrid Machine Learning Approach to Detect Phishing and Spam Emails. Appl. Syst. Innov. 2022, 5, 73. [Google Scholar] [CrossRef]
  21. Alzaqebah, M.; Jawarneh, S.; Mohammad, R.M.A.; Alsmadi, M.K.; Almarashdeh, I. Improved Multi-Verse Optimizer Feature Selection Technique with Application to Phishing, Spam, and Denial of Service Attacks. Int. J. Commun. Netw. Inf. Secur. IJCNIS 2021, 13, 76–81. [Google Scholar] [CrossRef]
  22. AbdulNabi, I.; Yaseen, Q. Spam Email Detection Using Deep Learning Techniques. Procedia Comput. Sci. 2021, 184, 853–858. [Google Scholar] [CrossRef]
  23. Al-Kabi, M.N.; Wahsheh, H.A.; Alsmadi, I.M. OLAWSDS: An Online Arabic Web Spam Detection System. Int. J. Adv. Comput. Sci. Appl. 2014, 5, 105–110. [Google Scholar]
  24. Ghourabi, A.; Mahmood, M.A.; Alzubi, Q.M. A Hybrid CNN-LSTM Model for SMS Spam Detection in Arabic and English Messages. Future Internet 2020, 12, 156. [Google Scholar] [CrossRef]
  25. Mohammed, M.A.; Ibrahim, D.A.; Salman, A.O. Adaptive intelligent learning approach based on visual anti-spam email model for multi-natural language. J. Intell. Syst. 2021, 30, 774–792. [Google Scholar] [CrossRef]
  26. Alkadri, A.M.; Elkorany, A.; Ahmed, C. Enhancing Detection of Arabic Social Spam Using Data Augmentation and Machine Learning. Appl. Sci. 2022, 12, 11388. [Google Scholar] [CrossRef]
  27. Saeed, R.M.; Rady, S.; Gharib, T.F. An ensemble approach for spam detection in Arabic opinion texts. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 1407–1416. [Google Scholar] [CrossRef]
  28. Alzanin, S.M.; Azmi, A.M. Rumor detection in Arabic tweets using semi-supervised and unsupervised expectation-maximization. Knowl. Based Syst. 2019, 185, 104945. [Google Scholar] [CrossRef]
  29. Dakalbab, F.; Abu Talib, M.; Abu Waraga, O.; Nassif, A.B.; Abbas, S.; Nasir, Q. Artificial intelligence & crime prediction: A systematic literature review. Soc. Sci. Humanit. Open 2022, 6, 100342. [Google Scholar]
  30. Alotaibi, A.; Rahman, A.-U.; Alhaza, R.; Alkhalifa, W.; Alhajjaj, N.; Alharthi, A.; Abushoumi, D.; Alqahtani, M.; Alkhulaifi, D. Spam and sentiment detection in Arabic tweets using MARBERT model. Math. Model. Eng. Probl. 2022, 9, 1574–1582. [Google Scholar] [CrossRef]
  31. Alorini, D.; Rawat, D.B. Bayesian reasoning based malicious data discovery on gulf-dialectical arabic tweets. In Proceedings of the 2018 IEEE International Symposium on Technology and Society (ISTAS), Washington, DC, USA, 13–14 November 2018; pp. 133–138. [Google Scholar] [CrossRef]
  32. AlGhamdi, M.A.; Khan, M.A. Intelligent Analysis of Arabic Tweets for Detection of Suspicious Messages. Arab. J. Sci. Eng. 2020, 45, 6021–6032. [Google Scholar] [CrossRef]
  33. Alhassun, A.S.; Rassam, M.A. A Combined Text-Based and Metadata-Based Deep-Learning Framework for the Detection of Spam Accounts on the Social Media Platform Twitter. Processes 2022, 10, 439. [Google Scholar] [CrossRef]
  34. Kaddoura, S.; Alex, S.A.; Itani, M.; Henno, S.; AlNashash, A.; Hemanth, D.J. Arabic spam tweets classification using deep learning. Neural Comput. Appl. 2023, 35, 17233–17246. [Google Scholar] [CrossRef]
  35. Kaddoura, S.; Henno, S. Dataset of Arabic spam and ham tweets. Data Brief 2024, 52, 109904. [Google Scholar] [CrossRef]
  36. Hassan, S.I.; Elrefaei, L.; Andraws, M. Arabic Tweets Spam Detection Based on Various Supervised Machine Learning and Deep Learning Classifiers. MSA Eng. J. 2023, 2, 1099–1119. [Google Scholar] [CrossRef]
  37. Thomas, R.N.; Gupta, R. A survey on machine learning approaches and its techniques. In Proceedings of the 2020 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 22–23 February 2020; pp. 1–6. [Google Scholar] [CrossRef]
  38. Alabbad, D.A.; Ajibi, S.Y.; Alotaibi, R.B.; Alsqer, N.K.; Alqahtani, R.A.; Felemban, N.M.; Rahman, A.; Aljameel, S.S.; Ahmed, M.I.B.; Youldash, M.M. Birthweight Range Prediction and Classification: A Machine Learning-Based Sustainable Approach. Mach. Learn. Knowl. Extr. 2024, 6, 770–788. [Google Scholar] [CrossRef]
  39. Musleh, D.A.; Alkhwaja, I.; Alkhwaja, A.; Alghamdi, M.; Abahussain, H.; Alfawaz, F.; Min-Allah, N.; Abdulqader, M.M. Arabic Sentiment Analysis of YouTube Comments: NLP-Based Machine Learning Approaches for Content Evaluation. Big Data Cogn. Comput. 2023, 7, 127. [Google Scholar] [CrossRef]
  40. Pouyanfar, S.; Sadiq, S.; Yan, Y.; Tian, H.; Tao, Y.; Reyes, M.P.; Shyu, M.-L.; Chen, S.-C.; Iyengar, S.S. A survey on deep learning. ACM Comput. Surv. 2018, 51, 1–36. [Google Scholar] [CrossRef]
  41. Lindemann, B.; Müller, T.; Vietz, H.; Jazdi, N.; Weyrich, M. A survey on long short-term memory networks for time series prediction. Procedia CIRP 2021, 99, 650–655. [Google Scholar] [CrossRef]
  42. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  43. Qureshi, M.A.; Asif, M.; Anwar, S.; Shaukat, U.; Rahman, A.; Khan, M.A.; Mosavi, A. Aspect level songs rating based upon reviews in English. Comput. Mater. Contin. 2023, 74, 2589–2605. [Google Scholar]
  44. Alqarni, A.; Rahman, A. Arabic Tweets-Based Sentiment Analysis to Investigate the Impact of COVID-19 in KSA: A Deep Learning Approach. Big Data Cogn. Comput. 2023, 7, 16. [Google Scholar] [CrossRef]
  45. Musleh, D.A.; Alkhales, T.A.; Almakki, R.A.; Alnajim, S.E.; Almarshad, S.K.; Alhasaniah, R.S.; Aljameel, S.S.; Almuqhim, A.A. Twitter Arabic sentiment analysis to detect depression using machine learning. Comput. Mater. Contin. 2022, 71, 3463–3477. [Google Scholar]
  46. Jan, F.; Rahman, A.; Busaleh, R.; Alwarthan, H.; Aljaser, S.; Al-Towailib, S.; Alshammari, S.; Alhindi, K.R.; Almogbil, A.; Bubshait, D.A.; et al. Assessing Acetabular Index Angle in Infants: A Deep Learning-Based Novel Approach. J. Imaging 2023, 9, 242. [Google Scholar] [CrossRef]
  47. Ahmed, M.I.B.; Saraireh, L.; Rahman, A.; Al-Qarawi, S.; Mhran, A.; Al-Jalaoud, J.; Al-Mudaifer, D.; Al-Haidar, F.; AlKhulaifi, D.; Youldash, M.; et al. Personal Protective Equipment Detection: A Deep-Learning-Based Sustainable Approach. Sustainability 2023, 15, 13990. [Google Scholar] [CrossRef]
  48. Ahmed, M.I.B.; Alabdulkarem, H.; Alomair, F.; Aldossary, D.; Alahmari, M.; Alhumaidan, M.; Alrassan, S.; Rahman, A.; Youldash, M.; Zaman, G. A Deep-Learning Approach to Driver Drowsiness Detection. Safety 2023, 9, 65. [Google Scholar] [CrossRef]
  49. Ahmed, M.S.; Rahman, A.; AlGhamdi, F.; AlDakheel, S.; Hakami, H.; AlJumah, A.; AlIbrahim, Z.; Youldash, M.; Alam Khan, M.A.; Basheer Ahmed, M.I. Joint Diagnosis of Pneumonia, COVID-19, and Tuberculosis from Chest X-ray Images: A Deep Learning Approach. Diagnostics 2023, 13, 2562. [Google Scholar] [CrossRef]
  50. Musleh, D.; Rahman, A.; Alkherallah, M.A.; AlBo-Hassan, M.K.; Alawami, M.M.; Alsebaa, H.A.; Alnemer, J.A.; Al-Mutairi, G.F.; Aldossary, M.I.; Aldowaihi, D.A.; et al. Machine Learning Approach to Cyberbullying Detection in Arabic Tweets. Comput. Mater. Contin. 2024, 80, 1–21. [Google Scholar] [CrossRef]
Figure 1. Oversampling and undersampling phenomena.
Figure 2. Methodology of the proposed study.
Figure 3. Preprocessing pipeline.
Figure 4. Comparison of RF and LSTM models.
Figure 5. Comparison with state-of-the-art approaches.
Table 1. Summary of the techniques in the English language.

Ref. | Year | Method/Classifier | Dataset | Evaluation
[11] | 2023 | LADTree, Naïve Bayes and SVM using WEKA | The dataset collects both feature and link content | Precision 83.1% and recall 81.8%
[12] | 2017 | RF and RNN | The database comprised one million legitimate URLs | Accuracy of 98.7%
[15] | 2019 | Logistic Regression (LR), RF, prediction model | A dataset of phishing websites from the university repository | Accuracy of 95%
[16] | 2019 | NB | A dataset comprising around 1600 reviews in textual form from twenty hotels in the USA | Accuracy of 86.32%
[18] | 2018 | NB, RF, and SVM | The dataset contains 10-fold cross-validation | Accuracy of 87.68%
[19] | 2021 | CNN, NB, LSTM, and SVM | Spam emails from Kaggle | The highest accuracy achieved by the LSTM model was 98.4%
[20] | 2023 | LSTM, MLP and Phish Responder | Spam base (numeric), Phishing Email Collection (numerical), Spam Email Dataset (text), Spam Email (text), Spam Classification for Basic NLP (text), Spam Email (numerical) | LSTM with textual datasets: accuracy 99%; MLP with numerical datasets: accuracy 94%
[22] | 2021 | DNN (Deep Neural Network) | Two open-source datasets were used for email spam | Accuracy of 98.67% and F1-score of 98.66%
Table 2. Summary of the techniques in the Arabic language.

Ref. | Year | Method/Classifier | Dataset | Evaluation
[6] | 2019 | Word embedding with machine learning (DT, NB and SVM) | Publicly available dataset of 3503 tweets | Accuracy 87.33% for word2vec with SVM
[26] | 2022 | SVM, NB, and LR | Arabic tweets dataset of 1.6 million instances collected over a span of five months | Improvement in F1-score from 58% to 89%; a total accuracy of 92% with a small and selected dataset
[28] | 2019 | Semi-supervised expectation–maximization (E-M) and supervised Gaussian NB | Self-collected 271,000 Arabic tweets, consisting of 89 rumor and 88 non-rumor events | Semi-supervised learning model accuracy 78.6%
[30] | 2023 | BERT, MARBERT | Arabic tweets dataset with 24,513 instances | F1-score 75%
[31] | 2019 | Bayesian reasoning | Over 2000 Arabic tweets (translated into English) from Gulf dialects: Saudi, Kuwaiti, Emirati, Bahraini, Qatari, and Omani | Accuracy 91%
[32] | 2020 | DT, KNN, Linear Discriminant Algorithm (LAD), SVM, ANN, and LSTM | Arabic tweets collected using the API | SVM with the highest accuracy of 86.72%
[33] | 2022 | CNN | The dataset was obtained to detect spam accounts | CNN alone accuracy: 80%; combined model accuracy: 94.27%
[34] | 2023 | SVM, NN, LR, and NB; GloVe and FastText models with DL | Self-collected and labelled Arabic tweets dataset [35] | FastText with DL outperformed the rest of the models
[36] | 2023 | SVM, CNN-LSTM | Automatically generated Arabic tweets | SVM with unigram: accuracy 83.11%; CNN-LSTM: 82.65%
Table 3. Examples of some letters in normalized form.

Letter | Normalized Form
إ, أ, آ, ا | ا
ى | ي
ئ | ء
ؤ | ء
ة | ه
كـ | ك
Table 4. Arabic diacritic marks [44].

Diacritic Mark | Character
Fatha | ـَ
Tashdeed | ـّ
Tanwin Fath | ـً
Damma | ـُ
Tanwin Damm | ـٌ
Kasra | ـِ
Tanwin Kasr | ـٍ
Sukun | ـْ
Table 5. Example of Arabic stop words.

# | Word
1 | انها
2 | اثناء
3 | اجل
4 | في
5 | احيانا
6 | اذا
7 | ايضا
Table 6. Performance evaluation of the proposed models.

Algorithm | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
RF | 96.57 | 95 | 97.8 | 96.38
LSTM | 94.58 | 91.25 | 97.28 | 94.16
SVM | 82.07 | 74.98 | 86.27 | 80.2
NB | 66.41 | 67.31 | 65.86 | 66.3
