Review

A Systematic Literature Review and Meta-Analysis of Studies on Online Fake News Detection

Department of Information Technology, Durban University of Technology, Durban 4001, South Africa
*
Author to whom correspondence should be addressed.
Information 2022, 13(11), 527; https://doi.org/10.3390/info13110527
Submission received: 10 October 2022 / Revised: 1 November 2022 / Accepted: 1 November 2022 / Published: 4 November 2022
(This article belongs to the Section Review)

Abstract

The ubiquitous access to and exponential growth of information available on social media networks have facilitated the spread of fake news, complicating the task of distinguishing between fake and real news. Fake news is a significant societal problem that has a profoundly negative impact on society. Despite the large number of studies on fake news detection, their findings have not yet been combined to offer coherent insight into trends and advancements in this domain. Hence, the primary objective of this study was to fill this knowledge gap. The method for selecting the pertinent articles for extraction was created using the preferred reporting items for systematic reviews and meta-analyses (PRISMA). This study reviewed deep learning, machine learning, and ensemble-based fake news detection methods through a meta-analysis of 125 studies in order to aggregate their results quantitatively. The meta-analysis primarily focused on statistics and the quantitative analysis of data from numerous separate primary investigations to identify overall trends. The results of the meta-analysis were reported by spatial distribution, the approaches adopted, the sample size, and the performance of the methods in terms of accuracy. The between-study variance statistics indicated high heterogeneity, with τ2 = 3.441; the ratio of true heterogeneity to total observed variation was I2 = 75.27%, with a heterogeneity chi-square (Q) = 501.34, degrees of freedom = 124, and p ≤ 0.001. A p-value of 0.912 from Egger's statistical test confirmed the absence of publication bias. The findings of the meta-analysis demonstrated that the approaches recommended in the included primary studies on fake news detection were satisfactorily effective. Furthermore, the findings can inform researchers about the various approaches they can use to detect online fake news.

1. Introduction

The rapid adoption of social media has significantly altered the manner in which people live their lives, resulting in newspapers and other traditional news sources becoming less relevant. “Social media” refers to platforms such as Twitter and Facebook that help individuals from around the globe build networks and share information and/or sentiments in real time [1]. Owing to the broad usage of social media, the pervasive availability of the internet, and the little to no oversight of social media platforms, social media is an ideal medium for propagating disinformation or fake news.
The term “fake news” can be described as claims or stories that are purposefully and verifiably untrue and attempt to pass themselves off as news or journalistic reports [2,3,4]. However, it can be challenging for average people to distinguish this type of news from the plethora of information publicly available because of restrictions in knowledge and experience. Researchers have examined fake news from various viewpoints and produced a basic classification of several categories of fake news [5]. According to Shu et al. [6], the classification of fake news detection includes knowledge-based, style-based, source-based, and propagation-based methods. Knowledge-based methods check the truthfulness of claims by verifying whether the knowledge contained in the news content is accurate. This method is seen as a superior option for scalable fact checking [6,7]. Style-based methods aim to detect the distinctions between the writing styles of fake and real news. The text, images, and/or videos contained in to-be-verified content can be used to extract journalistic style, allowing one to infer the goal of the news content [6,8]. Propagation-based methods detect fake news based on how news propagates on social networks [6,8]. Source-based methods demonstrate that fake news can be detected by examining the credibility of its source, where credibility is frequently characterized in terms of quality and believability, offering plausible grounds for belief [6,8].
Fake news is not a new problem and has been spreading for many years [9]. Although such incorrect or misleading news is propagated intentionally to make society trust false information, distinguishing fake news from authentic news based on shared content has become increasingly challenging. Because social media actively fosters the flow of information from user to user, it is difficult to spot fake news content. As a result of its global diffusion and the inability of humans to keep up with the rapid spread of news on the internet, dealing with misinformation is a challenging undertaking.
By broadcasting misleading and biased information, fake news has the potential to damage people’s trust in authorities, experts, and the government. Furthermore, this type of news has serious consequences for society, politics, information technology, and financial issues, as well as for everyone who lives in a cyber environment where there is a lack of trust [10]. The rise of fake news on social media has compelled the progress of research for accurately detecting these news instances. As a result, researchers have developed a variety of approaches, with some claiming to be superior to others. Therefore, a well-established, accurate-focused approach to detecting online fake news is urgently needed to mitigate its significant influence and harm to society.
Therefore, researchers have developed a variety of detection methods that rely on artificial intelligence (AI) techniques, including deep learning [11,12,13,14], machine learning [15,16,17,18], and ensemble [19,20,21] approaches. Numerous review and survey papers in this domain have been published as a result of the extensive collection of studies on the subject. The vast majority of the review studies already published, such as Collins et al. [22], Choraś et al. [23], Varlamis et al. [24], Shahid et al. [25], Khan et al. [26], and Lozano et al. [27], are descriptive rather than providing a quantitative evaluation of techniques for detecting fake news. Hence, a meta-analysis is necessary, as it allows for a credible analysis of the findings from the published literature to uncover varied perspectives [28]. Furthermore, a meta-analysis often improves reliability and corroborates the results of previous studies on the detection of online fake news. Given the lack of meta-analyses aimed at identifying the most suitable detection approach, the purpose of this study was to utilize meta-analysis as a statistical tool to assess the efficacy of the different approaches proposed in primary studies on fake news detection conducted independently in the literature. The following are the unique contributions of this study:
  • The discovery of a variety of sources in research on the detection of online fake news can help researchers make better decisions by identifying appropriate AI approaches for detecting fake news online.
  • The examination of publication bias to establish the reliability of the main conclusions of research on detection methods.
  • The identification of studies that contribute most to the heterogeneity of the detection studies.
The remainder of the paper is structured as follows: the related work to review various methods suggested in the literature for spotting fake news is covered in Section 2. The material and methods are presented in Section 3 in both theoretical and applied forms. Section 4 presents the results and discussion, while Section 5 presents the study conclusion.

2. Related Works

Traditional news media generally rely on news content for the identification of fake news, as opposed to social media, where additional social context auxiliary information can be used as supplementary information to help detect fake news. The use of supervised fake news detection models based on machine learning (ML) and deep learning (DL) techniques has significantly expanded in recent years due to their excellent detection accuracy. These methods extract the distinguishing characteristics of fake news using feature representation based on linguistic and visual data [6]. Linguistic-based characteristics are derived from many levels of textual content organization, such as characters, words, phrases, and documents. Visual-based features are derived from visual resources such as images and videos in order to recognize the numerous characteristics of fake news. With the reported increase in online fake news [29,30], automated methods for its detection on social media have attracted the attention of researchers worldwide [31,32,33]. COVID-19 and the numerous related hoaxes, rumors, and misinformation surrounding the cures, treatment, and prevention have further fueled the interest of researchers in improved methods for detection [34]. Even with this increased attention, the task of detecting fake news is still reported as challenging [35].
Through the analysis of the literature relating to this area, it is evident that a diverse range of ML and DL approaches as well as hybrid and ensemble versions of these have been employed. This section presents the literature relating to the approaches mentioned above.
Several researchers have developed ML methods for the detection of fake news. Vicario et al. [36] built a logistic regression (LR) classifier to predict this type of news using a massive Italian dataset consisting of actual news and hoaxes published on Facebook, achieving an accuracy of 91%. The LR method also achieved the highest accuracy (96%) in the study by Stitini et al. [37], where Bidirectional Encoder Representations from Transformers (BERT) transformed the dataset text into vectors. Random forest (RF) often emerges as the method achieving the most accurate results, with an accuracy of 97.3% reported by Fayaz et al. [38]. The study used data from the ISOT fake news dataset and compared results with other state-of-the-art machine learning techniques such as gradient boosting machines (GBM), extreme gradient boosting (XGBoost), and the adaptive boosting (AdaBoost) regression model. Support vector machine (SVM) models have also shown promising results, with an accuracy of 93.15% being achieved when applying the data from the fake news dataset extracted from Kaggle, outperforming the LR approach applied to the same data by 6.82% [39].
While many researchers investigate the performance of individual ML methods, some researchers chose to investigate the effect of applying an ensemble of ML methods on the data to achieve improved accuracy results. A blended ensemble machine learning method that applies the LR, SVM, linear discriminant analysis, stochastic gradient descent, and ridge regression techniques achieved 79.9% accuracy when data from the ISOT and LIAR datasets were used [40]. Accuracies over 95% have been achieved by many studies that have applied voting ensemble methods to the datasets, including Elsaeed et al. [41], Verma et al. [42], Biradar et al. [43], Kanagavalli and Priya [44], and Elhadad, Li, and Gebali [21], who achieved accuracy measures of 95.6%, 96.7%, 97%, 98.6%, and 99.7%, respectively. These works based their results on data from different datasets, including ISOT, WELFake, COVID19 Fake, LIAR, and researcher-created datasets.
DL methods such as convolutional neural networks (CNN), long short-term memory (LSTM), and bidirectional long short-term memory (BiLSTM) have attracted much interest in the area of fake news detection. Galli et al. [45] applied both ML and DL methods to datasets, comparing the results obtained. It was established that, by applying the CNN technique to the limited PolitiFact dataset, an accuracy of 75.6% was achieved. The study reported that the CNN method outperformed the other approaches investigated, which include, among others, naive Bayes (NB), RF, LR, nearest neighbor (NN), decision tree, gradient boosting, and BiLSTM. BiLSTM has also been investigated for its value in detecting fake news by many other researchers [46,47,48,49]. With most studies focusing on the English language, both Fouad, Sabbeh, and Medhat [47] and Nassif, Elnagar, Elgendy, and Afadar [48] investigated the accuracy of state-of-the-art classification methods for the identification of fake news in the Arabic language. Fouad, Sabbeh, and Medhat [47] evaluated the performance of eight machine learning algorithms and also experimented with five different combinations of deep learning algorithms, including CNN and LSTM, with the results indicating that the BiLSTM method outperformed the other methods, achieving an accuracy of 75% on a dataset of size 4561. Nassif, Elnagar, Elgendy, and Afadar [48] created a customized dataset based on tweets that consisted of 5000 fake and 5000 true news instances. Their Arabic Bidirectional Encoder Representations from Transformers model (ARBERT) achieved 98.8% accuracy on the data.
Ensemble deep learning approaches have also been investigated for their value as detection methods, with the novel MisRoBÆRTa technique proposed by Truică and Apostol [20]. The technique combines a CNN with multiple BiLSTM networks, achieving an accuracy of 92.5% when tested on a dataset with a sample size of 100,000. Jang et al. [50] collected data from Twitter and classified tweets as fake news by using the temporal propagation pattern of the retweeted quotes. The authors applied a two-phase deep learning model based on CNN and LSTM for training and testing, achieving an accuracy measure of 85.7%. An ensemble-based deep learning technique for classifying news as real or fake achieved a significant accuracy of 89.8% using data from the LIAR dataset [51]. The approach used two deep learning models: a BiLSTM-gated recurrent unit (GRU) was used for the textual “statement” attribute, while a deep dense learning model was used on the remaining nine attributes.
While these studies all reported on the accuracy of the employed methods, the literature also includes studies that survey and review current approaches. In the paper by Collins, Hoang, Nguyen, and Hwang [22], a synthesis of methods for combating misinformation and fake news on social media is presented, while possible solutions, methodological gaps, and challenges relating to current detection methods were presented in a systematic review by Choraś, Demestichas, Giełczyk, Herrero, Ksieniewicz, Remoundou, Urda, and Woźniak [23]. Similarly, Shahid, Jamshidi, Hakak, Isah, Khan, Khan, and Choo [25], through a survey of novel AI approaches, uncovered key challenges in the area while also highlighting potential future research to be considered. An approach-specific survey by Varlamis, Michail, Glykou, and Tsantilas [24] investigated and reported on the studies that apply graph convolutional networks (GCNs) for detecting rumors, fake content, and fake accounts, with the aim of the paper being to provide a starting point for those researchers wanting to further investigate GCNs for the detection of fake news. Both Khan, Hakak, Deepa, Dev, and Trelova [26] and Lozano, Brynielsson, Franke, Rosell, Tjörnhammar, Varga, and Vlassov [27] chose to rather review ML models, providing a set of advantages and disadvantages associated with the datasets used in the reviewed studies. Additionally, Shu, Sliva, Wang, Tang, and Liu [6] provided a thorough analysis from a data mining perspective and emphasized the future research prospects according to four categories: data-oriented, feature-oriented, model-oriented, and application-oriented. One of the potential study areas for fake news detection that Shu, Sliva, Wang, Tang, and Liu [6] suggested is model-oriented fake-news research, which opens the path for the development of more effective and useful models based on supervised and unsupervised approaches to fake news detection.
While the current literature provides insight into the latest methods being employed and highlights reviews that have been performed, there appears to be no single study that quantitatively analyzes the current methods proposed for fake news detection. Furthermore, no systematic, comprehensive study of model-oriented fake news detection based on supervised learning techniques such as ML, DL, and ensemble methods has been conducted. Lozano, Brynielsson, Franke, Rosell, Tjörnhammar, Varga, and Vlassov [27] also highlight the lack of literature that considers multiple datasets and multiple approaches for detection. With the increasing number of publications in this research area and the reported proliferation of fake news, a systematic analysis is required so that an objective and comprehensive understanding of current supervised approaches can be obtained. The results provide valuable insight to researchers in the field regarding the DL, ML, and ensemble methods that were applied. This study, therefore, aimed to identify current trends, approaches, and methods for online fake news detection. Through meta-analysis, the patterns and correlations that exist in the area of ML, DL, and ensemble methods were unveiled and reported on.

3. Materials and Methods

This section provides a detailed description of the data extraction method applied as well as the criteria applied for the selection of relevant works. The meta-analysis is also presented and the measures are described.

3.1. Literature Search Strategy

A search of the literature was conducted to identify all published studies reporting on fake news detection. Following the recommendations of the preferred reporting items for systematic reviews and meta-analyses (PRISMA) [52], the literature search strategy, screening and selection of publications, identification of parameters to be extracted, quality assessment, data extraction into tabular format, and reporting of results were carried out [53]. The researchers searched the Web of Science academic database to find pertinent published articles for this meta-analysis study. Previous research revealed that searching only one database is sufficient, as checking additional databases has a minimal effect on the meta-analysis outcome [54,55]. On 17 August 2022, the Web of Science database was searched for English-language papers published between 2014 and 2022.
The search terms used during a comprehensive literature search were: (“Fake news detection” OR “online fake news” OR “false news” OR (“fake news” AND “social media”) OR (“fake news” AND (“internet” OR “online”))). Between 2014 and 2022, 2159 published articles in total were found before applying exclusion criteria relating to publication years, document types, open access, and languages. This resulted in 945 studies being identified for screening and thereafter imported into Excel. Furthermore, reference lists from relevant papers were manually checked to identify any citations that the electronic database search may have missed.

3.2. Inclusion and Exclusion Criteria

The 945 studies identified for screening were subjected to inclusion and exclusion criteria as shown in Table 1.

3.3. Quality Assessment and Data Extraction

The authors of this study assessed the merits and relevance of each article. From the chosen studies, data that met the inclusion criteria were taken for further analysis. Studies that used an ensemble approach such as voting or stacking were manually labeled as such. Some articles did not explicitly state their method for detecting fake news, but the authors were still able to categorize them based on the approach they presented. Only studies providing 100% of the required information and meeting all the inclusion criteria were retained for the systematic review and meta-analysis [56].
The Excel spreadsheet was populated with article data extracted according to variables listed in Table 2. The resulting database consisted of nine variables, which were populated with both qualitative and quantitative data that were retrieved through the review of the selected studies.
The data for the meta-analysis therefore comprised a matrix with nine fields and 125 rows, consisting of information on fake news detection approaches. The PRISMA flowchart detailing the extraction of relevant studies is presented in Figure 1.

3.4. Data Synthesis and Statistical Analysis

In order to prepare for the statistical analysis, the information was entered into an Excel spreadsheet. These data were then imported into the statistical analysis software STATA version 17. The effect sizes of each included primary study and the total pooled effect size of all primary studies were calculated using the extracted data. The random-effects model served as the groundwork for our analysis. Because the data were collected from published studies by authors who worked independently with various fake news detection datasets, dataset sample sizes, detection approaches, and detection methods, the randomization hypothesis is plausible. Hence, distinct underlying effect sizes of the included studies were presupposed in a random-effects model [57]. Study heterogeneity was determined using Cochran's Q statistic, and τ2 and I2 were employed to quantify it [56]. I2 values of 25%, 50%, and 75% respectively reflect low, medium, and high heterogeneity. The effect sizes were displayed in a forest plot [58] as a preamble to assessing heterogeneity and biases in the results of the included studies. For the purpose of assessing the efficacy of the different fake news detection approaches, a pooled estimate was produced using a DerSimonian and Laird random-effects model.
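The pooling described above can be sketched in a few lines. The following is a minimal illustration of the DerSimonian and Laird estimator; the function name and any inputs are assumptions for demonstration, as the actual analysis was run in STATA 17.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.

    effects   -- per-study effect sizes
    variances -- their within-study (sampling) variances
    Returns (pooled effect, 95% CI, tau^2, Cochran's Q, I^2 in percent).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                        # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)    # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (variances + tau2)          # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, ci, tau2, q, i2
```

Each study contributes only its effect size and variance; STATA's `meta` suite implements the same computation alongside the forest plot used here.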
Furthermore, when conducting a moderator analysis in a systematic review with meta-analysis, subgroup analysis and meta-regression are frequently utilized [57]. To compare a sample of data, subgroup analysis divides participant data into smaller groups. Hence, to identify the source of study heterogeneity, this study performed a subgroup analysis focused on research performance evaluation metrics, i.e., the accuracy, of the included studies. The subgroups were based on the approach used (machine learning, deep learning, ensemble deep learning, ensemble machine learning, hybrid, and sentiment analysis for fake news detection). In addition, to determine if any subsets of the included studies captured the pooled effect size, meta-regression analyses were conducted [59].
In systematic reviews and meta-analyses, publication bias is a possible problem that cannot be avoided. Additionally, it poses one of the biggest risks to the reliability of meta-analysis. The goal of the investigation was to determine to what extent publication bias affects a study’s outcome when judging the reliability of its main findings. In order to report publication bias among the included studies, this study used a funnel plot [60]. Since visual interpretation is subjective, in this investigation, the statistical Egger’s test as well as the visual examination of the funnel plot were used to determine publication bias. p < 0.05 was chosen to denote the statistical significance of publication bias for Egger’s regression test [56].

4. Results

One of the 945 studies identified by the database search was removed because it was a duplicate paper, and four others were eliminated because they were not in English. During the screening process, eight papers were rejected, as only the abstract was available. After analyzing the full text of the remaining 932 papers, a further 807 were rejected for a list of reasons: a review article, not related to fake news detection, not based on DL, ML, or ensemble, no reported sample size, or no reported accuracy. The final meta-analysis was performed on 125 studies.

4.1. Meta-Analysis Summary

To estimate the accuracy of fake news detection methods, random-effects model meta-analyses were performed using the sample size and accuracy, based on the effect size and the standard error of the effect size. Table 3 reveals that the between-study variability was high, with τ2 = 3.4401. The ratio of true heterogeneity to total observed variation, I2 = 75.270% (p < 0.001), was significantly high (25%, 50%, and 75% are respectively considered low, moderate, and high levels [61]), indicating that the variability is due to heterogeneity rather than chance. The high heterogeneity chi-square test result (Q = 501.340) is further evidence of the heterogeneity in effect sizes. An overall random pooled effect size of −9.942 within a 95% confidence interval (CI) of −10.317 to −9.567 was observed. The forest plot (Figure 2) provides a graphical representation of the meta-analysis summary.
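As a sanity check, the reported I2 value follows directly from the Q statistic and its degrees of freedom, since I2 = (Q − df)/Q:

```python
# Reproduce the reported I-squared from Q and the degrees of freedom
q, df = 501.340, 124
i2 = (q - df) / q * 100
print(f"I2 = {i2:.2f}%")  # prints I2 = 75.27%
```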
The Galbraith plot (Figure 3) shows a strong relationship between the sample size and the accuracy achieved by the model used, with the negative slope of the regression line further indicating that the accuracy reduces as the sample size increases. With only eight studies falling outside the 95% CI, this further supports high heterogeneity.

4.2. Subgroup Analysis

While the heterogeneity measures do provide an overall measure, they do not indicate the source of heterogeneity. Subgroup analysis was therefore performed so that the source and level of heterogeneity between each group (each approach) could be determined. Creating subgroups allows comparisons to be made between data groups, and the interpretation can lead to informative insights into the different approaches. Table 4, which provides a summary of the subgroup analysis per approach, shows notable differences. The machine learning approach featured as the primary contributor to the high level of heterogeneity (I2 = 85.70%, heterogeneity chi-square = 181.83, degrees of freedom = 26, and p < 0.001). Hybrid models also had high heterogeneity (I2 = 76.45%, heterogeneity chi-square = 38.22, degrees of freedom = 9, and p < 0.001). Moderate to high heterogeneity was evident for deep learning models (I2 = 70.92%, heterogeneity chi-square = 202.90, degrees of freedom = 59, and p < 0.001). Ensemble machine learning approaches presented with significant moderate heterogeneity (I2 = 56.25%, heterogeneity chi-square = 34.29, degrees of freedom = 15, and p = 0.003), while ensemble deep learning was not significantly heterogeneous and was, therefore, rather homogeneous (I2 = 0.00%, heterogeneity chi-square = 8.70, degrees of freedom = 10, and p = 0.561). The heterogeneity of the sentiment analysis approach could not be determined and was not relevant, as it had degrees of freedom = 0.

4.3. Meta-Regression

Due to the difference in sample sizes, the year of study, and the approach used, the parameters responsible for heterogeneity need to be examined further. Meta-regression was used to investigate the sources of heterogeneity, with parameters of sample size, year, and approach being used as moderators. The results in Table 5 indicate that sample size was the only significant cause of heterogeneity, with p < 0.001. This is further supported by the Bubble plot on year (Figure 4) and on sample size (Figure 5), where the distribution of studies on sample size was far more widespread.
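A meta-regression of this kind is an inverse-variance weighted least-squares fit of effect sizes on a moderator. The sketch below is illustrative only; the function name and inputs are assumptions, not the study's actual STATA model.

```python
import numpy as np

def meta_regression(effects, variances, moderator):
    """Inverse-variance weighted least-squares meta-regression (sketch).

    Regresses per-study effect sizes on a moderator (e.g. sample size
    or publication year), weighting each study by 1/variance.
    Returns the coefficients [intercept, slope].
    """
    y = np.asarray(effects, dtype=float)
    x = np.asarray(moderator, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
    # Solve the weighted normal equations (X' W X) beta = X' W y
    XtW = X.T * w                               # broadcasts weights over studies
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta
```

Fitting with sample size as the moderator and testing whether the slope differs from zero is what identifies sample size as a significant source of heterogeneity.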

4.4. Publication Bias

Publication bias cannot be avoided in systematic reviews and meta-analyses [148], with the literature suggesting that this bias be evaluated so that sound conclusions can be drawn about the extent to which it may influence the generalizability of the findings. The publication bias for this study was visually evaluated using a funnel plot. The symmetrical distribution of studies within the triangular region of Figure 6 indicates that this bias is not of concern. Because the interpretation of a funnel plot relies on visual evaluation, it can be subjective. For this reason, Egger's test was performed as a quantitative measure of publication bias. From Table 6, it is evident that no significant bias exists, as the p-value obtained from Egger's test was 0.912.
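Egger's test regresses the standardized effect (effect/SE) on precision (1/SE) and asks whether the intercept differs from zero. A minimal sketch follows, using a normal approximation for the p-value in place of STATA's exact t-based test; the function name and inputs are illustrative.

```python
import numpy as np
from math import erf, sqrt

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry (sketch, n >= 3).

    Regresses standardized effects (effect/SE) on precision (1/SE);
    a non-zero intercept suggests small-study effects / publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses                          # standardized effects
    x = 1.0 / ses                              # precision
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    z = beta[0] / sqrt(cov[0, 0])              # intercept / its standard error
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value
    return beta[0], p
```

Applied to the extracted effect sizes and standard errors, a non-significant intercept p-value, such as the 0.912 reported in Table 6, indicates a symmetric funnel and hence no evidence of publication bias.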

4.5. Descriptive Statistics of Primary Studies

The publication trends for the years 2019 to 2022 are illustrated in Figure 7, where it is evident that interest in methods for fake news detection increased significantly, from five publications in 2019 to 15 in 2020 and 51 in 2021. The year 2022 reports the highest number of studies, with 54 publications in the timeframe of January to August. The reported plethora of fake news on social media surrounding the COVID-19 pandemic could have contributed to this increased interest in detection methods [79,119]. From Figure 8, it is evident that the methods used most often are those based on a deep learning approach, with Figure 7 showing that this approach has seen an increase in popularity over the years investigated. While the number of studies based on machine learning approaches in 2021 and 2022 was almost the same (Figure 7), it must be noted that the analysis was based on data collected up to August 2022; additional publications are therefore likely to appear in the later months of 2022. Ensemble approaches also have the potential to see an increasing number of publications, but, with this being a newer area, their value is yet to be established.
With DL and ML emerging as the approaches relied upon most often for fake news detection, and with a number of methods existing within each of these approaches, Figure 9 presents the DL and ML methods with the highest frequency of use in the 125 articles analyzed. The most common method applied in DL approaches is CNN (10), followed by LSTM (7) and BiLSTM (6). ML approaches rely most on RF (13), with SVM (3) and LR (3) being used to a far lesser extent.

5. Conclusions

This study employed a systematic review and meta-analysis methodology to quantitatively evaluate fake news detection methods based on DL, ML, and ensemble approaches. A database, created with nine variables related to these methods using data from 125 scientific articles, was the basis for the meta-analysis. For the included studies, effect sizes, heterogeneity, subgroup analysis, meta-regression analysis, and publication bias were all addressed; this was necessary because of the differing sample sizes and approaches used across the included studies. The main approaches used in the literature were deep learning, ensemble deep learning, ensemble machine learning, hybrid, machine learning, and sentiment analysis.
The results led to the following deductions.
  • Deep learning was the most widely used approach, with the CNN method most commonly employed owing to its effective architecture for accurate and efficient detection.
  • The most used method in machine learning is RF. It is capable of handling hundreds of input variables and performs well on large datasets. Additionally, RF estimates the relative importance of every feature and produces a highly accurate classifier.
  • The sample sizes used by each study to establish detection accuracy varied significantly. The sample size and the accuracy of the fake news detection method are strongly negatively correlated. This underscores how crucial it is to use a large number of samples when testing fake news detection methods. Further, the sample size utilized to determine the detection accuracy was a major contributor to heterogeneity.
  • The findings of the study revealed the existence of heterogeneity and only a trivial publication bias, demonstrating the effectiveness of the inclusion and exclusion criteria in reducing bias.
  • Finally, the meta-analysis results revealed that the efficacy of the various proposed approaches from the included primary studies was sufficient for the detection of online fake news.
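The Egger test used to assess publication bias regresses each study's standardized effect on its precision; a non-zero intercept signals funnel-plot asymmetry. The sketch below is our own illustration (the function name is hypothetical, and a normal approximation replaces the exact t-test of the original procedure):

```python
import numpy as np
from math import erf, sqrt

def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry (publication bias).

    Regresses standardized effects (effect / SE) on precision (1 / SE);
    returns the intercept and a normal-approximation two-sided p-value.
    """
    y = np.asarray(effects, dtype=float) / np.asarray(std_errors, dtype=float)
    x = 1.0 / np.asarray(std_errors, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n), x])              # design matrix with intercept
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)                  # residual variance
    se_intercept = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    z = beta[0] / se_intercept
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return beta[0], p
```

A large p-value, such as the 0.912 reported in the abstract, indicates no evidence of funnel-plot asymmetry, i.e., no detectable publication bias.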
The meta-analysis carried out in this work highlights improvements in detection techniques, with a particular focus on deep learning, machine learning, and ensemble approaches. The review findings also support the importance of deep learning and machine learning techniques. The meta-analysis enables transparent, unbiased, and repeatable summaries of fake news detection techniques, and the findings underscore the important relationship between sample size and detection accuracy. This meta-analysis aids in understanding recent developments in the research area; it highlights the current state of the art and, more importantly, provides direction for further investigation of novel methods for detecting fake news.
The review was limited to methods that rely on supervised learning models; further studies considering semi-supervised and unsupervised models may therefore reveal additional results. Moreover, as this study relied on literature from only the Web of Science database, further research that includes multiple databases and performance metrics beyond accuracy is advised.

Author Contributions

Conceptualization, S.J. and T.T.A.; methodology, S.J. and T.T.A.; formal analysis, S.J. and R.C.T.; data curation, S.J., T.T.A. and R.C.T.; writing—original draft preparation, S.J., T.T.A. and R.C.T.; writing—review and editing, S.J., T.T.A. and R.C.T.; funding acquisition, T.T.A. All authors have read and agreed to the published version of the manuscript.

Funding

Durban University of Technology Research Capacity Development Grant Allocation: Emerging Researcher’s Grant.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaplan, A.M.; Haenlein, M. Users of the world, unite! The challenges and opportunities of Social Media. Bus. Horiz. 2010, 53, 59–68. [Google Scholar] [CrossRef]
  2. Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 2017, 31, 211–236. [Google Scholar] [CrossRef] [Green Version]
  3. McNair, B. Fake News: Falsehood, Fabrication and Fantasy in Journalism; Routledge: London, UK, 2017. [Google Scholar]
  4. Ni, S.; Li, J.; Kao, H.-Y. MVAN: Multi-View Attention Networks for Fake News Detection on Social Media. IEEE Access 2021, 9, 106907–106917. [Google Scholar] [CrossRef]
  5. Parikh, S.B.; Atrey, P.K. Media-rich fake news detection: A survey. In Proceedings of the Conference on Multimedia Information Processing And Retrieval (MIPR), Miami, FL, USA, 10–12 April 2018; IEEE: New York, NY, USA, 2018; pp. 436–441. [Google Scholar]
  6. Shu, K.; Sliva, A.; Wang, S.; Tang, J.; Liu, H. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explor. Newsl. 2017, 19, 22–36. [Google Scholar] [CrossRef]
  7. Mahid, Z.I.; Manickam, S.; Karuppayah, S. Fake news on social media: Brief review on detection techniques. In Proceedings of the 2018 Fourth International Conference on Advances in Computing, Communication & Automation (ICACCA), Subang Jaya, Malaysia, 26–28 October 2018; IEEE: New York, NY, USA, 2019; pp. 1–5. [Google Scholar]
  8. Zafarani, R.; Zhou, X.; Shu, K.; Liu, H. Fake news research: Theories, detection strategies, and open problems. In Proceedings of the 25th International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; ACM: New York, NY, USA, 2019; pp. 3207–3208. [Google Scholar]
  9. Rafique, A.; Rustam, F.; Narra, M.; Mehmood, A.; Lee, E.; Ashraf, I. Comparative analysis of machine learning methods to detect fake news in an Urdu language corpus. PeerJ Comput. Sci. 2022, 8, e1004. [Google Scholar] [CrossRef]
  10. Zhang, X.; Ghorbani, A.A. An overview of online fake news: Characterization, detection, and discussion. Inf. Process. Manag. 2020, 57, 102025. [Google Scholar] [CrossRef]
  11. Galende, B.A.; Hernández-Peñaloza, G.; Uribe, S.; García, F.Á. Conspiracy or Not? A Deep Learning Approach to Spot It on Twitter. IEEE Access 2022, 10, 38370–38378. [Google Scholar] [CrossRef]
  12. Raza, S.; Ding, C. Fake news detection based on news content and social contexts: A transformer-based approach. Int. J. Data Sci. Anal. 2022, 13, 335–362. [Google Scholar] [CrossRef]
  13. Choi, J.; Ko, T.; Choi, Y.; Byun, H.; Kim, C.-K. Dynamic graph convolutional networks with attention mechanism for rumor detection on social media. PLoS ONE 2021, 16, e0256039. [Google Scholar]
  14. Bangyal, W.H.; Qasim, R.; Ahmad, Z.; Dar, H.; Rukhsar, L.; Aman, Z.; Ahmad, J. Detection of fake news text classification on COVID-19 using deep learning approaches. Comput. Math. Methods Med. 2021, 2021, 5514220. [Google Scholar] [CrossRef]
  15. Salem, F.K.A.; Al Feel, R.; Elbassuoni, S.; Ghannam, H.; Jaber, M.; Farah, M. Meta-learning for fake news detection surrounding the Syrian war. Patterns 2021, 2, 100369. [Google Scholar] [CrossRef] [PubMed]
  16. Kausar, S.; Tahir, B.; Mehmood, M.A. ProSOUL: A framework to identify propaganda from online Urdu content. IEEE Access 2020, 8, 186039–186054. [Google Scholar] [CrossRef]
  17. Khan, T.; Michalas, A. Seeing and Believing: Evaluating the Trustworthiness of Twitter Users. IEEE Access 2021, 9, 110505–110516. [Google Scholar] [CrossRef]
  18. Panagiotou, N.; Saravanou, A.; Gunopulos, D. News Monitor: A Framework for Exploring News in Real-Time. Data 2021, 7, 3. [Google Scholar] [CrossRef]
  19. Qasem, S.N.; Al-Sarem, M.; Saeed, F. An ensemble learning based approach for detecting and tracking COVID19 rumors. Comput. Mater. Contin. 2021, 70, 1721–1747. [Google Scholar]
  20. Truică, C.-O.; Apostol, E.-S. MisRoBÆRTa: Transformers versus Misinformation. Mathematics 2022, 10, 569. [Google Scholar] [CrossRef]
  21. Elhadad, M.K.; Li, K.F.; Gebali, F. Detecting misleading information on COVID-19. IEEE Access 2020, 8, 165201–165215. [Google Scholar] [CrossRef]
  22. Collins, B.; Hoang, D.T.; Nguyen, N.T.; Hwang, D. Trends in combating fake news on social media–a survey. J. Inf. Telecommun. 2021, 5, 247–266. [Google Scholar] [CrossRef]
  23. Choraś, M.; Demestichas, K.; Giełczyk, A.; Herrero, Á.; Ksieniewicz, P.; Remoundou, K.; Urda, D.; Woźniak, M. Advanced Machine Learning techniques for fake news (online disinformation) detection: A systematic mapping study. Appl. Soft Comput. 2021, 101, 107050. [Google Scholar] [CrossRef]
  24. Varlamis, I.; Michail, D.; Glykou, F.; Tsantilas, P. A Survey on the Use of Graph Convolutional Networks for Combating Fake News. Future Internet 2022, 14, 70. [Google Scholar] [CrossRef]
  25. Shahid, W.; Jamshidi, B.; Hakak, S.; Isah, H.; Khan, W.Z.; Khan, M.K.; Choo, K.-K.R. Detecting and Mitigating the Dissemination of Fake News: Challenges and Future Research Opportunities. IEEE Trans. Comput. Soc. Syst. 2022, 1–14. [Google Scholar] [CrossRef]
  26. Khan, S.; Hakak, S.; Deepa, N.; Dev, K.; Trelova, S. Detecting COVID-19 related Fake News using feature extraction. Front. Public Health 2022, 1967. [Google Scholar] [CrossRef]
  27. Lozano, M.G.; Brynielsson, J.; Franke, U.; Rosell, M.; Tjörnhammar, E.; Varga, S.; Vlassov, V. Veracity assessment of online data. Decis. Support Syst. 2020, 129, 113132. [Google Scholar] [CrossRef]
  28. Field, A.P.; Gillett, R. How to do a meta-analysis. Br. J. Math. Stat. Psychol. 2010, 63, 665–694. [Google Scholar] [CrossRef] [PubMed]
  29. Tembhurne, J.V.; Almin, M.M.; Diwan, T. Mc-DNN: Fake News Detection Using Multi-Channel Deep Neural Networks. Int. J. Semant. Web Inf. Syst. IJSWIS 2022, 18, 1–20. [Google Scholar] [CrossRef]
  30. Awan, M.J.; Yasin, A.; Nobanee, H.; Ali, A.A.; Shahzad, Z.; Nabeel, M.; Zain, A.M.; Shahzad, H.M.F. Fake news data exploration and analytics. Electronics 2021, 10, 2326. [Google Scholar] [CrossRef]
  31. Sharma, D.K.; Garg, S. IFND: A benchmark dataset for fake news detection. Complex Intell. Syst. 2021, 1–21. [Google Scholar] [CrossRef]
  32. Ghayoomi, M.; Mousavian, M. Deep transfer learning for COVID-19 fake news detection in Persian. Expert Syst. 2022, 39, e13008. [Google Scholar] [CrossRef]
  33. Do, T.H.; Berneman, M.; Patro, J.; Bekoulis, G.; Deligiannis, N. Context-aware deep Markov random fields for fake news detection. IEEE Access 2021, 9, 130042–130054. [Google Scholar] [CrossRef]
  34. Kumari, R.; Ashok, N.; Ghosal, T.; Ekbal, A. What the fake? Probing misinformation detection standing on the shoulder of novelty and emotion. Inf. Process. Manag. 2022, 59, 102740. [Google Scholar] [CrossRef]
  35. Ying, L.; Yu, H.; Wang, J.; Ji, Y.; Qian, S. Fake news detection via multi-modal topic memory network. IEEE Access 2021, 9, 132818–132829. [Google Scholar] [CrossRef]
  36. Vicario, M.D.; Quattrociocchi, W.; Scala, A.; Zollo, F. Polarization and fake news: Early warning of potential misinformation targets. ACM Trans. Web TWEB 2019, 13, 1–22. [Google Scholar] [CrossRef]
  37. Stitini, O.; Kaloun, S.; Bencharef, O. Towards the Detection of Fake News on Social Networks Contributing to the Improvement of Trust and Transparency in Recommendation Systems: Trends and Challenges. Information 2022, 13, 128. [Google Scholar] [CrossRef]
  38. Fayaz, M.; Khan, A.; Bilal, M.; Khan, S.U. Machine learning for fake news classification with optimal feature selection. Soft Comput. 2022, 1–9. [Google Scholar] [CrossRef]
  39. Islam, N.; Shaikh, A.; Qaiser, A.; Asiri, Y.; Almakdi, S.; Sulaiman, A.; Moazzam, V.; Babar, S.A. Ternion: An Autonomous Model for Fake News Detection. Appl. Sci. 2021, 11, 9292. [Google Scholar] [CrossRef]
  40. Hansrajh, A.; Adeliyi, T.T.; Wing, J. Detection of online fake news using blending ensemble learning. Sci. Program. 2021, 2021, 3434458. [Google Scholar] [CrossRef]
  41. Elsaeed, E.; Ouda, O.; Elmogy, M.M.; Atwan, A.; El-Daydamony, E. Detecting Fake News in Social Media Using Voting Classifier. IEEE Access 2021, 9, 161909–161925. [Google Scholar] [CrossRef]
  42. Verma, P.K.; Agrawal, P.; Amorim, I.; Prodan, R. WELFake: Word embedding over linguistic features for fake news detection. IEEE Trans. Comput. Soc. Syst. 2021, 8, 881–893. [Google Scholar] [CrossRef]
  43. Biradar, S.; Saumya, S.; Chauhan, A. Combating the infodemic: COVID-19 induced fake news recognition in social media networks. Complex Intell. Syst. 2022, 1–13. [Google Scholar] [CrossRef]
  44. Kanagavalli, N.; Priya, S.B. Social Networks Fake Account and Fake News Identification with Reliable Deep Learning. Intell. Autom. Soft Comput. 2022, 33, 191–205. [Google Scholar] [CrossRef]
  45. Galli, A.; Masciari, E.; Moscato, V.; Sperlí, G. A comprehensive Benchmark for fake news detection. J. Intell. Inf. Syst. 2022, 59, 237–261. [Google Scholar] [CrossRef] [PubMed]
  46. Sadeghi, F.; Bidgoly, A.J.; Amirkhani, H. Fake news detection on social media using a natural language inference approach. Multimed. Tools Appl. 2022, 81, 33801–33821. [Google Scholar] [CrossRef]
  47. Fouad, K.M.; Sabbeh, S.F.; Medhat, W. Arabic fake news detection using deep learning. CMC-Comput. Mater. Contin. 2022, 71, 3647–3665. [Google Scholar] [CrossRef]
  48. Nassif, A.B.; Elnagar, A.; Elgendy, O.; Afadar, Y. Arabic fake news detection based on deep contextualized embedding models. Neural Comput. Appl. 2022, 34, 16019–16032. [Google Scholar] [CrossRef] [PubMed]
  49. Lee, J.-W.; Kim, J.-H. Fake Sentence Detection Based on Transfer Learning: Applying to Korean COVID-19 Fake News. Appl. Sci. 2022, 12, 6402. [Google Scholar] [CrossRef]
  50. Jang, Y.; Park, C.-H.; Lee, D.-G.; Seo, Y.-S. Fake News Detection on Social Media: A Temporal-Based Approach. CMC-Comput. Mater. Contin. 2021, 69, 3563–3579. [Google Scholar] [CrossRef]
  51. Aslam, N.; Ullah Khan, I.; Alotaibi, F.S.; Aldaej, L.A.; Aldubaikil, A.K. Fake detect: A deep learning ensemble model for fake news detection. Complexity 2021, 2021, 5557784. [Google Scholar] [CrossRef]
  52. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; the Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [Green Version]
  53. Mikolajewicz, N.; Komarova, S.V. Meta-analytic methodology for basic research: A practical guide. Front. Physiol. 2019, 10, 203. [Google Scholar] [CrossRef] [Green Version]
  54. van Enst, W.A.; Scholten, R.J.; Whiting, P.; Zwinderman, A.H.; Hooft, L. Meta-epidemiologic analysis indicates that MEDLINE searches are sufficient for diagnostic test accuracy systematic reviews. J. Clin. Epidemiol. 2014, 67, 1192–1199. [Google Scholar] [CrossRef]
  55. Rice, D.B.; Kloda, L.A.; Levis, B.; Qi, B.; Kingsland, E.; Thombs, B.D. Are MEDLINE searches sufficient for systematic reviews and meta-analyses of the diagnostic accuracy of depression screening tools? A review of meta-analyses. J. Psychosom. Res. 2016, 87, 7–13. [Google Scholar] [CrossRef] [PubMed]
  56. Adeliyi, T.T.; Ogunsakin, R.E.; Adebiyi, M.; Olugbara, O. A meta-analysis of channel switching approaches for reducing zapping delay in internet protocol television. Indones. J. Electr. Eng. Comput. Sci. 2021, 22, 2502–4752. [Google Scholar] [CrossRef]
  57. Olugbara, C.T.; Letseka, M.; Ogunsakin, R.E.; Olugbara, O.O. Meta-analysis of factors influencing student acceptance of massive open online courses for open distance learning. Afr. J. Inf. Syst. 2021, 13, 5. [Google Scholar]
  58. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 2015, 4, g7647. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Borenstein, M.; Hedges, L.V.; Higgins, J.P.; Rothstein, H.R. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res. Synth. Methods 2010, 1, 97–111. [Google Scholar] [CrossRef] [PubMed]
  60. Ogunsakin, R.E.; Olugbara, O.O.; Moyo, S.; Israel, C. Meta-analysis of studies on depression prevalence among diabetes mellitus patients in Africa. Heliyon 2021, 7, e07085. [Google Scholar] [CrossRef]
  61. Veroniki, A.A.; Jackson, D.; Viechtbauer, W.; Bender, R.; Bowden, J.; Knapp, G.; Kuss, O.; Higgins, J.P.; Langan, D.; Salanti, G. Methods to estimate the between-study variance and its uncertainty in meta-analysis. Res. Synth. Methods 2016, 7, 55–79. [Google Scholar] [CrossRef] [Green Version]
  62. Jarrahi, A.; Safari, L. Evaluating the effectiveness of publishers’ features in fake news detection on social media. Multimed. Tools Appl. 2022, 1–27. [Google Scholar] [CrossRef]
  63. Wang, Y.; Wang, L.; Yang, Y.; Lian, T. SemSeq4FD: Integrating global semantic relationship and local sequential order to enhance text representation for fake news detection. Expert Syst. Appl. 2021, 166, 114090. [Google Scholar] [CrossRef]
  64. Abdelminaam, D.S.; Ismail, F.H.; Taha, M.; Taha, A.; Houssein, E.H.; Nabil, A. Coaid-deep: An optimized intelligent framework for automated detecting COVID-19 misleading information on twitter. IEEE Access 2021, 9, 27840–27867. [Google Scholar] [CrossRef]
  65. Seddari, N.; Derhab, A.; Belaoued, M.; Halboob, W.; Al-Muhtadi, J.; Bouras, A. A Hybrid Linguistic and Knowledge-Based Analysis Approach for Fake News Detection on Social Media. IEEE Access 2022, 10, 62097–62109. [Google Scholar] [CrossRef]
  66. Madani, Y.; Erritali, M.; Bouikhalene, B. Using artificial intelligence techniques for detecting COVID-19 epidemic fake news in Moroccan tweets. Results Phys. 2021, 25, 104266. [Google Scholar] [CrossRef]
  67. Endo, P.T.; Santos, G.L.; de Lima Xavier, M.E.; Nascimento Campos, G.R.; de Lima, L.C.; Silva, I.; Egli, A.; Lynn, T. Illusion of Truth: Analysing and Classifying COVID-19 Fake News in Brazilian Portuguese Language. Big Data Cogn. Comput. 2022, 6, 36. [Google Scholar] [CrossRef]
  68. Ying, L.; Yu, H.; Wang, J.; Ji, Y.; Qian, S. Multi-Level Multi-Modal Cross-Attention Network for Fake News Detection. IEEE Access 2021, 9, 132363–132373. [Google Scholar] [CrossRef]
  69. Ke, Z.; Li, Z.; Zhou, C.; Sheng, J.; Silamu, W.; Guo, Q. Rumor detection on social media via fused semantic information and a propagation heterogeneous graph. Symmetry 2020, 12, 1806. [Google Scholar] [CrossRef]
  70. Wu, L.; Rao, Y.; Nazir, A.; Jin, H. Discovering differential features: Adversarial learning for information credibility evaluation. Inf. Sci. 2020, 516, 453–473. [Google Scholar] [CrossRef] [Green Version]
  71. Abonizio, H.Q.; de Morais, J.I.; Tavares, G.M.; Barbon Junior, S. Language-independent fake news detection: English, Portuguese, and Spanish mutual features. Future Internet 2020, 12, 87. [Google Scholar] [CrossRef]
  72. Singh, B.; Sharma, D.K. Predicting image credibility in fake news over social media using multi-modal approach. Neural Comput. Appl. 2021, 1–15. [Google Scholar] [CrossRef]
  73. Amer, E.; Kwak, K.-S.; El-Sappagh, S. Context-Based Fake News Detection Model Relying on Deep Learning Models. Electronics 2022, 11, 1255. [Google Scholar] [CrossRef]
  74. Thaher, T.; Saheb, M.; Turabieh, H.; Chantar, H. Intelligent detection of false information in Arabic tweets utilizing hybrid harris hawks based feature selection and machine learning models. Symmetry 2021, 13, 556. [Google Scholar] [CrossRef]
  75. Gereme, F.; Zhu, W.; Ayall, T.; Alemu, D. Combating fake news in “low-resource” languages: Amharic fake news detection accompanied by resource crafting. Information 2021, 12, 20. [Google Scholar] [CrossRef]
  76. Kiruthika, N.; Thailambal, D.G. Dynamic Light Weight Recommendation System for Social Networking Analysis Using a Hybrid LSTM-SVM Classifier Algorithm. Opt. Mem. Neural Netw. 2022, 31, 59–75. [Google Scholar] [CrossRef]
  77. Ma, K.; Tang, C.; Zhang, W.; Cui, B.; Ji, K.; Chen, Z.; Abraham, A. DC-CNN: Dual-channel Convolutional Neural Networks with attention-pooling for fake news detection. Appl. Intell. 2022, 1–16. [Google Scholar] [CrossRef] [PubMed]
  78. Ahmed, B.; Ali, G.; Hussain, A.; Baseer, A.; Ahmed, J. Analysis of Text Feature Extractors using Deep Learning on Fake News. Eng. Technol. Appl. Sci. Res. 2021, 11, 7001–7005. [Google Scholar] [CrossRef]
  79. Tashtoush, Y.; Alrababah, B.; Darwish, O.; Maabreh, M.; Alsaedi, N. A Deep Learning Framework for Detection of COVID-19 Fake News on Social Media Platforms. Data 2022, 7, 65. [Google Scholar] [CrossRef]
  80. Tang, C.; Ma, K.; Cui, B.; Ji, K.; Abraham, A. Long text feature extraction network with data augmentation. Appl. Intell. 2022, 1–16. [Google Scholar] [CrossRef]
  81. Upadhyay, R.; Pasi, G.; Viviani, M. Vec4Cred: A model for health misinformation detection in web pages. Multimed. Tools Appl. 2022, 1–20. [Google Scholar] [CrossRef]
  82. Rohera, D.; Shethna, H.; Patel, K.; Thakker, U.; Tanwar, S.; Gupta, R.; Hong, W.-C.; Sharma, R. A Taxonomy of Fake News Classification Techniques: Survey and Implementation Aspects. IEEE Access 2022, 10, 30367–30394. [Google Scholar] [CrossRef]
  83. Al-Yahya, M.; Al-Khalifa, H.; Al-Baity, H.; Al Saeed, D.; Essam, A. Arabic fake news detection: Comparative study of neural networks and transformer-based approaches. Complexity 2021, 2021, 5516945. [Google Scholar] [CrossRef]
  84. Mertoğlu, U.; Genç, B. Automated fake news detection in the age of digital libraries. Inf. Technol. Libr. 2020, 39. [Google Scholar] [CrossRef]
  85. Xing, J.; Wang, S.; Zhang, X.; Ding, Y. HMBI: A New Hybrid Deep Model Based on Behavior Information for Fake News Detection. Wirel. Commun. Mob. Comput. 2021, 2021, 9076211. [Google Scholar] [CrossRef]
  86. Jiang, T.; Li, J.P.; Haq, A.U.; Saboor, A.; Ali, A. A novel stacking approach for accurate detection of fake news. IEEE Access 2021, 9, 22626–22639. [Google Scholar] [CrossRef]
  87. Varshney, D.; Vishwakarma, D.K. A unified approach of detecting misleading images via tracing its instances on web and analyzing its past context for the verification of multimedia content. Int. J. Multimed. Inf. Retr. 2022, 11, 445–459. [Google Scholar] [CrossRef]
  88. Paka, W.S.; Bansal, R.; Kaushik, A.; Sengupta, S.; Chakraborty, T. Cross-SEAN: A cross-stitch semi-supervised neural attention model for COVID-19 fake news detection. Appl. Soft Comput. 2021, 107, 107393. [Google Scholar] [CrossRef]
  89. Ilie, V.-I.; Truică, C.-O.; Apostol, E.-S.; Paschke, A. Context-Aware Misinformation Detection: A Benchmark of Deep Learning Architectures Using Word Embeddings. IEEE Access 2021, 9, 162122–162146. [Google Scholar] [CrossRef]
  90. Kaliyar, R.K.; Goswami, A.; Narang, P. EchoFakeD: Improving fake news detection in social media with an efficient deep neural network. Neural Comput. Appl. 2021, 33, 8597–8613. [Google Scholar] [CrossRef]
  91. Akhter, M.P.; Zheng, J.; Afzal, F.; Lin, H.; Riaz, S.; Mehmood, A. Supervised ensemble learning methods towards automatically filtering Urdu fake news within social media. PeerJ Comput. Sci. 2021, 7, e425. [Google Scholar] [CrossRef]
  92. Ilias, L.; Roussaki, I. Detecting malicious activity in Twitter using deep learning techniques. Appl. Soft Comput. 2021, 107, 107360. [Google Scholar] [CrossRef]
  93. Waheeb, S.A.; Khan, N.A.; Shang, X. Topic Modeling and Sentiment Analysis of Online Education in the COVID-19 Era Using Social Networks Based Datasets. Electronics 2022, 11, 715. [Google Scholar] [CrossRef]
  94. Fang, Y.; Gao, J.; Huang, C.; Peng, H.; Wu, R. Self multi-head attention-based convolutional neural networks for fake news detection. PLoS ONE 2019, 14, e0222713. [Google Scholar] [CrossRef]
  95. Amoudi, G.; Albalawi, R.; Baothman, F.; Jamal, A.; Alghamdi, H.; Alhothali, A. Arabic rumor detection: A comparative study. Alex. Eng. J. 2022, 61, 12511–12523. [Google Scholar] [CrossRef]
  96. Karnyoto, A.S.; Sun, C.; Liu, B.; Wang, X. TB-BCG: Topic-Based BART Counterfeit Generator for Fake News Detection. Mathematics 2022, 10, 585. [Google Scholar] [CrossRef]
  97. Dixit, D.K.; Bhagat, A.; Dangi, D. Automating fake news detection using PPCA and levy flight-based LSTM. Soft Comput. 2022, 26, 12545–12557. [Google Scholar] [CrossRef] [PubMed]
  98. Kaliyar, R.K.; Goswami, A.; Narang, P. FakeBERT: Fake news detection in social media with a BERT-based deep learning approach. Multimed. Tools Appl. 2021, 80, 11765–11788. [Google Scholar] [CrossRef] [PubMed]
  99. Umer, M.; Imtiaz, Z.; Ullah, S.; Mehmood, A.; Choi, G.S.; On, B.-W. Fake news stance detection using deep learning architecture (CNN-LSTM). IEEE Access 2020, 8, 156695–156706. [Google Scholar] [CrossRef]
  100. Dixit, D.K.; Bhagat, A.; Dangi, D. Fake News Classification Using a Fuzzy Convolutional Recurrent Neural Network. CMC-Comput. Mater. Contin. 2022, 71, 5733–5750. [Google Scholar] [CrossRef]
  101. Olaleye, T.; Abayomi-Alli, A.; Adesemowo, K.; Arogundade, O.T.; Misra, S.; Kose, U. SCLAVOEM: Hyper parameter optimization approach to predictive modelling of COVID-19 infodemic tweets using smote and classifier vote ensemble. Soft Comput. 2022, 15, 1–20. [Google Scholar] [CrossRef]
  102. Kasnesis, P.; Toumanidis, L.; Patrikakis, C.Z. Combating Fake News with Transformers: A Comparative Analysis of Stance Detection and Subjectivity Analysis. Information 2021, 12, 409. [Google Scholar] [CrossRef]
  103. Kapusta, J.; Obonya, J. Improvement of misleading and fake news classification for flective languages by morphological group analysis. Informatics 2020, 7, 4. [Google Scholar] [CrossRef] [Green Version]
  104. Althubiti, S.A.; Alenezi, F.; Mansour, R.F. Natural Language Processing with Optimal Deep Learning Based Fake News Classification. CMC-Comput. Mater. Contin. 2022, 73, 3529–3544. [Google Scholar] [CrossRef]
  105. Lai, C.-M.; Chen, M.-H.; Kristiani, E.; Verma, V.K.; Yang, C.-T. Fake News Classification Based on Content Level Features. Appl. Sci. 2022, 12, 1116. [Google Scholar] [CrossRef]
  106. Karande, H.; Walambe, R.; Benjamin, V.; Kotecha, K.; Raghu, T. Stance detection with BERT embeddings for credibility analysis of information on social media. PeerJ Comput. Sci. 2021, 7, e467. [Google Scholar] [CrossRef]
  107. Himdi, H.; Weir, G.; Assiri, F.; Al-Barhamtoshy, H. Arabic fake news detection based on textual analysis. Arab. J. Sci. Eng. 2022, 10453–10469. [Google Scholar] [CrossRef] [PubMed]
  108. Lee, S. Detection of Political Manipulation through Unsupervised Learning. KSII Trans. Internet Inf. Syst. TIIS 2019, 13, 1825–1844. [Google Scholar]
  109. Palani, B.; Elango, S.; Viswanathan, K.V. CB-Fake: A multimodal deep learning framework for automatic fake news detection using capsule neural network and BERT. Multimed. Tools Appl. 2022, 81, 5587–5620. [Google Scholar] [CrossRef]
  110. Cheng, M.; Li, Y.; Nazarian, S.; Bogdan, P. From rumor to genetic mutation detection with explanations: A GAN approach. Sci. Rep. 2021, 11, 5861. [Google Scholar] [CrossRef]
  111. Dong, X.; Victor, U.; Qian, L. Two-path deep semisupervised learning for timely fake news detection. IEEE Trans. Comput. Soc. Syst. 2020, 7, 1386–1398. [Google Scholar] [CrossRef]
  112. Ayoub, J.; Yang, X.J.; Zhou, F. Combat COVID-19 infodemic using explainable natural language processing models. Inf. Process. Manag. 2021, 58, 102569. [Google Scholar] [CrossRef]
  113. Buzea, M.C.; Trausan-Matu, S.; Rebedea, T. Automatic fake news detection for Romanian online news. Information 2022, 13, 151. [Google Scholar] [CrossRef]
  114. Alouffi, B.; Alharbi, A.; Sahal, R.; Saleh, H. An Optimized Hybrid Deep Learning Model to Detect COVID-19 Misleading Information. Comput. Intell. Neurosci. 2021, 2021, 9615034. [Google Scholar] [CrossRef]
  115. Rajapaksha, P.; Farahbakhsh, R.; Crespi, N. BERT, XLNet or RoBERTa: The Best Transfer Learning Model to Detect Clickbaits. IEEE Access 2021, 9, 154704–154716. [Google Scholar] [CrossRef]
  116. Saleh, H.; Alharbi, A.; Alsamhi, S.H. OPCNN-FAKE: Optimized convolutional neural network for fake news detection. IEEE Access 2021, 9, 129471–129489. [Google Scholar] [CrossRef]
  117. Goldani, M.H.; Momtazi, S.; Safabakhsh, R. Detecting fake news with capsule neural networks. Appl. Soft Comput. 2021, 101, 106991. [Google Scholar] [CrossRef]
  118. Kula, S.; Kozik, R.; Choraś, M. Implementation of the BERT-derived architectures to tackle disinformation challenges. Neural Comput. Appl. 2021, 1–13. [Google Scholar] [CrossRef]
  119. Das, S.D.; Basak, A.; Dutta, S. A heuristic-driven uncertainty based ensemble framework for fake news detection in tweets and news articles. Neurocomputing 2022, 491, 607–620. [Google Scholar] [CrossRef]
  120. Malla, S.; Alphonse, P. Fake or real news about COVID-19? Pretrained transformer model to detect potential misleading news. Eur. Phys. J. Spec. Top. 2022, 1–10. [Google Scholar] [CrossRef]
  121. Ghanem, B.; Rosso, P.; Rangel, F. An emotional analysis of false information in social media and news articles. ACM Trans. Internet Technol. TOIT 2020, 20, 1–18. [Google Scholar] [CrossRef]
  122. Apolinario-Arzube, Ó.; García-Díaz, J.A.; Medina-Moreira, J.; Luna-Aveiga, H.; Valencia-García, R. Comparing deep-learning architectures and traditional machine-learning approaches for satire identification in Spanish tweets. Mathematics 2020, 8, 2075. [Google Scholar] [CrossRef]
  123. Qureshi, K.A.; Malick, R.A.S.; Sabih, M.; Cherifi, H. Complex Network and Source Inspired COVID-19 Fake News Classification on Twitter. IEEE Access 2021, 9, 139636–139656. [Google Scholar] [CrossRef]
  124. Hayawi, K.; Mathew, S.; Venugopal, N.; Masud, M.M.; Ho, P.-H. DeeProBot: A hybrid deep neural network model for social bot detection based on user profile data. Soc. Netw. Anal. Min. 2022, 12, 43. [Google Scholar] [CrossRef]
  125. Rahman, M.; Halder, S.; Uddin, M.; Acharjee, U.K. An efficient hybrid system for anomaly detection in social networks. CyberSecurity 2021, 4, 10. [Google Scholar] [CrossRef]
  126. Ghaleb, F.A.; Alsaedi, M.; Saeed, F.; Ahmad, J.; Alasli, M. Cyber Threat Intelligence-Based Malicious URL Detection Model Using Ensemble Learning. Sensors 2022, 22, 3373. [Google Scholar] [PubMed]
  127. Jain, V.; Kaliyar, R.K.; Goswami, A.; Narang, P.; Sharma, Y. AENeT: An attention-enabled neural architecture for fake news detection using contextual features. Neural Comput. Appl. 2022, 34, 771–782. [Google Scholar] [CrossRef]
  128. Bezerra, R.; Fabio, J. Content-based fake news classification through modified voting ensemble. J. Inf. Telecommun. 2021, 5, 499–513. [Google Scholar]
  129. Agarwal, I.; Rana, D.; Shaikh, M.; Poudel, S. Spatio-temporal approach for classification of COVID-19 pandemic fake news. Soc. Netw. Anal. Min. 2022, 12, 68. [Google Scholar] [CrossRef]
  130. Toivanen, P.; Nelimarkka, M.; Valaskivi, K. Remediation in the hybrid media environment: Understanding countermedia in context. New Media Soc. 2022, 24, 2127–2152. [Google Scholar] [CrossRef]
  131. Mahabub, A. A robust technique of fake news detection using Ensemble Voting Classifier and comparison with other classifiers. SN Appl. Sci. 2020, 2, 525. [Google Scholar] [CrossRef] [Green Version]
  132. Chintalapudi, N.; Battineni, G.; Amenta, F. Sentimental analysis of COVID-19 tweets using deep learning models. Infect. Dis. Rep. 2021, 13, 329–339. [Google Scholar] [CrossRef]
  133. Al-Ahmad, B.; Al-Zoubi, A.M.; Abu Khurma, R.; Aljarah, I. An evolutionary fake news detection method for COVID-19 pandemic information. Symmetry 2021, 13, 1091. [Google Scholar] [CrossRef]
  134. Pujahari, A.; Sisodia, D.S. Clickbait detection using multiple categorisation techniques. J. Inf. Sci. 2021, 47, 118–128. [Google Scholar] [CrossRef] [Green Version]
  135. Albahar, M. A hybrid model for fake news detection: Leveraging news content and user comments in fake news. IET Inf. Secur. 2021, 15, 169–177. [Google Scholar] [CrossRef]
  136. Gayakwad, M.; Patil, S.; Kadam, A.; Joshi, S.; Kotecha, K.; Joshi, R.; Pandya, S.; Gonge, S.; Rathod, S.; Kadam, K. Credibility Analysis of User-Designed Content Using Machine Learning Techniques. Appl. Syst. Innov. 2022, 5, 43. [Google Scholar] [CrossRef]
  137. Rastogi, S.; Bansal, D. Disinformation detection on social media: An integrated approach. Multimed. Tools Appl. 2022, 81, 40675–40707. [Google Scholar] [CrossRef]
  138. Jang, Y.; Park, C.-H.; Seo, Y.-S. Fake news analysis modeling using quote retweet. Electronics 2019, 8, 1377. [Google Scholar] [CrossRef] [Green Version]
  139. Shao, C.; Chen, X. Deep-learning-based financial message sentiment classification in business management. Comput. Intell. Neurosci. 2022, 2022, 3888675. [Google Scholar] [CrossRef]
  140. Sansonetti, G.; Gasparetti, F.; D’aniello, G.; Micarelli, A. Unreliable users detection in social media: Deep learning techniques for automatic detection. IEEE Access 2020, 8, 213154–213167. [Google Scholar] [CrossRef]
  141. Mohammed, M.; Sha’aban, A.; Jatau, A.I.; Yunusa, I.; Isa, A.M.; Wada, A.S.; Obamiro, K.; Zainal, H.; Ibrahim, B. Assessment of COVID-19 information overload among the general public. J. Racial Ethn. Health Disparities 2022, 9, 184–192. [Google Scholar] [CrossRef]
  142. Coste, C.I.; Bufnea, D. Advances in Clickbait and Fake News Detection Using New Language-independent Strategies. J. Commun. Softw. Syst. 2021, 17, 270–280. [Google Scholar] [CrossRef]
  143. Alonso-Bartolome, S.; Segura-Bedmar, I. Multimodal Fake News Detection. Information 2022, 13, 284. [Google Scholar]
  144. Ozbay, F.A.; Alatas, B. A novel approach for detection of fake news on social media using metaheuristic optimization algorithms. Elektron. Ir. Elektrotechnika 2019, 25, 62–67. [Google Scholar] [CrossRef] [Green Version]
  145. Ahmad, I.; Yousaf, M.; Yousaf, S.; Ahmad, M.O. Fake news detection using machine learning ensemble methods. Complexity 2020, 2020, 8885861. [Google Scholar] [CrossRef]
  146. Shang, L.; Zhang, Y.; Zhang, D.; Wang, D. Fauxward: A graph neural network approach to fauxtography detection using social media comments. Soc. Netw. Anal. Min. 2020, 10, 76. [Google Scholar] [CrossRef]
  147. Mazzeo, V.; Rapisarda, A.; Giuffrida, G. Detection of fake news on COVID-19 on Web Search Engines. Front. Phys. 2021, 351, 685730. [Google Scholar] [CrossRef]
  148. Mathur, M.B.; Vander Weele, T.J. Estimating publication bias in meta-analyses of peer-reviewed studies: A meta-meta-analysis across disciplines and journal tiers. Res. Synth. Methods 2021, 12, 176–191. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow diagram of database search using PRISMA.
Figure 2. Forest plot for distribution of effect size of fake news detection accuracy [4,9,11,12,13,14,15,16,17,18,19,20,21,26,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147].
Figure 3. Galbraith plot of reviewed studies.
Figure 4. Meta-regression based on year.
Figure 5. Meta-regression based on sample size.
Figure 6. Funnel plot with pseudo 95% confidence limits indicating publication bias.
Figure 7. Publications by year.
Figure 8. Comparative distribution of approaches.
Figure 9. Frequency of method usage.
Table 1. Exclusion and inclusion criteria.
Exclusion criteria
EC1: Papers in which only the abstract is available
EC2: Review and survey papers
EC3: Duplicate records
EC4: Papers not written in the English language
EC5: Papers not relevant to fake news detection
EC6: Papers not applying the DL, ML, or ensemble approaches
EC7: Papers not reporting sample size
EC8: Papers not reporting fake news detection results in terms of accuracy
Inclusion criteria
IC1: Articles published in English
IC2: Papers stating the fake news detection method using DL, ML, or ensemble approaches on linguistic or visual-based data
IC3: Papers providing clear information about the datasets and sample size
IC4: Papers providing the detection results in terms of accuracy
Table 2. Fields created to extract the relevant information for the meta-analysis.
No. | Extraction element | Contents
1 | Title | Title of the article
2 | Author | The authors of the article
3 | Country | The country of the research institute
4 | Year | The year of publication
5 | Approach | DL, ML, Ensemble DL, Ensemble ML, Hybrid, and Sentiment analysis
6 | Method | For instance, BiLSTM, CNN, LSTM, RF, LR, SVM, and NB
7 | Dataset | List of the datasets used for evaluation
8 | Sample size | The number of samples used for detection
9 | Accuracy | The average accuracy of the results
Table 3. Meta-analysis summary: random-effects model (DerSimonian–Laird).
Meta-analysis summary (random-effects model, DerSimonian–Laird). Heterogeneity: τ2 = 3.440; I2 = 75.27%.
Study (n = 125) | Effect size | [95% CI] | Weight
(Sadeghi, Bidgoly, and Amirkhani 2022)[46] | −10.230 | [−12.692, −7.769] | 0.730
(Khan et al. 2022)[26] | −7.181 | [−9.265, −5.097] | 0.800
(Jarrahi and Safari 2022)[62] | −9.698 | [−11.669, −7.727] | 0.820
(Stitini, Kaloun, and Bencharef 2022)[37] | −6.132 | [−8.135, −4.130] | 0.820
(Ni, Li and Kao 2021)[4] | −7.129 | [−9.170, −5.088] | 0.810
(Wang et al. 2021)[63] | −10.617 | [−12.699, −8.534] | 0.800
(Fouad, Sabbeh, and Medhat 2022)[47] | −8.713 | [−10.976, −6.449] | 0.770
(Abdelminaam et al. 2021)[64] | −12.803 | [−14.901, −10.705] | 0.800
(Seddari et al. 2022)[65] | −4.362 | [−6.393, −2.332] | 0.810
(Madani, Erritali, and Bouikhalene 2021)[66] | −7.836 | [−10.042, −5.631] | 0.780
(Tembhurne, Almin, and Diwan 2022)[29] | −11.199 | [−13.189, −9.209] | 0.820
(Endo et al. 2022)[67] | −9.402 | [−11.423, −7.380] | 0.810
(Ying et al. 2021b)[68] | −9.620 | [−11.714, −7.526] | 0.800
(Ke et al. 2020)[69] | −8.930 | [−10.970, −6.889] | 0.810
(Do et al. 2021)[33] | −9.558 | [−11.737, −7.379] | 0.780
(Wu et al. 2020)[70] | −11.147 | [−13.797, −8.497] | 0.700
(Abonizio et al. 2020)[71] | −9.362 | [−11.484, −7.240] | 0.800
(Singh and Sharma 2021)[72] | −9.488 | [−11.636, −7.341] | 0.790
(Amer, Kwak, and El-Sappagh 2022)[73] | −10.722 | [−12.692, −8.752] | 0.820
(Thaher et al. 2021)[74] | −7.734 | [−9.905, −5.562] | 0.790
(Ying et al. 2021a)[35] | −9.760 | [−11.843, −7.676] | 0.800
(Gereme et al. 2021)[75] | −8.258 | [−10.226, −6.291] | 0.820
(Jang et al. 2021)[50] | −11.400 | [−13.517, −9.283] | 0.800
(Galende et al. 2022)[11] | −8.610 | [−10.775, −6.446] | 0.790
(Elsaeed et al. 2021)[41] | −11.156 | [−13.160, −9.151] | 0.820
(Raza and Ding 2022)[12] | −8.023 | [−10.297, −5.748] | 0.770
(Galli et al. 2022)[45] | −6.483 | [−8.726, −4.240] | 0.770
(Kiruthika and Thailambal 2022)[76] | −8.740 | [−10.932, −6.549] | 0.780
(Vicario et al. 2019)[36] | −15.764 | [−17.819, −13.710] | 0.810
(Ma et al. 2022)[77] | −10.665 | [−12.661, −8.670] | 0.820
(Verma et al. 2021)[42] | −11.220 | [−13.213, −9.227] | 0.820
(Choi et al. 2021)[13] | −8.993 | [−11.100, −6.887] | 0.800
(Ahmed et al. 2021)[78] | −10.231 | [−12.213, −8.249] | 0.820
(Bangyal et al. 2021)[14] | −9.261 | [−11.251, −7.271] | 0.820
(Tashtoush et al. 2022)[79] | −10.030 | [−12.049, −8.010] | 0.810
(Tang et al. 2022)[80] | −10.593 | [−12.589, −8.598] | 0.820
(Wang et al. 2021)[63] | −9.090 | [−11.174, −7.007] | 0.800
(Upadhyay, Pasi, and Viviani 2022)[81] | −9.546 | [−11.602, −7.490] | 0.810
(Rohera et al. 2022)[82] | −8.834 | [−10.874, −6.794] | 0.810
(Al-Yahya et al. 2021)[83] | −12.435 | [−14.712, −10.158] | 0.770
(Mertoğlu and Genç 2020)[84] | −11.385 | [−13.377, −9.393] | 0.820
(Xing et al. 2021)[85] | −10.180 | [−12.988, −7.371] | 0.670
(Jiang et al. 2021)[86] | −10.864 | [−12.844, −8.884] | 0.820
(Varshney and Vishwakarma 2022)[87] | −9.406 | [−11.376, −7.436] | 0.820
(Paka et al. 2021)[88] | −10.767 | [−12.774, −8.761] | 0.820
(Ilie et al. 2021)[89] | −11.645 | [−13.739, −9.551] | 0.800
(Kaliyar, Goswami, and Narang 2021a)[90] | −5.279 | [−7.324, −3.234] | 0.810
(Upadhyay, Pasi, and Viviani 2022)[81] | −9.246 | [−11.271, −7.221] | 0.810
(Akhter et al. 2021)[91] | −8.236 | [−10.473, −5.999] | 0.770
(Kausar, Tahir, and Mehmood 2020)[16] | −9.451 | [−11.505, −7.396] | 0.810
(Sharma and Garg 2021)[31] | −11.010 | [−13.032, −8.989] | 0.801
(Ilias and Roussaki 2021)[92] | −15.732 | [−17.888, −13.575] | 0.790
(Awan et al. 2021)[30] | −10.140 | [−12.105, −8.175] | 0.830
(Waheeb, Khan, and Shang 2022)[93] | −13.927 | [−16.171, −11.683] | 0.770
(Ghayoomi and Mousavian 2022)[32] | −9.671 | [−11.690, −7.653] | 0.820
(Salem et al. 2021)[15] | −6.805 | [−8.884, −4.726] | 0.800
(Fang et al. 2019)[94] | −10.043 | [−12.048, −8.039] | 0.820
(Amoudi et al. 2022)[95] | −8.589 | [−10.781, −6.398] | 0.780
(Karnyoto et al. 2022)[96] | −7.449 | [−9.449, −5.449] | 0.820
(Dixit, Bhagat, and Dangi 2022a)[97] | −11.095 | [−13.069, −9.120] | 0.820
(Kaliyar, Goswami, and Narang 2021b)[98] | −9.954 | [−11.925, −7.983] | 0.820
(Umer et al. 2020)[99] | −11.253 | [−13.234, −9.271] | 0.820
(Dixit, Bhagat, and Dangi 2022b)[100] | −11.407 | [−13.622, −9.192] | 0.780
(Olaleye et al. 2022)[101] | −12.240 | [−14.360, −10.120] | 0.800
(Islam et al. 2021)[39] | −10.009 | [−12.039, −7.979] | 0.810
(Kasnesis, Toumanidis, and Patrikakis 2021)[102] | −9.847 | [−11.823, −7.870] | 0.820
(Kapusta and Obonya 2020)[103] | −5.358 | [−7.627, −3.090] | 0.770
(Althubiti, Alenezi, and Mansour 2022)[104] | −10.830 | [−12.795, −8.865] | 0.830
(Fayaz et al. 2022)[38] | −10.740 | [−12.727, −8.753] | 0.820
(Qasem, Al-Sarem, and Saeed 2021)[19] | −8.269 | [−10.306, −6.232] | 0.810
(Lai et al. 2022)[105] | −10.646 | [−12.626, −8.666] | 0.820
(Khan and Michalas 2021)[17] | −10.861 | [−12.861, −8.860] | 0.820
(Karande et al. 2021)[106] | −8.802 | [−10.810, −6.794] | 0.820
(Truică and Apostol 2022)[20] | −11.591 | [−13.629, −9.553] | 0.810
(Nassif et al. 2022)[48] | −9.222 | [−11.194, −7.250] | 0.820
(Panagiotou, Saravanou, and Gunopulos 2021)[18] | −6.296 | [−8.341, −4.251] | 0.810
(Himdi et al. 2022)[107] | −7.236 | [−9.442, −5.030] | 0.780
(Lee 2019)[108] | −13.993 | [−16.034, −11.952] | 0.810
(Palani, Elango, and Viswanathan K 2022)[109] | −10.025 | [−12.062, −7.987] | 0.810
(Cheng et al. 2021)[110] | −9.770 | [−12.118, −7.423] | 0.750
(Biradar, Saumya, and Chauhan 2022)[43] | −9.308 | [−11.299, −7.318] | 0.820
(Elhadad, Li, and Gebali 2020)[21] | −8.924 | [−10.887, −6.961] | 0.830
(Dong, Victor and Qian 2020)[111] | −11.869 | [−14.179, −9.559] | 0.760
(Ayoub, Yang, and Zhou 2021)[112] | −9.372 | [−11.338, −7.406] | 0.830
(Buzea, Trausan-Matu, and Rebedea 2022)[113] | −10.190 | [−12.169, −8.211] | 0.820
(Alouffi et al. 2021)[114] | −7.004 | [−8.967, −5.041] | 0.830
(Hansrajh, Adeliyi, and Wing 2021)[40] | −11.191 | [−13.386, −8.995] | 0.780
(Rajapaksha, Farahbakhsh, and Crespi 2021)[115] | −10.780 | [−12.902, −8.658] | 0.800
(Saleh, Alharbi, and Alsamhi 2021)[116] | −11.585 | [−13.689, −9.481] | 0.800
(Kumari et al. 2022)[34] | −13.171 | [−15.226, −11.116] | 0.810
(Goldani, Momtazi, and Safabakhsh 2021)[117] | −11.302 | [−13.650, −8.954] | 0.750
(Kula, Kozik, and Choraś 2021)[118] | −9.779 | [−11.750, −7.808] | 0.820
(Das, Basak, and Dutta 2022)[119] | −10.433 | [−12.446, −8.420] | 0.820
(Malla and Alphonse 2022)[120] | −9.289 | [−11.260, −7.318] | 0.820
(Ghanem, Rosso, and Rangel 2020)[121] | −12.442 | [−14.744, −10.140] | 0.760
(Apolinario-Arzube et al. 2020)[122] | −9.328 | [−11.431, −7.225] | 0.800
(Qureshi et al. 2021)[123] | −10.928 | [−12.952, −8.904] | 0.810
(Hayawi et al. 2022)[124] | −11.366 | [−13.409, −9.322] | 0.810
(Rafique et al. 2022)[9] | −6.853 | [−8.865, −4.841] | 0.820
(Rahman et al. 2021)[125] | −10.303 | [−12.286, −8.320] | 0.820
(Aslam et al. 2021)[51] | −8.532 | [−10.600, −6.463] | 0.810
(Ghaleb et al. 2022)[126] | −13.414 | [−15.401, −11.427] | 0.820
(Jain et al. 2022)[127] | −10.002 | [−12.879, −7.124] | 0.660
(Bezerra and Fabio 2021)[128] | −10.909 | [−13.040, −8.778] | 0.790
(Agarwal et al. 2022)[129] | −7.998 | [−9.985, −6.012] | 0.820
(Toivanen, Nelimarkka, and Valaskivi 2022)[130] | −9.105 | [−11.231, −6.979] | 0.790
(Mahabub 2020)[131] | −8.836 | [−10.852, −6.820] | 0.820
(Chintalapudi, Battineni, and Amenta 2021)[132] | −8.152 | [−10.230, −6.074] | 0.800
(Al-Ahmad et al. 2021)[133] | −8.594 | [−10.851, −6.337] | 0.770
(Pujahari and Sisodia 2021)[134] | −9.711 | [−11.701, −7.721] | 0.820
(Albahar 2021)[135] | −14.655 | [−16.772, −12.537] | 0.800
(Gayakwad et al. 2022)[136] | −16.686 | [−18.684, −14.689] | 0.820
(Lee and Kim 2022)[49] | −11.200 | [−13.408, −8.992] | 0.780
(Rastogi and Bansal 2022)[137] | −8.016 | [−9.986, −6.046] | 0.820
(Jang, Park, and Seo 2019)[138] | −8.034 | [−10.216, −5.852] | 0.780
(Shao and Chen 2022)[139] | −7.815 | [−9.997, −5.633] | 0.780
(Sansonetti et al. 2020)[140] | −13.442 | [−15.483, −11.401] | 0.810
(Mohammed et al. 2022)[141] | −6.858 | [−9.363, −4.354] | 0.720
(Coste and Bufnea 2021)[142] | −7.380 | [−9.661, −5.100] | 0.770
(Alonso-Bartolome and Segura-Bedmar 2021)[143] | −13.573 | [−15.674, −11.472] | 0.800
(Ozbay and Alatas 2019)[144] | −9.561 | [−11.602, −7.520] | 0.810
(Kanagavalli and Priya 2022)[44] | −9.474 | [−11.448, −7.500] | 0.820
(Ahmad et al. 2020)[145] | −11.303 | [−13.357, −9.248] | 0.810
(Shang et al. 2020)[146] | −15.803 | [−18.094, −13.512] | 0.760
(Mazzeo, Rapisarda, and Giuffrida 2021)[147] | −8.157 | [−10.158, −6.157] | 0.820
Pooled theta = −9.942, 95% CI [−10.317, −9.567]. Test of homogeneity: Q = chi2(124) = 501.340, Prob > Q = 0.000. Test of theta = 0: z = −51.910, Prob > |z| = 0.000.
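The heterogeneity statistics reported in Table 3 (τ2, I2, Q) follow directly from the per-study effect sizes and standard errors. The sketch below is an illustration only, not the authors' code; the function name `dersimonian_laird` is ours:

```python
import numpy as np

def dersimonian_laird(es, se):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    w = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
    theta_fe = np.sum(w * es) / np.sum(w)    # fixed-effect pooled estimate
    Q = np.sum(w * (es - theta_fe)**2)       # Cochran's Q (heterogeneity chi-square)
    df = len(es) - 1
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)            # between-study variance
    I2 = max(0.0, (Q - df) / Q) * 100.0      # % of variation due to heterogeneity
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    theta_re = np.sum(w_re * es) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se_re, tau2, I2, Q, df

# Reproducing the I^2 reported in Table 3 from its Q and degrees of freedom:
Q, df = 501.34, 124
print(round((Q - df) / Q * 100, 2))  # 75.27
```

The final lines recover the I2 of 75.27% reported above from Q = 501.34 and df = 124.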
Table 4. Subgroup analysis for the comparison of different approaches.
Group | Number of studies | ES [95% CI] | Q | I2 | df | p-value
Deep learning | 60 | −10.08 [−10.60, −9.58] | 202.90 | 70.92 | 59 | 0.000 *
Ensemble deep learning | 11 | −10.03 [−10.65, −9.40] | 8.70 | 0.00 | 10 | 0.561
Ensemble machine learning | 16 | −10.23 [−11.00, −9.46] | 34.29 | 56.25 | 15 | 0.003
Hybrid | 10 | −11.13 [−12.36, −9.90] | 38.22 | 76.45 | 9 | 0.000 *
Machine learning | 27 | −8.98 [−10.04, −7.91] | 181.83 | 85.70 | 26 | 0.000 *
Sentiment analysis | 1 | −8.15 [−10.23, −6.07] | 0.00 | 0.00 | 0 | -
Overall | 125 | −9.94 [−10.32, −9.57] | 501.34 | 75.27 | 124 | 0.000 *
* p < 0.001.
Table 5. Meta-regression model to assess the source of heterogeneity.
Sources of heterogeneity | Estimate | Std. error | 95% CI | p-value
Year | 0.361 | 0.201 | [−0.037, 0.756] | 0.075
Approach | −0.070 | 0.111 | [−0.290, 0.149] | 0.526
Sample size | −0.000 | 0.000 | [−0.000, −0.000] | 0.000 *
Constant | −734.199 | 406.931 | [−1539.826, −71.429] | 0.074
* p < 0.001; Test of residual homogeneity: Q_res = chi2(121) = 353.71 Prob > Q_res = 0.0000.
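A meta-regression such as the one in Table 5 is, at its core, a weighted least-squares fit of effect sizes on study-level moderators. The sketch below is a simplified illustration assuming fixed inverse-variance weights (a full mixed-effects meta-regression would also re-estimate τ2); the function name is ours:

```python
import numpy as np

def meta_regression(es, se, X):
    """Weighted least-squares meta-regression of effect sizes on moderators.

    es: (n,) effect sizes; se: (n,) standard errors;
    X: (n, p) moderator matrix (e.g. year, approach code, sample size).
    Returns coefficients with the intercept first.
    """
    es, se = np.asarray(es, float), np.asarray(se, float)
    n = len(es)
    Xd = np.column_stack([np.ones(n), np.asarray(X, float)])  # add intercept column
    w = 1.0 / se**2                                           # inverse-variance weights
    W = np.diag(w)
    # Solve the weighted normal equations (X'WX) b = X'W y
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ es)
    return beta
```

A near-zero coefficient for a moderator (as reported for Approach above) suggests it does not explain the between-study heterogeneity.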
Table 6. Egger’s test for examining publication bias.
Parameter | Estimate | Std. error | t | p | 95% CI
Slope | −9.641 | 2.723 | −3.54 | 0.001 | [−15.031, −4.250]
Bias | −0.282 | 2.545 | −0.11 | 0.912 | [−5.319, 4.755]
Test of residual homogeneity: Q_res = chi2(121) = 353.71 Prob > Q_res = 0.0000.
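Egger's test regresses each study's standardized effect (effect size divided by its standard error) on its precision (the reciprocal of the standard error); the intercept estimates small-study bias, and the slope tracks the pooled effect. A minimal numpy-only sketch under our naming (it returns the bias t statistic rather than the p-value reported in Table 6):

```python
import numpy as np

def egger_test(es, se):
    """Egger's regression test for funnel-plot asymmetry (illustrative sketch)."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    y = es / se                       # standardized effect
    x = 1.0 / se                      # precision
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit: y = intercept + slope * x
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                   # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)              # coefficient covariance matrix
    t_bias = beta[0] / np.sqrt(cov[0, 0])          # t statistic for the intercept
    # beta[1] (slope) tracks the pooled effect; beta[0] (intercept) is the bias term
    return beta[1], beta[0], t_bias
```

An intercept t statistic near zero, like the −0.11 (p = 0.912) reported above, indicates no evidence of publication bias.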
Share and Cite

Thompson, R.C.; Joseph, S.; Adeliyi, T.T. A Systematic Literature Review and Meta-Analysis of Studies on Online Fake News Detection. Information 2022, 13, 527. https://doi.org/10.3390/info13110527