Article
Peer-Review Record

The Detection of Fake News in Arabic Tweets Using Deep Learning

Appl. Sci. 2023, 13(14), 8209; https://doi.org/10.3390/app13148209
by Shatha Alyoubi 1,*, Manal Kalkatawi 2 and Felwa Abukhodair 2
Submission received: 15 May 2023 / Revised: 28 June 2023 / Accepted: 10 July 2023 / Published: 14 July 2023
(This article belongs to the Special Issue Applied Intelligence in Natural Language Processing)

Round 1

Reviewer 1 Report

The paper proposes a model that utilizes the news content and the social context of the users who participated in the news dissemination. However, there are some problems.

1. The authors did not carry out experimental tests on widely accepted datasets, so the results are not convincing.

2. The authors do not compare their model with other fake news detection methods, so the superiority of the proposed model cannot be demonstrated.

English expression needs further improvement.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The article proposes a deep learning method to address the problem of fake news in Arabic news. The perspective of the article is relatively novel, but there are still some minor issues that need to be addressed.
1. The background introduction in the article is somewhat redundant. It should be as concise and clear as possible, presenting the background and significance of the work without too much content unrelated to the article itself.
2. The description of the method section is not detailed enough. Please provide a more detailed explanation of the method you used. Here, I recommend several methods that may be relevant to the content of the article, such as [1,2].
3. Since the main purpose of this article is to address the problem of fake news in Arabic news, the authors need to add more experiments to prove the feasibility and effectiveness of the method.

[1] Back to common sense: Oxford dictionary descriptive knowledge augmentation for aspect-based sentiment analysis. Inf. Process. Manag. 2023, 60(3), 103260.
[2] Joint event causality extraction using dual-channel enhanced neural network. Knowl.-Based Syst. 2022, 258, 109935.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper introduces two types of deep learning models, CNN and BiLSTM, that detect whether tweeted news is fake or real. It uses and compares five word embedding methods: Keras embedding (randomly initialized embedding), Word2vec, FastText, ARBERT, and MARBERT. The tweet content text is passed through the embedding and CNN or BiLSTM layers. In addition to the content, the paper uses 12 types of social context features. These features are encoded as a vector and concatenated with the output of the CNN or BiLSTM layer. The concatenated result is then passed through an FFNN and classified at the final Dense layer with softmax.
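A minimal Keras sketch of this architecture, for reference; the vocabulary size, sequence length, and layer widths below are illustrative assumptions, not values reported in the paper:

from tensorflow.keras import layers, Model

MAX_LEN, VOCAB_SIZE, EMB_DIM, NUM_USER_FEATS = 64, 50000, 300, 12

# Content branch: randomly initialized ("Keras") embedding followed by a CNN.
text_in = layers.Input(shape=(MAX_LEN,), name="tweet_tokens")
x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(text_in)
x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
# The BiLSTM variant would replace the Conv1D/pooling pair with:
# x = layers.Bidirectional(layers.LSTM(128))(x)

# Social-context branch: the 12 user features encoded as a plain vector.
user_in = layers.Input(shape=(NUM_USER_FEATS,), name="user_features")

# Concatenate both branches, pass through an FFNN, classify with softmax.
merged = layers.Concatenate()([x, user_in])
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(2, activation="softmax", name="fake_or_real")(hidden)

model = Model(inputs=[text_in, user_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])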

The paper describes previous work and the suggested methods in detail, but some clarifications and additional experiments are needed, as follows:

- In Table 1, the two columns, Classification approach and Textual feature representations, need to be clarified. mBERT, AraBERT, ... are contextual pre-trained embedding models and should also be listed under the column Textual feature representations, just like Word2vec and FastText.

- In Table 1, please give the number of samples in each dataset in the column Dataset.

- In subsection 3.2, the pre-processing methods used are described. They are appropriate for the classical non-contextual embedding methods but not for advanced contextual embeddings like ARBERT and MARBERT, which were pre-trained with much lighter pre-processing. It would therefore be better to apply the same pre-processing as ARBERT and MARBERT when using them for embedding (see the first sketch after this list).

- In subsection 3.3.2, 12 user features were adopted, but it is unclear whether all of them are borrowed from existing work or whether some are newly introduced by the authors. Please clarify this.

- There is no specific explanation of fine-tuning, although pre-trained embedding vectors or models are used. In the case of the Keras embedding, the randomly initialized embedding vectors are trained during the whole model's training process. However, it is unclear whether Word2vec, FastText, ARBERT, and MARBERT are further trained (i.e., fine-tuned) during the training process on the target task (see the second sketch after this list).

- Tables 6 and 7 show the hyper-parameters selected for each CNN- or BiLSTM-based model after some tuning experiments. However, some hyper-parameters, especially the learning rate, may be optimized to different values depending on the base embedding model. For example, BERT-style embedding models commonly require a smaller learning rate than non-contextual pre-trained embeddings like Word2vec and FastText. So at least the learning rate should be tuned separately for each base embedding method (see the third sketch after this list).

- Figure 4 shows the impact of the user features (with and without user features). Please specify which embedding model was used for these experiments (perhaps MARBERT, but there is no explanation). Since 12 kinds of user features were used, why not compare the impact of each individual user feature (see the last sketch after this list)?

- The suggested method was evaluated on a self-created dataset. Only one other dataset [40] was used, and even that dataset is incomplete. So it is impossible to compare the suggested method to any of the previous work described in Table 1. Please give some comparison results to prove the effectiveness of the suggested method by performing additional experiments on datasets that include social context and have been used with deep learning models.
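First sketch (pre-processing): non-contextual embeddings such as Word2vec and FastText typically pair with aggressive cleaning, while BERT-style models expect nearly raw text so that their own tokenizer handles normalization. The regex rules below and the Hugging Face checkpoint name "UBC-NLP/MARBERT" are illustrative assumptions, not details taken from the paper:

import re
from transformers import AutoTokenizer

def heavy_clean(text):
    """Classical pipeline: strip URLs, mentions, Arabic diacritics, punctuation."""
    text = re.sub(r"https?://\S+|@\w+", " ", text)
    text = re.sub(r"[\u064B-\u0652]", "", text)   # Arabic diacritics
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def light_clean(text):
    """BERT-style pipeline: keep the text close to what the model saw in pre-training."""
    return re.sub(r"https?://\S+", " ", text).strip()

tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/MARBERT")
encoded = tokenizer(light_clean("خبر عاجل ..."), truncation=True, max_length=64)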

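Second sketch (fine-tuning): the frozen-versus-fine-tuned distinction can be made explicit in Keras through the trainable flag of the embedding layer; the zero matrix below is a stand-in for real Word2vec or FastText vectors, assumed to be built beforehand:

import numpy as np
from tensorflow.keras import layers, initializers

VOCAB_SIZE, EMB_DIM = 50000, 300
embedding_matrix = np.zeros((VOCAB_SIZE, EMB_DIM))  # placeholder for pre-trained vectors

# Frozen: the pre-trained vectors stay fixed while the classifier trains.
frozen_emb = layers.Embedding(
    input_dim=VOCAB_SIZE, output_dim=EMB_DIM,
    embeddings_initializer=initializers.Constant(embedding_matrix),
    trainable=False)

# Fine-tuned: the same vectors are further updated on the target task.
tuned_emb = layers.Embedding(
    input_dim=VOCAB_SIZE, output_dim=EMB_DIM,
    embeddings_initializer=initializers.Constant(embedding_matrix),
    trainable=True)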
 
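Third sketch (learning rate): tuning the rate per embedding family could look like the following; the specific values are conventional defaults rather than numbers from the paper:

import tensorflow as tf

# Non-contextual embeddings (Keras/Word2vec/FastText branches) usually train
# well with the standard Adam default.
opt_static = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Fine-tuning BERT-style encoders (ARBERT/MARBERT) commonly needs a much
# smaller rate to avoid overwriting the pre-trained weights.
opt_bert = tf.keras.optimizers.Adam(learning_rate=2e-5)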

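Last sketch (per-feature comparison): the impact of each of the 12 user features could be measured with a simple leave-one-out ablation; train_and_evaluate is a hypothetical helper standing in for the paper's training loop and is assumed to return a validation F1 score:

import numpy as np

USER_FEATURES = ["feature_%d" % i for i in range(12)]  # placeholder names

def feature_ablation(X_text, X_user, y, train_and_evaluate):
    """Drop one user feature at a time and measure the change in F1."""
    baseline = train_and_evaluate(X_text, X_user, y)
    impact = {}
    for i, name in enumerate(USER_FEATURES):
        X_drop = np.delete(X_user, i, axis=1)  # remove one feature column
        impact[name] = baseline - train_and_evaluate(X_text, X_drop, y)
    return impact  # positive values mean the feature helps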
Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I have no more comments.

The quality of the English language is better than in the previous version.
