Tackling Misinformation Online

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: closed (31 October 2020) | Viewed by 24467

Special Issue Editors


Dr. Arkaitz Zubiaga
Guest Editor
School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
Interests: social media mining; natural language processing; computational social science

Dr. Ahmet Aker
Guest Editor
1. Information System Group, Department of Computer Science and Applied Cognitive Science, University of Duisburg-Essen, 47057 Duisburg, Germany
2. Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
Interests: online disinformation; information nutrition labels; social media analysis; natural language processing

Prof. Kalina Bontcheva
Guest Editor
Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
Interests: online disinformation and abuse analysis; natural language processing; social media analysis; digital journalism

Dr. Maria Liakata
Guest Editor
1. Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
2. Alan Turing Institute, London NW1 2DB, UK
Interests: online disinformation and abuse analysis; natural language processing; social media analysis; digital journalism

Prof. Rob Procter
Guest Editor
1. Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
2. Alan Turing Institute, London NW1 2DB, UK
Interests: social media analytics; digital journalism; fact checking; social data science

Dr. Symeon Papadopoulos
Guest Editor
Centre for Research and Technology Hellas, Information Technologies Institute, Thessaloniki 570 01, Greece
Interests: web mining; information retrieval; multimedia mining; artificial intelligence; social network analysis

Special Issue Information

Dear Colleagues,

In the last decade, social media has become the platform par excellence for all kinds of online information exchange: creating, consuming, and sharing content; commenting on and engaging with content posted by others; organizing events; reporting and tracking real-world events; rating and reviewing products; and catching up with the latest news. Among the best-known platforms today are Facebook, Twitter, Sina Weibo, Reddit, and Instagram. Besides individuals, companies, agencies, institutions, and politicians have also increased their presence on social media. One of their objectives is to engage with a broader audience while also learning from it. For instance, companies want to find out what customers think about their products so that they can improve their services and target advertising. Given the scale of social media use, it is also being leveraged to make predictions on a variety of issues, such as political elections, referenda, and stock markets.

Although social media seems to offer a way to address all kinds of problems, it is also a source of new ones, some of which pose a serious threat to society. One such threat is online information disorder and its power to manipulate public opinion. Information disorder is commonly categorized into three types: (1) misinformation, the sharing of inaccurate information by honest mistake; (2) disinformation, the deliberate spreading of inaccurate information; and (3) malinformation, accurate information that is intended to harm others, such as leaks and cyberhate. Its spread can play an important role in shaping public opinion and reactions to events, which the viral properties of social media may then amplify. The influence of online information disorder has been evident in recent political events, such as the Brexit referendum and Trump's election, where social media played a significant role in shaping public opinion, and where "fake news" and "post-truth" had an impact that is yet to be fully understood.

Topics of interest include, but are not limited to, the following:

  • Detection and tracking of rumors
  • Rumor veracity classification
  • Fact-checking social media
  • Detection and analysis of disinformation, hoaxes, and fake news
  • Stance detection in social media
  • Qualitative user studies assessing the use of social media
  • Bot detection in social media
  • Measuring public opinion through social media
  • Assessing the impact of social media on public opinion
  • Political analyses of social media
  • Real-time social media mining
  • NLP for social media analysis
  • Multimedia content analysis in social media settings
  • Deepfake detection and case studies
  • Network analysis and diffusion of dis/misinformation
  • Usefulness and trust analysis of social media tools
  • Benchmarking disinformation detection systems
  • Open disinformation knowledge bases and datasets

Dr. Arkaitz Zubiaga
Dr. Ahmet Aker
Prof. Kalina Bontcheva
Dr. Maria Liakata
Prof. Rob Procter
Dr. Symeon Papadopoulos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • natural language processing
  • social media mining
  • information nutrition labels for the Web
  • data mining
  • machine learning

Published Papers (4 papers)


Research

24 pages, 981 KiB  
Article
Raising the Flag: Monitoring User Perceived Disinformation on Reddit
by Vlad Achimescu and Pavel Dimitrov Chachev
Information 2021, 12(1), 4; https://doi.org/10.3390/info12010004 - 22 Dec 2020
Cited by 7 | Viewed by 5832
Abstract
The truth value of any new piece of information is not only investigated by media platforms but also debated intensely on internet forums. Forum users are fighting back against misinformation by informally flagging suspicious posts as false or misleading in their comments. We propose extracting posts informally flagged by Reddit users as a means to narrow down the list of potential instances of disinformation. To identify these flags, we built a dictionary enhanced with part-of-speech tags and dependency parsing to filter out specific phrases. Our rule-based approach performs similarly to machine learning models but offers more transparency and interactivity. Posts matched by our technique are presented in a publicly accessible, daily updated, and customizable dashboard. This paper offers a descriptive analysis of which topics, venues, and time periods were linked to perceived misinformation in the first half of 2020, and compares user-flagged sources with an external dataset of unreliable news websites. This method can help researchers understand how truth and falsehood are perceived in subreddit communities and identify new false narratives before they spread through the larger population.
(This article belongs to the Special Issue Tackling Misinformation Online)
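As an illustration of the kind of rule-based flag extraction the abstract describes, a minimal sketch using spaCy might combine a seed dictionary with part-of-speech and dependency checks. This is not the authors' pipeline: the FLAG_TERMS dictionary, the is_flagging_comment helper, and the chosen dependency labels are illustrative assumptions.

```python
# A minimal sketch (not the authors' exact pipeline) of detecting
# "user flags" in Reddit comments: a small dictionary of flagging
# terms, refined with part-of-speech and dependency-parse checks.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

# Hypothetical seed dictionary of flagging vocabulary.
FLAG_TERMS = {"fake", "false", "misleading", "hoax", "disinformation"}

def is_flagging_comment(text: str) -> bool:
    """Return True if the comment appears to flag a post as untrue."""
    doc = nlp(text)
    for token in doc:
        if token.lemma_.lower() not in FLAG_TERMS:
            continue
        # Keep only uses where the term is predicated of something,
        # e.g. "this is fake" or "fake news" (predicate complement or
        # adjectival modifier), rather than incidental verb uses.
        if token.dep_ in {"acomp", "amod", "attr"} and token.pos_ in {"ADJ", "NOUN"}:
            return True
    return False

comments = [
    "This is fake, the quote was never said.",
    "I faked surprise when I saw it.",  # verb use: should not match
]
print([c for c in comments if is_flagging_comment(c)])
```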

40 pages, 13898 KiB  
Article
Addressing Misinformation in Online Social Networks: Diverse Platforms and the Potential of Multiagent Trust Modeling
by Robin Cohen, Karyn Moffatt, Amira Ghenai, Andy Yang, Margaret Corwin, Gary Lin, Raymond Zhao, Yipeng Ji, Alexandre Parmentier, Jason P’ng, Wil Tan and Lachlan Gray
Information 2020, 11(11), 539; https://doi.org/10.3390/info11110539 - 23 Nov 2020
Cited by 4 | Viewed by 4676
Abstract
In this paper, we explore how various social networking platforms currently support the spread of misinformation. We then examine the potential of a few specific multiagent trust modeling algorithms from artificial intelligence for detecting that misinformation. Our investigation reveals that the specific requirements of each environment may call for distinct processing solutions. This leads to a higher-level proposal for the actions to be taken in order to judge trustworthiness. Our final reflection concerns what information should be provided to users once there are suspected misleading posts. Our aim is to enlighten both the organizations that host social networking and the users of those platforms, and to promote steps towards more pro-social behaviour in these environments. Looking to the future and the growing need to address this vital topic, we also reflect on two related topics of possible interest: the case of older adult users and the potential to track misinformation through dedicated data science studies, of particular use for healthcare.
(This article belongs to the Special Issue Tackling Misinformation Online)
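For readers unfamiliar with multiagent trust modeling, the sketch below shows one standard building block, a beta-reputation score that rates an information source from counts of its past reliable and unreliable interactions. It is a generic illustration, not code from the paper; the TrustRecord class and its uniform prior are our assumptions.

```python
# Illustrative beta-reputation trust score, a common building block in
# multiagent trust modeling; not code from the paper itself.
from dataclasses import dataclass

@dataclass
class TrustRecord:
    positive: int = 0  # past interactions judged reliable
    negative: int = 0  # past interactions judged unreliable

    def update(self, reliable: bool) -> None:
        if reliable:
            self.positive += 1
        else:
            self.negative += 1

    def score(self) -> float:
        # Expected value of a Beta(positive + 1, negative + 1) distribution:
        # an unknown source starts at 0.5 and moves with accumulated evidence.
        return (self.positive + 1) / (self.positive + self.negative + 2)

source = TrustRecord()
for outcome in [True, True, False, True]:
    source.update(outcome)
print(f"trust = {source.score():.2f}")  # trust = 0.67
```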

26 pages, 423 KiB  
Article
A Reliable Weighting Scheme for the Aggregation of Crowd Intelligence to Detect Fake News
by Franklin Tchakounté, Ahmadou Faissal, Marcellin Atemkeng and Achille Ntyam
Information 2020, 11(6), 319; https://doi.org/10.3390/info11060319 - 12 Jun 2020
Cited by 14 | Viewed by 5277
Abstract
Social networks play an important role in today's society and in our relationships with others. They give Internet users the opportunity to play an active role: one can relay certain information via a blog, a comment, or even a vote, and share any content at any time. However, some malicious Internet users take advantage of this freedom to share fake news in order to manipulate or mislead an audience, invade the privacy of others, and harm certain institutions. Fake news seeks to resemble traditional media to establish its credibility with the public, and its apparent seriousness pushes the public to share it, so it can spread quickly and cause enormous difficulties for users and institutions. Several authors have proposed systems to detect fake news in social networks using crowd signals gathered through crowdsourcing. Unfortunately, these authors do not combine the expertise of the crowd with that of a third party to make decisions, even though crowds are useful in indicating whether or not a story should be fact-checked. This work proposes a new method of binary aggregation that combines the opinions of the crowd with the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party side. An experiment was conducted with 25 posts and 50 voters. A quantitative comparison with the majority-vote model reveals that our aggregation model provides slightly better results, owing to the weights assigned to accredited users. A qualitative investigation against existing aggregation models shows that the proposed approach meets the requirements or properties expected of a crowdsourcing system and a voting system.
(This article belongs to the Special Issue Tackling Misinformation Online)
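The following sketch illustrates the general shape of such an aggregator: a majority vote over crowd labels combined with a weighted average of expert judgments. The 50/50 combination, the 0.5 threshold, and the function names are illustrative assumptions rather than the paper's exact scheme.

```python
# A minimal sketch of combining crowd majority voting with a weighted
# average of third-party expert judgments; all weights and thresholds
# here are illustrative assumptions.
def crowd_majority(votes: list[bool]) -> float:
    """Fraction of crowd votes labelling the post as fake."""
    return sum(votes) / len(votes)

def expert_weighted(judgments: list[tuple[float, float]]) -> float:
    """Weighted average of (fake_probability, weight) expert judgments."""
    total_weight = sum(w for _, w in judgments)
    return sum(j * w for j, w in judgments) / total_weight

def aggregate(votes: list[bool],
              judgments: list[tuple[float, float]],
              threshold: float = 0.5) -> bool:
    """Label a post as fake if the combined signal crosses the threshold."""
    combined = 0.5 * crowd_majority(votes) + 0.5 * expert_weighted(judgments)
    return combined >= threshold

votes = [True] * 30 + [False] * 20   # 50 crowd voters
experts = [(0.9, 2.0), (0.4, 1.0)]   # the accredited expert carries more weight
print(aggregate(votes, experts))     # True
```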

19 pages, 600 KiB  
Article
Malicious Text Identification: Deep Learning from Public Comments and Emails
by Asma Baccouche, Sadaf Ahmed, Daniel Sierra-Sosa and Adel Elmaghraby
Information 2020, 11(6), 312; https://doi.org/10.3390/info11060312 - 10 Jun 2020
Cited by 27 | Viewed by 6786
Abstract
Identifying internet spam has been a challenging problem for decades. Several solutions have succeeded in detecting spam comments in social media or fraudulent emails. However, an adequate strategy for filtering messages is difficult to achieve, as these messages resemble real communications. From the Natural Language Processing (NLP) perspective, deep learning models are a good alternative for classifying text after preprocessing. In particular, Long Short-Term Memory (LSTM) networks are among the models that perform well on binary and multi-label text classification problems. In this paper, we present an approach that merges two different data sources: one for spam in social media posts and the other for fraud classification in emails. We designed a multi-label LSTM model and trained it on the joined datasets, including text with common bigrams extracted from each independent dataset. The experimental results show that our proposed model is capable of identifying malicious text regardless of the source, and that the LSTM model trained on the merged dataset outperforms the models trained independently on each dataset.
(This article belongs to the Special Issue Tackling Misinformation Online)
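A minimal Keras sketch of a multi-label LSTM classifier of the kind described follows, with sigmoid outputs and binary cross-entropy so that the spam and fraud labels can fire independently. The layer sizes, vocabulary size, and label set are assumptions, not the paper's configuration.

```python
# Illustrative multi-label LSTM text classifier; hyperparameters are
# placeholder assumptions, not those reported in the paper.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000  # assumed vocabulary size after tokenization
MAX_LEN = 100        # assumed padded sequence length
NUM_LABELS = 2       # e.g. [spam, fraud]; a text may carry both labels

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.LSTM(64),
    # Sigmoid (not softmax) because the labels are independent:
    # a message can be both spam-like and fraud-like.
    layers.Dense(NUM_LABELS, activation="sigmoid"),
])

# Binary cross-entropy treats each label as its own yes/no decision,
# which is what multi-label classification requires.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for the merged, tokenized corpora.
x = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, 2, size=(32, NUM_LABELS)).astype("float32")
model.fit(x, y, epochs=1, batch_size=8)
```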
