*Editorial* **Special Issue "Fighting Fake News: A Generational Approach"**

**Eugène Loos and Loredana Ivan**


To reach a state of equal opportunity in our society, access to credible, accessible information [1,2] across all generations is of the utmost importance. Access to (digital) information about services and products is crucial [3]. Van den Hoven [4], referring to Rawls [5,6], goes so far as to call accessible information a "primary good". As all citizens have an equal right to information, Bovens [7] and Bovens and Loos [8] even advocate granting citizens information rights, along the lines of the classic (freedom) rights.

We define fake news as "any kind of misleading information that could mistakenly be considered accurate, regardless of the mechanisms that led to its propagation" [9]. See [10] for a typology of scholarly definitions and [11] for a discussion of related terms, such as mis-, dis- and mal-information. Fake news endangers the accessibility of information for younger and older citizens [12–14]; see also https://www.stopcoronafakenews.com/en/ (accessed on 5 March 2022). The question we are confronted with now is how to fight fake news so that all generations can continue to have access to credible, accessible information.

One approach involves introducing legal measures requiring tech platforms, such as Google, Facebook and Twitter, to self-regulate. The EU has requested that these platforms provide monthly reports on the actions they have taken to combat the dissemination of fake news (https://reut.rs/3o19Kg8, accessed on 5 March 2022). Dumitru et al. [9] state that "As part of these self-regulatory measures, Facebook and Google committed to a more stringent policing of the content that is tolerated on their platforms" (https://about.fb.com/news/2020/04/COVID-19-misinfo-update/, https://blog.google/outreach-initiatives/google-news-initiative/news-brief-april-2021-updates-google-news-initiative/, accessed on 5 March 2022). Additionally, Twitter stated that "as the global community faces the COVID-19 pandemic together, Twitter is helping people find reliable information, connect with others, and follow what's happening in real time ( ... )" (https://blog.twitter.com/en\_us/topics/company/2020/COVID-19#protecting, accessed on 5 March 2022). Another such measure is the development of a code of conduct (https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation, accessed on 5 March 2022). Dumitru et al. [9] point to the statement issued by the Sounding Board of the multistakeholder Forum on Disinformation on 24 September 2018, which declared that "[ ... ] the "Code of practice", as presented by the working group, contains no common approach, no clear and meaningful commitments, no measurable objectives or KPIs, hence no possibility to monitor progress, and no compliance or enforcement tool: it is by no means self-regulation, and therefore the Platforms, despite their efforts, have not delivered a Code of Practice". They conclude: "In short, the extent to which a legal approach using self-regulation and a code of principles really works to fight fake news remains unclear".

Technological innovation has opened the door for a second approach in the form of automatic deception detection [15,16]. Google has already started checking the factualness of the news presented on its platform, and Facebook recently introduced a new oversight board (an international committee of judges, journalists and academics) that will help steer the company's policy on freedom of expression. For more information, see the following initiative: https://www.poynter.org/international-fact-checking-network-fact-checkers-code-principles (accessed on 5 March 2022). Traditional fact checking and innovative technological detection might help to fight fake news to some extent, but they are not a solution in themselves. Quite apart from questions of technical feasibility, fake news will become increasingly sophisticated and harder (if not impossible) to detect. Moreover, there is an even more fundamental issue: who has the authority to decide the criteria for the credibility of online information—the state, the platform companies or the press? Using sophisticated tools to withhold certain news from citizens could in the end threaten their access to credible information, which would eventually erode democracy.

**Citation:** Loos, E.; Ivan, L. Special Issue "Fighting Fake News: A Generational Approach". *Societies* **2022**, *12*, 57. https://doi.org/10.3390/soc12020057

Received: 28 February 2022; Accepted: 7 March 2022; Published: 30 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

It may therefore be argued that a more durable solution would be to empower citizens so that they themselves are able to judge the credibility of information. We distinguish a third, educational approach based on media literacy [9,12,17] (see also https://www.stopcoronafakenews.com/en/toolkit-educatieve, accessed on 5 March 2022), focusing on interventions at schools, other educational institutions and community centers: "Media literacy should not only focus on people's ability to use certain devices and technologies, but also on promoting a deep understanding of modern forms of media, how these work and how they produce and use news items, all of which may be attained through systematic media education programs [18]. It is not only important to investigate the feasibility of interventions at an early age to empower young citizens such that they are able to establish the trustworthiness of news. It is also essential to involve other generations as due to the paucity of studies in this field, it would be naive to assume that they are not vulnerable to fake news" [9].

This Special Issue of *Societies* comprises seven papers presenting empirical research from Bosnia and Herzegovina (1×) [19], one multiple-country study (Argentina, Australia, France, Ireland, Italy, the Netherlands, Spain, the United Kingdom, the USA, Qatar, New Zealand and Costa Rica) [20], the USA (3×) [21–23] and Romania (2×) [24,25], focusing on how different generations perceive fake news, including young and middle-aged groups [19], multiple age groups [22,25], university students and adults in general [20], elementary students (grades 1–5) in the USA [21], and children and adolescents [24], and paying attention to age, education and gender [23]. The use of an ad hoc analysis sheet, validated by the interjudge method [20], could represent an interesting approach to investigating how people in different professions discern reliable information from fake news, whereas descriptive observational data [21] might provide insights into how different age groups search for information and how often they are exposed to fake news. Some authors [19] used thematic analysis to investigate differences between generations in perceiving fake news; others [25] used surveys to describe differences between generations in the perceived incidence of fake information. One study [23] used surveys to assess the impact of the characteristics of online articles and of their authors, publishers and sponsors on perceived trustworthiness, to ascertain how readers make online article trust decisions. In other studies [22,24], experiments were conducted to explore the rationale people use when deciding which information to trust. Overall, this Special Issue provides insights into the different methodologies available to research fake news from a generational perspective across different age groups.

**Conflicts of Interest:** The authors declare no conflict of interest.
