Article

On Fact-Checking Service: Artificial Intelligence’s Uses in Ibero-American Fact-Checkers

by João Canavilhas 1,* and Liliane Ito 2,3,*
1 Department of Communication, Philosophy and Politics, Beira-Interior University, 6201-001 Covilhã, Portugal
2 Department of Journalism, São Paulo State University, Bauru 17033-360, Brazil
3 Department of Communication, Brazilian Institute of Development, Research, and Education, Brasília 70830-401, Brazil
* Authors to whom correspondence should be addressed.
Soc. Sci. 2025, 14(9), 514; https://doi.org/10.3390/socsci14090514
Submission received: 17 July 2025 / Revised: 14 August 2025 / Accepted: 21 August 2025 / Published: 26 August 2025
(This article belongs to the Special Issue Disinformation in the Age of Artificial Intelligence)

Abstract

To fight disinformation, fact-checking initiatives have been using artificial intelligence tools as part of the verification process, whether to monitor false narratives, to assist in the construction of content, or to allocate verifications more effectively. In this theoretical and empirical study, in-depth interviews were conducted with representatives of four fact-checking organizations: Polígrafo (Portugal), Chequeado (Argentina), Maldita (Spain), and Aletheia (Brazil). Our aim was to find out how they use AI in the fact-checking process, and we concluded that each organization uses these technologies to respond to specific needs. Different levels of use were therefore identified, ranging from instrumental (simple) to disruptive (advanced), which relate to each organization's human resources and economic situation. The most developed Ibero-American organizations are on a par with their global counterparts.

1. Fact-Checking as a Response to Disinformation

In today’s platformed society (Van Dijck et al. 2018), where the use of artificial intelligence (AI) can empower so-called echo chambers, fact-checking initiatives have been an important counterpoint to disinformation and information disorder (Wardle and Derakhshan 2019).
In response to the spread of lies, rumors, and biased narratives, fact-checking initiatives emerged in the mid-2000s, aiming to investigate and disprove disinformative content of public interest after it was spread by social actors, mainly politicians and individuals with political, economic, cultural, or social prominence. Data from 2025 by Duke University, in the United States, which maps fact-checking initiatives worldwide, show that there are 446 institutions of this type spread across 102 countries. The pioneers were FactCheck.org, associated with the University of Pennsylvania, launched in 2003, and the UK's Channel 4 Fact Check in 2005.
Although the term is often used for all verification, Ireton and Posetti (2019) classify as fact-checking the thorough evaluation of speeches—official or not—made by agents of social relevance. Other types of checking are debunking, which can be described as the verification of false information and fraud regardless of format (video, audio, image, text), and pre-bunking, which involves the search for primary evidence such as eyewitnesses or geolocation, as well as reverse image search and other OSINT (open-source intelligence) techniques. The aim of the latter is to stop false stories before they go viral.
In the field of production, fact-checking agencies are often born within newsrooms, but this is not always the case. There are also non-governmental organizations, university initiatives, and others created by members of civil society interested in building a better-quality public debate. In common, they have verification procedures that are generally very similar to journalistic checking and verification. Perhaps that is why it is common to find journalists or journalism students on the teams, even in those that do not have links to the media. Fact-checking newsrooms also include “new actors from other fields who challenge professional practices with diverse profiles, such as data scientists, developers and researchers” (Pereira et al. 2024, p. 6).
The values that rule the work of fact-checking organizations are also similar to those of journalists: impartiality, the pursuit of neutrality, rigorous investigation of data sources and expertise, professional ethics, and a focus on issues of public interest (Sodré and Ferrari 1986; Alsina 1989; Kunczik 1997). These attributes usually appear on the websites of fact-checkers, who also add transparency. This point differs from journalism in general: the journalist does not have to specify each step of their investigation when publishing content, whereas in fact-checking this is an obligation, as described in the second principle of the International Fact-Checking Network: "Signatories provide all sources in enough detail that readers can replicate their work, except in cases where a source's personal security can be compromised. In such cases, signatories provide as much detail as possible" (IFCN 2025, online). In certain cases, like the Comprova Project1 in Brazil, this is a mandatory pre-publication stage in which the procedures carried out in the investigation are reviewed, possible biases are pointed out, and any ethical issues involved are assessed (Trevisan Fossá and Müller 2019). After cross-checking, the content is finally published with some kind of conclusion and label—such as "true", "false", or "misleading"—and these classifications can vary depending on the fact-checking institution (Diniz 2017). These features of the production process in fact-checking seek to guarantee the integrity of the information that will be published, the replicability of the investigation by any interested parties, the contextualization of the topic, and fairness in the procedures adopted, reinforcing the reliability and credibility of the fact-checking discourse (IFCN 2025).
This means that fact-checkers have, as a shared professional practice, the duty to show how the investigation of a discourse or piece of information took place, detailing for the public the path of the check step by step (Suomalainen et al. 2025) and publicizing the tools used, the databases consulted, the sources interviewed, and the software used to detect fraud, among others, so that any interested party can repeat the procedure.
Institutions that fulfill these criteria and are transparent about their sources of funding (they cannot receive support from governments or companies, for example) can certify their credibility by joining international associations that attest to the quality of fact-checking institutions. Two of the best-known and most relevant are the European Fact-Checking Standards Network (EFCSN) and the International Fact-Checking Network (IFCN). "Fact-checking organizations apply to become verified registrants of the IFCN code of principles. This requires an external evaluation to assess the effective implementation of these standards." (Ireton and Posetti 2019, p. 92).
Despite their orientation towards fact-checking and their original link to journalism, fact-checking institutions also work in the field of media education (Buckingham 2023), seeking to make citizens aware of the importance of understanding the media ecosystem by sharing information about their rights and duties, contextualizing issues of public interest, and clarifying the importance of government transparency.
This concern for citizen engagement is also visible in the type of tools made available by fact-checking organizations. One of the most widely used solutions has been bots, despite the limitations noted in the literature: difficulties with semantic interpretation (Kuznetsova et al. 2025), the non-repeatability of narratives (Allaphilippe et al. 2019), and the lack of quality datasets (Juršėnas et al. 2022) are just some of these obstacles. Although these tools have great potential for assessing the veracity of political information (Kuznetsova et al. 2025), precisely one of the areas of greatest social interest, their limitations still cause distrust of autonomous systems among both researchers and users. Perhaps for this reason, a significant portion of the systems in use prioritize hybrid work between algorithms and humans, although the most recent language models are beginning to provide context, approaching the quality of work performed by humans (Yang and Menczer 2023).
Knowing that trust is fundamental to increasing user engagement with the media, the same can be expected of fact-checkers (Lim and Perrault 2023). In the latter case, trust is gained through the quality of the technological infrastructure, but also through transparency in relation to fact-checking processes and funding models. Joining international networks, such as the European Fact-Checking Standards Network (EFCSN) and the International Fact-Checking Network (IFCN), implies total transparency in terms of funding, excluding government support, for example, to avoid constraining their actions.
Initiatives such as Digiteye (India), FactCheckNI (United Kingdom), 20 Minutes Fake Off (France), Faktisk (Norway), Australian Associated Press (Australia), and Animal Político (Mexico)2 are all signatories of the IFCN. Some examples of fact-checking institutions that currently work with media education are Africa Check (South Africa, Senegal, Nigeria, and Kenya), Chequeado (Argentina), Lupa (Brazil), and Maldita (Spain).3

2. Artificial Intelligence Comes to Fact-Checking

Historically, technology has always influenced journalistic activity (Pavlik 2000), and the arrival of artificial intelligence in newsrooms has once again confirmed this trend. The similarities between journalism and fact-checking mean that technology will once again have an important impact on the production process of fact-checkers, as they deal with disinformation at infodemic volumes and speeds (PAHO 2020). Increasingly, false information is generated and distributed by bots, but its disinformative effectiveness will probably remain linked to human intervention in its distribution (Flores Vivar 2019).
Despite the difficulty of making denials as effective as the original disinformation, up to a certain point human fact-checking was able to respond satisfactorily to the volume of disinformation in circulation. However, as soon as disinformers started using AI to produce and distribute false information, it became essential to fight with the same weapons. "Analysts must scan vast amounts of information using various tools and software in order to discover and monitor a disinformation campaign. This is performed by identifying patterns, classifying textual and audio-visual data, computing similarities between samples of content, and other techniques." (Juršėnas et al. 2022, p. 8).
In journalism, where hybrid work between AI and journalists is proposed (Linden 2017; Canavilhas 2025), joint work will also be needed to combat disinformation because algorithms do not yet understand the subtleties of human speech.
Juršėnas et al. (2022) proposed three areas in which AI can help humans who analyze disinformation: detecting suspicious content for later inspection, partially automating the analysis of disinformative content, and identifying dissemination systems. However, for such a system to work, the authors believe that many problems still need to be solved: the difficulty of algorithmically defining what disinformation is, the lack of large datasets of unquestionable quality, and the fact that artificial neural networks have not yet reached the level of common-sense reasoning are just some of them. In a scenario where generative AI systems are producing increasingly realistic content, these limitations mean that humans must remain involved in fact-checking to ensure that the systems are reliable.
Despite this, around 80% of fact-checking organizations say they intend to increase the use of AI in their work (Beckett and Yaseen 2023). Examples of the use of AI in fact-checking are appearing all over the world, including in the countries studied here.
In Portugal, the Lusa news agency, with the support of INESC-ID, CNCS—National Cybersecurity Centre, and the technology company in:know, leads the ContraFake project, which aims to “aggregate information and develop computational resources and technological tools based on artificial intelligence to protect and support communication professionals, citizens, and institutions against disinformation actions” (Lusa n.d.).
In Brazil, the two largest fact-checking agencies used AI in the 2022 Brazilian presidential elections. Lupa provided an intelligent assistant to which citizens could send suspicious information, which the newsroom then analyzed. The Aos Fatos agency developed an AI tool (Fátima) that automatically monitors the web, identifying false and potentially viral content to be analyzed by human journalists (Welter and Canavilhas 2023). Both cases are examples of hybrid work between humans and technology, seeking to avoid issues related to AI biases, for example.
In Spain, Newtral uses a chatbot like Lupa’s, where users can interact via WhatsApp to request verifications. AI is also used in automatic transcription for verification and statements, and it provides some tools for authenticating other content (Gonçalves et al. 2024).
Among the 446 fact-checking institutions identified in 2025, one initiative that uses AI in fact-checking is Tech & Check, from the Duke Reporters' Lab, which includes tools such as ClaimReview—a standardized tagging system that facilitates the indexing and display of fact-checks by search engines and platforms. Tech & Check also uses Squash, an experimental platform that performs automatic fact-checking during political events, first transcribing the audio and then cross-referencing it with the ClaimReview database, displaying results to the user in real time. Another AI tool used in fact-checking procedures is Full Fact AI, developed by a team of eight technology professionals at Full Fact. The tool enables monitoring of the media ecosystem, focusing on detecting misinformation, cross-referencing it with previously checked investigations in its database, and then providing labels for the investigation. According to Full Fact, this tool is scalable, has already been sold to more than 40 fact-checking organizations, and is present in 30 countries (Full Fact 2025).
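To illustrate how ClaimReview supports the indexing of fact-checks, the snippet below sketches a minimal fact-check item in the schema.org ClaimReview vocabulary, the markup that search engines parse to display verdict labels. All names, URLs, and ratings here are invented placeholders, not data from the organizations studied.

```python
import json

# Minimal, hypothetical ClaimReview item expressed as JSON-LD, following
# the public schema.org vocabulary; every value below is a placeholder.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2025-01-15",
    "url": "https://example-factchecker.org/checks/123",  # hypothetical check page
    "claimReviewed": "Example claim circulating on social media",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Public Figure"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,          # numeric position on the scale below
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the textual label readers see
    },
}

# Serialized as JSON-LD, this is what a search-engine crawler reads from the page.
print(json.dumps(claim_review, indent=2))
```

Embedded in a page's HTML, markup of this kind lets a platform match a circulating claim to an existing fact-check and surface the verdict label alongside the content.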
Another notable example is ClaimBuster, a platform presented as the "first-ever end-to-end fact-checking system" (Hassan et al. 2017), which monitors statements originating from a wide variety of platforms to verify their veracity.
Alongside these more technical issues, the same authors also propose passing tougher laws to combat disinformation and suggest international co-operation, involving the big platforms, in developing tools on this front.
In terms of legislation, examples of laws and recommendations seeking to combat disinformation are emerging all over the world. The European Union began to pay more attention to this phenomenon in 2015, with the creation of the StratCom task force, a working group exclusively dedicated to monitoring disinformation campaigns developed by Russia (ERC 2019). This was followed by a series of plans and resolutions focusing on hate speech, the protection of minors, and defining the very concept of disinformation. In 2018, the High-Level Independent Group on Fake News and Online Disinformation (HLEG) was created, and that same year, the report that gave rise to the Code of Practice on Disinformation was published, with the aim of self-regulating the fight against disinformation involving social media. In the footsteps of this code, some networks began to implement tools to combat disinformation or to hire the services of external companies to carry out this activity (ERC 2019).
And, at the end of 2018, more precisely on 5 December, the European Commission published the Action Plan against Disinformation, which recommends increasing co-operation between states to improve the fight against disinformation, mobilizing the private sector, supporting fact-checkers and raising public awareness of the phenomenon, as well as proposing concrete measures to be adopted by European Union countries (ERC 2019).

Legislation About Disinformation in Ibero-America

In Portugal, the protection of citizens against disinformation was first included in the “Carta Portuguesa de Direitos Humanos na Era Digital” (Portuguese Charter of Human Rights in the Digital Age), law No. 27/2021. Point 1 of Article 6 stated that it was the state’s responsibility to ensure compliance with the European Action Plan against Disinformation, protecting society from those who produce disinformation, defined in point 2 as “any demonstrably false or misleading narrative created, presented and disseminated to obtain economic advantage or to deliberately mislead the public, and which is likely to cause public harm, namely a threat to democratic political processes, public policy-making processes and public goods”, whether disseminated through texts, videos, emails, or other networks. Law No. 15/2022 subsequently simplified the previous law and ensured that it was in line with the European Action Plan against Disinformation. Thus, five points of Article 6 were revoked, leaving only point 1: “The State shall ensure compliance in Portugal with the European Action Plan against Disinformation, to protect society against natural or legal persons, de jure or de facto, who produce, reproduce or disseminate narratives considered to be disinformation.”
In Spain, there is no law against disinformation, but there are several strategic documents that address the issue, as well as laws that have added provisions to protect citizens from this phenomenon.
In 2017, the National Security Committee of the Congress of Deputies developed a set of proposals against disinformation that were submitted to the government. In 2021, "La Estrategia de Seguridad Nacional" (National Security Strategy—Royal Decree 1150/2021) included disinformation campaigns among the threats listed in its Chapter 3. Following this document, the Procedure for the Development of the National Strategy against Disinformation Campaigns was published in 2025.
Other important milestones occurred in 2022, with the creation of the “Foro contra las campañas de desinformación en el ámbito de la Seguridad Nacional” (Forum against Disinformation Campaigns within the scope of National Security), and in 2024, the government approved the Action Plan for Democracy, which included the creation of a commission on disinformation and approved a national strategy against disinformation campaigns.
In the case of Brazil, the first piece of Internet legislation is Law 12.965/2014, also known as the "Marco Civil da Internet" (Civil Rights Framework for the Internet), which establishes principles of freedom, privacy, neutrality, and transparency in the use of the network. However, enacted more than ten years ago, it does not deal directly with disinformation. Bill 2630/2020, approved by the Brazilian Senate in 2020, is the most important of the several laws proposed in the country. Specifically, the document proposes establishing standards for platforms, such as increasing transparency, preserving copyright, and holding Big Tech companies and providers liable in certain cases of spreading disinformation. At the time, the tech giants lobbied hard (Dias 2023) to stop the vote on the bill in the House of Representatives, a stalemate that has now lasted almost five years.
Finally, in Argentina, there is still no national law aimed at tackling disinformation, despite several legislative proposals in this regard, such as S-1453/2020, which proposes making it a criminal offence to create and disseminate pieces of disinformation intended to cause panic, discredit authorities, or generate unrest, with a penalty of two to six years in prison. Other proposals would make it compulsory for digital platforms to provide periodic transparency reports or would modify the civil liability of intermediaries, requiring platforms to monitor and remove illegal content after notification. Also in Argentina, in October 2020, the "Observatorio de la Desinformación y Violencia Simbólica" (Observatory of Disinformation and Symbolic Violence) was created, linked to the Public Defender's Office, with the aim of monitoring malicious content in media and digital platforms. Its creation caused protests from society and the press, forcing the government to clarify that the body has no power to sanction, being only a space for monitoring and research (Rocha et al. 2023).
There seems to be an alignment along linguistic lines in the policies of the countries studied: while Portuguese-speaking Portugal and Brazil chose to legislate to protect citizens from disinformation, Spanish-speaking Spain and Argentina chose to create surveillance structures and strategic guidance documents. In both cases, there are global reference frameworks, such as UNESCO's Guidelines for the Governance of Digital Platforms (UNESCO 2023) and the European Action Plan against Disinformation.

3. Materials and Methods

This study seeks to find out whether Ibero-American fact-checking organizations use artificial intelligence and at what points in the verification process they do so. The question that guides this work is, therefore, as follows:
RQ: How are Ibero-American fact-checking organizations using artificial intelligence in their fight against disinformation?
To answer this question, we will analyze usage levels, the type of tools, the role of humans in the process, and the economic models that support the work since financial independence is a decisive factor for accreditation by organizations such as the European Fact-Checking Standards Network (EFCSN) and the International Fact-Checking Network (IFCN).

3.1. Sample

A survey carried out in the countries studied revealed that there are 31 organizations—including newspapers—that perform fact-checking, which are divided as follows: Brazil (12), Spain (10), Portugal (5), and Argentina (4).
The choice of this sample (Table 1) relates to the organizations' size and characteristics. In the cases of Portugal, Spain, and Argentina, the participating agencies are the best known in their respective countries. In the case of Brazil, we opted for an organization that, although not among the largest, such as Lupa or Aos Fatos, has a unique characteristic of interest to the study: it is based on the voluntary work of its members.

3.1.1. Polígrafo (Portugal)

Founded in 2018, Polígrafo4 is Portugal’s best-known fact-checking organization. The brainchild of Fernando Esteves, a journalist with a long career in traditional media, Polígrafo provides information for “Polígrafo SIC”, a program on SIC’s TV channel. Although it is a small organization, Fernando Esteves says that “There is no global project that carries the same weight in its particular society that Polígrafo has in Portugal—not even close” (Esteves 2024). A signatory member of the IFCN and EFCSN, Polígrafo works with media education projects aimed at young people, such as “Geração V—em nome da Verdade” (V Generation—in the name of Truth), an initiative funded by the Porticus Foundation in which young people aged 15 to 22 are trained to check information and spread the results on their social media.
Polígrafo does not have a physical newsroom in Lisbon, as its 15 employees work remotely. There are no system developers on the team. The SIC TV team is made up of six employees hired by the broadcaster exclusively to produce the program.
Polígrafo’s funding is mixed, with support from the Calouste Gulbenkian Foundation and the Sapo portal that hosts the site.

3.1.2. Maldita (Spain)

Inspired by the projects FactCheck.org, Chequeado, and PolitiFact, Maldita5 was created in 2018 as a non-profit fact-checking organization. It started out as a Twitter account but, thanks to a crowdfunding initiative, evolved into its current form. Its financial model relies on various sources of income, including donations from citizens, collaboration with different media (such as radio programs), mentoring for institutions on fact-checking, technological partnerships, and, above all, subsidies and philanthropy.
According to Pablo Hernández Escayola, Maldita is “a non-profit foundation, so we don’t receive any advertising money. To fund what we need to do, we are very attentive to all funding opportunities that arise through development projects and calls for proposals related to disinformation, if they are aligned with our way of looking at the problem of disinformation. We have editorial independence—there is independence in all our agreements.” (Escayola 2024).
Maldita is part of the EFCSN, the IFCN, and the Global Investigative Journalism Network (GIJN), which brings together 244 organizations in 90 countries. The project claims to follow independent editorial methodologies and criteria guided by its own code of ethics.
The public is invited to join Maldita through three models of participation: with requests for information and verification; as specialized consultants for the production of journalistic pieces; and as advisors who exercise “independence vigilance”, i.e., they monitor the content and evaluate aspects such as the approach to topics and the textual language used by journalists in the pieces produced to disprove disinformation.

3.1.3. Chequeado (Argentina)

Among the fact-checking institutions in this investigation, Chequeado6 is the oldest, having been set up in 2010 in Buenos Aires, Argentina. It is among the first ten fact-checking organizations to emerge worldwide and the first in Latin America. The brainchild of the non-profit organization La Voz Publica, whose aim is to improve the quality of public discourse, seen as a decisive factor in the credibility of democratic institutions, Chequeado began as a website but now operates on various social media platforms and chat apps. Its current team is made up of 40 people.
Ana Laura Garcia Luna says the following:
“Chequeado has four main programs: the media program, which is basically the newsroom, responsible for producing journalistic content in various formats; the innovation program, which on one hand produces a lot of audiovisual content and works very closely with the newsroom on the production of journalistic content in different data formats, and also develops technology. For example, we carry out many developments using artificial intelligence that help us automate certain small parts of the process. Of course, the actual fact-checking is always performed by humans. Another area is the education program, which has two main lines of work, and then there’s the impact and new initiatives area, which is obviously focused on measuring Chequeado’s impact and developing new ways to do so. It also coordinates, for example, the Latin American fact-checking network. So, we all work in a very interconnected way because, in the end, we all end up working together on each project.”
Chequeado is a signatory of the IFCN and is the leading fact-checking program of LATAMChequea, Latin America’s fact-checking network. “We work to revalue the truth and raise the cost of lies” is the sentence that opens the section explaining the project, which is divided into four main programs: Media; Education; Innovation; and Impact and New Initiatives.

3.1.4. Aletheia (Brazil)

A non-governmental and non-profit organization, the Aletheia Movement was born in 2022 from a project by Mateus Santos, a software engineer who worked at the Wikimedia Foundation. When he took part in the 2018 Mozilla Open Labs event, whose theme was "improving the internet", Santos had the idea of designing a platform that would allow public discourse to be checked in an accessible and horizontal way. Inspired by the layout and operation of the genius.com7 website, he developed the AletheiaFact.Org8 platform, which, like the site that inspired it, allows a community gathered around a common ideal to work together—in this case, on fact-checking.
The creation of the platform alone was not enough to get the movement started. Thus, Santos invited journalist Tamiris Volcean to join the team as a volunteer so that the actions—which until now had only included building the platform—could be achieved. Both are currently directors of the NGO, which is part of the “Rede Nacional de Combate à Desinformação” (National Network to Combat Disinformation).
With around 15 permanent volunteers (in the areas of systems development, communication, and legal affairs), the Aletheia Movement's mission is to "democratize fact-checking", from the technical side (the platform is open source and available to anyone interested) to the recruitment of aspiring fact-checkers, who can create a login and password on the site.
According to Tamiris Volcean, the “main goal today is to form a network, primarily academic rather than market-driven, to establish a stronger foundation for fact-checking in Brazil, and for this to be democratized through multiple actions, not only within the university, but by uniting the tripod of teaching, research, and extension.” (Volcean 2025).
The Aletheia Movement has also created the "Comitê Nacional de Democratização de Checagem de Fatos" (National Committee for the Democratization of Fact-Checking), based on collaboration between teachers, researchers, students, and institutions from all regions of Brazil, with a branch in Portugal.

3.2. Methodology

Based on the literature review and the analysis of the work carried out by the organizations in the sample, the next step was to find out what role artificial intelligence plays in the fact-checking process. The decision was made to use the semi-structured interview because it is an instrument that allows “questions to be adapted and/or additional information to be requested whenever this proves important, and it is precisely this characteristic, i.e., its flexibility, that individualizes it in relation to other forms of enquiry” (Coutinho 2022).
The semi-structured interviews were divided into four groups of questions (General Data; Funding; Audience and Participation; AI Uses) and were conducted with representatives of the four organizations that make up the sample.
Chequeado (Argentina): Ana Laura Garcia Luna, journalist trainer for the Education program, and Milena Rosenzvit. The interview took place on the morning of 30 April 2024, lasted 49 min, and was conducted remotely using Google Meet.
Maldita (Spain): Pablo Hernández Escayola, academic research coordinator at Maldita.es. The interview took place on 15 May 2024 in the afternoon, lasted 1 h 25 min, and was conducted face-to-face in Madrid.
Polígrafo (Portugal): Fernando Esteves, founder and director of the fact-checker. The interview took place in the evening of 20 May 2024, lasted 1 h 14 min, and was conducted remotely using Google Meet.
Aletheia (Brazil): Tamiris Volcean, co-founder and executive director of the organization, and Mateus Santos, co-founder and CPTO (Chief Product and Technology Officer). In the first case, the interview took place on 23 May 2025 and lasted 29 min; in the second, it took place on 29 May 2025 and lasted 45 min. Both were conducted via Google Meet.
The audio files of the interviews were fully transcribed using the Pinpoint tool. Once the audio files had been processed into text files, they were reviewed, and various transcription and spelling errors were corrected.

4. Results and Discussion

The analysis of the work conducted by the organizations in the sample made it possible to draw up the interview script used with those responsible for each organization. The transcription of the interviews led to organizing the discussion around two axes: how these organizations use AI and their forms of funding.

4.1. Uses of AI at All Levels to Fight Disinformation

After compiling the data, three levels of artificial intelligence implementation emerged in the institutions surveyed, conditioned by two elements: the purpose of use and the origin of the AI (acquired or developed in-house). The levels were categorized as tooling, experimental, and disruptive, from the simplest to the most complex (Table 2).
At the tooling level, AI supports fact-checking routines, such as transcribing audio and video, performing translations, and other tasks that can be carried out with publicly accessible AI tools available to anyone, such as ChatGPT (4.0). In this case, the AI is acquired from third parties; that is, it is not developed by the fact-checking institution’s own technology team (e.g., through subscriptions to paid versions of ChatGPT, Manus, or DeepSeek). AI at this level is also employed to distribute content on social media platforms such as Instagram and Facebook.
At the experimental level, in addition to the aspects above, AI implementation aims to be transferable to other contexts or settings, since experimentation involves testing AI tools specifically designed for fact-checking or journalistic investigation. This stage typically occurs in initiatives such as experimental laboratories within fact-checking organizations or university research centers.
Finally, the most advanced level of AI implementation in fact-checking is the disruptive level. Here, beyond the previous aspects, the AI is created by the organization’s own team of developers and serves predictive purposes: generative AI searches for patterns across internal databases to surface false narratives likely to emerge at a given time, or narratives that are growing significantly and should be fact-checked by the team. Most importantly, internally developed AI is scalable and serves as a monetization tool, being sold to other fact-checking organizations and adding financial value to the institution that produces it. At all levels, content intended for the end user remains subject to human review before publication.
This categorization is not mutually exclusive; in other words, the same fact-checking organization can belong to more than one level, with one being more relevant, depending on the specific context of each institution.

4.1.1. Uses of AI in Ibero-American Fact-Checkers

The analysis of the uses of AI (Table 3) shows points in common between the organizations. They all use artificial intelligence platforms and resources at the tooling level, i.e., to automate processes at some stage of fact-checking (such as reverse image search or audio transcription), which helps speed up verification in the face of significant volumes of disinformation. AI tools also help with the segmented distribution of content, according to the needs of the fact-checking organization. In these cases, the AI tools come from Big Techs such as OpenAI, Meta, or Google and are used on a subscription basis.
The second level of AI implementation is experimental. Chequeado and Aletheia exemplify this type of use, in which artificial intelligence goes beyond the tool level and is programmed for specific actions, such as Aletheia’s chatbot that searches the Official Gazette. It is also “studied” in publishing and content-creation actions, as in Chequeado’s AI Lab program, which tests certain AIs for editorial purposes and is potentially scalable to other fact-checking institutions.
Finally, the most advanced level of AI implementation is referred to here as the disruptive level, in which the tool is developed entirely within the organization, which has full-stack developers on its teams. These disruptive AI tools perform all the functions of the previous levels (tooling and experimental) and are also an important source of monetization for the organization, which generally sells the technology to other interested parties, as is the case with the Maldita and Chequeado bots. At the disruptive level, AI is also used to predict disinformation content by cross-referencing and analyzing data from users’ queries to the chatbots.

4.1.2. Chatbots: Searching for the Truth with Robotic Support

Maldita.es provides a chatbot on its official website that interacts with users who submit questions in search of fact-checks. Responses to users are generated via artificial intelligence based on Large Language Models (LLMs), algorithms trained on vast amounts of text to understand and generate human language coherently. The chatbot was created by Botalite.es, a technology company owned by Maldita.
The processing of all the requests we receive through the chatbot is carried out using artificial intelligence, mainly because, with large images or videos, sometimes the screenshot is a little bigger or smaller. What we achieve with artificial intelligence is for it to analyse and determine that, even if there are small variations and it is not 100% identical to what we have in the database, it is still possible to find a match. This allows us to respond more quickly and in an automated way to the queries we receive (Escayola 2024).
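The fuzzy matching described in the quote above can be illustrated with a minimal sketch: instead of requiring exact equality, an incoming item is compared against a database of already-checked content and accepted if it clears a similarity threshold. All vectors, identifiers, and the threshold below are invented for illustration; real systems would use learned image embeddings rather than toy lists.

```python
# Illustrative sketch (not Maldita's actual system): match an incoming
# screenshot to a database of checked content by comparing feature vectors,
# tolerating small variations instead of requiring an exact match.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_match(query_vec, database, threshold=0.9):
    """Return the best database entry whose similarity clears the threshold."""
    best_id, best_score = None, 0.0
    for entry_id, vec in database.items():
        score = cosine(query_vec, vec)
        if score > best_score:
            best_id, best_score = entry_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy vectors standing in for image embeddings.
db = {"check-001": [1.0, 0.0, 0.5], "check-002": [0.0, 1.0, 0.2]}
match, score = find_match([0.98, 0.05, 0.49], db)
print(match)  # a close variant of check-001 still matches
```

The threshold expresses the trade-off the interviewee mentions: high enough to avoid false matches, low enough to absorb small crops or resolution changes.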
Users’ questions to the chatbot are also a source of data for cross-referencing via artificial intelligence. Since this information is stored in Maldita’s repository of checks, it is possible to predict seasonal waves or themes of disinformation and work in advance on media education about them, within the scope of what is known as pre-bunking.
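A minimal sketch of this kind of narrative monitoring might look as follows. The narrative buckets, keywords, and alerts are all invented; a production system would cluster alerts with trained language models rather than keyword sets.

```python
# Hypothetical sketch of pre-bunking signal detection: count how often incoming
# chatbot alerts fall into known narrative buckets, so that fast-growing
# narratives can be flagged before they peak.
from collections import Counter

NARRATIVES = {
    "vaccines": {"vaccine", "vacuna", "chip"},
    "elections": {"fraud", "urna", "ballot"},
}

def classify(alert_text):
    """Assign an alert to the first narrative whose keywords it mentions."""
    words = set(alert_text.lower().split())
    for narrative, keywords in NARRATIVES.items():
        if words & keywords:
            return narrative
    return "unclassified"

def trending(alerts, min_count=2):
    """Return narratives repeated at least min_count times, most frequent first."""
    counts = Counter(classify(a) for a in alerts)
    counts.pop("unclassified", None)
    return [n for n, c in counts.most_common() if c >= min_count]

alerts = [
    "is the vaccine a chip implant",
    "ballot fraud video",
    "vaccine side effects hidden",
    "weather balloon over madrid",
]
print(trending(alerts))  # ['vaccines']
```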
What we have implemented within our database with artificial intelligence is something that analyses all the entries we receive and looks for narratives in the volume of alerts arriving through the chatbot. This is presented to us on a specific screen showing which narratives are being repeated most often, giving us a clear idea of whether a disinformation narrative is gaining a lot of traction while going unnoticed by us. What we’ve realized is that if we work a posteriori, in other words, when a rumor is already circulating and you start working on it, looking for sources, writing an article and publishing it, it is much easier for that rumor to generate seven others along the same lines than if we work proactively (Escayola 2024).
Another organization that uses a bot is Chequeado, which makes Chequeabot available in seven Latin American countries. This bot can be used by individual users, but also by Chequeado’s member organizations.
Chequeabot checks sentences in texts, audio, and video; it automatically transcribes text from video and audio, including in real time (it is widely used in live political debates and to check podcasts); and it monitors multiple social media platforms to track down pieces of disinformation while they circulate.
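The sentence-checking step relies on identifying which sentences in a transcript are factual claims worth verifying. A heavily simplified, hypothetical sketch of that idea is shown below; tools like Chequeabot use trained NLP models, not the keyword heuristic used here for illustration.

```python
# Illustrative claim-detection sketch: scan transcript sentences and keep those
# that look factual (numbers, percentages, comparatives) rather than opinion.
import re

# Invented markers; a real system would use a trained classifier.
FACTUAL_MARKERS = re.compile(r"\d|%|\bmillion\b|\bincreased\b|\bdecreased\b|\bhighest\b")

def checkable_sentences(transcript):
    """Split a transcript into sentences and keep the claim-like ones."""
    sentences = [s.strip() for s in re.split(r"[.!?]", transcript) if s.strip()]
    return [s for s in sentences if FACTUAL_MARKERS.search(s.lower())]

speech = ("Unemployment decreased by 4% this year. I believe in our people. "
          "We built 300 schools.")
print(checkable_sentences(speech))
```

Only the two verifiable statements survive the filter; the opinion sentence is discarded, which is what lets a bot prioritize checkers’ attention during a live debate.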
In the 2022 Brazilian presidential elections, Chequeabot was used in partnership with the fact-checking agency Lupa as a live-checking tool in the debates between the presidential candidates.
Half of the checks of leaders’ statements we conducted in 2020, including several with many thousands of readings, came from phrases found thanks to the automation platform we developed based on natural language processing and machine learning technologies. For those unfamiliar with this type of development, we can proudly say, after five years of work, that this result is unique in the world; we know of no other similar one with such a direct impact on our community. We also consider it relevant because this technology speaks our language, something that does not normally happen in innovative solutions at a global level, which tend to ‘speak’ the lingua franca of technology first: English.9
Chequeabot is, therefore, also a way of monetizing Chequeado’s work, since the tool can be purchased by organizations interested in checking data with the AI technology developed by the Argentinian team. Some of the bot’s functionalities, however, are open and free, such as “Desgrabador”, which transcribes the text of YouTube videos, a tool useful to journalists and fact-checkers in general, with the option of easily selecting, in the transcript, passages that link to the exact moment of the speech in the video.
Finally, Aletheia offers a chatbot for users to search the Diário Oficial (Official Gazette), the Brazilian government’s official communication vehicle, which publishes normative and administrative acts and other matters of public interest. The chatbot can thus be used as an initial search tool for checks involving the publication of laws, decrees, tender notices, and judicial decisions, among others.
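Conceptually, such a gazette-search chatbot reduces to filtering official records by keyword and jurisdiction before a human checker inspects the hits. The sketch below is purely illustrative: the records, field names, and query interface are invented and do not describe Aletheia’s or Querido Diário’s actual API.

```python
# Hypothetical gazette-search sketch: filter official-gazette records by
# keyword and city, as a first step before human verification.
GAZETTE = [
    {"city": "Bauru", "date": "2025-05-02", "text": "Decreto 12.345 sobre licitação de obras"},
    {"city": "Bauru", "date": "2025-05-03", "text": "Nomeação de servidores municipais"},
    {"city": "Recife", "date": "2025-05-02", "text": "Edital de licitação para transporte"},
]

def search_gazette(keyword, city=None):
    """Case-insensitive keyword search, optionally restricted to one city."""
    hits = [r for r in GAZETTE if keyword.lower() in r["text"].lower()]
    if city:
        hits = [r for r in hits if r["city"] == city]
    return hits

print(len(search_gazette("licitação", city="Bauru")))  # 1
```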

4.1.3. Collaborative AI: The Role of Platforms

Chequeado uses its bot to establish relationships with other organizations and platforms, but this collaboration takes other forms in other fact-checkers.
Polígrafo, for example, participates in Meta’s Third-Party Fact-Checking Program (3PFC), but the end of this program is likely to have a negative impact on the reach of the checks, which, according to Polígrafo’s director, Fernando Esteves, is one of the initiative’s assets for penetrating Portuguese society. “We have a contract to check 50 pieces of disinformation a month. This content is placed in a Facebook back-office, all built on artificial intelligence that picks up the link and then acts on the original post.” The post with disinformation content is immediately flagged with a filter, which shows the check and reduces the chances of the user continuing to browse the content. “Therefore, they no longer see the image that was shared and automatically the visibility of your post is reduced by almost 100 per cent, meaning that it is almost impossible for them to see that information” (Esteves 2024).
Meta’s program uses fact-checking agencies as autonomous service providers, amplifying the unmasking of disinformation through the power of hyperdistribution (Costa 2014) that only large platforms have today. In other words, reports on topics of public interest that would otherwise struggle to reach a niche group are likely to be viewed by that group. Fernando Esteves explains that the program amplifies checks exponentially: “Imagine someone with 2 million followers on Facebook and 500,000 of them decide to share a [fake] post by that person […] What does [Meta’s AI] tool do? It doesn’t just limit access to your post: it goes after all the shares.”
From this explanation by Polígrafo’s director, one can get an idea of how damaging Meta’s decision to end its partnerships with fact-checking agencies is. He goes on to explain that, from the 50 monthly checks, there have already been cases at Polígrafo in which a single month generated half a million limitations (or warnings about the falsity of content). “Now imagine, Portugal has 10 million people. That gives you an idea of how important artificial intelligence can be in scaling up the fight against disinformation. It’s crucial, it’s nuclear, it’s the biggest fight of modern times,” he argues.
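The mechanism Esteves describes, flagging not only the original post but every reshare, can be pictured as a traversal of the share graph. The sketch below is purely illustrative and does not describe Meta’s actual system; the graph, post identifiers, and function are invented.

```python
# Purely illustrative sketch (not Meta's actual tooling): once a post is rated
# false, propagate the flag to every reshare by walking the share graph.
from collections import deque

# Hypothetical share graph: post id -> ids of posts that reshared it.
SHARES = {"p1": ["p2", "p3"], "p2": ["p4"], "p3": [], "p4": []}

def flag_cascade(root):
    """Return all posts reached from the flagged original, including reshares of reshares."""
    flagged, queue = {root}, deque([root])
    while queue:
        post = queue.popleft()
        for child in SHARES.get(post, []):
            if child not in flagged:
                flagged.add(child)
                queue.append(child)
    return flagged

print(sorted(flag_cascade("p1")))  # ['p1', 'p2', 'p3', 'p4']
```

This is why a single check can translate into hundreds of thousands of limitations: the flag follows the full cascade of shares, not just the original post.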
Maldita has participated in programs to implement AI in a more robust way, in partnership with innovation labs, such as the project funded by the Harnessing AI for Truth program, run by the Leap innovation lab at the International Center for Journalists. “[…] we are working on developing a feature for our chatbot, based on generative AI. We want our users to be able to send us questions about climate issues and have our chatbot respond by generating text in response, using our articles” (Escayola 2024).

4.1.4. Supporting Human Fact-Checking

In the case of Aletheia, a non-governmental organization run by volunteers, artificial intelligence was created to facilitate fact-checking processes, i.e., as a support tool. The NGO aims to democratize fact-checking: its mission is to educate about media consumption and to promote training, carried out in partnership with Brazilian and Portuguese universities, so that anyone can carry out a full fact-check using Aletheia’s technological platform. “We’ve made some important partnerships, such as joining the Supreme Court’s program to combat disinformation. This happened because of our visibility with federal universities, and we offered an open course on democratizing fact-checking via the Supreme Court for the entire national territory” (Volcean 2025).
Using a chatbot, the checker (not necessarily a journalist) can directly search for information that has been published in the Diário Oficial (Official Gazette) of various cities across the country. “We have integrated with the Querido Diário platform, which is run by Open Knowledge Brasil. The checker’s productivity is extremely high using AI. In the end, it will always be up to the checker to decide [on the final labelling of the investigation].” (Santos 2025, n.p.).
In the opinion of this software developer, it is essential that fact-checking should not be completely automated. “It always has to be a human in the loop process, which is the human being making the decision, adjusting and approving, before following through. It cannot be any different” (Santos 2025).
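The “human in the loop” workflow Santos describes can be sketched as a simple review gate: the AI proposes a draft verdict, but nothing is published until a human reviewer decides. The data structure, labels, and field names below are invented for illustration.

```python
# Sketch of a human-in-the-loop review gate: AI drafts, a human decides.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Check:
    claim: str
    ai_draft_verdict: str               # AI suggestion only, never published as-is
    human_verdict: Optional[str] = None
    published: bool = False

def human_review(check: Check, verdict: str) -> Check:
    """The human decision overrides the AI draft and gates publication."""
    check.human_verdict = verdict
    check.published = True
    return check

c = Check(claim="City X built 300 schools", ai_draft_verdict="false")
assert not c.published                  # AI output alone never publishes
human_review(c, "misleading")
print(c.human_verdict, c.published)     # misleading True
```

The design choice mirrors the interview: the AI accelerates the checker’s work, but the final label is always a human decision.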
As for future projects, Aletheia plans to implement a system for monitoring disinformation using artificial intelligence.
“We believe that having a community that does this within the Aletheia platform is a way of solving this problem [of disinformation]: people contributing their expertise, bringing this information into a public, unified database. And we believe that artificial intelligence will speed up this process” (Santos 2025). This system will be quite similar to the Maldita and Chequeado chatbots, thus consolidating a model among fact-checking organizations.

4.2. Funding Models

As is the case in journalism, transparency about funding sources is a sensitive issue for fact-checking organizations, and certification by networks, such as the European Fact-Checking Standards Network (EFCSN) and the International Fact-Checking Network (IFCN), depends on it.
Among the subjects of this study, Chequeado receives funding from public calls for proposals and from companies and institutions such as Google, Luminate, and the Internet Society. In 2019, for example, it received USD 2 million, together with Full Fact and Africa Check, to implement artificial intelligence in its newsrooms, with the specific intention of speeding up fact-checking (Alencar and Aquino Bittencourt 2022).
Another funded structure is Chequeado’s AI Lab, fostered by the ENGAGE fund, granted by the IFCN as part of the Global Fact-check Fund. AI Lab is an interdisciplinary team that carries out experiments with generative AI tools for editorial production, such as publishing threads on X and using artificial intelligence to produce scripts for fact-checking videos already made by the organization. The results are always shared with the general community, including the research question, the methodology of the experiments, and the results and limitations of each tool for the assigned task.
The question of funding is a key issue for watchdog organizations in general. At Maldita and Polígrafo, there are professionals whose sole job is to raise funds from national and international calls for proposals, as well as from funds that promote actions aimed at education and social improvements. Both receive such funding and are transparent about it in publicly available reports.
Maldita and Chequeado, in addition to funding and calls for proposals, develop technologies that use artificial intelligence, and these, by way of subscription, also represent an asset for them.
Finally, all of them accept donations from individuals, something that is central to the Brazilian NGO Aletheia. The funding of fact-checking agencies is directly related to the range of actions they can develop with society: the more diversified the sources of funding (bringing in more money), the more activities are developed beyond fact-checking itself, such as media education programs, training, and face-to-face actions. This creates a circle that is at once virtuous and vicious. If there is sufficient funding and transparency about its origin (not being funded by governments or private corporations), activities become more diversified and teams larger, which is a requirement for joining associations such as the IFCN. Otherwise, the limitations leave fact-checking organizations on the margins of such international quality seals, in turn making it harder to raise funds later on.

5. Conclusions

This article sought to understand how Ibero-American fact-checking organizations use artificial intelligence in the fight against disinformation. Since the forms of use are not isolated from the organizations’ socio-economic context, answering the research question required analyzing the levels of use at the various stages of the process, the tools used, the role of humans in the process, and the economic models that support the work.
The study sample comprised two Iberian organizations (Portugal and Spain) and two South American organizations (Brazil and Argentina), and it was possible to trace levels of implementation according to types of use and autonomy.
What all four organizations have in common is that they use AI at the tooling level to automate processes in the verification and/or content distribution phase, using tools from large technology companies.
Two of them, Chequeado and Aletheia, also make experimental use of these tools, adapting them to the performance of specific actions of interest to them.
Finally, Maldita and Chequeado make disruptive use, materialized in the in-house development of their own tools, which perform all the functions of the previous levels while also doing predictive work. In these two cases, the level of development is on a par with other world-class initiatives, such as Full Fact or ClaimBuster, a situation related to their greater financial strength. As in the Big Tech world, greater financial leeway allows these organizations to develop their own tools, which they then market, generating revenue and strengthening their position in the fact-checker ecosystem. The alternatives are to use external technology, as in the case of Polígrafo, or to develop in-house technology using volunteers, as at Aletheia, but these processes are more expensive or slower.
The study found that chatbots are the tools most used by these organizations. The aim is to give citizens and other organizations the chance to perform their own fact-checking in a “do-it-yourself” way that involves citizens in the process. This involvement has added value: these citizens form an important network for identifying potentially disinformative content.
Despite the potential of the tools, it was found that AI is still used as a complement to human work. It is used for initial screening or to search for information on the web, which then allows humans to make the best decision about the veracity of the phrase or fact they wish to verify.
AI is, therefore, an important technological element that brings scalability and speed to the checking process and to the distribution of results. Its advantages, especially for organizations with their own development teams that are in the disruptive phase, extend to the financial side. The high monetization potential allows them to maintain a good financial situation and, consequently, greater independence from companies and governments, a crucial factor for their integration into international organizations such as the European Fact-Checking Standards Network (EFCSN) or the International Fact-Checking Network (IFCN).

6. Limitations and Future Research

This study is part of a broader project on the use of artificial intelligence in fact-checking in the Ibero-American region. Although this paper includes only four organizations from four countries—which may be considered a limitation—three of them are among the best-known in their respective countries.
It would also be valuable to incorporate direct observation of the fact-checking process, as this could raise new questions and provide deeper insights.
Another limitation is that the interview analysis was carried out without the support of any specialized software, such as Voyant, a situation that arose because the sample was small enough to allow for manual analysis.
Finally, a further limitation concerns the ethical dimension, which was not explored in this study, although the interviews include responses that could be used in future research.

Author Contributions

Conceptualization, J.C. and L.I.; methodology, L.I.; validation, J.C.; formal analysis, J.C. and L.I.; investigation, J.C. and L.I.; data curation, J.C.; writing—original draft preparation, L.I.; writing—review and editing, J.C.; supervision, J.C.; project administration, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1
Available online: https://projetocomprova.com.br/ (accessed on 23 May 2025).
2
3
4
Available online: https://poligrafo.sapo.pt/ (accessed on 23 May 2025).
5
Available online: https://maldita.es/ (accessed on 23 May 2025).
6
Available online: https://chequeado.com/ (accessed on 23 May 2025).
7
Available online: https://aletheiafact.org/ (accessed on 23 May 2025).
8
Available online: https://botalite.es/ (accessed on 23 May 2025).
9
Pablo. 2021. Inteligencia artificial para chequear más rápido y mejor. Available online: https://chequeado.com/inteligencia-artificial-para-chequear-mas-rapido-y-mejor/ (accessed on 24 May 2025).

References

  1. Alencar, Marta Thaís, and Maria Clara Aquino Bittencourt. 2022. Mercantilização da Checagem nas Agências Chequeado e Lupa na América Latina. Comunicação & Inovação 23: 3–20. [Google Scholar] [CrossRef]
  2. Alaphilippe, Alexandre, Alexis Gizikis, Clara Hanot, and Kalina Bontcheva. 2019. Automated Tackling of Disinformation–Major Challenges Ahead. European Parliament. Available online: https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2019)624278 (accessed on 12 July 2025).
  3. Alsina, Miquel Rodrigo. 1989. La Construcción de la Noticia. Barcelona: Paidós. [Google Scholar]
  4. Beckett, Charlie, and Mira Yaseen. 2023. Generating Change: A Global Survey of what News Organizations Are Doing with AI. London: The London School of Economics and Political Science. Available online: https://www.journalismai.info/research/2023-generating-change (accessed on 12 July 2025).
  5. Buckingham, David. 2023. Manifesto Pela Educação Midiática. São Paulo: Edições Sesc. [Google Scholar]
  6. Canavilhas, João. 2025. Tecnologia do Desassossego: O Jornalismo Humano Deve Sentir-se Ameaçado Pela Inteligência Artificial? (The Technology of Disquiet: Should Human Journalism Feel Threatened by Artificial Intelligence?). Comunicação E Sociedade 47: e025006. [Google Scholar] [CrossRef]
  7. Costa, Caio Túlio. 2014. Um modelo de negócio para o jornalismo digital. Revista de Jornalismo ESPM 9: 51–115. [Google Scholar]
  8. Coutinho, Clara Pereira. 2022. Metodologia de Investigação em Ciências Sociais: Teoria e Prática. Coimbra: Almedina. [Google Scholar]
  9. Dias, Tatiana. 2023. Ofensiva das big techs contra PL das Fake News expõe lobby mais poderoso do mundo. The Intercept Brasil. Available online: https://www.intercept.com.br/2023/05/08/pl-das-fake-news-big-techs-tem-maior-lobby-do-mundo/ (accessed on 24 May 2025).
  10. Diniz, Amanda Tavares de Melo. 2017. Fact-checking no ecossistema jornalístico digital: Práticas, possibilidades e legitimação. Mediapolis 5: 22–37. [Google Scholar] [CrossRef]
  11. ERC-Entidade Reguladora para a Comunicação Social. 2019. A Desinformação—Contexto Europeu e Nacional. Lisboa: ERC. [Google Scholar]
  12. Escayola, Pablo Hernández. 2024. Maldita, Madrid, Spain. Personal communication. [Google Scholar]
  13. Esteves, Fernando. 2024. Polígrafo, Lisbon, Portugal. Personal communication.
  14. Flores Vivar, Jesús Miguel. 2019. Inteligencia artificial y periodismo: Diluyendo el impacto de la desinformación y las noticias falsas a través de los bots. Doxa Comunicación 29: 197–212. [Google Scholar] [CrossRef]
  15. Full Fact. 2025. Full Fact AI. Available online: https://fullfact.org/ai/ (accessed on 14 July 2025).
  16. Gonçalves, Adriana, Luísa Torre, Florence Oliveira, and Pedro Jerónimo. 2024. AI and automation’s role in Iberian fact-checking agencies. Profesional de la Información 33: e330212. [Google Scholar] [CrossRef]
  17. Hassan, Naeemul, Gensheng Zhang, Fatma Arslan, Josue Caraballo, Damian Jimenez, Siddhant Gawsane, Shohedul Hasan, Minumol Joseph, Aaditya Kulkarni, Anil Kumar Nayak, and et al. 2017. Claimbuster: The first-ever end-to-end fact-checking system. Proceedings of the VLDB Endowment 10: 1945–48. [Google Scholar] [CrossRef]
  18. International Fact-Checking Network. 2025. The Commitments of the Code of Principles. Available online: https://ifcncodeofprinciples.poynter.org/the-commitments (accessed on 23 June 2025).
  19. Ireton, Cherilyn, and Julie Posetti. 2019. Verificação dos fatos. In Jornalismo, Fake News & desinformação: Manual Para Educação e Treinamento em Jornalismo. Edited by Claire Ireton and Jill Posetti. Paris: UNESCO, pp. 125–138. [Google Scholar]
  20. Juršėnas, Alfonsas, Kasparas Karlauskas, Eimantas Ledinauskas, Gediminas Maskeliūnas, Julius Ruseckas, and Donatas Rondomanskas. 2022. The Role of AI in the Battle Against Disinformation. Riga: NATO Strategic Communications Centre of Excellence. [Google Scholar]
  21. Kunczik, Michael. 1997. Conceitos de Jornalismo: Norte e sul- Manual de Comunicação. São Paulo: Edusp. [Google Scholar]
  22. Kuznetsova, Elizaveta, Mykola Makhortykh, Victoria Vziatysheva, Martha Stolze, Ani Baghumyan, and Aleksandra Urman. 2025. In generative AI we trust: Can chatbots effectively verify political information? Journal of Computational Social Science 8: 14–31. [Google Scholar] [CrossRef]
  23. Lim, Gionnieve, and Simon T. Perrault. 2023. Fact Checking Chatbot: A Misinformation Intervention for Instant Messaging Apps and an Analysis of Trust in the Fact Checkers. In Mobile Communication and Online Falsehoods in Asia. Mobile Communication in Asia: Local Insights, Global Implications. Edited by Carol Soon. Heidelberg: Springer. [Google Scholar] [CrossRef]
  24. Linden, Tommy Carl-Gustav. 2017. Algorithms for journalism: The future of news work. The Journal of Media Innovations 4: 60–76. [Google Scholar] [CrossRef]
  25. Luna, Ana Laura Garcia. 2024. Chequeado, Buenos Aires, Argentina. Personal communication.
  26. Lusa. n.d. O que é o projeto «Combate às fake news»/«Contra Fake». Available online: https://combatefakenews.lusa.pt/o-projeto-combate-as-fake-news-contra-fake/ (accessed on 4 July 2025).
  27. PAHO. 2020. Entenda a Infodemia e a Desinformação na Luta Contra a COVID-19. Kit de Ferramentas de Transformação Digital: Ferramentas de Conhecimento. Washington, DC: Pan American Health Organization. [Google Scholar]
  28. Pavlik, John. 2000. The Impact of Technology on Journalism. Journalism Studies 1: 229–37. [Google Scholar] [CrossRef]
  29. Pereira, Gabriela Agostinho, Isabela Afonso Portas, and Liliane de Lucena Ito. 2024. Alfabetización mediática en TikTok: Combate a la Desinformación Sobre Política en el Contexto Electoral de São Paulo en 2024. Razón y Palabra 28: 1–12. [Google Scholar] [CrossRef]
  30. Rocha, Felipe, Gustavo Ribeiro, Julia d’Agostini, Mariana Freitas, Paulo Sarmento, and Pedro Peres Cavalcante. 2023. Tendências latinoamericanas no enfrentamento à desinformação: Iniciativas na Argentina, Brasil, Chile e Colômbia. Relatório. Brasília: Laboratório de Políticas Públicas e Internet. Available online: http://www.lapin.org.br (accessed on 23 May 2025).
  31. Santos, Mateus. 2025. Aletheia, São Paulo, Brazil. Personal communication.
  32. Sodré, Muniz, and Maria Helena Ferrari. 1986. Técnica de Reportagem: Notas sobre a Narrativa Jornalística. São Paulo: Summus Editorial. [Google Scholar]
  33. Suomalainen, Kari, Nooa Nykänen, Hannele Seeck, Youna Kim, and Ella McPherson. 2025. Fact-Checking in Journalism: An Epistemological Framework. Journalism Studies 26: 1129–49. [Google Scholar] [CrossRef]
  34. Trevisan Fossá, Maria Ivete, and Kauane Andressa Müller. 2019. Crosscheck as a legitimization strategy of the journalism field in response to fake news. Brazilian Journalism Research 15: 430–51. [Google Scholar] [CrossRef]
  35. UNESCO. 2023. Diretrizes para a Governança das Plataformas Digitais. Paris: UNESCO. [Google Scholar]
  36. Van Dijck, José, Thomas Poell, and Martijn De Waal. 2018. The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press. [Google Scholar]
  37. Volcean, Tamiris. 2025. Aletheia, São Paulo, Brazil. Personal communication.
  38. Wardle, Claire, and Hossein Derakhshan. 2019. Reflexão sobre a desordem da desinformação: Formatos da informação incorreta, desinformação e má informação. In Jornalismo, Fake News & Desinformação: Manual Para Educação e Treinamento em Jornalismo. Edited by Claire Ireton and Jill Posetti. Paris: UNESCO, pp. 46–58. [Google Scholar]
  39. Welter, Lahis, and João Canavilhas. 2023. La inteligencia artificial en la lucha contra la desinformación en las presidenciales brasileñas 2022: Estudio de caso con Lupa e o Aos Fatos. Miguel Hernández Communication Journal 14: 409–26. [Google Scholar] [CrossRef]
  40. Yang, Kai-Cheng, and Filippo Menczer. 2023. Large Language Models Can Rate News Outlet Credibility. Available online: https://arxiv.org/abs/2304.00228 (accessed on 30 June 2025).
Table 1. Sample.

| | Polígrafo | Maldita | Chequeado | Aletheia |
|---|---|---|---|---|
| Team | 15 | 50 | 40 | 15 volunteers |
| Software developers | No | Yes | Yes | Yes |
| Fact-checking alliances | IFCN; EFCSN | IFCN; EFCSN | IFCN; LATAMChequea | RNCD |
| Creation | 2018 | 2018 | 2010 | 2022 |
| Country | Portugal | Spain | Argentina | Brazil |
| Funding | Contest funding; advertising on the site; partnership with Sapo | Contest funding; sale of technology (chatbot); donations; partnerships | Contest funding; sale of technology (chatbot); donations; partnerships | Donations |

Source: Own elaboration.
Table 2. Levels of AI implementation.

| Features | Tooling | Experimental | Disruptive |
|---|---|---|---|
| Productive checking routines (transcriptions, translations, etc.) | Yes | Yes | Yes |
| Predictive purposes | No | No | Yes |
| The AI is acquired from third parties | Yes | Yes | No |
| Automated content publishing with final human review | No | Yes | Yes |
| AI works in content distribution on platforms | Yes | Yes | Yes |
| The AI tool was developed by an internal team | No | No | Yes |
| Intends to be scalable to other realities or spaces | No | Yes | Yes |
| Is a form of monetization | No | No | Yes |

Source: Own elaboration.
Table 3. Uses of AI × profile of the checking organization.

| | Polígrafo | Maldita | Chequeado | Aletheia |
|---|---|---|---|---|
| Current uses of AI | Assertive distribution on social media | Chatbot; monitoring and prediction of disinformation (pre-bunking) | Chatbot; monitoring and prediction of disinformation (pre-bunking); experimental automation of publications and editorial productions | Chatbot for searching the Official Gazette (Diário Oficial) |
| Levels of AI implementation | Tooling | Tooling, experimental, and disruptive | Tooling, experimental, and disruptive | Tooling and experimental |

Source: Own elaboration.

Share and Cite

MDPI and ACS Style

Canavilhas, J.; Ito, L. On Fact-Checking Service: Artificial Intelligence’s Uses in Ibero-American Fact-Checkers. Soc. Sci. 2025, 14, 514. https://doi.org/10.3390/socsci14090514


