
What If I Prefer Robot Journalists? Trust and Objectivity in the AI News Ecosystem

by Elena Yeste-Piquer *, Jaume Suau-Martínez, Marçal Sintes-Olivella and Enric Xicoy-Comas
Blanquerna School of Communication and International Relations, Ramon Llull University, 08001 Barcelona, Spain
* Author to whom correspondence should be addressed.
Journal. Media 2025, 6(2), 51; https://doi.org/10.3390/journalmedia6020051
Submission received: 17 February 2025 / Revised: 24 March 2025 / Accepted: 25 March 2025 / Published: 1 April 2025

Abstract

The use of artificial intelligence (AI) in journalism has transformed the sector, with media outlets generating content automatically, without journalists’ involvement, and various media companies implementing AI solutions. Some research suggests AI-authored articles are perceived as equally credible as human-written content, while other studies raise concerns about misinformation and the erosion of trust. Most studies focus on journalists’ views, with audience attitudes explored mainly through quantitative methods, and there is no consensus regarding the acceptability of AI use by news organizations. We explore AI’s role in journalism through audience research, conducting five focus groups to understand public perceptions. The findings highlight concerns about AI-generated content, particularly its potential errors, opacity, and coldness. Such information is perceived as somewhat less valuable, being viewed as more automated and requiring less human effort. These concerns coexist with a view of AI content as more objective, unbiased, and closer to the ideal of independence from political and economic pressures. Nevertheless, citizens with more AI knowledge question the neutrality of automated content, suspecting biases stemming from corporate interests or from the journalists writing the prompts.

1. Introduction

Journalists have expressed concerns about AI’s potential to erode trust and spread misinformation (Cools & Diakopoulos, 2024; Van Dalen, 2024), with calls for external regulation to preserve journalistic ethics (Forja-Pena et al., 2024). Interestingly, consumers with low trust in the media may be less averse to AI-generated content (Kolo et al., 2022). Overall, the integration of AI in journalism presents complex challenges for maintaining audience trust, highlighting the broader issue of how trust in journalism is shaped by factors such as transparency, credibility, and evolving audience expectations (Lermann Henestrosa & Kimmerle, 2024). Our paper addresses these concerns from an audience perspective, attempting to shed some light on how AI content relates to broader issues of trust in news media and journalism.
Trust in journalism and the media, much like trust between individuals, is built on accumulated experience. At the same time, it projects into the future as an expectation (Van Dalen, 2019), inevitably involving some degree of uncertainty and risk, as this trust may ultimately be disappointed or betrayed by its trustee. The concept of trust is closely linked to credibility and reputation, despite being distinct notions—etymologically, both “trust” and “confidence” stem from the idea of faith. Citizens trust journalism and the media as “a trustworthy source of information that serves the public interest” (Singer, 2006, p. 2), expecting them to provide truthful information, as they themselves cannot always seek out or verify that information (Tsfati & Cohen, 2005). Beyond that, audiences rely on the media to evaluate and prioritize news, guiding them toward the most relevant issues (McCombs, 2004). Content from trusted sources influences us more strongly, while we tend to apply less critical scrutiny to it. Public trust in journalism and the media is essential for a functioning democratic system. This institutional trust—distinct yet parallel to trust in specific media outlets—legitimizes journalism’s role in society and is a necessary condition for the press to fulfill its democratic function (Gurevich & Blumler, 1990; Kieran, 1999; Entman, 2005; Kovach & Rosenstiel, 2003; McQuail, 2003; Ryan, 2001; Schudson, 2008).
Various studies have shown a significant decline in trust in journalism and media organizations (Chang & Tang, 2023; Hanitzsch et al., 2018; Tsfati & Ariely, 2014). However, this trend is not universal, but is observed only in certain societies, with the United States standing out as a key example (Gronke & Cook, 2007; Jones, 2004; Ladd, 2012). Moreover, research indicates a strong correlation between distrust in journalism and broader public distrust in institutions and democracy (Ariely, 2015; Cappella & Jamieson, 1997; Chang & Tang, 2023; Hanitzsch et al., 2018; Jones, 2004). As Cappella puts it, “Journalists have felt the spillover of the decline in confidence in social and political institutions” (Cappella, 2002, p. 231). The increasing political polarization in Western societies, along with the rise of populist movements—whether from the far right or far left—whose defining characteristic often includes attacks on “elites” or “the establishment” (Casero-Ripollés et al., 2017; Mazzoleni, 2014; Moffitt & Tormey, 2014; Mudde & Rovira Kaltwasser, 2013), further exacerbates this trend. As these phenomena erode trust in institutions and democracy, they also contribute to a decline in public trust in journalism. As Dahlgren stated, “in the present tumultuous juncture of Western democracies, dominated by the populist revolt, traditional distrust of media has turned into an assault on basic Enlightenment premises, eroding shared understandings of reality and compatible discourse. ‘Knowledge’ becomes legitimated via emotionality” (Dahlgren, 2018, p. 20). However, the discourse surrounding the decline of trust in journalism often assumes that this trend is inherently negative. This perspective overlooks the possibility that decreasing trust in traditional media may signal a shift toward a more critically engaged public rather than outright media rejection. As skepticism increases, audiences may develop more selective and discerning media consumption habits, potentially leading to a demand for higher journalistic standards and more transparent reporting, as well as interest in alternative news sources (Newman et al., 2024).
The perception of bias in journalistic content, whether due to political preferences or economic–commercial interests, is another factor contributing to the erosion of trust in journalism and the media—an issue that has reportedly intensified during what has been termed The Third Age of Political Communication (Blumler & Kavanagh, 1999). Numerous studies have examined how audiences evaluate different types of media bias, particularly in the context of controversial or polarizing issues (Elejalde et al., 2018; Tsfati et al., 2022). There is a strong consensus that perceptions of bias are significantly influenced by the ideological position of the audience consuming and assessing the news (Eberl, 2019; Soontjens & Van Erkel, 2020). In addition, certain journalistic practices and behaviors have further distanced audiences from the media, such as the tendency to cover political news through a purely strategic or cynical lens (strategy coverage) (Cappella & Jamieson, 1997; Sabato, 1991; Patterson, 1993), as well as high-profile scandals involving journalists and media organizations. Notable examples include the fabrication cases of Stephen Glass at Rolling Stone and The New Republic, Jayson Blair at The New York Times, and the illegal phone-hacking scandal involving News of the World in the UK.
As public trust in journalism continues to erode due to political polarization, perceived bias, and ethical scandals (Newman et al., 2024), emerging technologies like AI-generated journalism present both challenges and opportunities in shaping audience perceptions, as reflected in academic research (Parratt-Fernández et al., 2021).
Research indicates that some audiences may perceive AI-generated news as more objective and independent from human biases, political agendas, and corporate interests, although the results are sometimes divergent and non-conclusive (Forja-Pena et al., 2024; Hewapathirana & Perera, 2024; Hofeditz et al., 2021; Moravec et al., 2024; Yang et al., 2023). At the same time, concerns persist regarding the transparency, accountability, and potential biases embedded in automated content, particularly among those with a deeper understanding of AI’s limitations (Zhang & Dafoe, 2020; Binns et al., 2018). This duality, as well as the lack of conclusive findings, underscores the need to examine how audiences reconcile concerns about AI-generated content with the perception that it may offer a more neutral alternative to traditional journalism.
The integration of artificial intelligence (AI) into journalism has transformed news production, distribution, and consumption, creating both opportunities and challenges. AI-driven automation has enhanced efficiency in newsrooms, allowing for the rapid generation of financial reports, sports summaries, and other data-driven stories (Sonni et al., 2024). While AI increases productivity, studies indicate that its impact extends beyond automation, shaping investigative journalism, news personalization, and audience engagement (Parratt-Fernández et al., 2021). The rise of hybrid “journalist–programmer” roles suggests that AI is not simply replacing journalists but altering their responsibilities, requiring new skills in data analysis and computational thinking (Ioscote et al., 2024). Furthermore, the widespread use of AI in content recommendation systems raises concerns about algorithmic biases, filter bubbles, and the amplification of misinformation (Trejos-Gil & Gómez-Monsalve, 2024). The implementation of AI also varies globally, with North America and Europe leading in adoption and Latin America and Africa facing infrastructural and regulatory challenges (Calvo-Rubio & Ufarte-Ruiz, 2021). Privacy issues related to data collection and personalization also require regulatory scrutiny, as AI-driven journalism increasingly relies on user data to tailor content (Trejos-Gil & Gómez-Monsalve, 2024).
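To give a concrete sense of the kind of data-driven story generation mentioned above, the following sketch turns a structured record (here, a hypothetical quarterly earnings report) into a short news brief via a fixed template. It is a minimal illustration of template-based automation under assumed field names, not the pipeline of any particular newsroom or vendor.

```python
# Minimal sketch of template-based news automation: a structured record
# (with hypothetical field names) is rendered into a short news brief.

def earnings_brief(record: dict) -> str:
    """Render a quarterly earnings record into a one-paragraph news brief."""
    change = record["revenue"] - record["revenue_prev"]
    pct = 100 * change / record["revenue_prev"]
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{record['company']} reported revenue of "
        f"{record['revenue'] / 1e6:.1f} million euros for {record['quarter']}, "
        f"which {direction} {abs(pct):.1f}% compared with the previous quarter. "
        f"Net profit was {record['profit'] / 1e6:.1f} million euros."
    )

if __name__ == "__main__":
    sample = {
        "company": "Example Corp",   # hypothetical data for illustration only
        "quarter": "Q2 2024",
        "revenue": 132_500_000,
        "revenue_prev": 120_300_000,
        "profit": 14_200_000,
    }
    print(earnings_brief(sample))
```

Real newsroom systems add verification, style rules, and editorial review on top of such templates; the example only shows why routine, data-bound stories are the first to be automated.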
Recent academic studies have examined the coexistence of concerns regarding AI-generated content with the perception of its objectivity, neutrality, and independence from political and economic pressures. The results are sometimes contradictory. Some authors find that AI authorship does not necessarily reduce perceived credibility or trustworthiness compared to human-written articles (Henestrosa et al., 2022; Toff & Simon, 2025). However, labeling content as AI-generated can decrease trustworthiness, particularly among those with higher trust in news and greater journalism knowledge (Toff & Simon, 2025). Interestingly, AI-generated credibility warnings can be as effective as, if not more effective than, those provided by human journalists in influencing readers’ assessments of article credibility (Sumpter & Neal, 2021). Factors such as users’ AI experience and the credibility of media companies positively impact trust in AI-generated content (Hofeditz et al., 2021). While transparency about AI use in journalism does not necessarily increase credibility (Hofeditz et al., 2021), disclosing the sources used to generate content can counteract negative effects on perceived trustworthiness (Toff & Simon, 2025). Furthermore, research indicates that news attributed to both AI and human journalists is perceived as less biased than news attributed solely to AI (Waddell, 2019). However, another study found no significant differences in credibility between AI-generated and human-written health news articles, with AI authors sometimes perceived as more believable (La-Rosa Barrolleta & Sandoval-Martín, 2024). Moreover, the placement of automation attribution can affect credibility, with disclosure at the end of an article being more favorable (Waddell, 2019).
Although AI-generated content might be perceived as objective and free from human biases, research indicates that these systems can inadvertently perpetuate societal biases present in their training data. For instance, Mehrabi et al. (2021) provide a comprehensive survey of bias and fairness in machine learning, highlighting various sources of bias and the challenges in mitigating them. Trhlik and Stenetorp (2024) quantify generative media bias by analyzing a corpus of real-world and AI-generated news articles, revealing significant disparities in political bias among different large language models. Furthermore, a critical examination by Bender et al. (2021) highlights that large language models, such as GPT-3, generate text based on patterns in their training data without genuine understanding, leading to outputs that may inadvertently reflect biases present in those data; this lack of true comprehension and reliance on data patterns can result in content that perpetuates existing prejudices. Similarly, Feng et al. (2023) show that AI language models can exhibit political biases, generating responses that lean towards particular ideologies depending on the prevalence of those views in their training data. This suggests that the perceived neutrality of AI-generated content can be compromised by inherent biases in the data used for training, and that AI systems can unintentionally learn and propagate societal biases, producing outputs that favor certain groups over others (Mehrabi et al., 2021). Hence, it is important to better understand whether citizens are aware of these limitations and biases, as such awareness may create skepticism regarding the purported objectivity of automated content among those with greater knowledge of how AI works.
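The bias-auditing studies cited above rely on large corpora and trained classifiers; the toy sketch below only illustrates the underlying idea of scoring the political lean of generated texts and comparing results across prompts or models. The `generate` stub and keyword lexicon are placeholders of our own, not the methods of Feng et al. (2023) or Trhlik and Stenetorp (2024).

```python
# Toy illustration of auditing generated news text for political lean.
# The lexicon and the generate() stub are stand-ins for a real language
# model and a validated bias classifier.
from collections import Counter

LEFT_TERMS = {"inequality", "climate", "welfare", "union"}
RIGHT_TERMS = {"deregulation", "border", "tax cuts", "tradition"}

def lean_score(text: str) -> int:
    """Positive = more right-leaning terms, negative = more left-leaning."""
    words = text.lower()
    left = sum(words.count(t) for t in LEFT_TERMS)
    right = sum(words.count(t) for t in RIGHT_TERMS)
    return right - left

def audit(generate, prompts, runs=20):
    """Generate several articles per prompt and accumulate lean scores."""
    scores = Counter()
    for prompt in prompts:
        for _ in range(runs):
            scores[prompt] += lean_score(generate(prompt))
    return scores

if __name__ == "__main__":
    # Stand-in generator; in a real audit this would call a language model.
    def fake_generate(prompt: str) -> str:
        return f"Report on {prompt}: debate over tax cuts and inequality."
    print(audit(fake_generate, ["the economy", "immigration"], runs=5))
```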
Our study therefore poses three research questions that connect the literature on trust in news with more recent studies on trust in AI-generated content and its potential inherent biases. Our research addresses the following:
RQ1:
How do concerns about AI-generated content relate to the belief that it is more objective, unbiased, and free from political or economic influences?
RQ2:
Do citizens with a greater understanding of AI have doubts about the objectivity or neutrality of automated content?
RQ3:
How do citizens perceive the role of corporate interests or journalist influence in shaping biases in AI-generated news?

2. Materials and Methods

This study employs focus group discussions as a qualitative research method to investigate citizens’ perceptions of artificial intelligence (AI), as well as its potential impacts and applications in the journalistic context. This approach aligns with previous studies that have demonstrated the effectiveness of focus groups as a research tool for exploring user engagement with information and news. For instance, Masip et al. (2021) used focus groups to study how WhatsApp users interact with news content. Calvo-Rubio and Rojas-Torrijos (2024) employed them to investigate the integration of quality criteria in AI-generated news articles, while Volk et al. (2024) conducted focus groups with 40 participants to understand how individuals experience information abundance.
The study established several specific objectives, organized into three thematic blocks. The first block focused on assessing participants’ knowledge and use of AI, exploring their perceptions, including associations, positive and negative attributes, and expectations, as well as examining the professional and personal applications of AI. The second block concentrated on the role of AI in journalism, investigating perceptions of its use, associated advantages and drawbacks, levels of trust in AI-generated content, and its perceived objectivity. The third block focused on strategies for detecting potential manipulation in AI-generated video content, providing insights into how participants assess the authenticity of such material. The participants were shown both deepfake videos and original content to test whether citizens could distinguish between AI-generated content and content produced by journalists. The study also aimed to identify the tools or strategies participants use to make these distinctions, as well as to determine whether these skills are innate or acquired, in a context where the ability to verify fake videos, deep fakes, and content generation is increasingly in demand, creating the need for new professional profiles to emerge (Sánchez Esparza et al., 2024). This structure allowed for a comprehensive exploration of the intersections between AI, public perception, and its implications in journalism, framed within practice theory. This approach, as a paradigm in media research, conceptualizes media not as texts or production structures, but as practices (Couldry, 2004).
Between 25 June and 4 July 2024, five focus groups were conducted, each lasting two hours, with 7 participants in one group and 8 participants in the remaining four groups. The focus groups represented a diverse range of variables. Regarding age, one group consisted of participants aged 18 to 25, two groups included individuals aged 26 to 35, and two groups involved those aged 36 to 50. Geographically, all participants were from Barcelona and its metropolitan area. Gender representation was balanced, with 50% women and 50% men. A minimum of 50% of the participants were regular consumers of information through direct means (such as newspapers, TV, and radio) or via social media. In terms of technology use, two groups included individuals with low to medium use of AI tools who had occasional experience using them, including 50% of the 18 to 25 age group. The other two groups included participants with medium to high use, employing a variety of AI tools daily or almost daily in both personal and professional contexts. They also represented 50% of the 18 to 25 age group. All participants were frequent technology users with active social media profiles, and the sample included individuals with diverse educational backgrounds.
Participants were selected according to the aforementioned criteria from a panel provided by GESOP (Gabinet d’Estudis Socials i Opinió Pública), a market research firm based in Barcelona. All sessions were both visually and audibly recorded, with the informed consent of all participants. The groups were moderated by a researcher expert in qualitative research and were transcribed in full for subsequent content analysis (Bardin, 1977; Berelson, 1952; Krippendorff, 2004; Weber, 1990).
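As a minimal illustration of how coded transcripts can be summarized after manual content analysis of the kind cited above, the sketch below tallies theme frequencies per focus group from a hypothetical CSV of coded excerpts. The file name and column names are assumptions for illustration, not the authors’ actual coding scheme.

```python
# Minimal sketch: tally manually assigned content-analysis codes per focus group.
# "coded_excerpts.csv" and its columns ("group", "code") are hypothetical.
import csv
from collections import defaultdict, Counter

def tally_codes(path: str) -> dict:
    """Return, for each focus group, a Counter of how often each code appears."""
    tallies = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tallies[row["group"]][row["code"]] += 1
    return tallies

if __name__ == "__main__":
    for group, counts in tally_codes("coded_excerpts.csv").items():
        print(group, counts.most_common(5))
```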

3. Results

3.1. How Do Concerns About AI-Generated Content Relate to the Belief That It Is More Objective, Unbiased, and Free from Political or Economic Influences?

There is an initial consensus that AI appears to be objective and impartial because it lacks the capacity to form opinions. This assumption is based on the idea that, as a machine, AI can remain free from the biases and political or economic pressures that influence humans. However, this perception is challenged by at least three factors, suggesting that AI can still generate opinions. Therefore, its objectivity is conditional (see Table 1): “If you only feed it information from a particular ideology or opinion, AI will adopt that opinion. The advantage of AI, supposedly, is that it has access to all information, so it can consider multiple perspectives and strive to be as objective as possible” (18–25, medium to high use).
The first limiting factor is corporate or business interests. AI is predominantly developed and controlled by private companies whose lack of transparency generates distrust among participants. As one respondent explained (see Table 1), “I don’t know what algorithms are behind it. I don’t fully control it, nor am I the one creating it—it is a complex thing I will never fully understand” (26–35, medium to low use). These companies have their own interests, which can influence the programming of AI. As another participant noted (see Table 1), “Behind this, you know there is a company, a media outlet leaning one way or another, so you have to be aware that this AI is not delivering absolute information” (26–35, medium to high use).
The second factor is biases in programming, which can stem not only from corporate interests but also from the programmers themselves, whose perspectives may influence AI’s development. As one respondent reflected (see Table 1), “Behind AI, there’s someone programming it, controlling it, and deciding what information it is given” (36–50, medium to high use).
The third factor is the opacity of AI learning processes. There is a lack of transparency regarding how AI selects and processes information and how it learns. This opacity fosters distrust, as one participant observed (see Table 1): “I would trust it more if it were truly an independent machine that learns and thinks on its own, rather than being controlled by a person whose interests are always at play” (36–50, medium to high use).
In addition to concerns about neutrality, there is also the unease that AI can make mistakes and unintentionally generate false information, which could have negative consequences. However, there is also optimism that AI can improve its accuracy through learning.
In summary, while AI is perceived as a more objective source of information, its inherent biases and lack of transparency limit this perception and raise concerns about its potential for misinformation. This complex coexistence of perceived neutrality and doubt makes AI a subject of debate, with participants calling for greater regulation to ensure its responsible use.

3.2. Do Citizens with a Greater Understanding of AI Have Doubts About the Objectivity or Neutrality of Automated Content?

Citizens with a more intensive use of AI, categorized in this study as Medium–High Use, demonstrate a greater awareness of the factors that can compromise the neutrality of AI-generated content. Their familiarity with AI’s capabilities and limitations leads them to question the notion of absolute objectivity. This concern is encapsulated in the statement (see Table 2): “I fear the global manipulation that could occur if it’s used with bad intentions” (26–35, medium to high use).
These participants emphasize the need for human verification of information. For now, AI cannot be solely relied upon, especially since errors need to be corrected based on experience. The participants highlighted the significant role prompting plays in interacting with AI.
They have also directly experienced errors and the generation of false information by AI. As one participant stated (see Table 2), “I lack confidence. When I ask for things, I try to verify them. Since I didn’t create it myself, I can’t trust it 100%” (26–35, medium to high use). This direct experience reinforces their skepticism about AI’s reliability.
As mentioned earlier, participants recognize the importance of prior programming and data collection in ensuring AI’s reliability. They understand that AI does not “think” independently but replicates patterns and biases present in the data with which it was trained.
Regarding strategies to detect AI-manipulated videos, people generally focus on potential irregularities in form, such as editing and coordination between sound and image, and the coherence of the content. Additionally, the context helps assess the credibility of the content, taking into account factors like the source, while comments, if available, often help identify fake content.
The most common videos received are humorous ones (memes), which are often not cross-checked. Content is only verified when it generates impact or interest and raises doubts about its truthfulness. The most frequently used strategies include searching on Google, checking social media (via influencers or hashtags), and consulting trusted media outlets. If the news cannot be verified, it is often considered false.
Nonetheless, AI can serve as a starting point for creating informative content, with the human touch needed to validate and enhance it by adding experience, sensitivity, and critical analysis—qualities considered essential to ensuring journalistic quality and reliability. As one participant concluded (see Table 2), “If something matters to you, you’ll work on it; you won’t leave it to AI to do it” (26–35, medium to high use).

3.3. How Do Citizens Perceive the Role of Corporate Interests or Journalist Influence in Shaping Biases in AI-Generated News?

The participants often attributed the potential biases in AI-generated news to corporate interests. When discussing the influence and power of AI, as well as the structure of its database, one participant (see Table 3) highlighted the importance of considering “who has filled these information sources, who selected them, and who constructed the data on which AI relies” (26–35, medium to high use). The participants also pointed out that, as of now, this information is privately controlled with very little regulation, which creates significant uncertainty. There is a sense that the responsibility for an AI-generated news story is more diluted than for one created by a journalist. In any case, the ultimate responsibility for an AI-generated news story is generally considered to rest with the media outlet that published it.
As one participant noted (see Table 3), “At the moment, there is no fully autonomous artificial intelligence, so there is always a person behind it. Therefore, this debate is essentially about the question: What do you think is better—a journalist, or an AI with a person behind it? Or a journalist who uses an AI generated by another person or group of people?” (26–35, medium to high use).
Journalists are also perceived to be able to influence how AI is used to generate news. The combination of AI (for certain functions and types of information, information seeking, data analysis, etc.) with the journalist’s work (reviewing, prompting, and bringing personality and connection) is considered optimal. The brand of the media outlet, particularly in the written press, plays a crucial role as a generator of trust and credibility, or the lack thereof. One participant specifically remarked (see Table 3), “It is important to understand the point of view, and with AI, this perspective is increasingly being lost” (26–35, medium to high use).
The participants from the five focus groups identified several ways in which AI could contribute to journalism. Primarily, it could assist with repetitive tasks and data analysis. For instance, AI can facilitate faster and more efficient translation and transcription, freeing journalists from tedious work and allowing them to focus more on investigation and analysis. AI could also process large datasets for historical analysis, which would be especially useful in investigating complex topics such as economics or science. Furthermore, AI can generate reports and summaries based on structured data, which could aid live event coverage.
At a secondary level, AI could produce brief and straightforward news pieces based on press releases or other structured sources. However, the participants emphasized the need for verification. AI can also adapt journalistic material for dissemination on social media, helping to expand the reach of news and connect with broader audiences. In these instances, the participants noted that the absence of a human touch might not be as critical since, as mentioned by one participant (see Table 3), “I rarely pay attention to who signs a news article” (36–50, medium to high use).
AI can quickly analyze vast amounts of data, aiding in identifying current trends, verifying the accuracy of information, and locating reliable sources. This functionality could significantly contribute to combating misinformation.
Despite these advantages, participants stressed the necessity of human oversight, particularly before publication. “It is not the same as a journalist, a robot journalist is created by a certain company in a specific way. How does it say it? When does it say it? At what moment? For me, the fact that we, as humans, have to put a face to something has a big influence. For example, you are not going to trust an AI robot delivering a news story in the same way you would trust a journalist who has spent their entire career, who has a background, ideologies, and values…” (26–35, medium to high use), explains one participant (see Table 3). Human intervention is essential to ensuring quality, accuracy, and context. Beyond verification, the journalist’s role is to humanize content, as (see Table 3) “without the human touch, it is less engaging” (26–35, medium to low use).
Ultimately, if not used responsibly, AI risks undermining the very goals of journalism by producing misinformation, eroding credibility, and perpetuating opacity and inaccuracies. For in-depth topics and journalistic content, participants agreed that, at least for now, AI cannot replace humans. As one participant summarized (see Table 3), “An AI has no emotions. While journalism depends on the outlet and its ideology, a story told by a person is more engaging than one told by an AI” (18–25, medium to high use).
Finally, the participants consider it essential to regulate the use of AI in journalism to maintain transparency and minimize biases. First, there is an emphasis on the need to indicate when content, particularly audio and video, has been created using AI, as is already done on platforms like TikTok. Second, in the case of print media, there is no clear consensus on when AI involvement should be indicated. However, there is general agreement that such a mention is necessary when the content is entirely or largely generated by AI. The participants (see Table 3) also emphasize that “the origin of all the information should be transparent. It should be very clear who is behind it, who is working on these AIs, and who has influence over them—absolutely all the people who hold economic power over them” (26–35, medium to high use).

4. Discussion and Conclusions

Our findings, while subject to the usual limitations of focus group research, reveal a complex and evolving understanding of AI-generated journalism, where concerns about its potential biases coexist with the perception that it may be more objective than human journalism. This dual perception aligns with previous studies indicating that AI-generated news is often viewed as free from human biases, political agendas, and corporate influence. The participants in our study initially assumed that AI, as a machine, lacks opinions and can therefore maintain neutrality. However, our research challenges the assumption that AI is inherently objective, revealing that its neutrality is conditional and contingent on external factors: those with greater AI literacy recognized the limitations of this assumption. The awareness that AI models are trained on human-generated data and are influenced by corporate interests leads to skepticism, particularly among participants with higher knowledge of AI. This supports prior findings that AI-generated content is not inherently free from bias; rather, biases can stem from the data it is trained on, the algorithms used to process information, and the objectives of the corporations that develop these technologies (Mehrabi et al., 2021).
Moreover, the concern that AI might unintentionally perpetuate misinformation or be manipulated for corporate or political interests echoes previous research highlighting how large language models can exhibit biases depending on their training data and programming choices (Feng et al., 2023; Trhlik & Stenetorp, 2024). This finding reframes the discussion on bias in journalism, suggesting that while AI may be seen as an alternative to traditional reporting, it does not eliminate bias but rather shifts the point of distrust from journalists to programmers, corporate entities, and opaque training datasets.
A particularly innovative contribution of our study is the finding that greater familiarity with AI correlates with higher skepticism about its neutrality. The participants categorized as medium to high users of AI were significantly more critical of its objectivity, emphasizing their awareness of how AI systems are shaped by human decisions regarding programming, data selection, and algorithmic design. This insight reinforces prior research showing that individuals with more knowledge of AI are more likely to recognize its limitations, particularly its tendency to reflect biases embedded in its training data (Bender et al., 2021; Mehrabi et al., 2021). Our findings go a step further, demonstrating that trust in AI-generated journalism is not static, but rather shaped by audience expertise, contradicting assumptions that AI’s perceived neutrality makes it universally trusted. Instead, users who actively engage with AI are more aware of its potential for manipulation, misinformation, and the influence of corporate interests—echoing concerns raised in previous studies about how media organizations may shape AI outputs to align with economic or political objectives (Trejos-Gil & Gómez-Monsalve, 2024).
Another important finding is that the participants did not view AI as a complete replacement for human journalism, but rather as a tool best suited for automating routine tasks while requiring human oversight for credibility, verification, and emotional engagement. This perspective aligns with, but also refines, previous literature on AI in journalism, which suggests that AI can enhance efficiency but still lacks the nuance, interpretive abilities, and ethical judgment of human journalists (Henestrosa et al., 2022; Toff & Simon, 2025). For journalists, our findings underscore the importance of maintaining editorial oversight over AI-generated content. AI can be a valuable tool for automating routine tasks such as data-driven reporting, fact-checking, and summarization, but citizens are aware that human journalists remain essential for verification, interpretation, and ethical decision-making. Hence, our findings suggest that trust in AI journalism is not an all-or-nothing concept, but rather a layered process, where AI can serve as a useful aid, while final verification and interpretation must remain human-led.
Finally, our study also presents relevant insight for policymakers. It highlights the urgent need for transparency and regulation in AI-generated journalism. Many participants expressed concerns about the opacity of AI systems, calling for clearer disclosures about AI’s role in news production—an issue emphasized in prior research (Sumpter & Neal, 2021; Toff & Simon, 2025). Our findings suggest that trust in AI journalism will depend not just on the technology itself but also on how transparently it is implemented and communicated to audiences. By demonstrating that skepticism about AI’s neutrality is closely tied to users’ understanding of its mechanisms, our research underscores the importance of AI literacy and regulatory frameworks to ensure that AI-generated journalism is used ethically and responsibly. Given the potential risks of AI perpetuating misinformation or being manipulated for corporate or political ends, regulatory frameworks should prioritize transparency in AI training data, algorithmic decision-making, and disclosure practices. Media organizations and technology companies should work collaboratively to establish best practices that enhance public trust and accountability.
While our study provides valuable insights into audience perceptions of AI-generated journalism, several questions remain open for future research. First, further studies should explore how different demographic groups—including journalists, policymakers, and general audiences with varying levels of AI literacy—perceive AI-generated journalism. Investigating how trust in AI journalism varies across cultural and political contexts would also yield valuable insights into the global implications of AI in news production. Second, future research should examine how different AI-generated journalistic formats (e.g., breaking news, investigative reporting, opinion pieces) impact audience perceptions and trust. As AI technology continues to evolve, longitudinal studies could track changes in public attitudes over time, particularly as regulatory frameworks and AI literacy efforts advance. Finally, interdisciplinary research integrating media studies, artificial intelligence, and policy analysis could provide deeper insights into how AI-generated journalism can be implemented ethically and responsibly. As AI continues to shape the media landscape, fostering collaboration between journalists, technologists, and policymakers will be essential to ensuring that AI enhances, rather than undermines, the quality and integrity of news reporting.

Author Contributions

Conceptualization, E.Y.-P. and J.S.-M.; Methodology, E.Y.-P.; Validation, E.Y.-P. and J.S.-M.; Formal analysis, E.Y.-P. and E.X.-C.; Writing—original draft, E.Y.-P., J.S.-M., M.S.-O. and E.X.-C.; Writing—review & editing, E.Y.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This article is part of the research project “Impact of artificial intelligence and algorithms on online media, journalists and audiences” (PID2022-138391OB-I00) funded by the Spanish Ministry of Science, Innovation and Universities and by the European Commission NextGeneration EU/PRTR.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the study protocol was approved by the Ethics Committee of Ramon Llull University (CER code: FCRI0005/25) on 27 February 2025.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets used in this article are available upon request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ariely, G. (2015). Trusting the press and political trust: A conditional relationship. Journal of Elections, Public Opinion and Parties, 25(3), 351–367. [Google Scholar] [CrossRef]
  2. Bardin, L. (1977). L’analyse de contenu [Content analysis]. Presses Universitaires de France. [Google Scholar]
  3. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March 3–10). On the dangers of stochastic parrots: Can language models be too big? 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623), Virtual Event. [Google Scholar] [CrossRef]
  4. Berelson, B. (1952). Content analysis in communication research. Free Press. [Google Scholar]
  5. Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2018, April 21–26). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. [Google Scholar]
  6. Blumler, J. G., & Kavanagh, D. (1999). The third age of political communication: Influences and features. Political Communication, 16(3), 209–230. [Google Scholar] [CrossRef]
  7. Calvo-Rubio, L. M., & Rojas-Torrijos, J. L. (2024). Criteria for journalistic quality in the use of artificial intelligence. Communication & Society, 37(2), 247–259. [Google Scholar] [CrossRef]
  8. Calvo-Rubio, L.-M., & Ufarte-Ruiz, M.-J. (2021). Artificial intelligence and journalism: Systematic review of scientific production in Web of Science and Scopus (2008–2019). Communication & Society, 34(2), 159–176. [Google Scholar] [CrossRef]
  9. Cappella, J. N. (2002). Cynicism and social trust in the new media environment. Journal of Communication, 52(1), 229–241. [Google Scholar]
  10. Cappella, J. N., & Jamieson, K. H. (1997). The spiral of cynicism: The press and the public good. Oxford University Press. [Google Scholar]
  11. Casero-Ripollés, A., Sintes-Olivella, M., & Franch, P. (2017). The populist communication style in action: Podemos’s issues and functions on Twitter during the 2016 Spanish general election. American Behavioral Scientist, 61(9), 986–1001. [Google Scholar] [CrossRef]
  12. Chang, A.-h., & Tang, Y.-C. (2023). The political foundation of mainstream media trust in East and Southeast Asia: A cross-national analysis. Asian Politics and Policy, 15(4), 585–604. [Google Scholar] [CrossRef]
  13. Cools, H., & Diakopoulos, N. (2024). Uses of generative AI in the newsroom: Mapping journalists’ perceptions of perils and possibilities. Journalism Practice, 1–19. [Google Scholar] [CrossRef]
  14. Couldry, N. (2004). Theorising media as practice. Social Semiotics, 14(2), 115–132. [Google Scholar] [CrossRef]
  15. Dahlgren, P. (2018). Media, knowledge and trust: The deepening epistemic crisis of democracy. Javnost—The Public, 25(1–2), 20–27. [Google Scholar] [CrossRef]
  16. Eberl, J. (2019). Lying press: Three levels of perceived media bias and their relationship with political preferences. Communications, 44(1), 5–32. [Google Scholar] [CrossRef]
  17. Elejalde, E., Ferres, L., & Herder, E. (2018). On the nature of real and perceived bias in the mainstream media. PLoS ONE, 13(3), e0193765. [Google Scholar] [CrossRef]
  18. Entman, R. M. (2005). The nature and sources of news. In K. Jamieson, & G. Overholser (Eds.), Institutions of American democracy: The press (pp. 48–65). Oxford University Press. [Google Scholar]
  19. Feng, S., Park, C. Y., Liu, Y., & Tsvetkov, Y. (2023, July 9–14). From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models. 61st Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers, pp. 11737–11762), Toronto, ON, Canada. [Google Scholar] [CrossRef]
  20. Forja-Pena, T., García-Orosa, B., & López-García, X. (2024). The ethical revolution: Challenges and reflections in the face of the integration of artificial intelligence in digital journalism. Communication & Society, 37(3), 237–254. [Google Scholar] [CrossRef]
  21. Gronke, P., & Cook, T. E. (2007). Disdaining the media: The American public’s changing attitudes toward the news. Political Communication, 24(3), 259–281. [Google Scholar] [CrossRef]
  22. Gurevich, M., & Blumler, J. G. (1990). Political communication systems and democratic values. In J. Lichtenberg (Ed.), Democracy and the mass media (pp. 269–289). Cambridge University Press. [Google Scholar]
  23. Hanitzsch, T., Van Dalen, A., & Steindl, N. (2018). Caught in the nexus: A comparative and longitudinal analysis of public trust in the press. The International Journal of Press/Politics, 23(1), 3–23. [Google Scholar] [CrossRef]
  24. Henestrosa, M., Sádaba, C., & García-Avilés, J. A. (2022). The credibility of AI-generated news: An exploration of audience perceptions. Computers in Human Behavior, 135, 107367. [Google Scholar] [CrossRef]
  25. Hewapathirana, I., & Perera, N. (2024). Navigating the age of AI influence: A systematic literature review of trust, engagement, efficacy, and ethical concerns of virtual influencers in social media. Journal of Infrastructure Policy and Development, 8, 6352. [Google Scholar] [CrossRef]
  26. Hofeditz, L., Mirbabaie, M., Holstein, J., & Stieglitz, S. (2021, June 14–16). Do you trust an AI-Journalist? A credibility analysis of news content with ai-authorship. 29th European Conference on Information Systems, Marrakech, Morocco. [Google Scholar]
  27. Ioscote, F., Gonçalves, A., & Quadros, C. (2024). Artificial intelligence in journalism: A ten-year retrospective of scientific articles (2014–2023). Journalism and Media, 5(3), 873–891. [Google Scholar] [CrossRef]
  28. Jones, D. A. (2004). Why Americans don’t trust the media: A preliminary analysis. Harvard International Journal of Press/Politics, 9(2), 60–75. [Google Scholar] [CrossRef]
  29. Kieran, M. (1999). Media ethics: A philosophical approach. Philosophical Quarterly, 49(197), 558–560. [Google Scholar]
  30. Kolo, C., Mütterlein, J., & Schmid, S. A. (2022, January 4–7). Believing journalists, AI, or fake news: The role of trust in media. 55th Hawaii International Conference on System Sciences, Maui, HI, USA. Available online: https://scholarspace.manoa.hawaii.edu/bitstreams/0422f555-fb62-418b-850b-b81519f73dad/download (accessed on 30 March 2025).
  31. Kovach, B., & Rosenstiel, T. (2003). The elements of journalism: What newspeople should know and the public should expect. Crown Publishers. [Google Scholar]
  32. Krippendorff, K. (2004). Content analysis: An introduction to its methodology. Sage. [Google Scholar]
  33. Ladd, J. M. (2012). Why Americans hate the media and how it matters. Princeton University Press. [Google Scholar] [CrossRef]
  34. La-Rosa Barrolleta, L. A., & Sandoval-Martín, T. (2024). Artificial intelligence versus journalists: The quality of automated news and bias by authorship using a Turing test. Anàlisi: Quaderns de Comunicació i Cultura, 70, 15–36. [Google Scholar] [CrossRef]
  35. Lermann Henestrosa, A., & Kimmerle, J. (2024). Understanding and perception of automated text generation among the public: Two surveys with representative samples in Germany. Behavioral Sciences, 14(5), 353. [Google Scholar] [CrossRef]
  36. Masip, P., Suau, J., Ruiz-Caballero, C., Capilla, P., & Zilles, K. (2021). News engagement on closed platforms: Human factors and technological affordances influencing exposure to news on WhatsApp. Digital Journalism, 9(8), 1062–1084. [Google Scholar] [CrossRef]
  37. Mazzoleni, G. (2014). Mediatization and political populism. In F. Esser, & J. Strömbäck (Eds.), Mediatization of politics: Understanding the transformation of Western democracies (pp. 42–56). Palgrave Macmillan. [Google Scholar]
  38. McCombs, M. (2004). Setting the agenda. The mass media and public opinion. Polity Press. [Google Scholar]
  39. McQuail, D. (2003). Mass communication theory: An introduction. Sage. [Google Scholar]
  40. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. [Google Scholar] [CrossRef]
  41. Moffitt, B., & Tormey, S. (2014). Rethinking populism: Politics, mediatisation and political style. Political Studies, 62(2), 381–397. [Google Scholar] [CrossRef]
  42. Moravec, V., Hynek, N., Skare, M., Gavurova, B., & Kubak, M. (2024). Human or machine? The perception of artificial intelligence in journalism, its socio-economic conditions, and technological developments toward the digital future. Technological Forecasting and Social Change, 200, 123162. [Google Scholar] [CrossRef]
  43. Mudde, C., & Rovira Kaltwasser, C. (2013). Exclusionary vs. inclusionary populism: Comparing contemporary Europe and Latin America. Government and Opposition, 48(2), 147–174. [Google Scholar] [CrossRef]
  44. Newman, N., Fletcher, R., Robertson, C. T., Arguedas, A. R., & Kleis Nielsen, R. (Eds.). (2024). Digital news report 2024 (pp. 92–93). Reuters Institute for the Study of Journalism. [Google Scholar] [CrossRef]
  45. Parratt-Fernández, S., Mayoral-Sánchez, J., & Mera-Fernández, M. (2021). The application of artificial intelligence to journalism: An analysis of academic production. Profesional De La información, 30(3), 1–12. [Google Scholar] [CrossRef]
  46. Patterson, T. E. (1993). Out of order. Knopf. [Google Scholar]
  47. Ryan, M. (2001). Journalistic ethics, objectivity, existential journalism, standpoint epistemology, and public journalism. Journal of Mass Media Ethics, 16(1), 3–22. [Google Scholar] [CrossRef]
  48. Sabato, L. J. (1991). Feeding frenzy. How attack journalism has transformed american politics. Free Press. [Google Scholar]
  49. Sánchez Esparza, M., Palella Stracuzzi, S., & Fernández Fernández, Á. (2024). Impact of artificial intelligence on RTVE: Verification of fake videos and deepfakes, content generation, and new professional profiles. Communication & Society, 37(2), 261–277. [Google Scholar] [CrossRef]
  50. Schudson, M. (2008). Why democracies need an unlovable press. Polity. [Google Scholar]
  51. Singer, J. B. (2006). The socially responsible existentialist: A normative emphasis for journalists in a new media environment. Journalism Studies, 7(1), 2–18. [Google Scholar] [CrossRef]
  52. Sonni, A. F., Hafied, H., Irwanto, I., & Latuheru, R. (2024). Digital newsroom transformation: A systematic review of the impact of artificial intelligence on journalistic practices, news narratives, and ethical challenges. Journalism and Media, 5(4), 1554–1570. [Google Scholar] [CrossRef]
  53. Soontjens, K., & Van Erkel, P. (2020). Finding perceptions of partisan news media bias in an unlikely place. The International Journal of Press/Politics, 27, 120–137. [Google Scholar] [CrossRef]
  54. Sumpter, M., & Neal, T. (2021, June 7). User perceptions of article credibility warnings: Towards understanding the influence of journalists and AI agents. 15th International AAAI Conference on Web and Social Media Workshop: Mediate 2021: News Media and Computational Journalism, Virtual Event. Available online: https://workshop-proceedings.icwsm.org/abstract.php?id=2021_64 (accessed on 30 March 2025).
  55. Toff, B., & Simon, F. M. (2025). “Or they could just not use it?”: The dilemma of AI disclosure for audience trust in news. The International Journal of Press/Politics, 19401612241308697. [Google Scholar] [CrossRef]
  56. Trejos-Gil, C. A., & Gómez-Monsalve, W. D. (2024). Inteligencia artificial en los medios y el periodismo. Revisión sistemática sobre España y Latinoamérica en las bases de datos Scopus y Web of Science (2018–2022). Palabra Clave, 27(4), e2741. [Google Scholar] [CrossRef]
  57. Trhlik, F., & Stenetorp, P. (2024, November 12–16). Quantifying generative media bias with a corpus of real-world and generated news articles. Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 4420–4445), Miami, FL, USA. [Google Scholar] [CrossRef]
  58. Tsfati, Y., & Ariely, G. (2014). Individual and contextual correlates of trust in media across 44 countries. Communication Research, 41(6), 760–782. [Google Scholar] [CrossRef]
  59. Tsfati, Y., & Cohen, J. (2005). Democratic consequences of hostile media perceptions: The case of Gaza settlers. Harvard International Journal of Press/Politics, 10(4), 28–51. [Google Scholar] [CrossRef]
  60. Tsfati, Y., Strömbäck, J., Lindgren, E., Damstra, A., Boomgaarden, H. G., & Vliegenthart, R. (2022). Going beyond general media trust: An analysis of topical media trust, its antecedents and effects on issue (Mis)perceptions. International Journal of Public Opinion Research, 34(2), edac010. [Google Scholar] [CrossRef]
  61. Van Dalen, A. (2019). Journalism, trust, and credibility. In K. Wahl-Jorgensen, & T. Hanitzsch (Eds.), The handbook of journalism studies (pp. 356–371). Routledge. [Google Scholar] [CrossRef]
  62. Van Dalen, A. (2024). Revisiting the algorithms behind the headlines. How journalists respond to professional competition of generative AI. Journalism Practice, 1–18. [Google Scholar] [CrossRef]
  63. Volk, S. C., Schulz, A., Blassnig, S., Marschlich, S., Nguyen, M. H., & Strauß, N. (2024). Selecting, avoiding, disconnecting: A focus group study of people’s strategies for dealing with information abundance in the contexts of news, entertainment, and personal communication. Information, Communication & Society, 28, 21–40. [Google Scholar] [CrossRef]
  64. Waddell, T. F. (2019). Can an algorithm reduce the perceived bias of news? Testing the effect of machine attribution on news readers’ evaluations of bias, anthropomorphism, and credibility. Journalism & Mass Communication Quarterly, 96(1), 82–100. [Google Scholar] [CrossRef]
  65. Weber, R. P. (1990). Basic content analysis. Sage. [Google Scholar]
  66. Yang, S., Krause, N. M., Bao, L., Calice, M. N., Newman, T. P., Scheufele, D. A., Xenos, M. A., & Brossard, D. (2023). In AI we trust: The interplay of media use, political ideology, and trust in shaping emerging AI attitudes. Journalism & Mass Communication Quarterly, 10776990231190868. [Google Scholar] [CrossRef]
  67. Zhang, B., & Dafoe, A. (2020, February 7–8). U.S. public opinion on the governance of artificial intelligence. AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20) (pp. 187–193). Association for Computing Machinery, New York, NY, USA. [Google Scholar] [CrossRef]
Table 1. AI Trust Factors and User Perspectives.

Factor | Description | Participant Quote (Age, Use Level)
Corporate or Business Interests | AI is controlled by private companies. | “I don’t fully control it, nor am I the one creating it…” (26–35, medium to high use)
Programming Biases | Programmers’ perspectives shape AI, embedding human biases. | “Behind AI, there’s someone programming it…” (36–50, medium to high use)
Opacity of Learning | Lack of transparency in how AI processes data fosters distrust. | “I would trust it more if it were truly independent…” (36–50, medium to high use)
Table 2. AI Concerns: Fears and Reliability Issues.

Concern | Description | Participant Quote (Age, Use Level)
Fear of Manipulation | Worry about AI being used maliciously for global influence. | “I fear the global manipulation that could occur…” (26–35, medium to high use)
Skepticism from Experience | Users with higher AI use recognize its limitations and errors. | “I lack confidence. When I ask for things, I try to verify them…” (26–35, medium to high use)
Need for Verification | AI is not reliable alone; human oversight is needed to correct errors. | “If something matters to you, you’ll work on it…” (26–35, medium to high use)
Table 3. AI Bias Sources and User Concerns.

Source of New Bias | Description | Participant Quote (Age, Use Level)
Corporate Interests | Companies behind AI have commercial priorities that bias content. | “Who constructed the data on which AI relies?” (26–35, medium to high use, implicit)
Journalist Influence | Journalists shape AI use, affecting its neutrality. | “What do you think is better—a journalist or an AI with someone behind it?” (26–35, medium to high use)
Transparency | Content created with AI should be labeled as such. | “The origin of all the information should be transparent.” (26–35, medium to high use)