Article

Technological Culture and Politics: Artificial Intelligence as the New Frontier of Political Communication

by Daniele Battista 1 and Emiliana Mangone 2,*

1 Department of Management & Innovation Systems, University of Salerno, 84084 Fisciano, Salerno, Italy
2 Department of Political and Communication Sciences, University of Salerno, 84084 Fisciano, Salerno, Italy
* Author to whom correspondence should be addressed.
Societies 2025, 15(4), 75; https://doi.org/10.3390/soc15040075
Submission received: 8 January 2025 / Revised: 19 March 2025 / Accepted: 21 March 2025 / Published: 23 March 2025

Abstract

Rapid and significant technological advances related to artificial intelligence (AI) have generated a broad debate on their political, social, and ethical impacts, raising important questions that require multidisciplinary analysis and investigation. One of the issues under discussion is whether the integration of AI into the political context represents a promising opportunity to improve the efficiency of democratic participation and policy-making processes, as well as to increase institutional accountability. The aim of this article is to propose a theoretical reflection that allows us to fully understand the implications and potential consequences of the application of AI in the political field without neglecting its social and ethical effects: can such uses really be considered democratic, or do they represent a dangerous trend toward using algorithms for manipulative purposes? To this end, a deductive approach is adopted, based on theories, imaginaries, and expectations concerning AI in the specific context of politics. Through this type of analysis, the article contributes to the understanding of the complex dynamics related to the use of AI in politics by offering a critical perspective and a picture of their different connections.

1. Introduction

Communication is a social interaction in that it is through the act of communication that individuals take their thoughts outside their own minds and, in so doing, open themselves up to dialogue with others. It is through communication that cultural reflexivity takes place. Societies, therefore, cannot do without communication systems that perform the main functions of transmitting culture and transferring information, because the forms of interaction between individuals within them do not follow a linear logic; rather, they often take on aspects of ambiguity.
The easy availability of many new means of communication [1] has allowed communication processes to penetrate deeply into everyday subjective experience and into the images of the world that individuals possess and construct for themselves, which are then translated into the public opinion that guides attitudes. The source of information for individuals is no longer only the people with whom they interact daily in a symbolic exchange; other media become sources of information, generating a symbolic exchange that is mediated and continuous. Over the last decades, the rapid evolution of digitalization and information technologies has favored a significant transformation of communication processes and, more generally, of the methods and possibilities of accessing information, which is now always accessible and available. Platformization today makes information processes fluid, unfiltered, and homogenizing, characterized by the impossibility of recognizing the authoritativeness and reliability of news, as well as by the massive overexposure and diffusion of information and fake news. At the same time, however, the fact that the media often represent the only source of information for individuals leads to a situation of dependency, especially with regard to the ideas and images they can construct of reality, since, in this situation of pseudo-subordination, individuals have less power to select information than the power exercised by the media themselves [2]. This has led, in recent decades, to the view that studies should orient themselves predominantly toward research into effects at the level of the representations of reality that communicative events help to construct and convey.
This is even more true in the current historical phase, in which the shift from the network society [3] to the platform society [4] is taking place. The socio-cultural and socio-institutional scope and pervasiveness of this shift have been defined in a multidimensional way by the so-called “informational society”, which is mainly based on the transversal dynamic of “network logic”, a logic that becomes the vector of change in all the dimensions of social evolution. According to Castells, the term informational “indicates the attribute of a specific form of social organization in which information generation, processing, and transmission become the fundamental sources of productivity and power because of new technological conditions emerging in this historical period” [3] (p. 21, note 31) and differs from the information society, which simply refers to the importance that information has historically played in any kind of society. The fluidity of the network society makes it possible to break down and recompose the fracture between the production model and the development model, as well as to place the value of the concept of information no longer within rigid theoretical–conceptual schemes but rather within the framework of the meaning of the network and its dissolution in space.
According to van Dijck, Poell, and de Waal, it would be more appropriate to speak of a platform society since this expression “emphasizes the inextricable relation between online platforms and societal structures. Platforms do not reflect the social: they produce the social structures we live in” [4] (p. 24). Platforms have a social responsibility and cannot be studied in isolation from each other, let alone separately from the cultural and political systems of a society. Each platform constitutes a global network infrastructure that connects users, corporations, and public institutions digitally, generating a complex ecosystem in which economic logic and political dynamics are in continuous interaction. Platforms, therefore, have turned into the infrastructural architecture that determines and guides the governance of markets, societies [5,6], and, in recent years, also of politics, as they define the geopolitics of ecosystems, condition public space and the prevalence of commercial interests over social ones, and clearly affect public opinion.
This provides a useful introductory basis for understanding the current informational age, in which the unstoppable acceleration of technological production and the speed at which information is disseminated have forced the social sciences to critically confront the conceptual and argumentative debate between technological determinism and the dimensions of the social sphere. According to Vattimo [7], post-modernity can be defined as a transparent society characterized by the decisive influence of new technologies that have favored a global and widespread form of communication, modifying public space and, consequently, the concept of democracy. The mechanisms on which platforms are based, i.e., selection, datafication, and commodification, are shaping economic and technological models as well as the actions of individuals. Platforms are also transforming themselves into the main gatekeepers of data and providers of infrastructures that increasingly affect essential public sectors, with the risk of taking substantial functions away from national governments.
Lash [8] argues that the technological culture that characterizes the informational age has replaced the culture of representation that was typical of modernity. In this sense, the author contrasts two types of information. The first, influenced by post-industrial theories, inscribes information within the problem of rationality and knowledge, the latter understood as discursive rather than practical knowledge. The second type, which opposes but at the same time coexists with the first, can be defined as disinformation, at the basis of which, according to Lash [8], lies the dual need for the free and global circulation of capital as well as for the rational regularization of the market. From this perspective, therefore, the cultural value of information affirms itself on the basis of a variable, ephemeral, immediate, and confused relationship with objects and their representations. These elements characterize the current overproduction of information and represent the order that sustains the production and practice of technological culture.
This transformation process is closely linked to advances in the tools available for communication processes, a focus that has increasingly emphasized technological dynamics in recent decades. This, in the current political landscape, has resulted in the emergence of new dynamics of interaction between political actors, media, and citizens, profoundly transforming the way politics is conceived, disseminated, and consumed. In this regard, platforms and, in particular, the implementation of artificial intelligence (AI) systems represent one of the most significant milestones in this evolution [9], assuming a paradigmatic role and bringing about substantial transformations in various interactive dynamics. The ability to analyze complex data, process information in real time, and adapt to individual preferences could fundamentally reshape the way political leaders interact with the public [10] and contribute substantially to shaping public opinion.
The consumption of information online through platforms and social networks multiplies the chances of being trapped in the web of disinformation. On the other hand, if those who control words control reality, it is clear that the objective of power groups is to prevent the spread of awareness and critical consciousness through homogenization and disinformation strategies implemented through a polarizing use of platforms and digital ecosystems. Considering these aspects, important challenges emerge related to the verification of sources, the risk of manipulation of information, and equal access to news in a context in which disinformation and incorrect information compromise the creation of a reliable information environment capable of stimulating active participation in political life and guaranteeing the quality of sources. These points prompt reflection on the deeper capacity of AI to shape and influence the political sphere, as well as on which strategies can effectively be adopted to ensure an informed and transparent debate in the age of AI [11]. The aim of this article is to propose a reflection on the potential offered by AI within today’s political landscape, particularly with regard to the dynamics of democratization processes and the construction of public opinion, highlighting its opportunities and challenges without, however, neglecting the analysis of the social and ethical implications inherent in the use of these technologies in politics.

2. The Precarious Balance of AI Between Potential and Criticality

The need to adapt the way messages are delivered reflects a context in which emerging technologies and social transformations interact in complex ways, influencing both the form and content of public communications. These changes represent both a challenge and an opportunity to adopt more sophisticated and targeted approaches capable of promoting new forms of participation. In this regard, the relationship between AI and politics is of significant importance today as it opens up a wide spectrum of horizons, and this technology has the potential to influence communication and even participation to varying degrees. The ability to analyze complex data, process information in real time, and adapt to individual preferences demonstrates the highly innovative nature of AI in shaping current political dynamics. These technologies have the ability to redefine the way users receive, disseminate, and share information, as well as participate in conversations on social media platforms. This phenomenon is often referred to as “automating the news” [12], corresponding to the emergence of a new range of artificial intelligence applications capable of rewriting media roles and functions. This has sparked a growing interest in the numerous issues arising from the fusion of technology and democratic decision-making processes [13], as well as in the impact of AI on public opinion [14]. Despite sounding like a mere rhetorical exercise, it is worth emphasizing how AI-related technologies, while offering vast opportunities for improving knowledge and social relations, simultaneously present risks of introducing distortions, manipulations, and disinformation [15,16] with the main objective of influencing public opinion by orienting it toward sentiments and actions that serve the source’s purpose (e.g., generating panic, changing opinion on a given social phenomenon or candidate, etc.). These tactics clearly erode democracy, stripping it of its meaning and transforming it into mere demagoguery based on the ad hoc construction of ideologized and polarized opinion “bubbles”. The notion of a filter bubble, introduced by Pariser [17], indicates how the algorithms of digital platforms personalize information, creating closed spaces in which users are mainly exposed to contents that consolidate their pre-existing opinions. In the debate on digital communication, this issue is particularly relevant because it contributes to polarization and political radicalization, narrowing the dialogue between different positions. This duality emphasizes, once again, the importance of adopting a critical and reflexive approach to mitigate any potential negative effects while promoting a conscientious and ethical use of such technologies. Some scholars [18,19] argue that algorithms and exclusionary groups, especially on Facebook and Twitter (now X), increase the spread of, and exposure to, persuasive disinformation (fake news). Interesting signs of this were observed during the populist election campaigns of Donald Trump in the United States, Jair Bolsonaro in Brazil, and Matteo Salvini in Italy, all of which were characterized by a significant spread of fake content. It is undeniable that due to rapid technological progress and growing public acceptance, AI is emerging as a highly relevant reality.
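To make the filter-bubble mechanism just described more concrete, the following sketch simulates a toy recommender that ranks content by its closeness to a user's past engagement. It is a deliberately minimal illustration of the feedback loop, not a reconstruction of any platform's actual ranking system; all item data and scoring rules are invented for the example.
```python
# Minimal sketch (not any platform's actual ranking system): a toy recommender
# that scores items by agreement with a user's past engagement, illustrating
# how personalization can narrow exposure into a "filter bubble".

from collections import Counter

# Hypothetical content items, each tagged with a political leaning in [-1, 1].
ITEMS = [
    {"id": i, "leaning": leaning}
    for i, leaning in enumerate([-0.9, -0.6, -0.2, 0.0, 0.3, 0.5, 0.8, 1.0])
]

def user_profile(history):
    """Average leaning of items the user engaged with (empty history -> neutral)."""
    return sum(item["leaning"] for item in history) / len(history) if history else 0.0

def rank_feed(history, items, k=3):
    """Rank items by similarity to the user's profile: closer leaning scores higher."""
    profile = user_profile(history)
    return sorted(items, key=lambda item: abs(item["leaning"] - profile))[:k]

# Simulate a user who starts with one slightly partisan click and always
# engages with the top-ranked item: recommendations converge on similar content.
history = [ITEMS[5]]          # initial engagement: leaning 0.5
for step in range(5):
    feed = rank_feed(history, ITEMS)
    history.append(feed[0])   # the user clicks the most "aligned" item
    print(f"step {step}: feed leanings = {[round(i['leaning'], 1) for i in feed]}")

print("exposure counts:", Counter(item["leaning"] for item in history))
```
Real platform rankers optimize engagement with far richer signals, but the structural point is the same: when past behavior drives future exposure, the range of content a user encounters tends to narrow.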
The thesis that AI is emerging as a highly relevant reality is supported by government strategies and communications that embrace this narrative, describing AI as an inevitable and profoundly transformative technological advance characterized by high economic prospects [11,20]. This positioning reflects the perception of AI as a catalyst for significant transformations, including in economic models, highlighting imminent opportunities and profound implications in a myriad of areas. In this scenario of rapid innovation, we cannot overlook the inevitable impact on the political arena, where AI could vigorously influence decision-making flows, legislative processes, and global interactions. The aspiration to entrust political responsibilities to AI-based systems reflects the vision of technology as an efficient and rational tool capable of guiding decisions and policies in an unbiased manner. This trend raises important questions about reliance on automation in the context of political leadership and invites reflection on the balance between human and machine capabilities in shaping the future of government institutions. Considering also the growing disconnect between the political sphere and the citizenry, this is only the tip of the iceberg of deeper issues involving trust, participation, and people’s expectations of institutions and politics.
During the last European elections in 2024, for example, there was an overall slight decrease in voter turnout compared to 2019, from 52.43% to 51.08%, a drop of 1.35 percentage points. This negative trend particularly affected the southern European belt (Greece, Spain, Italy, Malta, and Portugal), where turnout dropped from 54.66% in 2019 to 47.56% in 2024, a considerable decrease of about 7 percentage points. Even in the last U.S. presidential election, held on 5 November 2024, the abstention rate was around 35%. The most striking figure comes from the British general election held on 4 July 2024: voter turnout, at 59.8%, was one of the lowest values since 1885, meaning that 40.2% of eligible voters abstained. This is a substantial drop compared to the previous election (2019), when turnout was 67.3%. Although these figures seem to explain the growing mistrust of voting in a straightforward manner, the underlying reality is far more nuanced and complicated. This apparent disengagement may reflect a search for new models of political engagement or indicate the need for significant reforms to restore a closer and more meaningful connection between citizens and the political process. There are concrete examples in which the replacement of decision-making elements traditionally associated with political representatives is emerging as an intriguing and debated prospect.
In this direction, a paradigmatic case is what happened in the United States with the “AI Politician” project, a database that collects the directives of each citizen to return a unified and shared political line. Similarly, the hypothesis of an entire nation governed by AI could have materialized in Denmark during the last elections if Syntetiske Parti [Synthetic Party] had obtained significant representation. Founded by the art collective Computer Lars and the MindFuture Foundation, a non-profit organization focused on technology, this party is unique in that it is led by a chatbot named Leader Lars.
The innovative prospect of a fully automated political party, in which AI takes on the role of leader and decision-maker, raises profound considerations about the evolution of democratic dynamics and the impact of emerging technologies on political participation. This case raises fundamental questions about the effectiveness, impartiality, and representativeness of such an approach in governance, but it could mark a significant step toward a form of government in which AI actively contributes to policy formulation and leadership. Since Leader Lars is not human, it could not appear on the ballot. The members of the organization, however, had already expressed their willingness to act as spokespersons for AI issues at the institutional level. This dynamic, regardless of the results, introduces a broad and unprecedented vision into political participation, challenging traditional conventions and raising crucial questions about the representativeness and effectiveness of this modus operandi in the democratic context. The idea that human individuals can materialize the will of a non-human entity requires profound reflection on the nature and limits of representative democracy. This approach, which sees humans as intermediaries or spokespersons of a non-human entity, opens intricate scenarios that require a critical analysis of the dynamics of representation, democratic consensus, and fairness in decision-making processes. The unprecedented willingness of flesh-and-blood individuals to assume the role of an institutional AI spokesperson points to a strengthened link between artificial entities and human actors operating in the political fabric [21]. This throws down the gauntlet to established paradigms and opens new frontiers of study and reflection in the field of relations between technology and society, as well as between technology and politics. Looking back, we can also recall what happened in New Zealand following the launch of Sam, the first virtual politician. This initiative aimed to concretely improve the conditions of citizens and, despite its provocative nature, was notable for its interaction with the electorate and public opinion, managing to mobilize millions of users on platforms such as Facebook, X (formerly Twitter), and others. In the electoral market, characterized by increasingly volatile controversies and widespread disillusionment with ideal principles, there is an urgent need to capture public consensus. In this climate, in which politics is perceived as constantly distant and in which a prompt and immediate response to questions is sought, the opportunity to intensify interactions and dialogues with the grassroots proves invaluable, to the extent that political communication promoted by non-political actors plays a surprising role in reconnecting citizens with the traditional political sphere [22]. The synergy between AI and democratic participation not only facilitates greater citizen engagement [23] but also contributes to a deeper and more informed understanding of political issues, outlining a perspective in which technology positively amplifies the democratic experience. It is universally acknowledged that civic engagement has favorable impacts on the intrinsic quality of democracy [24,25].
This awareness underlines the crucial importance attached to the active participation of citizens in governance, as a dialogue between the governed and the governing contributes substantially to promoting high-quality standards within a democratic system. This consideration reflects a widespread consensus regarding the central role that citizen participation plays in maintaining the viability and validity of democratic institutions. Understanding this rationale points the way to encouraging and supporting practices of civic aggregation, aiming to preserve and strengthen the foundations of democracy in modern societies. Therefore, the involvement of citizens in political affairs would be desirable and advisable, not relegating them to complete subordination to an elite of intermittently elected political representatives [26]. This theoretical imperative suggests the need to reconsider and reformulate the traditional paradigm of political participation, promoting broader civic participation to ensure more authentic and inclusive representation. Ultimately, the progressive and experimental perspective of some projects, such as those mentioned, suggests an awareness of the challenges associated with integrating AI into politics in all its forms. What seems indisputable is that the path toward a harmonious coexistence between AI and political decision-making processes will require an open dialogue, robust regulations, and constant ethical monitoring to ensure that these innovations contribute positively to democratic norms and political life.

3. AI and Algorithms: The Thin Line Between Progress and Ambiguity

One of the crucial challenges in contemporary political communication is, therefore, the integration and adaptation of all tools, devices, platforms, and languages to political dynamics and needs. According to Howard, Woolley, and Calo [27], political communication represents the process through which information, technology, and media are employed to consolidate political power. In this context, considering the current landscape, it is essential not to overlook algorithms, bots, and AI [28] when discussing the impact of social networks and messaging platforms in the field of politics. Through the predictive analysis of data and the processing of information from various sources, AI enables politicians to more accurately identify “target voters” [29]. This leads to the tailoring of political messages to the specific needs and preferences of voters, thereby improving campaign effectiveness [30]. In the formulation of public policies, AI provides analytical and simulation tools that enable policymakers to anticipate and estimate the potential impact of planned interventions more accurately. In this regard, managing large amounts of data and anticipating the consequences of policy decisions can provide clear added value for governance. Furthermore, the implementation of chatbots and virtual assistants on politicians’ web channels underlines the ability of AI to facilitate direct communication between representatives and citizens [31,32]. This, however, raises ethical questions regarding the clarity and accessibility of information [33]. In summary, AI offers a range of new functionalities in the dynamics of political and electoral processes, in the drafting of public policies, and in the interaction between various stakeholders. This needs to be studied in greater detail so as to fully understand its implications and pitfalls in the world of contemporary politics and institutions. The penetration of artificial intelligence systems is marking a new phase of restructuring in politics, going beyond the transformations previously introduced by the advent of the Internet [34]. The effects on political dynamics raise many uncertainties about democratic systems. Among the main conundrums are transparency in access to information, ethical accountability in automated decisions, and the potential replacement of direct citizen participation with algorithmically influenced responses. Addressing these issues is essential to ensure that AI helps to strengthen, rather than undermine, democratic principles. While the potential of AI to improve human progress is undeniable, its interference in the political arena is characterized by considerable ambivalence [35]. This ambivalence is also reinforced by the simplistic imaginaries that have developed around the idea of “algorithm power” [36], which have produced a social image of the algorithm as a black box, which it obviously is not, even though it now occupies a central position in social processes. On the one hand, we benefit from AI-based communication applications that facilitate public discourse, promote connections between individuals, and improve the circulation of information [37]; on the other, there are legitimate concerns about the use of technologies that could undermine the foundations of the political system and democracy [38].
These concerns transcend mere interference in elections and strike at the heart of democratic politics, as well as interpersonal relationships between citizens, citizens and representatives, and citizens and public institutions charged with serving and safeguarding the common good.
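As a purely illustrative aside to the “target voter” identification and microtargeting described above, the sketch below trains a simple classifier on synthetic data and uses it to prioritize the voters predicted to be most responsive. All features, labels, and thresholds are hypothetical; real campaign systems are proprietary and considerably more elaborate, so this only shows the basic predict-then-prioritize pattern.
```python
# Minimal sketch (synthetic data, hypothetical features): scoring "target voters"
# with a simple classifier. Not any campaign's actual method.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per voter: [age, hours online per day, issue-X interest 0..1]
X = np.column_stack([
    rng.integers(18, 80, size=500),
    rng.uniform(0, 8, size=500),
    rng.uniform(0, 1, size=500),
])
# Synthetic label: 1 = responded to a past campaign message (made up for the demo).
y = (0.03 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 0.2, 500) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a fresh pool of voters and keep the most "persuadable" ones for outreach.
pool = np.column_stack([
    rng.integers(18, 80, size=10),
    rng.uniform(0, 8, size=10),
    rng.uniform(0, 1, size=10),
])
scores = model.predict_proba(pool)[:, 1]          # probability of responding
targets = np.argsort(scores)[::-1][:3]            # top 3 voters by predicted response
print("target voter indices:", targets, "scores:", scores[targets].round(2))
```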
Platformization has made information processes fluid, unfiltered, and homogenizing, characterized by the massive overexposure and diffusion of information and fake news and by the impossibility of recognizing the authoritativeness and reliability of news. All this makes it difficult for individuals to orient themselves with respect to a specific topic, giving rise to the phenomenon known as the “information epidemic”, or “infodemic”. This term, coined at the beginning of the third millennium during the first SARS epidemic [39], came back into vogue with the SARS-CoV-2 pandemic [40,41], demonstrating unequivocally how this phenomenon contributes to the further fragmentation of the world [42] through the construction of different social realities.
At this historical moment, it is therefore pertinent to focus on a particular concern that deserves special vigilance: the manipulation that undermines the cohesion of our society and the functioning of the political system [43]. Among the most insidious digital threats of our time, there is certainly a sophisticated technique that, thanks to artificial intelligence, could undermine the pillars of the democratic process, spreading disinformation and sowing distrust. The phenomenon of deepfakes, in fact, constitutes a widespread threat to democracy today, since it uses the know-how offered by intelligent systems to create highly realistic manipulated visual and vocal content. This type of artifact is often used in the political field to discredit opponents, falsify official statements, and interfere with electoral campaigns [44]. Overall, it could severely weaken the credibility of institutions and official media, fueling the propagation of distorted narratives [45]. In general, such actions evidently undermine the functioning of democracy, stripping it of its real meaning and converting it into mere demagoguery based on ideologized and strongly polarized opinions [46]. The widespread use of AI, with intentions that can be both malevolent and benevolent, highlights the bifurcation of the applications of this advanced technology. Its pervasive and ubiquitous diffusion in the contemporary landscape underlines the need to evaluate and understand the context in which it is used. AI, not being intrinsically neutral, can be directed toward either altruism or malice, depending on the intentions of the agents involved. Consequently, it is not intrinsically impartial, as it relies on training data that may reflect prejudices or inequalities present in society [47]. During the learning process, if the data contain cultural or discriminatory stereotypes, AI incorporates and amplifies such prejudices, leading to a phenomenon known as “algorithmic bias”. This lack of neutrality requires caution in the design and training of AI systems, aiming to mitigate the risk of perpetuating or exacerbating the inequalities intrinsic to the starting data.
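The notion of algorithmic bias discussed above can be illustrated with a fully synthetic example: a classifier trained on historical decisions that disadvantage one group reproduces that disadvantage, even when the protected attribute is only present through a correlated proxy. The data, variables, and magnitudes below are invented for demonstration and do not describe any real system.
```python
# Minimal sketch (entirely synthetic): a classifier trained on data that encodes a
# historical disparity between two groups reproduces that disparity at prediction
# time -- the "algorithmic bias" discussed above. No real dataset or system is used.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

group = rng.integers(0, 2, size=n)                # 0 or 1: a protected attribute
skill = rng.normal(0, 1, size=n)                  # the legitimate signal

# Biased historical labels: group 1 was approved less often at equal skill.
approved = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# The model never sees `group` directly, but a correlated proxy leaks it.
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"predicted approval rate, group {g}: {rate:.2f}")
# The gap between the two printed rates is the learned (inherited) bias.
```
The point of the sketch is simply that removing a sensitive variable does not remove the bias encoded in the labels and in correlated proxies, which is why the design and auditing of training data matter.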
Public perception of the outcomes generated by AI can vary significantly, reflecting the perspectives, values, and sensitivities of different social actors. This underlines the importance of a critical debate regarding the ethical implications linked to its use [48]. Given this fundamental complexity, addressing this labyrinthine aspect requires a multilateral and collaborative approach involving various stakeholders. This involves the participation of scholars, developers, legislative authorities, activists, companies, and citizens to create regulatory frameworks and governance mechanisms capable of mitigating the negative impacts and maximizing the benefits of AI integration in society and, consequently, in politics.
Considering this, it is worth clarifying the dimensions of the phenomenon, the motivations behind these transformations, the means employed, and the beneficiaries [49]. The need to analyze the driving forces and tools used in current sociopolitical transformations emerges from an evolving understanding, which makes the acquisition of a comprehensive vision of these tensions an absolute priority. More than two decades ago, Barber [50] (p. 575) posed a crucial question: “has modern technology corrupted or improved our polity?” This question still highlights the delicate relationship between technology and politics, prompting profound considerations of the trends and impulses that technological innovations exert on the political fabric. This perspective offers a field of inquiry that goes beyond simple considerations, pushing us to probe not only the tangible effects of technology on politics but also the more subtle forces that shape reality in terms that may not be immediately evident. In this way, Barber’s question serves as a critical beacon to shed light on this interconnection. It is clear that Information and Communication Technologies (ICT) have induced profound social transformations, shaping the fabric of “knowledge societies” [51], “knowledge-based economies” [52], the “network society” [3,53,54], and “always-on” societies [55]. All these innovations have facilitated the flow of communication, promoted data sharing, and provided unique tools for the creation of interconnected networks. In particular, the proliferation of AI-based technologies has led to an exponential increase in the accessibility of various types of information, giving rise to algorithms capable of processing large amounts of data with extraordinary speed and efficiency. A tangible example of this impact is represented by advanced search engines such as Google [56]. Through sophisticated machine learning algorithms, these engines can analyze the context of users’ searches, processing large amounts of data from websites around the world. This technological development has triggered a drastic change in the way individuals and societies interact with information, outlining a landscape in which facilitated communication and advanced data processing are the foundation of the contemporary digital era [57]. We do not, however, embrace a “spiritual deferral to ‘algorithmic neutrality’” [58]; rather, we emphasize the need to avoid excessive reliance on supposed algorithmic neutrality. With the proliferation of real-time data acquisition and analysis technologies, coupled with the growing importance of social media and digital platforms, our lives have become an informational substrate for algorithmic processes that serve a wide range of purposes [59], sometimes resulting in manipulation.

4. Toward New Forms of Responsibility

The increasing use of automated tools and AI systems sparks heated debates among scholars and raises significant concerns about the risks and threats that these new developments pose to contemporary society and democratic systems [60]. This phenomenon highlights the urgent need to further our understanding of the sociopolitical impacts resulting from the integration of AI into decision-making structures and everyday practices. Advanced automation and AI not only promise to streamline processes and improve efficiency but also pose potential risks concerning data security, privacy, and information manipulation. This context raises critical questions about transparency, fairness, and accountability in the design and implementation of automated systems, especially when they directly influence political decision-making processes and democratic participation. For these reasons, there has been an increased interest among political representatives in exercising control in the present through the ability to predict the future. This aspiration is linked to the progressive acquisition of technological know-how in the development of highly sophisticated analytical models. These models, which use advanced methodologies and draw from a diverse set of data sources (digital archives, sensor data from networked devices, online text analytics, historical documents, and other related datasets), are designed to provide accurate and valuable guidance to policy makers. This leads to the assertion that, in the future, we will witness an increasing synergy between the predictive potential of technology and the political decision-making level. The increased understanding of data correlations, the sophistication of processing algorithms, and access to large amounts of information contribute to creating fertile ground for shaping this scenario. The approach, focused on prediction and preparation, aims to improve responsiveness to emerging challenges and guide the development of more effective public policies. The digital era has led to an unprecedented proliferation of information managed by complex algorithms [61] that, in their varying complexity, serve as guiding tools for data analysis and forecasting, underlining the critical importance of artificial intelligence and machine learning in this context.
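A minimal illustration of this “predict to prepare” logic is sketched below: a simple linear trend is fitted to a hypothetical indicator and projected forward, standing in for the far richer, multi-source predictive models the text refers to. The figures are invented and the method is deliberately basic.
```python
# Minimal sketch (synthetic indicator, hypothetical numbers): fit a simple trend
# to an observed series and project it forward to inform a policy decision.
# Real decision-support models combine many data sources and richer methods.

import numpy as np

years = np.arange(2015, 2025)
# Hypothetical indicator, e.g., share of citizens reporting distrust (made up).
indicator = np.array([30, 31, 33, 34, 36, 38, 39, 41, 42, 44], dtype=float)

# Ordinary least-squares linear trend.
slope, intercept = np.polyfit(years, indicator, deg=1)

future = np.arange(2025, 2028)
forecast = slope * future + intercept
for year, value in zip(future, forecast):
    print(f"{year}: projected indicator ~ {value:.1f}")

# A policy maker might use such a projection to decide whether to intervene now
# rather than after the trend has fully materialized.
```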
It is essential, however, to address the ethical and social implications of this intricate scenario, also because the use of AI in the political field shows no sign of abating. One example is the Cambridge Analytica affair during the 2016 U.S. presidential elections, which demonstrated how Big Data and algorithms can be used to manipulate public opinion through microtargeting [62]. Other examples, such as the use of algorithms by YouTube, which fuel political polarization, raise concerns about the quality of democratic debate [63]. On the other hand, e-government experiments, such as those in Estonia, show the positive potential of AI in promoting democratic participation and administrative efficiency [64]. Even in more authoritarian contexts, such as in China, the use of AI for social control and population surveillance raises questions about freedom and civil rights [65]. These cases reflect the tensions between improving democratic participation and the risks associated with manipulation and surveillance, underlining the need for critical reflection on the application of AI in politics.
Massive data collection and processing, if not governed by rigorous ethical and regulatory principles, may raise concerns about privacy, security, and the potential for algorithmic discrimination [66]. In terms of privacy, for example, the practice of indiscriminate online tracking and non-consensual collection of personal data may significantly undermine the protection of users’ information [67]. To illustrate this threat concretely, consider the use of tracking cookies by websites. Suppose a user visits an online news site and, without their explicit consent, the site uses tracking cookies to monitor the pages visited, the time spent on each page, and other browsing activities. Subsequently, this information may be disclosed to advertisers or third parties without the user’s informed consent. As a result, the user may be subjected to highly targeted advertising based on their previous browsing activities [68]. This tangibly exemplifies how user privacy can be compromised without their direct consent, highlighting the regulatory challenges associated with the management of personal data in online contexts. The critical issues of regulation regarding AI arise from the need to balance the protection of fundamental rights, such as privacy, with the need to promote the circulation of data to supply the digital market. The protection of personal data, defined as a fundamental right, is confronted with technological developments that collect, analyze, and use enormous amounts of Big Data for commercial, political, and economic purposes. European legislation, such as the GDPR and the AI Act (Regulation (EU) 2024/1689), seeks to regulate this tension, but the market often prevails over the protection of the individual. The real challenge remains that of ensuring global and multilateral regulation that protects the individual in a context in which fundamental rights risk losing value in favor of economic competitiveness [69]. It is therefore essential to find a balance between the potential benefits of these predictive technologies and the need to preserve the ethical values and fundamental rights of citizens. Furthermore, with “timeless time” [70], social fragmentation, and spatial liquidity, a new media ecosystem emerges, characterized by altered roles and modes of information production, with an exponential increase in the volume of data and in the speed with which information circulates and interacts in the public sphere [71,72]. The emergence of this particular media environment has been catalyzed by significant changes in the ways in which information is produced, disseminated, and consumed by society itself.
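The tracking-cookie scenario described in the paragraph above can be made concrete with a small, self-contained simulation (no real website, cookie, or ad network is involved): a pseudonymous identifier is assigned on the first visit, page views are logged against it, and the resulting interest profile is the kind of data that could later be passed to advertisers without informed consent.
```python
# Minimal sketch (pure simulation, no real site or ad network): the tracking-cookie
# example from the paragraph above. A pseudonymous cookie ID is assigned on the
# first visit, every page view is logged against it, and the resulting profile is
# what could later be shared with third parties without the user's informed consent.

import uuid
from collections import defaultdict

visit_log = defaultdict(list)        # cookie_id -> list of (page, seconds_on_page)

def serve_page(cookies, page, seconds_on_page):
    """Simulate a site response: set a tracking cookie if absent, then log the view."""
    if "tracker_id" not in cookies:
        cookies["tracker_id"] = str(uuid.uuid4())   # set without explicit consent
    visit_log[cookies["tracker_id"]].append((page, seconds_on_page))
    return cookies

# One user's browser cookies across a browsing session.
browser_cookies = {}
for page, secs in [("/politics/election", 120), ("/health/vaccines", 45),
                   ("/politics/immigration", 200)]:
    browser_cookies = serve_page(browser_cookies, page, secs)

# The profile an advertiser could receive: interests inferred from sections visited.
tracker_id = browser_cookies["tracker_id"]
profile = {page.split("/")[1] for page, _ in visit_log[tracker_id]}
print("pseudonymous ID:", tracker_id)
print("inferred interests:", profile)
```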
These changes in the media environment are mainly due to the digital transition and the widespread diffusion of media and information. While the rapid diffusion of information allows for a rapid orientation of public opinion, the multitude of sources (not always reliable) can create a complex and sometimes contradictory information landscape [73] that complicates management in the public sphere. At this point, the non-linearity and synchronicity of relationships in the digital society become evident, where information is produced, disseminated, and consumed instantaneously and simultaneously. This phenomenon is facilitated by the connectivity and speed of digital communications, creating a kind of uninterrupted present and a drive toward immediacy. Furthermore, the fragmentation of the social fabric translates into the possibility of self-selecting into online communities that share similar interests, values, or perspectives, where individuals are mainly exposed to information and opinions that confirm their existing beliefs. This tendency to reinforce one’s own beliefs fosters misinformation and reduces the possibility and capacity to reflect on and discuss different points of view. Much as so-called echo chambers operate (fueled by confirmation bias, which creates an environment in which only information that reflects and reinforces one’s opinions is encountered), this process also passes through “filter bubbles” [74,75], that is, the algorithmic tracking of users’ preferences and the continuous proposal of similar content, until the system ends up showing mainly content that will be consumed or ideas that will be shared. In this case, the disinformation that hides behind a filter bubble stems, first of all, from the fact that a social network can hide posts and messages from users with different opinions, or news and information that it does not consider shareable; it therefore becomes impossible to inspect the algorithm and realize that one is inside a filter bubble [17]. It is obvious that all these changes have a significant impact on society and communication, influencing the media ecosystem. Traditional roles are being redefined, leaving room for completely new actors and tools. The democratization of content production allows a wider range of voices to participate in public discourse but, at the same time, poses challenges related to the accuracy of information and the spread of disinformation [76,77], with effects on the protection of fundamental rights and potential risks for democracies. In this scenario, navigating this new universe becomes crucially dependent on the development of critical discernment and literacy in new communication technologies. It therefore appears essential to promote a critical and conscious education that is not limited to the ability to use technologies: this requires targeted educational and sociopolitical interventions that simultaneously guarantee the acquisition of knowledge, skills for effective use, and critical and conscious reflexivity, and that are also able to address the impact of these technologies at the social level. On the one hand, the practices of using new technologies and digital platforms can offer greater possibilities for inclusion and integration through new opportunities for sociality and participatory citizenship [78], as well as the development of new skills, learning methods, and interactions summarized in the concept of transmedia literacy [79].
However, we must not forget the controversial effects due to the greater exclusion and marginalization caused by the digital divide. In societies characterized by a significant digitalization and platformization of communication processes, digital differences and inequalities produce exclusion and social marginalization because they deprive individuals of the right to develop their own abilities and therefore reduce the possibilities of integrating and participating in the economic and public world.
In today’s landscape, opinion climates are dominated not only by a series of complex and evolving factors [54] but also by the growing influence of AI, which can play a significant role in shaping opinion climates. This is manifested in a landscape characterized by high levels of instability and mobility [80], reflected in the constant dynamism of debate spaces. The acceleration of global interconnectivity and the rapid circulation of information, catalyzed by advanced computing technologies and AI algorithms, introduce unprecedented complexity in understanding social processes. The incorporation of AI not only reflects the evolution of communication paradigms but also introduces new challenges in perceiving opinions and defining convergent perspectives. Achieving effective synergy will critically depend on political institutions, experts, and society actively collaborating to develop appropriate ethical and regulatory frameworks. Only through diligent dialogue and fostering a culture of accountability can emerging challenges be addressed and a future shaped where these tools align with the protection of fundamental values. In this ever-changing context, research and collective reflection are essential to strategically adapt analytical methodologies and ensure that technological progress supports, rather than undermines, the democratic sphere and the social system.

5. Conclusions

Current societies are increasingly shaped by the role played by digital ecosystems and technologies, not without concerns and controversies, as we have seen. This does not only apply to the public sphere of politics but also to all other aspects of daily life. Some time ago, Kurzweil [81] highlighted the path of societies toward the so-called technological singularity, that is, the point at which technology surpasses human capacities for understanding and control. Joy [82] underlined the urgency of a critical, conscious, and ethical reflection on the development and use of technologies. Starting from the second half of the last century, a broad debate has developed on these aspects, but the question relating to the effects and benefits of technologies remains open, especially considering the advent of AI.
The interaction between AI, accelerated digitalization, and social fragmentation has defined a new media and information ecosystem characterized by a proliferation of data and an unprecedented speed in the circulation of information. This context presents significant challenges for the ethical and regulatory management of privacy, information accuracy, and democratic participation. The widespread adoption of predictive technologies and advanced algorithms requires a delicate balance between the optimization of decision-making processes and the protection of citizens’ fundamental rights. Effectively addressing these challenges requires a continuous commitment to promoting a culture of accountability supported by solid regulations and an inclusive dialogue between political actors, experts, and civil society. Only through a collaborative and reflective approach can we strategically guide technological evolution toward a future in which AI positively contributes to building more balanced, informed, and democratic societies. In an increasingly hybrid and dynamic communication environment, the influence of algorithms as active agents in the production and dissemination of content has undeniably increased [83]. The study of AI-mediated political communication represents an emerging and not fully consolidated area of research, especially when compared to other disciplinary fields. The use of AI in the public arena, in fact, requires an urgent regulatory effort capable of countering its dysfunctions, including the establishment of specific supervisory bodies that certify its use in compliance with democratic principles. Transparency must be made the cornerstone, requiring the disclosure of algorithmic criteria and the drafting of public reports on their effects. Furthermore, the conscious involvement of technicians, legislators, and civil society is urgently needed to shape AI according to ethical dictates and not subordinate it to hegemonic purposes. Popular education on these processes is essential to avoid the alienation of the citizen from the res publica. Finally, technological progress must be embedded in a framework of equity so that it does not override the will of the people but channels it into more structured forms of government. In this way, the application of intelligent systems would be tempered by collective oversight, preventing it from becoming a space of domination rather than emancipation.
This phenomenon requires an in-depth analysis of the dynamics through which algorithms shape political information and influence public opinion. Technological evolution has introduced new forms of political participation and interaction but has also raised critical questions about the transparency, correctness, and manipulation of information. Addressing these issues requires an interdisciplinary approach that integrates skills from communication studies, computer science, ethics, and the social sciences to fully understand the sociopolitical implications of the use of AI in contemporary political communication. Ultimately, the integration of intelligent systems and digitalization, on both its bright and dark sides, is part of the most powerful revolution underway. This becomes indisputable even for those who tend to minimize the media hype, which has now reached almost disproportionate dimensions at every latitude [84].
Finally, the ethical and regulatory implications of this revolution highlight the need for global governance capable of dealing with emerging risks, protecting individual rights, and ensuring that technological innovation takes place with respect for human dignity. The crux of the matter, however, will lie, for many years to come, in the rapid pace at which major changes will occur, to the point of often making it difficult to establish in which direction technology, its innovations, and especially its applications will head, at the same time underscoring the urgency of appropriate ethical and regulatory responses. Exploring these issues will foster ongoing academic dialogue and concrete solutions to emerging challenges in the digital age. Our study thus serves as a basis for further investigation, stimulating critical reflection and theoretical and applied developments in the field.

Author Contributions

Conceptualization, D.B. and E.M.; writing—original draft preparation, D.B.; writing—review and editing, E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lister, M.; Dovey, J.; Giddings, S.; Kelly, K.; Grant, I. New Media. A Critical Introduction; Routledge: London, UK, 2003. [Google Scholar]
  2. McCullagh, C.; Campling, J. Media Power: A Sociological Introduction; Palgrave Macmillan: London, UK, 2002. [Google Scholar]
  3. Castells, M. The Rise of the Network Society; Blackwell: Oxford, UK, 1996. [Google Scholar]
  4. van Dijck, J.; Poell, T.; de Waal, M. The Platform Society: Public Values in a Connective World; Oxford University Press: New York, NY, USA, 2018. [Google Scholar]
  5. Plantin, J.C.; Lagoze, C.; Edwards, P.N.; Sandvig, C. Infrastructure studies meet platform studies in the age of Google and Facebook. New Media Soc. 2018, 20, 293–310. [Google Scholar] [CrossRef]
  6. Helmond, A. The platformization of the Web: Making web data platform ready. Soc. Media Soc. 2015, 1, 1–11. [Google Scholar] [CrossRef]
  7. Vattimo, G. The Transparent Society; Johns Hopkins University Press: Baltimore, MD, USA, 1992. [Google Scholar]
  8. Lash, S. Critique of Information; Sage: London, UK, 2002. [Google Scholar]
  9. Floridi, L. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities; Oxford University Press: New York, NY, USA, 2023. [Google Scholar]
  10. Crawford, K. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021. [Google Scholar]
  11. Bareis, J.; Katzenbach, C. Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Sci. Technol. Hum. Values 2022, 47, 855–881. [Google Scholar]
  12. Diakopoulos, N. Automating the News: How Algorithms Are Rewriting the Media; Harvard University Press: Cambridge, MA, USA; London, UK, 2019. [Google Scholar]
  13. Theocharis, Y.; Jungherr, A. Computational social science and the study of political communication. Polit. Commun. 2021, 38, 1–22. [Google Scholar] [CrossRef]
  14. Groves, L.; Peppin, A.; Strait, A.; Brennan, J. Going Public: The Role of Public Participation Approaches in Commercial AI Labs. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023; pp. 1162–1173. [Google Scholar]
  15. Kertysova, K. Artificial intelligence and disinformation: How AI changes the way disinformation is produced, disseminated, and can be countered. Secur. Hum. Rights 2018, 29, 55–81. [Google Scholar]
  16. Bontridder, N.; Poullet, Y. The role of artificial intelligence in disinformation. Data Policy 2021, 3, e32. [Google Scholar]
  17. Pariser, E. The Filter Bubble: What the Internet Is Hiding from You; Penguin: New York, NY, USA, 2011. [Google Scholar]
  18. Newman, N.; Fletcher, R.; Kalogeropoulos, A.; Nielsen, R. Reuters Institute Digital News Report 2019; Reuters Institute for the Study of Journalism: Oxford, UK, 2019; Available online: https://reutersinstitute.politics.ox.ac.uk/our-research/digital-news-report-2019 (accessed on 10 May 2024).
  19. Zimmer, F.; Scheibe, K.; Stock, M.; Stock, W.G. Fake News in Social Media: Bad Algorithms or Biased Users? J. Inf. Sci. Theory Pract. 2019, 7, 40–53. [Google Scholar]
  20. Zeng, J.; Chan, C.H.; Schäfer, M.S. Contested Chinese Dreams of AI? Public Discourse About Artificial Intelligence on WeChat and People’s Daily Online. Inf. Commun. Soc. 2022, 25, 319–340. [Google Scholar]
  21. Battista, D. For Better or for Worse: Politics Marries Pop Culture (TikTok and the 2022 Italian Elections). Soc. Regist. 2023, 7, 117–142. [Google Scholar]
  22. Austin, E.W.; Vord, R.V.D.; Pinkleton, B.E.; Epstein, E. Celebrity Endorsements and Their Potential to Motivate Young Voters. Mass Commun. Soc. 2008, 11, 420–436. [Google Scholar] [CrossRef]
  23. Polonski, V. How Artificial Intelligence Conquered Democracy. The Conversation, 8 August 2017. [Google Scholar]
  24. Michels, A. Innovations in Democratic Governance: How Does Citizen Participation Contribute to a Better Democracy? Int. Rev. Admin. Sci. 2011, 77, 275–293. [Google Scholar] [CrossRef]
  25. Archibugi, D.; Cellini, M. The Internal and External Levers to Achieve Global Democracy. Global Policy 2017, 8, 65–77. [Google Scholar] [CrossRef]
  26. Pateman, C. Participation and Democratic Theory; Cambridge University Press: Cambridge, UK, 1970. [Google Scholar]
  27. Howard, P.N.; Woolley, S.; Calo, R. Algorithms, Bots, and Political Communication in the US 2016 Election: The Challenge of Automated Political Communication for Election Law and Administration. J. Inf. Technol. Polit. 2018, 15, 81–93. [Google Scholar] [CrossRef]
  28. Barredo-Ibáñez, D.; De-la-Garza-Montemayor, D.J.; Torres-Toukoumidis, Á.; López-López, P.C. Artificial Intelligence, Communication, and Democracy in Latin America: A Review of the Cases of Colombia, Ecuador, and Mexico. Prof. Inf. 2021, 30, e300616. [Google Scholar] [CrossRef]
  29. Crilley, R. International Relations in the Age of ‘Post-Truth’ Politics. Int. Aff. 2018, 94, 417–425. [Google Scholar] [CrossRef]
  30. Battista, D. Knock, Knock! The Next Wave of Populism Has Arrived! An Analysis of Confirmations, Denials, and New Developments in a Phenomenon That Is Taking Center Stage. Soc. Sci. 2023, 12, 100. [Google Scholar] [CrossRef]
  31. Bykov, I.A.; Kurushkin, S.V. Digital Political Communication in Russia: Values of Humanism vs. Technocratic Approach. RUDN J. Polit. Sci. 2022, 24, 419–432. [Google Scholar] [CrossRef]
  32. Viudes, F.J. Revolucionando la Política: El Papel Omnipresente de la IA en la Segmentación y el Targeting de Campañas Modernas. Más Poder Local 2023, 53, 146–151. [Google Scholar] [CrossRef]
  33. Nida-Rumelin, J.; Weidenfeld, N. Umanesimo Digitale: Un’Etica per l’Epoca dell’Intelligenza Artificiale; FrancoAngeli: Milan, Italy, 2019. [Google Scholar]
  34. Dutton, W.H. (Ed.) Politics and the Internet; Routledge: London, UK, 2013. [Google Scholar]
35. Morin, E. Réveillons-Nous!; Denoël: Paris, France, 2022. [Google Scholar]
  36. Beer, D. The Social Power of Algorithms. Inf. Commun. Soc. 2016, 20, 1–13. [Google Scholar] [CrossRef]
37. Leslie, D. Understanding Artificial Intelligence Ethics and Safety; The Alan Turing Institute: London, UK, 2019; arXiv:1906.05684. [Google Scholar]
  38. Hine, C. Evaluating the Prospects for University-Based Ethical Governance in Artificial Intelligence and Data-Driven Innovation. Res. Ethics 2021, 17, 464–479. [Google Scholar] [CrossRef]
  39. Rothkopf, D.J. When the Buzz Bites Back. The Washington Post, 11 May 2003, p. B01. Available online: https://www.washingtonpost.com/archive/opinions/2003/05/11/when-the-buzz-bites-back/bc8cd84f-cab6-4648-bf58-0277261af6cd/ (accessed on 27 December 2024).
40. Nguyen, N. In the Coronavirus ‘Infodemic’, Here’s How to Avoid Bad Information. Misleading Information About COVID-19 Spreads Through Texts and Emails—But You Can Protect Yourself from Dubious Claims and Reports. The Wall Street Journal, 22 March 2020. Available online: https://www.wsj.com/articles/in-the-coronavirus-infodemic-you-can-manage-the-deluge-of-news-11584882002 (accessed on 27 December 2024).
  41. Boccia Artieri, G. Infodemic Disorder: COVID-19 and Post-truth. In Infodemic Disorder; La Rocca, G., Carignan, M.E., Boccia Artieri, G., Eds.; Palgrave Macmillan: London, UK, 2023; pp. 15–30. [Google Scholar]
  42. Geertz, C. The World in Pieces: Culture and Politics at the End of the Century. In Available Light: Anthropological Reflections on Philosophical Topics; Geertz, C., Ed.; Princeton University Press: Princeton, NJ, USA, 2001; pp. 218–263. [Google Scholar]
  43. Westerlund, M. The Emergence of Deepfake Technology: A Review. Technol. Innov. Manag. Rev. 2019, 9, 39–52. [Google Scholar]
  44. Diakopoulos, N.; Johnson, D. Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media Soc. 2021, 23, 2072–2098. [Google Scholar] [CrossRef]
  45. Battista, D. Political communication in the age of artificial intelligence: An overview of deepfakes and their implications. Soc. Regist. 2024, 8, 7–24. [Google Scholar]
  46. Farkas, J.; Schou, J. Fake news as a floating signifier: Hegemony, antagonism and the politics of falsehood. Javn. Public. 2018, 25, 298–314. [Google Scholar]
  47. Baker, J.E. Reducing Bias and Inefficiency in the Selection Algorithm. In Proceedings of the Second International Conference on Genetic Algorithms, Cambridge, MA, USA, 28–31 July 1987; Volume 206, pp. 14–21. [Google Scholar]
  48. Crawford, K.; Calo, R. There Is a Blind Spot in AI Research. Nature 2016, 538, 311–313. [Google Scholar] [CrossRef] [PubMed]
  49. Savaget, P.; Acero, L. Plurality in Understandings of Innovation, Sociotechnical Progress, and Sustainable Development: An Analysis of OECD Expert Narratives. Public Underst. Sci. 2018, 27, 611–628. [Google Scholar]
50. Barber, B.R. Three Scenarios for the Future of Technology and Strong Democracy. Polit. Sci. Q. 1998, 113, 573–589. [Google Scholar]
51. David, P.; Foray, D. Una Introducción a la Economía y a la Sociedad del Saber; UNESCO: Paris, France. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000125488_spa (accessed on 1 December 2023).
  52. Foray, D.; Lundvall, B.A. Employment and Growth in the Knowledge-Based Economy; OECD: Paris, France, 1996; Available online: https://infoscience.epfl.ch/handle/20.500.14299/214934 (accessed on 5 January 2025).
  53. Castells, M. The Network Society: A Cross-Cultural Perspective; Edward Elgar Publishing: Cheltenham, UK, 2004. [Google Scholar]
  54. van Dijk, J. The Network Society: Social Aspects of New Media, 2nd ed.; Sage Publications: London, UK, 2006. [Google Scholar]
  55. Cellan-Jones, R. Always On: Hope and Fear in the Social Smartphone Era; Bloomsbury Continuum: London, UK, 2021. [Google Scholar]
  56. Mager, A. Algorithmic Ideology: How Capitalist Society Shapes Search Engines. Inf. Commun. Soc. 2012, 15, 769–787. [Google Scholar] [CrossRef]
  57. Edwards, A.; Fitzgerald, R.; Beneito-Montagut, R.; Housley, W. The SAGE Handbook of Digital Society; Sage Publications: London, UK, 2022. [Google Scholar]
  58. Morozov, E. Don’t Be Evil. New Repub. 2011, 242, 18–24. [Google Scholar]
  59. Amoore, L.; Piotukh, V. (Eds.) Algorithmic Life: Calculative Devices in the Age of Big Data; Routledge: London, UK, 2015. [Google Scholar]
  60. Volodenkov, S.; Fedorchenko, S. Subjectness of Digital Communication in the Context of the Technological Evolution of Contemporary Society: Threats, Challenges, and Risks. Przegląd Strateg. 2021, 14, 437–456. [Google Scholar] [CrossRef]
  61. Battista, D.; Uva, G. Exploring the Legal Regulation of Social Media in Europe: A Review of Dynamics and Challenges—Current Trends and Future Developments. Sustainability 2023, 15, 4144. [Google Scholar] [CrossRef]
62. Zuboff, S. Surveillance capitalism and the challenge of collective action. New Labor Forum 2019, 28, 10–29. [Google Scholar] [CrossRef]
  63. Tufekci, Z. Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colo. Tech. L. J. 2015, 13, 203–217. [Google Scholar]
  64. Lember, V.; Kattel, R.; Tõnurist, P. Technological capacity in the public sector: The case of Estonia. Int. Rev. Adm. Sci. 2018, 84, 214–230. [Google Scholar] [CrossRef]
  65. Creemers, R. Conception of cyber sovereignty: Rhetoric and realization. In Governing Cyberspace: Behavior, Power, and Diplomacy. Digital Technologies and Global Politics; Broeders, D., van den Berg, B., Eds.; Rowman & Littlefield: Lanham, MD, USA, 2020; pp. 107–142. [Google Scholar]
  66. Kellogg, K.C.; Valentine, M.A.; Christin, A. Algorithms at Work: The New Contested Terrain of Control. Acad. Manag. Ann. 2020, 14, 366–410. [Google Scholar] [CrossRef]
  67. Soltani, A.; Canty, S.; Mayo, Q.; Thomas, L.; Hoofnagle, C.J. Flash Cookies and Privacy. In 2010 AAAI Spring Symposium Series; SSRN: Rochester, NY, USA, 2010. [Google Scholar]
  68. Hu, X.; Sastry, N. Characterising Third Party Cookie Usage in the EU After GDPR. In Proceedings of the 10th ACM Conference on Web Science, Boston, MA, USA, 30 June–3 July 2019; pp. 137–141. [Google Scholar]
  69. Sartor, G. L’intelligenza Artificiale e il Diritto; Giappichelli: Turin, Italy, 2022. [Google Scholar]
  70. Castells, M. The Information Age: Economy, Society, and Culture; Blackwell Pub: Hoboken, NJ, USA, 1998; Volume 3. [Google Scholar]
71. Ananny, M. Networked Press Freedom: Creating Infrastructures for a Public Right to Hear; The MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  72. De Blasio, E.; Kneuer, M.; Schünemann, W.; Sorice, M. The Ongoing Transformation of the Digital Public Sphere: Basic Considerations on a Moving Target. Media Commun. 2020, 8, 1–5. [Google Scholar] [CrossRef]
  73. Guo, Y.; Lu, Z.; Kuang, H.; Wang, C. Information Avoidance Behavior on Social Network Sites: Information Irrelevance, Overload, and the Moderating Role of Time Pressure. Int. J. Inf. Manag. 2020, 52, 102067. [Google Scholar] [CrossRef]
  74. Flaxman, S.; Goel, S.; Rao, J.M. Filter Bubbles, Echo Chambers, and Online News Consumption. Public Opin. Q. 2016, 80, 298–320. [Google Scholar] [CrossRef]
  75. Rhodes, S.C. Filter Bubbles, Echo Chambers, and Fake News: How Social Media Conditions Individuals to Be Less Critical of Political Misinformation. Polit. Commun. 2022, 39, 1–22. [Google Scholar] [CrossRef]
  76. Wiesenberg, M.; Tench, R. Deep Strategic Mediatization: Organizational Leaders’ Knowledge and Usage of Social Bots in an Era of Disinformation. Int. J. Inf. Manag. 2020, 51, 102042. [Google Scholar] [CrossRef]
  77. Picarella, L. Intersections in the Digital Society: Cancel Culture, Fake News, and Contemporary Public Discourse. Front. Sociol. 2024, 9, 1376049. [Google Scholar] [CrossRef] [PubMed]
  78. Jenkins, H. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century; The MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  79. Scolari, C.; Masanet, M.-J.; Guerrero-Pico, M.; Establés, M.-J. Transmedia Literacy in the New Media Ecology: Teens’ Transmedia Skills and Informal Learning Strategies. Prof. Inf. 2018, 27, 801–812. [Google Scholar] [CrossRef]
  80. Feijóo, C.; Maghiros, I.; Abadie, F.; Gómez-Barroso, J.L. Exploring a Heterogeneous and Fragmented Digital Ecosystem: Mobile Content. Telematics Inform. 2009, 26, 282–292. [Google Scholar] [CrossRef]
81. Kurzweil, R. The Singularity Is Near: When Humans Transcend Biology; Penguin Books: New York, NY, USA, 2006. [Google Scholar]
82. Joy, B. Why the Future Doesn’t Need Us. Wired, April 2000. Available online: https://www.wired.com/2000/04/joy-2/ (accessed on 5 January 2025).
  83. García-Orosa, B.; Canavilhas, J.; Vázquez-Herrero, J. Algoritmos y Comunicación: Revisión Sistematizada de la Literatura. Comunicar 2023, 74, 9–21. [Google Scholar] [CrossRef]
  84. Jandrić, P. On the hyping of scholarly research (with a shout-out to ChatGPT). Postdigital Sci. Educ. 2024, 6, 383–390. [Google Scholar] [CrossRef]