1. Introduction
Social media is becoming increasingly important for news dissemination and the formation of public opinion worldwide: there were 3.9 billion social media users in 2020, and this number may grow to 5.85 billion by 2027 [1]. As social media proliferates, it is also increasingly shaped by artificial intelligence and big data technologies. Human–computer interaction, natural language processing, machine learning, and other algorithm- and AI-based technologies have become part of the operation of social media, giving rise to a new mode of propaganda called “computational propaganda”. The impact of this propaganda mode on the society of China and the wider world is growing.
“Computational propaganda” has recently received considerable attention from the global academic community, and scholars have documented numerous cases. For example, Arnaudo [2] studied the 2014 presidential campaign and the 2016 Rio de Janeiro elections in Brazil and found that almost all candidates used social bots and other technological means to aid their campaign propaganda. Cai and Liu [3] and Ma [4] studied the war propaganda of both Russia and Ukraine on social media during the Russia–Ukraine conflict and found that both sides made extensive use of computational propaganda tools.
This article analyzes computational propaganda from the perspectives of the technologies it employs and the strategies through which it is executed. Based on this analysis, it identifies two strategies that may be applied in “computational propaganda” and proposes four measures for its supervision.
2. Technical Measures Utilized in Computational Propaganda
Based on our analysis, this article concludes that there are three main tools of “computational propaganda”:
(1) Social bots: According to Boshmaf’s definition [5], a social bot is a type of robot deployed on online social platforms. It automatically manipulates social media accounts, using them to execute social behaviors such as posting messages, liking, commenting, and sending follow requests. The central feature of social bots is that their designers attempt to make their behavior resemble that of real users as closely as possible, making it harder for the humans interacting with these bots to realize that they are dealing with a machine.
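To make the mechanism concrete, the following is a minimal sketch of the automation loop such a bot might run. The platform endpoints, token, and payload fields are hypothetical placeholders, not any real platform’s API:

```python
import time
import requests

API_BASE = "https://social.example.com/api"  # hypothetical platform endpoint
TOKEN = "SECRET_TOKEN"                        # placeholder credential

TALKING_POINTS = [
    "Candidate X delivers real results!",
    "Don't believe the mainstream coverage of X.",
]

def post_message(text: str) -> None:
    # Publish a status update on behalf of the automated account.
    requests.post(f"{API_BASE}/statuses", json={"text": text},
                  headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)

def like_and_follow(post_id: str, author_id: str) -> None:
    # Mimic ordinary engagement behaviors so the account appears human.
    requests.post(f"{API_BASE}/posts/{post_id}/like",
                  headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
    requests.post(f"{API_BASE}/users/{author_id}/follow",
                  headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)

while True:
    for line in TALKING_POINTS:
        post_message(line)
        time.sleep(1800)  # pacing; real bots randomize this to look human
```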
(2) Artificial intelligence: In the implementation of “computational propaganda”, artificial intelligence can help propagandists build an accurate profile of each person being targeted. Based on knowledge of which news sites and social media platforms a user regularly visits and their daily activities on these platforms, AI built on machine learning techniques can use large amounts of online data to infer the user’s personality, political preferences, religious beliefs, and personal interests. This information is then fed into a model used to tailor “computational propaganda” strategies to different users.
Furthermore, unlike social bots that merely interact with real users, “computational propaganda” combined with artificial intelligence is much more flexible: because propagandists can track the emotional changes of the target audience in real time, they can adjust the propaganda method on the fly in response to those emotions.
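As an illustration of this profiling step, the sketch below trains a simple classifier to infer a political-preference label from behavioral features. The feature set and data are synthetic stand-ins, not a real pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 5_000

# Synthetic behavioral features: [share of partisan outlets visited,
# daily posting frequency, fraction of political posts liked].
X = rng.random((n_users, 3))
# Synthetic ground truth: preference correlates with partisan-outlet share.
y = (X[:, 0] + 0.2 * rng.standard_normal(n_users) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
profiler = LogisticRegression().fit(X_train, y_train)

print(f"held-out accuracy: {profiler.score(X_test, y_test):.2f}")
# In the scenario described above, such inferred labels would feed a model
# that selects which message variant each user receives.
```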
(3) Big data: Big data technology can transcend the boundaries of space and time in digital form, producing diverse, multi-level, and multi-faceted records of how the propaganda audience and the public opinion environment change. It is therefore one of the key technical foundations for finely segmenting the target audience, which can greatly improve the granularity and accuracy of audience analysis and enhance the precision and efficiency of propaganda [6] (a minimal segmentation sketch is given after the advantages listed below).
The advantages of this big-data-based model of audience analysis are twofold:
Firstly, tracking public opinion with the help of big data models maximizes the capability to identify potential supporters. Public opinion analysis based on traditional methods has a low level of granularity and may overlook individuals who nominally belong to the opposing camp but actually support the propagandist’s position; analysis based on big data models minimizes this possibility.
Secondly, regular surveys supported by big data technology can detect changes in the target audience in a timely and sensitive manner, allowing campaign teams to flexibly reallocate propaganda resources in response.
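The segmentation sketch promised above: a minimal example of clustering an audience into fine-grained groups from behavioral features, using k-means as a stand-in for whatever model a real campaign might employ. All features and data are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_users = 20_000

# Synthetic per-user features: [age, daily minutes on platform,
# sentiment toward the candidate in -1..1, share of political content consumed].
features = np.column_stack([
    rng.integers(18, 80, n_users),
    rng.exponential(45, n_users),
    rng.uniform(-1, 1, n_users),
    rng.random(n_users),
])

segments = KMeans(n_clusters=8, n_init=10, random_state=1)
labels = segments.fit_predict(StandardScaler().fit_transform(features))

# Each cluster can then be profiled and messaged separately, e.g.:
for k in range(8):
    group = features[labels == k]
    print(f"segment {k}: {len(group):5d} users, "
          f"mean sentiment {group[:, 2].mean():+.2f}")
```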
3. Strategies Utilized in Computational Propaganda
According to the analysis and survey in this article, there are two principal strategies in computational propaganda. This paper analyzes and summarizes the strategies propagandists use to influence public opinion, together with their corresponding execution methods, as shown in Table 1.
(1): Collective action by social bots. By operating in organized clusters, social bots can efficiently build public opinion, spread carefully crafted fake news, and “hijack” public opinion with a relatively low expenditure of resources. This strategy is often used in the early stages of “computational propaganda”.
Mark Granovetter [7] proposed a “threshold model” in 1978 that can be used to illustrate the mechanism by which organized action by a small number of social bots can influence public opinion at scale in social media computational propaganda. In this model, the main driver of each individual’s behavior is how many others have already been influenced: each person acts once the share of visibly active others exceeds a personal threshold. Thresholds across a group tend to follow a normal distribution, so only a tiny proportion of the overall population needs to act in a coordinated way to push early adopters past their thresholds and trigger a chain reaction of collective action.
In reaching the collective action threshold, “computational propagandists” are more likely to succeed by spreading false news than by spreading accurate information, since fabricated stories can be crafted for maximal emotional impact. Propagandists may therefore use artificial intelligence tools such as social bots to disseminate large numbers of carefully crafted fake news stories at this stage.
Figure 1 shows the collective action procedure of social bots in the early stages of computational propaganda.
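The tipping dynamic can be reproduced in a few lines. The following is a minimal simulation of the threshold model under the stated assumption of normally distributed thresholds; the specific mean, spread, and bot shares are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
N_HUMANS = 10_000

# Each human joins once the share of visibly active accounts exceeds a
# personal threshold; thresholds are roughly normally distributed (Granovetter 1978).
thresholds = np.clip(rng.normal(loc=0.30, scale=0.10, size=N_HUMANS), 0.0, 1.0)

def final_adoption(bot_share: float) -> float:
    """Iterate the cascade until no new humans join; return human adoption rate."""
    active_share = bot_share
    while True:
        joined = np.mean(thresholds <= active_share)        # fraction of humans active
        new_share = bot_share + joined * (1.0 - bot_share)  # bots + converted humans
        if abs(new_share - active_share) < 1e-9:
            return joined
        active_share = new_share

for bots in (0.00, 0.05, 0.10, 0.15):
    print(f"bot share {bots:.0%} -> human adoption {final_adoption(bots):.0%}")
```

With these illustrative parameters, a bot share of 5% converts almost no one, while 10% is enough to tip nearly the whole population, which is the disproportionate leverage the strategy relies on.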
(2): Forming “echo chambers”. By exploiting people’s psychological tendency to use shortcuts to make quick judgments when faced with complex issues, “computational propagandists” can “feed” specific media messages to their audience to create an “echo chamber”, in which social media users with similar attitudes form independent, closed clusters. Once such clusters are formed, they make it much easier for propagandists to induce specific paths of thought in their audiences. The creation of polarized and divided audience groups facilitates the formation of such echo chambers [8].
This tactic is mainly achieved by using emotional contagion to provoke opposing views. The perpetrators of computational propaganda use disinformation to shape emotions and create antagonism, driving audiences toward extremes and division. To further ensure the effectiveness of this measure, propagandists use big data technology to monitor groups of Internet users, screen for individuals who have not yet been affected by the emotional contagion and are not moving toward radicalized views, and subject them to targeted emotional indoctrination, thus expanding the scope of opinion polarization and division. In addition, propagandists use big data to compute quantifiable trajectories of emotional change and thresholds of change, combined with the personal profiles of the targeted users, in order to precisely manipulate the internal environment of the small user groups shaped by the contagion and ensure that polarization and division develop along the propagandists’ expected trajectory.
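To make the monitoring step concrete, here is a minimal sketch of the screening logic described above: per-user sentiment trajectories are scanned for users who have not yet crossed a polarization threshold and show little drift, marking them as candidates for follow-up targeting. The threshold values and synthetic data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_days = 1_000, 30

# Synthetic sentiment trajectories in [-1, 1] toward the out-group
# (more negative = more hostile); rows are users, columns are days.
drift = rng.normal(-0.01, 0.02, (n_users, 1))
sentiment = np.clip(rng.uniform(-0.4, 0.4, (n_users, 1))
                    + drift * np.arange(n_days), -1, 1)

POLARIZED = -0.6   # assumed threshold for a "radicalized" trajectory
MIN_DRIFT = 0.15   # assumed minimum movement over the window

net_change = sentiment[:, -1] - sentiment[:, 0]
radicalized = sentiment[:, -1] <= POLARIZED
unaffected = ~radicalized & (np.abs(net_change) < MIN_DRIFT)

# In the scheme described above, the "unaffected" group would be singled
# out for targeted emotional messaging to widen the polarization.
print(f"radicalized: {radicalized.sum()}, flagged for targeting: {unaffected.sum()}")
```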
4. Possible Methods of Regulating “Computational Propaganda”
“Computational propaganda” has a wide range of negative effects: for example, exploiting emotion to create opposing views can lead to conflict and division, and using big data to analyze the public may threaten personal privacy. The control and regulation of “computational propaganda” is therefore a common concern of scholars and practitioners in politics, media, and academia worldwide. Based on the discussion above, this paper proposes four possible measures for regulating computational propaganda.
(1): Promoting “psychological vaccination” of the public. As discussed above, in “computational propaganda”, especially in its early stages, propagandists use clusters of social bots to manufacture fake news and spread it widely, seeking to trigger a chain reaction and create broad public impact.
Van der Linden and Roozenbeek [9] proposed using psychological tools to develop public vigilance against fake news on social media, an approach referred to as a “psychological vaccine”. Promoting “psychological inoculation” training will help improve the public’s ability to identify fake news and reduce the likelihood of spreading it on social media. This makes it more difficult for propagandists who spread fake news to reach a collective action threshold and trigger a chain reaction, thereby reducing the probability that public opinion is manipulated.
(2): Supervising and regulating “opinion leaders”. In “computational propaganda”, propagandists may deliberately create polarized and divided opinions and “echo chambers” to ensure the efficiency and stability of their influence. Regulatory forces therefore need to target and curb “echo chambers”. Highly influential users, or “opinion leaders”, may be the breakthrough point: by maintaining moderate positions while steering opinions within their sphere of influence, they can gradually motivate a group of followers to take positive action. Enough such activists can eventually reach a collective action threshold of their own, steering opinion toward neutrality and thus breaking the echo chamber.
Therefore, training and supervising high-influence social media users, i.e., “opinion leaders”, and guiding them to maintain moderate, neutral, and rational attitudes, thereby reducing the probability that these users are co-opted by “computational propaganda”, may be an effective supervision strategy.
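As a sketch of how regulators might locate such opinion leaders, the example below ranks accounts by PageRank centrality on a follower graph. The graph here is randomly generated, and the centrality measure and cutoff are illustrative choices rather than a prescribed method:

```python
import networkx as nx

# Synthetic directed follower graph: an edge u -> v means "u follows v",
# so rank flows toward accounts with many well-connected followers.
graph = nx.gnp_random_graph(2_000, 0.005, directed=True, seed=3)

influence = nx.pagerank(graph, alpha=0.85)

# Treat the top 1% most central accounts as candidate "opinion leaders"
# to prioritize for outreach, training, and supervision.
cutoff = int(len(influence) * 0.01)
leaders = sorted(influence, key=influence.get, reverse=True)[:cutoff]
print("candidate opinion leaders:", leaders)
```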
(3): Establishing a mechanism to monitor public opinion and detect opinion chain reactions in a timely manner. In “computational propaganda”, many of the tactics and tools used by propagandists aim to reach a group’s collective action threshold as quickly as possible and thereby trigger a chain reaction. Traditional ways of monitoring public opinion may not keep up, given how efficiently social media spreads information and shapes opinion. Regulators can therefore borrow the thinking behind big-data voter analysis in “computational propaganda” and build a high-granularity big data monitoring system with “micro-targets” as its basic unit. Such a system could track public opinion at a fine granularity, respond promptly to outside changes, and provide early warning when “computational propaganda” is about to reach the threshold of collective action.
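The following is a minimal sketch of such an early-warning check, assuming the monitor already holds an estimate of the population’s collective-action threshold (e.g., fitted from historical cascades); the estimate, margin, and data feed are hypothetical:

```python
from collections import deque

class EarlyWarningMonitor:
    """Rolling monitor that raises an alert when the share of accounts
    endorsing a tracked narrative approaches an estimated tipping point."""

    def __init__(self, est_threshold: float = 0.30, margin: float = 0.05,
                 window: int = 24):
        self.est_threshold = est_threshold   # assumed collective-action threshold
        self.margin = margin                 # alert this far below the threshold
        self.history = deque(maxlen=window)  # rolling hourly endorsement shares

    def update(self, endorsing: int, active: int) -> bool:
        """Feed one hourly observation; return True if an alert should fire."""
        share = endorsing / max(active, 1)
        self.history.append(share)
        rising = len(self.history) >= 2 and self.history[-1] > self.history[0]
        return rising and share >= self.est_threshold - self.margin

# Hypothetical hourly feed: (accounts endorsing the narrative, active accounts).
monitor = EarlyWarningMonitor()
for endorsing, active in [(120, 1000), (180, 1000), (260, 1000)]:
    if monitor.update(endorsing, active):
        print(f"early warning: endorsement share {endorsing / active:.0%} "
              f"is nearing the estimated tipping point")
```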
(4): Identifying and monitoring social bots. Social bots are essential tools of “computational propaganda”. They can send and retweet messages on a large scale to manufacture a false impression of public opinion, serve as a vehicle for the mass distribution of fake news that triggers a chain reaction, and create divisive public opinion and an “echo chamber” effect by sending mass messages with specific content and positions to specific social media groups.
Identifying social bots would, on the one hand, help regulators detect, investigate, and deal with “computational propaganda” activities that damage the normal public opinion environment or even endanger public safety; on the other hand, it would help social media platform operators and the public improve their capability to cope with harmful “computational propaganda” and maintain a regular and stable public opinion environment.
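A minimal sketch of one common identification approach follows: a supervised classifier over simple account-level behavioral features. The features, labels, and data here are synthetic placeholders; production systems (e.g., academic tools such as Botometer) use far richer signals:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_accounts = 8_000

# Synthetic account features: [posts per hour, follower/following ratio,
# share of duplicated text across posts, account age in days].
X = np.column_stack([
    rng.exponential(0.5, n_accounts),
    rng.exponential(1.0, n_accounts),
    rng.random(n_accounts),
    rng.integers(1, 3_000, n_accounts),
])
# Synthetic labels: bots post faster and repeat themselves more.
y = ((X[:, 0] > 1.0) & (X[:, 2] > 0.6)).astype(int)

detector = RandomForestClassifier(n_estimators=200, random_state=4)
scores = cross_val_score(detector, X, y, cv=5)
print(f"cross-validated detection accuracy: {scores.mean():.2f}")
```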
5. Summary
“Computational propaganda” is an application of rapidly developing AI and big data technologies to propaganda on social media. To promote the understanding, utilization, and regulation of this new propaganda method, this paper has discussed its tools, its strategies, and possible methods for its regulation. As this propaganda method continues to develop rapidly, the key points identified here for executing and regulating computational propaganda may be helpful both to propagandists and to governmental public opinion regulators around the world.