Proceeding Paper

Research on Promotion Strategies for Social Robots †

School of Humanities, Dalian University of Technology, Dalian 116000, China
Presented at the 2023 Summit of the International Society for the Study of Information (IS4SI 2023), Beijing, China, 14–16 August 2023.
Comput. Sci. Math. Forum 2023, 8(1), 57; https://doi.org/10.3390/cmsf2023008057
Published: 11 August 2023
(This article belongs to the Proceedings of 2023 International Summit on the Study of Information)

Abstract

In recent years, “computational propaganda” based on big data, social bot technology, and artificial intelligence has been widely used in a series of major events, such as the US election, Brexit, and the Russia–Ukraine conflict, drawing attention from all walks of life around the world. This paper discusses and analyzes two dimensions of computational propaganda: its propaganda strategies and its technological methods. Based on these discussions and analyses, four tools for regulating computational propaganda are proposed.

1. Introduction

Social media is becoming increasingly important for news dissemination and public opinion worldwide: there were 3.9 billion social media users in 2020, and the number may grow to 5.85 billion by 2027 [1]. As social media proliferates, it is also increasingly shaped by artificial intelligence and big data technologies. Algorithm- and AI-based technologies such as human–computer interaction, natural language recognition, and machine learning have become part of the operation of social media, giving rise to a new mode of propaganda called “computational propaganda”. The impact of this propaganda mode on Chinese and global society is growing.
“Computational propaganda” has recently received considerable attention from the global academic community, and scholars have documented numerous cases. For example, Arnaudo [2] studied the 2014 presidential campaign and the 2016 Rio de Janeiro elections in Brazil and found that almost all candidates used social bots and other technological means to aid their campaign propaganda. Cai and Liu [3] and Ma [4] studied the war propaganda of both Russia and Ukraine on social media during the Russia–Ukraine conflict and found that both sides extensively used computational propaganda tools.
This article analyzes computational propaganda along two dimensions: the technologies it employs and the strategies it applies. Based on these analyses, it identifies two strategies commonly used in “computational propaganda” and proposes four measures for its supervision.

2. Technical Measures Utilized in Computational Propaganda

Based on our analysis, this article identifies three main technical tools of “computational propaganda”:
(1) Social bots: According to the definition of Boshmaf et al. [5], a social bot is a type of software robot deployed on online social platforms. It automatically controls online social media accounts and uses them to perform social behaviors such as sending messages, liking, commenting, and sending follow requests. The defining feature of social bots is that their designers try to make their behavior resemble that of real users as closely as possible, making it difficult for the human users interacting with them to realize that they are dealing with a bot.
(2) Artificial intelligence: In the implementation of “computational propaganda”, artificial intelligence can help propagandists build an accurate profile of each person being targeted. Given knowledge of which news sites and social media platforms a user regularly visits and of their daily activities on these platforms, AI built on machine learning techniques can use large amounts of online data to infer the user's personality, political preferences, religious beliefs, and personal interests. This information is then fed into a model that tailors “computational propaganda” strategies to different users.
Furthermore, unlike social bots, which merely interact with real users, “computational propaganda” combined with artificial intelligence is much more flexible: because the propagandist can track the emotional changes of the targeted person in real time, the propaganda method can be adjusted continuously in response to those emotions.
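To make the profiling idea concrete, the following is a minimal sketch in Python. The dataset, labels, and model choice are illustrative assumptions, not the method of any system described in this paper; a real profiler would need thousands of labeled users and far richer behavioral features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: each entry is one user's concatenated post
# history; labels are coarse political leanings (invented examples).
histories = ["tax cuts and free markets", "public healthcare and unions"]
labels = ["right", "left"]

# Turn post text into TF-IDF features and fit a linear classifier.
profiler = make_pipeline(TfidfVectorizer(), LogisticRegression())
profiler.fit(histories, labels)

# Infer the leaning of a previously unseen user from their posts.
print(profiler.predict(["tax cuts now"]))  # -> ['right'] on this toy data
```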
(3) Big data: Big data technology can transcend the boundaries of space and time in digital form, producing diverse, multi-level, and multi-faceted records of how the propaganda audience and the public opinion environment change over time. Big data is therefore also an important technical foundation for finely segmenting the target audience, which can greatly improve the granularity and accuracy of audience analysis and enhance the accuracy and efficiency of propaganda [6].
The advantages of this big-data-based model of audience analysis are twofold.
Firstly, tracking public opinion with the help of big data models maximizes the capability to identify potential supporters. Public opinion analysis based on traditional methods has a low level of granularity and may overlook people who nominally belong to the opposing camp but actually support the propagandist's position; analysis based on big data models minimizes this possibility.
Secondly, regular surveys based on big data technology can detect changes in the target audience in a timely and sensitive manner, so that campaign teams can flexibly reallocate propaganda resources as those changes occur.
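A minimal sketch of such fine-grained segmentation follows, assuming hypothetical per-user behavioral features (randomly generated stand-ins here) and an arbitrary cluster count; this illustrates the general technique rather than any particular campaign system.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user features, e.g. [daily minutes on the platform,
# share of political posts, engagement rate]; random stand-in values.
rng = np.random.default_rng(0)
features = rng.random((500, 3))

# Partition the audience into fine-grained segments; each segment can
# then be profiled and messaged (or, from a regulator's view, monitored)
# separately.
segments = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(segments))  # audience size per segment
```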

3. Strategies Utilized in Computational Propaganda

According to the analysis and survey in this article, there are two main strategies in computational propaganda. This paper analyzes and summarizes the strategies propagandists use to influence public opinion and their corresponding execution methods, as shown in Table 1.
(1): Collective action of social bots. By operating in organized clusters, social bots can efficiently build public opinion, spread carefully crafted fake news, and “hijack” public opinion at a relatively low expenditure of resources. This strategy is often used in the early stages of “computational propaganda”.
Mark Granovetter [7] proposed a “threshold model” in 1978 that can illustrate the mechanism by which the organized action of a small number of social bots can influence public opinion on a large scale in social media computational propaganda. In this model, the main driver of each individual's behavior is how many others are already participating: an individual joins once the share of participants exceeds his or her personal threshold, and thresholds across a group tend to follow a normal distribution. Only a small proportion of the overall population therefore needs to act in concert to push participation past the lowest thresholds, reach the point of collective action, and trigger a chain reaction.
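The tipping dynamic of the threshold model can be illustrated with a short simulation. This is a minimal sketch; the population size, threshold distribution, and bot shares are arbitrary illustrative parameters, not values from the paper.

```python
import numpy as np

def final_uptake(bot_share, n=100_000, mu=0.30, sigma=0.10, seed=0):
    """Iterate Granovetter's threshold model to a fixed point.

    Each human joins once the active share of the population exceeds
    his or her personal threshold; bots are active from the start.
    Returns the final active share of the population.
    """
    rng = np.random.default_rng(seed)
    thresholds = np.clip(rng.normal(mu, sigma, n), 0.0, 1.0)
    active = bot_share
    while True:
        new_active = bot_share + (1 - bot_share) * np.mean(thresholds < active)
        if new_active - active < 1e-9:
            return new_active
        active = new_active

# A modest change in the size of the coordinated bot cluster decides
# whether the cascade stalls or sweeps the whole population:
print(final_uptake(0.05))  # stalls at a small active share
print(final_uptake(0.10))  # cascades toward the full population
```

With these parameters the cascade is highly nonlinear: a 5% bot seed leaves opinion almost unchanged, while a 10% seed carries nearly the entire simulated population past its thresholds.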
In pursuing the collective action threshold, “computational propagandists” are more likely to reach it by spreading false news than by spreading accurate information. Therefore, propagandists may use artificial intelligence tools such as social bots to disseminate large numbers of carefully crafted fake news stories at this stage.
Figure 1 shows the collective action procedure of social bots in the early stages of computational propaganda.
(2): Forming “echo chambers”. By exploiting people's psychological tendency to use shortcuts and make quick judgments when faced with complex issues, “computational propagandists” can “feed” specific media messages to their audience to create an “echo chamber”, in which social media users with similar attitudes form independent, closed clusters. Once such clusters form, they make it much easier for propagandists to steer their audiences down specific paths of thought. The creation of polarized and divided audience groups facilitates the formation of such echo chambers [8].
This tactic is mainly achieved by using emotional contagion to provoke opposing views. The perpetrators of computational propaganda use disinformation to shape emotions and create antagonism, driving audiences toward extremes and division. To further ensure the effectiveness of this measure, propagandists use big data technology to monitor groups of Internet users, screen for individuals who have not yet been affected by the emotional contagion and moved toward radicalized views, and subject them to targeted emotional indoctrination, thereby expanding the scope of the polarization and division of opinion. In addition, propagandists use big data, combined with the personal profiles of the propagandized, to compute quantifiable trajectories of emotional change and thresholds of change. This allows them to precisely manipulate the internal environment of the small groups shaped by the emotional contagion and keep the polarization and division developing along the propagandists' expected trajectory.
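As a minimal sketch of what a “quantifiable trajectory of emotional change” could mean in practice, the following assumes hypothetical (user, timestamp, sentiment) records, with sentiment scored in [-1, 1] by any off-the-shelf sentiment model; the record format and drift threshold are assumptions for illustration.

```python
def drifting_users(records, drift_threshold=0.5):
    """Flag users whose sentiment has drifted more than a set amount.

    records: iterable of (user_id, timestamp, sentiment) tuples.
    Compares each user's earliest and latest scores, a crude stand-in
    for the trajectory monitoring described above.
    """
    first, last = {}, {}
    for user, _ts, score in sorted(records, key=lambda r: r[1]):
        first.setdefault(user, score)
        last[user] = score
    return [u for u in first if abs(last[u] - first[u]) >= drift_threshold]

# Toy usage: user 'b' swings from mildly positive to strongly negative.
records = [("a", 1, 0.1), ("a", 9, 0.2), ("b", 2, 0.3), ("b", 8, -0.6)]
print(drifting_users(records))  # -> ['b']
```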

4. Possible Methods for Regulating “Computational Propaganda”

“Computational propaganda” has a wide range of negative effects: using emotion to manufacture conflicting views can lead to conflict and division, and using big data to analyze the public may threaten personal privacy. Thus, the control and regulation of “computational propaganda” is a common concern among political, media, and academic circles worldwide. This paper proposes four possible measures for regulating computational propaganda based on the discussion above.
(1): Promoting “psychological vaccination” among the public. As noted above, in “computational propaganda”, especially in its early stages, propagandists use clusters of social bots to manufacture fake news and spread it widely in order to trigger a chain reaction and create a broad public impact.
Van der Linden and Roozenbeek [9] proposed using psychological tools to build public vigilance against fake news on social media, an approach referred to as a “psychological vaccine”. Promoting “psychological inoculation” training among the public would improve people's ability to identify fake news and reduce the likelihood that they spread it when using social media. This, in turn, would make it more difficult for fake-news-spreading “computational propagandists” to reach a collective action threshold and trigger a chain reaction, reducing the probability of public opinion being manipulated.
(2): Supervising and regulating “opinion leaders”. In “computational propaganda”, propagandists may deliberately create polarized and divided opinions and “echo chambers” to ensure the efficiency and stability of their influence. Regulatory forces therefore need to target and curb “echo chambers”. Highly influential users, or “opinion leaders”, may offer a breakthrough in curbing echo chambers: they can gradually shift their own positions while steering opinions within their sphere of influence, eventually motivating a group of followers to take positive action. A number of such activists can in turn reach a collective action threshold of their own, steering opinion toward neutrality and thus breaking the echo chamber.
Therefore, training and supervising high-influence social media users, i.e., “opinion leaders”, and guiding them to maintain moderate, neutral, and rational attitudes, thereby reducing the probability that these users are swayed by “computational propaganda”, may be an effective supervision strategy.
(3): Establishing a mechanism to monitor public opinion and detect chain reactions in a timely manner. In “computational propaganda”, many of the tactics and tools used by propagandists aim to reach the group's collective action threshold as quickly as possible and thereby trigger a chain reaction. Traditional ways of monitoring public opinion may not keep up, given how efficiently social media spreads information and shapes opinion. Regulators can therefore borrow the voter-analysis thinking of “computational propaganda” itself and build a high-granularity big data public opinion monitoring system with “micro-targets” as the basic unit. Such a system can reflect changes in the outside world in a timely manner, monitor public opinion at a high level of granularity, and provide early warning when “computational propaganda” is about to reach the threshold of collective action.
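A minimal sketch of such an early-warning monitor follows, assuming a stream of (topic, post) pairs and an empirically calibrated tipping share; the threshold value, safety margin, and window size are illustrative assumptions rather than derived quantities.

```python
from collections import deque

def early_warnings(stream, tipping_share=0.08, window=1000, margin=0.8):
    """Yield alerts when a topic's share of recent posts nears a
    presumed collective-action tipping point.

    stream: iterable of (topic, post) pairs in arrival order.
    Keeps a rolling window of recent topics and warns once a topic
    reaches `margin` times the tipping share, i.e. before the tip.
    """
    recent = deque(maxlen=window)
    for topic, _post in stream:
        recent.append(topic)
        share = recent.count(topic) / len(recent)
        if share >= margin * tipping_share:
            yield (topic, round(share, 3))
```

Each yielded pair names the surging topic and its current share of recent posts, giving regulators lead time before the presumed tipping point is crossed.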
(4): Identifying social bots. Social bots are essential tools of “computational propaganda”. They can send and retweet messages on a large scale to fabricate public opinion, serve as a channel for the mass distribution of fake news that triggers a chain reaction, and create divisive public opinion and an “echo chamber” effect by sending mass messages with specific content and positions to specific social media groups.
Identifying social bots would help regulators detect, investigate, and deal with “computational propaganda” activities that harm the normal public opinion environment or even endanger public safety. It would also help social media platform operators and the public improve their capability to deal with harmful “computational propaganda” and maintain a regular and stable public opinion environment.
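To illustrate the kind of statistical fingerprint that such identification could rely on, here is a minimal heuristic sketch; the three features, their scalings, and the equal weighting are assumptions for illustration, and real detectors typically use supervised classifiers trained on labeled accounts rather than a hand-tuned score.

```python
def bot_score(posts_per_day, duplicate_ratio, account_age_days):
    """Toy heuristic bot score in [0, 1] from three account features.

    duplicate_ratio: fraction of the account's posts that repeat
    earlier content verbatim. Higher scores suggest automation.
    """
    rate = min(posts_per_day / 100.0, 1.0)            # extreme activity
    youth = 1.0 - min(account_age_days / 365.0, 1.0)  # very new account
    return round((rate + duplicate_ratio + youth) / 3, 2)

# A hyperactive, repetitive, three-week-old account scores high:
print(bot_score(posts_per_day=240, duplicate_ratio=0.9, account_age_days=21))  # -> 0.95
```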

5. Summary

“Computational propaganda” is an application of rapidly developing AI and big data technologies to social media. To promote the understanding, utilization, and regulation of this new propaganda method, this paper has discussed its tools, its strategies, and possible methods for regulating it. As this propaganda method continues to develop rapidly, the key points and regulation methods discussed here may be helpful to propagandists and to governmental public opinion regulators around the world.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Statista. Number of Social Media Users Worldwide from 2017 to 2027. 2023. Available online: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/ (accessed on 23 June 2023).
  2. Arnaudo, D. Computational Propaganda in Brazil: Social Bots during Elections; Woolley, S., Howard, P.N., Eds.; Working Paper 2017(8); Project on Computational Propaganda: Oxford, UK, 2017.
  3. Cai, R.; Liu, Y. From “Twitter Revolution” to “Warlock”: How do social media reform modern wars. Explor. Argum. 2022, 11, 68–78+178.
  4. Ma, L. Comments on algorithm war and cognition war in the Russia–Ukraine conflict. Foreign Commun. 2022, 10, 21–25.
  5. Boshmaf, Y.; Muslukhov, I.; Beznosov, K.; Ripeanu, M. Key Challenges in Defending against Malicious Socialbots. In Proceedings of the 5th USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET 12), San Jose, CA, USA, 24 April 2012.
  6. Zhang, F. Realities and corresponding strategies in propaganda in the big data era. J. Fuzhou Party Sch. 2020, 3, 39–43.
  7. Granovetter, M. Threshold Models of Collective Behavior. Am. J. Sociol. 1978, 83, 1420–1443.
  8. Zhou, J.; Liu, M. Global tendency, spread and influence of computational propaganda. Mod. Commun. 2022, 6, 28–36.
  9. Van der Linden, S.; Roozenbeek, J. Psychological inoculation against fake news. In The Psychology of Fake News: Accepting, Sharing, and Correcting Misinformation; Routledge: London, UK, 2020.
Figure 1. Collective action procedure of social bots in computational propaganda.
Table 1. Strategies utilized in computational propaganda.

| Strategies | Execution Methods | Consequence |
| --- | --- | --- |
| Collective action of social bots | Massively retweet news at the beginning of propaganda; massively manufacture and retweet fake news; frequently interact with opinion leaders | Achieve a collective action threshold as fast as possible, triggering a chain spread amongst the public |
| Trigger and accelerate the fragmentation and polarization of public opinion | Exploit people's tendency to take shortcuts and make quick judgments; “feed” information with specific stances | Increase the efficiency and speed of information spread |
| | Intentionally spread some information from the opposite stance | Maintain the stability of the “echo chamber” |
