Article

Dynamic Opinion Formation in Networks: A Multi-Issue and Evidence-Based Approach

Institute for Pedagogical Innovation, Research and Excellence, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
Computers 2024, 13(8), 190; https://doi.org/10.3390/computers13080190
Submission received: 7 June 2024 / Revised: 23 July 2024 / Accepted: 6 August 2024 / Published: 7 August 2024
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)

Abstract

In this study, we present a computational model for simulating opinion dynamics within social networks, incorporating cognitive and social psychological principles such as homophily, confirmation bias, and selective exposure. We enhance our model using Dempster–Shafer theory to address uncertainties in belief updating. We develop the mathematical formalism and perform simulations to derive empirical results showcasing how this method can be useful for modeling real-world opinion consensus and fragmentation. By constructing a scale-free network, we assign initial opinions and iteratively update them based on neighbor influences and belief masses. Lastly, we examine how the presence of “truth” nodes with high connectivity, used to simulate the influence of objective truth on the network, alters opinions. Our simulations reveal insights into the formation of opinion clusters, the role of cognitive biases, and the impact of uncertainty on belief evolution, providing a robust framework for understanding complex opinion dynamics in social systems.

1. Introduction

Computational social science is an interdisciplinary field that utilizes computational methods and data analysis to study social phenomena and human behavior. Agent-based modeling in computational social science can uncover patterns, predict trends, and simulate social processes. When integrated with statistical models, it enhances our understanding of complex social dynamics, a goal of social physics [1,2,3]. Social physics covers a wide range of topics, such as the spread of information or misinformation through social networks, much like the diffusion of particles in a medium [4]. It also examines patterns of human mobility and crowd dynamics to improve urban planning and public safety [5]. Another application is understanding collective behavior, where reputation models in decision-making can help explain social phenomena [6]. By leveraging big data and advanced analytics, social physics offers a quantitative approach to solving complex social problems. In this article, we focus on a specific domain in the study of social physics—opinion dynamics. The dynamics of opinion formation within social networks have garnered significant attention due to their applicable impact on public discourse and societal cohesion [7]. Social networks serve as critical platforms for information dissemination and exchange, facilitating connectivity and synchronicity among individuals and influencing public opinion in complex and multifaceted ways [8].
The simulation of opinion dynamics within these networks is crucial for understanding how opinions evolve and spread, providing insights into polarization and consensus formation [9,10]. By modeling these processes, researchers can predict potential outcomes of information dissemination, identify factors that contribute to the stability or volatility of public opinion, and develop strategies to mitigate the negative impacts of misinformation and polarization. Online communities allow for the rapid spread of information and ideas, connecting people across diverse geographical locations and cultural backgrounds. This interconnectedness can lead to the swift formation of public sentiment on various issues, shaping societal norms and influencing political, economic, and social landscapes. However, online social settings can also lead to the formation of echo chambers, in which algorithms, through selective exposure, feed viewers more of the kind of material they already engage with [11,12].
Key social phenomena, such as homophily, bias, influence, and selective exposure, are also integral to studying opinion dynamics. Homophily, the tendency of individuals to associate and bond with similar others, often leads to the formation of echo chambers, closed loops where information that aligns with existing beliefs is amplified while dissenting views are minimized [13,14]. This phenomenon is exacerbated by selective exposure, where individuals preferentially seek information that confirms their preexisting beliefs and avoid information that challenges them. Biases in information processing and social influence, where individuals’ opinions are swayed by those they interact with, further reinforce these echo chambers [15,16]. For example, social media algorithms are designed to keep users “hooked” on the platform by learning their browsing and trace history and recommending content of the same form and topic, potentially reinforcing the echo chamber. This selective exposure can significantly influence the convergence or divergence of opinions within social networks, reinforcing existing biases and potentially leading to increased polarization [17,18]. Such polarization can have profound implications for social cohesion, as it can create deep divides between different groups within society, making it challenging to reach a consensus on critical issues. Because selective exposure plays such a crucial role in the formation and persistence of echo chambers [19], understanding the mechanisms behind these dynamics is essential for addressing their challenges, especially in the context of misinformation and truth propagation.
One commonly adopted opinion dynamics model is the bounded confidence model first proposed by Hegselmann and Krause [20]. The bounded confidence model is a framework to study how individual opinions evolve within a population. Each individual updates their opinion by considering only those opinions within a confidence interval of their own opinions.
Mathematically, let $x_i(t)$ denote the opinion of individual $i$ at time $t$, where $x_i(t) \in [-1, 1]$, $\forall t$. Each $i$ has a confidence interval $\epsilon_i$, such that $i$ only considers the opinion of agent $j$ if
$$| x_j(t) - x_i(t) | \leq \epsilon_i . \quad (1)$$
Consequently, the opinion of $i$ at $t + 1$ is updated based on the average opinions of all individuals whose opinions lie within $i$'s neighborhood of acceptance, $I(i, \mathbf{x})$, where
$$I(i, \mathbf{x}) := \{ j : | x_j - x_i | \leq \epsilon_i \}, \quad (2)$$
and the updated opinion can be expressed as
$$x_i(t + 1) = | I(i, \mathbf{x}) |^{-1} \sum_{j \in I(i, \mathbf{x})} x_j(t). \quad (3)$$
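For concreteness, a minimal NumPy sketch of the update in Equations (1)–(3) is given below; the function name hk_update, the synchronous update scheme, and the parameter values are illustrative choices rather than a reproduction of any published implementation.

```python
import numpy as np

def hk_update(x, eps):
    """One synchronous Hegselmann-Krause update, following Eqs. (1)-(3).

    x   : array of shape (n,), opinions x_i(t) in [-1, 1]
    eps : array of shape (n,), confidence intervals eps_i
    """
    x_new = np.empty_like(x)
    for i in range(len(x)):
        in_range = np.abs(x - x[i]) <= eps[i]   # I(i, x); always contains i itself
        x_new[i] = x[in_range].mean()           # average over the neighborhood
    return x_new

# Illustrative run: 50 agents with a shared confidence interval of 0.2
rng = np.random.default_rng(0)
opinions = rng.uniform(-1, 1, size=50)
for _ in range(30):
    opinions = hk_update(opinions, np.full(50, 0.2))
```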
Tan and Cheong extended this work to include multi-issue consensus [21] by introducing an issue index $k$, so that the opinions are now defined as $x_k^i(t)$. Tan and Cheong further specified three interaction instances, where the opinions are updated based on the dynamics of the modified update rule defined by a new neighborhood of acceptance. Exclusivist interaction represents one extreme where an agent considers the opinion of another only if it is sufficiently close in all dimensions. In this model, an agent will only take into account opinions that fall within a specific range for every issue simultaneously. This exclusivity leads to a narrow scope of accepted viewpoints, thereby reducing the chances for multi-issue consensus. Inclusivist interaction is the opposite extreme, where an agent considers the opinion of another if it is sufficiently close in any one dimension. Here, even if two agents disagree on most issues, they can still influence each other's opinions if they share a similar view on at least one issue. This inclusivity allows for a broader acceptance of diverse viewpoints and promotes a greater potential for consensus. However, neither the exclusivist nor the inclusivist model is entirely realistic. It is more reasonable to assume that agreement on one issue will only partially encourage agreement on another issue. To that end, Tan and Cheong defined a model that interpolates between the two extremes, called general interaction:
$$I_k(i, \mathbf{x}) := \{ j : | x_k^j - x_k^i | \leq \epsilon_k \ \text{and} \ | x_l^j - x_l^i | \leq \alpha_k \epsilon_l , \ \forall l \neq k \}, \quad (4)$$
and
$$x^i(t + 1) = | I(i, \mathbf{x}) |^{-1} \sum_{j \in I(i, \mathbf{x})} x^j(t), \quad (5)$$
where $I(i, \mathbf{x}) = \bigcup_{k=1}^{m} I_k(i, \mathbf{x})$. $\alpha_k$ is a factor that is termed the degree of inclusivity. What these equations say is that an agent $i$ will consider the opinion of another agent $j$ if, for any issue $k$, $x^j$ is within the latitude of acceptance $\epsilon_k^i$ of $x^i$ and, for every other issue $l \neq k$, also within an expanded latitude of acceptance of $\alpha_k \epsilon_l^i$. The idea is that as long as agent $i$ agrees sufficiently with agent $j$ on one issue $k$, they are more willing to hear each other out on all other issues $l \neq k$. $\alpha_k$ is the parameter that describes how much more willing the agents are to consider each other's opinions. If we set $\alpha_k = 1, \forall k$, we obtain the exclusivist model, and if $\alpha_k \to \infty, \forall k$, we obtain the inclusivist model of opinion interaction. Figure 1 illustrates the results from this model for the two extremes and the general interaction.
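A sketch of how the general-interaction neighborhood of Equations (4) and (5) can be evaluated is shown below, again in NumPy; setting every $\alpha_k = 1$ recovers the exclusivist case, while very large $\alpha_k$ approximates the inclusivist case. The function names are illustrative.

```python
import numpy as np

def general_neighbourhood(X, i, eps, alpha):
    """Agents j in I(i, x) under the general interaction of Eqs. (4) and (5).

    X     : array (n, m) of opinions, X[j, k] = x_k^j
    eps   : array (m,) of per-issue latitudes of acceptance eps_k
    alpha : array (m,) of per-issue degrees of inclusivity alpha_k (>= 1)
    """
    diff = np.abs(X - X[i])                       # |x_k^j - x_k^i| for all j and k
    accepted = set()
    for k in range(X.shape[1]):
        strict = diff[:, k] <= eps[k]             # sufficiently close on issue k
        expanded = np.all(np.delete(diff, k, axis=1) <= alpha[k] * np.delete(eps, k),
                          axis=1)                 # within the expanded latitude elsewhere
        accepted |= set(np.where(strict & expanded)[0])   # union over issues k
    return sorted(accepted)

def general_update(X, eps, alpha):
    """Synchronous update: each agent averages over its accepted neighbourhood, Eq. (5)."""
    return np.array([X[general_neighbourhood(X, i, eps, alpha)].mean(axis=0)
                     for i in range(X.shape[0])])
```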
While these works have deepened our theoretical understanding of social interaction and opinion dynamics, there remain research gaps for potential further investigation. Two such gaps are the homogeneity of agent behavior and the simplified interactions based entirely on a static network. Firstly, the network does not evolve in response to interaction behaviors, which does not reflect reality. Real-life changes in social interaction are influenced by homophily, belief, and influence, which are not fully captured in their models. Secondly, all agents behave the same way, with a uniform $\alpha$ for all agents across all issues. In reality, agents have varied tolerance thresholds, differing levels of influence, and personal biases. Critically, uncertainty plays a significant role in social interactions, influencing behaviors, communication patterns, decision-making processes, and the formation of social networks. Understanding how uncertainty impacts social dynamics can help analyze and predict social phenomena. The Dempster–Shafer theory (DST), also known as the Theory of Evidence, is a mathematical framework for modeling epistemic uncertainty [22,23]. Unlike traditional probability theory, DST represents uncertainty without requiring precise probability assignments. Mathematically, DST is described with a frame of discernment $\Theta$, a finite set of mutually exclusive and exhaustive hypotheses representing all possible states. The Basic Probability Assignment (BPA), denoted by $m$, assigns a probability mass to each subset $A$ of $\Theta$ such that $m : 2^{\Theta} \to [0, 1]$, satisfying
$$\sum_{A \subseteq \Theta} m(A) = 1, \quad \text{where } m(\emptyset) = 0. \quad (6)$$
The belief function $\mathrm{Bel}$ for a subset $A \subseteq \Theta$ is defined as the sum of the masses of all subsets $B$ that are contained within $A$, given by
$$\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B). \quad (7)$$
Consequently, Dempster's rule of combination combines two independent sets of evidence represented by the BPAs $m_1$ and $m_2$ [24]. The combined BPA $m$ is computed as
$$m(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B) \, m_2(C), \quad (8)$$
where $K$ is a normalization factor given by
$$K = \sum_{B \cap C = \emptyset} m_1(B) \, m_2(C). \quad (9)$$
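To make the combination rule concrete, the following Python sketch combines two BPAs over a small illustrative frame of discernment using Equations (6)–(9); the dictionary-of-frozensets representation and the mass values are assumptions of this sketch, not a prescribed data structure.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination, Eqs. (8) and (9), for two BPAs.

    m1, m2 : dicts mapping frozenset subsets of the frame Theta to masses
             that sum to 1, with no mass on the empty set.
    """
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mB * mC
        else:
            conflict += mB * mC                 # contributes to K
    if conflict >= 1.0:
        raise ValueError("Total conflict (K = 1): the combination is undefined.")
    return {A: mass / (1.0 - conflict) for A, mass in combined.items()}

def belief(m, A):
    """Bel(A): total mass of all subsets B contained in A, Eq. (7)."""
    return sum(mass for B, mass in m.items() if B <= A)

# Illustrative frame with two hypotheses about an issue
Theta = frozenset({"agree", "disagree"})
m1 = {frozenset({"agree"}): 0.6, Theta: 0.4}
m2 = {frozenset({"agree"}): 0.3, frozenset({"disagree"}): 0.5, Theta: 0.2}
print(belief(dempster_combine(m1, m2), frozenset({"agree"})))   # 0.6
```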
While the use of other models to describe belief has been adopted in the context of opinion dynamics [25], and DST has been used to model decision-making [26], DST has yet to be used to handle uncertainty and combine multiple sources of evidence for opinion formation using the bounded confidence model.
Lastly, “truth” often refers to an objective reality or a widely accepted fact that individuals in a society aim to converge upon. The concept of truth in social networks is pivotal, as it influences how information spreads, how consensus is formed, and how public opinion evolves. Researchers have employed various models to study how opinions converge towards the truth or diverge from it due to the influence of different factors, including social pressure, misinformation, and the small presence of influential agents [27,28,29]. Furthermore, it is known that truth is perceived and formed within social groups and networks, leading to democratic faultlines and subgroup polarization [30]. However, balancing subgroup consensus and global diversity can lead to a dynamic and evolving perception of truth [31]. Collectively, these studies underscore the fact that a small number of resolute individuals (analogous to truths) profoundly influence the collective opinions of larger groups. Consequently, further research is needed to explore the role of truths in complex opinion dynamics systems, especially in the case of shifting evidence and beliefs.
This article attempts to introduce a robust model of opinion dynamics that addresses these research gaps. The rest of the article describes the formalism of our opinion dynamics model in Section 2. We then study the general behavior of such opinion dynamics using three interaction types and conduct computational experiments to show how different types of truths affect the convergence and divergence of opinions in Section 3. Finally, we conclude in Section 4.

2. Formalism and Methods

2.1. Network Formalism

We consider a population $N = \{1, 2, \ldots, n\}$ of interacting agents belonging to a social group $G = (N, E)$, where $E$ is the set of all edges in the network. Each $i \in N$ is a node in the network and is connected to a subset $M^i \subseteq N \setminus \{i\}$ by undirected weighted edges $E_{ij} = ((i, j), w_{ij})$. Associated with each agent $i$ is a vector of opinions $\mathbf{x}^i$ of dimension $p$. The $k$-th component of the vector is denoted by $x_k^i$. For each issue index $k$, we also define the population opinion vector $\mathbf{x}_k := (x_k^1, x_k^2, \ldots, x_k^n)$. Similarly, belief masses are assigned to agents, where $m_k^i$ denotes the belief mass of agent $i$ regarding issue $k$. The belief mass represents the degree of confidence agent $i$ assigns to its opinion on issue $k$.

2.2. Opinion Dynamics Formalism

To expand the model under the general interaction, we first define an agent-wise partition on the set of issues, $K$. For each agent $i$, the set of issues can be partitioned as $K = P^i \cup \overline{P^i}$, where $P^i$ is the set of issues which $i$ considers under the strict latitude of acceptance. The modified neighborhood of acceptance is
$$I_k(i, \mathbf{x}) := \{ j \in M^i : | x_k^j - x_k^i | \leq \epsilon_k^i \ \text{and} \ | x_l^j - x_l^i | \leq \alpha_l^i \epsilon_l^i, \ \forall l \neq k \}, \quad k \in P^i, \quad (10)$$
where $\alpha_l^i > 1$ is the modified degree of inclusivity, which gives us the expanded latitude of acceptance on the remaining issues. This expanded model not only allows each individual to have a different set of issues considered under the strict latitude of acceptance, but each issue can also have a different degree of inclusivity. This is a significant generalization of the general interaction previously introduced in Equation (4).
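A sketch of the modified neighborhood of acceptance in Equation (10) is given below; how issues outside $P^i$ are treated when they are the focal issue $k$ is left implicit in the text, so the sketch conservatively returns an empty neighborhood in that case. Function and variable names are illustrative.

```python
import numpy as np

def modified_neighbourhood(X, i, k, eps_i, alpha_i, strict_issues_i, neighbours_i):
    """I_k(i, x) of Eq. (10): neighbours acceptable to agent i on strict issue k.

    X               : array (n, m) of opinions, X[j, l] = x_l^j
    eps_i           : array (m,) of agent i's latitudes of acceptance eps_l^i
    alpha_i         : array (m,) of agent i's degrees of inclusivity alpha_l^i (> 1)
    strict_issues_i : set P^i of issues agent i treats under the strict latitude
    neighbours_i    : iterable M^i of agents connected to i in the network
    """
    if k not in strict_issues_i:
        return []                       # sketch assumption: no acceptance outside P^i
    accepted = []
    for j in neighbours_i:
        diff = np.abs(X[j] - X[i])
        strict_ok = diff[k] <= eps_i[k]
        expanded_ok = all(diff[l] <= alpha_i[l] * eps_i[l]
                          for l in range(X.shape[1]) if l != k)
        if strict_ok and expanded_ok:
            accepted.append(j)
    return accepted
```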
Next, to update agent $i$'s opinion based on the influence of other connected agents, we consider intrinsic and extrinsic factors. First, for the intrinsic factor, each agent has a bias parameter $\beta \in [0, 1]$. Agent $i$'s bias parameter $\beta^i$ indicates how strongly the agent weights its neighbors' opinions relative to its own: an agent with a high $\beta$ gravitates towards its neighbors' opinions, while a low $\beta$ places greater weight on the agent's own opinion. This allows us to calculate the intrinsic influence for agent $i$ on issue $k$:
$$J_k^i = (1 - \beta^i) \, x_k^i(t) + \beta^i \, x_k^j(t), \quad \text{if } j \in I_k(i, \mathbf{x}). \quad (11)$$
The extrinsic factor is governed by the weight of the connection between $i$ and each neighboring agent $j$, $w_{ij}(t)$, and the belief mass of neighbor $j$ regarding issue $k$, $m_k^j$. The total combined influence is therefore
$$A_k^i = \frac{1}{W} \sum_{j \in I_k(i, \mathbf{x})} w_{ij} \times m_k^j \times J_k^i, \quad (12)$$
where $W$ is the normalizing factor for the extrinsic factors, given by
$$W = \sum_{j \in I_k(i, \mathbf{x})} w_{ij} \times m_k^j. \quad (13)$$
Consequently, the new opinion of agent $i$ on issue $k$ accounts for how the agent responds to the influence of others, which is also a factor in opinion evolution. The parameter $h \in [0, 1]$, termed homophily in social settings, indicates the tendency of agents to be influenced by others similar to them. Thus, an agent $i$ with a low $h^i$ makes minimal adjustments based on its neighbors' opinions. The updated opinion evolution is thus
$$x_k^i(t + 1) = x_k^i(t) + h^i \left( A_k^i - x_k^i(t) \right). \quad (14)$$
The scaling of the difference between the overall influence and the agent's current opinion by $h^i$ ensures that agents with higher homophily are more likely to conform to the opinions of their social circle.
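Putting Equations (11)–(14) together, a per-agent, per-issue update might look as follows in Python, reusing the modified_neighbourhood sketch above and assuming a NetworkX graph whose edges carry a "weight" attribute; array shapes and names are illustrative.

```python
def update_opinion(G, X, M, i, k, beta, h, eps, alpha, strict_issues):
    """One update of x_k^i following Eqs. (11)-(14).

    G             : weighted networkx graph (edge attribute 'weight' holds w_ij)
    X, M          : arrays (n, m) of opinions x_k^i and belief masses m_k^i
    beta, h       : arrays (n,) of bias and homophily parameters
    eps, alpha    : arrays (n, m) of latitudes of acceptance and inclusivity factors
    strict_issues : list of sets, strict_issues[i] = P^i
    """
    neighbours = modified_neighbourhood(
        X, i, k, eps[i], alpha[i], strict_issues[i], list(G.neighbors(i)))
    if not neighbours:
        return X[i, k]                                    # no acceptable neighbour: opinion unchanged
    numerator, W = 0.0, 0.0
    for j in neighbours:
        J = (1 - beta[i]) * X[i, k] + beta[i] * X[j, k]   # intrinsic influence, Eq. (11)
        w = G[i][j]["weight"] * M[j, k]                   # extrinsic weight w_ij * m_k^j
        numerator += w * J
        W += w                                            # normalizing factor, Eq. (13)
    A = numerator / W                                     # combined influence, Eq. (12)
    return X[i, k] + h[i] * (A - X[i, k])                 # homophily-scaled shift, Eq. (14)
```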
Finally, to update agent $i$'s belief mass, the beliefs from neighboring agents $j$ are combined using Dempster's rule of combination, given by Equation (8). To reinforce the echo chamber effect, we incorporate a mechanism of selective exposure that adjusts the strength of connections between nodes based on opinion similarity. Specifically, for each agent $i$, if the opinion of a neighbor $j$ is sufficiently close to that of agent $i$, i.e., within the threshold $\epsilon_k^i$, we strengthen the connection by increasing the edge weight by 1. Conversely, if the opinions are dissimilar, we weaken the connection by decreasing the edge weight by 1, ensuring that the edge weight does not drop below 1 so that no edges are removed from the original social network. This is given by
$$w_{ij}(t + 1) = \begin{cases} w_{ij}(t) + 1 & \text{if } | x_k^j(t) - x_k^i(t) | \leq \epsilon_k^i, \\ \max\left( w_{ij}(t) - 1, \, 1 \right) & \text{otherwise}. \end{cases} \quad (15)$$
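The selective-exposure adjustment of Equation (15) then amounts to a small in-place update of the edge weights around agent $i$, sketched below under the same assumptions as the previous snippet.

```python
def update_edge_weights(G, X, i, k, eps):
    """Selective-exposure adjustment of the weights on agent i's edges, Eq. (15)."""
    for j in G.neighbors(i):
        if abs(X[j, k] - X[i, k]) <= eps[i, k]:
            G[i][j]["weight"] += 1                             # reinforce a like-minded neighbour
        else:
            G[i][j]["weight"] = max(G[i][j]["weight"] - 1, 1)  # weaken, but never below 1
```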

2.3. Modeling Truth

The “truth” can be incorporated into this model by introducing truth node(s) T. We will model two types of truths in this work, as sketched in the code example after the list below:
  • Exoteric Truth: This type of truth is general and accessible to everyone. It is meant for the public and is openly shared and disseminated. Exoteric truths are designed to be easily understood and widely accepted by the general populace. This type of truth is characterized by its high degree of connection and influence over a wide population. To model this, the node T is connected to every existing node in the network with a very high initial edge weight to ensure that T acts as an initial dominant source of influence. The other nodes' opinions are continually pulled towards the truth T, given by $x_k^T$, simulating the concept of convergence towards a truth; $x_k^T$ is a constant in this case.
  • Privileged Truth: This type of truth is initially accessible only to a select group of people who have privileged information. Although it is not meant to be a secret, it is not immediately available to the general public. Over time, this information might spread and become more widely known. To model this, the node T is connected to the neighbors of the node with the highest degree in the network with a very high initial edge weight to ensure that T acts as an initial dominant source of influence. The other nodes' opinions are pulled indirectly towards the truth T, given by $x_k^T$, a constant in time.
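The sketch below illustrates one way to attach such a truth node with NetworkX; the choice of 1000 as a "very high" initial edge weight and the node label T are illustrative assumptions of this sketch.

```python
import networkx as nx

def add_truth_node(G, kind, x_truth, strong=True, label="T"):
    """Attach a truth node T to the network, following Section 2.3.

    kind    : "exoteric"   -> connect T to every agent in the network
              "privileged" -> connect T only to the neighbours of the highest-degree node
    x_truth : constant truth opinion vector x^T, stored as a node attribute
    strong  : if True, use a very high initial edge weight (1000 here, an illustrative value)
    """
    weight = 1000 if strong else 1
    G.add_node(label, opinion=x_truth)
    if kind == "exoteric":
        targets = [v for v in G.nodes if v != label]
    else:
        hub = max((v for v in G.nodes if v != label), key=G.degree)
        targets = [v for v in G.neighbors(hub) if v != label]
    for v in targets:
        G.add_edge(label, v, weight=weight)
    return G
```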

2.4. Simulations

To investigate the dynamics of opinion formation and the influence of echo chambers in social networks, we employ Python for our simulations, utilizing the NetworkX package to construct and analyze the networks. The pseudo-codes are found in the Supplementary Information. Our simulations encompass three distinct experiments, as follows (an illustrative top-level simulation loop is sketched after the list):
  • The behavior of opinion dynamics in a network for opinion evolution with DST;
  • The comparison between presence and various types of truth and how they alter the dynamics of opinion evolution;
  • The strength and pervasiveness of truth affecting opinion consensus and fragmentation.
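An illustrative top-level loop tying the Section 2 update rules together is sketched below; it assumes integer node labels, the helper functions sketched earlier, and omits the belief-mass combination step for brevity. It is a sketch of the simulation flow, not a reproduction of the pseudo-code in the Supplementary Information.

```python
def simulate(G, X, M, beta, h, eps, alpha, strict_issues, steps=10_000):
    """Illustrative synchronous simulation loop (node labels assumed to be 0..n-1)."""
    n_issues = X.shape[1]
    for _ in range(steps):
        X_new = X.copy()
        for i in G.nodes:
            for k in range(n_issues):
                X_new[i, k] = update_opinion(G, X, M, i, k, beta, h,
                                             eps, alpha, strict_issues)
                update_edge_weights(G, X, i, k, eps)
            # Belief masses of agent i would be combined with its neighbours'
            # via Dempster's rule (dempster_combine above); omitted for brevity.
        X = X_new
    return X
```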

3. Results and Discussion

In our simulations, we use the parameters and randomized domains found in Table 1. The network used is generated by the NetworkX function
barabasi_albert_graph(n=50, m=3, initial_graph=H),
where H = erdos_renyi_graph(n=50, p=0.1) [32,33]. While the scale-free network is chosen for simulation in this work, the method introduced here can be applied to different network topologies. Many real-world networks naturally exhibit scale-free properties; examples include the Internet, social networks, and biological networks. Leveraging a scale-free structure therefore aligns well with the inherent topology of these systems and supports the potential generalizability of the results presented in this work to social dynamics settings.
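The corresponding network construction and initialization of the quantities in Table 1 can be written as follows; the random seeds and the choice of five issues are illustrative assumptions of this sketch.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(42)                     # seed is an arbitrary illustrative choice

# Seed graph H, then grow the scale-free Barabasi-Albert network on top of it
H = nx.erdos_renyi_graph(n=50, p=0.1, seed=42)
G = nx.barabasi_albert_graph(n=50, m=3, initial_graph=H, seed=42)
nx.set_edge_attributes(G, 1, "weight")              # w_ij(0) = 1

# Initial conditions drawn from the distributions in Table 1 (here |K| = 5 issues)
n_agents, n_issues = G.number_of_nodes(), 5
X = rng.uniform(-1.0, 1.0, size=(n_agents, n_issues))    # opinions x_k^i(0)
M = rng.uniform(0.5, 1.0, size=(n_agents, n_issues))     # belief masses m_k^i(0)
h = rng.uniform(0.0, 1.0, size=n_agents)                 # homophily parameters h^i
beta = rng.uniform(0.1, 0.3, size=n_agents)              # bias parameters beta^i
eps = rng.uniform(0.1, 0.25, size=(n_agents, n_issues))  # latitudes of acceptance eps_k^i
alpha = rng.uniform(1.5, 3.0, size=(n_agents, n_issues)) # inclusivity factors alpha_l^i
```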

3.1. Interaction Outcomes

Firstly, we want to investigate how the number of issues $|K|$ affects opinion consensus. Figure 2 shows the outcome of simulating interactions on the scale-free Barabási–Albert (BA) network, a common network topology in social settings. In all simulations, the number of agents in the network is 50, averaged over 1000 experiments, for $|K| \in \{2, 5, 10\}$ issues, with each agent having a varying number of issues considered under the extended latitude of acceptance.
The multi-issue simulation demonstrates that as the number of issues under consideration increases, the potential for a dominant consensus to form also rises. This phenomenon occurs because including multiple issues allows for a broader range of perspectives and preferences to be integrated into the decision-making process. Consequently, individuals and groups are more likely to find common ground across various issues, leading to the emergence of a dominant consensus. This consensus reflects a more comprehensive agreement that accommodates the diverse interests and priorities of the individuals. Furthermore, the simulation indicates that the dynamics of negotiation and compromise become more intricate with multiple issues at play, ultimately fostering a more inclusive and representative outcome.
However, we further note that when the number of issues under consideration becomes too plentiful, it can decrease the potential or delay the formation of a dominant consensus, as seen when comparing $|K| = 5$ and $|K| = 10$. This phenomenon occurs due to several factors. Firstly, the complexity of discussions can increase significantly with more issues, making it difficult for individuals to focus on and thoroughly understand each other. This complexity overload can lead to fragmented conversations where no single issue receives the attention needed for a strong consensus to emerge. Additionally, with more issues, there is a higher likelihood of divergent interests and opinions. Individuals may prioritize different issues based on their personal interests, leading to conflicts and disagreements rather than a unified agreement.
This is one of the key factors for employing Focus Group Discussions (FGDs) in research and decision-making processes. FGDs gather diverse individuals to discuss multiple issues, enabling the collection of varied opinions and insights. The interactive nature of FGDs encourages individuals to share their views and consider different perspectives, promoting a deeper understanding of the issues at hand [34]. Through this process, FGDs help to identify commonalities and divergences among individuals, facilitating the formation of a dominant consensus that reflects a broad spectrum of interests and opinions. Integrating multiple issues into the discussion allows FGDs to achieve a more holistic and representative outcome, similar to the dynamics observed in multi-issue simulations. However, as the name suggests, FGDs must be focused on a fixed number of issues. When the number of issues discussed becomes too diverse, the increased complexity can overwhelm individuals, making it difficult for them to focus on and deeply engage with each issue. This often leads to fragmented discussions where no single topic is thoroughly explored, diminishing the likelihood of a strong consensus [35].
Next, we examine the dynamic difference resulting from including evidence-based DST. We consider a system where the opinions are affected by the frame of discernment and the evolution of belief masses, reflecting the new confidence level in the updated opinions. We compare it to a topologically similar system, with opinions being affected only by influence. That is, Equations (12) and (13) can be collectively rewritten as
$$A_k^i = \frac{\sum_{j \in I_k(i, \mathbf{x})} w_{ij} \times J_k^i}{\sum_{j \in I_k(i, \mathbf{x})} w_{ij}},$$
where the updated opinion evolution remains unchanged, as given by Equation (14).
When comparing the two models of opinion update from Figure 3, it is immediately apparent that the inclusion of evidence affects the evolution of opinions in two significant ways. Firstly, the presence of evidence accelerates the process of achieving a dominant consensus. This implies that when individuals incorporate evidence into decision-making, the overall time required to reach a unified opinion is reduced. Secondly, the inclusion of evidence also affects the final distribution of opinions among agents. It notably increases the number of agents whose opinions converge towards the dominant consensus. This indicates that evidence serves as a moderating force, leading the majority to align with the prevailing viewpoint quickly. Consequently, in a bounded confidence opinion dynamical system, the integration of evidence fosters a more nuanced and potentially stable opinion dynamic, enhancing the pathway to consensus.

3.2. Dynamics under the Influence of Truth

Next, we simulate opinion dynamics in the presence of truth, beginning with scenarios involving weak truth and progressing to those with strong truth, for both exoteric and privileged truths. Here, we define a weak truth as one whose influence is of a similar or smaller order of magnitude than the agent–agent connections, i.e., $\mathcal{O}(w_{Tj}) \lesssim \mathcal{O}(w_{ij})$ for an eligible $j$. Conversely, a strong truth is one where $\mathcal{O}(w_{Tj}) \gg \mathcal{O}(w_{ij})$. This simulation is important because it provides insights into how varying degrees of truthful information influence the evolution of public opinion. We first define a central truth against which to compare the degrees of truth, set as $\mathbf{x}^T = [0.0]_k$, where $[0.0]_k$ denotes a vector of zeros across all issues $k$.
Figure 4 shows the extremes of the strength of exoteric and privileged truths. By simulating weak truth, we can observe how a limited or diluted form of truth, regardless of its type, impacts opinion formation, potentially leading to slower consensus building or the persistence of misinformation, as seen in both cases of (I). Weak truths with little to no influence in the system of opinion dynamics often do not produce any change in the evolution of opinions, leaving key individuals to drive opinions instead. Simulating strong truth allows us to understand how robust, clear, and undeniable truths can accelerate consensus and reduce the prevalence of false beliefs. These simulations are crucial for policymakers, educators, and communicators as they highlight the effectiveness of truth in shaping public discourse. They can inform strategies for combating misinformation, designing educational campaigns, and understanding the resilience of truth in diverse social contexts.
Our results comparing strong truth in exoteric and privileged cases reveal intriguing dynamics in how consensus forms around societal truths, whether these truths are universally accessible (exoteric) or confined to an influential subset (privileged). Common to both scenarios is a convergence towards a dominant consensus at a similar rate. However, the characteristics of the final spread of opinions differ. While both exoteric and privileged truths ultimately lead to a dominant consensus, the nature of this consensus differs significantly. The exoteric model fosters a strong, cohesive agreement, whereas the privileged model shows multiple and diverse opinion formation, not amounting to fragmentation, around the core truth (a haze around the truth as $t \to \tau$). This has applications in the real world. For instance, where a scientific consensus acts as an exoteric truth, widespread access to information and data leads to a strong, unified agreement about the underlying facts and their causes. On the other hand, in contexts where truth is privileged, such as insider financial information available only to select investors, there tends to be a greater diversity of opinions and strategies surrounding the dominant consensus. This diversity stems from the varying interpretations and applications of the privileged information.

3.3. Dynamics under the Presence of Varied Truth

In exploring opinion dynamics, it is essential to acknowledge that not all truths occupy a centrist position; some may reside at the extremes. This understanding is critical as we delve into simulations incorporating strong truths within scenarios characterized by extreme and centrist exoteric truths. Examining these diverse truth positions, we aim to identify the tipping points for opinion formation. This approach allows us to investigate how extreme truths can influence overall opinion dynamics and under what conditions they might dominate or coexist with centrist truths. Understanding these tipping points is vital for predicting shifts in public opinion, managing societal polarization, and fostering environments where a plurality of truths can be discussed and understood. Through these simulations, we can gain deeper insights into the mechanisms driving opinion changes and the resilience of centrist positions in the face of extreme viewpoints. A centrist truth for issue $k$ is defined as $x_k^T(t) = 0.0, \forall t$, while an extreme truth is defined as $x_k^T(t) = \pm 0.75, \forall t$.
Figure 5 shows the results for $|K| = 5$, where the number of extreme truths incrementally increases. The introduction of extreme truths continues to result in opinion convergence for most agents within the population. This outcome suggests that even when the truths are extreme, the inherent dynamics of the system encourage agents to gravitate towards the truth. However, a notable observation is a distinct delay in achieving consensus, even for the issues with centrist truths. This delay can be attributed to several factors inherent in the nature of extreme truths and the opinion dynamics model. Firstly, agents require time to adjust their opinions as they are exposed to extreme truths. This adjustment period is longer because agents may hold an initial opinion on the issue that diametrically opposes the truth. The process of repeatedly interacting with and adjusting to extreme truths introduces additional iterations before a stable consensus can be reached. Secondly, the delay in achieving a consensus may also be due to the complex interactions between agents influenced by extreme truths and their interaction with other agents. These interactions create a more complex landscape for opinion evolution, requiring more time to navigate and converge. This means that it takes more time and more interactions for the opinions to shift significantly towards consensus. As the number of extreme truths increases, the collective inertia of the system increases, thereby prolonging the convergence time.
While introducing extreme truths—in the presence of centrist truths—does not prevent the eventual convergence of opinions within the population, it does introduce complexities that delay the process. The initial polarization, longer adjustment periods, stronger influence of extremes, and complex inter-agent interactions all contribute to this delay. Moreover, an important detail is that if all the issues in the model are characterized by extreme truths, with no centrist truths present, the population will never converge to any of these extreme truths. This phenomenon occurs because the presence of only extreme truths exacerbates polarization without providing a moderating influence that centrist truths typically offer. In such a scenario, the agents’ opinions remain perpetually divided and oscillate without ever stabilizing. This lack of convergence highlights the critical role of centrist truths in facilitating consensus by acting as a balancing force that tempers the influence of extreme truths.
We note several other permutations, unexplored in this work, of how opinions evolve based on different forms and methods of simulating truths. For example, we have focused exclusively on strong truths across all issue dimensions in the last simulation. However, examining a combination of both strong and weak truths, spanning both centrist and extreme positions, might yield interesting and nuanced results that warrant future research. Such combinations could reveal more complex dynamics of opinion formation, highlighting how varying intensities and positions of truths interact to shape public consensus, polarization, and the stability of different opinion clusters. Future investigations into these permutations can provide a more comprehensive understanding of the multifaceted nature of opinion dynamics, offering valuable insights for fields ranging from social psychology to political science.

4. Conclusions

In conclusion, we develop a computational model to investigate the dynamics of opinion formation and the emergence of echo chambers in a network through evidence-based opinion updates. We model a network where each node represents an agent with opinions on multiple issues. Agents interact based on a modified general interaction where each agent has varying numbers and issues considered under the expanded latitude of acceptance. We also introduced selective exposure by varying the strength of connections (edge weights) based on opinion similarity. “Truth” nodes with high connectivity are used to simulate the influence of various types of truth on the network. Our model and findings collectively hold significant theoretical and practical implications, directly impacting the understanding of truth in networks. For example, by formalizing intuitive descriptions of how people interact with truth and engage in multi-issue discussions online, our work enables opinion dynamics to encompass a broader range of phenomena. The conclusions from our models can guide how online platforms manage discourse and interactions. This approach helps refocus debates on divisive online issues by moving beyond simplistic assumptions about multi-issue dialogue, promoting inclusivity in cross-issue interactions, which benefits users and platform administrators alike.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/computers13080190/s1, Algorithm S1: Simulation of Opinion Dynamics with Belief Mass Update; Algorithm S2: UpdateOpinions Function.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Castellano, C.; Fortunato, S.; Loreto, V. Statistical physics of social dynamics. Rev. Mod. Phys. 2009, 81, 591–646. [Google Scholar] [CrossRef]
  2. Castellano, C. Social Influence and the Dynamics of Opinions: The Approach of Statistical Physics. Manag. Decis. Econ. 2012, 33, 311–321. [Google Scholar] [CrossRef]
  3. Jusup, M.; Holme, P.; Kanazawa, K.; Takayasu, M.; Romić, I.; Wang, Z.; Geček, S.; Lipić, T.; Podobnik, B.; Wang, L.; et al. Social physics. Phys. Rep. 2022, 948, 1–148. [Google Scholar] [CrossRef]
  4. Xie, J.; Meng, F.; Sun, J.; Ma, X.; Yan, G.; Hu, Y. Detecting and modelling real percolation and phase transitions of information on social media. Nat. Hum. Behav. 2021, 5, 1161–1168. [Google Scholar] [CrossRef] [PubMed]
  5. Chen, H.; Ding, J.; Li, Y.; Wang, Y.; Zhang, X.P. Social Physics Informed Diffusion Model for Crowd Simulation. Proc. AAAI Conf. Artif. Intell. 2024, 38, 474–482. [Google Scholar] [CrossRef]
  6. Lai, J.W.; Cheong, K.H. A Parrondo paradoxical interplay of reciprocity and reputation in social dynamics. Chaos Solitons Fractals 2024, 179, 114386. [Google Scholar] [CrossRef]
  7. Xia, H.; Wang, H.; Xuan, Z. Opinion Dynamics: A Multidisciplinary Review and Perspective on Future Research. Int. J. Knowl. Syst. Sci. 2011, 2, 72–91. [Google Scholar] [CrossRef]
  8. Lai, J.W.; Cheong, K.H. Boosting Brownian-inspired games with network synchronization. Chaos Solitons Fractals 2023, 168, 113136. [Google Scholar] [CrossRef]
  9. Bungert, L.; Roith, T.; Wacker, P. Polarized consensus-based dynamics for optimization and sampling. Math. Program. 2024. [Google Scholar] [CrossRef]
  10. Peralta, A.F.; Neri, M.; Kertész, J.; Iñiguez, G. Effect of algorithmic bias and network structure on coexistence, consensus, and polarization of opinions. Phys. Rev. E 2021, 104, 044312. [Google Scholar] [CrossRef]
  11. Bhandari, A.; Bimo, S. Why’s Everyone on TikTok Now? The Algorithmized Self and the Future of Self-Making on Social Media. Soc. Media + Soc. 2022, 8, 205630512210862. [Google Scholar] [CrossRef]
  12. Oh, P.; Peh, J.W.; Schauf, A. The functional aspects of selective exposure for collective decision-making under social influence. Sci. Rep. 2024, 14, 6412. [Google Scholar] [CrossRef] [PubMed]
  13. Mele, A. A Structural Model of Homophily and Clustering in Social Networks. J. Bus. Econ. Stat. 2021, 40, 1377–1389. [Google Scholar] [CrossRef]
  14. Khanam, K.Z.; Srivastava, G.; Mago, V. The homophily principle in social network analysis: A survey. Multimed. Tools Appl. 2022, 82, 8811–8854. [Google Scholar] [CrossRef]
  15. Wang, X.; Sirianni, A.D.; Tang, S.; Zheng, Z.; Fu, F. Public Discourse and Social Network Echo Chambers Driven by Socio-Cognitive Biases. Phys. Rev. X 2020, 10, 041042. [Google Scholar] [CrossRef]
  16. Diaz-Diaz, F.; San Miguel, M.; Meloni, S. Echo chambers and information transmission biases in homophilic and heterophilic networks. Sci. Rep. 2022, 12, 9350. [Google Scholar] [CrossRef] [PubMed]
  17. Baumann, F.; Lorenz-Spreen, P.; Sokolov, I.M.; Starnini, M. Modeling Echo Chambers and Polarization Dynamics in Social Networks. Phys. Rev. Lett. 2020, 124, 048301. [Google Scholar] [CrossRef] [PubMed]
  18. Baumann, F.; Lorenz-Spreen, P.; Sokolov, I.M.; Starnini, M. Emergence of Polarized Ideological Opinions in Multidimensional Topic Spaces. Phys. Rev. X 2021, 11, 011012. [Google Scholar] [CrossRef]
  19. Rabb, N.; Cowen, L.; de Ruiter, J.P. Investigating the effect of selective exposure, audience fragmentation, and echo-chambers on polarization in dynamic media ecosystems. Appl. Netw. Sci. 2023, 8, 78. [Google Scholar] [CrossRef]
  20. Hegselmann, R.; Krause, U. Opinion Dynamics and Bounded Confidence: Models, Analysis and Simulation. J. Artif. Soc. Soc. Simul. 2002, 5, 1–33. [Google Scholar]
  21. Tan, Z.X.; Cheong, K.H. Cross-issue solidarity and truth convergence in opinion dynamics. J. Phys. A Math. Theor. 2018, 51, 355101. [Google Scholar] [CrossRef]
  22. Dempster, A.P. Upper and Lower Probabilities Induced by a Multivalued Mapping. Ann. Math. Stat. 1967, 38, 325–339. [Google Scholar] [CrossRef]
  23. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  24. Yager, R.R. On the dempster-shafer framework and new combination rules. Inf. Sci. 1987, 41, 93–137. [Google Scholar] [CrossRef]
  25. Hua, Z.; Fei, L.; Xue, H. Consensus reaching with dynamic expert credibility under Dempster-Shafer theory. Inf. Sci. 2022, 610, 847–867. [Google Scholar] [CrossRef]
  26. Lu, X.; Mo, H.; Deng, Y. An evidential opinion dynamics model based on heterogeneous social influential power. Chaos Solitons Fractals 2015, 73, 98–107. [Google Scholar] [CrossRef]
  27. Mobilia, M. Does a Single Zealot Affect an Infinite Group of Voters? Phys. Rev. Lett. 2003, 91, 028701. [Google Scholar] [CrossRef] [PubMed]
  28. Yildiz, E.; Acemoglu, D.; Ozdaglar, A.E.; Saberi, A.; Scaglione, A. Discrete Opinion Dynamics with Stubborn Agents. SSRN Electron. J. 2011, 109, 102410. [Google Scholar] [CrossRef]
  29. Tian, Y.; Wang, L. Opinion dynamics in social networks with stubborn agents: An issue-based perspective. Automatica 2018, 96, 213–223. [Google Scholar] [CrossRef]
  30. Mäs, M.; Flache, A.; Takács, K.; Jehn, K.A. In the Short Term We Divide, in the Long Term We Unite: Demographic Crisscrossing and the Effects of Faultlines on Subgroup Polarization. Organ. Sci. 2013, 24, 716–736. [Google Scholar] [CrossRef]
  31. Flache, A.; Macy, M.W. Local Convergence and Global Diversity: From Interpersonal to Social Influence. J. Confl. Resolut. 2011, 55, 970–995. [Google Scholar] [CrossRef]
  32. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [Google Scholar] [CrossRef]
  33. Erdős, P.; Rényi, A. On random graphs. I. Publ. Math. Debr. 1959, 6, 290–297. [Google Scholar] [CrossRef]
  34. Boateng, W. Evaluating the Efficacy of Focus Group Discussion (FGD) in Qualitative Social Research. Int. J. Bus. Soc. Sci. 2012, 3, 54–57. [Google Scholar]
  35. Hennink, M. Focus Group Discussions; Understanding Qualitative Research; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
Figure 1. An illustrative example of the results of opinion evolution under Tan and Cheong’s (a) inclusivist interaction, (b) general interaction, and (c) exclusivist interaction in the bounded confidence model.
Figure 2. Simulation results of the modified bounded confidence model for the BA network, where agents are under evidence-based (DST) opinion evolution, averaged over 100 experiments. The x-axis is the timestep t, while the y-axis shows the opinions x k i ( t ) . The number of issues is | K | = { 2 , 5 , 10 } .
Figure 3. Simulation results of the modified bounded confidence model for the BA network where agents are under influence-based opinion evolution. The x-axis is the timestep t, while the y-axis shows the opinions x k i ( t ) .
Figure 4. Simulation results of the evidence-based bounded confidence model with (a) exoteric and (b) privileged truths in the case of (I) weak and (II) strong truths averaged over 100 experiments. The red line denotes the “truth” in the four simulations. In (a) (I), the blue line denotes the node with the highest influence.
Figure 5. Simulation results for different numbers of strong exoteric truths in combinations of centrist and extreme truths. The x-axis is the timestep t, while the y-axis shows the evolution of opinions x k i ( t ) . The red line denotes the “truth” for the particular issue.
Table 1. Initial parameters used in simulations.
| Symbol | Initial Value | Description |
| --- | --- | --- |
| $\lvert N \rvert$ | 50 | Number of agents in the population |
| $\tau$ | 10,000 | Steps in simulation |
| $x_k^i(0)$ | $U(-1, 1)$ * | Range of initial opinions |
| $m_k^i(0)$ | $U(0.5, 1)$ * | Range of initial belief masses |
| $w_{ij}(0)$, $i \neq T$ | 1 | Weights of network edges between agents |
| $h^i$ | $U(0, 1)$ * | Homophily parameter |
| $\epsilon_k^i$ | $U(0.1, 0.25)$ * | Latitude of acceptance of agent $i$ for issue $k$ |
| $\beta^i$ | $U(0.1, 0.3)$ * | Bias parameter |
| $\alpha_k$ | $U(1.5, 3.0)$ * | Expanded latitude of acceptance factor |

* $U(a, b)$ denotes the uniform distribution on the interval $[a, b]$.
