Proceeding Paper

Upscaling Reputation Communication Simulations †

Viktoria Kainz, Céline Bœhm, Sonja Utz and Torsten Enßlin

1 Max Planck Institute for Astrophysics, Karl-Schwarzschildstraße 1, 85748 Garching, Germany
2 Faculty of Physics, Ludwig-Maximilians-Universität, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
3 School of Physics, The University of Sydney, Physics Road, Camperdown, NSW 2006, Australia
4 Leibniz-Institut für Wissensmedien, 72076 Tübingen, Germany
5 Faculty of Psychology, University of Tübingen, 72074 Tübingen, Germany
6 Excellence Cluster ORIGINS, Boltzmannstr. 2, 85748 Garching, Germany
* Author to whom correspondence should be addressed.
Presented at the 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Paris, France, 18–22 July 2022.
Phys. Sci. Forum 2022, 5(1), 39; https://doi.org/10.3390/psf2022005039
Published: 26 December 2022

Abstract

Social communication is omnipresent and a fundamental part of our daily lives. Especially due to the increasing popularity of social media, communication flows are becoming more complex, faster, and more influential. It is therefore not surprising that, in these highly dynamic communication structures, strategies are developed to spread certain opinions, to deliberately steer discussions, or to inject misinformation. The reputation game is an agent-based simulation that uses information-theoretic principles to model the effect of such malicious behavior, taking reputation dynamics as an example. So far, only small groups of 3 to 5 agents have been studied; here, we extend the reputation game to larger groups of up to 50 agents, also including one-to-many conversations. In this setup, the resulting group dynamics are examined, with particular emphasis on the emerging network topology and the influence of agents' personal characteristics on it. In the long term, the reputation game should thus help to determine relations between the arising communication network structure, the communication strategies used, and the recipients' behavior, allowing us to identify potentially harmful communication patterns, e.g., in social media.

1. Introduction

These days, online communication platforms and social media play an increasingly large role in nearly everyone's life. Never before has it been easier to obtain information, or to build up a network that spreads information as quickly and effectively as today. Consequently, topics of global significance, including political, economic, and social concerns, are now discussed online to a large extent. While this is a big advantage in many regards, it is also known to carry high risks. More and more unhealthy trends develop on social media platforms, including the rapid spread of fake news, emerging filter bubbles, hate speech, and more. Sometimes these effects are simply a natural consequence of the extremely dynamic and unmanageable structure of modern social media; sometimes discussions are actively misguided and distorted by single users with personal interests or by fraudulently used social bots. Neither scenario should be tolerated, and techniques to counteract these trends are being developed with various approaches.
We, in particular, focus on the propagation of information throughout a network of communicating agents and the circumstances under which potentially deceptive information can be placed effectively. Once the origins and causes of both intended and unintended harmful communication patterns are understood, techniques to prevent them can be developed. To this end, we set up the socio-physical reputation game model [1] on large scales in order to simulate communication networks somewhat closer to social media.

2. The Reputation Game

2.1. Principles

Most generally speaking, the reputation game models communications between agents, including their decision-making and opinion-formation processes. A set A of n agents (so far, we have used n = 3 to n = 5 agents, whereas in this work n = 50) interacts in a predefined number of rounds, where one round consists of n conversations, each initiated by one of the n agents. Within one conversation, the initiator chooses a conversation partner, which can be any agent except itself. Afterwards, the initiator also chooses a conversation topic, which can be the honesty of any agent, including the initiator and the interlocutor themselves. Depending on the initiator's intrinsic honesty and a random variable, the communication is then either honest, i.e., the speaker transmits their true belief on the topic, or dishonest, i.e., the speaker communicates a lie in order to shift the receiver's opinion in a desired direction. In the reputation game, the agents generally choose the direction of lies according to their friendship status with the topic. In the scope of this work, all agents lie positively about friends as well as about themselves and negatively about their enemies (see also Table 1). After receiving this message, the receiver answers on the same topic, again either honestly or dishonestly, chosen randomly according to that agent's honesty. Both agents then judge the credibility of the messages they just received and update their knowledge on both the speaker and the topic accordingly. This way, the agents form their opinions step by step, always considering uncertainty and potential deceptions in their conclusions. The final goal of all agents is to find out each others' true honesty, as well as to push their own honesty in the eyes of the others (which we call their reputation) as high as possible.
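The round structure described above can be sketched in code. This is an illustrative toy version, not the authors' implementation: the `Agent` class, the random lie value, and the simple nudging belief update are placeholders for the information-theoretic update of [1] and the friendship-dependent lie direction of Table 1.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal stand-in for a reputation-game agent (illustrative only)."""
    name: int
    honesty: float                                # intrinsic probability of speaking honestly
    beliefs: dict = field(default_factory=dict)   # topic name -> believed honesty in [0, 1]

def one_round(agents, rng=random):
    """One round: each of the n agents initiates one one-to-one conversation."""
    for speaker in agents:
        # Partner: any agent but the initiator; topic: any agent's honesty.
        partner = rng.choice([a for a in agents if a is not speaker])
        topic = rng.choice(agents)
        # Both directions: the initiator's message and the receiver's answer.
        for sender, receiver in ((speaker, partner), (partner, speaker)):
            honest = rng.random() < sender.honesty
            if honest:
                claim = sender.beliefs.get(topic.name, 0.5)  # true belief
            else:
                claim = rng.random()  # placeholder lie; the real model lies directionally
            # Placeholder update: nudge the receiver's opinion toward the claim.
            old = receiver.beliefs.get(topic.name, 0.5)
            receiver.beliefs[topic.name] = 0.9 * old + 0.1 * claim
```

Running a few hundred such rounds for n = 50 agents reproduces the basic conversational bookkeeping; the actual game adds credibility judgment and lie detection on top of each message.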

2.2. Steps towards Reality

To the basic principles of the reputation game described in Section 2.1, a few new concepts are now introduced in order to move a little closer to reality and to prepare the setup for larger groups. One is that friendships, which have been absolute so far, i.e., either total friendship ( f = 1 ) or total enmity ( f = 0 ), can now take any value between 0 and 1. This way, partial friendship becomes possible and different strengths of friendships and enmities are modeled. A complimentary message that an agent made towards another agent therefore no longer results in an absolute friendship, but rather increases the friendship value between the agents a bit. The value of f should thus be understood as the probability that someone behaves like a friend in any conversation. This way, friendship can establish itself in a more natural way, also making uncertainty about someone's friendship status possible. The second newly introduced feature is personality traits such as friendship affinity and shyness, which are distributed randomly among the agents. Friendship affinity defines how much weight an agent gives to the friendship value when choosing a conversation partner, and shyness in turn specifies the weight on the agents' acquaintance (more details in Section 2.3.1). This way, the size and structure of the agents' connection network depends on their character and varies between agents. As a result, we will observe in Section 3 how their different character traits influence the network structures the agents find themselves in.

2.3. Large Groups

As already mentioned in Section 1, the reputation game's long-term goal is to be applied to communication patterns in social media. In [1], we have already shown that the reputation game is able to reproduce several socio-psychological effects that are observable in real-life scenarios, such as echo chambers, self-deception, deception symbiosis, group formation, or freezing of group opinions. However, in order to apply it usefully to social media, it certainly requires more than three to five participating agents. The main goal of this work is therefore to describe the required changes in the reputation game setup, as well as to present the resulting effects.

2.3.1. Group Formation

First of all, the assumption that the agents choose their conversation partners completely randomly among all others can no longer be upheld, because that is obviously not the case on social media platforms or in any other large or even small community. Instead, the agents meet randomly only at the beginning; as soon as they know each other a little better, other criteria start to take effect. In the case of the reputation game, there are two: friendships and acquaintances. On the one hand, when initiating a conversation, an agent tends to talk to their friends, where friends in the reputation game are defined as agents who have spoken positively about the conversation initiator before. This is meant as a simple self-confirmation technique that is, in a much more complex way, also used in people's everyday lives and has been shown to be one reason why social media platforms have become so popular [2]. On the other hand, agents also choose their conversation partner according to their acquaintance, i.e., once a clique has formed, it is likely that the conversation initiator will again turn to a member of that group. This is a natural process in human network building as well, which automatically leads to group formation both in the real world and in the reputation game simulations. Of course, people's individual tendency to stick to their group or explore new contacts varies strongly from person to person, which is also accounted for in the reputation game in the form of the agents' personal shyness values.
Technically, acquaintances among the agents are tracked during the whole simulation by each agent measuring their relation strength to all others. This is done by each agent i counting the number of conversations it has had with another agent j, $r_{ij}^{c}$, as well as the number of messages it received about agent j, $r_{ij}^{m}$. Together with a weighting factor Q = 10, by which we assume a conversation to be more important than a message about someone, the overall acquaintance of agent i with agent j is given by
$r_{ij} = r_{ij}^{m} + Q \, r_{ij}^{c}$.  (1)
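The acquaintance bookkeeping can be sketched as a small per-agent counter; `AcquaintanceTracker` is a hypothetical name, but the arithmetic follows Equation (1) with Q = 10:

```python
from collections import defaultdict

Q = 10  # weight of a direct conversation relative to a message heard about someone

class AcquaintanceTracker:
    """Per-agent bookkeeping of relation strengths r_ij (a sketch of Eq. (1))."""

    def __init__(self):
        self.conversations = defaultdict(int)  # r_ij^c: conversations held with j
        self.messages = defaultdict(int)       # r_ij^m: messages heard about j

    def record_conversation(self, j):
        self.conversations[j] += 1

    def record_message_about(self, j):
        self.messages[j] += 1

    def acquaintance(self, j):
        # r_ij = r_ij^m + Q * r_ij^c
        return self.messages[j] + Q * self.conversations[j]
```

For example, one conversation with an agent plus two messages heard about them gives an acquaintance of 12, while an agent never encountered stays at 0.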
Following [1], the friendship of agent i towards agent j is still measured by the positive or negative statements agent i has heard agent j make about itself, compared to the median of what i has heard others say about itself. However, as mentioned in Section 2.2, a single positive or negative statement is no longer enough to establish a friendship or an enmity; instead, the numbers of positive and negative statements agent i has heard agent j make about itself are counted as $\pi_{ij}$ and $\nu_{ij}$, respectively. The friendship of agent i towards agent j then develops over time and is defined as
$f_{ij} = \langle f \rangle_{\mathrm{Beta}(f \mid \pi_{ij}+1,\, \nu_{ij}+1)} = \frac{\pi_{ij}+1}{\pi_{ij}+\nu_{ij}+2}$  (2)
in analogy to the agents’ internal reputation representation (see [1]).
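Equation (2) is just the mean of a Beta distribution with the two statement counts as pseudo-observations, which a one-line helper can make concrete (a sketch, not the authors' code):

```python
def friendship(pi, nu):
    """Friendship value f_ij as the mean of Beta(pi + 1, nu + 1), where pi and nu
    count the positive and negative statements agent j made about agent i (Eq. (2))."""
    return (pi + 1) / (pi + nu + 2)
```

With no statements heard yet, the friendship value starts at the neutral 0.5, and each positive or negative statement moves it gradually towards 1 or 0, exactly the "partial friendship" behavior described in Section 2.2.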
The probability of the initiating agent a choosing agent b as a conversation partner is further assumed to be proportional to $r_{ab}^{S_a} f_{ab}^{F_a}$, where $S_a$ and $F_a$ are agent a's personal shyness and friendship affinity as described in Section 2.2. In addition, the frequency of choosing agent c as the conversation topic is proportional to the initiator's acquaintance with c, which ensures that the speaker does not talk about an agent it has no knowledge of, but has at least had some contact with. This is the default behavior of agents in the reputation game, which we therefore call ordinary agents. A summary of ordinary agents' properties and decision-making strategies is given in Table 1.
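The weighted partner choice can be sketched as follows. The function name and the dictionary layout of the acquaintance matrix `r`, friendship matrix `f`, and trait maps `S` (shyness) and `F` (friendship affinity) are illustrative assumptions:

```python
import random

def choose_partner(a, agents, r, f, S, F, rng=random):
    """Sample a conversation partner for agent a with probability proportional
    to r[a][b]**S[a] * f[a][b]**F[a] (sketch of the partner-choice rule)."""
    candidates = [b for b in agents if b != a]
    weights = [r[a][b] ** S[a] * f[a][b] ** F[a] for b in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Note that an agent with zero acquaintance to b gets zero weight, so conversations stay within the already-established neighbourhood unless the exponents are small.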

2.3.2. One-to-Many Conversations

Another feature newly introduced to the reputation game is the possibility for agents to talk to more than one other agent at a time. This is clearly a feature needed for the simulation of social-media-like conversation patterns, and as a side effect it enlarges the typically emerging group size in our simulation, in particular preventing too many two-agent groups. In half of the conversations an agent initiates, it addresses at least two other agents, where the exact number of recipients is chosen randomly each time. However, the probability decreases steeply with the number of recipients for very shy agents and less steeply for less shy agents, leading to a typical audience size between 2 and 6, but also sometimes allowing for 20 or 30 recipients. Unlike in one-to-one conversations, the recipients in one-to-many communications do not answer, again in analogy to the unidirectional influence social media users exert when sharing information with many other users at once.
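The exact audience-size distribution is not spelled out above; as one hypothetical realization matching its description (minimum of 2 recipients, decay steeper for shy agents, occasional large audiences), a geometric-style sampler could look like this:

```python
import random

def audience_size(shyness, n_agents, rng=random):
    """Hypothetical sketch: number of recipients of a one-to-many conversation.
    Starts at 2 and adds listeners with probability (1 - shyness) each step, so
    shy agents (shyness near 1) almost always stop at 2, while outgoing agents
    occasionally reach audiences of 20 or 30."""
    size = 2
    while size < n_agents - 1 and rng.random() < 1 - shyness:
        size += 1
    return size
```

Under this assumed rule, the expected audience is roughly 2 + (1 - shyness)/shyness, giving the typical 2 to 6 recipients for mid-range shyness values.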

2.3.3. A Seemingly Infinite Network

When simulating large networks such as social media platforms, one of course also has to consider computational limits. In order to find a good trade-off between computationally manageable simulation sizes and realistically large networks, we use the criterion that the group should appear infinitely large. This is obviously the case for any social media platform, since for every user there is certainly at least one other user they have never had any contact with. We aim for the same property in the reputation game simulations, while staying close to the boundary in order not to inflate the computational cost of increasing group sizes unnecessarily. With the group size of 50 agents used in this work, this infinity criterion was met in approximately one third of the simulations. In another third, there was one agent who was aware of all others, and in the last third there were mostly two, but at most seven, agents who had heard of all the others at least once. The fluctuations are caused by the agents' randomly distributed shyness values, which influence their outgoingness and sometimes become small enough to allow for such extremely well-connected agents. Nevertheless, these are exceptions; for the vast majority of agents, the network appears infinite.
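The infinity criterion reduces to a simple check on the acquaintance matrix: the network "appears infinite" if no agent has a non-zero acquaintance with every other agent. A sketch (function names are illustrative):

```python
def knows_everyone(agent, acquaintance_row, agents):
    """True if `agent` has at least heard of every other agent, i.e., has a
    non-zero acquaintance r_ij with all of them."""
    return all(acquaintance_row.get(j, 0) > 0 for j in agents if j != agent)

def network_appears_infinite(r, agents):
    """The network appears infinitely large if no agent knows all others."""
    return not any(knows_everyone(i, r[i], agents) for i in agents)
```

Evaluating this at the end of each simulation yields the thirds reported above: no fully-aware agent, exactly one, or a handful.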

3. Results and Discussion

In the following, we present the results of 100 simulations consisting of 300 conversation rounds each, in which only ordinary agents were involved. This way, we ensure statistically meaningful results in an otherwise chaotic system such as the reputation game. Furthermore, the focus on ordinary agents as a conceptual basis is intended to lay the foundation for further, more advanced simulations in the future.

3.1. Network Structure

First of all, since the large group size is the main novelty here, we have a look at the evolved network structure. Among the 50 agents, several subgroups form, of mainly two different types. On the one hand, there are relatively isolated groups of a few agents that have strong contact among each other but do not communicate much with agents outside their group. On the other hand, there are also pairwise strong connections between most agents in the form of a nearly fully connected network. For a more quantitative measurement of the network topology, we introduce two quantities frequently used to characterize social networks. These are closeness centrality (CC), indicating how close the typical connection between an agent and their neighbourhood is, and degree centrality (DC), indicating how many others an agent is connected to [3]. The closeness centrality and degree centrality of an agent i are defined as
$\mathrm{CC}_i = \frac{n-1}{\sum_{j \in A \setminus \{i\}} d_{ij}}$  (3)
$\mathrm{DC}_i = \left| \{\, j \in A \setminus \{i\} \mid r_{ij} > 1 \,\} \right|$,  (4)
respectively. Here, $d_{ij}$ is the length of the shortest path between agents i and j, where we choose the direct path distance between two agents to be their reciprocal symmetric acquaintance, $\left( \frac{2 Q r_{ij}^{c} + r_{ij}^{m} + r_{ji}^{m}}{2} \right)^{-1}$. One character trait that influences the agents' centrality is their personal shyness. As can be seen in Figure 1, especially the degree centrality strongly correlates with an agent's shyness, but also for the closeness centrality one can observe a small negative correlation, meaning that a shy agent has slightly less contact with their neighbours in the network than a less shy agent. This is easily understandable. First of all, the typical number of recipients in one-to-many conversations depends on the agents' shyness; shy agents therefore tend to talk to fewer agents, who in turn also communicate less with the agent at hand, which leads to fewer contacts in general as well as less frequent contact with specific agents. Secondly, a low shyness also increases an agent's probability of talking to somebody it does not yet know well, which obviously helps to establish more contacts, i.e., a higher degree centrality. Of course, with a low shyness this higher number of contacts is also easier to maintain, due to the on average higher number of conversation partners and the in turn higher probability of being spoken to by others. The reason why closeness centrality correlates only weakly with the agents' shyness values lies in its definition in Equation (3): the denominator is dominated by the large number of far-away agents, because every agent (independent of their shyness) has a large number of agents it only knows casually. The typical distance to an agent's good acquaintances, however, gets somewhat lost in this definition.
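As a sketch (not the authors' code), both centralities can be computed from the conversation and message counts; `rc` and `rm` are hypothetical n × n count matrices, and shortest paths are taken over the reciprocal-acquaintance edge lengths via Floyd–Warshall:

```python
import math

def centralities(rc, rm, Q=10):
    """Closeness (CC) and degree (DC) centrality following Eqs. (3) and (4).
    rc[i][j]: conversations between i and j (symmetric); rm[i][j]: messages
    agent i heard about j. Both are n x n lists with zero diagonals."""
    n = len(rc)
    # Direct edge lengths: reciprocal symmetric acquaintance (inf if no contact).
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
        for j in range(n):
            if i != j:
                s = (2 * Q * rc[i][j] + rm[i][j] + rm[j][i]) / 2
                if s > 0:
                    d[i][j] = 1 / s
    # Floyd-Warshall: shortest path distances d_ij.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    cc = [(n - 1) / sum(d[i][j] for j in range(n) if j != i) for i in range(n)]
    dc = [sum(1 for j in range(n) if j != i and rm[i][j] + Q * rc[i][j] > 1)
          for i in range(n)]
    return cc, dc
```

In a chain of three agents, the middle agent gets both the higher degree and the higher closeness centrality, as expected.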
In addition to the general tendencies, we can also compare the average centrality reached by all agents (blue lines) with that of agent 0, the least honest one (red lines). Whereas agent 0 typically reaches the same closeness centrality as the others, i.e., their distances to others in the network are average, it is on average connected to a higher number of agents than the more honest ones. This can be explained by the fact that, in the reputation game, being dishonest generally helps to establish friendships, as was already shown in [1]. This way, an above-average number of agents consider agent 0 a friend, and consequently more agents are likely to choose agent 0 as a conversation partner. In those conversations, agent 0 might in turn hear about new agents with whom it can later talk itself. Thereby, agent 0 is able to make slightly more contacts than others solely due to its lower honesty. All these effects let us conclude that the newly introduced character trait of shyness works as intended, and that findings from the small reputation game setup still hold.

3.2. Information Transmission

Besides the pure acquaintance network structure, in a communication-based reputation system the transmission and evolution of knowledge is of course of central interest. We therefore define two measures, which we call the informedness of an agent i and the agreement between two agents i and j, regarding their knowledge about all agents' honesties x. Both describe the alignment of knowledge-state vectors, either with the real truth, which the agents do not know during the simulation, or with others' knowledge states. Denoting agent i's belief about agent k's honesty as $\bar{x}_{ik}$ and agent k's real honesty as $x_k$, informedness and agreement are defined as
$\mathrm{informedness}_i = \sum_{k \in A} \left( \bar{x}_{ik} - \tfrac{1}{2} \right) \left( x_k - \tfrac{1}{2} \right)$  (5)
$\mathrm{agreement}_{ij} = \sum_{k \in A} \left( \bar{x}_{ik} - \tfrac{1}{2} \right) \left( \bar{x}_{jk} - \tfrac{1}{2} \right)$.  (6)
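Both measures are plain inner products of centered belief vectors and can be implemented directly (a sketch; belief vectors are assumed to be lists indexed by agent):

```python
def informedness(belief_i, truth):
    """informedness_i = sum_k (x̄_ik - 1/2) (x_k - 1/2), Eq. (5).
    Positive when agent i's beliefs lean towards the true honesties."""
    return sum((b - 0.5) * (x - 0.5) for b, x in zip(belief_i, truth))

def agreement(belief_i, belief_j):
    """agreement_ij = sum_k (x̄_ik - 1/2) (x̄_jk - 1/2), Eq. (6).
    Positive when the two agents' beliefs lean the same way."""
    return sum((bi - 0.5) * (bj - 0.5) for bi, bj in zip(belief_i, belief_j))
```

A fully neutral belief vector (all entries 0.5) scores zero on both measures, which is why the 0-level lines in Figure 2 can mean either no knowledge or exactly cancelling correct and incorrect beliefs.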
In the two top panels of Figure 2, one can see that both an agent's closeness centrality and their degree centrality correlate with the agent's informedness. However, degree centrality seems to be more important for being well informed, as it enables agents to come into contact with more different opinions and therefore to better estimate the real, unknown truth. When looking at the red dots, which indicate agent 0's results, or at the red line indicating the average, one can observe that agent 0 is typically better informed than all the others (black dots or blue line). Since, in these simulations, the only difference between agent 0 and the others was its low honesty (all others were more honest), agent 0's high informedness is probably a result of being subjected to fewer lies than the others on average while at the same time having more chances to manipulate them. In other words, agent 0 had more honest information sources than the others and was therefore better able to find out the truth. Furthermore, by more easily maintaining friendships, agent 0's informedness might also benefit from having both more different conversation partners and an accordingly slightly higher total number of conversations. In general, however, the majority of agents at least partly found out the truth, as their average (blue line) lies significantly above the 0-informedness line and 72% of the points lie above the 0 level. Theoretically, there is an upper bound for informedness, reached when an agent knows the exact truth about all 50 agents, which would correspond to an informedness of $n \, \langle (x - \tfrac{1}{2})^2 \rangle = 4.1\overline{6}$. However, since, due to the large group size, the agents are not supposed to be informed about everyone else, this limit will not be reached. Rather, we can have a look at typical group sizes, as seen in the lower left panel of Figure 2.
The blue line shows the average number of agents with which an agent reached a certain number of conversations by the end of a simulation. For example, one can see that, on average, agents crossed the 100-conversation line with approximately two to three other agents. Keeping this in mind, the maximally reached informedness of around 2.0 means that those agents knew the exact truth about approximately 24 other agents, or accordingly less about more agents, which is already half of the network.
Finally, we can also have a look at the lower right panel of Figure 2. Here, the agreement between two agents, as introduced in Equation (6), is depicted against the number of conversations this pair of agents had with each other. Interestingly, only the very extremes of high conversation numbers show a correlation with the agreement of the two agents, indicating that agents only influence each other significantly when they have intensive contact. When comparing the blue and red lines, i.e., the average agreement agents reached among all pairs and with agent 0, respectively, one finds that agent 0 agrees with others below average. This is probably due to the fact that agent 0, when talking to others, lies in the vast majority of cases and thereby does not reveal anything of its true belief. This way, agent 0's belief state becomes less entangled with the others' knowledge, which leads to a lower agreement. This insight demonstrates how the information-trading processes in the reputation game operate on three levels: first, the collection of information in conversations with others, which is most efficient when talking to honest agents; second, (not) disclosing one's own information in order to create an advantage; and third, misleading the others by deliberate lies in order to extend the lead even further. In Figure 2, we have seen that the least honest agent 0 acts on all three levels and manages to become the best-informed agent without sharing this knowledge with others. From an information-trading perspective, this behavior already puts agent 0 in a strong position, which emerged in the simulation from the single property of dishonesty.
Additionally, the lower right panel shows that only 8% of pairs of agents (partly) disagree with each other, which is lower than the fraction that disagrees with the actual truth (22%); the average agreement is also higher than the average informedness (blue lines). This means that although the agents tend to find out a good fraction of the reality, they also build group opinions that do not represent reality but rather are an artifact of echoed false information. Even without any particularly manipulative agents in the system, false information develops and settles in the agents' minds, which clearly also happens in real-world scenarios and is known as the illusory truth effect [4].

4. Conclusions

In this work, we have shown that extending the reputation game simulation to a large group of 50 agents yields sensible results. We thereby focused on ordinary agents in order to see whether the foundation for future, more advanced simulations works in the intended way. We have demonstrated that the agents' personal shyness values indeed affect their local network topology. In addition, the emerging group sizes of typically 3–4 agents that know each other well (in the sense of more than 100 conversations) are, compared to the whole network size of 50 agents, reasonable. However, it remains to be investigated whether 100 conversations are still enough to build solidified opinions in this new, larger setup. Nevertheless, we could show that the agents manage to filter out true information in this highly chaotic and lie-distorted extended system, indicating that their reasoning and opinion-formation processes still work. Furthermore, we even reproduced a widespread phenomenon of dealing with uncertain information, the illusory truth effect. In order to understand the network's dynamics more deeply, though, more tests will be required, for example regarding potentially existing information flows and resulting reputation hierarchies.
Overall, however, the reputation game in its large-scale setup seems well on its way to providing the basis for more sophisticated simulations, including social-media-inspired behavioral patterns and the study of associated effects.

Author Contributions

Conceptualization, V.K., C.B., S.U. and T.E.; Formal analysis, V.K. and T.E.; Writing—original draft, V.K.; Visualization, V.K.; Supervision, C.B., S.U. and T.E. All authors have read and agreed to the published version of the manuscript.

Funding

Sonja Utz is a member of the Machine Learning Cluster of Excellence, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy—EXC number 2064/1—Project number 390727645.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Enßlin, T.; Kainz, V.; Bœhm, C. A Reputation Game Simulation: Emergent Social Phenomena from Information Theory. arXiv 2021, arXiv:2106.05414. [Google Scholar] [CrossRef]
  2. Toma, C.L.; Hancock, J.T. Self-affirmation underlies Facebook use. Personal. Soc. Psychol. Bull. 2013, 39, 321–331. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, J.; Luo, Y. Degree centrality, betweenness centrality, and closeness centrality in social network. In Proceedings of the 2017 2nd International Conference on Modelling, Simulation and Applied Mathematics (MSAM2017), Bangkok, Thailand, 26–27 March 2017; Volume 132, pp. 300–303. [Google Scholar]
  4. Hasher, L.; Goldstein, D.; Toppino, T. Frequency and the conference of referential validity. J. Verbal Learn. Verbal Behav. 1977, 16, 107–112. [Google Scholar] [CrossRef]
Figure 1. The agents' centrality as influenced by their personal shyness value. (Left) Closeness centrality, i.e., an agent's typical distance to others. (Right) Degree centrality, i.e., the number of contacts an agent had during the simulation. Red dots indicate the results of agent 0, the least honest one. The horizontal lines show the average centrality of all agents (blue) and of agent 0 (red).
Figure 2. Analysis of network properties with regard to transmitted information. Top: Influence of an agent's closeness centrality (left) and degree centrality (right) on their informedness. Red dots indicate the results of the least honest agent 0 and the red line their on-average achieved informedness. Analogously, the blue line indicates the average of all agents. In green, the 0-informedness level is shown, which can mean either no knowledge at all, or correct knowledge about some agents and incorrect knowledge about others that cancel each other out. Bottom left: The number of agents with which an agent reached a certain number of conversations by the end of the simulation. The blue line again indicates the average. For example, on average an agent has had at least 1 conversation with about 24 other agents, and no pair of agents reached more than 500 conversations. Bottom right: The agreement between a pair of agents as influenced by their number of conversations. As in the upper panels, the averages of all agents and of agent 0 are indicated by the blue and red lines, respectively, and the 0-agreement level is shown in green, indicating that either both agents have no knowledge at all, or they agree and disagree in equal parts.
Table 1. Communication strategies for ordinary agents. Here, agent a is always considered the speaker, agent b the conversation partner, and agent c the topic. $P(h)$ denotes the probability of making an honest statement and $P(b \mid \lnot h)$ is the probability to blush when lying. The deception strategy indicates in which direction the speaker a wants to manipulate the receiver's opinion on the topic: "+" for topic a means that the speaker always lies positively when speaking about itself, and $f_{ac}$ means that the lie size and direction depend on the speaker's friendship towards the topic c. The agent's receiver strategy defines which criteria are considered when judging and interpreting a received message. For more detailed information, see [1].

| Agent a  | Friendship affinity $F_a$      | Shyness $S_a$                  | Partner choice $P(a \to b)$                            | Topic choice $P(a \xrightarrow{c} b \mid a \to b)$ | $P(h)$ | $P(b \mid \lnot h)$ | Deception strategy           | Receiver strategy |
|----------|--------------------------------|--------------------------------|--------------------------------------------------------|----------------------------------------------------|--------|---------------------|------------------------------|-------------------|
| ordinary | $10\,\mathcal{G}(0.5, 0.1^2)$  | $10\,\mathcal{G}(0.5, 0.1^2)$  | $\propto (1-\delta_{ab})\, r_{ab}^{S_a} f_{ab}^{F_a}$  | $\propto r_{ac}^{S_a}$                             | $x_a$  | $0.1$               | + for topic a; $f_{ac}$ else | critical          |

Share and Cite

Kainz, V.; Bœhm, C.; Utz, S.; Enßlin, T. Upscaling Reputation Communication Simulations. Phys. Sci. Forum 2022, 5, 39. https://doi.org/10.3390/psf2022005039
