Article

Reviewing Stranger on the Internet: The Role of Identifiability through “Reputation” in Online Decision Making

by Mirko Duradoni 1,†, Stefania Collodi 1,†, Serena Coppolino Perfumi 2,† and Andrea Guazzini 1,3,*,†
1 Department of Education, Languages, Intercultures, Literatures and Psychology, University of Florence, 50135 Firenze, Italy
2 Department of Sociology, Stockholm University, S-106 91 Stockholm, Sweden
3 Center for the Study of Complex Dynamics (CSDC), University of Florence, 50121 Firenze, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Future Internet 2021, 13(5), 110; https://doi.org/10.3390/fi13050110
Submission received: 1 April 2021 / Revised: 23 April 2021 / Accepted: 24 April 2021 / Published: 27 April 2021
(This article belongs to the Special Issue Selected Papers from the INSCI2019: Internet Science 2019)

Abstract

The stranger on the Internet effect has been studied in relation to self-disclosure. Nonetheless, quantitative evidence about how people mentally represent and perceive strangers online is still missing. Given the dynamic development of web technologies, quantifying how suitable strangers are considered to be for pro-social acts such as self-disclosure appears fundamental for a whole series of phenomena, ranging from privacy protection to fake news spreading. Using a modified, online version of the Ultimatum Game (UG), we quantified the mental representation of the stranger on the Internet effect and tested whether people modify their behavior according to their interactors’ identifiability (i.e., reputation). A total of 444 adolescents took part in a 2 × 2 design experiment in which reputation was either active or inactive for the two traditional UG tasks. We found that, when matched with strangers, people donate the same amount of money as they would to someone with a good reputation. Moreover, reputation significantly affected the donation size, the acceptance rate, and feedback decision making.

1. Highlights

  • The propensity to accept an allocation in an online Ultimatum Game is affected by the Donor’s reputation.
  • Feedback about Donors is affected by their previously acquired reputation.
  • A Donor’s allocation behavior is influenced by the Receiver’s reputation.
  • When reputation is unknown, individuals tend to donate the same amount of money as they would to someone with a good reputation.

2. Introduction

The “stranger on the Internet” effect has recently been presented as the online manifestation of the well-known “stranger on the train” psychological phenomenon [1]. This effect refers to the fact that people disclose significantly more, and faster, to unknown individuals (i.e., strangers) when further interactions are not likely [2], which is common on the Internet [3,4,5,6,7,8,9]. From a multidisciplinary point of view, the stranger on the Internet effect can also be framed in terms of game theory. For instance, the generous tit-for-tat strategy in the Prisoner’s Dilemma, which encourages being “fair” toward strangers on the assumption that they will reciprocate, is the only cooperative Nash equilibrium in specific circumstances [10] and provides higher payoffs in alternating games [11]. In other words, generous tit-for-tat represents a form of openness towards strangers. While Misoch’s work analyzed the phenomenon in terms of self-disclosure [12,13], the representation of the “stranger” with whom individuals interact has often been neglected in the literature.
Given the dynamic development of web technologies, being able to capture and quantify how suitable strangers are considered to be for pro-social acts such as self-disclosure appears fundamental for a whole series of phenomena, ranging from privacy protection to fake news spreading [14,15,16,17]. For instance, on most social media platforms, users do not actively choose the sources of their feed; rather, the platform shows content taken from friends, from sources selected on the basis of past activities, and from advertisers who have paid to place their content in the user’s feed. The advertisers are usually unknown to the target audience (i.e., they are strangers) and may target some individuals with malicious intent (e.g., to steal personal information, cheat, or spread false information).
By employing the Ultimatum Game (UG) [18], it is possible to quantify the representation of the “stranger on the Internet”. Indeed, individuals’ behavior within the game can be analyzed as a function of the identifiability of their interactors. The UG is an experimental economics game in which two parties interact, usually anonymously. The first player proposes how to divide a sum of money with the second party. If the second player rejects this division, neither gets anything; if the second player accepts, the first gets his/her demand and the second gets the rest. The UG is particularly suitable for our research purposes since it has an empirically robust threshold (approximately 40% of the endowment) on which people rely to allocate resources [19], so deviations from it can be attributed to our manipulation (i.e., full anonymity vs. reputation). Moreover, the UG is, in many respects, “gender-invariant”: in the work of Solnick [20], the average offers made did not differ based on gender. Furthermore, UG offers seem to display little variability across countries and cultures [21], thus allowing for greater generalizability of results. For those unfamiliar with the Ultimatum Game, a demonstration of a UG session can be seen on Jacob Clifford’s YouTube channel: https://youtu.be/_MgMpLTtJA0 (accessed on 27 April 2021).
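To make the payoff rule concrete, here is a minimal sketch in Python (our illustration; the function and variable names are hypothetical and not part of the original study):

```python
def ultimatum_payoffs(endowment: float, offer: float, accepted: bool) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff) for one Ultimatum Game round.

    If the responder rejects the proposed split, neither player gets anything;
    otherwise the proposer keeps the remainder and the responder gets the offer.
    """
    if not accepted:
        return 0.0, 0.0
    return endowment - offer, offer

# A 10-euro endowment with a 4-euro offer, i.e., close to the ~40% "fair" threshold [19].
print(ultimatum_payoffs(10.0, 4.0, accepted=True))   # (6.0, 4.0)
print(ultimatum_payoffs(10.0, 4.0, accepted=False))  # (0.0, 0.0)
```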

2.1. Reputation and Online Decision Making

Reputation has become a fundamental metric for judging the trustworthiness of sources [22,23,24]. For instance, reputation affects the perceived credibility of search engine results [25], builds customer trust in e-banking services [26], and influences the consumer decision-making process, particularly in the tourism sector [27]. Indeed, reputation is a cue for understanding which behaviors are accepted within a group. For instance, an e-market “reputational system” deeply influences vendors’ behavior [28] by representing, in an economical and perceptually ergonomic way, the system’s local norm (i.e., being a trustworthy vendor).
Reputation also affects the decision making of individuals who are not themselves identified by a reputation score. Indeed, in e-markets, equally trustworthy individuals realized different exchange volumes according to their reputation [29].
Interestingly, reputation’s influence on people’s decision making sometimes produces apparently “irrational” outcomes (e.g., behaviors disconnected from personal experience). For example, people continue to prefer highly rated partners even if they charge a much higher price for the same good [30]. In other words, people seem willing to accept a worse offer if it comes from a highly reputed member. Moreover, when reputation is not a direct translation of users’ behavior (e.g., a historical log), reputation attribution can be biased. Indeed, recent empirical findings have highlighted that reputation, once acquired, tends to be maintained over time (i.e., the reputation inertia effect) regardless of users’ actual behavior [31,32].
Given this evidence, this paper aimed to quantify prosociality towards strangers by comparing allocations directed to them with those directed to individuals identified by reputation, since reputation is a widely adopted feature for identifying people online [33,34,35,36]. Moreover, we aimed to quantify how much reputation really counts in attracting offers and inducing acceptance.

2.2. Hypotheses

We proposed to our participants an Ultimatum Game with the reputation visible in some trials and invisible in others, in order to capture behavioral differences towards strangers and towards individuals identified by reputation. The Ultimatum Game has been widely used to assess people’s pro-social behavior in situations where a second player has some form of power over the first player’s behavior [37,38], as in many real-life and online situations where behavioral standards are co-defined (e.g., being a reliable seller, offering a service of a given quality, or adequately protecting personal data). The emergence of a standard occurs precisely because unfair behavior of the first player can be punished with non-cooperation. Introducing reputation into the game amplifies the ability to make known who respects that standard and who does not. The variation of social behavior based on the interactor’s reputation allowed us to evaluate how reputation influences self-disclosure and acceptance dynamics. Based on the main findings presented in the literature, we analyzed how reputation affects people’s decision making in an Ultimatum Game, in both the reception and the donation phases.
Generally, we expect reputation to influence behavior in the reception phase, with a higher reputation broadening the acceptance range and, conversely, a lower reputation narrowing it. In the same way, the level of reputation should affect the feedback; namely, participants will feel more inclined to positively evaluate a subject with an already high reputation, and vice versa. The broader reputation effect, in these cases, should appear when subjects engaging in an exchange with a high-reputation counterpart accept amounts of money below the average threshold (around 40% of the endowment) indicated by the literature [19] and evaluate the interaction positively in the feedback phase.
Similarly, we expect reputation to also affect the donation phase; namely, a subject should donate more generously to counterparts with high reputations. This would be in line with indirect reciprocity, which implies that a positive reputation increases the likelihood of being helped by another unrelated person in the future.
In conditions with no reputation, we expect subjects to generally accept on the basis of the average threshold. However, in the donation phase, we could expect social heuristics oriented towards cooperation to take place, so participants will donate more [39].

3. Materials and Methods

3.1. Sampling

The research was conducted in accordance with the guidelines for the ethical treatment of human participants of the Italian Psychological Association (AIP). The participants were recruited on a completely voluntary basis. All the participants (or their legal guardians) signed an informed consent form and could withdraw from the experimental session at any time. The sample comprised 444 participants (76 males) with an average age of 15.82 years (s.d. = 1.30). The ratio between males and females was kept constant across all the experimental conditions. All the participants completed the experiment.

3.2. The Conditions and the Game

In order to verify our hypotheses, we developed four different conditions that concerned the presence or the absence of a reputational system (see Table 1).
Like the original Ultimatum Game, our game included two phases: donation and reception. The order of the phases was constant; namely, the players acted as Donors in the first phase and as Receivers in the second. Although the participants believed that they were interacting with other players, in reality the subjects interacted only with our system, which was programmed to record the proposals made during the donation phase and to generate offers from a uniform distribution (ranging from 0 to 10 euros) in the reception phase. The system also recorded the players’ decisions when they acted as Receivers and generated a random reputation ranking when required by the experimental condition. The probability distribution of reputation was uniform as well. We specify that, in the Reputation ON conditions, the opponent’s reputation was displayed from the first turn, since participants were instructed to believe that their counterparts had acquired that reputation while acting as Donors in previous sessions; the player, instead, was never characterized by a reputation score.
In the donation phase, participants had to decide, over 15 rounds, how much of their stack (from 0 to 10 euros) they wanted to offer to their counterparts. The Donors also knew that the Receivers’ decisions would determine their gain: if the Receivers accepted their offers, the resources were split between them according to the Donors’ will, while nobody got anything if the Receiver refused the deal. We specified to the subjects that, in our game, the exchanges were asynchronous and delayed in time. In the reception phase, players’ interactors were displayed as anonymous (as the players themselves were), so players could not know whether they had interacted with them before. In the sessions in which the reputation system was present, Donors could see the reputation (expressed as colored circles and ranging from +5 to −5) acquired by their counterparts. In the reception phase, the players had to evaluate offers over 15 rounds. Their gain in this phase followed the same rule as in the previous one. As in the donation phase, two reputational conditions were present, so reputation could be either visible or not. After each decision (i.e., accept or refuse), Receivers had to rate the Donor’s behavior with a plus or a minus. This action was requested of the players to increase the credibility of the reputation system. Nonetheless, although the players were instructed to believe that reputation was built up during previous sessions, reputation did not really evolve or change, since it was randomly extracted from a uniform distribution so as to have approximately the same number of observations for each reputation level.
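The mechanics described above can be summarized in a short simulation sketch. This is our reconstruction under stated assumptions (whole-euro offers, integer reputation scores), not the authors’ actual system, and the helper names are hypothetical:

```python
import random

ENDOWMENT = 10          # euros available per turn
TRIALS_PER_PHASE = 15   # rounds in each of the donation and reception phases

def reception_trial(reputation_on: bool) -> dict:
    """One simulated reception-phase trial as presented to a participant.

    The system draws the offer from a uniform distribution over 0-10 euros
    (assumed here to be whole euros). In Reputation ON conditions it also draws
    a random reputation score uniformly from -5..+5; this score never actually
    evolves, regardless of the plus/minus feedback the Receiver later gives.
    """
    offer = random.randint(0, ENDOWMENT)
    donor_reputation = random.randint(-5, 5) if reputation_on else None
    return {"offer": offer, "donor_reputation": donor_reputation}

# A participant in a Reputation ON reception condition sees 15 such trials,
# accepts or refuses each offer, and then rates the Donor with a plus or a minus.
trials = [reception_trial(reputation_on=True) for _ in range(TRIALS_PER_PHASE)]
```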

3.3. Procedures

The experiments took place in a computer lab. Before the experimental session started, the experimenters presented the game to the participants. The instructions were read aloud and explained using a PowerPoint presentation. Once the explanation phase ended, the experimenter led the participants to their designated computers. The participants were separated from one another by cardboard partitions. After completing a brief demographic survey (age, gender, and years of education), the participants were given permission to start the game. The experiments lasted a maximum of 30 min.

3.4. Data Analysis

First, we verified the preconditions necessary for the inferential analyses of the experimental data. For the continuous variables used, the normality of the distribution was assessed through the analysis of asymmetry and kurtosis values, and means and standard deviations were computed. Then, we proceeded with the inferential analyses using a generalized linear mixed model (GLMM) approach [40], due to the repeated-measures structure of the experimental data. GLMMs allow assessing, in a robust way, the net and combined effects of the factors beyond the variance explained by each factor taken individually.
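As a concrete illustration of the repeated-measures approach, the sketch below fits a mixed model with a per-participant random intercept to toy donation data. The software, column names, and toy values are our assumptions (the authors do not report their analysis code), and a linear mixed model on the donation amount stands in here for the GLMMs reported in the Results.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: one row per donation trial (values are illustrative only).
df = pd.DataFrame({
    "participant_id": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "reputation":     [3, -2, 0, 5, -4, 1, 2, -5, 4, 0, -1, 3],  # Receiver's score (-5..+5)
    "donation":       [4, 2, 3, 5, 1, 3, 4, 1, 5, 3, 2, 4],      # euros offered (0-10)
})

# A random intercept per participant accounts for the repeated-measures structure.
model = smf.mixedlm("donation ~ reputation", data=df, groups=df["participant_id"])
result = model.fit()
print(result.summary())
```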

4. Results

In Table 2, the descriptive statistics of the game-related variables are presented.

4.1. Donors’ Reputation Affects the Propensity to Accept

We analyzed, through a generalized linear mixed model, the players’ propensity to accept or reject the Donors’ resource allocations. As expected, higher offers were more likely to be accepted by the Receivers. In addition, allocations that came from well-rated Donors were accepted more often than those made by badly rated Donors. No effect of the receiving condition was found on acceptance behavior (Table 3 and Figure 1).

4.2. How Do People Evaluate the Donors’ Behavior?

Generally, in Ultimatum Games, a “fair” offer is around 40% (i.e., an average of 41.01%) of the amount to be shared [19]. We used this evidence to define feedback coherence as follows:
The Receiver provides coherent feedback when he/she gives a plus to Donors’ offers higher than or equal to 41% and a minus to proposals under this threshold. Conversely, the Receiver acts incoherently when he/she gives a plus to Donors’ offers below 41% or a minus to allocations higher than or equal to 41%.
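The coherence rule above can be expressed as a short predicate (a sketch of our own; the 41% threshold is the meta-analytic “fair” offer cited in the text [19]):

```python
FAIR_SHARE = 0.41  # share of the endowment considered a "fair" offer [19]

def feedback_is_coherent(offer: float, endowment: float, gave_plus: bool) -> bool:
    """Feedback is coherent when a plus is given to offers at or above 41% of the
    endowment and a minus is given to offers below that threshold."""
    offer_is_fair = (offer / endowment) >= FAIR_SHARE
    return gave_plus == offer_is_fair

# Examples on a 10-euro endowment: a plus for a 5-euro offer is coherent,
# while a plus for a 2-euro offer is not.
assert feedback_is_coherent(5, 10, gave_plus=True)
assert not feedback_is_coherent(2, 10, gave_plus=True)
```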
At this point, we investigated which factors could affect the feedback behavior in terms of coherence. In other words, we were interested in assessing whether a change in what is considered “fair” was possible. The results obtained with a GLMM approach are reported in Table 4.
The Receivers’ feedback coherence was influenced by the amount offered by their counterparts. Higher offers were more often considered fair by the Receivers (i.e., received coherent feedback), while lower allocations resulted in fuzzier (i.e., incoherent) feedback behavior. In other words, high offers seem to elicit greater evaluation similarity (i.e., positive feedback), whereas greater differences in judgment between individuals were observed for lower offers, with a portion of individuals acting incoherently (i.e., providing positive feedback to offers under the 41% threshold). Furthermore, there was an interaction effect between the amount offered by Donors and their reputation on the Receivers’ feedback coherence. The Receivers were most coherent with higher offers made by well-rated individuals, while the lowest coherence was found for offers near the 41% threshold made by negatively rated interactors (Figure 2).
Notably, equal offers were treated differently in terms of coherence according to reputation, with those made by well-rated individuals more frequently given coherent feedback.

4.3. Players’ Feedback Behavior Appears to Be Affected by Opponent’s Reputation

To investigate the players’ tendency to provide positive feedback (i.e., a plus), we proceeded with a GLMM for repeated measures. The results are presented in Table 5.
More generous allocations were more frequently rewarded with a like from the Receivers. Moreover, the Donor’s reputation showed a positive association with the Receiver’s probability of responding with a like, independently of the amount offered. Well-rated opponents more easily acquired further positive feedback, while negatively rated opponents were more frequently given a dislike by our participants. Therefore, equally “generous” Donors were treated differently in terms of Receivers’ positive feedback.

4.4. Donation Differences between Conditions

As can be seen from Table 6, the average donation was lower in the sessions in which the reputation system was enabled. Overall, more “generous” allocations were made when Receivers were not identified by their reputation. Specifically, when reputation was absent (i.e., totally anonymous interactions), our participants tended to donate the same amount as to an opponent characterized by a good reputation (i.e., a +3 reputational score on a scale ranging from −5 to +5).

4.5. Opponent’s Reputation Affects the Amount of Donation

The influence of reputation on donation decision making was further investigated by considering only the donation phases in which the reputation system was available to our participants.
Figure 3 underlines the existence of a positive relationship between participants’ average donation and the Receivers’ reputation. Well-rated Receivers received, on average, a greater portion of the Donor’s endowment, while donations towards badly rated opponents were smaller (Table 7).
Finally, as we can gather from Figure 3, strangers received, on average, the typical donation of positively reputed individuals.

5. Discussion

Participants’ donations allowed us to quantify how they depict strangers in terms of reputation. When individuals have to decide how much of their endowment to give to someone else, the reputation of their partner matters. However, when no reputational information was provided by the system, people interacted more generously as Donors. Interestingly, in this case participants donated, on average, as much as they would to a partner with a positive reputation (+3 on a scale up to +5). Without any additional information about their interactors, individuals seem to rely on the automatic predisposition towards cooperation identified by social heuristics hypothesis scholars [39], which is particularly salient from early to mid-adolescence [41]. Since there were no cues about the partners’ trustworthiness that could be used to outweigh this heuristic decision-making process, allocations were made more “indiscriminately”. Instead, when indirect reciprocity mechanisms are at play, cooperation can be preferentially directed towards those individuals whom the group identifies as “valuable” members. Giving an unknown individual (i.e., a stranger) the same amount of resources as a well-rated person may not be a “mismatch” between humans’ evolved psychology and the environment we live in [42]. Indeed, being generous with unrated individuals could be a successful way of identifying other cooperative individuals and turning them into interaction partners for the future [43].
Overall, our paper contributes to a better understanding of users’ behavior towards online strangers. On the one hand, quantifying users’ openness towards strangers may help to prevent the fraudulent exploitation of users [14,15,16,17]. Indeed, since we know that people online tend to trust strangers as if they had a good reputation, new technological solutions and policies are needed to protect users from exploitation. On the other hand, our results could be used to ignite patients’ self-disclosure in teletherapy settings [44,45,46]. Self-disclosure is a fundamental aspect of therapy success. The willingness of people to adopt prosocial behavior towards a stranger could be exploited, by appropriately modulating the participants’ anonymity, to activate reciprocity dynamics within a therapeutic setting (e.g., online group therapy) and thus achieve a higher level of self-disclosure.
As for reputation effects, we observed how greatly people rely on reputation in their decision making. Reputation appeared to push individuals to neglect their personal experience (i.e., feedback) and individual preferences (i.e., acceptance) and to adhere to an emerging group standard. This tendency can be interpreted within the Social Identity Model of Deindividuation Effects [47,48], which defines the conditions needed for conformity to take place online [49].
In our work, we observed that donation, feedback, and acceptance behaviors were all preferentially adjusted in favor of well-rated interactors. Moreover, we observed that individuals were susceptible to reputation’s influence even when reputation was a “fake” cue. Thus, our work contributes to defining the potential effects and biases due to reputation dynamics in virtual environments [29], especially in those relying on rating systems, such as e-markets and e-WOM services, where people are called to choose a product or service that has been previously evaluated by a community [27,29,50,51,52].
Independently of size, allocations coming from well-rated interactors were always preferred (i.e., accepted more often). Thus, reputation affected people’s acceptance thresholds and pushed them to accept less from individuals with good reputations [30].
Although gender does not usually affect offers in the UG [20], we were not able to exclude a gender effect in our specific sample, since the gender ratio within each condition was unbalanced. For this reason, our results may partly reflect the cooperative and befriending tendency of females towards everyone, including strangers [53]. Consequently, any future attempt to understand prosocial behavior within the context of an e-game such as the UG should make an effort to recruit roughly equal numbers of males and females. Another important aspect to be assessed in future research concerns the personal experience that both male and female adolescents gather from UG sessions; the likability of the UG should be considered as a possible variable to be accounted for in a future iteration of this experiment. Moreover, since cultural differences in responder behavior are possible [21], the acceptance-related results could be culturally biased. Therefore, future research should assess whether our results regarding reputation’s influence on acceptance dynamics are robust in other geographical contexts, given that the UG has been widely and traditionally used to study people’s economic decisions in many parts of the world and in different cultures [54,55,56]. Concerning feedback-related behavior, our results are in line with previous studies involving widespread feedback systems [31,57]: independently of the amount offered to the subjects, well-rated individuals had a greater probability of receiving a positive evaluation [31].
Several next steps can be imagined for this research area based on our results. Here, we only analyzed how the counterpart’s reputation affected adolescent players’ choices; what would happen if reputation were assigned to the players themselves, instead of their counterparts, remains to be investigated. Moreover, allowing the counterpart’s rating to change could be a promising path to explore. Thanks to technological advancements, people will increasingly find themselves interacting with non-human entities. Therefore, assessing how people behave when matched with social robots [58,59], deepfakes [60], artificial intelligence [61], or other artificial entities rated by reputation would allow estimating how robust our results are in these scenarios.

6. Conclusions

To conclude, people appeared to behave in a prosocial and “open” way when matched with an online stranger (as if the stranger had a positive reputation). Moreover, reputation appears able to distort the dynamics of a web-based social system. On the one hand, reputation could reduce the effectiveness and robustness of such systems due to potential cascade effects. On the other hand, reputation’s “persuasiveness” could be useful for coping with the free-riding and social-loafing dynamics typical of social virtual systems. Therefore, a wise use of identifiability features within virtual environments may help in effectively managing online interactions (e.g., web projects) [62] and in counteracting some dysfunctional dynamics appearing in public information [63].

Author Contributions

Conceptualization, M.D. and A.G.; Data curation, M.D.; Formal analysis, M.D. and A.G.; Investigation, M.D., S.C., S.C.P. and A.G.; Methodology, A.G.; Project administration, A.G.; Supervision, M.D. and A.G.; Writing—original draft, M.D., S.C. and A.G.; Writing—review & editing, S.C.P. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable; the study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Misoch, S. Stranger on the internet: Online self-disclosure and the role of visual anonymity. Comput. Hum. Behav. 2015, 48, 535–541. [Google Scholar] [CrossRef]
  2. Acquisti, A.; John, L.; Loewenstein, G. Strangers on a plane: Context-dependent willingness to divulge sensitive information. J. Consum. Res. 2011, 37, 858–875. [Google Scholar]
  3. Hallam, C.; Zanella, G. Online self-disclosure: The privacy paradox explained as a temporally discounted balance between concerns and rewards. Comput. Hum. Behav. 2017, 68, 217–227. [Google Scholar] [CrossRef]
  4. Bargh, J.A.; McKenna, K.Y.; Fitzsimons, G.M. Can you see the real me? Activation and expression of the “true self” on the Internet. J. Soc. Issues 2002, 58, 33–48. [Google Scholar] [CrossRef] [Green Version]
  5. Chiou, W.B. Adolescents’ sexual self-disclosure on the internet: Deindividuation and impression management. Adolescence 2006, 41, 547. [Google Scholar]
  6. Joinson, A.N. Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity. Eur. J. Soc. Psychol. 2001, 31, 177–192. [Google Scholar] [CrossRef]
  7. Suler, J. The online disinhibition effect. Cyberpsychol. Behav. 2004, 7, 321–326. [Google Scholar] [CrossRef]
  8. Taddei, S.; Contena, B.; Grana, A. Does web communication warm-up relationships? Self-disclosure in Computer Mediated Communication (CMC). BPA Appl. Psychol. Bull. (Boll. Psicol. Appl.) 2010, 260, 13–22. [Google Scholar]
  9. Weisband, S.; Kiesler, S. Self disclosure on computer forms: Meta-analysis and implications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 13–18 April 1996; ACM: New York, NY, USA, 1996; pp. 3–10. [Google Scholar]
  10. Rand, D.G.; Ohtsuki, H.; Nowak, M.A. Direct reciprocity with costly punishment: Generous tit-for-tat prevails. J. Theor. Biol. 2009, 256, 45–57. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Wedekind, C.; Milinski, M. Human cooperation in the simultaneous and the alternating Prisoner’s Dilemma: Pavlov versus Generous Tit-for-Tat. Proc. Natl. Acad. Sci. USA 1996, 93, 2686–2689. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Archer, J. Self-disclosure. In The Self in Social Psychology; Psychology Press: Hove, UK, 1980. [Google Scholar]
  13. Cozby, P.C. Self-disclosure: A literature review. Psychol. Bull. 1973, 79, 73. [Google Scholar] [CrossRef]
  14. Zlatolas, L.N.; Welzer, T.; Heričko, M.; Hölbl, M. Privacy antecedents for SNS self-disclosure: The case of Facebook. Comput. Hum. Behav. 2015, 45, 158–167. [Google Scholar] [CrossRef]
  15. Krasnova, H.; Kolesnikova, E.; Guenther, O. “It Won’t Happen to Me!”: Self-Disclosure in Online Social Networks. In Proceedings of the 15th Americas Conference on Information Systems, San Francisco, CA, USA, 6–9 August 2009. [Google Scholar]
  16. Liang, H.; Shen, F.; Fu, K.w. Privacy protection and self-disclosure across societies: A study of global Twitter users. New Media Soc. 2017, 19, 1476–1497. [Google Scholar] [CrossRef]
  17. Kim, A.; Dennis, A.R. Says who? The effects of presentation format and source rating on fake news in social media. Mis Q. 2019, 43, 3. [Google Scholar] [CrossRef]
  18. Güth, W.; Schmittberger, R.; Schwarze, B. An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 1982, 3, 367–388. [Google Scholar] [CrossRef] [Green Version]
  19. Tisserand, J.C. Ultimatum game: A meta-analysis of the past three decades of experimental research. In Proceedings of the International Academic Conferences (No. 0802032), International Institute of Social and Economic Sciences, Prague, Czech Republic, 1–4 September 2014. [Google Scholar]
  20. Solnick, S.J. Gender differences in the ultimatum game. Econ. Inq. 2001, 39, 189–200. [Google Scholar] [CrossRef]
  21. Oosterbeek, H.; Sloof, R.; Van De Kuilen, G. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Exp. Econ. 2004, 7, 171–188. [Google Scholar] [CrossRef]
  22. Duradoni, M.; Paolucci, M.; Bagnoli, F.; Guazzini, A. Fairness and trust in virtual environments: The effects of reputation. Future Internet 2018, 10, 50. [Google Scholar] [CrossRef] [Green Version]
  23. De Meo, P.; Fotia, L.; Messina, F.; Rosaci, D.; Sarné, G.M. Providing recommendations in social networks by integrating local and global reputation. Inf. Syst. 2018, 78, 58–67. [Google Scholar] [CrossRef]
  24. Oliveira, T.; Alhinho, M.; Rita, P.; Dhillon, G. Modelling and testing consumer trust dimensions in e-commerce. Comput. Hum. Behav. 2017, 71, 153–164. [Google Scholar] [CrossRef] [Green Version]
  25. Haas, A.; Unkel, J. Ranking versus reputation: Perception and effects of search result credibility. Behav. Inf. Technol. 2017, 36, 1285–1298. [Google Scholar] [CrossRef]
  26. Yap, K.B.; Wong, D.H.; Loh, C.; Bak, R. Offline and online banking–where to draw the line when building trust in e-banking? Int. J. Bank Mark. 2010, 28, 27–46. [Google Scholar] [CrossRef]
  27. Reyes-Menendez, A.; Saura, J.R.; Martinez-Navalon, J.G. The impact of e-WOM on hotels management reputation: Exploring tripadvisor review credibility with the ELM model. IEEE Access 2019, 7, 68868–68877. [Google Scholar] [CrossRef]
  28. Frey, V. Boosting trust by facilitating communication: A model of trustee investments in information sharing. Ration. Soc. 2017, 29, 471–503. [Google Scholar] [CrossRef]
  29. Frey, V.; Van De Rijt, A. Arbitrary inequality in reputation systems. Sci. Rep. 2016, 6, 38304. [Google Scholar] [CrossRef] [Green Version]
  30. Przepiorka, W.; Norbutas, L.; Corten, R. Order without law: Reputation promotes cooperation in a cryptomarket for illegal drugs. Eur. Sociol. Rev. 2017, 33, 752–764. [Google Scholar] [CrossRef] [Green Version]
  31. Duradoni, M.; Gronchi, G.; Bocchi, L.; Guazzini, A. Reputation matters the most: The reputation inertia effect. Hum. Behav. Emerg. Technol. 2020, 2, 71–81. [Google Scholar] [CrossRef]
  32. Collodi, S.; Panerati, S.; Imbimbo, E.; Stefanelli, F.; Duradoni, M.; Guazzini, A. Personality and Reputation: A Complex Relationship in Virtual Environments. Future Internet 2018, 10, 120. [Google Scholar] [CrossRef] [Green Version]
  33. Collodi, S.; Fiorenza, M.; Guazzini, A.; Duradoni, M. How Reputation Systems Change the Psychological Antecedents of Fairness in Virtual Environments. Future Internet 2020, 12, 132. [Google Scholar] [CrossRef]
  34. Basili, M.; Rossi, M.A. Platform-mediated reputation systems in the sharing economy and incentives to provide service quality: The case of ridesharing services. Electron. Commer. Res. Appl. 2020, 39, 100835. [Google Scholar] [CrossRef]
  35. Kokkodis, M. Dynamic, Multidimensional, and Skillset-Specific Reputation Systems for Online Work. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3671201 (accessed on 27 April 2021).
  36. Kusuma, L.; Rejeki, S.; Robiyanto, R.; Irviana, L. Reputation system of C2C e-commerce, buying interest and trust. Bus. Theory Pract. 2020, 21, 314–321. [Google Scholar] [CrossRef]
  37. Thielmann, I.; Spadaro, G.; Balliet, D. Personality and prosocial behavior: A theoretical framework and meta-analysis. Psychol. Bull. 2020, 146, 30. [Google Scholar] [CrossRef] [PubMed]
  38. Zak, P.J.; Kurzban, R.; Ahmadi, S.; Swerdloff, R.S.; Park, J.; Efremidze, L.; Redwine, K.; Morgan, K.; Matzner, W. Testosterone administration decreases generosity in the ultimatum game. PLoS ONE 2009, 4, e8330. [Google Scholar] [CrossRef] [Green Version]
  39. Rand, D.G.; Peysakhovich, A.; Kraft-Todd, G.T.; Newman, G.E.; Wurzbacher, O.; Nowak, M.A.; Greene, J.D. Social heuristics shape intuitive cooperation. Nat. Commun. 2014, 5, 3677. [Google Scholar] [CrossRef] [Green Version]
  40. McCulloch, C.E.; Neuhaus, J.M. Generalized Linear Mixed Models; Wiley Online Library: Hoboken, NJ, USA, 2001. [Google Scholar]
  41. Padilla-Walker, L.M.; Carlo, G.; Memmott-Elison, M.K. Longitudinal change in adolescents’ prosocial behavior toward strangers, friends, and family. J. Res. Adolesc. 2018, 28, 698–710. [Google Scholar] [CrossRef] [PubMed]
  42. Fehr, E.; Henrich, J. Is Strong Reciprocity a Maladaptation? On the Evolutionary Foundations of Human Altruism. 2003. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=382950 (accessed on 27 April 2021).
  43. Raihani, N.J.; Bshary, R. The reputation of punishers. Trends Ecol. Evol. 2015, 30, 98–103. [Google Scholar] [CrossRef]
  44. Valkenburg, P.M.; Sumter, S.R.; Peter, J. Gender differences in online and offline self-disclosure in pre-adolescence and adolescence. Br. J. Dev. Psychol. 2011, 29, 253–269. [Google Scholar] [CrossRef] [PubMed]
  45. Nguyen, M.; Bin, Y.S.; Campbell, A. Comparing online and offline self-disclosure: A systematic review. Cyberpsychol. Behav. Soc. Netw. 2012, 15, 103–111. [Google Scholar] [CrossRef] [PubMed]
  46. Farber, B.A. Patient self-disclosure: A review of the research. J. Clin. Psychol. 2003, 59, 589–600. [Google Scholar] [CrossRef]
  47. Spears, R.; Postmes, T.; Lea, M.; Wolbert, A. When are net effects gross products? Communication. J. Soc. Issues 2002, 58, 91–107. [Google Scholar] [CrossRef]
  48. Postmes, T.; Spears, R.; Sakhel, K.; De Groot, D. Social influence in computer-mediated communication: The effects of anonymity on group behavior. Personal. Soc. Psychol. Bull. 2001, 27, 1243–1254. [Google Scholar] [CrossRef]
  49. Perfumi, S.C.; Bagnoli, F.; Caudek, C.; Guazzini, A. Deindividuation effects on normative and informational social influence within computer-mediated-communication. Comput. Hum. Behav. 2019, 92, 230–237. [Google Scholar] [CrossRef] [Green Version]
  50. Jolivet, G.; Jullien, B.; Postel-Vinay, F. Reputation and prices on the e-market: Evidence from a major french platform. Int. J. Ind. Organ. 2016, 45, 59–75. [Google Scholar] [CrossRef] [Green Version]
  51. Anderson, M.; Magruder, J. Learning from the crowd: Regression discontinuity estimates of the effects of an online review database. Econ. J. 2012, 122, 957–989. [Google Scholar] [CrossRef]
  52. Luca, M. Reviews, Reputation, and Revenue: The Case of Yelp.com; Harvard Business School NOM Unit Working Paper; Harvard Business School: Boston, MA, USA, 2016. [Google Scholar]
  53. Taylor, S.E.; Klein, L.C.; Lewis, B.P.; Gruenewald, T.L.; Gurung, R.A.; Updegraff, J.A. Biobehavioral responses to stress in females: Tend-and-befriend, not fight-or-flight. Psychol. Rev. 2000, 107, 411. [Google Scholar] [CrossRef]
  54. Hoffman, E.; McCabe, K.A.; Smith, V.L. On expectations and the monetary stakes in ultimatum games. Int. J. Game Theory 1996, 25, 289–301. [Google Scholar] [CrossRef]
  55. Cameron, L.A. Raising the stakes in the ultimatum game: Experimental evidence from Indonesia. Econ. Inq. 1999, 37, 47–59. [Google Scholar] [CrossRef] [Green Version]
  56. Roth, A.E.; Prasnikar, V.; Okuno-Fujiwara, M.; Zamir, S. Bargaining and market behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: An experimental study. Am. Econ. Rev. 1991, 81, 1068–1095. [Google Scholar]
  57. Duradoni, M.; Bagnoli, F.; Guazzini, A. “Reputational Heuristics” Violate Rationality: New Empirical Evidence in an Online Multiplayer Game. In International Conference on Internet Science; Springer: Berlin/Heidelberg, Germany, 2017; pp. 370–376. [Google Scholar]
  58. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3. [Google Scholar] [CrossRef] [Green Version]
  59. Broekens, J.; Heerink, M.; Rosendal, H. Assistive social robots in elderly care: A review. Gerontechnology 2009, 8, 94–103. [Google Scholar] [CrossRef] [Green Version]
  60. Westerlund, M. The emergence of deepfake technology: A review. Technol. Innov. Manag. Rev. 2019, 9, 11. [Google Scholar] [CrossRef]
  61. Gillath, O.; Ai, T.; Branicky, M.S.; Keshmiri, S.; Davison, R.B.; Spaulding, R. Attachment and trust in artificial intelligence. Comput. Hum. Behav. 2021, 115, 106607. [Google Scholar] [CrossRef]
  62. Fedushko, S.; Peráček, T.; Syerov, Y.; Trach, O. Development of Methods for the Strategic Management of Web Projects. Sustainability 2021, 13, 742. [Google Scholar] [CrossRef]
  63. Zakharchenko, A.; Peráček, T.; Fedushko, S.; Syerov, Y.; Trach, O. When Fact-Checking and ‘BBC Standards’ Are Helpless: ‘Fake Newsworthy Event’ Manipulation and the Reaction of the ‘High-Quality Media’ on It. Sustainability 2021, 13, 573. [Google Scholar] [CrossRef]
Figure 1. Acceptance dynamics with respect to the positive and negative reputation of the interactor. In the inset, the acceptance rate trend for the receiving conditions is shown.
Figure 2. Feedback coherence trend in relation to Donors’ reputation levels.
Figure 3. Comparison between the average donation trend with respect to Receivers’ reputation scores (dots) and the amount offered without the reputation system (dashed line).
Table 1. Experimental Design. Number of subjects for each condition.

                         | Reception: Reputation ON | Reception: Reputation OFF
Donation: Reputation ON  | 111                      | 111
Donation: Reputation OFF | 111                      | 111
Table 2. Descriptive statistics. Means and standard deviations (in brackets) are presented for all the game-related variables within each experimental condition.

Condition | Amount Offered | Acceptance Rate | “Plus” Feedback | Feedback Coherence
C1        | 3.40 (1.91)    | 0.69 (0.46)     | 0.57 (0.49)     | 0.76 (0.42)
C2        | 3.24 (1.89)    | 0.67 (0.47)     | 0.59 (0.49)     | 0.80 (0.39)
C3        | 3.75 (1.89)    | 0.65 (0.47)     | 0.57 (0.49)     | 0.73 (0.44)
C4        | 3.42 (1.66)    | 0.69 (0.46)     | 0.63 (0.48)     | 0.80 (0.39)
General   | 3.48 (1.24)    | 0.67 (0.17)     | 0.59 (0.19)     | 0.78 (0.11)

Note: C1: Rep-ON (Donation) and Rep-ON (Reception); C2: Rep-ON (Donation) and Rep-OFF (Reception); C3: Rep-OFF (Donation) and Rep-ON (Reception); C4: Rep-OFF (Donation) and Rep-OFF (Reception). Amount offered: the quantity of the endowment offered per turn. Acceptance rate: ratio between offers accepted (1) and refused (0) by the Receivers. “Plus” feedback: the Receivers’ rate of positive feedback. Feedback coherence: the rate of positive feedback towards Donors who offered at least 41% of their endowment and of negative feedback towards proposals under this threshold.
Table 3. Generalized Linear Mixed Models 1. Acceptance dynamics.

GLMM Best Model for Acceptance Dynamics
           | Model Precision | Akaike  | F           | Df-1 (2)
Best Model | 78.5%           | 517.077 | 219.215 *** | 2 (3356)

Fixed effects
Factor         | F           | Df-1 (2) | Coefficient (β) | Student t
Reputation     | 11.174 ***  | 1 (3356) | 0.067           | 3.343 ***
Amount offered | 436.218 *** | 1 (3356) | 0.546           | 20.886 ***

*** = p < 0.001.
Table 4. Generalized Linear Mixed Models 4. Coherence dynamics.

GLMM Best Model for Coherence Dynamics
           | Model Precision | Akaike  | F         | Df-1 (2)
Best Model | 75.3%           | 704.833 | 5.365 *** | 3 (3355)

Fixed effects
Factor                      | F         | Df-1 (2) | Coefficient (β) | Student t
Reputation × Amount Offered | 7.760 *** | 1 (3355) | 0.021           | 2.786 ***
Amount offered              | 8.857 *** | 1 (3355) | 0.067           | 2.976 ***

*** = p < 0.001.
Table 5. Generalized Linear Mixed Models 3. Plus dynamics.

GLMM Best Model for Plus Dynamics
           | Model Precision | Akaike  | F           | Df-1 (2)
Best Model | 75.5%           | 557.493 | 244.047 *** | 2 (3356)

Fixed effects
Factor         | F           | Df-1 (2) | Coefficient (β) | Student t
Reputation     | 15.497 ***  | 1 (3356) | 0.068           | 3.937 ***
Amount offered | 484.375 *** | 1 (3356) | 0.446           | 22.009 ***

*** = p < 0.001.
Table 6. Generalized Linear Mixed Models 3. Effect of the introduction of a reputation system on the donation.

GLMM Best Model for Donation Dynamics vs. Reputation System
           | Akaike     | F          | Df-1 (2)
Best Model | 27,604.197 | 57.050 *** | 1 (6685)

Fixed effects
Factor                  | F          | Df-1 (2) | Coefficient (β) | Student t
Reputation System (Off) | 57.050 *** | 1 (6685) | 0.352           | 7.553 ***

*** = p < 0.001.
Table 7. Generalized Linear Mixed Models 5. Average donation.

GLMM Best Model for Average Donation Dynamics
           | Akaike     | F           | Df-1 (2)
Best Model | 13,745.669 | 154.387 *** | 1 (3357)

Fixed effects
Factor              | F           | Df-1 (2) | Coefficient (β) | Student t
Receiver Reputation | 154.387 *** | 1 (3357) | 0.140           | 14.425 ***

*** = p < 0.001.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
