1.1. Trust
Trust is difficult to define but evokes similar feelings in most people. In general, trust can be described as one’s certainty that an entity functions as expected, despite one’s inability to monitor and control the environment in which the entity operates [
1]. Due to the differences in the origins of trust in different fields, there are different types of trust with different aspects and properties, and each needs to be modeled in its own way [
2]. In psychology, trust is the psychological state of a person who accepts or ignores the possibility of being vulnerable to the trustee based on the positive expectations of the trustee’s intent or behavior [
3]. It is believed that trust has three dimensions: Cognitive, affective, and behavioral [
4]. In sociology, trust has been defined as “a bet about the future contingent actions of the trustee” [
5]. However, this bet or expectation is said to be trust only if it has some impact on the actions of the person who places the trust in another. Trust can be understood and discussed from two perspectives: Personal and social. At the personal level, which is a psychological viewpoint, trust revolves around the notion of vulnerability [
6]. This trust can be differentiated from cooperation based on the presence of guarantees (assurances) that interactions will be controlled and bad behavior will be met with sanctions (penalties). However, cooperation can be considered a type of trust if it relies more on the future consequences of actions (fear of future behavior of others) [
7]. In this respect, social trust only has two dimensions: Cognitive and behavioral, as the affective dimension builds over time as the trust between involved people increases [
8]. At the social level, trust can be viewed as a property of a social group, which is reflected in the group’s collective psychological state. This causes the group members to act under the assumption that other members are trustworthy and expect other members to trust them as well [
9].
In the context of computer science, trust can be separated into two categories: User trust and system trust. The concept of user trust originates from psychology and sociology [
10]. The standard definition of user trust is an entity’s subjective expectation regarding the future behavior of others [
11]. According to this definition, trust is inherently personal. In online systems, such as eBay and Amazon, trust is built on feedback arising from the past interactions of users [
12]. From this perspective, trust is relational. Frequent interactions between two people strengthen the relationships that build trust based on past experiences. Trust increases with positive experiences and decreases with negative ones.
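The feedback dynamic just described, trust rising with positive experiences and falling with negative ones, can be sketched in a few lines. The beta-style estimator below is an illustrative assumption in the spirit of such feedback systems, not the formula any particular site uses.

```python
# Illustrative sketch: trust derived from counts of positive and negative
# past interactions, in the spirit of feedback-based online systems.
# The beta-reputation-style formula is an assumption for illustration.

def trust_score(positive: int, negative: int) -> float:
    """Expected trust given counts of good and bad past interactions.

    Uses the mean of a Beta(positive + 1, negative + 1) distribution,
    so an unknown partner (0, 0) starts at a neutral 0.5.
    """
    return (positive + 1) / (positive + negative + 2)

# Trust increases with positive experiences ...
assert trust_score(8, 2) > trust_score(4, 2)
# ... and decreases with negative ones.
assert trust_score(4, 4) < trust_score(4, 2)
# A stranger is neither trusted nor distrusted.
assert trust_score(0, 0) == 0.5
```

With this rule, each new interaction shifts the score toward the observed outcome, matching the relational view of trust described above.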
There are two types of trust in online systems: Direct trust, which results from a person’s experience of direct interaction with others, and recommendation trust, which is based on the experiences of other people in a social network and, in a sense, grows through the propagative property of trust. In P2P-based trust models, peers collect information about other peers from their own social networks, which are also known as referral networks [
13]. In these models, each peer is assessed from two perspectives: Trustworthiness as an interaction partner, that is, the capability to provide a certain service, or in other words expertise [
14], and trustworthiness as a recommender, that is, the ability to provide good recommendations, also known as sociability. After each interaction, the expertise and sociability ratings of the peers in the referral chain are updated to reflect the member’s experience. The member’s ratings of its immediate neighbors must also be updated periodically to reflect changes in the trust placed in those peers based on their expertise and sociability.
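The expertise/sociability update described above can be sketched as follows; the exponential-moving-average rule and all names are illustrative assumptions rather than the cited model’s actual equations.

```python
# Minimal sketch of a referral-chain update: after an interaction, the
# provider's expertise and each referrer's sociability are revised.
# The update rule and all names are illustrative assumptions.

ALPHA = 0.3  # weight given to the newest experience

def update(old: float, outcome: float, alpha: float = ALPHA) -> float:
    """Blend a new outcome in [0, 1] into an existing rating."""
    return (1 - alpha) * old + alpha * outcome

def rate_interaction(peers: dict, provider: str, referrers: list, outcome: float) -> None:
    """Update the provider's expertise and the sociability of every peer
    in the referral chain that recommended the provider."""
    peers[provider]["expertise"] = update(peers[provider]["expertise"], outcome)
    for r in referrers:
        peers[r]["sociability"] = update(peers[r]["sociability"], outcome)

peers = {
    "A": {"expertise": 0.5, "sociability": 0.5},
    "B": {"expertise": 0.5, "sociability": 0.5},
}
# B referred us to A, and the service A provided was good (outcome 1.0):
rate_interaction(peers, "A", ["B"], 1.0)
assert peers["A"]["expertise"] > 0.5    # A looks like a better provider
assert peers["B"]["sociability"] > 0.5  # B looks like a better recommender
```

A bad outcome (e.g., 0.0) would symmetrically lower both ratings, so good recommenders and good providers are rewarded along the whole chain.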
The standard definition of system trust originates from the field of security [
15]. According to this definition, trust is the “expectation that a device or system will faithfully behave in a particular manner to fulfill its intended purpose”. For example, we call a computer trustworthy if its software and hardware perform their duties as we expect, that is, if its services are available, functioning, and behaving consistently as they should [
16]. The concept of system trust must be supported by software and hardware solutions. For example, researchers in [
17] presented a software-based mechanism, and in [
18] a hardware-based mechanism for this purpose. In all disciplines, trust relationships revolve around two concepts: Risk and interdependence [
19]. Risk originates from uncertainty about the intentions of others. Interdependence means that the people in a trust relationship have somewhat aligned interests that neither can achieve without the other. A relationship that does not meet these two conditions is not a trust relationship. Hence, risk and interdependence are necessary conditions for trust, and changes in these conditions can alter the form and level of trust.
Trust has many different aspects. In its calculative aspect, trust is said to be the result of a calculation by the trustor designed to maximize the trustor’s expected payoff from the interaction. This aspect is popular in economics for modeling trust and cooperation with Prisoner’s Dilemma games, and it is also commonly used in organizational science. In a work published in 1990, James Coleman describes this phenomenon as follows: A rational actor trusts if the ratio of the chance of success to the chance of failure is greater than the ratio of the value lost in the event of failure to the value gained in the event of success [
20]. In the relational aspect, trust is defined as the confidence built over time as a result of repeated interactions between the trustor and the trustee. The basis of relational trust is the trustor’s knowledge about the relation itself, as trustworthiness and dependability in previous interactions strengthen positive expectations about the trustee’s intentions [21]. In computer science, this form of trust is called direct trust, that is, trust based on the direct interactions of two individuals. The emotional aspect of trust is defined in terms of the security and comfort with which the trustor relies on the trustee [
22]. In psychology, emotional trust is said to be the outcome of direct relations between individuals [
23]. Influenced by cognitive trust, emotional trust helps the trustor develop a positive perception of the continuity of the relationship. Empirical studies, such as [
24], have shown that the trustor’s previous direct experiences can influence the trustor’s emotions toward, and emotional trust in, the trustee. Holmes believes that emotional trust is analogous to a sense of emotional security, which helps a person feel comfortable relying on another person beyond the existing evidence about that person [
23]. In its cognitive aspect, trust is the sense of confidence derived from reason and rational behavior [
9]. According to the social capital theory [
25], cognitive trust is influenced by three forms of social capital: Information channels, norms and sanctions, and the trustee’s obligations to the trustor. The trustor’s cognitive trust in the trustee can also be influenced by social relation networks and strong relations between members [
26]. Specifically, positive referrals by social network relations increase the cognitive trust of the trustor toward the trustee [
27]. Mullring’s research [
28] suggests that cognitive trust precedes emotional trust and that emotional trust leads to the formation of desirable or undesirable expectations on the part of the trustee. Institutional trust results from an environment created by an organization in which cooperation between members is encouraged and misconduct is properly disciplined [
9]. This can be done at the organizational level [
29] or at the social level (e.g., legal systems formed to protect individual and property rights). For example, Publius [
30] is an application that utilizes institutional trust to help users publish content anonymously without concern about censorship and manipulation of contents.
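Of the aspects above, the calculative one admits a direct formalization. The sketch below transcribes Coleman’s rule from the text; the function and variable names are chosen here for illustration.

```python
# Coleman's calculative trust rule, transcribed from the text: trusting is
# rational when (chance of success / chance of failure) exceeds
# (potential loss / potential gain). Names are illustrative.

def rational_to_trust(p_success: float, loss: float, gain: float) -> bool:
    """True if p/(1-p) > loss/gain, i.e., the expected gain from trusting
    outweighs the expected loss."""
    p_failure = 1.0 - p_success
    # Cross-multiplied form, which avoids division by zero when
    # p_failure or gain is 0.
    return p_success * gain > p_failure * loss

# A 90%-likely success gaining 10 while risking a loss of 50 is worth it:
assert rational_to_trust(0.9, loss=50, gain=10)      # 0.9*10 > 0.1*50
# At even odds the same bet is not:
assert not rational_to_trust(0.5, loss=50, gain=10)  # 0.5*10 < 0.5*50
```

The cross-multiplied form makes clear that this is simply an expected-value comparison between trusting and not trusting.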
Trust also has multiple properties that are critical to understanding this concept. First, trust is context-specific, meaning that it is always bound to a particular context. For example, if person A trusts person B with regard to medical knowledge, this trust does not necessarily extend to mechanical knowledge; person B may be trustworthy as a doctor but not as a repairman. This property is not the same as the trust context, which represents the environment in which the trust relationship exists (e.g., law enforcement, insurance, social control) [
31]. The second property of trust is its dynamic nature, in the sense that it can increase or decrease with new experiences (interaction or observation) [
32]. Trust may also decay over time: Newer experiences matter more than older ones, and old experiences gradually become obsolete. This characteristic is extensively modeled in computer science, for example by aging old experiences, giving more weight to new experiences, or using only the most recent experiences. In some models, trust computations are performed periodically to ensure that the results stay up-to-date. The next property of trust is its propagative nature: When person A trusts person B, and person B trusts person C, then person A somewhat trusts person C even without knowing him. However, this does not mean that trust is transitive [
33]. It is the propagative nature of trust that allows trust information to disseminate from one member to another member of a social network and a trust chain to be created. This property of trust has been the subject of many studies, such as [
34,
35], where researchers have used this property in their trust models. Another property of trust is that it is self-reinforcing: People are likely to behave positively toward persons whom they trust; conversely, they are less likely to interact with persons they do not trust enough, which may lead to even less trust between them [36]. The last property of interest to this study is that trust is event-sensitive: While it takes a long time to build, trust can be destroyed by a single event [
37].
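Two of the properties above, the dynamic (decaying) nature of trust and its propagation along chains, are the ones most often operationalized in computer science. The following sketch illustrates both; the half-life decay and the multiplicative chain rule are illustrative assumptions, not prescriptions from the cited models.

```python
# Sketches of two trust properties: time-decayed aggregation of
# experiences (dynamic trust) and chain propagation (propagative trust).
# The decay constant and the chain rule are illustrative assumptions.

import math

def aggregate(experiences, now, half_life=30.0):
    """Time-weighted mean of (timestamp, score) experiences, so newer
    experiences count more and old ones gradually become obsolete."""
    weights = [math.exp(-math.log(2) * (now - t) / half_life)
               for t, _ in experiences]
    total = sum(weights)
    return sum(w * s for w, (_, s) in zip(weights, experiences)) / total

def chain_trust(edge_values):
    """Naive chain estimate that multiplies edge trust values, so derived
    trust weakens (never grows) along the chain."""
    result = 1.0
    for v in edge_values:
        result *= v
    return result

# A recent bad experience outweighs older good ones:
history = [(0, 1.0), (10, 1.0), (95, 0.0)]
assert aggregate(history, now=100) < 0.5

# A trusts B (0.9) and B trusts C (0.8): A's derived trust in C is weaker.
assert chain_trust([0.9, 0.8]) < 0.8
```

The multiplication in `chain_trust` also captures why propagation is not full transitivity: Derived trust only attenuates as chains grow longer.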
1.2. Related Works
Generally, trust evaluation models in social networks can be divided into three categories: Trust models based on network topology, trust models based on interaction, and hybrid trust models. Examples and further explanation of each category are provided below [2].
1.2.1. Trust Models Based on Network Topology
Network topology is influenced by the level of trust in a social network. High network density (i.e., more relations between members) can indicate greater trust between members, and increases in members’ output (out-degree) and input (in-degree) can, in turn, accompany greater trust between them. Studies on network topology [
38] have shown that:
Members with higher output levels enjoy higher levels of trust.
If a person’s relations are oriented more toward individuals with high output levels, that person is endowed with a higher level of trust.
While the centrality of individuals (those at the center of the network) has a positive impact on their levels of trust, the average level of trust across all members decreases as the entire network becomes more centralized.
The measures presented in that study have been adopted by numerous researchers, but they do not hold universally. For example, some social networks form around controversies and objections; in such networks, members with high output levels may simply be voicing objections to a specific topic, and high output therefore does not imply trust in them. Closer examination reveals that trust-network topology is usually built on the concept of trust or on the friend-of-a-friend (FOAF) protocol. Generally, a network of trust is created for each person: Its nodes represent the other members of the individual’s social network, and its edges carry the amount of trust in each member. Various approaches are then used to traverse the network and infer the trust between two nodes. This approach relies on the propagation of trust for its estimation. Some of the more important studies in this domain are reviewed below.
In the method presented in [39], which extends the FOAF concept to create a trust network on the semantic web, individuals are allowed to specify their level of trust in people they know. In this model, nine discrete levels, from “very trusted” to “very untrusted”, are used to express trust. Trust can be expressed either at a general level or in a specific field (domain), and several different trust levels can be assigned to an entity across various fields; the model uses an ontology to define these levels. The FOAF graph developed through trust annotation is also used to derive the trust between two nodes in the network that are not directly connected, with trust calculated over the network topology using weighted edges.
In the TidalTrust method, the FOAF relation list is used to extract trust relationships between two users in a social network [40]. This method is based on the assumption that neighbors who trust each other more highly are more likely to agree about the trustworthiness of third parties. Accordingly, the paths between two users who have no direct relation, and the trust levels among the users along these paths, are used to calculate trust. The method found that shorter paths and higher trust levels along them yield smaller differences from actual ratings, and thus more stable trust estimates. Continuing her research in the domain of trust, Golbeck conducted another study [41] focusing on the similarity of users’ profiles as a basis for trust in social networks. Other investigations, such as [42], also discussed trust based on the similarity between two members. In that study, the main component of similarity between two users in the virtual world was the similarity of the content they published, measured with cosine similarity. The weakness is that if two texts with the same keywords are written, one to confirm and one to refute a theory, they are assessed as similar under tf-idf criteria even though they are in fact opposed (exactly this occurred in the example provided in that study); many such counterexamples can be raised against the method. In another model [43], an extended version provided by Golbeck, mutual trust ratings between entities in the network were considered and trust was calculated via a graph with weighted edges. First, the similarity between two raters was estimated by comparing the ratings they gave to the same entities. This similarity was then used to select suitable neighbors whose recommendations could be used. The recommendations were compared, and the one belonging to the rater most similar to the user seeking the recommendation was selected.
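The tf-idf objection above is easy to reproduce. The sketch below uses raw term counts in place of tf-idf weights and two invented sentences; it shows a confirming and a refuting text with the same keywords receiving a high cosine similarity.

```python
# Demonstrates the objection raised in the text: two texts with the same
# keywords but opposite meanings score as highly similar. Plain term
# counts stand in for tf-idf; the sentences are invented for illustration.

import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity of two texts under a bag-of-words model."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm

confirm = "the theory is supported by the evidence"
refute = "the theory is not supported by the evidence"

# Opposed claims, yet their term vectors are nearly identical:
assert cosine(confirm, refute) > 0.9
```

A single negation word barely changes the vector, which is exactly why content similarity alone is a weak proxy for agreement, and hence for trust.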
Appleseed is another approach, established to create a local group trust metric in the domain of the semantic web [44]. It was created to avoid the need to explore the complete trust graph and to reduce computational complexity by restricting the calculation to a partial trust graph. In [45], researchers also used a graph-based solution and a trust network to build a node recommender for social networks; this model used similarities between graphs to improve recommendations. In another study [46], a method was developed for calculating trust based on a trust chain and a trust graph; the proposed model used a proof-of-trust graph to calculate trust within a trust chain. In the social trust model presented by Caverlee, social relations and feedback were employed simultaneously to calculate trust [47]. Users could rate each other following interactions, and a trust manager then combined these ratings to obtain the social trust between members. Each member’s rating was also weighted by the quality of their relations (high-quality relations meaning more relations with members who have high trust levels).
In another model, called SUNNY [48], the focus was on data obtained from social chains and channels; the authors presented a Bayesian trust-extraction model to estimate the reliability of trust information obtained from a specific source. In another study [49], unlike many similar investigations that extract a trust network from explicit user feedback, the researcher developed this concept without explicit ratings by users. In the proposed method, trust levels and trust links were calculated by considering each user’s reputation (i.e., the person’s skill in a specific domain) and their level of dependence on that specific issue or field. The method comprised two main steps:
Calculating a user’s skill in a specific subject: Estimating the quality of reviews of the user’s content, relying on the reputation of those who rated them or on the authors’ reputation.
Determining the level of dependence between users and thematic categories: Calculating the average ratings and reviews of users within thematic categories, and then calculating the level of trust from a user’s dependence on a subject and other users’ skills concerning that subject.
In [50], researchers provided a gravity-based method to estimate trust, with two main steps. First, the strength of friendship was estimated for each user based on the extent of their trusted neighborhood, interpreting their relations with others in terms of trust levels and constraints. Then, neighborhood information in the social network was used to compute effective trust toward non-neighbors. The model was built on the assumption that social relations can change over time and that such relations may impose restrictions on trust relationships.
Methods that take network topology as the basis for calculating trust shed light only on the number of users connected to one another and on the flow of trust through the network. From this perspective, these methods have been criticized for not taking the actual interactions between users into account, even though the volume, number, and content of what users share are among the main indicators of trust in social networks.
1.2.2. Trust Models Based on Interaction
Unlike the models explained in the previous section, some models use only the interactions within a network to estimate trust. The major studies in this domain are outlined below. In the method proposed in [51], a group of researchers predicted trust in an online community from behavioral patterns of user interactions. They identified two categories for representing users’ interactions and actions in a community:
Categorizing users’ activities in terms of the information they share, such as reviews, posted comments, and ratings, through measures such as the number/frequency of reviews, the number/frequency of ratings, and the average number/length of posted comments.
Categorizing pairwise interactions according to the different relations that may occur between two individuals, for example between author and rater, author and author, or rater and rater.
This model also takes into account the time difference between the user reactions that create a relationship, which the authors call “temporary cause”. A supervised learning approach is provided that predicts trust between users based on evidence obtained from the users themselves (user factors) as well as information obtained from the interactions between two users (interaction factors). These factors are used to train a classifier that predicts trust between users.
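The factor construction described above can be sketched as follows. The specific feature names and example values are invented for illustration; the cited work would feed such vectors to a trained classifier rather than inspect them directly.

```python
# Sketch of combining user factors and interaction factors into one
# feature vector for a trust classifier. All feature names and values
# are invented for illustration.

def user_factors(user: dict) -> list:
    """Activity-based features of a single user."""
    return [user["n_reviews"], user["n_ratings"], user["avg_comment_len"]]

def interaction_factors(pair: dict) -> list:
    """Features of the pairwise interactions between two users."""
    return [pair["author_rater_events"], pair["rater_rater_events"]]

def feature_vector(u1: dict, u2: dict, pair: dict) -> list:
    """Concatenate both users' factors with their interaction factors."""
    return user_factors(u1) + user_factors(u2) + interaction_factors(pair)

alice = {"n_reviews": 40, "n_ratings": 120, "avg_comment_len": 35.0}
bob = {"n_reviews": 12, "n_ratings": 30, "avg_comment_len": 20.0}
pair = {"author_rater_events": 9, "rater_rater_events": 4}

x = feature_vector(alice, bob, pair)
assert len(x) == 8  # 3 + 3 user features, 2 interaction features
```

A classifier trained on many such labeled vectors would then output a trust/no-trust prediction for a new pair of users.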
STrust is a social trust model based exclusively on interactions in social networks [52]. This model consists of two types of trust:
Popularity trust, which refers to the acceptance of a member in the community and reflects the trust other members place in that member.
Participation trust, which refers to a member’s participation in the community and reflects that member’s trust in the community.
Popularity trust is extracted from criteria such as the number of followers, the number of readers, and the positive feedback on a member’s posts. Participation trust comprises criteria such as the frequency of a member’s visits to the network/organization, the number of people the member follows, and the number of posts read and commented on. A combination of popularity trust and participation trust thus provides a foundation for deriving social trust in a community.
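A minimal sketch of how STrust’s two components might be combined follows; the concrete formulas, normalizations, and mixing weight are illustrative assumptions rather than the model’s actual definitions.

```python
# Illustrative sketch of STrust's two components and their combination.
# The formulas and the mixing weight are assumptions for illustration.

def popularity_trust(followers: int, positive_feedback: int, members: int) -> float:
    """How much the community trusts the member (acceptance)."""
    return min(1.0, (followers + positive_feedback) / members)

def participation_trust(visits: int, posts_read: int, comments: int, days: int) -> float:
    """How much the member engages with (trusts) the community."""
    return min(1.0, (visits + posts_read + comments) / (10 * days))

def social_trust(pop: float, part: float, w: float = 0.5) -> float:
    """Combine both components into one social trust score in [0, 1]."""
    return w * pop + (1 - w) * part

pop = popularity_trust(followers=30, positive_feedback=20, members=100)
part = participation_trust(visits=60, posts_read=200, comments=40, days=30)
assert 0.0 <= social_trust(pop, part) <= 1.0
```

The weight `w` lets a community emphasize being trusted (popularity) or trusting (participation) when computing overall social trust.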
In a similar study, trust was estimated on the basis of behavioral interactions between members of a social network [53]. Behavioral trust is estimated from two types of trust:
Conversational trust, which reflects the length and/or frequency of the relations between two members; longer or more frequent relations indicate greater trust between two individuals.
Publication trust, which refers to one member republishing information received from another person in the network; republishing more of a person’s information shows trust in that information and implicitly reflects trust in its producer.
Interaction-based social trust models consider interactions when estimating trust but disregard the network topology, which provides important information about the members interacting with each other in a community and is itself a significant source for estimating social trust. Trust models therefore need to consider both graph topology and interactions to calculate social trust in social networks.
1.2.3. Hybrid Trust Models
Hybrid trust models use a combination of interactions and social network topology to calculate social trust. In this respect, a group of researchers [54] proposed a model for opportunistic networks. Such networks allow users to participate in various social interactions via applications such as content distribution and micro-blogging. The model takes network topology and its dynamics into account and provides two complementary approaches for building social trust:
Explicit social trust, which is established on the basis of conscious social relations. Whenever two users interact, they exchange their lists of friends and store them as friendship graphs. Trust is then derived from the friendship graph, in which individuals assign the highest trust level, with a value of one, to those with whom they have a direct relationship.
Implicit trust, which is created based on the frequency and length of the relationships between two users. Two criteria are used for this purpose: Familiarity and similarity of nodes. Familiarity refers to the duration of the interactions/relations between two nodes, and similarity is the degree of overlap between the two nodes’ familiarity circles.
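The two implicit-trust criteria above can be sketched as follows; the normalization and the Jaccard-style overlap measure are illustrative assumptions, not the cited model’s actual formulas.

```python
# Sketch of the familiarity and similarity criteria for implicit trust.
# The normalization and overlap measure are illustrative assumptions.

def familiarity(contact_hours: float, max_hours: float) -> float:
    """Duration of interaction between two nodes, normalized to [0, 1]."""
    return min(1.0, contact_hours / max_hours)

def similarity(friends_a: set, friends_b: set) -> float:
    """Overlap of the two nodes' familiarity circles (Jaccard index)."""
    union = friends_a | friends_b
    return len(friends_a & friends_b) / len(union) if union else 0.0

def implicit_trust(fam: float, sim: float, w: float = 0.5) -> float:
    """Combine familiarity and similarity into one implicit trust score."""
    return w * fam + (1 - w) * sim

a_friends = {"carol", "dave", "erin"}
b_friends = {"dave", "erin", "frank"}
t = implicit_trust(familiarity(12.0, 24.0), similarity(a_friends, b_friends))
assert 0.0 <= t <= 1.0
```

Long-running interactions and heavily overlapping friend circles both raise the score, matching the intuition that familiar, socially embedded pairs trust each other more.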
In this model, explicit social trust is estimated from the network topology, while implicit trust is estimated from users’ interactions in the network. However, only the duration and frequency of interactions are taken into account, whereas the nature of the interactions is of utmost importance for estimating trust between two individuals: When two individuals are debating, for example, their interaction does not imply trust.