Article

Trust Model of Privacy-Concerned, Emotionally Aware Agents in a Cooperative Logistics Problem

by
Javier Carbo
*,† and
Jose Manuel Molina
Computer Science Department, University Carlos III de Madrid, Av Gregorio Peces Barba 22, Campus de Colmenarejo, 28270 Madrid, Spain
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2023, 13(15), 8681; https://doi.org/10.3390/app13158681
Submission received: 2 July 2023 / Revised: 17 July 2023 / Accepted: 18 July 2023 / Published: 27 July 2023

Abstract

In this paper, we propose a trust model to be used in a hypothetical mixed environment where humans and unmanned vehicles cooperate. We address the inclusion of emotions inside a trust model in a coherent way to investigate the practical approaches to current psychological theories. The most innovative contribution of this work is the elucidation of how privacy issues play a role in the cooperation decisions of the emotional trust model. Both emotions and trust were cognitively modeled and managed with the beliefs, desires and intentions (BDI) paradigm in autonomous agents implemented in GAML (the programming language of the GAMA agent platform) that communicate using the IEEE FIPA standard. The trusting behavior of these emotional agents was tested in a cooperative logistics problem wherein agents have to move objects to destinations and some of the objects and places are associated with privacy issues. Simulations of the logistic problem show how emotions and trust contribute to improving the performance of agents in terms of both time savings and privacy protection.

1. Introduction

Including emotions in the interactions between humans and autonomous computational entities (agents) is among the current challenges faced by artificial intelligence: so-called affective computing [1]. Success in addressing this challenge would provide efficiency and common understanding in such interactions. Trust (how trust is achieved, applied and updated) is also of considerable relevance with respect to how and whether such interactions between humans and agents take place. Both uniquely human concepts, trust and emotion have been addressed in the field of psychology from different theoretical perspectives in the scientific literature, as discussed in Section 2. We intend to suggest a system of agents that integrates the existing links between privacy, trust and emotions in a meaningful way that is coherent with such literature.
We intend to achieve this goal by drawing inspiration from a hypothetical cooperative logistics problem wherein humans and embodied unmanned vehicles walk with gaits that reflect their internal emotions and also perceive the emotions of the other agents they meet (by observing their gaits). Although our simulation does not include humans or futuristic embodied unmanned vehicles able to express emotion through their gait, the implemented agents reason and communicate in a human-like way, and they perceive simulated emotions through the hypothetical gaits of the agents they meet. In the suggested cooperative logistics problem, these agents have to perform repetitive tasks (moving boxes). Because several of these tasks may overwhelm the ability of a single agent, agents request the cooperation of other agents to perform some of these tasks (moving boxes). Asking for and accepting cooperation involves forming a trusting relationship, wherein trust in other agents is built not only on knowledge of past direct interactions but also on the internal emotions of the agent and on the interpretation of the perceived emotion of the interacting agent. In this way, internal emotions and the perception of the emotions of other agents act as indirect knowledge in the trust decision, replacing the role that reputation information about third parties often plays in trust models. The trust model we suggest is therefore focused on social punishment applied to misbehaving agents (non-cooperative behavior) through reasoning about their own and alien emotions (acquired by observation and based on the privacy concern associated with the task to be accomplished).
On one hand, our intended hypothetical use case consists of humans and embodied unmanned vehicles moving boxes in the same environment. Existing unmanned vehicles are either able to autonomously transport boxes in real urban environments or are embodied and able to walk with a human-like gait; however, the two features are currently mutually exclusive. In a setting in which only humans are able to walk with an expressive gait, only human emotions can be perceived, so our model must be significantly adapted to this limitation.
On the other hand, the privacy issues associated with the boxes may take several forms depending on the nature of the box itself; for instance, some real-life situations in which humans may feel shame because of the object they are carrying include:
  • A bouquet of flowers, especially if the handler is seen in a suspicious way due to personal circumstances, e.g., already married, too young/old, etc.;
  • A stroller for a baby, especially if the handler’s work colleagues do not know about it;
  • A set of masks when there is a mask shortage in a pandemic situation;
  • An item of clothing that does not conform to the perceived gender role of the handler;
  • Any kind of item with a political/ideological meaning.
The possible reasons why privacy issues induce shame in the handler differ widely in nature, making it nearly impossible to produce an exhaustive list. Our work is not intended to be specific to any such context.
Our contribution is not exclusively focused on the tasks to be accomplished over time (in our use case, the logistic problem, i.e., sharing the boxes to be moved, produces time savings) but is also strongly linked to the avoidance of privacy disclosure. The avoidance of privacy disclosure can significantly contribute to a decrease in the negative effects of social prejudices against discriminated minorities. Our combined goal consists of both the task to be accomplished and the avoidance of privacy disclosure. The experimental results reported in Section 7 are presented to show how emotions and the trust model influence both of these aims.
Specifically, the agents in our contribution are implemented with a humanized internal architecture that includes emotions and privacy reasoning, i.e., an internal cognitive/symbolic representation of human reasoning. According to this architecture, agents interact among themselves based on the implicit observation of the gaits of the other agents they meet and on the explicit exchange of human-like messages asking for cooperation.
Section 2 illustrates the state of the art corresponding to the computational representation of emotions, trust and privacy. Section 3 describes how we address the definition and computation of emotions and personality in our approach. Section 4 shows the applied trust model and its relationship with emotions and personality. Section 5 describes the FIPA protocols used in this study and the BDI reasoning followed by agents. Section 6 shows the cooperative logistic problem defined to test the execution of our model using the GAMA agent platform, version 1.8.2. Section 7 includes the experimental results. Finally, Section 8 concludes this work, discussing the possible uses and benefits of our model.

2. State of the Art

2.1. Computational Representation of Emotions

There is no consensus in the academic literature as to how emotions can be categorized or how emotions arise and are applied [2]. The simplest classifications distinguish either six (anger, disgust, fear, joy, sadness and surprise [3]) or eight (trust, fear, surprise, sadness, disgust, anger, anticipation and joy [4]) basic/primary/innate emotions. However, when emotions are limited to facial expressions, the authors of [5] reduced the set to anger, sadness, fear, joy and surprise. A more fine-grained representation [6] includes secondary emotions that result from the evaluation of expectations, such as relief or hope. The adoption of these emotions in an affective state (called mood) involves at least one dimension: the pleasure it generates (called valence), which can be either positive (good/pleasant) or negative (bad/unpleasant). The most complete categorization is the so-called OCC cognitive theory [7] (named after the initials of its authors), which distinguishes emotions according to their source: events, agents or objects. A human/agent is in a particular mood when they feel an emotion with sufficient intensity, which decreases over time. Therefore, many formalizations [8,9] include at least a second dimension, denoted arousal, which represents the (positive) level of intensity/excitement regarding the given emotion. However, the most widely used model, known as the PAD (pleasure, arousal and dominance) model [10], includes a third dimension: dominance. Dominance represents the level of control felt by the human/agent in facing the current situation: a positive value indicates that the human/agent is dominant (in control of the emotion), and a negative value indicates that the human/agent is submissive to it. In the PAD model, an emotion is represented by a point in a three-dimensional space. A mood is adopted when a given emotion exceeds a given threshold, and it decreases over time (a forgetting curve) depending on the personality of the human/agent. Unlike mood, personality is not temporal and does not depend on the occurrence of a particular event/context. The most widespread theory of personality [11] considers five factors: openness, conscientiousness, extroversion, neuroticism and agreeableness, whereas another popular personality theory [12] considers just three factors: extroversion, neuroticism and psychoticism. Extroversion is linked to an increased display, sensitivity and duration of positive emotions, whereas neuroticism has a similar effect with respect to negative emotions [13]. Additionally, neuroticism is associated with a reduced ability to focus attention to complete tasks [14] jointly with an aversion to novelty and uncertainty [15]. Psychoticism traits are associated with rejection of cultural norms and non-compliance with social expectations [16].
Emotions arise in an uncontrolled way (particularly submissive emotions), producing sudden embodied effects that may be observed physically [6,17]. Several previously described anthropomorphic systems employ fully embodied agents to show emotions, for instance, in physical interactions [18,19] and in conversations [20,21]. Overall, the most frequently used method of showing emotions is through facial expressions [22,23]. Although less used as a way to show and perceive emotions, the gait of humans and embodied agents has also been studied [24,25]. Additionally, automated detection of human emotions has raised considerable privacy and legal concerns [26,27] that are not addressed in this contribution.

2.2. Computational Representation of Trust

Agents are intended, by their own nature, to be self-interested, i.e., their behavior cannot be assumed to be altruistic and cooperative [28,29]. Because they are also dynamic, the past behavior of an agent can only serve as a rough estimation of its future behavior. Agent systems are also, by definition, open, so agents come and go; therefore, different levels of knowledge about the behavior of other agents coexist. Finally, as such systems can be highly populated, the occurrence of interactions with a majority of the other agents in the system is not likely. The joint existence of these features produces a significant difficulty in estimating the expected reliability of potential partners in interactions. In order to overcome such a difficulty, trust models based on reputation computations have been proposed in recent decades [30]. Whereas reputation represents a quantitative evaluation of the expected behavior of another agent, which has to be numerically aggregated and updated from indirect sources and direct experiences (each model suggests a different computation), trust represents the cognitive decision of an agent to participate in a potentially risky or uncertain interaction [31].
Cognitive decisions of agents are often implemented using the beliefs, desires and intentions (BDI) paradigm, which produces intelligent agent behavior through cyclic deliberation about the explicit beliefs, desires and intentions of the agent [32]. Such cognitive agents are expected to make use of interaction protocols to communicate meaningful messages with each other according to the IEEE Foundation for Intelligent Physical Agents (FIPA) standard [33]. Such a cognitive trust decision in BDI agents considers factors such as beliefs about the general situation in which the interaction decision takes place and beliefs about the other agents [34]. Whereas in real life, emotions play a key role in trusting decisions, the beliefs agents use do not represent emotions; agents make decisions unemotionally according to their beliefs about the situation and about the other agents. Even the only properly emotional trust model found in the literature [35] makes decisions with a trust model and an emotional model implemented as independent reasoning blocks. However, the need for an emotional trust model for agents has recently been recognized in the literature [36]. Furthermore, the authors of [37] observed a close relationship between the beliefs of agents and the emotions described by the OCC model of emotions [7]. Finally, underlying some of the purely trust-based models proposed in the literature, such as that proposed in [38], where failures in the trusting decision have a much greater mathematical impact on reputation than successes, a hidden, implicit emotional process occurs.

2.3. Computational Representation of Privacy Issues in Social Interactions

Privacy protection has been largely bounded by legal systems, especially in the European Union (EU), as demonstrated by Regulation 2016/679 of the European Parliament [39]. While privacy can be roughly defined as the right to control one's own personal information [40], it is a very complex and broad issue that exceeds the legal perspective. Despite being legal (as they may take place in public spaces), some interactions cause the perception of potentially shameful, intimate or sensitive tasks by the interacting partners. The nature, scope, context and purpose of any social interaction define the level of privacy harm produced [41,42]. The acknowledgment of this level of potential privacy harm to one's own social image and that of others in interactions causes an emotional response in both interacting parties [43]. Therefore, as in real life, privacy plays not only a legal role but also an emotional role in interactions (inhibiting or promoting behavior). In the same way, when agents act autonomously on behalf of humans, the emotional impact of the potential privacy-related interactions performed by the agent has to be represented, weighted and considered in the decision making of such agents. The particular causes of the privacy issues involved in social interactions may vary and may be related to ideological, gender, work and health issues. Privacy issues can take different forms depending on the personal circumstances associated with social interactions. The present work is not focused on any one such circumstance, nor do we specifically represent a model of all such causes and their given circumstances leading to privacy concerns. Instead, we assume that social meetings sometimes have associated privacy issues that can be avoided through cooperation between humans and autonomous agents. Our use case of a logistic problem in which a human delegates the task of moving a box to an unmanned vehicle avoids the privacy issues associated with the box itself and the places to which it is moved, independent of the specific privacy issues and their causes.

3. Proposed Emotional Model

3.1. Sources of Emotions

Among all the possible representations of emotions described in Section 2, we decided to represent five of the six basic emotions [3]: joy, sadness, anger, fear and surprise. We made this decision based on the limitation of how emotions are assumed to be perceived in our contribution: through the gait of humans and embodied agents. For example, the gait of an encountered agent may be perceived as follows (a simple mapping is sketched after the list):
  • Happy: for instance, when the encountered agent walks straight and looks toward its front side;
  • Sad: for instance, when the encountered agent has a curved back and looks down at the floor;
  • Anger: for instance, when the encountered agent raises its shoulders, its hands form fists and its movements are fast and rigid;
  • Fear: for instance, when the encountered agent stands still and its hands and legs shake;
  • Surprise: for instance, when the encountered agent stands still and its head leans back as it raises its hands.
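To make the perception step concrete, the following minimal Python sketch (the paper's implementation is written in GAML, so this is purely illustrative) encodes the gait cues listed above as a lookup table; the cue names and the best-overlap matching rule are our own assumptions.

```python
# Illustrative sketch (not the authors' GAML code): mapping observed gait cues
# to one of the five perceived basic emotions. Cue names are hypothetical.
GAIT_TO_EMOTION = {
    ("straight_back", "looking_forward"): "joy",
    ("curved_back", "looking_down"): "sadness",
    ("raised_shoulders", "fists", "fast_rigid_moves"): "anger",
    ("standing_still", "shaking_limbs"): "fear",
    ("standing_still", "head_back", "raised_hands"): "surprise",
}

def perceive_emotion(observed_cues):
    """Return the emotion whose cue set best overlaps the observed cues."""
    observed = set(observed_cues)
    best, best_overlap = None, 0
    for cues, emotion in GAIT_TO_EMOTION.items():
        overlap = len(observed & set(cues))
        if overlap > best_overlap:
            best, best_overlap = emotion, overlap
    return best  # None if no cue matches

print(perceive_emotion(["curved_back", "looking_down"]))  # -> sadness
```

In the simulation itself the gait is not actually rendered; the perceived emotion simply corresponds to the mood currently shown by the encountered agent when both agents occupy the same cell.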
In our model, in order to represent the emotive reactions that take place in real life, agents feel emotions according to the combination of five dynamic perceptions (the sources of emotion):
1. Others: when the emotion perceived in the encountered agent is anger, a fear emotion is produced;
2. Alien privacy: when a privacy issue is associated with the other agent involved in the meeting (which may be due to the object carried by the other agent), a surprise emotion is produced;
3. Own privacy: when a privacy issue is associated with the subject agent involved in the meeting (which may be due to the place of the meeting or the object carried by the subject agent), an anger emotion is produced;
4. Positive rewards of performance: when the subject agent successfully accomplishes its task, a joy emotion is produced;
5. Negative rewards of performance: when the subject agent poorly accomplishes its task, a sadness emotion is produced.
Additionally, we suggest that agents be assigned one of the three personalities proposed in [12]: extroverted, neurotic or psychotic. The personality of an agent acts as an emotion enhancer: whereas an extroverted personality tends to enhance positive emotions (joy and surprise), neuroticism enhances negative emotions (sadness and fear), and psychoticism enhances antisocial emotions (anger). Whereas emotions are dynamic, depend on the sequence of situations addressed by the agent and may coexist (although just one is dominant and becomes the mood of the agent), personalities are static, predefined and mutually exclusive.
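The mapping from sources of emotion to emotional reactions, and the set of emotions enhanced by each personality, can be summarized in a small data structure. The sketch below is illustrative Python (the actual agents are written in GAML); the dictionary keys are hypothetical names.

```python
# Illustrative Python sketch: each source of emotion and the reaction it
# produces, plus the emotions enhanced by each personality, as described above.
SOURCE_TO_EMOTION = {
    "perceived_anger_in_other": "fear",
    "alien_privacy_issue": "surprise",
    "own_privacy_issue": "anger",
    "positive_reward": "joy",
    "negative_reward": "sadness",
}

PERSONALITY_ENHANCES = {
    "extroverted": {"joy", "surprise"},   # positive emotions
    "neurotic": {"sadness", "fear"},      # negative emotions
    "psychotic": {"anger"},               # antisocial emotions
}

def triggered_emotions(active_sources):
    """Emotions triggered by the currently active sources of emotion."""
    return [SOURCE_TO_EMOTION[s] for s in active_sources]

print(triggered_emotions(["own_privacy_issue", "positive_reward"]))  # ['anger', 'joy']
```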

3.2. The PAD Levels of Emotions

Based on the combination of these sources, in each iteration cycle, a resulting value represents the possibility of an agent feeling each emotion. Because several emotions may be felt simultaneously, the emotion with the greatest value is the one shown by the agent when two agents meet in the same place. As the literature (Section 2) states, emotions are often defined by three dimensions: pleasure, arousal and dominance (PAD). The five basic emotions related to gait perception are associated with the specific values shown in Table 1 [44]. Due to the intrinsic meaning of these concepts, whereas pleasure and dominance take positive and negative values (between −1 and 1), arousal always takes a positive value (between 0 and 1). The feeling of an emotion ($w_e$) is computed using Equation (1), where $d_e$ stands for the distance between the current 3D PAD position of the agent and that of each basic emotion, $\Delta_e$ is the minimum threshold required to activate an emotion and $\phi_e$ establishes the point of saturation of each emotion, as proposed in [45].
$$ w_e = 1 - \frac{d_e - \Delta_e}{\phi_e - \Delta_e} \tag{1} $$
Because emotions tend to be balanced over time, we progressively reduce the pleasure level of an emotion ($P_i$) with a decreasing function that depends on the excitement of the agent (arousal level), as shown by Equation (2), where $V_p$ is a constant that softens the level of decrease and is fixed at 0.1.
$$ P_i = P_{i-1} - V_p \times A_{i-1} \tag{2} $$
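A minimal Python sketch of Equations (1) and (2) is given below, using the PAD coordinates of Table 1. Since the paper does not list the activation threshold $\Delta_e$ or the saturation point $\phi_e$, the values used here are arbitrary placeholders, and the actual computation is implemented in GAML.

```python
import math

# Illustrative sketch of Equations (1) and (2); the real model is in GAML.
# PAD coordinates of the five basic emotions, taken from Table 1 [44].
EMOTION_PAD = {
    "joy":      (0.75, 0.48,  0.35),
    "sadness":  (-0.63, 0.27, -0.33),
    "surprise": (0.40, 0.67, -0.13),
    "fear":     (-0.64, 0.60, -0.43),
    "anger":    (-0.51, 0.59,  0.25),
}

def emotion_weight(agent_pad, emotion, activation=0.2, saturation=1.5):
    """Equation (1): w_e = 1 - (d_e - delta_e) / (phi_e - delta_e).
    activation (delta_e) and saturation (phi_e) are assumed placeholder values."""
    d_e = math.dist(agent_pad, EMOTION_PAD[emotion])
    return 1.0 - (d_e - activation) / (saturation - activation)

def decay_pleasure(pleasure, arousal, v_p=0.1):
    """Equation (2): P_i = P_{i-1} - V_p * A_{i-1}."""
    return pleasure - v_p * arousal

current_pad = (0.5, 0.4, 0.2)
weights = {e: emotion_weight(current_pad, e) for e in EMOTION_PAD}
mood = max(weights, key=weights.get)   # the strongest emotion becomes the mood
print(mood, round(weights[mood], 3))
```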

3.3. PAD Changes Due to the Sources of Emotion

Each of the five sources of emotion listed in Section 3.1 (others, own privacy, alien privacy, and positive and negative rewards) is intended to cause an emotional reaction explained below (fear, anger, surprise, joy or sadness) through changes in the PAD values.
The pleasure level is positively affected (with a value of 0.1) by the instant satisfaction that an agent feels when it receives a reward for achieving a current goal (in our logistic problem, when the box reaches the destination in time). On the other hand, when the agent fails to achieve a current goal (in our logistic problem, this occurs in two situations: social punishment due to the agent's own privacy disclosure and failure in box delivery), its pleasure level is negatively affected (with a value of −0.1).
We propose that the arousal level of the agent increases in direct proportion to the sum of the variations of the following potential sources of emotion: the anger perceived in the currently encountered agent and the privacy disclosures of both agents (own and alien) induced by the boxes and the place of the meeting. Equation (3) shows this computation, where $V_a$ is a constant that softens the level of increase/decrease and is fixed at 0.1, and $S_{i,j}$ is the contribution to the arousal of each source of emotion $j$.
$$ A_i = A_{i-1} + \sum_j \left( S_{i,j} - S_{i-1,j} \right) \times V_a \tag{3} $$
Given the emotions that we intend to produce from the sources of emotion, dominance is decreased (by 0.1) when anger is perceived in the gait of the currently encountered agent. However, because dominance represents a dichotomy between being controlled and being in control, two other conditions cause a sense of control/lack of control in the agent:
  • The number of tasks for which the agent is responsible at a given moment (in our logistic problem, its own boxes to be moved and the delegated boxes), as a high number of tasks causes a sense of a lack of control of the situation. Each task beyond the first causes a decrease of 0.05 in dominance; on the other hand, having no current task causes an increase of 0.1 in dominance, and having just one task results in a 0.05 increase;
  • The overall achievement/performance of the agent across all executions (not just the instant reward for the current goal), as poor performance causes a sense of a lack of control (forcing the agent to accept offers of help from untrusted agents): an accumulated reward per task below the average reward causes a decrease of 0.1 in dominance, while a greater-than-average reward causes an increase of 0.1.
As for pleasure, a decreasing function is applied to the dominance component. The level of decrease also depends on the excitement of the agent (arousal level), as shown in Equation (4), where $V_d$ is a constant that softens the level of decrease and is fixed at 0.1.
$$ D_i = D_{i-1} - V_d \times A_{i-1} \tag{4} $$
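The per-cycle PAD update described in this subsection can be summarized as follows. This is an illustrative Python sketch under our own structural assumptions (function names and argument conventions are not taken from the paper), with the numeric increments following the text; the actual update is implemented in GAML.

```python
# Illustrative sketch of the per-cycle PAD update of Section 3.3.
V_A = 0.1  # softening constant for arousal changes
V_D = 0.1  # softening constant for dominance decay

def update_pleasure(p, goal_achieved=None):
    """+0.1 on success, -0.1 on failure or privacy disclosure, unchanged otherwise."""
    if goal_achieved is True:
        return p + 0.1
    if goal_achieved is False:
        return p - 0.1
    return p

def update_arousal(a_prev, sources_now, sources_prev, v_a=V_A):
    """Equation (3): A_i = A_{i-1} + sum_j (S_{i,j} - S_{i-1,j}) * V_a.
    Assumes both dictionaries share the same source keys."""
    delta = sum(sources_now[j] - sources_prev[j] for j in sources_now)
    return a_prev + delta * v_a

def update_dominance(d, perceived_anger, n_tasks, above_average_reward):
    """Dominance changes from perceived anger, task load and overall performance."""
    if perceived_anger:
        d -= 0.1
    if n_tasks == 0:
        d += 0.1
    elif n_tasks == 1:
        d += 0.05
    else:
        d -= 0.05 * (n_tasks - 1)          # each task beyond the first
    d += 0.1 if above_average_reward else -0.1
    return d

def decay_dominance(d, arousal, v_d=V_D):
    """Equation (4): D_i = D_{i-1} - V_d * A_{i-1}."""
    return d - v_d * arousal
```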

3.4. Influence of Personality on Emotions and Performance Ability

The literature states that personality influences how emotions are felt in different ways: extroversion enhances joy and surprise, neuroticism enhances sadness and fear and psychoticism enhances anger. We intend to represent such influence through the ways in which changes in pleasure, arousal and dominance are computed. Specifically, we suggest the use of constants ($V_p$, $V_a$ and $V_d$) that soften the level of decrease over time for pleasure, arousal and dominance, respectively. The resulting modifications are described as follows:
  • Because joy and surprise have the highest positive values of pleasure in Table 1, an extroverted personality promotes joy and surprise (as suggested in [13]) by causing smaller changes in pleasure through a small softening constant ($V_p = 0.05$ instead of $V_p = 0.1$) when the pleasure value is positive;
  • Because sadness and fear are associated with the most negative values of pleasure in Table 1, a neurotic personality promotes sadness and fear (as suggested in [13]) by causing smaller changes in pleasure through a small softening constant ($V_p = 0.05$ instead of $V_p = 0.1$) when the pleasure value is negative;
  • Because anger is associated with a high positive value of dominance in Table 1, a psychotic personality promotes anger by causing smaller changes in dominance through a small softening constant ($V_d = 0.05$ instead of $V_d = 0.1$) when the dominance value is positive.
The accomplishment of the moving tasks is strongly associated with the personality of the agents. Because neuroticism is associated with a reduced ability to focus attention to complete tasks according to [14], all (own and delegated) moving tasks are delayed (by a cycle) when performed by neurotic agents. Because psychoticism leads to rejection of cultural norms and non-compliance with social expectations according to [16], psychotic agents perform the delegated moving tasks with a delay of one cycle.
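The influence of personality can thus be reduced to the choice of the softening constants and the task delays. The sketch below is again illustrative Python rather than the authors' GAML code, with the values taken from the text.

```python
# Illustrative sketch of how personality modulates the softening constants
# and task delays (Section 3.4).
def pleasure_softening(personality, pleasure):
    if personality == "extroverted" and pleasure > 0:
        return 0.05   # positive pleasure decays more slowly: promotes joy/surprise
    if personality == "neurotic" and pleasure < 0:
        return 0.05   # negative pleasure decays more slowly: promotes sadness/fear
    return 0.1

def dominance_softening(personality, dominance):
    if personality == "psychotic" and dominance > 0:
        return 0.05   # positive dominance decays more slowly: promotes anger
    return 0.1

def task_delay_cycles(personality, delegated):
    """Extra delay, in cycles, when performing a moving task."""
    if personality == "neurotic":
        return 1                      # all tasks delayed by one cycle
    if personality == "psychotic" and delegated:
        return 1                      # only delegated tasks are delayed
    return 0
```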

4. Proposed Trust Model

Interactions between agents cause both rewards and punishments that strongly influence the ways in which agents trust each other. On one hand, the privacy concern caused by other agents (in the logistic problem, whenever our agent meets another agent in a sensitive cell or while carrying a sensitive box) produces negative feedback (social punishment) in our agent. On the other hand, our agent receives positive feedback (social reward) when the task is successfully accomplished (in the logistic problem, when the box reaches the destination). If the moving of a box is partly delegated to other (trusted) agents, our agent receives a reward corresponding to the level of joint success of the trusted agents, proportional to the incurred delays in delivery. Therefore, we can now distinguish between two different rewards that agents receive: privacy-related rewards and delay-related rewards, both of which can be further classified into two subtypes: those related to the box and to the place (privacy-related rewards) and those related to the subject agent's performance and to that of alien agents (delay-related rewards).
We can distinguish between two different trusting decisions:
1. The decision to request cooperation, where our agent becomes the trusting agent, as it delegates a task to another (trusted) agent, taking some risks (in our logistic problem, trusting another agent to move a box towards its destination), with no associated certainty or guarantee of the future behavior of the other agent, although afterward, the trusting agent receives delayed feedback about the behavior of the trusted agent (in our logistic problem, the trusting agent learns whether the box reached the destination in a given time or not).
2. The decision to answer a cooperation request from another agent, where our agent becomes the trusting agent, as it carries out a delegated task for another (trusted) agent, taking some risks (in our case, trusting the other agent to reach the destination in a given time). Again, there is no associated certainty or guarantee of the future behavior of the other agent, although afterward, the trusting agent receives delayed feedback about the behavior of the trusted agent (in our logistic problem, the trusting agent will learn whether the moving task was performed in time or not).
The decision to request cooperation from another agent depends upon the following criteria:
  • The mood (current feeling) of the trusting agent: positive emotions (joy and surprise) encourage trusting decisions, whereas negative (sadness and fear) and antisocial (anger) emotions discourage them, with a bonus/malus of 0.1 on the trust required;
  • The privacy issues involved in the trusting decision for the agent (in our logistic problem, the level of privacy associated with the box to be delegated): if privacy issues are involved, then 0.1 less trust is assigned to the other agent;
  • How much the other agent is trusted: trust is computed based on the previous performance of the other agent in interactions with the subject agent (in our logistic problem, the level of previous success of the other agent in performing moving tasks of the subject agent). Success (delivery of the box without delay) is associated with an increase of 0.1 in trust, whereas each cycle of delay translates to a decrease of 0.05 in trust;
  • How much the agent needs help (in our logistic problem, the number of boxes already carried): for each box already being carried, 0.05 less trust is assigned to the other agent.
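A minimal sketch of how these criteria could be combined into a single threshold test is shown below. The additive aggregation and the function names are our own assumptions; the numeric modifiers, the basic trust threshold (0.5) and the initial trust in an unknown agent (0.5) come from the text and from the parameters listed in Section 6, and the authors' GAML implementation may combine them differently.

```python
# Sketch of the "request cooperation" trusting decision (Section 4).
BASIC_TRUST_THRESHOLD = 0.5   # basic trust threshold to cooperate (Section 6)
INITIAL_TRUST = 0.5           # initial trust in an unknown agent (Section 6)

def trust_from_history(successes, delay_cycles):
    """+0.1 per box delivered without delay, -0.05 per cycle of delay."""
    return INITIAL_TRUST + 0.1 * successes - 0.05 * delay_cycles

def decide_to_request(mood, box_is_private, boxes_carried, trust_in_other):
    threshold = BASIC_TRUST_THRESHOLD
    if mood in ("joy", "surprise"):
        threshold -= 0.1                 # positive mood encourages trusting
    elif mood in ("sadness", "fear", "anger"):
        threshold += 0.1                 # negative/antisocial mood discourages it
    if box_is_private:
        trust_in_other -= 0.1            # privacy issue: 0.1 less trust assigned
    trust_in_other -= 0.05 * boxes_carried   # per box already carried
    return trust_in_other >= threshold

print(decide_to_request("joy", False, 1, trust_from_history(2, 1)))  # True
```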
The decision to answer a cooperation request from another agent depends upon the following criteria:
  • The mood (current feeling) of the trusting agent: positive emotions (joy and surprise) encourage trusting decisions, whereas negative (sadness and fear) and antisocial (anger) emotions discourage them, with a bonus/malus of 0.1 on the trust required;
  • The privacy issues involved in the trusting decision for the agent (in our logistic problem, the level of privacy associated with the box to be moved): if privacy issues are involved for the subject agent, then 0.1 less trust is assigned to the other agent;
  • How much the other agent is trusted, where trust is computed based on the previous performance of the other agent in interactions with the subject agent (in our logistic problem, the level of success of the other agent in previously moving boxes of the subject agent): success (delivery of the box without delay) is associated with an increase of 0.1 in trust, whereas each cycle of delay translates to a decrease of 0.05 in trust;
  • How much the other agent may help (in our logistic problem, the number of boxes already carried and assigned boxes with their time requests and relative paths to their destinations): for each box already being carried, 0.05 more trust is required to accept the cooperation offer.
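Under the same assumptions as the previous sketch, the decision to accept a cooperation request can be sketched as follows (again, the exact aggregation in the GAML implementation may differ).

```python
# Sketch of the decision to accept a cooperation request (Section 4).
BASIC_TRUST_THRESHOLD = 0.5   # basic trust threshold to cooperate (Section 6)

def decide_to_accept(mood, own_privacy_issue, boxes_already_carried, trust_in_requester):
    threshold = BASIC_TRUST_THRESHOLD
    if mood in ("joy", "surprise"):
        threshold -= 0.1              # positive mood lowers the required trust
    elif mood in ("sadness", "fear", "anger"):
        threshold += 0.1              # negative/antisocial mood raises it
    if own_privacy_issue:
        trust_in_requester -= 0.1     # the box to be moved is privacy-sensitive for us
    threshold += 0.05 * boxes_already_carried   # each carried box raises the bar
    return trust_in_requester >= threshold
```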
The perceived emotion of the other agent does not directly influence either trusting decision; its influence is indirect: the perceived alien emotion influences the emotion of the subject agent, and that own emotion influences the corresponding trusting decision. In the same way, privacy issues associated with the to-be-trusted agent do not directly influence either trusting decision; they influence the alien emotion, which influences the emotion of the subject agent, and these emotions influence the corresponding trusting decision.

5. FIPA Protocols and BDI Reasoning

According to the BDI paradigm [46], the behavior of agents is determined by the achievement of desires through the execution of plans corresponding to the intended desires caused by the perception of certain conditions (beliefs expressed as predicates). In this section, we explore the adopted desires, the firing beliefs and the corresponding plans used by agents in our system.
The agents of our system perform their moving tasks following a repeated iteration cycle. Before such a cycle takes place, agents are initialized with:
  • A random fixed personality chosen among three possible personalities: neurotic, psychotic or extroverted;
  • An initial mood (current feeling) derived from the neutral (0) PAD values of pleasure, arousal and dominance;
  • An initial location (randomly chosen anywhere in the existing grid);
  • An initial desire to be idle.
Agents continuously perceive the existence of the boxes assigned to them, jointly with the destination that they have to reach and whether they have privacy issues or not. In the same way, agents continuously perceive other agents that are busy, jointly with their location. These perceptions cause the adoption of pending package and busy carrier beliefs, respectively, and they occur in parallel with any desires of the agent.
Once initialized, the repeating iteration cycle starts with the execution of the desire to be idle. The corresponding plan is accomplished according to the following steps if any pending package is perceived:
  • Transformation of the closest package among all pending packages into a current package belief;
  • Transformation of all other pending packages into moving package beliefs;
  • Dropping of the idle desire and adoption of a moving desire.
In cases in which no pending packages are perceived, the agent moves towards the location of the closest busy carrier (where busy means currently moving at least one package).
The plan corresponding to the desire to move a package also transforms all pending packages into moving package beliefs. Then, in cases in which another agent and any other package, in addition to the current package, are present in the same cell, the following steps are carried out:
  • The farthest package is chosen as a candidate to be delegated;
  • The level of trust in the agent encountered in the same cell is determined;
  • Trust modifiers are computed based on emotions and personality;
  • A decision is made to delegate the candidate package in cases of sufficient trust in the encountered agent.
The resulting delegation process is implemented as a FIPA-compliant call for proposal (CFP) interaction protocol, as graphically outlined in Figure 1.
Delegation can take place several times for the same package, forming sequential instances of this CFP protocol. Figure 2 graphically represents such a linked sequence of CFP protocol instances.
Once a package has been moved to its destination, it disappears from the simulation.
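For illustration only, the following Python sketch mimics one round of the CFP protocol of Figure 1; the message structure and the rule of accepting the first received proposal are our own assumptions, since the real agents rely on GAMA's FIPA support.

```python
from dataclasses import dataclass

# Illustrative sketch of one round of the FIPA CFP protocol (Figure 1).
@dataclass
class Message:
    performative: str   # "cfp", "propose", "refuse" or "accept-proposal"
    sender: str
    receiver: str
    content: dict

def cfp_round(initiator, participants, task, will_accept):
    """Send a cfp for the task, collect propose/refuse answers and accept the
    first proposal received (selection rule assumed)."""
    answers = [Message("propose" if will_accept(p, task) else "refuse",
                       p, initiator, {"task": task})
               for p in participants]
    winner = next((m for m in answers if m.performative == "propose"), None)
    if winner is None:
        return None   # nobody proposed: the initiator keeps moving the box itself
    return Message("accept-proposal", initiator, winner.sender, {"task": task})

# Example: agent2 refuses, agent3 proposes and is accepted.
print(cfp_round("agent1", ["agent2", "agent3"], "box7",
                lambda p, t: p == "agent3"))
```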

6. Problem Definition in the GAMA Platform

In the simulations, carrier agents (with different initial random locations, personalities and assigned packages) move and interact in an abstract environment represented by a grid. At the beginning of the simulation, several boxes appear with random destinations and assignments to carriers. Initially and at any later time, an agent may concurrently have several boxes to move (requiring the cooperation of other agents to satisfy all moving goals in time). The simulation ends when all packages have reached their destinations.
According to this problem definition, agents feel different emotions when they meet each other and when they succeed or fail in moving a box. We assume that agents perceive the gaits of the other agents when they meet in the same location. Additionally, we assume the ability of agents to identify which boxes and cells cause personal privacy concerns.
In order to implement our trust model, we used the GAMA agent platform [47] version 1.8.2, which is open-source, FIPA-compliant software that includes the possibility of using the BDI paradigm for reasoning [48]. GAMA allows for large-scale simulations and integrates geographic information system (GIS) data [49]. The GAMA platform also includes an extension that links norms with emotions [50].
The parameters of the simulation define the setup of the problem, as shown in Figure 3 and listed below jointly with the values used in the experiment:
  • Whether to include trust in the simulation or not (variable: true or false);
  • Whether to include emotions in the simulation or not (variable: true or false);
  • Number of packages (fixed at 15);
  • Number of carrier agents (fixed at 15);
  • Percentage of initially idle carrier agents (variable: 0%, 20%, 40%, 60% or 80%);
  • Size of the square grid, in number of cells (fixed at 30);
  • Probability of a cell/box being private (variable: 0.0, 0.2, 0.4, 0.6 or 0.8);
  • Probability of being neurotic (fixed at 33%);
  • Probability of being psychotic (fixed at 33%);
  • Penalty associated with privacy disclosure (fixed at 2.0);
  • Reward for reaching the target in time (fixed at 1.0);
  • Penalty for a delay in reaching the target (fixed at 2.0);
  • Basic trust threshold to cooperate (fixed at 0.5);
  • Initial trust in an unknown agent (fixed at 0.5).
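For reference, the setup above can be captured as a plain configuration object. The parameter names below are our own (the actual parameters are declared in the GAMA experiment file), while the values reproduce the list above.

```python
# Illustrative configuration object mirroring the simulation parameters above.
SIMULATION_PARAMS = {
    "use_trust": True,                 # variable: True or False
    "use_emotions": True,              # variable: True or False
    "n_packages": 15,
    "n_carriers": 15,
    "idle_carrier_pct": 0.4,           # variable: 0.0, 0.2, 0.4, 0.6 or 0.8
    "grid_size": 30,                   # square grid of 30 cells per side
    "privacy_probability": 0.2,        # variable: 0.0, 0.2, 0.4, 0.6 or 0.8
    "p_neurotic": 0.33,
    "p_psychotic": 0.33,
    "privacy_disclosure_penalty": 2.0,
    "on_time_reward": 1.0,
    "delay_penalty": 2.0,
    "basic_trust_threshold": 0.5,
    "initial_trust": 0.5,
}
```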
Agents are represented as rectangles containing their corresponding IDs in a grid of cells and are colored blue when they are idle, whereas the destinations of the boxes to be moved according to the original assignment are represented by green-colored circles containing the ID of the assigned carrier, as shown in Figure 4. Once the agents perceive the packages and start moving, the destinations of boxes to be moved by agents that differ from the original assignment are represented by yellow-colored circles containing the ID of the carrier currently moving them, the carriers moving a package are represented by black-colored squares containing their ID, and cells in which two or more agents meet are represented as red-colored squares, as shown in Figure 5.

7. Experimental Results

As all experimentation is strongly dependent on the particular values of the many model variables, providing any definitive validation is beyond the scope of the present work. Our goal is to show a possible mechanism by which privacy-induced emotions may be integrated into a trust model, enabling a futuristic interaction between unmanned automated elements and humans, both transporting objects. The role of emotions is not specifically designed to improve the rewards obtained by the agents. Although emotions affect these rewards, they are only intended to mimic human behavior, causing reactions that are as realistic as possible according to psychological theories and previously reported practical approaches to these theories, as explained in Section 2.
Using the problem definition presented in Section 6, we compare two alternatives with our emotional trust model (denoted as emotional trust):
  • A trust model without emotions (denoted as no emotions): the current feeling does not modify how much trust is required to propose or to accept a proposal for delegation of a task (in the emotional model, joyful and surprised moods decrease the trust requirement, whereas sad, angry and fearful moods increase it);
  • No trust model at all (denoted as no trust): no agent trusts any other agent, and no cooperation takes place (no objects are delegated to other agents to be carried towards their destinations). All agents carry the initially assigned objects to their destinations by themselves, without any way to decrease the corresponding delays. This is the worst case, serving as the benchmark to show the improvement achieved by the other alternatives.
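In terms of the configuration sketch of Section 6, the three alternatives differ only in two flags; the representation below is an illustrative convention, not the authors' code.

```python
# The three compared alternatives expressed as overrides of the simulation
# parameters (illustrative convention, not the authors' code).
ALTERNATIVES = {
    "emotional_trust": {"use_trust": True,  "use_emotions": True},
    "no_emotions":     {"use_trust": True,  "use_emotions": False},
    "no_trust":        {"use_trust": False, "use_emotions": False},
}

def config_for(alternative, base_params):
    """Return a full parameter set for one alternative."""
    cfg = dict(base_params)
    cfg.update(ALTERNATIVES[alternative])
    return cfg
```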
All comparisons are repeated 100 times to decrease the variability caused by the initial random locations of objects and agents. The comparison is measured in two ways. First, we determine how these three models obtain rewards when the number of idle agents changes. When the number of initially idle agents increases, the possibility of cooperation increases (fewer delays and the avoidance of more privacy issues are possible). Figure 6 shows the results obtained in this first comparison, according to which we can conclude that:
  • The rewards for any alternative can be expected to increase as the percentage of idle agents increases; however, rewards appear to reach a saturation point between 20% and 40%, beyond which no significant increase is perceived;
  • Independent of the percentage of idle agents, the no-trust alternative obtains considerably fewer rewards than the other two alternatives (no emotions and emotional trust);
  • When the percentage of idle agents is very low (20%), only a few boxes can be delegated, causing results that significantly differ from those obtained with greater percentages. This is especially true for the no-trust alternative;
  • Except when the percentage of idle agents is very low (20%), the use of emotions slightly increases the rewards obtained by agents (the emotional trust alternative slightly outperforms the no-emotions alternative). Therefore, in these cases, using emotions in the trust model causes some improvement; however, the difference is minimal;
  • When the percentage of idle agents is very low (20%), the use of emotions clearly leads to fewer rewards, with an apparent decrease in the quality of the decisions made by the trust model (the no-emotions alternative clearly outperforms the emotional trust alternative).
In the second comparison, we determine how these three models obtain rewards when the privacy probability changes. The privacy probability represents the probability of a box or a cell being a privacy issue for a particular agent. The privacy of a box is constant and fixed from the moment the assignment takes place (both initially and upon delegation), whereas the privacy of a cell is computed whenever a meeting takes place. The same probability value is used for both parameters. At one extreme, a zero privacy probability indicates that no cell or box is private at any time, so privacy plays no role in the simulation. Given our intention to study the effect of a progressive increase in this variable, we summarize here the role of this variable in the model:
  • Currently carrying a private box increases the chances of proposing a delegation to another agent whenever a meeting takes place, as the box may not be private for the other agent (corresponding to a smaller burden for the other agent to carry it to its destination);
  • Both private boxes and cells cause a punishment (negative reward) whenever a meeting takes place;
  • Both private boxes and cells cause a decrease in the pleasure level of the agent (PAD variable) whenever a meeting takes place;
  • Both private boxes and cells cause an increase in the arousal level of the agent (PAD variable) whenever a meeting takes place.
Figure 7 shows the results obtained in this second comparison, according to which we can conclude that:
  • As the privacy probability increases, the rewards associated with any alternative decrease;
  • Independent of the value of privacy probability, the no-trust alternative results in considerably fewer rewards than the other two alternatives (no emotions and emotional trust);
  • Except when the privacy probability is very high (0.8), the use of emotions slightly decreases the rewards obtained by agents (the emotional trust line is slightly below the no-emotions line). It therefore appears that in these cases, using emotions in the trust model causes some decline, but the difference is very small;
  • When the privacy probability is very high (0.8), the use of emotions appears to obtain worse rewards, and it seems to decrease the quality of the decisions made by the trust model (the no-emotions alternative slightly outperforms the emotional trust alternative).
Taking the results of both comparisons jointly, we observe that the inclusion of emotions in a trust model according to psychological theories and based on previous practical approaches to these theories (as explained in Section 2), and with the particular values of all the variables of our model, does not universally improve the rewards obtained by the trust model without emotions. In some circumstances, the proposed model slightly improves the quality of the implemented decisions, whereas in other circumstances, it clearly reduces the quality of such decisions. In most circumstances, the differences are minimal, and compared to the model with no trust, emotions do not negate the most significant advantages (in terms of rewards) that a trust model provides.

8. Conclusions

Interactions between autonomous agents and humans in mixed environments are associated with several challenges. Although our focus is on conversational means, emotions (their expression, perception and reasoning) may also play a major role. In this paper, we have addressed the use of emotions within a trust model, with the innovative inclusion of the role that privacy issues play in such an emotional trust model. To the best of our knowledge, this issue has not been addressed before in the scientific literature. Our research accomplishes several milestones:
  • We proposed a particular way to include privacy in an emotional model that is compliant with psychological theories and previous practical approaches to these theories, as explained in Section 2;
  • We proposed a particular way to include privacy in the cooperation decisions of a trust model;
  • We suggested a set of particular values for all variables that form our privacy-sensitive emotional model;
  • We implemented our model in the reasoning of symbolic agents in GAML (the programming language of the GAMA agent platform) according to the beliefs, desires and intentions deliberative paradigm, communicating using the IEEE FIPA standard;
  • We also defined a cooperative logistic problem to test our model;
  • Finally, we executed agent simulations that generated two different comparisons, which allowed us to observe the contribution of emotions and trust in the defined cooperative logistic problem toward improving both of our goals: time savings and privacy protection.
In our proposal, we intended for the agents to be as humanized as possible through the use of a symbolic approach that provides explainability and transparency via BDI reasoning with FIPA communication. Privacy issues are also an important factor in how we feel when interacting with others, and emotional agents should address their perception, expression and reasoning. The inclusion of privacy sensitivity in an emotional trust model is so innovative that a comparison with alternative models is not yet possible.
Although the experimental results that we obtained from the simulation of the logistic problem do not seem to encourage the use of emotions to satisfy both of our goals (time savings and privacy protection), the inclusion of emotions is assumed to be a key element for autonomous agents to become accepted in real-life applications. Since emotions do not significantly harm the results, their inclusion could balance the small loss in performance with the improvement in representativeness that emotions provide to social interactions. This, rather than the potential gain that sharing the tasks themselves can provide, is the most significant contribution of our work.
Our contribution is not the only way to model the social punishments that privacy issues cause, and many particular settings could take another form and lead to very different experimental results, including the conceptual representation of emotions based on the problem used to test the model and the mathematical quantification of all involved factors. Many other alternative uses of privacy in an emotional interaction between humans and autonomous agents are possible, but our model provides a step towards a path worth exploring.
The adaptation of our emotional trust model to the hypothetical mixed environment of humans and unmanned vehicles is beyond the scope of this contribution, since it would require taking into account the existence of embodied unmanned vehicles expressing emotions through their gait, which we can currently only imagine. Other circumstances would also have to be taken into account, such as the capacity and range of the different types of autonomous vehicles. Other transportation issues, such as traffic, weather conditions or the differing relevance of the boxes, may also play a role. The real involvement of humans in the experiment would also make it even more difficult to test the model and to reach a conclusion. However, the usefulness of this research does not depend on such an adaptation: our model is useful because it provides a first proposal on how to involve privacy in the emotional deliberation that takes place in trust decisions. In spite of the lack of specificity in the definition of what the boxes carried by the agents contain and of what causes privacy concerns, and of the multiple adaptations required to apply it in real life, our proposal shows how privacy can be related to the emotional representation of agents trusting each other and, when emotions and trust are applied, how the agents moving boxes in our simulation address the combined goal of time savings and privacy protection.

Author Contributions

Conceptualization, J.C. and J.M.M.; Software, J.C.; Writing—original draft, J.C. and J.M.M.; Writing—review & editing, J.C. and J.M.M.; Supervision, J.M.M.; Funding acquisition, J.M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by public research projects of the Spanish Ministry of Science and Innovation (CACTUS) (reference PID2020-118249RB-C22) and the Spanish Ministry of Economy and Competitiveness (MINECO) (reference TEC2017-88048-C2-2-R). This was also supported by the Madrid Government under a Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3MXX) and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study. The complete code of all the agents, along with the environment and setup files, is publicly available in the Sourceforge repository (https://trustemotionalagents.sourceforge.io) in order to provide transparency and to facilitate the complete replication of all the simulations included in this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Picard, R. Affective Computing; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  2. Johnson, G. Theories of Emotion. Internet Encyclopedia of Philosophy. 2009. Available online: http://www.iep.utm.edu/emotion/ (accessed on 26 July 2023).
  3. Ekman, P. Basic Emotions. In Handbook of Cognition and Emotion; John Wiley & Sons: Hoboken, NJ, USA, 1999; pp. 45–60. [Google Scholar]
  4. Plutchik, R. Emotions and Life: Perspectives from Psychology, Biology, and Evolution; American Psychological Association: Washington, DC, USA, 2003. [Google Scholar]
  5. Becker, C.; Prendinger, H.; Ishizuka, M.; Wachsmuth, I. Evaluating Affective Feedback of the 3D Agent Max in a Competitive Cards Game. In Affective Computing and Intelligent Interaction; Springer: Berlin/Heidelberg, Germany, 2005; pp. 466–473. [Google Scholar]
  6. Damasio, A. Descartes’ Error: Emotion, Reason, and the Human Brain; Quill: New York, NY, USA, 1994. [Google Scholar]
  7. Ortony, A.; Clore, G.; Collins, A. The Cognitive Structure of Emotion. Contemp. Sociol. 1988, 18. [Google Scholar] [CrossRef] [Green Version]
  8. Russell, J. A Circumplex Model of Affect. J. Personal. Soc. Psychol. 1980, 39, 1161–1178. [Google Scholar] [CrossRef]
  9. Watson, D.; Tellegen, A. Toward a Consensual Structure of Mood. Psychol. Bull. 1985, 98, 219–235. [Google Scholar] [CrossRef] [PubMed]
  10. Mehrabian, A.; Russell, J. An Approach to Environmental Psychology; MIT Press: Cambridge, MA, USA, 1974. [Google Scholar]
  11. McCracken, L.M.; Zayfert, C.; Gross, R.T. The pain anxiety symptoms scale: Development and validation of a scale to measure fear of pain. Pain 1992, 50, 67–73. [Google Scholar] [CrossRef]
  12. Eysenck, H. The biological basis of personality. Nature 1963, 199, 1031–1034. [Google Scholar] [CrossRef]
  13. Tan, H.H.; Foo, M.D.; Kwek, M. The Effects of Customer Personality Traits on the Display of Positive Emotions. Acad. Manag. J. 2004, 47, 287–296. [Google Scholar] [CrossRef] [Green Version]
  14. Rothbart, M. Becoming Who We Are: Temperament and Personality in Development; Guilford Press: New York, NY, USA, 2012. [Google Scholar]
  15. Kagan, J.; Fox, N. Biology, culture, and temperamental biases. In Handbook of Child Psychology: Social, Emotional, and Personality Development; Eisenberg, N., Damon, W., Lerner, R., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2006; pp. 167–225. [Google Scholar]
  16. Eysenck, H.J.; Eysenck, S.B.G. Psychoticism as a Dimension of Personality; Taylor & Francis Group: Abingdon, UK, 1976. [Google Scholar]
  17. LeDoux, J. The Emotional Brain: The Mysterious Underpinnings of Emotional Life; Touchstone Book, Simon & Schuster: Manhattan, NY, USA, 1996. [Google Scholar]
  18. Cassell, J. Embodied Conversational Interface Agents. Commun. ACM 2000, 43, 70–78. [Google Scholar] [CrossRef]
  19. Prendinger, H.; Ishizuka, M. (Eds.) Life-Like Characters: Tools, Affective Functions, and Applications; Cognitive Technologies; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  20. de Rosis, F.; Pelachaud, C.; Poggi, I.; Carofiglio, V.; Carolis, B.D. From Greta’s mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. Int. J. Hum.-Comput. Stud. 2003, 59, 81–118. [Google Scholar] [CrossRef] [Green Version]
  21. Ochs, M.; Devooght, K.; Sadek, D.; Pelachaud, C. A Computational Model of Capability-Based Emotion Elicitation for Rational Agent. In Proceedings of the 1st workshop on Emotion and Computing-Current Research and Future Impact, German Conference on Artificial Intelligence (KI), Bremen, Germany, 19 June 2006; pp. 7–10. [Google Scholar]
  22. Breazeal, C. Emotion and Sociable Humanoid Robots. Int. J. Hum.-Comput. Stud. 2003, 59, 119–155. [Google Scholar] [CrossRef]
  23. Itoh, K.; Miwa, H.; Zecca, M.; Takanobu, H.; Roccella, S.; Carrozza, M.; Dario, P.; Takanishi, A. Mechanical design of emotion expression humanoid robot we-4rii. In CISM International Centre for Mechanical Sciences, Courses and Lectures; Springer International Publishing: Cham, Switzerland, 2006; pp. 255–262. [Google Scholar]
  24. Roether, C.L.; Omlor, L.; Christensen, A.; Giese, M.A. Critical features for the perception of emotion from gait. J. Vis. 2009, 9, 15. [Google Scholar] [CrossRef]
  25. Xu, S.; Fang, J.; Hu, X.; Ngai, E.; Wang, W.; Guo, Y.; Leung, V.C.M. Emotion Recognition From Gait Analyses: Current Research and Future Directions. IEEE Trans. Comput. Soc. Syst. 2022, 1–15. [Google Scholar] [CrossRef]
  26. Duval, S.; Becker, C.; Hashizume, H. Privacy Issues for the Disclosure of Emotions to Remote Acquaintances Without Simultaneous Communication. In Universal Access in Human Computer Interaction. Coping with Diversity, Proceedings of the 4th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2007, Held as Part of HCI International 2007, Beijing, China, 22–27 July 2007, Proceedings, Part I; Stephanidis, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4554, pp. 82–91. [Google Scholar]
  27. McStay, A. Emotional AI: The Rise of Empathic Media; SAGE Publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  28. Weiss, G. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
  29. Wooldridge, M.; Jennings, N. Agent theories, architectures and languages: A survey. In Lecture Notes in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 1995; Volume 890, pp. 1–39. [Google Scholar]
  30. Sabater-Mir, J.; Sierra, C. Review on Computational Trust and Reputation Models. Artif. Intell. Rev. 2005, 24, 33–60. [Google Scholar] [CrossRef]
  31. Falcone, R.; Castelfranchi, C. Social Trust: A Cognitive Approach. In Trust and Deception in Virtual Societies; Castelfranchi, C., Tan, Y.H., Eds.; Springer: Dordrecht, The Netherlands, 2001; pp. 55–90. [Google Scholar]
  32. Rao, A.S.; Georgeff, M.P. BDI Agents: From Theory to Practice. ICMAS 1995, 95, 312–319. [Google Scholar]
  33. Poslad, S. Specifying protocols for multi-agent system interaction. ACM Trans. Autonom. Adapt. Syst. 2007, 4, 15-es. [Google Scholar] [CrossRef]
  34. Barber, K.S.; Fullam, K.; Kim, J. Challenges for Trust, Fraud and Deception Research in Multi-Agent Systems. In Proceedings of the 2002 International Conference on Trust, Reputation and Security: Theories and Practice, AAMAS’02, Berlin, Germany, 15 July 2002; pp. 8–14. [Google Scholar]
  35. Bitencourt, G.K.; Silveira, R.A.; Marchi, J. TrustE: An Emotional Trust Model for Agents. In Proceedings of the 11th Edition of the European Workshop on Multi-agent Systems (EUMAS 2013), Toulouse, France, 12–13 December 2013; pp. 54–67. [Google Scholar]
  36. Granatyr, J.; Osman, N.; Dias, J.A.; Nunes, M.A.S.N.; Masthoff, J.; Enembreck, F.; Lessing, O.R.; Sierra, C.; Paiva, A.M.; Scalabrin, E.E. The Need for Affective Trust Applied to Trust and Reputation Models. ACM Comput. Surv. 2017, 50, 1–36. [Google Scholar] [CrossRef]
  37. Steunebrink, B.; Dastani, M.; Meyer, J.J.C. The OCC model revisited. In Proceedings of the 4th Workshop on Emotion and Computing, Paderborn, Germany, 15 September 2009; pp. 40–47. [Google Scholar]
  38. Yu, B.; Singh, M.P. A Social Mechanism of Reputation Management in Electronic Communities. In The CIA; Klusch, M., Kerschberg, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; Volume 1860, pp. 154–165. [Google Scholar]
  39. Parliament, E. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and repealing Directive 95/46/EC (General Data Protection Regulation). OJ 2016, L 119, 1–88. [Google Scholar]
  40. Gotterbarn, D. Privacy Lost: The Net, Autonomous Agents, and ‘Virtual Information’. Ethics Inf. Technol. 1999, 1, 147–154. [Google Scholar] [CrossRef]
  41. Wright, D. Making privacy impact assessment more effective. Inf. Soc. 2013, 29, 307–315. [Google Scholar] [CrossRef]
  42. Stewart, B. Privacy Impact Assessment: Optimising the Regulator’s Role. In Privacy Impact Assessment; Springer: Berlin/Heidelberg, Germany, 2012; pp. 437–444. [Google Scholar]
  43. Stark, L. The emotional context of information privacy. Inf. Soc. 2016, 32, 14–27. [Google Scholar] [CrossRef]
  44. Russell, J.A.; Mehrabian, A. Evidence for a three-factor theory of emotions. J. Res. Personal. 1977, 11, 273–294. [Google Scholar] [CrossRef]
  45. Becker-Asano, C.; Wachsmuth, I. Affective computing with primary and secondary emotions in a virtual human. Auton. Agents Multi-Agent Syst. 2010, 20, 32–49. [Google Scholar] [CrossRef] [Green Version]
  46. Rao, A.S.; Georgeff, M.P. BDI agents: From theory to practice. In Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), San Francisco, CA, USA, 12–14 June 1995; pp. 312–319. [Google Scholar]
  47. Grignard, A.; Taillandier, P.; Gaudou, B.; Vo, D.A.; Huynh, N.Q.; Drogoul, A. GAMA 1.6: Advancing the Art of Complex Agent-Based Modeling and Simulation. In PRIMA; Boella, G., Elkind, E., Savarimuthu, B.T.R., Dignum, F., Purvis, M.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8291, pp. 117–131. [Google Scholar]
  48. Caillou, P.; Gaudou, B.; Grignard, A.; Truong, C.Q.; Taillandier, P. A Simple-to-Use BDI Architecture for Agent-Based Modeling and Simulation. In Advances in Social Simulation 2015; Jager, W., Verbrugge, R., Flache, A., de Roo, G., Hoogduin, L., Hemelrijk, C., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 15–28. [Google Scholar]
  49. Taillandier, P.; Vo, D.A.; Amouroux, E.; Drogoul, A. GAMA: A Simulation Platform That Integrates Geographical Information Data, Agent-Based Modeling and Multi-scale Control. In Principles and Practice of Multi-Agent Systems; Desai, N., Liu, A., Winikoff, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 242–258. [Google Scholar]
  50. Bourgais, M.; Taillandier, P.; Vercouter, L. An Agent Architecture Coupling Cognition and Emotions for Simulation of Complex Systems. In Proceedings of the Social Simulation Conference, Rome, Italy, 19–23 September 2016. [Google Scholar]
Figure 1. FIPA CFP interaction protocol for delegating a task.
Figure 2. Sequentially linked FIPA CFP interaction protocol for a given task several times.
Figure 3. Parameter definition in the CUI simulation.
Figure 4. Initial situation in a scenario in which carrier agents move to the destinations of their boxes.
Figure 5. Ongoing situation in which carrier agents move to the destinations of their boxes.
Figure 6. Comparison of the three alternatives when the percentage of initially idle carrier agents increases.
Figure 7. Comparison of the three alternatives when the probability of privacy issues increases.
Table 1. PAD definition of each emotion according to [44].

Emotion     Pleasure   Arousal   Dominance
joy          0.75       0.48      0.35
sad         −0.63       0.27     −0.33
surprise     0.40       0.67     −0.13
fearful     −0.64       0.60     −0.43
angry       −0.51       0.59      0.25
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
