Article

Towards Implementation of Emotional Intelligence in Human–Machine Collaborative Systems

1 Department of Software and Internet Technologies, Technical University of Varna, 9010 Varna, Bulgaria
2 Department of Communication Engineering and Technologies, Technical University of Varna, 9010 Varna, Bulgaria
3 Department of Computer Science and Engineering, Technical University of Varna, 9010 Varna, Bulgaria
* Author to whom correspondence should be addressed.
Electronics 2023, 12(18), 3852; https://doi.org/10.3390/electronics12183852
Submission received: 5 August 2023 / Revised: 6 September 2023 / Accepted: 8 September 2023 / Published: 12 September 2023

Abstract

The social awareness and relationship management components can be seen as a form of emotional intelligence. In the present work, we propose task-related adaptation on the machine side that accounts for a person’s momentary cognitive and emotional state. We validate the practical significance of the proposed approach in person-specific and person-independent setups. The analysis of results in the person-specific setup shows that the individual optimal performance curves, in the sense of the Yerkes–Dodson law, are displaced from person to person. Awareness of these curves allows for automated recognition of specific user profiles, real-time monitoring of the momentary condition, and activation of a particular relationship management strategy. This is especially important when a deviation is detected that is caused by a change in the person’s state of mind under the influence of known or unknown factors.

1. Introduction

The evolution of human–machine collaborative systems undoubtedly demands the creation of advanced human-friendly artificial intelligence (AI) architectures capable of person-specific adaptations, endowing the machines with the ability to adequately interpret human inputs, intentions, actions, and behaviors, allowing them to make decisions that are transparent to humans and also conform to societal and moral constraints. These are required for achieving the acceptability of technology and simultaneously for managing the effectiveness of collaborative systems.
Recent scientific studies in this regard are focusing on the intelligence of the machines, presented by their so-called cognitive component, including the ability to acquire information necessary for the achievement of a specific goal, processing this information through various machine learning (ML) methods, shaping a domain-specific knowledge, and as a result, implementation of respective action. The effectiveness of the ML process, measured through feedback from the machine, forms its individual intelligence quotient (IQ) [1], just like the IQ scoring for human individuals.
Human general intelligence also has a second component, emotional intelligence (EI), first formulated and studied by P. Salovey and J. Mayer [2] and further developed by D. Goleman [3]. EI describes the ability to understand and manage one’s own emotions, as well as the ability to correctly perceive and interpret the emotional states of surrounding individuals and subsequently to shape proper behavior toward them to achieve successful collaboration; i.e., the degree of EI, or the emotional quotient (EQ), is directly related to the ability to collaborate. The contemporary understanding of EI from the perspective of psychology includes four components and the corresponding abilities:
  • Self-awareness.
  • Self-management.
  • Social awareness.
  • Relationship management.
From the perspective of computer science and human–machine interaction (HMI), these could be interpreted as the following.
Self-awareness: As the machine does not possess inherent feelings and emotions to be aware of, this component represents the ability of the machine to interpret each situation arising in the dynamics of the collaboration process: the performance of the system (including the individual performance of the human user participating in the system), the phase/stage of the collaborative task, detected deviations, etc.
Self-management: Self-management relates to the machine’s adaptive abilities in response to the interpreted situation regarding the parameters of the collaborative task.
Social awareness: This component represents the ability of the machine to perceive and correctly interpret the emotional and cognitive states of its human partner during the implementation of a particular task, as well as to assess and forecast the influence of the changes in these human states on the task results.
Relationship management: Relationship management relates to the ability of the machine to adaptively change its behavior to ensure a positive impact on the human user (respectively, to the HMI collaborative system), relevant to their emotional and cognitive dynamics.
The latter two components are of particular interest in the current research. Concerning the social awareness of a machine, we consider its abilities for verbal and non-verbal perception (recognition and interpretation) of the human state through various technologies. Specifically, the verbal category involves natural language understanding and processing based on the recent advances in large language models (LLMs) [4,5,6,7,8,9,10,11], speech-to-text [12] and text-to-speech [13] conversion, translation [14], etc. The non-verbal category concerns cognitive state detection methods based on biosensors and visual and tactile input. The diversity of contemporary technologies providing the means of such awareness on the machine side includes:
  • Eye tracking—for attention recognition and state detection, by assessment of some parameters [15], e.g., pupil dilation [16,17], saccade and fixation duration [18];
  • Facial expression recognition—for emotional and cognitive state detection [19,20,21];
  • Electrocardiography (ECG), electrodermal activity (EDA) [22,23,24,25,26], and skin temperature sensors [27]—for assessment of the human body’s physiological reactions related to stress, anxiety, agitation, cognitive load, etc. It was also shown that the detection of heart rate variability could be based on photoplethysmography (PPG) sensors integrated into wearable devices [28,29], as well as on traditional electrode-based technology integrated into plasters [30] or smart textiles [31,32];
  • Electroencephalography (EEG) [33,34,35,36,37,38]—for capturing cognition processes, even in cases where a specific stimulus elicits no behavioral reaction. The potential of this technology is promising, primarily through the development of brain–computer interfaces (BCIs) [39,40,41,42], including those with a direct brain connection [43];
  • Equipment-extensive technologies for human state assessment, such as functional near-infrared spectroscopy (fNIRS) [44,45], capturing the hemodynamics in different parts of the brain; magnetic resonance imaging (MRI) [46], enabling the creation of a digital twin of a human brain [47]; electromyographic sensors [48,49,50] for detection as well as activation of controlled activities of a particular muscle group.
Despite these and other technological advances, recent research on integrating human emotional and cognitive states in adaptive HMI remains fragmented, and social awareness, and especially relationship management based on this awareness, remains neglected. Accounting for this gap, the reported research offers an advance toward enabling social awareness of machines, which opens the opportunity for integrating relationship management strategies. This article presents recent advances in integrating a multimodal emotion recognition component in the intelligent human–machine interface framework (iHMIfr) [51], which enables adaptation on the machine side for the implementation of intelligent performance management functionality. The experimental evaluation in person-independent and person-specific scenarios, reported in Section 3, validates the proposed approach.

2. Materials and Methods

2.1. Background

The research reported here builds on the conceptual architecture of the intelligent human–machine interface framework (iHMIfr) [51], the workflow adaptation for intelligent human–machine interfaces [52], and the implemented task-related transformation of a collaborative system [53]. The proposed unified view contributes to creating social awareness and relationship management strategies for machines. The advance proposed here, seen as the second phase of this research, focuses on including a representative user model based on assessing the human state of mind, in consistency with iHMIfr. The concept of the experimental validation extends, as a second phase, the task-related adaptation presented in [53].

2.2. Research Design

In the following, we present the conceptual design (Section 2.2.1), the experimental protocol, and the dataset used for its experimental validation (Section 2.2.2, Section 2.2.3 and Section 2.2.4).

2.2.1. Conceptual Design

The collaborative system implemented in the first phase of our research was based on two components: the application manager and the decision manager. The simplified workflow concerning the decision for adaptation is shown in Figure 1. To this end, a methodology for task-related ex-post transformation was formulated, including a definition of success criteria based on the particular task, a definition of thresholds for the task parameters (speed, complexity, etc.), a description of a task adaptation strategy based on an estimation of the current performance of the user, and implementation of the selected strategy in the application manager.
The advance proposed here includes a representative user model that assesses the human state of mind in consistency with iHMIfr. The human emotional and cognitive state is then interpreted, and a profiling strategy is applied to achieve personalized adaptation. In the present work, the analysis and interpretation of the momentary cognitive state are based on well-known psychological concepts, including the Yerkes–Dodson law, and on statistical data processing. The methodology followed at this phase includes the acquisition of multimodal input signals during the implementation of the collaborative task; interpretation and assessment of emotional and cognitive states and their registration and monitoring in the human model; definition of criteria for assessment of the emotional and cognitive state; creation of a personalized profile of the user, based on the selected criteria; design of an adaptive strategy, based on the current value of the selected criteria; and implementation of the strategy in the application manager.
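To make the role of the human model more concrete, the following JavaScript sketch illustrates how momentary state samples could be registered and summarized into a simple personalized profile. It is an illustration only, under assumed class and field names, and is not the authors’ implementation.

```javascript
// Minimal sketch of a "human model" that registers fused momentary state samples
// and exposes simple statistics an adaptive strategy could query.
// All names here are illustrative assumptions.
class HumanStateModel {
  constructor() {
    this.samples = []; // { timestamp, arousal, valence, attention, performance }
  }

  // Register one fused observation (application-manager data + emotion-recognition data).
  register(sample) {
    this.samples.push(sample);
  }

  // Mean of a numeric field over the last n registered samples.
  recentMean(field, n = 10) {
    const window = this.samples.slice(-n);
    if (window.length === 0) return null;
    return window.reduce((sum, s) => sum + s[field], 0) / window.length;
  }

  // A very small personalized profile: session-wide averages per state dimension.
  profile() {
    const all = this.samples.length;
    return {
      meanArousal: this.recentMean('arousal', all),
      meanValence: this.recentMean('valence', all),
      meanAttention: this.recentMean('attention', all),
      meanPerformance: this.recentMean('performance', all),
    };
  }
}
```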

2.2.2. Experimental Setup

A purposely developed web-based application was used to acquire user data for the experimental validation. The web application is coded in JavaScript (ES12) as the primary language, and ReactJS v. 18.2.0 is used for the reactive rendering of components. Styling is done with Cascading Style Sheets (CSS) in combination with the Material UI library. The browser’s local storage is used for real-time caching, and after the end of the task, all the data are saved in .csv file format.
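As a rough illustration of this caching and export pattern (and not the authors’ actual code), a browser-side routine could cache each record in local storage and serialize the accumulated records to CSV when the task ends; the storage key and column names below are assumptions.

```javascript
// Illustrative browser-side sketch: cache one row per event in localStorage,
// then serialize all cached rows to a downloadable CSV file at the end of the task.
const STORAGE_KEY = 'ihmi_session_rows'; // assumed key name

function cacheRow(row) {
  const rows = JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]');
  rows.push(row);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(rows));
}

function exportCsv() {
  const rows = JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]');
  const header = ['timestamp', 'stimulus', 'answer', 'performance', 'speed', 'reactionTime'];
  const lines = [header.join(',')].concat(rows.map((r) => header.map((h) => r[h]).join(',')));
  const blob = new Blob([lines.join('\n')], { type: 'text/csv' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'session.csv';
  link.click(); // triggers the download of the .csv file
}
```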
The input, as seen in Figure 1, is based on the MorphCast platform v.1.16 [54] through the integration of the SDK offered by the manufacturer. MorphCast provides real-time measurements of various psychological parameters, such as emotional valence, arousal, attention, and basic emotional states. These serve as multimodal input streams for user data modeling and interpretation components.

2.2.3. Dataset

Twenty-one volunteers took part in the experiment. All of them were students or researchers at the Technical University of Varna. All participants were informed in detail about the substance of the study and the data collected throughout the experiment, and each signed a written informed consent form before initiating their participation. In addition, all participants completed a questionnaire covering their age, gender, handedness (left or right), experience in computer games (rarely/moderately/often), weekly time spent gaming on a computer, mobile phone, or other devices (under 10 h/between 10 and 20 h/over 20 h), night sleeping time, both usual and during the previous night (in hours), and a self-assessment of their current working capacity/efficiency (low/medium/high).
The data collected from all 21 participants form the Emotional and Cognitive State Tracking dataset (ETICS). It consists of three .csv files for each individualization level from the second part of the test and, correspondingly, three files for the third part (with task adaptation), containing:
  • Timestamp/stimuli/answer (user response)/performance/speed/reaction time—from the application manager.
  • Timestamp/arousal/valence/attention/angry/disgust/fear/happy/neutral/sad/surprise—from MorphCast platform.
  • Data fusion of the above, matched by timestamp in the moment of user response.
In addition, synchronized high-resolution video and audio from the PC camera, capturing the user’s face (and facial expressions), and a screen-capture video from the application (the stimuli of the cognitive test) are available in .mp4 file format. Additional information about our research and the presented dataset can be found in the Supplementary Materials.

2.2.4. Experimental Protocol

The web-based application and data acquisition platform (Section 2.2.2) were installed on a computer equipped with a high-resolution camera. During the recordings, each volunteer was alone in the room, except for the experiment manager, who remained present. Before starting, each participant completed the informed consent form and the questionnaire (Section 2.2.3).
The experimental protocol consisted of three parts: an introduction to the rules, an individualization phase, and a testing phase. The task that the volunteers had to fulfill was the cognitive T-load D-back test proposed in [55] and explained in detail in [53]. In brief, the T-load D-back test is a two-component task in which the machine presents the user with stimuli of two types: letters and numbers. The presentation sequence is letter–number–letter–number. The user must respond to every number by pressing a key indicating whether it is even or odd. For letters, a reaction (again by pressing a key) is necessary only if the current letter is the same as the previous one. The first part introduces the participants to the task’s separate components (allowing them to try each and receive feedback about their success) and their combination (i.e., the actual test).
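A minimal JavaScript sketch of this response rule, written for illustration (not taken verbatim from [53] or [55]), is:

```javascript
// Sketch of the T-load D-back response rule described above: every number requires
// an even/odd response; a letter requires a response only when it repeats the
// previously shown letter.
function expectedResponse(stimulus, previousLetter) {
  if (/^[0-9]$/.test(stimulus)) {
    return Number(stimulus) % 2 === 0 ? 'even' : 'odd'; // respond to every number
  }
  return stimulus === previousLetter ? 'repeat' : 'none'; // letters: respond on repetition
}

// Walk through an illustrative letter–number–letter–number sequence.
let previousLetter = null;
for (const stimulus of ['A', '4', 'A', '7', 'B', '2']) {
  console.log(stimulus, '->', expectedResponse(stimulus, previousLetter));
  if (/^[A-Z]$/.test(stimulus)) previousLetter = stimulus; // remember the last letter shown
}
// A -> none, 4 -> even, A -> repeat, 7 -> odd, B -> none, 2 -> even
```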
The individualization of the task (i.e., the second part) is tailored so that the task initially adapts to the individual characteristics of the specific person, such as cognitive capacity, emotional involvement, concentration, level of fatigue, etc. Individualization consists of successive blocks with the same number of symbols, with the task speed increasing with each subsequent block. The goal is to detect, for each user, the threshold at which performance drops below a preliminarily defined success criterion (85% in this case); a score above 85% is considered successful task completion. Each user’s individualized load (in terms of speed) is then used to place them in a high-stress (high-workload) situation where the probability of errors is high. Task-specific parameters—current stage (level speed), performance (% errors), and response time (in milliseconds)—are recorded, as they are necessary in forming the decision for task-related adaptation. During the individualization stage, there is a minimum break of 1 min after each level, which the participant can prolong at their discretion.
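The calibration logic can be sketched as a simple loop over blocks of increasing speed; the block runner, step size, and interval bounds below are illustrative assumptions rather than the application’s exact settings (the 600–1600 ms interval bounds match the speed range observed in Table 1).

```javascript
// Hedged sketch of the individualization phase: present successive blocks at
// increasing speed (shorter interstimulus interval) until block performance
// drops below the 85% success criterion. runBlock() is assumed to present one
// block at the given interval and resolve with the fraction of correct responses.
async function individualize(runBlock, { startMs = 1600, stepMs = 100, minMs = 600 } = {}) {
  let intervalMs = startMs;
  while (intervalMs >= minMs) {
    const performance = await runBlock(intervalMs);
    if (performance < 0.85) {
      return intervalMs; // threshold found: this speed exceeds the user's capacity
    }
    intervalMs -= stepMs; // next block is faster
    // The protocol also enforces a break of at least 1 min between blocks (omitted here).
  }
  return minMs;
}
```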
The third part consists of a 15-minute test with constant task-related adaptation based on the current performance (estimated as the average over each sequence of 10 stimuli) and the selected settings of the application’s adaptive strategy. The adaptive process selected in this case is purely reactive, as follows: when the current performance is below the success criterion (85%), the application manager slows the process (by adding 100 ms between the stimuli); if the performance drops below a critical threshold (50%), the speed is decreased by a double step (200 ms); when the performance is above the success criterion, the speed remains the same; and when the performance is above the criterion for excellent performance (95%), the speed is increased by a single step (100 ms) to raise the efficiency of the HMI system.
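Expressed as code, the strategy amounts to a small threshold function over the interstimulus interval; the sketch below follows the thresholds and step sizes stated above, while the function name and calling convention are assumptions.

```javascript
// Sketch of the reactive adaptation rule described above. performance is the
// average over the last 10 stimuli (0..1); the returned value is the new
// interstimulus interval in milliseconds.
function adaptInterval(currentIntervalMs, performance) {
  if (performance < 0.50) return currentIntervalMs + 200; // below critical threshold: slow down by a double step
  if (performance < 0.85) return currentIntervalMs + 100; // below success criterion: slow down by one step
  if (performance > 0.95) return currentIntervalMs - 100; // excellent performance: speed up by one step
  return currentIntervalMs;                               // between 85% and 95%: keep the current speed
}

// For example, adaptInterval(1000, 0.80) yields 1100 and adaptInterval(1000, 0.97) yields 900.
```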
During the second and third parts of the test, data fusion on the decision level is performed, matching the data from the application manager and the data from the MorphCast platform and synchronizing them by timestamps.
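One straightforward way to realize such decision-level fusion is a nearest-timestamp join, sketched below; the field names mirror the dataset description, while everything else is an illustrative assumption.

```javascript
// Illustrative sketch of decision-level fusion: for each application-manager
// record (one per user response), attach the MorphCast record whose timestamp
// is closest to it.
function fuseByTimestamp(appRecords, morphRecords) {
  return appRecords.map((app) => {
    let nearest = null;
    let nearestDiff = Infinity;
    for (const m of morphRecords) {
      const diff = Math.abs(m.timestamp - app.timestamp);
      if (diff < nearestDiff) {
        nearestDiff = diff;
        nearest = m;
      }
    }
    return { ...app, ...nearest, timestamp: app.timestamp }; // keep the response timestamp
  });
}
```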

3. Results

The experimental results reported here have been obtained using the RapidMiner platform for data science and machine learning [56].

3.1. Results in Person-Independent Scenario

In this scenario, all the fused .csv files pertaining to the 21 volunteers are united in a common dataset. Some of the statistical parameters calculated in relation to the data attributes are presented in Table 1.
First, we note that the observed average user effectiveness was high: 0.879 ± 0.116, i.e., 87.9%. The high average value of this parameter can be explained by the system’s adaptability with respect to the task. Depending on the estimation of the current performance (averaged over the last 10 stimuli, according to the adaptive strategy chosen in this case), the time step was dynamically readjusted. The low standard deviation of ±0.116 among the volunteers indicates that the system adapted to each person’s individuality. The mean interstimulus interval (defining the speed of stimuli appearance) is 965.401 ms, and the mean user response time is 612.346 ms, with a spread of about 160–170 ms due to individual participant differences.
The average for all participants’ estimated arousal and emotional valence are −0.391 ± 0.223 and −0.311 ± 0.101, respectively. The standard deviation values indicate a certain degree of similarity among different people’s cognitive and emotional states due to their involvement in the same task in a common setup.
Because values of attention tracking were normalized to the interval [0, 1], an average value of about 0.5 was observed, which is in the center of this interval. At the same time, the histogram shows a non-Gaussian distribution of values, with two large clusters around 0 and 1. These extreme values correspond to the momentary values of attention and inattention during the task execution. The intermediate values most probably correspond to transition episodes between the two extremes, and thus are relatively rare.
Of considerable interest in our analysis is the interrelationship between cognitive and emotional states on the one hand and the participant’s performance on the other. In cognitive psychology, this relationship is represented by the Yerkes–Dodson law, shown in Figure 2.
The experimental results have been analyzed in two different ways. In the first, the arousal values were juxtaposed against the cumulative performance following the data processing algorithm:
  • Data extraction from all participants in the experiment (the sample).
  • Combining the data (in RapidMiner—through the “Append” operator).
  • Sorting the data in ascending order concerning the attribute “arousal”.
  • Setting the values of the “success” attribute as follows: 1 on success and −1 on failure.
  • Creating a new attribute, “cumulative performance”, by applying a cumulative sum over the “success” attribute (taking values −1 and 1).
  • Using the cumulative performance values to establish their dependence on arousal.
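Outside RapidMiner, the same chain can be reproduced with a few lines of code; the sketch below pools the records, sorts them by arousal, maps success to ±1, and accumulates the sum. Field names follow the dataset description; everything else is illustrative.

```javascript
// Sketch of the arousal vs. cumulative performance computation described above.
function cumulativePerformanceByArousal(records) {
  const sorted = [...records].sort((a, b) => a.arousal - b.arousal); // ascending arousal
  let cumulative = 0;
  return sorted.map((r) => {
    cumulative += r.success ? 1 : -1; // 1 on success, −1 on failure
    return { arousal: r.arousal, cumulativePerformance: cumulative };
  });
}
```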
The result is shown in Figure 3, where the interrelationship between arousal and cumulative performance is in good agreement with the Yerkes–Dodson law. At the same time, the adaptability of the task (through the preset strategy) ensures movement along the curve in the zone of optimal performance and does not let it drop.
The second analysis considers the users’ performance as consisting of two components—their attention and personal abilities to implement a particular task. While the latter depends strongly on the user’s personality (and is expected to show different patterns), the first could represent the cognitive part of the performance, i.e., it should also follow the Yerkes–Dodson law. In Figure 4, we show the obtained relationship between the arousal and attention values.
As shown in the figure, at low arousal values, attention is very close to zero (i.e., absent); then, as arousal increases, attention also increases until reaching saturation (values close to 1). The graph is in good agreement with the Yerkes–Dodson law, and the optimal average level of arousal for maintaining the maximum degree of attention, based on the considered dataset, can be placed in the interval [−0.25; 0.25].
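One plausible way to estimate such an optimal range from the fused data, offered here only as a hedged sketch (the paper does not specify the procedure), is to bin arousal, compute mean attention per bin, and report the bins whose mean attention lies close to the maximum; the bin width and tolerance below are illustrative choices.

```javascript
// Sketch: estimate the arousal range in which mean attention is near its maximum.
function optimalArousalRange(records, binWidth = 0.05, tolerance = 0.95) {
  const bins = new Map(); // bin index -> { sum, count }
  for (const r of records) {
    const idx = Math.floor(r.arousal / binWidth);
    const bin = bins.get(idx) || { sum: 0, count: 0 };
    bin.sum += r.attention;
    bin.count += 1;
    bins.set(idx, bin);
  }
  const means = [...bins.entries()].map(([idx, b]) => ({ idx, mean: b.sum / b.count }));
  const best = Math.max(...means.map((m) => m.mean));
  const near = means.filter((m) => m.mean >= tolerance * best).map((m) => m.idx);
  return { from: Math.min(...near) * binWidth, to: (Math.max(...near) + 1) * binWidth };
}
```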

3.2. Results in Person-Specific Scenario

In the person-specific scenario, the log files were processed per participant to extract regularities related to creating adequate user profiles. This included calculating and visualizing specific parameters and their subsequent comparison among the participants. We aimed to investigate the availability of characteristics relevant to factors such as gender, age, acquired skills, etc.

3.2.1. Valence–Arousal Relationship

The relationship between emotional valence and arousal was investigated to analyze the participants’ emotional states. It is presented in Figure 5, and the individual participants are distinguished by color. For comparison, Figure 6 shows a column graph of the average performance rate and the average level of attention, again by participants. When analyzing the data, it was observed that the participants with the highest average levels of attention (id_3, id_6, id_8, id_9, id_11, id_12, id_16, id_18, id_20) are predominantly present in the fourth quadrant of the valence—arousal scale, i.e., they are closer to conditions associated with stress. Participants with low levels of attention (id_1, id_4, id_15) mostly fill the second and third quadrants (probably due to their experience in computer games and acceptance of the test as a similar activity and due to the individual specificity of their perceptions). The routineness of task solving, especially for id_1 and id_4, was also confirmed by the high-performance rates accompanied by the lowest reaction times (Figure 7), with a low degree of dispersion presented by the standard deviation (Figure 8). In the first quadrant, there are relatively few data belonging to participants from all groups, and they are somewhat sporadic.

3.2.2. Arousal–Performance Relationship

As in the case of person-independent analysis, the relationship between emotional arousal and performance is established for the cumulative performance or through the attention component. In the first case, the following algorithm is used:
  • Data extraction from all participants in the experiment.
  • Sorting of the data of each participant in ascending order regarding the attribute “arousal”.
  • Setting the values of the “performance” attribute as follows: 1 on success and −1 on failure.
  • Creating a new attribute, “cumulative performance rate”, by applying a cumulative sum over the “performance” attribute (taking values −1 and 1).
  • Creating a new attribute “id_” and setting unique values for each participant.
  • Combining the data (in RapidMiner—by using the “Append” operator).
  • Using the cumulative performance rate values to establish their dependence on arousal.
The results obtained by implementing this algorithm are shown in Figure 9.
The analysis of results per participant reveals the individual optimal performance curve of that person according to the Yerkes–Dodson law. These curves create opportunities for the machine to recognize specific participant profiles, as well as for real-time monitoring and for activating a particular type of strategy when a deviation is detected. A change in the person’s state of mind under the influence of some factor might cause variations that can be detected automatically.

3.2.3. Arousal–Attention Relationship

Another interesting aspect of the per-participant analysis is the arousal–attention relationship. This dependence is presented in Figure 10, with each participant’s data in a different color.
As in the person-independent analysis, the relation between arousal and attention again conforms to the Yerkes–Dodson law. However, a significant scatter of these data is also visible. When the data were presented per participant and compared with the data from the conducted survey, some interesting regularities were revealed. As shown in Figure 11, a significant portion of the scattered data belongs to two profiles. Analysis of the survey data clarified that these two profiles belong to participants with a dominant left hand.
Although the participants differed significantly in their levels of emotional arousal during the test, most of the observed curves follow a trajectory compliant with the Yerkes–Dodson law. The two profiles of left-handed participants differ significantly, which warrants elaborating a machine learning model for recognizing atypical profiles, with a person’s left-handedness being one such atypical manifestation. Potential atypicalities linked to conditions such as dyslexia, autism, etc., were not investigated in our study.

4. Conclusions

The present research contributes toward developing an emotion modeling component to enhance machines with emotional intelligence functionality. Our concept builds on multimodal input concerning the user’s momentary cognitive and emotional condition and on the intelligent human–machine interface framework (iHMIfr). The experimental validation confirms that the automated recognition of mental states could help achieve awareness about the human condition in collaborative human–machine interaction.
The detection and profiling of personal traits during the execution of a given task and the corresponding activation of adequate adaptive strategies establish the ground for machine-to-human relationship management. Achieving social awareness and relationship management functionality, accompanied by a task-related adaptation (based on self-awareness and self-management on the machine side), contributes directly towards machine emotional intelligence.

Supplementary Materials

The following supporting information can be downloaded at http://isr.tu-varna.bg/ihmi/index.php/resursi (accessed on 7 September 2023).

Author Contributions

Conceptualization, M.M., V.M. and T.G.; methodology, M.M. and Y.K.; software, Y.K.; validation, M.M., Y.K. and V.M.; formal analysis, M.M.; investigation, M.M.; resources, V.M.; data curation, M.M.; writing—original draft preparation, M.M.; writing—review and editing, T.G.; visualization, M.M.; supervision, T.G.; project administration, T.G.; funding acquisition, T.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Bulgarian National Science Fund (BNSF), with grant agreement FNI KP-06-N37/18, entitled “Investigation on intelligent human-machine interaction interfaces, capable of recognizing high-risk emotional and cognitive conditions”.

Data Availability Statement

The data presented in this study are available in Supplementary Materials.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

References

  1. Liu, F.; Liu, Y.; Shi, Y. Three IQs of AI systems and their testing methods. J. Eng. 2020, 2020, 566–571. [Google Scholar] [CrossRef]
  2. Salovey, P.; Mayer, J.D. Emotional Intelligence. Imagin. Cogn. Personal. 1990, 9, 185–211. [Google Scholar] [CrossRef]
  3. Goleman, D. Emotional Intelligence; Bantam Books: New York, NY, USA, 2005. [Google Scholar]
  4. Hazarika, D.; Poria, S.; Zimmermann, R.; Mihalcea, R. Conversational transfer learning for emotion recognition. Inf. Fusion 2021, 65, 1–12. [Google Scholar] [CrossRef]
  5. Mohammadi Baghmolaei, R.; Ahmadi, A. TET: Text emotion transfer. Knowl-Based Syst. 2023, 262, 110236. [Google Scholar] [CrossRef]
  6. You, L.; Han, F.; Peng, J.; Jin, H.; Claramunt, C. ASK-RoBERTa: A pretraining model for aspect-based sentiment classification via sentiment knowledge mining. Knowl-Based Syst. 2022, 253, 109511. [Google Scholar] [CrossRef]
  7. Zhang, X.; Ma, Y. An ALBERT-based TextCNN-Hatt hybrid model enhanced with topic knowledge for sentiment analysis of sudden-onset disasters. Eng. Appl. Artif. Intell. 2023, 123, 106136. [Google Scholar] [CrossRef]
  8. Vekkot, S.; Gupta, D. Fusion of spectral and prosody modelling for multilingual speech emotion conversion. Knowl-Based Syst. 2022, 242, 108360. [Google Scholar] [CrossRef]
  9. Leippold, M. Sentiment spin: Attacking financial sentiment with GPT-3. Financ. Res. Lett. 2023, 55, 103957. [Google Scholar] [CrossRef]
  10. Gupta, A.; Singhal, A.; Mahajan, A.; Jolly, A.; Kumar, S. Empirical Framework for Automatic Detection of Neural and Human Authored Fake News. In Proceedings of the 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 25–27 May 2022; IEEE: Madurai, India, 2022; pp. 1625–1633. [Google Scholar]
  11. Malhotra, A.; Jindal, R. Deep learning techniques for suicide and depression detection from online social media: A scoping review. Appl. Soft Comput. 2022, 130, 109713. [Google Scholar] [CrossRef]
  12. Mi, C.; Xie, L.; Zhang, Y. Improving data augmentation for low resource speech-to-text translation with diverse paraphrasing. Neural Netw. 2022, 148, 194–205. [Google Scholar] [CrossRef] [PubMed]
  13. Korzekwa, D.; Lorenzo-Trueba, J.; Drugman, T.; Kostek, B. Computer-assisted pronunciation training—Speech synthesis is almost all you need. Speech Commun. 2022, 142, 22–33. [Google Scholar] [CrossRef]
  14. Zhang, H.; Yang, X.; Qu, D.; Li, Z. Bridging the cross-modal gap using adversarial training for speech-to-text translation. Digit. Signal Process. 2022, 131, 103764. [Google Scholar] [CrossRef]
  15. Lim, Y.; Gardi, A.; Pongsakornsathien, N.; Sabatini, R.; Ezer, N.; Kistan, T. Experimental characterisation of eye-tracking sensors for adaptive human-machine systems. Measurement 2019, 140, 151–160. [Google Scholar] [CrossRef]
  16. Shi, L.; Bhattacharya, N.; Das, A.; Gwizdka, J. True or false? Cognitive load when reading COVID-19 news headlines: An eye-tracking study. In Proceedings of the CHIIR ’23: ACM SIGIR Conference on Human Information Interaction and Retrieval, Austin, TX, USA, 19–23 March 2023; ACM: Austin, TX, USA, 2023; pp. 107–116. [Google Scholar]
  17. Erdogan, R.; Saglam, Z.; Cetintav, G.; Karaoglan Yilmaz, F.G. Examination of the usability of Tinkercad application in educational robotics teaching by eye tracking technique. Smart Learn. Environ. 2023, 10, 27. [Google Scholar] [CrossRef]
  18. Li, S.; Duffy, M.C.; Lajoie, S.P.; Zheng, J.; Lachapelle, K. Using eye tracking to examine expert-novice differences during simulated surgical training: A case study. Comput. Hum. Behav. 2023, 144, 107720. [Google Scholar] [CrossRef]
  19. Fernandes, A.S.; Murdison, T.S.; Proulx, M.J. Leveling the Playing Field: A Comparative Reevaluation of Unmodified Eye Tracking as an Input and Interaction Modality for VR. IEEE Trans. Visual. Comput. Graphics 2023, 29, 2269–2279. [Google Scholar] [CrossRef] [PubMed]
  20. Shadiev, R.; Li, D. A review study on eye-tracking technology usage in immersive virtual reality learning environments. Comput. Educ. 2023, 196, 104681. [Google Scholar] [CrossRef]
  21. Pan, H.; Xie, L.; Wang, Z. C3DBed: Facial micro-expression recognition with three-dimensional convolutional neural network embedding in transformer model. Eng. Appl. Artif. Intell. 2023, 123, 106258. [Google Scholar] [CrossRef]
  22. Sung, G.; Bhinder, H.; Feng, T.; Schneider, B. Stressed or engaged? Addressing the mixed significance of physiological activity during constructivist learning. Comput. Educ. 2023, 199, 104784. [Google Scholar] [CrossRef]
  23. Campanella, S.; Altaleb, A.; Belli, A.; Pierleoni, P.; Palma, L.A. Method for Stress Detection Using Empatica E4 Bracelet and Machine-Learning Techniques. Sensors 2023, 23, 3565. [Google Scholar] [CrossRef]
  24. Chen, K.; Han, J.; Baldauf, H.; Wang, Z.; Chen, D.; Kato, A.; Ward, J.A.; Kunze, K. Affective Umbrella—A Wearable System to Visualize Heart and Electrodermal Activity, towards Emotion Regulation through Somaesthetic Appreciation. In Proceedings of the AHs ’23: Augmented Humans Conference, Glasgow, UK, 12–14 March 2023; ACM: Glasgow, UK, 2023; pp. 231–242. [Google Scholar]
  25. Sagastibeltza, N.; Salazar-Ramirez, A.; Martinez, R.; Jodra, J.L.; Muguerza, J. Automatic detection of the mental state in responses towards relaxation. Neural Comput. Appl. 2023, 35, 5679–5696. [Google Scholar] [CrossRef] [PubMed]
  26. Stržinar, Ž.; Sanchis, A.; Ledezma, A.; Sipele, O.; Pregelj, B.; Škrjanc, I. Stress Detection Using Frequency Spectrum Analysis of Wrist-Measured Electrodermal Activity. Sensors 2023, 23, 963. [Google Scholar] [CrossRef] [PubMed]
  27. Castro-García, J.A.; Molina-Cantero, A.J.; Gómez-González, I.M.; Lafuente-Arroyo, S.; Merino-Monge, M. Towards Human Stress and Activity Recognition: A Review and a First Approach Based on Low-Cost Wearables. Electronics 2022, 11, 155. [Google Scholar] [CrossRef]
  28. Mach, S.; Storozynski, P.; Halama, J.; Krems, J.F. Assessing mental workload with wearable devices—Reliability and applicability of heart rate and motion measurements. Appl. Ergon. 2022, 105, 103855. [Google Scholar] [CrossRef]
  29. Ngoc-Thang, B.; Tien Nguyen, T.M.; Truong, T.T.; Nguyen, B.L.-H.; Nguyen, T.T. A dynamic reconfigurable wearable device to acquire high quality PPG signal and robust heart rate estimate based on deep learning algorithm for smart healthcare system. Biosens. Bioelectron. X 2022, 12, 100223. [Google Scholar] [CrossRef]
  30. Wang, Z.; Matsuhashi, R.; Onodera, H. Towards wearable thermal comfort assessment framework by analysis of heart rate variability. Build. Environ. 2022, 223, 109504. [Google Scholar] [CrossRef]
  31. Goumopoulos, C.; Stergiopoulos, N.G. Mental stress detection using a wearable device and heart rate variability monitoring. In Edge-of-Things in Personalized Healthcare Support Systems; Elsevier: Amsterdam, The Netherlands, 2022; pp. 261–290. [Google Scholar]
  32. Chen, Y.; Wang, Z.; Tian, X.; Liu, W. Evaluation of cognitive performance in high temperature with heart rate: A pilot study. Build. Environ. 2023, 228, 109801. [Google Scholar] [CrossRef]
  33. Du, H.; Riddell, R.P.; Wang, X. A hybrid complex-valued neural network framework with applications to electroencephalogram (EEG). Biomed. Signal Process. Control. 2023, 85, 104862. [Google Scholar] [CrossRef]
  34. Soni, S.; Seal, A.; Mohanty, S.K.; Sakurai, K. Electroencephalography signals-based sparse networks integration using a fuzzy ensemble technique for depression detection. Biomed. Signal Process. Control. 2023, 85, 104873. [Google Scholar] [CrossRef]
  35. Zali-Vargahan, B.; Charmin, A.; Kalbkhani, H.; Barghandan, S. Deep time-frequency features and semi-supervised dimension reduction for subject-independent emotion recognition from multi-channel EEG signals. Biomed. Signal Process. Control. 2023, 85, 104806. [Google Scholar] [CrossRef]
  36. Liu, S.; Zhao, Y.; An, Y.; Zhao, J.; Wang, S.-H.; Yan, J. GLFANet: A global to local feature aggregation network for EEG emotion recognition. Biomed. Signal Process. Control. 2023, 85, 104799. [Google Scholar] [CrossRef]
  37. Gong, L.; Li, M.; Zhang, T.; Chen, W. EEG emotion recognition using attention-based convolutional transformer neural network. Biomed. Signal Process. Control. 2023, 84, 104835. [Google Scholar] [CrossRef]
  38. Quan, J.; Li, Y.; Wang, L.; He, R.; Yang, S.; Guo, L. EEG-based cross-subject emotion recognition using multi-source domain transfer learning. Biomed. Signal Process. Control. 2023, 84, 104741. [Google Scholar] [CrossRef]
  39. Baradaran, F.; Farzan, A.; Danishvar, S.; Sheykhivand, S. Automatic Emotion Recognition from EEG Signals Using a Combination of Type-2 Fuzzy and Deep Convolutional Networks. Electronics 2023, 12, 2216. [Google Scholar] [CrossRef]
  40. Baradaran, F.; Farzan, A.; Danishvar, S.; Sheykhivand, S. Customized 2D CNN Model for the Automatic Emotion Recognition Based on EEG Signals. Electronics 2023, 12, 2232. [Google Scholar] [CrossRef]
  41. Cardona-Álvarez, Y.N.; Álvarez-Meza, A.M.; Cárdenas-Peña, D.A.; Castaño-Duque, G.A.; Castellanos-Dominguez, G.A. Novel OpenBCI Framework for EEG-Based Neurophysiological Experiments. Sensors 2023, 23, 3763. [Google Scholar] [CrossRef]
  42. Li, X.; Chen, J.; Shi, N.; Yang, C.; Gao, P.; Chen, X.; Wang, Y.; Gao, S.; Gao, X. A hybrid steady-state visual evoked response-based brain-computer interface with MEG and EEG. Expert Syst. Appl. 2023, 223, 119736. [Google Scholar] [CrossRef]
  43. Musk, E. Neuralink. An Integrated Brain-Machine Interface Platform with Thousands of Channels. J. Med. Internet. Res. 2019, 21, e16194. [Google Scholar] [CrossRef]
  44. Zhou, L.; Wu, B.; Deng, Y.; Liu, M. Brain activation and individual differences of emotional perception and imagery in healthy adults: A functional near-infrared spectroscopy (fNIRS) study. Neurosci. Lett. 2023, 797, 137072. [Google Scholar] [CrossRef]
  45. Karmakar, S.; Kamilya, S.; Dey, P.; Guhathakurta, P.K.; Dalui, M.; Bera, T.K.; Halder, S.; Koley, C.; Pal, T.; Basu, A. Real time detection of cognitive load using fNIRS: A deep learning approach. Biomed. Signal Process. Control. 2023, 80, 104227. [Google Scholar] [CrossRef]
  46. Roberts, G.S.; Hoffman, C.A.; Rivera-Rivera, L.A.; Berman, S.E.; Eisenmenger, L.B.; Wieben, O. Automated hemodynamic assessment for cranial 4D flow MRI. Magn. Reson. Imaging 2023, 97, 46–55. [Google Scholar] [CrossRef]
  47. Paul, G. From the visible human project to the digital twin. In Digital Human Modeling and Medicine; Elsevier: Amsterdam, The Netherlands, 2023; pp. 3–17. [Google Scholar]
  48. Bangaru, S.S.; Wang, C.; Busam, S.A.; Aghazadeh, F. ANN-based automated scaffold builder activity recognition through wearable EMG and IMU sensors. Autom. Constr. 2021, 126, 103653. [Google Scholar] [CrossRef]
  49. Nicholls, B.; Ang, C.S.; Kanjo, E.; Siriaraya, P.; Mirzaee Bafti, S.; Yeo, W.-H.; Tsanas, A. An EMG-based Eating Behaviour Monitoring system with haptic feedback to promote mindful eating. Comput. Biol. Med. 2022, 149, 106068. [Google Scholar] [CrossRef] [PubMed]
  50. Tian, H.; Li, X.; Wei, Y.; Ji, S.; Yang, Q.; Gou, G.-Y.; Wang, X.; Wu, F.; Jian, J.; Guo, H.; et al. Bioinspired dual-channel speech recognition using graphene-based electromyographic and mechanical sensors. Cell Rep. Phys. Sci. 2022, 3, 101075. [Google Scholar] [CrossRef]
  51. Markov, M.; Ganchev, T. Intelligent human-machine interface framework. Int. J. Adv. Electron. Comput. Sci. 2022, 9, 41–46. [Google Scholar]
  52. Markov, M. Workflow adaptation for intelligent human-machine interfaces. Comput. Sci. Technol. J. Tech. Univ. Varna 2022, 1, 51–58. [Google Scholar]
  53. Markov, M.; Kalinin, Y.; Ganchev, T. A Task-related Adaptation in Intelligent Human-Machine Interfaces. In Proceedings of the 2022 International Conference on Communications, Information, Electronic and Energy Systems (CIEES), Veliko Tarnovo, Bulgaria, 24–26 November 2022; IEEE: Veliko Tarnovo, Bulgaria, 2022; pp. 1–4. [Google Scholar]
  54. Anon. Emotion AI Provider. Facial Emotion Recognition MorphCast. 2023. Available online: https://www.morphcast.com (accessed on 7 September 2023).
  55. O’Keeffe, K.; Hodder, S.; Lloyd, A. A comparison of methods used for inducing mental fatigue in performance research: Individualised, dual-task and short duration cognitive tests are most effective. Ergonomics 2020, 63, 1–12. [Google Scholar] [CrossRef]
  56. Anon. RapidMiner | Amplify the Impact of Your People, Expertise & Data. RapidMiner. Available online: https://www.rapidminer.com (accessed on 7 September 2023).
Figure 1. Workflow for decision for adaptation.
Figure 2. The Yerkes–Dodson law—tasks of different difficulty. (Blue line—the relationship during a normal task; yellow line—during an easy task; orange line—during a hard task).
Figure 3. Arousal–cumulative performance relationship.
Figure 4. Arousal–attention relationship.
Figure 5. Arousal–valence relationship in participants.
Figure 6. Average performance rate and average level of attention by participants.
Figure 7. Average reaction time of participants.
Figure 8. Standard deviation of reaction time of participants.
Figure 9. Arousal–cumulative performance of participants.
Figure 10. Arousal–attention of participants.
Figure 11. Arousal–attention of participants—left-handedness.
Table 1. Statistical analysis in the person-independent scenario.

| Name | Type | Min | Max | Average | Deviation |
| --- | --- | --- | --- | --- | --- |
| Av. Perf | Real | 0.400 | 1 | 0.879 | 0.116 |
| speed | Integer | 600 | 1600 | 985.401 | 176.338 |
| reaction time | Integer | 4 | 1380 | 612.346 | 165.508 |
| arousal | Real | −0.775 | 0.389 | −0.391 | 0.223 |
| valence | Real | −0.866 | 0.462 | −0.311 | 0.101 |
| attention | Real | 0 | 1 | 0.495 | 0.401 |