
Emotion Recognition in Human-Machine Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 44393

Special Issue Editors


Guest Editor: Prof. Dr. Nilanjan Sarkar
Department of Mechanical Engineering, Vanderbilt University, Nashville, TN 37212, USA
Interests: human-robot interaction; human-computer interaction; affective computing; physiological sensing; multi-modal sensing; autism intervention; dementia intervention

Guest Editor: Dr. Zhi Zheng
Department of Biomedical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
Interests: human-robot interaction; human-computer interaction; smart and connected health/community; technology-assisted mental healthcare; automatic interaction cue sensing for cognitive and behavioral analysis

Special Issue Information

Dear Colleagues,

In recent years, human-machine interaction (HMI) research has gained momentum in numerous application domains such as healthcare, entertainment, and public services. In many HMI applications, it is essential for the machine, such as a computer or a robot, to measure, understand, simulate, and react to human emotions. Emotion recognition has therefore become one of the most important aspects of HMI. Emotion recognition technology now spans from remote sensing (e.g., computer vision) to wearable devices (e.g., physiological sensors). Hybrid methods that integrate the advantages of different sensing and recognition mechanisms have also demonstrated their strength and flexibility, especially in application-driven designs demanded by user groups that differ in demographic features, health conditions, application environments, and other factors that shape user preferences. Furthermore, emotion recognition in HMI requires careful balancing of design factors such as computation speed, precision, robustness to environmental noise, detection range, and sensing distance. We invite original research papers and review articles on HMI-related emotion recognition innovations, including but not limited to algorithms, sensors, application-oriented system integration and tuning, and feasibility and usability studies.

Prof. Dr. Nilanjan Sarkar
Dr. Zhi Zheng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Affective computing
  • Emotion AI
  • Human-computer interaction
  • Human-robot interaction
  • Facial expression recognition
  • Gesture recognition
  • Speech inflection and emotion
  • Physiological signal sensing and processing
  • Human behavior sensing
  • Emotion-induced machine adaptation

Published Papers (11 papers)

Research

17 pages, 2085 KiB  
Article
Effective Emoticon Suggestion Technique Based on Active Emotional Input Using Facial Expressions and Heart Rate Signals
by Jesung Kim, Mincheol Kang, Bohun Seo, Jeongkyu Hong and Soontae Kim
Sensors 2023, 23(9), 4460; https://doi.org/10.3390/s23094460 - 3 May 2023
Cited by 1 | Viewed by 1862
Abstract
The evolution of mobile communication technology has brought about significant changes in the way people communicate. However, the lack of nonverbal cues in computer-mediated communication can make the accurate interpretation of emotions difficult. This study proposes a novel approach for using emotions as active input in mobile systems. This approach combines psychological and neuroscientific principles to accurately and comprehensively assess an individual’s emotions for use as input in mobile systems. The proposed technique combines facial and heart rate information to recognize users’ five prime emotions, which can be implemented on mobile devices using a front camera and a heart rate sensor. A user evaluation was conducted to verify the efficacy and feasibility of the proposed technique, and the results showed that users could express emotions faster and more accurately, with average recognition accuracies of 90% and 82% for induced and intended emotional expression, respectively. The proposed technique has the potential to enhance the user experience and provide more personalized and dynamic interaction with mobile systems. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
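The abstract above describes recognizing five prime emotions by combining facial-expression information with heart-rate signals on a mobile device. The paper's actual model is not reproduced here; the sketch below only illustrates generic decision-level fusion, and the emotion labels, fusion weight, and arousal mapping are all assumptions made for illustration.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "neutral"]  # assumed label set

def fuse_predictions(face_probs: np.ndarray, hr_arousal: float, w_face: float = 0.7) -> str:
    """Decision-level fusion sketch: blend facial-expression class probabilities
    with a heart-rate-derived arousal score (0 = calm, 1 = aroused)."""
    # Map the scalar arousal cue onto the emotion classes (assumed mapping:
    # high arousal supports anger/surprise/happiness, low arousal supports sadness/neutral).
    arousal_prior = np.array([hr_arousal, 1 - hr_arousal, hr_arousal, hr_arousal, 1 - hr_arousal])
    arousal_prior = arousal_prior / arousal_prior.sum()

    fused = w_face * face_probs + (1 - w_face) * arousal_prior
    return EMOTIONS[int(np.argmax(fused))]

# Example: a confident "happiness" face prediction paired with a high arousal reading.
print(fuse_predictions(np.array([0.6, 0.1, 0.1, 0.1, 0.1]), hr_arousal=0.8))
```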

13 pages, 3339 KiB  
Article
Semi-Supervised Behavior Labeling Using Multimodal Data during Virtual Teamwork-Based Collaborative Activities
by Abigale Plunk, Ashwaq Zaini Amat, Mahrukh Tauseef, Richard Alan Peters and Nilanjan Sarkar
Sensors 2023, 23(7), 3524; https://doi.org/10.3390/s23073524 - 28 Mar 2023
Cited by 2 | Viewed by 1926
Abstract
Adaptive human–computer systems require the recognition of human behavior states to provide real-time feedback to scaffold skill learning. These systems are being researched extensively for intervention and training in individuals with autism spectrum disorder (ASD). Autistic individuals are prone to social communication and behavioral differences that contribute to their high rate of unemployment. Teamwork training, which is beneficial for all people, can be a pivotal step in securing employment for these individuals. To broaden the reach of the training, virtual reality is a good option. However, adaptive virtual reality systems require real-time detection of behavior. Manual labeling of data is time-consuming and resource-intensive, making automated data annotation essential. In this paper, we propose a semi-supervised machine learning method to supplement manual data labeling of multimodal data in a collaborative virtual environment (CVE) used to train teamwork skills. With as little as 2.5% of the data manually labeled, the proposed semi-supervised learning model predicted labels for the remaining unlabeled data with an average accuracy of 81.3%, validating the use of semi-supervised learning to predict human behavior. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
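The abstract reports that manually labeling as little as 2.5% of the multimodal data and propagating labels with a semi-supervised model yielded 81.3% average accuracy. As a rough illustration of that idea (not the authors' implementation), the sketch below uses scikit-learn's LabelSpreading on synthetic features, with unlabeled windows marked as -1.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Placeholder multimodal feature matrix (e.g., per-window gaze, controller, and physiology features).
X = rng.normal(size=(2000, 12))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "behavior state" labels

# Keep roughly 2.5% of the labels; mark the rest as unlabeled (-1).
y = np.full_like(y_true, -1)
labeled_idx = rng.choice(len(y), size=int(0.025 * len(y)), replace=False)
y[labeled_idx] = y_true[labeled_idx]

# Propagate labels from the small labeled set to the unlabeled windows.
model = LabelSpreading(kernel="rbf", gamma=0.5, max_iter=100)
model.fit(X, y)

unlabeled = y == -1
accuracy = (model.transduction_[unlabeled] == y_true[unlabeled]).mean()
print(f"accuracy on the initially unlabeled windows: {accuracy:.3f}")
```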

14 pages, 1578 KiB  
Article
Assessment of Different Feature Extraction Methods for Discriminating Expressed Emotions during Music Performance towards BCMI Application
by Mahrad Ghodousi, Jachin Edward Pousson, Valdis Bernhofs and Inga Griškova-Bulanova
Sensors 2023, 23(4), 2252; https://doi.org/10.3390/s23042252 - 17 Feb 2023
Viewed by 1616
Abstract
A Brain-Computer Music Interface (BCMI) system may be designed to harness electroencephalography (EEG) signals for control over musical outputs in the context of emotionally expressive performance. To develop a real-time BCMI system, accurate and computationally efficient emotional biomarkers should first be identified. In the current study, we evaluated the ability of various features to discriminate between emotions expressed during music performance with the aim of developing a BCMI system. EEG data were recorded while subjects performed simple piano music with contrasting emotional cues and rated their success in communicating the intended emotion. Power spectra and connectivity features (Magnitude Square Coherence (MSC) and Granger Causality (GC)) were extracted from the signals. Two feature-selection approaches were used to assess the contribution of neutral baselines to detection accuracy: (1) using the baselines to normalize the features, and (2) not taking them into account (non-normalized features). Finally, a Support Vector Machine (SVM) was used to evaluate and compare the capability of the various features for emotion detection. The best detection accuracies were obtained with the non-normalized MSC-based features: 85.57 ± 2.34, 84.93 ± 1.67, and 87.16 ± 0.55 for arousal, valence, and emotional conditions, respectively, while the power-based features had the lowest accuracies. Both connectivity features show acceptable accuracy while requiring short processing times and are thus potential candidates for the development of a real-time BCMI system. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
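As a toy illustration of the pipeline sketched in the abstract (magnitude-squared coherence features fed to an SVM), the snippet below computes band-averaged MSC for every EEG channel pair with SciPy and trains a scikit-learn SVC. The sampling rate, frequency band, epoch layout, and labels are placeholders, not the study's settings.

```python
import numpy as np
from scipy.signal import coherence
from sklearn.svm import SVC

FS = 250  # assumed sampling rate (Hz)

def msc_features(eeg: np.ndarray, band=(8, 13)) -> np.ndarray:
    """Magnitude-squared coherence averaged over one frequency band,
    computed for every channel pair of an EEG epoch (channels x samples)."""
    n_ch = eeg.shape[0]
    feats = []
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, cxy = coherence(eeg[i], eeg[j], fs=FS, nperseg=FS)
            mask = (f >= band[0]) & (f <= band[1])
            feats.append(cxy[mask].mean())
    return np.array(feats)

# Placeholder epochs and labels (e.g., high vs. low arousal playing).
rng = np.random.default_rng(1)
epochs = rng.normal(size=(40, 8, 4 * FS))           # 40 epochs, 8 channels, 4 s each
labels = rng.integers(0, 2, size=40)

X = np.vstack([msc_features(e) for e in epochs])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```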

17 pages, 3183 KiB  
Article
Wireless Sensing Technology Combined with Facial Expression to Realize Multimodal Emotion Recognition
by Xiaochao Dang, Zetong Chen, Zhanjun Hao, Macidan Ga, Xinyu Han, Xiaotong Zhang and Jie Yang
Sensors 2023, 23(1), 338; https://doi.org/10.3390/s23010338 - 28 Dec 2022
Cited by 5 | Viewed by 2457
Abstract
Emotions significantly impact human physical and mental health, and, therefore, emotion recognition has been a popular research area in neuroscience, psychology, and medicine. In this paper, we preprocess the raw signals acquired by millimeter-wave radar to obtain high-quality heartbeat and respiration signals. We then propose a deep learning model that combines a convolutional neural network and a gated recurrent unit neural network with human facial expression images. The model achieves a recognition accuracy of 84.5% in person-dependent experiments and 74.25% in person-independent experiments. The experiments show that this multimodal model outperforms both a single deep learning model and traditional machine learning algorithms. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
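The abstract describes a deep model combining a convolutional neural network over facial-expression images with a gated recurrent unit network over radar-derived heartbeat and respiration signals. The PyTorch sketch below is only a schematic of that two-branch idea; the layer sizes, image resolution, sequence length, and number of emotion classes are assumptions.

```python
import torch
import torch.nn as nn

class FusionEmotionNet(nn.Module):
    """Toy two-branch sketch: a CNN encodes a facial-expression image and a GRU
    encodes radar-derived heartbeat/respiration sequences; the two embeddings
    are concatenated for emotion classification. Sizes are illustrative only."""

    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                      # -> (batch, 32)
        )
        self.gru = nn.GRU(input_size=2, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, face: torch.Tensor, vitals: torch.Tensor) -> torch.Tensor:
        img_emb = self.cnn(face)               # face: (batch, 1, 64, 64)
        _, h = self.gru(vitals)                # vitals: (batch, time, 2) = heartbeat + respiration
        return self.head(torch.cat([img_emb, h[-1]], dim=1))

model = FusionEmotionNet()
logits = model(torch.randn(8, 1, 64, 64), torch.randn(8, 100, 2))
print(logits.shape)  # torch.Size([8, 4])
```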

12 pages, 1976 KiB  
Article
Spectral Characteristics of EEG during Active Emotional Musical Performance
by Jachin Edward Pousson, Aleksandras Voicikas, Valdis Bernhofs, Evaldas Pipinis, Lana Burmistrova, Yuan-Pin Lin and Inga Griškova-Bulanova
Sensors 2021, 21(22), 7466; https://doi.org/10.3390/s21227466 - 10 Nov 2021
Cited by 12 | Viewed by 2568
Abstract
The research on neural correlates of intentional emotion communication by the music performer is still limited. In this study, we attempted to evaluate EEG patterns recorded from musicians who were instructed to perform a simple piano score while manipulating their manner of play to express specific contrasting emotions and self-rate the emotion they reflected on the scales of arousal and valence. In the emotional playing task, participants were instructed to improvise variations in a manner by which the targeted emotion is communicated. In contrast, in the neutral playing task, participants were asked to play the same piece precisely as written to obtain data for control over general patterns of motor and sensory activation during playing. The spectral analysis of the signal was applied as an initial step to be able to connect findings to the wider field of music-emotion research. The experimental contrast of emotional playing vs. neutral playing was employed to probe brain activity patterns differentially involved in distinct emotional states. The tasks of emotional and neutral playing differed considerably with respect to the state of intended-to-transfer emotion arousal and valence levels. The EEG activity differences were observed between distressed/excited and neutral/depressed/relaxed playing. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
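The study applies spectral analysis of the EEG as its first analysis step. A minimal sketch of that kind of band-power computation is shown below using Welch's method from SciPy; the sampling rate, band edges, and epoch shapes are assumed for illustration and do not correspond to the study's montage.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # conventional band edges

def band_power(epoch: np.ndarray) -> dict:
    """Average Welch band power per canonical band for one EEG epoch (channels x samples)."""
    f, psd = welch(epoch, fs=FS, nperseg=2 * FS, axis=-1)
    return {name: psd[:, (f >= lo) & (f < hi)].mean() for name, (lo, hi) in BANDS.items()}

# Placeholder epochs for the two playing conditions (emotional vs. neutral).
rng = np.random.default_rng(2)
emotional = band_power(rng.normal(size=(32, 10 * FS)))
neutral = band_power(rng.normal(size=(32, 10 * FS)))

for band in BANDS:
    print(f"{band}: emotional - neutral = {emotional[band] - neutral[band]:+.4f}")
```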

21 pages, 3736 KiB  
Article
Emotion-Driven Analysis and Control of Human-Robot Interactions in Collaborative Applications
by Aitor Toichoa Eyam, Wael M. Mohammed and Jose L. Martinez Lastra
Sensors 2021, 21(14), 4626; https://doi.org/10.3390/s21144626 - 6 Jul 2021
Cited by 36 | Viewed by 4678
Abstract
The utilization of robotic systems has been increasing over the last decade. This increase has been driven by advances in the computational capabilities, communication systems, and information systems of manufacturing, which is reflected in the concept of Industry 4.0. Furthermore, robotic systems are continuously required to address new challenges in the industrial and manufacturing domain, such as keeping humans in the loop. Briefly, the keeping-humans-in-the-loop concept focuses on closing the gap between humans and machines by introducing a safe and trustworthy environment in which human workers can work side by side with robots and machines. It aims at increasing the engagement of the human as the automation level increases, rather than replacing the human, which can be nearly impossible in some applications. Consequently, collaborative robots (cobots) have been created to allow physical interaction with the human worker. However, these cobots still lack the ability to recognize the human emotional state. In this regard, this paper presents an approach for adapting cobot parameters to the emotional state of the human worker. The approach utilizes electroencephalography (EEG) technology for digitizing and understanding the human emotional state. Afterwards, the parameters of the cobot are instantly adjusted to keep the human emotional state in a desirable range, which increases the confidence and trust between the human and the cobot. In addition, the paper includes a review of technologies and methods for emotional sensing and recognition. Finally, this approach is tested on an ABB YuMi cobot with a commercially available EEG headset. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
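The core idea in the abstract is a closed loop: estimate the worker's emotional state from EEG and adjust cobot parameters to keep that state in a desirable range. The sketch below shows a toy proportional adjustment rule; both estimate_arousal_from_eeg and set_cobot_speed are hypothetical placeholders, and no real headset or ABB YuMi API is implied.

```python
import time

def estimate_arousal_from_eeg() -> float:
    """Placeholder: in a real system this would return an arousal estimate in [0, 1]
    derived from the EEG headset stream (e.g., band-power ratios)."""
    return 0.65

def set_cobot_speed(fraction: float) -> None:
    """Placeholder for a vendor-specific motion-parameter call; no real cobot API is implied."""
    print(f"setting cobot speed to {fraction:.0%} of nominal")

TARGET_AROUSAL = 0.5   # assumed comfortable operating point
GAIN = 0.8             # assumed proportional gain

def adaptation_step(speed: float) -> float:
    """One iteration of a simple proportional rule: slow the cobot down when the
    operator's arousal rises above the target, speed it up when it falls below."""
    error = estimate_arousal_from_eeg() - TARGET_AROUSAL
    speed = min(1.0, max(0.2, speed - GAIN * error))
    set_cobot_speed(speed)
    return speed

speed = 1.0
for _ in range(3):
    speed = adaptation_step(speed)
    time.sleep(0.1)
```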

18 pages, 5634 KiB  
Article
Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases
by Shushi Namba, Wataru Sato, Masaki Osumi and Koh Shimokawa
Sensors 2021, 21(12), 4222; https://doi.org/10.3390/s21124222 - 20 Jun 2021
Cited by 26 | Viewed by 5251
Abstract
In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of systems that now have access to dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems could detect the presence of AUs in the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR provided higher values for the area under the receiver operating characteristic curve than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
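The comparison in this study relies on per-action-unit area under the ROC curve. The snippet below illustrates how such a per-AU AUC comparison can be computed with scikit-learn on synthetic frame-level scores; the detector names, AU subset, and scores are placeholders rather than the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
ACTION_UNITS = ["AU06", "AU12", "AU14"]  # example AUs; the study covers more

# Placeholder frame-level ground truth (AU present / absent) and two detectors' scores.
n_frames = 500
truth = {au: rng.integers(0, 2, size=n_frames) for au in ACTION_UNITS}
detector_scores = {
    "detector_a": {au: truth[au] * 0.6 + rng.random(n_frames) * 0.4 for au in ACTION_UNITS},
    "detector_b": {au: truth[au] * 0.3 + rng.random(n_frames) * 0.7 for au in ACTION_UNITS},
}

# Per-AU area under the ROC curve, the comparison metric highlighted in the abstract.
for name, scores in detector_scores.items():
    aucs = [roc_auc_score(truth[au], scores[au]) for au in ACTION_UNITS]
    print(name, {au: round(a, 3) for au, a in zip(ACTION_UNITS, aucs)})
```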

27 pages, 7306 KiB  
Article
Correlation Analysis of Different Measurement Places of Galvanic Skin Response in Test Groups Facing Pleasant and Unpleasant Stimuli
by Andres Sanchez-Comas, Kåre Synnes, Diego Molina-Estren, Alexander Troncoso-Palacio and Zhoe Comas-González
Sensors 2021, 21(12), 4210; https://doi.org/10.3390/s21124210 - 19 Jun 2021
Cited by 22 | Viewed by 8557
Abstract
The galvanic skin response (GSR; also widely known as electrodermal activity (EDA)) is a signal used in stress-related studies. Given the sparsity of studies on the GSR and the variety of devices, this study was conducted at the Human Health Activity Laboratory (H2AL) with 17 healthy subjects to determine the variability in the detection of changes in the galvanic skin response among a test group of heterogeneous respondents facing pleasant and unpleasant stimuli, correlating GSR biosignals measured at different body sites. We experimented with the right and left wrists, the left fingers, and the inner side of the right foot using Shimmer3GSR and Empatica E4 sensors. The results indicated the most promising homogeneous places for measuring the GSR, namely, the left fingers and the right foot. The results also suggested that, due to a significantly strong correlation between the inner side of the right foot and the left fingers, as well as moderate correlations with the right and left wrists, the foot may be a suitable place to homogeneously measure a GSR signal in a test group. We also discuss possible causes of weak and negative correlations arising from anomalies detected in the raw data, possibly related to the sensors or the test group, which should be considered when developing robust emotion detection systems based on GSR biosignals. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
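The analysis centres on correlating GSR traces recorded at different body sites. A minimal sketch of such a pairwise Pearson correlation is shown below with pandas; the site names match those discussed in the abstract, but the signals are synthetic placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 600  # placeholder number of samples for one stimulus session

# Synthetic GSR traces standing in for the recorded body sites.
base = np.cumsum(rng.normal(size=n))
gsr = pd.DataFrame({
    "left_fingers": base + rng.normal(scale=0.5, size=n),
    "right_foot":   base + rng.normal(scale=0.6, size=n),
    "left_wrist":   0.4 * base + rng.normal(scale=1.0, size=n),
    "right_wrist":  0.4 * base + rng.normal(scale=1.0, size=n),
})

# Pairwise Pearson correlations between measurement sites.
print(gsr.corr(method="pearson").round(2))
```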

14 pages, 3626 KiB  
Article
LUX: Smart Mirror with Sentiment Analysis for Mental Comfort
by Hyona Yu, Jihyun Bae, Jiyeon Choi and Hyungseok Kim
Sensors 2021, 21(9), 3092; https://doi.org/10.3390/s21093092 - 29 Apr 2021
Cited by 11 | Viewed by 4578
Abstract
As COVID-19 solidifies its presence in everyday life, the interest in mental health is growing, resulting in the necessity of sentiment analysis. A smart mirror is suitable for encouraging mental comfort due to its approachability and scalability as an in-home AI device. From the aspect of natural language processing (NLP), sentiment analysis for Korean lacks an emotion dataset regarding everyday conversation. Its significant differences from English in terms of language structure make implementation challenging. The proposed smart mirror LUX provides Korean text sentiment analysis with the deep learning model, which examines GRU, LSTM, CNN, Bi-LSTM, and Bi-GRU networks. There are four emotional labels: anger, sadness, neutral, and happiness. For each emotion, there are three possible interactive responses: reciting wise sayings, playing music, and sympathizing. The implemented smart mirror also includes more-typical functions, such as a wake-up prompt, a weather reporting function, a calendar, a news reporting function, and a clock. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
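The abstract lists GRU, LSTM, CNN, Bi-LSTM, and Bi-GRU networks as candidate classifiers for four emotion labels. As an illustrative sketch only (not the LUX implementation), the snippet below builds a small bidirectional-GRU text classifier in Keras; the vocabulary size, embedding size, and sequence length are assumptions, and a real Korean pipeline would additionally need morpheme-level tokenization, which is not shown.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 4     # anger, sadness, neutral, happiness
VOCAB_SIZE = 20000  # placeholder vocabulary size (token ids from a Korean tokenizer)

model = models.Sequential([
    layers.Input(shape=(50,), dtype="int32"),        # placeholder max sequence length of 50 tokens
    layers.Embedding(VOCAB_SIZE, 128),               # token-id sequences -> dense vectors
    layers.Bidirectional(layers.GRU(64)),            # one of the architectures listed in the abstract
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```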

24 pages, 2314 KiB  
Article
Building a Twitter Sentiment Analysis System with Recurrent Neural Networks
by Sergiu Cosmin Nistor, Mircea Moca, Darie Moldovan, Delia Beatrice Oprean and Răzvan Liviu Nistor
Sensors 2021, 21(7), 2266; https://doi.org/10.3390/s21072266 - 24 Mar 2021
Cited by 20 | Viewed by 4929
Abstract
This paper presents a sentiment analysis solution for tweets using Recurrent Neural Networks (RNNs). The method can classify tweets with an 80.74% accuracy rate on a binary task, after experimenting with 20 different design approaches. The solution integrates an attention mechanism aiming to enhance the network, with a two-way localization system: at the memory cell level and at the network level. We present an in-depth literature review of Twitter sentiment analysis and the building blocks that grounded the design decisions of our solution, which is employed as a core classification component within a sentiment indicator of the SynergyCrowds platform. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
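The abstract describes an RNN classifier for binary tweet sentiment enhanced with an attention mechanism at two levels. The sketch below shows only network-level attention pooling over LSTM states in Keras (the memory-cell-level attention is not reproduced); the vocabulary size, dimensions, and tokenization step are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 30000  # placeholder vocabulary size
EMBED_DIM = 100

tokens = layers.Input(shape=(None,), dtype="int32")           # padded token-id sequences
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)
states = layers.LSTM(64, return_sequences=True)(x)            # per-token hidden states

# Network-level additive attention: score each time step, softmax over time,
# and pool the hidden states into a single context vector.
scores = layers.Dense(1)(states)                              # (batch, time, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([states, weights])

sentiment = layers.Dense(1, activation="sigmoid")(context)    # positive vs. negative
model = models.Model(tokens, sentiment)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```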

Other

30 pages, 1385 KiB  
Systematic Review
Empathic and Empathetic Systematic Review to Standardize the Development of Reliable and Sustainable Empathic Systems
by Karl Daher, Dahlia Saad, Elena Mugellini, Denis Lalanne and Omar Abou Khaled
Sensors 2022, 22(8), 3046; https://doi.org/10.3390/s22083046 - 15 Apr 2022
Cited by 2 | Viewed by 3778
Abstract
Empathy plays a crucial role in human life, and the evolution of technology is affecting the way humans interact with machines. The area of affective computing is attracting considerable interest within the human–computer interaction community. However, the area of empathic interactions has not been explored in depth. This systematic review explores the latest advances in empathic interactions and behaviour. We provide key insights into the exploration, design, implementation, and evaluation of empathic interactions. Data were collected from the CHI conference between 2011 and 2021 to provide an overview of all studies covering empathic and empathetic interactions. Two authors screened and extracted data from a total of 59 articles relevant to this review. The features extracted cover interaction modalities, context understanding, usage fields, goals, and evaluation. The results reported here can be used as a foundation for the future research and development of empathic systems and interfaces and as a starting point for the gaps found. Full article
(This article belongs to the Special Issue Emotion Recognition in Human-Machine Interaction)
