Application of Affective Computing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 February 2025

Special Issue Editors


Dr. Xin Liu
Guest Editor
Department of Computational Engineering, School of Engineering Sciences, Lappeenranta-Lahti University of Technology LUT, 53850 Lappeenranta, Finland
Interests: affective computing; micro-gesture; emotion AI; social signal processing; rPPG

Dr. Jingang Shi
Guest Editor
School of Software, Xi’an Jiaotong University, Xi’an 710049, China
Interests: face analysis; emotion analysis; image restoration; physiological signal analysis; action recognition; deep learning

Dr. Yuan Zong
Guest Editor
School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
Interests: facial expression analysis; micro-expression analysis; speech emotion recognition; multi-modal emotion recognition; EEG emotion recognition; domain adaptation

Special Issue Information

Dear Colleagues,

We would like to invite you to submit your research work to our Special Issue, “Application of Affective Computing”.

This Special Issue aims to provide a platform for researchers to share their novel contributions related to machine learning and deep learning methods for affective computing and its applications. Affective computing is a field that focuses on enabling machines to automatically perceive, recognize, and express emotions using multimodal signals such as video, images, audio, and text. If machines could understand emotions in a way similar to humans, existing human–computer interaction systems would become more natural. With the advancement of deep learning models and the use of well-designed architectures and loss functions, affective computing has made significant progress and shown promising prospects in recent years. This Special Issue encourages contributions related to unimodal and multimodal affective computing, emotional signal synthesis and conversion, large-scale databases, recent advances in affective computing, and applications of affective computing in fields such as healthcare, education, entertainment, and security.

Broad topics/keywords and areas of interest include but are not limited to:

  • Secure use of data in affective computing;
  • Identity-free affective computing;
  • Unimodal and multimodal affective computing;
  • Emotional signal synthesis and conversion;
  • Large-scale databases on affective computing;
  • Recent advances in affective computing;
  • Applications of affective computing in various fields such as healthcare, education, and entertainment.

Kind Regards,

Dr. Xin Liu
Dr. Jingang Shi
Dr. Yuan Zong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • affective computing
  • emotion recognition
  • multimodal information fusion
  • emotion database
  • emotional signal generation and conversion
  • deep neural networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

Article
High-Accuracy Classification of Multiple Distinct Human Emotions Using EEG Differential Entropy Features and ResNet18
by Longxin Yao, Yun Lu, Yukun Qian, Changjun He and Mingjiang Wang
Appl. Sci. 2024, 14(14), 6175; https://doi.org/10.3390/app14146175 - 16 Jul 2024
Abstract
The high-accuracy detection of multiple distinct human emotions is crucial for advancing affective computing, mental health diagnostics, and human–computer interaction. The integration of deep learning networks with entropy measures holds significant potential in neuroscience and medicine, especially for analyzing EEG-based emotion states. This study proposes a method combining ResNet18 with differential entropy to identify five types of human emotions (happiness, sadness, fear, disgust, and neutral) from EEG signals. Our approach first calculates the differential entropy of EEG signals to capture the complexity and variability of the emotional states. Then, the ResNet18 network is employed to learn feature representations from the differential entropy measures, which effectively captures the intricate spatiotemporal dynamics inherent in emotional EEG patterns using residual connections. To validate the efficacy of our method, we conducted experiments on the SEED-V dataset, achieving an average accuracy of 95.61%. Our findings demonstrate that the combination of ResNet18 with differential entropy is highly effective in classifying multiple distinct human emotions from EEG signals. This method shows robust generalization and broad applicability, indicating its potential for extension to various pattern recognition tasks across different domains.
(This article belongs to the Special Issue Application of Affective Computing)
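
The pipeline this abstract describes is compact enough to sketch. Below is a minimal, illustrative reconstruction in Python, not the authors' code: per-band differential entropy (DE) of EEG windows, which for an approximately Gaussian band-filtered signal reduces to 0.5·ln(2πe·σ²), stacked into a bands × channels × windows map and classified with a ResNet18 whose first convolution and final layer are resized. The band limits, sampling rate, and tensor shapes are assumptions for illustration.

import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt
from torchvision.models import resnet18

# Common EEG band limits in Hz (an assumption; the paper may differ).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(x):
    # DE of a Gaussian signal: 0.5 * ln(2 * pi * e * variance).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x, axis=-1))

def de_features(eeg, fs=200):
    # eeg: (channels, samples) -> DE feature map of shape (bands, channels).
    feats = []
    for low, high in BANDS.values():
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)
        feats.append(differential_entropy(filtfilt(b, a, eeg, axis=-1)))
    return np.stack(feats)

# ResNet18 adapted to take stacked DE maps (bands, channels, windows)
# as a five-channel "image" and to output five emotion classes.
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 5)

x = torch.randn(8, 5, 62, 32)  # batch of DE maps: (N, bands, channels, windows)
logits = model(x)              # (8, 5) emotion class scores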

Article
Real-Time Analysis of Facial Expressions for Mood Estimation
by Juan Sebastián Filippini, Javier Varona and Cristina Manresa-Yee
Appl. Sci. 2024, 14(14), 6173; https://doi.org/10.3390/app14146173 - 16 Jul 2024
Abstract
This paper proposes a model-based method for real-time automatic mood estimation in video sequences. The approach is customized by learning the person’s specific facial parameters, which are transformed into facial Action Units (AUs). A model mapping for mood representation is used to describe moods in terms of the PAD space: Pleasure, Arousal, and Dominance. From the intersection of these dimensions, eight octants represent fundamental mood categories. In the experimental evaluation, a stimulus video, randomly selected from a set prepared to elicit different moods, was played to participants while their facial expressions were recorded. The experiment showed that Dominance is the dimension least affected by facial expression, and that this dimension could be eliminated from mood categorization. Four categories corresponding to the quadrants of the Pleasure–Arousal (PA) plane, “Exalted”, “Calm”, “Anxious”, and “Bored”, were then defined, along with two further categories for the “Positive” and “Negative” signs of the Pleasure (P) dimension. Results showed 73% agreement in the PA categorization and 94% in the P dimension, demonstrating that facial expressions can be used to estimate moods within these defined categories and provide cues for assessing users’ subjective states in real-world applications.
(This article belongs to the Special Issue Application of Affective Computing)
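
For concreteness, the categorization step this abstract describes can be sketched in a few lines of Python. The quadrant-to-label assignment below follows the usual Pleasure–Arousal plane convention and is an assumption, not taken from the paper:

def pa_category(pleasure: float, arousal: float) -> str:
    # Map signed Pleasure-Arousal estimates to the four quadrant labels
    # (assumed assignment: high P/high A = "Exalted", etc.).
    if pleasure >= 0:
        return "Exalted" if arousal >= 0 else "Calm"
    return "Anxious" if arousal >= 0 else "Bored"

def pleasure_sign(pleasure: float) -> str:
    # The coarser two-way category on the Pleasure dimension alone.
    return "Positive" if pleasure >= 0 else "Negative"

print(pa_category(0.4, -0.2), pleasure_sign(0.4))  # -> Calm Positive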

Other

Systematic Review
A Systematic Literature Review of Modalities, Trends, and Limitations in Emotion Recognition, Affective Computing, and Sentiment Analysis
by Rosa A. García-Hernández, Huizilopoztli Luna-García, José M. Celaya-Padilla, Alejandra García-Hernández, Luis C. Reveles-Gómez, Luis Alberto Flores-Chaires, J. Ruben Delgado-Contreras, David Rondon and Klinge O. Villalba-Condori
Appl. Sci. 2024, 14(16), 7165; https://doi.org/10.3390/app14167165 - 15 Aug 2024
Abstract
This systematic literature review delves into the extensive landscape of emotion recognition, sentiment analysis, and affective computing, analyzing 609 articles. Exploring the intricate relationships among these research domains, and leveraging data from four well-established sources (IEEE, Science Direct, Springer, and MDPI), this systematic review classifies studies into four modalities based on the types of data analyzed: unimodal, multi-physical, multi-physiological, and multi-physical–physiological. After this classification, key insights about applications, learning models, and data sources are extracted and analyzed. The review highlights the exponential growth in studies utilizing EEG signals for emotion recognition, and the potential of multimodal approaches combining physical and physiological signals to enhance the accuracy and practicality of emotion recognition systems. This comprehensive overview of research advances, emerging trends, and limitations from 2018 to 2023 underscores the importance of continued exploration and interdisciplinary collaboration in these rapidly evolving fields.
(This article belongs to the Special Issue Application of Affective Computing)
