Search Results (282)

Search Parameters:
Keywords = automatic emotions

22 pages, 6196 KiB  
Article
Building a Gender-Bias-Resistant Super Corpus as a Deep Learning Baseline for Speech Emotion Recognition
by Babak Abbaschian and Adel Elmaghraby
Sensors 2025, 25(7), 1991; https://doi.org/10.3390/s25071991 - 22 Mar 2025
Viewed by 133
Abstract
The focus on Speech Emotion Recognition (SER) has dramatically increased in recent years, driven by the need for automatic speech-recognition-based systems and intelligent assistants to enhance user experience by incorporating emotional content. While deep learning techniques have significantly advanced SER systems, their robustness concerning speaker gender and out-of-distribution data has not been thoroughly examined. Furthermore, standards for SER remain rooted in landmark papers from the 2000s, even though modern deep learning architectures can achieve comparable or superior results to the state of the art of that era. In this research, we address these challenges by creating a new super corpus from existing databases, providing a larger pool of samples. We benchmark this dataset using various deep learning architectures, setting a new baseline for the task. Additionally, our experiments reveal that models trained on this super corpus demonstrate superior generalization and accuracy and exhibit lower gender bias compared to models trained on individual databases. We further show that traditional preprocessing techniques, such as denoising and normalization, are insufficient to address inherent biases in the data. However, our data augmentation approach effectively shifts these biases, improving model fairness across gender groups and emotions and, in some cases, fully debiasing the models. Full article
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
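As a rough illustration of the gender-bias angle in this abstract, the sketch below computes per-gender accuracy on a pooled test set and reports the gap between groups as a simple bias measure. The function, the label arrays, and the gap-as-bias definition are assumptions for illustration, not the authors' benchmark code.

```python
# Hypothetical sketch: quantifying gender bias in an SER model as the gap
# between per-group accuracies on a pooled ("super corpus") test set.
import numpy as np
from sklearn.metrics import accuracy_score

def gender_accuracy_gap(y_true, y_pred, speaker_gender):
    """Return per-group accuracy and the absolute gap between groups."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    speaker_gender = np.asarray(speaker_gender)

    per_group = {}
    for group in np.unique(speaker_gender):
        mask = speaker_gender == group
        per_group[group] = accuracy_score(y_true[mask], y_pred[mask])

    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Example with dummy predictions:
labels      = ["angry", "happy", "sad", "neutral", "angry", "happy"]
predictions = ["angry", "sad",   "sad", "neutral", "happy", "happy"]
genders     = ["f",     "f",     "f",   "m",       "m",     "m"]
per_group, gap = gender_accuracy_gap(labels, predictions, genders)
print(per_group, gap)  # per-group accuracies and their absolute difference
```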

12 pages, 2745 KiB  
Article
A Study on Deep Learning Performances of Identifying Images’ Emotion: Comparing Performances of Three Algorithms to Analyze Fashion Items
by Gaeun Lee, Seoyun Yi and Jongtae Lee
Appl. Sci. 2025, 15(6), 3318; https://doi.org/10.3390/app15063318 - 18 Mar 2025
Viewed by 158
Abstract
Emotion recognition using AI has garnered significant attention in recent years, particularly in areas such as fashion, where understanding consumer sentiment can drive more personalized and effective marketing strategies. This study proposes an AI model that automatically analyzes the emotions conveyed by fashion images and compares the performance of CNN, ViT, and ResNet models to determine the most suitable architecture. The experimental results showed that the vision transformer (ViT) model outperformed both the ResNet50 and CNN models. This is because transformer-based models, like ViT, offer greater scalability compared to CNN-based models. Specifically, ViT utilizes the transformer structure directly, which requires fewer computational resources during transfer learning compared to CNNs. This study illustrates that the vision transformer (ViT) achieves higher performance with fewer computational resources than a CNN during transfer learning. In terms of academic and practical implications, the strong performance of ViT demonstrates the scalability and efficiency of transformer structures, indicating the need for further research applying transformer-based models to diverse datasets and environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
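The transfer-learning comparison described above can be illustrated with a standard torchvision ViT fine-tuning setup. This is a minimal sketch assuming a hypothetical five-class emotion label set; it is not the study's actual training pipeline.

```python
# Minimal sketch of ViT transfer learning with torchvision, assuming a
# hypothetical set of fashion-image emotion classes; not the authors' code.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_EMOTION_CLASSES = 5  # assumption; the paper's label set is not given here

weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)

# Freeze the pretrained backbone so transfer learning only updates the head.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the emotion labels.
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_EMOTION_CLASSES)

preprocess = weights.transforms()  # resizing/normalization expected by ViT-B/16
optimizer = torch.optim.AdamW(model.heads.head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch:
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_EMOTION_CLASSES, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```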

18 pages, 4127 KiB  
Article
Large Language Model-Driven 3D Hyper-Realistic Interactive Intelligent Digital Human System
by Yanying Song and Wei Xiong
Sensors 2025, 25(6), 1855; https://doi.org/10.3390/s25061855 - 17 Mar 2025
Viewed by 481
Abstract
Digital technologies are undergoing comprehensive integration across diverse domains and processes of the human economy, politics, culture, society, and ecological civilization. This integration brings forth novel concepts, formats, and models. In the context of the accelerated convergence between the digital and physical worlds, a discreet yet momentous transformation is being steered by artificial intelligence generated content (AIGC). This transformative force quietly reshapes and potentially disrupts the established patterns of digital content production and consumption. Consequently, it holds the potential to significantly enhance the digital lives of individuals and stands as an indispensable impetus for the comprehensive transition towards a new era of digital civilization in the future. This paper presents our award-winning project, a large language model (LLM)-powered 3D hyper-realistic interactive digital human system that employs automatic speech recognition (ASR), natural language processing (NLP), and emotional text-to-speech (TTS) technologies. Our system is designed with a modular concept and client–server (C/S) distributed architecture that emphasizes the separation of components for scalable development and efficient progress. The paper also discusses the use of computer graphics (CG) and artificial intelligence (AI) in creating photorealistic 3D environments for meta humans, and explores potential applications for this technology. Full article
(This article belongs to the Section Sensing and Imaging)
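The modular ASR → LLM → emotional-TTS loop described in the abstract can be sketched as a set of interchangeable components. Every interface and method name below is a placeholder chosen to show the separation of concerns in a client–server deployment, not the authors' system.

```python
# Conceptual sketch of the modular ASR -> LLM -> emotional-TTS turn loop.
# All class and method names are placeholders illustrating the separation of
# components, not the authors' implementation.
from dataclasses import dataclass
from typing import Protocol

class SpeechRecognizer(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class DialogueModel(Protocol):
    def respond(self, user_text: str) -> tuple[str, str]:
        """Return (reply_text, emotion_tag) for the digital human."""
        ...

class EmotionalTTS(Protocol):
    def synthesize(self, text: str, emotion: str) -> bytes: ...

@dataclass
class DigitalHumanServer:
    asr: SpeechRecognizer
    llm: DialogueModel
    tts: EmotionalTTS

    def handle_turn(self, audio_in: bytes) -> bytes:
        # Each stage is an independent module, so it can be swapped or scaled
        # separately behind the client-server boundary.
        user_text = self.asr.transcribe(audio_in)
        reply_text, emotion = self.llm.respond(user_text)
        return self.tts.synthesize(reply_text, emotion)
```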

15 pages, 1431 KiB  
Article
MSBiLSTM-Attention: EEG Emotion Recognition Model Based on Spatiotemporal Feature Fusion
by Yahong Ma, Zhentao Huang, Yuyao Yang, Zuowen Chen, Qi Dong, Shanwen Zhang and Yuan Li
Biomimetics 2025, 10(3), 178; https://doi.org/10.3390/biomimetics10030178 - 13 Mar 2025
Viewed by 370
Abstract
Emotional states play a crucial role in shaping decision-making and social interactions, with sentiment analysis becoming an essential technology in human–computer emotional engagement, garnering increasing interest in artificial intelligence research. In EEG-based emotion analysis, the main challenges are feature extraction and classifier design, making the extraction of spatiotemporal information from EEG signals vital for effective emotion classification. Current methods largely depend on machine learning with manual feature extraction, while deep learning offers the advantage of automatic feature extraction and classification. Nonetheless, many deep learning approaches still necessitate manual preprocessing, which hampers accuracy and convenience. This paper introduces a novel deep learning technique that integrates multi-scale convolution and bidirectional long short-term memory networks with an attention mechanism for automatic EEG feature extraction and classification. By using raw EEG data, the method applies multi-scale convolutional neural networks and bidirectional long short-term memory networks to extract and merge features, selects key features via an attention mechanism, and classifies emotional EEG signals through a fully connected layer. The proposed model was evaluated on the SEED dataset for emotion classification. Experimental results demonstrate that this method effectively classifies EEG-based emotions, achieving classification accuracies of 99.44% for the three-class task and 99.85% for the four-class task in single validation, with average 10-fold-cross-validation accuracies of 99.49% and 99.70%, respectively. These findings suggest that the MSBiLSTM-Attention model is a powerful approach for emotion recognition. Full article
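A rough PyTorch sketch of the described architecture (multi-scale convolutions over raw EEG, a bidirectional LSTM, attention pooling, and a fully connected classifier) is shown below; all layer sizes and kernel widths are illustrative guesses rather than the paper's configuration.

```python
# Rough sketch of the described pipeline: multi-scale 1D convolutions,
# a bidirectional LSTM, attention pooling, and a fully connected classifier.
import torch
import torch.nn as nn

class MSBiLSTMAttention(nn.Module):
    def __init__(self, n_channels=62, n_classes=3, hidden=64):
        super().__init__()
        # Multi-scale temporal convolutions with different kernel sizes.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(n_channels, 32, k, padding=k // 2), nn.ReLU())
            for k in (3, 7, 15)
        ])
        self.bilstm = nn.LSTM(32 * 3, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # scores each time step
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                          # x: (batch, channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # merge scales
        feats = feats.transpose(1, 2)              # (batch, time, features)
        seq, _ = self.bilstm(feats)
        weights = torch.softmax(self.attn(seq), dim=1)            # attention
        pooled = (weights * seq).sum(dim=1)        # weighted temporal pooling
        return self.fc(pooled)

model = MSBiLSTMAttention()
logits = model(torch.randn(8, 62, 200))           # dummy batch of EEG windows
print(logits.shape)                               # torch.Size([8, 3])
```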

18 pages, 913 KiB  
Article
Improving Stuttering Through Augmented Multisensory Feedback Stimulation
by Giovanni Muscarà, Alessandra Vergallito, Valentina Letorio, Gaia Iannaccone, Martina Giardini, Elena Randaccio, Camilla Scaramuzza, Cristina Russo, Maria Giovanna Scarale and Jubin Abutalebi
Brain Sci. 2025, 15(3), 246; https://doi.org/10.3390/brainsci15030246 - 25 Feb 2025
Viewed by 594
Abstract
Background/Objectives: Stuttering is a speech disorder involving fluency disruptions like repetitions, prolongations, and blockages, often leading to emotional distress and social withdrawal. Here, we present Augmented Multisensory Feedback Stimulation (AMFS), a novel personalized intervention to improve speech fluency in people who stutter (PWS). AMFS includes a five-day intensive phase aiming at acquiring new skills, plus a reinforcement phase designed to facilitate the transfer of these skills across different contexts and their automatization into effortless behaviors. The concept of our intervention derives from the prediction of the neurocomputational model Directions into Velocities of Articulators (DIVA). The treatment applies dynamic multisensory stimulation to disrupt PWS’ maladaptive over-reliance on sensory feedback mechanisms, promoting the emergence of participants’ natural voices. Methods: Forty-six PWS and a control group, including twenty-four non-stuttering individuals, participated in this study. Stuttering severity and physiological measures, such as heart rate and electromyographic activity, were recorded before and after the intensive phase and during the reinforcement stage in the PWS but only once in the controls. Results: The results showed a significant reduction in stuttering severity at the end of the intensive phase, which was maintained during the reinforcement training. Crucially, worse performance was found in PWS than in the controls at baseline but not after the intervention. In the PWS, physiological signals showed a reduction in activity during the training phases compared to baseline. Conclusions: Our findings show that AMFS provides a promising approach to enhancing speech fluency. Future studies should clarify the mechanisms underlying such intervention and assess whether effects persist after the treatment conclusion. Full article
(This article belongs to the Special Issue Latest Research on the Treatments of Speech and Language Disorders)

30 pages, 8759 KiB  
Article
Identifying Novel Emotions and Wellbeing of Horses from Videos Through Unsupervised Learning
by Aarya Bhave, Emily Kieson, Alina Hafner and Peter A. Gloor
Sensors 2025, 25(3), 859; https://doi.org/10.3390/s25030859 - 31 Jan 2025
Viewed by 560
Abstract
This research applies unsupervised learning on a large original dataset of horses in the wild to identify previously unidentified horse emotions. We construct a novel, high-quality, diverse dataset of 3929 images consisting of five wild horse breeds worldwide at different geographical locations. We base our analysis on the seven Panksepp emotions of mammals “Exploring”, “Sadness”, “Playing”, “Rage”, “Fear”, “Affectionate” and “Lust”, along with one additional emotion “Pain” which has been shown to be highly relevant for horses. We apply the contrastive learning framework MoCo (Momentum Contrast for Unsupervised Visual Representation Learning) on our dataset to predict the seven Panksepp emotions and “Pain” using unsupervised learning. We significantly modify the MoCo framework, building a custom downstream classifier network that connects with a frozen CNN encoder that is pretrained using MoCo. Our method allows the encoder network to learn similarities and differences within image groups on its own without labels. The clusters thus formed are indicative of deeper nuances and complexities within a horse’s mood, which can possibly hint towards the existence of novel and complex equine emotions. Full article
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
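The downstream stage described above, a small trainable classifier on top of a frozen, MoCo-pretrained encoder, can be sketched as follows; the ResNet-50 backbone stand-in, the hypothetical checkpoint path, and the head sizes are assumptions for illustration.

```python
# Sketch of the downstream stage: a frozen CNN encoder (a ResNet-50 stand-in
# for the MoCo-pretrained backbone) feeding a small trainable classifier over
# the eight emotion categories named in the abstract.
import torch
import torch.nn as nn
from torchvision.models import resnet50

EMOTIONS = ["Exploring", "Sadness", "Playing", "Rage",
            "Fear", "Affectionate", "Lust", "Pain"]

encoder = resnet50(weights=None)
encoder.fc = nn.Identity()            # keep the 2048-d embedding only
# encoder.load_state_dict(torch.load("moco_pretrained.pt"))  # hypothetical checkpoint
encoder.eval()
for p in encoder.parameters():        # frozen: only the head below is trained
    p.requires_grad = False

classifier = nn.Sequential(
    nn.Linear(2048, 256), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(256, len(EMOTIONS)),
)

images = torch.randn(4, 3, 224, 224)  # dummy batch
with torch.no_grad():
    embeddings = encoder(images)
logits = classifier(embeddings)
print(logits.shape)                   # torch.Size([4, 8])
```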

27 pages, 2401 KiB  
Systematic Review
Approach–Avoidance Bias in Virtual and Real-World Simulations: Insights from a Systematic Review of Experimental Setups
by Aitana Grasso-Cladera, John Madrid-Carvajal, Sven Walter and Peter König
Brain Sci. 2025, 15(2), 103; https://doi.org/10.3390/brainsci15020103 - 22 Jan 2025
Cited by 1 | Viewed by 857
Abstract
Background: Approach and avoidance bias (AAB) describes automatic behavioral tendencies to react toward environmental stimuli regarding their emotional valence. Traditional setups have provided evidence but often lack ecological validity. The study of the AAB in naturalistic contexts has recently increased, revealing significant methodological challenges. This systematic review evaluates the use of virtual reality (VR) and real-world setups to study the AAB, summarizing methodological innovations and challenges. Methods: We systematically reviewed peer-reviewed articles employing VR and real-world setups to investigate the AAB. We analyzed experimental designs, stimuli, response metrics, and technical aspects to assess their alignment with research objectives and identify limitations. Results: This review included 14 studies revealing diverse methodologies, stimulus types, and novel behavioral responses, highlighting significant variability in design strategies and methodological coherence. Several studies used traditional reaction time measures yet varied in their application of VR technology and participant interaction paradigms. Some studies showed discrepancies between simulated and natural bodily actions, while others showcased more integrated approaches that preserved their integrity. Only a minority of studies included control conditions or acquired (neuro)physiological data. Conclusions: VR offers a potential ecological setup for studying the AAB, enabling dynamic and immersive interactions. Our results underscore the importance of establishing a coherent framework for investigating the AAB tendencies using VR. Addressing the foundational challenges of developing baseline principles that guide VR-based designs to study the AAB within naturalistic contexts is essential for advancing the AAB research and application. This will ultimately contribute to more reliable and reproducible experimental paradigms and develop effective interventions that help individuals recognize and change their biases, fostering more balanced behaviors. Full article
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)

11 pages, 586 KiB  
Article
A Time Course Analysis of the Conceptual and Affective Meanings of Words
by Dandan Jia, Ling Pan, Mei Chen and Zhijin Zhou
Behav. Sci. 2025, 15(1), 69; https://doi.org/10.3390/bs15010069 - 15 Jan 2025
Viewed by 572
Abstract
Words are the basic units of language and vital for comprehending the language system. Lexical processing research has always focused on either conceptual or affective word meaning. Previous studies have indirectly compared the conceptual and affective meanings of words. This study used emotion-laden words, a special type of dual-meaning word, to directly compare the time course of processing conceptual and affective word meanings. Free association was applied in Experiment 1 to investigate the time course of conceptual and affective meanings in dual-meaning words. The results showed that conceptual-meaning processing was superior to affective-meaning processing. In Experiment 2, the semantic/affective priming paradigm was used to directly compare the time courses of processing conceptual and affective word meanings by manipulating stimulus onset asynchrony (SOA) in different ways. The results showed that semantic and affective priming effects could be obtained under short SOA conditions, with no differences between them. Consistent with Experiment 1, only the semantic priming effect was observed in the long SOA condition. These findings suggest that the conceptual and affective meanings of words have different time courses. The conceptual meaning of words includes automatic and controlled processing, whereas the affective meaning mainly involves automatic processing. Full article
(This article belongs to the Section Cognition)

24 pages, 3261 KiB  
Article
A Video-Based Cognitive Emotion Recognition Method Using an Active Learning Algorithm Based on Complexity and Uncertainty
by Hongduo Wu, Dong Zhou, Ziyue Guo, Zicheng Song, Yu Li, Xingzheng Wei and Qidi Zhou
Appl. Sci. 2025, 15(1), 462; https://doi.org/10.3390/app15010462 - 6 Jan 2025
Viewed by 700
Abstract
The cognitive emotions of individuals during tasks largely determine the success or failure of those tasks in fields such as the military, medical, and industrial domains. Facial video data can carry more emotional information than static images because emotional expression is a temporal process. Video-based Facial Expression Recognition (FER) has received increasing attention from researchers in recent years. However, due to the high cost of labeling and training on video samples, feature extraction is inefficient and ineffective, which leads to low accuracy and poor real-time performance. In this paper, a cognitive emotion recognition method based on video data is proposed, in which 49 emotion description points were initially defined, and the spatial–temporal features of cognitive emotions were extracted from the video data through a feature extraction method that combines geodesic distances and sample entropy. Then, an active learning algorithm based on complexity and uncertainty was proposed to automatically select the most valuable samples, thereby reducing the cost of sample labeling and model training. Finally, the effectiveness, superiority, and real-time performance of the proposed method were verified using the MMI Facial Expression Database and data collected in real time. Through comparisons and testing, the proposed method showed satisfactory real-time performance and higher accuracy, which can effectively support the development of a real-time monitoring system for cognitive emotions. Full article
(This article belongs to the Special Issue Advanced Technologies and Applications of Emotion Recognition)
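The sample-selection idea, combining prediction uncertainty with a per-sample complexity score, can be sketched as below. The equal weighting, the dummy probability and complexity arrays, and the helper names are assumptions, with sample entropy standing in as the complexity measure named in the abstract.

```python
# Illustrative sketch of selecting the most valuable unlabeled clips by
# combining prediction uncertainty (entropy of class probabilities) with a
# per-sample complexity score.
import numpy as np

def predictive_entropy(probs):
    """Uncertainty of each sample from its predicted class probabilities."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=1)

def select_samples(probs, complexity, k=10, w_unc=0.5, w_cmp=0.5):
    """Pick indices of the k samples with the highest combined score."""
    unc = predictive_entropy(probs)
    # Normalize both terms to [0, 1] so the weights are comparable.
    unc = (unc - unc.min()) / (np.ptp(unc) + 1e-12)
    cmp_ = (complexity - complexity.min()) / (np.ptp(complexity) + 1e-12)
    score = w_unc * unc + w_cmp * cmp_
    return np.argsort(score)[::-1][:k]

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=100)   # dummy softmax outputs
complexity = rng.random(100)                  # e.g. sample-entropy values
print(select_samples(probs, complexity, k=5))
```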

22 pages, 872 KiB  
Article
The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour
by Sharifa Alghowinem, Sabrina Caldwell, Ibrahim Radwan, Michael Wagner and Tom Gedeon
Information 2025, 16(1), 6; https://doi.org/10.3390/info16010006 - 26 Dec 2024
Viewed by 693
Abstract
Detecting deceptive behaviour for surveillance and border protection is critical for a country’s security. With the advancement of technology in relation to sensors and artificial intelligence, recognising deceptive behaviour could be performed automatically. Following the success of affective computing in emotion recognition from verbal and nonverbal cues, we aim to apply a similar concept for deception detection. Recognising deceptive behaviour has been attempted; however, only a few studies have analysed this behaviour from gait and body movement. This research involves a multimodal approach for deception detection from gait, where we fuse features extracted from body movement behaviours from a video signal, acoustic features from walking steps from an audio signal, and the dynamics of walking movement using an accelerometer sensor. Using the video recording of walking from the Whodunnit deception dataset, which contains 49 subjects performing scenarios that elicit deceptive behaviour, we conduct multimodal two-category (guilty/not guilty) subject-independent classification. The classification results obtained reached an accuracy of up to 88% through feature fusion, with an average of 60% from both single and multimodal signals. Analysing body movement using single modality showed that the visual signal had the highest performance followed by the accelerometer and acoustic signals. Several fusion techniques were explored, including early, late, and hybrid fusion, where hybrid fusion not only achieved the highest classification results, but also increased the confidence of the results. Moreover, using a systematic framework for selecting the most distinguishing features of guilty gait behaviour, we were able to interpret the performance of our models. From these baseline results, we can conclude that pattern recognition techniques could help in characterising deceptive behaviour, where future work will focus on exploring the tuning and enhancement of the results and techniques. Full article
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)
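A toy sketch of the late-fusion variant mentioned above follows: one classifier per modality with the guilty probabilities averaged. The feature arrays are random placeholders and the simple holdout split is an assumption; the paper's own features and subject-independent protocol are not reproduced here.

```python
# Toy sketch of late fusion for the guilty / not-guilty decision: one
# classifier per modality, predicted probabilities averaged.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 49                                        # subjects, as in the abstract
y = rng.integers(0, 2, size=n)                # 0 = not guilty, 1 = guilty
modalities = {
    "video": rng.normal(size=(n, 30)),
    "audio": rng.normal(size=(n, 20)),
    "accelerometer": rng.normal(size=(n, 12)),
}

# Late fusion: train each modality separately, then average probabilities.
train, test = np.arange(0, 40), np.arange(40, n)
probas = []
for name, X in modalities.items():
    clf = SVC(probability=True).fit(X[train], y[train])
    probas.append(clf.predict_proba(X[test])[:, 1])

fused = np.mean(probas, axis=0)               # averaged guilty probability
y_pred = (fused >= 0.5).astype(int)
print("fused accuracy:", (y_pred == y[test]).mean())
```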

17 pages, 6430 KiB  
Article
The Potential for High-Priority Care Based on Pain Through Facial Expression Detection with Patients Experiencing Chest Pain
by Hsiang Kao, Rita Wiryasaputra, Yo-Yun Liao, Yu-Tse Tsan, Wei-Min Chu, Yi-Hsuan Chen, Tzu-Chieh Lin and Chao-Tung Yang
Diagnostics 2025, 15(1), 17; https://doi.org/10.3390/diagnostics15010017 - 25 Dec 2024
Viewed by 964
Abstract
Background and Objective: Cardiovascular disease (CVD), one of the chronic non-communicable diseases (NCDs), is defined as a cardiac and vascular disorder that includes coronary heart disease, heart failure, peripheral arterial disease, cerebrovascular disease (stroke), congenital heart disease, rheumatic heart disease, and elevated blood pressure (hypertension). Having CVD increases the mortality rate. Emotional stress, an indirect indicator associated with CVD, can often manifest through facial expressions. Chest pain or chest discomfort is one of the symptoms of a heart attack. The golden hour of chest pain influences the occurrence of brain cell death; thus, saving people with chest discomfort during observation is a crucial and urgent issue. Moreover, a limited number of emergency care (ER) medical personnel serve unscheduled outpatients. In this study, a computer-based automatic chest pain detection assistance system is developed using facial expressions to improve patient care services and minimize heart damage. Methods: The You Only Look Once (YOLO) model, as a deep learning method, detects and recognizes the position of an object simultaneously. A series of YOLO models were employed for pain detection through facial expression. Results: The YOLOv4 and YOLOv6 performed better than YOLOv7 in facial expression detection with patients experiencing chest pain. The accuracy of YOLOv4 and YOLOv6 achieved 80–100%. Even though there are similarities in attaining the accuracy values, the training time for YOLOv6 is faster than YOLOv4. Conclusion: By performing this task, a physician can prioritize the best treatment plan, reduce the extent of cardiac damage in patients, and improve the effectiveness of the golden treatment time. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
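As one possible way to run a trained detector over triage video, the sketch below loops through frames and counts confident "pain" detections. It uses OpenCV and the Ultralytics YOLO API with a made-up weights file, which is an assumption about tooling; the paper itself benchmarks YOLOv4, YOLOv6, and YOLOv7 rather than this exact stack.

```python
# Hypothetical sketch of screening video frames with a trained YOLO detector
# for pain-related facial expressions; weights file and class name are made up.
import cv2
from ultralytics import YOLO

model = YOLO("pain_expression_yolo.pt")      # hypothetical trained weights
cap = cv2.VideoCapture("patient_clip.mp4")   # hypothetical triage video

flagged_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    # Count detections whose class is the "pain" expression and whose
    # confidence is high enough to matter for triage.
    for box in results.boxes:
        cls_name = results.names[int(box.cls)]
        if cls_name == "pain" and float(box.conf) > 0.6:
            flagged_frames += 1
cap.release()
print("frames with probable pain expression:", flagged_frames)
```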

18 pages, 1651 KiB  
Article
Sentiment Analysis of Product Reviews Using Machine Learning and Pre-Trained LLM
by Pawanjit Singh Ghatora, Seyed Ebrahim Hosseini, Shahbaz Pervez, Muhammad Javed Iqbal and Nabil Shaukat
Big Data Cogn. Comput. 2024, 8(12), 199; https://doi.org/10.3390/bdcc8120199 - 23 Dec 2024
Cited by 1 | Viewed by 3613
Abstract
Sentiment analysis via artificial intelligence, i.e., machine learning and large language models (LLMs), is a pivotal tool that classifies sentiments within texts as positive, negative, or neutral. It enables computers to automatically detect and interpret emotions from textual data, covering a spectrum of feelings without direct human intervention. Sentiment analysis is integral to marketing research, helping to gauge consumer emotions and opinions across various sectors. Its applications span analyzing movie reviews, monitoring social media, evaluating product feedback, assessing employee sentiments, and identifying hate speech. This study explores the application of both traditional machine learning and pre-trained LLMs for automated sentiment analysis of customer product reviews. The motivation behind this work lies in the demand for a more nuanced understanding of consumer sentiments that can drive data-informed business decisions. In this research, we applied machine learning-based classifiers, i.e., Random Forest, Naive Bayes, and Support Vector Machine, alongside the GPT-4 model to benchmark their effectiveness for sentiment analysis. The traditional models showed better results and efficiency in processing short, concise text, with SVM standing out in classifying the sentiment of short comments. However, GPT-4 showed better results with more detailed texts, capturing subtle sentiments with higher precision, recall, and F1 scores and uniquely identifying mixed sentiments that the simpler models missed. In conclusion, this study shows that LLMs outperform traditional models in context-rich sentiment analysis by providing not only accurate sentiment classification but also insightful explanations. These results make LLMs a superior tool for customer-centric businesses, helping actionable insights to be derived from textual data. Full article
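The traditional-machine-learning side of this comparison can be illustrated with a TF-IDF pipeline over the three named classifiers; the toy reviews and labels are invented, and the GPT-4 half of the benchmark is omitted.

```python
# Minimal sketch of the traditional-ML side of the comparison: TF-IDF features
# with Random Forest, Naive Bayes, and SVM classifiers on toy reviews.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

reviews = [
    "Great battery life and fast shipping",
    "Stopped working after a week, very disappointed",
    "Does the job, nothing special",
    "Absolutely love this product",
]
labels = ["positive", "negative", "neutral", "positive"]

classifiers = {
    "random_forest": RandomForestClassifier(n_estimators=100),
    "naive_bayes": MultinomialNB(),
    "svm": LinearSVC(),
}

for name, clf in classifiers.items():
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    pipeline.fit(reviews, labels)
    print(name, pipeline.predict(["terrible quality, would not buy again"]))
```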

20 pages, 5165 KiB  
Article
Emotion Recognition Model of EEG Signals Based on Double Attention Mechanism
by Yahong Ma, Zhentao Huang, Yuyao Yang, Shanwen Zhang, Qi Dong, Rongrong Wang and Liangliang Hu
Brain Sci. 2024, 14(12), 1289; https://doi.org/10.3390/brainsci14121289 - 21 Dec 2024
Viewed by 1093
Abstract
Background: Emotions play a crucial role in people’s lives, profoundly affecting their cognition, decision-making, and interpersonal communication. Emotion recognition based on brain signals has become a significant challenge in the fields of affective computing and human-computer interaction. Methods: To address the inaccurate feature extraction and low accuracy of existing deep learning models in emotion recognition, this paper proposes a multi-channel automatic classification model for emotion EEG signals named DACB, which is based on dual attention mechanisms, convolutional neural networks, and bidirectional long short-term memory networks. DACB extracts features in both temporal and spatial dimensions, incorporating not only convolutional neural networks but also SE attention mechanism modules for learning the importance of different channel features, thereby enhancing the network’s performance. DACB also introduces dot product attention mechanisms to learn the importance of spatial and temporal features, effectively improving the model’s accuracy. Results: The accuracy of this method in single-shot validation tests on the SEED-IV and DREAMER (Valence-Arousal-Dominance three-classification) datasets is 99.96% and 87.52%, 90.06%, and 89.05%, respectively. In 10-fold cross-validation tests, the accuracy is 99.73% and 84.26%, 85.40%, and 85.02%, outperforming other models. Conclusions: The DACB model achieves high accuracy in emotion classification tasks, demonstrating outstanding performance and generalization ability and providing new directions for future research in EEG signal recognition. Full article
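The SE (squeeze-and-excitation) channel-attention module the abstract refers to can be sketched as follows for 1D EEG feature maps; the reduction ratio and tensor shapes are illustrative, not the paper's settings.

```python
# Sketch of a squeeze-and-excitation (SE) block of the kind the abstract says
# DACB uses to weight EEG channel features.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool1d(1)        # global temporal average
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (batch, channels, time)
        w = self.squeeze(x).squeeze(-1)                # (batch, channels)
        w = self.excite(w).unsqueeze(-1)               # per-channel importance
        return x * w                                   # reweighted features

features = torch.randn(16, 64, 128)                    # dummy EEG feature maps
print(SEBlock1d(64)(features).shape)                   # torch.Size([16, 64, 128])
```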

16 pages, 285 KiB  
Article
Impact of Automation Level of Dairy Farms in Northern and Central Germany on Dairy Cattle Welfare
by Lianne Lavrijsen-Kromwijk, Susanne Demba, Ute Müller and Sandra Rose
Animals 2024, 14(24), 3699; https://doi.org/10.3390/ani14243699 - 21 Dec 2024
Viewed by 1444
Abstract
An increasing number of automation technologies for dairy cattle farming, including automatic milking, feeding, manure removal and bedding, are now commercially available. The effects of these technologies on individual aspects of animal welfare have already been explored to some extent. However, as of now, there are no studies that analyze the impact of increasing farm automation through various combinations of these technologies. The objective of this study was to examine potential correlations between welfare indicators from the Welfare Quality® Assessment protocol and dairy farms with varying degrees of automation. To achieve this, 32 trial farms in Northern and Central Germany were categorized into varying automation levels using a newly developed classification system. The Welfare Quality® Assessment protocol was used to conduct welfare assessments on all participating farms. Using analysis of variance (ANOVA), overall welfare scores and individual measures from the protocol were compared across farms with differing automation levels. No significant differences were observed in overall welfare scores, suggesting that the impact of automation does not exceed other farm-related factors influencing animal wellbeing, such as housing environment or management methods. However, significant effects of milking, feeding, and bedding systems on the appropriate behavior of cattle were observed. Higher levels of automation had a positive impact on the human–animal relationship and led to positive emotional states. Moreover, farms with higher automation levels had significantly lower scores for the prevalence of severe lameness and dirtiness of lower legs. It could be concluded that a higher degree of automation could help to improve animal welfare on dairy farms. Full article
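The ANOVA comparison described above can be illustrated with a one-way test across automation-level groups; the welfare scores below are invented placeholders, not the study's data.

```python
# Toy sketch of a one-way ANOVA comparing Welfare Quality scores across farms
# grouped by automation level; all values are invented.
from scipy.stats import f_oneway

low_automation    = [62.1, 58.4, 65.0, 60.3, 59.7]
medium_automation = [63.5, 61.2, 66.8, 64.0, 62.9]
high_automation   = [67.2, 64.8, 69.1, 66.3, 65.5]

stat, p_value = f_oneway(low_automation, medium_automation, high_automation)
print(f"F = {stat:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```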
22 pages, 5055 KiB  
Article
Studying Pupil-Size Changes as a Function of Task Demands and Emotional Content in a Clinical Interview Situation
by Daniel Gugerell, Benedikt Gollan, Moritz Stolte and Ulrich Ansorge
Appl. Sci. 2024, 14(24), 11714; https://doi.org/10.3390/app142411714 - 16 Dec 2024
Viewed by 759
Abstract
The human pupil changes size in response to processing demands or cognitive (work)load and emotional processing. Therefore, it is important to test if automatic tracking of cognitive load by pupil-size measurement is possible under conditions of varying levels of emotion-related processing. Here, we investigated this question in an experiment simulating a highly relevant applied context in which cognitive load and emotional processing can vary independently: a clinical interview. Our participants conducted a live clinical interview via computer monitor with a confederate as an interviewee. We used eye-tracking and automatic extraction of participants’ pupil size to monitor cognitive load (single vs. dual tasks, between participants), while orthogonally varying the emotional content of the interviewee’s answers (neutral vs. negative, between participants). We ensured participants’ processing of the verbal content of the interview by asking all participants to report on the content of the interview in a subsequent memory test and by asking them to discriminate if the answers of the interviewee referred to only herself or to somebody else (too). In the dual-task condition, participants had to monitor additionally if the facial emotional expressions of the interviewee matched the content of her verbal responses. Results showed that pupil-size extraction reliably discriminated between high and low cognitive load, albeit to a lower degree under negative emotional content conditions. This was possible with an algorithmic online measure of cognitive load as well as with a conventional pupil-size measure, providing proof of the external validity of the algorithm/online measure. Full article
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)
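The core load comparison, baseline-corrected pupil size under single- versus dual-task conditions, can be sketched with a simple independent-samples t-test; all values below are invented, and this is far simpler than the algorithmic online measure the study evaluates.

```python
# Simple sketch: baseline-corrected pupil diameters under single-task (low
# load) versus dual-task (high load) conditions, compared with a t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Baseline-corrected pupil diameter change in millimetres per trial.
single_task = rng.normal(loc=0.10, scale=0.05, size=30)   # low cognitive load
dual_task   = rng.normal(loc=0.25, scale=0.07, size=30)   # high cognitive load

stat, p_value = ttest_ind(dual_task, single_task)
print(f"mean single = {single_task.mean():.2f} mm, "
      f"mean dual = {dual_task.mean():.2f} mm, p = {p_value:.3g}")
```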