Search Results (152)

Search Parameters:
Keywords = fear of happiness

21 pages, 406 KB  
Article
Investor Emotions and Cognitive Biases in a Bearish Market Simulation: A Qualitative Study
by Alain Finet, Kevin Kristoforidis and Julie Laznicka
J. Risk Financial Manag. 2025, 18(9), 493; https://doi.org/10.3390/jrfm18090493 - 4 Sep 2025
Abstract
Our paper investigates how emotions and cognitive biases shape, or are perceived to shape, small investors’ decisions in a bearish market. Using semi-structured interviews and a focus group, we analyze the behavior of eight management science students engaged in a three-day trading simulation with virtual portfolios. Our findings show that emotions are active forces influencing judgment. Fear, often escalating into anxiety, was pervasive in response to losses and uncertainty, while frustration and powerlessness frequently led to decision paralysis. Early successes sometimes generated happiness and pride but also resulted in overconfidence and excessive risk-taking. These emotional dynamics contributed to the emergence of cognitive biases such as loss aversion, anchoring, confirmation bias, overconfidence, familiarity bias and herd behavior. Emotions often acted as precursors to biases, which then translated into specific decisions—such as holding losing positions, impulsive “revenge” trades or persisting with unsuitable financial strategies. In some cases, strong emotions bypassed cognitive biases and directly drove behavior. Social comparison through portfolio rankings also moderated responses, offering both comfort and additional pressure. By applying a qualitative perspective—not commonly used in behavioral finance—our study highlights the dynamic chain of emotions → biases → decisions and the role of social context. While limited by sample size and the short simulation period, this research provides empirical insights into how psychological mechanisms shape investment behavior under stress, offering avenues for future quantitative studies. Full article
(This article belongs to the Special Issue Behaviour in Financial Decision-Making)

16 pages, 2093 KB  
Article
Neuromarketing and Health Marketing Synergies: A Protection Motivation Theory Approach to Breast Cancer Screening Advertising
by Dimitra Skandali, Ioanna Yfantidou and Georgios Tsourvakas
Information 2025, 16(9), 715; https://doi.org/10.3390/info16090715 - 22 Aug 2025
Viewed by 361
Abstract
This study investigates the psychological and emotional mechanisms underlying women’s reactions to breast cancer awareness advertisements through the dual lens of Protection Motivation Theory (PMT) and neuromarketing methods, addressing a gap in empirical research on the integration of biometric and cognitive approaches in health marketing. Utilizing a lab-based experiment with 78 women aged 40 and older, we integrated Facial Expression Analysis using Noldus FaceReader 9.0 with semi-structured post-exposure interviews. Six manipulated health messages were embedded within a 15 min audiovisual sequence, with each message displayed for 5 s. Quantitative analysis revealed that Ads 2 and 5 elicited the highest mean fear scores (0.45 and 0.42) and surprise scores (0.35 and 0.33), while Ad 4 generated the highest happiness score (0.31) linked to coping appraisal. Emotional expressions—including fear, sadness, surprise, and neutrality—were recorded in real time and analyzed quantitatively. The facial analysis data were triangulated with thematic insights from interviews, targeting perceptions of threat severity, vulnerability, response efficacy, and self-efficacy. The findings confirm that fear-based appeals are only effective when paired with actionable coping strategies, providing empirical support for PMT’s dual-process model. By applying mixed-methods analysis to the evaluation of health messages, this study makes three contributions: (1) it extends PMT by validating the emotional–cognitive integration framework through biometric–qualitative convergence; (2) it offers practical sequencing principles for combining threat and coping cues; and (3) it proposes cross-modal methodology guidelines for future health campaigns. Full article

34 pages, 2061 KB  
Article
Analyzing Communication and Migration Perceptions Using Machine Learning: A Feature-Based Approach
by Andrés Tirado-Espín, Ana Marcillo-Vera, Karen Cáceres-Benítez, Diego Almeida-Galárraga, Nathaly Orozco Garzón, Jefferson Alexander Moreno Guaicha and Henry Carvajal Mora
Journal. Media 2025, 6(3), 112; https://doi.org/10.3390/journalmedia6030112 - 18 Jul 2025
Viewed by 742
Abstract
Public attitudes toward immigration in Spain are influenced by media narratives, individual traits, and emotional responses. This study examines how portrayals of Arab and African immigrants may be associated with emotional and attitudinal variation. We address three questions: (1) How are different types of media coverage and social environments linked to emotional reactions? (2) What emotions are most frequently associated with these portrayals? and (3) How do political orientation and media exposure relate to changes in perception? A pre/post media exposure survey was conducted with 130 Spanish university students. Machine learning models (decision tree, random forest, and support vector machine) were used to classify attitudes and identify predictive features. Emotional variables such as fear and happiness, as well as perceptions of media clarity and bias, emerged as key features in classification models. Political orientation and prior media experience were also linked to variation in responses. These findings suggest that emotional and contextual factors may be relevant in understanding public perceptions of immigration. The use of interpretable models contributes to a nuanced analysis of media influence and highlights the value of transparent computational approaches in migration research. Full article

21 pages, 34246 KB  
Article
A Multi-Epiphysiological Indicator Dog Emotion Classification System Integrating Skin and Muscle Potential Signals
by Wenqi Jia, Yanzhi Hu, Zimeng Wang, Kai Song and Boyan Huang
Animals 2025, 15(13), 1984; https://doi.org/10.3390/ani15131984 - 5 Jul 2025
Viewed by 463
Abstract
This study introduces an innovative dog emotion classification system that integrates four non-invasive physiological indicators—skin potential (SP), muscle potential (MP), respiration frequency (RF), and voice pattern (VP)—with the extreme gradient boosting (XGBoost) algorithm. A four-breed dataset was meticulously constructed by recording and labeling physiological signals from dogs exposed to four fundamental emotional states: happiness, sadness, fear, and anger. Comprehensive feature extraction (time-domain, frequency-domain, nonlinearity) was conducted for each signal modality, and inter-emotional variance was analyzed to establish discriminative patterns. Four machine learning algorithms—Neural Networks (NN), Support Vector Machines (SVM), Gradient Boosting Decision Trees (GBDT), and XGBoost—were trained and evaluated, with XGBoost achieving the highest classification accuracy of 90.54%. Notably, this is the first study to integrate a fusion of two complementary electrophysiological indicators—skin and muscle potentials—into a multi-modal dataset for canine emotion recognition. Further interpretability analysis using Shapley Additive exPlanations (SHAP) revealed skin potential and voice pattern features as the most contributive to model performance. The proposed system demonstrates high accuracy, efficiency, and portability, laying a robust groundwork for future advancements in cross-species affective computing and intelligent animal welfare technologies. Full article
(This article belongs to the Special Issue Animal–Computer Interaction: New Horizons in Animal Welfare)
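The abstract above mentions time-domain feature extraction from each physiological channel before classification. A minimal Python sketch of that step, assuming each channel arrives as a plain list of samples; the window values and the particular feature set (mean, RMS, variance, mean absolute first difference) are illustrative choices, not the authors' exact pipeline:

```python
import math

def time_domain_features(signal):
    """Compute simple time-domain features for one signal window:
    mean, RMS energy, variance, and mean absolute first difference."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    var = sum((x - mean) ** 2 for x in signal) / n
    mad = sum(abs(signal[i] - signal[i - 1]) for i in range(1, n)) / (n - 1)
    return {"mean": mean, "rms": rms, "variance": var, "mean_abs_diff": mad}

# Illustrative skin-potential window (arbitrary units)
window = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
features = time_domain_features(window)
```

Feature vectors like this, concatenated across the SP, MP, RF, and VP channels, would then feed a gradient-boosted classifier such as XGBoost.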

15 pages, 1027 KB  
Article
Parent–Child Eye Gaze Congruency to Emotional Expressions Mediated by Child Aesthetic Sensitivity
by Antonios I. Christou, Kostas Fanti, Ioannis Mavrommatis and Georgia Soursou
Children 2025, 12(7), 839; https://doi.org/10.3390/children12070839 - 25 Jun 2025
Cited by 1 | Viewed by 472
Abstract
Background/Objectives: Sensory Processing Sensitivity (SPS), particularly its aesthetic subcomponent (Aesthetic Sensitivity; AES), has been linked to individual differences in emotional processing. This study examined whether parental visual attention to emotional facial expressions predicts corresponding attentional patterns in their children, and whether this intergenerational concordance is mediated by child AES and moderated by child empathy. Methods: A sample of 124 Greek Cypriot parent–child dyads (children aged 7–12 years) participated in an eye-tracking experiment. Both parents and children viewed static emotional facial expressions (angry, sad, fearful, happy). Parents also completed questionnaires assessing their child’s SPS, empathy (cognitive and affective), and emotional functioning. Regression analyses and moderated mediation models were employed to explore associations between parental and child gaze patterns. Results: Children’s fixation on angry eyes was significantly predicted by parental fixation duration on the same region, as well as by child AES and empathy levels. Moderated mediation analyses revealed that the association between parent and child gaze to angry eyes was significantly mediated by child AES. However, neither cognitive nor affective empathy significantly moderated this mediation effect. Conclusions: Findings suggest that child AES plays a key mediating role in the intergenerational transmission of attentional biases to emotional stimuli. While empathy was independently associated with children’s gaze behavior, it did not moderate the AES-mediated pathway. These results highlight the importance of trait-level child sensitivity in shaping shared emotional attention patterns within families. Full article
(This article belongs to the Section Global Pediatric Health)

18 pages, 2098 KB  
Article
Development and Validation of the Children’s Emotions Database (CED): Preschoolers’ Basic and Complex Facial Expressions
by Nadia Koltcheva and Ivo D. Popivanov
Children 2025, 12(7), 816; https://doi.org/10.3390/children12070816 - 21 Jun 2025
Cited by 1 | Viewed by 541
Abstract
Background. Emotions are a crucial part of our human nature. The recognition of emotions is an essential component of our social and emotional skills. Facial expressions serve as a key element in discerning others’ emotions. Different databases of images of facial emotion expressions exist worldwide; however, most of them are limited to only adult faces and include only the six basic emotions, as well as neutral faces, ignoring more complex emotional expressions. Here, we present the Children’s Emotions Database (CED), a novel repository featuring both basic and complex facial expressions captured from preschool-aged children. The CED is one of the first databases to include complex emotional expressions in preschoolers. Our aim was to develop such a database that can be used further for research and applied purposes. Methods. Three 6-year-old children (one female) were photographed while showing different facial emotional expressions. The photos were taken under standardized conditions. The children were instructed to express each of the following basic emotions: happiness, pleasant surprise, sadness, fear, anger, disgust; a neutral face; and four complex emotions: pride, guilt, compassion, and shame; this resulted in a total of eleven expressions for each child. Two photos per child were reviewed and selected for validation. The photo validation was performed with a sample of 104 adult raters (94 females; aged 19–70 years; M = 29.9; SD = 11.40) and a limited sample of 32 children at preschool age (17 girls; aged 4–7 years; M = 6.5; SD = 0.81). The validation consisted of two tasks—free emotion labeling and emotion recognition (with predefined labels). Recognition accuracy for each expression was calculated. Results and Conclusions. While basic emotions and neutral expressions were recognized with high accuracy, complex emotions were less accurately identified, consistent with the existing literature on the developmental challenges in recognizing such emotions. 
The current work is a promising new database of preschoolers’ facial expressions consisting of both basic and complex emotions. This database offers a valuable resource for advancing research in emotional development, educational interventions, and clinical applications tailored to early childhood. Full article
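The validation step above computes recognition accuracy per expression from raters' label choices. A minimal sketch of that calculation, assuming responses arrive as (expression shown, label chosen) pairs; the sample data here are invented for illustration, not taken from the CED validation:

```python
from collections import defaultdict

def recognition_accuracy(responses):
    """Per-expression recognition accuracy from (intended, chosen) label pairs."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for intended, chosen in responses:
        total[intended] += 1
        if chosen == intended:
            correct[intended] += 1
    return {emo: correct[emo] / total[emo] for emo in total}

# Hypothetical ratings: two raters each saw "happiness" and "pride" photos
ratings = [
    ("happiness", "happiness"), ("happiness", "happiness"),
    ("pride", "happiness"), ("pride", "pride"),
]
acc = recognition_accuracy(ratings)
```

A pattern like the one in this toy sample, where a complex emotion (pride) is confused with a basic one (happiness), mirrors the paper's finding that complex expressions are identified less accurately.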

25 pages, 1822 KB  
Article
Emotion Recognition from Speech in a Subject-Independent Approach
by Andrzej Majkowski and Marcin Kołodziej
Appl. Sci. 2025, 15(13), 6958; https://doi.org/10.3390/app15136958 - 20 Jun 2025
Cited by 1 | Viewed by 988
Abstract
The aim of this article is to critically and reliably assess the potential of current emotion recognition technologies for practical applications in human–computer interaction (HCI) systems. The study made use of two databases: one in English (RAVDESS) and another in Polish (EMO-BAJKA), both containing speech recordings expressing various emotions. The effectiveness of recognizing seven and eight different emotions was analyzed. A range of acoustic features, including energy features, mel-cepstral features, zero-crossing rate, fundamental frequency, and spectral features, were utilized to analyze the emotions in speech. Machine learning techniques such as convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and support vector machines with a cubic kernel (cubic SVMs) were employed in the emotion classification task. The research findings indicated that the effective recognition of a broad spectrum of emotions in a subject-independent approach is limited. However, significantly better results were obtained in the classification of paired emotions, suggesting that emotion recognition technologies could be effectively used in specific applications where distinguishing between two particular emotional states is essential. To ensure a reliable and accurate assessment of the emotion recognition system, care was taken to divide the dataset in such a way that the training and testing data contained recordings of completely different individuals. The highest classification accuracies for pairs of emotions were achieved for Angry–Fearful (0.8), Angry–Happy (0.86), Angry–Neutral (1.0), Angry–Sad (1.0), Angry–Surprise (0.89), Disgust–Neutral (0.91), and Disgust–Sad (0.96) in the RAVDESS. In the EMO-BAJKA database, the highest classification accuracies for pairs of emotions were for Joy–Neutral (0.91), Surprise–Neutral (0.80), Surprise–Fear (0.91), and Neutral–Fear (0.91). Full article
(This article belongs to the Special Issue New Advances in Applied Machine Learning)
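Two of the acoustic features listed above, zero-crossing rate and frame energy, are simple to compute directly. A stdlib-only Python sketch, assuming an audio frame arrives as a list of float samples; the normalization conventions (per-pair ZCR, mean-squared energy) are common choices but may differ from the authors' exact definitions:

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for i in range(1, len(frame))
        if (frame[i - 1] >= 0) != (frame[i] >= 0)
    )
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(x * x for x in frame) / len(frame)

# Illustrative 6-sample frame
frame = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2]
zcr = zero_crossing_rate(frame)
energy = short_time_energy(frame)
```

In practice such frame-level features are computed over a sliding window and stacked alongside mel-cepstral and spectral features before being passed to the CNN, LSTM, or SVM classifier.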

18 pages, 11091 KB  
Article
Dynamic Facial Emotional Expressions in Self-Presentation Predicted Self-Esteem
by Xinlei Zang and Juan Yang
Behav. Sci. 2025, 15(5), 709; https://doi.org/10.3390/bs15050709 - 21 May 2025
Cited by 1 | Viewed by 653
Abstract
There is a close relationship between self-esteem and emotions. However, most studies have relied on self-report measures, which primarily capture retrospective and generalized emotional tendencies, rather than spontaneous, momentary emotional expressions in real-time social interactions. Given that self-esteem also shapes how individuals regulate and express emotions in social contexts, it is crucial to examine whether and how self-esteem manifests in dynamic emotional expressions during self-presentation. In this study, we recorded the performances of 211 participants during a public self-presentation task using a digital video camera and measured their self-esteem scores with the Rosenberg Self-Esteem Scale. Facial Action Units (AUs) scores were extracted from each video frame using OpenFace, and four basic emotions—happiness, sadness, disgust, and fear—were quantified based on the basic emotion theory. Time-series analysis was then employed to capture the multidimensional dynamic features of these emotions. Finally, we applied machine learning and explainable AI to identify which dynamic emotional features were closely associated with self-esteem. The results indicate that all four basic emotions are closely associated with self-esteem. Therefore, this study introduces a new perspective on self-esteem assessment, highlighting the potential of nonverbal behavioral indicators as alternatives to traditional self-report measures. Full article
(This article belongs to the Section Social Psychology)
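The abstract above describes quantifying four basic emotions from OpenFace Action Unit (AU) scores per video frame. A minimal sketch of one way to do this, scoring each emotion as the mean intensity of a set of associated AUs; the AU-to-emotion mapping below loosely follows EMFACS-style conventions from basic emotion theory and is an assumption, not the paper's exact recipe:

```python
# Illustrative AU sets per emotion (EMFACS-style); the authors' sets may differ.
EMOTION_AUS = {
    "happiness": ["AU06", "AU12"],
    "sadness": ["AU01", "AU04", "AU15"],
    "disgust": ["AU09", "AU15"],
    "fear": ["AU01", "AU02", "AU04", "AU20"],
}

def emotion_scores(au_intensities):
    """Score each emotion as the mean intensity of its constituent AUs."""
    return {
        emotion: sum(au_intensities.get(au, 0.0) for au in aus) / len(aus)
        for emotion, aus in EMOTION_AUS.items()
    }

# Hypothetical AU intensities for one video frame
frame_aus = {"AU06": 2.0, "AU12": 3.0, "AU01": 0.5}
scores = emotion_scores(frame_aus)
```

Running such scores across all frames of a presentation yields the per-emotion time series on which the paper's dynamic features are computed.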

15 pages, 750 KB  
Article
Beyond Choice: Affective Representations of Economic and Moral Decisions
by Jongwan Kim and Chaery Park
Behav. Sci. 2025, 15(4), 558; https://doi.org/10.3390/bs15040558 - 21 Apr 2025
Viewed by 747
Abstract
Decision-making in economic and moral contexts involves complex affective processes that shape judgments of fairness, responsibility, and conflict resolution. While previous studies have primarily examined behavioral choices in economic games and moral dilemmas, less is known about the underlying affective structure of these decisions. This study investigated how individuals emotionally represent economic (ultimatum game) and moral (trolley dilemma) decision-making scenarios using multidimensional scaling (MDS) and classification. Participants rated their emotional responses, including positive (pleased, calm, happy, peaceful) and negative (irritated, angry, gloomy, sad, fearful, anxious) affective states, to 16 scenarios varying by game type, the presence or absence of conflict, and intensity. MDS revealed two primary affective dimensions: one distinguishing conflict from no-conflict scenarios, the other economic from moral scenarios. No-conflict–economic scenarios were strongly associated with positive affective responses, while the no-conflict–moral scenarios elicited heightened fear and anxiety rather than positive emotions. Increasing unfairness in the ultimatum game affected affective representation, while variations in the number of lives at stake in the trolley dilemma did not. Cross-participant classification analyses demonstrated that game type and conflict conditions could be reliably predicted from affective ratings, indicating systematic and shared emotional representations across participants. These findings suggest that economic and moral decisions evoke distinct affective structures, with fairness modulating conflict perception in economic contexts, while moral decisions remain affectively stable despite changes in intensity. Full article
(This article belongs to the Section Behavioral Economics)

16 pages, 235 KB  
Article
‘You Should Be Yourself’—Secondary Students’ Descriptions of Social Gender Demands
by Karin Bergman Rimbe, Helena Blomberg, Magnus L. Elfström, Sylvia Olsson and Gunnel Östlund
Children 2025, 12(4), 502; https://doi.org/10.3390/children12040502 - 14 Apr 2025
Viewed by 785
Abstract
Background/Objectives: Swedish schools are mandated to counteract gender norms that restrict students’ life opportunities. School personnel also bear the responsibility of fostering students’ democratic responsibilities and healthy behaviors, which is crucial not only for their mental wellbeing but also for their academic performance, as stressed by the European Commission. Aim: The purpose of the present study is to explore adolescents’ performativity of gender when discussing social barriers to mental and emotional wellbeing. Methods: Fifty adolescents were interviewed in small gender-divided groups, and the transcribed text was analyzed using thematic analysis. Theoretically, an interactionist perspective and gender-analytic discourses are applied. Results: Emotional barriers to mental wellbeing were identified, rooted in overly rigid gender norms. Boys describe challenging each other and the environment by using a social facade that includes “stoneface” and harsh language, seldom showing sadness, even among close friends. The girls’ facade includes maintaining a “happy face” and trying to be attractive. Both genders underline the need for belonging, and most of them fear social exclusion from peers. According to the interviewees, it is socially acceptable for girls to display most feelings, even mental difficulties such as anxiety or phobia, but among boys, gender norms still hinder them from showing emotional vulnerabilities such as sadness without risking exclusion. Conclusions: Young people’s emotional wellbeing needs to be further developed and included in the curriculum. It is time for adults to focus on boys’ sadness and depressive emotions, as well as girls’ aggressiveness and frankness rather than their appearance, to push the river of equality forward. Full article
(This article belongs to the Section Pediatric Mental Health)
25 pages, 4884 KB  
Article
The Effect of Emotional Intelligence on the Accuracy of Facial Expression Recognition in the Valence–Arousal Space
by Yubin Kim, Ayoung Cho, Hyunwoo Lee and Mincheol Whang
Electronics 2025, 14(8), 1525; https://doi.org/10.3390/electronics14081525 - 9 Apr 2025
Cited by 1 | Viewed by 1766
Abstract
Facial expression recognition (FER) plays a pivotal role in affective computing and human–computer interaction by enabling machines to interpret human emotions. However, conventional FER models often overlook individual differences in emotional intelligence (EI), which may significantly influence how emotions are perceived and expressed. This study investigates the effect of EI on facial expression recognition accuracy within the valence–arousal space. Participants were divided into high and low EI groups based on a composite score derived from the Tromsø Social Intelligence Scale and performance-based emotion tasks. Five deep learning models (EfficientNetV2-L/S, MaxViT-B/T, and VGG16) were trained on the AffectNet dataset and evaluated using facial expression data collected from participants. Emotional states were predicted as continuous valence and arousal values, which were then mapped onto discrete emotion categories for interpretability. The results indicated that individuals with higher EI achieved significantly greater recognition accuracy, particularly for emotions requiring contextual understanding (e.g., anger, sadness, and happiness), while fear was better recognized by individuals with lower EI. These findings highlight the role of emotional intelligence in modulating FER performance and suggest that integrating EI-related features into valence–arousal-based models could enhance the adaptiveness of affective computing systems. Full article
(This article belongs to the Special Issue AI for Human Collaboration)
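The abstract above describes mapping continuous valence–arousal predictions onto discrete emotion categories for interpretability. A minimal sketch of one such mapping, by circumplex quadrant; the four category labels and the zero thresholds are illustrative assumptions, and the paper's category set may be finer-grained:

```python
def quadrant_emotion(valence, arousal):
    """Map a continuous (valence, arousal) prediction in [-1, 1] x [-1, 1]
    to a coarse discrete category by circumplex quadrant."""
    if valence >= 0:
        return "happiness" if arousal >= 0 else "calm"
    return "anger" if arousal >= 0 else "sadness"

# Hypothetical model output: negative valence, high arousal
label = quadrant_emotion(-0.6, 0.7)
```

A discretization like this lets per-category recognition accuracy (e.g., for anger vs. fear) be compared between the high- and low-EI groups.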

25 pages, 2013 KB  
Article
The Development of Emotion Recognition Skills from Childhood to Adolescence
by Marialucia Cuciniello, Terry Amorese, Carl Vogel, Gennaro Cordasco and Anna Esposito
Eur. J. Investig. Health Psychol. Educ. 2025, 15(4), 56; https://doi.org/10.3390/ejihpe15040056 - 8 Apr 2025
Viewed by 1179
Abstract
This study investigates how the ability to recognize static facial emotional expressions changes over time, specifically through three developmental stages: childhood, preadolescence, and adolescence. A total of 301 Italian participants were involved and divided into three age groups: children (7–10 years), pre-adolescents (11–13 years), and adolescents (14–19 years). Participants completed an online emotional decoding task using images from the Child Affective Facial Expression (CAFE) database, depicting anger, disgust, fear, happiness, sadness, surprise, and neutrality, conveyed by children of different ethnicities (African American, Caucasian/European American, Latino, and Asian). Results indicated that female participants generally exhibited a higher emotion recognition accuracy than male participants. Among the emotions, happiness, surprise, and anger were the most accurately recognized, while fear was the least recognized. Adolescents demonstrated a better recognition of disgust compared to children, while pre-adolescents recognized neutrality more poorly than children and adolescents. Additionally, this study found that female facial expressions of disgust, sadness, and fear were more accurately recognized than male expressions, whereas male expressions of surprise and neutrality were better recognized than female expressions. Regarding the ethnicity of facial expressions, recognition accuracy varied with the ethnicity of the expresser depending on the emotion investigated, yielding a very heterogeneous pattern of results. Full article

18 pages, 1911 KB  
Article
Enhancing Embedded Space with Low–Level Features for Speech Emotion Recognition
by Lukasz Smietanka and Tomasz Maka
Appl. Sci. 2025, 15(5), 2598; https://doi.org/10.3390/app15052598 - 27 Feb 2025
Cited by 2 | Viewed by 622
Abstract
This work proposes an approach that builds a feature space by combining the representation obtained in the unsupervised learning process with manually selected features defining the prosody of the utterances. In the experiments, we used two time-frequency representations (Mel and CQT spectrograms) and the EmoDB and RAVDESS databases. As the results show, the proposed system improved the classification accuracy of both representations: 1.29% for CQT and 3.75% for Mel spectrogram compared to the typical CNN architecture for the EmoDB dataset, and 3.02% for CQT and 0.63% for Mel spectrogram in the case of RAVDESS. Additionally, the results present a significant increase of around 14% in classification performance for the happiness and disgust emotions using Mel spectrograms and around 20% for happiness and disgust using CQT in the case of the best models trained on EmoDB. On the other hand, in the case of models that achieved the highest result for the RAVDESS database, the most significant improvement was observed in the classification of a neutral state, around 16%, using the Mel spectrogram. For CQT representation, the most significant improvement occurred for fear and surprise, around 9%. Additionally, the average results for all prepared models showed the positive impact of the method used on the quality of classification of most emotional states. For the EmoDB database, the highest average improvement was observed for happiness—14.6%. For other emotions, it ranged from 1.2% to 8.7%. The only exception was the emotion of sadness, for which the average classification quality decreased by 1% when using the Mel spectrogram. In turn, for the RAVDESS database, the most significant improvement also occurred for happiness—7.5%, while for other emotions it ranged from 0.2% to 7.1%, except for disgust and calm, whose classification deteriorated for the Mel spectrogram and the CQT representation, respectively. Full article

21 pages, 1123 KB  
Article
Cognitive Mechanisms Underlying the Influence of Facial Information Processing on Estimation Performance
by Xinqi Huang, Xiaofan Zhou, Mingyi Xu, Zhihao Liu, Yilin Ma, Chuanlin Zhu and Dongquan Kou
Behav. Sci. 2025, 15(2), 212; https://doi.org/10.3390/bs15020212 - 14 Feb 2025
Viewed by 823
Abstract
This study aimed to investigate the roles of facial information processing and math anxiety in estimation performance. Across three experiments, participants completed a two-digit multiplication estimation task under the conditions of emotion judgment (Experiment 1), identity judgment (Experiment 2), and combined emotion and identity judgment (Experiment 3). In the estimation task, participants used either the down-up or up-down problem to select approximate answers. In Experiment 1, we found that negative emotions impair estimation performance, while positive and consistent emotions have a facilitating effect on estimation efficiency. In Experiment 2, we found that emotion and identity consistency interact with each other, and negative emotions actually promote estimation efficiency when identity is consistent. In Experiment 3, we found that emotion, identity consistency, and emotional consistency have complex interactions on estimation performance. Moreover, in most face-processing conditions, participants’ estimation performance is not affected by their level of math anxiety. However, in a small number of cases, mean proportions under happy and fearful conditions are negatively correlated with math anxiety. Full article
(This article belongs to the Section Cognition)

35 pages, 2088 KB  
Article
The Influence of Face Masks on Micro-Expression Recognition
by Yunqiu Zhang and Chuanlin Zhu
Behav. Sci. 2025, 15(2), 200; https://doi.org/10.3390/bs15020200 - 13 Feb 2025
Cited by 1 | Viewed by 1072
Abstract
This study aimed to explore the influence of various mask attributes on the recognition of micro-expressions (happy, neutral, and fear) and facial favorability under different background emotional conditions (happy, neutral, and fear). The participants were asked to complete an ME (micro-expression) recognition task, and the corresponding accuracy (ACC), reaction time (RT), and facial favorability were analyzed. Results: (1) Background emotions significantly impacted the RT and ACC in micro-expression recognition, with fear backgrounds hindering performance. (2) Mask wearing, particularly opaque ones, prolonged the RT but had little effect on the ACC. Transparent masks and non-patterned masks increased facial favorability. (3) There was a significant interaction between background emotions and mask attributes; negative backgrounds amplified the negative effects of masks on recognition speed and favorability, while positive backgrounds mitigated these effects. This study provides insights into how masks influence micro-expression recognition, crucial for future research in this area. Full article
