Brief Report

Event-Related Potentials during Verbal Recognition of Naturalistic Neutral-to-Emotional Dynamic Facial Expressions

by Vladimir Kosonogov 1,*, Ekaterina Kovsh 2 and Elena Vorobyeva 2
1 International Laboratory of Social Neurobiology, HSE University, 101000 Moscow, Russia
2 Academy of Psychology and Education Sciences, Southern Federal University, 344006 Rostov-on-Don, Russia
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(15), 7782; https://doi.org/10.3390/app12157782
Submission received: 29 April 2022 / Revised: 28 July 2022 / Accepted: 1 August 2022 / Published: 2 August 2022
(This article belongs to the Special Issue Research on Facial Expression Recognition)

Abstract

Event-related potentials during facial emotion recognition have been studied for more than twenty years. In recent years, interest in the use of naturalistic stimuli has grown. This study therefore examined event-related potentials (ERPs) during the recognition of dynamic neutral-to-emotional facial expressions, which are more ecologically valid than static faces. We recorded the ERPs of 112 participants who watched 144 dynamic morphs depicting a gradual change from a neutral expression to a basic emotional expression (anger, disgust, fear, happiness, sadness and surprise) and labelled those emotions verbally. We observed several typical ERP components, namely N170, P2, EPN and LPP. Participants with lower accuracy exhibited a larger posterior P2. Participants with faster correct responses exhibited larger P2 and LPP amplitudes. We also conducted a classification analysis that predicted, with an accuracy of 76%, which participants recognised emotions quickly, on the basis of posterior P2 and LPP amplitudes. These results extend previous findings on the electroencephalographic correlates of facial emotion recognition.

1. Introduction

Darwin [1] was perhaps the first to describe scientifically facial expressions and their evolutionary functions in a variety of animals. Emotional facial expressions of conspecifics are crucial stimuli that provoke different adaptive reactions. In human culture, adequate recognition of emotional facial expressions is required in interpersonal, professional and civic situations. Previous research has provided a great deal of knowledge about how emotion is recognised from facial expressions [2,3,4,5]. Furthermore, many investigators have studied why individuals with psychiatric and neurological diseases are often impaired in their ability to recognise facial emotional expressions (see the recent review by Collin et al. [6]).
Event-related potentials (ERPs) derived from the electroencephalogram (EEG), among other physiological methods, make it possible to study the brain correlates of facial emotion recognition, and a large body of data has accumulated. The P1 wave is an early positive deflection, typically peaking around 100 ms at posterior sites [7,8]. It reflects early sensory processing of stimuli presented at an expected location [9]. Numerous works suggest that the extrastriate cortex is the brain source of P1 [10,11]. Some studies have observed P1 during facial emotion recognition. Holmes et al. [12] showed that P1 amplitude is greater in response to fearful faces than to neutral ones. Rellecke et al. [13] also found a greater P1 in response to fear and anger compared to neutral expressions. In a study by Pourtois et al. [14], fearful faces likewise induced a larger P1 than neutral faces.
The N170 is a wave typically recorded between 130 and 200 ms over posterior lateral scalp areas in response to facial compared to non-facial stimuli [15,16]. A meta-analysis by Hinojosa et al. [17] showed that the N170 is larger to emotional faces than to neutral ones. As for its neural underpinnings, the occipitotemporal cortex and posterior fusiform gyrus are thought to be the sources of the N170 [18,19].
Early posterior negativity (EPN) is a wave associated with sensory encoding and perceptual analysis in the visual cortex [20]. In a study by Schupp et al. [21], threatening faces elicited a greater EPN compared with neutral or friendly expressions. Hartigan and Richards [22] found that the EPN was enhanced for disgusted over neutral expressions. In a study by Mavratzakis et al. [23], the EPN was enhanced in response to fearful as compared to neutral or happy faces. Langeslag and Van Strien [24] showed that angry faces provoked a greater EPN than neutral faces. There is evidence that the EPN may originate in the occipital and parietal cortices [25]. This line of research lays the groundwork for practical applications in diagnostics. For example, Sarraf-Razavi et al. [26] found a significant reduction in the early posterior negativity to happy and angry faces in hyperactive children compared to controls. In the same vein, Frenkel and Bar-Haim [27] revealed an enhanced EPN to fearful faces in patients with anxiety disorders.
The P2 is another wave sometimes found during face recognition: a positive component around 200 ms that has been related to attention and categorisation [28,29]. As for facial recognition, happy expressions have been shown to provoke a larger P2 amplitude than angry and neutral ones [30]. The P300 has also been related to facial emotion recognition [31] and has even been shown to serve as a marker of impairment in schizophrenic patients [32].
The parietocentral late positive complex (LPC) has also been recorded in response to facial stimuli. Werheid et al. [33] found that non-attractive faces elicited relatively lower LPC amplitudes. Later, Lu et al. [34] revealed that this was the case only for male participants, but not for females.
However, the majority of studies on the EEG correlates of facial emotion recognition have been conducted with static photographs of faces, and this approach has received criticism. Krumhuber et al. [35] argue that dynamic stimuli improve coherence in the identification of emotion and make it possible to differentiate between genuine and fake emotional expressions. They warn categorically that, as long as researchers use static snapshots of faces as a paradigm for studying facial expressions, they cannot understand what faces actually do. Ambadar et al. [36] found that dynamic facial expressions were recognised better than static photographs. Similarly, Lander and Bruce [37] showed the same effect for the recognition of identity. In addition, neurological investigations demonstrate that there are different pathways for the recognition of static and dynamic facial expressions [38]. A meta-analysis by Zinchenko et al. [39] showed that brain regions typically related to facial emotion recognition, like the amygdala and fusiform gyrus, respond more strongly to dynamic than to static expressions.
Nowadays, dynamic facial stimuli have become more widespread. For example, Amaral et al. [40] presented complex ecological animations with social content and detected the evoked responses at the single-trial level; their stimuli were 900 ms animations of single avatars, or groups of them, which shifted their gaze to the left and to the right. Qu et al. [41] proposed a database of facial emotion video clips. Some studies have applied fMRI to investigate the recognition of dynamic facial expressions [42,43,44].
Only a few researchers have used dynamic facial expressions to investigate ERPs. Fichtenholtz et al. [45] found a wave analogous to the N170 and a second positive wave, which they called P325. In this vein, Recio et al. [46] asked participants to categorise happy, angry or neutral faces presented either as static images or as expressions evolving dynamically within 150 ms. Dynamic happy facial expressions were recognised faster and more accurately than static photographs. They observed the EPN and the late positive potential (LPP), which were larger and longer-lasting during the recognition of dynamic compared with static facial expressions. Trautmann-Lengsfeld et al. [47] presented 1500 ms videos and also revealed the N170, EPN and LPP. Stefanou and colleagues [48] presented 500 ms video clips constructed from static face photographs and found the N170 wave at posterior sites.
As Recio et al. [49] noted, in face-to-face communication, facial emotional expressions do not consist of static images showing the apex of maximal expression intensity. Moreover, we should take into account that, before watching a dynamic facial emotional expression, people usually observe the other person's neutral expression. We therefore reasoned that presenting our participants with dynamic facial expressions preceded by the actors' neutral faces would be more ecologically valid. The aim of our work was thus to study ERPs during facial emotion recognition and to explore the feasibility of using dynamic stimulation for emotion recognition in ERP research. We expected to register the waves typical for such studies: P1, N170, P2, EPN and LPP. We also wanted to compare these waves between participants with different levels of accuracy and different reaction times. For this purpose, we applied a paradigm of verbal responses to dynamic facial expressions, which has proved effective in differentiating all basic facial emotional expressions [50]. In other words, this more ecological paradigm was able to detect differences in accuracy and speed between every possible pair of basic facial emotional expressions: happiness was the best recognised facial emotion and fear the worst, with surprise, disgust, anger and sadness arranged in between. This complete differentiation had not been achieved in previous studies, where participants responded with keys. Given the widespread and fruitful application of artificial intelligence in the field of facial recognition [51,52,53], we also applied a classification analysis to predict which participants would recognise facial emotional expressions quickly on the basis of their ERP amplitudes.

2. Methods

2.1. Participants

The sample consisted of 112 right-handed participants (59% female; mean age = 22.5 years, SD = 3.4). All had normal or corrected-to-normal vision. None had a psychiatric or neurological diagnosis or took medication. Procedures were conducted according to the Declaration of Helsinki and approved by the local ethics committee. The participants took part in the study without any reward.

2.2. Material and Design

Following a previous study [50], we used 144 dynamic morphs made from 56 coloured, full-face photographs of four men and four women from the Karolinska Directed Emotional Faces collection (KDEF; [54]). The photographs depicted seven facial expressions: anger, disgust, fear, happiness, sadness, surprise and neutral. With the software Sqirlz Morph 2.1 (Xiberpix), we created 48 "neutral-to-emotion" morphs (six emotional expressions × eight actors). In the photos, we marked key points of the face so that the software could prepare a gradual change of 12 frames, from the initial neutral expression to the final emotional one. Thus, each dynamic morph began with a neutral expression (500 ms), followed by 11 frames (42 ms each), and lasted 962 ms in total (Figure 1). The idea of first presenting a static neutral face was adopted from the work by Fichtenholtz et al. [45]. Each dynamic morph was presented three times. The trials were quasi-randomised, as sketched below: each quarter of trials (1–36, 37–72, 73–108 and 109–144) contained six trials of each facial emotional expression, and no facial emotional expression or actor was presented more than twice in a row. All participants viewed the same sequence of 144 dynamic morphs.
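For illustration, the quasi-randomisation constraints (four quarters of 36 trials, six trials of each emotion per quarter, no emotion or actor more than twice in a row) can be met by simple rejection sampling. The following is only a minimal Python sketch under these assumptions: the emotion and actor labels are placeholders, and it does not reproduce the exact pairing of the 48 morphs presented three times each.

```python
import random

# Illustrative labels only; the actual stimuli were 48 KDEF-based morphs,
# each presented three times (this sketch does not reproduce that pairing).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
ACTORS = ["F1", "F2", "F3", "F4", "M1", "M2", "M3", "M4"]  # hypothetical actor codes

def no_triples(seq):
    """True if no emotion and no actor occurs more than twice in a row."""
    for i in range(2, len(seq)):
        if seq[i][0] == seq[i - 1][0] == seq[i - 2][0]:
            return False
        if seq[i][1] == seq[i - 1][1] == seq[i - 2][1]:
            return False
    return True

def build_sequence():
    """144 (emotion, actor) trials in four quarters of 36,
    each quarter containing every emotion six times."""
    while True:  # rejection sampling; usually converges within a few hundred tries
        trials = []
        for _ in range(4):
            quarter = [(e, random.choice(ACTORS)) for e in EMOTIONS for _ in range(6)]
            random.shuffle(quarter)
            trials.extend(quarter)
        if no_triples(trials):
            return trials

sequence = build_sequence()
print(len(sequence), sequence[:3])
```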
The participants were seated in a dark, soundproof cubicle in front of a 19-inch computer screen at a distance of 60 cm, so that the stimuli subtended a visual angle of 19 degrees. A microphone (frequency: 20–16,000 Hz; sensitivity: 54 dB; impedance: 2.2 kΩ) was attached near the auricle so that its sensor was 2 cm from the lips and could not be displaced by involuntary movements. The experiment was run in OpenSesame [55]. Each trial began with a fixation cross (1 s), after which participants viewed a face for 962 ms; the last frame remained on screen until a response was made. They were asked to announce the emotion label aloud as quickly as possible, and verbal reaction times were measured. A previous study found no differences in the pronunciation of the emotion labels; hence, this should not have influenced the verbal reaction times to the video clips with emotional faces [50]. A technician logged the responses so that accuracy could be computed later. During the intertrial period (3 s) before each trial, the six emotion labels appeared in alphabetical order as a reminder of the possible responses. Before the experiment, participants completed six training trials.
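Verbal reaction times were measured from the microphone signal. The study does not report the voice-key algorithm used, so the following is only a hypothetical sketch of how a response onset could be detected in software, namely as the first sustained excursion of the rectified signal above a threshold; the threshold ratio and window length are illustrative values, not parameters from the study.

```python
import numpy as np

def voice_onset_ms(mic_signal, sample_rate, threshold_ratio=0.1, min_run_ms=10):
    """Estimate the verbal reaction time as the first sustained excursion of the
    rectified microphone signal above a fraction of its maximum (a simple
    software voice key; parameter values are illustrative)."""
    envelope = np.abs(np.asarray(mic_signal, dtype=float))
    threshold = threshold_ratio * envelope.max()
    run = max(1, int(sample_rate * min_run_ms / 1000))
    above = envelope > threshold
    for start in range(len(above) - run + 1):
        if above[start:start + run].all():
            return 1000.0 * start / sample_rate  # onset in ms from recording start
    return None  # no response detected
```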

2.3. Data Collection and Reduction

The EEG was recorded from 32 electrodes with an NVX-136 amplifier (MCS, Russia) placed according to the international 10–20 system (filters = 0.05–40 Hz; sampling rate = 1000 Hz; impedance ≤ 10 kΩ; ground: AFz). Ocular movements were recorded via one electrode placed over the orbicularis oculi muscle below the right eye. Earlobe electrodes served as the online reference.
The EEG analysis was conducted offline using EEGLAB [56]. The recording was split into 2000 ms epochs with a 1000 ms baseline. Eye-movement artefacts were corrected [57], and trials with activity exceeding ±100 μV were rejected, as were trials with incorrect recognition. In total, 64.6% of trials were rejected. We averaged the ERPs across electrodes (see below), as recommended by Huffmeijer et al. [58]: frontal (Fp1, Fp2, F3, F4, F7, F8, and Fz), central (FC1, FC2, C3, Cz, C4, CP1, and CP2), temporal (FT9, FT10, FC5, FC6, T3, T4, T5, T6, CP5, CP6, TP9, and TP10) and posterior sites (P3, Pz, P4, O1, Oz, and O2). Visual examination of the grand averages of the four regions allowed us to identify four waves during the viewing of neutral faces (P1, N170, P2 and EPN) and five waves during the gradual appearance of a facial emotional expression (P1, N170, P2, EPN and LPP).
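As a minimal sketch of this reduction step (the authors used EEGLAB; the NumPy code below is an illustrative reimplementation under the stated assumptions, not their script), epochs can be screened for ±100 μV excursions and incorrect responses and then averaged across trials and across the electrodes of each region:

```python
import numpy as np

REGIONS = {
    "frontal":   ["Fp1", "Fp2", "F3", "F4", "F7", "F8", "Fz"],
    "central":   ["FC1", "FC2", "C3", "Cz", "C4", "CP1", "CP2"],
    "temporal":  ["FT9", "FT10", "FC5", "FC6", "T3", "T4", "T5", "T6",
                  "CP5", "CP6", "TP9", "TP10"],
    "posterior": ["P3", "Pz", "P4", "O1", "Oz", "O2"],
}

def region_erps(epochs_uv, ch_names, correct, reject_uv=100.0):
    """epochs_uv: (n_trials, n_channels, n_samples) baseline-corrected epochs in µV.
    Keep correctly answered trials without ±100 µV excursions, then average
    across trials and across the electrodes of each region."""
    correct = np.asarray(correct, dtype=bool)
    amplitude_ok = np.abs(epochs_uv).max(axis=(1, 2)) <= reject_uv
    clean = epochs_uv[correct & amplitude_ok]
    erps = {}
    for region, chans in REGIONS.items():
        idx = [ch_names.index(c) for c in chans]
        erps[region] = clean[:, idx, :].mean(axis=(0, 1))  # one ERP per region
    return erps
```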

2.4. Data Analysis

To analyse electroencephalographic differences between participants with higher and lower accuracy of facial emotion recognition, we split the whole sample in half at the median of total accuracy and verified the group difference with the Mann–Whitney test. This yielded two subsamples: a high-accuracy subsample (56 participants) and a low-accuracy subsample (56 participants). We then compared the ERP amplitudes of participants with lower and higher accuracy at each millisecond using t-tests. As the grouping factor had only two levels, we could afford comparisons at each millisecond to identify the time windows of the waves more precisely. Pairwise comparisons employed the false discovery rate correction for multiple comparisons [59]. The same procedure was applied after splitting the sample at the median reaction time, yielding a fast-reaction subsample (56 participants) and a slow-reaction subsample (56 participants). Finally, we conducted a classification analysis of recognition speed on the basis of the brain waves, using k-nearest neighbours, a naïve Bayes classifier and automated neural networks. We predicted whether a participant belonged to the fast or the slow group from the amplitude of each wave, averaged across trials. The overall flow of the study is depicted in Figure 2 and sketched in the example below.
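A minimal sketch of this analysis pipeline, assuming SciPy and scikit-learn as stand-ins for the software actually used (MLPClassifier replaces the automated neural networks, and the cross-validation scheme is an assumption, as it is not reported): a Mann–Whitney check of the median split, millisecond-wise t-tests with Benjamini–Hochberg FDR correction, and the three classifiers applied to per-subject wave amplitudes.

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure; returns a boolean mask of rejections."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    passed = scaled <= alpha
    reject = np.zeros(len(p), dtype=bool)
    if passed.any():
        reject[order[: passed.nonzero()[0].max() + 1]] = True
    return reject

def compare_halves(erp_low, erp_high, score_low, score_high):
    """erp_*: (n_subjects, n_samples) region ERPs of the two median-split halves;
    score_*: the behavioural measure (accuracy or RT) used for the split."""
    print(mannwhitneyu(score_low, score_high))   # confirm the split separated the groups
    _, p = ttest_ind(erp_low, erp_high, axis=0)  # t-test at every time point
    return fdr_bh(p)                             # significant time points after FDR

def classify_speed(amplitudes, is_fast):
    """amplitudes: per-subject mean wave amplitudes; is_fast: 1 = fast, 0 = slow."""
    for clf in (KNeighborsClassifier(), GaussianNB(), MLPClassifier(max_iter=2000)):
        acc = cross_val_score(clf, amplitudes, is_fast, cv=5).mean()
        print(type(clf).__name__, f"{100 * acc:.1f}%")
```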

3. Results

We found a negative correlation between the accuracy and the time of facial emotion recognition (rs = −0.82, p < 0.001): the slower the recognition of a stimulus, the less accurate it was. The division of the whole sample in half using the medians of accuracy and reaction time was successful. Participants who were more accurate overall were also more accurate in recognising every emotional expression (only the difference in the accuracy of surprise recognition was marginal; Table 1). Participants who responded faster overall were faster at recognising each emotional expression (Table 2).

3.1. Event-Related Potentials of EEG

Figure 3 presents the grand-average ERPs of the four regions. At temporal and posterior regions, we could identify the waves typically found in emotion recognition studies. While participants watched neutral faces, P1, N170, P2 and EPN were registered in their usual time windows. After the onset of the gradual appearance of a facial emotional expression, P1, N170, P2, EPN and LPP could be observed.
Topographic maps (Figure 4) show the opposite pattern of activity at anterior and posterior regions. The topoplot of ERPs during the viewing of neutral faces reflects the distribution of the N170 at 170 ms, the posterior P2 at 200 ms and the EPN at 500 ms. The topoplot of ERPs in response to the gradual appearance of an emotional expression depicts another posterior P2 at 300 ms, the EPN at 800 ms and the LPP at 1300 ms.

3.2. ERP Related to Individual Differences in Facial Emotion Recognition

We found an effect of accuracy on posterior ERPs: participants with a lower level of accuracy displayed a larger P2 amplitude (387–428 ms), ps < 0.048, ds = 0.36–0.38 (Figure 5). We did not observe any significant effect of accuracy on frontal, central or temporal ERPs (ps > 0.055). We also found an effect of reaction time on posterior ERPs: participants with faster correct answers displayed larger amplitudes of the P2 (235–342 ms, ds = 0.36–0.42) and the LPP (1031–1199 ms, ds = 0.33–0.46), all ps < 0.041 (Figure 6). Finally, we found an effect of reaction time on temporal ERPs: participants with faster correct answers displayed a larger LPP amplitude (1064–1199 ms), all ps < 0.044. We did not observe any significant effect of reaction time or accuracy on frontal or central ERPs (ps > 0.055).

3.3. Classification Analysis

The classification analysis showed that, of the methods we applied, the automated neural network gave the best result, with a prediction accuracy of 75.86% (Table 3). K-nearest neighbours showed the worst result.

4. Discussion

The aim of our study was to investigate event-related potentials of the EEG during the recognition of dynamic facial emotional expressions. To make our paradigm more naturalistic, we first presented a neutral face for 500 ms and then a gradual change to a basic facial emotional expression over 462 ms. This allowed us to obtain the diverse ERPs typically found in studies of facial emotion recognition.
We recorded verbal reactions and reproduced the differences in recognition accuracy between all basic facial emotional expressions found by Kosonogov and Titova [50]. The correlation between accuracy and reaction time was negative: stimuli that were recognised more accurately were also recognised faster. This finding is in accordance with previous studies on facial emotion recognition [60,61].
We registered several ERPs typically found during facial emotion observation and recognition. After visual inspection, we identified them as N170, P2, EPN and LPP; they were prominent at temporal and posterior sites. Our naturalistic paradigm consisted of two parts. During the presentation of neutral faces for 500 ms, we found P1, N170, P2 and EPN. Then, the gradual change to a basic facial emotional expression (11 frames, 462 ms in total) provoked N170, P2, EPN and LPP. We admit that the first part (the perception of the neutral face) may have contaminated the ERPs of the second part (the perception of a facial emotional expression). However, we wanted to build naturalistic stimuli, which is why we always presented a neutral face first. During the presentation of neutral faces, these waves appeared at their typical latencies.
As for the ERPs during the gradual change to a basic facial emotional expression, the latencies were longer. Thus, we regarded the wave between 201 and 500 ms as the P2 and the wave between 501 and 1000 ms as the EPN. This discrepancy could be explained by the prolonged presentation of the dynamic stimuli: the unfolding of the expression over 462 ms could make the ERPs appear later and last longer.
Similar to previous studies [15,48,62], we observed the N170 at temporal and posterior sites, both during the viewing of neutral faces and in response to the gradual change to a basic facial emotional expression; it was more salient during the viewing of neutral faces. However, our paradigm did not permit us to register a clear N170 in response to the gradual appearance of a basic facial emotional expression, and we did not find any between-subject differences in the N170. A possible explanation is that this time window was already contaminated by the initial neutral faces.
We also observed a positive wave after the N170 (both during the perception of neutral faces and during that of dynamic emotional expressions), which we regarded as the P2 [63]. Rossignol et al. [64] also found a P2 in response to facial expressions, although in their study the faces served as primes and were not explicitly recognised. Balconi and Carrera [65] demonstrated that the P2 was larger when voices and facial expressions were congruent. The P2 has been referred to as a correlate of attentional processes [29,66] and has been related to the early discrimination of stimuli [67]. In particular, potentials between 200 and 300 ms may reflect attentional capture by emotional stimuli [68]. Generally speaking, stimuli depicting people are more salient and attract attention (e.g., [69]); in particular, Carretié et al. [70] found that faces capture attention to a greater extent than scenes. A recent study showed that any emotional social content (not only faces, but also scenes) modulates the anterior P2 [71]. It is important to note, however, that our paradigm generated a salient posterior, but not anterior, P2.
Between 501 and 1100 ms, we registered a negative wave (during the perception of dynamic emotional expressions), which we considered to be the early posterior negativity. It reflects natural selective attention to emotional stimuli [25,72]. Bayer and Schacht [73] found a similar EPN in response to emotional faces, scenes and words. Eimer et al. [74] demonstrated an enhanced early negativity at lateral temporal and occipital electrodes for emotional relative to neutral faces.
Finally, our paradigm permitted us to observe the LPP 1100 ms after the gradual appearance of a basic facial emotional expression. LPP amplitude can be interpreted as reflecting the allocation of attentional resources to emotional stimuli of different kinds [75]. Thus, Foti et al. [76] showed a larger amplitude to emotional expressions relative to neutral ones. Vico et al. [77] found larger amplitudes of the late ERP waves, P3 and LPP, during the perception of participants' loved ones in comparison with unknown or famous faces. In a study by Choi et al. [78], highly empathic people exhibited a larger LPP amplitude in response to facial expressions.
Similar to Recio et al. [46], who presented dynamic facial expressions, we identified the EPN and LPP, although in our case their latencies were longer: while those researchers found them at 200–250 and 400–600 ms, the latencies in our study were 501–1100 and 1101–1500 ms. We believe that this discrepancy can be explained by the duration of the gradual change to a basic facial emotional expression. In their study, dynamic morphs consisted of three frames, the last of which (100% of the emotion) was presented at 100 ms, whereas in our study we presented 11 consecutive frames of 42 ms each, so the final frame appeared only at about 450 ms.
We also compared event-related potentials between participants with different facial emotion recognition performance. Participants with low accuracy exhibited larger amplitudes of the posterior P2 (387–428 ms), which may reflect the attentional resources required for recognition. Of note, our EEG analysis included only trials with correct responses; hence, participants with lower accuracy allocated their attention in such a way as to recognise the emotional expressions correctly. This is in accordance with the attentional allocation framework, which relates successful behavioural performance to increased amplitudes of attention-related ERP waves [79,80]. For participants with high accuracy, such a task is presumably easier and more habitual. In addition, participants with faster correct responses displayed larger amplitudes of the P2 and LPP after the beginning of the gradual appearance of a facial emotional expression. Therefore, faster but correct recognition of emotional expressions required a greater allocation of processing resources. In other words, dynamic facial stimuli provoked a greater allocation of attention, which helped to recognise emotions faster.
Further research could be directed at constructing different designs to capture subtle physiological differences between all basic facial emotional expressions. We acknowledge, as a limitation, that our design did not allow the examination of ERP differences between facial emotional expressions (only 144 trials for six emotional expressions). For this purpose, one could present many more stimuli (30–50 per emotion, which would make the study much longer) or limit the set to, for instance, three basic facial emotional expressions. In our study, the basic emotions differed considerably in recognition accuracy. This reflects typical differences in facial emotion recognition, but it also entailed different numbers of correct-response trials for the different basic emotions. Hence, another future direction could be to build a stimulus database ranked by accuracy, so that each basic facial emotional expression would be recognised in a certain proportion of cases. This would also help to achieve a better signal-to-noise ratio, because in our study we had to discard 64.6% of trials. We also suppose that the forced choice among six facial emotional expressions could increase the cognitive load of our participants [81]. At the same time, however, we opted to build a more ecological paradigm embracing all six basic facial emotional expressions that can appear on people's faces in everyday life. Likewise, a design without an initial neutral face could be used to avoid the contamination effect of neutral faces; once again, however, this would not fit our intention to present naturalistic stimuli.
In general, the application of such naturalistic paradigms seems to be promising for psychiatric and neurological studies. Facial emotion recognition is influenced by many variables, like age [82], sex [83], the age of actors [84], ethnicity [85] and so on. Therefore, ERPs during the recognition of dynamic facial expressions could be modulated by these variables.
To conclude, we applied a more naturalistic and ecological paradigm to study facial emotion recognition. The participants watched 144 dynamic morphs depicting a gradual change from a neutral expression to a basic facial emotional expression and verbally labelled those emotions. We revealed some typical ERPs, like N170, P2, EPN and LPP. Participants with lower accuracy exhibited a larger P2 during correct recognition. Participants with faster correct responses exhibited larger amplitudes of the P2 and LPP during correct recognition. The classification analysis achieved an accuracy of 76% in predicting which participants recognised emotional expressions quickly, on the basis of posterior P2 and LPP amplitudes. These differences between subsamples are supposed to reflect attentional allocation, that is, successful behavioural performance is related to increased amplitudes of attention-related ERPs.

Author Contributions

Conceptualization, methodology, all authors; software, V.K.; data curation, V.K. and E.K.; writing—original draft preparation, V.K.; writing—review and editing, E.K. and E.V.; visualization, V.K.; supervision and project administration, E.V.; funding acquisition, V.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the International Laboratory of Social Neurobiology ICN HSE RF Government Grant Ag. No. 075-15-2022-1037 and was carried out using HSE unique equipment (Reg. num 354937).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of HSE University (14 January 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Darwin, C. The Expression of the Emotions in Man and Animals; Murray: London, UK, 1872. [Google Scholar]
  2. Adolphs, R. Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behav. Cogn. Neurosci. Rev. 2002, 1, 21–62. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Liu, T.; Wang, J.; Yang, B.; Wang, X. Facial expression recognition method with multi-label distribution learning for non-verbal behavior understanding in the classroom. Infrared Phys. Technol. 2021, 112, 103594. [Google Scholar] [CrossRef]
  4. Liu, H.; Fang, S.; Zhang, Z.; Li, D.; Lin, K.; Wang, J. MFDNet: Collaborative poses perception and matrix Fisher distribution for head pose estimation. IEEE Trans. Multimed. 2021, 24, 2449–2460. [Google Scholar] [CrossRef]
  5. Liu, H.; Liu, T.; Zhang, Z.; Sangaiah, A.K.; Yang, B.; Li, Y.F. ARHPE: Asymmetric Relation-aware Representation Learning for Head Pose Estimation in Industrial Human-machine Interaction. IEEE Trans. Ind. Inform. 2022, 18, 7107–7117. [Google Scholar] [CrossRef]
  6. Collin, L.; Bindra, J.; Raju, M.; Gillberg, C.; Minnis, H. Facial emotion recognition in child psychiatry: A systematic review. Res. Dev. Disabil. 2013, 34, 1505–1520. [Google Scholar] [CrossRef] [PubMed]
  7. Martinez, A.; Anllo-Vento, L.; Sereno, M.I.; Frank, L.R.; Buxton, R.B.; Dubowitz, D.J.; Wong, E.C.; Hinrichs, H.; Heinze, H.J.; Hillyard, S.A. Involvement of striate and extrastriate visual cortical areas in spatial attention. Nat. Neurosci. 1999, 2, 364–369. [Google Scholar] [CrossRef] [PubMed]
  8. Pizzagalli, D.; Regard, M.; Lehmann, D. Rapid emotional face processing in the human right and left brain hemispheres: An ERP study. NeuroReport 1999, 10, 2691–2698. [Google Scholar] [CrossRef]
  9. Luck, S.J.; Heinze, H.J.; Mangun, G.R.; Hillyard, S.A. Visual event-related potentials index focused attention within bilateral stimulus arrays. II. Functional dissociation of P1 and N1 components. Electroencephalogr. Clin. Neurophysiol. 1990, 75, 528–542. [Google Scholar] [CrossRef]
  10. Finnigan, S.; O’Connell, R.G.; Cummins, T.D.; Broughton, M.; Robertson, I.H. ERP measures indicate both attention and working memory encoding decrements in aging. Psychophysiology 2011, 48, 601–611. [Google Scholar] [CrossRef]
  11. Herrmann, C.S.; Knight, R.T. Mechanisms of human attention: Event-related potentials and oscillations. Neurosci. Biobehav. Rev. 2001, 25, 465–476. [Google Scholar] [CrossRef]
  12. Holmes, A.; Nielsen, M.K.; Tipper, S.; Green, S. An electrophysiological investigation into the automaticity of emotional face processing in high versus low trait anxious individuals. Cogn. Affect. Behav. Neurosci. 2009, 9, 323–334. [Google Scholar] [CrossRef] [Green Version]
  13. Rellecke, J.; Sommer, W.; Schacht, A. Does processing of emotional facial expressions depend on intention? Time-resolved evidence from event-related brain potentials. Biol. Psychol. 2012, 90, 23–32. [Google Scholar] [CrossRef]
  14. Pourtois, G.; Grandjean, D.; Sander, D.; Vuilleumier, P. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cereb. Cortex 2004, 14, 619–633. [Google Scholar] [CrossRef] [Green Version]
  15. Bentin, S.; McCarthy, G.; Perez, E.; Puce, A.; Allison, T. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 1996, 8, 551–565. [Google Scholar] [CrossRef] [Green Version]
  16. Itier, R.J.; Taylor, M.J. N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cereb. Cortex 2004, 14, 132–142. [Google Scholar] [CrossRef] [Green Version]
  17. Hinojosa, J.A.; Mercado, F.; Carretié, L. N170 sensitivity to facial expression: A meta-analysis. Neurosci. Biobehav. Rev. 2015, 55, 498–509. [Google Scholar] [CrossRef]
  18. Bötzel, K.; Schulze, S.; Stodieck, S.R.G. Scalp topography and analysis of intracranial sources of face-evoked potentials. Exp. Brain Res. 1995, 104, 135–143. [Google Scholar] [CrossRef]
  19. Rossion, B.; Joyce, C.A.; Cottrell, G.W.; Tarr, M.J. Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage 2003, 20, 1609–1624. [Google Scholar] [CrossRef]
  20. Pourtois, G.; Schettino, A.; Vuilleumier, P. Brain mechanisms for emotional influences on perception and attention: What is magic and what is not. Biol. Psychol. 2013, 92, 492–512. [Google Scholar] [CrossRef] [Green Version]
  21. Schupp, H.T.; Öhman, A.; Junghöfer, M.; Weike, A.I.; Stockburger, J.; Hamm, A.O. The facilitated processing of threatening faces: An ERP analysis. Emotion 2004, 4, 189. [Google Scholar] [CrossRef] [Green Version]
  22. Hartigan, A.; Richards, A. Disgust exposure and explicit emotional appraisal enhance the LPP in response to disgusted facial expressions. Soc. Neurosci. 2017, 12, 458–467. [Google Scholar] [CrossRef] [Green Version]
  23. Mavratzakis, A.; Herbert, C.; Walla, P. Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: A simultaneous EEG and facial EMG study. NeuroImage 2016, 124, 931–946. [Google Scholar] [CrossRef]
  24. Langeslag, S.J.; Van Strien, J.W. Early visual processing of snakes and angry faces: An ERP study. Brain Res. 2018, 1678, 297–303. [Google Scholar] [CrossRef]
  25. Junghöfer, M.; Bradley, M.M.; Elbert, T.R.; Lang, P.J. Fleeting images: A new look at early emotion discrimination. Psychophysiology 2001, 38, 175–178. [Google Scholar] [CrossRef]
  26. Sarraf-Razavi, M.; Tehrani-Doost, M.; Ghassemi, F.; Nazari, M.A.; Ahmadi, Z.Z. Early posterior negativity as facial emotion recognition index in children with attention deficit hyperactivity disorder. Basic Clin. Neurosci. 2018, 9, 439–447. [Google Scholar] [CrossRef]
  27. Frenkel, T.I.; Bar-Haim, Y. Neural activation during the processing of ambiguous fearful facial expressions: An ERP study in anxious and nonanxious individuals. Biol. Psychol. 2011, 88, 188–195. [Google Scholar] [CrossRef]
  28. Antal, A.; Kéri, S.; Kovács, G.; Liszli, P.; Janka, Z.; Benedek, G. Event-related potentials from a visual categorization task. Brain Res. Protoc. 2001, 7, 131–136. [Google Scholar] [CrossRef]
  29. Crowley, K.E.; Colrain, I.M. A review of the evidence for P2 being an independent component process: Age, sleep and modality. Clin. Neurophysiol. 2004, 115, 732–744. [Google Scholar] [CrossRef]
  30. Wong, T.K.; Fung, P.C.; Chua, S.E.; McAlonan, G.M. Abnormal spatiotemporal processing of emotional facial expressions in childhood autism: Dipole source analysis of event-related potentials. Eur. J. Neurosci. 2008, 28, 407–416. [Google Scholar] [CrossRef]
  31. Balconi, M.; Lucchiari, C. Consciousness and emotional facial expression recognition: Subliminal/supraliminal stimulation effect on N200 and P300 ERPs. J. Psychophysiol. 2007, 21, 100–108. [Google Scholar] [CrossRef]
  32. Ueno, T.; Morita, K.; Shoji, Y.; Yamamoto, M.; Yamamoto, H.; Maeda, H. Recognition of facial expression and visual P300 in schizophrenic patients: Differences between paranoid type patients and non-paranoid patients. Psychiatry Clin. Neurosci. 2004, 58, 585–592. [Google Scholar] [CrossRef] [PubMed]
  33. Werheid, K.; Schacht, A.; Sommer, W. Facial attractiveness modulates early and late event-related brain potentials. Biol. Psychol. 2007, 76, 100–108. [Google Scholar] [CrossRef] [PubMed]
  34. Lu, Y.; Wang, J.; Wang, L.; Wang, J.; Qin, J. Neural responses to cartoon facial attractiveness: An event-related potential study. Neurosci. Bull. 2014, 30, 441–450. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Krumhuber, E.G.; Kappas, A.; Manstead, A.S. Effects of dynamic aspects of facial expressions: A review. Emot. Rev. 2013, 5, 41–46. [Google Scholar] [CrossRef]
  36. Ambadar, Z.; Schooler, J.W.; Cohn, J.F. Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. Psychol. Sci. 2005, 16, 403–410. [Google Scholar] [CrossRef]
  37. Lander, K.; Bruce, V. Recognizing famous faces: Exploring the benefits of facial motion. Ecol. Psychol. 2000, 12, 259–272. [Google Scholar] [CrossRef]
  38. Kilts, C.D.; Egan, G.; Gideon, D.A.; Ely, T.D.; Hoffman, J.M. Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. NeuroImage 2003, 18, 156–168. [Google Scholar] [CrossRef] [Green Version]
  39. Zinchenko, O.; Yaple, Z.A.; Arsalidou, M. Brain Responses to Dynamic Facial Expressions: A Normative Meta-Analysis. Front. Hum. Neurosci. 2018, 12, 227. [Google Scholar] [CrossRef] [Green Version]
  40. Amaral, C.P.; Simöes, M.A.; Castelo-Branco, M.S. Neural signals evoked by stimuli of increasing social scene complexity are detectable at the single-trial level and right lateralized. PLoS ONE 2015, 10, e0121970. [Google Scholar] [CrossRef]
  41. Qu, F.; Wang, S.J.; Yan, W.J.; Li, H.; Wu, S.; Fu, X. CAS(ME)2: A Database for Spontaneous Macro-Expression and Micro-Expression Spotting and Recognition. IEEE Trans. Affect. Comput. 2017, 9, 424–436. [Google Scholar] [CrossRef]
  42. LaBar, K.S.; Crupain, M.J.; Voyvodic, J.T.; McCarthy, G. Dynamic perception of facial affect and identity in the human brain. Cereb. Cortex 2003, 13, 1023–1033. [Google Scholar] [CrossRef]
  43. Sato, W.; Kochiyama, T.; Uono, S. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions. Sci. Rep. 2015, 5, 12432. [Google Scholar] [CrossRef] [Green Version]
  44. Sato, W.; Kochiyama, T.; Yoshikawa, S.; Naito, E.; Matsumura, M. Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Cogn. Brain Res. 2004, 20, 81–91. [Google Scholar] [CrossRef]
  45. Fichtenholtz, H.M.; Hopfinger, J.B.; Graham, R.; Detwiler, J.M.; LaBar, K.S. Event-related potentials reveal temporal staging of dynamic facial expression and gaze shift effects on attentional orienting. Soc. Neurosci. 2009, 4, 317–331. [Google Scholar] [CrossRef]
  46. Recio, G.; Sommer, W.; Schacht, A. Electrophysiological correlates of perceiving and evaluating static and dynamic facial emotional expressions. Brain Res. 2011, 1376, 66–75. [Google Scholar] [CrossRef]
  47. Trautmann-Lengsfeld, S.A.; Domínguez-Borràs, J.; Escera, C.; Herrmann, M.; Fehr, T. The perception of dynamic and static facial expressions of happiness and disgust investigated by ERPs and fMRI constrained source analysis. PLoS ONE 2013, 8, e66997. [Google Scholar] [CrossRef] [Green Version]
  48. Stefanou, M.E.; Dundon, N.M.; Bestelmeyer, P.E.G.; Koldewyn, K.; Saville, C.W.N.; Fleischhaker, C.; Feige, B.; Biscaldi, M.; Smyrnis, N.; Klein, C. Electro-cortical correlates of multisensory integration using ecologically valid emotional stimuli: Differential effects for fear and disgust. Biol. Psychol. 2019, 142, 132–139. [Google Scholar] [CrossRef] [Green Version]
  49. Recio, G.; Schacht, A.; Sommer, W. Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials. Biol. Psychol. 2014, 96, 111–125. [Google Scholar] [CrossRef]
  50. Kosonogov, V.; Titova, A. Recognition of all basic emotions varies in accuracy and reaction time: A new verbal method of measurement. Int. J. Psychol. 2019, 54, 582–588. [Google Scholar] [CrossRef]
  51. Liu, H.; Zheng, C.; Li, D.; Shen, X.; Lin, K.; Wang, J.; Zhang, J.; Zhang, J.; Xiong, N.N. EDMF: Efficient Deep Matrix Factorization with Review Feature Learning for Industrial Recommender System. IEEE Trans. Ind. Inform. 2022, 18, 4361–4437. [Google Scholar] [CrossRef]
  52. Liu, H.; Nie, H.; Zhang, Z.; Li, Y.F. Anisotropic angle distribution learning for head pose estimation and attention understanding in human-computer interaction. Neurocomputing 2021, 433, 310–322. [Google Scholar] [CrossRef]
  53. Li, Z.; Liu, H.; Zhang, Z.; Liu, T.; Xiong, N.N. Learning Knowledge Graph Embedding with Heterogeneous Relation Attention Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021. [Google Scholar] [CrossRef] [PubMed]
  54. Lundqvist, D.; Flykt, A.; Öhman, A. The Karolinska Directed Emotional Faces—KDEF, CD ROM from Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet; Karolinska Institutet: Stockholm, Sweden, 1998; ISBN 91-630-7164-9. Available online: http://www.emotionlab.se/resources/kdef (accessed on 1 January 2021).
  55. Mathôt, S.; Schreij, D.; Theeuwes, J. OpenSesame: An open-source, graphical experiment builder for the social sciences. Behav. Res. Methods 2012, 44, 314–324. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [Green Version]
  57. Gratton, G.; Coles, M.G.; Donchin, E. A new method for off-line removal of ocular artifact. Electroencephalogr. Clin. Neurophysiol. 1983, 55, 468–484. [Google Scholar] [CrossRef]
  58. Huffmeijer, R.; Bakermans-Kranenburg, M.J.; Alink, L.R.; Van Ijzendoorn, M.H. Reliability of event-related potentials: The influence of number of trials and electrodes. Physiol. Behav. 2014, 130, 13–22. [Google Scholar] [CrossRef]
  59. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 1995, 57, 289–300. [Google Scholar] [CrossRef]
  60. Derntl, B.; Habel, U.; Windischberger, C.; Robinson, S.; Kryspin-Exner, I.; Gur, R.C.; Moser, E. General and specific responsiveness of the amygdala during explicit emotion recognition in females and males. BMC Neurosci. 2009, 10, 91. [Google Scholar] [CrossRef] [Green Version]
  61. Brand, S.; Schilling, R.; Ludyga, S.; Colledge, F.; Sadeghi Bahmani, D.; Holsboer-Trachsler, E.; Puhse, U.; Gerber, M. Further Evidence of the Zero-Association Between Symptoms of Insomnia and Facial Emotion Recognition—Results from a Sample of Adults in Their Late 30s. Front. Psychiatry 2019, 9, 754. [Google Scholar] [CrossRef]
  62. Tanaka, H. Face-sensitive P1 and N170 components are related to the perception of two-dimensional and three-dimensional objects. NeuroReport 2018, 29, 583. [Google Scholar] [CrossRef]
  63. Kolassa, I.T.; Kolassa, S.; Bergmann, S.; Lauche, R.; Dilger, S.; Miltner, W.H.; Musial, F. Interpretive bias in social phobia: An ERP study with morphed emotional schematic faces. Cogn. Emot. 2009, 23, 69–95. [Google Scholar] [CrossRef]
  64. Rossignol, M.; Campanella, S.; Bissot, C.; Philippot, P. Fear of negative evaluation and attentional bias for facial expressions: An event-related study. Brain Cogn. 2013, 82, 344–352. [Google Scholar] [CrossRef]
  65. Balconi, M.; Carrera, A. Cross-modal integration of emotional face and voice in congruous and incongruous pairs: The P2 ERP effect. J. Cogn. Psychol. 2011, 23, 132–139. [Google Scholar] [CrossRef]
  66. Okita, T. Event-related potentials and selective attention to auditory stimuli varying in pitch and localization. Biol. Psychol. 1979, 9, 271–284. [Google Scholar] [CrossRef]
  67. Di Russo, F.; Taddei, F.; Apnile, T.; Spinelli, D. Neural correlates of fast stimulus discrimination and response selection in top-level fencers. Neurosci. Lett. 2006, 408, 113–118. [Google Scholar] [CrossRef]
  68. Kanske, P.; Plitschka, J.; Kotz, S.A. Attentional orienting towards emotion: P2 and N400 ERP effects. Neuropsychologia 2011, 49, 3121–3129. [Google Scholar] [CrossRef]
  69. Proverbio, A.M.; Zani, A.; Adorni, R. Neural markers of a greater female responsiveness to social stimuli. BMC Neurosci. 2008, 9, 56. [Google Scholar] [CrossRef] [Green Version]
  70. Carretié, L.; Kessel, D.; Carboni, A.; López-Martín, S.; Albert, J.; Tapia, M.; Mercado, F.; Capilla, A.; Hinojosa, J.A. Exogenous attention to facial versus non-facial emotional visual stimuli. Soc. Cogn. Affect. Neurosci. 2013, 8, 764–773. [Google Scholar] [CrossRef] [Green Version]
  71. Kosonogov, V.; Martinez-Selva, J.M.; Carrillo-Verdejo, E.; Torrente, G.; Carretié, L.; Sanchez-Navarro, J.P. Effects of social and affective content on exogenous attention as revealed by event-related potentials. Cogn. Emot. 2019, 33, 683–695. [Google Scholar] [CrossRef]
  72. Dolcos, F.; Cabeza, R. Event-related potentials of emotional memory: Encoding pleasant, unpleasant, and neutral pictures. Cogn. Affect. Behav. Neurosci. 2002, 2, 252–263. [Google Scholar] [CrossRef]
  73. Bayer, M.; Schacht, A. Event-related brain responses to emotional words, pictures, and faces—A cross-domain comparison. Front. Psychol. 2014, 5, 1106. [Google Scholar] [CrossRef] [Green Version]
  74. Eimer, M.; Holmes, A.; McGlone, F.P. The role of spatial attention in the processing of facial expression: An ERP study of rapid brain responses to six basic emotions. Cogn. Affect. Behav. Neurosci. 2003, 3, 97–110. [Google Scholar] [CrossRef] [Green Version]
  75. Schupp, H.T.; Junghöfer, M.; Weike, A.I.; Hamm, A.O. The selective processing of briefly presented affective pictures: An ERP analysis. Psychophysiology 2004, 41, 441–449. [Google Scholar] [CrossRef] [Green Version]
  76. Foti, D.; Olvet, D.M.; Klein, D.N.; Hajcak, G. Reduced electrocortical response to threatening faces in major depressive disorder. Depress. Anxiety 2010, 27, 813–820. [Google Scholar] [CrossRef]
  77. Vico, C.; Guerra, P.; Robles, H.; Vila, J.; Anllo-Vento, L. Affective processing of loved faces: Contributions from peripheral and central electrophysiology. Neuropsychologia 2010, 48, 2894–2902. [Google Scholar] [CrossRef]
  78. Choi, D.; Nishimura, T.; Motoi, M.; Egashira, Y.; Matsumoto, R.; Watanuki, S. Effect of empathy trait on attention to various facial expressions: Evidence from N170 and late positive potential (LPP). J. Physiol. Anthropol. 2014, 33, 18. [Google Scholar] [CrossRef] [Green Version]
  79. Posner, M. Orienting of attention. Q. J. Exp. Psychol. 2008, 32, 3–25. [Google Scholar] [CrossRef]
  80. Ortega, R.; López, V.; Aboitiz, F. Voluntary modulations of attention in a semantic auditory-visual matching Task: An ERP study. Biol. Res. 2008, 41, 453–460. [Google Scholar] [CrossRef]
  81. Plass, J.L.; Moreno, R.; Brünken, R. (Eds.) Cognitive Load Theory; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  82. Abbruzzese, L.; Magnani, N.; Robertson, I.H.; Mancuso, M. Age and gender differences in emotion recognition. Front. Psychol. 2019, 10, 2371. [Google Scholar] [CrossRef] [Green Version]
  83. Connolly, H.L.; Lefevre, C.E.; Young, A.W.; Lewis, G.J. Sex differences in emotion recognition: Evidence for a small overall female superiority on facial disgust. Emotion 2019, 19, 455–464. [Google Scholar] [CrossRef] [Green Version]
  84. Byrne, S.P.; Mayo, A.; O’Hair, C.; Zankman, M.; Austin, G.M.; Thompson-Booth, C.; McCrory, E.J.; Mayes, L.C.; Rutherford, H.J. Facial emotion recognition during pregnancy: Examining the effects of facial age and affect. Infant Behav. Dev. 2019, 54, 108–113. [Google Scholar] [CrossRef] [PubMed]
  85. Elfenbein, H.A.; Ambady, N. On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychol. Bull. 2002, 128, 203–235. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. A dynamic morph depicting happiness.
Figure 2. The overall flow of the study.
Figure 3. Grand average ERP ± SD (black ± grey) at frontal (Fp1, Fp2, F3, F4, F7, F8, and Fz), central (FC1, FC2, C3, Cz, C4, CP1, and CP2), temporal (FT9, FT10, FC5, FC6, T3, T4, T5, T6, CP5, CP6, TP9, and TP10), and posterior sites (P3, Pz, P4, O1, Oz, and O2) in response to dynamic facial expressions (all subjects, all trials, means ± SDs). We found some typical ERP, like N170, P2, EPN and LPP in temporal and posterior regions.
Figure 4. Topographic maps of the event-related potentials during dynamic facial emotion recognition. For neutral faces they represent N170 at 170 ms, posterior P2 at 200 ms and EPN at 500 ms. In response to the gradual appearance of a facial emotional expression the topoplot displays another posterior P2 at 300 ms, EPN at 800 ms, and LPP at 1300 ms.
Figure 5. ERP at posterior sites (P3, Pz, P4, O1, Oz, and O2) depending on the accuracy of facial emotion recognition. Participants with a lower level of accuracy displayed a larger P2 amplitude. Each curve represents the averaged data from 56 participants, 55–120 trials for each participant. The asterisk indicates the time window of significant differences.
Figure 6. ERP at posterior sites (P3, Pz, P4, O1, Oz, and O2) depending on the reaction time of facial emotion recognition. Participants with faster correct answers displayed larger amplitudes of the P2 and the LPP. Each curve represents the averaged data from 56 participants, 55–120 trials for each participant. The asterisks indicate the time windows of significant differences.
Table 1. Descriptive statistics of two subsamples with low and high accuracy of facial emotion recognition.
| Accuracy, % | 50% with Low Accuracy, Median (IQR) | 50% with High Accuracy, Median (IQR) | p |
|---|---|---|---|
| total | 61.8 (58.7–65.7) | 71.7 (70.0–74.4) | 0.001 |
| happiness | 100 (90.9–100) | 100 (95.4–100) | 0.017 |
| surprise | 83.3 (72.2–91.7) | 88.2 (76.2–94.7) | 0.084 |
| disgust | 73.9 (66.7–83.3) | 85.0 (77.3–90.9) | 0.001 |
| anger | 62.5 (45.5–70.8) | 79.2 (71.4–85.7) | 0.001 |
| sadness | 38.9 (29.4–50.0) | 56.5 (47.1–66.7) | 0.001 |
| fear | 17.4 (5.9–29.2) | 27.8 (18.2–34.8) | 0.001 |
Table 2. Descriptive statistics of two subsamples with slow and fast reaction times of facial emotion recognition.
| Reaction time, ms | 50% with Slow RT, Median (IQR) | 50% with Fast RT, Median (IQR) | p |
|---|---|---|---|
| total | 1661 (1541–1813) | 1164 (1066–1267) | 0.001 |
| happiness | 1239 (1107–1423) | 950 (861–1023) | 0.001 |
| surprise | 1608 (1466–1939) | 1179 (1056–1325) | 0.001 |
| disgust | 1627 (1359–1971) | 1138 (1018–1331) | 0.001 |
| anger | 1754 (1578–2193) | 1259 (1116–1392) | 0.001 |
| sadness | 1950 (1730–2238) | 1391 (1165–1546) | 0.001 |
| fear | 1906 (1565–2221) | 1334 (1186–1537) | 0.001 |
Table 3. The classification accuracy of the recognition speed on the basis of the posterior brain waves.
| Method | Accuracy, % |
|---|---|
| K-nearest neighbours | 62.07 |
| Naïve Bayes classifier | 68.97 |
| Automated neural network | 75.86 |