Article

Looming Angry Faces: Preliminary Evidence of Differential Electrophysiological Dynamics for Filtered Stimuli via Low and High Spatial Frequencies

School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4072, Australia
*
Author to whom correspondence should be addressed.
Brain Sci. 2024, 14(1), 98; https://doi.org/10.3390/brainsci14010098
Submission received: 23 October 2023 / Revised: 15 January 2024 / Accepted: 17 January 2024 / Published: 19 January 2024
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)

Abstract

Looming motion interacts with threatening emotional cues in the initial stages of visual processing. However, the underlying neural networks are unclear. The current study investigated whether the interactive effect of threat elicited by angry and looming faces is mediated by rapid, magnocellular neural pathways and whether exogenous or endogenous attention influences such processing. Here, EEG/ERP techniques were used to explore the early ERP responses to moving emotional faces filtered for high spatial frequencies (HSF) and low spatial frequencies (LSF). Experiment 1 applied a passive-viewing paradigm, presenting filtered angry and neutral faces in static, approaching, or receding motion on a depth-cued background. In the second experiment, broadband (BSF) faces were included, and endogenous attention was directed to the expression of the faces. Our main results showed that regardless of attentional control, the P1 was enhanced by BSF angry faces, but neither HSF nor LSF faces drove the effect of facial expressions. These findings indicate that looming motion and threatening expressions are integrated rapidly at the P1 level but that this processing relies neither on LSF nor on HSF information in isolation. The N170 was enhanced for BSF angry faces regardless of attention but was enhanced for LSF angry faces only during passive viewing. These results suggest the involvement of a neural pathway reliant on LSF information at the N170 level. Taken together with previous reports from the literature, this may indicate the involvement of multiple parallel neural pathways during the early visual processing of approaching emotional faces.

1. Introduction

A critical factor in survival is the efficiency of threat detection. Threats activate survival circuits and influence our behaviour, for example, by prompting approach or avoidance. For social creatures such as humans, emotional facial expressions convey significant threat cues that can prompt and inform these approach and avoidance responses [1,2,3,4]. Negative expressions like anger can motivate an avoidance response to escape threat and minimise conflict or harm [5,6,7,8]. Factors aside from expression, such as looming motion, also convey threat. We live in a dynamic world, and rapidly looming motion represents both a potential invasion of personal space and a potential collision [9,10,11,12,13,14]. As such, looming motion can elicit stereotypical reactions of fear [15,16,17], a response that appears to be present from birth in humans [18,19,20,21] and has also been observed in monkeys [22].
Perhaps because of their evolutionary relevance, threat-relevant facial expressions can be differentiated at the early stages of visual processing. ERP studies reveal that emotional expressions evoke modulations as early as 100 ms at the P1 over occipito-parietal sites [23,24,25]. A more robust finding is that the face-specific N170 is also emotion-sensitive [26,27,28]: it is typically enhanced by negative emotions like angry or fearful expressions (for recent reviews, see [29,30]) and is sometimes more pronounced in the right hemisphere [29,30]. These early emotional effects suggest increased neural activation to facial expressions indicative of threat.
Motion is also processed rapidly and has been found to modulate ERP markers such as the N1, P1, and P2. Motion onset or offset can evoke a posterior P1 component [31,32,33,34]. Looming motion is differentiated rapidly, with ERP effects reported at the P1 [35] and N1 [36]. Additionally, the P2 can reflect motion effects in response to stimulus saliency [37]. Therefore, motion and emotion are both processed rapidly and can signal threat-related information.
Interestingly, recent research has indicated that facial expressions and motion interact to modulate behavioural and neural responses, as shown by faster and more accurate responses to “looming” angry faces [1]. ERP studies have shed light on the neural mechanisms underlying such effects. Yu et al. [37] found that angry expressions enhanced the P1 and N170 but that approaching angry expressions specifically prompted a further enhancement of the P1. This P1 enhancement could reflect the enhanced processing of threat-relevant faces [25,26,38,39]. Other schools of thought have suggested that such effects are the product of the processing of low-level visual information [40,41]. However, in a follow-up study, Yu et al. [42] replicated their looming emotional face study, this time including inverted faces. Face inversion is widely thought to disrupt holistic processing and face/emotion recognition while maintaining low-level features [43,44,45,46,47,48,49,50,51]. The authors found that the enhanced P1 to looming angry expressions did not appear when faces were inverted, indicating that the early neural response required the identification of the emotional expression, thus corroborating the idea of rapid threat-related processing [42].
The early sensitivity to the interaction between motion and emotion suggests that these types of information converge rapidly through different pathways. Regarding emotion, some research has proposed the existence of a rapid subcortical pathway for early threat detection. This subcortical path is purported to rely on projections from the retina to the superior colliculus and pulvinar, and subsequently to the amygdala, allowing threats to be encoded while bypassing the typical thalamocortical streams [2,52,53,54,55,56]. This is supported by case studies of individuals with blindness due to damage to the primary visual cortex, such as patient TN [57]. Although he claimed to see nothing, TN discriminated fearful facial expressions at above-chance rates; fMRI scanning also revealed that his right amygdala was activated during this performance [57,58]. With respect to motion processing, EEG and TMS studies in humans, as well as intracranial recordings in monkeys, have provided evidence that information regarding movement likely reaches cortical regions very rapidly (i.e., within ~50 ms) and is likely conveyed partly via magnocellular thalamo-extrastriate projections to V5 [34,59,60,61].
Interestingly, other studies have shown that, when contrasted with receding stimuli, looming stimuli activate the superior colliculus and the pulvinar nucleus of the thalamus [62], as in the case of emotion and threat. For example, in the study by Cléry et al. [63] in marmosets, looming but not receding stimuli triggered strong and widespread activation in the superior colliculus, the pulvinar, and the putamen. This suggests an overlap between the subcortical structures involved in the processing of looming motion and those involved in emotion. It is therefore reasonable to assume that these two types of information are integrated relatively early in time and that the initial interactive processing of looming motion and emotional expression is likely conveyed through a rapid subcortical pathway to the amygdala that relies on the superior colliculus and pulvinar.
One way to investigate the neural pathways underlying the processing of emotion and motion is to manipulate the spatial frequency content of the stimuli. Motion is processed via functionally specific aspects of the visual system. Magnocellular layers of the lateral geniculate nucleus encode large spatial regions at high temporal rates, processing low-spatial-frequency (LSF) information at rapid speeds [64,65]. Indeed, behavioural and electrophysiological studies show faster responses to LSF information [66,67,68]. Meanwhile, parvocellular layers encode high-spatial-frequency (HSF) information, such as fine details of the stimuli, over small spatial expanses and at a slower temporal rate [64,65]. Neural paths that receive input from these different layers are, therefore, sensitive to visual information at specific temporal resolutions and ranges of spatial frequencies [65,69,70].
While general face processing relies on input from a range of spatial frequencies, holistic and global processes like emotion perception have been suggested to rely on LSF information [71,72,73]. Structures involved in threat appraisal, like the amygdala, have been observed to be activated by LSF fearful faces [58,74], even when they are task-irrelevant [75]. Case studies in which cortical lesions leave only the subcortical system intact show amygdala activation to emotional faces filtered for LSF but not HSF information [58]. Thus, the subcortical pathway is suggested to be reliant on LSF input. Furthermore, ERP studies support the idea that expression encoding is preferentially tuned to coarse visual input processed by magnocellular streams, with early modulations occurring for LSF-filtered faces. Indeed, larger posterior P1 amplitudes are evoked by LSF faces compared with unfiltered [76] and HSF faces [77,78].
This study investigated the interaction of emotion and looming motion and the neural networks underlying these two types of threat. Given the posited sensitivity of magnocellular pathways to LSF information, manipulating spatial frequency was expected to provide insight into the involvement of the different pathways in conveying threat-related information. Using EEG/ERP techniques, the time course of neural responses to approaching and receding angry and neutral faces was assessed at varying levels of spatial frequency (LSF, HSF, or unfiltered). In Experiment 1, dynamic LSF-/HSF-filtered angry and neutral faces were compared with static ones in a passive-viewing paradigm to rule out the initial influence of motion on effects relevant to spatial frequency. Considering that endogenous attention might affect emotion modulations at the early stages [30], Experiment 2 required directed attention to the facial expressions and compared dynamic LSF-/HSF-filtered angry and neutral faces to unfiltered ones.
The study specifically explored the modulation pattern of emotion and motion on the early ERP components P1, N170, and P2 within each spatial frequency condition. We hypothesised that if looming motion and emotion rely on the magnocellular routes, approaching faces would enhance early responses, especially for LSF angry faces, and that the effects of motion and emotion would interact. Due to the possible interference of low-level effects on the ERPs, different spatial frequency bands were not included in a single analysis, and effects were examined within each spatial frequency condition separately. Crucially, we expected enhanced P1 and N170 responses for approaching angry faces in the LSF but not the HSF domain. In line with our previous observations, we predicted that the P2 would show sensitivity to motion across spatial frequencies. Lastly, we expected endogenous attention to modulate emotion and motion effects on the early ERPs, especially for filtered faces.

2. Experiment 1

2.1. Method

2.1.1. Participants

Twenty-nine students from the University of Queensland participated in this experiment. After the exclusion of three participants due to noisy recordings, 26 participants (13 females) were retained (age: 17–33 years; M = 22.50; SD = 4.19). All participants had normal or corrected-to-normal vision and no self-reported neurological conditions. Participants were recruited via advertisement on campus; they received AUD 40, or two course credits, for participating in the study. Participation was voluntary, and the experiment proceeded only once participants had signed the consent form. This study was approved by the University of Queensland Ethics Committee.
The sample size was sufficient according to our initial power analysis using MorePower 6.0.4 [79]. This study used a 2 × 2 × 3 within-subjects design (described below). With an alpha (α) of 0.05 and a target power (1 − β) of 0.8, the analysis indicated that a sample of 26 was sufficient to detect significant two-level main effects and 2 × 2 interactions with a large effect size (ηp2 = 0.25) and was sufficient for other possible effects with an effect size of ηp2 = 0.17.
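For readers who wish to check such estimates, a minimal Python sketch using the standard noncentral-F approximation for a within-subjects main effect is shown below. The approximation (λ = n·f²) and its mapping onto MorePower's conventions are our assumptions, so exact values may differ slightly from those reported by MorePower 6.0.4.

```python
# Rough power check for a within-subjects main effect, assuming the
# textbook noncentral-F approximation (lambda = n * f^2).
from scipy import stats

def rm_anova_power(eta_p2, n, levels, alpha=0.05):
    f2 = eta_p2 / (1.0 - eta_p2)        # convert partial eta^2 to Cohen's f^2
    df1 = levels - 1                    # numerator df for the main effect
    df2 = (levels - 1) * (n - 1)        # error df for a within-subjects factor
    lam = n * f2                        # noncentrality parameter (assumed form)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, lam)

print(rm_anova_power(eta_p2=0.25, n=26, levels=2))  # two-level main effect
print(rm_anova_power(eta_p2=0.17, n=26, levels=3))  # three-level effect
```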

2.1.2. Stimuli

The emotional faces of eight females and eight males were selected from the Radboud Faces Database [80], with the same identities selected for their angry and neutral expressions. Original face pictures were first cropped to squares of 512 × 512 pixels presenting the hair and face areas. Using a MATLAB 2022b [81] script developed by Perfetto et al. [82], each face was processed with the Butterworth2 filter to retain low spatial frequencies (LSF; <8 cpi, approx. 0.85 cpd) or high spatial frequencies (HSF; >32 cpi, approx. 3.4 cpd), respectively. The script also normalised the contrast of all filtered pictures, keeping the overall low-level visual features consistent across stimuli. Each filtered face picture was then imported into Gimp 2.0 (https://www.gimp.org, accessed on 4 July 2022) for elliptical cropping. The face area within a vertical ellipse of 285 × 375 pixels centred over the image midpoint was left visible, while the rest of the picture remained transparent (see Figure 1 for example stimuli).
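To illustrate this kind of filtering, the sketch below applies a frequency-domain Butterworth filter to a grayscale image in Python/NumPy. The filter order, the construction of the high-pass as the complement of the low-pass, and the contrast normalisation are assumptions on our part; the exact implementation belongs to the Perfetto et al. [82] MATLAB script.

```python
# Hedged sketch of low-/high-pass spatial-frequency filtering with a
# Butterworth transfer function; constants follow the study (8 / 32 cpi).
import numpy as np

def butterworth_sf_filter(img, cutoff_cpi, mode="low", order=2):
    """Filter a square grayscale image at `cutoff_cpi` cycles per image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h                      # frequencies in cycles/image
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    lowpass = 1.0 / (1.0 + (radius / cutoff_cpi) ** (2 * order))
    gain = lowpass if mode == "low" else 1.0 - lowpass
    filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * gain))
    # crude contrast normalisation: match the original mean and SD
    filtered = (filtered - filtered.mean()) / (filtered.std() + 1e-12)
    return filtered * img.std() + img.mean()

# LSF < 8 cpi and HSF > 32 cpi, as in the study:
# lsf_face = butterworth_sf_filter(face, 8, mode="low")
# hsf_face = butterworth_sf_filter(face, 32, mode="high")
```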

2.1.3. Design and Procedure

The study used a 2 (Emotion: angry and neutral) × 2 (Filter: HSF and LSF) × 3 (Motion: approaching, receding, and static) within-participants design. All facial stimuli were presented on a full-screen depth-cued background throughout the experiment. The background consisted of black lines (RGB: 0, 0, 0) arranged as a polar projection on a grey screen (RGB: 50, 50, 50; see Figure 1). Participants were presented with faces that matched their sex to avoid potential gender biases.
Each trial started with a fixation cross in the screen centre for 1000 ms, followed by the appearance of a single upright face. In the static conditions, the face was presented at the screen centre for 500 ms at a constant size of 8.3° × 6.4° (H × W). In the approaching conditions, faces were presented initially at a size of 8.3° × 6.4° (H × W) and immediately expanded to 12.4° × 9.5° over 500 ms at a constant speed. In the receding conditions, the exact opposite motion was used, with faces appearing first at 8.3° × 6.4° and contracting to 4.2° × 3.2° over 500 ms. Following a random period between 600 and 1000 ms after the face offset, a random number between 1 and 9 appeared for 100 ms (see Figure 1 for a typical trial), followed by a blank screen that brought the complete trial to 4 s. Participants were required to respond as fast as possible regarding whether the number was even (press the “E” key) or odd (press the “O” key). This number categorisation task aimed to reinforce participants’ fixation on the screen centre. The experiment consisted of 10 blocks of 96 trials, with a break of up to 5 min between blocks. All conditions were randomised within each block with an equal number of repetitions. The total participation time was, on average, 1.5 h.
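For concreteness, a minimal PsychoPy sketch of one approaching trial is given below; the monitor calibration values, image file name, and frame-based linear animation are placeholders and assumptions rather than the authors' actual script.

```python
from psychopy import visual, core, monitors

# Assumed monitor geometry (24-inch 1920x1080 panel at 60 cm viewing distance)
mon = monitors.Monitor("exp_monitor", width=53.1, distance=60.0)
mon.setSizePix((1920, 1080))
win = visual.Window(size=(1920, 1080), monitor=mon, units="deg",
                    color=(-0.61, -0.61, -0.61))     # ~RGB 50, 50, 50 grey

face = visual.ImageStim(win, image="angry_LSF.png")  # hypothetical stimulus file
fixation = visual.TextStim(win, text="+")

fixation.draw(); win.flip(); core.wait(1.0)          # 1000 ms fixation

n_frames = int(0.5 * 60)                             # 500 ms at 60 Hz
for i in range(n_frames):                            # linear expansion (approach)
    t = i / (n_frames - 1)
    height = 8.3 + t * (12.4 - 8.3)                  # 8.3 deg -> 12.4 deg
    width = 6.4 + t * (9.5 - 6.4)                    # 6.4 deg -> 9.5 deg
    face.size = (width, height)                      # PsychoPy size is (w, h)
    face.draw()
    win.flip()
win.flip()                                           # face offset (blank screen)
```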

2.1.4. Apparatus

PsychoPy3 [83] was used to code and deliver the experiment. A screen with a resolution of 1080 × 1920 pixels and a refresh rate of 60 Hz (24-inch ASUS LCD monitor, model VG248QE, ASUSTeK Computer Inc. Taipei, Taiwan) placed 60 cm from the participants was used to present the stimuli. EEG data were recorded using the 64-channel BioSemi Active Two system (BioSemi Inc., Amsterdam, The Netherlands) at a sampling rate of 1024 Hz and a bandwidth (3 dB) of 208 Hz. Electrodes were positioned based on the extended international 10–20 system. Additionally, a Common Mode Sense (CMS) active electrode, coupled with a Driven Right Leg (DRL) passive electrode, functioned as the active reference and ground. These created a feedback loop designed to maintain the average potential similar to the reference voltage within the AD-box, essentially serving as the amplifier “zero” (https://www.biosemi.com/faq/cms&drl.htm, accessed on 16 January 2024).
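As a side note, converting the visual angles above into on-screen pixels depends on the monitor's physical width; for a 24-inch 16:9 panel this is roughly 53.1 cm, which we treat as an assumption in the small helper below.

```python
import math

def deg_to_px(deg, screen_w_cm=53.1, screen_w_px=1920, dist_cm=60.0):
    """Convert a visual angle to pixels for the assumed viewing geometry."""
    size_cm = 2 * dist_cm * math.tan(math.radians(deg) / 2)
    return size_cm * screen_w_px / screen_w_cm

print(round(deg_to_px(8.3)))   # initial face height in pixels (~315)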

2.1.5. EEG Data Processing

EEG data were processed offline using BrainVision Analyzer (Version 2.2.0, Brain Products GmbH, Gilching, Germany). Data were downsampled to 512 Hz, re-referenced to the average of all 64 electrodes, and filtered offline from 0.1 Hz to 30 Hz. For each participant, epochs from 100 ms pre-stimulus onset to 500 ms post-stimulus onset were used to compute the ERPs of interest and were baseline-corrected using the 100 ms pre-stimulus period. Artefact screening was conducted on each epoch before averaging the ERPs. We manually rejected epochs containing eyeblinks and bad traces exceeding the ±60 µV threshold. Participants with fewer than 40% of trials remaining in any condition (<32 trials per condition) were considered to have insufficient data and were excluded from further analyses. The participants included in the analyses had approximately 63 trials per condition on average (SD = 9.82).
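The authors worked in BrainVision Analyzer; for readers replicating the pipeline programmatically, a hedged MNE-Python equivalent is sketched below. The file name and the automated peak-to-peak rejection (standing in for the manual ±60 µV screening) are assumptions.

```python
import mne

raw = mne.io.read_raw_bdf("sub01.bdf", preload=True)   # BioSemi ActiveTwo file
raw.resample(512)                                      # downsample to 512 Hz
raw.set_eeg_reference("average")                       # average of the 64 channels
raw.filter(l_freq=0.1, h_freq=30.0)                    # 0.1-30 Hz band-pass

events = mne.find_events(raw)                          # assumes a standard status channel
epochs = mne.Epochs(raw, events, tmin=-0.1, tmax=0.5,  # 100 ms pre to 500 ms post
                    baseline=(-0.1, 0.0),              # pre-stimulus baseline correction
                    reject=dict(eeg=120e-6),           # ~±60 µV, as peak-to-peak
                    preload=True)
```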
Responses from all participants were averaged at each electrode to obtain the grand mean ERP traces. These traces were then used to generate EEG topographic maps, allowing visual identification of the electrodes displaying the highest activity during the time window associated with each ERP component of interest. We identified the region of interest (ROI) for each ERP component by selecting the groups of electrodes that exhibited the highest activity during the specific time windows. These choices also align with common selections in the literature [84,85,86]. Then, for each participant, the average activity of the electrodes within the ROI was calculated to generate a single ERP trace for each condition.
In this manner, one ROI was determined for the P1, which included nine electrodes at the posterior region (P7, P9, PO7, O1, Oz, O2, P8, P10, and PO8). The grand mean ERP traces of the P1 peaked within 84–108 ms for each condition, consistent with the literature. Thus, a time window of 80–115 ms locked to the face onset was selected to compute the mean P1 values. Using a similar approach, two ROIs were selected for the N170: one on the left (TP7, P7, and P9) and one on the right (TP8, P8, and P10). The N170 peaked between 146 and 166 ms for each condition. Consequently, the mean amplitude for the N170 was computed over a 30 ms time window between 140 and 170 ms, centred over the maximum. For the P2 component, one central ROI at the occipital site was created, including electrodes O1, Oz, and O2. Mean amplitudes of the P2 were computed over a 40 ms window located between 200 and 240 ms, centred over the peaks.
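Continuing the hypothetical MNE-Python sketch above, the mean P1 amplitude over this ROI and time window could be extracted as follows (per condition in practice; electrode names follow the text):

```python
p1_roi = ["P7", "P9", "PO7", "O1", "Oz", "O2", "P8", "P10", "PO8"]

evoked = epochs.average(picks=p1_roi)                  # trial average, ROI channels only
window = (evoked.times >= 0.080) & (evoked.times <= 0.115)
p1_mean_uv = evoked.data[:, window].mean() * 1e6       # mean amplitude in microvolts
print(f"P1 mean amplitude: {p1_mean_uv:.2f} uV")
```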

2.2. Results

Mean amplitudes of each ERP were obtained as described in the Method section and were exported to JASP (Version 0.14.1) for statistical analysis. The descriptive statistics (mean and standard deviation) of the ERP amplitudes in each condition can be found in the Supplementary Materials. Topographic maps and grand ERP traces of interest are shown in Figure 2, Figure 3, Figure 4 and Figure 5. For ease of readability, the ERP traces of the N170 and P2 present conditions collapsed to reflect significant main or interaction effects. Comprehensive ERP traces can be found in the Supplementary Materials. A series of repeated-measures ANOVAs was performed on the amplitudes at each level of spatial frequency for each ERP component.
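The analyses were run in JASP; an equivalent two-way repeated-measures ANOVA can be reproduced in Python with pingouin. The long-format layout and column names below are assumptions, and the synthetic data merely stand in for the exported mean amplitudes.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data: one row per participant x emotion x motion cell.
rng = np.random.default_rng(0)
rows = [{"participant": p, "emotion": e, "motion": m,
         "p1_amplitude": rng.normal(4.0, 1.0)}
        for p in range(26)
        for e in ("angry", "neutral")
        for m in ("approaching", "receding", "static")]
df = pd.DataFrame(rows)

aov = pg.rm_anova(data=df, dv="p1_amplitude",
                  within=["emotion", "motion"], subject="participant",
                  effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])
```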

2.2.1. P1 Component

For both the LSF and HSF conditions, a two-way repeated-measures ANOVA was performed with Emotion (angry and neutral) and Motion Direction (approaching, receding, and static) as factors.
For HSF faces, there was no main effect of facial expression, F (1, 25) = 0.48, p = 0.497, ηp2 = 0.019, or motion direction, F (2, 50) = 0.16, p = 0.851, ηp2 = 0.006, and no interaction, F (2, 50) = 0.25, p = 0.779, ηp2 = 0.010.
Similarly, for LSF faces, there were no significant main effects of facial expression, F (1, 25) = 0.36, p = 0.555, ηp2 = 0.014, or motion, F (2, 50) = 0.07, p = 0.928, ηp2 = 0.003, and no significant interaction, F (2, 50) = 1.33, p = 0.275, ηp2 = 0.050.
Figure 2. Topographic maps and grand ERP traces of each condition for the P1, separately presented by HSF (A) and LSF (B) conditions. AP: approaching; RE: receding; ST: static; ANG: angry; NEU: neutral.

2.2.2. N170 Component

At each level of spatial frequency (LSF vs. HSF), a three-way repeated-measures ANOVA was performed with ROI (left and right), Emotion (angry and neutral), and Motion Direction (approaching, receding, and static) as factors. Only a main effect of Motion was found for HSF faces, F (2, 50) = 14.13, p < 0.001, ηp2 = 0.361. Post hoc comparisons with Bonferroni correction showed that the N170 was significantly enhanced by approaching and receding motions compared with the static condition, t (25) = 3.93, pbonf = 0.002 and t (25) = 5.43, pbonf < 0.001, respectively. However, no difference between the two moving conditions was found, t (25) = 1.08, pbonf = 0.867. In sum, stronger N170 responses were found for moving than static HSF stimuli: receding (−4.15 μV) = approaching (−4.02 μV) > static (−3.59 μV).
Figure 3. Topographic maps and grand ERP traces for the HSF conditions; the traces present the significant main effect of motion at the N170. L: left ROI; R: right ROI; AP: approaching; RE: receding; ST: static.
For LSF faces, a significant main effect of motion was also found, F (2, 50) = 4.31, p = 0.019, ηp2 = 0.147. Post hoc comparisons showed that the N170 was significantly enhanced by approaching compared with receding motion, t (25) = 2.65, pbonf = 0.042; no other comparisons reached significance. The overall pattern was approaching (−3.72 μV) > receding (−3.38 μV), with the static condition (−3.42 μV) differing from neither. Furthermore, an interaction between ROI and emotion was found, F (1, 25) = 4.74, p = 0.039, ηp2 = 0.159. A follow-up simple main effects analysis showed that the N170 was enhanced by angry compared with neutral faces at the right ROI, F (1, 25) = 4.58, p = 0.042, ηp2 = 0.155; however, no effect of emotion was found at the left ROI, F (1, 25) = 0.18, p = 0.736, ηp2 = 0.007. No other main or interaction effect was found on the N170.
Figure 4. Topographic maps and grand ERP traces for the LSF conditions; the traces present the significant main effect of motion (A) and the interactive effect of emotion and ROI (B) at the N170. ANG: angry; NEU: neutral; L: left ROI; R: right ROI; AP: approaching; RE: receding; ST: static.

2.2.3. P2 Component

At the HSF level, a 2 (angry and neutral) × 3 (approaching, receding, and static) repeated-measures ANOVA was performed. The main effect of Motion reached significance, F (2, 50) = 5.35, p = 0.008 (Huynh–Feldt corrected p = 0.017), ηp2 = 0.176. Post hoc comparisons using Bonferroni correction revealed significantly smaller amplitudes in the approaching than in the receding and static conditions, t (25) = −3.27, pbonf = 0.009 and t (25) = −3.15, pbonf = 0.013, respectively. No difference between the receding and static conditions was found. Overall, P2 amplitudes showed a pattern of approaching (4.04 μV) < receding (4.60 μV) ≈ static (4.95 μV).
At the LSF level, the same ANOVA as for HSF was used. It also showed a main effect of motion, F (2, 50) = 10.76, p < 0.001, ηp2 = 0.301. Interestingly, post hoc comparisons revealed the same pattern of results as in the HSF condition, such that the P2 was significantly smaller for approaching faces (4.69 μV) than for receding (5.41 μV), t (25) = −3.81, pbonf = 0.002, and static faces (5.33 μV), t (25) = −4.22, pbonf < 0.001, with no difference between receding and static faces. No other main or interaction effect was found on the P2.
Figure 5. Topographic maps and grand ERP traces for the HSF (A) and LSF conditions (B), respectively, and the traces present the significant main effect of motion at the P2. AP: approaching; RE: receding; ST: static.

2.3. Experiment 1 Summary

Using a passive-viewing paradigm, Experiment 1 aimed to investigate the P1, N170, and P2 modulations by facial expression and motion for HSF- and LSF-filtered faces. At the P1 stage, neither HSF nor LSF faces evoked effects associated with facial expression or motion. The N170 was enhanced by LSF angry faces, although the effect was only significant in the right hemisphere. Interestingly, the N170 also reflected differential motion effects for HSF and LSF faces. In the HSF conditions, moving (i.e., approaching and receding) faces were differentiated from static ones, while in the LSF conditions, approaching motion was differentiated from the others. The P2 demonstrated overall sensitivity to motion: across the HSF and LSF conditions, approaching motion was differentiated from the others (i.e., receding and static) by showing smaller amplitudes.

3. Experiment 2

Consistent with our initial expectation, the N170 in Experiment 1 showed enhancement to angry faces filtered for LSF. This result supports our hypothesis that coarse processing via magnocellular routes is involved in the early processing of threatening faces. The P1 was not modulated by moving compared to static faces, suggesting a minimal influence of motion on initial effects relevant to spatial frequencies. Surprisingly, the emotional modulations for filtered faces were not found at the P1. This suggests that high or low spatial frequency bands alone are insufficient for early facial expression processing during passive viewing. Indeed, some studies have shown that the neural processing of emotional faces requires attentional resources [87,88], and attentional focus and task demands tend to modulate the effects of emotional expressions on early ERPs [30]. Thus, an alternative explanation of our P1 results is that faces filtered for HSF or LSF are not sufficiently attended during passive viewing, thereby not meeting the sensory processing threshold. To investigate this interpretation, Experiment 2 followed up with a facial expression discrimination task on LSF, HSF, and unfiltered faces, thus requiring endogenous attention directed to the emotional expressions. Since processes associated with voluntary attentional control recruit cortical pathways [89,90,91], Experiment 2 can also help determine whether the processing is subcortical or cortical. If endogenous attention engages the processing of emotional expressions, we expected the P1 and N170 to be enhanced for approaching angry faces in the LSF and unfiltered conditions and the P2 to reflect only sensitivity to motion direction. If cortically based processing were involved at the early stages, we would expect the ERPs to show different modulations by facial expression and motion between the passive and active viewing tasks. We nevertheless remained open to other potential modulations by endogenous attention.

3.1. Method

3.1.1. Participants

Following the same recruitment and reimbursement procedure described in Experiment 1, 33 participants volunteered for this experiment. After removing three participants who had insufficient data or noisy signals, 30 participants (20 females and 10 males) aged 19–34 (M = 24.23; SD = 3.46) were included in the subsequent analyses. All participants had normal or corrected-to-normal vision and no self-reported neurological conditions. The sample size was considered sufficient, given that it was larger than that of the previous experiment.

3.1.2. Design and Procedure

The same HSF and LSF facial stimuli as in Experiment 1, together with their unfiltered versions (broadband spatial frequencies/BSF), were used for this experiment (Figure 6). The contrast was normalised to maintain the same low-level visual features across all faces. The static condition was excluded based on the findings of the previous experiment, which demonstrated no initial influence of motion on effects relevant to spatial frequency (i.e., on the P1); this also kept the experiment duration manageable. Thus, this experiment used a 2 (Emotion: angry and neutral) × 2 (Motion Direction: approaching and receding) × 3 (Filter: LSF, HSF, and unfiltered) within-participants design. The depth-cued background was displayed along with every face presentation.
Instead of number categorisation as an irrelevant task, this experiment employed an emotion identification task and asked participants to pay explicit attention to the expression of each face. Each trial started with a fixation cross in the centre of the screen for 1000 ms, followed by the appearance of a single face, which started approaching or receding immediately upon onset. The approaching and receding rates were the same as in Experiment 1. Thus, in the approaching conditions, faces were presented initially at a size of 8.3° × 6.4° (H × W) and immediately expanded to 10.4° × 7.9° over 250 ms at a constant speed. In the receding conditions, the exact opposite motion was used, with faces appearing first at 8.3° × 6.4° and contracting to 6.3° × 4.8° over 250 ms. After 750 ms of a blank screen following the face offset, the question “Was the face angry?” or “Was the face neutral?” appeared on the screen. Participants were instructed to respond “yes” (press the “Y” key) or “no” (press the “N” key). They were also informed that the response was not timed and that they only needed to be as accurate as possible. The question remained the same within each block (e.g., only asking about anger), but the order of questions was randomised across blocks with equal repetition. This was meant to keep the participants’ level of engagement constant throughout. The experiment consisted of 10 blocks of 96 trials with a break of up to 5 min between blocks. All conditions of the facial stimuli were randomised within each block with an equal number of repetitions. The total participation time was, on average, 1.5 h.

3.1.3. Apparatus and EEG Data Processing

EEG data were acquired using the same setup described in Experiment 1. Filtering and artefact screening/rejection criteria identical to Experiment 1 were also applied. Participants included in the analyses had approximately 65 trials per condition on average (SD = 8.38).
Following the same ROI identification approach as in Experiment 1, the electrodes included in each ROI for each ERP component of interest were consistent with the previous experiment. Thus, one ROI was determined for the P1 at the posterior site (O1, Oz, O2, P7, P9, PO7, P8, P10, and PO8). A time window of 85–115 ms locked to the face onset was selected to compute the mean P1 values. For the N170, one ROI on the left (TP7, P7, and P9) and one on the right (TP8, P8, and P10) were included. The mean amplitude for the N170 was computed over a 40 ms time window between 135 and 175 ms, centred over the maximum. For the P2, one central ROI was created at the occipital site, including electrodes O1, Oz, and O2. Mean amplitudes of the P2 were computed over a 40 ms window located between 190 and 230 ms, centred over the peaks.

3.2. Results

As described above, the mean amplitudes of each ERP were obtained and exported to JASP (Version 0.14.1) for statistical analysis. The descriptive statistics (mean and standard deviation) of the ERP amplitudes can be found in the Supplementary Materials. Topographic maps and grand traces of each ERP of interest are visually similar to those in Experiment 1 and are therefore not presented again; comprehensive ERP traces can be found in the Supplementary Materials. Using a similar analysis approach as in Experiment 1, repeated-measures ANOVAs were performed.

3.2.1. P1 Component

Three separate two-way repeated-measures ANOVAs were performed with Emotion (angry and neutral) and Motion Direction (approaching and receding) as factors for the BSF, HSF, and LSF conditions, respectively.
For unfiltered/BSF faces, a significant main effect of motion was found; F (1, 29) = 9.84, p = 0.004, ηp2 = 0.253, such that an overall enhanced P1 was found for receding (3.94 μV) when compared with approaching (3.61 μV) motion. A significant main effect of emotion was also found, F (1, 29) = 14.15, p < 0.001, ηp2 = 0.328, showing that the P1 was overall stronger for angry (3.95 μV) than neutral (3.60 μV) faces.
For HSF faces, there were no significant main effects of facial expression or motion, and no interaction.
For LSF faces, the ANOVA also revealed a main effect of motion, F (1, 29) = 5.39, p = 0.027, ηp2 = 0.157. As in the BSF condition, the P1 was enhanced for receding (4.01 μV) compared with approaching (3.76 μV) motion.

3.2.2. N170 Component

For the BSF, LSF, and HSF conditions, a 2 ROI (left and right) × 2 Emotion (angry and neutral) × 2 Motion (approaching and receding) repeated-measures ANOVA was performed, respectively. For BSF faces, a main effect of emotion was found, F (1, 29) = 4.40, p = 0.045, ηp2 = 0.132, such that the N170 was overall enhanced for angry (−4.33 μV) compared with neutral (−4.05 μV) faces. Furthermore, an interaction between ROI and emotion was found, F (1, 29) = 4.22, p = 0.049, ηp2 = 0.127. A follow-up simple main effects analysis revealed that the N170 was significantly enhanced by angry faces at the right ROI, F (1, 29) = 9.42, p = 0.005, ηp2 = 0.245, but not at the left ROI, F (1, 29) = 0.37, p = 0.547, ηp2 = 0.013. This indicates that the N170 emotion effect for BSF faces is mainly explained by activity in the right ROI.
For HSF faces, only a Motion main effect was found, F (1, 29) = 5.91, p = 0.021, ηp2 = 0.169, where the N170 was overall stronger for receding (−4.19 μV) than approaching (−3.93 μV) faces. Interestingly, no main or interactive effect was found for LSF faces at the N170.

3.2.3. P2 Component

At each level of spatial frequency, a 2 (angry and neutral) × 2 (approaching and receding) repeated-measures ANOVA was performed. Main effects of Motion were found in all spatial frequency conditions, although only marginally significant in the BSF condition: FBSF (1, 29) = 3.96, p = 0.056, ηp2 = 0.12; FHSF (1, 29) = 8.53, p = 0.007, ηp2 = 0.227; FLSF (1, 29) = 11.88, p = 0.002, ηp2 = 0.291. The same pattern of motion effect was found across all spatial frequency conditions, such that the P2 tended to be more enhanced by receding than approaching motion (Table 1). No other main or interaction effect was found on the P2.

3.3. Experiment 2 Summary

Experiment 2 aimed to investigate how endogenous attention would modulate the early components for looming emotional faces within different spatial frequency bands. Results on the P1 and N170 revealed an overall enhancement to unfiltered angry faces, consistent with our previous studies [37,42]. As in Experiment 1, neither HSF nor LSF angry faces modulated the P1. Surprisingly, the P1 was overall enhanced by receding motion for both the unfiltered and LSF conditions, and the N170 showed no enhancement to LSF angry faces under endogenous attention. The N170 motion effect for the LSF conditions was also absent, although a general enhancement to HSF receding faces emerged. Lastly, the P2 demonstrated consistent results, always showing smaller amplitudes for approaching compared to receding faces across all spatial frequency conditions.

4. Discussion

The current study used EEG/ERP to investigate the time course of neural activation to apparently approaching and receding emotional faces filtered by low and high spatial frequency bands. It aimed to determine if and when looming motion and facial threat interacted and whether this processing involved the magnocellular neural pathways. This latter point was examined by presenting low vs. high spatial frequency components of the stimuli, under the expectation that this would preferentially activate magno- vs. parvocellular pathways, respectively. The effect of endogenous attention on these neural events was also investigated. Experiment 1 used a passive-viewing paradigm as in our previous research. It presented HSF- and LSF-filtered angry and neutral faces in static, approaching, or receding motions on a depth-cued background. In the second experiment, endogenous attention was engaged by directing attention to the facial expression. The same HSF and LSF faces and their unfiltered/broadband counterparts (BSF) were used, presented in either approaching or receding motion.

4.1. The P1 Component

In this study, the early modulations by threat-relevant stimuli were hypothesised to rely on rapid and coarse processing via magnocellular routes. Our previous passive-viewing studies with unfiltered looming angry faces showed a P1 enhancement to angry faces that was further boosted by looming motion. We expected to observe similar P1 effects for LSF-filtered looming angry faces as had been seen for the unfiltered stimuli in our previous studies. However, contrary to our expectations, across both experiments of this study, faces filtered for LSF or HSF showed no P1 modulation by facial expression, nor any interaction with looming motion.
The absence of any threat-relevant effect on the P1 for filtered faces is inconsistent with studies showing a stronger neural sensitivity to LSF threatening faces at this stage [92,93,94,95,96]. One possible interpretation of this result points to the difference in the task relevance of the faces. The null results for spatial frequency in our study were observed during passive viewing of the faces, whereas studies reporting LSF threat sensitivity at the P1 stage frequently required directed attention to the faces [92,93,95,96]. In line with this interpretation, several studies have demonstrated that the neural processing of emotional faces requires attentional resources [87,88]. Therefore, we followed up with Experiment 2, in which endogenous attention was directed to the facial expressions. We further hypothesised that attentional focus might enhance the initial visual processing of facial expressions, which could be reflected in P1 modulation by LSF emotional faces. Nevertheless, the P1 was again not modulated by either LSF- or HSF-filtered faces. Together, our results suggest that the P1 differentiation of threatening faces is not driven by a neural pathway that relies on coarse/LSF information alone.
On the other hand, the overall P1 differentiation of facial expressions for unfiltered faces was observed regardless of endogenous attention. It therefore seems possible that filtering a face for LSF or HSF removes visual information crucial for expression-related processing at this stage or reduces the intensity of sensory input below the processing threshold. As a result, only the unfiltered faces in our studies presented sufficient sensory information for the differentiation to be reflected at the P1. More importantly, our P1 enhancement for unfiltered angry faces is consistent with studies reporting its sensitivity to threat-related faces [25,26,38,39]. The timing of this effect is in line with the involvement of a rapidly activating neural pathway, which has been hypothesised to operate via the amygdala in early threat detection [95,96,97]. A recent iEEG study reported stronger amygdala activity for BSF and LSF fearful faces beginning 74 ms post-stimulus onset, earlier than the fear-related response measured at the visual cortex [96]. Wang et al. [95] also reported amygdala activation to subliminal fearful faces within a window of 45–118 ms post-onset using iEEG methods. Moreover, patient studies of amygdala damage have demonstrated a loss of P1 modulations for threatening expressions [97]. Our findings are thus in keeping with the literature suggesting rapid emotional threat processing. However, since we observed no emotional modulations of LSF faces on the P1, we cannot conclude that a magnocellular pathway is involved. Consequently, we cannot determine whether looming motion and emotion interact at the level of subcortical structures such as the superior colliculus or the pulvinar.
The other important result was that endogenous attention impacted the motion effects on the P1. There seemed to be a dissociation between passive and active viewing in the P1 responses to motion direction. The P1 only showed looming interactions with angry expressions for unfiltered faces when viewed passively. When endogenous attention was directed to the faces, instead of the interaction in the looming condition, an overall P1 enhancement by receding motion was observed for both unfiltered and LSF faces. This indicates that spatial frequency filtering and endogenous attention differentially influence the initial processing of facial expressions and motion.
It appeared that during passive viewing, modulation via looming motion during the P1 relied on the perception of facial expressions; filtering attenuates P1 modulation by emotional expressions, and looming motion does not further interact with angry expressions. This finding aligns with our study using inverted faces [42]—the P1 showed no differentiation of angry faces nor further interaction with the looming motion to inverted faces. Since it is known that inversion impairs early recognition of faces and facial expressions, we took the P1 results to indicate that further enhancement by looming motion is dependent on the initial perception of facial expressions [42].
Endogenous attention, on the other hand, might enhance motion processing. The P1 enhancement to receding motion was observed for LSF-filtered faces, suggesting an increased ability to differentiate motion direction via the magnocellular channels. Alternatively, endogenous attention might activate the processing of facial aspects that happen to be associated with motion direction. For example, face recognition might improve with receding motion. Receding corresponds to a decrease in the retinal size of the stimulus; thus, receding faces should proportionately include more facial features inside the foveal visual field, resulting in better recognition [64,65,98]. Further to this, LSF face images were found to be more recognisable when presented at a visual angle of 2° compared with 10° [99]. The discrimination task indeed placed higher demands on face recognition. The enhanced P1 to faces moving away may thus have derived from an increase in neural activity linked to improved face recognition during receding motion, which also applies to LSF faces in this context.
The above interpretation further suggests that the initial processing of facial features is highly integrated with their dynamic aspects and that attentional properties modulate this processing. One could hypothesise, therefore, that automatic threat detection may occur under exogenous attention via the recruitment of a rapid subcortical neural pathway, but that when endogenous attention is explicitly required by the task, visual processing of the task-relevant aspects gains priority, leading to the recruitment of higher-level computational networks for face recognition, which benefit from receding motion. Nevertheless, the receding effect could also reflect a more general sensitivity present for all objects under focused attention. To clarify this, a direct comparison between faces and other objects across spatial frequency conditions, together with more specific neuroimaging and analysis methods, is required.

4.2. The N170 Component

As consistently reported in our previous studies, unfiltered angry faces globally enhanced the N170. The processes underlying this effect were previously hypothesised to be linked to rapid and coarse processing via magnocellular routes [64,65]. We therefore expected to observe an enhanced N170 to both BSF and LSF angry faces across attentional conditions. We found that the N170 enhancement for BSF angry faces occurred under endogenous attention as well. However, LSF angry faces only enhanced the N170 during passive viewing.
Our results showed that the N170 sensitivity to threatening facial expressions can be conveyed by LSF information under exogenous attention. This supports our hypothesis regarding the involvement of a magnocellular-based neural pathway in the N170 effect. However, our results also showed that endogenous attention to facial expressions disrupts this N170 sensitivity to LSF input. Thus, the processing of LSF faces appears vulnerable to mechanisms associated with endogenous attention, suggesting that the pathway for LSF input underlying the N170 may be cortical. This would be supported by fMRI and TMS data suggesting a dissociable cortical pathway for dynamic versus static faces, with a crucial role of the superior temporal sulcus (STS) in response to moving faces [100,101]. Integration of the recent literature shows that this “dynamic-face” pathway projects from the early visual cortex via motion-selective areas (V5/MT) into the STS [102]. The direct functional connection for motion perception between V1/V2 and V5/MT has been extensively described [65,103,104], and the earliest activation of these areas has been reported within 30–50 ms following motion onset [59]. Another line of evidence indicates that cortical regions, including the ventral prefrontal cortex (vPFC) and the insula, are also sensitive to emotional stimuli [105,106], and early differentiation in these areas can be achieved within 120–140 ms [105,107,108].
According to the above, it is possible that the processing of motion and facial expressions is integrated along this cortical “dynamic-face” pathway, leading to the modulation observed in the N170 window. Given that the rapid projection of motion input to the cortex (i.e., V5/MT) occurs mainly via magnocellular channels [64,65], the LSF information regarding moving angry faces may be conveyed via this pathway to V5/MT and the STS, although this interpretation of our current findings remains speculative. Regardless, the neural pathways involved in the processing of dynamic faces are likely modulated by endogenous attention. For example, using fMRI and diffusion tractography techniques, a recent study demonstrated significant attentional modulatory activity along pathways linking the posterior inferotemporal cortex with parietal and frontal attentional regions [109]. This provides structural evidence for our interpretation that voluntary attentional control is necessary for emotional differentiation of LSF faces at the N170.
Importantly, our results showed that directed endogenous attention did not attenuate N170 enhancement for BSF angry faces, suggesting that the broad range of visual information contained in the unfiltered faces may be less sensitive to decreased attentional control. Moreover, the involvement of multiple neural pathways in the early differentiation of emotional faces [108,110] may produce a greater robustness to variations in endogenous attention.
Interestingly, motion directions tended to modulate the N170 when faces were filtered in HSF or LSF bands. These modulations varied across filtering and attentional conditions. These results are unexpected and difficult to interpret. Arguably, they may be the result of multiple parallel pathways being activated for different characteristics of the stimuli. However, this question would require additional investigations to be clarified.

4.3. The P2 Component

In line with our previous study [37], we expected to see only motion sensitivity at the P2. As a negative correlation between P2 amplitude and stimulus saliency has been reported [111], and the saliency of a visual stimulus increases with its pictorial size [112], we expected to find a smaller P2 in the approaching condition. Our results were as predicted across spatial frequency and attention conditions. These results align with the literature, supporting the idea that approaching stimuli are more salient, potentially due to their increasing size. We showed that this saliency effect of motion is independent of spatial frequency, further confirming the sensitivity to motion irrespective of the stimulus content. Although the P2 amplitudes to HSF faces followed a pattern of approaching < receding < static conditions when viewed passively, only the difference between approaching and static faces was significant. Endogenous attention tended to enhance the differentiation between approaching and receding motion in HSF. This may be because HSF conveys fine details of the stimuli, and size changes associated with those details would be better processed with an engaged attentional focus. This would align with findings showing attentional effects at the P2 stage [24,113,114,115].

5. Conclusions

This study aimed to investigate the potential involvement of magnocellular pathways, which are thought to preferentially convey LSF inputs, in the processing of looming angry faces, and further explored the effect of endogenous attention on the neural responses. In two experiments, we used EEG/ERP techniques to measure the P1, N170, and P2 components in response to approaching or receding emotional faces filtered for low and high spatial frequency bands under passive-viewing and directed-attention tasks. Our findings indicated that the P1 was enhanced by BSF angry faces regardless of attentional control, while HSF and LSF faces did not elicit this effect. This suggests that looming motion and threatening expressions interact at the level of the P1 but that this effect does not rely on LSF or HSF inputs in isolation. The N170 showed an enhanced response to BSF angry faces irrespective of endogenous attention, whereas for LSF angry faces this enhancement was only evident during passive viewing. We tentatively interpret this as indicating the involvement of a cortical neural pathway underlying the N170 differentiation of LSF facial expressions. Overall, the results provide preliminary support for the involvement of multiple parallel neural pathways in the processing of looming emotional faces, with spatial frequency filtering and attentional control producing differential effects on these components.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/brainsci14010098/s1, Figures S1–S6: Grand Traces of Each Condition of the Reported ERPs; Tables S1–S11: Tables of the Descriptive Statistics of the Reported ERPs.

Author Contributions

Conceptualization, Z.Y. and A.J.P.; methodology, Z.Y.; software, Z.Y.; formal analysis, Z.Y. and E.M.; investigation, Z.Y.; resources, A.J.P.; data curation, Z.Y. and A.J.P.; writing—original draft preparation, Z.Y. and E.M.; writing—review and editing, A.J.P. and A.K.; visualisation, Z.Y.; supervision, A.J.P. and A.K.; project administration, Z.Y.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

Z.Y. was financed by a Ph.D. scholarship from the University of Queensland (UQ).

Institutional Review Board Statement

This study was approved by the University of Queensland Ethics Committee on 27 April 2020. Clearance number: 2020000843.

Informed Consent Statement

Written informed consent was obtained from all participants involved in the study.

Data Availability Statement

Data collected for this research are stored confidentially on UQ Research Data Manager (https://research.uq.edu.au/rmbt/uqrdm, accessed on 23 October 2023); they are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

Figure 1. A typical trial procedure, with examples of facial stimuli in each condition. * Event of Interest marks the 0 ms of each epoch in the analysis.
Figure 6. A typical trial procedure, with examples of facial stimuli in each condition.
Table 1. P2 Amplitudes—Motion at each SF band (Marginal Means, μV).

                BSF     LSF     HSF
Approaching     3.809   4.246   0.466
Receding        4.236   5.105   1.230