Article

Exploring the Role of Foveal and Extrafoveal Processing in Emotion Recognition: A Gaze-Contingent Study

by
Alejandro J. Estudillo
Department of Psychology, Bournemouth University, Poole House Talbot Campus, Poole BH12 5BB, UK
Behav. Sci. 2025, 15(2), 135; https://doi.org/10.3390/bs15020135
Submission received: 22 November 2024 / Revised: 17 January 2025 / Accepted: 23 January 2025 / Published: 26 January 2025

Abstract

Although the eye-tracking technique has been widely used to passively study emotion recognition, no studies have utilised this technique to actively manipulate eye-gaze strategies during the recognition of facial emotions. The present study aims to fill this gap by employing a gaze-contingent paradigm. Observers were asked to determine the emotion displayed by centrally presented upright or inverted faces. Under the window condition, only the fixated facial feature was available at a time, allowing for foveal processing alone. Under the mask condition, the fixated facial feature was masked while the rest of the face remained visible, thereby disrupting foveal processing but allowing for extrafoveal processing. These conditions were compared with a full-view condition. The results revealed that while both foveal and extrafoveal information typically contribute to emotion identification, at a standard conversation distance, the latter alone generally suffices for efficient emotion identification.

1. Introduction

Facial expressions are important biological and social cues that have evolved to facilitate communication among peers (Schmidt & Cohn, 2001; F. W. Smith & Schyns, 2009; Susskind et al., 2008). Humans use facial expressions to transmit information about their feelings, intentions, and the environment. For this reason, the accurate and rapid interpretation of a conspecific’s facial emotions is crucial for survival and social interaction (Darwin, 1872). Although, under central viewing conditions, the different types of emotions are recognised with a reasonable level of accuracy, even such optimal conditions are not always present when interacting with others. For example, it is common for people to cover specific facial features with items, such as surgical face masks, neck gaiters, or sunglasses, which can dramatically reduce emotion recognition accuracy (Grundmann et al., 2021; Kim et al., 2022; Wong & Estudillo, 2022).
The physical obstruction of facial features is not the only challenge that our visual system encounters in identifying emotions. For example, in daily life, we also frequently perceive faces at different eccentricities from our central vision, that is, extrafoveally. In addition, even when these faces are perceived in front of us at a normal conversation distance, not all the facial features fall within our central vision, and some of these features will be processed extrafoveally. Despite the importance of foveal and extrafoveal information in visual cognition, their independent contributions to emotion recognition remain poorly understood. Thus, in this study, we investigated the independent roles of foveal and extrafoveal information in emotion recognition.

1.1. Literature Review

Foveal or central vision refers to the area of the visual field that is preferentially specialised in the processing of fine detail and high-resolution visual information (Stewart et al., 2020). This area, which extends to approximately 2° of eccentricity, contains the highest density of cones. Extrafoveal vision refers to the area of the visual field outside foveal vision. This area includes parafoveal vision, which extends approximately from 2° to 4–5° of the visual field, and peripheral vision, which encompasses the rest of the visual field. Visual acuity and contrast sensitivity decline as the eccentricity from foveal vision increases (Larson & Loschky, 2009; Loschky et al., 2005, 2019). Importantly, foveal and extrafoveal vision differ not only quantitatively in terms of acuity and resolution but also qualitatively in visual processing and task optimisation (Atkinson & Smithson, 2020; Duran & Atkinson, 2021; Rosenholtz, 2016; Strasburger et al., 2011). These qualitative differences are particularly critical in face perception research. Studies have shown that face-specific mechanisms are predominantly supported by foveal processing (Canas-Bajo & Whitney, 2022; Goren & Wilson, 2006; but see McKone, 2004). For instance, Canas-Bajo and Whitney (2022) demonstrated that the recruitment of face-specific mechanisms was reduced at 6° eccentricity compared with foveal vision.
Although emotion detection (e.g., discriminating neutral faces from fearful faces) appears to be relatively preserved at eccentricities of up to 40°, in contrast to gender detection (Bayle et al., 2011), emotion recognition performance generally decreases as the eccentricity from central presentation increases, with this effect being modulated by the type of emotion. For example, a recent study showed that emotion discrimination performance was impaired when faces were presented at an eccentricity of 15° from central view (F. W. Smith & Rossit, 2018). Similar results were reported by an earlier study with synthetic faces, which revealed that the identification of angry, fearful, and sad facial expressions was impaired at an eccentricity of 8° compared with central presentation (Goren & Wilson, 2006). Interestingly, this effect of eccentricity was not observed with happy faces. This finding not only shows that the perception of facial expressions is differentially affected by visual eccentricity depending on the type of emotion, but it also demonstrates that the so-called happy-face advantage (i.e., the better identification of happy faces compared with other emotions; Calder et al., 2000; Calvo et al., 2014) remains robust even under challenging visual conditions.
However, much less is known about the role of parafoveal vision in emotion recognition. In fact, to the best of our knowledge, only two previous studies have explored the role of parafoveal vision in emotion recognition. In Calvo et al. (2010), a target emotional face and a scrambled face were briefly presented side by side, each at an eccentricity of 2.5° from the centre of the screen. After a backward mask, observers were presented with a probe word (e.g., happy), and they had to indicate if this word matched the previously presented face. The authors found that happy faces were identified faster compared with other emotions. Similarly, all emotions except happy faces were impaired when presented at an eccentricity of 2.5° from the centre compared with a central presentation (Calvo et al., 2014).
Altogether, the results of the reviewed studies suggest that emotion recognition is generally impaired when faces are presented extrafoveally (see also Zoghlami & Toscani, 2023). Nevertheless, these studies are not without limitations. For example, in Calvo and colleagues’ studies (Calvo et al., 2010, 2014), observers were asked to match a probe word to the previously presented face, which is an artificial way to assess emotion recognition that differs considerably from how people identify emotions in daily life. In addition, in the reviewed studies, parafoveal and peripheral processing were studied by comparing faces presented at a specific eccentricity with those presented at the centre of the screen. Although this is the standard approach to studying extrafoveal processing in other domains, such as reading (Balota & Rayner, 1983), perceptual learning (Beard et al., 1995), and object processing (Castelhano & Pereira, 2018), it presents two problems. First, from a methodological point of view, such a presentation confounds face position (i.e., eccentric from the screen centre) with the type of information (i.e., parafoveal or peripheral information). This is problematic because presenting faces eccentrically prevents observers from processing the faces using face-specific mechanisms (Goren & Wilson, 2006).
Second, this eccentric presentation is also problematic from a more ecological point of view. Although there is no doubt that recognising emotions in the periphery has clear evolutionary advantages, in daily life, emotions tend to be identified when we are directly interacting with others. Different facial features provide different diagnostic value in the recognition of different emotions (Barabanschikov, 2015; Calder et al., 2000; Calvo et al., 2014; Calvo & Marrero, 2009; Schurgin et al., 2014; M. L. Smith et al., 2005). For example, while the mouth seems to be a particularly important feature for identifying happy and surprised expressions, the eyes seem to be more relevant for the identification of sadness and fear (e.g., Calvo et al., 2014; Wegrzyn et al., 2017). Importantly, in face-to-face social interactions, not all the facial features of a conspecific fall within the foveal area, and some features are instead processed parafoveally or even peripherally. In fact, during a typical interaction at approximately 58 cm from an interlocutor, when observers maintain eye contact, the lower part of the face will fall within the parafoveal area, and farther-down features, such as the mouth, will fall within the peripheral area (Atkinson & Smithson, 2020; Mayrand et al., 2023). It is also noteworthy that although all the previously reviewed studies have demonstrated a clear advantage of central vision over parafoveal and peripheral vision in recognising emotions (Bayle et al., 2011; Calvo et al., 2010, 2014; Goren & Wilson, 2006; F. W. Smith & Rossit, 2018), none of these studies has truly isolated central vision. As previously mentioned, foveal or central vision corresponds to only the central 2° of the visual field. Consequently, in these studies, the central presentation of faces incorporates both foveal and extrafoveal information. Thus, the unique contributions of foveal and extrafoveal information to emotion recognition remain unknown.
The eye-tracking technique has been widely used in face-processing research to investigate observers’ gaze behaviour while performing different tasks (Althoff & Cohen, 1999; Bindemann, 2010; Lee et al., 2022; Peterson & Eckstein, 2012; Williams & Henderson, 2007). Although influenced by several factors (Yitzhak et al., 2021), emotion recognition research using eye tracking suggests that different facial expressions are associated with specific fixation patterns (Barabanschikov, 2015; Eisenbarth & Alpers, 2011; Paparelli et al., 2024; Schurgin et al., 2014). For example, while the eye region tends to be fixated more frequently and for longer durations than the mouth for angry and sad faces (Eisenbarth & Alpers, 2011; Schurgin et al., 2014), the opposite pattern is observed for happy faces (Beaudry et al., 2014; Eisenbarth & Alpers, 2011).
Although informative, using fixation patterns and other eye-tracking measures as dependent variables only provides a descriptive account of the relevance of different parts of the face to identifying emotions. Interestingly, the eye-tracking technique can also be used as an independent variable by manipulating the amount and type of available information through gaze-contingent paradigms (Billino et al., 2018; Estudillo & Bindemann, 2017; Hagen et al., 2023; McConkie & Rayner, 1975; Miellet & Caldara, 2012; Van Belle et al., 2010a, 2010b). For example, in Van Belle and colleagues’ study (Van Belle et al., 2010b), observers were first presented with a full-view target face. This face was then followed by two test faces presented side by side, and observers had to indicate which of these two faces corresponded to the target face. The paradigm comprised three experimental conditions. Under the mask condition, an oval mask hid the fixated facial feature but left the rest of the face available. In contrast, under the window condition, only the fixated facial feature was visible, forcing observers to rely only on this small area for identification. Both the mask and the window moved in a gaze-contingent manner. These two conditions were compared with a control condition in which there was no visual restriction. Compared with the control and mask conditions, performance decreased dramatically under the window condition (Evers et al., 2018; Van Belle et al., 2010a, 2010b; Verfaillie et al., 2014).

1.2. The Present Study

By using an adaptation of the gaze-contingent paradigm, the present study aims to dissociate the roles of foveal and extrafoveal information in emotion recognition. Observers were asked to identify angry, fearful, happy, sad, and surprised facial expressions presented under three conditions: a full-view control condition; a gaze-contingent oval mask condition that blocked foveal information while allowing for extrafoveal information; and a gaze-contingent window condition that blocked extrafoveal information while allowing only for foveal information. Faces were also presented in upright and inverted orientations. This manipulation was included to explore whether the effects of foveal and extrafoveal information on emotion recognition are face-specific or reflect more general visual processes (Rossion, 2008; Van Belle et al., 2010b). If foveal information is more important for the recognition of emotions than extrafoveal information, we expect to find better performance under the window condition compared with the mask condition. On the contrary, if extrafoveal information is more important for emotion recognition, we expect to find better performance under the mask compared with the window condition. Finally, as some research has reported that happy faces are identified with similar levels of accuracy when the faces are presented centrally and parafoveally (Calvo et al., 2010, 2014), the differences between the control and mask conditions might be smaller for happy faces compared with other emotions.

2. Materials and Methods

2.1. Design

A within-subjects design was utilised. The independent variables were facial emotion (anger, fear, happiness, sadness, and surprise), viewing condition (full view, mask, and window), and orientation (upright and inverted). The dependent variables were recognition accuracy and response times. To avoid potential speed–accuracy trade-offs, these measures were combined into rate-correct scores (RCSs) (Woltz & Was, 2006), an integrative measure of the efficiency with which cognitive tasks are solved (Vandierendonck, 2017). The RCS is calculated by dividing the number of correct trials by the sum of the reaction times for both correct and incorrect trials, and it therefore represents the number of correct responses per second. Thus, higher RCSs indicate greater efficiency in solving the task.
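As an illustration of this measure, the RCS for a given participant and condition can be computed directly from trial-level accuracy and response times. The following is a minimal sketch rather than the analysis code used in the study; the data values and column names are hypothetical.

```python
import pandas as pd

def rate_correct_score(correct, rt_seconds):
    """Rate-correct score (RCS): number of correct trials divided by the
    total time spent on all trials (correct and incorrect), i.e., the
    number of correct responses per second."""
    return sum(correct) / sum(rt_seconds)

# Hypothetical trial-level data for one participant in one condition
trials = pd.DataFrame({
    "correct": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 1 = correct response
    "rt": [1.42, 1.10, 2.05, 0.98, 1.37, 1.88, 1.21, 1.05, 1.30, 1.12],  # seconds
})

rcs = rate_correct_score(trials["correct"], trials["rt"])
print(f"RCS = {rcs:.2f} correct responses per second")
```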

2.2. Participants

Thirty Malaysian Chinese undergraduate students from the University of Nottingham Malaysia (15 females; Mage = 20.5 years, SDage = 1.70) took part in this study. All participants reported normal or corrected-to-normal vision.

2.3. Stimuli and Apparatus

A total of 20 identities (10 females) were taken from the Taiwanese Facial Expression Image Database (TFEID) (Chen & Yen, 2007). Each of the faces displayed the following emotions: anger, fear, happiness, sadness, and surprise. The external features (e.g., hair and ears) were cropped out of the photographs to direct attention towards inner facial features (e.g., eyebrows, eyes, nose, and mouth). The stimulus images were 480 by 600 pixels (17.93° by 22.41° at a distance of 75 cm from the screen).
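For reference, stimulus sizes expressed in degrees of visual angle are conventionally derived from the physical extent of the image on the screen and the viewing distance. A standard formulation (stated here for completeness, not taken from the original methods) is

$$\theta = 2\arctan\left(\frac{s}{2d}\right),$$

where $s$ is the physical size of the stimulus on the screen (obtained from its size in pixels and the monitor's pixel pitch) and $d$ is the viewing distance (75 cm in the present study).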
The face stimuli were displayed on a 24″ BenQ monitor (driven by Microsoft Windows 7 Professional, version 6.1.7601) with a spatial resolution of 1920 by 1080 pixels. The experiment was programmed in SR Research Experiment Builder (version 1.10.1630). Eye movements were tracked with a desktop-mounted EyeLink 1000+ eye-tracking system at a 1000 Hz sampling rate, positioned 75 cm from the participant. To minimise head movements, participants were asked to place their heads on a chin and head rest.

2.4. Procedure

Participants were asked to sit in front of the computer and eye tracker in a dark, enclosed room. The chin and head rest and the chair were adjusted for each participant. Verbal and written instructions were given to explain the emotion identification task. Participants were instructed to press the response key corresponding to the perceived emotion: ‘d’ for anger, ‘f’ for fear, ‘g’ for happiness, ‘h’ for sadness, and ‘j’ for surprise. The standard nine-dot EyeLink calibration was conducted, followed by the validation procedure.
Each trial began with a central drift correction, which was followed by a fixation cross on the left side of the screen. Upon detection of the fixation, a target face appeared in the centre of the screen. The stimuli were presented until a response was made, either in full view, with a gaze-contingent mask, or with a gaze-contingent window. Under the mask condition, the fixated face part was covered by an oval mask, but the rest of the face remained uncovered. In contrast, under the window condition, only the fixated face part was visible through an oval window. Both the mask and the window were gaze-contingent and measured 127 by 93 pixels (4.75° × 3.62° at a distance of 75 cm from the screen; see Figure 1). A blank screen appeared for 1000 ms after the response.
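To make the viewing manipulation concrete, the logic of the gaze-contingent mask and window can be sketched as follows. This is an illustrative sketch only: the study itself was implemented in SR Research Experiment Builder with an EyeLink 1000+, whereas here a generic tracker is assumed to be polled once per screen refresh, and `get_current_gaze()` is a hypothetical placeholder.

```python
import numpy as np

MASK_W, MASK_H = 127, 93  # oval size in pixels (approx. 4.75 x 3.62 deg at 75 cm)

def elliptical_region(shape, cx, cy, width, height):
    """Boolean map of the pixels falling inside an ellipse centred on the
    current gaze position (cx, cy)."""
    ys, xs = np.ogrid[:shape[0], :shape[1]]
    return ((xs - cx) / (width / 2)) ** 2 + ((ys - cy) / (height / 2)) ** 2 <= 1.0

def render_frame(face_img, gaze_xy, condition, bg_value=128):
    """Return the image to display on the current frame, given the latest gaze sample.
    'mask'   -> occlude the fixated region (extrafoveal information only)
    'window' -> show only the fixated region (foveal information only)
    'full'   -> unrestricted view (control condition)"""
    frame = face_img.copy()
    inside = elliptical_region(face_img.shape[:2], gaze_xy[0], gaze_xy[1], MASK_W, MASK_H)
    if condition == "mask":
        frame[inside] = bg_value    # hide the currently fixated facial feature
    elif condition == "window":
        frame[~inside] = bg_value   # hide everything except the fixated feature
    return frame

# On each screen refresh, one would poll the tracker and redraw, e.g.:
# gaze_xy = get_current_gaze()  # hypothetical eye-tracker call returning (x, y)
# frame = render_frame(face, gaze_xy, condition="mask")
```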
There were 60 trials per emotion. For each emotion, half of the trials were presented in upright orientation, and the other half were presented upside down. For each orientation condition, 10 trials were presented under each of the viewing conditions. Trials were presented in random order, and the allocation of the identities and the emotions to each viewing condition was counterbalanced across participants.
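The trial structure described above (5 emotions × 2 orientations × 3 viewing conditions × 10 trials per cell = 300 trials) could be generated along the following lines. This is only a schematic sketch; in particular, the counterbalancing of identities and emotions across viewing conditions is approximated here by a simple participant-dependent rotation rather than the full counterbalancing scheme used in the study.

```python
import itertools
import random

EMOTIONS = ["anger", "fear", "happiness", "sadness", "surprise"]
ORIENTATIONS = ["upright", "inverted"]
VIEWING = ["full", "mask", "window"]
IDENTITIES = list(range(20))  # 20 identities from the TFEID set

def build_trial_list(participant_id, reps_per_cell=10, seed=None):
    """Return 300 randomly ordered trials: 10 per emotion x orientation x
    viewing-condition cell, with identities rotated across participants."""
    rng = random.Random(seed)
    trials = []
    for emotion, orientation, viewing in itertools.product(EMOTIONS, ORIENTATIONS, VIEWING):
        for rep in range(reps_per_cell):
            identity = IDENTITIES[(rep + participant_id) % len(IDENTITIES)]
            trials.append({"emotion": emotion, "orientation": orientation,
                           "viewing": viewing, "identity": identity})
    rng.shuffle(trials)  # random trial order within the session
    return trials

session = build_trial_list(participant_id=3, seed=42)
print(len(session), session[0])
```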

3. Results

Figure 2 shows the mean RCSs across the orientation and viewing conditions. A 2 (orientation: upright, inverted) × 3 (viewing condition: control, mask, window) repeated measures ANOVA was run. The visual inspection of Q-Q plots indicated that the residuals were normally distributed. When the assumption of sphericity was violated, Greenhouse–Geisser corrections were applied to adjust the degrees of freedom. The ANOVA revealed the main effects of orientation [F(1, 28) = 71.25, p < 0.001, η2p = 0.71], viewing condition [F(2, 56) = 257.28, p < 0.001, η2p = 0.90], and an interaction between both factors [F(1.55, 43.40) = 7.41, p < 0.01, η2p = 0.21]. To explore this interaction, we conducted separate ANOVAs for each orientation. For upright faces, we found a main effect of the viewing condition [F(2, 56) = 192.40, p < 0.001, η2p = 0.87]. Post hoc analyses (Holm-corrected) revealed similar performance under the control and mask conditions (p = 0.38) but better performance under these two conditions compared with the window condition (both ps < 0.001, ds ≥ 2.68). For inverted faces, the main effect of the viewing condition was also significant [F(2, 56) = 158.10, p < 0.001, η2p = 0.85]. Post hoc analyses revealed better performance under the control compared with both the mask and window conditions (both ps < 0.001, ds ≥ 0.58) and under the mask compared with the window condition (p < 0.001, d = 2.04).
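For readers who wish to reproduce this type of analysis, a 2 × 3 repeated measures ANOVA on the RCS data can be run with standard statistical software. The sketch below uses Python's statsmodels as one possible option (the software used for the original analysis is not specified here), with hypothetical column names for the long-format data.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def run_rm_anova(data: pd.DataFrame):
    """2 (orientation) x 3 (viewing condition) repeated measures ANOVA on RCS.
    Expects one row per participant, orientation, and viewing condition."""
    model = AnovaRM(data, depvar="rcs", subject="subject",
                    within=["orientation", "viewing"])
    return model.fit()

# Hypothetical usage with long-format data (30 participants x 2 x 3 = 180 rows):
# data = pd.read_csv("rcs_by_condition.csv")  # columns: subject, orientation, viewing, rcs
# print(run_rm_anova(data))  # F and p values for both main effects and the interaction
```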
As previous research has shown differential roles of central and parafoveal information across different emotions (Calvo et al., 2010, 2014), in the second part of our analysis, we also included the factor emotion (see Figure 3). A 2 (orientation: upright, inverted) × 3 (viewing condition: control, mask, window) × 5 (emotion: anger, fear, happiness, sadness, surprise) repeated measures ANOVA revealed the main effects of orientation [F(1, 28) = 50.20, p < 0.001, η2p = 0.64], viewing condition [F(2, 56) = 304.77, p < 0.001, η2p = 0.91], and emotion [F(2.24, 62.94) = 75.49, p < 0.001, η2p = 0.79]. We also found two-way interactions between orientation and viewing condition [F(2, 56) = 5.62, p < 0.01, η2p = 0.16], orientation and emotion [F(2.94, 82.35) = 5.13, p < 0.01, η2p = 0.15], and viewing condition and emotion [F(4.80, 134.61) = 20.97, p < 0.001, η2p = 0.42]. Finally, the three-way interaction among these factors was also significant [F(8, 224) = 2.87, p < 0.01, η2p = 0.09].
Based on previous research (Calvo et al., 2010, 2014), we hypothesised that for happy faces, the differences between the control and mask conditions might be smaller compared with other emotions. To explore the three-way interaction, we conducted separate ANOVAs for each emotion (see Table 1). For fear, the ANOVA revealed main effects of orientation and viewing condition and an interaction between both factors. This interaction seems to reflect better performance under the mask than the control condition in upright trials but similar performance under the control and mask conditions in inverted trials. This pattern was confirmed by separate ANOVAs for upright [F(2, 56) = 38.17, p < 0.001, η2p = 0.57] and inverted [F(1.50, 42.18) = 34.71, p < 0.001, η2p = 0.55] trials. For upright trials, post hoc t-tests (Holm-corrected) revealed better performance under the mask than the control or window condition (both ps < 0.001, ds ≥ 0.80) and better performance under the control compared with the window condition (p < 0.001, d = 0.94). For inverted trials, performance under the control and mask conditions was better compared with the window condition (both ps < 0.001, ds ≥ 1.31). However, there were no differences between the control and mask conditions (p = 0.80). In addition, separate t-tests for each viewing condition revealed similar performance for upright and inverted trials under the control condition (p = 0.34) but better performance for upright than inverted trials under the mask and window conditions (both ps < 0.001, ds ≥ 0.65).
For sadness, the ANOVA revealed main effects of orientation and viewing condition and an interaction between both factors. To explore this interaction, we first conducted separate ANOVAs for upright [F(2, 56) = 119.90, p < 0.001, η2p = 0.81] and inverted trials [F(2, 56) = 78.26, p < 0.001, η2p = 0.73]. Post hoc analyses revealed that in both upright and inverted trials, participants performed better under the control compared with the mask and window conditions and under the mask compared with the window condition (all ps < 0.001, ds ≥ 1.53). In addition, separate t-tests for each viewing condition revealed better performance for upright than inverted faces under the control and mask conditions (both ps < 0.001, ds ≥ 0.82) and under the window condition, although this effect was somewhat smaller in the latter (p < 0.001, d = 0.40).
For anger, happiness, and surprise, the main effects of the viewing condition were significant. A post hoc t-test (Holm-corrected) revealed better performance under the control than the mask condition for happiness (p < 0.01, d = 0.45) but similar performance across these conditions for anger and surprise (both ps ≥ 0.09). Across these three emotions, performance was better under the control and mask conditions compared with the window condition (all ps < 0.001, ds ≥ 1.40).
The main effect of orientation was significant for anger and happiness, showing that performance was better in upright than inverted trials. However, performance was similar in upright and inverted trials for surprise. The interactions between orientation and viewing condition did not reach statistical significance for any of these emotions.
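The Holm-corrected pairwise comparisons reported above can likewise be computed with general-purpose tools. The following sketch, using hypothetical column names, illustrates the procedure of running paired t-tests between viewing conditions and applying the Holm correction; it is not the original analysis script.

```python
from itertools import combinations
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

def holm_pairwise(data: pd.DataFrame, conditions, value_col="rcs",
                  cond_col="viewing", subject_col="subject"):
    """Paired t-tests between all pairs of conditions, Holm-corrected.
    Expects long-format data with one value per subject and condition."""
    pairs, pvals = [], []
    for a, b in combinations(conditions, 2):
        x = data.loc[data[cond_col] == a].sort_values(subject_col)[value_col].to_numpy()
        y = data.loc[data[cond_col] == b].sort_values(subject_col)[value_col].to_numpy()
        t_stat, p = ttest_rel(x, y)
        pairs.append((a, b))
        pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, method="holm")
    return list(zip(pairs, p_adj, reject))

# Hypothetical usage for the upright trials:
# results = holm_pairwise(upright_data, ["full", "mask", "window"])
```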

4. Discussion

By using a gaze-contingent paradigm, the present study explored the effect of foveal and extrafoveal information on emotion recognition. Observers were asked to identify facial emotions in full view, which allows for both foveal and extrafoveal information (i.e., control condition), when only foveal information was available (i.e., window condition), and when only extrafoveal information was available (i.e., mask condition). Overall, the results show that for upright faces, performance was similar under the control and mask conditions. However, for inverted faces, performance was better under the control condition compared with the mask and window conditions and under the mask compared with the window condition. These patterns of results are similar to those found in face identification tasks (Van Belle et al., 2010b). Interestingly, when the type of emotion was included in the analysis, we found that for happy, angry, and surprised faces, the orientation effect was independent of the viewing condition. In other words, for these emotions, performance was higher under the control and mask conditions compared with the window condition, with this effect being equivalent for upright and inverted faces. If it is assumed that the window and mask conditions disrupt holistic and featural processing, respectively (e.g., Van Belle et al., 2010b), this pattern suggests that the recognition of happy, angry, and surprised expressions likely relies on a combination of both featural and holistic processing. In fact, previous research has shown that perceiving the mouth is enough to identify happy faces (Calvo et al., 2014).
Although we found that isolating foveal information impaired emotion recognition across the different emotions, isolating extrafoveal information had more variable effects, impairing only the recognition of sad faces. Thus, our results suggest that at a normal conversation distance and when faces are presented centrally, extrafoveal information is generally sufficient for emotion recognition. Our results are in agreement with previous studies of patients with age-related macular degeneration, a major visual impairment affecting central vision. Although these patients showed substantial difficulties in detecting whether a face displayed an expression at all, they were still able to identify emotions with a level of performance close to that of age-matched controls (Boucart et al., 2008). However, our findings contrast with previous reports that all facial emotions except happy faces were impaired when presented parafoveally (Calvo et al., 2014). Two different reasons could explain these differences. First, Calvo et al. (2014) analysed accuracy (measured with A′; Snodgrass & Corwin, 1988) and RTs separately, and the advantage for parafoveally presented happy faces was only found in accuracy. The problem with this approach is that it does not account for potential speed–accuracy trade-offs. To address this issue, in our study, we used the RCS as the dependent variable, as it combines reaction times and accuracy. Second, and perhaps more importantly, Calvo et al. (2014) manipulated parafoveal information by presenting faces eccentrically from the centre of the screen. This approach confounds face position with the type of information and prevents observers from processing faces using face-specific mechanisms (Goren & Wilson, 2006).
Our study is not without limitations. First, although our findings suggest that extrafoveal information suffices for emotion recognition, the individual contributions of parafoveal and peripheral information remain unknown, as our manipulation disrupted both. Based on previous findings (Calvo et al., 2010, 2014), we tentatively suggest that parafoveal information is more important than peripheral information at a normal conversation distance, as more facial features would fall within the former. This issue, however, could be experimentally tested in the future by adapting the gaze-contingent paradigm to individually disrupt either parafoveal or peripheral information. In addition, while facial expressions are universally identified, research has shown important cultural differences in general visual strategies. In fact, it has been suggested that, compared with Western individuals, people from Asian backgrounds exhibit a stronger bias towards a more global distribution of visual attention (Ji et al., 2000; McKone et al., 2010), and these differences also extend to the processing of faces (Blais et al., 2008, 2021). Thus, it is possible that the effects of foveal and extrafoveal information differ across cultures. In this study, all participants were of Southeast Asian origin. Given their tendency towards more global information processing, it is possible that the mask effect may be reduced compared with Western participants, while the window effect could be amplified. Future research could investigate this by employing a gaze-contingent paradigm to directly compare these cultural groups. Finally, while common in cognitive psychology research, we acknowledge that using static, cropped facial images may limit the ecological validity of our findings.
In conclusion, the results of this study indicate that at a normal conversation distance, isolating extrafoveal information has minimal impact on the recognition of different emotions. However, isolating foveal information significantly impairs identification. These findings suggest that extrafoveal information is generally sufficient for accurate emotion recognition.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of the University of Nottingham Malaysia (ethics ID: SB06082018, date of approval: 7 August 2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original data presented in the study are openly available in OSF at 10.17605/OSF.IO/UX63E.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Althoff, R. R., & Cohen, N. J. (1999). Eye-movement-based memory effect: A reprocessing effect in face perception. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(4), 997–1010. [Google Scholar] [CrossRef] [PubMed]
  2. Atkinson, A. P., & Smithson, H. E. (2020). The impact on emotion classification performance and gaze behavior of foveal versus extrafoveal processing of facial features. Journal of Experimental Psychology: Human Perception and Performance, 46(3), 292–312. [Google Scholar] [CrossRef]
  3. Balota, D. A., & Rayner, K. (1983). Parafoveal visual information and semantic contextual constraints. Journal of Experimental Psychology: Human Perception and Performance, 9(5), 726–738. [Google Scholar] [CrossRef] [PubMed]
  4. Barabanschikov, V. A. (2015). Gaze dynamics in the recognition of facial expressions of emotion. Perception, 44(8–9), 1007–1019. [Google Scholar] [CrossRef] [PubMed]
  5. Bayle, D. J., Schoendorff, B., Hénaff, M.-A., & Krolak-Salmon, P. (2011). Emotional facial expression detection in the peripheral visual field. PLoS ONE, 6(6), e21584. [Google Scholar] [CrossRef]
  6. Beard, B. L., Levi, D. M., & Reich, L. N. (1995). Perceptual learning in parafoveal vision. Vision Research, 35(12), 1679–1690. [Google Scholar] [CrossRef] [PubMed]
  7. Beaudry, O., Roy-Charland, A., Perron, M., Cormier, I., & Tapp, R. (2014). Featural processing in recognition of emotional facial expressions. Cognition and Emotion, 28(3), 416–432. [Google Scholar] [CrossRef] [PubMed]
  8. Billino, J., van Belle, G., Rossion, B., & Schwarzer, G. (2018). The nature of individual face recognition in preschool children: Insights from a gaze-contingent paradigm. Cognitive Development, 47, 168–180. [Google Scholar] [CrossRef]
  9. Bindemann, M. (2010). Scene and screen center bias early eye movements in scene viewing. Vision Research, 50(23), 2577–2587. [Google Scholar] [CrossRef]
  10. Blais, C., Jack, R. E., Scheepers, C., Fiset, D., & Caldara, R. (2008). Culture shapes how we look at faces. PLoS ONE, 3(8), e3022. [Google Scholar] [CrossRef] [PubMed]
  11. Blais, C., Linnell, K. J., Caparos, S., & Estéphan, A. (2021). Cultural differences in face recognition and potential underlying mechanisms. Frontiers in Psychology, 12, 627026. [Google Scholar] [CrossRef] [PubMed]
  12. Boucart, M., Dinon, J.-F., Despretz, P., Desmettre, T., Hladiuk, K., & Oliva, A. (2008). Recognition of facial emotion in low vision: A flexible usage of facial features. Visual Neuroscience, 25(4), 603–609. [Google Scholar] [CrossRef] [PubMed]
  13. Calder, A. J., Young, A. W., Keane, J., & Dean, M. (2000). Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance, 26(2), 527–551. [Google Scholar] [CrossRef] [PubMed]
  14. Calvo, M. G., Fernández-Martín, A., & Nummenmaa, L. (2014). Facial expression recognition in peripheral versus central vision: Role of the eyes and the mouth. Psychological Research, 78(2), 180–195. [Google Scholar] [CrossRef] [PubMed]
  15. Calvo, M. G., & Marrero, H. (2009). Visual search of emotional faces: The role of affective content and featural distinctiveness. Cognition and Emotion, 23(4), 782–806. [Google Scholar] [CrossRef]
  16. Calvo, M. G., Nummenmaa, L., & Avero, P. (2010). Recognition advantage of happy faces in extrafoveal vision: Featural and affective processing. Visual Cognition, 18(9), 1274–1297. [Google Scholar] [CrossRef]
  17. Canas-Bajo, T., & Whitney, D. (2022). Relative tuning of holistic face processing towards the fovea. Vision Research, 197, 108049. [Google Scholar] [CrossRef] [PubMed]
  18. Castelhano, M. S., & Pereira, E. J. (2018). The influence of scene context on parafoveal processing of objects. Quarterly Journal of Experimental Psychology, 71(1), 229–240. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, L. F., & Yen, Y. S. (2007). Taiwanese facial expression image database. Brain Mapping Laboratory, Institute of Brain Science, National Yang-Ming University. [Google Scholar]
  20. Darwin, C. (1872). The expression of the emotions in man and animals (pp. vi, 374). John Murray. [Google Scholar] [CrossRef]
  21. Duran, N., & Atkinson, A. P. (2021). Foveal processing of emotion-informative facial features. PLoS ONE, 16(12), e0260814. [Google Scholar] [CrossRef]
  22. Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion, 11(4), 860–865. [Google Scholar] [CrossRef] [PubMed]
  23. Estudillo, A. J., & Bindemann, M. (2017). Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition? Quarterly Journal of Experimental Psychology, 70(5). [Google Scholar] [CrossRef] [PubMed]
  24. Evers, K., Van Belle, G., Steyaert, J., Noens, I., & Wagemans, J. (2018). Gaze-contingent display changes as new window on analytical and holistic face perception in children with autism spectrum disorder. Child Development, 89(2), 430–445. [Google Scholar] [CrossRef] [PubMed]
  25. Goren, D., & Wilson, H. R. (2006). Quantifying facial expression recognition across viewing conditions. Vision Research, 46(8), 1253–1262. [Google Scholar] [CrossRef] [PubMed]
  26. Grundmann, F., Epstude, K., & Scheibe, S. (2021). Face masks reduce emotion-recognition accuracy and perceived closeness. PLoS ONE, 16(4), e0249792. [Google Scholar] [CrossRef] [PubMed]
  27. Hagen, S., Vuong, Q. C., Jung, L., Chin, M. D., Scott, L. S., & Tanaka, J. W. (2023). A perceptual field test in object experts using gaze-contingent eye tracking. Scientific Reports, 13(1), 11437. [Google Scholar] [CrossRef] [PubMed]
  28. Ji, L. J., Peng, K., & Nisbett, R. E. (2000). Culture, control, and perception of relationships in the environment. Journal of Personality and Social Psychology, 78(5), 943–955. [Google Scholar] [CrossRef]
  29. Kim, G., Seong, S. H., Hong, S.-S., & Choi, E. (2022). Impact of face masks and sunglasses on emotion recognition in South Koreans. PLoS ONE, 17(2), e0263466. [Google Scholar] [CrossRef]
  30. Larson, A. M., & Loschky, L. (2009). The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9(10), 6. [Google Scholar] [CrossRef] [PubMed]
  31. Lee, J. K. W., Janssen, S. M. J., & Estudillo, A. J. (2022). A more featural based processing for the self-face: An eye-tracking study. Consciousness and Cognition, 105, 103400. [Google Scholar] [CrossRef] [PubMed]
  32. Loschky, L., Szaffarczyk, S., Beugnet, C., Young, M. E., & Boucart, M. (2019). The contributions of central and peripheral vision to scene-gist recognition with a 180° visual field. Journal of Vision, 19(5), 15. [Google Scholar] [CrossRef] [PubMed]
  33. Loschky, L., Yang, J., Miller, M., & McConkie, G. (2005). The limits of visual resolution in natural scene viewing. Visual Cognition, 12(6), 1057–1092. [Google Scholar] [CrossRef]
  34. Mayrand, F., Capozzi, F., & Ristic, J. (2023). A dual mobile eye tracking study on natural eye contact during live interactions. Scientific Reports, 13(1), 11385. [Google Scholar] [CrossRef]
  35. McConkie, G. W., & Rayner, K. (1975). The span of the effective stimulus during a fixation in reading. Perception & Psychophysics, 17(6), 578–586. [Google Scholar] [CrossRef]
  36. McKone, E. (2004). Isolating the special component of face recognition: Peripheral identification and a Mooney face. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1), 181. [Google Scholar] [CrossRef]
  37. McKone, E., Aimola Davies, A., Fernando, D., Aalders, R., Leung, H., Wickramariyaratne, T., & Platow, M. J. (2010). Asia has the global advantage: Race and visual attention. Vision Research, 50(16), 1540–1549. [Google Scholar] [CrossRef] [PubMed]
  38. Miellet, S., & Caldara, R. (2012). When East meets West: Gaze-contingent Blindspots abolish cultural diversity in eye movements for faces. Journal of Vision, 5(2), 703. [Google Scholar] [CrossRef]
  39. Paparelli, A., Sokhn, N., Stacchi, L., Coutrot, A., Richoz, A.-R., & Caldara, R. (2024). Idiosyncratic fixation patterns generalize across dynamic and static facial expression recognition. Scientific Reports, 14(1), 16193. [Google Scholar] [CrossRef]
  40. Peterson, M. F., & Eckstein, M. P. (2012). Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences of the United States of America, 109(48), E3314–E3323. [Google Scholar] [CrossRef]
  41. Rosenholtz, R. (2016). Capabilities and limitations of peripheral vision. Annual Review of Vision Science, 2(1), 437–457. [Google Scholar] [CrossRef]
  42. Rossion, B. (2008). Picture-plane inversion leads to qualitative changes of face perception. Acta Psychologica, 128(2), 274–289. [Google Scholar] [CrossRef]
  43. Schmidt, K. L., & Cohn, J. F. (2001). Human facial expressions as adaptations: Evolutionary questions in facial expression research. American Journal of Physical Anthropology, 116(S33), 3–24. [Google Scholar] [CrossRef]
  44. Schurgin, M. W., Nelson, J., Iida, S., Ohira, H., Chiao, J. Y., & Franconeri, S. L. (2014). Eye movements during emotion recognition in faces. Journal of Vision, 14(13), 14. [Google Scholar] [CrossRef] [PubMed]
  45. Smith, F. W., & Rossit, S. (2018). Identifying and detecting facial expressions of emotion in peripheral vision. PLoS ONE, 13(5), e0197160. [Google Scholar] [CrossRef] [PubMed]
  46. Smith, F. W., & Schyns, P. G. (2009). Smile through your fear and sadness: Transmitting and identifying facial expression signals over a range of viewing distances. Psychological Science, 20(10), 1202–1208. [Google Scholar] [CrossRef]
  47. Smith, M. L., Cottrell, G. W., Gosselin, F., & Schyns, P. G. (2005). Transmitting and decoding facial expressions. Psychological Science, 16(3), 184–189. [Google Scholar] [CrossRef] [PubMed]
  48. Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117(1), 34–50. [Google Scholar] [CrossRef]
  49. Stewart, E. E. M., Valsecchi, M., & Schütz, A. C. (2020). A review of interactions between peripheral and foveal vision. Journal of Vision, 20(12), 2. [Google Scholar] [CrossRef]
  50. Strasburger, H., Rentschler, I., & Jüttner, M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11(5), 13. [Google Scholar] [CrossRef] [PubMed]
  51. Susskind, J. M., Lee, D. H., Cusi, A., Feiman, R., Grabski, W., & Anderson, A. K. (2008). Expressing fear enhances sensory acquisition. Nature Neuroscience, 11(7), 843–850. [Google Scholar] [CrossRef]
  52. Van Belle, G., De Graef, P., Verfaillie, K., Busigny, T., & Rossion, B. (2010a). Whole not hole: Expert face recognition requires holistic perception. Neuropsychologia, 48(9), 2620–2629. [Google Scholar] [CrossRef]
  53. Van Belle, G., Verfaillie, K., Rossion, B., & Lefèvre, P. (2010b). Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation. Journal of Vision, 10, 10. [Google Scholar] [CrossRef]
  54. Vandierendonck, A. (2017). A comparison of methods to combine speed and accuracy measures of performance: A rejoinder on the binning procedure. Behavior Research Methods, 49(2), 653–673. [Google Scholar] [CrossRef] [PubMed]
  55. Verfaillie, K., Huysegems, S., De Graef, P., & Van Belle, G. (2014). Impaired holistic and analytic face processing in congenital prosopagnosia: Evidence from the eye-contingent mask/window paradigm. Visual Cognition, 22(3–4), 503–521. [Google Scholar] [CrossRef]
  56. Wegrzyn, M., Vogt, M., Kireclioglu, B., Schneider, J., & Kissler, J. (2017). Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS ONE, 12(5), e0177239. [Google Scholar] [CrossRef]
  57. Williams, C. C., & Henderson, J. M. (2007). The face inversion effect is not a consequence of aberrant eye movements. Memory & Cognition, 35(8), 1977–1985. [Google Scholar]
  58. Woltz, D. J., & Was, C. A. (2006). Availability of related long-term memory during and after attention focus in working memory. Memory & Cognition, 34, 668–684. [Google Scholar] [CrossRef]
  59. Wong, H. K., & Estudillo, A. J. (2022). Face masks affect emotion categorisation, age estimation, recognition, and gender classification from faces. Cognitive Research: Principles and Implications, 7(1), 91. [Google Scholar] [CrossRef]
  60. Yitzhak, N., Pertzov, Y., & Aviezer, H. (2021). The elusive link between eye-movement patterns and facial expression recognition. Social and Personality Psychology Compass, 15(7), e12621. [Google Scholar] [CrossRef]
  61. Zoghlami, F., & Toscani, M. (2023). Foveal to peripheral extrapolation of facial emotion. Perception, 52(7), 514–523. [Google Scholar] [CrossRef]
Figure 1. Upright stimuli under the viewing conditions control, mask, and window.
Figure 2. Mean RCSs across viewing and orientation conditions. Error bars denote 95% CIs.
Figure 3. Mean RCSs across emotions, viewing, and orientation conditions. Error bars denote 95% CIs.
Table 1. Separate ANOVAs for each emotion.
Emotion | Orientation | Viewing Condition | Orientation × Viewing Condition
Fear | F = 20.28, p < 0.001, η2p = 0.42 | F = 69.47, p < 0.001, η2p = 0.71 | F = 4.44, p < 0.05, η2p = 0.13
Sadness | F = 29.79, p < 0.001, η2p = 0.516 | F = 141.29, p < 0.001, η2p = 0.835 | F = 10.78, p < 0.001, η2p = 0.27
Happiness | F = 5.03, p < 0.05, η2p = 0.15 | F = 122.80, p < 0.001, η2p = 0.81 | F = 0.78, p = 0.46
Surprise | F = 0.01, p = 0.97 | F = 85.64, p < 0.001, η2p = 0.75 | F = 2.82, p = 0.06
Anger | F = 46.66, p < 0.001, η2p = 0.62 | F = 52.86, p < 0.001, η2p = 0.65 | F = 2.84, p = 0.09