Article

Exploring the Correlation Between Gaze Patterns and Facial Geometric Parameters: A Cross-Cultural Comparison Between Real and Animated Faces

1 Department of Digital Media Design, Asia University, Taichung 41354, Taiwan
2 Department of Computer and Communication Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 824005, Taiwan
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(4), 528; https://doi.org/10.3390/sym17040528
Submission received: 2 March 2025 / Revised: 22 March 2025 / Accepted: 29 March 2025 / Published: 31 March 2025
(This article belongs to the Special Issue Computer-Aided Geometric Design and Matrices)

Abstract: People are naturally drawn to symmetrical faces, as symmetry is often associated with attractiveness. In contrast to human faces, animated characters often emphasize certain geometric features, exaggerating them while maintaining symmetry and enhancing their visual appeal. This study investigated the impact of geometric parameters of facial features on fixation duration and explored 60 facial samples across two races (American, Japanese) and two conditions (animated, real). Relevant length, angle, and area parameters were extracted from the eyebrows, eyes, ears, nose, and chin regions of the facial samples. Using an eye-tracking experiment design, fixation duration (FD) and fixation count (FC) were extracted from 10 s gaze stimuli. Sixty participants (32 males and 28 females) took part. The results showed that, compared to Japanese animation, American animation typically induced a longer FD and higher FC on features like the eyes (p < 0.001), nose (p < 0.001), ears (p < 0.01), and chin (p < 0.01). Compared to real faces, animated characters typically attracted a longer FD and higher FC on areas such as the eyebrows (p < 0.001), eyes (p < 0.001), and ears (p < 0.001), while the nose (p < 0.001) and chin (p < 0.001) attracted a shorter FD and lower FC. Additionally, a correlation analysis between FD and geometric features showed a high positive correlation in the geometric features of the eyes, nose, and chin for both American and Japanese animated faces. The geometric features of the nose in real American and Japanese faces showed a high negative correlation coefficient. These findings highlight notable differences in FD and FC across different races and facial conditions, suggesting that facial geometric features may play a role in shaping gaze patterns and contributing to the objective quantitative assessment of FD. These insights are critical for optimizing animated character design and enhancing engagement in cross-cultural media and digital interfaces.

1. Introduction

Animated characters are considered one of the factors that attract attention in computer vision images [1], and are often popular with the general public. The versatile representation of animated characters ranges from exaggerated to realistic portrayals, which significantly enhances viewers’ emotional engagement, making these characters more relatable and comprehensible [2]. Animated expressions are increasingly utilized in complex social settings, evolving to align with individuals’ experiences, perceptions, needs, and expectations. For instance, as we become accustomed to the large eyes of animated faces, our interest in enlarged eyes on real human faces also increases [3]. Surveys reveal that exaggerated animated facial stimuli in political portraits can sway public opinions by highlighting emotional or character traits of figures [4]. Furthermore, animated faces play a crucial role in enhancing emotional recognition and communication in children with autism [5,6]. Dawel et al. report that cartoon-like exaggeration of facial features enhances recognition across various face types, including low-resolution faces, different races, and elderly faces [7,8]. Moreover, animated faces often reflect cultural differences in visual presentation, with styles such as American and Japanese animations being particularly influential, thus affecting audience perceptions and behaviors [9]. Consequently, animated faces are essential for conveying emotional messages and garnering attention in diverse fields, including media representation, cultural exchange, and social interaction.
Research indicates that face perception and cognitive processing typically require active engagement [10,11], involving visual gaze activities like searching [12], categorizing [13], and identifying [14], which facilitate decision-making. Chanpornpakdi et al. employed event-related potential and eye-tracking investigations, demonstrating that even partial facial visibility can effectively elicit facial cognition and engagement [15]. This cognitive process begins with the acquisition of specific facial feature information. For example, the mouth is crucial for interpreting facial expressions, whereas the eyes are essential for identifying gender and age [16]. These findings suggest that individuals use unique and task-specific scanning paths when evaluating faces [17]. Recent studies reveal that exaggerated animated facial stimuli capture attention more effectively than real facial stimuli [18,19]. Cheetham et al. found that viewers tend to gaze more at the upper region of Avatar faces compared to real faces under free viewing conditions [20]. This highlights a form of attention modulation dependent on the category of the stimulus. This attention capture implies that our visual perception system is sensitive to animated facial features, which can be exaggerated to enhance salience. Through these studies, we gain a better understanding of the visual perception of facial features and their role in human cognition.
Additionally, studies indicate the existence of an ideal representation model for facial attractiveness perception [21,22], where ideal dimensions of facial features can be assessed through geometric measurements of distances between facial landmarks, thus providing an objective quantitative analysis of facial features [23]. Considering the potential impacts of animated facial media representations or applications, how should entertainment educators design characters to attract and/or persuade an increasingly diverse audience? Our examination of the link between visual attention and facial geometry aims to shed light on human visual behavior. This study underscores the importance of selective attention in our perception of faces and how facial geometry affects this process.

1.1. Racial Differences in Facial Geometric Morphometrics

Standardized parameters for facial feature measurement serve as a basis for evaluating face recognition and aesthetic judgment, and these objective, parameterized facial geometric features have become increasingly popular in the study of facial perception [24,25]. These geometric features include distances and angles calculated from facial landmarks, quantifying facial geometry such as the facial contour, eyebrows, eyes, and nose [26,27]. Against the backdrop of racial differences, these geometric features may exhibit varying patterns of attractiveness. Farkas et al. found significant differences in facial morphologies between different races [28]. Liu et al. revealed significant differences in seven angular measurement features of faces from the United States and Japan [26]. Zheng et al. compared facial proportions and attractiveness between Asians and Caucasians, finding notable differences. The ratio of nose width to height affects the attractiveness for both groups, while the ratio of inner eye width to nose length is particularly important for Asians [29]. Another study by Farkas et al., using neoclassical facial standards, found that facial widths (such as inter-eye distance and nose width) are generally larger in Asians compared to North Americans, while eye fissure length and mouth width are smaller [30]. Jilani et al. analyzed various facial geometric parameters, revealing inter- and intra-group differences, including racial variations in eye measurements [31]. Therefore, these research findings suggest that facial geometric features serve as reliable indicators of facial perception, with distinct differences in facial features among different ethnic groups clearly evident.

1.2. Relationship Between Animated Characters’ Facial Features and Real Human Faces

Analyzing animated facial stimuli using facial geometric features also presents significant advantages [32,33]. Recently, Liu et al. analyzed 42 facial geometric features in American and Japanese animations, comparing them with real faces from the respective regions. The study found that 19 features in American animations closely resembled real faces, whereas 30 features in Japanese animations showed similar patterns [26]. This indicates that Japanese animations more closely mimic real human appearances, demonstrating the impact of regional differences. Animated characters in different regions exhibit various forms, mirroring the real facial appearances of those regions, thereby emphasizing racial stereotypes in animated faces [34,35]. This issue has sparked ongoing debates in animation studies about presenting or avoiding racial stereotypes in facial depictions. Japanese and American animations are often discussed as typical examples for comparative analysis [36]. Content analysis studies have shown that, in Asia, Japanese animated characters bear South Asian features [37], whereas North American Disney animated films display a more diverse range of facial features, more closely resembling the multicultural appearance of North Americans [38]. Although Disney films are the most racially diverse, a survey by Tyner et al. found that the racial representation of princesses and other characters in Disney animations is still predominantly based on Caucasian images, which account for a higher proportion of main characters [39]. This implies that the portrayal of animated facial features is consistently shaped by the racial characteristics of real faces in the local population.

1.3. Eye-Tracking Methods in Facial Feature Analysis

A crucial method to reveal the process of assessing the visual attractiveness of facial features is using eye-tracking devices to explore the relationship between facial features and eye movements [40,41]. Eye movements are divided into saccades and fixations, with fixations being a direct indicator of attention distribution. By measuring the gaze positions and saccade areas, one can identify the regions of interest to the viewer [42]. In the visual gaze patterns of the face, it has been shown that there are differences in task performance between holistic and partial facial region processing [43]. Valuch et al. have revealed a visual attention bias towards attractive whole-face interest areas [40]. Leder et al. discovered a linear relationship between facial attractiveness and gaze behavior; more attractive faces receive longer and more frequent gazes [44].
Studies have found that saccadic patterns differ between animated and real faces, with animated faces having shorter scanning distances, suggesting a more detailed processing of specific features [9]. Studies comparing the brain responses between neutral human and cartoon faces found differences in brain activation in areas such as the occipitotemporal region, fusiform gyrus, superior temporal sulcus, and inferior frontal gyrus. These brain regions, typically more responsive to dynamic features of real faces, show weaker responses to animated faces [45,46]. However, some studies report stronger early occipitotemporal amplitudes for attractive animated faces [47], highlighting key differences in how the brain processes real versus animated faces, impacting observers’ physiological responses. This suggests that the brain’s visual system may process the simplified and exaggerated features of animated faces more easily. Furthermore, existing research indicates that visual attention during the assessment of facial attractiveness is influenced by facial geometric features. Visual attention analysis suggests that the size of the eyes and mouth plays a significant role in guiding visual attention across human faces [48]. Moreover, Huang et al. demonstrated that the degree of jaw protrusion significantly redirects attention from the eyes or nose to the lower face [49]. Similarly, variations in facial parameter sizes influence eye movements; specifically, subjects tend to focus more on the eyes when viewing larger faces, while their gaze becomes more dispersed when observing smaller faces [50]. These findings underscore the substantial impact that alterations in specific facial features have on visual gaze.
This exploratory eye-tracking study aims to examine the influence of different cultural backgrounds (American and Japanese) and facial types (real and animated) on visual attention, with a further focus on analyzing the relationship between facial geometric features and gaze metrics, such as fixation duration and fixation count. The study utilizes real facial images of American and Japanese political candidates, as well as a database of popular animated characters from both countries. Geometric parameters of facial features—including eyes, eyebrows, ears, nose, and chin—were extracted, encompassing standardized measures such as length, angle, and area. Through a series of eye-tracking experiments, participants’ visual behaviors while observing target facial feature areas were recorded. Comparative analyses were conducted across the following groups: real American faces versus real Japanese faces, animated American faces versus animated Japanese faces, real American faces versus animated American faces, and real Japanese faces versus animated Japanese faces. In addition, Pearson correlation analysis was employed to investigate the relationship between facial geometric features (e.g., length, angle, and area) and gaze metrics (e.g., fixation duration), aiming to explore the potential impact of facial structural features on visual attention.

2. Materials and Methods

2.1. Subjects

This study involved 60 participants aged 20 to 30 years, including 32 males and 28 females. The participants’ vision status, including strabismus and color vision abilities, was primarily screened based on self-report. All participants self-reported having normal vision or vision corrected to normal and no color vision deficiencies. Participants were right-handed and had no history of neurological or psychiatric disorders.
The Institutional Review Board (IRB) of the China Medical University Affiliated Hospital approved the experiment with IRB number CRREC-110-128. This experiment adheres to the principles outlined in the Declaration of Helsinki. All participants signed a written informed consent form, and all experiments were conducted in accordance with IRB guidelines.

2.2. Stimuli

The experimental stimuli used in this study consisted of 60 facial images divided into four groups: American animation characters (AC, including 9 males and 6 females), Japanese animation characters (JC, including 5 males and 10 females), American real faces (AT, including 11 males and 4 females), and Japanese real faces (JT, including 9 males and 6 females), with 15 images in each group. The real face images were selected from members of the American and Japanese national legislatures serving after 2000, who were assumed to represent public preferences and values through democratic processes. The animated faces came from animations that were either Oscar-nominated or among the top ten at the box office between 2005 and 2019. These selection criteria aimed to ensure that the images adequately represent the characteristics of animated art and regional figures in their respective cultures. All images were sourced from typical works provided by the referenced study of Liu et al. (2022) [26]. The selected facial images all displayed a frontal view of the face, ensuring that the facial geometric features were not influenced by perspective or angle. All individuals in the images displayed a neutral expression to avoid the influence of emotional expression on the experimental results. To maintain visual consistency, all images had a white background and were uniformly adjusted to a resolution of 300 dpi with dimensions of 1024 × 1024 pixels. The original files of the selected pictures of real faces and animated character faces are listed in Appendix A.
The facial image stimuli used in this study were previously employed in a study on cultural influences on saccadic patterns in facial perception [9]. Unlike the previous study, this research does not reuse any saccadic indicator data. Instead, we have introduced a novel analysis approach focused on exploring the impact of facial geometric features—such as the proportions and symmetry of facial features—on visual gaze behavior. These analyses aim to reveal how facial geometric features influence visual processing in different cultural contexts, thereby deepening our understanding of how culture shapes visual perception.

2.3. Extraction of Facial Geometric Features

To extract key geometric feature parameters from each face, we utilized 26 predefined landmarks identified in prior studies (Liu et al. (2019)) [33], where specific facial features on real and animated characters’ faces are marked with designated alphanumeric codes at these landmark points. The markings of the selected landmarks are illustrated in Figure 1.
The measurement of facial geometric features included calculating the distances between landmarks around the eyes, eyebrows, ears, nose, and chin, angles formed by three points, and areas enclosed by four points, to quantify the size or shape of various facial features. This method has been used for facial feature assessment using decision tree classifiers. Additionally, it included statistical testing of 100 samples per group to analyze differences in facial features between real faces and their animated counterparts in Japan and the United States. Given the need to define areas of interest for facial features in our eye movement experiments, we selected 24 facial feature parameters related to the eyes, eyebrows, ears, nose, and chin areas suitable for AOI selection, with detailed definitions and classifications listed in Table 1. Figure 2 shows the approximate locations of 11 length features, 12 angle features, and 3 area features among all facial geometric features.
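The three classes of measures described above (lengths between two landmarks, angles formed by three landmarks, and areas enclosed by landmark polygons) can be sketched in plain Python. This is an illustrative reconstruction, not the authors' actual measurement code; the function names and coordinate format are assumptions.

```python
import math

def distance(p, q):
    # Euclidean distance between two landmark points (length features)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_at(a, b, c):
    # Angle in degrees at vertex b, formed by the three landmarks a-b-c
    # (angle features are defined by triples of landmarks)
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def polygon_area(points):
    # Shoelace formula for the area enclosed by an ordered set of landmarks
    # (area features are defined by four or more enclosing points)
    s = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

With landmarks expressed in pixel coordinates of the 1024 × 1024 stimuli, these helpers would yield lengths in pixels, angles in degrees, and areas in square pixels.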

2.4. Procedure

This experiment employed the VT 2 eye tracker as the hardware for eye tracking, with a sampling rate of 80 Hz, a tracking distance of approximately 50–70 cm, and an accuracy of about 0.5°, utilizing binocular tracking technology. In addition, visual stimuli were displayed on a 14-inch monitor with a resolution of 1920 × 1200 pixels. The eye tracker was installed below the display to track the participants’ eye movements. The participants’ chins were unrestricted, allowing slight movements of the head and upper body, which minimized interference with the participants’ behavior and ensured the accuracy of the experimental results.
The primary goal of this study is to explore how participants spontaneously process and attend to visual stimuli in the absence of explicit task instructions. In the experiment, a set of facial expression images was presented, and participants were asked to perform a gaze task in a “free viewing” mode. It is important to note that all participants were reminded the day before the experiment to get sufficient rest.
Prior to the experiment, the eye tracker was calibrated to ensure the accuracy of the gaze tracking. The calibration test consisted of a standard 9-point calibration, where participants were required to focus their gaze on each red dot at the edges and center of the screen, with each dot staying for 1 s. Once the calibration process was completed and the calibration quality was deemed satisfactory, the experiment proceeded. In the facial stimulus gaze task, the images switched automatically, and each participant was required to view each facial image for 10 s, with a 2 s black screen between each image. After viewing 20 images, participants had a 120 s rest period. This cycle was repeated until all 60 images had been viewed. Furthermore, the order of the facial stimulus gaze task was randomized for each participant, as shown in Figure 3.
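As a rough illustration (not the authors' actual presentation software), the trial timing described above — randomized image order, 10 s per stimulus, a 2 s black screen between images, and a 120 s rest after every 20 images — can be expressed as a schedule generator:

```python
import random

def build_trial_schedule(image_ids, seed=None):
    # Shuffle the presentation order per participant, then emit
    # (event, image_id, duration_s) tuples following the paper's timing:
    # 10 s stimulus, 2 s blank, 120 s rest after every 20th image.
    order = list(image_ids)
    random.Random(seed).shuffle(order)
    schedule = []
    for i, img in enumerate(order, start=1):
        schedule.append(("stimulus", img, 10.0))
        schedule.append(("blank", None, 2.0))
        if i % 20 == 0 and i < len(order):
            schedule.append(("rest", None, 120.0))
    return schedule
```

For the 60 stimuli used here, this yields 60 stimulus events, two rest periods, and a total task duration of 16 min (960 s) excluding calibration.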

2.5. Eye Tracking Feature Extraction

Eye movement data were analyzed from the onset of the stimulus to the participant’s response. The data were preprocessed using Mangold Vision software version 2.5. For the analysis, we defined nine areas of interest (AOIs) based on key facial features. These AOIs were chosen to represent distinct regions on the face, and their boundaries were manually delineated for each stimulus image. The regions were as follows: AOI_1: the entire face; AOI_2: left eyebrow; AOI_3: right eyebrow; AOI_4: left eye; AOI_5: right eye; AOI_6: left ear; AOI_7: right ear; AOI_8: nose; AOI_9: chin.
Since facial geometry varies across different images (e.g., nose width, eye positioning), the areas of interest (AOIs) were dynamically adjusted for each image to accommodate these differences. This approach follows the concept of Dynamic AOI [51], which allows AOIs to adapt to specific object boundaries rather than relying on predefined static regions, ensuring precise coverage of key facial features during gaze stimulation. In our stimuli, the AOIs for each image were individually adjusted to account for variations in facial features. This ensured that the defined regions accurately covered the specific facial characteristics in each image. For example, the boundaries of the nose or eyes were redefined based on the unique geometry of the face in each stimulus. All images illustrating the areas of interest for facial regions are provided in Appendix B, as shown in Figure 4.
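A minimal sketch of the dynamic-AOI idea: each stimulus image carries its own set of manually delineated regions, and a gaze point is tested against that image's regions rather than a fixed template. The rectangular box format and AOI names below are hypothetical simplifications of the hand-drawn boundaries.

```python
def gaze_hits(aoi_boxes, x, y):
    # aoi_boxes: {aoi_name: (x_min, y_min, x_max, y_max)}, redrawn per image.
    # Returns every AOI containing the gaze point (x, y); AOIs may overlap,
    # e.g. the whole-face AOI contains the feature AOIs.
    return [name for name, (x0, y0, x1, y1) in aoi_boxes.items()
            if x0 <= x <= x1 and y0 <= y <= y1]
```

Because the boxes are stored per image, a wide animated nose and a narrow real nose each get a correctly fitted region with no change to the analysis code.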
We calculated two key eye movement metrics, fixation duration and fixation count, as these metrics are widely used in eye-tracking research to assess visual attention and facial attractiveness [40,41].
Fixation duration refers to the sum of the durations of all fixations made within a given AOI, including any refixations. A longer fixation duration indicates increased cognitive processing or engagement with a particular feature, which is especially important in face perception research, as participants may spend more time fixating on facial features they perceive as attractive or socially significant [41,44].
Fixation count represents the total number of fixations that landed within a particular AOI. A higher fixation count suggests greater importance in processing a specific facial feature, providing insights into an individual’s interest and attentional focus on different facial regions [52].
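The two metrics can be computed from a per-trial fixation list in a few lines. The tuple format and the `aoi_of` callback are assumptions for illustration, not Mangold Vision's export format.

```python
from collections import defaultdict

def summarize_fixations(fixations, aoi_of):
    # fixations: iterable of (x, y, duration_ms) fixation events.
    # aoi_of(x, y) -> AOI name or None for gaze outside all AOIs.
    # Fixation duration sums all fixations (refixations included) per AOI;
    # fixation count tallies how many fixations landed in each AOI.
    duration = defaultdict(float)
    count = defaultdict(int)
    for x, y, d in fixations:
        aoi = aoi_of(x, y)
        if aoi is not None:
            duration[aoi] += d
            count[aoi] += 1
    return dict(duration), dict(count)
```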

2.6. Statistical Analysis

Eye movement metrics, such as fixation duration and fixation count, are typically continuous variables. According to previous eye-tracking research, these metrics usually exhibit a right-skewed distribution (e.g., a Gamma distribution) and are associated with the occurrence of outliers. Among these, common tests for normality include the Kolmogorov–Smirnov (KS) test [6,53], a non-parametric method used to assess the degree of fit between sample data and a normal distribution or to compare whether two sample distributions are identical. In the context of anomaly detection, the KS test can be employed to evaluate whether the data distribution significantly deviates from the expected normal distribution, thereby identifying potential outliers. Therefore, this study utilized the Kolmogorov–Smirnov test to assess the normality of eye-tracking data, with the results presented in Table 2. The test results reached statistical significance (Sig < 0.05), indicating that the data distribution does not conform to a normal distribution.
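An equivalent of this normality screen in Python (the authors used SPSS) might look like the following sketch with scipy. Note one caveat: estimating the mean and SD from the sample itself makes the classical KS p-value conservative, which the Lilliefors correction addresses.

```python
import numpy as np
from scipy import stats

def ks_normality(sample, alpha=0.05):
    # One-sample KS test against a normal distribution parameterized by
    # the sample's own mean and SD; Sig < alpha is read as non-normal.
    x = np.asarray(sample, dtype=float)
    stat, p = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))
    return stat, p, p < alpha
```

A strongly right-skewed sample (as fixation metrics often are) would be flagged as non-normal and routed to the non-parametric comparison described below.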
Statistical analyses were conducted using SPSS software (version 22.0). To examine the distribution of eye-tracking metrics across groups, descriptive statistics were presented using boxplots. The boxplot construction was as follows: The lower (Q1) and upper (Q3) quartiles are represented by the edges of the box, with the line inside the box indicating the mean of the data. The whiskers extend to the range defined by the standard deviation of the data, while points outside this range are marked as potential outliers.
Independent samples t-tests and Mann–Whitney U tests were employed to evaluate whether there were significant differences in eye-tracking metrics between face groups. The following pairwise comparisons were conducted: (1) American real faces vs. Japanese real faces, (2) American animated faces vs. Japanese animated faces, (3) American real faces vs. American animated faces, and (4) Japanese real faces vs. Japanese animated faces. A significance level of 0.05 was set for all analyses. Specifically, the independent samples t-test was used to compare the means between the two groups, assuming normality of the data. In cases where the assumption of normality was not met, the Mann–Whitney U test, a non-parametric alternative, was employed to compare the medians between the two groups. A p value of less than 0.05 was considered statistically significant for all comparisons.
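The decision rule described above — independent-samples t-test when normality holds, Mann–Whitney U otherwise — can be sketched with scipy (the authors used SPSS; this is an illustrative equivalent, not their script):

```python
from scipy import stats

def compare_groups(a, b, normal=True, alpha=0.05):
    # a, b: eye-tracking metric values for two face groups (e.g. AT vs JT).
    # Uses the t-test under normality, otherwise the two-sided
    # Mann-Whitney U test as the non-parametric alternative.
    if normal:
        stat, p = stats.ttest_ind(a, b)
    else:
        stat, p = stats.mannwhitneyu(a, b, alternative='two-sided')
    return stat, p, p < alpha
```

In the study's design this comparison would be run for each of the four pairings (AT vs. JT, AC vs. JC, AT vs. AC, JT vs. JC) within each AOI.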
Pearson correlation analysis was used to determine the relationship between the geometric features of the real and animated faces from America and Japan and fixation duration. All tests were two-tailed, and a p value of less than 0.05 was considered statistically significant.
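A corresponding sketch of the feature-versus-fixation correlation, again with scipy rather than SPSS; the inputs would be one geometric feature value and one mean fixation duration per face stimulus:

```python
from scipy import stats

def feature_fixation_correlation(feature_values, fixation_durations, alpha=0.05):
    # Two-tailed Pearson correlation between a geometric feature
    # (e.g. eye width LA2) and fixation duration across face stimuli.
    r, p = stats.pearsonr(feature_values, fixation_durations)
    return r, p, p < alpha
```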

3. Results

3.1. Fixation Duration for Facial AOIs

The fixation duration parameters for the left and right eyes, left eyebrow, and left ear were not normally distributed. Mann–Whitney U tests, as shown in Table 3, revealed that, compared to JC, AC exhibited significantly longer fixation durations for the left eye (p < 0.001), right eye (p < 0.001), and left eyebrow (p < 0.01). Moreover, the Mann–Whitney U test comparisons between the Japanese group (JC vs. JT) and the American group (AC vs. AT) showed significant differences in fixation duration. Compared to real faces, the eyes of animated characters (JC vs. JT: p < 0.001, AC vs. AT: p < 0.001) and the left ear (JC vs. JT: p < 0.001) showed significantly longer fixation durations. In contrast, compared to real faces, animated characters exhibited significantly shorter fixation durations in the left eyebrow region (p < 0.001). Additionally, the comparison between AT and JT showed that AT had significantly longer fixation durations for the left eyebrow (p < 0.001) and left ear (p < 0.001).
Based on the normality results of the fixation duration parameters for the right eyebrow, right ear, nose, chin, and the entire face, independent samples t-test results are shown in Table 4. Compared to JC, AC exhibited significantly longer fixation durations for the nose (p < 0.001) and chin (p < 0.001). Furthermore, the t-test comparisons between the Japanese group (JC vs. JT) and the American group (AC vs. AT) revealed significant differences in fixation durations. Compared to real faces, animated characters had significantly longer fixation durations for the right ear (JC vs. JT: p < 0.001) and the entire face (JC vs. JT: p < 0.01). In contrast, compared to real faces, animated characters exhibited significantly shorter fixation durations in the right eyebrow (JC vs. JT: p < 0.001, AC vs. AT: p < 0.001), nose (JC vs. JT: p < 0.001, AC vs. AT: p < 0.05), and chin (JC vs. JT: p < 0.001) regions. Additionally, the comparison between AT and JT showed that AT exhibited significantly longer fixation durations in the right eyebrow (p < 0.01) and the entire face (p < 0.01) regions.
Figure 5 shows the data distribution of fixation duration for each group (JC, AC, JT, and AT), as well as the comparison results of fixation duration under the four conditions.

3.2. Fixation Count for Facial AOIs

Based on the non-normal distribution of the fixation count parameters for the left and right eyes and left ear, the results of the Mann–Whitney U test, as shown in Table 5, revealed that, compared to JC, AC exhibited more fixation counts for the left eye (p < 0.001) and right eye (p < 0.001). Furthermore, the Mann–Whitney U test comparisons between the Japanese group (JC vs. JT) and the American group (AC vs. AT) revealed significant differences in fixation count. Compared to real faces, animated characters exhibited significantly more fixation counts for the eyes (AC vs. AT: p < 0.001, JC vs. JT: p < 0.001) and left ear (JC vs. JT: p < 0.001). Additionally, the comparison between AT and JT showed that AT had significantly more fixation counts on the left ear (p < 0.001).
Based on the normality results of the fixation count parameters for the left and right eyebrows, right ear, nose, chin, and the entire face, the independent samples t-test results are shown in Table 6. Compared to JC, AC exhibited significantly more fixation counts for the left eyebrow (p < 0.01), nose (p < 0.001), and chin (p < 0.001). Moreover, the t-test comparisons between the Japanese group (JC vs. JT) and the American group (AC vs. AT) revealed significant differences in fixation count. Compared to real faces, animated characters had significantly more fixation counts on the right ear (JC vs. JT: p < 0.001). In contrast, compared to real faces, animated characters exhibited significantly fewer fixation counts in the right eyebrow (AC vs. AT: p < 0.001, JC vs. JT: p < 0.001), nose (AC vs. AT: p < 0.05, JC vs. JT: p < 0.001), and chin (JC vs. JT: p < 0.001) regions. Additionally, the comparison between AT and JT showed that AT had significantly more fixation counts in the left eyebrow (p < 0.01), right eyebrow (p < 0.05), right ear (p < 0.001), and entire face (p < 0.01) regions.
Figure 6 shows the data distribution of fixation count for each group (JC, AC, JT, and AT), as well as the comparison results of fixation count under the four conditions.
Figure 5 presents a comparison of fixation durations across four conditions, Japanese animated faces (JC), American animated faces (AC), Japanese real faces (JT), and American real faces (AT), for nine areas of interest (AOIs). The boxplots are constructed as follows: The lower and upper quartiles (Q1 and Q3) are depicted by the edges of the box, with the line inside the box indicating the mean. The whiskers extend to the range defined by the standard deviation of the data, while points outside this range are marked as potential outliers. p value: * p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 6 presents a comparison of fixation counts across the same four conditions and nine AOIs, with boxplots constructed in the same manner. p value: * p < 0.05, ** p < 0.01, *** p < 0.001.

3.3. Correlation Analysis Between Facial Geometric Features and AOI Fixation Duration

In examining the effects of facial geometric features on fixation duration for American and Japanese animations and real faces, this study found a series of significant correlations, as depicted in Table 7.
For American animations, the duration of eye area fixation was found to be positively correlated with the width of the eye (left: r = 0.72, p < 0.001; right: r = 0.72, p < 0.001, LA2) and eye area (left: r = 0.57, p < 0.01; right: r = 0.57, p < 0.01, R1). Similarly, the duration of fixation on the ear area showed a positive correlation with the width of the left ear (r = 0.66, p < 0.01, LB3). Fixations on the eyebrow area were positively correlated with the vertical width of the eyebrows (left: r = 0.89, p < 0.001; right: r = 0.90, p < 0.001, LA1), with the angle from the top of the eyebrow to the left edge of the eyebrow to the bottom of the eyebrow (left: r = 0.74, p < 0.001; right: r = 0.72, p < 0.001, A5), and with the angle from the top of the eyebrow to the right edge of the eyebrow to the bottom of the eyebrow (left: r = 0.60, p < 0.001; right: r = 0.61, p < 0.001, A7). The fixation duration in the nose area was positively correlated with the width of the nose (r = 0.53, p < 0.01, LB2) and the angle from the center of the eyebrows and the right cheekbone to the nasal base (r = 0.67, p < 0.01, A2), and negatively correlated with the angle of the nose (r = −0.70, p < 0.001, A3). Finally, fixation duration in the chin area was positively correlated with the distance from the lip line to the chin (r = 0.80, p < 0.001, LA3) and the length from the base of the nose to the chin (r = 0.78, p < 0.001, LA5).
In Japanese animations, the duration of fixation on the eye area was positively correlated with the vertical width of the eye (left: r = 0.75, p < 0.001; right: r = 0.75, p < 0.001, LA2) and the eye area (left: r = 0.64, p < 0.01; right: r = 0.64, p < 0.01, R1). Additionally, the fixation duration on the ear area showed a positive correlation with the width of the left ear (r = 0.55, p < 0.01, LB3). In the nose area, fixation duration was positively correlated with the width of the nose (r = 0.68, p < 0.01, LB2) and nasal area (r = 0.64, p < 0.01, R2). The chin area’s fixation duration positively correlated with the distance from the lip line to the chin (r = 0.52, p < 0.01, LA5) and the length from the base of the nose to the chin (r = 0.79, p < 0.001, LA6).
In real Japanese faces, the fixation duration on the nose area was negatively correlated with the angle of the nose (r = −0.60, p < 0.01, A3). Fixation duration in the chin area positively correlated with the length from the base of the nose to the lip line (r = 0.62, p < 0.01, LA4) and from the base of the nose to the chin (r = 0.60, p < 0.01, LA6). Additionally, for real American faces, the fixation duration in the ear area was positively correlated with the width of the left ear (r = 0.55, p < 0.01, LB3). Fixations on the right eyebrow area were positively correlated with the vertical width of the eyebrows (r = 0.55, p < 0.01, LA1). In the nose area, fixation duration negatively correlated with the angle from the center of the eyebrows to the left alar of the nose, to the base of the nose (r = −0.78, p < 0.001, A1), and with the angle from the center of the eyebrows and the right cheekbone to the nasal base (r = −0.64, p < 0.01, A2). Fixation duration in the chin area was positively correlated with the length from the base of the nose to the lip line (r = 0.50, p < 0.01, LA4).
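Each r and p pair reported in this section is a per-condition Pearson correlation between a geometric parameter and the mean fixation duration on the corresponding AOI across the face samples. A minimal sketch of that computation, using `scipy.stats.pearsonr` on hypothetical data (the variable names, sample size, and values are illustrative, not the study's measurements):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical per-face measurements for one condition (e.g., 15 American
# animated faces): a geometric parameter (eye vertical width, LA2) and the
# mean fixation duration (seconds) on the corresponding AOI.
eye_width = rng.uniform(0.05, 0.20, size=15)               # normalized units
fixation_dur = 2.0 + 8.0 * eye_width + rng.normal(0, 0.2, 15)

r, p = pearsonr(eye_width, fixation_dur)
print(f"r = {r:.2f}, p = {p:.4g}")  # positive r: wider eyes, longer fixations
```

With one test per geometric parameter and AOI, many correlations are computed per condition, so the p < 0.01 and p < 0.001 thresholds used above act as a conservative filter on which associations to report.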

4. Discussion

This exploratory eye-tracking study examined how cultural background (American and Japanese) and facial type (real and animated) influence visual attention, focusing on the relationship between facial geometric features and gaze metrics such as fixation duration. The areas of interest included the left and right eyebrows, eyes, ears, nose, chin, and the entire face, covering four conditions of real and animated faces from America and Japan. Metrics for the gaze tasks included fixation duration and count. Results showed significant differences in fixation duration and count across most conditions for the same area of interest. Additionally, we extracted geometric feature parameters of the facial stimuli, including the eyebrows, eyes, ears, nose, and chin, and found through correlation analysis that these geometric features are related to the duration of gaze on specific facial areas, depending on the condition. Our main discussion therefore focuses on differences in fixation behavior for specific features of animated and real faces from America and Japan, and on how facial geometric features affect fixation duration.
In this experiment, based on the results presented in Figure 5 and Figure 6, the eyes, nose, and chin attracted relatively longer average fixation durations and higher fixation counts, with more pronounced variance among groups in these AOIs. This indicates that some individuals might particularly focus on the eyes, while others may attend more to the nose or mouth. However, the overall trend consistently highlights these areas as the primary focal points of facial attention. Our findings align closely with previous studies indicating that the eyes, nose, and chin (or mouth) are often the most frequently fixated facial regions and exhibit the greatest variability [40,41], due to their significant role in conveying important social and identity-related information [42,54]. In contrast, areas such as the eyebrows or ears exhibited average fixation durations typically shorter than one second, with smaller variability. Mean values for these regions across groups were comparatively low, making it difficult to reach comparable significance levels. Although these regions may occasionally play specific roles, they generally carry less socially relevant information in most tasks and daily interactions, resulting in less pronounced differences among groups than in the eyes, nose, and chin regions.

4.1. Fixation Differences in American vs. Japanese Animation Faces

We found no difference in fixation duration and count on the overall facial area between American and Japanese animated faces, indicating similar visual appeal when people view animated faces from America and Japan. Similarly, Chen et al. found no significant differences in the saccadic patterns of audiences watching different animation styles [9]. However, our goal is to explain the visual representation of gaze patterns. According to the notion of attention stability, consistent gaze durations and scanning patterns may indicate relatively stable viewer attention toward specific visual stimuli [55]. This suggests that both American and Japanese animated facial stimuli are effective at attracting and maintaining viewer attention, and it provides a foundation for further research on how animation styles affect visual attention.
However, when comparing gaze behavior on specific facial areas, we did find significant differences between American and Japanese animations. Specifically, the duration and frequency of gazes on areas such as the chin, nose, eyes, and eyebrows were significantly higher for American animations than for Japanese animations. This indicates that people have different visual preferences when viewing certain features of animated faces from America and Japan. These differences could stem from distinct design strategies: American animations tend to use more exaggerated facial features, while Japanese animations may favor flattened noses or pointed chins to convey a specific Japanese visual style [33,36]. This could produce differences in the size of the salient regions of specific features in American and Japanese animated faces, in turn modulating gaze duration on those features. Given that salience modulates visual attention, this effect can persist for a long time, with larger salient areas closely associated with more and longer gazes [56].

4.2. Fixation Differences in American vs. Japanese Real Faces

In this study, all participants were from Taiwan; therefore, faces from Japan and the United States were both non-ingroup faces. This was an important consideration in the design of the study, as it influences how we interpret visual behavior differences related to race. Observations of facial familiarity have found that higher familiarity typically attracts fewer gazes [57]. Given this effect of cultural proximity, we speculate that Taiwanese participants might gaze less at Asian faces (e.g., Japanese faces) because of greater familiarity with them. In contrast, the longer and more frequent gazes on American faces are generally explained by the novelty effect [58] or aesthetic preference [59]: the novelty of a face may lead to longer gaze durations because the brain needs more time to process unfamiliar features, and faces perceived as aesthetically attractive may attract prolonged gazes, a preference that can sometimes transcend racial boundaries.
In comparing facial features, we observed certain differences in how participants gazed at the eyebrow and ear regions of Japanese and American faces, although these areas showed relatively small eye movement values compared to other features. Unlike previous studies, we found no ethnic-group-specific visual preferences in the eye, nose, and chin areas [60,61]. This suggests that our participants' gaze behavior was similar across non-ingroup facial features, rather than focusing on areas typically associated with racial biases. These observations suggest that differences in attention distribution across the facial features of different ethnic groups reflect variations in visual gaze patterns toward various racial groups. Our results indicate an attenuation of race-specific visual cues in facial features, potentially explaining the relatively stable visual gazes displayed by Taiwanese participants when encountering certain features of Japanese and American faces. Another possible explanation is that, according to previous research, animated faces play a key role in eliminating facial racial bias [62]; our introduction of animated facial stimuli may have reduced biases in observing the eye, nose, and chin areas of real faces across racial groups. Therefore, our eye movement results for real American and Japanese facial features differ from previous findings of feature biases across racial faces; specifically, our participants exhibited a similar visual distribution when viewing the main features (eyes, nose, chin) of real American and Japanese faces.
It is important to note that previous studies have shown that cultural and racial familiarity can influence gaze behavior, with individuals tending to focus more on features that differ from the typical facial structure of their own ethnic group [57,60]. If future studies include participants from non-Asian backgrounds, their visual behavior when observing faces of different races may differ, challenging the generalizability of our findings. We also suggest that future research expand the scope of investigation to include participants from diverse backgrounds to determine whether the observed patterns are consistent across different racial groups.

4.3. Fixation Comparison Between Animated and Real Faces

We found that, compared to American animated faces, real American faces received a significantly longer gaze duration and a higher number of gazes across facial areas. This observation is supported by studies measuring brain activity: event-related potentials (ERPs) have shown that real faces elicit stronger responses in brain regions associated with face perception, suggesting deeper or more complex processing of real faces [46]. Additionally, Chassy and colleagues found a significant positive correlation between visual complexity and fixation counts using eye-tracking technology [63]. Compared to simplified animated faces, real faces therefore impose higher information processing demands, potentially causing observers to change their gaze points more frequently to handle and interpret this complex information, thus increasing fixation counts. Conversely, compared to real Japanese faces, Japanese animated faces received a significantly longer gaze duration and a higher number of gazes across facial areas. This may reflect the influence of culture and media type on facial gaze differences: the unique visual style of Japanese anime often elicits uniformly positive responses from audiences in most regions [64,65], which could make viewers more attracted to this style.
In the comparison of facial features, whether American or Japanese, real faces received a significantly longer gaze duration and a higher number of gazes on the nose and chin areas than animated faces, whereas animated faces held the advantage in the eye, ear, and eyebrow regions. This result is consistent with other studies [20] that found the upper facial region of animated faces to have a more pronounced visual attraction effect. The differences in gaze patterns between animated and real faces are primarily due to differences in the configuration of animated and real human facial features [26], which relates to how people interpret and process facial information. For example, with real faces, participants may focus more on the lower parts of the face (such as the nose and chin) that are closely related to social interaction [66]. Animated faces, by contrast, often emphasize key facial features through exaggeration and stylization, which is key to conveying emotions in social interactions [67,68]. This emphasis results in a distinct gaze pattern in which the eyes, as crucial non-verbal cues, attract initial and focused attention.

4.4. Influence of Facial Geometric Features on Fixation Duration

Our study confirmed that facial geometric features significantly influence fixation duration in both real and animated subjects. Correlational analyses revealed significant associations between fixation duration and several facial features—namely, the vertical width and area of the eyes, and the widths and lengths of the ears, noses, and chins—in American and Japanese animated faces. Large eyes, wide ears, wide noses, and long chins serve as natural focal points, inherently drawing the viewer’s gaze. This underscores a pivotal visual strategy in animation: employing exaggerated facial features to captivate and maintain viewer attention [18]. Notably, American and Japanese animations exhibit distinct styles in feature exaggeration, affecting gaze duration and geometric parameters. Specifically, in Japanese animation, an enlarged nasal area—a crucial visual focal point—positively correlates with longer fixation durations. Conversely, in American animation, while nose length positively correlates with gaze duration, the angle from the nostrils to the base negatively affects it. According to Liu et al., nose styling distinguishes American from Japanese animations; American characters typically have longer, more realistic noses, while Japanese characters feature a flattened representation [33], both styles designed to attract viewer attention. Our analysis elucidates the distinct impact of geometric features on the nasal characteristics of American and Japanese animated characters, which significantly affects gaze duration. Additionally, the vertical width and angle of eyebrows in American animations positively correlate with increased gaze duration, highlighting their role as key visual elements.
Regarding real faces, considering that our samples come from elected officials, this may reveal public gaze patterns towards politicians' facial features. Some research indicates that voters are likely to favor candidates perceived to embody leadership qualities and credibility [69,70]. Our findings on gaze behavior could reflect public perceptions of the facial geometric features linked to political appearances. Firstly, in the nasal area, Japanese individuals typically have sharper nasal bases, while Americans have shorter noses with broader bases, features that correlate with gaze duration. Secondly, Japanese individuals generally have lower-positioned mouths, and, in the American samples, broader eyebrows and ears also correlate with longer gaze durations. Lastly, regarding chin features, data from both countries indicate that a longer chin distance is associated with longer gaze durations. These facial geometric features also help explain the facial perception of leadership, credibility, and attractiveness. Nevertheless, our results indicate that the prominence of these features can increase gaze duration among Asian participants.
According to previous research, facial symmetry plays an important role in the subjective evaluation and visual perception of facial attractiveness [71]. Research has shown that people generally prefer symmetrical faces and that, when observing faces with strong symmetry, gaze durations are typically longer and fixations more frequent [72]. Facial features in a frontal view exhibit the highest degree of symmetry, and, when an observer looks at a face from the front, the eyes, nose, and other features naturally draw the focal point of the gaze. The results of this study show that, compared to human faces, animated faces have a significantly longer FD and higher FC in the eye area. This, combined with the correlation of geometric features such as eye length and area, may explain why exaggerated eye regions in animated characters exhibit significant visual appeal while maintaining maximum symmetry. However, compared to animated faces, human faces exhibit a longer FD and higher FC in the nose and chin regions. Does this indicate that the nose and chin in real human faces, whether symmetrical or asymmetrical, attract more attention? Due to factors such as congenital development, environmental influences, and disease, facial asymmetry is common in human faces. According to Kaipainen et al., judgments of facial attractiveness are not influenced by absolute asymmetry in facial regions, with asymmetry primarily observed in the chin and cheek areas [73]. Nevertheless, in our sample, the natural asymmetry in the shape of the nose and chin may suggest that these regions have higher visual involvement than in animated faces, although the main reason may lie in the authenticity of their geometric structure and dynamic changes.

4.5. Limitations

This study has several limitations. Firstly, the current research extensively discusses differences in visual fixation behavior between facial stimuli from American and Japanese animations and real human faces, with a primary focus on the eyes, eyebrows, nose, ears, and chin as the main areas of interest (AOIs). However, it has not sufficiently considered other facial regions, such as hairstyle (including hair color), cheeks, contours, and other non-facial features. Future research should consider controlling or annotating these features, which will contribute to a more comprehensive understanding of facial perception and its interaction with other factors.
Additionally, while the current study normalized only the overall facial length (i.e., the distance from the top of the head to the chin) to a standard 1 cm unit, we did not standardize individual facial features, owing to significant variations across cultural backgrounds and animation styles. For instance, previous research has pointed out notable differences in eye size and nose proportions between American and Japanese animations, among other features. These differences could influence gaze behavior. As such, we did not apply geometric feature normalization in this study, as doing so may alter visual attention parameters and affect the resulting gaze patterns. Future research should examine whether standardizing geometric features can clarify how variations in facial geometry affect visual attention and gaze dynamics.
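The length normalization described above amounts to dividing all landmark coordinates by the head-top-to-chin distance so every face shares a common scale. A minimal sketch, assuming 2D landmark coordinates; the landmark indices and point values are hypothetical, not the study's actual landmark scheme:

```python
import numpy as np

def normalize_face(landmarks: np.ndarray, top_idx: int, chin_idx: int) -> np.ndarray:
    """Scale 2D landmark coordinates so that the top-of-head to chin
    distance equals 1 unit.

    landmarks: (N, 2) array of (x, y) points; top_idx and chin_idx are
    hypothetical indices of the head-top and chin landmarks.
    """
    face_length = np.linalg.norm(landmarks[top_idx] - landmarks[chin_idx])
    return landmarks / face_length

# Toy example: a 3-point "face" whose head-top to chin distance is 20 px.
pts = np.array([[10.0, 0.0],    # top of head
                [8.0, 12.0],    # nose tip
                [10.0, 20.0]])  # chin
norm = normalize_face(pts, top_idx=0, chin_idx=2)
```

Because the whole face is divided by a single scalar, relative proportions among features (e.g., eye width versus nose width) are preserved, which is exactly why per-feature differences between styles survive this normalization.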
Secondly, while our study primarily focuses on the impact of different stimulus categories (e.g., American vs. Japanese animation characters, real faces), gender differences may also play a significant role in shaping participants’ responses. Although we did not explicitly design the experiment to explore gender effects, we recognize that male and female participants might react differently to male or female faces. Future research could better address this by increasing sample size and explicitly incorporating gender as a factor, enabling a more thorough investigation of gender’s influence on face perception and emotional responses.
Thirdly, the distribution of gaze across different facial areas appears to be inconsistently influenced by expressions: some expressions modulate gaze proportion while others show no significant effect. This study investigated viewers' eye movements in response to natural expressions, laying the groundwork for future research exploring the impact of different facial expressions.
Lastly, as our samples were derived from popular cartoon characters and the real faces of politicians, participant responses might be influenced by familiarity and past experiences, potentially biasing our data towards positive emotions. Future studies could verify this using physiological measures of emotion. Despite these limitations, the results of this study provide a valuable foundation for future research exploring the impact of facial geometric features on visual behavior using animated facial stimuli.

5. Conclusions

The results of this study indicate that American animations typically elicit longer fixation durations and a higher number of fixations on most facial features compared to Japanese animations. Moreover, compared to real faces, animated characters generally receive longer and more frequent fixations on specific features such as eyebrows, eyes, and ears, while the fixation durations on noses and chins are shorter. Additionally, correlational analyses reveal significant associations between facial geometric features and fixation durations on both real and animated facial representations, with variations observed across different conditions. This suggests that facial geometric features may have a potential role in shaping the patterns of gaze fixation durations when viewing both animated and real facial features.

Author Contributions

All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Z.-L.C. and K.-M.C. The first draft of the manuscript was written by Z.-L.C. All authors commented on previous versions of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Research Ethics Committee of China Medical University Hospital (CRREC-110-128).

Informed Consent Statement

Written informed consent was obtained from all participants for the publication of this paper.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors sincerely thank all the participants who helped make this study possible.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Appendix A. The Original Face Files Are Listed in the Following Links

Appendix B

The pictures of the facial areas of interest are listed in the following link: https://figshare.com/s/2272a32cbd360a988abe (accessed on 5 February 2025).

References

  1. Abraham, L. Effectiveness of cartoons as a uniquely visual medium for orienting social issues. J. Commun. Monogr. 2009, 11, 117–165. [Google Scholar] [CrossRef]
  2. van Rooij, M. Carefully constructed yet curiously real: How major American animation studios generate empathy through a shared style of character design. Animation 2019, 14, 191–206. [Google Scholar]
  3. Chen, H.; Russell, R.; Nakayama, K.; Livingstone, M. Crossing the ‘uncanny valley’: Adaptation to cartoon faces can influence perception of human faces. Perception 2010, 39, 378–386. [Google Scholar] [PubMed]
  4. Chirco, P.; Buchanan, T.M. We the People. Who? The face of future American politics is shaped by perceived foreignness of candidates of color. Anal. Soc. Issues Public Policy 2023, 23, 5–19. [Google Scholar]
  5. Pino, M.C.; Vagnetti, R.; Valenti, M.; Mazza, M. Comparing virtual vs real faces expressing emotions in children with autism: An eye-tracking study. Educ. Inf. Technol. 2021, 26, 5717–5732. [Google Scholar] [CrossRef]
  6. Ziv, I.; Avni, I.; Dinstein, I.; Meiri, G.; Bonneh, Y.S. Oculomotor randomness is higher in autistic children and increases with the severity of symptoms. Autism Res. 2024, 17, 249–265. [Google Scholar] [CrossRef]
  7. Davis, S.R.; Hand, E.M. Improving face recognition using artistic interpretations of prominent features: Leveraging caricatures in modern surveillance systems. In Intelligent Video Surveillance-New Perspectives; IntechOpen: Rijeka, Croatia, 2022. [Google Scholar]
  8. Dawel, A.; Wong, T.Y.; McMorrow, J.; Ivanovici, C.; He, X.; Barnes, N.; Irons, J.; Gradden, T.; Robbins, R.; Goodhew, S.C. Caricaturing as a general method to improve poor face recognition: Evidence from low-resolution images, other-race faces, and older adults. J. Exp. Psychol. Appl. 2019, 25, 256. [Google Scholar] [CrossRef]
  9. Chen, Z.-L.; Chang, K.-M. Cultural Influences on Saccadic Patterns in Facial Perception: A Comparative Study of American and Japanese Real and Animated Faces. Appl. Sci. 2023, 13, 11018. [Google Scholar] [CrossRef]
  10. Dellert, T.; Müller-Bardorff, M.; Schlossmacher, I.; Pitts, M.; Hofmann, D.; Bruchmann, M.; Straube, T. Dissociating the neural correlates of consciousness and task relevance in face perception using simultaneous EEG-fMRI. J. Neurosci. 2021, 41, 7864–7875. [Google Scholar] [CrossRef]
  11. Hou, X.; Shang, J.; Tong, S. Neural Mechanisms of the Conscious and Subliminal Processing of Facial Attractiveness. Brain Sci. 2023, 13, 855. [Google Scholar] [CrossRef]
  12. Guy, N.; Azulay, H.; Kardosh, R.; Weiss, Y.; Hassin, R.R.; Israel, S.; Pertzov, Y. A novel perceptual trait: Gaze predilection for faces during visual exploration. Sci. Rep. 2019, 9, 10714. [Google Scholar]
  13. Kawakami, K.; Friesen, J.P.; Fang, X. Perceiving ingroup and outgroup faces within and across nations. Br. J. Psychol. 2022, 113, 551–574. [Google Scholar] [PubMed]
  14. Peterson, M.F.; Eckstein, M.P. Individual differences in eye movements during face identification reflect observer-specific optimal points of fixation. Psychol. Sci. 2013, 24, 1216–1225. [Google Scholar]
  15. Chanpornpakdi, I.; Wongsawat, Y.; Tanaka, T. Partial face visibility and facial cognition: Event-related potential and eye tracking investigation. Cogn. Neurodynamics 2025, 19, 47. [Google Scholar]
  16. Royer, J.; Blais, C.; Charbonneau, I.; Déry, K.; Tardif, J.; Duchaine, B.; Gosselin, F.; Fiset, D. Greater reliance on the eye region predicts better face recognition ability. Cognition 2018, 181, 12–20. [Google Scholar]
  17. Kanan, C.; Bseiso, D.N.; Ray, N.A.; Hsiao, J.H.; Cottrell, G.W. Humans have idiosyncratic and task-specific scanpaths for judging faces. Vis. Res. 2015, 108, 67–76. [Google Scholar] [PubMed]
  18. Hongpaisanwiwat, C.; Lewis, M. Attentional effect of animated character. In Human-Computer Interaction; Ios Press: Amsterdam, The Netherlands, 2003; pp. 423–430. [Google Scholar]
  19. Lee, Y.-I.; Choi, Y.; Jeong, J. Character drawing style in cartoons on empathy induction: An eye-tracking and EEG study. PeerJ 2017, 5, e3988. [Google Scholar]
  20. Cheetham, M.; Pavlovic, I.; Jordan, N.; Suter, P.; Jancke, L. Category processing and the human likeness dimension of the uncanny valley hypothesis: Eye-tracking data. Front. Psychol. 2013, 4, 108. [Google Scholar]
  21. Trujillo, L.T.; Anderson, E.M. Facial typicality and attractiveness reflect an ideal dimension of face structure. Cogn. Psychol. 2023, 140, 101541. [Google Scholar]
  22. Voorspoels, W.; Vanpaemel, W.; Storms, G. A formal ideal-based account of typicality. Psychon. Bull. Rev. 2011, 18, 1006–1014. [Google Scholar]
  23. Fan, J.; Chau, K.; Wan, X.; Zhai, L.; Lau, E. Prediction of facial attractiveness from facial proportions. Pattern Recognit. 2012, 45, 2326–2334. [Google Scholar]
  24. Komori, M.; Kawamura, S.; Ishihara, S. Effect of averageness and sexual dimorphism on the judgment of facial attractiveness. Vis. Res. 2009, 49, 862–869. [Google Scholar] [PubMed]
  25. Zhang, L.; Zhang, D.; Sun, M.-M.; Chen, F.-M. Facial beauty analysis based on geometric feature: Toward attractiveness assessment application. Expert Syst. Appl. 2017, 82, 252–265. [Google Scholar]
  26. Liu, K.; Chang, K.-M.; Liu, Y.-J. Facial Feature Study of Cartoon and Real People with the Aid of Artificial Intelligence. Sustainability 2022, 14, 13468. [Google Scholar] [CrossRef]
  27. Perez-Gomez, V.; Rios-Figueroa, H.V.; Rechy-Ramirez, E.J.; Mezura-Montes, E.; Marin-Hernandez, A. Feature selection on 2D and 3D geometric features to improve facial expression recognition. Sensors 2020, 20, 4847. [Google Scholar] [CrossRef] [PubMed]
  28. Farkas, L.G.; Katic, M.J.; Forrest, C.R. International anthropometric study of facial morphology in various ethnic groups/races. J. Craniofacial Surg. 2005, 16, 615–646. [Google Scholar]
  29. Zheng, S.; Chen, K.; Lin, X.; Liu, S.; Han, J.; Wu, G. Quantitative analysis of facial proportions and facial attractiveness among Asians and Caucasians. Math. Biosci. Eng. 2022, 19, 6379–6395. [Google Scholar] [PubMed]
  30. Le, T.T.; Farkas, L.G.; Ngim, R.C.; Levin, L.S.; Forrest, C.R. Proportionality in Asian and North American Caucasian faces using neoclassical facial canons as criteria. Aesthetic Plast. Surg. 2002, 26, 64–69. [Google Scholar]
  31. Jilani, S.K.; Ugail, H.; Logan, A. Inter-Ethnic and Demic-Group variations in craniofacial anthropometry: A review. PSM Biol. Res. 2018, 4, 6–16. [Google Scholar]
  32. Chen, K.-L.; Chen, I.-P.; Hsieh, C.-M. Analysis of Facial Feature Design for 3D Animation Characters. Vis. Commun. Q. 2020, 27, 70–83. [Google Scholar]
  33. Liu, K.; Chen, J.-H.; Chang, K.-M. A study of facial features of American and Japanese cartoon characters. Symmetry 2019, 11, 664. [Google Scholar] [CrossRef]
  34. Lu, A.S. The many faces of internationalization in Japanese anime. Animation 2008, 3, 169–187. [Google Scholar]
  35. Sammond, N. Race, resistance and violence in cartoons. In The Animation Studies Reader; Bloomsbury Academic: New York, NY, USA, 2019; pp. 217–234. [Google Scholar]
  36. Dai, B. Investigating Visual Differences Between Japanese and American Animation; Rochester Institute of Technology: Henrietta, NY, USA, 2016. [Google Scholar]
  37. Dastagir, S.F. Representations of South Asians in Japanese Animation; Birkbeck, University of London: London, UK, 2023. [Google Scholar]
  38. Zurcher, J.D.; Webb, S.M.; Robinson, T. The portrayal of families across generations in Disney animated films. Soc. Sci. 2018, 7, 47.
  39. Tyner-Mullings, A.R. Disney animated movies, their princesses, and everyone else. Inf. Commun. Soc. 2023, 26, 891–903.
  40. Valuch, C.; Pflüger, L.S.; Wallner, B.; Laeng, B.; Ansorge, U. Using eye tracking to test for individual differences in attention to attractive faces. Front. Psychol. 2015, 6, 42.
  41. Fearington, F.W.; Pumford, A.D.; Awadallah, A.S.; Dey, J.K. Gaze Patterns During Evaluation of Facial Attractiveness: An Eye-Tracking Investigation. Laryngoscope 2024.
  42. Itier, R.J.; Villate, C.; Ryan, J.D. Eyes always attract attention but gaze orienting is task-dependent: Evidence from eye movement monitoring. Neuropsychologia 2007, 45, 1019–1028.
  43. Bombari, D.; Mast, F.W.; Lobmaier, J.S. Featural, configural, and holistic face-processing strategies evoke different scan patterns. Perception 2009, 38, 1508–1521.
  44. Leder, H.; Mitrovic, A.; Goller, J. How beauty determines gaze! Facial attractiveness and gaze duration in images of real world scenes. i-Perception 2016, 7, 2041669516664355.
  45. James, T.W.; Potter, R.F.; Lee, S.; Kim, S.; Stevenson, R.A.; Lang, A. How realistic should avatars be? J. Media Psychol. 2015, 27, 109–117.
  46. Zhao, J.; Meng, Q.; An, L.; Wang, Y. An event-related potential comparison of facial expression processing between cartoon and real faces. PLoS ONE 2019, 14, e0198868.
  47. Lu, Y.; Wang, J.; Wang, L.; Wang, J.; Qin, J. Neural responses to cartoon facial attractiveness: An event-related potential study. Neurosci. Bull. 2014, 30, 441–450.
  48. Navajas, J.; Nitka, A.W.; Quian Quiroga, R. Dissociation between the neural correlates of conscious face perception and visual attention. Psychophysiology 2017, 54, 1138–1150.
  49. Huang, P.; Cai, B.; Zhou, C.; Wang, W.; Wang, X.; Gao, D.; Bao, B. Contribution of the mandible position to the facial profile perception of a female facial profile: An eye-tracking study. Am. J. Orthod. Dentofac. Orthop. 2019, 156, 641–652.
  50. Wang, S. Face size biases emotion judgment through eye movement. Sci. Rep. 2018, 8, 317.
  51. Jayawardena, G.; Jayarathna, S. Automated filtering of eye movements using dynamic AOI in multiple granularity levels. Int. J. Multimed. Data Eng. Manag. 2021, 12, 49–64.
  52. Holmqvist, K.; Nyström, M.; Andersson, R.; Dewhurst, R.; Jarodzka, H.; Van de Weijer, J. Eye Tracking: A Comprehensive Guide to Methods and Measures; Oxford University Press: Oxford, UK, 2011.
  53. Agost, M.-J.; Bayarri-Porcar, V. The Use of Eye-Tracking to Explore the Relationship Between Consumers’ Gaze Behaviour and Their Choice Process. Big Data Cogn. Comput. 2024, 8, 184.
  54. Liu, M.; Zhan, J.; Wang, L. Specified functions of the first two fixations in face recognition: Sampling the general-to-specific facial information. iScience 2024, 27, 110686.
  55. Pannasch, S.; Helmert, J.R.; Roth, K.; Herbold, A.-K.; Walter, H. Visual fixation durations and saccade amplitudes: Shifting relationship in a variety of conditions. J. Eye Mov. Res. 2008, 2, 1–19.
  56. Constant, M.; Liesefeld, H.R. Effects of salience are long-lived and stubborn. J. Exp. Psychol. Gen. 2023, 152, 2685.
  57. Heisz, J.J.; Shore, D.I. More efficient scanning for familiar faces. J. Vis. 2008, 8, 9.
  58. Wright, C.I.; Negreira, A.; Gold, A.L.; Britton, J.C.; Williams, D.; Barrett, L.F. Neural correlates of novelty and face–age effects in young and elderly adults. Neuroimage 2008, 42, 956–968.
  59. Leder, H.; Tinio, P.P.; Fuchs, I.M.; Bohrn, I. When attractiveness demands longer looks: The effects of situation and gender. Q. J. Exp. Psychol. 2010, 63, 1858–1871.
  60. Arizpe, J.; Kravitz, D.J.; Walsh, V.; Yovel, G.; Baker, C.I. Differences in looking at own- and other-race faces are subtle and analysis-dependent: An account of discrepant reports. PLoS ONE 2016, 11, e0148253.
  61. Burgund, E.D. Looking at the own-race bias: Eye-tracking investigations of memory for different race faces. Vis. Cogn. 2021, 29, 51–62.
  62. Rodríguez, J.; Bortfeld, H.; Gutiérrez-Osuna, R. Reducing the other-race effect through caricatures. In Proceedings of the 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition, Amsterdam, The Netherlands, 17–19 September 2008; pp. 1–5.
  63. Chassy, P.; Lindell, T.A.; Jones, J.A.; Paramei, G.V. A relationship between visual complexity and aesthetic appraisal of car front images: An eye-tracker study. Perception 2015, 44, 1085–1097.
  64. Li, Y.; Jiang, Q. The development and influence of Japanese aesthetics and its manifestation in Japanese animation. In Proceedings of the SHS Web of Conferences, Online, 12–13 December 2022; p. 01007.
  65. Pellitteri, M. The bias on characters’ visual traits in Japanese animation and the misconceived “transnationality” of anime. Mutual Images J. 2023, 11, 109–138.
  66. Hessels, R.S.; Holleman, G.A.; Kingstone, A.; Hooge, I.T.; Kemner, C. Gaze allocation in face-to-face communication is affected primarily by task structure and social context, not stimulus-driven factors. Cognition 2019, 184, 28–43.
  67. Kendall, L.N.; Raffaelli, Q.; Kingstone, A.; Todd, R.M. Iconic faces are not real faces: Enhanced emotion detection and altered neural processing as faces become more iconic. Cogn. Res. Princ. Implic. 2016, 1, 1–14.
  68. Zhang, S.; Liu, X.; Yang, X.; Shu, Y.; Liu, N.; Zhang, D.; Liu, Y.-J. The Influence of Key Facial Features on Recognition of Emotion in Cartoon Faces. Front. Psychol. 2021, 12, 687974.
  69. Antonakis, J.; Eubanks, D.L. Looking leadership in the face. Curr. Dir. Psychol. Sci. 2017, 26, 270–275.
  70. Maran, T.; Furtner, M.; Liegl, S.; Kraus, S.; Sachse, P. In the eye of a leader: Eye-directed gazing shapes perceptions of leaders’ charisma. Leadersh. Q. 2019, 30, 101337.
  71. Perrett, D.I.; Burt, D.M.; Penton-Voak, I.S.; Lee, K.J.; Rowland, D.A.; Edwards, R. Symmetry and human facial attractiveness. Evol. Hum. Behav. 1999, 20, 295–307.
  72. Huang, Y.; Xue, X.; Spelke, E.; Huang, L.; Zheng, W.; Peng, K. The aesthetic preference for symmetry dissociates from early-emerging attention to symmetry. Sci. Rep. 2018, 8, 6263.
  73. Kaipainen, A.E.; Sieber, K.R.; Nada, R.M.; Maal, T.J.; Katsaros, C.; Fudalej, P.S. Regional facial asymmetries and attractiveness of the face. Eur. J. Orthod. 2016, 38, 602–608.
Figure 1. Twenty-four facial landmarks were defined following [33].
Figure 2. Approximate locations of 11 length features, 12 angle features, and 3 area features in facial geometric features.
Figure 3. System overview.
Figure 4. Areas of interest (AOIs).
Figure 5. Comparison of fixation durations across four conditions for nine areas of interest (AOIs). * p < 0.05, *** p < 0.001.
Figure 6. Comparison of fixation counts across four conditions for nine areas of interest (AOIs). *** p < 0.001.
Table 1. Description of facial geometric features.
| N | Code | Points | Description |
|---|------|--------|-------------|
| 1 | LA2 | u-s | Vertical length of eye |
| 2 | R1 | r-s-t-u | Area of the eye |
| 3 | LA1 | v-y | Vertical length of eyebrow |
| 4 | LB1 | w-x | Length of eyebrow |
| 5 | A4 | ∠p-w-y | Angle from the hairline to the left edge of the eyebrow to the bottom of the eyebrow |
| 6 | A5 | ∠v-w-y | Angle from the top of the eyebrow to the left edge of the eyebrow to the bottom of the eyebrow |
| 7 | A6 | ∠p-x-y | Angle from the hairline to the right edge of the eyebrow to the bottom of the eyebrow |
| 8 | A7 | ∠v-x-y | Angle from the top of the eyebrow to the right edge of the eyebrow to the bottom of the eyebrow |
| 9 | LB3 | m-k | Length of ear |
| 10 | LA3 | a-c | Vertical length of nose |
| 11 | LB2 | b-d | Length of nose |
| 12 | R2 | a-b-c-d | Area of nose |
| 13 | A2 | ∠a-b-d | Angle from the eyebrows’ center to the left alar of the nose to the base of the nose |
| 14 | A1 | ∠a-l-c | Angle from the eyebrows’ center to the right cheekbone to the nasal base |
| 15 | A3 | ∠b-c-d | Angle of nose |
| 16 | LA4 | c-e | Length from base of nose to lip line |
| 17 | LA6 | c-g | Length from base of nose to chin |
| 18 | LA5 | e-g | Lip line to chin |
| 19 | LB5 | f-h | Width of jaw |
| 20 | LB4 | i-j | Mandibular width |
| 21 | R3 | i-f-g-j | Area between the left and right mandible angles and the chin |
| 22 | A8 | ∠f-g-h | Sharpness of the chin angle |
| 23 | A9 | ∠h-g-z | Angle from the right side of the chin to the chin to the lower right corner of the picture |
| 24 | A10 | ∠i-g-z | Angle from the left mandible angle to the chin to the lower right corner of the picture |
| 25 | A11 | ∠j-g-z | Angle from the right mandible angle to the chin to the lower right corner of the picture |
| 26 | A12 | ∠k-i-g | Angle from the left ear to the left mandible angle to the chin |
N: unique row number for each measurement. Nos. 1–2: eye area; nos. 3–8: eyebrow area; no. 9: ear area; nos. 10–15: nose area; nos. 16–26: chin area. Code: the specific code assigned to each measurement item, reused throughout the database. Points: the facial landmarks used to compute the measurement; e.g., u-s denotes the vertical length of the eye, measured from point u to point s.
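The length, angle, and area features in Table 1 are all simple functions of the 2-D landmark coordinates in Figure 1. As an illustrative sketch (not the authors’ code; the landmark coordinates passed in are hypothetical), each family of features can be computed as follows:

```python
import math

def length(p, q):
    """Euclidean distance between two landmarks, e.g. LA2 = length(u, s)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def angle(p, vertex, q):
    """Angle in degrees at `vertex` subtended by p and q, e.g. A8 = angle(f, g, h)."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def polygon_area(pts):
    """Shoelace-formula area of a landmark polygon, e.g. R1 = polygon_area([r, s, t, u])."""
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

For example, chin sharpness A8 is the angle at landmark g between f and h, so a smaller value corresponds to a more pointed chin.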
Table 2. Test of normality for fixation duration and fixation count.
| Facial Feature | Fixation Duration: Statistic | Fixation Duration: Sig. | Fixation Count: Statistic | Fixation Count: Sig. |
|---|---|---|---|---|
| Right eye | 0.185 | 0.028 | 0.186 | 0.028 |
| Left eye | 0.242 | 0.001 | 0.223 | 0.004 |
| Right eyebrow | 0.094 | 0.628 | 0.105 | 0.495 |
| Left eyebrow | 0.177 | 0.041 | 0.167 | 0.062 |
| Right ear | 0.119 | 0.338 | 0.133 | 0.221 |
| Left ear | 0.186 | 0.027 | 0.174 | 0.046 |
| Nose | 0.116 | 0.367 | 0.125 | 0.284 |
| Chin | 0.149 | 0.124 | 0.149 | 0.125 |
| The entire face | 0.067 | 0.932 | 0.065 | 0.947 |
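Table 2 determines which statistical test each AOI receives: AOIs whose fixation-duration distribution departs from normality (Sig. < 0.05) are compared with the non-parametric Mann–Whitney U test (Table 3), and the remaining AOIs with t-tests (Table 4). A minimal sketch of that decision rule, using the Sig. values transcribed from the table:

```python
# Sig. values for fixation duration, transcribed from Table 2.
normality_sig = {
    "Right eye": 0.028, "Left eye": 0.001, "Right eyebrow": 0.628,
    "Left eyebrow": 0.041, "Right ear": 0.338, "Left ear": 0.027,
    "Nose": 0.367, "Chin": 0.124, "The entire face": 0.932,
}

# AOIs that fail the normality test get the non-parametric comparison.
chosen_test = {aoi: ("Mann-Whitney U" if sig < 0.05 else "t-test")
               for aoi, sig in normality_sig.items()}
```

This reproduces the split used below: the right eye, left eye, left eyebrow, and left ear go to the Mann–Whitney U tests of Table 3, while the remaining AOIs go to the t-tests of Table 4.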
Table 3. Mann–Whitney U test on fixation duration, investigating the left and right eyes, left eyebrow, and left ear.
| Comparison | Right Eye: U | Right Eye: Sig. | Left Eye: U | Left Eye: Sig. | Left Eyebrow: U | Left Eyebrow: Sig. | Left Ear: U | Left Ear: Sig. |
|---|---|---|---|---|---|---|---|---|
| JC vs. AC | 348,497.5 | <0.001 | 345,325.0 | <0.001 | 381,898.5 | 0.03 | 410,905.5 | 0.52 |
| JC vs. JT | 514,790.0 | <0.001 | 522,422.5 | <0.001 | 393,642.0 | 0.30 | 443,302.0 | <0.001 |
| AC vs. AT | 554,677.5 | <0.001 | 562,698.0 | <0.001 | 381,122.0 | 0.03 | 409,835.5 | 0.60 |
| JT vs. AT | 389,275.5 | 0.15 | 388,674.5 | 0.13 | 367,991.0 | <0.001 | 378,402.5 | <0.001 |
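The U statistic above counts, over all pairs of observations drawn one from each condition, how often the first condition’s value exceeds the second’s, with ties counting one half. A minimal pure-Python sketch of the statistic (the study presumably used statistical software; this is illustrative only, and a p-value would additionally require the normal approximation or exact tables):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U for sample x versus sample y.

    Counts pairs (xi, yj) with xi > yj; tied pairs contribute 0.5.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

A useful sanity check on any implementation is the identity U(x, y) + U(y, x) = len(x) * len(y).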
Table 4. Independent-samples t-tests on fixation duration, investigating the right eyebrow, right ear, nose, chin, and the entire face.
| Comparison | Right Eyebrow: t | Sig. | Right Ear: t | Sig. | Nose: t | Sig. | Chin: t | Sig. | Entire Face: t | Sig. |
|---|---|---|---|---|---|---|---|---|---|---|
| JC vs. AC | −1.15 | 0.251 | 0.73 | 0.467 | −6.80 | <0.001 | −3.57 | <0.001 | 1.77 | 0.077 |
| JC vs. JT | −5.07 | <0.001 | 4.99 | <0.001 | −9.42 | <0.001 | −4.09 | <0.001 | 2.42 | 0.01 |
| AC vs. AT | −4.56 | <0.001 | −0.63 | 0.527 | −2.34 | 0.02 | −1.37 | 0.17 | −2.06 | 0.04 |
| JT vs. AT | −1.66 | 0.097 | −4.91 | 0.00 | −0.15 | 0.88 | −1.15 | 0.25 | −2.72 | 0.01 |
Table 5. Mann–Whitney U test on fixation count, investigating the left and right eyes and left ear.
| Comparison | Right Eye: U | Right Eye: Sig. | Left Eye: U | Left Eye: Sig. | Left Ear: U | Left Ear: Sig. |
|---|---|---|---|---|---|---|
| JC vs. AC | 348,800.0 | <0.001 | 344,243.0 | <0.001 | 410,454.5 | 0.56 |
| JC vs. JT | 507,711.0 | <0.001 | 520,087.0 | <0.001 | 443,092.5 | <0.001 |
| AC vs. AT | 550,995.0 | <0.001 | 561,807.0 | <0.001 | 409,679.5 | 0.61 |
| JT vs. AT | 391,895.5 | 0.23 | 388,051.5 | 0.12 | 377,887.0 | <0.001 |
Table 6. Independent-samples t-tests on fixation count, investigating the left and right eyebrows, right ear, nose, chin, and the entire face.
| Comparison | Right Eyebrow: t | Sig. | Left Eyebrow: t | Sig. | Right Ear: t | Sig. | Nose: t | Sig. | Chin: t | Sig. | Entire Face: t | Sig. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| JC vs. AC | −1.01 | 0.31 | −3.13 | 0.002 | 0.84 | 0.40 | −6.81 | <0.001 | −3.43 | <0.001 | 0.03 | 0.98 |
| JC vs. JT | −4.77 | <0.001 | −1.19 | 0.23 | 5.09 | <0.001 | −9.30 | <0.001 | −4.04 | <0.001 | 0.63 | 0.53 |
| AC vs. AT | −4.81 | <0.001 | −0.93 | 0.35 | −0.88 | 0.378 | −2.12 | 0.03 | −1.98 | 0.48 | −1.61 | 0.11 |
| JT vs. AT | −2.15 | 0.03 | −3.22 | 0.001 | −5.26 | <0.001 | −0.11 | 0.91 | −1.71 | 0.08 | −2.24 | 0.02 |
Table 7. Correlation coefficients (Pearson’s r) between fixation duration and the geometrical features of each AOI. Coefficients with r > 0.5 or r < −0.5 indicate a high positive or negative correlation, respectively.
| AOI | Geometrical Feature | JC: Pearson’s r | AC: Pearson’s r | JT: Pearson’s r | AT: Pearson’s r |
|---|---|---|---|---|---|
| Left Eye | LA2 | 0.75 *** | 0.72 *** | 0.23 | 0.18 |
| Left Eye | R2 | 0.64 ** | 0.57 ** | 0.12 | 0.22 |
| Right Eye | LA2 | 0.75 *** | 0.72 *** | 0.23 | 0.12 |
| Right Eye | R1 | 0.64 *** | 0.57 ** | 0.18 | 0.22 |
| Left Ear | LB3 | 0.55 ** | 0.66 ** | 0.34 | 0.55 ** |
| Right Ear | LB3 | −0.11 | 0.30 | 0.04 | 0.13 |
| Left Eyebrow | LA2 | 0.03 | 0.89 *** | 0.08 | 0.39 |
| Left Eyebrow | A5 | −0.20 | 0.74 *** | 0.09 | 0.16 |
| Left Eyebrow | A7 | −0.14 | 0.60 *** | 0.22 | 0.35 |
| Right Eyebrow | LA1 | 0.19 | 0.90 *** | 0.00 | 0.55 ** |
| Right Eyebrow | A5 | −0.17 | 0.72 *** | 0.25 | 0.49 |
| Right Eyebrow | A7 | −0.07 | 0.61 *** | 0.14 | 0.27 |
| Nose | LB2 | 0.68 ** | 0.53 ** | 0.24 | 0.36 |
| Nose | R2 | 0.64 ** | 0.49 | 0.46 | −0.13 |
| Nose | A1 | −0.48 | −0.17 | 0.04 | −0.78 *** |
| Nose | A2 | −0.01 | 0.67 ** | −0.13 | −0.64 ** |
| Nose | A3 | −0.42 | −0.70 *** | −0.60 ** | 0.04 |
| Chin | LA4 | 0.38 | −0.46 | 0.62 ** | 0.50 ** |
| Chin | LA5 | 0.52 ** | 0.80 *** | 0.37 | 0.33 |
| Chin | LA6 | 0.79 *** | 0.78 *** | 0.60 ** | 0.48 |
Japanese animation character (JC), Japanese real faces (JT), American animation character (AC), and American real faces (AT). p value: ** p < 0.01, *** p < 0.001. Pearson’s r values are used to measure the strength and direction of the linear relationship between fixation duration and geometrical features. Geometric features (e.g., LA1, R1) are coded identifiers for specific landmarks or areas measured in the study.
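The Pearson’s r values in Table 7 measure the linear association between a geometric feature (one value per facial sample) and the fixation duration that sample’s AOI received. A minimal sketch of the coefficient, applied to hypothetical paired samples:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Read against Table 7: r near +1 (e.g., LA6 for JC, r = 0.79) means samples with a longer nose-to-chin length attracted longer fixations, while r near −1 (e.g., A1 for AT, r = −0.78) means larger values of that angle attracted shorter fixations.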
Chen, Z.-L.; Chang, K.-M. Exploring the Correlation Between Gaze Patterns and Facial Geometric Parameters: A Cross-Cultural Comparison Between Real and Animated Faces. Symmetry 2025, 17, 528. https://doi.org/10.3390/sym17040528