Article

Significant Measures of Gaze and Pupil Movement for Evaluating Empathy between Viewers and Digital Content

1 Department of Emotion Engineering, Sangmyung University, Seoul 03016, Korea
2 Department of Human-Centered Artificial Intelligence, Sangmyung University, Seoul 03016, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2022, 22(5), 1700; https://doi.org/10.3390/s22051700
Submission received: 31 December 2021 / Revised: 7 February 2022 / Accepted: 18 February 2022 / Published: 22 February 2022
(This article belongs to the Special Issue Human-Gait Analysis Based on 3D Cameras and Artificial Vision)

Abstract

The success of digital content depends largely on whether viewers empathize with its stories and narratives. Researchers have investigated the elements that may elicit empathy from viewers. The empathic response involves affective and cognitive processes and is expressed through multiple verbal and nonverbal modalities. In particular, eye movements communicate emotions and intentions and may reflect empathic status. This study explores how eye movement features change when a viewer empathizes with a video's content. Seven feature variables of eye movements (change in pupil diameter, peak pupil dilation, very short, medium, and overlong fixation durations, saccadic amplitude, and saccadic count) were extracted from 47 participants who viewed eight videos (four empathic and four non-empathic) distributed across a two-dimensional emotion space (arousal and valence). The results showed that viewers' saccadic amplitude and peak pupil dilation increased in the empathic condition. The fixation time and pupil size change showed limited significance, and whether there were asymmetric pupil responses between the left and right pupils remained inconclusive. Our investigation suggests that saccadic amplitude and peak pupil dilation are reliable measures for recognizing whether viewers empathize with content. The findings provide physiological evidence based on eye movements that both affective and cognitive processes accompany empathy during media consumption.

1. Introduction

We live in a society with an overflow of media content through various media forms. Digital content consists of a stream of information in digital format that can be stored, streamed, and broadcast. Whereas digital content may include data devoid of any affective characteristics (e.g., weather information and geological information), some content, such as drama and movies, highly depends on its emotional value.
Digital content has a spectrum of affective characteristics depending on the purpose of the medium (drama, movie, ads). Most digital content shares a common, pervasive goal: to produce media that many viewers can relate to, understand, and engage with emotionally. For example, the most viewed Netflix program in 2021 was South Korea's Squid Game. Commentators argue that Squid Game became popular because viewers readily empathized with the characters' emotional states and narratives. The psychology and physiology of empathy have long been studied in clinical psychology, social development, and neuroscience. While there is no consensus on the definition of empathy, researchers agree that empathy has multiple subcomponents [1,2,3], and some critical elements of empathy (e.g., recognition, process, outcome, and response) are commonly identified (for an extensive review of empathy as a concept, see [4]).
Based on the most prominent empathy theories [1,3,5,6], affective and cognitive processes are the underlying mechanisms that produce empathic outcomes. Affective empathy generally connotes an observer’s visceral reaction to the target’s affective state. Cognitive empathy involves taking the target’s perspective and drawing inferences about their thoughts, feelings, and characteristics.
The discovery of mirror neurons in monkeys [8] provided neuroscientists with underlying neurological evidence for empathy [7]. Overlapping brain patterns are observed when an observer perceives the same emotions as a target, suggesting shared affective neural networks [9,10,11]. In this paper, we first discuss related work, summarizing significant gaze and pupil movement measures and comparing eye movement studies on digital content. We then explain our experimental design and protocol, followed by the data analysis. We conclude with a discussion of the implications of the findings and the limitations of the study, and a call for future research.

2. Related Work

Attention to visual information is a prerequisite for recognition. The cortical area known as the frontal eye field (FEF) plays a vital role in controlling visual attention and eye movements [12]. The fovea occupies only a relatively small part of the retina, but its cone cells are dense enough to resolve the visual world in great detail [13]. Owing to the small fovea, the brain makes significant decisions when controlling eye movements. Each saccade is a decision: we must decide where and when to move our eyes [14,15]. Personalities, desires, goals, beliefs, expectations, predictions, memories, and intentions can all influence these decisions.
Gaze is a potent social cue: mutual gaze often signifies threat, whereas gaze aversion conveys submission or avoidance [16,17,18]. Processing eye gaze is a foundation for social interaction, and explicating the neural substrate of gaze processing is an important step in understanding the neuroscience of social cognition [19,20]. Gaze tracking monitors the user's attention and interests and can personalize an agent's behaviors [21]; it is an essential tool for detecting what users attend to and which content they focus on. For example, it is critical to analyze consumers' attention when an advertisement is shown [22].
Researchers have long confirmed through empirical evidence that eyes can perceive and express emotions. A classic study by Hess [23] demonstrated that pleasant imagery leads to pupil dilation. The relationship between pupil modulation and emotion perception develops with age [24]. Pupil size is generally regarded as a nonverbal communication channel in which social signals are exchanged between individuals at an unconscious level (i.e., non-reportable). Specifically, a person’s feelings or attitudes are embedded in pupil size as a source of information [25]. Involuntary pupil size change is also regulated by the autonomic nervous system.
Pupil dilation seems to occur when people feel attraction [25], surprise or uncertainty [26], or social interest. Active storage or retrieval of memories also leads to pupil dilation and an increase in cognitive load [27,28]. A pleasant emotion leads to greater pupil dilation than an unpleasant one [24].
In the context of empathy, the dilation pattern seems to get synchronized between conversation partners if the dyadic pair shares attention (i.e., “tunes in”) and gets engaged, evident in the shared emotional peak found in a video analysis by Kang [29]. Kang also found that pupil synchronization was the strongest among the high-expressive and high-empathic participant groups. Pupil synchronization also interacts with the degree of trust [30] and facial expression of the conversation pair [31]. For example, sad faces elicit more pupil synchronization than happy faces.
In short, the analysis of eye movement features is critical for understanding the degree of empathy among individuals. Eye features (i.e., gaze and pupil movement) change when an observer empathizes with an individual. However, research on whether eye features change when empathizing with content is in its infancy. Table 1 compares the most recent studies on eye movement features when viewing media. Little research has analyzed eye movement features between a person and media or suggested key indicators for practical use. Furthermore, except for [32], no study has investigated the relationship between gaze and pupil movement for evaluating empathy. In addition, the dependent measures of most studies are limited to a single index (e.g., they investigated only gaze points or fixation time).
Our study sought to identify significant gaze and pupil movement measures for assessing empathy between viewers and digital content. To the best of our knowledge, this is the first study to investigate the relationship between significant gaze and pupil movements and empathic digital content. In addition, the study analyzes a full range of significant measures involving gaze and pupil movements (change in pupil diameter, peak pupil dilation, very short, medium, and overlong fixation durations, saccadic amplitude, and saccadic count) for use when assessing digital content.

3. Materials and Methods

We adopted Russell’s two-dimensional model [37], where emotional states can be defined at any valence and arousal level. We invited participants to view empathic or non-empathic emotion-eliciting videos with varying valence (i.e., from unpleasant to pleasant) and arousal levels (i.e., from relaxed to aroused). Our research aimed to verify the following nine hypotheses. Based on the aforementioned literature review, we hypothesized a significant difference in eye movement features (pupil size, fixation, and saccade) when a person views digital content:
Hypothesis 1 (H1).
… between the empathic and non-empathic conditions in all videos (i.e., pleasant-aroused, pleasant-relaxed, unpleasant-aroused, and unpleasant-relaxed).
Hypothesis 2 (H2).
… between empathic and non-empathic conditions in aroused videos.
Hypothesis 3 (H3).
… between empathic and non-empathic conditions in relaxed videos.
Hypothesis 4 (H4).
… between empathic and non-empathic conditions in pleasant videos.
Hypothesis 5 (H5).
… between empathic and non-empathic conditions in unpleasant videos.
Hypothesis 6 (H6).
… between empathic and non-empathic conditions in pleasant-aroused videos.
Hypothesis 7 (H7).
… between empathic and non-empathic conditions in pleasant-relaxed videos.
Hypothesis 8 (H8).
… between empathic and non-empathic conditions in unpleasant-relaxed videos.
Hypothesis 9 (H9).
… between empathic and non-empathic conditions in unpleasant-aroused videos.

3.1. Stimuli Selection

In this study, we edited video clips (e.g., dramas or movies) to elicit empathy from the participants. The content used to induce the empathic conditions was collected according to the two-dimensional emotion model. To ensure that the empathic and non-empathic videos were effective, we conducted a stimulus selection experiment before the main experiment. We selected 20 edited drama or movie clips containing emotions as candidates, five for each quadrant of the two-dimensional model. Thirty participants viewed the emotional videos and responded to a subjective questionnaire; they received $20 for their participation. For each condition, the video with the highest empathic score among the five candidates was selected as the empathic stimulus in the main experiment, and conversely, the video with the lowest empathic score was chosen as the non-empathic stimulus. That is, a pair of empathic and non-empathic videos was selected for each of the four quadrants of the two-dimensional model, yielding eight stimuli in total for the main experiment. All stimuli are available online (see Supplementary Materials).

3.2. Experiment Design

When the observer is interested in the target stimulus, the observer’s eye movement characteristics change as a function of the target’s emotional characteristics (empathy, valence, and arousal). To understand the nature of such a change, the main experiment was a factorial design of two (empathy: empathic and non-empathic) × two (valence: pleasant and unpleasant) × two (arousal: aroused and relaxed) independent variables. A t-test was used to test the difference in eye movement-related dependent measures (pupil size, fixation, and saccade) between the empathic and non-empathic conditions.

3.3. Participants

We conducted an a priori power analysis using G*Power with power = 0.8, α = 0.05, and d = 0.6 (independent t-test, two-tailed). The results suggested that an N of approximately 46 is needed to achieve appropriate statistical power. Therefore, 47 university students were recruited for this study. Participants' ages ranged from 20 to 30 years (mean = 28, STD = 2.9), with 20 (43%) men and 27 (57%) women. To ensure reliable recognition of the visual stimuli, we selected participants with a corrected vision of 0.8 or above and no vision deficiency. We asked participants to sleep sufficiently and to abstain from alcohol, caffeine, and smoking the day before the experiment. Because the experiment requires valid recognition of participants' facial expressions, we limited the use of glasses and cosmetic makeup. All participants were briefed on the purpose and procedure of the experiment and signed a consent form. They were compensated for their participation.
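The sample size calculation can be cross-checked with a normal-approximation sketch using only the Python standard library. This is not the G*Power computation itself (G*Power uses the exact noncentral t distribution, and the reported N depends on the test family and options selected), but it lands in the same neighborhood:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-tailed independent
    t-test via the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for power = 0.80
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.6))  # 44 (per group, normal approximation)
```

The exact t-based figure from G*Power is slightly larger than this approximation.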

3.4. Experimental Protocol

Figure 1 outlines the experimental process and the environment used in this study. Participants were asked to sit 1 m away from a 27-inch LCD monitor. A webcam was installed on the monitor. Participants’ brainwaves (EEG cap 18 ch), facial expressions (webcam), and eye movements (gaze tracking device) were acquired in addition to subjective responses to a questionnaire. We set the frame rate of the gaze tracking device to 60 frames per second. The participants viewed eight emotion-eliciting (empathy or non-empathy) videos and responded to a questionnaire after each viewing. We excluded the brainwave data from the analysis in this paper.
We gathered the participants’ subjective responses using the Consumer Empathic Response to Advertising Scale (CERA), a comprehensive battery of measures involving affective and cognitive facets of empathy [38,39,40]. We adopted an empirically validated questionnaire based on the ethnicity of Korean participants, which consisted of nine items (see Table 2). The factor loading exceeded 0.4, and the Cronbach’s alpha exceeded 0.8. Each construct was measured on a seven-point Likert scale.
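Cronbach's alpha, used above to check the internal consistency of the questionnaire, can be computed from the per-item variances and the variance of the total score. A minimal sketch (the function name and example data are ours, not from the paper):

```python
from statistics import variance

def cronbach_alpha(scores):
    """scores: one row per participant, each row a list of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item sample variance
    total_var = variance([sum(row) for row in scores])   # variance of summed scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly correlated items yield the maximum alpha of 1.0
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(scores), 3))  # 1.0
```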

3.5. Feature Extraction of Eye Movement

Eye movement features play a vital role in face processing and social communication [41,42]. Eye gaze is one of the most important facial cues for communicating with consumers [43,44]. Gaze direction is associated with viewer cognition, such as visual attention and emotion. Gaze movements convey emotions and intentions and can reflect empathic conditions. We selected seven feature measures of gaze movement and pupil characteristics for extraction and analysis, as outlined in Table 3. We did not measure pupil response time or decision time.

3.5.1. Change in Pupil Diameter

Pupillometry, the measurement of changes in pupil diameter, is a relatively old method for inferring different types of activity in the brain. Pupil dilation is an autonomic sympathetic nervous system response that can provide attention, interest, or emotion indices, and is correlated with mental workload and arousal [45].
Pupil responses may be a useful alternative or an addition to subjective measures. Some cognitive and emotional events occur outside our conscious control and can cause pupils to constrict and expand. UX researchers have recorded data from these events to detect fear, anxiety, mental strain, or task difficulty [46]. In addition, because it is nearly impossible to mask implicit cognitive responses, biases such as social desirability that prevent people from accurately informing researchers of their experiences are of little concern during analysis.
Chatham, Frank, and Munakata [47] established the utility of pupillometry for assessing the temporal dynamics of cognitive control. Changes in central nervous system activity that are systematically related to cognitive processing may be extracted from the raw pupillary record by performing time-locked averaging around critical events in the information-processing task. A task-evoked pupillary response bears the same relationship to the pupillary record from which it is derived as an event-related brain potential does to spontaneous electroencephalographic (EEG) activity. With averaging, short-latency (i.e., onset between 100 and 200 ms) phasic task-evoked dilations appear, which terminate rapidly following the completion of processing [48]. In pupillometry, participants are calibrated and then look at a fixation cross on a blank page for one second to obtain a baseline pupil diameter measurement [49].
We were interested in the relationship between pupil size and the empathic and non-empathic video conditions. Since there is evidence that the left and right pupils may differ [50], we explored possible differences in the responses of the left and right pupils. The perception-action model has been adopted by many fields over time, and perception and action share a common code of representation in the brain [51,52]. The left hemisphere processes detailed information, whereas the right hemisphere is selective for more holistic information [53]. The left prefrontal area is more active in response to semantic cues, whereas the right prefrontal area is more active in generating information from memory; both are active when a task requires voluntary or imagined actions [54,55]. While the left hemisphere subserves positive emotions, the right hemisphere may subserve fearful or negative emotions [56,57]. Owing to such differential activation as a function of emotion, and because pupil size reflects brain activity, we speculated that pupil diameter changes may differ between the empathic and non-empathic conditions. We calculated the mean baseline pupil diameter for each participant and then computed the change in the left pupil diameter (CLPD) and the change in the right pupil diameter (CRPD) between the baseline and the stimulus period. The mean values of the CLPD and CRPD across all participant data served as dependent measures.
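The baseline-referenced change measure described above amounts to subtracting the mean of the one-second baseline from the mean diameter during the stimulus, computed separately per eye. A minimal sketch (function and variable names are ours; the paper does not publish its analysis code):

```python
from statistics import mean

def pupil_diameter_change(baseline_mm, stimulus_mm):
    """Change in pupil diameter: mean diameter during the stimulus
    minus the mean of the 1-s baseline (fixation cross) samples."""
    return mean(stimulus_mm) - mean(baseline_mm)

# Hypothetical left-eye samples in millimeters
baseline = [3.0, 3.1, 2.9]
stimulus = [3.5, 3.6, 3.7]
clpd = pupil_diameter_change(baseline, stimulus)  # CLPD for this trial
print(round(clpd, 2))  # 0.6
```

The same function applied to right-eye samples yields the CRPD.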

3.5.2. Peak of Pupil Dilation

The decision-making process drives the time course of the pupil response. The pupil response reveals properties of decisions, such as perceived emotional valence and confidence in the assessment [47,58]. Beatty [48] reviewed the empirical data from task-evoked pupillary response (TEPR) studies and concluded that it took six to eight seconds for participants to recognize and respond during cognitive tasks. The most prominent TEPR research [59,60] set the window size of pupil dilation experiments to eight seconds. We also set the window size to eight seconds because the empathic response involves a cognitive process [5]. The peak value of pupil dilation was extracted in the following steps.
Step one: identify the peak in each eight-second window
Figure 2 shows a schematic diagram of the peaks found every eight seconds in the raw data. However, the peaks found this way may include false peaks. To counter false peaks, we compared the standard deviation (STD) of the peak positions across all 47 participants.
Step two: identify the true peak
Because the peak with the smallest dispersion has the highest probability of being a true peak, we extracted the peak feature measures with the lowest STD for each of the empathic and non-empathic conditions. The extracted measures were peak left pupil dilation (PLPD) and peak right pupil dilation (PRPD), as shown in Figure 3, Figure 4, Figure 5 and Figure 6. We hypothesized that the maximum pupil dilation would be greater in the empathic condition than in the non-empathic condition.
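The two steps above can be sketched as follows. This is a simplified illustration with our own function names, assuming the 60 Hz frame rate stated in the protocol (480 samples per eight-second window):

```python
from statistics import pstdev

FS = 60        # gaze tracker frame rate (Hz)
WIN = 8 * FS   # eight-second window in samples

def window_peaks(trace):
    """Step one: (index, value) of the maximum in each 8-s window."""
    peaks = []
    for start in range(0, len(trace) - WIN + 1, WIN):
        seg = trace[start:start + WIN]
        i = max(range(WIN), key=seg.__getitem__)
        peaks.append((start + i, seg[i]))
    return peaks

def most_consistent_window(traces):
    """Step two: the window whose peak position has the lowest STD
    across participants is the most likely true peak."""
    per_participant = [window_peaks(t) for t in traces]
    n_windows = len(per_participant[0])
    stds = [pstdev([p[w][0] for p in per_participant])
            for w in range(n_windows)]
    return stds.index(min(stds))

# Two synthetic participants: window 0 peaks agree, window 1 peaks do not
t1 = [0.0] * 960; t1[100] = 1.0; t1[600] = 1.0
t2 = [0.0] * 960; t2[100] = 1.0; t2[700] = 1.0
print(most_consistent_window([t1, t2]))  # 0
```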

3.5.3. Fixation Duration

The time between two saccades is generally called the fixation duration. This event is closely related to cognitive processing in alert subjects [61,62,63]. Fixations of different lengths may reflect different neuronal processes, as observed in various studies [64,65,66,67]. Very short fixations (<150 ms), so-called express fixations, may constitute a distinct category caused by low-level visuomotor behavior; they could represent reflexive, unconscious, or noncognitive aspects of behavioral control.
Media-related fixation involves cognitive fixations (between 150 and 900 ms), positioned between very short (<150 ms) and overlong (>900 ms) fixations [68]. Medium fixation has a reduced fatigue rate compared to short or long fixation [69,70,71]. Galley and Andres [71] reported that visual processing of complex scenes with rapidly changing stimuli (e.g., city rides) typically leads to fixations of between 200 and 400 ms, which exceeds the fixation duration of approximately 250 ms during reading. Fixation is associated with content-related identification and cognitive processing; therefore, we focused on fixation durations ranging from 150 ms to 900 ms in this study. A very short fixation (<150 ms) is insufficient to extract relevant information [65]. For excessively long fixations (>900 ms), no general functional interpretation has yet been established, apart from unconscious driving or the start of a low-arousal phase during microsleep.
Three feature measures were extracted: very short, medium, and overlong fixation duration. For each, we calculated a percentage dependent measure (%), the fixation time in that range divided by the total time. We speculated that empathic videos may elicit more cognitive engagement and increase medium fixation compared to non-empathic videos.
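The three fixation bands and their percentage measures might be computed as follows. This is a sketch under the thresholds stated above; for simplicity it normalizes by total fixation time, and the names are ours:

```python
def fixation_percentages(durations_ms):
    """Classify fixations as very short (<150 ms), medium (150-900 ms),
    or overlong (>900 ms), and return each band's share of total
    fixation time as a percentage."""
    bands = {"very_short": 0.0, "medium": 0.0, "overlong": 0.0}
    for d in durations_ms:
        if d < 150:
            bands["very_short"] += d
        elif d <= 900:
            bands["medium"] += d
        else:
            bands["overlong"] += d
    total = sum(durations_ms)
    return {k: 100.0 * v / total for k, v in bands.items()}

# Hypothetical fixation durations for one viewing (ms)
shares = fixation_percentages([100, 300, 1000])
print({k: round(v, 1) for k, v in shares.items()})
```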

3.5.4. Saccade

Experimental studies of saccadic eye movements have produced a considerable amount of data. For eye movements elicited by specific visual targets, the significant measures are saccadic amplitude and saccadic count. The amplitude is the angle in degrees between two fixation points [61]. The measures were computed by the GazePoint equipment, which averages eye positions. We hypothesized that the saccadic amplitude would be greater in the empathic condition than in the non-empathic condition.
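To illustrate the amplitude measure, the visual angle between two fixation points can be derived from their on-screen separation and the viewing distance. This is a geometric sketch under our own assumptions (gaze coordinates in screen centimeters and the 1 m viewing distance from the protocol), not the GazePoint equipment's internal calculation:

```python
from math import atan, degrees, hypot

def saccadic_amplitude(p1, p2, viewing_distance_cm=100.0):
    """Visual angle (degrees) between two fixation points (x, y),
    both given in screen centimeters."""
    separation = hypot(p2[0] - p1[0], p2[1] - p1[1])
    return degrees(atan(separation / viewing_distance_cm))

# A separation equal to the viewing distance subtends 45 degrees
print(saccadic_amplitude((0.0, 0.0), (100.0, 0.0)))  # 45.0
```

Summing per-saccade amplitudes over a whole video yields aggregate values of the magnitude reported in the Results.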

4. Results

The results are twofold: the analysis of subjective evaluation and eye movement features.

4.1. Subjective Evaluation

A t-test was used to test the differences between the key features in the empathic and non-empathic conditions.

4.1.1. The Analysis of Arousal Scores

We analyzed the differences in subjective arousal scores between the empathic and non-empathic conditions in four quadrants in the two-dimensional emotion model (i.e., pleasant-aroused, pleasant-relaxed, unpleasant-relaxed, and unpleasant-aroused; see Figure 7).
The results indicated that the arousal scores of the empathic condition in the pleasant-relaxed content were significantly lower than those in the non-empathic condition. Conversely, the arousal scores of the empathic condition in the unpleasant-relaxed content were significantly higher than those in the non-empathic condition. We found no significant difference between pleasant-aroused and unpleasant-aroused content.

4.1.2. The Analysis of Valence Scores

We analyzed the differences in subjective valence scores between the empathic and non-empathic conditions (see Figure 8). The results indicated that the valence scores of the empathic condition in the pleasant-aroused and pleasant-relaxed content were significantly higher than those in the non-empathic condition. Conversely, the valence scores of the empathic condition in the unpleasant-aroused content were significantly lower than those of the non-empathic condition. We found no significant differences in the unpleasant-relaxed content.

4.1.3. The Analysis of Cognitive and Affective Empathy Scores

We analyzed the differences in subjective cognitive and affective scores between the empathic and non-empathic conditions (Figure 9 and Figure 10). The results indicated that the cognitive empathy scores of the empathic condition were significantly higher than those of the non-empathic condition for all four contents. Similarly, the affective empathy scores of the empathic condition were significantly higher than those of the non-empathic condition for all content except the pleasant-aroused content. In summary, the videos generally induced the target level of empathy (empathic or non-empathic) in the participants, as intended.

4.2. Eye Movement Features

The hypotheses were tested with t-tests at an alpha level of 0.05 per test. The results for the key features of the nine groups are listed in Table 4, Table 5, Table 6 and Table 7. Overall, the saccadic amplitude measure (i.e., the mean angle between two fixation points) was significantly greater in the empathic condition than in the non-empathic condition for all content except aroused and relaxed content. In addition, pupil dilation showed a significant increase in the empathic condition compared to the non-empathic condition for aroused and pleasant content. The peak of pupil dilation ranged from 5.70 mm to 5.88 mm in the empathic condition. A detailed analysis of each content group follows.
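The group comparisons reported below follow the standard independent-samples t statistic. A minimal standard-library sketch (the data are hypothetical; in practice a library such as SciPy would also supply the p-value):

```python
from math import sqrt
from statistics import mean, variance

def t_statistic(a, b):
    """Student's t for two independent samples with pooled variance."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical per-participant feature values for the two conditions
empathic     = [1.0, 2.0, 3.0, 4.0, 5.0]
non_empathic = [3.0, 4.0, 5.0, 6.0, 7.0]
t = t_statistic(empathic, non_empathic)
print(round(t, 2))  # -2.0
```

The resulting |t| is compared against the critical value for the relevant degrees of freedom at alpha = 0.05, two-tailed.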

4.2.1. All-Emotions Content

For all-emotions content, the results indicated that the peak of right pupil dilation (Table 5) and the saccadic amplitude (Table 7) differed significantly between the empathic and non-empathic conditions (p < 0.001). The saccadic amplitude was greater in the empathic condition (M = 193.74, STD = 2.45) than in the non-empathic condition (M = 165.86, STD = 3.12; p < 0.001; see Table 7). However, the saccadic count did not show a significant difference, nor did the fixation time in any of the three ranges (very short, medium, overlong).
These results indicate that the stronger the empathic condition, the more active the saccadic jumps, which may imply that empathic content is more interesting and engaging than non-empathic content; that is, viewers engage more in cognitive and attentive processes.

4.2.2. Pleasant and Unpleasant Content

For pleasant content, the results indicated that the peaks of left and right pupil dilation and the saccadic amplitude differed significantly between the empathic and non-empathic conditions (p < 0.05). This is consistent with the literature on pupil dilation in response to pleasant images [25] and happy facial expressions [24].
For unpleasant content, the mean medium fixation was significantly smaller in the non-empathic condition (M = 610.11, STD = 12.48) than in the empathic condition (M = 649.50, STD = 12.36; p < 0.05). In addition, the mean overlong fixation was smaller in the empathic condition (M = 51.38, STD = 2.25) than in the non-empathic condition (M = 59.58, STD = 3.0; p < 0.05).

4.2.3. Aroused and Relaxed Content

For aroused content, the results indicated that the changes in left and right pupil diameter and the peaks of left and right pupil dilation differed significantly between the empathic and non-empathic conditions (p < 0.05). Specifically, the change in left pupil diameter was significantly higher in the non-empathic condition (M = 0.39, STD = 0.06) than in the empathic condition (M = 0.22, STD = 0.04; p < 0.01). In addition, the change in right pupil diameter was significantly higher in the non-empathic condition (M = 0.38, STD = 0.06) than in the empathic condition (M = 0.29, STD = 0.05; p < 0.05). Overall, significant pupil dilation was limited to the pleasant and aroused conditions. The implications are addressed in the Discussion.
For relaxed content, results indicated that the change in left pupil diameter, overlong fixation, and saccadic amplitude were significantly different between the empathic and non-empathic conditions.

4.2.4. Empathic and Non-Empathic Content

Figure 11 and Figure 12 depict the relationship between eye-movement feature variables in a two-dimensional emotion map. For pleasant-aroused content, the results indicated that changes in left and right pupil diameter, peak left pupil dilation, medium fixation duration, overlong fixation duration, saccadic amplitude, and saccadic count were significantly different between the empathic and non-empathic conditions (p < 0.05).
For pleasant-relaxed content, the results indicated that the changes in left and right pupil diameter, medium fixation duration, and saccadic amplitude were significantly different between the empathic and non-empathic conditions (p < 0.05).
For unpleasant-relaxed content, the results indicated that peak left pupil dilation, medium fixation duration, saccadic amplitude, and saccadic count were significantly different between the empathic and non-empathic conditions (p < 0.05).
For unpleasant-aroused content, the results indicated that PLPD and saccadic amplitude were significantly different between the empathic and non-empathic conditions (p < 0.05).

5. Conclusions and Discussion

To the best of our knowledge, this is the first study to suggest significant measures involving gaze and pupil movements for use when assessing empathic digital content. We analyzed a full range of dependent measures (change in pupil diameter, peak pupil dilation, very short, medium, and overlong fixation durations, saccadic amplitude, and saccadic count) to cover all aspects of gaze and pupil movements. Our study considered more indices than previous studies (Table 1).
The majority (H1, H3, H4, H5, H6) of the hypotheses on peak pupil dilation and saccadic amplitude were supported. In conclusion, we found that saccadic amplitude and peak pupil dilation are two significant measures that can be used to assess whether viewers empathize with digital content.
Saccadic amplitude measures showed that, except for aroused and relaxed content, the average angle between two fixation points is significantly greater in the empathic condition than in the non-empathic condition. Because the empathic video was designed to induce empathy, as confirmed by the manipulation check, participants may have engaged in the story or narrative of the stimuli video (e.g., drama or movie). Participants may be “tuned” into digital content and initiate active information-seeking behavior, which results in a more dynamic saccadic jump in the region of interest. Participants also had a longer fixation with the empathic video than with non-empathic videos, albeit only with unpleasant and pleasant videos.
Second, although not as substantial as with saccadic amplitude, pupil dilation showed a significant increment in the empathic condition compared to the non-empathic condition with aroused and pleasant videos. In general, the pupil dilates when the viewer is attracted or interested [25]. Empathic videos may certainly have drawn more attention. However, it is paramount to note that a higher form of empathy includes perspective taking [72]. Some stimuli may have induced the participants to refer to their past memories to understand the narrative, and memory retrieval is known to elicit pupil dilation [73] through increased cognitive load [27].
It is also interesting that differential pupil dilation between empathic and non-empathic conditions is limited to aroused and pleasant videos. This may be because of the main effect of pleasant images [25] and happy facial expressions [24] on pupil dilation. That is, other videos (unpleasant, relaxed) may have offset the dilation owing to the nature of the video. Further studies may design a more sensitive experiment with substantial statistical power.
Third, we did not find conclusive evidence of asymmetric pupil responses when viewing empathic digital content. This is consistent with the current literature, which suggests that pupil-size asymmetry is a physiological trait (e.g., related to gender or personality) [74] or is limited to conditions such as migraine and tension headache [75].
We acknowledge some limitations of the research. First, we have yet to unravel the physiological mechanisms behind the findings. Future studies may investigate the relationship between brainwaves and gaze movement through EEG data analysis.
Second, the videos were not qualitatively analyzed (for example, identifying emotional peaks or analyzing the actors' facial expressions) to cross-examine the content against the participants' responses. Empathy is a social behavior within a dyadic pair; examining the time-series relationship between the content and changes in the participants' gaze and eye movements merits further investigation.
Third, the current study acquired gaze data through a dedicated eye-tracking device, but future research may obtain data through a camera for better usability and ecological validity. For example, Naqvi et al. [76] proposed a fuzzy system-based method for target selection with camera-based gaze trackers, reporting better usability and performance than other gaze-tracking methods. Their fuzzy system uses three features (pupil size, gaze position, and texture information of the monitor image at the gaze target) to decide the user's target selection. Future studies of participants' empathic gaze movements may adopt such state-of-the-art camera-based gaze tracking.

Supplementary Materials

The following are available online at https://pan.baidu.com/s/1DmsCUAStDvKk_BHHnlwNrg?pwd=d8b2 (accessed on 31 December 2021): all videos used and all raw data collected in the experiment.

Author Contributions

J.Z.: conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing, visualization, project administration; S.P.: methodology, validation, formal analysis, investigation, writing, review, editing; A.C.: conceptualization, investigation, review, editing; M.W.: conceptualization, methodology, writing, review, supervision, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00986, Development of Interaction Technology to Maximize Realization of Virtual Reality Contents using Multimodal Sensory Interface) and partly by Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government (22ZS1100, Core Technology Research for Self-Improving Integrated Artificial Intelligence System).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Sangmyung University (protocol code C-2021-002, approved 9 July 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the subjects to publish this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoffman, M.L. Empathy and Moral Development: Implications for Caring and Justice; Cambridge University Press: Cambridge, UK, 2001.
  2. Davis, M.H. Measuring individual differences in empathy: Evidence for a multidimensional approach. J. Pers. Soc. Psychol. 1983, 44, 113.
  3. Preston, S.D.; De Waal, F.B.M. Empathy: Its ultimate and proximate bases. Behav. Brain Sci. 2002, 25, 1–20.
  4. Cuff, B.M.P.; Brown, S.J.; Taylor, L.; Howat, D.J. Empathy: A review of the concept. Emot. Rev. 2016, 8, 144–153.
  5. Hoffman, M.L. Interaction of affect and cognition in empathy. In Emotions, Cognition, and Behavior; Cambridge University Press: Cambridge, UK, 1985; pp. 103–131.
  6. Eisenberg, N.; Fabes, R.A.; Miller, P.A.; Fultz, J.; Shell, R.; Mathy, R.M.; Reno, R.R. Relation of sympathy and personal distress to prosocial behavior: A multimethod study. J. Pers. Soc. Psychol. 1989, 57, 55.
  7. De Vignemont, F.; Singer, T. The empathic brain: How, when and why? Trends Cogn. Sci. 2006, 10, 435–441.
  8. Rizzolatti, G.; Fadiga, L.; Gallese, V.; Fogassi, L. Premotor cortex and the recognition of motor actions. Cogn. Brain Res. 1996, 3, 131–141.
  9. Wicker, B.; Keysers, C.; Plailly, J.; Royet, J.-P.; Gallese, V.; Rizzolatti, G. Both of us disgusted in My insula: The common neural basis of seeing and feeling disgust. Neuron 2003, 40, 655–664.
  10. Keysers, C.; Wicker, B.; Gazzola, V.; Anton, J.-L.; Fogassi, L.; Gallese, V. A touching sight: SII/PV activation during the observation and experience of touch. Neuron 2004, 42, 335–346.
  11. Singer, T.; Seymour, B.; O’doherty, J.; Kaube, H.; Dolan, R.J.; Frith, C.D. Empathy for pain involves the affective but not sensory components of pain. Science 2004, 303, 1157–1162.
  12. Thompson, K.G.; Biscoe, K.L.; Sato, T.R. Neuronal basis of covert spatial attention in the frontal eye field. J. Neurosci. 2005, 25, 9479–9487.
  13. Curcio, C.A.; Allen, K.A. Topography of ganglion cells in human retina. J. Comp. Neurol. 1990, 300, 5–25.
  14. Tatler, B.W.; Brockmole, J.R.; Carpenter, R.H.S. LATEST: A model of saccadic decisions in space and time. Psychol. Rev. 2017, 124, 267.
  15. Carpenter, R.H.S. The neural control of looking. Curr. Biol. 2000, 10, R291–R293.
  16. Argyle, M.; Cook, M. Gaze and Mutual Gaze; Cambridge University Press: Cambridge, UK, 1976.
  17. Baron-Cohen, S.; Campbell, R.; Karmiloff-Smith, A.; Grant, J.; Walker, J. Are children with autism blind to the mentalistic significance of the eyes? Br. J. Dev. Psychol. 1995, 13, 379–398.
  18. Emery, N.J. The eyes have it: The neuroethology, function and evolution of social gaze. Neurosci. Biobehav. Rev. 2000, 24, 581–604.
  19. Hood, B.M.; Willen, J.D.; Driver, J. Adult’s eyes trigger shifts of visual attention in human infants. Psychol. Sci. 1998, 9, 131–134.
  20. Pelphrey, K.A.; Sasson, N.J.; Reznick, J.S.; Paul, G.; Goldman, B.D.; Piven, J. Visual scanning of faces in autism. J. Autism Dev. Disord. 2002, 32, 249–261.
  21. Wang, H.; Chignell, M.; Ishizuka, M. Empathic tutoring software agents using real-time eye tracking. In Proceedings of the 2006 Symposium on Eye Tracking Research & Applications, San Diego, CA, USA, 27–29 March 2006; pp. 73–78.
  22. Patterson, M.; Elliott, R. Negotiating masculinities: Advertising and the inversion of the male gaze. Consum. Mark. Cult. 2002, 5, 231–249.
  23. Hess, E.H. Attitude and pupil size. Sci. Am. 1965, 212, 46–55.
  24. Kret, M.E. The role of pupil size in communication. Is there room for learning? Cogn. Emot. 2018, 32, 1139–1145.
  25. Hess, E.H. The role of pupil size in communication. Sci. Am. 1975, 233, 110–119.
  26. Preuschoff, K.; Hart, B.M.; Einhauser, W. Pupil dilation signals surprise: Evidence for noradrenaline’s role in decision making. Front. Neurosci. 2011, 5, 115.
  27. Wei, Y. Explore the Blended Teaching Model from the Perspective of Cognitive Load. In Proceedings of the 5th International Conference on Education, Management, Arts, Economics and Social Science, Sanya, China, 10–11 November 2018; pp. 10–11.
  28. Hyönä, J.; Tommola, J.; Alaja, A.-M. Pupil dilation as a measure of processing load in simultaneous interpretation and other language tasks. Q. J. Exp. Psychol. 1995, 48, 598–612.
  29. Kang, O.; Wheatley, T. Pupil dilation patterns spontaneously synchronize across individuals during shared attention. J. Exp. Psychol. Gen. 2017, 146, 569.
  30. Kret, M.E.; Fischer, A.H.; De Dreu, C.K.W. Pupil mimicry correlates with trust in in-group partners with dilating pupils. Psychol. Sci. 2015, 26, 1401–1410.
  31. Harrison, N.A.; Wilson, C.E.; Critchley, H.D. Processing of observed pupil size modulates perception of sadness and predicts empathy. Emotion 2007, 7, 724.
  32. Zhang, J.; Wen, X.; Whang, M. Empathy evaluation by the physical elements of the advertising. Multimed. Tools Appl. 2021, 81, 2241–2257.
  33. Liang, S.; Liu, R.; Qian, J. Fixation prediction for advertising images: Dataset and benchmark. J. Vis. Commun. Image Represent. 2021, 81, 103356.
  34. Rizvi, W.H. Brand visual eclipse (BVE): When the brand fixation spent is minimal in relation to the celebrity. In Information Systems and Neuroscience; Springer: Berlin/Heidelberg, Germany, 2020; pp. 295–303.
  35. Li, Y.; Wang, G.; Gan, Q. Research on Cognitive Effects of Narrative Rhetoric in Print Advertisement Based on Eye Movement. In Proceedings of the E3S Web of Conferences, Odesa, Ukraine, 16 April 2021; EDP Sciences: Les Ulis, France, 2021.
  36. Wang, C.-C.; Hung, J.C. Comparative analysis of advertising attention to Facebook social network: Evidence from eye-movement data. Comput. Hum. Behav. 2019, 100, 192–208.
  37. Russell, J.A. A circumplex model of affect. J. Pers. Soc. Psychol. 1980, 39, 1161.
  38. Soh, H. Measuring consumer empathic response to advertising drama. J. Korea Contents Assoc. 2014, 14, 133–142.
  39. Salanga, M.G.C.; Bernardo, A.B.I. Cognitive empathy in intercultural interactions: The roles of lay theories of multiculturalism and polyculturalism. Curr. Psychol. 2019, 38, 165–176.
  40. Alcorta-Garza, A.; San-Martín, M.; Delgado-Bolton, R.; Soler-González, J.; Roig, H.; Vivanco, L. Cross-validation of the Spanish HP-version of the jefferson scale of empathy confirmed with some cross-cultural differences. Front. Psychol. 2016, 7, 1002.
  41. Patterson, M. Nonverbal Interpersonal Communication. In Oxford Research Encyclopedia of Communication; Oxford University Press: Oxford, UK, 2018.
  42. Hu, Z.; Gendron, M.; Liu, Q.; Zhao, G.; Li, H. Trait anxiety impacts the perceived gaze direction of fearful but not angry faces. Front. Psychol. 2017, 8, 1186.
  43. Verbeke, W.; Pozharliev, R.I. Preference Inferences from Eye-Related Cues in Sales-Consumer Settings: ERP Timing and Localization in Relation to Inferring Performance and Oxytocin Receptor (OXTR) Gene Polymorphisms. Int. J. Mark. Stud. 2016, 8, 1.
  44. Sajjacholapunt, P.; Ball, L.J. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness. Front. Psychol. 2014, 5, 5.
  45. Bergstrom, J.R.; Duda, S.; Hawkins, D.; McGill, M. Physiological response measurements. In Eye Tracking in User Experience Design; Elsevier: Amsterdam, The Netherlands, 2014; pp. 81–108.
  46. Ahern, S.; Beatty, J. Pupillary responses during information processing vary with Scholastic Aptitude Test scores. Science 1979, 205, 1289–1292.
  47. Chatham, C.H.; Frank, M.J.; Munakata, Y. Pupillometric and behavioral markers of a developmental shift in the temporal dynamics of cognitive control. Proc. Natl. Acad. Sci. USA 2009, 106, 5529–5533.
  48. Beatty, J. Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol. Bull. 1982, 91, 276.
  49. Wang, Q.; Wedel, M.; Huang, L.; Liu, X. Effects of model eye gaze direction on consumer visual processing: Evidence from China and America. Inf. Manag. 2018, 55, 588–597.
  50. Clausen-May, T. Teaching Maths to Pupils with Different Learning Styles; SAGE: Newcastle upon Tyne, UK, 2005.
  51. Allport, A. Selection for Action: Some Behavioral and Neurophysiological Considerations of Attention and Action; Routledge: London, UK, 2016.
  52. Heuer, H.; Sanders, A. Perspectives on Perception and Action; Routledge: London, UK, 2016.
  53. Liotti, M.; Tucker, D.M. Emotion in Asymmetric Corticolimbic Networks; APA: Worcester, MA, USA, 1995.
  54. Adolphs, R.; Damasio, H.; Tranel, D.; Cooper, G.; Damasio, A.R. A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. J. Neurosci. 2000, 20, 2683–2690.
  55. Decety, J.; Grezes, J.; Costes, N.; Perani, D.; Jeannerod, M.; Procyk, E.; Grassi, F.; Fazio, F. Brain activity during observation of actions. Influence of action content and subject’s strategy. Brain J. Neurol. 1997, 120, 1763–1777.
  56. Canli, T.; Desmond, J.E.; Zhao, Z.; Glover, G.; Gabrieli, J.D.E. Hemispheric asymmetry for emotional stimuli detected with fMRI. Neuroreport 1998, 9, 3233–3239.
  57. Schwartz, G.E.; Davidson, R.J.; Maer, F. Right hemisphere lateralization for emotion in the human brain: Interactions with cognition. Science 1975, 190, 286–288.
  58. Oliva, M.; Anikin, A. Pupil dilation reflects the time course of emotion recognition in human vocalizations. Sci. Rep. 2018, 8, 4871.
  59. Colizoli, O.; de Gee, J.W.; Urai, A.E.; Donner, T.H. Task-evoked pupil responses reflect internal belief states. Sci. Rep. 2018, 8, 13702.
  60. Wright, P.; Kahneman, D. Evidence for alternative strategies of sentence retention. Q. J. Exp. Psychol. 1971, 23, 197–213.
  61. Findlay, J.M.; Walker, R. A model of saccade generation based on parallel processing and competitive inhibition. Behav. Brain Sci. 1999, 22, 661–674.
  62. Victor, T.W.; Harbluk, J.L.; Engström, J.A. Sensitivity of eye-movement measures to in-vehicle task difficulty. Transp. Res. Part F Traffic Psychol. Behav. 2005, 8, 167–190.
  63. Trappenberg, T.P.; Dorris, M.C.; Munoz, D.P.; Klein, R.M. A model of saccade initiation based on the competitive integration of exogenous and endogenous signals in the superior colliculus. J. Cogn. Neurosci. 2001, 13, 256–271.
  64. Godijn, R.; Theeuwes, J. Programming of endogenous and exogenous saccades: Evidence for a competitive integration model. J. Exp. Psychol. Hum. Percept. Perform. 2002, 28, 1039.
  65. Hofmeister, J.; Heller, D.; Radach, R. The return sweep in reading. In Current Oculomotor Research; Springer: Berlin/Heidelberg, Germany, 1999; pp. 349–357.
  66. Pannasch, S.; Dornhoefer, S.M.; Unema, P.J.A.; Velichkovsky, B.M. The omnipresent prolongation of visual fixations: Saccades are inhibited by changes in situation and in subject’s activity. Vision Res. 2001, 41, 3345–3351.
  67. Unema, P.; Dornhoefer, S.; Steudel, S.; Velichkovsky, B. An Attentive Look at Driver’s Fixation Durations (Draft). 2020. Available online: https://www.researchgate.net/publication/254739710_An_attentive_look_at_driver’s_fixation_durations_Draft (accessed on 31 December 2021).
  68. Schleicher, R.; Galley, N.; Briest, S.; Galley, L. Blinks and saccades as indicators of fatigue in sleepiness warnings: Looking tired? Ergonomics 2008, 51, 982–1010.
  69. Lavine, R.A.; Sibert, J.L.; Gokturk, M.; Dickens, B. Eye-tracking measures and human performance in a vigilance task. Aviat. Space Environ. Med. 2002, 73, 367–372.
  70. Saito, S. Does fatigue exist in a quantitative measurement of eye movements? Ergonomics 1992, 35, 607–615.
  71. Galley, N.; Andres, G. Saccadic eye movements and blinks during long-term driving on the autobahn with minimal alcohol ingestion. Vis. Veh. 1996, 5, 381–388.
  72. Lamm, C.; Batson, C.D.; Decety, J. The neural substrate of human empathy: Effects of perspective-taking and cognitive appraisal. J. Cogn. Neurosci. 2007, 19, 42–58.
  73. Goldinger, S.D.; Papesh, M.H. Pupil dilation reflects the creation and retrieval of memories. Curr. Dir. Psychol. Sci. 2012, 21, 90–95.
  74. Poynter, W.D. Pupil-size asymmetry is a physiologic trait related to gender, attentional function, and personality. Laterality Asymmetries Body Brain Cogn. 2017, 22, 654–670.
  75. Drummond, P.D. Pupil diameter in migraine and tension headache. J. Neurol. Neurosurg. Psychiatry 1987, 50, 228–230.
  76. Naqvi, R.A.; Arsalan, M.; Park, K.R. Fuzzy system-based target selection for a NIR camera-based gaze tracker. Sensors 2017, 17, 862.
Figure 1. Experimental protocol and configuration.
Figure 2. Identifying the peak every eight seconds in a video.
Figure 3. Comparison of standard deviation (STD) of peak pupil dilation between the empathic and non-empathic conditions in pleasant-aroused content.
Figure 4. Comparison of standard deviation (STD) of peak pupil dilation between the empathic and non-empathic conditions in pleasant-relaxed content.
Figure 5. Comparison of standard deviation (STD) of peak pupil dilation between the empathic and non-empathic conditions in unpleasant-relaxed content.
Figure 6. Comparison of standard deviation (STD) of peak pupil dilation between the empathic and non-empathic conditions in unpleasant-aroused content.
Figure 7. Analysis (t-test) of the arousal values between the empathic and non-empathic conditions.
Figure 8. Analysis (t-test) of the valence values between the empathic and non-empathic conditions.
Figure 9. Analysis (t-test) of the cognitive empathy values between the empathic and non-empathic conditions.
Figure 10. Analysis (t-test) of the affective empathy values between the empathic and non-empathic conditions.
Figure 11. Pupil feature variables in a two-dimensional emotion map.
Figure 12. Gaze feature variables in a two-dimensional emotion map.
Table 1. Comparison of previous and proposed methods.

| Methods | Strengths | Weaknesses |
|---|---|---|
| The physical elements of the advertisements (e.g., color, saturation, and value) and the viewer's region of interest (ROI) were analyzed through gaze tracking [32]. | The physical characteristics of the advertisements that elicit empathy were investigated. | The study focused on the media's physical characteristics, not necessarily on the viewer. |
| A gaze-points prediction method for advertising images, including a CNN-based model for saliency prediction of multi-text advertising images [33]. | Analyzed the viewer's attention based on the continuous distribution of gaze points when an ad is provided as a stimulus. The model adopted text-enhanced learning to detect the multi-text peculiarity of ads. | The analysis is limited to a single index (e.g., gaze points) on attention to the advertisement. |
| Studied the overshadowing effect of a celebrity through an analysis of advertisement effect based on fixation [34]. | Analyzed the viewer's fixation measures (time and count) on the celebrity and brand in an advertisement. | The analysis is limited to a single index (e.g., time spent on fixation and fixation count) on attention to the advertisement. |
| Analyzed the relationship between the quality of narrative rhetoric and the participant's attention, duration, and pupil diameter [35]. | Investigated print advertisements that used narrative techniques to present product effects (e.g., dramatic conflict). | The analysis is limited to a few indexes (e.g., gaze time and pupil diameter). |
| Analyzed differential visual attention to Facebook advertisements [36]. | Investigated visual attention to ads viewed by different interpersonal relationships. | The analysis is limited to fixation position, duration, and pupil magnification. |
| Empathy evaluation of gaze and pupil movement (proposed method). | In-depth analysis of all significant measures of gaze and pupil movement, including various frequencies of fixation. | The neurological mechanism remains unexplained. The stimulus's subcomponents (e.g., celebrity, text) were not analyzed. |
Table 2. Questionnaire of empathy, valence, and arousal.

| No. | Questionnaire | Factor |
|---|---|---|
| 1 | I felt pleasant as opposed to unpleasant | Valence |
| 2 | I felt aroused as opposed to relaxed | Arousal |
| 3 | I understood the characters' needs | Cognitive empathy |
| 4 | I understood how the characters were feeling | Cognitive empathy |
| 5 | I understood the situation of the video | Cognitive empathy |
| 6 | I understood the motives behind the characters' behavior | Cognitive empathy |
| 7 | I felt as if the events in the video were happening to me | Affective empathy |
| 8 | I felt as if I was in the middle of the situation | Affective empathy |
| 9 | I felt as if I was one of the characters | Affective empathy |
Table 3. Extracted gaze movement and pupil features.

| Pupil Size Features | Fixation Features | Saccade Features |
|---|---|---|
| Change in pupil diameter | Very short fixation duration | Saccadic amplitude |
| Peak pupil dilation | Mid fixation duration | Saccadic count |
| | Overlong fixation duration | |
Table 4. The t-test analysis of the change in pupil diameter between the empathic and non-empathic conditions. Values are mean (STD); CLPD = change in left pupil diameter, CRPD = change in right pupil diameter.

| Group | CLPD p-Value | CLPD Non-Empathic | CLPD Empathic | CRPD p-Value | CRPD Non-Empathic | CRPD Empathic |
|---|---|---|---|---|---|---|
| All emotions | p > 0.1 | 0.31 (0.04) | 0.24 (0.06) | p > 0.1 | 0.38 (0.03) | 0.38 (0.03) |
| Aroused | p = 0.002 | 0.39 (0.06) | 0.22 (0.04) | p = 0.026 | 0.38 (0.06) | 0.29 (0.05) |
| Relaxed | p = 0.007 | 0.10 (0.11) | 0.41 (0.06) | p > 0.1 | 0.37 (0.04) | 0.46 (0.05) |
| Pleasant | p > 0.1 | 0.25 (0.08) | 0.28 (0.06) | p > 0.1 | 0.40 (0.04) | 0.41 (0.05) |
| Unpleasant | p > 0.1 | 0.21 (0.08) | 0.37 (0.06) | p > 0.1 | 0.37 (0.04) | 0.33 (0.06) |
| Pleasant aroused | p = 0.000 | 0.46 (0.05) | 0.16 (0.06) | p = 0.008 | 0.48 (0.05) | 0.28 (0.08) |
| Pleasant relaxed | p = 0.012 | 0.04 (0.16) | 0.40 (0.11) | p = 0.005 | 0.31 (0.06) | 0.54 (0.06) |
| Unpleasant relaxed | p > 0.1 | 0.16 (0.15) | 0.42 (0.06) | p > 0.1 | 0.43 (0.06) | 0.39 (0.07) |
| Unpleasant aroused | p > 0.1 | 0.27 (0.06) | 0.32 (0.11) | p > 0.1 | 0.31 (0.05) | 0.27 (0.11) |
Table 5. The t-test analysis of the peak pupil dilation between the empathic and non-empathic conditions. Values are mean (STD); PLPD = peak left pupil dilation, PRPD = peak right pupil dilation.

| Group | PLPD p-Value | PLPD Non-Empathic | PLPD Empathic | PRPD p-Value | PRPD Non-Empathic | PRPD Empathic |
|---|---|---|---|---|---|---|
| All emotions | p > 0.1 | 5.41 (0.03) | 5.46 (0.04) | p < 0.001 | 5.30 (0.03) | 5.70 (0.03) |
| Aroused | p = 0.000 | 5.65 (0.03) | 5.18 (0.04) | p = 0.000 | 5.21 (0.04) | 5.78 (0.02) |
| Relaxed | p > 0.1 | 5.37 (0.03) | 5.44 (0.04) | p = 0.000 | 5.21 (0.06) | 5.88 (0.02) |
| Pleasant | p = 0.010 | 5.23 (0.05) | 5.42 (0.05) | p = 0.000 | 5.41 (0.03) | 5.73 (0.05) |
| Unpleasant | p > 0.1 | 5.57 (0.03) | 5.49 (0.05) | p = 0.000 | 5.12 (0.06) | 5.67 (0.03) |
| Pleasant aroused | p > 0.1 | 5.51 (0.09) | 5.40 (0.08) | p = 0.045 | 5.52 (0.07) | 5.27 (0.09) |
| Pleasant relaxed | p > 0.1 | 5.36 (0.09) | 5.58 (0.12) | p > 0.1 | 5.11 (0.10) | 5.38 (0.10) |
| Unpleasant relaxed | p = 0.012 | 5.47 (0.17) | 5.64 (0.07) | p > 0.1 | 5.15 (0.09) | 5.41 (0.08) |
| Unpleasant aroused | p = 0.007 | 5.38 (0.07) | 5.65 (0.08) | p > 0.1 | 5.33 (0.08) | 5.41 (0.07) |
Table 6. The t-test analysis of the fixation duration between the empathic and non-empathic conditions. Values are mean (STD) for the non-empathic (NE) and empathic (E) conditions.

| Group | Very short (<150 ms) p | NE | E | Medium (150–900 ms) p | NE | E | Overlong (>900 ms) p | NE | E |
|---|---|---|---|---|---|---|---|---|---|
| All emotions | p > 0.1 | 18.02 (0.71) | 17.26 (0.66) | p > 0.1 | 632.1 (9.61) | 644.4 (18.52) | p > 0.1 | 54.69 (2.12) | 52.82 (1.61) |
| Aroused | p > 0.1 | 18.59 (1.09) | 17.82 (0.98) | p > 0.1 | 685.75 (11.54) | 678.49 (11.02) | p > 0.1 | 42.90 (2.43) | 46.68 (2.20) |
| Relaxed | p > 0.1 | 17.45 (0.90) | 16.70 (0.90) | p > 0.1 | 579.0 (13.26) | 611.3 (12.02) | p = 0.045 | 66.35 (3.02) | 58.76 (2.20) |
| Pleasant | p > 0.1 | 18.20 (1.12) | 16.90 (0.93) | p > 0.1 | 654.3 (14.29) | 639.2 (11.71) | p > 0.1 | 49.74 (2.91) | 54.26 (2.31) |
| Unpleasant | p > 0.1 | 17.84 (0.87) | 17.61 (0.95) | p = 0.027 | 610.11 (12.48) | 649.50 (12.36) | p = 0.031 | 59.58 (3.00) | 51.38 (2.25) |
| Pleasant aroused | p > 0.1 | 18.51 (1.74) | 18.69 (1.48) | p = 0.012 | 716.3 (16.79) | 669.8 (14.65) | p = 0.001 | 36.44 (3.07) | 47.71 (3.00) |
| Pleasant relaxed | p > 0.1 | 17.89 (1.40) | 15.10 (1.06) | p = 0.08 | 592.29 (19.28) | 609.95 (17.10) | p > 0.1 | 63.04 (4.12) | 60.54 (3.24) |
| Unpleasant relaxed | p > 0.1 | 17.02 (1.13) | 18.19 (1.43) | p > 0.1 | 566.0 (18.03) | 610.29 (17.07) | p > 0.1 | 69.60 (4.38) | 57.53 (2.96) |
| Unpleasant aroused | p > 0.1 | 18.68 (1.33) | 17.04 (1.26) | p > 0.1 | 655.17 (14.54) | 687.89 (16.03) | p > 0.1 | 49.36 (3.53) | 45.37 (3.16) |
Table 7. The t-test analysis of the saccadic amplitude and count between the empathic and non-empathic conditions. Values are mean (STD); SAC = saccadic amplitude, SCC = saccadic count.

| Group | SAC p-Value | SAC Non-Empathic | SAC Empathic | SCC p-Value | SCC Non-Empathic | SCC Empathic |
|---|---|---|---|---|---|---|
| All emotions | p < 0.001 | 165.86 (3.12) | 193.74 (2.45) | p > 0.1 | 705.81 (7.82) | 715.44 (7.27) |
| Aroused | p > 0.1 | 183.97 (4.45) | 189.47 (2.78) | p > 0.1 | 748.25 (9.41) | 744.01 (9.19) |
| Relaxed | p < 0.001 | 147.9 (3.52) | 197.8 (3.96) | p > 0.1 | 663.82 (10.86) | 687.78 (10.47) |
| Pleasant | p = 0.001 | 186.29 (4.63) | 206.30 (3.18) | p > 0.1 | 723.2 (11.98) | 711.3 (9.87) |
| Unpleasant | p = 0.000 | 145.65 (2.98) | 181.32 (3.27) | p = 0.037 | 688.54 (9.76) | 719.5 (10.6) |
| Pleasant aroused | p = 0.003 | 217.24 (4.28) | 199.31 (3.71) | p = 0.019 | 772.2 (14.34) | 737.2 (12.19) |
| Pleasant relaxed | p = 0.000 | 155.34 (5.19) | 213.00 (4.92) | p > 0.1 | 674.23 (16.31) | 686.52 (14.54) |
| Unpleasant relaxed | p = 0.000 | 140.7 (4.54) | 183.54 (5.45) | p = 0.04 | 653.6 (14.22) | 687.0 (15.26) |
| Unpleasant aroused | p = 0.000 | 150.70 (3.73) | 179.13 (3.63) | p > 0.1 | 724.2 (11.16) | 751.3 (13.40) |

Zhang, J.; Park, S.; Cho, A.; Whang, M. Significant Measures of Gaze and Pupil Movement for Evaluating Empathy between Viewers and Digital Content. Sensors 2022, 22, 1700. https://doi.org/10.3390/s22051700
