2.1. Participants
We recruited 100 first-year undergraduates aged 18–22 (79 males and 21 females) in Zhejiang Province. All recruited subjects were psychologically assessed by the school’s mental health center, and 62 participants (48 males and 14 females) met the inclusion and exclusion criteria of this study. According to these criteria, psychological evaluators divided the subjects into a high-risk depression group (HD group, n = 30, where n denotes sample size) and a low-risk depression group (LD group, n = 32).
The HD group was recruited according to the following criteria: (1) they had a PHQ-9 score > 9; (2) they had normal visual and audio comprehension skills. Meanwhile, the LD group was recruited according to the following criteria: (1) they had no history of mental illness or family history of mental illness; (2) they had a PHQ-9 score < 5; and (3) they had normal visual and audio comprehension skills.
In addition, the exclusion criteria were (1) suffering from other mental disorders; (2) suffering from severe organic diseases; (3) having taken psychotropic drugs or received antidepressant treatment in the last two months; (4) having intense suicidal intention; and (5) having sensory developmental disorders (such as audio-visual problems), high myopia, hearing abnormalities, color blindness, or other eye diseases affecting visual acuity.
The study was approved by the ethics committee of Zhejiang Chinese Medical University (Approval Number: 20230627-1). All participants volunteered to participate in the experiment and signed informed consent prior to the experiment.
In the course of this trial, three participants were unable to complete data collection because of novel coronavirus infection (one in the HD group and two in the LD group), and one participant in the LD group lost data because of an abnormal collection of digital biomarker data. These four participants were excluded because of incomplete data, leaving a final effective sample of 58: 29 subjects at high risk and 29 subjects at low risk for depression. The flow chart of subject screening is shown in Figure 1.
2.2. Design of the Paradigm and Experimental Process
Based on the hypothesis that the “emotional perception of eye movement” could characterize defects in the recognition, processing, and regulation of emotions in young people at high risk for depression, we designed an “eye movement emotional perception evaluation paradigm”. This paradigm consisted of four free-viewing tasks: task 1 was the “European subtask of facial feature perception”, task 2 was the “European subtask of facial emotional perception”, task 3 was the “Asian subtask of facial feature perception”, and task 4 was the “Asian subtask of facial emotional perception”; each task contained photos of neutral, sad, happy, and angry emotions. The stimulation materials used in tasks 1 and 2 were selected from the FACES facial emotion database, and those used in tasks 3 and 4 were photos of Asian youths.
All participants were tested in a quiet room. We placed a comfortable and stable high chair 90 cm in front of the monitor, so that the viewing distance of a participant in a sitting posture was 70 ± 10 cm. We informed participants that the starting position of the eyes needed to be calibrated between each task and each face photo/face photo matrix: participants needed to look at a gray cross on a white background for 1 s before the face photo/face photo matrix appeared. The task flow chart is shown in Figure 2. Participants were familiarized with the paradigm process and manipulation methods before the formal start of the “eye movement emotional perception evaluation paradigm”. Researchers gave all participants the following prompt: “All you have to do is sit in the chair in front of the monitor and feel free to look at the emotional photos that automatically appear on it. The gray cross that flashes during each photo switch is for eye movement calibration; the next photo will appear only when your eyes fall on the gray cross. Please feel free to look at all emotional photos that appear until the end of the paradigm.” In addition, the Self-Rating Depression Scale (SDS) is a common instrument that is an effective and sensitive method of evaluating the clinical severity of depressed patients [25]. In this study, all participants completed the SDS prior to the “eye movement emotional perception evaluation paradigm”. The above evaluations were carried out on the same day to ensure the stability of the participants’ emotional state and to better clarify the effectiveness of this screening technique.
After the formal start of the “eye movement emotional perception evaluation paradigm”, the tasks were performed sequentially. Task 1 showed single-face photos of middle-aged European males and females in the order of neutral, sad, happy, and angry faces; after each photo appeared, participants were free to view it for 5 s, for a total of 8 photos. Task 2 showed face photo matrices of young males, old males, young females, and old females in the form of a two-by-two European face photo matrix (neutral on the upper left, angry on the upper right, sad on the lower left, happy on the lower right); participants were free to view each matrix for 10 s, for a total of 4 photo matrices. Task 3 showed single-face photos of three types of young Asian males in the order of neutral, sad, happy, and angry faces; after each photo appeared, participants were free to view it for 5 s, for a total of 12 photos. Task 4 showed face photo matrices of three different young Asian males in the form of a four-by-four face photo matrix; participants were free to view each matrix for 10 s, for a total of 4 photo matrices. The schematic diagrams of each task are shown in Figure 3.
In the paradigm process, we collected objective data from the participants throughout the entire procedure with an eye tracker and recorded them using a human–computer interaction (HCI) system.
Figure 4 shows partial eye movement heat maps of one subject at low risk and one at high risk for depression. The closer a region’s color in the heat maps is to red, the more the subject’s eye movement information was concentrated there, while yellow and green represent areas that received less attention. The upper parts of Figure 4a,c are eye movement heat maps of the “facial feature perception task” for the subject at low risk of depression, and the lower parts are those for the subject at high risk of depression. The left side of Figure 4b,d contains eye movement heat maps of the “facial emotional perception task” for the subject at low risk of depression, and the right side includes those for the subject at high risk of depression.
The m-th face photo/face photo matrix in the n-th task is expressed as:

$$\mathrm{Task}\,nP(m), \quad 0 < m \le \begin{cases} 8, & n = 1 \\ 4, & n = 2 \\ 12, & n = 3 \\ 4, & n = 4 \end{cases} \qquad m, n \in \mathbb{N} \tag{1}$$

Formula (1) means that there are a total of m face photos or face photo matrices in the n-th task. We recorded the m-th face photo of task n as Task nP(m), and we recorded the y-th face photo of the m-th face photo matrix of task n as Task nP(m)y.
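To illustrate the indexing convention of Formula (1), the following is a minimal sketch in Python; the per-task item counts come from the task descriptions in Section 2.2, while the helper name and label format are illustrative assumptions.

```python
from typing import Optional

# Number of face photos (tasks 1 and 3) or photo matrices (tasks 2
# and 4) per task, taken from the paradigm description in Section 2.2.
ITEMS_PER_TASK = {1: 8, 2: 4, 3: 12, 4: 4}

def task_photo_label(n: int, m: int, y: Optional[int] = None) -> str:
    """Build the Task nP(m) / Task nP(m)y label of Formula (1) while
    validating the index ranges."""
    if n not in ITEMS_PER_TASK:
        raise ValueError(f"task {n} does not exist")
    if not 1 <= m <= ITEMS_PER_TASK[n]:
        raise ValueError(f"task {n} has only {ITEMS_PER_TASK[n]} items")
    label = f"Task {n}P({m})"
    if y is not None:  # y indexes a single photo inside a matrix
        if n not in (2, 4):
            raise ValueError("only tasks 2 and 4 use photo matrices")
        label += f"{y}"
    return label

print(task_photo_label(3, 5))     # -> Task 3P(5)
print(task_photo_label(2, 1, 4))  # -> Task 2P(1)4
```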
2.4. Definition and Quantitative Analysis of Digital Biomarkers
We extracted digital biomarkers based on the objective evaluation data mentioned above.
Because tasks 1 and 3 in the “eye movement emotional perception evaluation paradigm” are both “facial feature perception tasks”, the extracted digital biomarkers and analysis algorithms were similar. We extracted three types of indexes, fixation time (FT), scan path length (SPL), and average scanning speed (ASS), from the two dimensions of the gaze feature and the saccade feature for face photos with different emotions. For fine-grained analysis, we divided the single-face photo into five rectangular areas of interest (AOIs): the overall area (O), face area (F), eye-to-mouth area (EM), eye-to-nose area (EN), and eye area (E). The generalization of digital biomarkers characterizing the “facial feature perception task” is shown in Figure 5.
The definition, abbreviation, and interpretation of digital biomarkers that characterize the “facial feature perception task” are shown in Table 1, Table 2 and Table 3.
The calculation methods of the digital biomarkers that characterize the “facial feature perception task” are described below, taking the digital biomarkers of one AOI of a happy face photo as an example.
We assumed that participants looked at the m-th face photo in task n and that the photo was a happy face, with n ∈ {1, 3}; the range of m is shown in Formula (1). The eye tracker accumulated a total of I eye movement coordinates (I ∈ ℕ), where the i-th eye movement coordinate was (x_i, y_i) (0 < i ≤ I, i ∈ ℕ). To determine whether the i-th eye movement coordinate was in an AOI region (the AOI can be replaced with “O”, “F”, “EM”, “EN”, and “E” to represent a specified AOI), the formula is as follows:

$$f_{\mathrm{AOI}}(x_i, y_i) = \begin{cases} 1, & x_{\min} \le x_i \le x_{\max} \ \text{and} \ y_{\min} \le y_i \le y_{\max} \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

where (x_min, y_max) and (x_max, y_min) in Formula (2) are the vertex coordinates of the upper left corner and the lower right corner of the AOI region, respectively.
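To make the AOI membership test of Formula (2) concrete, the following is a minimal sketch in Python; the rectangle class, the dictionary of five nested regions, and all pixel coordinates are illustrative assumptions rather than the study’s actual values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AOI:
    x_min: float  # left edge
    y_min: float  # bottom edge
    x_max: float  # right edge
    y_max: float  # top edge

    def contains(self, x: float, y: float) -> int:
        """Return 1 if the gaze coordinate (x, y) falls inside the
        rectangle, else 0 (the indicator of Formula (2))."""
        return int(self.x_min <= x <= self.x_max and
                   self.y_min <= y <= self.y_max)

# Hypothetical nested AOIs for a single-face photo (pixel units).
AOIS = {
    "O":  AOI(0,   0,   800, 600),   # overall area
    "F":  AOI(200, 80,  600, 560),   # face area
    "EM": AOI(260, 200, 540, 460),   # eye-to-mouth area
    "EN": AOI(260, 300, 540, 460),   # eye-to-nose area
    "E":  AOI(260, 380, 540, 460),   # eye area
}

print(AOIS["E"].contains(300, 400))  # -> 1
```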
The Happy(FT,AOI) formula is as follows:

$$Happy_{(FT,\mathrm{AOI})} = \Delta t \sum_{i=1}^{I} f_{\mathrm{AOI}}(x_i, y_i) \tag{3}$$

where Δt is the sampling interval of the eye tracker. We set the distance between two adjacent eye movement coordinates as d_i:

$$d_i = \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}, \quad 0 < i < I \tag{4}$$
Because the eye movement tracks in each AOI were not continuous during the task, we calculated the length of each continuous eye movement track in each region separately and then accumulated them to obtain the scan path length in a certain AOI. We assumed that the participants had a continuous eye movement tracks (a > 0, a ∈ ℕ) in a certain AOI region and that there were k eye movement coordinates (0 < j ≤ a, and j, k ∈ ℕ) in the j-th continuous eye movement track; then, the distance of the j-th continuous eye movement track (distance_j) can be obtained as follows:

$$distance_j = \sum_{i=1}^{k-1} d_i \tag{5}$$

Hence, Happy(SPL,AOI) is calculated as follows:

$$Happy_{(SPL,\mathrm{AOI})} = \sum_{j=1}^{a} distance_j \tag{6}$$
We calculated the velocity of each continuous eye movement track based on its length and then obtained the average saccade velocity in a certain area. The velocity of the j-th continuous eye movement track (v_j) is as follows:

$$v_j = \frac{distance_j}{(k - 1)\,\Delta t} \tag{7}$$

where (k − 1)Δt is the duration of the j-th continuous eye movement track. Then, Happy(ASS,AOI) is calculated as follows:

$$Happy_{(ASS,\mathrm{AOI})} = \frac{1}{a} \sum_{j=1}^{a} v_j \tag{8}$$
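As a minimal end-to-end sketch of Formulas (3)–(8), the Python function below computes the three biomarkers for one AOI from a raw gaze trace; the 60 Hz sampling rate, the function names, and the sample trace are illustrative assumptions, not the study’s implementation.

```python
import math

def feature_biomarkers(coords, in_aoi, dt=1 / 60):
    """Compute fixation time (FT), scan path length (SPL), and average
    scanning speed (ASS) for one AOI from a gaze trace.

    coords : list of (x, y) eye movement coordinates sampled every dt s
    in_aoi : callable (x, y) -> bool, the indicator of Formula (2)
    dt     : sampling interval in seconds (60 Hz assumed here)
    """
    # Formula (3): fixation time = number of in-AOI samples times dt.
    ft = dt * sum(in_aoi(x, y) for x, y in coords)

    # Split the trace into continuous in-AOI tracks.
    tracks, current = [], []
    for x, y in coords:
        if in_aoi(x, y):
            current.append((x, y))
        elif current:
            tracks.append(current)
            current = []
    if current:
        tracks.append(current)

    # Formulas (4) and (5): per-track path length from the distances
    # between adjacent coordinates.
    lengths = [
        sum(math.dist(p, q) for p, q in zip(track, track[1:]))
        for track in tracks
    ]
    # Formula (6): scan path length = sum of the track lengths.
    spl = sum(lengths)

    # Formulas (7) and (8): per-track velocity, then the average
    # scanning speed over the a tracks.
    velocities = [
        length / ((len(track) - 1) * dt)
        for length, track in zip(lengths, tracks)
        if len(track) > 1
    ]
    ass = sum(velocities) / len(velocities) if velocities else 0.0
    return ft, spl, ass

# Hypothetical 4-sample trace and the eye area "E" from the sketch above.
trace = [(300, 400), (305, 402), (600, 100), (310, 405)]
in_e = lambda x, y: 260 <= x <= 540 and 380 <= y <= 460
print(feature_biomarkers(trace, in_e))
```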
Because tasks 2 and 4 in the “eye movement emotional perception evaluation paradigm” are both “facial emotional perception tasks”, the extracted digital biomarkers and analysis algorithms were similar. We extracted four types of indexes for different emotions, fixation time (FT), attention level (AL), attention shift (AS), and attention times (AT), from the two dimensions of the gaze feature and the scan feature. The generalization of digital biomarkers characterizing the “facial emotional perception task” is shown in Figure 6.
The definition, abbreviation, and interpretation of digital biomarkers characterizing the “facial emotional perception task” are shown in Table 4, Table 5, Table 6 and Table 7.
The calculation methods of digital biomarkers that characterize the “facial emotional perception task” are as follows:
During the “facial emotional perception task”, we set the time for participants to look at the m-th face photo matrix as T(n,m) and the time for staring at the y-th face photo as T(n,m)y. If the photo was neutral, the NeutralFT of this photo was T(n,m)y, and the SadFT, HappyFT, and AngryFT of this photo were all 0. The calculation method of fixation time for the other emotional face photos was the same. The NeutralAL of the photo could be calculated as follows:

$$Neutral_{AL}\left(\mathrm{Task}\,nP(m)y\right) = \frac{T_{(n,m)y}}{T_{(n,m)}} \tag{9}$$

Task nP(m)y in Formula (9) represents the y-th emotional photo of the m-th emotional photo matrix of task n, n ∈ {2, 4}, where the range of m and y is shown in Formula (1), and the SadAL, HappyAL, and AngryAL of this photo are all 0. The calculation of attention level for the other emotional face photos is the same.
We set the time for participants to look at the sad, angry, happy, and neutral faces in the photo matrix as T(n,m)_sad, T(n,m)_angry, T(n,m)_happy, and T(n,m)_neutral, respectively. The HappyAL calculation formula is as follows:

$$Happy_{AL} = \frac{T_{(n,m)\_happy}}{T_{(n,m)}} \tag{10}$$
SadAL, AngryAL and NeutralAL are calculated in the same way.
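As a minimal sketch of the attention-level computation in Formulas (9) and (10), assuming AL is the fixation time on a photo (or on all photos of one emotion) divided by the total viewing time of the matrix; the function name and example values are illustrative.

```python
def attention_level(t_fixation, t_matrix):
    """Formulas (9)-(10): attention level = fixation time (s) on a
    photo or on one emotion's photos, divided by the total viewing
    time (s) of the photo matrix."""
    return t_fixation / t_matrix if t_matrix > 0 else 0.0

# Example: 4.0 s spent on happy faces during a 10 s matrix -> HappyAL.
print(attention_level(4.0, 10.0))  # -> 0.4
```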
The SadAS calculation formula is as follows:
The HappyAS calculation formula is as follows:
We set the number of times the participants switched their eyes from happy face photos to non-happy face photos and then returned to happy face photos during the evaluation process as count_happy. The HappyAT calculation formula is as follows:
NeutralAT, SadAT, and AngryAT are calculated in the same way.
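As one plausible reading of the counting rule behind the attention-times biomarkers, the sketch below derives count_happy from a per-sample sequence of fixated-photo emotion labels; the label encoding and function name are illustrative assumptions.

```python
def count_returns(labels, target="happy"):
    """Count how many times gaze switches from `target` photos to
    non-`target` photos and then returns to `target` photos, i.e.,
    the count_happy quantity used for HappyAT.

    labels : sequence of fixated-photo emotion labels, one per gaze
             sample (e.g., "happy", "sad", "angry", "neutral").
    """
    count, seen_target, left_target = 0, False, False
    for label in labels:
        if label == target:
            if seen_target and left_target:
                count += 1        # returned after looking away
            seen_target, left_target = True, False
        elif seen_target:
            left_target = True    # currently looking away from target
    return count

# Example: gaze visits happy, leaves, and returns twice -> 2 returns.
print(count_returns(
    ["happy", "sad", "happy", "neutral", "angry", "happy"]))  # -> 2
```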