Article

Physiological Data Augmentation for Eye Movement Gaze in Deep Learning

by Alae Eddine El Hmimdi 1,2,* and Zoï Kapoula 1,2,*
1 Orasis-Eye Analytics & Rehabilitation Research Group, Spinoff CNRS, 12 Rue Lacretelle, 75015 Paris, France
2 LIPADE, French University Institute (IUF), Laboratoire d’Informatique Paris Descartes, University of Paris, 45 Rue des Saints Pères, 75006 Paris, France
* Authors to whom correspondence should be addressed.
BioMedInformatics 2024, 4(2), 1457-1479; https://doi.org/10.3390/biomedinformatics4020080
Submission received: 6 March 2024 / Revised: 8 April 2024 / Accepted: 21 May 2024 / Published: 6 June 2024

Abstract

In this study, the challenges posed by limited annotated medical data in the field of eye movement AI analysis are addressed through the introduction of a novel physiologically based gaze data augmentation library. Unlike traditional augmentation methods, which may introduce artifacts and alter pathological features in medical datasets, the proposed library emulates natural head movements during gaze data collection. This approach enhances sample diversity without compromising authenticity. The library evaluation was conducted on both CNN and hybrid architectures using distinct datasets, demonstrating its effectiveness in regularizing the training process and improving generalization. What is particularly noteworthy is the achievement of a macro F1 score of up to 79% when trained using the proposed augmentation (EMULATE) with the three HTCE variants. This pioneering approach leverages domain-specific knowledge to contribute to the robustness and authenticity of deep learning models in the medical domain.

1. Introduction

Deep learning has greatly influenced a range of fields, such as computer vision, signal processing, and natural language processing. However, training deep convolutional architectures from scratch necessitates a substantial amount of data compared to traditional machine learning algorithms. This requirement becomes even more crucial when considering the Transformer architecture. It is widely recognized that Transformers are data-intensive [1,2,3,4] relative to traditional CNN architectures. The advantage of this architecture lies in its enhanced model expressivity and ability to effectively learn complex tasks given ample data. Yet, the use of small datasets often leads to overfitting issues and inadequate generalization to unseen data.
Moreover, collecting annotated data in the medical domain poses additional challenges, especially when human involvement is required, due to the sensitive nature of the information gathered. Additionally, obtaining an adequate number of individuals with specific target pathologies can be complex. To overcome the issue of limited dataset size, data augmentation techniques are commonly used to artificially increase the number of training samples. These techniques involve sampling new data by applying various transformations [5,6] or interpolating new samples based on existing ones [7,8,9].
However, for medical datasets, particularly those related to pathologies with distinct morphologies and structural characteristics, using such augmentation methods may not be appropriate as they can introduce artifacts and alter pathological features. Mixing-based algorithms like CutMix and MixUp, which assume a linear relationship between input and label, may also have drawbacks in this context.
As a result, pathologists and researchers often use specialized augmentation methods tailored to the characteristics of pathology patterns. These techniques involve learning the data distribution using generative approaches and then sampling from it. However, these methods are considered less effective in capturing complex or rare patterns compared to transformation-based techniques that generate diverse high-resolution samples. When trained on small-sized datasets with high-resolution images, these generative methods may not adequately capture all the patterns present in the data, as is evident in tasks like eye movement gaze classification where a two-minute recording corresponds to a multivariate time series consisting of approximately 24,000 points.
To address the challenges of limited annotated medical data, one potential solution is to enhance sample diversity by incorporating realistic physiological variations instead of directly learning the distribution. This approach leverages domain-specific knowledge and generates samples that align with the inherent characteristics of physiological data, contributing to robustness and authenticity. In this study, we introduce a physiologically based gaze data augmentation library that emulates head movements during data collection, capturing natural variability and intricacy in eye movement patterns.
The contributions are as follows:
  • We introduce EMULATE (Eye Movement data Augmentation by Emulating Head Position and Movement), a novel library for eye movement gaze data augmentation and the first of its kind to emulate physiological aspects. The library generates augmented eye movement data by simulating natural head movements, both prior to recording and in real time during gaze data collection.
  • We evaluate the data augmentation technique on three distinct architectures—two based on CNN and one hybrid, utilizing two separate datasets.
  • We explore various augmentation settings, demonstrating the effectiveness of the proposed library in regularizing the training process of the proposed architecture and improving its generalization ability.
  • We examine the complementarity between the proposed method and additional standard baseline approaches.
This paper is structured as follows: It begins with a summary of the state of the art for data augmentation, followed by an overview of the studies introducing the various architectures used. Additionally, detailed information is provided on the materials utilized in this study, including the eye movement recording setup and the resulting dataset, introduced in [10]. In this study, we explore the relevance of the proposed data augmentation method by integrating it into the existing training framework [10]. Thus, the three models are initially trained with and without the proposed method. Subsequently, comparisons are made between training with EMULATE and training with other baseline methods. Finally, we analyze the complementarity between EMULATE and the proposed baselines. In the methods section, we present the proposed data augmentation library, along with the experimental settings used to evaluate the significance of these methods. This includes the architecture used for training, as well as the augmentation and regularization methods for comparison. Implementation details such as model training and evaluation methods are also provided, together with the various hyperparameters used for the architecture, EMULATE, and the model training pipeline.
Furthermore, the experimental results are discussed in the results section and elaborated upon in the discussion section. Finally, the limitations and future directions of the proposed method are reviewed.

2. Related Work

In deep learning, data augmentation methods can be grouped into two groups: implicit distribution learning and explicit transformation modeling. The first group involves learning the underlying data distribution and sampling from it, while the second group focuses on modeling transformations to generate new samples based on existing ones.

2.1. Implicit Data Augmentation Methods

Several studies have explored the use of domain-adapted data augmentation methods, such as learning the underlying data distribution using sequence-to-sequence algorithms [11] or generative methods [12,13,14,15,16].
Zemblys et al. [11] studied eye-movement events through a supervised learning algorithm applied to recurrent networks, and then sampled from it to augment training sets. Additionally, when considering generative methods, Fuhl [13] used variational autoencoders to learn the distribution of eye movement gaze and evaluated their method across three different datasets. Similarly, Ref. [12] learned to generate image-based scanpath representations and reported an improvement of up to 0.05 in the AUC score for augmented data in ASD classification tasks.
In similar studies involving electroencephalography time series, the same approach of learning the data distribution was applied: Ref. [15] explored conditional VAEs to learn EEG distributions, and a second study [14] examined several variants of CVAE, GAN, and VAE algorithms, observing an improvement of up to 10.2%.
Finally, taking a different approach, the authors of [16] investigated directly learning the distribution of extracted features. They trained a generative adversarial network to generate artificial EEG and eye movement parameters for a multimodal emotion recognition task, achieving an accuracy of up to 90.33%.

2.2. Explicit Transformation Modeling

On the other hand, many data augmentation techniques have been developed in the field of computer vision to improve the performance and robustness of deep learning models. Examples include RandAugment, AutoAugment, MixUp, CutMix, and Cutout [5,7,8,17,18].
RandAugment [5] and AutoAugment [17] are two transformation modeling techniques that can be used to apply random augmentations to images and improve the robustness of feature representation. RandAugment utilizes a set of predefined transformation operations for random augmentation, while AutoAugment learns a set of optimal transformations to effectively augment data and enhance feature representation.
On the other hand, MixUp, CutMix, and Cutout [7,8,18] generate new samples by combining pairs of existing samples.
MixUp interpolates pairs of training examples to generate new samples, while CutMix combines two randomly selected samples by swapping a patch from one image onto another while preserving label information.
Lastly, Cutout consists of masking out square regions of an input image during training to encourage the model to exploit other regions.
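To make the mixing-based family concrete, the snippet below sketches the core MixUp operation on a batch; it is a generic illustration rather than the Keras implementation used later in this paper, and the `alpha` value and array shapes are assumptions.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng()):
    """Generic MixUp: convex-combine a batch with a shuffled copy of itself.

    x: (batch, ...) inputs; y: (batch, n_classes) one-hot (or multi-hot) labels.
    """
    lam = rng.beta(alpha, alpha)           # mixing coefficient shared by inputs and labels
    perm = rng.permutation(len(x))         # pair each sample with a random partner
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed
```

The fact that inputs and labels are mixed with the same coefficient is precisely the linear input–label assumption that makes such methods questionable for pathological recordings, as discussed in the surrounding text.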
While data augmentation techniques like CutMix and MixUp are commonly used in the deep learning community to enhance generalization and robustness, these methods are less utilized in the medical classification domain as they may compromise the integrity and diagnostic value of data. Instead, the focus is on implementing data augmentation techniques that preserve essential diagnostic information within images. It is worth mentioning that these augmentations have not yet been applied to eye movement gaze classification studies.
Another type of explicit data augmentation technique involves learning a domain-adapted method, using algorithms that model transformations to increase the sample size while preserving the characteristics of the data manifold. Thus, as opposed to the first type discussed above, the generated data is part of the real data distribution.
For example, while several transformations in the AutoAugment library can be considered domain-adapted for computer vision tasks, they are not domain-adapted when applied to eye movement position time series.
To the best of our knowledge, explicit transformation modeling methods for eye movement time series are lacking. To address this gap, we introduce a physiological data augmentation method.

2.3. Hierarchical Temporal Convolutions for Eye Movement Analysis

In a previous study [19], we investigated the screening of scholar-learning disorders using deep learning applied to clinical data. This dataset included 4243 time series of eye movement positions recorded from children across Europe. We introduced the hierarchical temporal convolutions for eye movement analysis (HTCE), a CNN architecture composed of multiple hierarchical convolutional blocks followed by a multi-layer perceptron for classification. The proposed method achieved precision and recall rates of up to 80.20% and 75.1%, respectively, when evaluated on clinical data. These results are significant considering the high variability in both input and label, particularly compared to research datasets collected under consistent protocols, with control populations consisting of healthy subjects. This setting reflects real-world scenarios more accurately, where the negative class (control) contains populations with various pathologies, making the screening task more challenging.

2.4. Multi-Segment HTCE-Based Classifier

In another study [10], we took a step further by extending the previous method to incorporate a multi-segment-based classifier. This classifier was trained to identify eight groups of pathologies, instead of exclusively focusing on screening scholar disorders, thus transitioning from a binary to a multi-label classification problem. Initially, 10 segments were randomly sampled from each recording, and each segment was processed using HTCE variants to generate embeddings. Subsequently, these embeddings were aggregated to provide a comprehensive prediction. Two aggregation strategies were explored:
  • Pooling-based aggregation: Two CNN architectures, HTCE-Max and HTCE-Mean, are introduced. Each classifier consists of two stages. The initial stage constructs an embedding from a segment of eye movement recordings, employing a refined version of the HTCE classifier proposed in [10]. However, the second stage varies in implementation. The HTCE-Max architecture aggregates the embeddings using a max-pooling layer, followed by processing the resulting feature map with a multi-layer perceptron. Similarly, HTCE-Mean employs mean-pooling instead; a simplified sketch of this pooling-based aggregation follows the list below.
  • Attention-based aggregation: a second hybrid architecture is introduced, which first utilizes a lightweight version of HTCE to construct a high-dimensional representation for each segment. Subsequently, second-level feature extraction is performed using the VIT encoder [20]. This approach yields a hybrid architecture where the first stage conducts feature extraction at the temporal level, while the second stage operates at the segment level.
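The following minimal TensorFlow sketch illustrates the pooling-based aggregation idea (max- or mean-pooling over per-segment embeddings followed by an MLP). It is a simplified stand-in rather than the actual HTCE-Max/HTCE-Mean implementation; the embedding dimension, segment count, and layer sizes are assumptions.

```python
import tensorflow as tf

def pooled_classifier(num_segments=10, embed_dim=128, num_classes=8, mode="max"):
    """Aggregate segment embeddings of shape (batch, segments, embed_dim) into one prediction."""
    embeddings = tf.keras.Input(shape=(num_segments, embed_dim))
    if mode == "max":                       # HTCE-Max-style aggregation
        pooled = tf.keras.layers.GlobalMaxPooling1D()(embeddings)
    else:                                   # HTCE-Mean-style aggregation
        pooled = tf.keras.layers.GlobalAveragePooling1D()(embeddings)
    hidden = tf.keras.layers.Dense(64, activation="relu")(pooled)
    outputs = tf.keras.layers.Dense(num_classes, activation="sigmoid")(hidden)  # multi-label output
    return tf.keras.Model(embeddings, outputs)
```

The attention-based variant replaces the pooling layer with a second-level Transformer encoder over the segment embeddings, as described in the second bullet.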
Additionally, contextual information is incorporated, including time series and gaze derivatives. We provide a brief overview of each time series:
  • Gaze derivative: Corresponding to the first and second derivatives (velocity and acceleration) for each of the four eye movement time series.
  • REMOBI target signal: Encodes the state of the different LEDs, allowing the model to infer information such as the latency.
  • LED coordinates: Encodes the coordinates of the activated LED within the optical axis.
  • Confidence level: Represents an estimation of the uncertainty of each eye movement coordinate estimation by the eye tracker.
In this study, the relevance of the proposed data augmentation is evaluated by utilizing the previous training setup, while incorporating the proposed augmentation method.

3. Material

3.1. Eye Movement Recording

Eye movements were recorded using the Pupil Core head-mounted video-oculography tool, which measures angular positions along the vertical and horizontal axes, forming a plane perpendicular to the optical axis, at a frequency of approximately 200 Hz.
The collected data were stored anonymously during the eye movement analysis, in compliance with European regulations regarding personal data protection.
The clinical data were gathered from 20 different European clinical centers, where two technologies—REMOBI technology (patent WO2011073288) and AiDEAL technology (patent PCT/EP2021/062224)—were used to test and analyze various types of eye movement including saccade and vergence eye movements.

3.2. Eye Movement Visual Tasks

In this study, two visual tasks were explored: the saccade task and the vergence task. In the saccade task, participants responded to stimuli that appeared randomly along a horizontal axis, with analyses focusing on eye movements and fixation post-movement. The vergence task involved observing both convergent and divergent eye movements as participants fixated on a stimulus presented at various positions and durations along the optical axis.
To prevent participants from predicting motion, the duration and position of LEDs were randomized in both tests. Each test included 40 trials: 20 leftward and 20 rightward for the saccade test, and 20 coordinated and 20 uncoordinated for the vergence test.

3.3. Problem Statement

Our dataset, denoted as $D$ and comprising $N$ instances indexed by $i \in [1, N]$, consists of pairs of a multivariate time series $X_i \in \mathbb{R}^{15 \times T}$ of length $T$ and a corresponding target class $y_i$. The input features include the horizontal and vertical angular positions of each eye over the duration $T$, along with the first two order derivatives (velocity and acceleration), as well as contextual time series, namely the latency, the LED coordinates relative to the optical axis, and the confidence level. The objective is to predict the class $y_i$ based on the input $X_i$.
This work builds upon previous studies [10]; thus, we reuse the same problem formulation: our objective is to tackle a multi-annotation problem by predicting the class vector $y_i$ from the input $X_i$. Additionally, to reduce the model’s sensitivity to segments with high levels of noise, a multi-segment approach is adopted, with 10 segments of size $S = 1024$, corresponding to approximately 50 s of recording.
We evaluate the relevance of the proposed data augmentation by integrating it into the existing training framework developed in previous studies. Thus, each of the three models is initially trained with and without the proposed method. Subsequently, we compare training with EMULATE and training with other baselines. Finally, we examine the complementarity between EMULATE and the proposed baselines.

3.4. Dataset Overview

We utilized the Ora23 dataset previously introduced in [10], which encompassed two distinct datasets, corresponding to two different visual tasks (saccade and vergence).
The saccade visual task was composed of 92,207 segments of 5 s duration, recorded from 3181 subjects. Similarly, the vergence visual task consisted of 95,630 segments performed by 3228 subjects. For both visual tasks, the mean duration of each recording was approximately 3 min. Note that the Ora23 dataset was generated using the same method as described in the previous study [19] for constructing the Ora22 dataset. However, it included annotated data gathered between 2022 and 2023.
Table 1 presents the corresponding group of pathologies for each class identifier, as well as the corresponding patient count for each of the two datasets, namely the saccade and the vergence datasets. It is noteworthy that there are similarities between the class distributions in the two datasets; in the majority of cases, the same clinical protocols are used, involving performing both saccade and vergence tests.

3.5. Data Processing

This section presents the data preprocessing steps to sample the different training batches from the dataset, following the methodology outlined in a previous study [10]. We provide a concise summary of these key procedures.
Initially, two levels of data cleansing are performed: a low-pass filtering step using a Gaussian FIR filter with a cut-off frequency of 33 Hz and a z-score filtering step, eliminating data points with z-scores exceeding 2.5. Each time series recording undergoes individual filtering using its own statistics computed from the entire recording.
Additionally, to standardize each angular coordinate, the value modulo 180 is computed, and each coordinate is then divided by 180. Finally, for the contextual features, min–max standardization is employed instead.
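A minimal sketch of these preprocessing steps is given below, using SciPy's `firwin`/`filtfilt` with a Gaussian window as a stand-in for the Gaussian FIR low-pass filter; the filter length, the handling of removed samples (set to NaN here), and the exact sampling-rate handling are assumptions, while the 33 Hz cut-off, 2.5 z-score threshold, and 180° normalization follow the text.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 200.0      # approximate eye-tracker sampling rate (Hz)
CUTOFF = 33.0   # low-pass cut-off (Hz)
Z_MAX = 2.5     # z-score threshold

def preprocess_recording(angles, contextual):
    """angles: (T, n_channels) angular positions in degrees; contextual: (T, n_ctx).

    Assumes recordings are long (T >> filter length), as in the dataset described above.
    """
    # 1) Low-pass filtering with a Gaussian-windowed FIR filter (illustrative design).
    taps = firwin(numtaps=101, cutoff=CUTOFF, fs=FS, window=("gaussian", 12.0))
    filtered = filtfilt(taps, [1.0], angles, axis=0)

    # 2) z-score filtering: discard samples whose z-score exceeds 2.5, per recording.
    z = (filtered - filtered.mean(axis=0)) / (filtered.std(axis=0) + 1e-8)
    filtered = np.where(np.abs(z) <= Z_MAX, filtered, np.nan)

    # 3) Angular normalization: wrap modulo 180 degrees, then scale to [0, 1).
    normalized = np.mod(filtered, 180.0) / 180.0

    # 4) Min-max standardization of the contextual features.
    ctx_min, ctx_max = contextual.min(axis=0), contextual.max(axis=0)
    ctx_norm = (contextual - ctx_min) / (ctx_max - ctx_min + 1e-8)
    return normalized, ctx_norm
```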

4. Methods

The proposed approach utilizes physiologically based transformation techniques to augment eye movement gaze data. To overcome the challenges, two strategies for augmenting the dataset are proposed.
  • Static: This involves emulating head movements made before data acquisition without affecting head stability during acquisition. It incorporates nine parameters.
  • Dynamic: This method focuses on emulating head movements during data acquisition and allows for more extensive augmentation of the dataset. It includes 15 parameters.

4.1. Motivation

The methodology incorporates the use of the REMOBI system, allowing for unrestricted movement of the subject’s head instead of it being fixed. Additionally, accelerometer measurements have been included in the new data recordings to analyze head movements. Observations indicate a consistent slight variation in the initial head position and the presence of small ongoing movements. Inspired by this physiological variability, the parameters described below are introduced to generate physiologically plausible augmented data.

4.2. Algorithm

Figure 1 shows a schematic of the proposed model, with the two pupils and the center of the head projected in a two-dimensional plane. For simplicity, we will focus on the case of the right eye. Note that similar formulas are used to rotate the vector for the left eye as well.
Let $\mathrm{rot}(\beta, \gamma, \alpha)$ denote a rotation matrix, where $\beta$, $\gamma$, and $\alpha$ are the respective angles of rotation around the x-axis, y-axis, and z-axis:
$$
\mathrm{rot}(\beta, \gamma, \alpha) =
\begin{pmatrix}
\cos\beta\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\
\cos\beta\sin\gamma & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma \\
-\sin\beta & \sin\alpha\cos\beta & \cos\alpha\cos\beta
\end{pmatrix}
$$
Multiplying a vector $(x, y, z)$ by this matrix applies the rotations by $\beta$, $\gamma$, and $\alpha$ around their respective axes. The proposed algorithm consists of the following steps:
  • 1. To obtain $\Theta_1$ in Cartesian coordinates, the spherical coordinate triplet $(\theta, \phi, r)$ is converted to the Cartesian reference system $(x, y, z)$ using the following equations:
    $x = r \sin(\theta) \cos(\phi)$
    $y = r \sin(\theta) \sin(\phi)$
    $z = r \cos(\theta)$
  • 2. Each eye position vector is translated from the pupil center position ($\Theta_1$) to the head center position ($\Theta_2$):
    $x_{\mathrm{head}} = x - R_x$
    $y_{\mathrm{head}} = y - R_y$
    $z_{\mathrm{head}} = z$
  • 3. The head rotation transformation rotates the head around the three axes. This operation is performed by multiplying the vector ($\Theta_2$) by the rotation matrix $\mathrm{rot}(\beta, \gamma, \alpha)$:
    $(\bar{x}_{\mathrm{head}},\, \bar{y}_{\mathrm{head}},\, \bar{z}_{\mathrm{head}})^{\top} = \mathrm{rot}(\beta, \gamma, \alpha) \cdot (x_{\mathrm{head}},\, y_{\mathrm{head}},\, z_{\mathrm{head}})^{\top}$
  • 4. Each eye position vector ($\Theta_3$) is translated back to its corresponding eye coordinate frame ($\Theta_4$):
    $\bar{x} = \bar{x}_{\mathrm{head}} + R_x$
    $\bar{y} = \bar{y}_{\mathrm{head}} + R_y$
    $\bar{z} = \bar{z}_{\mathrm{head}}$
  • 5. Each eye coordinate is converted back to spherical coordinates using the following equations:
    $r = \sqrt{x^2 + y^2 + z^2}$
    $\theta = \arccos\!\left(\frac{z}{r}\right) \cdot \frac{180}{\pi}$
    $\phi = \operatorname{arctan2}(y, x) \cdot \frac{180}{\pi}$
Two data augmentation strategies are proposed based on this algorithm. The first strategy involves using a shared rotation matrix along the temporal axis, resulting in static data augmentation. In contrast, the second method samples different rotation matrices for each point within the temporal axis in each batch, leading to dynamic data augmentation.
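The five steps above can be condensed into the short NumPy sketch below for the static case (one shared rotation per recording). It is a simplified re-statement, not the library's implementation: the angle-to-axis assignment follows the displayed matrix (α about x, β about y, γ about z, which may differ from the library's own convention), θ and ϕ denote the polar and azimuthal gaze angles as in step 5, and the eye offsets R_x, R_y and the stimulus-based radius are taken as given inputs.

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Rotation about the x-, y-, and z-axes by alpha, beta, gamma (radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([
        [cb * cg, sa * sb * cg - ca * sg, ca * sb * cg + sa * sg],
        [cb * sg, sa * sb * sg + ca * cg, ca * sb * sg - sa * cg],
        [-sb,     sa * cb,                ca * cb],
    ])

def augment_static(theta, phi, radius, eye_offset, angles_deg):
    """Apply one shared head rotation to a gaze trace (static augmentation sketch).

    theta, phi : (T,) polar/azimuthal gaze angles in degrees.
    radius     : (T,) approximated fixation distance (stimulus LED distance).
    eye_offset : (R_x, R_y) eye position relative to the head centre.
    angles_deg : (alpha, beta, gamma) sampled head rotation, e.g. from U[-10, 10].
    """
    th, ph = np.radians(theta), np.radians(phi)
    # Step 1: spherical -> Cartesian.
    x = radius * np.sin(th) * np.cos(ph)
    y = radius * np.sin(th) * np.sin(ph)
    z = radius * np.cos(th)
    # Step 2: translate from the pupil centre to the head centre.
    rx, ry = eye_offset
    p = np.stack([x - rx, y - ry, z])                 # shape (3, T)
    # Step 3: rotate the head (one shared matrix in the static mode).
    p = rotation_matrix(*np.radians(angles_deg)) @ p
    # Step 4: translate back to the eye coordinate frame.
    x2, y2, z2 = p[0] + rx, p[1] + ry, p[2]
    # Step 5: Cartesian -> spherical angles in degrees (clip guards arccos numerically).
    r2 = np.sqrt(x2**2 + y2**2 + z2**2)
    theta2 = np.degrees(np.arccos(np.clip(z2 / r2, -1.0, 1.0)))
    phi2 = np.degrees(np.arctan2(y2, x2))
    return theta2, phi2
```

In the dynamic mode, the only change is that a different rotation matrix is built for each time step, as described next.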

4.2.1. Static Data Augmentation

The static version involves mimicking head movements before data acquisition without disrupting the stability of the head information during acquisition. To implement the proposed algorithm, we utilize eye and head coordinates from a study [21] to obtain a list of these coordinates for 10 subjects.
This method incorporates four parameters: three rotation angles and the index from the table of subject coordinates used for computation.

4.2.2. Dynamic Data Augmentation

This study explores advanced strategies for replicating dynamic head movements during data acquisition. A significant difference between the new approach and the previous one involves the handling of the rotation matrix. In the previous approach (static), the rotation matrix is shared within the temporal axis. However, in this new approach (dynamic), the real-time movement of a human head is modeled using a sinusoidal function parameterized by its initial angle, maximum angle, and period. Consequently, the rotation matrix is sampled differently for each time step based on a specific equation.
$\gamma(t) = \gamma(0) + \gamma_{\max} \cdot \sin\!\left(\frac{2\pi t}{\gamma_{\mathrm{period}}}\right)$
$\beta(t) = \beta(0) + \beta_{\max} \cdot \sin\!\left(\frac{2\pi t}{\beta_{\mathrm{period}}}\right)$
$\theta(t) = \theta(0) + \theta_{\max} \cdot \sin\!\left(\frac{2\pi t}{\theta_{\mathrm{period}}}\right)$
In contrast to the previous method, where each batch required constructing a rotation matrix by sampling three angles (gamma, beta, and theta), this approach involves sampling three additional maximal angles and three periods. This increases the total number of free parameters from 4 to 10.
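A minimal sketch of the sinusoidal angle schedules defined above is shown below; the sampling intervals correspond to the "dynamic" variant described later, and treating t as a sample index is an assumption.

```python
import numpy as np

def dynamic_angle_schedule(num_steps, rng=np.random.default_rng(),
                           init_range=(-15.0, 15.0),
                           max_range=(-15.0, 15.0),
                           period_range=(4.0, 40.0)):
    """Return per-time-step rotation angles gamma(t), beta(t), theta(t) in degrees."""
    t = np.arange(num_steps)
    schedules = []
    for _ in range(3):                                  # one schedule per rotation axis
        a0 = rng.uniform(*init_range)                   # initial angle, e.g. gamma(0)
        a_max = rng.uniform(*max_range)                 # peak amplitude
        period = rng.uniform(*period_range)             # oscillation period
        schedules.append(a0 + a_max * np.sin(2.0 * np.pi * t / period))
    return schedules  # [gamma(t), beta(t), theta(t)], each of shape (num_steps,)
```

Each time step then receives its own rotation matrix built from the sampled angles, in contrast to the static mode's single shared matrix.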

4.2.3. Interpolating Different Subject Coordinates

Different eye and head coordinates are sampled from the table presented in [21]. This table includes the left and right eye coordinates, as well as head coordinates, for 10 different subjects. To enhance the variety of the output space, for each batch, the eye and head coordinates are dynamically generated using the following approach (a minimal sketch follows the list):
  • Randomly sample eye and head coordinates from two subjects.
  • Generate a scalar value within a specified range.
  • Construct new eye and head coordinates by linearly interpolating between the coordinate systems of the two selected subjects.
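This is a minimal sketch of the interpolation step, assuming the subject table is available as a NumPy array of eye and head coordinates; the array layout and the interpolation range are assumptions.

```python
import numpy as np

def sample_interpolated_coordinates(subject_table, rng=np.random.default_rng(),
                                    mix_range=(0.0, 1.0)):
    """Linearly interpolate eye/head coordinates between two randomly chosen subjects.

    subject_table: (n_subjects, n_coords) array of left-eye, right-eye, and head coordinates.
    """
    i, j = rng.choice(len(subject_table), size=2, replace=False)  # two distinct subjects
    lam = rng.uniform(*mix_range)                                 # interpolation scalar
    return lam * subject_table[i] + (1.0 - lam) * subject_table[j]
```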

4.2.4. Radius Approximation

To convert data to the Cartesian coordinate system, angular values within the x- and y-axes are necessary along with the radius of each point. However, the eye tracker employed does not directly estimate the radius; instead, an approximation is made using the coordinates of the stimulus along the optical axis. This approximation is based on the hypothesis of optimal convergence and vergence in terms of the amplitude when focusing on an LED. It is crucial to emphasize that this approximation serves solely for converting between spherical and Cartesian coordinates.

4.3. Experimental Setting

To assess the importance of EMULATE, the three architectures are trained using the three different setups of the proposed data augmentation methods (static, dynamic, and dynamic high) and are compared first with a regime where no augmentation is applied, and then with a regime where several non-physiological data augmentation and regularization techniques are explored. Finally, the complementarity between the proposed methods and the different baseline methods is explored.

4.3.1. Incorporating the Augmentation Method within the Three HTCE Variant Training Sessions

Firstly, the significance of the proposed augmentation method is assessed by integrating it into the initial training setup. For each of the two datasets (saccade and vergence) and the three architectures—HTCE-MAX, HTCE-MEAN, and HTCSE—a training session is conducted using the three different setups of the proposed data augmentation methods (static, dynamic, and dynamic high), and compared against a traditional regime where no augmentation is involved.
It is important to note that for these experiments, as well as all the experiments presented subsequently, the dilation mechanism is disabled to reduce training costs. The objective of these studies is to compare data augmentation methods rather than achieve the best performance. Thus, the three dilated layers and the subsequent concatenation module are replaced with a single convolution layer. This layer has a number of parameters equal to the sum of the parameters of the three previous dilated convolution layers, along with similar hyperparameters.

4.3.2. Comparing EMULATE with Other Augmentation Methods

To assess the relevance of the proposed augmentation library, in addition to a comparison with a no-augmentation regime, the three different architectures are trained with multiple standard regularization and data augmentation methods widely used in the deep learning community:
  • Dropout [22]: A dropout layer with a rate of 0.2 is inserted after each ConvBlock, Attention Block, and at the input of the MLP.
  • CutMix [7]: CutMix data augmentation is employed with default parameters (alpha set to 1.0), utilizing the Keras implementation [23].
  • Cutout [18]: The Keras implementation [24] is utilized with default parameters (height and width factors set to 0.2).
  • MixUp [8]: The Keras implementation [25] is utilized with default parameters (inverse scale parameter set to 0.2).

Exploring the Complementarity with Non-Physiological Methods

The next objective is to investigate the complementarity between EMULATE and various non-physiological methods such as CutMix, MixUp, Cutout, and Dropout. In this approach, the physiological plausibility of the entire augmentation setup is ‘sacrificed’ in favor of further enhancing the model’s generalization ability. For each dataset, the three models, and the four augmentation methods, each model’s performance is compared with a regime where it is trained using the corresponding augmentation method combined with the dynamic and then the dynamic high variants.

4.4. Model Training and Evaluation

4.4.1. Train/Test Split

To ensure consistency across experiments, we initially generate and store distinct training and test folds for each iteration. A three-fold stratified cross-validation approach is employed, which is a variant of the cross-validation train/test split method. This method accounts for label distribution to ensure similar label distributions between the training and test sets.
Stratification is applied at the level of anonymized patient identifiers to prevent overlap between the training and test sets, using the iterative-stratification library from the scikit-multilearn package [26]. During the process, patient IDs are divided into three folds. Subsequently, the corresponding recording time series and annotations are collected for each candidate patient ID. Finally, two folds of data are used for training, while the remaining fold is reserved for model testing.
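A minimal sketch of this patient-level split is given below; the `IterativeStratification` class name follows the scikit-multilearn documentation, and the exact fold-assembly details are assumptions.

```python
import numpy as np
from skmultilearn.model_selection import IterativeStratification

def patient_level_folds(patient_ids, patient_labels, n_splits=3):
    """Split unique patient IDs into label-stratified folds (multi-label aware).

    patient_ids    : (n_patients,) array of anonymized identifiers.
    patient_labels : (n_patients, n_classes) binary annotation matrix, one row per patient.
    """
    splitter = IterativeStratification(n_splits=n_splits, order=1)
    folds = []
    # The feature matrix is only used for indexing, so a dummy array is sufficient here.
    for _, test_idx in splitter.split(np.zeros((len(patient_ids), 1)), patient_labels):
        folds.append(patient_ids[test_idx])       # patients assigned to this fold
    return folds
```

Recordings and annotations are then gathered per patient ID, so that no patient appears in both the training and test folds.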

4.4.2. Random Batch Sampling

To enhance the variability of the training dataset, a sampling heuristic similar to previous studies is employed. Rather than constructing a static training set using 10 consecutive segments, the different segments are randomly selected for each sample recording in real-time during model fitting.
This improves training regularization by diversifying the training set compared to consecutive sampling. For instance, from a recording of 50 segments, consecutive sampling yields 41 unique samples, while random sampling can generate over $10^{15}$ tuple samples, significantly increasing diversity. By incorporating more segments and dynamic random sampling, we aim to maximize dataset utilization and enhance model generalization.
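A minimal sketch of this dynamic sampling step is shown below; whether segments may be repeated within a tuple is not specified, so sampling without replacement here is an assumption.

```python
import numpy as np

def sample_segment_tuple(segments, num_segments=10, rng=np.random.default_rng()):
    """Randomly select `num_segments` of a recording's pre-cut segments.

    segments: (n_available, segment_size, n_features) array, e.g. n_available ~ 50.
    """
    idx = rng.choice(len(segments), size=num_segments, replace=False)
    return segments[idx]   # a new tuple of segments is drawn at every training step
```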

4.4.3. Data Augmentation Hyperparameters

We explored various setups by varying the sampling law as well as the different angular values.
We experimented with angles ranging from 3 to 30 degrees, sampled on a logarithmic scale. Additionally, we tested two sampling distributions, namely the normal and the uniform distribution. We found that, to increase variability, a uniform distribution is preferable. We also observed that when sampling small angles (for example, from U[−5,5]), the regularization effect diminishes. Conversely, allowing larger angles (for example, sampling from U[−45,45]) degrades the model’s performance.
As a result, we selected three different configurations for further experimentation: one variant for the static mode and two variants for the dynamic mode, namely “dynamic” and “dynamic high”. Table 2 displays the hyperparameters used for the three proposed configurations. In the static mode, only the initial angular value is sampled, from a uniform distribution within [−10, 10]. For the first variant of the dynamic mode, both the initial and peak angular rotations are sampled from a uniform distribution within [−15, 15]. In the second variant, a larger sampling interval of [−20, 20] is used. For both dynamic variants, the sampling periods are drawn from a uniform distribution between 4 and 40.
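For reference, the three configurations can be summarized as the illustrative dictionary below; the key names are assumptions, and applying the [−20, 20] interval to both initial and peak angles in the "dynamic high" variant follows the phrasing above.

```python
# Illustrative EMULATE configurations mirroring Table 2 (key names are assumptions).
EMULATE_CONFIGS = {
    "static":       {"init_angle_range": (-10, 10)},
    "dynamic":      {"init_angle_range": (-15, 15), "max_angle_range": (-15, 15),
                     "period_range": (4, 40)},
    "dynamic_high": {"init_angle_range": (-20, 20), "max_angle_range": (-20, 20),
                     "period_range": (4, 40)},
}
```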

4.4.4. Model and Training Hyperparameters

The training setup from a prior study [10] is reused to evaluate the proposed method. The setup is briefly outlined here. Additionally, the hyperparameters for the HTCE encoder and its lightweight variant are provided in Appendix A, in Table A10 and Table A11, respectively. Refer to [10] for a comprehensive presentation of the model training setup.
The different deep learning architectures are implemented using the TensorFlow package for model fitting on a single NVIDIA A100 80 GB GPU. Each model’s hyperparameters are manually optimized, and training lasts for 100 epochs using the AdamW optimizer with a learning rate set to $1 \times 10^{-4}$ for stability. The weight decay, set at $1 \times 10^{-5}$, aids in regularization [27]. Furthermore, we optimize the focal loss with a gamma of 5 and class balancing. The different settings for class balancing are also presented in Appendix A, in Table A11, which lists the various training hyperparameters.
Finally, an early stopping technique is used to prevent the model from overfitting by monitoring the validation global F1 score and stopping the training after 10 consecutive epochs without improvement.
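A minimal sketch of this training configuration (AdamW, focal loss, early stopping on the validation global F1) is shown below; it assumes TensorFlow ≥ 2.11, and the name of the monitored validation metric is an assumption.

```python
import tensorflow as tf

def compile_and_train(model, train_ds, val_ds):
    """Training configuration mirroring the described setup (a sketch, not the exact pipeline)."""
    optimizer = tf.keras.optimizers.AdamW(learning_rate=1e-4, weight_decay=1e-5)
    loss = tf.keras.losses.BinaryFocalCrossentropy(gamma=5.0)   # focal loss for multi-label targets
    model.compile(optimizer=optimizer, loss=loss)
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_global_f1",          # assumed name of the validation global F1 metric
        mode="max", patience=10, restore_best_weights=True)
    return model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])
```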

Model Evaluation

Medical datasets often exhibit high imbalances with pathologic populations being rarer compared to normal ones. As a result, precision, recall, sensitivity, and specificity are preferred over accuracy in evaluating the method performance for each class. Additionally, recall that the classification problem is a multi-annotation problem, where the model learns binary decisions for each of the eight classes. However, assessing 32 metrics simultaneously is not straightforward; therefore, the macro F1 score for each class allows for a global evaluation of screening performance across both positive and negative classes.
The model performance evaluation involves several metrics to assess the screening ability for both positive and negative cases across different pathologies. These metrics include per-class macro F1 scores, positive F1 scores, negative global F1 scores, and global F1 scores.
The positive F1 score evaluates the overall model performance in screening each pathology for each class by averaging the positive F1 scores for each class separately. Conversely, the negative global F1 score is computed as the weighted mean of micro F1 scores from normal subjects within each class.
The global F1 score, on the other hand, provides a comprehensive assessment by averaging the positive and negative F1 scores. When ranking model performances, priority is given to the global F1 score, followed by the positive F1 score, and finally the negative F1 score.
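A minimal sketch of how these scores could be computed with scikit-learn is given below; the original study's exact weighting for the negative global F1 is not fully specified, so the simple averaging used here is an assumption.

```python
import numpy as np
from sklearn.metrics import f1_score

def global_scores(y_true, y_pred):
    """y_true, y_pred: (n_samples, n_classes) binary arrays for the eight annotation classes."""
    n_classes = y_true.shape[1]
    pos_f1 = [f1_score(y_true[:, c], y_pred[:, c], pos_label=1) for c in range(n_classes)]
    neg_f1 = [f1_score(y_true[:, c], y_pred[:, c], pos_label=0) for c in range(n_classes)]
    positive_f1 = float(np.mean(pos_f1))            # screening ability for the pathologies
    negative_f1 = float(np.mean(neg_f1))            # screening ability for normal subjects
    per_class_macro_f1 = [(p + n) / 2 for p, n in zip(pos_f1, neg_f1)]
    global_f1 = (positive_f1 + negative_f1) / 2     # headline ranking metric
    return {"positive_f1": positive_f1, "negative_f1": negative_f1,
            "per_class_macro_f1": per_class_macro_f1, "global_f1": global_f1}
```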
Subsequently, to ensure robustness and fairness in comparison, each model is evaluated using a stratified three-fold cross-validation method. All metric scores are collected during each fold, and the mean values across the three folds are reported. Finally, to maintain consistency, all models are trained and evaluated using the same fold split.

5. Results

5.1. Comparison with Existing Literature

5.1.1. Comparing the Overall Model Performances

Figure 2 presents a comparison of the overall model performances in terms of the global F1 score. Additionally, in Appendix A, Table A1 and Table A2 present the global F1 scores for the three architectures and the applied data augmentation methods when trained on the vergence and saccade datasets, respectively.
Overall, the hybrid architecture exhibits the highest performance in screening the positive samples, consistently achieving the highest positive F1 score across the two datasets and all the experiments, with a positive F1 score up to 53.9% and 51.2% on the saccade and vergence visual tasks, respectively. On the other hand, the best performance in terms of negative F1 score is achieved by HTCE-MEAN, with negative F1 scores up to 89.6% and 89.2% on the saccade and vergence visual tasks, respectively.

Saccade Visual Task

When considering the saccade visual task, the best overall performance (in terms of global F1 score and positive F1 score) is achieved when training using CutMix augmentation, using HTCE-MEAN (71.6%) and HTCSE (71.5%), respectively.
Furthermore, for the three architectures, EMULATE consistently improves performance relative to training with no augmentation, with improvements of up to 2.8, 5.3, and 0.9 points, respectively. Additionally, in comparison to other non-physiological augmentation methods, the proposed method performs competitively.
When considering only the physiological augmentation, the best performance is achieved when training the HTCSE using the dynamic variant. Additionally, the dynamic variant consistently achieves a higher score than the static variant. However, when inspecting the per-class macro F1 scores presented in Appendix A, Table A4, the differences in performance are not consistent. For example, for HTCSE on classes 0 and 1 (dyslexia and reading disorders), where it achieves its highest scores, altering the head stability information (dynamic) decreases performance relative to the static variant, with losses in the macro F1 score of 0.2 and 0.3 points, respectively.

Vergence Visual Task

With the vergence dataset, the best performance in terms of the global F1 score is achieved when training using CutMix, with relatively similar performances for the three models.
Additionally, EMULATE consistently improves global performance relative to training without augmentation, resulting in an improvement of the overall best score from 68.6% to 69.1%. HTCE-MEAN and HTCSE training show better performances when trained using the dynamic EMULATE variant compared to Cutout and dropout, improving the best global F1 score from 68.3% to 69.1%. Furthermore, compared to MixUp, the difference in overall best performance is 0.2 points.
When considering the overall performance of the physiological data augmentation methods in terms of the global F1 score, the best performance is achieved when training the HTCSE using the dynamic variant (69.1%). Training HTCE-MEAN and HTCSE with the dynamic variant yields the best scores among the physiological variants (68.9% and 69.1%, respectively). The per-class macro F1 scores for each model trained on the vergence visual task are presented in Appendix A, Table A3.

5.2. Extending the Baseline Methods with EMULATE

Table 3 and Table 4 showcase the different global F1 scores obtained by extending the various baseline methods using the two EMULATE dynamic variants. Additionally, in Figure 3 and Figure 4, we present barplots comparing the different baseline performances when combined with the dynamic and dynamic high EMULATE settings, in terms of the global F1 score. Finally, detailed per-class F1 scores for the corresponding experiments are presented in Appendix A, in Table A6, Table A7, Table A8 and Table A9.
On the vergence dataset, notable enhancements are observed across the three methods—Cutout, Dropout, and MixUp—excluding CutMix. Specifically, improvements of (2.1, 1.3, and 0.2), (1.4, 2.5, and 0.3), and (0.9, 1.3, and 0.4) points are observed for HTCE-MAX, HTCE-MEAN, and HTCSE, respectively.
Conversely, in the saccade visual task, when considering the performance of each model separately, Cutout and Dropout consistently improve performance across all three architectures.
Finally, when comparing the best performance gain of each augmentation relative to training with EMULATE disabled, across the three models, notable differences emerge: 1.2, 0.6, and 0 points for Cutout, Dropout, and MixUp, respectively.

6. Discussion

6.1. Major Finding

In this study, EMULATE, a novel data augmentation library tailored to time-series deep learning projects, is introduced. A comprehensive exploration of various settings was conducted, employing both a proposed CNN-based architecture and a hybrid-based architecture.
The evaluation encompasses two datasets: the saccade and the vergence visual tasks. The proposed method enhances generalization performance compared to training without augmentation and several baseline methods, such as Dropout and Cutout, and achieves competitive performance with MixUp. Moreover, although some physiological plausibility is sacrificed in the augmentation setup, incorporating EMULATE—except for CutMix—improves the overall performance across the tested regularization and augmentation methods. From a physiological perspective, it is important to consider that eye movements are not isolated from head movements and positions, as commands for eye movements are known to influence neck muscles, even when the head is artificially stabilized [28].

6.2. Analysis of the Degree of Freedom

The proposed data augmentation method significantly increases the dataset size. For instance, the dynamic mode comprises 10 different parameters, while the static mode is characterized by four parameters. These parameters correspond to the interpolation parameters and a tuple of values representing the initial angular position within each axis. Table 5 lists the specific parameters for each of the two configurations. Note that all the parameters sampled from the continuous interval increase the density of the output space.

6.3. Physiological Data Augmentation

By using physiological data augmentation in deep learning, one can benefit from several advantages and implications. Realistic data augmentation enhances the discrimination performance by increasing the size of the dataset. On the other hand, non-realistic methods have different effects on accuracy, primarily through regularization techniques. For example, Zeshan et al. [6] compared different data augmentation techniques for the medical imaging classification task and found a strong relationship between the chosen augmentation method and the discrimination performance. In addition, the results highlight that realistic methods such as scaling, shearing, and rotating resulted in an accuracy range of 87.4% to 88.0%, while non-realistic methods like noise and power had validation accuracies of 66.0% and 73.7%, respectively. This can be attributed to realistic augmentations effectively increasing the sample size, whereas non-realistic methods serve more as regularization techniques, rather than expanding the information in the dataset itself.

6.4. Generative Method Limitation

A generative model for data augmentation does not sample real data or introduce new information. Moreover, the quality of the generated data relies on the generative model’s ability to capture complex or rare patterns, a task that becomes more challenging with high-resolution time series data; these constraints are especially pronounced when the dataset is limited.

6.5. On the Importance of the Transformation-Based Method

Deep learning-based data augmentation has shown promise, but it can struggle to capture rare patterns in small training datasets and can introduce noise when sampling at different resolutions. Various studies have attempted to address these limitations by increasing the image resolution [29,30,31,32,33] or enhancing the quality [34]. However, this task remains challenging, particularly with a limited training set and high-resolution samples. On the other hand, transformation-based data augmentation generates new samples through realistic transformations. For instance, in the present study, head movements are simulated to generate new rotated patterns that cannot be obtained through linear interpolation of existing samples.
Furthermore, transformation-based methods can be combined with generative data augmentation techniques to further increase variety. Simply put, they can enhance generalization ability on their own or complement cutting-edge generative augmentation methods, which makes them all the more valuable in practice.

7. Limitations and Future Directions

7.1. Dynamic Mode Limitations

The dynamic method allows for greater augmentation capability. However, unlike the static mode, which makes no assumptions when performing data augmentation, the dynamic mode is biased toward the sinusoidal motion model. It allows near-manifold sampling: the head velocity is modeled with a parameterized sinusoidal function whose frequency range is chosen to approximate head movement during testing. Consequently, in contrast to the static mode, which samples from the true data manifold, the dynamic mode may introduce artifacts due to the artificial sinusoidal motion model.
Furthermore, for certain pathologies, dynamic head movement can be a pertinent criterion for screening the corresponding pathology. Additionally, small head movements can be discriminative for the same use case. For such pathologies, the static mode is recommended, as this method does not alter the real-time head instability parameters.

7.2. Computational Efficiency

While EMULATE shows promising results, it introduces significant computational costs when implemented using native Python. However, when incorporating the various computations into the TensorFlow computation graph, the introduced computational cost is relatively low compared to the initial computation cost introduced by the deep learning model. Additional techniques such as pre-fetching and preprocessing each batch in parallel further reduce the introduced computation cost.

7.3. Sensitivity to the Radius Coordinates

In order to convert the different coordinates from the spherical system to the Cartesian system, the radial distance is required, corresponding to the eye fixation coordinate along the optical axis. In the current study, this distance is approximated by the coordinate of the stimulus LED, which assumes perfectly accurate convergence and divergence.

7.4. Future Direction

At this stage, the proposed method shows promising performance when compared to other methods. One important direction involves exploring the performance of the proposed method on different standard datasets as well as other model architectures. This exploration aims to assess the relevance of EMULATE in various settings, including different datasets, models, and learning tasks.
A noteworthy future direction involves investigating why, unlike the other augmentation methods, combining EMULATE with CutMix diminishes its generalization benefit.
Another area of improvement involves replacing the naive sinusoidal model with a more complex one, enabling a better approximation of head motion and enhancing the accuracy of the generated data in the dynamic mode.
Finally, the number of head and eye coordinates taken from [21] is relatively small. Therefore, the next step would be to enrich this set by extending the database of eye and head coordinates, for example by augmenting the table with newly collected head and eye measurements.

8. Conclusions

In conclusion, the challenges associated with limited annotated medical data necessitate innovative solutions for effective data augmentation. Traditional methods, such as mixing-based algorithms, may not be suitable due to their potential to introduce artifacts and alter pathological features. Generative augmentation methods tailored to the characteristics of pathology images are often preferred, yet they may be less effective at capturing complex or rare patterns compared to transformation-based techniques.
In response to these challenges, we propose a novel physiologically based gaze data augmentation library (EMULATE) that emulates natural head movements during data collection, contributing to enhanced sample diversity and authenticity. Our library is the first of its kind to incorporate physiological aspects, generating transformed eye movement data efficiently. Additionally, we perform a first exploration of different architectures and datasets, demonstrating the effectiveness of EMULATE in regularizing the training process and improving the generalization ability of the proposed hybrid architecture, outperforming a CNN-based approach in eye movement classification.

9. Patents

Zoï Kapoula has applied for patents for the technology used to conduct this experiment: REMOBI table (patent US8851669, WO2011073288); AiDEAL analysis software (EP20306166.8, 7 October 2020; EP20306164.3, 7 October 2020—Europe). Patent application pending EP22305903.1.

Author Contributions

Supervision, Z.K.; methodology, A.E.E.H.; software, A.E.E.H.; validation, A.E.E.H. and Z.K.; formal analysis, A.E.E.H.; investigation, A.E.E.H.; resources, Z.K.; data curation, A.E.E.H.; conceptualization, A.E.E.H.; writing—original draft, A.E.E.H.; writing—review and editing, Z.K.; visualization, A.E.E.H.; project administration, Z.K.; funding acquisition, Z.K. All authors have read and agreed to the published version of the manuscript.

Funding

A.E.E.H. is funded by Orasis-Ear, ANRT, and CIFRE.

Informed Consent Statement

This meta-analysis drew upon data sourced from Orasis Ear, in collaboration with clinical centers employing Remobi and Aideal technology. Participating centers agreed to store their data anonymously for further analysis.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are not publicly available. This meta-analysis drew upon data sourced from Orasis Ear, in collaboration with clinical centers employing REMOBI and AiDEAL technology. Participating centers agreed to store their data anonymously for further analysis. However, upon reasonable request, they are available from the corresponding author.

Acknowledgments

This work was granted access to the HPC resources of IDRIS under the allocation 2024-AD011014231 made by GENCI.

Conflicts of Interest

Zoï Kapoula is the founder of Orasis-EAR.

Appendix A

Table A1. The global F1 (Macro F1), global positive F1 (Pos. F1), and global negative F1 (Neg. F1) scores when trained on the vergence visual task. For each model, the best global F1 score is highlighted in bold.

Method          HTCE-MAX                     HTCE-MEAN                    HTCSE
                Macro F1  Neg. F1  Pos. F1   Macro F1  Neg. F1  Pos. F1   Macro F1  Neg. F1  Pos. F1
Cutout          67.2      86.9     47.5      68.2      88.6     47.8      68.5      88.2     48.8
Dropout         68.3      88.0     48.6      66.8      86.0     47.6      68.2      87.4     49.1
CutMix          70.1      89.1     51.0      70.2      89.2     51.1      70.1      88.9     51.2
MixUp           69.2      89.5     48.9      69.5      89.6     49.4      69.3      88.3     50.4
No Aug.         68.6      89.1     48.0      66.9      88.2     45.7      68.3      88.0     48.5
Dynamic         67.6      87.0     48.1      68.9      88.6     49.3      69.1      88.4     49.8
Dynamic High    66.1      86.4     45.9      68.3      87.5     49.1      69.1      88.3     49.9
Static          68.6      88.4     48.9      67.8      86.9     48.6      68.8      88.3     49.3
Table A2. The global F1 (Macro F1), global positive F1 (Pos. F1), and global negative F1 (Neg. F1) scores when trained on the saccade visual task. For each model, the best global F1 score is highlighted in bold.

Method          HTCE-MAX                     HTCE-MEAN                    HTCSE
                Macro F1  Neg. F1  Pos. F1   Macro F1  Neg. F1  Pos. F1   Macro F1  Neg. F1  Pos. F1
Cutout          69.7      89.1     50.2      68.9      88.1     49.8      69.5      88.1     50.9
Dropout         69.8      87.9     51.6      69.0      87.2     50.7      70.4      88.4     52.5
CutMix          71.3      89.4     53.3      71.6      89.6     53.5      71.5      89.1     53.9
MixUp           69.8      89.4     50.3      70.8      89.2     52.4      70.7      88.6     52.8
No Aug.         67.0      86.8     47.3      64.8      86.0     43.6      69.2      88.0     50.4
Dynamic         69.7      88.9     50.6      70.1      88.4     51.8      70.3      88.6     52.0
Dynamic High    69.8      88.6     51.0      69.9      89.2     50.5      70.0      88.7     51.3
Static          69.0      88.6     49.5      69.6      88.4     50.8      69.7      88.2     51.3
Table A3. Per-class macro F1 scores for each augmentation and regularization method when separately training the three different architectures on the vergence dataset.

Model       Augmentation    Class 0  Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7
HTCE-MAX    CutMix          67.4     71.3     63.9     63.5     78.0     70.2     74.0     73.1
HTCE-MAX    Cutout          64.5     68.2     63.3     62.2     73.8     66.0     70.8     69.9
HTCE-MAX    Dropout         65.9     69.4     62.4     62.7     75.7     67.5     71.7     72.3
HTCE-MAX    MixUp           66.3     70.4     62.9     63.2     76.9     68.7     73.7     72.4
HTCE-MAX    No Aug.         65.7     70.1     62.1     62.8     76.7     69.0     73.1     71.9
HTCE-MAX    Dynamic         63.4     69.9     62.0     60.1     74.9     67.7     71.9     71.6
HTCE-MAX    Dynamic High    64.3     68.0     62.5     62.5     72.4     65.7     70.2     64.3
HTCE-MAX    Static          65.8     69.6     63.4     61.2     76.4     68.9     73.2     71.6
HTCE-MEAN   CutMix          68.1     71.7     63.9     63.9     78.2     70.4     73.1     73.0
HTCE-MEAN   Cutout          65.7     69.5     63.7     63.7     75.7     68.5     71.8     68.0
HTCE-MEAN   Dropout         63.9     67.7     62.1     61.3     73.0     67.3     69.8     70.2
HTCE-MEAN   MixUp           66.8     70.9     62.7     63.3     77.1     70.1     73.6     72.5
HTCE-MEAN   No Aug.         64.9     68.4     63.3     60.6     72.9     68.3     71.7     66.4
HTCE-MEAN   Dynamic         66.8     70.7     63.5     62.7     76.7     69.1     71.8     71.1
HTCE-MEAN   Dynamic High    66.1     70.5     62.2     61.9     74.4     68.6     71.7     71.8
HTCE-MEAN   Static          65.5     68.0     63.8     63.2     75.1     67.7     71.8     68.3
HTCSE       CutMix          67.6     71.5     64.6     63.4     77.2     69.5     74.1     73.7
HTCSE       Cutout          66.5     69.7     62.7     62.9     75.2     68.4     72.0     71.7
HTCSE       Dropout         66.2     69.3     63.3     63.1     74.5     67.5     71.0     71.8
HTCSE       MixUp           66.8     70.1     63.4     64.0     76.6     68.3     73.1     73.3
HTCSE       No Aug.         65.9     68.8     62.9     62.5     75.5     67.7     72.5     71.2
HTCSE       Dynamic         67.3     69.9     63.7     62.8     76.1     68.1     72.3     73.7
HTCSE       Dynamic High    67.0     69.9     63.7     62.5     76.4     68.0     73.4     72.9
HTCSE       Static          67.5     70.2     63.0     62.6     76.1     68.2     72.0     71.7
Table A4. Per-class macro F1 scores for each augmentation and regularization method when separately training the three different architectures on the saccade dataset.

Model       Augmentation    Class 0  Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7
HTCE-MAX    CutMix          70.5     74.4     65.5     66.7     79.0     72.8     73.0     69.8
HTCE-MAX    Cutout          68.7     71.8     64.5     64.0     78.5     70.8     72.7     67.6
HTCE-MAX    Dropout         68.1     71.6     64.5     66.1     77.1     71.4     72.3     68.0
HTCE-MAX    MixUp           68.9     73.0     63.1     65.5     78.6     70.8     72.2     67.7
HTCE-MAX    No Aug.         67.3     70.6     63.2     61.4     76.6     65.6     66.9     65.5
HTCE-MAX    Dynamic         68.5     72.7     64.3     64.9     78.5     70.0     72.3     67.8
HTCE-MAX    Dynamic High    68.1     71.7     64.5     64.9     78.6     71.4     72.5     68.2
HTCE-MAX    Static          68.4     71.7     62.8     64.5     78.6     70.6     72.0     64.7
HTCE-MEAN   CutMix          70.5     74.4     65.5     67.2     79.8     73.4     74.1     68.9
HTCE-MEAN   Cutout          69.0     72.2     65.6     64.8     77.7     71.3     68.4     63.5
HTCE-MEAN   Dropout         69.4     72.4     66.1     65.7     74.0     70.1     68.4     66.8
HTCE-MEAN   MixUp           69.7     73.1     66.2     65.5     79.5     72.1     73.8     67.5
HTCE-MEAN   No Aug.         64.3     66.8     63.2     61.0     73.8     66.3     67.8     56.2
HTCE-MEAN   Dynamic         69.7     72.3     66.0     65.4     78.9     71.4     72.0     66.1
HTCE-MEAN   Dynamic High    68.9     72.3     64.4     65.0     79.0     72.0     71.1     67.2
HTCE-MEAN   Static          69.1     71.8     64.9     65.8     77.4     71.4     72.0     65.5
HTCSE       CutMix          70.3     73.9     65.5     67.1     79.8     72.9     73.7     70.2
HTCSE       Cutout          67.9     71.1     64.8     65.1     78.0     69.8     72.3     68.3
HTCSE       Dropout         69.3     73.1     65.8     66.5     77.0     71.8     72.1     69.0
HTCSE       MixUp           69.7     72.1     65.7     66.9     78.5     72.3     73.0     68.3
HTCSE       No Aug.         68.2     71.6     64.4     64.5     76.9     69.6     71.6     67.8
HTCSE       Dynamic         68.8     73.0     63.4     66.2     78.6     71.8     73.2     68.3
HTCSE       Dynamic High    69.2     72.6     64.0     65.3     78.4     71.5     72.4     67.8
HTCSE       Static          68.4     71.9     64.6     65.5     77.2     70.1     72.3     68.8
Table A5. Per-class macro F1 scores for each augmentation and regularization baseline method when training the three different architectures on the vergence dataset in combination with the dynamic variant of EMULATE.

Model       Augmentation    Class 0  Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7
HTCE-MAX    CutMix          67.7     71.9     63.6     62.5     77.6     69.1     74.0     72.8
HTCE-MAX    Cutout          66.6     69.7     64.3     63.3     76.5     69.2     73.0     72.9
HTCE-MAX    Dropout         67.4     70.8     64.0     62.8     76.7     69.7     72.8     73.5
HTCE-MAX    MixUp           67.3     70.9     62.6     62.5     76.3     68.7     73.7     73.8
HTCE-MEAN   CutMix          68.0     71.1     63.0     62.9     76.9     70.1     73.1     74.6
HTCE-MEAN   Cutout          67.5     70.6     64.6     62.4     77.2     68.7     72.8     72.4
HTCE-MEAN   Dropout         67.3     70.8     63.9     63.5     75.6     69.7     70.2     73.1
HTCE-MEAN   MixUp           67.3     71.2     63.1     63.7     77.1     69.4     73.6     73.8
HTCSE       CutMix          67.5     71.0     62.8     63.4     77.0     68.9     74.1     73.3
HTCSE       Cutout          67.5     70.3     63.3     63.2     75.9     68.6     72.9     73.1
HTCSE       Dropout         68.0     70.8     63.5     63.2     75.6     68.5     72.9     72.7
HTCSE       MixUp           68.2     70.8     64.3     64.3     76.4     67.5     73.5     73.5
Table A6. Per-class macro F1 scores for each baseline augmentation and regularization method when combined with the dynamic high variant of EMULATE, for the three architectures trained on the vergence dataset.
Model | Augmentation Method | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7
HTCE-MAX | CutMix | 67.7 | 70.7 | 64.4 | 62.2 | 77.2 | 69.6 | 73.3 | 73.3
 | Cutout | 66.9 | 70.4 | 63.1 | 63.0 | 76.4 | 68.3 | 73.4 | 72.8
 | Dropout | 67.1 | 71.4 | 63.3 | 64.5 | 75.9 | 69.2 | 72.8 | 73.8
 | MixUp | 67.4 | 70.5 | 62.9 | 62.1 | 76.9 | 68.5 | 72.9 | 73.5
HTCE-MEAN | CutMix | 68.0 | 71.4 | 64.0 | 63.3 | 77.2 | 69.7 | 72.9 | 73.6
 | Cutout | 67.2 | 71.3 | 63.5 | 62.4 | 77.4 | 69.2 | 73.6 | 73.7
 | Dropout | 67.3 | 71.1 | 64.6 | 63.8 | 75.3 | 69.6 | 70.7 | 72.9
 | MixUp | 68.1 | 71.3 | 64.1 | 62.4 | 77.3 | 69.5 | 71.8 | 74.1
HTCSE | CutMix | 66.8 | 70.8 | 63.4 | 62.9 | 76.8 | 68.1 | 73.4 | 73.9
 | Cutout | 67.3 | 69.9 | 63.6 | 63.5 | 76.5 | 67.9 | 74.1 | 73.3
 | Dropout | 68.6 | 71.2 | 64.3 | 64.1 | 74.9 | 68.7 | 73.2 | 72.3
 | MixUp | 67.5 | 70.3 | 64.0 | 63.8 | 76.8 | 67.7 | 73.3 | 73.8
Table A7. Per-class macro F1 scores for each baseline augmentation and regularization method when combined with the dynamic variant of EMULATE, for the three architectures trained on the saccade dataset.
Model | Augmentation Method | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7
HTCE-MAX | CutMix | 69.4 | 73.4 | 65.0 | 65.6 | 79.6 | 71.6 | 72.8 | 67.7
 | Cutout | 68.9 | 72.6 | 63.5 | 64.0 | 79.5 | 72.0 | 72.3 | 67.6
 | Dropout | 69.5 | 73.0 | 64.9 | 65.4 | 79.4 | 73.3 | 72.2 | 70.9
 | MixUp | 68.1 | 72.7 | 63.5 | 64.3 | 79.8 | 71.1 | 72.6 | 69.0
HTCE-MEAN | CutMix | 70.4 | 73.5 | 65.0 | 66.4 | 79.6 | 73.0 | 73.1 | 68.4
 | Cutout | 68.9 | 72.1 | 64.4 | 65.4 | 79.3 | 73.2 | 71.3 | 67.6
 | Dropout | 69.6 | 72.6 | 64.9 | 64.4 | 77.5 | 71.8 | 70.1 | 67.6
 | MixUp | 69.5 | 73.0 | 65.0 | 63.9 | 79.5 | 72.4 | 73.4 | 69.5
HTCSE | CutMix | 69.6 | 73.1 | 64.1 | 66.3 | 79.6 | 72.4 | 73.1 | 68.7
 | Cutout | 69.3 | 73.1 | 64.9 | 66.2 | 78.4 | 71.8 | 73.4 | 68.5
 | Dropout | 69.4 | 72.3 | 65.4 | 67.1 | 77.8 | 73.7 | 74.0 | 67.8
 | MixUp | 69.5 | 73.1 | 64.2 | 67.0 | 79.6 | 72.3 | 73.5 | 68.0
Table A8. Per-class macro F1 scores for each baseline augmentation and regularization method when combined with the dynamic high variant of EMULATE, for the three architectures trained on the saccade dataset.
Model | Augmentation Method | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7
HTCE-MAX | CutMix | 69.0 | 72.9 | 63.8 | 65.5 | 79.2 | 71.7 | 72.2 | 67.7
 | Cutout | 68.2 | 72.2 | 63.5 | 64.1 | 79.1 | 71.0 | 72.3 | 68.6
 | Dropout | 68.7 | 72.5 | 64.9 | 65.1 | 79.2 | 73.2 | 72.2 | 69.8
 | MixUp | 67.7 | 72.6 | 63.1 | 64.2 | 79.6 | 71.3 | 72.4 | 68.4
HTCE-MEAN | CutMix | 70.0 | 73.4 | 64.6 | 65.4 | 79.3 | 73.4 | 73.3 | 68.0
 | Cutout | 69.5 | 72.4 | 64.2 | 65.6 | 79.1 | 73.0 | 70.9 | 66.7
 | Dropout | 69.7 | 73.5 | 65.5 | 65.6 | 77.7 | 72.0 | 71.6 | 67.2
 | MixUp | 69.1 | 73.3 | 64.9 | 65.9 | 78.7 | 72.1 | 73.0 | 68.6
HTCSE | CutMix | 69.8 | 73.1 | 63.7 | 65.3 | 79.3 | 73.1 | 73.1 | 68.1
 | Cutout | 69.9 | 73.0 | 64.7 | 66.3 | 78.9 | 72.6 | 74.2 | 68.8
 | Dropout | 70.1 | 73.2 | 64.9 | 67.1 | 77.9 | 73.6 | 73.2 | 68.5
 | MixUp | 69.1 | 72.8 | 63.7 | 66.2 | 79.2 | 72.6 | 73.1 | 68.9
Table A9. HTCE feature extractor hyperparameters.
Stage | Filter Size | Pooling | Kernel Size | Activation
1 | 128-128-128 | 0-0-2 | 5-5-5 | relu
2 | 128-128-128 | 0-0-2 | 5-5-5 | relu
3 | 256-256-256 | 0-2-2 | 5-5-5 | relu
4 | 512-512-512 | 0-2-2 | 3-3-3 | relu
Table A10. Lightweight HTCE hyperparameters.
Stage | Filter Size | Pooling | Kernel Size | Activation
1 | 64-64 | 0-2 | 5-5 | relu
2 | 128-128 | 0-2 | 5-5 | relu
3 | 256-256 | 2-2 | 5-5 | relu
4 | 512-512 | 2-2 | 3-3 | relu
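To make the stage specifications in Tables A9 and A10 concrete, the sketch below assembles a stack of 1-D convolutional blocks from those hyperparameters. This is an illustration only: the exact padding, normalization, input channel count, and layer ordering of the HTCE feature extractor are assumptions here, not a reproduction of the paper's implementation.

```python
# Illustrative only: builds Conv1D stages from the hyperparameters in Tables A9/A10.
# Assumptions (not stated in the tables): "same"-style padding, ReLU after every
# convolution, and a pooling entry of 0 meaning "no pooling" at that position.
import torch.nn as nn

def build_stages(in_channels, stages):
    """stages: list of (filters, poolings, kernel_sizes) triples, one per stage."""
    layers = []
    for filters, poolings, kernels in stages:
        for f, p, k in zip(filters, poolings, kernels):
            layers.append(nn.Conv1d(in_channels, f, kernel_size=k, padding=k // 2))
            layers.append(nn.ReLU())
            if p > 0:
                layers.append(nn.MaxPool1d(kernel_size=p))
            in_channels = f
    return nn.Sequential(*layers)

# Table A9 (HTCE feature extractor)
htce_stages = [
    ((128, 128, 128), (0, 0, 2), (5, 5, 5)),
    ((128, 128, 128), (0, 0, 2), (5, 5, 5)),
    ((256, 256, 256), (0, 2, 2), (5, 5, 5)),
    ((512, 512, 512), (0, 2, 2), (3, 3, 3)),
]
# in_channels=4 is a placeholder (e.g., left/right eye x/y); the real channel count may differ.
feature_extractor = build_stages(in_channels=4, stages=htce_stages)
```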
Table A11. Model Training hyperparameters.
Hyperparameter | Value
Optimizer |
   Name | AdamW
   Learning rate | 0.0001
   Beta1 | 0.9
   Beta2 | 0.999
   Weight decay | 0.00001
Loss |
   Name | Focal loss
   Alpha class 0 | 0.73
   Alpha class 1 | 0.61
   Alpha class 2 | 0.90
   Alpha class 3 | 0.88
   Alpha class 4 | 0.67
   Alpha class 5 | 0.83
   Alpha class 6 | 0.81
   Alpha class 7 | 0.32
   Gamma | 5
Training |
   Batch size (HTCE-MAX) | 128
   Batch size (HTCE-MEAN) | 128
   Batch size (Baselines) | 128
   Batch size (HTCSE) | 256
   Epochs | 100
   Number of folds | 3
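A minimal sketch of the optimizer and loss configuration in Table A11 is given below. Only the numerical values come from the table; the focal loss shown is one common per-class binary formulation, and the model object is a placeholder, so this should be read as an assumption-laden illustration rather than the paper's implementation.

```python
# Sketch of the Table A11 optimizer/loss settings (PyTorch).
# The focal-loss formulation is a standard per-class binary variant (an assumption);
# only the hyperparameter values are taken from Table A11.
import torch

ALPHAS = torch.tensor([0.73, 0.61, 0.90, 0.88, 0.67, 0.83, 0.81, 0.32])
GAMMA = 5.0

def focal_loss(logits, targets):
    """logits, targets: (batch, 8) tensors; targets are 0/1 floats per pathology group."""
    p = torch.sigmoid(logits)
    ce = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = targets * p + (1 - targets) * (1 - p)            # probability of the true label
    alpha_t = targets * ALPHAS + (1 - targets) * (1 - ALPHAS)
    return (alpha_t * (1 - p_t) ** GAMMA * ce).mean()

model = torch.nn.Linear(64, 8)  # placeholder for an HTCE variant
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=1e-5
)
```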

References

  1. Tagnamas, J.; Ramadan, H.; Yahyaouy, A.; Tairi, H. Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images. Vis. Comput. Ind. Biomed. Art 2024, 7, 2.
  2. Pan, X.; Xiong, J. DCTNet: A Hybrid Model of CNN and Dilated Contextual Transformer for Medical Image Segmentation. In Proceedings of the 2023 IEEE 6th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 24–26 February 2023; IEEE: New York, NY, USA, 2023; Volume 6, pp. 1316–1320.
  3. Lin, X.; Yan, Z.; Deng, X.; Zheng, C.; Yu, L. ConvFormer: Plug-and-Play CNN-Style Transformers for Improving Medical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Vancouver, BC, Canada, 8–12 October 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 642–651.
  4. Abibullaev, B.; Keutayeva, A.; Zollanvari, A. Deep Learning in EEG-Based BCIs: A Comprehensive Review of Transformer Models, Advantages, Challenges, and Applications. IEEE Access 2023, 11, 127271–127301.
  5. Cubuk, E.D.; Zoph, B.; Shlens, J.; Le, Q.V. RandAugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 702–703.
  6. Fons, E.; Dawson, P.; Zeng, X.-J.; Keane, J.; Iosifidis, A. Adaptive weighting scheme for automatic time-series data augmentation. arXiv 2021, arXiv:2102.08310.
  7. Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6023–6032.
  8. Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv 2017, arXiv:1710.09412.
  9. Alex, A.; Wang, L.; Gastaldo, P.; Cavallaro, A. Mixup augmentation for generalizable speech separation. In Proceedings of the 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP), Tampere, Finland, 6–8 October 2021; IEEE: New York, NY, USA, 2021; pp. 1–6.
  10. El Hmimdi, A.E.; Palpanas, T.; Kapoula, Z. Efficient Diagnostic Classification of Diverse Pathologies through Contextual Eye Movement Data Analysis with a Novel Hybrid Architecture. Sci. Rep.
  11. Zemblys, R.; Niehorster, D.C.; Holmqvist, K. gazeNet: End-to-end eye-movement event detection with deep neural networks. Behav. Res. Methods 2019, 51, 840–864.
  12. Elbattah, M.; Loughnane, C.; Guérin, J.L.; Carette, R.; Cilia, F.; Dequen, G. Variational autoencoder for image-based augmentation of eye-tracking data. J. Imaging 2021, 7, 83.
  13. Fuhl, W.; Rong, Y.; Kasneci, E. Fully convolutional neural networks for raw eye tracking data segmentation, generation, and reconstruction. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: New York, NY, USA, 2021; pp. 142–149.
  14. Luo, Y.; Zhu, L.Z.; Wan, Z.Y.; Lu, B.L. Data augmentation for enhancing EEG-based emotion recognition with deep generative models. J. Neural Eng. 2020, 17, 056021.
  15. Özdenizci, O.; Erdoğmuş, D. On the use of generative deep neural networks to synthesize artificial multichannel EEG signals. In Proceedings of the 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), Virtual, 4–6 May 2021; IEEE: New York, NY, USA, 2021; pp. 427–430.
  16. Luo, Y.; Zhu, L.Z.; Lu, B.L. A GAN-based data augmentation method for multimodal emotion recognition. In Proceedings of the Advances in Neural Networks—ISNN 2019: 16th International Symposium on Neural Networks, ISNN 2019, Moscow, Russia, 10–12 July 2019; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2019; pp. 141–150.
  17. Cubuk, E.D.; Zoph, B.; Mane, D.; Vasudevan, V.; Le, Q.V. AutoAugment: Learning augmentation strategies from data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 113–123.
  18. DeVries, T.; Taylor, G.W. Improved regularization of convolutional neural networks with cutout. arXiv 2017, arXiv:1708.04552.
  19. El Hmimdi, A.E.; Kapoula, Z.; Sainte Fare Garnot, V. Deep Learning-Based Detection of Learning Disorders on a Large Scale Dataset of Eye Movement Records. BioMedInformatics 2024, 4, 519–541.
  20. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
  21. Singh, P.; Thoke, A.; Verma, K. A Novel Approach to Face Detection Algorithm. Int. J. Comput. Appl. 2011, 975, 8887.
  22. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
  23. CutMix Algorithm. Available online: https://keras.io/api/keras_cv/layers/augmentation/cut_mix (accessed on 2 February 2024).
  24. Cutout Algorithm. Available online: https://keras.io/api/keras_cv/layers/augmentation/random_cutout/ (accessed on 2 February 2024).
  25. MixUp Algorithm. Available online: https://keras.io/api/keras_cv/layers/augmentation/mix_up/ (accessed on 2 February 2024).
  26. Iterative Stratification. Available online: https://scikit.ml/api/skmultilearn.model_selection.iterative_stratification.html (accessed on 2 February 2024).
  27. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101.
  28. André-Deshays, C.; Berthoz, A.; Revel, M. Eye-head coupling in humans: I. Simultaneous recording of isolated motor units in dorsal neck muscles and horizontal eye movements. Exp. Brain Res. 1988, 69, 399–406.
  29. Baur, C.; Albarqouni, S.; Navab, N. MelanoGANs: High resolution skin lesion synthesis with GANs. arXiv 2018, arXiv:1804.04338.
  30. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
  31. Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. arXiv 2017, arXiv:1710.10196.
  32. Hayat, K. Super-resolution via deep learning. arXiv 2017, arXiv:1706.09077.
  33. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part IV; Springer: Berlin/Heidelberg, Germany, 2014; pp. 184–199.
  34. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
Figure 1. Illustration of the physical model used to build the proposed data augmentation method. Point R corresponds to the position of the right-eye pupil center, point L to the position of the left-eye pupil center, and point O to the origin of the reference frame as well as the position of the head center. The illustration shows the (OX, OY) plane containing the pupil centers and the head center.
Figure 2. A comparison of the global F1 scores obtained with the different methods for the three architectures, when trained on the saccade dataset (right subfigure) and the vergence dataset (left subfigure).
Figure 3. A barplot comparing the different baseline performances when combined with the dynamic and dynamic high EMULATE settings, and trained with the HTCE-MAX (left subfigure), the HTCE-MEAN (middle subfigure), and the HTCSE (right subfigure) on the vergence dataset.
Figure 4. A barplot comparing the different baseline performances when combined with the dynamic and dynamic high EMULATE settings, and trained with the HTCE-MAX (left subfigure), the HTCE-MEAN (middle subfigure), and the HTCSE (right subfigure) on the saccade dataset.
Table 1. Presentation of the different groups of pathologies and the patient count for the saccade and the vergence datasets.
Class Identifier | Corresponding Disorder | Saccade Dataset | Vergence Dataset
0 | Dyslexia | 873 | 854
1 | Reading disorder | 1264 | 1265
2 | Listening and expressing | 331 | 321
3 | Vertigo and postural | 396 | 372
4 | Attention and neurological | 1016 | 975
5 | Neuro-strabismus | 455 | 511
6 | Visual fatigue | 678 | 567
7 | Other pathologies | 195 | 279
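Because the class counts in Table 1 are imbalanced, the training protocol relies on iterative stratification [26] to build the three cross-validation folds listed in Table A11. A minimal sketch with scikit-multilearn is given below; the feature and label arrays are placeholders, not the actual datasets.

```python
# Illustrative 3-fold multi-label split with iterative stratification [26].
# X and Y are random placeholders standing in for the gaze features and the
# eight pathology-group labels.
import numpy as np
from skmultilearn.model_selection import IterativeStratification

X = np.random.rand(100, 16)             # placeholder features
Y = np.random.randint(0, 2, (100, 8))   # placeholder multi-label targets (8 groups)

splitter = IterativeStratification(n_splits=3, order=1)
for train_idx, test_idx in splitter.split(X, Y):
    X_train, Y_train = X[train_idx], Y[train_idx]
    X_test, Y_test = X[test_idx], Y[test_idx]
```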
Table 2. An overview of the various parameters defining the configuration for each of the 5 augmentation strategies. Note that U(a,b) corresponds to the uniform distribution on the interval [a,b].
Parameter | Dynamic | Dynamic High | Static
Initial angular position | U(−15,15) | U(−20,20) | U(−10,10)
Maximum angular position | U(−15,15) | U(−20,20) | -
Period | U(4,40) | U(4,40) | -
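As an illustration of how the configurations in Table 2 could be drawn, the sketch below samples one set of head-movement parameters per variant from the stated uniform distributions. Angles are assumed to be in degrees, a single rotation axis is sampled for brevity (Table 5 lists three angular parameters), and the function and key names are ours, not the library's API.

```python
# Illustrative sampling of one EMULATE configuration from the ranges in Table 2.
# Angles assumed to be in degrees; names are illustrative, not the library API.
import numpy as np

RANGES = {
    "dynamic":      {"init": (-15, 15), "max_amp": (-15, 15), "period": (4, 40)},
    "dynamic_high": {"init": (-20, 20), "max_amp": (-20, 20), "period": (4, 40)},
    "static":       {"init": (-10, 10), "max_amp": None,      "period": None},
}

def sample_config(variant, rng):
    r = RANGES[variant]
    cfg = {"initial_angle": rng.uniform(*r["init"])}
    if r["max_amp"] is not None:          # only the dynamic variants move over time
        cfg["max_angle"] = rng.uniform(*r["max_amp"])
        cfg["period"] = rng.uniform(*r["period"])
    return cfg

rng = np.random.default_rng(0)
print(sample_config("dynamic", rng))
```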
Table 3. A comparison of the three global F1 scores across the different architectures when training with each baseline augmentation, both with and without the integration of the proposed methods (dynamic and dynamic high variants), on the saccade visual task. The best global F1 score for each method within the three setups is highlighted in bold.
Technique | Head | EMULATE Disabled | | | Dynamic | | | Dynamic High | |
 | | Macro F1 | Neg. F1 | Pos. F1 | Macro F1 | Neg. F1 | Pos. F1 | Macro F1 | Neg. F1 | Pos. F1
HTCE-MAX | Cutout | 69.7 | 89.1 | 50.2 | 69.9 | 89.0 | 50.8 | 69.7 | 89.1 | 50.4
 | Dropout | 69.8 | 87.9 | 51.6 | 71.0 | 89.1 | 52.8 | 70.5 | 88.8 | 52.3
 | CutMix | 71.3 | 89.4 | 53.3 | 70.5 | 89.0 | 52.0 | 70.1 | 89.2 | 51.1
 | MixUp | 69.8 | 89.4 | 50.3 | 70.0 | 89.3 | 50.7 | 69.8 | 89.3 | 50.2
HTCE-MEAN | Cutout | 68.9 | 88.1 | 49.8 | 70.1 | 88.8 | 51.5 | 70.1 | 88.6 | 51.5
 | Dropout | 69.0 | 87.2 | 50.7 | 69.7 | 88.1 | 51.3 | 70.2 | 88.6 | 51.9
 | CutMix | 71.6 | 89.6 | 53.5 | 71.0 | 89.4 | 52.7 | 70.8 | 89.1 | 52.4
 | MixUp | 70.8 | 89.2 | 52.4 | 70.7 | 89.4 | 51.9 | 70.6 | 88.9 | 52.2
HTCSE | Cutout | 69.5 | 88.1 | 50.9 | 70.6 | 88.9 | 52.2 | 70.9 | 89.1 | 52.7
 | Dropout | 70.4 | 88.4 | 52.5 | 70.8 | 88.5 | 53.1 | 70.9 | 88.5 | 53.3
 | CutMix | 71.5 | 89.1 | 53.9 | 70.7 | 89.0 | 52.4 | 70.6 | 88.9 | 52.3
 | MixUp | 70.7 | 88.6 | 52.8 | 70.8 | 89.2 | 52.4 | 70.6 | 88.9 | 52.2
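For reference, the three global scores reported in Tables 3 and 4 (macro, negative, and positive F1) can be computed with scikit-learn as sketched below. The sketch assumes each of the eight classes is treated as an independent binary (one-vs-rest) prediction and that the macro score is the mean of the negative and positive F1 scores; this is our reading of the multi-label setup, consistent with the reported numbers, rather than a specification taken from the paper.

```python
# Sketch of the macro / negative / positive F1 computation, assuming eight
# independent binary heads (our reading of the multi-label setup).
import numpy as np
from sklearn.metrics import f1_score

def global_f1_scores(y_true, y_pred):
    """y_true, y_pred: (n_samples, 8) binary arrays."""
    pos, neg = [], []
    for c in range(y_true.shape[1]):
        pos.append(f1_score(y_true[:, c], y_pred[:, c], pos_label=1, zero_division=0))
        neg.append(f1_score(y_true[:, c], y_pred[:, c], pos_label=0, zero_division=0))
    pos_f1, neg_f1 = np.mean(pos), np.mean(neg)
    macro_f1 = (pos_f1 + neg_f1) / 2   # assumption: macro = mean of neg./pos. F1
    return macro_f1, neg_f1, pos_f1
```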
Table 4. A comparison of the three global F1 scores across different architectures during training with baseline augmentations, both with and without the integration of the proposed methods (dynamic and dynamic high variants) on the vergence visual task. The best global F1 score for each method within the three setups is highlighted in bold.
Technique | Head | EMULATE Disabled | | | Dynamic | | | Dynamic High | |
 | | Macro F1 | Neg. F1 | Pos. F1 | Macro F1 | Neg. F1 | Pos. F1 | Macro F1 | Neg. F1 | Pos. F1
HTCE-MAX | Cutout | 67.2 | 86.9 | 47.5 | 69.3 | 89.0 | 49.6 | 69.2 | 88.6 | 49.7
 | Dropout | 68.3 | 88.0 | 48.6 | 69.6 | 88.6 | 50.6 | 69.6 | 88.6 | 50.6
 | CutMix | 70.1 | 89.1 | 51.0 | 69.8 | 89.3 | 50.3 | 69.7 | 89.3 | 50.1
 | MixUp | 69.2 | 89.5 | 48.9 | 69.4 | 89.1 | 49.6 | 69.2 | 89.3 | 49.1
HTCE-MEAN | Cutout | 68.2 | 88.6 | 47.8 | 69.4 | 88.3 | 50.5 | 69.6 | 89.5 | 49.8
 | Dropout | 66.8 | 86.0 | 47.6 | 69.1 | 88.2 | 50.0 | 69.3 | 88.0 | 50.6
 | CutMix | 70.2 | 89.2 | 51.1 | 69.9 | 89.2 | 50.5 | 69.9 | 89.0 | 50.8
 | MixUp | 69.5 | 89.6 | 49.4 | 69.8 | 89.4 | 50.1 | 69.7 | 88.8 | 50.6
HTCSE | Cutout | 68.5 | 88.2 | 48.8 | 69.3 | 88.2 | 50.3 | 69.4 | 88.5 | 50.3
 | Dropout | 68.2 | 87.4 | 49.1 | 69.3 | 88.0 | 50.6 | 69.5 | 88.0 | 51.1
 | CutMix | 70.1 | 88.9 | 51.2 | 69.6 | 88.7 | 50.6 | 69.4 | 88.6 | 50.2
 | MixUp | 69.3 | 88.3 | 50.4 | 69.7 | 88.5 | 50.9 | 69.5 | 88.6 | 50.4
Table 5. Presentation of the sampling parameters.
Parameters | Dynamic | Static
Coordinates interpolation parameter (1 parameter) | X | X
Initial angular position (3 parameters) | X | X
Maximum angular rotation amplitude (3 parameters) | X | 
Sinusoidal period (3 parameters) | X | 
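To make the dynamic configuration concrete, the following sketch superimposes a sinusoidal angular offset, emulating a slow head rotation about a single axis, on a one-dimensional gaze trace. This is a simplified reading of the model in Figure 1 and the parameters in Tables 2 and 5, not the EMULATE implementation: it ignores the full 3-D geometry, the coordinates-interpolation parameter, and the sign convention of the recording.

```python
# Simplified, single-axis illustration of the "dynamic" augmentation idea:
# superimpose a sinusoidal head-rotation offset on a recorded gaze trace.
# This is our reading of Figure 1 / Tables 2 and 5, not the library's code.
import numpy as np

def augment_dynamic(gaze_deg, sampling_rate, initial_angle, max_angle, period):
    """gaze_deg: 1-D array of gaze positions in degrees; period assumed in seconds."""
    t = np.arange(len(gaze_deg)) / sampling_rate
    # Head angle oscillates between initial_angle and max_angle with the given period.
    head = initial_angle + 0.5 * (max_angle - initial_angle) * (1 - np.cos(2 * np.pi * t / period))
    # Sign convention is illustrative: whether the offset is added or subtracted
    # depends on whether the signal is gaze-in-space or eye-in-head.
    return gaze_deg + head

trace = np.zeros(1000)   # placeholder 5 s recording sampled at 200 Hz
augmented = augment_dynamic(trace, sampling_rate=200.0,
                            initial_angle=-5.0, max_angle=8.0, period=10.0)
```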
