- Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach
- Influence of Time Pressure on Successive Visual Searches
- eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification
- Improving Reading and Eye Movement Control in Readers with Oculomotor and Visuo-Attentional Deficits
Journal Description
Journal of Eye Movement Research
The Journal of Eye Movement Research (JEMR) is an international, peer-reviewed, open access journal covering all aspects of oculomotor functioning, including eye-recording methodology, neurophysiological and cognitive models, attention, and reading, together with applications in neurology, ergonomics, media research, and other areas. It is published bimonthly online by MDPI (from Volume 18, Issue 1, 2025).
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, PMC, and other databases.
- Journal Rank: JCR - Q1 (Ophthalmology) / CiteScore - Q2 (Ophthalmology)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 39.9 days after submission; the time from acceptance to publication is 5.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
Impact Factor: 2.8 (2024); 5-Year Impact Factor: 2.8 (2024)
Latest Articles
Active Gaze Guidance and Pupil Dilation Effects Through Subject Engagement in Ophthalmic Imaging
J. Eye Mov. Res. 2025, 18(5), 45; https://doi.org/10.3390/jemr18050045 - 19 Sep 2025
Abstract
Modern ophthalmic imaging methods such as optical coherence tomography (OCT) typically require expensive scanner components to direct the light beam across the retina while the patient’s gaze remains fixed. This proof-of-concept experiment investigates whether the patient’s natural eye movements can replace mechanical scanning by guiding the gaze along predefined patterns. An infrared fundus camera setup was used with nine healthy adults (aged 20–57) who completed tasks comparing passive viewing of moving patterns to actively tracing them by drawing using a touchpad interface. The active task involved participant-controlled target movement with real-time color feedback for accurate pattern tracing. Results showed that active tracing significantly increased pupil diameter by an average of 17.8% (range 8.9–43.6%; p < 0.001) and reduced blink frequency compared to passive viewing. More complex patterns led to greater pupil dilation, confirming the link between cognitive load and physiological response. These findings demonstrate that patient-driven gaze guidance can stabilize gaze, reduce blinking, and naturally dilate the pupil. These conditions might enhance the quality of scannerless OCT or other imaging techniques benefiting from guided gaze and larger pupils. There could be benefits for children and people with compliance issues, although further research is needed to consider cognitive load.
(This article belongs to the Special Issue Eye Tracking and Visualization)
Open Access Article
Processing Written Language in Video Games: An Eye-Tracking Study on Subtitled Instructions
by
Haiting Lan, Sixin Liao, Jan-Louis Kruger and Michael J. Richardson
J. Eye Mov. Res. 2025, 18(5), 44; https://doi.org/10.3390/jemr18050044 - 17 Sep 2025
Abstract
Written language is a common component among the multimodal representations that help players construct meanings and guide actions in video games. However, how players process texts in video games remains underexplored. To address this, the current exploratory eye-tracking study examines how players processed subtitled instructions and resultant game performance. Sixty-four participants were recruited to play a video game set in a foggy desert, where they were guided by subtitled instructions to locate, corral, and contain robot agents (targets). These instructions were manipulated into three modalities: visual-only (with subtitled instructions only), auditory-only (with spoken instructions), and visual–auditory (with both subtitled and spoken instructions). The instructions were addressed to participants (as relevant subtitles) or their AI teammates (as irrelevant subtitles). Subtitle-level results of eye movements showed that participants primarily focused on the relevant subtitles, as evidenced by more fixations and higher dwell time percentages. Moreover, the word-level results indicated that participants showed lower skipping rates, more fixations, and higher dwell time percentages on words loaded with immediate action-related information, especially in the absence of audio. No significant differences were found in player performance across conditions. The findings of this study contribute to a better understanding of subtitle processing in video games and, more broadly, text processing in multimedia contexts. Implications for future research on digital literacy and computer-mediated text processing are discussed.
Open Access Article
Entropy as a Lens: Exploring Visual Behavior Patterns in Architects
by
Renate Delucchi Danhier, Barbara Mertins, Holger Mertins and Gerold Schneider
J. Eye Mov. Res. 2025, 18(5), 43; https://doi.org/10.3390/jemr18050043 - 16 Sep 2025
Abstract
This study examines how architectural expertise shapes visual perception, extending the “Seeing for Speaking” hypothesis into a non-linguistic domain. Specifically, it investigates whether architectural training influences unconscious visual processing of architectural content. Using eye-tracking, 48 architects and 48 laypeople freely viewed 15 still images of built, mixed, and natural environments. Visual behavior was analyzed using Shannon’s entropy scores based on dwell times within 16 × 16 grids during the first six seconds of viewing. Results revealed distinct visual attention patterns between groups. Architects showed lower entropy, indicating more focused and systematic gaze behavior, and their attention was consistently drawn to built structures. In contrast, laypeople exhibited more variable and less organized scanning patterns, with greater individual differences. Moreover, architects demonstrated higher intra-group similarity in their gaze behavior, suggesting a shared attentional schema shaped by professional training. These findings highlight that domain-specific expertise deeply influences perceptual processing, resulting in systematic and efficient attention allocation. Entropy-based metrics proved effective in capturing these differences, offering a robust tool for quantifying expert vs. non-expert visual strategies in architectural cognition. The visual patterns exhibited by architects are interpreted to reflect a “Grammar of Space”, i.e., a structured way of visually parsing spatial elements.
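The entropy metric described above reduces to Shannon entropy over the distribution of dwell time across grid cells: the more concentrated the gaze, the lower the entropy. A minimal sketch of that computation (the 16 × 16 grid matches the study's setup; the dwell-time values themselves are illustrative, not the study's data):

```python
import numpy as np

def gaze_entropy(dwell_times):
    """Shannon entropy (bits) of a grid of dwell times.

    dwell_times: 2-D array (e.g. 16x16) of dwell time per grid cell.
    Lower entropy = gaze concentrated in few cells (focused scanning);
    higher entropy = gaze spread evenly across many cells.
    """
    p = np.asarray(dwell_times, dtype=float).ravel()
    p = p / p.sum()          # normalize dwell times to a probability distribution
    p = p[p > 0]             # drop empty cells (0 * log 0 is taken as 0)
    return float(-np.sum(p * np.log2(p)))

# Illustrative comparison: focused vs. dispersed viewing on a 16x16 grid
focused = np.zeros((16, 16))
focused[7, 7] = 5.0
focused[7, 8] = 1.0
dispersed = np.ones((16, 16))   # uniform dwell -> maximal entropy (8 bits)
assert gaze_entropy(focused) < gaze_entropy(dispersed)
```

On a uniform 16 × 16 grid the entropy is log2(256) = 8 bits, the theoretical maximum; the architects' lower scores in the study indicate gaze distributions far from uniform.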
Open Access Article
How Visual Style Shapes Tourism Advertising Effectiveness: Eye-Tracking Insights into Traditional and Modern Chinese Ink Paintings
by
Fulong Liu, Xiheng Shao, Zhengwei Tao, Nurul Hanim Md Romainoor and Mohammad Khizal Mohamed Saat
J. Eye Mov. Res. 2025, 18(5), 42; https://doi.org/10.3390/jemr18050042 - 12 Sep 2025
Abstract
This study investigates how traditional versus modern Chinese ink painting styles in tourism advertisements affect viewers’ visual attention, aesthetic evaluations, and tourism intentions. Using eye-tracking experiments combined with surveys and interviews, the researchers conducted a mixed-design experiment with 80 Chinese college students. Results indicate that traditional ink-style advertisements attracted longer total fixation durations, higher aesthetic evaluations, and stronger cultural resonance in natural landscape contexts, while modern ink-style advertisements captured initial attention more quickly and performed better aesthetically in urban settings. Qualitative analyses further revealed cultural familiarity and aesthetic resonance underpinning preferences for traditional style, whereas modern style mainly attracted attention through novelty and creativity. These findings expand Cultural Schema Theory and the aesthetic processing model within advertising research, suggesting practical strategies for tourism advertising to match visual styles appropriately with destination types and audience characteristics to enhance promotional effectiveness.
Open Access Article
Eye Movement Impairment in Women Undergoing Chemotherapy
by
Milena Edite Casé de Oliveira, José Marcos Nascimento de Sousa, Gerlane Da Silva Vieira Torres, Ruanna Priscila Silva de Brito, Nathalia dos Santos Negreiros, Bianca da Nóbrega Tomaz Trombetta, Kedma Anne Lima Gomes Alexandrino, Waleska Fernanda Souto Nóbrega, Letícia Lorena Soares Silva Polimeni, Catarina Cavalcanti Braga, Cristiane Maria Silva de Souza Lima, Thiago P. Fernandes and Natanael Antonio dos Santos
J. Eye Mov. Res. 2025, 18(5), 41; https://doi.org/10.3390/jemr18050041 - 11 Sep 2025
Abstract
The assessment of visual attention is important in visual and cognitive neuroscience, providing objective measures for researchers and clinicians. This study investigated the effects of chemotherapy on eye movements in women with breast cancer. Twelve women with breast cancer and twelve healthy controls aged between 33 and 59 years completed a visual search task, identifying an Arabic number among 79 alphabetic letters. Test duration, fixation duration, total fixation duration, and total visit duration were recorded. Compared to healthy controls, women with breast cancer exhibited significantly longer mean fixation duration [t = 4.54, p < 0.001]; mean total fixation duration [t = 2.41, p < 0.02]; mean total visitation duration [t = 2.05, p < 0.05]; and total test time [t = 2.32, p < 0.03]. Additionally, positive correlations were observed between the number of chemotherapy cycles and the eye-tracking parameters. These results suggest the possibility of slower information processing in women experiencing acute effects of chemotherapy. However, further studies are needed to clarify this relationship.
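The bracketed statistics above are independent-samples t-tests between the two groups. As a rough illustration of that comparison, here is a hand-rolled Welch t statistic (the unequal-variances variant); the fixation durations below are invented for the example, not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances, as used when comparing e.g. mean fixation
    durations between a patient group and a control group."""
    va, vb = variance(a), variance(b)        # sample variances (n-1 denominator)
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical mean fixation durations (ms); values are illustrative only
patients = [310, 295, 330, 342, 305, 318]
controls = [252, 240, 265, 258, 247, 250]
t = welch_t(patients, controls)
assert t > 0  # patients fixate longer on average in this toy example
```

In practice one would use a library routine (e.g. SciPy's `ttest_ind` with `equal_var=False`) to also obtain degrees of freedom and a p-value.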
Open Access Article
Interpretable Quantification of Scene-Induced Driver Visual Load: Linking Eye-Tracking Behavior to Road Scene Features via SHAP Analysis
by
Jie Ni, Yifu Shao, Yiwen Guo and Yongqi Gu
J. Eye Mov. Res. 2025, 18(5), 40; https://doi.org/10.3390/jemr18050040 - 9 Sep 2025
Abstract
Road traffic accidents remain a major global public health concern, where complex urban driving environments significantly elevate drivers’ visual load and accident risks. Unlike existing research that adopts a macro perspective by considering multiple factors such as the driver, vehicle, and road, this study focuses on the driver’s visual load, a key safety factor, and its direct source—the driver’s visual environment. We have developed an interpretable framework combining computer vision and machine learning to quantify how road scene features influence oculomotor behavior and scene-induced visual load, establishing a complete and interpretable link between scene features, eye movement behavior, and visual load. Using the DR(eye)VE dataset, visual attention demand is established through occlusion experiments and confirmed to correlate with eye-tracking metrics. K-means clustering is applied to classify visual load levels based on discriminative oculomotor features, while semantic segmentation extracts quantifiable road scene features such as the Green Visibility Index, Sky Visibility Index, and Street Canyon Enclosure. Among multiple machine learning models (Random Forest, AdaBoost, XGBoost, and SVM), XGBoost demonstrates optimal performance in visual load detection. SHAP analysis reveals critical thresholds: the probability of high visual load increases when pole density exceeds 0.08%, signage surpasses 0.55%, or buildings account for more than 14%; while blink duration/rate decrease when street enclosure exceeds 38% or road congestion goes beyond 25%, indicating elevated visual load. The proposed framework provides actionable insights for urban design and driver assistance systems, advancing traffic safety through data-driven optimization of road environments.
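One step in the pipeline above, clustering oculomotor features into visual-load levels with k-means, can be sketched in miniature. The snippet below is a deliberately simplified 1-D version on a single hypothetical feature (mean blink duration); the study would use multiple features and a library implementation such as scikit-learn's `KMeans`, and the values here are invented:

```python
import random

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Minimal 1-D k-means: split scalar oculomotor feature values
    (e.g. mean blink duration per clip) into k load levels."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)          # random distinct initial centers
    for _ in range(iters):
        # Assign each value to its nearest center
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # Recompute centers as cluster means (keep old center if cluster empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical blink durations (ms): shorter blinks under higher visual load
blinks = [95, 102, 98, 110, 105, 240, 255, 232, 248, 260]
low, high = kmeans_1d(blinks, k=2)
assert low < 150 < high  # two well-separated load levels emerge
```

The resulting cluster labels would then serve as the target classes for the supervised detectors (Random Forest, XGBoost, etc.) mentioned in the abstract.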
Open Access Review
A Review of Digital Eye Strain: Binocular Vision Anomalies, Ocular Surface Changes, and the Need for Objective Assessment
by
Maria João Barata, Pedro Aguiar, Andrzej Grzybowski, André Moreira-Rosário and Carla Lança
J. Eye Mov. Res. 2025, 18(5), 39; https://doi.org/10.3390/jemr18050039 - 5 Sep 2025
Abstract
(1) Background: This study investigates the impact of digital device usage on the visual system, with a focus on binocular vision. It also highlights the importance of objective assessment in accurately diagnosing and guiding therapeutic approaches for Digital Eye Strain Syndrome (DESS). (2) Methods: A comprehensive narrative review was conducted to synthesize existing evidence. The methodological quality of observational and case–control studies was assessed using the Newcastle–Ottawa scale, while randomized controlled trials (RCTs) were evaluated using the Cochrane risk-of-bias (RoB 2) tool. (3) Results: Fifteen articles were included in this review, with a predominant focus on binocular vision anomalies, particularly accommodative and vergence dysfunctions, as well as ocular surface anomalies related to DESS. Clinical assessments relied primarily on symptom-based questionnaires, which represent a significant limitation. The included studies were largely observational, with a lack of longitudinal studies and RCTs. In contrast, research in dry eye disease has been more comprehensive, with multiple RCTs already conducted. It is therefore essential to develop validated objective metrics that support accurate clinical diagnosis and guide evidence-based interventions. (4) Conclusions: It remains unclear whether changes in binocular vision are a cause or consequence of DESS. However, prolonged screen time can exacerbate pre-existing binocular vision anomalies due to continuous strain on convergence and accommodation, leading to symptoms. Future research should prioritize prospective longitudinal studies and well-designed RCTs that integrate objective clinical measures to elucidate causal relationships and improve diagnostic and therapeutic frameworks.
Open Access Article
Reading Assessment and Eye Movement Analysis in Bilateral Central Scotoma Due to Age-Related Macular Degeneration
by
Polona Zaletel Benda, Grega Jakus, Jaka Sodnik, Nadica Miljković, Ilija Tanasković, Smilja Stokanović, Andrej Meglič, Nataša Vidovič Valentinčič and Polona Jaki Mekjavić
J. Eye Mov. Res. 2025, 18(5), 38; https://doi.org/10.3390/jemr18050038 - 30 Aug 2025
Abstract
This study investigates reading performance and eye movements in individuals with eccentric fixation due to age-related macular degeneration (AMD). Overall, 17 individuals with bilateral AMD (7 males; mean age 77.47 ± 5.96 years) and 17 controls (10 males; mean age 72.18 ± 5.98 years) were assessed for reading visual acuity (VA), reading speed (Minnesota low vision reading chart in Slovene, MNREAD-SI), and near contrast sensitivity (Pelli-Robson). Microperimetry (NIDEK MP-3) was used to evaluate preferential retinal locus (PRL) location and fixation stability. Eye movements were recorded with Tobii Pro Glasses 2 and analyzed for reading duration, saccade amplitude, peak velocity, number of saccades, saccade duration, and fixation duration. Individuals with AMD exhibited significantly reduced reading indices (worse reading VA (p < 0.001), slower reading (p < 0.001), and lower near contrast sensitivity (p < 0.001)). Eye movement analysis revealed prolonged reading duration, longer fixation duration, and an increased number of saccades per paragraph in individuals with AMD. The number of saccades per paragraph was significantly correlated with all measured reading indices. These findings provide insights into reading adaptations in AMD. Simultaneously, the proposed approach in analyzing eye movements puts forward eye trackers as a prospective diagnostic tool in ophthalmology.
Open Access Article
Spatial Guidance Overrides Dynamic Saliency in VR: An Eye-Tracking Study on Gestalt Grouping Mechanisms and Visual Attention Patterns
by
Qiaoling Zou, Wanyu Zheng, Xinyan Jiang and Dongning Li
J. Eye Mov. Res. 2025, 18(5), 37; https://doi.org/10.3390/jemr18050037 - 25 Aug 2025
Abstract
(1) Background: Virtual Reality (VR) films challenge traditional visual cognition by offering novel perceptual experiences. This study investigates the applicability of Gestalt grouping principles in dynamic VR scenes, the influence of VR environments on grouping efficiency, and the relationship between viewer experience and grouping effects. (2) Methods: Eye-tracking experiments were conducted with 42 participants using the HTC Vive Pro Eye and Tobii Pro Lab. Participants watched a non-narrative VR film with fixed camera positions to eliminate narrative and auditory confounds. Eye-tracking metrics were analyzed using SPSS version 29.0.1, and data were visualized through heat maps and gaze trajectory plots. (3) Results: Viewers tended to focus on spatial nodes and continuous structures. Initial fixations were anchored near the body but shifted rapidly thereafter. Heat maps revealed a consistent concentration of fixations on the dock area. (4) Conclusions: VR reshapes visual organization, where proximity, continuity, and closure outweigh traditional saliency. Dynamic elements draw attention only when linked to user goals. Designers should prioritize spatial logic, using functional nodes as cognitive anchors and continuous paths as embodied guides. Future work should test these mechanisms in narrative VR and explore neural correlates via fNIRS or EEG.
(This article belongs to the Special Issue Eye Tracking and Visualization)
Open Access Article
Multimodal Assessment of Therapeutic Alliance: A Study Using Wearable Technology
by
Mikael Rubin, Robert Hickson, Caitlyn Suen and Shreya Vaishnav
J. Eye Mov. Res. 2025, 18(4), 36; https://doi.org/10.3390/jemr18040036 - 12 Aug 2025
Abstract
This empirical pilot study explored the use of wearable eye-tracking technology to gain objective insights into interpersonal interactions, particularly in healthcare provider training. Traditional methods of understanding these interactions rely on subjective observations, but wearable tech offers a more precise, multimodal approach. This multidisciplinary study integrated counseling perspectives on therapeutic alliance with an empirically motivated wearable framework informed by prior research in clinical psychology. The aims of the study were to describe the complex data that can be achieved with wearable technology and to test our primary hypothesis that the therapeutic alliance in clinical training interactions is associated with certain behaviors consistent with stronger interpersonal engagement. One key finding was that a single multimodal feature predicted discrepancies in client versus therapist working alliance ratings (b = −4.29, 95% CI [−8.12, −0.38]), suggesting clients may have perceived highly structured interactions as less personal than therapists did. Multimodal features were more strongly associated with therapist-rated working alliance, whereas linguistic analysis better captured client-rated working alliance. The preliminary findings support the utility of multimodal approaches to capture clinical interactions. This technology provides valuable context for developing actionable insights without burdening instructors or learners. Findings from this study will motivate data-driven methods for providing actionable feedback to clinical trainees.
Open Access Article
Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach
by
Paweł Cybulski
J. Eye Mov. Res. 2025, 18(4), 35; https://doi.org/10.3390/jemr18040035 - 7 Aug 2025
Abstract
Visual search is a core component of map reading, influenced by both cartographic design and human perceptual processes. This study investigates whether the location of a target cartographic symbol—central or peripheral—can be predicted using eye-tracking data and machine learning techniques. Two datasets were analyzed, each derived from separate studies involving visual search tasks with varying map characteristics. A comprehensive set of eye movement features, including fixation duration, saccade amplitude, and gaze dispersion, was extracted and standardized. Feature selection and polynomial interaction terms were applied to enhance model performance. Twelve supervised classification algorithms were tested, including Random Forest, Gradient Boosting, and Support Vector Machines. The models were evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. Results show that models trained on the first dataset achieved higher accuracy and class separation, with AdaBoost and Gradient Boosting performing best (accuracy = 0.822; ROC-AUC > 0.86). In contrast, the second dataset presented greater classification challenges, despite high recall in some models. Feature importance analysis revealed that fixation standard deviation as a proxy for gaze dispersion, particularly along the vertical axis, was the most predictive metric. These findings suggest that gaze behavior can reliably indicate the spatial focus of visual search, providing valuable insight for the development of adaptive, gaze-aware cartographic interfaces.
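The ROC-AUC figures reported above measure how well a classifier separates the two target-location classes. The metric itself has a simple rank interpretation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A small sketch of that computation (the classifier scores below are hypothetical, not the study's results):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC-AUC via its rank interpretation: the fraction of
    (positive, negative) pairs where the positive example receives
    the higher score (ties count as half a win)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores: "peripheral" treated as the positive class
peripheral = [0.91, 0.85, 0.77, 0.66, 0.58]
central = [0.45, 0.52, 0.60, 0.33, 0.41]
auc = roc_auc(peripheral, central)
assert auc > 0.86  # well-separated classes, comparable to the stronger dataset
```

An AUC of 0.5 corresponds to chance-level separation, which is why values above 0.86 indicate strongly separable gaze signatures for central versus peripheral targets.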
Open Access Article
Digital Eye Strain Monitoring for One-Hour Smartphone Engagement Through Eye Activity Measurement System
by
Bhanu Priya Dandumahanti, Prithvi Krishna Chittoor and Murali Subramaniyam
J. Eye Mov. Res. 2025, 18(4), 34; https://doi.org/10.3390/jemr18040034 - 5 Aug 2025
Abstract
Smartphones have revolutionized our daily lives, becoming portable pocket computers with easy internet access. India, which has the world's second-largest population of smartphone and internet users, experienced a significant rise in smartphone usage between 2013 and 2024. Prolonged smartphone use, exceeding 20 min at a time, can lead to physical and mental health issues, including psychophysiological disorders. Extended exposure to the blue light of digital devices causes digital eye strain, sleep disorders, and vision-related problems. This research examines the impact of 1 h of smartphone usage on visual fatigue among young Indian adults. To address this, a portable, low-cost visual activity measurement system was developed that measures blink rate, inter-blink interval, and pupil diameter. Eye activity was recorded during 1 h of smartphone usage spanning e-book reading, video watching, and social-media reels (short videos). Social-media reels involve frequent changes in screen brightness and intensity, which affected pupil dilation and reduced blink rate. This reduction in blink rate, together with the increase in inter-blink interval and pupil dilation, could lead to visual fatigue.
Open Access Article
Visual Word Segmentation Cues in Tibetan Reading: Comparing Dictionary-Based and Psychological Word Segmentation
by
Dingyi Niu, Zijian Xie, Jiaqi Liu, Chen Wang and Ze Zhang
J. Eye Mov. Res. 2025, 18(4), 33; https://doi.org/10.3390/jemr18040033 - 4 Aug 2025
Abstract
This study utilized eye-tracking technology to explore the role of visual word segmentation cues in Tibetan reading, with a particular focus on the effects of dictionary-based and psychological word segmentation on reading and lexical recognition. The experiment employed a 2 × 3 design, comparing six conditions: normal sentences, dictionary word segmentation (spaces), psychological word segmentation (spaces), normal sentences (green), dictionary word segmentation (color alternation), and psychological word segmentation (color alternation). The results revealed that word segmentation with spaces (whether dictionary-based or psychological) significantly improved reading efficiency and lexical recognition, whereas color alternation showed no substantial facilitative effect. Psychological and dictionary word segmentation performed similarly across most metrics, though psychological segmentation slightly outperformed in specific indicators (e.g., sentence reading time and number of fixations), and dictionary word segmentation slightly outperformed in other indicators (e.g., average saccade amplitude and number of regressions). The study further suggests that Tibetan reading may involve cognitive processes at different levels, and the basic units of different levels of cognitive processes may not be consistent. These findings hold significant implications for understanding the cognitive processes involved in Tibetan reading and for optimizing the presentation of Tibetan text.
Open Access Article
Eye Movements During Pareidolia: Exploring Biomarkers for Thinking and Perception Problems on the Rorschach
by
Mellisa Boyle, Barry Dauphin, Harold H. Greene, Mindee Juve and Ellen Day-Suba
J. Eye Mov. Res. 2025, 18(4), 32; https://doi.org/10.3390/jemr18040032 - 22 Jul 2025
Abstract
Eye movements (EMs) offer valuable insights into cognitive and perceptual processes, serving as potential biomarkers for disordered thinking. This study explores the relationship between EM indices and perception and thinking problems in the Rorschach Performance Assessment System (R-PAS). Sixty non-clinical participants underwent eye-tracking while completing the Rorschach test, focusing on variables from the Perception and Thinking Problems Domain (e.g., WSumCog, SevCog, FQo%). The results reveal that increased cognitive disturbances were associated with greater exploratory activity but reduced processing efficiency. Regression analyses highlighted the strong predictive role of cognitive variables (e.g., WSumCog) over perceptual ones (e.g., FQo%). Minimal overlap was observed between performance-based (R-PAS) and self-report measures (BSI), underscoring the need for multi-method approaches. The findings suggest that EM patterns could serve as biomarkers for early detection and intervention, offering a foundation for future research on psychotic-spectrum processes in clinical and non-clinical populations.
Open Access Article
Influence of Time Pressure on Successive Visual Searches
by
Alejandro J. Cambronero-Delgadillo, Christof Körner, Iain D. Gilchrist and Margit Höfler
J. Eye Mov. Res. 2025, 18(4), 31; https://doi.org/10.3390/jemr18040031 - 17 Jul 2025
Abstract
In the current eye-tracking experiment, we explored the effects of time pressure on visual search performance and oculomotor behavior. Participants performed two consecutive time-pressured searches for a T-shaped target among L-shaped distractors in two separate displays of fifteen items, with the option to self-interrupt the first search (Search 1) to proceed to the second (Search 2). Our results showed that participants maintained high search accuracy during Search 1 across all conditions, but performance noticeably declined during Search 2 with increasing time pressure. Time pressure also led to decreased numbers of fixations and faster response times overall. When both targets were acquired, fixation durations were longer in Search 2 than in Search 1, while saccade amplitudes were shorter in Search 2. Our findings suggest that time pressure leads to the first target being prioritized when targets possess equal value, emphasizing the challenges of optimizing performance in time-sensitive tasks.
Open Access Article
Eye Movement Patterns as Indicators of Text Complexity in Arabic: A Comparative Analysis of Classical and Modern Standard Arabic
by
Hend Al-Khalifa
J. Eye Mov. Res. 2025, 18(4), 30; https://doi.org/10.3390/jemr18040030 - 16 Jul 2025
Abstract
This study investigates eye movement patterns as indicators of text complexity in Arabic, focusing on the comparative analysis of Classical Arabic (CA) and Modern Standard Arabic (MSA) text. Using the AraEyebility corpus, which contains eye-tracking data from readers of both CA and MSA text, we examined differences in fixation patterns, regression rates, and overall reading behavior between these two forms of Arabic. Our analyses revealed significant differences in eye movement metrics between CA and MSA text, with CA text consistently eliciting more fixations, longer fixation durations, and more frequent revisits. Multivariate analysis confirmed that language type has a significant combined effect on eye movement patterns. Additionally, we identified different relationships between text features and eye movements for CA versus MSA text, with sentence-level features emerging as significant predictors across both language types. Notably, we observed an interaction between language type and readability level, with readers showing less sensitivity to readability variations in CA text compared to MSA text. These findings contribute to our understanding of how historical language evolution affects reading behavior and have practical implications for Arabic language education, publishing, and assessment. The study demonstrates the value of eye movement analysis for understanding text complexity in Arabic and highlights the importance of considering language-specific features when studying reading processes.
Open Access Article
Through the Eyes of the Viewer: The Cognitive Load of LLM-Generated vs. Professional Arabic Subtitles
by
Hussein Abu-Rayyash and Isabel Lacruz
J. Eye Mov. Res. 2025, 18(4), 29; https://doi.org/10.3390/jemr18040029 - 14 Jul 2025
Cited by 1
Abstract
As streaming platforms adopt artificial intelligence (AI)-powered subtitle systems to satisfy global demand for instant localization, the cognitive impact of these automated translations on viewers remains largely unexplored. This study used a web-based eye-tracking protocol to compare the cognitive load that GPT-4o-generated Arabic subtitles impose with that of professional human translations among 82 native Arabic speakers who viewed a 10 min episode (“Syria”) from the BBC comedy drama series State of the Union. Participants were randomly assigned to view the same episode with either professionally produced Arabic subtitles (Amazon Prime’s human translations) or machine-generated GPT-4o Arabic subtitles. In a between-subjects design, with English proficiency entered as a moderator, we collected fixation count, mean fixation duration, gaze distribution, and attention concentration (K-coefficient) as indices of cognitive processing. GPT-4o subtitles raised cognitive load on every metric; viewers produced 48% more fixations in the subtitle area, recorded 56% longer fixation durations, and spent 81.5% more time reading the automated subtitles than the professional subtitles. The subtitle area K-coefficient tripled (0.10 to 0.30), a shift from ambient scanning to focal processing. Viewers with advanced English proficiency showed the largest disruptions, which indicates that higher linguistic competence increases sensitivity to subtle translation shortcomings. These results challenge claims that large language models (LLMs) lighten viewer burden; despite fluent surface quality, GPT-4o subtitles demand far more cognitive resources than expert human subtitles and therefore reinforce the need for human oversight in audiovisual translation (AVT) and media accessibility.
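The ambient/focal K-coefficient reported above can be illustrated with a short sketch. This is not the study's analysis code: the pairing of each fixation duration with the amplitude of its following saccade, z-scored over the whole recording, follows the standard Krejtz-style definition, while the function names and the synthetic data are invented for illustration. Positive K indicates focal processing (long fixations, short saccades); negative K indicates ambient scanning.

```python
from statistics import mean, stdev

def k_series(durations, amplitudes):
    """Per-fixation K_i: z-scored duration of fixation i minus z-scored
    amplitude of the saccade following it, standardized over the whole trace."""
    mu_d, sd_d = mean(durations), stdev(durations)
    mu_a, sd_a = mean(amplitudes), stdev(amplitudes)
    return [(d - mu_d) / sd_d - (a - mu_a) / sd_a
            for d, a in zip(durations, amplitudes)]

def k_for_aoi(durations, amplitudes, in_aoi):
    """Mean K_i over fixations falling inside an area of interest,
    e.g. the subtitle region."""
    ks = k_series(durations, amplitudes)
    return mean(k for k, hit in zip(ks, in_aoi) if hit)

if __name__ == "__main__":
    # Synthetic trace: long fixations with short saccades inside the AOI,
    # short fixations with long saccades elsewhere.
    durations = [300, 320, 310, 100, 120, 110]   # ms
    amplitudes = [1, 2, 1, 8, 7, 9]              # degrees
    in_aoi = [True, True, True, False, False, False]
    print(k_for_aoi(durations, amplitudes, in_aoi))  # positive: focal in AOI
```

Because z-scores average to zero over the full recording, K is only informative when averaged within a subset (a time window or an AOI), which is why a subtitle-area K of 0.30 against a baseline of 0.10 signals a shift toward focal processing there.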
Open Access Article
GMM-HMM-Based Eye Movement Classification for Efficient and Intuitive Dynamic Human–Computer Interaction Systems
by
Jiacheng Xie, Rongfeng Chen, Ziming Liu, Jiahao Zhou, Juan Hou and Zengxiang Zhou
J. Eye Mov. Res. 2025, 18(4), 28; https://doi.org/10.3390/jemr18040028 - 9 Jul 2025
Abstract
Human–computer interaction (HCI) plays a crucial role across various fields, with eye-tracking technology emerging as a key enabler for intuitive and dynamic control in assistive systems like Assistive Robotic Arms (ARAs). By precisely tracking eye movements, this technology allows for more natural user interaction. However, current systems primarily rely on the single gaze-dependent interaction method, which leads to the “Midas Touch” problem. This highlights the need for real-time eye movement classification in dynamic interactions to ensure accurate and efficient control. This paper proposes a novel Gaussian Mixture Model–Hidden Markov Model (GMM-HMM) classification algorithm aimed at overcoming the limitations of traditional methods in dynamic human–robot interactions. By incorporating sum of squared error (SSE)-based feature extraction and hierarchical training, the proposed algorithm achieves a classification accuracy of 94.39%, significantly outperforming existing approaches. Furthermore, it is integrated with a robotic arm system, enabling gaze trajectory-based dynamic path planning, which reduces the average path planning time to 2.97 milliseconds. The experimental results demonstrate the effectiveness of this approach, offering an efficient and intuitive solution for human–robot interaction in dynamic environments. This work provides a robust framework for future assistive robotic systems, improving interaction intuitiveness and efficiency in complex real-world scenarios.
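The paper's GMM-HMM pipeline (SSE-based features, hierarchical training) is not reproduced here; as a hedged illustration of the general technique, the sketch below decodes a synthetic gaze-velocity trace with a two-state hidden Markov model (fixation vs. saccade) using Gaussian emissions and Viterbi decoding. All parameters — means, standard deviations, and transition probabilities — are invented for the example.

```python
import math

STATES = ["fixation", "saccade"]
# Hypothetical emission model: one Gaussian per state over velocity (deg/s).
MEANS = {"fixation": 5.0, "saccade": 120.0}
STDS = {"fixation": 5.0, "saccade": 40.0}
# Sticky transitions: oculomotor states persist across samples.
TRANS = {"fixation": {"fixation": 0.95, "saccade": 0.05},
         "saccade": {"fixation": 0.10, "saccade": 0.90}}
START = {"fixation": 0.5, "saccade": 0.5}

def log_gauss(x, mu, sigma):
    """Log density of a 1-D Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def viterbi(velocities):
    """Most likely state sequence for a velocity trace."""
    v = [{s: math.log(START[s]) + log_gauss(velocities[0], MEANS[s], STDS[s])
          for s in STATES}]
    back = []
    for obs in velocities[1:]:
        scores, ptr = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: v[-1][p] + math.log(TRANS[p][s]))
            scores[s] = (v[-1][best_prev] + math.log(TRANS[best_prev][s])
                         + log_gauss(obs, MEANS[s], STDS[s]))
            ptr[s] = best_prev
        v.append(scores)
        back.append(ptr)
    state = max(STATES, key=lambda s: v[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

if __name__ == "__main__":
    trace = [3, 4, 6, 150, 160, 140, 5, 4]  # synthetic gaze velocities, deg/s
    print(viterbi(trace))
```

A production classifier would fit Gaussian mixtures per state from labeled data and extend the observation vector beyond velocity, but the decoding step follows the same dynamic-programming structure.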
Open Access Article
eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification
by
Michael Barz, Omair Shahzad Bhatti, Hasan Md Tusfiqur Alam, Duy Minh Ho Nguyen, Kristin Altmeyer, Sarah Malone and Daniel Sonntag
J. Eye Mov. Res. 2025, 18(4), 27; https://doi.org/10.3390/jemr18040027 - 7 Jul 2025
Abstract
Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure the perceived usability, annotations’ validity and reliability, and efficiency during a data annotation task. We asked our participants to re-annotate data from a single individual using an existing dataset (n = 48). Further, we conducted a semi-structured interview to understand how participants used the provided IML features and assessed our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating data of the remaining 47 individuals.
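eyeNotate's few-shot image classification model is not reproduced here; a common minimal formulation of few-shot classification is nearest-class-centroid matching in an embedding space, sketched below under that assumption. The AOI labels and the 2-D "embeddings" are placeholders standing in for real fixation-crop feature vectors.

```python
def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fit_prototypes(support):
    """support: {aoi_label: [embedding, ...]} -> one prototype per AOI.
    Corrective feedback can be folded in by re-fitting with new examples."""
    return {label: centroid(vs) for label, vs in support.items()}

def classify(prototypes, embedding):
    """Suggest the AOI whose prototype is nearest to the fixation embedding."""
    return min(prototypes, key=lambda lbl: sq_dist(prototypes[lbl], embedding))

if __name__ == "__main__":
    support = {"whiteboard": [[0.0, 0.0], [0.0, 1.0]],
               "desk": [[5.0, 5.0], [6.0, 5.0]]}
    protos = fit_prototypes(support)
    print(classify(protos, [0.2, 0.4]))
```

This captures why only a few labeled fixations per AOI are needed: each corrected annotation simply shifts the class centroid, so suggestions improve as the annotator works.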
Open Access Article
Efficiency Analysis of Disruptive Color in Military Camouflage Patterns Based on Eye Movement Data
by
Xin Yang, Su Yan, Bentian Hao, Weidong Xu and Haibao Yu
J. Eye Mov. Res. 2025, 18(4), 26; https://doi.org/10.3390/jemr18040026 - 2 Jul 2025
Abstract
Disruptive color on animals’ bodies can reduce the risk of being caught. This study explores the camouflaging effect of disruptive color when applied to military targets. Disruptive and non-disruptive color patterns were placed on the target surface to form simulation materials. Then, the simulation target was set in woodland-, grassland-, and desert-type background images. The detectability of the target in each background was measured by collecting eye movement indicators while observers viewed the background targets. The influence of background type (local and global), camouflage pattern type, and target viewing angle on the disruptive-color camouflage pattern was investigated. Eye movement observation experiments were designed to statistically analyze first discovery time, discovery frequency, and first-scan amplitude in the target area. The experimental results show that the first discovery time of mixed disruptive-color targets in a forest background was significantly higher than that of non-mixed disruptive-color targets (t = 2.54, p = 0.039), and the click frequency was reduced by 15% (p < 0.05), indicating that mixed disruptive color has better camouflage effectiveness in complex backgrounds. In addition, the camouflage effect of mixed disruptive colors on large-scale targets (viewing angle ≥ 30°) is significantly improved (F = 10.113, p = 0.01), providing theoretical support for close-range reconnaissance camouflage design.
Special Issues
Special Issue in JEMR: Eye Tracking and Visualization
Guest Editor: Michael Burch
Deadline: 20 November 2025
Special Issue in JEMR: New Horizons and Recent Advances in Eye-Tracking Technology
Guest Editor: Lee Friedman
Deadline: 20 December 2025
Special Issue in JEMR: Eye Movements in Reading and Related Difficulties
Guest Editors: Argyro Fella, Timothy C. Papadopoulos, Kevin B. Paterson, Daniela Zambarbieri
Deadline: 30 June 2026