- Relationship Between Ocular Motility and Motor Skills
- Computational Approaches to Apply the String Edit Algorithm to Create Accurate Visual Scan Paths
- Understanding Consumer Perception and Acceptance of AI Art Through Eye Tracking and Bidirectional Encoder Representations from Transformers-Based Sentiment Analysis
Journal Description
Journal of Eye Movement Research
The Journal of Eye Movement Research (JEMR) is an international, peer-reviewed, open access journal covering all aspects of oculomotor functioning, including the methodology of eye movement recording, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas. It has been published bimonthly online by MDPI since Volume 18, Issue 1 (2025).
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, PMC, and other databases.
- Journal Rank: CiteScore - Q2 (Ophthalmology)
- Rapid Publication: first decisions in 18 days; acceptance to publication in 4 days (median values for MDPI journals in the second half of 2024).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
Impact Factor: 1.3 (2023); 5-Year Impact Factor: 2.0 (2023)
Imprint Information
Open Access
ISSN: 1995-8692
Latest Articles
Analyzing Gaze During Driving: Should Eye Tracking Be Used to Design Automotive Lighting Functions?
J. Eye Mov. Res. 2025, 18(2), 13; https://doi.org/10.3390/jemr18020013 - 10 Apr 2025
Abstract
In this work, an experiment was designed in which a defined route consisting of country roads, highways, and urban roads was driven by 20 subjects during the day and at night. The test vehicle was equipped with GPS and a camera, and the subjects wore head-mounted eye-tracking glasses to record gaze. Gaze distributions for country roads, highways, urban roads, and specific urban roads were then calculated and compared. The day/night comparisons showed that the horizontal fixation distribution of the subjects was wider during the day than at night over the whole test distance. When the distributions were divided into urban roads, country roads, and motorways, the difference was also seen in each road environment. For the vertical distribution, no clear differences between day and night were seen for country roads or urban roads. On the highway, the vertical dispersion was significantly lower, so the gaze was more focused. On highways and urban roads there was a tendency for the gaze to be lowered. The differentiation between a residential road and a main road in the city made it clear that gaze behavior differs significantly depending on the urban area. For example, the residential road led to broader gaze behavior, as the sides of the street were scanned much more often in order to detect, at an early stage, potential hazards lurking between parked cars. This paper highlights the contradictory results of eye-tracking research and shows that it is not advisable to define a holy grail of gaze distribution for all environments. Gaze is highly situational and context-dependent, and generalized gaze distributions should not be used to design lighting functions. The research highlights the importance of an adaptive light distribution that adjusts to the traffic situation and the environment, always providing good visibility for the driver and allowing natural gaze behavior.
Full article
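The day/night comparison in the abstract above reduces to comparing the spread of horizontal fixation positions across conditions. A minimal sketch with invented gaze samples (the variable names and values are illustrative, not the study's data):

```python
import statistics

def horizontal_dispersion(gaze_x):
    """Sample standard deviation of horizontal gaze angles (degrees) as a spread measure."""
    return statistics.stdev(gaze_x)

# Hypothetical horizontal gaze angles (degrees) for one driver, day vs. night.
day_gaze = [-12.0, -4.5, 0.0, 3.2, 8.1, 15.4, -9.7, 6.3]
night_gaze = [-3.1, -1.2, 0.4, 1.8, 2.6, -2.0, 0.9, 1.1]

# With these samples, the daytime distribution is wider, mirroring the reported finding.
wider_by_day = horizontal_dispersion(day_gaze) > horizontal_dispersion(night_gaze)
print(wider_by_day)  # → True
```

Standard deviation is only one possible dispersion measure; percentile-based widths would work the same way here.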
Open AccessArticle
Influence of Visual Coding Based on Attraction Effect on Human–Computer Interface
by
Linlin Wang, Yujie Liu, Xinyi Tang, Chengqi Xue and Haiyan Wang
J. Eye Mov. Res. 2025, 18(2), 12; https://doi.org/10.3390/jemr18020012 - 8 Apr 2025
Abstract
Decision-making is often influenced by contextual information on the human–computer interface (HCI), with the attraction effect being a common situational effect in digital nudging. To address the role of visual cognition and coding in the HCI based on the attraction effect, this research takes online websites as experimental scenarios and demonstrates how the coding modes and attributes influence the attraction effect. The results show that similarity-based attributes enhance the attraction effect, whereas difference-based attributes do not modulate its intensity, suggesting that the influence of the relationship driven by coding modes is weaker than that of coding attributes. Additionally, variations in the strength of the attraction effect are observed across different coding modes under the coding attribute of similarity, with color coding having the strongest effect, followed by size, and labels showing the weakest effect. This research analyzes the stimulating conditions of the attraction effect and provides new insights for exploring the relationship between cognition and visual characterization through the attraction effect at the HCI. Furthermore, our findings can help apply the attraction effect more effectively and assist users in making more reasonable decisions.
Full article
Open AccessArticle
OKN and Pupillary Response Modulation by Gaze and Attention Shifts
by
Kei Kanari and Moe Kikuchi
J. Eye Mov. Res. 2025, 18(2), 11; https://doi.org/10.3390/jemr18020011 - 7 Apr 2025
Abstract
Pupil responses and optokinetic nystagmus (OKN) are known to vary with the brightness and direction of motion of attended stimuli, as well as gaze position. However, whether these processes are controlled by a common mechanism remains unclear. In this study, we investigated how OKN latency relates to pupil response latency under two conditions: gaze shifts (eye movement) and attention shifts (covert attention without eye movement). As a result, while OKN showed consistent temporal changes across both gaze and attention conditions, pupillary responses exhibited distinct patterns. Moreover, the results revealed no significant correlation between pupil latency and OKN latency in either condition. These findings suggest that, although OKN and pupillary responses are influenced by similar attentional processes, their underlying mechanisms may differ.
Full article

Open AccessArticle
Eye Movement Indicator Difference Based on Binocular Color Fusion and Rivalry
by
Xinni Zhang, Mengshi Dai, Feiyan Cheng, Lijun Yun and Zaiqing Chen
J. Eye Mov. Res. 2025, 18(2), 10; https://doi.org/10.3390/jemr18020010 - 5 Apr 2025
Abstract
Color fusion and rivalry are two key information integration mechanisms in binocular vision, representing the visual system’s processing patterns for consistent and conflicting inputs, respectively. This study hypothesizes that there are quantifiable differences in eye movement indicators under states of binocular color fusion and rivalry, which can be verified through multi-paradigm eye movement experiments. The experiment recruited eighteen subjects with normal vision (nine males and nine females), employing the Gaze Stability paradigm, Straight Curve Eye Hopping paradigm, and Smoothed Eye Movement Tracking paradigm for eye movement tracking. Each paradigm included a binocular color rivalry experimental group (R-G) and two binocular color fusion control groups (R-R, G-G). Data analysis indicates significant differences in indicators such as Average Saccade Amplitude, Median Saccade Amplitude, and SD of Saccade Amplitude between binocular color fusion and rivalry states. For instance, through Z-Score normalization and cross-paradigm merged analysis, specific ranges of these indicators were identified to distinguish between the two states. When the Average Saccade Amplitude falls within −0.905 to −0.693, it indicates a state of binocular color rivalry; when it falls within 0.608 to 1.294, it reflects a state of binocular color fusion. Subsequently, ROC curve analysis confirmed the effectiveness of the experimental paradigms in analyzing the mechanisms of binocular color fusion and rivalry, with AUC values of 0.990, 0.741, and 0.967, respectively. These results reveal the potential of eye movement behaviors as biomarkers for the dynamic processing of visual conflicts. This finding provides empirical support for understanding the neural computational models of binocular vision and lays a methodological foundation for developing visual impairment assessment tools based on eye movement features.
Full article
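The normalized ranges reported in the abstract above can be read as a toy state classifier. The sketch below assumes sample-based Z-score normalization and uses only the Average Saccade Amplitude ranges quoted there; the function names and example inputs are invented for illustration:

```python
import statistics

def z_scores(values):
    """Z-score normalize a list of raw indicator values (sample standard deviation)."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

# Normalized Average Saccade Amplitude ranges reported in the abstract.
RIVALRY_RANGE = (-0.905, -0.693)
FUSION_RANGE = (0.608, 1.294)

def classify(z):
    """Label a normalized Average Saccade Amplitude; None if it falls outside both ranges."""
    if RIVALRY_RANGE[0] <= z <= RIVALRY_RANGE[1]:
        return "rivalry"
    if FUSION_RANGE[0] <= z <= FUSION_RANGE[1]:
        return "fusion"
    return None

print(classify(-0.8))  # → rivalry
print(classify(1.0))   # → fusion
```

In practice the study combined several indicators and validated the separation with ROC analysis rather than hard range cutoffs.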

Open AccessArticle
Numerosity Perception and Perceptual Load: Exploring Sex Differences Through Eye-Tracking
by
Julia Bend and Anssi Öörni
J. Eye Mov. Res. 2025, 18(2), 9; https://doi.org/10.3390/jemr18020009 - 3 Apr 2025
Abstract
This study investigates sex differences in numerosity perception and visuospatial abilities in adults using eye-tracking methodology. We report the results of a controlled dual-task experiment that assessed the participants’ visuospatial and numerosity estimation abilities. We did not observe sex differences in reaction times and accuracy. However, we found that females consistently underestimated numerosity. This underestimation correlated with higher perceptual load in females, as evidenced by shorter fixation durations and increased fixation rates. These findings suggest that perceptual load, rather than visual or spatial abilities, significantly influences numerosity estimation. Our study contributes novel insights into sex differences in both numerosity estimation and visuospatial abilities. These results provide a foundation for future research on numerosity perception across various populations and contexts, with implications for educational strategies and cognitive training programs.
Full article

Open AccessSystematic Review
Eye-Based Recognition of User Traits and States—A Systematic State-of-the-Art Review
by
Moritz Langner, Peyman Toreini and Alexander Maedche
J. Eye Mov. Res. 2025, 18(2), 8; https://doi.org/10.3390/jemr18020008 - 1 Apr 2025
Abstract
Eye-tracking technology provides high-resolution information about a user’s visual behavior and interests. Combined with advances in machine learning, it has become possible to recognize user traits and states using eye-tracking data. Despite increasing research interest, a comprehensive systematic review of eye-based recognition approaches has been lacking. This study aimed to fill this gap by systematically reviewing and synthesizing the existing literature on the machine-learning-based recognition of user traits and states using eye-tracking data following PRISMA 2020 guidelines. The inclusion criteria focused on studies that applied eye-tracking data to recognize user traits and states with machine learning or deep learning approaches. Searches were performed in the ACM Digital Library and IEEE Xplore, and the retrieved studies were assessed for risk of bias using standard methodological criteria. The data synthesis included a conceptual framework that covered the task, context, technology and data processing, and recognition targets. A total of 90 studies were included that encompassed a variety of tasks (e.g., visual, driving, learning) and contexts (e.g., computer screen, simulator, in the wild). The recognition targets included cognitive and affective states (e.g., emotions, cognitive workload) and user traits (e.g., personality, working memory). A variety of machine learning techniques, such as Support Vector Machines (SVMs), Random Forests, and deep learning models, were applied to recognize user states and traits. This review identified state-of-the-art approaches and gaps, highlighting the need for establishing best practices, larger-scale datasets, and more diverse tasks and contexts. Future research should focus on improving ecological validity, multi-modal approaches for robust user modeling, and developing gaze-adaptive systems.
Full article
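The recognition pipelines surveyed in the review above map aggregate eye-tracking features to a trait or state label. As an illustrative stand-in for the models it mentions (SVMs, Random Forests, deep networks), here is a nearest-centroid toy classifier over two gaze features; all feature values and labels are invented, not drawn from any reviewed study:

```python
import statistics

# Hypothetical training data: (mean fixation duration in ms, saccade rate per s) -> workload label.
TRAIN = {
    "low_workload": [(310, 2.1), (295, 2.4), (330, 2.0)],
    "high_workload": [(180, 3.8), (205, 3.5), (190, 4.0)],
}

def centroid(points):
    """Mean point of a list of 2-D feature vectors."""
    xs, ys = zip(*points)
    return (statistics.mean(xs), statistics.mean(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def predict(features):
    """Assign the label whose class centroid is closest (squared Euclidean distance)."""
    fx, fy = features
    return min(CENTROIDS, key=lambda lab: (CENTROIDS[lab][0] - fx) ** 2 + (CENTROIDS[lab][1] - fy) ** 2)

print(predict((300, 2.2)))  # → low_workload
```

A real pipeline would standardize the features first (the raw scales here are wildly unequal) and cross-validate, which is exactly the kind of methodological practice the review calls for.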

Open AccessArticle
How Do Stress Situations Affect Higher-Level Text Processing in L1 and L2 Readers? An Eye-Tracking Study
by
Ziqing Xia, Chun-Hsien Chen, Jo-Yu Kuo and Mingmin Zhang
J. Eye Mov. Res. 2025, 18(2), 7; https://doi.org/10.3390/jemr18020007 - 24 Mar 2025
Abstract
Existing studies have revealed that the reading comprehension ability of readers can be adversely affected by psychosocial stress. Yet, the detailed impact of stress on various stages of text processing is understudied. This study aims to explore how the higher-level text processing ability, including syntactic parsing, sentence integration, and global text processing, of first language (L1) and second language (L2) English readers is affected under stress situations. In addition, the moderating roles of trait anxiety and central executive function in the effect of stress on text processing were also examined. Twenty-two L1 readers and twenty-one L2 readers were asked to perform reading comprehension tasks under different stress situations. Eye-tracking technology was adopted to record participants’ visual behaviors while reading, and ten eye-movement measurements were computed to represent the effect of different types of text processing. The results demonstrate that stress reduced the efficiency of syntactic parsing and sentence integration in both L1 and L2 groups, but only impaired global text processing in L2 readers. Specifically, L2 readers focused more on the topic structure of the text to facilitate comprehension under stress situations. Moreover, only L1 readers’ higher-level text processing was affected by trait anxiety, while L2 readers’ processing was mainly related to their reading proficiency level. Future studies and applications are discussed. The findings advance our understanding of stress effects on different stages of higher-level text processing. They also have practical implications for developing interventions to help language learners suffering from stress disorders.
Full article

Open AccessArticle
Quantitative Assessment of Fixational Disparity Using a Binocular Eye-Tracking Technique in Children with Strabismus
by
Xiaoyi Hou, Xubo Yang, Bingjie Chen and Yongchuan Liao
J. Eye Mov. Res. 2025, 18(2), 6; https://doi.org/10.3390/jemr18020006 - 10 Mar 2025
Abstract
Fixational eye movements are important for holding the central visual field on a target for a specific period of time. In this study, we aimed to quantitatively assess fixational disparities using binocular eye tracking in children with strabismus (before and after surgical alignment) and healthy children. Fixational disparities in 117 children (4–18 years; 57 with strabismus and 60 age-similar healthy controls) were recorded under binocular viewing with corrected refractive errors. Disparities in gaze positions relative to the target location were recorded for both eyes. The main outcome measures included fixational disparities along horizontal and vertical axes in the fixation test. Children with strabismus exhibited significant (p < 0.001) fixational disparities compared to healthy children in both horizontal and vertical directions. Additionally, children with esotropia had poorer fixational function compared to those with exotropia. The occurrence of fixational disparities significantly decreased in the horizontal direction following strabismus surgery. A significant negative correlation was observed between binocular best-corrected visual acuity and fixational disparities in children with strabismus. Children with strabismus had significant fixational disparities that were observably diminished in the horizontal direction after surgical alignment. Binocular assessment of fixational disparities can provide a more comprehensive evaluation of visual function in individuals with strabismus.
Full article

Open AccessArticle
The Impact of Shape and Decoration on User Experience and Visual Attention in Anthropomorphic Robot Design
by
Tao Song
J. Eye Mov. Res. 2025, 18(2), 5; https://doi.org/10.3390/jemr18020005 - 3 Mar 2025
Abstract
This study aims to explore the effects of Shape and Decoration on user experience and visual attention in anthropomorphic robot design. Eighty undergraduate students were divided into four groups, each viewing one of four stimuli: (a) Non-hat and Non-pattern, (b) Hat and Non-pattern, (c) Non-hat and Pattern, and (d) Hat and Pattern. Eye-tracking data and subjective user experience ratings were collected. The results indicate that both Shape and Decoration have significant effects on user experience and visual attention. The Hat significantly outperformed Non-hat in the dimensions of Attractiveness and Stimulation, while the Pattern showed significant advantages in Stimulation and Novelty. Additionally, Shape and Decoration exhibited a significant interaction effect in the dimensions of Novelty and time to first fixation, suggesting that their combination provides complementary benefits in enhancing perceived novelty and initial visual appeal. Hat and Pattern attracted users’ attention earlier and prolonged fixation time, as seen in time to first fixation, first-pass total fixation duration, and second-pass total fixation duration. For time to first fixation, there was an interaction effect between Shape and Decoration. This study offers strong theoretical support for the design of anthropomorphic robots, highlighting the critical role of Shape and Decoration in user experience.
Full article

Open AccessArticle
Preschool Children with High Reading Ability Show Inversion Sensitivity to Words in Environment: An Eye-Tracking Study
by
Yaowen Li, Jing Zhao, Wangmei Chen, Shaoxue Zhang, Wenjing Zhang, Wei Wang, Limin Xu, Shifeng Li and Licheng Xue
J. Eye Mov. Res. 2025, 18(2), 4; https://doi.org/10.3390/jemr18020004 - 28 Feb 2025
Abstract
Young children are exposed to words in environmental print before they formally learn to read, and attention to these words is linked to their reading ability. Inversion sensitivity, the ability to distinguish between upright and inverted words, is a pivotal milestone in reading development. To further explore the relationship between attention to words in environmental print and early reading development, we examined whether children with varying reading abilities differed in inversion sensitivity to these words. Participants included children with low (n = 18, 8 males, mean age 5.06 years) and high (n = 19, 10 males, mean age 5.00 years) reading levels. Using an eye-tracking technique, we compared children’s attention to upright and inverted words in environmental print and ordinary words during a free-viewing task. In terms of the percentage of fixation duration and fixation count, results showed that children with high reading abilities exhibited inversion sensitivity to words in environmental print, whereas children with low reading abilities did not. Unexpectedly, in terms of first fixation latency, children with low reading abilities showed inversion sensitivity to ordinary words, while children with high reading abilities did not. These findings suggest that inversion sensitivity to words in environmental print is closely linked to early reading ability.
Full article

Open AccessArticle
Reduced Capacity for Parafoveal Processing (ReCaPP) Leads to Differences in Prediction Between First and Second Language Readers of English
by
Leigh B. Fernandez and Shanley E. M. Allen
J. Eye Mov. Res. 2025, 18(2), 3; https://doi.org/10.3390/jemr18020003 - 26 Feb 2025
Abstract
Research has shown that first (L1) and second language (L2) speakers actively make predictions about upcoming linguistic information, though L2 speakers are less efficient. While prediction mechanisms are assumed to be qualitatively the same, quantitative prediction-driven processing differences may be modulated by individual differences. We tested whether L2 proficiency and quality of lexical representation (QLR) impact the capacity of L2 readers to extract parafoveal information while reading, leading to quantitative differences in prediction. Using the same items as Slattery and Yates, we investigated the impact of predictability and length of a critical word on bottom-up parafoveal processing, measured by skipping rates, and top-down predictability processing, measured by reading times. Comparing our L2 English to their L1 English data, we found that L2 speakers skipped less and had longer gaze durations. However, both groups showed increased skipping rates and decreased gaze durations for predictable relative to unpredictable words and for shorter relative to longer words. We argue that L1 and L2 predictability mechanisms are qualitatively the same and that quantitative differences stem from L2 speakers’ Reduced Capacity for Parafoveal Processing, the ReCaPP hypothesis.
Full article

Open AccessEditorial
Publisher’s Note: A New Addition to the MDPI Portfolio—Journal of Eye Movement Research
by
Carla Aloè
J. Eye Mov. Res. 2025, 18(1), 2; https://doi.org/10.3390/jemr18010002 - 17 Jan 2025
Abstract
We are delighted to announce that the Journal of Eye Movement Research (JEMR) has joined the MDPI portfolio [...]
Full article
Open AccessEditorial
Journal of Eye Movement Research: Opening a New Chapter with MDPI
by
Rudolf Groner
J. Eye Mov. Res. 2025, 18(1), 1; https://doi.org/10.3390/jemr18010001 - 16 Jan 2025
Cited by 1
Abstract
In 1980, together with Dieter Heller, I initiated an interdisciplinary network called the “European Group of Scientists active in Eye Movement Research”, including scientists who use eye movement registration as a research tool, developing models based on oculomotor data obtained from a wide spectrum of phenomena, ranging from the neurophysiological to the perceptual and the cognitive levels [...]
Full article
Open AccessArticle
Understanding Consumer Perception and Acceptance of AI Art Through Eye Tracking and Bidirectional Encoder Representations from Transformers-Based Sentiment Analysis
by
Tao Yu, Junping Xu and Younghwan Pan
J. Eye Mov. Res. 2024, 17(5), 1-34; https://doi.org/10.16910/jemr.17.5.3 - 22 Dec 2024
Abstract
This study investigates public perception and acceptance of AI-generated art using an integrated system that merges eye-tracking methodologies with advanced bidirectional encoder representations from transformers (BERT)-based sentiment analysis. Eye-tracking methods systematically document the visual trajectories and fixation spots of consumers viewing AI-generated artworks, elucidating the inherent relationship between visual activity and perception. Thereafter, the BERT-based sentiment analysis algorithm extracts emotional responses and aesthetic assessments from numerous internet reviews, offering a robust instrument for evaluating public approval and aesthetic perception. The findings indicate that consumer perception of AI-generated art is markedly affected by visual attention behavior, whereas sentiment analysis uncovers substantial disparities in aesthetic assessments. This paper introduces enhancements to the BERT model via domain-specific pre-training and hyperparameter optimization utilizing deep Gaussian processes and dynamic Bayesian optimization, resulting in substantial increases in classification accuracy and resilience. This study thoroughly examines the underlying mechanisms of public perception and assessment of AI-generated art, assesses the potential of these techniques for practical application in art creation and evaluation, and offers a novel perspective and scientific foundation for future research and application of AI art.
Full article
Open AccessProceeding Paper
Programme and Abstracts ECEM 2024
by
Ronan Reilly and Ralph Radach
J. Eye Mov. Res. 2024, 17(6), 1-144; https://doi.org/10.16910/jemr.17.6.1 - 20 Dec 2024
Abstract
Conference chairs: Ralph Radach (Programme), General and Biological Psychology, University of Wuppertal; Ronan Reilly (Local organisation), Department of Computer Science, Maynooth University [...]
Full article
Open AccessArticle
Impact of Leading Line Composition on Visual Cognition: An Eye-Tracking Study
by
Hsien-Chih Chuang, Han-Yi Tseng and Chia-Yun Chiang
J. Eye Mov. Res. 2024, 17(5), 1-17; https://doi.org/10.16910/jemr.17.5.2 - 13 Dec 2024
Abstract
Leading lines, a fundamental composition technique in photography, are crucial to guiding the viewer’s visual attention. Leading line composition is an effective visual strategy for influencing viewers’ cognitive processes. However, in-depth research on the impact of leading line composition on cognitive psychology is lacking. This study investigated the cognitive effects of leading line composition on perception and behavior. The eye movement behaviors of 34 participants while they viewed photographic works with leading lines were monitored through eye-tracking experiments. Additionally, subjective assessments were conducted to collect the participants’ perceptions of the images in terms of aesthetics, complexity, and directional sense. The results revealed that leading lines significantly influenced the participants’ attention to key elements of the work, particularly when prominent subject elements were present. This led to greater engagement, longer viewing times, and enhanced ratings on aesthetics and directional sense. These findings suggest that skilled photographers can employ leading lines to guide the viewer’s gaze and create visually compelling and aesthetically pleasing works. This research offers specific compositional strategies for photography applications and underscores the importance of leading lines and subject elements in enhancing visual impact and artistic expression.
Full article
Open AccessArticle
Intelligent Standalone Eye Blinking Monitoring System for Computer Users
by
Ahmad A. Jiman, Amjad J. Abdullateef, Alaa M. Almelawi, Khan M. Yunus, Yasser M. Kadah, Ahmad F. Turki, Mohammed J. Abdulaal, Nebras M. Sobahi, Eyad T. Attar and Ahmad H. Milyani
J. Eye Mov. Res. 2024, 17(5), 1-17; https://doi.org/10.16910/jemr.17.5.1 - 6 Dec 2024
Abstract
Purpose: Working on computers for long hours has become a regular task for millions of people around the world. This has led to the increase of eye and vision issues related to prolonged computer use, known as computer vision syndrome (CVS). A main contributor to CVS caused by dry eyes is the reduction of blinking rates. In this pilot study, an intelligent, standalone eye blinking monitoring system to promote healthier blinking behaviors for computer users was developed using components that are affordable and easily available in the market. Methods: The developed eye blinking monitoring system used a camera to track blinking rates and operated audible, visual and tactile alarm modes to induce blinks. The hypothesis in this study is that the developed eye blinking monitoring system would increase eye blinks for a computer user. To test this hypothesis, the developed system was evaluated on 20 subjects. Results: The eye blinking monitoring system detected blinks with high accuracy (95.9%). The observed spontaneous eye blinking rate was 43.1 ± 14.7 blinks/min (mean ± standard deviation). Eye blinking rates significantly decreased when the subjects were watching movie trailers (25.2 ± 11.9 blinks/min; Wilcoxon signed rank test; p < 0.001) and reading articles (24.2 ± 12.1 blinks/min; p < 0.001) on a computer. The blinking monitoring system with the alarm function turned on showed an increase in blinking rates (28.2 ± 12.1 blinks/min) compared to blinking rates without the alarm function (25.2 ± 11.9 blinks/min; p = 0.09; Cohen’s effect size d = 0.25) when the subjects were watching movie trailers. Conclusions: The developed blinking monitoring system was able to detect blinking with high accuracy and induce blinking with a personalized alarm function. Further work is needed to refine the study design and evaluate the clinical impact of the system. This work is an advancement towards the development of a profound technological solution for preventing CVS.
Open Access Article
Eye Movement and Recall of Visual Elements in Eco-Friendly Product
by
Jing Li and Myun Kim
J. Eye Mov. Res. 2024, 17(4), 1-20; https://doi.org/10.16910/jemr.17.4.6 - 6 Dec 2024
Abstract
This study explores the distribution of visual attention on sustainability graphics when viewing an eco-friendly product and the recall of sustainability information afterward. Twenty-five students majoring in environmental studies and twenty-five students from non-environmental majors participated; they were further divided into higher and lower groups based on their sustainability level. Participants viewed diagrams of an eco-trash boat design with sustainability graphics and a 15-page design description. Their eye-movement data and verbal reports on the recall of sustainability information were collected. The higher-sustainability group had a higher fixation count on the sustainability graphics. Non-environmental majors had a shorter time to first fixation on the sustainability graphics, and there was an interaction effect: in the lower group, environmental majors detected the graphics faster, whereas in the higher group, non-environmental majors were quicker. In recalling sustainability graphics, the higher group scored higher, while environmental majors scored higher in recalling sustainability features. In the recall coding, the most frequently mentioned terms were "green," "plant," "vivid," and "eco." The study offers new insights into sustainable development and provides design recommendations for eco-product designers.
Open Access Article
The Interference Effect of Low-Relevant Animated Elements on Digital Picture-Book Comprehension in Preschoolers: An Eye-Movement Study
by
Nina Liu, Chen Chen, Yingying Liu, Shan Jiang, QianCheng Gao and Ruihan Wu
J. Eye Mov. Res. 2024, 17(4), 1-16; https://doi.org/10.16910/jemr.17.4.1 - 6 Dec 2024
Abstract
Digital picture-books (DPBs) with animated elements can enhance children's engagement, but irrelevant animations may interfere with their comprehension. To determine the effect of the relevance of animated elements on preschoolers' comprehension, an experimental study was conducted. Thirty-three preschoolers aged 4-5 years engaged with a DPB in three conditions (high-relevant animations, low-relevant animations, and a static control) while listening to the story; their eye movements were recorded simultaneously. The study found that preschoolers had lower comprehension when exposed to low-relevant animation, but comparable scores to the static condition with high-relevant animation. Eye-movement analysis showed that children who focused less on high-relevant elements, or more on low-relevant elements, had poorer comprehension. Those exposed to low-relevant animations looked less at high-relevant elements and more at low-relevant elements than those in the static and high-relevant conditions. These results suggest that low-relevant animations in DPBs interfere with children's comprehension by directing their visual attention away from crucial, high-relevant elements and toward less relevant ones. Designers creating DPBs, as well as parents and caregivers selecting DPBs for children, should therefore consider the relevance of animated elements. The corresponding mechanism of the animation effect on DPB comprehension is also discussed.
Open Access Article
Reading Comics: The Effect of Expertise on Eye Movements
by
Hong Yang
J. Eye Mov. Res. 2024, 17(4), 1-17; https://doi.org/10.16910/jemr.17.4.5 - 15 Nov 2024
Abstract
The theory of expertise suggests that there should be observable differences in eye movement patterns between experts and non-experts. Previous studies have investigated how expertise influences eye movement patterns during cognitive tasks like reading. However, the impact of expertise on eye movements in comics, a multimodal form of text, remains unexplored. This article reports on a study that uses eye tracking to examine how experts and non-experts read comics. Expert participants (n = 14) had more experience reading comics than non-expert participants (n = 17). When controlling for variables such as layout and text quantity, we found significant differences in visual scanning between experts and non-experts: experts exhibited more frequent saccades and greater saccade amplitudes. Further analysis revealed distinct strategies for processing text and image content between the two groups: the interaction between expertise level and content type in specific areas of interest (AOIs) showed significant differences across multiple eye-movement measures, including average fixation duration, number of fixations, and number of saccades within AOIs. These findings not only support the applicability of expertise theory to comic reading but also provide a new perspective for understanding the processing of multimodal texts.
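The saccade measures compared in this study (frequency and amplitude) are typically derived from raw gaze samples, and one standard way to segment samples into saccades is velocity-threshold identification (I-VT). The sketch below is illustrative only; the sample format, function name, and 30 deg/s threshold are assumptions, not the authors' pipeline.

```python
import math

def saccade_metrics(samples, vel_thresh_deg_s=30.0):
    """Segment gaze samples into saccades with a velocity threshold (I-VT)
    and return (saccade_count, mean_amplitude_deg).

    `samples` is a list of (t_seconds, x_deg, y_deg) gaze positions in
    degrees of visual angle; 30 deg/s is a commonly used default threshold.
    """
    saccades = []
    start = None  # index where the current saccade began
    for i in range(1, len(samples)):
        t0, x0, y0 = samples[i - 1]
        t1, x1, y1 = samples[i]
        dt = t1 - t0
        if dt <= 0:
            continue
        vel = math.hypot(x1 - x0, y1 - y0) / dt  # gaze velocity, deg/s
        if vel >= vel_thresh_deg_s:
            if start is None:
                start = i - 1  # saccade onset
        elif start is not None:
            saccades.append((start, i - 1))  # saccade offset
            start = None
    if start is not None:
        saccades.append((start, len(samples) - 1))
    if not saccades:
        return 0, 0.0
    # Amplitude = Euclidean distance between saccade onset and offset.
    amps = [math.hypot(samples[b][1] - samples[a][1],
                       samples[b][2] - samples[a][2])
            for a, b in saccades]
    return len(saccades), sum(amps) / len(amps)
```

Comparing the per-participant outputs of such a segmentation between groups is what allows statements like "experts exhibited more frequent saccades and greater saccade amplitudes."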
Special Issues
Special Issue in JEMR: Eye Tracking and Visualization
Guest Editor: Michael Burch. Deadline: 20 November 2025.