Eye Tracking and Visualization

Special Issue Editor


Dr. Michael Burch
Guest Editor
Center for Data Analytics, Visualization, and Simulation, University of Applied Sciences, 7000 Chur, Switzerland
Interests: eye tracking; information visualization; visual analytics; data science; software engineering

Special Issue Information

Dear Colleagues,

The application of eye tracking technology to application-specific research questions generates vast amounts of spatio-temporal data. Algorithmic analysis of the recorded data can help identify patterns and anomalies; visualizations, however, exploit human perceptual abilities to detect visual patterns and therefore offer a powerful, often faster complement to purely algorithmic solutions. This Special Issue focuses on concepts, approaches, and techniques that use interactive visualizations to analyze eye movement data, or that enable better interaction with visualizations and visual user interfaces through gaze-assisted interaction. More complex visual analytics tools that combine algorithms, visualizations, and human–computer interaction to facilitate pattern finding and support decision making are also welcome. Finally, eye-tracking-based evaluations of visualizations and visual analytics tools—static or dynamic, integrated into small-, medium-, or large-scale displays in real-world, virtual, augmented, or immersive settings—are within the scope of this Special Issue.

Dr. Michael Burch
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Eye Movement Research is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • eye tracking
  • visualization
  • visual analytics
  • user evaluation
  • human–computer interaction
  • pattern identification
  • visual perception

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

25 pages, 2534 KB  
Article
Calendar Horizon as a Boundary Affordance: An Attempt-Centric Eye-Tracking Analysis of Calendar Scheduling Interfaces
by Nina Xie, Yuanyuan Wang and Yujun Liu
J. Eye Mov. Res. 2026, 19(2), 27; https://doi.org/10.3390/jemr19020027 - 2 Mar 2026
Viewed by 631
Abstract
Digital calendars are interactive representations of time that shape both scheduling outcomes and the micro-process of searching, verifying, and revising candidate placements. We examine calendar horizon—whether weekend time is visible in the default week view—as a boundary affordance in scheduling interfaces. Using eye tracking and interaction logs, we model each scheduling episode as a sequence of placement attempts and align gaze to each attempt, partitioning it into Early/Mid/Late phases and summarizing attention across structural AOIs (task panel, calendar grid, and the weekend column when present). Two experiments used drag-and-drop and dropdown slot-picking; weekend visibility was manipulated within the dropdown interface, while evening slots remained available. Across 105 participants (1018 task episodes), AttemptsCount ranged from 1 to 7. AttemptsCount predicted gaze-based process cost: each additional attempt corresponded to ~56% more total fixation duration. Personal tasks required more attempts than work tasks and elicited stronger Late-phase weekend verification when the weekend was visible. Horizon cues also shifted boundary outcomes: hiding the weekend reduced weekend placements and increased reliance on evening scheduling, indicating displacement into adjacent time regions. These findings position calendar horizon as a design lever that shapes both process (verification) and outcomes (boundary placements), with implications for calendar UIs and mixed-initiative scheduling tools.
(This article belongs to the Special Issue Eye Tracking and Visualization)
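The "~56% more total fixation duration" per attempt reported above corresponds to a multiplicative, log-linear effect: modeling log duration as linear in attempt count, each extra attempt adds log(1.56) to the prediction. A minimal sketch on synthetic data (all numbers hypothetical, chosen only to mirror the reported effect size; this is not the authors' analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 400 episodes whose log total fixation duration grows linearly
# with the number of placement attempts (slope log(1.56), i.e. ~56% more
# duration per extra attempt), plus Gaussian noise on the log scale.
attempts = rng.integers(1, 8, size=400)            # 1..7 attempts, as in the paper
log_dur = np.log(2.0) + np.log(1.56) * attempts + rng.normal(0.0, 0.3, size=400)

# Ordinary least squares on the log scale recovers the multiplicative effect:
# exponentiating the slope gives the per-attempt duration factor.
slope, intercept = np.polyfit(attempts, log_dur, 1)
per_attempt_factor = np.exp(slope)                 # close to 1.56 on this synthetic data
```

Fitting on the log scale is what makes "each additional attempt corresponds to ~56% more duration" a single coefficient rather than an attempt-dependent increment.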

18 pages, 10325 KB  
Article
Eye Movement Analysis: A Kernel Density Estimation Approach for Saccade Direction and Amplitude
by Paula Fehlinger, Bernhard Ertl and Bianca Watzka
J. Eye Mov. Res. 2026, 19(1), 10; https://doi.org/10.3390/jemr19010010 - 19 Jan 2026
Viewed by 888
Abstract
Eye movements are important indicators of problem-solving or solution strategies and are recorded using eye-tracking technologies. As they reveal how viewers interact with presented information during task processing, their analysis is crucial for educational research. Traditional methods for analyzing saccades, such as histograms or polar diagrams, are limited in capturing patterns in direction and amplitude. To address this, we propose a kernel density estimation approach that explicitly accounts for the data structure: for the circular distribution of saccade direction, we use the von Mises kernel, and for saccade amplitude, a Gaussian kernel. This yields continuous probability distributions that not only improve the accuracy of representations but also model the underlying distribution of eye movements. This method enables the identification of strategies used during task processing and reveals connections to the underlying cognitive processes. It allows for a deeper understanding of information processing during learning. By applying our new method to an empirical dataset, we uncovered differences in solution strategies that conventional techniques could not reveal. The insights gained can contribute to the development of more effective teaching methods, better tailored to the individual needs of learners, thereby enhancing their academic success.
(This article belongs to the Special Issue Eye Tracking and Visualization)
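The kernel combination described above can be sketched directly: a von Mises kernel treats saccade direction as circular (no artificial break at ±180°), while an ordinary Gaussian kernel handles amplitude. The following is a minimal illustration, not the authors' implementation; the kappa, bandwidth, and synthetic-data values are illustrative assumptions:

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of order 0

def vonmises_kde(directions, kappa, n_grid=360):
    """Circular KDE over saccade directions (radians in [-pi, pi))."""
    grid = np.linspace(-np.pi, np.pi, n_grid)
    # Each saccade contributes a von Mises bump centred on its direction;
    # kappa acts as an inverse bandwidth (larger kappa = narrower kernel).
    dens = np.exp(kappa * np.cos(grid[:, None] - directions[None, :])).sum(axis=1)
    dens /= 2.0 * np.pi * i0(kappa) * len(directions)
    return grid, dens

def gaussian_kde_1d(amplitudes, bandwidth, grid):
    """Plain Gaussian KDE over saccade amplitudes (e.g. degrees of visual angle)."""
    z = (grid[:, None] - amplitudes[None, :]) / bandwidth
    dens = np.exp(-0.5 * z**2).sum(axis=1)
    return dens / (np.sqrt(2.0 * np.pi) * bandwidth * len(amplitudes))

# Illustrative synthetic saccades: directions clustered around 0 rad,
# amplitudes around 5 degrees.
rng = np.random.default_rng(0)
dirs = rng.vonmises(0.0, 4.0, size=300)
amps = rng.normal(5.0, 1.0, size=300)

dir_grid, dir_dens = vonmises_kde(dirs, kappa=25.0)
amp_grid = np.linspace(0.0, 10.0, 200)
amp_dens = gaussian_kde_1d(amps, bandwidth=0.4, grid=amp_grid)
```

Unlike a histogram or polar diagram, both estimates are continuous densities, so peaks in direction or amplitude can be located and compared without binning artifacts.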

23 pages, 67974 KB  
Article
Analyzing the “Opposite” Approach in Additions to Historic Buildings Using Visual Attention Tools: Dresden Military History Museum Case
by Nuray Özkaraca Özalp, Hicran Hanım Halaç, Mehmet Fatih Özalp and Fikret Bademci
J. Eye Mov. Res. 2026, 19(1), 7; https://doi.org/10.3390/jemr19010007 - 12 Jan 2026
Viewed by 824
Abstract
From past to present, modern additions have continued to transform historic environments. While some argue that contemporary extensions disrupt the integrity of historic buildings, others suggest that the contrast between past and present creates a meaningful architectural dialog. This debate raises a key question: in contrasting compositions, which architectural elements draw more visual attention, the historic or the modern? To address this, a visual attention-based analytical approach is adopted. In this study, eye-tracking-based visual attention analysis is used to examine how viewers perceive the relationship between historical and contemporary architectural elements. Instead of conventional laboratory-based eye-tracking, artificial intelligence-supported visual attention software developed from eye-tracking datasets is employed. Four tools—3M-VAS, EyeQuant, Attention Insight, and Expoze—were used to generate heat maps, gaze sequence maps, hotspots, focus maps, attention distribution diagrams, and saliency predictions. These visualizations enabled both a qualitative and quantitative comparison of viewer focus. The case study is the Military History Museum in Dresden, Germany, known for its widely debated contemporary addition representing an oppositional design approach. The results illustrate which architectural components are visually prioritized, offering insight into how contrasting architectural languages are cognitively perceived in historic settings.
(This article belongs to the Special Issue Eye Tracking and Visualization)

21 pages, 2975 KB  
Article
Where Vision Meets Memory: An Eye-Tracking Study of In-App Ads in Mobile Sports Games with Mixed Visual-Quantitative Analytics
by Ümit Can Büyükakgül, Arif Yüce and Hakan Katırcı
J. Eye Mov. Res. 2025, 18(6), 74; https://doi.org/10.3390/jemr18060074 - 10 Dec 2025
Viewed by 1032
Abstract
Mobile games have become one of the fastest-growing segments of the digital economy, and in-app advertisements represent a major source of revenue while shaping consumer attention and memory processes. This study examined the relationship between visual attention and brand recall of in-app advertisements in a mobile sports game using mobile eye-tracking technology. A total of 79 participants (47 male, 32 female; mean age 25.8 years) actively played a mobile sports game for ten minutes while their eye movements were recorded with Tobii Pro Glasses 2. Areas of interest (AOIs) were defined for embedded advertisements, and fixation-related measures were analyzed. Brand recall was assessed through unaided, verbal-aided, and visual-aided measures, followed by demographic comparisons based on gender, mobile sports game experience, and interest in tennis. Results from Generalized Linear Mixed Models (GLMMs) revealed that brand placement was the strongest predictor of recall (p < 0.001), overriding raw fixation duration. Specifically, brands integrated into task-relevant zones (e.g., the central net area) achieved significantly higher recall odds compared to peripheral ads, regardless of marginal variations in dwell time. While eye movement metrics varied by gender and interest, the multivariate model confirmed that in active gameplay, task-integration drives memory encoding more effectively than passive visual salience. These findings suggest that active gameplay imposes unique cognitive demands, altering how attention and memory interact. The study contributes both theoretically by extending advertising research into ecologically valid gaming contexts and practically by informing strategies for optimizing mobile in-app advertising.
(This article belongs to the Special Issue Eye Tracking and Visualization)

14 pages, 3698 KB  
Article
Active Gaze Guidance and Pupil Dilation Effects Through Subject Engagement in Ophthalmic Imaging
by David Harings, Niklas Bauer, Damian Mendroch, Uwe Oberheide and Holger Lubatschowski
J. Eye Mov. Res. 2025, 18(5), 45; https://doi.org/10.3390/jemr18050045 - 19 Sep 2025
Cited by 1 | Viewed by 1417
Abstract
Modern ophthalmic imaging methods such as optical coherence tomography (OCT) typically require expensive scanner components to direct the light beam across the retina while the patient’s gaze remains fixed. This proof-of-concept experiment investigates whether the patient’s natural eye movements can replace mechanical scanning by guiding the gaze along predefined patterns. An infrared fundus camera setup was used with nine healthy adults (aged 20–57) who completed tasks comparing passive viewing of moving patterns to actively tracing them by drawing using a touchpad interface. The active task involved participant-controlled target movement with real-time color feedback for accurate pattern tracing. Results showed that active tracing significantly increased pupil diameter by an average of 17.8% (range 8.9–43.6%; p < 0.001) and reduced blink frequency compared to passive viewing. More complex patterns led to greater pupil dilation, confirming the link between cognitive load and physiological response. These findings demonstrate that patient-driven gaze guidance can stabilize gaze, reduce blinking, and naturally dilate the pupil. These conditions might enhance the quality of scannerless OCT or other imaging techniques benefiting from guided gaze and larger pupils. There could be benefits for children and people with compliance issues, although further research is needed to consider cognitive load.
(This article belongs to the Special Issue Eye Tracking and Visualization)

20 pages, 44464 KB  
Article
Spatial Guidance Overrides Dynamic Saliency in VR: An Eye-Tracking Study on Gestalt Grouping Mechanisms and Visual Attention Patterns
by Qiaoling Zou, Wanyu Zheng, Xinyan Jiang and Dongning Li
J. Eye Mov. Res. 2025, 18(5), 37; https://doi.org/10.3390/jemr18050037 - 25 Aug 2025
Cited by 1 | Viewed by 2008
Abstract
(1) Background: Virtual Reality (VR) films challenge traditional visual cognition by offering novel perceptual experiences. This study investigates the applicability of Gestalt grouping principles in dynamic VR scenes, the influence of VR environments on grouping efficiency, and the relationship between viewer experience and grouping effects. (2) Methods: Eye-tracking experiments were conducted with 42 participants using the HTC Vive Pro Eye and Tobii Pro Lab. Participants watched a non-narrative VR film with fixed camera positions to eliminate narrative and auditory confounds. Eye-tracking metrics were analyzed using SPSS version 29.0.1, and data were visualized through heat maps and gaze trajectory plots. (3) Results: Viewers tended to focus on spatial nodes and continuous structures. Initial fixations were anchored near the body but shifted rapidly thereafter. Heat maps revealed a consistent concentration of fixations on the dock area. (4) Conclusions: VR reshapes visual organization, where proximity, continuity, and closure outweigh traditional saliency. Dynamic elements draw attention only when linked to user goals. Designers should prioritize spatial logic, using functional nodes as cognitive anchors and continuous paths as embodied guides. Future work should test these mechanisms in narrative VR and explore neural correlates via fNIRS or EEG.
(This article belongs to the Special Issue Eye Tracking and Visualization)
