Search Results (47)

Search Parameters:
Keywords = head-mounted eye tracking

35 pages, 2865 KB  
Article
eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification
by Michael Barz, Omair Shahzad Bhatti, Hasan Md Tusfiqur Alam, Duy Minh Ho Nguyen, Kristin Altmeyer, Sarah Malone and Daniel Sonntag
J. Eye Mov. Res. 2025, 18(4), 27; https://doi.org/10.3390/jemr18040027 - 7 Jul 2025
Viewed by 690
Abstract
Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure the perceived usability, annotations’ validity and reliability, and efficiency during a data annotation task. We ask our participants to re-annotate data from a single individual of an existing dataset (n = 48). Further, we conduct a semi-structured interview to understand how participants used the provided IML features and to assess our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating the data of the remaining 47 individuals. Full article
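For illustration, the fixation-to-AOI suggestion step can be sketched as a nearest-centroid ("prototype") few-shot classifier over image embeddings of fixation crops. The embedding size, AOI names, and data below are hypothetical assumptions, not the authors' implementation.

```python
import numpy as np

def suggest_aoi(fixation_embedding, support_embeddings):
    """Nearest-centroid few-shot classifier: suggest the AOI whose labeled
    example embeddings (the support set) lie closest to the fixation crop."""
    prototypes = {aoi: np.mean(vecs, axis=0) for aoi, vecs in support_embeddings.items()}
    distances = {aoi: np.linalg.norm(fixation_embedding - proto) for aoi, proto in prototypes.items()}
    return min(distances, key=distances.get)

# Hypothetical example: 128-d image embeddings for two AOIs, three labeled crops each.
rng = np.random.default_rng(0)
support = {
    "worksheet": [rng.normal(0.0, 1.0, 128) for _ in range(3)],
    "tablet":    [rng.normal(0.5, 1.0, 128) for _ in range(3)],
}
query = rng.normal(0.5, 1.0, 128)  # embedding of the current fixation crop
print(suggest_aoi(query, support))
```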

18 pages, 5112 KB  
Article
Gaze–Hand Steering for Travel and Multitasking in Virtual Environments
by Mona Zavichi, André Santos, Catarina Moreira, Anderson Maciel and Joaquim Jorge
Multimodal Technol. Interact. 2025, 9(6), 61; https://doi.org/10.3390/mti9060061 - 13 Jun 2025
Viewed by 670
Abstract
As head-mounted displays (HMDs) with eye tracking become increasingly accessible, the need for effective gaze-based interfaces in virtual reality (VR) grows. Traditional gaze- or hand-based navigation often limits user precision or impairs free viewing, making multitasking difficult. We present a gaze–hand steering technique that combines eye tracking with hand pointing: users steer only when gaze aligns with a hand-defined target, reducing unintended actions and enabling free look. Speed is controlled via either a joystick or a waist-level speed circle. We evaluated our method in a user study (n = 20) across multitasking and single-task scenarios, comparing it to a similar technique. Results show that gaze–hand steering maintains performance and enhances user comfort and spatial awareness during multitasking. Our findings support using gaze–hand steering in gaze-dominant VR applications requiring precision and simultaneous interaction. Our method significantly improves VR navigation in gaze–dominant, multitasking-intensive applications, supporting immersion and efficient control. Full article
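The core gating idea (steer only while gaze and hand pointing agree) can be sketched as an angular-alignment test. The threshold, vector conventions, and example values below are illustrative assumptions rather than the study's implementation.

```python
import numpy as np

def steering_active(gaze_dir, hand_target, head_pos, max_angle_deg=10.0):
    """True when the gaze direction points at the hand-defined target, i.e. the
    angle between the gaze ray and the head-to-target vector is small."""
    to_target = hand_target - head_pos
    to_target = to_target / np.linalg.norm(to_target)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_target), -1.0, 1.0)))
    return angle <= max_angle_deg

# Illustrative frame update: translate the user only while the alignment condition holds.
head = np.array([0.0, 1.7, 0.0])
target = np.array([1.0, 1.7, -5.0])   # point defined by hand pointing
gaze = np.array([0.2, 0.0, -1.0])
speed = 2.0                            # m/s, e.g. from joystick or speed circle
direction = (target - head) / np.linalg.norm(target - head)
velocity = speed * direction if steering_active(gaze, target, head) else np.zeros(3)
print(velocity)
```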

13 pages, 1193 KB  
Article
Validation of an Automated Scoring Algorithm That Assesses Eye Exploration in a 3-Dimensional Virtual Reality Environment Using Eye-Tracking Sensors
by Or Koren, Anais Di Via Ioschpe, Meytal Wilf, Bailasan Dahly, Ramit Ravona-Springer and Meir Plotnik
Sensors 2025, 25(11), 3331; https://doi.org/10.3390/s25113331 - 26 May 2025
Viewed by 656
Abstract
Eye-tracking studies in virtual reality (VR) deliver insights into behavioral function. The gold standard of evaluating gaze behavior is based on manual scoring, which is labor-intensive. Previously proposed automated eye-tracking algorithms for VR head-mounted displays (HMDs) were not validated against manual scoring or tested on dynamic areas of interest (AOIs). Our study validates the accuracy of an automated scoring algorithm, which determines temporal fixation behavior on static and dynamic AOIs in VR, against subjective human annotation. The intraclass correlation coefficient (ICC) was calculated for the time of first fixation (TOFF) and total fixation duration (TFD) in ten participants, each presented with 36 static and dynamic AOIs. High ICC values (≥0.982; p < 0.0001) were obtained when comparing the algorithm-generated TOFF and TFD to the raters’ annotations. In sum, our algorithm is accurate in determining temporal parameters related to gaze behavior when using HMD-based VR. Thus, the significant time required for human scoring among numerous raters can be rendered obsolete with a reliable automated scoring system. The algorithm proposed here was designed to subserve a separate study that uses TOFF and TFD to differentiate apathy from depression in those suffering from Alzheimer’s dementia. Full article
(This article belongs to the Section Optical Sensors)
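The two temporal measures can be illustrated directly from a list of fixation events labeled with AOI hits. The field names and sample data are assumptions made for this sketch, not the authors' code.

```python
def time_of_first_fixation(fixations, aoi):
    """TOFF: onset time of the first fixation that lands on the AOI (None if never hit)."""
    hits = [f["start"] for f in fixations if f["aoi"] == aoi]
    return min(hits) if hits else None

def total_fixation_duration(fixations, aoi):
    """TFD: summed duration of all fixations on the AOI."""
    return sum(f["end"] - f["start"] for f in fixations if f["aoi"] == aoi)

# Hypothetical fixation stream (times in seconds) with AOI labels from the scoring algorithm.
fixations = [
    {"start": 0.4, "end": 0.9, "aoi": "static_cube"},
    {"start": 1.2, "end": 1.8, "aoi": "moving_ball"},
    {"start": 2.1, "end": 2.6, "aoi": "moving_ball"},
]
print(time_of_first_fixation(fixations, "moving_ball"))   # 1.2
print(total_fixation_duration(fixations, "moving_ball"))  # ~1.1
```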

16 pages, 7057 KB  
Article
VRBiom: A New Periocular Dataset for Biometric Applications of Head-Mounted Display
by Ketan Kotwal, Ibrahim Ulucan, Gökhan Özbulak, Janani Selliah and Sébastien Marcel
Electronics 2025, 14(9), 1835; https://doi.org/10.3390/electronics14091835 - 30 Apr 2025
Cited by 1 | Viewed by 1006
Abstract
With advancements in hardware, high-quality head-mounted display (HMD) devices are being developed by numerous companies, driving increased consumer interest in AR, VR, and MR applications. This proliferation of HMD devices opens up possibilities for a wide range of applications beyond entertainment. Most commercially available HMD devices are equipped with internal inward-facing cameras to record the periocular areas. Given the nature of these devices and captured data, many applications such as biometric authentication and gaze analysis become feasible. To effectively explore the potential of HMDs for these diverse use-cases and to enhance the corresponding techniques, it is essential to have an HMD dataset that captures realistic scenarios. In this work, we present a new dataset of periocular videos acquired using a virtual reality headset called VRBiom. The VRBiom, targeted at biometric applications, consists of 900 short videos acquired from 25 individuals recorded in the NIR spectrum. These 10-second-long videos were captured using the internal tracking cameras of the Meta Quest Pro at 72 FPS. To encompass real-world variations, the dataset includes recordings under three gaze conditions: steady, moving, and partially closed eyes. We have also ensured an equal split of recordings without and with glasses to facilitate the analysis of eyewear. These videos, characterized by non-frontal views of the eye and relatively low spatial resolutions (400×400), can be instrumental in advancing state-of-the-art research across various biometric applications. The VRBiom dataset can be utilized to evaluate, train, or adapt models for biometric use-cases such as iris and/or periocular recognition and associated sub-tasks such as detection and semantic segmentation. In addition to data from real individuals, we have included around 1100 presentation attacks constructed from 92 presentation attack instruments (PAIs). These PAIs fall into six categories constructed through combinations of print attacks (real and synthetic identities), fake 3D eyeballs, plastic eyes, and various types of masks and mannequins. These PA videos, combined with genuine (bona fide) data, can be utilized to address concerns related to spoofing, which is a significant threat if these devices are to be used for authentication. The VRBiom dataset is publicly available for research purposes related to biometric applications only. Full article

24 pages, 12563 KB  
Article
Analyzing Gaze During Driving: Should Eye Tracking Be Used to Design Automotive Lighting Functions?
by Korbinian Kunst, David Hoffmann, Anıl Erkan, Karina Lazarova and Tran Quoc Khanh
J. Eye Mov. Res. 2025, 18(2), 13; https://doi.org/10.3390/jemr18020013 - 10 Apr 2025
Viewed by 831
Abstract
In this work, an experiment was designed in which a defined route consisting of country roads, highways, and urban roads was driven by 20 subjects during the day and at night. The test vehicle was equipped with GPS and a camera, and each subject wore head-mounted eye-tracking glasses to record gaze. Gaze distributions for country roads, highways, urban roads, and specific urban roads were then calculated and compared. The day/night comparisons showed that the horizontal fixation distribution of the subjects was wider during the day than at night over the whole test distance. When the distributions were divided into urban roads, country roads, and highways, the difference was also seen in each road environment. For the vertical distribution, no clear differences between day and night could be seen for country roads or urban roads. On the highway, the vertical dispersion was significantly lower, so the gaze was more focused. On highways and urban roads there was a tendency for the gaze to be lowered. The differentiation between a residential road and a main road in the city made it clear that gaze behavior differs significantly depending on the urban area. For example, the residential road led to a broader gaze behavior, as the sides of the street were scanned much more often in order to detect potential hazards lurking between parked cars at an early stage. This paper highlights the contradictory results of eye-tracking research and shows that it is not advisable to define a single ‘holy grail’ gaze distribution for all environments. Gaze is highly situational and context-dependent, and generalized gaze distributions should not be used to design lighting functions. The research highlights the importance of an adaptive light distribution that adapts to the traffic situation and the environment, always providing good visibility for the driver and allowing natural gaze behavior. Full article
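The dispersion comparison amounts to grouping gaze angles by road type and lighting condition and comparing their spread. The sketch below uses standard deviation as the dispersion measure and entirely synthetic numbers; the column names and values are illustrative assumptions and do not reproduce the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
rows = []
# Synthetic gaze samples: wider horizontal spread by day than by night, per road type.
for road, day_sd, night_sd in [("urban", 12.0, 8.0), ("country", 10.0, 7.0), ("highway", 7.0, 5.0)]:
    for lighting, sd in [("day", day_sd), ("night", night_sd)]:
        rows.append(pd.DataFrame({
            "road": road,
            "lighting": lighting,
            "azimuth": rng.normal(0.0, sd, 500),      # horizontal gaze angle (deg)
            "elevation": rng.normal(-2.0, 3.0, 500),  # vertical gaze angle (deg)
        }))
gaze = pd.concat(rows, ignore_index=True)

# Horizontal and vertical gaze dispersion per road type and lighting condition.
print(gaze.groupby(["road", "lighting"])[["azimuth", "elevation"]].std().round(2))
```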

17 pages, 3569 KB  
Article
Wearable Biosensor Smart Glasses Based on Augmented Reality and Eye Tracking
by Lina Gao, Changyuan Wang and Gongpu Wu
Sensors 2024, 24(20), 6740; https://doi.org/10.3390/s24206740 - 20 Oct 2024
Cited by 6 | Viewed by 6327
Abstract
With the rapid development of wearable biosensor technology, the combination of head-mounted displays and augmented reality (AR) technology has shown great potential for health monitoring and biomedical diagnosis applications. However, further optimizing the performance of such systems and improving data interaction accuracy remain crucial issues that must be addressed. In this study, we develop smart glasses based on augmented reality and eye tracking technology. Through real-time information interaction with the server, the smart glasses realize accurate scene perception and analysis of the user’s intention and combine with mixed-reality display technology to provide dynamic and real-time intelligent interaction services. A multi-level hardware architecture and an optimized data processing pipeline are adopted to enhance the system’s real-time accuracy. Meanwhile, combining the deep learning method with the geometric model significantly improves the system’s ability to perceive user behavior and environmental information in complex environments. The experimental results show that when the distance between the subject and the display is 1 m, the eye tracking accuracy of the smart glasses can reach 1.0° with an error of no more than ±0.1°. This study demonstrates that the effective integration of AR and eye tracking technology dramatically improves the functional performance of smart glasses in multiple scenarios. Future research will further optimize smart glasses’ algorithms and hardware performance, enhance their application potential in daily health monitoring and medical diagnosis, and provide more possibilities for the innovative development of wearable devices in medical and health management. Full article
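The reported 1.0° accuracy can be related to an on-display offset with a simple angular conversion. The helper below is an illustrative check of that relationship, not the authors' evaluation code.

```python
import math

def angular_error_deg(offset_m, viewing_distance_m):
    """Angular gaze error in degrees for a given on-display offset at a given viewing distance."""
    return math.degrees(math.atan2(offset_m, viewing_distance_m))

# At a 1 m viewing distance, a 1.0° error corresponds to roughly a 1.75 cm offset on the display.
print(angular_error_deg(0.0175, 1.0))  # ~1.0
```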

11 pages, 5434 KB  
Article
An Innovative Device Based on Human-Machine Interface (HMI) for Powered Wheelchair Control for Neurodegenerative Disease: A Proof-of-Concept
by Arrigo Palumbo, Nicola Ielpo, Barbara Calabrese, Remo Garropoli, Vera Gramigna, Antonio Ammendolia and Nicola Marotta
Sensors 2024, 24(15), 4774; https://doi.org/10.3390/s24154774 - 23 Jul 2024
Cited by 3 | Viewed by 2089
Abstract
In the global context, advancements in technology and science have rendered virtual, augmented, and mixed-reality technologies capable of transforming clinical care and medical environments by offering enhanced features and improved healthcare services. This paper aims to present a mixed reality-based system to control a robotic wheelchair for people with limited mobility. The test group comprised 11 healthy subjects (six male, five female, mean age 35.2 ± 11.7 years). A novel platform that integrates a smart wheelchair and an eye-tracking-enabled head-mounted display was proposed to reduce the cognitive requirements needed for wheelchair movement and control. The approach’s effectiveness was demonstrated by evaluating our system in realistic scenarios. The demonstration of the proposed AR head-mounted display user interface for controlling a smart wheelchair, and the results provided in this paper, could highlight the potential of HoloLens 2-based solutions and bring focus to emerging research topics such as remote control, cognitive rehabilitation, supporting the autonomy of patients with severe disabilities, and telemedicine. Full article
(This article belongs to the Special Issue Computational Intelligence Based-Brain-Body Machine Interface)

16 pages, 3338 KB  
Article
Expert Performance in Action Anticipation: Visual Search Behavior in Volleyball Spiking Defense from Different Viewing Perspectives
by Ruihan Zhu, Deze Zou, Keji Wang and Chunmei Cao
Behav. Sci. 2024, 14(3), 163; https://doi.org/10.3390/bs14030163 - 22 Feb 2024
Cited by 3 | Viewed by 3135
Abstract
Volleyball spiking requires defenders to possess exceptional anticipatory skills. However, most volleyball defense video eye-tracking studies have used fixed or off-court perspectives, failing to replicate real-world environments. This study explored different visual search behaviors between elite and novice volleyball players from various viewing perspectives using video eye tracking. We examined spiking anticipation in 14 competitive elite, 13 semi-elite, and 11 novice players. We captured spiking videos from three on-court perspectives using GoPro cameras mounted on the defenders’ heads, closely replicating real game scenarios. For comparison, we recorded baseline videos using a fixed camera. The present study revealed that competitive elite and semi-elite players demonstrated higher accuracy than novices. Competitive elite players used fewer fixations, indicating that their superior performance was related to stable visual search patterns. All participant groups, regardless of skill level, showed similar visual allocation among areas of interest (AOIs). However, notable differences in visual search patterns and AOI allocation were observed between baseline and on-court perspective videos. From the baseline perspective, the participants primarily utilized global perception and peripheral vision, focusing more on the setter zone or the spiker’s trunk. Conversely, from the on-court perspective, they employed more fixations, focusing more intensely on the spiker’s detailed movements. Full article

10 pages, 655 KB  
Article
Optical Rules to Mitigate the Parallax-Related Registration Error in See-Through Head-Mounted Displays for the Guidance of Manual Tasks
by Vincenzo Ferrari, Nadia Cattari, Sara Condino and Fabrizio Cutolo
Multimodal Technol. Interact. 2024, 8(1), 4; https://doi.org/10.3390/mti8010004 - 4 Jan 2024
Cited by 2 | Viewed by 3279
Abstract
Head-mounted displays (HMDs) are hands-free devices particularly useful for guiding near-field tasks such as manual surgical procedures. See-through HMDs do not significantly alter the user’s direct view of the world, but the optical merging of real and virtual information can hinder their coherent and simultaneous perception. In particular, the coherence between the real and virtual content is affected by a viewpoint parallax-related misalignment, which is due to the inaccessibility of the user-perceived reality through the semi-transparent optical combiner of the optical see-through (OST) display. Recent works demonstrated that a proper selection of the collimation optics of the HMD significantly mitigates the parallax-related registration error without the need for any eye-tracking cameras and/or for any error-prone alignment-based display calibration procedures. These solutions either rely on HMDs that project the virtual imaging plane directly at arm’s distance, or they require integrating additional lenses on the HMD to optically move the image of the observed scene to the virtual projection plane of the HMD. This paper describes and evaluates the pros and cons of both suggested solutions by providing an analytical estimation of the residual registration error achieved with each and discussing the perceptual issues generated by the simultaneous focalization of real and virtual information. Full article
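A simplified pinhole-geometry sketch (not the authors' analytical model) shows why placing the virtual image plane at the working distance mitigates the parallax-related error: the lateral misalignment scales with the viewpoint offset and with the mismatch between the target distance and the image-plane distance, and vanishes when the two distances coincide. The numeric values are hypothetical.

```python
def parallax_registration_error(eye_offset_mm, target_dist_mm, image_plane_dist_mm):
    """Lateral misalignment at the real target when the rendering viewpoint is offset
    from the true eye position by eye_offset_mm and the virtual content lies on an
    image plane at image_plane_dist_mm (simplified pinhole geometry)."""
    return abs(eye_offset_mm) * abs(1.0 - target_dist_mm / image_plane_dist_mm)

# With a 5 mm viewpoint offset, an image plane at 2 m and a near-field target at 40 cm,
# the estimated error is ~4 mm; moving the image plane to the target distance cancels it.
print(parallax_registration_error(5.0, 400.0, 2000.0))  # 4.0
print(parallax_registration_error(5.0, 400.0, 400.0))   # 0.0
```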

16 pages, 2600 KB  
Article
Cognitive Load Estimation in VR Flight Simulator
by P Archana Hebbar, Sanjana Vinod, Aumkar Kishore Shah, Abhay A Pashilkar and Pradipta Biswas
J. Eye Mov. Res. 2022, 15(3), 1-16; https://doi.org/10.16910/jemr.15.3.11 - 5 Jul 2023
Cited by 15 | Viewed by 713
Abstract
This paper discusses the design and development of a low-cost virtual reality (VR) based flight simulator with a cognitive load estimation feature using ocular and EEG signals. The focus is on exploring methods to evaluate the pilot’s interactions with the aircraft by quantifying the pilot’s perceived cognitive load under different task scenarios. A realistic target-tracking task in a battlefield context is designed in VR. A head-mounted eye gaze tracker and an EEG headset are used for acquiring pupil diameter, gaze fixation, gaze direction, and EEG theta, alpha, and beta band power data in real time. We developed an AI agent model in VR and created scenarios of interactions with the piloted aircraft. To estimate the pilot’s cognitive load, we used low-frequency pupil diameter variations, fixation rate, gaze distribution pattern, an EEG signal-based task load index, and an EEG task engagement index. We compared the physiological measures of workload with standard inceptor-control-based workload metrics. Results of the piloted simulation study indicate that the metrics discussed in the paper have a strong association with the pilot’s perceived task difficulty. Full article
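The two EEG indices named above are commonly defined as ratios of band powers (task engagement as beta / (alpha + theta), and a task load index as frontal theta over parietal alpha). Assuming those common definitions, which the abstract does not spell out, a minimal sketch looks like this; the band-power values are hypothetical.

```python
def engagement_index(theta, alpha, beta):
    """Commonly used task engagement index: beta / (alpha + theta), from band powers."""
    return beta / (alpha + theta)

def task_load_index(frontal_theta, parietal_alpha):
    """Commonly used task load index: frontal theta power relative to parietal alpha power."""
    return frontal_theta / parietal_alpha

# Hypothetical band powers (uV^2) for one analysis epoch.
print(engagement_index(theta=4.0, alpha=6.0, beta=3.0))        # 0.3
print(task_load_index(frontal_theta=4.0, parietal_alpha=5.0))  # 0.8
```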

21 pages, 7455 KB  
Article
Technologies Supporting Screening Oculomotor Problems: Challenges for Virtual Reality
by Are Dæhlen, Ilona Heldal and Qasim Ali
Computers 2023, 12(7), 134; https://doi.org/10.3390/computers12070134 - 1 Jul 2023
Cited by 2 | Viewed by 2074
Abstract
Oculomotor dysfunctions (OMDs) are problems relating to coordination and accuracy of eye movements for processing visual information. Eye-tracking (ET) technologies show great promise in the identification of OMDs. However, current computer technologies for vision screening are specialized devices with limited screen size and the inability to measure depth, while visual field and depth are important information for detecting OMDs. In this experimental study, we examine the possibilities of immersive virtual reality (VR) technologies, compared with laptop technologies, for improving user experience, presence, and immersiveness, and for using serious games to identify OMDs. The results show increased interest in VR-based screening, with VR applications free from outside distractions motivating users to focus better. However, current limitations include lower performance and lower confidence in the results of identifying OMDs with the HMDs used. Using serious games for screening in VR is also estimated to have great potential for developing a more robust vision screening tool, especially for younger children. Full article

14 pages, 2685 KB  
Article
Predicting Decision-Making in Virtual Environments: An Eye Movement Analysis with Household Products
by Almudena Palacios-Ibáñez, Javier Marín-Morales, Manuel Contero and Mariano Alcañiz
Appl. Sci. 2023, 13(12), 7124; https://doi.org/10.3390/app13127124 - 14 Jun 2023
Cited by 10 | Viewed by 2858
Abstract
Understanding consumer behavior is crucial for increasing the likelihood of product success. Virtual Reality head-mounted displays incorporating physiological techniques such as eye-tracking offer novel opportunities to study user behavior in decision-making tasks. These methods reveal unconscious or undisclosed consumer responses. Yet, research into gaze patterns during virtual product evaluations remains scarce. In this context, an experiment was conducted to investigate users’ gaze behavior when evaluating their preferences for 64 virtual prototypes of a bedside table. Here, 24 participants evaluated and selected their preferred design through eight repeated 8-alternative forced-choice (8-AFC) tasks, with individual evaluations conducted for each design to ensure the reliability of the findings. Several eye-tracking metrics were computed (i.e., gaze time, visits, and time to first gaze), statistical tests were applied, and a Long Short-Term Memory model was created to recognize decisions based on attentional patterns. Our results revealed that the Gaze Cascade Model was replicated in virtual environments and that a correlation between product liking and eye-tracking metrics exists. We recognized subjects’ decisions with 90% accuracy based on their gaze patterns during the three seconds before each decision. The results suggest that eye-tracking can be an effective tool for decision-making prediction during product assessment in virtual environments. Full article
(This article belongs to the Special Issue Human–Machine Interaction in Metaverse)
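A minimal PyTorch sketch of the idea of classifying the chosen alternative from the gaze-feature sequence in the final seconds before a decision is shown below. The feature set, sequence length, and layer sizes are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class GazeDecisionLSTM(nn.Module):
    """LSTM over a per-frame gaze-feature sequence, ending in an 8-way choice."""
    def __init__(self, n_features=3, hidden=32, n_alternatives=8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_alternatives)

    def forward(self, x):              # x: (batch, time, features)
        _, (h, _) = self.lstm(x)       # final hidden state summarizes the sequence
        return self.head(h[-1])        # logits over the candidate products

# Hypothetical batch: 3 s at 30 Hz = 90 steps of features such as gazed item, dwell, visit count.
model = GazeDecisionLSTM()
logits = model(torch.randn(4, 90, 3))
print(logits.shape)  # torch.Size([4, 8])
```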

14 pages, 3381 KB  
Article
Facial Motion Capture System Based on Facial Electromyogram and Electrooculogram for Immersive Social Virtual Reality Applications
by Chunghwan Kim, Ho-Seung Cha, Junghwan Kim, HwyKuen Kwak, WooJin Lee and Chang-Hwan Im
Sensors 2023, 23(7), 3580; https://doi.org/10.3390/s23073580 - 29 Mar 2023
Cited by 7 | Viewed by 4399
Abstract
With the rapid development of virtual reality (VR) technology and the market growth of social network services (SNS), VR-based SNS have been actively developed, in which 3D avatars interact with each other on behalf of the users. To provide the users with more immersive experiences in a metaverse, facial recognition technologies that can reproduce the user’s facial gestures on their personal avatar are required. However, it is generally difficult to employ traditional camera-based facial tracking technology to recognize the facial expressions of VR users because a large portion of the user’s face is occluded by a VR head-mounted display (HMD). To address this issue, attempts have been made to recognize users’ facial expressions based on facial electromyogram (fEMG) recorded around the eyes. fEMG-based facial expression recognition (FER) technology requires only tiny electrodes that can be readily embedded in the HMD pad that is in contact with the user’s facial skin. Additionally, electrodes recording fEMG signals can simultaneously acquire electrooculogram (EOG) signals, which can be used to track the user’s eyeball movements and detect eye blinks. In this study, we implemented an fEMG- and EOG-based FER system using ten electrodes arranged around the eyes, assuming a commercial VR HMD device. Our FER system could continuously capture various facial motions, including five different lip motions and two different eyebrow motions, from fEMG signals. Unlike previous fEMG-based FER systems that simply classified discrete expressions, with the proposed FER system, natural facial expressions could be continuously projected on the 3D avatar face using machine-learning-based regression with a new concept named the virtual blend shape weight, making it unnecessary to simultaneously record fEMG and camera images for each user. An EOG-based eye tracking system was also implemented for the detection of eye blinks and eye gaze directions using the same electrodes. These two technologies were simultaneously employed to implement a real-time facial motion capture system, which could successfully replicate the user’s facial expressions on a realistic avatar face in real time. To the best of our knowledge, the concurrent use of fEMG and EOG for facial motion capture has not been reported before. Full article
(This article belongs to the Special Issue Sensing Technology in Virtual Reality)
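The mapping from fEMG features to continuous blend shape weights can be sketched as a regularized regression fitted on calibration data. The channel count, window features, number of blend shapes, and use of ridge regression are assumptions for this sketch, not the authors' exact model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: RMS features of 10 fEMG channels per analysis window,
# paired with target blend shape weights (e.g. collected during a calibration phase).
X_train = rng.random((500, 10))                                 # fEMG RMS features
W_true = rng.random((10, 7))
y_train = X_train @ W_true + 0.01 * rng.normal(size=(500, 7))   # 7 blend shape weights

reg = Ridge(alpha=1.0).fit(X_train, y_train)

# At runtime, each new fEMG window is mapped to avatar blend shape weights in [0, 1].
new_window = rng.random((1, 10))
weights = np.clip(reg.predict(new_window), 0.0, 1.0)
print(weights.round(2))
```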

11 pages, 3407 KB  
Article
The Pupil Near Response Is Short Lasting and Intact in Virtual Reality Head Mounted Displays
by Hidde Pielage, Adriana A. Zekveld, Sjors van de Ven, Sophia E. Kramer and Marnix Naber
J. Eye Mov. Res. 2022, 15(3), 1-11; https://doi.org/10.16910/jemr.15.3.6 - 27 Mar 2023
Cited by 7 | Viewed by 284
Abstract
The pupil of the eye constricts when moving focus from an object further away to an object closer by. This is called the pupil near response, which typically occurs together with accommodation and vergence responses. When immersed in virtual reality mediated through a head-mounted display, this triad is disrupted by the vergence-accommodation conflict. However, it is not yet clear if the disruption also affects the pupil near response. Two experiments were performed to assess this. The first experiment had participants follow a target that first appeared at a far position and then moved to either a near position (far-to-near; FN) or to another far position (far-to-far; FF). The second experiment had participants follow a target that jumped between five positions, which was repeated at several distances. Experiment 1 showed a greater pupil constriction amplitude for FN trials, compared to FF trials, suggesting that the pupil near response is intact in head-mounted display mediated virtual reality. Experiment 2 did not find that average pupil dilation differed when fixating targets at different distances, suggesting that the pupil near response is transient and does not result in sustained pupil size changes. Full article
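The constriction-amplitude comparison can be illustrated as a baseline-corrected minimum of the pupil trace after the target jump. The trial structure, sampling rate, and synthetic traces below are assumptions for illustration only.

```python
import numpy as np

def constriction_amplitude(pupil_trace, baseline_samples=60):
    """Constriction amplitude: pre-jump baseline mean minus the post-jump minimum."""
    baseline = np.mean(pupil_trace[:baseline_samples])
    return baseline - np.min(pupil_trace[baseline_samples:])

rng = np.random.default_rng(2)
# Synthetic 60 Hz pupil traces (mm): a far-to-near (FN) trial constricts more than far-to-far (FF).
fn_trial = 3.0 + 0.02 * rng.normal(size=240); fn_trial[80:160] -= 0.4
ff_trial = 3.0 + 0.02 * rng.normal(size=240); ff_trial[80:160] -= 0.1
print(constriction_amplitude(fn_trial), constriction_amplitude(ff_trial))
```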

13 pages, 8871 KB  
Article
HAPPY: Hip Arthroscopy Portal Placement Using Augmented Reality
by Tianyu Song, Michael Sommersperger, The Anh Baran, Matthias Seibold and Nassir Navab
J. Imaging 2022, 8(11), 302; https://doi.org/10.3390/jimaging8110302 - 6 Nov 2022
Cited by 10 | Viewed by 2880
Abstract
Correct positioning of the endoscope is crucial for successful hip arthroscopy. Only with adequate alignment can the anatomical target area be visualized and the procedure be successfully performed. Conventionally, surgeons rely on anatomical landmarks such as bone structure, and on intraoperative X-ray imaging, to correctly place the surgical trocar and insert the endoscope to gain access to the surgical site. One factor complicating the placement is deformable soft tissue, as it can obscure important anatomical landmarks. In addition, the commonly used endoscopes with an angled camera complicate hand–eye coordination and, thus, navigation to the target area. Adjusting for an incorrectly positioned endoscope prolongs surgery time, requires a further incision and increases the radiation exposure as well as the risk of infection. In this work, we propose an augmented reality system to support endoscope placement during arthroscopy. Our method comprises the augmentation of a tracked endoscope with a virtual augmented frustum to indicate the reachable working volume. This is further combined with an in situ visualization of the patient anatomy to improve perception of the target area. For this purpose, we highlight the anatomy that is visible in the endoscopic camera frustum and use an automatic colorization method to improve spatial perception. Our system was implemented and visualized on a head-mounted display. The results of our user study indicate the benefit of the proposed system compared to baseline positioning without additional support, such as increased alignment speed, reduced positioning error, and reduced mental effort. The proposed approach might aid in the positioning of an angled endoscope, and may result in better access to the surgical area, reduced surgery time, less patient trauma, and less X-ray exposure during surgery. Full article
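The in situ highlighting of anatomy visible in the endoscope frustum can be sketched as a field-of-view test against the tracked endoscope pose. The conical frustum approximation, FOV value, range, and coordinates below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def in_camera_frustum(point, cam_pos, cam_forward, fov_deg=70.0, max_range=0.15):
    """True if a 3D anatomy point lies inside a simple conical approximation of the
    endoscope camera frustum (within half the FOV of the view axis and within range)."""
    to_point = point - cam_pos
    dist = np.linalg.norm(to_point)
    if dist == 0.0 or dist > max_range:
        return False
    cos_angle = np.dot(to_point / dist, cam_forward / np.linalg.norm(cam_forward))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= fov_deg / 2.0

# Illustrative check of one anatomy landmark against a tracked endoscope pose (meters).
print(in_camera_frustum(np.array([0.02, 0.0, 0.10]),
                        cam_pos=np.zeros(3),
                        cam_forward=np.array([0.0, 0.0, 1.0])))
```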
