Article

Potential of a Laser Pointer Contact Lens to Improve the Reliability of Video-Based Eye-Trackers in Indoor and Outdoor Conditions

by François-Maël Robert 1, Marion Otheguy 2, Vincent Nourrit 2,* and Jean-Louis de Bougrenet de la Tocnaye 2
1 IMT Atlantique–Optics Dpt, France
2 IMT Atlantique–Optics Dpt, LaTIM, France
* Author to whom correspondence should be addressed.
J. Eye Mov. Res. 2024, 17(1), 1-16; https://doi.org/10.16910/jemr.17.1.5
Submission received: 21 October 2023 / Published: 16 May 2024

Abstract:
Many video-based eye trackers rely on detecting and tracking ocular features, a task that can be negatively affected by a number of individual or environmental factors. In this context, the aim of this study was to practically evaluate how a scleral contact lens with two integrated near-infrared lasers (denoted CLP) could improve tracking robustness in difficult lighting conditions, particularly outdoor ones. We assessed the ability of the CLP, worn by a model eye, to detect the lasers and deduce a gaze position with an accuracy better than 1° under four lighting conditions (1 lux, 250 lux, 50 klux and alternating 1/250 lux). These results were compared with the ability of a commercial eye tracker (Pupil Core) to detect the pupil on human eyes with a confidence score of 0.9 or higher. The CLP provided good results (tracking accuracy and detection rates) in all conditions. In comparison, the Pupil Core performed well in all indoor conditions (99% detection) but failed in outdoor conditions (9.85% detection). In conclusion, the CLP presents strong potential to improve the reliability of video-based eye trackers in outdoor conditions by providing easily trackable features.

1. Introduction

Since the second half of the twentieth century, technological progress has allowed eye tracking to spread beyond neuroscience, vision science and psychology (Jacob & Karn, 2003; Duchowski, 2007; Majaranta et al., 2009; König et al., 2016; Płużyczka, 2018) into fields such as human factors (Fitts et al., 2005), advertising (Hervet et al., 2011), science education (Jarodzka et al., 2017) and human-computer interaction (Levine, 1984; Møllenbach et al., 2013). The same advances have also made it possible to propose new hardware and software architectures that gain in precision, speed and reliability. Among the various existing techniques (scleral search coil (Stahl, 2014), electro-oculography, etc.), video oculography is nowadays by far the dominant one, owing to its non-invasive nature, relative ease of implementation and constant progress in sensors, computing power and image processing. In this approach, one or several cameras capture images of the eyes, often illuminated by infrared light sources (Nourrit et al., 2021). Gaze direction is then traditionally estimated using model-based methods, i.e., methods relying on a geometric eye model and analysis of the pupil and/or corneal reflections.
Such methods rely on accurately detecting particular features and can consequently be negatively affected by a number of environmental or individual factors such as ambient light, unequal illumination of the eyes, multiple corneal reflexes, drooping eyelids, pupil centre shift, head position, eyeglasses, etc. (Morimoto & Mimica, 2005; Fuhl et al., 2017). Appearance-based methods, i.e., methods that do not rely on any explicit segmentation stage, have been developed to tackle some of these issues; commercial implementations exist (e.g., Pupil Labs' Neon eye tracker; Kassner et al., 2014) and benefit from advances in computing power and deep learning. However, these methods can also be negatively impacted by various factors (e.g., lighting changes, scale variability; Hansen & Ji, 2010) and require a large amount of training data.
In this context, we have proposed to take advantage of recent results in embedded electronics to develop an electronic contact lens (Khaldi et al., 2020) in order to simplify the tracking. Various contact lens configurations have been presented: first using photodiodes, i.e., sensors whose response varies directly with eye movements (Massin et al., 2020), then using on-board lasers that can interact with position-sensitive devices (Robert et al., 2022), materialize the gaze direction (Robert et al., 2023) or simply provide easily trackable features. These papers focused on the technical design and, in each case, the lens was simply placed on an artificial eye to validate the detection process. The aim of this study was to report functional tests with a calibrated system, and to practically assess how the use of two embedded near-infrared lasers could improve the overall sensor's robustness to different lighting conditions when compared to a conventional wearable eye tracker.
Pupil detection, which lies at the core of the model-based approach, is actually not a trivial task (Santini et al., 2018). Various conditions such as physiological irregularities, reflections or complex illumination can prevent correct pupil detection and hence accurate gaze estimation. Replacing the tracking of the pupil (and possibly other features) with the tracking of two light sources should strongly simplify the problem and make it possible to track the eye even in difficult conditions, such as outdoor illumination (Evans et al., 2012; Fuhl et al., 2016), which can differ significantly from classic lab conditions (large illuminance dynamics, illuminance variations associated with the subject's mobility, substantial infrared radiation, etc.).
The paper is organized as follows. In the Methods section, we briefly present the device, the method used to calibrate it by taking advantage of the double-laser configuration, the test conditions, which represent different conditions of use that may be demanding for many eye trackers, and the protocol used. Results obtained for the different test conditions with the contact lens eye tracker and a reference one (Pupil Labs' Core eye tracker; Kassner et al., 2014) are then presented, followed by a discussion of the achieved performance and future work.

2. Methods

2.1. Contact Lens Pointer

In this article, we refer to the eye tracking system using the contact lens as the contact lens pointer (CLP). The system has already been described elsewhere, but we summarize it here for the sake of clarity.
The system is made up of three parts: an instrumented scleral contact lens (Figure 1), an eyewear to power the contact lens and to take a video of the eye region (Figure 2), and a computer to process the data acquired by the camera.
The scleral contact lens is made of polymethyl methacrylate (PMMA) and has a diameter of 16.5 mm. It encapsulates a circuit comprising two vertical-cavity surface-emitting lasers (VCSELs) emitting at a wavelength of 850 nm, i.e., in the infrared spectrum, and a secondary antenna to power them by inductive coupling. A more precise description of the contact lens can be found in (Massin et al., 2020). Like any scleral lens, the lens is particularly stable on the eye; this stability and the absence of health risks have been described in our previous papers (Khaldi et al., 2020; Massin et al., 2020; Robert et al., 2022, 2023).
The eyewear is a spectacle frame embedding a primary antenna to power the VCSELs, together with a generator and amplifier circuit to produce a signal at the right frequency and amplitude. This circuit is powered and controlled by a Raspberry Pi RP2040-compatible microcontroller board combined with a direct digital synthesis (DDS) chip. The DDS allows the generator to be adjusted to optimize the inductive transmission to the contact lens. In a conventional configuration, two cameras are mounted on the eyewear in front of the eyes to detect the VCSELs, while a third camera (world camera) records the scene seen by the user. We used the Pupil Core architecture, from the company Pupil Labs, for its flexibility, open software and performance. We added a removable IR bandpass filter to the eye cameras and covered the Pupil Core's IR LEDs so that only the light from the contact lens' VCSELs is seen in the acquired images (400×400 pixels at a frame rate of 120 Hz). The eyewear is 3D printed, which allowed us to adjust it to position the cameras adequately.
The data from the eye cameras and world camera are transmitted to a computer through a USB connection. Because of the IR filter, the images from the eye cameras essentially consist of a dark frame with only two spots corresponding to the two VCSELs. Tracking the eye therefore comes down to tracking these two spots, which is simple enough to be done in real time with a Python script running on a Raspberry Pi (Figure 3). The calibration step (described in the next section) associates the centroids of these spots with a precise gaze direction.
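To illustrate how simple this tracking step is, the sketch below shows one way the two spot centroids could be extracted from an IR-filtered eye frame using thresholding and connected-component labelling. It is a minimal sketch, not the authors' actual script: the OpenCV-based approach, the threshold value and the left/right ordering convention are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_vcsel_spots(eye_frame, threshold=200):
    """Return the centroids of the two brightest blobs in an IR-filtered eye image.

    Illustrative sketch: with the IR bandpass filter in place, the frame is
    essentially dark except for the two VCSEL spots, so a fixed threshold
    followed by connected-component labelling is enough.
    """
    gray = cv2.cvtColor(eye_frame, cv2.COLOR_BGR2GRAY) if eye_frame.ndim == 3 else eye_frame
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # Skip label 0 (background) and keep the two largest blobs.
    blobs = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA], reverse=True)[:2]
    if len(blobs) < 2:
        return None  # one or both spots not detected in this frame
    (x1, y1), (x2, y2) = centroids[blobs[0]], centroids[blobs[1]]
    # Order the spots left/right by horizontal position so the mapping stays consistent.
    return ((x1, y1), (x2, y2)) if x1 < x2 else ((x2, y2), (x1, y1))
```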

2.2. Calibration Algorithm

In conventional video-based eye tracking, a calibration is required to establish a reliable correspondence between the real gaze direction and the features measured in the eye camera images (Hoormann et al., 2007). In this section, we describe for the first time the algorithm used for the CLP.
Usually, the participant is asked to fixate targets that appear successively at different locations in a given plane (e.g., the surface of the display monitor). The data collected from the eye cameras during this period are mapped to these specific locations using a standard configuration of the eye model (Świrski et al., 2012). A successful calibration means that the collected gaze samples and the detected calibration markers allowed a 3D eye model to be computed and that the resulting mapping is correct.
The calibration procedure we use for the contact lens eye tracker establishes a relation between a given number of fixation points, represented by their coordinates in the world camera image (Xworld, Yworld), and the associated coordinates of the centroids of the two VCSELs' spots, (xleft, yleft) and (xright, yright), seen by the eye camera. Mathematically, this relation can be written:
(Xworld, Yworld) = f(xleft, yleft, xright, yright)
Based on the literature (Cerrolaza et al., 2008; Blignaut & Wium, 2013; Blignaut, 2014; Kar & Corcoran, 2017) and our own experience, we chose for f a second-order polynomial with cross terms, to account for the geometric dependence between the VCSELs.
[The explicit second-order polynomial expressions for Xworld and Yworld in (xleft, yleft, xright, yright), including cross terms, are given as equation images in the original article.]
Practical details about the calibration procedure are given hereafter, after the description of the test bench.
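For concreteness, the sketch below shows how such a mapping could be fitted by least squares from calibration data. The exact monomial set used by the authors (given as equation images above) and any regularization are not reproduced here, so the feature construction and function names are assumptions; in practice, each calibration point contributes many frames, making the fit well over-determined.

```python
import numpy as np

def poly_features(xl, yl, xr, yr):
    """Second-order feature vector with cross terms for the four spot coordinates
    (illustrative monomial set, not the authors' exact one)."""
    v = np.array([xl, yl, xr, yr])
    feats = [1.0, *v]                                               # constant + linear terms
    feats += [v[i] * v[j] for i in range(4) for j in range(i, 4)]   # quadratic + cross terms
    return np.array(feats)

def fit_mapping(spot_coords, world_coords):
    """Least-squares fit of (Xworld, Yworld) = f(xleft, yleft, xright, yright).

    spot_coords:  iterable of (xl, yl, xr, yr) tuples, one per calibration frame.
    world_coords: matching iterable of (Xworld, Yworld) target coordinates.
    """
    A = np.stack([poly_features(*c) for c in spot_coords])          # one row per frame
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(world_coords), rcond=None)
    return coeffs                                                    # shape: (n_features, 2)

def predict_gaze(coeffs, xl, yl, xr, yr):
    """Map a new pair of spot centroids to a gaze position in the world image."""
    return poly_features(xl, yl, xr, yr) @ coeffs                    # (Xworld, Yworld)
```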

2.3. Test Bench

In this section, we explain how and why the CLP was tested on an artificial eye whereas data for the Pupil Core were obtained on human eyes.
As previously stated, the aim of this study was to practically assess how the use of two embedded lasers could improve the overall sensor's robustness to different lighting conditions. For this reason, tests were carried out using only one eye.
As the CLP is still undergoing CE certification, tests with the CLP were carried out on a model eye for ethical reasons. Ideally, all tests with the Pupil Core would also have been done on the same artificial eye, to ensure that measurements were made under exactly the same conditions and to avoid any uncertainty about the gaze direction. (The term "artificial eye" may suggest greater complexity than "model eye", but we use the two terms interchangeably here to refer to the same element.)
We tested several eye models (Figure 3a): a holed table tennis ball, a 3D printed scleral lens, 3D printed coloured models, and finally obtained the best results with a standard ocular prosthesis mounted on a 3D printed eyeball (Figure 3). Unfortunately, even though there was no noticeable difference in appearance between the eye prosthesis and a human eye in the Pupil Core images (Figure 3), tracking performance with the Pupil Core was somewhat poorer with the prosthesis than with a human eye. For this reason, tests with the Pupil Core were eventually carried out on humans (using a chin rest) rather than on an artificial eye, to ensure optimum performance. In addition, to account for the fact that the quality of the measurements with the Pupil Core may depend on the user, measurements were carried out on four subjects and, for each test condition, only the best results were retained. (As this study involved human subjects, the approval of IMT Atlantique's ethics committee was obtained.)
The opto-mechanical set-up is presented in Figure 4, next to the calibration chart as seen by the world camera. The model eye was placed behind the eyewear (including the driving antenna and Pupil Core cameras) and mounted on two rotation stages that allowed its rotation in the horizontal and vertical directions (precision of 0.5° horizontally and 0.02° vertically) (Figure 4). When using the CLP, since the appearance of the iris had no importance (only the laser spots are detected), a second artificial eye was used, which consisted of a 3D printed eyeball with a red laser inside it (Figure 5). This additional laser allowed the gaze position to be visualized directly.
The calibration chart consists of five points placed at the centre and the corners of a square with a 20° side, centred on (0°, 0°). The calibration sequence starts with the central point and then moves clockwise from the bottom right corner to the top right one, as written out explicitly below. When using the Pupil Core, the subject sat and gazed successively at the five calibration points, their head immobilized by a chin rest. When calibrating the CLP, the contact lens was placed on the model eye with the embedded red laser (Figure 5) to visualize the gaze position, and the eye was rotated to gaze successively at the five calibration points.
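For reference, the calibration sequence described above can be written down explicitly; the sign convention below (positive x to the right, positive y upward) is an assumption for illustration.

```python
# Calibration sequence in degrees of visual angle: centre first, then the four
# corners of a 20°-side square, clockwise from the bottom right corner.
# (Sign convention is illustrative: +x to the right, +y upward.)
CALIBRATION_POINTS_DEG = [
    (0.0, 0.0),      # centre
    (10.0, -10.0),   # bottom right
    (-10.0, -10.0),  # bottom left
    (-10.0, 10.0),   # top left
    (10.0, 10.0),    # top right
]
```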

2.4. Protocol

The method used to assess the accuracy and robustness of each eye tracker when confronted with demanding lighting conditions is described below and summarized in Table 1. In order to assess the potential advantage of the contact lens eye tracker in terms of robustness, four testing conditions were defined.
In the first condition (C-In), the lighting corresponds to indoor lighting (250 lux), as in a windowless test room lit by neon lights. The second condition (C-Dark) corresponds to the case where the user is in the dark (1 lux), as could be found in some interactive environments or in some cognitive studies on the effects of darkness.
The third condition (C-Alt) aims at simulating changing lighting conditions, for instance due to the user moving into a darker environment. The ambient light is alternately turned on and off (1/250 lux) every 3 s. Variations in illumination could indeed affect the quality of the gaze direction estimate in two ways: through intensity variations on the camera sensor (the auto-exposure algorithm may not react rapidly or accurately enough) and, in the case of a real eye, through the fact that when the pupil size changes, the line of gaze does not necessarily intersect its centre (Wildenmann & Schaeffel, 2013).
The fourth lighting condition (C-Out) corresponds to outdoor conditions on a sunny afternoon (50 klux). The eye tracker was set up so that the Pupil Core cameras did not face the sun directly (Figure 4) and the user was not blinded by the sun. As in the previous conditions, the photometer was held next to the front face of the eyewear. We did not measure the amount of infrared light reaching the eye tracker because lighting measurements are traditionally given in photometric units, but it is important to note that visible light represents little more than 40% of solar radiation, so the amount of ambient infrared was much higher than in the indoor conditions.
For all conditions but C-Alt, the eye had to follow the same trajectory as the one used during calibration. The duration of this task varied between approximately 30 s when using the Pupil Core and one minute when using the CLP (as the movements of the model eye were not fully automated). For condition C-Alt, the eye stayed still and data were recorded for 20 s. When using the Pupil Core, the auto-exposure function of the eye camera was activated to ensure optimum results (based on our experience). When using the CLP, such a function was not implemented in our program and the exposure time was fixed prior to calibration.
Accuracy. Once our device is calibrated, assessing its accuracy is straightforward. We can use the artificial eye embedding a laser pointer to point at a particular location and compare it directly to the calculated gaze position. The accuracy is then defined as the average angular offset between target and gaze position. For the Pupil Core, we simply considered the stated theoretical accuracy of 0.6°.
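A minimal sketch of how such an accuracy figure could be computed, assuming gaze and target positions are both expressed as angular coordinates and that the offset is taken as the Euclidean distance in the angular plane (the authors' exact computation is not detailed):

```python
import numpy as np

def angular_accuracy(gaze_deg, target_deg):
    """Mean ± std angular offset (degrees) between estimated gaze positions and
    the corresponding targets, both given as (azimuth, elevation) pairs in degrees.
    The offset is approximated by the Euclidean distance in the angular plane,
    which is reasonable over the small (±10°) range used here."""
    gaze = np.asarray(gaze_deg, dtype=float)
    target = np.asarray(target_deg, dtype=float)
    offsets = np.linalg.norm(gaze - target, axis=1)
    return offsets.mean(), offsets.std()
```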
Robustness. The ambient illumination in which the eye-tracking experiment takes place (which may change rapidly in outdoor conditions or due to mobility) can degrade the performance of video-based eye-tracking systems, even when the eye is illuminated by dedicated light sources (IR LEDs for the Pupil Core). In the case of the Pupil Core, we first filtered out blinks (using the Pupil Labs software and checking the video). We then calculated three percentages: frames in which the pupil is not detected (i.e., null pupil diameter), frames in which the pupil is detected with low confidence, and frames in which the pupil is detected with high confidence. This confidence level is the one returned by the Pupil Core software as an assessment of the quality of the pupil detection for a given eye image, where 0 means that the pupil could not be detected and 1 that the pupil was detected with very high certainty (Ehinger et al., 2019). The threshold for high confidence was set, somewhat arbitrarily, to 0.9, which is less conservative than the 0.98 suggested by Pupil Labs (Dierkes et al., 2018). For the CLP, we calculated three percentages: frames in which the laser spots are not detected, frames in which they are detected but point to a coordinate more than 1° away from the true gaze position, and frames in which they are detected and point to a coordinate less than 1° away from the true gaze position.
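A minimal sketch of how the three Pupil Core percentages could be computed from the per-frame confidence values, assuming blinks have already been removed; the function name and the equating of "not detected" with zero confidence (standing in for the paper's null-pupil-diameter criterion) are illustrative assumptions.

```python
import numpy as np

def pupil_detection_rates(confidence, threshold=0.9):
    """Percentages of frames in each detection category, computed from the
    per-frame confidence returned by the Pupil Core software (blinks removed).
    'Not detected' is approximated here as zero confidence."""
    c = np.asarray(confidence, dtype=float)
    not_detected = 100.0 * np.mean(c == 0.0)
    poor = 100.0 * np.mean((c > 0.0) & (c < threshold))
    good = 100.0 * np.mean(c >= threshold)
    return not_detected, poor, good
```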

3. Results

Eye images for the CLP and Pupil Core are illustrated in Figure 6. As expected, images for the CLP are essentially simple binary images with two bright spots, one for each VCSEL. As a result, the laser spots could easily be detected in all tested conditions (Table 2). The CLP demonstrated an accuracy equal to or better than 0.27 ± 0.27° (Table 2; Figure 7). This result depends on the calibration model used but also on the resolution of the eye camera. In our experiment, the calibration model was not optimized and the VCSEL pair only used a small part of the CMOS sensor, so a better accuracy could easily be obtained by adjusting the eye camera optics. A small percentage of gaze points were calculated with an error larger than one degree. Such points usually corresponded to cases where the image of one VCSEL spot was saturated, leading to an error in the calculation of the spot centroid.
In comparison, the Pupil Core performed well in all indoor conditions with a "Good detection" score above 99% (cf. Table 3) but failed in outdoor conditions. The poor pupil detection performance in sunlight illustrates the difficulty of tracking a passive feature over a wide illumination dynamic. Good performance in condition C-In (250 lux) was expected, since it corresponds to a fairly classic use case for commercial eye trackers. Similar performance in the dark is not surprising either, since the Pupil Core uses additional IR sources to illuminate the eye. As explained in the previous section, the tracking accuracy of the Pupil Core was not measured. We assume it to be equal to the theoretical value when the pupil detection rate is high, and worse otherwise (e.g., in outdoor conditions).
Results for condition C-Alt were in agreement with results for conditions C-In and C-Dark, i.e., the change in lighting conditions did not significantly impact the CLP or Pupil Core performance. For the CLP, this is not surprising since, by design, the camera only receives light from the VCSELs. For the Pupil Core, this means that the auto-exposure algorithm could react rapidly and precisely enough. Temporal analysis of the data did not show any performance decrease during lighting transitions. The fact that, in Table 3, the rate of poorly detected pupils is lower in condition C-Alt than in condition C-Dark is probably due to the variability of results associated with measurements on humans.

4. Discussion

The first objective of this study was to report functional tests of the CLP after calibration. In terms of accuracy, our results do not show any benefit for the CLP when compared to the reported accuracy of the Pupil Core (Kassner et al., 2014), although it was tested on a model eye. This is first due to the fact that both systems relied on the same camera resolution. With a dedicated sensor adapted to the VCSELs' trajectory, higher performance should be expected. Also, in this study, we used as mapping function a generic polynomial in x and y with first-order interaction terms. This function was chosen because it provided good performance, but it may not be the most appropriate one. Other functions could be investigated based on previous work (Cerrolaza et al., 2008).
One parameter limiting the accuracy of the CLP is the high directionality of the VCSELs. As a consequence, the camera can be saturated when a VCSEL beam hits the sensor head-on, leading to centroid estimation errors. This directionality also limits the useful range to ±10°. This can be sufficient for some applications, particularly outdoor ones, i.e., where the CLP presents significant advantages over conventional eye trackers, but we also performed tests in which the VCSELs were replaced by an LED. The tracking range was then ±20°, which is large enough for any wearable application and larger than that of some high-end desktop eye trackers.
Another aim of the study was to assess how the use of two embedded near-infrared lasers could improve the overall sensor’s robustness to different lighting conditions when compared to a conventional wearable eye tracker.
As presented in the manuscript, the ideal method to compare the two systems (CLP and Pupil Core) would have been to use a model eye, to ensure that measurements were made under exactly the same conditions and to avoid any uncertainty about the gaze direction. Unfortunately, on the one hand, the various tests confirming the safety of the device had not all been passed yet, so the CLP had to be tested on an artificial eye. On the other hand, we did not succeed in developing an artificial eye that would allow us to obtain results with the Pupil Core as good as those obtained with humans. Positive results with artificial eyes used to test other eye trackers have been reported (Wang et al., 2017), but often without much detail on how the eye was made, and not for the Pupil Core, so our work may be of interest to the community.
We thus used an artificial eye for the CLP and decided to use the Pupil Core on human eyes (retaining only the best results), because these were the most favourable conditions for the Pupil Core. This is a limitation of the study, but since we only focused on the impact of the different lighting conditions on the reliability of detection of the tracking features (i.e., the VCSELs for the CLP and the pupil for the Pupil Core), we do not think that it invalidates our results, particularly given how clear-cut they are (100% detection for the CLP in outdoor conditions vs. 9.85% for the Pupil Core). The main difference between the artificial eye and the human eye is blinking, and this was accounted for in our analysis. However, a follow-up study with the CLP worn by humans is warranted to fully confirm these results.
According to our results, the CLP thus presents strong potential to improve the reliability of video-based eye trackers in outdoor conditions by providing easily trackable features. Whereas classic eye trackers try to find a 2D ellipse that fits the pupil, the CLP approach simply relies on finding two bright spots at a known distance from one another in a binary image. Simplifying the tracking also means easier calibration, simpler and faster processing (for increased mobility and reduced latency), and better data continuity.
In addition to the simplicity of the stimulus on the eye camera sensor, the use of active tracking features helps avoid potential issues related to pupil size changes such as mydriasis (Choe et al., 2016) and provides an unmissable target that stands out from any parasitic signal. This is why the CLP solution gives much better detection results in outdoor conditions than the Pupil Core. This increased reliability could be useful for various outdoor applications such as HMI for smart cockpits, out-of-home advertising or sports studies.
In this study, outdoor conditions remained unchanged, but in real mobility situations the wearer could move through strongly varying and complex illumination, for instance facing the sun with part of their face in shadow. In such cases, the system presented here would outperform conventional image-based eye trackers. One solution for conventional image-based eye trackers could be to use a very narrow spectral filter to eliminate all light except that sent by their IR light sources. However, this would naturally require increasing the energy sent in the selected spectral band, raising potential safety issues; in addition, sunlight is strong across the IR part of the spectrum. The CLP could also be used in combination with existing devices: the circular antenna in the lens could then be tracked, and the VCSELs turned on only in demanding situations to increase reliability.

Ethics and Conflict of Interest

The authors declare that the contents of the article are in agreement with the ethics described in http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html and that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This research was supported in part by a grant from IMT Carnot research program and a grant from Agence Nationale de la Recherche (ANR-21-CE19-0053). We wish to thank the company Pupil Labs for their answers to our technical questions, Bernard Abiven for manufacturing the 3D model eyes and LCS laboratoires and Vincent Ruesh for supplying the eye prostheses.

References

  1. Blignaut, P. 2014. Mapping the Pupil-Glint Vector to Gaze Coordinates in a Simple Video-Based Eye Tracker. Journal of Eye Movement Research 7, 1: 1. [Google Scholar] [CrossRef]
  2. Blignaut, P., and D. Wium. 2013. The effect of mapping function on the accuracy of a video-based eye tracker. Proceedings of the 2013 Conference on Eye Tracking South Africa; pp. 39–46. [Google Scholar] [CrossRef]
  3. Cerrolaza, J. J., A. Villanueva, and R. Cabeza. 2008. Taxonomic study of polynomial regressions applied to the calibration of video-oculographic systems. Proceedings of the 2008 Symposium on Eye Tracking Research & Applications; pp. 259–266. [Google Scholar] [CrossRef]
  4. Choe, K. W., R. Blake, and S.-H. Lee. 2016. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation. Vision Research 118: 48–59. [Google Scholar] [CrossRef]
  5. Dierkes, K., M. Kassner, and A. Bulling. 2018. A novel approach to single camera, glint-free 3D eye model fitting including corneal refraction. Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications; pp. 1–9. [Google Scholar] [CrossRef]
  6. Duchowski, A. T. 2007. Eye Tracking Methodology. Springer. [Google Scholar] [CrossRef]
  7. Ehinger, B. V., K. Groß, I. Ibs, and P. König. 2019. A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000. PeerJ 7: e7086. [Google Scholar] [CrossRef]
  8. Evans, K. M., R. A. Jacobs, J. A. Tarduno, and J. B. Pelz. 2012. Collecting and Analyzing Eye-Tracking Data in Outdoor Environments. Journal of Eye Movement Research 5, 2: 2. [Google Scholar] [CrossRef]
  9. Fitts, P. M., R. E. Jones, and J. L. Milton. 2005. Eye movements of aircraft pilots during instrument-landing approaches. Ergonomics: Major Writings. [Google Scholar]
  10. Fuhl, W., D. Hospach, T. C. Kübler, W. Rosenstiel, O. Bringmann, and E. Kasneci. 2017. Ways of improving the precision of eye tracking data: Controlling the influence of dirt and dust on pupil detection. Journal of Eye Movement Research 10, 3: 3. [Google Scholar] [CrossRef]
  11. Fuhl, W., M. Tonsen, A. Bulling, and E. Kasneci. 2016. Pupil detection for head-mounted eye tracking in the wild: An evaluation of the state of the art. Machine Vision and Applications 27, 8: 1275–1288. [Google Scholar] [CrossRef]
  12. Hansen, D. W., and Q. Ji. 2010. In the Eye of the Beholder: A Survey of Models for Eyes and Gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence 32, 3: 478–500. [Google Scholar] [CrossRef]
  13. Hervet, G., K. Guérard, S. Tremblay, and M. S. Chtourou. 2011. Is banner blindness genuine? Eye tracking internet text advertising. Applied Cognitive Psychology 25, 5: 708–716. [Google Scholar] [CrossRef]
  14. Hoormann, J., S. Jainta, and W. Jaschinski. 2007. The effect of calibration errors on the accuracy of the eye movement recordings. Journal of Eye Movement Research 1, 2: 2. [Google Scholar] [CrossRef]
  15. Jacob, R. J. K., and K. S. Karn. 2003. Commentary on Section 4—Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises. Edited by J. Hyönä, R. Radach and H. Deubel. In The Mind’s Eye. North-Holland: pp. 573–605. [Google Scholar] [CrossRef]
  16. Jarodzka, H., K. Holmqvist, and H. Gruber. 2017. Eye tracking in Educational Science: Theoretical frameworks and research agendas. Journal of Eye Movement Research 10, 1: 1. [Google Scholar] [CrossRef]
  17. Kar, A., and P. Corcoran. 2017. A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms. IEEE Access 5: 16495–16519. [Google Scholar] [CrossRef]
  18. Kassner, M., W. Patera, and A. Bulling. 2014. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication; pp. 1151–1160. [Google Scholar] [CrossRef]
  19. Khaldi, A., E. Daniel, L. Massin, C. Kärnfelt, F. Ferranti, C. Lahuec, F. Seguin, V. Nourrit, and J.-L. de Bougrenet de la Tocnaye. 2020. A laser emitting contact lens for eye tracking. Scientific Reports 10, 1: 14804. [Google Scholar] [CrossRef]
  20. König, P., N. Wilming, T. C. Kietzmann, J. P. Ossandón, S. Onat, B. V. Ehinger, R. R. Gameiro, and K. Kaspar. 2016. Eye movements as a window to cognitive processes. Journal of Eye Movement Research 9, 5: 5. [Google Scholar] [CrossRef]
  21. Levine, J. L. 1984. Performance of an eyetracker for office use. Computers in Biology and Medicine 14, 1: 77–89. [Google Scholar] [CrossRef]
  22. Majaranta, P., R. Bates, and M. Donegan. 2009. Eye-tracking. Edited by C. Stephanidis. In The Universal Access Handbook. CRC Press: pp. 587–603. [Google Scholar]
  23. Massin, L., V. Nourrit, C. Lahuec, F. Seguin, L. Adam, E. Daniel, and J.-L. de Bougrenet de la Tocnaye. 2020. Development of a new scleral contact lens with encapsulated photodetectors for eye tracking. Optics Express 28, 19: 28635–28647. [Google Scholar] [CrossRef]
  24. Møllenbach, E., J. P. Hansen, and M. Lillholm. 2013. Eye Movements in Gaze Interaction. Journal of Eye Movement Research 6, 2: 2. [Google Scholar] [CrossRef]
  25. Morimoto, C. H., and M. R. M. Mimica. 2005. Eye gaze tracking techniques for interactive applications. Computer Vision and Image Understanding 98, 1: 4–24. [Google Scholar] [CrossRef]
  26. Nourrit, V., R. Poilane, and J.-L. de Bougrenet de la Tocnaye. 2021. Custom on-axis headmounted eye tracker for 3D active glasses. Proc. Electronic Imaging 33, 2. [Google Scholar] [CrossRef]
  27. Płużyczka, M. 2018. The First Hundred Years: A History of Eye Tracking as a Research Method. Applied Linguistics Papers 25, 4: 101–116. [Google Scholar] [CrossRef]
  28. Robert, F.-M., B. Abiven, M. Sinou, K. Heggarty, L. Adam, V. Nourrit, and J.-L. de Bougrenet de la Tocnaye. 2023. Contact lens embedded holographic pointer. Scientific Reports 13, 1: 6919. [Google Scholar] [CrossRef]
  29. Robert, F.-M., V. Nourrit, L. Adam, and J.-L. de Bougrenet de la Tocnaye. 2022. VCSEL pair used as optical pointers in a contact lens for gaze tracking and visual target designation. PLoS ONE 17, 7: e0267393. [Google Scholar] [CrossRef]
  30. Santini, T., W. Fuhl, and E. Kasneci. 2018. PuRe: Robust pupil detection for real-time pervasive eye tracking. Computer Vision and Image Understanding 170: 40–50. [Google Scholar] [CrossRef]
  31. Stahl, J. S. 2014. Eye Movement Recording. Edited by M. J. Aminoff and R. B. Daroff. In Encyclopedia of the Neurological Sciences, 2nd ed. Academic Press: pp. 245–247. [Google Scholar] [CrossRef]
  32. Świrski, L., A. Bulling, and N. Dodgson. 2012. Robust real-time pupil tracking in highly off-axis images. Proceedings of the Symposium on Eye Tracking Research and Applications; pp. 173–176. [Google Scholar] [CrossRef]
  33. Wang, D., F. B. Mulvey, J. B. Pelz, and K. Holmqvist. 2017. A study of artificial eyes for the measurement of precision in eye-trackers. Behavior Research Methods 49, 3: 947–959. [Google Scholar] [CrossRef] [PubMed]
  34. Wildenmann, U., and F. Schaeffel. 2013. Variations of pupil centration and their effects on video eye tracking. Ophthalmic and Physiological Optics 33, 6: 634–641. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Contact lens with the secondary antenna (1) and the two VCSELs (2). The primary antenna is set in an eyewear worn by the subject. The SCL is here worn during a wearing test by the scleral lens specialist in charge of the design and manufacture of these lenses.
Figure 2. Left image: the eyewear, i.e., a 3D printed glasses frame with the primary antenna inside and supporting the Pupil Core cameras. Right image: the detection system with: (a) the eyewear with an additional head-strap for improved stability, (b) the battery of the Raspberry PI and (c) the Raspberry Pi RP2040. This is the fully wearable configuration. When connected to a standard PC, the Raspberry and battery are not needed.
Figure 3. Left image: the different model eyes tested. Middle and right images: view of the eye prosthesis and a human eye when using the Pupil Core. The bright spots on the iris correspond to the Pupil Core IR sources used to illuminate the eye.
Figure 4. Left: view of the calibration chart by the world camera in indoor conditions. Right: the model eye on its rotating platform and driving eyewear during outdoor tests.
Figure 5. (a) Model eye with an embedded laser to visualize the gaze position. (b) Cross section of the model eye showing the laser.
Figure 6. Image recorded by the Pupil Labs eye camera when using the CLP (left image) or in the classic Pupil Core configuration (right image).
Figure 7. Tracking accuracy of the CLP in outdoor conditions using the calibration chart. The red line represents the trajectory (denoted “TC”) the eye has to follow. The blue line represents the gaze position calculated for the CLP.
Table 1. Summary of the testing protocol used in this study.
Device: CLP on an artificial eye
Calibration: 5-point calibration in which the eye follows the trajectory "TC". Indoor calibration is performed under 250 lux and outdoor calibration under 50 klux.
Testing conditions (both devices):
  • C-In: Indoor, 250 lux; eye follows "TC"
  • C-Dark: Indoor, 1 lux; eye follows "TC"
  • C-Alt: Indoor, alternating 1/250 lux every 3 s; eye fixed
  • C-Out: Outdoor, 50 klux; eye follows "TC"
Metrics for the CLP:
  • Accuracy
  • Detection rate: not detected / detected with accuracy > 1° / detected with accuracy ≤ 1°
Device: Pupil Core on a human eye
Metrics for the Pupil Core:
  • Detection rate: not detected / poor detection (Q < 0.9) / good detection (Q ≥ 0.9)
Table 2. Results with the CLP. Tracking accuracy and laser spot detection performance were measured on a model eye.
Testing Conditions | Tracking Accuracy (°) | No Detection | Poor Detection (Accuracy > 1°) | Good Detection (Accuracy ≤ 1°)
C-In (250 lux) | 0.21 ± 0.21 | 0% | 2.9% | 97.1%
C-Dark (1 lux) | 0.15 ± 0.24 | 0% | 2.7% | 97.3%
C-Alt (1/250 lux) | NA | 0% | 0% | 100%
C-Out (50 klux) | 0.27 ± 0.27 | 0% | 0.1% | 99.9%
Table 3. Results with the Pupil Core. Pupil detection performances were measured on a human eye.
Testing Conditions | No Pupil Detected (Pupil Diameter Null) | Poor Detection (Q < 0.9) | Good Detection (Q ≥ 0.9)
C-In (250 lux) | 0.05% | 0.00% | 99.95%
C-Dark (1 lux) | 0.04% | 0.25% | 99.71%
C-Alt (1/250 lux) | 0.03% | 0.08% | 99.89%
C-Out (50 klux) | 4.25% | 85.9% | 9.85%