Exploring the Potential of Event Camera Imaging for Advancing Remote Pupil-Tracking Techniques
Abstract
1. Introduction
2. Proposed Method
2.1. Event Camera Imaging
2.2. Pupil-Tracking Algorithm
3. Experimental Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Yiu, Y.-H.; Aboulatta, M.; Raiser, T.; Ophey, L.; Flanagin, V.L.; zu Eulenburg, P.; Ahmadi, S.-A. DeepVOG: Open-source pupil segmentation and gaze estimation in neuroscience using deep learning. J. Neurosci. Methods 2019, 324, 108307. [Google Scholar] [CrossRef] [PubMed]
- Skaramagkas, V.; Giannakakis, G.; Ktistakis, E.; Manousos, D.; Karatzanis, I.; Tachos, N.S.; Tripoliti, E.; Marias, K.; Fotiadis, D.I.; Tsiknakis, M. Review of eye tracking metrics involved in emotional and cognitive processes. IEEE Rev. Biomed. Eng. 2021, 16, 260–277. [Google Scholar] [CrossRef] [PubMed]
- Asish, S.M.; Kulshreshth, A.K.; Borst, C.W. User identification utilizing minimal eye-gaze features in virtual reality applications. Virtual Worlds 2022, 1, 42–61. [Google Scholar] [CrossRef]
- Kang, D.; Ma, L. Real-Time Eye Tracking for Bare and Sunglasses-Wearing Faces for Augmented Reality 3D Head-Up Displays. IEEE Access 2021, 9, 125508–125522. [Google Scholar] [CrossRef]
- Yousefi, M.S.; Reisi, F.; Daliri, M.R.; Shalchyan, V. Stress Detection Using Eye Tracking Data: An Evaluation of Full Parameters. IEEE Access 2022, 10, 118941–118952. [Google Scholar] [CrossRef]
- Ou, W.-L.; Kuo, T.-L.; Chang, C.-C.; Fan, C.-P. Deep-learning-based pupil center detection and tracking technology for visible-light wearable gaze tracking devices. Appl. Sci. 2021, 11, 851. [Google Scholar] [CrossRef]
- Bozomitu, R.G.; Păsărică, A.; Tărniceriu, D.; Rotariu, C. Development of an Eye Tracking-Based Human-Computer Interface for Real-Time Applications. Sensors 2019, 19, 3630. [Google Scholar] [CrossRef] [PubMed]
- Santini, T.; Fuhl, W.; Kasneci, E. PuRe: Robust pupil detection for real-time pervasive eye tracking. Comput. Vis. Image Underst. 2018, 170, 40–50. [Google Scholar]
- Majaranta, P.; Bulling, A. Eye tracking and eye-based human–computer interaction. In Advances in Physiological Computing; Springer: London, UK, 2014; pp. 39–65. [Google Scholar]
- Lim, J.Z.; Mountstephens, J.; Teo, J. Emotion recognition using eye-tracking: Taxonomy, review and current challenges. Sensors 2020, 20, 2384. [Google Scholar]
- Kang, D.; Heo, J. Content-Aware Eye Tracking for Autostereoscopic 3D Display. Sensors 2020, 20, 4787. [Google Scholar] [CrossRef] [PubMed]
- Brousseau, B.; Rose, J.; Eizenman, M. Hybrid eye-tracking on a smartphone with CNN feature extraction and an infrared 3D model. Sensors 2020, 20, 543. [Google Scholar]
- Gallego, G.; Delbruck, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K.; et al. Event-based vision: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 154–180. [Google Scholar] [CrossRef] [PubMed]
- Xiong, X.; De la Torre, F. Supervised descent method and its applications to face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 532–539. [Google Scholar]
- DAVIS346. Available online: https://inivation.com/wp-content/uploads/2019/08/DAVIS346.pdf (accessed on 1 August 2023).
- Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Viola, P.; Jones, M.J. Robust real-time face detection. Int. J. Comput. Vis. 2004, 57, 137–154. [Google Scholar]
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR, Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
- Zhang, L.; Chu, R.; Xiang, S.; Liao, S.; Li, S.Z. Face detection based on multi-block LBP representation. In Proceedings of the International Conference on Biometrics, Seoul, Republic of Korea, 27–29 August 2007; pp. 11–18. [Google Scholar]
- Cao, X.; Wei, Y.; Wen, F.; Sun, J. Face alignment by explicit shape regression. Int. J. Comput. Vis. 2014, 107, 177–190. [Google Scholar] [CrossRef]
- Wu, W.; Yang, S. Leveraging intra and inter-dataset variations for robust face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 150–159. [Google Scholar]
- Wu, W.; Qian, C.; Yang, S.; Wang, Q.; Cai, Y.; Zhou, Q. Look at boundary: A boundary-aware face alignment algorithm. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2129–2138. [Google Scholar]
- Feng, Z.H.; Kittler, J.; Awais, M.; Huber, P.; Wu, X.J. Wing loss for robust facial landmark localisation with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2235–2245. [Google Scholar]
- Wang, X.; Bo, L.; Fuxin, L. Adaptive wing loss for robust face alignment via heatmap regression. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6971–6981. [Google Scholar]
- Qian, S.; Sun, K.; Wu, W.; Qian, C.; Jia, J. Aggregation via separation: Boosting facial landmark detector with semi-supervised style translation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 10153–10163. [Google Scholar]
- Kujur, A.; Raza, Z.; Khan, A.A.; Wechtaisong, C. Data Complexity Based Evaluation of the Model Dependence of Brain MRI Images for Classification of Brain Tumor and Alzheimer’s Disease. IEEE Access 2022, 10, 112117–112133. [Google Scholar] [CrossRef]
- Khan, A.A.; Madendran, R.K.; Thirunavukkarasu, U.; Faheem, M. D2PAM: Epileptic Seizures Prediction Using Adversarial Deep Dual Patch Attention Mechanism. CAAI Trans. Intell. Technol. 2023. https://doi.org/10.1049/cit2.12261 (accessed on 24 July 2023). [Google Scholar]
- Belda, J.; Vergara, L.; Safont, G.; Salazar, A.; Parcheta, Z. A New Surrogating Algorithm by the Complex Graph Fourier Transform (CGFT). Entropy 2019, 21, 759. [Google Scholar] [CrossRef] [PubMed]
Parameter | Details
---|---
Tracked Shape Points | 11 eye–nose points (3 left eye, 3 right eye, 4 nose)
Distance between Camera and User | 50–100 cm
Computing System | 2.0 GHz CPU
Event Camera Model | DAVIS346 (iniVation)
Event Camera Resolution | 346 × 260 (resized to 640 × 480)
Event Camera Latency | 20 μs
Event Frame Speed (event aggregation time) | 30 fps
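The last row implies an aggregation step: events falling inside each fixed 1/30 s window are accumulated into a conventional frame image before detection and alignment run. The sketch below is a minimal illustration of that step, not the authors' implementation; the event tuple layout `(x, y, polarity, timestamp)` and the ±50 intensity increment are assumptions, while the 346 × 260 sensor size, the 640 × 480 working resolution, and the 30 fps window come from the table above.

```python
import numpy as np
import cv2  # assumes OpenCV is available for resizing

SENSOR_W, SENSOR_H = 346, 260   # DAVIS346 resolution (from the table)
WINDOW_US = 1_000_000 // 30     # 30 fps aggregation window (from the table)

def events_to_frame(events):
    """Accumulate one 1/30 s window of events into a grayscale frame.

    `events` is an iterable of (x, y, polarity, t_us) tuples; ON events
    brighten a pixel and OFF events darken it around a mid-gray background.
    """
    frame = np.full((SENSOR_H, SENSOR_W), 128, dtype=np.int16)
    for x, y, pol, _t in events:
        frame[y, x] += 50 if pol else -50  # assumed intensity step
    frame = np.clip(frame, 0, 255).astype(np.uint8)
    # Resize to the 640 x 480 working resolution listed in the table.
    return cv2.resize(frame, (640, 480), interpolation=cv2.INTER_LINEAR)
```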
Method | Light Condition | Detection Accuracy | Tracking Accuracy (Pupil Precision < 10 mm) | Speed
---|---|---|---|---
Content-aware [11] (CIS RGB camera) | Normal light (100–400 lux) | 99.4% | 99.4% | 200 fps (CPU)
Proposed method | Normal light (100–400 lux) | 98.1% | 80.9% | 200 fps (CPU)
Training DB (Detector) | DB Type | Number of Images
---|---|---
Real event camera images | Face images with subtle and large movement (images with verifiable eye shape) | 3608
Real event camera images | Non-face background images | 3949

Training DB (Aligner) | DB Type | Number of Images
---|---|---
Real event camera images | Face images with large movement (images with verifiable eye shape) | 2273

Test DB (Pupil Tracking) | DB Type | Number of Images
---|---|---
Real event camera images | Face images with large movement (images with verifiable eye shape) | 474
Test DB | DB Type | Detection Accuracy | Tracking Accuracy (Pupil Precision < 10 mm)
---|---|---|---
Real event camera images, indoor office (100–400 lux) | Large movement (images with verifiable eye shape) | 98.1% | 80.9%
Real event camera images, indoor office (100–400 lux) | Various movement (minimal to large movement) | 69.1% | 52.7%
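For clarity, "Tracking Accuracy (Pupil Precision < 10 mm)" can be read as the fraction of test frames whose predicted pupil center falls within 10 mm of the ground truth. A minimal sketch of that metric, assuming predicted and ground-truth pupil centers are already expressed in a common millimeter coordinate frame (the function name is hypothetical):

```python
import numpy as np

def tracking_accuracy(pred_mm, gt_mm, threshold_mm=10.0):
    """Fraction of frames whose pupil-center error is below `threshold_mm`."""
    pred = np.asarray(pred_mm, dtype=float)  # (N, 2) predicted centers, mm
    gt = np.asarray(gt_mm, dtype=float)      # (N, 2) ground-truth centers, mm
    errors = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(errors < threshold_mm))
```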