Saliency-Aware Subtle Augmentation Improves Human Visual Search Performance in VR
Abstract
1. Introduction
2. Materials and Methods
2.1. Participants
2.2. Experimental Setup
2.2.1. Hardware Specifications
2.2.2. Software Specifications
2.3. Virtual Environment and Stimuli
2.3.1. Real-World Scenes
2.3.2. Saliency Maps
2.3.3. Blurred Images
2.3.4. Search Target
2.3.5. Search Target Locations
2.4. Experimental Procedure
2.4.1. General Procedure
2.4.2. Training Phase
2.4.3. Main Experiment
2.5. Analysis
2.5.1. Behavioral Performance Metrics
2.5.2. Eye Movement Metrics
2.5.3. Eye Movement Raw Data Pre-Processing
2.5.4. Fixation Detection Algorithm—I-VT
3. Results
3.1. Behavioral Data
3.2. Eye Movement Data
4. Discussion
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
VR | Virtual reality
AR | Augmented reality
AOI | Area of interest
References
Variable | Units | Meaning
---|---|---
Timestamp | ms (integer) | System time at the moment of sample recording.
Eye data validity bit mask | Integer, 0–31 | Indicates the validity of the sample; a value of 31 denotes the highest validity. Used to filter out raw data in which the eye tracker lost the pupil, including blinks.
Gaze normalized direction vector | Three-coordinate vector (x, y, z), each coordinate in [−1, 1] | Gaze direction in the headset's right-handed coordinate system. To convert it to the left-handed coordinate system (Figure 1B), the x coordinate was multiplied by −1.
Head rotation | Rotation quaternion (x, y, z, w) | Quaternion describing the rotation of the headset in Unity world coordinates. The headset position was always fixed at the origin (0, 0, 0).
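To make the coordinate handling described in the table above concrete, the snippet below is a minimal sketch in Python with NumPy, not the authors' analysis code. It assumes a hypothetical per-sample record holding the four variables listed and shows how such a sample could be discarded when flagged invalid, flipped from the headset's right-handed gaze convention into Unity's left-handed one, and rotated by the head quaternion into a world-space gaze direction. All function and variable names are illustrative.

```python
import numpy as np

VALID_MASK = 31  # highest validity value reported by the eye tracker


def rotate_by_quaternion(v, q):
    """Rotate a 3D vector v by a unit quaternion q given as (x, y, z, w)."""
    qv = np.asarray(q[:3], dtype=float)
    w = float(q[3])
    v = np.asarray(v, dtype=float)
    # Identity for unit quaternions: q * v * q^-1 = v + 2*w*(qv x v) + 2*(qv x (qv x v))
    return v + 2.0 * np.cross(qv, np.cross(qv, v) + w * v)


def gaze_to_world(gaze_dir_rh, head_quat, validity):
    """Convert a right-handed HMD gaze sample to a unit world-space direction.

    Returns None for samples flagged as invalid (pupil lost, blinks).
    """
    if validity != VALID_MASK:
        return None
    # Flip x: right-handed headset frame -> left-handed Unity frame
    gaze_lh = np.array([-gaze_dir_rh[0], gaze_dir_rh[1], gaze_dir_rh[2]], dtype=float)
    world = rotate_by_quaternion(gaze_lh, head_quat)
    return world / np.linalg.norm(world)


# Hypothetical sample: identity head rotation, near-forward gaze, fully valid
print(gaze_to_world([0.1, 0.0, 0.995], (0.0, 0.0, 0.0, 1.0), 31))  # x is flipped relative to the raw sample
```

World-space directions of this kind could then be differenced between consecutive timestamps to obtain angular velocities for the velocity-threshold (I-VT) fixation detection described in Section 2.5.4.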
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).