Freehand Gestural Selection with Haptic Feedback in Wearable Optical See-Through Augmented Reality
Abstract
1. Introduction
2. Related Work
2.1. Gestural Interaction with Hand-Held Devices
2.2. Freehand Gestural Interaction
2.3. Haptic Feedback
3. Study 1: Freehand Gestural Selection in Wearable OST AR
3.1. Study Design
3.1.1. Freehand Gestural Selection Design
3.1.2. Experimental Settings
3.1.3. Independent Variables
- Target placement (2 levels): centre and right (15 cm away from the centre);
- Target size (2 levels): large (48 mm side length) and small (32 mm side length);
- Target distance from the user (3 levels): short (30 cm), middle (40 cm), and long (50 cm);
- Target position (9 levels): right-up (1), up (2), left-up (3), left (4), centre (5), right (6), left-down (7), down (8), and right-down (9).
3.1.4. Experimental Design
3.1.5. Participants and Procedure
3.2. Results
3.2.1. Selection Time
3.2.2. Error Rate
3.2.3. Hand Movement Distance
3.2.4. Head Movement Distance
3.2.5. Head Rotation
3.2.6. User Target Distance
3.2.7. User Preference
4. Study 2: Haptic Feedback for Freehand Gestural Selection in Wearable OST AR
4.1. Haptic Feedback Design for Freehand Gestural Target Selection
4.2. Study Design
4.2.1. Experimental Settings
4.2.2. Independent Variables
- Haptic feedback type (4 levels): no haptic feedback, hand haptic feedback, body haptic feedback, hand and body haptic feedback.
- Target position (9 levels): right-up (1), up (2), left-up (3), left (4), centre (5), right (6), left-down (7), down (8), and right-down (9).
4.2.3. Experimental Design
4.2.4. Participants and Procedure
4.3. Results
4.3.1. Selection Time, Error Rate, and User Behaviour Data
4.3.2. User Preference and Feedback
5. Discussion
5.1. Effect of Target Distance
5.2. Effect of Target Size
5.3. Effect of Target Position
5.4. Effect of Target Placement
5.5. Effect of Haptic Feedback
6. Conclusions
- Our study shows that a large target size reduces selection time and error rate, so when application designers present only a small number of options or targets, a large target size should be considered. For other selection tasks (e.g., entering a string of text or numbers with a keyboard), a large target size might introduce longer hand and head movement distances and more fatigue. Thus, a small target size could be used for sequential selection with a group of targets tiled together.
- A target distance of 40 cm from the user could be very suitable. Although the evaluation results indicated that target distance had little impact on selection performance, our participants showed a strong subjective preference for the 40 cm distance, which balances a comfortable reaching distance with sufficient visual space to observe the targets. A longer target distance requires a far arm-reach motion and thus may cause fatigue. If an HMD's viewing angle is similar to that in our study, a short target distance could make targets difficult to observe and require more body adjustment before selection.
- For a three-by-three grid layout, a target located in the centre is the easiest to select because users naturally rest their visual attention and hand position at the centre, resulting in less selection time and hand movement distance. The four corner positions, especially the top-left corner, can be difficult for right-handed users to select. Designers should therefore place frequently used targets in the centre and less-used options in the corner positions.
- Target placement has little effect on users' performance. Based on the user preference data from our experiment, we recommend placing the targets in the centre where possible.
- For haptic feedback design, the user preferences and comments imply that wearable haptic feedback on the hand and body might improve the user experience. In particular, haptic feedback on the body may achieve the best user experience, as multiple haptic feedback units on the chest confirm both the selection and the target position. Haptic feedback on the upper body may thus increase users' confidence and engagement in freehand interaction. With the emergence of wearable haptic feedback devices, designers might therefore explore adding haptic feedback to OST AR interfaces to enhance the user experience.
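The guidelines above can be sketched as a small layout and haptic-mapping routine. The 40 cm target distance and the nine-position numbering come from the study; the 6 cm centre-to-centre spacing and the display-mirroring actuator mapping are illustrative assumptions, not values or methods from the paper.

```python
# Sketch of the study's 3x3 target layout and a hypothetical mapping from a
# selected target to a 3x3 chest-mounted vibration grid. The 40 cm distance
# and position numbering follow the study; spacing and mapping are assumed.

TARGET_DISTANCE_M = 0.40   # preferred target distance from the user (study)
CELL_PITCH_M = 0.06        # assumed centre-to-centre target spacing

# Position indices follow the paper's numbering:
# right-up (1), up (2), left-up (3), left (4), centre (5),
# right (6), left-down (7), down (8), right-down (9).
POSITION_OFFSETS = {
    1: (+1, +1), 2: (0, +1), 3: (-1, +1),
    4: (-1, 0),  5: (0, 0),  6: (+1, 0),
    7: (-1, -1), 8: (0, -1), 9: (+1, -1),
}

def target_position(index):
    """(x, y, z) of a target in metres, relative to the user's head."""
    col, row = POSITION_OFFSETS[index]
    return (col * CELL_PITCH_M, row * CELL_PITCH_M, TARGET_DISTANCE_M)

def chest_actuator_for(index):
    """Map a selected target to a (row, col) unit of a 3x3 chest vibration
    grid, row 0 = top, col 0 = left, mirroring the on-screen layout."""
    col, row = POSITION_OFFSETS[index]
    return (1 - row, col + 1)
```

For example, selecting the left-up target (3) would pulse the top-left chest actuator, giving spatially congruent confirmation of both the selection and the target position.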
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| AR | Augmented reality |
| VR | Virtual reality |
| HMD | Head-mounted display |
| OST | Optical see-through |
| VST | Video see-through |
| MRTK | Mixed Reality Toolkit |
| 3D | Three-dimensional |
| GUI | Graphical user interface |
References
- Billinghurst, M.; Clark, A.; Lee, G. A Survey of Augmented Reality. Found. Trends Hum.-Comput. Interact. 2015, 8, 73–272. [Google Scholar] [CrossRef]
- Grubert, J.; Itoh, Y.; Moser, K.; Swan, J.E. A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays. IEEE Trans. Vis. Comput. Graph. 2018, 24, 2649–2662. [Google Scholar] [CrossRef] [Green Version]
- MacKenzie, I.S. Fitts’ Law as a Research and Design Tool in Human-Computer Interaction. Hum.–Comput. Interact. 1992, 7, 91–139. [Google Scholar] [CrossRef]
- Soukoreff, R.W.; MacKenzie, I.S. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. Int. J. Hum.-Comput. Stud. 2004, 61, 751–789. [Google Scholar] [CrossRef]
- Po, B.A.; Fisher, B.D.; Booth, K.S. Mouse and Touchscreen Selection in the Upper and Lower Visual Fields. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’04, Vienna, Austria, 24–29 April 2004; Association for Computing Machinery: New York, NY, USA, 2004; pp. 359–366. [Google Scholar] [CrossRef] [Green Version]
- Grossman, T.; Balakrishnan, R. Pointing at trivariate targets in 3D environments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’04, Vienna, Austria, 24–29 April 2004; Association for Computing Machinery: New York, NY, USA, 2004; pp. 447–454. [Google Scholar] [CrossRef] [Green Version]
- Ren, G.; O’Neill, E. 3D selection with freehand gesture. Comput. Graph. 2013, 37, 101–120. [Google Scholar] [CrossRef]
- Wolf, D.; Dudley, J.J.; Kristensson, P.O. Performance Envelopes of in-Air Direct and Smartwatch Indirect Control for Head-Mounted Augmented Reality. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; pp. 347–354. [Google Scholar] [CrossRef] [Green Version]
- Uzor, S.; Kristensson, P.O. An Exploration of Freehand Crossing Selection in Head-Mounted Augmented Reality. ACM Trans. Comput.-Hum. Interact. 2021, 28, 33:1–33:27. [Google Scholar] [CrossRef]
- Kytö, M.; Ens, B.; Piumsomboon, T.; Lee, G.A.; Billinghurst, M. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–14. [Google Scholar] [CrossRef]
- Wang, D.; Guo, Y.; Liu, S.; Zhang, Y.; Xu, W.; Xiao, J. Haptic display for virtual reality: Progress and challenges. Virtual Real. Intell. Hardw. 2019, 1, 136–162. [Google Scholar] [CrossRef] [Green Version]
- Bermejo, C.; Hui, P. A Survey on Haptic Technologies for Mobile Augmented Reality. ACM Comput. Surv. 2021, 54, 184:1–184:35. [Google Scholar] [CrossRef]
- Israr, A.; Zhao, S.; Schwalje, K.; Klatzky, R.; Lehman, J. Feel Effects: Enriching Storytelling with Haptic Feedback. ACM Trans. Appl. Percept. 2014, 11, 11:1–11:17. [Google Scholar] [CrossRef]
- Schneider, O.S.; Israr, A.; MacLean, K.E. Tactile Animation by Direct Manipulation of Grid Displays. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, UIST ’15, Charlotte, NC, USA, 11–15 November 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 21–30. [Google Scholar] [CrossRef]
- Israr, A.; Kim, S.C.; Stec, J.; Poupyrev, I. Surround haptics: Tactile feedback for immersive gaming experiences. In Proceedings of the CHI ’12 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’12, Austin, TX, USA, 5–10 May 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 1087–1090. [Google Scholar] [CrossRef]
- Gaffary, Y.; Lécuyer, A. The Use of Haptic and Tactile Information in the Car to Improve Driving Safety: A Review of Current Technologies. Front. ICT 2018, 5, 5. [Google Scholar] [CrossRef] [Green Version]
- Vo, D.B.; Brewster, S.A. Touching the invisible: Localizing ultrasonic haptic cues. In Proceedings of the 2015 IEEE World Haptics Conference (WHC), Evanston, IL, USA, 22–26 June 2015; pp. 368–373. [Google Scholar] [CrossRef] [Green Version]
- Long, B.; Seah, S.A.; Carter, T.; Subramanian, S. Rendering volumetric haptic shapes in mid-air using ultrasound. ACM Trans. Graph. 2014, 33, 181:1–181:10. [Google Scholar] [CrossRef] [Green Version]
- Lee, B.; Isenberg, P.; Riche, N.; Carpendale, S. Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions. IEEE Trans. Vis. Comput. Graph. 2012, 18, 2689–2698. [Google Scholar] [CrossRef] [Green Version]
- Kurtenbach, G. The Design and Evaluation of Marking Menus. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 1993. [Google Scholar]
- Rubine, D. Combining gestures and direct manipulation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’92, Monterey, CA, USA, 3–7 May 1992; ACM: New York, NY, USA, 1992; pp. 659–660. [Google Scholar] [CrossRef]
- Kettebekov, S.; Sharma, R. Toward Natural Gesture/Speech Control of a Large Display. In Engineering for Human-Computer Interaction; Little, M., Nigay, L., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2254, pp. 221–234. [Google Scholar] [CrossRef] [Green Version]
- Farhadi-Niaki, F.; Etemad, S.; Arya, A. Design and Usability Analysis of Gesture-Based Control for Common Desktop Tasks. In Human-Computer Interaction. Interaction Modalities and Techniques; Kurosu, M., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8007, pp. 215–224. [Google Scholar] [CrossRef]
- Reddy, V.; Raghuveer, V.; Krishna, J.; Chandralohit, K. Finger gesture based tablet interface. In Proceedings of the 2012 IEEE International Conference on Computational Intelligence and Computing Research, Coimbatore, India, 18–20 December 2012; pp. 1–4. [Google Scholar] [CrossRef] [Green Version]
- LaViola, J. Bringing VR and Spatial 3D Interaction to the Masses through Video Games. Comput. Graph. Appl. 2008, 28, 10–15. [Google Scholar] [CrossRef]
- Sherman, W.R.; Craig, A.B. Understanding Virtual Reality: Interface, Application, and Design; Elsevier: Cambridge, MA, USA, 2002. [Google Scholar]
- Bowman, D.A.; Coquillart, S.; Froehlich, B.; Hirose, M.; Kitamura, Y.; Kiyokawa, K.; Stuerzlinger, W. 3d user interfaces: New directions and perspectives. Comput. Graph. Appl. 2008, 28, 20–36. [Google Scholar] [CrossRef]
- Bowman, D.A.; Hodges, L.F. An evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments. In Proceedings of the 1997 Symposium on Interactive 3D Graphics, I3D ’97, Providence, RI, USA, 27–30 April 1997; ACM: New York, NY, USA, 1997; p. 35. [Google Scholar] [CrossRef]
- Cohen, P.; McGee, D.; Oviatt, S.; Wu, L.; Clow, J.; King, R.; Julier, S.; Rosenblum, L. Multimodal interaction for 2D and 3D environments [virtual reality]. Comput. Graph. Appl. 1999, 19, 10–13. [Google Scholar] [CrossRef]
- Duval, T.; Lecuyer, A.; Thomas, S. SkeweR: A 3D Interaction Technique for 2-User Collaborative Manipulation of Objects in Virtual Environments. In Proceedings of the 3D User Interfaces, 3DUI 2006, Alexandria, VA, USA, 25–26 March 2006; pp. 69–72. [Google Scholar] [CrossRef] [Green Version]
- Cao, X.; Balakrishnan, R. VisionWand: Interaction techniques for large displays using a passive wand tracked in 3D. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, UIST ’03, Vancouver, BC, Canada, 2–5 November 2003; ACM: New York, NY, USA, 2003; pp. 173–182. [Google Scholar] [CrossRef]
- Gallo, L.; Ciampi, M. Wii Remote-enhanced Hand-Computer interaction for 3D medical image analysis. In Proceedings of the Current Trends in Information Technology (CTIT), 2009 International Conference, Dubai, United Arab Emirates, 15–16 December 2009; pp. 1–6. [Google Scholar] [CrossRef]
- Song, J.; Kim, W.; Son, H.; Yoo, J.; Kim, J.; Kim, R.; Oh, J. Design and Implementation of a Remote Control for IPTV with Sensors. In Future Generation Information Technology; Kim, T.h., Adeli, H., Slezak, D., Sandnes, F., Song, X., Chung, K.i., Arnett, K., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 7105, pp. 223–228. [Google Scholar] [CrossRef]
- Jones, E.; Alexander, J.; Andreou, A.; Irani, P.; Subramanian, S. GesText: Accelerometer-based gestural text-entry systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; ACM: New York, NY, USA, 2010; pp. 2173–2182. [Google Scholar] [CrossRef]
- Shoemaker, G.; Findlater, L.; Dawson, J.Q.; Booth, K.S. Mid-air text input techniques for very large wall displays. In Proceedings of the Graphics Interface 2009, GI ’09, British Columbia, Canada, 25–27 May 2009; Canadian Information Processing Society: Toronto, ON, Canada, 2009; pp. 231–238. [Google Scholar]
- Lee, J. Hacking the Nintendo Wii Remote. Pervasive Comput. 2008, 7, 39–45. [Google Scholar] [CrossRef]
- Wang, J.; Zhai, S.; Canny, J. SHRIMP: Solving collision and out of vocabulary problems in mobile predictive input with motion gesture. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; ACM: New York, NY, USA, 2010; pp. 15–24. [Google Scholar] [CrossRef]
- Ruiz, J.; Li, Y. DoubleFlip: A motion gesture delimiter for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; ACM: New York, NY, USA, 2011; pp. 2717–2720. [Google Scholar] [CrossRef]
- Vlasic, D.; Baran, I.; Matusik, W.; Popović, J. Articulated mesh animation from multi-view silhouettes. ACM Trans. Graph. 2008, 27, 97:1–97:9. [Google Scholar] [CrossRef] [Green Version]
- Remondino, F.; Roditakis, A. Human motion reconstruction and animation from video sequences. In Proceedings of the 17th International Conference on Computer Animation and Social Agents (CASA2004), Geneva, Switzerland, 7–9 July 2004; pp. 347–354. [Google Scholar]
- Herda, L.; Fua, P.; Plänkers, R.; Boulic, R.; Thalmann, D. Using skeleton-based tracking to increase the reliability of optical motion capture. Hum. Mov. Sci. 2001, 20, 313–341. [Google Scholar] [CrossRef] [Green Version]
- Margolis, T.; DeFanti, T.A.; Dawe, G.; Prudhomme, A.; Schulze, J.P.; Cutchin, S. Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback. Proc. SPIE Int. Soc. Opt. Eng. 2011, 7864, 786417. [Google Scholar] [CrossRef] [Green Version]
- Bideau, B.; Kulpa, R.; Vignais, N.; Brault, S.; Multon, F.; Craig, C. Using Virtual Reality to Analyze Sports Performance. Comput. Graph. Appl. 2010, 30, 14–21. [Google Scholar] [CrossRef]
- Murphy-Chutorian, E.; Trivedi, M. Head Pose Estimation and Augmented Reality Tracking: An Integrated System and Evaluation for Monitoring Driver Awareness. Intell. Transp. Syst. IEEE Trans. 2010, 11, 300–311. [Google Scholar] [CrossRef]
- Beaudouin-Lafon, M. Lessons learned from the WILD room, a multisurface interactive environment. In Proceedings of the 23rd French Speaking Conference on Human-Computer Interaction, IHM ’11, Antipolis, France, 24–27 October 2011; ACM: New York, NY, USA, 2011; pp. 18:1–18:8. [Google Scholar] [CrossRef] [Green Version]
- Andersen, D.; Villano, P.; Popescu, V. AR HMD Guidance for Controlled Hand-Held 3D Acquisition. IEEE Trans. Vis. Comput. Graph. 2019, 25, 3073–3082. [Google Scholar] [CrossRef] [PubMed]
- Vogel, D.; Balakrishnan, R. Interactive public ambient displays: Transitioning from implicit to explicit, public to personal, interaction with multiple users. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, UIST ’04, Santa Fe, NM, USA, 24–27 October 2004; ACM: New York, NY, USA, 2004; pp. 137–146. [Google Scholar] [CrossRef]
- Segen, J.; Kumar, S. Gesture VR: Vision-based 3D hand interface for spatial interaction. In Proceedings of the Sixth ACM International Conference on Multimedia, MULTIMEDIA ’98, Bristol, UK, 12–16 September 1998; ACM: New York, NY, USA, 1998; pp. 455–464. [Google Scholar] [CrossRef]
- Segen, J.; Kumar, S. Video acquired gesture interfaces for the handicapped. In Proceedings of the Sixth ACM International Conference on Multimedia: Face/Gesture Recognition and Their Applications, MULTIMEDIA ’98, Bristol, UK, 12–16 September 1998; ACM: New York, NY, USA, 1998; pp. 45–48. [Google Scholar] [CrossRef]
- Baldauf, M.; Zambanini, S.; Fröhlich, P.; Reichl, P. Markerless visual fingertip detection for natural mobile device interaction. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI ’11, Stockholm, Sweden, 30 August–2 September 2011; ACM: New York, NY, USA, 2011; pp. 539–544. [Google Scholar] [CrossRef]
- Song, P.; Yu, H.; Winkler, S. Vision-based 3D finger interactions for mixed reality games with physics simulation. In Proceedings of the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, VRCAI ’08, Hachioji, Japan, 8–9 December 2008; ACM: New York, NY, USA, 2008; pp. 7:1–7:6. [Google Scholar] [CrossRef]
- Song, P.; Goh, W.B.; Hutama, W.; Fu, C.W.; Liu, X. A handle bar metaphor for virtual object manipulation with mid-air interaction. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, CHI ’12, Austin, TX, USA, 5–10 May 2012; ACM: New York, NY, USA, 2012; pp. 1297–1306. [Google Scholar] [CrossRef]
- Ren, G.; Li, C.; O’Neill, E.; Willis, P. 3D Freehand Gestural Navigation for Interactive Public Displays. Comput. Graph. Appl. 2013, 33, 47–55. [Google Scholar] [CrossRef]
- Benko, H. Beyond flat surface computing: Challenges of depth-aware and curved interfaces. In Proceedings of the 17th ACM International Conference on Multimedia, MM ’09, Beijing, China, 19–24 October 2009; ACM: New York, NY, USA, 2009; pp. 935–944. [Google Scholar] [CrossRef]
- Benko, H.; Jota, R.; Wilson, A. MirageTable: Freehand interaction on a projected augmented reality tabletop. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, CHI ’12, Austin, TX, USA, 5–10 May 2012; ACM: New York, NY, USA, 2012; pp. 199–208. [Google Scholar] [CrossRef]
- Harrison, C.; Benko, H.; Wilson, A.D. OmniTouch: Wearable multitouch interaction everywhere. In Proceedings of the 24th annual ACM Symposium on User Interface Software and Technology, UIST ’11, Santa Barbara, CA, USA, 16–19 October 2011; ACM: New York, NY, USA, 2011; pp. 441–450. [Google Scholar] [CrossRef]
- Ababsa, F.; He, J.; Chardonnet, J.R. Combining HoloLens and Leap-Motion for Free Hand-Based 3D Interaction in MR Environments. In Proceedings of the 7th International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, Lecce, Italy, 7–10 September 2020; De Paolis, L.T., Bourdot, P., Eds.; Lecture Notes in Computer Science. Springer International Publishing: Cham, Switzerland, 2020; pp. 315–327. [Google Scholar] [CrossRef]
- Chaconas, N.; Höllerer, T. An Evaluation of Bimanual Gestures on the Microsoft HoloLens. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18–22 March 2018; pp. 1–8. [Google Scholar] [CrossRef]
- Serrano, R.; Morillo, P.; Casas, S.; Cruz-Neira, C. An empirical evaluation of two natural hand interaction systems in augmented reality. Multimed. Tools Appl. 2022, 81, 31657–31683. [Google Scholar] [CrossRef]
- Chang, Y.S.; Nuernberger, B.; Luan, B.; Höllerer, T.; O’Donovan, J. Gesture-based augmented reality annotation. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 469–470. [Google Scholar] [CrossRef]
- Kao, H.L.C.; Dementyev, A.; Paradiso, J.A.; Schmandt, C. NailO: Fingernails as an Input Surface. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Republic of Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 3015–3018. [Google Scholar] [CrossRef]
- Ashbrook, D.; Baudisch, P.; White, S. Nenya: Subtle and eyes-free mobile input with a magnetically-tracked finger ring. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 2043–2046. [Google Scholar] [CrossRef]
- Ham, J.; Hong, J.; Jang, Y.; Ko, S.H.; Woo, W. Smart Wristband: Touch-and-Motion–Tracking Wearable 3D Input Device for Smart Glasses. In Proceedings of the Distributed, Ambient, and Pervasive Interactions, Heraklion, Crete, Greece, 22–27 June 2014; Streitz, N., Markopoulos, P., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2014; pp. 109–118. [Google Scholar] [CrossRef]
- Rekimoto, J. GestureWrist and GesturePad: Unobtrusive wearable interaction devices. In Proceedings of the Fifth International Symposium on Wearable Computers, Zürich, Switzerland, 8–9 October 2001; pp. 21–27. [Google Scholar] [CrossRef]
- Srikulwong, M.; O’Neill, E. A comparative study of tactile representation techniques for landmarks on a wearable device. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 2029–2038. [Google Scholar] [CrossRef]
- Asif, A.; Boll, S. Where to turn my car? comparison of a tactile display and a conventional car navigation system under high load condition. In Proceedings of the 2nd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’10, Pittsburgh, PA, USA, 11–12 November 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 64–71. [Google Scholar] [CrossRef]
- Prasad, M.; Taele, P.; Goldberg, D.; Hammond, T.A. HaptiMoto: Turn-by-turn haptic route guidance interface for motorcyclists. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’14, Toronto, ON, Canada, 26 April–1 May 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 3597–3606. [Google Scholar] [CrossRef]
- Israr, A.; Poupyrev, I. Tactile brush: Drawing on skin with a tactile grid display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; ACM: Vancouver, BC, Canada, 2011; pp. 2019–2028. [Google Scholar] [CrossRef]
- Israr, A.; Poupyrev, I. Control space of apparent haptic motion. In Proceedings of the 2011 IEEE World Haptics Conference, Istanbul, Turkey, 21–24 June 2011; pp. 457–462. [Google Scholar] [CrossRef]
- Saba, M.P.; Filippo, D.; Pereira, F.R.; de Souza, P.L.P. Hey yaa: A Haptic Warning Wearable to Support Deaf People Communication. In Proceedings of the 17th International Conference on Collaboration and Technology, Paraty, Brazil, 2–7 October 2011; Vivacqua, A.S., Gutwin, C., Borges, M.R.S., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany; pp. 215–223. [Google Scholar] [CrossRef]
- Mujibiya, A. Haptic feedback companion for Body Area Network using body-carried electrostatic charge. In Proceedings of the 2015 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 9–12 January 2015; pp. 571–572. [Google Scholar] [CrossRef]
- Withana, A.; Groeger, D.; Steimle, J. Tacttoo: A Thin and Feel-Through Tattoo for On-Skin Tactile Output. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, UIST ’18, Berlin, Germany, 14–17 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 365–378. [Google Scholar] [CrossRef] [Green Version]
- Zhu, M.; Sun, Z.; Zhang, Z.; Shi, Q.; He, T.; Liu, H.; Chen, T.; Lee, C. Haptic-feedback smart glove as a creative human-machine interface (HMI) for virtual/augmented reality applications. Sci. Adv. 2020, 6, eaaz8693. [Google Scholar] [CrossRef]
- Pfeiffer, M.; Schneegass, S.; Alt, F.; Rohs, M. Let me grab this: A comparison of EMS and vibration for haptic feedback in free-hand interaction. In Proceedings of the 5th Augmented Human International Conference, AH ’14, Kobe, Japan, 7–9 March 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 1–8. [Google Scholar] [CrossRef]
- Pezent, E.; O’Malley, M.K.; Israr, A.; Samad, M.; Robinson, S.; Agarwal, P.; Benko, H.; Colonnese, N. Explorations of Wrist Haptic Feedback for AR/VR Interactions with Tasbi. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA ’20, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–4. [Google Scholar] [CrossRef]
- Ren, G.; Li, W.; O’Neill, E. Towards the design of effective freehand gestural interaction for interactive TV. J. Intell. Fuzzy Syst. 2016, 31, 2659–2674. [Google Scholar] [CrossRef] [Green Version]
- Harrington, K.; Large, D.R.; Burnett, G.; Georgiou, O. Exploring the Use of Mid-Air Ultrasonic Feedback to Enhance Automotive User Interfaces. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’18, Toronto, ON, Canada, 23–25 September 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 11–20. [Google Scholar] [CrossRef]
- Grossman, T.; Wigdor, D.; Balakrishnan, R. Multi-finger gestural interaction with 3d volumetric displays. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, UIST ’04, Santa Fe, NM, USA, 24–27 October 2004; ACM: New York, NY, USA, 2004; pp. 61–70. [Google Scholar] [CrossRef] [Green Version]
- Batmaz, A.U.; Machuca, M.D.B.; Pham, D.M.; Stuerzlinger, W. Do Head-Mounted Display Stereo Deficiencies Affect 3D Pointing Tasks in AR and VR? In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 585–592. [Google Scholar] [CrossRef]
- Barrera Machuca, M.D.; Stuerzlinger, W. The Effect of Stereo Display Deficiencies on Virtual Hand Pointing. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–14. [Google Scholar] [CrossRef]
- Ansari, S.; Nikpay, A.; Varmazyar, S. Design and Development of an Ergonomic Chair for Students in Educational Settings. Health Scope 2018, 7, e60531. [Google Scholar] [CrossRef] [Green Version]
- Erickson, A.; Kim, K.; Bruder, G.; Welch, G.F. Exploring the Limitations of Environment Lighting on Optical See-Through Head-Mounted Displays. In Proceedings of the Symposium on Spatial User Interaction, Virtual Event, 30 October–1 November 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Balakrishnan, R. “Beating” Fitts’ law: Virtual enhancements for pointing facilitation. Int. J.-Hum.-Comput. Stud. 2004, 61, 857–874. [Google Scholar] [CrossRef]
| Target Size | Large (48 mm) | Small (32 mm) |
|---|---|---|
| Mean Selection Time | 0.71 s | 0.79 s |
| Mean Error Rate | 0.5% | 1.2% |
| Mean Hand Movement Distance | 38.4 cm | 31.2 cm |
| Mean Head Movement Distance | 2.7 cm | 2.3 cm |
| Mean User Target Distance | 40.4 cm | 39.0 cm |
| Target Position | Centre | Corner |
|---|---|---|
| Mean Selection Time | 0.71 s | 0.79 s (top-left) |
| Mean Hand Movement Distance | 31.4 cm | 37.0 cm (top-left) |
| Mean Head Movement Distance | 2.2 cm | 2.7 cm (top-left) |
| Mean Head Rotation | 28.74 degrees | 30.21 degrees (bottom-right) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, G.; Ren, G.; Hong, X.; Peng, X.; Li, W.; O’Neill, E. Freehand Gestural Selection with Haptic Feedback in Wearable Optical See-Through Augmented Reality. Information 2022, 13, 566. https://doi.org/10.3390/info13120566