Article

Freehand Gestural Selection with Haptic Feedback in Wearable Optical See-Through Augmented Reality

1 School of Design Arts, Xiamen University of Technology, Xiamen 361024, China
2 Department of Computer Science, University of Bath, Bath BA2 7AY, UK
* Author to whom correspondence should be addressed.
Information 2022, 13(12), 566; https://doi.org/10.3390/info13120566
Submission received: 7 November 2022 / Revised: 29 November 2022 / Accepted: 29 November 2022 / Published: 2 December 2022
(This article belongs to the Special Issue Extended Reality: A New Way of Interacting with the World)

Abstract

Augmented reality (AR) technologies can blend digital and physical space and serve a variety of applications intuitively and effectively. Specifically, wearable AR enabled by optical see-through (OST) head-mounted displays (HMDs) can provide users with a direct view of the physical environment containing digital objects. In addition, users can directly interact with three-dimensional (3D) digital artefacts using freehand gestures captured by OST HMD sensors. However, as an emerging user interaction paradigm, freehand interaction with OST AR still requires further investigation to improve user performance and satisfaction. Thus, we conducted two studies to investigate various freehand selection design aspects in OST AR, including target placement, size, distance, position, and haptic feedback on the hand and body. The user evaluation results indicated that 40 cm might be an appropriate target distance for freehand gestural selection. A large target size might lower the selection time and error rate, while a small target size could minimise selection effort. Targets positioned in the centre are the easiest to select, whereas those in the corners require extra time and effort. Furthermore, we discovered that haptic feedback on the body could lead to high user preference and satisfaction. Based on the research findings, we conclude with design recommendations for effective and comfortable freehand gestural interaction in OST AR.

1. Introduction

Augmented reality (AR) enabled by mobile and wearable computing devices is becoming popular. With the fast development of personal computing devices and mobile networks, AR has enormous potential in various application areas such as education, healthcare, and industry [1]. Compared to virtual reality (VR), which provides an immersive virtual environment by completely occluding the user’s view with head-mounted displays (HMDs), AR overlays digital information directly in the physical world [2], and users are still aware of the real world. Thus, AR could be applied in scenarios requiring interaction with virtual and physical objects and environments.
AR could be implemented on various platforms. Mobile devices, such as smartphones or tablets, could use the camera to capture the real world and then display the digital information on top of the camera video feed. Users then interact with the multi-touch display to control the mobile AR application. Apart from widely available mobile devices, video see-through (VST) HMDs could also overlay digital objects on the video feed captured by the cameras [2]. As there is no touch surface available any more, users have to use hand-held devices or freehand gestures to perform the interaction.
Another emerging method to implement AR is using wearable optical see-through (OST) devices. Users could view the physical world directly through the translucent displays rather than looking at a video feed captured by cameras. At the same time, the displays could also render three-dimensional (3D) digital content, and users could interact with OST AR applications in much the same way as they interact with physical objects.
The directness of OST AR has the potential to improve both user performance and satisfaction. Users could interact with the digital and the physical world simultaneously and perform freehand gestures (e.g., direct hand touch or manipulation). However, such a leap of user interaction design from flat touch surfaces to wearable OST AR devices could significantly affect user adoption. To accelerate the adoption of AR applications, the research and design communities should pay more attention to the challenges of user interaction design for wearable OST AR.
For example, selecting a target is one of the most-frequent user interaction tasks. Target selection has been extensively studied in a variety of interaction contexts, including desktop computers [3,4], multi-touch mobile devices [5], 3D displays with hand-held devices [6], or freehand gestures [7,8,9,10]. In particular, freehand gestural interaction, which could provide a direct and natural user experience, is already expanding outside lab settings and is commonly supported by commercial wearable OST AR products. However, due to numerous under-researched interaction design issues, it is still challenging to complete some user interaction tasks.
Besides the commonly used visual and audio interaction modalities, haptic wearable devices might also enable rapid and straightforward access to information. Wearable haptic devices, such as smartwatches and wristbands, are becoming essential to our digital infrastructure. Employing haptic feedback, they may provide on-the-go information access without occupying the user’s visual attention. Smartwatches and wristbands are convenient, but their interaction design options are restricted due to their diminutive size. In contrast, body-worn devices, such as vests and jackets with tactile units, offer a significantly bigger form factor and broader body coverage. Therefore, they may expand the design space for haptic feedback from the wrist to the whole body.
Haptic feedback is receiving more attention from the research communities [11,12]. In particular, wearable haptic feedback devices could be applied in many scenarios to improve user engagement, such as storytelling [13], animation [14], gaming [15], or driving [16]. Haptic feedback could also offer information awareness for tasks such as driving [16]. Ultrasonic haptic feedback could also facilitate freehand gestural interaction [17,18].
Unlike smartphones with multi-touch displays, wearable OST AR lacks a portable screen or physical surface to provide touch input or haptic feedback. Therefore, it is essential for wearable OST AR applications to provide suitable haptic feedback through other means. However, the investigation of the combination of haptic feedback and freehand interaction in wearable OST AR settings is still limited. Based on the existing research, we analysed the design factors of freehand gestural target selection in the OST AR context and conducted two experimental studies on freehand gestural selection. We present the results, including user behaviour, preferences, and feedback, and conclude with design recommendations and guidelines for freehand gestural target selection in OST AR environments, as well as future avenues of research.

2. Related Works

Numerous devices potentially enable gestural interaction, and interaction techniques that enhance the user experience of gestural and haptic interaction have been investigated for decades. Such efforts provide a strong foundation for the design of wearable OST AR applications with freehand target selection. This section briefly summarises the research on freehand gestural selection and haptic feedback design.

2.1. Gestural Interaction with Hand-Held Devices

The mouse is widely used as a pointing device with a graphical user interface (GUI) for menu and object selection [19]. A mouse can also be used as a gestural input device for many tasks such as menu selection [20] and object manipulation [21]. With the popularity of smartphones and tablets, touch-sensitive devices are now widely used. The touch screen enables interaction techniques similar to those of a mouse or pen and extends the possibilities with multiple fingers via multi-touch support. The touch surface can also be used with different types of devices, such as a large interactive surface [22], a desktop-sized screen [23], tablets and mobile phones [24], as well as wearable devices.
Hand-held devices are a standard means of motion capture and are thus widely used for gestural input in video game applications [25,26,27]. There are several commercially available implementations of six-degree-of-freedom (6-DOF) tracking devices, and they have been commonly used for interacting with large displays and directly manipulating virtual objects [28,29,30,31]. The Wii remote, for example, is a hand-held tracking game controller widely adopted by both consumers and researchers. It has been used for interaction tasks such as image analysis [32], TV control [33], and text input [34,35]. The Wii remote can also be augmented for better performance [36].
With various sensors on mobile devices, interaction on smartphones can also be enhanced by motion tracking. For example, SHRIMP [37] uses camera-based motion sensing to enable the user to express preferences through tilting or movement interactions. To address the problem of delimiting motion gestures on mobile devices, DoubleFlip [38] proposes a distinct motion gesture as a delimiter for motion-based interaction.

2.2. Freehand Gestural Interaction

Early freehand interaction systems needed fiducial markers on the user to enable tracking of the gestures. This type of computer vision tracking has been used in different tasks and applications, such as skeleton animation [39,40,41], virtual reality [42,43], monitoring users’ behaviour [44], interactive environments [45], 3D reconstructions [46], and pervasive displays [47].
Besides tracking with markers, users can also interact with the system more directly without wearing tags. For example, the hand positions and gestures can be tracked in 3D with two regular RGB cameras [48,49]. The fingertip can also be detected by cameras on mobile devices [50] or stereo cameras [51].
Besides regular cameras, it is also possible to track freehand gestures with depth cameras [7], enabling human skeleton tracking in 3D space without holding any device or attaching any markers. However, such tracking techniques with a single remote camera typically have low resolution and tracking accuracy. In practice, the skeleton tracking based on the raw depth data can be even noisier.
Research illustrates the directness and immediacy of such gestural interaction enabled by depth cameras in daily life use cases. For example, motion sensing input has been explored using a Kinect sensor for object manipulation [52]. Two-handed operation is used to address the lack of hand orientation tracking [53]. Virtual objects can also be manipulated using skeleton tracking techniques, including on curved surfaces [54,55] and projected directly onto everyday objects [56].
Computer-vision-enabled freehand gesture interaction is also increasingly used in wearable HMDs for VR and AR applications. Users could interact with digital applications using freehand gestures tracked by cameras on the wearable glasses. Numerous user scenarios and applications could be supported, such as manipulation and interaction with digital virtual objects [57,58,59] and annotation drawing for creating visual and spatial references [60].

2.3. Haptic Feedback

With the development of smartphones and smartwatches, haptic feedback is growing in popularity across various mobile devices. Devices such as finger rings [61,62] and waistbands [63,64] could also be utilised as haptic devices. Furthermore, haptic interaction is utilised in many user scenarios. For example, through a wearable tactile feedback belt, designers might offer visually impaired users navigational information [65]. In addition, haptic vests and belts are utilised in navigation systems for automobiles and motorcycles [66,67]. Experiments indicate that a haptic navigation belt can give car drivers a more precise sense of direction than conventional navigation techniques without raising their cognitive load [66]. Likewise, a tactile vest might be employed to enhance motorcycle riders' awareness of peripheral information and road conditions [67]. Parameters of dynamic tactile feedback, such as speed, position, direction, length, thickness, and intensity, could all be controlled [68,69] and utilised in entertainment such as gaming [15] or storytelling [13,14].
Additionally, tactile feedback can increase people’s capacity to perceive and engage with their home environment. For instance, tactile belts might assist those with hearing loss in perceiving environmental or interpersonal cues through vibrations [70]. In addition to tactile feedback via belts or vests, haptic feedback could also be provided via controlled electrostatic stimulation. The application of electrical haptic feedback offers the benefit of further miniaturising the tactile feedback device while retaining its natural sensitivity [71,72].
Adding haptic feedback to freehand gestural interaction could offer the potential for more realistic and immersive user experiences. Various wearable devices such as gloves [73], armbands [74], or wrist-based devices [75,76] have been applied to provide haptic feedback. Besides wearable haptic devices, an array of ultrasonic transducers could also be used to provide haptic feedback for gestures [77] or spatial cues for digital visualisations [17,18]. The user evaluation indicates that the haptic feedback from ultrasonic transducers could improve the interaction accuracy [17] and reduce visual demand [77].
As one of the most-common and -frequent interaction tasks, target selection design is essential for improving the user experience for freehand interaction in OST AR. However, key interaction design features, such as the optimal target size, location, and depth, have yet to be thoroughly examined. Moreover, limited research has been conducted on haptic feedback for freehand selection in OST AR contexts. In addition, most current research on haptic feedback for freehand selection concentrates on devices for the hand, wrist, or arms, disregarding the potential of other body areas.

3. Study 1: Freehand Gestural Selection in Wearable OST AR

Wearable OST AR devices are now available as consumer electronics products. Hololens 2, for example, has depth sensors supporting 3D spatial awareness and hand tracking. With translucent displays, users can now view and touch the digital artefacts directly like physical objects, as shown in Figure 1. The user interaction enabled by OST AR devices has some exciting characteristics compared to the multi-touch interaction.
Firstly, as users can see their hands directly, traditional cursors or virtual indicators of hands in previous studies [6,7,78] are not necessary any more. The user could directly view their real hands and interact with the digital targets in 3D environments. Such a direct interaction style is very similar to the touch interaction on the multi-touch displays on smartphones or tablets, where no cursor is needed. Secondly, unlike multi-touch user interaction, there are no mobile screens or physical surfaces to support the user’s hand. Thus, haptic feedback from the physical world is absent, and the user’s hand could go through the digital targets during the interaction.
The 3D selection techniques in VR are also commonly applied in wearable OST AR environments. For example, hand-held tracking devices could control virtual hands for target selection [79,80]. Other input modalities, such as head motion, hand gesture, and eye gazing, could also be used to point and select small remote targets [10]. Besides virtual hands and pointing selection techniques, other selection methods, such as goal-crossing selection enabled by freehand tracking of the Microsoft Hololens, have also been evaluated in OST AR environments [9].
One of the most intuitive selection methods in the physical environment is using real hands to touch the object directly. Although freehand tracking is commonly used in commercial OST AR products (e.g., Microsoft Hololens 2), the natural selection with freehand gestures is still under-explored. Few design suggestions and implications are available for the research and design communities on wearable OST AR selection design.

3.1. Study Design

This section explores the optimal design of natural gestural selection to enhance user performance and preference. We conducted an experimental evaluation to investigate several main design factors for freehand gestural selection with a grid target layout, including target placement, target size, target distance, and target location.

3.1.1. Freehand Gestural Selection Design

The direct user experience of grid layout selection in smartphone applications could also deliver intuitive and straightforward OST AR interaction. However, compared to other selection modalities (e.g., head motion and eye gazing [10]) or selection techniques (e.g., ray casting or gaze-and-commit [9]), direct hand gestural selection has not been sufficiently studied in OST AR environments.
Grid target layouts are now widely used on various digital devices, such as smartphones, tablets, and smart TVs. The grid layout could offer well-organised targets to support easy selection. Users are now familiar with interactions such as selecting an application from the smartphone home screen or inputting text and numbers. Thus, we employed three-by-three grid layout targets to investigate the freehand target selection in OST AR, as shown in Figure 2a.

3.1.2. Experimental Settings

We used a Microsoft Hololens 2 as the experimental platform. It has see-through stereo displays, and hand gestures are tracked by the camera array mounted on the front of the device. The experimental applications were programmed using Unity 2019.4 with the Mixed Reality Toolkit (MRTK) 2.6 and built with Microsoft Visual Studio 2019 version 16.9.4 on a Windows 10 PC. We used a group of nine squares in a three-by-three grid layout as the targets. In both visual and motor space, the centre of the grid was 10 cm below the user's head. The experimental settings and target layout are illustrated in Figure 2b.
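For illustration, the following is a minimal Python sketch of how the nine target-centre positions of such a grid could be computed in a head-relative coordinate frame (x to the right, y up, z forward, in metres). The coordinate convention and the inter-target gap are assumptions made for the example; the study reports the target sizes, distances, placement offset, and the 10 cm vertical offset, but not the exact spacing between neighbouring targets.

```python
# Minimal sketch: centre positions of the 3x3 target grid in a head-relative
# frame (x: right, y: up, z: forward), in metres. The inter-target gap is an
# assumption; the paper does not report the exact spacing.

def grid_positions(size_m, distance_m, placement_offset_m=0.0, gap_m=0.01):
    """Return {position_index: (x, y, z)} for the nine targets.

    Position indices follow the paper: 1 right-up, 2 up, 3 left-up, 4 left,
    5 centre, 6 right, 7 left-down, 8 down, 9 right-down.
    """
    pitch = size_m + gap_m            # centre-to-centre spacing
    y_centre = -0.10                  # grid centre 10 cm below the head
    index_to_cell = {
        1: (1, 1), 2: (0, 1), 3: (-1, 1),
        4: (-1, 0), 5: (0, 0), 6: (1, 0),
        7: (-1, -1), 8: (0, -1), 9: (1, -1),
    }
    return {
        idx: (placement_offset_m + col * pitch, y_centre + row * pitch, distance_m)
        for idx, (col, row) in index_to_cell.items()
    }

# Example: large targets (48 mm), middle distance (40 cm), right placement (15 cm).
targets = grid_positions(size_m=0.048, distance_m=0.40, placement_offset_m=0.15)
```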

3.1.3. Independent Variables

There were 4 independent variables (all sizes and distances here are measured the same in both visual and motor space):
  • Target placement (2 levels): centre and right (15 cm away from the centre);
  • Target size (2 levels): large (48 mm length of side) and small (32 mm length of side);
  • Target distance from the user (3 levels): short (30 cm), middle (40 cm), and long (50 cm);
  • Target position (9 levels): right-up (1), up (2), left-up (3), left (4), centre (5), right (6), left-down (7), down (8), and right-down (9).
The design of independent variables was based on the previous work on target selection. For example, the main factors that could affect the target selection are target distance and target size according to previous studies [3,4,5,6,7]. The target location also affects the user performance of freehand selection [76]. Considering that the user’s main hand rests along the right side of the body (right-handed participants in our study), the placement of the target (i.e., right or centre) could also affect the target distance from the user’s hand.
Microsoft Hololens is currently one of the most commonly used OST AR devices, and Microsoft provides some documentation for MRTK developers. According to the documents, Microsoft recommends a minimum target size of 32 mm from a distance of 48 cm (https://learn.microsoft.com/en-us/windows/mixed-reality/design/button (accessed on 28 November 2022)). We extended the target distance range from 30 cm to 50 cm and the target size range from 32 mm to 48 mm to further the understanding of the design factors. We noticed that the user’s hand usually rests within the shoulder breadth during the interaction. Considering the average shoulder breadth is about 35 cm [81], we used the target placement at 15 cm on the right side.

3.1.4. Experimental Design

We used a repeated-measures within-participants design in this study. There were two main sessions for different target placements (centre and right). The order of the two main sessions was counterbalanced across 12 participants. There was a 5-min break between the main sessions.
In each session, there were 6 test groups for different combinations of target size and distance. The order of the different combinations was randomised. Each test group had 9 practice trials (1 trial in each target position) and 27 test trials (3 trials in each target position). Thus, there were 432 trials performed in total for each participant.
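The trial structure above can be summarised with a short sketch that generates one participant's schedule. The counterbalancing and randomisation follow the description in this section, but the seeding and shuffling scheme is illustrative rather than the study's actual implementation.

```python
import random
from itertools import product

# Sketch of one participant's Study 1 schedule:
# 2 placement sessions x 6 size-distance groups x (9 practice + 27 test) trials = 432.
SIZES = ["large", "small"]           # 48 mm / 32 mm side length
DISTANCES = [0.30, 0.40, 0.50]       # metres
POSITIONS = list(range(1, 10))       # grid positions 1..9

def participant_schedule(participant_id, seed=0):
    rng = random.Random(seed + participant_id)
    # Counterbalance the order of the two placement sessions across participants.
    placements = ["centre", "right"] if participant_id % 2 == 0 else ["right", "centre"]
    schedule = []
    for placement in placements:
        groups = list(product(SIZES, DISTANCES))
        rng.shuffle(groups)          # randomise the size x distance group order
        for size, distance in groups:
            practice = POSITIONS[:]  # 1 practice trial per position
            test = POSITIONS * 3     # 3 test trials per position
            rng.shuffle(practice)
            rng.shuffle(test)
            for position in practice + test:
                schedule.append((placement, size, distance, position))
    return schedule

assert len(participant_schedule(0)) == 432
```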

3.1.5. Participants and Procedure

Twelve participants (three males and nine females) were recruited from the campus. Their mean age was 22.5 (SD = 5.21), and they were all right-handed. Most participants had some experience with 3D interaction, mainly with 3D gaming (10 participants); 4 participants had VR experience; 5 had prior experience with gestural interaction. During the experiment, participants sat in a chair and were asked to select the target with their right index finger quickly and accurately while remaining relaxed and comfortable.
In each trial, the goal target was displayed in red, and the others were displayed in blue. When the user's hand approached the targets, the proximity light shader from the MRTK standard shader library was applied. The default selection audio track from the MRTK standard audio library was played when the index finger touched the target. During the experiment, the MRTK hand mesh visualisation was enabled so users could see their hands' tracking status.
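As a rough sketch of the per-trial selection test, the snippet below approximates a touch as the tracked index fingertip entering a target's volume; the actual application relied on the MRTK near-interaction components rather than a hand-rolled test, and the touch tolerance along the depth axis is an assumption.

```python
# Sketch of the touch test, assuming fingertip and target positions are expressed
# in the same head-relative frame as the target layout (metres).

def fingertip_in_target(fingertip, target_centre, side_m, depth_tolerance_m=0.02):
    """Axis-aligned containment test; depth_tolerance_m is an assumed value."""
    fx, fy, fz = fingertip
    cx, cy, cz = target_centre
    half = side_m / 2
    return abs(fx - cx) <= half and abs(fy - cy) <= half and abs(fz - cz) <= depth_tolerance_m

def classify_touch(touched_index, goal_index):
    """A touch on the red goal target ends the trial; touching any other target is an error."""
    return "hit" if touched_index == goal_index else "error"
```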
After the test, the participants filled out a questionnaire about their preferences for the different target placements, sizes, distances, and positions. They rated their preference from 1 (strongly dislike) to 9 (strongly like). Participants were also asked to comment on and discuss their opinions of the different interaction designs. The whole experiment took about 50 min.

3.2. Results

We recorded the selection time and error rate to evaluate the user performance. We also recorded the user behaviour data such as hand movement, head position, and rotation. A repeated-measures analysis of variance (ANOVA) for target placement × size × distance × position was used to analyse the user performance measurements, including selection time, error rate, hand movement, head movement, and rotation, as well as the mean user distance to the target during the selection.
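As an illustration of this analysis pipeline, the sketch below runs the four-way repeated-measures ANOVA on selection time with statsmodels; the DataFrame columns and the file name are hypothetical stand-ins for the study's actual logs. Post hoc Bonferroni pairwise comparisons can then be obtained, for example, from paired t-tests whose p-values are multiplied by the number of comparisons.

```python
# Sketch of the repeated-measures ANOVA, assuming a long-format log with one row
# per trial and columns: participant, placement, size, distance, position,
# selection_time. Column and file names are illustrative, not the study's files.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("study1_trials.csv")   # hypothetical file name

aov = AnovaRM(
    data=trials,
    depvar="selection_time",
    subject="participant",
    within=["placement", "size", "distance", "position"],
    aggregate_func="mean",                  # average the repeated trials per cell
).fit()
print(aov.anova_table)
```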

3.2.1. Selection Time

Main effects were found for size (F(1,11) = 6.30, p < 0.05) and position (F(8,88) = 5.17, p < 0.001). An interaction effect was found for target placement × distance (F(2,22) = 4.75, p < 0.05). No main effect was found for target placement or distance. The mean selection time across target placement, size, distance, and position is illustrated in Figure 3.
Post hoc Bonferroni pairwise comparisons showed that the selection time for the large target size (0.75 s) was significantly faster than for the small target size (0.77 s) (p < 0.05). The selection time for the centre target (0.71 s) was significantly faster than for the top-left (0.79 s), bottom-left (0.77 s), and bottom-right (0.79 s) target items (p < 0.05).

3.2.2. Error Rate

Users were asked to select the target accurately and had to try again if they failed to select the target correctly. Each failure to select the goal target was recorded as an error. A main effect was found for size (F(1,11) = 18.57, p < 0.05). No main effect or interaction effect was found for target placement, distance, or position. The mean error rate across target placement, size, distance, and position is illustrated in Figure 3. Post hoc Bonferroni pairwise comparisons showed that large target selection had a significantly lower error rate (0.5%) than small target selection (1.2%).

3.2.3. Hand Movement Distance

The hand movement distance was recorded while users were selecting the target items. Main effects were found for target size (F(1,11) = 44.76, p < 0.001) and position (F(1.69,18.55) = 5.51, p < 0.05; the sphericity assumption was not met, so the Greenhouse–Geisser correction was applied and the corrected degrees of freedom are reported). No main effect or interaction effect was found for target placement or distance. The mean hand movement distance across target placement, size, distance, and position is illustrated in Figure 4a.
Post hoc Bonferroni pairwise comparisons showed that large target selection had a significantly longer overall hand movement distance (38.4 cm) than small target selection (31.2 cm). The hand movement distance for the centre item was the shortest (31.4 cm) and was significantly shorter than for the four corner items (i.e., top-left, top-right, bottom-left, and bottom-right) and the left item (p < 0.05). The hand movement distance for the top-left item was the longest (37.0 cm) and was significantly longer than for the centre item (p < 0.05).
We also recorded the hand movement distance along the depth dimension. A main effect was found for size (F(1,11) = 17.76, p < 0.05). No other main effect or interaction effect was found. Post hoc Bonferroni pairwise comparisons showed that large target selection had a significantly longer hand movement distance along the depth dimension (27.9 cm) than small target selection (23.8 cm) (p < 0.05).

3.2.4. Head Movement Distance

The user's head also moved slightly during the selection, and we recorded the head movement distance. Main effects were found for size (F(1,11) = 25.77, p < 0.001) and position (F(8,88) = 8.56, p < 0.001). No main effect was found for target distance or placement. The mean head movement distance across target placement, size, distance, and position is illustrated in Figure 4b.
Post hoc Bonferroni pairwise comparisons showed that the mean head movement distance for large target selection (2.7 cm) was significantly longer than for small target selection (2.3 cm) (p < 0.001). The head movement for centre item selection was the shortest (2.2 cm), significantly shorter than for the four corner target items (p < 0.05). The top-left target required the longest head movement distance (2.7 cm).

3.2.5. Head Rotation

Besides the head movements, users also rotated their heads to locate the target to select, and we recorded the head rotation. A main effect was found for position (F(8,88) = 3.10, p < 0.05). An interaction effect was found for placement × size (F(1,11) = 10.40, p < 0.05). No main effect was found for target size, distance, or placement. The mean head rotation across target placement, size, distance, and position is illustrated in Figure 4c.
Post hoc Bonferroni pairwise comparisons showed that the head rotation for centre target selection was the smallest (28.74 degrees), significantly smaller (p < 0.05) than for the bottom-right target (30.21 degrees).

3.2.6. User Target Distance

While selecting the targets, users slightly adjusted their heads to a more comfortable position. We recorded the average distance along the depth dimension from the user's head to the targets to evaluate this adjustment. Main effects were found for size (F(1,11) = 32.93, p < 0.001) and distance (F(2,22) = 365.70, p < 0.001). An interaction effect was found for size × distance (F(2,22) = 3.78, p < 0.05). No main effect was found for target placement or position. The mean user target distance across target placement, size, distance, and position is illustrated in Figure 4d.
Post hoc Bonferroni pairwise comparisons showed that the user target distance along the depth dimension for the large target size (40.24 cm) was significantly longer than for the small target size (39.0 cm) (p < 0.001). The user target distance along the depth dimension (32.2 cm, 39.6 cm, 47.3 cm) also differed significantly (p < 0.001) across the three target distance conditions (30 cm, 40 cm, 50 cm).
To understand the distance more accurately, we also recorded the overall distance from the user to the centre of the target grid. Main effects were found for target placement (F(1,11) = 14.76, p < 0.05), size (F(1,11) = 29.71, p < 0.001), and distance (F(2,22) = 384.94, p < 0.001). No main effect was found for target position. An interaction effect was found for size × position (F(1,11) = 2.16, p < 0.05).
Post hoc Bonferroni pairwise comparisons showed that the distance from the user to the targets placed in the centre (42.8 cm) was significantly shorter than to the targets placed on the right (44.2 cm) (p < 0.05). The user distance for the large target size (44.2 cm) was significantly longer than for the small target size (42.8 cm) (p < 0.001). The average user target distance for the short (30 cm), middle (40 cm), and long (50 cm) target distances was 36.5 cm, 43.3 cm, and 50.8 cm, respectively (p < 0.001).

3.2.7. User Preference

After the test, each participant completed a questionnaire about their preferences and comments. Two-tailed dependent t-tests found that neither placement (t(11) = 2.10, p = 0.59) nor size (t(11) = 1.33, p = 0.21) had a significant effect on user preference, as shown in Figure 5a,b.
A one-way ANOVA across target distance found a significant effect on user preference (F(2,33) = 11.29, p < 0.001), as shown in Figure 5c. Post hoc Bonferroni pairwise comparisons showed that the mean user preference for the 40 cm target distance (8.17) was significantly higher than for 30 cm (4.9) (p < 0.001) and 50 cm (6.2) (p < 0.05).
A one-way ANOVA across target position found a significant effect on preference (F(8,99) = 2.60, p < 0.05), as shown in Figure 5d. Post hoc Bonferroni pairwise comparisons showed that the preference for the centre item (8.00) was significantly higher than for the top-left item (5.00) (p < 0.05).
Figure 5 shows the user preference for target placement, size, distance, and position (from 1 for strongly dislike to 9 for strongly like).
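For reference, the preference analysis described above can be reproduced with standard routines, as in the sketch below; the ratings are random placeholders standing in for the twelve participants' 1–9 scores, not the study's data.

```python
# Sketch of the preference analysis with placeholder ratings (1-9) for 12 participants.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rate = lambda: rng.integers(1, 10, size=12)     # one placeholder rating per participant

# Two-level factors (placement, size): two-tailed paired t-tests.
t_placement, p_placement = stats.ttest_rel(rate(), rate())
t_size, p_size = stats.ttest_rel(rate(), rate())

# Distance (3 levels) and position (9 levels): one-way ANOVAs.
f_distance, p_distance = stats.f_oneway(rate(), rate(), rate())
f_position, p_position = stats.f_oneway(*[rate() for _ in range(9)])
print(p_placement, p_size, p_distance, p_position)
```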
User feedback supported the preference findings. The 40 cm distance was chosen as the most comfortable distance by the participants. The 30 cm target distance was too short, especially considering the target size: users had to adjust their hands and move their heads to view a close, large target and avoid making mistakes. Users found it difficult to select a target at a distance of 50 cm because they had to extend their hands further, adding to their exhaustion.

4. Study 2: Haptic Feedback for Freehand Gestural Selection in Wearable OST AR

This section explores the haptic feedback design and evaluation of freehand gestural selection in OST AR. We designed and implemented haptic feedback on the hand and body and conducted another experimental evaluation to investigate haptic feedback design on the hand and body for freehand gestural selection.

4.1. Haptic Feedback Design for Freehand Gestural Target Selection

In freehand interactions, one fundamental design challenge is the absence of haptic feedback [7,76]. Although haptic feedback has been studied in various user scenarios, such as entertainment [13,14,15] and driving [16], research on haptic feedback for freehand selection tasks in the OST AR environment is still limited.
Therefore, we improved the freehand selection with OST AR by integrating wearable haptic feedback based on Study 1. Based on the findings and user comments of Study 1, we could determine a set of target selection designs for optimum user experience. For example, most users preferred the 40 cm target distance and the target location in the centre; the large target size had fewer errors and a faster selection time. Thus, we used those optimised freehand target selection settings to investigate the haptic feedback design of freehand selection in OST AR.
Previous investigations indicated that numerous tactile units could be sensed as a single area by the user [68,69]. Thus, we could map the target location rendered by the Hololens 2 to the position on the user’s chest via a haptic vest device with numerous tactile units, as shown in Figure 6. For the haptic feedback on the hand, we could use a haptic wristband to deliver tactile feedback while the selection was performed, similar to the haptic design in [76].

4.2. Study Design

Similar to Study 1, we still used the same Microsoft Hololens 2 as the OST AR device with the same development tools. For wearable haptic feedback on the body and hand, we used the bHaptics TactSuit X40 and Tactosy for Hands. The TactSuit X40 had 40 tactile feedback units around the upper body (20 units on the chest and 20 on the back), and the Tactosy for Hands had 3 tactile feedback units.

4.2.1. Experimental Settings

In our experiment, we used the top four rows of tactile units on the chest region of the bHaptics TactSuit X40 to create a four-by-four tactile grid mapped to the three-by-three targets rendered in the Hololens 2. For instance, when Target Number 6 was selected, the four tactile units located in the right-middle area of the user's chest were activated to vibrate, as shown in Figure 6. The fifth row (with four tactile units) on the chest region was disabled during the experiment. Similarly, we disabled all the tactile units on the back. Thus, all target selection and corresponding haptic feedback were in front of the user. For the haptic feedback on the hand, all three tactile units on the Tactosy for Hands vibrated simultaneously.
The haptic devices were controlled by an application developed with Unity 2019.4 and the bHaptics Haptic Plugin on a desktop PC running Windows 10. When the user selected a target with the Hololens, the selection message was published via the Message Queuing Telemetry Transport (MQTT) messaging protocol (https://mqtt.org/ (accessed on 28 November 2022)). The MQTT server was built on Tencent Cloud (https://cloud.tencent.com/product/lighthouse (accessed on 28 November 2022)) with EMQX V4.0.4 (https://www.emqx.io (accessed on 28 November 2022)) running on a CentOS 7 server. The haptic application subscribed to the selection message to activate the corresponding haptic feedback. Each haptic feedback pulse lasted 50 ms at 100% intensity.
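A minimal sketch of the PC-side bridge is given below, assuming a paho-mqtt 1.x client and a message payload containing the selected target index (1–9). The broker address, topic name, tactile unit numbering, and the activate_units() helper are hypothetical; the actual study triggered the vest through the bHaptics Haptic Plugin in Unity.

```python
# Sketch of the haptic bridge: subscribe to selection messages over MQTT and map a
# 3x3 target index to a 2x2 block of chest tactile units on the top four rows.
import paho.mqtt.client as mqtt

# Paper numbering: 1 right-up ... 9 right-down, mapped to (row, column) cells of a
# 3x3 grid; unit index = row * 4 + column on the 4x4 chest grid is an assumption.
TARGET_TO_CELL = {1: (0, 2), 2: (0, 1), 3: (0, 0),
                  4: (1, 0), 5: (1, 1), 6: (1, 2),
                  7: (2, 0), 8: (2, 1), 9: (2, 2)}

def chest_units(target_index):
    row, col = TARGET_TO_CELL[target_index]
    return [r * 4 + c for r in (row, row + 1) for c in (col, col + 1)]

def activate_units(units, duration_ms=50, intensity=100):
    """Hypothetical stand-in for the bHaptics plugin call (50 ms at 100% intensity)."""
    print(f"vibrate units {units} for {duration_ms} ms at {intensity}%")

def on_message(client, userdata, msg):
    target_index = int(msg.payload.decode())
    activate_units(chest_units(target_index))

client = mqtt.Client()                      # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("broker.example.org", 1883)  # hypothetical broker address
client.subscribe("ar/selection")            # hypothetical topic name
client.loop_forever()
```

For example, a payload of 6 (the right target) activates the four units in the right-middle area of the chest grid, matching the mapping shown in Figure 6.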

4.2.2. Independent Variables

In the second study, we mainly investigated the effect of haptic feedback, so we used the settings that delivered an optimised user experience according to the user evaluation results of Study 1: targets located in the centre, with a size of 48 mm and a distance of 40 cm. The independent variables in Study 2 were the haptic feedback type and the target position:
  • Haptic feedback type (4 levels): no haptic feedback, hand haptic feedback, body haptic feedback, hand and body haptic feedback.
  • Target position (9 levels): right-up (1), up (2), left-up (3), left (4), centre (5), right (6), left-down (7), down (8), and right-down (9).

4.2.3. Experimental Design

We used a repeated-measures within-participants design in this study. In each experiment session, there were 4 test groups for different haptic feedback types. The order of different haptic feedback types was randomised. Each test group had 9 practice trials (1 trial in each target position) and 27 test trials (3 trials in each target position). Thus, there were 144 trials performed in total for each participant.

4.2.4. Participants and Procedure

Twelve right-handed participants (five males and seven females) were recruited from the campus. Their mean age was 24.5 (SD = 5.28). The experimental procedure and settings were similar to Study 1; the main difference was that participants wore a haptic vest and a haptic wristband on the right hand during the experiment. The participants in Study 1 and Study 2 were different.
After the test, the participant filled out a questionnaire about user preferences for different haptic feedback from 1 (strongly dislike) to 9 (strongly like). The participants were also asked to provide their comments and discussions about their opinions of different interaction designs. The whole experiment took about 30 min.

4.3. Results

Similar to Study 1, we recorded the selection time, error rate, and user behaviour data (i.e., hand movement, head position, and rotation) to evaluate the user performance with different haptic feedback types in Study 2. A repeated-measures analysis of variance (ANOVA) for haptic type × position was used to analyse the user performance measurements about selection time, error rate, and user behaviour data during the selection.

4.3.1. Selection Time, Error Rate, and User Behaviour Data

No main or interaction effect of haptic feedback type or target position was found for selection time or error rate. The mean selection time and error rate across haptic feedback type and position are illustrated in Figure 7a,b.
Similarly, no main or interaction effect was found for the user behaviour measurements. The mean user behaviour data across haptic feedback type and position are illustrated in Figure 7c–f.

4.3.2. User Preference and Feedback

After the experiment, each participant completed a questionnaire about their preferences and comments. A one-way repeated-measures ANOVA across haptic feedback types found a significant effect on user preference (F(3,33) = 23.59, p < 0.001), as shown in Figure 8. Post hoc Bonferroni pairwise comparisons showed that the mean user preference for selection with any haptic feedback condition was significantly higher than without haptic feedback (2.83) (p < 0.01), and selection with vest haptic feedback (7.08) and combined vest and hand haptic feedback (6.92) was rated significantly higher than haptic feedback only on the hand (4.92) (p < 0.01).
The user comments and feedback confirmed the benefits of haptic feedback in several respects. Firstly, users felt more confident while selecting, as the haptic feedback served as a confirmation of the gestural selection. Secondly, the haptic feedback on the hand via the wristband felt relatively weak to our participants, whereas the haptic feedback on the chest via the vest covered a larger vibration area, so the participants could feel stronger feedback. Thirdly, the large haptic area on the vest could accommodate more tactile units and could therefore provide a cue for the selected target's position. This haptic cue of the target position also helped the participants confirm whether they had made the correct selection.
Another interesting piece of user feedback was that the large body coverage could offer a more immersive experience. Some participants indicated that the haptic feedback around the body improved immersion, making it fun to interact with the OST AR application.
Overall, users agreed that the haptic feedback was effective and expected more haptic units and a larger coverage area on various body parts. Participants would also like haptic feedback on their fingertips and backs, and they mentioned that more information could be conveyed via haptic feedback, such as warnings about erroneous selections.

5. Discussion

In this section, we discuss the effects of the design factors in OST AR environments based on the experimental results.

5.1. Effect of Target Distance

As Fitts' law and subsequent related investigations indicate [4], the longer the target distance, the more time it takes to select the target. This phenomenon has been evaluated in various settings, such as the 2D desktop environment [3], touch displays [5], and 3D selection environments [6]. For example, in 3D display environments [6], a tracked clicker device was used to evaluate 3D selection performance.
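For reference, the Shannon formulation of Fitts' law used in this line of work [3,4] models the movement time MT for a target of width W at distance D as:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where a and b are empirically fitted constants and the logarithmic term is the index of difficulty in bits; a longer distance D (or a smaller width W) therefore predicts a longer selection time.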
Previous results revealed that the target distance affected the selection time significantly. In our study, on the other hand, we found that the target distance from the user had no significant effect on the selection time. The main reason could be that the targets in the digital visual space of OST AR environments do not exist in the physical space. Thus, the target cannot stop the user's hand like a real physical target (e.g., a multi-touch surface or a physical button). While users move their hands towards the virtual target, their hands and fingers will pass through it and continue until they actively stop the movement themselves.
Furthermore, in OST AR environments, the user's hands move freely in 3D space. There are no physical surfaces available to keep the hands still (e.g., as when moving a mouse on a desktop), so users have no support for dwelling and hovering on a target at a specific location. Thus, users must keep their hands moving around the targets to select them.
The error rate and the other user performance measurements provided further evidence. No main effect of target distance was found for error rate, hand movement, head movement, or head rotation. All the user performance data suggested that user behaviour was similar across the different target distances.
Although the target distance to users had no significant effect on user performance, our participants expressed a strong user preference for different distance settings. Most users liked the 40 cm distance. On the other hand, the shortest 30 cm target distance could make the target difficult to view, so users needed to lean back for a larger view angle. The furthest 50 cm distance could offer a large view angle for all the targets, but required extra reach distance. Thus, the longer target distance could cause more hand fatigue during the selection and more body movements as users might need to lean forward a little to make a more comfortable selection.
The 40 cm target distance received no complaints from our participants. All users agreed that 40 cm was a proper target distance to perform the direct gestural selection. The user target distance measurements could further support the user preference for a 40 cm distance. The participants moved away from the target at a 30 cm distance (32.2 cm), moved towards the target at a 50 cm distance (47.3 cm), and kept almost the same distance in the 40 cm condition (39.6 cm). Thus, we could conclude that a 40 cm target distance to the user could provide a clear view of the targets while still keeping a comfortable reach distance for the selection.

5.2. Effect of Target Size

Compared to the target distance, the target size had much more impact on user performance, including selection time, error rate, hand movement distance, head movement distance, and user target distance. A large target size showed clear benefits for selection time and error rate: users made fewer errors (0.5% for the large target size and 1.2% for the small target size) and selected faster (0.75 s for the large target size and 0.77 s for the small target size).
However, the subjective feedback from the participants suggests otherwise. Some participants mentioned that it felt uncomfortable when the large target appeared at a close distance (i.e., 30 cm), making it difficult to view and select the target. Many participants agreed that the small targets were easier to select and the large targets required longer hand movements and more head and body movements.
The user performance measurements support these comments well. The large target size required a significantly longer hand movement distance (38.4 cm for the large and 31.2 cm for the small target), head movement distance (2.7 cm for the large and 2.3 cm for the small target), and user target distance along the depth dimension (40.2 cm for the large and 39.0 cm for the small target). The comparison of the user performance measurements for the large and small target sizes is shown in Table 1.
As suggested in a previous study [76], a large target size is not always a better option in freehand selection environments. This study indicates that the application designer should consider a large target size when only a few targets are available or interaction precision is crucial and consider a small target size when there are numerous targets.

5.3. Effect of Target Position

Our participants preferred to keep their hands near the centre and placed their visual focus on the centre. Thus, all participants expressed a high preference for the central location. On the other hand, the targets located in the corners were not preferred, as extra selection effort was required. Some participants also mentioned that targets in the top-left corner were particularly challenging to select.
All the user preferences regarding target location were well supported by the user performance measurements. Target position significantly affected the selection time, hand movement distance, head movement distance, and head rotation. The centre target significantly outperformed all four corner targets for both hand and head movement distance. For selection time, the centre target was significantly faster than the top-left, bottom-left, and bottom-right targets. For head rotation, the centre target required significantly less rotation than the bottom-right target. The comparison of the user performance measurements for targets located in the centre and the corners is shown in Table 2.
From the user comments, we also noticed that locating the target was an essential part of target selection in OST AR environments. Participants mentioned that targets located in the corners, especially when rendered in a large size, could be difficult to find. This was mainly due to the limited viewing angle (a diagonal field of view of 52 degrees and a resolution of 2048 × 1080 pixels per eye) [82] of the OST AR device (i.e., Microsoft Hololens 2) used in this study. Users tended to move their heads to ensure they had not missed the target rendered in red before performing the selection with their hands.

5.4. Effect of Target Placement

As with the target distance, no main effect of target placement (centre vs. right) was found in any user performance measurement. During the user interviews, a minority of participants mentioned that targets on the right side required less hand motion, but most participants preferred targets placed in the centre as they were more convenient and comfortable to select.

5.5. Effect of Haptic Feedback

In Study 2, although there was no significant user performance difference in the target selection task, the user preference and feedback suggest that haptic feedback has unique advantages. First, haptic feedback could provide a cue, via a different interaction modality, that the selection gesture has been completed, making the interaction more confident and fluent while the hand moves freely in the air.
Secondly, the haptic feedback on the chest is often stronger than on the wrist and hand since the haptic vest provides a larger body coverage area and more tactile units, which offers extra benefits in OST AR interaction. For example, the numerous tactile units on the vest could provide the position of the selected target, giving users an additional selection cue. Additionally, the large haptic coverage area on the upper body may also provide a more immersive user experience.

6. Conclusions

The emergence of wearable OST AR devices such as the Microsoft HoloLens 2 implies that AR might be a potential platform for personal computing. Thus, it is increasingly imperative to advance our understanding of the interface design of wearable OST AR. As natural freehand gestural selection is highly intuitive, it is vital to investigate user performance and preference for a pleasant, fluid, and efficient selection approach.
This paper evaluated the design of freehand selection tasks in the OST AR environment. We investigated different design factors, including target placement, target size, target distance to the user, target position, and haptic feedback. Based on user performance and feedback, we derived several design guidelines and suggestions for natural gestural selection in the OST AR environment:
  • Our study shows that a large target size can reduce the selection time and error rate, so when application designers only have a small number of options or targets available, a large target size should be considered. For other selection tasks (e.g., entering a string of text or numbers with a keyboard), a large target size might introduce longer hand and head movement distances and more fatigue. Thus, a small target size could be used for sequential selection with a group of targets tiled together.
  • A 40 cm target distance to the user is a suitable choice. Although the evaluation results indicated that target distance had little impact on user selection performance, our participants showed a strong subjective preference for a 40 cm target distance, which provides a good balance between a comfortable reach distance and sufficient visual space to observe the targets. A longer target distance requires a far arm reach and may thus cause fatigue. If the viewing angle of the HMD is similar to that in our study, a short target distance could make targets challenging to observe and cause more body adjustment before the selection.
  • For a three-by-three grid layout, a target located in the centre is the easiest to select because users naturally place their visual attention and hand position in the centre, resulting in less selection time and hand movement distance. The four corner locations, especially the top-left corner, could be difficult for right-handed users to select. Therefore, designers should place frequently used targets in the centre and less-used options in the corner locations.
  • Target placement has little effect on user performance. Based on the user preference data in the experiment, we recommend placing the targets in the centre where possible.
  • For the haptic feedback design, the user preferences and comments imply that wearable haptic feedback on the hand and body might improve the user experience. In particular, haptic feedback on the body may achieve the best user experience, as the multiple tactile units on the chest confirm both the selection and the target position. Haptic feedback on the upper body may thus boost confidence and engagement in freehand interaction. Therefore, with the emergence of wearable haptic feedback devices, designers might explore including haptic feedback in OST AR interfaces to enhance the user experience.
Research on user interaction suggests that OST AR could offer a natural and enhanced user experience. However, the research community and product designers still require more comprehensive knowledge of how to optimise the interaction design. Our future work will progressively investigate further OST AR selection design aspects. For example, an assistive algorithm could be introduced to improve selection performance [83], and more interaction modalities should be considered in selection scenarios containing both near and distant targets [10]. Future explorations should also investigate quantitative models of freehand selection tasks in OST AR environments and take into account future OST AR technical improvements (e.g., larger viewing angles, higher tracking accuracy, and more feedback modalities) and more user aspects (e.g., gender and age group).

Author Contributions

Conceptualisation, G.W. and G.R.; methodology, G.W.; software, X.H. and W.L.; validation, G.R.; formal analysis, G.W. and X.H.; investigation, X.H.; resources, X.H.; data curation, X.H. and X.P.; writing—original draft preparation, G.W.; writing—review and editing, G.R.; visualisation, X.P.; supervision, E.O.; project administration, G.R.; funding acquisition, G.W. and G.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by the Fujian Social Science Planning Projects (No. FJ2020B106), the Xiamen Educational Science 13th Five-Year Plan (No. 1724) and the Fujian Educational Science Project (No. FJJKCGZ16-183).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data are contained within the manuscript. Raw data are available from the corresponding author upon request.

Acknowledgments

We appreciate all participants who took part in the studies.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR    Augmented reality
VR    Virtual reality
HMD   Head-mounted display
OST   Optical see-through
VST   Video see-through
MRTK  Mixed Reality Toolkit
3D    Three-dimensional
GUI   Graphical user interface

References

  1. Billinghurst, M.; Clark, A.; Lee, G. A Survey of Augmented Reality. Found. Trends -Hum.-Comput. Interact. 2015, 8, 73–272. [Google Scholar] [CrossRef]
  2. Grubert, J.; Itoh, Y.; Moser, K.; Swan, J.E. A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays. IEEE Trans. Vis. Comput. Graph. 2018, 24, 2649–2662. [Google Scholar] [CrossRef] [Green Version]
  3. MacKenzie, I.S. Fitts’ Law as a Research and Design Tool in Human-Computer Interaction. Hum.–Comput. Interact. 1992, 7, 91–139. [Google Scholar] [CrossRef]
  4. Soukoreff, R.W.; MacKenzie, I.S. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. Int. J.-Hum.-Comput. Stud. 2004, 61, 751–789. [Google Scholar] [CrossRef]
  5. Po, B.A.; Fisher, B.D.; Booth, K.S. Mouse and Touchscreen Selection in the Upper and Lower Visual Fields. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’04, Vienna, Austria, 24–29 April 2004; Association for Computing Machinery: New York, NY, USA, 2004; pp. 359–366. [Google Scholar] [CrossRef] [Green Version]
  6. Grossman, T.; Balakrishnan, R. Pointing at trivariate targets in 3D environments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’04, Paris, France, 27 April–2 May 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 447–454. [Google Scholar] [CrossRef] [Green Version]
  7. Ren, G.; O’Neill, E. 3D selection with freehand gesture. Comput. Graph. 2013, 37, 101–120. [Google Scholar] [CrossRef]
  8. Wolf, D.; Dudley, J.J.; Kristensson, P.O. Performance Envelopes of in-Air Direct and Smartwatch Indirect Control for Head-Mounted Augmented Reality. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; pp. 347–354. [Google Scholar] [CrossRef] [Green Version]
  9. Uzor, S.; Kristensson, P.O. An Exploration of Freehand Crossing Selection in Head-Mounted Augmented Reality. ACM Trans.-Comput.-Hum. Interact. 2021, 28, 33:1–33:27. [Google Scholar] [CrossRef]
  10. Kytö, M.; Ens, B.; Piumsomboon, T.; Lee, G.A.; Billinghurst, M. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–14. [Google Scholar] [CrossRef]
  11. Wang, D.; Guo, Y.; Liu, S.; Zhang, Y.; Xu, W.; Xiao, J. Haptic display for virtual reality: Progress and challenges. Virtual Real. Intell. Hardw. 2019, 1, 136–162. [Google Scholar] [CrossRef] [Green Version]
  12. Bermejo, C.; Hui, P. A Survey on Haptic Technologies for Mobile Augmented Reality. Acm Comput. Surv. 2021, 54, 184:1–184:35. [Google Scholar] [CrossRef]
  13. Israr, A.; Zhao, S.; Schwalje, K.; Klatzky, R.; Lehman, J. Feel Effects: Enriching Storytelling with Haptic Feedback. ACM Trans. Appl. Percept. 2014, 11, 11:1–11:17. [Google Scholar] [CrossRef]
  14. Schneider, O.S.; Israr, A.; MacLean, K.E. Tactile Animation by Direct Manipulation of Grid Displays. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, UIST ’15, Charlotte, NC, USA, 11–15 November 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 21–30. [Google Scholar] [CrossRef]
  15. Israr, A.; Kim, S.C.; Stec, J.; Poupyrev, I. Surround haptics: Tactile feedback for immersive gaming experiences. In Proceedings of the CHI ’12 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’12, Austin, TX, USA, 5–10 May 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 1087–1090. [Google Scholar] [CrossRef]
  16. Gaffary, Y.; Lécuyer, A. The Use of Haptic and Tactile Information in the Car to Improve Driving Safety: A Review of Current Technologies. Front. ICT 2018, 5, 5. [Google Scholar] [CrossRef] [Green Version]
  17. Vo, D.B.; Brewster, S.A. Touching the invisible: Localizing ultrasonic haptic cues. In Proceedings of the 2015 IEEE World Haptics Conference (WHC), Evanston, IL, USA, 22–26 June 2015; pp. 368–373. [Google Scholar] [CrossRef] [Green Version]
  18. Long, B.; Seah, S.A.; Carter, T.; Subramanian, S. Rendering volumetric haptic shapes in mid-air using ultrasound. ACM Trans. Graph. 2014, 33, 181:1–181:10. [Google Scholar] [CrossRef] [Green Version]
  19. Lee, B.; Isenberg, P.; Riche, N.; Carpendale, S. Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions. IEEE Trans. Vis. Comput. Graph. 2012, 18, 2689–2698. [Google Scholar] [CrossRef] [Green Version]
  20. Kurtenbach, G. The Design and Evaluation of Marking Menus. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 1993. [Google Scholar]
  21. Rubine, D. Combining gestures and direct manipulation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’92, Monterey, CA, USA, 3–7 May 1992; ACM: New York, NY, USA, 1992; pp. 659–660. [Google Scholar] [CrossRef]
  22. Kettebekov, S.; Sharma, R. Toward Natural Gesture/Speech Control of a Large Display. In Engineering for Human-Computer Interaction; Little, M., Nigay, L., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2254, pp. 221–234. [Google Scholar] [CrossRef] [Green Version]
  23. Farhadi-Niaki, F.; Etemad, S.; Arya, A. Design and Usability Analysis of Gesture-Based Control for Common Desktop Tasks. In Human-Computer Interaction. Interaction Modalities and Techniques; Kurosu, M., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8007, pp. 215–224. [Google Scholar] [CrossRef]
  24. Reddy, V.; Raghuveer, V.; Krishna, J.; Chandralohit, K. Finger gesture based tablet interface. In Proceedings of the 2012 IEEE International Conference on Computational Intelligence and Computing Research, Coimbatore, India, 18–20 December 2012; pp. 1–4. [Google Scholar] [CrossRef] [Green Version]
  25. LaViola, J. Bringing VR and Spatial 3D Interaction to the Masses through Video Games. Comput. Graph. Appl. 2008, 28, 10–15. [Google Scholar] [CrossRef]
  26. Sherman, W.R.; Craig, A.B. Understanding Virtual Reality: Interface, Application, and Design; Elsevier: Cambridge, MA, USA, 2002. [Google Scholar]
  27. Bowman, D.A.; Coquillart, S.; Froehlich, B.; Hirose, M.; Kitamura, Y.; Kiyokawa, K.; Stuerzlinger, W. 3D user interfaces: New directions and perspectives. Comput. Graph. Appl. 2008, 28, 20–36. [Google Scholar] [CrossRef]
  28. Bowman, D.A.; Hodges, L.F. An evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments. In Proceedings of the 1997 Symposium on Interactive 3D Graphics, I3D ’97, Providence, RI, USA, 27–30 April 1997; ACM: New York, NY, USA, 1997; p. 35. [Google Scholar] [CrossRef]
  29. Cohen, P.; McGee, D.; Oviatt, S.; Wu, L.; Clow, J.; King, R.; Julier, S.; Rosenblum, L. Multimodal interaction for 2D and 3D environments [virtual reality]. Comput. Graph. Appl. 1999, 19, 10–13. [Google Scholar] [CrossRef]
  30. Duval, T.; Lecuyer, A.; Thomas, S. SkeweR: A 3D Interaction Technique for 2-User Collaborative Manipulation of Objects in Virtual Environments. In Proceedings of the 3D User Interfaces, 3DUI 2006, Alexandria, VA, USA, 25–26 March 2006; pp. 69–72. [Google Scholar] [CrossRef] [Green Version]
  31. Cao, X.; Balakrishnan, R. VisionWand: Interaction techniques for large displays using a passive wand tracked in 3D. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, UIST ’03, Vancouver, BC, Canada, 2–5 November 2003; ACM: New York, NY, USA, 2003; pp. 173–182. [Google Scholar] [CrossRef]
  32. Gallo, L.; Ciampi, M. Wii Remote-enhanced Hand-Computer interaction for 3D medical image analysis. In Proceedings of the 2009 International Conference on Current Trends in Information Technology (CTIT), Dubai, United Arab Emirates, 15–16 December 2009; pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
  33. Song, J.; Kim, W.; Son, H.; Yoo, J.; Kim, J.; Kim, R.; Oh, J. Design and Implementation of a Remote Control for IPTV with Sensors. In Future Generation Information Technology; Kim, T.h., Adeli, H., Slezak, D., Sandnes, F., Song, X., Chung, K.i., Arnett, K., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 7105, pp. 223–228. [Google Scholar] [CrossRef]
  34. Jones, E.; Alexander, J.; Andreou, A.; Irani, P.; Subramanian, S. GesText: Accelerometer-based gestural text-entry systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; ACM: New York, NY, USA, 2010; pp. 2173–2182. [Google Scholar] [CrossRef]
  35. Shoemaker, G.; Findlater, L.; Dawson, J.Q.; Booth, K.S. Mid-air text input techniques for very large wall displays. In Proceedings of the Graphics Interface 2009, GI ’09, Kelowna, BC, Canada, 25–27 May 2009; Canadian Information Processing Society: Toronto, ON, Canada, 2009; pp. 231–238. [Google Scholar]
  36. Lee, J. Hacking the Nintendo Wii Remote. Pervasive Comput. 2008, 7, 39–45. [Google Scholar] [CrossRef]
  37. Wang, J.; Zhai, S.; Canny, J. SHRIMP: Solving collision and out of vocabulary problems in mobile predictive input with motion gesture. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; ACM: New York, NY, USA, 2010; pp. 15–24. [Google Scholar] [CrossRef]
  38. Ruiz, J.; Li, Y. DoubleFlip: A motion gesture delimiter for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; ACM: New York, NY, USA, 2011; pp. 2717–2720. [Google Scholar] [CrossRef]
  39. Vlasic, D.; Baran, I.; Matusik, W.; Popović, J. Articulated mesh animation from multi-view silhouettes. ACM Trans. Graph. 2008, 27, 97:1–97:9. [Google Scholar] [CrossRef] [Green Version]
  40. Remondino, F.; Roditakis, A. Human motion reconstruction and animation from video sequences. In Proceedings of the 17th International Conference on Computer Animation and Social Agents (CASA2004), Geneva, Switzerland, 7–9 July 2004; pp. 347–354. [Google Scholar]
  41. Herda, L.; Fua, P.; Plänkers, R.; Boulic, R.; Thalmann, D. Using skeleton-based tracking to increase the reliability of optical motion capture. Hum. Mov. Sci. 2001, 20, 313–341. [Google Scholar] [CrossRef] [Green Version]
  42. Margolis, T.; DeFanti, T.A.; Dawe, G.; Prudhomme, A.; Schulze, J.P.; Cutchin, S. Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback. Proc. SPIE Int. Soc. Opt. Eng. 2011, 7864, 786417. [Google Scholar] [CrossRef] [Green Version]
  43. Bideau, B.; Kulpa, R.; Vignais, N.; Brault, S.; Multon, F.; Craig, C. Using Virtual Reality to Analyze Sports Performance. Comput. Graph. Appl. 2010, 30, 14–21. [Google Scholar] [CrossRef]
  44. Murphy-Chutorian, E.; Trivedi, M. Head Pose Estimation and Augmented Reality Tracking: An Integrated System and Evaluation for Monitoring Driver Awareness. IEEE Trans. Intell. Transp. Syst. 2010, 11, 300–311. [Google Scholar] [CrossRef]
  45. Beaudouin-Lafon, M. Lessons learned from the WILD room, a multisurface interactive environment. In Proceedings of the 23rd French Speaking Conference on Human-Computer Interaction, IHM ’11, Sophia Antipolis, France, 24–27 October 2011; ACM: New York, NY, USA, 2011; pp. 18:1–18:8. [Google Scholar] [CrossRef] [Green Version]
  46. Andersen, D.; Villano, P.; Popescu, V. AR HMD Guidance for Controlled Hand-Held 3D Acquisition. IEEE Trans. Vis. Comput. Graph. 2019, 25, 3073–3082. [Google Scholar] [CrossRef] [PubMed]
  47. Vogel, D.; Balakrishnan, R. Interactive public ambient displays: Transitioning from implicit to explicit, public to personal, interaction with multiple users. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, UIST ’04, Santa Fe, NM, USA, 24–27 October 2004; ACM: New York, NY, USA, 2004; pp. 137–146. [Google Scholar] [CrossRef]
  48. Segen, J.; Kumar, S. Gesture VR: Vision-based 3D hand interface for spatial interaction. In Proceedings of the Sixth ACM International Conference on Multimedia, MULTIMEDIA ’98, Bristol, UK, 12–16 September 1998; ACM: New York, NY, USA, 1998; pp. 455–464. [Google Scholar] [CrossRef]
  49. Segen, J.; Kumar, S. Video acquired gesture interfaces for the handicapped. In Proceedings of the Sixth ACM International Conference on Multimedia: Face/Gesture Recognition and Their Applications, MULTIMEDIA ’98, Bristol, UK, 12–16 September 1998; ACM: New York, NY, USA, 1998; pp. 45–48. [Google Scholar] [CrossRef]
  50. Baldauf, M.; Zambanini, S.; Fröhlich, P.; Reichl, P. Markerless visual fingertip detection for natural mobile device interaction. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI ’11, Stockholm, Sweden, 30 August–2 September 2011; ACM: New York, NY, USA, 2011; pp. 539–544. [Google Scholar] [CrossRef]
  51. Song, P.; Yu, H.; Winkler, S. Vision-based 3D finger interactions for mixed reality games with physics simulation. In Proceedings of the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, VRCAI ’08, Hachioji, Japan, 8–9 December 2008; ACM: New York, NY, USA, 2008; pp. 7:1–7:6. [Google Scholar] [CrossRef]
  52. Song, P.; Goh, W.B.; Hutama, W.; Fu, C.W.; Liu, X. A handle bar metaphor for virtual object manipulation with mid-air interaction. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, CHI ’12, Austin, TX, USA, 5–10 May 2012; ACM: New York, NY, USA, 2012; pp. 1297–1306. [Google Scholar] [CrossRef]
  53. Ren, G.; Li, C.; O’Neill, E.; Willis, P. 3D Freehand Gestural Navigation for Interactive Public Displays. Comput. Graph. Appl. 2013, 33, 47–55. [Google Scholar] [CrossRef]
  54. Benko, H. Beyond flat surface computing: Challenges of depth-aware and curved interfaces. In Proceedings of the 17th ACM International Conference on Multimedia, MM ’09, Beijing, China, 19–24 October 2009; ACM: New York, NY, USA, 2009; pp. 935–944. [Google Scholar] [CrossRef]
  55. Benko, H.; Jota, R.; Wilson, A. MirageTable: Freehand interaction on a projected augmented reality tabletop. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, CHI ’12, Austin, TX, USA, 5–10 May 2012; ACM: New York, NY, USA, 2012; pp. 199–208. [Google Scholar] [CrossRef]
  56. Harrison, C.; Benko, H.; Wilson, A.D. OmniTouch: Wearable multitouch interaction everywhere. In Proceedings of the 24th annual ACM Symposium on User Interface Software and Technology, UIST ’11, Santa Barbara, CA, USA, 16–19 October 2011; ACM: New York, NY, USA, 2011; pp. 441–450. [Google Scholar] [CrossRef]
  57. Ababsa, F.; He, J.; Chardonnet, J.R. Combining HoloLens and Leap-Motion for Free Hand-Based 3D Interaction in MR Environments. In Proceedings of the 7th International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, Lecce, Italy, 7–10 September 2020; De Paolis, L.T., Bourdot, P., Eds.; Lecture Notes in Computer Science. Springer International Publishing: Cham, Switzerland, 2020; pp. 315–327. [Google Scholar] [CrossRef]
  58. Chaconas, N.; Höllerer, T. An Evaluation of Bimanual Gestures on the Microsoft HoloLens. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18–22 March 2018; pp. 1–8. [Google Scholar] [CrossRef]
  59. Serrano, R.; Morillo, P.; Casas, S.; Cruz-Neira, C. An empirical evaluation of two natural hand interaction systems in augmented reality. Multimed. Tools Appl. 2022, 81, 31657–31683. [Google Scholar] [CrossRef]
  60. Chang, Y.S.; Nuernberger, B.; Luan, B.; Höllerer, T.; O’Donovan, J. Gesture-based augmented reality annotation. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 469–470. [Google Scholar] [CrossRef]
  61. Kao, H.L.C.; Dementyev, A.; Paradiso, J.A.; Schmandt, C. NailO: Fingernails as an Input Surface. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Republic of Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 3015–3018. [Google Scholar] [CrossRef]
  62. Ashbrook, D.; Baudisch, P.; White, S. Nenya: Subtle and eyes-free mobile input with a magnetically-tracked finger ring. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 2043–2046. [Google Scholar] [CrossRef]
  63. Ham, J.; Hong, J.; Jang, Y.; Ko, S.H.; Woo, W. Smart Wristband: Touch-and-Motion–Tracking Wearable 3D Input Device for Smart Glasses. In Proceedings of the Distributed, Ambient, and Pervasive Interactions, Heraklion, Crete, Greece, 22–27 June 2014; Streitz, N., Markopoulos, P., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2014; pp. 109–118. [Google Scholar] [CrossRef]
  64. Rekimoto, J. GestureWrist and GesturePad: Unobtrusive wearable interaction devices. In Proceedings of the Fifth International Symposium on Wearable Computers, Zürich, Switzerland, 8–9 October 2001; pp. 21–27. [Google Scholar] [CrossRef]
  65. Srikulwong, M.; O’Neill, E. A comparative study of tactile representation techniques for landmarks on a wearable device. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 2029–2038. [Google Scholar] [CrossRef]
  66. Asif, A.; Boll, S. Where to turn my car? comparison of a tactile display and a conventional car navigation system under high load condition. In Proceedings of the 2nd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’10, Pittsburgh, PA, USA, 11–12 November 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 64–71. [Google Scholar] [CrossRef]
  67. Prasad, M.; Taele, P.; Goldberg, D.; Hammond, T.A. HaptiMoto: Turn-by-turn haptic route guidance interface for motorcyclists. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’14, Toronto, ON, Canada, 26 April–1 May 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 3597–3606. [Google Scholar] [CrossRef]
  68. Israr, A.; Poupyrev, I. Tactile brush: Drawing on skin with a tactile grid display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; ACM: Vancouver, BC, Canada, 2011; pp. 2019–2028. [Google Scholar] [CrossRef]
  69. Israr, A.; Poupyrev, I. Control space of apparent haptic motion. In Proceedings of the 2011 IEEE World Haptics Conference, Istanbul, Turkey, 21–24 June 2011; pp. 457–462. [Google Scholar] [CrossRef]
  70. Saba, M.P.; Filippo, D.; Pereira, F.R.; de Souza, P.L.P. Hey yaa: A Haptic Warning Wearable to Support Deaf People Communication. In Proceedings of the 17th International Conference on Collaboration and Technology, Paraty, Brazil, 2–7 October 2011; Vivacqua, A.S., Gutwin, C., Borges, M.R.S., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2011; pp. 215–223. [Google Scholar] [CrossRef]
  71. Mujibiya, A. Haptic feedback companion for Body Area Network using body-carried electrostatic charge. In Proceedings of the 2015 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 9–12 January 2015; pp. 571–572. [Google Scholar] [CrossRef]
  72. Withana, A.; Groeger, D.; Steimle, J. Tacttoo: A Thin and Feel-Through Tattoo for On-Skin Tactile Output. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, UIST ’18, Berlin, Germany, 14–17 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 365–378. [Google Scholar] [CrossRef] [Green Version]
  73. Zhu, M.; Sun, Z.; Zhang, Z.; Shi, Q.; He, T.; Liu, H.; Chen, T.; Lee, C. Haptic-feedback smart glove as a creative human-machine interface (HMI) for virtual/augmented reality applications. Sci. Adv. 2020, 6, eaaz8693. [Google Scholar] [CrossRef]
  74. Pfeiffer, M.; Schneegass, S.; Alt, F.; Rohs, M. Let me grab this: A comparison of EMS and vibration for haptic feedback in free-hand interaction. In Proceedings of the 5th Augmented Human International Conference, AH ’14, Kobe, Japan, 7–9 March 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 1–8. [Google Scholar] [CrossRef]
  75. Pezent, E.; O’Malley, M.K.; Israr, A.; Samad, M.; Robinson, S.; Agarwal, P.; Benko, H.; Colonnese, N. Explorations of Wrist Haptic Feedback for AR/VR Interactions with Tasbi. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA ’20, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–4. [Google Scholar] [CrossRef]
  76. Ren, G.; Li, W.; O’Neill, E. Towards the design of effective freehand gestural interaction for interactive TV. J. Intell. Fuzzy Syst. 2016, 31, 2659–2674. [Google Scholar] [CrossRef] [Green Version]
  77. Harrington, K.; Large, D.R.; Burnett, G.; Georgiou, O. Exploring the Use of Mid-Air Ultrasonic Feedback to Enhance Automotive User Interfaces. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’18, Toronto, ON, Canada, 23–25 September 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 11–20. [Google Scholar] [CrossRef]
  78. Grossman, T.; Wigdor, D.; Balakrishnan, R. Multi-finger gestural interaction with 3D volumetric displays. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, UIST ’04, Santa Fe, NM, USA, 24–27 October 2004; ACM: New York, NY, USA, 2004; pp. 61–70. [Google Scholar] [CrossRef] [Green Version]
  79. Batmaz, A.U.; Machuca, M.D.B.; Pham, D.M.; Stuerzlinger, W. Do Head-Mounted Display Stereo Deficiencies Affect 3D Pointing Tasks in AR and VR? In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 585–592. [Google Scholar] [CrossRef]
  80. Barrera Machuca, M.D.; Stuerzlinger, W. The Effect of Stereo Display Deficiencies on Virtual Hand Pointing. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Glasgow, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–14. [Google Scholar] [CrossRef]
  81. Ansari, S.; Nikpay, A.; Varmazyar, S. Design and Development of an Ergonomic Chair for Students in Educational Settings. Health Scope 2018, 7, e60531. [Google Scholar] [CrossRef] [Green Version]
  82. Erickson, A.; Kim, K.; Bruder, G.; Welch, G.F. Exploring the Limitations of Environment Lighting on Optical See-Through Head-Mounted Displays. In Proceedings of the Symposium on Spatial User Interaction, Virtual Event, 30 October–1 November 2020; pp. 1–8. [Google Scholar] [CrossRef]
  83. Balakrishnan, R. “Beating” Fitts’ law: Virtual enhancements for pointing facilitation. Int. J. Hum.-Comput. Stud. 2004, 61, 857–874. [Google Scholar] [CrossRef]
Figure 1. Target selection in OST AR environments.
Figure 2. Target layout and experimental settings. (a) Target Layout; (b) Experimental Settings.
Figure 3. Mean selection time and error rate. In this and later charts, error bars represent 95% confidence intervals. (a) Mean selection time; (b) Mean error rate.
Figure 4. Mean hand movement distance, head movement distance, head rotation degree, and head target distance in the depth dimension. (a) Mean hand movement distance; (b) Mean head movement distance; (c) Mean head rotation degree; (d) Mean head target distance in depth dimension.
Figure 5. Mean user preference. (a) Target Placement; (b) Target Size; (c) Target Distance; (d) Target Position.
Figure 6. Experimental settings of Study 2.
Figure 7. Mean selection time, error rate, hand movement distance, head movement distance, head rotation degree, and head target distance in the depth dimension of Study 2. (a) Mean selection time; (b) Mean error rate; (c) Mean hand movement distance; (d) Mean head movement distance; (e) Mean head rotation degree; (f) Mean head target depth distance.
Figure 8. Mean user preference.
Table 1. Comparison of user performance measurements for the large and small target sizes.

Target Size                    Large (48 mm)    Small (32 mm)
Mean Selection Time            0.71 s           0.79 s
Mean Error Rate                0.5%             1.2%
Mean Hand Movement Distance    38.4 cm          31.2 cm
Mean Head Movement Distance    2.7 cm           2.3 cm
Mean User Target Distance      40.4 cm          39.0 cm
Table 2. Comparison of user performance measurements for the centre and corner target positions.

Target Position                Centre           Corner
Mean Selection Time            0.71 s           0.79 s (top-left)
Mean Hand Movement Distance    31.4 cm          37.0 cm (top-left)
Mean Head Movement Distance    2.2 cm           2.7 cm (top-left)
Mean Head Rotation             28.74 degrees    30.21 degrees (bottom-right)
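As a rough, back-of-the-envelope illustration (not part of the reported analysis), the values in Table 1 can be read through the Fitts’ law formulation cited in [3,4], treating the mean hand movement distance as the movement amplitude A and the target diameter as the target width W, so that the index of difficulty is ID = log2(A/W + 1):

Large targets (48 mm): ID = log2(38.4 cm / 4.8 cm + 1) = log2(9) ≈ 3.17 bits, giving roughly 3.17 bits / 0.71 s ≈ 4.5 bits/s.
Small targets (32 mm): ID = log2(31.2 cm / 3.2 cm + 1) = log2(10.75) ≈ 3.43 bits, giving roughly 3.43 bits / 0.79 s ≈ 4.3 bits/s.

Under these assumptions, the two sizes yield broadly comparable throughput, consistent with the modest selection-time difference in Table 1.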
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
