1. Introduction
With the increasing availability and affordability of virtual reality (VR) devices [
1], VR applications have expanded across various domains, including training and education [
2], healthcare [
3], and the military [
4]. However, input and interactions in VR remain challenging [
5]. The most popular input method for VR involves handheld controllers that come with head-mounted displays (HMDs). Users hold these controllers with one or both hands and interact with the system by pressing buttons, raycasting, and performing gestures. These devices require the use of one or both hands, limiting users’ ability to perform other tasks simultaneously.
Gestural interactions facilitated by optical sensors are becoming increasingly popular. These techniques focus on spatial gestures, which are expressive and engaging but can lead to physical strain and lack accuracy [
6,
7]. Additionally, the optical sensors responsible for hand tracking are typically mounted on HMDs and have a limited field of view (FOV). This often forces users to maintain their gaze on their hands for interaction, restricting multitasking. Gestural interactions are also cumbersome in confined spaces, such as when seated in an airplane. While wearable solutions like digital gloves exist, they tend to be expensive, bulky, and impractical for extended use [
8].
To mitigate these issues, a recent trend has been to explore the potential of using traditional input devices in virtual reality, such as the mouse and keyboard [
6,
9,
10]. Interestingly, despite being traditionally associated with 2D environments, these input methods have proven effective in various 3D applications, such as gaming or computer-aided design (CAD) [
11]. In certain tasks like object manipulation, they often surpass the performance of dedicated 3D input devices [
12,
13]. However, finding and using these external devices while wearing a VR headset can be challenging and can distract users from the task at hand [
14].
This work aims to combine the convenience of wearable devices with the reliability of traditional input devices, specifically a mouse, for input and interactions in VR. Towards this end, the Digital Thimble was designed and evaluated. This is a portable, force-based, index-finger-wearable device that is always reachable and can be used conveniently and comfortably in various settings and scenarios. The device is equipped with a pressure sensor for detecting contact forces and an optical mouse sensor for tracking finger movements. This optical sensor enables the thimble to be used on a wide range of surfaces, including the human body. This work also investigates two selection methods: on-press, which activates upon pressing or applying extra force, and on-release, which triggers upon releasing touch or extra force, aiming to accommodate different user preferences and interaction styles.
2. Related Work
This section discusses novel input, selection, teleportation, and sorting methods for VR that involve the use of hands or fingers. For a more comprehensive discussion of input and interaction in VR, please refer to recent survey papers [
15,
16].
2.1. Input Devices for Virtual Reality
Most VR headsets or HMDs come with handheld controllers, which are typically composed of buttons, triggers, or thumbsticks and usually have motion-tracking capabilities. These features enable users to manipulate virtual objects or navigate through virtual spaces. The most common motion-tracking approach is raycasting, which allows users to control a ray pointing to a target with hand movements and use buttons or triggers to select and manipulate virtual objects [
17]. There are also numerous approaches that enable the use of a controller as a virtual mouse [
18]. Controllers, however, need to be held at all times, limiting the ability to perform other tasks with the hands. Performing gestures in mid-air with controllers often leads to errors, lacks precision, and can cause fatigue during extended use [
6,
11,
16]. Additionally, controllers are challenging to use in confined spaces, such as while seated on a bus or train.
Researchers have proposed various input solutions to address these challenges, including the use of smartphones [
19,
20], digital pens or styli [
21,
22,
23], and tangible objects [
24,
25]. However, most of these solutions are aimed at and optimized for specific contexts or use cases. For example, smartphones are predominantly used to provide text entry solutions in VR, while digital pens and styli are primarily utilized to offer solutions for precise target selection. Consequently, these solutions outperform controllers in those specific scenarios.
In recent years, there has been a growing interest in using traditional mice in VR [
6,
11]. This interest is attributed to their longstanding use in computer systems, their higher accuracy and precision in target selection, and user comfort, as users can rest their hand on a flat surface [
13,
21,
26]. In studies investigating the accuracy and precision of mice according to Fitts’ law, mice have consistently outperformed modern controllers in terms of throughput by around 12% [
21,
26]. Recent work has shown that a mouse is also effective in a 3D world despite being a 2D input device [
11]. However, like controllers, mice require the use of the hand, thus limiting the hand’s availability for other tasks. In addition, locating the mouse is not always straightforward, as users’ sight is occluded by the HMD.
Some researchers have proposed using existing or new wearable devices, such as smartwatches [
27], digital gloves [
28,
29], and finger wearables, for touch interactions [
30]. However, these solutions either target specific use cases or come with a steep learning curve. They also tend to use bulky and heavy sensors, making them impractical in real-world scenarios. Wearable mice (
Figure 1), commonly available on online shopping platforms, could serve as a bridge between traditional input methods and wearable technology, combining the accuracy and precision of a traditional mouse with the flexibility and portability of a wearable device. However, the effectiveness of these mice in VR has not yet been explored. The only related research identified involving a wearable mouse examined its use for remotely controlling a telepresence robot [
31], where it demonstrated a superior performance compared to a keyboard-based system.
Bare-hand interactions using mid-air gestures are also commonly used in VR due to the availability of affordable hand-tracking technologies [
32,
33]. However, the performance of these methods depends on the quality of the cameras used and can suffer due to occlusion or other environmental factors. This approach is also known to compromise comfort when used for prolonged periods [
6,
7].
2.2. Hand and Finger Tracking in Virtual Reality
Most VR systems utilize a camera-based approach to track hand and finger movements, with RGB, infrared (IR) thermal imaging, or depth cameras either mounted on the headset or placed within the environment [
32,
34]. However, they face challenges from environmental factors such as lighting, skin tone, and occlusion [
32,
34]. For instance, it can be difficult for these systems to function in dark places, some marker-based methods struggle with different skin tones, and reflective materials might interfere with the sensing capabilities of some cameras. Consequently, research is actively exploring alternative sensors, including inertial measurement units (IMUs); optical sensors; and electromyography (EMG) in digital gloves [
28,
29], wrist-worn devices [
35,
36], and finger-worn devices [
8,
14], to overcome these limitations. These methods demonstrate potential in certain scenarios, but encounter similar challenges related to comfort, scalability, and cost as those outlined earlier.
2.3. Target Selection in Virtual Reality
Target selection in VR is accomplished through either direct manipulation, where virtual objects are selected using a virtual representation of the user’s actual hands, or remote pointing, which involves controlling a virtual cursor using raycasting or a similar technique [
11,
26,
32]. Direct manipulation enables users to reach out, grasp, and manipulate virtual objects, providing a sense of depth perception, typically facilitated by various optical hand-tracking systems. Although intuitive, this method can lead to high levels of fatigue [
6,
7]. Conversely, cursor movement techniques utilize controllers with six degrees of freedom (DOF), eye gaze, head movements, joysticks, digital pens, and various controllers [
21,
26,
37], requiring users to point at an object and then perform an additional action to select it (called a “switch”), such as pressing a button or issuing speech commands.
Raycasting is the most popular approach to target selection in VR. It functions similarly to a laser pointer, with users directing a ray of light at a target and confirming their selection with a switch. Extensive research has been conducted to develop innovative methods for optimizing raycasting in VR, with the goal of enhancing both efficiency and user experience [
11,
26,
38,
39].
2.4. Teleportation in Virtual Reality
Locomotion is a crucial aspect of VR that enables users to explore virtual worlds. Researchers have investigated various locomotion techniques, such as simulated walking, which allows users to walk in place while navigating the virtual environment. This can be achieved through the use of treadmills [
40], low-friction surfaces [
41], novel devices like spheres [
42], and other motion-based systems. One issue with this approach is that when users physically move, their eyes and inner ear are not always in sync, which can lead to symptoms such as nausea and dizziness [
43]. Additionally, these methods require heavy and expensive setups, which are not always practical or scalable. As a result, teleportation has become the dominant method for navigating the virtual environment. It is a virtual locomotion technique that allows users to move around a virtual environment without physically moving their body [
43]. This is achieved by instantly transporting the user’s avatar or point of view (POV) to a new location. Since teleportation does not involve any physical movement, it is less likely to cause motion sickness compared to other locomotion techniques [
43]. Point-and-click is the method most commonly used for teleportation, allowing users to point to their desired destination and then click a button on their controller to teleport [
43,
44]. However, researchers have explored alternative approaches, including teleportation by eye gaze [
45], mid-air gestures [
44], jump gestures [
46], and head movements [
45].
2.5. Sorting in Virtual Reality
Sorting tasks are commonly used to assess the performance of new input devices or techniques for VR [
47,
48]. These tasks require users to manipulate objects, such as cubes [
48,
49,
50] or balls [
51], by grabbing them, picking them up, and placing them in designated locations. Through these tasks, various aspects of these methods, including tracking, pointing, comfort, and usability, can be evaluated. Sorting tasks have also been used to evaluate collaboration [
52,
53], supply chain simulation [
54], feedback approaches [
49,
50], physical therapy [
51], and even motivational relevance [
55]. Consequently, this work included sorting tasks as a means to further evaluate the performance of the Digital Thimble.
3. A Force-Based Digital Thimble
The Digital Thimble enables free-hand input and interaction in VR by varying the touch contact force on a surface. This approach is based on the premise that accurate tracking of finger movements can enable a wide range of interactions. The device is worn on the index finger and is versatile enough to function on nearly any diffuse opaque surface, including rough or uneven surfaces and even the human body.
The Digital Thimble uses an optical mouse sensor to track the finger’s movements. As the thimble moves across a surface, the sensor captures a series of consecutive images and applies digital signal processing techniques to analyze them. By comparing these images, the sensor precisely computes both the distance and direction of the thimble’s movement, which is then used to control an on-screen cursor. During development, various approaches to finger tracking were explored, including RGB and depth cameras, magnetism, gyroscopes, and IR sensors [
56,
57,
58,
59]. Ultimately, an optical sensor was chosen due to its widespread availability, affordability, and effectiveness across a wide range of surfaces [
57,
60,
61].
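To illustrate the kind of displacement estimation an optical mouse sensor performs in hardware, the following sketch estimates the (dx, dy) shift between two consecutive grayscale frames by exhaustive block matching. The frame size, search range, and function name are illustrative assumptions, not the actual on-chip pipeline, which operates at much higher frame rates with dedicated circuitry:

```python
def estimate_shift(prev, curr, max_shift=3):
    """Estimate the (dx, dy) displacement between two grayscale frames
    (lists of rows of pixel intensities) by exhaustive block matching:
    try every candidate shift within +/- max_shift and keep the one
    minimising the sum of squared differences over the interior region."""
    h, w = len(prev), len(prev[0])
    m = max_shift
    best, best_err = (0, 0), float("inf")
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            err = 0
            # Compare the interior of the previous frame against the
            # candidate-shifted region of the current frame.
            for y in range(m, h - m):
                for x in range(m, w - m):
                    diff = prev[y][x] - curr[y + dy][x + dx]
                    err += diff * diff
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best
```

Accumulating these per-frame shifts over time yields the distance and direction of travel that drive the on-screen cursor.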
The optical sensor integrated into the thimble is the FCT 3065-XY, PixArt, Hsinchu, Taiwan, which has a resolution of 1200 dpi. This high resolution makes it suitable for tracking finger gestures with precision and control. The sensor was extracted from the Unique Station Mini Wireless Finger Mouse, China, and its circuit board was repurposed for integration into the thimble (
Figure 1). The sensor is mounted on the side of the index finger using a 3D-printed frame. This frame positions the sensor within a 3D-printed cap that has a flat surface, enhancing its tracking capability even on uneven or irregular surfaces.
The Digital Thimble is constructed from a lightweight and comfortable fabric material. The design of the frame casing underwent an iterative process to ensure that the sensor cap makes contact with the surface in a way that aligns with a comfortable finger posture. Various sensor placement options were evaluated, including positioning it underneath the index finger and over the index finger. However, placing the sensor in these positions compromised both comfort and the functionality of the pressure sensor. As a result, the sensor was positioned on the side of the finger, achieving an optimal balance between tracking accuracy and user comfort.
To detect touch and contact forces, a Force Sensing Resistor (FSR) 400 series, Interlink Electronics, Camarillo, CA, USA, pressure sensor (FSR 400:
https://digikey.com/en/htmldatasheets/production/1184367/0/0/1/34-00022 (accessed on 11 March 2024)) was attached to the tip of the index finger; it was coated with silicone to prevent irritation. The sensor has a 12.7 mm diameter with a 20 mm² sensing area and a sensing range between 100 g and 10 kg, which is sufficient for detecting the contact force from an index finger. The circular shape of the sensor is ideal for measuring force from the fingertip. The FSR is connected to an Arduino Uno REV3 microcontroller, Arduino, Ivrea, Italy, housed within a 3D-printed casing worn on the wrist (Figure 1). In the absence of force, the FSR exhibits a resistance slightly greater than 1 MΩ. As the pressure on the sensor increases, its resistance decreases correspondingly, allowing for the measurement of the applied force.
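An FSR is typically read through a voltage divider on an ADC pin. The sketch below converts a 10-bit Arduino reading into an inferred FSR resistance; the 10 kΩ fixed resistor, 5 V supply, and divider orientation are assumptions for illustration, as the paper does not specify the circuit values:

```python
def fsr_resistance_ohms(adc_value, r_fixed=10_000, vcc=5.0, adc_max=1023):
    """Infer FSR resistance from a 10-bit ADC reading, assuming a voltage
    divider with the FSR between Vcc and the ADC pin and a fixed resistor
    (r_fixed) from the pin to ground. With no touch the FSR is above
    1 MOhm, so the pin reads near 0 V; pressing lowers the resistance
    and raises the reading."""
    v_out = adc_value * vcc / adc_max
    if v_out <= 0.0:
        return float("inf")  # open circuit: no measurable contact
    # Divider equation solved for the unknown (FSR) leg.
    return r_fixed * (vcc - v_out) / v_out
```

A calibration curve from the FSR 400 datasheet would then map this resistance to an approximate force in grams, since the sensor's conductance grows roughly linearly with applied force.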
The design prioritizes user comfort and wearability. Hence, lightweight components were selected for the circuitry to ensure that the device is comfortable to wear on the wrist. Additionally, a flexible material was chosen for the thimble to accommodate various finger sizes, enhancing the device’s versatility and practicality for extended use.
Force-Based Target Selection
In terms of pointing, the Digital Thimble mirrors the functionality of a traditional mouse. Users apply extra force on the surface to activate the cursor, then slide their index finger while keeping the thimble’s cap in contact with the surface. The optical sensor detects the movements and adjusts the cursor’s position accordingly. The translation of finger movements into cursor movements is facilitated through the Unity3D mouse input API. The device uses contact force for target selection, with two different approaches developed for this purpose:
On-press: Users navigate the cursor to the target and then exert extra pressure to select the target.
On-release: Users navigate the cursor to the target while maintaining extra pressure and then either release the extra pressure or lift their finger to confirm selection.
The system detects extra pressure when users’ contact force exceeds 400 g. This threshold was carefully chosen through multiple lab trials to maintain a balance between detection accuracy and user comfort. It is set at a level that is high enough to avoid accidental activations yet low enough to ensure a comfortable user experience.
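The two selection methods can be modeled as small state machines over the sampled contact force. The sketch below uses the 400 g threshold described above; the class names and per-sample update interface are illustrative assumptions, not the actual implementation:

```python
PRESS_THRESHOLD_G = 400  # lab-tuned threshold for "extra pressure"

class OnPressSelector:
    """Fires a selection event when the contact force rises above the
    threshold, then re-arms once the force drops back below it."""
    def __init__(self, threshold=PRESS_THRESHOLD_G):
        self.threshold = threshold
        self._armed = True
    def update(self, force_g):
        if self._armed and force_g > self.threshold:
            self._armed = False
            return "select"
        if force_g <= self.threshold:
            self._armed = True
        return None

class OnReleaseSelector:
    """Fires a selection event when extra force is released (or the finger
    is lifted) after having exceeded the threshold."""
    def __init__(self, threshold=PRESS_THRESHOLD_G):
        self.threshold = threshold
        self._pressing = False
    def update(self, force_g):
        if force_g > self.threshold:
            self._pressing = True
            return None
        if self._pressing:
            self._pressing = False
            return "select"
        return None
```

Each selector would be driven once per force sample; only the threshold crossing, not the absolute force, triggers the event.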
The system provides visual feedback to inform users of its
active (ready for cursor positioning) and
inactive status. The cursor is initially black but turns green when it is ready for positioning. Specifically, with the on-press selection method, the cursor turns green when the system senses the finger’s presence on a surface. With the on-release method, the cursor turns green when the system detects extra pressure applied to a surface. The use of green to indicate the active state was chosen due to its universal association with the “go” signal, helping users intuitively understand the system’s current operational mode [
62].
4. Evaluation Protocol
A comprehensive evaluation of the Digital Thimble was conducted through two user studies. The first study applied Fitts’ law principles to compare the thimble’s performance against two commercial input devices: an Oculus Touch controller, Oculus VR, Menlo Park, CA, USA, and an AOKID Creative finger mouse, Guangzhou, China. This study also assessed the effectiveness of the on-press and on-release selection methods. The second study further investigated the performance of the three input devices in two common virtual reality scenarios: teleportation and sorting. These studies provide valuable insights into the thimble’s functionality and its comparative advantages and disadvantages in practical virtual reality applications.
4.1. Experimental Devices
An Oculus Touch Controller (
Figure 2) was used as the baseline condition in the evaluations since it is the most common input device used in VR. The embedded sensors of the controller can detect users’ hand position and orientation within a virtual space. The controller includes three physical buttons, one thumbstick, one thumbrest, and a trigger for interacting with virtual objects. In the evaluations, the controller is used to point at objects, using raycasting, and then the trigger is pulled to confirm selection. The Unity3D Oculus integration was used to translate the controller’s horizontal and vertical movements into the cursor’s movements along the
x and
y axes, respectively. Movements along the
z axis (depth) were disregarded for simplicity.
A commercial input device popular in the Asian market, the AOKID Creative Finger Mouse (
Figure 2), was also included as it has the form factor closest to that of the Digital Thimble. Both devices are worn on the finger, allowing users to perform other tasks (e.g., typing on a keyboard) while wearing them. Users wear the device on either their index or middle finger, with the optical sensor facing the open side of the finger. Like conventional mice, the finger mouse has a left and a right button, which are pressed with the thumb. To select a target, users bend the finger to place the tip of the device on the surface, move the hand to reposition the cursor, and then press the left button to confirm selection. As with the controller, the movements of the mouse were mapped to the cursor’s movements using the Unity3D mouse input API. It is important to note that while the use of traditional mice has been thoroughly investigated in VR, the finger mouse has not yet been evaluated in this context.
4.2. Experimental System
Our experimental system was developed using Unity3D v2019.4.8f1, with the Oculus Unity Integration toolkit incorporated to facilitate support for Oculus controllers. This setup allowed for the control of the cursor using the three devices under investigation. Users could manipulate the cursor and select targets using either the on-press or the on-release methods with any of these devices.
To use the on-press method with the controller and the mouse, users move the cursor to the target and then confirm their selection by pressing the left button (for the mouse) or the trigger (for the controller). For the on-release method, users press and hold the left button (mouse) or the trigger (controller), reposition the cursor to the target, and then release the button or the trigger to confirm their selection.
5. User Study 1: Fitts’ Law
This study conducted a comparative analysis of the target selection performance achieved using the Digital Thimble, a controller, and a finger mouse, as well as the efficiency of the on-press and on-release selection methods, based on Fitts’ law principles. Fitts’ law, as outlined in ISO 9241-9 [
63] and ISO 9241-411 [
64], is a standard method for assessing target selection efficiency on computing systems [
65,
66]. This method typically involves a 2D task where targets are arranged in a circular pattern, requiring selections across the circle (
Figure 3). Each selection spans a distance or amplitude (A) equal to the circle’s diameter, with the selection time (MT) recorded and averaged across trials. The task’s difficulty is measured by the index of difficulty (ID) using the formula ID = log₂(A/W + 1), where W is the target width. Performance is gauged by throughput (TP, in bits per second or bps), calculated as the ratio of the effective index of difficulty (IDe) to MT, where IDe = log₂(A/We + 1), and adjusted for accuracy based on the standard deviation of the selection coordinates (SDx), with We = 4.133 × SDx.
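The ISO-style throughput computation can be sketched in a few lines. The function name and input format are assumptions; the selection offsets are taken to be already projected onto the task axis:

```python
import math
import statistics

def sequence_throughput(amplitude, movement_times_ms, selection_offsets):
    """ISO 9241-9 style throughput (bits/s) for one sequence of trials.
    selection_offsets: signed distances of each selection endpoint from
    the target centre, measured along the task axis."""
    sd_x = statistics.stdev(selection_offsets)        # SDx
    w_e = 4.133 * sd_x                                # effective width We
    id_e = math.log2(amplitude / w_e + 1)             # effective difficulty IDe (bits)
    mt_s = statistics.mean(movement_times_ms) / 1000  # mean movement time (s)
    return id_e / mt_s
```

The accuracy adjustment is what makes throughput comparable across devices: a device whose selections scatter widely gets a larger We, a lower IDe, and hence a lower throughput even at equal speed.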
5.1. Participants
Twelve participants took part in the user study (M = 27.5 years, SD = 4.7). Six of them identified as female, and six as male. All participants had attained a university-level education and self-reported as right-handed. Five participants had previous experience with VR. None required corrective eyeglasses. Each participant was compensated with USD 15 for their involvement in the study.
5.2. Apparatus
The experimental setup was run on a Windows 10 HP OMEN desktop, powered by an AMD Ryzen 5 2500X Quad-Core processor, 8 GB of RAM, and an Nvidia GeForce GTX 1060 graphics card. The setup featured an Oculus Rift HMD with an OLED display offering a resolution of 2160 × 1200 pixels, a refresh rate of 90 Hz, and a 110° FOV. It was also connected to an HP Omen 32-inch gaming monitor with a 2560 × 1440 pixel resolution. The Fitts’ law protocol was implemented using Unity3D v2019.4.8f1. The controller weighed 153 g, the finger mouse 25 g, and the Digital Thimble 124 g with its circuit box (5 g without).
5.3. Design
The experiment had a 3 × 2 × 3 × 3 within-subject design. The independent variables and levels were as follows:
Device (Mouse, Thimble, Controller);
Selection Method (Press, Release);
Amplitude (30, 115, 200 pixels);
Width (8, 16, 24 pixels).
There were fifteen trials per sequence, with the selected amplitudes ranging between 30 and 200 pixels to accommodate the headset’s field of view (FOV). Amplitudes above 200 pixels necessitated additional head movements for item visibility, while those below 30 pixels were deemed excessively small. Widths were chosen between 8 and 24 pixels, reflecting the minimum size comfortably visible through the HMD, with widths exceeding 24 pixels considered too large and impractical for the study’s objectives.
The dependent variables in the study included throughput, movement time, target re-entries, and the error rate. Target re-entries refer to the number of times the cursor re-entered a target in a trial after its initial entry, measured as a count per trial. The error rate, on the other hand, is the average percentage of trials in which selections were made outside of the intended target boundaries, reflecting the accuracy of target selections.
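Target re-entries as defined above can be counted directly from a cursor trace. This is a sketch under assumed conventions (circular targets, a trace of (x, y) samples); the paper does not specify the logging format:

```python
def count_reentries(trace, target_center, target_radius):
    """Count how many times the cursor re-entered a circular target after
    its initial entry. trace: sequence of (x, y) cursor samples for one
    trial."""
    inside_prev = False
    entries = 0
    for x, y in trace:
        dx, dy = x - target_center[0], y - target_center[1]
        inside = dx * dx + dy * dy <= target_radius ** 2
        if inside and not inside_prev:
            entries += 1  # count each outside-to-inside transition
        inside_prev = inside
    return max(0, entries - 1)  # re-entries exclude the first entry
```

A trial in which the cursor enters the target once and stays there thus scores zero, matching the per-trial counts reported later.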
5.4. Procedure
The study started with a researcher explaining the research goals and demonstrating the system to participants. After this introduction, participants gave their consent by signing an informed consent form and filled out a demographics questionnaire. For comfort and to ensure reliability, participants were seated at a desk in a posture conducive to using the thimble and mouse on the desk surface and the controller in the air for mid-air gestures (
Figure 4). Chair adjustments were made as needed for optimal comfort. Participants then underwent a 10 min training session, which involved selecting ten circular targets, each 18 pixels in diameter, arranged within a 120-pixel-diameter circle. This training covered the use of all three devices across both selection methods, resulting in six training conditions.
After the training session, participants moved on to the main study, selecting fifteen targets using the six available methods in a counterbalanced order. They were advised to balance speed and accuracy while staying comfortable. To avoid fatigue, a 2 min break was scheduled after every three sequences, and a 3 min break after completing each condition. Participants could request at most 3 extra breaks or extend the scheduled ones by 3 min as needed.
After finishing the exercise under all conditions, participants completed the NASA-TLX questionnaire [
67] to assess the perceived workload of the methods used on a 20-point scale. They also filled out a custom questionnaire to rate their perceived performance and express their preferences for the selection methods on a 5-point Likert scale. The study wrapped up with a short debriefing session, allowing participants to share comments and insights about their experimental experience and their questionnaire responses.
5.5. Results
The entire study, including the demonstration, questionnaires, and breaks, took about 50 min to complete. A Martinez–Iglewicz test confirmed the normal distribution of response variable residuals. Mauchly’s sphericity test verified the equality of the variances across populations, allowing for the use of a repeated-measures ANOVA in the analyses. For subjective data involving more than two levels, a Friedman test was employed. This work also reports effect sizes for statistically significant findings: eta-squared (η²) for ANOVA, Pearson’s r for the Wilcoxon Signed-Rank test, and Kendall’s W for the Friedman test.
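For the nonparametric analyses, Kendall’s W follows directly from the Friedman chi-square statistic via W = χ²/(n(k − 1)). A minimal stdlib-only sketch, assuming no tied scores within a participant (a real analysis would use a statistics package with tie correction):

```python
def friedman_kendalls_w(data):
    """data: one list per participant, each holding one score per condition.
    Returns (Friedman chi-square statistic, Kendall's W effect size).
    Assumes no ties within a participant's row."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # Rank the k conditions within this participant (1 = lowest score).
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    chi2 = 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
    w = chi2 / (n * (k - 1))  # Kendall's W in [0, 1]
    return chi2, w
```

W near 1 indicates that participants ranked the conditions almost identically, while W near 0 indicates no agreement.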
5.5.1. Throughput
An ANOVA identified a significant effect of the device used on the throughput (
). The mouse achieved the highest mean throughput at 3.11 bps, followed by the controller at 2.89 bps and the thimble at 2.61 bps. There was also a significant effect based on selection method (
). The on-press selection method demonstrated a superior performance, yielding a mean throughput of 3.20 bps, whereas the on-release method resulted in a slightly lower mean throughput of 2.54 bps. A Tukey–Kramer test revealed that all input devices were significantly different from each other in terms of performance, but they all exhibited superior performance when using the on-press selection method compared to the on-release selection method.
Figure 5 displays the average throughput data categorized by input device and selection method.
5.5.2. Movement Time
An ANOVA identified a significant effect of the device used on movement time (
). The mouse exhibited the quickest performance, with a mean time of 1258 ms, followed by the controller at 1327 ms and the thimble at 1487 ms. The selection method also had a significant effect (
). The on-press selection method achieved the fastest performance, with a mean time of 1185 ms, while the on-release method resulted in a slightly longer mean time of 1530 ms. A Tukey–Kramer test revealed the thimble required a significantly longer movement time compared to both the mouse and the controller. In addition, the test showed that all input methods performed significantly better when using the on-press selection method compared to the on-release method.
Figure 6 illustrates the average movement time categorized by input device and selection method.
5.5.3. Target Re-Entries
An ANOVA failed to identify a significant effect of the device used on target re-entries (
). The results indicated that the input devices produced similar levels of target re-entries. Specifically, the thimble exhibited the lowest average count of target re-entries, at 0.22 per trial, followed closely by the mouse and the controller, both at 0.25 per trial. However, the selection method had a significant effect (
). The on-release selection method demonstrated the lowest average count at 0.22 per trial, while the on-press method yielded a slightly higher average count of 0.26 per trial. The results of a Tukey–Kramer test did not reveal any clear and consistent patterns regarding the pairing of specific input devices with different selection methods.
Figure 7 displays the average number of target re-entries, organized by input device and selection method.
Figure 8 and
Figure 9 provide examples of cursor traces for the six conditions examined in this study.
5.5.4. Error Rate
An ANOVA identified a significant effect of the device used on the error rate (
). The thimble achieved the lowest error rate at 2.01%, followed by the mouse at 2.34% and the controller at 2.39%. There was no significant effect from the selection method (
). Both the on-press and on-release selection methods resulted in comparable error rates. Specifically, the on-press method had an error rate of 2.61%, while the on-release method exhibited a slightly lower error rate of 2.23%. A Tukey–Kramer test revealed that the thimble was significantly more accurate than the controller.
Figure 10 shows the average error rate categorized by input device and selection method.
5.5.5. Perceived Workload
A Friedman test identified that there was a significant effect of condition (device × selection method) on mental demand (
), physical demand (
), performance (
), and frustration (
). However, there was no significant effect on effort (
) or temporal demand (
).
Figure 11 illustrates the median perceived workload ratings for all conditions in the user study.
5.5.6. Perceived Usability
A Friedman test identified a significant effect of condition (device × selection method) on perceived speed (
), accuracy (
), user-friendliness (
), naturalness (
), and preference (
).
Figure 12 illustrates the median perceived performance ratings for all conditions in the user study.
5.6. Discussion
Similar to the performance of the traditional mouse in VR [
6,
11], the finger mouse produced a higher throughput (
Figure 5) and faster movement times (
Figure 6) compared to the controller and the Digital Thimble. The throughput and movement times were relatively comparable between the Digital Thimble and the controller. However, as participants were less experienced with both the controllers and, particularly, the Digital Thimble, it is possible that their performance while using these devices could improve with practice. A longitudinal study is necessary to fully assess the effects of practice on throughput and movement times. The Digital Thimble demonstrated better precision (
Figure 7) and accuracy (
Figure 10) than both the controller and the finger mouse. This could be attributed to the Digital Thimble’s form factor, which integrates seamlessly with the user’s finger and does not require the use of an additional finger to pull a trigger or press buttons, unlike the controller or the finger mouse.
The on-press selection method consistently outperformed the on-release method in terms of throughput (
Figure 5) and movement times (
Figure 6) across all devices. This is likely because the on-release method requires an additional action (e.g., pressing the left mouse button or the controller trigger) while simultaneously moving the cursor. The effect of the on-release method was more adverse with the Digital Thimble, likely due to the friction caused by the device while applying extra pressure during cursor movement. Interestingly, although the controller outperformed the Digital Thimble with both selection methods, the difference was not significant when using the on-press method. This suggests that with sufficient practice, the Digital Thimble using the on-press method could potentially match the performance of the controller.
Additionally, the on-release selection method led to fewer target re-entries than the on-press method (
Figure 7), enhancing users’ precision. This is likely a speed–accuracy trade-off: because the on-release method is slower, users made fewer errors and achieved greater precision.
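The two selection methods can be pictured as opposite edges of a force signal crossing a threshold with hysteresis. The following sketch illustrates the idea; the class name and threshold values are hypothetical, not the calibrated values used in the prototype:

```python
class ForceSelector:
    """Sketch of on-press vs. on-release selection from a pressure stream.
    Thresholds are illustrative, not the study's calibrated values."""

    def __init__(self, press_threshold=0.6, release_threshold=0.4):
        # Hysteresis: the rising edge past press_threshold marks "extra force
        # applied"; the falling edge past release_threshold marks "released".
        self.press_threshold = press_threshold
        self.release_threshold = release_threshold
        self.pressed = False

    def update(self, force, mode):
        """Feed one normalized force sample (0..1); return True when a
        selection event fires under the given mode ('press' or 'release')."""
        fired = False
        if not self.pressed and force >= self.press_threshold:
            self.pressed = True
            fired = (mode == "press")      # on-press: fire when extra force is applied
        elif self.pressed and force <= self.release_threshold:
            self.pressed = False
            fired = (mode == "release")    # on-release: fire when force is released
        return fired
```

The hysteresis gap between the two thresholds prevents a single noisy sample near the boundary from producing spurious selections.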
Subjective Feedback
Although the Digital Thimble was slower than the other devices, participants rated it similarly in terms of speed and accuracy (
Figure 12). Notably, participants preferred the Digital Thimble for continued use in VR systems, highlighting its convenience and user experience. However, they expressed concerns about the difficulty of determining the correct amount of pressure to apply, which could affect usability. To address this, participants suggested incorporating haptic feedback (e.g., vibration) to provide better guidance. Such feedback could also benefit the on-release selection method, improving both performance and usability.
Participants also found the Digital Thimble to be the most comfortable device (lower task load) among the options available (
Figure 11). Conversely, the controller was associated with significantly higher physical discomfort due to the need to perform mid-air gestures, which are known to induce physical strain [
68]. Participants described their experience with the controller as
“physically demanding”,
“stressful”, and
“uncomfortable”. While the finger mouse was considered more efficient, participants found it uncomfortable for extended use. Some mentioned discomfort from using the thumb for clicking, while others appreciated its unique characteristics despite this drawback. The higher comfort rating for the Digital Thimble could be attributed to its compact and adaptable design, which allows comfortable postures when worn. Furthermore, when paired with the on-press method, participants found it more natural and familiar, akin to operating a smartphone. However, one participant expressed discomfort with wearing something on the wrist. Future iterations of the Digital Thimble prototype could benefit from a self-contained design that addresses the discomfort associated with wrist-worn accessories. These results highlight the potential of the Digital Thimble to offer versatile and user-friendly input in virtual reality environments.
5.7. Limitations
One limitation of this study is its small, non-diverse sample size.
Table 1 summarizes the effect sizes of all significant relationships identified in the ANOVAs. Most of the results showed medium to large effects, following Cohen’s interpretation, where an η² of 0.01 indicates a small effect, 0.06 a medium effect, and 0.14 a large effect [69]. Two significant relationships with small effects were also observed and should be interpreted with caution. In contrast, all statistically significant relationships in the subjective feedback produced medium effects (Kendall’s W between 0.2 and 0.4), based on Cohen’s interpretation of Kendall’s W, where W = 0.1 indicates a small effect, 0.3 a medium effect, and 0.5 a large effect [69].
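The benchmark values above can be captured in a small threshold-based helper for labeling effect sizes; this is an illustrative sketch, with function names chosen here for clarity:

```python
def cohen_label(value, small, medium, large):
    """Classify an effect size against Cohen-style benchmark thresholds."""
    if value >= large:
        return "large"
    if value >= medium:
        return "medium"
    if value >= small:
        return "small"
    return "negligible"

def eta_squared_label(eta_sq):
    # Cohen's benchmarks for eta-squared: 0.01 small, 0.06 medium, 0.14 large
    return cohen_label(eta_sq, 0.01, 0.06, 0.14)

def kendalls_w_label(w):
    # Cohen's benchmarks for Kendall's W: 0.1 small, 0.3 medium, 0.5 large
    return cohen_label(w, 0.1, 0.3, 0.5)
```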
6. User Study 2: Teleportation and Sorting
A second user study was conducted to compare the performance of the three input devices in teleportation and object sorting tasks. This study used the same setup as the previous one but exclusively utilized the on-press selection method due to its superior performance in the first study.
6.1. Participants
Twelve volunteers took part in this study (M = 31.8 years, SD = 5.7). Six identified as female and six as male. All participants had attained a university-level education and self-identified as right-handed. None required corrective eyeglasses. Approximately half of the participants (five out of twelve) had prior experience with virtual reality. None of them had participated in the previous study. They were compensated with USD 15 for their participation.
6.2. Design
This study employed a within-subject design with two sessions: teleportation and sorting. The independent variable was the input device, with three levels: the finger mouse, the Digital Thimble, and the controller. In the teleportation session, participants teleported to eight predetermined targets, while the sorting session involved sorting ten sequences. Both the session and device orders were counterbalanced to minimize order effects. The dependent variables were two performance metrics: task completion time and accuracy. Task completion time is the average time participants took to correctly complete a teleportation or sorting task, measured in milliseconds. Accuracy, measured as a percentage, is the average proportion of tasks performed correctly. For teleportation, an error was recorded when a participant selected a location outside the intended teleport target. In the sorting task, an error occurred when the completed sequence deviated from the correct order.
6.2.1. Teleportation Tasks
The teleportation scene featured eight targets on a 25 × 50 m plane (
Figure 13), designed as cylindrical objects with dynamic animations for visibility (slowly moving up and down) and labeled with numerical identifiers for clarity. The teleportation tasks used the “point and select” technique and raycasting [
44], in which users point a ray at their desired destination and then confirm the selection to teleport. The ray extended up to 2 m, allowing targeting at various distances. The cursor movement and target selection approaches were identical to those used in the Fitts’ law study, using only the on-press method. The experimental system changed the viewpoint according to cursor movement, similar to first-person shooter games. In the study, participants visited eight predetermined destinations, following a route with varying distances (6–32 m) and angles (10–130°). The first four targets allowed direct teleportation, while the last four required viewpoint rotation. Errors, marked by a beep sound, occurred when participants deviated from the correct target.
Figure 14 illustrates three participants using the three devices to teleport.
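The “point and select” confirmation described above reduces to a ray–plane intersection followed by a containment test. A minimal geometric sketch, assuming a flat ground plane at y = 0 and an illustrative range limit (function and parameter names are hypothetical, not from the study’s implementation):

```python
def ray_ground_hit(origin, direction, max_range=20.0):
    """Intersect a pointing ray with the ground plane y = 0.
    Returns the hit point (x, 0, z), or None if the ray points away
    from the ground or the hit lies beyond max_range."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:                                  # parallel to or away from the ground
        return None
    t = -oy / dy                                 # parameter where the y-component reaches 0
    traveled = t * (dx * dx + dy * dy + dz * dz) ** 0.5
    if traveled > max_range:
        return None
    return (ox + t * dx, 0.0, oz + t * dz)

def within_target(hit, center, radius):
    """Confirm the selected point lies inside a cylindrical teleport target."""
    if hit is None:
        return False
    dx, dz = hit[0] - center[0], hit[2] - center[2]
    return dx * dx + dz * dz <= radius * radius
```

On confirmation (the on-press action), the user’s viewpoint is moved to the hit point if `within_target` holds; otherwise, an error is registered.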
6.2.2. Sorting Tasks
For sorting, an immersive scene was created with four numbered cubes placed on a table in front of the users (
Figure 15). Users picked up a cube by casting a ray onto the target and performing the corresponding selection action: pulling the trigger with the controller, pressing the left key with the finger mouse, or applying extra pressure with the Digital Thimble. When the ray makes contact with a cube, it changes color to provide visual feedback (
Section 3). To reposition the cube, users move it by moving the cursor, and then drop the cube by performing the selection action again. Cube movement was limited to the
x and
y axes, with gravity ensuring the cubes fell onto the table if not placed directly on it. To keep users focused and undistracted, the scene was deliberately simple, including just a table, numbered cubes, and a “Start/Finish” toggle button. In the study, participants were asked to arrange the four numbered cubes in ascending order. To balance the complexity of the task, ten unique sequences were created using the Levenshtein distance (LD) algorithm [70], with edit distances between 2 and 4. Specifically, there were three sequences with LD = 2, four sequences with LD = 3, and three sequences with LD = 4. Participants pressed the toggle button to start a sorting task and to see the next task. The system did not provide notifications about sorting errors to avoid potential confounding factors.
Figure 16 illustrates three participants using the three devices to sort cubes.
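The sequence generation described above can be reconstructed by computing the Levenshtein distance of every permutation of the four cubes from the sorted order and grouping permutations by distance. This is a sketch of the approach, not the authors’ exact implementation:

```python
from itertools import permutations

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert, delete, substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def sequences_by_distance(target=(1, 2, 3, 4)):
    """Group every permutation of the cubes by edit distance from sorted order."""
    groups = {}
    for p in permutations(target):
        groups.setdefault(levenshtein(p, target), []).append(p)
    return groups
```

Distances of 2–4 cover every non-trivial permutation of four distinct cubes, since a single edit can never turn one permutation into another, leaving ample candidates for the ten sequences used in the study.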
6.3. Procedure
This study followed the same procedure as the previous study, with the Fitts’ law tasks replaced by teleportation and sorting tasks. All participants completed two 5 min training sessions to familiarize themselves with the sorting and teleportation tasks, in which they arranged three different sequences of cubes and teleported to three different destinations. The training tasks were not repeated in the main study. Both the devices and tasks were counterbalanced to reduce order effects. As in the previous study, participants completed perceived workload and usability questionnaires upon finishing.
6.4. Results
The entire study, including demonstrations, questionnaires, and breaks, took approximately fifty minutes to complete. A Martinez–Iglewicz test confirmed that the residuals of the response variables were normally distributed. Mauchly’s test indicated that the sphericity assumption was not violated, allowing for the use of repeated-measures ANOVAs in the analyses. For subjective data with more than two levels, a Friedman test was employed. This work also reports effect sizes for statistically significant findings, including eta-squared (η²) for ANOVAs, Pearson’s r for the Wilcoxon signed-rank test, and Kendall’s W for the Friedman test.
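For the subjective ratings, the Friedman statistic and its Kendall’s W effect size (W = χ²F / (n(k − 1)), for n participants and k conditions) can be sketched as follows. The implementation assumes no tied ratings within a participant, and the ratings shown are illustrative, not the study’s data:

```python
def friedman_chi_square(ratings):
    """Friedman chi-square for a ratings table: rows = participants,
    columns = conditions. Sketch only: assumes no ties within a row."""
    n, k = len(ratings), len(ratings[0])
    rank_sums = [0.0] * k
    for row in ratings:
        order = sorted(range(k), key=lambda j: row[j])   # rank conditions within a participant
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    ssbn = sum(r * r for r in rank_sums)                 # sum of squared rank sums
    return 12.0 / (n * k * (k + 1)) * ssbn - 3.0 * n * (k + 1)

def kendalls_w(chi_sq, n, k):
    """Kendall's W effect size from the Friedman statistic."""
    return chi_sq / (n * (k - 1))

# Illustrative 7-point ratings: 5 participants x 3 devices (not the study's data)
ratings = [[6, 5, 3], [7, 5, 2], [6, 5, 4], [5, 4, 3], [7, 6, 2]]
chi = friedman_chi_square(ratings)
w = kendalls_w(chi, n=5, k=3)
```

The statistic is compared against a chi-square distribution with k − 1 degrees of freedom; W normalizes it to the 0–1 range for interpretation against Cohen’s benchmarks.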
6.4.1. Task Completion Time
An ANOVA failed to identify a significant effect of the device on the teleportation task’s completion time (
). On average, the mouse achieved the fastest task completion time for teleportation at 27,410 ms, followed by the Digital Thimble at 29,137 ms and the controller at 29,880 ms. However, an ANOVA did identify a significant effect of the device on the sorting task’s completion time (
). The mouse achieved the fastest task completion time in the sorting task at 13,040 ms, followed by the controller at 15,834 ms and the Digital Thimble at 16,056 ms. A Duncan test indicated that the mouse was significantly faster than the other two devices in sorting tasks, while there was no significant difference between the controller and the Digital Thimble.
Figure 17 presents the average task completion time, categorized by task and input device.
6.4.2. Accuracy
An ANOVA did not reveal a significant effect of the device used on teleportation accuracy (
). The mouse exhibited the highest accuracy, with a 99.8% success rate, followed closely by the controller at 99.7% and the Digital Thimble at 99.6%. An ANOVA also failed to identify a significant effect of the device on sorting accuracy (
). All three devices exhibited excellent accuracy in the sorting task, with the mouse achieving a 98.3% accuracy rate, the Digital Thimble achieving 98.3%, and the controller achieving 97.5%.
Figure 18 presents the average accuracy rate, categorized by task and input device.
6.4.3. Perceived Workload
A Friedman test identified a significant effect of the device used on physical demand (
). However, no significant effect on mental demand (
), temporal demand (
), performance (
), effort (
), or frustration (
) was identified.
Figure 19 illustrates the median perceived workload ratings for all conditions.
6.4.4. Perceived Usability
A Friedman test failed to identify a significant effect of the device on speed (
), accuracy (
), user-friendliness (
), naturalness (
), or preference (
).
Figure 20 illustrates the median perceived performance ratings for all conditions in the user study.
7. Discussion
Similar to the first study, the finger mouse outperformed both the controller and the Digital Thimble in terms of speed, allowing participants to complete tasks more swiftly. In the teleportation scenario, participants were faster when using the Digital Thimble compared to the controller (
Figure 17), but the difference was not statistically significant. All devices exhibited high accuracy in both scenarios (
Figure 18), contributing to participants’ favorable opinions of the devices, which were reflected in their lower frustration ratings.
Participants consistently praised the Digital Thimble for its usability, describing it as
“intuitive”,
“innovative”,
“easy”,
“amazing”, and
“refreshing”. They found it straightforward to learn and more comfortable than other devices, reporting lower physical and mental strain (
Figure 19). This comfort is likely due to the Digital Thimble’s design, which provides support and reduces fatigue compared to holding a controller in mid-air.
One participant highlighted the ease of using the Digital Thimble and finger mouse on a surface, noting that the controller caused hand strain. Another remarked that the Digital Thimble felt like an extension of their hand, enhancing both comfort and usability. Many drew parallels between the Digital Thimble’s interaction style and smartphone interactions, noting that this familiarity made the device easy to learn. However, one participant preferred the controller for tasks requiring vertical movements, attributing this preference to familiarity with controllers in VR settings.
These findings showcase the Digital Thimble’s potential as an effective and comfortable input device in VR. While the finger mouse demonstrated a superior quantitative performance, it requires improvements to reduce hand strain. The controller, although widely used, may not be ideal for prolonged use or in certain situations. Thus, alternatives like the Digital Thimble could complement traditional devices by providing additional comfort and convenience.
8. Conclusions
This article introduced the Digital Thimble, a force-based wearable input device for VR that enables free-hand interactions by varying the touch contact force on a surface. It tracks finger movements with an optical mouse sensor and detects contact force with a pressure sensor. We demonstrated the device’s effectiveness as a versatile input and interaction tool through two user studies. In the studies, the finger mouse, previously unexplored in VR contexts, showed a superior quantitative performance, while the performance of the Digital Thimble was comparable to that of a controller. Furthermore, participants preferred the Digital Thimble for its convenience and comfort. These findings underscore the potential of finger-wearable devices in VR and advocate for further research in this direction.
9. Future Work
Future work on the Digital Thimble will focus on enhancing its functionality, usability, and adaptability to diverse VR applications. One key area is the integration of haptic feedback, such as vibration or tactile cues, to provide users with more nuanced guidance during interactions. This enhancement could improve its accuracy, especially in tasks that rely on subtle pressure changes for selection and control. Furthermore, incorporating inertial measurement units (IMUs) and other sensors could extend the applicability of the thimble by enabling more complex gesture recognition, opening up new possibilities for free-hand VR interactions. Exploring adaptive algorithms that adjust to individual users’ pressure levels and movement patterns may also improve its overall performance and user experience.
Another avenue for future work involves exploring ways to enhance the device’s performance while retaining its user-friendly features. For example, enhancing sensor accuracy or optimizing interaction mechanics could improve task speed without compromising comfort, allowing users to complete tasks more efficiently.
Longitudinal studies will also be conducted to examine how practice impacts the thimble’s performance, particularly regarding throughput and movement times. These studies could assess the potential of the thimble to match or surpass traditional controllers as users become more accustomed to the device. Finally, future research will explore the effectiveness of the Digital Thimble in different VR tasks and environments, including confined and seated spaces. Expanding the size and diversity of our sample will allow for more generalizable results, ensuring that the device’s design and functionality accommodate a wide range of users and contexts.
Author Contributions
Conceptualization, A.S.A. and T.J.D.; methodology, A.S.A.; software, T.J.D.; validation, A.S.A. and T.J.D.; formal analysis, A.S.A. and T.J.D.; investigation, A.S.A. and T.J.D.; resources, A.S.A.; data curation, T.J.D.; writing—original draft preparation, T.J.D.; writing—review and editing, A.S.A.; visualization, A.S.A.; supervision, A.S.A.; project administration, A.S.A.; funding acquisition, A.S.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research was internally supported by a UC Merced Faculty Research Grant.
Institutional Review Board Statement
The study protocol was reviewed and approved by the Institutional Review Board of the University of California, Merced (UCM2018-004, 5 March 2024).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Grand View Research. Virtual Reality Market Size & Share Report, 2022–2030. 2022. Available online: https://www.grandviewresearch.com/industry-analysis/virtual-reality-vr-market (accessed on 1 March 2024).
- Smutny, P.; Babiuch, M.; Foltynek, P. A Review of the Virtual Reality Applications in Education and Training. In Proceedings of the 2019 20th International Carpathian Control Conference (ICCC), Krakow-Wieliczka, Poland, 26–29 May 2019; pp. 1–4. [Google Scholar] [CrossRef]
- Mishra, R.; Narayanan, M.D.K.; Umana, G.E.; Montemurro, N.; Chaurasia, B.; Deora, H. Virtual Reality in Neurosurgery: Beyond Neurosurgical Planning. Int. J. Environ. Res. Public Health 2022, 19, 1719. [Google Scholar] [CrossRef] [PubMed]
- Liu, X.; Zhang, J.; Hou, G.; Wang, Z. Virtual Reality and Its Application in Military. IOP Conf. Ser. Earth Environ. Sci. 2018, 170, 032155. [Google Scholar] [CrossRef]
- Dube, T.J.; Arif, A.S. Text Entry in Virtual Reality: A Comprehensive Review of the Literature. In Proceedings of the Human-Computer Interaction. Recognition and Interaction Technologies; Lecture Notes in Computer Science; Kurosu, M., Ed.; Springer Nature Switzerland: Cham, Switzerland, 2019; pp. 419–437. [Google Scholar] [CrossRef]
- Grubert, J.; Ofek, E.; Pahud, M.; Kristensson, P.O. Back to the Future: Revisiting Mouse and Keyboard Interaction for HMD-based Immersive Analytics. arXiv 2020, arXiv:2009.02927. [Google Scholar]
- Schneider, D.; Otte, A.; Kublin, A.S.; Martschenko, A.; Kristensson, P.O.; Ofek, E.; Pahud, M.; Grubert, J. Accuracy of Commodity Finger Tracking Systems for Virtual Reality Head-Mounted Displays. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020; pp. 804–805. [Google Scholar] [CrossRef]
- Demolder, C.; Molina, A.; Hammond, F.L.; Yeo, W.H. Recent Advances in Wearable Biosensing Gloves and Sensory Feedback Biosystems for Enhancing Rehabilitation, Prostheses, Healthcare, and Virtual Reality. Biosens. Bioelectron. 2021, 190, 113443. [Google Scholar] [CrossRef]
- Grubert, J.; Ofek, E.; Pahud, M.; Kristensson, P.O. The Office of the Future: Virtual, Portable, and Global. IEEE Comput. Graph. Appl. 2018, 38, 125–133. [Google Scholar] [CrossRef]
- Grubert, J.; Witzani, L.; Ofek, E.; Pahud, M.; Kranz, M.; Kristensson, P.O. Text Entry in Immersive Head-Mounted Display-Based Virtual Reality Using Standard Keyboards. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; pp. 159–166. [Google Scholar] [CrossRef]
- Zhou, Q.; Fitzmaurice, G.; Anderson, F. In-Depth Mouse: Integrating Desktop Mouse into Virtual Reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–17. [Google Scholar] [CrossRef]
- Perelman, G.; Serrano, M.; Raynal, M.; Picard, C.; Derras, M.; Dubois, E. The Roly-Poly Mouse: Designing a Rolling Input Device Unifying 2D and 3D Interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Republic of Korea, 18–23 April 2015; pp. 327–336. [Google Scholar] [CrossRef]
- Bérard, F.; Ip, J.; Benovoy, M.; El-Shimy, D.; Blum, J.R.; Cooperstock, J.R. Did “Minority Report” Get It Wrong? Superiority of the Mouse over 3D Input Devices in a 3D Placement Task. In Proceedings of the Human-Computer Interaction—INTERACT 2009; Lecture Notes in Computer Science; Gross, T., Gulliksen, J., Kotzé, P., Oestreicher, L., Palanque, P., Prates, R.O., Winckler, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 400–414. [Google Scholar] [CrossRef]
- Dube, T.J.; Johnson, K.; Arif, A.S. Shapeshifter: Gesture Typing in Virtual Reality with a Force-based Digital Thimble. In Proceedings of the Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, CHI EA ’22, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–9. [Google Scholar] [CrossRef]
- Spittle, B.; Frutos-Pascual, M.; Creed, C.; Williams, I. A Review of Interaction Techniques for Immersive Environments. IEEE Trans. Vis. Comput. Graph. 2022, 29, 3900–3921. [Google Scholar] [CrossRef] [PubMed]
- Kim, Y.M.; Rhiu, I.; Yun, M.H. A Systematic Review of a Virtual Reality System from the Perspective of User Experience. Int. J. Human Comput. Interact. 2020, 36, 893–910. [Google Scholar] [CrossRef]
- Doerner, R.; Geiger, C.; Oppermann, L.; Paelke, V.; Beckhaus, S. Interaction in Virtual Worlds. In Virtual and Augmented Reality (VR/AR): Foundations and Methods of Extended Realities (XR); Doerner, R., Broll, W., Grimm, P., Jung, B., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 201–244. [Google Scholar] [CrossRef]
- Kim, W.; Jung, J.; Xiong, S. VRMouse: Mouse Emulation with the VR Controller for 2D Selection in VR. In Proceedings of the Advances in Usability, User Experience, Wearable and Assistive Technology; Advances in Intelligent Systems and Computing; Ahram, T., Falcão, C., Eds.; Springer: Cham, Switzerland, 2020; pp. 663–670. [Google Scholar] [CrossRef]
- Kim, Y.R.; Kim, G.J. HoVR-Type: Smartphone as a Typing Interface in VR Using Hovering. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 8–10 January 2017; pp. 200–203. [Google Scholar] [CrossRef]
- Zhang, L.; Bai, H.; Billinghurst, M.; He, W. Is This My Phone? Operating a Physical Smartphone in Virtual Reality. In Proceedings of the SIGGRAPH Asia 2020 XR, SA ’20, Virtual, 4–13 December 2020; pp. 1–2. [Google Scholar] [CrossRef]
- Pham, D.M.; Stuerzlinger, W. Is the Pen Mightier than the Controller? A Comparison of Input Devices for Selection in Virtual and Augmented Reality. In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, Parramatta, NSW, Australia, 12–15 November 2019; pp. 1–11. [Google Scholar] [CrossRef]
- Jackson, B. OVR Stylus: Designing Pen-Based 3D Input Devices for Virtual Reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020; pp. 13–18. [Google Scholar] [CrossRef]
- Romat, H.; Fender, A.; Meier, M.; Holz, C. Flashpen: A High-Fidelity and High-Precision Multi-Surface Pen for Virtual Reality. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March–1 April 2021; pp. 306–315. [Google Scholar] [CrossRef]
- Besançon, L.; Issartel, P.; Ammi, M.; Isenberg, T. Mouse, Tactile, and Tangible Input for 3D Manipulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17, Denver, CO, USA, 6–11 May 2017; pp. 4727–4740. [Google Scholar] [CrossRef]
- Muender, T.; Reinschluessel, A.V.; Salzmann, D.; Lück, T.; Schenk, A.; Weyhe, D.; Döring, T.; Malaka, R. Evaluating Soft Organ-Shaped Tangibles for Medical Virtual Reality. In Proceedings of the Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, CHI EA ’22, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–8. [Google Scholar] [CrossRef]
- Ramcharitar, A.; Teather, R.J. EZCursorVR: 2D Selection with Virtual Reality Head-Mounted Displays. In Proceedings of the 44th Graphics Interface Conference, GI ’18, Toronto, ON, Canada, 8–11 May 2018; pp. 123–130. [Google Scholar] [CrossRef]
- Kharlamov, D.; Woodard, B.; Tahai, L.; Pietroszek, K. TickTockRay: Smartwatch-Based 3D Pointing for Smartphone-Based Virtual Reality. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, VRST ’16, Munich, Germany, 2–4 November 2016; pp. 365–366. [Google Scholar] [CrossRef]
- Bowman, D.A.; Wingrave, C.A.; Campbell, J.M.; Ly, V.Q.; Rhoton, C.J. Novel Uses of Pinch Gloves™ for Virtual Environment Interaction Techniques. Virtual Real. 2002, 6, 122–129. [Google Scholar] [CrossRef]
- Shigapov, M.; Kugurakova, V.; Zykov, E. Design of Digital Gloves with Feedback for VR. In Proceedings of the 2018 IEEE East-West Design & Test Symposium (EWDTS), Kazan, Russia, 14–17 September 2018; pp. 1–5. [Google Scholar] [CrossRef]
- Xu, Z.; Wong, P.C.; Gong, J.; Wu, T.Y.; Nittala, A.S.; Bi, X.; Steimle, J.; Fu, H.; Zhu, K.; Yang, X.D. TipText: Eyes-Free Text Entry on a Fingertip Keyboard. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST ’19, New Orleans, LA, USA, 20–23 October 2019; pp. 883–899. [Google Scholar] [CrossRef]
- Zand, G.; Arif, A.S. THUMBDRIVER: Telepresence Robot Control with a Finger-Worn Mouse. Available online: https://www.theiilab.com/pub/Zand_TELE2024_ThumbDriver.pdf (accessed on 1 November 2024).
- Li, Y.; Huang, J.; Tian, F.; Wang, H.A.; Dai, G.Z. Gesture Interaction in Virtual Reality. Virtual Real. Intell. Hardw. 2019, 1, 84–112. [Google Scholar] [CrossRef]
- Hoppe, A.H.; Klooz, D.; van de Camp, F.; Stiefelhagen, R. Mouse-Based Hand Gesture Interaction in Virtual Reality. In Proceedings of the HCI International 2023 Posters, Copenhagen, Denmark, 23–28 July 2023; Communications in Computer and Information Science. Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G., Eds.; Springer: Cham, Switzerland, 2023; pp. 192–198. [Google Scholar] [CrossRef]
- Buckingham, G. Hand Tracking for Immersive Virtual Reality: Opportunities and Challenges. Front. Virtual Real. 2021, 2, 728461. [Google Scholar] [CrossRef]
- Torres, T. Myo Gesture Control Armband Review. 2015. Available online: https://www.pcmag.com/reviews/myo-gesture-control-armband (accessed on 1 March 2024).
- Chen, Y.; Su, X.; Tian, F.; Huang, J.; Zhang, X.L.; Dai, G.; Wang, H. Pactolus: A Method for Mid-Air Gesture Segmentation within EMG. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16, San Jose, CA, USA, 7–12 May 2016; pp. 1760–1765. [Google Scholar] [CrossRef]
- Hou, W.j.; Chen, X.l. Comparison of Eye-Based and Controller-Based Selection in Virtual Reality. Int. J. Hum.-Comput. Interact. 2021, 37, 484–495. [Google Scholar] [CrossRef]
- Lu, Y.; Yu, C.; Shi, Y. Investigating Bubble Mechanism for Ray-Casting to Improve 3D Target Acquisition in Virtual Reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 22–26 March 2020; pp. 35–43. [Google Scholar] [CrossRef]
- Pietroszek, K. Raycasting in Virtual Reality. In Encyclopedia of Computer Graphics and Games; Lee, N., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 1–3. [Google Scholar] [CrossRef]
- Iwata, H. The Torus Treadmill: Realizing Locomotion in VEs. IEEE Comput. Graph. Appl. 1999, 19, 30–35. [Google Scholar] [CrossRef]
- Huang, J.Y. An Omnidirectional Stroll-Based Virtual Reality Interface and Its Application on Overhead Crane Training. IEEE Trans. Multimed. 2003, 5, 39–51. [Google Scholar] [CrossRef]
- Medina, E.; Fruland, R.; Weghorst, S. Virtusphere: Walking in a Human Size VR “Hamster Ball”. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, New York, NY, USA, 22–26 September 2008; Volume 52, pp. 2102–2106. [Google Scholar] [CrossRef]
- Prithul, A.; Adhanom, I.B.; Folmer, E. Teleportation in Virtual Reality; A Mini-Review. Front. Virtual Real. 2021, 2, 730792. [Google Scholar] [CrossRef]
- Bozgeyikli, E.; Raij, A.; Katkoori, S.; Dubey, R. Point & Teleport Locomotion Technique for Virtual Reality. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’16, Austin, TX, USA, 16–19 October 2016; pp. 205–216. [Google Scholar] [CrossRef]
- Prithul, A.; Bhandari, J.; Spurgeon, W.; Folmer, E. Evaluation of Hands-free Teleportation in VR. In Proceedings of the 2022 ACM Symposium on Spatial User Interaction, SUI ’22, Virtual, 1–2 December 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Bolte, B.; Steinicke, F.; Bruder, G. The jumper metaphor: An effective navigation technique for immersive display setups. In Proceedings of the Virtual Reality International Conference, Nice, France, 4–5 November 2011; Volume 1, p. 8. [Google Scholar]
- Young, M.K.; Gaylor, G.B.; Andrus, S.M.; Bodenheimer, B. A Comparison of Two Cost-Differentiated Virtual Reality Systems for Perception and Action Tasks. In Proceedings of the ACM Symposium on Applied Perception, SAP ’14, Vancouver, BC, Canada, 8–9 August 2014; pp. 83–90. [Google Scholar] [CrossRef]
- Choi, I.; Culbertson, H.; Miller, M.R.; Olwal, A.; Follmer, S. Grabity: A Wearable Haptic Interface for Simulating Weight and Grasping in Virtual Reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST ’17, Québec City, QC, Canada, 22–25 October 2017; pp. 119–130. [Google Scholar] [CrossRef]
- Shang, X.; Kallmann, M.; Arif, A.S. Effects of correctness and suggestive feedback on learning with an autonomous virtual trainer. In Companion Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 16–20 March 2019; ACM: New York, NY, USA, 2019; pp. 93–94. [Google Scholar] [CrossRef]
- Shang, X.; Arif, A.S.; Kallmann, M. Evaluating Feedback Strategies for Virtual Human Trainers. arXiv 2020, arXiv:2011.11704. [Google Scholar]
- Zweighaft, A.R.; Slotness, G.L.; Henderson, A.L.; Osborne, L.B.; Lightbody, S.M.; Perhala, L.M.; Brown, P.O.; Haynes, N.H.; Kern, S.M.; Usgaonkar, P.N.; et al. A Virtual Reality Ball Grasp and Sort Task for the Enhancement of Phantom Limb Pain Proprioception. In Proceedings of the 2012 IEEE Systems and Information Engineering Design Symposium, Charlottesville, VA, USA, 27 April 2012; pp. 178–183. [Google Scholar] [CrossRef]
- Gürerk, Ö.; Bönsch, A.; Kittsteiner, T.; Staffeldt, A. Virtual Humans as Co-Workers: A Novel Methodology to Study Peer Effects. SSRN 2018, 34. [Google Scholar] [CrossRef]
- Narasimha, S.; Dixon, E.; Bertrand, J.W.; Chalil Madathil, K. An Empirical Study to Investigate the Efficacy of Collaborative Immersive Virtual Reality Systems for Designing Information Architecture of Software Systems. Appl. Ergon. 2019, 80, 175–186. [Google Scholar] [CrossRef]
- DeHoratius, N.; Gürerk, Ö.; Honhon, D.; Hyndman, K.B. Execution Failures in Retail Supply Chains—A Virtual Reality Experiment. SSRN 2020, 41. [Google Scholar] [CrossRef]
- Lee, J.; Eden, A.; Park, T.; Ewoldsen, D.R.; Bente, G. Embodied Motivation: Spatial and Temporal Aspects of Approach and Avoidance in Virtual Reality. Media Psychol. 2022, 25, 387–410. [Google Scholar] [CrossRef]
- Kienzle, W.; Hinckley, K. LightRing: Always-Available 2D Input on Any Surface. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST ’14, Honolulu, HI, USA, 5–8 October 2014; pp. 157–160. [Google Scholar] [CrossRef]
- Yang, X.D.; Grossman, T.; Wigdor, D.; Fitzmaurice, G. Magic Finger: Always-Available Input through Finger Instrumentation. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST ’12, Cambridge, MA, USA, 7–10 October 2012; pp. 147–156. [Google Scholar] [CrossRef]
- Park, K.; Kim, D.; Heo, S.; Lee, G. MagTouch: Robust Finger Identification for a Smartwatch Using a Magnet Ring and a Built-in Magnetometer. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13. [Google Scholar] [CrossRef]
- Schrapel, M.; Herzog, F.; Ryll, S.; Rohs, M. Watch My Painting: The Back of the Hand as a Drawing Space for Smartwatches. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA ’20, Honolulu, HI, USA, 25–30 April 2020; pp. 1–10. [Google Scholar] [CrossRef]
- Ni, T.; Baudisch, P. Disappearing Mobile Devices. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, UIST ’09, Victoria, BC, Canada, 4–7 October 2009; pp. 101–110. [Google Scholar] [CrossRef]
- Baudisch, P.; Sinclair, M.; Wilson, A. Soap: A Pointing Device That Works in Mid-Air. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, Montreux, Switzerland, 15–18 October 2006; pp. 43–46. [Google Scholar] [CrossRef]
- DeLong, S.; Arif, A.S.; Mazalek, A. Design and Evaluation of Graphical Feedback on Tangible Interactions in a Low-Resolution Edge Display; ACM: New York, NY, USA, 2019; p. 8. [Google Scholar] [CrossRef]
- ISO 9241-9:2000; Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs)—Part 9: Requirements for Non-Keyboard Input Devices. International Organization for Standardization: Geneva, Switzerland, 2000.
- ISO/TS 9241-411:2012; Ergonomics of Human-System Interaction—Part 411: Evaluation Methods for the Design of Physical Input Devices. International Organization for Standardization: Geneva, Switzerland, 2012.
- MacKenzie, I.S. Fitts’ Law. In The Wiley Handbook of Human Computer Interaction; John Wiley & Sons, Ltd: Hoboken, NJ, USA, 2018; Chapter 17; pp. 347–370. [Google Scholar] [CrossRef]
- Soukoreff, R.W.; MacKenzie, I.S. Towards a Standard for Pointing Device Evaluation, Perspectives on 27 Years of Fitts’ Law Research in HCI. Int. J. Hum.-Comput. Stud. 2004, 61, 751–789. [Google Scholar] [CrossRef]
- Hart, S.G. NASA-Task Load Index (NASA-TLX); 20 Years Later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, San Francisco, CA, USA, 16–20 October 2006; Volume 50, pp. 904–908. [Google Scholar] [CrossRef]
- Dube, T.J.; Ren, Y.; Limerick, H.; MacKenzie, I.S.; Arif, A.S. Push, Tap, Dwell, and Pinch: Evaluation of Four Mid-air Selection Methods Augmented with Ultrasonic Haptic Feedback. Proc. ACM Hum.-Comput. Interact. 2022, 6, 565:207–565:225. [Google Scholar] [CrossRef]
- Arif, A.S. A Brief Note on Selecting and Reporting the Right Statistical Test; Technical report; University of California: Merced, CA, USA, 2017. [Google Scholar]
- Levenshtein, V.I. Binary Codes Capable of Correcting Deletions, Insertions, and Reversals. Sov. Phys. Dokl. 1966, 10, 707–710. [Google Scholar]
Figure 1.
Different components of the Digital Thimble: (a) The Unique Station Mini Wireless finger mouse from which the optical mouse sensor was sourced. (b) The disassembled finger mouse, showing the circuit and the optical mouse’s sensor. (c) The Digital Thimble components, including the pressure sensor, optical mouse sensor, and 3D-printed case that houses the circuitry.
Figure 2.
The devices used in the evaluation: (a) An Oculus Touch Controller. (b) An AOKID Creative Finger Mouse. (c) The Digital Thimble.
Figure 3.
The 2D Fitts’ law task in ISO 9241-9 [63]. The target is indicated in red. Arrows and numbers illustrate the selection sequence.
Figure 4.
Three participants performing Fitts’ law tasks in the first user study using the (a) controller, (b) finger mouse, and (c) Digital Thimble.
Figure 5.
Average throughput (bps) by input device and selection method. Error bars represent ±1 standard deviation.
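Throughput in Fitts’ law evaluations is conventionally the effective throughput of ISO 9241-9 and Soukoreff and MacKenzie, which adjusts the index of difficulty for the spread of actual selection endpoints. A minimal sketch of that computation follows; the function name, data layout, and the choice to average over a single condition are illustrative assumptions, not the paper’s implementation.

```python
import math
from statistics import mean, stdev

def effective_throughput(distances, movement_times_s, endpoint_errors):
    """Effective throughput (bits/s) in the ISO 9241-9 style.

    distances         -- nominal centre-to-centre target distances per trial
    movement_times_s  -- movement time per trial, in seconds
    endpoint_errors   -- signed deviation of each selection endpoint from the
                         target centre, measured along the task axis
    """
    # Effective width: 4.133 x the standard deviation of endpoint deviations.
    we = 4.133 * stdev(endpoint_errors)
    # Effective distance: the mean distance actually covered (approximated here
    # by the nominal distances).
    de = mean(distances)
    # Effective index of difficulty (Shannon formulation), in bits.
    ide = math.log2(de / we + 1)
    # Throughput: bits of difficulty per second of movement.
    return ide / mean(movement_times_s)
```

With, e.g., 100-pixel distances, one-second movements, and endpoint deviations of a few pixels, this yields a throughput of roughly 4 bps, in the range typical of mouse-like devices.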
Figure 6.
Average movement time (ms) by input device and selection method. Error bars represent ±1 standard deviation.
Figure 7.
Average target re-entries (count/trial) by input device and selection method. Error bars represent ±1 standard deviation.
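A target re-entry is recorded each time the cursor enters the target region again after its first entry, so the count reflects overshoot and correction. A minimal per-trial counter under that standard definition might look as follows; the function name and circular-target assumption are illustrative.

```python
def target_reentries(cursor_positions, target_centre, target_radius):
    """Count target re-entries in one trial.

    cursor_positions -- sequence of (x, y) cursor samples
    target_centre    -- (x, y) of the circular target's centre
    target_radius    -- target radius in the same units

    Returns the number of entries into the target after the first one.
    """
    entries = 0
    inside = False
    cx, cy = target_centre
    for x, y in cursor_positions:
        now_inside = (x - cx) ** 2 + (y - cy) ** 2 <= target_radius ** 2
        if now_inside and not inside:   # crossing from outside to inside
            entries += 1
        inside = now_inside
    return max(0, entries - 1)          # the first entry is not a re-entry
```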
Figure 8.
Cursor trace examples for on-press selection method using (a) controller, (b) finger mouse, and (c) Digital Thimble.
Figure 9.
Cursor trace examples for on-release selection method using (a) controller, (b) finger mouse, and (c) Digital Thimble.
Figure 10.
Average error rate (%) by input device and selection method. Error bars represent ±1 standard deviation.
Figure 11.
The median perceived workload across user study conditions measured by a 20-point NASA-TLX questionnaire. The scale from 1 to 20 represents very low to very high ratings for all factors except performance, where 1 to 20 represents the range from perfect to failure. The error bars represent ±1 standard deviation. Red asterisks denote statistically significant differences.
Figure 12.
The median perceived usability of the study conditions, rated on a 5-point Likert scale (1 = strongly disagree; 5 = strongly agree). Error bars represent ±1 standard deviation. Red asterisks denote statistically significant differences.
Figure 13.
(a) Bird’s-eye view of the teleportation destinations, with red arrows indicating the designated path. The green target marks the starting point. (b) An animated cylindrical target with a cube displaying the target’s number.
Figure 14.
Participants using the three devices to teleport: (a) Controller. (b) Finger mouse. (c) Digital Thimble.
Figure 15.
The sorting scene featuring four numbered cubes on a table.
Figure 16.
Participants using the three devices to sort cubes: (a) Controller. (b) Finger mouse. (c) Digital Thimble.
Figure 17.
Average task completion time (ms) by task and input device. Error bars represent ±1 standard deviation.
Figure 18.
Average accuracy rate (%) categorized by task and input device. Error bars represent ±1 standard deviation.
Figure 19.
The median perceived workload across the study conditions, measured by a 20-point NASA-TLX questionnaire. The scale from 1 to 20 represents very low to very high ratings for all factors except performance, where 1 to 20 represents the range from perfect to failure. Error bars represent ±1 standard deviation. Red asterisks denote statistically significant differences.
Figure 20.
The median perceived usability of the study conditions, rated on a 5-point Likert scale (1 = strongly disagree; 5 = strongly agree). Error bars represent ±1 standard deviation.
Table 1.
Effect size (η²) of the statistically significant differences observed in this study’s ANOVA analysis. The “-” symbol indicates non-significant results.
| | Throughput | Movement Time | Target Re-Entries | Error Rate |
|---|---|---|---|---|
| Device | 0.07 | 0.03 | - | 0.01 |
| Method | 0.19 | 0.11 | 0.01 | - |
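Effect sizes such as those in Table 1 can be recovered directly from a reported F-ratio and its degrees of freedom. The conversion below assumes partial eta-squared, which is the most common ANOVA effect-size measure; whether the study reported classical or partial η² is not stated here, so treat this as an illustrative sketch.

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Convert an ANOVA F-ratio to partial eta-squared.

    partial η² = SS_effect / (SS_effect + SS_error)
               = (F * df_effect) / (F * df_effect + df_error)
    """
    return (f_value * df_effect) / (f_value * df_effect + df_error)
```

For example, an effect with F(1, 18) = 10 corresponds to a partial η² of 10/28 ≈ 0.36, a large effect by conventional benchmarks.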
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).