Article

Asymmetric Free-Hand Interaction on a Large Display and Inspirations for Designing Natural User Interfaces

1
School of Humanity, Art and Digital Media, Hangzhou Dianzi University, Hangzhou 310018, China
2
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
3
Department of Computer and Systems Science, Stockholm University, SE-10691 Stockholm, Sweden
4
College of Computer Science & Technology, Zhejiang University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(5), 928; https://doi.org/10.3390/sym14050928
Submission received: 26 March 2022 / Revised: 24 April 2022 / Accepted: 27 April 2022 / Published: 2 May 2022
(This article belongs to the Section Computer)

Abstract

Hand motion sensing-based interaction, abbreviated as ‘free-hand interaction’, provides a natural and intuitive method for touch-less interaction on a large display. However, owing to the unconventional size of the large display and the kinematic limitations of the user’s arm joint movement, free-hand interaction is suspected to perform differently across different areas of a large display. To verify this, a multi-directional target pointing and selection experiment was designed and conducted based on the ISO 9241-9 evaluation criteria. Results show that (1) free-hand interaction in display areas close to the center of the body was more accurate than in peripheral-body areas; (2) free-hand interaction was asymmetric between the left and right sides of the body: left-hand interaction in the left-sided display area was more efficient and accurate than in the right-sided display area, and the converse held for right-hand interaction; and (3) the dominant hand achieved a higher interaction accuracy than the non-dominant hand. Lessons and strategies for designing user-friendly natural user interfaces in large display-based interactive applications are discussed.

1. Introduction

Continued advances in display and motion sensing technologies in recent years have given rise to a great number of large display-based interactive systems across the fields of office collaboration, public advertising, digital entertainment, and smart home assistance [1,2,3]. In contrast to conventional mouse- and touch-based interaction on normal-sized monitors and portable devices, large display-based free-hand interaction is simple, natural and easy for novices and for special users who are unfamiliar with computers, and has thus become an essential technology in ubiquitous computing and ambient intelligence environments [1,4]. For example, in a public site monitoring system, free-hand interaction and related natural behavior recognition techniques can be integrated with intelligent analysis algorithms to capture and interpret behavioral patterns and, on that basis, present intelligently generated content on the large display [4]. In another case, in smart home services and applications, a motion sensing-based interactive system can act as a health-care aide that monitors and predicts elderly users’ daily activities and offers assistance whenever needed [2]. As a novel interaction paradigm, large display-based free-hand interaction has also been shown to benefit task productivity, visual navigation and multi-task management in various situations [5,6,7].
Despite these practical advantages, free-hand interaction on a large display has been criticized for usability deficiencies: weak accuracy in recognizing hand movements and overall inefficiency in acquiring targets distributed over a wide display area [5,6]. To some extent, these deficiencies are caused by the sheer size of the large display, inevitable hand jitter and the intrinsic physical limitations of human reach [1,8]. For example, when interacting with an ultra-large display wider than 5 m, the user relies on frequent head rotations and physical movements from left to right to aid visual search and interaction tasks, which negatively affects overall efficiency. In addition, the user’s arm movement and hand aiming operations are closely tied to the bio-mechanical structure and kinematic features of the shoulder and elbow joints: arm movement in some directions is less flexible or comfortable than in others [9]. Although earlier research [10,11,12,13] proposed various aiding techniques to make free-hand interaction on large displays easier and more convenient, the potential effect of the arm’s bio-mechanical structure and its kinematic features has rarely, and never comprehensively, been investigated.
It is well known that the recognition and tracking ability of the camera has a crucial effect on free-hand interaction accuracy. Apart from this, hand jitter and the inherently limited movement precision of the arm are two further ergonomic factors with an obvious influence on the free-hand interaction result [8]. The asymmetric motor ability of the two hands, or the handedness characteristic, which derives from the asymmetric functions of the left and right brain hemispheres, is another explicit factor with a potential influence on hand operational tasks. In conventional mouse operations, the dominant hand has been found to outperform the non-dominant hand in operational flexibility, accuracy and task completion efficiency [14]. But in free-hand interaction on a large display, the user does not directly operate a controlling device, so the interaction result is not directly determined by hand operational flexibility and accuracy. Instead, the bio-mechanical structure of the arm and the kinematic features of arm movement are more closely correlated with free-hand interaction accuracy and efficiency, yet they have still not been investigated thoroughly.
To gain a deeper insight into the kinematic features of arm movement and their influence on free-hand interaction on a large display, and more importantly to verify the practical effect of the arm’s asymmetric operation on the free-hand interaction result, an empirical experiment was carried out in which 30 participants were recruited to repeatedly complete a specifically designed free-hand interaction task on a 70-inch large display. In this study, the participants’ task performance, including free-hand interaction efficiency and accuracy in divided display areas, was comparatively measured. Performance differences between the dominant and non-dominant hands were also compared. Based on the findings, implications and suggestions are discussed for designing more effective and satisfying natural user interfaces in future large display-based interactive systems.
The remainder of this paper is structured as follows: Section 2 provides a comprehensive review of related work; Section 3 describes the research objectives and hypothesis development; Section 4 presents the experimental details, including participant and apparatus information, experimental design and procedures; Section 5 provides detailed analyses and experimental results; Section 6 summarizes the findings and hypothesis judgements; Section 7 then provides a systematic discussion of the lessons and implications derived from the experiment. Finally, Section 8 concludes this study.

2. Literature Review

Large display-based free-hand interaction and its relevant applications have been investigated for a long time, and many ergonomic and usability issues have been examined. In this section, a brief summary of related previous work is provided.

2.1. Fundamental Task in Free-Hand Interaction and Implementing Techniques

Whether in mouse-based interaction or in free-hand natural interaction applications, ‘target pointing and selection’ is still by far the most fundamental task. Other more complicated interactive operations, such as target dragging and rotation, or zooming in and out, are performed on the basis of this fundamental task [15]. Target pointing and selection often consists of two stages: a ballistic stage and a corrective stage. In the former, the user performs a rapid movement to approach the target; in the latter, the user adjusts the cursor position and triggers a target selection event.
In different application situations, the target pointing and selection technique has been implemented by different means. For example, Vogel and Balakrishnan [16] utilised a Vicon™ optical high-precision motion tracking system to recognize and track the user’s whole-body movement and gestures, and on this basis developed a fine-grained free-hand distant interaction technique on a large display. Malik et al. [11] developed a touch-sensitive trackpad-based interactive technique to aid distant interaction on a large display. The trackpad had two asymmetric areas: the left area translated the thumb’s movement into coarse and fast travel on the display, while the right area converted the thumb’s movement into refined movement on the display. Additionally, Haque et al. [17] used another sensing technique, surface electromyography (sEMG) detection and pattern recognition, to perceive and translate the user’s natural hand gestures into operational commands in the interactive task. But according to a review of more recent work such as [18,19], sEMG detection-based hand gesture recognition did not achieve satisfying precision. In a recently published work, Patil et al. [20] proposed a multi-sensory fusion technique, in which multiple lidars and inertial sensors were used simultaneously, to make hand pose tracking and recognition more accurate.
Strictly speaking, a truly natural free-hand interaction technique should be performed with bare hands, without any devices worn on or attached to the arm. From this perspective, earlier proposals based on a Vicon™ motion tracking system or an sEMG detecting device are not real free-hand interaction techniques. In contrast, the computer vision-based technique is a non-contact method with few limitations on usage context or physical configuration, and is thus a more suitable and satisfying choice for designing free-hand natural interaction techniques on a large display [21,22]. But according to our survey of state-of-the-art computer vision-based techniques, these techniques are commonly criticized for deficiencies in precision and robustness [8].

2.2. Ergonomic Concerns in Free-Hand Interaction and User Interfaces

Quite a few technical solutions have been proposed to remedy these deficiencies. For example, Bi et al. [23] used a laser pointer with functional buttons and wireless communication modules to aid remote interaction with large displays. Another study by Clark et al. [21] combined a voice recognition technique and a hand gestural interaction technique to implement a multi-modal user interface in a large display-based application. To improve hand operational accuracy, and in particular to make small target acquisition easier, Mäkelä et al. [24] developed a novel user interface technique called ‘Magnetic Cursor’, in which the cursor automatically adheres to a nearby target, making target pointing and selection easier and faster. Through a similar method, Bateman et al. [25] introduced ‘Sticky’ and ‘Gravity’ features into large display user interfaces, making small targets more convenient to acquire. Ren and O’Neill [26] showed that a step-by-step operation was an effective means of improving target selection accuracy in a large-scale 3D user interface. Additionally, Banerjee et al. [13] developed a finger-pointing-based interactive technique to aid target acquisition on a horizontally flat large display. Shoemaker et al. [12] created another novel whole-body interactive technique, ‘Shadow Reaching’, to make target acquisition and manipulation more natural and intuitive. In this technique, a virtual light was set behind the user and the user’s shadow was cast onto the large display user interface; targets covered by the body shadow then became selected or activated. When the user adjusted the interaction distance to the large display, the body shadow shrank or enlarged synchronously.
In addition to the deficiencies in interaction efficiency and accuracy, differences in user interaction across divided display areas are another common problem on large displays. Neurophysiology studies have found that in hand operations users spontaneously form three cognitive spatial representations around the body: within-body space, peripheral space and extra-personal space [27]. It has been reported that arm movement in the within-body and peripheral spaces is more refined and subtle than in the extra-personal space [28]. Based on this, Shoemaker et al. [29] put forward a new interactive paradigm, body-centric interaction (BCI), to make free-hand interaction more interesting and expressive. In this paradigm, natural movements, behaviors and metaphorical postures centered on the human body are recognized and translated into interactive commands and inputs, such as accessing personal data by putting one hand on the stomach, or exiting the task by performing a hand-waving action. The functional features of the user’s sensorimotor systems have also been shown to cause hand operational differences between the upper and lower visual fields. More specifically, hand operation in the lower visual field is more efficient and precise than in the upper visual field [30,31,32,33].

2.3. Arm Movement Kinematic Features and Influences on Free-Hand Interaction

In computer vision-based free-hand interaction, the user’s arm movement and hand gestures are recognized and translated into cursor travel and manipulations on the user interface, so its performance is heavily dependent on the recognition and tracking ability of the camera [15]. It has been widely accepted that simple and easy-to-learn hand gestures, e.g., hand pointing and dwelling, are the most effective and intuitive ones and should be primarily adopted in free-hand interaction techniques [15,34]. But to the best of our knowledge, the bio-mechanical structure of the arm and the kinematic features of arm movement have seldom been considered in free-hand interaction studies, and their potential influences on free-hand interaction have often been neglected.
According to our survey of bio-mechanical studies, arm movement kinematic features include, but are not limited to, the limited range of arm movement, hand moving accuracy, the degrees of freedom (DoFs) of the elbow and shoulder joints, and the motor ability difference between the two hands, i.e., the ‘handedness’ characteristic. Earlier studies have shown that two-handed collaborative interaction and one-handed interaction result in different task performance. Specifically, one-handed interaction was found to be more efficient and perceived as more comfortable than two-handed collaborative interaction [35]. But in more complicated tasks such as ‘Pan-and-Zoom’ and ‘Point-and-Command’, bi-manual interaction showed advantages in overall efficiency and naturalness [22,36]. The handedness characteristic, or the motor ability difference between the left and right hands, which derives from the asymmetric structure and functional difference between the left and right brain hemispheres, has been proven to cause an operational difference between dominant-hand and non-dominant-hand operations. Given the handedness characteristic and its effect on hand operations, it has been suggested that the user’s two hands be assigned to different tasks; for example, the left hand completes coarse-grained selection while the right hand completes fine-grained manipulation [11]. In conventional mouse and touch-screen operations, the user’s dominant hand was found to operate more efficiently than the non-dominant hand [14]. But in free-hand interaction on a large display, the interaction result is not immediately correlated with hand operational performance; instead, it is more closely correlated with the arm movement kinematic features. Whether the asymmetric motor abilities of the two arms generate a free-hand interaction difference is still an open issue.
Constrained by the bio-mechanical structure of the elbow and shoulder joints as well as the arm movement kinematic features, arm movement and hand operation in some directions are more flexible and comfortable than in others. From this perspective, (1) the choice of operating hand and (2) the hand operational position relative to the shoulder joint are two explicit factors with potential effects on the free-hand interaction result, and they deserve particular investigation.

3. Research Objectives and Hypotheses

An earlier study by Ren and O’Neill [26] explored the effect of arm movement direction on free-hand interaction results, but it did not take both hands into consideration; the asymmetric motor ability of the dominant and non-dominant hands, i.e., the user’s handedness, was also not investigated.

3.1. Objectives

To fill the knowledge gap about arm movement kinematic characteristics and, more importantly, to examine the effect of the arm’s asymmetric motor ability on free-hand interaction efficiency and accuracy, a repeated-measures free-hand target pointing and selection experiment was conducted in which arm movement was classified into 8 directions (see Figure 1a) and the two arms’ task performance in different directions was comparatively evaluated. The asymmetry in arm movement refers in particular to two aspects: (1) the functional or motor ability asymmetry between the left and right arms; and (2) the operational asymmetry between the arm’s left-sided and right-sided spaces, as illustrated in Figure 1b.

3.2. Hypotheses Development

In BCI and other similar natural interaction paradigms [27,29], the spaces surrounding the body in an interactive environment are often separated into three ranges, i.e., within-body space (WBS), peripheral-body space (PBS) and extra-personal space (EPS), from close proximity to a faraway distance. Early literature shows that human proprioception and the bio-mechanical structure of the upper limb have a crucial effect on hand operational tasks, which implies that hand interaction at a closer distance, or closer to the body center, is more flexible and natural, since the visual feedback and the operational position are closer and more integrated. In addition, the kinematic characteristics of the elbow and shoulder joints mean that arm movement has constrained DoFs, and hand operation in some directions is more comfortable and labor-saving than in others. Based on these considerations, the research hypotheses are developed as follows:
Hypothesis 1 (H1).
Free-hand interaction in the display area close to the body center is more efficient and accurate than that in peripheral-body display areas.
Hypothesis 2 (H2).
Free-hand interaction generates different results in the arm’s different moving directions. For the right-hand interaction, arm movement in the rightward direction is more efficient and accurate than in the leftward direction.
Hypothesis 3 (H3).
Although the two arms are generally symmetric in body structure, their performance is asymmetric: the dominant arm generates a higher interaction efficiency and a more satisfying accuracy than the non-dominant arm.

4. Methods

In this section, detailed information about the experimental methods is provided, including the implementation of the free-hand interactive technique, the participants and their selection criteria, the experimental apparatus, the design and independent variables, the evaluation metrics, and the complete procedure.

4.1. Free-Hand Target Acquisition Technique

The target pointing and selection technique, abbreviated as the target acquisition technique, can be implemented with three main interaction metaphors: the grabbing metaphor, the pointing metaphor and the mouse metaphor [37]. The grabbing metaphor mimics the object acquisition process in the real world; only objects within an arm’s length are reachable, so it is not applicable in an ultra-large display-based application. The pointing metaphor captures two points, such as the user’s finger and nose, to realize a laser-pointing technique; but because one point easily occludes the other, this metaphor is not sufficiently reliable. The mouse metaphor, adopted in this study, simulates a target pointing operation on a vertical plane: hand movement on the vertical plane is converted into cursor movement through a transfer ratio, also called the control-to-display (CD) gain.
To ensure that the user can move the cursor across the whole area of the large display while retaining sufficient precision, a satisfying trade-off between movement speed and precision must be provided. Nancel et al. [38] developed a flexible transfer technique, Pointer Acceleration (PA), to satisfy moving speed and precision simultaneously on a large-scale user interface. In the PA technique, the transfer ratio between cursor and hand movement speed, i.e., the CD gain, is dynamically adjusted based on the assumption that users generally move the hand faster when the cursor is far from the target and slow down when the cursor is close to it. In the present study, the CD gain changes with hand movement velocity through a specially designed sigmoid function, as suggested by Nancel et al. [38]:
CD(x) = (CDmax − CDmin) / (1 + e^(−λ(x − Vinf))) + CDmin
Vinf = ratioinf × (Vmax − Vmin) + Vmin
Six parameters are involved in this function: Vmin, Vmax, CDmin, CDmax, λ and ratioinf. Vmin and Vmax are estimates of the lower and upper bounds of hand movement speed; ratioinf specifies the position of the inflexion point, ranging from 0 to 1; CDmin and CDmax are the lower and upper bounds of the CD gain. Through an iterative process, all parameters were tuned to suit the transfer function to the present study. Figure 2 illustrates the implemented transfer function, plotting CD gain against hand velocity.
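The transfer function above can be sketched in a few lines. Note that all parameter values below are illustrative placeholders, not the tuned values from the study:

```python
import math

def cd_gain(v, cd_min=1.0, cd_max=9.0, lam=0.02,
            v_min=50.0, v_max=600.0, ratio_inf=0.5):
    """Sigmoid CD gain as a function of hand speed v (e.g., in mm/s).

    All parameter values are illustrative placeholders; the study tuned
    its own values through an iterative process.
    """
    # Inflexion-point speed: ratio_inf interpolates between Vmin and Vmax.
    v_inf = ratio_inf * (v_max - v_min) + v_min
    # Slow hands approach cd_min (precision); fast hands approach cd_max.
    return (cd_max - cd_min) / (1.0 + math.exp(-lam * (v - v_inf))) + cd_min
```

At the inflexion speed the gain sits exactly midway between CDmin and CDmax; below it the curve flattens toward CDmin for precise adjustment, above it toward CDmax for fast travel across the display.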

4.2. Participants

Twenty-three undergraduate student volunteers and seven staff members (16 males and 14 females) from a local university were recruited to participate in the empirical experiment. The participants were aged from 19 to 42 (M = 27.8, SD = 4.12), all with normal or corrected-to-normal eyesight and without body impairment. Note that applicants who were suffering or had previously suffered from hyperthyroidism, diabetes or arthritis, conditions that can affect upper-limb motor stability, were excluded. Among the participants, 10 students (aged from 20 to 24) and 2 staff members (aged 36 and 41) were from the university’s college of international exchange, coming from the United States of America, Pakistan, Japan and South Korea with different cultural backgrounds; the other 20 participants were native Chinese. The participants were required to self-report their handedness, and a handedness survey questionnaire (HSQ) [39] was provided to verify their answers; all participants were confirmed to be right-handed. All participants had basic knowledge of natural hand interaction techniques or had experienced relevant applications before, e.g., playing Kinect-based motion-sensing games. All participants gave their informed consent for inclusion before they participated in the study. Considering that the experiment was conducted during a special period of COVID-19 prevention, to ensure public safety, each participant was also required to provide a health report certifying no risk of carrying the virus. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Hangzhou Dianzi University (HDU20211215). The experimental environment and all apparatus used were sterilized, and participants were informed of this.

4.3. Apparatus

The empirical experiment was conducted in a semi-public conference hall which was sterilized every half-day, as were all equipment and devices in the hall. A 4.8 m × 2.7 m liquid crystal screen with a resolution of 4960 × 2790 was used as the large display; it was internally equipped with a workstation (Windows 10, 64 GB memory, and a 4.0 GHz Intel 64-core processor). A Microsoft Kinect 2.0 RGB-Depth camera with a refresh rate of up to 60 fps was mounted at the middle of the top edge of the large display. Figure 3 shows the experimental apparatus and scenario. To circumvent the potential effect of interaction distance, a distance of 2.5 m was selected and a label was marked on the ground in the central front of the display. All participants were required to stand at this distance while performing the tasks.

4.4. Independent Variables

A single-hand multi-directional target acquisition task was designed based on the reciprocal evaluation task of ISO 9241-9 [40]. The task consisted of selecting circular targets in 8 different directions across the whole area of the large display, so hand moving direction is one independent variable in this experiment. In accordance with the target pointing evaluation model of Fitts’ law [41] and related tasks, amplitude (A) and target width (W) are also essential independent variables. Amplitude refers to the moving distance from the starting position to the target, while target width is the diameter of the circular target. In addition, to evaluate the asymmetric performance of the two arms, hand use choice was also treated as an independent variable. Table 1 provides detailed information about the values of all independent variables.
Figure 4a shows the task interface and the independent variables A and W. Figure 4b illustrates the independent variables of hand moving direction and hand use choice. In the experimental task, two circular objects were shown per trial: one was the start point, in white, and the other was the target, in orange, as shown in Figure 3. The straight-line direction from the start point to the target was the hand moving direction. The numbers shown in Figure 4a indicate the execution order of the selection trials, proceeding clockwise. According to the Fitts’ law expression:
MT = a + b × log2(A/W + 1)
where the logarithmic term log2(A/W + 1) is the Fitts’ index of difficulty (ID). According to the experimental design of the present study, the ID values ranged from 2.86 to 6.25 bits, which is appropriate for target pointing evaluations [42].
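The ID computation is a one-liner. The target widths below are hypothetical illustration values, chosen only because they reproduce the reported 2.86–6.25 bit range; Table 1 holds the actual values:

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1)

# Amplitudes are the 800/1600/2400 mm values reported in the analyses;
# the widths are hypothetical illustration values.
for a in (800, 1600, 2400):
    for w in (32, 64, 128):
        print(f"A={a} mm, W={w} mm -> ID = {index_of_difficulty(a, w):.2f} bits")
```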

4.5. Procedure

Prior to the formal experiment, participants were given a detailed introduction to the task requirements in Chinese and English, to ensure that both native and foreign participants understood the experimental purpose well. In this procedure, a 5-point Likert scale was provided for participants to rate their subjective satisfaction with the task interface layout, visual style and color scheme. A nonparametric test showed that participants from different countries with diverse cultural backgrounds did not differ in interface satisfaction. While completing the experimental task, the participant stood in the central front of the large display at the specified distance of 2.5 m. At the beginning of each trial, one target (an orange-filled circle, inactivated in its original state) and one start point (a white-filled circle of the same size as the target) were presented symmetrically about the centre of the large display. The participant moved a white hand cursor to the start point to activate the target, then moved the cursor toward the target and dwelled on it for 500 milliseconds (ms) to complete the selection event. The target then disappeared and another trial started. In this procedure, the interval from the moment the target was activated to the moment the cursor touched the target was defined as the ‘movement time’ (MT); it refers to the travel time of the cursor and does not include the dwelling time. If, during the dwell, the cursor moved away before the dwell time elapsed, the trial was recorded as a failed selection and one error was counted. After the participant completed 120 trials, one block was complete and the interface exited automatically. The MT of the 120 trials and the error frequencies in each direction were recorded and stored in a log file.
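The dwell-based selection rule described above can be sketched as a simple timer over cursor samples. The per-frame sample structure here is an assumption; only the 500 ms threshold comes from the procedure:

```python
DWELL_MS = 500  # dwell threshold used in the experiment

def dwell_select(samples, dwell_ms=DWELL_MS):
    """Return the timestamp (ms) at which the selection fires, or None.

    `samples` is a time-ordered list of (timestamp_ms, on_target) pairs,
    e.g. one per camera frame. The selection fires once the cursor has
    stayed on the target continuously for `dwell_ms`.
    """
    dwell_start = None
    for t, on_target in samples:
        if on_target:
            if dwell_start is None:
                dwell_start = t              # cursor entered the target
            elif t - dwell_start >= dwell_ms:
                return t                     # dwell complete: selection event
        else:
            dwell_start = None               # left early: reset (an error in the experiment)
    return None
```

For example, with samples every 17 ms (roughly the Kinect’s 60 fps) and the cursor always on target, the selection fires at the first frame at or past 500 ms.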
Participants were instructed to select the targets as quickly as possible while attempting to minimize selection errors. Prior to the formal blocks, participants were given sufficient time to practice until their task performance was stable. Each participant completed two blocks for each combination of amplitude (A) and target width (W) with each hand, so each participant performed 36 blocks. Between blocks, participants were given half a day or longer to rest their arms and eyes.

4.6. Design

A repeated-measures within-participants design was used. Each participant alternately used the left hand and the right hand to complete the blocks. The order of hand use was counterbalanced across participants: half used the left hand first while the other half used the right hand first. There were 8 moving directions in the target acquisition task, and target acquisition in each direction was repeated 15 times in one block. The order of hand moving directions was also counterbalanced. Each participant completed 4320 (36 blocks × 120 trials per block) trials in the experiment.
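The design arithmetic and the hand-order counterbalancing can be checked in a few lines. Which half started with the left hand is an assumption here; the paper only states that the order was counterbalanced:

```python
DIRECTIONS, REPEATS = 8, 15                    # per block
AMPLITUDES, WIDTHS, HANDS, BLOCKS_PER_CELL = 3, 3, 2, 2

trials_per_block = DIRECTIONS * REPEATS                                  # 120 trials
blocks_per_participant = AMPLITUDES * WIDTHS * HANDS * BLOCKS_PER_CELL   # 36 blocks
total_trials = blocks_per_participant * trials_per_block                 # 4320 trials

def hand_order(participant_id):
    """Counterbalanced hand-use order across participants. The even/odd
    assignment is a hypothetical illustration of the scheme."""
    return ("left", "right") if participant_id % 2 == 0 else ("right", "left")
```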

5. Analyses and Results

A total of 1080 (30 participants × 36 blocks per participant) performance logs were collected, each recording the MT and error data of 120 trials. MT records that deviated from the mean value by more than 3 standard deviations (Z-score ≥ 3) were treated as outliers and excluded. Finally, 109,744 (92.4%) trials were retained. The analysis focused on two aspects: time efficiency (measured by MT) and interaction error rate.
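The outlier screening rule can be sketched as a Z-score filter over the MT records:

```python
from statistics import mean, stdev

def drop_mt_outliers(mts, z_cut=3.0):
    """Exclude MT records lying `z_cut` or more standard deviations from
    the mean, matching the paper's screening rule (Z-score >= 3)."""
    m, s = mean(mts), stdev(mts)
    if s == 0:
        return list(mts)                       # no spread: nothing to drop
    return [x for x in mts if abs(x - m) / s < z_cut]
```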

5.1. Movement Time

Table 2 presents the statistical results for MT in different moving directions by each hand. A repeated-measures ANOVA of amplitude (A) × target width (W) × hand use choice × hand moving direction was conducted; a main effect was found for hand moving direction (F(7, 6251) = 2921.48, p < 0.001, ηp² = 0.99), but no significant effect was identified for hand use choice (F(1, 893) = 1.51, p = 0.220, ηp² = 0.23). Interaction effects were found for hand use choice × hand moving direction (F(7, 6251) = 2152.31, p < 0.001, ηp² = 0.99), amplitude × hand moving direction (F(14, 12,502) = 92.08, p < 0.001, ηp² = 0.99), and target width × hand moving direction (F(14, 12,502) = 317.30, p < 0.001, ηp² = 0.99).
Further analyses were conducted to compare the MT results in different hand moving directions. A two-tailed dependent t-test showed that free-hand target acquisition in downward directions (D, LD and RD) had a significantly shorter MT than in upward directions (U, RU and LU): the former yielded a mean MT of 1365.43 ms (SD = 173.85) while the latter yielded a mean MT of 1512.74 ms (SD = 178.73), t(96,822) = −19.20, p < 0.001, Cohen’s d = −3.12. In particular, the vertically downward direction (D) yielded the shortest MT while the vertically upward direction (U) yielded the longest, as shown in Figure 5a.
A further analysis was conducted to examine the effect of hand use choice and the interaction effect of hand use choice × hand moving direction. As shown in Figure 5b, in left-hand interaction, free-hand target acquisition in the leftward directions (L, LU and LD) was markedly more efficient than that in the rightward directions (R, RU and RD): the former generated a mean MT of 1335.78 ms (SD = 173.96) while the latter generated a mean MT of 1499.36 ms (SD = 172.65), t(96,822) = −14.77, p < 0.001, Cohen’s d = −2.94. In right-hand interaction, however, target acquisition in the leftward directions was markedly less efficient than that in the rightward directions: the former generated a mean MT of 1496.17 ms (SD = 173.11) while the latter generated a mean MT of 1334.86 ms (SD = 172.28), t(96,822) = 14.53, p < 0.001, Cohen’s d = 2.91. This indicates that arm movement and free-hand interaction achieved different time efficiency in different hand moving directions, and that the result depended heavily on the choice of hand use. This finding confirms an asymmetric pattern in free-hand interaction efficiency across the large display area: arm movement and hand interaction on the same side as the operating hand were more efficient in target acquisition than those on the opposite side.

5.2. Error Rate

Table 3 shows the statistical result of the target acquisition error rate in different directions by each hand. A repeated-measures ANOVA of amplitude × target width × hand use choice × hand moving direction showed significant effects of amplitude (F(2, 1786) = 7587.82, p < 0.001, ηp² = 0.99), target width (F(2, 1786) = 86,080.12, p < 0.001, ηp² = 0.99), hand use choice (F(1, 893) = 10,330.31, p < 0.001, ηp² = 0.99), and hand moving direction (F(7, 6251) = 650.84, p < 0.001, ηp² = 0.99). Post-hoc Bonferroni pairwise comparisons of amplitudes found that a larger amplitude resulted in a higher error rate (800 mm amplitude: M = 8.28%, SD = 1.05; 1600 mm amplitude: M = 9.75%, SD = 1.12; 2400 mm amplitude: M = 11.29%, SD = 1.32; all p < 0.05), indicating that users achieved a higher interaction accuracy in central areas of the large display than in peripheral areas. The right-hand interaction resulted in a mean error rate of 8.88% (SD = 1.02) while the left-hand interaction resulted in a mean error rate of 10.48% (SD = 1.24). A two-tailed dependent t-test showed that the difference was significant (t(3238) = 6.71, p < 0.001, Cohen’s d = 1.46), indicating the accuracy advantage of the user’s dominant arm.
Significant interaction effects were found for target width × hand use choice (F(2, 1786) = 3546.60, p < 0.001, ηp² = 0.99), and hand moving direction × hand use choice (F(7, 6251) = 621.41, p < 0.001, ηp² = 0.99). Figure 6 shows the interaction effect of target width × hand use choice: in acquiring 32 mm- and 64 mm-wide targets, right-hand interaction generated a clearly lower error rate than left-hand interaction, but in acquiring 128 mm-wide targets, left- and right-hand interaction differed little.
Figure 7a presents the error rate results in different directions without separating the two hands, while Figure 7b presents comparative results between left- and right-hand interactions. It can be found that (1) the vertically upward direction (U) generated the highest error rate while the vertically downward direction (D) generated the lowest; (2) the three downward directions (LD, D and RD) generated a mean error rate of 9.29% (SD = 1.03) while the three upward directions (LU, U and RU) generated a mean error rate of 10.32% (SD = 1.24), the former being significantly lower than the latter (t(2158) = −9.21, p < 0.001, Cohen’s d = −2.15). This indicates that downward arm movement and hand interaction toward the lower display area achieved a higher accuracy than upward arm movement and hand interaction toward the upper display area; (3) an asymmetric pattern was found in interaction accuracy between left-hand and right-hand interactions: the left hand generated fewer target acquisition errors in the leftward directions, while the right hand generated fewer errors in the rightward directions, as shown in Figure 7b. Considering the bio-mechanical structure of the upper limb, its physical limitations in free arm movement, and observations of arm movements in different directions, a possible cause of the accuracy difference is that movements in different directions involve different muscles or muscle groups, which differ in movement-control accuracy. For example, in an upward arm movement both the biceps and triceps muscles are stretched, which makes subtle and precise hand movements more difficult; in a downward arm movement the upper-arm muscles are more relaxed, so hand movement control becomes easier and more precise.
Taking the left-hand interaction as another example, in a rightward movement the triceps muscle is more stretched than in a leftward movement, which results in weaker hand movement control accuracy.
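Aggregating per-condition error rates of the kind reported above amounts to a simple reduction over trial logs. The record keys used here ('hand', 'direction', 'error') are assumed for illustration, not the authors' log format:

```python
from collections import defaultdict

def error_rates(trials):
    """Compute the error rate per (hand, direction) condition.
    Each trial is a dict with assumed keys 'hand', 'direction', 'error' (bool)."""
    counts = defaultdict(lambda: [0, 0])   # (hand, direction) -> [errors, total]
    for t in trials:
        key = (t["hand"], t["direction"])
        counts[key][0] += t["error"]       # bool counts as 0/1
        counts[key][1] += 1
    return {key: errs / total for key, (errs, total) in counts.items()}
```

The same reduction extends to other factors (amplitude, target width) by widening the grouping key.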

6. Findings Summary

While completing the free-hand target acquisition tasks, participants adopted different arm postures and hand positions depending on the hand used, the amplitude (i.e., the target distance) and the target position: they bent the arm to hold the hand in front of the chest when acquiring targets on display areas close to the body, but stretched the arm out to an arm’s-length distance to reach targets on distal display areas. Figure 8 illustrates the ‘asymmetry’ patterns in large display interaction performance. The experimental findings are summarized as follows:
(1)
Free-hand acquisition of targets at a closer distance (closer to the display center) achieved a higher accuracy than that at a farther distance. This can be further interpreted as an ergonomic feature: hand movement and interaction in the within-body space, performed with a bent arm, was more accurate than that in the peripheral-body space, performed with a stretched arm, as illustrated in Figure 8a.
(2)
All the participants in the experiment were right-handed. The experimental results proved that right-hand interaction not only had a higher target acquisition efficiency but also generated fewer errors, indicating an ‘asymmetry’ pattern between dominant-hand and non-dominant-hand interaction, as illustrated in Figure 8b.
(3)
With either hand, target acquisition on the display area on the hand’s convenient side was more efficient and accurate than that on the inverse side, as illustrated in Figure 8c.
(4)
Apart from the asymmetric interaction performance between the left-sided and right-sided areas, there was another ‘asymmetry’ pattern between the upward and downward areas, as illustrated in Figure 8d. Downward arm movement not only had a higher target acquisition efficiency but also achieved a higher accuracy than upward arm movement.
Table 4 compares the findings of the present study with those of earlier research, showing that the contributions of this study are more comprehensive. Finding (1) confirms that Hypothesis 1 is supported; findings (2) and (3) confirm that Hypotheses 2 and 3 are also supported.

7. Discussion and Future Work

This study proved that arm movement-based free-hand interaction on a large display generated different performance across the display areas, not only between the upper and lower display areas but also between the left-sided and right-sided display areas. This indicates that there are ‘asymmetry’ patterns in arm movement ability and free-hand interaction performance across divided areas of the large display. More importantly, it provides a deeper insight into the kinematic features of arm movement and their effect on free-hand interaction results. The experimental findings also extend and supplement current knowledge about the usability and ergonomic characteristics of large displays [5,6].
In earlier investigations of mouse- and touch-based interaction on large displays, a similar ‘asymmetry’ pattern was found between the upper and lower display areas, but it was attributed to the functional difference of the user’s sensorimotor systems [31,33]. In other studies, the functional features of the sensorimotor systems were also widely accepted as the main cause of the operational difference between the upper and lower visual fields [26,32]. From this perspective, this study provides a new and seemingly more rational explanation of free-hand interaction differences in different directions: it is suspected that kinematic changes in arm movement cause the differences between upward and downward interaction. For example, when the user raises the hand to acquire a target in the upper area, the user needs to overcome gravity, which induces arm muscle fatigue more easily; perceived arm fatigue then harms operational accuracy in target pointing and acquisition tasks.
This study also provides proof of the effect of hand use choice on free-hand interaction performance across the large display areas. In left-hand interaction, arm movement and free-hand target acquisition in the leftward directions were found to be more efficient and accurate than in the rightward directions; in right-hand interaction, the result was the converse. Such a difference can also be attributed to the kinematic features of arm movement. Taking the left-hand interaction as an example, when the user stretches the arm to acquire a target in the right-sided area, the user needs to move the left hand across his or her whole body to reach the target, which demands more physical effort and harms hand moving comfort and flexibility. In earlier studies, the hand choice effect was often neglected or treated as a controlled variable to be eliminated, but in this study it proved to be an explicit factor closely related to free-hand interaction performance. Even when it was considered, the preferred and non-preferred hands were found to differ little in operational task performance [14], which is inconsistent with the findings of this study. Differences in experimental apparatus and task are suspected to have generated the conflicting findings of the two studies. In the research by Jude et al. [14], the free-hand interactive task was performed on a conventional-size monitor where only the user’s forearm was engaged in arm movement, so their finding could not reflect the overall motor ability and operational features of the whole arm. In contrast, in the present study the participant faced a far larger display and needed to perform larger arm movements to acquire targets distributed on it, engaging both the forearm and the upper arm.
The findings and lessons learned from this study are beneficial and instructive for designing natural hand interactive user interfaces in large display-based applications. For example, because downward arm movement-based hand interaction has a higher efficiency and accuracy than upward arm movement-based hand interaction, it is better to place interactive objects on lower display areas and non-interactive content, such as media playback and text information, on upper display areas. The interaction difference between the within-body area and the peripheral-body area is a prevalent ‘asymmetry’ pattern in large display user interfaces: user interface elements that fall in the user’s within-body area are more frequently and easily visited than those located in the peripheral-body area. This implies that visual elements on a complicated large display user interface should be arranged according to their importance, priority and use frequency. For example, in a map searching and navigating user interface, zoom-in and zoom-out tools and viewing control widgets are frequently used elements and are thus better placed in within-body areas close to the user. If possible, a user-adaptive technique could be applied to detect and track the user’s position in front of the large display in real time and, based on this, adjust the user interface layout and the arrangement of its elements.
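The user-adaptive layout strategy suggested above might be sketched as follows. The placement rule, widget names and within-body span are hypothetical illustrations, not part of the study:

```python
def layout_by_priority(widgets, user_x, display_width, within_body_span=800.0):
    """Place widgets so that higher-priority (more frequently used) ones fall
    in the within-body area centered on the tracked user position (units: mm).
    `widgets` is a list of (name, priority) pairs; higher priority = closer."""
    ordered = sorted(widgets, key=lambda w: -w[1])
    placements = {}
    for rank, (name, _) in enumerate(ordered):
        # alternate left/right around the user, stepping outward with rank
        offset = (rank + 1) // 2 * (within_body_span / max(len(ordered), 1))
        side = -1 if rank % 2 else 1
        x = min(max(user_x + side * offset, 0.0), display_width)
        placements[name] = x
    return placements
```

In a real system, `user_x` would be updated continuously from a body-tracking sensor so the layout follows the user along the display.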
Handedness and the user’s choice of operating hand in free-hand operational tasks are two other significant influencing factors that should be given extra attention in free-hand interaction evaluations. This study not only verified the dominant hand’s superiority in operational accuracy over the non-dominant hand, but also proved that each hand interacted more efficiently and accurately on its convenient side than on the inverse side. This ‘hand use choice’ effect, also characterized as an ‘asymmetry’ pattern in free-hand interaction on a large display-based user interface, implies further guidelines and strategies for designing more satisfying and personalized user interfaces on large displays. For example, in an information classification and searching user interface, interface elements need not be organized in a static layout. Frequently operated widgets and controls, such as navigation tabs, are better placed on the same side as the user’s operating hand: when the user interacts with the left hand, the navigation bars are placed on the display area on the user’s left side; when the user interacts with the right hand, they are placed on the right-sided display area. This is essentially a hand-adaptive strategy that relies on real-time recognition and tracking to identify the user’s operating hand and, based on this, adjust the user interface layout in real time.
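The hand-adaptive navigation placement could look like the following sketch; the 20% strip width and the function name are assumptions chosen for illustration:

```python
def place_navigation(operating_hand, display_width):
    """Return the x-extent (mm) of a navigation bar placed on the same side
    as the user's detected operating hand (a hypothetical adaptive rule)."""
    if operating_hand == "left":
        return (0.0, display_width * 0.2)            # left-sided strip
    if operating_hand == "right":
        return (display_width * 0.8, display_width)  # right-sided strip
    raise ValueError("operating_hand must be 'left' or 'right'")
```

The operating hand itself would be supplied by the vision-based hand recognition that already drives the interaction.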
Given the experimental findings, interdisciplinary contributions are also noted. For example, from a sports training and rehabilitation perspective, this study provides an example of measuring hand motor ability through a computer vision-based non-invasive method and proves its feasibility and effectiveness in terms of efficiency and accuracy. In actual applications, this method can be applied to model limb motor agility and control accuracy in completing specific tasks and, based on this, to judge whether a player or athlete has a qualified motor capacity. In a similar way, it can be used to assess whether a fractured arm has completely recovered, which is essentially a medical testing approach. It also has sufficient measuring precision to detect abnormal movement or control performance of the limb, which implies a convenient but reliable means of detecting and predicting diseases such as hyperthyroidism and Parkinson’s disease, whose early symptoms include involuntary hand jitter or reduced arm movement stability.
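As a rough illustration of how tracked hand trajectories could feed such a stability measure, the sketch below uses the standard deviation of frame-to-frame displacement magnitudes as a simple tremor proxy; this particular metric is our illustration, not one validated in the paper:

```python
import math
import statistics

def hand_jitter(positions):
    """A simple tremor proxy: the standard deviation of frame-to-frame
    displacement magnitudes in a tracked hand trajectory of (x, y) samples.
    Smooth, constant-speed motion yields 0; irregular motion yields more."""
    steps = [math.dist(p, q) for p, q in zip(positions, positions[1:])]
    return statistics.stdev(steps)
```

A clinical application would of course require a validated metric and calibrated sensing rather than this toy statistic.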
In the present study, free-hand interaction performance was evaluated through a multi-directional reciprocal target pointing and selection task because it represents the most fundamental operation in today’s user interfaces. Other more complicated operating tasks and interactive commands, such as dragging objects and zooming views in and out, are performed on the basis of target pointing and selection, so the findings of this study can benefit a wide variety of applications. All participants were healthy persons with normal or corrected-to-normal eyesight and without body impairments, selected according to strict criteria to avoid the potential influence of abnormal factors on the experimental results. For example, persons who had ever injured their upper limb joints or suffered from diseases such as hyperthyroidism, diabetes and arthritis that affect upper limb motor stability were excluded. One-third of the participants were foreigners while the others were native Chinese. To eliminate the influence of different nationalities and cultural backgrounds, multilingual introductions and assistance were provided to the participants; at the same time, a subjective assessment of the experimental task interface conducted prior to the formal experiment confirmed that participants of different cultural backgrounds showed no differences in subjective satisfaction with the task interface.
Limitations are also noted. First, all experimental findings were obtained with a 4.8 m-wide landing screen; whether they are applicable to smaller or larger displays remains an open question. In following research, the authors plan to conduct a more comprehensive investigation on a wider variety of display sizes and types, including but not limited to curved table-size displays and cylindrical or even spherical wall-size displays. Second, the user-to-display distance was controlled to circumvent its potential effect, but in actual applications users often change their position dynamically for different purposes, for example moving closer to the display to search for subtle information but stepping farther away for a global view of the whole user interface. How varying distance influences interaction results on a large display has rarely been investigated, and the authors plan to research this issue under varied conditions of interactive distance in future work. Last, the evaluation metrics adopted in this study are limited. Apart from MT and error rate, metrics such as the user’s subjective preference, perceived arm fatigue, and user comfort are also crucial and should be considered. In following research, the authors intend to conduct a more comprehensive evaluation of free-hand interaction on large displays in terms of both quantitative and qualitative metrics.

8. Conclusions

A multi-directional target pointing and selection experiment was conducted to evaluate arm movement and free-hand interaction performance on a 4.8 m-wide landing large display in terms of task completion efficiency and accuracy. The experimental results show that the bio-mechanical structure of the upper limb and its kinematic features, e.g., the limited rotation range of the elbow and shoulder joints and the hand moving precision in different directions, had significant influences on task performance on the large display. More specifically, asymmetric performance was observed in four aspects: (1) an asymmetry between the within-body display area and the peripheral-body display area, which supports the hypothesis that free-hand interaction in within-body space is more efficient and accurate than in peripheral-body space; (2) an asymmetry between dominant-hand and non-dominant-hand interaction, which supports the hypothesis that the dominant hand outperforms the non-dominant one in terms of interaction efficiency and accuracy; (3) an asymmetry between the arm’s conveniently directed and conversely directed interaction, which supports the hypothesis that free arm movement and hand interaction differ across directions; and (4) an asymmetry between upward and downward arm movement and hand interaction. Such asymmetric performance implies ergonomic design guidelines and optimization strategies for natural hand interactive user interfaces on large displays.

Author Contributions

Conceptualization, X.L.; methodology, P.H.; software, Z.C.; validation, X.L., Z.C., P.H. and R.P.; formal analysis, X.L. and Z.C.; investigation, X.L.; resources, Z.C.; data curation, X.L. and Z.C.; writing—original draft preparation, X.L.; writing—review and editing, X.L. and Z.C.; visualization, X.L.; supervision, P.H. and R.P.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Zhejiang Province, grant number Q19F020010; the National Natural Science Foundation of China (NSFC), grant number 61902097; and the State Key Laboratory of Virtual Reality Technology and Systems (Beihang University), grant number VRLAB2020B03.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Hangzhou Dianzi University (15 December 2021).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Ardito, C.; Buono, P.; Costabile, M.F.; Desolda, G. Interaction with Large Displays: A Survey. ACM Comput. Surv. 2015, 47, 1–38. [Google Scholar] [CrossRef]
  2. Muñoz, G.F.; Cardenas, R.A.M.; Pla, F. A Kinect-Based Interactive System for Home-Assisted Active Aging. Sensors 2021, 21, 417. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, Z. Microsoft Kinect Sensor and Its Effect. IEEE MultiMedia 2012, 19, 4–10. [Google Scholar] [CrossRef] [Green Version]
  4. Thakur, N.; Han, C.Y. An Ambient Intelligence-Based Human Behavior Monitoring Framework for Ubiquitous Environments. Information 2021, 12, 81. [Google Scholar] [CrossRef]
  5. Czerwinski, M.; Robertson, G.; Meyers, B.; Smith, G.; Robbins, D.; Tan, D. Large Display Research Overview. In Proceedings of the Extended Abstracts at CHI 2006 Conference on Human Factors in Computing Systems, Montréal, QC, Canada, 22–27 April 2006; pp. 69–74. [Google Scholar] [CrossRef]
  6. Stojko, L. Intercultural Usability of Large Public Displays. In Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Virtual Event, Mexico, 12–17 September 2020; pp. 218–222. [Google Scholar] [CrossRef]
  7. Tan, D.S.; Gergle, D.; Scupelli, P.; Pausch, R. Physically large displays improve performance on spatial tasks. ACM Trans. Comput. Hum. Interact. 2006, 13, 71–99. [Google Scholar] [CrossRef]
  8. Han, J.; Shao, L.; Xu, D.; Shotton, J. Enhanced Computer Vision with Microsoft Kinect Sensor: A Review. IEEE Trans. Cybern. 2013, 43, 1318–1334. [Google Scholar] [CrossRef] [PubMed]
  9. Cavallo, M.; Rotini, R.; Cutti, A.G.; Parel, I. Functional Anatomy and Biomechanic Models of the Elbow. In The Elbow: Principles of Surgical Treatment and Rehabilitation, 1st ed.; Porcellini, G., Rotini, R., Stignani Kantar, S., Di Giacomo, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 29–40. [Google Scholar] [CrossRef]
  10. Keefe, D.F.; Gupta, A.; Feldman, D.; Carlis, J.V.; Keefe, S.K.; Griffin, T.J. Scaling up multi-touch selection and querying: Interfaces and applications for combining mobile multi-touch input with large-scale visualization displays. Int. J. Hum. Comput. Stud. 2012, 70, 703–713. [Google Scholar] [CrossRef]
  11. Malik, S.; Ranjan, A.; Balakrishnan, R. Interacting with Large Displays from a Distance with Vision-Tracked Multi-Finger Gestural Input. In Proceedings of the 18th Annual ACM Symposium on User interface software and technology, Seattle, WA, USA, 23–26 October 2005; pp. 43–52. [Google Scholar] [CrossRef] [Green Version]
  12. Shoemaker, G.; Tang, A.; Booth, K.S. Shadow Reaching: A new Perspective on Interaction for Large Displays. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, Newport, RI, USA, 7–10 October 2007; pp. 53–56. [Google Scholar]
  13. Banerjee, A.; Burstyn, J.; Girouard, A.; Vertegaal, R. Pointable: An In-Air Pointing Technique to Manipulate Out-of-Reach Targets on Tabletops. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, Kobe, Japan, 13–16 November 2011; pp. 11–20. [Google Scholar] [CrossRef]
  14. Jude, A.; Poor, G.M.; Guinness, D. An Evaluation of Touchless Hand Gestural Interaction for Pointing Tasks with Preferred and Non-Preferred Hands. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, Helsinki, Finland, 26–30 October 2014; pp. 668–676. [Google Scholar] [CrossRef]
  15. Hespanhol, L.; Tomitsch, M.; Grace, K.; Collins, A.; Kay, J. Investigating Intuitiveness and Effectiveness of Gestures for Free Spatial Interaction with Large Displays. In Proceedings of the 2012 International Symposium on Pervasive Displays, Porto, Portugal, 4–5 June 2012; pp. 1–6. [Google Scholar] [CrossRef]
  16. Vogel, D.; Balakrishnan, R. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, Seattle, WA, USA, 23–26 October 2005; pp. 33–42. [Google Scholar] [CrossRef] [Green Version]
  17. Haque, F.; Nancel, M.; Vogel, D. Myopoint: Pointing and Clicking Using Forearm Mounted Electromyography and Inertial Motion Sensors. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 3653–3656. [Google Scholar] [CrossRef] [Green Version]
  18. Toledo-Perez, D.C.; Rodriguez-Resendiz, J.; Gomez-Loenzo, R.A. A study of computing zero crossing methods and an improved proposal for emg signals. IEEE Access 2020, 8, 8783–8790. [Google Scholar] [CrossRef]
  19. Toledo-Pérez, D.C.; Martínez-Prado, M.A.; Gómez-Loenzo, R.A.; Paredes-García, W.J.; Rodríguez-Resendiz, J. A study of movement classification of the lower limb based on up to 4-EMG channels. Electronics 2019, 8, 259. [Google Scholar] [CrossRef] [Green Version]
  20. Patil, A.K.; Balasubramanyam, A.; Ryu, J.Y.; B N, P.K.; Chakravarthi, B.; Chai, Y.H. Fusion of Multiple Lidars and Inertial Sensors for the Real-Time Pose Tracking of Human Motion. Sensors 2020, 20, 5342. [Google Scholar] [CrossRef]
  21. Clark, A.; Dünser, A.; Billinghurst, M.; Piumsomboon, T.; Altimira, D. Seamless Interaction in Space. In Proceedings of the 23rd Australian Computer-Human Interaction Conference, Canberra, Australia, 28 November–2 December 2011; pp. 88–97. [Google Scholar] [CrossRef]
  22. Schwaller, M.; Brunner, S.; Lalanne, D. Two Handed Mid-Air Gestural HCI: Point + Command. In Proceedings of the 15th International Conference on Human-Computer Interaction, Las Vegas, NV, USA, 21–26 July 2013; pp. 388–397. [Google Scholar] [CrossRef]
  23. Bi, X.; Shi, Y.; Chen, X.; Xiang, P. Facilitating Interaction with Large Displays in Smart Spaces. In Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services: Usages and Technologies, Grenoble, France, 12–14 October 2005; pp. 105–110. [Google Scholar] [CrossRef]
  24. Mäkelä, V.; Heimonen, T.; Turunen, M. Magnetic Cursor: Improving Target Selection in Freehand Pointing Interfaces. In Proceedings of the International Symposium on Pervasive Displays, Copenhagen, Denmark, 5–6 June 2014; pp. 112–117. [Google Scholar] [CrossRef]
  25. Bateman, S.; Mandryk, R.L.; Gutwin, C.; Xiao, R. Analysis and comparison of target assistance techniques for relative ray-cast pointing. Int. J. Hum. Comput. Stud. 2013, 71, 511–532. [Google Scholar] [CrossRef]
  26. Ren, G.; O’Neill, E. 3D selection with freehand gesture. Comput. Graph. 2013, 37, 101–120. [Google Scholar] [CrossRef]
  27. Zanini, A.; Patané, I.; Blini, E.; Salemme, R.; Brozzoli, C. Peripersonal and reaching space differ: Evidence from their spatial extent and multisensory facilitation pattern. Psychon. Bull. Rev. 2021, 28, 1894–1905. [Google Scholar] [CrossRef] [PubMed]
  28. Ball, R.; North, C.; Bowman, D.A. Move to Improve: Promoting Physical Navigation to Increase User Performance with Large Displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007; pp. 191–200. [Google Scholar] [CrossRef]
  29. Shoemaker, G.; Tsukitani, T.; Kitamura, Y.; Booth, K.S. Body-Centric Interaction Techniques for Very Large Wall Displays. In Proceedings of the Nordic Conference on Human-Computer Interaction, Reykjavik, Iceland, 16–20 October 2010; pp. 463–472. [Google Scholar] [CrossRef] [Green Version]
  30. Feng, J.; Spence, I. Left or Right? Spatial Arrangement for Information Presentation on Large Displays. In Proceedings of the 2010 Conference of the Center for Advanced Studies on Collaborative Research, Toronto, ON, Canada, 1–4 November 2010; pp. 154–159. [Google Scholar] [CrossRef]
  31. Previc, F.H. Functional specialization in the lower and upper visual fields in humans: Its ecological origins and neurophysiological implications. Behav. Brain Sci. 1990, 13, 519–542. [Google Scholar] [CrossRef]
  32. Fan, X.; Liu, Z.; Zhou, Q.; Xie, F. Spatial Effect of Target Display on Visual Search. In Proceedings of the International Conference on Human-Computer Interaction, Los Angeles, CA, USA, 2–7 August 2015; pp. 98–103. [Google Scholar] [CrossRef]
  33. Po, B.A.; Fisher, B.D.; Booth, K.S. Mouse and Touchscreen Selection in the Upper and Lower Visual Fields. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004; pp. 359–366. [Google Scholar] [CrossRef] [Green Version]
  34. Fikkert, W.; Vet, P.V.D.; Nijholt, A. User-Evaluated Gestures for Touchless Interactions from a Distance. In Proceedings of the 2010 IEEE International Symposium on Multimedia, Taichung, Taiwan, 13–15 December 2010; pp. 153–160. [Google Scholar] [CrossRef] [Green Version]
  35. Banerjee, A.; Burstyn, J.; Girouard, A.; Vertegaal, R. MultiPoint: Comparing laser and manual pointing as remote input in large display interactions. Int. J. Hum. Comput. Stud. 2012, 70, 690–702. [Google Scholar] [CrossRef] [Green Version]
  36. Nancel, M.; Wagner, J.; Pietriga, E.; Chapuis, O.; Mackay, W. Mid-Air pan-and-Zoom on Wall-Sized Displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 177–186. [Google Scholar] [CrossRef] [Green Version]
  37. Jota, R.; Pereira, J.M.; Jorge, J.A. A Comparative Study of Interaction Metaphors for Large-Scale Displays. In Proceedings of the Extended Abstracts at CHI 2009 Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 4135–4140. [Google Scholar] [CrossRef]
  38. Nancel, M.; Pietriga, E.; Chapuis, O.; Beaudouin-Lafon, M. Mid-Air Pointing on Ultra-Walls. ACM Trans. Comput. Hum. Interact. 2015, 22, 1–62. [Google Scholar] [CrossRef]
  39. Büsch, D.; Hagemann, N.; Bender, N. The dimensionality of the Edinburgh Handedness Inventory: An analysis with models of the item response theory. Later. Asymmetries Body Brain Cogn. 2010, 15, 610–628. [Google Scholar] [CrossRef] [PubMed]
  40. Natapov, D.; Castellucci, S.J.; MacKenzie, I.S. ISO 9241-9 Evaluation of Video Game Controllers. In Proceedings of the Graphics Interface 2009, Kelowna, BC, Canada, 25–27 May 2009; pp. 223–230. [Google Scholar]
  41. Fitts, P.M. The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 1954, 47, 381–391. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Soukoreff, R.W.; MacKenzie, I.S. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. Int. J. Hum. Comput. Stud. 2004, 61, 751–789. [Google Scholar] [CrossRef]
Figure 1. (a) A classification of arm movement directions; and (b) asymmetric features in the arm motor ability and behaviors.
Figure 2. Transferring function of CD gain based on the hand moving velocity (Vmax = 0.45 m/s, Vmin = 0.06 m/s, CDmax = 24.0, CDmin = 0.20, λ = 0.01 s/mm).
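The velocity-dependent CD gain summarized in this caption can be sketched as code using the caption's parameters. The exact curve shape is given only graphically in Figure 2, so the clamped linear interpolation between CDmin and CDmax below is an assumption, not the authors' actual transfer function:

```python
def cd_gain(v, v_min=0.06, v_max=0.45, cd_min=0.20, cd_max=24.0):
    """Velocity-dependent control-display gain (parameters from the Figure 2
    caption): clamp to cd_min below v_min and to cd_max above v_max, and
    interpolate linearly in between (assumed shape)."""
    if v <= v_min:
        return cd_min
    if v >= v_max:
        return cd_max
    frac = (v - v_min) / (v_max - v_min)
    return cd_min + frac * (cd_max - cd_min)
```

Slow, precise hand motion thus maps to fine cursor control, while fast ballistic motion covers the 4.8 m display width with modest arm travel.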
Figure 2. Transferring function of CD gain based on the hand moving velocity ( V m a x = 0.45 m/s, V m i n = 0.06 m/s, C D m a x = 24.0, C D m i n = 0.20, λ = 0.01 s/mm).
Symmetry 14 00928 g002
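The exact transfer function plotted in Figure 2 is not reproduced here, but its behavior can be sketched: CD gain stays at CD_min for slow, precise hand movement, rises toward CD_max for fast ballistic movement, and interpolates smoothly in between. The following is a minimal illustrative sketch using the caption's parameters; the smoothstep interpolation is an assumption, not the paper's actual formula.

```python
def cd_gain(v, v_min=0.06, v_max=0.45, cd_min=0.20, cd_max=24.0):
    """Velocity-based CD gain (illustrative sketch, not the paper's formula).

    v is the hand moving velocity in m/s; the gain is clamped to cd_min
    below v_min and to cd_max above v_max, with a smooth transition between.
    """
    if v <= v_min:
        return cd_min
    if v >= v_max:
        return cd_max
    # Normalized position of v within [v_min, v_max].
    t = (v - v_min) / (v_max - v_min)
    # Smoothstep interpolation between the two gain bounds.
    s = t * t * (3 - 2 * t)
    return cd_min + s * (cd_max - cd_min)
```

With such a mapping, small corrective motions near a target move the cursor almost 1:1 (or slower), while a quick sweep of the hand covers a large span of the wall-sized display.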
Figure 3. Experimental apparatus and scenario.
Figure 4. An illustration of the task interface and independent variables: (a) interface, amplitude, and target width; (b) hand moving direction and hand-use choice.
Figure 5. MT result at different hand moving directions: (a) overall result; (b) comparative result of two hands.
Figure 6. Target acquisition error rate at 3 target sizes in left- and right-hand interactions.
Figure 7. Target acquisition error rate at 8 directions: (a) overall result; (b) comparative result of two hands.
Figure 8. An illustration of experimental findings and ‘asymmetry’ patterns: (a) interaction accuracy difference between the within-body display area and the peripheral-body display area; (b) asymmetric performance between the dominant hand and the non-dominant hand; (c) asymmetric performance between the arm’s convenient side and the inverse side; (d) asymmetric performance between the upward and the downward display areas.
Table 1. Independent variables and parameters.

| Independent Variable | Values |
| --- | --- |
| Moving amplitude (A) | (1) 800 mm; (2) 1600 mm; (3) 2400 mm |
| Target width (W) | (1) 32 mm; (2) 64 mm; (3) 128 mm |
| Moving direction | (1) Upward (U); (2) Downward (D); (3) Leftward (L); (4) Rightward (R); (5) Left-up-ward (LU); (6) Right-up-ward (RU); (7) Left-down-ward (LD); (8) Right-down-ward (RD) |
| Hand use choice | (1) Left hand; (2) Right hand |
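The amplitude and width combinations above span a range of task difficulties under the Shannon formulation of Fitts' law used in ISO 9241-9 evaluations [41,42], ID = log2(A/W + 1). A quick computation over Table 1's values shows the range covered by the experiment:

```python
import math

# A and W values from Table 1 (in mm).
amplitudes = [800, 1600, 2400]
widths = [32, 64, 128]

def index_of_difficulty(a, w):
    # Shannon formulation of Fitts' law: ID = log2(A / W + 1), in bits.
    return math.log2(a / w + 1)

ids = sorted(index_of_difficulty(a, w) for a in amplitudes for w in widths)
print(f"ID range: {ids[0]:.2f} to {ids[-1]:.2f} bits")
```

The easiest condition (A = 800 mm, W = 128 mm) yields about 2.86 bits and the hardest (A = 2400 mm, W = 32 mm) about 6.25 bits.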
Table 2. MT result at different hand moving directions.

| Hand Moving Direction | Left Hand: Mean MT (ms) ± SD | Right Hand: Mean MT (ms) ± SD |
| --- | --- | --- |
| Upward (U) | 1553.08 ± 182.51 | 1558.86 ± 183.27 |
| Right-up-ward (RU) | 1593.50 ± 176.70 | 1389.29 ± 175.77 |
| Rightward (R) | 1414.21 ± 172.99 | 1270.84 ± 172.16 |
| Right-down-ward (RD) | 1490.38 ± 168.26 | 1344.44 ± 168.92 |
| Downward (D) | 1280.38 ± 183.54 | 1274.42 ± 182.68 |
| Left-down-ward (LD) | 1331.33 ± 169.77 | 1471.64 ± 169.94 |
| Leftward (L) | 1282.83 ± 173.42 | 1428.34 ± 173.90 |
| Left-up-ward (LU) | 1393.19 ± 178.69 | 1588.52 ± 175.48 |
Table 3. Free-hand target acquisition error rate at different hand moving directions.

| Hand Moving Direction | Left Hand: Mean Error Rate (%) ± SD | Right Hand: Mean Error Rate (%) ± SD |
| --- | --- | --- |
| Upward (U) | 11.85 ± 1.23 | 10.02 ± 1.15 |
| Right-up-ward (RU) | 11.34 ± 1.36 | 8.42 ± 1.07 |
| Rightward (R) | 10.64 ± 1.29 | 7.67 ± 0.90 |
| Right-down-ward (RD) | 10.96 ± 1.24 | 8.08 ± 0.84 |
| Downward (D) | 9.52 ± 0.98 | 7.68 ± 0.83 |
| Left-down-ward (LD) | 9.92 ± 1.05 | 9.61 ± 1.09 |
| Leftward (L) | 9.35 ± 1.17 | 9.50 ± 1.03 |
| Left-up-ward (LU) | 10.24 ± 1.09 | 10.03 ± 1.02 |
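Grouping the mean error rates in Table 3 by the side of the display the hand moves toward makes the left/right asymmetry reported in the abstract concrete. This short calculation uses only the values from the table (directions L, LU, LD vs. R, RU, RD):

```python
# Mean error rates (%) taken from Table 3.
left_hand = {"L": 9.35, "LU": 10.24, "LD": 9.92,
             "R": 10.64, "RU": 11.34, "RD": 10.96}
right_hand = {"L": 9.50, "LU": 10.03, "LD": 9.61,
              "R": 7.67, "RU": 8.42, "RD": 8.08}

def side_mean(rates, side):
    # Average the three directions pointing toward the given side.
    dirs = {"left": ("L", "LU", "LD"), "right": ("R", "RU", "RD")}[side]
    return sum(rates[d] for d in dirs) / len(dirs)

for hand, rates in (("left", left_hand), ("right", right_hand)):
    print(f"{hand} hand: left-side {side_mean(rates, 'left'):.2f}%, "
          f"right-side {side_mean(rates, 'right'):.2f}%")
```

The left hand averages roughly 9.84% error on the left side versus 10.98% on the right side, while the right hand averages roughly 8.06% on the right side versus 9.71% on the left: each hand is more accurate on its own (convenient) side.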
Table 4. Findings comparison between the present study and the earlier research.

| Research | Within-Body vs. Peripheral Space | Dominant vs. Non-Dominant Hand | Convenient vs. Inverse Side | Upward vs. Downward Space |
| --- | --- | --- | --- | --- |
| Lou et al. (2022) | Yes | Yes | Yes | Yes |
| Ball et al. [26] | Yes | No | No | No |
| Shoemaker et al. [27] | Yes | No | No | No |
| Previc [29] | No | No | No | Yes |
| Po et al. [31] | No | No | No | Yes |
| Malik et al. [11]; Jude et al. [14] | No | Yes | No | No |
| Ren & O'Neill [24] | No | No | Yes | Yes |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
