*Article* **Investigation of Input Modalities Based on a Spatial Region Array for Hand-Gesture Interfaces**

**Huanwei Wu <sup>1</sup> , Yi Han <sup>2</sup> , Yanyin Zhou <sup>1</sup> , Xiangliang Zhang <sup>3</sup> , Jibin Yin <sup>1,\*</sup> and Shuoyu Wang <sup>2</sup>**


**Abstract:** To improve the efficiency of computer input, extensive research has been conducted on hand movement in a spatial region. Most of this work has focused on the technologies rather than on users' spatial controllability. To assess this controllability, we analyzed users' common operational area by partitioning it into a one-dimensional layered array and a two-dimensional spatial region array. In addition, to determine the difference in spatial controllability between sighted and visually impaired users, we designed two experiments: target selection under a visual scenario and under a non-visual scenario. We further explored two factors: the size and the position of the target. The results showed that the target blocks of the 5 × 5 partition, each 60.8 mm × 48 mm, could be easily controlled by both sighted and visually impaired users; sighted users could most easily select targets in the bottom-right area, whereas for visually impaired users the most easily selected area was the upper right. Based on these results, we propose two interaction techniques (non-visual selection and a spatial gesture recognition technique for surgery) and four spatial partitioning strategies for human-computer interaction designers, which can improve users' spatial controllability.

**Keywords:** target selection; spatial controllability; gesture recognition; spatial regions; visual and non-visual; regional division
