Human-computer interaction (HCI) is an ever-evolving field that develops new methods of communication between people and computers [1,2]. Several new assistive methods, such as virtual reality [3], sign language recognition [4,5], speech recognition [6], visual analysis [7], brain-activity analysis [8], and touch-free writing [9], have emerged in recent years to achieve this goal. Hand gesture recognition enables a range of visual tasks in an unobtrusive environment, and controlled hand movements provide a convenient and natural interaction mechanism for HCI. In particular, hand gesture-based character writing is an important aspect of HCI character input systems, allowing users to issue rich interaction commands through different hand movements. Recently, virtual keyboard-based character writing has been widely adopted to realize non-touch input systems [10,11]. As technology advances, these innovations contribute enormously to everyday tasks and make life easier. Such touch and non-touch devices help users secure personal and institutional information, operate in risky environments, instruct robots, and maintain a hygienic environment. The invention of the virtual keyboard opened a new dimension for character input: a user can operate it in almost any environment through a camera and image-processing methods, and hand gestures are most commonly used to control it. A text input system using hand gestures is presented in Reference [12], where participants entered 260 distinct words without repetition over about 30 min, achieving an average accuracy of 70%. In Reference [13], the authors developed a template pattern-based handwritten character recognition system covering the 46 Japanese hiragana characters and the 26 letters of the alphabet, with an overall average error rate of 12.3%. Joysticks are also often used as input devices for remote activities [14]; however, their controls are a concern because the available directions are limited and the device can break under excessive force. In addition, sensor-based hand gesture recognition (HGR) technology is used for user authentication, sign language recognition, character input, computer vision, virtual reality, and more. Accordingly, there is high demand for character input systems that protect confidential information and support hygienic, touch-free operation.
Therefore, in this paper, we propose HGR techniques based on a wearable device, the Myo Armband, to provide character input through a virtual keyboard. Keyboards and mice are the most common HCI devices; however, prolonged shared use makes them unhygienic. Many researchers have proposed HCI based on computer vision, voice recognition, or bio-signals such as the electroencephalogram (EEG), electromyogram (EMG) [15], and electrooculogram (EOG) [16]. However, noise in voice interfaces and the processing speed of vision-based HCIs are major concerns. The EMG-based approach is of significant use in advancing HCI because it senses users' hand movements directly. EOG-based eye-tracking devices are used as communication input devices, especially for people with amyotrophic lateral sclerosis, but they typically require an expensive, stationary recording device, which limits recognition accuracy and portability. In contrast, the proposed system can distinguish finger-spread configurations, a held fist, left and right waves, and wrist movements. From these signals, the activity of a user's muscles and how they generate energy can be determined; the active muscle areas differ considerably between gestures, so hand movements can be characterized by EMG. In this study, we used a Myo Armband, which includes an accelerometer, a gyroscope, and EMG sensors, and we acquired and analyzed different gestures and movements to perform character input. This system provides reliable character input without touching any device or screen, which facilitates HCI and allows the user to operate the system in a healthy and secure way.
The remainder of this paper is organized as follows. Section 2 explains the proposed method for the character writing system, including the signal preprocessing, feature extraction, and classification processes. Section 3 describes the system configuration and provides an overview of the virtual keyboard. Section 4 presents the experimental results with figures and tables. Section 5 discusses the results, and Section 6 concludes the paper.