### **1. Introduction**

The ever-increasing prevalence of mobile phones, wearable devices, and smart speakers has spurred intense exploration of user interfaces. These new interfaces must address the challenges posed by the ubiquitous interaction paradigm while taking advantage of the possibilities that these varied smart technologies provide.

Arenas for exploration of mobile user interfaces include improving gesture-based interfaces to enable interaction in limited-mobility settings and to decrease the social disruption caused by repeated, conspicuous interactions. Interfaces have been developed that use the movement of the hands, arms, eyes, and feet.

Touch gesture controls still dominate mobile system interfaces because of the ubiquity of touch screens [1]. However, the dominant tap, scroll, and pinch gestures have been linked to repetitive strain injuries on smartphones [2,3]. They are also limited on wearable devices, where small screens restrict the available interface surface. Gestures on smartwatch screens must be performed with greater precision and tighter constriction of the hand muscles, since smartwatch screens are significantly smaller than smartphone screens.

Voice user interfaces (VUIs), as used for smart speakers, have been another arena for improvement, with voiceless speech being explored for situations with background noise and for microinteractions.

In this work, we examine the benefits that sensoring the neck can provide within the breadth of mobile user interfaces. We explore and develop a new user interface for mobile systems, independent of limb motions. For example, in place of a scroll down, the head can be tilted forward. In place of a tap, the head can be turned to one side, all with only an inexpensive sensor affixed to the neck or shirt collar.
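As a concrete illustration of this gesture-to-action mapping, the sketch below pairs hypothetical gesture labels with the touch actions they could replace. The gesture names and the `dispatch` helper are placeholders for illustration, not the paper's implementation.

```python
# Hypothetical mapping from classified head gestures to the touch actions
# they could replace, per the examples above. The gesture labels and this
# dispatch interface are illustrative placeholders, not the paper's API.
GESTURE_TO_ACTION = {
    "tilt_forward": "scroll_down",   # head tilted forward -> scroll down
    "tilt_backward": "scroll_up",    # mirrored gesture, assumed for symmetry
    "turn_side": "tap",              # head turned to one side -> tap
}

def dispatch(gesture: str) -> str | None:
    """Return the UI action for a recognized gesture, or None if unmapped."""
    return GESTURE_TO_ACTION.get(gesture)
```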


We sensor the neck with an inexpensive and nonintrusive flex sensor and show the range of interfaces made possible by incorporating this simple wearable technology into our lives. Our efforts provide a proof of concept that common actions, such as head tilts, mouth movements, and even speech, can be classified by interpreting the bend angle measured at the neck. We explore the size of the flex sensor and its positioning on the neck and use our classification results to tailor the prototype.
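To make the sensing pipeline concrete, the following is a minimal sketch of how a bend angle can be derived from a resistive flex sensor read through a voltage divider into an ADC. The supply voltage, divider resistance, flat and bent resistances, and the linear 0 to 90 degree mapping are all illustrative assumptions, not the calibration used in this work.

```python
# Minimal sketch: estimating a bend angle from a resistive flex sensor read
# through a voltage divider. All constants below are illustrative
# placeholders, not the calibration used in the paper.

V_SUPPLY = 3.3        # supply voltage (V), assumed
R_DIVIDER = 47_000.0  # fixed divider resistor (ohms), assumed
R_FLAT = 25_000.0     # assumed sensor resistance when flat (ohms)
R_BENT = 100_000.0    # assumed sensor resistance at full bend (ohms)

def adc_to_voltage(raw: int, bits: int = 10) -> float:
    """Convert a raw ADC reading to the divider's output voltage."""
    return raw / ((1 << bits) - 1) * V_SUPPLY

def voltage_to_resistance(v_out: float) -> float:
    """Invert the voltage divider (sensor on the high side) to recover
    the flex sensor's resistance."""
    return R_DIVIDER * (V_SUPPLY - v_out) / v_out

def resistance_to_angle(r: float) -> float:
    """Linearly map resistance onto a 0-90 degree bend angle, clamped."""
    frac = (r - R_FLAT) / (R_BENT - R_FLAT)
    return max(0.0, min(1.0, frac)) * 90.0

def read_bend_angle(raw_adc: int) -> float:
    """Full pipeline: raw ADC count -> voltage -> resistance -> angle."""
    return resistance_to_angle(voltage_to_resistance(adc_to_voltage(raw_adc)))
```

A linear resistance-to-angle map is the simplest choice; real flex sensors are only approximately linear, so a per-user calibration could replace `resistance_to_angle`.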

Applications for neck interfaces include use in assistive devices where limb motion is limited, in gaming and augmented reality systems for more immersive experiences, and in wearable and vehicular systems where hand and/or voice use is restricted or inconvenient. Neck interactions expand a user's bandwidth for information transfer, in conjunction with or in place of the typically saturated visual and auditory channels.

A neck-mounted prototype was designed and developed, as detailed in Section 3. The system design considered comfort and the range of motion in the neck and upper body. The form factor and positioning of the system were finalized to enable embedding in clothing, such as in a shirt collar. A range of sensor types, sizes, and positions were considered and evaluated.

The prototype's head gesture and position classification accuracy was evaluated for five different classes of common head tilt positions. These experimental evaluations are detailed in Section 4. Head tilt classification is important because it enables user interface input with simple and subtle head gestures.
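As a sketch of what such a classifier could look like, the example below trains a k-nearest-neighbors model on simple statistics of windowed bend-angle samples. The window features, the choice of k-NN, and the five class labels are assumptions made for illustration, not the pipeline evaluated in Section 4.

```python
# Illustrative five-class head-tilt classifier over windows of bend-angle
# samples. The features, the k-NN classifier, and the class labels are
# placeholder assumptions, not this paper's reported pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

CLASSES = ["neutral", "forward", "backward", "left", "right"]  # hypothetical

def window_features(angles: np.ndarray) -> np.ndarray:
    """Summarize one window of bend-angle samples with simple statistics."""
    return np.array([angles.mean(), angles.std(), angles.min(), angles.max()])

def train(windows, labels) -> KNeighborsClassifier:
    """Fit a 3-nearest-neighbors classifier on per-window feature vectors."""
    X = np.stack([window_features(np.asarray(w)) for w in windows])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, labels)
    return clf

def classify(clf: KNeighborsClassifier, window) -> str:
    """Predict the head-tilt class for a single window of angle samples."""
    x = window_features(np.asarray(window)).reshape(1, -1)
    return clf.predict(x)[0]
```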

The encouraging results from the head gesture classification motivated us to explore more possibilities, including using the prototype for mouth movement and speech classification. The experimental evaluations of mouth movements and speech classification are detailed in Section 5. By also incorporating speech and/or mouth movement detection, head gestures for software interactions can be differentiated from head gestures that arise during regular conversation.
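The differentiation described above can be expressed as a simple gate: head-gesture events are suppressed whenever the speech or mouth-movement detectors fire. The event-tuple format below is an assumption made for illustration.

```python
# Sketch of gating head-gesture input on speech/mouth-movement detection,
# so conversational head motion is not misread as a command. The
# (gesture, speech_detected, mouth_moving) event format is assumed.
from typing import Iterable, Iterator, Tuple

def filter_gestures(events: Iterable[Tuple[str, bool, bool]]) -> Iterator[str]:
    """Yield only gestures that occur while the user is not speaking."""
    for gesture, speech_detected, mouth_moving in events:
        if not (speech_detected or mouth_moving):
            yield gesture
```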

The main contributions of this work are (1) the development of a neck-mounted prototype, with an evaluation of sensor types, sizes, and positions; (2) the evaluation of the prototype's head-position classification accuracy; (3) mouth movement detection; and (4) speech detection and classification.

### **2. Related Work**

Interfaces that sense hand and arm gestures are widespread [4], including those that rely on motion sensors [5–8], changes in Bluetooth received signal strength [9], and light sensors [10,11]. Interfaces that leverage the movement of the legs and the feet have also been explored [12,13]. Computer vision-based approaches that use cameras to capture head and body motions [14,15], facial expressions [16], and eye movement [17] also exist.

Detection of throat activity has been explored using different enabling technologies. Acoustic sensors have been used for muscle movement recognition [18], speech recognition [19], and actions related to eating [20–22]. Prior research has been done on e-textiles used in the neck region for detecting posture [23] and swallowing [24], but those efforts have relied on capacitive methods that have limitations in daily interactions. Researchers have explored sensoring the neck with piezoelectric sensors for monitoring eating [25] and medication adherence [26].

In addition to neck-mounted sensor systems, actuation at the neck region has been explored using vibrotactile stimulation for haptic perception [27–29].

Video image processing for speech recognition has been applied to lip reading [30–32]. More recently, as part of silent or unvoiced speech recognition research efforts, mobile phone and wearable cameras have been used to classify speech from mouth movements. Researchers have used bespoke wearable hardware for detecting mouth and chin movements [33] or leveraged smartphone cameras [34].

Electromyography (EMG) has also been used for speech and/or silent speech classification. Researchers have used EMG sensors on the fingers, placed against the face, for mouth movement classification [35]. EMG sensoring of the face for speech detection has also been carried out [36].

Tongue movement has been monitored for human–computer interfaces, including using a magnetometer to track a magnet in the mouth [37], using capacitive touch sensors mounted on a retainer in the mouth [38], using EMG from the face muscles around the mouth [39], and using EMG coupled with electroencephalography (EEG) as sensed from behind the ear [40]. Detecting tooth clicks has also been explored, including a teeth-based interface that senses tooth clicks using microphones placed behind the ears [41].

Head position classification has been carried out with motion sensors on the head [42], with paired ultrasound transmitters and ultrasonic sensors mounted on the body [43], and with barometric pressure sensing inside the ear [44].

This work is an expansion of our previously published conference paper [45] that classified head gestures using a single neck-mounted bend sensor. In this expanded work, we look not only at head gesture classification using our neck-mounted sensor interface, but also at mouth movement classification, speech detection, and speech classification.
