### **6. Discussion**

The experiments with sensor data captured from the neck-mounted prototype show that the short sensor with low placement on the neck and the long sensor produced the best results. For a three-class dictionary of head tilts, random forest was the best-performing model, with a test accuracy of ~83.4% for the short sensor with low placement and ~96% for the long sensor. For a five-class dictionary of head tilts, random forest again performed best, with a test accuracy of ~83% for the short sensor with low placement and ~91% for the long sensor.
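
As a minimal sketch of this style of evaluation, the snippet below fits a scikit-learn random forest to simple per-window statistics of a flex-sensor trace and reports test accuracy for a three-class tilt dictionary. The window length, feature set, and placeholder data are illustrative assumptions, not the actual pipeline used in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def window_features(signal, window=100):
    """Compute simple per-window statistics from a 1-D flex-sensor trace
    (mean, standard deviation, peak-to-peak range); illustrative features only."""
    n = len(signal) // window
    frames = signal[: n * window].reshape(n, window)
    return np.column_stack([
        frames.mean(axis=1),
        frames.std(axis=1),
        frames.max(axis=1) - frames.min(axis=1),
    ])

# Placeholder data standing in for recorded sessions:
# 50 windows of 100 samples each, with one tilt label per window.
rng = np.random.default_rng(0)
raw = rng.random(50 * 100)
labels = np.arange(50) % 3          # three-class tilt dictionary: 0, 1, 2

X = window_features(raw)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```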

Movements farther from the neck were also successfully detected and classified. Sensor data captured at the neck differentiated speaking from static breathing with ~83% accuracy. The presence and number of mouth movements were classified with ~68% accuracy. Speech classification was more challenging, reaching up to 62.5% accuracy in differentiating spoken sentences drawn from a four-class dictionary.
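
One simple, illustrative way to estimate the number of mouth-movement events directly from the raw flex-sensor trace is peak detection on a detrended signal. This is a sketch under assumed parameters (sampling rate, peak prominence, synthetic data), not the classification approach whose accuracy is reported above.

```python
import numpy as np
from scipy.signal import find_peaks

def count_mouth_movements(signal, fs=50, min_prominence=0.05):
    """Count candidate mouth-movement events in a flex-sensor trace by
    detecting prominent peaks after removing the slow baseline.
    fs and min_prominence are illustrative values."""
    baseline = np.convolve(signal, np.ones(fs) / fs, mode="same")  # 1 s moving average
    detrended = signal - baseline
    peaks, _ = find_peaks(detrended, prominence=min_prominence,
                          distance=fs // 4)  # at most ~4 events per second
    return len(peaks)

# Synthetic trace: three short bumps riding on a slowly drifting baseline.
t = np.linspace(0, 6, 300)                   # 6 s at 50 Hz
trace = 0.02 * t + 0.1 * (np.exp(-((t - 1) ** 2) / 0.02)
                          + np.exp(-((t - 3) ** 2) / 0.02)
                          + np.exp(-((t - 5) ** 2) / 0.02))
print("detected movements:", count_mouth_movements(trace, fs=50))
```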

### **7. Conclusions**

In this work, we show that subtle neck tilts, mouth movements, and speech can be detected and classified using an inexpensive flex sensor placed at the neck, and can thus serve as an enabling technology for software interfaces.

A flex sensor incorporated into a shirt collar or worn as part of a necklace opens new possibilities for software interaction. The accuracy of head-tilt classification and the socially undisruptive nature of the gesture make head tilting a good option for signaling software micro-interactions. For example, a tilt of the head can dismiss a smartwatch notification.

Because head gestures can also occur during natural speech, detecting speech and mouth movements allows gesture input to be restricted to times when a person is not speaking, giving the interface greater context awareness.
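
A minimal sketch of this gating logic is shown below. The label names and the notification-dismissal hook are hypothetical and stand in for whatever classifier outputs and notification API a deployment would use.

```python
from dataclasses import dataclass

@dataclass
class WindowLabels:
    """Per-window classifier outputs (field names are illustrative)."""
    tilt: str         # e.g. "left", "right", or "none"
    speaking: bool    # output of the speaking-vs-breathing classifier

def handle_window(labels: WindowLabels, dismiss_notification) -> bool:
    """Gate the head-tilt micro-interaction on speech context: a tilt only
    dismisses a notification when the wearer is not currently speaking."""
    if labels.speaking:
        return False                 # defer gestures made during natural speech
    if labels.tilt != "none":
        dismiss_notification()       # hypothetical notification-dismissal hook
        return True
    return False

# A tilt made while speaking is ignored; the same tilt while silent is acted on.
print(handle_window(WindowLabels(tilt="left", speaking=True), lambda: None))   # False
print(handle_window(WindowLabels(tilt="left", speaking=False), lambda: None))  # True
```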

**Author Contributions:** Conceptualization, A.N.; methodology, J.L., P.I., K.M. and A.N.; software, J.L. and P.I.; validation, J.L. and P.I.; investigation, J.L., P.I., K.M. and A.N.; writing—original draft preparation, J.L., P.I. and A.N.; writing—review and editing, J.L., A.N. and K.M.; visualization, J.L., P.I. and A.N.; supervision, A.N. and K.M.; project administration, A.N. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by CSUN Research, Scholarship, and Creative Activity (RSCA) 2021–2022, PI: Ani Nahapetian.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Conflicts of Interest:** The authors declare no conflict of interest.
