**1. Introduction**

Stress has a direct impact on our well-being [1]. Although stress is often perceived negatively, it can have positive aspects. For instance, acute stress mentally and physically prepares our body to accomplish a demanding task. In contrast, episodically occurring acute stress can cause a variety of negative symptoms, such as sleep disorders, headache, stomach pain and exhaustion. A constantly elevated stress level may even result in chronic stress [2], which can lead to severe health conditions, such as depression, anxiety, hypertension and cardiovascular diseases [3]. Therefore, it is imperative to identify stressful situations to prevent resulting illnesses.

Previous studies have demonstrated the capability of identifying stress based on physiological parameters [4], such as electrodermal activity (EDA) [5], heart rate variability (HRV), brainwaves via electroencephalography (EEG) [6–8], muscle tension [9,10], facial expressions [11] and body language [12,13], as well as self-reporting [14,15]. Although these methods provide reliable stress indicators, they have drawbacks. For instance, sensing physiological parameters is sensitive to movement artifacts, and the sensing device needs to be attached tightly to the user's body, resulting in low comfort. An alternative is contact-free sensing, such as using cameras [16–18]. However, these systems suffer from varying lighting conditions, require line of sight, and typically raise privacy concerns.

Manual approaches, such as having an experimenter interpret facial expressions and body language or relying on self-reporting, suffer from limited scalability and subjective bias. Several other studies explore stress detection based on smartphone usage [19–21] by correlating screen-time with time of day, and by utilising the phone's sensor data. These studies show the capability of detecting stress over long periods of time. Identifying and predicting acute stress in the short term may also be possible, although it remains problematic [21].

Smart shoes, in particular insoles, have been used in a variety of scenarios, such as to analyse gait [22], identify postures [23], calculate walking speeds [24], determine the ground surface [25] and recognise foot-tapping gestures for interaction control [26]. These smart insoles provide an unobtrusive way of collecting data. However, to our knowledge, insole-based tracking has not yet been proposed to identify stress.

Motivated by the fact that feet and legs may carry essential information about a person's stress level [12,27] and by the widespread availability of shoes, we present StressFoot. Our prototype encapsulates a smart shoe system that incorporates a pressure-sensitive insole based on force-sensitive resistor (FSR) technology and an inertial measurement unit (IMU). We use a machine learning approach to reliably detect acute stress situations in sitting postures, such as sedentary office work. To scientifically validate this, we followed the standard research design process [28]: constructing validity with Study 1, developing a machine learning model based on the data of 23 participants; evidencing empirical replicability with Study 2, showing with 11 participants a distinguishability between stressed and relaxed conditions; and finally, testing external validity with Study 3, showing with 10 participants the generalisability, in terms of robustness, of our model for office workers during a typical 8 h work day. In summary, we contribute:

