3.2.1. Training Phase

We trained our model with data from three participants, who annotated the context of their daily routines by performing short trials and noting the start and end times on paper. We employ non-overlapping, time-based windows to cut the signal into equal-length frames. After windowing, feature vectors (explained in the feature-extraction section) were extracted from the signal frames and fed into a classifier trainer function to construct the training model. Figure 3 illustrates a block diagram of the training module.
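The windowing step described above can be sketched as follows; the function name and window size are illustrative, not taken from the paper:

```python
import numpy as np

def segment_signal(signal, window_size):
    """Cut a 1-D signal into equal-length, non-overlapping frames.

    Trailing samples that do not fill a whole window are discarded.
    """
    n_frames = len(signal) // window_size
    return np.reshape(signal[:n_frames * window_size], (n_frames, window_size))

# Example: a 10-sample signal split into non-overlapping frames of 4 samples
frames = segment_signal(np.arange(10), window_size=4)
print(frames.shape)  # (2, 4)
```

Each row of the result is one frame, ready to be passed to the feature-extraction stage.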

**Figure 3.** Training of contextual models.

[Pipeline: Signal → Segmentation → Set of Frames → Feature Extraction → Context Classifier → Contexts]

We implement a simple yet robust non-parametric *k*-nearest-neighbour algorithm in our proposed model [37]. It operates in two stages: the first determines the nearest neighbours, and the second assigns the context label using those neighbours. In the proposed method, the Euclidean distance metric is applied to find the neighbours, and three neighbours (i.e., *k* = 3) are taken into account; this value of *k* has been shown to give good results in related work and across different settings [30,38,39]. Let "*C<sub>fv</sub>*" be the current feature vector for which we want to find the most relevant instances in the context miner "*CM*":

$$C_{fv} \leftarrow CM\left(X_{n}\right) \tag{1}$$

where *X<sub>n</sub>* denotes the set of *n* stored training examples used to classify the contexts. To find the best matches between the current feature vector and the selected classifier module, we calculate the Euclidean distance between "*C<sub>fv</sub>*" and every instance of the selected "*CM*" as follows:
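The distance computation of Equation (2) below amounts to comparing the current feature vector against every stored row of *CM*. A minimal NumPy sketch (variable names are ours, chosen to mirror the paper's notation):

```python
import numpy as np

def euclidean_distances(c_fv, cm):
    """Euclidean distance from the current feature vector c_fv to
    every stored instance (row) of the context miner cm."""
    return np.sqrt(np.sum((cm - c_fv) ** 2, axis=1))

cm = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])  # stored feature vectors
c_fv = np.array([0.0, 0.0])                          # current feature vector
d = euclidean_distances(c_fv, cm)  # distances 0.0, 5.0, ~1.414
```

Broadcasting subtracts `c_fv` from every row at once, so no explicit loop over stored instances is needed.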

$$\text{Euclidean distance}: d(x_{i}, x_{j}) = \sqrt{\sum_{k=1}^{n} \left(C_{fv}(x_{ik}) - CM(x_{jk})\right)^2} \tag{2}$$

In Equation (2), the Euclidean distance between two instances "*x<sub>i</sub>*" and "*x<sub>j</sub>*" is denoted by *d*(*x<sub>i</sub>*, *x<sub>j</sub>*). The distance is summed over the *k*-th attribute of each instance, where *C<sub>fv</sub>*(*x<sub>ik</sub>*) is the current feature vector being matched against the stored instances *CM*(*x<sub>jk</sub>*). Based on this measure, the most relevant instances are filtered out, and the context class label is assigned by considering the three nearest neighbours. We selected the *k*-nearest-neighbour algorithm because it is one of the most useful and lightweight algorithms for a wide range of applications; it is also ranked among the top 10 data-mining algorithms [15].
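The full two-stage procedure (find the nearest neighbours, then vote on the context label) can be sketched as below. This is a plain-Python illustration under our own naming; the example data are invented, not from the study:

```python
import numpy as np
from collections import Counter

def knn_classify(c_fv, cm, labels, k=3):
    """Assign a context label to feature vector c_fv by majority vote
    among its k nearest stored instances in cm (Euclidean distance)."""
    d = np.sqrt(np.sum((cm - c_fv) ** 2, axis=1))  # stage 1: distances
    nearest = np.argsort(d)[:k]                    # k closest instances
    votes = Counter(labels[i] for i in nearest)    # stage 2: label vote
    return votes.most_common(1)[0][0]

# Hypothetical stored feature vectors and their annotated contexts
cm = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.1, 4.8]])
labels = ["walking", "walking", "sitting", "sitting"]
context = knn_classify(np.array([1.1, 1.0]), cm, labels)  # "walking"
```

With *k* = 3, two of the three nearest instances carry the "walking" label, so that context wins the vote.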
