#### 2.5.1. Cross-Validation

We used *k*-fold cross-validation (*k* = 10) to evaluate the proposed model. Both the driving and the mental arithmetic data sets were split by subject for cross-validation, so that the DNN never saw data from the test subjects during training. A neural network trivially achieves high performance on the data it was trained on; therefore, we divided each data set into a training set and a test set, and performed training and testing on each set separately. For physiological signals, it is difficult to acquire a satisfactory amount of data to train a neural network: a laboratory environment imposes limitations such as the portability of the sensors and the inconvenience to the participants being measured. Cross-validation, however, makes it possible to construct both training and test sets from only a small amount of data. We randomly assigned the subjects to *k* folds, with each fold consisting of one or more subjects. We trained the models with *k* − 1 folds and assessed them with the remaining fold; thus, the training and test sets had a 9:1 ratio. This process was iterated *k* times with individual end-to-end models, producing *k* models, each trained on its own training set and validated on a test set never seen during training. All three training types were evaluated on held-out data from the same data set used during the training session. For example, the Type I model was trained and tested with the driving data set. For Type III, although the pretrained model was built on the driving data set, it was retrained using the mental arithmetic data set; therefore, the mental arithmetic data set was used for the evaluation.
All reported evaluation metrics are cross-validation results, i.e., the mean values over all folds.
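The subject-wise split described above can be sketched as follows. This is a minimal illustration with hypothetical subject IDs and subject counts (the actual model training code is not shown); it only demonstrates how subjects are partitioned into folds so that no subject appears in both the training and test sets.

```python
import random

def subject_folds(subjects, k=10, seed=0):
    """Randomly partition subjects into k folds; each fold holds one or more subjects."""
    subjects = list(subjects)
    random.Random(seed).shuffle(subjects)
    return [subjects[i::k] for i in range(k)]

# Hypothetical pool of 20 subject IDs standing in for one data set.
folds = subject_folds([f"S{i:02d}" for i in range(20)], k=10)

for i, test_fold in enumerate(folds):
    # Train a fresh end-to-end model on the k - 1 remaining folds,
    # then evaluate it on test_fold: a 9:1 train/test ratio when k = 10.
    train_subjects = [s for j, fold in enumerate(folds) if j != i for s in fold]
    assert not set(train_subjects) & set(test_fold)  # no subject leaks across the split
```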

All training and validation were performed on a personal computer (CPU: AMD Ryzen 7 2700X; GPU: NVIDIA GeForce GTX 1080 Ti, 11 GB; memory: 32 GB). With the GPU, training took less than 2 s per epoch on the driving data set and less than 1 s per epoch on the mental arithmetic data set. Without the GPU, the driving data set required 22 s per epoch and the mental arithmetic data set 5 s per epoch.

#### 2.5.2. Statistical Analysis

A softmax classifier placed at the end of the DNN produces probabilistic outputs, which indicate how likely it is that an input belongs to each class. The class with the highest probability is selected as the prediction. Comparing these predictions with the true labels is essential for evaluating model performance. Many metrics are used to assess such models, including receiver operating characteristic (ROC) curves [31] and precision–recall (PR) curves [32]. An ROC curve plots sensitivity against 1 − specificity as the decision threshold varies. Similarly, a PR curve plots precision against sensitivity (recall), providing an analysis complementary to the ROC curve for an imbalanced data set [32]. We calculated the area under the curve (AUC) for the ROC curves, which approaches 1.0 when the model performs well. We also computed the *F*1 score, the harmonic mean of sensitivity and precision. The sensitivity, also called recall, indicates how well a model detects stress among the true stress events. The specificity is the correct detection rate of the rest state. The precision is the ratio of the number of true positives to the number of cases in which the model predicted stress. We compared the proposed model to other models [9–13], and to itself across the training types (i.e., Types I, II, and III), using these evaluation metrics. We used one-way analysis of variance (ANOVA) and Tukey's test to assess the model within each training type.
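As a concrete illustration of these metrics, the sketch below computes sensitivity, specificity, precision, and the *F*1 score from hypothetical binary labels and thresholded predictions (1 = stress, 0 = rest); the label values are invented for illustration, and in practice libraries such as scikit-learn provide equivalent functions (including ROC AUC).

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for binary stress detection (1 = stress, 0 = rest)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # recall: detected stress among true stress events
    specificity = tn / (tn + fp)  # correct detection rate of the rest state
    precision = tp / (tp + fp)    # true positives among all predicted stress cases
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    return sensitivity, specificity, precision, f1

# Hypothetical labels and thresholded softmax predictions from one test fold.
sens, spec, prec, f1 = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```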
